Florian Rossmark

Monitor multiple website certificates with a single PRTG sensor

Due to a request on the PRTG KB from someone who needed a single sensor that monitors multiple URLs for certificate expiration, I came up with the following script, which is also posted in that PRTG KB thread. The PowerShell script I modified was provided there – it is mentioned that it originated from Stack Overflow – I found it at this link: https://stackoverflow.com/questions/28386579/modifying-ssl-cert-check-powershell-script-to-loop-through-multiple-sites

The result would look like this:

To make it more usable – you can input parameters from PRTG like this:

or like this for limits – warning at 60 and error at 10 (days until expiration) – you could use named parameters, but this should work as well…

And here is the modified script:
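The modified script itself is not reproduced in this excerpt – as a minimal sketch of the approach, assuming the URLs and the warning/error limits (in days) are passed as parameters and the result is returned as PRTG XML with one channel per site:

  param(
      [string[]]$Urls = @("https://www.example.com"),
      [int]$WarningDays = 60,
      [int]$ErrorDays = 10
  )

  Write-Output "<prtg>"
  foreach ($url in $Urls) {
      try {
          $uri = [System.Uri]$url
          # open a TLS connection and read the remote certificate (validation callback returns $true on purpose)
          $callback = [System.Net.Security.RemoteCertificateValidationCallback]{ param($s, $c, $chain, $errors) $true }
          $tcp = New-Object System.Net.Sockets.TcpClient($uri.Host, 443)
          $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, $callback)
          $ssl.AuthenticateAsClient($uri.Host)
          $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
          $daysLeft = [int]($cert.NotAfter - (Get-Date)).TotalDays
          $ssl.Dispose(); $tcp.Close()
          Write-Output "  <result><channel>$($uri.Host)</channel><value>$daysLeft</value><LimitMode>1</LimitMode><LimitMinWarning>$WarningDays</LimitMinWarning><LimitMinError>$ErrorDays</LimitMinError></result>"
      } catch {
          # unreachable site or TLS failure – report -1 so the channel limit raises an alarm
          Write-Output "  <result><channel>$url</channel><value>-1</value></result>"
      }
  }
  Write-Output "</prtg>"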

 

Consolidate many line based .CSV files in to a single .CSV with one header line and per file data lines

Summarize a huge number of files that store their columns and data line by line into a single file, where the first line holds all headers found across the files and each following row holds the data of one file – while the headers may change throughout the source files and need to be added dynamically.

This is a special script I wrote for someone else that had about 45k files to process. It is crazy enough to be worth posting here 🙂 and can be found on Spiceworks as well.

Situation:

  • many .CSV files
  • all have the columns per line instead of in the first line
  • the data looks like
    • column,data
    • column,data
  • all files need to be transferred into one file in this format
    • header,header,header
    • data,data,data
    • data,data,data
  • from per line to one line as a header and the data in each line per file
  • additional challenge
    • the headers might change throughout the files and add more headers

What the script does:

  1. cycle through all files
    1. detect all headers
  2. cycle a second time through all files
    1. detect all the data
    2. write the data in the right column per line per file

Flaws:

  • The script does not handle data values that contain a comma “,” – it would ignore what comes after that comma

Output:

  • Output file is a single .CSV file, comma separated columns

Execute this way:

  1. Source Directory – where the .csv files reside
  2. Target Directory – where the new output .csv will be created
  3. open CMD / command prompt
  4. go to the script-directory (where you saved it)
    1. CSCRIPT scriptname.vbs “c:\sourcedirectory” “c:\targetdirectory”

CSCRIPT will avoid that you see a million message boxes – it will output directly to your CMD / command prompt window…
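The VBScript itself is not included in this excerpt (it is on Spiceworks); purely as an illustration, the same two-pass idea – collect all headers first, then write one row per file – could look roughly like this in PowerShell (hypothetical parameter names, value taken as everything after the first comma):

  param(
      [string]$SourceDirectory,
      [string]$TargetFile
  )

  $files = Get-ChildItem -Path $SourceDirectory -Filter *.csv

  # pass 1: collect every header name that appears in any file (order of first appearance)
  $headers = New-Object System.Collections.Generic.List[string]
  foreach ($file in $files) {
      foreach ($line in Get-Content $file.FullName) {
          $name = ($line -split ',', 2)[0]
          if ($name -and -not $headers.Contains($name)) { $headers.Add($name) }
      }
  }

  # pass 2: one output row per file, values placed under the matching header column
  $rows = foreach ($file in $files) {
      $row = [ordered]@{}
      foreach ($h in $headers) { $row[$h] = '' }
      foreach ($line in Get-Content $file.FullName) {
          $parts = $line -split ',', 2
          if ($parts.Count -eq 2) { $row[$parts[0]] = $parts[1] }
      }
      [pscustomobject]$row
  }

  $rows | Export-Csv -Path $TargetFile -NoTypeInformation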

Secured WinRM SSL session and PowerShell WinRM queries – example with a PRTG sensor for CPU, HDD and RAM

Windows Remote Management / WinRM can be configured as an HTTPS / encrypted connection instead of transferring the provided information in clear text. In order to do this you need to configure it accordingly and have a valid machine certificate installed on the system.

Now – the advantage here is clearly the added security layer while you request and receive that information. More information on how to do this can be found here: https://support.microsoft.com/en-us/help/2019527/how-to-configure-winrm-for-https

It becomes a challenge, though, when you want to query such an HTTPS encrypted system with PowerShell and e.g. PRTG. I came across this request and had to create a script that actually works with such an HTTPS encrypted SSL session to WinRM. You can find it below.

What it does is rather simple:

  • set the CimSessionOptions to use SSL
    • additionally it bypasses the certificate checks by default – you might want to adjust this depending on your network configuration
  • it creates a new CimSession to your target system using the UseSSL option
  • and finally it executes a few queries against this session
  • the data in this example is then translated into a PRTG compatible XML structure so you could use it in an Advanced EXE/XML sensor within PRTG

The data in this example combines information about the CPU(s), hard drives / HDD(s) (only installed drives, not USB) and memory usage into a single PRTG sensor using channels.

Due to some dynamics in the script, you want to make sure you set fixed upper and lower error limits, especially on the channel Total Disks – so if something changes you notice it and can re-create the sensor, since its channels are fixed once it ran the first time.

In theory you could provide limits within the XML response to PRTG – this is up to you – I always liked it more to configure them solely in PRTG in the sensor channels so I could adjust them per device.
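The full sensor script is not shown in this excerpt; a minimal sketch of the SSL / CIM part described above might look like this (hypothetical parameter names, certificate checks skipped as mentioned, XML output shortened to a few example channels):

  param(
      [string]$ComputerName,
      [pscredential]$Credential
  )

  # CIM session options: use SSL and skip the certificate checks (adjust to your environment)
  $options = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck -SkipRevocationCheck

  $session = New-CimSession -ComputerName $ComputerName -Credential $Credential -SessionOption $options

  # example queries against the encrypted session
  $os   = Get-CimInstance -CimSession $session -ClassName Win32_OperatingSystem
  $cpu  = Get-CimInstance -CimSession $session -ClassName Win32_Processor
  $disk = Get-CimInstance -CimSession $session -ClassName Win32_LogicalDisk -Filter "DriveType = 3"

  $memUsedPct = [math]::Round((($os.TotalVisibleMemorySize - $os.FreePhysicalMemory) / $os.TotalVisibleMemorySize) * 100, 1)
  $cpuLoad    = ($cpu | Measure-Object -Property LoadPercentage -Average).Average

  # translate the results into a PRTG Advanced EXE/XML structure
  Write-Output "<prtg>"
  Write-Output "  <result><channel>CPU Load %</channel><value>$([int]$cpuLoad)</value></result>"
  Write-Output "  <result><channel>Memory Used %</channel><value>$memUsedPct</value><float>1</float></result>"
  Write-Output "  <result><channel>Total Disks</channel><value>$(@($disk).Count)</value></result>"
  Write-Output "</prtg>"

  Remove-CimSession $session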

PS: This was originally posted in the private PRTG channel on SpiceWorks here.

Excel custom views and Excel files that appear different for various users

Excel has a little-known feature called Custom Views (ribbon VIEW / CUSTOM VIEWS). It actually allows you to store a custom view for the workbook – adjusted headers / footers, hidden or displayed columns, etc.

Custom Views are stored in the workbook itself. Further, they are automatically selected when opening a workbook.

How does Excel determine which custom view to use?

Under OPTIONS / GENERAL, the field USER NAME should in most cases hold your FULL NAME from your Windows logon user, respectively your Active Directory user. This name actually determines which custom view is used. If you alter the name to match another custom view name, you will automatically see that custom view – otherwise it falls back to the default view.

This becomes an even bigger issue if you are using SYSPREP images and you set the default user profile from an existing profile via the COPYPROFILE option – see here for details: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/customize-the-default-user-profile-by-using-copyprofile.

If the source profile had Excel open – or actually any other Office program – you will find the following registry values set:

  • HKEY_CURRENT_USER\Software\Microsoft\Office\Common\UserInfo
    • Company
    • UserInitials
    • UserName

The critical value in this case is UserName.

Imagine you deploy this image with this pre-set UserName value to 100 computers. One of the users now sets a CUSTOM VIEW and another user opens the file – this user will actually see the custom view, because both users have the same UserName value set – the one that was pre-set in the image. Now you have a third user whose machine you manually installed from scratch – this user will see different things in the worksheet – Excel headers and footers, columns etc…

This sure would give you some headache, and you sure will wonder how this can happen.

To be pro-active – and even prevent other related issues – you should remove those three values for any user logging on. If you remove them and the user opens an Office application, they will be automatically recreated based on the current Windows or Active Directory user name and credentials.

Probably the best way to accomplish this is a GPO that applies to ALL USERS and removes those registry values. Just make sure you also check “apply once”.

Having said this – I also saw circumstances where this did not help – likely because users were logged on and inside a Microsoft Office application at the time. The values had been removed, but after closing the application the old values were written back to the registry. Because of that you might need a PowerShell or CMD/batch script that removes those values when the user logs on.

You could determine if they are correct or not and if not simply delete them.
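As an illustration – this is not the original snippet – a small PowerShell logon script that simply removes the three values could look like this:

  # remove the pre-set Office UserInfo values; Office recreates them from the
  # current Windows / Active Directory user the next time an Office app starts
  $key = 'HKCU:\Software\Microsoft\Office\Common\UserInfo'
  if (Test-Path $key) {
      foreach ($value in 'Company', 'UserInitials', 'UserName') {
          Remove-ItemProperty -Path $key -Name $value -ErrorAction SilentlyContinue
      }
  }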

I tried to find a solution for the UserName value in the registry via the GPO variables – use the F3 key in the GPO Editor to show available variables (https://blogs.technet.microsoft.com/grouppolicy/2009/05/13/environment-variables-in-gp-preferences/) – you will quickly find that the full user name is not available there. Therefore a script might be your best shot.

A simple solution might be using this in a CMD based login script:

Additional information can be found here as well: https://support.microsoft.com/en-us/help/302911/the-page-setup-settings-in-a-shared-workbook-are-different-for-each-us

PRTG 911 call alerts – ShoreTel

This is a PRTG 911 calls sensor script that I wrote a long time ago – it seems like there is quite some interest in it so I decided to write a blog post about it.

ShoreTel by default writes Windows Eventlog entries for 911 calls. The challenge we had was to inform HR / Human Resources and Facilities about such calls and let them know from which phone the call was initiated.

Using PRTG, we solved this by constantly checking for the specific Windows Eventlog EventID 1319 in the Application log and raising an error if the event happened. We had to put a script in between that filters the event entry out and gathers the minimal data needed for the event and the notification that is sent out to the specific HR and Facilities members.

First the script here:
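The original script is not reproduced in this excerpt – as a rough PowerShell sketch of the described logic (check the Application log of the ShoreTel server for EventID 1319 within the last INTERVAL minutes and return 1 on a hit, 0 otherwise, in the standard EXE sensor "value:message" format):

  param(
      [string]$ServerName,
      [int]$IntervalMinutes = 2
  )

  $filter = @{
      LogName   = 'Application'
      Id        = 1319
      StartTime = (Get-Date).AddMinutes(-$IntervalMinutes)
  }

  $events = Get-WinEvent -ComputerName $ServerName -FilterHashtable $filter -ErrorAction SilentlyContinue

  if ($events) {
      # 1 raises the error in PRTG (channel upper limit set to 0), message holds the event text
      Write-Output ("1:" + ($events[0].Message -replace "`r`n", ' '))
  } else {
      Write-Output "0:No 911 call detected"
  }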

Save the script in this path:

C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXE

Now add a custom EXE sensor (not the advanced one) to PRTG and select the script. The expected parameters are the SERVERNAME and an INTERVAL – suggestion: SHORETELSERVER and 2. With an interval of 2 and a scan every minute / 60 seconds, the script looks for entries in the event log within the last 120 seconds, accounting for any slack and keeping the error state alive for 2 to 3 minutes in PRTG.

Set the channel upper limit to 0 – if the script detects the Windows event you will get a 1 back that indicates the error.

Set the scan interval to 1 minute, respectively 60 seconds.

Further, you might want to add a special e-mail notification with the format TEXT WITH CUSTOM CONTENT and a target email recipient group of those whom it concerns. See the screenshots below for some examples…

The email message body looks like this (example):

1 # (Value) is above the error limit of 0.90 # in Value (Emergency Services Call to 911 on port 10.10.10.10 from user ADDRESSBOOK NAME at 1234 (Extenstion))

 

 

PowerShell – custom tables or objects

PowerShell can sometimes be challenging. One of the more confusing things is collecting data and getting it into properly formatted tables / objects to process it further.

The script below is a plain example of how you can accomplish this by creating a table, adding the needed columns and then filling the table with rows – in this specific example by reading the network adapters and filling in IP information – if available – partly with multiple rows (one row per IP and adapter) and partly with single rows for adapters without IPs.

Custom tables or custom PowerShell objects example with foreach loops to fill them up and combine values from various commands into single tables for further processing.
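The full script is not included in this excerpt; a shortened sketch of the technique – a DataTable filled per adapter and per IP, assuming the NetAdapter module (Windows 8 / 2012 and later) – could look like this:

  # build an empty table with the columns we want to fill
  $table = New-Object System.Data.DataTable
  [void]$table.Columns.Add('AdapterName')
  [void]$table.Columns.Add('Status')
  [void]$table.Columns.Add('IPAddress')

  foreach ($adapter in Get-NetAdapter) {
      $ips = Get-NetIPAddress -InterfaceIndex $adapter.ifIndex -ErrorAction SilentlyContinue
      if ($ips) {
          # one row per IP and adapter
          foreach ($ip in $ips) {
              [void]$table.Rows.Add($adapter.Name, $adapter.Status, $ip.IPAddress)
          }
      } else {
          # single row for adapters without any IP
          [void]$table.Rows.Add($adapter.Name, $adapter.Status, '')
      }
  }

  $table | Format-Table -AutoSize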

 

Read the UEFI stored Windows key and activate Windows

The following script will read the UEFI stored Windows licensing key and show it – it then will attempt to activate Windows and probably even e.g. your Microsoft Office installation – this is not necessarily intended by the script, rather a coincidence that I discovered when using this script to activate Windows after it was automatically deployed from an image that had a pre-injected volume license key. Office likely got activated along with Windows.

In any case – it helps you to activate Windows or simply read out your license key.
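The script is not reproduced in this excerpt – its core can be sketched like this (reading the OA3 key from the firmware via WMI and feeding it to the Software Licensing service; assumes an elevated prompt):

  # read the OEM / UEFI embedded product key
  $service = Get-CimInstance -ClassName SoftwareLicensingService
  $key = $service.OA3xOriginalProductKey

  if ($key) {
      Write-Output "UEFI stored product key: $key"

      # install the key and trigger online activation
      Invoke-CimMethod -InputObject $service -MethodName InstallProductKey -Arguments @{ ProductKey = $key }
      Invoke-CimMethod -InputObject $service -MethodName RefreshLicenseStatus

      & cscript.exe //nologo "$env:windir\system32\slmgr.vbs" /ato
  } else {
      Write-Output "No UEFI stored product key found."
  }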

 

SQL Express SQLState 08001 and Error 17

One of the challenges especially with SQL Express is that you first need to enable some protocols on the network level in order to connect to it. You might see an error message like the one below when you try to connect to SQL – stating SQLState 08001 and Error 17.

In order to resolve this, you need to enable Named Pipes and TCP/IP in the SQL Server Configuration Manager, which was installed by default on your system. See the image below for how it should look. Please note that you need to restart the SQL service in order for those changes to take effect.

Please note – there might be a need of additional configuration like the Windows Firewall or other parameters, the above just addresses a rather common issue.

Microsoft RADIUS / NPS SQL logging

An issue or question I see again and again – proper RADIUS logging with Microsoft NPS / Network Policy Server.

Let’s guide you through a few steps

  1. Install a Microsoft SQL or if not available SQL Express
    1. be aware – SQL Express has very tight database size limits and no SQL Agent – this might be an issue
  2. Create a new database via SQL Management Studio in the SQL server
    1. name it e.g. RADIUSLogging
  3. run the SQL script from this Microsoft website in a new query window against this database (make sure it is not run against any other database by accident)
    1. you could add a line like USE RADIUSLogging at the very top to prevent this…
  4. configure your RADIUS server to log to this SQL server and database
  5. make sure you have fail-over logging to a text-file – to avoid issues in case your SQL DB grew too big or was not reachable for any reason
    1. decide in the text-file configuration if you want to deny access if there is an issue or if you still want to proceed with the logon

Now you have RADIUS logging the information to a SQL database – actually a single table – and you can dig around in it. The IT-Assets database provides a front-end example for this – you don’t need to use it – but it might be of help – see here.

To interpret all those columns and values – look at the following links for additional information:

You will face the issue that the database will grow rapidly – depending on how many requests go to your RADIUS system etc. Keep a close eye on it – use a monitoring software like Paessler PRTG to monitor the size, and keep in mind that SQL Express might have size limits, like 10 GB. The full version of Microsoft SQL has no such limits, and you can additionally use the SQL Agent to execute tasks. The following script can help you purge data from the RADIUS database to keep its size under control. You can use the SQL Agent (not in SQL Express) to run it automatically; if you use SQL Express, either run it manually or use another solution to automatically delete older entries from the database.

The script actually will purge data older than 14 days – you can adjust the days to your liking / needs.
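The purge script is not included in this excerpt; assuming the table and column names from the Microsoft-provided schema (accounting_data with a timestamp column – verify against your own database), such a purge run could be scripted from PowerShell roughly like this:

  # delete RADIUS accounting rows older than 14 days (adjust $days to your needs)
  $days = 14
  $query = "DELETE FROM dbo.accounting_data WHERE [timestamp] < DATEADD(day, -$days, GETDATE());"

  # requires the SqlServer PowerShell module (Invoke-Sqlcmd); server/instance name is an example
  Invoke-Sqlcmd -ServerInstance "SQLSERVER\INSTANCE" -Database "RADIUSLogging" -Query $query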

Updated domain join script including KeePass / Pleasant Password server entries for local admins

Today I post an updated version of the domain-join script I initially posted here.

In theory you can just replace the script with the new version – assuming you did not make any changes other than adjusting it to your domain / server names.

What changed in the newer version:

  • the top lines in the script hold the basic configuration parameters
    • line 1: NetBIOS name of your Active Directory domain
    • line 2: your DNS domain name
    • line 3: your distinguished domain name / root DN of your domain
    • line 4: your default OU for new workstations
    • line 5: empty
    • line 6: KeePass / Pleasant Password Server URL
    • line 7: KeePass folder to store the password in
  • the script now relies on the above parameters rather than specifying them in various areas of the script, making the whole use / adjustment of the script way easier
  • advanced error handling
    • after the user entered the computer name and his domain admin credentials, the system checks if it can connect to the domain and if the computer name already exists
      • if the domain credentials are invalid (can be a non-admin – as long as they are valid) you get a message explaining that the script will stop due to wrong credentials
      • if the computer name already exists in the domain, you get a message about it and the script stops
    • KeePass or Pleasant Password Server connection – if it fails to connect with the credentials provided, you get a message about it and the script will stop
  • adjusted messages with various colours
    • white text – standard as it was before
    • yellow text – highlighted information so it stands out better for the end-user
    • magenta text – handled error / failure message – this is an explanation that something stopped the script from going further
    • red text – those are real PowerShell error messages – either due to unhandled errors or, if the error was handled, printed to the screen as additional reference and help

For additional information, please look at the original post here.

This script is also mentioned on the API Examples page on the Pleasant Solutions web site here.

How to create an independent backup network

Today we look at independent backup networks especially in regards to LTO 7 and VMware ESX hosts. Be aware – this very example also applies to any backup to disk (B2D / Backup-2-Disk) solution. But a good reseller / vendor would inform you about this right away anyways.

LTO 7 and later drives like LTO 8 have a write speed faster than a 1 GBit network can handle, making it really necessary to think about options. On top of that, you do not want to over-utilize the LAN side of your servers, so that the impact on the user / application facing side stays minimal. This leaves you with two options: you can group switch ports, assuming you have enough 1 GBit ports (you will need at least 3 ports combined), or you create a whole backup network on a 10 GBit basis.

Let’s run some numbers:

  • LTO7 has a write speed of about 300 MB/s uncompressed and up to 750 MB/s compressed
  • LTO8 has a write speed of about 360 MB/s uncompressed and up to 900 MB/s compressed

Now – your network connection is measured in MBit/s, not MByte/s. One byte is 8 bits, so we need to multiply those speeds in bytes by 8 to see the network speed numbers.

  • LTO7 uncompressed = 300 MB/s * 8 = 2400 MBit/s
  • LTO7 compressed = 750 MB/s * 8 = 6000 MBit/s
  • LTO8 uncompressed = 360 MB/s * 8 = 2880 MBit/s
  • LTO8 compressed = 900 MB/s * 8 = 7200 MBit/s

Assuming you want to go with grouped ports, you see that with LTO7 you would need 6 ports and LTO8 7 to 8 ports to fully utilize the speed and minimize your backup window. Additionally think about the read speed that might affect you as well – not just for recovery but for the verify of your backup.

Now – this means – add at least one 10 GB switch and one 10 GB NIC to each server – let’s do this with an example:

  • 3x VMware ESX hosts – LAN side and management is configured 1 GB – we assume there is some kind of storage behind them that has the iOPS and speed we need like an SSD based storage
  • 1x Backup media server that has an LTO7 or LTO8 drive connected – 1 GB on the LAN side

What we need – minimal:

  • 4x 10 GB NICs
  • 1x 10 GB switch
  • 4x CAT6e or CAT7 cables

What I would recommend – nice to have:

  • 4x 10 GB NICs – dual port
  • 2x 10 GB switches
  • 10x CAT7 cables – 2x to stack/trunk the switches if not stacked otherwise

This is a nice to have – a fail-over, but the minimal configuration is sufficient as well.

Cable this all in – create a new IP-scope / VLAN on the backup side – you do not need any default Gateway etc. on the Backup-Network side (10 GB). Just an independent IP scope and have every host assigned a static address.

This keeps the regular network traffic and any broadcasts away from this network, and your backup will run totally independently. You might need to disable your anti-virus solution on this NIC / IP-scope on the backup media server as well, because it might actually influence the speed quite drastically. Having it separated and independent helps keep the security up.

On the VMware hosts – I like to even allow VMware to vMotion on this backup-LAN – simply because it is extremely efficient there – independent from your LAN and if you have it from your iSCSI network as well. But that’s just an idea.

Now – the backup – how will it grab the data from the 10 GB side of your VMware hosts – especially if you have a vSphere cluster and grab the backup through the cluster?

Simple – you adjust the hosts file on your media server. Each and every VMware host needs to be listed in the hosts-file on the media server with the IP that it has in your 10 GB backup network. This way DNS and everything will act normal in your environment, only the backup-media server will reach out to those hosts on the 10 GB network due to the IP resolution of those hosts. This is the easiest way to accomplish this.
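As an example – host names and addresses below are purely hypothetical – the hosts file on the media server would simply contain one line per ESX host with its backup-network IP:

  # hosts file on the backup media server (example entries only)
  10.99.0.11   esxhost01.domain.local   esxhost01
  10.99.0.12   esxhost02.domain.local   esxhost02
  10.99.0.13   esxhost03.domain.local   esxhost03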

You will not need to add a 10 GB connection, backup-network IP address etc. to your VMware vSphere controller – it can stay on your LAN or server-management network as is. This also means there is no reason to mention it in the hosts file on the media server.

How this works:

  1. your backup will contact the vSphere controller on the LAN side
  2. it will then be redirected to the host that currently holds the VM you want to back up
  3. the media server will now contact the VMware host directly – due to the hosts-file entry – on the 10 GB backup-network
  4. the backup will proceed…

This of course would work with a physical server as well – like a physical file-server etc. – though today this is rather rare, and especially VMware backups are large files that benefit most from the LTO7 write speed, so the above makes the most sense there. It wouldn’t matter if you do the same with a Hyper-V environment or any other VM host/guest solution. In theory it should always work the same.

What real world write speeds can you expect?
This is the big question – here are some real world examples – those are single jobs on a per-VM basis, meaning the numbers even include the tape-load and tape-unload processing time and updating the catalogs, while using Veritas Backup Exec.

Backup size (VM size)   Elapsed time (h:mm)   Job rate write/overall   Job rate verify
4 TB                    6:47                  17,227 MB/min            26,404 MB/min
2 TB                    6:47                  6,822 MB/min             22,233 MB/min
1.21 TB                 3:49                  8,271 MB/min             20,235 MB/min
147 GB                  0:33                  6,491 MB/min             22,655 MB/min
138 GB                  0:17                  18,403 MB/min            27,726 MB/min
25 GB                   0:10                  9,172 MB/min             20,700 MB/min

The above list is just an example – realistically we see overall speeds between about 3,000 MB/min and 18,000 MB/min. This is partly due to the VM itself – thin or thick provisioned, what kind of data it is holding, how busy the host is (because we might double up on it with multiple drives backing up to the same host at the same time) etc. On average we see around 8,000 to 9,000 MB/min, which is still great – and I wanted to show as well that it can vary quite a bit, so don’t be scared. We still improved the backup time by going from an LTO4 LAN-based backup scenario to an LTO7 independent backup network, cutting the time in half – actually, even less than half. The slowest speeds we see today are on systems that can only be backed up on the LAN side; the ports are grouped there, but we still don’t reach the same speed as on the backup-network side. Many factors come into play, but that all depends on the individual situation.

Hoping the information above helps some of you out there – keep in mind that your situation might be different, run some examples and ideas and if you have questions, reach out – this remains an example of what I really implemented at a company and how it affected the backup configuration and management.

WDS respective PXE boot and VMware

If you try to PXE boot a VMware guest system that e.g. uses WDS / Windows Deployment Services or similar, you might find that the boot.wim etc. downloads unbelievably slowly. This can take several hours. This especially has to do with booting via the VMware EFI environment; the VMware BIOS does not cause this issue. You could switch an EFI system to BIOS to capture/deploy the image – but this is not really a solution, rather a way of bypassing the problem.

The solution for this is pretty simple: the download is transferred through TFTP, and VMware has an issue with the block size – this gets a bit messed up due to the variable block size negotiation between the VM guest system and your PXE server.

Set the Maximum Block Size to 1456 – which is the exact value VMware needs to work properly. Further, disable the Variable Window Extension and try to PXE boot again – you will see it now loads in about a minute, depending on your WinPE image size, and your issues are in the past.

In detail:

  1. open your Windows Deployment Services
  2. right click on the server and select Properties
  3. navigate to the TFTP tab
  4. set the Maximum Block Size to 1456
  5. uncheck the Enable Variable Window Extension checkbox

Gathering profile information from computer

Every now and then you might need to know who logged on to a specific workstation, when the last logon of each user was, and how big each user profile is. For this I once wrote the PowerShell script you can find below. It does a WMI query against a list of one or more target computers, reads out the information and reports it back.

As input parameter use a comma separated list of computer names – those must be reachable and administratively accessible (you need at least admin rights on the target system). You then get an output set per profile.

The output can e.g. be piped to | ft to see it in table format – as is typical in PowerShell.

Output values are:

  • ComputerName
  • ProfileName
  • ProfilePath
  • ProfileType
    • Temporary
    • Roaming
    • Mandatory
    • Corrupted
    • local
  • IsinUse
  • IsSystemAccount
  • Size
    • this needs the most processing time – it is a manual size check, including even temp files – other than what Windows shows you
  • LastUseTime
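The script itself is not reproduced in this excerpt; a much shortened sketch of the WMI part (Win32_UserProfile per target computer, size determined by a manual folder walk – which, as written here, only works against the local machine) could look like this:

  param([string[]]$ComputerName)

  foreach ($computer in $ComputerName) {
      $profiles = Get-CimInstance -ComputerName $computer -ClassName Win32_UserProfile
      foreach ($profile in $profiles) {
          # manual size check of the profile folder, including temp files
          $size = (Get-ChildItem -Path $profile.LocalPath -Recurse -Force -ErrorAction SilentlyContinue |
                   Measure-Object -Property Length -Sum).Sum

          [pscustomobject]@{
              ComputerName    = $computer
              ProfilePath     = $profile.LocalPath
              IsInUse         = $profile.Loaded
              IsSystemAccount = $profile.Special
              SizeMB          = [math]::Round($size / 1MB, 1)
              LastUseTime     = $profile.LastUseTime
          }
      }
  }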

The advantage of DFS and how to set up a working structure

File shares are something every IT professional will work with. Many companies have way too complicated and unstructured network file systems with too deeply nested permissions, too many shares and access points, often several mapped drives – and from an IT perspective these are nightmares when it comes to migrating to newer servers or giving satellite offices and subsidiaries access, especially over lower speed connections.

Having been in IT for about 20 years by now, I have seen a lot and was challenged with this quite a bit. One of the best solutions I came across is the one I am about to show you here. It is very structured while giving you the advantage of adapting it as you need and grow, and it should be usable in most businesses.

First of all – please note that I will not go as far as explaining and exploring the differences with Active Directory integrated and Stand-A-Lone namespaces. If by any means possible, I suggest you use Active Directory integrated namespaces to simplify the roll out, but both would work.

The structure example:

The structure example will depend on a DFS root server and a separate file server per root folder on the later network drive. This is just an example – you do not need to split it all up, though if you can, do it, to keep it as structured as possible.

Example target file system structure:

  • N:\
    • N:\Archive
      • N:\Archive\John Doe
      • N:\Archive\Jane Doe
    • N:\Departments
      • N:\Departments\Marketing
        • General
        • Management
        • Public (anyone has read access)
      • N:\Departments\Accounting
        • General
        • Management
        • Public (anyone has read access)
    • N:\Other
      • N:\Other\Manufacturing
      • N:\Other\Projects

The declared goal is to keep the NTFS rights structure as simple as possible and not go any deeper than e.g. level three – e.g. N:\Departments\Marketing\General

Each department folder in this example will have a Public folder where a member of the department has read/write access, while any non-department member has read access to files that are published there.

The archive tree is for terminated employees and archive data. Their information gets collected in a sub-folder in this tree, a group will be created for each of those folders and only people that got approval to access this data will see and be able to read those archived files (read-only is recommended as NTFS permission)

The file servers and their preparation:

DFS Root-Server
  • create a folder on the data-partition like D:\DFSRoots – there will not be any real data in this folder – but it will hold the actual DFS structure
    • create sub folders for the branches on the shared DFS drive like:
      • D:\DFSRoots\Departments
      • D:\DFSRoots\Archive
      • D:\DFSRoots\Other
DFS Department Server
  • create a folder on the data-partition like D:\SharedFolders\Departments
    • remove the everyone or authenticated user groups from this folder – only System and Domain-Admins should have read/write permission here while group N_Departments will have read-only access on this folder.
    • create a sub-folder for each main folder you want to see under the path N:\Departments and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Departments\Marketing
      • D:\SharedFolders\Departments\Accounting
  • now create the following sub-folders for each department folder as shown on the example Marketing
    • D:\SharedFolders\Departments\Marketing\General
    • D:\SharedFolders\Departments\Marketing\Management
    • D:\SharedFolders\Departments\Marketing\Public
  • create two groups in Active Directory for Marketing
    • N_Departments_Marketing_General
    • N_Departments_Marketing_Management
  • create a general group N_Departments to use it for all Public folders
  • assign the groups to their corresponding sub-folders General and Management with read/write rights – you probably will need to remove the read access that the group N_Departments inherited on these folders
  • assign the group N_Departments to the Public folder in all departments with read-only rights (if not inherited)
  • assign the group N_Departments_Marketing_General to the Marketing\Public folder with read/write access – allowing each member of marketing to publish information for access to other people – only marketing can write in this folder, other people only have read-access to it
DFS Archive Server
  • create a folder on the data-partition like D:\SharedFolders\Archive
    • create a sub-folder for each main folder you want to see under the path N:\Archive and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Archive\John Doe
      • D:\SharedFolders\Archive\Jane Doe
DFS Other Server
  • create a folder on the data-partition like D:\SharedFolders\Other
    • create a sub-folder for each main folder you want to see under the path N:\Other and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Other\Manufacturing
      • D:\SharedFolders\Other\Projects

The DFS namespace set up and configuration

  • add the Namespace \\domain.local\N for the N: drive (just an example)
  • add the folders Archive, Departments and Other to the namespace
  • for each of those folders you add the shared sub-folders like indicated in the list below as sub-folders (they will appear on the Namespace tab when you click on the folder in the DFS Management) and set the target to the according file-share on the specific DFS server where the data will reside
    • Departments\Marketing
    • Departments\Accounting
    • Archive\John Doe
    • Archive\Jane Doe
    • Other\Manufacturing
    • Other\Projects
    • This will actually create a shared sub-folder on the DFS root server for each of those folders in D:\DFSRoots\

Note – information about the above example

The example above is kept simple – I did not go into each and every right you would need to assign, for the sole purpose of keeping it simple and understandable. Please investigate and set the rights as you really need them.

As for the Archive tree, it might be beneficial to have a PowerShell script automate the folder creation, group creation and rights assignment for those NTFS paths, so you limit the possible failure rate in case you are going to archive terminated employee data and other stuff in this tree branch – a rough sketch of such a script follows below.
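A minimal sketch of that idea – folder, hidden share, AD group and DFS folder in one go (names, paths and the namespace are hypothetical examples; NTFS ACLs would still need to be set as described above, and the group may need to be referenced with its domain prefix depending on your environment):

  param([string]$EmployeeName = "John Doe")

  # assumptions: run on the archive file server, DFSN + ActiveDirectory modules available
  $folder    = "D:\SharedFolders\Archive\$EmployeeName"
  $shareName = ($EmployeeName -replace ' ', '') + '$'          # hidden share
  $groupName = "N_Archive_" + ($EmployeeName -replace ' ', '')

  New-Item -Path $folder -ItemType Directory | Out-Null

  New-ADGroup -Name $groupName -GroupScope Global -GroupCategory Security

  New-SmbShare -Name $shareName -Path $folder -ReadAccess $groupName

  # link the new hidden share into the DFS namespace
  New-DfsnFolder -Path "\\domain.local\N\Archive\$EmployeeName" -TargetPath "\\$env:COMPUTERNAME\$shareName"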

What are the real benefits of this

  • add multiple folder targets for replication
    • replication can be beneficial in a server-migration scenario as well as in a subsidiary scenario
    • you can add replication in the departments branch per department folder – not each subsidiary will need a mirror of each department, rather just a few – this decreases the amount of data, the load on the connection and the size of the server respectively its disk space, and reduces cost as well
  • a simple rights structure fully based on groups
    • in general you should never ever use a user account to assign any rights – always create a group, whether for a drive-share, NTFS rights or any other purpose. Always create a group!
    • you can add and remove users from those groups
    • you can audit the permissions on the NTFS side rather quickly because they should relate to strong group names
    • the groups can be audited against HR lists of members of the department or by department managers and directors to make sure only people that need to have certain access levels will have them
  • while limiting access to certain folders you limit the amount of damage a possible attack by malware could cause 
  • you can divide or summarize the actual file-servers that hold the data as needed in the long run
  • a simple group design with limited depth permissions is easier to maintain and audit
  • you have one central network drive that you assign in order to give everyone access – all data will be centrally available on this path, independent from any file-server host name. This can be a huge advantage because some applications might reference the UNC path rather than the mapped drive, which could cause you major headaches whenever you want to migrate/upgrade or retire your file servers later on
  • possible other file shares within the corporation in other locations could be made accessible by linking them in as a folder in e.g. the Other branch, avoiding that users would need to know and remember the UNC path or access it directly – it will act like a mapped drive while pointing in the background to a UNC path

There are many more advantages to DFS and the whole design. I hope this gives you a good overview and idea of how to design or re-design your file-server structure and simplify the whole access structure. 

Full text search and DFS drive mappings

This is a challenge that is not easy to overcome. Still, though there is no official and directly implemented solution from Microsoft for this, I was able to develop and provide a solution that accesses the Windows Search Index and provides it back to the end user using only standard Windows components. All you need to know and do is described in the IT Search section of this web site.

Monitor multiple file sizes in one PRTG sensor

The following script allows you to monitor multiple file sizes in one PRTG sensor. It expects you to state the number of files in the first parameter and add the corresponding file paths as additional parameters.

E.g. – three files:

It of course wouldn’t make much sense to monitor the example files, but that’s how you would give the parameters to this advanced EXE/script sensor in PRTG. To test it, I recommend running it via CSCRIPT in a DOS / CMD command window:

PRTG will see the file-name as channel name and the resulting file-size in byte per file you set up. Please be aware that you can’t just add additional channels later on – you could place dummy-files in the parameters to bypass this if ever needed.

Here is the VBS file that you need to place in to your PRTG installations advanced exe sensors directory:

This was as well posted in this PRTG article.
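The VBScript itself is not included in this excerpt; purely to illustrate the output idea (one channel per file, size in bytes), a simplified PowerShell equivalent – not the original sensor script – might look like this:

  # expects one or more file paths as arguments, e.g. .\filesizes.ps1 C:\temp\a.log C:\temp\b.log
  param([string[]]$Files)

  Write-Output "<prtg>"
  foreach ($file in $Files) {
      if (Test-Path $file) {
          $size = (Get-Item $file).Length
          Write-Output "  <result><channel>$(Split-Path $file -Leaf)</channel><value>$size</value><unit>BytesFile</unit></result>"
      } else {
          # missing file reported as -1 so a channel limit can raise an alarm
          Write-Output "  <result><channel>$(Split-Path $file -Leaf)</channel><value>-1</value></result>"
      }
  }
  Write-Output "</prtg>"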

Enable SMBv1 on Windows 10 per GPO

SMBv1 is an insecure protocol that you should not use if by any means avoidable. Windows 10 has SMBv1 disabled by default. In order to enable it you would need to go to the Control Panel and activate the Windows Feature “SMB 1.0/CIFS File Sharing Support”, or at a bare minimum the “SMB 1.0/CIFS Client”. You actually might want to enable just the client part, because you really shouldn’t add more SMBv1 servers to your network.

Before you proceed reading – if you really need to enable this protocol – please make sure your systems are all patched! Especially your target servers should be patched as well – assuming they are Windows XP / 2003 / Vista / 2008 / 7 / 2008 R2 / 8 / 8.1 / 2012 / 2012 R2 / 2016 and 10. I highly recommend looking at this Microsoft link: https://docs.microsoft.com/en-us/security-updates/securitybulletins/2017/ms17-010. Additionally, I want to mention that Windows XP and Windows 2003 can be patched as well – though they are not on the list in the previous link. Look at Microsoft KB4012598 for more information or use this download link: https://www.microsoft.com/en-us/download/details.aspx?id=55245. I cannot warn enough about SMBv1 – you open the doors for malware here that can bring down your network in minutes and cause huge damage!

Please note – I did not research in detail whether other, previous Windows versions already disable SMBv1 by default; this article might in any case apply to Windows 7, 8 and 8.1 as well and be applicable to Windows 2008, 2008 R2, 2012, 2012 R2 and 2016 as well as newer Windows versions to come.

Now, the issue with Windows 10 and SMBv1 disabled is that often old legacy Windows 2003 servers are around that can’t just be upgraded or replaced. In order to access any file share on them you would need to enable SMBv1 on the client workstations. This could sure be done by preparing your installation image etc. – but if you did not plan for this or want to have more granular control, you might consider using Group Policies / GPO to enable this Windows Feature.

 

It is further worth noting that the easiest way to identify the issue is not to try to access the UNC share via the server name, but to directly type in the IP address in your attempt. This way you actually get a much clearer error message from Windows. I mention this to show and explain that there actually is a difference between trying to access a UNC path via server name and via IP address – especially when it comes down to Windows 10 and the error messages you might see.

Officially, enabling a Windows Feature per GPO is not supported, nor is there much information out there on how to enable SMBv1 per GPO. Having faced this challenge recently, I found a well working way that is pretty easy to implement.

  1. enable the feature on 1x Windows 10 client
    1. export / document the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mrxsmb10
    2. copy the file %windir%\system32\drivers\mrxsmb10.sys
  2. create a GPO
    1. put the mrxsmb10.sys in the GPO or a central accessible file (the target computer account must be able to read the file! – I often put it in either NETLOGON or directly in the GPO / scripts folder)
    2. Computer Configuration \ Preferences \ Windows Settings \ Files
      1. create a new entry to copy the file to the target system
      2. Source file: where you centrally placed the mrxsmb10.sys
      3. Destination file: %windir%\system32\drivers\mrxsmb10.sys
    3. Computer Configuration \ Preferences \ Windows Settings \ Registry
      1. Create or import all the registry keys from HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mrxsmb10

A registry hive export would look like this:

Apply the GPO to your target systems / workstations and reboot them – after that you will be able to access the necessary shares. The downside is – you don’t really see the feature as enabled in the Windows-Features. It will work nevertheless.

 

Build your own lab environment with VMware

If you have a virtual system, like VMware, and a storage device behind it, you might be able to mirror your whole real / live environment and create a complete playground or shadow network to simulate any of your guest VMs like the real system and be able to change, update and adjust the configuration within this lab environment. This is actually rather easy to accomplish and can save you a lot of headache.

This is just an example, you might be able to accomplish something similar while just cloning single VMs and in theory it wouldn’t even matter if you use VMware vSphere or something like Microsoft Hyper-V. Still, it has advantages to do it this way, but let me explain it.

Assumed scenario:

You work with a VMware environment with one or more host systems and several guest VMs. You need to update software and configurations on those guest VMs but need to test this beforehand to ensure everything runs smoothly. The VMs are stored on a central storage device that can do volume-level snapshots.

Prepare the environment

  1. you need one NIC per host that you will connect to a switch
  2. best use an independent switch that has no other network connections than to the 1x NIC per host
  3. create a Shadow-LAN virtual switch in your VMware cluster and use the 1x NIC per host so VMs throughout the cluster can communicate properly

What you do:

  1. create a snapshot and mount it to the VMware host systems as an additional volume
  2. go to the clone-volume file system and add the VMs you need to the inventory (you might want to rename to shadow-<servername> so you can easily identify them)
  3. re-configure their network connection from your regular LAN virtual switch to the shadow-LAN virtual switch you prepared
  4. start the added guest VM
    1. if you are asked if you moved or copied the guest just say you moved it – to avoid hardware / MAC and other changes possibly causing Windows to want to re-activate
    2. VMware might complain about a duplicate MAC address – you can ignore this, cause you are on two different / independent networks

Real usage example:

Let me give you a more detailed example on this with a few more details on what I personally used and did with this. The example you find below should help you understand the whole principle better.

  • VMware cluster with e.g. 10x host systems – we had enough RAM and CPU power that we could have 3x hosts go down – you won’t need that much, but of course you would have buffer for RAM and CPU usage
  • Nimble All-Flash storage arrays in the background connected to all the VMware hosts and using the Nimble VMware plugins (Note: Nimble was bought by HP / HPE as of today)
    • the Nimbles are configured to do volume level snapshots multiple times per day
  • all physical host systems had a dedicated network card (NIC) connected to an independent physical switch that was NOT connected to any other network switch
  • a virtual switch SHADOW-LAN was created and those physical NICs of the hosts systems had been assigned to it
    • this allowed any VM connected to this virtual switch to communicate with other VMs connected to the same virtual switch on other hosts

Due to migrations, software updates and quality controlled systems we constantly had the challenge to test changes and adjustments thoroughly. So I came up with the solution to just clone a snapshot on the Nimble storage array the VM resided on and mount it to the VMware cluster – taking only minutes – and then move forward to add e.g. domain controllers, DHCP servers, necessary file servers and the target guest system to the inventory in VMware, adjusting their names so we could quickly identify them (even adding them to resource pools if necessary) and, most important of course, changing their virtual switch configuration to the shadow switch.

Advantages and possibilities:

This now allowed us to simulate the whole real world system (VMs) and every change that we wanted. In order to get software there we attached, if necessary, VHDs that held what we needed, or we even used a secondary internet connection to briefly connect to licensing or update services that vendors only provided online. The advantages of the solution go even further:

  • simulate everything you have in your VMware environment available and have it working like the real / live system
  • if necessary, provide internet access while connecting a SECONDARY internet connection (router / firewall) to physical shadow network switch
  • adding real printers to the shadow switch to be able to test print-outs (we had those cases)
  • add physical workstations to simulate whole production environments
  • update / refresh the whole system in only a few minutes by using a fresh-snapshot clone
  • only minimal to almost none impact on the storage / free space of your storage device
    • this is due to grabbing a Nimble snapshot that was cloned and therefore created a new branch, so only the deltas (changes) had an impact on the storage – even for “bigger” simulations we talk only about a few gigabytes of changed data – if at all that much – of course depending on your storage and what you do
  • we installed VMware console connections on quality testing workstations so they could access the system directly on the console
    • of course only granting them minimal rights to this specific pool of VMs
    • avoiding that their access to the VMware environment had any impact to the real system
  • documenting any changes, challenges faced and solutions found
  • due to a physical switch, best practice a layer-3 switch, we were able to simulate whole VLANs, routing etc. within the environment and even connect various physical systems like printers, workstations and temporarily an internet connection to this environment

It is only a small amount of effort to initially prepare for those simulations, cause the virtual shadow switch, the physical shadow switch and the hosts network card connections to this physical switch are a one time effort. After this you just clone and mount snapshots and add the actual VMs you need while adjusting their network connection to the virtual shadow switch.

Once set up, preparing simulations usually takes less than 30 minutes until everything is cloned, mounted, added to the inventory (incl. the NIC adjusted to the shadow switch) and booted up.

Why not just clone all VMs via VMware?

Good question – the answer is simple: this would have an impact on your storage capacity, because it would create an actual clone. And it actually takes longer to clone individual VMs than to just grab a storage-level snapshot and be able to adjust what you want down to the volume level on the storage. Even the clean-up might be more involved or leave some unwanted data behind – while a clone of the volume only needs you to remove the VM guest system from the inventory and then unmount and delete the whole shadow volume.

I wrote this all up because I wanted to share it – the whole idea is not that special in theory, but I thought it is a good example of how you can get a huge and decent lab environment with only minimal effort. In any case, I hope the idea behind it will help some of you out there 🙂

 

 

Windows 2016 DHCP load balancer and its quirks

Windows 2016, or probably even 2012, allows you to create a real load balanced / full failover DHCP server configuration, unlike Windows 2008, which only allowed you to split the scopes.

Now, this works pretty great for the most part – but it has actually two major flaws you need to be aware of and actually take action on.

Neither reservations nor server / scope options are replicated.

This actually is a big deal. Assuming you change settings for the pool on server A, you end up with clients that – depending on which DHCP server answered them – might apply the new settings from server A or pull the old ones from server B.

Further, a reservation might work when you put it in place, and then – all of a sudden, a few days later – you get a ticket telling you there is an issue, and you find out that the reservation isn’t being applied anymore. What happened? Well – you might have set the reservation on server B but not on server A – depending on which server answered the client, you again run into an issue.

Microsoft seems to have put a quick and dirty synchronization in place and the only true way around is to force the two DHCPs to synchronize with the following PowerShell command:
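The command itself is not shown in this excerpt – presumably it is the failover replication cmdlet from the DhcpServer module, along the lines of (server name is an example):

  # replicate all failover scopes from this server to its partner
  Invoke-DhcpServerv4FailoverReplication -ComputerName "DHCP-SERVER-A" -Force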

This could be automated via the Task Scheduler by invoking the command from a DOS prompt via:
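Again, the original line is not shown here – roughly, a scheduled task could call PowerShell like this (an assumption, not the author's original command line):

  powershell.exe -NoProfile -Command "Invoke-DhcpServerv4FailoverReplication -Force"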

But even then, you better check all your DHCP servers and always make sure any changes are made on all DHCP servers or at least correctly replicated. Otherwise you might encounter the weirdest issues.

 

Monitor user accounts in Active Directory with PRTG

The following script will read through your current Active Directory and filter for user accounts with the following specific conditions:

  • Lockedout users – please read below for further information about this
    • all users that are lockedout
    • must be an enabled user
    • that is not expired
  • disabled users
    • all users that have been disabled
  • expired users
    • must be an enabled user
    • the expiration date is set and past the current date
  • users with password never expires set
    • must be an enabled user

This will give you a pure counter output per channel in an XML result for the PRTG extended script sensor.
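The script itself is not reproduced in this excerpt (see the PRTG knowledge base link at the end of this post); a much simplified sketch of the counting logic and XML output, assuming the ActiveDirectory module, could look like this:

  Import-Module ActiveDirectory

  $now     = Get-Date
  $enabled = Get-ADUser -Filter 'Enabled -eq $true' -Properties LockedOut, AccountExpirationDate, PasswordNeverExpires

  $lockedOut = @($enabled | Where-Object { $_.LockedOut -and (-not $_.AccountExpirationDate -or $_.AccountExpirationDate -gt $now) }).Count
  $expired   = @($enabled | Where-Object { $_.AccountExpirationDate -and $_.AccountExpirationDate -lt $now }).Count
  $pwdNever  = @($enabled | Where-Object { $_.PasswordNeverExpires }).Count
  $disabled  = @(Get-ADUser -Filter 'Enabled -eq $false').Count

  Write-Output "<prtg>"
  Write-Output "  <result><channel>Locked out users</channel><value>$lockedOut</value></result>"
  Write-Output "  <result><channel>Disabled users</channel><value>$disabled</value></result>"
  Write-Output "  <result><channel>Expired users</channel><value>$expired</value></result>"
  Write-Output "  <result><channel>Password never expires</channel><value>$pwdNever</value></result>"
  Write-Output "</prtg>"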

But there is a theoretical flaw in one of the methods – the locked out users. Now, user accounts get locked out in Active Directory due to too many logon attempts with an invalid password. This causes Active Directory to set the lockedout bit in the object properties. The issue here is that this bit will not be set back to 0 automatically after the defined lockout duration (GPO) has passed; the property will only be set back to 0 once the lockout duration has passed and the user has successfully logged on again.

This means the counter might give you more results than are currently true – it might count users that were locked out, whose lockout duration has already passed, but who did not yet log on successfully again. This is somewhat of a false positive, while not totally false. In any case, you need to be aware of this.

The script could be more efficient in the way it filters a few things; so far I optimized it as far as I could – the LockedOut value cannot be used in a -Filter. In theory it might be possible to speed it up with a -Filter on the UserAccountControl attribute (if that is even possible – not tested) – but I am not certain this would work. If you really want to speed it up, you would need to work with -LDAPFilter – but this completely replaces the internal filter capabilities of Get-ADUser – you can’t use both, it is one or the other.

This script was updated with a corrected version as of February 2019 and was also posted in the PRTG knowledge base here.

Inserting tables as pictures

Microsoft Office has had a bug for a very long time when it comes to data copied from Excel that you try to paste as a picture into another Office application. The table often gets totally messed up and might even miss rows and columns. This happens when you copy the data from Excel and try to insert it into e.g. Microsoft Word with the special insert option Insert as Picture.

The issue is well known and the solution is actually pretty simple – you don’t copy the data from Excel, you copy a picture with a specific setting.

This also allows you to insert the copied data, respectively the image, into any other application as a picture without using screenshot tools like the Windows internal Snipping Tool.

Instructions:

  1. select the rows and columns you want to copy in Excel (mark them)
  2. while on the Home ribbon, click the little drop down icon next to the Copy button
  3. select Copy as Picture
  4. switch the appearance to As shown when printed
  5. and click OK
  6. now insert the PICTURE that you copied where you need it – this will not be a formatted table / text-data

Please note – you can also change your quick access menu in Excel and add the Copy as Picture command there for easier access to it.