
Windows 10 Build 2004 / 20H1 – SMBv1 network drives not connecting


The newest Windows 10 builds and updates can break some network connections. I saw this specifically in a situation with an SMBv1 drive that was mapped via FQDN through a GPO.

Windows was not able to connect to the drive; NET USE only ever showed the drive as reconnecting.

Connecting to the same share via hostname or IP worked just fine, as did accessing the UNC path directly.

The eventual solution is a simple registry adjustment. It is made in the user-profile HKCU hive, so no elevated rights are needed.

Steps:

  1. Open REGEDIT
  2. Go to HKCU\Network
  3. Select the key with the drive letter you have issues with
  4. Add a new REG_DWORD value
    1. Name: ProviderFlags
    2. Value: 1 (decimal) / 0x00000001
  5. Reboot

Your network drive should work normally again.
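
The same change can also be scripted; a minimal PowerShell sketch (the drive letter Z: is just an example – use the letter of the affected mapping):

```powershell
# Set ProviderFlags = 1 for the affected mapped drive (HKCU, no elevation required)
$driveLetter = 'Z'   # example – replace with the drive letter that fails to reconnect
New-ItemProperty -Path "HKCU:\Network\$driveLetter" -Name 'ProviderFlags' `
    -PropertyType DWord -Value 1 -Force | Out-Null
```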

Background and Explanation:

The ProviderFlags value instructs Windows to reconnect the SMBv1 network drive, more or less. In the end it did not matter whether the drive was connected per FQDN, IP or hostname – it was the reconnect implied by the GPO, or rather the NET USE /PERSISTENT:YES switch, that caused the issue. If you map the drive via a script – e.g. a netlogon script – you could define it with /PERSISTENT:NO and not see the issue at all.

In the end this is specific to SMBv1, and I can’t warn enough about the security risks of this protocol. Still – there are systems here and there that need to stick around – hopefully secured by firewalls, sandboxes and the like.

ActiveDirectory/LDAP result limits – MaxPageSize


Active Directory, or rather LDAP, has a result limit setting, MaxPageSize, which is set by default to 1000 rows per query.

This is primarily important if you use some kind of programming language to get results from LDAP: your code must compensate for this limit and use paging.

Your LDAP query does not need to specify the limit; only the code needs to do the paging, as you will always get at most the number of results allowed by the current settings.

To check your current settings, run the following commands in a command prompt / cmd window:
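
A sketch of the usual ntdsutil sequence (the prompts are shown for orientation; replace <your DC name> with one of your domain controllers):

```
ntdsutil
ntdsutil: ldap policies
ldap policy: connections
server connections: connect to server <your DC name>
server connections: quit
ldap policy: show values
```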

In theory you could set different values now as well, assuming you have the permission level to do so. This is not recommended though – engage paging in your code instead. Otherwise you risk overloading your DCs: even if your own commands won’t cause it, a possible DoS attack could – malicious or not. So leave the limits alone, but be aware of them.
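
As an illustration of paging on the client side, here is a minimal PowerShell sketch using System.DirectoryServices (run on a domain-joined machine; not tied to any specific tooling from this article):

```powershell
# Once PageSize is set to a value > 0, DirectorySearcher transparently pages through
# the results, so you are no longer capped at the server-side MaxPageSize of 1000.
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter   = '(&(objectCategory=person)(objectClass=user))'
$searcher.PageSize = 500
$results = $searcher.FindAll()
"Total results: $($results.Count)"
$results.Dispose()
```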

 

VMware hosts network speed tests with iperf


Ever needed to run speed tests between your VMware hosts? There is a CLI command, iperf3, for this.

The command runs in a server and a client mode: one host will be the server and the other the client. Some storage vendors even support the iperf3 command on their appliances.

Example scenario with two VMware ESX hosts:

  • IT-ESX-01P – will act as server
    • IP: 10.0.0.1
  • IT-ESX-02P – will act as client
    • IP: 10.0.0.2

Steps and commands to execute the network speed test:

  1. Enable SSH on both hosts, connect to them with e.g. PuTTY and log on.
  2. IT-ESX-01P will act as our server
    1. disable the firewall
      1. esxcli network firewall set --enabled false
      2. The ESX firewall needs to be disabled temporarily to execute the tests – on client and server
    2. List the kernel network IP addresses
      1. esxcli network ip interface ipv4 get
      2. choose the interface IP that is on the network you want to test, only kernel-IPs will work
    3. go to the directory that holds the iperf3 command
      1. cd /usr/lib/vmware/vsan/bin
    4. start the iperf server on this host on the kernel IP you need it on
      1. ./iperf3.copy -s -B 10.0.0.1
      2. this command starts the server (listener) on the host on the specified IP address
  3. IT-ESX-02P will act as our client
    1. disable the firewall
      1. esxcli network firewall set --enabled false
      2. The ESX firewall needs to be disabled temporarily to execute the tests – on client and server
    2. go to the directory that holds the iperf3 command
      1. cd /usr/lib/vmware/vsan/bin
    3. execute the speed test against the server IP address
      1. ./iperf3 -c 10.0.0.1 -t 10 -V

      2. this will start sending packets to the server – you will see the flow on both sides
      3. cancelling this command – Ctrl + C – can take a minute; be patient, especially if you mistyped the IP or forgot to disable the firewall etc.
  4. Review the results on the speed test
    1. Below are result samples for a 1 Gbit, a 10 Gbit and a 25 Gbit VMkernel network.
    2. Sample results – 1 Gbit
    3. Sample results – 10 Gbit
    4. Sample results – 25 Gbit
    5. Be aware that those results will vary and depend on the network bandwidth available at the moment of the test, as well as on the current load on the network cards of client and server.
  5. IT-ESX-01P exit server mode and enable firewall
    1. Ctrl + C will exit the server mode and return to the CLI
    2. enable the firewall
      1. esxcli network firewall set --enabled true
    3. EXIT SSH
  6. IT-ESX-02P enable firewall
    1. enable the firewall
      1. esxcli network firewall set --enabled true
    2. EXIT SSH
  7. Done
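
For quick reference, the whole sequence from the steps above boils down to these commands (IP addresses and the iperf3.copy workaround exactly as in the example):

```
# on IT-ESX-01P (server)
esxcli network firewall set --enabled false
cd /usr/lib/vmware/vsan/bin
./iperf3.copy -s -B 10.0.0.1

# on IT-ESX-02P (client)
esxcli network firewall set --enabled false
cd /usr/lib/vmware/vsan/bin
./iperf3 -c 10.0.0.1 -t 10 -V

# afterwards, on both hosts
esxcli network firewall set --enabled true
```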

Additional links to this topic:

 

 

 

Windows Print Server Aliases


Windows Print Server Aliases – what is that and why would you even need to think about it?

For file servers you can set up DFS structures and have a single point of entry from the perspective of the client. It’s a simple named path and works rather flawlessly if set up right and monitored, e.g. with PRTG. But what about your print server? Is it a fixed hostname with the printers sitting on that host? What happens when you want to upgrade the host to a new Windows version, or theoretically even do some special DNS routing (that’s very advanced and has its hurdles; I will not address it in this posting)?

Well – you can of course set up an ALIAS (CNAME) in your DNS, but you will soon discover that you can’t connect to the printers on this server. This is because you are missing some registry tweaks. At this point I also want to make you aware that I have seen Windows updates remove those values, so keep this article handy to reconstruct the registry in case of any issues.

You will need a total of three registry values added, as follows:

The first value, DnsOnWire, is for the print server itself. It makes the print spooler aware that you might use DNS ALIAS / CNAME entries to access it. More can be found e.g. here: Windows couldn’t connect to the printer – Windows Server | Microsoft Docs
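
A minimal PowerShell sketch for this value (the key path is the one documented by Microsoft):

```powershell
# DnsOnWire = 1 tells the print spooler to accept connections via DNS aliases / CNAMEs
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Print' `
    -Name 'DnsOnWire' -PropertyType DWord -Value 1 -Force | Out-Null
```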

The second value, DisableStrictNameChecking, configures the SMB server / LanmanServer – it needs to be aware as well that we will use CNAMEs to access the shares on the server. You can find some more information at the following link: Can’t access SMB file server – Windows Server | Microsoft Docs
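
Again as a PowerShell sketch (this is the well-known LanmanServer parameter):

```powershell
# Allow SMB connections using names other than the real hostname (e.g. CNAME aliases)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
    -Name 'DisableStrictNameChecking' -PropertyType DWord -Value 1 -Force | Out-Null
```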

And last but not least, OptionalNames – the value that is most hidden but still so important. You can make it a REG_MULTI_SZ value, but it also works as a simple REG_SZ holding the short CNAME alias you specified; you don’t even need to use the FQDN.
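
A sketch for this one as well (the alias name printsrv is just an example; use REG_MULTI_SZ if you need several aliases):

```powershell
# Register the alias with the SMB server on the print host
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
    -Name 'OptionalNames' -PropertyType String -Value 'printsrv' -Force | Out-Null
```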

There are many ways to accomplish this last value; it changed throughout the Windows versions and was possibly even renamed. The worst I saw on a Windows 2016 server was that it vanished after an update session and reboot – so be prepared for that. A simple recreation and reboot fixed the issue.

Also, make sure you reboot after those changes, otherwise they won’t take effect.

Setting up Windows Search Index


Windows Search indexing is a Microsoft solution that indexes your file servers and their files in full text, allowing your end users to get results quickly because they seamlessly query the full-text search database.

This is accomplished simply by using the search box in the upper right of Windows Explorer while on a network share or mapped network drive.

Many people say that Windows Search does not work right, but in my experience quite the opposite is true – it only depends on the right setup, which I will explain here.

Before we go into details – there is a challenge that comes along with DFS namespaces. I found a way to bypass this; look here to understand how it works.

  • add an additional drive to your file servers
    • I roughly recommend about 10% of the total used size of the file shares you want to index
    • you might be able to go with less, or need more – but expect 10% to be on the safe side
  • give the new drive a drive-letter (e.g. I: for Index) and name it INDEX (to make clear that this is only to be used for the INDEX database)
  • add the following path to this drive
    • I:\ProgramData\Microsoft
    • The path above mirrors the default location on the C: drive where the index resides by default; I recommend mirroring this path on the new target drive to keep things simple and clear
  • add the feature Windows Search to your server
  • download the Microsoft Filter Pack 2.0 and install it on your server
    • yes – this is an Office 2010 SP2 filter pack, and yes, that’s okay
    • just download the x64 version of it and install it
    • this will add an iFilter pack to Windows that allows it to understand DOCX and XLSX files (and others) and full-text index them
  • enable the new service Windows Search (see the PowerShell sketch after this list)
    • set its startup type to Automatic
    • start the service
  • from your start menu (or Control Panel) open the Indexing Options
    • click on ADVANCED
      • select the earlier created path I:\ProgramData\Microsoft as new path for the index
      • after you click on OK Windows will stop the service and move the existing index
        • you might need to manually start the service again
    • now click on MODIFY and select the server shares or local paths you want to index
    • if you leave the Indexing Options open you will see that Windows updates the count constantly
      • Windows will slow down indexing if you work on the console or via RDP on the server Desktop
      • You will see the message INDEXING COMPLETED once all files have been indexed
        • this can take many days – don’t be surprised and be patient
  • think about monitoring the index and the partition the index resides on
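
A minimal PowerShell sketch for the feature and service part of the list above (feature and service names as used on recent Windows Server versions – verify with Get-WindowsFeature):

```powershell
# Add the Windows Search service feature, set it to automatic and start it
Install-WindowsFeature -Name Search-Service
Set-Service -Name WSearch -StartupType Automatic
Start-Service -Name WSearch
```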

One challenge remains – if your indexing drive runs full, Windows Search will crash the index database, likely flag it as corrupt and start from scratch. This can happen within minutes, even before any of your monitoring solutions warns you about the full drive. There are event log entries about it, and you would easily see that the number of indexed files dropped to a very low number again; the drive suddenly becomes pretty empty again as well. This is surely one of the downsides of how Microsoft implemented this, but if you provide plenty of space in the first place, you likely won’t experience this issue.

iFilters for PDF files do not seem to be necessary on current Windows versions. In the past I downloaded an iFilter for PDF files from Adobe, but I experienced many issues with temporary files on the C:\ drive of my file servers – a known bug with the Adobe iFilter. As of Windows 2016 PDF files are full-text indexed out of the box, but the same restrictions as in the past apply: a PDF can hold editable text that can be indexed, or it can hold pictures – often the case with scanned documents. As long as your scanner didn’t do OCR and translate the image to text, those files can’t be full-text indexed.

Monitoring

It is almost essential to have proper monitoring in place for this. Actually, you want to monitor everything within your network, but that’s another story. I faced the same challenges and was able to find out more about how the index database grows and why many people aren’t happy with it. Eventually it comes down to improper configuration and bad or no monitoring at all. I highly recommend you have a look at this additional blog article about Windows Index Monitoring.

Shadow copies aren’t accessible – advanced VSS configuration


Most file servers are configured to use the Windows internal shadow copies / VSS to allow administrators or even users to quickly restore files.

Microsoft allows you to extend the default maximum of 64 shadow copies to a total of up to 512 as described here: https://docs.microsoft.com/en-us/windows/desktop/Backup/registry-keys-for-backup-and-restore#maxshadowcopies

It is pretty easy to implement – no server restart needed (just restart the Volume Shadow Copy service if it is running).

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VSS\Settings
  • MaxShadowCopies DWORD
  • official maximum: 512 (decimal, NOT HEX!!!) (HEX: 0x200)

Now – in January 2019 we detected a bug that affects at least Windows 2016 servers, if not more. We could not see the shadow copies of the current day, while shadow copies of the previous day were fully available. The cut-off was literally before midnight; only after about 12 subsequent shadow copies did they start to trickle in.

Once we adjusted the maximum to 500 (decimal – hex: 0x1f4), restarted the service and waited until the next scheduled shadow copy executed (plus a few minutes for a cleanup to process), we could finally see the most current shadow copy from the Windows Explorer menu.

This works much better than the defined maximum of 512. There seems to be some kind of bug that started with some update; we couldn’t pin it down in detail, and simulating it would take a lot of time.

NirSoft has a great tool to investigate your shadow copies as well here: http://www.nirsoft.net/utils/shadow_copy_view.html

This GUI-based tool lets you look into your shadow copies in part. However, when trying to open the most current paths while the 512 maximum was set, Windows Explorer still couldn’t handle it – but it was nice, detailed proof that the current shadow copies were in fact there.

Similar results could be seen using PowerShell and command-line tools like VSSADMIN – the shadow copies were there.

WMI provided the same information as well – for an example see the script here what uses WMI and PowerShell to gather information about shadow copies: https://www.it-admins.com/monitoring-shadow-copies-with-prtg/

Suggestions to configure shadow copies:

  • set a maximum of 500 instead of 512 (see the registry sketch after this list)
  • do them e.g. hourly – as you need them
    • this is all a calculation: straight hourly gives you 500 copies / 24 per day = roughly 20 days back
    • if you go e.g. 5 AM to 9 PM and skip Sundays you extend this: 500 / 17 snapshots a day (hourly) = roughly 29 days => add the skipped Sundays to the equation and you easily span a whole month
      • in theory this would allow you, while doing full virtual machine backups (VHD-level backups), to keep only the month-end tape of every month and still be able to restore files from the shadow copies – I had cases where I had to dig that deep..
  • volume configuration on your file servers (the drive letters don’t matter much)
  • add monitoring to your VSS – like described here with PRTG
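
The registry change itself, as a PowerShell sketch (500 decimal, as suggested above):

```powershell
# Raise the shadow copy limit; 500 has proven more reliable than the official maximum of 512
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\Settings' `
    -Name 'MaxShadowCopies' -PropertyType DWord -Value 500 -Force | Out-Null
Restart-Service -Name VSS   # only needed if the Volume Shadow Copy service is currently running
```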

 

 

Microsoft RADIUS / NPS SQL logging


An issue or question I see again and again – proper RADIUS logging with Microsoft NPS / Network Policy Server.

Let me guide you through a few steps:

  1. Install a Microsoft SQL Server or, if not available, SQL Express
    1. be aware – SQL Express has very tight database size limits and no SQL Agent – this might be an issue
  2. Create a new database via SQL Management Studio in the SQL server
    1. name it e.g. RADIUSLogging
  3. run the SQL script from this Microsoft website in a new query window against this database (make sure it is not accidentally run against any other database)
    1. you could add a line like USE RADIUSLogging at the very top to prevent this
  4. configure your RADIUS server to log to this SQL server and database
  5. make sure you have fail-over logging to a text file – to avoid issues in case your SQL DB grows too big or is not reachable for any reason
    1. decide in the text-file configuration whether you want to deny access if there is a logging issue or still proceed with the logon

Now you have RADIUS logging its information to a SQL database – actually a single table – and you can dig around in it. The IT-Assets database provides a front-end example for this – you don’t need to use it, but it might be of help – see here.

To interpret all those columns and values – look at the following links for additional information:

You will face the issue that the database grows rapidly – depending on how many requests go to your RADIUS system. Keep a close eye on it – use a monitoring solution like Paessler PRTG to watch the size, and keep in mind that SQL Express has size limits, e.g. 10 GB. The full version of Microsoft SQL has no such limits, and you can additionally use the SQL Agent to execute tasks. The following script can help you purge data from the RADIUS database to keep its size under control. You can use the SQL Agent (not available in SQL Express) to run it automatically; with SQL Express, either run it manually or automate it with another solution that deletes older entries from the database.

The script purges data older than 14 days – you can adjust the number of days to your liking / needs.
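
As a rough sketch of such a purge (the table and column names accounting_data and timestamp are assumptions based on Microsoft’s table-creation script – verify them against your own database before running anything):

```powershell
# Purge RADIUS accounting rows older than 14 days (assumed table/column names!)
Import-Module SqlServer    # or the legacy SQLPS module
$days  = 14
$query = "DELETE FROM dbo.accounting_data WHERE [timestamp] < DATEADD(day, -$days, GETDATE());"
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'RADIUSLogging' -Query $query
```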

The advantage of DFS and how to set up a working structure


File shares are something every IT professional will work with. Many companies have far too complicated and unstructured network file systems, with permissions nested too deep, too many shares and access points, often several mapped drives – and, from an IT perspective, nightmares when it comes to migrating to newer servers or giving satellite offices and subsidiaries access, especially over lower-speed connections.

Having been in IT for about 20 years now, I have seen a lot and been challenged quite a bit. One of the best solutions I came across is the one I am about to show you here. It is very structured while letting you leverage it as you grow, and it should work in most businesses.

First of all – please note that I will not go as far as explaining the differences between Active Directory integrated and standalone namespaces. If at all possible, I suggest you use Active Directory integrated namespaces to simplify the rollout, but both would work.

The structure example:

The structure example relies on a DFS root server and a separate file server per root folder of the later network drive. This is just an example – you do not need to split it all up, though if you can, do it to keep things as structured as possible.

Example target file system structure:

  • N:\
    • N:\Archive
      • N:\Archive\John Doe
      • N:\Archive\Jane Doe
    • N:\Departments
      • N:\Departments\Marketing
        • General
        • Management
        • Public (anyone has read access)
      • N:\Departments\Accounting
        • General
        • Management
        • Public (anyone has read access)
    • N:\Other
      • N:\Other\Manufacturing
      • N:\Other\Projects

The declared goal is to keep the NTFS rights structure as simple as possible and not go any deeper than e.g. level three – e.g. N:\Departments\Marketing\General.

Each department folder in this example will have a public folder where a member of the department has read/write access while any non-marketing member has read access to files that are published there.

The Archive tree is for terminated employees and archive data. Their information gets collected in a sub-folder of this tree, a group is created for each of those folders, and only people approved to access this data will see and be able to read those archived files (read-only is the recommended NTFS permission).

The file servers and their preparation:

DFS Root-Server
  • create a folder on the data-partition like D:\DFSRoots – there will not be any real data in this folder – but it will hold the actual DFS structure
    • create sub folders for the branches on the shared DFS drive like:
      • D:\DFSRoots\Departments
      • D:\DFSRoots\Archive
      • D:\DFSRoots\Other
DFS Department Server
  • create a folder on the data-partition like D:\SharedFolders\Departments
    • remove the Everyone or Authenticated Users groups from this folder – only SYSTEM and Domain Admins should have read/write permission here, while the group N_Departments will have read-only access on this folder
    • create a sub-folder for each main folder you want to see under the path N:\Departments and share it
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Departments\Marketing
      • D:\SharedFolders\Departments\Accounting
  • now create the following sub-folders for each department folder as shown on the example Marketing
    • D:\SharedFolders\Departments\Marketing\General
    • D:\SharedFolders\Departments\Marketing\Management
    • D:\SharedFolders\Departments\Marketing\Public
  • create two groups in Active Directory for Marketing
    • N_Departments_Marketing_General
    • N_Departments_Marketing_Management
  • create a general group N_Departments to use it for all Public folders
  • assign the groups to their respective sub-folders General and Management with read/write rights – you probably will need to remove the read access that the group N_Departments inherited onto these folders
  • assign the group N_Departments to the Public folder in all departments with read-only rights (if not inherited)
  • assign the group N_Departments_Marketing_General to the Marketing\Public folder with read/write access – allowing each member of marketing to publish information for access to other people – only marketing can write in this folder, other people only have read-access to it
DFS Archive Server
  • create a folder on the data-partition like D:\SharedFolders\Archive
    • create a sub-folder for each main folder you want to see under the path N:\Archive and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Archive\John Doe
      • D:\SharedFolders\Archive\Jane Doe
DFS Other Server
  • create a folder on the data-partition like D:\SharedFolders\Other
    • create a sub-folder for each main folder you want to see under the path N:\Other and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Other\Manufacturing
      • D:\SharedFolders\Other\Projects

The DFS namespace set up and configuration

  • add the Namespace \\domain.local\N for the N: drive (just an example)
  • add the folders Archive, Departments and Other to the namespace
  • for each of those folders, add the shared sub-folders indicated in the list below (they will appear on the Namespace tab when you click the folder in DFS Management) and set the target to the corresponding file share on the specific DFS server where the data will reside
    • Departments\Marketing
    • Departments\Accounting
    • Archive\John Doe
    • Archive\Jane Doe
    • Other\Manufacturing
    • Other\Projects
    • This will actually create a shared sub-folder on the DFS root server for each of those folders in D:\DFSRoots\ (see the PowerShell sketch below)
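
As an illustration, the namespace part can also be created with the DFSN PowerShell cmdlets. This is only a minimal sketch using the example names from above – the server names DFSROOT01 / FILESRV01 and the share names are placeholders, and the target shares must already exist:

```powershell
# Create the domain-based namespace for the N: drive and one example folder target
New-DfsnRoot   -Path '\\domain.local\N' -TargetPath '\\DFSROOT01\N' -Type DomainV2
New-DfsnFolder -Path '\\domain.local\N\Departments\Marketing' `
               -TargetPath '\\FILESRV01\Marketing$'
```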

Note – information about the above example

The example above is kept simple – I did not go into each and every right you would need to assign, for the sole purpose of keeping it simple and understandable. Please investigate and set the rights as you really need them.

As for the Archive tree, it might be beneficial to have a PowerShell script automate the folder creation, group creation and rights assignment for those NTFS paths, so you limit the possible failure rate when archiving terminated employee data and other material in this tree branch.
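
A minimal sketch of what such an automation could look like – the folder, group, domain and OU names follow the example structure above and are otherwise assumptions:

```powershell
# Create an archive folder, a matching AD group and grant that group read-only access
Import-Module ActiveDirectory

$employeeName = 'John Doe'   # example input
$folder = "D:\SharedFolders\Archive\$employeeName"
$group  = "N_Archive_$($employeeName -replace ' ','')"

New-Item -Path $folder -ItemType Directory -Force | Out-Null
New-ADGroup -Name $group -GroupScope Global -GroupCategory Security `
    -Path 'OU=FileShareGroups,DC=domain,DC=local'   # example OU

# Read-only (read & execute) NTFS permission for the new group, inherited by sub-items
$acl  = Get-Acl -Path $folder
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
    "DOMAIN\$group", 'ReadAndExecute', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
$acl.AddAccessRule($rule)
Set-Acl -Path $folder -AclObject $acl
```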

What are the real benefits of this

  • add multiple folder targets for replication
    • replication can be beneficial in a server-migration scenario as well as in a subsidiary scenario
    • you can add replication on the Departments branch per department folder – not every subsidiary will need a mirror of each department, rather just a few – this decreases the amount of data, the load on the connection and the size of the server and its disk space, and reduces cost as well
  • a simple rights structure fully based on groups
    • in general you should never ever use a user account to assign any rights – always create a group, whether for a drive-share, NTFS rights or any other purpose. Always create a group!
    • you can add and remove users from those groups
    • you can audit the permissions on the NTFS side rather quickly because they should relate to clear group names
    • the groups can be audited against HR lists of members of the department or by department managers and directors to make sure only people that need to have certain access levels will have them
  • while limiting access to certain folders you limit the amount of damage a possible attack by malware could cause 
  • you can divide or summarize the actual file-servers that hold the data as needed in the long run
  • a simple group design with limited depth permissions is easier to maintain and audit
  • you have one central network drive that you assign in order to give everyone access – all data will be centrally available under this path, independent of any file server hostname. This can be a huge advantage because some applications might store the UNC path rather than the mapped drive letter, which could cause you major headaches whenever you want to migrate, upgrade or retire your file servers later on
  • possible other file shares within the corporation, in other locations, could be made accessible by linking them in as a folder in e.g. the Other namespace – avoiding that users need to know and remember the UNC path; it will act like part of the mapped drive while pointing in the background to the UNC path

There are many more advantages to DFS and the whole design. I hope this gives you a good overview and idea of how to design or re-design your file-server structure and simplify the whole access structure. 

Full text search and DFS drive mappings

This is a challenge that is not easy to overcome. Still, though there is no official, directly implemented solution from Microsoft for this, I was able to develop a solution that accesses the Windows Search index and provides results back to the end user using only standard Windows components. All you need to know and do is described in the IT Search section of this website.

Windows 2016 DHCP load balancer and its quirks


Windows 2016 (and probably 2012 as well) allows you to create a real load-balanced / full-failover DHCP server configuration, unlike Windows 2008, which only allowed you to split scopes.

Now, this works pretty well for the most part – but it has two major flaws you need to be aware of and actually act on.

Neither reservations nor server / scope options are replicated.

This is a big deal. Assume you change settings for a pool on server A: clients may end up applying the new settings from server A or pulling the old ones from server B, depending on which DHCP server answered them.

Further, a reservation might work when you put it in place, and then – all of a sudden, a few days later – you get a ticket telling you there is an issue, and you find out that the reservation isn’t applied anymore. What happened? Well – you might have set the reservation on server B but not on server A; depending on which server answered the client, you run into an issue again.

Microsoft seems to have put a quick-and-dirty synchronization in place, and the only true way around it is to force the two DHCP servers to synchronize with the following PowerShell command:
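
The built-in cmdlet for this is Invoke-DhcpServerv4FailoverReplication – a minimal sketch (the server name is an example):

```powershell
# Replicate scope configuration, options and reservations to the failover partner
# for all failover relationships on the given DHCP server
Invoke-DhcpServerv4FailoverReplication -ComputerName 'DHCP-01' -Force
```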

This could be automated via the Task Scheduler by invoking the command from a command prompt, e.g.:
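
A sketch of such a task command line (again with the example server name):

```
powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "Invoke-DhcpServerv4FailoverReplication -ComputerName 'DHCP-01' -Force"
```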

But even then, you had better check all your DHCP servers and always make sure any changes are made on all DHCP servers, or at least correctly replicated. Otherwise you might encounter the weirdest issues.

 

Monitor user accounts in Active Directory with PRTG


The following script will read through your current Active Directory and filter for user accounts with the following specific conditions:

  • Locked-out users – please read below for further information about this
    • all users that are locked out
    • must be an enabled user
    • that is not expired
  • disabled users
    • all users that have been disabled
  • expired users
    • must be an enabled user
    • the expiration date is set and past the current date
  • users with password never expires set
    • must be an enabled user

This will give you a pure counter output per channel, formatted as XML for the PRTG EXE/Script Advanced sensor.
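
As an illustration of the approach – not the full script, which also contains the optimizations discussed below – a simplified sketch could look like this:

```powershell
# Count locked-out, disabled, expired and "password never expires" users and emit
# the XML structure PRTG's EXE/Script Advanced sensor expects on stdout
Import-Module ActiveDirectory

$now     = Get-Date
$enabled = Get-ADUser -Filter 'Enabled -eq $true' `
           -Properties LockedOut, AccountExpirationDate, PasswordNeverExpires

$locked   = ($enabled | Where-Object { $_.LockedOut -and
              (-not $_.AccountExpirationDate -or $_.AccountExpirationDate -gt $now) }).Count
$disabled = (Get-ADUser -Filter 'Enabled -eq $false' | Measure-Object).Count
$expired  = ($enabled | Where-Object { $_.AccountExpirationDate -and
              $_.AccountExpirationDate -lt $now }).Count
$pwdNever = ($enabled | Where-Object { $_.PasswordNeverExpires }).Count

@"
<prtg>
  <result><channel>Locked out users</channel><value>$locked</value></result>
  <result><channel>Disabled users</channel><value>$disabled</value></result>
  <result><channel>Expired users</channel><value>$expired</value></result>
  <result><channel>Password never expires</channel><value>$pwdNever</value></result>
</prtg>
"@
```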

But there is a theoretical flaw in one of the methods – the locked-out users. User accounts get locked out in Active Directory due to too many logon attempts with an invalid password. This causes Active Directory to set the LockedOut bit in the object properties. The issue is that this bit is not reset to 0 automatically after the defined lockout duration (GPO) has passed; it is only reset once the lockout duration has passed and the user successfully logs on again.

This means the counter might give you more results than are currently true: it might count users that were locked out where the lockout duration has passed but who have not yet logged on successfully. This is somewhat of a false positive, while not totally false. In any case, you need to be aware of it.

The script could also be more efficient in the way it filters a few things; so far I have optimized it as far as I could. The LockedOut value cannot be used in a -Filter; in theory it might be possible to speed it up with a -Filter on UserAccountControl (if that is even possible – not tested), but I am not certain that would work. If you really want to speed it up, you would need to work with -LDAPFilter – but that completely replaces the internal filter capabilities of Get-ADUser; you can’t use both, it is one or the other.

This script was updated with a corrected version in February 2019 and was also posted in the PRTG knowledge base here.

Office 365/Exchange Public Folders – find out if they are still in use


Public Folders in Microsoft Exchange are one of the most challenging parts of Exchange a system administrator can face. Microsoft has actually tried to get rid of them several times, and still they are around. From an administrative point of view, they often grow wild, and I have seen environments with a huge number of public folders where no one was able to tell whether they were still in use or not.

About a year ago I faced the same challenge and had to determine, for a few thousand folders, whether we could retire them or not. PowerShell seemed to be the best approach for this, but I soon found out it is not as easy as I had hoped. There is no single command that gives you everything you need, and no easy way to determine whether a folder is in use. In particular, you can’t rely on attributes like ‘last accessed’, because they can show data that is not really helpful for this purpose, and ‘last modified’ is not accurate either, since a simple mark-as-read action can change it as well.

Drilling further down into what I could use, I decided to look at when the newest object in the folder was created – the last created object in the folder – whether it is an email folder or e.g. a calendar.

The script below exports the following columns into a CSV file that you can process further in e.g. Microsoft Excel:

  • Public Folder Path
    • the path to this public folder
  • Mail Enabled
    • well – is direct email enabled on this folder, yes or no – this can actually be quite important and might need further review, or might need groups/distribution lists to be created instead
  • Mail address
    • helps to determine if this might be somewhere in use on a website etc.
  • Folder Class
    • the class is in most cases either an email folder or a calendar
  • Folder Size
    • total size of the folder – some folders might be really small, and this helps to determine whether you need to keep them or not
  • Number of Items
    • like size – this helps a lot to see if it is something to discard or not
  • Top 1 object creation time
    • if there are items in the folder, this is the newest created item – specifically the date/time the item was created. As mentioned, ‘modified’ will not help you, because a simple mark-as-read action through Outlook would already influence it – the creation time is the most accurate information I was able to find for this purpose

Now to the script(s). We are actually talking about two scripts here – simply because I developed this against an Office 365 Exchange system that required me to log on and load the PowerShell modules from Office 365. The script itself should work against an on-premises Exchange server as well.

Simply put, you need to create both script files in the same folder – the ConnectToO365.ps1 script that is called is just a central solution in a big script folder and is called by each script if necessary. The top section of each script first determines whether there is an active session against the Office 365 environment and reuses it if possible, or calls the connect script to establish a new connection.
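
As an illustration of the core idea only – cmdlet output properties can differ between Exchange versions, so treat the property names as assumptions to verify – a heavily simplified sketch:

```powershell
# For every public folder: size, item count, mail-enabled flag and the creation time
# of the newest item – exported to CSV for review in e.g. Excel
$report = foreach ($pf in Get-PublicFolder -Recurse -ResultSize Unlimited) {
    $stats  = Get-PublicFolderStatistics -Identity $pf.Identity
    $newest = Get-PublicFolderItemStatistics -Identity $pf.Identity |
              Sort-Object CreationTime -Descending | Select-Object -First 1
    [PSCustomObject]@{
        FolderPath  = $pf.Identity
        MailEnabled = $pf.MailEnabled
        FolderClass = $pf.FolderClass
        FolderSize  = $stats.TotalItemSize
        ItemCount   = $stats.ItemCount
        NewestItem  = $newest.CreationTime
    }
}
$report | Export-Csv -Path .\PublicFolderUsage.csv -NoTypeInformation
```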

 

Monitor DFS replication backlog between servers in PRTG


One of the challenges with DFS is monitoring the DFS replication backlog. There are various scripts out there to accomplish this; unfortunately I found nothing I really liked that gave me the simple insight I wanted.

The goal was simple – a script that monitors the backlog between two systems in both directions, meaning Server-A to Server-B and Server-B to Server-A. For both directions I wanted to see the number of files as well as their total size. I did not care which DFS groups or DFS folders are affected in detail – the number of groups might change, and the number of folders will likely change rather frequently, so monitoring per group or even per folder would be hard to do efficiently. Monitoring the number of groups and folders alone has no real advantage either, because those are changed by an administrator anyway.

Below you will find my script, which expects three parameters – the two server names and a limit integer value. The limit does not influence the XML response of the script; you could add <text>$Response</text> and <text>$Response2</text> tags in lines 77 and 79, after the </unit> and before the </result> tag, if you want – I have currently removed them.

See the picture below as an example of how the result looks in PRTG.

Create the following script in C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML and make sure you have the C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Dfsr\ and the C:\Windows\SysWow64\WindowsPowerShell\v1.0\Modules\Dfsr\ folders – you might need to copy them over. If you are missing them entirely, you might need to add the needed Windows roles/features or install the RSAT (Remote Server Administration Tools) on your system.
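
As an outline of the approach – not the script referenced above – a stripped-down sketch based on the Dfsr module could look like this; it only counts backlogged files, while the full script also reports their total size:

```powershell
# Count DFS-R backlog files in both directions between two servers and emit PRTG XML
param([string]$ServerA, [string]$ServerB)

Import-Module Dfsr

function Get-BacklogCount ([string]$Source, [string]$Destination) {
    $count = 0
    foreach ($group in Get-DfsReplicationGroup) {
        foreach ($folder in Get-DfsReplicatedFolder -GroupName $group.GroupName) {
            $count += (Get-DfsrBacklog -GroupName $group.GroupName `
                -FolderName $folder.FolderName `
                -SourceComputerName $Source -DestinationComputerName $Destination |
                Measure-Object).Count
        }
    }
    return $count
}

@"
<prtg>
  <result><channel>$ServerA to $ServerB (files)</channel><value>$(Get-BacklogCount $ServerA $ServerB)</value></result>
  <result><channel>$ServerB to $ServerA (files)</channel><value>$(Get-BacklogCount $ServerB $ServerA)</value></result>
</prtg>
"@
```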

Print Server backup script


Print servers need to be backed up, for two main reasons. One is that users heavily depend on printers, and a print server that is not working properly will cause immediate helpdesk tickets and unhappy users. The other is that installing a new driver – be it a new version, a new model or even an additional manufacturer – can cause other print drivers to act up or even stop working; many administrators know and fear that.

Windows Server actually allows you to back up the current print drivers, installed printers and their configuration. You can use this to migrate your printers or to back them up. Of course you can simply rely on e.g. VMware snapshots, storage-level snapshots or other backups of your server. But you could also export the whole print server configuration using the scripts below. They call the Windows API to back up the printers and store everything in a file that you can keep centrally. You don’t rely on just snapshots or a full server backup for e.g. your SQL databases either, do you?

The solution uses a .CMD file that executes the actual backup and sends an email report, using the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) since I already had it lying around – you could replace the mail option with another SMTP send client, a VBS script, or just remove it completely. Additionally there is a .VBS script that cleans up the target backup files depending on the age of the files in the specified directory.

All the parameters are explained and set in the top part of the .CMD file – I will therefore not explain them here again. You should not need to modify the scripts beyond that, but feel free to do so. Of course you should create a scheduled task and execute the .CMD periodically. This can save you time and headaches in case you have a malfunctioning print server. The restore can easily be done through the Print Management MMC that Windows provides, because the actual backup files are created using the same Windows APIs. Your end users will be happy that their printers got back to work in no time, hopefully.
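
If you prefer to script this yourself, the underlying Windows tool is Printbrm.exe; a minimal PowerShell sketch (backup share and retention period are just examples, not taken from the provided scripts):

```powershell
# Export the full print server configuration (drivers, printers, ports) with Printbrm
$backupDir  = '\\backupserver\printbackups'   # example target share
$backupFile = Join-Path $backupDir ("{0}_{1:yyyyMMdd}.printerExport" -f $env:COMPUTERNAME, (Get-Date))

& "$env:windir\System32\spool\tools\Printbrm.exe" -B -F $backupFile

# Remove exports older than 30 days (the original solution uses a .VBS for this part)
Get-ChildItem -Path $backupDir -Filter '*.printerExport' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item
```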

Automate your SUS clean up


Many companies rely on WSUS, or rather SUS, services from Microsoft – aka Windows Server Update Services – as the internal source and control point for their update deployment to clients and servers within their network.

One of the big challenges for IT is to keep it clean and performant. The cleanup assistant in the SUS management console tends to run forever and in any case means manual labor over and over again.

Below are two scripts – a CMD script that needs to be adjusted with parameters, and a PowerShell script that is called with those parameters. The scripts actually call the same API as the MMC assistant does, just that this can be performed automatically via a scheduled task in Windows.

It helps you keep your SUS slim and more performant.
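
Independent of the scripts described here, current Windows Server versions also expose the same cleanup via the UpdateServices PowerShell module – a minimal sketch, run on the WSUS server itself:

```powershell
# Run the same cleanup steps the WSUS cleanup wizard offers
Import-Module UpdateServices
Get-WsusServer | Invoke-WsusServerCleanup `
    -DeclineExpiredUpdates `
    -DeclineSupersededUpdates `
    -CleanupObsoleteUpdates `
    -CleanupObsoleteComputers `
    -CompressUpdates `
    -CleanupUnneededContentFiles
```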

In any case – I highly recommend not blindly enabling all categories; rather, limit them to the products you actually have in place, and once you have reached a certain patch level, even actively decline updates you will never need again (keep in mind that newly rolled-out systems might still need older updates – but you could refresh your base images or rely on Microsoft update services / online updates for those cases).

The combination of making updates obsolete and actually running a cleanup periodically will improve your SUS server performance.

As for the parameters, those are explained in the CMD script header – therefore I will not explain them here again.

Script based SQL Express backups


SQL Express is widely used, but it has a huge downside: there is no SQL Agent available. Even Windows-internal databases, especially WSUS / Windows Server Update Services / Microsoft Update Services, are in the end SQL Express-like databases that do not have a SQL Agent.

Now, you can have central SQL servers with agents back them up – and I recommend doing so if possible. But for the many cases where this is not possible, you need another way to create those nice little .BAK files, i.e. SQL-internal backups like a SQL maintenance plan would produce. To work around this, I once wrote a script that automates this for each database found on a specific SQL server. It creates the backups via SQLCMD commands and even cleans up obsolete files (files older than x days), much like SQL maintenance plans do.
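
As a rough sketch of the core idea (instance name and target folder are just examples, not taken from the original script):

```powershell
# Back up every user database on a SQL Express instance to one .BAK file per database
$instance  = '.\SQLEXPRESS'    # example instance name
$backupDir = 'D:\SQLBackups'   # example target folder

$dbQuery   = "SET NOCOUNT ON; SELECT name FROM sys.databases WHERE name NOT IN ('master','model','msdb','tempdb');"
$databases = & sqlcmd -S $instance -E -h -1 -Q $dbQuery | Where-Object { $_.Trim() }

foreach ($db in $databases) {
    $db   = $db.Trim()
    $file = Join-Path $backupDir ("{0}_{1:yyyyMMdd}.bak" -f $db, (Get-Date))
    & sqlcmd -S $instance -E -Q "BACKUP DATABASE [$db] TO DISK = N'$file' WITH INIT;"
}
```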

The solution is divided into a .CMD file that executes the actual backup and where you set the configuration/parameters, and a .VBS file that is controlled by the .CMD script and performs the backup cleanup. At the end, the .CMD can send an email report – I used the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) for this since I already had it lying around – you could replace the mail option with another SMTP send client, a VBS script, or just remove it completely.

Adjusting the settings / parameter:

This is all done in the SQLBACKUP.CMD file – the header section pretty much explains all you need to know, from SQL server, SQL user and password to mail server and recipients.

If you want to execute the SQL backups as the Windows user that runs the script, you need to swap the REM (remark) markers on the two relevant lines further down in the script. I apologize for the inconvenience – this is an old script I never updated to have those settings in the header (more automated); I always just changed the lines.

Everything else should be rather easy. Of course you will need sufficient access rights to the SQL databases and your destination backup folder. The Task Scheduler might work best if you execute the script with “cmd /c c:\scripts\sqlbackup.cmd” (change the path as you need it) and set the working directory / start-in folder correctly. It might also help to execute the task with elevated rights etc. – all depending on your system’s configuration.

Below are the two scripts – I hope this helps some of you. The generated .BAK files can simply be restored via the SQL GUI tools because they are native SQL backup files.

Solarwinds Web Helpdesk – Slack alerts


This was originally posted here by myself: https://thwack.solarwinds.com/thread/114863

Solarwinds WebHelpDesk is very powerful, but for those who use Slack as their communication and alerting platform, there is still no integration.

As an IT team we struggled a bit to keep up with the immense flow of emails and to filter them, as well as to be really proactive on new tickets (first response time), or to even realize we had new tickets assigned or that a user / client had written a new note.

To overcome those challenges – and because we all have Slack on all our systems, from workstation to smartphone – we decided to build this integration. Since I spent a bit of time on those scripts and thought they might be helpful for others as well, I am sharing them here and explaining how to implement this.

Please note: these scripts are a version 1 – I am very aware that they could be further cleaned up and simplified, but I wanted to share them already. Bugs are possible as well.

Requirements:

  • You need Solarwinds WebHelpDesk
  • The scripts use the field “Pager” in “Techs” for the Slack username – put the Slack usernames of all your techs in this field – no @, just the name
  • The scripts assume you are executing them directly on the server that has the PostgreSQL database installed
  • The scripts assume the database user/password is defaulted to WHD

How to implement them:

  • Create the five files as shown further below in e.g. “C:\Scripts\WHD” on the PostgreSQL server
  • Create a new Windows Task that starts the WebHelpDesk_SlackAlerts.cmd file every 30 minutes
    • this file actually executes the PowerShell scripts – it is just a workaround that bypasses any PowerShell script execution restrictions
  • download the PostgreSQL ODBC drivers from the following link – assuming you haven’t installed them on the system already
  • Edit the PRTGSlackWebHookNotificationPSv2.ps1
    • This file originally came from our PRTG installation, where it was already modified; it was further modified for WHD alerts – I do not claim to have invented this script nor do I want to abuse any copyrights on it! Source: www.paessler.com for monitoring solutions
    • Line 41: adjust the URL for the FavIcon.ico to your external WebHelpDesk URL
  • Create a new WebHook-Application in your Slack Account
  • Edit the CheckNewTicketsFirstResponse.ps1
    • This script posts to a generic channel in our case – we want to see new tickets as a group, assuming the assigned tech is currently unavailable and couldn’t touch it yet
    • adjust the PostgreSQL settings if needed – IP / Port / User etc.
    • Line 7 – ticket_age_minutes = 88
      • this is a minute value – we alert to the Slack group “helpdesk” if there is a ticket older than 88 minutes – we fire the script every 30 minutes, so a ticket could be up to about two hours old
    • Line 9 – $channel – adjust this to the Slack-Group channel you want to use for those alerts
    • Line 77 – adjust the URL to your external WebHelpdesk URL
    • Line 99 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketAssigned.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only he will see it
    • This will only fire if the Tech did not yet put a Tech-Note in the ticket after it was assigned to him
    • adjust the PostgreSQL settings if needed – IP / Port / User etc.
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes – due to the variable timing, the alert cannot be older than 32 minutes by default
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketNewClientNote.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only he will see it
    • This will only fire if the Tech did not yet put a Tech-Note in the ticket after the client / user did leave a comment or note
    • adjust the PostgreSQL settings if needed – IP / Port / User etc.
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes – due to the variable timing, the alert cannot be older than 32 minutes by default
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL

After that you should be all set.
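
For orientation, posting a message to a Slack incoming webhook from PowerShell looks roughly like this (webhook URL and channel are placeholders; the actual scripts add formatting and error logging on top):

```powershell
# Send a simple message to a Slack incoming webhook
$webhookUrl = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'   # placeholder
$payload = @{
    channel = '#helpdesk'                                      # example channel
    text    = 'WebHelpDesk: ticket waiting for a first response.'
} | ConvertTo-Json

Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $payload -ContentType 'application/json'
```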

We have further integration into PRTG: we monitor the database for “new tickets older than 120 minutes” and look for the logfile “SlackLogErrorDetails.txt”, which indicates that a Slack notification did not go through – most likely due to special characters that the scripts should already take care of, but if it happens again this file will appear. You can integrate that with your monitoring system as well, but it is beyond the scope of the simple notifications.

The other logfiles just show what was sent out and indicate that everything is working well.

Click here to download the scripts archive WebHelpdeskSlackFiles.zip