Monitor user accounts in Active Directory with PRTG

The following script reads through your current Active Directory and filters for user accounts matching the following specific conditions:

  • Locked-out users – please read below for further information about this
    • all users that are locked out
    • must be an enabled user
    • must not be expired
  • Disabled users
    • all users that have been disabled
  • Expired users
    • must be an enabled user
    • the expiration date is set and lies in the past
  • Users with “password never expires” set
    • must be an enabled user

This gives you a pure counter output per channel, formatted as an XML result for the PRTG EXE/Script Advanced sensor.
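The per-channel counter output follows the PRTG EXE/Script Advanced XML shape. Below is a minimal sketch of that format; the channel names and values are illustrative, not necessarily the script's exact ones.

```python
# Sketch: build a PRTG EXE/Script Advanced sensor result -- one <result>
# element per counter channel, wrapped in a <prtg> root element.
# Channel names below are illustrative examples.

def build_prtg_xml(channels):
    """channels: list of (name, value) tuples -> PRTG XML string."""
    parts = ["<prtg>"]
    for name, value in channels:
        parts.append(
            "<result><channel>{}</channel><value>{}</value></result>".format(name, value)
        )
    parts.append("</prtg>")
    return "".join(parts)

counters = [
    ("Locked Out Users", 2),
    ("Disabled Users", 14),
    ("Expired Users", 3),
    ("Password Never Expires", 7),
]
xml = build_prtg_xml(counters)
```

PRTG parses one channel per `<result>` block, so each counter shows up as its own channel on the single sensor.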

But there is a theoretical flaw in one of the methods – the locked-out users. User accounts get locked out in Active Directory after too many logon attempts with an invalid password, which causes Active Directory to set the LockedOut bit in the object properties. The issue is that this bit is not reset to 0 automatically after the lockout duration defined in the GPO has passed; it is only reset once the lockout duration has passed and the user has logged on successfully again.

This means the counter might report more users than are currently locked out: it can count users whose lockout duration has already passed but who have not yet logged on successfully. This is somewhat of a false positive, while not entirely wrong. In any case, you need to be aware of it.

The script could also be more efficient in the way it filters a few things; so far I have optimized it as far as I could. The LockedOut value cannot be used in a -Filter expression. In theory it might be possible to speed it up with a -Filter on UserAccountControl (if that is even possible – not tested), but I am not certain this would work. If you really want to speed it up, you would need to work with -LDAPFilter – but that completely replaces the internal filter capabilities of Get-ADUser. You cannot use both; it is one or the other.
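For the -LDAPFilter route, the whole condition has to be expressed in LDAP syntax. A sketch of how such a filter string could be composed (untested against the script; the same LockedOut caveat applies, since lockoutTime is only reset on the next successful logon):

```python
# Sketch: compose an LDAP filter for "locked out AND enabled" users.
# 1.2.840.113556.1.4.803 is the LDAP bitwise-AND matching rule;
# bit 2 of userAccountControl means "account disabled".

ENABLED = "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"

def locked_out_filter():
    # lockoutTime>=1 matches accounts whose lockout timestamp was set --
    # it stays set until the next successful logon (the caveat above)
    return "(&(objectCategory=person)(objectClass=user)(lockoutTime>=1){})".format(ENABLED)
```

The resulting string would be passed to Get-ADUser via -LDAPFilter instead of -Filter.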

This script was updated with a corrected version in February 2019 and was also posted in the PRTG knowledge base here.

Monitor DFS replication backlog between servers in PRTG

One of the challenges with DFS is monitoring the DFS replication backlog. There are various scripts out there to accomplish this; unfortunately, I found none I really liked that gave me the simple insight I wanted.

The goal was simple – a script that monitors the backlog between two systems in both directions, meaning Server-A to Server-B and Server-B to Server-A. For each direction I wanted to see the number of files as well as their total size. I did not care which DFS groups or DFS folders are affected in detail, because the number of groups might change and the number of folders will likely change rather frequently, making it a challenge to monitor efficiently per group or even per folder. Monitoring the number of groups and folders alone has no real advantage, since those only change when an administrator changes them.

Below you will find my script, which expects three parameters – the two server names and a limit integer value. The limit does not influence the XML response of the script. If you want, you can add the <text>$Response</text> and <text>$Response2</text> tags in lines 77 and 79, after the </unit> and before the </result> tag; I have removed them for now.
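The XML the script returns boils down to four channels – file count and total size per direction. A sketch of that result shape (channel names here are illustrative; "Count" and "BytesFile" are standard PRTG unit values):

```python
# Sketch: four-channel PRTG XML for a bidirectional DFS backlog --
# backlog file count and backlog size for A->B and B->A.
# Channel names are illustrative, not the script's exact names.

def dfs_backlog_xml(server_a, server_b, files_ab, bytes_ab, files_ba, bytes_ba):
    def result(channel, value, unit):
        return ("<result><channel>{}</channel><value>{}</value>"
                "<unit>{}</unit></result>").format(channel, value, unit)
    return "<prtg>{}{}{}{}</prtg>".format(
        result("{} -> {} backlog files".format(server_a, server_b), files_ab, "Count"),
        result("{} -> {} backlog size".format(server_a, server_b), bytes_ab, "BytesFile"),
        result("{} -> {} backlog files".format(server_b, server_a), files_ba, "Count"),
        result("{} -> {} backlog size".format(server_b, server_a), bytes_ba, "BytesFile"),
    )

xml = dfs_backlog_xml("SrvA", "SrvB", 5, 1048576, 0, 0)
```

In the real script the counts and sizes come from the DFSR PowerShell module; this only shows how the two directions map onto channels.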

See the picture below for an example of how the result looks in PRTG.

Create the following script in C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML and make sure the C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Dfsr\ and C:\Windows\SysWow64\WindowsPowerShell\v1.0\Modules\Dfsr\ folders exist – you might need to copy them over. If they are missing entirely, you may need to add the required Windows roles/features or install the RSAT (Remote Server Administration Tools) on your system.

Monitoring Shadow-Copies with PRTG

This was originally posted here by myself: https://kb.paessler.com/en/topic/65026-monitor-shadow-copies-age#reply-247626

This is my solution for it: we monitor specific drives that we enabled for shadow copies and want to see the number of shadow copies, with the newest within x hours and the oldest at least n hours old – those limits can be configured rather easily.

The main issue is that the relevant WMI classes are only available in the x64 context on an x64 system, while PRTG executes sensors as x86 processes even though it is installed on x64. I played around for a while and came up with this simple solution.
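The trick behind the parser script is escaping the WOW64 file-system redirector: a 32-bit process that asks for System32 is silently redirected to SysWOW64, so it has to go through the virtual "sysnative" path to reach the real 64-bit PowerShell. A sketch of that path selection:

```python
# Sketch of the x86/x64 workaround: from a 32-bit process on 64-bit
# Windows, "System32" is redirected to SysWOW64, so the 64-bit
# PowerShell must be reached via the virtual "sysnative" path.

def powershell_path(process_is_32bit, os_is_64bit):
    if process_is_32bit and os_is_64bit:
        # sysnative bypasses WOW64 redirection and hits the real x64 binary
        return r"C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe"
    return r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
```

The .cmd parser below effectively does the same: it launches the PS1 through the 64-bit PowerShell so the Shadow-Copy WMI classes are visible.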

Parameters for the parser-script (the one you need to execute) are: %host C: %host D: etc.

Parser script, needs to be in EXEXML directory: Name: Get-ShadowCopyStatsXMLx64parser.cmd

PS1 script, should be in the EXEXML directory – if not, adjust the path in the parser script: Name: Get-ShadowCopyStatsXML.ps1

PS: yes, the PS1 could be further optimized, but it already took me a while to find out that the main issue was the x86/x64 combination, which simply does not work – see the Microsoft articles in MSDN/KB for more information on why the Shadow-Copy WMI classes are only available in x64 on an x64 system.

Hope this helps others with the same challenge 🙂

Advanced PRTG Slack notifications

This PRTG slack notification script was originally posted by myself here: https://kb.paessler.com/en/topic/66113-how-can-prtg-send-notifications-to-slack#reply-249147

This is our version of the script originally provided with PRTG by Paessler. We use the #colorstatus placeholder as well to determine the current issue, and we use icons to show the status – this works pretty well for us. Besides that, I reformatted the whole notification a bit so it is easier to read in the PRTG Slack messages.

Simple Slack example

It is easy to adjust, and you can even use additional statuses/color-codes if needed.
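The idea can be sketched language-independently: map PRTG's #colorstatus/status to a Slack attachment color plus an icon, then post the payload to an incoming webhook. The color and emoji mapping below is an assumption for illustration, not our exact one.

```python
# Sketch: build a Slack incoming-webhook payload from PRTG status data.
# The status -> (color, icon) mapping is an illustrative assumption.

STATUS_STYLE = {
    "Down":    ("#d9534f", ":red_circle:"),
    "Warning": ("#f0ad4e", ":warning:"),
    "Up":      ("#5cb85c", ":white_check_mark:"),
}

def slack_payload(sensor, device, status, message):
    color, icon = STATUS_STYLE.get(status, ("#777777", ":grey_question:"))
    return {
        "attachments": [{
            "color": color,  # colors the vertical bar next to the message
            "text": "{} *{}* on *{}*\n{}".format(icon, sensor, device, message),
            "mrkdwn_in": ["text"],
        }]
    }
```

The dictionary would then be JSON-encoded and POSTed to the webhook URL configured in the PRTG notification.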

Monitor your projectors and light bulbs with PJLink

Below is a solution rather than a question – I just thought I'd share it with the rest of the Paessler/PRTG world. This was originally posted here by myself: https://kb.paessler.com/en/topic/76639-monitor-your-projectors-and-light-bulbs-with-pjlink

Our challenge:

Projectors were having issues “out of nowhere”, and of course always urgently – the IT department had no monitoring on them. Normally a conference is already going on when it happens, and the projector might be down entirely – most likely due to an end-of-life light bulb.

What we have:

NEC and Optima projectors – with Ethernet interfaces

Research and work:

We found out that there is a PJLink protocol (http://pjlink.jbmia.or.jp/english/index.html) that is supported by those devices. A little more research turned up a library on GitHub for Visual Studio (C#) (https://github.com/uow-dmurrell/ProjectorControl). We took it and modified the LAMPHOURS property so we could read it publicly (only one lamp for now – but we only have projectors with one lamp). We modified the TEST project that comes with it to accept IP addresses as a parameter, changed the output to the PRTG XML format, and further made the output numeric. (We are not aware that any of this is copyright protected; if so, we apologize and will stop using those components.)

Result:

We added a PRTG EXE/Script Advanced (XML) sensor with the IP of the device as a parameter and get the following information:

  • LampHours – we set ours to max. 4000 as error level
  • CoverStatus (0 OK / 1 ERROR)
  • FanStatus (0 OK / 1 ERROR)
  • FilterStatus (0 OK / 1 ERROR)
  • LampStatus (0 OK / 1 ERROR)
  • PowerStatus (0 off / 1 on = OK; 2 cool down / 3 warm up = WARNING (we don’t get alerts there); 4 unknown = ERROR)

This helps us keep as close an eye on the projectors as possible and be proactive when something is already reported as defective, or when the LampHours (limit adjustable, depending on manufacturer/model) come close to the threshold, so we are ready for it.
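For reference, PJLink class 1 is a simple line-based TCP protocol on port 4352: a query like "%1LAMP ?" is answered with "%1LAMP=1633 1" (usage hours and on/off state per lamp). A minimal parsing sketch for a single-lamp device, without any network I/O:

```python
# Sketch: parse a PJLink class-1 LAMP reply such as "%1LAMP=1633 1".
# Only the first lamp is considered, matching the modified library above.

def parse_lamp_reply(reply):
    """'%1LAMP=1633 1' -> (1633, 1); raises ValueError on PJLink errors."""
    prefix = "%1LAMP="
    if not reply.startswith(prefix):
        raise ValueError("unexpected reply: " + reply)
    body = reply[len(prefix):].strip()
    if body.startswith("ERR"):
        # e.g. ERR3 = unavailable at this time, ERR4 = projector failure
        raise ValueError("projector returned " + body)
    hours, status = body.split()[:2]
    return int(hours), int(status)
```

The parsed values map directly onto the LampHours and LampStatus channels above.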

Code changes:

ProjectorInfo.cs – added this:

Program.cs (or create your own project)

SQL Database backup monitoring

The following article was originally posted here by myself: https://kb.paessler.com/en/topic/79665-sql-database-backup-monitoring

SQL backups and their monitoring are among the most important things. We often talk about rather complex situations involving transaction log files and various other components.

Monitoring those things with the standard scripts cost us too many sensors and was not really effective.

In order to change this – here are two scripts that will be able to solve most of your issues – you find them both at the end of this posting:

  • SQLBackupDestinationCheck.vbs
  • SQL_Database_Full_Backups.sql

SQLBackupDestinationCheck.vbs: This is a VBS script that will return XML content to Paessler/PRTG in multiple channels while using one sensor.

It expects three parameters:

  • go through first level sub folders: 0 (no) or 1 (yes)
  • file extension to obey – any other extension will be ignored – in most cases: “bak”
  • Path – should mostly be an UNC path

It will return those channels:

  • Total file count: count of all files with this extension in all folders checked
  • Total folder count: count of all folders checked
  • Oldest file found in days: oldest file – value gives back age in days
  • Newest file found in days: newest file – value gives back age in days
  • Lowest files in folder count found: lowest count of files that have been found in one folder
  • Highest files in folder count found: max. files that have been found in one folder

This needs some explanation:
The script checks a path for files with a certain extension. Let's say you use SQL maintenance plans with the extension .BAK for the backup files, you do a daily backup and keep the files for 3 days to make sure they end up on tape, you use a sub-folder per database, and you have a total of 5 databases on this system. Now you need to configure error limits per channel – e.g.:

  • Total file count: lower limit: 10 files – upper limit: 20 files – during the backup you might have up to 20 files
  • Total folder count: 5 folders as both upper and lower limit – more/less than 5 would mean something changed
  • Oldest file found in days: lower limit 2 days – upper limit 4 days – older than 4 would mean the cleanup does not work
  • Newest file found in days: lower limit 0 days – upper limit 2 days – nothing newer (date issues?) and nothing older either
  • Lowest files in folder count found: lower limit: 2 – there should always be more than 2 .BAK files in any subfolder
  • Highest files in folder count found: upper limit: 4 – anything above would again mean the cleanup is not working right for one database

So – keep in mind – you can get fancier with WARNING and ERROR limits; the example above should help you understand what to do and get you started. The script will save you quite a few sensors and still keep a pretty close watch on the file-system side of SQL backups. Of course, you could use it for something other than SQL backups as well – but that was my main intent for this script.
SQL_Database_Full_Backups.sql
This file requests information about backups from SQL Server itself. It likely needs SQL Server 2005 or newer to work – and yes, I did post this in another PRTG KB thread, but I wanted to have the complete solution in this one post.

The script is executed on the master database against the SQL Server instance where the databases reside. You need to specify a parameter that will be set as “@MaxHours” – this should be something like 26 hours, so your SQL backups never exceed an age of 26 hours while still giving the backup some time to run. More might be necessary for bigger databases. If you do multiple full backups per day, set it to e.g. 2 hours or whatever your limit is.

You will get back 3 columns:

  • TotalAmountOfDatabases – total amount of databases on this server; this not only lets you watch whether anyone created/deleted a database on the server, it also gives you a good baseline in general
  • RecentlyBackupUpCount – how many databases have been backed up recently (full backup) within the specified time window
  • NOTRecentlyBackupUpCount – how many have not been backed up in the same time window

RecentlyBackupUpCount and NOTRecentlyBackupUpCount should always add up to TotalAmountOfDatabases – but that is not the point. More importantly, you might have both backed-up and not-backed-up databases. Set your error limits for all three columns accordingly – upper and lower limit – and you will see that the alert fires if you add a database, or if the SQL Agent service stays stopped so that it skips and misses a single backup…
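The @MaxHours logic can be sketched independently of T-SQL: given each database's last full-backup time, split the set at the cutoff. Column names follow the post; the input dictionary stands in for what the SQL query reads from msdb.

```python
# Sketch of the @MaxHours split: databases backed up within the window
# vs. not. A value of None means "never backed up".
from datetime import datetime, timedelta

def backup_counts(last_full_backups, max_hours, now):
    """last_full_backups: {db_name: datetime or None} -> the 3 columns."""
    cutoff = now - timedelta(hours=max_hours)
    recent = sum(1 for t in last_full_backups.values()
                 if t is not None and t >= cutoff)
    total = len(last_full_backups)
    return {"TotalAmountOfDatabases": total,
            "RecentlyBackupUpCount": recent,
            "NOTRecentlyBackupUpCount": total - recent}
```

By construction the two counts always add up to the total, which is exactly why limits on all three columns catch both missed backups and added/removed databases.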

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML
File: SQLBackupDestinationCheck.vbs

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\sql\mssql
File: SQL_Database_Full_Backups.sql

Auto-Cycle through URLs

Auto-Cycle through URLs

Our challenge was to have several Paessler/PRTG MAPs (www.paessler.com) cycling on a TV in the IT room. We did not want just one static MAP. This was originally posted by myself here: https://kb.paessler.com/en/topic/79668-prtg-maps-auto-cycle

To accomplish this, we created a simple HTML file with some JavaScript code that cycles through several URLs you can easily specify, with a timeout value per URL. Further, there is a company logo that is displayed while a MAP is loading; it fades out and reveals the MAP.

The HTML code including the JavaScript is below – here are some things I wanted to explain and share about it.

Line 6 – to the end: src=”bgpicture.png” – this can be replaced by any other file name; simply use a logo here that you want to see while the MAP loads – it will fade out.

Lines 11 – 21 – these lines hold the URLs in var Source=[] – add a line per URL you want to cycle through; each entry has the format below. Please MAKE SURE that the last URL entry is not followed by a comma “,”, otherwise the script might fail to cycle.

Entry format:

  • ‘URL’,timeout,showBGfading,’title’
  • URL in text-marks
  • timeout in seconds
  • show background picture/logo fading out – 0 (do not show) or 1 (show and fade)
  • title/description in text-marks

Example: ‘https://prtg.company.local/public/mapshow.htm?id=1111&mapid=ABCDEFGH-1234-ABCD-1234-123456789000′,60,1,’Network Map’,

This would mean:

  • URL = https://prtg.company.local/public/mapshow.htm?id=1111&mapid=ABCDEFGH-1234-ABCD-1234-123456789000
  • timeout = 60 seconds
  • bgpicture = 1 – start with BGPicture from the HTML code and let it fade out (fades the map in)
  • Title/Description = Network Map

We simply load the HTML file in the browser and display it as full screen – avoiding any browser title-bar etc.

Features:

  • you will see a timeout counter in the upper right – this shows you how much longer the current view will be displayed.
  • you will see a title/description in the upper left once the element has loaded – it slowly fades out (slower than the bgpicture) – you can use any text there, per URL
  • you might or might not see the BGPicture element fading out, depending on your URL configuration – we found it worked out nicely because we didn’t want to see a “…load map data…” message and wanted a smooth transition between the maps
  • we set timeouts per MAP of e.g. 60 seconds – so we a) cycle quickly enough and b) have enough time to look at the data shown to us
  • you can use the LEFT and RIGHT arrow keys on your keyboard to jump to the previous or next URL while the HTML file is running (if not randomized)
  • the UP/DOWN arrow keys show/hide a menu of all available links, which lets you click on a specific item in the list and show it directly – the list is always generated on the fly, which prepares for future adjustments like showing where you are right now…
  • added a PAUSE feature – press P to stop the cycle at any time
  • added randomization – if activated, the URLs are accessed in random order; if disabled, the script cycles through the URLs as defined
    • var bolRandomize=true;
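The next-URL selection described above can be sketched in a few lines: sequential wrap-around by default, random when the bolRandomize flag is on. Skipping the current entry in random mode is a choice of this sketch, not necessarily what the script does.

```python
# Sketch of the cycling logic: sequential wrap-around, or random
# selection when randomize is enabled (mirroring var bolRandomize).
import random

def next_index(current, total, randomize=False, rng=random):
    if total <= 1:
        return 0
    if randomize:
        # pick any entry other than the current one (sketch choice,
        # so the same map is not shown twice in a row)
        choice = rng.randrange(total - 1)
        return choice if choice < current else choice + 1
    return (current + 1) % total
```

The LEFT/RIGHT keys are simply the same function with a step of -1/+1 in sequential mode.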

For fun – or how to add a few Easter Eggs:

  • you can use any file (we use MP4s and GIFs) as well – our URL list is rather long, mostly cycling through the same URLs but every now and then briefly showing a little IT joke in between – of course this depends a bit on your company – anyway, I wanted to mention that we even like to do that for a short 5-second period.

Updated – December 2018: This is version 2.0 of the script. Updates are some minor bug fixes and mainly the ability to scroll forward and backward through the URLs using the left and right arrow keys on your keyboard. Additionally, the up/down keys show or hide a complete menu of all links that are cycled through. This then allows you to click on a specific link to show its content.

Updated – April 2019: Version 3.0 of the script now has a PAUSE feature and a randomization feature that you can enable/disable.

Notes as of May 2022: I did not change the script, but I wanted to make you all aware that you might run into issues with X-FRAME-OPTIONS set to SAMEORIGIN. This can be investigated using your browser's developer tools (F12); you should see script errors revealing the issue. Eventually it boils down to some pages not loading (e.g., https://www.google.com) because they do not allow themselves to be embedded. You can check whether the page offers special embed links/URLs, or try a proxy script that feeds the page to the iFrame. At this point I cannot offer a good working solution; the script was designed to load Paessler PRTG MAPS, and that still works just fine. Using the script beyond this purpose may or may not work, depending on the target page's settings and configuration.