Florian Rossmark

Office 365/Exchange Public Folders – find out if they are still in use

Public Folders in Microsoft Exchange are one of the most challenging areas an Exchange system administrator can face. Microsoft actually tried to get rid of them several times, and still they are around. From an administrative point of view they often grew wild, and I saw environments with a huge number of public folders where no one was able to tell whether they were still in use or not.

About a year ago I faced the same challenge and had to determine for a few thousand folders whether we could retire them or not. PowerShell commands seemed to be the best approach for this, but I soon found out it is actually not as easy as I hoped it would be. There is no direct command that would give me all I needed, and no easy way to determine whether a folder is in use. In particular, you can’t rely on attributes like ‘last accessed’ for this purpose, simply because they might show you data that does not really help determine whether the folder is still in use. The folder’s ‘last modified’ is not accurate either, since a simple mark-as-read action can modify this attribute as well.

Drilling further down into what I could use, I decided to look at when the newest object in the folder was created – so the last created object in the folder – whether it was an email folder or e.g. a calendar.

The script you will find below will export the following columns into a CSV file that you can process further in e.g. Microsoft Excel:

  • Public Folder Path
    • the path to this public folder
  • Mail Enabled
    • well – is direct email enabled on this folder, yes or no – this can actually be quite important and might need further review, or might require groups/distribution lists to be created instead
  • Mail address
    • helps to determine if this might be somewhere in use on a website etc.
  • Folder Class
    • the class in most cases is either an email folder or a calendar
  • Folder Size
    • total size of the folder – some folders might be really small, and this helps to determine whether you need to keep them or not
  • Number of Items
    • like size – this helps a lot to see if it is something to discard or not
  • Top 1 object creation time
    • if there are items in the folder, this is the newest created item – specifically, the date/time the item was created; as mentioned, ‘modified’ will not help you because a simple mark-as-read action through Outlook would already influence it – the creation time is the most accurate information I was able to find for this purpose

Now to the script(s). We are actually talking about two scripts here – simply because I developed this against an Office 365 Exchange system that required me to log on and load the PowerShell modules from Office 365. The script itself should work against an on-premises Exchange server as well.

Simply said, you need to create both script files in the same folder – the ConnectToO365.ps1 script that is called is just a central solution in a huge script folder and is called by each script if necessary. The top section of each script first determines whether there is an active session against the Office 365 environment and reuses it if possible, or calls the connect script to establish a new connection.
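A minimal sketch of the export logic – assuming an already established Exchange (Online) session with the public folder cmdlets loaded; cmdlet output details can vary between Exchange versions, so treat this as an outline rather than the full script:

# Sketch: enumerate all public folders and export the columns described above
$report = foreach ($pf in Get-PublicFolder "\" -Recurse -ResultSize Unlimited) {
    $stats = Get-PublicFolderStatistics -Identity $pf.Identity
    $mail  = if ($pf.MailEnabled) { (Get-MailPublicFolder -Identity $pf.Identity).PrimarySmtpAddress } else { "" }
    # newest created item - creation time is more reliable than 'last modified'
    $newest = Get-PublicFolderItemStatistics -Identity $pf.Identity |
              Sort-Object CreationTime -Descending | Select-Object -First 1
    [PSCustomObject]@{
        PublicFolderPath = $pf.Identity
        MailEnabled      = $pf.MailEnabled
        MailAddress      = $mail
        FolderClass      = $pf.FolderClass
        FolderSize       = $stats.TotalItemSize
        NumberOfItems    = $stats.ItemCount
        Top1CreationTime = $newest.CreationTime
    }
}
$report | Export-Csv -Path .\PublicFolderUsage.csv -NoTypeInformation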

 

Monitor DFS replication backlog between servers in PRTG

One of the challenges with DFS is to monitor the DFS replication backlog. There are various scripts out there to accomplish this. Unfortunately, I found nothing I really liked that gave me the simple insight I wanted.

The goal was simple – a script that monitors the backlog between two systems in both directions, meaning Server-A to Server-B and Server-B to Server-A. For both directions I wanted to see the number of files as well as their total size. I did not care about which DFS groups or DFS folders are affected in detail – the number of groups might change, and the number of folders will likely change rather frequently, so monitoring per group or even per folder would be hard to do efficiently. Monitoring the number of groups and folders alone has no real advantage either, because those only change when an administrator changes them.

Below you will find my script, which expects three parameters – the two server names and a limit integer value. The limit will not influence the XML response of the script; you could add the <text>$Response</text> and <text>$Response2</text> tags in lines 77 and 79 after the </unit> and before the </result> tag if you want – I removed them for now.

See the picture below for an example of how the result looks in PRTG.

Create the following script in C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML and make sure you have the C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Dfsr\ and the C:\Windows\SysWow64\WindowsPowerShell\v1.0\Modules\Dfsr\ folders – you might need to copy them over. If they are missing entirely, you might need to add the needed Windows Roles/Features or install the RSAT (Remote Server Administration Tools) on your system.
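For orientation, here is a condensed sketch of the two-way backlog logic in PRTG’s EXE/XML format. It assumes the Dfsr PowerShell module is available; the property holding the backlog file size is an assumption and may differ between module versions:

# Sketch: sum the DFSR backlog (file count and size) in both directions, output as PRTG XML
param([string]$ServerA, [string]$ServerB, [int]$Limit = 100)
Import-Module Dfsr

function Get-BacklogTotals([string]$src, [string]$dst) {
    $files = 0; $bytes = 0
    foreach ($g in Get-DfsReplicationGroup) {
        foreach ($f in Get-DfsReplicatedFolder -GroupName $g.GroupName) {
            $b = Get-DfsrBacklog -GroupName $g.GroupName -FolderName $f.FolderName `
                 -SourceComputerName $src -DestinationComputerName $dst -ErrorAction SilentlyContinue
            $files += @($b).Count
            $bytes += ($b | Measure-Object -Property Size -Sum).Sum   # 'Size' property assumed
        }
    }
    $files, $bytes
}

$abFiles, $abBytes = Get-BacklogTotals $ServerA $ServerB
$baFiles, $baBytes = Get-BacklogTotals $ServerB $ServerA

@"
<prtg>
<result><channel>$ServerA to $ServerB files</channel><value>$abFiles</value><LimitMaxError>$Limit</LimitMaxError><LimitMode>1</LimitMode></result>
<result><channel>$ServerA to $ServerB size</channel><value>$abBytes</value><unit>BytesDisk</unit></result>
<result><channel>$ServerB to $ServerA files</channel><value>$baFiles</value><LimitMaxError>$Limit</LimitMaxError><LimitMode>1</LimitMode></result>
<result><channel>$ServerB to $ServerA size</channel><value>$baBytes</value><unit>BytesDisk</unit></result>
</prtg>
"@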

Prevent ScreenSaver coming up with a PowerShell script

In our daily business we often face the issue that a GPO enforces a screensaver after a certain amount of time. This can become very annoying and can actually even be a problem if you are remoted in to an end-user system and don’t know their password. After doing quite some research I found out that PowerShell is actually able to help here, and I wrote a script to prevent the screensaver from coming up.

First I thought, let’s see if I can control the mouse cursor, since I assumed it would be less invasive. This is actually possible – but interestingly it did not have the expected effect, and the screensaver kept coming up. So I went with keystrokes, but of course I was worried about which key would be safe to send. Doing some research on Microsoft websites (here the link), I found F16 – testing around with it, it was the least invasive key to send periodically to a system – it does not even exist on a regular keyboard and can only be simulated with a key combination. To simulate F16 you need to press SHIFT + F4 – there might be a Windows 10 (maybe even earlier) combination of WINDOWS + SHIFT + F4, respectively WINDOWS + F16, that would cause a shutdown. Once I found out about that, I decided to adjust the script to F17, which seemed to bypass even that small chance of an issue that would be more problematic than the screensaver itself – you sure don’t want the system to shut down :-).

The script you find below can simply be executed. It has a setting $minutes that you could add as a parameter and thereby adjust when you start it – by default it uses 9999, which is a pretty long time. To be clear – this is actually not a one-minute interval; the script fires every 30 seconds, so the value ends up as half minutes. Why? Simple – Windows has a minimum screensaver timeout of 1 minute. If the script fired in a one-minute interval, there would be a theoretical chance the screensaver would still win.

This is furthermore not pure PowerShell code – well, it is, but the actual keystroke is sent via a Windows Scripting Host WSH / WScript SendKeys command. The PowerShell only wraps the command. It has the nice habit of keeping a PowerShell window open that you can simply close to end the script. If you did the same in WSH, you would need to execute the script more manually with CSCRIPT rather than just double-clicking the file, which would execute it via WSCRIPT instead, causing a hidden window / process that you would need to identify in the Task Manager to kill. Therefore, PowerShell was the best choice in the end to accomplish the task.
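A minimal sketch of the loop – note the key to send is an assumption here: SendKeys officially documents F1 through F16, so test what your environment accepts and what your foreground applications react to (see also the update below):

# Sketch: send a harmless keystroke every 30 seconds to keep the screensaver away
param([int]$minutes = 9999, [string]$key = "{F17}")   # fall back to e.g. "{F10}" if needed

$shell = New-Object -ComObject WScript.Shell
for ($i = 0; $i -lt ($minutes * 2); $i++) {   # two sends per minute, see above
    $shell.SendKeys($key)
    Start-Sleep -Seconds 30
}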

Additionally, you could use the more advanced script below that controls the screensaver only during a defined time window. This is more usable in certain situations, because it automatically allows the screensaver to come up outside the defined time window and thereby further minimizes the exposure of the system. It is also more effective than completely disabling any screensaver GPO settings, because it is more specific and adjustable.

Update: As of 10/202 I updated the script below from F13 to F10, as this works better in most situations. Be aware, it all depends on what your foreground windows will react to. Make sure the keystroke you use does not cause you any harm.

Automate Outlook signature roll outs while pulling the information from Active Directory / LDAP

The Outlook signature script you will find below is a bit more complicated than most other scripts I post, because you might need to adjust a bit more. I used it for several years (as you can see in the script when it comes to Outlook versions and registry keys) in many networks, and in most cases it worked just flawlessly once it was set up.

What does this script do exactly?

Good question – every time a user logs on, it writes a signature file to their profile. The information in the file is pulled from Active Directory – where you are able to e.g. change the phone number, the cell phone number or e.g. the last name because the employee married. The signature file will update automatically. Even more important is the onboarding process: you can actually forget about setting up the signature. And if you don’t use roaming profiles – no worries – the signature will auto-create everywhere if you call the script via a login script / logon script. In theory you could call it via a GPO as well.

What you need to do – simply said

  1. get an approved example signature from HR or marketing or whoever can provide you the signature, and actually put it in your Outlook as a signature.
  2. then replace names and phone numbers with variables (I will come to that) and save it in your Outlook.
  3. go to your %appdata%\Microsoft\Signatures folder and grab the three files (.txt / .htm / .rtf) and the sub-folder with the name of the signature you saved
  4. copy them to your \\mydomain\netlogon\signatures folder (you might need to create it – any other location would need some adjustment in the script)
  5. you will need to open all three file formats (.txt / .htm / .rtf) in a regular plain-text editor like NOTEPAD.EXE (Windows) or Notepad++
  6. make sure the variables are a complete word and not somehow divided or have characters replaced – if something is not how it should be, adjust it and save the files
  7. Copy the OutlookSignatures.vbs file to the same path and adjust it – especially the header section – with your domain information, then execute the script in a CMD / command prompt via \\mydomain\netlogon\signatures\OutlookSignatures.vbs “my signature” 1 1
  8. Now go back to your Outlook (probably close and re-open it) and create a new email – you should see that your signature was auto-generated and the variables have been replaced with your user-specific values from Active Directory / LDAP.
  9. you should switch your email format through all three formats – HTML / Plain Text / RTF – and check the signature in each, to make sure all three files were generated correctly
  10. If something is not as expected, check the source-signature files and their variables and if needed adjust the variable-replacement section of the script

What you gain from this

The signatures will auto-generate, and you actually have a cheap way to roll out corporate-identity-conform signatures without spending a lot of money on tools that might provide an easier-to-use configuration and some fancier features – but if you don’t need those features and can live with a more technical approach, you have a cheap way to implement this.

The variables

The script will pull certain properties / attributes of the currently logged-on user object from Active Directory – those are configured in line 33, and if you need more you will need to add them there.

Between lines 145 and 181 you see that the script replaces place-holder variables in the source files (.TXT / .HTM / .RTF) with the information pulled from Active Directory – all those place-holders in your source files need to look like @@AnyName@@ – this makes sure the script has a unique definition of what to replace.

Example:
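A sketch of the relevant line, reconstructed from the breakdown below (variable and object names are assumptions):

strCurrentLine = Replace(strCurrentLine, "@@GivenName@@", objUser.givenName)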

This does nothing else than:

  1. replace the variable strCurrentLine with
    1. search in variable strCurrentLine
    2. for the value @@GivenName@@
    3. replace it with the LDAP attribute “givenName”

There are a few special examples in the script, e.g. putting an HTML-conform line-break after the job title in the .HTM file only (I had situations where only the HTML signature did not do a line-break, or the text version alone was not doing something, etc.) – in the end this allows you to adjust something in just one of the three signature formats.

Another example in the script writes an additional line with the cell phone / mobile number if available. If the number is set in the user object, a new line will be created depending on the file format – if the number is not set, the search variable will be removed from the signature (you of course don’t want it there) instead of writing the information out. In this case we add a “Cell: ” prefix before the number so the signature clearly indicates what this number is about. Simply said, since we replace a variable and not a whole line, we have to write out more than just the number – in this case we want to add text.

Can you execute the script with various signatures per department?

Yes – you actually can – but you would need to do this with an additional script and e.g. IFMEMBER from Microsoft, or group-based GPOs, etc.

Can you create more than one signature?

Yes – you can execute the script in various ways – you can roll out a NEW MAIL signature (full-length signature), a second version for REPLIES (short signature), and additional signatures the user can choose from that aren’t set as either the NEW MAIL or REPLY signature. The script header explains how to call the script, what parameters it expects and how to set them.

Feel free to use the script below and adjust it to your needs. I know some of the stuff like your domain name could be detected automatically instead of hard-coding it in the script – even the reg-keys could be handled in a more advanced way – feel free to do so – but in the end it is not that much work and it does its job either way.

Debugging the script

At line 79 the statement “On Error Resume Next” suppresses errors that might arise. This is good for production, so that the client/user sees as few messages as possible due to timeouts or special circumstances – but if you want to debug something, or while testing the script itself, please comment out that line so errors actually surface. They might not mean much in some cases, but they might also give you the hint you need to see what is going wrong.

Print Server backup script

Print servers need to be backed up, for two main reasons. One is that users heavily depend on printers, and a print server that is not working properly will cause immediate helpdesk tickets and unhappy users. The other is that installing a new driver – be it a new version, a new model or even an additional manufacturer – can cause other print drivers to act up or even stop working; many administrators know and fear that.

Windows Server actually allows you to back up the current print drivers, installed printers and their configuration. You can use this to migrate your printers or to back them up. Of course, you can simply depend on e.g. VMware snapshots, storage-level snapshots or other backups of your server. But you could also just export the whole print server configuration using the scripts below. They call the Windows API to back up the printers and store everything in a file that you can keep centrally. You don’t rely on snapshots or a full server backup alone for e.g. your SQL databases either, do you?

The script uses a .CMD file that executes the actual backup and sends an email report, using the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) since I already had it lying around – you could replace the mail send option with another SMTP send client, a VBS script, or just remove it completely. Additionally, there is a .VBS script that cleans up the target backup files depending on the age of the files in the specified directory.

All the parameters are explained and set in the top part of the .CMD file – I will therefore not explain them here again – you should not need to modify the scripts beyond that, but feel free to do so. Of course, you should create a scheduled task and execute the .CMD periodically. This can save you time and headache in case you have a malfunctioning print server. The restore can easily be done through the Print Management MMC that Windows provides, because the backup files are created using the same Windows APIs. Your end users will hopefully be happy that their printers are back to work in no time.
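The heart of such a backup is a single call to PrintBrm.exe, the engine behind the Print Management MMC’s export. A minimal sketch – the target share and retention are assumptions, adjust to your environment:

# Sketch: export the full print server configuration and prune old exports
$stamp  = Get-Date -Format "yyyy-MM-dd"
$folder = "\\backupserver\printbackups"   # assumed central share
$file   = Join-Path $folder "$env:COMPUTERNAME-$stamp.printerExport"

& "$env:windir\System32\spool\tools\PrintBrm.exe" -B -F $file

# keep 30 days of exports - mirrors what the .VBS cleanup does
Get-ChildItem $folder -Filter *.printerExport |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item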

Automate your SUS clean up

Many companies rely on WSUS, respectively SUS, services from Microsoft – aka Windows Server Update Services – as the internal source and control of their update deployment to clients and servers within their network.

One of the big challenges for IT is to keep them clean and performant. The cleanup assistant in the SUS management console tends to run forever, and in any case means manual labor over and over again.

Below are two scripts – a CMD script that needs to be adjusted with parameters, and a PowerShell script that will be called with those parameters. The scripts actually call the same API as the MMC assistant does, except that this can be performed automatically via a scheduled task in Windows.

It helps you to keep your SUS slim and more performant.
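On Server 2012 or newer, the same cleanup APIs can also be reached through the UpdateServices module – a minimal sketch suitable for a scheduled task (a rough equivalent of what the scripts below automate):

# Sketch: run the cleanup wizard's steps from PowerShell
Import-Module UpdateServices
Get-WsusServer |   # local WSUS server; use -Name/-PortNumber for a remote one
    Invoke-WsusServerCleanup -CleanupObsoleteComputers -CleanupObsoleteUpdates `
        -CleanupUnneededContentFiles -CompressUpdates `
        -DeclineExpiredUpdates -DeclineSupersededUpdates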

In any case – I highly recommend not blindly enabling all categories, but rather limiting them to the ones you actually have in place – and, once you have reached a certain patch level, even actively declining updates you will never need again (keep in mind, newly rolled-out systems might still need older updates – but you could possibly refresh your base images or rely on Microsoft update services / online updates for those cases).

The combination of making updates obsolete and actually running a cleanup periodically will improve your SUS server performance.

As for the parameters, those are explained in the CMD script header – therefore I will not explain them here again.

Solution for VSS exceptions with VMware guests / VM tools

Today’s blog is about “Solution for VSS exceptions with VMware guests / VM tools” and was initially posted by myself here: https://vox.veritas.com/t5/Backup-Exec/Solution-for-VSS-exceptions-with-VMware-guests-VM-tools/td-p/829072 – it actually became a KB entry for a (by now older) version of Veritas Backup Exec – but I did not want to leave it out of my blog. Here is the link to the KB article: https://www.veritas.com/docs/000009506

Here is the article I wrote – starting with a description of the actual issue:

We have been experiencing VSS issues with VMware guests in regards to the installed Backup Exec Agent and a previously installed VMware Tools VSS option.

Uninstalling the VMware Tools VSS option in various ways including restarts did not fix our issues. If you search the internet for solutions, you find many attempts but no real solution or explanation.

One of our admins spent several hours with Veritas support without a solution and was about to escalate the issue with them when we found the root cause and could actually fix our issues.

First the steps to solve this:

  1. Uninstall the VMware Tools VSS option (no restart will be required)
  2. Make sure the VMware VSS service was deleted
    1. If this is not the case, you might need to do so manually and remove additional DLLs etc. as well as restart the system, but this is independent from this solution
  3. You might have already done steps 1 and 2 but still get VSS exceptions from the backup saying you have more than one VSS agent installed:
    1. V-79-8192-38331 – The backup has detected that both the VMware VSS Provider and the Symantec VSS Provider have been installed on the virtual machine ‘hostname’. However, only one VSS Provider can be used on a virtual machine. You must uninstall the VMware VSS Provider.
    2. Now you wonder what causes this and you get stuck
    3. You could uninstall the Veritas/Symantec Backup Exec Agent and only back the system up per VMDK
    4. You would lose the GRT / granular backup / restore capabilities
  4. Check your registry for the following reg key
    1. HKLM\System\CurrentControlSet\Services\BeVssProviderConflict
    2. If this key exists, but your VMware VSS provider is uninstalled, you need to follow up with step 5
  5. Open Notepad as administrator
  6. Open this file in Notepad
    1. C:\ProgramData\VMware\VMware Tools\manifest.txt
  7. Search for the following two entries:
    1. Vcbprovider_2003.installed
    2. vcbprovider.installed
  8. Make sure both of them are set to FALSE, most likely one of them is TRUE
  9. Run a test backup
    1. This test backup now should not show the exception anymore
    2. The registry key should vanish (refresh/press F5) without you taking action
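To re-check steps 4 and 7/8 quickly, here is a small verification sketch (paths taken from the steps above):

# Sketch: check the conflict marker and the two manifest entries
if (Test-Path "HKLM:\SYSTEM\CurrentControlSet\Services\BeVssProviderConflict") {
    Write-Warning "BeVssProviderConflict key present - backups will raise V-79-8192-38331"
}
Select-String -Path "C:\ProgramData\VMware\VMware Tools\manifest.txt" `
              -Pattern "vcbprovider_2003\.installed|vcbprovider\.installed"
# both entries must read FALSE after the VMware VSS provider is uninstalled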

So what happened?

You uninstalled the VMware Tools VSS provider, but this manifest file did not get updated. We could actually see that sometimes it does get updated and sometimes it does not. This seems to be some kind of issue with the VMware Tools uninstaller/installer.

But why this manifest.txt file?

As we found out, there are scripts that get executed by Symantec/Veritas Backup Exec before the backup. You might find them in two locations, and it seems to depend a bit on the Windows version which script is executed (at which location). You could edit them both and just undo the checks in the scripts, but this wouldn’t be correct. It is more correct to update the manifest.txt file. If you want to, you can check the date/time of the manifest.txt file before you change it – you might see it was not updated when you uninstalled the VMware Tools VSS provider (assuming you only did this and no additional installs/uninstalls within the VMware Tools; please note as well that this is only true while you still experienced those issues).

Now, back to those scripts, you find them here:

  • C:\Windows
  • C:\Program Files\Symantec\Backup Exec\RAWS\VSS Provider

The name of the script that matters:

  • pre-freeze-script.bat

This script checks several DLLs, registry entries, paths and – on Windows 2008 and newer – the ProgramData path for this specific manifest.txt file and the two entries mentioned.

Once you uninstall the VMware VSS provider and the file does not get updated, you might see this issue and wonder how to solve it. The solution is to simply update the file to mirror the uninstallation of the VMware VSS provider (vcbprovider). We double-checked this with several installations: if the file actually gets updated, those two values are set to FALSE; if it doesn’t, at least one of the values remains TRUE, which causes the pre-freeze-script.bat to write the registry key mentioned earlier and therefore causes the exceptions in the backup.

If you still have the same issues after updating the manifest.txt, simply check all the DLLs that are mentioned in the script and make sure they don’t exist. You might also consider manually deleting the registry key (it seems to be just a dummy key) to make sure there is no issue that prevents the script from deleting it. Make sure it does not re-appear after a backup! Otherwise you might still have some DLLs left on your system that cause the script to re-create the registry key.

Hope this helps a few of you out there. This was an ongoing issue for a while, and I came across those issues many times ever since Windows 2008. This applies to Windows 2008 R2, Windows 2012, Windows 2012 R2 and pretty surely Windows 2016 as well.

It helped us get rid of those issues completely, without even needing a single restart of the guest VM (removing the VMware VSS provider did not require a restart).

Password expiration notifications for end users

Today I wanted to share a script that allows you to inform your users per email that their password will expire, or is even already expired, and reminds them about your password policies, like complex passwords and how to choose a password. This is a simple VBScript and can easily be adjusted. The email will be generated from a file, in this case an HTML file that you provide. You can adjust the content of this HTML file as you need it. There are surely many commercial solutions out there that can do more than this script, but if you want to save the money and are satisfied with the provided options, this can surely be a good alternative.

Let me mention one thing about passwords first – most of us live with the usual policy that passwords should be changed periodically and need a certain length and complexity. We all live with the daily help-desk calls about forgotten passwords, not-changed passwords (pretty much why I wrote the script) and so on. What changed was a new recommendation in late 2017 or early 2018, based on statistics and data, that now says: yes – complex passwords and certain lengths – but do not enforce periodic changes, because that actually results in less secure passwords, as users tend to only change a number, or are more likely to write passwords down, compromising the whole attempt to secure the system.

Anyways – the next few lines will explain the parameters you can adjust in the top section of the VBS script – further below I will post the script and an example HTML file so you can start right away.

The options are between lines 7 and 61 – don’t be scared – most of them are pretty simple to understand and are actually explained in the script itself. You should not need to modify anything outside those lines.

About the parameter naming convention and possible values:

  • starting with str as strings – those expect text-markers and alphanumeric values “text”
  • starting with int as integer values – those are direct numeric values – e.g. 123
  • starting with bol are boolean values – those can be either TRUE or FALSE – meaning on or off

Here are the options you can set:

  • strSMTPServer: SMTP mail server DNS name or IP address
  • intSMTPServerPort: SMTP mail server port – normally 25
  • strFrom: SMTP mail from address
  • strToAdmin: SMTP mail to address for administrator emails
  • strAdminMailSubject: subject for mail to administrators
  • strUserMailSubjectExpired: subject for mails to user when password is expired
  • strUserMailSubjectWillExpire: subject for mail to user when password will expire – the exact word REPLACEWITHDAYS will be replaced by the days left value so mention it in the subject line if you want to see the value there
  • strBodyURL: URL or full file-path (HTML file path e.g. file://) to import for body, the entire content of this URL/FILE will be imported to the body of the email and should explain ways how to change the password
  • strAttachment: full file-path to an attachment for the email to the users / leave empty if no attachment
  • strLDAPSortColumn: per default: pwdLastSet / sort column for LDAP query
  • intStartWithPWexpiresInDays: if the password expires in N days or less, the script will inform the user – keep in mind: if you run the script daily, those users will get an email every day once their password expires in less than the indicated days. 5 is surely a good start.
  • bolIgnoreDisabledAccounts: Disabled accounts should always be ignored
  • bolInformAdminAboutPWexpires: this will inform the admin about expiring passwords
  • bolInformAdminAboutPWisExpired: this will inform the admin about accounts with expired passwords
  • bolInformAdminAboutPWneverExpires: this will inform the admin about accounts with password set to never expire
  • bolInformAdminAboutUserCantChangePW: this will inform the admin about users who are not allowed to change their password
  • bolInformAdminAboutAccountDisabled: this will inform the admin about disabled accounts found – this would have been done in ADS by an administrator
  • bolInformAdminAboutExpiredUserAccount: this will inform the admin if the user account has an expiration date and the account is expired
  • bolInformAdminAboutAccountWithoutEMail: this will inform the admin about accounts without a set email address
  • bolInformAdminAboutStillGoodPasswords: this will inform the admin about users/passwords that are still valid
  • bolInformAdminAboutIgnoredUsersExcludedByGroup: this will inform the admin about users that have been ignored by the strGroupsExclude filter
  • Please Note: the status account locked will not be checked, this should be corrected automatically by the default security GPO instead (will be in most cases by default)
  • strSearchOUs: Filter Priority 1 – only users in those OU paths will be processed. Use an LDAP DN like “OU=Folder,OU=Folder,DC=Domain,DC=local”; you do not need to include the DC=Domain,DC=local – the script will add this information if necessary. Use | (pipe) if you want to add more than one LDAP DN path. Leave empty (“”) to disable this filter
  • strGroupsExclude: Filter Priority 2 – if the user object is still not excluded, this group exclude filter will be applied. If the user is a member of one of those groups (if multiple groups are defined), they will be ignored. Use | (pipe) if you want to add more than one group name. Leave empty (“”) to disable this filter. Example: “Group Number1|GroupNumber2”
  • strGroupsInclude: Filter Priority 3 – if the user object is still not excluded, this group include filter will be applied. The user has to be a member of one of those groups (if multiple groups are defined). Use | (pipe) if you want to add more than one group name. Leave empty (“”) to disable this filter. Example: “Group Number1|GroupNumber2”
  • bolDebug: set TRUE for script-output, highly recommended to execute the Script in CMD with CSCRIPT <ScriptName> so you see it in a command window instead of dialog boxes.
  • bolAttachDebugToAdminMail: the debug output will be attached to the admin-mail (independent from bolDebug)
  • bolTestDebugOutputToConsoleOnly: this will disable the mail.send – only output to the CMD will be generated, please enable bolDebug
  • bolRedirectMailToAdmin: this will redirect all mails to the admin, instead of sending them to the user – the subject line will include the user-mail address in this case – this allows you to do a real test and actually see what would be send out to whom – without actually sending the emails to the end user
  • bolAdminMailOnly: this will send the admin-mail only, no user mail will be generated

As always – feel free to reach out to me if you have any questions or comments.

Script to remove RemoteApp and Desktop Connections

RADC or RemoteApp and Desktop Connections are very powerful in combination with Windows 7 or newer. You can actually have Terminal Server or RDS / Remote Desktop Server applications in the user’s start menu and connect to them as seamless-window applications.

Windows 7 made it challenging to implement those applications at large scale; for this sole purpose you had to use a PowerShell script that imported a WCX file. Windows 8 and especially Windows 10 can do this via GPO nowadays.

The GPO settings allow one RDS farm to be added, and they will of course remove the RDS farm if the GPO is changed/removed.

But what about those Windows 7 clients that are still out there, and those cases where you have other RDS / RADC connections that you want to delete, e.g. manually created ones? I just came across this scenario and wanted to share the script I wrote. I created two files in order to execute it simply via GPO as CScript, to avoid any dialog boxes coming up.

The .CMD executes the .VBS and, of course, expects it in the same directory. In the .VBS you will need to change the 5th line, as indicated. Everything else you can leave as is. Of course, this script will only delete the specified connection. You could define the line 5 parameter and change line 33 from

to the following line

This would result in deleting everything except the defined connection and therefore perform a clean-up. In theory you could then put an empty string in line 5 and just clean up everything.

As always, I hope some of you find this helpful.

Script based SQL Express backups

SQL Express is widely used but has a huge downside: there is no SQL Agent available. Even Windows-internal databases, especially WSUS / Windows Update Services / Microsoft Update Services, are in the end SQL Express-like databases that do not have a SQL Agent.

Now, you can have central SQL Servers with Agents and have the databases backed up there – and I recommend doing so if possible. But for the many times this is not possible, you will need to find another way to create those nice little .BAK files for SQL-internal backups, aka SQL maintenance plan backups. To work around this issue, I once wrote a script that automates this for each database found on a specific SQL server. It creates the backups via SQLCMD commands and even cleans up obsolete files (files older than x days), almost like SQL maintenance plans do it.

The script is divided into a .CMD file that executes the actual backup and where you set the configuration/parameters, and a .VBS file that is controlled by the .CMD script and performs the backup cleanup. In the end you can have the .CMD send an email report – I used the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) for this since I already had it lying around – you could replace the mail send option with another SMTP send client, a VBS script, or just remove it completely.
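The core of the approach is one sqlcmd call per database – a minimal sketch; instance name, target folder and retention are assumptions:

# Sketch: full-backup every database via sqlcmd, then prune old .BAK files
$instance = ".\SQLEXPRESS"
$target   = "D:\SQLBackups"
$keepDays = 3

$dbs = sqlcmd -S $instance -E -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM sys.databases WHERE name <> 'tempdb'" |
       Where-Object { $_ }
foreach ($db in $dbs) {
    $file = Join-Path $target ("{0}_{1}.bak" -f $db, (Get-Date -Format yyyyMMdd_HHmmss))
    sqlcmd -S $instance -E -Q "BACKUP DATABASE [$db] TO DISK = N'$file' WITH INIT"
}

# the clean-up part the .VBS handles in the solution below
Get-ChildItem $target -Filter *.bak |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$keepDays) } |
    Remove-Item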

Adjusting the settings / parameter:

This is all done in the SQLBACKUP.CMD file – the header section pretty much explains everything you need to know, from SQL server, SQL user and password, to mail server and recipients.

If you want to execute the SQL backups as the Windows user that is executing the script, you need to exchange the REM (remarks) for the following two lines further down in the scripts. I apologize for the inconvenience – this is an old script I never updated to have those settings in the header (more automated); I always just changed the lines.

Everything else should be rather easy. Of course, you will need sufficient access rights to the SQL databases and your destination backup folder. The Task Scheduler might work best if you execute the script with “cmd /c c:\scripts\sqlbackup.cmd” (change the path as you need it) and set the working directory / start-in folder right. It might help to execute the task with elevated rights etc. – all depending on your system’s configuration.

Below are the two scripts – I hope this helps some of you. The generated .BAK files can simply be restored in SQL services via the GUI, because they are native SQL backup files.

Join systems to a domain and create KeePass server entries for local admins

Please note – this script was updated – you find the updated post here.

One of the challenges in most daily IT operations is the onboarding of workstations and servers (respectively the domain join). Over the years I came across and tried many ways to accomplish this. Today I wanted to share a script and solution others might find helpful, but first let’s get down to some theory and background.

The goals and challenges:

  • simple domain join after a system was imaged
    • this is in theory possible in a fully automated process via various imaging solutions – I found that WDS (Microsoft Windows Deployment Services) is in most cases the easiest way to accomplish this, while keeping the possibility to use it in consulting for various clients, in enterprise for various departments, etc. Since Windows 10 came into the equation, some of the automation with WDS became more challenging – so keeping it simple with some additional manual labor is often the easiest way, and to simplify the process a PowerShell script became a perfect solution.
  • systems should have a local admin account (not Administrator / SID 500, which should remain disabled) with an individual password
    • typing this manually, you always risk that the password is misspelled either in your password database or on the actual operating system
    • if you think it is a good idea to have the same password on all your clients, I actually suggest you do some security-related research!

The PowerShell script below will do the following for you:

  1. Ask for the name of the system (this will change the hostname/computername)
  2. Ask for credentials for the KeePass/Pleasant Password Server
  3. Ask for credentials to join the system to the domain
  4. Create a local admin user account on the system
  5. Generate a password for this account
  6. Check if there is an existing KeePass/Pleasant Password Server entry for this system
  7. If not – it will proceed and create an entry with the machine name, username, password and various additional information like
    1. manufacturer
    2. model
    3. serial number / service tag
    4. UEFI BIOS Windows license key
    5. MAC addresses of all network cards Windows knows about
  8. And finally it will join the domain and put the system right away into the defined OU

The whole script is only an example – you don’t have to use a KeePass/Pleasant Password Server, nor is the script perfect for every situation – you can take it and modify it as you need it – point it to various IT asset databases or let the user choose from predefined OUs, etc. – adjust it as you need it – in general it is a very useful baseline and I wanted to share it.

One of the challenges is to execute the script as administrator (elevated rights) and to bypass the script execution restrictions without compromising them in a default image, like disabling this important security feature in the image itself. To accomplish this, a simple CMD script actually executes the PowerShell script. A CMD script can be right-clicked and executed as administrator to gain elevated rights. This is, as of today, not possible by default with PowerShell scripts (.ps1).

Create the following two files “Execute-DomainJoin.cmd” and “Execute-DomainJoin.ps1” and save them in the same directory or e.g. a portable flash drive. Adjust the PowerShell script so it connects to your domain and local systems.
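To illustrate the pair of files, a highly condensed sketch of the essential steps – domain, OU and the password-server part are placeholders; the walkthrough below references the real script’s line numbers:

# Execute-DomainJoin.ps1 (sketch) - the .cmd wrapper essentially runs:
#   powershell -NoProfile -ExecutionPolicy Bypass -File "%~dp0Execute-DomainJoin.ps1"
$ComputerName = Read-Host "New computer name"
$domainCred   = Get-Credential -Message "Domain join credentials"
$localAdmin   = "${ComputerName}_Admin"   # naming convention, see walkthrough below

# ... generate a password, create the local admin, write the password-server entry ...

# join, rename and place the machine in the target OU; restarts automatically
Add-Computer -NewName $ComputerName -DomainName "mydomain.local" `
    -OUPath "OU=Workstations,DC=mydomain,DC=local" -Credential $domainCred -Restart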

Please note – this script was updated – you find the updated post here.

Explaining, adjusting and guiding you through the PowerShell script

It is important that you understand the script so you can make adjustments to it. I will try to explain everything that is important and reference some line-numbers while doing so.

  1. Lines 1-30 are just a general introduction and show some generic information
  2. Lines 31-76 hold some functions to generate a password, to bypass some certificate issues etc
    1. Lines 35-38 are worth taking a look at: these hold all the characters of the four categories that will be used to generate a password. Characters that are hard to read in some fonts, and other characters that might cause issues, are already excluded – of course, adjust especially line 38 to your preferences and add more symbols or remove what you don’t want to use
  3. Lines 77-89 are just informational
  4. Lines 90-96 expect some user-input
    1. new computername
    2. get credentials for the domain join (admin)
      1. the script will not validate the credentials, in theory this could be done but I never found it that important
    3. get credentials to read/write on the password database server (often not the actual admin-credentials, therefor I separated those two)
      1. the script will not validate the credentials, in theory this could be done but I never found it that important
    4. the local admin username that will be created
      1. $localAdminUser = $("$ComputerName" + "_Admin")
      2. the above line will create a hostname_admin account – you can adjust this to your preferences
    5. 94-95 will generate a password and encrypt it so it can be used to create the local account
  5. Lines 97-103 are just informational
  6. Lines 104-216 – this is actually the whole password server communication and entry check and generation
    1. 104-115 those lines gather various information from your current system, like serial number, UEFI Windows keys, etc. – you can keep them as is
    2. 116 – please enter the URL to your password server here 
    3. 117 – here you need to enter the folder on your password server where the generated credentials are going to be put
    4. 118 – this is the subject of the entry that will be generated – adjust this to your preferences
    5. 119-120 – those are username/password for the entry – you should leave this as is
    6. 121-134 – those lines are the details in your password server entry – adjust them to your likes
    7. 135-165 – this actually will execute the following on the REST API on your password server
      1. connect to it
      2. check if an entry with the same username already exists
    8. 166-189 – this will raise an alert that this user already exists on your password server – 189 will actually exit the whole script
    9. 190-216 – this block will write to the password server – because it did not find an entry with the new username
  7. Lines 217-241 show the newly created username and password – the script actually suggests you compare the entries on your password server to the information shown, to make sure everything is correct
  8. Lines 242-251 will create the new local admin account on the system and set the password
  9. Lines 252-267 are informational
  10. Line 268 will execute the actual domain join
    1. please adjust the -Domain and the -OUPath parameter to your specific needs
    2. note that the command will automatically restart the system
  11. Lines 269-282 are informational – if anything goes wrong, those lines are shown and help you take further steps after a failed domain join – in most cases those suggestions will help – in the end, the error output of the domain join command (line 268) indicates what went wrong. The restart of the system would more or less bypass this message in the end

If you have any questions, feel free to reach out to me. The script could be cleaned up more – but I wanted to provide a working version of it – so I just did a quick clean-up of some special stuff and posted it here. Personally I like things a bit more structured, but as said – this is just a general example.

Please note – this script was updated – you find the updated post here.

This script is also mentioned on the API Examples page on the Pleasant Solutions web site here.

Solarwinds Web Helpdesk – Slack alerts

This was originally posted here by myself: https://thwack.solarwinds.com/thread/114863

Solarwinds WebHelpDesk is very powerful, but for those who use Slack as their communication and alerting platform, there is still no integration.

As an IT team, we struggled a bit with keeping up with the immense flow of emails and filtering them, with being really proactive on new tickets (first response time), and with realizing we got new tickets assigned or that a user / client wrote a new note.

To overcome those challenges, and because we all have Slack on all our systems from workstation to smartphone, we decided to integrate this. Since I spent a bit of time on those scripts and thought they might be helpful for others as well, I am sharing them here now and explain how to implement this.

Please note: Those scripts are a version 1 – I am very aware that they could be further cleaned up and simplified.. but I wanted to share them already… bugs are possible as well…

Requirements:

  • You need Solarwinds WebHelpDesk
  • The scripts use the field “Pager” in “Techs” for the Slack username – put the Slack usernames of all your techs in this field – no @, just the name
  • The scripts assume you are executing them directly on the server that has the PostgreSQL database installed
  • The scripts assume the database user/pw is defaulted to WHD

How to implement them:

  • Create the 5x files as shown further below in e.g. “C:\Scripts\WHD” on the PostgreSQL server
  • Create a new Windows Task that starts the WebHelpDesk_SlackAlerts.cmd file every 30 minutes
    • this file actually executes the PowerShell scripts – it is just a work-around that bypasses any PowerShell script execution restrictions
  • download the PostgreSQL ODBC drivers from the following link – assuming you haven’t installed them on the system already
  • Edit the PRTGSlackWebHookNotificationPSv2.ps1
    • This file originally came from our PRTG installation and was already modified there. It was further modified for WHD alerts – I do not claim to have invented this script nor do I want to abuse any copyrights on it! Source: www.paessler.com for monitoring solutions
    • Line 41: adjust the URL for the FavIcon.ico to your external WebHelpDesk URL
  • Create a new WebHook-Application in your Slack Account
  • Edit the CheckNewTicketsFirstResponse.ps1
    • This script posts to a generic channel in our case – we want to see new tickets as a group – in case the assigned tech is currently unavailable and couldn’t touch it…
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – ticket_age_minutes = 88
      • this is a minute value – we alert on a Slack-Group “helpdesk” if there is a ticket older than 88 minutes – we fire the script every 30 minutes, so it could be up to two hours old..
    • Line 9 – $channel – adjust this to the Slack-Group channel you want to use for those alerts
    • Line 77 – adjust the URL to your external WebHelpdesk URL
    • Line 99 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketAssigned.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only he will see it
    • This will only fire if the Tech did not yet put a Tech-Note in the ticket after it was assigned to him
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes – the alert due to some variable time can not be older than 32 minutes by default…
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketNewClientNote.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only he will see it
    • This will only fire if the Tech did not yet put a Tech-Note in the ticket after the client / user did leave a comment or note
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes – the alert due to some variable time can not be older than 32 minutes by default…
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL

After that you should be all set.
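At their core, all three notification scripts end in the same kind of webhook call – a minimal sketch (webhook URL, channel and message are placeholders):

# Sketch: post a message to a Slack incoming webhook
$payload = @{
    channel  = "#helpdesk"
    username = "WebHelpDesk"
    text     = "Ticket 1234 is waiting for a first response since 88 minutes"
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://hooks.slack.com/services/XXX/YYY/ZZZ" `
                  -Method Post -Body $payload -ContentType "application/json"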

We have further integration into PRTG: we monitor the database for “new tickets older than 120 minutes” and look for the logfile “SlackLogErrorDetails.txt”, which indicates that a Slack notification did not go through – most likely due to special characters that the scripts should already take care of; in case it happens again, this file will appear. You can integrate that as well with your monitoring system, but this is beyond the scope of the simple notifications.

Other logfiles just show what was sent out and indicate that everything is working well.

Click here to download the scripts archive WebHelpdeskSlackFiles.zip

Monitoring Shadow-Copies with PRTG

This was originally posted here by myself: https://kb.paessler.com/en/topic/65026-monitor-shadow-copies-age#reply-247626

This is my solution for it – we monitor specific drives we enabled for shadow copies and wanted to see the number of shadows; the newest should be within x hours and the oldest should be at minimum n hours old – those limits can be configured rather easily.

The main issue is that we are talking about WMI classes that are only available in x64 if you use an x64 system – while PRTG executes sensors in x86, even though it is installed on x64. I played around a while and came up with this simple solution.

Parameters for the parser-script (the one you need to execute) are: %host C: %host D: etc.

Parser script, needs to be in the EXEXML directory: Name: Get-ShadowCopyStatsXMLx64parser.cmd

PS1 script, should be in the EXEXML directory – if not, adjust the path in the parser script: Name: Get-ShadowCopyStatsXML.ps1
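The essence of both files in one sketch – the parser’s only job is the x64 relaunch via the sysnative alias; the WMI classes and properties used are Win32_ShadowCopy / Win32_Volume (the sketch assumes at least one shadow copy exists):

# From a 32-bit process, %windir%\sysnative reaches the 64-bit PowerShell, e.g.:
#   %windir%\sysnative\WindowsPowerShell\v1.0\powershell.exe -File Get-ShadowCopyStatsXML.ps1 C:
param([string]$Drive = "C:")

$volume  = Get-WmiObject Win32_Volume -Filter "DriveLetter='$Drive'"
$shadows = @(Get-WmiObject Win32_ShadowCopy | Where-Object { $_.VolumeName -eq $volume.DeviceID })
$ages    = @($shadows | ForEach-Object {
    (New-TimeSpan -Start $_.ConvertToDateTime($_.InstallDate) -End (Get-Date)).TotalHours
})

"<prtg>"
"<result><channel>Shadow copies</channel><value>$($shadows.Count)</value></result>"
"<result><channel>Newest age hours</channel><value>{0:N0}</value></result>" -f ($ages | Measure-Object -Minimum).Minimum
"<result><channel>Oldest age hours</channel><value>{0:N0}</value></result>" -f ($ages | Measure-Object -Maximum).Maximum
"</prtg>"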

PS: yes, the PS1 could be further optimized, but it took me a while to find out that the main issue was the x86/x64 combination, which is not possible – see the Microsoft articles in MSDN/KB for more information; the Shadow-Copy WMI classes are only available in x64 on an x64 system.

Hope this helps others with the same challenge 🙂

PRTG – Veritas Backup Exec monitoring

This was originally posted by myself here: https://kb.paessler.com/en/topic/58233-symantec-backupexec-monitoring#reply-262024

We monitor backups in a somewhat more advanced way – I thought I should share this knowledge as well…

Single Job Monitoring: monitors a single job and its results – it allows you to configure the job name with a PRTG filter value in the SQL sensor. The results will include various values – the most notable are:

  • FinalJobStatus (as text)
  • TotalDataSizeBytes
  • TotalNumberOfDirectories
  • TotalNumberOfFiles
  • TotalRateMBMin

In order to implement this sensor, add a SQLv2 Sensor and configure like this:

  • Database: BEDB
  • Instance Name: BKUPEXEC (in most cases)
  • Use input parameter: specify the exact job name
  • DBNull = Error

Channels: Most channels are rather simple to configure; they are counters, SpeedDisk or BytesDisk – PRTG has those channel types integrated already. The special channel is FINALJOBSTATUS – in order to have this working, you will need the “backupexec.jobstatus.ovl” file in your %programfilesx86%\PRTG…\Lookups directory – see below for the file.

SQL Script for single job monitoring:

The backupexec.jobstatus.ovl file:

Another SQL script we use is the one below – it approaches the whole monitoring more as an overview. It still depends on the JobHistory table, meaning the job must have been running. In theory you could work around this and actually get information from the scheduler etc. – the script below is a pure example.

Finally, I wanted to mention what our real challenges are, for which we don’t yet have a really good solution: our backup runs FULL starting Friday evening; during the week we run incremental backups. The incremental backups are not as critical… so let’s focus on the weekends.

What happened every now and then was that e.g. only a few tapes were writable while others might still have been locked, or one of our libraries jammed, etc.

In the end, it means – we e.g. came in Monday morning and discovered that 50+ % of the backups did not run.

Now, the question is how you monitor this. There are about 150 jobs – they are stacked on each other. In theory I expect, let’s say, 5 running jobs, 0 completed and 145 pending – starting Friday night – and over the weekend these numbers will change constantly.

What I have not yet found is a good way to detect when Backup Exec is waiting for user interaction, like inserting tapes, an offline library, etc.

Nor can I tell PRTG: on Friday I expect e.g. 150 jobs pending, by Saturday 1 PM the number should be more like 75 jobs pending, by Sunday 6 AM it should be down to 50 pending, and by Sunday 8 PM it should be 0 pending and 150 successful.

This is very granular, making it hard to find a solution. The jobs in our case will not finish – they are within their weekend time window and will not be auto-cancelled; therefore, only manually looking into Backup Exec will tell you whether we are making progress or not.

It could be a solution to constantly check whether the total bytes backed up goes up – but this again is challenging; we would need to compare values over time. PRTG is, as far as I know, not directly able to do so, and this would mean we would need a temp file with values from the last check in some kind of script, or a database to compare to…

So far I did not come up with the ultimate solution – every now and then I think about it a little more.. but well, I am not there yet.

Advanced PRTG Slack notifications

This PRTG slack notification script was originally posted by myself here: https://kb.paessler.com/en/topic/66113-how-can-prtg-send-notifications-to-slack#reply-249147

This is our version of the script originally provided with PRTG by Paessler. We are using the #colorstatus as well to determine the current issue, and we actually use icons to show the status – this is working pretty well for us. Besides that, the whole notification was reformatted a bit so it is easier to read in the PRTG Slack notification messages.

Simple Slack example

It is easy to adjust and even to use additional statuses/color codes if needed.
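To give an idea of the color/icon mapping, a sketch only – the parameter names and webhook URL are placeholders, not the exact PRTG placeholders the real script uses:

# Sketch: map a sensor status to an icon and post a colored Slack attachment
param([string]$Status = "Down", [string]$Sensor = "Ping", [string]$Color = "#d71920")

$icon = switch ($Status) {
    "Up"      { ":white_check_mark:" }
    "Warning" { ":warning:" }
    default   { ":red_circle:" }
}
$payload = @{
    text        = "$icon *$Sensor* is $Status"
    attachments = @(@{ color = $Color; text = "PRTG notification" })
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Uri "https://hooks.slack.com/services/XXX/YYY/ZZZ" `
                  -Method Post -Body $payload -ContentType "application/json"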

Monitor your projectors and light bulbs with PJLink

Below is a solution rather than a question. Just thought I share it with the rest of the Paessler/PRTG world. This was originally posted here by myself: https://kb.paessler.com/en/topic/76639-monitor-your-projectors-and-light-bulbs-with-pjlink

Our challenge:

Projectors are having issues “out of nowhere”, and of course it is always urgent – and the IT department didn’t have monitoring on them. Normally you already have a conference going on, or it is urgent and the projector might really be down already – most likely due to an end-of-life light bulb.

What we have:

NEC and Optoma projectors – with Ethernet interfaces

Research and work:

We found out that there is a PJLink protocol (http://pjlink.jbmia.or.jp/english/index.html) that is supported by those devices. A little more research and we found a library on GitHub for Visual Studio (C#) (https://github.com/uow-dmurrell/ProjectorControl). We took it and modified the LAMPHOURS so we could access them publicly (only one lamp for now – but we only have projectors with one lamp). We modified the TEST project that comes with it to accept IP addresses as a parameter, modified the output into a PRTG XML format, and further modified the output so it is numeric. (We are not aware that any of that stuff is copyright protected; if so, we apologize and will stop using those components.)
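For a feel of the protocol: a raw PJLink class-1 lamp query works even without the C# library. A sketch for the no-authentication case – PJLink listens on TCP port 4352 and commands are CR-terminated (a banner of “PJLINK 1 <seed>” would mean MD5 authentication is required, which this sketch does not handle):

# Sketch: query lamp hours/state from a PJLink class 1 device
param([string]$ProjectorIP)

$client = New-Object Net.Sockets.TcpClient($ProjectorIP, 4352)
$stream = $client.GetStream()
$reader = New-Object IO.StreamReader($stream)
$writer = New-Object IO.StreamWriter($stream)
$writer.NewLine = "`r"; $writer.AutoFlush = $true

$null = $reader.ReadLine()      # banner, e.g. "PJLINK 0" (0 = no authentication)
$writer.WriteLine("%1LAMP ?")   # lamp query
$reply = $reader.ReadLine()     # e.g. "%1LAMP=1234 1" -> 1234 hours, lamp on
$client.Close()
$reply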

Result:

We added a PRTG sensor based on XML data (advanced EXE/XML) with the IP of the device as parameter, and we get the following information:

  • LampHours – we set ours to max. 4000 as error level
  • CoverStatus (0 OK / 1 ERROR)
  • FanStatus (0 OK / 1 ERROR)
  • FilterStatus (0 OK / 1 ERROR)
  • LampStatus (0 OK / 1 ERROR)
  • PowerStatus (0 off / 1 on = OK; 2 cool down / 3 warm up = WARNING, we don’t get alerts there; 4 unknown = ERROR)

This actually helps us to keep as close an eye as possible on the projectors and to be proactive if something is already reported as defective, or if the LampHours (limit adjustable, depending on manufacturer/model) are getting close to an issue, so we are ready for it.

Code changes:

ProjectorInfo.cs – added this:

Program.cs (or create your own project)

SQL Database backup monitoring

SQL Database backup monitoring

The following article was originally posted here by myself: https://kb.paessler.com/en/topic/79665-sql-database-backup-monitoring

SQL backups and their monitoring are among the most important things. We often talk about rather complex situations involving transaction log files and various other components.

Monitoring those things with the standard scripts cost us too many sensors and was not really effective.

In order to change this, here are two scripts that will be able to solve most of your issues – you will find them both at the end of this posting:

  • SQLBackupDestinationCheck.vbs
  • SQL_Database_Full_Backups.sql

SQLBackupDestinationCheck.vbs: This is a VBS script that will return XML content to Paessler/PRTG in multiple channels while using one sensor.

It expects three parameters:

  • go through first level sub folders: 0 (no) or 1 (yes)
  • file extension to obey – any other extension will be ignored – in most cases: “bak”
  • Path – should mostly be a UNC path

It will return those channels:

  • Total file count: count of all files with this extension in all folders checked
  • Total folder count: count of all folders checked
  • Oldest file found in days: oldest file – value gives back age in days
  • Newest file found in days: newest file – value gives back age in days
  • Lowest files in folder count found: lowest count of files that have been found in one folder
  • Highest files in folder count found: max. files that have been found in one folder

This needs some explanation:
The script checks a path for files with a certain extension. Let’s say you do SQL maintenance plans and use the extension .BAK to write those. You do a daily backup and keep the files for 3 days to make sure they end up on a tape; further, you use sub-folders per database and have a total of 5 databases on this system – now you will need to configure error limits per channel, e.g.:

  • Total file count: lower limit: 10 files – upper limit: 20 files – during the backup you might have up to 20 files
  • Total folder count: 5 folders upper and lower limit – more/less than 5 would mean something changed
  • Oldest file found in days: lower limit 2 days – upper limit 4 days – older than 4 would mean the cleanup does not work
  • Newest file found in days: lower limit 0 days – upper limit 2 days – nothing newer (date issues?) and nothing older as well
  • Lowest files in folder count found: lower limit: 2 – there should always be more than 2x .BAK files in any subfolder
  • Highest files in folder count found: upper limit: 4 – anything above again would mean some clean-up is not working right for one database

So – keep in mind – you can get fancier with WARNING limits and ERROR limits – the example above will help you understand what to do and should get you started. The script will save you quite a few sensors and still keep a pretty close watch on the file-system side of SQL backups – of course you could use it for something other than SQL backups as well, but this was my main intent for this script. A minimal PowerShell sketch of the same folder-check idea follows below.
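For reference, here is a minimal PowerShell sketch of the folder-check idea – the actual sensor uses the VBS script attached below; the path and parameter defaults here are illustrative assumptions:

```powershell
# Minimal PowerShell sketch of the same folder-check idea (the actual sensor
# uses the VBS script below; the path and parameter defaults are placeholders)
param(
    [string]$Path       = '\\backupserver\sqlbackups',  # placeholder UNC path
    [string]$Extension  = 'bak',
    [int]   $SubFolders = 1                             # 1 = check first-level sub-folders too
)

$folders = @($Path)
if ($SubFolders -eq 1) {
    foreach ($dir in Get-ChildItem -Path $Path -Directory) { $folders += $dir.FullName }
}

# Collect matching files per folder so we can build the per-folder counts as well
$allFiles = @()
$counts   = @()
foreach ($folder in $folders) {
    $files = @(Get-ChildItem -Path $folder -Filter "*.$Extension" -File -ErrorAction SilentlyContinue)
    $allFiles += $files
    $counts   += $files.Count
}

if ($allFiles.Count -eq 0) {
    '<prtg><error>1</error><text>No matching files found</text></prtg>'
    exit
}

$now    = Get-Date
$oldest = ($allFiles | Sort-Object LastWriteTime | Select-Object -First 1).LastWriteTime
$newest = ($allFiles | Sort-Object LastWriteTime -Descending | Select-Object -First 1).LastWriteTime

# PRTG EXE/Script Advanced sensors expect this XML structure on stdout
@"
<prtg>
  <result><channel>Total file count</channel><value>$($allFiles.Count)</value></result>
  <result><channel>Total folder count</channel><value>$($folders.Count)</value></result>
  <result><channel>Oldest file found in days</channel><value>$([int]($now - $oldest).TotalDays)</value></result>
  <result><channel>Newest file found in days</channel><value>$([int]($now - $newest).TotalDays)</value></result>
  <result><channel>Lowest files in folder count found</channel><value>$(($counts | Measure-Object -Minimum).Minimum)</value></result>
  <result><channel>Highest files in folder count found</channel><value>$(($counts | Measure-Object -Maximum).Maximum)</value></result>
</prtg>
"@
```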
SQL_Database_Full_Backups.sql
This file requests information about the backups from SQL itself. It might need SQL 2005 or newer to work – and yes, I did post this on another PRTG KB thread, but I wanted to have the complete solution in this one post.

The script will be executed against the SQL server instance the databases reside on, against the master database. You need to specify a parameter that will be set as “@MaxHours” – this actually should be something like 26 hours, so the age of your SQL backups never exceeds 26 hours – giving the backup some time to run as well. More might be necessary for bigger databases. If you do multiple full backups per day, set it to e.g. 2 hours or whatever your limit is.

You will get back three columns:

  • TotalAmountOfDatabases – total amount of databases on this server – this allows you not only to watch whether anyone created/deleted a database on the server, it also gives you a good baseline in general
  • RecentlyBackupUpCount – how many databases have been backed up recently (full backup) within the specified time window
  • NOTRecentlyBackupUpCount – how many have not been backed up in the same time window

RecentlyBackupUpCount and NOTRecentlyBackupUpCount should always add up to TotalAmountOfDatabases – but that’s not the point. More important is that you might have backed-up and not-backed-up databases – set your error limits for all three columns accordingly – upper and lower limit – and you will see that the alert fires if you add a database, or if you keep the SQL Agent service stopped so it hops over a single backup and misses it. A minimal sketch of the underlying query idea follows below.
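As an illustration of the query idea – not the attached .sql file itself – a sketch could look like this; the instance name is a placeholder and the query is a simplified assumption (it counts full backups, type 'D', recorded in msdb.dbo.backupset):

```powershell
# Minimal sketch of the query idea (not the attached SQL_Database_Full_Backups.sql);
# requires the SqlServer module for Invoke-Sqlcmd; the instance name is a placeholder
param(
    [string]$SqlInstance = 'SQLSERVER01',
    [int]   $MaxHours    = 26
)

# Simplified assumption: count databases with a full backup ('D' in
# msdb.dbo.backupset) within the time window; tempdb is never backed up
$query = @"
SELECT
  (SELECT COUNT(*) FROM sys.databases WHERE name <> 'tempdb') AS TotalAmountOfDatabases,
  (SELECT COUNT(DISTINCT b.database_name)
     FROM msdb.dbo.backupset b
    WHERE b.type = 'D'
      AND b.backup_finish_date >= DATEADD(HOUR, -$MaxHours, GETDATE())) AS RecentlyBackupUpCount
"@

$row = Invoke-Sqlcmd -ServerInstance $SqlInstance -Database master -Query $query
$notRecent = $row.TotalAmountOfDatabases - $row.RecentlyBackupUpCount

"Total: $($row.TotalAmountOfDatabases), recent: $($row.RecentlyBackupUpCount), not recent: $notRecent"
```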

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML
File: SQLBackupDestinationCheck.vbs

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\sql\mssql
File: SQL_Database_Full_Backups.sql

Auto-Cycle through URLs

Auto-Cycle through URLs

Our challenge was to have several Paessler/PRTG MAPs (www.paessler.com) cycling on a TV in the IT room – we did not want just one static MAP. This was originally posted by myself here: https://kb.paessler.com/en/topic/79668-prtg-maps-auto-cycle

In order to accomplish this, we created a simple HTML file with some JavaScript code that cycles through several URLs you can easily specify. Per URL there is a timeout value. Further, there is a company logo that is displayed while a MAP is loading; it will fade out and make the MAP visible.

The HTML code including the JavaScript is below – here are some things I wanted to explain and share about it.

Line 6 – to the end: src=”bgpicture.png” – this can be replaced by any other file name – simply use a logo here that you want to see while the MAP is loading – it will fade out

Line 11 – 21 – those lines hold the URLs in var Source=[] – add a line per URL you want to cycle through; each URL entry has the format shown below. Please MAKE SURE that the last URL entry is not followed by a comma “,” otherwise the script might fail to cycle.

Entry format:

  • ‘URL’,timeout,showBGfading,’title’
  • URL in quotation marks
  • timeout in seconds
  • show background picture/logo fading out – 0 (do not show) or 1 (show and fade)
  • title/description in quotation marks

Example: ‘https://prtg.company.local/public/mapshow.htm?id=1111&mapid=ABCDEFGH-1234-ABCD-1234-123456789000′,60,1,’Network Map’,

This would mean:

  • URL = https://prtg.company.local/public/mapshow.htm?id=1111&mapid=ABCDEFGH-1234-ABCD-1234-123456789000
  • timeout = 60 seconds
  • bgpicture = 1 – start with BGPicture from the HTML code and let it fade out (fades the map in)
  • Title/Description = Network Map

We simply load the HTML file in the browser and display it as full screen – avoiding any browser title-bar etc.

Features:

  • you will see a timeout counter in the upper right – this shows you how much longer the current view will be available.
  • you will see a title/description in the upper left once the element has loaded – it will slowly (slower than the bgpicture) fade out – you can use any text there – per URL
  • you might or might not see the BGPicture element fading out – depending on your URL configuration – we found it worked out nicely because we didn’t want to see a …load map data… message or anything, and it gives a smooth transition between the maps
  • we set timeouts per MAP of e.g. 60 seconds – so we a) cycle quickly enough and b) have enough time to look at the data shown to us
  • you can use the LEFT and RIGHT arrow key on your keyboard to jump to the previous or next URL while you execute the HTML file (if not randomized)
  • the up/down arrow keys allow you to show/hide a menu of all links available; this then allows you to click on a specific item in the list and show it specifically – the list is always generated on the fly – this prepares for future adjustments like showing where you are right now…
  • added a feature to PAUSE the script – press P to stop the cycle at any time
  • added a randomization – you can activate it and any of the URLs will be accessed randomly – if it is disabled, the script will cycle through the URLs as defined
    • var bolRandomize=true;

For fun – or how to add a few Easter Eggs:

  • you can use any file (we use MP4s and GIFs) to be displayed as well – our URL list is rather long, mostly just going through the same URLs, but every now and then briefly showing a little IT joke in between – of course it depends a bit on your company – however, we wanted to mention that we even like to do that for a short 5-second period.

Updated – December 2018: This is version 2.0 of the script. Updates are some minor bug fixes and mainly the ability to scroll forward and backward through the URLs using the left and right arrow keys on your keyboard. Additionally, the up/down keys show or hide a complete menu of all links that are cycled through. This then allows you to click on a specific link to show its content.

Updated – April 2019: Version 3.0 of the script has now a PAUSE feature and a randomization feature that you can enable/disable.

Notes as per May 2022: I did not change the script but wanted to make you all aware that you might run into issues with X-FRAME-OPTIONS set to SAMEORIGIN. This can be investigated using your browser’s developer tools (F12); you should see script errors revealing this issue. Eventually it boils down to some pages not loading (e.g., https://www.google.com) because they do not allow themselves to be embedded. You can check whether the page offers special embedded links/URLs or try to use a proxy script that feeds the page to the iFrame. At this point I cannot offer a good working solution; the script was designed to load Paessler PRTG MAPs and that is still working just fine. Using the script beyond this purpose might or might not work, depending on the target page settings and configurations.

Using PowerShell for Text-to-Speech

Using PowerShell for Text-to-Speech

PowerShell can be used for TTS / Text-to-Speech. In this specific example, PowerShell will be used for Paessler/PRTG (www.paessler.com) text-to-speech notifications. In this scenario it will actually run against a remote system in a central NOC (Network Operations Center) room and announce down sensors/systems. This was originally posted by myself on https://kb.paessler.com/en/topic/79674-can-we-have-the-ability-to-set-audio-notifications-when-sensors-go-down-up.

You simply create the script in the path C:\Program Files (x86)\PRTG Network Monitor\Notifications\EXE and create a new notification for it.

The parameters should be configured like this:

-TargetComputer ‘COMPUTER123’ -Device ‘%device’ -Name ‘%name’ -Status ‘%status’ -Message ‘%message’

Replace COMPUTER123 with whatever client should play the sound – in our case this is the workstation that shows the MAPs on a TV, and the sound actually comes out of the TV.

You might need to enable remote PowerShell execution on the target system; a hint for this is the following command: Enable-PSRemoting -Force. A minimal sketch of the remoting plus text-to-speech idea follows below.
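To illustrate the idea – this is not the original script file referenced below – a minimal sketch could look like this; the parameter names mirror the notification placeholders above:

```powershell
# Minimal sketch of the remoting + text-to-speech idea (not the original
# PRTGtoWorkstationText2Speach.ps1 referenced below)
param(
    [string]$TargetComputer = 'COMPUTER123',
    [string]$Device  = 'Demo device',
    [string]$Name    = 'Demo sensor',
    [string]$Status  = 'Down',
    [string]$Message = 'Ping timed out'
)

Invoke-Command -ComputerName $TargetComputer -ArgumentList $Device, $Name, $Status, $Message -ScriptBlock {
    param($Device, $Name, $Status, $Message)
    # System.Speech ships with the .NET Framework on Windows
    Add-Type -AssemblyName System.Speech
    $synth = New-Object System.Speech.Synthesis.SpeechSynthesizer
    $synth.Speak("Sensor $Name on device $Device is $Status. $Message")
    $synth.Dispose()
}
```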

Here is the script file: Name: PRTGtoWorkstationText2Speach.ps1