PRTG sensor to monitor a directory for a specific file type and minimum size and age

The following script monitors a specific directory for files of a defined type and minimum size. It reports back the newest file of the specified type that is over the minimum size and provides the file age and size to PRTG.

The name of the file will also be reported back as a text value for the sensor in PRTG.

If the script encounters any errors, the total files value will be 0 when no files are found or -1 for a script error. Therefore, please set the minimum channel error values accordingly to get alerted.

The goal here is to get back the age of the file that was found, to make sure the file is not older than expected. This is needed for some automated exports or data transfer files; this way you can be sure that your export routines work as expected.

ActiveDirectory/LDAP result limits – MaxPageSize

Active Directory, or more precisely LDAP, has a result limit setting, MaxPageSize. It is set by default to 1000 rows per query.

This is primarily important if you use some kind of programming language to get results from LDAP; your code must compensate for this limit and engage paging.

Your LDAP query does not need to specify the limit; only the code needs to do the paging, as you always get at most the number of results defined in the current settings.
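
As an example, in PowerShell/.NET the DirectorySearcher class handles the paging for you once you set a PageSize – a minimal sketch, with filter and attribute as placeholders:

```powershell
# Minimal paging sketch – filter and attribute are placeholders, adjust to your query
Add-Type -AssemblyName System.DirectoryServices
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = '(&(objectCategory=person)(objectClass=user))'
$searcher.PageSize = 1000                              # enables paged searches behind the scenes
[void]$searcher.PropertiesToLoad.Add('sAMAccountName')
$results = $searcher.FindAll()                         # returns all matches, not just the first 1000
"Total results: $($results.Count)"
$results.Dispose()
```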

In order to check your settings, run the following commands in a command prompt / cmd window:
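
A typical sequence to display the current LDAP policy values (including MaxPageSize) with ntdsutil looks like this – replace the DC name with one of yours:

```
C:\> ntdsutil
ntdsutil: ldap policies
ldap policy: connections
server connections: connect to server YourDCName
server connections: q
ldap policy: show values
ldap policy: q
ntdsutil: q
```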

In theory you could set different values as well, assuming you have the permission level to do so. This is not recommended, though; you should engage paging instead, as you otherwise risk overloading your DCs – even if your own commands won’t cause it, a possible DoS could happen, malicious or not. So leave the limits in place, but be aware of them.

 

Monitoring relative printer page counts with PRTG

PRTG has many standard sensors, but one I was always missing is a daily page count comparison. The standard printer sensor gives you a total page count – but to some extent this will always be a graph that only goes up. You can only estimate the actual page counts per period from those graphs.

If you ever looked into the IT Assets database project, you will see that in the Printers area there is an option to enable detailed graphs for relative page counts.

Why is this important, you might wonder? The answer is simple: as an IT manager you need to know if a certain kind of printer makes sense at a certain location. If you have a low-end printer intended for only casual print-outs but see a total of e.g. 10,000 pages printed every month, you might need to reconsider the printer model. The reasons would likely be:

  • higher cost per page
    • constant toner exchanges of a comparatively more expensive toner cartridge
  • maintenance cost
    • you might need to constantly maintain the printer
    • the costs for the maintenance kit are relatively high
  • downtime issues
    • due to toner empty
    • printer needs maintenance again
    • fewer pages in the paper tray

On the other hand, a printer might also be overkill for a certain area and not be cost-efficient. Those conditions might of course change over time. Further, there is often the question of whether a single area printer (copier) is better than multiple smaller printers. This can go pretty far, and you may want to consider Lean processes, Six Sigma guidelines and others along with this data.

However, I started a first draft of a script that provides me at least the total page count relative to each day in PRTG. It is not yet as sophisticated as what I do in the IT Assets database printer module, where I collect data e.g. every 30 minutes in a huge table and later calculate it per daily and monthly range, collecting total page counts and possibly counts per copy vs. print-out and additionally color vs. black-and-white prints. But at least it is a start.

Below you find the first draft of this script.

One thing to know – you will need to run the following command in order to install the PowerShell SNMP module on your PRTG probing server:
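
Assuming the SNMP module from the PowerShell Gallery is the one used, the installation on the probe server would look like this (run in an elevated PowerShell):

```powershell
# install the SNMP PowerShell module from the PowerShell Gallery (elevated prompt)
Install-Module -Name SNMP
```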

The current version of the PRTG script:
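
As a rough sketch of the idea – not the full script – something like the following could serve as a starting point: read the standard Printer MIB total page count via SNMP (assuming the SNMP module’s Get-SnmpData cmdlet), keep the value from the previous run in a small state file and report the delta. Device address, community and state file path are assumptions:

```powershell
# Rough sketch only – reads prtMarkerLifeCount (1.3.6.1.2.1.43.10.2.1.4.1.1), compares it to the
# value stored on the previous run and reports both values in PRTG XML format.
param([string]$PrinterIP = '192.168.1.50', [string]$Community = 'public')   # placeholders

$oid       = '1.3.6.1.2.1.43.10.2.1.4.1.1'
$stateFile = "$env:ProgramData\prtg-pagecount-$PrinterIP.txt"

$current  = [int](Get-SnmpData -IP $PrinterIP -Community $Community -OID $oid).Data
$previous = if (Test-Path $stateFile) { [int](Get-Content $stateFile) } else { $current }
Set-Content -Path $stateFile -Value $current

@"
<prtg>
  <result><channel>Total pages</channel><value>$current</value></result>
  <result><channel>Pages since last run</channel><value>$($current - $previous)</value></result>
</prtg>
"@
```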

Office 365 licenses and activated features per user

Ever wondered which user has which license assigned and e.g. which specific feature activated? Recently I was challenged to find out who in the active user base has the Exchange mailbox feature enabled and who does not. Due to the huge user base this would have taken hours to review manually. Using PowerShell, connecting to Office 365, exporting the data to a CSV file and filtering it in Microsoft Excel made this way easier.

The challenge here is that Microsoft uses SKUs – or licenses – that in turn can have various features enabled or disabled. Let’s say you have an E5 plan (license) assigned to your user; you can still disable various features within this plan, e.g. Microsoft Exchange.

If you take a look at the following website, you find a whole list of GUIDs / IDs of all those various features.

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/licensing-service-plan-reference

In case of the Microsoft Exchange Mailbox feature – we are talking about this GUID: efb87545-963c-4e0d-99df-69c6916d9eb0

Once I had identified the GUID, the next step was to grab users from a specific on-premises Active Directory OU and query them against Microsoft Azure / the Office 365 environment for their assigned licenses and features. The results are then collected in a PowerShell object and eventually saved under a defined file name in CSV format that you can easily filter in Excel afterwards.

Please keep in mind that you will need RSAT tools (PowerShell) and Azure/Office 365 connectivity, rights etc. in order for this to work.
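
A minimal sketch of that approach could look like the following – it assumes the MSOnline module, an example OU path and output file, and checks specifically for the Exchange service plan GUID mentioned above:

```powershell
# Sketch: pull users from an example OU, look up their Office 365 licenses via MSOnline and
# export the Exchange (efb87545-...) provisioning status per license to CSV.
Import-Module ActiveDirectory
Connect-MsolService                                             # prompts for Office 365 credentials

$exchangePlanId = 'efb87545-963c-4e0d-99df-69c6916d9eb0'
$adUsers = Get-ADUser -SearchBase 'OU=Staff,DC=example,DC=com' -Filter * |
    Where-Object { $_.UserPrincipalName }

$report = foreach ($adUser in $adUsers) {
    $msolUser = Get-MsolUser -UserPrincipalName $adUser.UserPrincipalName -ErrorAction SilentlyContinue
    foreach ($license in $msolUser.Licenses) {
        $exchange = $license.ServiceStatus | Where-Object { $_.ServicePlan.ServicePlanId -eq $exchangePlanId }
        [PSCustomObject]@{
            UserPrincipalName = $adUser.UserPrincipalName
            License           = $license.AccountSkuId
            ExchangeStatus    = $exchange.ProvisioningStatus
        }
    }
}
$report | Export-Csv -Path 'C:\Temp\O365-Licenses.csv' -NoTypeInformation
```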

 

PRTG and VMware 6.7 vCenter host hardware status

The following script was created to bypass an issue in the SOAP API in relation to VMware, hardware vendor drivers and PRTG. In any case, you could use the same script for other monitoring systems or any other purpose – of course while adjusting it to your needs.

You can find more information about the issue here: https://kb.paessler.com/en/topic/82458-vmware-host-hardware-status-soap-sensor-returns-warnings-after-update-to-vmware-6-7

In order to make this work you need to install the VMware PowerCLI PowerShell extension on the PRTG probe server. Further, you will need to inject the username and password as well as the vCenter name and the host’s internal hostname in vCenter.

LDAP authentication activated targets:

  • $host %host “%windowsdomain\%windowsuser” “%windowspassword”
  • $host %host.domain.local “%windowsdomain\%windowsuser” “%windowspassword”

Otherwise – you might need to use this format:

  • $host %host root MyRootPW

Test it in PowerShell as the probe user first – you should see the results. The script creates a sensor with multiple channels – hardware items in GREEN status are only counted, items in UNKNOWN status are counted and returned as text, and as long as no YELLOW or RED status (warning or error) occurs, the sensor stays green/okay. Warning or error levels will apply automatically and list the problematic hardware systems in the sensor message text.
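
The core of the approach can be sketched like this (vCenter, host name and credentials are placeholders – the actual sensor script wraps this in PRTG XML output and channel logic):

```powershell
# Sketch: read the numeric hardware sensors of an ESXi host via PowerCLI and group them by health state
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com' -User 'monitoring@vsphere.local' -Password 'MyPassword' | Out-Null

$hostView = Get-VMHost -Name 'esxi01.domain.local' | Get-View
$sensors  = $hostView.Runtime.HealthSystemRuntime.SystemHealthInfo.NumericSensorInfo
$sensors | Group-Object { $_.HealthState.Key } | Select-Object Name, Count    # green / yellow / red / unknown
```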

My first attempt was to show all channels on top of the summary – due to getting over 100 separate hardware statuses back and the limitation in PRTG of 50 channels per sensor, I dropped the idea – while the script still has all the code to handle it.

 

Search the Windows Security Eventlog for a string / text

Lately I had to search a lot through logs – as you can tell by all my postings… I just had to create yet another script that allows you to search through the Windows Security Eventlog – while the script is easily adjustable to other log types like application log or system log.

It’s not the prettiest script – but it certainly works. Don’t be surprised if the script takes its sweet time – it may need to read through a lot of event log entries.
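
The core of such a search can be sketched in a few lines (search string and time window are placeholders; reading the Security log requires administrative rights):

```powershell
# Sketch: search the Security event log for a text string within the last 24 hours
$searchString = 'jdoe'                      # placeholder
$since        = (Get-Date).AddDays(-1)

Get-WinEvent -FilterHashtable @{ LogName = 'Security'; StartTime = $since } |
    Where-Object { $_.Message -like "*$searchString*" } |
    Select-Object TimeCreated, Id, ProviderName, Message
```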

 

APC InRow A/C error monitoring with PRTG

It is rather hard to get valuable alarm monitoring from an APC InRow air conditioning unit. The APC A/Cs are a real pain when it comes to this; it may well be that the same principle applies to APC UPS units, but I have not yet had time to test this.

What I really wanted was a way to monitor the alerts that the unit reports. Doing so seemed to be fine with a simple SNMP sensor in PRTG, but the real challenge was getting the alert text. Now, there are SNMP channels, but they are only available while an alert is ongoing, meaning when there is no alert the whole OID fails.

To compensate for this, I ended up writing a simple PowerShell script that interprets the SNMP OID results, ignores a certain failure because I didn’t care about it, and reports back the results as a total error count (set the channel error limit to 0 in PRTG); if there are errors, it writes them to the sensor message text.

This is an Advanced EXE/XML script that needs to reside in the following path on your PRTG probe server: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML

It expects the SNMP community and the IP address as parameters.

The results of the script will always hold the top 4 error messages, but it will exclude the phrase “No Backup Units Available Alarm” from the error count – because in certain setups like ours there are multiple units that are not necessarily clustered, this is not a full alarm but rather a warning in my case. Feel free to adjust this in the script if you want to raise the error; you could simply remove / remark (comment out) the corresponding line:
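
The exclusion it implements could look roughly like this (variable names and sample data are hypothetical, not the ones from the original script):

```powershell
# Hypothetical illustration of the phrase exclusion before counting errors
$alarmTexts = @('Cooling Fan Failure', 'No Backup Units Available Alarm')    # placeholder sample data
$alarmTexts = $alarmTexts | Where-Object { $_ -notlike '*No Backup Units Available Alarm*' }
$errorCount = @($alarmTexts).Count
"$errorCount error(s): $($alarmTexts -join '; ')"
```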

Here is a picture of a real-world alarm / issue with the APC InRow A/C in PRTG, generated by the script.

Move user Documents and Desktop to OneDrive

The PowerShell script below was designed to move Documents, Music, Videos, Pictures, Favorites and Desktop to a sub-folder in a connected OneDrive. In theory the script does not depend on OneDrive and could be adjusted to any other destination.

While it is normally wise to use GPOs to redirect those paths to internal server resources, this is not easily possible with OneDrive. The script therefore works better here.

What it does

  1. is the current path per folder accessible
  2. does the target path exist
    1. YES: adjust the registry respective folder targets to the target path – FINISHED
    2. NO: create the target folders – see 3.
  3. is the source path on the same volume / partition – like C:
    1. YES: see below – 4.
    2. NO: check if there is enough free space for the amount of data needed to be moved
      1. YES: see below – 4.
      2. ALMOST: YELLOW warning – see below 4.
      3. NO: RED error – you could still proceed or simply close the script
  4. move the data to the new target folder
  5. remove the old folder – if not possible rename it

The script retains the special icons for the folders and engages the Windows API to adjust the folder paths.
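
For reference, redirecting a known folder through the Windows API can be sketched like this (the GUID shown is FOLDERID_Documents; the OneDrive sub-folder path is an assumption, not taken from the script):

```powershell
# Sketch: point the "Documents" known folder to a OneDrive sub-folder via SHSetKnownFolderPath
$signature = @'
[DllImport("shell32.dll", CharSet = CharSet.Unicode)]
public static extern int SHSetKnownFolderPath(ref Guid rfid, uint dwFlags, IntPtr hToken, string pszPath);
'@
$shell32 = Add-Type -MemberDefinition $signature -Name 'KnownFolders' -Namespace 'Win32Api' -PassThru

$documentsId = [Guid]'FDD39AD0-238F-46AF-ADB4-6C85480369C7'    # FOLDERID_Documents
$newPath     = Join-Path $env:OneDrive 'Documents'             # assumed target below the OneDrive root
New-Item -ItemType Directory -Path $newPath -Force | Out-Null
$shell32::SHSetKnownFolderPath([ref]$documentsId, 0, [IntPtr]::Zero, $newPath)
```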

What you need to do

  • Adjust the target-path in the top of the script
  • If desired, adjust the minimum free space value (2 GB by default) for the warning in regards to the free space – this only matters if the source and target volume / partition aren’t the same

To start the script, either right-click it and select Run with PowerShell or run it directly in a PowerShell window. The script needs to execute in the user context and does NOT need administrative rights.

Please be advised – the script will by default not try to move e.g. DOWNLOADS.

You can adjust this by adding the folder to the two parameters; see the sample below.

If you want more folders, the script would need some special adjustments. It can be used as a base script if you want.

 

Raspberry PI and Microsoft SQL databases

A Raspberry Pi can read from and write to a Microsoft SQL Server database.

In order to accomplish this you follow the instructions here: http://pymssql.org/en/stable/index.html

To summarize it in a nutshell, here is what you need to do:

  • apt-get install freetds-dev
  • pip install pymssql

Update: the above information is for a Raspberry Pi 2 – a Raspberry Pi 3 needs the commands below, as far as I know:

  • sudo apt-get install freetds-dev
  • sudo pip3 install cython
  • sudo pip3 install pymssql

Personally I had issues getting this to work in Python 3.x, so I tested it in Python 2.x and it was working fine. The issue was simply that the module “pymssql” could not be found and therefore the import line already failed in the Python script. It should be a rather easy fix – like copying the files to the Python 3 modules folder – but as of now I did not have the time to investigate this further, as I was fine using Python 2 in my specific situation.

Here is a sample script

The example I tested used a SQL server user account. The documentation of PyMSSQL talks about the possibility to use Windows Authentication as well.

As for asking Google about this – there is a lot of confusing information out there and the top-ranked posts aren’t really helpful, so I thought I would post it again hoping someone finds this helpful.

Monitor the total amount of sessions on your RDS farm

This script is designed for PRTG and will go through all your RDS hosts and report back the total number of sessions and active sessions.

You have various options as server name source, see the parameter section on top of the script.

This was also posted here: https://kb.paessler.com/en/topic/83151-total-user-count-rds-windows-2016

Please note that I grabbed the original script and rewrote it completely, fixed some issues I encountered and tried to make it as flexible as possible.
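
A minimal sketch of the session counting itself (not the full sensor with its server-name source options) could look like this, assuming the RemoteDesktop module and a reachable connection broker:

```powershell
# Sketch: count total and active sessions across the RDS deployment via the connection broker
Import-Module RemoteDesktop
$broker   = 'rdsbroker.example.com'                     # placeholder connection broker FQDN
$sessions = Get-RDUserSession -ConnectionBroker $broker
$active   = $sessions | Where-Object { $_.SessionState -eq 'STATE_ACTIVE' }

@"
<prtg>
  <result><channel>Total Sessions</channel><value>$(@($sessions).Count)</value></result>
  <result><channel>Active Sessions</channel><value>$(@($active).Count)</value></result>
</prtg>
"@
```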

 

VMware alert monitoring with PRTG and PowerShell

There is a way to read out and process ALL alerts of your VMware environment using PowerShell and report the results back to PRTG. The script further down in this article does this. What you get is similar to the graphic here.

This shows you the following channels:

  • Overall status
    • this will be green as long as there aren’t any unacknowledged warnings or alerts in VMware
    • if the warning or alert is acknowledged, the sensor / script will return to green because it is nothing new
  • Total Alerts – number of alerts, acknowledged and not acknowledged
  • Total Alerts – Acknowledged
  • Total Alerts – NOT Acknowledged
  • Total Warnings
  • Total Warnings – Acknowledged
  • Total Warnings – NOT Acknowledged
  • Total Warnings and Alerts
  • Total Warnings and Alerts – Acknowledged
  • Total Warnings and Alerts – NOT Acknowledged

As you can see – you can get more granular on your PRTG statuses if you use the channels for Warnings/Alerts that are acknowledged. You could set upper warning or error limits of 0 to keep a warning / error level in PRTG if you want to see them still.

While I was writing the script, I decided to create a new value lookup in PRTG to make it clearer. If you adjust the script to add additional statuses for the overall status channel, you will need to adjust this file as well.

Let’s start with the value lookup file; you need to copy the text from the first script block into a file you store here: C:\Program Files (x86)\PRTG Network Monitor\lookups\custom

Name the file: vmware.alerts.search.ovl

Now we need to create a custom EXE/XML sensor in this directory: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML

Name the file: VMwareAlerts.ps1

Once you have created both files, go to PRTG, add a new EXE/Script Advanced sensor and select the newly created script file. As parameter, either type the hostname of your vSphere server or, if you created the sensor underneath the device in PRTG, just use %host.

UPDATE: I changed the script because I found it better to go with the following expected parameters, always making sure you have control over the username and password used to connect to VMware. Please use the following parameters moving forward:

There are still a few challenges you might need to overcome on top of this:

  • install the VMware PowerShell extensions on your PRTG probe server
  • credentials to connect to VMware can be a challenge as I tested this
    • you might need to have the service account of the PRTG probe have sufficient access rights – needs working SSO
    • alternatively, use a stored credentials file in PowerShell – somewhat secure
    • or provide the credentials clear text in PowerShell – least secure
    • please see line 20 respective the command “connect-viserver” for more details
  • updated the script – it now expects username and password as parameters

You might want to test the script before you add a sensor to PRTG – the best way to do this is directly on the PRTG server with the service account of the PRTG probe, to make sure it will work as a sensor later on.

Keep in mind that the script expects a parameter – the VMware vSphere server name / web-address.
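
The heart of the alarm readout can be sketched like this (the full script adds the PRTG XML channels and the lookup-based overall status; server, user and password come in as parameters as described above):

```powershell
# Sketch: collect triggered vCenter alarms and split them by severity and acknowledgement
param([string]$Server, [string]$User, [string]$Password)

Import-Module VMware.PowerCLI
Connect-VIServer -Server $Server -User $User -Password $Password | Out-Null

$alarmStates = (Get-View -ViewType Datacenter).TriggeredAlarmState
$alerts      = @($alarmStates | Where-Object { $_.OverallStatus -eq 'red' })
$warnings    = @($alarmStates | Where-Object { $_.OverallStatus -eq 'yellow' })

"Alerts: $($alerts.Count) (not acknowledged: $(@($alerts | Where-Object { -not $_.Acknowledged }).Count))"
"Warnings: $($warnings.Count) (not acknowledged: $(@($warnings | Where-Object { -not $_.Acknowledged }).Count))"
```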

This was also posted on the PRTG KB here.

 

Monitor multiple website certificates with a single PRTG sensor

Due to a request on the PRTG KB from someone needing a single sensor that monitors multiple URLs for their certificate expiration, I came up with the following script, which is posted in that PRTG KB thread as well. The modified PowerShell script was provided there – it is mentioned that it was sourced from Stack Overflow – I found it at this link: https://stackoverflow.com/questions/28386579/modifying-ssl-cert-check-powershell-script-to-loop-through-multiple-sites

The result would look like this:

To make it more usable – you can input parameters from PRTG like this:

or this for limits – warning 60 and error 10 – you could name them but this should work as well…

And here is the modified script:
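
As a rough sketch of what the core certificate check in such a script looks like (the parameter handling and URLs shown here are assumptions, not the original code):

```powershell
# Sketch: check the certificate expiration of multiple URLs and report the days left per site
param(
    [string[]]$Urls = @('https://www.example.com', 'https://www.example.org'),   # placeholders
    [int]$WarningDays = 60,
    [int]$ErrorDays = 10
)

foreach ($url in $Urls) {
    $request = [Net.HttpWebRequest]::Create($url)
    $request.Timeout = 15000
    try { $request.GetResponse().Close() } catch { }          # only the TLS handshake matters here
    $cert     = $request.ServicePoint.Certificate
    $expires  = [datetime]::Parse($cert.GetExpirationDateString())
    $daysLeft = [int]($expires - (Get-Date)).TotalDays
    [PSCustomObject]@{ Url = $url; DaysLeft = $daysLeft; Warning = ($daysLeft -le $WarningDays); Error = ($daysLeft -le $ErrorDays) }
}
```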

 

Consolidate many line based .CSV files in to a single .CSV with one header line and per file data lines

Summarize a huge number of files that have line-based columns and data into a single file, with the first line containing the headers found in all files and one data row per source file, while the headers might change throughout the source files and need to be added dynamically.

This is a special script I wrote for someone else that had about 45k files to process. It is crazy enough to be worth posting here 🙂 and can be found on Spiceworks as well.

Situation:

  • many .CSV files
  • all have the columns per line instead of in the first line
  • the data looks like
    • column,data
    • column,data
  • he needs all files transferred into one file in this format
    • header,header,header
    • data,data,data
    • data,data,data
  • from one column per line to a single header line, with the data of each file in one row
  • additional challenge
    • the headers might change throughout the files and add more headers

What the script does:

  1. cycle through all files
    1. detect all headers
  2. cycle a second time through all files
    1. detect all the data
    2. write the data in the right column per line per file
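
The original post uses VBScript; as a reference, the same two-pass approach could be sketched in PowerShell like this:

```powershell
# Sketch (PowerShell equivalent of the two-pass idea, not the original VBScript)
param([string]$SourceDir, [string]$TargetDir)

$files = Get-ChildItem -Path $SourceDir -Filter *.csv

# pass 1: collect every header that appears in any file, in order of first appearance
$headers = @()
foreach ($file in $files) {
    foreach ($line in Get-Content $file.FullName) {
        $column = ($line -split ',')[0]
        if ($column -and $headers -notcontains $column) { $headers += $column }
    }
}

# pass 2: build one output row per file, mapping each "column,data" line to its header
$rows = foreach ($file in $files) {
    $row = [ordered]@{}
    foreach ($header in $headers) { $row[$header] = '' }
    foreach ($line in Get-Content $file.FullName) {
        $parts = $line -split ','
        if ($parts.Count -ge 2) { $row[$parts[0]] = $parts[1] }   # same flaw as the original: data after a further comma is dropped
    }
    [PSCustomObject]$row
}

$rows | Export-Csv -Path (Join-Path $TargetDir 'consolidated.csv') -NoTypeInformation
```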

Flaws:

  • The script does not handle data that contains a comma “,” – it ignores whatever comes after that comma

Output:

  • Output file is a single .CSV file, comma separated columns

Execute this way:

  1. Source Directory – where the .csv files reside
  2. Target Directory – where the new output .csv will be created
  3. open CMD / command prompt
  4. go to the script-directory (where you saved it)
    1. CSCRIPT scriptname.vbs “c:\sourcedirectory” “c:\targetdirectory”

Using CSCRIPT avoids you seeing a million message boxes – it outputs directly to your CMD / command prompt window…

Secured WinRM SSL session and PowerShell WinRM queries – example with a PRTG sensor for CPU, HDD and RAM

Windows Remote Management / WinRM can be configured to use an HTTPS / encrypted connection instead of transferring the provided information in clear text. In order to do this you need to configure it accordingly and have a valid machine certificate installed on the system.

Now – the advantage here is clearly the added security layer while you request and receive that information. More information on how to do this can be found here: https://support.microsoft.com/en-us/help/2019527/how-to-configure-winrm-for-https

It becomes a challenge, though, when you want to use PowerShell and e.g. PRTG against this HTTPS-encrypted system. I came across this request and had to create a script that actually works with such an HTTPS-encrypted SSL session to WinRM. You can find it below.

What it does is rather simple:

  • set the CimSessionOptions to use SSL
    • additionally it bypasses the certificate checks by default – you might want to adjust this depending on your network configuration
  • it creates a new CimSession to your target system using the UseSSL option
  • and finally it executes a few queries against this session
  • the data in this example is then translated into a PRTG-compatible XML structure so you can use it in an Advanced EXE/XML sensor within PRTG

The data in this example combines information about the CPU(s), hard drives / HDD(s) (only installed drives, not USB) and memory usage into a single PRTG sensor using channels.
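
The session handling itself boils down to a few lines (the target name is a placeholder; the full sensor script adds the PRTG XML output around these queries):

```powershell
# Sketch: SSL-encrypted CIM/WinRM session with certificate checks skipped, then a few sample queries
$target  = 'server01.domain.local'                                                   # placeholder
$options = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$session = New-CimSession -ComputerName $target -SessionOption $options

$cpu   = Get-CimInstance -CimSession $session -ClassName Win32_Processor
$os    = Get-CimInstance -CimSession $session -ClassName Win32_OperatingSystem
$disks = Get-CimInstance -CimSession $session -ClassName Win32_LogicalDisk -Filter 'DriveType=3'

"CPU load: $(($cpu | Measure-Object -Property LoadPercentage -Average).Average) %"
"Free RAM: $([math]::Round($os.FreePhysicalMemory / 1MB, 2)) GB"                      # FreePhysicalMemory is in KB
$disks | ForEach-Object { "$($_.DeviceID) free: $([math]::Round($_.FreeSpace / 1GB, 2)) GB" }

Remove-CimSession $session
```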

Due to the dynamic nature of the script, you want to make sure you set fixed upper and lower error limits, especially on the Total Disks channel – so if something changes you can re-create the sensor, since its channels are fixed once it has run the first time.

In theory you could provide limits within the XML response to PRTG – this is up to you – I always liked it more to configure them solely in PRTG in the sensor channels so I could adjust them per device.

PS: This was originally posted in the private PRTG channel on SpiceWorks here.

Updated domain join script including KeePass / Pleasant Password server entries for local admins

Today I post an updated version of the domain-join script I initially posted here.

In theory you can just replace the script with the new version – assuming you did not make any changes other than adjusting it to your domain / server names.

What changed in the newer version:

  • the top lines in the script hold the basic configuration parameters
    • line 1: NetBIOS name of your Active Directory domain
    • line 2: your DNS domain name
    • line 3: your distinguished domain name / root DN of your domain
    • line 4: your default OU for new workstations
    • line 5: empty
    • line 6: KeePass / Pleasant Password Server URL
    • line 7: KeePass folder to store the password in
  • the script now relies on the above parameters rather than specifying them in various areas of the script, making the use / adjustment of the script way easier
  • advanced error handling
    • after the user enters the computer name and their domain admin credentials, the system checks whether it can connect to the domain and whether the computer name already exists
      • if the domain credentials are invalid (they can be a non-admin’s – as long as they are valid) you get a message explaining that the script will stop due to wrong credentials
      • if the computer name already exists in the domain, you get a message about it and the script stops
    • KeePass or Pleasant Password Server connection – if it fails to connect with the credentials provided, you get a message about it and the script will stop
  • adjusted messages with various colours
    • white text – standard as it was before
    • yellow text – highlighted information so it stands out better for the end-user
    • magenta text – handled error / failure message – this is an explanation that something stopped the script from going further
    • red text – those are real PowerShell error messages – either from unhandled errors or from handled errors written to the screen as an additional reference and help

For additional information, please look at the original post here.

This script is also mentioned on the API Examples page on the Pleasant Solutions web site here.

Gathering profile information from computer

Every now and then you might need to know who logged on to a specific workstation, when the last logon of each user was, and the size of each user profile. For this I once wrote the PowerShell script you can find below. It does a WMI query against a list of one or more target computers, reads out the information and reports it back.

As input parameter, use a comma-separated list of computer names – those must be reachable and administratively accessible (you need at least admin rights on the target system). You then get an output set per profile.

The output can e.g. be piped to | ft to see it in table format – as is typical in PowerShell.

Output values are:

  • ComputerName
  • ProfileName
  • ProfilePath
  • ProfileType
    • Temporary
    • Roaming
    • Mandatory
    • Corrupted
    • local
  • IsinUse
  • IsSystemAccount
  • Size
    • this needs the most processing time – it is a manual size check that even includes temp files – other than what Windows shows you
  • LastUseTime
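
The core query behind this can be sketched as follows (computer names are placeholders; the full script adds the profile size calculation and the type decoding shown in the list above):

```powershell
# Sketch: read Win32_UserProfile from one or more target computers
param([string[]]$ComputerName = @('workstation01', 'workstation02'))    # placeholder names

foreach ($computer in $ComputerName) {
    Get-CimInstance -ComputerName $computer -ClassName Win32_UserProfile |
        Where-Object { -not $_.Special } |
        Select-Object @{ n = 'ComputerName'; e = { $computer } }, LocalPath, Loaded, LastUseTime
}
```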

Monitor user accounts in Active Directory with PRTG

The following script will read through your current Active Directory and filter for user accounts with the following specific conditions:

  • Lockedout users – please read below for further information about this
    • all users that are lockedout
    • must be an enabled user
    • that is not expired
  • disabled users
    • all users that have been disabled
  • expired users
    • must be an enabled user
    • the expiration date is set and past the current date
  • users with password never expires set
    • must be an enabled user

This will give you a pure counter output per channel as an XML result for the PRTG extended script sensor.

But there is a theoretical flaw in one of the methods – the locked-out users. User accounts get locked out in Active Directory due to too many logon attempts with an invalid password. This causes Active Directory to set the lockedout bit in the object properties. The issue here is that this bit will not be set back to 0 automatically after the defined lockout duration (GPO) has passed; the property will only be set back to 0 once the lockout duration has passed and the user has successfully logged on again.

This means the counter might give you more results than are currently true; it might count users that have been locked out whose lockout duration has passed but who have not yet logged on successfully again. This is somewhat of a false positive, while not totally false. In any case, you need to be aware of this.

The script could also be more efficient in the way it filters a few things; so far I have optimized it as far as I could – the LockedOut value cannot be used in a -Filter. In theory it might be possible to speed it up with a -Filter on UserAccountControl (if that is even possible – not tested), but I am not certain this would work. If you really want to speed it up you would need to work with -LDAPFilter – but this has to completely replace the internal filter capabilities of Get-ADUser; you can’t use both, it is one or the other.
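
A minimal sketch of the four counters described above could look like this (the original script wraps them in its own filtering and the PRTG XML result; note that Search-ADAccount -LockedOut is subject to the same LockedOut caveat):

```powershell
# Sketch: count locked-out, disabled, expired and password-never-expires users for PRTG
Import-Module ActiveDirectory
$now = Get-Date

$lockedOut = Search-ADAccount -LockedOut -UsersOnly |
    Where-Object { $_.Enabled -and (-not $_.AccountExpirationDate -or $_.AccountExpirationDate -gt $now) }
$disabled  = Get-ADUser -Filter { Enabled -eq $false }
$expired   = Search-ADAccount -AccountExpired -UsersOnly | Where-Object { $_.Enabled }
$pwdNever  = Get-ADUser -Filter { Enabled -eq $true -and PasswordNeverExpires -eq $true }

@"
<prtg>
  <result><channel>Locked out users</channel><value>$(@($lockedOut).Count)</value></result>
  <result><channel>Disabled users</channel><value>$(@($disabled).Count)</value></result>
  <result><channel>Expired users</channel><value>$(@($expired).Count)</value></result>
  <result><channel>Password never expires</channel><value>$(@($pwdNever).Count)</value></result>
</prtg>
"@
```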

This script was updated with a corrected version in February 2019 and was also posted in the PRTG knowledge base here.

Prevent ScreenSaver coming up with a PowerShell script

In our daily business, we often have the issue that a GPO enforces a screensaver after a certain amount of time. This can become very annoying and actually even be an issue if you are remoted in to an end-user system and don’t know their password. After doing quite some research, I found out that PowerShell is actually able to help here, and I wrote a script to prevent the screensaver from coming up.

First I thought, let’s see if I can control the mouse cursor, because I thought it would be less invasive. This is actually possible – but interestingly it did not have the expected effect and the screensaver kept coming up. So I went with keystrokes, but of course I was worried about which key would be safe to send. Doing some research on Microsoft websites (here the link), I found F16 – testing around with it, I found it the least invasive key to send periodically to a system; it does not even exist on a regular keyboard and can only be simulated with a key combination. To simulate F16 you need to press SHIFT + F4 – and there might be a Windows 10 (maybe even earlier) combination of WINDOWS + SHIFT + F4, respectively WINDOWS + F16, that would cause a shutdown. Once I found out about that, I decided to adjust the script to F17, which seemed to bypass even that small chance of an issue that would be more problematic than the screensaver itself – you sure don’t want the system to shut down :-).

The script you find below can simply be executed. It has a setting $minutes that you could add as a parameter and therefore adjust when you start it – by default it will use 9999, which is a pretty long time. To be clear – this is actually not a minute interval, it is a 30-second interval, so it ends up in half minutes. Why? Simple – Windows has a minimum setting of 1 minute before a screensaver can come up. If the script fired in a one-minute interval, there would be a theoretical chance that the screensaver still wins.

This is, further, not pure PowerShell code – well, it is, but the actual keystroke is sent via the Windows Scripting Host WSH / WScript SendKeys command. The PowerShell code only wraps that command. It has the nice habit of keeping a PowerShell window open that you can simply close to end the script. If you did the same in WSH you would need to execute the script manually with CSCRIPT rather than just double-clicking the file, which would execute it via WSCRIPT instead, causing a hidden window / process that you would need to identify in the Task Manager to kill. Therefore, PowerShell was the best choice in the end to accomplish the task.
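
The core loop can be sketched like this (the keystroke shown is an arbitrary choice – pick one that is harmless for your foreground windows, as discussed above and in the update note below):

```powershell
# Sketch: send a harmless keystroke every 30 seconds to keep the screensaver from kicking in
param([int]$Minutes = 9999)

$shell = New-Object -ComObject WScript.Shell
for ($i = 0; $i -lt ($Minutes * 2); $i++) {       # 30-second interval = two iterations per minute
    $shell.SendKeys('{F15}')                      # placeholder key; the article discusses F16/F17/F10
    Start-Sleep -Seconds 30
}
```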

Additionally, you could use the more advanced script below that controls the screensaver only during a defined time window. This is more usable in certain situations, because it automatically allows the screensaver to come up outside the defined time window and thus minimizes the exposure of the system. It is also more effective than completely disabling any screensaver GPO settings, because it is more specific and adjustable.

Update: As of 10/202 I updated the script below from F13 to F10, as this works better for most situations. Be aware, it all depends on what your foreground windows will react to. Make sure the keystroke you use does not cause you any harm.

Automate Outlook signature roll outs while pulling the information from Active Directory / LDAP

The Outlook signature script you will find below is a bit more complicated than most other scripts I post, because you might need to adjust a bit more. I used it for several years (as you can see in the script when it comes to Outlook versions and registry keys) in many networks, and in most cases it worked just flawlessly once it was set up.

What does this script do exactly?

Good question – every time a user logs on, it writes a signature file to their profile. The information in the file is pulled from Active Directory – where you are able to change e.g. the phone number, cell phone number or last name because the employee married. The signature file will update automatically. Even more important is the onboarding process: you can actually forget about setting up the signature. And if you don’t use roaming profiles – no worries – the signature will auto-create everywhere, as long as you call it via a login script / logon script. In theory you could call it via a GPO as well.

What you need to do – simply said

  1. get an approved example signature from HR or marketing or whoever can provide you the signature, and actually put it in your Outlook as a signature.
  2. then replace names and phone numbers with variables (I will come to that) and save it in your Outlook.
  3. go to your %appdata%\Microsoft\Signatures folder and grab the three files (.txt / .htm / .rtf) and the sub-folder with the name of the signature you saved
  4. copy them to your \\mydomain\netlogon\signatures folder (you might need to create it – any other location would need some adjustment in the script)
  5. you will need to open all three file formats (.txt / .htm / .rtf) in a regular plain-text editor like NOTEPAD.EXE (Windows) or Notepad++
  6. make sure the variables are a complete word and not somehow divided or have characters replaced – if something is not how it should be, adjust it and save the files
  7. Copy the OutlookSignatures.vbs file to the same path and adjust it especially in the header-section with your domain information and execute the script in a CMD / command prompt via \\mydomain\netlogon\signatures\OutlookSignatures.vbs “my signature” 1 1
  8. Now go back to your Outlook (probably close and re-open it) and create a new email – you should see that your signature was auto-generated and the variables have been replaced with your user-specific values from Active Directory / LDAP.
  9. you should switch your email format to all three formats – HTML / Plain Text / RTF – and check the signature in all three, to make sure all three files were generated correctly
  10. If something is not as expected, check the source-signature files and their variables and if needed adjust the variable-replacement section of the script

What you gain from this

The signatures will auto-generate, and you have a cheap way to roll out corporate-identity-conform signatures without spending a lot of money on tools that might provide an easier-to-use configuration and some fancier features – but if you don’t need those features and can live with a more technical approach, this is an inexpensive way to implement it.

The variables

The script will pull certain properties / attributes of the currently logged-on user object from Active Directory – those are configured in line 33, and if you need more you will need to add them there.

Between lines 145 and 181 you see that the script replaces placeholder variables in the source files (.TXT / .HTM / .RTF) with the information pulled from Active Directory – all those placeholders in your source files need to follow the pattern @@AnyName@@ – this makes sure you have a unique definition of what the script will replace.

Example:

This does nothing else than:

  1. replace the variable strCurrentLine with
    1. search in variable strCurrentLine
    2. for the value @@GivenName@@
    3. replace it with the LDAP attribute “givenName”

There are a few special examples in the script, e.g. for putting an HTML-conform line break after the job title in the .HTM file only (I had situations where only the HTML signature did not do a line break, or the text version alone was not doing something, etc.) – in the end this allows you to adjust something in just one of the three signature formats.

Another example in the script writes an additional line with the cell phone / mobile number if available. If the number is set in the user object, a new line will be created depending on the file format – if the number is not set, the search variable will be removed from the signature (you of course don’t want it there) instead of writing the information out. In this case we add “Cell: ” as a prefix before the number so the signature indicates clearly what this number is about. Simply said, since we replace a variable and not a whole line, we have to write out more than just the number – in this case we want to add text.

Can you execute the script with various signatures per department?

Yes – you actually can – but you would need to do this with an additional script and e.g. IFMEMBER from Microsoft or group based GPOs etc…

Can you create more than one signature?

Yes – you can execute the script in various ways – you can roll out a NEW MAIL signature (full length signature) and a second version for REPLIES (short signature) and additional signatures that the user could choose from that aren’t set as either NEW MAIL or REPLY signature. The script header explains how to call the script and what parameters it will expect and how to set them.

Feel free to use the script below and adjust it to your needs. I know some of the stuff, like your domain name, could be detected automatically instead of being hard-coded in the script – even the reg keys could be handled in a more advanced way; feel free to do so – but in the end it is not that much work and it does its job either way.

Debugging the script

At line 79 the statement “On Error Resume Next” prevents you from seeing errors that might arise. This is good for production, so that the client/user sees as few messages as possible due to timeouts or special circumstances – but if you want to debug something or test the script itself, please remark (comment out) that line so errors actually surface. They might not mean much in some cases, but they might also give you the hint you need to see what is going wrong.