
Tools for Web Analytics and SEO

SEO and web analytics are an important part of running successful web pages and blogs – including this one.

You want to use SEO tools and plugins on your website and read up on key phrases and other techniques in this constantly changing world of web pages, search engines and, these days, mobile-first development. Having been in IT for over 25 years, I have seen a lot of changes.

When creating a website, make sure you engage some tools right away: keep your images small (JPG is still a very good choice), engage image-optimizer plugins and don’t over-size them – I am guilty of this as well.

Caching of your pages with plugins and other techniques is highly recommended, because SEO also judges you on render time and metrics like LCP – Largest Contentful Paint.

Keep all the overhead from CSS and JavaScript under control; externally loaded third-party scripts in particular are a huge factor in slow page loads.

To get a good overview – use Google PageSpeed Insights to see how your page is performing and where you should start improving.

Another important tool is Google Analytics – it recently changed to GA4, which you can simply switch to. The data there can give you a good idea of how people find your website and where you should improve your marketing efforts.

Now, marketing does not necessarily mean spending money on Google Ads. Find out where people interested in your page collaborate and exchange ideas, and interact with them; if you have e.g. a good blog entry about a topic, you might be able to link it there and get more visitors.

And then there is the Google Search Console – the easiest way to find out how your page ranks for various search requests and to make sure your page has no issues, which is critical. Make sure Google’s algorithms are happy, especially with the mobile version of your website, and that it is deemed readable and fast – only then will you get a good SEO rating. Ignoring issues there can cause Google to stop indexing your webpage, which will result in fewer clicks and a decline in visitors.

Of course, there are other tools and search engines out there, but let’s face it – the de facto standard is Google. This does not mean you can ignore the others, but I found that Google’s free tools are usually more than sufficient.

All of this is just a quick overview; the topic is huge and a whole industry stands behind SEO and web marketing. My only goal was to give owners of smaller websites a good overview and some tips on how they can improve without spending a fortune.

Finally, you should also make sure your website is secured and monitored for threats – not just incoming threats, but also that your website is not unintentionally becoming a threat to your visitors due to malicious plugins or altered content. Engage third-party firewalls and scanning systems. Yes, they will cost you money, but they will save your reputation with your visitors.

Useful registry keys to supplement settings not available in standard GPO templates

This blog entry lists some registry keys that control computer and user settings via GPO but aren’t available in the standard ADMX GPO templates.

Each entry below follows the same format:

  • Computer Configuration or User Configuration
  • HIVE
  • Key Path
  • Value Name
  • Value Type
  • Value Data
  • Short explanation
  • Link if available

Over the years I have also always tried to leave a comment in the GPOs, especially for the registry keys, so I could later identify them quickly – ideally with a link so others can read up on these settings and options without doing long research.
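For testing, any of the values below can be set locally with PowerShell before being rolled out as a Group Policy Preferences registry item; a minimal sketch, using the first entry (ShowDriveLettersFirst) as the example:

# Minimal sketch: create/set a registry value locally for testing before
# wrapping it into a GPO (Group Policy Preferences > Registry).
$path = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer'
New-ItemProperty -Path $path -Name 'ShowDriveLettersFirst' -PropertyType DWord -Value 4 -Force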

Show Drive Letters first in Windows Explorer

This Registry value is set in two areas – Computer Configuration and User Configuration. See both keys below.

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer
  • ShowDriveLettersFirst
  • REG_DWORD
  • 0x4 (4)
  • Defines if the drive letter is shown first in Windows Explorer
    • 0 = After
    • 1 = Mixed
    • 2 = No drive letter
    • 4 = Before
  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer
  • ShowDriveLettersFirst
  • REG_DWORD
  • 0x4 (4)
  • Defines if the drive letter is shown first in Windows Explorer
    • 0 = After
    • 1 = Mixed
    • 2 = No drive letter
    • 4 = Before

Support URL

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation
  • SupportURL
  • REG_SZ
  • URL to your support system
  • Set the Windows Support URL shown in the Computer Properties in the Support section – the link is behind “Online Support Website”.

Support Hours

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation
  • SupportHours
  • REG_SZ
  • e.g.:  0800-1700 Pacific Time
  • Set the Windows Support Hours shown in the Computer Properties in the Support section.

Support Phone

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation
  • SupportPhone
  • REG_SZ
  • your helpdesk phone number
  • Set the Windows Support Phone Number shown in the Computer Properties in the Support section.

Support Manufacturer

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Windows\CurrentVersion\OEMInformation
  • Manufacturer
  • REG_SZ
  • Suggested value: your company name
  • Set the Manufacturer Name / Company Name shown in the Computer Properties in the Support section.

Hide Drives with no Media

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • HideDrivesWithNoMedia
  • REG_DWORD
  • 0x0 (0)
  • If set to 0x0 (0) it will not hide empty drives, if set to 0x1 (1) it will hide empty drive letters from Windows Explorer.

Expand folders to current folder

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • NavPaneExpandToCurrentFolder
  • REG_DWORD
  • 0x1 (1)
  • This expands the navigation pane of Windows Explorer down to the current folder; by default Explorer only navigates to the folder but does not expand the path to it in the navigation pane. This behavior changed back in Windows Vista / Windows 7, and the value restores the more Windows XP-like behavior, which makes Windows Explorer easier to navigate.

Fast Boot Enabled

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SYSTEM\CurrentControlSet\Control\Session Manager\Power
  • HiberbootEnabled
  • REG_DWORD
  • 0x0 (0)
  • Turns off Windows 10 Fast Startup, so a real reboot is performed rather than the hybrid quick boot that is not an actual full Windows restart. A real reboot is slower, but much cleaner.

Office 365 – Update Channel

There are settings in the Office ADMX files under Microsoft Office 2016 (Machine)/Updates for:

  • Enable Automatic updates
  • Update Channel
  • Update Deadline

Additionally, this setting should be set to make sure everything is configured and installs the same way:

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\Office\ClickToRun\Configuration
  • CDNBaseUrl
  • REG_SZ
  • http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60
  • This sets the Office 365 update channel to Current Channel for the Click-to-Run installation.
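To verify what a machine is currently configured for, the value can simply be read back, for example:

# read the currently configured Click-to-Run CDN base URL (update channel)
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration').CDNBaseUrl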

Allow Print Driver Installation

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Policies\Microsoft\Windows NT\Printers\PointAndPrint
  • RestrictDriverInstallationToAdministrators
  • REG_DWORD
  • 0x0 (0)
  • Microsoft released KB5005652, which requires admin rights to install printer drivers and also affects some existing printers, where an admin will be required to install driver updates. The workaround is to set the registry value below, which disables this new security behavior.
    • Value: 0
      • Allow non-admin users to install Point and Print printer drivers
    • Value: 1
      • Blocks non-admin users from installing Point and Print printer drivers. If this registry key does not exist, the default with KB installed will be same as Value 1, blocking non-admins from installing Point and Print printer drivers.
  • https://support.microsoft.com/en-us/topic/kb5005652-manage-new-point-and-print-default-driver-installation-behavior-cve-2021-34481-873642bf-2634-49c5-a23b-6d8e9a302872

Ensure Outlook is the default mail client

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Clients\mail
  • (Default)
  • REG_SZ
  • Microsoft Outlook
  • Ensures Microsoft Outlook is the standard mail client

Set Microsoft Teams as the default IM application

See this blog entry as well about this.

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\IM Providers
  • DefaultIMApp
  • REG_SZ
  • Teams
  • Sets Microsoft Teams as the default Instant Messenger Application.

Set Microsoft Office to read User information from Active Directory

Make sure you set both registry keys for this.

Set this to “Apply once and do not reapply” as well.

This causes Microsoft Office applications to read the user information fresh from Active Directory, as it clears the current values.

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Office\Common\UserInfo
  • UserName
  • (not set)
  • (not set)
  • This will cause the first Office application to read the information from Active Directory and re-create it specifically for the user.
  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Office\Common\UserInfo
  • UserInitials
  • (not set)
  • (not set)
  • This will cause the first Office application to read the information from Active Directory and re-create it specifically for the user.

Disable the Network Sharing Wizard in Windows Explorer

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • SharingWizardOn
  • REG_DWORD
  • 0x0 (0)
  • Disables the Sharing Wizard in Windows Explorer.

Remove the Network from Windows Explorer

Probably one of the more important security measures you can take, to discourage standard users from browsing other systems on the network too much. It does not really prevent it, but it makes it a lot less easy for regular end users, as the Network area in Windows Explorer simply vanishes.

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • {F02C1A0D-BE21-4350-88B0-7367FC96EF3C}
  • REG_DWORD
  • 0x1 (1)
  • Remove Network from Windows Explorer.

Remove Administrative Tools from the Start Menu

This is made out of two combined registry keys. You will need to apply both for this to take effect.

Make sure it does not apply to any administrator accounts, as that would be counterproductive.

  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • Start_AdminToolsRoot
  • REG_DWORD
  • 0x0 (0)
  • Removes administrative tools from the start menu.
  • User Configuration
  • HKEY_CURRENT_USER
  • Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced
  • StartMenuAdminTools
  • REG_DWORD
  • 0x0 (0)
  • Removes administrative tools from the start menu.

Windows Update Restart Notifications for End Users

Please apply both registry keys for this to take effect.

  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\WindowsUpdate\UX\Settings
  • RestartNotificationsAllowed
  • REG_DWORD
  • 0x1 (1)
  • Will display Restart Notifications to End Users.
  • Computer Configuration
  • HKEY_LOCAL_MACHINE
  • SOFTWARE\Microsoft\WindowsUpdate\UX\Settings
  • RestartNotificationsAllowed2
  • REG_DWORD
  • 0x1 (1)
  • Will display Restart Notifications to End Users.

 

Monitoring relative printer page counts with PRTG


PRTG has many standard sensors, but one I was always missing is a daily page count comparison. The standard printer sensor gives you a total page count – but to some extent this will always be a graph that only goes up; you can only estimate the per-day page counts from it.

If you ever looked into the IT Assets database project, you will see that in the Printers area there is a possibility to enable detailed graphs for relative page counts.

Why is this important, you might wonder. The answer is simple: as an IT manager you need to know whether a certain kind of printer makes sense at a certain location. If you have a low-end printer intended for only casual print-outs but a total of e.g. 10,000 pages printed every month, you might need to reconsider the printer model. The reasons would likely be:

  • higher cost per page
    • constant exchange of a comparatively more expensive toner cartridge
  • maintenance cost
    • you might need to service the printer constantly
    • the costs for the maintenance kit are relatively high
  • downtime issues
    • toner running empty
    • printer needing maintenance again
    • fewer pages in the paper tray

On the other hand, a printer might also be overkill for a certain area and not be cost-efficient. Those conditions can of course change over time. Further, there is often the question whether a single area printer (copier) is better than multiple smaller printers. This can go pretty far, and you may want to consider Lean processes, Six Sigma guidelines and others along with this data.

However, I started a first draft of a script that provides me at least the total page count relative to each day in PRTG. This is not yet as efficient as what I do in the IT Assets database printer module, where I collect data e.g. every 30 minutes in a huge table and later calculate it in daily and monthly ranges, collecting total page counts and possibly counts for copies vs. print-outs and color vs. black-and-white prints. But at least it is a start.

Below you find the first draft of this script.

One thing to know – you will need to install a PowerShell SNMP module on your PRTG probe server first.
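Assuming the community SNMP module from the PowerShell Gallery is the one used (an assumption, not a requirement), the installation would look roughly like this:

# install the SNMP helper module on the PRTG probe server (elevated PowerShell)
Install-Module -Name SNMP -Scope AllUsers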

The current version of the PRTG script:
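As a rough sketch of the idea – not the original script – the core of it could query the printer’s standard Printer-MIB life counter via SNMP and return it as a PRTG EXE/XML custom sensor result; the module, OID and printer address below are assumptions:

# Minimal sketch: read a printer's total page count via SNMP and output it in
# PRTG's EXE/XML custom sensor format. Set the channel to "Difference" mode in
# PRTG to get relative (per-interval / per-day) counts instead of the raw total.
param(
    [string]$PrinterIp = '192.168.1.50',   # example address
    [string]$Community = 'public'
)

Import-Module SNMP

# prtMarkerLifeCount from the standard Printer MIB (total impressions)
$oid   = '1.3.6.1.2.1.43.10.2.1.4.1.1'
$count = (Get-SnmpData -IP $PrinterIp -Community $Community -OID $oid).Data

@"
<prtg>
  <result>
    <channel>Total page count</channel>
    <value>$count</value>
  </result>
</prtg>
"@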

Move user Documents and Desktop to OneDrive


The PowerShell script below was designed to move Documents, Music, Videos, Pictures, Favorites and Desktop to a sub-folder in a connected OneDrive. In theory the script does not depend on OneDrive and could be adjusted to any other destination.

While it is normally wise to use GPOs to redirect those paths to internal server resources, this is not easily possible with OneDrive. The script therefore works better here.

What it does

  1. is the current path per folder accessible
  2. does the target path exist
    1. YES: adjust the registry respective folder targets to the target path – FINISHED
    2. NO: create the target folders – see 3.
  3. is the source path on the same volume / partition – like C:
    1. YES: see below – 4.
    2. NO: check if there is enough free space for the amount of data needed to be moved
      1. YES: see below – 4.
      2. ALMOST: YELLOW warning – see below 4.
      3. NO: RED error – you could still proceed or simply close the script
  4. move the data to the new target folder
  5. remove the old folder – if not possible rename it

The script retains the special icons for the folders and engages the Windows API to adjust the folder paths.
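A minimal sketch of that API part – repointing a single known folder via SHSetKnownFolderPath – could look like the snippet below; the folder GUID is the well-known ID of the Documents folder, and the target path is just an example (the full script adds the checks and the data move described above):

# Minimal sketch: repoint the "Documents" known folder to a new path using the
# Windows shell API. Runs in the user context, no admin rights required.
$signature = @'
[DllImport("shell32.dll")]
public static extern int SHSetKnownFolderPath(ref Guid folderId, uint flags,
    IntPtr token, [MarshalAs(UnmanagedType.LPWStr)] string path);
'@
Add-Type -MemberDefinition $signature -Name 'KnownFolders' -Namespace 'Shell32'

$documentsId = [Guid]'FDD39AD0-238F-46AF-ADB4-6C85480369C7'   # well-known GUID: Documents
$target      = Join-Path $env:OneDrive 'Documents'            # assumes OneDrive is connected

New-Item -ItemType Directory -Path $target -Force | Out-Null
$result = [Shell32.KnownFolders]::SHSetKnownFolderPath([ref]$documentsId, 0, [IntPtr]::Zero, $target)
if ($result -ne 0) { Write-Warning "SHSetKnownFolderPath returned HRESULT $result" }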

What you need to do

  • Adjust the target-path in the top of the script
  • If desired, adjust the minimum free space value (2 GB by default) for the warning in regards to the free space – this only matters if the source and target volume / partition aren’t the same

To start the script, either right-click it and select “Run with PowerShell” or run it directly in a PowerShell window. The script needs to execute in the user context and does NOT need administrative rights.

Please be advised – the script will by default not try to move e.g. DOWNLOADS.

You can adjust this by adding the folder to the two parameters; see the sample below.

If you want more folders, the script would need some special adjustments. It can be used as a base script, if you want.

 

Using PRTG to monitor manufacturing machines


This is a screenshot of the real-time data map of the PRTG instance used to monitor the data collected by the Raspberry Pi, processed by PRTG to show the progress of the production machine in manufacturing.

A few weeks ago Paessler published an article on their blog that I was part of – a case study and implementation of how to use PRTG together with a Raspberry Pi to monitor a manufacturing / production machine in real time.

The article describes what Dominik Wosiek and I implemented to monitor a manufacturing machine in real time. He started with a Raspberry Pi and eventually added some magnetic field sensors to the machine’s robot arms to detect their movement. The data those sensors collect is interpreted by a script on the Raspberry Pi and then sent off to various HTTP push sensors on a free Paessler PRTG installation (we needed far fewer than 100 sensors and wanted to keep the installation independent).

On the PRTG instance, the data is collected and PRTG creates various graphs for us. We further added a PowerShell script that calculates the elapsed time of the day. Since we know the work window of the manufacturing department and their daily parts target, we were able to use a Sensor Factory sensor in PRTG to do some calculations and show how the machine and the group running it were doing, comparing the part output relative to the time of day – or rather, the work hours passed.

Above is an example configuration of the Sensor Factory Sensor in PRTG. We defined four channels:

  1. Production time passed in percent [%]
    1. this sensor pulls the passed time in minutes from the PowerShell Script sensor we created, it then does some math – the formula looks like this
      1. (passed minutes of the day – minutes passed when manufacturing starts) / (minutes passed when manufacturing ends – minutes passed when manufacturing starts) * 100 (to get percent)
      2. what it does in the example above:
        1. pull the passed minutes from the foreign sensor
        2. calculate 8 hours times 60 minutes (start of the day)
        3. subtract start time from passed time of the day (at 10 AM we would end up with 120 minutes)
        4. divide by 17 hours times 60 (5 PM in minutes of the day) minus 8 AM in minutes of the day – this gives you the total minutes between 8 AM and 5 PM, which is the defined manufacturing work time window
        5. multiply the result by 100 to get a percentage showing the time passed relative to the total work time window (see the example formula after this list)
  2. Part output vs. time [%]
    1. while the formula looks longer, it does nothing other than taking the formula described in channel 4 and subtracting the formula described in channel 1
    2. in other words – the value of part output in percent minus the value of work time passed in percent
    3. this results in either 0% – meaning the output is exactly where it should be relative to the time passed – or a negative number, meaning the output is falling behind, while a positive number means the part output is higher than expected relative to the time
      1. Note: this is all a bit relative; it might start negative in the morning, catch up to a positive number before lunch break, fall back to a negative number and then catch up to zero by the end of the day. It depends on various factors but is a pretty good indicator
  3. Part output count
    1. this just loads the foreign channel of another sensor to show it in the same table/graph
  4. Part output in percent [%]
    1. 25,000 is the daily target amount of produced parts; this channel calculates how much of that was accomplished, in percent, by dividing the current count by the target count
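As an illustration, channel 1 from the list above would be defined in the Sensor Factory with a formula along these lines – 2323 is the example sensor ID used further below, 2 the channel holding the minutes of the day, and 480 / 1020 are 8 AM and 5 PM expressed as minutes of the day:

( channel(2323,2) - 480 ) / ( 1020 - 480 ) * 100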

Here is the script I created to inject the minutes of the day into a PRTG sensor – this is what is used above as channel(2323,2) within the formula.
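A minimal sketch of such a script (not necessarily the original) – computing the minutes passed since midnight and returning them in PRTG’s EXE/XML custom sensor format – could look like this:

# Minimal sketch: return the minutes passed since midnight as a PRTG EXE/XML result.
$now     = Get-Date
$minutes = $now.Hour * 60 + $now.Minute

@"
<prtg>
  <result>
    <channel>Minutes of the day</channel>
    <value>$minutes</value>
    <unit>Custom</unit>
    <customunit>min</customunit>
  </result>
</prtg>
"@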

Further details are described in the blog entry on the Paessler web site.

Setting up Windows Search Index


Windows Search indexing is a solution from Microsoft that indexes your file servers and their files in full text and allows your end users to get results quickly, seamlessly using the full-text search database.

This is accomplished simply by using the search box in the upper right of Windows Explorer while on a network share or mapped network drive.

Many people say that Windows Search does not work right, but in my experience quite the opposite is true – it only depends on the right setup, which I will explain here.

Before we go into details – there is a challenge that comes along with DFS namespaces. I found a way to bypass this; look here to understand how it works.

  • add an additional drive to your file servers
    • I roughly recommend about 10% of the total used size of the file shares that you want to index
    • you might be able to go with less or more – but expect 10% to be more on the safe side
  • give the new drive a drive-letter (e.g. I: for Index) and name it INDEX (to make clear that this is only to be used for the INDEX database)
  • add the following path to this drive
    • I:\ProgramData\Microsoft
    • The path above mirrors the default path on the C: drive where the index will reside by default, I recommend to mirror this path on the new target drive to keep things simple and clear
  • add the feature Windows Search to your server (see the PowerShell sketch after this list)
  • download the Microsoft Filter Pack 2.0 and install it on your server
    • yes – this is an Office 2010 SP2 filter pack, and yes, that’s okay
    • just download the x64 version and install it
    • this adds an iFilter pack to Windows that allows it to understand DOCX and XLSX files (and others) and full-text index them
  • enable the new service Windows Search
    • set it to start type automatic
    • start the service
  • from your start menu (or Control Panel) open the Indexing Options
    • click on ADVANCED
      • select the earlier created path I:\ProgramData\Microsoft as new path for the index
      • after you click on OK Windows will stop the service and move the existing index
        • you might need to manually start the service again
    • click now on MODIFY and select the server shares respectively local paths you want to index
    • if you leave the Indexing Options open you will see that Windows updates the count constantly
      • Windows will slow down indexing if you work on the console or via RDP on the server Desktop
      • You will see the message INDEXING COMPLETED once all files have been indexed
        • this can take many days – don’t be surprised and be patient
  • think about monitoring the index and the partition the index resides on
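The feature installation and service setup from the list above can also be scripted; a minimal sketch for Windows Server, using the feature name Search-Service and the example index path from above:

# Minimal sketch: install Windows Search, prepare the index folder on the I: drive
# and set the service to start automatically.
Install-WindowsFeature -Name Search-Service

New-Item -ItemType Directory -Path 'I:\ProgramData\Microsoft' -Force | Out-Null

Set-Service   -Name WSearch -StartupType Automatic
Start-Service -Name WSearch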

One challenge remains – if your indexing drive becomes full, Windows Search will crash the index database, likely mark it as corrupt and start from scratch. This can happen within minutes, even before any of your monitoring solutions warns you about the full drive. There are event log entries about it, and you will easily see that the number of indexed files dropped to a very low number again; the drive also suddenly becomes pretty empty again. This is one of the downsides of how Microsoft implemented this, but if you provide plenty of space in the first place, you likely won’t experience this issue.

iFilters for PDF files do not seem to be necessary on current Windows versions. In the past I downloaded an iFilter for PDF files from Adobe, but I experienced many issues with temporary files on the C:\ drive of my file servers – a known bug with the Adobe iFilter. As of Windows 2016 those files are full-text indexed out of the box, but the same restrictions as in the past apply: a PDF can hold editable text that can be indexed, or it can hold pictures – often the case with scanned documents – and as long as your scanner didn’t do OCR and translate the image to text, those files can’t be full-text indexed.

Monitoring

It is almost essential to have proper monitoring in place for this. Actually, you want to monitor everything within your network, but that’s another story. I faced the same challenges and was even able to find out more about how the index database grows and why many people aren’t happy with it. Eventually it comes down to improper configuration and bad or no monitoring at all. I highly recommend you have a look at this additional blog article about Windows Index Monitoring.

Read the UEFI stored Windows key and activate Windows


Ever wanted to read out the UEFI stored Windows key and probably automatically try to activate Windows with a single script?

The UEFI-stored Windows license key is essential because you don’t have a physical license anymore, and you should keep a copy of it just in case – for situations like the motherboard being exchanged and the key not being transferred properly. I came across similar situations and was glad that I had the key.

But the script below does more than just read and display the key – it will try to activate Windows as well.

Please note – it is wise to combine the read-out of the key with an export / save method, like writing it to a database. This script only shows the basic functionality – but that is the most important part already.
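A minimal sketch of that core functionality – reading the firmware-embedded key via WMI and handing it to the activation tooling – might look like this (run elevated):

# Minimal sketch: read the UEFI/firmware embedded Windows product key and try to
# activate Windows with it.
$key = (Get-WmiObject -Query 'SELECT OA3xOriginalProductKey FROM SoftwareLicensingService').OA3xOriginalProductKey

if ([string]::IsNullOrWhiteSpace($key)) {
    Write-Warning 'No firmware-embedded product key found.'
} else {
    Write-Output "UEFI stored product key: $key"
    # install the key and attempt online activation
    cscript //nologo "$env:SystemRoot\System32\slmgr.vbs" /ipk $key
    cscript //nologo "$env:SystemRoot\System32\slmgr.vbs" /ato
}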

The next two lines help additionally – if you create this batch file as well and store both files in the same directory, you can simply right-click the .CMD file and run it with elevated rights (run as administrator), which makes your life even easier. This is just a simple trick to bypass some restrictions you might encounter when trying to execute a PowerShell script with elevated rights while bypassing the execution policy for scripts at the same time.
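Such a wrapper typically looks something like the two lines below – the .ps1 file name is just a placeholder for whatever you called the PowerShell script:

@echo off
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%~dp0Read-UefiKey.ps1"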

Microsoft RADIUS / NPS SQL logging


An issue or question I see again and again – proper RADIUS logging with Microsoft NPS / Network Policy Server.

Let me guide you through the steps:

  1. Install a Microsoft SQL or if not available SQL Express
    1. be aware – SQL Express has very tight database size limits and no SQL Agent – this might be an issue
  2. Create a new database via SQL Management Studio in the SQL server
    1. name it e.g. RADIUSLogging
  3. run the SQL script from this Microsoft website in a new query window against this database (make sure it is not run against any other database by accident)
    1. you could add a line like USE RADIUSLogging at the very top to prevent this
  4. configure your RADIUS server to log to this SQL server and database
  5. make sure you have fail-over logging to a text file – to avoid issues in case your SQL database grows too big or is not reachable for any reason
    1. decide in the text-file configuration if you want to deny access if there is an issue or if you still want to proceed with the logon

Now you have RADIUS logging the information to a SQL database – actually a single table – and you can dig around in it. The IT-Assets database provides a front-end example for this – you don’t need to use it – but it might be of help – see here.

To interpret all those columns and values – look at the following links for additional information:

You will face the issue that the database will grow rapidly, depending on how many requests go to your RADIUS system. Keep a close eye on it – use a monitoring solution like Paessler PRTG to watch the size, and keep in mind that SQL Express has size limits, e.g. 10 GB. The full version of Microsoft SQL has no such limits, and you can additionally use SQL Agent to execute tasks. The following script can help you purge data from the RADIUS database to keep its size under control. You can use SQL Agent (not available in SQL Express) to run it automatically; with SQL Express, either run it manually or automate it some other way against the database to delete older entries.

The script will purge data older than 14 days – you can adjust the number of days to your liking / needs.
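A minimal sketch of such a purge, wrapped in PowerShell via Invoke-Sqlcmd, might look like the lines below; the table and column names (accounting_data / timestamp) assume the schema created by Microsoft’s standard NPS logging script, so verify them against your database first:

# Minimal sketch: delete RADIUS accounting rows older than 14 days.
$days  = 14
$query = @"
DELETE FROM dbo.accounting_data
WHERE [timestamp] < DATEADD(day, -$days, GETDATE());
"@

Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'RADIUSLogging' -Query $query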

Updated domain join script including KeePass / Pleasant Password server entries for local admins


Today I post an updated version of the domain-join script I initially posted here.

In theory you can just replace the script with the new version – assuming you did not make any changes other than adjusting it to your domain / server names.

What changed in the newer version:

  • the top lines in the script hold the basic configuration parameters
    • line 1: NetBIOS name of your Active Directory domain
    • line 2: your DNS domain name
    • line 3: your distinguished domain name / root DN of your domain
    • line 4: your default OU for new workstations
    • line 5: empty
    • line 6: KeePass / Pleasant Password Server URL
    • line 7: KeePass folder to store the password in
  • the script now relies on the above parameters rather than specifying them in various places in the script, making the use / adjustment of the script much easier
  • advanced error handling
    • after the user enters the computer name and their domain admin credentials, the system checks whether it can connect to the domain and whether the computer name already exists
      • if the domain credentials are invalid (they can belong to a non-admin – as long as they are valid), you get a message explaining that the script will stop due to wrong credentials
      • if the computer name already exists in the domain, you get a message about it and the script stops
    • KeePass or Pleasant Password Server connection – if it fails to connect with the credentials provided, you get a message about it and the script will stop
  • adjusted messages with various colours
    • white text – standard as it was before
    • yellow text – highlighted information, so it stands out better for the end user
    • magenta text – handled error / failure message – this is an explanation that something stopped the script from going further
    • red text – real PowerShell error messages, either from unhandled errors or, where the error was handled, printed to the screen as additional reference and help

For additional information, please look at the original post here.

This script is also mentioned on the API Examples page on the Pleasant Solutions web site here.

How to create an independent backup network


Today we look at independent backup networks especially in regards to LTO 7 and VMware ESX hosts. Be aware – this very example also applies to any backup to disk (B2D / Backup-2-Disk) solution. But a good reseller / vendor would inform you about this right away anyways.

LTO 7 and later drives like LTO 8 have a write speed faster than a 1 GBit network can handle, which makes it really necessary to think about options. On top of that, you do not want to over-utilize the LAN side of your servers, so the impact on the user / application facing side stays minimal. This leaves you with two options: you can group switch ports, assuming you have enough 1 GBit ports (you will need at least 3 ports combined), or you create a whole backup network on a 10 GBit basis.

Let’s run some numbers:

  • LTO7 has a write speed of about 300 MB/s uncompressed and up to 750 MB/s compressed
  • LTO8 has a write speed of about 360 MB/s uncompressed and up to 900 MB/s compressed

Now – your network connection is measured in MBit/s, not MByte/s. One byte is 8 bits, so we need to multiply those byte speeds by 8 to see the network speed numbers.

  • LTO7 uncompressed = 300 MB/s * 8 = 2400 MBit/s
  • LTO7 compressed = 750 MB/s * 8 = 6000 MBit/s
  • LTO8 uncompressed = 360 MB/s * 8 = 2880 MBit/s
  • LTO8 compressed = 900 MB/s * 8 = 7200 MBit/s

Assuming you want to go with grouped ports, you see that with LTO7 you would need 6 ports and with LTO8 7 to 8 ports to fully utilize the speed and minimize your backup window. Additionally, think about the read speed, which might affect you as well – not just for recovery but for the verify pass of your backup.

Now – this means – add at least one 10 GB switch and one 10 GB NIC to each server – let’s do this with an example:

  • 3x VMware ESX hosts – LAN side and management configured with 1 GBit – we assume there is some kind of storage behind them that has the IOPS and speed we need, like an SSD-based storage
  • 1x Backup media server that has an LTO7 or LTO8 drive connected – 1 GB on the LAN side

What we need – minimal:

  • 4x 10 GB NICs
  • 1x 10 GB switch
  • 4x CAT6e or CAT7 cables

What I would recommend – nice to have:

  • 4x 10 GB NICs – dual port
  • 2x 10 GB switches
  • 10x CAT7 cables – 2x to stack/trunk the switches if not stacked otherwise

This is a nice to have – a fail-over, but the minimal configuration is sufficient as well.

Cable this all in – create a new IP-scope / VLAN on the backup side – you do not need any default Gateway etc. on the Backup-Network side (10 GB). Just an independent IP scope and have every host assigned a static address.

This keeps the regular network traffic and any broadcasts away from this network, and your backup will run totally independently. You might need to disable your anti-virus solution on this NIC / IP scope on the backup media server as well, because it can influence the speed quite drastically. Having it separated and independent helps keep security up.

On the VMware hosts – I like to even allow VMware to vMotion on this backup-LAN – simply because it is extremely efficient there – independent from your LAN and if you have it from your iSCSI network as well. But that’s just an idea.

Now – the backup – how will it grab the data from the 10 GB side of your VMware hosts – especially if you have a vSphere cluster and grab the backup through the cluster?

Simple – you adjust the hosts file on your media server. Each and every VMware host needs to be listed in the hosts-file on the media server with the IP that it has in your 10 GB backup network. This way DNS and everything will act normal in your environment, only the backup-media server will reach out to those hosts on the 10 GB network due to the IP resolution of those hosts. This is the easiest way to accomplish this.
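As an illustration, the hosts file (C:\Windows\System32\drivers\etc\hosts) on the media server would simply map each ESX host name to its backup-network address – the names and addresses below are examples:

# backup-network addresses for the ESX hosts (example values)
10.10.10.11   esx01.domain.local   esx01
10.10.10.12   esx02.domain.local   esx02
10.10.10.13   esx03.domain.local   esx03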

You will not need to add a 10 GBit connection or a backup-network IP address to your VMware vSphere controller – it can stay on your LAN or server-management network as is. This also means there is no reason to mention it in the hosts file on the media server.

How this works:
your backup will contact the vSphere controller on the LAN side
it will then be redirected to the host that currently holds the VM you want to backup
the media server now will contact the VMware host directly – due to the hosts-file entry on the 10 GB backup-network
backup will process…

This would of course work with a physical server as well – like a physical file server – though today this is rather rare, and VMware backups in particular are large files that benefit most from the LTO7 write speed, so the above makes the most sense there. It wouldn’t matter if you do the same with a Hyper-V environment or any other VM host/guest solution; in theory it should always work the same.

What real world write speeds can you expect?
This is the big question – here are some real-world examples. Those are single jobs on a per-VM basis, meaning the times even include the tape load / unload processing and updating the catalogs, using Veritas Backup Exec.

Backup size (VM size)   Elapsed time (h:mm)   Job rate write/overall   Job rate verify
4 TB                    6:47                  17,227 MB/min            26,404 MB/min
2 TB                    6:47                  6,822 MB/min             22,233 MB/min
1.21 TB                 3:49                  8,271 MB/min             20,235 MB/min
147 GB                  0:33                  6,491 MB/min             22,655 MB/min
138 GB                  0:17                  18,403 MB/min            27,726 MB/min
25 GB                   0:10                  9,172 MB/min             20,700 MB/min

The above list is just an example – realistically we see overall speeds between about 3,000 MB/min and 18,000 MB/min. This is partly due to the VM itself – thin or thick provisioned, what kind of data it holds, how busy the host is (we might be hitting it twice because multiple drives are backing up from the same host at the same time), etc. On average we see around 8,000 to 9,000 MB/min, which is still great – and I wanted to show that it can vary quite a bit, so don’t be scared. We still improved the backup time by going from an LTO4 LAN-based backup scenario to an LTO7 independent backup network, cutting the time in half – actually, to even less than half. The slowest speeds we see today are for systems that can only be backed up on the LAN side; the ports are grouped there, but we still don’t reach the same speed as on the backup-network side. Many factors come into play, and it all depends on the individual situation.

Hoping the information above helps some of you out there – keep in mind that your situation might be different, run some examples and ideas and if you have questions, reach out – this remains an example of what I really implemented at a company and how it affected the backup configuration and management.

Gathering profile information from computer


Every now and then you might need to know who logged on to a specific workstation, when the last logon of each user was, and how large each user profile is. For this I once wrote the PowerShell script you can find below. It does a WMI query against a list of one or more target computers, reads out the information and reports it back.

As input parameter, use a comma-separated list of computer names – those must be reachable and administratively accessible (you need at least admin rights on the target system). You then get one output set per profile.

The output can e.g. be piped to | ft to see it in table format – as is typical in PowerShell.

Output values are:

  • ComputerName
  • ProfileName
  • ProfilePath
  • ProfileType
    • Temporary
    • Roaming
    • Mandatory
    • Corrupted
    • local
  • IsinUse
  • IsSystemAccount
  • Size
    • this needs the most processing time – it is a manual size check that even includes temp files, unlike what Windows shows you
  • LastUseTime
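A minimal sketch of the underlying WMI query – without the size calculation and the friendly type names listed above – might look like this (computer names are examples):

# Minimal sketch: list user profiles on one or more remote computers via WMI.
param([string[]]$ComputerName = @('PC01','PC02'))   # example names

foreach ($computer in $ComputerName) {
    Get-WmiObject -Class Win32_UserProfile -ComputerName $computer |
        Where-Object { -not $_.Special } |          # skip system profiles
        ForEach-Object {
            [PSCustomObject]@{
                ComputerName = $computer
                ProfilePath  = $_.LocalPath
                IsInUse      = $_.Loaded
                LastUseTime  = if ($_.LastUseTime) {
                                   [Management.ManagementDateTimeConverter]::ToDateTime($_.LastUseTime)
                               }
            }
        }
}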

The advantage of DFS and how to set up a working structure


File shares are something every IT professional works with. Many companies have far too complicated and unstructured network file systems, with too-deep permissions, too many shares and access points, often several mapped drives and – from an IT perspective – nightmares when it comes to migrating to newer servers or giving satellite offices and subsidiaries access, especially over lower-speed connections.

Having been in IT for about 20 years now, I have seen a lot and been challenged by this quite a bit. One of the best solutions I came across is the one I am about to show you here. It is very structured while giving you the flexibility to extend it as you go, and it should work in most businesses.

First of all – please note that I will not go as far as explaining and exploring the differences between Active Directory integrated and stand-alone namespaces. If at all possible, I suggest you use Active Directory integrated namespaces to simplify the rollout, but both would work.

The structure example:

The structure example will depend on a DFS root server and a separate file server per root folder on the later network drive. This is just an example – you do not need to split it all up, though if you can, do it to keep things as structured as possible.

Example target file system structure:

  • N:\
    • N:\Archive
      • N:\Archive\John Doe
      • N:\Archive\Jane Doe
    • N:\Departments
      • N:\Departments\Marketing
        • General
        • Management
        • Public (anyone has read access)
      • N:\Departments\Accounting
        • General
        • Management
        • Public (anyone has read access)
    • N:\Other
      • N:\Other\Manufacturing
      • N:\Other\Projects

The declared goal is to keep the NTFS rights structure as simple as possible and not go any deeper than e.g. level three – e.g. N:\Departments\Marketing\General

Each department folder in this example will have a public folder where a member of the department has read/write access while any non-marketing member has read access to files that are published there.

The Archive tree is for terminated employees and archive data. Their information gets collected in a sub-folder in this tree, a group is created for each of those folders, and only people who got approval to access this data will see and be able to read those archived files (read-only is recommended as the NTFS permission).

The file servers and their preparation:

DFS Root-Server
  • create a folder on the data-partition like D:\DFSRoots – there will not be any real data in this folder – but it will hold the actual DFS structure
    • create sub folders for the branches on the shared DFS drive like:
      • D:\DFSRoots\Departments
      • D:\DFSRoots\Archive
      • D:\DFSRoots\Other
DFS Department Server
  • create a folder on the data-partition like D:\SharedFolders\Departments
    • remove the everyone or authenticated user groups from this folder – only System and Domain-Admins should have read/write permission here while group N_Departments will have read-only access on this folder.
    • create a sub-folder for each main folder you want to see under the path N:\Departments and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Departments\Marketing
      • D:\SharedFolders\Departments\Accounting
  • now create the following sub-folders for each department folder as shown on the example Marketing
    • D:\SharedFolders\Departments\Marketing\General
    • D:\SharedFolders\Departments\Marketing\Management
    • D:\SharedFolders\Departments\Marketing\Public
  • create two groups in Active Directory for Marketing
    • N_Departments_Marketing_General
    • N_Departments_Marketing_Management
  • create a general group N_Departments to use it for all Public folders
  • assign the groups to their respective sub-folders General and Management with read/write rights; you will probably need to remove the read access that the group N_Departments inherited on these folders
  • assign the group N_Departments to the Public folder in all departments with read-only rights (if not inherited)
  • assign the group N_Departments_Marketing_General to the Marketing\Public folder with read/write access – allowing each member of marketing to publish information for access to other people – only marketing can write in this folder, other people only have read-access to it
DFS Archive Server
  • create a folder on the data-partition like D:\SharedFolders\Archive
    • create a sub-folder for each main folder you want to see under the path N:\Archive and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Archive\John Doe
      • D:\SharedFolders\Archive\Jane Doe
DFS Other Server
  • create a folder on the data-partition like D:\SharedFolders\Other
    • create a sub-folder for each main folder you want to see under the path N:\Other and share it 
    • add a $ (dollar) sign to the share name so it remains a hidden share
    • Examples:
      • D:\SharedFolders\Other\Manufacturing
      • D:\SharedFolders\Other\Projects

The DFS namespace set up and configuration

  • add the Namespace \\domain.local\N for the N: drive (just an example)
  • add the folders Archive, Departments and Other to the namespace
  • for each of those folders you add the shared sub-folders listed below as folder targets (they will appear on the Namespace tab when you click on the folder in the DFS Management console), with the target pointing to the according file share on the specific DFS server where the data will reside (a PowerShell sketch follows this list)
    • Departments\Marketing
    • Departments\Accounting
    • Archive\John Doe
    • Archive\Jane Doe
    • Other\Manufacturing
    • Other\Projects
    • This will actually create a shared sub-folder on the DFS root server for each of those folders in D:\DFSRoots\
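If you prefer to script this part, the namespace and a couple of the folder targets from the example could be created roughly as follows with the DFSN PowerShell module – the server names, root share and hidden shares are the example values from above and must already exist:

# Minimal sketch: create the domain-based namespace and some folder targets.
New-DfsnRoot   -Path '\\domain.local\N' -Type DomainV2 -TargetPath '\\DFSROOT01\N'

New-DfsnFolder -Path '\\domain.local\N\Departments\Marketing'  -TargetPath '\\FS-DEPT\Marketing$'
New-DfsnFolder -Path '\\domain.local\N\Departments\Accounting' -TargetPath '\\FS-DEPT\Accounting$'
New-DfsnFolder -Path '\\domain.local\N\Other\Projects'         -TargetPath '\\FS-OTHER\Projects$'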

Note – information about the above example

The example above is kept simple – I did not go into each and every right you would need to assign, purely to keep it simple and understandable. Please investigate and set the rights as you really need them.

As for the Archive tree, it might be beneficial to have a PowerShell script automate the folder creation, group creation and rights assignment for those NTFS paths, so you limit the possible failure rate when archiving terminated-employee data and other material in this branch.

What are the real benefits of this

  • add multiple folder targets for replication
    • replication can be beneficial in a server-migration scenario as well as in a subsidiary scenario
    • you can add replication on the Departments branch, for example per department folder – not every subsidiary will need a mirror of each department folder, rather just a few – this decreases the amount of data, the load on the connection and the size of the server respectively its disk space, and reduces cost as well
  • a simple rights structure fully based on groups
    • in general you should never ever use a user account to assign any rights – always create a group, whether for a drive-share, NTFS rights or any other purpose. Always create a group!
    • you can add and remove users from those groups
    • you can audit the permissions on the NTFS side rather quickly because they relate to meaningful group names
    • the groups can be audited against HR lists of members of the department or by department managers and directors to make sure only people that need to have certain access levels will have them
  • by limiting access to certain folders you limit the amount of damage a possible malware attack could cause
  • you can divide or summarize the actual file-servers that hold the data as needed in the long run
  • a simple group design with limited depth permissions is easier to maintain and audit
  • you have one central network drive that you assign in order to give everyone access – all data will be centrally under this path, independent of any file-server host name. This can be a huge advantage, because some applications store a UNC path rather than the mapped drive, which could cause you major headaches whenever you want to migrate, upgrade or retire your file servers later on
  • other file shares within the corporation, in other locations, can be made accessible by linking them in as a folder in e.g. the Other branch, so users don’t need to know and remember the UNC path – it will act like part of the mapped drive while pointing to the UNC path in the background

There are many more advantages to DFS and the whole design. I hope this gives you a good overview and idea of how to design or re-design your file-server structure and simplify the whole access structure. 

Full text search and DFS drive mappings

This is a challenge that is not easy to overcome. Still, though there is no official, directly implemented solution from Microsoft for this, I was able to develop and provide a solution that accesses the Windows Search index and returns results to the end user using only standard Windows components. All you need to know and do is described in the IT Search section of this website.

Build your own lab environment with VMware


If you have a virtual environment, like VMware, with a storage device behind it, you might be able to mirror your whole real / live environment and create a complete playground or shadow network – simulating any of your guest VMs like the real system and being able to change, update and adjust the configuration within this lab environment. This is actually rather easy to accomplish and can save you a lot of headaches.

This is just an example; you might be able to accomplish something similar by just cloning single VMs, and in theory it wouldn’t even matter whether you use VMware vSphere or something like Microsoft Hyper-V. Still, doing it this way has advantages – let me explain.

Assumed scenario:

You work with a VMware environment with one or more host systems and several guest VMs. You need to update software and configurations on those guest VMs but need to test this beforehand to ensure everything runs smoothly. The VMs are stored on a central storage device that can do volume-level snapshots.

Prepare the environment

  1. you need one NIC per host that you will connect to a switch
  2. ideally use an independent switch that has no other network connections than the one NIC per host
  3. create a Shadow-LAN virtual switch in your VMware cluster and assign that one NIC per host to it, so VMs can communicate properly across the cluster

What you do:

  1. create a snapshot and mount it to the VMware host systems as an additional volume
  2. go to the clone-volume file system and add the VMs you need to the inventory (you might want to rename to shadow-<servername> so you can easily identify them)
  3. re-configure their network connection from your regular LAN virtual switch to the shadow-LAN virtual switch you prepared (a PowerCLI example follows this list)
  4. start the added guest VM
    1. if you are asked if you moved or copied the guest just say you moved it – to avoid hardware / MAC and other changes possibly causing Windows to want to re-activate
    2. VMware might complain about a duplicate MAC address – you can ignore this, because you are on two different / independent networks
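If you use VMware PowerCLI, step 3 can be done in one pipeline; a minimal sketch, assuming the shadow VMs were renamed with a shadow- prefix and the port group is called SHADOW-LAN:

# Minimal sketch (PowerCLI): move all shadow-* VMs to the SHADOW-LAN port group.
Get-VM -Name 'shadow-*' |
    Get-NetworkAdapter |
    Set-NetworkAdapter -NetworkName 'SHADOW-LAN' -Confirm:$false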

Real usage example:

Let me give you a more detailed example on this with a few more details on what I personally used and did with this. The example you find below should help you understand the whole principle better.

  • VMware cluster with e.g. 10x host systems – we had enough RAM and CPU power that we could have 3x hosts go down – you won’t need that much, but of course you would have buffer for RAM and CPU usage
  • Nimble all-flash storage arrays in the background, connected to all the VMware hosts and using the Nimble VMware plugins (Note: Nimble was bought by HP / HPE as of today)
    • the Nimbles are configured to do volume level snapshots multiple times per day
  • all physical host systems had a dedicated network card (NIC) connected to an independent physical switch that was NOT connected to any other network switch
  • a virtual switch SHADOW-LAN was created and those physical NICs of the hosts systems had been assigned to it
    • this allowed any VM connected to this virtual switch to communicate with other VMs connected to the same virtual switch on other hosts

Due to migrations, software updates and quality-controlled systems, we constantly had the challenge of testing changes and adjustments thoroughly. So I came up with the solution of simply cloning a snapshot on the Nimble storage array the VM resided on and mounting it to the VMware cluster – taking only minutes – and then adding e.g. domain controllers, DHCP servers, the necessary file servers and the target guest system to the inventory in VMware, adjusting their names so we could quickly identify them (even adding them to resource pools if necessary) and, most importantly, changing their virtual switch configuration to the shadow switch.

Advantages and possibilities:

This allowed us to simulate the whole real-world system (VMs) and every change that we wanted. In order to get software there, we attached VHDs holding what we needed, or we even used a secondary internet connection to briefly connect to licensing or update services that vendors only provide online. The advantages of the solution go even further:

  • simulate everything you have in your VMware environment available and have it working like the real / live system
  • if necessary, provide internet access while connecting a SECONDARY internet connection (router / firewall) to physical shadow network switch
  • adding real printers to the shadow switch to be able to test print-outs (we had those cases)
  • add physical workstations to simulate whole production environments
  • update / refresh the whole system in only a few minutes by using a fresh-snapshot clone
  • only minimal to almost none impact on the storage / free space of your storage device
    • this is because we grab a Nimble snapshot that is cloned and therefore creates a new branch; only the deltas (changes) have an impact on the storage – even for “bigger” simulations we are talking about only a few gigabytes of changed data, if that much, depending of course on your storage and what you do
  • we installed VMware console connections on quality testing workstations so they could access the system directly on the console
    • of course only granting them minimal rights to this specific pool of VMs
    • avoiding that their access to the VMware environment had any impact to the real system
  • documenting any changes, challenges faced and solutions found
  • thanks to a physical switch – best practice is a layer-3 switch – we were able to simulate whole VLANs, routing etc. within the environment and even connect various physical systems like printers, workstations and temporarily an internet connection to this environment

It takes only a small amount of effort to initially prepare for those simulations, because the virtual shadow switch, the physical shadow switch and the hosts’ network card connections to this physical switch are a one-time effort. After that you just clone and mount snapshots and add the actual VMs you need while adjusting their network connection to the virtual shadow switch.

Once set up, preparing a simulation usually takes less than 30 minutes until everything is cloned, mounted, added to the inventory (incl. the NIC adjusted to the shadow switch) and booted up.

Why not just clone all VMs via VMware?

Good question – the answer is simple: this would have an impact on your storage capacity, because it would create an actual clone. It also takes longer to clone individual VMs than to just grab a storage-level snapshot and adjust what you want down to the volume level on the storage. Even the clean-up might be more work or leave some unwanted data behind – while with a clone of the volume you only need to remove the VM guest systems from the inventory and then unmount and delete the whole shadow volume.

I wrote this all up because I wanted to share it – the whole idea is not that special in theory, but I thought it is a good example of how you can have a huge and decent lab environment with only minimal effort. In any case, I hope the idea behind it helps some of you out there 🙂

 

 

Windows 2016 DHCP load balancer and its quirks


Windows 2016, and probably even 2012, allows you to create a real load-balanced / full-failover DHCP server configuration, unlike Windows 2008, which only allowed you to split the scopes.

Now, this works pretty well for the most part – but it actually has two major flaws you need to be aware of and act on.

Neither reservations nor server / scope options are replicated.

This is actually a big deal. If you change settings for the pool on server A, you end up with clients – depending on which DHCP server answered them – either applying the new settings from server A or pulling the old ones from server B.

Furthermore, a reservation might work when you put it in place, and then all of a sudden, a few days later, you get a ticket telling you there is an issue, and you find out that the reservation no longer applies. What happened? Well – you might have set the reservation on server B but not on server A; depending on which server answered the client, you again run into an issue.

Microsoft seems to have put only a quick-and-dirty synchronization in place, and the only true way around it is to force the two DHCP servers to synchronize with the following PowerShell command:
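
The cmdlet for this is Invoke-DhcpServerv4FailoverReplication from the DhcpServer module (available since Windows Server 2012 R2); a minimal call – the server name is just an example – looks like this:

    Invoke-DhcpServerv4FailoverReplication -ComputerName "dhcp01.mydomain.local" -Force

It replicates the scope configuration of the specified server to its failover partner, so point it at (or run it on) the server where you made the changes.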

This can be automated via the Task Scheduler by invoking the command from a DOS prompt via:
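
    powershell.exe -NoProfile -Command "Invoke-DhcpServerv4FailoverReplication -Force"

Scheduling this to run every few hours keeps both servers reasonably in sync; the -Force switch is needed for unattended runs, because the cmdlet otherwise asks for confirmation.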

But even then, you'd better check all your DHCP servers and always make sure any changes are made on all of them, or at least correctly replicated. Otherwise you might encounter the weirdest issues.

Automate Outlook signature rollouts while pulling the information from Active Directory / LDAP

The Outlook signature script you will find below is a bit more complicated than most other scripts I post, because you might need to adjust a bit more. I used it for several years (as you can see in the script when it comes to Outlook versions and registry keys) in many networks, and in most cases it worked just flawlessly once it was set up.

What does this script do exactly?

Good question – every time a user logs on, it writes a signature file to their profile. The information in the file is pulled from Active Directory, where you can e.g. change the phone number, cell phone number or last name because an employee got married – the signature file will update automatically. Even more important is the onboarding process: you can simply forget about setting up signatures. And if you don't use roaming profiles – no worries – the signature will be auto-created everywhere, as long as you call the script via a login script / logon script. In theory you could call it via a GPO as well.

What you need to do – simply put

  1. get an approved example signature from HR or marketing or whoever can provide it, and actually set it up in your Outlook as a signature.
  2. then replace names and phone numbers with variables (I will come to that) and save it in your Outlook.
  3. go to your %appdata%\Microsoft\Signatures folder and grab the three files (.txt / .htm / .rtf) and the sub-folder with the name of the signature you saved
  4. copy them to your \\mydomain\netlogon\signatures folder (you might need to create it – any other location would need some adjustment in the script)
  5. you will need to open all three file formats (.txt / .htm / .rtf) in a plain-text editor like NOTEPAD.EXE (Windows) or Notepad++
  6. make sure the variables are a complete word and not somehow divided or have characters replaced – if something is not how it should be, adjust it and save the files
  7. Copy the OutlookSignatures.vbs file to the same path and adjust it especially in the header-section with your domain information and execute the script in a CMD / command prompt via \\mydomain\netlogon\signatures\OutlookSignatures.vbs “my signature” 1 1
  8. Now go back to your Outlook (probably close and re-open it) and create a new email – you should see that your signature was auto-generated and the variables have been replaced with your user-specific values from Active Directory / LDAP.
  9. you should switch your email format through all three formats – HTML / Plain Text / RTF – and check the signature in each, to make sure all three files were generated correctly
  10. If something is not as expected, check the source-signature files and their variables and if needed adjust the variable-replacement section of the script

What you gain from this

The signatures will auto-generate, and you get a cheap way to roll out corporate-identity-conform signatures without spending a lot of money on tools that might provide an easier configuration and some fancier features – but if you don't need those features and can live with a more technical approach, this is an inexpensive way to implement it.

The variables

The script pulls certain properties / attributes of the currently logged-on user object from Active Directory – those are configured in line 33, and if you need more you will need to add them there.

Between lines 145 and 181 you can see that the script replaces placeholder variables in the source files (.TXT / .HTM / .RTF) with the information pulled from Active Directory – all placeholders in your source files need to look like @@AnyName@@ – this is to make sure the script has a unique definition of what to replace.

Example:
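
The replacement is essentially a single VBScript Replace() call, along these lines (the variable holding the AD value may be named differently in your copy of the script):

    ' search @@GivenName@@ in the current line and swap in the AD attribute givenName
    strCurrentLine = Replace(strCurrentLine, "@@GivenName@@", objUser.givenName)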

This does nothing other than:

  1. update the variable strCurrentLine by
    1. searching in the variable strCurrentLine
    2. for the value @@GivenName@@
    3. and replacing it with the LDAP attribute “givenName”

There are a few special examples in the script, e.g. for putting an HTML-conform line break after the job title in the .HTM file only (I had situations where only the HTML signature did not do a line break, or the text version alone was not doing something, etc.) – in the end this allows you to adjust something in just one of the three signature formats.

Another example in the script writes an additional line with the cell phone / mobile number if available. If the number is set in the user object, a new line will be created depending on the file format – if the number is not set, the search variable will be removed from the signature (you of course don't want it there) instead of writing the information out. In this case we add “Cell: ” as a prefix before the number so the signature clearly indicates what this number is about. Simply put, since we replace a variable and not a whole line, we have to write out more than just the number – in this case we want to add text.

Can you execute the script with various signatures per department?

Yes – you actually can – but you would need to do this with an additional script and e.g. IFMEMBER from Microsoft or group-based GPOs, etc.

Can you create more than one signature?

Yes – you can execute the script in various ways – you can roll out a NEW MAIL signature (full-length signature), a second version for REPLIES (short signature), and additional signatures the user can choose from that aren't set as either the NEW MAIL or REPLY signature. The script header explains how to call the script, what parameters it expects and how to set them.

Feel free to use the script below and adjust it to your needs. I know some of the stuff, like the domain name, could be detected automatically instead of being hard-coded in the script – even the registry keys could be handled in a more advanced way – feel free to do so, but in the end it is not that much work and it does its job either way.

Debugging the script

At line 79 the statement "On Error Resume Next" suppresses errors that might arise. This is good for production, so that the client/user sees as few messages as possible due to timeouts or special circumstances – but if you want to debug something or are testing the script itself, please comment out that line so errors actually show up. They might not mean much in some cases, but they might also give you the hint you need to see what is going wrong.

Print Server backup script

Print servers need to be backed up, for two main reasons. One is that users heavily depend on printers, and a print server that isn't working properly will cause immediate helpdesk tickets and unhappy users. The other is that installing a new driver – be it a new version, a new model or even an additional manufacturer – can cause other print drivers to act up or even stop working; many administrators know and fear that.

Windows Server actually allows you to back up the current print drivers, installed printers and their configuration. You can use this to migrate your printers or to back them up. Of course, you can simply depend on e.g. VMware snapshots, storage-level snapshots or other backups of your server. But you could also just export the whole print server configuration using the scripts below. They actually call the Windows API to back up the printers and store it all in a file that you can keep centrally. You don't rely on just snapshots or a full server backup for e.g. your SQL databases either, do you?
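
Under the hood, Windows ships the PrintBrm.exe tool for exactly this kind of export/import. Just to illustrate the core of what such a backup does, a manual export boils down to a single call like this (the target path is only an example):

    %windir%\System32\spool\tools\PrintBrm.exe -B -F D:\Backup\PrintServer.printerExport

The resulting .printerExport file contains drivers, printers, ports and settings and can be restored with PrintBrm.exe -R or through the Print Management MMC.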

The script uses a .CMD file that executes the actual backup and sends an email report, using the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) since I already had it flying around – you could replace the mail send option with another prepared SMTPSEND client, a VBS script, or just remove it completely. Additionally there is a .VBS script that cleans up the target backup files depending on the age of the files in the specified directory.

All the parameters are explained and set in the top part of the .CMD file – I will therefore not explain them here again – you should not need to modify the scripts by default, but feel free to do so. Of course, you should create a scheduled task and execute the .CMD periodically. This can save you time and headache in case you have a malfunctioning print server system. The restore can easily be done through the Print Management MMC that Windows provides, because the actual backup files are created using the same Windows APIs. Your end users will hopefully be happy that their printers are back to work in no time.

Password expiration notifications for end users

Today I want to share a script with you that allows you to inform your users via email that their password will expire or is already expired, and reminds them about your password policies, like complex passwords and how to choose a password. This is a simple VBScript and can easily be adjusted. The email will be generated from a file, in this case an HTML file that you provide. You can adjust the content of this HTML file as you need it. There are surely many commercial solutions out there that can do more than this script, but if you want to save the money and are satisfied with the provided options, this can certainly be a good alternative.

Let me mention one thing about passwords first – most of us live with the usual policy that passwords have to be changed periodically and need a certain length and complexity. We all live with the daily help-desk calls about forgotten passwords, passwords that were not changed in time (pretty much why I wrote this script) and so on. What changed was a recommendation from late 2017 or early 2018, based on actual statistics and data, that now says: yes, require complex passwords of a certain length, but do not enforce periodic changes, because doing so actually results in less secure passwords – users tend to just change a number or are more likely to write the password down, compromising the whole attempt to secure the system.

Anyway – the next few lines explain the parameters you can adjust in the top section of the VBS script – further below I will post the script and an example HTML file so you can start right away.

The options are between lines 7 and 61 – don't be scared – most of them are pretty simple to understand and are actually explained in the script itself. You should not need to modify anything outside that range.

About the parameter naming convention and possible values:

  • starting with str are strings – those expect alphanumeric values in quotation marks, e.g. “text”
  • starting with int are integer values – those are plain numeric values, e.g. 123
  • starting with bol are boolean values – those can be either TRUE or FALSE, meaning on or off

Here are the options you can set:

  • strSMTPServer: SMTP mail server DNS name or IP address
  • intSMTPServerPort: SMTP mail server port – normally 25
  • strFrom: SMTP mail from address
  • strToAdmin: SMTP mail to address for administrator emails
  • strAdminMailSubject: subject for mail to administrators
  • strUserMailSubjectExpired: subject for mails to user when password is expired
  • strUserMailSubjectWillExpire: subject for mail to user when password will expire – the exact word REPLACEWITHDAYS will be replaced by the days left value so mention it in the subject line if you want to see the value there
  • strBodyURL: URL or full file-path (HTML file path e.g. file://) to import for body, the entire content of this URL/FILE will be imported to the body of the email and should explain ways how to change the password
  • strAttachment: full file-path to an attachment for the email to the users / leave empty if no attachment
  • strLDAPSortColumn: per default: pwdLastSet / sort column for LDAP query
  • intStartWithPWexpiresInDays: if the password expires in N days or less, the script will inform the user – keep in mind that if you run the script daily, those users will get an email every day once their password expires in less than the indicated number of days. 5 is surely a good start.
  • bolIgnoreDisabledAccounts: Disabled accounts should always be ignored
  • bolInformAdminAboutPWexpires: this will inform the admin about expiring passwords
  • bolInformAdminAboutPWisExpired: this will inform the admin about accounts with expired passwords
  • bolInformAdminAboutPWneverExpires: this will inform the admin about accounts with password set to never expire
  • bolInformAdminAboutUserCantChangePW: this will inform the admin about users who are not allowed to change their password
  • bolInformAdminAboutAccountDisabled: this will inform the admin about disabled accounts found – this would have been done in ADS by an administrator
  • bolInformAdminAboutExpiredUserAccount: this will inform the admin if the user account has an expiration date and the account is expired
  • bolInformAdminAboutAccountWithoutEMail: this will inform the admin about accounts without a set email address
  • bolInformAdminAboutStillGoodPasswords: this will inform the admin about users/passwords that are still valid
  • bolInformAdminAboutIgnoredUsersExcludedByGroup: this will inform the admin about users that have been ignored by the strGroupsExclude filter
  • Please note: the account-locked status will not be checked; this should be corrected automatically by the default security GPO instead (and will be in most cases by default)
  • strSearchOUs: Filter Priority 1 – only users in those OU paths will be processed. Use an LDAP DN like: “OU=Folder,OU=Folder,DC=Domain,DC=local” – you do not need to include the DC=Domain,DC=local, the script will add this information if necessary. Use | (pipe) if you want to add more than one LDAP DN path. Leave empty (“”) to disable this filter
  • strGroupsExclude: Filter Priority 2 – if the user object is still not excluded, this group exclude filter will be applied. If the user is a member of one of those groups (if multiple groups are defined), they will be ignored. Use | (pipe) if you want to add more than one group name. Leave empty (“”) to disable this filter. Example: “Group Number1|GroupNumber2”
  • strGroupsInclude: Filter Priority 3 – if the user object is still not excluded, this group include filter will be applied. The user has to be a member of one of those groups (if multiple groups are defined). Use | (pipe) if you want to add more than one group name. Leave empty (“”) to disable this filter. Example: “Group Number1|GroupNumber2”
  • bolDebug: set TRUE for script-output, highly recommended to execute the Script in CMD with CSCRIPT <ScriptName> so you see it in a command window instead of dialog boxes.
  • bolAttachDebugToAdminMail: the debug output will be attached to the admin-mail (independent from bolDebug)
  • bolTestDebugOutputToConsoleOnly: this will disable the mail.send – only output to the CMD will be generated, please enable bolDebug
  • bolRedirectMailToAdmin: this will redirect all mails to the admin instead of sending them to the user – the subject line will include the user's mail address in this case – this allows you to do a real test and actually see what would be sent out to whom, without actually sending the emails to the end users
  • bolAdminMailOnly: this will send the admin-mail only, no user mail will be generated

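To run it automatically, a daily scheduled task that calls the script via CSCRIPT is enough – the script name and path below are just an example:

    cscript.exe //NoLogo C:\Scripts\PasswordExpirationNotify.vbs

With bolDebug enabled, the output then shows up in the console (and in the admin mail if bolAttachDebugToAdminMail is set) instead of popping up dialog boxes.
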
As always – feel free to reach out to me if you have any questions or comments.

Script-based SQL Express backups

SQL Express is widely used but has one huge downside: there is no SQL Agent available. Even Windows-internal databases, especially WSUS / Windows Server Update Services, are in the end SQL Express-like databases that do not have a SQL Agent.

Now, you can have central SQL Servers with Agents and have them backed up – and I recommend doing so if possible. But for the many times this is not possible, you will need to find another way to create those nice little .BAK files, aka SQL Maintenance Plan backups. To work around this issue, I once wrote a script that automates this for each database found on a specific SQL server. It creates the backups via SQLCMD commands and even cleans up obsolete files (files older than x days), almost like SQL Maintenance Plans do.
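
To give you an idea of what the script does per database, a single SQLCMD backup statement essentially looks like this (instance name, database and target path are placeholders – the script loops over all databases it finds and builds these calls for you):

    sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'D:\SQLBackup\MyDatabase.bak' WITH INIT, NAME = 'MyDatabase full backup'"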

The script is divided into a .CMD file that executes the actual backup and where you set the configuration/parameters, and a .VBS file that is controlled by the .CMD script and performs the backup clean-up. In the end you can have the .CMD send an email report – I used the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) for this since I already had it flying around – you could replace the mail send option with another prepared SMTPSEND client, a VBS script, or just remove it completely.

Adjusting the settings / parameters:

This is all done in the SQLBACKUP.CMD file – the header section pretty much explains all you need to know, from SQL server, SQL user and password to mail server and recipients.

If you want to execute the SQL backups as the Windows user that is executing the script, you need to exchange the REM (remark) statements on the following two lines further down in the script. I apologize for the inconvenience – this is an old script I never updated to have those settings in the header (more automated), I always just changed the lines.

Everything else should be rather easy. Of course you will need sufficient access rights to the SQL databases and your destination backup folder. The Task Scheduler might work best if you execute the script with “cmd /c c:\scripts\sqlbackup.cmd” (change the path as you need it) and set the working directory / start-in folder accordingly. It might help to execute the task with elevated rights, etc. – all depending on your system's configuration.

Below are the two scripts – I hope this helps some of you. The generated .BAK files can simply be restored via the SQL GUI because they are native SQL backup files.

Join systems to a domain and create KeePass server entries for local admins

Please note – this script was updated – you find the updated post here.

One of the challenges in daily IT operations is the onboarding of workstations and servers (namely, the domain join). Over the years I came across and tried many ways to accomplish this. Today I want to share a script and solution others might find helpful, but first let's get down to some theory and background.

The goals and challenges:

  • simple domain join after a system was imaged
    • this is in theory possible as a fully automated process via various imaging solutions – I found that WDS (Microsoft Windows Deployment Services) is in most cases the easiest way to accomplish this, while still being usable in consulting for various clients, in an enterprise for various departments, etc. Since Windows 10 came into the equation some of the automation with WDS became more challenging – so keeping it simple with some additional manual labor is often the easiest way to go, and a PowerShell script became a perfect solution to simplify the process.
  • systems should have a local admin account (not Administrator / SID 500, which should remain disabled) with an individual password
    • typing this manually, you always risk that the password is misspelled either in your password database or on the actual operating system
    • if you think it is a good idea to have the same password on all your clients I actually suggest you do some security related research!

The PowerShell script below will do the following for you:

  1. Ask for the name of the system (this will change the hostname/computername)
  2. Ask for credentials for the KeePass / Pleasant Password Server
  3. Ask for credentials to join the system to the domain
  4. Create a local admin user account on the system
  5. Generate a password for this account
  6. Check if there is an existing KeePass / Pleasant Password Server entry for this system
  7. If not – it will proceed and create an entry with the machine name, username, password and various additional information like
    1. manufacturer
    2. model
    3. serial number / service tag
    4. UEFI BIOS Windows license key
    5. MAC addresses of all network cards Windows knows about
  8. And finally it will join the domain and put the system right away into the defined OU

The whole script is only an example – you don't have to use KeePass / Pleasant Password Server, nor is the script perfect for every situation – you can take it and modify it as you need it: point it to various IT asset databases, let it offer a choice of predefined OUs, etc. – adjust it as needed – in general it is a very useful baseline and I wanted to share it.

One of the challenges is executing the script as administrator (elevated rights) while also bypassing the script execution restrictions without compromising them in the default image, e.g. by disabling this important security feature in the image itself. To accomplish this, a simple CMD script executes the PowerShell script. A CMD script can be right-clicked and run as administrator to gain elevated rights, which is as of today not possible by default with PowerShell scripts (.ps1).

Create the following two files “Execute-DomainJoin.cmd” and “Execute-DomainJoin.ps1” and save them in the same directory or e.g. a portable flash drive. Adjust the PowerShell script so it connects to your domain and local systems.
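
Just to illustrate the idea, the .cmd wrapper does not need to be much more than a one-liner along these lines (assuming both files sit next to each other – the actual file in this post may look slightly different):

    @echo off
    rem runs the PowerShell script that sits next to this .cmd, bypassing the execution policy for this one call
    powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%~dp0Execute-DomainJoin.ps1"
    pause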

Please note – this script was updated – you find the updated post here.

Explaining, adjusting and guiding you through the PowerShell script

It is important that you understand the script so you can make adjustments to it. I will try to explain everything that is important and reference some line-numbers while doing so.

  1. Lines 1-30 are just a general introduction and show some generic information
  2. Lines 31-76 hold some functions to generate a password, to bypass some certificate issues etc
    1. Lines 35-38 are worth taking a look at – here are all the characters of the four categories that will be used to generate a password. Characters that are usually hard to read in some fonts, and other characters that might cause issues, are already excluded – of course, adjust especially line 38 to your preferences and add more symbols or remove what you don't want to use
  3. Lines 77-89 are just informational
  4. Lines 90-96 expect some user-input
    1. new computername
    2. get credentials for the domain join (admin)
      1. the script will not validate the credentials, in theory this could be done but I never found it that important
    3. get credentials to read/write on the password database server (often not the actual admin credentials, therefore I separated those two)
      1. the script will not validate the credentials, in theory this could be done but I never found it that important
    4. the local admin username that will be created
      1. $localAdminUser = $(“$ComputerName” + “_Admin”)
      2. the above line will create a hostname_admin account – you can adjust this to your preferences
    5. 94-95 will generate a password and encrypt it so it can be used to create the local account
  5. Lines 97-103 are just informational
  6. Lines 104-216 – this is actually the whole password server communication and entry check and generation
    1. 104-115 – those lines gather various information from your current system like serial number, UEFI Windows key, etc. – you can keep them as they are
    2. 116 – please enter the URL of your password server here
    3. 117 – here you need to enter the folder on your password server where the generated credentials will be stored
    4. 118 – this is the subject of the entry that will be generated – adjust this to your preferences
    5. 119-120 – those are username/password for the entry – you should leave this as is
    6. 121-134 – those lines are the details in your password server entry – adjust them to your likes
    7. 135-165 – this actually will execute the following on the REST API on your password server
      1. connect to it
      2. check if an entry with the same username already exists
    8. 166-189 – this will raise an alert that this user already exists on your password server – line 189 will actually exit the whole script
    9. 190-216 – this block will write to the password server, because it did not find an entry with the new username
  7. Lines 217-241 show the newly created username and password – the script actually suggests you compare the entries on your password server with the information shown to make sure everything is correct
  8. Lines 242-251 will create the new local admin account on the system and set the password
  9. Lines 252-267 are informational
  10. Line 268 will execute the actual domain join (a sketch of such a call follows this list)
    1. please adjust the -Domain and the -OUPath parameters to your specific needs
    2. note that the command will automatically restart the system
  11. Lines 269-282 are informational – if anything goes wrong, those lines are shown and help you take further steps after the failed domain join – in most cases those suggestions will help; in the end the error output of the domain-join command (line 268) will indicate what went wrong. The restart of the system would more or less skip past this message anyway.
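
For reference, a domain join like the one in line 268 is typically done with the Add-Computer cmdlet (its -DomainName parameter accepts -Domain as an alias); a rough sketch of such a call, with the domain, OU path and credential variable as placeholders, would be:

    # placeholders: adjust the domain, OU and the credential / computer-name variables to your environment
    Add-Computer -DomainName "mydomain.local" -OUPath "OU=Workstations,DC=mydomain,DC=local" -Credential $domainJoinCredential -NewName $ComputerName -Restart -Force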

If you have any questions, feel free to reach out to me. The script could be cleaned up more – but I wanted to provide a working version of it – so I just did a quick clean-up of some special stuff and posted it here. Personally I like things a bit more structured, but as said – this is just a general example.

Please note – this script was updated – you find the updated post here.

This script is also mentioned on the API Examples page on the Pleasant Solutions web site here.

Monitor your projectors and light bulbs with PJLink

Below is a solution rather than a question – I just thought I'd share it with the rest of the Paessler/PRTG world. I originally posted this here: https://kb.paessler.com/en/topic/76639-monitor-your-projectors-and-light-bulbs-with-pjlink

Our challenge:

Projectors were having issues “out of nowhere” and it was of course always urgent – the IT department didn't have any monitoring on them. Of course, normally there is already a conference going on when it happens, and the projector might be really down already – most likely due to an end-of-life light bulb.

What we have:

NEC and Optima projectors – with Ethernet interfaces

Research and work:

We found out that there is a PJLink protocol (http://pjlink.jbmia.or.jp/english/index.html) that is supported by those devices. A little more research and we found a C# library for Visual Studio on GitHub (https://github.com/uow-dmurrell/ProjectorControl). We took it and modified the LAMPHOURS value so we could access it publicly (only one lamp for now – but we only have projectors with one lamp), modified the TEST project that comes with it to accept IP addresses as a parameter, changed the output into the PRTG XML format, and further modified the output so it is numeric. (We are not aware that any of this is copyright protected; if so, we apologize and will stop using those components.)

Result:

We added a PRTG sensor based on XML data (advanced EXE/XML) with the IP of the device as parameter and get the following information (a sample of the XML output follows the list):

  • LampHours – we set ours to max. 4000 as error level
  • CoverStatus (0 OK / 1 ERROR)
  • FanStatus (0 OK / 1 ERROR)
  • FilterStatus (0 OK / 1 ERROR)
  • LampStatus (0 OK / 1 ERROR)
  • PowerStatus (0 off / 1 on = OK; 2 cool down / 3 warm up = WARNING (we don't get alerts there); 4 unknown = ERROR)
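
For anyone who has not built an advanced EXE/XML sensor before: PRTG expects the executable to print a simple <prtg>/<result> XML structure, so with the channels above the output looks roughly like this (the values are made up):

    <prtg>
      <result>
        <channel>LampHours</channel>
        <value>2350</value>
      </result>
      <result>
        <channel>LampStatus</channel>
        <value>0</value>
      </result>
      <result>
        <channel>PowerStatus</channel>
        <value>1</value>
      </result>
    </prtg>

Limits such as the 4000-hour error level for LampHours can then be set on the PRTG channel itself (or delivered via the additional limit tags the XML format supports).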

This actually helps us keep as close an eye as possible on the projectors and be proactive if something is already reported as defective or the LampHours (adjustable, depending on manufacturer/model) are getting close to the limit, so we are ready for it.

Code changes:

ProjectorInfo.cs – added this:

Program.cs (or create your own project)