Solutions

Linux and DHCP reservations aren’t working

This is something I just came across myself: while deploying an Ubuntu Linux VM, the DHCP reservation did not work. The environment used a Windows 2016 DHCP server.

Looking at the wrong DHCP lease, I quickly saw an extremely long client identifier instead of the usual MAC address and figured that the Linux VM used some kind of randomization for the interface.

IFCONFIG on the system showed the correct hardware MAC address.

It took me a few minutes of research and testing till I found the root cause and rather simple solution.

Many Linux distributions, including Ubuntu, replaced their NIC configuration handling with a newer system called Netplan.

I read that a Windows Server 2019 DHCP server would likely handle this correctly, but I did not have time to test it. The following worked for me:

  • List the contents of /etc/netplan
    • ls /etc/netplan
    • there should be a file ending in .yaml
  • Edit this .yaml file
    • sudo nano /etc/netplan/<filename>.yaml

The file is structured like this:
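A typical Ubuntu netplan file looks roughly like the sketch below – the interface name (here ens160) and the renderer will differ per system, so treat this only as an orientation, not a copy template:

  network:
    version: 2
    renderer: networkd
    ethernets:
      ens160:
        dhcp4: true
        dhcp-identifier: mac   # this is the line you add, see below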

Under your NIC device name add the line dhcp-identifier: mac, then save (CTRL+O) and exit (CTRL+X) the file.

Now you can either try to apply the netplan config via sudo netplan apply or simply reboot.

This should solve the issue.

Windows 11 and SQL (Express) issues


Due to a change in how Windows 11 presents the disk sector size, you can run into issues with SQL or SQL Express after an upgrade or even on brand-new installations.

SQL might simply fail to start after an upgrade, with an Error 1000 in the Event Viewer Application log similar to the one below:

This is especially true for the Samsung SSD 980 – be aware that the SSD 980 Pro does not have this issue, only the SSD 980. There are OEM versions of it with the same issue, and actually a number of other disks as well.

The root cause is that these devices report their true physical sector size, which causes SQL Server to fail. This is still true with SQL Express 2019, and earlier versions are affected as well.

As described in this Microsoft article, you can add a registry key and reboot to make Windows 11 behave like Windows 10 and earlier Windows versions.
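If I recall the Microsoft guidance correctly, the value in question is ForcedPhysicalSectorSizeInBytes under the stornvme parameters – please verify against the linked article before applying it. A PowerShell sketch:

  # Sketch based on the Microsoft guidance - verify key and value before use
  New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\stornvme\Parameters\Device' -Force | Out-Null
  New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\stornvme\Parameters\Device' `
      -Name 'ForcedPhysicalSectorSizeInBytes' -PropertyType MultiString -Value @('* 4095') -Force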

Of course, as an alternative you can either install SQL on another disk drive or replace the drive with one that does not have these compatibility issues.

It remains unclear whether there will be future updates for this from either Microsoft or disk vendors like Samsung. For now, this simple registry adjustment fixes the issue.

Reboot after the registry adjustment for the change to take effect.

MeshCentral – Certificate installation


MeshCentral is an open-source remote support platform. It runs on Windows or Linux and is self-hosted.

While it supports Let’s Encrypt (letsencrypt.org) certificates, this is not always a possible option. Issues you can run into are:

  • port 80 incoming is blocked by your internet provider
  • your DNS provider does not support the ACME protocol needed

Of course, you might also simply want to use your own certificate. To do so, go to your regular CA (certificate authority) and have your certificate issued. An easy way is to use Windows IIS: request a new certificate via CSR, have it issued, and finalize the request in IIS. Your last step is to export it including the private key.

Transfer this file to your MeshCentral server (you can just use MeshCentral to transfer the file). Next you will need OpenSSL, which is often pre-installed on Linux and Raspberry Pi OS; on Windows you will need to download it separately.

OpenSSL is used on the command line to extract the unencrypted key and the certificate as separate files so MeshCentral can use them. Follow the next steps; we assume your source certificate file is named source.pfx.

  1. openssl pkcs12 -in source.pfx -nocerts -out encryptedkey.key
    1. this will ask for the password for source.pfx
    2. it will also ask and have you confirm a new password (can be the same) for the destination file
  2. openssl rsa -in encryptedkey.key -out webserver-cert-private.key
    1. it will ask you for the new password of the file you created in step 1
    2. this will overwrite the webserver-cert-private.key file with a passwordless key-file as needed by MeshCentral
  3. openssl pkcs12 -in source.pfx -clcerts -nokeys -out webserver-cert-public.crt
    1. this will ask for the password for source.pfx
    2. it will overwrite the webserver-cert-public.crt file with the public part of your certificate

Now restart the MeshCentral service (or reboot the server) and open a new browser window; your certificate should work now.

 

Windows 10 Build 2004 / 20H1 – SMBv1 network drives not connecting


The newest builds and updates can break some Windows 10 network connections. I saw this specifically in a situation with an SMBv1 drive that was connected via FQDN per GPO.

Windows was not able to connect to the drive; looking at NET USE, all you saw was "Reconnecting".

Connecting to the same share via hostname and/or IP worked just fine, as did using the UNC path directly.

The eventual solution is a simple registry adjustment that has to be done in the user-profile HKCU area, so no elevated rights are needed.

Steps:

  1. open REGEDIT
  2. go to HKCU\Network
  3. select the key with the drive-letter you have issues with
  4. add a new REG_DWORD value (see the PowerShell sketch below)
    1. Name: ProviderFlags
    2. Value: 1 (decimal) / 0x00000001
  5. Reboot
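A PowerShell sketch of step 4, assuming the problematic drive letter is Z: (adjust as needed):

  # Set ProviderFlags = 1 for the mapped drive Z: in the current user's hive
  New-ItemProperty -Path 'HKCU:\Network\Z' -Name 'ProviderFlags' `
      -PropertyType DWord -Value 1 -Force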

Your network drive should work normally again.

Background and Explanation:

The ProviderFlags value instructs Windows to reconnect the SMBv1 network drive, more or less. In the end it did not matter whether the drive was connected per FQDN, IP or hostname – it was the reconnect that the GPO implied, respectively the NET USE /PERSISTENT:YES switch. If you use a (netlogon) script instead, you could map the drive with /PERSISTENT:NO and either not see the issue at all or solve it that way.

Ultimately this is specific to SMBv1, and I cannot warn enough about the security risks of this protocol. Still, here and there are systems that need to stick around – hopefully secured by firewalls and even sandboxed networks.

ActiveDirectory/LDAP result limits – MaxPageSize


Active Directory, respectively LDAP, has a result limit setting, MaxPageSize. By default it is set to 1000 rows per query.

This is primarily important if you use some kind of programming language to get results from LDAP: your code must compensate for this limit and use paging.

Your LDAP query does not need to specify the limit; only the code needs to do the paging, as you always get at most the number of results allowed by the current setting.
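As an illustration of such paging, here is a minimal PowerShell sketch using System.DirectoryServices – setting PageSize on the searcher makes it transparently page through result sets larger than MaxPageSize (the filter and attribute are just examples):

  # Paged LDAP query against the current domain
  $searcher = New-Object System.DirectoryServices.DirectorySearcher
  $searcher.Filter   = '(objectCategory=user)'
  $searcher.PageSize = 500          # any value > 0 enables paged results
  $searcher.PropertiesToLoad.Add('sAMAccountName') | Out-Null
  $results = $searcher.FindAll()
  "Returned $($results.Count) results (can exceed MaxPageSize thanks to paging)"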

To check your settings, run the following commands in a command prompt / cmd window:
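The original command listing was not preserved in this copy; from memory, the usual way to display the current LDAP policies is via ntdsutil on a domain controller (connect to your own DC name instead of localhost; verify before relying on it):

  ntdsutil
  ldap policies
  connections
  connect to server localhost
  q
  show values
  q
  q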

In theory you could also set different values, assuming you have the permission level to do so. But this is not recommended; engage paging instead, as you otherwise risk overloading your DCs – even if your own queries don't cause it, a possible DoS attack could, malicious or not. So leave the limits as they are, but be aware of them.

 

Windows Print Server Aliases


Windows Print Server Aliases – what is that and why would you even need to think about it?

For file servers, you can set up DFS structures and have a single point of entry from the perspective of the client. It's a simple named path and works rather flawlessly if set up right and monitored, e.g. with PRTG. But what about your print server? Is it a defined hostname with the printers sitting on that host? What happens when you want to upgrade the host to a new Windows version, or theoretically even do some special DNS routing (that's very advanced and has hurdles; I will not address it in this posting)?

Well – you can of course set up an alias (CNAME) in your DNS, but soon you will discover that you can't connect to the printers on this server. This is because you are missing some registry tweaks. At this point I also want to make you aware that I have seen Windows updates remove those keys, so keep this article handy to reconstruct the registry in case of any issues.

You will need a total of three registry keys added, as follows:

The first key enables DnsOnWire for the print server itself. This is needed to make the print server aware that you might use DNS alias / CNAME entries to access it. More can be found e.g. here: Windows couldn't connect to the printer – Windows Server | Microsoft Docs

The second key, DisableStrictNameChecking, configures the SMB server (LanmanServer) – it needs to be aware as well that we will use CNAMEs to access the shares on the server. You can find some more information at the following link: Can't access SMB file server – Windows Server | Microsoft Docs

And last but not least, OptionalNames – the key that is most hidden but just as important. You can make it a REG_MULTI_SZ key, but it also works as a simple REG_SZ key containing the short CNAME alias you specified; you don't even need to use the FQDN.

There are many ways to accomplish this last key; it changed throughout the Windows versions and was possibly even renamed. The worst I saw, on a Windows 2016 server, was that it vanished after an update session and reboot – so be prepared for that. Simply recreating it and rebooting fixed the issue.
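Putting it together, a hedged PowerShell sketch of the three values (key paths per the linked Microsoft articles – verify them, and replace printsrv with your actual alias):

  $alias = 'printsrv'   # example CNAME / alias, adjust to your environment

  # 1. DnsOnWire - make the print spooler accept DNS alias names
  New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Print' `
      -Name 'DnsOnWire' -PropertyType DWord -Value 1 -Force

  # 2. DisableStrictNameChecking - let the SMB server answer to other names
  New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
      -Name 'DisableStrictNameChecking' -PropertyType DWord -Value 1 -Force

  # 3. OptionalNames - the alias itself (REG_SZ; REG_MULTI_SZ works for several aliases)
  New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
      -Name 'OptionalNames' -PropertyType String -Value $alias -Force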

Also, make sure you reboot after those changes, otherwise it won’t work.

Make Microsoft TEAMS the default IM application


If you have multiple applications that act as chat, respectively IM, applications but want Microsoft Teams to be the default instant messenger – especially so that Outlook shows the correct online/offline and free/busy status for employees and they can start a conversation directly from there – you need to make sure that Microsoft Teams is the default IM provider.

This came up especially in combination with Cisco Jabber, which is often used as the software phone client for a Cisco phone system. This application might override the user's setting and take over presence, especially in Microsoft Outlook. Cisco has an article about this here that talks about various registry keys, but that is actually not the direct solution for this issue.

To make Teams, if installed, the default IM application for your employees, it is easiest to use Group Policies (GPOs). Simply follow the steps below. These settings check whether Microsoft Teams is available and, if so, set it as the default IM provider. Close Microsoft Outlook and open it again, and you will see the status icons and message box associated with Microsoft Teams.

Of course, you could slightly adjust the suggested GPO settings and engage e.g. Cisco Jabber or any other IM provider available instead. Just have a look at the registry path HKEY_CURRENT_USER\Software\IM Providers and see what is available and set the GPO accordingly. All you need is the name of the sub key for the DefaultIMApp value.

Steps for the user GPO

  1. Create a new GPO (or choose an existing GPO)
    1. This will be a User Configuration
  2. Navigate to User Configuration\Preferences\Windows Settings\Registry
  3. Create a new Registry Item
  4. Settings on General tab
    1. Leave the Action settings to Update
    2. Hive: HKEY_CURRENT_USER
    3. Key Path: Software\IM Providers
    4. Value name: DefaultIMApp
    5. Value type: REG_SZ
    6. Value data: Teams
  5. Settings on Common tab
    1. Check Run in logged-on user’s security context (user policy option)
    2. Check Item-level targeting
    3. Click on Targeting and apply the following settings
      1. The following steps make sure that this is only applied if Microsoft TEAMS is available as an IM provider
      2. Click on New Item and choose Registry Match
      3. Match type: Key exists
      4. Hive: HKEY_CURRENT_USER 
      5. Key Path: Software\IM Providers\Teams
    4. It is good practice to provide a Description for this item – e.g.: This will set Microsoft TEAMS as default IM Provider for e.g. Outlook – if available as IM Provider.
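If you want to test the effect on a single machine before building the GPO, the registry change the preference item performs can be sketched in PowerShell like this (mirroring the item-level targeting above):

  # Set Teams as default IM provider for the current user,
  # but only if Teams has registered itself as an IM provider.
  $providers = 'HKCU:\Software\IM Providers'
  if (Test-Path "$providers\Teams") {
      Set-ItemProperty -Path $providers -Name 'DefaultIMApp' -Value 'Teams'
  }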

 

Make sure the GPO applies to your users and you should be all set. This ensures that even if a new application is installed and takes over the IM provider role, your clients will still fall back to Microsoft Teams. Of course, it depends on when the GPO was reapplied and on the user actually closing and reopening Outlook.

 

PRTG and Cisco ASA VPN monitoring


The default PRTG sensor for VPN connections on a Cisco ASA is limited to 50 connected users – actually fewer. This is due to the limit of 50 channels per sensor.

These days IT departments likely exceed 50 VPN users everywhere.

Since I do not need to know who is connected, only the number of connections and the load on the firewall, I came up with a simple sensor and map in PRTG to show me the essentials.

Add an SNMP Custom sensor to your firewall in PRTG and use the OID

  • 1.3.6.1.4.1.9.9.392.1.3.1.0

This will give you the number of VPN connections. I am not 100% certain whether this OID covers only Cisco AnyConnect or other VPNs as well; I found it to be a valid SNMP OID, and since I only use Cisco AnyConnect, my VPN tunnel count aligned with this number.

I further added sensors for CPU / RAM and for the external interface of the firewall to the map in PRTG to see the overall status and load.

Detailed bandwidth information is a different story. Since this is more a passive point-in-time view, I don't pull that into this map – I care about an average load picture, not single spikes that are only temporary. For that I have different approaches and sources; I mention it only for the big picture, because you need to be aware of it.

Hope some find this helpful.

If you want to know which users are online and offline – which is actually a bit questionable due to data privacy concerns, and a user does not always need to be on VPN in order to work – you could create scripts to access more detailed SNMP data and spread this across various sensors. This is possible, but I do not recommend it. Another approach would be using the text value in XML sensors and putting the info there. Still, I think you might get too much data, and you need to ask yourself whether this is even something you should collect/monitor.

Here is a picture of my map, taken as we had barely started getting more home-office people online.

PRTG Cisco ASA VPN users


Backlink to the Paessler PRTG KB, where this was discussed as well: https://kb.paessler.com/en/topic/64053-my-snmp-cisco-asa-vpn-users-sensor-shows-a-user-limit-error-why-what-can-i-do

RDS – Fix broken local RDS links in start menu


RemoteApp and Desktop Connections are quite powerful. Still, it happens that RDS icons configured through your Windows Remote Desktop connection broker either won't update or vanish. This can have various reasons. From experience, the easiest way is to manually clean up and then configure the source again – as explained step by step below.

  1. Open REGEDIT as the current user (DO NOT run as!)
    1. Navigate to:
      1. Computer\HKEY_CURRENT_USER\Software\Microsoft\Workspaces
    2. Delete the whole key WORKSPACES (just delete it! no worries)
  2. In Windows Explorer
    1. Navigate to:
      1. %appdata%\Microsoft\Workspaces
      2. Delete the whole WORKSPACES folder (yes – delete it!)
    2. Navigate to:
      1. %appdata%\Microsoft\Windows\Start Menu\Programs
      2. If there is a folder “RDS Farm Name (RADC)” then delete it completely
  3. (see footer note) Open Control Panel
    1. Navigate to “RemoteApp and Desktop Connections” or type in search box: remote
    2. There should be nothing in the connections; add a new one by clicking on “Access RemoteApp and desktops” in the left-hand menu
      1. use your RDS URL
    3. If asked for credentials, use the user’s credentials or have them type them in
    4. This should finish successfully
  4. You now should see the applications in the start menu again

Note: If you have a GPO or script configured to auto-configure the Control Panel entry, you could also just reboot instead of manually configuring the Control Panel again.
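If you have to repeat the cleanup (steps 1 and 2) for several users, a PowerShell sketch of it could look like this – run it as the affected user, not elevated, and adjust the farm folder name:

  # Clean up the local RemoteApp / Workspaces cache for the current user
  Remove-Item -Path 'HKCU:\Software\Microsoft\Workspaces' -Recurse -Force -ErrorAction SilentlyContinue
  Remove-Item -Path "$env:APPDATA\Microsoft\Workspaces" -Recurse -Force -ErrorAction SilentlyContinue
  # Replace the folder name below with your RDS farm's name as it appears in the Start Menu
  Remove-Item -Path "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\RDS Farm Name (RADC)" `
      -Recurse -Force -ErrorAction SilentlyContinue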

PRTG and VMware 6.7 vCenter host hardware status


The following script was created to bypass an issue in the SOAP API in relation to VMware, hardware vendor drivers and PRTG. In any case, you could use the same script for other monitoring systems or any other purpose – while adjusting it to your needs, of course.

You can find more information about the issue here: https://kb.paessler.com/en/topic/82458-vmware-host-hardware-status-soap-sensor-returns-warnings-after-update-to-vmware-6-7

To make this work you need to install the VMware PowerCLI PowerShell module on the PRTG probe server. You will further need to pass the username and password as well as the vCenter name and the host's internal name in vCenter as parameters.

LDAP authentication activated targets:

  • $host %host “%windowsdomain\%windowsuser” “%windowspassword”
  • $host %host.domain.local “%windowsdomain\%windowsuser” “%windowspassword”

Otherwise – you might need to use this format:

  • $host %host root MyRootPW

Test it in PowerShell as the probe user first – you should see the results. The script creates a sensor with multiple channels: hardware elements in GREEN status are only counted; elements in UNKNOWN status are counted and returned as text; and as long as no YELLOW or RED status (warning or error) occurs, the sensor stays green/okay. Warning or error levels are applied automatically and list the problematic hardware systems in the sensor message text.

My first attempt was to expose all channels on top of the summary, but due to getting over 100 separate hardware statuses back and PRTG's limitation of 50 channels per sensor, I dropped the idea – the script still has all the code to handle it, though.
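The full script is not reproduced in this copy; as a rough sketch of its core (not the original), the hardware health can be read via PowerCLI roughly like this – $vcenter, $esxhost, $user and $pass stand for the parameters described above:

  # Rough sketch - assumes VMware PowerCLI is installed on the probe server
  Connect-VIServer -Server $vcenter -User $user -Password $pass | Out-Null
  $view    = Get-VMHost -Name $esxhost | Get-View
  $sensors = $view.Runtime.HealthSystemRuntime.SystemHealthInfo.NumericSensorInfo
  $bad     = @($sensors | Where-Object { $_.HealthState.Key -notin @('green','unknown') })
  "$($sensors.Count) hardware sensors, $($bad.Count) not green: $($bad.Name -join ', ')"
  Disconnect-VIServer -Server $vcenter -Confirm:$false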

 

Using PRTG to monitor manufacturing machines


This is a screenshot of the real-time data map of the PRTG instance that is used to monitor the data collected by the Raspberry Pi and processed by PRTG, showing the progress of the production machine in manufacturing.

A few weeks ago, Paessler published an article on their blog that I was part of; it is a case study and implementation of how to use PRTG together with a Raspberry Pi to monitor a manufacturing / production machine in real time.

The article describes what Dominik Wosiek and I implemented to monitor a manufacturing machine in real time. He started with a Raspberry Pi and eventually added magnetic field sensors to the machine's robot arms to detect their movement. The data those sensors collect is interpreted by a script on the Raspberry Pi and then sent off to various HTTP push sensors on a free Paessler PRTG installation (we needed far fewer than 100 sensors and wanted to keep the installation independent).

On the PRTG instance, the data is collected and PRTG creates various graphs for us. We further added a PowerShell script that calculates how much of the day has passed. Since we know the work window of the manufacturing department and how many parts are their daily target, we were able to use a Sensor Factory sensor in PRTG to do some calculations and eventually show how the machine and the group controlling it were doing, comparing the part output relative to the time of day – respectively the work hours passed.

Above is an example configuration of the Sensor Factory Sensor in PRTG. We defined four channels:

  1. Production time passed in percent [%]
    1. this sensor pulls the passed time in minutes from the PowerShell Script sensor we created, it then does some math – the formula looks like this
      1. (passed minutes of the day – minutes passed when manufacturing starts) / (minutes passed when manufacturing ends – minutes passed when manufacturing starts) * 100 (to get percent)
      2. what it does in the example above:
        1. pull the passed minutes from the foreign sensor
        2. calculate 8 hours times 60 minutes (start of the day)
        3. subtract start time from passed time of the day (at 10 AM we would end up with 120 minutes)
        4. divide it with 17 hours times 60 or 5 PM in minutes of the day minus 8 AM minutes of the day – this gives you the total minutes between 8 AM and 5 PM – what is the defined manufacturing work time window
        5. multiply the result with 100 to get a percent value that shows the past time relative to the total work time window
  2. Part output vs. time [%]
    1. while the formula looks longer, it does nothing else than take the formula described in channel 4 minus the formula described in channel 1
    2. in other words – the value of part output in percent minus the value of work time passed in percent
    3. this results in either 0% – meaning the output is exactly where it should be relative to the time passed – or a negative number, meaning the output is falling behind, while a positive number means the part output is higher than expected relative to the time
      1. Note: this is all a bit relative, it might start negative in the morning, catch up to a positive number before lunch break, falling back to a negative number and then catching up to zero by the end of the day.. it depends on various factors but is a pretty good indicator
  3. Part output count
    1. this just loads the foreign channel of another sensor to show it in the same table/graph
  4. Part output in percent [%]
    1. while 25000 is the daily target amount of produced parts, this channel calculates how much of this was accomplished in percent while dividing the current count with the target count
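As a worked example of the channel 1 formula: with a work window of 8 AM to 5 PM (480 and 1020 minutes into the day) and the minutes-of-the-day value coming from channel(2323,2) as mentioned below, the Sensor Factory formula would look roughly like this (sensor/channel IDs will differ in your installation):

  ( channel(2323,2) - 480 ) / ( 1020 - 480 ) * 100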

Here is the script that I created to feed the minutes of the day into a PRTG sensor – it is used above as channel(2323,2) within the formula.
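The script itself was not preserved in this copy; a minimal sketch of what it does (returning the minutes passed since midnight in the XML format a PRTG EXE/Script Advanced sensor expects) could look like this:

  # Minutes of the current day, output for a PRTG EXE/Script Advanced sensor
  $minutes = [int](Get-Date).TimeOfDay.TotalMinutes
  Write-Output '<prtg>'
  Write-Output "  <result><channel>Minutes of the day</channel><value>$minutes</value></result>"
  Write-Output '</prtg>'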

Further details are described in the blog entry on the Paessler web site.

Raspberry PI and Microsoft SQL databases


A Raspberry Pi can read from and write to a Microsoft SQL Server database.

In order to accomplish this you follow the instructions here: http://pymssql.org/en/stable/index.html

To summarize it in a nutshell, here is what you need to do:

  • apt-get install freetds-dev
  • pip install pymssql

Update: Above information is for a Raspberry 2 – Raspberry 3 needs the below information as far as I know:

  • sudo apt-get install freetds-dev
  • sudo pip3 install cython
  • sudo pip3 install pymssql

Personally I had issues getting this to work in Python 3.x, so I tested it in Python 2.x and it worked fine. The issue was simply that the module pymssql could not be found, and therefore the import line in the Python script already failed. It should be a rather easy fix – like copying the files to the Python 3 modules folder – but as of now I did not have the time to investigate this further, as I was fine using Python 2 in my specific situation.

Here is a sample script
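The original sample was not preserved in this copy; a minimal pymssql sketch (SQL Server authentication; server, credentials, database and table names are placeholders) would look roughly like this:

  # Minimal pymssql example using a SQL Server login
  import pymssql

  conn = pymssql.connect(server='sqlserver.example.local', user='sqluser',
                         password='secret', database='TestDB')
  cursor = conn.cursor()
  cursor.execute('SELECT id, name FROM dbo.TestTable')
  for row in cursor:
      print(row)
  conn.close()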

The example I tested used a SQL server user account. The documentation of PyMSSQL talks about the possibility to use Windows Authentication as well.

As for asking Google about this – there is a lot of confusing information out there, and the top-ranked posts aren't really helpful, so I thought I'd just post it again, hoping someone finds it useful.

Shadow copies aren’t accessible – advanced VSS configuration


Most file servers are configured to use the Windows internal shadow copies / VSS to allow administrators or even users to quickly restore files.

Microsoft allows you to extend the default maximum of 64 shadow copies to a total of up to 512 as described here: https://docs.microsoft.com/en-us/windows/desktop/Backup/registry-keys-for-backup-and-restore#maxshadowcopies

It is pretty easy to implement this – no restart needed (if running, restart the volume shadow copy service).

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VSS\Settings
  • MaxShadowCopies DWORD
  • official maximum: 512 (decimal, NOT HEX!!!) (HEX: 0x200)
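A quick PowerShell sketch for this change (using 500 rather than 512, per the findings further down):

  # Raise the shadow copy maximum to 500 (decimal)
  New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\Settings' -Force | Out-Null
  New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\Settings' `
      -Name 'MaxShadowCopies' -PropertyType DWord -Value 500 -Force
  # If the Volume Shadow Copy service is running, restart it
  Restart-Service -Name VSS -ErrorAction SilentlyContinue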

Now – in January 2019 we detected a bug that affects at least Windows 2016 servers, if not more. We could not see the shadow copies of the current day; any shadow copies of the previous day seemed to be fully available. The cut-off was literally before midnight. After about 12 subsequent shadow copies they started to trickle in.

Once we adjusted the maximum to 500 (decimal – hex: 0x1F4) and restarted the service, respectively waited until the next scheduled shadow copy executed (plus a few minutes for the cleanup to process), we could eventually see the most current shadow copy from the Windows Explorer menu.

This seems to work much better than the 512 that is the defined maximum. There seems to be some kind of bug that started with some update; we couldn't determine it in detail, and simulating it would take a lot of time.

NirSoft has a great tool to investigate your shadow copies as well here: http://www.nirsoft.net/utils/shadow_copy_view.html

This is a GUI-based tool that lets you look into your shadow copies. However, when trying to open the most current paths while the 512 maximum was set, Windows Explorer still couldn't handle it. But it was nice, detailed proof that the current shadow copies were in fact there.

Similar results could be obtained using PowerShell and command-line tools like VSSADMIN – we saw that the shadow copies were there.

WMI provided the same information as well – for an example, see the script here, which uses WMI and PowerShell to gather information about shadow copies: https://www.it-admins.com/monitoring-shadow-copies-with-prtg/

Suggestions to configure shadow copies:

  • set a maximum of 500 instead of 512
  • do them e.g. hourly – as you need them
    • this is all a calculation, straight hourly provides you 500 copies / 24 hours a day = +/- 20 days back
    • if you go e.g. 5 AM to 9 PM and no Sundays you extend this: 500 / 17 snaps a day (hourly) = +/- 29 days => add the removed Sundays in the equation and you easily bypass a whole month
      • this would allow you while doing full virtual machine backups (VHD level backups) to keep the month end tape of every month and still be able to restore files from the shadow copies in theory – I had cases where I had to dig that deep..
  • volume configuration on your file servers (the drive letters don’t matter much)
  • add monitoring to your VSS – like described here with PRTG

 

 

SNMP was deprecated


Microsoft deprecated the SNMP feature in Windows 2012 (R2). As of Windows 10 1809, respectively Windows Server 2016, this feature is pretty much hidden. The decision was likely made due to security risks related to SNMP; in any case, as of right now it is still available if you really need it – just not via the good old Control Panel "Add/Remove Features" function. The following should even work on Windows Server 2019, since there is no indication that Microsoft has removed the feature itself yet.

The following link is for Windows Server 2012 (R2) – it clearly states that SNMP is deprecated: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831568(v=ws.11)

Windows client and server operating systems share the same kernel in the background – for the most part.

Alternate ways to enable the feature:

  • using Apps & Features will help you get SNMP via Optional Features
    • then use Add a feature
  • PowerShell commands:
    • see the PowerShell sketch after this list (two command variants are possible)
  • or you use DISM on a command prompt
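A hedged sketch of those commands, assuming a current Windows 10 / Server build where SNMP is delivered as an optional capability (the exact capability name may differ per build – the Get command helps you find it):

  # Check whether the SNMP capability is present / installed
  Get-WindowsCapability -Online -Name 'SNMP*'

  # Install it via PowerShell
  Add-WindowsCapability -Online -Name 'SNMP.Client~~~~0.0.1.0'

  # DISM equivalent on an (elevated) command prompt:
  # DISM /Online /Add-Capability /CapabilityName:SNMP.Client~~~~0.0.1.0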

In all cases you may run into an issue if you use WSUS – you might need to temporarily bypass it in order to install this feature. It is also possible that you need to restart the Windows Update service on the system for this setting to take effect.

  • Open Regedit and adjust the following key
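The key itself was not preserved in this copy; my assumption is that it is the WSUS client policy below – setting UseWUServer to 0 (and restarting the Windows Update service) temporarily lets the capability install from Microsoft Update, and you should revert it afterwards:

  # Assumed key - temporarily point the client away from WSUS
  Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' `
      -Name 'UseWUServer' -Value 0
  Restart-Service -Name wuauserv
  # ...install the SNMP capability, then set UseWUServer back to 1 and restart wuauserv again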

It is pretty obvious that this feature will be removed at some point – but as of now it is still available.

Let’s talk about a few things in regards to SNMP on Windows – or even in general when it comes to all your switches, firewalls, routers and other network components.

  • using SNMP on a Windows OS is a potential security risk – actually, SNMP itself is in general, because it is standardized, often not locked down, and has only limited security features
  • I personally don’t see a reason to use SNMP to monitor a Windows server – the system itself can easily be monitored via WMI and other methods; each has its pros and cons, but it generally works
  • There are circumstances when you need SNMP enabled – I came across this mostly with UPS software that only allowed interaction via SNMP: the UPS itself was connected per USB, and the software on the Windows server/client offered no API calls or similar – you had to enable SNMP on Windows and then use SNMP through Windows to grab data, e.g. for UPS monitoring
    • having said this – this is actually a flaw by the vendor in such a case and should be addressed with the vendor
    • there is probably more than just UPS software that behaves like this

VMware alert monitoring with PRTG and PowerShell


There is a way to read out and process ALL alerts of your VMware environment using PowerShell and report the results back to PRTG. The script further down in this article does this. What you get is similar to the graphic here.

This shows you the following channels:

  • Overall status
    • this will be green as long as there aren’t any unacknowledged warnings or alerts in VMware
    • if the warning or alert is acknowledged, the sensor / script will return to green because it is nothing new
  • Total Alerts – number of alerts, acknowledged and not acknowledged
  • Total Alerts – Acknowledged
  • Total Alerts – NOT Acknowledged
  • Total Warnings
  • Total Warnings – Acknowledged
  • Total Warnings – NOT Acknowledged
  • Total Warnings and Alerts
  • Total Warnings and Alerts – Acknowledged
  • Total Warnings and Alerts – NOT Acknowledged

As you can see, you can get more granular with your PRTG statuses if you use the channels for acknowledged warnings/alerts. You could set upper warning or error limits of 0 on them if you still want acknowledged items to keep a warning / error level in PRTG.

While writing the script, I decided to create a new value lookup in PRTG to make the overall status clearer. If you adjust the script to add additional statuses for the overall status channel, you will need to adjust this file as well.

Let’s start with the value lookup file: copy the text from the first script block into a file you store here: C:\Program Files (x86)\PRTG Network Monitor\lookups\custom

Name the file: vmware.alerts.search.ovl

Now we need to create a custom EXE/XML sensor in this directory: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML

Name the file: VMwareAlerts.ps1

Once you have both files created, go to PRTG, add a new EXE/Script Advanced sensor and select the newly created script file. As parameter you either type the hostname of your vSphere server or, if you created the sensor underneath the device in PRTG, just use %host.

UPDATE: I changed the script because I found it better to go with the following expected parameters, always making sure you have control over the username and password used to connect to VMware. Please use the following parameters moving forward:

There are still a few challenges you might need to overcome on top of this:

  • install the VMware PowerShell extensions on your PRTG probe server
  • credentials to connect to VMware can be a challenge as I tested this
    • you might need to have the service account of the PRTG probe have sufficient access rights – needs working SSO
    • alternatively use a stored credentials file in PowerShell – somewhat secure
    • or provide the credentials clear text in PowerShell – least secure
    • please see line 20 respective the command “connect-viserver” for more details
  • updated the script – it now expects username and password as parameter

You might want to test the script before you add a sensor to PRTG – the best way is directly on the PRTG server, running as the service account of the PRTG probe, to make sure it will work as a sensor later on.

Keep in mind that the script expects a parameter – the VMware vSphere server name / web-address.
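The script itself is not reproduced in this copy; a rough sketch of the core logic (not the original), assuming PowerCLI and the server/username/password parameters mentioned above, could look like this:

  param([string]$VIServer, [string]$User, [string]$Password)

  # Connect and read all triggered alarms from the inventory root folder
  Connect-VIServer -Server $VIServer -User $User -Password $Password | Out-Null
  $root   = Get-Folder -NoRecursion | Get-View
  $alarms = @($root.TriggeredAlarmState | Where-Object { $_ })

  $warnings = @($alarms | Where-Object { $_.OverallStatus -eq 'yellow' })
  $alerts   = @($alarms | Where-Object { $_.OverallStatus -eq 'red' })
  $open     = @($alarms | Where-Object { -not $_.Acknowledged })

  Disconnect-VIServer -Server $VIServer -Confirm:$false
  "{0} warnings, {1} alerts, {2} not acknowledged" -f $warnings.Count, $alerts.Count, $open.Count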

This was also posted on the PRTG KB here.

 

APC network cards – fix logon issues


APC network cards – the NICs of many APC devices like UPS and A/C units, possibly even NetBotz (APC, aka Schneider Electric) – tend to have an issue where the system tells you in the browser that there is already an active session.

The reason is that they tend to keep the session open forever if you don't click on log off before you close your browser.

What you will see is an error like this after you entered your valid credentials:

Notice
Someone is currently logged into the APC Management Web Server.

Please try again later.

It is actually pretty easy to bypass this – you either use TELNET or SSH to logon to the system with the same credentials and then simply logout there.

By doing so, you log off the stale session and will be able to log on via the browser again.
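From memory, such a session looks roughly like this – the prompt and the exact logout command may vary by firmware, and the hostname is just an example:

  ssh apc@ups01.example.local      # same credentials as the web interface
  apc> quit                        # logging out here releases the stuck web session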

Consolidate many line based .CSV files in to a single .CSV with one header line and per file data lines


The goal: summarize a huge number of files that have line-based columns and data into a single file, with the headers found across all files in the first line and the data of each file as one row – while the headers might change throughout the source files and need to be added dynamically.

This is a special script I wrote for someone else that had about 45k files to process. It is crazy enough to be worth posting here 🙂 and can be found on Spiceworks as well.

Situation:

  • many .CSV files
  • all have the columns per line instead of in the first line
  • the data looks like
    • column,data
    • column,data
  • all files need to be transformed into one file in this format
    • header,header,header
    • data,data,data
    • data,data,data
  • i.e. from one column per line to a single header line, with each file’s data on one line
  • additional challenge
    • the headers might change throughout the files and add more headers

What the script does:

  1. cycle through all files
    1. detect all headers
  2. cycle a second time through all files
    1. detect all the data
    2. write the data in the right column per line per file

Flaws:

  • The script does not handle data values that contain a comma “,” – it ignores everything after that comma

Output:

  • Output file is a single .CSV file, comma separated columns

Execute this way:

  1. Source Directory – where the .csv files reside
  2. Target Directory – where the new output .csv will be created
  3. open CMD / command prompt
  4. go to the script-directory (where you saved it)
    1. CSCRIPT scriptname.vbs “c:\sourcedirectory” “c:\targetdirectory”

CSCRIPT avoids you seeing a million message boxes – it will output directly to your CMD / command prompt window.
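The original VBScript is on Spiceworks; purely as an illustration of the two-pass approach described above, a PowerShell equivalent might look like this (it does not reproduce the original's comma limitation exactly, and the output file name is just an example):

  param([string]$SourceDir, [string]$TargetDir)

  $files = Get-ChildItem -Path $SourceDir -Filter *.csv

  # Pass 1: collect every header name that appears in any file
  $headers = @()
  foreach ($file in $files) {
      foreach ($line in Get-Content $file.FullName) {
          $name = ($line -split ',', 2)[0]
          if ($name -and $headers -notcontains $name) { $headers += $name }
      }
  }

  # Pass 2: one output row per file, columns in the collected header order
  $rows = foreach ($file in $files) {
      $row = [ordered]@{}
      foreach ($h in $headers) { $row[$h] = '' }
      foreach ($line in Get-Content $file.FullName) {
          $parts = $line -split ',', 2
          if ($parts[0]) { $row[$parts[0]] = $parts[1] }
      }
      [pscustomobject]$row
  }

  $rows | Export-Csv -Path (Join-Path $TargetDir 'consolidated.csv') -NoTypeInformation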

Excel custom views and Excel files that appear different for various users


Excel has a little-known feature called Custom Views (ribbon VIEW / CUSTOM VIEW). It allows you to store a custom view of the workbook – header / footer settings, which columns are hidden or displayed, etc.

Custom views are stored in the workbook itself, and they are automatically selected when the workbook is opened.

How does Excel determine which custom view to use?

Under OPTIONS / GENERAL, the USER NAME field should in most cases hold your full name from your Windows logon user, respectively your Active Directory user. This name determines which custom view is used. If you change the name to match another custom view's name, you will automatically see that custom view – otherwise it falls back to the default view.

This becomes an even bigger issue if you are using SYSPREP images and you set the default user profile from an existing profile via the COPYPROFILE option – see here for details: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/customize-the-default-user-profile-by-using-copyprofile.

If the source profile had EXCEL open – or actually any other OFFICE program – you will find the following registry keys set:

  • HKEY_CURRENT_USER\Software\Microsoft\Office\Common\UserInfo
    • Company
    • UserInitials
    • UserName

The critical key in this case is USERNAME.

Imagine you deploy this image, with the pre-set UserName key, to 100 computers. One user now sets a custom view and another user opens the file – this user will actually see that custom view, because both users have the same UserName value that was pre-set in the image. Now a third user, whose machine you installed manually from scratch, will see different values in the worksheet – Excel headers and footers, hidden columns, etc.

This would surely give you some headaches, and you would wonder how this can happen.

To be proactive – and prevent other related issues – you should remove those three keys for any user who logs on. If you remove those keys and the user opens an Office application, they will be automatically recreated based on the current Windows or Active Directory user name and credentials.

Probably the best way to accomplish this is a GPO that applies to ALL USERS and removes those registry keys. Just make sure you also check "Apply once and do not reapply".

Having said that, I have also seen circumstances where this did not help – likely because the user was already logged on and inside a Microsoft Office application. The keys had been removed, but after closing the application the old values were written back to the registry. Because of that, you might need a PowerShell or CMD/batch script that removes those keys when the user logs on.

You could determine if they are correct or not and if not simply delete them.

I tried to find a solution for the UserName value in the registry via GPO variables – press F3 in the GPO editor to show the available variables (https://blogs.technet.microsoft.com/grouppolicy/2009/05/13/environment-variables-in-gp-preferences/) – but you will quickly notice that the full user name is not available there. Therefore a script might be your best shot.

A simple solution might be using this in a CMD based login script:
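The snippet itself was not preserved in this copy; a minimal sketch of what such login-script lines could look like (deleting the three values so Office recreates them on the next start):

  reg delete "HKCU\Software\Microsoft\Office\Common\UserInfo" /v UserName /f
  reg delete "HKCU\Software\Microsoft\Office\Common\UserInfo" /v UserInitials /f
  reg delete "HKCU\Software\Microsoft\Office\Common\UserInfo" /v Company /f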

Additional information can be found here as well: https://support.microsoft.com/en-us/help/302911/the-page-setup-settings-in-a-shared-workbook-are-different-for-each-us

PowerShell – custom tables or objects


PowerShell can sometimes be challenging. One of the more confusing things is collecting data and getting it into properly formatted tables / objects for further processing.

The script below is purely an example of how you can accomplish this: creating a table, adding the needed columns and then filling the table with rows – in this specific example by reading network adapters and filling in IP information where available, partly with multiple rows (one row per IP and adapter) and partly with single rows for adapters without IPs.

Custom tables or custom PowerShell objects example with foreach loops to fill them up and combine values from various commands in to single tables for further processing.
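The script itself was not preserved in this copy; a sketch of the described approach (a System.Data.DataTable filled per adapter and IP – the adapter cmdlets assume Windows 8 / Server 2012 or newer) could look like this:

  # Build a table with one row per adapter/IP, or a single row for adapters without an IP
  $table = New-Object System.Data.DataTable 'Adapters'
  [void]$table.Columns.Add('AdapterName', [string])
  [void]$table.Columns.Add('MacAddress',  [string])
  [void]$table.Columns.Add('IPAddress',   [string])

  foreach ($adapter in Get-NetAdapter) {
      $ips = @(Get-NetIPAddress -InterfaceIndex $adapter.ifIndex -ErrorAction SilentlyContinue)
      if ($ips.Count -eq 0) {
          # adapter without any IP - single row, IP column stays empty
          [void]$table.Rows.Add($adapter.Name, $adapter.MacAddress, '')
      } else {
          foreach ($ip in $ips) {
              # one row per IP and adapter
              [void]$table.Rows.Add($adapter.Name, $adapter.MacAddress, $ip.IPAddress)
          }
      }
  }

  $table | Format-Table -AutoSize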