Database

Auditing network users against HR lists etc.

Auditing network users against HR lists is a topic that is often overlooked or causes headaches, e.g. due to name variations, even though it is so important to keep your Active Directory as clean and up to date as possible.

There are big paid solutions out there, but unless you have the budget, resources and processes in place, you will need a simpler approach. Having worked in businesses of various sizes, let me make some suggestions here. Keep in mind that not all of it might be applicable or best for you, but I hope it will at least give you some ideas and help you improve your network security.

Structured groups and rights

Before we begin – you should always make every effort to have a well-structured rights base. Avoid cross-use of groups, e.g. using the same group for mail distribution and for NTFS file system access. It seems like a good idea until someone needs access to the NTFS path and suddenly receives the group-based communication as well. This is of course just one example. Structure your file systems and rights assignments well – it can make all the difference. Don’t complicate things, keep it simple.

Monitoring Active Directory activity

What you want is something that constantly looks for changes to Active Directory – at a bare minimum new and deleted users as well as group-membership changes. There is software out there to do this; some of it is free, often with limited functionality, and some you need to buy. Personally I have been working on a Windows service to monitor Active Directory changes, but my time is limited and to this day I have not finished it. Having said this, the IT-Asset Management Database on this website actually has a module that does just this: it monitors the most important activities by comparing previously gathered SQL data against the current Active Directory information and sends you a daily report about the changes.
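If you want to script a simple version of such a daily compare yourself, here is a minimal PowerShell sketch of the idea – it compares a stored snapshot (CSV) of your users against the current Active Directory and writes out additions and removals. File paths, property selection and the reporting are just examples, adjust them to your environment.

# Minimal sketch: compare a stored user snapshot against the current AD (paths are examples)
Import-Module ActiveDirectory
$snapshotFile = 'C:\Scripts\ADUsers_snapshot.csv'
$current = Get-ADUser -Filter * -Properties employeeID | Select-Object SamAccountName, Enabled, employeeID
if (Test-Path $snapshotFile) {
    $previous = Import-Csv $snapshotFile
    # accounts present now but not in the snapshot are new; the other way around, they were deleted
    $diff    = Compare-Object $previous $current -Property SamAccountName
    $new     = $diff | Where-Object SideIndicator -eq '=>'
    $deleted = $diff | Where-Object SideIndicator -eq '<='
    if ($new -or $deleted) {
        # replace this with Send-MailMessage or your preferred reporting
        "New: $($new.SamAccountName -join ', ') | Deleted: $($deleted.SamAccountName -join ', ')" |
            Out-File 'C:\Scripts\ADUsers_changes.txt'
    }
}
$current | Export-Csv $snapshotFile -NoTypeInformation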

Such reports might not be perfect, as they don’t monitor activity in real time but only send you daily summaries. Paessler PRTG, in combination with either some default sensors or custom scripts like the group-membership change and password-reset monitoring script, or the more specific script for group-membership monitoring, is more than helpful. Especially monitor the Domain Admins group and other groups that allow access to sensitive areas and data of your network. Such active alerts from a network monitoring solution give you the chance to act fast.

Auditing your user base against HR (Human Resources) data

Again, there are software packages and solutions out there that can do this automatically or help you with your efforts. But often HR simply uses their payroll platform, and depending on what they have, it won’t offer the functionality you need, or they are reluctant to implement such processes or to give you access in the first place.

The main issue I came across is that the names in the HR data and the full names in your Active Directory often differ. Furthermore, you can’t simply automate user-name prediction based on whatever HR export data you have, because there will likely be duplicate names as well, depending on how you create user names.

HR normally has an employee number, and they should be able to include it in any employee export they provide to you. Active Directory in turn has attributes you can easily use for this: employeeID, employeeNumber and employeeType. PowerShell is your friend if you want to set them, and I highly recommend using PowerShell to some extent when you create new users. Look at the CheckLists in combination with employees in the IT-Asset Management Database as well for some hints and automation. Microsoft’s Set-ADUser PowerShell cmdlet will be helpful here, as the sketch below shows.
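As a minimal sketch of what that could look like (account name and values are made up for illustration), stamping the HR employee number onto an existing account is essentially a one-liner:

# Sketch: write the HR employee number and type onto an existing AD account (sample values)
Import-Module ActiveDirectory
Set-ADUser -Identity 'jdoe' -EmployeeID '10245' -EmployeeNumber '10245' -Replace @{ employeeType = 'Staff' }
# verify the result
Get-ADUser 'jdoe' -Properties employeeID, employeeNumber, employeeType |
    Select-Object SamAccountName, employeeID, employeeNumber, employeeType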

Then use a tool like the IT-Admins Tool to read the users from your Active Directory and export them to Excel. Once you have this, compare the HR list against the Active Directory export. Don’t bother with VLOOKUP – research the use of INDEX/MATCH in Excel. Format both ranges as tables in Excel and document your process, as the details depend a bit on which tools you used and how the resulting data looks. You should end up comparing the HR employee number against the employee number stored in and exported from Active Directory. This should give you a quick and clean overview. What you should be on the lookout for are the #N/A errors in the compare column in Excel, as well as HR data that indicates a termination or change. You can go as far as comparing the department information; there are Active Directory attributes for department names and department IDs as well.
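If you prefer to script the comparison instead of doing it in Excel, a rough PowerShell sketch of the same idea could look like this – the CSV path and the EmployeeNumber column name are assumptions, adjust them to whatever HR actually exports:

# Sketch: find enabled AD accounts whose employee number does not appear in the HR export (paths/columns are assumptions)
Import-Module ActiveDirectory
$hr = Import-Csv 'C:\Audit\HR_export.csv'   # expects an EmployeeNumber column
$ad = Get-ADUser -Filter 'Enabled -eq $true' -Properties employeeNumber |
      Select-Object SamAccountName, Name, employeeNumber
$ad | Where-Object { $_.employeeNumber -notin $hr.EmployeeNumber } |
    Export-Csv 'C:\Audit\AD_not_in_HR.csv' -NoTypeInformation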

If you want to go another step, start comparing group memberships as well. Export all groups and their members; again, the IT-Admins Tool can help you here. Look at it from both sides if possible: the group-and-its-members view as well as the user-is-member-of-groups view. Have the department owners take a look at it as well – they might want to see this.
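If you want a quick scripted export in addition to the tool, a short PowerShell sketch like the following dumps every group with its direct members to a CSV for review (the output path is just an example; very large groups may need extra handling):

# Sketch: export all groups with their direct members for review (output path is an example)
Import-Module ActiveDirectory
Get-ADGroup -Filter * | ForEach-Object {
    $group = $_
    Get-ADGroupMember $group -ErrorAction SilentlyContinue |
        Select-Object @{n='Group';e={$group.Name}}, Name, objectClass
} | Export-Csv 'C:\Audit\GroupMembers.csv' -NoTypeInformation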

And step three is an NTFS rights review. There should never be a user account used directly to assign rights in NTFS – this should always be done via groups. To review this I again recommend using the IT-Admins Tool, as it is actually designed to help you with this process and is able to export the needed data rather quickly and simply.
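Purely to illustrate what such a review looks for, here is a rough PowerShell sketch that flags ACL entries pointing at user accounts instead of groups – the share path is an example, and for larger trees a dedicated tool will be much faster:

# Sketch: list folders where NTFS permissions are assigned directly to user accounts (root path is an example)
Import-Module ActiveDirectory
$root = '\\fileserver\share'
Get-ChildItem $root -Directory -Recurse | ForEach-Object {
    $acl = Get-Acl $_.FullName
    foreach ($ace in $acl.Access) {
        # resolve the identity; user objects indicate a direct assignment (assumes DOMAIN\name entries)
        $name = $ace.IdentityReference.Value.Split('\')[-1]
        $obj  = Get-ADObject -Filter { SamAccountName -eq $name } -ErrorAction SilentlyContinue
        if ($obj.ObjectClass -eq 'user') {
            [PSCustomObject]@{ Path = $_.FullName; Identity = $ace.IdentityReference; Rights = $ace.FileSystemRights }
        }
    }
}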

Don’t forget your ERP systems and systems with Active Directory-independent user bases

It is not always possible to rely on Active Directory as the only source for your user base. Even if you can, the assignment of rights to functions in e.g. your ERP system is likely independent of Active Directory. Look into whatever your ERP offers – APIs, or possibly digging into the database (or wherever the data is stored) – to find out about rights changes (groups), and review those lists against the HR information periodically as well.

Office 365

Oh – it is synchronized with Active Directory, right? Well – review it anyway. There might be user objects that are not synchronized and exist only in Office 365, and the same goes for groups. Also review the rights to access certain administrative areas within Office 365.

You should definitely review third-party users – especially Microsoft Teams and the sharing of e.g. SharePoint or OneDrive files and folders with people outside of the organization will likely create some guest accounts. Keep an eye on those.
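A quick way to list those guest accounts – assuming the AzureAD PowerShell module is installed and you are connected to your tenant – is a sketch like this:

# Sketch: export guest (external) accounts from the tenant (assumes the AzureAD module; output path is an example)
Connect-AzureAD
Get-AzureADUser -All $true -Filter "userType eq 'Guest'" |
    Select-Object DisplayName, UserPrincipalName, UserState |
    Export-Csv 'C:\Audit\O365_Guests.csv' -NoTypeInformation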

And then there are the third-party applications – you need to keep an eye on those, as they can cause harm or gain access to sensitive data. Users/employees will often install them without reviewing them thoroughly, or they simply don’t realize the applications are there and are sharing confidential data.

Account breach/compromise and API keys in Office 365 are also something you want to keep a good eye on. Clicking on the wrong link (it will happen!), entering the password, and it is too late. Don’t think a simple password reset will solve the issue, and don’t rely on your MFA alone. Review those API keys, especially in Office 365. What can happen is that a stolen password is used right away to set up API-based access to Office 365, which then retains access to the data independently of any password changes. Keep an eye on those things as well!

 

Summarize SQL server database file size information as a single row

The SQL script below shows you the summarized database file size and database log file size of the current database. This can be helpful to monitor the actual file sizes in a single row/line.

This script was also posted in the Paessler PRTG KB, where the initial request for it came up. I am posting it here because I think it can be useful for some. The initial code I used and modified is from Tri Effendi SS on Stack Exchange.

You will need to adjust the USE statement in the first line to the database you want to gather the information from.
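As a rough sketch of the idea (not the original script) – summing the data and log file sizes from sys.database_files into one row – here wrapped in PowerShell via Invoke-Sqlcmd, with the SqlServer module and the server/database names as assumptions:

# Sketch: return data and log file size of one database as a single row (server/database names are examples)
Import-Module SqlServer
$query = @"
USE [MyDatabase];
SELECT
    SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8 / 1024.0 AS DataFileSizeMB,
    SUM(CASE WHEN type_desc = 'LOG'  THEN size END) * 8 / 1024.0 AS LogFileSizeMB
FROM sys.database_files;
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query $query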

 

Raspberry PI and Microsoft SQL databases

A Raspberry Pi can read from and write to a Microsoft SQL Server database.

To accomplish this, follow the instructions here: http://pymssql.org/en/stable/index.html

In a nutshell, here is what you need to do:

  • apt-get install freetds-dev
  • pip install pymssql

Update: the above information is for a Raspberry Pi 2 – a Raspberry Pi 3 needs the commands below, as far as I know:

  • sudo apt-get install freetds-dev
  • sudo pip3 install cython
  • sudo pip3 install pymssql

Personally I had issues getting this to work in Python 3.x, so I tested it in Python 2.x and it was working fine. The issue was simply that the module “pymssql” could not be found and therefore the import line already failed in the Python script. It should be a rather easy fix – like copying the files to the Python 3 modules folder – but as of now I have not had the time to investigate this further, as I was fine using Python 2 in my specific situation.

Here is a sample script

The example I tested used a SQL Server user account. The pymssql documentation also mentions the possibility of using Windows Authentication.

As for asking Google about this – there is a lot of confusing information out there and the top-ranked posts aren’t really helpful, so I thought I would post it again, hoping someone finds it helpful.

SQL Express SQLState 08001 and Error 17

One of the challenges, especially with SQL Express, is that you first need to enable some protocols on the network level in order to connect to it. You might see an error message like the one below when you try to connect to SQL – stating SQLState 08001 and Error 17.

To resolve this, you need to enable Named Pipes and TCP/IP in the SQL Server Configuration Manager that was installed by default on your system. See the image below for how it should look. Please note that you need to restart the SQL service for those changes to take effect.

Please note – additional configuration such as the Windows Firewall or other parameters might still be needed; the above just addresses a rather common issue.

Microsoft RADIUS / NPS SQL logging

An issue or question I see again and again – proper RADIUS logging with Microsoft NPS / Network Policy Server.

Let me guide you through a few steps:

  1. Install Microsoft SQL Server or, if not available, SQL Express
    1. be aware – SQL Express has very tight database size limits and no SQL Agent – this might be an issue
  2. Create a new database via SQL Management Studio on the SQL server
    1. name it e.g. RADIUSLogging
  3. Run the SQL script from this Microsoft website in a new query window against this database (make sure it is not run against any other database by accident)
    1. you could add a line like USE RADIUSLogging at the very top to prevent this
  4. Configure your RADIUS server to log to this SQL server and database
  5. Make sure you have fail-over logging to a text file – to avoid issues in case your SQL DB grows too big or is not reachable for any reason
    1. decide in the text-file configuration whether you want to deny access if there is an issue or still proceed with the logon

Now you have RADIUS logging its information to a SQL database – actually a single table – and you can dig around in it. The IT-Assets database provides a front-end example for this; you don’t need to use it, but it might be of help – see here.

To interpret all those columns and values – look at the following links for additional information:

You will face the issue that the database grows rapidly, depending on how many requests go to your RADIUS system etc. Keep a close eye on it – use monitoring software like Paessler PRTG to watch the size, and keep in mind that SQL Express has size limits (e.g. 10 GB). The full version of Microsoft SQL Server has no such limit, and you can additionally use the SQL Agent to execute tasks. The following script can help you purge data from the RADIUS database to keep its size under control. You can use the SQL Agent (not available in SQL Express) to run it automatically; if you use SQL Express, either run it manually or use another solution to automatically delete older entries from the database.

The script will purge data older than 14 days – you can adjust the number of days to your liking/needs.
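As a rough sketch of what such a purge does (not the original script) – assuming the accounting table and its timestamp column are named accounting_data and timestamp in your setup, so check the schema the Microsoft script created for you – it boils down to a delete like this, here wrapped in PowerShell via Invoke-Sqlcmd:

# Sketch: delete RADIUS accounting rows older than 14 days (table/column names are assumptions – check your schema)
Import-Module SqlServer
$purge = @"
USE [RADIUSLogging];
DELETE FROM dbo.accounting_data
WHERE [timestamp] < DATEADD(day, -14, GETDATE());
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query $purge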

Automate your SUS clean up

Many companies rely on the WSUS respectively SUS services from Microsoft – aka Windows Server Update Services – as the internal source and control point of their update deployment to clients and servers within their network.

One of the big challenges for IT is to keep it clean and performant. The cleanup assistant in the WSUS management console tends to run forever and in any case means manual labor over and over again.

Below are two scripts – a CMD script that needs to be adjusted with parameters and a PowerShell script that will be called with those parameters. The scripts actually call the same API as the MMC assistant does, only that this can be performed automatically via a scheduled task in Windows.

It helps you keep your WSUS slim and more performant.

In any case, I highly recommend not blindly enabling all categories, but rather limiting them to the products you actually have in place, and – once you have reached a certain patch level – even actively declining updates you will never need again (keep in mind, newly rolled-out systems might still need older updates, but you could possibly refresh your base images or rely on Microsoft online updates for those cases).

The combination of declining obsolete updates and actually running a cleanup periodically will improve your WSUS server performance.

As for the parameters, those are explained in the CMD script header – therefore I will not explain them here again.
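As a point of reference – not the original scripts – on Server 2012 or newer much of the same cleanup can also be triggered with the built-in UpdateServices PowerShell module; a minimal sketch (server name and port are examples):

# Sketch: run the WSUS cleanup steps from a scheduled task via the UpdateServices module (server/port are examples)
Import-Module UpdateServices
Get-WsusServer -Name 'wsus01' -PortNumber 8530 |
    Invoke-WsusServerCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates `
        -CleanupUnneededContentFiles -CompressUpdates -CleanupObsoleteComputers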

Script based SQL Express backups

SQL Express is widely used but has a huge downside: there is no SQL Agent available. Even Windows-internal databases – especially WSUS / Windows Server Update Services / Microsoft Update Services – are in the end SQL Express-like databases that do not have a SQL Agent.

Now, you can have central SQL Servers with Agents back them up – and I recommend doing so if possible. But for the many cases where this is not possible, you will need to find another way to create those nice little .BAK files, aka SQL-internal backups like the ones SQL maintenance plans produce. To work around this issue, I once wrote a script that automates this for each database found on a specific SQL server. It creates the backups via SQLCMD commands and even cleans up obsolete files (files older than x days), almost like SQL maintenance plans do.

The script is divided into a .CMD file, which executes the actual backup and where you set the configuration/parameters, and a .VBS file, which is controlled by the .CMD script and performs the backup cleanup. In the end you can have the .CMD send an email report – I used the SMTPSEND program from Michael Kocum (https://www.dataenter.com/download.asp) for this since I already had it lying around – you could replace the mail-send option with another SMTP client, a VBS script, or just remove it completely.

Adjusting the settings / parameter:

This is all done in the SQLBACKUP.CMD file – the header section pretty much explains all you need to know, from SQL server, SQL user and password to mail server and recipients.

If you want to execute the SQL backups as the Windows user that runs the script, you need to swap the REM (remark) markers on the two corresponding lines further down in the script. I apologize for the inconvenience – this is an old script that I never updated to have those settings in the header (more automated); I always just changed the lines.

Everything else should be rather easy. Of course you will need sufficient access rights to the SQL databases and your destination backup folder. The Task Scheduler might work best if you execute the script with “cmd /c c:\scripts\sqlbackup.cmd” (change the path as you need it) and set the working directory / startup folder correctly. It might help to execute the task with elevated rights etc. – all depending on your system’s configuration.

Below are the two scripts – I hope this helps some of you. The generated .BAK files can simply be restored in SQL Server via the GUI because they are native SQL backup files.
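Purely to illustrate the mechanism – this is a rough sketch, not the original CMD/VBS pair – backing up every user database via SQLCMD and purging old .BAK files could look like this (server, path and retention are examples):

# Sketch: back up each user database via sqlcmd, then purge .BAK files older than 3 days (values are examples)
$server   = '.\SQLEXPRESS'
$target   = 'D:\SQLBackups'
$keepDays = 3
$dbs = sqlcmd -S $server -E -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM sys.databases WHERE database_id > 4"
foreach ($db in $dbs | Where-Object { $_ }) {
    $file = Join-Path $target ("{0}_{1:yyyyMMdd_HHmmss}.bak" -f $db, (Get-Date))
    sqlcmd -S $server -E -Q "BACKUP DATABASE [$db] TO DISK = N'$file' WITH INIT"
}
Get-ChildItem $target -Filter '*.bak' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$keepDays) } |
    Remove-Item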

Solarwinds Web Helpdesk – Slack alerts

This was originally posted here by myself: https://thwack.solarwinds.com/thread/114863

Solarwinds WebHelpDesk is very powerful, but for those who use Slack as their communication and alerting platform, there is still no integration.

We as an IT team struggled a bit with keeping up with the immense flow of emails and filtering them, with being really proactive on new tickets (first response time), and with noticing that new tickets had been assigned to us or that a user/client had written a new note.

To overcome those challenges, and because we all have Slack on all our systems from workstation to smartphone, we decided to integrate the two. Since I spent a bit of time on those scripts and thought they might be helpful for others as well, I am sharing them here and explaining how to implement this.

Please note: those scripts are a version 1 – I am well aware that they could be further cleaned up and simplified, but I wanted to share them already… bugs are possible as well…

Requirements:

  • You need Solarwinds WebHelpDesk
  • The scripts use the field “Pager” in “Techs” for the Slack-Username – put all the Slack-Usernames of your Techs in this field – no @ – just the name
  • The scripts assume you execute them directly on the server that has the PostgreSQL database installed
  • The scripts assume the database user/password is the default WHD

How to implement them:

  • Create the 5x files as shown further below in e.g. “C:\Scripts\WHD” on the PostgreSQL server
  • Create a new Windows Task that starts the WebHelpDesk_SlackAlerts.cmd file every 30 minutes
    • this file actually executes the PowerShell scripts – it is just a workaround that bypasses any PowerShell script execution restrictions
  • download the PostgreSQL ODBC drivers from the following link – assuming you haven’t installed them on the system already
  • Edit the PRTGSlackWebHookNotificationPSv2.ps1
    • This file was originally from our PRTG installation and was already modified there. It was further modified for WHD alerts – I do not claim to have invented this script, nor do I want to infringe any copyrights on it! Source: www.paessler.com for monitoring solutions
    • Line 41: adjust the URL for the FavIcon.ico to your external WebHelpDesk URL
  • Create a new WebHook-Application in your Slack Account
  • Edit the CheckNewTicketsFirstResponse.ps1
    • This script posts to a generic channel in our case – we want to see new tickets as a group – assuming the assigned tech is currently unavailable and couldn’t touch it…
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – ticket_age_minutes = 88
      • this is a minute value – we alert on a Slack-Group “helpdesk” if there is a ticket older than 88 minutes – we fire the script every 30 minutes, so it could be up to two hours old..
    • Line 9 – $channel – adjust this to the Slack-Group channel you want to use for those alerts
    • Line 77 – adjust the URL to your external WebHelpdesk URL
    • Line 99 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketAssigned.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only they will see it
    • This will only fire if the Tech has not yet put a Tech-Note in the ticket after it was assigned to them
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes, so due to some timing variance the entry cannot be older than 32 minutes by default…
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL
  • Edit the CheckTicketNewClientNote.ps1
    • This will send the message to the Tech directly through the SlackBot channel – only they will see it
    • This will only fire if the Tech has not yet put a Tech-Note in the ticket after the client/user left a comment or note
    • adjust the PostgreSQL settings – if needed – IP / Port / User etc…
    • Line 7 – entry_age_minutes = 32
      • this is a minute value – we run the scripts every 30 minutes, so due to some timing variance the entry cannot be older than 32 minutes by default…
    • Line 74 – adjust the URL to your external WebHelpdesk URL
    • Line 97 – adjust the Link to your own WebHook URL

After that you should be all set.
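Just for reference, the core of what those scripts do when they alert is a simple HTTP POST to a Slack incoming-webhook URL – a minimal sketch (webhook URL, channel and text are placeholders):

# Sketch: post a message to a Slack incoming webhook (URL, channel and text are placeholders)
$webhook = 'https://hooks.slack.com/services/XXXX/YYYY/ZZZZ'
$payload = @{
    channel    = '#helpdesk'
    username   = 'WebHelpDesk'
    text       = 'Ticket 1234 has had no first response for 88 minutes.'
    icon_emoji = ':ticket:'
} | ConvertTo-Json
Invoke-RestMethod -Uri $webhook -Method Post -Body $payload -ContentType 'application/json'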

We have further integration into PRTG, monitoring the database for “new tickets older than 120 minutes” and looking for the logfile “SlackLogErrorDetails.txt”, which indicates that a Slack notification did not go through – most likely due to special characters, which the scripts should already take care of, but in case it happens again this file would appear. You can integrate that with your monitoring system as well, but this is beyond the scope of the simple notifications.

Other logfiles simply show what was sent out and indicate that everything is working well.

Click here to download the scripts archive WebHelpdeskSlackFiles.zip

SQL Database backup monitoring

The following article was originally posted here by myself: https://kb.paessler.com/en/topic/79665-sql-database-backup-monitoring

SQL backups and their monitoring are among the most important things. We often talk about rather complex setups including transaction logfiles and various other pieces.

Monitoring those things used to cost us too many sensors with the standard scripts etc. and was not really effective.

To change this, here are two scripts that should solve most of your issues – you will find them both at the end of this posting:

  • SQLBackupDestinationCheck.vbs
  • SQL_Database_Full_Backups.sql

SQLBackupDestinationCheck.vbs: This is a VBS script that will return XML content to Paessler/PRTG in multiple channels while using one sensor.

It expects three parameters:

  • go through first level sub folders: 0 (no) or 1 (yes)
  • file extension to obey – any other extension will be ignored – in most cases: “bak”
  • Path – should mostly be a UNC path

It will return those channels:

  • Total file count: count of all files with this extension in all folders checked
  • Total folder count: count of all folders checked
  • Oldest file found in days: oldest file – value gives back age in days
  • Newest file found in days: newest file – value gives back age in days
  • Lowest files in folder count found: lowest count of files that have been found in one folder
  • Highest files in folder count found: max. files that have been found in one folder

This needs some explanation:
The script checks a path for files with a certain extension. Let’s say you run SQL maintenance plans and use the extension .BAK for the backup files. You do a daily backup and keep the files for 3 days to make sure they end up on a tape; furthermore, you use sub-folders per database and you have a total of 5 databases on this system. Now you will need to configure error limits per channel – e.g.:

  • Total file count: lower limit: 10 files – upper limit: 20 files – during the backup you might have up to 20 files
  • Total folder count: 5 folders as upper and lower limit – more/fewer than 5 would mean something changed
  • Oldest file found in days: lower limit 2 days – upper limit 4 days – older than 4 would mean the cleanup does not work
  • Newest file found in days: lower limit 0 days – upper limit 2 days – nothing newer than expected (that would indicate date issues) and nothing older either
  • Lowest files in folder count found: lower limit: 2 – there should always be more than 2 .BAK files in any subfolder
  • Highest files in folder count found: upper limit: 4 – anything above would again mean the cleanup is not working right for one database

So – keep in mind – you can get fancier with WARNING limits and ERROR limits; the example above will help you understand what to do and should get you started. The script will save you quite a few sensors and still keep a pretty close watch on the file-system side of SQL backups – of course you could use it for something other than SQL backups as well, but this was my main intent for this script.
SQL_Database_Full_Backups.sql: This file requests information about the backups from SQL itself. It might need SQL 2005 or newer to work – and yes, I did post this on another PRTG KB thread, but I wanted to have the complete solution in this one post.

The script is executed against the SQL server instance where the databases reside, on the master database. You need to specify a parameter that will be set as “@MaxHours” – this should be something like 26 hours, so your SQL backups never exceed an age of 26 hours, giving the backup some time to run as well. More might be necessary for bigger databases. If you do multiple full backups per day, set it to e.g. 2 hours or whatever your limit is.

You will get back 3 columns:

  • TotalAmountOfDatabases – total amount of databases on this server; this allows you not only to see whether anyone created/deleted a database on the server, it also gives you a good baseline in general
  • RecentlyBackupUpCount – how many databases have been backed up recently (full backup) within the specified time window
  • NOTRecentlyBackupUpCount – how many have not been backed up in the same time window

RecentlyBackupUpCount and NOTRecentlyBackupUpCount should always add up to TotalAmountOfDatabases – but that’s not the point. More importantly, you will have backed-up and not-backed-up databases – set your error limits for all three columns accordingly (upper and lower limit) and you will see that the alert fires if you add a database, or if the SQL Agent service stays stopped so that a single backup is skipped and missed…
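As a rough idea of what such a check can look like – a reconstruction under assumptions, not the original .sql file – the query joins sys.databases against msdb.dbo.backupset; here wrapped in PowerShell for a quick manual test:

# Sketch: count databases with and without a recent full backup (time window in hours, 26 as an example)
Import-Module SqlServer
$maxHours = 26
$check = @"
SELECT
    COUNT(*) AS TotalAmountOfDatabases,
    SUM(CASE WHEN b.LastFull >= DATEADD(hour, -$maxHours, GETDATE()) THEN 1 ELSE 0 END) AS RecentlyBackupUpCount,
    SUM(CASE WHEN b.LastFull IS NULL OR b.LastFull < DATEADD(hour, -$maxHours, GETDATE()) THEN 1 ELSE 0 END) AS NOTRecentlyBackupUpCount
FROM sys.databases d
LEFT JOIN (SELECT database_name, MAX(backup_finish_date) AS LastFull
           FROM msdb.dbo.backupset
           WHERE type = 'D'
           GROUP BY database_name) b ON b.database_name = d.name
WHERE d.name <> 'tempdb';
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Database 'master' -Query $check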

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML
File: SQLBackupDestinationCheck.vbs

Folder: C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\sql\mssql
File: SQL_Database_Full_Backups.sql