
Managing the Windows Time Service with SCCM’s Configuration Items

Keeping accurate and consistent time is important in our line of business. Event and forensic correlation, authentication protocols like Kerberos that rely on timestamps, and even the simple coordination of things like updates all require accurate timekeeping. Computers are unfortunately notoriously bad at keeping time, so we have protocols like NTP and time synchronization hierarchies to keep all the clocks ticking. In an Active Directory environment this is one of those things that (if set up correctly on the Domain Controller holding the PDC Emulator FSMO role) just kind of takes care of itself, but what if you have WORKGROUP machines? Well, you're in luck. You can use SCCM's Configuration Items to manage configuration settings on devices that are outside of your normal domain environment, in isolated networks and beyond the reach of tools like GPOs.

There are really two pieces to this. We need to ensure that the correct NTP servers are being used, so all our domain-joined and WORKGROUP machines get their time from the same source, and we need to ensure that our NTP client is running.

 

Setting the correct NTP Servers for the Windows Time Service

To get started, create a Configuration Item with a Registry Value based setting. The NtpServer value sets which NTP servers the Windows Time Service (W32Time) pulls from. We can manage it like so:

  • Setting Type: Registry Value
  • Data Type: String
  • Hive Name: HKEY_LOCAL_MACHINE
  • Key Name: SYSTEM\CurrentControlSet\Services\W32Time\Parameters
  • Value Name: NtpServer

The corresponding Compliance Rule is straightforward. We just want to ensure that the same time servers we are using in our domain environment are set here as well.

  • Rule type: Value
  • Setting must comply with the following value: yourtimeserver1,yourtimeserver2
  • Remediate non-compliant rules when supported: Yes
  • Report noncompliance if this setting is not found: Yes

  • Rule Type: Existential
  • Registry value must exist on client devices: Yes
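For a quick sanity check outside of SCCM, you can read the same value locally (the key path matches the setting above):

```powershell
# Read the NtpServer value the Configuration Item manages; on a domain-joined
# machine you'd typically see your PDC Emulator or time.windows.com here.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Parameters' -Name NtpServer |
    Select-Object -ExpandProperty NtpServer
```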

 

The Setting should require the existence of the NtpServer value and set it as specified. If it is set to something else, the value will be remediated back to your desired value. You can learn more about setting the NtpServer registry value and controlling the polling interval at this Microsoft MSDN blog post.

 

Ensuring the Windows Time Service is Running

If the time service isn't running then you are not going to have accurate timekeeping! This is further complicated by the behavior of the Windows Time Service on WORKGROUP computers: the service stops immediately after system startup, even if its Startup Type is set to Automatic. On Windows 7 and Server 2008 R2 and above, W32Time is configured as a trigger-start service in order to reduce the number of services running. The trigger (of course) that causes it to automatically start is whether or not the machine is domain-joined, so on WORKGROUP machines the service status ends up set to Stopped. Not very helpful in our scenario. Let's change that.

We can start by just performing a simple WQL query to see if the W32Time service is running:

  • Setting type: WQL Query
  • Data type: String
  • Namespace: root\cimv2
  • Class: Win32_Service
  • Property: Name
  • WQL query WHERE clause: Name like "%W32Time%" and State like "%Running%"
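You can test the same query locally before committing it to the Configuration Item:

```powershell
# Run the CI's WQL query directly against WMI; this returns the service
# object only when W32Time is actually running, and nothing otherwise.
Get-WmiObject -Namespace root\cimv2 -Query `
    'SELECT Name FROM Win32_Service WHERE Name LIKE "%W32Time%" AND State LIKE "%Running%"'
```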

It's a bit backwards, but if the query comes back with no results then the configuration state we are looking for does "not exist" and the machine will be marked non-compliant. It's not intuitive but it works:

  • Rule Type: Existential
  • Setting must exist on client devices: Yes

 

This gives us the status of the Windows Time Service but we still need to remove the [DOMAIN JOINED] trigger so the service will actually start automatically. PowerShell to the rescue!

  • Setting Type: Script
  • Data Type: Integer
  • Script: PowerShell

Discovery Script
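The discovery script itself isn't embedded in this archive. A minimal sketch might look like the following; the specific return codes and the trigger check via sc.exe are my own assumptions, not the original script:

```powershell
# Hypothetical discovery sketch: returns 0 only when W32Time is fully
# compliant, and a distinct non-zero value for each non-compliant state.
$svc = Get-Service -Name W32Time
# sc.exe qtriggerinfo lists the service's start/stop triggers
$triggers = & sc.exe qtriggerinfo w32time

if ($svc.StartType -ne 'Automatic')   { return 1 }  # wrong startup type
if ($triggers -match 'DOMAIN JOINED') { return 2 }  # trigger still present
if ($svc.Status -ne 'Running')        { return 3 }  # service stopped
return 0
```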

Remediation Script
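The remediation script also isn't embedded here. A minimal sketch, assuming sc.exe is used to clear the service triggers:

```powershell
# Hypothetical remediation sketch: make W32Time start automatically on a
# WORKGROUP machine by clearing its domain-joined trigger.
Set-Service -Name W32Time -StartupType Automatic
# 'sc triggerinfo <service> delete' removes all configured triggers,
# including the [DOMAIN JOINED] one
& sc.exe triggerinfo w32time delete | Out-Null
Start-Service -Name W32Time
```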

 

  • Value returned by the specified script: Equals 0
  • Run the specified remediation script when this setting is noncompliant: Yes

The Discovery script will return various non-compliant values depending on the configuration state of the endpoint. This will then cause the Remediation script to run, which sets the service's Startup Type to Automatic, removes the [DOMAIN JOINED] trigger, and starts the service.

I hope this post helps you manage your time configuration on all those weird one-off WORKGROUP machines that we all seem to have floating around out there.

Until next time, stay frosty.

Scheduling Backups with Veeam Free and PowerShell

Veeam Free Edition is an amazing product. For the low price of absolutely zero you get a whole laundry list of enterprise-grade features: VeeamZip (Full Backups), granular and application-aware restore of items, native tape library support and direct access to NFS-based VM storage using Veeam’s NFS client. One thing that Veeam Free doesn’t include however is a scheduling mechanism. We can fix that with a little bit of PowerShell that we run as a scheduled task.

I have two scripts. The first one loads the Veeam PowerShell Snap-In, connects to the Veeam server, gets a list of virtual machines and then backs them up to a specified destination.
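The script itself isn't embedded in this archive; a minimal sketch along those lines, assuming the VeeamPSSnapin and an illustrative destination path:

```powershell
# Hypothetical sketch of the scheduled backup script: VeeamZip every VM
# the Veeam server can see to a local destination folder.
Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server localhost

$Destination = 'D:\Backups'   # illustrative path

# Enumerate VMs from the vSphere inventory and VeeamZip each one
Find-VBRViEntity -VMsAndTemplates |
    Where-Object Type -eq 'Vm' |
    ForEach-Object { Start-VBRZip -Entity $_ -Folder $Destination -Compression 5 }

Disconnect-VBRServer
```

Registered as a scheduled task, this effectively gives the free edition the job scheduling the paid product has built in.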

 

I had Veeam set up on a virtual machine running on the now defunct HumbleLab. One of the disadvantages of this configuration is that I don't have separate storage to move the resulting backup files onto. You could solve this by simply using an external hard drive, but I wanted something a little more… cloud-y. I set up Azure Files so I could connect to cheap, redundant and, most importantly, off-site storage via SMB3 to store a copy of my lab backups. The biggest downside to this is security. Azure Files is really not designed to be a full-featured replacement for a traditional Windows file server. It's really more of an SMB-as-a-Service offering designed to be programmatically accessed by Azure VMs. SMB3 provides transit encryption, but you would still probably be better off using a Site-to-Site VPN between your on-prem Veeam server and a Windows file server running as a VM in Azure, or by using Veeam's Cloud Connect functionality. There's also nothing that replaces or replicates NTFS permissions. The entire "security" of your Azure Files SMB share rests in the storage key. This is OK for a lab but probably not OK for production.

Here’s the script that fires off once a week and copies the backups out to Azure Files. For something like my lab it’s a perfect solution.
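The copy script isn't shown in this archive either; a sketch, with placeholder storage account name, key and paths:

```powershell
# Hypothetical sketch: mount an Azure Files share over SMB3 and copy the
# most recent backup files to it. Share, key and paths are placeholders.
$Share = '\\mystorageacct.file.core.windows.net\veeam-backups'
$Key   = ConvertTo-SecureString 'STORAGE-ACCOUNT-KEY' -AsPlainText -Force
$Cred  = New-Object System.Management.Automation.PSCredential ('AZURE\mystorageacct', $Key)

New-PSDrive -Name Z -PSProvider FileSystem -Root $Share -Credential $Cred | Out-Null

# Copy anything written in the last week (the task runs weekly)
Get-ChildItem 'D:\Backups' -Filter *.vbk |
    Where-Object LastWriteTime -gt (Get-Date).AddDays(-7) |
    Copy-Item -Destination 'Z:\' -Force

Remove-PSDrive -Name Z
```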

 

Until next time, stay frosty!

SCCM, Asset Intelligence and Adobe SWID Tags, Part II – Custom Fields

There’s some crucial information in Adobe’s SWID tags that does not get automatically collected in SCCM’s Asset Intelligence process via the SMS_SoftwareTag class.

This information is contained in the optional fields of the SWID tag and will allow you to “decode” the LeID field and determine important information for your Enterprise Term Licensing Agreement (ETLA).

 

We're talking about things like whether the product is licensed via a volume licensing agreement or a retail license, the activation status, and the particular version of the product (Standard vs. Pro). As previously mentioned, though, this information is unfortunately not pulled up in the SMS_SoftwareTag class inventory process.

I came across this blog post by Sherry Kissinger and largely cribbed this idea from her. We can use/abuse Configuration Items to run a script that parses the SWID tag files and inserts them into a WMI class, which then gets collected via Hardware Inventory, and from there we can run reports off of it. Sherry's post provided the .MOF file and a VBScript, but I only really speak PowerShell, so I rewrote her script.
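Neither script is embedded in this archive; a rough sketch of the approach follows, with illustrative property and element names rather than Sherry's exact schema:

```powershell
# Hypothetical discovery sketch: parse Adobe .swidtag XML files and publish
# the interesting fields into a custom WMI class (cm_AdobeInfo).
$Namespace = 'root\cimv2'
$ClassName = 'cm_AdobeInfo'

# Recreate the class on each run so stale entries don't linger
Remove-WmiObject -Namespace $Namespace -Class $ClassName -ErrorAction SilentlyContinue
$Class = New-Object System.Management.ManagementClass($Namespace, [string]::Empty, $null)
$Class.Name = $ClassName
$Class.Properties.Add('ProductTitle', [System.Management.CimType]::String, $false)
$Class.Properties.Add('LeID',         [System.Management.CimType]::String, $false)
$Class.Properties.Add('ChannelType',  [System.Management.CimType]::String, $false)
$Class.Properties['LeID'].Qualifiers.Add('Key', $true)
$Class.Put() | Out-Null

# Adobe drops its tags under ProgramData in regid.* folders
Get-ChildItem "$env:ProgramData\regid.*\*.swidtag" | ForEach-Object {
    [xml]$tag = Get-Content $_.FullName
    $swid = $tag.software_identification_tag
    Set-WmiInstance -Namespace $Namespace -Class $ClassName -Arguments @{
        ProductTitle = $swid.product_title
        LeID         = $swid.product_id
        ChannelType  = $swid.extended_information.channel_type
    } | Out-Null
}
'Compliant'
```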

 

 

Set this script up in a Configuration Item, add it to a Baseline and then deploy it. When the Configuration Item runs, it should find any Adobe SWID tags, parse them and create a custom instance of a WMI class (cm_AdobeInfo) containing all of the goodies from the SWID tag.

 

By adding Sherry’s custom .MOF file to your Hardware Inventory class settings you can have the SCCM agent pull this class up as part of the Hardware Inventory Cycle.

 

With the following bit of SQL you can build another nice Manager Approved (TM) report:
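The query itself isn't reproduced in this archive; a sketch, assuming the custom class surfaces as a v_GS_CM_ADOBEINFO0 view (SCCM's usual naming convention for custom inventory classes, but verify in your site database):

```sql
-- Hypothetical report query; view and column names follow SCCM's
-- conventions for custom hardware inventory classes and may differ.
SELECT sys.Name0           AS [Computer],
       adobe.ProductTitle0 AS [Product],
       adobe.LeID0         AS [LeID],
       adobe.ChannelType0  AS [License Channel]
FROM   v_R_System sys
JOIN   v_GS_CM_ADOBEINFO0 adobe ON adobe.ResourceID = sys.ResourceID
ORDER  BY sys.Name0
```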

 

Until next time, stay frosty!

Veeam, vSphere Tags and PowerCLI – A Practical Use-case

vSphere tags have been around for a while, but up until quite recently I had yet to find a particularly compelling reason to use them. We are a small enough shop that it just didn't make sense to use them for categorization purposes with only 200 virtual machines.

This year we hopped on the Veeam bandwagon and while I was designing our backup jobs I found the perfect use-case for vSphere tags. I wanted a mechanism that provided us a degree of flexibility so I could protect different “classes” of virtual machines to the degree appropriate to their criticality but also was automatic since manual processes have a way of not getting done. I also did not want an errant admin to place a virtual machine in the wrong folder or cluster and then have the virtual machine be silently excluded from any backup jobs. The solution was to use vSphere tags and PowerShell with VMware’s PowerCLI module.

If we look at the different methods you can use for targeting source objects in Veeam Backup Jobs, tags seem to offer the most flexibility. Targeting based on clusters works fine as long as you want to include everything on a particular cluster. Targeting based on hosts works as long as you don't expect DRS to move a virtual machine to another host that is not included in your Backup Job. Datastores initially seemed to make the most sense, since our different virtual machine "classes" roughly correspond to what datastore they are using (and hence what storage they are on), but not every VM falls cleanly into a datastore "class". Folders would functionally work the same way here as tags, but tags are just a bit cleaner. If I create a folder structure for a new line-of-business application, I don't have to go back into each of my Veeam jobs and update their targeting to add the new folders; I just tag the folders appropriately and away I go.

Tags also work great for exclusions. We run a separate vSphere cluster for our SQL workloads, primarily for licensing reasons (SQL Datacenter licensing for the win!). I just set up a tag for the whole cluster, use it to target VMs for my application-aware Veeam Backup Jobs, and also use it to exclude those virtual machines from the standard Backup Jobs (why back it up twice? Especially when one backup is not transaction-consistent?).

How about checking to see if there are any virtual machines that missed a tag assignment and therefore are not being backed up?

PowerShell to the rescue once again!
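The original snippet isn't embedded in this archive; a sketch, with placeholder vCenter name, tag category and mail settings:

```powershell
# Hypothetical sketch: find VMs with no backup-class tag assignment and
# email a report so nothing silently falls out of the backup jobs.
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter.example.com'

$untagged = Get-VM | Where-Object {
    -not (Get-TagAssignment -Entity $_ -Category 'BackupClass')
}

if ($untagged) {
    Send-MailMessage -From 'veeam@example.com' -To 'admins@example.com' `
        -Subject 'VMs missing a backup tag' `
        -Body ($untagged.Name -join "`n") -SmtpServer 'smtp.example.com'
}

Disconnect-VIServer -Confirm:$false
```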

 

Another simple little snippet that will send you an email if you managed to inadvertently place a virtual machine into a location where it is not getting a tag assignment and therefore not getting automatically included in a backup job.
 

Until next time!

Quick and dirty PowerShell snippet to get Dell Service Tag

The new fiscal year is right around the corner for us. This time of year brings all kinds of fun for us Alaskans: spring king salmon runs, our yearly dose of three days' worth of good weather, and licensing true-up and hardware purchases. Now there's about a million different ways to skin this particular cat, but here's a quick and dirty method with PowerShell.
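That snippet isn't embedded in this archive; a sketch of one such method, reading Win32_BIOS (where Dell exposes the Service Tag as the serial number) from a placeholder list of machines:

```powershell
# Hypothetical sketch: collect Dell Service Tags over the network,
# skipping any machine that doesn't respond. Computer names are placeholders.
$computers = 'PC-01', 'PC-02', 'PC-03'

foreach ($pc in $computers) {
    if (Test-NetConnection -ComputerName $pc -InformationLevel Quiet) {
        $bios = Get-WmiObject -Class Win32_BIOS -ComputerName $pc
        # On Dell hardware the BIOS serial number *is* the Service Tag
        [pscustomobject]@{ Computer = $pc; ServiceTag = $bios.SerialNumber }
    }
}
```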

 

If you don't have access to the ridiculously useful Test-NetConnection cmdlet then you probably should upgrade to PowerShell v5, since unlike most Microsoft products PowerShell seems to actually improve with each version, but barring that you can just open a TCP socket by instantiating the appropriate .NET object.
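A sketch of that .NET fallback, probing a placeholder host on WinRM's port 5985:

```powershell
# Hypothetical sketch: test reachability by opening a raw TCP socket,
# for older PowerShell versions without Test-NetConnection.
$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect('PC-01', 5985)   # placeholder host and port
    $reachable = $client.Connected
} catch {
    $reachable = $false
} finally {
    $client.Close()
}
$reachable
```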

 

The slickest way I have ever seen this done, though, was with SCCM and the Dell Command Integration Suite for System Center, which could generate a warranty status report for everything in your SCCM Site by connecting to the database, grabbing all the service tags and then sending them up to Dell's warranty status API to get all kinds of juicy information like model, service level, ship date, and warranty status. Unfortunately, since this was tremendously useful, the team overseeing the warranty status web service decommissioned it abruptly back in 2016. Thanks for nothing ya jerks!

 

Use PowerCLI to get a list of old snapshots

VMware's snapshot implementation is pretty handy. A quick click and you have a rollback point for whatever awesomeness your Evel Knievel rockstar developer is about to roll out. There are, as expected, some downsides.

Snapshots are essentially implemented as differencing disks: at the point in time you take a snapshot, a child .vmdk file is created that tracks any disk changes from that point forward. When you delete the snapshot the changed blocks are merged back into the parent .vmdk. If you decide to roll back the changes you just revert to the original parent disk, which remained unchanged.

There are some performance implications:

  • Disk I/O is impacted for the VM, as ESXi has to work a bit harder to write new changes to the child .vmdk file while referencing the parent .vmdk file for unchanged blocks.
  • The child .vmdk files grow over time as more and more changes accumulate, and the bigger they get the more disk I/O is impacted. As you can imagine, multiple snapshots can certainly impact disk performance.
  • When the snapshot is deleted, the process of merging the changes in the child .vmdk back into the parent .vmdk can be very slow depending on the size of the child .vmdk. Ask my DBA how long it took to delete a snapshot after he restored a 4TB database…

They also don't make a great backup mechanism in and of themselves:

  • The parent and child .vmdk are located on the same storage and if that storage disappears so does the production data (the parent .vmdk) and the “backup” (the child .vmdk) with it.
  • They are not transaction-consistent. The hypervisor temporarily freezes I/O in order to take the snapshot but does not do any application-aware quiescing on its own (although VMware Tools can quiesce the guest file system and memory).

 

For all these reasons I like to keep track of what snapshots are out there and how long they have been hanging around. Luckily PowerCLI makes this pretty easy:
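The snippet itself isn't reproduced in this archive; a sketch along the same lines, with placeholder vCenter and SMTP names:

```powershell
# Hypothetical sketch: report all snapshots older than seven days across
# multiple vCenter servers via a formatted HTML email.
Import-Module VMware.PowerCLI
Connect-VIServer -Server 'vcenter01.example.com', 'vcenter02.example.com'

$old = Get-VM | Get-Snapshot | Where-Object { $_.Created -lt (Get-Date).AddDays(-7) }

if ($old) {
    $body = $old |
        Select-Object VM, Name, Created,
            @{ N = 'SizeGB'; E = { [math]::Round($_.SizeGB, 1) } } |
        ConvertTo-Html | Out-String
    Send-MailMessage -From 'vcenter@example.com' -To 'admins@example.com' `
        -Subject 'Snapshots older than 7 days' -Body $body -BodyAsHtml `
        -SmtpServer 'smtp.example.com'
}

Disconnect-VIServer -Confirm:$false
```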

 
This little snippet will connect to your vCenter servers, go through all the VMs, locate any snapshots older than seven days and then send you a nicely formatted email:

 

If you haven't invested a little time in learning PowerShell/PowerCLI, I highly recommend you start. It is such an amazing toolset and force multiplier that I cannot imagine working without it these days.

Until next time, stay frosty.

Extra credit reading: