Tag Archives: windows

Managing the Windows Time Service with SCCM’s Configuration Items

Keeping accurate and consistent time is important in our line of business. Event and forensic correlation, authentication protocols like Kerberos that rely on timestamps, and even the simple coordination of things like updates all require accurate timekeeping. Computers are unfortunately notoriously bad at keeping time, so we have protocols like NTP and time synchronization hierarchies to keep all the clocks ticking. In an Active Directory environment this is one of those things that (if set up correctly on the Domain Controller holding the PDC Emulator FSMO role) just kind of takes care of itself, but what if you have WORKGROUP machines? Well. You're in luck. You can use SCCM's Configuration Items to manage configuration settings on devices that are outside of your normal domain environment, in isolated networks, and beyond the reach of tools like GPOs.

There are really two pieces to this. We need to ensure that the correct NTP servers are being used so all our domain-joined and WORKGROUP machines get their time from the same source, and we need to ensure that our NTP client is running.

 

Setting the correct NTP Servers for the Windows Time Service

To get started, create a Configuration Item with a Registry Value-based setting. The NtpServer value sets which NTP servers the Windows Time Service (W32Time) pulls from. We can manage it like so:

  • Setting Type: Registry Value
  • Data Type: String
  • Hive Name: HKEY_LOCAL_MACHINE
  • Key Name: SYSTEM\CurrentControlSet\Services\W32Time\Parameters
  • Value Name: NtpServer

The corresponding Compliance Rule is straight forward. We just want to ensure that the same time servers we are using in our domain environment are set here as well.

  • Rule type: Value
  • Setting must comply with the following value: yourtimeserver1,yourtimeserver2
  • Remediate non-compliant rules when supported: Yes
  • Report noncompliance if this setting is not found: Yes

  • Rule Type: Existential
  • Registry value must exist on client devices: Yes

 

The Setting should require the existence of the NtpServer value and set it as specified. If it is set to something else, the value will be remediated back to your desired value. You can learn more about setting the NtpServer registry value and controlling the polling interval at this Microsoft MSDN blog post.
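If you want to spot-check or fix a one-off machine by hand before the Configuration Item catches it, the same value can be set with w32tm. A quick sketch (the server names are placeholders for your own time sources):

    # Point W32Time at the same servers the domain uses and mark the peer list as manual
    w32tm /config /manualpeerlist:"yourtimeserver1,yourtimeserver2" /syncfromflags:manual /update

    # Confirm what actually landed in the registry (the value the Configuration Item checks)
    Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Parameters' -Name NtpServer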

 

Ensuring the Windows Time Service is Running

If the time service isn't running then you are not going to have accurate timekeeping! This is further complicated by the behavior of the Windows Time Service on WORKGROUP computers: the time service will stop immediately after system startup, even if the Startup Type is set to Automatic. The W32Time service is configured as a Trigger-Start service in order to reduce the number of services running in Windows 7 and Server 2008 R2 and above. The trigger (of course) that causes it to start automatically is whether or not the machine is domain-joined, so for WORKGROUP machines the service ends up Stopped. Not very helpful in our scenario. Let's change that.

We can start by just performing a simple WQL query to see if the W32Time service is running:

  • Setting type: WQL Query
  • Data type: String
  • Namespace: root\cimv2
  • Class: Win32_Service
  • Property: Name
  • WQL query WHERE clause: Name like "%W32Time%" and State like "%Running%"

It’s a bit backward but if the query comes back with no results then the configuration state we are looking for does “not exist” and so we’ll mark it as non-compliant. It’s not intuitive but it works:

  • Rule Type: Existential
  • Setting must exist on client devices: Yes
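You can sanity-check the WQL clause against a live machine before baking it into the Configuration Item:

    # Returns an object when W32Time is running and nothing when it is stopped --
    # exactly the "exists / does not exist" behavior the existential rule keys off of
    Get-WmiObject -Namespace 'root\cimv2' `
        -Query "SELECT Name, State FROM Win32_Service WHERE Name LIKE '%W32Time%' AND State LIKE '%Running%'"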

 

This gives us the status of the Windows Time Service but we still need to remove the [DOMAIN JOINED] trigger so the service will actually start automatically. PowerShell to the rescue!

  • Setting Type: Script
  • Data Type: Integer
  • Script: PowerShell

Discovery Script
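A minimal sketch of discovery logic along these lines: it returns 0 when the service is set to Automatic and running, and a different non-zero value for each problem it finds (it doesn't inspect the trigger itself; remediation handles that).

    # Discovery: 0 = compliant, anything else = non-compliant
    $service = Get-WmiObject -Class Win32_Service -Filter "Name = 'W32Time'"

    if ($null -eq $service) {
        # Service is missing entirely
        return 3
    }
    if ($service.StartMode -ne 'Auto') {
        # Startup Type is not set to Automatic
        return 2
    }
    if ($service.State -ne 'Running') {
        # Set to Automatic but not actually running
        return 1
    }
    return 0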

Remediation Script
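And a matching remediation sketch: flip the Startup Type to Automatic, strip the service triggers (including [DOMAIN JOINED]) with sc.exe and start the service.

    # Set the Startup Type to Automatic
    Set-Service -Name W32Time -StartupType Automatic

    # Remove the existing service triggers (including [DOMAIN JOINED]) so the
    # service no longer waits for a domain join before starting
    & sc.exe triggerinfo w32time delete

    # Start the service if it isn't already running
    Start-Service -Name W32Time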

 

  • Value returned by the specified script: Equals 0
  • Run the specified remediation script when this setting is noncompliant: Yes

The Discovery script will return various non-compliant values depending on the configuration state of the endpoint. This will then cause the Remediation script to run which sets the service’s Startup Type to Automatic, removes the [DOMAIN JOINED] trigger and starts the service.

I hope this post helps you manage your time configuration on all those weird one-off WORKGROUP machines that we all seem to have floating around out there.

Until next time, stay frosty.

Scheduling Backups with Veeam Free and PowerShell

Veeam Free Edition is an amazing product. For the low price of absolutely zero you get a whole laundry list of enterprise-grade features: VeeamZip (Full Backups), granular and application-aware restore of items, native tape library support and direct access to NFS-based VM storage using Veeam’s NFS client. One thing that Veeam Free doesn’t include however is a scheduling mechanism. We can fix that with a little bit of PowerShell that we run as a scheduled task.

I have two scripts. The first one loads the Veeam PowerShell Snap-In, connects to the Veeam server, gets a list of virtual machines and then backs them up to a specified destination.
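The first script amounts to something along these lines. A sketch only: the VM names and destination folder are placeholders, and the snap-in and Start-VBRZip parameters can vary between Veeam versions.

    # Load the Veeam snap-in and connect to the local Veeam server
    Add-PSSnapin VeeamPSSnapin
    Connect-VBRServer -Server localhost

    # Placeholders: the lab VMs to protect and the backup destination folder
    $vmNames     = 'DC01', 'SQL01', 'CM01'
    $destination = 'D:\Backups'

    foreach ($name in $vmNames) {
        # Find the VM on the Hyper-V host and VeeamZip it to the destination
        $vm = Find-VBRHvEntity -Name $name
        Start-VBRZip -Entity $vm -Folder $destination -Compression 5
    }

    Disconnect-VBRServer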

 

I had Veeam set up on a virtual machine running on the now defunct HumbleLab. One of the disadvantages of this configuration is that I don't have separate storage to move the resulting backup files onto. You could solve this by simply using an external hard drive but I wanted something a little more… cloud-y. I set up Azure Files so I could connect to cheap, redundant and most importantly off-site, off-line storage via SMB3 to store a copy of my lab backups. The biggest downside to this is security. Azure Files is really not designed to be a full-featured replacement for a traditional Windows file server. It's really more of an SMB-as-a-Service offering designed to be programmatically accessed by Azure VMs. SMB3 provides transit encryption but you would still probably be better off using a Site-to-Site VPN between your on-prem Veeam server and a Windows file server running as a VM in Azure, or by using Veeam's Cloud Connect functionality. There's also nothing that replaces or replicates NTFS permissions. The entire "security" of your Azure Files SMB share rests in the storage key. This is OK for a lab but probably not OK for production.

Here’s the script that fires off once a week and copies the backups out to Azure Files. For something like my lab it’s a perfect solution.
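In outline it looks something like this; the storage account, share name, key and paths below are placeholders for your own:

    # Placeholders: storage account, share and local backup locations
    $storageAccount = 'mystorageaccount'
    $shareUrl       = "\\$storageAccount.file.core.windows.net\labbackups"
    $storageKey     = 'REPLACE-WITH-STORAGE-ACCOUNT-KEY'
    $localBackups   = 'D:\Backups'

    # Mount the Azure Files share over SMB3 using the storage account key
    net use Z: $shareUrl "/user:AZURE\$storageAccount" $storageKey

    # Copy anything VeeamZip produced in the last seven days out to Azure Files
    Get-ChildItem -Path $localBackups -Filter '*.vbk' |
        Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) } |
        Copy-Item -Destination 'Z:\'

    # Clean up the mapped drive
    net use Z: /delete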

 

Until next time, stay frosty!

SCCM, Asset Intelligence and Adobe SWID Tags, Part II – Custom Fields

There’s some crucial information in Adobe’s SWID tags that does not get automatically collected in SCCM’s Asset Intelligence process via the SMS_SoftwareTag class.

This information is contained in the optional fields of the SWID tag and will allow you to “decode” the LeID field and determine important information for your Enterprise Term Licensing Agreement (ETLA).

 

We're talking about things like whether the product is licensed via a volume licensing agreement or a retail license, the activation status, and the particular edition of the product (Standard vs. Pro), but as previously mentioned this information is unfortunately not pulled in by the SMS_SoftwareTag class inventory process.

I came across this blog post by Sherry Kissinger and largely cribbed this idea from her. We can use/abuse Configuration Items to run a script that parses the SWID tag files and inserts the results into a WMI class, which then gets collected via Hardware Inventory; from there we can run reports off of it. Sherry's post provided the .MOF file and a VBScript, but I only really speak PowerShell so I rewrote her script.
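My rewrite boils down to something like the sketch below. This is a trimmed-down illustration rather than the full script: the tag folder, the cm_AdobeInfo property names and the XML element names are the ones I settled on, and the element names in particular are worth verifying against a real tag file on one of your endpoints.

    # Where Adobe drops its SWID tag files; adjust the path and properties to taste
    $tagFolder = 'C:\ProgramData\regid.1986-12.com.adobe'
    $className = 'cm_AdobeInfo'

    function Get-SwidValue([xml]$Doc, [string]$Element) {
        # XPath by local-name() so we don't have to wrangle the swid: namespace
        $node = $Doc.SelectSingleNode("//*[local-name()='$Element']")
        if ($node) { $node.InnerText }
    }

    # (Re)create the custom WMI class that Hardware Inventory will collect
    Remove-WmiObject -Namespace 'root\cimv2' -Class $className -ErrorAction SilentlyContinue
    $class = New-Object System.Management.ManagementClass('root\cimv2', [string]::Empty, $null)
    $class['__CLASS'] = $className
    $class.Qualifiers.Add('Static', $true)
    foreach ($prop in 'TagFile', 'ProductTitle', 'LeID', 'ChannelType', 'ActivationStatus') {
        $class.Properties.Add($prop, [System.Management.CimType]::String, $false)
    }
    $class.Properties['TagFile'].Qualifiers.Add('Key', $true)
    $null = $class.Put()

    # Parse each tag file and write one instance per tag
    Get-ChildItem -Path $tagFolder -Filter '*.swidtag' -ErrorAction SilentlyContinue | ForEach-Object {
        [xml]$swid = Get-Content -Path $_.FullName
        Set-WmiInstance -Namespace 'root\cimv2' -Class $className -Arguments @{
            TagFile          = $_.Name
            ProductTitle     = (Get-SwidValue $swid 'product_title')
            LeID             = (Get-SwidValue $swid 'license_id')       # element name may differ; inspect a real tag
            ChannelType      = (Get-SwidValue $swid 'channel_type')
            ActivationStatus = (Get-SwidValue $swid 'activation_status')
        } | Out-Null
    }

    # The Configuration Item needs some output for its compliance rule
    Write-Output 'Compliant'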

 

 

Set this script up in a Configuration Item, add it to a Baseline and then deploy it. When the Configuration Item runs, it should find any Adobe SWID tags, parse them and populate a custom WMI class (cm_AdobeInfo) containing all of the goodies from the SWID tag.

 

By adding Sherry’s custom .MOF file to your Hardware Inventory class settings you can have the SCCM agent pull this class up as part of the Hardware Inventory Cycle.

 

With the following bit of SQL you can build another nice Manager Approved (TM) report:

 

Until next time, stay frosty!

Deploying VLC Media Player with SCCM

VLC Media Player is an F/OSS media player that supports a dizzying array of media formats. It's a great example of one of those handy but infrequently used applications that are not included in our base image but generate help desk tickets when a user needs to view a live feed or listen to a meeting recording. Instead of just doing the Next-Next-Finish dance, let's package it and deploy it out with SCCM. The 30 minutes it takes to package, test and deploy VLC will pay us back many times over when our help desk no longer has to manually install the software. This reduces the time it takes to resolve these tickets and ensures that the application gets installed in a standardized way.

Start by grabbing the appropriate installer from VideoLAN's website and copying it to whatever location you use to store your source installers for SCCM. Then fire up the Administrative Console and create a New Application (Software Library – Applications – Create Application). We don't get an .MSI installer, so unfortunately we are actually going to have to do a bit of work: pick Manually specify the application information.

Next up, fill out all the relevant general information. There’s a tendency to skimp here but you might as well take the 10 seconds to provide some context and comments. You might save your team members or yourself some time in the future.

I generally make an effort to provide an icon for the Application Catalog and/or Software Center as well. Users may not know what “VLC Media Player” is but they may recognize the orange traffic cone. Again. It doesn’t take much up front work to prevent a few tickets.

Now you need to add a Deployment Type to your Application. Think of the Application as the metadata wrapped around your Deployment Types, which are the actual installers. This lets you pull the logic for handling different types of clients, prerequisites and requirements away from other places (like separate Collections for Windows 7 32-bit and 64-bit clients) and just have one Application with two Deployment Types (a 32-bit installer and a 64-bit installer) that gets deployed to a more generic Collection. As previously mentioned, we don't have an .MSI installer, so we will have to manually specify the installation/uninstallation strings along with the detection logic.

  • Installation: vlc-2.2.8-win32.exe /L=1033 /S --no-qt-privacy-ask --no-qt-updates-notif
  • Uninstallation: %ProgramFiles(x86)%\VideoLAN\VLC\uninstall.exe /S

If you review the VLC documentation you can see that the /L switch specifies the language, the /S switch specifies a silent install and the --no-qt-privacy-ask --no-qt-updates-notif switches set the first-run settings so users don't receive the first-run prompts.
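Before wrapping the installer in a Deployment Type it's worth running the exact same string by hand on a test box to make sure it really does run end-to-end silently:

    # Run the installer with the exact switches the Deployment Type will use
    $proc = Start-Process -FilePath '.\vlc-2.2.8-win32.exe' `
        -ArgumentList '/L=1033 /S --no-qt-privacy-ask --no-qt-updates-notif' `
        -Wait -PassThru

    # 0 means the silent install finished cleanly
    $proc.ExitCode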

Without an MSI's handy ProductCode for setting up our Detection Logic, we will have to rely on something a little more basic: checking whether vlc.exe is present to tell the client whether or not the Application is actually installed. I also like to add a Version check as well so that older installs of VLC are not detected and are subsequently eligible for being upgraded.

  • Setting Type: File System
  • Type: File
  • Path: %ProgramFiles(x86)%\VideoLAN\VLC
  • File or folder name: vlc.exe
  • Property: Version
  • Operator: Equals
  • Value: 2.2.8
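You can confirm the value the detection rule will see by checking the file version on a machine that already has VLC installed:

    # The Version the detection rule compares against comes from the file's version resource
    (Get-Item "${env:ProgramFiles(x86)}\VideoLAN\VLC\vlc.exe").VersionInfo.FileVersion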

Last but not least you need to set the User Experience settings. These are all pretty self-explanatory. I do like to actually set the maximum run time and estimated installation time to something relevant for the application, that way if the installer hangs it doesn't just sit there for two hours before the agent kills it.

 

From there you should be able to test and deploy your new application! VLC Media Player is a great example of the kind of "optional" application that you could just deploy as Available to your entire workstation fleet, and then close tickets requesting a media player with instructions on how to use the Software Center.

 

 

Until next time, stay frosty!

SCCM, Asset Intelligence and Adobe SWID Tags

Licensing. It is confusing, constantly changing and expensive. It is that last part that our managers really care about come true-up time, and so a request in the format of, "Can you give me a report of all the installs of X and how many licenses of A and B we are using?" comes across your desk. Like many of the requests that come across your desk as a System Administrator, these can be deceptively tricky. This post will focus on Adobe's products.

How many installs of Adobe Acrobat XI do we have?

There are a bunch of canned reports that help you right off the bat under Monitoring – Reporting – Reports – Software – Companies and Products. If you don’t have a Reporting Services Point installed yet then get on it! The following reports are a decent start:

  • Count all inventoried products and versions
  • Count inventoried products and versions for a specific product
  • Count of instances of specific software registered with Add or Remove Programs

You may find that these reports are less accurate than you'd hope. I think of them as the "raw" data; while they are useful, they don't gracefully handle things like the difference between "Adobe Systems" and "Adobe Systems Inc." and will detect those as two separate publishers. Asset Intelligence adds a bit of, well, intelligence and allows you to get reports that are more reflective of the real-world state of your endpoints.

Once you get your Asset Intelligence Synchronization Point installed (if you don't have one already), you need to enable some Hardware Inventory Classes. Each of these incurs a minor performance penalty during the Hardware Inventory client task, so you probably only want to enable the classes you think you will need. I find the SMS_InstalledSoftware and SMS_SoftwareTag classes to be the most useful by far, so maybe start there.

You can populate these WMI classes by running the Machine Policy Retrieval & Evaluation Cycle client task followed by the Hardware Inventory Cycle. You should now be able to get some juicy info:
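If you'd rather poke at that data from PowerShell than from the console, the classes can be queried directly on a client; on my machines they live in the root\cimv2\sms namespace:

    # Installed software as Asset Intelligence sees it
    Get-WmiObject -Namespace 'root\cimv2\sms' -Class SMS_InstalledSoftware |
        Select-Object ProductName, ProductVersion, Publisher |
        Sort-Object ProductName

    # The SWID tag data the second half of this post cares about
    Get-WmiObject -Namespace 'root\cimv2\sms' -Class SMS_SoftwareTag | Select-Object -First 5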

 

Lots of good stuff in there, huh? Incidentally, if you need a WMI class that tracks software installs to write PowerShell scripts against, SMS_InstalledSoftware is far superior to the Win32_Product class, because any query against Win32_Product will cause installed MSIs to be re-configured (KB974524). This is particularly troublesome if there is an SCCM Configuration Item that is repeatedly doing this (here).

There are some great reports that you get from SMS_InstalledSoftware:

  • Software 01A – Summary of Installed Software in a Specific Collection
  • Software 02D – Computers with a specific software installed
  • Software 02E – Installed software on a specific computer
  • Software 06B – Software by product name

All those reports give you a decent count of how many installs you have of a particular piece of software. That takes care of the first part of the request. How about the second?

 

What kind of installs of Adobe Acrobat XI do we have?

Between 2008 and 2010 Adobe started implementing the ISO/IEC 19770-2 SWID tag standard in their products for licensing purposes. Adobe has actually done a decent job of documenting their SWID tag implementation as well as providing information on how to decode the LeID. The SWID tag is an XML file that contains all the relevant licensing information for a particular endpoint, including such goodies as the license type, product edition (Standard, Pro, etc.) and the version. This information gets pulled out of the SWID tag and populates the SMS_SoftwareTag class on your clients.

 

That’s a pretty good start but if we create a custom report using the following SQL query we can get something that looks Manager Approved (TM)!

See the follow-up to this post: SCCM, Asset Intelligence and Adobe SWID Tags, Part II – Custom Fields

Until next time, stay frosty.

The HumbleLab: Storage Spaces with Tiers – Making Pigs Fly!

I have mixed feelings about homelabs. It seems ludicrous to me that in a field that changes as fast as IT, employers do not invest in training. You would think on-the-clock time dedicated to learning would be an investment that would pay itself back in spades. I also think there is something psychologically dangerous in working your 8-10 hour day and then going home and spending your evenings and weekends studying/playing in your homelab. Unplugging and leaving computers behind is pretty important; in fact I find the more I do IT the less interest I have in technology in general. Something, something, make an interest a career and then learn to hate it. Oh well.

That being said, IT is a fast changing field and if you are not keeping up one way or another, you are falling behind. A homelab is one way to do this, plus sometimes it is kind of nice to just do stuff without attending governance meetings or submitting to the tyranny of your organization’s change control board.

Being the cheapskate that I am, I didn't want to go out and spend thousands of my own dollars on hardware like all the cool cats in r/homelab, so I just grabbed some random crap lying around work, partly just to see how much use I could squeeze out of it.

Dell OptiPlex 990 (circa 2012)

  • Intel i7-2600, 3.4GHz 4 Cores, 8 Threads, 256KB L2, 8MB L3
  • 16GBs, Non-ECC, 1333MHz DDR3
  • Samsung SSD PM830, 128GBs SATA 3.0 Gb/s
  • Samsung SSD 840 EVO 250GBs SATA 6.0 Gb/s
  • Seagate Barracuda 1TB SATA 3.0 Gb/s

The OptiPlex shipped with just the 128GB SSD which only had enough storage capacity to host the smallest of Windows virtual machines so I scrounged up the two other disks from other desktops that were slated for recycling. I am particularly proud of the Seagate because if the datecode on the drive is to be believed it was originally manufactured sometime in late 2009.

A bit of a pig huh? Let’s see if we can make this little porker fly.

A picture of the inside of HumbleLab

Oh yeah… look at that quality hardware and cable management. Gonna be hosting prod workloads on this baby.

I started out with a pretty simple/lazy install of Windows Server 2012 R2 and the Hyper-V role. At this point in time I only had the original 128GB SSD that the operating system was installed on and the ancient Seagate being utilized for .VHD/.VHDX storage.

Performance was predictably abysmal, especially once I got a SQL VM setup and “running”:

IOmeter output

At this point, I added in the other 250GB SSD, destroyed the volume I was using for .VHD/.VHDX storage and recreated it using Storage Spaces. I don't have much to say about Storage Spaces here since I have such a simple/stupid setup. I just created a single Storage Pool using the 250GB SSD and the 1TB SATA drive. Obviously with only two disks I was limited to a Simple storage layout (no disk redundancy/YOLO mode). I did opt to create a larger 8GB Write Cache using PowerShell, but other than that I pretty much just clicked through the wizard in Server Manager:
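For reference, the PowerShell half of that looks roughly like this. A sketch only: the friendly names and tier sizes are placeholders, and the two-disk pool forces the Simple resiliency setting.

    # Grab the disks that aren't already in a pool (the 250GB SSD and the 1TB SATA drive)
    $disks     = Get-PhysicalDisk -CanPool $true
    # On a single host there's normally just the one Storage Spaces subsystem
    $subsystem = Get-StorageSubSystem | Select-Object -First 1

    # One pool across both disks
    New-StoragePool -FriendlyName 'LabPool' `
        -StorageSubSystemFriendlyName $subsystem.FriendlyName `
        -PhysicalDisks $disks

    # Define the SSD and HDD tiers
    # (If the SSD shows up as Unspecified, run Set-PhysicalDisk -MediaType SSD on it first)
    $ssdTier = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'SSDTier' -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'HDDTier' -MediaType HDD

    # Tiered virtual disk, Simple (no redundancy) layout, with the larger 8GB write cache
    New-VirtualDisk -StoragePoolFriendlyName 'LabPool' -FriendlyName 'VMStorage' `
        -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 900GB `
        -ResiliencySettingName Simple -WriteCacheSize 8GB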

 

Let’s see how we did:

IOMeter Results with Storage Tiers

A marked improvement! We more than tripled our IOPS from a snail-like 234 to a tortoise-like 820 and managed to reduce the response time from 14ms to 5ms. The latency reduction is probably the most important. We generally shoot for under 2ms for our production workloads but considering the hardware 5-6ms isn't bad at all.

 

What if I just run the .VHDX file directly on the shared 128GB SSD that the Hyper-V host is utilizing, without any Storage Tiers involved at all?

Hmm… not surprisingly the results are even better, but what was surprising is by how much. We are looking at sub-2ms latency and about four and a half times more IOPS than what my Storage Spaces Virtual Disk can deliver.

Of course benchmarks, especially quick and dirty ones like this, are very rarely the whole story and likely do not even come close to simulating your true workload but at least it gives us a basic picture of what my aging hardware can do: SATA = Glacial, Storage Tiers with SSD Caching=OK, SSD=Good. It also illustrates just how damn fast SSDs are. If you have a poorly performing application, moving it over to SSD storage is likely going to be the single easiest thing you can do to improve its performance. Sure, the existing bottleneck in the codebase or database design is still there, but does that matter anymore if everything is moving 4x faster? Like they say, Hardware is Cheap, Developers are Expensive.

I put this together prior to the general release of Server 2016, so it would be interesting to see if running this same setup on 2016's implementation of Storage Spaces with ReFS instead of NTFS would yield better results. It also would be interesting to refactor the SQL database and at the very least place the TempDB, system databases and log files directly onto the host's 128GB SSD. A project for another time I guess…

Until next time… may your pigs fly!

A flying pig powered by a rocket


FFFUUUU Internet Explorer… a rant about an outage

I am not normally a proponent of hating on Microsoft, mostly because I think much of the hate they get for design decisions is simply because people do not take the time to understand how Microsoft’s new widget of the month works and why it works that way. I also think it is largely pointless. All Hardware Sucks, All Software Sucks once you really start to dig around under the hood. That and Microsoft doesn’t really give a shit about what you want and why you want it. If you are an enterprise customer they have you by the balls and you and Microsoft both know it. You are just going to have to deal with tiles, the Windows Store and all the other consumer centric bullshit that is coming your way regardless of how “enterprise friendly” your sales rep says Microsoft is.

That being said, I cannot always take my own medicine of enlightened apathy and Stockholm Syndrome, and this is one of those times. We had a Windows Update get deployed this week that broke Internet Explorer 11 on about 60% – 75% of our fleet. Unfortunately we have a few line-of-business web applications that rely on it. You can imagine how that went.

Now there are a lot of reasons why this happened, but midway through my support call, where we were piecing together an uninstallation script to remove all the prerequisites of Internet Explorer 11, I had what I call a "boss epiphany". A "boss epiphany" is when you step out of your technical day-to-day and start asking bigger questions, and is so named because my boss has a habit of doing this. I generally find it kind of annoying in a good-natured way because I feel like there is a disregard for the technical complexities that I have to deal with in order to make things work, but I can't begrudge that he cuts to the heart of the matter. And six hours into our outage what was the epiphany? "Why is this so fucking hard? We are using Microsoft's main line-of-business browser (Internet Explorer) and their main line-of-business tool for managing workstations in an enterprise environment (SCCM)."

The answer is complicated from (my) technical perspective, but the "boss epiphany" is a really good point. This shit should be easy. It's not. Or I suck at it. Or maybe both. AND that brings me to my rant. Why in the name of Odin's beard is software deployment and management in Windows so stupid? All SCCM is doing is really just running an installer. For all its "Enterprisy-ness" it just runs whatever stupid installer you get from Adobe, Microsoft or Oracle. There's no standardization, no real packaging and no guarantee anything will actually be atomic. Even MSI installers can do insane things – like accept arguments in long form (TRANSFORMS=stupidapp.mst) but not short form (/t stupidapp.mst) or, my particular favorite, search for ProductKey registry keys to uninstall any older version of the application, and then try to uninstall it via the original .MSI. This fails horribly when that .MSI lives in a non-persistent client-side cache (C:\Windows\ccmcache). Linux was created by a bunch of dope-smoking neckbeards and European commies and they have had solid, standardized package management for like ten years. I remember taking a Debian Stable install up to Testing, then down-grading to Stable and then finally just upgrading the whole thing to Unstable. AND EVERYTHING WORKED (MOSTLY). Let's see you try that kind of kernel and user land gymnastics with Windows. Maybe I just have not spent enough time supporting Linux to hate it yet, but I cannot help but admire the beauty of apt-get update && apt-get upgrade when most of my software deployments mean gluing various .EXEs and registry keys together with batch files or PowerShell. It's 2016 and this is how we are managing software deployments? I feel like I'm taking crazy pills here.

 

Let's look at the IEAK as a specific example, since I suspect that's half the reason I got us into this mess. The quotes from this r/sccm thread are perfect here:

  • “IEAK can’t handle pre reqs cleanly. Also ‘installs’ IE11 and marks it as successful if it fails due to prereqs”
  • “Dittoing this. IEAK was a nightmare.”
  • “IEAK worked fine for us apart from one issue. When installing it would fail to get a return from the WMI installed check of KB2729094 quick enough so it assumed it wasn’t installed and would not complete the IE11 install.”
  • “It turns out that even though the IEAK gave me a setup file it was still reaching out to the Internet to download the main payload for IE”
  • “I will never use IEAK again for an IE11 deployment, mainly for the reason you stated but also the CEIP issue.”

And that's the supported, "Enterprise" deployment method. If you start digging around on the Internet, you see there are people out there deploying Internet Explorer 11 with Task Sequences, custom batch files, custom PowerShell scripts and the PowerShell Deployment Toolkit. Again, the technical part of me understands that Internet Explorer is a complicated piece of software and that there are reasons it is deployed this way, but ultimately if it is easier for me to deploy Firefox with SCCM than Internet Explorer… well, that just doesn't seem right now, does it?

Until next time… throw your computer away and go outside. Computers are dumb.