Tag Archives: endpoint management

Managing the Windows Time Service with SCCM’s Configuration Items

Keeping accurate and consistent time is important in our line of business. Event and forensic correlation, authentication protocols like Kerberos that rely on timestamps, and even the simple coordination of things like updates all require accurate timekeeping. Computers are unfortunately notoriously bad at keeping time, so we have protocols like NTP and time synchronization hierarchies to keep all the clocks ticking. In an Active Directory environment this is one of those things that (if set up correctly on the Domain Controller holding the PDC Emulator FSMO role) just kind of takes care of itself, but what if you have WORKGROUP machines? Well, you're in luck. You can use SCCM's Configuration Items to manage configuration settings on devices that are outside of your normal domain environment, in isolated networks and beyond the reach of tools like GPOs.

There are really two pieces to this. We need to ensure that the correct NTP servers are being used so all of our domain-joined and WORKGROUP machines get their time from the same source, and we need to ensure that our NTP client is actually running.

 

Setting the correct NTP Servers for the Windows Time Service

To get started, create a Configuration Item with a Registry Value based setting. The NtpServer value controls which NTP servers the Windows Time Service (W32Time) pulls from. We can manage it like so:

  • Setting Type: Registry Value
  • Data Type: String
  • Hive Name: HKEY_LOCAL_MACHINE
  • Key Name: SYSTEM\CurrentControlSet\Services\W32Time\Parameters
  • Value Name: NtpServer

The corresponding Compliance Rule is straightforward. We just want to ensure that the same time servers we are using in our domain environment are set here as well.

  • Rule type: Value
  • Setting must comply with the following value: yourtimeserver1,yourtimeserver2
  • Remediate non-compliant rules when supported: Yes
  • Report noncompliance if this setting is not found: Yes

  • Rule Type: Existential
  • Registry value must exist on client devices: Yes

 

The Setting should require the existence of the NtpServer value and set it as specified. If it is set to something else, the value will be remediated back to your desired value. You can learn more about setting the NtpServer registry value and controlling the polling interval at this Microsoft MSDN blog post.
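If you want to spot-check an endpoint by hand before or after the baseline runs, a quick PowerShell sketch like this does the trick (it just reads the registry value and asks the time service which peers it is actually using):

```powershell
# Read the configured NTP servers straight from the registry
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Parameters' -Name NtpServer

# Ask the Windows Time Service which peers it is currently synchronizing with
w32tm /query /peers
```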

 

Ensuring the Windows Time Service is Running

If the time service isn't running then you are not going to have accurate timekeeping! This is further complicated by the behavior of the Windows Time Service on WORKGROUP computers. The service stops almost immediately after system startup, even if its Startup Type is set to Automatic. Starting with Windows 7 and Server 2008 R2, W32Time is configured as a trigger-start service in order to reduce the number of services running, and the trigger (of course) that causes it to start automatically is whether or not the machine is domain-joined. On WORKGROUP machines the service is therefore left Stopped. Not very helpful in our scenario. Let's change that.

We can start by just performing a simple WQL query to see if the W32Time service is running:

  • Setting type: WQL Query
  • Data type: String
  • Namespace: root\cimv2
  • Class: Win32_Service
  • Property: Name
  • WQL query WHERE clause: Name like "%W32Time%" and State like "%Running%"
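Before wiring that into a compliance rule you can sanity-check the query on a test box with PowerShell (a quick sketch; it returns one Win32_Service instance when W32Time is running and nothing when it is stopped):

```powershell
# Same query the Configuration Item will run - one result means "running"
Get-CimInstance -Namespace root\cimv2 `
    -Query "SELECT Name FROM Win32_Service WHERE Name LIKE '%W32Time%' AND State LIKE '%Running%'"
```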

It’s a bit backward but if the query comes back with no results then the configuration state we are looking for does “not exist” and so we’ll mark it as non-compliant. It’s not intuitive but it works:

  • Rule Type: Existential
  • The setting must exist on client devices: Yes

 

This gives us the status of the Windows Time Service but we still need to remove the [DOMAIN JOINED] trigger so the service will actually start automatically. PowerShell to the rescue!

  • Setting Type: Script
  • Data Type: Integer
  • Script: PowerShell

Discovery Script
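A rough sketch of what the discovery side can look like (the specific non-zero return codes are arbitrary; all the Configuration Item cares about is whether the script returns 0):

```powershell
# Discovery: return 0 only when W32Time starts automatically, has no
# DOMAIN JOINED trigger and is currently running. The non-zero codes just
# flag which part of the configuration is off (values are arbitrary).
$service = Get-Service -Name W32Time -ErrorAction SilentlyContinue
if ($null -eq $service) { return 4 }

$startMode = (Get-CimInstance -ClassName Win32_Service -Filter "Name='W32Time'").StartMode
$triggers  = (& sc.exe qtriggerinfo w32time) | Out-String

if ($startMode -ne 'Auto')                { return 1 }
elseif ($triggers -match 'DOMAIN JOINED') { return 2 }
elseif ($service.Status -ne 'Running')    { return 3 }
else                                      { return 0 }
```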

Remediation Script
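And a matching sketch for remediation (sc.exe is used because service triggers aren't exposed through the standard service cmdlets):

```powershell
# Remediation: make W32Time start automatically, strip all of its start
# triggers (including DOMAIN JOINED) and start it right away.
Set-Service -Name W32Time -StartupType Automatic

# 'sc triggerinfo <service> delete' removes every configured trigger
& sc.exe triggerinfo w32time delete | Out-Null

Start-Service -Name W32Time
```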

 

  • Value returned by the specified script: Equals 0
  • Run the specified remediation script when this setting is noncompliant: Yes

The Discovery script will return various non-compliant values depending on the configuration state of the endpoint. Any non-zero value then causes the Remediation script to run, which sets the service's Startup Type to Automatic, removes the [DOMAIN JOINED] trigger and starts the service.

I hope this post helps you manage your time configuration on all those weird one-off WORKGROUP machines that we all seem to have floating around out there.

Until next time, stay frosty.

SCCM, Asset Intelligence and Adobe SWID Tags, Part II – Custom Fields

There’s some crucial information in Adobe’s SWID tags that does not get automatically collected in SCCM’s Asset Intelligence process via the SMS_SoftwareTag class.

This information is contained in the optional fields of the SWID tag and will allow you to “decode” the LeID field and determine important information for your Enterprise Term Licensing Agreement (ETLA).

 

We're talking about things like whether the product is licensed via a volume licensing agreement or a retail license, its activation status, and the particular version of the product (Standard vs. Pro), but as previously mentioned this information is unfortunately not pulled in by the SMS_SoftwareTag class inventory process.

I came across this blog post by Sherry Kissinger and largely cribbed this idea from her. We can use/abuse Configuration Items to run a script that parses the SWID tag files and inserts the data into a WMI class, which then gets collected via Hardware Inventory, and from there we can run reports off of it. Sherry's post provided the .MOF file and a VB script, but I only really speak PowerShell so I rewrote her script; a trimmed-down sketch of the approach is below.
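This is only a sketch of the idea, not Sherry's exact class layout - the property names, tag locations and XML element names are assumptions you will want to adjust to match the .MOF you actually import:

```powershell
# Sketch: parse Adobe SWID tag files and stuff the results into a custom
# WMI class (cm_AdobeInfo) so Hardware Inventory can pick them up.
# Property names, tag paths and XML element names are illustrative only.
$namespace = 'root\cimv2'
$className = 'cm_AdobeInfo'

# Recreate the class on every run so stale instances don't linger
Remove-WmiObject -Namespace $namespace -Class $className -ErrorAction SilentlyContinue
$class = New-Object System.Management.ManagementClass($namespace, [string]::Empty, $null)
$class['__CLASS'] = $className
$class.Qualifiers.Add('Static', $true)
$class.Properties.Add('TagFileName', [System.Management.CimType]::String, $false)
$class.Properties['TagFileName'].Qualifiers.Add('Key', $true)
$class.Properties.Add('ProductTitle', [System.Management.CimType]::String, $false)
$class.Properties.Add('ChannelType', [System.Management.CimType]::String, $false)
$class.Properties.Add('ActivationStatus', [System.Management.CimType]::String, $false)
$null = $class.Put()

# Adobe drops its tags under ProgramData (and sometimes Common Files)
$tagPaths = "$env:ProgramData\regid.1986-12.com.adobe", "$env:CommonProgramFiles\Adobe"

Get-ChildItem -Path $tagPaths -Filter '*.swidtag' -Recurse -ErrorAction SilentlyContinue |
    ForEach-Object {
        [xml]$tag = Get-Content -Path $_.FullName
        $swid = $tag.software_identification_tag
        Set-WmiInstance -Namespace $namespace -Class $className -Arguments @{
            TagFileName      = $_.Name
            ProductTitle     = [string]$swid.product_title
            ChannelType      = [string]$swid.license_linkage.channel_type
            ActivationStatus = [string]$swid.license_linkage.activation_status
        } | Out-Null
    }

# Give the Configuration Item something to evaluate for compliance
Write-Output 'Compliant'
```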

 

 

Set this script up in a Configuration Item, add it to a Baseline and then deploy it. When the Configuration Item runs, it should find any Adobe SWID tags, parse them and create instances of a custom WMI class (cm_AdobeInfo) containing all of the goodies from the SWID tags.

 

By adding Sherry’s custom .MOF file to your Hardware Inventory class settings you can have the SCCM agent pull this class up as part of the Hardware Inventory Cycle.

 

With the following bit of SQL you can build another nice Manager Approved (TM) report:

 

Until next time, stay frosty!

SCCM, Asset Intelligence and Adobe SWID Tags

Licensing. It is confusing, constantly changing and expensive. It is that last part that our managers really care about come true-up time, and so a request in the format of "Can you give me a report of all the installs of X and how many licenses of A and B we are using?" comes across your desk. Like many of the requests that come across your desk as a System Administrator, these can be deceptively tricky. This post will focus on Adobe's products.

How many installs of Adobe Acrobat XI do we have?

There are a bunch of canned reports that help you right off the bat under Monitoring – Reporting – Reports – Software – Companies and Products. If you don’t have a Reporting Services Point installed yet then get on it! The following reports are a decent start:

  • Count all inventoried products and versions
  • Count inventoried products and versions for a specific product
  • Count of instances of specific software registered with Add or Remove Programs

You may find that these reports are less accurate than you'd hope. I think of them as the "raw" data, and while they are useful they don't gracefully handle things like "Adobe Systems" versus "Adobe Systems Inc.", detecting those as two separate publishers. Asset Intelligence adds a bit of, well, intelligence and allows you to get reports that are more reflective of the real-world state of your endpoints.

Once you get your Asset Intelligence Synchronization Point installed (if you don't have one already) you need to enable some Hardware Inventory Classes. Each of these incurs a minor performance penalty during the client's Hardware Inventory task, so you probably only want to enable the classes you think you will need. I find the SMS_InstalledSoftware and SMS_SoftwareTag classes to be the most useful by far, so maybe start there.

You can populate these WMI classes by running the Machine Policy Retrieval & Evaluation Cycle client task followed by the Hardware Inventory Cycle. You should now be able to get some juicy info:
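If you want to poke at the data directly, the client-side classes live in the root\cimv2\sms namespace and are easy to query locally (a quick sketch):

```powershell
# What the SCCM client has collected about installed software on this machine
Get-CimInstance -Namespace root\cimv2\sms -ClassName SMS_InstalledSoftware |
    Select-Object ProductName, ProductVersion, Publisher, InstallDate |
    Sort-Object ProductName
```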

 

Lots of good stuff in there, huh? Incidentally, if you need a WMI class that tracks software installs to write PowerShell scripts against, SMS_InstalledSoftware is far superior to the Win32_Product class because any query to Win32_Product will cause installed MSIs to be re-configured (KB974524). This is particularly troublesome if there is an SCCM Configuration Item that is repeatedly doing this (here).

There are some great reports that you get from SMS_InstalledSoftware:

  • Software 01A – Summary of Installed Software in a Specific Collection
  • Software 02D – Computers with a specific software installed
  • Software 02E – Installed software on a specific computer
  • Software 06B – Software by product name

All those reports give you a decent count of how many installs you have of a particular piece of software. That takes care of the first part of the request. How about the second?

 

What kind of installs of Adobe Acrobat XI do we have?

Between 2008 and 2010 Adobe started implementing the ISO/IEC 19770-2 SWID tag standard in their products for licensing purposes. Adobe has actually done a decent job of documenting their SWID tag implementation as well as providing information on how to decode the LeID. The SWID tag is an XML file that contains all the relevant licensing information for a particular endpoint, including such goodies as the license type, product type (Standard, Pro, etc.) and the version. This information gets pulled out of the SWID tag and populates the SMS_SoftwareTag class on your clients.
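You can see the tag data the client has inventoried with a one-liner like this (run locally on an endpoint; piping to Format-List shows every property rather than guessing at names):

```powershell
# Dump the inventoried SWID tag data on this client
Get-CimInstance -Namespace root\cimv2\sms -ClassName SMS_SoftwareTag | Format-List *
```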

 

That’s a pretty good start but if we create a custom report using the following SQL query we can get something that looks Manager Approved (TM)!

See the follow-up to this post: SCCM, Asset Intelligence and Adobe SWID Tags, Part II – Custom Fields

Until next time, stay frosty.

Five things to not screw up with SCCM

With great power comes great responsibility

Uncle Ben seemed like a pretty wise dude when he dropped this particular knowledge bomb on Peter Parker. As sysadmins we should already be aware of the tremendous amount of power that has been placed in our hands. Tools like SCCM further serve to underline this point, and while I think SCCM is an amazing product with the ability to be a fantastic force multiplier, you can also reduce your business's infrastructure to ashes within hours if you use it wrong. I can think of two such events where an SCCM Administrator mistakenly did tremendous damage: in 2014 a Windows 7 deployment re-imaged most of the computers at Emory University, including their servers, and in another unfortunate event a contractor managed to accomplish the same thing at the Commonwealth Bank of Australia back in the early 2000s.

There are a few things you can do to enjoy the incredible automation, configuration and standardization benefits of SCCM while reducing your likelihood of an R.G.E.

Dynamic Collection Queries

SCCM is all about performing an action on large groups of computers. Therefore it is absolutely imperative that your Collections ACTUALLY CONTAIN THE THINGS YOU THINK THEY DO. Your Collections need to start large and gradually get smaller using a sort of matryoshka-doll scheme based on dynamic queries and limiting Collections. You should double/triple/quadruple check your dynamic queries to make sure they are doing what you think they are doing when you create them. It is wise to review these queries on a regular basis to make sure an underlying change in something like Active Directory OU structure or naming convention hasn't caused your query to match 2000 objects instead of your intended 200. Finally, I highly recommend spot-checking the members of your targeted Collection before deploying anything particularly hairy and/or when deploying to a large Collection, because no matter how diligent we are, we all make mistakes.

Maintenance Windows

"The bond traders are down! The bond traders are down! Hue and cry! Panic! The CIO is on his way to your boss's office!" Not what you want to hear at 7:00 AM as you are just starting on your first cup of coffee, huh? You can prevent this by making sure your Maintenance Windows are set up correctly. SCCM will do what you tell it to do, and if you tell it to allow the agent to reboot at 11:00 AM instead of 11:00 PM, that's what's going to happen.

I like setting up an entirely separate Collection hierarchy that is used solely for setting Maintenance Windows and including my other Collections as members. This prevents issues where the same Collection is used for both targeting and scheduling. It also reduces Maintenance Window sprawl, where machines are members of multiple Collections all with different Maintenance Windows. It's important to consider that Maintenance Windows are "union-ed", so if you have a client in Collection A with a Maintenance Window of 20:00 – 22:00 and in Collection B with a Maintenance Window of 12:00 – 21:00, that client can reboot anywhere between 12:00 and 22:00. There's nothing more annoying than a workstation that was left in a forgotten testing Collection with a Maintenance Window spanning the whole business day – especially after the technician was done testing and that workstation was delivered to some Department Director.

I am also a huge fan of the idea of a "Default Maintenance Window", where you have a Maintenance Window that is in the past and non-recurring that all SCCM clients are a member of. This means that no matter what happens with a computer's Collection membership, it isn't just going to randomly reboot if it has updates queued up and its current Maintenance Window policy is inadvertently removed.

Last but not least, and this goes for really anything that is scheduled in SCCM: pay attention to date and time. Watch for AM versus PM, 24-hour time versus 12-hour time, new-day rollover (i.e., 08/20 11:59 PM to 08/21 12:00 AM) and UTC versus local time.

Required Task Sequences

Of all the things in SCCM this is probably one of the most dangerous. Task Sequences generally involve re-partitioning, re-formatting and re-imaging a computer, which has the nice little side effect of removing everything previously on it. You'll notice that both of the incidents I mentioned at the start of this post were caused by Task Sequences that inadvertently ran on a much larger group of computers than was intended. As a general guideline, I counsel staff to avoid deploying Task Sequences as Required outside of the Unknown Computers Collection. The potential to nuke your line-of-business application servers and replace them with Windows 10 is reduced if you have done your fundamentals right in setting up your Collections, but I still recommend deploying to small Collections, making your Deployment Available instead of Required (especially if you are testing), restricting who can deploy Task Sequences and password-protecting the Task Sequence. I would much rather reboot servers to clear the WinPE environment than recover them from backups.

Automatic Deployment Rules

Anything in SCCM that does stuff automatically deserves some scrutiny. Automatic Deployment Rules are another version of Dynamic Collection Queries: you want to use them and they make your life easier, but you need to be sure that they do what you think they do, especially before they blast out this month's patches to the All Clients collection instead of the Patch Testing collection. Deployment templates can make it harder to screw up your SUP deployments, and once again pay attention to the advertisement and deadline times, watching for mistakes with UTC versus local time or +1 day rollover, the Maintenance Window behavior and which collection you are deploying to. And please, please, please test your SUP groups first before deploying them widely. You too can learn from our mistakes.

Source Files Management and Organization

A messy boat is a dangerous boat. There is a tendency for the source files directory that you are using to store all your installers for Application and Package builds to just descend into chaos over time. This makes it increasingly difficult to figure out which installers are still being used and what stuff was part of some long-forgotten test. What's important here is that you have a standard for file organization and you enforce it with an iron fist.

I like to break things out like this:

A picture depicting the Source Files folder structure

Organizing your source files… It’s a Good Thing.

It's a pretty straightforward scheme but you get the idea: Applications – Vendor – Software Title – Version and Bitness – Installer. You may need to add more granularity to your Software Updates Deployment Package folders depending on your available bandwidth and how many updates you are deploying in a single SUP group. We have had good results with grouping them by year, but then again we are not an agency with offices all over rural Alaska.

 

Mitigation Techniques

There are a few techniques you can use to prevent yourself from doing something terrible.

Role-Based Access Control

You can think of Security Scopes as the largest possible number of clients a single admin can break. If you have a big enough team, clever use of RBAC will allow you to limit how much damage individual team members can do. For example: you could divide your 12-person SCCM team into three sub-teams and use RBAC to limit each sub-team to only being able to manage 1/3 of your clients. You could take this idea a step further and give your tier-1 help desk the ability to do basic "non-dangerous" actions while still allowing them to use SCCM to perform their job. This is pretty context-specific, but there is a lot you can do with RBAC to limit the potential scope of an Administrator's actions.

Application Requirements (Global Conditions)

You can use Application Requirements as a basic mechanism to prevent bad things from happening if they are deployed to the wrong Collection inadvertently.

Look at all these nice, clean servers… it would be a shame if someone accidentally deployed the Java JRE to all of them, wouldn't it? Well, if you put in a Requirement that checks the value of ProductType in the Win32_OperatingSystem WMI class to ensure the client has a workstation operating system, then the Application will fail its Requirements check and won't be installed on those servers.
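For reference, ProductType is a simple integer (1 = workstation, 2 = domain controller, 3 = server), so it is easy to check by hand and just as easy to base a Global Condition on - a quick sketch:

```powershell
# 1 = workstation, 2 = domain controller, 3 = server
(Get-CimInstance -ClassName Win32_OperatingSystem).ProductType

# The equivalent WQL you could build a Global Condition around:
# SELECT * FROM Win32_OperatingSystem WHERE ProductType = 1
```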

 

There's so much in WMI that you could build WQL queries that prevent "dangerous" applications from meeting their Requirements on clients outside the intended deployment.

 

PowerShell Killswitch

SCCM is a pull-based architecture. An implication of this is that once clients have a bad policy, they are going to act on it. The first thing you should do if you discover a policy stomping on your clients is to try and limit the damage by preventing unaffected clients from pulling it. A simple PowerShell script that stops the IIS App Pools backing your Management Points and Distribution Points will act as a crude but effective kill switch. By having this script prepped and ready to go you can immediately stop the spread of something bad and then focus your efforts on correcting the mistake.
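A minimal sketch of such a kill switch - the server names are placeholders and it assumes the WebAdministration module is available on your site systems:

```powershell
# Stop every IIS app pool on the MPs/DPs so clients can no longer pull policy.
# Server names are placeholders; run from an admin workstation with rights
# on the site systems.
$siteSystems = 'MP01.contoso.local', 'DP01.contoso.local'

Invoke-Command -ComputerName $siteSystems -ScriptBlock {
    Import-Module WebAdministration
    Get-ChildItem IIS:\AppPools |
        Where-Object { $_.State -eq 'Started' } |
        ForEach-Object { Stop-WebAppPool -Name $_.Name }
}
```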

Sane Client Settings

There is a tendency to crank up some of the client-side polling frequencies in smaller SCCM implementations in order to make things "go faster". Another way to look at the polling interval, however, is that it is the period of time it takes for all of your clients to receive a bad policy and possibly act on it. If your client policy polling interval is 15 minutes, that means in 15 minutes you will have re-imaged all your clients if you really screwed up and deployed a Required Task Sequence to All Systems. The longer the polling interval, the more time you have to identify a bad policy, stop it and begin rebuilding before it has nuked your whole fleet.

Team Processes

A few simple soft processes can go a long way. If you are deploying out an Application or Updates to your whole fleet, send out a notification to your business leaders. People are generally more forgiving of mistakes when they are notified of significant changes first. Perform a gradual roll-out over a week or two instead of blasting out your Office 365 installation application to all 500 workstations at once. Setting sane scheduling and installation deadlines in your Deployments helps here too.

If you are doing something that could be potentially dangerous, grab a coworker and do pilot/co-pilot for the deployment. You (the pilot) perform the work, but you walk your coworker (the co-pilot) through each step and have them verify it. Putting a second pair of eyes on a deployment avoids things like inadvertently clicking the "Allow clients to restart outside of Maintenance Windows" checkbox. Next time you need to do this deployment, switch roles – Bam! Instant cross-training!

Don't be in a hurry. Nine times out of ten the dangerous thing is simple to deploy, but those simple settings are exactly the ones you can't afford to get wrong. Take your time to do things right and push back when you are given unrealistic schedules or asked to deploy things outside of your roll-out process. In the mountains we like to say, slow is fast and fast is dead. In SCCM I like to say, slow is fast, and fast is fired.

Read-Only Friday is the holiest of days on the Sysadmin calendar. Keep it with reverence and respect.

Consider enabling the High Risk Deployment setting. If you do this, make sure you tune the thresholds so your admins don't get alert fatigue and just learn to click next, next, finish; otherwise they will eventually click next, next, finish and go "oops".

 

I hope this is helpful. If you have other ideas on how not to blow up everything with SCCM, feel free to comment. I'm always up for learning something new!

Until next time, stay frosty.


Quick and dirty PowerShell snippet to get Dell Service Tag

The new fiscal year is right around the corner for us. This time of year brings all kinds of fun for us Alaskans: spring king salmon runs, our yearly dose of three days' worth of good weather, and licensing true-up and hardware purchases. Now there are about a million different ways to skin this particular cat, but here's a quick and dirty method with PowerShell.
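Something along these lines does the trick - a sketch only, the computer names are placeholders, and on Dell hardware the Service Tag is simply the BIOS serial number:

```powershell
# Pull the Dell Service Tag (BIOS serial number) from a list of machines
$computers = 'PC-001', 'PC-002'

foreach ($computer in $computers) {
    if (Test-NetConnection -ComputerName $computer -Port 135 -InformationLevel Quiet) {
        $bios = Get-CimInstance -ClassName Win32_BIOS -ComputerName $computer
        [pscustomobject]@{
            ComputerName = $computer
            ServiceTag   = $bios.SerialNumber
        }
    }
    else {
        Write-Warning "$computer is unreachable"
    }
}
```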

 

If you don't have access to the ridiculously useful Test-NetConnection cmdlet then you should probably upgrade to PowerShell v5, since unlike most Microsoft products PowerShell seems to actually improve with each version, but barring that you can just open a TCP socket by instantiating the appropriate .NET object.
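A rough sketch of that fallback (the hostname and port are placeholders):

```powershell
# Poor man's port test for older PowerShell - open a TCP socket directly
$socket = New-Object System.Net.Sockets.TcpClient
try {
    $socket.Connect('PC-001', 135)   # hostname and port are placeholders
    $reachable = $socket.Connected
}
catch {
    $reachable = $false
}
finally {
    $socket.Close()
}
$reachable
```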

 

The slickest way I have ever seen this done, though, was with SCCM and the Dell Command Integration Suite for System Center, which can generate a warranty status report for everything in your SCCM Site by connecting to the database, grabbing all the service tags and then sending them up to Dell's warranty status API to get all kinds of juicy information like model, service level, ship date and warranty status. Unfortunately, since this was tremendously useful, the team overseeing the warranty status web service decommissioned it abruptly back in 2016. Thanks for nothing, ya jerks!