Scheduling Backups with Veeam Free and PowerShell

Veeam Free Edition is an amazing product. For the low price of absolutely zero you get a whole laundry list of enterprise-grade features: VeeamZip (full backups), granular and application-aware item restores, native tape library support and direct access to NFS-based VM storage using Veeam’s NFS client. One thing Veeam Free doesn’t include, however, is a scheduling mechanism. We can fix that with a little bit of PowerShell run as a scheduled task.

I have two scripts. The first one loads the Veeam PowerShell Snap-In, connects to the Veeam server, gets a list of virtual machines and then backs them up to a specified destination.
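The original script is long gone from this archive, but a minimal sketch of what it does might look like the following. The server name, destination folder, and retention setting are placeholders, and `Start-VBRZip`'s parameters can vary a little between Veeam versions:

```powershell
# Sketch of a scheduled VeeamZip backup script (names and paths are
# placeholders). Assumes the Veeam B&R console is installed locally.
Add-PSSnapin VeeamPSSnapin -ErrorAction Stop

# Connect to the local Veeam server
Connect-VBRServer -Server "localhost"

$Destination = "D:\Backups"   # where the .vbk files land
$Retention   = "In1Week"      # auto-delete old VeeamZip backups

# Grab every VM on the Hyper-V host and VeeamZip it one at a time
# (a vSphere environment would use Find-VBRViEntity instead)
$VMs = Find-VBRHvEntity | Where-Object { $_.Type -eq "Vm" }
foreach ($VM in $VMs) {
    Start-VBRZip -Entity $VM -Folder $Destination `
        -Compression 5 -AutoDelete $Retention
}

Disconnect-VBRServer
```

Registered as a scheduled task running under an account with rights to the Veeam server, this gets you the nightly job scheduling that the paid editions have built in.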

 

I had Veeam set up on a virtual machine running on the now-defunct HumbleLab. One of the disadvantages of this configuration is that I don’t have separate storage to move the resulting backup files onto. You could solve this by simply using an external hard drive, but I wanted something a little more… cloud-y. I set up Azure Files so I could connect to cheap, redundant and, most importantly, off-site storage via SMB3 to store a copy of my lab backups.

The biggest downside to this is security. Azure Files is really not designed to be a full-featured replacement for a traditional Windows file server. It’s really more of an SMB-as-a-Service offering designed to be programmatically accessed by Azure VMs. SMB3 provides transit encryption, but you would still probably be better off using a site-to-site VPN between your on-prem Veeam server and a Windows file server running as a VM in Azure, or using Veeam’s Cloud Connect functionality. There’s also no functionality replacing or replicating NTFS permissions: the entire “security” of your Azure Files SMB share rests in the storage key. This is OK for a lab but probably not OK for production.

Here’s the script that fires off once a week and copies the backups out to Azure Files. For something like my lab it’s a perfect solution.
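With the original embed missing, here is a rough sketch of the idea: map the Azure Files share over SMB3 using the storage key as the credential, copy the backup files out, and unmap. The storage account, share name, and paths are all placeholders:

```powershell
# Sketch: copy local Veeam backups out to an Azure Files share over SMB3.
# Storage account name, key, share, and paths are placeholders.
$Account = "mystorageacct"
$Key     = "<storage-account-key>"
$Share   = "\\$Account.file.core.windows.net\veeambackups"

# The storage key IS the credential; the username is AZURE\<account name>
$Secure = ConvertTo-SecureString $Key -AsPlainText -Force
$Cred   = New-Object System.Management.Automation.PSCredential ("AZURE\$Account", $Secure)

# Map the share inside this PowerShell session
New-PSDrive -Name Z -PSProvider FileSystem -Root $Share -Credential $Cred

# Copy the weekly backup files out to off-site storage
Copy-Item -Path "D:\Backups\*.vbk" -Destination "Z:\" -Force

Remove-PSDrive -Name Z
```

Note that a `New-PSDrive` mapping without `-Persist` is only visible to PowerShell itself, which is why this uses `Copy-Item` rather than an external tool like robocopy.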

 

Until next time, stay frosty!

Quick and dirty PowerShell snippet to get Dell Service Tag

The new fiscal year is right around the corner for us. This time of year brings all kinds of fun for us Alaskans: spring king salmon runs, our yearly dose of three days’ worth of good weather, and licensing true-ups and hardware purchases. Now, there’s about a million different ways to skin this particular cat, but here’s a quick and dirty method with PowerShell.
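The original snippet is missing from this archive, but the trick is simply that Dell exposes the service tag as the BIOS serial number in WMI. A sketch along those lines, with placeholder computer names:

```powershell
# Quick and dirty: a Dell service tag is just the BIOS serial number in WMI.
# Locally:
Get-WmiObject -Class Win32_BIOS | Select-Object SerialNumber

# Or against a list of remote machines (names are placeholders),
# checking reachability first so dead hosts don't hang the WMI query:
$Computers = "PC01", "PC02", "PC03"
foreach ($C in $Computers) {
    if (Test-NetConnection -ComputerName $C -InformationLevel Quiet) {
        Get-WmiObject -Class Win32_BIOS -ComputerName $C |
            Select-Object @{n = "Computer"; e = { $C }}, SerialNumber
    }
}
```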

 

If you don’t have access to the ridiculously useful Test-NetConnection cmdlet, then you should probably upgrade to PowerShell v5, since, unlike most Microsoft products, PowerShell seems to actually improve with each version. Barring that, you can just open a TCP socket by instantiating the appropriate .NET object.
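A sketch of that fallback, using `System.Net.Sockets.TcpClient` directly; the host name and port are placeholders (port 135 is a reasonable pre-WMI check), and `BeginConnect` is used so a dead host times out instead of hanging:

```powershell
# Fallback for older PowerShell: test TCP reachability with a raw socket.
$Computer = "PC01"
$Port     = 135   # RPC endpoint mapper; a sensible check before WMI

$Socket = New-Object System.Net.Sockets.TcpClient
try {
    # BeginConnect + WaitOne gives us a 2-second timeout
    $Async = $Socket.BeginConnect($Computer, $Port, $null, $null)
    if ($Async.AsyncWaitHandle.WaitOne(2000) -and $Socket.Connected) {
        "Port $Port on $Computer is open"
    } else {
        "Port $Port on $Computer is closed or timed out"
    }
} finally {
    $Socket.Close()
}
```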

 

The slickest way I have ever seen this done, though, was with SCCM and the Dell Command Integration Suite for System Center, which could generate a warranty status report for everything in your SCCM site by connecting to the site database, grabbing all the service tags and then sending them up to Dell’s warranty status API to get all kinds of juicy information like model, service level, ship date and warranty status. Unfortunately, since this was tremendously useful, the team overseeing the warranty status web service decommissioned it abruptly back in 2016. Thanks for nothing, ya jerks!

 

The HumbleLab: Windows Server 2016, ReFS and “no sufficient eligible resources” Storage Tier Errors

Well, that didn’t last too long, did it? Three months after getting my Windows Server 2012 R2-based HumbleLab set up, I tore it down to start fresh.

As a refresher, The HumbleLab lives on some pretty humble hardware:

Dell OptiPlex 990 (circa 2012)

  • Intel i7-2600, 3.4GHz 4 Cores, 8 Threads, 256KB L2, 8MB L3
  • 16GBs, Non-ECC, 1333MHz DDR3
  • Samsung SSD PM830, 128GBs SATA 3.0 Gb/s
  • Samsung SSD 840 EVO 250GBs SATA 6.0 Gb/s
  • Seagate Barracuda 1TB SATA 3.0 Gb/s

However, I did manage to scrounge up a Hitachi/HGST Ultrastar 7K3000 3TB SATA drive in our parts bin, manufactured in April 2011, to swap places with the eight-year-old Seagate drive. Not only is the Hitachi drive three years newer, but it also has three times as much capacity, bringing a whopping 3TBs of raw storage to the little HumbleLab! Double win!

My OptiPlex lacks any kind of real storage management, and my Storage Pool was configured with the Simple storage layout, which just stripes the data across all the drives in the pool. It should also go without saying that I am not using any of Storage Spaces’ Failover Clustering or Scale-Out functionality. I couldn’t think of a simpler way to swap my SATA drives than to export my virtual machines, destroy the Storage Pool, swap the drives and recreate it. The only problem is I didn’t really have any readily available temporary storage to dump my VMs on, and my lab was kind of broken anyway, so I just nuked everything and started over with a fresh install of Server 2016, which I wanted to upgrade to anyway. Oh well, sometimes the smartest way forward is kind of stupid.

Not much to say about the install process, but I did run across the same “storage pool does not have sufficient eligible resources” issue when creating my Storage Pool.

Neat! There’s still a rounding error in the GUI. Never change, Microsoft. Never change.

According to the Internet’s most accurate source of technical information, Microsoft’s TechNet Forums, there is a rounding error in how disk sizes are presented in the wizard. What happens, I guess, is that when you want to use all 2.8TBs of your disk, the numbers don’t match up exactly with the actual capacity, and the wizard fails as it tries to create a Storage Tier bigger than the underlying disk. I mean, it seems plausible at least. Supposedly, specifying the size in GBs or even MBs works around it, but naturally that didn’t work for me, and I ended up creating my new Virtual Disk using PowerShell. I slowly backed the size of my Storage Tiers off from the total capacity of the underlying disks until it worked, with 3GBs worth of slack space. It’s a little disappointing that the wizard doesn’t automagically do this for you, and doubly disappointing that this issue is still present in Server 2016.

Here’s my PowerShell snippet:
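The original snippet is missing from this archive, but the approach was roughly the following: create SSD and HDD tiers in the pool, then build the virtual disk with tier sizes backed off a little from raw capacity. The pool name, tier names, and sizes below are placeholders for my lab’s layout:

```powershell
# Sketch: tiered, Simple (striped) virtual disk sized just under raw
# capacity to dodge the wizard's rounding error. Names/sizes are
# placeholders for the HumbleLab's pool.
$Pool = "StoragePool"

# Define the SSD and HDD tiers within the existing pool
$SSD = New-StorageTier -StoragePoolFriendlyName $Pool -FriendlyName "SSDTier" -MediaType SSD
$HDD = New-StorageTier -StoragePoolFriendlyName $Pool -FriendlyName "HDDTier" -MediaType HDD

# Back the tier sizes off from the raw capacity until creation succeeds;
# roughly 3GBs of slack did the trick for me
New-VirtualDisk -StoragePoolFriendlyName $Pool -FriendlyName "TieredDisk" `
    -ResiliencySettingName Simple `
    -StorageTiers $SSD, $HDD `
    -StorageTierSizes 229GB, 2791GB

# Initialize, partition, and format the new disk as ReFS
Get-VirtualDisk -FriendlyName "TieredDisk" | Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS
```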

 

Now for the big reveal: how’d we do?

Not bad at all for running on junk! We were able to squeeze a bit more go-juice out of the HumbleLab with Server 2016 and ReFS: IOPS jumped from 880 to 2,240, and latency dropped from 4ms to sub-2ms, which is amazing considering what we are running this on.

I think this performance increase is largely due to how Storage Tiers and ReFS are implemented in Server 2016, and not due to ReFS’s block-cloning technology, which is focused on optimizing certain storage operations associated with virtualization workloads. As I understand it, Storage Tiers were previously “passive,” in the sense that a scheduled task would move hot data onto SSD tiers and cooling/cold data back onto HDD tiers, whereas in Server 2016 Storage Tiers and ReFS can do real-time storage optimization. Holy shmow! Windows Server is starting to look like a real operating system these days! There are plenty of gotchas, of course, and it’s not really clear to me whether the documentation is talking about Storage Spaces / Storage Tiers or Storage Spaces Direct, but either way I am happy with the performance increase!

Until next time!