The HumbleLab: Windows Server 2016, ReFS and “no sufficient eligible resources” Storage Tier Errors

Well, that didn’t last too long, did it? Three months after getting my Windows Server 2012 R2 based HumbleLab set up, I tore it down to start fresh.

As a refresher, The HumbleLab lives on some pretty humble hardware:

Dell OptiPlex 990 (circa 2012)

  • Intel i7-2600, 3.4GHz 4 Cores, 8 Threads, 256KB L2, 8MB L3
  • 16GBs, Non-ECC, 1333MHz DDR3
  • Samsung SSD PM830, 128GBs SATA 3.0 Gb/s
  • Samsung SSD 840 EVO 250GBs SATA 6.0 Gb/s
  • Seagate Barracuda 1TB SATA 3.0 Gb/s

However, I did manage to scrounge up a Hitachi/HGST Ultrastar 7K3000 3TB SATA drive in our parts bin, manufactured in April 2011, to swap places with the eight-year-old Seagate drive. Not only is the Hitachi drive three years newer, but it also has three times as much capacity, bringing a whopping 3TBs of raw storage to the little HumbleLab! Double win!

My OptiPlex lacks any kind of real storage management, and my Storage Pool was configured with the Simple storage layout, which just stripes data across all the drives in the pool. It should also go without saying that I am not using any of Storage Spaces’ Failover Clustering or Scale-Out functionality. I couldn’t think of a simpler way to swap my SATA drives than to export my Virtual Machines, destroy the Storage Pool, swap the drives, and recreate it. The only problem was I didn’t really have any temporary storage readily available to dump my VMs on, and my lab was kind of broken anyway, so I just nuked everything and started over with a fresh install of Server 2016, which I had wanted to upgrade to anyway. Oh well, sometimes the smartest way forward is kind of stupid.

Not much to say about the install process, but I did run into the same “storage pool does not have sufficient eligible resources” issue when creating my Storage Pool.

Neat! There’s still a rounding error in the GUI. Never change, Microsoft. Never change.

According to the Internet’s most accurate source of technical information, Microsoft’s TechNet Forums, there is a rounding error in how disk sizes are presented in the wizard. What seems to happen is that when you try to use all 2.8TBs of your disk, the numbers don’t match up exactly with the actual capacity, and the wizard fails because it tries to create a Storage Tier bigger than the underlying disk. I guess. I mean, it seems plausible at least. Supposedly specifying the size in GBs or even MBs will work around it, but naturally that didn’t work for me, so I ended up creating my new Virtual Disk with PowerShell. I slowly backed the size of my Storage Tiers off from the total capacity of the underlying disks until it worked, landing at about 3GBs worth of slack space. It’s a little disappointing that the wizard doesn’t automagically do this for you, and doubly disappointing that this issue is still present in Server 2016.

Here’s my PowerShell snippet:

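(The pool, tier, and disk names below are placeholders, and the exact sizes are illustrative; the point is to size the tiers a few GBs under the raw capacity instead of chasing every last byte.)

```powershell
# Assumes an existing Storage Pool; names and sizes here are placeholders.
$pool = "HumblePool"

# Define one tier per media type in the pool.
$ssdTier = New-StorageTier -StoragePoolFriendlyName $pool -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName $pool -FriendlyName "HDDTier" -MediaType HDD

# Create the tiered Virtual Disk, backing the tier sizes off from the
# raw capacity to leave some slack so creation doesn't fail.
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "VMStore" `
    -StorageTiers @($ssdTier, $hddTier) `
    -StorageTierSizes @(220GB, 2790GB) `
    -ResiliencySettingName Simple -ProvisioningType Fixed
```

`Get-StorageTierSupportedSize` will report what the pool thinks the minimum and maximum tier sizes are, which can save some of the trial and error.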

Now for the big reveal: how’d we do?

Not bad at all for running on junk! We were able to squeeze a bit more go juice out of the HumbleLab with Server 2016 and ReFS! We bumped the IOPS up to 2240 from 880 and dropped latency to sub-2ms from 4ms, which is amazing considering what we are running this on.
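For reference, a 4K random I/O run with Microsoft’s DiskSpd along these lines is a typical way to produce numbers like these (the flags below are illustrative, not necessarily the exact run used here):

```powershell
# DiskSpd flags, spelled out:
#   -b4K   4KB blocks          -d60   60 second duration
#   -o32   32 outstanding IOs  -t4    4 worker threads
#   -r     random IO           -w30   30% writes
#   -L     capture latency     -c10G  10GB test file
.\diskspd.exe -b4K -d60 -o32 -t4 -r -w30 -L -c10G D:\testfile.dat
```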

I think this performance increase is largely due to how Storage Tiers and ReFS are implemented together in Server 2016, and not due to ReFS’s block cloning technology, which is focused on optimizing certain storage operations associated with virtualization workloads. As I understand it, Storage Tiers used to be “passive” in the sense that a scheduled task would move hot data onto SSD tiers and cooling/cold data back onto HDD tiers, whereas in Server 2016 Storage Tiers and ReFS can do real-time storage optimization. Holy shmow! Windows Server is starting to look like a real operating system these days! There are plenty of gotchas of course, and it is not entirely clear to me whether the documentation is talking about Storage Spaces / Storage Tiers or Storage Spaces Direct, but either way I am happy with the performance increase!
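If you want to see the old “passive” machinery for yourself, the optimization job still shows up as a scheduled task (the task path here is from memory, so verify it on your own box):

```powershell
# The classic tiering job runs on a schedule; on 2016 with ReFS,
# real-time optimization does the heavy lifting instead.
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" |
    Select-Object TaskName, State
```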

Until next time!