A downside to VVols

I picked up a Dell EqualLogic PS6000 for my homelab. Updated it to the latest firmware and discovered it’s capable of VVols. Yay! I created a container and (eventually) migrated nearly everything to it. Seriously, every VM except Avamar VE. Started creating and destroying VMs; DRS is happily moving VMs among the hosts.

UNTIL (dun dun dun)

The EqualLogic VSM, which runs the VASA storage provider, gets stuck during a vMotion. Hmm, I notice that all of the powered-off VMs now have a status of “inaccessible”. On the hosts, the VVol “datastore” is inaccessible.

Ok, that’s bad. Thank goodness for Cormac Hogan’s post about this issue. It boils down to a chicken-and-egg problem: vCenter relies on the VASA provider to supply information about the VVols, but if the VASA provider itself resides on VVols, there’s no apparent way to recover it. There’s no traditional datastore to browse for the .vmx file and re-register, and the connections to the VVols are established per VM, so if the VM isn’t running, there’s no connection to its storage.
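
If you want to see how widespread the damage is, a quick inventory walk will list every VM that vCenter has flagged. Here’s a minimal pyVmomi sketch of that check; the vCenter address, credentials, and SSL handling are placeholders for my lab, not a recommendation:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab connection details -- replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Walk every VM in the inventory and report the ones vCenter can't reach.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    # connectionState flips to "inaccessible" when vCenter loses sight of the
    # VM's files -- exactly what happens when the VASA provider goes away.
    if vm.runtime.connectionState == "inaccessible":
        print(f"{vm.name}: {vm.runtime.connectionState}")
view.DestroyView()
Disconnect(si)
```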

To resolve it, I had to create a new instance of the EqualLogic VSM, re-register it with vCenter, re-register it as a VASA provider, and add the EqualLogic group. Thankfully, the array itself is the source of truth for the VVol configuration, so the new VSM picked it up seamlessly.

So your options are apparently to place the VSM/VASA provider on a non-VVol datastore or to build a new one every time it shuts down. Not cool.

 


Best features in vSphere 5

Of the myriad new features announced as part of vSphere 5 yesterday, I’ve narrowed it down to a few of my favorites; these are the game-changers. I really hope they live up to expectations!

In no particular order…

Swap to Local SSD

This is a neat feature to take advantage of those fast SSD drives in your hosts that you’re not really using much (the UCS B230 comes to mind). With this feature, vSphere can place the swapfile on the host’s local SSDs for fast response times. Of course, if you’re actually hitting the swap file, you probably want to add RAM.
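
For what it’s worth, the per-VM swapPlacement setting has been in the API for a while and illustrates the same idea of steering swap off shared storage; it isn’t the new host-cache feature itself, just a related knob. A hedged pyVmomi sketch, with the VM name and connection details made up:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab connection details -- placeholders.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exchange01")  # hypothetical VM
view.DestroyView()

# "hostLocal" keeps the swapfile on the host's designated swapfile datastore
# (which could be a local SSD volume) instead of in the VM's home directory.
spec = vim.vm.ConfigSpec(swapPlacement="hostLocal")
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```
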
VMFS 5 – 64TB Volumes

This is a much-needed feature. The 2TB limit on VMFS extents has forced environments to be more complicated than necessary; now users can have fewer, larger datastores. You’ll want to make sure your array supports the Atomic Test & Set (ATS) primitive in VAAI before you put a lot of VMs on a single LUN.
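
If you’re curious which of your datastores are already on VMFS 5 and how big they are, a short pyVmomi sketch like this (connection details are placeholders) will tell you:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    # Only VMFS datastores carry the vmfs block with a filesystem version.
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        print(f"{ds.summary.name}: VMFS {ds.info.vmfs.version}, "
              f"{ds.summary.capacity / 1024**4:.2f} TiB")
view.DestroyView()
Disconnect(si)
```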

Monster VMs

vSphere 5 now supports VMs with 32 vCPUs and 1TB of vRAM. (No wonder, considering the new pricing structure… grumble grumble.)
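
Expressed through the API, a monster VM is just a reconfigure with big numbers. A sketch, assuming a powered-off VM named bigdb01 (hypothetical) and placeholder connection details:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "bigdb01")  # hypothetical VM
view.DestroyView()

# 32 vCPUs and 1 TB of RAM -- the new vSphere 5 per-VM maximums.
spec = vim.vm.ConfigSpec(numCPUs=32, memoryMB=1024 * 1024)
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```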

vStorage DRS

This is a cool feature I’m looking forward to working with. You’ll create a new object called a “Datastore Cluster” and assign several datastores to it; you’ll want the clustered datastores to have similar performance characteristics. vStorage DRS measures the storage I/O per VM and per datastore and will either make recommendations or actually move VMDKs to another datastore in the cluster if that will help balance the storage I/O load. This is really neat technology! Imagine that you’re NOT sitting on top of your performance statistics and don’t notice that the LUNs housing your Exchange mailboxes are getting slammed, affecting the performance of other VMDKs on those LUNs. Much like DRS lets you set it and forget it when it comes to balancing memory and CPU load, vStorage DRS will do the same for storage I/O. In addition, it allows you to set affinity and anti-affinity rules all the way down to individual VMDKs.
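
Under the covers, a Datastore Cluster is a new StoragePod object in the inventory. Here’s a rough pyVmomi sketch of creating one and dropping datastores into it; the datacenter layout, pod name, and datastore names are assumptions, and turning on the Storage DRS automation level is left to the client (or the StorageResourceManager API):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Assume a single datacenter; StoragePods live in its datastore folder.
dc = content.rootFolder.childEntity[0]
pod = dc.datastoreFolder.CreateStoragePod(name="SAS-Tier1-Cluster")

# Pick the datastores to cluster -- names are hypothetical, and they should
# have similar performance characteristics.
view = content.viewManager.CreateContainerView(dc.datastoreFolder,
                                               [vim.Datastore], True)
members = [ds for ds in view.view if ds.name in ("SAS-LUN-01", "SAS-LUN-02")]
view.DestroyView()

# Moving the datastores into the pod makes them members of the cluster.
pod.MoveIntoFolder_Task(members)
Disconnect(si)
```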

Multi-NIC enablement for vMotion

Honestly, this is one of those features I thought vSphere already had. When you configure multiple vMotion-enabled vmkernel ports, each with its own active uplink, vSphere will now actually take advantage of all of them, increasing the bandwidth available to vMotion operations.
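
Configuration-wise, that means tagging more than one vmkernel port for vMotion on each host, with each port backed by a different active uplink. A pyVmomi sketch of the tagging step; the host name and vmk devices (vmk1/vmk2) are assumptions about my lab:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.local")  # placeholder
view.DestroyView()

# Tag two existing vmkernel ports for vMotion; each should sit on a portgroup
# with a different active uplink so both NICs actually get used.
nic_mgr = host.configManager.virtualNicManager
for vmk in ("vmk1", "vmk2"):
    nic_mgr.SelectVnicForNicType(nicType="vmotion", device=vmk)

# Confirm which interfaces are now selected for vMotion traffic.
print(nic_mgr.QueryNetConfig(nicType="vmotion").selectedVnic)
Disconnect(si)
```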

VAAI for NFS – Full File Clone, Thick Disks on NFS

Before now, when you created a VMDK on an NFS datastore, you got a thin-provisioned disk. Now you can explicitly create thick disks and let the array (assuming it’s compatible) handle the work of reserving and zeroing the space. In addition, VAAI for NAS will offload the work of cloning a VM from the host to the array, greatly reducing the work and network traffic involved.
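
Here’s roughly what asking for an eager-zeroed thick disk looks like through the API, as a pyVmomi sketch; the VM name, disk size, and free SCSI unit number are assumptions, and the new disk lands in the VM’s home (NFS) datastore:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "fileserver01")  # hypothetical VM
view.DestroyView()

# Attach the new disk to the VM's existing SCSI controller.
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

# Thick, eager-zeroed backing; on a VAAI-for-NAS array the space reservation
# is offloaded to the filer instead of being done by the host.
backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode="persistent", thinProvisioned=False, eagerlyScrub=True)
disk = vim.vm.device.VirtualDisk(
    backing=backing, controllerKey=controller.key,
    unitNumber=1,                      # assumed free on this controller
    capacityInKB=100 * 1024 * 1024)    # 100 GB

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```
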
HA rewrite

Hallelujah! I’m thrilled to see this! HA is no longer dependent on DNS. It is supposed to be faster to enable, more descriptive when it fails, and to have less funky networking requirements. I’m eager to work with it and see if it’s all true.

Live Storage vMotion

Until now, when you needed to move a running VM from shared storage to storage that only one host can see, it was a two-step process: first vMotion the VM to the host that can see the new storage, then Storage vMotion it to the right datastore. Now you’re able to do that move in a single step. Not sure how often this will get used, but it’s nice.
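
In API terms, the single-step move is one RelocateVM_Task whose spec names both a destination host and a destination datastore. A sketch with made-up names; how a given vSphere release handles this for powered-on VMs is worth checking before you rely on it:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, "webserver01")         # hypothetical names
dest_host = find(vim.HostSystem, "esx02.lab.local")
dest_ds = find(vim.Datastore, "esx02-local-vmfs")

# One relocate spec carries the new host, its resource pool, and the new
# datastore -- the whole two-step dance in a single call.
spec = vim.vm.RelocateSpec(host=dest_host, datastore=dest_ds,
                           pool=dest_host.parent.resourcePool)
vm.RelocateVM_Task(spec=spec)
Disconnect(si)
```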

vCenter Appliance

I’m happy to see a VMware appliance for vCenter Server. It will eliminate a Windows license for non-Microsoft shops and give the service a smaller footprint with lower resource consumption. The downside I see is that not all plug-ins will work with the Linux-based appliance, and it cannot currently be used with VMware View Composer. I expect this to mature rapidly and become a more robust solution.

vSphere Storage Appliance

This is another great feature; it will make the resiliency of vSphere accessible to smaller shops that don’t have the budget for an enterprise-class storage array. It works by keeping a primary copy of each datastore on one host and a replica on another, across up to three hosts. Each host in the VSA cluster runs an NFS server and exports the primary datastores residing there. That way, should any one host in the VSA cluster fail, its datastores remain online via the running replica. This should not be seen as an alternative to enterprise-class storage, but as a good entry point for low-cost deployments that need some resiliency.