VMware I/O Analyzer 1.0.0 Review

Since I’m in the middle of replacing one storage array (EMC Celerra NS-120) with another (EMC VNX 5500), I’m interested in comparisons of I/O performance between the arrays and between the various LUN configurations.  VMware made the I/O Analyzer Fling available on December 5, 2011.  Hopefully, you know how to deploy the OVA, so I won’t bore you with those details and will simply start with “okay, I’ve got it installed per the instructions, now what?”

  • Using your vSphere Client, open a console on the IOAnalyzer VM you imported.  Log on as “root”, using “vmware” as the password.  This will take you to an ordinary gray screen.

    IO Analyzer Grey Screen
  • Back on the vSphere Client, make a note of the IP Address of the IO Analyzer VM and the host on which it’s running.
  • Launch Firefox or Chrome and point it to the IP address of the IO Analyzer.  It should come up with the “Home” screen.

    I/O Analyzer Home Screen
  • Click “|| Setup & Run ||” at the top to go to that area.
  • On the Setup & Run page, we’ll choose the host from which to gather ESXtop statistics and set the parameters to pass to IOMeter running in the IO Analyzer VM.
  • Under “Host Credentials”, enter the host name or IP address of the host where the IO Analyzer VM is running, provide the root password and click “Add Host” to save the details.
  • Under “Guest/Worker to Workload Binding”, select the host under the Host Name dropdown.  Supposedly, the “Select VM” dropdown will enumerate the VMs on the selected host, but I never saw it behave like this.  Just choose one of the VM names listed – the name is only used in formatting the report and has no bearing on the results.
  • The “Workload Spec” dropdown lists the access pattern you want to use with IOMeter.  There are some that represent a Workstation, SQL or Exchange Server in addition to the percent read/percent random ones – pick one.
  • In the VM IP Address field, provide the IP address of the IO Analyzer VM (the same IP address your browser is pointed to) and click “Add Worker”.
  • You may add additional workers/access patterns, but when I tried, the additional workers were not spawned on the VM and the test only used the last workload defined.
  • Enter a duration, in seconds, for the test to run (usually 120) and click “Run”.
  • Flip over to the console on the IO Analyzer VM and watch IOMeter launch.

    IOMeter running on the VM
  • In IOMeter, you can choose the “Results Display” tab, set “Results Since” to “Last Update” and the Update Frequency to 2 seconds to watch the results while the test is progressing.  This is great for those of us with little to no patience.
  • When finished, IOMeter will close and the “Setup & Run” page of the Web UI will state “Run complete.  Goto results tab.”
  • Click the “|| Results ||” link at the top, select the most recent item from the Result Summary drop down and click “View Summary” to load the details.

    IO Analyzer Results View
  • The IOMeter Results Summary is obtained from the IOMeter instance running within the VM and provides read and write IOPS and MBPS.
  • The Host Results Summary is obtained from ESXtop against the specified host.

You can compare the IOPS and MBPS from IOMeter and ESXtop to see what additional load the host is enduring.
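
To make that comparison concrete, here’s a minimal sketch of the arithmetic.  The IOPS and MBPS figures in it are made-up placeholders; you’d plug in the numbers from your own IOMeter and Host Results summaries.

    # Rough comparison of guest-level (IOMeter) vs. host-level (ESXtop) figures.
    # All numbers below are hypothetical placeholders, not real measurements.

    def overhead_pct(guest, host):
        """Percent by which the host-level figure exceeds the guest-level figure."""
        return (host - guest) / guest * 100.0

    guest_iops, host_iops = 4200.0, 4350.0   # from the IOMeter / Host Results summaries
    guest_mbps, host_mbps = 16.4, 17.1

    print("IOPS overhead: %.1f%%" % overhead_pct(guest_iops, host_iops))
    print("MBPS overhead: %.1f%%" % overhead_pct(guest_mbps, host_mbps))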

To measure the performance of another LUN, simply use Storage vMotion to migrate the IO Analyzer VM elsewhere and rerun it.

My Observations

The VM dropdown thing is annoying, but not really crucial.  The tool will let you select a host unrelated to the VM and confound yourself.  You must log on to the VM at the console in order for it to fire off IOMeter and obtain the results.  It’s a little clunky; multiple workers are not spawned on the VM when selected in the Web GUI.  But you can simply run IOMeter yourself in the VM and set your own parameters.

Coming Up

I’ll post my performance statistics for various LUNs and later compare them to the storage on the VNX 5500, with and without FAST Cache.

VMware View 4.6 incompatible with VM v8 – correction

I’ve got an environment running vSphere 5.0 and VMware View 4.6 (because View 5.0 isn’t GA yet).  I found that when I upgrade the VM version and VMware Tools of my “Parent” Windows 7 VM, then recompose the Pool, the View client can no longer connect to the Desktop over PCoIP!

Here are some more details: I’m using a security server in a separate VLAN from the Connection Server, but even if I connect the View client directly to the Connection Server, the behavior is the same.  It acts just like the PCoIP port is blocked (it’s not, BTW): first the black screen, then the session dies.

If I choose the snapshot made before the hardware version and Tools version upgrade and recompose the pool again, the client connects as expected.  There are no other apparent differences between the “working” snapshot and the “not-working” snapshot, so I must conclude that VMware View 4.6 is incompatible with VM v8.

If this is, or is not the case, or you have a workaround, please let me know!


Edit: Thanks to the very first comment, I reinstalled the View Agent on the parent VM and, voilà, it worked like a charm.


vSphere Licensing Update!

Thank you VMware!  It’s refreshing when a company with VMware’s magnitude in the market actually listens to partners and customers.

These changes to the vSphere 5 licensing model mean that we can continue to use loads of pRAM in our hosts.  The new model also sensibly avoids penalizing users for transient VMs.  I’m happy to see these changes and think they’ll make the transition much easier for our customers.

vSphere 5 Licensing Change Ramifications

Until now, vSphere licenses have focused on populated physical processor sockets.  To maximize the bang-for-the-buck, we sought hosts with few processors, each with as many cores as our vSphere license would permit, and loaded ’em up with RAM.  Picture a 6-host cluster where each host has two 10-core processors and 192GB RAM running vSphere Enterprise Plus.

In this case, we buy twelve Enterprise Plus licenses, entitling us to 576GB of vRAM.  This is an N+1 HA cluster, so we can consume up to 5 hosts’ worth of pRAM, or 960GB.  This leaves us having to purchase an additional EIGHT licenses of Enterprise Plus to cover processors we don’t actually own.  Sure, we could configure the cluster as N+2, reducing our effective capacity from 83.3% to 66.7%, and still have to buy four more licenses just to use all the pRAM in the cluster.

Compounding the frustration is that we’ve been encouraged to make use of great features like transparent page sharing, which makes memory over-subscription easy and efficient.  Now, if I apply a 1.5:1 memory over-subscription rate to the cluster above, we may have to come up with EIGHTEEN additional Enterprise Plus licenses for processors we don’t actually own.
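
Here’s a quick back-of-the-envelope sketch of that math.  The 48GB-per-license entitlement is simply the 576GB above divided by the twelve licenses we own.

    from math import ceil

    hosts, cpus_per_host, pram_per_host_gb = 6, 2, 192
    vram_per_license_gb = 48                  # Enterprise Plus entitlement (576GB / 12)
    licenses_owned = hosts * cpus_per_host    # 12 -- one per populated socket

    def extra_licenses(usable_hosts, oversub=1.0):
        """Licenses needed beyond the twelve we own to cover the vRAM we'd consume."""
        vram_needed_gb = usable_hosts * pram_per_host_gb * oversub
        return max(0, ceil(vram_needed_gb / vram_per_license_gb) - licenses_owned)

    print(extra_licenses(5))        # N+1, no over-subscription -> 8
    print(extra_licenses(4))        # N+2                       -> 4
    print(extra_licenses(5, 1.5))   # N+1 at 1.5:1              -> 18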

I am aware that the vRAM entitlement is pooled by license type across everything managed by a vCenter instance, so if you have a couple of 2-host clusters, you might be able to apply some of that entitlement to the bigger clusters…

With this license change, I think VMware is driving people to use smaller, less efficient clusters.  Customers are effectively penalized whenever the usable pRAM in their cluster, (Number of Hosts - 1) × pRAM per host, exceeds the pooled vRAM entitlement of the CPUs they’ve licensed.

This is just my humble opinion and does not necessarily reflect the opinion of my employer or partners or anyone else, for that matter.

Best features in vSphere 5

Of the myriad of new features announced as part of vSphere 5 yesterday, I’ve narrowed it down to a few of my favorites; these are the game-changers.  I really hope they live up to the expectations!

In no particular order…

Swap to Local SSD

This is a neat feature to take advantage of those fast SSD drives in your hosts that you’re not really using much.  (The UCS B230 comes to mind).  With this feature, vSphere can move the swapfile to the host’s local SSDs for fast response time.  Of course, if you’re using the swap file, you probably want to add RAM.

VMFS 5 – 64TB Volumes

This is a much-needed feature.  The 2TB limit on VMFS extents has made environments more complicated than necessary.  Now users can have fewer, larger datastores.  You’ll want to make sure your array supports the Atomic Test & Set (ATS) function in VAAI before you put a lot of VMs on a single LUN.

Monster VMs

vSphere 5 now provides support for VMs with 32 vCPUs and 1TB of vRAM.  (No wonder, considering the new pricing structure… grumble grumble.)

vStorage DRS

This is a cool feature I’m looking forward to working with.  You’ll create a new object called a “Datastore Cluster” and assign several Datastores to it.  I’m pretty sure you’re going to want the datastores that are clustered to have similar performance characteristics.  vStorage DRS will measure the storage I/O per VM and per datastore and make suggestions or actually move the VMDKs to another datastore in the cluster if it will help balance the storage I/O load.  This is really neat technology!  Imagine that you’re NOT sitting on top of your performance statistics and don’t notice that the LUNs housing your Exchange mailboxes are getting slammed, affecting performance of other VMDKs on those LUNs.  Much like DRS lets you set-it-and-forget-it when it comes to balancing the memory and CPU load, vStorage DRS will do the same for storage I/O.  In addition, it allows you to set affinity and anti-affinity rules all the way down to the individual VMDKs.
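
As a toy illustration of the balancing idea only — this is not VMware’s actual algorithm, and the datastore names, VMDK names and IOPS figures are invented — a crude balancer might look something like this:

    # Toy sketch: pick the hottest datastore and suggest moving its busiest VMDK
    # to the quietest datastore in the same "datastore cluster".
    datastores = {
        "DS-Exchange-01": [("mbx01.vmdk", 1800), ("mbx02.vmdk", 1600)],  # (vmdk, avg IOPS)
        "DS-General-01":  [("app01.vmdk", 300), ("file01.vmdk", 250)],
    }

    def suggest_move(datastores):
        """Suggest moving the busiest VMDK from the hottest datastore to the quietest one."""
        load = {ds: sum(iops for _, iops in vmdks) for ds, vmdks in datastores.items()}
        hottest = max(load, key=load.get)
        quietest = min(load, key=load.get)
        if hottest == quietest:
            return None
        vmdk, iops = max(datastores[hottest], key=lambda v: v[1])
        return "Consider moving %s (%d IOPS) from %s to %s" % (vmdk, iops, hottest, quietest)

    print(suggest_move(datastores))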

Multi-NIC enablement for vMotion

Honestly, this is one of those features I thought vSphere already had.  When you assign multiple uplinks to your vMotion vmkernel port and make them active, vSphere will now actually take advantage of all of them, increasing the bandwidth and speed of vMotion operations.

VAAI for NFS, full file clone, thick disks on NFS

Before now, when you created a VMDK on an NFS datastore, you got a thin-provisioned disk.  Now, you can explicitly create thick disks and let the array (assuming it’s compatible) handle the work of zeroing them out.  In addition, VAAI for NAS will offload the work of cloning a VM from the host to the array, greatly reducing the work and network traffic involved.

HA rewrite

Hallelujah! I’m thrilled to see this!  HA is no longer dependent on DNS.  It is supposed to be faster to enable, more descriptive when it fails and have less funky networking requirements.  I’m eager to work with this and see if it’s all true.

Live Storage vMotion

Until now, when you needed to move a running VM from shared storage to storage only one of the hosts can see, it was a two-step process: first vMotion the VM to the host that can see the new storage, then Storage vMotion it to the right datastore.  Now, you’re able to do that move in a single step.  Not sure how often this will get used, but it’s nice.

vCenter Appliance

I’m happy to see a VMware appliance for vCenter Server.  This will eliminate a Windows license for non-Microsoft shops and give the service a smaller footprint with less resource consumption.  The downside I see is that not all plug-ins will work with the Linux-based appliance and it cannot currently be used with VMware View Composer.  I expect this to mature rapidly and become a more robust solution.

vSphere Storage Appliance

This is another great feature that will make the resiliency of vSphere accessible to smaller shops that don’t have the budget for an enterprise-class storage array.  It works by keeping a primary copy and a replica of each datastore spread across the up-to-three hosts in the VSA cluster.  Each host runs an NFS server and exports the primary datastores residing there.  In this way, should any one host in the VSA cluster fail, its datastores remain online via the running replicas.  This should not be seen as an alternative to enterprise-class storage, but as a good entry point for low-cost deployments needing some resiliency.
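
Here’s a toy sketch of that primary/replica idea — not the actual VSA implementation, and the host and datastore names are invented — just to show why losing a single host doesn’t take a datastore offline:

    # Toy sketch: each datastore has a primary host and a replica host.
    placement = {
        # datastore: (primary host, replica host)
        "VSADs-1": ("esx01", "esx02"),
        "VSADs-2": ("esx02", "esx03"),
        "VSADs-3": ("esx03", "esx01"),
    }

    def serving_host(datastore, failed_hosts):
        """Which host keeps the datastore's NFS export online after the given failures."""
        primary, replica = placement[datastore]
        if primary not in failed_hosts:
            return primary
        if replica not in failed_hosts:
            return replica
        return None  # both copies offline -- more than one host down

    # With esx02 down, its primary datastore stays online from the replica on esx03.
    print(serving_host("VSADs-2", failed_hosts={"esx02"}))   # -> esx03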