VMware I/O Analyzer 1.0.0 Review

Since I’m in the middle of replacing one storage array (EMC Celerra NS-120) with another (EMC VNX 5500), I’m interested in comparing I/O performance between the arrays and between the various LUN configurations.  VMware made the I/O Analyzer Fling available on December 5, 2011.  Hopefully, you know how to deploy the OVA, so I won’t bore you with those details and will simply start with “okay, I’ve got it installed per the instructions, now what?”

  • Using your vSphere Client, open a console on the IOAnalyzer VM you imported.  Log on as “root”, using “vmware” as the password.  This will take you to an ordinary gray screen.

    IO Analyzer Grey Screen
  • Back on the vSphere Client, make a note of the IP Address of the IO Analyzer VM and the host on which it’s running.
  • Launch Firefox or Chrome and point it to the IP address of the IO Analyzer.  It should come up with the “Home” screen.

    I/O Analyzer Home Screen
  • Click “|| Setup & Run ||” at the top to go to that area.
  • On the Setup & Run page, we’ll set the host to gather ESXtop statistics on and the parameters to pass to IOMeter running in the IO Analyzer VM.
  • Under “Host Credentials”, enter the host name or IP address of the host where the IO Analyzer VM is running, provide the root password and click “Add Host” to save the details.
  • Under “Guest/Worker to Workload Binding”, select the host under the Host Name dropdown.  Supposedly, the “Select VM” dropdown will enumerate the VMs on the selected host, but I never saw it behave that way.  Just choose one of the VM names listed – the name is only used to label the report and has no bearing on the results.  (If you want to verify which VMs actually live on which host, see the pyVmomi sketch after this list.)
  • The “Workload Spec” dropdown lists the access pattern you want IOMeter to use.  There are presets that represent a Workstation, SQL Server or Exchange Server in addition to the percent-read/percent-random ones – pick one.
  • In the “VM IP Address” field, provide the IP address of the IO Analyzer VM (the same IP address your browser is pointed to) and click “Add Worker”.
  • You may add additional workers/access patterns, but when I tried, the additional workers were never spawned on the VM and the test only used the last workload defined.
  • Enter a duration for the test to run in seconds – usually 120 – and click “Run”.
  • Flip over to the console on the IO Analyzer VM and watch IOMeter launch.

    IOMeter running on the VM
  • In IOMeter, you can choose the “Results Display” tab, set “Results Since” to “Last Update” and the “Update Frequency” to 2 seconds to see the results while the test is in progress.  This is great for those of us with little to no patience.
  • When finished, IOMeter will close and the “Setup & Run” page of the Web UI will state “Run complete.  Goto results tab.”
  • Click the “|| Results ||” link at the top, select the most recent item from the “Result Summary” dropdown and click “View Summary” to load the details.

    IO Analyzer Results View
  • The IOMeter Results Summary is obtained from the IOMeter instance running within the VM and provides read and write IOPS and MBps.
  • The Host Results Summary is obtained from ESXtop against the specified host.
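
Since the “Select VM” dropdown never populated for me, here’s a rough pyVmomi sketch for listing which VMs are actually registered on each host (and their guest IPs) so you know what you’re binding to.  The vCenter name, credentials and the lab-only SSL shortcut are placeholders for your own environment – treat this as a sketch, not part of the Fling.

    # List the VMs registered on each host, plus their guest IP addresses.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='root', pwd='vmware',
                      sslContext=ssl._create_unverified_context())   # lab only
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            for vm in host.vm:                      # VMs registered on this host
                ip = vm.guest.ipAddress if vm.guest else None
                print('   %s  %s' % (vm.name, ip))
        view.DestroyView()
    finally:
        Disconnect(si)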

You can compare the IOPS and MBps from IOMeter and ESXtop to see how much additional load the host is enduring.
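
If you want to keep the raw host-side numbers (or make that comparison outside the tool), ESXtop in batch mode (“esxtop -b -d 2 -n 60 > run.csv” on the host) writes a CSV you can post-process.  Here’s a small Python sketch that averages whichever columns match a given adapter and counter name; the adapter and counter strings in the example are assumptions, so check the headers in your own CSV.

    # Average selected counters from an esxtop/resxtop batch-mode CSV.
    import csv

    def average_counters(csv_path, *substrings):
        """Average every column whose header contains all of the given substrings."""
        totals, rows = {}, 0
        with open(csv_path, newline='') as f:
            for row in csv.DictReader(f):
                rows += 1
                for header, value in row.items():
                    if header and all(s in header for s in substrings):
                        try:
                            totals[header] = totals.get(header, 0.0) + float(value)
                        except (TypeError, ValueError):
                            pass
        return {h: t / rows for h, t in totals.items()} if rows else {}

    # e.g. average reads per second seen by adapter vmhba1 during the run
    # (counter name is an assumption -- inspect your CSV's headers):
    # print(average_counters('run.csv', 'vmhba1', 'Reads/sec'))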

To measure the performance of another LUN, simply use Storage vMotion to migrate the IO Analyzer VM to a datastore on that LUN and rerun the test.
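
If you’re going to repeat this across a pile of LUNs, the Storage vMotion can be scripted as well.  A minimal pyVmomi sketch, assuming a connected ServiceInstance like the one in the earlier sketch; the VM and datastore names are placeholders:

    # Storage vMotion a VM to another datastore.
    from pyVmomi import vim

    def get_obj(content, vimtype, name):
        """Look up a managed object by name via a container view."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.DestroyView()

    def svmotion(content, vm_name, datastore_name):
        vm = get_obj(content, vim.VirtualMachine, vm_name)
        ds = get_obj(content, vim.Datastore, datastore_name)
        spec = vim.vm.RelocateSpec(datastore=ds)        # move only the storage
        return vm.RelocateVM_Task(spec=spec)            # returns a Task to wait on

    # e.g. svmotion(si.RetrieveContent(), 'IOAnalyzer', 'VNX5500-LUN1')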

My Observations

The VM dropdown issue is annoying, but not really crucial; the tool will let you select a host unrelated to the VM and confound yourself.  You must be logged on to the VM’s console in order for the tool to fire off IOMeter and obtain the results.  It’s a little clunky; multiple workers selected in the Web GUI are not spawned on the VM.  But you can simply run IOMeter yourself in the VM and set your own parameters.

Coming Up

I’ll post my performance statistics for various LUNs and later compare them to the storage on the VNX 5500, with and without FAST Cache.

NFS Datastore on EMC NS-120 for VMware Data Recovery

Background

I have a handful of 7.2K RPM 1TB SATA drives in the NS-120 that I didn’t want to use as a datastore for multiple VMs, but that would make an ideal backup-to-disk location.  Clearly, I could have connected the hosts to the storage over the existing FC connections, but really, this IS a Celerra and I can expose volumes via iSCSI or NFS.  So, I created a pool and a LUN, then assigned the LUN (on the CLARiiON side) to the Celerra.

Datastore on NFS

When you add an NFS export to the Celerra, you have to create a network interface on the data movers first.  The NFS “server” listens on this interface, so your hosts must mount the export from the Data Mover interface IP, not the Control Station IP.  The documentation is a little misleading – I was trying to connect to the Control Station.  Making matters worse, the VMkernel log states “The NFS server does not support MOUNT version 3 over TCP”.  That’s a lot of help.  Here’s a wild goose, go catch it.
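
For what it’s worth, the mount itself can also be scripted once you know the right IP.  A rough pyVmomi sketch, assuming a connected ServiceInstance as in the earlier sketches; the Data Mover IP, export path and names below are placeholders:

    # Mount an NFS export as a datastore on an ESX/ESXi host.
    from pyVmomi import vim

    def mount_nfs(host_system, data_mover_ip, export_path, datastore_name):
        spec = vim.host.NasVolume.Specification(
            remoteHost=data_mover_ip,      # the Data Mover interface, NOT the Control Station
            remotePath=export_path,        # the export path on the Celerra
            localPath=datastore_name,      # the datastore name shown in vSphere
            accessMode='readWrite')
        return host_system.configManager.datastoreSystem.CreateNasDatastore(spec)

    # e.g. (get_obj helper from the earlier sketch):
    # mount_nfs(get_obj(content, vim.HostSystem, 'esx01.example.com'),
    #           '192.168.1.50', '/backup_fs', 'NS120-Backup')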

Hopefully, this will help someone save some time when connecting their vSphere host to an NFS export on a Celerra.

VMware Data Recovery

I deployed VDR using the OVF, which is nice: it prompts for where to put the VMDK and which network to use, then transfers the appliance.

Next, install the VDR client plugin and restart the vSphere Client.  You’ll need to add a vDisk or two to the VDR appliance to use as backup locations.  Don’t start the VMware Data Recovery app or the VDR appliance until after you’ve added the additional vDisks (not a big deal, but otherwise you’ll have to restart the appliance to get it to see the vDisks you added).
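
Adding those vDisks can be scripted too if you’re building more than one appliance.  A rough pyVmomi sketch, assuming a connected ServiceInstance as in the earlier sketches; the size and any names are placeholders:

    # Add a thin-provisioned virtual disk to a VM (e.g. the VDR appliance).
    from pyVmomi import vim

    def add_vdisk(vm, size_gb):
        # Reuse the VM's existing SCSI controller; unit 7 is reserved for the controller.
        controller = next(d for d in vm.config.hardware.device
                          if isinstance(d, vim.vm.device.VirtualSCSIController))
        used = {d.unitNumber for d in vm.config.hardware.device
                if getattr(d, 'controllerKey', None) == controller.key}
        unit = next(u for u in range(16) if u != 7 and u not in used)

        disk = vim.vm.device.VirtualDisk()
        disk.capacityInKB = size_gb * 1024 * 1024
        disk.controllerKey = controller.key
        disk.unitNumber = unit
        disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode='persistent', thinProvisioned=True)   # file path is auto-generated

        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

    # e.g. add_vdisk(vdr_vm, 500) to add a 500 GB backup disk, then restart the appliance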

The Getting Started Wizard is pretty self-explanatory and it segues into the Backup Wizard.  The wizard will require you to format and mount the backup locations you created in the Getting Started Wizard, but once that is finished, you’ll be able to continue your first backup job configuration.  Naturally, once I had it created and scheduled, I had to make it back up now.  Like a kid, I ran to watch the drive lights blinking on the SATA drives exposed only via NFS on the EMC.  Yay!  It worked!