Posts Tagged ‘VAAI’

A downside to VVols

03/17/2017

I picked up a Dell EqualLogic PS6000 for my homelab. Updated it to the latest firmware and discovered it's capable of VVols. Yay! I created a container and (eventually) migrated nearly everything to it. Seriously, every VM except Avamar VE. Started creating and destroying VMs; DRS was happily moving VMs among the hosts.

UNTIL (dun dun dun)

The EqualLogic VSM, which runs the VASA storage provider, got stuck during a vMotion. Hmm, I noticed that all of the powered-off VMs now had a status of "inaccessible". On the hosts, the VVol "datastore" was inaccessible.

OK, that's bad. Thank goodness for Cormac Hogan's post about this issue. It boils down to a chicken-and-egg problem: vCenter relies on the VASA provider to supply information about the VVols. If the VASA provider itself resides on VVols, there's no apparent way to recover it. There's no datastore to browse for the VMX file to re-register, and the connections to a VM's VVols are established on behalf of the VM, so if the VM isn't running, there's no connection to it.

To resolve this, I had to create a new instance of the EqualLogic VSM, re-register it with vCenter, re-register it as a VASA provider, and add the EqualLogic group. Thankfully, the array itself is the source of truth for the VVol configuration, so the new VSM picked it up seamlessly.
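
For what it's worth, the re-registration can be scripted. A minimal sketch using the VASA cmdlets in the PowerCLI storage module (PowerCLI 6.x); the provider name and URL below are placeholders for whatever your VSM exposes:

# Register the rebuilt VSM as a VASA storage provider (name/URL are placeholders)
$cred = Get-Credential
New-VasaProvider -Name "EqualLogic-VSM" -Url "https://vsm.lab.local:8443/vasa" -Credential $cred

# Confirm the provider is back online
Get-VasaProvider | Select-Object Name, Status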

So your options are apparently to place the VSM/VASA provider on a non-VVol datastore, or to build a new one every time it shuts down. Not cool.

 

VMware vSphere 5 AutoDeploy on Cisco UCS – Part 2: Image Profiles

After completing Part 1, we have DHCP configured to assign a reserved IP address to the Cisco B200 M2 blades when they boot from the vNIC. Now the goal is to create the image that the Auto Deploy hosts will use.

The image-building procedure sounds complicated, but once you break it down, it's not too bad. First, we need to inventory the components (VIBs) that'll be needed on the hosts above and beyond the base install. In our case, we needed the HA agent, the Cisco Nexus 1000V VEM, and the EMC NAS plugin for VAAI. The HA agent will be downloaded from the vCenter Server, but you'll have to download the licensed ZIP files from Cisco and EMC for the others.

In addition to the enhancements, we'll need the VMware ESXi 5.0 offline bundle, "VMware-ESXi-5.0.0-469512-depot.zip", from the licensed product downloads area of VMware.com. This is essentially a starter kit for Image Builder; it contains the default packages for ESXi 5.0.

Preparation:

  1. Copy these files into C:\depot
    • VMware-ESXi-5.0.0-469512-depot.zip
    • VEM500-201108271.zip
    • EMCNasPlugin-1.0-10.zip
  2. Launch PowerCLI

On to the PowerCLI code:

Register the offline bundle as a Software Depot (aka source)

Add-EsxSoftwareDepot "C:\depot\VMware-ESXi-5.0.0-469512-depot.zip"

Connect PowerCLI to your vCenter Server (replace x.x.x.x with your vCenter server's name or IP)

Connect-VIServer -Server x.x.x.x

List the image profiles contained in the offline bundle, ESXi-5.0.0-469512-no-tools and ESXi-5.0.0-469512-standard. We’re going to work with “standard”.

Get-EsxImageProfile

Register vCenter Server depot for HA agent

Add-EsxSoftwareDepot -DepotUrl http://X.X.X.X:80/vSphere-HA-depot

Register depot for updates to ESXi

Add-EsxSoftwareDepot -DepotUrl https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Register depot for Nexus 1000V VEM and VAAI plugin for VNX NAS

Add-EsxSoftwareDepot "C:\depot\VEM500-201108271.zip"
Add-EsxSoftwareDepot "C:\depot\EMCNasPlugin-1.0-10.zip"

List the image profiles again; this time it will show several more versions. For each, there is a "no-tools" and a "standard" variant. Make a note of the newest "standard" image (or the one you want to use)

Get-EsxImageProfile

Clone the standard "ESXi-5.0.0-20111204001-standard" image profile to a new image profile named "ESXi-HA-VEM-VAAI-20111204001"

New-EsxImageProfile -CloneProfile ESXi-5.0.0-20111204001-standard -Name "ESXi-HA-VEM-VAAI-20111204001"

Add the HA agent (vmware-fdm) to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage vmware-fdm

Check for the VEM package “cisco-vem-v131-esx”

Get-EsxSoftwarePackage -Name cisco*

Add the Nexus 1000V VEM to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage cisco-vem-v131-esx

Check for EMC VAAI Plugin for NAS “EMCNasPlugin”

Get-EsxSoftwarePackage -Name emc*

Add the EMC VAAI plugin for NAS to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage EMCNasPlugin

Export our custom image profile to a big ZIP file; we'll use this to apply future updates

Export-EsxImageProfile -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -FilePath "C:\depot\ESXi-HA-VEM-VAAI-20111204001.zip" -ExportToBundle

Deploy Rules
OK, now that we have a nice image profile, let's assign it to a deploy rule. To get Auto Deploy working, we'll need a good host profile and details from a reference host. So we'll apply our initial image profile to a reference host, then use that host to create the host profile and repair rule set compliance.

Create a new temporary rule with our image profile and an IP pattern, then add it to the active rule set.

New-DeployRule -Name "TempRule" -Item "ESXi-HA-VEM-VAAI-20111204001" -Pattern "ipv4=10.10.0.23"
Add-DeployRule -DeployRule "TempRule"

At this point, we booted up the blade that would become the reference host. I knew that DHCP would give it the IP we identified in the temporary deploy rule. By the way, Auto Deploy is not especially fast; expect 10 minutes or so from power-on to the host being visible in vCenter.

Repair Ruleset
You may have noticed a warning about a component that is not Auto Deploy-ready; we have to fix that.

In the following code, “referencehost.mydomain.com” is the FQDN of my reference host. This procedure will modify the ruleset to ignore the warning on the affected VIB.

# See which rules the host is out of compliance with
Test-DeployRuleSetCompliance referencehost.mydomain.com
# Capture the result and feed it to the repair cmdlet
$tr = Test-DeployRuleSetCompliance referencehost.mydomain.com
Repair-DeployRuleSetCompliance $tr
# Re-test; the host should now report compliant
Test-DeployRuleSetCompliance referencehost.mydomain.com

After this completes, reboot the reference host and add it to your Nexus 1000V DVS.
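
A rough PowerCLI sketch of that last step, assuming a PowerCLI release with the distributed switch cmdlets and that vCenter presents the N1KV as a DVS; the switch name is a placeholder:

# Reboot the reference host cleanly
Set-VMHost -VMHost "referencehost.mydomain.com" -State Maintenance
Restart-VMHost -VMHost "referencehost.mydomain.com" -Confirm:$false

# Once it returns, join it to the Nexus 1000V distributed switch and exit maintenance
Add-VDSwitchVMHost -VDSwitch (Get-VDSwitch -Name "N1KV") -VMHost "referencehost.mydomain.com"
Set-VMHost -VMHost "referencehost.mydomain.com" -State Connected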

Part 3 (coming soon!) will cover the host profile and final updates to the deployment rules.

References:
https://communities.cisco.com/docs/DOC-26572

Best features in vSphere 5

07/13/2011

Of the myriad new features announced as part of vSphere 5 yesterday, I've narrowed the list down to a few of my favorites; these are the game-changers. I really hope they live up to the expectations!

In no particular order…

Swap to Local SSD

This is a neat feature to take advantage of those fast SSD drives in your hosts that you're not really using much (the UCS B230 comes to mind). With this feature, vSphere can move the swap file to the host's local SSDs for fast response time. Of course, if you're actually hitting the swap file, you probably want to add RAM.
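
If you're wondering whether ESXi even sees those drives as SSDs, here's a quick check via PowerCLI's Get-EsxCli; a sketch, with the host name as a placeholder and property names following the esxcli output fields:

# Grab an esxcli handle for the host
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local")

# List devices and whether ESXi tagged them as local SSDs
$esxcli.storage.core.device.list() | Select-Object Device, IsLocal, IsSSD
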
VMFS 5 – 64TB Volumes

This is a much-needed feature. The 2TB limit on VMFS extents has been making environments more complicated than necessary. Now users can have fewer, larger datastores. You'll want to make sure your array supports the Atomic Test & Set (ATS) primitive in VAAI before you put a lot of VMs on a single LUN.
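
VAAI support is visible the same way; a sketch along the same lines (host name is a placeholder, and passing $null asks for all devices):

# ATS support is reported per device under the VAAI status namespace
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local")
$esxcli.storage.core.device.vaai.status.get($null) | Select-Object Device, ATSStatus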

Monster VMs

vSphere 5 now supports VMs with 32 vCPUs and 1TB of vRAM. (No wonder, considering the new pricing structure… grumble grumble.)

vStorage DRS

This is a cool feature I'm looking forward to working with. You'll create a new object called a "Datastore Cluster" and assign several datastores to it. You'll want the clustered datastores to have similar performance characteristics. vStorage DRS measures the storage I/O per VM and per datastore, then makes suggestions, or actually moves the VMDKs to another datastore in the cluster, if that will help balance the storage I/O load. This is really neat technology!

Imagine that you're NOT sitting on top of your performance statistics and don't notice that the LUNs housing your Exchange mailboxes are getting slammed, affecting performance of other VMDKs on those LUNs. Much like DRS lets you set-it-and-forget-it when it comes to balancing the memory and CPU load, vStorage DRS does the same for storage I/O. In addition, it allows you to set affinity and anti-affinity rules all the way down to individual VMDKs.
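
For the record, later PowerCLI releases grew cmdlets for this (at announcement time you'd use the vSphere Client); a sketch with placeholder names:

# Create a datastore cluster and move similarly-performing datastores into it
$dsc = New-DatastoreCluster -Name "SAS-Tier" -Location (Get-Datacenter -Name "Lab")
Move-Datastore -Datastore (Get-Datastore -Name "sas-lun-*") -Destination $dsc

# Turn on fully automated Storage DRS for the cluster
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated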

Multi-NIC enablement for vMotion

Honestly, this is one of those features I thought vSphere already had. When you assign multiple uplinks to your vMotion vmkernel ports and make them active, vSphere will now actually take advantage of all of them, increasing the bandwidth and speed of vMotion operations.
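
The usual layout is two vMotion vmkernel ports, each pinned to a different active uplink. A sketch on a standard vSwitch; the host name, IPs, and NIC names are placeholders:

$vmhost = Get-VMHost "esx01.lab.local"
$vsw = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

# Two vMotion-enabled vmkernel ports, one per port group
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "vMotion-1" -IP 10.10.1.21 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "vMotion-2" -IP 10.10.1.22 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

# Mirror-image teaming so each port group actively uses a different NIC
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-1" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic2
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-2" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic1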

VAAI for NFS, full file clone, thick disks on nfs

Before now, when you created a VMDK on an NFS datastore, you got a thin-provisioned disk. Now you can explicitly create thick disks and let the array (assuming it's compatible) handle the work of zeroing them out. In addition, VAAI for NAS will offload the work of cloning a VM from the host to the array, greatly reducing the work and network traffic involved.
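
Asking for the thick formats from PowerCLI looks like this; the VM, size, and datastore names are placeholders, and the array-side VAAI-NAS plugin does the actual zeroing:

# Create an eager-zeroed thick VMDK on an NFS datastore
New-HardDisk -VM (Get-VM "test-vm") -CapacityGB 100 -Datastore (Get-Datastore "nfs-ds01") -StorageFormat EagerZeroedThick
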
HA rewrite

Hallelujah! I'm thrilled to see this! HA is no longer dependent on DNS. It is supposed to be faster to enable, more descriptive when it fails, and to have less funky networking requirements. I'm eager to work with this and see if it's all true.

Live Storage vMotion

Until now, when you needed to move a running VM from shared storage to storage that only one of the hosts can see, it was a two-step process: first vMotion the VM to the host that can see the new storage, then Storage vMotion it to the right datastore. Now you're able to do that move in a single step. Not sure how often this will get used, but it's nice.
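
In PowerCLI terms, that's just Move-VM given both a destination host and a datastore; a sketch with placeholder names, assuming the platform allows the combined move on a running VM:

# One-step host + datastore move
Move-VM -VM (Get-VM "test-vm") -Destination (Get-VMHost "esx02.lab.local") -Datastore (Get-Datastore "esx02-local")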

vCenter Appliance

I'm happy to see a VMware appliance for vCenter Server. This will eliminate a Windows license for non-Microsoft shops and give the service a smaller footprint with less resource consumption. The downside I see is that not all plug-ins will work with the Linux-based appliance, and it cannot currently be used with VMware View Composer. I expect this to mature rapidly and become a more robust solution.

vSphere Storage Appliance

This is another great feature that will make the resiliency of vSphere accessible to smaller shops that don't have the budget for an enterprise-class storage array. It works by maintaining a primary copy and a replica of each datastore across up to three hosts. Each host in the VSA cluster runs an NFS server and exports the primary datastores residing there. That way, should any one host in the VSA cluster fail, its datastores remain online via the running replicas. This should not be seen as an alternative to enterprise-class storage, but as a good entry point for low-cost deployments that need some resiliency.