Turns out that all snapshots are placed in the VM’s default/root folder. Sounds obvious, right? But what if you’ve assigned multiple virtual hard disks on different datastores? That’s where you can run into trouble. VMware Data Recovery first creates a snapshot and then backs up the VMDK, so you may not realize until it’s too late that you’ve filled up one of your datastores. VMware provides a way to alter the snapshot “working directory”, but keep in mind that you’ll have to reset that property each time you Storage vMotion the VM.
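As a sketch of that override, the working directory is set in the VM’s .vmx file while the VM is powered off; the datastore path and folder below are hypothetical:

```
# Redirect snapshot delta files to a datastore with room to grow
# (hypothetical path; substitute your own datastore and folder)
workingDir = "/vmfs/volumes/big-datastore/myvm-snapshots"
```

Note that, depending on the vSphere version, the VM’s swap file may follow this setting as well, so check the relevant KB article before relying on it.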
Not all smartcard readers are created equal
Recently, I was working at a customer site that used smartcards to authenticate to applications. In this case, since the reader was not part of the VMware View session authentication, the USB reader itself had to be passed through via redirection.
I tried setting the “AllowSmartcards” value to true:
HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB AllowSmartcards=true
But the reader still wasn’t redirected to the View session. In this case, I ended up having to follow this KB article to have the device recognized as one that can be redirected. The particular reader identifies itself as a “USB Keyboard”, a class of device that is typically not redirected. Obviously, you’ll have to make sure that you don’t redirect your actual keyboard.
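For reference, the setting above can be captured in a .reg file so it’s easy to apply to multiple clients; this is a sketch, assuming the value type is a string (REG_SZ), as it was in my case:

```
Windows Registry Editor Version 5.00

; Allow smartcard readers to be redirected into the View session
; (assumes AllowSmartcards is a REG_SZ string value)
[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB]
"AllowSmartcards"="true"
```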
VMware releases for iPad
First, the View client for iPad. This is a great app, and you can tell that VMware put a lot of time into making the user experience very nice. It’s easier to use than Wyse PocketCloud, in part because of the extra effort put into accommodating common Windows actions. I still use PocketCloud on the iPad because it’s a pretty good remote desktop client for non-View machines.
Yesterday, I downloaded and installed the vSphere client for iPad. It’s pretty cool. It requires that another virtual appliance, vCenter Mobile Access (vCMA), be deployed. The vCMA provides a web interface for vSphere, but that interface isn’t really polished enough for the iPad on its own. So the new application looks fantastic and consumes its information from the vCMA.
The application allows administrators to start/stop and view performance statistics on VMs and hosts. The performance graphs are works of art; they’re stunning and easy to read.
The application is missing a few key features that I suspect will be coming soon. Namely, there’s no visibility into clusters. I’d like to see comparative performance host-to-host within a cluster, and I’d like to be able to edit cluster settings. Their objective was to provide the 80% most-frequently-used functions, so I suppose editing VM, host, and cluster settings doesn’t fall into that 80%. Overall, I’m thrilled to have the application and expect it to become even more essential over time.
Completed EMC Technology Architect Exams!
I’m excited to have passed the last exam for the EMC Technology Architect certification. There is a tremendous amount of material to study (~1600 pages). EMC has so many product lines that even with that much material, it still feels like you’re only scratching the surface. There are many different replication technologies to accommodate any scenario, so the challenge is knowing which is appropriate for which situation.
This certification is a requirement for VCE partners and is only available to EMC partners.
Experience in upgrading UCS to 1.4(1j)
The UCS deployment in the Mobile VCE is different from many deployments because it does not employ many of the redundant and fault-tolerant options and doesn’t run a production workload. So, I have the flexibility to bring it down at almost any time for as long a duration as needed.
All this aside, it IS a complete Cisco UCS deployment with all the same behavior as if it were in production. This means, I can perform an upgrade or configuration change in this environment first and work through all the ramifications before performing the same action on a production environment.
There’s a lot of excitement around the web about the new features in this upgrade and I’ve been looking forward to installing it. For me, I’m excited about the lengthy list of fixes, the ability to integrate the management of the C-series server with the UCS Manager, FC port-channels and user-labels.
To start, I used the vSphere client to shut down the VMs I could and moved those I couldn’t to the C-series. Please note that I had not yet connected the C-series to the Fabric Interconnect for integration (that’s another post). Then I shut down the blades themselves.
For the actual upgrade, I simply followed the upgrade guide – there’s no reason to go through the details of that here.
However, the experience was not exactly stress-free.
Although it makes sense in hindsight, when the IO Module is rebooted, the Fabric Interconnect loses its connection to the chassis. It cycled through a sequence of heart-stopping error messages before finally rediscovering the chassis and servers and stabilizing. During this phase, it’s best not to look – the error messages led me to believe the IOM had become incompatible with the Fabric Interconnect. Like I said, after a few minutes the error messages all resolved and every component was successfully updated to 1.4(1j).
GUI changes after upgrade
Nodes on the Equipment tree for Rack-Mounts/FEX and Rack-Mounts/Servers
User labels (Yes!)
I’ll be connecting the C-series to the Fabric Interconnect soon and am looking forward to setting up the FC port channel.
Resolve Hardware Status Alert SEL_FULLNESS
I noticed an alert on two UCS B250 M2 hosts in the vSphere Client. The alert name was “Status of other host hardware objects”. This isn’t helpful. To get more information, you have to navigate to the Hardware Status tab of the host’s properties. There I saw more detail about the alert, which is cryptically named “System Board 0 SEL_FULLNESS”.
This points to the System Event Log for the UCS blade itself. Luckily, this is easily cleared by using the UCS Manager to navigate to the Management Logs tab of the server properties under Equipment.
Once there, you can back up and clear the SEL. Within a few minutes, the vSphere sensors will update and the alert will be gone.
UPDATE: Once UCSM has been updated to 1.4.1, the “Management Logs” tab is named “SEL Logs”
Venture has a unique tool in its arsenal. We are the only partner I know of that is able to bring a complete top-tier Virtual Computing Environment to a customer. The Mobile VCE is composed of four 10-RU rolling cabinets, which together contain the storage, compute, and network resources. This rolling data center allows us to demonstrate the technology in front of a customer, at their location – even on their network.
The options are nearly limitless. We can not only show a vSphere vMotion from host to host, but move a UCS service profile from one B-series blade to another. We’re able to step through a simple deployment of VMware View, consumed by thin clients, and demonstrate how VMware Data Recovery can work with the NAS features of the EMC Celerra NS-120.
To see how the technologies discussed here (and more) can be used by your company, please contact us.
NFS Datastore on EMC NS-120 for VMware Data Recovery
I have a handful of 7.2K RPM 1TB SATA drives in the NS-120 that I didn’t want to use as a datastore for multiple VMs, but which would make an ideal backup-to-disk location. Clearly, I could have connected the hosts to the storage over the existing FC connections, but really, this IS a Celerra, and I can expose volumes via iSCSI or NFS. So I created a pool and a LUN, then assigned the LUN (on the CLARiiON portion) to the Celerra.
Datastore on NFS
When you add an NFS export to the Celerra, you have to create a network interface on the Data Movers first; the NFS “server” listens on this interface. The documentation is a little misleading – I was trying to connect to the Control Station IP. Making matters worse, the VMkernel log states “The NFS server does not support MOUNT version 3 over TCP”. That’s a lot of help. Here’s a wild goose, go catch it.
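Once the Data Mover interface is in place, mounting the export from the ESX host is a one-liner from the service console; this is a sketch where the IP, export path, and datastore name are all hypothetical:

```
# Mount the Celerra export via the Data Mover interface IP
# (NOT the Control Station IP); all values below are examples
esxcfg-nas -a -o 192.168.10.50 -s /backup-export VDR-Backups

# List NFS datastores to confirm the mount
esxcfg-nas -l
```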
Hopefully, this will help someone save some time when connecting their vSphere host to an NFS export on a Celerra.
VMware Data Recovery
I deployed VDR using the OVF, which is nice: it prompts for where to put the VMDK and which network to use, then transfers the appliance.
Next, install the VDR client plugin and restart the vSphere Client. You’ll need to add a vDisk or two to the VDR appliance to use as backup locations. Don’t start the VMware Data Recovery app or the VDR appliance until after you’ve added the additional vDisks. (Not a big deal, but otherwise you’ll have to restart the appliance to get it to see the vDisks you added.)
The Getting Started Wizard is pretty self-explanatory, and it segues into the Backup Wizard. The wizard will require you to format and mount the backup locations you created in the Getting Started Wizard, but once that’s finished, you’ll be able to continue configuring your first backup job. Naturally, once I had it created and scheduled, I had to run the backup immediately. Like a kid, I ran to see the drive lights blinking on the SATA drives exposed only via NFS on the EMC. Yay, it worked!
Features coming soon
Our VCE already has a lot going on, but over the next few weeks, I plan to
- Get VMware View working with a thin client (Done!)
- Deploy Cisco UC to have a fully mobile, virtualized collaboration system
- Complete the integration of UCS, vSphere and Unisphere
- Deploy VMware Data Recovery using storage on NAS (Done!)
- Complete deployment of the Nexus 1000V (Done!)