Handy VMKB for SRM & VR 5.1

VMKB 1009562 has a lot of good information that I'm not going to repeat here, but it is a great resource for determining which network ports have to be open between which devices when using SRM and vSphere Replication.

Also, this diagram is surprisingly complicated… (reminds me of a dream-catcher)

SRM & VR ports


vCloud Director Database Issues

We had a situation recently where vCloud Director 1.5.1 (that’s one-dot-five-dot-one, not five-dot-one) failed to delete an external network, giving this error:

Could not execute JDBC batch update
- The DELETE statement conflicted with the REFERENCE constraint "allocated_ip_add_fk". The conflict occurred in database "vCloud", table "dbo.network_assigned_ip", column 'allocated_ip_add_id'.

This indicates that the record we're trying to delete cannot be removed because a record in the network_assigned_ip table still references it.

I had previously deleted all the related vApp networks and org networks and removed any stranded items.

To find the erroneous record, we first obtained the logical network id based on the network name:

Select * from [vCloud].[dbo].[logical_network] where name like '%BAD_NET%' -- note the network name has been set to "BAD_NET"

This returned one record; the first column holds the logical network ID, so we're going to use this value to identify any assigned IP addresses in that logical network via this query:

SELECT * from network_assigned_ip where logical_net_id = 0xLOGICALNETWORKID -- replace with the correct logical network id value from the previous query

Yes, I know how to do an INNER JOIN, but that’s not the point…

At this point, we had no leftover IP addresses for that network, so we deleted the errant record from the logical_network table.

Now, the external network no longer appears, but everything’s not rosy.

vCD will only let you bind one network to a given port group, and the database indicated that the port group I needed to bind to a new external network was still in use. In my case, this query was expected to return a record for the port group but did not:

Select * from [dbo].[ui_portgroups_avail_list_view]

This information is stored in two different tables: vlan_in_use and real_network. Searching the vlan_in_use table for my VLAN ID did not return any records, so there had to be a lingering item in real_network. I found it by querying the real_network table and deleted that record.
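Putting the checks together, the whole cleanup sequence looked roughly like this sketch. The table names come from the queries above, but BAD_NET and 0xLOGICALNETWORKID are placeholders to substitute, and the final DELETE is commented out on purpose; don't run any of this without VMware support on the line and a fresh database backup.

```sql
-- Sketch of the cleanup checks from this post (vCloud Director 1.5.1 database).
USE [vCloud];

-- 1. Find the logical network id by name
SELECT * FROM [dbo].[logical_network] WHERE name LIKE '%BAD_NET%';

-- 2. Confirm no IP addresses are still assigned to that network (expect 0 rows)
SELECT * FROM [dbo].[network_assigned_ip] WHERE logical_net_id = 0xLOGICALNETWORKID;

-- 3. Look for leftovers that keep the port group "in use"
SELECT * FROM [dbo].[vlan_in_use];     -- search the results for your VLAN ID
SELECT * FROM [dbo].[real_network];    -- the lingering record was in here

-- 4. Only after the checks above come back clean:
-- DELETE FROM [dbo].[logical_network] WHERE id = 0xLOGICALNETWORKID;  -- the 'id' column name is an assumption
```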

Now ui_portgroups_avail_list_view includes a record for my port group, so I’m good-to-go.

Hopefully this information will be helpful to someone. I strongly suggest working with VMware support and having a case open and a very recent database backup before making any changes to the vCloud database.

How to move a VMware View desktop from one pool to another

First, I doubt this procedure is supported. Second, if you follow these steps, you do so at your own risk. It worked for me, but your mileage may vary.

Ok, so let’s say that you have a pool of full desktops – no linked clones – using dedicated assignment. It has become necessary to split the desktops in the pool into two smaller pools for some reason. We don’t want to actually change the desktop VM itself in any way, just move it to a new pool.

Prerequisites:

  • Source and Destination Pools – full desktops, dedicated assignment
  • The names (from View Administrator) of the desktops you want to move
  • Administrative rights on a View Connection Server

Steps:

  1. Logon to a View Connection Server as an administrator
  2. Follow the steps here to open the View ADAM database in ADSIEdit
  3. Get the GUID for the Desktop VM
    1. Right-click the View ADAM Database connection [localhost:389], and click New > Query.
    2. Under Root of Search, click Browse... and select the Servers organizational unit.
    3. Click OK.
    4. In the Query String field, paste this search string: (&(objectClass=pae-VM)(pae-displayname=VirtualMachineName)), where VirtualMachineName is the name of the virtual machine for which you are trying to locate the GUID. You may use * or ? as wildcards to match multiple desktops.
    5. Click OK to create the query.
    6. Click the query in the left pane. The virtual machines that match the search are displayed in the right pane.
    7. Record the GUID in cn=<GUID>.
  4. Expand View ADAM Database [localhost]
  5. Expand DC=vdi,dc=vmware,dc=int
  6. Click on OU=Server Groups.  Confirm that all your Desktop Pools appear on the right.
  7. Right-click the Server Group Object corresponding to your source desktop pool and choose Properties.
  8. In the Attribute Editor, scroll down to the “pae-memberDN” attribute.  Double-click to open it.
  9. This is the list of VMs currently in the pool.  Locate the GUID(s) matching the distinguished names and click "Remove" to remove them from the pool.  I strongly suggest copying each removed value to Notepad so you can paste it back accurately in the coming steps.
  10. When finished removing the appropriate objects from the pae-memberDN attribute, click OK twice to close the attribute and the Server Group object.
  11. Right-click the Server Group Object corresponding to your destination desktop pool and choose Properties.
  12. In the Attribute Editor, scroll down to the “pae-memberDN” attribute.  Double-click to open it.
  13. Paste the distinguished name value for the desktop GUID into the "Value to add" text box and click "Add".  Repeat for additional desktops.
  14. When finished adding records to the pae-memberDN attribute, click OK twice to close the attribute and the Server Group object.
  15. Close ADSIEdit and launch View Administrator to see the results.
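For reference, steps 7 through 14 amount to two attribute modifications, which could also be expressed as an LDIF fragment like the sketch below. SourcePool, DestinationPool, and the desktop DN are placeholders; the pae-memberDN values must match your ADAM entries exactly.

```ldif
# Sketch only -- substitute your own pool names and the desktop's real DN.
dn: cn=SourcePool,ou=Server Groups,dc=vdi,dc=vmware,dc=int
changetype: modify
delete: pae-memberDN
pae-memberDN: cn=<GUID>,ou=Servers,dc=vdi,dc=vmware,dc=int
-

dn: cn=DestinationPool,ou=Server Groups,dc=vdi,dc=vmware,dc=int
changetype: modify
add: pae-memberDN
pae-memberDN: cn=<GUID>,ou=Servers,dc=vdi,dc=vmware,dc=int
-
```

You could import something like this with ldifde -i -s localhost:389 -f move.ldif, though I only made the change interactively in ADSIEdit.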

Good Luck, please let me know if you try this.
References:
VMware KB 1008658
VMware KB 2012377

Something New!

After 15 great years at Venture Technologies, I'm moving on to boost my career. I've been given an opportunity to join EMC Consulting as an Advisory Solutions Architect. Everyone I've met there has been very friendly and knowledgeable. I'm excited to join a great team and make the most of this opportunity. I'll continue to tweet and blog about the things that interest me and may be of help to others.

One way to update a vSphere 5 host from CLI

Every time a new version comes out, I have to dig up this procedure, so I figured I’d post it for quicker reference next time.

If you have vSphere Update Manager in your environment, use it. If not, here is one method to update a host (using ESXi-5.0.0-20120302001-standard.zip in this case):

  1. Locate the Offline Bundle with the desired version, or create an image profile from it that includes any additional VIBs you need.
  2. Copy that zip file to a datastore that the host to be upgraded has access to.
  3. Use the vSphere CLI on a local machine or the vMA to run this command to determine the image profiles contained in the bundle:

    esxcli --server=<host IP> --username=root --password=<password> software sources profile list --depot="[DATASTORE]ESXi-5.0.0-20120302001-standard.zip"

    Returns "ESXi-5.0.0-20120302001-standard" and "ESXi-5.0.0-20120302001-no-tools"; we're going to use "standard" in this case.

  4. Put the host in maintenance mode

    vicfg-hostops.pl --server=<host IP> --username=root --password=<password> --operation enter

  5. Apply update to host using profile

    esxcli --server=<host IP> --username=root --password=<password> software profile update --depot="[DATASTORE]ESXi-5.0.0-20120302001-standard.zip" --profile="ESXi-5.0.0-20120302001-standard"

  6. Reboot the host

    vicfg-hostops.pl --server=<host IP> --username=root --password=<password> --operation reboot
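If you do this often, the steps above can be strung together in a small wrapper script. This is just a sketch: HOST, PASS, DEPOT, and PROFILE are placeholder values, and it defaults to a dry run that only prints each command so you can review it first (set DRYRUN=0 to actually execute against a host).

```shell
#!/bin/sh
# Sketch of steps 4-6 above. All values below are placeholders -- substitute
# your own host IP, root password, offline bundle path, and image profile.
: "${DRYRUN:=1}"   # default: print commands only; DRYRUN=0 to run them
HOST="192.168.1.10"
PASS="secret"
DEPOT="[DATASTORE]ESXi-5.0.0-20120302001-standard.zip"
PROFILE="ESXi-5.0.0-20120302001-standard"

# run either echoes the command (dry run) or executes it
run() {
  if [ "$DRYRUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run vicfg-hostops.pl --server="$HOST" --username=root --password="$PASS" --operation enter
run esxcli --server="$HOST" --username=root --password="$PASS" software profile update --depot="$DEPOT" --profile="$PROFILE"
run vicfg-hostops.pl --server="$HOST" --username=root --password="$PASS" --operation reboot
```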

Configure Wyse Windows Embedded Standard thin client to load VMware View Client automatically

Objectives:

  • Faster time from power on to View logon prompt
  • User cannot access any other applications on the Windows Embedded O/S
  • Exiting the View Client automatically relaunches it
  • Administrator account is not affected

Procedure:

  1. Disable PXE boot
    1. As the thin client is booting, hit <delete> to enter the BIOS
    2. At the BIOS password prompt, enter “Fireport” (unless the BIOS password has been changed)
    3. Update the device boot order so Hard Drive is first
  2. Get “User” SID
    1. Reboot thin client, load Windows as default “user”
    2. Click “Start|Shutdown|Log off” while holding <shift>, continue holding <shift>
    3. At the logon prompt, logon as administrator using the password “Wyse#123”
    4. Launch Regedit, navigate to HKEY_USERS
    5. Examine the USERNAME value under each "HKEY_USERS\<SID>\Volatile Environment" key to find which SID belongs to the default "user".  A SID begins with "S-1-5-".
    6. Double-click “Disable FBWF”, wait for system to reboot
  3. Create scripts
    1. Click “Start|Shutdown|Log off” while holding <shift>, continue holding <shift>
    2. At the logon prompt, logon as administrator using the password "Wyse#123"
    3. Launch Windows Explorer
    4. Create a folder named "bat" in the root of C:\
    5. Launch Notepad; paste in the following:

      @echo off
      :View
      "C:\Program Files\VMware\VMware View\Client\bin\wswc.exe"
      goto View

    6. Save the file as C:\bat\View.cmd
    7. Launch Notepad; paste in the following:

      Set WshShell = CreateObject("WScript.Shell")
      WshShell.Run Chr(34) & "C:\bat\view.cmd" & Chr(34), 0
      Set WshShell = Nothing

    8. Save the file as C:\bat\View.vbs
  4. Update Shell for “User”
    1. Launch Regedit
    2. Navigate to "HKEY_USERS\<SID>\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WinLogon"
    3. If the "Shell" value does not exist, create it as a new String Value
    4. Update the "Shell" value to "wscript c:\bat\view.vbs"
    5. Close Regedit
    6. Double-click “Enable FBWF”
    7. Reboot thin client
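For what it's worth, step 4 boils down to a single registry value; here it is as a .reg fragment you could import instead of editing by hand. The <SID> placeholder must be replaced with the SID you found in step 2, and the user's hive has to be loaded for the HKEY_USERS path to exist.

```reg
Windows Registry Editor Version 5.00

[HKEY_USERS\<SID>\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WinLogon]
"Shell"="wscript c:\\bat\\view.vbs"
```

Remember that FBWF still needs to be disabled when you import this, or the change won't survive a reboot.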

Credits to Sparko Design, Free Wyse Monkeys and MidWest Wyse Guys.

Best features in vSphere 5

Of the myriad new features announced as part of vSphere 5 yesterday, I've narrowed it down to a few of my favorites; these are the game-changers.  I really hope they live up to the expectations!

In no particular order…

Swap to Local SSD

This is a neat feature to take advantage of those fast SSD drives in your hosts that you’re not really using much.  (The UCS B230 comes to mind).  With this feature, vSphere can move the swapfile to the host’s local SSDs for fast response time.  Of course, if you’re using the swap file, you probably want to add RAM.

VMFS 5 – 64TB Volumes

This is a much-needed feature.  The 2TB limit on VMFS extents has been making for environments that are more complicated than necessary.  Now users can have fewer, larger data stores.  You’ll want to make sure your array supports the Atomic Test & Set (ATS) function in VAAI before you put a lot of VMs on a single LUN.

Monster VMs

vSphere 5 now provides support for VMs with 32 vCPU and 1TB of vRAM.  (no wonder considering the new pricing structure… grumble grumble)

vStorage DRS

This is a cool feature I’m looking forward to working with.  You’ll create a new object called a “Datastore Cluster” and assign several Datastores to it.  I’m pretty sure you’re going to want the datastores that are clustered to have similar performance characteristics.  vStorage DRS will measure the storage I/O per VM and per datastore and make suggestions or actually move the VMDKs to another datastore in the cluster if it will help balance the storage I/O load.  This is really neat technology!  Imagine that you’re NOT sitting on top of your performance statistics and don’t notice that the LUNs housing your Exchange mailboxes are getting slammed, affecting performance of other VMDKs on those LUNs.  Much like DRS lets you set-it-and-forget-it when it comes to balancing the memory and CPU load, vStorage DRS will do the same for storage I/O.  In addition, it allows you to set affinity and anti-affinity rules all the way down to the individual VMDKs.

Multi-NIC enablement for vMotion

Honestly, this is one of those features I thought vSphere already had.  When you assign multiple uplinks to your vMotion vmkernel port and make them both active, vSphere will now actually take advantage of them, increasing the bandwidth and speed of vMotion operations.

VAAI for NFS, full file clone, thick disks on NFS

Before now, when you created a VMDK on an NFS datastore, you got a thin-provisioned disk.  Now you can explicitly create thick disks and let the array (assuming it's compatible) handle the work of zeroing them out.  In addition, VAAI for NAS will offload the work of cloning a VM to the array and off of the host, greatly reducing the work and network traffic involved.

HA rewrite

Hallelujah! I'm thrilled to see this!  HA is no longer dependent on DNS.  It is supposed to be faster to enable, more descriptive when it fails, and have less funky networking requirements.  I'm eager to work with this and see if it's all true.

Live Storage vMotion

Until now, when you needed to move a running VM from shared storage to storage that only one of the hosts could see, it was a two-step process: first vMotion the VM to the host that can see the new storage, then Storage vMotion it to the right datastore.  Now you're able to do that move in a single step.  Not sure how often this will get used, but it's nice.

vCenter Appliance

I’m happy to see a VMware appliance for vCenter Server.  This will eliminate a Windows license for non-Microsoft shops and give the service a smaller footprint with less resource consumption.  The downside I see is that not all plug-ins will work with the Linux-based appliance and it cannot currently be used with VMware View Composer.  I expect this to mature rapidly and become a more robust solution.

vSphere Storage Appliance

This is another great feature that will make the resiliency of vSphere accessible to smaller shops that don't have the budget for an enterprise-class storage array.  It works by maintaining a primary and a replica of each datastore across up to three hosts.  Each host in the VSA cluster runs an NFS server and exports the primary datastores residing there.  This way, should any one host in the VSA cluster fail, the datastores will remain online via the running replica.  This should not be seen as an alternative to enterprise-class storage, but as a good entry point for low-cost deployments needing some resiliency.

Not all smartcard readers are created equal

Recently, I was working with a customer site where they used smartcards to authenticate to applications. In this case, since the reader was not part of the VMware View session authentication, the USB reader itself had to be passed through via redirection.

I tried setting the "AllowSmartcards" value to true:

HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB
AllowSmartcards = true
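For reference, here's that setting as a .reg fragment you could import. I'm assuming the value is stored as a string ("true"); check the existing value's type on your agent before importing.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB]
"AllowSmartcards"="true"
```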

But the reader still wasn't redirected to the View session. In this case, I ended up having to follow this KB article to get the device recognized as one that can be redirected. This particular reader is identified as a "USB Keyboard", which is typically not redirected. Obviously, you'll have to make sure that you don't redirect your actual keyboard.

Resolve Hardware Status Alert SEL_FULLNESS

I noticed an alert on two UCS B250M2 hosts in the vSphere Client.  The alert name was "Status of other host hardware objects", which isn't helpful.  To get more information, you have to navigate to the Hardware Status tab of the host properties.  There I saw more detail about the alert, cryptically named "System Board 0 SEL_FULLNESS".

SEL_FULLNESS alert in vSphere Client

This points to the System Event Log (SEL) for the UCS blade itself.  Luckily, this is easily cleared by using UCS Manager to navigate to the Management Logs tab of the server properties under Equipment.

Clear management Log for UCS Blade

Once there, you can back up and clear the SEL.  Within a few minutes, the vSphere sensors will update and the alert will be gone.

UPDATE:  Once UCSM has been updated to 1.4.1, the “Management Logs” tab is named “SEL Logs”

NFS Datastore on EMC NS-120 for VMware Data Recovery

Background

I have a handful of 7.2K RPM 1TB SATA drives in the NS-120 that I didn't want to use as a datastore for multiple VMs, but that would make an ideal backup-to-disk location.  Sure, I could have connected the hosts to this storage over the existing FC connections, but really, this IS a Celerra and I can expose volumes via iSCSI or NFS.  So I created a pool and a LUN, then assigned the LUN (on the CLARiiON side) to the Celerra.

Datastore on NFS

When you add an NFS export to the Celerra, you have to create a network interface on the Data Movers first; the NFS "server" listens on this interface.  The documentation is a little misleading: I was trying to connect to the Control Station IP.  Making matters worse, the VMkernel log states "The NFS server does not support MOUNT version 3 over TCP".  That's a lot of help.  Here's a wild goose, go catch it.
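To illustrate, the mount from the ESXi side has to point at the Data Mover interface, not the Control Station. A sketch only; the IPs, export path, and datastore name below are made up:

```sh
# Wrong: the Control Station IP -- this is what produced the MOUNT error above
# esxcfg-nas -a -o 10.0.0.10 -s /backup_fs Backup-NFS

# Right: the network interface you created on the Data Mover for the export
esxcfg-nas -a -o 10.0.0.50 -s /backup_fs Backup-NFS
esxcfg-nas -l   # list NFS datastores to confirm the mount
```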

Hopefully, this will help someone save some time when connecting their vSphere host to an NFS export on a Celerra.

VMware Data Recovery

I deployed VDR using the OVF, which is nice: it prompts for where to put the VMDK and which network to use, then transfers the appliance.

Next, install the VDR client plug-in and restart the vSphere Client.  You'll need to add a vDisk or two to the VDR appliance to use as backup locations.  Don't start the VMware Data Recovery app or the VDR appliance until after you've added the additional vDisks.  (Not a big deal, but you'll have to restart the appliance to get it to see the vDisks you added.)

The Getting Started Wizard is pretty self-explanatory and segues into the Backup Wizard.  The wizard will require you to format and mount the backup locations you created in the Getting Started Wizard, but once that is finished, you'll be able to continue your first backup job configuration.  Naturally, once I had it created and scheduled, I had to make it back up immediately.  Like a kid, I ran to see the drive lights blinking on the SATA drives exposed only via NFS on the EMC.  Yay!  It worked!