Archive for the ‘Nexus 1000V’ Category

Use Cisco Nexus 1000V for virtual hosts in nested ESXi

11/14/2013

The native VMware vSwitch and Distributed vSwitch do not perform MAC learning. It was omitted because the vSwitch already knows which VMs are attached to it and which MAC addresses they use. As a result, if you nest ESXi under a standard vSwitch and power on VMs inside the nested instance, those VMs will be unable to communicate: their MACs are hidden behind the virtual host’s vNIC, and the vSwitch never learns them.

Workaround options:

  1. Enable Promiscuous mode on the vSwitch (see the PowerCLI sketch after this list).  This works, but should never be used in production: it adds a lot of unnecessary traffic and work to the physical NICs, makes troubleshooting difficult and is a security risk.
  2. Attach your virtual hosts to a Cisco Nexus 1000V.  The 1000V retains MAC learning, so VMs on nested ESXi hosts can communicate successfully because the switch learns the nested MAC addresses.
  3. If your physical servers support virtual interfaces, you can create additional “physical” interfaces and pass them through to the virtual instances.  This allows you to place the virtual hosts on the same switch as the physical hosts if you choose.  There is obviously a finite number of virtual interfaces you can create in the service profile, but I think this is a clean, low-overhead solution for environments using Cisco UCS, HP C7000 or similar.
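
For lab use, enabling promiscuous mode from PowerCLI looks roughly like the sketch below; the host and vSwitch names are hypothetical placeholders, so substitute your own.

# Connect to vCenter (replace x.x.x.x with your vCenter server's name or IP)
Connect-VIServer -Server x.x.x.x

# Allow promiscuous mode on the standard vSwitch that carries the nested ESXi hosts
# ("esx01.mydomain.com" and "vSwitch1" are placeholder names)
Get-VirtualSwitch -VMHost "esx01.mydomain.com" -Name "vSwitch1" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true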

Conclusion

The Nexus 1000V brings back important functionality for nested ESXi environments, especially those environments that do not have access to features like virtual interfaces and service profiles.

Helpful links:

Standing Up The Cisco Nexus 1000v In Less Than 10 Minutes by Kendrick Coleman

VMware vSphere 5 AutoDeploy on Cisco UCS – Part 2: Image Profiles

After completing Part 1, we have DHCP configured to assign a reserved IP address to the Cisco B200 M2 blades when they boot from the vNIC. Now the goal is to create the image that the Auto Deploy hosts will use.

The image-building procedure sounds complicated, but once you break it down, it’s not too bad. First, we need to inventory the components (VIBs) that the hosts will need beyond the base install. In our case, that meant the HA agent, the Cisco Nexus 1000V VEM and the EMC NAS plugin for VAAI. The HA agent is downloaded from the vCenter Server, but you’ll have to download the licensed ZIP files from Cisco and EMC for the other two.

In addition to those enhancements, we’ll need the VMware ESXi 5.0 offline bundle, “VMware-ESXi-5.0.0-469512-depot.zip”, from the licensed product downloads area of VMware.com. This is essentially a “starter kit” for Image Builder; it contains the default packages for ESXi 5.0.

Preparation:

  1. Copy these files into C:\depot
    • VMware-ESXi-5.0.0-469512-depot.zip
    • VEM500-201108271.zip
    • EMCNasPlugin-1.0-10.zip
  2. Launch PowerCLI

On to the PowerCLI code:

Register the offline bundle as a Software Depot (aka source)

Add-EsxSoftwareDepot "C:\depot\VMware-ESXi-5.0.0-469512-depot.zip"

Connect PowerCLI to your vCenter server (replace x.x.x.x with your vCenter server’s name or IP)

Connect-VIServer -Server x.x.x.x

List the image profiles contained in the offline bundle, ESXi-5.0.0-469512-no-tools and ESXi-5.0.0-469512-standard. We’re going to work with “standard”.

Get-EsxImageProfile

Register vCenter Server depot for HA agent

Add-EsxSoftwareDepot -DepotUrl http://X.X.X.X:80/vSphere-HA-depot

Register depot for updates to ESXi

Add-EsxSoftwareDepot -DepotUrl https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Register depots for the Nexus 1000V VEM and the VAAI plugin for VNX NAS

Add-EsxSoftwareDepot C:\depot\VEM500-201108271.zip
Add-EsxSoftwareDepot C:\depot\EMCNasPlugin-1.0-10.zip

List the image profiles again; with the additional depots registered, it will now show several more versions. For each, there is a “no-tools” and a “standard” profile. Make a note of the newest “standard” image (or the one you want to use); there is an optional filter sketch after this command.

Get-EsxImageProfile
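
If the list gets long, this optional one-liner narrows it down to the “standard” profiles; it is just a convenience sketch and assumes you want the alphabetically newest name (which works here because the profile names embed the build number).

Get-EsxImageProfile | Where-Object { $_.Name -like "*-standard" } | Sort-Object -Property Name -Descending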

Clone the standard “ESXi-5.0.0-20111204001-standard” image profile to a new image profile named “ESXi-HA-VEM-VAAI-20111204001”

New-EsxImageProfile -CloneProfile ESXi-5.0.0-20111204001-standard -Name "ESXi-HA-VEM-VAAI-20111204001"

Add the HA agent (vmware-fdm) to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage vmware-fdm

Check for the VEM package “cisco-vem-v131-esx”

Get-EsxSoftwarePackage -Name cisco*

Add the Nexus 1000V VEM to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage cisco-vem-v131-esx

Check for EMC VAAI Plugin for NAS “EMCNasPlugin”

Get-EsxSoftwarePackage -Name emc*

Add the EMC VAAI plugin for NAS to our custom image profile

Add-EsxSoftwarePackage -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -SoftwarePackage EMCNasPlugin

Export our custom image to a big zip file – we’ll use this to apply future updates

Export-EsxImageProfile -ImageProfile "ESXi-HA-VEM-VAAI-20111204001" -FilePath "C:\depot\ESXi-HA-VEM-VAAI-20111204001.zip" -ExportToBundle

Deploy Rules
OK, now that we have a nice image profile, let’s assign it to a deployment rule. To get Auto Deploy fully working, we’ll need a good host profile and details from a reference host. So we’ll apply our initial image profile to the reference host, then use that host to create a host profile and repair the deploy rule set compliance.

Create a new temporary rule with our image profile and an IP pattern (a single address in this case); then add it to the active rule set.

New-DeployRule -Name "TempRule" -Item "ESXi-HA-VEM-VAAI-20111204001" -Pattern "ipv4=10.10.0.23"
Add-DeployRule -DeployRule “TempRule”

At this point, we booted up the blade that would become the reference host. I knew that DHCP would give it the IP we identified in the temporary deployment rule. By the way, Auto Deploy is not especially fast; it takes 10 minutes or so from power-on until the host is visible in vCenter.

Repair Ruleset
You may have noticed a warning about a component that is not Auto Deploy ready; we have to fix that.

In the following code, “referencehost.mydomain.com” is the FQDN of my reference host. This procedure will modify the ruleset to ignore the warning on the affected VIB.

Test-DeployRuleSetCompliance referencehost.mydomain.com
$tr = Test-DeployRuleSetCompliance referencehost.mydomain.com
Repair-DeployRuleSetCompliance $tr
Test-DeployRuleSetCompliance referencehost.mydomain.com

After this completes, reboot the reference host and add it to your Nexus 1000V DVS.
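
That last step can be done from the vSphere Client, but for completeness here is a rough PowerCLI sketch. The host name, DVS name and uplink (“esx-ref.mydomain.com”, “n1kv-dvs”, vmnic1) are hypothetical, and it assumes a PowerCLI build that includes the distributed switch cmdlets.

# Reboot the reference host so it comes up with the repaired rule set
# (all names below are placeholders - adjust for your environment)
$vmhost = Get-VMHost -Name "esx-ref.mydomain.com"
Restart-VMHost -VMHost $vmhost -Force -Confirm:$false

# After the host has finished rebooting and reconnected, join it to the
# Nexus 1000V DVS and migrate a physical uplink onto it
$dvs = Get-VDSwitch -Name "n1kv-dvs"
Add-VDSwitchVMHost -VDSwitch $dvs -VMHost $vmhost
$nic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $dvs -VMHostPhysicalNic $nic -Confirm:$false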

Part 3 (coming soon!) will cover the host profile and final updates to the deployment rules.

References:
https://communities.cisco.com/docs/DOC-26572

VMware vSphere 5 AutoDeploy on Cisco UCS – Part 1: DHCP

First, many thanks to Gabe and Duncan for their great Auto Deploy guides, found here and here, which got me started.  Their information answered a lot of questions, but left me with even more questions about how to implement it in my environment.

My goal is to demonstrate how to implement and configure vSphere Auto-deploy in a near-production environment that uses vSphere 5, Cisco UCS, EMC storage, Nexus 1000V and vShield Edge.

The first hurdle I ran into was trying to make DHCP cooperate.  I’m using vShield Edge for DHCP in some of the protected networks, but a Cisco 2900-series router is doing DHCP for the network where the vSphere management addresses live.  In IOS DHCP, you can assign a manual address in a pool via either the “hardware-address” or the “client-identifier” parameter.  It looks like “client-identifier” is used for DHCP requests, whereas “hardware-address” is used for BOOTP requests.  When booting, the blade first draws its information via BOOTP, but after acquiring the boot details from TFTP it changes its personality and sends another DHCP DISCOVER request.
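
One detail worth spelling out: both parameters encode the same vNIC MAC. The client-identifier is simply the hardware type 01 (Ethernet) prepended to the MAC and regrouped into dotted four-digit chunks, as you can see by comparing the two pools below. If you have many blades to reserve, a small PowerShell helper like this (just a convenience sketch, not part of the original procedure) generates the string:

# Convert a UCS vNIC MAC (e.g. from the service profile) into the IOS client-identifier format
function ConvertTo-IosClientId {
    param([string]$Mac)                                 # e.g. "00:25:B5:00:00:2D"
    $hex = '01' + ($Mac -replace '[:.\-]', '')          # strip separators, prepend hardware type 01
    ($hex.ToLower() -split '(.{4})' | Where-Object { $_ }) -join '.'
}

ConvertTo-IosClientId '00:25:B5:00:00:2D'               # returns 0100.25b5.0000.2d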

Here’s how we got this working in our environment:

  • Identify permanent addresses for your hosts  (10.10.0.23 in this case)
  • Identify a temporary address for each host (10.10.0.123 in this case)
  • Make sure those addresses are not excluded

    ip dhcp excluded-address 10.10.0.0 10.10.0.20
    ip dhcp excluded-address 10.10.0.25 10.10.0.120
    ip dhcp excluded-address 10.10.0.125 10.10.0.210
    ip dhcp excluded-address 10.10.0.251 10.10.0.255

  • Create your “main” pool if it doesn’t already exist

    ip dhcp pool mgmt
    network 10.10.0.0 255.255.255.0
    default-router 10.10.0.253
    dns-server 10.10.0.61 10.10.0.62
    lease 0 8
    update arp

  • Create a pool for the permanent host address; make sure to use the “client-identifier” parameter

    ip dhcp pool AutoDeploy23
    host 10.10.0.23 255.255.255.0
    client-identifier 0100.25b5.0000.2d
    bootfile undionly.kpxe.vmw-hardwired
    next-server 10.10.0.50
    client-name AutoDeploy23
    dns-server 10.10.0.61 10.10.0.62
    option 66 ip 10.10.0.50
    option 67 ascii undionly.kpxe.vmw-hardwired
    default-router 10.10.0.253
    lease 0 8
    update arp

  • Create a pool for the temporary host address; it is assigned first via BOOTP and dropped after the PXE boot

    ip dhcp pool AutoDeploy123
    host 10.10.0.123 255.255.255.0
    hardware-address 0025.b500.002d
    bootfile undionly.kpxe.vmw-hardwired
    next-server 10.10.0.50
    client-name AutoDeploy23
    dns-server 10.10.0.61 10.10.0.62
    option 66 ip 10.10.0.50
    option 67 ascii undionly.kpxe.vmw-hardwired
    default-router 10.10.0.253
    lease 0 8

Continue on to Part 2, which covers the creation and assignment of the image profile.

Cisco VN-Link is awesome

01/18/2011

First, many thanks to Jeremy Waldrop and his walkthrough video.  It provided me with a lot of help and answered the questions I had.

I’m so impressed with VN-Link that I’m kicking myself for not deploying it sooner.  In my view, it is easily a better choice than the Nexus 1000V.  Sure, it essentially uses the Nexus 1000V’s Virtual Ethernet Module, but since it doesn’t require the Virtual Supervisor Module (VSM) to run as a VM, you can use those processor cycles for other VMs.

In a VN-Link DVS, the relationship between vSphere and UCSM is much more apparent.  Because the switch “brains” are in the Fabric Interconnect and each VM is assigned a dynamic vNIC, UCSM is aware of which VMs reside on which host and consume which vNIC.

I especially like that I can add port groups to the VN-Link DVS without using the CLI.  All of the virtual network configuration is performed via UCSM.  This makes for quick and easy additions of VLANs, port profiles and port groups.

This Cisco white paper advocates hypervisor bypass (which, by the way, breaks vMotion, FT and snapshots), but it also describes a 9 percent performance improvement from using VN-Link over a hypervisor-based switch.  A 9 percent improvement that doesn’t break things is a big deal, if you ask me.

There are cases where the VN-Link just won’t do:

  • There is no Fabric Interconnect
  • You must use Access Control Lists between VLANs
  • You must have SNMP monitoring of the VSM.

Beyond these cases, if you have the requisite components (Fabric Interconnects, M81KR VICs, vSphere Enterprise Plus), I’d suggest taking a strong look at VN-Link.

Functional Diagram

12/01/2010

This diagram depicts the relationship of the components involved in the running and support of the Virtual Computing Environment.

Mobile VCE - Functional Diagram

Here, you can see the role of the hypervisor on each B-series blade and on the C-series server, as well as the management and connectivity relationships with the UCS and EMC components.


Cisco Nexus 1000V deployment

11/24/2010

I’m having lots of trouble getting the VSM and VEM to see each other.  The Cisco troubleshooting guide seems like it’ll be a lot of help, but I’ve reconfigured the VSM about 50 times already.  It has been quite the learning experience.  I suspect that the control VLAN is not present on both the C-series server and the B200 I’m trying to use.

So, I’m setting up a new VLAN in the N5K, in the 6120 and in the standard vSwitch on both hosts.  I’ve assigned that new VLAN as both the control and packet VLAN for the VSM and have my fingers crossed.

Will update as things progress…

OK, found my problem.  The vNICs in the Service Profile assigned to the host running the VEM did not include the Control/Packet VLAN.  Once I added it, the host moved to the VDS without error.