Archive

Posts Tagged ‘Distributed Switch’

Use Cisco Nexus 1000V for virtual hosts in nested ESXi

11/14/2013 Comments off

The native VMware vSwitch and Distributed vSwitch do not perform MAC learning. The feature was omitted because a vSwitch already knows which VMs are attached to it and which MAC addresses they use. As a result, if you nest ESXi under a standard vSwitch and power on VMs inside the nested instance, those VMs will be unable to communicate: their MACs are masked behind the virtual host, and the vSwitch never learns them.

Workaround options:

  1. Enable Promiscuous mode on the vSwitch.  This works, but should never be used in production: it adds a lot of unnecessary traffic and work for the physical NICs, makes troubleshooting difficult, and is a security risk.
  2. Attach your virtual hosts to a Cisco Nexus 1000V.  The 1000V retains MAC learning, so VMs on nested ESXi hosts can successfully communicate because the switch learns the nested MAC addresses.
  3. If your physical servers support virtual interfaces, create additional “physical” interfaces and pass them through to the virtual instances.  This allows you to place the virtual hosts on the same switch as the physical hosts if you choose.  There is obviously a finite number of virtual interfaces you can create in the service profile, but I think this is a clean, low-overhead solution for environments using Cisco UCS, HP C7000, or similar.
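For lab use, the promiscuous-mode workaround in option 1 can be applied from the ESXi shell. This is only a sketch: the vSwitch name is a placeholder, and nested ESXi typically also needs Forged Transmits and MAC Address Changes allowed on the same switch.

```shell
# Relax the security policy on the standard vSwitch that hosts the
# nested ESXi VMs (vSwitch1 is an assumed name -- substitute your own).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch1 \
    --allow-promiscuous=true \
    --allow-forged-transmits=true \
    --allow-mac-change=true
```

Again, fine for a lab, but for the reasons above I wouldn’t leave this enabled in production.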

Conclusion

The Nexus 1000V brings back important functionality for nested ESXi environments, especially those environments that do not have access to features like virtual interfaces and service profiles.

Helpful links:

Standing Up The Cisco Nexus 1000v In Less Than 10 Minutes by Kendrick Coleman

Cisco VN-Link is awesome

01/18/2011 Comments off

First, many thanks to Jeremy Waldrop and his walkthrough video.  This provided me with a lot of help and answers to the questions I had.

I’m so impressed with VN-Link that I’m kicking myself for not deploying it sooner.  In my view, it is easily a better choice than the Nexus 1000V.  Sure, it essentially uses the Nexus 1000V’s Virtual Ethernet Module (VEM), but since it doesn’t require the Virtual Supervisor Module (VSM) to run as a VM, you can use those processor cycles for other VMs.

In a VN-Link DVS, the relationship between vSphere and UCSM is much more apparent.  Because the switch “brains” live in the Fabric Interconnect and each VM is assigned a dynamic vNIC, UCSM knows which VMs reside on which host and which vNIC each consumes.

I especially like that I can add port groups to the VN-Link DVS without using the CLI.  All of the virtual network configuration is performed via UCSM.  This makes for quick and easy additions of VLANs, port profiles and port groups.

This Cisco White Paper advocates Hypervisor Bypass (which, by the way, breaks vMotion, FT, and snapshots), but it also describes a 9 percent performance improvement from using VN-Link over a hypervisor-based switch.  A 9 percent improvement that doesn’t break things is a big deal, if you ask me.

There are cases where the VN-Link just won’t do:

  • There is no Fabric Interconnect.
  • You must use Access Control Lists between VLANs.
  • You must have SNMP monitoring of the VSM.

Beyond these cases, if you have the requisite components (Fabric Interconnects, M81KR VICs, vSphere Enterprise Plus), I’d suggest taking a strong look at VN-Link.


Functional Diagram

12/01/2010 Comments off

This diagram depicts the relationship of the components involved in the running and support of the Virtual Computing Environment.

 

Mobile VCE - Functional Diagram

Here, you can see the role of the hypervisor on each B-series blade and on the C-series, in addition to the management and connectivity with the UCS and EMC components.

 

Cisco Nexus 1000V deployment

11/24/2010 Comments off

I’m having lots of trouble getting the VSM and VEM to see each other.  The Cisco troubleshooting guide seems like it’ll be a lot of help, but I’ve already reconfigured the VSM about 50 times.  It has been quite the learning experience.  I suspect that the control VLAN is not present on both the C-series server and the B200 I’m trying to use.

So, I’m setting up a new VLAN on the N5K, on the 6120, and on the standard vSwitch on both hosts.  I’ve assigned that new VLAN as both the control and packet VLAN for the VSM, and I have my fingers crossed.
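For the upstream switches, the VLAN just needs to exist and be allowed on the trunks carrying the host uplinks. A sketch of the NX-OS side (VLAN 260, the VLAN name, and the interface are all placeholders for illustration):

```
! On the N5K / 6120 -- create the control/packet VLAN and trunk it
vlan 260
  name n1kv-ctrl-pkt
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan add 260
```

The same VLAN then has to be tagged on the standard vSwitch port groups carrying the VSM’s control and packet interfaces.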

Will update as things progress…

OK, found my problem.  I hadn’t set the VLANs on the vNICs of the Service Profile assigned to the host running the VEM to include the Control/Packet VLAN.  Once I did that, the host joined the VDS without error.
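For reference, the control and packet VLANs end up in the VSM’s svs-domain configuration, which is worth double-checking against what the upstream trunks actually carry. A sketch of the relevant Nexus 1000V config, with the domain ID and VLAN number as placeholders:

```
! On the VSM (Nexus 1000V CLI) -- domain id and VLAN are assumed values
svs-domain
  domain id 100
  control vlan 260
  packet vlan 260
  svs mode L2
```

If this VLAN isn’t present end to end (vNIC, vSwitch, Fabric Interconnect, upstream switch), the VSM and VEM will never see each other, which is exactly what bit me here.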