First, many thanks to Jeremy Waldrop and his walkthrough video. This provided me with a lot of help and answers to the questions I had.
I’m so impressed with VN-Link that I’m kicking myself for not deploying it sooner. In my view, it is easily a better choice than the Nexus 1000V. Sure, it essentially uses the Nexus 1000V’s Virtual Ethernet Module (VEM), but since it doesn’t require the Virtual Supervisor Module (VSM) to run as a VM, you can use those processor cycles for other VMs.
In a VN-Link DVS, the relationship between vSphere and UCSM is much more apparent. Because the switch “brains” are in the Fabric Interconnect and each VM gets assigned a dynamic vNIC, UCSM is aware of which VMs reside on which host and consume which vNIC.
I especially like that I can add port groups to the VN-Link DVS without using the CLI. All of the virtual network configuration is performed via UCSM. This makes for quick and easy additions of VLANs, port profiles and port groups.
This Cisco White Paper advocates Hypervisor Bypass (which, by the way, breaks vMotion, FT, and snapshots), but it also describes a 9 percent performance improvement from using VN-Link instead of a hypervisor-based switch. A 9 percent improvement that doesn’t break things is a big deal, if you ask me.
There are cases where VN-Link just won’t do:
- There is no Fabric Interconnect
- You must use Access Control Lists between VLANs
- You must have SNMP monitoring of the VSM
Beyond these cases, if you have the requisite components (Fabric Interconnects, M81KR VICs, vSphere Enterprise Plus), I’d suggest taking a strong look at VN-Link.