Cisco VN-Link is awesome

01/18/2011

First, many thanks to Jeremy Waldrop and his walkthrough video. It provided a lot of help and answered many of the questions I had.

I’m so impressed with VN-Link that I’m kicking myself for not deploying it sooner. In my view, it is easily a better choice than the Nexus 1000V. Sure, it essentially uses the Nexus 1000V’s Virtual Ethernet Module (VEM), but since it doesn’t require the Virtual Supervisor Module (VSM) to run as a VM, you can use those processor cycles for other VMs.

In a VN-Link DVS, the relationship between vSphere and UCSM is much more apparent. Because the switch “brains” are in the Fabric Interconnect and each VM is assigned a dynamic vNIC, UCSM knows which VMs reside on which host and which vNIC each one consumes.

I especially like that I can add port groups to the VN-Link DVS without using the CLI.  All of the virtual network configuration is performed via UCSM.  This makes for quick and easy additions of VLANs, port profiles and port groups.
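
For comparison’s sake, here’s a minimal pyVmomi sketch, assuming a reachable vCenter, that lists the port groups vCenter sees on the VN-Link DVS after UCSM pushes the port profiles. The vCenter address, credentials and DVS name are placeholders, not the actual names from this environment.

    # Sketch: list the port groups on a DVS from the vCenter side, to confirm
    # that port profiles defined in UCSM showed up as port groups. The vCenter
    # address, credentials and DVS name ("VN-Link-DVS") are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password")
    content = si.RetrieveContent()

    # Walk the inventory for distributed virtual switches
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        if dvs.name == "VN-Link-DVS":
            for pg in dvs.portgroup:
                print(dvs.name, "->", pg.name)
    view.Destroy()
    Disconnect(si)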

This Cisco White Paper advocates Hypervisor Bypass (which, by the way, breaks vMotion, FT and snapshots), but it also describes a 9 percent performance improvement from using VN-Link over a hypervisor-based switch. A 9 percent improvement that doesn’t break things is a big deal, if you ask me.

There are cases where VN-Link just won’t do:

  • There is no Fabric Interconnect
  • You must use Access Control Lists between VLANs
  • You must have SNMP monitoring of the VSM

Beyond these cases, if you have the requisite components (Fabric Interconnects, M81KR VICs, vSphere Enterprise Plus), I’d suggest taking a strong look at VN-Link.


Experience in upgrading UCS to 1.4(1j)

01/15/2011

The UCS deployment in the Mobile VCE is different from many deployments because it doesn’t employ many of the redundant and fault-tolerant options and doesn’t run a production workload. So I have the flexibility to bring it down at almost any time, for as long as needed.

All this aside, it IS a complete Cisco UCS deployment that behaves exactly as it would in production. This means I can perform an upgrade or configuration change in this environment first and work through all the ramifications before performing the same action on a production environment.

There’s a lot of excitement around the web about the new features in this upgrade, and I’ve been looking forward to installing it. For my part, I’m excited about the lengthy list of fixes, the ability to integrate management of the C-series servers with UCS Manager, FC port channels and user labels.

To start, I used the vSphere client to shut down the VMs I could and moved those I couldn’t to the C-series. Please note that I have not yet connected the C-series to the Fabric Interconnect for integration (that’s another post). Then I shut down the blades themselves.
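
In practice I just used the vSphere client for this, but here’s a minimal pyVmomi sketch of that pre-upgrade step, assuming a reachable vCenter. The vCenter address, credentials and blade host name are placeholders.

    # Sketch: gracefully shut down the powered-on VMs on one blade before the
    # upgrade. vCenter address, credentials and the host name are placeholders;
    # VMs without VMware Tools would need a hard power-off instead.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password")
    content = si.RetrieveContent()

    # Find the blade and shut down any VMs still running on it
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name != "ucs-blade-1.example.com":
            continue
        for vm in host.vm:
            if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                print("Shutting down", vm.name)
                vm.ShutdownGuest()  # graceful shutdown, requires VMware Tools
    view.Destroy()
    Disconnect(si)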

For the actual upgrade, I simply followed the upgrade guide – there’s no reason to go through the details of that here.

However, the experience was not exactly stress-free.

Although it makes sense that the Fabric Interconnect loses its connection to the chassis when the IO Module is rebooted, it cycled through a sequence of heart-stopping error messages before finally rediscovering the chassis and servers and stabilizing. During this phase it’s best not to look; the error messages led me to believe the IOM had become incompatible with the Fabric Interconnect. Like I said, after a few minutes the error messages all resolved and every component was successfully updated to 1.4(1j).

GUI changes after upgrade

  • Nodes on the Equipment tree for Rack-Mounts/FEX and Rack-Mounts/Servers
  • User labels (Yes!)

Summary

I’ll be connecting the C-series to the Fabric Interconnect soon and am looking forward to setting up the FC port channel.