With any virtual switch design, it is necessary to ensure network redundancy, typically accomplished by assigning multiple vmnics to a single virtual switch. To visualize how we might do this with Flex10, let's consider the following.
As you can see, I've assigned one FlexNIC from each Flex adapter to a virtual switch. I've also allocated bandwidth to the individual FlexNICs according to my needs. This is an extremely useful benefit of Flex10, allowing bandwidth to be separated and customized to match a specific environment's needs. It's important to keep in mind that the aggregate bandwidth of all FlexNICs on a single Flex adapter cannot exceed a total of 10Gb. Hopefully everything has made sense up to this point. Part of any virtual switch configuration is the creation of port groups, and this is where specific VLANs are specified to indicate which network segment is served by each port group.
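For reference, here's a minimal sketch of that from the ESX service console, assuming classic ESX with the esxcfg-* tools; the vmnic numbers, vSwitch name, port group name, and VLAN ID are just examples and will vary with how your FlexNICs enumerate:

```
# FlexNICs show up to ESX as ordinary vmnics; list them first
esxcfg-nics -l

# Create a vSwitch and uplink one FlexNIC from each Flex adapter
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1

# Add a port group and tag it for the network segment it serves
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -v 20 -p "VMotion" vSwitch1
```

In order for the specified VLANs to be assigned, though, we first need to look at how the Interconnect modules are configured from a networking perspective.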
Interconnect modules can be connected to upstream switches in a couple of different ways, and depending on the environment it is important to be aware of the limitations of each. The first way we can configure uplinks is what HP calls a "Shared Uplink". With shared uplinks you can take multiple ports and bond them together, which in a sense creates an EtherChannel. (Note: you cannot create an EtherChannel containing uplinks from each Interconnect module; an EtherChannel must consist of multiple ports from the same Interconnect module.)
Some limitations you’ll want to be aware of when configuring Shared Uplinks are:
- A total of 128 VLANs is currently supported by the Interconnect modules.
- If you set things up in an active/active manner as I have above, the max per Interconnect module is just 64 VLANs.
- A VLAN must be specified on the Interconnect module before it can be assigned to a FlexNIC.
- A maximum of 32 VLANs can be assigned to a single FlexNIC.
- You cannot assign the same VLAN number(s) to multiple FlexNICs residing on the same Flex adapter.
- A maximum MTU of 4092 can be set when enabling jumbo frames. (VMware best practices suggest an MTU of 9000, so pay close attention here; setting this incorrectly can cause storage performance problems. See the sketch after this list.)
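To illustrate the jumbo frame caveat, here is how you might cap the MTU at Flex10's 4092-byte limit rather than the usual 9000 (classic ESX commands; the vSwitch name, port group name, and IP details are placeholders):

```
# Set the vSwitch MTU to Flex10's maximum of 4092 (not 9000)
esxcfg-vswitch -m 4092 vSwitch2

# VMkernel NICs must be created with a matching MTU
esxcfg-vmknic -a -i 192.168.10.21 -n 255.255.255.0 -m 4092 "iSCSI"
```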
The second way we can configure uplinks is called a "Tunneled Uplink". With this configuration you are creating a trunk on the upstream switch and passing it all the way through to the FlexNIC. The only limitation I'm aware of with this type of configuration is that there is no way to assign that trunk to multiple FlexNICs residing on the same Flex adapter, since that would present the same VLANs to multiple FlexNICs, which is not supported in the Shared Uplink set either. I suppose you could have multiple Tunneled Uplinks assigned to different FlexNICs, but some sort of pruning would have to exist on the trunks to ensure the same VLANs do not impact one another.
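With a tunneled uplink, the whole trunk reaches the FlexNIC, so VLAN tagging falls to the port groups on the virtual switch. A minimal sketch (port group names and VLAN IDs are examples):

```
# Carve VM port groups out of the tunneled trunk on the VM-traffic vSwitch
esxcfg-vswitch -A "VM_VLAN100" vSwitch3
esxcfg-vswitch -v 100 -p "VM_VLAN100" vSwitch3
esxcfg-vswitch -A "VM_VLAN200" vSwitch3
esxcfg-vswitch -v 200 -p "VM_VLAN200" vSwitch3
```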
Given the limitations above, the best way I have found to effectively set up ESX virtual switches with Flex10 is to configure things as above and then use a set of four uplinks (two per Interconnect module). With this configuration, both a Shared Uplink set and a Tunneled Uplink can be specified. For the Shared Uplink set, a single uplink is assigned from each Interconnect module and the various VLANs are broken out on it. This Shared Uplink set would be dedicated to the Service Console, VMotion, NAS/iSCSI, and VM FT traffic. Because of this, only a minimal number of VLANs would need to be specified, making for very easy setup and configuration. That leaves the remaining two uplinks, which would be set up as a Tunneled Uplink. This set of uplinks would be dedicated to virtual machine traffic and would therefore be passed straight through to a FlexNIC, where you could assign the necessary port groups. To put a picture with all that, it would look like the following.
By configuring things in this manner, vSwitch0, 1, and 2 would belong to the Shared Uplink set (denoted by the red cables) and vSwitch3 would belong to the Tunneled Uplink (denoted by the green cables). Redundancy is provided by assigning a FlexNIC from each Flex adapter. Easy enough, right? Well, so I thought…
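From the ESX side, a rough sketch of that layout might look like this (the vmnic-to-FlexNIC mapping is an assumption; check esxcfg-nics -l for your own enumeration):

```
# Shared Uplink set vSwitches (Service Console, VMotion, NAS/iSCSI, FT)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1   # FlexNIC on Flex adapter 1
esxcfg-vswitch -L vmnic5 vSwitch1   # FlexNIC on Flex adapter 2

# Tunneled Uplink vSwitch (virtual machine traffic)
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -L vmnic7 vSwitch3
```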
The final piece of this puzzle, and where I've run into problems, is related to how vSphere detects a network failure. Because of the way blades work, they will always have a link to the Interconnect module. This means setting the failover detection on the virtual switch to "Link Status" won't work without some further configuration (below), since a network failure upstream won't be relayed down to the individual FlexNICs. The next logical answer would seem to be Beacon Probing, but in my testing this also did not work. So now what?
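You can see the problem from the host itself: pull an upstream uplink and the FlexNICs still report link up, because their "wire" terminates at the Interconnect module. Abbreviated, illustrative output (driver, speeds, and names will vary):

```
esxcfg-nics -l
# Name    PCI       Driver  Link  Speed     Duplex  Description
# vmnic0  07:00.00  bnx2x   Up    4000Mbps  Full    Broadcom NetXtreme II 57711E
# vmnic4  09:00.00  bnx2x   Up    4000Mbps  Full    Broadcom NetXtreme II 57711E
```

Note that the Speed column reflects the per-FlexNIC bandwidth allocation discussed earlier.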
Flex10 has two features called "Smartlink" and DCC (Dynamic Channel Control). Smartlink and DCC enable the Interconnect modules to relay the link status of the upstream switches to the individual FlexNICs; however, until very recently this was not supported by ESX. In order to take advantage of Smartlink and DCC, new drivers and firmware from VMware and HP must be applied. HP's firmware can be found here and is applied to the Flex adapter of each blade. Once applied, VMware's new driver found here will need to be installed. I've listed the steps below.
- Extract the contents of the ISO. We’re after the zip file in the offline-bundle directory.
- Copy the zip file to your ESX host.
- Connect to your ESX host and issue the following command to install the new driver: "esxupdate --bundle=<filename> update"
- Reboot and your new driver will be enabled!
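To sanity-check the install, you can query the installed bundles and confirm the FlexNICs picked up the new driver version (vmnic0 is an example name):

```
# List installed bundles/bulletins to confirm the update applied
esxupdate query

# Verify the driver and firmware version a FlexNIC is now using
ethtool -i vmnic0
```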
Once the new firmware and driver have been installed, full support of Smartlink and DCC will be enabled, meaning we can go back to a "Link Status" failover detection method within ESX!
So to bring it all together: I definitely see the benefits of Flex10, especially in how you can control bandwidth down to the individual FlexNICs. I also like the logically separated FlexNICs, as they allow us to be plenty creative in how we choose to configure our virtual switches. Like all new features/technology, being on the bleeding edge has its disadvantages, such as the driver and firmware issue noted above, but with a little work and knowledge the benefits of Flex10 can be realized. The only other item to consider is the requirement of individually assigning VLANs on the Interconnect modules when utilizing Shared Uplinks. It's definitely something to keep in mind should you run into situations where lots of VLANs are required; however, in most cases this can be addressed by setting up a Tunneled Uplink. To date, HP is the only vendor allowing the splitting of network I/O in this manner, and I'd love to hear people's thoughts on Flex10, their implementations, and use cases.
Closely related to this discussion is Cisco's "Palo" card for their UCS platform, which is intended to compete with HP's Flex10, but I'll have to leave that for another discussion…