
Configure NIC teaming for vSwitch


Two or more NICs are required to use NIC teaming in ESX Server.



Step 1: Create a new vSwitch and give it a network label such as "VM Network". Add the required NICs to the switch. The procedure for adding a NIC to a virtual switch is described below.



Select the vSwitch from Configuration -> Networking. Select Network Adapters and click Add.


In the Add Adapter wizard, select the NIC that needs to be added to the switch and click Next. If the NIC is already attached to another virtual switch, it will be removed from that switch and added to the new one. Go to the next screen.


In the failover order, configure the active and standby adapters if required. Click Next and finish the configuration.
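As a sketch, the same result can be achieved from the ESX service console with the classic esxcfg-vswitch utility; the vSwitch and vmnic names below are examples, not values from this setup:

# Create the vSwitch and a port group labeled "VM Network"
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# Link two physical NICs to the vSwitch as uplinks
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# List vSwitches to verify the uplinks were added
esxcfg-vswitch -l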

Step 2: Configure teaming

Configuring teaming in ESX Server implements load balancing for outbound traffic only. To load-balance inbound traffic as well, the corresponding configuration must be done on the physical switch to which the NICs are actually connected.
Switch configuration steps

Suppose two NICs, vmnic0 and vmnic1, are added to a virtual switch vSwitch0. vmnic0 and vmnic1 are physical adapters connected to switch ports gi0/23 and gi0/24.

1) Create a port channel on the switch for these ports.

Commands used for the same in Cisco Catalyst IOS-based physical switches are given below:

s3(config)#int port-channel1
s3(config-if)#description NIC team for ESX server
s3(config-if)#int gi0/23
s3(config-if)#channel-group 1 mode on
s3(config-if)#int gi0/24
s3(config-if)#channel-group 1 mode on


This creates port-channel1 and assigns GigabitEthernet0/23 and GigabitEthernet0/24 to the team.
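The channel can then be verified on the switch side with the standard Catalyst IOS show commands (exact output format varies by IOS version):

s3#show etherchannel summary
s3#show interfaces port-channel 1

Once both member ports are up, show etherchannel summary should list Po1 with Gi0/23 and Gi0/24 bundled in it.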

2) Ensure that the load-balancing mechanism used by the switch matches the one used by ESX Server. To find out the switch's current load-balancing mechanism, use this command in enable mode:

show etherchannel load-balance

This reports the load-balancing algorithm currently in use by the switch. We can use either IP-based or MAC-based load balancing. The IP-based option generally gives better utilization across the members of the NIC team than the other options.

3) Set the switch load-balancing algorithm using one of the following commands in global configuration mode:

port-channel load-balance src-dst-ip (to enable IP-based load balancing)
port-channel load-balance src-mac (to enable MAC-based load balancing)
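The ESX side must then be set to the matching teaming policy. In the vSphere Client this is done on the NIC Teaming tab of the vSwitch properties, where "Route based on ip hash" pairs with src-dst-ip on the physical switch. On ESXi hosts that ship esxcli, the same policy can be set from the command line; this is a sketch, and the vSwitch name here is an example:

esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash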
