Xen learnings

This week, I was trying to get my head around a hypervisor that is new to me: Citrix XenServer. Though it is pretty much the same idea as ESXi, and is free as well, there are some striking differences. The observations below are based on the free version of Citrix XenServer 6.2.0.

- While ESXi needs a paid vCenter to manage multiple hosts, you can use the free XenCenter software to manage multiple XenServer hosts

- The latest version of XenServer doesn't have an equivalent of DRS or DPM. There was a feature named Workload Balancing, but it was, strangely, discontinued in version 6.2.0, the stated reason being that there were no takers for it

- It does offer High Availability for VMs using pool-based clustering of hosts (see the sketch below)
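
If you want to try this from the CLI, enabling pool HA looks roughly like the following. This is a minimal sketch: the SR name and the UUIDs are placeholders, and it assumes the pool already has a shared storage repository to use for the HA heartbeat.

    # Find the UUID of a shared SR to use for the HA heartbeat (name is hypothetical)
    xe sr-list name-label="Shared iSCSI SR" --minimal
    # Enable HA on the pool using that SR
    xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>
    # Mark a VM as protected so HA restarts it after a host failure
    xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart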

- XenMotion is the equivalent of VM live migration, but it is restricted to one VM at a time
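
From the CLI, a live migration is a single xe command per VM; the VM and host names below are placeholders.

    # Live-migrate one VM to another host in the same pool
    xe vm-migrate vm=myvm host=xenhost02 live=true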

- XenCenter doesn't have a web client like vCenter does

- There was a tool named XenConvert for physical-to-virtual (P2V) conversion, but it has been retired as well

- There is an option named Dynamic Memory Control (DMC), which can be used for dynamic allocation of memory to VMs. We can set maximum and minimum memory values for each VM, which XenServer uses to manage memory crunch situations
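
Setting the dynamic range from the CLI looks roughly like this; the UUID and sizes are placeholders:

    # Allow XenServer to balloon this VM between 1 GiB and 4 GiB
    xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=1GiB max=4GiB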

- Thin provisioning is supported for local storage only
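
As a sketch, creating a thin-provisioned local SR from the CLI would look something like this; the host UUID and device path are placeholders, and it assumes an empty local disk:

    # EXT3-backed local SRs store VHDs sparsely, i.e. thin provisioned
    xe sr-create host-uuid=<host-uuid> name-label="Local thin SR" \
       type=ext content-type=user device-config:device=/dev/sdb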

- A Distributed vSwitch Controller appliance is available for centralized management of networks in XenCenter. However, this too is deprecated in v6.2.0

PS: One interesting point to note is that the configuration limits document of XenServer is very small compared to VMware's, and it doesn't mention much detail. For example, VMware specifies the maximum number of vCPUs that we can create per physical processor core: for v5.1 it is 25, and for v5.5 it is 32. XenServer, however, doesn't give you a hard-coded value for that. When we contacted Citrix support regarding the same, they mentioned that there is no limit! Obviously, that means you have to keep performance in mind while deciding on the number of vCPUs.
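
Since there is no hard limit, a quick back-of-the-envelope check of your vCPU-to-pCPU ratio can help. The snippet below is an illustrative sketch run on a XenServer host, not an official tool:

    # Count physical CPUs seen by Xen
    pcpus=$(xe host-cpu-list --minimal | tr ',' '\n' | wc -l)
    # Sum the vCPUs configured across all (non-dom0) VMs
    vcpus=$(xe vm-list is-control-domain=false params=VCPUs-max --minimal | tr ',' '\n' | awk '{s+=$1} END {print s}')
    echo "Total vCPUs: $vcpus across $pcpus physical CPUs"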

Here is a good comparison matrix of the various hypervisors available on the market:

http://www.virtualizationmatrix.com/matrix.php?category_search=all&free_based=1




