Tag Archives: Xen

Paravirtualization with Citrix XenServer 5.5 and Ubuntu 9.10

A few days ago I was given the task of P2V-ing an old Ubuntu 9.10 machine. The P2V process itself was easy and went smoothly; the challenge was getting the VM paravirtualized. After reading on the net how others have done this and what problems they ran into, I managed to finish the task pretty quickly. Anyway... I hope this post helps someone - it will definitely help me if I ever have to do this again.

This post describes with simple step-by-step instructions how to install Ubuntu 9.10 VM as a paravirtualized virtual machine on a Citrix XenServer 5.5.

Creating Our Guest Ubuntu VM
Our first step is to get an Ubuntu VM installed as a typical HVM. You can find many different options on the web about partitioning and recommended partition sizes. A default installation of Ubuntu 9.10 will install on two partitions:

  • a root (/) partition, which includes the boot system (/boot), and
  • a swap partition.

For this article I installed Ubuntu with default partition options.

Configuring XenServer

First, log in to the XenServer console.

Get the UUID of the newly created VM:

[codesyntax lang="bash"]

xe vm-list name-label="ubuntu-vm" params=uuid --minimal



Find the VM’s hard drive, known as a virtual block device (VBD):

[codesyntax lang="bash"]

xe vm-disk-list uuid=ed788e42-aabd-f78e-180a-5e46ec8b2465


Disk 0 VBD:
uuid ( RO)             : ceb500b7-b154-2251-2fcd-5de05da50368
    vm-name-label ( RO): ubuntu-vm
       userdevice ( RW): 0

Mark the VBD as bootable:

[codesyntax lang="bash"]

xe vbd-param-set uuid=ceb500b7-b154-2251-2fcd-5de05da50368 bootable=true


We don't want our VM to run as HVM:

[codesyntax lang="bash"]

xe vm-param-set uuid=ed788e42-aabd-f78e-180a-5e46ec8b2465 HVM-boot-policy=
xe vm-param-set uuid=ed788e42-aabd-f78e-180a-5e46ec8b2465 PV-bootloader=pygrub


pygrub can’t handle grub2, so we have to manually set these parameters for paravirtualization:

[codesyntax lang="bash"]

xe vm-param-set uuid=ed788e42-aabd-f78e-180a-5e46ec8b2465 PV-bootloader-args="--kernel=/boot/vmlinuz-2.6.31-14-generic --ramdisk=/boot/initrd.img-2.6.31-14-generic"
xe vm-param-set uuid=ed788e42-aabd-f78e-180a-5e46ec8b2465 PV-args="root=UUID=706a70b4-09ee-4682-8f08-c8eb79ddd410 ro quiet"



  • 706a70b4-09ee-4682-8f08-c8eb79ddd410 is the UUID of the root (bootable) partition. You can find it in the grub configuration file or with the blkid command.
  • If you have a separate /boot partition, change the kernel and ramdisk parameters to --kernel=/vmlinuz-2.6.31-14-generic --ramdisk=/initrd.img-2.6.31-14-generic (paths are relative to that partition).
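The blkid lookup can be sketched like this. The sample line below is an assumed capture of blkid's output for the root partition; on a real system run blkid /dev/sda1 (or whatever your root device is) and use what it prints:

```bash
# Assumed one-line capture of 'blkid /dev/sda1'; substitute your real output
sample='/dev/sda1: UUID="706a70b4-09ee-4682-8f08-c8eb79ddd410" TYPE="ext4"'

# Pull the filesystem UUID out of the line
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

# This is the value that goes into PV-args
echo "root=UUID=$uuid ro quiet"
```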

Close and restart your XenCenter client (it appears to be a bit buggy and doesn’t let you type into the new console until it’s restarted), and boot up your VM (which will now start in PV mode).

Install XenServer tools

Attach the XenServer tools ISO image (xs-tools.iso) and mount the CD on your VM.

[codesyntax lang="bash"]

mount /dev/cdrom /mnt


Install XenServer tools

[codesyntax lang="bash"]

dpkg -i /mnt/Linux/xe-guest-utilities_5.5.0-466_amd64.deb


During the install you will likely have noticed a couple of errors, specifically:

update-rc.d: warning: xe-linux-distribution start runlevel arguments (S) do not match LSB Default-Start values (2 3 4 5)
[: 31: configure: unexpected operator

The package was built for Debian, not Ubuntu, so we don't need to worry about these messages. We do, however, need to adjust the default start/kill runlevels.

[codesyntax lang="bash"]

update-rc.d -f xe-linux-distribution remove
update-rc.d xe-linux-distribution defaults


Since we are now paravirtualized, XenServer will want to use hvc0, not the traditional TTY.

[codesyntax lang="bash"]

sed -e "s/tty1/hvc0/ig" /etc/init/tty1.conf | sudo bash -c 'cat > /etc/init/hvc0.conf'

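To see what that sed invocation actually does, here is a dry run on a tiny assumed excerpt of /etc/init/tty1.conf (the real file has more stanzas):

```bash
# Assumed two-line excerpt of /etc/init/tty1.conf, for demonstration only
sample='respawn
exec /sbin/getty -8 38400 tty1'

# Same substitution as above: every tty1 becomes hvc0
out=$(printf '%s\n' "$sample" | sed -e 's/tty1/hvc0/ig')
printf '%s\n' "$out"
```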

Accessing the GUI on a paravirtualized Ubuntu VM

If you try and start the GUI on a paravirtualized Ubuntu VM in XenServer, you’ll get the following error:

Primary device is not PCI
(EE) open /dev/fb0: No such file or directory
(EE) No devices detected

In a paravirtualized world there is no such thing as a physical console (nor a physical CPU, physical memory, etc.). Hence for fully paravirtualized OSes (running a paravirtualized Xen kernel) there is no GUI console.

In other words, use VNC for now:

Install VNC
[codesyntax lang="bash"]

apt-get install vnc4server


Set the VNC resolution (use whatever resolution you want to see on the desktop machine you'll be running the VNC client on):
[codesyntax lang="bash"]

vncserver -geometry 1280x1024 -depth 24


Set a password, and the VNC server should create some configuration files and start up.

Now we need to stop the server and edit one of those configuration files:

[codesyntax lang="bash"]

vncserver -kill :1


[codesyntax lang="bash"]

vim ~/.vnc/xstartup


# Uncomment the following two lines for normal desktop:
exec sh /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
x-terminal-emulator -geometry 1280x1024+10+10 -ls -title "$VNCDESKTOP Desktop" &
x-window-manager &



Start up the VNC server again
[codesyntax lang="bash"]

vncserver -geometry 1280x1024 -depth 24



Can't type at the login prompt.

I ran into one instance where I could see the login prompt but nothing I typed appeared. First, make sure you have clicked inside the console window. If that does not resolve the issue, close and reopen XenCenter; that fixed it when I encountered it.

PV is not working and I need to get the VM back up.

To switch the VM back to HVM mode temporarily:

[codesyntax lang="bash"]

xe vm-param-set uuid= HVM-boot-policy="BIOS order"


To return to PV mode, clear the HVM-boot-policy parameter.

[codesyntax lang="bash"]

xe vm-param-set uuid= HVM-boot-policy=


Please note that the HVM-boot-policy parameter IS case-sensitive.


Autostart VM in free version of XenServer 6.x

Unlike previous versions, VMs no longer have a visible property in the GUI for enabling autostart, which kinda sucks big time. Autostart has been claimed to interfere with High Availability (HA) and to produce unexpected results during HA operations.

So, what are we going to do?

The first approach is to set the auto_poweron parameter to true at both the pool and the VM level.

Setting the XenServer to allow Auto-Start
1. Gather the UUIDs of the pools you wish to auto-start.
To get the list of pools on your XenServer, type:

[codesyntax lang="bash"]

xe pool-list

2. Copy the UUID of the pool. If you have just one server, it will still have a pool UUID, as below:

uuid ( RO)                : d170d718-e0de-92fc-b920-f4c59cc62e91
          name-label ( RW):
    name-description ( RW):
              master ( RO): 755d4ea3-373b-44b9-8ae3-3cd6f77a7f33
          default-SR ( RW): 51218f44-6ac6-4893-98fb-f924b08f7af9

3. Set the pool or server to allow auto-start:

[codesyntax lang="bash"]

xe pool-param-set uuid=UUID other-config:auto_poweron=true

Note: Replace UUID with the UUID of the XenServer or pool.

Setting the Virtual Machines to Auto-Start
1. Gather the UUIDs of the Virtual Machines you want to auto-start by typing:
[codesyntax lang="bash"]

xe vm-list


Note: This generates a list of the Virtual Machines in your pool or server and their associated UUIDs.

2. Copy the UUID of the Virtual Machines you want to auto-start, and type the following command for each Virtual Machine to auto-start:
[codesyntax lang="bash"]

xe vm-param-set uuid=UUID other-config:auto_poweron=true


Note: Replace UUID with the UUID of the Virtual Machine to auto-start.

For this second part (enabling auto-start for the VMs) we can use a little one-liner, which enables autostart for ALL VMs:

[codesyntax lang="bash"]

for i in $(xe vm-list is-control-domain=false --minimal | tr , ' '); do xe vm-param-set uuid=$i other-config:auto_poweron=true; done

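To preview what the loop will do before touching anything, you can run it as a dry run on made-up sample data (aaa, bbb and ccc stand in for real UUIDs); the echo prints each xe command instead of executing it:

```bash
# Sample of the comma-separated output of 'xe vm-list ... --minimal';
# on a real host use: uuids=$(xe vm-list is-control-domain=false --minimal)
uuids='aaa,bbb,ccc'

# Dry run: echo the commands instead of executing them
for i in $(printf '%s' "$uuids" | tr , ' '); do
  echo "xe vm-param-set uuid=$i other-config:auto_poweron=true"
done
```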

Edit the rc.local file to start all VMs with "auto_poweron" in their other-config

Add the following lines at the end of /etc/rc.local:

[ -e /proc/xen ] || exit 0

# timeout (in seconds) to wait for xapi; adjust to taste
XAPI_START_TIMEOUT_SECONDS=240

# wait for xapi to complete initialisation for a max of XAPI_START_TIMEOUT_SECONDS
/opt/xensource/bin/xapi-wait-init-complete ${XAPI_START_TIMEOUT_SECONDS}

if [ $? -eq 0 ]; then
    pool=$(xe pool-list params=uuid --minimal 2> /dev/null)

    auto_poweron=$(xe pool-param-get uuid=${pool} param-name=other-config param-key=auto_poweron 2> /dev/null)
    if [ $? -eq 0 ] && [ "${auto_poweron}" = "true" ]; then
        logger "$0 auto_poweron is enabled on the pool -- this is an unsupported configuration."

        # if xapi init completed then start vms (best effort, don't report errors)
        xe vm-start other-config:auto_poweron=true power-state=halted --multiple >/dev/null 2>/dev/null || true
    fi
fi

The second approach is to use a vApp.

1. Create a vApp.
2. Add the VMs to the vApp.
3. Choose the boot order and the delays between starts.
4. To get the UUID of the vApp, use:

[codesyntax lang="bash"]

xe appliance-list name-label="name-vapp"


5. Edit the rc.local file to start the vApp:

[codesyntax lang="bash"]

echo "sleep 40" >> /etc/rc.local
echo "xe appliance-start uuid=uuid-vapp" >> /etc/rc.local

6. Save the file and reboot the XenServer.


Creating backups of running VMs in XenServer

With XenServer it is possible to create backups of VMs, even if they are running. The process is as follows:

Search for the uuid of the VMs to backup

First look for the uuid of the VMs to backup. We don’t want to backup the control domain itself, so we add is-control-domain=false to the vm-list command:
[codesyntax lang="bash"]

xe vm-list is-control-domain=false


Create a snapshot of each (running)

Now we create a snapshot of the VMs we want to backup, replacing the uuid one by one with the ones we found with the previous command. Also replace the name of the snapshot if desired:
[codesyntax lang="bash"]

xe vm-snapshot uuid=8d6f9d81-95b5-2ffb-4ecc-b5e442cc5c22 new-name-label=gb-r7n2-snapshot


This command returns the UUID of the created snapshot. Next we transform the snapshot into a VM so it can be saved to a file, replacing uuid with the return value of the previous command:
[codesyntax lang="bash"]

xe template-param-set is-a-template=false ha-always-run=false uuid=4efd0392-8881-176c-012e-a56e9cb2beed


Save the snapshot to file

In the next step we save the snapshot to a file, replacing uuid with the snapshot uuid and providing a meaningful filename:
[codesyntax lang="bash"]

xe vm-export vm=4efd0392-8881-176c-012e-a56e9cb2beed filename=gb-r7n2-snapshot.xva


Remove the created snapshot

In the final step we delete the snapshot:
[codesyntax lang="bash"]

xe vm-uninstall uuid=4efd0392-8881-176c-012e-a56e9cb2beed force=true

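The four steps above can be strung together into one small script. This is a dry-run sketch, not a tested tool: XE is set to echo xe so it only prints the commands, and the snapshot UUID is a placeholder (on a real host set XE=xe and capture the UUID returned by xe vm-snapshot):

```bash
#!/bin/sh
# Dry run: XE prints the commands instead of executing them; set XE=xe for real
XE="echo xe"

VM_UUID=8d6f9d81-95b5-2ffb-4ecc-b5e442cc5c22   # example VM from this section
NAME=gb-r7n2-snapshot
# Real run: SNAP_UUID=$(xe vm-snapshot uuid=$VM_UUID new-name-label=$NAME)
SNAP_UUID=snapshot-uuid-placeholder

$XE vm-snapshot uuid=$VM_UUID new-name-label=$NAME
$XE template-param-set is-a-template=false ha-always-run=false uuid=$SNAP_UUID
$XE vm-export vm=$SNAP_UUID filename=$NAME.xva
$XE vm-uninstall uuid=$SNAP_UUID force=true
```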

Source: http://www.jansipke.nl/creating-backups-of-running-vms-in-xenserver/

How to install smartmontools on Citrix XenServer


Citrix XenServer 5.5, 5.6 and 6.0 are based on CentOS 5.4, but the smartmontools package is not available in the default Citrix repository, so it can't be installed with yum install smartmontools (as would be possible on any other CentOS system).


It is probably safest to use the same versions that were in the original CentOS 5.4 distribution. Since that release is no longer the latest, we have to go to an archive server to find the packages. The packages listed below are actually used in other CentOS releases too, so these are probably the latest versions anyway.

[codesyntax lang="bash"]

wget http://vault.centos.org/5.4/os/i386/CentOS/mailx-8.1.1-44.2.2.i386.rpm
wget http://vault.centos.org/5.4/os/i386/CentOS/smartmontools-5.38-2.el5.i386.rpm

rpm -hiv smartmontools-5.38-2.el5.i386.rpm mailx-8.1.1-44.2.2.i386.rpm


Checking the disk status

We can now retrieve the disk status using:

[codesyntax lang="bash"]

smartctl -d ata -a /dev/sda


The most important field is the SMART overall-health self-assessment test result, which should always have the value PASSED. Other important fields are:

  • Reallocated_Sector_Ct, which counts the number of bad blocks that have been reallocated. It should be a low number. If this value increases, it is an alarm signal. Make a backup and replace the disk drive.
  • Current_Pending_Sector, which is the number of blocks with read errors that are not yet reallocated.
  • Offline_Uncorrectable.

Also check the VALUE, WORST and THRESH columns. For each attribute, the current value should never drop below the threshold defined by the manufacturer.

Automatic monitoring of disk drives

The smartd daemon handles automatic testing for all drives, logs any status changes in /var/log/syslog or /var/log/messages and sends a status email in case of a problem (if mail is enabled, see below). It is configured in the file /etc/smartd.conf.

The following lines will run a short test every day between 02:00 and 03:00, and a long test every Saturday between 03:00 and 04:00. If there is a problem, an email is sent to the configured address. The -M test option makes smartd send a test email whenever the daemon starts.

The DEVICESCAN line would normally run default tests on all disks that smartd finds, but it does not work on my system for some reason. So it is commented out and the tests only run for the explicitly listed devices.

[codesyntax lang="bash"]

/dev/sda -d ata -a -s (S/../.././02|L/../../6/03) -t -m user@example.com
/dev/sdb -d ata -a -s (S/../.././02|L/../../6/03) -t -m user@example.com

#DEVICESCAN -H -m root


After any change to the /etc/smartd.conf file the smartd daemon should be restarted:

[codesyntax lang="bash"]

/etc/init.d/smartd restart


Enabling email on Citrix XenServer

Citrix XenServer is not configured to run a mail server. Without further configuration smartd will attempt to send warning emails when anything fails, but no mail will actually reach its destination.

It is fortunately not necessary to install a full-blown email package. XenServer comes with ssmtp preinstalled, which simply forwards emails to a real mail server.

To enable mail sending on Citrix XenServer, set up /etc/ssmtp/ssmtp.conf. You need to provide a real mail server (mailhub) and the local domain name; the values below are placeholders, not real settings:

# /etc/ssmtp/ssmtp.conf -- a config file for sSMTP sendmail.
mailhub=mail.example.com
rewriteDomain=example.com
hostname=xenserver.example.com


After setting up /etc/ssmtp/ssmtp.conf, send a test email from the console to make sure that the email gets through:

[codesyntax lang="bash"]

echo "this is a test mail" | mailx -s "Test mail" user@example.com


If you added the -M test option to a device definition in /etc/smartd.conf, you can also restart the smartd daemon to have it send out test emails.

Source: http://www.schirmacher.de/display/INFO/Install+smartmontools+on+Citrix+XenServer

How to determine which XenServer is the pool master

[codesyntax lang="bash"]

xe host-param-get param-name=name-label uuid=`xe pool-list | grep master | awk '{print $4}'`

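Why field 4? Running the grep/awk pipeline on a simulated fragment of xe pool-list output shows which column holds the master's host UUID (the sample below mirrors the pool-list output shown earlier):

```bash
# Simulated fragment of 'xe pool-list' output; on a real host: xe pool-list
sample='uuid ( RO)                : d170d718-e0de-92fc-b920-f4c59cc62e91
          name-label ( RW):
              master ( RO): 755d4ea3-373b-44b9-8ae3-3cd6f77a7f33'

# The master line reads 'master ( RO): <uuid>', so awk field 4 is the host UUID
master=$(printf '%s\n' "$sample" | grep master | awk '{print $4}')
echo "$master"
```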

How to convert a VMware Linux virtual machine to a XenServer virtual machine

As the title says, this document describes how to convert a VMware virtual machine to a XenServer virtual machine. Although this procedure hasn't failed so far, please use it at your own risk.

1. Install qemu on the VMware server or on another Linux machine (on Debian-based distributions use apt-get install qemu-utils; on CentOS use yum install qemu)

2. Uninstall vmware modules on the vmware guest you wish to convert

3. Stop the vmware guest

4. Check the format of the vmdk file:

qemu-img info guest22-flat.vmdk
image: guest22-flat.vmdk
file format: raw
virtual size: 15G
disk size: 15G

5. If the file format is already "raw", there is no need to convert the image with qemu-img convert; just rename it to a .img file. If it is not "raw", convert it to raw format as in the next step.

6. Convert the file with qemu-img
[codesyntax lang="bash"]

qemu-img convert guest22-flat.vmdk -O raw /volumes/guest22/guest22.img

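Steps 4-6 can be folded into one sketch that converts only when necessary. The qemu-img info output here is a captured sample, and the resulting command is echoed rather than executed so you can inspect it first:

```bash
# Sample of 'qemu-img info guest22-flat.vmdk'; on a real host capture it with
# info=$(qemu-img info "$src")
src=guest22-flat.vmdk
info='image: guest22-flat.vmdk
file format: raw
virtual size: 15G'

fmt=$(printf '%s\n' "$info" | sed -n 's/^file format: //p')
if [ "$fmt" = raw ]; then
    cmd="mv $src guest22.img"                       # already raw: just rename
else
    cmd="qemu-img convert $src -O raw guest22.img"
fi
echo "$cmd"
```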

7. Copy the image file to the xen server with scp or rsync
[codesyntax lang="bash"]

rsync -avz --stats --progress --partial /volumes/guest22/guest22.img root@xenserver:~/


8. Create a guest with at least the same disk size and amount of RAM as the imported vmware virtual disk. Rename the disk under properties so you can locate it later (your_disk_name).
9. Open a console or connect to the XenServer with ssh, then find the disk and copy its UUID:

[codesyntax lang="bash"]

xe vdi-list name-label=your_disk_name


uuid ( RO)                : 565c8fcf-5a52-4f05-8fd0-de943b99fa12
          name-label ( RW): your_disk_name
    name-description ( RW): your_disk_name description
             sr-uuid ( RO): 81c5bb77-8fe5-628e-f407-73b07b7054cd
        virtual-size ( RO): 8589934592
            sharable ( RO): false
           read-only ( RO): false

10. Import the image (use the uuid from step 9):
[codesyntax lang="bash"]

xe vdi-import uuid=565c8fcf-5a52-4f05-8fd0-de943b99fa12 filename=guest22.img


11. Fire up your converted Xen image. You may have to modify the grub boot loader: VMware uses /dev/sda for its hard disk while Xen uses /dev/hda.

12. If your machine does not boot, press e at the grub prompt, search for the root=/dev/sda1 line and change it to root=/dev/hda1. Once the machine has booted, make the same change in your grub.conf and save it.

Note: if your VMware guest has multiple 2 GB vmdk files, you need to merge them into one single file first. Please consult: https://sysadmin.compxtreme.ro/vmware-how-do-you-merge-multiple-2gb-disk-files-to-single-vmdk-file/

Xen VM gzip export/import

To export, just leave the filename blank, as in this example:
[codesyntax lang="bash"]

xe vm-export vm=VM-UUID filename= | gzip -c > /mnt/vm.xva.gz


To import, use /dev/stdin as the filename:
[codesyntax lang="bash"]

gunzip -c /mnt/vm.xva.gz | xe vm-import sr-uuid=SR-UUID filename=/dev/stdin


XenServer 6.0.2 software RAID - installation procedure

This document describes how to install XenServer 6.0.2 on a node without hardware raid.

Install Software

Install XenServer 6.0.2 on /dev/sda and do NOT configure any local storage (it is easier to do that afterwards). /dev/sda should contain three partitions; verify with the following command:

[codesyntax lang="bash"]

sgdisk -p /dev/sda


The first partition is used for XenServer installation, the second one is used for backups during XenServer upgrades.

1. Now we are going to use /dev/sdb as the mirror disk. Clear the partition table.
[codesyntax lang="bash"]

sgdisk --zap-all /dev/sdb


2. Install a GPT table on /dev/sdb
[codesyntax lang="bash"]

sgdisk --mbrtogpt --clear /dev/sdb


3. Create partitions on /dev/sdb. Please note that the following commands depend on your installation: copy the start and end sectors from /dev/sda (the output of sgdisk -p /dev/sda)
[codesyntax lang="bash"]

sgdisk --new=1:34:8388641 /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --attributes=1:set:2 /dev/sdb
sgdisk --new=2:8388642:16777249 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --new=3:16777250:3907029134 /dev/sdb
sgdisk --typecode=3:fd00 /dev/sdb

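Rather than copying the sector numbers by hand, you can pull them out of the sgdisk -p output. The partition table below is an assumed sample matching the numbers used in this section:

```bash
# Assumed sample of the 'sgdisk -p /dev/sda' partition lines
table='Number  Start (sector)    End (sector)  Size       Code  Name
   1              34         8388641   4.0 GiB    FD00
   2         8388642        16777249   4.0 GiB    FD00'

# Start:end sectors of partition 1, ready for 'sgdisk --new=1:<start>:<end>'
sectors=$(printf '%s\n' "$table" | awk '$1 == "1" {print $2 ":" $3}')
echo "$sectors"
```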

4. Create RAID devices
[codesyntax lang="bash"]

mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mknod /dev/md2 b 9 2
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3


5. Create bitmaps for each RAID device. Bitmaps slightly impact throughput but significantly reduce the rebuild time when an array fails.
[codesyntax lang="bash"]

mdadm --grow /dev/md0 -b internal
mdadm --grow /dev/md1 -b internal
mdadm --grow /dev/md2 -b internal


6. Format the root disk and mount it at /mnt
[codesyntax lang="bash"]

mkfs.ext3 /dev/md0
mount /dev/md0 /mnt


7. Copy the root filesystem to the RAID array (please be patient this step may take a while).
[codesyntax lang="bash"]

cp -vxpR / /mnt


8. Change the root filesystem in /mnt/etc/fstab to /dev/md0.
[codesyntax lang="bash"]

sed -r -i 's,LABEL=root-\w+ ,/dev/md0 ,g' /mnt/etc/fstab


9. Install the bootloader on the second hard disk.
[codesyntax lang="bash"]

mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb


10. Make a new initrd image which contains a driver for the new root filesystem on the software RAID array.
[codesyntax lang="bash"]

chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`


11. Edit /mnt/boot/extlinux.conf and replace every mention of the old root filesystem (root=LABEL=xxx) with root=/dev/md0.
[codesyntax lang="bash"]

sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /mnt/boot/extlinux.conf
sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /boot/extlinux.conf


12. Unmount the new root and reboot. Important: Remember to use the boot menu of your BIOS to boot from the second hard disk this time!
[codesyntax lang="bash"]

umount /mnt/proc
umount /mnt/sys
umount /mnt/dev
umount /mnt


13. Once XenServer is up again, include /dev/sda in the array
[codesyntax lang="bash"]

sgdisk --typecode=1:fd00 /dev/sda
sgdisk --typecode=2:fd00 /dev/sda
sgdisk --typecode=3:fd00 /dev/sda
mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda2
mdadm -a /dev/md2 /dev/sda3


14. The array needs to complete its initial build/synchronisation. That will take a while.
[codesyntax lang="bash"]

watch --interval=1 cat /proc/mdstat


15. Add /dev/md2 as a local SR to XenServer.
[codesyntax lang="bash"]

xe sr-create content-type=user device-config:device=/dev/md2 name-label="Local Storage" shared=false type=lvm


type=ext is required if you turned on thin provisioning in the installer; otherwise use type=lvm.

Final notes:

* The second partition is used by XenServer for backups, which is why it's the same size as the first partition. If you put the install CD in and boot it, an option shows up to "restore XenServer 6.0 from backup partition".

* I have created bitmaps for each RAID as well. If the host goes down dirty, the arrays may require a resync; a bitmap of changed pages makes that much faster.

Running cat /proc/mdstat will now show something like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[1] sdb1[0]
4193216 blocks [2/2] [UU]
bitmap: 128/128 pages [512KB], 16KB chunk

md1 : active raid1 sda2[1] sdb2[0]
4193216 blocks [2/2] [UU]
bitmap: 0/128 pages [0KB], 16KB chunk

md2 : active raid1 sda3[1] sdb3[0]
968372864 blocks [2/2] [UU]
bitmap: 0/231 pages [0KB], 2048KB chunk

* If you are installing on a server which cannot boot from the second disk, you must physically swap the two drives to make the machine boot off sdb and use /dev/md0 as root

* If you are going to set up a Xen 6 installation over the network (via PXE) and the installation process hangs right after "Freeing unused kernel memory: 280k freed", pass the xencons parameter to the kernel (in /tftpboot/pxelinux.cfg/main.menu) as follows:

append xenserver6/xen.gz dom0_mem=752M com1=9600,8n1 console=com1,tty --- xenserver6/vmlinuz console=tty0 console=ttyS0,9600n8 xencons=ttyS0,9600n8 answerfile=http://netboot.vendio.com/xenserver6/answers.xml install --- xenserver6/install.img

* To speed up the RAID build process, the following command can be used (the default value is 1000):
[codesyntax lang="bash"]

echo 100000 > /proc/sys/dev/raid/speed_limit_min


* TIP: You can use the attached script to automate the steps 1 to 9.

Good luck

The script: xen6.sh.zip

Source: http://blog.codeaddict.org/?p=5