
Xen: export and import a VM through gzip

To export, leave the filename blank so the archive is written to stdout, and pipe it through gzip as in this example:
[codesyntax lang="bash"]

xe vm-export vm=VM-UUID filename= | gzip -c > /mnt/vm.xva.gz

[/codesyntax]

To import, use /dev/stdin as the filename:
[codesyntax lang="bash"]

gunzip -c /mnt/vm.xva.gz | xe vm-import sr-uuid=SR-UUID filename=/dev/stdin

[/codesyntax]
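Before importing on another host, it can be worth verifying that the compressed archive is intact; `gzip -t` tests it without extracting anything. A minimal sketch, using a throwaway file in place of the real /mnt/vm.xva.gz:

```shell
# Stand-in for the exported archive (any payload works for the check itself).
tmpdir=$(mktemp -d)
echo "fake xva payload" > "$tmpdir/vm.xva"
gzip -c "$tmpdir/vm.xva" > "$tmpdir/vm.xva.gz"

# gzip -t exits 0 only if the archive decompresses cleanly end to end.
if gzip -t "$tmpdir/vm.xva.gz"; then
    echo "archive OK"
else
    echo "archive corrupt" >&2
fi

rm -rf "$tmpdir"
```

Running the check before `xe vm-import` avoids a long import that fails halfway through on a truncated file.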

VMware: how to merge multiple 2GB disk files into a single vmdk file

With the 2GB split disk format, your virtual disk consists of multiple data files (e.g. <vmname>-sNNN.vmdk) and one header/descriptor file (<vmname>.vmdk) which describes the virtual disk. When running vmware-vdiskmanager, it is only this header/descriptor vmdk that you need to supply as the source virtual disk.

Convert to a single pre-allocated disk:
[codesyntax lang="bash"]

vmware-vdiskmanager -r sourceDisk.vmdk -t 2 destinationDisk.vmdk

[/codesyntax]

The following command instead converts a pre-allocated input disk into a single growable (sparse) target disk:
[codesyntax lang="bash"]

vmware-vdiskmanager -r sourceDisk.vmdk -t 0 destinationDisk.vmdk

[/codesyntax]
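A quick way to double-check which file is the descriptor: in the split format it is a small plain-text file whose first line is "# Disk DescriptorFile", while the -sNNN.vmdk extents are binary. A sketch using fabricated files (the names here are made up for illustration):

```shell
# Fake split-disk layout: a text descriptor plus one binary extent file.
tmpdir=$(mktemp -d)
printf '# Disk DescriptorFile\nversion=1\n' > "$tmpdir/myvm.vmdk"
head -c 1024 /dev/urandom > "$tmpdir/myvm-s001.vmdk"

# The descriptor is the one file that starts with the magic first line.
for f in "$tmpdir"/*.vmdk; do
    if head -c 21 "$f" | grep -q '^# Disk DescriptorFile'; then
        echo "descriptor: $(basename "$f")"
    fi
done

rm -rf "$tmpdir"
```

The file this prints is the one to pass to vmware-vdiskmanager as the source disk.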

Execute a command in all running OpenVZ Containers

If you wish to execute a command in all running Containers, you can use the following script:
[codesyntax lang="bash"]

for i in `cat /proc/vz/veinfo | awk '{print $1}' | egrep -v '^0$'`; \
do echo "Container $i"; vzctl exec $i <command>; done

[/codesyntax]

where <command> is the command to be executed in all the running Containers. For example:

[codesyntax lang="bash"]
for i in `cat /proc/vz/veinfo | awk '{print $1}' | egrep -v '^0$'`; \
do echo "Container $i"; vzctl exec $i uptime; done
[/codesyntax]

Container 1
2:26pm up 6 days, 1:28, 0 users, load average: 0.00, 0.00, 0.00
Container 101
2:26pm up 6 days, 1:39, 0 users, load average: 0.00, 0.00, 0.00
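The loop's moving parts (list the VEIDs, drop the host's own VEID 0, act on each) can be tried out without a live node by substituting a stub ID list for /proc/vz/veinfo; vzctl exec is replaced here by a comment:

```shell
# Stub for the first column of /proc/vz/veinfo: VEID 0 is the hardware node
# itself and must be skipped; 1 and 101 stand in for running Containers.
veids='0
1
101'

for i in $(echo "$veids" | awk '{print $1}' | egrep -v '^0$'); do
    echo "Container $i"
    # On a real node this is where "vzctl exec $i <command>" would run.
done
```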

Source: http://download.swsoft.com/virtuozzo/virtuozzo4.0/docs/en/lin/VzLinuxUG/260.htm

How to find out in which OpenVZ VPS a process is running

1. Log in to the OpenVZ node and use the ps command to find the PID
[codesyntax lang="bash"]

ssh root@openvz-node
ps auxwwwf

[/codesyntax]

2. After finding the PID, execute the following commands
[codesyntax lang="bash"]

PID=12345
for i in `vzlist -a | grep running | awk '{print $1}'`; do echo $i; \
  ps -p $(grep -l "^envID:[[:space:]]*$i\$" /proc/[0-9]*/status | \
  sed -e 's=/proc/\([0-9]*\)/.*=\1=') | grep -w $PID; done

[/codesyntax]
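The one-liner hinges on the envID field that an OpenVZ kernel exposes in /proc/PID/status: it names the container a process belongs to (0 means the host itself). The lookup can be illustrated on a fabricated status file, since a non-OpenVZ kernel has no envID line:

```shell
# Fabricated copy of the lines an OpenVZ kernel puts in /proc/PID/status;
# inside container 101 the envID value is 101, on the host it would be 0.
tmpdir=$(mktemp -d)
printf 'Name:\tbash\nenvID:\t101\n' > "$tmpdir/status"

# Extract the container ID from the envID field.
ctid=$(awk '/^envID:/ {print $2}' "$tmpdir/status")
echo "process belongs to container $ctid"

rm -rf "$tmpdir"
```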

How to install OpenVZ on CentOS 6.2

1. Add the OpenVZ repository
[codesyntax lang="bash"]

cd /etc/yum.repos.d
wget http://download.openvz.org/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

[/codesyntax]

2. Install OpenVZ
[codesyntax lang="bash"]

yum install openvz-kernel-rhel6 vzctl vzquota bridge-utils

[/codesyntax]

3. Modify the relevant kernel settings in /etc/sysctl.conf
[codesyntax lang="bash"]

vim /etc/sysctl.conf

[/codesyntax]

net.ipv4.ip_forward=1
kernel.sysrq = 1

net.ipv4.conf.all.rp_filter=1
net.ipv4.icmp_echo_ignore_broadcasts=1

net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.eth0.proxy_arp=1

Then apply the settings:

[codesyntax lang="bash"]

sysctl -p

[/codesyntax]

4. Reboot the server
[codesyntax lang="bash"]

reboot

[/codesyntax]

5. Disable SELinux
[codesyntax lang="bash"]

vim /etc/sysconfig/selinux

[/codesyntax]

SELINUX=disabled

6. Install strace (I do not remember why I initially wanted this installed, but it does no harm)

[codesyntax lang="bash"]

yum install strace

[/codesyntax]

Useless, but still: find out which OpenVZ Container you are in

1. Log on to the machine and execute the following command:

[codesyntax lang="bash"]

cat /proc/vz/veinfo | awk '{print "ID: "$1 " - IP: "$4}'

[/codesyntax]
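The awk one-liner assumes the usual /proc/vz/veinfo column layout (VEID, class, number of processes, IP address). It can be tried out on fabricated sample lines:

```shell
# Sample lines in the /proc/vz/veinfo layout the command above assumes:
# VEID, class, number of processes, IP address.
printf '%s\n' \
    '101  0  12  10.0.0.101' \
    '102  0   5  10.0.0.102' |
awk '{print "ID: "$1" - IP: "$4}'
```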

XenServer 6.0.2 software RAID - installation procedure

This document describes how to install XenServer 6.0.2 on a node without hardware RAID.

Install Software

Install XenServer 6.0.2 on /dev/sda and do NOT configure any local storage (it is easier to do that afterwards). /dev/sda should contain three partitions; please verify with the following command:

[codesyntax lang="bash"]

sgdisk -p /dev/sda

[/codesyntax]

The first partition is used for the XenServer installation, the second one for backups during XenServer upgrades, and the third will become the local storage SR later on.

1. Now we are going to use /dev/sdb as the mirror disk. Clear its partition table.
[codesyntax lang="bash"]

sgdisk --zap-all /dev/sdb

[/codesyntax]

2. Install a GPT table on /dev/sdb
[codesyntax lang="bash"]

sgdisk --mbrtogpt --clear /dev/sdb

[/codesyntax]

3. Create partitions on /dev/sdb. Please note that the following commands depend on your installation: copy the start and end sectors from /dev/sda (the output of sgdisk -p /dev/sda).
[codesyntax lang="bash"]

sgdisk --new=1:34:8388641 /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --attributes=1:set:2 /dev/sdb
sgdisk --new=2:8388642:16777249 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --new=3:16777250:3907029134 /dev/sdb
sgdisk --typecode=3:fd00 /dev/sdb

[/codesyntax]

4. Create RAID devices
[codesyntax lang="bash"]

mknod /dev/md0 b 9 0
mknod /dev/md1 b 9 1
mknod /dev/md2 b 9 2
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3

[/codesyntax]

5. Create bitmaps for each RAID device. Bitmaps slightly impact throughput but significantly reduce the rebuild time after an unclean shutdown or when a disk is re-added.
[codesyntax lang="bash"]

mdadm --grow /dev/md0 -b internal
mdadm --grow /dev/md1 -b internal
mdadm --grow /dev/md2 -b internal

[/codesyntax]

6. Format the root disk and mount it at /mnt
[codesyntax lang="bash"]

mkfs.ext3 /dev/md0
mount /dev/md0 /mnt

[/codesyntax]

7. Copy the root filesystem to the RAID array (please be patient; this step may take a while).
[codesyntax lang="bash"]

cp -vxpR / /mnt

[/codesyntax]

8. Change the root filesystem in /mnt/etc/fstab to /dev/md0.
[codesyntax lang="bash"]

sed -r -i 's,LABEL=root-\w+ ,/dev/md0 ,g' /mnt/etc/fstab

[/codesyntax]
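If you want to check the sed expression before touching the real fstab, run it against a fabricated line in the form XenServer writes (the root-label suffix, "abcde" here, varies per installation):

```shell
# Fabricated fstab root entry; the label suffix differs on every install,
# which is why the sed pattern matches root-\w+ rather than a fixed name.
echo 'LABEL=root-abcde / ext3 defaults 1 1' |
sed -r 's,LABEL=root-\w+ ,/dev/md0 ,g'
```

The output should show the mount source rewritten to /dev/md0 with the rest of the line untouched.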

9. Install the bootloader on the second hard disk.
[codesyntax lang="bash"]

mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb

[/codesyntax]

10. Make a new initrd image which contains a driver for the new root filesystem on the software RAID array.
[codesyntax lang="bash"]

chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`
exit

[/codesyntax]

11. Edit /mnt/boot/extlinux.conf and replace every mention of the old root filesystem (root=LABEL=xxx) with root=/dev/md0.
[codesyntax lang="bash"]

sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /mnt/boot/extlinux.conf
sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /boot/extlinux.conf

[/codesyntax]

12. Unmount the new root and reboot. Important: Remember to use the boot menu of your BIOS to boot from the second hard disk this time!
[codesyntax lang="bash"]

umount /mnt/proc
umount /mnt/sys
umount /mnt/dev
umount /mnt
reboot

[/codesyntax]

13. Once XenServer is up again, include /dev/sda in the array
[codesyntax lang="bash"]

sgdisk --typecode=1:fd00 /dev/sda
sgdisk --typecode=2:fd00 /dev/sda
sgdisk --typecode=3:fd00 /dev/sda
mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda2
mdadm -a /dev/md2 /dev/sda3

[/codesyntax]

14. The array needs to complete its initial build/synchronisation. That will take a while.
[codesyntax lang="bash"]

watch --interval=1 cat /proc/mdstat

[/codesyntax]
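While the rebuild runs, /proc/mdstat contains a recovery line with a completion percentage. A small sketch of pulling that figure out, run here against a fabricated mdstat fragment rather than the live file:

```shell
# Fabricated /proc/mdstat fragment as it looks mid-rebuild; on a live system
# read /proc/mdstat itself instead of this variable.
mdstat='md0 : active raid1 sda1[1] sdb1[0]
      4193216 blocks [2/2] [UU]
      [=>...................]  recovery = 12.6% (528000/4193216) finish=2.3min speed=26400K/sec'

# Extract the rebuild percentage; no output means no rebuild is in progress.
echo "$mdstat" | grep -o 'recovery = [0-9.]*%'
```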

15. Add /dev/md2 as a local SR to XenServer.
[codesyntax lang="bash"]

xe sr-create content-type=user device-config:device=/dev/md2 name-label="Local Storage" shared=false type=lvm

[/codesyntax]

Note: type=ext is required if you turned on thin provisioning in the installer; otherwise use type=lvm as in the command above.

Final notes:

* The second partition is used by XenServer for backups, which is why it's the same size as the first partition. If you boot from the install CD, an option shows up to "restore XenServer 6.0 from backup partition".

* I have created bitmaps for each RAID device as well. In the event of the host going down dirty, the arrays can require a resync; the bitmap of changed pages keeps that resync short.

Running cat /proc/mdstat will now show something like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[1] sdb1[0]
4193216 blocks [2/2] [UU]
bitmap: 128/128 pages [512KB], 16KB chunk

md1 : active raid1 sda2[1] sdb2[0]
4193216 blocks [2/2] [UU]
bitmap: 0/128 pages [0KB], 16KB chunk

md2 : active raid1 sda3[1] sdb3[0]
968372864 blocks [2/2] [UU]
bitmap: 0/231 pages [0KB], 2048KB chunk

* If you are installing on a server which cannot boot from the second disk, you must physically swap the two drives so the machine boots off sdb and uses /dev/md0 as root.

* If you are going to set up a Xen 6 installation over the network (via PXE) and the installation process hangs right after "Freeing unused kernel memory: 280k freed", pass the xencons parameter to the kernel (in /tftpboot/pxelinux.cfg/main.menu) as follows:

append xenserver6/xen.gz dom0_mem=752M com1=9600,8n1 console=com1,tty --- xenserver6/vmlinuz console=tty0 console=ttyS0,9600n8 xencons=ttyS0,9600n8 answerfile=http://netboot.vendio.com/xenserver6/answers.xml install --- xenserver6/install.img

* To speed up the RAID build process, the following command can be used (the default value is 1000):
[codesyntax lang="bash"]

echo 100000 > /proc/sys/dev/raid/speed_limit_min

[/codesyntax]

* TIP: You can use the attached script to automate the steps 1 to 9.

Good luck

The script: xen6.sh.zip

Source: http://blog.codeaddict.org/?p=5