Hi All,

As you know, during my first week I've been working on setting up
a live migration test environment (not that I've run many tests
yet).

I thought it would be good to document what I did / how to
create a similar setup. This is intended to be put at:
http://spice-space.org/page/Howtos

But before I put it there I was wondering if anyone has any
comments / corrections?

Here is my draft version of the doc:

HowTo set up a spice live migration test environment
----------------------------------------------------

This document describes how to set up a realistic environment
for doing live migration of qemu virtual machines (vms)
which are using the qxl (spice) vga device and have a spice
client connected.

The described setup does all of this without using RHEV-VM; it is
mostly meant for people doing spice development who want to test
live migration without using RHEV-VM.


1. Needed hardware
------------------

For a *realistic* setup you will need at least 4 machines:
1) Hypervisor A, this machine will be running vm's
2) Hypervisor B, this machine will be running vm's
3) Storage server, for live migration storage shared between
   the hypervisors is needed; in a realistic environment this
   will live on a separate machine or a SAN
4) A client

These machines need to be connected to each other using a
network with good performance, at a minimum a 100Mbit switched
network. If you want to go fancy you can create a separate
storage LAN and a normal network LAN.

The 2 hypervisors need to be 64 bit machines with hardware
virtualisation capable cpus. Preferably they should both have
AMD cpus or both Intel cpus. If one of them has an AMD cpu and
the other an Intel cpu you can only use recent qemu-kvm versions.
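
You can check whether a machine's cpu has the needed hardware
virtualisation support by looking for the vmx (Intel) or svm (AMD)
flag in /proc/cpuinfo, for example:
egrep -c '(vmx|svm)' /proc/cpuinfo

If this prints a number greater than 0 the cpu has hardware
virtualisation support.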


2. Installing the machines
--------------------------

1) The hypervisors. You may want to make the hypervisors
   multiboot between for example EL-5 x86_64 and EL-6 x86_64,
   so that you can test live migration with EL-5 hypervisors
   with an EL-6 spice client connected or vice versa. Note that
   live migration between different version hypervisors is not
   supported for all possible combinations.

   Once you have the base OSes installed you can either install
   the spice client, server and qemu version from packages or
   build them from source (see the building spice howto).

2) The storage server. The storage server can serve vm disk
   images either over nfs or export them as iscsi disks. Since
   iscsi disks perform better, this document describes how to
   do things using iscsi.

   For the storage server you can install any linux you like as
   the base os. Before installing the base os you need to think
   about where the images will be stored: will they be normal
   files, or are you going to make them disk partitions / lvm
   logical volumes? Files are the most flexible; performance
   wise disk partitions or logical volumes are better. Logical
   volumes also allow a decent amount of flexibility, so those
   are advised.
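
   If you go for logical volumes you can, for example, create a 20G
   logical volume per vm disk image like this (the volume group name
   vg_storage is just an example, use whatever volume group you created
   during installation):
   lvcreate -L 20G -n vm1-disk vg_storage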

   Once the base os is installed, it is time to configure the
   machine to serve a number of iscsi targets. First install
   scsi-target-utils, for example:
   yum install scsi-target-utils

   If you've chosen to use files you need to create empty image
   files using for example dd:
   dd if=/dev/zero of=image.img bs=1024 count=20000000

   Now edit /etc/tgt/targets.conf and define as many targets there
   as you want to have vm disk images. The file includes examples
   of how to define a target. Once the targets are defined restart
   the tgtd service, for example on Fedora / EL:
   service tgtd restart
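
   A minimal target definition in targets.conf, assuming the logical
   volume created above as backing store (adjust the path to your own
   image file or volume), could look like this:

   <target iqn.localdomain.shalem:iscsi1>
       backing-store /dev/vg_storage/vm1-disk
   </target>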

3) The client. You likely want to make the client machine
   multiboot, so that you can test with both the linux and Windows
   clients and so that you can test different client versions.


3. Prepping the hypervisors
---------------------------

Besides installing spice server and qemu both hypervisors require
some more special preparations. They need to connect to the iscsi
targets on the storage server and their network settings need to be
configured so that qemu can run in bridged mode, which is necessary
to keep the vm's network connection working after migrating it to a
different machine.

1) Preparing the network configuration to allow vm's to connect in
   bridged mode. On Fedora / EL this can be done by modifying the
   relevant /etc/sysconfig/network-scripts/ifcfg-eth# to look like this:
   DEVICE=eth#
   TYPE=Ethernet
   HWADDR=xx:xx:xx:xx:xx:xx
   ONBOOT=yes
   NM_CONTROLLED=no
   USERCTL=no
   BRIDGE=br0

   And then create a /etc/sysconfig/network-scripts/ifcfg-br0 file like
   this:

   DEVICE=br0
   TYPE=Bridge
   ONBOOT=yes
   NM_CONTROLLED=no
   USERCTL=no
   DELAY=0
   BOOTPROTO=none
   IPADDR=192.168.1.101
   PREFIX=24
   GATEWAY=192.168.1.1
   DNS1=192.168.1.1
   IPV6INIT=no

   Adjust the IP settings to the settings the relevant eth# used to have.
   Then restart the network service:
   service network restart

   On Fedora you also need to enable the network service to run on reboot,
   using for example "ntsysv".
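   Alternatively you can enable it from the command line with:
   chkconfig network on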

   Last you will need to create a /etc/qemu-ifup script, with the following
   contents:

   #!/bin/sh

   echo "Executing /etc/qemu-ifup"
   echo "Bringing up $1 for bridged mode..."
   ifconfig $1 0.0.0.0 promisc up
   echo "Adding $1 to br0..."
   brctl addif br0 $1
   sleep 2

   Don't forget to chmod +x it!
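
   At this point you can verify the bridge configuration with:
   brctl show

   Your eth# should be listed as an interface of br0; once a vm is
   running its tap# interface will show up there as well.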

2) Make the iscsi exported disk images available as iscsi disks on *both*
   hypervisors. Connecting to an iscsi disk consists of 2 steps:

   1) Discover the available images:
      iscsiadm -m discovery -t sendtargets -p 192.168.1.100

      Where 192.168.1.100 is the ip address (or the hostname) of the
      storage server. This gives for example the following output:
      192.168.1.100:3260,1 iqn.localdomain.shalem:iscsi1
      192.168.1.100:3260,1 iqn.localdomain.shalem:iscsi2

   2) Login to specific images, for example:
      iscsiadm -m node -T iqn.localdomain.shalem:iscsi1 -l

      Note that at least on Fedora / EL you need to do this only once,
      if the iscsi boot service is enabled (the default) the system
      will automatically re-connect to these iscsi "disks" on reboot.
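
      After logging in, the iscsi disks show up under /dev/disk/by-path.
      You can verify this with for example:
      ls -l /dev/disk/by-path/ | grep iscsi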


4. Starting a vm
----------------

Now it is time to start a vm with qxl vga. The basic form is like this:
sudo qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -name foobar \
-drive file=/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.localdomain.shalem:iscsi1-lun-1,media=disk \
-net nic,macaddr=52:54:00:7a:b4:7c,vlan=0 -net tap,vlan=0 \
-vga qxl -spice port=5930,disable-ticketing \
-monitor stdio

You should change the -name, file=, macaddr= and port= parameters for
each additional vm you want to start. Note the /dev/disk/by-path/ method
of pointing at the iscsi target which holds the vm's disk image. The
drive must be specified this way so that the filename will work on both
hypervisors, which is a must for migration to work.

When you first start the machine you will likely want to add an option
for it to find installation media, usually using the -cdrom parameter, see
man qemu.
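
For example, to boot from an installation iso (the path is just an
example, point it at wherever your iso lives):
-cdrom /path/to/install-media.iso -boot d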

If your virtual machine is going to run windows xp, you likely want to
add ",model=rtl8139" to the -net nic,... parameter as the default is
to emulate an e1000 and xp does not have a driver for that.

If your virtual machine is going to run a recent linux, you may want to
add ",if=virtio" to the -drive file=... parameter and ",model=virtio" to
the -net nic,... parameter. With older qemu's you also need to add
",boot=on" to the -drive parameter to boot from a virtio disk.
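
For example, with virtio the -drive and -net lines of the command above
would become (add ",boot=on" to the -drive line for older qemu's):
-drive file=/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.localdomain.shalem:iscsi1-lun-1,media=disk,if=virtio \
-net nic,macaddr=52:54:00:7a:b4:7c,model=virtio,vlan=0 -net tap,vlan=0 \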


5. Connecting the client
------------------------

Connecting to the vm using the linux client is simple:
spicec -h 192.168.1.101 -p 5930

Where the -h parameter is the ip address or hostname for the hypervisor
and -p is the port you specified for spice when starting the vm.

Note that with older spice versions spicec is installed under /usr/libexec
and thus needs to be started as:
/usr/libexec/spicec -h 192.168.1.101 -p 5930


6. Migrating the vm to the other hypervisor
-------------------------------------------

To migrate the vm to the other hypervisor, you first need to start
a vm on the other hypervisor ready to receive the vm state info from
the running vm. Use the exact same qemu cmdline as you used to start
the original (source) vm and add "-incoming tcp:0:4444" at the end
of the cmdline.
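
For example:
sudo qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -name foobar \
-drive file=/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.localdomain.shalem:iscsi1-lun-1,media=disk \
-net nic,macaddr=52:54:00:7a:b4:7c,vlan=0 -net tap,vlan=0 \
-vga qxl -spice port=5930,disable-ticketing \
-monitor stdio -incoming tcp:0:4444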

Then in the monitor of the source vm (the terminal / console from
which the source vm was started), type the following commands:

"migrate_set_speed 5m"

This limits the speed with which the migration data will be sent
from one hypervisor to the other, otherwise it takes all bandwidth
and the user experience in the client suffers.

spice_migrate_info 192.168.1.102 5930

This tells qemu to tell the spice client to get ready to stop
talking to the current vm (192.168.1.101:5930) and to
connect to the new vm (192.168.1.102:5930) as soon as the
migration is done.

migrate -d tcp:192.168.1.102:4444

This starts the actual migration. Because of the -d flag the command
returns immediately while the migration continues in the background.
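
You can follow the progress from the monitor of the source vm with:

info migrate

Once this reports that the migration has completed, the source vm can
be quit and the spice client will be talking to the vm on the other
hypervisor.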


Regards,

Hans
