Hi Avi,

I forgot to include some important syslog lines from the
host system. See the attachment.

On 03/10/10 14:15, Avi Kivity wrote:
> You have tons of iowait time, indicating an I/O bottleneck.

Is this disk I/O or network I/O? The rsync session puts a
high load on both, but I do not see how a high load on the
disk or block layer could make the virtual hosts
unresponsive, as shown in the host's syslog.
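To tell the two apart I plan to sample disk and network activity side by side while rsync runs (a rough sketch; it assumes the sysstat package is installed, and the 2-second interval is just my choice):

```shell
# Extended per-device disk stats every 2 seconds:
# watch %util and await on the RAID device
iostat -x 2

# In a second terminal, per-interface network throughput,
# to see how busy the bonded NICs are
sar -n DEV 2
```

If `%util` sits near 100% while the guests stall, that would point at the block layer rather than the network.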

> What filesystem are you using for the host?  Are you using qcow2 or raw
> access?  What's the qemu command line.

The host filesystem is ext3 and the images are qcow2.
Currently I am testing with reiserfs on the host system;
performance seems to be worse than with ext3.

Here is the kvm command line (as generated by libvirt):

/usr/bin/kvm -S -M pc-0.11 -enable-kvm -m 1024 -smp 1 -name test0.0 \
        -uuid 74e71149-4baf-3af0-9c99-f4e50273296f \
        -monitor unix:/var/lib/libvirt/qemu/test0.0.monitor,server,nowait \
        -boot c -drive if=ide,media=cdrom,bus=1,unit=0 \
        -drive file=/export/storage/test0.0.img,if=virtio,boot=on \
        -net nic,macaddr=00:16:36:94:7e:f3,vlan=0,model=virtio,name=net0 \
        -net tap,fd=60,vlan=0,name=hostnet0 -serial pty -parallel none \
        -usb -vnc -k en-us -vga cirrus -balloon virtio

>> How many virtual machines would you assume I could run on a
>> host with 64 GByte RAM, 2 quad cores, a bonding NIC with
>> 4*1Gbit/sec and a hardware RAID? Each vhost is supposed to
>> get 4 GByte RAM and 1 CPU.
> 15 guests should fit comfortably, more with ksm running if the workloads
> are similar, or if you use ballooning.

15 vhosts would be nice (15 * 4 GByte = 60 GByte, leaving about
4 GByte for the host). KSM is in my kernel, but not in my qemu-kvm build.

> Here the problem is likely the host filesystem and/or I/O scheduler.
> The optimal layout is placing guest disks in LVM volumes, and accessing
> them with -drive file=...,cache=none.  However, file-based access should
> also work.

I will try LVM tomorrow, once the reiserfs test is completed.
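If I understand the suggestion correctly, the migration would look roughly like this (a sketch only; "vg0" is a placeholder volume group name and the 10G size is an assumption, not taken from my setup):

```shell
# Create a logical volume to hold the guest disk
# (vg0 is a hypothetical volume group on the hardware RAID)
lvcreate -L 10G -n test0.0 vg0

# Convert the existing qcow2 image to raw, written directly
# into the logical volume
qemu-img convert -O raw /export/storage/test0.0.img /dev/vg0/test0.0
```

and then the guest would be started with the drive option from your mail, i.e. something like `-drive file=/dev/vg0/test0.0,if=virtio,cache=none,boot=on` instead of the current file-backed qcow2 drive.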

Many thanx


Attachment: syslog.gz
Description: application/gzip
