Hello everyone,

I have a KVM instance running, managed by OpenStack. The command line used (as seen in ps auxwww) is:

/usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 4096 \
  -smp 4,sockets=4,cores=1,threads=1 -name instance-0000001b \
  -uuid 598878a6-a5dc-4e15-ab17-3ab8b954f00a -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000001b.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown \
  -drive file=/var/lib/nova/instances/instance-0000001b/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fd=21,id=hostnet0 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:75:90:da,bus=pci.0,addr=0x3 \
  -chardev file,id=charserial0,path=/var/lib/nova/instances/instance-0000001b/console.log \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 \
  -usb -device usb-tablet,id=input0 -vnc 0.0.0.0:2 -k en-us -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
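For reference, since cache=none shows up in that command line: in the libvirt domain XML the cache and I/O mode for that disk are set on the driver element, roughly like the fragment below. The io='native' attribute is my assumption of something worth experimenting with, not what Nova generated for us:

```xml
<disk type='file' device='disk'>
  <!-- io='native' is a hypothetical tweak to try, not our current setting -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/nova/instances/instance-0000001b/disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```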

I've noticed that even a plain apt-get install takes a long time to read the
package lists and unpack packages. I then tried to scp a file from an OpenVZ
VM on our old servers to this VM; both VMs have 4 GB of RAM and 4 cores, and
both host machines have an i7 CPU and 16 GB of RAM.

The file I was trying to transfer is an 86 GB tarball. During the transfer
the source VM had a load of 0.7, while the destination VM was around 4-5 and
was incredibly slow. I ran iostat -xN and got this:

Linux 3.2.0-29-virtual (closr)  08/10/2012      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           7.29    0.03    2.65   29.57    0.04   60.43

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.28     0.86    3.38   41.49    92.50 20128.00   901.15    97.54 2173.44  379.09 2319.82  16.75  75.15

A w_await of ~2300 ms seems very high to me. It was also taking about 20
seconds to open a new screen and spawn a shell, yet the CPU load wasn't
high, about 50% of one core.
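To take scp and the network out of the picture, a quick synchronous write test run both inside the guest and on the host might help isolate the disk path. A rough sketch (the file name and 64 MiB size are just placeholders I picked):

```shell
#!/bin/sh
# Rough write-throughput check: write 64 MiB and force it to disk with
# fdatasync, so the speed dd reports includes the actual flush, not just
# a dump into the page cache. Run the same thing in the guest and on the
# host and compare the numbers.
TESTFILE=$(mktemp ./ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

dd prints the throughput on stderr; a large gap between the guest and host figures would point at the qcow2/virtio path rather than the array itself.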

The instance disk file is on a RAID1 md array, and iostat -xN on the host shows:

Linux 3.2.0-27-generic (server2.visup.it)       08/10/2012      _x86_64_        (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.22    0.00    0.53    1.18    0.00   97.07

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.66    14.74    1.23   26.54    91.37  1844.22   139.42     0.62   22.44   14.23   22.82   9.30  25.84
sda               0.65    14.74    1.25   26.53    87.71  1844.22   139.07     0.63   22.56   14.86   22.92   9.37  26.03
md3               0.00     0.00    3.85   21.28   178.98  1728.10   151.80     0.00    0.00    0.00    0.00   0.00   0.00
md1               0.00     0.00    0.00    0.00     0.01     0.00     7.97     0.00    0.00    0.00    0.00   0.00   0.00
md127             0.00     0.00    0.00    0.00     0.01     0.00     7.70     0.00    0.00    0.00    0.00   0.00   0.00
md2               0.00     0.00    0.00    0.00     0.01     0.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
md126             0.00     0.00    0.00   18.19     0.01   114.25    12.56     0.00    0.00    0.00    0.00   0.00   0.00
md0               0.00     0.00    0.01    0.02     0.03     0.08     8.00     0.00    0.00    0.00    0.00   0.00   0.00
main-root         0.00     0.00    3.85   21.10   178.98  1728.10   152.88     2.65  106.18   15.10  122.79   5.09  12.71

The / filesystem is mounted on the main-root LVM volume.
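One more thing I plan to check: whether the qcow2 image is still mostly unallocated, since the first write to each new cluster costs extra metadata I/O. Comparing the file's apparent size with the blocks actually allocated shows this. A sketch below, demonstrated on a throwaway sparse file; on the real system I'd point stat at the instance disk path instead:

```shell
#!/bin/sh
# Compare apparent size vs blocks actually allocated. Demonstrated on a
# throwaway sparse file; substitute the instance disk path to see how
# much of the qcow2 image has really been allocated so far.
F=$(mktemp ./sparse.XXXXXX)
truncate -s 1G "$F"   # 1 GiB apparent size, almost nothing allocated yet
stat -c 'apparent=%s bytes, allocated=%b blocks of %B bytes' "$F"
rm -f "$F"
```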

Any idea?

Thanks in advance

Best Regards