On Sun, 2007-07-08 at 11:14 +0300, Avi Kivity wrote:
> Dave Hansen wrote:
> > I've noticed that some of my tests run *MUCH* slower in kvm-28 than in
> > 27.  I'm sure that wall time is pretty wonky in the guests, but it is
> > much slower in real-world time as well.
> >
> > Here's a little test to create a 32MB zeroed file with dd.  Here it is
> > from kvm-27 (this took ~5.5 seconds on my wristwatch):
> > 33554432 bytes transferred in 0.052050 seconds (644657845 bytes/sec)
> > 33554432 bytes transferred in 0.062933 seconds (533176451 bytes/sec)
> >
> > Here's the same thing from kvm-28 (~80 seconds on my wristwatch):
> > 33554432 bytes transferred in 38.607065 seconds (869127 bytes/sec)
> > 33554432 bytes transferred in 22.274318 seconds (1506418 bytes/sec)
> >
> > Same host kernel, same kvm kernel modules (from kvm-28), same guest
> > kernel, same command-line options, same disk image.
> >
> > Any ideas what is going on?
> >   
> 
> Is this repeatable?  I don't see anything in kvm-27..kvm-28 that 
> warrants such a regression.

Right now, it's completely repeatable.
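
(For reference, the test is just dd'ing /dev/zero into a 32MB file on
the guest's disk, something along the lines of:

	dd if=/dev/zero of=test.img bs=1M count=32

The exact block size there is a guess; the 33554432-byte totals in the
output above are what matter.)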

> Things to check:
> 
> - effect of pinning the vm onto one cpu (with 'taskset')

Tried this.  No effect that I can see at all.
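
(Something like:

	taskset -c 0 ./run-qemu -snapshot

i.e. pinning the whole qemu process to cpu 0; the numbers come out the
same either way.)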

> - does any counter in kvm_stat behave differently

Nothing really stands out.  The strangest thing to me is that the
counters are very stationary on the slow one.  It doesn't look like it
is spinning on anything, just that it is idle.  Its counters during the
dd look very much like they do when the guest is idle (first column is
the cumulative count, second the delta over the last sampling
interval, as I read kvm_stat's output):

the slow one:
$ KVM=28 HDA=qemu.raw ./run-qemu -snapshot
idle:
 efer_reload    156984     800
 exits          395676     800
 halt_exits       7165     100
 invlpg              0       0
 io_exits       123487     700
 irq_exits        1406       0
 irq_window        817       0
 light_exits    238692       0
 mmio_exits      24716       0
 pf_fixed       130354       0
 pf_guest        57309       0
 request_irq         0       0
 signal_exit       781       0
 tlb_flush        2267       0

dd'ing (mostly idle?):
 efer_reload    197669     856
 exits          455341     856
 halt_exits      11568     100
 invlpg              0       0
 io_exits       159006     740
 irq_exits        2125      14
 irq_window        975       2
 light_exits    257672       0
 mmio_exits      24716       0
 pf_fixed       146646       0
 pf_guest        59596       0
 request_irq         0       0
 signal_exit      1383      14
 tlb_flush        2455       0

burst during dd:
 efer_reload    205387     854
 exits          465166    2953
 halt_exits      12471      98
 invlpg              0       0
 io_exits       165605     733
 irq_exits        2341      30
 irq_window        985       2
 light_exits    259779    2099
 mmio_exits      24716       0
 pf_fixed       148731    2082
 pf_guest        59596       0
 request_irq         0       0
33554432 bytes transferred in 20.837575 seconds (1610285 bytes/sec)

the fast one:
$ KVM=27 HDA=qemu.raw ./run-qemu -snapshot
idle:
 efer_reload    164333     800
 exits          416798     800
 halt_exits       4570     100
 invlpg              0       0
 io_exits       132855     700
 irq_exits         838       0
 irq_window       2037       0
 light_exits    252465       0
 mmio_exits      24716       0
 pf_fixed       130325       0
 pf_guest        57275       0
 request_irq         0       0
 signal_exit       127       0
 tlb_flush        1864       0

during dd:
 efer_reload    243917   29963
 exits          523608   34205
 halt_exits       6766       0
 invlpg              0       0
 io_exits       207716   28730
 irq_exits        1079      52
 irq_window       4525    1232
 light_exits    279695    4235
 mmio_exits      24716       0
 pf_fixed       148656    4177
 pf_guest        59560       0
 request_irq         0       0
 signal_exit       154       7
 tlb_flush        2043       8
33554432 bytes transferred in 0.100797 seconds (332891147 bytes/sec)
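
(For scale: during the dd the fast guest stops halting entirely
(halt_exits delta of 0) and does ~28,700 io_exits per interval, while
the slow one keeps halting ~100 times per interval with only ~740
io_exits.  The slow guest really does seem to be sleeping rather than
working.)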

> If you are using qcow, maybe the effect is due to the first test hitting 
> a hole and the second being forced to read from disk.  I recommend doing 
> performance tests from a real partition or volume.

I switched over to a raw image and used -snapshot.  I don't have the
disk space to give them their own partitions right now.
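
(The raw image was created with qemu-img, something like:

	qemu-img create -f raw qemu.raw 4G

The size there is illustrative; run-qemu is just a local wrapper
script that hands $HDA to qemu as the disk image.)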

-- Dave

