[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2014-04-15 Thread Bryan Quigley
QEMU 1.7 was released, Quantal has about 10 days of support left, and
Raring is EOL.

** Changed in: qemu
   Status: Fix Committed => Fix Released

** Changed in: qemu-kvm (Ubuntu Quantal)
   Status: Triaged => Invalid

** Changed in: qemu-kvm (Ubuntu Raring)
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  Fix Released
Status in “qemu-kvm” package in Ubuntu:
  Fix Released
Status in “qemu-kvm” source package in Precise:
  Fix Released
Status in “qemu-kvm” source package in Quantal:
  Invalid
Status in “qemu-kvm” source package in Raring:
  Invalid
Status in “qemu-kvm” source package in Saucy:
  Fix Released

Bug description:
  SRU Justification
  [Impact]
   * Users of QEMU that save their memory state using savevm/loadvm, or that
migrate a VM, see worse performance after the migration/loadvm. To work around
the issue the VM must be completely rebooted. Optimally, we should be able to
restore a VM's memory state and expect no performance penalty.

  [Test Case]

   * savevm/loadvm:
     - Create a VM and install a test suite such as lmbench.
     - Get numbers right after boot and record them.
     - Open up the qemu monitor and type the following:
   stop
   savevm 0
   loadvm 0
   c
     - Measure performance and record numbers.
     - Compare if numbers are within margin of error.
   * migrate:
     - Create VM, install lmbench, get numbers.
     - Open up qemu monitor and type the following:
   stop
   migrate exec:dd of=~/save.vm
   quit
     - Start a new VM using qemu but add the following argument:
   -incoming exec:dd if=~/save.vm
     - Run performance test and compare.

   If performance measured is similar then we pass the test case.
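
   As a concrete illustration of the migrate variant above, here is a rough
   shell sketch. The image path, memory size and disk options are assumptions
   for illustration only; on the hosts described below the binary is
   /usr/bin/kvm rather than qemu-system-x86_64, and the monitor commands are
   the ones listed in the test case.

     #!/bin/sh
     IMG=/var/lib/libvirt/images/test.img   # assumed guest disk image
     SAVE=$HOME/save.vm

     # 1) Boot the guest with an HMP monitor on stdio, run lmbench inside the
     #    guest and record the numbers, then in the monitor:
     #        stop
     #        migrate exec:dd of=~/save.vm
     #        quit
     qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
         -drive file=$IMG,if=virtio,format=raw -monitor stdio

     # 2) Restore the saved state into a fresh QEMU process and re-run
     #    lmbench; the numbers should match step 1 within the margin of error.
     qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
         -drive file=$IMG,if=virtio,format=raw -monitor stdio \
         -incoming "exec:dd if=$SAVE"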

  [Regression Potential]

   * The fix is a backport of two upstream patches:
  ad0b5321f1f797274603ebbe20108b0750baee94
  211ea74022f51164a7729030b28eec90b6c99a08

  One patch allows QEMU to use THP if it is enabled.
  The other patch changes the logic so that pages are not memset to zero when
loading memory for the VM (on an incoming migration).

   * I've also run the qa-regression-testing test-qemu.py script and it
  passes all tests.

  [Additional Information]

  Kernels from 3.2 onwards are affected, and all of them have
  CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y set, so enabling THP via madvise is
  applicable.
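
  One way to check on the host whether the guest RAM is actually backed by
  transparent huge pages before and after a restore is shown below. The pidof
  pattern is an assumption for a single-guest host; on these systems the QEMU
  process is /usr/bin/kvm.

    # Host THP policy; "madvise" or "always" must be enabled for QEMU's
    # MADV_HUGEPAGE hint to have any effect.
    cat /sys/kernel/mm/transparent_hugepage/enabled

    # Total anonymous huge pages backing the QEMU process; a large drop after
    # an incoming migration matches the regression described in this bug.
    QEMU_PID=$(pidof kvm || pidof qemu-system-x86_64)
    grep AnonHugePages /proc/$QEMU_PID/smaps | awk '{s += $2} END {print s " kB"}'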

  --

  I have 2 physical hosts running Ubuntu Precise, with qemu-kvm 1.0+noroms-
  0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
  built for Precise with pbuilder). I attempted to build qemu-1.3.0 debs
  from source to test, but libvirt seems to have an issue with it that I
  haven't been able to track down yet.

   I'm seeing a performance degradation after live migration on Precise,
  but not Lucid.  These hosts are managed by libvirt (tested both
  0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula.  I
  don't seem to have this problem with lucid guests (running a number of
  standard kernels, 3.2.5 mainline and backported linux-
  image-3.2.0-35-generic as well.)

  I first noticed this problem with phoronix doing compilation tests,
  and then tried lmbench where even simple calls experience performance
  degradation.

  I've attempted to post to the kvm mailing list, but so far the only
  suggestion was it may be related to transparent hugepages not being
  used after migration, but this didn't pan out.  Someone else has a
  similar problem here -
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592

  qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
  Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
  -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
  -chardev
  socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
  piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
  file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
  disk0,format=raw,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -drive
  file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
  ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
  =drive-ide0-0-0,id=ide0-0-0 -netdev
  tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
  pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

  Disk backend is LVM running on SAN via FC connection (using symlink
  from /var/lib/one/datastores/0/2/disk.0 above)

  ubuntu-12.04 - first boot
  

[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-11-27 Thread Paolo Bonzini
Fix will be part of QEMU 1.7.0 (commit fc1c4a5, migration: drop
MADVISE_DONT_NEED for incoming zero pages, 2013-10-24).
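
In a qemu.git checkout this can be cross-checked with plain git, for example:

  git describe --contains fc1c4a5   # should name a v1.7.0 (pre-release) tag
  git show --stat fc1c4a5           # the zero-page commit referenced above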


** Changed in: qemu
   Status: New => Fix Committed


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-11-08 Thread Chris J Arges
** Changed in: qemu-kvm (Ubuntu Quantal)
 Assignee: Chris J Arges (arges) => (unassigned)

** Changed in: qemu-kvm (Ubuntu Raring)
 Assignee: Chris J Arges (arges) => (unassigned)


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-24 Thread Launchpad Bug Tracker
This bug was fixed in the package qemu-kvm - 1.0+noroms-0ubuntu14.12

---
qemu-kvm (1.0+noroms-0ubuntu14.12) precise-proposed; urgency=low

  * migration-do-not-overwrite-zero-pages.patch,
call-madv-hugepage-for-guest-ram-allocations.patch:
Fix performance degradation after migrations, and savevm/loadvm.
(LP: #1100843)
 -- Chris J Arges chris.j.ar...@ubuntu.com   Wed, 02 Oct 2013 16:26:27 -0500

** Changed in: qemu-kvm (Ubuntu Precise)
   Status: Fix Committed => Fix Released


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-11 Thread Chris J Arges
I have verified this on my local machine using virt-manager's save
memory function, savevm/loadvm via the qemu monitor, and migrate via the
qemu monitor.
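
For anyone reproducing this through libvirt instead of the raw monitor, the
virsh equivalent of the save/restore cycle is roughly as follows (the domain
name one-2 is taken from the command line in the description; this is purely
illustrative):

  # Save the guest's memory state to a file (the guest stops), then restore
  # it; lmbench numbers inside the guest should be unchanged afterwards.
  virsh save one-2 /var/tmp/one-2.save
  virsh restore /var/tmp/one-2.save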

** Tags removed: verification-needed
** Tags added: verification-done


Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-10 Thread Peter Lieven

On 07.10.2013 11:55, Paolo Bonzini wrote:
> On 07/10/2013 11:49, Peter Lieven wrote:
>>> It's in general not easy to do this if you take non-x86 targets into
>>> account.
>> What about the dirty way to zero out all non zero pages at the beginning of
>> ram_load?
> I'm not sure I follow?

Something like this, for each RAM block, at the beginning of ram_load:

 
+    base = memory_region_get_ram_ptr(block->mr);
+
+    for (offset = 0; offset < block->length;
+         offset += TARGET_PAGE_SIZE) {
+        if (!is_zero_page(base + offset)) {
+            memset(base + offset, 0x00, TARGET_PAGE_SIZE);
+        }
+    }
+

Then add a capability skip_zero_pages which does not send them on the source
and enables this zeroing. It would also be possible to skip the zero check
for each incoming compressed page.

Peter





[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-10 Thread Brian Murray
Hello Mark, or anyone else affected,

Accepted qemu-kvm into precise-proposed. The package will build now and
be available at http://launchpad.net/ubuntu/+source/qemu-kvm/1.0+noroms-
0ubuntu14.12 in a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to
enable and use -proposed.  Your feedback will aid us getting this update
out to other Ubuntu users.
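
As a rough sketch of what the wiki page above describes, enabling the
-proposed pocket on a Precise test host and pulling in the candidate package
looks roughly like this (archive URL and component list as documented for
Precise; adjust for mirrors):

  echo "deb http://archive.ubuntu.com/ubuntu/ precise-proposed main restricted universe multiverse" | \
      sudo tee /etc/apt/sources.list.d/precise-proposed.list
  sudo apt-get update
  # Install only the package under verification from -proposed.
  sudo apt-get install qemu-kvm/precise-proposed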

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested, and change the tag
from verification-needed to verification-done. If it does not fix the
bug for you, please add a comment stating that, and change the tag to
verification-failed.  In either case, details of your testing will help
us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!

** Changed in: qemu-kvm (Ubuntu Precise)
   Status: In Progress => Fix Committed

** Tags added: verification-needed


Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Peter Lieven

On 06.10.2013 15:57, Zhang Haoyu wrote:
>> From my testing this has been fixed in the saucy version (1.5.0) of
>> qemu. It is fixed by this patch:
>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>
>> However later in the history this commit was reverted, and again broke
>> this. The other commit that fixes this is:
>> 211ea74022f51164a7729030b28eec90b6c99a08
>>
> See below post, please.
> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html


I would still like to fix qemu to not load roms etc. if we set up a migration 
target. In this case
we could drop the madvise, skip the checking for zero pages and also could 
avoid sending
zero pages at all. It would be the cleanest solution.

Peter



Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Paolo Bonzini
On 07/10/2013 08:38, Peter Lieven wrote:
> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>> From my testing this has been fixed in the saucy version (1.5.0) of
>>> qemu. It is fixed by this patch:
>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>
>>> However later in the history this commit was reverted, and again broke
>>> this. The other commit that fixes this is:
>>> 211ea74022f51164a7729030b28eec90b6c99a08
>>>
>> See below post, please.
>> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
>
> I would still like to fix qemu to not load roms etc. if we set up a
> migration target. In this case
> we could drop the madvise, skip the checking for zero pages and also
> could avoid sending
> zero pages at all. It would be the cleanest solution.

It's in general not easy to do this if you take non-x86 targets into
account.

Paolo




Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Peter Lieven

On 07.10.2013 11:37, Paolo Bonzini wrote:
> On 07/10/2013 08:38, Peter Lieven wrote:
>> On 06.10.2013 15:57, Zhang Haoyu wrote:
>>>> From my testing this has been fixed in the saucy version (1.5.0) of
>>>> qemu. It is fixed by this patch:
>>>> f1c72795af573b24a7da5eb52375c9aba8a37972
>>>>
>>>> However later in the history this commit was reverted, and again broke
>>>> this. The other commit that fixes this is:
>>>> 211ea74022f51164a7729030b28eec90b6c99a08
>>>>
>>> See below post, please.
>>> https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html
>>
>> I would still like to fix qemu to not load roms etc. if we set up a
>> migration target. In this case
>> we could drop the madvise, skip the checking for zero pages and also
>> could avoid sending
>> zero pages at all. It would be the cleanest solution.
>
> It's in general not easy to do this if you take non-x86 targets into
> account.

What about the dirty way to zero out all non zero pages at the beginning of
ram_load?

Peter




Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Paolo Bonzini
On 07/10/2013 11:49, Peter Lieven wrote:
>> It's in general not easy to do this if you take non-x86 targets into
>> account.
> What about the dirty way to zero out all non zero pages at the beginning of
> ram_load?

I'm not sure I follow?

Paolo



[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Chris J Arges
I found that two patches need to be backported to solve this issue:

ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08

I've added the necessary bits into precise and tried a few tests:
1) Measure performance before and after savevm/loadvm.
2) Measure performance before and after a migrate to the same host.

In both cases the performance measured by something like lmbench was the same 
as the previous run.
A test build is available here:
http://people.canonical.com/~arges/lp1100843/precise_v2/
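
For reference, the "Simple syscall/read/write/open" figures quoted in the
description come from lmbench's lat_syscall. A minimal way to collect them
inside the guest before and after each savevm/loadvm or migrate cycle is
shown below; it assumes the lmbench binaries are on PATH, which depends on
how the package installs them.

  # Prints lines such as "Simple syscall: 0.0527 microseconds".
  for op in null read write open; do
      lat_syscall $op
  done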

** Patch added: fix-lp1100843-precise.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+attachment/3864309/+files/fix-lp1100843-precise.debdiff

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “qemu-kvm” package in Ubuntu:
  Fix Released
Status in “qemu-kvm” source package in Precise:
  In Progress
Status in “qemu-kvm” source package in Quantal:
  Triaged
Status in “qemu-kvm” source package in Raring:
  Triaged
Status in “qemu-kvm” source package in Saucy:
  Fix Released

Bug description:
  I have 2 physical hosts running Ubuntu Precise, with qemu-kvm 1.0+noroms-
  0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
  built for Precise with pbuilder). I attempted to build qemu-1.3.0 debs
  from source to test, but libvirt seems to have an issue with it that I
  haven't been able to track down yet.

   I'm seeing a performance degradation after live migration on Precise,
  but not Lucid.  These hosts are managed by libvirt (tested both
  0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula.  I
  don't seem to have this problem with lucid guests (running a number of
  standard kernels, 3.2.5 mainline and backported linux-
  image-3.2.0-35-generic as well.)

  I first noticed this problem with phoronix doing compilation tests,
  and then tried lmbench where even simple calls experience performance
  degradation.

  I've attempted to post to the kvm mailing list, but so far the only
  suggestion was it may be related to transparent hugepages not being
  used after migration, but this didn't pan out.  Someone else has a
  similar problem here -
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592

  qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
  Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1
  -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
  -chardev
  socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
  piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
  file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
  disk0,format=raw,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -drive
  file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
  ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
  =drive-ide0-0-0,id=ide0-0-0 -netdev
  tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
  pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

  Disk backend is LVM running on SAN via FC connection (using symlink
  from /var/lib/one/datastores/0/2/disk.0 above)

  
  ubuntu-12.04 - first boot
  ==
  Simple syscall: 0.0527 microseconds
  Simple read: 0.1143 microseconds
  Simple write: 0.0953 microseconds
  Simple open/close: 1.0432 microseconds

  Using phoronix pts/compuational
  ImageMagick - 31.54s
  Linux Kernel 3.1 - 43.91s
  Mplayer - 30.49s
  PHP - 22.25s

  
  ubuntu-12.04 - post live migration
  ==
  Simple syscall: 0.0621 microseconds
  Simple read: 0.2485 microseconds
  Simple write: 0.2252 microseconds
  Simple open/close: 1.4626 microseconds

  Using phoronix pts/compilation
  ImageMagick - 43.29s
  Linux Kernel 3.1 - 76.67s
  Mplayer - 45.41s
  PHP - 29.1s

  
  I don't have phoronix results for 10.04 handy, but they were within 1% of 
each other...

  ubuntu-10.04 - first boot
  ==
  Simple syscall: 0.0524 microseconds
  Simple read: 0.1135 microseconds
  Simple write: 0.0972 microseconds
  Simple open/close: 1.1261 microseconds

  
  ubuntu-10.04 - post live migration
  ==
  Simple syscall: 0.0526 microseconds
  Simple read: 0.1075 microseconds
  Simple write: 0.0951 microseconds
  Simple open/close: 1.0413 microseconds

To manage notifications about this bug go to:

[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Chris J Arges
** Description changed:

+ SRU Justification
+ [Impact] 
+  * Users of QEMU that save their memory states using savevm/loadvm or migrate 
experience worse performance after the migration/loadvm. To workaround these 
issues VMs must be completely rebooted. Optimally we should be able to restore 
a VM's memory state an expect no performance issue.   
+ 
+ [Test Case]
+ 
+  * savevm/loadvm:
+- Create a VM and install a test suite such as lmbench.
+- Get numbers right after boot and record them.
+- Open up the qemu monitor and type the following:
+  stop
+  savevm 0
+  loadvm 0
+  c
+- Measure performance and record numbers.
+- Compare if numbers are within margin of error.
+  * migrate:
+- Create VM, install lmbench, get numbers.
+- Open up qemu monitor and type the following:
+  stop
+  migrate exec:dd of=~/save.vm
+  quit
+- Start a new VM using qemu but add the following argument:
+  -incoming exec:dd if=~/save.vm
+- Run performance test and compare.
+  
+  If performance measured is similar then we pass the test case. 
+ 
+ [Regression Potential]
+ 
+  * The fix is a backport of two upstream patches:
+ ad0b5321f1f797274603ebbe20108b0750baee94
+ 211ea74022f51164a7729030b28eec90b6c99a08
+ 
+ On patch allows QEMU to use THP if its enabled.
+ The other patch changes logic to not memset pages to zero when loading memory 
for the vm (on an incoming migration).
+ 
+ --
+ 
  I have 2 physical hosts running Ubuntu Precise.  With 1.0+noroms-
  0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
  built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
  from source to test, but libvirt seems to have an issue with it that I
  haven't been able to track down yet.
  
-  I'm seeing a performance degradation after live migration on Precise,
+  I'm seeing a performance degradation after live migration on Precise,
  but not Lucid.  These hosts are managed by libvirt (tested both
  0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula.  I
  don't seem to have this problem with lucid guests (running a number of
  standard kernels, 3.2.5 mainline and backported linux-
  image-3.2.0-35-generic as well.)
  
  I first noticed this problem with phoronix doing compilation tests, and
  then tried lmbench where even simple calls experience performance
  degradation.
  
  I've attempted to post to the kvm mailing list, but so far the only
  suggestion was it may be related to transparent hugepages not being used
  after migration, but this didn't pan out.  Someone else has a similar
  problem here -
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
  
  qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
  Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
  f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
  -chardev
  socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
  piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
  file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
  disk0,format=raw,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -drive
  file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
  ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
  =drive-ide0-0-0,id=ide0-0-0 -netdev
  tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
  pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
  virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  
  Disk backend is LVM running on SAN via FC connection (using symlink from
  /var/lib/one/datastores/0/2/disk.0 above)
  
- 
  ubuntu-12.04 - first boot
  ==
  Simple syscall: 0.0527 microseconds
  Simple read: 0.1143 microseconds
  Simple write: 0.0953 microseconds
  Simple open/close: 1.0432 microseconds
  
  Using phoronix pts/compuational
  ImageMagick - 31.54s
  Linux Kernel 3.1 - 43.91s
  Mplayer - 30.49s
  PHP - 22.25s
- 
  
  ubuntu-12.04 - post live migration
  ==
  Simple syscall: 0.0621 microseconds
  Simple read: 0.2485 microseconds
  Simple write: 0.2252 microseconds
  Simple open/close: 1.4626 microseconds
  
  Using phoronix pts/compilation
  ImageMagick - 43.29s
  Linux Kernel 3.1 - 76.67s
  Mplayer - 45.41s
  PHP - 29.1s
  
- 
- I don't have phoronix results for 10.04 handy, but they were within 1% of 
each other...
+ I don't have phoronix results for 10.04 handy, but they were within 1%
+ of each other...
  
  ubuntu-10.04 - first boot
  ==
  Simple syscall: 0.0524 microseconds
  Simple read: 0.1135 

[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Chris J Arges
** Description changed:

  SRU Justification
- [Impact] 
-  * Users of QEMU that save their memory states using savevm/loadvm or migrate 
experience worse performance after the migration/loadvm. To workaround these 
issues VMs must be completely rebooted. Optimally we should be able to restore 
a VM's memory state an expect no performance issue.   
+ [Impact]
+  * Users of QEMU that save their memory states using savevm/loadvm or migrate 
experience worse performance after the migration/loadvm. To workaround these 
issues VMs must be completely rebooted. Optimally we should be able to restore 
a VM's memory state an expect no performance issue.
  
  [Test Case]
  
-  * savevm/loadvm:
-- Create a VM and install a test suite such as lmbench.
-- Get numbers right after boot and record them.
-- Open up the qemu monitor and type the following:
-  stop
-  savevm 0
-  loadvm 0
-  c
-- Measure performance and record numbers.
-- Compare if numbers are within margin of error.
-  * migrate:
-- Create VM, install lmbench, get numbers.
-- Open up qemu monitor and type the following:
-  stop
-  migrate exec:dd of=~/save.vm
-  quit
-- Start a new VM using qemu but add the following argument:
-  -incoming exec:dd if=~/save.vm
-- Run performance test and compare.
-  
-  If performance measured is similar then we pass the test case. 
+  * savevm/loadvm:
+    - Create a VM and install a test suite such as lmbench.
+    - Get numbers right after boot and record them.
+    - Open up the qemu monitor and type the following:
+  stop
+  savevm 0
+  loadvm 0
+  c
+    - Measure performance and record numbers.
+    - Compare if numbers are within margin of error.
+  * migrate:
+    - Create VM, install lmbench, get numbers.
+    - Open up qemu monitor and type the following:
+  stop
+  migrate exec:dd of=~/save.vm
+  quit
+    - Start a new VM using qemu but add the following argument:
+  -incoming exec:dd if=~/save.vm
+    - Run performance test and compare.
+ 
+  If performance measured is similar then we pass the test case.
  
  [Regression Potential]
  
-  * The fix is a backport of two upstream patches:
+  * The fix is a backport of two upstream patches:
  ad0b5321f1f797274603ebbe20108b0750baee94
  211ea74022f51164a7729030b28eec90b6c99a08
  
  On patch allows QEMU to use THP if its enabled.
  The other patch changes logic to not memset pages to zero when loading memory 
for the vm (on an incoming migration).
  
+  * I've also run the qa-regression-testing test-qemu.py script and it passes 
all tests.
  --
  
  I have 2 physical hosts running Ubuntu Precise.  With 1.0+noroms-
  0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
  built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
  from source to test, but libvirt seems to have an issue with it that I
  haven't been able to track down yet.
  
   I'm seeing a performance degradation after live migration on Precise,
  but not Lucid.  These hosts are managed by libvirt (tested both
  0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula.  I
  don't seem to have this problem with lucid guests (running a number of
  standard kernels, 3.2.5 mainline and backported linux-
  image-3.2.0-35-generic as well.)
  
  I first noticed this problem with phoronix doing compilation tests, and
  then tried lmbench where even simple calls experience performance
  degradation.
  
  I've attempted to post to the kvm mailing list, but so far the only
  suggestion was it may be related to transparent hugepages not being used
  after migration, but this didn't pan out.  Someone else has a similar
  problem here -
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
  
  qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
  Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
  f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
  -chardev
  socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
  piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
  file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
  disk0,format=raw,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -drive
  file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
  ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
  =drive-ide0-0-0,id=ide0-0-0 -netdev
  tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
  pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
  virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  
  Disk backend is LVM running on SAN via FC connection 

[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-07 Thread Chris J Arges
** Description changed:

  SRU Justification
  [Impact]
   * Users of QEMU that save their memory states using savevm/loadvm or migrate 
experience worse performance after the migration/loadvm. To workaround these 
issues VMs must be completely rebooted. Optimally we should be able to restore 
a VM's memory state an expect no performance issue.
  
  [Test Case]
  
   * savevm/loadvm:
     - Create a VM and install a test suite such as lmbench.
     - Get numbers right after boot and record them.
     - Open up the qemu monitor and type the following:
   stop
   savevm 0
   loadvm 0
   c
     - Measure performance and record numbers.
     - Compare if numbers are within margin of error.
   * migrate:
     - Create VM, install lmbench, get numbers.
     - Open up qemu monitor and type the following:
   stop
   migrate exec:dd of=~/save.vm
   quit
     - Start a new VM using qemu but add the following argument:
   -incoming exec:dd if=~/save.vm
     - Run performance test and compare.
  
   If performance measured is similar then we pass the test case.
  
  [Regression Potential]
  
   * The fix is a backport of two upstream patches:
  ad0b5321f1f797274603ebbe20108b0750baee94
  211ea74022f51164a7729030b28eec90b6c99a08
  
- On patch allows QEMU to use THP if its enabled.
+ One patch allows QEMU to use THP if its enabled.
  The other patch changes logic to not memset pages to zero when loading memory 
for the vm (on an incoming migration).
  
-  * I've also run the qa-regression-testing test-qemu.py script and it passes 
all tests.
+  * I've also run the qa-regression-testing test-qemu.py script and it
+ passes all tests.
+ 
+ [Additional Information]
+ 
+ Kernels from 3.2 onwards are affected, and all have the config:
+ CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is
+ applicable.
+ 
  --
  
  I have 2 physical hosts running Ubuntu Precise.  With 1.0+noroms-
  0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal,
  built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs
  from source to test, but libvirt seems to have an issue with it that I
  haven't been able to track down yet.
  
   I'm seeing a performance degradation after live migration on Precise,
  but not Lucid.  These hosts are managed by libvirt (tested both
  0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula.  I
  don't seem to have this problem with lucid guests (running a number of
  standard kernels, 3.2.5 mainline and backported linux-
  image-3.2.0-35-generic as well.)
  
  I first noticed this problem with phoronix doing compilation tests, and
  then tried lmbench where even simple calls experience performance
  degradation.
  
  I've attempted to post to the kvm mailing list, but so far the only
  suggestion was it may be related to transparent hugepages not being used
  after migration, but this didn't pan out.  Someone else has a similar
  problem here -
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
  
  qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu
  Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid
  f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults
  -chardev
  socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait
  -mon chardev=charmonitor,id=monitor,mode=control -rtc
  base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device
  piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
  file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-
  disk0,format=raw,cache=none -device virtio-blk-
  pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
  disk0,bootindex=1 -drive
  file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-
  ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive
  =drive-ide0-0-0,id=ide0-0-0 -netdev
  tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
  pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3
  -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device
  virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
  
  Disk backend is LVM running on SAN via FC connection (using symlink from
  /var/lib/one/datastores/0/2/disk.0 above)
  
  ubuntu-12.04 - first boot
  ==
  Simple syscall: 0.0527 microseconds
  Simple read: 0.1143 microseconds
  Simple write: 0.0953 microseconds
  Simple open/close: 1.0432 microseconds
  
  Using phoronix pts/compuational
  ImageMagick - 31.54s
  Linux Kernel 3.1 - 43.91s
  Mplayer - 30.49s
  PHP - 22.25s
  
  ubuntu-12.04 - post live migration
  ==
  Simple syscall: 0.0621 microseconds
  Simple read: 0.2485 microseconds
  Simple write: 0.2252 microseconds
  Simple open/close: 1.4626 microseconds
  
  Using phoronix pts/compilation
  ImageMagick - 43.29s
  Linux Kernel 3.1 - 76.67s
  Mplayer - 45.41s
  PHP - 29.1s
  
  I 

Re: [Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-10-06 Thread Zhang Haoyu
From my testing this has been fixed in the saucy version (1.5.0) of
qemu. It is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972

However later in the history this commit was reverted, and again broke
this. The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a08

See below post,please.
https://lists.gnu.org/archive/html/qemu-devel/2013-08/msg05062.html

Thanks,
Zhang Haoyu

So 211ea740 needs to be backported to P/Q/R to fix this issue. I have a
v1 packages of a precise backport here, I've confirmed performance
differences between savevm/loadvm cycles:
http://people.canonical.com/~arges/lp1100843/precise/

** No longer affects: linux (Ubuntu)

** Also affects: qemu-kvm (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Raring)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Saucy)
   Importance: High
 Assignee: Chris J Arges (arges)
   Status: In Progress

** Changed in: qemu-kvm (Ubuntu Precise)
 Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Quantal)
 Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Raring)
 Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Precise)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Raring)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Saucy)
 Assignee: Chris J Arges (arges) => (unassigned)

** Changed in: qemu-kvm (Ubuntu Saucy)
   Status: In Progress => Fix Released

** Changed in: qemu-kvm (Ubuntu Raring)
   Status: New => Triaged

** Changed in: qemu-kvm (Ubuntu Quantal)
   Status: New => Triaged

** Changed in: qemu-kvm (Ubuntu Precise)
   Status: New => In Progress



[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-09-26 Thread Chris J Arges
** Changed in: qemu-kvm (Ubuntu)
   Status: Triaged => In Progress




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-09-26 Thread Chris J Arges
From my testing, this has been fixed in the Saucy version (1.5.0) of qemu. It
is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972

However, later in the history this commit was reverted, which broke this again.
The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a08

So 211ea740 needs to be backported to P/Q/R to fix this issue. I have v1
packages of a Precise backport here; I've confirmed the performance differences
between savevm/loadvm cycles:
http://people.canonical.com/~arges/lp1100843/precise/
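
For anyone rebuilding qemu from source, a quick way to confirm whether a given
tree already carries these commits is to ask git directly (a minimal sketch;
the checkout path is just an example):

  cd ~/qemu   # a clone of the qemu repository (example path)
  # prints the subject line if either commit is present on the current branch
  git log --oneline | grep -E '211ea74|f1c7279'
  # or test ancestry explicitly; exit status 0 means the fix is included
  git merge-base --is-ancestor 211ea74022f51164a7729030b28eec90b6c99a08 HEAD \
    && echo "211ea740 is included"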

** No longer affects: linux (Ubuntu)

** Also affects: qemu-kvm (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Raring)
   Importance: Undecided
   Status: New

** Also affects: qemu-kvm (Ubuntu Saucy)
   Importance: High
 Assignee: Chris J Arges (arges)
   Status: In Progress

** Changed in: qemu-kvm (Ubuntu Precise)
 Assignee: (unassigned) = Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Quantal)
 Assignee: (unassigned) = Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Raring)
 Assignee: (unassigned) = Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Precise)
   Importance: Undecided = High

** Changed in: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided = High

** Changed in: qemu-kvm (Ubuntu Raring)
   Importance: Undecided = High

** Changed in: qemu-kvm (Ubuntu Saucy)
 Assignee: Chris J Arges (arges) = (unassigned)

** Changed in: qemu-kvm (Ubuntu Saucy)
   Status: In Progress = Fix Released

** Changed in: qemu-kvm (Ubuntu Raring)
   Status: New = Triaged

** Changed in: qemu-kvm (Ubuntu Quantal)
   Status: New = Triaged

** Changed in: qemu-kvm (Ubuntu Precise)
   Status: New = In Progress

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “qemu-kvm” package in Ubuntu:
  Fix Released
Status in “qemu-kvm” source package in Precise:
  In Progress
Status in “qemu-kvm” source package in Quantal:
  Triaged
Status in “qemu-kvm” source package in Raring:
  Triaged
Status in “qemu-kvm” source package in Saucy:
  Fix Released


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-09-24 Thread Chris J Arges
** Changed in: qemu-kvm (Ubuntu)
 Assignee: (unassigned) = Chris J Arges (arges)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-09-08 Thread Stephen Gran
This is being looked at in an upstream thread at
http://lists.gnu.org/archive/html/qemu-devel/2013-07/msg01850.html

Cheers,

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-09-02 Thread Stephen Gran
We are reliably seeing this post live-migration on an openstack
platform.

Setup:
hypervisor: Ubuntu 12.04.3 LTS
libvirt: 1.0.2-0ubuntu11.13.04.2~cloud0
qemu-kvm: 1.0+noroms-0ubuntu14.10
storage: NFS exports
Guest VM OS: Ubuntu 12.04.1 LTS and CentOS 6.4

We have ept enabled.
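
For reference, a minimal sketch of how a host can be checked for the settings
discussed in this thread; these are standard sysfs paths rather than anything
specific to this setup, and $QEMU_PID is a placeholder for a guest's qemu
process id:

  # Y means the kvm_intel module is running with EPT enabled
  cat /sys/module/kvm_intel/parameters/ept
  # [always] or [madvise] means transparent hugepages can back guest memory
  cat /sys/kernel/mm/transparent_hugepage/enabled
  # rough check of how much of a running guest is actually backed by hugepages
  grep AnonHugePages /proc/$QEMU_PID/smaps | awk '{ sum += $2 } END { print sum " kB" }'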

Sample instance:

<domain type='kvm'>
  <uuid>f3c16d27-2586-44c8-b9d9-84b74b42b5d3</uuid>
  <name>instance-0508</name>
  <memory>4194304</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <cpu mode='host-model' match='exact'/>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/instance-0508/disk'/>
      <target bus='virtio' dev='vda'/>
    </disk>
    <interface type='bridge'>
      <mac address='fa:16:3e:5d:0e:6a'/>
      <model type='virtio'/>
      <source bridge='qbrf43e9d83-56'/>
      <filterref filter='nova-instance-instance-0508-fa163e5d0e6a'>
        <parameter name='IP' value='10.253.138.156'/>
        <parameter name='DHCPSERVER' value='10.253.138.51'/>
      </filterref>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/instance-0508/console.log'/>
    </serial>
    <serial type='pty'/>
    <input type='tablet' bus='usb'/>
    <graphics type='vnc' autoport='yes' keymap='en-us' listen='0.0.0.0'/>
  </devices>
</domain>

We have a test environment and are willing to assist in debugging.
Please let us know what we can do to help.

Cheers,

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-07-08 Thread Fletcher Kubota
My HyperDex cluster nodes' performance dropped significantly after migrating
them (virsh migrate --live ...). They are hosted on Precise KVM (12.04.2
Precise Pangolin). The first Google search result landed me on this page, so
it seems I'm not the only one encountering this problem. I hope this gets
resolved soon, as live migration is a major feature for any hypervisor
solution, in my opinion.
Cheers
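
For context, the kind of invocation being described; the domain name and
destination URI below are placeholders rather than values from this report:

  # live-migrate a running guest to another KVM host over ssh
  virsh migrate --live myguest qemu+ssh://dest-host/system
  # then confirm where the guest ended up before re-running the benchmarks
  virsh --connect qemu+ssh://dest-host/system list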

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-24 Thread Paolo Bonzini
Can you please check if you have EPT enabled? This could be
https://bugzilla.kernel.org/show_bug.cgi?id=58771

** Bug watch added: Linux Kernel Bug Tracker #58771
   http://bugzilla.kernel.org/show_bug.cgi?id=58771

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-24 Thread Paolo Bonzini
Oops, I missed Chris's comment #28. Thanks.

From comment #23, the 1.4 machine type seems to be fast, while 1.3 is
slow. This doesn't make much sense, given the differences between the
two machine types:

enable_compat_apic_id_mode();

.driver   = "usb-tablet",\
.property = "usb_version",\
.value    = stringify(1),\

.driver   = "virtio-net-pci",\
.property = "ctrl_mac_addr",\
.value    = "off",\

.driver   = "virtio-net-pci",\
.property = "mq",\
.value    = "off",\

.driver   = "e1000",\
.property = "autonegotiation",\
.value    = "off",\

This is why I suspected the issue was not 100% reproducible.
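
One way to exercise the machine-type theory is simply to boot the same guest
image under each type and compare the lmbench numbers; a minimal sketch,
assuming a qemu new enough to offer both types and a scratch image called
test.img:

  # list the available machine types (and thus which compat properties apply)
  qemu-system-x86_64 -M ?
  # boot once per machine type, run lmbench inside the guest, compare
  qemu-system-x86_64 -enable-kvm -M pc-1.3 -m 2048 -drive file=test.img,if=virtio
  qemu-system-x86_64 -enable-kvm -M pc-1.4 -m 2048 -drive file=test.img,if=virtio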

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-24 Thread C Cormier
@Paolo: yes, when I was doing that testing I was able to consistently
reproduce those results in #23, but it was a red herring; as of now I
cannot reproduce the results in #23 consistently (I suspect it may have
had something to do with the order in which I was executing the tests,
but I didn't chase it any further).

Yes, EPT is enabled; I submitted that kernel bug in #30.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-09 Thread Jonathan Jefferson
** Changed in: linux (Ubuntu)
   Status: Incomplete = Confirmed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  Confirmed
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-08 Thread C Cormier
Update:

From our testing, this bug affects KVM hypervisors on Intel processors
that have the EPT feature enabled, with kernels 3.0 and greater. A list
of Intel CPUs supporting EPT is here
(http://ark.intel.com/Products/VirtualizationTechnology).

When using a KVM hypervisor host running a Linux 3.0 or newer kernel
with Intel EPT, this bug shows itself. If the kvm_intel module is loaded
with the option ept=N, guest performance is significantly lower than
with EPT enabled, but it does remain consistent before and after
restoration/migration (see the sketch below).

Exceptions:
- A KVM host with a 2.6.32 or 2.6.39 kernel and EPT enabled: this bug is
not triggered.
- A KVM host whose Intel CPU does not have the EPT feature enabled: this
bug is not triggered.
- A KVM host with a 3.0+ kernel and the EPT kvm_intel module option
disabled: this bug is not triggered.

A KVM hypervisor with EPT enabled on a Linux kernel of 3.0 or newer
appears to be the key here.
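
A minimal sketch of how the ept toggle described above is usually exercised
when testing (this assumes no guests are running while the module is
reloaded, and the modprobe.d file name is just an example):

  # reload kvm_intel with EPT disabled to reproduce the "consistent but slower" case
  modprobe -r kvm_intel && modprobe kvm_intel ept=0
  # make the setting persistent across reboots
  echo "options kvm_intel ept=0" >> /etc/modprobe.d/kvm-intel.conf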

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-08 Thread Serge Hallyn
** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “linux” package in Ubuntu:
  New
Status in “qemu-kvm” package in Ubuntu:
  Triaged




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-05-01 Thread Jonathan Jefferson
I used this handy tool to run preliminary system-call benchmarks:
http://code.google.com/p/byte-unixbench/

In a nutshell, what I found confirms that live migration does indeed
degrade performance on Precise KVM. I hope the results below help narrow
down this critical problem so it can eventually be resolved in the
12.04 LTS release.

Detailed results:
I compiled the benchmarking tool and then ran:
root@sample-vm:~/UnixBench# ./Run syscall

Output:

** before live-migration **

Benchmark Run: Wed May 01 2013 20:29:54 - 20:32:04
1 CPU in system; running 1 parallel copy of tests
System Call Overhead        4177612.4 lps   (10.0 s, 7 samples)

System Benchmarks Partial Index    BASELINE       RESULT    INDEX
System Call Overhead                15000.0    4177612.4   2785.1

System Benchmarks Index Score (Partial Only)                2785.1


** after live-migration **

Benchmark Run: Wed May 01 2013 20:35:16 - 20:37:26
1 CPU in system; running 1 parallel copy of tests
System Call Overhead        3065118.3 lps   (10.0 s, 7 samples)

System Benchmarks Partial Index    BASELINE       RESULT    INDEX
System Call Overhead                15000.0    3065118.3   2043.4

System Benchmarks Index Score (Partial Only)                2043.4


XML domain dump:

  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='HIDEME'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='HIDEME'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU:
  New
Status in “qemu-kvm” package in Ubuntu:
  Triaged


[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-30 Thread Jonathan Jefferson
I have a few VMs (precise) that process high-volume transaction jobs
each night.  After a live-migrate operation to replace a faulty power
supply on a bare-metal server, we encountered sluggish performance on
the migrated VMs, with significantly higher CPU usage in particular:
the same nightly job consumed far more CPU and took longer to finish
on identical hardware.

Upon investigation, we noticed that the only change introduced was the
live-migrate operation. After rebooting the guest OS of the VMs,
performance returned to normal. I suspect we're hitting the same
problem as the one filed here. I will attempt to run lmbench next to
see whether I observe behavior similar to the system call costs
recorded in comments #19-21 and #23.

--
The latest KVM from Ubuntu 12.04 LTS is in use :: qemu-kvm (1.0+noroms-0ubuntu14.8)
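
For reference, a minimal sketch of the lmbench micro-benchmarks quoted
elsewhere in this bug, assuming lmbench3 is built and its binaries
(e.g. under lmbench3/bin/x86_64-linux-gnu/) are on the PATH:

  lat_syscall null    # prints "Simple syscall: ... microseconds"
  lat_syscall read    # prints "Simple read: ..."
  lat_syscall write   # prints "Simple write: ..."
  lat_syscall open    # prints "Simple open/close: ..."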




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-16 Thread Serge Hallyn
** Also affects: qemu
   Importance: Undecided
   Status: New




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-16 Thread Paolo Bonzini
The results of comment 23 suggest that the issue is not 100%
reproducible.  Can you please run the benchmark 3-4 times
(presave/postrestore) and show all 4 results? One benchmark only, e.g.
simple read, will do.

Also, please try putting a big file on disk (something like dd
if=/dev/zero of=bigfile count=64K bs=64K) and then doing cat bigfile >
/dev/null after restoring. Please check whether that makes performance
more consistent.
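
A sketch of that sequence as it might be run inside the guest after the
restore; the file name, sizes, and repeat count are illustrative only,
and lat_syscall is assumed to be lmbench's syscall-latency binary:

  dd if=/dev/zero of=bigfile count=64K bs=64K   # write a ~4 GiB file
  cat bigfile > /dev/null                       # read it back after restoring
  for i in 1 2 3 4; do lat_syscall read; done   # repeat the "Simple read" test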




[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues

2013-04-16 Thread C Cormier
Can you clarify what is not 100% reproducible? The only time it is not
reproducible on my system is between different qemu machine types, as
I listed. If the tests are performed on the same machine type, they are
reproducible 100% of the time on the same host and VM guest, as shown
in comment #23.

I have re-run what you're requesting for machine type pc-1.0.

---machine type pc-1.0---
-Presave-
Simple read: 0.1273 microseconds
Simple read: 0.1259 microseconds
Simple read: 0.1270 microseconds
Simple read: 0.1268 microseconds

-postrestore-
performing: dd if=/dev/zero of=bigfile count=32K bs=64K
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 15.2912 s, 140 MB/s
performing: cat bigfile > /dev/null
Simple read: 0.2700 microseconds
Simple read: 0.2736 microseconds
Simple read: 0.2713 microseconds
Simple read: 0.2747 microseconds
