[Qemu-devel] [Bug 1490611] Re: Using qemu >=2.2.1 to convert raw->VHD (fixed) adds extra padding to the result file, which Microsoft Azure rejects as invalid
Can you rebase your fix on 1:2.5+dfsg-5ubuntu10.4 (due to the regression fix mentioned in #25)? Another thing about your backport is that it dropped the qem2 bits from the patch. Is there a reason for this? If so, please mention it in the debian/patch file.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1490611

Title:
  Using qemu >=2.2.1 to convert raw->VHD (fixed) adds extra padding to the result file, which Microsoft Azure rejects as invalid

Status in QEMU: Fix Released
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Xenial: In Progress

Bug description:
  [Impact]

  * Starting with a raw disk image, using "qemu-img convert" to convert from raw to VHD results in the output VHD file's virtual size being aligned to the nearest 516096 bytes (16 heads x 63 sectors per head x 512 bytes per sector), instead of preserving the input file's size as the output VHD's virtual disk size.

  * Microsoft Azure requires that disk images (VHDs) submitted for upload have virtual sizes aligned to a megabyte boundary. (E.g. 4096MB, 4097MB, 4098MB, etc. are OK; 4096.5MB is rejected with an error.) This is reflected in Microsoft's documentation:
  https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-create-upload-vhd-generic/

  * The fix for this bug is a backport from upstream:
  http://git.qemu.org/?p=qemu.git;a=commitdiff;h=fb9245c2610932d33ce14

  [Test Case]

  * This is reproducible with the following set of commands (including the Azure command line tools from https://github.com/Azure/azure-xplat-cli).
  For the following example, I used qemu version 2.2.1:

  $ dd if=/dev/zero of=source-disk.img bs=1M count=4096

  $ stat source-disk.img
    File: ‘source-disk.img’
    Size: 4294967296  Blocks: 798656  IO Block: 4096  regular file
    Device: fc01h/64513d  Inode: 13247963  Links: 1
    Access: (0644/-rw-r--r--)  Uid: ( 1000/ smkent)  Gid: ( 1000/ smkent)
    Access: 2015-08-18 09:48:02.613988480 -0700
    Modify: 2015-08-18 09:48:02.825985646 -0700
    Change: 2015-08-18 09:48:02.825985646 -0700
    Birth: -

  $ qemu-img convert -f raw -o subformat=fixed -O vpc source-disk.img dest-disk.vhd

  $ stat dest-disk.vhd
    File: ‘dest-disk.vhd’
    Size: 4296499712  Blocks: 535216  IO Block: 4096  regular file
    Device: fc01h/64513d  Inode: 13247964  Links: 1
    Access: (0644/-rw-r--r--)  Uid: ( 1000/ smkent)  Gid: ( 1000/ smkent)
    Access: 2015-08-18 09:50:22.252077624 -0700
    Modify: 2015-08-18 09:49:24.424868868 -0700
    Change: 2015-08-18 09:49:24.424868868 -0700
    Birth: -

  $ azure vm image create testimage1 dest-disk.vhd -o linux -l "West US"
  info:    Executing command vm image create
  +        Retrieving storage accounts
  info:    VHD size : 4097 MB
  info:    Uploading 4195800.5 KB
  Requested:100.0% Completed:100.0% Running: 0 Time: 1m 0s Speed: 6744 KB/s
  info:    https://[redacted].blob.core.windows.net/vm-images/dest-disk.vhd was uploaded successfully
  error:   The VHD https://[redacted].blob.core.windows.net/vm-images/dest-disk.vhd has an unsupported virtual size of 4296499200 bytes. The size must be a whole number (in MBs).
  info:    Error information has been recorded to /home/smkent/.azure/azure.err
  error:   vm image create command failed

  * A fixed qemu-img will not result in an error during azure image creation. It will require passing -o force_size, which will leverage the backported functionality.

  [Regression Potential]

  * The upstream fix introduces a qemu-img option (-o force_size) which is unset by default. The regression potential is very low as a result.

  ...
  I also ran the above commands using qemu 2.4.0, which resulted in the same error, as the conversion behavior is the same. However, qemu 2.1.1 and earlier (including the qemu 2.0.0 installed by Ubuntu 14.04) do not pad the virtual disk size during conversion. Using qemu-img convert from qemu versions <=2.1.1 results in a VHD that is exactly the size of the raw input file plus 512 bytes (for the VHD footer). Those qemu versions do not attempt to realign the disk. As a result, Azure accepts VHD files created using those versions of qemu-img convert for upload.

  Is there a reason why newer qemu realigns the converted VHD file? It would be useful if an option were added to disable this feature, as current versions of qemu cannot be used to create VHD files for Azure using Microsoft's official instructions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1490611/+subscriptions
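The acceptance rule described in this thread comes down to simple modular arithmetic: the uploaded file is the virtual disk plus a 512-byte VHD footer, and Azure requires the virtual size to be a whole number of MiB. A minimal sketch of that check (the function name is mine, not from qemu or the Azure tools; the sizes are the ones from the transcript above):

```python
def azure_vhd_ok(file_size_bytes):
    """Check whether a fixed VHD would pass Azure's virtual-size validation.

    A fixed VHD is the raw virtual disk followed by a 512-byte footer,
    and Azure requires the virtual size to be a multiple of 1 MiB.
    """
    MIB = 1024 * 1024
    virtual_size = file_size_bytes - 512  # strip the VHD footer
    return virtual_size % MIB == 0

# Sizes from the transcript above:
print(azure_vhd_ok(4294967296 + 512))  # raw input + footer: True (accepted)
print(azure_vhd_ok(4296499712))        # qemu >= 2.2.1 output: False (rejected)
```

With -o force_size (or qemu <= 2.1.1), the output stays at the input size plus 512 bytes, so an MiB-aligned raw image remains acceptable.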
[Qemu-devel] [Bug 1297218] Update Released
The verification of the Stable Release Update for qemu has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates, please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1297218

Title:
  guest hangs after live migration due to tsc jump

Status in QEMU: New
Status in glusterfs package in Ubuntu: Invalid
Status in qemu package in Ubuntu: Fix Released
Status in glusterfs source package in Trusty: Confirmed
Status in qemu source package in Trusty: Fix Committed

Bug description:
  = SRU Justification:
  1. Impact: guests hang after live migration with 100% cpu
  2. Upstream fix: a set of four patches fix this upstream
  3. Stable fix: we have a backport of the four patches into a single patch.
  4. Test case: try a set of migrations of different VMs (it is unfortunately not 100% reproducible)
  5. Regression potential: the patch is not trivial, however the lp:qa-regression-tests testsuite passed 100% with this package.
  =

  We have two identical Ubuntu servers running libvirt/kvm/qemu, sharing a Gluster filesystem. Guests can be live migrated between them. However, live migration often leads to the guest being stuck at 100% for a while. In that case, the dmesg output for such a guest will show (once it recovers):

  Clocksource tsc unstable (delta = 662463064082 ns).

  In this particular example, a guest was migrated and only after 11 minutes (662 seconds) did it become responsive again. It seems that newly booted guests do not suffer from this problem; these can be migrated back and forth at will. After a day or so, the problem becomes apparent.
  It also seems that migrating from server A to server B causes many more problems than going from B back to A. If necessary, I can do more measurements to qualify these observations.

  The VM servers run Ubuntu 13.04 with these packages:
  Kernel: 3.8.0-35-generic x86_64
  Libvirt: 1.0.2
  Qemu: 1.4.0
  Gluster-fs: 3.4.2

  (libvirt accesses the images via the filesystem, not using libgfapi yet, as the Ubuntu libvirt is not linked against libgfapi). The interconnect between both machines (both for migration and gluster) is 10GbE. Both servers are synced to NTP and well within 1ms of one another. Guests are either Ubuntu 13.04 or 13.10. On the guests, the current_clocksource is kvm-clock. The XML definition of the guests only contains:

  Now as far as I've read in the documentation of kvm-clock, it specifically supports live migrations, so I'm a bit surprised at these problems. There isn't all that much information to find on these issues, although I have found postings by others that seem to have run into the same issues, but without a solution.
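The reporter's "11 minutes (662 seconds)" figure follows directly from the delta in the dmesg line; a trivial sanity check of that arithmetic:

```python
# Delta reported by the guest kernel:
#   Clocksource tsc unstable (delta = 662463064082 ns)
delta_ns = 662463064082

seconds = delta_ns / 1_000_000_000
print(round(seconds))       # 662 seconds
print(round(seconds / 60))  # 11 minutes, matching the observed stall
```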
  ---
  ApportVersion: 2.14.1-0ubuntu3
  Architecture: amd64
  DistroRelease: Ubuntu 14.04
  Package: libvirt (not installed)
  ProcCmdline: BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=1b0c3c6d-a9b8-4e84-b076-117ae267d178 ro console=ttyS1,115200n8 BOOTIF=01-00-25-90-75-b5-c8
  ProcVersionSignature: Ubuntu 3.13.0-24.47-generic 3.13.9
  Tags: trusty apparmor
  Uname: Linux 3.13.0-24-generic x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups:
  _MarkForUpload: True
  modified.conffile..etc.default.libvirt.bin: [modified]
  modified.conffile..etc.libvirt.libvirtd.conf: [modified]
  modified.conffile..etc.libvirt.qemu.conf: [modified]
  modified.conffile..etc.libvirt.qemu.networks.default.xml: [deleted]
  mtime.conffile..etc.default.libvirt.bin: 2014-05-12T19:07:40.020662
  mtime.conffile..etc.libvirt.libvirtd.conf: 2014-05-13T14:40:25.894837
  mtime.conffile..etc.libvirt.qemu.conf: 2014-05-12T18:58:27.885506

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1297218/+subscriptions
[Qemu-devel] [Bug 1297218] Re: guest hangs after live migration due to tsc jump
Hello Paul, or anyone else affected,

Accepted qemu into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/2.0.0+dfsg-2ubuntu1.25 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

** Changed in: qemu (Ubuntu Trusty)
   Status: Confirmed => Fix Committed

** Tags added: verification-needed

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1297218

Title:
  guest hangs after live migration due to tsc jump

Status in QEMU: New
Status in glusterfs package in Ubuntu: Invalid
Status in qemu package in Ubuntu: Fix Released
Status in glusterfs source package in Trusty: Confirmed
Status in qemu source package in Trusty: Fix Committed
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1297218/+subscriptions
[Qemu-devel] [Bug 1465935] Update Released
The verification of the Stable Release Update for qemu has completed successfully and the package has now been released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates, please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1465935

Title:
  kvm_irqchip_commit_routes: Assertion `ret == 0' failed

Status in QEMU: Fix Released
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Trusty: Fix Released
Status in qemu source package in Utopic: Won't Fix
Status in qemu source package in Vivid: Fix Released

Bug description:
  Several of my QEMU instances crashed, and in the qemu log I can see this assertion failure:

  qemu-system-x86_64: /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:984: kvm_irqchip_commit_routes: Assertion `ret == 0' failed.

  The QEMU version is 2.0.0, the HV OS is Ubuntu 12.04, kernel 3.2.0-38. The guest OS is RHEL 6.3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1465935/+subscriptions
[Qemu-devel] [Bug 1516203] Re: qemu-system-x86_64 crashed with SIGSEGV in SDL_BlitCopy()
Can you please test with the 4.2.0-19.23 kernel on Xenial as well? Thanks

** Changed in: qemu (Ubuntu)
   Status: New => Incomplete

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1516203

Title:
  qemu-system-x86_64 crashed with SIGSEGV in SDL_BlitCopy()

Status in QEMU: New
Status in qemu package in Ubuntu: Incomplete

Bug description:
  with -device virtio-vga -cdrom ubuntu-15.10-desktop-amd64.iso

  ProblemType: Crash
  DistroRelease: Ubuntu 16.04
  Package: qemu-system-x86 1:2.4+dfsg-4ubuntu2
  ProcVersionSignature: Ubuntu 4.3.0-0.6-generic 4.3.0
  Uname: Linux 4.3.0-0-generic x86_64
  ApportVersion: 2.19.2-0ubuntu6
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Sat Nov 14 05:05:25 2015
  EcryptfsInUse: Yes
  ExecutablePath: /usr/bin/qemu-system-x86_64
  InstallationDate: Installed on 2012-12-22 (1056 days ago)
  InstallationMedia: Ubuntu 12.04.1 LTS "Precise Pangolin" - Release amd64 (20120823.1)
  KvmCmdLine:
   COMMAND          STAT EUID RUID PID PPID %CPU COMMAND
   kvm-irqfd-clean  S<   0    0    497 2    0.0  [kvm-irqfd-clean]
  MachineType: Hewlett-Packard HP Elite 7300 Series MT
  ProcCmdline: qemu-system-x86_64 -machine ubuntu,accel=kvm -m 1024 -device virtio-vga -cdrom ubuntu-15.10-desktop-amd64.iso
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.3.0-0-generic root=UUID=8905185c-9d82-498c-970c-6fdb9ee07c45 ro quiet splash zswap.enabled=1 crashkernel=384M-:128M vt.handoff=7
  SegvAnalysis:
   Segfault happened at: 0x7fecce94624f: movq (%rax),%mm0
   PC (0x7fecce94624f) ok
   source "(%rax)" (0x7fecb5f0e010) not located in a known VMA region (needed readable region)!
   destination "%mm0" ok
  SegvReason: reading unknown VMA
  Signal: 11
  SourcePackage: qemu
  StacktraceTop:
   ?? () from /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
   ?? () from /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
   SDL_LowerBlit () from /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
   SDL_UpperBlit () from /usr/lib/x86_64-linux-gnu/libSDL-1.2.so.0
   ?? ()
  Title: qemu-system-x86_64 crashed with SIGSEGV in SDL_LowerBlit()
  UpgradeStatus: Upgraded to xenial on 2013-06-19 (877 days ago)
  UserGroups: adm cdrom dip gnunet kismet kvm libvirtd lpadmin plugdev sambashare sbuild sudo
  dmi.bios.date: 05/18/2011
  dmi.bios.vendor: AMI
  dmi.bios.version: 7.05
  dmi.board.name: 2AB5
  dmi.board.vendor: PEGATRON CORPORATION
  dmi.board.version: 1.01
  dmi.chassis.asset.tag: CZC126149V
  dmi.chassis.type: 3
  dmi.chassis.vendor: Hewlett-Packard
  dmi.modalias: dmi:bvnAMI:bvr7.05:bd05/18/2011:svnHewlett-Packard:pnHPElite7300SeriesMT:pvr1.01:rvnPEGATRONCORPORATION:rn2AB5:rvr1.01:cvnHewlett-Packard:ct3:cvr:
  dmi.product.name: HP Elite 7300 Series MT
  dmi.product.version: 1.01
  dmi.sys.vendor: Hewlett-Packard

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1516203/+subscriptions
[Qemu-devel] [Bug 1465935] Re: kvm_irqchip_commit_routes: Assertion `ret == 0' failed
Hello Li, or anyone else affected,

Accepted qemu into trusty-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/2.0.0+dfsg-2ubuntu1.20 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

** Changed in: qemu (Ubuntu Trusty)
   Status: In Progress => Fix Committed

** Tags added: verification-needed

** Changed in: qemu (Ubuntu Vivid)
   Status: In Progress => Fix Committed

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1465935

Title:
  kvm_irqchip_commit_routes: Assertion `ret == 0' failed

Status in QEMU: New
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Precise: Invalid
Status in qemu source package in Trusty: Fix Committed
Status in qemu source package in Utopic: Won't Fix
Status in qemu source package in Vivid: Fix Committed
To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1465935/+subscriptions
[Qemu-devel] [Bug 1465935] Please test proposed package
Hello Li, or anyone else affected,

Accepted qemu into vivid-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:2.2+dfsg-5expubuntu9.6 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1465935

Title:
  kvm_irqchip_commit_routes: Assertion `ret == 0' failed

Status in QEMU: New
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Precise: Invalid
Status in qemu source package in Trusty: Fix Committed
Status in qemu source package in Utopic: Won't Fix
Status in qemu source package in Vivid: Fix Committed

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1465935/+subscriptions
[Qemu-devel] [Bug 1463172] Re: destination arm board hangs after migration from x86 source
** Changed in: qemu (Ubuntu)
   Importance: Undecided => Low

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1463172

Title:
  destination arm board hangs after migration from x86 source

Status in QEMU: New
Status in qemu package in Ubuntu: Incomplete

Bug description:
  The qemu destination on an arm board hangs after migration from an x86 source. With qemu emulating Arch, the migration works fine while the vm is still in the boot selection screen, but if the machine is booted, then the destination arm board vm hangs indefinitely after migrating from the x86 source. This bug does not occur the other way around, meaning a booted vm originally run on an arm board will continue to work after migrating to an x86 destination.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1463172/+subscriptions
[Qemu-devel] [Bug 1448985] Re: Ubuntu 14.04 LTS, 14.10, 15.04, 15.10 guests do not boot to Unity from QEMU-KVM Ubuntu 14.04 LTS, 14.10, 15.04 hosts
** Also affects: qemu
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1448985

Title:
  Ubuntu 14.04 LTS, 14.10, 15.04, 15.10 guests do not boot to Unity from QEMU-KVM Ubuntu 14.04 LTS, 14.10, 15.04 hosts

Status in QEMU: New
Status in qemu package in Ubuntu: Confirmed

Bug description:
  STEPS TO REPRODUCE:

  1. Install Ubuntu 14.04.2 LTS or Ubuntu 14.10 or Ubuntu 15.04 with all updates (it is a host system).

  2. Download one (or all) isos:
     * Ubuntu 14.04.2 i386 iso (ubuntu-14.04.2-desktop-i386.iso, MD5SUM = a8a14f1f92c1ef35dae4966a2ae1a264)
     * Ubuntu 14.10 i386 iso (ubuntu-14.10-desktop-i386.iso, MD5SUM = 4a3c4b8421af51c29c84fb6f4b3fe109)
     * Ubuntu 15.04 i386 iso (ubuntu-15.04-desktop-i386.iso, MD5SUM = 6ea04093b767ad6778aa245d53625612)

  3. Boot one (or all) isos as a QEMU-KVM guest with the following commands:
     * sudo kvm -m 1536 -cdrom ubuntu-*-desktop-i386.iso
     * sudo kvm -m 1536 -cdrom ubuntu-*-desktop-i386.iso -vga std
     * sudo kvm -m 1536 -cdrom ubuntu-*-desktop-i386.iso -vga vmware
     * or from usb-creator-gtk via the Test disk button

  4. Click on Try Ubuntu.

  EXPECTED RESULTS:
  ISO is booted to the Unity desktop; the user can test and use it.

  ACTUAL RESULTS:
  In 14.04 and 14.10 guests, users see an empty purple desktop or a purple desktop with two shortcuts (Examples and Install Ubuntu ...). A 15.10 guest does not boot, or boots to safe graphics mode (that is bug 1437740).

  This bug should be confirmed and fixed. Users may want to run Ubuntu in QEMU/KVM, not just VirtualBox.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1448985/+subscriptions
[Qemu-devel] [Bug 1323758] Re: Mouse stops working when connected usb-storage-device
** Tags added: upstream

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1323758

Title:
  Mouse stops working when connected usb-storage-device

Status in QEMU: New
Status in qemu package in Ubuntu: New

Bug description:
  I'm running a guest that has Windows 8 Pro (x64) installed. Every time I pass through a usb storage device from the host to the guest, the mouse stops working in the vnc client. When I remove the usb device, the mouse works again.

  The mouse only stops working when I pass through a usb storage device and then make the vnc viewer (client) inactive by clicking on another program on the local computer (where I'm running the vnc viewer (client)). As long as I keep the vnc viewer active, the mouse works without any problems. But as soon as I make the vnc viewer inactive and then active again, the mouse will no longer work. I have to reboot the guest or remove the usb storage device.

  I can't find any related problems on the internet, so it may be just me? I hope someone can help me with this.

  EDIT: I posted the extra/new information in comments. But as I now see, it might be wrong and maybe I should've posted it in this bug description container (by editing)? Please tell me if I did it wrong and I will change it. Sorry for the misunderstanding.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1323758/+subscriptions
[Qemu-devel] [Bug 1465935] Re: kvm_irqchip_commit_routes: Assertion `ret == 0' failed
** Also affects: qemu (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: qemu (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: qemu (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Also affects: qemu (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: qemu (Ubuntu)
   Assignee: (unassigned) => Stefan Bader (smb)

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1465935

Title:
  kvm_irqchip_commit_routes: Assertion `ret == 0' failed

Status in QEMU: New
Status in qemu package in Ubuntu: Confirmed
Status in qemu source package in Precise: New
Status in qemu source package in Trusty: New
Status in qemu source package in Utopic: New
Status in qemu source package in Vivid: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1465935/+subscriptions
[Qemu-devel] [Bug 1239008] Re: qemu fails to scroll screen on ^Vidmem output
Can you test with the latest version to see if this still affects you? If this is still a problem, any information on how to obtain the Guest OS in question would also be helpful.

** Changed in: qemu (Ubuntu)
   Status: New => Incomplete

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1239008

Title:
  qemu fails to scroll screen on ^Vidmem output

Status in QEMU: New
Status in qemu package in Ubuntu: Incomplete

Bug description:
  Pascal uses ^Vidmem for B800 console output. The terminal does not oblige the Pascal OS code to scroll the output. Virtualbox emulation works, so this must be a qemu bug. Using QEMU in KVM mode on Ubuntu LTS.

  Source line to trip the bug (in theory pushes VideoMem up one line):

  procedure Scroll; // this is what's causing crashes. FIXME: Virtualbox not affected. QEMU BUG?
  begin
    if scrolldisabled then exit;
    if (CursorPosY = 24) then
    begin
      // in case called before end of screen
      blank := $20 or (TextAttr shl 8);
      Move((VidMem+(2*80))^, VidMem^, 24*(2*80));
      // Empty last line
      FillWord((VidMem+(24*2*80))^, 80, Blank);
      CursorPosX := 1;
      CursorPosY := 23;
      update_cursor;
    end;
  end;

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1239008/+subscriptions
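For readers unfamiliar with the B800 text buffer the Pascal snippet manipulates: the screen is 80x25 cells, each a 16-bit word with the character in the low byte and the attribute in the high byte; scrolling copies rows 1..24 over rows 0..23 (the Move call) and blank-fills the last row (the FillWord call). A Python sketch of the same operation, with illustrative names of my own choosing:

```python
COLS, ROWS = 80, 25

def scroll(vidmem, text_attr=0x07):
    """Scroll an 80x25 text buffer up one line and blank the last row.

    vidmem is a list of ROWS*COLS 16-bit words (char | attr << 8),
    mirroring the Move/FillWord calls in the Pascal snippet.
    """
    blank = 0x20 | (text_attr << 8)              # space with current attribute
    vidmem[:(ROWS - 1) * COLS] = vidmem[COLS:]   # Move: rows 1..24 -> rows 0..23
    vidmem[(ROWS - 1) * COLS:] = [blank] * COLS  # FillWord: clear the last row
    return vidmem

# Fill row r entirely with the character 'A' + r, attribute 0x07:
buf = [(0x41 + r) | (0x07 << 8) for r in range(ROWS) for _ in range(COLS)]
scroll(buf)
print(chr(buf[0] & 0xFF))  # 'B': the old second row is now on top
```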
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption on non-extent filesystems (ext3)
** No longer affects: qemu

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1292234

Title:
  qcow2 image corruption on non-extent filesystems (ext3)

Status in qemu package in Ubuntu: In Progress

Bug description:
  The security team uses a tool (http://bazaar.launchpad.net/~ubuntu-bugcontrol/ubuntu-qa-tools/master/view/head:/vm-tools/uvt) that uses libvirt snapshots quite a bit. I noticed after upgrading to trusty some time ago that qemu 1.7 (and the qemu 2.0 in the candidate ppa) has had stability problems such that the disk/partition table seems to be corrupted after removing a libvirt snapshot and then creating another with the same name. I don't have a very simple reproducer, but had enough that hallyn suggested I file a bug.

  First off:
  qemu-kvm 2.0~git-20140307.4c288ac-0ubuntu2

  $ cat /proc/version_signature
  Ubuntu 3.13.0-16.36-generic 3.13.5

  $ qemu-img info ./forhallyn-trusty-amd64.img
  image: ./forhallyn-trusty-amd64.img
  file format: qcow2
  virtual size: 8.0G (8589934592 bytes)
  disk size: 4.0G
  cluster_size: 65536
  Format specific information:
      compat: 0.10

  Steps to reproduce:

  1. Create a virtual machine. For a simplified reproducer, I used virt-manager with:
     OS type: Linux
     Version: Ubuntu 14.04
     Memory: 768
     CPUs: 1
     Select managed or existing (Browse, new volume)
     Create a new storage volume: qcow2
     Max capacity: 8192
     Allocation: 0
     Advanced: NAT, kvm, x86_64, firmware: default

  2. Install a VM. I used trusty-desktop-amd64.iso from Jan 23 since it seems like I can hit the bug more reliably if I have lots of updates in a dist-upgrade. I have seen this with lucid-trusty guests that are i386 and amd64. After the install, reboot and then cleanly shut down.

  3. Back up the image file somewhere since steps 1 and 2 take a while :)

  4. Execute the following commands, which are based on what our uvt tool does:

     $ virsh snapshot-create-as forhallyn-trusty-amd64 pristine uvt snapshot
     $ virsh snapshot-current --name forhallyn-trusty-amd64
     pristine
     $ virsh start forhallyn-trusty-amd64
     $ virsh snapshot-list forhallyn-trusty-amd64
     # this is showing as shutoff after start, this might be different with qemu 1.5

     in guest:
       sudo apt-get update
       sudo apt-get dist-upgrade
       780 upgraded...
       shutdown -h now

     $ virsh snapshot-delete forhallyn-trusty-amd64 pristine --children
     $ virsh snapshot-create-as forhallyn-trusty-amd64 pristine uvt snapshot
     $ virsh start forhallyn-trusty-amd64
     # this command works, but there is often disk corruption

  The idea behind the above is to create a new VM with a pristine snapshot that we could revert to later if we wanted. Instead, we boot the VM, run apt-get dist-upgrade, cleanly shut down and then remove the old 'pristine' snapshot and create a new 'pristine' snapshot. The intention is to update the VM and the pristine snapshot so that when we boot the next time, we boot from the updated VM and can revert back to the updated VM.

  After running 'virsh start' after doing snapshot-delete/snapshot-create-as, the disk may be corrupted. This can be seen with grub failing to find .mod files, the kernel not booting, init failing, etc.

  This does not seem to be related to the machine type used. Ie, pc-i440fx-1.5, pc-i440fx-1.7 and pc-i440fx-2.0 all fail with qemu 2.0; pc-i440fx-1.5 and pc-i440fx-1.7 fail with qemu 1.7; and pc-i440fx-1.5 works fine with qemu 1.5.

  The only workaround I know of is to downgrade qemu to 1.5.0+dfsg-3ubuntu5.4 from Ubuntu 13.10.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1292234/+subscriptions
Re: [Qemu-devel] Nested KVM L2 guest hangs
Ariel,

You can easily use a supported 3.16 kernel on Ubuntu 14.04:

sudo apt-get install --install-recommends linux-generic-lts-utopic

If you have further problems with 3.16 or 3.13 on the distro kernel, please feel free to file a bug: https://bugs.launchpad.net/ubuntu/+filebug

Hope that helps. Thanks!
--chris j arges
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption on non-extent filesystems (ext3)
** Summary changed:

- qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
+ qcow2 image corruption on non-extent filesystems (ext3)

https://bugs.launchpad.net/bugs/1292234
Title: qcow2 image corruption on non-extent filesystems (ext3)
Status in QEMU: New
Status in qemu package in Ubuntu: In Progress
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
FWIW, just re-reproduced this with latest upstream kernel / qemu / fresh qcow2 image.

https://bugs.launchpad.net/bugs/1292234
Title: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
Status in QEMU: New
Status in qemu package in Ubuntu: In Progress
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
** Changed in: qemu (Ubuntu)
   Status: Confirmed => In Progress

https://bugs.launchpad.net/bugs/1292234
Title: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
Status in QEMU: New
Status in qemu package in Ubuntu: In Progress
[Qemu-devel] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images
Looking at the fixes, I also see the following commits remove the above changes, which could mean we might encounter this again:

c4875e5 raw-posix: SEEK_HOLE suffices, get rid of FIEMAP
d1f06fe raw-posix: The SEEK_HOLE code is flawed, rewrite it

Note there is also a related issue: bug 1292234. So far, testing with the proposed qemu version or upstream, I still encounter issues on ext4 w/ ^extent and ext3 filesystems.

--
https://bugs.launchpad.net/bugs/1368815

Title: qemu-img convert intermittently corrupts output images

Status in Mirantis OpenStack: Triaged
Status in OpenStack Compute (Nova): In Progress
Status in QEMU: In Progress
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Trusty: Fix Released
Status in qemu source package in Utopic: Fix Committed
Status in qemu source package in Vivid: Fix Released

Bug description:

==
Impact: occasional image corruption (any format on local filesystem)
Test case: see the qemu-img command below
Regression potential: this cherrypicks a patch from upstream to a not-insignificantly older qemu source tree. While the cherrypick seems sane, it's possible that there are subtle interactions with the other delta. I'd really like for a full qa-regression-test qemu testcase to be run against this package.
==

Found in releases qemu-2.0.0, qemu-2.0.2, qemu-2.1.0. Tested on Ubuntu 14.04 using ext4 filesystems.

The command

qemu-img convert -O raw inputimage.qcow2 outputimage.raw

intermittently creates corrupted output images when the input image is not yet fully synchronized to disk. While the issue was actually discovered in operation of OpenStack nova, it can be reproduced easily on the command line using

cat $SRC_PATH > $TMP_PATH
$QEMU_IMG_PATH convert -O raw $TMP_PATH $DST_PATH
cksum $DST_PATH

on filesystems exposing this behavior.

(The difficult part of this exercise is to prepare a filesystem to reliably trigger this race. On my test machine some filesystems are affected while others aren't, and unfortunately I haven't found the relevant difference between them yet. Possibly it's timing issues completely out of userspace control ...)

The root cause, however, is the same as in http://lists.gnu.org/archive/html/coreutils/2011-04/msg00069.html and it can be solved the same way as suggested in http://lists.gnu.org/archive/html/coreutils/2011-04/msg00102.html

In qemu, file block/raw-posix.c, use FIEMAP_FLAG_SYNC, i.e. change

f.fm.fm_flags = 0;

to

f.fm.fm_flags = FIEMAP_FLAG_SYNC;

As discussed in the thread mentioned above, retrieving a page-cache-coherent map of file extents is possible only after an fsync on that file.

See also https://bugs.launchpad.net/nova/+bug/1350766

In that bug report filed against nova, it had been suggested that the fsync be performed by the framework invoking qemu-img. However, as the choice of fiemap -- implying this otherwise unneeded fsync of a temporary file -- is made not by the caller but by qemu-img, I agree with the nova bug reviewer's objection to putting it into nova. The fsync should instead be triggered by qemu-img utilizing FIEMAP_FLAG_SYNC, which is specifically intended for that purpose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1368815/+subscriptions
[Qemu-devel] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images
** Tags removed: verification-needed-utopic
** Tags added: verification-done-utopic

https://bugs.launchpad.net/bugs/1368815
Title: qemu-img convert intermittently corrupts output images
Status in OpenStack Compute (Nova): In Progress
Status in QEMU: In Progress
Status in qemu package in Ubuntu: Fix Released
Status in qemu source package in Trusty: Fix Released
Status in qemu source package in Utopic: Fix Committed
Status in qemu source package in Vivid: Fix Released
[Qemu-devel] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images
Tony,

Yea, it's a different bug. I tested with the above patched package and upstream qemu from git, and I can still hit bug 1292234. I was hoping this also fixed my issue, but unfortunately it seems to be a different issue that occurs when using the same types of filesystems. I have a solid reproducer on my desk, so let me know which experiments / areas of code / etc. I should look at.

https://bugs.launchpad.net/bugs/1368815
Title: qemu-img convert intermittently corrupts output images
[Qemu-devel] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images
Just to clarify, it's bug 1292234 in the previous comment.

https://bugs.launchpad.net/bugs/1368815
Title: qemu-img convert intermittently corrupts output images
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
Serge,

So I was able to just compile my own qemu and test with that. I did attempt a reverse bisect, and was able to reproduce as early as v1.1 and also reproduce on master HEAD. v1.0 was inconclusive because a qcow2 image I made with the newer binary seemed to be incompatible with v1.0; however, from Jamie's testing this seems to be a working version, so I'd say somewhere between v1.0.0 and v1.1.0 lies the original change that enabled this issue. As I've been unable to reproduce this without virsh, reverse bisecting and using older qemu versions is a bit challenging, as machine types change, features virsh wants to use aren't available, etc.

Another interesting thing I tested today: I was able to reproduce with ext4 with extents disabled; maybe that gives more clues. Just to make sure I wasn't crazy, I mkfs'd the partition to vanilla ext4 and iterated for most of the afternoon with no failures.

My next steps are going to be enabling verbose output for qcow2, looking more deeply into what gets corrupted in the file, and turning on host filesystem debugging.

--chris

https://bugs.launchpad.net/bugs/1292234
Title: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
Status in QEMU: New
Status in qemu package in Ubuntu: Confirmed
[Qemu-devel] [Bug 1349277] Re: AArch64 emulation ignores SPSel=0 when taking (or returning from) an exception at EL1 or greater
** Changed in: qemu (Ubuntu)
   Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu (Ubuntu)
   Status: New => In Progress

** Changed in: qemu (Ubuntu)
   Importance: Undecided => Medium

--
https://bugs.launchpad.net/bugs/1349277

Title: AArch64 emulation ignores SPSel=0 when taking (or returning from) an exception at EL1 or greater

Status in QEMU: New
Status in qemu package in Ubuntu: In Progress

Bug description:

The AArch64 emulation ignores SPSel=0 when:

(1) taking an interrupt from an exception level greater than EL0 (e.g., EL1t),
(2) returning from an exception (via ERET) to an exception level greater than EL0 (e.g., EL1t), with SPSR_ELx[SPSel]=0.

The attached patch fixes the problem in my application.

Background: I'm running a standalone application (toy OS) that is performing preemptive multithreading between threads running at EL1t, with exception handling / context switching occurring at EL1h. This bug causes the stack pointer to be corrupted in the threads running at EL1t (they end up with a version of the EL1h stack pointer (SP_EL1)).

Occurs in: qemu-2.1.0-rc1 (found in) commit c60a57ff497667780132a3fcdc1500c83af5d5c0 (current master)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1349277/+subscriptions
[Qemu-devel] [Bug 1349277] Re: AArch64 emulation ignores SPSel=0 when taking (or returning from) an exception at EL1 or greater
Uploaded fixed package for Vivid: https://launchpad.net/ubuntu/+source/qemu/2.1+dfsg-7ubuntu3

Please let me know if this fixes the issue.

** Changed in: qemu (Ubuntu)
   Status: In Progress => Fix Committed

https://bugs.launchpad.net/bugs/1349277
Title: AArch64 emulation ignores SPSel=0 when taking (or returning from) an exception at EL1 or greater
Status in QEMU: New
Status in qemu package in Ubuntu: Fix Committed
[Qemu-devel] [Bug 1292234] Re: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)
Also I've been able to reproduce this with the latest master in qemu, and even with the latest daily 3.18-rcX kernel on the host.

** Also affects: qemu
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1292234

Title: qcow2 image corruption in trusty (qemu 1.7 and 2.0 candidate)

Status in QEMU: New
Status in qemu package in Ubuntu: Confirmed

Bug description:
The security team uses a tool (http://bazaar.launchpad.net/~ubuntu-bugcontrol/ubuntu-qa-tools/master/view/head:/vm-tools/uvt) that uses libvirt snapshots quite a bit. I noticed after upgrading to trusty some time ago that qemu 1.7 (and the qemu 2.0 in the candidate ppa) has had stability problems such that the disk/partition table seems to be corrupted after removing a libvirt snapshot and then creating another with the same name. I don't have a very simple reproducer, but had enough that hallyn suggested I file a bug.

First off:
qemu-kvm 2.0~git-20140307.4c288ac-0ubuntu2
$ cat /proc/version_signature
Ubuntu 3.13.0-16.36-generic 3.13.5
$ qemu-img info ./forhallyn-trusty-amd64.img
image: ./forhallyn-trusty-amd64.img
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.0G
cluster_size: 65536
Format specific information: compat: 0.10

Steps to reproduce:
1. Create a virtual machine. For a simplified reproducer, I used virt-manager with: OS type: Linux, Version: Ubuntu 14.04, Memory: 768, CPUs: 1. Select managed or existing (Browse, new volume). Create a new storage volume: qcow2, max capacity: 8192, allocation: 0. Advanced: NAT, kvm, x86_64, firmware: default.
2. Install a VM. I used trusty-desktop-amd64.iso from Jan 23 since it seems like I can hit the bug more reliably if I have lots of updates in a dist-upgrade. I have seen this with lucid-trusty guests that are i386 and amd64. After the install, reboot and then cleanly shut down.
3. Back up the image file somewhere, since steps 1 and 2 take a while :)
4. Execute the following commands, which are based on what our uvt tool does:
$ virsh snapshot-create-as forhallyn-trusty-amd64 pristine uvt snapshot
$ virsh snapshot-current --name forhallyn-trusty-amd64
pristine
$ virsh start forhallyn-trusty-amd64
$ virsh snapshot-list forhallyn-trusty-amd64 # this is showing as shutoff after start; this might be different with qemu 1.5
in guest:
sudo apt-get update
sudo apt-get dist-upgrade
780 upgraded...
shutdown -h now
$ virsh snapshot-delete forhallyn-trusty-amd64 pristine --children
$ virsh snapshot-create-as forhallyn-trusty-amd64 pristine uvt snapshot
$ virsh start forhallyn-trusty-amd64 # this command works, but there is often disk corruption

The idea behind the above is to create a new VM with a pristine snapshot that we could revert to later if we wanted. Instead, we boot the VM, run apt-get dist-upgrade, cleanly shut down, and then remove the old 'pristine' snapshot and create a new 'pristine' snapshot. The intention is to update the VM and the pristine snapshot so that when we boot the next time, we boot from the updated VM and can revert back to the updated VM.

After running 'virsh start' after doing snapshot-delete/snapshot-create-as, the disk may be corrupted. This can be seen with grub failing to find .mod files, the kernel not booting, init failing, etc.

This does not seem to be related to the machine type used. I.e., pc-i440fx-1.5, pc-i440fx-1.7 and pc-i440fx-2.0 all fail with qemu 2.0; pc-i440fx-1.5 and pc-i440fx-1.7 fail with qemu 1.7; and pc-i440fx-1.5 works fine with qemu 1.5.

The only workaround I know of is to downgrade qemu to 1.5.0+dfsg-3ubuntu5.4 from Ubuntu 13.10.

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1292234/+subscriptions
[Qemu-devel] [Bug 1368815] Please test proposed package
Hello Michael, or anyone else affected,

Accepted qemu into trusty-proposed. The package will build now and be available at http://launchpad.net/ubuntu/+source/qemu/2.0.0+dfsg-2ubuntu1.8 in a few hours, and then in the -proposed repository. Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision. Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1368815

Title: qemu-img convert intermittently corrupts output images

Status in OpenStack Compute (Nova): In Progress
Status in QEMU: In Progress
Status in “qemu” package in Ubuntu: Fix Released
Status in “qemu” source package in Trusty: Fix Committed
Status in “qemu” source package in Utopic: Fix Committed
Status in “qemu” source package in Vivid: Fix Released

Bug description:
==
Impact: occasional image corruption (any format on local filesystem)
Test case: see the qemu-img command below
Regression potential: this cherrypicks a patch from upstream to a not-insignificantly older qemu source tree. While the cherrypick seems sane, it's possible that there are subtle interactions with the other delta. I'd really like for a full qa-regression-test qemu testcase to be run against this package.
==

--

Found in releases qemu-2.0.0, qemu-2.0.2, qemu-2.1.0.
Tested on Ubuntu 14.04 using Ext4 filesystems.

The command

qemu-img convert -O raw inputimage.qcow2 outputimage.raw

intermittently creates corrupted output images when the input image is not yet fully synchronized to disk. While the issue was actually discovered in operation of OpenStack nova, it can be reproduced easily on the command line using

cat $SRC_PATH > $TMP_PATH
$QEMU_IMG_PATH convert -O raw $TMP_PATH $DST_PATH
cksum $DST_PATH

on filesystems exposing this behavior. (The difficult part of this exercise is to prepare a filesystem to reliably trigger this race. On my test machine some filesystems are affected while others aren't, and unfortunately I haven't found the relevant difference between them yet. Possibly it's timing issues completely out of userspace control ...)

The root cause, however, is the same as in http://lists.gnu.org/archive/html/coreutils/2011-04/msg00069.html and it can be solved the same way as suggested in http://lists.gnu.org/archive/html/coreutils/2011-04/msg00102.html

In qemu, in file block/raw-posix.c, use FIEMAP_FLAG_SYNC, i.e. change

f.fm.fm_flags = 0;

to

f.fm.fm_flags = FIEMAP_FLAG_SYNC;

As discussed in the thread mentioned above, retrieving a page-cache-coherent map of file extents is possible only after an fsync on that file.

See also https://bugs.launchpad.net/nova/+bug/1350766

In that bug report filed against nova, fsync had been suggested to be performed by the framework invoking qemu-img. However, as the choice of fiemap -- implying this otherwise unneeded fsync of a temporary file -- is not made by the caller but by qemu-img, I agree with the nova bug reviewer's objection to putting it into nova. The fsync should instead be triggered by qemu-img utilizing FIEMAP_FLAG_SYNC, which is specifically intended for that purpose.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1368815/+subscriptions
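The fix amounts to "sync the file before asking the kernel for its extent map." A minimal sketch of that pattern in Python, using SEEK_DATA/SEEK_HOLE (a simpler cousin of the FIEMAP ioctl, available on Linux) rather than FIEMAP itself; the function names are mine, not qemu's:

```python
import os

def data_segments(path):
    """Walk a file's data extents via SEEK_DATA/SEEK_HOLE.

    Like FIEMAP, the result is only guaranteed to reflect recent
    writes once dirty pages have been flushed -- the race behind
    this bug.
    """
    segments = []
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        offset = 0
        while offset < size:
            try:
                start = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError:   # ENXIO: no data past offset (hole to EOF)
                break
            end = os.lseek(fd, start, os.SEEK_HOLE)
            segments.append((start, end))
            offset = end
    finally:
        os.close(fd)
    return segments

def safe_segments(path):
    """The analogue of FIEMAP_FLAG_SYNC: fsync first, then map."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)          # flush dirty pages before querying extents
    finally:
        os.close(fd)
    return data_segments(path)
```

Without the fsync step, a freshly written but not-yet-flushed region can be reported as a hole, which is exactly how qemu-img ends up emitting zeros instead of data.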
[Qemu-devel] [Bug 1368815] Re: qemu-img convert intermittently corrupts output images
Hello Michael, or anyone else affected,

Accepted qemu into utopic-proposed. The package will build now and be available at http://launchpad.net/ubuntu/+source/qemu/2.1+dfsg-4ubuntu6.2 in a few hours, and then in the -proposed repository. Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-needed to verification-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed. In either case, details of your testing will help us make a better decision. Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

** Changed in: qemu (Ubuntu Utopic)
   Status: Triaged => Fix Committed

** Tags added: verification-needed

** Changed in: qemu (Ubuntu Trusty)
   Status: Triaged => Fix Committed

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1368815

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1368815/+subscriptions
[Qemu-devel] [Bug 1387881] Re: qemu fails to recognize full virtualization
Answers to some of your questions:

Can you only get this on one particular host (i.e. hardware type)?
- No, as I can repro in KVM.

Is there anything in syslog?
- Nothing relevant.

Which packages (dpkg -l | grep qemu)?
ii qemu-utils 2.0.0+dfsg-2ubuntu1.6 amd64 QEMU utilities

Does a reboot (without installing new packages) fix the problem?
- No.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1387881

Title: qemu fails to recognize full virtualization

Status in QEMU: New
Status in “linux” package in Ubuntu: Invalid
Status in “qemu” package in Ubuntu: New
Status in “virtinst” package in Ubuntu: New

Bug description:
System: 14.04, qemu 2.0.0+dfsg-2ubuntu1.6, virtinst 0.600.4-3ubuntu2

Command:
virt-install --name juju-bootstrap --ram=2048 --vcpus=1 --hvm \
--virt-type=kvm --pxe --boot network,hd --os-variant=ubuntutrusty \
--graphics vnc --noautoconsole --os-type=linux --accelerate \
--disk=/var/lib/libvirt/images/juju-bootstrap.qcow2,bus=virtio,format=qcow2,cache=none,sparse=true,size=20 \
--network=bridge=br0,model=virtio

Error:
ERROR Host does not support virtualization type 'hvm'

Diagnostics:
$ sudo kvm -vnc :1 -monitor stdio
[sudo] password for cscloud:
QEMU 2.0.0 monitor - type 'help' for more information
(qemu) KVM internal error.
Suberror: 1 emulation failure EAX= EBX=4001 ECX=0030 EDX=0cfd ESI= EDI= EBP= ESP=6fcc EIP=0fedb30c EFL=0002 [---] CPL=0 II=0 A20=1 SMM=0 HLT=0 ES =0010 00409300 DPL=0 DS [-WA] CS =0008 00c09a00 DPL=0 CS32 [-R-] SS =0010 00409200 DPL=0 DS [-W-] DS =0010 00409300 DPL=0 DS [-WA] FS =0010 00c09300 DPL=0 DS [-WA] GS =0010 00c09300 DPL=0 DS [-WA] LDT= 8200 DPL=0 LDT TR = 8b00 DPL=0 TSS32-busy GDT= 000f6688 0037 IDT= 000f66c6 CR0=6011 CR2= CR3= CR4= DR0= DR1= DR2= DR3= DR6=0ff0 DR7=0400 EFER= Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 info kvm kvm support: enabled (qemu) lsmod|grep kvm kvm_intel 143109 0 kvm 451552 1 kvm_intel $ dmesg|grep -i kvm [5.722167] kvm: Nested Virtualization enabled [5.722190] kvm: Nested Paging enabled --- I haven't been able to get much out of libvirt as the kvm instance never starts. To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1387881/+subscriptions
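A "does this host actually support hvm?" check of the kind virt-install performs boils down to two observable facts: the CPU advertises hardware virtualization (vmx/svm in /proc/cpuinfo), and /dev/kvm exists (i.e. the kvm modules are loaded). A rough kvm-ok-style sketch; the function names are mine, not virtinst's:

```python
import os

def cpu_virt_flags(cpuinfo_text):
    """Return the hardware-virtualization flags (vmx for Intel,
    svm for AMD) present in /proc/cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            found |= {f for f in ("vmx", "svm") if f in flags}
    return found

def host_supports_kvm(cpuinfo_path="/proc/cpuinfo"):
    """True only if the CPU advertises VT extensions AND the kvm
    device node exists (modules loaded, /dev/kvm created)."""
    with open(cpuinfo_path) as f:
        has_flags = bool(cpu_virt_flags(f.read()))
    return has_flags and os.path.exists("/dev/kvm")
```

In this bug, the second condition was the missing one: the host CPU was fine, but the qemu-kvm package (and with it the pieces that make KVM usable) had not been installed.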
[Qemu-devel] [Bug 1387881] Re: qemu fails to recognize full virtualization
I can reproduce this in a VM:

1) boot a clean cloud image (uvt-kvm create lp1387881)
2) apt-get install virtinst
3) sudo virt-install --name juju-bootstrap --ram=2048 --vcpus=1 --hvm --virt-type=kvm --pxe --boot network,hd --os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate --nodisks -d

You'll get the following:

$ sudo virt-install --name juju-bootstrap --ram=2048 --vcpus=1 --hvm --virt-type=kvm --pxe --boot network,hd --os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate --nodisks -d
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] DEBUG (cli:227) Launched with command line: /usr/bin/virt-install --name juju-bootstrap --ram=2048 --vcpus=1 --hvm --virt-type=kvm --pxe --boot network,hd --os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate --nodisks -d
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] DEBUG (cli:332) Requesting libvirt URI default
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] DEBUG (cli:334) Received libvirt URI qemu:///system
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] DEBUG (virt-install:258) Requesting virt method 'hvm', hv type 'kvm'.
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] ERROR (cli:445) Host does not support virtualization type 'hvm'
[Wed, 05 Nov 2014 16:23:29 virt-install 3297] DEBUG (cli:448) Traceback (most recent call last):
  File "/usr/bin/virt-install", line 273, in get_virt_type
    machine=options.machine)
  File "/usr/lib/python2.7/dist-packages/virtinst/CapabilitiesParser.py", line 736, in guest_lookup
    {'virttype' : osstr, 'arch' : archstr})
ValueError: Host does not support virtualization type 'hvm'

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1387881

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1387881/+subscriptions
[Qemu-devel] [Bug 1387881] Re: qemu fails to recognize full virtualization
** Also affects: virtinst (Ubuntu)
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1387881
To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1387881/+subscriptions
[Qemu-devel] [Bug 1387881] Re: qemu fails to recognize full virtualization
This is a package dependency issue and not a kernel issue. Once davidpbritton installed 'qemu-kvm' he was able to install using virt-install just fine. So overall either the packages davidpbritton was installing weren't sufficient to use qemu with KVM, or we have a dependency problem in the Ubuntu packaging.

** Changed in: linux (Ubuntu)
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1387881

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1387881/+subscriptions
[Qemu-devel] [Bug 1387881] Re: qemu fails to recognize full virtualization
So I'm not completely sure virtinst should depend on qemu-kvm, since we assume one can use it without --virt-type=kvm, so my thought would be that you should install additional packages as necessary to use qemu w/ KVM. I'll let hallyn comment as well.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1387881

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1387881/+subscriptions
[Qemu-devel] [Bug 1285708] Re: FreeBSD Guest crash on boot due to xsave instruction issue
** Changed in: linux (Ubuntu Precise)
   Status: New => Won't Fix

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1285708

Title: FreeBSD Guest crash on boot due to xsave instruction issue

Status in QEMU: New
Status in qemu-kvm: New
Status in “linux” package in Ubuntu: Fix Released
Status in “linux” source package in Precise: Won't Fix
Status in “linux” source package in Trusty: In Progress

Bug description:
When trying to boot a working FreeBSD 9.1/9.2 guest on a kvm/qemu host with the following command:

kvm -m 256 -cdrom FreeBSD-9.2-RELEASE-amd64-disc1.iso -drive file=FreeBSD-9.2-RELEASE-amd64.qcow2,if=virtio -net nic,model=virtio -net user -nographic -vnc :10 -enable-kvm -balloon virtio -cpu core2duo,+xsave

the FreeBSD guest will kernel crash on boot with the following error:

panic: CPU0 does not support X87 or SSE: 0

When launching the guest without the cpu flags, it works just fine. This bug has been resolved in source: https://lkml.org/lkml/2014/2/22/58 Can this fix be included in Precise ASAP!

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1285708/+subscriptions
[Qemu-devel] [Bug 1285708] Re: FreeBSD Guest crash on boot due to xsave instruction issue
Testing with the identified patch doesn't solve the issue. In addition, looking at the cpu flags (core2duo +xsave), this may not be a valid configuration, since xsave assumes additional features will be there (instead of creating an older cpu model with xsave). Therefore I believe this is a configuration issue.

The comment above in #4 assumes your host has the correct cpu features; you can always run something like:

kvm -cpu host,+xsave,check

to check if there are issues with the host plus additional feature settings. In addition, I've been able to use '-cpu SandyBridge,+xsave' and this also works.

Marking this 'Won't Fix' as there is a clear workaround (use another CPU model), and this configuration may not be valid. Thanks

** Changed in: linux (Ubuntu Trusty)
   Status: In Progress => Won't Fix

** Changed in: linux (Ubuntu)
   Status: Fix Released => Invalid

** Changed in: linux (Ubuntu Precise)
   Importance: Medium => Undecided

** Changed in: linux (Ubuntu Trusty)
   Importance: Medium => Undecided

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1285708

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1285708/+subscriptions
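The logic behind `kvm -cpu host,+xsave,check` -- compare the features you force on against what the host CPU actually reports -- can be approximated outside qemu by diffing against /proc/cpuinfo. A rough sketch under that assumption; the function names are mine, not qemu's:

```python
def parse_flags(cpuinfo_text):
    """Collect the feature-flag set from /proc/cpuinfo-style text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags

def missing_features(requested, cpuinfo_text):
    """Requested '+feat' features the host CPU lacks -- roughly what
    the ',check' suffix would warn about."""
    host = parse_flags(cpuinfo_text)
    return sorted(f for f in requested if f not in host)

# Forcing +xsave on a host whose CPU doesn't report it:
sample = "flags\t\t: fpu vme de pse tsc msr sse sse2"
print(missing_features({"xsave"}, sample))  # -> ['xsave']
```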
[Qemu-devel] [Bug 1285708] Re: FreeBSD Guest crash on boot due to xsave instruction issue
I should have refreshed my browser before commenting. :) Thanks Jesse and Paolo.

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1285708

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1285708/+subscriptions
[Qemu-devel] [Bug 1308341] Re: Multiple CPUs causes blue screen on Windows guest (14.04 regression)
*** This bug is a duplicate of bug 1346917 ***
https://bugs.launchpad.net/bugs/1346917

** This bug is no longer a duplicate of bug 1307473
   guest hang due to missing clock interrupt
** This bug has been marked a duplicate of bug 1346917
   Using KSM on NUMA capable machines can cause KVM guest performance and stability issues

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1308341

Title: Multiple CPUs causes blue screen on Windows guest (14.04 regression)

Status in QEMU: New
Status in “linux” package in Ubuntu: Confirmed
Status in “qemu” package in Ubuntu: Confirmed

Bug description:
Configuring a Windows 7 guest with more than one CPU causes the guest to fail, a few hours after guest boot. This is the error on the blue screen:

A clock interrupt was not received on a secondary processor within the allocated time interval

After resetting, the guest will never boot, and a new bluescreen with the error STOP: 0x005c appears. Shutting down the guest completely and restarting it will allow it to boot and run for a few hours again.

The guest was created using virt-manager. The error happens with or without virtio devices and with both 32-bit and 64-bit Windows 7 guests.

I am using the Ubuntu 14.04 release candidate, qemu-kvm version 2.0.0~rc1+dfsg-0ubuntu3.

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1308341/+subscriptions
[Qemu-devel] [Bug 1285708] Re: FreeBSD Guest crash on boot due to xsave instruction issue
As a workaround, use the 'host' cpu type so the proper bits are enabled:

-cpu host,+xsave

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1285708

To manage notifications about this bug go to: https://bugs.launchpad.net/qemu/+bug/1285708/+subscriptions
[Qemu-devel] [Bug 1308341] Re: Multiple CPUs causes blue screen on Windows guest (14.04 regression)
*** This bug is a duplicate of bug 1307473 ***
https://bugs.launchpad.net/bugs/1307473

** This bug has been marked a duplicate of bug 1307473
   guest hang due to missing clock interrupt

https://bugs.launchpad.net/bugs/1308341

Title:
  Multiple CPUs causes blue screen on Windows guest (14.04 regression)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1308341/+subscriptions
[Qemu-devel] [Bug 1285708] Re: FreeBSD Guest crash on boot due to xsave instruction issue
$ git tag --contains 56c103ec040b1944c8866f79aa768265c0dd2986
v3.15-rc1

The patch in question is already in 3.16 / Utopic. I can reproduce this easily in Trusty.

** Also affects: linux (Ubuntu Trusty)
   Importance: Undecided
   Status: New
** Changed in: linux (Ubuntu)
   Status: Incomplete => Fix Released
** Changed in: linux (Ubuntu Trusty)
   Status: New => In Progress
** Changed in: linux (Ubuntu Trusty)
   Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: linux (Ubuntu Trusty)
   Importance: Undecided => Medium
** Also affects: linux (Ubuntu Precise)
   Importance: Undecided
   Status: New
** Changed in: linux (Ubuntu Precise)
   Assignee: (unassigned) => Chris J Arges (arges)
** Changed in: linux (Ubuntu Precise)
   Importance: Undecided => Medium

https://bugs.launchpad.net/bugs/1285708

Title:
  FreeBSD Guest crash on boot due to xsave instruction issue

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1285708/+subscriptions
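The `git tag --contains` check above generalizes to any fix commit. A throwaway-repo sketch (the repo, commits, and tag names below are generated locally purely for illustration; they are not the kernel's or QEMU's):

```shell
# Demonstrate "which release tags contain this fix?" on a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=t@example.com -c user.name=t commit -q --allow-empty -m 'the fix'
fix=$(git rev-parse HEAD)                   # stand-in for 56c103ec...
git tag v3.15-rc1                           # first tag cut after the fix landed
git -c user.email=t@example.com -c user.name=t commit -q --allow-empty -m 'later work'
git tag v3.16                               # later release also contains it
git tag --contains "$fix"                   # lists v3.15-rc1 and v3.16
```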
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
** Changed in: qemu-kvm (Ubuntu Quantal)
   Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Raring)
   Assignee: Chris J Arges (arges) => (unassigned)

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU: New
Status in “qemu-kvm” package in Ubuntu: Fix Released
Status in “qemu-kvm” source package in Precise: Fix Released
Status in “qemu-kvm” source package in Quantal: Triaged
Status in “qemu-kvm” source package in Raring: Triaged
Status in “qemu-kvm” source package in Saucy: Fix Released

Bug description:
  SRU Justification

  [Impact]
  * Users of QEMU that save their memory state using savevm/loadvm or migrate see worse performance after the migration/loadvm. To work around this, VMs must be completely rebooted. Optimally, we should be able to restore a VM's memory state and expect no performance issues.

  [Test Case]
  * savevm/loadvm:
    - Create a VM and install a test suite such as lmbench.
    - Get numbers right after boot and record them.
    - Open up the qemu monitor and type the following:
        stop
        savevm 0
        loadvm 0
        c
    - Measure performance and record numbers.
    - Compare if numbers are within margin of error.
  * migrate:
    - Create VM, install lmbench, get numbers.
    - Open up the qemu monitor and type the following:
        stop
        migrate exec:dd of=~/save.vm
        quit
    - Start a new VM using qemu but add the following argument:
        -incoming exec:dd if=~/save.vm
    - Run performance test and compare.

  If the performance measured is similar, the test case passes.

  [Regression Potential]
  * The fix is a backport of two upstream patches:
    ad0b5321f1f797274603ebbe20108b0750baee94
    211ea74022f51164a7729030b28eec90b6c99a08
  One patch allows QEMU to use THP if it is enabled. The other patch changes the logic to not memset pages to zero when loading memory for the VM (on an incoming migration).

  * I've also run the qa-regression-testing test-qemu.py script and it passes all tests.

  [Additional Information]
  Kernels from 3.2 onwards are affected, and all have the config CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y, so enabling THP is applicable.

  --
  I have 2 physical hosts running Ubuntu Precise, with 1.0+noroms-0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder). I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet.

  I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and the backported linux-image-3.2.0-35-generic as well).

  I first noticed this problem with phoronix doing compilation tests, and then tried lmbench, where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was that it may be related to transparent hugepages not being used after migration, but this didn't pan out.

  Someone else has a similar problem here:
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592

  qemu command line example:

  /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

  Disk backend is LVM running on SAN via FC connection (using a symlink from /var/lib/one/datastores/0/2/disk.0 above).

  ubuntu-12.04 - first boot
  ==
  Simple syscall: 0.0527 microseconds
  Simple read: 0.1143 microseconds
  Simple write: 0.0953 microseconds
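The migrate half of the [Test Case] above can be sketched as a script. Everything here is a placeholder (the qemu binary name, VM options, and save path are not from the report), and build_save_monitor_cmds / build_restore_args are hypothetical helpers that just assemble the monitor input and the receiving side's extra argument:

```shell
# Sketch only: assemble the save/restore pieces of the migrate test case.
build_save_monitor_cmds() {   # monitor input that streams state out via dd
    printf 'stop\nmigrate exec:dd of=%s\nquit\n' "$1"
}
build_restore_args() {        # extra argument for the receiving qemu process
    printf -- '-incoming exec:dd if=%s\n' "$1"
}

# Step 1 (placeholder invocation): feed the monitor commands on stdin.
#   build_save_monitor_cmds ~/save.vm | qemu-system-x86_64 -m 1024 -monitor stdio disk.img
# Step 2 (placeholder invocation): boot a fresh instance that ingests the stream.
#   qemu-system-x86_64 -m 1024 $(build_restore_args ~/save.vm) disk.img
build_save_monitor_cmds ~/save.vm
build_restore_args ~/save.vm
```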
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
I have verified this on my local machine using virt-manager's save memory, savevm/loadvm via the qemu monitor, and migrate via the qemu monitor.

** Tags removed: verification-needed
** Tags added: verification-done

https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU: New
Status in “qemu-kvm” package in Ubuntu: Fix Released
Status in “qemu-kvm” source package in Precise: Fix Committed
Status in “qemu-kvm” source package in Quantal: Triaged
Status in “qemu-kvm” source package in Raring: Triaged
Status in “qemu-kvm” source package in Saucy: Fix Released
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
I found that two patches need to be backported to solve this issue:
  ad0b5321f1f797274603ebbe20108b0750baee94
  211ea74022f51164a7729030b28eec90b6c99a08

I've added the necessary bits into precise and tried a few tests:
1) Measure performance before and after savevm/loadvm.
2) Measure performance before and after a migrate to the same host.

In both cases the performance measured by something like lmbench was the same as the previous run. A test build is available here:
http://people.canonical.com/~arges/lp1100843/precise_v2/

** Patch added: fix-lp1100843-precise.debdiff
   https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1100843/+attachment/3864309/+files/fix-lp1100843-precise.debdiff

https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU: New
Status in “qemu-kvm” package in Ubuntu: Fix Released
Status in “qemu-kvm” source package in Precise: In Progress
Status in “qemu-kvm” source package in Quantal: Triaged
Status in “qemu-kvm” source package in Raring: Triaged
Status in “qemu-kvm” source package in Saucy: Fix Released
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
** Description changed: + SRU Justification + [Impact] + * Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To workaround these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state an expect no performance issue. + + [Test Case] + + * savevm/loadvm: +- Create a VM and install a test suite such as lmbench. +- Get numbers right after boot and record them. +- Open up the qemu monitor and type the following: + stop + savevm 0 + loadvm 0 + c +- Measure performance and record numbers. +- Compare if numbers are within margin of error. + * migrate: +- Create VM, install lmbench, get numbers. +- Open up qemu monitor and type the following: + stop + migrate exec:dd of=~/save.vm + quit +- Start a new VM using qemu but add the following argument: + -incoming exec:dd if=~/save.vm +- Run performance test and compare. + + If performance measured is similar then we pass the test case. + + [Regression Potential] + + * The fix is a backport of two upstream patches: + ad0b5321f1f797274603ebbe20108b0750baee94 + 211ea74022f51164a7729030b28eec90b6c99a08 + + On patch allows QEMU to use THP if its enabled. + The other patch changes logic to not memset pages to zero when loading memory for the vm (on an incoming migration). + + -- + I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms- 0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet. - I'm seeing a performance degradation after live migration on Precise, + I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. 
I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and backported linux- image-3.2.0-35-generic as well.) I first noticed this problem with phoronix doing compilation tests, and then tried lmbench where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was it may be related to transparent hugepages not being used after migration, but this didn't pan out. Someone else has a similar problem here - http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592 qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio- disk0,format=raw,cache=none -device virtio-blk- pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio- disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive- ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive =drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net- pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 Disk backend is LVM running on SAN via FC connection (using symlink from /var/lib/one/datastores/0/2/disk.0 above) - ubuntu-12.04 - first boot == Simple syscall: 0.0527 microseconds Simple read: 0.1143 microseconds Simple write: 0.0953 microseconds Simple open/close: 1.0432 microseconds Using phoronix 
pts/compuational ImageMagick - 31.54s Linux Kernel 3.1 - 43.91s Mplayer - 30.49s PHP - 22.25s - ubuntu-12.04 - post live migration == Simple syscall: 0.0621 microseconds Simple read: 0.2485 microseconds Simple write: 0.2252 microseconds Simple open/close: 1.4626 microseconds Using phoronix pts/compilation ImageMagick - 43.29s Linux Kernel 3.1 - 76.67s Mplayer - 45.41s PHP - 29.1s - - I don't have phoronix results for 10.04 handy, but they were within 1% of each other... + I don't have phoronix results for 10.04 handy, but they were within 1% + of each other... ubuntu-10.04 - first boot == Simple syscall: 0.0524 microseconds Simple read: 0.1135
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
** Description changed: SRU Justification - [Impact] - * Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To workaround these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state an expect no performance issue. + [Impact] + * Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To workaround these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state an expect no performance issue. [Test Case] - * savevm/loadvm: -- Create a VM and install a test suite such as lmbench. -- Get numbers right after boot and record them. -- Open up the qemu monitor and type the following: - stop - savevm 0 - loadvm 0 - c -- Measure performance and record numbers. -- Compare if numbers are within margin of error. - * migrate: -- Create VM, install lmbench, get numbers. -- Open up qemu monitor and type the following: - stop - migrate exec:dd of=~/save.vm - quit -- Start a new VM using qemu but add the following argument: - -incoming exec:dd if=~/save.vm -- Run performance test and compare. - - If performance measured is similar then we pass the test case. + * savevm/loadvm: + - Create a VM and install a test suite such as lmbench. + - Get numbers right after boot and record them. + - Open up the qemu monitor and type the following: + stop + savevm 0 + loadvm 0 + c + - Measure performance and record numbers. + - Compare if numbers are within margin of error. + * migrate: + - Create VM, install lmbench, get numbers. + - Open up qemu monitor and type the following: + stop + migrate exec:dd of=~/save.vm + quit + - Start a new VM using qemu but add the following argument: + -incoming exec:dd if=~/save.vm + - Run performance test and compare. + + If performance measured is similar then we pass the test case. 
[Regression Potential] - * The fix is a backport of two upstream patches: + * The fix is a backport of two upstream patches: ad0b5321f1f797274603ebbe20108b0750baee94 211ea74022f51164a7729030b28eec90b6c99a08 On patch allows QEMU to use THP if its enabled. The other patch changes logic to not memset pages to zero when loading memory for the vm (on an incoming migration). + * I've also run the qa-regression-testing test-qemu.py script and it passes all tests. -- I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms- 0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet. I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and backported linux- image-3.2.0-35-generic as well.) I first noticed this problem with phoronix doing compilation tests, and then tried lmbench where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was it may be related to transparent hugepages not being used after migration, but this didn't pan out. 
Someone else has a similar problem here - http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592 qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio- disk0,format=raw,cache=none -device virtio-blk- pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio- disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive- ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive =drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net- pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 Disk backend is LVM running on SAN via FC connection
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
** Description changed: SRU Justification [Impact] * Users of QEMU that save their memory states using savevm/loadvm or migrate experience worse performance after the migration/loadvm. To workaround these issues VMs must be completely rebooted. Optimally we should be able to restore a VM's memory state an expect no performance issue. [Test Case] * savevm/loadvm: - Create a VM and install a test suite such as lmbench. - Get numbers right after boot and record them. - Open up the qemu monitor and type the following: stop savevm 0 loadvm 0 c - Measure performance and record numbers. - Compare if numbers are within margin of error. * migrate: - Create VM, install lmbench, get numbers. - Open up qemu monitor and type the following: stop migrate exec:dd of=~/save.vm quit - Start a new VM using qemu but add the following argument: -incoming exec:dd if=~/save.vm - Run performance test and compare. If performance measured is similar then we pass the test case. [Regression Potential] * The fix is a backport of two upstream patches: ad0b5321f1f797274603ebbe20108b0750baee94 211ea74022f51164a7729030b28eec90b6c99a08 - On patch allows QEMU to use THP if its enabled. + One patch allows QEMU to use THP if its enabled. The other patch changes logic to not memset pages to zero when loading memory for the vm (on an incoming migration). - * I've also run the qa-regression-testing test-qemu.py script and it passes all tests. + * I've also run the qa-regression-testing test-qemu.py script and it + passes all tests. + + [Additional Information] + + Kernels from 3.2 onwards are affected, and all have the config: + CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y. Therefore enabling THP is + applicable. + -- I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms- 0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder.) 
I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet. I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and backported linux- image-3.2.0-35-generic as well.) I first noticed this problem with phoronix doing compilation tests, and then tried lmbench where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was it may be related to transparent hugepages not being used after migration, but this didn't pan out. Someone else has a similar problem here - http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592 qemu command line example: /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio- disk0,format=raw,cache=none -device virtio-blk- pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio- disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive- ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive =drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net- pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 Disk backend is LVM running on SAN via FC connection (using symlink from /var/lib/one/datastores/0/2/disk.0 above) ubuntu-12.04 - first boot == Simple syscall: 0.0527 microseconds Simple read: 0.1143 microseconds Simple write: 0.0953 microseconds Simple open/close: 1.0432 microseconds Using phoronix pts/compuational ImageMagick - 31.54s Linux Kernel 3.1 - 43.91s Mplayer - 30.49s PHP - 22.25s ubuntu-12.04 - post live migration == Simple syscall: 0.0621 microseconds Simple read: 0.2485 microseconds Simple write: 0.2252 microseconds Simple open/close: 1.4626 microseconds Using phoronix pts/compilation ImageMagick - 43.29s Linux Kernel 3.1 - 76.67s Mplayer - 45.41s PHP - 29.1s I
[Qemu-devel] [Bug 1100843] Re: Live Migration Causes Performance Issues
** Changed in: qemu-kvm (Ubuntu) Status: Triaged = In Progress -- You received this bug notification because you are a member of qemu- devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/1100843 Title: Live Migration Causes Performance Issues Status in QEMU: New Status in “linux” package in Ubuntu: Confirmed Status in “qemu-kvm” package in Ubuntu: In Progress Bug description: I have 2 physical hosts running Ubuntu Precise. With 1.0+noroms- 0ubuntu14.7 and qemu-kvm 1.2.0+noroms-0ubuntu7 (source from quantal, built for Precise with pbuilder.) I attempted to build qemu-1.3.0 debs from source to test, but libvirt seems to have an issue with it that I haven't been able to track down yet. I'm seeing a performance degradation after live migration on Precise, but not Lucid. These hosts are managed by libvirt (tested both 0.9.8-2ubuntu17 and 1.0.0-0ubuntu4) in conjunction with OpenNebula. I don't seem to have this problem with lucid guests (running a number of standard kernels, 3.2.5 mainline and backported linux- image-3.2.0-35-generic as well.) I first noticed this problem with phoronix doing compilation tests, and then tried lmbench where even simple calls experience performance degradation. I've attempted to post to the kvm mailing list, but so far the only suggestion was it may be related to transparent hugepages not being used after migration, but this didn't pan out. 
  Someone else has a similar problem here:
  http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592

  qemu command line example:

  /usr/bin/kvm -name one-2 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid f89e31a4-4945-c12c-6544-149ba0746c2f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/2/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:2,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

  Disk backend is LVM running on a SAN via FC connection (using a symlink from /var/lib/one/datastores/0/2/disk.0 above).

  ubuntu-12.04 - first boot
  ==
  Simple syscall: 0.0527 microseconds
  Simple read: 0.1143 microseconds
  Simple write: 0.0953 microseconds
  Simple open/close: 1.0432 microseconds

  Using phoronix pts/compilation:
  ImageMagick - 31.54s
  Linux Kernel 3.1 - 43.91s
  Mplayer - 30.49s
  PHP - 22.25s

  ubuntu-12.04 - post live migration
  ==
  Simple syscall: 0.0621 microseconds
  Simple read: 0.2485 microseconds
  Simple write: 0.2252 microseconds
  Simple open/close: 1.4626 microseconds

  Using phoronix pts/compilation:
  ImageMagick - 43.29s
  Linux Kernel 3.1 - 76.67s
  Mplayer - 45.41s
  PHP - 29.1s

  I don't have phoronix results for 10.04 handy, but they were within 1% of each other...
  ubuntu-10.04 - first boot
  ==
  Simple syscall: 0.0524 microseconds
  Simple read: 0.1135 microseconds
  Simple write: 0.0972 microseconds
  Simple open/close: 1.1261 microseconds

  ubuntu-10.04 - post live migration
  ==
  Simple syscall: 0.0526 microseconds
  Simple read: 0.1075 microseconds
  Simple write: 0.0951 microseconds
  Simple open/close: 1.0413 microseconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1100843/+subscriptions
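The lmbench deltas reported above can be summarized mechanically. A minimal sketch (not part of the bug report; the 10% tolerance is an arbitrary assumption) that flags which latencies regressed after migration:

```python
# Compare the "first boot" vs "post live migration" lmbench figures
# quoted in the bug report (microseconds) and flag regressions.

FIRST_BOOT = {
    "Simple syscall": 0.0527,
    "Simple read": 0.1143,
    "Simple write": 0.0953,
    "Simple open/close": 1.0432,
}
POST_MIGRATION = {
    "Simple syscall": 0.0621,
    "Simple read": 0.2485,
    "Simple write": 0.2252,
    "Simple open/close": 1.4626,
}

def regressions(before, after, tolerance=0.10):
    """Return {test: slowdown ratio} for tests slower by more than `tolerance`."""
    out = {}
    for test, base in before.items():
        ratio = after[test] / base - 1.0   # fractional slowdown vs baseline
        if ratio > tolerance:
            out[test] = round(ratio, 2)
    return out

if __name__ == "__main__":
    for test, ratio in regressions(FIRST_BOOT, POST_MIGRATION).items():
        print(f"{test}: {ratio:+.0%} slower")
```

Run against the 12.04 numbers, every lmbench test exceeds the tolerance (reads more than double); the 10.04 numbers above would produce no flags, matching the reporter's "within 1%" observation.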
From my testing this has been fixed in the saucy version (1.5.0) of qemu. It is fixed by this patch: f1c72795af573b24a7da5eb52375c9aba8a37972. However, later in the history this commit was reverted, which broke this again. The other commit that fixes this is 211ea74022f51164a7729030b28eec90b6c99a08, so 211ea740 needs to be backported to P/Q/R to fix this issue. I have v1 packages of a precise backport here; I've confirmed performance differences between savevm/loadvm cycles:
http://people.canonical.com/~arges/lp1100843/precise/

** No longer affects: linux (Ubuntu)

** Also affects: qemu-kvm (Ubuntu Precise)
   Importance: Undecided
       Status: New

** Also affects: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided
       Status: New

** Also affects: qemu-kvm (Ubuntu Raring)
   Importance: Undecided
       Status: New

** Also affects: qemu-kvm (Ubuntu Saucy)
   Importance: High
     Assignee: Chris J Arges (arges)
       Status: In Progress

** Changed in: qemu-kvm (Ubuntu Precise)
     Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Quantal)
     Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Raring)
     Assignee: (unassigned) => Chris J Arges (arges)

** Changed in: qemu-kvm (Ubuntu Precise)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Quantal)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Raring)
   Importance: Undecided => High

** Changed in: qemu-kvm (Ubuntu Saucy)
     Assignee: Chris J Arges (arges) => (unassigned)

** Changed in: qemu-kvm (Ubuntu Saucy)
       Status: In Progress => Fix Released

** Changed in: qemu-kvm (Ubuntu Raring)
       Status: New => Triaged

** Changed in: qemu-kvm (Ubuntu Quantal)
       Status: New => Triaged

** Changed in: qemu-kvm (Ubuntu Precise)
       Status: New => In Progress

-- 
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU: New
Status in “qemu-kvm” package in Ubuntu: Fix Released
Status in “qemu-kvm” source package in Precise: In Progress
Status in “qemu-kvm” source package in Quantal: Triaged
Status in “qemu-kvm” source package in Raring: Triaged
Status in “qemu-kvm” source package in Saucy: Fix Released
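The backport described above could be attempted against an upstream tree roughly as follows. This is a sketch only: the base tag (v1.2.0, matching the Precise qemu-kvm version) and the build target are assumptions, and the packages linked above are the prepared route; only the commit id comes from the comment itself.

```shell
#!/bin/sh
# Sketch: cherry-pick the upstream fix onto an older qemu tree.
set -e

# Commit cited in the comment above as the fix needing backport.
FIX=211ea74022f51164a7729030b28eec90b6c99a08

git clone git://git.qemu.org/qemu.git
cd qemu
git checkout -b lp1100843-backport v1.2.0   # assumed base for Precise
git cherry-pick "$FIX"                      # may need manual conflict resolution
./configure --target-list=x86_64-softmmu
make -j"$(nproc)"
```

On trees this far behind the fix, the cherry-pick is likely to conflict and need hand-editing, which is why tested distro packages (as linked above) are the safer path.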
** Changed in: qemu-kvm (Ubuntu)
     Assignee: (unassigned) => Chris J Arges (arges)

-- 
https://bugs.launchpad.net/bugs/1100843

Title:
  Live Migration Causes Performance Issues

Status in QEMU: New
Status in “linux” package in Ubuntu: Confirmed
Status in “qemu-kvm” package in Ubuntu: Triaged