[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
** Changed in: qemu
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1570134

Title:
  While committing snapshot qemu crashes with SIGABRT

Status in QEMU:
  Fix Released

Bug description:
  Information:

  OS: Slackware64-Current
  Compiled with: gcc version 5.3.0 (GCC) / glibc 2.23
  Compiled using:

    CFLAGS="-O2 -fPIC" \
    CXXFLAGS="-O2 -fPIC" \
    LDFLAGS="-L/usr/lib64" \
    ./configure \
      --prefix=/usr \
      --sysconfdir=/etc \
      --localstatedir=/var \
      --libdir=/usr/lib64 \
      --enable-spice \
      --enable-kvm \
      --enable-glusterfs \
      --enable-libiscsi \
      --enable-libusb \
      --target-list=x86_64-softmmu,i386-softmmu \
      --enable-debug

  Source: qemu-2.5.1.tar.bz2

  Running as:

    /usr/bin/qemu-system-x86_64 -name test1,debug-threads=on -S \
      -machine pc-1.1,accel=kvm,usb=off -m 4096 -realtime mlock=off \
      -smp 2,sockets=2,cores=1,threads=1 \
      -uuid 4b30ec13-6609-4a56-8731-d400c38189ef \
      -no-user-config -nodefaults \
      -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-test1/monitor.sock,server,nowait \
      -mon chardev=charmonitor,id=monitor,mode=control \
      -rtc base=localtime,clock=vm,driftfix=slew \
      -global kvm-pit.lost_tick_policy=discard -no-shutdown -boot strict=on \
      -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
      -drive file=/datastore/vm/test1/test1.img,format=qcow2,if=none,id=drive-virtio-disk0 \
      -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
      -drive if=none,id=drive-ide0-1-0,readonly=on \
      -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 \
      -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 \
      -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:66:2e:0f,bus=pci.0,addr=0x3 \
      -vnc 0.0.0.0:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
      -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
      -msg timestamp=on

  File system: zfs v0.6.5.6

  While running:
    virsh blockcommit test1 vda --active --pivot --verbose

  VM running very heavy IO load

  GDB reporting:

    #0  0x7fd80132c3f8 in raise () at /lib64/libc.so.6
    #1  0x7fd80132dffa in abort () at /lib64/libc.so.6
    #2  0x7fd801324c17 in __assert_fail_base () at /lib64/libc.so.6
    #3  0x7fd801324cc2 in () at /lib64/libc.so.6
    #4  0x55d9918d7572 in bdrv_replace_in_backing_chain (old=0x55d993ed9c10, new=0x55d9931ccc10) at block.c:2096
            __PRETTY_FUNCTION__ = "bdrv_replace_in_backing_chain"
    #5  0x55d991911869 in mirror_exit (job=0x55d993fef830, opaque=0x55d999bbefe0) at block/mirror.c:376
            to_replace = 0x55d993ed9c10
            s = 0x55d993fef830
            data = 0x55d999bbefe0
            replace_aio_context = <optimized out>
            src = 0x55d993ed9c10
    #6  0x55d9918da1dc in block_job_defer_to_main_loop_bh (opaque=0x55d9940ce850) at blockjob.c:481
            data = 0x55d9940ce850
            aio_context = 0x55d9931a2610
    #7  0x55d9918d014b in aio_bh_poll (ctx=ctx@entry=0x55d9931a2610) at async.c:92
            bh = <optimized out>
            bhp = <optimized out>
            next = 0x55d99440f910
            ret = 1
    #8  0x55d9918dc8c0 in aio_dispatch (ctx=0x55d9931a2610) at aio-posix.c:305
            node = <optimized out>
            progress = false
    #9  0x55d9918d000e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at async.c:231
            ctx = <optimized out>
    #10 0x7fd8037cf787 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
    #11 0x55d9918db03b in main_loop_wait () at main-loop.c:211
            context = 0x55d9931a3200
            pfds = <optimized out>
            ret = 0
            spin_counter = 1
            ret = 0
            timeout = 4294967295
            timeout_ns = <optimized out>
    #12 0x55d9918db03b in main_loop_wait (timeout=<optimized out>) at main-loop.c:256
            ret = 0
            spin_counter = 1
            ret = 0
            timeout = 4294967295
            timeout_ns = <optimized out>
    #13 0x55d9918db03b in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:504
            ret = 0
            timeout = 4294967295
            timeout_ns = <optimized out>
    #14 0x55d991679cc4 in main () at vl.c:1923
            nonblocking = <optimized out>
            last_io = 2
            i = <optimized out>
            snapshot = <optimized out>
            linux_boot = <optimized out>
            initrd_filename = <optimized out>
            kernel_filename = <optimized out>
            kernel_cmdline = <optimized out>
            boot_order = <optimized out>
            boot_once = <optimized out>
            ds = <optimized out>
            cyls = <optimized out>
            heads = <optimized out>
            secs = <optimized out>
            translation = <optimized out>
            hda_opts = <optimized out>
            opts = <optimized out>
            machine_opts = <optimized out>
            icount_opts = <optimized out>
            olist = <optimized out>
            optind = 49
            optarg = 0x7fffc6d27f43 "timestamp=on"
            loadvm = <optimized out>
            machine_class = 0x55d993194d10
            cpu_model = <optimized out>
            vga_model = 0x0
            qtest_chrdev = <optimized out>
            qtest_log = <optimized out>
            pid_file = <optimized out>
            incoming = <optimized out>
            defconfig = <optimized out>
            userconfig = false
            log_mask = <optimized out>
            log_file = <optimized out>
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
** Changed in: qemu
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1570134

Title:
  While committing snapshot qemu crashes with SIGABRT

Status in QEMU:
  Fix Committed
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On Fri, 04/22 18:55, Matthew Schumacher wrote:
> Running master as of this morning 4/22 and I'm not getting any more
> crashes, and I'm flat beating on it.  RC3 still crashes on me, so
> whatever the fix is, came after rc3.

Matthew,

It was bcd82a9..ab27c3b from last Friday (yes, after -rc3). Thank you
so much for your reporting and testing.

Fam
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Running master as of this morning 4/22 and I'm not getting any more
crashes, and I'm flat beating on it.  RC3 still crashes on me, so
whatever the fix is, came after rc3.

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1570134
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On 20 April 2016 at 19:09, Max Reitz wrote:
> On 20.04.2016 02:03, Matthew Schumacher wrote:
>> Qemu still crashes for me, but the debug is again very different.  When
>> I attach to the qemu process from gdb, it is unable to provide a
>> backtrace when it crashes.  The log file is different too.  Any ideas?
>>
>> qemu-system-x86_64: block.c:2307: bdrv_replace_in_backing_chain:
>> Assertion `!bdrv_requests_pending(old)' failed.
>
> This message is exactly the same as you saw in 2.5.1, so I guess we've
> at least averted a regression in 2.6.0.

Could somebody summarize for me the state of this bug w.r.t. the
upcoming release? In particular:
 * are there any patches on-list for it which should go into rc3?
 * are there any further problems which we plan to fix for 2.6 but
   which there aren't patches for yet?

thanks
-- PMM
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On Thu, 04/21 08:34, Fam Zheng wrote:
> On Wed, 04/20 22:03, Max Reitz wrote:
> > However, I still don't think the assertion is the problem but the fact
> > that the guest device can still send requests after bdrv_drained_begin().
>
> Thanks for debugging this.
>
> bdrv_drained_begin isn't effective because the guest notifier handler is
> not registered as "external":
>
>     virtio_queue_set_host_notifier_fd_handler
>       event_notifier_set_handler
>         qemu_set_fd_handler
>           aio_set_fd_handler(ctx, fd,
>                              is_external, /* false */
>                              ...)
>
> is_external SHOULD be true here.

This patch survives the reproducer I have on top of master (also
submitted to qemu-devel for 2.6):

---

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index f745c4a..002c2c6 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1829,10 +1829,11 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                bool set_handler)
 {
     if (assign && set_handler) {
-        event_notifier_set_handler(&vq->host_notifier,
-                                   virtio_queue_host_notifier_read);
+        aio_set_event_notifier(qemu_get_aio_context(), &vq->host_notifier,
+                               true, virtio_queue_host_notifier_read);
     } else {
-        event_notifier_set_handler(&vq->host_notifier, NULL);
+        aio_set_event_notifier(qemu_get_aio_context(), &vq->host_notifier,
+                               true, NULL);
     }
     if (!assign) {
         /* Test and clear notifier before after disabling event,
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On Wed, 04/20 22:03, Max Reitz wrote:
> However, I still don't think the assertion is the problem but the fact
> that the guest device can still send requests after bdrv_drained_begin().

Thanks for debugging this.

bdrv_drained_begin isn't effective because the guest notifier handler is
not registered as "external":

    virtio_queue_set_host_notifier_fd_handler
      event_notifier_set_handler
        qemu_set_fd_handler
          aio_set_fd_handler(ctx, fd,
                             is_external, /* false */
                             ...)

is_external SHOULD be true here.
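Fam's point can be illustrated with a tiny stand-alone model (plain Python, not QEMU code; the `AioContext` class, `disable_cnt` counter, and handler names below are invented for the sketch): aio_disable_external() only suppresses handlers registered with is_external=true, so a notifier that was wrongly registered as internal keeps firing inside a drained section.

```python
# Hypothetical miniature model of QEMU's aio_disable_external()
# semantics; not actual QEMU code.

class AioContext:
    def __init__(self):
        self.disable_cnt = 0          # bumped by bdrv_drained_begin()
        self.handlers = []            # (name, is_external) pairs

    def set_fd_handler(self, name, is_external):
        self.handlers.append((name, is_external))

    def disable_external(self):
        self.disable_cnt += 1

    def enable_external(self):
        self.disable_cnt -= 1

    def dispatch(self):
        # Return the handlers that would actually run right now:
        # external handlers are skipped while the counter is raised.
        return [name for name, ext in self.handlers
                if not (ext and self.disable_cnt > 0)]

ctx = AioContext()
# The bug: the virtio host notifier registered as internal ...
ctx.set_fd_handler("virtio-host-notifier(buggy)", is_external=False)
# ... while the fix registers it as external:
ctx.set_fd_handler("virtio-host-notifier(fixed)", is_external=True)

ctx.disable_external()                # models bdrv_drained_begin()
assert ctx.dispatch() == ["virtio-host-notifier(buggy)"]
ctx.enable_external()                 # models bdrv_drained_end()
assert set(ctx.dispatch()) == {"virtio-host-notifier(buggy)",
                               "virtio-host-notifier(fixed)"}
print("drained section only silences is_external=True handlers")
```

In this model, exactly as in Fam's analysis, the "buggy" handler still dispatches inside the drained section, which is what lets the guest submit new requests at the worst possible moment.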
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On 20.04.2016 20:09, Max Reitz wrote:
> On 20.04.2016 02:03, Matthew Schumacher wrote:
>> Qemu still crashes for me, but the debug is again very different.  When
>> I attach to the qemu process from gdb, it is unable to provide a
>> backtrace when it crashes.  The log file is different too.  Any ideas?
>>
>> qemu-system-x86_64: block.c:2307: bdrv_replace_in_backing_chain:
>> Assertion `!bdrv_requests_pending(old)' failed.
>
> This message is exactly the same as you saw in 2.5.1, so I guess we've
> at least averted a regression in 2.6.0.

I get the same message in 2.5.0, in 2.4.0 it's "Co-routine re-entered
recursively".  2.3.0 works fine.

Bisecting the regression between 2.3.0 and 2.4.0 interestingly yields
48ac0a4df84662f as the problematic commit, but I can't imagine that this
is the root issue.  The effective change it brings is that for active
commits, the buf_size is no longer the same as the granularity, but the
default mirror buf_size instead.

When forcing buf_size to the granularity, the issue first appears with
commit 3f09bfbc7bee812 (after 2.4.0, before 2.5.0), which is much less
surprising, because this is the one that introduced the assertion in the
first place.

However, I still don't think the assertion is the problem but the fact
that the guest device can still send requests after bdrv_drained_begin().

Max
Re: [Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
On 20.04.2016 02:03, Matthew Schumacher wrote:
> Max,
>
> Qemu still crashes for me, but the debug is again very different.  When
> I attach to the qemu process from gdb, it is unable to provide a
> backtrace when it crashes.  The log file is different too.  Any ideas?
>
> qemu-system-x86_64: block.c:2307: bdrv_replace_in_backing_chain:
> Assertion `!bdrv_requests_pending(old)' failed.

This message is exactly the same as you saw in 2.5.1, so I guess we've
at least averted a regression in 2.6.0.

I'm CC-ing some people who are more involved with this (although Paolo
is on PTO right now, but well...).

(The following is more of a note to those people than to you, Matthew.)

Summary: I think bdrv_drained_begin() does not behave as advertised.

The assertion that is failing here asserts that no requests are pending
on the mirror block job's source BDS.  However, we do invoke
bdrv_drained_begin() on exactly that BDS at the end of mirror_run().
When that function returns, there are indeed no more requests pending
for that BDS.  But once mirror_exit() is invoked, there may be new
requests pending.

I reproduced that by running bonnie++ in a guest, then committing a
snapshot and invoking block-job-complete right after the
BLOCK_JOB_READY event; sometimes bdrv_requests_pending(s->common.bs) is
true in mirror_exit() (which is bad), sometimes it's false.  I just used
a plain virtio-blk drive without dataplane.

I'm not sure exactly how bdrv_drained_begin() and in turn
aio_disable_external() are supposed to work, but as a matter of fact a
BDS may receive requests even after those functions are called.  Just
putting an assert(!bs->quiesce_counter) in tracked_request_begin() will
make it fail even before I started the mirror block job (due to some
flush).

So in my case the problematic request regarding the mirroring comes
from blk_aio_read_entry(); putting an
assert(!blk_bs(blk)->quiesce_counter) into blk_aio_readv() yields the
following backtrace:

#0  0x7f3e750bd2a8 in raise () from /usr/lib/libc.so.6
No symbol table info available.
#1  0x7f3e750be72a in abort () from /usr/lib/libc.so.6
No symbol table info available.
#2  0x7f3e750b61b7 in __assert_fail_base () from /usr/lib/libc.so.6
No symbol table info available.
#3  0x7f3e750b6262 in __assert_fail () from /usr/lib/libc.so.6
No symbol table info available.
#4  0x564cf7d4e25e in blk_aio_readv (blk=<optimized out>, sector_num=<optimized out>, iov=<optimized out>, nb_sectors=<optimized out>, cb=<optimized out>, opaque=<optimized out>) at qemu/block/block-backend.c:1002
        __PRETTY_FUNCTION__ = "blk_aio_readv"
#5  0x564cf7ab2cf3 in submit_requests (niov=<optimized out>, num_reqs=<optimized out>, start=<optimized out>, mrb=<optimized out>, blk=<optimized out>) at qemu/hw/block/virtio-blk.c:361
        nb_sectors = <optimized out>
        is_write = <optimized out>
        qiov = <optimized out>
        sector_num = <optimized out>
#6  virtio_blk_submit_multireq (blk=0x564cf9f80250, mrb=mrb@entry=0x7ffeffbfce40) at qemu/hw/block/virtio-blk.c:391
        i = <optimized out>
        start = <optimized out>
        num_reqs = <optimized out>
        niov = <optimized out>
        nb_sectors = <optimized out>
        max_xfer_len = <optimized out>
        sector_num = <optimized out>
#7  0x564cf7ab38c2 in virtio_blk_handle_vq (s=0x564cf9e51268, vq=<optimized out>) at qemu/hw/block/virtio-blk.c:593
        req = 0x0
        mrb = {reqs = {0x564cfb8e8c30, 0x564cfb7bc290, 0x0}, num_reqs = 2, is_write = false}
#8  0x564cf7addcf5 in virtio_queue_notify_vq (vq=0x564cfa000be0) at qemu/hw/virtio/virtio.c:1108
        vdev = 0x564cf9e51268
#9  0x564cf7d19980 in aio_dispatch (ctx=0x564cf9e42f40) at qemu/aio-posix.c:327
        tmp = <optimized out>
        revents = <optimized out>
        node = 0x7f3e54015030
        progress = false
#10 0x564cf7d0eecd in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at qemu/async.c:233
        ctx = <optimized out>
#11 0x7f3e781d7f07 in g_main_context_dispatch () from /usr/lib/libglib-2.0.so.0
No symbol table info available.
#12 0x564cf7d1803b in glib_pollfds_poll () at qemu/main-loop.c:213
        context = 0x564cf9e44800
        pfds = <optimized out>
#13 os_host_main_loop_wait (timeout=<optimized out>) at qemu/main-loop.c:258
        ret = 2
        spin_counter = 2
#14 main_loop_wait (nonblocking=<optimized out>) at qemu/main-loop.c:506
        ret = 2
        timeout = 1000
        timeout_ns = <optimized out>
#15 0x564cf7a4c91c in main_loop () at qemu/vl.c:1934
        nonblocking = <optimized out>
        last_io = 0
#16 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at qemu/vl.c:4658

Maybe bdrv_drained_begin() is supposed to work like this and to let this
request through, but that would be pretty counter-intuitive.

Max
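The race Max describes can be sketched as a toy timeline (hypothetical Python model; `BDS`, `in_flight`, and the helper names are invented here, not QEMU code): bdrv_drained_begin() waits out the requests in flight, but if a request source is not suppressed, a new request can start before bdrv_replace_in_backing_chain() runs, tripping the `!bdrv_requests_pending(old)` assertion.

```python
# Toy model of the drained-section race; none of this is real QEMU code.

class BDS:
    def __init__(self):
        self.quiesce_counter = 0
        self.in_flight = 0

def drained_begin(bs):
    bs.quiesce_counter += 1
    bs.in_flight = 0             # models draining current requests

def guest_submits_request(bs, source_suppressed):
    # A properly suppressed (external) source cannot submit while drained.
    if source_suppressed and bs.quiesce_counter > 0:
        return
    bs.in_flight += 1            # a new request sneaks in

def replace_in_backing_chain(bs):
    # models: assert(!bdrv_requests_pending(old))
    assert bs.in_flight == 0, "!bdrv_requests_pending(old) failed"

# Buggy case: the guest notifier is not registered as external.
old = BDS()
drained_begin(old)
guest_submits_request(old, source_suppressed=False)
try:
    replace_in_backing_chain(old)
except AssertionError as e:
    print("crash:", e)

# Fixed case: the source is suppressed inside the drained section.
old = BDS()
drained_begin(old)
guest_submits_request(old, source_suppressed=True)
replace_in_backing_chain(old)
print("pivot completes cleanly")
```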
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Max, Qemu still crashes for me, but the debug output is again very different. When I attach to the qemu process from gdb, it is unable to provide a backtrace when it crashes. The log file is different too. Any ideas?

qemu-system-x86_64: block.c:2307: bdrv_replace_in_backing_chain: Assertion `!bdrv_requests_pending(old)' failed.

(gdb) attach 5563
Attaching to process 5563
Reading symbols from /usr/bin/qemu-system-x86_64...done.
Reading symbols from /usr/lib64/libepoxy.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libdrm.so.2...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgbm.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libX11.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libz.so.1...(no debugging symbols found)...done.
Reading symbols from /lib64/libaio.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libiscsi.so.4...done.
Reading symbols from /usr/lib64/libcurl.so.4...(no debugging symbols found)...done.
Reading symbols from /lib64/libacl.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgfapi.so.0...done.
Reading symbols from /usr/lib64/libglusterfs.so.0...done.
Reading symbols from /usr/lib64/libgfrpc.so.0...done.
Reading symbols from /usr/lib64/libgfxdr.so.0...done.
Reading symbols from /lib64/libuuid.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libssh2.so.1...done.
Reading symbols from /lib64/libbz2.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libpixman-1.so.0...(no debugging symbols found)...done.
Reading symbols from /lib64/libutil.so.1...(no debugging symbols found)...done.
Reading symbols from /lib64/libncurses.so.5...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libpng16.so.16...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libjpeg.so.62...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libsasl2.so.3...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libSDL-1.2.so.0...(no debugging symbols found)...done.
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[New LWP 5588] [New LWP 5587] [New LWP 5586] [New LWP 5585] [New LWP 5584] [New LWP 5583] [New LWP 5582] [New LWP 5581] [New LWP 5580] [New LWP 5579] [New LWP 5578] [New LWP 5577] [New LWP 5576] [New LWP 5575] [New LWP 5574] [New LWP 5573] [New LWP 5572] [New LWP 5571] [New LWP 5570] [New LWP 5568] [New LWP 5567] [New LWP 5566] [New LWP 5564]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Reading symbols from /usr/lib64/libvte.so.9...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgtk-x11-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgdk-x11-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libpangocairo-1.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libatk-1.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgdk_pixbuf-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libpangoft2-1.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libpango-1.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libfontconfig.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libfreetype.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgio-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgobject-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libglib-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libcairo.so.2...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libXext.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libnettle.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgnutls.so.30...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/liblzo2.so.2...done.
Reading symbols from /usr/lib64/libspice-server.so.1...done.
Reading symbols from /usr/lib64/libcacard.so.0...done.
Reading symbols from /usr/lib64/libusb-1.0.so.0...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgthread-2.0.so.0...(no debugging symbols found)...done.
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libstdc++.so.6...(no debugging symbols found)...done.
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/lib64/libgcc_s.so.1...(no debugging symbols found)...done.
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
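(In case it helps anyone reproducing this: by default gdb's signal disposition can let the process die on SIGABRT before a backtrace is taken. A small gdb script along the following lines — generic gdb usage, not specific to this bug; the file name and PID are placeholders — makes gdb stop at the abort so the state can be inspected:)

```gdb
# trace-abort.gdb -- load with: gdb -p <qemu pid> -x trace-abort.gdb
# Stop inside gdb when abort() raises SIGABRT, instead of letting qemu die.
handle SIGABRT stop print nopass
continue
# Once the assertion fires and gdb stops, dump every thread with locals:
#   (gdb) thread apply all bt full
```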
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Hi Matthew,

I now reproduced the issue myself, and it appears the second patch just missed one little thing. The attached patch (together with patch 1 from above) fixes the problem for me. (Also available from https://github.com/XanClic/qemu.git, branch lp-1570134-pl2; archive: https://github.com/XanClic/qemu/archive/lp-1570134-pl2.zip)

While it was probably more or less noticed by chance (this is most likely a different issue than the one in 2.5.1), thank you for bringing this up. 2.6.0 is close to release, so it's good that this issue was still found.

Max

** Patch added: "0002-Quickfix-block-mirror-Refresh-stale-HBI-cache.patch"
   https://bugs.launchpad.net/qemu/+bug/1570134/+attachment/4640207/+files/0002-Quickfix-block-mirror-Refresh-stale-HBI-cache.patch

--
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1570134

Title:
  While committing snapshot qemu crashes with SIGABRT

Status in QEMU:
  New

Bug description:
  Information:
  OS: Slackware64-Current
  Compiled with: gcc version 5.3.0 (GCC) / glibc 2.23
  Compiled using:
  CFLAGS="-O2 -fPIC" \
  CXXFLAGS="-O2 -fPIC" \
  LDFLAGS="-L/usr/lib64" \
  ./configure \
    --prefix=/usr \
    --sysconfdir=/etc \
    --localstatedir=/var \
    --libdir=/usr/lib64 \
    --enable-spice \
    --enable-kvm \
    --enable-glusterfs \
    --enable-libiscsi \
    --enable-libusb \
    --target-list=x86_64-softmmu,i386-softmmu \
    --enable-debug

  Source: qemu-2.5.1.tar.bz2

  Running as:
  /usr/bin/qemu-system-x86_64 -name test1,debug-threads=on -S -machine pc-1.1,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 4b30ec13-6609-4a56-8731-d400c38189ef -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-test1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,clock=vm,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/datastore/vm/test1/test1.img,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:66:2e:0f,bus=pci.0,addr=0x3 -vnc 0.0.0.0:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on

  File system: zfs v0.6.5.6

  While running: virsh blockcommit test1 vda --active --pivot --verbose
  VM running very heavy IO load

  GDB reporting:
  #0  0x7fd80132c3f8 in raise () at /lib64/libc.so.6
  #1  0x7fd80132dffa in abort () at /lib64/libc.so.6
  #2  0x7fd801324c17 in __assert_fail_base () at /lib64/libc.so.6
  #3  0x7fd801324cc2 in () at /lib64/libc.so.6
  #4  0x55d9918d7572 in bdrv_replace_in_backing_chain (old=0x55d993ed9c10, new=0x55d9931ccc10) at block.c:2096
          __PRETTY_FUNCTION__ = "bdrv_replace_in_backing_chain"
  #5  0x55d991911869 in mirror_exit (job=0x55d993fef830, opaque=0x55d999bbefe0) at block/mirror.c:376
          to_replace = 0x55d993ed9c10
          s = 0x55d993fef830
          data = 0x55d999bbefe0
          replace_aio_context =
          src = 0x55d993ed9c10
  #6  0x55d9918da1dc in block_job_defer_to_main_loop_bh (opaque=0x55d9940ce850) at blockjob.c:481
          data = 0x55d9940ce850
          aio_context = 0x55d9931a2610
  #7  0x55d9918d014b in aio_bh_poll (ctx=ctx@entry=0x55d9931a2610) at async.c:92
          bh =
          bhp =
          next = 0x55d99440f910
          ret = 1
  #8  0x55d9918dc8c0 in aio_dispatch (ctx=0x55d9931a2610) at aio-posix.c:305
          node =
          progress = false
  #9  0x55d9918d000e in aio_ctx_dispatch (source=, callback=, user_data=) at async.c:231
          ctx =
  #10 0x7fd8037cf787 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
  #11 0x55d9918db03b in main_loop_wait () at main-loop.c:211
          context = 0x55d9931a3200
          pfds =
          ret = 0
          spin_counter = 1
          ret = 0
          timeout = 4294967295
          timeout_ns =
  #12 0x55d9918db03b in main_loop_wait (timeout=) at main-loop.c:256
          ret = 0
          spin_counter = 1
          ret = 0
          timeout = 4294967295
          timeout_ns =
  #13 0x55d9918db03b in main_loop_wait (nonblocking=) at main-loop.c:504
          ret = 0
          timeout = 4294967295
          timeout_ns =
  #14 0x55d991679cc4 in main () at vl.c:1923
          nonblocking =
          last_io = 2
          i =
          snapshot =
          linux_boot =
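The patch title, "Refresh stale HBI cache", suggests the mirror job keeps a cached answer from its dirty-bitmap iterator that can go stale when guest I/O dirties sectors behind the iterator's back. A toy Python model of that failure mode — an illustration of the general pattern only, not QEMU's actual HBitmap code — looks like this:

```python
class DirtyBitmap:
    """Toy stand-in for a dirty bitmap: tracks dirty sector numbers as a set."""
    def __init__(self):
        self.dirty = set()

    def set(self, sector):
        self.dirty.add(sector)

    def next_dirty(self, from_sector):
        """First dirty sector >= from_sector, or -1 if there is none."""
        candidates = [s for s in self.dirty if s >= from_sector]
        return min(candidates) if candidates else -1


class CachedIterator:
    """Iterator that caches the next dirty sector instead of re-querying."""
    def __init__(self, bitmap):
        self.bitmap = bitmap
        self.cached_next = bitmap.next_dirty(0)

    def refresh(self, from_sector):
        # The fix in spirit: re-query the bitmap instead of trusting the cache.
        self.cached_next = self.bitmap.next_dirty(from_sector)


bitmap = DirtyBitmap()
bitmap.set(100)
it = CachedIterator(bitmap)          # cache now holds 100

# Concurrent guest I/O dirties an earlier sector behind the iterator's back:
bitmap.set(50)

# The cached value no longer matches a fresh query -- the kind of mismatch
# an assertion like `hbitmap_next == next_sector` is there to catch:
assert it.cached_next != bitmap.next_dirty(0)

it.refresh(0)                        # refresh the stale cache...
assert it.cached_next == bitmap.next_dirty(0)   # ...and they agree again
```

Refreshing the cached value before relying on it restores the invariant the assertion checks; how QEMU's quickfix does this exactly is in the attached patch.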
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Thank you for working on this. Super helpful to have someone looking at this issue!

With those two patches applied to 2.6.0-rc2 I still get the following:

qemu-system-x86_64: block/mirror.c:342: mirror_iteration: Assertion `hbitmap_next == next_sector' failed.

The line number confirms that qemu was patched before it was compiled. Here is the full backtrace:

#0  0x7f4e5aa213f8 in raise () at /lib64/libc.so.6
#1  0x7f4e5aa22ffa in abort () at /lib64/libc.so.6
#2  0x7f4e5aa19c17 in __assert_fail_base () at /lib64/libc.so.6
#3  0x7f4e5aa19cc2 in () at /lib64/libc.so.6
#4  0x564d5afc1dab in mirror_run (s=0x564d5eb9c2d0) at block/mirror.c:342
        hbitmap_next =
        next_sector = 29561984
        next_chunk = 230953
        nb_chunks = 4
        end = 209715200
        sectors_per_chunk = 128
        source = 0x564d5d273b00
        sector_num = 29561472
        delay_ns = 0
        delay_ns = 0
        cnt =
        should_complete =
        s = 0x564d5eb9c2d0
        data =
        bs = 0x564d5d273b00
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000\060"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#5  0x564d5afc1dab in mirror_run (opaque=0x564d5eb9c2d0) at block/mirror.c:619
        delay_ns = 0
        cnt =
        should_complete =
        s = 0x564d5eb9c2d0
        data =
        bs = 0x564d5d273b00
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000\060"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#6  0x564d5b027e4a in coroutine_trampoline (i0=, i1=) at util/coroutine-ucontext.c:78
        self = 0x564d5eacc520
        co = 0x564d5eacc520
#7  0x7f4e5aa36560 in __start_context () at /lib64/libc.so.6
#8  0x7ffc151258c0 in ()
#9  0x in ()
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
And the second patch, because I'm either too stupid to make Launchpad attach two files to a single comment, or because Launchpad actually doesn't want me to for some reason.

** Patch added: "0002-Quickfix-block-mirror-Refresh-stale-HBI-cache.patch"
   https://bugs.launchpad.net/qemu/+bug/1570134/+attachment/4638458/+files/0002-Quickfix-block-mirror-Refresh-stale-HBI-cache.patch
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Hi Matthew,

Thank you for your report! Could you try again with these two patches applied? Alternatively, you may fetch the resulting tree from https://github.com/XanClic/qemu.git, branch lp-1570134-pl (https://github.com/XanClic/qemu/archive/lp-1570134-pl.zip).

Max

** Patch added: "0001-Quickfix-block-mirror-Revive-dead-code.patch"
   https://bugs.launchpad.net/qemu/+bug/1570134/+attachment/4638457/+files/0001-Quickfix-block-mirror-Revive-dead-code.patch
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
I just tested master, and it does the same as 2.6.0-rc. The 2.6.0 branch crashes much faster than 2.5.x.
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
It still fails with ext4:

#0  0x7fbaa12b33f8 in raise () at /lib64/libc.so.6
#1  0x7fbaa12b4ffa in abort () at /lib64/libc.so.6
#2  0x7fbaa12abc17 in __assert_fail_base () at /lib64/libc.so.6
#3  0x7fbaa12abcc2 in () at /lib64/libc.so.6
#4  0x5646b990f926 in mirror_run (s=0x5646bc50f480) at block/mirror.c:335
        next_sector = 36659200
        next_chunk = 286400
        nb_chunks = 80
        end = 209715200
        sectors_per_chunk = 128
        source = 0x5646bcb7
        sector_num = 36648960
        delay_ns = 0
        delay_ns = 0
        cnt = 15360
        should_complete =
        s = 0x5646bc50f480
        data =
        bs = 0x5646bcb7
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#5  0x5646b990f926 in mirror_run (opaque=0x5646bc50f480) at block/mirror.c:613
        delay_ns = 0
        cnt = 15360
        should_complete =
        s = 0x5646bc50f480
        data =
        bs = 0x5646bcb7
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#6  0x5646b997568a in coroutine_trampoline (i0=, i1=) at util/coroutine-ucontext.c:78
        self = 0x5646bc5115b0
        co = 0x5646bc5115b0
#7  0x7fbaa12c8560 in __start_context () at /lib64/libc.so.6
#8  0x5646bd2b98b0 in ()
#9  0x in ()

qemu-system-x86_64: block/mirror.c:335: mirror_iteration: Assertion `hbitmap_next == next_sector' failed.

I can't seem to get stable snapshotting and blockpull with a loaded VM.
Interestingly enough, the last command libvirt passes to qemu is:

2016-04-14 20:47:58.196+: 18932: debug : qemuMonitorJSONCommandWithFd:294 : Send command '{"execute":"query-block-jobs","id":"libvirt-69"}' for write with FD -1
2016-04-14 20:47:58.196+: 18932: info : qemuMonitorSend:1005 : QEMU_MONITOR_SEND_MSG: mon=0x7f1874001a30 msg={"execute":"query-block-jobs","id":"libvirt-69"}
2016-04-14 20:47:58.197+: 18929: info : qemuMonitorIOWrite:529 : QEMU_MONITOR_IO_WRITE: mon=0x7f1874001a30 buf={"execute":"query-block-jobs","id":"libvirt-69"}

Odd that it would SIGABRT on a simple query-block-jobs. Even more interesting is that it crashes on the first or second or third snapshot/block-commit cycle when using EXT4, but would sometimes go for 30-40 cycles on ZFS. Any ideas? I'm certainly willing to test and help in any way I can. Thanks!
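For what it's worth, query-block-jobs is a read-only status query; libvirt is just polling the job, so the crash is almost certainly coincident with the poll rather than caused by it. The exchange libvirt performs over the monitor socket can be sketched with Python's json module (the id is an arbitrary client token; the reply shape in the comment is the usual QMP form, shown here only for illustration):

```python
import json

# QMP session setup, then the poll libvirt was issuing when qemu aborted:
handshake = {"execute": "qmp_capabilities"}
query = {"execute": "query-block-jobs", "id": "libvirt-69"}

# Each command goes over the monitor socket as one JSON object; a successful
# reply carries the job list under "return" and echoes the id, e.g.:
#   {"return": [{"device": "drive-virtio-disk0", ...}], "id": "libvirt-69"}
wire = json.dumps(query)
assert json.loads(wire)["execute"] == "query-block-jobs"
```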
[Qemu-devel] [Bug 1570134] Re: While committing snapshot qemu crashes with SIGABRT
Sure, I did the same test and still got a SIGABRT, but the debug output looks a little different:

Backtrace:
#0  0x7f8f0d46a3f8 in raise () at /lib64/libc.so.6
#1  0x7f8f0d46bffa in abort () at /lib64/libc.so.6
#2  0x7f8f0d462c17 in __assert_fail_base () at /lib64/libc.so.6
#3  0x7f8f0d462cc2 in () at /lib64/libc.so.6
#4  0x55ff4ce33926 in mirror_run (s=0x55ff4fc00dd0) at block/mirror.c:335
        next_sector = 31174784
        next_chunk = 243553
        nb_chunks = 29
        end = 209715200
        sectors_per_chunk = 128
        source = 0x55ff4e1eb050
        sector_num = 31171072
        delay_ns = 0
        delay_ns = 0
        cnt = 157184
        should_complete =
        s = 0x55ff4fc00dd0
        data =
        bs = 0x55ff4e1eb050
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000\021"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#5  0x55ff4ce33926 in mirror_run (opaque=0x55ff4fc00dd0) at block/mirror.c:613
        delay_ns = 0
        cnt = 157184
        should_complete =
        s = 0x55ff4fc00dd0
        data =
        bs = 0x55ff4e1eb050
        sector_num =
        end =
        length =
        last_pause_ns =
        bdi = {cluster_size = 65536, vm_state_offset = 107374182400, is_dirty = false, unallocated_blocks_are_zero = true, can_write_zeroes_with_unmap = true, needs_compressed_writes = false}
        backing_filename = "\000\021"
        ret =
        n = 1048576
        target_cluster_size =
        __PRETTY_FUNCTION__ = "mirror_run"
#6  0x55ff4ce9968a in coroutine_trampoline (i0=, i1=) at util/coroutine-ucontext.c:78
        self = 0x55ff4f6c2c80
        co = 0x55ff4f6c2c80
#7  0x7f8f0d47f560 in __start_context () at /lib64/libc.so.6
#8  0x7ffc759cb060 in ()
#9  0x in ()

I get this in the log:
qemu-system-x86_64: block/mirror.c:335: mirror_iteration: Assertion `hbitmap_next == next_sector' failed.
The system was compiled like this:

Install prefix        /usr
BIOS directory        /usr/share/qemu
binary directory      /usr/bin
library directory     /usr/lib64
module directory      /usr/lib64/qemu
libexec directory     /usr/libexec
include directory     /usr/include
config directory      /etc
local state directory /var
Manual directory      /usr/share/man
ELF interp prefix     /usr/gnemul/qemu-%M
Source path           /tmp/qemu-2.6.0-rc1
C compiler            cc
Host C compiler       cc
C++ compiler          c++
Objective-C compiler  clang
ARFLAGS               rv
CFLAGS                -pthread -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -g -O2 -fPIC
QEMU_CFLAGS           -I/usr/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt -DHAS_LIBSSH2_SFTP_FSYNC -fPIE -DPIE -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/include/p11-kit-1 -I/usr/include/libpng16 -I/usr/include/spice-server -I/usr/include/cacard -I/usr/include/nss -I/usr/include/nspr -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/spice-1 -I/usr/include/cacard -I/usr/include/nss -I/usr/include/nspr -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/libusb-1.0
LDFLAGS               -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g -L/usr/lib64
make                  make
install               install
python                python -B
smbd                  /usr/sbin/smbd
module support        no
host CPU              x86_64
host big endian       no
target list           x86_64-softmmu i386-softmmu
tcg debug enabled     yes
gprof enabled         no
sparse enabled        no
strip binaries        no
profiler              no
static build          no
pixman                system
SDL support           yes
GTK support           yes
GTK GL support        no
GNUTLS support        yes
GNUTLS hash           yes
GNUTLS rnd            yes
libgcrypt             no
libgcrypt kdf         no
nettle                yes (3.2)
nettle kdf            yes
libtasn1              yes
VTE support           yes
curses support        yes
virgl support         no
curl support          yes
mingw32 support       no
Audio drivers         oss
Block whitelist (rw)
Block whitelist (ro)
VirtFS support        yes
VNC support           yes
VNC SASL support      yes
VNC JPEG support      yes
VNC PNG support       yes
xen support           no
brlapi support        no
bluez support         no
Documentation         yes
PIE                   yes
vde support           no
netmap support        no
Linux AIO support     yes
ATTR/XATTR support    yes
Install blobs         yes
KVM support           yes
RDMA