On 20.09.18 18:19, Kevin Wolf wrote:
> bdrv_drain_poll_top_level() was buggy because it didn't release the
> AioContext lock of the node to be drained before calling aio_poll().
> This way, callbacks called by aio_poll() would possibly take the lock a
> second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.
> 
> However, it turns out that the aio_poll() call isn't actually needed any
> more. It was introduced in commit 91af091f923, which is effectively
> reverted by this patch. The cases it was supposed to fix are now covered
> by bdrv_drain_poll(), which waits for block jobs to reach a quiescent
> state.
> 
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> Reviewed-by: Fam Zheng <f...@redhat.com>
> Reviewed-by: Max Reitz <mre...@redhat.com>
> ---
>  block/io.c | 8 --------
>  1 file changed, 8 deletions(-)
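
For readers without the tree at hand, a rough sketch of the function
with the aio_poll() loop still in place; this is only reconstructed
from the commit message above, not the literal hunk being deleted:

    static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
                                          BdrvChild *ignore_parent)
    {
        /* Removed by this patch: run pending BHs before evaluating the
         * drain condition.  Calling aio_poll() here while still holding
         * the node's AioContext lock is what could deadlock with a
         * nested AIO_WAIT_WHILE(). */
        while (aio_poll(bdrv_get_aio_context(bs), false)) {
            /* nothing to do here, just let the callbacks run */
        }

        return bdrv_drain_poll(bs, recursive, ignore_parent, false);
    }

With the loop gone, waiting for block jobs to reach a quiescent state
is left entirely to bdrv_drain_poll().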

Hm...  While looking at iotest 129 (which I think is broken because it
tries to use BB-level throttling, which has no effect on the mirror
job), I noticed this:

$ x86_64-softmmu/qemu-system-x86_64 \
    -object throttle-group,id=tg0 \
    -drive node-name=node0,driver=throttle,\
throttle-group=tg0,file.driver=qcow2,file.file.driver=file,\
file.file.filename=/tmp/src.qcow2 -qmp stdio \
<<EOF
{"execute":"qmp_capabilities"}
{"execute":"drive-mirror","arguments":{"device":"node0",
 "target":"/tmp/tgt.qcow2","sync":"full","format":"qcow2",
  "mode":"absolute-paths","job-id":"mirror-job0"}}
{"execute":"block-job-cancel",
 "arguments":{"device":"mirror-job0","force":true}}
{"execute":"quit"}
EOF

[...]
qemu-system-x86_64: block/block-backend.c:2211: blk_root_drained_end:
Assertion `blk->quiesce_counter' failed.
[1]    2722 abort (core dumped)  x86_64-softmmu/qemu-system-x86_64
-object throttle-group,id=tg0 -drive  -qmp

(Which worked before commit 4cf077b59fc73eec29f8b7d082919dbb278bdc86,
i.e. this one.)
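
The assertion that fires is the sanity check in blk_root_drained_end()
that every drained_end has a matching drained_begin; roughly (a sketch
from memory, not the exact source):

    static void blk_root_drained_end(BdrvChild *child)
    {
        BlockBackend *blk = child->opaque;

        /* Firing here means drained_end ran more often than
         * drained_begin, i.e. the quiesce counter would go negative. */
        assert(blk->quiesce_counter);

        if (--blk->quiesce_counter == 0) {
            /* last drained section ended; queued requests may resume */
        }
    }

In other words, the BlockBackend sees a drained_end without a matching
drained_begin after the mirror job is force-cancelled.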

Max
