Stressing the VDI code with qemu-img:

  qemu-img convert -p -W -m 16 -O vdi input.qcow2 output.vdi

leads to a hang relatively quickly on a machine with sufficient CPUs.
A similar test targeting either the raw or qcow2 formats, or avoiding
out-of-order writes, completes fine.

At the point of the hang all of the coroutines are sitting in
qemu_co_queue_wait_impl(), called from either qemu_co_rwlock_rdlock()
or qemu_co_rwlock_upgrade(), all referencing the same CoRwlock
(BDRVVdiState.bmap_lock).

The comment in the last patch explains what I believe is happening:
downgrading an rwlock from write to read can later result in a failure
to schedule an appropriate coroutine when the read lock is released.

A less invasive change might be to simply have the read side of the
unlock code mark *all* queued coroutines as runnable. This seems
somewhat wasteful, as any read hopefuls that run before a write
hopeful will immediately put themselves back on the queue.

No code other than block/vdi.c appears to use
qemu_co_rwlock_downgrade().

The block/vdi.c changes are small things noticed by inspection while
looking for the cause of the hang.

v2:
- Add some r-by (Philippe, Paolo).
- Add a test for the rwlock downgrade behaviour (Paolo).
- Improve unlock to avoid a thundering herd (Paolo).

David Edmondson (6):
  block/vdi: When writing new bmap entry fails, don't leak the buffer
  block/vdi: Don't assume that blocks are larger than VdiHeader
  coroutine/mutex: Store the coroutine in the CoWaitRecord only once
  test-coroutine: Add rwlock downgrade test
  coroutine/rwlock: Wake writers in preference to readers
  coroutine/rwlock: Avoid thundering herd when unlocking

 block/vdi.c                |  11 ++--
 include/qemu/coroutine.h   |   8 ++-
 tests/test-coroutine.c     | 112 +++++++++++++++++++++++++++++++++++++
 util/qemu-coroutine-lock.c |  27 ++++-----
 4 files changed, 138 insertions(+), 20 deletions(-)

-- 
2.30.1