Re: [Qemu-block] [PATCH v3 2/2] aio: Do aio_notify_accept only during blocking aio_poll

2018-09-09 Thread Fam Zheng
On Fri, 09/07 17:51, Kevin Wolf wrote:
> On 09.08.2018 at 15:22, Fam Zheng wrote:
> > Furthermore, a blocking aio_poll() is only allowed on the home thread
> > (in_aio_context_home_thread), because otherwise two blocking
> > aio_poll()s can steal each other's ctx->notifier event and cause
> > a hang just like the one described above.
> 
> It's good to have this assertion now at least, but after digging into
> some bugs, I think in fact that any aio_poll() (even non-blocking) is
> only allowed in the home thread: At least one reason is that if you run
> it from a different thread, qemu_get_current_aio_context() returns the
> wrong AioContext in any callbacks called by aio_poll(). Anything else
> using TLS can have similar problems.
> 
> One instance where this matters is fixed/worked around by Sergio's
> "util/async: use qemu_aio_coroutine_enter in co_schedule_bh_cb". We
> wouldn't even need that patch if we could make sure that aio_poll() is
> never called from the wrong thread. This would feel more robust.
> 
> I'll fix the aio_poll() calls in drain (the AIO_WAIT_WHILE() ones are
> already fine, the rest by removing them). After that,
> bdrv_set_aio_context() is still problematic, but the rest should be
> okay. Hopefully we can use the tighter assertion then.

Fully agree with you.

Fam



Re: [Qemu-block] [PATCH v3 2/2] aio: Do aio_notify_accept only during blocking aio_poll

2018-09-07 Thread Kevin Wolf
On 09.08.2018 at 15:22, Fam Zheng wrote:
> Furthermore, a blocking aio_poll() is only allowed on the home thread
> (in_aio_context_home_thread), because otherwise two blocking
> aio_poll()s can steal each other's ctx->notifier event and cause
> a hang just like the one described above.

It's good to have this assertion now at least, but after digging into
some bugs, I think in fact that any aio_poll() (even non-blocking) is
only allowed in the home thread: At least one reason is that if you run
it from a different thread, qemu_get_current_aio_context() returns the
wrong AioContext in any callbacks called by aio_poll(). Anything else
using TLS can have similar problems.
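
To make the TLS point concrete, this is roughly how those helpers are wired
up (a paraphrased sketch of include/block/aio.h and iothread.c, simplified,
not a verbatim quote):

/* Each IOThread records itself in a thread-local variable when it starts. */
static __thread IOThread *my_iothread;

AioContext *qemu_get_current_aio_context(void)
{
    /* Threads that are not IOThreads (main thread, vCPU threads) fall
     * back to the main loop's AioContext. */
    return my_iothread ? iothread_get_aio_context(my_iothread)
                       : qemu_get_aio_context();
}

static inline bool in_aio_context_home_thread(AioContext *ctx)
{
    return ctx == qemu_get_current_aio_context();
}

So when aio_poll(ctx, ...) runs on a thread that is not ctx's home thread,
any callback it dispatches that consults qemu_get_current_aio_context()
gets the polling thread's answer instead of ctx.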

One instance where this matters is fixed/worked around by Sergio's
"util/async: use qemu_aio_coroutine_enter in co_schedule_bh_cb". We
wouldn't even need that patch if we could make sure that aio_poll() is
never called from the wrong thread. This would feel more robust.

I'll fix the aio_poll() calls in drain (the AIO_WAIT_WHILE() ones are
already fine, the rest by removing them). After that,
bdrv_set_aio_context() is still problematic, but the rest should be
okay. Hopefully we can use the tighter assertion then.

Kevin



[Qemu-block] [PATCH v3 2/2] aio: Do aio_notify_accept only during blocking aio_poll

2018-08-09 Thread Fam Zheng
An aio_notify() pairs with an aio_notify_accept(). The former should
happen in the main thread or a vCPU thread, and the latter should be
done in the IOThread.
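
For reference, the two sides of that pairing look roughly like this (a
simplified sketch of util/async.c, not a verbatim quote):

void aio_notify(AioContext *ctx)
{
    /* Only kick the event notifier if some aio_poll() has announced,
     * via ctx->notify_me, that it is about to block. */
    smp_mb();
    if (ctx->notify_me) {
        event_notifier_set(&ctx->notifier);
        atomic_mb_set(&ctx->notified, true);
    }
}

void aio_notify_accept(AioContext *ctx)
{
    /* Consume a pending notification; whichever thread calls this
     * "eats" the ctx->notifier event. */
    if (atomic_xchg(&ctx->notified, false)) {
        event_notifier_test_and_clear(&ctx->notifier);
    }
}

Whichever thread calls aio_notify_accept() consumes the event, which is why
it matters that only the IOThread does it.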

There is one rare case in which the main thread or a vCPU thread may "steal"
the aio_notify() event that it has just raised itself, in
bdrv_set_aio_context() [1]. The sequence is as follows:

main thread                     IO Thread
===============================================================
bdrv_drained_begin()
  aio_disable_external(ctx)
                                aio_poll(ctx, true)
                                    ctx->notify_me += 2
...
bdrv_drained_end()
  ...
    aio_notify()
...
bdrv_set_aio_context()
  aio_poll(ctx, false)
[1] aio_notify_accept(ctx)
                                    ppoll() /* Hang! */

[1] is problematic: it clears the ctx->notifier event, so the blocked
ppoll() in the IOThread will never return.
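
The effect is easy to reproduce outside QEMU with a bare eventfd (a
standalone demo of the mechanism, not QEMU code; the write and read below
stand in for aio_notify() and the stray aio_notify_accept() at [1]):

#include <sys/eventfd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int efd = eventfd(0, 0);
    uint64_t val = 1;
    ssize_t n;

    n = write(efd, &val, sizeof(val));  /* aio_notify(): raise the event        */
    n = read(efd, &val, sizeof(val));   /* aio_notify_accept() at [1]: steal it */
    (void)n;

    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    int ret = poll(&pfd, 1, 1000);      /* the IOThread's ppoll()               */
    printf("poll() returned %d: the event is gone, so a ppoll() with no "
           "timeout would block forever\n", ret);
    close(efd);
    return 0;
}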

(For the curious, this bug was noticed when booting a number of VMs
simultaneously in RHV.  One or two of the VMs would hit this race
condition, making the virtio device unresponsive to I/O commands. When
it hangs, SeaBIOS is busy-waiting for a read request to complete (reading
the MBR), right after initializing the virtio-blk-pci device, using 100%
guest CPU. See also https://bugzilla.redhat.com/show_bug.cgi?id=1562750
for the original bug analysis.)

aio_notify() only injects an event when ctx->notify_me is set;
correspondingly, aio_notify_accept() is only useful when ctx->notify_me
_was_ set. Move the aio_notify_accept() call into the "blocking" branch.
This effectively skips [1] and fixes the hang.

Furthermore, a blocking aio_poll() is only allowed on the home thread
(in_aio_context_home_thread), because otherwise two blocking
aio_poll()s can steal each other's ctx->notifier event and cause
a hang just like the one described above.

Cc: qemu-sta...@nongnu.org
Suggested-by: Paolo Bonzini 
Signed-off-by: Fam Zheng 
---
 util/aio-posix.c | 4 ++--
 util/aio-win32.c | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index b5c7f463aa..b5c609b68b 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -591,6 +591,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
+        assert(in_aio_context_home_thread(ctx));
         atomic_add(&ctx->notify_me, 2);
     }
 
@@ -633,6 +634,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     if (blocking) {
         atomic_sub(&ctx->notify_me, 2);
+        aio_notify_accept(ctx);
     }
 
     /* Adjust polling time */
@@ -676,8 +678,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    aio_notify_accept(ctx);
-
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
         for (i = 0; i < npfd; i++) {
diff --git a/util/aio-win32.c b/util/aio-win32.c
index e676a8d9b2..c58957cc4b 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -373,11 +373,12 @@ bool aio_poll(AioContext *ctx, bool blocking)
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
         if (blocking) {
             assert(first);
+            assert(in_aio_context_home_thread(ctx));
             atomic_sub(&ctx->notify_me, 2);
+            aio_notify_accept(ctx);
         }
 
         if (first) {
-            aio_notify_accept(ctx);
             progress |= aio_bh_poll(ctx);
             first = false;
         }
-- 
2.17.1