[Bug 1805256] Re: qemu_futex_wait() lockups in ARM64: 2 possible issues

2019-09-11 Thread Rafael David Tinoco
> Zhengui's theory that notify_me doesn't work properly on ARM is more
> promising, but he couldn't provide a clear explanation of why he thought
> notify_me is involved.  In particular, I would have expected notify_me to
> be wrong if the qemu_poll_ns call came from aio_ctx_dispatch, for example:
> glib_pollfds_fill
>   g_main_context_prepare
>     aio_ctx_prepare
>       atomic_or(&ctx->notify_me, 1)
> qemu_poll_ns
> glib_pollfds_poll
>   g_main_context_check
>     aio_ctx_check
>       atomic_and(&ctx->notify_me, ~1)
>   g_main_context_dispatch
>     aio_ctx_dispatch
>       /* do something for event */
> qemu_poll_ns


I tried confining execution to a single NUMA domain (cpu & mem) and
still faced the issue. I then added a mutex "ctx->notify_me_lcktest"
to the context to protect "ctx->notify_me", as shown below, and it
seems to have either fixed or mitigated the problem.

I was able to cause the hang once every 3 or 4 runs. I have now run
qemu-img convert more than 30 times and couldn't reproduce it again.

Next step is to play with the barriers and check why the existing ones
aren't enough for ordering access to ctx->notify_me ... or should I
try something else, in your opinion?

This arch/machine (Huawei D06):

$ lscpu
Byte Order:  Little Endian
CPU(s):  96
On-line CPU(s) list: 0-95
Thread(s) per core:  1
Core(s) per socket:  48
Socket(s):   2
NUMA node(s):4
Vendor ID:   0x48
Model:   0
CPU max MHz: 2000.
CPU min MHz: 200.
L1d cache:   64K
L1i cache:   64K
L2 cache:512K
L3 cache:32768K
NUMA node0 CPU(s):   0-23
NUMA node1 CPU(s):   24-47
NUMA node2 CPU(s):   48-71
NUMA node3 CPU(s):   72-95
Flags:   fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics
cpuid asimdrdm dcpop

diff --git a/include/block/aio.h b/include/block/aio.h
index 0ca25dfec6..0724086d91 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -84,6 +84,7 @@ struct AioContext {
  * dispatch phase, hence a simple counter is enough for them.
 uint32_t notify_me;
+QemuMutex notify_me_lcktest;

 /* A lock to protect between QEMUBH and AioHandler adders and deleter,
  * and to ensure that no callbacks are removed while we're walking and
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 51c41ed3c9..031d6e2997 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -529,7 +529,9 @@ static bool run_poll_handlers(AioContext *ctx,
int64_t max_ns, int64_t *timeout)
 bool progress;
 int64_t start_time, elapsed_time;

 assert(qemu_lockcnt_count(&ctx->list_lock) > 0);

 trace_run_poll_handlers_begin(ctx, max_ns, *timeout);
@@ -601,8 +603,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
  * so disable the optimization now.
 if (blocking) {
 atomic_add(&ctx->notify_me, 2);

@@ -647,8 +651,10 @@ bool aio_poll(AioContext *ctx, bool blocking)

 if (blocking) {
 atomic_sub(&ctx->notify_me, 2);

 /* Adjust polling time */
diff --git a/util/async.c b/util/async.c
index c10642a385..140e1e86f5 100644
--- a/util/async.c
+++ b/util/async.c
@@ -221,7 +221,9 @@ aio_ctx_prepare(GSource *source, gint *timeout)
 AioContext *ctx = (AioContext *) source;

 atomic_or(&ctx->notify_me, 1);

 /* We assume there is no timeout already supplied */
 *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -239,8 +241,10 @@ aio_ctx_check(GSource *source)
 AioContext *ctx = (AioContext *) source;
 QEMUBH *bh;

 atomic_and(&ctx->notify_me, ~1);

 for (bh = ctx->first_bh; bh; bh = bh->next) {
 if (bh->scheduled) {
@@ -346,11 +350,13 @@ void aio_notify(AioContext *ctx)
 /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
  * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
  */
 if (ctx->notify_me) {
 atomic_mb_set(&ctx->notified, true);

 void aio_notify_accept(AioContext *ctx)
@@ -424,6 +430,8 @@ AioContext *aio_context_new(Error 

[Bug 1805256] Re: qemu_futex_wait() lockups in ARM64: 2 possible issues

2019-09-11 Thread Rafael David Tinoco
Quick update...

> the value UINT_MAX (4294967295) seems WRONG for qemu_futex_wait():
> - EV_BUSY, being -1, when passed as the argument of qemu_futex_wait(void *,
> unsigned), undergoes a two's-complement conversion, turning the argument
> into UINT_MAX, which is not what is expected (unless I missed something).
> *** If that is the case, unsure if you, Paolo, prefer declaring
> (QemuEvent)->value as an integer, or if changing EV_BUSY to "2" would be
> okay here ***
> BUG description:
> https://bugs.launchpad.net/qemu/+bug/1805256/comments/15
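For the record, 4294967295 is UINT_MAX (INT_MAX is 2147483647); it is what -1 becomes under C's implicit conversion to unsigned. A tiny self-contained illustration of that conversion (the enum mirrors the EV_* states from the report, but is not QEMU's definition):

```c
#include <limits.h>

/* Illustrative mirror of the EV_* states; EV_BUSY is -1 as in the report. */
enum { EV_SET = 0, EV_FREE = 1, EV_BUSY = -1 };

/* Passing an int through an unsigned parameter, as happens with
 * qemu_futex_wait(void *, unsigned), applies exactly this conversion. */
static unsigned as_futex_arg(int ev)
{
    return (unsigned)ev;   /* -1 wraps to UINT_MAX (4294967295) */
}
```

The conversion is well-defined in C (reduction modulo UINT_MAX + 1), so it is the same on arm64 and x86; whether the futex expected-value comparison then does what the caller wants is the real question.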

I realized this might be intentional, but, still, I tried:


looking for anything that could have misbehaved in arm64 (especially
concerned about casting and type conversions between the functions).

> - Should qemu_event_set() check return code from
> qemu_futex_wake()->qemu_futex()->syscall() in order to know if ANY
> waiter was ever woken up ? Maybe even loop until at least 1 is awaken ?
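For context on that question: FUTEX_WAKE returns the number of waiters it actually woke, so it legitimately returns 0 whenever nobody happens to be sleeping on the word yet, which is why a retry-while-zero loop can spin. A Linux-only sketch of a wake wrapper that exposes the count (the wrapper and demo names are mine, not QEMU's):

```c
#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Thin wrapper: returns the number of waiters woken, or -1 on error. */
static int futex_wake(void *addr, int nwake)
{
    return (int)syscall(SYS_futex, addr, FUTEX_WAKE, nwake, NULL, NULL, 0);
}

/* With no thread sleeping on the word, the kernel wakes nobody and
 * returns 0 -- so a loop of the form "while (futex_wake(...) == 0)"
 * never terminates unless a waiter arrives. */
static int wake_demo(void)
{
    static int word;
    return futex_wake(&word, 1);
}
```

In other words, a zero return is not an error; looping on it only makes sense if a waiter is guaranteed to show up, otherwise the waker must rely on the waiter re-checking the value before sleeping.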

And I also tried:

-qemu_futex(f, FUTEX_WAKE, n, NULL, NULL, 0);
+while(qemu_futex(pval, FUTEX_WAKE, val, NULL, NULL, 0) == 0)

and it made little difference, though it took way more time for me to
reproduce the issue:

(gdb) run
Starting program: /usr/bin/qemu-img convert -f qcow2 -O qcow2
./disk01.ext4.qcow2 ./output.qcow2

[New Thread 0xbec5ad90 (LWP 72839)]
[New Thread 0xbe459d90 (LWP 72840)]
[New Thread 0xbdb57d90 (LWP 72841)]
[New Thread 0xacac9d90 (LWP 72859)]
[New Thread 0xa7ffed90 (LWP 72860)]
[New Thread 0xa77fdd90 (LWP 72861)]
[New Thread 0xa6ffcd90 (LWP 72862)]
[New Thread 0xa67fbd90 (LWP 72863)]
[New Thread 0xa5ffad90 (LWP 72864)]

[Thread 0xa5ffad90 (LWP 72864) exited]
[Thread 0xa6ffcd90 (LWP 72862) exited]
[Thread 0xa77fdd90 (LWP 72861) exited]
[Thread 0xbdb57d90 (LWP 72841) exited]
[Thread 0xa67fbd90 (LWP 72863) exited]
[Thread 0xacac9d90 (LWP 72859) exited]
[Thread 0xa7ffed90 (LWP 72860) exited]


All the remaining tasks are blocked in a system call, so there is no task
left to call qemu_futex_wake() to unblock thread #2 (in futex()), which
would in turn unblock thread #1 (doing poll() in a pipe with thread #2).
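To make the deadlock shape concrete, here is a minimal futex-backed event in the spirit of QemuEvent (simplified states and names, not the real implementation): if the FUTEX_WAKE in set() is lost or never issued, a thread parked in wait() stays inside the futex() syscall forever, matching the stuck-thread picture above.

```c
#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Minimal model of a futex-backed event; a simplification of QEMU's
 * QemuEvent, not the real code. Zero-initialized value means "free". */
enum { MINI_EV_FREE = 0, MINI_EV_SET = 1 };

typedef struct { atomic_uint value; } MiniEvent;

static void mini_event_set(MiniEvent *ev)
{
    atomic_store(&ev->value, MINI_EV_SET);
    /* Wake every waiter; if this wake were lost, waiters would block
     * in futex() forever -- the failure mode seen in the gdb session. */
    syscall(SYS_futex, &ev->value, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
}

static void mini_event_wait(MiniEvent *ev)
{
    while (atomic_load(&ev->value) != MINI_EV_SET) {
        /* The kernel only puts us to sleep if value still equals
         * MINI_EV_FREE at the time of the check, closing the
         * check-then-sleep race. */
        syscall(SYS_futex, &ev->value, FUTEX_WAIT, MINI_EV_FREE,
                NULL, NULL, 0);
    }
}

/* Deterministic single-threaded demo: set first, so wait returns
 * immediately with the event in the "set" state. */
static unsigned mini_demo(void)
{
    static MiniEvent ev;
    mini_event_set(&ev);
    mini_event_wait(&ev);
    return atomic_load(&ev.value);
}
```

The correctness of this pattern hinges entirely on the waiter re-reading the value before sleeping and on the ordering between the store in set() and the load in wait(), which is why a missing barrier on a weakly ordered machine like arm64 is a plausible culprit.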

Those 7 threads exit before the disk conversion is complete (sometimes
near the beginning, sometimes at the end).

I'll try to check why those tasks exited.

Any thoughts?



  qemu-img hangs on rcu_call_ready_event logic in Aarch64 when
  converting images
