Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-19 Thread Volker Rümelin
>>> Hi Kevin,
>>>
>>> since commit 9ce44e2ce2 "qmp: Move dispatcher to a coroutine" I see
>>> the following error on Windows whenever I close the QEMU window or
>>> shut down the guest.
>>>
>>> $ ./qemu-system-x86_64.exe -machine pc,accel=tcg -display gtk
>>> **
>>> ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
>>> (in_aio_context_home_thread(ctx))
>>> Bail out! ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
>>> (in_aio_context_home_thread(ctx))
>>>
>>> I wonder if you forgot to apply the changes made to util/aio-posix.c
>>> to util/aio-win32.c too? Applying them there solves the problem on my
>>> Windows host. But I have to admit I don't know the code here.
> Hi Volker,
>
> yes, you're right. The assertion in the Windows code was added only in
> commit 5710a3e09f9, after I had already posted some versions of the
> patch series. So while I did check this initially, during the rebase for
> later versions of the series I missed that I would have to update the
> patches for Windows as well.
>
> Would you like to send a patch for this? I could send one myself if you
> prefer, but I can only compile-test Windows patches, so I'd have to rely
> on your testing anyway.
>
> Kevin

Thank you for that information. I'll send a patch.

With best regards,
Volker
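
The patch Volker went on to send is not shown in this thread. Mirroring the
aio-posix.c hunk quoted later in the thread, the corresponding
util/aio-win32.c change would presumably look roughly like this (a sketch,
not the actual patch; hunk positions are illustrative):

--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ ... @@
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
@@ ... @@ bool aio_poll(AioContext *ctx, bool blocking)
+    /*
+     * aio_poll() may only be called in the AioContext's thread.
+     * iohandler_ctx is special in that it runs in the main thread,
+     * but that thread's context is qemu_aio_context.
+     */
-    assert(in_aio_context_home_thread(ctx));
+    assert(in_aio_context_home_thread(ctx == iohandler_get_aio_context() ?
+                                      qemu_get_aio_context() : ctx));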



Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-19 Thread Kevin Wolf
On 17.10.2020 at 10:15, Volker Rümelin wrote:
> > From: Kevin Wolf 
> >
> > This moves the QMP dispatcher to a coroutine and runs all QMP command
> > handlers that declare 'coroutine': true in coroutine context so they
> > can avoid blocking the main loop while doing I/O or waiting for other
> > events.
> >
> > For commands that are not declared safe to run in a coroutine, the
> > dispatcher drops out of coroutine context by calling the QMP command
> > handler from a bottom half.
> 
> Hi Kevin,
> 
> since commit 9ce44e2ce2 "qmp: Move dispatcher to a coroutine" I see
> the following error on Windows whenever I close the QEMU window or
> shut down the guest.
> 
> $ ./qemu-system-x86_64.exe -machine pc,accel=tcg -display gtk
> **
> ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
> (in_aio_context_home_thread(ctx))
> Bail out! ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
> (in_aio_context_home_thread(ctx))
> 
> I wonder if you forgot to apply the changes made to util/aio-posix.c
> to util/aio-win32.c too? Applying them there solves the problem on my
> Windows host. But I have to admit I don't know the code here.

Hi Volker,

yes, you're right. The assertion in the Windows code was added only in
commit 5710a3e09f9, after I had already posted some versions of the patch
series. So while I did check this initially, during the rebase for later
versions of the series I missed that I would have to update the patches
for Windows as well.

Would you like to send a patch for this? I could send one myself if you
prefer, but I can only compile-test Windows patches, so I'd have to rely
on your testing anyway.

Kevin

> > diff --git a/util/aio-posix.c b/util/aio-posix.c
> > index 280f27bb99..30f5354b1e 100644
> > --- a/util/aio-posix.c
> > +++ b/util/aio-posix.c
> > @@ -15,6 +15,7 @@
> >  
> >  #include "qemu/osdep.h"
> >  #include "block/block.h"
> > +#include "qemu/main-loop.h"
> >  #include "qemu/rcu.h"
> >  #include "qemu/rcu_queue.h"
> >  #include "qemu/sockets.h"
> > @@ -558,8 +559,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
> >   * There cannot be two concurrent aio_poll calls for the same 
> > AioContext (or
> >   * an aio_poll concurrent with a GSource prepare/check/dispatch 
> > callback).
> >   * We rely on this below to avoid slow locked accesses to 
> > ctx->notify_me.
> > + *
> > + * aio_poll() may only be called in the AioContext's thread. 
> > iohandler_ctx
> > + * is special in that it runs in the main thread, but that thread's 
> > context
> > + * is qemu_aio_context.
> >   */
> > -assert(in_aio_context_home_thread(ctx));
> > +assert(in_aio_context_home_thread(ctx == iohandler_get_aio_context() ?
> > +  qemu_get_aio_context() : ctx));
> >  
> >  qemu_lockcnt_inc(&ctx->list_lock);
> >  
> 
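
For context on what the failing assertion checks: this is
in_aio_context_home_thread() roughly as it looks after commit 5710a3e09f9,
paraphrased from include/block/aio.h (a sketch from memory, not the
verbatim source):

/*
 * Returns true if the caller runs in the "home thread" of ctx: the
 * thread currently attached to it, or, for the main-loop context,
 * whichever thread holds the big QEMU lock.
 */
static inline bool in_aio_context_home_thread(AioContext *ctx)
{
    if (ctx == qemu_get_current_aio_context()) {
        return true;
    }

    if (ctx == qemu_get_aio_context()) {
        /* The main-loop AioContext belongs to the BQL holder. */
        return qemu_mutex_iothread_locked();
    }
    return false;
}

This is why iohandler_ctx needs the special case in the hunk above: it is
polled from the main thread, but the main thread's home context is
qemu_aio_context.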




Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-17 Thread Volker Rümelin
> From: Kevin Wolf 
>
> This moves the QMP dispatcher to a coroutine and runs all QMP command
> handlers that declare 'coroutine': true in coroutine context so they
> can avoid blocking the main loop while doing I/O or waiting for other
> events.
>
> For commands that are not declared safe to run in a coroutine, the
> dispatcher drops out of coroutine context by calling the QMP command
> handler from a bottom half.

Hi Kevin,

since commit 9ce44e2ce2 "qmp: Move dispatcher to a coroutine" I see the 
following error on Windows whenever I close the QEMU window or shut down the 
guest.

$ ./qemu-system-x86_64.exe -machine pc,accel=tcg -display gtk
**
ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
(in_aio_context_home_thread(ctx))
Bail out! ERROR:../qemu/util/aio-win32.c:337:aio_poll: assertion failed: 
(in_aio_context_home_thread(ctx))

I wonder if you forgot to apply the changes made to util/aio-posix.c to
util/aio-win32.c too? Applying them there solves the problem on my Windows
host. But I have to admit I don't know the code here.

With best regards,
Volker

> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index 280f27bb99..30f5354b1e 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -15,6 +15,7 @@
>  
>  #include "qemu/osdep.h"
>  #include "block/block.h"
> +#include "qemu/main-loop.h"
>  #include "qemu/rcu.h"
>  #include "qemu/rcu_queue.h"
>  #include "qemu/sockets.h"
> @@ -558,8 +559,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>   * There cannot be two concurrent aio_poll calls for the same AioContext 
> (or
>   * an aio_poll concurrent with a GSource prepare/check/dispatch 
> callback).
>   * We rely on this below to avoid slow locked accesses to ctx->notify_me.
> + *
> + * aio_poll() may only be called in the AioContext's thread. 
> iohandler_ctx
> + * is special in that it runs in the main thread, but that thread's 
> context
> + * is qemu_aio_context.
>   */
> -assert(in_aio_context_home_thread(ctx));
> +assert(in_aio_context_home_thread(ctx == iohandler_get_aio_context() ?
> +  qemu_get_aio_context() : ctx));
>  
> qemu_lockcnt_inc(&ctx->list_lock);
>  
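
To illustrate the mechanism the quoted commit message describes, here is a
simplified sketch of how the dispatcher can drop out of coroutine context
for handlers that are not coroutine-safe. It is loosely modelled on
qapi/qmp-dispatch.c from this series; names are paraphrased and the
monitor bookkeeping is omitted:

/*
 * Handlers declared with 'coroutine': true in the QAPI schema get
 * QCO_COROUTINE set in cmd->options and run directly in the dispatcher
 * coroutine. Everything else is bounced to a bottom half in the main
 * loop, and the coroutine yields until the handler has finished.
 */
typedef struct QmpDispatchBH {
    const QmpCommand *cmd;
    QDict *args;
    QObject **ret;
    Error **errp;
    Coroutine *co;
} QmpDispatchBH;

static void do_qmp_dispatch_bh(void *opaque)
{
    QmpDispatchBH *data = opaque;

    data->cmd->fn(data->args, data->ret, data->errp);
    aio_co_wake(data->co);    /* resume the dispatcher coroutine */
}

/* Inside the dispatcher, after looking up cmd and parsing args: */
if (qemu_in_coroutine() && !(cmd->options & QCO_COROUTINE)) {
    QmpDispatchBH data = {
        .cmd = cmd, .args = args, .ret = &ret, .errp = &err,
        .co = qemu_coroutine_self(),
    };
    aio_bh_schedule_oneshot(qemu_get_aio_context(),
                            do_qmp_dispatch_bh, &data);
    qemu_coroutine_yield();    /* woken by do_qmp_dispatch_bh() */
} else {
    cmd->fn(args, &ret, &err);
}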




Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-13 Thread Kevin Wolf
On 13.10.2020 at 09:56, Philippe Mathieu-Daudé wrote:
> On Mon, Oct 12, 2020 at 8:57 PM Ben Widawsky  wrote:
> >
> > On 20-10-12 16:02:34, Alex Bennée wrote:
> > >
> > > Kevin Wolf  writes:
> > >
> > > > On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
> > > >> On 10/12/20 1:25 PM, Kevin Wolf wrote:
> > > >> > On 12.10.2020 at 12:47, Alex Bennée wrote:
> > > >> > >
> > > >> > > Markus Armbruster  writes:
> > > >> > >
> > > >> > > > From: Kevin Wolf 
> > > >> > > >
> > > >> > > > This moves the QMP dispatcher to a coroutine and runs all QMP 
> > > >> > > > command
> > > >> > > > handlers that declare 'coroutine': true in coroutine context so 
> > > >> > > > they
> > > >> > > > can avoid blocking the main loop while doing I/O or waiting for 
> > > >> > > > other
> > > >> > > > events.
> > > >> > >
> > > >> > > This subtly changes the replay behaviour leading to a hang in:
> > > >> > >
> > > >> > >10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] 
> > > >> > > (625581c2…)|✚1(+1/-1) + ./tests/venv/bin/avocado run 
> > > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > > >> > >Fetching asset from 
> > > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > > >> > >JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
> > > >> > >JOB LOG: 
> > > >> > > /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
> > > >> > > (1/1) 
> > > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: 
> > > >> > > ERROR: Could not perform graceful shutdown (26.27 s)
> > > >> > >RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | 
> > > >> > > INTERRUPT 0 | CANCEL 0
> > > >> > >JOB TIME   : 27.77 s
> > > >> > >
> > > >> > > Looking at the log:
> > > >> > >
> > > >> > >2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [
> > > >> > > 3.887411] rtc-pl031 901.pl031: setting system clock to 
> > > >> > > 2020-10-12 10:40:31 UTC (1602499231)
> > > >> > >2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [
> > > >> > > 3.887431] sr_init: No PMIC hook to init smartreflex
> > > >> > >2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [
> > > >> > > 3.897193] uart-pl011 900.pl011: no DMA platform data
> > > >> > >2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [
> > > >> > > 3.897242] md: Waiting for all devices to be available before 
> > > >> > > autodetect
> > > >> > >2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [
> > > >> > > 3.897259] md: If you don't use raid, use raid=noautodetect
> > > >> > >2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [
> > > >> > > 3.897819] md: Autodetecting RAID arrays.
> > > >> > >2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [
> > > >> > > 3.897832] md: autorun ...
> > > >> > >2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [
> > > >> > > 3.897842] md: ... autorun DONE.
> > > >> > >2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [
> > > >> > > 3.897962] VFS: Cannot open root device "(null)" or 
> > > >> > > unknown-block(0,0): error -6
> > > >> > >2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> 
> > > >> > > {'execute': 'quit'}
> > > >> > >2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< 
> > > >> > > {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 
> > > >> > > 'event': 'SHUTDOWN', 'data': {'guest': True, 'reason': 
> > > >> > > 'guest-reset'}}
> > > >> > >2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu 
> > > >> > > received signal 6; command: "./qemu-system-arm -display none -vga 
> > > >> > > none -chardev 
> > > >> > > socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock 
> > > >> > > -mon chardev=mon,mode=control -machine virt -chardev 
> > > >> > > socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
> > > >> > >  -serial chardev:console -icount 
> > > >> > > shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
> > > >> > >  -kernel 
> > > >> > > /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
> > > >> > >  -append printk.time=1 panic=-1 console=ttyAMA0 -net none 
> > > >> > > -no-reboot"
> > > >> >
> > > >> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
> > > >> > stack trace for the crashed process?
> > > >>
> > > >> No crash, exit(0):
> > > >
> > > > Why does the log say "qemu received signal 6" then?
> > > >
> > > >> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> > > >
> > > > Alex has this error in the logs before this commit, so I assume this is
> > > > expected. All of the following is then probably expected, too, because
> > > > it follows directly from this error:
> > > >
> > > >> Please append a correct "root=" boot option; here are the 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-13 Thread Philippe Mathieu-Daudé
On Mon, Oct 12, 2020 at 8:57 PM Ben Widawsky  wrote:
>
> On 20-10-12 16:02:34, Alex Bennée wrote:
> >
> > Kevin Wolf  writes:
> >
> > > On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
> > >> On 10/12/20 1:25 PM, Kevin Wolf wrote:
> > >> > On 12.10.2020 at 12:47, Alex Bennée wrote:
> > >> > >
> > >> > > Markus Armbruster  writes:
> > >> > >
> > >> > > > From: Kevin Wolf 
> > >> > > >
> > >> > > > This moves the QMP dispatcher to a coroutine and runs all QMP 
> > >> > > > command
> > >> > > > handlers that declare 'coroutine': true in coroutine context so 
> > >> > > > they
> > >> > > > can avoid blocking the main loop while doing I/O or waiting for 
> > >> > > > other
> > >> > > > events.
> > >> > >
> > >> > > This subtly changes the replay behaviour leading to a hang in:
> > >> > >
> > >> > >10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] 
> > >> > > (625581c2…)|✚1(+1/-1) + ./tests/venv/bin/avocado run 
> > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > >> > >Fetching asset from 
> > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > >> > >JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
> > >> > >JOB LOG: 
> > >> > > /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
> > >> > > (1/1) 
> > >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: 
> > >> > > Could not perform graceful shutdown (26.27 s)
> > >> > >RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | 
> > >> > > INTERRUPT 0 | CANCEL 0
> > >> > >JOB TIME   : 27.77 s
> > >> > >
> > >> > > Looking at the log:
> > >> > >
> > >> > >2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [
> > >> > > 3.887411] rtc-pl031 901.pl031: setting system clock to 
> > >> > > 2020-10-12 10:40:31 UTC (1602499231)
> > >> > >2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [
> > >> > > 3.887431] sr_init: No PMIC hook to init smartreflex
> > >> > >2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [
> > >> > > 3.897193] uart-pl011 900.pl011: no DMA platform data
> > >> > >2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [
> > >> > > 3.897242] md: Waiting for all devices to be available before 
> > >> > > autodetect
> > >> > >2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [
> > >> > > 3.897259] md: If you don't use raid, use raid=noautodetect
> > >> > >2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [
> > >> > > 3.897819] md: Autodetecting RAID arrays.
> > >> > >2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [
> > >> > > 3.897832] md: autorun ...
> > >> > >2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [
> > >> > > 3.897842] md: ... autorun DONE.
> > >> > >2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [
> > >> > > 3.897962] VFS: Cannot open root device "(null)" or 
> > >> > > unknown-block(0,0): error -6
> > >> > >2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> 
> > >> > > {'execute': 'quit'}
> > >> > >2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< 
> > >> > > {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 
> > >> > > 'event': 'SHUTDOWN', 'data': {'guest': True, 'reason': 
> > >> > > 'guest-reset'}}
> > >> > >2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu 
> > >> > > received signal 6; command: "./qemu-system-arm -display none -vga 
> > >> > > none -chardev 
> > >> > > socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
> > >> > > chardev=mon,mode=control -machine virt -chardev 
> > >> > > socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
> > >> > >  -serial chardev:console -icount 
> > >> > > shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
> > >> > >  -kernel 
> > >> > > /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
> > >> > >  -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
> > >> >
> > >> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
> > >> > stack trace for the crashed process?
> > >>
> > >> No crash, exit(0):
> > >
> > > Why does the log say "qemu received signal 6" then?
> > >
> > >> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> > >
> > > Alex has this error in the logs before this commit, so I assume this is
> > > expected. All of the following is then probably expected, too, because
> > > it follows directly from this error:
> > >
> > >> Please append a correct "root=" boot option; here are the available
> > >> partitions:
> > >> Kernel panic - not syncing: VFS: Unable to mount root fs on
> > >> unknown-block(0,0)
> > >> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
> > >> Hardware name: Generic DT based system
> > >> [] 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Ben Widawsky
On 20-10-12 16:02:34, Alex Bennée wrote:
> 
> Kevin Wolf  writes:
> 
> > On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
> >> On 10/12/20 1:25 PM, Kevin Wolf wrote:
> >> > On 12.10.2020 at 12:47, Alex Bennée wrote:
> >> > > 
> >> > > Markus Armbruster  writes:
> >> > > 
> >> > > > From: Kevin Wolf 
> >> > > > 
> >> > > > This moves the QMP dispatcher to a coroutine and runs all QMP command
> >> > > > handlers that declare 'coroutine': true in coroutine context so they
> >> > > > can avoid blocking the main loop while doing I/O or waiting for other
> >> > > > events.
> >> > > 
> >> > > This subtly changes the replay behaviour leading to a hang in:
> >> > > 
> >> > >10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] 
> >> > > (625581c2…)|✚1(+1/-1) + ./tests/venv/bin/avocado run 
> >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> >> > >Fetching asset from 
> >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> >> > >JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
> >> > >JOB LOG: 
> >> > > /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
> >> > > (1/1) 
> >> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: 
> >> > > Could not perform graceful shutdown (26.27 s)
> >> > >RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | 
> >> > > INTERRUPT 0 | CANCEL 0
> >> > >JOB TIME   : 27.77 s
> >> > > 
> >> > > Looking at the log:
> >> > > 
> >> > >2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [
> >> > > 3.887411] rtc-pl031 901.pl031: setting system clock to 2020-10-12 
> >> > > 10:40:31 UTC (1602499231)
> >> > >2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [
> >> > > 3.887431] sr_init: No PMIC hook to init smartreflex
> >> > >2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [
> >> > > 3.897193] uart-pl011 900.pl011: no DMA platform data
> >> > >2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [
> >> > > 3.897242] md: Waiting for all devices to be available before autodetect
> >> > >2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [
> >> > > 3.897259] md: If you don't use raid, use raid=noautodetect
> >> > >2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [
> >> > > 3.897819] md: Autodetecting RAID arrays.
> >> > >2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [
> >> > > 3.897832] md: autorun ...
> >> > >2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [
> >> > > 3.897842] md: ... autorun DONE.
> >> > >2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [
> >> > > 3.897962] VFS: Cannot open root device "(null)" or unknown-block(0,0): 
> >> > > error -6
> >> > >2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> 
> >> > > {'execute': 'quit'}
> >> > >2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< 
> >> > > {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 
> >> > > 'event': 'SHUTDOWN', 'data': {'guest': True, 'reason': 'guest-reset'}}
> >> > >2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received 
> >> > > signal 6; command: "./qemu-system-arm -display none -vga none -chardev 
> >> > > socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
> >> > > chardev=mon,mode=control -machine virt -chardev 
> >> > > socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
> >> > >  -serial chardev:console -icount 
> >> > > shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
> >> > >  -kernel 
> >> > > /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
> >> > >  -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
> >> > 
> >> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
> >> > stack trace for the crashed process?
> >> 
> >> No crash, exit(0):
> >
> > Why does the log say "qemu received signal 6" then?
> >
> >> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> >
> > Alex has this error in the logs before this commit, so I assume this is
> > expected. All of the following is then probably expected, too, because
> > it follows directly from this error:
> >
> >> Please append a correct "root=" boot option; here are the available
> >> partitions:
> >> Kernel panic - not syncing: VFS: Unable to mount root fs on
> >> unknown-block(0,0)
> >> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
> >> Hardware name: Generic DT based system
> >> [] (unwind_backtrace) from [] (show_stack+0x20/0x24)
> >> [] (show_stack) from [] (dump_stack+0x88/0xa8)
> >> [] (dump_stack) from [] (panic+0xd4/0x26c)
> >> [] (panic) from [] (mount_block_root+0x250/0x2ec)
> >> [] (mount_block_root) from [] (mount_root+0x78/0x90)
> >> [] (mount_root) from [] 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Alex Bennée


Kevin Wolf  writes:

> On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
>> On 10/12/20 1:25 PM, Kevin Wolf wrote:
>> > On 12.10.2020 at 12:47, Alex Bennée wrote:
>> > > 
>> > > Markus Armbruster  writes:
>> > > 
>> > > > From: Kevin Wolf 
>> > > > 
>> > > > This moves the QMP dispatcher to a coroutine and runs all QMP command
>> > > > handlers that declare 'coroutine': true in coroutine context so they
>> > > > can avoid blocking the main loop while doing I/O or waiting for other
>> > > > events.
>> > > 
>> > > This subtly changes the replay behaviour leading to a hang in:
>> > > 
>> > >10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) 
>> > > + ./tests/venv/bin/avocado run 
>> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
>> > >Fetching asset from 
>> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
>> > >JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
>> > >JOB LOG: 
>> > > /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
>> > > (1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: 
>> > > ERROR: Could not perform graceful shutdown (26.27 s)
>> > >RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 
>> > > 0 | CANCEL 0
>> > >JOB TIME   : 27.77 s
>> > > 
>> > > Looking at the log:
>> > > 
>> > >2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [3.887411] 
>> > > rtc-pl031 901.pl031: setting system clock to 2020-10-12 10:40:31 UTC 
>> > > (1602499231)
>> > >2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [3.887431] 
>> > > sr_init: No PMIC hook to init smartreflex
>> > >2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [3.897193] 
>> > > uart-pl011 900.pl011: no DMA platform data
>> > >2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [3.897242] 
>> > > md: Waiting for all devices to be available before autodetect
>> > >2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [3.897259] 
>> > > md: If you don't use raid, use raid=noautodetect
>> > >2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [3.897819] 
>> > > md: Autodetecting RAID arrays.
>> > >2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [3.897832] 
>> > > md: autorun ...
>> > >2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [3.897842] 
>> > > md: ... autorun DONE.
>> > >2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [3.897962] 
>> > > VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
>> > >2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> {'execute': 
>> > > 'quit'}
>> > >2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< 
>> > > {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 'event': 
>> > > 'SHUTDOWN', 'data': {'guest': True, 'reason': 'guest-reset'}}
>> > >2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received 
>> > > signal 6; command: "./qemu-system-arm -display none -vga none -chardev 
>> > > socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
>> > > chardev=mon,mode=control -machine virt -chardev 
>> > > socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
>> > >  -serial chardev:console -icount 
>> > > shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
>> > >  -kernel 
>> > > /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
>> > >  -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
>> > 
>> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
>> > stack trace for the crashed process?
>> 
>> No crash, exit(0):
>
> Why does the log say "qemu received signal 6" then?
>
>> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
>
> Alex has this error in the logs before this commit, so I assume this is
> expected. All of the following is then probably expected, too, because
> it follows directly from this error:
>
>> Please append a correct "root=" boot option; here are the available
>> partitions:
>> Kernel panic - not syncing: VFS: Unable to mount root fs on
>> unknown-block(0,0)
>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
>> Hardware name: Generic DT based system
>> [] (unwind_backtrace) from [] (show_stack+0x20/0x24)
>> [] (show_stack) from [] (dump_stack+0x88/0xa8)
>> [] (dump_stack) from [] (panic+0xd4/0x26c)
>> [] (panic) from [] (mount_block_root+0x250/0x2ec)
>> [] (mount_block_root) from [] (mount_root+0x78/0x90)
>> [] (mount_root) from [] (prepare_namespace+0x15c/0x19c)
>> [] (prepare_namespace) from []
>> (kernel_init_freeable+0x2c0/0x370)
>> [] (kernel_init_freeable) from []
>> (kernel_init+0x18/0x128)
>> [] (kernel_init) from [] (ret_from_fork+0x14/0x2c)
>> Exception stack(0xc790bfb0 to 0xc790bff8)
>> 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Kevin Wolf
On 12.10.2020 at 13:53, Philippe Mathieu-Daudé wrote:
> On 10/12/20 1:25 PM, Kevin Wolf wrote:
> > On 12.10.2020 at 12:47, Alex Bennée wrote:
> > > 
> > > Markus Armbruster  writes:
> > > 
> > > > From: Kevin Wolf 
> > > > 
> > > > This moves the QMP dispatcher to a coroutine and runs all QMP command
> > > > handlers that declare 'coroutine': true in coroutine context so they
> > > > can avoid blocking the main loop while doing I/O or waiting for other
> > > > events.
> > > 
> > > This subtly changes the replay behaviour leading to a hang in:
> > > 
> > >10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) + 
> > > ./tests/venv/bin/avocado run 
> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > >Fetching asset from 
> > > tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
> > >JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
> > >JOB LOG: 
> > > /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
> > > (1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: 
> > > ERROR: Could not perform graceful shutdown (26.27 s)
> > >RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 
> > > | CANCEL 0
> > >JOB TIME   : 27.77 s
> > > 
> > > Looking at the log:
> > > 
> > >2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [3.887411] 
> > > rtc-pl031 901.pl031: setting system clock to 2020-10-12 10:40:31 UTC 
> > > (1602499231)
> > >2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [3.887431] 
> > > sr_init: No PMIC hook to init smartreflex
> > >2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [3.897193] 
> > > uart-pl011 900.pl011: no DMA platform data
> > >2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [3.897242] 
> > > md: Waiting for all devices to be available before autodetect
> > >2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [3.897259] 
> > > md: If you don't use raid, use raid=noautodetect
> > >2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [3.897819] 
> > > md: Autodetecting RAID arrays.
> > >2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [3.897832] 
> > > md: autorun ...
> > >2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [3.897842] 
> > > md: ... autorun DONE.
> > >2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [3.897962] 
> > > VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> > >2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> {'execute': 
> > > 'quit'}
> > >2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< 
> > > {'timestamp': {'seconds': 1602499231, 'microseconds': 493379}, 'event': 
> > > 'SHUTDOWN', 'data': {'guest': True, 'reason': 'guest-reset'}}
> > >2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received 
> > > signal 6; command: "./qemu-system-arm -display none -vga none -chardev 
> > > socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
> > > chardev=mon,mode=control -machine virt -chardev 
> > > socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
> > >  -serial chardev:console -icount 
> > > shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
> > >  -kernel 
> > > /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
> > >  -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
> > 
> > This looks like a crash (SIGABRT) rather than a hang. Do you have a
> > stack trace for the crashed process?
> 
> No crash, exit(0):

Why does the log say "qemu received signal 6" then?

> VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6

Alex has this error in the logs before this commit, so I assume this is
expected. All of the following is then probably expected, too, because
it follows directly from this error:

> Please append a correct "root=" boot option; here are the available
> partitions:
> Kernel panic - not syncing: VFS: Unable to mount root fs on
> unknown-block(0,0)
> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
> Hardware name: Generic DT based system
> [] (unwind_backtrace) from [] (show_stack+0x20/0x24)
> [] (show_stack) from [] (dump_stack+0x88/0xa8)
> [] (dump_stack) from [] (panic+0xd4/0x26c)
> [] (panic) from [] (mount_block_root+0x250/0x2ec)
> [] (mount_block_root) from [] (mount_root+0x78/0x90)
> [] (mount_root) from [] (prepare_namespace+0x15c/0x19c)
> [] (prepare_namespace) from []
> (kernel_init_freeable+0x2c0/0x370)
> [] (kernel_init_freeable) from []
> (kernel_init+0x18/0x128)
> [] (kernel_init) from [] (ret_from_fork+0x14/0x2c)
> Exception stack(0xc790bfb0 to 0xc790bff8)
> bfa0:   
> 
> bfc0:     

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Philippe Mathieu-Daudé

On 10/12/20 1:25 PM, Kevin Wolf wrote:

On 12.10.2020 at 12:47, Alex Bennée wrote:


Markus Armbruster  writes:


From: Kevin Wolf 

This moves the QMP dispatcher to a coroutine and runs all QMP command
handlers that declare 'coroutine': true in coroutine context so they
can avoid blocking the main loop while doing I/O or waiting for other
events.


This subtly changes the replay behaviour leading to a hang in:

   10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) + 
./tests/venv/bin/avocado run 
tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
   Fetching asset from 
tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
   JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
   JOB LOG: 
/home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
(1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: 
Could not perform graceful shutdown (26.27 s)
   RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | 
CANCEL 0
   JOB TIME   : 27.77 s

Looking at the log:

   2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [3.887411] 
rtc-pl031 901.pl031: setting system clock to 2020-10-12 10:40:31 UTC 
(1602499231)
   2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [3.887431] 
sr_init: No PMIC hook to init smartreflex
   2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [3.897193] 
uart-pl011 900.pl011: no DMA platform data
   2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [3.897242] md: 
Waiting for all devices to be available before autodetect
   2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [3.897259] md: If 
you don't use raid, use raid=noautodetect
   2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [3.897819] md: 
Autodetecting RAID arrays.
   2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [3.897832] md: 
autorun ...
   2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [3.897842] md: ... 
autorun DONE.
   2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [3.897962] VFS: Cannot open 
root device "(null)" or unknown-block(0,0): error -6
   2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> {'execute': 'quit'}
   2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< {'timestamp': 
{'seconds': 1602499231, 'microseconds': 493379}, 'event': 'SHUTDOWN', 'data': {'guest': 
True, 'reason': 'guest-reset'}}
   2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received signal 6; command: 
"./qemu-system-arm -display none -vga none -chardev 
socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
chardev=mon,mode=control -machine virt -chardev 
socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait -serial 
chardev:console -icount 
shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
 -kernel 
/home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
 -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"


This looks like a crash (SIGABRT) rather than a hang. Do you have a
stack trace for the crashed process?


No crash, exit(0):

VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
Please append a correct "root=" boot option; here are the available 
partitions:
Kernel panic - not syncing: VFS: Unable to mount root fs on 
unknown-block(0,0)

CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.16-300.fc29.armv7hl #1
Hardware name: Generic DT based system
[] (unwind_backtrace) from [] (show_stack+0x20/0x24)
[] (show_stack) from [] (dump_stack+0x88/0xa8)
[] (dump_stack) from [] (panic+0xd4/0x26c)
[] (panic) from [] (mount_block_root+0x250/0x2ec)
[] (mount_block_root) from [] (mount_root+0x78/0x90)
[] (mount_root) from [] (prepare_namespace+0x15c/0x19c)
[] (prepare_namespace) from [] 
(kernel_init_freeable+0x2c0/0x370)
[] (kernel_init_freeable) from [] 
(kernel_init+0x18/0x128)

[] (kernel_init) from [] (ret_from_fork+0x14/0x2c)
Exception stack(0xc790bfb0 to 0xc790bff8)
bfa0:    

bfc0:        


bfe0:     0013 

-> PSCI call
   -> QEMU_PSCI_0_2_FN_SYSTEM_RESET
      -> SHUTDOWN_CAUSE_GUEST_RESET
         -> exit(0)
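
The last two steps of that chain come from QEMU's -no-reboot handling: a
guest-initiated reset is converted into a shutdown request, which ends in
exit(0). A hedged sketch, paraphrased from memory of softmmu/vl.c around
QEMU 5.1 (the exact code may differ):

void qemu_system_reset_request(ShutdownCause reason)
{
    if (no_reboot && reason != SHUTDOWN_CAUSE_SUBSYSTEM_RESET) {
        /* -no-reboot: treat the guest reset as a shutdown request */
        shutdown_requested = reason;
    } else {
        reset_requested = reason;
    }
    cpu_stop_current();
    qemu_notify_event();
}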



Kevin


   2020-10-12 11:40:31,734 stacktrace   L0039 ERROR|
   2020-10-12 11:40:31,734 stacktrace   L0042 ERROR| Reproduced traceback 
from: 
/home/alex.bennee/lsrc/qemu.git/builds/bisect/tests/venv/lib/python3.6/site-packages/avocado/core/test.py:767
   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| Traceback (most recent 
call last):
   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/python/qemu/machine.py", line 435, in 
_do_shutdown
   2020-10-12 11:40:31,735 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Kevin Wolf
On 12.10.2020 at 12:47, Alex Bennée wrote:
> 
> Markus Armbruster  writes:
> 
> > From: Kevin Wolf 
> >
> > This moves the QMP dispatcher to a coroutine and runs all QMP command
> > handlers that declare 'coroutine': true in coroutine context so they
> > can avoid blocking the main loop while doing I/O or waiting for other
> > events.
> 
> This subtly changes the replay behaviour leading to a hang in:
> 
>   10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) + 
> ./tests/venv/bin/avocado run 
> tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
>   Fetching asset from 
> tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
>   JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
>   JOB LOG: 
> /home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
>(1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: 
> Could not perform graceful shutdown (26.27 s)
>   RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | 
> CANCEL 0
>   JOB TIME   : 27.77 s
> 
> Looking at the log:
> 
>   2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [3.887411] 
> rtc-pl031 901.pl031: setting system clock to 2020-10-12 10:40:31 UTC 
> (1602499231)
>   2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [3.887431] 
> sr_init: No PMIC hook to init smartreflex
>   2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [3.897193] 
> uart-pl011 900.pl011: no DMA platform data
>   2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [3.897242] md: 
> Waiting for all devices to be available before autodetect
>   2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [3.897259] md: If 
> you don't use raid, use raid=noautodetect
>   2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [3.897819] md: 
> Autodetecting RAID arrays.
>   2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [3.897832] md: 
> autorun ...
>   2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [3.897842] md: 
> ... autorun DONE.
>   2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [3.897962] VFS: 
> Cannot open root device "(null)" or unknown-block(0,0): error -6
>   2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> {'execute': 
> 'quit'}
>   2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< {'timestamp': 
> {'seconds': 1602499231, 'microseconds': 493379}, 'event': 'SHUTDOWN', 'data': 
> {'guest': True, 'reason': 'guest-reset'}}
>   2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received signal 
> 6; command: "./qemu-system-arm -display none -vga none -chardev 
> socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
> chardev=mon,mode=control -machine virt -chardev 
> socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
>  -serial chardev:console -icount 
> shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
>  -kernel 
> /home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
>  -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"

This looks like a crash (SIGABRT) rather than a hang. Do you have a
stack trace for the crashed process?

Kevin

>   2020-10-12 11:40:31,734 stacktrace   L0039 ERROR|
>   2020-10-12 11:40:31,734 stacktrace   L0042 ERROR| Reproduced traceback 
> from: 
> /home/alex.bennee/lsrc/qemu.git/builds/bisect/tests/venv/lib/python3.6/site-packages/avocado/core/test.py:767
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| Traceback (most 
> recent call last):
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
> "/home/alex.bennee/lsrc/qemu.git/python/qemu/machine.py", line 435, in 
> _do_shutdown
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| 
> self._soft_shutdown(timeout, has_quit)
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
> "/home/alex.bennee/lsrc/qemu.git/python/qemu/machine.py", line 415, in 
> _soft_shutdown
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| 
> self._qmp.cmd('quit')
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
> "/home/alex.bennee/lsrc/qemu.git/python/qemu/qmp.py", line 266, in cmd
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| return 
> self.cmd_obj(qmp_cmd)
>   2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
> "/home/alex.bennee/lsrc/qemu.git/python/qemu/qmp.py", line 249, in cmd_obj
>   2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| raise 
> QMPConnectError("Unexpected empty reply from server")
>   2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| 
> qemu.qmp.QMPConnectError: Unexpected empty reply from server
>   2020-10-12 11:40:31,736 stacktrace   L0045 ERROR|
>   2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| The above exception 
> was the direct cause of the following 

Re: [PULL 09/14] qmp: Move dispatcher to a coroutine

2020-10-12 Thread Alex Bennée


Markus Armbruster  writes:

> From: Kevin Wolf 
>
> This moves the QMP dispatcher to a coroutine and runs all QMP command
> handlers that declare 'coroutine': true in coroutine context so they
> can avoid blocking the main loop while doing I/O or waiting for other
> events.

This subtly changes the replay behaviour leading to a hang in:

  10:55:18 [alex.bennee@hackbox2:~/l/q/b/bisect] (625581c2…)|✚1(+1/-1) + 
./tests/venv/bin/avocado run 
tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
  Fetching asset from 
tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt
  JOB ID : ec11fd2544f06e6c0d421f16afa757b49f7ed734
  JOB LOG: 
/home/alex.bennee/avocado/job-results/job-2020-10-12T11.40-ec11fd2/job.log
   (1/1) tests/acceptance/replay_kernel.py:ReplayKernel.test_arm_virt: ERROR: 
Could not perform graceful shutdown (26.27 s)
  RESULTS: PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | 
CANCEL 0
  JOB TIME   : 27.77 s

Looking at the log:

  2020-10-12 11:40:31,426 __init__ L0085 DEBUG| [3.887411] 
rtc-pl031 901.pl031: setting system clock to 2020-10-12 10:40:31 UTC 
(1602499231)
  2020-10-12 11:40:31,428 __init__ L0085 DEBUG| [3.887431] sr_init: 
No PMIC hook to init smartreflex
  2020-10-12 11:40:31,447 __init__ L0085 DEBUG| [3.897193] 
uart-pl011 900.pl011: no DMA platform data
  2020-10-12 11:40:31,460 __init__ L0085 DEBUG| [3.897242] md: 
Waiting for all devices to be available before autodetect
  2020-10-12 11:40:31,462 __init__ L0085 DEBUG| [3.897259] md: If 
you don't use raid, use raid=noautodetect
  2020-10-12 11:40:31,475 __init__ L0085 DEBUG| [3.897819] md: 
Autodetecting RAID arrays.
  2020-10-12 11:40:31,476 __init__ L0085 DEBUG| [3.897832] md: 
autorun ...
  2020-10-12 11:40:31,477 __init__ L0085 DEBUG| [3.897842] md: ... 
autorun DONE.
  2020-10-12 11:40:31,483 __init__ L0085 DEBUG| [3.897962] VFS: 
Cannot open root device "(null)" or unknown-block(0,0): error -6
  2020-10-12 11:40:31,483 qmp  L0245 DEBUG| >>> {'execute': 'quit'}
  2020-10-12 11:40:31,495 qmp  L0145 DEBUG| <<< {'timestamp': 
{'seconds': 1602499231, 'microseconds': 493379}, 'event': 'SHUTDOWN', 'data': 
{'guest': True, 'reason': 'guest-reset'}}
  2020-10-12 11:40:31,733 machine  L0325 WARNI| qemu received signal 6; 
command: "./qemu-system-arm -display none -vga none -chardev 
socket,id=mon,path=/var/tmp/tmpzls53khe/qemu-8487-monitor.sock -mon 
chardev=mon,mode=control -machine virt -chardev 
socket,id=console,path=/var/tmp/tmpzls53khe/qemu-8487-console.sock,server,nowait
 -serial chardev:console -icount 
shift=1,rr=record,rrfile=/var/tmp/avocado_n00stdrf/avocado_job_aw60qdul/1-tests_acceptance_replay_kernel.py_ReplayKernel.test_arm_virt/replay.bin
 -kernel 
/home/alex.bennee/avocado/data/cache/by_location/62750ce9e069e69e6a7ff04ff54c382ee660b92a/vmlinuz
 -append printk.time=1 panic=-1 console=ttyAMA0 -net none -no-reboot"
  2020-10-12 11:40:31,734 stacktrace   L0039 ERROR|
  2020-10-12 11:40:31,734 stacktrace   L0042 ERROR| Reproduced traceback 
from: 
/home/alex.bennee/lsrc/qemu.git/builds/bisect/tests/venv/lib/python3.6/site-packages/avocado/core/test.py:767
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| Traceback (most recent 
call last):
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/python/qemu/machine.py", line 435, in 
_do_shutdown
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| 
self._soft_shutdown(timeout, has_quit)
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/python/qemu/machine.py", line 415, in 
_soft_shutdown
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| 
self._qmp.cmd('quit')
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/python/qemu/qmp.py", line 266, in cmd
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR| return 
self.cmd_obj(qmp_cmd)
  2020-10-12 11:40:31,735 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/python/qemu/qmp.py", line 249, in cmd_obj
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| raise 
QMPConnectError("Unexpected empty reply from server")
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| 
qemu.qmp.QMPConnectError: Unexpected empty reply from server
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR|
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| The above exception was 
the direct cause of the following exception:
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR|
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR| Traceback (most recent 
call last):
  2020-10-12 11:40:31,736 stacktrace   L0045 ERROR|   File 
"/home/alex.bennee/lsrc/qemu.git/builds/bisect/tests/acceptance/replay_kernel.py",
 line 128, in test_arm_virt
  2020-10-12