Re: [PATCH v1] qmp: don't hold ctx lock while querying blockstats

2020-07-10 Thread Kevin Wolf
Am 10.07.2020 um 10:54 hat Zhenyu Ye geschrieben:
> Because the QMP command runs in the main thread, and changes
> to the AioContext of an iothread are only ever made in the
> main thread (the two cannot run in parallel), there is no
> need for lock protection while querying blockstats.
> 
> If we hold the lock here while I/O pressure in the VM is high
> and I/O completes slowly, the main thread will be stuck until
> the lock is released, which will affect vCPU operation and
> finally cause the VM to hang.
> 
> Signed-off-by: Zhenyu Ye 

Are you saying that you think qmp_query_blockstats() runs in the
iothread? This is not true, it runs in the main thread.

This makes dropping the lock wrong: the AioContext lock is what
serialises the main thread's access to the BlockDriverState against the
iothread that is processing its requests. And even dropping it wouldn't
make a difference for other things that have to run in the main thread,
because the main thread is still busy executing the QMP command.

Am I missing something?
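
To make this concrete, here is a minimal standalone sketch (plain
pthreads, not QEMU code; ctx_lock, iothread_fn and the counters are
hypothetical stand-ins for the AioContext lock and the iothread) of
what the lock buys here: the "iothread" keeps two counters in step,
and the main thread only sees a consistent pair because it takes the
same lock around the read, i.e. exactly the acquire/release pair this
patch deletes.

/*
 * Hypothetical toy model, not QEMU code: ctx_lock plays the AioContext
 * lock, iothread_fn plays the iothread, and main() plays the main
 * thread running qmp_query_blockstats().
 * Build with: gcc -pthread torn.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static long rd_ops, wr_ops;        /* stats that must be read as a pair */
static atomic_bool done;

static void *iothread_fn(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&ctx_lock);      /* aio_context_acquire(ctx) */
        rd_ops++;
        wr_ops++;                           /* invariant: rd_ops == wr_ops */
        pthread_mutex_unlock(&ctx_lock);    /* aio_context_release(ctx) */
    }
    atomic_store(&done, true);
    return NULL;
}

int main(void)
{
    pthread_t io;
    long torn = 0;

    pthread_create(&io, NULL, iothread_fn, NULL);
    while (!atomic_load(&done)) {
        pthread_mutex_lock(&ctx_lock);      /* the pair the patch removes */
        long r = rd_ops, w = wr_ops;        /* consistent snapshot */
        pthread_mutex_unlock(&ctx_lock);
        torn += (r != w);
    }
    pthread_join(io, NULL);

    printf("torn snapshots: %ld\n", torn);  /* stays 0 with the lock held */
    return 0;
}

With the locking in place the torn-snapshot count stays at zero; remove
the acquire/release pair in main() and inconsistent pairs show up under
load, which is the kind of race that dropping the AioContext lock exposes.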

(Also, a change like this is most definitely *not* trivial.)

Kevin




Re: [PATCH v1] qmp: don't hold ctx lock while querying blockstats

2020-07-10 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20200710085400.343-1-yezhen...@huawei.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the
testing commands and their output below. If you have Docker installed,
you can probably reproduce it locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

Not run: 259
Failures: 192
Failed 1 of 119 iotests
make: *** [check-tests/check-block.sh] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 669, in <module>
    sys.exit(main())
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=c086654c8b434de29ab30387058535ba', '-u', 
'1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-0xv6b4b9/src/docker-src.2020-07-10-05.11.36.18650:/var/tmp/qemu:z,ro',
 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=c086654c8b434de29ab30387058535ba
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-0xv6b4b9/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    15m29.715s
user    0m9.321s


The full log is available at
http://patchew.org/logs/20200710085400.343-1-yezhen...@huawei.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

[PATCH v1] qmp: don't hold ctx lock while querying blockstats

2020-07-10 Thread Zhenyu Ye
Because the QMP command runs in the main thread, and changes
to the AioContext of an iothread are only ever made in the
main thread (the two cannot run in parallel), there is no
need for lock protection while querying blockstats.

If we hold the lock here while I/O pressure in the VM is high
and I/O completes slowly, the main thread will be stuck until
the lock is released, which will affect vCPU operation and
finally cause the VM to hang.

Signed-off-by: Zhenyu Ye 
---
 block/qapi.c | 6 ------
 1 file changed, 6 deletions(-)
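
A note before the diff: the stall the commit message describes is the
main thread blocking inside aio_context_acquire() for as long as the
iothread holds the lock across slow I/O. Below is a hypothetical
standalone sketch of that contention (plain pthreads again; the mutex
stands in for the AioContext lock, the sleep for slow guest I/O, and
none of these names are QEMU APIs):

/*
 * Hypothetical toy model of the reported stall, not QEMU code.
 * Build with: gcc -pthread stall.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

static void *iothread_fn(void *arg)
{
    pthread_mutex_lock(&ctx_lock);
    sleep(2);                      /* stands in for slow I/O under the lock */
    pthread_mutex_unlock(&ctx_lock);
    return NULL;
}

int main(void)
{
    pthread_t io;
    struct timespec t0, t1;

    pthread_create(&io, NULL, iothread_fn, NULL);
    usleep(100 * 1000);            /* let the iothread take the lock first */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_mutex_lock(&ctx_lock);      /* "query-blockstats" arriving */
    pthread_mutex_unlock(&ctx_lock);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("main thread blocked for ~%.2f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

    pthread_join(io, NULL);
    return 0;
}

In this toy run the main thread reports roughly the full two seconds of
blocking, which models why query-blockstats can stall the monitor (and,
per the commit message, the vCPUs) while the lock is contended.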

diff --git a/block/qapi.c b/block/qapi.c
index afd9f3b4a7..fa56bc145d 100644
--- a/block/qapi.c
+++ b/block/qapi.c
@@ -609,11 +609,8 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
     if (has_query_nodes && query_nodes) {
         for (bs = bdrv_next_node(NULL); bs; bs = bdrv_next_node(bs)) {
             BlockStatsList *info = g_malloc0(sizeof(*info));
-            AioContext *ctx = bdrv_get_aio_context(bs);
 
-            aio_context_acquire(ctx);
             info->value = bdrv_query_bds_stats(bs, false);
-            aio_context_release(ctx);
 
             *p_next = info;
             p_next = &info->next;
@@ -621,7 +618,6 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
     } else {
         for (blk = blk_all_next(NULL); blk; blk = blk_all_next(blk)) {
             BlockStatsList *info;
-            AioContext *ctx = blk_get_aio_context(blk);
             BlockStats *s;
             char *qdev;
 
@@ -629,7 +625,6 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
                 continue;
             }
 
-            aio_context_acquire(ctx);
             s = bdrv_query_bds_stats(blk_bs(blk), true);
             s->has_device = true;
             s->device = g_strdup(blk_name(blk));
@@ -643,7 +638,6 @@ BlockStatsList *qmp_query_blockstats(bool has_query_nodes,
             }
 
             bdrv_query_blk_stats(s->stats, blk);
-            aio_context_release(ctx);
 
             info = g_malloc0(sizeof(*info));
             info->value = s;
-- 
2.19.1