When a block job ends, it removes its filter from the tree, but the
last reference to the filter bds is dropped only when the job is
destroyed. So QEMU crashes on the QMP command
'query-named-block-nodes' if we have a finalized but not yet dismissed
block job with a filter, for example block-stream: QEMU stumbles upon
the half-dead filter, which no longer has references from which block
info can be queried. Skip such filters while listing named block
nodes.
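A minimal sketch of the idea (the helper and the exact check are
assumptions for illustration, not the verbatim upstream hunk): while
walking the graph for query-named-block-nodes, the half-dead case can
be recognized as a filter node whose parent list has become empty.

#include "block/block_int.h"   /* BlockDriverState, BlockDriver */
#include "qemu/queue.h"        /* QLIST_EMPTY() */

/* Hypothetical helper: a filter that has lost every parent BdrvChild
 * is only kept alive by the finalized-but-not-dismissed job's
 * reference and cannot be queried safely. */
static bool bdrv_is_half_dead_filter(const BlockDriverState *bs)
{
    return bs->drv && bs->drv->is_filter && QLIST_EMPTY(&bs->parents);
}

bdrv_named_nodes_list() would then simply 'continue' past nodes for
which this returns true instead of calling bdrv_block_device_info()
on them.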
This patchset also adds a test covering this scenario:
Check that we can list named block nodes while a block-stream job is
finalized but not yet dismissed. This previously led to a crash.
Signed-off-by: Andrey Zhadchenko
---
tests/qemu-iotests/030     | 17 +++++++++++++++++
tests/qemu-iotests/030.out |  4 ++--
2 files changed, 19 insertions(+), 2 deletions(-)
Unlike other transaction commands, bitmap operations do not drain the
target bds. If we have an IOThread, this may result in inconsistencies,
as the bitmap content may change during the transaction command.
Add bdrv_drained_begin()/end() to the bitmap operations.
Signed-off-by: Andrey Zhadchenko
---
blockd
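For the bitmap patch above, a sketch of the drained-section pattern
(the action shape and field names are illustrative, not the exact
hunk):

#include "block/block.h"   /* bdrv_drained_begin()/bdrv_drained_end() */

typedef struct BitmapActionState {
    BlockDriverState *bs;   /* node that owns the dirty bitmap */
    bool drained;
} BitmapActionState;

static void bitmap_action_prepare(BitmapActionState *state)
{
    /* Quiesce the node: wait for in-flight requests and block new
     * ones, including those submitted from an IOThread. */
    bdrv_drained_begin(state->bs);
    state->drained = true;

    /* ... perform the bitmap add/clear/merge here ... */
}

static void bitmap_action_clean(BitmapActionState *state)
{
    /* The drained section spans the whole transaction action, so the
     * bitmap cannot change between prepare and commit/abort. */
    if (state->drained) {
        bdrv_drained_end(state->bs);
    }
}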
Now all transaction actions drain their respective bds, so drop the
global bdrv_drain_all() call: it did not protect anything in the case
of IOThreads anyway.
Signed-off-by: Andrey Zhadchenko
---
blockdev.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 7a376fce90..65932f9afb 100644
--- a/blockdev.c
The last return statement should return true, since at that point we
have already established that start == next_dirty.
Also, fix the hbitmap_status() description in the header.
Cc: qemu-sta...@nongnu.org
Fixes: a6426475a75 ("block/dirty-bitmap: introduce bdrv_dirty_bitmap_status()")
Signed-off-by: Andrey Zhadchenko
---
include
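For orientation, a reconstruction of the control flow the fix implies
(a sketch, not the verbatim util/hbitmap.c body; the helper signatures
match QEMU's hbitmap_next_dirty()/hbitmap_next_zero()):

#include <stdbool.h>
#include <stdint.h>

typedef struct HBitmap HBitmap;
/* both return -1 if nothing is found in [start, start + count) */
int64_t hbitmap_next_dirty(const HBitmap *hb, int64_t start, int64_t count);
int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count);

/* Return true if the range starts dirty, false if it starts clean;
 * *pnum is the length of the uniform piece at the start. */
bool hbitmap_status(const HBitmap *hb, int64_t start, int64_t count,
                    int64_t *pnum)
{
    int64_t next_dirty = hbitmap_next_dirty(hb, start, count);
    int64_t next_zero;

    if (next_dirty == -1) {            /* whole range clean */
        *pnum = count;
        return false;
    }
    if (next_dirty > start) {          /* clean piece, then dirty */
        *pnum = next_dirty - start;
        return false;
    }

    /* Here start == next_dirty: the range starts dirty. */
    next_zero = hbitmap_next_zero(hb, start, count);
    if (next_zero == -1) {             /* whole range dirty */
        *pnum = count;
        return true;
    }

    *pnum = next_zero - start;         /* dirty piece, then clean */
    return true;                       /* the fix: this used to be false */
}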
Although QEMU virtio is quite fast, there is still some room for
improvement. Disk latency can be reduced if we handle virtio-blk
requests in the host kernel instead of passing them to QEMU. This
patch adds a vhost-blk backend which sets up the vhost-blk kernel
module to process requests.
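To make the QEMU/kernel split concrete, below is a minimal sketch of
the generic vhost handshake such a backend performs. /dev/vhost-blk is
an assumption based on the cover letter; the ioctls are the standard
ones from <linux/vhost.h>.

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Open the (assumed) vhost-blk character device and claim ownership,
 * the first step of any vhost backend setup. */
static int vhost_blk_open(void)
{
    int fd = open("/dev/vhost-blk", O_RDWR);    /* device node assumed */

    if (fd < 0) {
        return -1;
    }
    if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0) { /* bind fd to this task */
        close(fd);
        return -1;
    }
    return fd;
}

/* Negotiate features: read what the kernel offers, mask with what the
 * backend supports, write the result back. */
static int vhost_blk_negotiate(int fd, uint64_t supported)
{
    uint64_t features;

    if (ioctl(fd, VHOST_GET_FEATURES, &features) < 0) {
        return -1;
    }
    features &= supported;
    return ioctl(fd, VHOST_SET_FEATURES, &features);
}

After this the backend would hand the kernel the guest memory layout
(VHOST_SET_MEM_TABLE) and the virtqueue kick/call eventfds
(VHOST_SET_VRING_KICK/VHOST_SET_VRING_CALL), so the module can process
requests without exiting to userspace.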
test setup and results
Sending the second version of this patchset, as @stefanha requested.
The main difference from the previous version is the added vhost
multithreading support.
I must also note that there are currently several problems which I
intend to reconsider/fix later:
- vmsd is present but migration is not supported
- Bl
Although QEMU virtio-blk is quite fast, there is still some room for
improvement. Disk latency can be reduced if we handle virtio-blk
requests in the host kernel, so we avoid a lot of syscalls and context
switches. The biggest disadvantage of this vhost-blk flavor is that it
only works with the raw format. Luckily Kirill Tkhai pr