On 8/8/23 06:35, Klaus Jensen wrote:
From: Klaus Jensen
Hi,
There was a small typo in the last pull. This replaces it.
The following changes since commit 0450cf08976f9036feaded438031b4cba94f6452:
Merge tag 'fixes-pull-request' of https://gitlab.com/marcandre.lureau/qemu
into staging (2023-08-07 13:55:00 -0700)
On Tue, Aug 08, 2023 at 11:58:52AM -0400, Stefan Hajnoczi wrote:
> CoMutex has poor performance when lock contention is high. The tracked
> requests list is accessed frequently and performance suffers in QEMU
> multi-queue block layer scenarios.
>
> It is not necessary to use CoMutex for the requests lock.
On Tue, Aug 08, 2023 at 11:58:51AM -0400, Stefan Hajnoczi wrote:
> Signed-off-by: Stefan Hajnoczi
> ---
> block/io.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
Reviewed-by: Eric Blake
>
> diff --git a/block/io.c b/block/io.c
> index 055fcf7438..85d5176256 100644
> --- a/block/io.c
On Mon, Jul 31, 2023 at 05:33:38PM -0300, Fabiano Rosas wrote:
> We can fail the blk_insert_bs() at init_blk_migration(), leaving the
> BlkMigDevState without a dirty_bitmap and BlockDriverState. Account
> for the possibly missing elements when doing cleanup.
>
> Fix the following crashes:
>
> Th
CoMutex has poor performance when lock contention is high. The tracked
requests list is accessed frequently and performance suffers in QEMU
multi-queue block layer scenarios.
It is not necessary to use CoMutex for the requests lock. The lock is
always released across coroutine yield operations. It
As part of the ongoing multi-queue QEMU block layer work, I found that CoMutex
reqs_lock scales poorly when more IOThreads are added. These patches double
IOPS in the 4 IOThreads randread benchmark that I have been running with my
out-of-tree virtio-blk-iothread-vq-mapping branch
(https://gitlab.co
Signed-off-by: Stefan Hajnoczi
---
block/io.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/block/io.c b/block/io.c
index 055fcf7438..85d5176256 100644
--- a/block/io.c
+++ b/block/io.c
@@ -593,8 +593,14 @@ static void coroutine_fn tracked_request_end(BdrvTrackedRequest *req)
From: Klaus Jensen
The Reclaim Unit Update operation in I/O Management Receive does not
verify the presence of a configured endurance group prior to accessing
it.
Fix this.
Cc: qemu-stable@nongnu.org
Fixes: 73064edfb864 ("hw/nvme: flexible data placement emulation")
Signed-off-by: Klaus Jensen
From: Klaus Jensen
Fix two potential accesses to null pointers.
Klaus Jensen (2):
hw/nvme: fix null pointer access in directive receive
hw/nvme: fix null pointer access in ruh update
hw/nvme/ctrl.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
--
2.41.0
From: Klaus Jensen
nvme_directive_receive() does not check if an endurance group has been
configured (set) prior to testing if flexible data placement is enabled
or not.
Fix this.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1815
Fixes: 73064edfb864 ("hw/nvme: flexible data placement emulation")
From: Klaus Jensen
Hi,
There was a small typo in the last pull. This replaces it.
The following changes since commit 0450cf08976f9036feaded438031b4cba94f6452:
Merge tag 'fixes-pull-request' of https://gitlab.com/marcandre.lureau/qemu
into staging (2023-08-07 13:55:00 -0700)
are available in the Git repository at:
Adding Kevin and Hanna for block, since this still seems untouched?
Thanks,
Claudio
On 7/31/23 22:33, Fabiano Rosas wrote:
> We can fail the blk_insert_bs() at init_blk_migration(), leaving the
> BlkMigDevState without a dirty_bitmap and BlockDriverState. Account
> for the possibly missing elements when doing cleanup.