USB
hw/usb/dev-storage-classic.c | 9 -
1 file changed, 9 deletions(-)
Reviewed-by: Hanna Czenczek
On 10.02.24 09:46, Michael Tokarev wrote:
09.02.2024 19:51, Hanna Czenczek :
On 09.02.24 15:08, Michael Tokarev wrote:
02.02.2024 17:47, Hanna Czenczek :
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending
On 09.02.24 15:38, Michael Tokarev wrote:
02.02.2024 18:31, Hanna Czenczek :
Commit d3f6f294aeadd5f88caf0155e4360808c95b3146 ("virtio-blk: always set
ioeventfd during startup") has made virtio_blk_start_ioeventfd() always
kick the virtqueue (set the ioeventfd), regardless of whet
On 09.02.24 15:08, Michael Tokarev wrote:
02.02.2024 17:47, Hanna Czenczek :
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending). That
also means that its own context (BlockBackend.ctx) and that of its root
On 06.02.24 17:53, Stefan Hajnoczi wrote:
On Fri, Feb 02, 2024 at 03:47:53PM +0100, Hanna Czenczek wrote:
Hi,
Without the AioContext lock, a BB's context may kind of change at any
time (unless it has a root node, and I/O requests are pending). That
also means that its own context
On 06.02.24 15:04, Stefan Hajnoczi wrote:
QEMU's coding style generally forbids C99 mixed declarations.
Signed-off-by: Stefan Hajnoczi
---
hw/block/virtio-blk.c | 25 ++---
1 file changed, 14 insertions(+), 11 deletions(-)
Reviewed-by: Hanna Czenczek
that there is no race.
Suggested-by: Hanna Reitz
Signed-off-by: Stefan Hajnoczi
---
qapi/qmp-dispatch.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
Reviewed-by: Hanna Czenczek
On 05.02.24 18:26, Stefan Hajnoczi wrote:
The VirtIOBlock::rq field has had the type void * since its introduction
in commit 869a5c6df19a ("Stop VM on error in virtio-blk. (Gleb
Natapov)").
Perhaps this was done to avoid the forward declaration of
VirtIOBlockReq.
Hanna Czenczek p
On 05.02.24 18:26, Stefan Hajnoczi wrote:
Hanna Czenczek noted that the array index in
virtio_blk_dma_restart_cb() is not bounds-checked:
g_autofree VirtIOBlockReq **vq_rq = g_new0(VirtIOBlockReq *, num_queues);
...
while (rq) {
VirtIOBlockReq *next = rq->next;
uint1
}
Later on we access s->vq_aio_context[0] under the assumption that there
is at least one virtqueue. Hanna Czenczek noted that
it would help to show that the array index is already valid.
Add an assertion to document that s->vq_aio_context[0] is always
safe...and catch future c
On 05.02.24 18:26, Stefan Hajnoczi wrote:
Hanna Czenczek noticed that the safety of
`vq_aio_context[vq->value] = ctx;` with user-defined vq->value inputs is
not obvious.
The code is structured in validate() + apply() steps so input validation
is there, but it happens way e
the notifiers.
Buglink: https://issues.redhat.com/browse/RHEL-3934
Signed-off-by: Hanna Czenczek
---
include/block/aio.h | 7 ++-
hw/virtio/virtio.c | 42 ++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/include/block/aio.h b/include
is version (v1 too) just ensures the notifier
is enabled after the drain, regardless of its state before.
- Use event_notifier_set() instead of virtio_queue_notify() in patch 2
- Added patch 3
Hanna Czenczek (3):
virtio-scsi: Attach event vq notifier with no_poll
virtio: Re-enable not
36fd126
("virtio-scsi: implement BlockDevOps->drained_begin()")
Reviewed-by: Stefan Hajnoczi
Tested-by: Fiona Ebner
Reviewed-by: Fiona Ebner
Signed-off-by: Hanna Czenczek
---
hw/scsi/virtio-scsi.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/hw
can reuse that function.
Signed-off-by: Hanna Czenczek
---
hw/block/virtio-blk.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 227d83569f..22b8eef69b 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/vi
.
In addition, because the context can be set and queried from different
threads concurrently, it has to be accessed with atomic operations.
Buglink: https://issues.redhat.com/browse/RHEL-19381
Suggested-by: Kevin Wolf
Signed-off-by: Hanna Czenczek
---
block/block-backend.c | 22
bdrv_try_change_aio_context(), which
creates a drained section. With this patch, we keep the BB in-flight
counter elevated throughout, so we know the BB's context cannot change.
Signed-off-by: Hanna Czenczek
---
hw/scsi/scsi-bus.c | 30 +-
1 file changed, 21 insertions
from changing while the BH
is scheduled/running then is just a nice side effect.
Hanna Czenczek (2):
block-backend: Allow concurrent context changes
scsi: Await request purging
block/block-backend.c | 22 +++---
hw/scsi/scsi-bus.c | 30 +-
2
On 01.02.24 16:25, Hanna Czenczek wrote:
On 01.02.24 15:28, Stefan Hajnoczi wrote:
[...]
Did you find a scenario where the virtio-scsi AioContext is different
from the scsi-hd BB's AioContext?
Technically, that’s the reason for this thread, specifically that
virtio_scsi_hotunplug
On 01.02.24 16:25, Hanna Czenczek wrote:
[...]
It just seems simpler to me to not rely on the BB's context at all.
Hm, I now see the problem is that the processing (and scheduling) is
largely done in generic SCSI code, which doesn’t have access to
virtio-scsi’s context, only
On 01.02.24 15:28, Stefan Hajnoczi wrote:
On Thu, Feb 01, 2024 at 03:10:12PM +0100, Hanna Czenczek wrote:
On 31.01.24 21:35, Stefan Hajnoczi wrote:
On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote:
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek
On 31.01.24 21:35, Stefan Hajnoczi wrote:
On Fri, Jan 26, 2024 at 04:24:49PM +0100, Hanna Czenczek wrote:
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote
On 01.02.24 11:21, Kevin Wolf wrote:
On 01.02.2024 at 10:43, Hanna Czenczek wrote:
On 31.01.24 11:17, Kevin Wolf wrote:
On 29.01.2024 at 17:30, Hanna Czenczek wrote:
I don’t like using drain as a form of lock specifically against AioContext
changes, but maybe Stefan is right
On 31.01.24 11:17, Kevin Wolf wrote:
On 29.01.2024 at 17:30, Hanna Czenczek wrote:
I don’t like using drain as a form of lock specifically against AioContext
changes, but maybe Stefan is right, and we should use it in this specific
case to get just the single problem fixed. (Though
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 26.01.24 14:18, Kevin Wolf wrote:
On 25.01.2024 at 18:32, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock
On 25.01.24 19:18, Hanna Czenczek wrote:
On 25.01.24 19:03, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote:
[...]
@@ -3563,6 +3574,13 @@ void
virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
aio_set_event_notifier_poll(ctx
On 25.01.24 19:03, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 06:38:30PM +0100, Hanna Czenczek wrote:
During drain, we do not care about virtqueue notifications, which is why
we remove the handlers on it. When removing those handlers, whether vq
notifications are enabled or not depends
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 24.01.24 22:53, Stefan Hajnoczi wrote:
On Wed, Jan 24, 2024 at 01:12:47PM +0100, Hanna Czenczek wrote:
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext
the notifiers.
Buglink: https://issues.redhat.com/browse/RHEL-3934
Signed-off-by: Hanna Czenczek
---
include/block/aio.h | 7 ++-
hw/virtio/virtio.c | 42 ++
2 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/include/block/aio.h b/include
36fd126
("virtio-scsi: implement BlockDevOps->drained_begin()")
Signed-off-by: Hanna Czenczek
---
hw/scsi/virtio-scsi.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 690aceec45..9f02ceea09 100644
fic case of
virtio-scsi hot-plugging and -unplugging, you can use this patch:
https://czenczek.de/0001-DONTMERGE-Fix-crash-on-scsi-unplug.patch
[1] https://lists.nongnu.org/archive/html/qemu-block/2024-01/msg00317.html
Hanna Czenczek (2):
virtio-scsi: Attach event vq notifier with no_poll
vir
On 23.01.24 18:10, Kevin Wolf wrote:
On 23.01.2024 at 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running o
On 23.01.24 17:40, Hanna Czenczek wrote:
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running only the BlockBackend's AioContext may acc
On 21.12.23 22:23, Kevin Wolf wrote:
From: Stefan Hajnoczi
Stop depending on the AioContext lock and instead access
SCSIDevice->requests from only one thread at a time:
- When the VM is running only the BlockBackend's AioContext may access
the requests list.
- When the VM is stopped only the
On 02.01.24 16:24, Hanna Czenczek wrote:
[...]
I’ve attached the preliminary patch that I didn’t get to send (or test
much) last year. Not sure if it has the same CPU-usage-spike issue
Fiona was seeing, the only functional difference is that I notify the
vq after attaching the notifiers
On 23.01.24 12:12, Fiona Ebner wrote:
[...]
I noticed poll_set_started() is not called, because
ctx->fdmon_ops->need_wait(ctx) was true, i.e. ctx->poll_disable_cnt was
positive (I'm using fdmon_poll). I then found this is because of the
notifier for the event vq, being attached with
On 22.01.24 18:52, Hanna Czenczek wrote:
On 22.01.24 18:41, Hanna Czenczek wrote:
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce
On 22.01.24 18:41, Hanna Czenczek wrote:
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce the CPU-usage-spike issue
with the patch, but I
On 05.01.24 15:30, Fiona Ebner wrote:
On 05.01.24 at 14:43, Fiona Ebner wrote:
On 03.01.24 at 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce the CPU-usage-spike issue
with the patch, but I did run into an assertion failure when
On 02.01.24 16:53, Paolo Bonzini wrote:
On Tue, Jan 2, 2024 at 4:24 PM Hanna Czenczek wrote:
I’ve attached the preliminary patch that I didn’t get to send (or test
much) last year. Not sure if it has the same CPU-usage-spike issue
Fiona was seeing, the only functional difference is that I
the vq
after attaching the notifiers instead of before.
Hanna
From 451aae74fc19a6ea5cd6381247cd9202571651e8 Mon Sep 17 00:00:00 2001
From: Hanna Czenczek
Date: Wed, 6 Dec 2023 18:24:55 +0100
Subject: [PATCH] Keep notifications disabled during drain
Preliminary patch with a preliminary
Message-Id: <20230825040556.4217-1-faithilike...@gmail.com>
Reviewed-by: Stefan Hajnoczi
[hreitz: Rebased and fixed comment spelling]
Signed-off-by: Hanna Czenczek
---
block/file-posix.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/file-posix.c b/block/file-p
s->offset. Also, remove "offset" from BDRVRawState as
there is no usage anymore.
Fixes: 4751d09adcc3 ("block: introduce zone append write for zoned devices")
Signed-off-by: Naohiro Aota
Message-Id: <20231030073853.2601162-1-naohiro.a...@wdc.com>
Reviewed-by: Sam Li
The following changes since commit 3e01f1147a16ca566694b97eafc941d62fa1e8d8:
Merge tag 'pull-sp-20231105' of https://gitlab.com/rth7680/qemu into staging
(2023-11-06 09:34:22 +0800)
are available in the Git repository at:
https://gitlab.com/hreitz/qemu.git tags/pull-block-2023-11-06
for
-off-by: Jean-Louis Dupond
Message-Id: <20231003125236.216473-2-jean-lo...@dupond.be>
[hreitz: Made the documentation change more verbose, as discussed
on-list]
Signed-off-by: Hanna Czenczek
---
qapi/block-core.json | 24 ++--
block/qcow2-cluster.
On 30.10.23 08:38, Naohiro Aota wrote:
raw_co_zone_append() sets "s->offset" where "BDRVRawState *s". This pointer
is used later at raw_co_prw() to save the block address where the data is
written.
When multiple IOs are on-going at the same time, a later IO's
raw_co_zone_append() call
On 09.06.23 22:19, Fabiano Rosas wrote:
This is another caller of bdrv_get_allocated_file_size() that needs to
be converted to a coroutine because that function will be made
asynchronous when called (indirectly) from the QMP dispatcher.
This QMP command is a candidate because it calls
| 4 +++-
2 files changed, 40 insertions(+), 4 deletions(-)
Reviewed-by: Hanna Czenczek
files changed, 12 insertions(+), 8 deletions(-)
Reviewed-by: Hanna Czenczek
in a coroutine.
Signed-off-by: Fabiano Rosas
Reviewed-by: Eric Blake
---
include/block/block-io.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Hanna Czenczek
-coroutine-wrapper.py | 1 +
2 files changed, 2 insertions(+)
Reviewed-by: Hanna Czenczek
On 09.06.23 22:19, Fabiano Rosas wrote:
This is another caller of bdrv_get_allocated_file_size() that needs to
be converted to a coroutine because that function will be made
asynchronous when called (indirectly) from the QMP dispatcher.
This QMP command is a candidate because it calls
On 09.06.23 22:19, Fabiano Rosas wrote:
We're currently doing a full query-block just to enumerate the devices
for qmp_nbd_server_add and then discarding the BlockInfoList
afterwards. Alter hmp_nbd_server_start to instead iterate explicitly
over the block_backends list.
This allows the removal
On 09.06.23 22:19, Fabiano Rosas wrote:
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.
This function is a candidate because it calls bdrv_query_image_info()
->
On 09.06.23 22:19, Fabiano Rosas wrote:
From: Lin Ma
We're converting callers of bdrv_get_allocated_file_size() to run in
coroutines because that function will be made asynchronous when called
(indirectly) from the QMP dispatcher.
This QMP command is a candidate because it indirectly calls
On 03.11.23 16:51, Hanna Czenczek wrote:
On 20.10.23 23:56, Andrey Drobyshev wrote:
[...]
@@ -528,6 +543,14 @@ for use_backing_file in yes no; do
else
_make_test_img -o extended_l2=on 1M
fi
+ # Write cluster #0 and discard its subclusters #0-#3
+ $QEMU_IO -c
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard implementation, much like
it's done with zero_in_l2_slice() /
On 20.10.23 23:56, Andrey Drobyshev wrote:
Add _verify_du_delta() checker which is used to check that real disk
usage delta meets the expectations. For now we use it for checking that
subcluster-based discard/unmap operations lead to actual disk usage
decrease (i.e. PUNCH_HOLE operation is
On 20.10.23 23:56, Andrey Drobyshev wrote:
Move the definition from iotests/250 to common.rc. This is used to
detect real disk usage of sparse files. In particular, we want to use
it for checking subclusters-based discards.
Signed-off-by: Andrey Drobyshev
---
tests/qemu-iotests/250 |
On 20.10.23 23:56, Andrey Drobyshev wrote:
When zeroizing subclusters within single cluster, detect usage of the
BDRV_REQ_MAY_UNMAP flag and fall through to the subcluster-based discard
operation, much like it's done with the cluster-based discards. That
way subcluster-aligned operations
On 16.10.23 15:42, Hanna Czenczek wrote:
Based-on: <20231004014532.1228637-1-stefa...@redhat.com>
([PATCH v2 0/3] vhost: clean up device reset)
Based-on: <20231016083201.23736-1-hre...@redhat.com>
([PATCH] vhost-user: Fix protocol feature bit conflic
On 01.11.23 20:53, Vladimir Sementsov-Ogievskiy wrote:
On 31.10.23 17:05, Hanna Czenczek wrote:
On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote:
From: Vladimir Sementsov-Ogievskiy
Actually block job is not completed without the final flush. It's
rather unexpected to have broken target
(Sorry, opened another reply window, forgot I already had one open...)
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard
On 20.10.23 23:56, Andrey Drobyshev wrote:
This commit makes the discard operation work on the subcluster level
rather than cluster level. It introduces discard_l2_subclusters()
function and makes use of it in qcow2 discard implementation, much like
it's done with zero_in_l2_slice() /
Drobyshev
---
block/qcow2-cluster.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
Reviewed-by: Hanna Czenczek
On 20.10.23 23:56, Andrey Drobyshev wrote:
This helper simply obtains the l2 table parameters of the cluster which
contains the given subclusters range. Right now this info is being
obtained and used by zero_l2_subclusters(). As we're about to introduce
the subclusters discard operation, this
insertions(+), 4 deletions(-)
Reviewed-by: Hanna Czenczek
On 01.10.23 22:46, Denis V. Lunev wrote:
Can you please not top-post. This makes the discussion complex. This
approach is followed in this mailing list and in other similar lists
like LKML.
On 10/1/23 19:08, Mike Maslenkin wrote:
I thought about "conv=notrunc", but my main concern is changed
On 04.10.23 15:56, Vladimir Sementsov-Ogievskiy wrote:
From: Vladimir Sementsov-Ogievskiy
Actually block job is not completed without the final flush. It's
rather unexpected to have broken target when job was successfully
completed long ago and now we fail to flush or process just
--
1 file changed, 4 insertions(+), 2 deletions(-)
Reviewed-by: Hanna Czenczek
, this change here is necessary, so:
Reviewed-by: Hanna Czenczek
On 25.08.23 06:05, Sam Li wrote:
When the zoned request fail, it needs to update only the wp of
the target zones for not disrupting the in-flight writes on
these other zones. The wp is updated successfully after the
request completes.
Fixed the callers with right offset and nr_zones.
On 03.10.23 14:52, Jean-Louis Dupond wrote:
When the discard-no-unref flag is enabled, we keep the reference for
normal discard requests.
But when a discard is executed on a snapshot/qcow2 image with backing,
the discards are saved as zero clusters in the snapshot image.
When committing the
On 03.10.23 14:52, Jean-Louis Dupond wrote:
When the discard-no-unref flag is enabled, we keep the reference for
normal discard requests.
But when a discard is executed on a snapshot/qcow2 image with backing,
the discards are saved as zero clusters in the snapshot image.
When committing the
-by: Hanna Czenczek
On 18.10.23 14:14, Michael S. Tsirkin wrote:
On Wed, Oct 04, 2023 at 02:58:59PM +0200, Hanna Czenczek wrote:
Currently, the vhost-user documentation says that rings are to be
initialized in a disabled state when VHOST_USER_F_PROTOCOL_FEATURES is
negotiated. However, by the time of feature
On 17.10.23 09:53, Viresh Kumar wrote:
On 17-10-23, 09:51, Hanna Czenczek wrote:
Not that I’m really opposed to that, but I don’t see the problem with just
doing that in the same work that makes qemu actually use this flag, exactly
because it’s just a -1/+1 change.
I can send a v2, but should
On 17.10.23 09:49, Viresh Kumar wrote:
On 13-10-23, 20:02, Hanna Czenczek wrote:
On 10.10.23 16:35, Alex Bennée wrote:
I was going to say there is also the rust-vmm vhost-user-master crates
which we've imported:
https://github.com/vireshk/vhost
for the Xen Vhost Frontend:
https
On 17.10.23 07:36, Viresh Kumar wrote:
On 16-10-23, 12:40, Alex Bennée wrote:
Viresh Kumar writes:
On 16-10-23, 11:45, Manos Pitsidianakis wrote:
On Mon, 16 Oct 2023 11:32, Hanna Czenczek wrote:
diff --git a/include/hw/virtio/vhost-user.h
b/include/hw/virtio/vhost-user.h
index 9f9ddf878d
Add the interface for transferring the back-end's state during migration
as defined previously in vhost-user.rst.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
include/hw/virtio/vhost-backend.h | 24 +
include/hw/virtio/vhost-user.h | 1 +
include/hw/virtio/vhost.h
via CHECK_DEVICE_STATE, which on the destination side includes
checking for integrity (i.e. errors during deserialization).
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-user.rst | 172
1 file changed, 172 insertions
it explicit that the
enabled/disabled state is tracked even while the vring is stopped.
Every vring is initialized in a disabled state, and SET_FEATURES without
VHOST_USER_F_PROTOCOL_FEATURES simply becomes one way to enable all
vrings.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
is completely stopped,
i.e. all vrings are stopped, the back-end should cease to modify any
state relating to the guest. Do this by calling it "suspended".
Suggested-by: Stefan Hajnoczi
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-use
, and
writes each chunk consecutively into the migration stream, prefixed by
its length. EOF is indicated by a 0-length chunk.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
include/hw/virtio/vhost.h | 35 +++
hw/virtio/vhost.c | 204 ++
2
[--] 'vhost-user-fs: Implement internal migration'
Changes patch by patch:
- Patch 1: Amended documentation
- Patches 4 and 5: Bumped feature bit and command values as necessary so
as not to conflict with F_SHARED_OBJECT
Hanna Czenczek (7):
vhost-user.rst: Improve [GS]ET_VRING_BASE doc
vh
mmands use different payload structures depending on whether the vring
is split or packed.
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-user.rst | 77 +++--
1 file changed, 73 insertions(+), 4 deletions(-)
diff --git a/docs/interop/vhost-user.rst b/docs/int
, it can be disabled.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Hanna Czenczek
---
hw/virtio/vhost-user-fs.c | 101 +-
1 file changed, 100 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index 49d699ffc2
On 16.10.23 10:45, Manos Pitsidianakis wrote:
On Mon, 16 Oct 2023 11:32, Hanna Czenczek wrote:
diff --git a/include/hw/virtio/vhost-user.h
b/include/hw/virtio/vhost-user.h
index 9f9ddf878d..1d4121431b 100644
--- a/include/hw/virtio/vhost-user.h
+++ b/include/hw/virtio/vhost-user.h
@@ -29,7
introduced in 16094766627, but was not
defined.
Fixes: 160947666276c5b7f6bca4d746bcac2966635d79
("vhost-user: add shared_object msg")
Signed-off-by: Hanna Czenczek
---
docs/interop/vhost-user.rst | 11 +++
include/hw/virtio/vhost-user.h | 3 ++-
s
On 10.10.23 16:35, Alex Bennée wrote:
Hanna Czenczek writes:
(adding Viresh to CC for Xen Vhost questions)
On 10.10.23 12:36, Alex Bennée wrote:
Hanna Czenczek writes:
On 10.10.23 06:00, Yajun Wu wrote:
On 10/9/2023 5:13 PM, Hanna Czenczek wrote:
External email: Use caution opening
On 10.10.23 12:36, Alex Bennée wrote:
Hanna Czenczek writes:
On 10.10.23 06:00, Yajun Wu wrote:
On 10/9/2023 5:13 PM, Hanna Czenczek wrote:
External email: Use caution opening links or attachments
On 09.10.23 11:07, Hanna Czenczek wrote:
On 09.10.23 10:21, Hanna Czenczek wrote
On 10.10.23 06:00, Yajun Wu wrote:
On 10/9/2023 5:13 PM, Hanna Czenczek wrote:
External email: Use caution opening links or attachments
On 09.10.23 11:07, Hanna Czenczek wrote:
On 09.10.23 10:21, Hanna Czenczek wrote:
On 07.10.23 04:22, Yajun Wu wrote:
[...]
The main motivation
On 09.10.23 11:07, Hanna Czenczek wrote:
On 09.10.23 10:21, Hanna Czenczek wrote:
On 07.10.23 04:22, Yajun Wu wrote:
[...]
The main motivation of adding VHOST_USER_SET_STATUS is to let
backend DPDK know
when DRIVER_OK bit is valid. It's an indication of all VQ
configuration has sent
On 09.10.23 10:21, Hanna Czenczek wrote:
On 07.10.23 04:22, Yajun Wu wrote:
[...]
The main motivation of adding VHOST_USER_SET_STATUS is to let backend
DPDK know
when DRIVER_OK bit is valid. It's an indication of all VQ
configuration has sent,
otherwise DPDK has to rely on first queue pair
On 07.10.23 04:22, Yajun Wu wrote:
On 10/6/2023 6:34 PM, Michael S. Tsirkin wrote:
External email: Use caution opening links or attachments
On Fri, Oct 06, 2023 at 11:47:55AM +0200, Hanna Czenczek wrote:
On 06.10.23 11:26, Michael S. Tsirkin wrote:
On Fri, Oct 06, 2023 at 11:15:55AM +0200
On 06.10.23 22:49, Alex Bennée wrote:
Hanna Czenczek writes:
On 06.10.23 17:17, Alex Bennée wrote:
Hanna Czenczek writes:
On 06.10.23 12:34, Michael S. Tsirkin wrote:
On Fri, Oct 06, 2023 at 11:47:55AM +0200, Hanna Czenczek wrote:
On 06.10.23 11:26, Michael S. Tsirkin wrote:
On Fri
On 06.10.23 17:17, Alex Bennée wrote:
Hanna Czenczek writes:
On 06.10.23 12:34, Michael S. Tsirkin wrote:
On Fri, Oct 06, 2023 at 11:47:55AM +0200, Hanna Czenczek wrote:
On 06.10.23 11:26, Michael S. Tsirkin wrote:
On Fri, Oct 06, 2023 at 11:15:55AM +0200, Hanna Czenczek wrote