On 3/21/24 3:48 PM, Dongli Zhang wrote:
Hi Jonah,
Would you mind explaining how VIRTIO_F_IN_ORDER improves performance?
https://lore.kernel.org/all/20240321155717.1392787-1-jonah.pal...@oracle.com/#t
I tried to find the reason in prior discussions but could not.
Daniel P. Berrangé pointed out that the coroutine
pool size heuristic is very conservative. Instead of halving
max_map_count, he suggested reserving 5,000 mappings for non-coroutine
users based on observations of guests he has access to.
Fixes: 86a637e48104 ("coroutine: cap per-thread local pool
The following changes since commit fea445e8fe9acea4f775a832815ee22bdf2b0222:
Merge tag 'pull-maintainer-final-for-real-this-time-200324-1' of
https://gitlab.com/stsquad/qemu into staging (2024-03-21 10:31:56 +)
are available in the Git repository at:
https://gitlab.com/stefanha/qemu.git
Implements in-order handling for most virtio devices using the
VIRTIO_F_IN_ORDER transport feature, specifically those that call
virtqueue_push to push their used elements onto the used ring.
The logic behind this implementation is as follows:
1.) virtqueue_pop always enqueues VirtQueueElements in
Define order variables for their use in a VirtQueue's in-order hash
table. Also initialize current_order variables to 0 when creating or
resetting a VirtQueue. These variables are used when the device has
negotiated the VIRTIO_F_IN_ORDER transport feature.
A VirtQueue's current_order_idx represent
Extend the virtio device property definitions to include the
VIRTIO_F_IN_ORDER feature.
The default state of this feature is disabled, allowing it to be
explicitly enabled where it's supported.
Signed-off-by: Jonah Palmer
---
include/hw/virtio/virtio.h | 4 +++-
1 file changed, 3 insertions(+),
Implements in-order handling for the virtio-net device.
Since virtio-net utilizes batching for its Rx VirtQueue, the device is
responsible for calling virtqueue_flush once it has completed its
batching operation.
Note:
-
It's unclear if this implementation is really necessary to "guarantee"
t
Implements in-order handling for vhost devices using shadow virtqueues.
Since vhost's shadow virtqueues utilize batching in their
vhost_svq_flush calls, the vhost device is responsible for calling
virtqueue_flush once it has completed its batching operation.
Note:
-
It's unclear if this imple
Add support for the VIRTIO_F_IN_ORDER feature across a variety of vhost
devices.
The inclusion of VIRTIO_F_IN_ORDER in the feature bits arrays for these
devices ensures that the backend is capable of offering and providing
support for this feature, and that it can be disabled if the backend
does not support it.
Define a GLib hash table (GHashTable) member in a device's VirtQueue
and add its creation, destruction, and reset functions appropriately.
Also define a function to handle the deallocation of InOrderVQElement
values whenever they're removed from the hash table or the hash table
is destroyed. This h
The goal of these patches is to add support to a variety of virtio and
vhost devices for the VIRTIO_F_IN_ORDER transport feature. This feature
indicates that all buffers are used by the device in the same order in
which they were made available by the driver.
These patches attempt to implement a g
Define the InOrderVQElement structure for the VIRTIO_F_IN_ORDER
transport feature implementation.
The InOrderVQElement structure is used to encapsulate out-of-order
VirtQueueElement data that was processed by the host. This data
includes:
- The processed VirtQueueElement (elem)
- Length of data
Changes in v2:
* Ran into another issue while writing the IO test Stefan wanted
to have (good call :)), so include a fix for that and add the
test. I didn't notice during manual testing, because I hadn't
used a scripted QMP 'quit', so there was no race.
Fiona Ebner (2):
blo
Previously, bdrv_pad_request() could not deal with a NULL qiov when
a read needed to be aligned. During prefetch, a stream job will pass a
NULL qiov. Add a test case to cover this scenario.
By accident, also covers a previous race during shutdown, where block
graph changes during iteration in bdrv
From: Stefan Reiter
Some operations, e.g. block-stream, perform reads while discarding the
results (only copy-on-read matters). In this case, they will pass NULL
as the target QEMUIOVector, which will however trip bdrv_pad_request,
since it wants to extend its passed vector. In particular, this i
The old_bs variable in bdrv_next() is currently determined by looking
at the old block backend. However, if the block graph changes before
the next bdrv_next() call, it might be that the associated BDS is not
the same that was referenced previously. In that case, the wrong BDS
is unreferenced, lead