Hi Peter, hi Daniel,
On Mon, May 6, 2024 at 5:29 PM Peter Xu wrote:
>
> On Mon, May 06, 2024 at 12:08:43PM +0200, Jinpu Wang wrote:
> > Hi Peter, hi Daniel,
>
> Hi, Jinpu,
>
> Thanks for sharing these test results. Sounds like great news.
>
> What's your plan next? Would it then be worthwhile
Commit 1f25c172f837 ("monitor: use aio_co_reschedule_self()") was a code
cleanup that uses aio_co_reschedule_self() instead of open-coding the
coroutine rescheduling.
Bug RHEL-34618 was reported and Kevin Wolf identified
the root cause. I missed that aio_co_reschedule_self() ->
This series fixes RHEL-34618 "qemu crash on Assertion `luringcb->co->ctx ==
s->aio_context' failed when do block_resize on hotplug disk with aio=io_uring":
https://issues.redhat.com/browse/RHEL-34618
Kevin identified commit 1f25c172f837 ("monitor: use aio_co_reschedule_self()")
as the root cause.
The main loop has two AioContexts: qemu_aio_context and iohandler_ctx.
The main loop runs them both, but nested aio_poll() calls on
qemu_aio_context exclude iohandler_ctx.
Which one should qemu_get_current_aio_context() return when called from
the main loop? Document that it's always qemu_aio_context.
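For reference, the helper's behavior is roughly the following open-coded
pattern (a minimal sketch built from the public coroutine API, not a
verbatim copy of util/async.c):

/* Move the current coroutine to new_ctx unless it already appears to be
 * running there. Scheduling before yielding closes the race between the
 * two steps that the API was designed to avoid. */
void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx)
{
    AioContext *old_ctx = qemu_get_current_aio_context();

    if (old_ctx != new_ctx) {
        aio_co_schedule(new_ctx, qemu_coroutine_self());
        qemu_coroutine_yield();
    }
}

The old_ctx != new_ctx shortcut is the problem spot: in the main loop
thread qemu_get_current_aio_context() returns qemu_aio_context even while
a coroutine is actually running in iohandler_ctx, so a reschedule between
the two main loop contexts can be skipped when it is still needed.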
On Fri, May 03, 2024 at 07:33:17PM +0200, Kevin Wolf wrote:
> On 06.02.2024 at 20:06, Stefan Hajnoczi wrote:
> > The aio_co_reschedule_self() API is designed to avoid the race
> > condition between scheduling the coroutine in another AioContext and
> > yielding.
> >
> > The QMP dispatch
On Mon, May 06, 2024 at 12:08:43PM +0200, Jinpu Wang wrote:
> Hi Peter, hi Daniel,
Hi, Jinpu,
Thanks for sharing these test results. Sounds like great news.
What's your plan next? Would it then be worthwhile / possible to move QEMU
in that direction? Would that greatly simplify the rdma code
On Mon, May 06, 2024 at 02:06:28AM +, Gonglei (Arei) wrote:
> Hi, Peter
Hey, Lei,
Happy to see you around again after years.
> RDMA features high bandwidth, low latency (in a non-blocking, lossless
> network), and direct remote memory access that bypasses the CPU (as you
> know, CPU resources
Extend the virtio device property definitions to include the
VIRTIO_F_IN_ORDER feature.
The default state of this feature is disabled, allowing it to be
explicitly enabled where it's supported.
Tested-by: Lei Yang
Acked-by: Eugenio Pérez
Signed-off-by: Jonah Palmer
---
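For illustration, the wiring is on the order of a single property line
(a sketch, not the verbatim diff; the "in_order" property name and the
exact property table it lands in are assumptions):

/* Sketch: expose VIRTIO_F_IN_ORDER as an "in_order" device property,
 * defaulting to off so it must be enabled explicitly. */
DEFINE_PROP_BIT64("in_order", VirtIODevice, host_features,
                  VIRTIO_F_IN_ORDER, false),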
Add support for the VIRTIO_F_IN_ORDER feature across a variety of vhost
devices.
The inclusion of VIRTIO_F_IN_ORDER in the feature bits arrays for these
devices ensures that the backend is capable of offering and providing
support for this feature, and that it can be disabled if the backend
does
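Per device, the change described here amounts to one new entry in the
feature bits array that the device hands to vhost (a sketch; the entries
around VIRTIO_F_IN_ORDER are placeholders, and the array name varies per
device):

/* Sketch of a vhost device's feature bits array with the new entry. */
static const int feature_bits[] = {
    VIRTIO_RING_F_INDIRECT_DESC,
    VIRTIO_RING_F_EVENT_IDX,
    VIRTIO_F_VERSION_1,
    VIRTIO_F_IN_ORDER,          /* offer in-order use of buffers */
    VHOST_INVALID_FEATURE_BIT
};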
Add VIRTIO_F_IN_ORDER feature support for virtqueue_fill operations.
The goal of the virtqueue_fill operation when the VIRTIO_F_IN_ORDER
feature has been negotiated is to search for this now-used element,
set its length, and mark the element as filled in the VirtQueue's
used_elems array.
By
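A sketch of that search-and-mark step (used_elems and filled are the
fields this series introduces; the helper name and everything else here
are illustrative):

/* Find the now-used element in used_elems, starting at used_idx, record
 * its written length, and mark it filled. Assumes the element was
 * recorded at pop time and is present in the array. */
static void virtqueue_ordered_fill(VirtQueue *vq,
                                   const VirtQueueElement *elem,
                                   unsigned int len)
{
    unsigned int i = vq->used_idx;

    while (vq->used_elems[i % vq->vring.num].index != elem->index) {
        i++;
    }
    vq->used_elems[i % vq->vring.num].len = len;
    vq->used_elems[i % vq->vring.num].filled = true;
}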
The goal of these patches is to add support to a variety of virtio and
vhost devices for the VIRTIO_F_IN_ORDER transport feature. This feature
indicates that all buffers are used by the device in the same order in
which they were made available by the driver.
These patches attempt to implement a
Add VIRTIO_F_IN_ORDER feature support for virtqueue_flush operations.
The goal of the virtqueue_flush operation when the VIRTIO_F_IN_ORDER
feature has been negotiated is to write elements to the used/descriptor
ring in-order and then update used_idx.
The function iterates through the
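For the split ring, that loop could look roughly like this (a sketch
reusing virtio.c's internal vring_used_write()/vring_used_idx_set()
helpers; the real patch's batching may differ):

/* Write consecutive filled elements to the used ring in order, stop at
 * the first unfilled one, then publish the new used_idx once. */
static void virtqueue_ordered_flush(VirtQueue *vq)
{
    unsigned int i = vq->used_idx;
    unsigned int ndescs = 0;

    while (vq->used_elems[i % vq->vring.num].filled) {
        VRingUsedElem uelem;

        uelem.id = vq->used_elems[i % vq->vring.num].index;
        uelem.len = vq->used_elems[i % vq->vring.num].len;
        vring_used_write(vq, &uelem, i % vq->vring.num);
        vq->used_elems[i % vq->vring.num].filled = false;
        ndescs++;
        i++;
    }
    vring_used_idx_set(vq, vq->used_idx + ndescs);
}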
Add VIRTIO_F_IN_ORDER feature support in virtqueue_split_pop and
virtqueue_packed_pop.
VirtQueueElements popped from the available/descriptor ring are added to
the VirtQueue's used_elems array in-order and in the same fashion as
they would be added to the used and descriptor rings, respectively.
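The bookkeeping at pop time might look like the following (a sketch; the
slot calculation is inferred from the description above rather than taken
from the patch, and assumes last_avail_idx still points at the element
being popped):

/* Remember the popped element at the used-ring slot it will eventually
 * occupy, so fill/flush can complete it in order later. */
if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_IN_ORDER)) {
    unsigned int slot = vq->last_avail_idx % vq->vring.num;

    vq->used_elems[slot].index = elem->index;
    vq->used_elems[slot].len = elem->len;
    vq->used_elems[slot].ndescs = elem->ndescs;
    vq->used_elems[slot].filled = false;
}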
Add the boolean 'filled' member to the VirtQueueElement structure. The
use of this boolean will signify if the element has been written to the
used / descriptor ring or not. This boolean is used to support the
VIRTIO_F_IN_ORDER feature.
Tested-by: Lei Yang
Signed-off-by: Jonah Palmer
---
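In context, the structure change is small (a sketch with the surrounding
fields abridged; only the filled member is new):

typedef struct VirtQueueElement {
    unsigned int index;
    unsigned int len;
    unsigned int ndescs;
    /* VIRTIO_F_IN_ORDER: has this element been written to the
     * used/descriptor ring yet? */
    bool filled;
    struct iovec *in_sg;
    struct iovec *out_sg;
    /* remaining fields unchanged */
} VirtQueueElement;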
Hi Peter, hi Daniel,
On Fri, May 3, 2024 at 4:33 PM Peter Xu wrote:
>
> On Fri, May 03, 2024 at 08:40:03AM +0200, Jinpu Wang wrote:
> > I had a brief look at the rsocket changelog; there seems to be some
> > improvement over time,
> > so it might be worth revisiting this. Due to the socket abstraction, we
-    uint8_t mo = (cdw10 & 0xff);
+    uint8_t mo = cdw10 & 0xf;

     switch (mo) {
     case NVME_IOMS_MO_NOP:
---
base-commit: 84b0eb1826f690aa8d51984644318ee6c810f5bf
change-id: 20240506-fix-ioms-mo-97098c6c5396
Best regards,
--
Klaus Jensen