On Thu, Feb 19, 2026 at 04:49:11PM +0100, Kevin Wolf wrote:
> On 18.02.2026 at 17:41, Jens Axboe wrote:
> > On 2/18/26 9:19 AM, Jens Axboe wrote:
> > > On 2/18/26 9:11 AM, Stefan Hajnoczi wrote:
> > >> On Wed, Feb 18, 2026 at 10:57:02AM +0100, Fiona Ebner wrote:
> > >>> On 13.02.26 at 5:05 PM, Kevin Wolf wrote:
> > >>>> On 13.02.2026 at 15:26, Jens Axboe wrote:
> > >>>>> When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> > >>>>> block I/O coroutine inline on the vCPU thread because
> > >>>>> qemu_get_current_aio_context() returns the main AioContext when BQL is
> > >>>>> held. The coroutine calls luring_co_submit() which queues an SQE via
> > >>>>> fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> > >>>>> in gsource_prepare() on the main loop thread.
> > >>>>
> > >>>> Ouch! Yes, looks like we completely missed I/O submitted in vCPU threads
> > >>>> in the recent changes (or I guess worker threads in theory, but I don't
> > >>>> think there are any that actually make use of aio_add_sqe()).
> > >>>>
> > >>>>> Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> > >>>>> scheduled and aio_notify() is never called. The main loop remains asleep
> > >>>>> in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> > >>>>> the next timer fires.
> > >>>>>
> > >>>>> Fix this by calling aio_notify() after queuing the SQE. This wakes the
> > >>>>> main loop via the eventfd so it can run gsource_prepare() and submit the
> > >>>>> pending SQE promptly.
> > >>>>>
> > >>>>> This is a generic fix that benefits all devices using aio=io_uring.
> > >>>>> Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> > >>>>> MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> > >>>>> main loop after queuing block I/O.
> > >>>>>
> > >>>>> This is usually a bit hard to detect, as it also relies on the ppoll
> > >>>>> loop not waking up for other activity, and micro benchmarks tend not to
> > >>>>> see it because they don't have any real processing time. With a
> > >>>>> synthetic test case that has a few usleep() to simulate processing of
> > >>>>> read data, it's very noticeable. The below example reads 128MB with
> > >>>>> O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
> > >>>>> each batch submit, and a 1ms delay after processing each completion.
> > >>>>> Running it on /dev/sda yields:
> > >>>>>
> > >>>>> time sudo ./iotest /dev/sda
> > >>>>>
> > >>>>> ________________________________________________________
> > >>>>> Executed in   25.76 secs      fish           external
> > >>>>>    usr time    6.19 millis  783.00 micros    5.41 millis
> > >>>>>    sys time   12.43 millis  642.00 micros   11.79 millis
> > >>>>>
> > >>>>> while on a virtio-blk or NVMe device we get:
> > >>>>>
> > >>>>> time sudo ./iotest /dev/vdb
> > >>>>>
> > >>>>> ________________________________________________________
> > >>>>> Executed in    1.25 secs      fish           external
> > >>>>>    usr time    1.40 millis    0.30 millis    1.10 millis
> > >>>>>    sys time   17.61 millis    1.43 millis   16.18 millis
> > >>>>>
> > >>>>> time sudo ./iotest /dev/nvme0n1
> > >>>>>
> > >>>>> ________________________________________________________
> > >>>>> Executed in    1.26 secs      fish           external
> > >>>>>    usr time    6.11 millis    0.52 millis    5.59 millis
> > >>>>>    sys time   13.94 millis    1.50 millis   12.43 millis
> > >>>>>
> > >>>>> where the latter are consistent. If we run the same test but keep the
> > >>>>> socket for the ssh connection active by having activity there, then
> > >>>>> the sda test looks as follows:
> > >>>>>
> > >>>>> time sudo ./iotest /dev/sda
> > >>>>>
> > >>>>> ________________________________________________________
> > >>>>> Executed in    1.23 secs      fish           external
> > >>>>>    usr time    2.70 millis   39.00 micros    2.66 millis
> > >>>>>    sys time    4.97 millis  977.00 micros    3.99 millis
> > >>>>>
> > >>>>> as now the ppoll loop is woken all the time anyway.
> > >>>>>
> > >>>>> After this fix, on an idle system:
> > >>>>>
> > >>>>> time sudo ./iotest /dev/sda
> > >>>>>
> > >>>>> ________________________________________________________
> > >>>>> Executed in    1.30 secs      fish           external
> > >>>>>    usr time    2.14 millis    0.14 millis    2.00 millis
> > >>>>>    sys time   16.93 millis    1.16 millis   15.76 millis
> > >>>>>
> > >>>>> Signed-off-by: Jens Axboe <[email protected]>
> > >>>>> ---
> > >>>>>  util/fdmon-io_uring.c | 8 ++++++++
> > >>>>>  1 file changed, 8 insertions(+)
> > >>>>>
> > >>>>> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> > >>>>> index d0b56127c670..96392876b490 100644
> > >>>>> --- a/util/fdmon-io_uring.c
> > >>>>> +++ b/util/fdmon-io_uring.c
> > >>>>> @@ -181,6 +181,14 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
> > >>>>>  
> > >>>>>      trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
> > >>>>>                                   cqe_handler);
> > >>>>> +
> > >>>>> +    /*
> > >>>>> +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> > >>>>> +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> > >>>>> +     * actual io_uring_submit() only happens in gsource_prepare().  Without
> > >>>>> +     * this notify, ppoll() can sleep up to 499ms before submitting.
> > >>>>> +     */
> > >>>>> +    aio_notify(ctx);
> > >>>>>  }
> > >>>>
> > >>>> Makes sense to me.
> > >>>>
> > >>>> At first I wondered if we should use defer_call() for the aio_notify()
> > >>>> to batch the submission, but of course holding the BQL will already take
> > >>>> care of that. And in iothreads where there is no BQL, the aio_notify()
> > >>>> shouldn't make a difference anyway because we're already in the right
> > >>>> thread.
> > >>>>
> > >>>> I suppose the other variation could be to have another io_uring_enter()
> > >>>> call here (but then probably really through defer_call()) to avoid
> > >>>> waiting for another CPU to submit the request in its main loop. But I
> > >>>> don't really have an intuition if that would make things better or worse
> > >>>> in the common case.
> > >>>>
> > >>>> Fiona, does this fix your case, too?
> > >>>
> > >>> Yes, it does fix my issue [0] and the second patch gives another small
> > >>> improvement :)
> > >>>
> > >>> Would it be slightly cleaner to have aio_add_sqe() call aio_notify()
> > >>> itself? Since aio-posix.c calls downwards into fdmon-io_uring.c, it
> > >>> would feel nicer to me to not have fdmon-io_uring.c call "back up". I
> > >>> guess it also depends on whether we expect another future fdmon
> > >>> implementation with .add_sqe() to also benefit from it.
> > >>
> > >> Calling aio_notify() from aio-posix.c:aio_add_sqe() sounds better to me
> > >> because fdmon-io_uring.c has to be careful about calling aio_*() APIs to
> > >> avoid loops.
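
[For context: aio_notify() itself is cheap to call from aio_add_sqe().
Simplified from util/async.c (the real version documents the memory-barrier
pairing in more detail), it only kicks the eventfd when another thread is
actually blocked in ppoll():

    void aio_notify(AioContext *ctx)
    {
        /* Publish prior writes before marking the context notified */
        smp_wmb();
        qatomic_set(&ctx->notified, true);

        smp_mb();
        if (qatomic_read(&ctx->notify_me)) {
            /* A thread is blocked in ppoll(); wake it via the eventfd */
            event_notifier_set(&ctx->notifier);
        }
    }

so calling it unconditionally should be harmless in the iothread case,
where we are already in the right thread.]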
> > > 
> > > Would anyone care to make that edit? I'm on a plane and gone for a bit,
> > > so won't get back to this for the next week. But I would love to see a
> > > fix go in, as this issue has been plaguing me with test timeouts for
> > > quite a while on the CI front. And seems like I'm not alone, if the
> > > patches fix Fiona's issues as well.
> > 
> > Still on a plane but tested this one and it works for me too. Does seem
> > like a better approach, rather than stuffing it in the fdmon part.
> > 
> > Feel free to run with this one and also to update the commit message if
> > you want. Thanks!
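
[For reference, a rough reconstruction of the iotest reproducer described
in the commit message below — liburing-based and illustrative only, not the
actual tool; the chunk size, batching, and delays follow the description,
error handling is mostly trimmed:

    #define _GNU_SOURCE                     /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <liburing.h>

    #define TOTAL  (128ULL * 1024 * 1024)   /* 128MB total */
    #define CHUNK  (128 * 1024)             /* 128KB reads */
    #define BATCH  16                       /* SQEs per submit */

    int main(int argc, char **argv)
    {
        struct io_uring ring;
        unsigned long long off = 0;
        void *bufs[BATCH];
        int fd, i;

        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        io_uring_queue_init(BATCH, &ring, 0);
        for (i = 0; i < BATCH; i++) {
            /* O_DIRECT requires aligned buffers */
            if (posix_memalign(&bufs[i], 4096, CHUNK)) {
                return 1;
            }
        }

        while (off < TOTAL) {
            struct io_uring_cqe *cqe;
            int inflight = 0;

            for (i = 0; i < BATCH && off < TOTAL; i++) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                io_uring_prep_read(sqe, fd, bufs[i], CHUNK, off);
                off += CHUNK;
                inflight++;
            }
            usleep(1000);                   /* 1ms delay before batch submit */
            io_uring_submit(&ring);
            while (inflight--) {
                io_uring_wait_cqe(&ring, &cqe);
                io_uring_cqe_seen(&ring, cqe);
                usleep(1000);               /* 1ms per-completion "processing" */
            }
        }
        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }
]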
> > 
> > 
> > commit a8a94e7a05964d470b8fba50c9d4769489c21752
> > Author: Jens Axboe <[email protected]>
> > Date:   Fri Feb 13 06:52:14 2026 -0700
> > 
> >     aio-posix: notify main loop when SQEs are queued
> >     
> >     When a vCPU thread handles MMIO (holding BQL), aio_co_enter() runs the
> >     block I/O coroutine inline on the vCPU thread because
> >     qemu_get_current_aio_context() returns the main AioContext when BQL is
> >     held. The coroutine calls luring_co_submit() which queues an SQE via
> >     fdmon_io_uring_add_sqe(), but the actual io_uring_submit() only happens
> >     in gsource_prepare() on the main loop thread.
> >     
> >     Since the coroutine ran inline (not via aio_co_schedule()), no BH is
> >     scheduled and aio_notify() is never called. The main loop remains asleep
> >     in ppoll() with up to a 499ms timeout, leaving the SQE unsubmitted until
> >     the next timer fires.
> >     
> >     Fix this by calling aio_notify() after queuing the SQE. This wakes the
> >     main loop via the eventfd so it can run gsource_prepare() and submit the
> >     pending SQE promptly.
> >     
> >     This is a generic fix that benefits all devices using aio=io_uring.
> >     Without it, AHCI/SATA devices see MUCH worse I/O latency since they use
> >     MMIO (not ioeventfd like virtio) and have no other mechanism to wake the
> >     main loop after queuing block I/O.
> >     
> >     This is usually a bit hard to detect, as it also relies on the ppoll
> >     loop not waking up for other activity, and micro benchmarks tend not to
> >     see it because they don't have any real processing time. With a
> >     synthetic test case that has a few usleep() to simulate processing of
> >     read data, it's very noticeable. The below example reads 128MB with
> >     O_DIRECT in 128KB chunks in batches of 16, and has a 1ms delay before
> >     each batch submit, and a 1ms delay after processing each completion.
> >     Running it on /dev/sda yields:
> >     
> >     time sudo ./iotest /dev/sda
> >     
> >     ________________________________________________________
> >     Executed in   25.76 secs      fish           external
> >        usr time    6.19 millis  783.00 micros    5.41 millis
> >        sys time   12.43 millis  642.00 micros   11.79 millis
> >     
> >     while on a virtio-blk or NVMe device we get:
> >     
> >     time sudo ./iotest /dev/vdb
> >     
> >     ________________________________________________________
> >     Executed in    1.25 secs      fish           external
> >        usr time    1.40 millis    0.30 millis    1.10 millis
> >        sys time   17.61 millis    1.43 millis   16.18 millis
> >     
> >     time sudo ./iotest /dev/nvme0n1
> >     
> >     ________________________________________________________
> >     Executed in    1.26 secs      fish           external
> >        usr time    6.11 millis    0.52 millis    5.59 millis
> >        sys time   13.94 millis    1.50 millis   12.43 millis
> >     
> >     where the latter are consistent. If we run the same test but keep the
> >     socket for the ssh connection active by having activity there, then
> >     the sda test looks as follows:
> >     
> >     time sudo ./iotest /dev/sda
> >     
> >     ________________________________________________________
> >     Executed in    1.23 secs      fish           external
> >        usr time    2.70 millis   39.00 micros    2.66 millis
> >        sys time    4.97 millis  977.00 micros    3.99 millis
> >     
> >     as now the ppoll loop is woken all the time anyway.
> >     
> >     After this fix, on an idle system:
> >     
> >     time sudo ./iotest /dev/sda
> >     
> >     ________________________________________________________
> >     Executed in    1.30 secs      fish           external
> >        usr time    2.14 millis    0.14 millis    2.00 millis
> >        sys time   16.93 millis    1.16 millis   15.76 millis
> >     
> >     Signed-off-by: Jens Axboe <[email protected]>
> > 
> > diff --git a/util/aio-posix.c b/util/aio-posix.c
> > index e24b955fd91a..8c7b3795c82d 100644
> > --- a/util/aio-posix.c
> > +++ b/util/aio-posix.c
> > @@ -813,5 +813,13 @@ void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
> >  {
> >      AioContext *ctx = qemu_get_current_aio_context();
> >      ctx->fdmon_ops->add_sqe(ctx, prep_sqe, opaque, cqe_handler);
> > +
> > +    /*
> > +     * Wake the main loop if it is sleeping in ppoll().  When a vCPU thread
> > +     * runs a coroutine inline (holding BQL), it queues SQEs here but the
> 
> I think the comment could even be more generic here. This is not
> specific to coroutines, but the scenario is just that a vCPU thread
> holding the BQL performs I/O.

Good idea, I generalized the comment when merging the patch.
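
For the archive, the generalized comment reads roughly along these lines
(the exact wording is in the merged commit):

    /*
     * Wake the event loop if it is sleeping in ppoll().  This can happen
     * when a thread holding the BQL (e.g. a vCPU thread handling MMIO)
     * performs I/O: the SQE is queued here, but the actual
     * io_uring_submit() only happens in gsource_prepare().  Without this
     * notify, ppoll() can sleep up to 499ms before submitting.
     */
    aio_notify(ctx);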

Stefan
