Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

2023-09-26 Thread Stefan Hajnoczi
On Tue, Sep 26, 2023 at 09:28:15AM +0800, Ming Lei wrote:
> On Mon, Sep 25, 2023 at 05:17:10PM -0400, Stefan Hajnoczi wrote:
> > On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
> > > On Fri, Sep 8, 2023 at 11:25 PM Ming Lei  wrote:
> > > >
> > > > On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> > > > > On 9/8/23 8:34 AM, Ming Lei wrote:
> > > > > > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> > > > > >> On 9/8/23 3:30 AM, Ming Lei wrote:
> > > > > >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > > > >>> index ad636954abae..95a3d31a1ef1 100644
> > > > > >>> --- a/io_uring/io_uring.c
> > > > > >>> +++ b/io_uring/io_uring.c
> > > > > >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > > > >>>   }
> > > > > >>>   }
> > > > > >>>
> > > > > >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > > > > >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > > > > >>> + issue_flags |= IO_URING_F_NONBLOCK;
> > > > > >>> +
> > > > > >>
> > > > > >> I think this comment deserves to be more descriptive. Normally we
> > > > > >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > > > > >
> > > > > > Yeah, we don't do that until commit 2bc057692599 ("block: don't make
> > > > > > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the responsibility/risk
> > > > > > up to io_uring.
> > > > > >
> > > > > >> is the issuer and not necessarily the poller of it. That generally falls
> > > > > >> upon the original issuer to poll these requests.
> > > > > >>
> > > > > >> I think this should be a separate commit, coming before the main fix
> > > > > >> which is below.
> > > > > >
> > > > > > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > > > > > approach in V2 doesn't need this change.
> > > > > >
> > > > > >>
> > > > > >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > > > > >>>   finish_wait(&tctx->wait, &wait);
> > > > > >>>   } while (1);
> > > > > >>>
> > > > > >>> + /*
> > > > > >>> +  * Reap events from each ctx, otherwise these requests may take
> > > > > >>> +  * resources and prevent other contexts from being moved on.
> > > > > >>> +  */
> > > > > >>> + xa_for_each(&tctx->xa, index, node)
> > > > > >>> + io_iopoll_try_reap_events(node->ctx);
> > > > > >>
> > > > > >> The main issue here is that if someone isn't polling for them, then we
> > > > > >
> > > > > > That is actually what this patch is addressing, :-)
> > > > >
> > > > > Right, that part is obvious :)
> > > > >
> > > > > >> get to wait for a timeout before they complete. This can delay exit, for
> > > > > >> example, as we're now just waiting 30 seconds (or whatever the timeout
> > > > > >> is on the underlying device) for them to get timed out before exit can
> > > > > >> finish.
> > > > > >
> > > > > > For the issue on null_blk, the device timeout handler provides forward
> > > > > > progress: requests are released, so new IO can be handled.
> > > > > >
> > > > > > However, not all devices support timeout, such as virtio device.
> > > > >
> > > > > That's a bug in the driver, you cannot sanely support polled IO and not
> > > > > be able to deal with timeouts. Someone HAS to reap the requests and
> > > > > there are only two things that can do that - the application doing the
> > > > > polled IO, or if that doesn't happen, a timeout.
> > > >
> > > > OK, then the device driver timeout handler has a new responsibility of
> > > > covering userspace accidents, :-)
> > 
> > Sorry, I don't have enough context so this is probably a silly question:
> > 
> > When an application doesn't reap a polled request, why doesn't the block
> > layer take care of this in a generic way? I don't see anything
> > driver-specific about this.
> 
> The block layer doesn't have the knowledge to handle that; io_uring knows the
> application is exiting and can help reap the events.

I thought the discussion was about I/O timeouts in general but here
you're only mentioning application exit. Are we talking about I/O
timeouts or purely about cleaning up I/O requests when an application
exits?

> 
> But the big question is whether there really is an IO timeout for virtio-blk.
> If there is not, the reap done in io_uring may never return and cause other
> issues, so if it is done in io_uring, it can only be thought of as a sort of
> improvement.

virtio-blk drivers have no way of specifying timeouts on the device or
aborting/canceling requests.

virtio-blk devices may fail requests if they implement an internal
timeout mechanism (e.g. the host kernel fails requests after a host
timeout), but this is not controlled by the driver and there is no
guarantee that the device 

Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

2023-09-25 Thread Ming Lei
On Mon, Sep 25, 2023 at 05:17:10PM -0400, Stefan Hajnoczi wrote:
> On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
> > On Fri, Sep 8, 2023 at 11:25 PM Ming Lei  wrote:
> > >
> > > On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> > > > On 9/8/23 8:34 AM, Ming Lei wrote:
> > > > > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> > > > >> On 9/8/23 3:30 AM, Ming Lei wrote:
> > > > >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > > >>> index ad636954abae..95a3d31a1ef1 100644
> > > > >>> --- a/io_uring/io_uring.c
> > > > >>> +++ b/io_uring/io_uring.c
> > > > >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > > >>>   }
> > > > >>>   }
> > > > >>>
> > > > >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > > > >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > > > >>> + issue_flags |= IO_URING_F_NONBLOCK;
> > > > >>> +
> > > > >>
> > > > >> I think this comment deserves to be more descriptive. Normally we
> > > > >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > > > >
> > > > > Yeah, we don't do that until commit 2bc057692599 ("block: don't make
> > > > > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the responsibility/risk
> > > > > up to io_uring.
> > > > >
> > > > >> is the issuer and not necessarily the poller of it. That generally falls
> > > > >> upon the original issuer to poll these requests.
> > > > >>
> > > > >> I think this should be a separate commit, coming before the main fix
> > > > >> which is below.
> > > > >
> > > > > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > > > > approach in V2 doesn't need this change.
> > > > >
> > > > >>
> > > > >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > > > >>>   finish_wait(&tctx->wait, &wait);
> > > > >>>   } while (1);
> > > > >>>
> > > > >>> + /*
> > > > >>> +  * Reap events from each ctx, otherwise these requests may take
> > > > >>> +  * resources and prevent other contexts from being moved on.
> > > > >>> +  */
> > > > >>> + xa_for_each(&tctx->xa, index, node)
> > > > >>> + io_iopoll_try_reap_events(node->ctx);
> > > > >>
> > > > >> The main issue here is that if someone isn't polling for them, then we
> > > > >
> > > > > That is actually what this patch is addressing, :-)
> > > >
> > > > Right, that part is obvious :)
> > > >
> > > > >> get to wait for a timeout before they complete. This can delay exit, for
> > > > >> example, as we're now just waiting 30 seconds (or whatever the timeout
> > > > >> is on the underlying device) for them to get timed out before exit can
> > > > >> finish.
> > > > >
> > > > > For the issue on null_blk, the device timeout handler provides forward
> > > > > progress: requests are released, so new IO can be handled.
> > > > >
> > > > > However, not all devices support timeout, such as virtio device.
> > > >
> > > > That's a bug in the driver, you cannot sanely support polled IO and not
> > > > be able to deal with timeouts. Someone HAS to reap the requests and
> > > > there are only two things that can do that - the application doing the
> > > > polled IO, or if that doesn't happen, a timeout.
> > >
> > > OK, then the device driver timeout handler has a new responsibility of
> > > covering userspace accidents, :-)
> 
> Sorry, I don't have enough context so this is probably a silly question:
> 
> When an application doesn't reap a polled request, why doesn't the block
> layer take care of this in a generic way? I don't see anything
> driver-specific about this.

The block layer doesn't have the knowledge to handle that; io_uring knows the
application is exiting and can help reap the events.

But the big question is whether there really is an IO timeout for virtio-blk.
If there is not, the reap done in io_uring may never return and cause other
issues, so if it is done in io_uring, it can only be thought of as a sort of
improvement.

The real bug fix is still in the device driver; usually only the driver timeout
handler can provide a forward-progress guarantee.
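
For illustration, a minimal sketch of what such a forward-progress guarantee
can look like in a blk-mq driver's ->timeout callback, loosely in the spirit
of null_blk's timeout handling (the "my_" names are hypothetical, not taken
from any real driver):

#include <linux/blk-mq.h>

struct my_cmd {
	blk_status_t error;
};

/*
 * Illustrative ->timeout handler: fail the stuck request and complete it,
 * so its tag and other resources are released and pollers or exiting tasks
 * can make forward progress.
 */
static enum blk_eh_timer_return my_timeout_rq(struct request *rq)
{
	struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

	/* Record an error; the driver's completion path reports it. */
	cmd->error = BLK_STS_TIMEOUT;
	blk_mq_complete_request(rq);

	/* The request was completed here, so don't rearm the timer. */
	return BLK_EH_DONE;
}

static const struct blk_mq_ops my_mq_ops = {
	/* .queue_rq, .poll, .complete, ... elided */
	.timeout	= my_timeout_rq,
};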

> 
> Driver-specific behavior would be sending an abort/cancel upon timeout.
> virtio-blk cannot do that because there is no such command in the device
> specification at the moment. So simply waiting for the polled request to
> complete is the only thing that can be done (aside from resetting the
> device), and it's generic behavior.

Then it looks unsafe to support IO polling for virtio-blk; maybe disable it
by default for now until the virtio-blk spec starts to support IO abort?

Thanks,
Ming


Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

2023-09-25 Thread Stefan Hajnoczi
On Fri, Sep 15, 2023 at 03:04:05PM +0800, Jason Wang wrote:
> On Fri, Sep 8, 2023 at 11:25 PM Ming Lei  wrote:
> >
> > On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> > > On 9/8/23 8:34 AM, Ming Lei wrote:
> > > > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> > > >> On 9/8/23 3:30 AM, Ming Lei wrote:
> > > >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > > >>> index ad636954abae..95a3d31a1ef1 100644
> > > >>> --- a/io_uring/io_uring.c
> > > >>> +++ b/io_uring/io_uring.c
> > > >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > > >>>   }
> > > >>>   }
> > > >>>
> > > >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > > >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > > >>> + issue_flags |= IO_URING_F_NONBLOCK;
> > > >>> +
> > > >>
> > > >> I think this comment deserves to be more descriptive. Normally we
> > > >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > > >
> > > > Yeah, we don't do that until commit 2bc057692599 ("block: don't make
> > > > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the responsibility/risk
> > > > up to io_uring.
> > > >
> > > >> is the issuer and not necessarily the poller of it. That generally 
> > > >> falls
> > > >> upon the original issuer to poll these requests.
> > > >>
> > > >> I think this should be a separate commit, coming before the main fix
> > > >> which is below.
> > > >
> > > > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > > > approach in V2 doesn't need this change.
> > > >
> > > >>
> > > >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > > >>>   finish_wait(&tctx->wait, &wait);
> > > >>>   } while (1);
> > > >>>
> > > >>> + /*
> > > >>> +  * Reap events from each ctx, otherwise these requests may take
> > > >>> +  * resources and prevent other contexts from being moved on.
> > > >>> +  */
> > > >>> + xa_for_each(&tctx->xa, index, node)
> > > >>> + io_iopoll_try_reap_events(node->ctx);
> > > >>
> > > >> The main issue here is that if someone isn't polling for them, then we
> > > >
> > > > That is actually what this patch is addressing, :-)
> > >
> > > Right, that part is obvious :)
> > >
> > > >> get to wait for a timeout before they complete. This can delay exit, for
> > > >> example, as we're now just waiting 30 seconds (or whatever the timeout
> > > >> is on the underlying device) for them to get timed out before exit can
> > > >> finish.
> > > >
> > > > For the issue on null_blk, the device timeout handler provides forward
> > > > progress: requests are released, so new IO can be handled.
> > > >
> > > > However, not all devices support timeout, such as virtio device.
> > >
> > > That's a bug in the driver, you cannot sanely support polled IO and not
> > > be able to deal with timeouts. Someone HAS to reap the requests and
> > > there are only two things that can do that - the application doing the
> > > polled IO, or if that doesn't happen, a timeout.
> >
> > OK, then the device driver timeout handler has a new responsibility of
> > covering userspace accidents, :-)

Sorry, I don't have enough context so this is probably a silly question:

When an application doesn't reap a polled request, why doesn't the block
layer take care of this in a generic way? I don't see anything
driver-specific about this.

Driver-specific behavior would be sending an abort/cancel upon timeout.
virtio-blk cannot do that because there is no such command in the device
specification at the moment. So simply waiting for the polled request to
complete is the only thing that can be done (aside from resetting the
device), and it's generic behavior.

Thanks,
Stefan

> >
> > We may document this requirement for drivers.
> >
> > So far the only one should be virtio-blk, and the two virtio storage
> > drivers have never implemented a timeout handler.
> >
> 
> Adding Stefan for more comments.
> 
> Thanks
> 



Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

2023-09-15 Thread Jason Wang
On Fri, Sep 8, 2023 at 11:25 PM Ming Lei  wrote:
>
> On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> > On 9/8/23 8:34 AM, Ming Lei wrote:
> > > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> > >> On 9/8/23 3:30 AM, Ming Lei wrote:
> > >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> > >>> index ad636954abae..95a3d31a1ef1 100644
> > >>> --- a/io_uring/io_uring.c
> > >>> +++ b/io_uring/io_uring.c
> > >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> > >>>   }
> > >>>   }
> > >>>
> > >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> > >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> > >>> + issue_flags |= IO_URING_F_NONBLOCK;
> > >>> +
> > >>
> > >> I think this comment deserves to be more descriptive. Normally we
> > >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > >
> > > Yeah, we don't do that until commit 2bc057692599 ("block: don't make
> > > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the responsibility/risk
> > > up to io_uring.
> > >
> > >> is the issuer and not necessarily the poller of it. That generally falls
> > >> upon the original issuer to poll these requests.
> > >>
> > >> I think this should be a separate commit, coming before the main fix
> > >> which is below.
> > >
> > > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > > approach in V2 doesn't need this change.
> > >
> > >>
> > >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> > >>>   finish_wait(&tctx->wait, &wait);
> > >>>   } while (1);
> > >>>
> > >>> + /*
> > >>> +  * Reap events from each ctx, otherwise these requests may take
> > >>> +  * resources and prevent other contexts from being moved on.
> > >>> +  */
> > >>> + xa_for_each(&tctx->xa, index, node)
> > >>> + io_iopoll_try_reap_events(node->ctx);
> > >>
> > >> The main issue here is that if someone isn't polling for them, then we
> > >
> > > That is actually what this patch is addressing, :-)
> >
> > Right, that part is obvious :)
> >
> > >> get to wait for a timeout before they complete. This can delay exit, for
> > >> example, as we're now just waiting 30 seconds (or whatever the timeout
> > >> is on the underlying device) for them to get timed out before exit can
> > >> finish.
> > >
> > > For the issue on null_blk, the device timeout handler provides forward
> > > progress: requests are released, so new IO can be handled.
> > >
> > > However, not all devices support timeout, such as virtio device.
> >
> > That's a bug in the driver, you cannot sanely support polled IO and not
> > be able to deal with timeouts. Someone HAS to reap the requests and
> > there are only two things that can do that - the application doing the
> > polled IO, or if that doesn't happen, a timeout.
>
> OK, then the device driver timeout handler has a new responsibility of
> covering userspace accidents, :-)
>
> We may document this requirement for drivers.
>
> So far the only one should be virtio-blk, and the two virtio storage
> drivers have never implemented a timeout handler.
>

Adding Stefan for more comments.

Thanks


Re: [PATCH V3] io_uring: fix IO hang in io_wq_put_and_exit from do_exit()

2023-09-08 Thread Ming Lei
On Fri, Sep 08, 2023 at 08:44:45AM -0600, Jens Axboe wrote:
> On 9/8/23 8:34 AM, Ming Lei wrote:
> > On Fri, Sep 08, 2023 at 07:49:53AM -0600, Jens Axboe wrote:
> >> On 9/8/23 3:30 AM, Ming Lei wrote:
> >>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> >>> index ad636954abae..95a3d31a1ef1 100644
> >>> --- a/io_uring/io_uring.c
> >>> +++ b/io_uring/io_uring.c
> >>> @@ -1930,6 +1930,10 @@ void io_wq_submit_work(struct io_wq_work *work)
> >>>   }
> >>>   }
> >>>  
> >>> + /* It is fragile to block POLLED IO, so switch to NON_BLOCK */
> >>> + if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
> >>> + issue_flags |= IO_URING_F_NONBLOCK;
> >>> +
> >>
> >> I think this comment deserves to be more descriptive. Normally we
> >> absolutely cannot block for polled IO, it's only OK here because io-wq
> > 
> > Yeah, we don't do that until commit 2bc057692599 ("block: don't make
> > REQ_POLLED imply REQ_NOWAIT"), which actually pushes the responsibility/risk
> > up to io_uring.
> > 
> >> is the issuer and not necessarily the poller of it. That generally falls
> >> upon the original issuer to poll these requests.
> >>
> >> I think this should be a separate commit, coming before the main fix
> >> which is below.
> > 
> > Looks fine, actually IO_URING_F_NONBLOCK change isn't a must, and the
> > approach in V2 doesn't need this change.
> > 
> >>
> >>> @@ -3363,6 +3367,12 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
> >>>   finish_wait(&tctx->wait, &wait);
> >>>   } while (1);
> >>>  
> >>> + /*
> >>> +  * Reap events from each ctx, otherwise these requests may take
> >>> +  * resources and prevent other contexts from being moved on.
> >>> +  */
> >>> + xa_for_each(&tctx->xa, index, node)
> >>> + io_iopoll_try_reap_events(node->ctx);
> >>
> >> The main issue here is that if someone isn't polling for them, then we
> > 
> > That is actually what this patch is addressing, :-)
> 
> Right, that part is obvious :)
> 
> >> get to wait for a timeout before they complete. This can delay exit, for
> >> example, as we're now just waiting 30 seconds (or whatever the timeout
> >> is on the underlying device) for them to get timed out before exit can
> >> finish.
> > 
> > For the issue on null_blk, the device timeout handler provides forward
> > progress: requests are released, so new IO can be handled.
> > 
> > However, not all devices support timeout, such as virtio device.
> 
> That's a bug in the driver, you cannot sanely support polled IO and not
> be able to deal with timeouts. Someone HAS to reap the requests and
> there are only two things that can do that - the application doing the
> polled IO, or if that doesn't happen, a timeout.

OK, then the device driver timeout handler has a new responsibility of
covering userspace accidents, :-)

We may document this requirement for drivers.

So far the only one should be virtio-blk, and the two virtio storage
drivers have never implemented a timeout handler.
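
As an aside, the "application reaps its own polled IO" case described above
looks roughly like this from userspace. This is only a hedged sketch using
liburing with IORING_SETUP_IOPOLL; the device path, sizes and minimal error
handling are illustrative, not taken from any test in this thread:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd, ret;

	/* IOPOLL needs O_DIRECT and a device/file that supports it. */
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;

	/* Completions are found by polling, not by interrupts. */
	ret = io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);
	if (ret < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, 4096, 0);
	io_uring_submit(&ring);

	/*
	 * The application itself reaps the completion: with IOPOLL this call
	 * busy-polls the device's completion queue.  If the task exits before
	 * doing this, someone else (io_uring at exit, or a driver timeout)
	 * has to reap the request instead, which is what this thread is about.
	 */
	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		printf("read returned %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	close(fd);
	free(buf);
	return 0;
}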

> 
> > Here we just call io_iopoll_try_reap_events() to poll submitted IOs and
> > release their resources, so there is no need to rely on the device timeout
> > handler any more, and the extra exit delay can be avoided.
> > 
> > But io_iopoll_try_reap_events() may not be enough: the io_wq associated
> > with the current context can grab the released resources immediately and
> > submit new IOs successfully, but then who can poll these newly submitted
> > IOs? All device resources can end up held by this (freed) io_wq for nothing.
> > 
> > I guess we may have to take the approach in patch V2 of only canceling
> > polled IO to avoid the thread_exit regression, or are there other ideas?
> 
> Ideally the behavior seems like it should be that if a task goes away,
> any pending polled IO it has should be reaped. With the above notion
> that a driver supporting poll absolutely must be able to deal with
> timeouts, it's not a strict requirement as we know that requests will be
> reaped.

Then it looks like the io_uring fix is less important, and I will see if an
easy fix can be figured out; one way is to reap events when exiting both the
current task and the associated io_wq.

Thanks,
Ming
