If we end up getting woken in poll (due to a signal), then we may need
to punt the poll request to an async worker. When we do that, we look up
the list to queue at, dereferencing req->submit.sqe. However, that is
only set for requests we initially decided to queue async.

This fixes a crash with poll command usage and wakeups that need to punt
to async context.

Fixes: 54a91f3bb9b9 ("io_uring: limit parallelism of buffered writes")
Signed-off-by: Jens Axboe <[email protected]>

---

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b550f685eadd..8c28bd7792b6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -453,16 +453,15 @@ static void __io_commit_cqring(struct io_ring_ctx *ctx)
 static inline void io_queue_async_work(struct io_ring_ctx *ctx,
                                       struct io_kiocb *req)
 {
-       int rw;
+       int rw = 0;
 
-       switch (req->submit.sqe->opcode) {
-       case IORING_OP_WRITEV:
-       case IORING_OP_WRITE_FIXED:
-               rw = !(req->rw.ki_flags & IOCB_DIRECT);
-               break;
-       default:
-               rw = 0;
-               break;
+       if (req->submit.sqe) {
+               switch (req->submit.sqe->opcode) {
+               case IORING_OP_WRITEV:
+               case IORING_OP_WRITE_FIXED:
+                       rw = !(req->rw.ki_flags & IOCB_DIRECT);
+                       break;
+               }
        }
 
        queue_work(ctx->sqo_wq[rw], &req->work);
@@ -1721,6 +1720,7 @@ static int io_poll_add(struct io_kiocb *req, const struct io_uring_sqe *sqe)
        if (!poll->file)
                return -EBADF;
 
+       req->submit.sqe = NULL;
        INIT_WORK(&req->work, io_poll_complete_work);
        events = READ_ONCE(sqe->poll_events);
        poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;

-- 
Jens Axboe