Jens Axboe <ax...@kernel.dk> writes:

> We currently merge async work items if we see a strict sequential hit.
> This helps avoid unnecessary workqueue switches when we don't need
> them. We can extend this merging to cover cases where it's not a strict
> sequential hit, but the IO still fits within the same page. If an
> application is doing multiple requests within the same page, we don't
> want separate workers waiting on the same page to complete IO. It's much
> faster to let the first worker bring in the page, then operate on that
> page from the same worker to complete the next request(s).
>
> Signed-off-by: Jens Axboe <ax...@kernel.dk>

Reviewed-by: Jeff Moyer <jmo...@redhat.com>

Minor nit below.

> @@ -1994,7 +2014,7 @@ static void io_sq_wq_submit_work(struct work_struct *work)
>   */
>  static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
>  {
> -     bool ret = false;
> +     bool ret;
>  
>       if (!list)
>               return false;

This hunk looks unrelated.  Also, I think you could actually change that
to be initialized to true, and get rid of the assignment later:

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 03fcd974fd1d..a94c8584c480 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1994,7 +1994,7 @@ static void io_sq_wq_submit_work(struct work_struct *work)
  */
 static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
 {
-       bool ret = false;
+       bool ret = true;
 
        if (!list)
                return false;
@@ -2003,7 +2003,6 @@ static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req)
        if (!atomic_read(&list->cnt))
                return false;
 
-       ret = true;
        spin_lock(&list->lock);
        list_add_tail(&req->list, &list->list);
        /*

Cheers,
Jeff
