On Mon,  6 Aug 2018 10:30:58 +0200 Christoph Hellwig <[email protected]> wrote:

> If we get a keyed wakeup for an aio poll waitqueue and can acquire the
> ctx_lock without spinning, we can just complete the iocb straight from
> the wakeup callback and avoid a context switch.

Why do we try to avoid spinning on the lock?

> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -1672,13 +1672,26 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
>               void *key)
>  {
>       struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
> +     struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
>       __poll_t mask = key_to_poll(key);
>  
>       req->woken = true;
>  
>       /* for instances that support it check for an event match first: */
> -     if (mask && !(mask & req->events))
> -             return 0;
> +     if (mask) {
> +             if (!(mask & req->events))
> +                     return 0;
> +
> +             /* try to complete the iocb inline if we can: */

ie, this comment explains "what" but not "why".

(There's a typo in Subject:, btw)

> +             if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
> +                     list_del(&iocb->ki_list);
> +                     spin_unlock(&iocb->ki_ctx->ctx_lock);
> +
> +                     list_del_init(&req->wait.entry);
> +                     aio_poll_complete(iocb, mask);
> +                     return 1;
> +             }
> +     }
>  
>       list_del_init(&req->wait.entry);
>       schedule_work(&req->work);