On Tue, Aug 07, 2018 at 09:04:41AM -0700, Andrew Morton wrote:
> > Because it is faster obviously.  I can update the comment.
> 
> I meant the comment could explain why it's a trylock instead of a
> spin_lock().

We could do something like the patch below.

Al, do you want me to resend or can you just fold it in?

diff --git a/fs/aio.c b/fs/aio.c
index 5943098a87c6..84df2c2bf80b 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1684,7 +1684,8 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 
                /*
                 * Try to complete the iocb inline if we can to avoid a costly
-                * context switch.
+                * context switch.  As the waitqueue lock nests inside the ctx
+                * lock we can only do that if we can get it without waiting.
                 */
                if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
                        list_del(&iocb->ki_list);

