The following situation leads to deadlock:

[task 1]                          [task 2]                         [task 3]
kill_fasync()                     mm_update_next_owner()           
 spin_lock_irqsave(&fa->fa_lock)   read_lock(&tasklist_lock)        
  send_sigio()                    <IRQ>                             ...
   read_lock(&fown->lock)         kill_fasync()                     ...
    read_lock(&tasklist_lock)      spin_lock_irqsave(&fa->fa_lock)  ...

Task 1 can't acquire tasklist_lock for reading, because task 3 has
already queued up for it as a writer, and a queued rwlock does not
admit new readers while a writer is waiting. Task 2 already holds
tasklist_lock for reading, but the kill_fasync() running in its
interrupt handler can't take fa->fa_lock, which task 1 holds with
interrupts disabled. The three tasks wait on each other in a cycle.

The patch makes queued_read_lock_slowpath() give task 1 the same
priority as an interrupt handler, so it takes the lock despite
task 3 waiting for it, and this prevents the deadlock. There seems
to be no better way to detect such situations; in general it is also
bad to make readers wait that long with interrupts disabled, since
read_lock may nest with other locks and stall the system.

This should also go to mainline.

Signed-off-by: Kirill Tkhai <>
 kernel/qrwlock.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/qrwlock.c b/kernel/qrwlock.c
index a79625ae5444..fb933f7a4631 100644
--- a/kernel/qrwlock.c
+++ b/kernel/qrwlock.c
@@ -70,7 +70,7 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
 	/*
 	 * Readers come here when they cannot get the lock without waiting
 	 */
-	if (unlikely(in_interrupt())) {
+	if (unlikely(irqs_disabled())) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
 		 * if the writer is just waiting (not holding the lock yet).
