From: Greg Kroah-Hartman <[email protected]>

From: Thomas Schoebel-Theuer <[email protected]>

This patch and the accompanying problem analysis are specific to 4.4 LTS, due
to incomplete backporting of other fixes. Later LTS series have different
backports.

The following lock/unlock pairing is obviously incorrect:

static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
             struct futex_hash_bucket *hb)
{
[...]
        raw_spin_lock(&pi_state->pi_mutex.wait_lock);
[...]
        raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
[...]
}
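
The wait_lock is taken with the plain variant but released with the _irq
variant, which unconditionally re-enables interrupts; the two sides must
match. A minimal sketch of the matched pairing (this is what the quickfix
below adopts, not a quote from the tree):

        /* disable interrupts on this CPU, then take the lock */
        raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
        /* ... critical section runs with interrupts off ... */
        /* drop the lock, then unconditionally re-enable interrupts */
        raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);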

The 4.4-specific fix should probably go in the direction of upstream commit
b4abf91047c ("rtmutex: Make wait_lock irq safe"), making everything irq-safe.
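
In places where the caller's interrupt state is not known, that scheme
relies on the _irqsave/_irqrestore variants. A sketch of the pattern (with
'lock' as a stand-in rt_mutex pointer, not actual code from that commit):

        unsigned long flags;

        /* save current interrupt state, disable interrupts, take the lock */
        raw_spin_lock_irqsave(&lock->wait_lock, flags);
        /* ... critical section ... */
        /* drop the lock, restore the saved interrupt state */
        raw_spin_unlock_irqrestore(&lock->wait_lock, flags);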

Backporting b4abf91047c to 4.4 LTS could thus be another good idea.

However, this might involve some more 4.4-specific work and
require thorough testing:

> git log --oneline v4.4..b4abf91047c -- kernel/futex.c kernel/locking/rtmutex.c | wc -l
10

So this patch is just the obvious quick fix for now.

Hint: the lock ordering is documented in 4.9.y and later; comparable
documentation is missing in 4.4.y. Could somebody please either backport
that documentation as well, or write a new description if there are
differences I cannot easily see at the moment? Without reliable
documentation, inspecting the locking for correctness may become a pain.
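
For reference, a sketch of that ordering as documented in the later series
(reproduced from memory, outermost lock first; please verify against 4.9.y):

        /*
         * Lock order:
         *
         *   hb->lock
         *     pi_mutex->wait_lock
         *       p->pi_lock
         */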
 
Signed-off-by: Thomas Schoebel-Theuer <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Lee Jones <[email protected]>
Fixes: 394fc4981426 ("futex: Rework inconsistent rt_mutex/futex_q state")
Fixes: 6510e4a2d04f ("futex,rt_mutex: Provide futex specific rt_mutex API")
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 kernel/futex.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1408,7 +1408,7 @@ static int wake_futex_pi(u32 __user *uad
        if (pi_state->owner != current)
                return -EINVAL;
 
-       raw_spin_lock(&pi_state->pi_mutex.wait_lock);
+       raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
        new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
 
        /*

