From: Peter Zijlstra <pet...@infradead.org>

commit bebe5b514345f09be2c15e414d076b02ecb9cce8 upstream.

The problem with returning -EAGAIN when the waiter state mismatches is that
it becomes very hard to prove a bounded execution time on the
operation. And seeing that this is an RT operation, this is somewhat
important.

While in practice, given the previous patch, it will be very unlikely to
ever really take more than one or two rounds, proving so becomes rather
hard.

However, now that modifying the wait_list is done while holding both
hb->lock and wait_lock, the scenario can be avoided entirely by acquiring
wait_lock while still holding hb->lock, doing a hand-over without leaving a
hole.
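
As a minimal userspace sketch of that hand-over pattern (illustrative only,
not the actual futex/rt_mutex code; the names "outer" and "inner" merely
stand in for hb->lock and pi_mutex.wait_lock), using POSIX mutexes:

/*
 * Lock hand-over sketch: take the inner lock before dropping the outer
 * one, so there is no window in which neither lock is held.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t outer = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t inner = PTHREAD_MUTEX_INITIALIZER;
static int shared_state;

static void update_state(int val)
{
	/* Writers modify shared_state while holding both locks. */
	pthread_mutex_lock(&outer);
	pthread_mutex_lock(&inner);
	shared_state = val;
	pthread_mutex_unlock(&inner);
	pthread_mutex_unlock(&outer);
}

static int read_state_handover(void)
{
	int val;

	pthread_mutex_lock(&outer);
	/*
	 * Hand-over: acquire the inner lock before releasing the outer
	 * one, so a writer (who needs both locks) can never slip in
	 * between the two steps and invalidate what we observed.
	 */
	pthread_mutex_lock(&inner);
	pthread_mutex_unlock(&outer);

	val = shared_state;
	pthread_mutex_unlock(&inner);
	return val;
}

int main(void)
{
	update_state(42);
	printf("observed %d\n", read_state_handover());
	return 0;
}

Both paths take the locks in the same outer-then-inner order, so the
hand-over cannot deadlock, and the reader never reaches a point where it
holds neither lock.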

Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: juri.le...@arm.com
Cc: bige...@linutronix.de
Cc: xlp...@redhat.com
Cc: rost...@goodmis.org
Cc: mathieu.desnoy...@efficios.com
Cc: jdesfos...@efficios.com
Cc: dvh...@infradead.org
Cc: bris...@redhat.com
Link: http://lkml.kernel.org/r/20170322104152.112378...@infradead.org
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Tested-by: Henrik Austad <haus...@cisco.com>
---
 kernel/futex.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 1cc40dd..14d270e 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1395,15 +1395,10 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
        WAKE_Q(wake_q);
        int ret = 0;
 
-       raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
        new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
-       if (!new_owner) {
+       if (WARN_ON_ONCE(!new_owner)) {
                /*
-                * Since we held neither hb->lock nor wait_lock when coming
-                * into this function, we could have raced with futex_lock_pi()
-                * such that we might observe @this futex_q waiter, but the
-                * rt_mutex's wait_list can be empty (either still, or again,
-                * depending on which side we land).
+                * As per the comment in futex_unlock_pi() this should not happen.
                 *
                 * When this happens, give up our locks and try again, giving
                 * the futex_lock_pi() instance time to complete, either by
@@ -2807,15 +2802,18 @@ retry:
                if (pi_state->owner != current)
                        goto out_unlock;
 
+               get_pi_state(pi_state);
                /*
-                * Grab a reference on the pi_state and drop hb->lock.
+                * Since modifying the wait_list is done while holding both
+                * hb->lock and wait_lock, holding either is sufficient to
+                * observe it.
                 *
-                * The reference ensures pi_state lives, dropping the hb->lock
-                * is tricky.. wake_futex_pi() will take rt_mutex::wait_lock to
-                * close the races against futex_lock_pi(), but in case of
-                * _any_ fail we'll abort and retry the whole deal.
+                * By taking wait_lock while still holding hb->lock, we ensure
+                * there is no point where we hold neither; and therefore
+                * wake_futex_pi() must observe a state consistent with what we
+                * observed.
                 */
-               get_pi_state(pi_state);
+               raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
                spin_unlock(&hb->lock);
 
                ret = wake_futex_pi(uaddr, uval, pi_state);
-- 
2.7.4
