4.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra <[email protected]>

commit 153fbd1226fb30b8630802aa5047b8af5ef53c9f upstream.

Dmitry (through syzbot) reported being able to trigger the WARN in
get_pi_state() and a use-after-free on:

        raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);

Both are due to this race:

  exit_pi_state_list()                          put_pi_state()

  lock(&curr->pi_lock)
  while() {
        pi_state = list_first_entry(head);
        hb = hash_futex(&pi_state->key);
        unlock(&curr->pi_lock);

                                                dec_and_test(&pi_state->refcount);

        lock(&hb->lock)
        lock(&pi_state->pi_mutex.wait_lock)     // uaf if pi_state free'd
        lock(&curr->pi_lock);

        ....

        unlock(&curr->pi_lock);
        get_pi_state();                         // WARN; refcount==0

The problem is that we take the reference count too late and don't
allow it to be 0. Fix it by using inc_not_zero() and simply retrying
the loop when we fail to get a refcount. In that case put_pi_state()
is responsible for removing the entry from the list.
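
For readers less familiar with the pattern, here is a minimal
userspace sketch of the acquire side using C11 atomics; the names
(struct obj, try_get()) are illustrative only, not kernel APIs.
atomic_inc_not_zero() succeeds only while the refcount is still
non-zero, so a dying object can never be resurrected:

	#include <stdatomic.h>
	#include <stdbool.h>

	struct obj {
		atomic_int refcount;
	};

	/*
	 * Analogue of atomic_inc_not_zero(): take a reference only
	 * while the object is still live. A CAS loop is required
	 * because a plain increment would resurrect a refcount that
	 * already hit zero, i.e. an object that may have been freed.
	 */
	static bool try_get(struct obj *o)
	{
		int old = atomic_load(&o->refcount);

		do {
			if (old == 0)
				return false;	/* lost the race */
		} while (!atomic_compare_exchange_weak(&o->refcount,
						       &old, old + 1));

		return true;
	}

A failed try_get() corresponds to the new atomic_inc_not_zero()
failure path in the patch below: drop the locks so the concurrent
put_pi_state() can finish unlinking the entry, then re-examine the
list head.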

Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: Gratian Crisan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: syzbot <bot+2af19c9e1ffe4d4ee1d16c56ae7580feaee75...@syzkaller.appspotmail.com>
Cc: [email protected]
Fixes: c74aef2d06a9 ("futex: Fix pi_state->owner serialization")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 kernel/futex.c |   23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -901,11 +901,27 @@ void exit_pi_state_list(struct task_stru
         */
        raw_spin_lock_irq(&curr->pi_lock);
        while (!list_empty(head)) {
-
                next = head->next;
                pi_state = list_entry(next, struct futex_pi_state, list);
                key = pi_state->key;
                hb = hash_futex(&key);
+
+               /*
+                * We can race against put_pi_state() removing itself from the
+                * list (a waiter going away). put_pi_state() will first
+                * decrement the reference count and then modify the list, so
+                * its possible to see the list entry but fail this reference
+                * acquire.
+                *
+                * In that case; drop the locks to let put_pi_state() make
+                * progress and retry the loop.
+                */
+               if (!atomic_inc_not_zero(&pi_state->refcount)) {
+                       raw_spin_unlock_irq(&curr->pi_lock);
+                       cpu_relax();
+                       raw_spin_lock_irq(&curr->pi_lock);
+                       continue;
+               }
                raw_spin_unlock_irq(&curr->pi_lock);
 
                spin_lock(&hb->lock);
@@ -916,8 +932,10 @@ void exit_pi_state_list(struct task_stru
                 * task still owns the PI-state:
                 */
                if (head->next != next) {
+                       /* retain curr->pi_lock for the loop invariant */
                        raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
                        spin_unlock(&hb->lock);
+                       put_pi_state(pi_state);
                        continue;
                }
 
@@ -925,9 +943,8 @@ void exit_pi_state_list(struct task_stru
                WARN_ON(list_empty(&pi_state->list));
                list_del_init(&pi_state->list);
                pi_state->owner = NULL;
-               raw_spin_unlock(&curr->pi_lock);
 
-               get_pi_state(pi_state);
+               raw_spin_unlock(&curr->pi_lock);
                raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
                spin_unlock(&hb->lock);
 
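The comment added in the first hunk is the crux: put_pi_state() drops
its reference *before* unlinking the entry, so a concurrent reader can
legitimately find a list entry whose refcount is already zero.
Continuing the userspace sketch from above, the release side looks
roughly like this (list_lock and list_unlink() are assumed,
hypothetical helpers, not kernel APIs):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdlib.h>

	extern pthread_mutex_t list_lock;		/* assumed */
	extern void list_unlink(struct obj *o);		/* assumed */

	static void put_obj(struct obj *o)
	{
		/* Drop the reference first ... */
		if (atomic_fetch_sub(&o->refcount, 1) != 1)
			return;

		/*
		 * ... and only then unlink and free. In this window
		 * another thread can still see o on the list with
		 * refcount == 0 -- exactly the window the patch above
		 * closes with atomic_inc_not_zero() plus a retry.
		 */
		pthread_mutex_lock(&list_lock);
		list_unlink(o);
		pthread_mutex_unlock(&list_lock);
		free(o);
	}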

