There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore replaces the spin_unlock_wait() call in
do_task_dead() with spin_lock() followed immediately by spin_unlock().
This should be safe from a performance perspective because the lock is
this task's ->pi_lock, and do_task_dead() is invoked only once per
exiting task.
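
In generic form, the conversion looks as follows (an illustrative
sketch only; "some_lock" is a placeholder name, and the real change
is the do_task_dead() hunk below):

	/* Before: spin until the lock is observed to be unheld; what
	 * ordering this provides against the holder's critical
	 * section is the disputed part. */
	smp_mb();
	raw_spin_unlock_wait(&some_lock);

	/* After: a full acquire/release pair, which is guaranteed to
	 * wait for any current critical section and to order against
	 * it. */
	raw_spin_lock(&some_lock);
	raw_spin_unlock(&some_lock);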

Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Alan Stern <st...@rowland.harvard.edu>
Cc: Andrea Parri <parri.and...@gmail.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
[ paulmck: Replace leading smp_mb() with smp_mb__before_spinlock(),
  per Arnd Bergmann, who noted the barrier's odd placement. ]
---
 kernel/sched/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e91138fcde86..48a8760fedf4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3460,8 +3460,9 @@ void __noreturn do_task_dead(void)
         * To avoid it, we have to wait for releasing tsk->pi_lock which
         * is held by try_to_wake_up()
         */
-       smp_mb();
-       raw_spin_unlock_wait(&current->pi_lock);
+       smp_mb__before_spinlock();
+       raw_spin_lock_irq(&current->pi_lock);
+       raw_spin_unlock_irq(&current->pi_lock);
 
        /* Causes final put_task_struct in finish_task_switch(): */
        __set_current_state(TASK_DEAD);
-- 
2.5.2
