From: Steven Rostedt <[email protected]>

The raw_spin_unlock() is not a full memory barrier. It keeps accesses
made inside the critical section from leaking out past it, but it does
not prevent accesses that follow the unlock from leaking into the
critical section. That is:

        p = 1;

        raw_spin_lock();
        [...]
        raw_spin_unlock();

        y = x

can turn into:

        raw_spin_lock();

        load x

        store p = 1

        raw_spin_unlock();

        y = x

This means that the condition check in __swait_event() (and friends)
can be performed before the store to h->list is visible.

        raw_spin_lock();

        load condition;

        store h->list;

        raw_spin_unlock();

And the other CPU can see h->list as empty while this CPU sees the
condition as not set, possibly missing the wake up.
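
To make the race concrete, here is a sketch of the interleaving. The
list_empty() check on the waker side stands in for whatever "are there
any waiters?" test the wakeup path performs; the code below is
illustrative, not the literal wait-simple source:

        CPU 0 (waiter):

        raw_spin_lock(&h->lock);
        list_add(&w->node, &h->list);   /* store to h->list */
        raw_spin_unlock(&h->lock);
        if (!condition)                 /* load of condition; can be
                                           satisfied before the list_add
                                           store is visible to CPU 1 */
                schedule();

        CPU 1 (waker):

        condition = true;               /* store to condition */
        if (list_empty(&h->list))       /* load of h->list; can be
                                           satisfied before the store to
                                           condition is visible to CPU 0 */
                return;                 /* sees no waiter, skips the wake up */

Both loads can observe the old values at the same time (the classic
store-buffering pattern): CPU 0 sees the condition as false and goes
to sleep, and CPU 1 sees an empty list and never issues the wake up.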

To prevent this from happening, add an smp_mb() after adding the
waiter to h->list.
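
With the barrier in place, the enqueue path on the waiter side
effectively becomes (a sketch of the resulting ordering, not the
literal code):

        raw_spin_lock(&head->lock);
        list_add(&w->node, &head->list);        /* make the waiter visible */
        smp_mb();                               /* order the list_add store
                                                   before any later load */
        raw_spin_unlock(&head->lock);
        [...]
        if (condition)                          /* can no longer be satisfied
                                                   before the list_add store */

so the condition load can no longer pass the store to head->list.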

Reviewed-by: Paul E. McKenney <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
---
 kernel/wait-simple.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/wait-simple.c b/kernel/wait-simple.c
index 9725a11..2c85626 100644
--- a/kernel/wait-simple.c
+++ b/kernel/wait-simple.c
@@ -16,6 +16,8 @@
 static inline void __swait_enqueue(struct swait_head *head, struct swaiter *w)
 {
        list_add(&w->node, &head->list);
+       /* We can't let the condition leak before the setting of head */
+       smp_mb();
 }
 
 /* Removes w from head->list. Must be called with head->lock locked. */
-- 
1.7.10.4

