Queued spinlocks are not used by DEC Alpha, and furthermore operations
such as READ_ONCE() and release/relaxed RMW atomics are being changed
to imply smp_read_barrier_depends().  This commit therefore removes the
now-redundant smp_read_barrier_depends() from queued_spin_lock_slowpath(),
and adjusts the comments accordingly.
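
As an aside, for anyone unfamiliar with the pattern being relied on,
here is a minimal user-space sketch of such an address dependency.
This is not the kernel code: the nodes[] table, enqueue(), and the
__atomic builtins are illustrative stand-ins for the qnode array,
queued_spin_lock_slowpath(), decode_tail(), and xchg_tail().

	struct node {
		struct node *next;
	};

	static struct node *nodes[4];	/* stand-in for the qnode array */

	static void enqueue(struct node *node, int tail, int *lock_tail)
	{
		int old;

		/*
		 * Store our tail and load the previous one in a single
		 * atomic step, as xchg_tail() does.  The RELEASE pairs
		 * with the next waiter's dependency-headed load of the
		 * same word.
		 */
		old = __atomic_exchange_n(lock_tail, tail, __ATOMIC_RELEASE);

		if (old) {
			/*
			 * The loaded value feeds the pointer computation,
			 * so the store below carries an address dependency
			 * on the load above.  Now that READ_ONCE() and the
			 * RMW atomics imply smp_read_barrier_depends(),
			 * that dependency alone orders the store after the
			 * load, even on Alpha.
			 */
			struct node *prev = nodes[old];	/* ~decode_tail() */

			prev->next = node;		/* ~WRITE_ONCE() */
		}
	}

On everything other than Alpha, smp_read_barrier_depends() was already
a no-op, so its removal changes no generated code there.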

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
 kernel/locking/qspinlock.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 294294c71ba4..38ece035039e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -170,7 +170,7 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  * @tail : The new queue tail code word
  * Return: The previous queue tail code word
  *
- * xchg(lock, tail)
+ * xchg(lock, tail), which heads an address dependency
  *
  * p,*,* -> n,*,* ; prev = xchg(lock, node)
  */
@@ -409,13 +409,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
        if (old & _Q_TAIL_MASK) {
                prev = decode_tail(old);
                /*
-                * The above xchg_tail() is also a load of @lock which generates,
-                * through decode_tail(), a pointer.
-                *
-                * The address dependency matches the RELEASE of xchg_tail()
-                * such that the access to @prev must happen after.
+                * The above xchg_tail() is also a load of @lock which
+                * generates, through decode_tail(), a pointer.  The address
+                * dependency matches the RELEASE of xchg_tail() such that
+                * the subsequent access to @prev happens after.
                 */
-               smp_read_barrier_depends();
 
                WRITE_ONCE(prev->next, node);
 
-- 
2.5.2
