From a memory ordering point of view, spin_unlock_wait() provides
the same guarantees as a spin_lock(); spin_unlock() sequence.

Therefore the smp_mb() after spin_lock() is not necessary:
spin_unlock_wait() itself must provide the required memory ordering.
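
For illustration, a sketch of the two racing paths (paraphrased from
ipc/sem.c, not verbatim; CPU1 is the complex-op side, which sets
complex_mode and then calls spin_unlock_wait()):

	CPU0 (simple op, fast path)        CPU1 (complex op)
	---------------------------        -----------------
	spin_lock(&sem->lock);             sma->complex_mode = true;
	                                   smp_mb(); /* store + barrier */
	r0 = smp_load_acquire(             spin_unlock_wait(&sem->lock);
	         &sma->complex_mode);

The protocol requires that either CPU0 observes complex_mode == true
(and falls back to the slow path), or CPU1's spin_unlock_wait()
observes sem->lock held and waits for it. If spin_unlock_wait()
orders like spin_lock(); spin_unlock(), this pairing already holds
without an extra smp_mb() on CPU0.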

Signed-off-by: Manfred Spraul <[email protected]>
Cc: Will Deacon <[email protected]>
---
 ipc/sem.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 6586e0a..a5da82c 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -355,14 +355,6 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
                 */
                spin_lock(&sem->lock);
 
-               /*
-                * See 51d7d5205d33
-                * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
-                * A full barrier is required: the write of sem->lock
-                * must be visible before the read is executed
-                */
-               smp_mb();
-
                if (!smp_load_acquire(&sma->complex_mode)) {
                        /* fast path successful! */
                        return sops->sem_num;
-- 
2.7.4
