[PATCH AUTOSEL for 4.15 156/189] arm64: spinlock: Fix theoretical trylock() A-B-A with LSE atomics

2018-04-08 Thread Sasha Levin
From: Will Deacon 

[ Upstream commit 202fb4ef81e3ec765c23bd1e6746a5c25b797d0e ]

If the spinlock "next" ticket wraps around between the initial LDR
and the cmpxchg in the LSE version of spin_trylock, then we can erroneously
think that we have successfully acquired the lock because we only check
whether the next ticket returned by the cmpxchg is equal to the owner ticket
in our updated lock word.
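
For illustration, here is a minimal plain-C sketch of the pre-patch success
check. This is not the kernel code: the 16-bit owner/next split and
TICKET_SHIFT = 16 follow the arm64 ticket-lock layout, but the helper name
trylock_succeeded_old() and the simplified calling convention are invented
for the sketch.

	/* Simplified model of the pre-patch check, not the kernel source. */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define TICKET_SHIFT	16	/* "next" lives in the high 16 bits */

	/*
	 * "expected" is the lock word read by the initial LDR (owner == next
	 * there, or we would not attempt the CAS); "cas_old" is the value
	 * CASA returned from memory.  The old check
	 *
	 *	and	%w1, %w1, #0xffff
	 *	eor	%w1, %w1, %w0, lsr #16
	 *
	 * only compared the owner half of the word we tried to store against
	 * the next half of the word CASA returned.
	 */
	static bool trylock_succeeded_old(uint32_t expected, uint32_t cas_old)
	{
		uint32_t newval = expected + (1u << TICKET_SHIFT);

		return (newval & 0xffff) == (cas_old >> TICKET_SHIFT);
	}

	int main(void)
	{
		/* A-B-A case: next has wrapped back to 5, owner is now 3. */
		printf("old check claims the lock was taken: %d\n",
		       trylock_succeeded_old(0x00050005, 0x00050003));
		return 0;
	}

If 65536 acquisitions slip in between the LDR and the CASA, the next ticket
can wrap back to the value we originally observed: with expected =
0x00050005 and a current lock word of 0x00050003, the CAS fails, yet the
check above prints 1 and the caller believes it holds the lock.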

This patch fixes the issue by performing a full 32-bit check of the lock
word when trying to determine whether or not the CASA instruction updated
memory.
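
As a hedged sketch under the same simplified model (again not the kernel
source; trylock_succeeded_new() is an invented name), the fixed check
subtracts (1 << TICKET_SHIFT) from the value passed to CASA to recover the
expected lock word and compares all 32 bits against CASA's return value:

	/* Simplified model of the post-patch check, not the kernel source. */
	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define TICKET_SHIFT	16

	/*
	 * New sequence:
	 *	sub	%w1, %w1, %3	// recover the expected lock word
	 *	eor	%w1, %w1, %w0	// full 32-bit compare with CASA's result
	 * Success is reported only when CASA really updated memory.
	 */
	static bool trylock_succeeded_new(uint32_t expected, uint32_t cas_old)
	{
		uint32_t newval = expected + (1u << TICKET_SHIFT);

		return (newval - (1u << TICKET_SHIFT)) == cas_old;
	}

	int main(void)
	{
		/* The wrapped-next case above is now rejected ... */
		assert(!trylock_succeeded_new(0x00050005, 0x00050003));
		/* ... while a genuine CAS success is still accepted. */
		assert(trylock_succeeded_new(0x00050005, 0x00050005));
		return 0;
	}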

Reported-by: Catalin Marinas 
Signed-off-by: Will Deacon 
Signed-off-by: Catalin Marinas 
Signed-off-by: Sasha Levin 
---
 arch/arm64/include/asm/spinlock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index fdb827c7832f..ebdae15d665d 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -87,8 +87,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	"	cbnz	%w1, 1f\n"
 	"	add	%w1, %w0, %3\n"
 	"	casa	%w0, %w1, %2\n"
-	"	and	%w1, %w1, #0xffff\n"
-	"	eor	%w1, %w1, %w0, lsr #16\n"
+	"	sub	%w1, %w1, %3\n"
+	"	eor	%w1, %w1, %w0\n"
 	"1:")
 	: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
 	: "I" (1 << TICKET_SHIFT)
-- 
2.15.1

