On Mon, Oct 19, 2015 at 08:46:02PM -0700, Davidlohr Bueso wrote:
> On Tue, 20 Oct 2015, Boqun Feng wrote:
>
> >>@@ -93,7 +94,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
> >> 	/*
> >> 	 * smp_mb__before_atomic() in order to guarantee release semantics
> >> 	 */
> >>-	smp_mb__before_atomic_dec();
> >>+	smp_mb__before_atomic();
Hi Waiman,

On Thu, Oct 15, 2015 at 06:51:03PM -0400, Waiman Long wrote:
> This patch replaces the cmpxchg() and xchg() calls in the native
> qspinlock code with the more relaxed _acquire or _release versions of
> those calls to enable other architectures to adopt queued spinlocks
> with less memory barrier performance overhead.
This patch replaces the cmpxchg() and xchg() calls in the native
qspinlock code with the more relaxed _acquire or _release versions of
those calls to enable other architectures to adopt queued spinlocks
with less memory barrier performance overhead.
Signed-off-by: Waiman Long
---
include/asm-gen