On 09/14/2015 08:06 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 03:27:44PM -0700, Davidlohr Bueso wrote:
> > On Fri, 11 Sep 2015, Waiman Long wrote:
> > > @@ -46,7 +46,7 @@ static inline bool virt_queued_spin_lock(struct qspinlock *lock)
> > >  	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> > >  		return false;
> > >
> > > -	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
> > > +	while (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) != 0)
This patch replaces the cmpxchg() and xchg() calls in the native
qspinlock code with more relaxed versions of those calls to enable
other architectures to adopt queued spinlocks with less performance
overhead.

Signed-off-by: Waiman Long
---
 arch/x86/include/asm/qspinlock.h | 2 +-