This patch adds the necessary Xen-specific code to allow Xen to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 149 +--
kernel/Kconfig.locks | 2
its cpu number in whichever node is pointed to by the tail part
of the lock word. Secondly, pv_link_and_wait_node() will propagate the
existing head from the old to the new tail node.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/paravirt.h | 22 ++
arch/x86/include/asm
ded to make the qspinlock achieve performance
parity with ticket spinlock at light load.
All this is horribly broken on Alpha pre EV56 (and any other arch that
cannot do single-copy atomic byte stores).
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-gene
optimization which will make the queue spinlock code perform
better than the generic implementation.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/qspinlock.h | 25 +
arch/x86/include/asm
locked bit
into a new clear_pending_set_locked() function.
This patch also simplifies the trylock operation before queuing by
calling queue_spin_trylock() directly.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock_types.h | 2 +
kernel/locking
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c | 2 +-
arch/x86/kernel
kernel        records/s  Real Time  Sys Time  Usr Time
------        ---------  ---------  --------  --------
ticketlock       2075      10.00     216.35     3.49
qspinlock        3023      10.00     198.20     4.80
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
kernel/locking/qsp
. For the time being, unlock call site patching will
not be part of this patch series.
Peter Zijlstra (3):
qspinlock: Add pending bit
qspinlock: Optimize for smaller NR_CPUS
qspinlock: Revert to test-and-set on hypervisors
Waiman Long (8):
qspinlock: A simple generic 4-byte queue spinloc
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h |  12 +++-
kernel/locking/qspinlock.c            | 119 +++--
2 files changed, 107 insertions(+), 24 deletions(-)
diff --git a/include/asm-generic/qspinlock_types.h
b/include/asm-generic/qspinlo
From: Peter Zijlstra
When we detect a hypervisor (!paravirt, see qspinlock paravirt support
patches), revert to a simple test-and-set lock to avoid the horrors
of queue preemption.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
arch/x86/include/asm/qspinlock.h | 14
lock is acquired, the queue node can be released to
be used later.
Signed-off-by: Waiman Long
Signed-off-by: Peter Zijlstra
---
include/asm-generic/qspinlock.h | 132 +
include/asm-generic/qspinlock_types.h | 58 +
kernel/Kconfig.locks
On 10/27/2014 02:02 PM, Konrad Rzeszutek Wilk wrote:
On Mon, Oct 27, 2014 at 01:38:20PM -0400, Waiman Long wrote:
My concern is that spin_unlock() can be called in many places, including
loadable kernel modules. Is the paravirt_patch_ident_32() function able to
patch all of them in reasonable