Commit-ID:  41946c86876ea6a3e8857182356e6d76dbfe7fb6
Gitweb:     http://git.kernel.org/tip/41946c86876ea6a3e8857182356e6d76dbfe7fb6
Author:     Pan Xinhui <xinhui....@linux.vnet.ibm.com>
AuthorDate: Wed, 2 Nov 2016 05:08:31 -0400
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 22 Nov 2016 12:48:06 +0100

locking/core, powerpc: Implement vcpu_is_preempted(cpu)

Optimize spinlock and mutex busy-loops by providing a vcpu_is_preempted(cpu)
implementation on pSeries. PowerNV is not supported.

This is done by reading lppaca->yield_count: the hypervisor keeps the low bit
of the yield count set while the vCPU is scheduled out, so checking that bit
tells a busy-waiter whether the CPU it is spinning on is actually running.
On PowerNV the yield count is always zero, so the check never reports a
vCPU as preempted there.

Suggested-by: Boqun Feng <boqun.f...@gmail.com>
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui....@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: david.lai...@aculab.com
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: b...@kernel.crashing.org
Cc: borntrae...@de.ibm.com
Cc: bsinghar...@gmail.com
Cc: d...@stgolabs.net
Cc: jgr...@suse.com
Cc: kernel...@gmail.com
Cc: konrad.w...@oracle.com
Cc: linuxppc-...@lists.ozlabs.org
Cc: m...@ellerman.id.au
Cc: paul...@linux.vnet.ibm.com
Cc: pau...@samba.org
Cc: pbonz...@redhat.com
Cc: rkrc...@redhat.com
Cc: virtualizat...@lists.linux-foundation.org
Cc: will.dea...@arm.com
Cc: xen-devel-requ...@lists.xenproject.org
Cc: xen-de...@lists.xenproject.org
Link: http://lkml.kernel.org/r/1478077718-37424-5-git-send-email-xinhui....@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 arch/powerpc/include/asm/spinlock.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index fa37fe9..8c1b913 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -52,6 +52,14 @@
 #define SYNC_IO
 #endif
 
+#ifdef CONFIG_PPC_PSERIES
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+       return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
+}
+#endif
+
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
        return lock.slock == 0;

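For context, here is a minimal sketch of how a busy-wait loop can use the new
helper. It is illustrative only and not part of this patch:
spin_until_unlocked(), struct my_lock, is_locked() and holder_cpu are made-up
names; only vcpu_is_preempted() and cpu_relax() are real kernel interfaces
(the generic vcpu_is_preempted() fallback simply reports false when an
architecture does not override it).

/* Hedged sketch, not actual kernel code. */
static bool spin_until_unlocked(struct my_lock *lock, int holder_cpu)
{
	while (is_locked(lock)) {
		/*
		 * If the lock holder's vCPU has been preempted by the
		 * hypervisor, spinning only burns cycles that the holder
		 * could use to release the lock; stop busy-waiting and
		 * let the caller block or reschedule instead.
		 */
		if (vcpu_is_preempted(holder_cpu))
			return false;
		cpu_relax();
	}
	return true;
}

On pSeries this catches the case where the lock holder looks busy but its
vCPU is in fact scheduled out by the hypervisor, which is exactly when
continued spinning is wasted work.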