Commit-ID:  d9345c65eb7930ac6755cf593ee7686f4029ccf4
Gitweb:     http://git.kernel.org/tip/d9345c65eb7930ac6755cf593ee7686f4029ccf4
Author:     Pan Xinhui <xinhui....@linux.vnet.ibm.com>
AuthorDate: Wed, 2 Nov 2016 05:08:28 -0400
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 22 Nov 2016 12:48:05 +0100

sched/core: Introduce the vcpu_is_preempted(cpu) interface

This patch is the first step towards improving lock holder
preemption behaviour.

vcpu_is_preempted(cpu) does the obvious thing: it tells us whether a
vCPU is preempted or not.

Defaults to false on architectures that don't support it.

Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Tested-by: Juergen Gross <jgr...@suse.com>
Signed-off-by: Pan Xinhui <xinhui....@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
[ Translated the changelog to English. ]
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
Cc: david.lai...@aculab.com
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: b...@kernel.crashing.org
Cc: boqun.f...@gmail.com
Cc: bsinghar...@gmail.com
Cc: d...@stgolabs.net
Cc: kernel...@gmail.com
Cc: konrad.w...@oracle.com
Cc: linuxppc-...@lists.ozlabs.org
Cc: m...@ellerman.id.au
Cc: paul...@linux.vnet.ibm.com
Cc: pau...@samba.org
Cc: rkrc...@redhat.com
Cc: virtualizat...@lists.linux-foundation.org
Cc: will.dea...@arm.com
Cc: xen-devel-requ...@lists.xenproject.org
Cc: xen-de...@lists.xenproject.org
Link: http://lkml.kernel.org/r/1478077718-37424-2-git-send-email-xinhui....@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 include/linux/sched.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index dc37cbe..37261af 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3510,6 +3510,18 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+/*
+ * In order to reduce various lock holder preemption latencies provide an
+ * interface to see if a vCPU is currently running or not.
+ *
+ * This allows us to terminate optimistic spin loops and block, analogous to
+ * the native optimistic spin heuristic of testing if the lock owner task is
+ * running or not.
+ */
+#ifndef vcpu_is_preempted
+# define vcpu_is_preempted(cpu)        false
+#endif
+
 extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 
