On 16/7/17 21:50, Pan Xinhui wrote:
On 16/7/17 16:10, Noam Camus wrote:
From: Noam Camus <noa...@mellanox.com>
Today there are platforms with many CPUs (up to 4K).
Trying to boot only part of the CPUs may result in a too-long string.
For example let's take the NPS platform that is part of arch/arc.
This platform has an SMP system with 256
On 16/7/15 16:47, Peter Zijlstra wrote:
So the reason I never get around to this is because the patch stinks.
It simply doesn't make sense... Remember, the harder you make a reviewer
work the less likely the review will be done.
Present things in clear concise language and draw a picture.
On
Hi, Balbir
Sorry for the late response, I missed reading your mail.
On 16/7/6 18:54, Balbir Singh wrote:
On Tue, 2016-06-28 at 10:43 -0400, Pan Xinhui wrote:
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself
From: pan xinhui <xinhui@linux.vnet.ibm.com>
This patch aims to get rid of endianness in queued_write_unlock(). We
want to set __qrwlock->wmode to NULL, however the address is not
&lock->cnts on a big endian machine. That causes queued_write_unlock()
to write NULL to the wrong field of __
n.f...@gmail.com>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/lppaca.h | 6 ++
arch/powerpc/include/asm/spinlock.h | 15 +++
2 files changed, 21 insertions(+)
diff --git a/arch/powerpc/include/asm/lppaca.h
b/arch/powerpc/include/
preempted check.
Archs can implement it by defining arch_vcpu_is_preempted().
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
include/linux/sched.h | 9 +
1 file changed, 9 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e42ada..dc0a9c3
mlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
kernel/locking/osq_lock.c | 16 +++-
1 file changed, 15 insertio
so we need to fix other XXX_spin_on_owner later based on this patch set.
These spin_on_owner variants cause RCU stalls.
Pan Xinhui (3):
powerpc/spinlock: support vcpu preempted check
locking/osq: Drop the overload of osq_lock()
kernel/sched: introduce vcpu preempted check interface
arch/powerpc/
On 2017/2/8 14:09, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 12:05:40PM +0800, Boqun Feng wrote:
On Wed, Feb 08, 2017 at 11:39:10AM +0800, Xinhui Pan wrote:
2016-12-26 4:26 GMT+08:00 Waiman Long :
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
Once xmon is triggered by sysrq-x, it stays enabled afterwards even
if it was disabled during boot. This causes a system reset interrupt
to fail to dump. So keep xmon in its original state after exit.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/xmon/xmon
commands. Turn xmon off if 'z'
is following.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/xmon/xmon.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..2f4e7b1
On 2017/2/16 18:57, Guilherme G. Piccoli wrote:
On 16/02/2017 03:09, Michael Ellerman wrote:
Pan Xinhui <xinhui@linux.vnet.ibm.com> writes:
Once xmon is triggered by sysrq-x, it stays enabled afterwards even
if it was disabled during boot. This will cause a system reset interru
On 2017/2/17 14:05, Michael Ellerman wrote:
Pan Xinhui <xin...@linux.vnet.ibm.com> writes:
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 9c0e17c..f6e5c3d 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -76,6 +76,7 @@ static int xmon_gate;
--- ---
4 4053.3 Mop/s 4223.7 Mop/s +4.2%
8 3310.4 Mop/s 3406.0 Mop/s +2.9%
12 2576.4 Mop/s 2674.6 Mop/s +3.8%
Signed-off-by: Waiman Long <long...@redhat.com>
---
Works on my side :)
Reviewed-by: Pan Xinhui <xinhui@linux.vnet.ibm.c
On 2016/9/30 13:52, Boqun Feng wrote:
On Fri, Sep 30, 2016 at 12:49:52PM +0800, Pan Xinhui wrote:
On 2016/9/29 23:51, Christian Borntraeger wrote:
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu
On 2016/9/29 23:51, Christian Borntraeger wrote:
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving
On 2016/9/29 18:31, Peter Zijlstra wrote:
On Thu, Sep 29, 2016 at 12:23:19PM +0200, Christian Borntraeger wrote:
On 09/29/2016 12:10 PM, Peter Zijlstra wrote:
On Thu, Jul 21, 2016 at 07:45:10AM -0400, Pan Xinhui wrote:
change from v2:
no code change, fix typos, update some comments
Hi, Paolo
Thanks for your reply.
On 2016/9/30 14:58, Paolo Bonzini wrote:
Please consider s390 and (x86/arm) KVM. Once we have a few, more can
follow later, but I think its important to not only have PPC support for
this.
Actually the s390 preempted check via sigp sense running is
On 2016/9/30 17:08, Paolo Bonzini wrote:
On 30/09/2016 10:52, Pan Xinhui wrote:
x86 has no hypervisor support, and I'd like to understand the desired
semantics first, so I don't think it should block this series. In
Once a guest does a hypercall or something similar, IOW
orrect cpu number.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
tools/perf/bench/futex-hash.c | 2 +-
tools/perf/bench/futex-lock-pi.c | 2 +-
tools/perf/bench/futex-requeue.c | 2 +-
tools/perf/bench/futex-wake-parallel.c | 2 +-
tools/perf/be
pSeries run as a guest and might need pv-qspinlock.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile
in the hash table might not be the correct lock holder, as for
performance reasons we do not take care of hash conflicts.
Also introduce spin_lock_holder, which tells who owns the lock now.
Currently the only user is spin_unlock_wait.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.
-by: Boqun Feng <boqun.f...@gmail.com>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
kernel/locking/qspinlock_paravirt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravirt.h
index 8a99
pseries will use qspinlock by default.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..f
two endianness
system.
We override some arch_spin_xxx as powerpc has io_sync stuff which makes
sure the io operations are protected by the lock correctly.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan X
1008.3 1122.6 1134.2
=
System Benchmarks Index Score 1072.0 1108.9 1050.6
--------
Pan Xinhui (6):
pv-qspin
will introduce latency and a little overhead. And
we do NOT want to suffer any latency in some cases, e.g. in an interrupt handler.
The second parameter *confer* can indicate such a case.
__spin_wake_cpu is simpler; it will wake up one vcpu regardless of its
current vcpu state.
Signed-off-by: Pan
Scripts (1 concurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/inclu
->yield_count stays zero on
powerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng <boqun.f...@gmail.com>
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/spinl
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
kernel/locking/mutex.c | 15 +--
kernel/locking/rwsem-xadd.c | 16 +---
2 files changed, 26 in
ncurrent) |23224.3 lpm |22607.4 lpm
Shell Scripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Pan Xinhui (5):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng <boqun.f...@gmail.com>
Signed-off-by: Pan Xinhui <xinhui...
.
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
include/linux/sched.h | 12
1 file changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 348f51b..44
.
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Tested-by: Juergen Gross <jgr...@suse.com>
---
include/linux/sched.h | 12
1 fil
ripts (8 concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (7):
kernel/sched: i
the spin loops upon the retval of
vcpu_is_preempted.
As the kernel already uses this interface, let's support it.
To deal with kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen could provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Tested-by: Juergen Gross <jgr...@suse.com>
---
kernel
On 2016/10/24 23:18, Paolo Bonzini wrote:
On 24/10/2016 17:14, Radim Krčmář wrote:
2016-10-24 16:39+0200, Paolo Bonzini:
On 19/10/2016 19:24, Radim Krčmář wrote:
+ if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
+ if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+
Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/include/uapi/asm/kvm_para.h | 3 ++-
arch/x86/kernel/kvm.c| 12
arch/x86/kvm/x86.c | 18 +++
From: Christian Borntraeger
this implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by
n preempted.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
Documentation/virtual/kvm/msr.txt | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..3376f13 100644
--- a
call_common
2.83% sched-messaging [kernel.vmlinux] [k] copypage_power7
2.64% sched-messaging [kernel.vmlinux] [k] rwsem_spin_on_owner
2.00% sched-messaging [kernel.vmlinux] [k] osq_lock
Suggested-by: Boqun Feng <boqun.f...@gmail.com>
Signed-off-by: Pan Xinhui <xinhui@linux
early yielding.
A quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross <jgr...@suse.com>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/xe
On 2016/10/19 23:58, Juergen Gross wrote:
On 19/10/16 12:20, Pan Xinhui wrote:
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip
On 2016/10/20 01:24, Radim Krčmář wrote:
2016-10-19 06:20-0400, Pan Xinhui:
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself.
Currently kernel has an interface of bool vcpu_is_preempted(int cpu). It
takes the cpu
On 2016/11/16 18:23, Peter Zijlstra wrote:
On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
Hi, Peter.
I think we can avoid a function call in a simpler way. How about the below?
static inline bool vcpu_is_preempted(int cpu)
{
/* only set in pv case
On 2016/11/15 23:47, Peter Zijlstra wrote:
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
diff --git a/arch/x86/include/asm/paravirt_types.h
b/arch/x86/include/asm/paravirt_types.h
index 0f400c0..38c3bb7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/kernel/kvm.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/ker
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16
2 files changed, 19 insertions(+), 1 deletion(-)
u has been preempted.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Radim Krčmář <rkrc...@redhat.com>
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt
b/Documentation/v
It allows us to partially update some status or field of a struct.
We can also save one kvm_read_guest_cached call if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
include/linux/kvm_host.h | 2 ++
vi
79.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/osq: Drop the overload of osq_lock()
kernel/locking: Drop the overload o
tem Call Overhead | 10385653.0 lps | 10419979.0 lps
Christian Borntraeger (1):
s390/spinlock: Provide vcpu_is_preempted
Juergen Gross (1):
x86, xen: support vcpu preempted check
Pan Xinhui (9):
kernel/sched: introduce vcpu preempted check interface
locking/o
essaging [kernel.vmlinux] [k] system_call
2.69% sched-messaging [kernel.vmlinux] [k] wait_consider_task
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
Tested-by: Juer
.
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntrae...@de.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
Tested-by: Juergen Gross <jgr...@suse.com>
-
->yield_count stays zero on
PowerNV. So we can just skip the machine type check.
Suggested-by: Boqun Feng <boqun.f...@gmail.com>
Suggested-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/include/asm/spinlock.h
It allows us to partially update some status or field of a struct.
We can also save one kvm_read_guest_cached call if we just update one field
of the struct regardless of its current value.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@r
kvm_steal_time::preempted to indicate whether
one vcpu is running or not.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
---
arch/x86/include/uapi/asm/kvm_para.h | 4 +++-
arch/x86/kvm/x86.c | 16 +
early yielding.
A quick test (4 vcpus on 1 physical cpu doing a parallel build job
with "make -j 8") reduced system time by about 5% with this patch.
Signed-off-by: Juergen Gross <jgr...@suse.com>
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/x86/xe
u has been preempted.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Radim Krčmář <rkrc...@redhat.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
---
Documentation/virtual/kvm/msr.txt | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git
concurrent) | 3531.4 lpm | 3211.9 lpm
System Call Overhead | 10385653.0 lps | 10419979.0 lps
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Acked-by: Paolo Bonzini <pbonz...@redhat.com>
---
arch/x86/kernel/kvm.c | 12
1 file
the spin loops upon the retval of
vcpu_is_preempted.
As the kernel already uses this interface, let's support it.
To deal with kernel and kvm/xen, add vcpu_is_preempted into struct
pv_lock_ops.
Then kvm or xen could provide their own implementation to support
vcpu_is_preempted.
Signed-off-by: Pan Xinhui
On 2016/10/29 03:38, Konrad Rzeszutek Wilk wrote:
On Fri, Oct 28, 2016 at 04:11:16AM -0400, Pan Xinhui wrote:
change from v5:
split the x86/kvm patch into guest/host parts.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4
Hi, Peter
I think I see the point.
Could we then just make __eax the return type (here bool), not unsigned long?
I have not tested this idea.
@@ -461,7 +461,9 @@ int paravirt_disable_iospace(void);
#define PVOP_VCALL_ARGS
\
If the prev node is not in running state or its cpu is preempted, we need to
wait early in pv_wait_node. After commit "sched/core: Introduce the
vcpu_is_preempted(cpu) interface" the kernel has knowledge of whether one vcpu
is running or not. So let's use it.
Signed-off-by: Pan Xinhui <xinhui@linux
for me.
Reviewed-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
fs/lockd/svc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index fc4084e..bd6fcf9 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -561,7 +561,7 @@ static struct ctl_table
On 2016/12/15 15:24, Jia He wrote:
This is to let a bool variable be displayed correctly in
big/little endian sysctl procfs. sizeof(bool) is arch dependent;
proc_dobool should work on all arches.
Suggested-by: Pan Xinhui <xin...@linux.vnet.ibm.com>
Signed-off-by: Jia He <hejia...@
Hi, Jia
Nice catch!
However I think we should fix it completely.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like the below might help; please ignore the code style. This is tested
:)
diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index
On 2016/12/11 23:36, Jia He wrote:
nsm_use_hostnames is a module parameter and it will be exported to sysctl
procfs. This is to let users change it from userspace. But the
minimal unit for sysctl procfs read/write is sizeof(int).
In a big endian system, the converting from/to bool to/from
On 2016/12/12 01:43, Pan Xinhui wrote:
Hi, Jia
Nice catch!
However I think we should fix it completely.
This is because do_proc_dointvec_conv() tries to get an int value from a bool *.
Something like the below might help; please ignore the code style. This is tested
On 2016/12/7 03:14, Waiman Long wrote:
A number of cmpxchg calls in qspinlock_paravirt.h were replaced by more
relaxed versions to improve performance on architectures that use LL/SC.
Signed-off-by: Waiman Long
---
Thanks!
I applied it on my tree, and the tests are okay.
ke
If the prev node is not in running state or its vCPU is preempted, we can give
up our vCPU slices ASAP in pv_wait_node. After commit d9345c65eb79
("sched/core: Introduce the vcpu_is_preempted(cpu) interface") the kernel
has knowledge of whether one vCPU is running or not.
Signed-off-by: Pan Xinh
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 inse
Correct Waiman's address.
On 2016/12/6 08:47, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
This patch adds basic code to enable qspinlock on powerpc. qspinlock is
one kind of fair lock implementation, and we have seen some performance
improvement under some scenarios
On 2016/12/6 09:24, Pan Xinhui wrote:
On 2016/12/6 08:58, Boqun Feng wrote:
On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/platforms/pseries/Kconf
Avoid a function call under the native version of qspinlock. On powerNV,
before applying this patch, every unlock is expensive. This small
optimization enhances the performance.
We use static_key with jump_label which removes unnecessary loads of
lppaca and its stuff.
Signed-off-by: Pan Xinhui <xin
1134.2
=
System Benchmarks Index Score 1072.0 1108.9 1050.6
--------
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc
pSeries/powerNV will use qspinlock from now on.
Signed-off-by: Pan Xinhui <xinhui@linux.vnet.ibm.com>
---
arch/powerpc/platforms/pseries/Kconfig | 8
1 file changed, 8 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
endianness
system.
We override some arch_spin_XXX as powerpc has io_sync stuff which makes
sure the io operations are protected by the lock correctly.
There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
Signed-off-by: Pan Xinh
On 2016/12/2 12:35, yjin wrote:
On 2016-12-02 12:22, Balbir Singh wrote:
On Fri, Dec 2, 2016 at 3:15 PM, Michael Ellerman wrote:
yanjiang@windriver.com writes:
diff --git a/arch/powerpc/include/asm/cputime.h
b/arch/powerpc/include/asm/cputime.h
index
1050.6
----
Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc: platforms/Kconfig: Add qspinlock build config
powerpc: lib/locks.c: Add cpu yield/wake helper function
powerpc/pv-qspinlock: powerpc support pv-qspinlo