On 2017/8/29 22:56, Michael S. Tsirkin wrote:
On Tue, Aug 29, 2017 at 11:46:34AM +, Yang Zhang wrote:
Some latency-intensive workloads see an obvious performance
drop when running inside a VM.
But are we trading a lot of CPU for a bit of lower latency?
The main reason
On 2017/9/1 14:58, Wanpeng Li wrote:
2017-09-01 14:44 GMT+08:00 Yang Zhang <yang.zhang...@gmail.com>:
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.
On 2017/8/29 20:46, Peter Zijlstra wrote:
On Tue, Aug 29, 2017 at 11:46:41AM +, Yang Zhang wrote:
In ttwu_do_wakeup(), avg_idle is updated when waking up from idle. Here
we just reuse this logic to update the poll time. It may be a little
late to update the poll in ttwu_do_wakeup
On 2017/8/29 21:55, Konrad Rzeszutek Wilk wrote:
On Tue, Aug 29, 2017 at 11:46:35AM +, Yang Zhang wrote:
So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called in
the idle path and polls for a while before we enter the real idle
state.
In virtualization, idle path
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
halt_poll_threshold=2 -- 15899.04 bits/s -- 161.5 %CPU
halt_poll_threshold=3 -- 15642.38 bits/s
cause CPU waste, so we adopt a smart polling mechanism to
reduce useless polling.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Jeremy Fitzhardinge <jer...@goop.org>
Cc: Chris Wright <chr...@sous-sol.org>
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x...@ker
-- 157.6 %CPU
V1 -> V2:
- integrate the smart halt poll into the paravirt code
- use idle_stamp instead of check_poll
- since it is hard to tell whether the vcpu is the only task on its pcpu, we
don't consider that in this series (may improve it in the future)
Yang Zhang (7):
x86/paravirt: Add pv_idle_
To reduce the cost of polling, we introduce three sysctls to control the
poll time.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Jonathan Corbet <cor...@lwn.net>
Cc: Jeremy Fitzhardinge <jer...@goop.org>
Cc: Chris Wright
.update is used to adjust the next poll time.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Jeremy Fitzhardinge <jer...@goop.org>
Cc: Chris Wright <chr...@sous-sol.org>
Cc: Alok Kataria <akata...@vmware
using smart idle polling to reduce useless polling when the system is idle.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi..
is that idle_stamp is only used by the CFS scheduler. But that is ok, since
CFS is the default scheduling policy, and considering only it should be
enough.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: P
Add polling in do_idle. For a UP VM, if there is a running task it will not
go into the idle path, so we only enable polling in SMP VMs.
Signed-off-by: Yang Zhang <yang.zhang...@gmail.com>
Signed-off-by: Quan Xu <quan@gmail.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar
On 2017/8/16 12:04, Michael S. Tsirkin wrote:
On Thu, Jun 22, 2017 at 11:22:13AM +, root wrote:
From: Yang Zhang <yang.zhang...@gmail.com>
This patch introduces a new mechanism to poll for a while before
entering the idle state.
David has a topic at KVM Forum describing the problem on c
On 2017/7/17 17:54, Alexander Graf wrote:
On 17.07.17 11:26, Yang Zhang wrote:
On 2017/7/14 17:37, Alexander Graf wrote:
On 13.07.17 13:49, Yang Zhang wrote:
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we(Alibaba Cloud) do get more
On 2017/7/14 17:37, Alexander Graf wrote:
On 13.07.17 13:49, Yang Zhang wrote:
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we (Alibaba Cloud) do get more and more
complaints from our customers on both KVM and Xen compared to bare
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we (Alibaba Cloud) do get more and more complaints
from our customers on both KVM and Xen compared to bare-metal. After
investigation, the root cause is known to us: the big cost of message passing
On 2017/7/3 18:06, Thomas Gleixner wrote:
On Mon, 3 Jul 2017, Yang Zhang wrote:
The background is that we (Alibaba Cloud) do get more and more complaints from
our customers on both KVM and Xen compared to bare-metal. After investigation,
the root cause is known to us: the big cost of message passing
On 2017/6/27 22:22, Radim Krčmář wrote:
2017-06-27 15:56+0200, Paolo Bonzini:
On 27/06/2017 15:40, Radim Krčmář wrote:
... which is not necessarily _wrong_. It's just a different heuristic.
Right, it's just harder to use than host's single_task_running() -- the
VCPU calling
On 2017/6/23 11:58, Yang Zhang wrote:
On 2017/6/22 19:51, Paolo Bonzini wrote:
On 22/06/2017 13:22, root wrote:
==
+poll_grow: (X86 only)
+
+This parameter is multiplied in grow_poll_ns() to increase the poll time.
+By default
On 2017/6/23 12:35, Wanpeng Li wrote:
2017-06-23 12:08 GMT+08:00 Yang Zhang <yang.zhang...@gmail.com>:
On 2017/6/22 19:50, Wanpeng Li wrote:
2017-06-22 19:22 GMT+08:00 root <yang.zhang...@gmail.com>:
From: Yang Zhang <yang.zhang...@gmail.com>
Some latency-intensive
On 2017/6/22 19:50, Wanpeng Li wrote:
2017-06-22 19:22 GMT+08:00 root <yang.zhang...@gmail.com>:
From: Yang Zhang <yang.zhang...@gmail.com>
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified
On 2017/6/22 22:23, Thomas Gleixner wrote:
On Thu, 22 Jun 2017, root wrote:
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -39,6 +39,10 @@
#include
#include
+#ifdef CONFIG_HYPERVISOR_GUEST
+unsigned long poll_threshold_ns;
+#endif
+
/*
* per-CPU TSS segments. Threads
On 2017/6/22 22:32, Thomas Gleixner wrote:
On Thu, 22 Jun 2017, root wrote:
@@ -962,6 +962,7 @@ __visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
* interrupt lock, which is the WrongThing (tm) to do.
*/
entering_ack_irq();
+ check_poll();
On 2017/6/22 19:51, Paolo Bonzini wrote:
On 22/06/2017 13:22, root wrote:
==
+poll_grow: (X86 only)
+
+This parameter is multiplied in grow_poll_ns() to increase the poll time.
+By default, the value is 2.
+
On 2017/6/22 19:22, root wrote:
From: Yang Zhang <yang.zhang...@gmail.com>
Sorry for using the wrong username to send the patch; I am using a new
machine where the git config is not set up well.
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. Th