> On 6 Nov 2017, at 06:58, Andreas Färber wrote:
>
>> On 5 Nov 2017 at 04:39, Ard Biesheuvel wrote:
>>> On 4 November 2017 at 20:06, Andreas Färber wrote:
On 4 Nov 2017 at 23:39, Ard Biesheuvel wrote:
> On 4 November 2017 at 15:30, Andreas Färber
On 2017/8/29 22:56, Michael S. Tsirkin wrote:
On Tue, Aug 29, 2017 at 11:46:34AM +, Yang Zhang wrote:
Some latency-intensive workloads see an obvious performance
drop when running inside a VM.
But are we trading a lot of CPU for a bit of lower latency?
The main reason
On 2017/9/1 14:58, Wanpeng Li wrote:
2017-09-01 14:44 GMT+08:00 Yang Zhang :
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
halt_poll_threshold=2
On 2017/8/29 20:46, Peter Zijlstra wrote:
On Tue, Aug 29, 2017 at 11:46:41AM +, Yang Zhang wrote:
In ttwu_do_wakeup, avg_idle is updated when waking up from idle. Here
we just reuse this logic to update the poll time. It may be a little
late to update the poll in ttwu_do_wakeup
On 2017/8/29 21:55, Konrad Rzeszutek Wilk wrote:
On Tue, Aug 29, 2017 at 11:46:35AM +, Yang Zhang wrote:
So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called in
the idle path and polls for a while before we enter the real idle
state.
In virtualization, idle path
On 2017/8/29 22:02, Wanpeng Li wrote:
Here is the data we get when running benchmark netperf:
2. w/ patch:
halt_poll_threshold=1 -- 15803.89 bits/s -- 159.5 %CPU
halt_poll_threshold=2 -- 15899.04 bits/s -- 161.5 %CPU
halt_poll_threshold=3 -- 15642.38 bits/s
On 2017/8/29 19:58, Alexander Graf wrote:
On 08/29/2017 01:46 PM, Yang Zhang wrote:
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
inside the idle path
cause CPU waste, so we adopt a smart polling mechanism to
reduce useless polling.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Jeremy Fitzhardinge
Cc: Chris Wright
Cc: Alok Kataria
Cc: Rusty Russell
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@ker
-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Paolo Bonzini
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
arch/x86/kernel/kvm.c | 26 ++
1 file changed, 26 insertion
Use smart idle polling to reduce useless polling when the system is idle.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Paolo Bonzini
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
To reduce the cost of polling, we introduce three sysctls to control the
poll time.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Jonathan Corbet
Cc: Jeremy Fitzhardinge
Cc: Chris Wright
Cc: Alok Kataria
Cc: Rusty Russell
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin&qu
-- 157.6 %CPU
V1 -> V2:
- integrate the smart halt poll into paravirt code
- use idle_stamp instead of check_poll
- since it is hard to tell whether the vcpu is the only task on the pcpu, we
don't consider it in this series. (We may improve it in the future.)
Yang Zhang (7):
x86/paravirt: Add pv_idle_
Add poll in do_idle. For a UP VM, if there is a running task it will not
go into the idle path, so we only enable polling for SMP VMs.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: Peter Zijlstra
Cc: Boris
is that idle_stamp is only used by the CFS scheduler. But
that is OK, since CFS is the default scheduling policy, and considering
only it should be enough.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org
---
include/linux/sched/idle.h | 4
.update is used to adjust the next poll time.
Signed-off-by: Yang Zhang
Signed-off-by: Quan Xu
Cc: Jeremy Fitzhardinge
Cc: Chris Wright
Cc: Alok Kataria
Cc: Rusty Russell
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: Peter Zijlstra
Cc: Andy
hanged, 28 insertions(+), 37 deletions(-)
Reviewed-by: Yang Zhang <yang.zhang...@gmail.com>
--
Yang
Alibaba Cloud Computing
On 2017/8/24 18:12, Paolo Bonzini wrote:
The host pkru is restored right after vcpu exit (commit 1be0e61), so
KVM_GET_XSAVE will return the host PKRU value instead. In general,
the PKRU value in vcpu->arch.guest_fpu.state cannot be trusted.
Series as follows:
1) fix independent bug which
On 2017/8/24 17:19, Paolo Bonzini wrote:
On 24/08/2017 11:09, Yang Zhang wrote:
+if (static_cpu_has(X86_FEATURE_OSPKE) &&
We expose protection keys to the VM without checking whether OSPKE is
enabled. Why not check the guest's CPUID here, which can also avoid unnecessary
access
On 2017/8/24 5:26, Paolo Bonzini wrote:
Move it to struct kvm_arch_vcpu, replacing guest_pkru_valid with a
simple comparison against the host value of the register.
Thanks for refining the patches. :)
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/kvm_host.h | 1
On 2017/8/17 16:51, Wanpeng Li wrote:
2017-08-17 16:48 GMT+08:00 Yang Zhang :
On 2017/8/17 16:31, Wanpeng Li wrote:
2017-08-17 16:28 GMT+08:00 Wanpeng Li :
2017-08-17 16:07 GMT+08:00 Yang Zhang :
On 2017/8/17 0:56, Radim Krčmář wrote:
2017-08-16 17:10+0300, Michael S. Tsirkin
On 2017/8/17 16:31, Wanpeng Li wrote:
2017-08-17 16:28 GMT+08:00 Wanpeng Li :
2017-08-17 16:07 GMT+08:00 Yang Zhang :
On 2017/8/17 0:56, Radim Krčmář wrote:
2017-08-16 17:10+0300, Michael S. Tsirkin:
On Wed, Aug 16, 2017 at 03:34:54PM +0200, Paolo Bonzini wrote:
Microsoft pointed out
On 2017/8/17 0:56, Radim Krčmář wrote:
2017-08-16 17:10+0300, Michael S. Tsirkin:
On Wed, Aug 16, 2017 at 03:34:54PM +0200, Paolo Bonzini wrote:
Microsoft pointed out privately to me that KVM's handling of
KVM_FAST_MMIO_BUS is invalid. Using skip_emulation_instruction is invalid
in EPT
On 2017/8/16 12:04, Michael S. Tsirkin wrote:
On Thu, Jun 22, 2017 at 11:22:13AM +, root wrote:
From: Yang Zhang
This patch introduces a new mechanism to poll for a while before
entering the idle state.
David has a topic in KVM forum to describe the problem on current KVM VM
when running some
On 2017/7/17 17:54, Alexander Graf wrote:
On 17.07.17 11:26, Yang Zhang wrote:
On 2017/7/14 17:37, Alexander Graf wrote:
On 13.07.17 13:49, Yang Zhang wrote:
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we (Alibaba Cloud) do get more
On 2017/7/14 17:37, Alexander Graf wrote:
On 13.07.17 13:49, Yang Zhang wrote:
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we (Alibaba Cloud) get more and more
complaints
from our customers on both KVM and Xen compared to bare
On 2017/7/4 22:13, Radim Krčmář wrote:
2017-07-03 17:28+0800, Yang Zhang:
The background is that we (Alibaba Cloud) get more and more complaints
from our customers on both KVM and Xen compared to bare metal. After
investigation, the root cause is known to us: the big cost of message passing
On 2017/7/3 18:06, Thomas Gleixner wrote:
On Mon, 3 Jul 2017, Yang Zhang wrote:
The background is that we (Alibaba Cloud) get more and more complaints from
our customers on both KVM and Xen compared to bare metal. After investigation,
the root cause is known to us: the big cost of message passing
On 2017/6/27 22:22, Radim Krčmář wrote:
2017-06-27 15:56+0200, Paolo Bonzini:
On 27/06/2017 15:40, Radim Krčmář wrote:
... which is not necessarily _wrong_. It's just a different heuristic.
Right, it's just harder to use than host's single_task_running() -- the
VCPU calling
On 2017/6/23 11:58, Yang Zhang wrote:
On 2017/6/22 19:51, Paolo Bonzini wrote:
On 22/06/2017 13:22, root wrote:
==
+poll_grow: (X86 only)
+
+This parameter is multiplied in the grow_poll_ns() to increase the
poll time.
+By default
On 2017/6/23 12:35, Wanpeng Li wrote:
2017-06-23 12:08 GMT+08:00 Yang Zhang :
On 2017/6/22 19:50, Wanpeng Li wrote:
2017-06-22 19:22 GMT+08:00 root :
From: Yang Zhang
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. The main reason
On 2017/6/22 19:50, Wanpeng Li wrote:
2017-06-22 19:22 GMT+08:00 root :
From: Yang Zhang
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
inside
On 2017/6/22 22:23, Thomas Gleixner wrote:
On Thu, 22 Jun 2017, root wrote:
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -39,6 +39,10 @@
#include
#include
+#ifdef CONFIG_HYPERVISOR_GUEST
+unsigned long poll_threshold_ns;
+#endif
+
/*
* per-CPU TSS segments. Threads
On 2017/6/22 22:32, Thomas Gleixner wrote:
On Thu, 22 Jun 2017, root wrote:
@@ -962,6 +962,7 @@ __visible void __irq_entry smp_apic_timer_interrupt(struct
pt_regs *regs)
* interrupt lock, which is the WrongThing (tm) to do.
*/
entering_ack_irq();
+ check_poll();
On 2017/6/22 19:51, Paolo Bonzini wrote:
On 22/06/2017 13:22, root wrote:
==
+poll_grow: (X86 only)
+
+This parameter is multiplied in the grow_poll_ns() to increase the poll time.
+By default, the value is 2.
+
On 2017/6/22 19:22, root wrote:
From: Yang Zhang
Sorry for using the wrong username to send the patch; I am using a new
machine where the git config is not set up properly.
Some latency-intensive workloads see an obvious performance
drop when running inside a VM. The main reason
On 2016/9/28 19:50, Paolo Bonzini wrote:
On 28/09/2016 13:40, Wu, Feng wrote:
IIUIC, the issue you describe above is that IPI for posted-interrupts may be
issued between
vcpu->mode = IN_GUEST_MODE;
and
local_irq_disable();
But if that really happens, we will call kvm_vcpu_kick() in
On 2016/9/28 5:20, Paolo Bonzini wrote:
Calling apic_find_highest_irr results in IRR being scanned twice,
once in vmx_sync_pir_from_irr and once in apic_search_irr. Change
vcpu_enter_guest to use sync_pir_from_irr (with a new argument to
trigger the RVI write), and let sync_pir_from_irr get the
On 2016/9/28 5:20, Paolo Bonzini wrote:
Since bf9f6ac8d749 ("KVM: Update Posted-Interrupts Descriptor when vCPU
is blocked", 2015-09-18) the posted interrupt descriptor is checked
unconditionally for PIR.ON. Therefore we don't need KVM_REQ_EVENT to
trigger the scan and, if NMIs or SMIs are not
On 2016/8/9 18:19, Wincy Van wrote:
On Tue, Aug 9, 2016 at 5:32 PM, Yang Zhang wrote:
On 2016/8/9 2:16, Radim Krčmář wrote:
msr bitmap can be used to avoid a VM exit (interception) on guest MSR
accesses. In some configurations of VMX controls, the guest can even
directly access host's
On 2016/8/9 20:23, Radim Krčmář wrote:
2016-08-09 17:32+0800, Yang Zhang:
On 2016/8/9 2:16, Radim Krčmář wrote:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
@@ -6995,16 +6982,21 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
return 1
On 2016/8/9 2:16, Radim Krčmář wrote:
msr bitmap can be used to avoid a VM exit (interception) on guest MSR
accesses. In some configurations of VMX controls, the guest can even
directly access host's x2APIC MSRs. See SDM 29.5 VIRTUALIZING MSR-BASED
APIC ACCESSES.
L2 could read all L0's x2APIC
On 2016/7/13 17:35, Paolo Bonzini wrote:
On 13/07/2016 11:21, Yang Zhang wrote:
+static int handle_desc(struct kvm_vcpu *vcpu)
+{
+WARN_ON(!(vcpu->arch.cr4 & X86_CR4_UMIP));
I think WARN_ON is too heavy, since a malicious guest can trigger it at will.
I missed this---how so?
On 2016/7/13 17:35, Paolo Bonzini wrote:
On 13/07/2016 11:21, Yang Zhang wrote:
+if ((cr4 & X86_CR4_UMIP) && !boot_cpu_has(X86_FEATURE_UMIP)) {
+vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL,
+ SECONDARY_EXEC_DESC);
+hw_cr4 &= ~X86_CR4_U
On 2016/7/13 3:20, Paolo Bonzini wrote:
UMIP (User-Mode Instruction Prevention) is a feature of future
Intel processors (Cannonlake?) that blocks SLDT, SGDT, STR, SIDT
and SMSW from user-mode processes.
On Intel systems it's *almost* possible to emulate it; it slows
down the instructions when
On 2016/7/13 3:20, Paolo Bonzini wrote:
UMIP (User-Mode Instruction Prevention) is a feature of future
Intel processors (Cannonlake?) that blocks SLDT, SGDT, STR, SIDT
I remember there is no Cannonlake any more. It should be Icelake. :)
and SMSW from user-mode processes.
Do you know the
On 2016/7/11 23:52, Radim Krčmář wrote:
2016-07-11 16:14+0200, Paolo Bonzini:
On 11/07/2016 15:48, Radim Krčmář wrote:
I guess the easiest solution is to replace kvm_apic_id with a field in
struct kvm_lapic, which is already shifted right by 24 in xAPIC mode.
(I guess the fewest LOC is to
On 2016/7/11 17:17, Paolo Bonzini wrote:
On 11/07/2016 10:56, Yang Zhang wrote:
On 2016/7/11 15:44, Paolo Bonzini wrote:
On 11/07/2016 08:06, Yang Zhang wrote:
Changes to MSI addresses follow the format used by interrupt remapping
unit.
The upper address word, that used to be 0, contains
On 2016/7/11 15:43, Paolo Bonzini wrote:
On 11/07/2016 08:07, Yang Zhang wrote:
mutex_lock(&kvm->arch.apic_map_lock);
+kvm_for_each_vcpu(i, vcpu, kvm)
+if (kvm_apic_present(vcpu))
+max_id = max(max_id, kvm_apic_id(vcpu->arch.apic));
+
+new = kzalloc(sizeof(
On 2016/7/11 15:44, Paolo Bonzini wrote:
On 11/07/2016 08:06, Yang Zhang wrote:
Changes to MSI addresses follow the format used by interrupt remapping
unit.
The upper address word, that used to be 0, contains upper 24 bits of
the LAPIC
address in its upper 24 bits. Lower 8 bits are reserved