On Fri, Mar 7, 2014 at 1:22 AM, Christian Benvenuti (benve)
be...@cisco.com wrote:
-----Original Message-----
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org]
On Behalf Of Stefan Hajnoczi
Sent: Friday, February 14, 2014 7:58 AM
To: Cedric Bosdonnat
Cc: Jan Kiszka;
Hi Stefan, Christian,
On Fri, 2014-03-07 at 10:16 +0100, Stefan Hajnoczi wrote:
I am not applying as a student, and I am not offering myself as a mentor (I
do not qualify as one); I just wanted to point out a possibly interesting
(and challenging) project.
I am afraid it would be
Alex Williamson reported that a Windows game does something weird that
makes the guest save and restore debug registers on each context switch.
This causes several hundred thousand vmexits per second, and basically
cuts performance in half when running under KVM.
However, when not running in
Unlike other intercepts, debug register intercepts will be modified
in hot paths if the guest OS is bad or otherwise gets tricked into
doing so.
Avoid calling recalc_intercepts 16 times for debug registers.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/kvm/svm.c | 41
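For illustration, the batching idea looks roughly like this (a minimal
sketch assuming the helper and field names used in arch/x86/kvm/svm.c of
this period, not the literal patch): all DR intercept bits are written in
one store, and recalc_intercepts() runs once instead of once per register.

static inline void set_dr_intercepts(struct vcpu_svm *svm)
{
        struct vmcb *vmcb = get_host_vmcb(svm);

        /* One store for all DR read/write intercept bits... */
        vmcb->control.intercept_dr =
                  (1 << INTERCEPT_DR0_READ)  | (1 << INTERCEPT_DR1_READ)
                | (1 << INTERCEPT_DR2_READ)  | (1 << INTERCEPT_DR3_READ)
                | (1 << INTERCEPT_DR4_READ)  | (1 << INTERCEPT_DR5_READ)
                | (1 << INTERCEPT_DR6_READ)  | (1 << INTERCEPT_DR7_READ)
                | (1 << INTERCEPT_DR0_WRITE) | (1 << INTERCEPT_DR1_WRITE)
                | (1 << INTERCEPT_DR2_WRITE) | (1 << INTERCEPT_DR3_WRITE)
                | (1 << INTERCEPT_DR4_WRITE) | (1 << INTERCEPT_DR5_WRITE)
                | (1 << INTERCEPT_DR6_WRITE) | (1 << INTERCEPT_DR7_WRITE);

        /* ...and a single recalculation instead of 16. */
        recalc_intercepts(svm);
}

static inline void clr_dr_intercepts(struct vcpu_svm *svm)
{
        struct vmcb *vmcb = get_host_vmcb(svm);

        vmcb->control.intercept_dr = 0;
        recalc_intercepts(svm);
}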
When preparing the VMCS02, the CPU-based execution controls are computed
by vmx_exec_control. Turn off DR access exits there, too, if the
KVM_DEBUGREG_WONT_EXIT bit is set in switch_db_regs.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/kvm/vmx.c | 4
1 file changed, 4
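A minimal sketch of the resulting check, assuming the names used in this
series (KVM_DEBUGREG_WONT_EXIT, switch_db_regs); the other adjustments in
vmx_exec_control are elided:

static u32 vmx_exec_control(struct vcpu_vmx *vmx)
{
        u32 exec_control = vmcs_config.cpu_based_exec_ctrl;

        /* The guest owns the debug registers: no need to exit on DR
         * access, even for the nested VMCS02. */
        if (vmx->vcpu.arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)
                exec_control &= ~CPU_BASED_MOV_DR_EXITING;

        /* ... remaining control adjustments elided ... */
        return exec_control;
}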
When not running in guest-debug mode (i.e. the guest controls the debug
registers), having to take an exit for each DR access is a waste of time.
If the guest gets into a state where each context switch causes DR to be
saved and restored, this can take away as much as 40% of the execution
time from
When not running in guest-debug mode (i.e. the guest controls the debug
registers), having to take an exit for each DR access is a waste of time.
If the guest gets into a state where each context switch causes DR to be
saved and restored, this can take away as much as 40% of the execution
time from
The next patch will add another bit that we can test with the
same if.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 6 +-
arch/x86/kvm/x86.c | 4 +++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git
When not running in guest-debug mode, the guest controls the debug
registers and having to take an exit for each DR access is a waste
of time. If the guest gets into a state where each context switch
causes DR to be saved and restored, this can take away as much as 40%
of the execution time from
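The mechanism, roughly (a sketch assuming the VMX-side names from this
series; the emulation path and the SVM variant are elided): on the first
MOV-DR exit with no host debugger attached, KVM stops intercepting DR
accesses and flags the vCPU so the now-dirty hardware debug registers get
synced back on a later exit.

static int handle_dr(struct kvm_vcpu *vcpu)
{
        if (vcpu->guest_debug == 0) {
                /* No host-side debugger: give the guest direct access
                 * to the debug registers from now on. */
                vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
                                CPU_BASED_MOV_DR_EXITING);
                vcpu->arch.switch_db_regs |= KVM_DEBUGREG_WONT_EXIT;
                return 1;       /* re-enter the guest, no emulation */
        }
        /* ... regular MOV-DR emulation for guest-debug mode elided ... */
        return 1;
}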
Currently, this works even if the bit is not in min, because the bit is always
set in MSR_IA32_VMX_ENTRY_CTLS. Mention it for the sake of documentation, and
to avoid surprises if we later switch to MSR_IA32_VMX_TRUE_ENTRY_CTLS.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
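For context, this is how such control bits are negotiated against the
capability MSR (a sketch modeled on vmx.c's adjust_vmx_controls;
illustrative, not quoted from the patch):

static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, u32 msr, u32 *result)
{
        u32 vmx_msr_low, vmx_msr_high;
        u32 ctl = ctl_min | ctl_opt;

        rdmsr(msr, vmx_msr_low, vmx_msr_high);

        ctl &= vmx_msr_high;    /* bits that may be set to 1 */
        ctl |= vmx_msr_low;     /* bits that must be set to 1 */

        /* A required bit the MSR cannot provide is a hard error; a bit
         * forced to 1 by the MSR works even when it is missing from min,
         * which is exactly the case documented above. */
        if ((ctl & ctl_min) != ctl_min)
                return -EIO;

        *result = ctl;
        return 0;
}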
On 07/03/14 04:46, Pravin Shelar wrote:
On Thu, Mar 6, 2014 at 9:09 AM, Zoltan Kiss zoltan.k...@citrix.com wrote:
Do you have any feedback on this? I'm also adding KVM list as they might be
interested in this.
Zoli
On 28/02/14 19:16, Zoltan Kiss wrote:
The kernel datapath has now switched to
Can we have per-VM PLE values?
My understanding is that the ple values are a kvm module setting which applies
to all VMs in the system.
And all VMs must be stopped first, then kvm-intel unloaded and reloaded with
the new ple setting:
/sbin/modprobe -r kvm-intel
/sbin/modprobe kvm-intel
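For example, a reload with new values would look like this (ple_gap and
ple_window are the kvm-intel module parameters; the numbers are arbitrary
illustrations, not recommendations):
/sbin/modprobe -r kvm-intel
/sbin/modprobe kvm-intel ple_gap=128 ple_window=4096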
On 06/03/2014 18:33, Jan Kiszka wrote:
Move the check for leaving L2 on pending and intercepted IRQs or NMIs
from the *_allowed handler into a dedicated callback. Invoke this
callback at the relevant points before KVM checks if IRQs/NMIs can be
injected. The callback has the task to
On 03/07/2014 05:46 AM, Pravin Shelar wrote:
But I found a bug in the datapath user-space queue code. I am not sure how
this can work with skb fragments and an MMAP netlink socket.
Here is what happens: OVS allocates a netlink skb and adds fragments to the
skb using skb_zero_copy(), then calls
On 2014-03-07 16:44, Paolo Bonzini wrote:
On 06/03/2014 18:33, Jan Kiszka wrote:
Move the check for leaving L2 on pending and intercepted IRQs or NMIs
from the *_allowed handler into a dedicated callback. Invoke this
callback at the relevant points before KVM checks if IRQs/NMIs can be
On 07/03/2014 17:29, Jan Kiszka wrote:
On 2014-03-07 16:44, Paolo Bonzini wrote:
With this patch do we still need
if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
/*
* We get here if vmx_interrupt_allowed() said we can't
*
On Fri, Mar 7, 2014 at 7:58 AM, Thomas Graf tg...@redhat.com wrote:
On 03/07/2014 05:46 AM, Pravin Shelar wrote:
But I found a bug in the datapath user-space queue code. I am not sure how
this can work with skb fragments and an MMAP netlink socket.
Here is what happens: OVS allocates a netlink skb and
On Thu, Mar 06, 2014 at 06:33:58PM +0100, Jan Kiszka wrote:
We cannot rely on the hardware-provided preemption timer support because
we are holding L2 in HLT outside non-root mode.
Furthermore, emulating
the preemption timer will resolve tick rate errata on older Intel CPUs.
Can you describe this
On 07/03/2014 18:20, Marcelo Tosatti wrote:
On Thu, Mar 06, 2014 at 06:33:58PM +0100, Jan Kiszka wrote:
We cannot rely on the hardware-provided preemption timer support because
we are holding L2 in HLT outside non-root mode.
Furthermore, emulating
the preemption timer will resolve tick rate
On 2014-03-07 17:46, Paolo Bonzini wrote:
On 07/03/2014 17:29, Jan Kiszka wrote:
On 2014-03-07 16:44, Paolo Bonzini wrote:
With this patch do we still need
if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
/*
* We get here if
On Fri, Mar 7, 2014 at 4:29 AM, Zoltan Kiss zoltan.k...@citrix.com wrote:
On 07/03/14 04:46, Pravin Shelar wrote:
On Thu, Mar 6, 2014 at 9:09 AM, Zoltan Kiss zoltan.k...@citrix.com
wrote:
Do you have any feedback on this? I'm also adding KVM list as they might
be
interested in this.
Zoli
On Fri, Mar 07, 2014 at 02:26:19PM +, Li, Bin (Bin) wrote:
Can we have per-VM PLE values?
My understanding is that the ple values are a kvm module setting which applies
to all VMs in the system.
And all VMs must be stopped first, then kvm-intel unloaded and reloaded
with the new ple
On 03/07/2014 06:19 PM, Pravin Shelar wrote:
On Fri, Mar 7, 2014 at 7:58 AM, Thomas Graf tg...@redhat.com wrote:
On 03/07/2014 05:46 AM, Pravin Shelar wrote:
But I found a bug in the datapath user-space queue code. I am not sure how
this can work with skb fragments and an MMAP netlink socket.
Here is
On 2014-03-07 18:28, Jan Kiszka wrote:
On 2014-03-07 17:46, Paolo Bonzini wrote:
On 07/03/2014 17:29, Jan Kiszka wrote:
On 2014-03-07 16:44, Paolo Bonzini wrote:
With this patch do we still need
if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
/*
On 07/03/2014 19:19, Jan Kiszka wrote:
On 2014-03-07 18:28, Jan Kiszka wrote:
On 2014-03-07 17:46, Paolo Bonzini wrote:
On 07/03/2014 17:29, Jan Kiszka wrote:
On 2014-03-07 16:44, Paolo Bonzini wrote:
With this patch do we still need
if (is_guest_mode(vcpu)
On 2014-03-07 19:19, Jan Kiszka wrote:
On 2014-03-07 18:28, Jan Kiszka wrote:
On 2014-03-07 17:46, Paolo Bonzini wrote:
On 07/03/2014 17:29, Jan Kiszka wrote:
On 2014-03-07 16:44, Paolo Bonzini wrote:
With this patch do we still need
if (is_guest_mode(vcpu)
On Fri, Mar 7, 2014 at 10:05 AM, Thomas Graf tg...@redhat.com wrote:
On 03/07/2014 06:19 PM, Pravin Shelar wrote:
On Fri, Mar 7, 2014 at 7:58 AM, Thomas Graf tg...@redhat.com wrote:
On 03/07/2014 05:46 AM, Pravin Shelar wrote:
But I found a bug in the datapath user-space queue code. I am not
According to SDM 27.2.3, IDT vectoring information will not be valid on
vmexits caused by external NMIs. So we have to avoid creating such
scenarios by delaying EXIT_REASON_EXCEPTION_NMI injection as long as we
have a pending interrupt because that one would be migrated to L1's IDT
vectoring info
Move the check for leaving L2 on pending and intercepted IRQs or NMIs
from the *_allowed handler into a dedicated callback. Invoke this
callback at the relevant points before KVM checks if IRQs/NMIs can be
injected. The callback has the task to switch from L2 to L1 if needed
and inject the proper
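The call-site shape would be roughly the following (a sketch assuming the
callback name used in this series, check_nested_events; the surrounding
injection logic is elided):

static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
{
        int r;

        /* Before deciding whether an IRQ/NMI can be injected, let the
         * nested code switch from L2 to L1 if an intercept demands it. */
        if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
                r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
                if (r != 0)
                        return r;       /* vmexit to L1 in progress */
        }

        /* ... regular event injection elided ... */
        return 0;
}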
As I noticed a rebase conflict in these pending patches and wanted to
point out that they are still pending ;), here is a quick update round.
No functional changes since v2.
Jan
Jan Kiszka (4):
KVM: nVMX: Rework interception of IRQs and NMIs
KVM: nVMX: Fully emulate preemption timer
KVM:
We cannot rely on the hardware-provided preemption timer support because
we are holding L2 in HLT outside non-root mode. Furthermore, emulating
the preemption timer will resolve tick rate errata on older Intel CPUs.
The emulation is based on an hrtimer which is started on L2 entry, stopped
on L2 exit and
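The hrtimer pattern described here looks roughly like this (a sketch;
field names such as nested.preemption_timer follow this series but are not
quoted verbatim):

static enum hrtimer_restart vmx_preemption_timer_fn(struct hrtimer *timer)
{
        struct vcpu_vmx *vmx =
                container_of(timer, struct vcpu_vmx, nested.preemption_timer);

        /* Timer fired while L2 was running: request an event check so
         * the vCPU exits to L1 with a preemption-timer vmexit. */
        vmx->nested.preemption_timer_expired = true;
        kvm_make_request(KVM_REQ_EVENT, &vmx->vcpu);
        kvm_vcpu_kick(&vmx->vcpu);
        return HRTIMER_NORESTART;
}

static void vmx_start_preemption_timer(struct kvm_vcpu *vcpu, u64 delta_ns)
{
        struct vcpu_vmx *vmx = to_vmx(vcpu);

        /* Armed on L2 entry; the L2-exit path calls hrtimer_cancel(). */
        hrtimer_start(&vmx->nested.preemption_timer, ns_to_ktime(delta_ns),
                      HRTIMER_MODE_REL);
}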
It's no longer possible to enter enable_irq_window in guest mode when
L1 intercepts external interrupts and we are entering L2. This is now
caught in vcpu_enter_guest. So we can remove the check from the VMX
version of enable_irq_window, and with it the need to return an error code from
both
On 2014-03-07 20:03, Jan Kiszka wrote:
As I noticed a rebase conflict in these pending patches and wanted to
point out that they are still pending ;), here is a quick update round.
No functional changes since v2.
Forgot to press save to send this as well:
Also passed some stress testing of
On 07/03/2014 19:26, Jan Kiszka wrote:
Reading through my code again, I'm now wondering why I added
check_nested_events to both inject_pending_event and vcpu_enter_guest.
The former seems redundant, since only vcpu_enter_guest calls
inject_pending_event. I guess I forgot a cleanup here.
Nah,
On 07/03/2014 20:03, Jan Kiszka wrote:
@@ -4631,22 +4631,8 @@ static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
{
- if (is_guest_mode(vcpu)) {
- if (to_vmx(vcpu)->nested.nested_run_pending)
-
From: Jason Wang jasow...@redhat.com
Date: Fri, 7 Mar 2014 13:28:27 +0800
This is because the delay added by htb may delay the completion
of DMAs and cause the pending DMAs for tap0 to exceed the limit
(VHOST_MAX_PEND). In this case vhost stops handling tx requests until
htb sends some
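The limit works roughly like this (an illustrative sketch modeled on
drivers/vhost/net.c of this era; the helper name is hypothetical):
upend_idx marks the most recently submitted zerocopy DMA and done_idx the
oldest uncompleted one, and tx handling stops once their distance reaches
VHOST_MAX_PEND.

static bool tx_pend_limit_reached(struct vhost_net_virtqueue *nvq)
{
        /* Number of zerocopy DMAs still in flight, with wraparound over
         * the UIO_MAXIOV-sized ring of pending entries. */
        int pend = (nvq->upend_idx - nvq->done_idx + UIO_MAXIOV) % UIO_MAXIOV;

        return pend >= VHOST_MAX_PEND;
}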
Fully agree.
It will be a very helpful feature to make the ple setting per-VM.
This feature will provide more flexible control to the VM user. All KVM users
will love to have it.
The enhancement we proposed is neither overlapping nor conflicting with this
feature. The enhancement is targeting to
On Fri, Mar 07, 2014 at 10:08:52PM +, Li, Bin (Bin) wrote:
Fully agree.
It will be a very helpful feature to make the ple setting per-VM.
This feature will provide more flexible control to the VM user. All KVM users
will love to have it.
The enhancement we proposed is neither overlapping
On Thu, 06 Mar 2014 10:46:22 +0100, Paolo Bonzini pbonz...@redhat.com wrote:
On 06/03/2014 09:52, Robie Basak wrote:
On Sat, Mar 01, 2014 at 03:27:56PM +, Grant Likely wrote:
I would also reference section 3.3 (Boot Option Variables Default Boot
Behavior) and 3.4.1.1 (Removable
On Thu, 6 Mar 2014 12:04:50 +, Robie Basak robie.ba...@canonical.com
wrote:
On Thu, Mar 06, 2014 at 12:44:57PM +0100, Laszlo Ersek wrote:
If I understand correctly, the question is this:
Given a hypervisor that doesn't support non-volatile UEFI variables
(including BootOrder and
On Thu, 6 Mar 2014 08:52:13 +, Robie Basak robie.ba...@canonical.com
wrote:
On Sat, Mar 01, 2014 at 03:27:56PM +, Grant Likely wrote:
I would also reference section 3.3 (Boot Option Variables Default Boot
Behavior) and 3.4.1.1 (Removable Media Boot Behavior) here. It's fine to