On 2024-04-19 01:47 PM, James Houghton wrote:
> On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
> > On 2024-04-11 10:08 AM, David Matlack wrote:
> > bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> > {
> > bool young = false;
> >
>
On 2024-04-01 11:29 PM, James Houghton wrote:
> Only handle the TDP MMU case for now. In other cases, if a bitmap was
> not provided, fall back to the slowpath that takes mmu_lock, or, if a
> bitmap was provided, inform the caller that the bitmap is unreliable.
I think this patch will trigger a
On 2024-04-01 11:29 PM, James Houghton wrote:
> Add kvm_arch_prepare_bitmap_age() for architectures to indicate that
> they support bitmap-based aging in kvm_mmu_notifier_test_clear_young()
> and that they do not need KVM to grab the MMU lock for writing. This
> function allows architectures to
On 2024-04-01 11:29 PM, James Houghton wrote:
> The bitmap is provided for secondary MMUs to use if they support it. For
> test_young(), after it returns, the bitmap represents the pages that
> were young in the interval [start, end). For clear_young, it represents
> the pages that we wish the
On 2024-04-01 11:29 PM, James Houghton wrote:
> This patchset adds a fast path in KVM to test and clear access bits on
> sptes without taking the mmu_lock. It also adds support for using a
> bitmap to (1) test the access bits for many sptes in a single call to
> mmu_notifier_test_young, and to (2)
On Thu, Apr 11, 2024 at 11:00 AM David Matlack wrote:
>
> On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
> >
> > On 2024-04-11 10:08 AM, David Matlack wrote:
> > > On 2024-04-01 11:29 PM, James Houghton wrote:
> > > > Only handle the TDP MMU ca
On Thu, Apr 11, 2024 at 10:28 AM David Matlack wrote:
>
> On 2024-04-11 10:08 AM, David Matlack wrote:
> > On 2024-04-01 11:29 PM, James Houghton wrote:
> > > Only handle the TDP MMU case for now. In other cases, if a bitmap was
> > > not provided, fall back to th
On 2024-04-11 10:08 AM, David Matlack wrote:
> On 2024-04-01 11:29 PM, James Houghton wrote:
> > Only handle the TDP MMU case for now. In other cases, if a bitmap was
> > not provided, fall back to the slowpath that takes mmu_lock, or, if a
> > bitmap was provided, inform the c
On 2024-04-01 11:29 PM, James Houghton wrote:
> Only handle the TDP MMU case for now. In other cases, if a bitmap was
> not provided, fall back to the slowpath that takes mmu_lock, or, if a
> bitmap was provided, inform the caller that the bitmap is unreliable.
>
> Suggested-by: Yu Zhao
>
On Mon, Jul 2, 2018 at 11:23 PM Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> Implement paravirtual apic hooks to enable PV IPIs.
Very cool. Thanks for working on this!
>
> apic->send_IPI_mask
> apic->send_IPI_mask_allbutself
> apic->send_IPI_allbutself
> apic->send_IPI_all
>
> The PV IPIs
On Thu, Jul 27, 2017 at 6:54 AM, Paolo Bonzini wrote:
> Since the current implementation of VMCS12 does a memcpy in and out
> of guest memory, we do not need current_vmcs12 and current_vmcs12_page
> anymore. current_vmptr is enough to read and write the VMCS12.
This patch also fixes dirty
On Thu, Feb 16, 2017 at 1:33 AM, Paolo Bonzini wrote:
>
> The FPU is always active now when running KVM.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: David Matlack
Glad to see this cleanup! Thanks for doing it.
> ---
> arch/x86/include/asm/kvm_host.h | 3 --
>
On Tue, Nov 29, 2016 at 12:40 PM, Kyle Huey wrote:
> We can't return both the pass/fail boolean for the vmcs and the upcoming
> continue/exit-to-userspace boolean for skip_emulated_instruction out of
> nested_vmx_check_vmcs, so move skip_emulated_instruction out of it instead.
>
> Additionally,
static_key_deferred_flush() API to flush pending updates on
module unload.
Signed-off-by: David Matlack
---
arch/x86/kvm/lapic.c | 6 ++
arch/x86/kvm/lapic.h | 1 +
arch/x86/kvm/x86.c | 1 +
3 files changed, 8 insertions(+)
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 34a66b2..1b80fa3 100644
Modules that use static_key_deferred need a way to synchronize with
any delayed work that is still pending when the module is unloaded.
Introduce static_key_deferred_flush() which flushes any pending
jump label updates.
Signed-off-by: David Matlack
---
include/linux/jump_label_ratelimit.h | 5
On Wed, Nov 30, 2016 at 2:33 PM, Paolo Bonzini wrote:
> - Original Message -
>> From: "Radim Krčmář"
>> To: "David Matlack"
>> Cc: k...@vger.kernel.org, linux-kernel@vger.kernel.org, jmatt...@google.com,
>> pbonz...@redhat.com
>
On Wed, Nov 30, 2016 at 3:22 AM, Paolo Bonzini wrote:
>
>
> On 30/11/2016 03:14, David Matlack wrote:
>> This patchset adds support setting the VMX capability MSRs from userspace.
>> This is required for migration of nested-capable VMs to different CPUs and
>>
On Wed, Nov 30, 2016 at 3:16 AM, Paolo Bonzini wrote:
> On 30/11/2016 03:14, David Matlack wrote:
>>
>> /* secondary cpu-based controls */
>> @@ -2868,36 +2865,32 @@ static int vmx_get_vmx_msr(struct kvm_vcpu *vcpu,
>> u32 msr_index, u64 *pdata)
>>
, they do not need to be on
the default MSR save/restore lists. The userspace hypervisor can set
the values of these MSRs or read them from KVM at VCPU creation time,
and restore the same value after every save/restore.
Signed-off-by: David Matlack
---
arch/x86/include/asm/vmx.h | 31 +
arch
.PG is 0. Previously this configuration would succeed and
"IA-32e mode guest" would silently be disabled by KVM.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4927
.
This patch also initializes MSR_IA32_CR{0,4}_FIXED1 from the CPU's MSRs
by default. This is saner than the current default of -1ull, which
includes bits that the host CPU does not support.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 55
not verify must-be-0 bits. Fix these checks
to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
This patch should introduce no change in behavior in KVM, since these
MSRs are still -1ULL.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 77
MX_BASIC,
MSR_IA32_VMX_CR{0,4}_FIXED{0,1}.
* Include VMX_INS_OUTS in MSR_IA32_VMX_BASIC initial value.
David Matlack (5):
KVM: nVMX: generate non-true VMX MSRs based on true versions
KVM: nVMX: support restore of VMX capability MSRs
KVM: nVMX: fix checks on CR{0,4} during virtual VMX ope
ested_vmx. This also lets
userspace avoid having to restore the non-true MSRs.
Note this does not preclude emulating MSR_IA32_VMX_BASIC[55]=0. To do so,
we simply need to set all the default1 bits in the true MSRs (such that
the true MSRs and the generated non-true MSRs are equal).
Signed-off-by:
On Tue, Nov 29, 2016 at 12:01 AM, Paolo Bonzini wrote:
>> On Mon, Nov 28, 2016 at 2:48 PM, Paolo Bonzini wrote:
>> > On 28/11/2016 22:11, David Matlack wrote:
>> >> > PINBASED_CTLS, PROCBASED_CTLS, EXIT_CTLS and ENTRY_CTLS can be derived
>> >> > fr
On Mon, Nov 28, 2016 at 2:48 PM, Paolo Bonzini wrote:
> On 28/11/2016 22:11, David Matlack wrote:
>> > PINBASED_CTLS, PROCBASED_CTLS, EXIT_CTLS and ENTRY_CTLS can be derived
>> > from their "true" counterparts, so I think it's better to remove the
>> >
On Wed, Nov 23, 2016 at 3:28 PM, David Matlack wrote:
> On Wed, Nov 23, 2016 at 2:11 PM, Paolo Bonzini wrote:
>> On 23/11/2016 23:07, David Matlack wrote:
>>> A downside of this scheme is we'd have to remember to update
>>> nested_vmx_cr4_fixed1_update() before givin
On Wed, Nov 23, 2016 at 3:44 AM, Paolo Bonzini wrote:
> On 23/11/2016 02:14, David Matlack wrote:
>> switch (msr_index) {
>> case MSR_IA32_VMX_BASIC:
>> + return vmx_restore_vmx_basic(vmx, data);
>> + case MSR_IA32_VMX_TRUE_P
On Wed, Nov 23, 2016 at 3:31 AM, Paolo Bonzini wrote:
> On 23/11/2016 02:14, David Matlack wrote:
>> +static bool fixed_bits_valid(u64 val, u64 fixed0, u64 fixed1)
>> +{
>> + return ((val & fixed0) == fixed0) && ((~val & ~fixed1) == ~fix
On Wed, Nov 23, 2016 at 3:45 AM, Paolo Bonzini wrote:
>
> On 23/11/2016 02:14, David Matlack wrote:
>> This patchset includes v2 of "KVM: nVMX: support restore of VMX capability
>> MSRs" (patch 1) as well as some additional related patches that came up
>> while p
On Wed, Nov 23, 2016 at 2:11 PM, Paolo Bonzini wrote:
> On 23/11/2016 23:07, David Matlack wrote:
>> A downside of this scheme is we'd have to remember to update
>> nested_vmx_cr4_fixed1_update() before giving VMs new CPUID bits. If we
>> forget, a VM could end up with differ
On Wed, Nov 23, 2016 at 11:24 AM, Paolo Bonzini wrote:
>
>
> On 23/11/2016 20:16, David Matlack wrote:
>> > Oh, I thought userspace would do that! Doing it in KVM is fine as well,
>> > but then do we need to give userspace access to CR{0,4}_FIXED{0,1} at all?
>
>> regenerate MSR_IA32_CR4_FIXED1 to match it.
>>
>> Signed-off-by: David Matlack
>
> Oh, I thought userspace would do that! Doing it in KVM is fine as well,
> but then do we need to give userspace access to CR{0,4}_FIXED{0,1} at all?
I think it should be safe for userspace to
.PG is 0. Previously this configuration would succeed and
"IA-32e mode guest" would silently be disabled by KVM.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index ac5d
Set MSR_IA32_CR{0,4}_FIXED1 to match the CPU's MSRs.
In addition, MSR_IA32_CR4_FIXED1 should reflect the available CR4 bits
according to CPUID. Whenever guest CPUID is updated by userspace,
regenerate MSR_IA32_CR4_FIXED1 to match it.
Signed-off-by: David Matlack
---
Note: "x86/cpufeature
not verify must-be-0 bits. Fix these checks
to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
This patch should introduce no change in behavior in KVM, since these
MSRs are still -1ULL.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 68
, they do not need to be on
the default MSR save/restore lists. The userspace hypervisor can set
the values of these MSRs or read them from KVM at VCPU creation time,
and restore the same value after every save/restore.
Signed-off-by: David Matlack
---
arch/x86/include/asm/vmx.h | 31 +
arch
d VM-entry that came up when
testing patches 2 and 3.
Changes since v1:
* Support restoring less-capable versions of MSR_IA32_VMX_BASIC,
MSR_IA32_VMX_CR{0,4}_FIXED{0,1}.
* Include VMX_INS_OUTS in MSR_IA32_VMX_BASIC initial value.
David Matlack (4):
KVM: nVMX: support restore of VMX capability
NABLE) and upon a
> cpuid-induced VM exit checks the cpuid faulting state and the CPL.
> kvm_require_cpl is even kind enough to inject the GP fault for us.
>
> Signed-off-by: Kyle Huey <kh...@kylehuey.com>
Reviewed-by: David Matlack <dmatl...@google.com>
(v10)
On Sun, Nov 6, 2016 at 12:57 PM, Kyle Huey wrote:
> Hardware support for faulting on the cpuid instruction is not required to
> emulate it, because cpuid triggers a VM exit anyways. KVM handles the relevant
> MSRs (MSR_PLATFORM_INFO and MSR_MISC_FEATURES_ENABLE) and upon a
> cpuid-induced VM exit
On Fri, Nov 4, 2016 at 2:57 PM, Paolo Bonzini wrote:
>
> On 04/11/2016 21:34, David Matlack wrote:
>> On Mon, Oct 31, 2016 at 6:37 PM, Kyle Huey wrote:
>>> + case MSR_PLATFORM_INFO:
>>> + /* cpuid faulting is supported */
>
On Mon, Oct 31, 2016 at 6:37 PM, Kyle Huey wrote:
> Hardware support for faulting on the cpuid instruction is not required to
> emulate it, because cpuid triggers a VM exit anyways. KVM handles the relevant
> MSRs (MSR_PLATFORM_INFO and MSR_MISC_FEATURES_ENABLE) and upon a
> cpuid-induced VM exit
On Fri, Sep 9, 2016 at 9:38 AM, Paolo Bonzini wrote:
>
> On 09/09/2016 00:13, David Matlack wrote:
>> Hi Paolo,
>>
>> On Tue, Sep 6, 2016 at 3:29 PM, Paolo Bonzini wrote:
>>> Bad things happen if a guest using the TSC deadline timer is migrated.
>>> Th
Hi Paolo,
On Tue, Sep 6, 2016 at 3:29 PM, Paolo Bonzini wrote:
> Bad things happen if a guest using the TSC deadline timer is migrated.
> The guest doesn't re-calibrate the TSC after migration, and the
> TSC frequency can and will change unless your processor supports TSC
> scaling (on Intel
On Thu, Jul 14, 2016 at 1:33 AM, Paolo Bonzini wrote:
>
>
> On 14/07/2016 02:16, David Matlack wrote:
>> KVM maintains L1's current VMCS in guest memory, at the guest physical
>> page identified by the argument to VMPTRLD. This makes hairy
>> time-of-check to time-of
ush during VMXOFF, which is not mandated by the spec,
but also not in conflict with the spec.
Signed-off-by: David Matlack
---
arch/x86/kvm/vmx.c | 31 ---
1 file changed, 28 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 64a79f
On Tue, Jul 5, 2016 at 10:36 AM, Paolo Bonzini wrote:
> Bad things happen if a guest using the TSC deadline timer is migrated.
> The guest doesn't re-calibrate the TSC after migration, and the
> TSC frequency can and will change unless your processor supports TSC
> scaling (on Intel this is only
On Thu, Jun 30, 2016 at 1:54 PM, Radim Krčmář wrote:
> KVM_CAP_X2APIC_API can be enabled to extend APIC ID in get/set ioctl and MSI
> addresses to 32 bits. Both are needed to support x2APIC.
>
> The capability has to be toggleable and disabled by default, because get/set
> ioctl shifted and
On Thu, Jun 16, 2016 at 9:47 AM, Paolo Bonzini wrote:
> On 16/06/2016 18:43, David Matlack wrote:
>> On Thu, Jun 16, 2016 at 1:21 AM, Paolo Bonzini wrote:
>>> This gains ~20 clock cycles per vmexit. On Intel there is no need
>>> anymore to enable the interrupts
On Thu, Jun 16, 2016 at 1:21 AM, Paolo Bonzini wrote:
> This gains ~20 clock cycles per vmexit. On Intel there is no need
> anymore to enable the interrupts in vmx_handle_external_intr, since we
> are using the "acknowledge interrupt on exit" feature. AMD needs to do
> that temporarily, and
On Tue, May 24, 2016 at 4:11 PM, Wanpeng Li wrote:
> 2016-05-25 6:38 GMT+08:00 David Matlack :
>> On Tue, May 24, 2016 at 12:57 AM, Wanpeng Li wrote:
>>> From: Wanpeng Li
>>>
>>> If an emulated lapic timer will fire soon(in the scope of 10us the
>>
M's default configuration, I'd prefer to
only add more polling when the gain is clear. If there are guest
workloads that want this patch, I'd suggest polling for timers be
default-off. At minimum, there should be a module parameter to control
it (like Christian Borntraeger suggested).
>
>
On Mon, May 23, 2016 at 6:13 PM, Yang Zhang wrote:
> On 2016/5/24 2:04, David Matlack wrote:
>>
>> On Sun, May 22, 2016 at 6:26 PM, Yang Zhang
>> wrote:
>>>
>>> On 2016/5/21 2:37, David Matlack wrote:
>>>>
>>>>
>>>
On Sun, May 22, 2016 at 6:26 PM, Yang Zhang wrote:
> On 2016/5/21 2:37, David Matlack wrote:
>>
>> It's not obvious to me why polling for a timer interrupt would improve
>> context switch latency. Can you explain a bit more?
>
>
> We have a workload which using high
er expiration before schedule out.
>
> iperf TCP get ~6% bandwidth improvement.
I think my question got lost in the previous thread :). Can you
explain why TCP bandwidth improves with this patch?
>
> Cc: Paolo Bonzini <pbonz...@redhat.com>
> Cc: Radim Krčmář <rkrc...@r