> the VP assist page is unmapped
> during CPU hot unplug, and so KVM's clearing of the eVMCS controls needs
> to occur with CPU hot(un)plug disabled, otherwise KVM could attempt to
> write to a CPU's VP assist page after it's unmapped.
>
> Reported-by: Vitaly Kuznetsov
> Signed-off-by: Sean Christopherson
> - if (ms_hyperv.nested_features & HV_X64_NESTED_DIRECT_FLUSH)
> - vmx_x86_ops.enable_l2_tlb_flush
> - = hv_enable_l2_tlb_flush;
> -
> - } else {
> - enlightened_vmcs = false;
> - }
> -#endif
> + hv_init_evmcs();
>
> r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
> __alignof__(struct vcpu_vmx), THIS_MODULE);
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
Sean Christopherson writes:
> On Thu, Nov 03, 2022, Vitaly Kuznetsov wrote:
>> Sean Christopherson writes:
>> > + /*
>> > + * Reset everything to support using non-enlightened VMCS access later
>> > + * (e.g. when we reloa
Sean Christopherson writes:
> To make it obvious that KVM doesn't have a lurking bug, cleanup eVMCS
> enabling if kvm_init() fails even though the enabling doesn't strictly
> need to be unwound. eVMCS enabling only toggles values that are fully
> contained in the VMX module, i.e. it's
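The init/unwind ordering described above can be sketched in plain userspace C. Only hv_init_evmcs() comes from the patch; hv_reset_evmcs(), the int-returning vmx_init() wrapper, and the kvm_init_ret parameter are illustrative stand-ins, not the kernel functions:

```c
#include <stdbool.h>

static bool enlightened_vmcs;    /* module-local, like the real VMX copy */

static void hv_init_evmcs(void)  { enlightened_vmcs = true; }
static void hv_reset_evmcs(void) { enlightened_vmcs = false; }

/* kvm_init_ret stands in for the result of the real kvm_init() call. */
static int vmx_init(int kvm_init_ret)
{
    hv_init_evmcs();
    if (kvm_init_ret != 0) {
        hv_reset_evmcs();   /* unwind, even though the stale values are
                             * only visible inside this module */
        return kvm_init_ret;
    }
    return 0;
}
```

The point of the cleanup is exactly what the failure branch makes obvious: a reader can see at a glance that nothing stale survives a failed load.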
Gavin Shan writes:
> This enables asynchronous page fault from guest side. The design
> is highlighted as below:
>
> * The per-vCPU shared memory region, which is represented by
> "struct kvm_vcpu_pv_apf_data", is allocated. The reason and
> token associated with the received
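A minimal sketch of the per-vCPU shared region described above. The field names and layout here are purely illustrative; the real struct kvm_vcpu_pv_apf_data layout is fixed by the guest/host ABI:

```c
#include <stdint.h>

/* Illustrative stand-in for the per-vCPU shared region. */
struct pv_apf_data {
    uint32_t reason;   /* why the event was delivered */
    uint32_t token;    /* identifies the outstanding async page fault */
};

/* Host side: publish the reason/token pair into the shared region
 * so the guest's notification handler can read them. */
static void apf_publish(struct pv_apf_data *shared, uint32_t reason,
                        uint32_t token)
{
    shared->reason = reason;
    shared->token = token;
}
```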
Gavin Shan writes:
> This adds inline helper kvm_check_async_pf_completion_queue() to
> check if there are pending completions in the queue. The empty stub
> is also added on !CONFIG_KVM_ASYNC_PF so that the caller needn't
> consider if CONFIG_KVM_ASYNC_PF is enabled.
>
> All checks on the
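The helper-plus-stub pattern described above, sketched with a hypothetical queue type (the real helper checks the vCPU's async-PF completion list, not this struct):

```c
#include <stdbool.h>
#include <stddef.h>

/* Defined here so this sketch exercises the "real" branch; dropping
 * the define selects the empty stub instead. */
#define CONFIG_KVM_ASYNC_PF 1

struct completion_queue { size_t pending; };   /* illustrative type */

#ifdef CONFIG_KVM_ASYNC_PF
static inline bool kvm_check_async_pf_completion_queue(struct completion_queue *q)
{
    return q->pending != 0;
}
#else
/* Empty stub: callers need not wrap every call site in #ifdef. */
static inline bool kvm_check_async_pf_completion_queue(struct completion_queue *q)
{
    return false;
}
#endif
```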
Sergey Senozhatsky writes:
> Add a KVM suspend/hibernate PM notifier which lets architectures
> implement arch-specific VM suspend code. For instance, on x86
> this sets PVCLOCK_GUEST_STOPPED on all the VCPUs.
>
> Our case is that user puts the host system into sleep multiple
> times a day
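The notifier flow described above can be sketched in userspace C. The arch hook and the bool flag are simplified stand-ins (on x86 the real hook would set PVCLOCK_GUEST_STOPPED on every vCPU); PM_SUSPEND_PREPARE mirrors the value in linux/suspend.h:

```c
#include <stdbool.h>

#define PM_SUSPEND_PREPARE 0x0003   /* value mirrors linux/suspend.h */

static bool pvclock_guest_stopped;  /* stands in for the per-vCPU flag */

/* Arch-specific hook: on x86 this is where PVCLOCK_GUEST_STOPPED
 * would be set so the guest can detect the lost time. */
static void kvm_arch_pm_notifier(unsigned long state)
{
    if (state == PM_SUSPEND_PREPARE)
        pvclock_guest_stopped = true;
}

/* Generic notifier callback registered with the PM core. */
static int kvm_pm_notifier_call(unsigned long state)
{
    kvm_arch_pm_notifier(state);
    return 0;   /* NOTIFY_DONE */
}
```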
Anastassios Nanos writes:
> Moreover, it doesn't involve *any* mode switch at all while printing
> out the result of the addition of these two registers -- which I
> guess for a simple use-case like this it isn't much.
> But if we were to scale this to a large number of exits (and their
>
> - sync_regs(vcpu, kvm_run);
> + sync_regs(vcpu);
> enable_cpu_timer_accounting(vcpu);
>
> might_fault();
> @@ -4393,7 +4400,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> }
>
> disable_cpu_timer_accounting(vcpu);
> - store_regs(vcpu, kvm_run);
> + store_regs(vcpu);
>
> kvm_sigset_deactivate(vcpu);
Haven't tried to compile this but the change itself looks obviously
correct, so
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -290,8 +290,7 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
> r = RESUME_HOST;
> break;
> }
> - r = kvmhv_run_singl
> +++ b/virt/kvm/arm/mmu.c
> @@ -1892,7 +1892,6 @@ static void handle_access_fault(struct kvm_vcpu *vcpu,
> phys_addr_t fault_ipa)
> /**
> * kvm_handle_guest_abort - handles all 2nd stage aborts
> * @vcpu: the VCPU pointer
> - * @run: the kvm_run structure
> *
Tianjia Zhang writes:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining
_vcpu'. This likely deserves its own patch though.
> if (ret)
> return ret;
> }
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 74bdb7bf3295..e18faea89146 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -
Tianjia Zhang writes:
> kvm_arch_vcpu_ioctl_run() is only called in the file kvm_main.c,
> where vcpu->run is the kvm_run parameter, so it has been replaced.
>
> Signed-off-by: Tianjia Zhang
> ---
> arch/x86/kvm/x86.c | 8
> virt/kvm/arm/arm.c | 2 +-
> 2 files changed, 5
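The redundant-parameter cleanup described above, sketched with simplified stand-ins for the kernel structures:

```c
#include <stdint.h>

struct kvm_run  { uint32_t exit_reason; };   /* simplified stand-in */
struct kvm_vcpu { struct kvm_run *run; };    /* simplified stand-in */

/* Before the cleanup the signature was store_regs(vcpu, kvm_run);
 * the second parameter was redundant because it is always vcpu->run,
 * so the run structure is derived from the vCPU itself. */
static uint32_t store_regs(struct kvm_vcpu *vcpu)
{
    return vcpu->run->exit_reason;
}
```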
Sean Christopherson writes:
> On Mon, Mar 23, 2020 at 01:10:40PM +0100, Vitaly Kuznetsov wrote:
>> Sean Christopherson writes:
>>
>> > +
>> > + .runtime_ops = &vmx_x86_ops,
>> > +};
>>
>> Unrelated to your patch but I think we can make th
> static bool vmx_check_apicv_inhibit_reasons(ulong bit)
> return supported & BIT(bit);
> }
>
> -static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
> +static struct kvm_x86_ops vmx_x86_ops __initdata = {
> .hardware_unsetup = hardware_unsetup,
>
> .h
* value. Don't use the timer if it might cause spurious exits
> + * at a rate faster than 0.1 Hz (of uninterrupted guest time).
> + */
> + if (use_timer_freq > 0xffffffffu / 10)
> + enable_preemption_timer = false;
> +
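The arithmetic behind the check above can be isolated in a small helper (the helper name is illustrative; the kernel open-codes the comparison in hardware_setup()):

```c
#include <stdbool.h>
#include <stdint.h>

/* The VMX preemption timer field is 32 bits wide, so the longest
 * programmable interval is 0xffffffff timer ticks. Accepting only
 * tick frequencies up to 0xffffffffu / 10 Hz keeps that maximum
 * interval at 10 seconds or more, bounding spurious exits to 0.1 Hz
 * of uninterrupted guest time. */
static bool preemption_timer_usable(uint32_t use_timer_freq)
{
    return use_timer_freq <= 0xffffffffu / 10;
}
```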
Sean Christopherson writes:
> Replace the kvm_x86_ops pointer in common x86 with an instance of the
> struct to save one memory instance when invoking function. Copy the
> struct by value to set the ops during kvm_init().
>
> Arbitrarily use kvm_x86_ops.hardware_enable to track whether or not
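The by-value copy described above, sketched with a simplified ops struct (the real kvm_x86_ops has many more members, and the copy happens during kvm_init()):

```c
#include <string.h>

struct x86_ops {                       /* simplified stand-in */
    int (*hardware_enable)(void);
};

static int vmx_hardware_enable(void) { return 0; }

/* Vendor module's template, analogous to vmx_x86_ops ... */
static const struct x86_ops vendor_ops = {
    .hardware_enable = vmx_hardware_enable,
};

/* ... copied by value into the instance that common code dereferences
 * directly, saving the extra pointer load on every op invocation. */
static struct x86_ops kvm_x86_ops;

static void kvm_ops_init(const struct x86_ops *ops)
{
    memcpy(&kvm_x86_ops, ops, sizeof(kvm_x86_ops));
}
```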
xit;
> }
>
> kvm_set_posted_intr_wakeup_handler(wakeup_handler);
> @@ -7965,7 +7965,8 @@ static __init int hardware_setup(void)
> nested_vmx_setup_ctls_msrs(&vmcs_config.nested,
> vmx_capability.ept);
>
> - r = nested_vmx_hardware_setup(kvm_vmx_exit_handlers);
> + r = nested_vmx_hardware_setup(&vmx_x86_ops,
> + kvm_vmx_exit_handlers);
> if (r)
> return r;
> }
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
> .hardware_enable = svm_hardware_enable,
> .hardware_disable = svm_hardware_disable,
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
kvm_vcpu *vcpu)
> return to_vmx(vcpu)->nested.vmxon;
> }
>
> -static __exit void hardware_unsetup(void)
> +static void hardware_unsetup(void)
> {
> if (nested)
> nested_vmx_hardware_unsetup();
Reviewed-by: Vitaly Kuznetsov
--
Vitaly
> - r = kvm_x86_ops->hardware_setup();
> + r = ops->hardware_setup();
> if (r != 0)
> return r;
>
> @@ -9665,13 +9666,14 @@ void kvm_arch_hardware_unsetup(void)
> int kvm_arch_check_processor_compat(void *opaque)
> {
> struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
Sean Christopherson writes:
> On Fri, Feb 07, 2020 at 07:53:34PM -0500, Peter Xu wrote:
>> On Fri, Feb 07, 2020 at 04:42:33PM -0800, Sean Christopherson wrote:
>> > On Fri, Feb 07, 2020 at 07:18:32PM -0500, Peter Xu wrote:
>> > > On Fri, Feb 07, 2020 at 11:45:32AM -0800, Sean Christopherson
Sean Christopherson writes:
> +Vitaly for HyperV
>
> On Thu, Feb 06, 2020 at 04:41:06PM -0500, Peter Xu wrote:
>> On Thu, Feb 06, 2020 at 01:21:20PM -0800, Sean Christopherson wrote:
>> > On Thu, Feb 06, 2020 at 03:02:00PM -0500, Peter Xu wrote:
>> > > But that matters to this patch because if