On Thu, 2021-04-01 at 15:03 +0200, Vitaly Kuznetsov wrote:
> Maxim Levitsky <mlevi...@redhat.com> writes:
> 
> > Currently, to support Intel->AMD migration, if the CPU vendor is GenuineIntel
> > we emulate the full 64 bit value for the MSR_IA32_SYSENTER_{EIP|ESP}
> > msrs, and we also emulate the sysenter/sysexit instructions in long mode.
> > 
> > (The emulator still refuses to emulate sysenter in 64 bit mode, on the
> > grounds that the code for that wasn't tested and likely has no users.)
> > 
> > However, when virtual vmload/vmsave is enabled, the vmload instruction
> > updates these 32 bit msrs without triggering their msr intercept,
> > leaving stale values in kvm's shadow copy of these msrs, which relies
> > on the intercept to stay up to date.
> > 
> > Fix/optimize this by doing the following:
> > 
> > 1. Enable the MSR intercepts for the SYSENTER MSRs iff vendor=GenuineIntel.
> >    (This is both a tiny optimization and also ensures that when the
> >    guest cpu vendor is AMD, the msrs will be 32 bit wide, as defined
> >    by AMD.)
> > 
> > 2. Store only the high 32 bit part of these msrs on interception, and
> >    combine it with the hardware msr value on intercepted reads/writes
> >    iff vendor=GenuineIntel.
> > 
> > 3. Disable vmload/vmsave virtualization if vendor=GenuineIntel.
> >    (It is somewhat insane to set vendor=GenuineIntel and still enable
> >    SVM for the guest but well whatever).
> >    Then zero the high 32 bit parts when kvm intercepts and emulates vmload.
> > 
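To make (2) above a bit more concrete, the intercepted read path ends up
looking roughly like the sketch below (simplified, not the exact hunk,
which isn't quoted in this mail; guest_cpuid_is_intel() is the existing
helper from cpuid.h and sysenter_eip_hi is the new cached high half):

        case MSR_IA32_SYSENTER_EIP:
                /* On AMD, hardware only keeps the low 32 bits of this MSR */
                msr_info->data = (u32)svm->vmcb->save.sysenter_eip;
                /* Fold in the high half cached on the last intercepted write */
                if (guest_cpuid_is_intel(vcpu))
                        msr_info->data |= (u64)svm->sysenter_eip_hi << 32;
                break;

The write path does the reverse: it hands the low 32 bits to the vmcb and
stashes bits 63:32 in svm->sysenter_eip_hi, but only when the guest vendor
is Intel.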
> > Thanks a lot to Paulo Bonzini for helping me fix this in the most
> 
> s/Paulo/Paolo/ :-)
Sorry about that!

> 
> > correct way.
> > 
> > This patch fixes nested migration of 32 bit nested guests, which was
> > broken because incorrect cached values of the SYSENTER msrs were stored
> > in the migration stream if L1 changed these msrs with vmload prior to
> > L2 entry.
> > 
> > Signed-off-by: Maxim Levitsky <mlevi...@redhat.com>
> > ---
> >  arch/x86/kvm/svm/svm.c | 99 +++++++++++++++++++++++++++---------------
> >  arch/x86/kvm/svm/svm.h |  6 +--
> >  2 files changed, 68 insertions(+), 37 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 271196400495..6c39b0cd6ec6 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -95,6 +95,8 @@ static const struct svm_direct_access_msrs {
> >  } direct_access_msrs[MAX_DIRECT_ACCESS_MSRS] = {
> >     { .index = MSR_STAR,                            .always = true  },
> >     { .index = MSR_IA32_SYSENTER_CS,                .always = true  },
> > +   { .index = MSR_IA32_SYSENTER_EIP,               .always = false },
> > +   { .index = MSR_IA32_SYSENTER_ESP,               .always = false },
> >  #ifdef CONFIG_X86_64
> >     { .index = MSR_GS_BASE,                         .always = true  },
> >     { .index = MSR_FS_BASE,                         .always = true  },
> > @@ -1258,16 +1260,6 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
> >     if (kvm_vcpu_apicv_active(vcpu))
> >             avic_init_vmcb(svm);
> >  
> > -   /*
> > -    * If hardware supports Virtual VMLOAD VMSAVE then enable it
> > -    * in VMCB and clear intercepts to avoid #VMEXIT.
> > -    */
> > -   if (vls) {
> > -           svm_clr_intercept(svm, INTERCEPT_VMLOAD);
> > -           svm_clr_intercept(svm, INTERCEPT_VMSAVE);
> > -           svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
> > -   }
> > -
> >     if (vgif) {
> >             svm_clr_intercept(svm, INTERCEPT_STGI);
> >             svm_clr_intercept(svm, INTERCEPT_CLGI);
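The vls setup removed here presumably gets re-done in the CPUID update
path, where the guest vendor is known, so that it can be made conditional
as per (1) and (3) above. Roughly, assuming it lands in
svm_vcpu_after_set_cpuid() (a sketch, the actual hunk isn't quoted in
this mail):

        if (guest_cpuid_is_intel(vcpu)) {
                /* Intel guest: intercept SYSENTER MSRs and VMLOAD/VMSAVE */
                svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
                svm_set_intercept(svm, INTERCEPT_VMLOAD);
                svm_set_intercept(svm, INTERCEPT_VMSAVE);
                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 0, 0);
                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 0, 0);
        } else {
                /* AMD guest: keep virtual VMLOAD/VMSAVE and direct MSR access */
                if (vls) {
                        svm_clr_intercept(svm, INTERCEPT_VMLOAD);
                        svm_clr_intercept(svm, INTERCEPT_VMSAVE);
                        svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
                }
                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_EIP, 1, 1);
                set_msr_interception(vcpu, svm->msrpm, MSR_IA32_SYSENTER_ESP, 1, 1);
        }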
> > @@ -2133,9 +2125,11 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
> >  
> >     ret = kvm_skip_emulated_instruction(vcpu);
> >  
> > -   if (vmload)
> > +   if (vmload) {
> >             nested_svm_vmloadsave(vmcb12, svm->vmcb);
> > -   else
> > +           svm->sysenter_eip_hi = 0;
> > +           svm->sysenter_esp_hi = 0;
> > +   } else
> >             nested_svm_vmloadsave(svm->vmcb, vmcb12);
> 
> Nitpicking: {} are now needed for both branches here.
I didn't know about this rule; I'll take it into account next time.
Thanks!
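
For reference, with {} on both branches the hunk would read something like:

        if (vmload) {
                nested_svm_vmloadsave(vmcb12, svm->vmcb);
                /*
                 * Emulated vmload loads the AMD-defined 32 bit values, so
                 * the cached Intel-style high halves are stale now.
                 */
                svm->sysenter_eip_hi = 0;
                svm->sysenter_esp_hi = 0;
        } else {
                nested_svm_vmloadsave(svm->vmcb, vmcb12);
        }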


Best regards,
        Maxim Levitsky

