On Fri, 2020-06-05 at 13:07 -0700, Sean Christopherson wrote:
> Reinitialize IA32_FEAT_CTL on the BSP during wakeup to handle the case
> where firmware doesn't initialize or save/restore across S3.  This fixes
> a bug where IA32_FEAT_CTL is left uninitialized and results in VMXON
> taking a #GP due to VMX not being fully enabled, i.e. breaks KVM.
> 
> Use init_ia32_feat_ctl() to "restore" IA32_FEAT_CTL as it already deals
> with the case where the MSR is locked, and because APs already redo
> init_ia32_feat_ctl() during suspend by virtue of the SMP boot flow being
> used to reinitialize APs upon wakeup.  Do the call in the early wakeup
> flow to avoid dependencies in the syscore_ops chain, e.g. simply adding
> a resume hook is not guaranteed to work, as KVM does VMXON in its own
> resume hook, kvm_resume(), when KVM has active guests.
> 
> Reported-by: Brad Campbell <lists2...@fnarfbargle.com>
> Cc: Maxim Levitsky <mlevi...@redhat.com>
> Cc: Paolo Bonzini <pbonz...@redhat.com>
> Cc: k...@vger.kernel.org
> Fixes: 21bd3467a58e ("KVM: VMX: Drop initialization of IA32_FEAT_CTL MSR")
> Signed-off-by: Sean Christopherson <sean.j.christopher...@intel.com>
> ---
>  arch/x86/include/asm/cpu.h | 5 +++++
>  arch/x86/kernel/cpu/cpu.h  | 4 ----
>  arch/x86/power/cpu.c       | 6 ++++++
>  3 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
> index dd17c2da1af5..da78ccbd493b 100644
> --- a/arch/x86/include/asm/cpu.h
> +++ b/arch/x86/include/asm/cpu.h
> @@ -58,4 +58,9 @@ static inline bool handle_guest_split_lock(unsigned long ip)
>       return false;
>  }
>  #endif
> +#ifdef CONFIG_IA32_FEAT_CTL
> +void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
> +#else
> +static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
> +#endif
>  #endif /* _ASM_X86_CPU_H */
> diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
> index 37fdefd14f28..38ab6e115eac 100644
> --- a/arch/x86/kernel/cpu/cpu.h
> +++ b/arch/x86/kernel/cpu/cpu.h
> @@ -80,8 +80,4 @@ extern void x86_spec_ctrl_setup_ap(void);
>  
>  extern u64 x86_read_arch_cap_msr(void);
>  
> -#ifdef CONFIG_IA32_FEAT_CTL
> -void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
> -#endif
> -
>  #endif /* ARCH_X86_CPU_H */
> diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
> index aaff9ed7ff45..b0d3c5ca6d80 100644
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -193,6 +193,8 @@ static void fix_processor_context(void)
>   */
>  static void notrace __restore_processor_state(struct saved_context *ctxt)
>  {
> +     struct cpuinfo_x86 *c;
> +
>       if (ctxt->misc_enable_saved)
>               wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
>       /*
> @@ -263,6 +265,10 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
>       mtrr_bp_restore();
>       perf_restore_debug_store();
>       msr_restore_context(ctxt);
> +
> +     c = &cpu_data(smp_processor_id());
> +     if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
> +             init_ia32_feat_ctl(c);
>  }
>  
>  /* Needed by apm.c */


I don't currently have an active VMX system to test this on, but from
the code and from my knowledge of this area this looks all right.
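
For context on the failure mode: VMXON raises #GP unless IA32_FEAT_CTL
has the lock bit and the relevant VMX-enable bit set, which is why the
first VMXON after resume faults once firmware forgets the MSR across S3.
Below is a rough, untested sketch of the kind of handling the existing
init_ia32_feat_ctl() already provides for the locked case; it is
illustrative only, not the real arch/x86/kernel/cpu/feat_ctl.c code
(which does more, e.g. updating feature flags based on what ends up
enabled), and the SKETCH_* names are made up here; bit positions are per
the Intel SDM.

#include <linux/bits.h>
#include <linux/types.h>
#include <asm/msr.h>

/* IA32_FEAT_CTL layout: bit 0 = lock, bit 2 = VMX enable outside SMX */
#define SKETCH_FEAT_CTL_LOCKED			BIT_ULL(0)
#define SKETCH_FEAT_CTL_VMX_OUTSIDE_SMX		BIT_ULL(2)

static void sketch_restore_feat_ctl(void)
{
	u64 msr;

	rdmsrl(MSR_IA32_FEAT_CTL, msr);

	/*
	 * If firmware already locked the MSR, it cannot be rewritten;
	 * VMXON will #GP if the enable bit happens to be clear.
	 */
	if (msr & SKETCH_FEAT_CTL_LOCKED)
		return;

	/* Otherwise enable VMX outside SMX and lock, as on cold boot. */
	msr |= SKETCH_FEAT_CTL_VMX_OUTSIDE_SMX | SKETCH_FEAT_CTL_LOCKED;
	wrmsrl(MSR_IA32_FEAT_CTL, msr);
}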

Reviewed-by: Maxim Levitsky <mlevi...@redhat.com>

Best regards,
        Maxim Levitsky
