On 26/06/18 07:38, Jan Beulich wrote:
> While we don't expect CR0 to change behind our backs, cope with this
> happening, but other than for CR4 also log a (debug) message.
>
> Signed-off-by: Jan Beulich <[email protected]>
>
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1676,7 +1676,7 @@ void vmx_vmentry_failure(void)
>  void vmx_do_resume(struct vcpu *v)
>  {
>      bool_t debug_state;
> -    unsigned long host_cr4;
> +    unsigned long host_cr4, host_cr0, cr0;
>  
>      if ( v->arch.hvm_vmx.active_cpu == smp_processor_id() )
>          vmx_vmcs_reload(v);
> @@ -1732,6 +1732,15 @@ void vmx_do_resume(struct vcpu *v)
>      if ( host_cr4 != read_cr4() )
>          __vmwrite(HOST_CR4, read_cr4());
>  
> +    /* Check host CR0 (its value shouldn't have changed). */
> +    __vmread(HOST_CR0, &host_cr0);
> +    cr0 = read_cr0();

For better or worse, read_cr0() isn't a cached read, so this adds a real
mov from %cr0 into the resume path, which is a measurable overhead for a
path we expect never to take.

Now that we are 64bit, this could possibly be changed, as task switches
can't occur and change TS behind our back (and all guest task switches
are handled by Xen).
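
For reference, the CR4 comparison just above is cheap precisely because
Xen caches CR4 per-CPU (read_cr4() returns the cached value, and
write_cr4() keeps it in sync); if we rely on the guarantee above, CR0
could plausibly get the same treatment.  A minimal sketch of the caching
pattern, with the hardware access mocked as a plain variable and the
per-CPU plumbing omitted (illustrative only, not a patch):

```c
/*
 * Illustrative sketch: cache CR0 the way Xen caches CR4.  The
 * "hardware" register is mocked as a plain variable here; real code
 * would use a per-CPU variable and an asm mov from %cr0 for the
 * initial fill, and every writer of CR0 would go through the cached
 * write helper.
 */
static unsigned long mock_hw_cr0;   /* stands in for the real %cr0 */
static unsigned long cached_cr0;    /* would be a per-CPU variable in Xen */

static unsigned long read_cr0_cached(void)
{
    return cached_cr0;              /* no mov from %cr0 on hot paths */
}

static void write_cr0_cached(unsigned long val)
{
    cached_cr0 = val;               /* keep the cache coherent on writes */
    mock_hw_cr0 = val;              /* real code: asm volatile mov to %cr0 */
}
```

The invariant is the same one read_cr4() relies on: nothing outside these
helpers may touch the register, which is exactly what the 64-bit
task-switch argument buys us for CR0.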

I'd also like to consider a point raised by Huawei at XenSummit.  Once
we handle #NM and disable the intercept, clts/stts inside the guest
still cause a vmexit.  In one HPC workload, this accounted for a 40%
performance impact.

On Intel hardware, this can be fixed via the CR0 guest/host mask,
similar to the CR4 changes in c/s 40681735502, and on AMD hardware by
using GENERAL1_INTERCEPT_CR0_SEL_WRITE in preference to
CR_INTERCEPT_CR0_WRITE.
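
On the VT-x side I'd expect it to end up looking something like the
fragment below (untested, and cr0_host_mask is a hypothetical field by
analogy with the existing cr4_host_mask handling):

```c
/*
 * Hypothetical fragment: once the #NM intercept is disabled, hand
 * ownership of CR0.TS to the guest so clts/stts no longer vmexit.
 * cr0_host_mask is an assumed field, analogous to cr4_host_mask.
 */
v->arch.hvm_vmx.cr0_host_mask &= ~X86_CR0_TS;
__vmwrite(CR0_GUEST_HOST_MASK, v->arch.hvm_vmx.cr0_host_mask);
```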

With those optimisations in place, I don't believe these checks would be
warranted.

~Andrew

> +    if ( host_cr0 != cr0 )
> +    {
> +        dprintk(XENLOG_ERR, "%pv: CR0 %lx != %lx\n", v, host_cr0, cr0);
> +        __vmwrite(HOST_CR0, cr0);
> +    }
> +
>      reset_stack_and_jump(vmx_asm_do_vmentry);
>  }
>  
>
>

