On 12/04/2018 17:25, Vitaly Kuznetsov wrote:
> @@ -5335,6 +5353,9 @@ static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bit
>       if (!cpu_has_vmx_msr_bitmap())
>               return;
>  
> +     if (static_branch_unlikely(&enable_emsr_bitmap))
> +             evmcs_touch_msr_bitmap();
> +
>       /*
>        * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
>        * have the write-low and read-high bitmap offsets the wrong way round.
> @@ -5370,6 +5391,9 @@ static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitm
>       if (!cpu_has_vmx_msr_bitmap())
>               return;
>  
> +     if (static_branch_unlikely(&enable_emsr_bitmap))
> +             evmcs_touch_msr_bitmap();

I'm not sure about the "unlikely".  Can you just check current_evmcs
instead (dropping the static key completely)?
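
Something like this at the call sites, say (just a sketch, assuming
current_evmcs is NULL whenever an enlightened VMCS is not in use;
evmcs_touch_msr_bitmap() is the helper from your patch):

	if (current_evmcs)
		evmcs_touch_msr_bitmap();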

Also, the function is small enough that inlining should be beneficial.
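
Folding the current_evmcs check into the helper would also let the
call sites invoke it unconditionally, roughly like this (a sketch
only: the body below is my assumption, since the hunks only show the
call sites, with field names following the enlightened VMCS
definitions):

	static inline void evmcs_touch_msr_bitmap(void)
	{
		/* No enlightened VMCS in use, nothing to do. */
		if (!current_evmcs)
			return;

		/* Mark the MSR bitmap dirty so the hypervisor reloads it. */
		if (current_evmcs->hv_enlightenments_control.msr_bitmap)
			current_evmcs->hv_clean_fields &=
				~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
	}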

Paolo
