Re: [PATCH v3] Enable MSR-BASED TPR shadow even if APICv is inactive

2016-09-21 Thread Wanpeng Li
2016-09-22 0:35 GMT+08:00 Radim Krčmář :
> 2016-09-21 10:14+0800, Wanpeng Li:
[...]
> This could be just an "else", without the condition.  I'll change that in
> my refactoring of the vmx bitmap handling, which I'll post soon (hopefully).
>
[...]
>
> Pasto here (disable where enable was meant).  It's never called with
> !apicv_active, and the refactoring will delete the whole function, so

Your refactoring is based on this patch, right? If so, I will send out a v4
that fixes the issues you pointed out in this patch, so that the history
stays bisectable.

Regards,
Wanpeng Li


Re: [PATCH v3] Enable MSR-BASED TPR shadow even if APICv is inactive

2016-09-21 Thread Radim Krčmář
2016-09-21 10:14+0800, Wanpeng Li:
> From: Wanpeng Li 
> 
> I observed that kvmvapic (which exists to optimize TPR access when
> flexpriority=N or on AMD) is used to boost TPR access when running the
> kvm-unit-test/eventinj.flat tpr case on my Haswell desktop
> (w/ flexpriority, w/o APICv). Commit 8d14695f9542 ("x86, apicv: add
> virtual x2apic support") disables virtual x2apic mode completely when
> APICv is absent, and its author also told me that Windows guests could
> not enter x2apic mode when he developed the APICv feature several years
> ago. That is no longer the case: Interrupt Remapping and a vIOMMU have
> since been added to QEMU, and Intel developers recently verified that
> Windows 8 works in x2apic mode with Interrupt Remapping enabled.
>
> This patch enables the TPR shadow for virtual x2apic mode to speed up
> Windows guests running in x2apic mode even when APICv is unavailable.
>
> The patch passes kvm-unit-tests.
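To make "TPR shadow for virtual x2apic mode" concrete: the idea is that the
new apicv-inactive bitmaps leave the x2APIC TPR MSR (0x808) unintercepted, so
the TPR shadow services guest TPR reads/writes without a VM exit.  A minimal
sketch of that setup (the corresponding hunk is not quoted in this mail, so
the exact calls below are an assumption, not a hunk from the patch):

	/* Assumed illustration: let TPR (x2APIC MSR 0x808) bypass the intercept. */
	__vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv_inactive,
					0x808, MSR_TYPE_R | MSR_TYPE_W);
	__vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv_inactive,
					0x808, MSR_TYPE_R | MSR_TYPE_W);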
> 
> Suggested-by: Radim Krčmář 
> Suggested-by: Wincy Van 
> Cc: Paolo Bonzini 
> Cc: Radim Krčmář 
> Cc: Wincy Van 
> Cc: Yang Zhang 
> Signed-off-by: Wanpeng Li 
> ---
> v2 -> v3:
>  * introduce a new set of bitmaps and assign them in vmx_set_msr_bitmap()
> v1 -> v2: 
>  * leverage the cached msr bitmap
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -2518,10 +2520,18 @@ static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
>   else if (cpu_has_secondary_exec_ctrls() &&
>(vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
> SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
> - if (is_long_mode(vcpu))
> - msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
> - else
> - msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
> + if (enable_apicv && kvm_vcpu_apicv_active(vcpu)) {
> + if (is_long_mode(vcpu))
> + msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
> + else
> + msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
> + } else if ((enable_apicv && !kvm_vcpu_apicv_active(vcpu)) ||
> + !enable_apicv) {

This could be just an "else", without the condition.  I'll change that in
my refactoring of the vmx bitmap handling, which I'll post soon (hopefully).
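In other words, the else-if condition is just the negation of
"enable_apicv && kvm_vcpu_apicv_active(vcpu)", so the selection collapses to a
plain else.  A minimal sketch of that simplification (not the promised
refactoring, just the equivalent logic):

	if (enable_apicv && kvm_vcpu_apicv_active(vcpu)) {
		if (is_long_mode(vcpu))
			msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
		else
			msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
	} else {
		/* APICv disabled module-wide or deactivated for this vCPU. */
		if (is_long_mode(vcpu))
			msr_bitmap = vmx_msr_bitmap_longmode_x2apic_apicv_inactive;
		else
			msr_bitmap = vmx_msr_bitmap_legacy_x2apic_apicv_inactive;
	}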

> + if (is_long_mode(vcpu))
> + msr_bitmap = vmx_msr_bitmap_longmode_x2apic_apicv_inactive;
> + else
> + msr_bitmap = vmx_msr_bitmap_legacy_x2apic_apicv_inactive;
> + }
>   } else {
>   if (is_long_mode(vcpu))
>   msr_bitmap = vmx_msr_bitmap_longmode;
> @@ -4678,28 +4688,49 @@ static void vmx_disable_intercept_for_msr(u32 msr, bool longmode_only)
> -static void vmx_enable_intercept_msr_read_x2apic(u32 msr)
> +static void vmx_enable_intercept_msr_read_x2apic(u32 msr, bool apicv_active)
>  {
> - __vmx_enable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
> - msr, MSR_TYPE_R);
> - __vmx_enable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
> - msr, MSR_TYPE_R);
> + if (apicv_active) {
> + __vmx_enable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
> + msr, MSR_TYPE_R);
> + __vmx_enable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
> + msr, MSR_TYPE_R);
> + } else {
> + __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv_inactive,
> + msr, MSR_TYPE_R);
> + __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv_inactive,
> + msr, MSR_TYPE_R);

Pasto here (disable where enable was meant).  It's never called with
!apicv_active, and the refactoring will delete the whole function, so
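For completeness, the else branch presumably meant to enable the read
intercept on the new bitmaps; a sketch of the intended paste, under that
assumption (dead code today, and the refactoring removes the function anyway):

	} else {
		/* Enable (not disable) the read intercept on the apicv-inactive bitmaps. */
		__vmx_enable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv_inactive,
				msr, MSR_TYPE_R);
		__vmx_enable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv_inactive,
				msr, MSR_TYPE_R);
	}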

Reviewed-by: Radim Krčmář 


Re: [PATCH v3] Enable MSR-BASED TPR shadow even if APICv is inactive

2016-09-20 Thread Wanpeng Li
The subject should be "KVM: VMX: Enable MSR-BASED TPR shadow even if
APICv is inactive"
2016-09-21 10:14 GMT+08:00 Wanpeng Li :
> From: Wanpeng Li 
>
> I observed that kvmvapic (which exists to optimize TPR access when
> flexpriority=N or on AMD) is used to boost TPR access when running the
> kvm-unit-test/eventinj.flat tpr case on my Haswell desktop
> (w/ flexpriority, w/o APICv). Commit 8d14695f9542 ("x86, apicv: add
> virtual x2apic support") disables virtual x2apic mode completely when
> APICv is absent, and its author also told me that Windows guests could
> not enter x2apic mode when he developed the APICv feature several years
> ago. That is no longer the case: Interrupt Remapping and a vIOMMU have
> since been added to QEMU, and Intel developers recently verified that
> Windows 8 works in x2apic mode with Interrupt Remapping enabled.
>
> This patch enables the TPR shadow for virtual x2apic mode to speed up
> Windows guests running in x2apic mode even when APICv is unavailable.
>
> The patch passes kvm-unit-tests.
>
> Suggested-by: Radim Krčmář 
> Suggested-by: Wincy Van 
> Cc: Paolo Bonzini 
> Cc: Radim Krčmář 
> Cc: Wincy Van 
> Cc: Yang Zhang 
> Signed-off-by: Wanpeng Li 
> ---
> v2 -> v3:
>  * introduce a new set of bitmaps and assign them in vmx_set_msr_bitmap()
> v1 -> v2:
>  * leverage the cached msr bitmap
>
>  arch/x86/kvm/vmx.c | 133 ++---
>  1 file changed, 95 insertions(+), 38 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 5cede40..4f3042c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -927,6 +927,8 @@ static unsigned long *vmx_msr_bitmap_legacy;
>  static unsigned long *vmx_msr_bitmap_longmode;
>  static unsigned long *vmx_msr_bitmap_legacy_x2apic;
>  static unsigned long *vmx_msr_bitmap_longmode_x2apic;
> +static unsigned long *vmx_msr_bitmap_legacy_x2apic_apicv_inactive;
> +static unsigned long *vmx_msr_bitmap_longmode_x2apic_apicv_inactive;
>  static unsigned long *vmx_vmread_bitmap;
>  static unsigned long *vmx_vmwrite_bitmap;
>
> @@ -2518,10 +2520,18 @@ static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
> else if (cpu_has_secondary_exec_ctrls() &&
>  (vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
>   SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
> -   if (is_long_mode(vcpu))
> -   msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
> -   else
> -   msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
> +   if (enable_apicv && kvm_vcpu_apicv_active(vcpu)) {
> +   if (is_long_mode(vcpu))
> +   msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
> +   else
> +   msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
> +   } else if ((enable_apicv && !kvm_vcpu_apicv_active(vcpu)) ||
> +   !enable_apicv) {
> +   if (is_long_mode(vcpu))
> +   msr_bitmap = vmx_msr_bitmap_longmode_x2apic_apicv_inactive;
> +   else
> +   msr_bitmap = vmx_msr_bitmap_legacy_x2apic_apicv_inactive;
> +   }
> } else {
> if (is_long_mode(vcpu))
> msr_bitmap = vmx_msr_bitmap_longmode;
> @@ -4678,28 +4688,49 @@ static void vmx_disable_intercept_for_msr(u32 msr, bool longmode_only)
> msr, MSR_TYPE_R | MSR_TYPE_W);
>  }
>
> -static void vmx_enable_intercept_msr_read_x2apic(u32 msr)
> +static void vmx_enable_intercept_msr_read_x2apic(u32 msr, bool apicv_active)
>  {
> -   __vmx_enable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
> -   msr, MSR_TYPE_R);
> -   __vmx_enable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
> -   msr, MSR_TYPE_R);
> +   if (apicv_active) {
> +   __vmx_enable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
> +   msr, MSR_TYPE_R);
> +   __vmx_enable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
> +   msr, MSR_TYPE_R);
> +   } else {
> +   __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv_inactive,
> +   msr, MSR_TYPE_R);
> +   __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv_inactive,
> +   msr, MSR_TYPE_R);
> +   }
>  }
>
> -static void vmx_disable_intercept_msr_read_x2apic(u32 msr)
> +static void vmx_disable_intercept_msr_read_x2apic(u32 msr, bool apicv_active)
>  {
> -   __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
> -   msr, MSR_TYPE_R);
> -   __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
> -   msr, MSR_TYPE_R);
> +   if (apicv_active) {
> +   __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2a