Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-24 Thread Paolo Bonzini


On 24/04/2015 09:46, Zhang, Yang Z wrote:
  On the other hand, vmexits are getting lighter and lighter on newer
  processors; a Sandy Bridge has less than half the vmexit cost of a
  Core 2 (IIRC roughly 1000 vs. 2500 clock cycles).
 
 1000 cycles? I remember it taking about 4000 cycles even on an HSW server.

I was going from memory, but I have now measured it with the vmexit test of
kvm-unit-tests.  On both an SNB Xeon E5 and an IVB Core i7 it reports about
1400 clock cycles for a vmcall exit.  This includes the overhead of
doing the cpuid itself.

Thus the vmexit cost is around 1300 cycles.  Of this, the vmresume
instruction probably accounts for around 800 cycles, and the rest is
introduced by KVM; there are at least 4-5 memory barriers and locked
instructions on that path.
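
For reference, the measurement loop is roughly this shape -- a simplified
sketch, not the actual kvm-unit-tests code, with illustrative names (the
real test drives vmcall, cpuid and other exit causes in the same way):

static inline unsigned long long rdtsc_cycles(void)
{
	unsigned int lo, hi;

	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((unsigned long long)hi << 32) | lo;
}

static inline void force_exit(void)
{
	unsigned int eax = 0, ecx = 0, ebx, edx;

	/* cpuid unconditionally traps to the hypervisor in a VMX guest */
	asm volatile("cpuid"
		     : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx));
}

/* runs inside the guest: average cycles per exit, including the cost of
 * the exit-causing instruction itself */
static unsigned long long measure_exit_cost(unsigned long iterations)
{
	unsigned long long start = rdtsc_cycles();
	unsigned long i;

	for (i = 0; i < iterations; i++)
		force_exit();
	return (rdtsc_cycles() - start) / iterations;
}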

Paolo


RE: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-24 Thread Zhang, Yang Z
Paolo Bonzini wrote on 2015-04-24:
 
 
 On 24/04/2015 09:46, Zhang, Yang Z wrote:
 On the other hand, vmexits are getting lighter and lighter on newer
 processors; a Sandy Bridge has less than half the vmexit cost of a
 Core 2 (IIRC roughly 1000 vs. 2500 clock cycles).
 
 1000 cycles? I remember it taking about 4000 cycles even on an HSW server.
 
 I was going from memory, but I have now measured it with the vmexit test of
 kvm-unit-tests.  On both an SNB Xeon E5 and an IVB Core i7 it reports about
 1400 clock cycles for a vmcall exit.  This includes the overhead of
 doing the cpuid itself.
 
 Thus the vmexit cost is around 1300 cycles.  Of this, the vmresume
 instruction probably accounts for around 800 cycles, and the rest is
 introduced by KVM; there are at least 4-5 memory barriers and locked
 instructions on that path.

Yes, that makes sense. The average vmexit/vmentry handling cost is around 4000
cycles. But I guess xsaveopt doesn't take that many cycles. Does anyone have
xsaveopt cost data?
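
For a rough ballpark, a user-space loop along these lines can measure it
directly (just a sketch, not measured data; compile with gcc -O2
-mxsaveopt).  Note that saving the same, unmodified state over and over
benefits from XSAVEOPT's modified-state optimization, so this is close to
a best case:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

/* the XSAVE area must be 64-byte aligned; static storage keeps the
 * XSAVE header zeroed as required */
static unsigned char area[4096] __attribute__((aligned(64)));

int main(void)
{
	uint64_t mask = 0x7;	/* x87 | SSE | AVX; hardware trims it to XCR0 */
	uint64_t best = ~0ull;
	int i;

	for (i = 0; i < 100000; i++) {
		uint64_t t0 = __rdtsc();
		_xsaveopt(area, mask);
		uint64_t t1 = __rdtsc();

		if (t1 - t0 < best)
			best = t1 - t0;
	}
	printf("xsaveopt: ~%llu cycles (best case)\n",
	       (unsigned long long)best);
	return 0;
}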

 
 Paolo


Best regards,
Yang




Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-24 Thread Paolo Bonzini


On 24/04/2015 03:16, Zhang, Yang Z wrote:
 This is interesting since previous measurements on KVM have had
 the exact opposite results.  I think we need to understand this a
 lot more.
 
 What I can tell is that a vmexit is heavy, so it is reasonable to see
 an improvement in some cases, especially now that the kernel uses eager
 FPU, which means each schedule may trigger a vmexit.

On the other hand, vmexits are getting lighter and lighter on newer
processors; a Sandy Bridge has less than half the vmexit cost of a Core 2
(IIRC roughly 1000 vs. 2500 clock cycles).

Also, the measurements were done on Westmere, but Sandy Bridge is the first
processor to have XSAVEOPT and thus use eager FPU.
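
For completeness, XSAVEOPT support is advertised in
CPUID.(EAX=0DH,ECX=1):EAX bit 0; a trivial check (assuming the CPU
exposes leaf 0xD at all) looks like:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	__cpuid_count(0xd, 1, eax, ebx, ecx, edx);
	printf("xsaveopt %ssupported\n", (eax & 1) ? "" : "not ");
	return 0;
}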

Paolo


RE: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-24 Thread Zhang, Yang Z
Paolo Bonzini wrote on 2015-04-24:
 
 
 On 24/04/2015 03:16, Zhang, Yang Z wrote:
 This is interesting since previous measurements on KVM have had the
 exact opposite results.  I think we need to understand this a lot
 more.
 
 What I can tell is that a vmexit is heavy, so it is reasonable to see
 an improvement in some cases, especially now that the kernel uses eager
 FPU, which means each schedule may trigger a vmexit.
 
 On the other hand, vmexits are getting lighter and lighter on newer
 processors; a Sandy Bridge has less than half the vmexit cost of a Core 2
 (IIRC roughly 1000 vs. 2500 clock cycles).
 

1000 cycles? I remember it taking about 4000 cycles even on an HSW server.

 Also, the measurements were done on Westmere, but Sandy Bridge is the first
 processor to have XSAVEOPT and thus use eager FPU.
 
 Paolo


Best regards,
Yang




Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Dave Hansen
On 04/23/2015 02:13 PM, Liang Li wrote:
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.

Do you have a theory for why this is?  Where does the regression come from?




Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread H. Peter Anvin
On 04/23/2015 08:28 AM, Dave Hansen wrote:
 On 04/23/2015 02:13 PM, Liang Li wrote:
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.
 
 Do you have a theory for why this is?  Where does the regression come from?
 

This is interesting since previous measurements on KVM have had the
exact opposite results.  I think we need to understand this a lot more.

-hpa




RE: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Zhang, Yang Z
H. Peter Anvin wrote on 2015-04-24:
 On 04/23/2015 08:28 AM, Dave Hansen wrote:
 On 04/23/2015 02:13 PM, Liang Li wrote:
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.
 
 Do you have a theory for why this is?  Where does the regression come from?
 
 
 This is interesting since previous measurements on KVM have had the
 exact opposite results.  I think we need to understand this a lot more.

What I can tell is that a vmexit is heavy, so it is reasonable to see an
improvement in some cases, especially now that the kernel uses eager FPU,
which means each schedule may trigger a vmexit.

 
   -hpa



Best regards,
Yang




[v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Liang Li
Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
not cause a performance regression, and it simplifies the code.

When compiling a kernel on Westmere, eager FPU is about 0.4% faster
than lazy FPU.

Signed-off-by: Liang Li <liang.z...@intel.com>
Signed-off-by: Xudong Hao <xudong@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/svm.c  | 22 ++--
 arch/x86/kvm/vmx.c  | 74 +++--
 arch/x86/kvm/x86.c  |  8 +
 include/linux/kvm_host.h|  2 --
 5 files changed, 9 insertions(+), 98 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dea2e7e..5d84cc9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -743,7 +743,6 @@ struct kvm_x86_ops {
 	void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
-	void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
 
 	void (*tlb_flush)(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ce741b8..1b3b29b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1087,7 +1087,6 @@ static void init_vmcb(struct vcpu_svm *svm)
 	struct vmcb_control_area *control = &svm->vmcb->control;
 	struct vmcb_save_area *save = &svm->vmcb->save;
 
-	svm->vcpu.fpu_active = 1;
 	svm->vcpu.arch.hflags = 0;
 
 	set_cr_intercept(svm, INTERCEPT_CR0_READ);
@@ -1529,15 +1528,12 @@ static void update_cr0_intercept(struct vcpu_svm *svm)
 	ulong gcr0 = svm->vcpu.arch.cr0;
 	u64 *hcr0 = &svm->vmcb->save.cr0;
 
-	if (!svm->vcpu.fpu_active)
-		*hcr0 |= SVM_CR0_SELECTIVE_MASK;
-	else
-		*hcr0 = (*hcr0 & ~SVM_CR0_SELECTIVE_MASK)
-			| (gcr0 & SVM_CR0_SELECTIVE_MASK);
+	*hcr0 = (*hcr0 & ~SVM_CR0_SELECTIVE_MASK)
+		| (gcr0 & SVM_CR0_SELECTIVE_MASK);
 
 	mark_dirty(svm->vmcb, VMCB_CR);
 
-	if (gcr0 == *hcr0 && svm->vcpu.fpu_active) {
+	if (gcr0 == *hcr0) {
 		clr_cr_intercept(svm, INTERCEPT_CR0_READ);
 		clr_cr_intercept(svm, INTERCEPT_CR0_WRITE);
 	} else {
@@ -1568,8 +1564,6 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	if (!npt_enabled)
 		cr0 |= X86_CR0_PG | X86_CR0_WP;
 
-	if (!vcpu->fpu_active)
-		cr0 |= X86_CR0_TS;
 	/*
 	 * re-enable caching here because the QEMU bios
 	 * does not do it - this results in some delay at
@@ -1795,7 +1789,6 @@ static void svm_fpu_activate(struct kvm_vcpu *vcpu)
 
 	clr_exception_intercept(svm, NM_VECTOR);
 
-	svm->vcpu.fpu_active = 1;
 	update_cr0_intercept(svm);
 }
 
@@ -4139,14 +4132,6 @@ static bool svm_has_wbinvd_exit(void)
 	return true;
 }
 
-static void svm_fpu_deactivate(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	set_exception_intercept(svm, NM_VECTOR);
-	update_cr0_intercept(svm);
-}
-
 #define PRE_EX(exit)  { .exit_code = (exit), \
 			.stage = X86_ICPT_PRE_EXCEPT, }
 #define POST_EX(exit) { .exit_code = (exit), \
@@ -4381,7 +4366,6 @@ static struct kvm_x86_ops svm_x86_ops = {
 	.cache_reg = svm_cache_reg,
 	.get_rflags = svm_get_rflags,
 	.set_rflags = svm_set_rflags,
-	.fpu_deactivate = svm_fpu_deactivate,
 
 	.tlb_flush = svm_flush_tlb,
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index f5e8dce..811a666 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1567,7 +1567,7 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 	u32 eb;
 
 	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR) |
-	     (1u << NM_VECTOR) | (1u << DB_VECTOR);
+	     (1u << DB_VECTOR);
 	if ((vcpu->guest_debug &
 	     (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)) ==
 	    (KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP))
@@ -1576,8 +1576,6 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
 		eb = ~0;
 	if (enable_ept)
 		eb &= ~(1u << PF_VECTOR); /* bypass_guest_pf = 0 */
-	if (vcpu->fpu_active)
-		eb &= ~(1u << NM_VECTOR);
 
 	/* When we are running a nested L2 guest and L1 specified for it a
 	 * certain exception bitmap, we must trap the same exceptions and pass
@@ -1961,9 +1959,6 @@ static void vmx_fpu_activate(struct kvm_vcpu *vcpu)
 {
 	ulong cr0;
 
-	if (vcpu->fpu_active)
-		return;
-	vcpu->fpu_active = 1;
 	cr0 = vmcs_readl(GUEST_CR0);
 	cr0 &= ~(X86_CR0_TS | X86_CR0_MP);
 	cr0 |= kvm_read_cr0_bits(vcpu, X86_CR0_TS | X86_CR0_MP);
@@ -1994,33 +1989,6 @@ static inline unsigned long nested_read_cr4(struct vmcs12 *fields)
 	(fields->cr4_read_shadow &

Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Paolo Bonzini


On 23/04/2015 23:13, Liang Li wrote:
 Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
 not cause a performance regression, and it simplifies the code.
 
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.
 
 Signed-off-by: Liang Li <liang.z...@intel.com>
 Signed-off-by: Xudong Hao <xudong@intel.com>

A patch like this requires much more benchmarking than what you have done.

First, what guest did you use?  A modern Linux guest will hardly ever exit
to userspace: the scheduler uses the TSC deadline timer, which is handled
in the kernel; the clocksource uses the TSC; virtio-blk devices are kicked
via ioeventfd.

What happens if you time a Windows guest (without any Hyper-V enlightenments),
or if you use clocksource=acpi_pm?

Second, 0.4% by itself may not be statistically significant.  How did
you gather the result?  How many times did you run the benchmark?  Did
the guest report any stolen time?
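
To put illustrative numbers on it (not measurements from this thread): if
a single kernel-compile run has a run-to-run standard deviation of about
1%, the mean of N runs has a standard error of roughly 1%/sqrt(N), and the
difference between two configurations has a standard error of roughly
1.4%/sqrt(N), so you would need on the order of N = 50 runs per
configuration before a 0.4% delta clears two standard errors.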


And finally, even if the patch were indeed a performance improvement,
there is much more that you can remove.  fpu_active is always 1, and
vmx_fpu_activate has only one call site, which can be simplified to just

vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);

and so on.

Paolo


Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Jan Kiszka
On 2015-04-23 12:40, Paolo Bonzini wrote:
 
 
 On 23/04/2015 23:13, Liang Li wrote:
 Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
 not cause a performance regression, and it simplifies the code.
 
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.
 
 Signed-off-by: Liang Li <liang.z...@intel.com>
 Signed-off-by: Xudong Hao <xudong@intel.com>
 
 A patch like this requires much more benchmarking than what you have done.
 
 First, what guest did you use?  A modern Linux guest will hardly ever exit
 to userspace: the scheduler uses the TSC deadline timer, which is handled
 in the kernel; the clocksource uses the TSC; virtio-blk devices are kicked
 via ioeventfd.
 
 What happens if you time a Windows guest (without any Hyper-V enlightenments),
 or if you use clocksource=acpi_pm?
 
 Second, 0.4% by itself may not be statistically significant.  How did
 you gather the result?  How many times did you run the benchmark?  Did
 the guest report any stolen time?
 
 
 And finally, even if the patch were indeed a performance improvement,
 there is much more that you can remove.  fpu_active is always 1, and
 vmx_fpu_activate has only one call site, which can be simplified to just
 
 vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
 vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);
 
 and so on.

It would also be good to know what the benchmarks look like on CPUs other
than the chosen Intel model, including older ones.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Wanpeng Li
Cc Rik, who is doing similar work. :)
On Fri, Apr 24, 2015 at 05:13:03AM +0800, Liang Li wrote:
Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
not cause a performance regression, and it simplifies the code.

When compiling a kernel on Westmere, eager FPU is about 0.4% faster
than lazy FPU.

Signed-off-by: Liang Li <liang.z...@intel.com>
Signed-off-by: Xudong Hao <xudong@intel.com>

Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Wanpeng Li
On Fri, Apr 24, 2015 at 05:13:03AM +0800, Liang Li wrote:
Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
not cause a performance regression, and it simplifies the code.

When compiling a kernel on Westmere, eager FPU is about 0.4% faster
than lazy FPU.

Signed-off-by: Liang Li <liang.z...@intel.com>
Signed-off-by: Xudong Hao <xudong@intel.com>
@@ -4139,14 +4132,6 @@ static bool svm_has_wbinvd_exit(void)
 	return true;
 }
 
-static void svm_fpu_deactivate(struct kvm_vcpu *vcpu)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	set_exception_intercept(svm, NM_VECTOR);
-	update_cr0_intercept(svm);
-}
Did you test it on an AMD CPU? What performance do you get?

Regards,
Wanpeng Li 


Re: [v6] kvm/fpu: Enable fully eager restore kvm FPU

2015-04-23 Thread Rik van Riel
On 04/23/2015 06:57 PM, Wanpeng Li wrote:
 Cc Rik, who is doing similar work. :)

Hi Liang,

I posted this patch earlier, which should have the same effect as
your patch on more modern systems, while not loading the FPU context
for guests that barely use it on older systems:

https://lkml.org/lkml/2015/4/23/349

I have to admit the diffstat on your patch looks very nice, but
it might be good to know what impact it has on older systems...

 On Fri, Apr 24, 2015 at 05:13:03AM +0800, Liang Li wrote:
 Remove the lazy FPU logic and use eager FPU entirely. Eager FPU does
 not cause a performance regression, and it simplifies the code.
 
 When compiling a kernel on Westmere, eager FPU is about 0.4% faster
 than lazy FPU.
 
 Signed-off-by: Liang Li <liang.z...@intel.com>
 Signed-off-by: Xudong Hao <xudong@intel.com>