Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Eugene Korenevsky
 GCC doesn't warn that ((u32)e->index >> 24) == 0x800 is always false?
 I think SDM says '(e->index >> 8) == 0x8'.

Missed that. Thank you.


-- 
Eugene
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Eugene Korenevsky
 Hi, Eugene, is it okay to split my part up?

I think the patch is atomic. I have no idea how this patch could be split
without breaking its integrity.
You are a co-author of the patch, since your ideas make up a significant part of it.


-- 
Eugene


Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Eugene Korenevsky
Will send fixed patch this evening.

-- 
Eugene


Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Wanpeng Li
Hi all,
On Wed, Dec 10, 2014 at 08:07:45AM -0100, Eugene Korenevsky wrote:
 Hi, Eugene, is it okay to split my part up?

 I think the patch is atomic. I have no idea how this patch could be split
 without breaking its integrity.
 You are a co-author of the patch, since your ideas make up a significant part of it.


Since Wincy sent out his patch before you, I would prefer that he send out a newer
version which fixes the issues in his own patch; then you can send out another
enhanced one based on Wincy's work.

Regards,
Wanpeng Li 




Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Wanpeng Li
On Wed, Dec 10, 2014 at 08:13:58AM -0100, Eugene Korenevsky wrote:
Will send fixed patch this evening.

Please see my reply to another thread.




Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Wincy Van
2014-12-10 17:01 GMT+08:00 Wanpeng Li <wanpeng...@linux.intel.com>:
 Hi all,
 On Wed, Dec 10, 2014 at 08:07:45AM -0100, Eugene Korenevsky wrote:
 Hi, Eugene, is it okay to split my part up?

 I think the patch is atomic. I have no idea how this patch could be split
 without breaking its integrity.
 You are a co-author of the patch, since your ideas make up a significant part of it.


 Since Wincy sent out his patch before you, I would prefer that he send out a newer
 version which fixes the issues in his own patch; then you can send out another
 enhanced one based on Wincy's work.

Ok, I will send out version two ASAP, thanks.

Wincy


Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-10 Thread Bandan Das
Eugene Korenevsky <ekorenev...@gmail.com> writes:

 Hi, Eugene, is it okay to split my part up?

 I think the patch is atomic. I have no idea how this patch could be split
 without breaking its integrity.
 You are a co-author of the patch, since your ideas make up a significant part of it.

I was suggesting adding the interfaces you introduced in a first patch
and then using those interfaces in a second patch, to make reviewing easier.
It's fine to mention that the second patch depends on the first.

If Wincy has code contributions to the patch, he should sign it off too;
otherwise, maybe add a Suggested-by tag to give him credit for his ideas.
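For instance, the trailer block of the next version could carry both credits; the addresses below are shortened as they appear in this archive, and Wincy's is left as a placeholder:

```
Signed-off-by: Eugene Korenevsky <ekorenev...@gmail.com>
Suggested-by: Wincy Van <...>
```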

Also, please include a v3 in the Subject when you submit your next version.
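That is, a subject line of the form:

```
[PATCH v3] KVM: x86: nVMX: support for MSR loading/storing
```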




Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-09 Thread Bandan Das
Eugene Korenevsky <ekorenev...@gmail.com> writes:

 Several hypervisors use MSR loading/storing to run guests.
 This patch implements emulation of this feature and allows these
 hypervisors to work in L1.

 The following is emulated:
 - Loading MSRs on VM-entries
 - Saving MSRs on VM-exits
 - Loading MSRs on VM-exits

 Actions taken on loading MSRs:
 - MSR load area is verified
 - For each MSR entry from load area:
   - MSR load entry is read and verified
   - MSR value is safely written
 Actions taken on storing MSRs:
 - MSR store area is verified
 - For each MSR entry from store area:
   - MSR entry is read and verified
   - MSR value is safely read using MSR index from MSR entry
   - MSR value is written to MSR entry

 The code performs the checks required by the Intel Software Developer's Manual.

Thank you for the commit message.

 This patch is partially based on Wincy Wan's work.
Typo in name: "Wan" should be "Van".

I have added Jan and Wincy to the CC list since they reviewed your earlier 
proposal.
I think it would be better to split this up as I mentioned earlier; however,
if the other reviewers and the maintainer don't have objections, I am ok :)


 Signed-off-by: Eugene Korenevsky <ekorenev...@gmail.com>
 ---
  arch/x86/include/asm/vmx.h|   6 +
  arch/x86/include/uapi/asm/msr-index.h |   3 +
  arch/x86/include/uapi/asm/vmx.h   |   2 +
  arch/x86/kvm/vmx.c| 210 
 --
  virt/kvm/kvm_main.c   |   1 +
  5 files changed, 215 insertions(+), 7 deletions(-)

 diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
 index 45afaee..8bdb247 100644
 --- a/arch/x86/include/asm/vmx.h
 +++ b/arch/x86/include/asm/vmx.h
 @@ -457,6 +457,12 @@ struct vmx_msr_entry {
  #define ENTRY_FAIL_VMCS_LINK_PTR 4
  
  /*
 + * VMX Abort codes
 + */
 +#define VMX_ABORT_MSR_STORE_FAILURE  1
 +#define VMX_ABORT_MSR_LOAD_FAILURE   4
 +
 +/*
   * VM-instruction error numbers
   */
  enum vm_instruction_error_number {
 diff --git a/arch/x86/include/uapi/asm/msr-index.h 
 b/arch/x86/include/uapi/asm/msr-index.h
 index e21331c..3c9c601 100644
 --- a/arch/x86/include/uapi/asm/msr-index.h
 +++ b/arch/x86/include/uapi/asm/msr-index.h
 @@ -316,6 +316,9 @@
  #define MSR_IA32_UCODE_WRITE 0x0079
  #define MSR_IA32_UCODE_REV   0x008b
  
 +#define MSR_IA32_SMM_MONITOR_CTL 0x009b
 +#define MSR_IA32_SMBASE  0x009e
 +
  #define MSR_IA32_PERF_STATUS 0x0198
  #define MSR_IA32_PERF_CTL0x0199
  #define MSR_AMD_PSTATE_DEF_BASE  0xc0010064
 diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
 index b813bf9..52ad8e2 100644
 --- a/arch/x86/include/uapi/asm/vmx.h
 +++ b/arch/x86/include/uapi/asm/vmx.h
 @@ -56,6 +56,7 @@
 #define EXIT_REASON_MSR_READ    31
  #define EXIT_REASON_MSR_WRITE   32
  #define EXIT_REASON_INVALID_STATE   33
 +#define EXIT_REASON_MSR_LOAD_FAILURE34
  #define EXIT_REASON_MWAIT_INSTRUCTION   36
  #define EXIT_REASON_MONITOR_INSTRUCTION 39
  #define EXIT_REASON_PAUSE_INSTRUCTION   40
 @@ -116,6 +117,7 @@
  { EXIT_REASON_APIC_WRITE,    "APIC_WRITE" }, \
  { EXIT_REASON_EOI_INDUCED,   "EOI_INDUCED" }, \
  { EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \
 + { EXIT_REASON_MSR_LOAD_FAILURE,  "MSR_LOAD_FAILURE" }, \
  { EXIT_REASON_INVD,  "INVD" }, \
  { EXIT_REASON_INVVPID,   "INVVPID" }, \
  { EXIT_REASON_INVPCID,   "INVPCID" }, \
 diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
 index 9bcc871..86dc7db 100644
 --- a/arch/x86/kvm/vmx.c
 +++ b/arch/x86/kvm/vmx.c
 @@ -8571,6 +8571,168 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, 
 struct vmcs12 *vmcs12)
  kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
  }
  
 +static bool vmx_msr_switch_area_verify(struct kvm_vcpu *vcpu,
 +unsigned long count_field,
 +unsigned long addr_field,
 +int maxphyaddr)
 +{
 + u64 count, addr;
 +
 + BUG_ON(vmcs12_read_any(vcpu, count_field, &count));
 + BUG_ON(vmcs12_read_any(vcpu, addr_field, &addr));
BUG_ON() is a bit harsh, please use WARN_ON and bail out.

 + if (!IS_ALIGNED(addr, 16))
 + goto fail;
 + if (addr >> maxphyaddr)
 + goto fail;
 + if ((addr + count * sizeof(struct vmx_msr_entry) - 1) >> maxphyaddr)
 + goto fail;
 + return true;
 +fail:
 + pr_warn_ratelimited(
 + "nVMX: invalid MSR switch (0x%lx, 0x%lx, %d, %llu, 0x%08llx)",
 + count_field, addr_field, maxphyaddr, count, addr);
 + return false;
 +}
 +
 +static bool nested_vmx_msr_switch_verify(struct kvm_vcpu *vcpu,
 +  struct vmcs12 *vmcs12)
 +{
 + int maxphyaddr;
 +
 + if 

Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-09 Thread Eugene Korenevsky
 I have added Jan and Wincy to the CC list since they reviewed your earlier 
 proposal.
 I think it would be better to split this up as I mentioned earlier; however,
 if the other reviewers and the maintainer don't have objections, I am ok :)

OK, the final patch follows.

-- 
Eugene


[PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-09 Thread Eugene Korenevsky
Several hypervisors use MSR loading/storing to run guests on VMX.
This patch implements emulation of this feature and allows these
hypervisors to work in L1.

The following is emulated:
- Loading MSRs on VM-entries
- Saving MSRs on VM-exits
- Loading MSRs on VM-exits

Actions taken on loading MSRs:
- MSR load area is verified
- For each MSR entry from load area:
  - MSR load entry is read and verified
  - MSR value is safely written
Actions taken on storing MSRs:
- MSR store area is verified
- For each MSR entry from store area:
  - MSR entry is read and verified
  - MSR value is safely read using MSR index from MSR entry
  - MSR value is written to MSR entry

The code performs the checks required by the Intel Software Developer's Manual.

Signed-off-by: Eugene Korenevsky <ekorenev...@gmail.com>
---
 arch/x86/include/asm/vmx.h|   6 +
 arch/x86/include/uapi/asm/msr-index.h |   3 +
 arch/x86/include/uapi/asm/vmx.h   |   2 +
 arch/x86/kvm/vmx.c| 206 --
 virt/kvm/kvm_main.c   |   1 +
 5 files changed, 211 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 45afaee..8bdb247 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -457,6 +457,12 @@ struct vmx_msr_entry {
 #define ENTRY_FAIL_VMCS_LINK_PTR   4
 
 /*
+ * VMX Abort codes
+ */
+#define VMX_ABORT_MSR_STORE_FAILURE1
+#define VMX_ABORT_MSR_LOAD_FAILURE 4
+
+/*
  * VM-instruction error numbers
  */
 enum vm_instruction_error_number {
diff --git a/arch/x86/include/uapi/asm/msr-index.h 
b/arch/x86/include/uapi/asm/msr-index.h
index e21331c..3c9c601 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -316,6 +316,9 @@
 #define MSR_IA32_UCODE_WRITE   0x0079
 #define MSR_IA32_UCODE_REV 0x008b
 
+#define MSR_IA32_SMM_MONITOR_CTL   0x009b
+#define MSR_IA32_SMBASE0x009e
+
 #define MSR_IA32_PERF_STATUS   0x0198
 #define MSR_IA32_PERF_CTL  0x0199
 #define MSR_AMD_PSTATE_DEF_BASE0xc0010064
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b813bf9..52ad8e2 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -56,6 +56,7 @@
 #define EXIT_REASON_MSR_READ    31
 #define EXIT_REASON_MSR_WRITE   32
 #define EXIT_REASON_INVALID_STATE   33
+#define EXIT_REASON_MSR_LOAD_FAILURE34
 #define EXIT_REASON_MWAIT_INSTRUCTION   36
 #define EXIT_REASON_MONITOR_INSTRUCTION 39
 #define EXIT_REASON_PAUSE_INSTRUCTION   40
@@ -116,6 +117,7 @@
{ EXIT_REASON_APIC_WRITE,    "APIC_WRITE" }, \
{ EXIT_REASON_EOI_INDUCED,   "EOI_INDUCED" }, \
{ EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \
+   { EXIT_REASON_MSR_LOAD_FAILURE,  "MSR_LOAD_FAILURE" }, \
{ EXIT_REASON_INVD,  "INVD" }, \
{ EXIT_REASON_INVVPID,   "INVVPID" }, \
{ EXIT_REASON_INVPCID,   "INVPCID" }, \
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9bcc871..636a4c4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8571,6 +8571,173 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, 
struct vmcs12 *vmcs12)
kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
 }
 
+static bool vmx_msr_switch_area_verify(struct kvm_vcpu *vcpu,
+  unsigned long count_field,
+  unsigned long addr_field,
+  int maxphyaddr)
+{
+   u64 count, addr;
+
+   if (vmcs12_read_any(vcpu, count_field, &count) ||
+   vmcs12_read_any(vcpu, addr_field, &addr)) {
+   WARN_ON(1);
+   return false;
+   }
+   if (!IS_ALIGNED(addr, 16))
+   goto fail;
+   if (addr >> maxphyaddr)
+   goto fail;
+   if ((addr + count * sizeof(struct vmx_msr_entry) - 1) >> maxphyaddr)
+   goto fail;
+   return true;
+fail:
+   pr_warn_ratelimited(
+   "nVMX: invalid MSR switch (0x%lx, 0x%lx, %d, %llu, 0x%08llx)",
+   count_field, addr_field, maxphyaddr, count, addr);
+   return false;
+}
+
+static bool nested_vmx_msr_switch_verify(struct kvm_vcpu *vcpu,
+struct vmcs12 *vmcs12)
+{
+   int maxphyaddr;
+
+   if (vmcs12->vm_exit_msr_load_count == 0 &&
+   vmcs12->vm_exit_msr_store_count == 0 &&
+   vmcs12->vm_entry_msr_load_count == 0)
+   return true; /* Fast path */
+   maxphyaddr = cpuid_maxphyaddr(vcpu);
+   return vmx_msr_switch_area_verify(vcpu, VM_EXIT_MSR_LOAD_COUNT,
+ VM_EXIT_MSR_LOAD_ADDR, maxphyaddr) &&
+  vmx_msr_switch_area_verify(vcpu, VM_EXIT_MSR_STORE_COUNT,
+   

Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-09 Thread Radim Krčmář
2014-12-10 00:18+0300, Eugene Korenevsky:
 +static bool vmx_load_msr_entry_verify(struct kvm_vcpu *vcpu,
 +   struct vmx_msr_entry *e)
[...]
 + /* x2APIC MSR accesses are not allowed */
 + if (apic_x2apic_mode(vcpu->arch.apic) && (e->index >> 24) == 0x800)
 + return false;

GCC doesn't warn that ((u32)e->index >> 24) == 0x800 is always false?
I think SDM says '(e->index >> 8) == 0x8'.


Re: [PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-09 Thread Wincy Van
2014-12-10 5:12 GMT+08:00 Eugene Korenevsky <ekorenev...@gmail.com>:
 I have added Jan and Wincy to the CC list since they reviewed your earlier 
 proposal.
 I think it would be better to split this up as I mentioned earlier; however,
 if the other reviewers and the maintainer don't have objections, I am ok :)

 OK, the final patch follows.

Hi, Eugene, is it okay to split my part up?




Thanks,

Wincy


[PATCH] KVM: x86: nVMX: support for MSR loading/storing

2014-12-08 Thread Eugene Korenevsky
Several hypervisors use MSR loading/storing to run guests.
This patch implements emulation of this feature and allows these
hypervisors to work in L1.

The following is emulated:
- Loading MSRs on VM-entries
- Saving MSRs on VM-exits
- Loading MSRs on VM-exits

Actions taken on loading MSRs:
- MSR load area is verified
- For each MSR entry from load area:
  - MSR load entry is read and verified
  - MSR value is safely written
Actions taken on storing MSRs:
- MSR store area is verified
- For each MSR entry from store area:
  - MSR entry is read and verified
  - MSR value is safely read using MSR index from MSR entry
  - MSR value is written to MSR entry

The code performs the checks required by the Intel Software Developer's Manual.

This patch is partially based on Wincy Wan's work.


Signed-off-by: Eugene Korenevsky <ekorenev...@gmail.com>
---
 arch/x86/include/asm/vmx.h|   6 +
 arch/x86/include/uapi/asm/msr-index.h |   3 +
 arch/x86/include/uapi/asm/vmx.h   |   2 +
 arch/x86/kvm/vmx.c| 210 --
 virt/kvm/kvm_main.c   |   1 +
 5 files changed, 215 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 45afaee..8bdb247 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -457,6 +457,12 @@ struct vmx_msr_entry {
 #define ENTRY_FAIL_VMCS_LINK_PTR   4
 
 /*
+ * VMX Abort codes
+ */
+#define VMX_ABORT_MSR_STORE_FAILURE1
+#define VMX_ABORT_MSR_LOAD_FAILURE 4
+
+/*
  * VM-instruction error numbers
  */
 enum vm_instruction_error_number {
diff --git a/arch/x86/include/uapi/asm/msr-index.h 
b/arch/x86/include/uapi/asm/msr-index.h
index e21331c..3c9c601 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -316,6 +316,9 @@
 #define MSR_IA32_UCODE_WRITE   0x0079
 #define MSR_IA32_UCODE_REV 0x008b
 
+#define MSR_IA32_SMM_MONITOR_CTL   0x009b
+#define MSR_IA32_SMBASE0x009e
+
 #define MSR_IA32_PERF_STATUS   0x0198
 #define MSR_IA32_PERF_CTL  0x0199
 #define MSR_AMD_PSTATE_DEF_BASE0xc0010064
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b813bf9..52ad8e2 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -56,6 +56,7 @@
 #define EXIT_REASON_MSR_READ    31
 #define EXIT_REASON_MSR_WRITE   32
 #define EXIT_REASON_INVALID_STATE   33
+#define EXIT_REASON_MSR_LOAD_FAILURE34
 #define EXIT_REASON_MWAIT_INSTRUCTION   36
 #define EXIT_REASON_MONITOR_INSTRUCTION 39
 #define EXIT_REASON_PAUSE_INSTRUCTION   40
@@ -116,6 +117,7 @@
{ EXIT_REASON_APIC_WRITE,    "APIC_WRITE" }, \
{ EXIT_REASON_EOI_INDUCED,   "EOI_INDUCED" }, \
{ EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \
+   { EXIT_REASON_MSR_LOAD_FAILURE,  "MSR_LOAD_FAILURE" }, \
{ EXIT_REASON_INVD,  "INVD" }, \
{ EXIT_REASON_INVVPID,   "INVVPID" }, \
{ EXIT_REASON_INVPCID,   "INVPCID" }, \
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9bcc871..86dc7db 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8571,6 +8571,168 @@ static void prepare_vmcs02(struct kvm_vcpu *vcpu, 
struct vmcs12 *vmcs12)
kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
 }
 
+static bool vmx_msr_switch_area_verify(struct kvm_vcpu *vcpu,
+  unsigned long count_field,
+  unsigned long addr_field,
+  int maxphyaddr)
+{
+   u64 count, addr;
+
+   BUG_ON(vmcs12_read_any(vcpu, count_field, &count));
+   BUG_ON(vmcs12_read_any(vcpu, addr_field, &addr));
+   if (!IS_ALIGNED(addr, 16))
+   goto fail;
+   if (addr >> maxphyaddr)
+   goto fail;
+   if ((addr + count * sizeof(struct vmx_msr_entry) - 1) >> maxphyaddr)
+   goto fail;
+   return true;
+fail:
+   pr_warn_ratelimited(
+   "nVMX: invalid MSR switch (0x%lx, 0x%lx, %d, %llu, 0x%08llx)",
+   count_field, addr_field, maxphyaddr, count, addr);
+   return false;
+}
+
+static bool nested_vmx_msr_switch_verify(struct kvm_vcpu *vcpu,
+struct vmcs12 *vmcs12)
+{
+   int maxphyaddr;
+
+   if (vmcs12->vm_exit_msr_load_count == 0 &&
+   vmcs12->vm_exit_msr_store_count == 0 &&
+   vmcs12->vm_entry_msr_load_count == 0)
+   return true; /* Fast path */
+   maxphyaddr = cpuid_maxphyaddr(vcpu);
+   return vmx_msr_switch_area_verify(vcpu, VM_EXIT_MSR_LOAD_COUNT,
+ VM_EXIT_MSR_LOAD_ADDR, maxphyaddr) &&
+  vmx_msr_switch_area_verify(vcpu, VM_EXIT_MSR_STORE_COUNT,
+