Found while browsing Xen code: while we assume that the STI interrupt
shadow also implies virtual NMI blocking, some processors may have a
different opinion (SDM 3: 22.3). To avoid this misunderstanding causing
endless VM-entry attempts, translate STI blocking into MOV SS blocking
when requesting the NMI window.

Signed-off-by: Jan Kiszka <[email protected]>
---

 arch/x86/kvm/vmx.c |   15 +++++++++++++++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 14873b9..474f720 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2614,12 +2614,27 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 static void enable_nmi_window(struct kvm_vcpu *vcpu)
 {
        u32 cpu_based_vm_exec_control;
+       u32 interruptibility;
 
        if (!cpu_has_virtual_nmis()) {
                enable_irq_window(vcpu);
                return;
        }
 
+       /*
+        * SDM 3: 22.3 (June 2009)
+        * "A logical processor may also prevent such a VM exit [NMI-window
+        * exit] if there is blocking of events by STI."
+        * So better convert STI blocking into MOV SS to avoid premature VM
+        * exits that would end up in an endless loop.
+        */
+       interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
+       if (interruptibility & GUEST_INTR_STATE_STI) {
+               interruptibility &= ~GUEST_INTR_STATE_STI;
+               interruptibility |= GUEST_INTR_STATE_MOV_SS;
+               vmcs_write32(GUEST_INTERRUPTIBILITY_INFO, interruptibility);
+       }
+
        cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
        cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_NMI_PENDING;
        vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
--