https://bugzilla.kernel.org/show_bug.cgi?id=53861

           Summary: nVMX: inaccuracy in emulation of entry failure
           Product: Virtualization
           Version: unspecified
          Platform: All
        OS/Version: Linux
              Tree: Mainline
            Status: NEW
          Severity: enhancement
          Priority: P1
         Component: kvm
        AssignedTo: virtualization_...@kernel-bugs.osdl.org
        ReportedBy: n...@math.technion.ac.il
        Regression: No


Emulation of a nested entry (L1->L2) failure is rather involved, and there are
two kinds of entry failures: some are recognized before vmcs02 has been touched
(and nested_vmx_failValid()/failInvalid() is used), and some after we have
started to touch vmcs02 (and nested_vmx_entry_failure() is used). This whole
business is explained in the Intel SDM, in the section "VM-entry failures
during or after loading guest state".

But there is a corner case, related to a *buggy L0*, that we probably do not
emulate sensibly:

Imagine that L0 runs L2 on behalf of L1 and the entry succeeds, but L2 later
exits to L0 for some reason, L0 handles that exit by itself (without L1's
involvement) and then wants to resume L2. What if this second entry fails,
e.g., because we (L0) filled some vmcs02 field incorrectly? Neither
nested_vmx_failValid() nor nested_vmx_entry_failure() is appropriate, because
L2 did run for a while and most likely modified vmcs02, so we would also need
to update vmcs12 with prepare_vmcs12().

This can only happen because of an L0 bug (L0 set something wrong in the
vmcs), so perhaps the best solution is simply to kill L1 in this case? Is
there a better solution?
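
If killing L1 is indeed the chosen policy, one possible shape of such a path
might look like the sketch below. nested_vmx_abort_l1() is a hypothetical
helper; reporting KVM_EXIT_INTERNAL_ERROR to L0 userspace (reusing the
KVM_INTERNAL_ERROR_EMULATION suberror) is just one option, and a
KVM_REQ_TRIPLE_FAULT request against L1 would be another:

    /* Hypothetical helper for the scenario above: entry to L2 failed
     * after L2 had already run, so neither existing failure path fits. */
    static int nested_vmx_abort_l1(struct kvm_vcpu *vcpu)
    {
            struct vmcs12 *vmcs12 = get_vmcs12(vcpu);

            /*
             * L2 did run, so salvage whatever state we can back into
             * vmcs12 first, mainly to ease debugging from L1's side.
             */
            prepare_vmcs12(vcpu, vmcs12);

            /*
             * The failure is L0's own fault, so do not fake a nested
             * exit; report it to L0 userspace, which effectively kills
             * L1.  (Reusing the EMULATION suberror here is arbitrary.)
             */
            vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
            vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
            vcpu->run->internal.ndata = 0;
            return 0;   /* return to userspace */
    }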
