The address translation logic in get_physical_address() currently
truncates physical addresses to 32 bits unless long mode is enabled.
This is incorrect when physical address extension (PAE) is in use
outside of long mode: a 32-bit operating system using PAE to access
memory above 4G will experience undefined behaviour.  Instead,
truncation must be applied only when paging is disabled, because all
paths that go through a page table walk already produce a correctly
masked address.
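
To make the intent concrete, here is a minimal sketch of the new rule
(simplified names and types; mask_paddr is a made-up helper, not the
actual excp_helper.c code):

    #include <stdint.h>

    #define CR0_PG_MASK (1u << 31)   /* paging-enable bit in CR0 */

    /* With CR0.PG=0 the linear address is used directly as a physical
     * address and is architecturally limited to 32 bits.  Every paged
     * path, including PAE, has already masked the address during the
     * page table walk, so it is returned as-is. */
    static uint64_t mask_paddr(uint64_t addr, uint32_t cr0)
    {
        if (!(cr0 & CR0_PG_MASK)) {
            addr &= UINT32_MAX;
        }
        return addr;
    }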

Furthermore, while inspecting the code I noticed that the A20 mask is
applied incorrectly when NPT is active.  The mask should not be applied
to the addresses that are looked up in the NPT, only to the physical
addresses.  Obviously no hypervisor is going to leave A20 masking on,
but the code actually becomes simpler, so let's do it.
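
A hypothetical sketch of that ordering (nested_translate and
npt_lookup are made-up names standing in for the real nested walk,
not QEMU functions):

    #include <stdint.h>

    uint64_t npt_lookup(uint64_t gpa);   /* placeholder for the NPT walk */

    /* The guest-physical address enters the NPT walk unmasked; the
     * A20 mask is applied once, to the physical address that the
     * translation produces. */
    static uint64_t nested_translate(uint64_t gpa, uint64_t a20_mask)
    {
        return npt_lookup(gpa) & a20_mask;
    }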

Patches 1 and 2 fix cases in which the addresses must be masked,
or overflow is otherwise invalid, for MMU_PHYS_IDX accesses.
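
For illustration, the CR3 case boils down to something like this
(masked_cr3 is a made-up helper, not the actual misc_helper.c change):

    #include <stdbool.h>
    #include <stdint.h>

    /* Outside long mode the source operand of MOV to CR3 is a 32-bit
     * register, so the stored value is truncated before it can ever
     * be used as a page table base for a MMU_PHYS_IDX access. */
    static uint64_t masked_cr3(uint64_t val, bool long_mode)
    {
        return long_mode ? val : (val & UINT32_MAX);
    }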

Patch 3 fixes the bug by limiting the masking to the CR0.PG=0 case.

Patches 4 and 5 further clean up the MMU functions to centralize
application of the A20 mask and fix bugs in the process.

Untested except for running the SVM tests from kvm-unit-tests
(which is still better than nothing).

Supersedes: 
<0102018c8d11471f-9a6d73eb-0c34-4f61-8d37-5a4418f9e0d7-000...@eu-west-1.amazonses.com>

Paolo Bonzini (5):
  target/i386: mask high bits of CR3 in 32-bit mode
  target/i386: check validity of VMCB addresses
  target/i386: Fix physical address truncation
  target/i386: remove unnecessary/wrong application of the A20 mask
  target/i386: leave the A20 bit set in the final NPT walk

 target/i386/tcg/sysemu/excp_helper.c | 44 ++++++++++++----------------
 target/i386/tcg/sysemu/misc_helper.c |  3 ++
 target/i386/tcg/sysemu/svm_helper.c  | 27 +++++++++++++----
 3 files changed, 43 insertions(+), 31 deletions(-)

-- 
2.43.0

