There is no circumstance under which any part of the Xen image in memory wants
to have any cacheability other than Write Back.

Furthermore, unmapping random pieces of Xen like that comes with a non-trivial
risk of a Triple Fault, or other fatal condition.  Even if it appears to work,
an NMI or interrupt has a far wider reach into Xen's mappings than calling
map_pages_to_xen() thrice.

Simplify the safety checking to a BUG_ON().  It is substantially more correct
and robust than following either of the unlikely(alias) paths.

Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
---
CC: Jan Beulich <jbeul...@suse.com>
CC: Roger Pau Monné <roger....@citrix.com>
CC: Wei Liu <w...@xen.org>

I'm half tempted to drop the check entirely, but in that case it would be
better to inline the remainder of the logic into the two callers.
---
 xen/arch/x86/mm.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d799032dc82..9bd4e5cc1d2f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -785,24 +785,21 @@ bool is_iomem_page(mfn_t mfn)
 
 static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr)
 {
-    int err = 0;
     bool alias = mfn >= PFN_DOWN(xen_phys_start) &&
          mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START);
-    unsigned long xen_va =
-        XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
+
+    /*
+     * Something is catastrophically broken if someone is trying to change the
+     * cacheability of Xen in memory...
+     */
+    BUG_ON(alias);
 
     if ( boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
         return 0;
 
-    if ( unlikely(alias) && cacheattr )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
-    if ( !err )
-        err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
-                     PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
-    if ( unlikely(alias) && !cacheattr && !err )
-        err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
-
-    return err;
+    return map_pages_to_xen(
+        (unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1,
+        PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
 }
 
 #ifndef NDEBUG
-- 
2.11.0

