From: Xenia Ragiadakou <xenia.ragiada...@amd.com>

Dom0 PVH might need XENMEM_exchange when passing contiguous memory
buffers to firmware or to co-processors that are not behind an IOMMU.

XENMEM_exchange was blocked for HVM/PVH DomUs, but the restriction
inadvertently blocked Dom0 PVH as well.

Permit Dom0 PVH to call XENMEM_exchange while leaving it blocked for
HVM/PVH DomUs.

Signed-off-by: Xenia Ragiadakou <xenia.ragiada...@amd.com>
Signed-off-by: Stefano Stabellini <stefano.stabell...@amd.com>
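
Not part of this patch, just for illustration: a rough sketch of how a
Dom0 kernel component might invoke XENMEM_exchange to obtain a
machine-contiguous extent. The function name, GFN array, extent sizes and
error handling below are hypothetical; only the hypercall interface
(struct xen_memory_exchange, HYPERVISOR_memory_op) is taken from the Xen
public headers as exposed to a Linux Dom0.

/*
 * Hypothetical Dom0 helper: exchange 16 single pages for one
 * machine-contiguous order-4 extent addressable with 32 bits.
 */
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

static int example_make_contiguous(xen_pfn_t in_gfns[16], xen_pfn_t *out_gfn)
{
    struct xen_memory_exchange exchange = {
        .in = {
            .nr_extents   = 16,
            .extent_order = 0,
            .domid        = DOMID_SELF,
        },
        .out = {
            .nr_extents   = 1,
            .extent_order = 4,
            /* Ask for memory a 32-bit capable device can address. */
            .mem_flags    = XENMEMF_address_bits(32),
            .domid        = DOMID_SELF,
        },
    };
    int rc;

    set_xen_guest_handle(exchange.in.extent_start, in_gfns);
    set_xen_guest_handle(exchange.out.extent_start, out_gfn);

    rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);

    /*
     * rc == 0 with nr_exchanged < in.nr_extents indicates a partial
     * exchange; a real caller would have to undo or retry.
     */
    return (rc == 0 && exchange.nr_exchanged == 16) ? 0 : -1;
}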

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1cf2365167..e995944333 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4401,7 +4401,7 @@ int steal_page(
     const struct domain *owner;
     int rc;
 
-    if ( paging_mode_external(d) )
+    if ( paging_mode_external(d) && !is_hardware_domain(d) )
         return -EOPNOTSUPP;
 
     /* Grab a reference to make sure the page doesn't change under our feet */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 8ca4e1a842..796eec081b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -794,7 +794,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
             rc = guest_physmap_add_page(d, _gfn(gpfn), mfn,
                                         exch.out.extent_order) ?: rc;
 
-            if ( !paging_mode_translate(d) &&
+            if ( (!paging_mode_translate(d) || is_hardware_domain(d)) &&
                  __copy_mfn_to_guest_offset(exch.out.extent_start,
                                             (i << out_chunk_order) + j,
                                             mfn) )