Allocating or freeing p2m pages doesn't alter the size of the mempool; only
the split between free and used pages.

Right now, the hypercalls operate on the free subset of the pool only.  As a
consequence, XEN_DOMCTL_get_paging_mempool_size varies over time as the guest
shuffles its physmap, and XEN_DOMCTL_set_paging_mempool_size ignores the used
subset of the pool, letting the guest grow the pool without bound.
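As a minimal sketch of the intended accounting (illustrative only, not the
actual Xen code; the names are hypothetical stand-ins for the fields touched
in the diff below), the total is invariant across alloc/free, and only the
set hypercall may change it:

    #include <stdbool.h>

    struct pool {
        unsigned long total_pages; /* what get_paging_mempool_size reports */
        unsigned long free_pages;  /* pages currently on the freelist */
    };

    /* Allocation moves a page free -> used; total_pages is unchanged. */
    static bool pool_alloc(struct pool *p)
    {
        if ( !p->free_pages )
            return false;          /* pool exhausted; don't shrink the total */
        p->free_pages--;
        return true;
    }

    /* Freeing moves a page used -> free; total_pages is unchanged. */
    static void pool_free(struct pool *p)
    {
        p->free_pages++;
    }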

This fixes test-paging-mempool on ARM so that the behaviour matches x86.

This is part of XSA-409 / CVE-2022-33747.

Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
Signed-off-by: Andrew Cooper <[email protected]>
Reviewed-by: Julien Grall <[email protected]>
Release-acked-by: Henry Wang <[email protected]>
---
CC: Jan Beulich <[email protected]>
CC: Roger Pau MonnĂ© <[email protected]>
CC: Wei Liu <[email protected]>
CC: Stefano Stabellini <[email protected]>
CC: Julien Grall <[email protected]>
CC: Volodymyr Babchuk <[email protected]>
CC: Bertrand Marquis <[email protected]>
CC: Henry Wang <[email protected]>
CC: Anthony PERARD <[email protected]>
---
 xen/arch/arm/p2m.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b2f7e8d804aa..9bc5443d9e8a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -72,7 +72,6 @@ static struct page_info *p2m_alloc_page(struct domain *d)
             spin_unlock(&d->arch.paging.lock);
             return NULL;
         }
-        d->arch.paging.p2m_total_pages--;
     }
     spin_unlock(&d->arch.paging.lock);
 
@@ -85,10 +84,7 @@ static void p2m_free_page(struct domain *d, struct page_info *pg)
     if ( is_hardware_domain(d) )
         free_domheap_page(pg);
     else
-    {
-        d->arch.paging.p2m_total_pages++;
         page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
-    }
     spin_unlock(&d->arch.paging.lock);
 }
 
-- 
2.11.0

