Add the missing index increment in the M2P clearing loop. Without it the
loop repeatedly writes the M2P entry of the first MFN instead of
invalidating the entry of every MFN covered by the high order page. This
looks to be an oversight from the change that introduced support for
processing high order pages in one go.
Fixes: 3c352011c0d3 ("x86/PoD: shorten certain operations on higher order ranges")
Signed-off-by: Roger Pau Monné <[email protected]>
---
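For illustration only (not intended for the commit message): a minimal,
self-contained sketch of the loop behaviour, using a hypothetical m2p[]
array and FAKE_INVALID_M2P_ENTRY marker in place of the real
machine-to-phys table, set_gpfn_from_mfn() and INVALID_M2P_ENTRY:

#include <stdio.h>

#define FAKE_INVALID_M2P_ENTRY (~0UL)   /* stand-in for INVALID_M2P_ENTRY */

static unsigned long m2p[8];            /* stand-in for the real M2P table */

int main(void)
{
    unsigned long mfn = 2;  /* first MFN backing the high order page */
    unsigned int n = 4;     /* number of entries to clear (1 << cur_order) */
    unsigned int j;

    /*
     * Without the "+ j" only m2p[mfn] would be written n times, leaving
     * m2p[mfn + 1] .. m2p[mfn + n - 1] stale.  Advancing by j clears the
     * entry of every MFN in the range.
     */
    for ( j = 0; j < n; ++j )
        m2p[mfn + j] = FAKE_INVALID_M2P_ENTRY;

    for ( j = 0; j < sizeof(m2p) / sizeof(m2p[0]); ++j )
        printf("m2p[%u] = %#lx\n", j, m2p[j]);

    return 0;
}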
xen/arch/x86/mm/p2m-pod.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 05633fe2ac88..22dde913cc3c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -655,7 +655,7 @@ decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order)
             }
             p2m_tlb_flush_sync(p2m);
             for ( j = 0; j < n; ++j )
-                set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
+                set_gpfn_from_mfn(mfn_x(mfn) + j, INVALID_M2P_ENTRY);
             p2m_pod_cache_add(p2m, page, cur_order);
 
             ioreq_request_mapcache_invalidate(d);
--
2.51.0