On Tue, Apr 26, 2022 at 12:26:10PM +0200, Jan Beulich wrote:
> Just like for PV guests MMU_MACHPHYS_UPDATE implies marking of the
> respective page as dirty, additions to a HVM guest's P2M should do so.
> 
> For HVM the opposite is also true: Pages being removed from the P2M are
> no longer dirty at their prior GFN; there's no point in telling the tool
> stack to try and copy that page, when this will fail anyway (until
> perhaps a new page gets placed there). Introduce paging_mark_pfn_clean()
> (intentionally without a paging_mark_clean() counterpart) to handle
> this. Note that while there is an earlier call to set_gpfn_from_mfn() in
> guest_physmap_add_entry(), there's little reason to mark the page
> clean there when later in the function it'll be marked dirty. This is
> even more so given that at this point it's only the M2P that gets
> updated, with the P2M still left unchanged.
> 
> Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>

> ---
> p2m_add_page()'s error handling looks bogus in this regard anyway: If an
> error occurs before an MFN actually is associated with the new GFN, the
> M2P entry ought to be restored imo. But of course a guest is still hosed
> if the operation succeeds partially.
> 
> Note that I've not even checked mem-paging and mem-sharing code for
> whether they may need similar adjustment. At least the latter is, aiui,
> incompatible with log-dirty mode anyway.
> ---
> v3: Re-base.
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -549,7 +549,10 @@ p2m_remove_entry(struct p2m_domain *p2m,
>          {
>              p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
>              if ( !p2m_is_special(t) && !p2m_is_shared(t) )
> +            {
>                  set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> +                paging_mark_pfn_clean(p2m->domain, _pfn(gfn_x(gfn) + i));
> +            }
>          }
>      }
> 
> @@ -737,8 +740,11 @@ p2m_add_page(struct domain *d, gfn_t gfn
>          if ( !p2m_is_grant(t) )
>          {
>              for ( i = 0; i < (1UL << page_order); i++ )
> +            {
>                  set_gpfn_from_mfn(mfn_x(mfn_add(mfn, i)),
>                                    gfn_x(gfn_add(gfn, i)));
> +                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
> +            }
>          }
>      }
>      else
> @@ -1096,6 +1102,7 @@ static int set_typed_p2m_entry(struct do
>          {
>              ASSERT(mfn_valid(mfn_add(omfn, i)));
>              set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
> +            paging_mark_pfn_clean(d, _pfn(gfn_x(gfn) + i));
> 
>              ioreq_request_mapcache_invalidate(d);
>          }
> @@ -1117,6 +1124,7 @@ static int set_typed_p2m_entry(struct do
>          {
>              ASSERT(mfn_valid(mfn_add(omfn, i)));
>              set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
> +            paging_mark_pfn_clean(d, _pfn(gfn_x(gfn) + i));
>          }
> 
>          ioreq_request_mapcache_invalidate(d);
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -657,7 +657,10 @@ decrease_reservation(struct domain *d, g
>          }
>          p2m_tlb_flush_sync(p2m);
>          for ( j = 0; j < n; ++j )
> +        {
>              set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
> +            paging_mark_pfn_clean(d, _pfn(gfn_x(gfn) + i + j));
> +        }
>          p2m_pod_cache_add(p2m, page, cur_order);
> 
>          ioreq_request_mapcache_invalidate(d);
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -260,7 +260,7 @@ static int paging_log_dirty_disable(stru
>  }
> 
>  /* Mark a page as dirty, with taking guest pfn as parameter */
> -void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
> +static void mark_pfn_dirty(struct domain *d, pfn_t pfn, bool dirty)

Nit: set_pfn_logdirty() or similar?  The function name makes it look
like it's marking the pfn as dirty (when it's also used to clear it).

No strong opinion, it just seems to me the name is slightly confusing.

Thanks, Roger.
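P.S. Purely to illustrate the naming idea, a rough sketch only (not code
from the patch; the helper body is elided here, and set_pfn_logdirty() is
just the name floated above):

/* Sketch: rename of the static helper only; the public wrappers the
 * patch introduces stay as-is and simply pass the flag through. */
static void set_pfn_logdirty(struct domain *d, pfn_t pfn, bool dirty)
{
    /* ... current body of mark_pfn_dirty(): set or clear the log-dirty
     * bitmap bit for pfn according to 'dirty' ... */
}

void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
{
    set_pfn_logdirty(d, pfn, true);
}

void paging_mark_pfn_clean(struct domain *d, pfn_t pfn)
{
    set_pfn_logdirty(d, pfn, false);
}

That way the bool-taking function's name doesn't read as "dirty only",
while all call sites keep the self-describing dirty/clean wrappers.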
