On 1/12/26 06:55, Francois Dugast wrote:
> From: Matthew Brost <[email protected]>
> 
> Add free_zone_device_folio_prepare(), a helper that restores large
> ZONE_DEVICE folios to a sane, initial state before freeing them.
> 
> Assembling a compound ZONE_DEVICE folio overwrites per-page state (e.g.
> pgmap and compound metadata). Before such pages are returned to the
> device pgmap allocator, each constituent page must be reset to a
> standalone ZONE_DEVICE folio with a valid pgmap and no compound state.
> 
> Use this helper prior to folio_free() for device-private and
> device-coherent folios to ensure consistent device page state for
> subsequent allocations.
> 
> Fixes: d245f9b4ab80 ("mm/zone_device: support large zone device private folios")
> Cc: Zi Yan <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: Lorenzo Stoakes <[email protected]>
> Cc: Liam R. Howlett <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Suren Baghdasaryan <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Alistair Popple <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Suggested-by: Alistair Popple <[email protected]>
> Signed-off-by: Matthew Brost <[email protected]>
> Signed-off-by: Francois Dugast <[email protected]>
> ---
>  include/linux/memremap.h |  1 +
>  mm/memremap.c            | 55 ++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 56 insertions(+)
> 
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index 97fcffeb1c1e..88e1d4707296 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -230,6 +230,7 @@ static inline bool is_fsdax_page(const struct page *page)
>  
>  #ifdef CONFIG_ZONE_DEVICE
>  void zone_device_page_init(struct page *page, unsigned int order);
> +void free_zone_device_folio_prepare(struct folio *folio);
>  void *memremap_pages(struct dev_pagemap *pgmap, int nid);
>  void memunmap_pages(struct dev_pagemap *pgmap);
>  void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 39dc4bd190d0..375a61e18858 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -413,6 +413,60 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn)
>  }
>  EXPORT_SYMBOL_GPL(get_dev_pagemap);
>  
> +/**
> + * free_zone_device_folio_prepare() - Prepare a ZONE_DEVICE folio for freeing.
> + * @folio: ZONE_DEVICE folio to prepare for release.
> + *
> + * ZONE_DEVICE pages/folios (e.g., device-private memory or fsdax-backed pages)
> + * can be compound. When freeing a compound ZONE_DEVICE folio, the tail pages
> + * must be restored to a sane ZONE_DEVICE state before they are released.
> + *
> + * This helper:
> + *   - Clears @folio->mapping and, for compound folios, clears each page's
> + *     compound-head state (ClearPageHead()/clear_compound_head()).
> + *   - Resets the compound order metadata (folio_reset_order()) and then
> + *     initializes each constituent page as a standalone ZONE_DEVICE folio:
> + *       * clears ->mapping
> + *       * restores ->pgmap (prep_compound_page() overwrites it)
> + *       * clears ->share (only relevant for fsdax; unused for device-private)
> + *
> + * If @folio is order-0, only the mapping is cleared and no further work is
> + * required.
> + */
> +void free_zone_device_folio_prepare(struct folio *folio)
> +{
> +     struct dev_pagemap *pgmap = page_pgmap(&folio->page);
> +     int order, i;
> +
> +     VM_WARN_ON_FOLIO(!folio_is_zone_device(folio), folio);
> +
> +     folio->mapping = NULL;
> +     order = folio_order(folio);
> +     if (!order)
> +             return;
> +
> +     folio_reset_order(folio);
> +
> +     for (i = 0; i < (1UL << order); i++) {
> +             struct page *page = folio_page(folio, i);
> +             struct folio *new_folio = (struct folio *)page;
> +
> +             ClearPageHead(page);
> +             clear_compound_head(page);
> +
> +             new_folio->mapping = NULL;
> +             /*
> +              * Reset pgmap which was over-written by
> +              * prep_compound_page().
> +              */
> +             new_folio->pgmap = pgmap;
> +             new_folio->share = 0;   /* fsdax only, unused for device private */
> +             VM_WARN_ON_FOLIO(folio_ref_count(new_folio), new_folio);
> +             VM_WARN_ON_FOLIO(!folio_is_zone_device(new_folio), new_folio);

Does calling the free_folio() callback on each new_folio solve the issue you
are facing, or is that PMD_ORDER more frees than we'd like?
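
For clarity, a rough and untested sketch of what that per-page dispatch might
look like, assuming folio_free() keeps the (folio, order) signature used in
this series; purely illustrative, not a suggestion to merge as-is:

	for (i = 0; i < (1UL << order); i++) {
		struct folio *new_folio = page_folio(folio_page(folio, i));

		/* ... same per-page reset as above ... */

		/* one order-0 callback per constituent page */
		if (pgmap->ops && pgmap->ops->folio_free)
			pgmap->ops->folio_free(new_folio, 0);
	}

i.e. 1UL << PMD_ORDER callback invocations for a PMD-sized folio instead of a
single high-order one.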

> +     }
> +}
> +EXPORT_SYMBOL_GPL(free_zone_device_folio_prepare);
> +
>  void free_zone_device_folio(struct folio *folio)
>  {
>       struct dev_pagemap *pgmap = folio->pgmap;
> @@ -454,6 +508,7 @@ void free_zone_device_folio(struct folio *folio)
>       case MEMORY_DEVICE_COHERENT:
>               if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->folio_free))
>                       break;
> +             free_zone_device_folio_prepare(folio);
>               pgmap->ops->folio_free(folio, order);
>               percpu_ref_put_many(&folio->pgmap->ref, nr);
>               break;
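
As an aside, the contract this establishes for the driver side seems clear: by
the time folio_free() runs, every constituent page is back to a standalone
order-0 ZONE_DEVICE folio with a valid pgmap. A hypothetical driver callback
relying on that could look like the following sketch (my_dev and
my_dev_free_range() are made-up names, and the pgmap being embedded in the
driver structure is an assumption, not something from this series):

	/* my_dev / my_dev_free_range() are placeholders, not from this series */
	struct my_dev {
		struct dev_pagemap pgmap;
		/* ... device allocator state ... */
	};

	static void my_folio_free(struct folio *folio, unsigned int order)
	{
		struct my_dev *dev = container_of(page_pgmap(&folio->page),
						  struct my_dev, pgmap);

		/*
		 * Every constituent page has already been reset by
		 * free_zone_device_folio_prepare(), so the driver only has to
		 * return the physical range to its own allocator.
		 */
		my_dev_free_range(dev, folio_pfn(folio), 1UL << order);
	}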

Balbir
