On 08.09.25 02:04, Balbir Singh wrote:

subject:

"mm/rmap: rmap and migration support for device-private PMD entries"


Add device-private THP support to the reverse mapping infrastructure,
enabling proper handling during migration and walk operations.

The key changes are:
- add_migration_pmd()/remove_migration_pmd(): Handle device-private
   entries during folio migration and splitting
- page_vma_mapped_walk(): Recognize device-private THP entries during
   VMA traversal operations

This change supports folio splitting and migration operations on
device-private entries.

Cc: Andrew Morton <a...@linux-foundation.org>
Cc: David Hildenbrand <da...@redhat.com>
Cc: Zi Yan <z...@nvidia.com>
Cc: Joshua Hahn <joshua.hah...@gmail.com>
Cc: Rakie Kim <rakie....@sk.com>
Cc: Byungchul Park <byungc...@sk.com>
Cc: Gregory Price <gou...@gourry.net>
Cc: Ying Huang <ying.hu...@linux.alibaba.com>
Cc: Alistair Popple <apop...@nvidia.com>
Cc: Oscar Salvador <osalva...@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>
Cc: Baolin Wang <baolin.w...@linux.alibaba.com>
Cc: "Liam R. Howlett" <liam.howl...@oracle.com>
Cc: Nico Pache <npa...@redhat.com>
Cc: Ryan Roberts <ryan.robe...@arm.com>
Cc: Dev Jain <dev.j...@arm.com>
Cc: Barry Song <bao...@kernel.org>
Cc: Lyude Paul <ly...@redhat.com>
Cc: Danilo Krummrich <d...@kernel.org>
Cc: David Airlie <airl...@gmail.com>
Cc: Simona Vetter <sim...@ffwll.ch>
Cc: Ralph Campbell <rcampb...@nvidia.com>
Cc: Mika Penttilä <mpent...@redhat.com>
Cc: Matthew Brost <matthew.br...@intel.com>
Cc: Francois Dugast <francois.dug...@intel.com>

Signed-off-by: Balbir Singh <balb...@nvidia.com>
---

[...]


+++ b/mm/page_vma_mapped.c
@@ -250,12 +250,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                        pvmw->ptl = pmd_lock(mm, pvmw->pmd);
                        pmde = *pvmw->pmd;
                        if (!pmd_present(pmde)) {
-                               swp_entry_t entry;
+                               swp_entry_t entry = pmd_to_swp_entry(pmde);

                                if (!thp_migration_supported() ||
                                    !(pvmw->flags & PVMW_MIGRATION))
                                        return not_found(pvmw);
-                               entry = pmd_to_swp_entry(pmde);
                                if (!is_migration_entry(entry) ||
                                    !check_pmd(swp_offset_pfn(entry), pvmw))
                                        return not_found(pvmw);

Why this change? Looks unrelated.

@@ -277,6 +276,15 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                         * cannot return prematurely, while zap_huge_pmd() has
                         * cleared *pmd but not decremented compound_mapcount().

Reminder to self: clean up compound_mapcount() leftovers.

                         */
+                       swp_entry_t entry;
+
+                       entry = pmd_to_swp_entry(pmde);

Just combine declaration and initialization:

	swp_entry_t entry = pmd_to_swp_entry(pmde);

+
+                       if (is_device_private_entry(entry)) {
+                               pvmw->ptl = pmd_lock(mm, pvmw->pmd);
+                               return true;
+                       }
+
                        if ((pvmw->flags & PVMW_SYNC) &&
                            thp_vma_suitable_order(vma, pvmw->address,
                                                   PMD_ORDER) &&
diff --git a/mm/rmap.c b/mm/rmap.c
index 236ceff5b276..6de1baf7a4f1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1063,8 +1063,10 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
                } else {
  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
                        pmd_t *pmd = pvmw->pmd;
-                       pmd_t entry;
+                       pmd_t entry = pmdp_get(pmd);
+
+                       if (!pmd_present(entry))
+                               continue;
                        if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
                                continue;

If you just did a pmdp_get(), use its result in these checks as well. If not (that cleanup can come later), do a straight *pmd read like the others.
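I.e., something like this (untested, just to illustrate consistently using the pmdp_get() result instead of re-reading *pmd):

	pmd_t *pmd = pvmw->pmd;
	pmd_t entry = pmdp_get(pmd);

	if (!pmd_present(entry))
		continue;
	if (!pmd_dirty(entry) && !pmd_write(entry))
		continue;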



Apart from that, nothing jumped out at me.

--
Cheers

David / dhildenb
