On 11/15/25 11:51, Andrew Morton wrote:
> On Sat, 15 Nov 2025 11:28:35 +1100 Balbir Singh <[email protected]> wrote:
> 
>> Follow the pattern used in remove_migration_pte() in
>> remove_migration_pmd(). Process the migration entries and if the entry
>> type is device private, override the pmde with a device private entry
>> and set the soft dirty and uffd_wp bits with the pmd_swp_mksoft_dirty
>> and pmd_swp_mkuffd_wp helpers.
>>
>> ...
>>
>> This fixup should be squashed into the patch "mm/rmap: extend rmap and
>> migration support" of mm/mm-unstable
>>
> 
> OK.  After fixing up
> mm-replace-pmd_to_swp_entry-with-softleaf_from_pmd.patch, mm.git's
> mm/huge_memory.c has the below.  Please double-check.
> 
> 
> void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
> {
>       struct folio *folio = page_folio(new);
>       struct vm_area_struct *vma = pvmw->vma;
>       struct mm_struct *mm = vma->vm_mm;
>       unsigned long address = pvmw->address;
>       unsigned long haddr = address & HPAGE_PMD_MASK;
>       pmd_t pmde;
>       softleaf_t entry;
> 
>       if (!(pvmw->pmd && !pvmw->pte))
>               return;
> 
>       entry = softleaf_from_pmd(*pvmw->pmd);
>       folio_get(folio);
>       pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
> 
>       if (pmd_swp_soft_dirty(*pvmw->pmd))
>               pmde = pmd_mksoft_dirty(pmde);
>       if (softleaf_is_migration_write(entry))
>               pmde = pmd_mkwrite(pmde, vma);
>       if (pmd_swp_uffd_wp(*pvmw->pmd))
>               pmde = pmd_mkuffd_wp(pmde);
>       if (!softleaf_is_migration_young(entry))
>               pmde = pmd_mkold(pmde);
>       /* NOTE: this may contain setting soft-dirty on some archs */
>       if (folio_test_dirty(folio) && softleaf_is_migration_dirty(entry))
>               pmde = pmd_mkdirty(pmde);
> 
>       if (folio_is_device_private(folio)) {
>               swp_entry_t entry;
> 
>               if (pmd_write(pmde))
>                       entry = make_writable_device_private_entry(
>                                                       page_to_pfn(new));
>               else
>                       entry = make_readable_device_private_entry(
>                                                       page_to_pfn(new));
>               pmde = swp_entry_to_pmd(entry);
> 
>               if (pmd_swp_soft_dirty(*pvmw->pmd))
>                       pmde = pmd_swp_mksoft_dirty(pmde);
>               if (pmd_swp_uffd_wp(*pvmw->pmd))
>                       pmde = pmd_swp_mkuffd_wp(pmde);
>       }
> 
>       if (folio_test_anon(folio)) {
>               rmap_t rmap_flags = RMAP_NONE;
> 
>               if (!softleaf_is_migration_read(entry))
>                       rmap_flags |= RMAP_EXCLUSIVE;
> 
>               folio_add_anon_rmap_pmd(folio, new, vma, haddr, rmap_flags);
>       } else {
>               folio_add_file_rmap_pmd(folio, new, vma);
>       }
>       VM_BUG_ON(pmd_write(pmde) && folio_test_anon(folio) &&
>                 !PageAnonExclusive(new));
>       set_pmd_at(mm, haddr, pvmw->pmd, pmde);
> 
>       /* No need to invalidate - it was non-present before */
>       update_mmu_cache_pmd(vma, address, pvmw->pmd);
>       trace_remove_migration_pmd(address, pmd_val(pmde));
> }


Thanks, Andrew! Looks good!

Balbir
