On 11/17/25 23:58, David Hildenbrand (Red Hat) wrote:
> On 15.11.25 01:28, Balbir Singh wrote:
>> Follow the pattern used in remove_migration_pte() in
>> remove_migration_pmd(): process the migration entry and, if the entry
>> type is device private, override pmde with a device private entry and
>> set the soft-dirty and uffd_wp bits via pmd_swp_mksoft_dirty() and
>> pmd_swp_mkuffd_wp().
>>
>> Cc: Andrew Morton <[email protected]>
>> Cc: David Hildenbrand <[email protected]>
>> Cc: Zi Yan <[email protected]>
>> Cc: Joshua Hahn <[email protected]>
>> Cc: Rakie Kim <[email protected]>
>> Cc: Byungchul Park <[email protected]>
>> Cc: Gregory Price <[email protected]>
>> Cc: Ying Huang <[email protected]>
>> Cc: Alistair Popple <[email protected]>
>> Cc: Oscar Salvador <[email protected]>
>> Cc: Lorenzo Stoakes <[email protected]>
>> Cc: Baolin Wang <[email protected]>
>> Cc: "Liam R. Howlett" <[email protected]>
>> Cc: Nico Pache <[email protected]>
>> Cc: Ryan Roberts <[email protected]>
>> Cc: Dev Jain <[email protected]>
>> Cc: Barry Song <[email protected]>
>> Cc: Lyude Paul <[email protected]>
>> Cc: Danilo Krummrich <[email protected]>
>> Cc: David Airlie <[email protected]>
>> Cc: Simona Vetter <[email protected]>
>> Cc: Ralph Campbell <[email protected]>
>> Cc: Mika Penttilä <[email protected]>
>> Cc: Matthew Brost <[email protected]>
>> Cc: Francois Dugast <[email protected]>
>>
>> Signed-off-by: Balbir Singh <[email protected]>
>> ---
>> This fixup should be squashed into the patch "mm/rmap: extend rmap and
>> migration support" in mm/mm-unstable.
>>
>>   mm/huge_memory.c | 27 +++++++++++++++++----------
>>   1 file changed, 17 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 9dda8c48daca..50ba458efcab 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -4698,16 +4698,6 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>>       folio_get(folio);
>>       pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
>>   
>> -    if (folio_is_device_private(folio)) {
>> -        if (pmd_write(pmde))
>> -            entry = make_writable_device_private_entry(
>> -                            page_to_pfn(new));
>> -        else
>> -            entry = make_readable_device_private_entry(
>> -                            page_to_pfn(new));
>> -        pmde = swp_entry_to_pmd(entry);
>> -    }
>> -
>>       if (pmd_swp_soft_dirty(*pvmw->pmd))
>>           pmde = pmd_mksoft_dirty(pmde);
>>       if (is_writable_migration_entry(entry))
>> @@ -4720,6 +4710,23 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>>       if (folio_test_dirty(folio) && is_migration_entry_dirty(entry))
>>           pmde = pmd_mkdirty(pmde);
>>   
>> +    if (folio_is_device_private(folio)) {
>> +        swp_entry_t entry;
> 
> It's a bit nasty to have the same variable shadowed here.
> 
> We could reuse the existing entry by structuring the code more like 
> remove_migration_pte() does: determine RMAP_EXCLUSIVE earlier.
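> 
> Something like this untested sketch (note: rmap_flags would have to be
> set up before this point rather than in the folio_test_anon() block
> further down, so that "entry" can keep its original value until here):
> 
>     /* Decide on RMAP_EXCLUSIVE while "entry" is still the migration entry. */
>     if (folio_test_anon(folio) && !is_readable_migration_entry(entry))
>         rmap_flags |= RMAP_EXCLUSIVE;
> 
>     if (folio_is_device_private(folio)) {
>         if (pmd_write(pmde))
>             entry = make_writable_device_private_entry(page_to_pfn(new));
>         else
>             entry = make_readable_device_private_entry(page_to_pfn(new));
>         pmde = swp_entry_to_pmd(entry);
>         if (pmd_swp_soft_dirty(*pvmw->pmd))
>             pmde = pmd_swp_mksoft_dirty(pmde);
>         if (pmd_swp_uffd_wp(*pvmw->pmd))
>             pmde = pmd_swp_mkuffd_wp(pmde);
>     }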
> 
>> +
>> +        if (pmd_write(pmde))
>> +            entry = make_writable_device_private_entry(
>> +                            page_to_pfn(new));
>> +        else
>> +            entry = make_readable_device_private_entry(
>> +                            page_to_pfn(new));
>> +        pmde = swp_entry_to_pmd(entry);
>> +
>> +        if (pmd_swp_soft_dirty(*pvmw->pmd))
>> +            pmde = pmd_swp_mksoft_dirty(pmde);
>> +        if (pmd_swp_uffd_wp(*pvmw->pmd))
>> +            pmde = pmd_swp_mkuffd_wp(pmde);
>> +    }
>> +
>>       if (folio_test_anon(folio)) {
>>           rmap_t rmap_flags = RMAP_NONE;
>>   
> 
> I guess at some point we could separate both parts completely (no need
> to do all this work on pmde before the folio_is_device_private(folio)
> check), so this could be:
> 
> if (folio_is_device_private(folio)) {
>     ...
> } else {
>     entry = pmd_to_swp_entry(*pvmw->pmd);
>     folio_get(folio);
>     ...
> }
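> 
> Spelled out a bit more (untested; in that split, writability would come
> from is_writable_migration_entry() rather than pmd_write(), since pmde
> is never constructed in the device-private branch):
> 
>     if (folio_is_device_private(folio)) {
>         entry = pmd_to_swp_entry(*pvmw->pmd);
>         folio_get(folio);
>         if (is_writable_migration_entry(entry))
>             entry = make_writable_device_private_entry(page_to_pfn(new));
>         else
>             entry = make_readable_device_private_entry(page_to_pfn(new));
>         pmde = swp_entry_to_pmd(entry);
>         /* swp soft-dirty/uffd-wp bits as in the hunk above */
>     } else {
>         entry = pmd_to_swp_entry(*pvmw->pmd);
>         folio_get(folio);
>         pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
>         /* present-pmd soft-dirty/write/uffd-wp/young/dirty bits as today */
>         ...
>     }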
> 
> That is something for another day though, and remove_migration_pte() should 
> be cleaned up then as well.
> 

Agreed, and thanks for the review!

Balbir
