On Fri, May 08, 2026 at 04:55:19PM +0100, Kiryl Shutsemau (Meta) wrote:
> Three mm paths outside the fault handler gate on the uffd PTE bit
> today: khugepaged (skip collapse on ranges carrying markers), rmap
> (cap unmap batching), and GUP (force a fault through
> gup_can_follow_protnone). Extend each to treat VM_UFFD_RWP the same
> as VM_UFFD_WP; otherwise per-PTE RWP state is silently destroyed or
> bypassed.
>
> khugepaged: try_collapse_pte_mapped_thp() and
> file_backed_vma_is_retractable() already refuse to collapse or
> retract page tables on ranges carrying the uffd PTE bit. Broaden the
> VMA predicate from userfaultfd_wp() to userfaultfd_protected() so
> VM_UFFD_RWP ranges get the same protection. hpage_collapse_scan_pmd()
> needs no change — its existing pte_uffd() check already catches an
> RWP PTE because it carries the uffd bit.
>
> rmap: folio_unmap_pte_batch() caps batching at 1 for VM_UFFD_RWP so
> the restore path handles each PTE with its own marker.
>
> GUP: gup_can_follow_protnone() forces a fault on VM_UFFD_RWP VMAs
> regardless of FOLL_HONOR_NUMA_FAULT. RWP uses protnone as an
> access-tracking marker, not for NUMA hinting, so any GUP — read or
> write — must go through the userfaultfd fault path.
>
> Signed-off-by: Kiryl Shutsemau <[email protected]>
> Assisted-by: Claude:claude-opus-4-6
Acked-by: Mike Rapoport (Microsoft) <[email protected]>

> ---
>  include/linux/mm.h | 10 +++++++++-
>  mm/khugepaged.c    | 18 +++++++++++-------
>  mm/rmap.c          |  2 +-
>  3 files changed, 21 insertions(+), 9 deletions(-)

--
Sincerely yours,
Mike.

