On Fri, May 08, 2026 at 04:55:18PM +0100, Kiryl Shutsemau (Meta) wrote:
> The uffd PTE bit must survive any kernel path that rewrites a PTE
> on a VM_UFFD_RWP VMA; otherwise the marker that carries PAGE_NONE
> semantics is silently dropped and the next access leaks past RWP
> tracking. Wire the preservation through every path that rewrites a
> VM_UFFD_RWP PTE.
> 
> Swap and device-exclusive: do_swap_page(), restore_exclusive_pte(),
> and unuse_pte() (swapoff()) re-apply PAGE_NONE when the swap PTE
> carries the uffd bit and the VMA has VM_UFFD_RWP.
> 
> Migration: remove_migration_pte() and remove_migration_pmd() do the
> same after the migration entry is replaced with a real PTE/PMD.
> 
> Fork: __copy_present_ptes(), copy_present_page(), copy_nonpresent_pte(),
> copy_huge_pmd(), copy_huge_non_present_pmd(), and
> copy_hugetlb_page_range() keep the uffd bit on the child when the
> destination VMA has VM_UFFD_RWP, matching the existing VM_UFFD_WP
> handling. Add VM_UFFD_RWP to VM_COPY_ON_FORK so the flag itself
> propagates.
> 
> mprotect(): change_pte_range() and change_huge_pmd() restore PAGE_NONE
> after pte_modify()/pmd_modify() have recomputed the base protection
> from a (possibly user-changed) vm_page_prot. pte_modify() preserves
> _PAGE_UFFD, so the bit stays; we just have to force PAGE_NONE back
> on top.
> 
> Signed-off-by: Kiryl Shutsemau <[email protected]>
> Assisted-by: Claude:claude-opus-4-6

Acked-by: Mike Rapoport (Microsoft) <[email protected]>

> ---
>  include/linux/mm.h |  3 ++-
>  mm/huge_memory.c   | 47 ++++++++++++++++++++++++++++++++++++++++++----
>  mm/hugetlb.c       | 40 ++++++++++++++++++++++++++++++---------
>  mm/memory.c        | 47 +++++++++++++++++++++++++++++++++++++++-------
>  mm/migrate.c       |  8 ++++++++
>  mm/mprotect.c      | 10 ++++++++++
>  mm/mremap.c        | 13 +++++++++++--
>  mm/swapfile.c      |  5 +++++
>  mm/userfaultfd.c   | 14 ++++++++++++++
>  9 files changed, 164 insertions(+), 23 deletions(-)
> 

-- 
Sincerely yours,
Mike.
