Hi Peter,

On Mon, Feb 02, 2026 at 04:49:09PM -0500, Peter Xu wrote:
> Hi, Mike,
> 
> On Tue, Jan 27, 2026 at 09:29:23PM +0200, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)" <[email protected]>
> > 
> > Split the code that finds, locks and verifies VMA from mfill_atomic()
> > into a helper function.
> > 
> > This function will be used later during refactoring of
> > mfill_atomic_pte_copy().
> > 
> > Add a counterpart mfill_put_vma() helper that unlocks the VMA and
> > releases map_changing_lock.
> > 
> > Signed-off-by: Mike Rapoport (Microsoft) <[email protected]>
> > ---
> >  mm/userfaultfd.c | 124 ++++++++++++++++++++++++++++-------------------
> >  1 file changed, 73 insertions(+), 51 deletions(-)
> > 
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 9dd285b13f3b..45d8f04aaf4f 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -157,6 +157,73 @@ static void uffd_mfill_unlock(struct vm_area_struct *vma)
> >  }
> >  #endif
> >  
> > +static void mfill_put_vma(struct mfill_state *state)
> > +{
> > +   up_read(&state->ctx->map_changing_lock);
> > +   uffd_mfill_unlock(state->vma);
> > +   state->vma = NULL;
> > +}
> > +
> > +static int mfill_get_vma(struct mfill_state *state)
> > +{
> > +   struct userfaultfd_ctx *ctx = state->ctx;
> > +   uffd_flags_t flags = state->flags;
> > +   struct vm_area_struct *dst_vma;
> > +   int err;
> > +
> > +   /*
> > +    * Make sure the vma is not shared, that the dst range is
> > +    * both valid and fully within a single existing vma.
> > +    */
> > +   dst_vma = uffd_mfill_lock(ctx->mm, state->dst_start, state->len);
> > +   if (IS_ERR(dst_vma))
> > +           return PTR_ERR(dst_vma);
> > +
> > +   /*
> > +    * If memory mappings are changing because of non-cooperative
> > +    * operation (e.g. mremap) running in parallel, bail out and
> > +    * request the user to retry later
> > +    */
> > +   down_read(&ctx->map_changing_lock);
> > +   err = -EAGAIN;
> > +   if (atomic_read(&ctx->mmap_changing))
> > +           goto out_unlock;
> > +
> > +   err = -EINVAL;
> > +
> > +   /*
> > +    * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
> > +    * it will overwrite vm_ops, so vma_is_anonymous must return false.
> > +    */
> > +   if (WARN_ON_ONCE(vma_is_anonymous(dst_vma) &&
> > +       dst_vma->vm_flags & VM_SHARED))
> > +           goto out_unlock;
> > +
> > +   /*
> > +    * validate 'mode' now that we know the dst_vma: don't allow
> > +    * a wrprotect copy if the userfaultfd didn't register as WP.
> > +    */
> > +   if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
> > +           goto out_unlock;
> > +
> > +   if (is_vm_hugetlb_page(dst_vma))
> > +           goto out;
> > +
> > +   if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
> > +           goto out_unlock;
> > +   if (!vma_is_shmem(dst_vma) &&
> > +       uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> > +           goto out_unlock;
> 
> IMHO it's a bit weird to check for vma permissions in a get_vma() function.
> 
> Also, in the follow up patch it'll be also reused in
> mfill_copy_folio_retry() which doesn't need to check vma permission.
> 
> Maybe we can introduce mfill_vma_check() for these two checks? Then we can
> also drop the slightly weird is_vm_hugetlb_page() check (and "out" label)
> above.

This version of get_vma() keeps the checks exactly as they were when we
were retrying after dropping the lock, and I prefer to have them this way
to begin with. We can optimize this further later, once the dust from
these changes settles.
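
To make the intended pairing explicit, callers are expected to look
roughly like this (just a sketch: mfill_example() is a made-up name, the
real user is the mfill_atomic() path, the fields come from the
mfill_state in the hunk above, and success returning 0 is assumed):

	static ssize_t mfill_example(struct mfill_state *state)
	{
		int err;

		/* lock and validate the dst VMA, take map_changing_lock */
		err = mfill_get_vma(state);
		if (err)
			return err;	/* e.g. -EAGAIN if mmap_changing is set */

		/* ... operate on state->vma while the locks are held ... */

		/* drop map_changing_lock, unlock the VMA, clear state->vma */
		mfill_put_vma(state);
		return 0;
	}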

-- 
Sincerely yours,
Mike.
