On Mon 05-08-19 15:20:17, john.hubb...@gmail.com wrote:
> From: John Hubbard <jhubb...@nvidia.com>
> 
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().

Hmm, this is an interesting code path. There seems to be a mix of page
references in play here: we get one page via follow_page_mask(), but
the other pages in the range are filled in by __munlock_pagevec_fill(),
which does a direct pte walk. Is using put_user_page() correct in that
case? Could you explain why in the changelog?
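
To make the asymmetry concrete, here is a simplified sketch of the two
reference pairings as I understand them (this is not the actual
mm/mlock.c code; the pagevec handling and error paths are elided):

	/*
	 * The head page comes in via gup, so releasing it with
	 * put_user_page() pairs with that reference:
	 */
	page = follow_page(vma, start, FOLL_GET | FOLL_DUMP);
	...
	put_user_page(page);	/* undoes the gup reference */

	/*
	 * The neighbouring pages are filled in by
	 * __munlock_pagevec_fill(), which walks the ptes directly and
	 * takes plain references:
	 */
	get_page(page);
	...
	put_page(page);		/* plain refcount; must stay put_page() */

So if any of the converted call sites can ever see a page that entered
via the pte walk rather than via gup, the conversion would unbalance
the two counting schemes.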

> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
> 
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: Daniel Black <dan...@linux.ibm.com>
> Cc: Jan Kara <j...@suse.cz>
> Cc: Jérôme Glisse <jgli...@redhat.com>
> Cc: Matthew Wilcox <wi...@infradead.org>
> Cc: Mike Kravetz <mike.krav...@oracle.com>
> Signed-off-by: John Hubbard <jhubb...@nvidia.com>
> ---
>  mm/mlock.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/mlock.c b/mm/mlock.c
> index a90099da4fb4..b980e6270e8a 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>                               get_page(page); /* for putback_lru_page() */
>                               __munlock_isolated_page(page);
>                               unlock_page(page);
> -                             put_page(page); /* from follow_page_mask() */
> +                             put_user_page(page); /* from follow_page_mask() */
>                       }
>               }
>       }
> @@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>               if (page && !IS_ERR(page)) {
>                       if (PageTransTail(page)) {
>                               VM_BUG_ON_PAGE(PageMlocked(page), page);
> -                             put_page(page); /* follow_page_mask() */
> +                             put_user_page(page); /* follow_page_mask() */
>                       } else if (PageTransHuge(page)) {
>                               lock_page(page);
>                               /*
> @@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>                                */
>                               page_mask = munlock_vma_page(page);
>                               unlock_page(page);
> -                             put_page(page); /* follow_page_mask() */
> +                             put_user_page(page); /* follow_page_mask() */
>                       } else {
>                               /*
>                                * Non-huge pages are handled in batches via
> -- 
> 2.22.0

-- 
Michal Hocko
SUSE Labs
