On Tue, Apr 10, 2018 at 05:53:51AM -0700, Matthew Wilcox wrote:
> From: Matthew Wilcox <mawil...@microsoft.com>
> 
> The page cache has used the mapping's GFP flags for allocating
> radix tree nodes for a long time.  It took care to always mask off the
> __GFP_HIGHMEM flag, and masked off other flags in other paths, but the
> __GFP_ZERO flag was still able to sneak through.  The __GFP_DMA and
> __GFP_DMA32 flags would also have been able to sneak through if they
> were ever used.  Fix them all by using GFP_RECLAIM_MASK at the innermost
> location, and remove it from earlier in the callchain.
> 
> Fixes: 19f99cee206c ("f2fs: add core inode operations")

Why does this patch fix 19f99cee206c rather than 449dd6984d0e?
Didn't F2FS only start having this problem once 449dd6984d0e was introduced?


> Reported-by: Minchan Kim <minc...@kernel.org>
> Signed-off-by: Matthew Wilcox <mawil...@microsoft.com>
> Cc: sta...@vger.kernel.org
> ---
>  mm/filemap.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index c2147682f4c3..1a4bfc5ed3dc 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -785,7 +785,7 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
>       VM_BUG_ON_PAGE(!PageLocked(new), new);
>       VM_BUG_ON_PAGE(new->mapping, new);
>  
> -     error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
> +     error = radix_tree_preload(gfp_mask & GFP_RECLAIM_MASK);
>       if (!error) {
>               struct address_space *mapping = old->mapping;
>               void (*freepage)(struct page *);
> @@ -841,7 +841,7 @@ static int __add_to_page_cache_locked(struct page *page,
>                       return error;
>       }
>  
> -     error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
> +     error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
>       if (error) {
>               if (!huge)
>                       mem_cgroup_cancel_charge(page, memcg, false);
> @@ -1574,8 +1574,7 @@ struct page *pagecache_get_page(struct address_space 
> *mapping, pgoff_t offset,
>               if (fgp_flags & FGP_ACCESSED)
>                       __SetPageReferenced(page);
>  
> -             err = add_to_page_cache_lru(page, mapping, offset,
> -                             gfp_mask & GFP_RECLAIM_MASK);
> +             err = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
>               if (unlikely(err)) {
>                       put_page(page);
>                       page = NULL;
> @@ -2378,7 +2377,7 @@ static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
>               if (!page)
>                       return -ENOMEM;
>  
> -             ret = add_to_page_cache_lru(page, mapping, offset, gfp_mask & GFP_KERNEL);
> +             ret = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
>               if (ret == 0)
>                       ret = mapping->a_ops->readpage(file, page);
>               else if (ret == -EEXIST)
> -- 
> 2.16.3
> 
