Greg,
please hold off on this one, as it needs a follow-up fix
(https://lkml.org/lkml/2013/10/30/520) which is not merged yet, AFAICS.

On Wed 30-10-13 15:40:18, [email protected] wrote:
> 
> This is a note to let you know that I've just added the patch titled
> 
>     fs: buffer: move allocation failure loop into the allocator
> 
> to the 3.11-stable tree which can be found at:
>     
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
> 
> The filename of the patch is:
>      fs-buffer-move-allocation-failure-loop-into-the-allocator.patch
> and it can be found in the queue-3.11 subdirectory.
> 
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <[email protected]> know about it.
> 
> 
> From 84235de394d9775bfaa7fa9762a59d91fef0c1fc Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <[email protected]>
> Date: Wed, 16 Oct 2013 13:47:00 -0700
> Subject: fs: buffer: move allocation failure loop into the allocator
> 
> From: Johannes Weiner <[email protected]>
> 
> commit 84235de394d9775bfaa7fa9762a59d91fef0c1fc upstream.
> 
> Buffer allocation has a very crude indefinite loop around waking the
> flusher threads and performing global NOFS direct reclaim because it
> cannot handle allocation failures.
> 
> The most immediate problem with this is that the allocation may fail due
> to a memory cgroup limit, where flushers + direct reclaim might not make
> any progress towards resolving the situation at all.  Unlike the global
> case, a memory cgroup may not have any cache at all, only anonymous
> pages and no swap.  This situation will lead to a reclaim livelock with
> insane IO from waking the flushers and thrashing unrelated filesystem
> cache in a tight loop.
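
For context, the "crude indefinite loop" referred to above is the retry
in __getblk_slow() in fs/buffer.c, which falls back to free_more_memory()
on every failure.  Roughly (quoting the 3.11-era code from memory, so
the details may differ a bit):

	static struct buffer_head *
	__getblk_slow(struct block_device *bdev, sector_t block, unsigned size)
	{
		/* size sanity checks elided */
		for (;;) {
			struct buffer_head *bh;
			int ret;

			bh = __find_get_block(bdev, block, size);
			if (bh)
				return bh;

			ret = grow_buffers(bdev, block, size);
			if (ret < 0)
				return NULL;
			if (ret == 0)
				/*
				 * free_more_memory() wakes the flusher
				 * threads and runs global NOFS direct
				 * reclaim.  Neither helps a memcg that
				 * has hit its limit with no page cache
				 * and no swap, hence the livelock.
				 */
				free_more_memory();
		}
	}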
> 
> Use __GFP_NOFAIL allocations for buffers for now.  This makes sure that
> any looping happens in the page allocator, which knows how to
> orchestrate kswapd, direct reclaim, and the flushers sensibly.  It also
> allows memory cgroups to detect allocations that can't handle failure
> and will allow them to ultimately bypass the limit if reclaim cannot
> make progress.
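
The "looping in the page allocator" part works because __GFP_NOFAIL
short-circuits the allocator's give-up logic; schematically (roughly
what should_alloc_retry() in mm/page_alloc.c does, not the literal
code):

	/* decide whether the slowpath retries after reclaim made no progress */
	if (gfp_mask & __GFP_NORETRY)
		return 0;	/* caller asked for fail-fast */
	if (gfp_mask & __GFP_NOFAIL)
		return 1;	/* never give up, keep reclaiming and retrying */

The memcg side is the mm/memcontrol.c hunk below: a charge that carries
__GFP_NOFAIL no longer returns -ENOMEM from the nomem path.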
> 
> Reported-by: azurIt <[email protected]>
> Signed-off-by: Johannes Weiner <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> Signed-off-by: Linus Torvalds <[email protected]>
> Signed-off-by: Greg Kroah-Hartman <[email protected]>
> 
> ---
>  fs/buffer.c     |   14 ++++++++++++--
>  mm/memcontrol.c |    2 ++
>  2 files changed, 14 insertions(+), 2 deletions(-)
> 
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -1005,9 +1005,19 @@ grow_dev_page(struct block_device *bdev,
>       struct buffer_head *bh;
>       sector_t end_block;
>       int ret = 0;            /* Will call free_more_memory() */
> +     gfp_t gfp_mask;
>  
> -     page = find_or_create_page(inode->i_mapping, index,
> -             (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
> +     gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
> +     gfp_mask |= __GFP_MOVABLE;
> +     /*
> +      * XXX: __getblk_slow() can not really deal with failure and
> +      * will endlessly loop on improvised global reclaim.  Prefer
> +      * looping in the allocator rather than here, at least that
> +      * code knows what it's doing.
> +      */
> +     gfp_mask |= __GFP_NOFAIL;
> +
> +     page = find_or_create_page(inode->i_mapping, index, gfp_mask);
>       if (!page)
>               return ret;
>  
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2772,6 +2772,8 @@ done:
>       return 0;
>  nomem:
>       *ptr = NULL;
> +     if (gfp_mask & __GFP_NOFAIL)
> +             return 0;
>       return -ENOMEM;
>  bypass:
>       *ptr = root_mem_cgroup;
> 
> 
> Patches currently in stable-queue which might be from [email protected] are
> 
> queue-3.11/fs-buffer-move-allocation-failure-loop-into-the-allocator.patch

-- 
Michal Hocko
SUSE Labs