On Thu, 12 May 2011, [email protected] wrote:
> 
> The patch below does not apply to the .38-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <[email protected]>.

Thank you for reporting that the original of this patch failed to apply
to the .38-stable tree.  Please retry it after first applying this other
fix below from 2.6.39.  As for .33-longterm, I'll send a backport
separately.

Thanks,
Hugh

commit fc5da22ae35d4720be59af8787a8a6d5e4da9517
Author: Hugh Dickins <[email protected]>
Date:   Thu Apr 14 15:22:07 2011 -0700
Subject: [PATCH] tmpfs: fix off-by-one in max_blocks checks
    
    If you fill up a tmpfs, df was showing
    
      tmpfs                   460800         -         -   -  /tmp
    
    because of an off-by-one in the max_blocks checks.  Fix it so df shows
    
      tmpfs                   460800    460800         0 100% /tmp
    
    Signed-off-by: Hugh Dickins <[email protected]>
    Cc: Tim Chen <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Linus Torvalds <[email protected]>

diff --git a/mm/shmem.c b/mm/shmem.c
index 58da7c1..8fa27e4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -421,7 +421,8 @@ static swp_entry_t *shmem_swp_alloc(struct shmem_inode_info *info, unsigned long
                 * a waste to allocate index if we cannot allocate data.
                 */
                if (sbinfo->max_blocks) {
-                       if (percpu_counter_compare(&sbinfo->used_blocks, (sbinfo->max_blocks - 1)) > 0)
+                       if (percpu_counter_compare(&sbinfo->used_blocks,
+                                               sbinfo->max_blocks - 1) >= 0)
                                return ERR_PTR(-ENOSPC);
                        percpu_counter_inc(&sbinfo->used_blocks);
                        spin_lock(&inode->i_lock);
@@ -1397,7 +1398,8 @@ repeat:
                shmem_swp_unmap(entry);
                sbinfo = SHMEM_SB(inode->i_sb);
                if (sbinfo->max_blocks) {
-                       if ((percpu_counter_compare(&sbinfo->used_blocks, sbinfo->max_blocks) > 0) ||
+                       if (percpu_counter_compare(&sbinfo->used_blocks,
+                                               sbinfo->max_blocks) >= 0 ||
                            shmem_acct_block(info->flags)) {
                                spin_unlock(&info->lock);
                                error = -ENOSPC;

> 
> ------------------ original commit in Linus's tree ------------------
> 
> From 59a16ead572330deb38e5848151d30ed1af754bc Mon Sep 17 00:00:00 2001
> From: Hugh Dickins <[email protected]>
> Date: Wed, 11 May 2011 15:13:38 -0700
> Subject: [PATCH] tmpfs: fix spurious ENOSPC when racing with unswap
> 
> Testing the shmem_swaplist replacements for igrab() revealed another bug:
> writes to /dev/loop0 on a tmpfs file which fills its filesystem were
> sometimes failing with "Buffer I/O error"s.
> 
> These came from ENOSPC failures of shmem_getpage(), when racing with
> swapoff: the same could happen when racing with another shmem_getpage(),
> pulling the page in from swap in between our find_lock_page() and our
> taking the info->lock (though not in the single-threaded loop case).
> 
> This is unacceptable, and surprising that I've not noticed it before:
> it dates back many years, but (presumably) was made a lot easier to
> reproduce in 2.6.36, which sited a page preallocation in the race window.
> 
> Fix it by rechecking the page cache before settling on an ENOSPC error.
> 
> Signed-off-by: Hugh Dickins <[email protected]>
> Cc: Konstantin Khlebnikov <[email protected]>
> Cc: <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> Signed-off-by: Linus Torvalds <[email protected]>
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index dc17551..9e755c1 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1407,20 +1407,14 @@ repeat:
>               if (sbinfo->max_blocks) {
>                       if (percpu_counter_compare(&sbinfo->used_blocks,
>                                               sbinfo->max_blocks) >= 0 ||
> -                         shmem_acct_block(info->flags)) {
> -                             spin_unlock(&info->lock);
> -                             error = -ENOSPC;
> -                             goto failed;
> -                     }
> +                         shmem_acct_block(info->flags))
> +                             goto nospace;
>                       percpu_counter_inc(&sbinfo->used_blocks);
>                       spin_lock(&inode->i_lock);
>                       inode->i_blocks += BLOCKS_PER_PAGE;
>                       spin_unlock(&inode->i_lock);
> -             } else if (shmem_acct_block(info->flags)) {
> -                     spin_unlock(&info->lock);
> -                     error = -ENOSPC;
> -                     goto failed;
> -             }
> +             } else if (shmem_acct_block(info->flags))
> +                     goto nospace;
>  
>               if (!filepage) {
>                       int ret;
> @@ -1500,6 +1494,24 @@ done:
>       error = 0;
>       goto out;
>  
> +nospace:
> +     /*
> +      * Perhaps the page was brought in from swap between find_lock_page
> +      * and taking info->lock?  We allow for that at add_to_page_cache_lru,
> +      * but must also avoid reporting a spurious ENOSPC while working on a
> +      * full tmpfs.  (When filepage has been passed in to shmem_getpage, it
> +      * is already in page cache, which prevents this race from occurring.)
> +      */
> +     if (!filepage) {
> +             struct page *page = find_get_page(mapping, idx);
> +             if (page) {
> +                     spin_unlock(&info->lock);
> +                     page_cache_release(page);
> +                     goto repeat;
> +             }
> +     }
> +     spin_unlock(&info->lock);
> +     error = -ENOSPC;
>  failed:
>       if (*pagep != filepage) {
>               unlock_page(filepage);

_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable
