FYI

commit cc54d1333e409f714aa9c7db63f7f9ed07cc57a9
tree f301f581dd4389028f8b2588940d456904e552f1
parent 2e8f68c45925123d33d476ce369b570bd989dd9a
author Larry Woodman <[EMAIL PROTECTED]> Fri, 15 Jul 2005 11:32:08 -0400
committer Marcelo Tosatti <[EMAIL PROTECTED]> Tue, 26 Jul 2005 07:52:46 -0300

    [PATCH] workaround inode cache (prune_icache/__refile_inode) SMP races

    Over the past couple of weeks we have seen two races in the inode cache
    code. The first is between dispose_list() and __refile_inode() and the
    second is between prune_icache() and truncate_inodes(). I posted both of
    these patches but wanted to make sure they got properly reviewed and
    included in RHEL3-U6.

--- a/fs/inode.c
+++ b/fs/inode.c
@@ -297,7 +297,7 @@ static inline void __refile_inode(struct
 {
        struct list_head *to;

-       if (inode->i_state & I_FREEING)
+       if (inode->i_state & (I_FREEING|I_CLEAR))
                return;
        if (list_empty(&inode->i_hash))
                return;
@@ -634,7 +634,9 @@ void clear_inode(struct inode *inode)
                cdput(inode->i_cdev);
                inode->i_cdev = NULL;
        }
+       spin_lock(&inode_lock);
        inode->i_state = I_CLEAR;
+       spin_unlock(&inode_lock);
 }

 /*
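
For anyone who wants to see the locking pattern in isolation: below is a
small userspace sketch of it. This is not the RHEL3 fs/inode.c code -- the
struct, the refile()/clear_obj() helpers and obj_lock are all made up for
illustration -- it just shows the idea the two hunks implement: clear_inode()
now publishes I_CLEAR under the same lock (inode_lock) that __refile_inode()
takes, so a racing refile either sees I_CLEAR and bails out, or completes
entirely before the clear.

/*
 * Userspace sketch only, NOT kernel code.  The names obj, refile(),
 * clear_obj() and obj_lock are invented; obj_lock stands in for
 * inode_lock and on_list stands in for the inode LRU lists.
 */
#include <pthread.h>
#include <stdio.h>

#define I_FREEING 0x1
#define I_CLEAR   0x2

struct obj {
        unsigned int state;
        int on_list;                    /* "is it on an LRU list?" */
};

static pthread_mutex_t obj_lock = PTHREAD_MUTEX_INITIALIZER;

/* analogue of __refile_inode(): never refile a dying or dead object */
static void refile(struct obj *o)
{
        pthread_mutex_lock(&obj_lock);
        if (!(o->state & (I_FREEING | I_CLEAR)))
                o->on_list = 1;
        pthread_mutex_unlock(&obj_lock);
}

/* analogue of clear_inode(): the flag is set *under* the lock */
static void clear_obj(struct obj *o)
{
        pthread_mutex_lock(&obj_lock);
        o->state = I_CLEAR;
        pthread_mutex_unlock(&obj_lock);
}

static void *refiler(void *arg)
{
        refile(arg);
        return NULL;
}

int main(void)
{
        struct obj o = { .state = 0, .on_list = 0 };
        pthread_t t;

        clear_obj(&o);                          /* dispose_list() side */
        pthread_create(&t, NULL, refiler, &o);  /* __refile_inode() side */
        pthread_join(t, NULL);

        /* with the I_CLEAR check, the dead object stays off the list */
        printf("on_list=%d (expect 0)\n", o.on_list);
        return 0;
}

Builds with "gcc -pthread". Because the flag is published under the same
lock the refile path holds, there is no window in which a cleared object
can be put back on a list, which is exactly what the hunks above arrange
for the inode case.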


On Sun, Jun 19, 2005 at 11:07:44PM +0000, Chris Caputo wrote:
> My basic repro method was:
> 
> --
> 0) start irqbalance
> 1) run loop_dbench, which is the following dbench script which uses
>    client_plain.txt:
> 
>    #!/bin/sh
> 
>    while [ 1 ]
>    do
>         date
>         dbench 2
>    done
> 
> 2) wait for oops
> --
> 
> I think I was using dbench-2.1:
> 
>   http://samba.org/ftp/tridge/dbench/dbench-2.1.tar.gz
> 
> In my case irqbalance was key.  If I didn't run it, I never got the
> problem.  I think irqbalance just did a good job of exacerbating a race
> condition in some way.