This is a note to let you know that I've just added the patch titled

    cifs: don't compare uniqueids in cifs_prime_dcache unless server inode numbers are in use

to the 3.7-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     
cifs-don-t-compare-uniqueids-in-cifs_prime_dcache-unless-server-inode-numbers-are-in-use.patch
and it can be found in the queue-3.7 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.


From 2f2591a34db6c9361faa316c91a6e320cb4e6aee Mon Sep 17 00:00:00 2001
From: Jeff Layton <[email protected]>
Date: Tue, 18 Dec 2012 06:35:10 -0500
Subject: cifs: don't compare uniqueids in cifs_prime_dcache unless server inode numbers are in use

From: Jeff Layton <[email protected]>

commit 2f2591a34db6c9361faa316c91a6e320cb4e6aee upstream.

Oliver reported that commit cd60042c caused his cifs mounts to
continually thrash through new inodes on readdir. His servers are not
sending inode numbers (or he's not using them), and the new test in
that function doesn't account for that sort of setup correctly.

If we're not using server inode numbers, then assume that the inode
attached to the dentry hasn't changed. Go ahead and update the
attributes in place, but keep the same inode number.

Reported-and-Tested-by: Oliver Mössinger <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Signed-off-by: Steve French <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 fs/cifs/readdir.c |   19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

--- a/fs/cifs/readdir.c
+++ b/fs/cifs/readdir.c
@@ -78,6 +78,7 @@ cifs_prime_dcache(struct dentry *parent,
        struct dentry *dentry, *alias;
        struct inode *inode;
        struct super_block *sb = parent->d_inode->i_sb;
+       struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
 
        cFYI(1, "%s: for %s", __func__, name->name);
 
@@ -91,10 +92,20 @@ cifs_prime_dcache(struct dentry *parent,
                int err;
 
                inode = dentry->d_inode;
-               /* update inode in place if i_ino didn't change */
-               if (inode && CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {
-                       cifs_fattr_to_inode(inode, fattr);
-                       goto out;
+               if (inode) {
+                       /*
+                        * If we're generating inode numbers, then we don't
+                        * want to clobber the existing one with the one that
+                        * the readdir code created.
+                        */
+                       if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))
+                               fattr->cf_uniqueid = CIFS_I(inode)->uniqueid;
+
+                       /* update inode in place if i_ino didn't change */
+                       if (CIFS_I(inode)->uniqueid == fattr->cf_uniqueid) {
+                               cifs_fattr_to_inode(inode, fattr);
+                               goto out;
+                       }
                }
                err = d_invalidate(dentry);
                dput(dentry);

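For readers following the logic, here is a small stand-alone sketch of the decision the
hunk above makes. The struct and function names (fake_inode, fake_fattr, update_in_place)
and the server_inum_in_use flag are invented stand-ins for the kernel's cifsInodeInfo and
cifs_fattr types and the CIFS_MOUNT_SERVER_INUM mount-flag test; it only illustrates the
control flow of the fix, it is not the kernel code itself.

    /*
     * Simplified user-space model of the cifs_prime_dcache() change above.
     * All names here are hypothetical stand-ins for the real kernel types.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_inode { unsigned long long uniqueid;    };
    struct fake_fattr { unsigned long long cf_uniqueid; };

    /* Returns true when the cached inode can be updated in place. */
    static bool update_in_place(struct fake_inode *inode,
                                struct fake_fattr *fattr,
                                bool server_inum_in_use)
    {
        if (!inode)
            return false;

        /*
         * When the client generates its own inode numbers, the uniqueid
         * produced by the readdir code is meaningless; keep the existing
         * one so the comparison below (and the inode number) stays stable.
         */
        if (!server_inum_in_use)
            fattr->cf_uniqueid = inode->uniqueid;

        /* Update the inode in place only if i_ino didn't change. */
        return inode->uniqueid == fattr->cf_uniqueid;
    }

    int main(void)
    {
        struct fake_inode cached       = { .uniqueid    = 42   };
        struct fake_fattr from_readdir = { .cf_uniqueid = 1234 };

        /* Without server inode numbers the cached inode is reused. */
        printf("no server inums: %s\n",
               update_in_place(&cached, &from_readdir, false) ? "reuse" : "drop");

        /* With server inode numbers a mismatch still invalidates the dentry. */
        from_readdir.cf_uniqueid = 1234;
        printf("server inums:    %s\n",
               update_in_place(&cached, &from_readdir, true) ? "reuse" : "drop");

        return 0;
    }

Built as ordinary user-space C, the first call prints "reuse" because the generated
uniqueid is overwritten with the cached one before the comparison, while the second
prints "drop" because server-provided inode numbers are trusted and the mismatch
invalidates the dentry, matching the behaviour the patch restores.
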

Patches currently in stable-queue which might be from [email protected] are

queue-3.7/cifs-rename-cifs_readdir_lookup-to-cifs_prime_dcache-and-make-it-void-return.patch
queue-3.7/smb3-mounts-fail-with-access-denied-to-some-servers.patch
queue-3.7/cifs-adjust-sequence-number-downward-after-signing-nt_cancel.patch
queue-3.7/cifs-don-t-compare-uniqueids-in-cifs_prime_dcache-unless-server-inode-numbers-are-in-use.patch
queue-3.7/cifs-move-check-for-null-socket-into-smb_send_rqst.patch
queue-3.7/nfs-don-t-extend-writes-to-cover-entire-page-if-pagecache-is-invalid.patch
queue-3.7/nfs-don-t-zero-out-the-rest-of-the-page-if-we-hit-the-eof-on-a-dio-read.patch
