Commit:     9852a0e76cd9c89e71f84e784212fdd7a97ae93a
Parent:     6610a0bc8dcc120daa1d93807d470d5cbf777c39
Author:     Andrew Morton <[EMAIL PROTECTED]>
AuthorDate: Tue Oct 16 23:30:33 2007 -0700
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Wed Oct 17 08:43:02 2007 -0700

    writeback: fix time ordering of the per superblock dirty inode lists: memory-backed inodes
    For reasons which escape me, inodes which are dirty against a ram-backed
    filesystem are managed in the same way as inodes which are backed by real
    devices.
    
    Probably we could optimise things here.  But given that we skip the entire
    superblock as soon as we hit the first dirty inode, there's not a lot to be
    gained.  And the code does need to handle one particular non-backed
    superblock: the kernel's fake internal superblock which holds all the
    blockdevs.
    
    Still.  At present when the code encounters an inode which is dirty against a
    memory-backed filesystem it will skip that inode by refiling it back onto
    s_dirty.  But it fails to update the inode's timestamp when doing so, which at
    least makes the debugging code upset.
    Cc: Mike Waychison <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 fs/fs-writeback.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 08b9f83..f8618e0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -354,7 +354,7 @@ sync_sb_inodes(struct super_block *sb, struct writeback_control *wbc)
 		long pages_skipped;
 
 		if (!bdi_cap_writeback_dirty(bdi)) {
-			list_move(&inode->i_list, &sb->s_dirty);
+			redirty_tail(inode);
 			if (sb_is_blkdev_sb(sb)) {
 				/*
 				 * Dirty memory-backed blockdev: the ramdisk
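
For context, redirty_tail() is the helper introduced earlier in this patch
series in fs/fs-writeback.c.  The sketch below paraphrases it from memory of
the code at this point in the tree, so treat the exact body as illustrative
rather than verbatim: unlike the bare list_move() it replaces, it also
refreshes inode->dirtied_when when the inode's stamp is older than that of the
most-recently-dirtied inode already on s_dirty, which is what keeps the list
time-ordered and the debugging checks quiet.

/*
 * Sketch (not verbatim) of redirty_tail() as added earlier in this series.
 * Re-file @inode onto sb->s_dirty as the old list_move() did, but also
 * refresh its dirtied_when stamp so s_dirty stays sorted by dirtying time.
 */
static void redirty_tail(struct inode *inode)
{
	struct super_block *sb = inode->i_sb;

	if (!list_empty(&sb->s_dirty)) {
		struct inode *tail;

		/*
		 * sb->s_dirty.next is the most-recently-dirtied inode.  If
		 * @inode's stamp is older than that one's, stamping it with
		 * the current time keeps s_dirty in order after the move.
		 */
		tail = list_entry(sb->s_dirty.next, struct inode, i_list);
		if (!time_after_eq(inode->dirtied_when, tail->dirtied_when))
			inode->dirtied_when = jiffies;
	}
	list_move(&inode->i_list, &sb->s_dirty);
}

With that helper in place, the one-line change in the hunk above is enough:
the blockdev special case that follows still skips the inode, but now with a
timestamp the time-ordering checks accept.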