Commit:     637aff46f94a754207c80c8c64bf1b74f24b967d
Parent:     2f718ffc16c43a435d12919c75dbfad518abd056
Author:     Nick Piggin <[EMAIL PROTECTED]>
AuthorDate: Tue Oct 16 01:25:00 2007 -0700
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Tue Oct 16 09:42:55 2007 -0700

    fs: fix data-loss on error

    New buffers against uptodate pages are simply marked uptodate, while the
    buffer_new bit remains set.  This causes error-case code to zero out parts
    of those buffers because it thinks they contain stale data: wrong, they are
    actually uptodate, so this is a data-loss situation.

    Fix this by actually clearing buffer_new and marking the buffer dirty.  It
    makes sense to always clear buffer_new before setting a buffer uptodate.

    Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 fs/buffer.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 09bb80c..9ece6c2 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1813,7 +1813,9 @@ static int __block_prepare_write(struct inode *inode, struct page *page,
 				if (PageUptodate(page)) {
+					clear_buffer_new(bh);
+					mark_buffer_dirty(bh);
 					set_buffer_uptodate(bh);
 					continue;
 				}
 				if (block_end > to || block_start < from) {