Commit:     cda9205da24aeaa8fb086b0fb85cdf39571ecc3f
Parent:     15c945c3d0913d73a7d57d7a0a3c4e2902598cc6
Author:     Chen, Kenneth W <[EMAIL PROTECTED]>
AuthorDate: Mon Jan 22 20:40:43 2007 -0800
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Tue Jan 23 07:52:06 2007 -0800

    [PATCH] fix blk_direct_IO bio preparation
    
    For a large DIO that needs multiple bios, one full page worth of data was
    lost at the boundary of a bio's maximum sector or segment limits.  Once a
    bio is full and has been submitted, the outer while (nbytes) { ... } loop
    allocates a new bio and marches on to index into the next page; it simply
    forgets the page that bio_add_page() rejected when the previous bio was
    full.  Fix it by putting the rejected page back into pvec so it is picked
    up again for the next bio.
    Signed-off-by: Ken Chen <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 fs/block_dev.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index da020be..d9bdf2b 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -190,6 +190,12 @@ static struct page *blk_get_page(unsigned long addr, size_t count, int rw,
 	return pvec->page[pvec->idx++];
 }
 
+/* return a page back to pvec array */
+static void blk_unget_page(struct page *page, struct pvec *pvec)
+{
+	pvec->page[--pvec->idx] = page;
+}
+
 static ssize_t
 blkdev_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
                 loff_t pos, unsigned long nr_segs)
@@ -278,6 +284,8 @@ same_bio:
 				count = min(count, nbytes);
 				goto same_bio;
 			}
+		} else {
+			blk_unget_page(page, &pvec);
 		}
 
 		/* bio is ready, submit it */
To unsubscribe from this list: send the line "unsubscribe git-commits-head" in
the body of a message to [EMAIL PROTECTED]