We're panicking in ocfs2_read_blocks_sync() if a jbd-managed buffer is seen.
At first glance this seems fine, but in reality it can happen. My test case
was to just run 'exorcist'. A struct inode is pushed out of memory and then
re-read at a later time, before jbd has checkpointed its buffer. This
triggers the BUG() in ocfs2_read_blocks_sync().

Signed-off-by: Mark Fasheh <[EMAIL PROTECTED]>
---
 fs/ocfs2/buffer_head_io.c |   12 +++---------
 1 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
index 7e947c6..fe2710f 100644
--- a/fs/ocfs2/buffer_head_io.c
+++ b/fs/ocfs2/buffer_head_io.c
@@ -112,7 +112,7 @@ int ocfs2_read_blocks_sync(struct ocfs2_super *osb, u64 block,
                bh = bhs[i];
 
                if (buffer_jbd(bh)) {
-                       mlog(ML_ERROR,
+                       mlog(ML_BH_IO,
                             "trying to sync read a jbd "
                             "managed bh (blocknr = %llu), skipping\n",
                             (unsigned long long)bh->b_blocknr);
@@ -147,15 +147,9 @@ int ocfs2_read_blocks_sync(struct ocfs2_super *osb, u64 block,
        for (i = nr; i > 0; i--) {
                bh = bhs[i - 1];
 
-               if (buffer_jbd(bh)) {
-                       mlog(ML_ERROR,
-                            "the journal got the buffer while it was "
-                            "locked for io! (blocknr = %llu)\n",
-                            (unsigned long long)bh->b_blocknr);
-                       BUG();
-               }
+               if (!buffer_jbd(bh))
+                       wait_on_buffer(bh);
 
-               wait_on_buffer(bh);
                if (!buffer_uptodate(bh)) {
                        /* Status won't be cleared from here on out,
                         * so we can safely record this and loop back
-- 
1.5.6


_______________________________________________
Ocfs2-devel mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-devel