On Mon, Dec 24, 2007 at 10:25:53AM +0800, Fengguang Wu wrote:
> On Sun, Dec 23, 2007 at 10:35:45AM -0800, [EMAIL PROTECTED] wrote:
> > http://bugzilla.kernel.org/show_bug.cgi?id=9291
> 
> Hmm, I just tried JFS on LVM - still OK.
> It seems not related to LVM.

I can now reproduce the bug on JFS with the following command:

debootstrap --arch i386 etch /mnt/jfs http://debian.ustc.edu.cn/debian

It's a rather involved procedure, but I just cannot trigger the bug through
simple operations like cp/cat/truncate ...

The symptoms:

- one pdflush stuck in D state:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       321  0.0  0.0      0     0 ?        D    13:45   0:01 [pdflush]
root     15397  0.0  0.0      0     0 ?        S    14:21   0:00 [pdflush]

- `sync` temporarily breaks pdflush out of the loop, but 5s later
  wb_kupdate() wakes up again and pdflush goes back into D state
  (see the sketch after the log below).

- the loop in wb_kupdate() goes like this:

[ 4188.005005] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.005028] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.105452] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.105473] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.205814] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.205835] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.306080] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.306108] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.406563] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.406585] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.506988] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.507009] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.607438] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.607459] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.707892] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.707926] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.808286] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.808309] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4188.908625] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4188.908646] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4189.009169] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4189.009182] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
[ 4189.109454] requeue_io 301: inode 0 size 320753664 at 08:18(sdb8)
[ 4189.109476] mm/page-writeback.c 668 wb_kupdate: pdflush(321) 39494 global 16 0 0 wc _M tw 1024 sk 0
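
The ~100ms interval between entries, the constant "tw 1024" and the "_M"
flag all point at the retry loop in wb_kupdate(). A hand-simplified sketch
of that loop (close to mm/page-writeback.c of this kernel, but not a
verbatim quote):

        /* wb_kupdate(), simplified: MAX_WRITEBACK_PAGES is 1024, so
         * "tw 1024" means not a single page got written per round. */
        while (nr_to_write > 0) {
                wbc.more_io = 0;
                wbc.encountered_congestion = 0;
                wbc.nr_to_write = MAX_WRITEBACK_PAGES;
                writeback_inodes(&wbc);
                if (wbc.nr_to_write > 0) {
                        if (wbc.encountered_congestion || wbc.more_io)
                                /* more_io is set ("_M"), so we nap
                                 * ~100ms and retry: one log pair per
                                 * 100ms, forever */
                                congestion_wait(WRITE, HZ/10);
                        else
                                break;  /* all the old data is written */
                }
                nr_to_write -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
        }
        /* The timer is then rearmed, which is why `sync` only helps
         * until wb_kupdate() fires again 5s later
         * (dirty_writeback_interval). */
        if (dirty_writeback_interval)
                mod_timer(&wb_timer, next_jif);

Since nothing is ever written, nr_to_write never decreases and the loop
never exits on its own.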

Here is the printk that produces the wb_kupdate lines above:

        printk(KERN_DEBUG "%s %d %s: %s(%d) %ld "
                        "global %lu %lu %lu "
                        "wc %c%c tw %ld sk %ld\n",
                        file, line, func,
                        current->comm, current->pid, n,
                        global_page_state(NR_FILE_DIRTY),      /* "global 16" */
                        global_page_state(NR_WRITEBACK),       /* first "0" */
                        global_page_state(NR_UNSTABLE_NFS),    /* second "0" */
                        wbc->encountered_congestion ? 'C':'_', /* "wc _": no congestion */
                        wbc->more_io ? 'M':'_',                /* "M": more_io set */
                        wbc->nr_to_write,                      /* "tw 1024" */
                        wbc->pages_skipped);                   /* "sk 0" */


The requeue_io lines show that the special inode 0 in JFS is tagged dirty in
the radix tree but does not have any dirty pages to sync.
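
If that tag never gets cleared, the standard writeback path explains the
endless requeueing. A minimal sketch of the relevant part of
__sync_single_inode() (simplified from fs/fs-writeback.c of this kernel,
not verbatim):

        ret = do_writepages(mapping, wbc);  /* finds no dirty page, so
                                             * wbc->nr_to_write stays
                                             * at 1024 */
        ...
        if (!(inode->i_state & I_DIRTY) &&
            mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) {
                /* The radix tree still claims dirty pages, so the
                 * kupdate case puts the inode back on the I/O queue,
                 * where the next wb_kupdate() round picks it up again:
                 * matching the "requeue_io 301" lines above. */
                requeue_io(inode);
        }

So the question reduces to why JFS leaves PAGECACHE_TAG_DIRTY set on
inode 0's mapping without any dirty page backing it.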

Any ideas on possible causes?

Thank you,
Fengguang

