Hi, I have a question about the JFS commit thread.

I'm testing JFS on a USB HDD. My usual test is to pull the USB cable
out suddenly during heavy write or delete operations.

In the general case the jfsCommit thread works fine, but sometimes a
kernel panic (NULL dereference) occurs in jfsCommit. I have been trying
to find where and why the panic occurs.

After rerunning the test several times, I found four places where the
panic occurs (see the sketch after this list):

- jfs_lazycommit -> txLazyCommit -> txUpdateMap -> txFreeMap()
  struct inode *ipbmap = JFS_SBI(ip->i_sb)->ipbmap;
- jfs_lazycommit -> txLazyCommit -> txUpdateMap -> txAllocPMap()
  struct inode *ipbmap = JFS_SBI(ip->i_sb)->ipbmap;
- jfs_lazycommit -> txLazyCommit -> txUnlock()
  LOGSYNC_LOCK(log, flags);
- jfs_lazycommit -> txLazyCommit()
  spin_lock_irq(&log->gclock);
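
The last two sites take a spinlock embedded in the log structure
(LOGSYNC_LOCK expands, as far as I can tell, to
spin_lock_irqsave(&log->synclock, flags)). In the oops at the end of
this mail, BadVA is 000000ac and register $4, the lock argument to
_spin_lock_irqsave, holds the same value. That pattern looks like a
NULL base pointer plus a struct-member offset rather than random
corruption. Here is a minimal userspace sketch of that arithmetic;
struct fake_jfs_log is a made-up stand-in, and the 0xac padding is only
an assumption about where synclock might sit in a real struct jfs_log:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical stand-in for struct jfs_log; real offsets depend
     * on kernel version and config. */
    struct fake_jfs_log {
            char other_fields[0xac];  /* assumed fields before the lock */
            int  synclock;            /* stand-in for spinlock_t synclock */
    };

    int main(void)
    {
            struct fake_jfs_log *log = NULL;

            /* With log == NULL, the "address" of the member is just
             * its offset -- exactly the kind of small bad virtual
             * address (000000ac) the oops reports.  Taking the address
             * through a NULL pointer is strictly undefined behaviour
             * in C, but it mirrors what the kernel ends up doing. */
            printf("offsetof(synclock) = 0x%zx\n",
                   offsetof(struct fake_jfs_log, synclock));
            printf("&log->synclock     = %p\n", (void *)&log->synclock);
            return 0;
    }
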

So I suspect that one of the pointers below holds a bad value when the
USB HDD is suddenly disconnected during a write or delete. Is that a
plausible guess? If so, I would like to understand how those pointers
become corrupted.

tblk / tblk->sb / JFS_SBI(tblk->sb) / JFS_SBI(tblk->sb)->log /
JFS_SBI(tblk->sb)->ipimap
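
For context, the oops happens after the unmount path has already run:
jfs_put_super, lmLogClose and lmLogShutdown all appear in the log
before txUnlock faults. So rather than on-disk corruption from the USB
pull itself, this may be the commit thread using a tblk whose
superblock/log was already torn down. To see which link of the chain
is bad, I would try instrumentation along these lines in txLazyCommit,
before any lock is taken -- a debugging sketch only; the field names
are my reading of fs/jfs/jfs_txnmgr.h and jfs_incore.h and may not
match your tree:

    /* Debugging sketch (kernel-style C, not a fix): report which
     * pointer in the tblk -> sb -> sbi -> log/ipimap chain is bad. */
    static void check_tblk_chain(struct tblock *tblk)
    {
            struct jfs_sb_info *sbi;

            if (!tblk || !tblk->sb) {
                    printk(KERN_ERR "jfs: bad tblk %p\n", tblk);
                    return;
            }
            sbi = JFS_SBI(tblk->sb);        /* tblk->sb->s_fs_info */
            if (!sbi) {
                    printk(KERN_ERR "jfs: tblk %p: no sb_info\n", tblk);
                    return;
            }
            if (!sbi->log || !sbi->ipimap)
                    printk(KERN_ERR "jfs: tblk %p: log %p ipimap %p\n",
                           tblk, sbi->log, sbi->ipimap);
    }

Of course, a NULL check only narrows things down; if the structures
are freed rather than cleared, the pointers could also be stale but
non-NULL.
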

One of the panic messages is shown below.

[ 457.688000] lbmIODone: I/O error in JFS log
[ 457.688000] lmPostGC: tblk = 0xe005e0b8, flag = 0x384
[ 457.688000] lmPostGC: tblk = 0xe005e05c, flag = 0x384
[ 457.688000] txLazyCommit: processing tblk 0xe005e0b8
[ 457.688000] txFreeMap: tblk:0xe005e0b8 maplock:0xe0094510 maptype:0x40
[ 457.688000] lmPostGC: tblk = 0xe005e2e0, flag = 0x384
[ 457.688000] txUnlock: tblk = 0xe005e0b8
[ 457.727000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.727000] __get_metapage: ino = 16, lblock = 0x800001, abs=1
[ 457.727000] __get_metapage: returning = 0xcfac69a8 data = 0xc7c91000
[ 457.727000] release_metapage: mp = 0xcfac69a8, flag = 0x1
[ 457.727000] __get_metapage: ino = 16, lblock = 0x800001, abs=1
[ 457.727000] __get_metapage: returning = 0xcfac69a8 data = 0xc7c91000
[ 457.727000] release_metapage: mp = 0xcfac69a8, flag = 0x1
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] __get_metapage: ino = 16, lblock = 0x800001, abs=1
[ 457.728000] __get_metapage: returning = 0xcfac69a8 data = 0xc7c91000
[ 457.728000] release_metapage: mp = 0xcfac69a8, flag = 0x1
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] metapage_write_end_io: I/O error
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=1
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.728000] lmWriteRecord: lrd:0x4000 bp:0xcb842e80 pn:1635 eor:0x348
[ 457.728000] In jfs_put_super
[ 457.728000] UnMount JFS: sb:0xc7c7ca00
[ 457.728000] jfs_flush_journal: log:0xcdd4d680 wait=1
[ 457.728000] __get_metapage: ino = 16, lblock = 0x0, abs=0
[ 457.728000] zeroing mp = 0xcfac69a8
[ 457.728000] __get_metapage: returning = 0xcfac69a8 data = 0xc7c9b000
[ 457.728000] release_metapage: mp = 0xcfac69a8, flag = 0x5
[ 457.728000] __get_metapage: ino = 16, lblock = 0x800001, abs=1
[ 457.728000] __get_metapage: returning = 0xcfac6570 data = 0xc7c91000
[ 457.729000] release_metapage: mp = 0xcfac6570, flag = 0x1
[ 457.729000] metapage_write_end_io: I/O error
[ 457.729000] __get_metapage: ino = 16, lblock = 0xd, abs=1
[ 457.729000] release_metapage: mp = 0xcfac6570, flag = 0x5
[ 457.729000] metapage_releasepage: mp = 0xcfac69a8
[ 457.729000] metapage_releasepage: mp = 0xcfac6690
[ 457.729000] metapage_releasepage: mp = 0xcfac6a38
[ 457.729000] metapage_releasepage: mp = 0xcfac6840
[ 457.729000] __get_metapage: ino = 1, lblock = 0x0, abs=0
[ 457.729000] zeroing mp = 0xcfac6840
[ 457.729000] __get_metapage: returning = 0xcfac6840 data = 0xc7c37000
[ 457.729000] release_metapage: mp = 0xcfac6840, flag = 0x5
[ 457.729000] metapage_write_end_io: I/O error
[ 457.729000] __get_metapage: ino = 1, lblock = 0xb, abs=1
[ 457.729000] __get_metapage: returning = 0xcfac6a38 data = 0xcb0a4000
[ 457.729000] release_metapage: mp = 0xcfac6a38, flag = 0x5
[ 457.729000] metapage_releasepage: mp = 0xcfac6840
[ 457.729000] __get_metapage: ino = 1, lblock = 0x0, abs=0
[ 457.729000] zeroing mp = 0xcfac6840
[ 457.729000] __get_metapage: returning = 0xcfac6840 data = 0xc7d34000
[ 457.729000] release_metapage: mp = 0xcfac6840, flag = 0x5
[ 457.729000] metapage_write_end_io: I/O error
[ 457.729000] __get_metapage: ino = 1, lblock = 0xb, abs=1
[ 457.729000] __get_metapage: returning = 0xcfac6a38 data = 0xcb0a4000
[ 457.729000] release_metapage: mp = 0xcfac6a38, flag = 0x5
[ 457.729000] metapage_releasepage: mp = 0xcfac6840
[ 457.729000] __get_metapage: ino = 2, lblock = 0x0, abs=0
[ 457.729000] __get_metapage: returning = 0xcfac6840 data = 0xcaa32000
[ 457.729000] release_metapage: mp = 0xcfac6840, flag = 0x5
[ 457.730000] metapage_write_end_io: I/O error
[ 457.730000] __get_metapage: ino = 2, lblock = 0xb, abs=1
[ 457.730000] __get_metapage: returning = 0xcfac6a38 data = 0xcb0a4000
[ 457.730000] release_metapage: mp = 0xcfac6a38, flag = 0x5
[ 457.730000] metapage_releasepage: mp = 0xcfac6840
[ 457.730000] metapage_releasepage: mp = 0xcfac6768
[ 457.730000] jfs_flush_journal: log:0xcdd4d680 wait=0
[ 457.730000] metapage_write_end_io: I/O error
[ 457.730000] metapage_write_end_io: I/O error
[ 457.730000] lmLogClose: log:0xcdd4d680
[ 457.730000] lmLogShutdown: log:0xcdd4d680
[ 457.730000] jfs_flush_journal: log:0xcdd4d680 wait=2
[ 457.929000] ext3_abort called.
[ 457.929000] EXT3-fs error (device sda2): ext3_put_super: Couldn't
clean up the journal
[ 457.929000] Remounting filesystem read-only
[ 458.185000] unlocking lid = 8, tlck = 0xe0094200
[ 458.215000] CPU 1 Unable to handle kernel paging request at virtual
address 000000ac, epc == 8000b8b4, ra == 8021a130
[ 458.215000] Oops[#1]:
[ 458.215000] Cpu 1
[ 458.215000] $ 0 : 00000000 10008b00 10008b01 0400087c
[ 458.215000] $ 4 : 000000ac cf877518 10008b00 ffff00fe
[ 458.215000] $ 8 : cf8e9fe0 00008b00 00000000 cf8f8000
[ 458.215000] $12 : 8294e373 00000005 00000001 8140a580
[ 458.215000] $16 : cfac6960 e0094200 00000008 80480000
[ 458.215000] $20 : 80480000 00000000 00000000 e005e0b8
[ 458.215000] $24 : 8140a5a0 8001b68c
[ 458.215000] $28 : cf8e8000 cf8e9e50 80430000 8021a130
[ 458.215000] Hi : 0000006a
[ 458.215000] Lo : afbe07c0
[ 458.215000] epc : 8000b8b4 _spin_lock_irqsave+0x20/0x50
[ 458.215000] Tainted: P
[ 458.215000] ra : 8021a130 txUnlock+0xe0/0x318
[ 458.215000] Status: 10008b02 KERNEL EXL
[ 458.215000] Cause : 00800008
[ 458.215000] BadVA : 000000ac
[ 458.215000] PrId : 0002a044 (Broadcom BMIPS4380)
[ 458.215000] Modules linked in: cifs wl(P) usbabs
[ 458.215000] Process jfsCommit (pid: 115, threadinfo=cf8e8000,
task=cf85ea88, tls=00000000)
[ 458.215000] Stack : 8047e140 00000008 e0094200 00000000 000000ac
80430000 e005e0b8 cdd4d680
[ 458.215000] ceb53b80 8047e174 80430000 80480000 8047e140 80430000
cf8e9ea0 8021e8b8
[ 458.215000] 00000000 e005e0b8 e005e0e4 ffff00fe 00000000 cf85ea88
800319dc 00100100
[ 458.215000] 00200200 8002a408 cf83fe30 00000000 8021e55c 00000000
00000000 00000000
[ 458.215000] 00000000 00000000 00000000 800580bc 00000000 00000000
00000000 00000000
[ 458.215000] ...
[ 458.215000] Call Trace:
[ 458.215000] [<8000b8b4>] _spin_lock_irqsave+0x20/0x50
[ 458.215000] [<8021a130>] txUnlock+0xe0/0x318
[ 458.215000] [<8021e8b8>] jfs_lazycommit+0x35c/0x3f0
[ 458.215000] [<800580bc>] kthread+0x84/0x8c
[ 458.215000] [<8000f340>] kernel_thread_helper+0x10/0x18
[ 458.215000]
[ 458.215000]
[ 458.215000] Code: 00000040 00000040 000000c0 <c0850000> 24a34000
e0830000 1060011f 00000000 00051b82