Hi everyone,

  we have a 24TB SW-RAID6 array with one large JFS partition. There was 
a power failure (bad UPS) while jfs_fsck was running, and since that 
accident I have not been able to get the JFS file system clean.
This is the jfs_fsck error I get:

**Phase 1 - Check Blocks, Files/Directories, and  Directory Entries
Duplicate reference to 4 block(s) beginning at offset 5975789204 found 
in file system object IA16.
Inode A16 has references to cross linked blocks.
Multiple metadata references to 4 blocks beginning at offset 5975789204 
have been detected.
Duplicate block references have been detected in Metadata.  CANNOT 
CONTINUE.
processing terminated:  10/25/2013 9:14:15  with return code: 10060 exit 
code: 4.


I'm able to mount the partition read-only and back up almost all of the 
data. Some data are unreadable, though, with jfs_lookup kernel errors:

Oct  3 12:08:44 nash kernel: ERROR: (device md3): diRead: i_ino != 
di_number
Oct  3 12:08:44 nash kernel: jfs_lookup: iget failed on inum 54550587
Oct  3 12:08:44 nash kernel: jfs_lookup: iget failed on inum 52483751
Oct  3 12:08:44 nash kernel: jfs_lookup: iget failed on inum 52483751
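
For what it's worth, the read-only mount and the backup copy were done 
roughly along these lines (the mount point and backup target here are 
just examples):

  mount -t jfs -o ro /dev/md3 /mnt/data
  rsync -aH /mnt/data/ /backup/md3/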


One solution is to reformat the partition and restore the data from 
backup, but with 23TB of data that is a little bit time consuming. So I 
would like to try to use jfs_debugfs in some way to repair the existing 
partition, but I'm not sure how, or whether it's possible at all.
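
My first guess would be to start by inspecting one of the failing inodes 
from the jfs_debugfs prompt, something like this (assuming the "inode" 
subcommand takes an inode number; the inum is taken from the kernel log 
above):

  jfs_debugfs /dev/md3
  > inode 54550587
  > quit

but I have no idea what would need to be altered from there, or whether 
hand-editing the metadata like this is safe at all.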

Can any JFS expert help me, please?

Thanks in advance,

Daniel Vecerka, CTU Prague
