On Tue, 2006-06-20 at 12:42 -0600, Poul Petersen wrote:
>       I have a jfs filesystem that resides on a RAID5 set, for which
> two disks were falsely failed. I mention that because it might just be
> that my filesystem has been hopelessly corrupted by the parity re-sync.
> However, LVM does still recognize the logical volume and fsck.jfs seems
> to recognize that it is a filesystem, but fsck.jfs throws a segmentation
> fault almost immediately:
> 
> [EMAIL PROTECTED] jfsutils-1.1.11]# fsck.jfs -afv /dev/vg01-vat/rsync 
> fsck.jfs version 1.1.11, 05-Jun-2006
> processing started: 6/20/2006 11.36.51
> The current device is:  /dev/vg01-vat/rsync
> Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
> Primary superblock is valid.
> The type of file system for the device is JFS.
> Block size in bytes:  4096
> Filesystem size in blocks:  390701056
> **Phase 0 - Replay Journal Log
> LOGREDO:  Allocating for ReDoPage:  (d) 4096 bytes
> LOGREDO:  Allocating for NoDoFile:  (d) 4096 bytes
> LOGREDO:  Allocating for BMap:  (d) 788448 bytes
> LOGREDO:  Allocating for IMap:  (d) 192272 bytes
> Segmentation fault
> 
> Anything else I can try? Thanks,

fsck.jfs shouldn't segfault, so you could try running it under gdb to
determine where the segfault is coming from.  With a backtrace, I may be
able to give you a patch that gets you further.
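
Something along these lines should capture a backtrace (a rough sketch,
assuming gdb is installed and fsck.jfs is on your PATH; the flags mirror
your earlier run):

  gdb fsck.jfs
  (gdb) run -afv /dev/vg01-vat/rsync
  ... wait for the segfault ...
  (gdb) bt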

Or, you could try running fsck with the -n flag (read-only) just to get
an idea of how corrupted the file system is.  With the -n flag, fsck
won't try to replay the journal, so you will avoid whatever problem
you're currently seeing.
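
That would look something like this (read-only, so it should be safe to
run against the damaged volume):

  fsck.jfs -n /dev/vg01-vat/rsync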

If fsck -n doesn't report too many errors, you could run it with
--omit_journal_replay, which will skip the journal replay but still
attempt to fix any errors.
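
For example (just a sketch, reusing your device path; if I remember
right, -f forces a full check even if the superblock looks clean):

  fsck.jfs -f --omit_journal_replay /dev/vg01-vat/rsync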

Finally, you can try mounting the file system read-only (mount -o ro)
and recovering whatever data you are able to access.
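
Something like this, where /mnt/recovery is just an example mount point:

  mkdir -p /mnt/recovery
  mount -t jfs -o ro /dev/vg01-vat/rsync /mnt/recovery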

-- 
David Kleikamp
IBM Linux Technology Center



_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
