jfs_fsck -n /dev/mapper/Videocache-Videocache
jfs_fsck version 1.1.15, 04-Mar-2011
processing started: 3/29/2012 15:57:07
The current device is:  /dev/mapper/Videocache-Videocache
Block size in bytes:  4096
Filesystem size in blocks:  7322919936
**Phase 1 - Check Blocks, Files/Directories, and  Directory Entries
Errors detected in the Primary File/Directory Allocation Table.
Errors detected in the Secondary File/Directory Allocation Table.
CANNOT CONTINUE.
---------------------------------------------------------------------------------------------
 jfs_fsck -dv /dev/mapper/Videocache-Videocache
jfs_fsck version 1.1.15, 04-Mar-2011
processing started: 3/29/2012 15:59:10

FSCK  Device /dev/mapper/Videocache-Videocache is currently mounted READ
ONLY.
Using default parameter: -p [xchkdsk.c:3033]
The current device is:  /dev/mapper/Videocache-Videocache [xchkdsk.c:1527]
Open(...READ/WRITE EXCLUSIVE...) returned rc = 0 [fsckpfs.c:3233]
Primary superblock is valid. [fsckmeta.c:1551]
The type of file system for the device is JFS. [xchkdsk.c:1544]
Block size in bytes:  4096 [xchkdsk.c:1857]
Filesystem size in blocks:  7322919936 [xchkdsk.c:1864]
**Phase 0 - Replay Journal Log [xchkdsk.c:1871]
LOGREDO:  Log already redone! [logredo.c:555]
logredo returned rc = 0 [xchkdsk.c:1903]
**Phase 1 - Check Blocks, Files/Directories, and  Directory Entries
[xchkdsk.c:1996]
Primary metadata inode A2 is corrupt. [fsckmeta.c:3171]
Duplicate reference to 894760 block(s) beginning at offset 16 found in file
system object MA2. [fsckwsp.c:452]
Inode A2 has references to cross linked blocks. [fsckwsp.c:1772]
Errors detected in the Primary File/Directory Allocation Table.
[fsckmeta.c:1890]
Errors detected in the Secondary File/Directory Allocation Table.
[fsckmeta.c:1895]
CANNOT CONTINUE. [fsckmeta.c:1902]
processing terminated:  3/29/2012 15:59:11  with return code: -10049  exit
code: 4. [xchkdsk.c:475]
-------------------------------------------------------------------------------------------------------------


I have an LVM volume that was 28 TB and working fine with JFS. I extended the
LVM to 33 TB (correctly, as far as I can tell) and then ran
mount -o remount,resize /dev/mapper/Device ; at that point it reported a bad
superblock and remounted the filesystem read-only. After that I tried fsck,
and it gave the results above. I dumped the journal log (the full command
sequence I used is sketched after the dump below), and this is what is inside
it:

JOURNAL SUPERBLOCK:
------------------------------------------------------
   magic number: x 87654321
   version     : x 1
   serial      : x 3
   size        : t 8192 pages (4096 bytes/page)
   bsize       : t 4096 bytes/block
   l2bsize     : t 12
   flag        : x 10200900
   state       : x 1
   end         : x 193c28

======================================================


**WARNING** jfs_logdump and log file /dev/mapper/Videocache-Videocache
state is LOGREDONE

======================================================

logrec d 0   Logaddr= x 193c28   Nextaddr= x 193c04   Backchain = x 0

****************************************************************
LOG_SYNCPT   (type = d 16384)   logtid = d 0    aggregate = d 0

        data length = d 0
        sync = x 0
****************************************************************


----------------------------------------------------------------------
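For reference, this is roughly the command sequence I ran. The lvextend step
is from memory, so the exact invocation and size argument below may not be
verbatim; the device name and mountpoint are the ones shown in the output
above.

    # Grow the logical volume first (exact invocation from memory).
    lvextend -L 33T /dev/mapper/Videocache-Videocache

    # Then ask JFS to grow into the new space while mounted.
    mount -o remount,resize /dev/mapper/Videocache-Videocache /videos

    # After the remount dropped to read-only, check without writing anything.
    jfs_fsck -n /dev/mapper/Videocache-Videocache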


When I mount read-only, the data is all there and intact. The question,
though, is: is there ANY way to repair the superblock so that I can mount the
filesystem read-write again? When I run df -h on the read-only mount it shows
the following:

/dev/mapper/Device      33T  8.5T   25T  26% /videos

which displays the new size correctly. I have googled endlessly and could not
come to a conclusion; any help is MUCH, MUCH appreciated.
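
For completeness, this is roughly how I am mounting and checking the volume
at the moment (read-only, using the mountpoint shown above):

    # Read-only mount; all the data is readable this way.
    mount -o ro /dev/mapper/Videocache-Videocache /videos

    # The new 33 TB size shows up correctly.
    df -h /videos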

Sincerely,
Maher Kassem.
