Thank you. Totally forgot about trying to mount read-only.
I upgraded to jfsutils 1.1.15 and I'm rerunning fsck now. Hopefully this will 
fare better. Otherwise I'll try to move everything off before doing anything 
risky.
Appreciate the help and fast response. 


-
Adam Crane
Systems Administrator
Superb Internet Corp
Sent from Samsung Mobile, so
please pardon the spelling.

-------- Original message --------
From: Dave Kleikamp <[email protected]> 
Date: 09/17/2013  6:05 PM  (GMT-05:00) 
To: Adam Crane <[email protected]> 
Cc: [email protected] 
Subject: Re: [Jfs-discussion] Journal Corrupt, Logredo Failed. 
 
On 09/17/2013 04:08 PM, Adam Crane wrote:
> hey,
> I'm not entirely sure if this is the correct place to ask for help, but
> I've been looking around and can't find anywhere else. I'm really at a
> loss and could use some help. The system crashed and upon reboot I can't
> get the drive to mount. The system is running off of a Debian based
> operating system, OpenMediaVault.

Have you tried mounting the volume read-only (mount -oro)? jfs will try
to mount a dirty volume read-only, but will not allow it to be mounted
read-write until fsck succeeds against it. If a read-only mount works,
it may be best to back up what you can before trying to fix it again.
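The read-only mount and backup suggested above might look like the following sketch. The mount point and backup destination are placeholders, and the rsync flags are a suggestion, not from the thread:

```shell
# jfs will fall back to read-only on a dirty volume; -o ro requests it
# explicitly. /mnt/recovery and /backup/sdb1-copy are placeholder paths.
mkdir -p /mnt/recovery
mount -o ro /dev/sdb1 /mnt/recovery

# If the mount succeeds, copy data off before re-running fsck.jfs.
# -aHAX preserves hard links, ACLs, and xattrs where the target supports them.
rsync -aHAX /mnt/recovery/ /backup/sdb1-copy/
```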

> Linux Kernel: 2.6.32-5-amd64
> Problem array: 8 x 3tb sata raid5
> Array is a single Partition using the entire volume. Currently there is
> about 7TBs of data.
> 
> jfs_tune -l /dev/sdb1
> jfs_tune version 1.1.13, 17-Jul-2008
> 
> JFS filesystem superblock:
> 
> JFS magic number:    'JFS1'
> JFS version:        1
> JFS state:        dirty
> JFS flags:        JFS_LINUX  JFS_COMMIT  JFS_GROUPCOMMIT  JFS_INLINELOG 
> Aggregate block size:    4096 bytes
> Aggregate size:        40956038120 blocks
> Physical block size:    512 bytes
> Allocation group size:    67108864 aggregate blocks
> Log device number:    0x811
> Filesystem creation:    Sat Oct  6 15:45:18 2012
> Volume label:        'satisfactio'
> 
>  jfs_logdump -a /dev/sdb1
> jfs_logdump version 1.1.13, 17-Jul-2008
> Device Name: /dev/sdb1
> LOGREDO:  The Journal Log has wrapped. [logredo.c:1339]
> LOGREDO:   logRead: Log wrapped over itself (lognumread = (d) 8191).
> [log_read.c:377]
> log read failed 0x4c0c9c
> JFS_LOGDUMP: The current JFS log has been dumped into ./jfslog.dmp
> root@akakios:/home/ftp# cat ./jfslog.dmp
> JOURNAL SUPERBLOCK:
> ------------------------------------------------------
>    magic number: x 87654321
>    version     : x 1
>    serial      : x e
>    size        : t 8192 pages (4096 bytes/page)
>    bsize       : t 4096 bytes/block
>    l2bsize     : t 12
>    flag        : x 10200900
>    state       : x 0
>    end         : x 18277a0
> 
> I have the full log available if need be.
> 
> When Running fsck I get:
> 
>  fsck.jfs -v /dev/sdb1
> fsck.jfs version 1.1.13, 17-Jul-2008
> processing started: 9/12/2013 12.25.19
> Using default parameter: -p
> The current device is:  /dev/sdb1
> Open(...READ/WRITE EXCLUSIVE...) returned rc = 0
> Primary superblock is valid.
> The type of file system for the device is JFS.
> Block size in bytes:  4096
> Filesystem size in blocks:  5119669248
> **Phase 0 - Replay Journal Log
> LOGREDO:   doAfter: updatePage failed.  (logaddr = 0x04c1234, rc = (d) 274)
> LOGREDO:  Invalid RedoPage record at 0x04c1234.
> logredo failed (rc=-274).  fsck continuing.
> **Phase 1 - Check Blocks, Files/Directories, and  Directory Entries
> File system object DF1702687 has corrupt data (9).
> File system object DF1702721 has corrupt data (9).
> File system object DF1702728 has corrupt data (9).
> File system object DF1702730 has corrupt data (9).
> File system object DF2393002 has corrupt data (9).
> .......|
> Then it just continues. 22 data corrupt alerts, then just the | moving
> back and forth. It has so far been running for 2 days and is still on
> Phase 1 and no more corrupt data messages, since the first 30 minutes of
> fsck.

That doesn't seem right. You have an old version of jfsutils; not that it
has been updated in a while, but version 1.1.15 might be worth trying.
You can find the latest at http://jfs.sourceforge.net/

> 
> I was looking into moving the journal to an external journal, but I wasn't
> sure whether that would work, what the effects would be if it didn't work,
> and how much data I would lose.
> I do not mind losing a few hours' to a day's worth of data. I can recover
> that.
> 
> If someone could help provide some steps or solutions, I would be very grateful.
> Thank You.

If jfsutils 1.1.15 fails in the same way, I can try to figure out what's
wrong, though I'm not sure whether I'd be able to get it to fix the volume.

Good luck,
Shaggy
_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
