Have you looked at jfsrec? It's helped me in the past.

On Mon, Mar 20, 2017 at 11:20 AM, Wolfgang Draxinger
<wdraxinger.maill...@draxit.de> wrote:
> Hi,
>
>
> I have a JFS filesystem image that got corrupted. Since this is a question
> about data recovery I expect a lot of "non-answers" along the lines of
> "where's your backup?" or "do it on block-level image copies". Thanks, I
> know that; you don't have to lecture me on it. See the notes at the end
> for the what and why. Thank you.
>
> *jfs_fsck* refuses to work on it, bailing out with the following error
> message:
>
>      ~ % sudo jfs_fsck /dev/loop0
>      jfs_fsck version 1.1.15, 04-Mar-2011
>      processing started: 3/1/2017 13:08:53
>      Using default parameter: -p
>      The current device is:  /dev/loop0
>      Superblock is corrupt and cannot be repaired
>      since both primary and secondary copies are corrupt.
>
>       CANNOT CONTINUE.
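>
> (For reference, /dev/loop0 here is just the image, or rather a throwaway
> snapshot of it, attached to a loop device, roughly along these lines; the
> image path is of course a placeholder:)
>
>      # attach a snapshot of the image to a free loop device;
>      # --find picks one, --show prints its name (e.g. /dev/loop0)
>      sudo losetup --find --show /path/to/raid_snapshot.img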
>
> Using the *jfs_debugfs* commands `su` and `s2p`, the following
> information was obtained:
>
>      ~ % sudo jfs_debugfs /dev/loop0
>      jfs_debugfs version 1.1.15, 04-Mar-2011
>
>      Aggregate Block Size: 4096
>
> **Output of `su p`:**
>
> [1] s_magic:            'JFS1'          [15] s_ait2.addr1:   0x00
> [2] s_version:          1               [16] s_ait2.addr2:   0x0000e92f
> [3] s_size:     0x000000015d4d4ec0           s_ait2.address: 59695
> [4] s_bsize:            4096            [17] s_logdev:       0x00000900
> [5] s_l2bsize:          12              [18] s_logserial:    0x0009afb1
> [6] s_l2bfactor:        3               [19] s_logpxd.len:   8192
> [7] s_pbsize:           512             [20] s_logpxd.addr1: 0x00
> [8] s_l2pbsize:         9               [21] s_logpxd.addr2: 0x2baa0160
> [9] pad:                Not Displayed        s_logpxd.address: 732561760
> [10] s_agsize:          0x00800000      [22] s_fsckpxd.len:     22408
> [11] s_flag:            0x10200900      [23] s_fsckpxd.addr1:   0x00
>                          JFS_LINUX       [24] s_fsckpxd.addr2: 0x2ba9a9d8
>         JFS_COMMIT       JFS_GROUPCOMMIT      s_fsckpxd.address: 732539352
>                          JFS_INLINELOG   [25] s_time.tv_sec:  0x4902c28b
>                                          [26] s_time.tv_nsec: 0x00000000
>                                          [27] s_fpack: 'thor_storag'
> [12] s_state:           0x00000001      FM_MOUNT
> [13] s_compress:        0
> [14] s_ait2.len:        4
>
> **Output of `su s`:**
>
> [1] s_magic:            '    '          [15] s_ait2.addr1: 0x00
> [2] s_version:          0               [16] s_ait2.addr2: 0x00000000
> [3] s_size:     0x0000000000000000           s_ait2.address:  0
> [4] s_bsize:            0               [17] s_logdev:     0x00000000
> [5] s_l2bsize:          0               [18] s_logserial:  0x00000000
> [6] s_l2bfactor:        0               [19] s_logpxd.len:    0
> [7] s_pbsize:           0               [20] s_logpxd.addr1: 0x00
> [8] s_l2pbsize:         0               [21] s_logpxd.addr2: 0x00000000
> [9] pad:                Not Displayed        s_logpxd.address: 0
> [10] s_agsize:          0x00000000      [22] s_fsckpxd.len:    0
> [11] s_flag:            0x00000000      [23] s_fsckpxd.addr1: 0x00
>                                          [24] s_fsckpxd.addr2: 0x00000000
>                                               s_fsckpxd.address: 0
>                                          [25] s_time.tv_sec:  0x00000000
>                                          [26] s_time.tv_nsec: 0x00000000
>                                          [27] s_fpack:           ''
> [12] s_state:           0x00000000
>               FM_CLEAN
> [13] s_compress:        0
> [14] s_ait2.len:        0
>
> **Output of `s2p p`:**
>
> [1] s_magic:            'JFS1'          [16] s_aim2.len:     2
> [2] s_version:          1               [17] s_aim2.addr1:   0x00
> [3] s_size:     0x000000015d4d4ec0      [18] s_aim2.addr2:   0x0000e92d
> [4] s_bsize:            4096                 s_aim2.address: 59693
> [5] s_l2bsize:          12              [19] s_logdev:       0x00000900
> [6] s_l2bfactor:        3               [20] s_logserial:    0x0009afb1
> [7] s_pbsize:           512             [21] s_logpxd.len:   8192
> [8] s_l2pbsize:         9               [22] s_logpxd.addr1: 0x00
> [9]  s_agsize:          0x00800000      [23] s_logpxd.addr2: 0x2baa0160
> [10] s_flag:            0x10200900           s_logpxd.address: 732561760
>               LINUX                      [24] s_fsckpxd.len:    22408
>      GROUPCOMMIT                         [25] s_fsckpxd.addr1: 0x00
>                  INLINELOG               [26] s_fsckpxd.addr2: 0x2ba9a9d8
>                                              s_fsckpxd.address: 732539352
> [11] s_state:           0x00000001      [27] s_fsckloglen:      50
>                  MOUNT                   [28] s_fscklog:         2
> [12] s_compress:        0               [29] s_fpack:  'thor_storagة�+'
> [13] s_ait2.len:        4
> [14] s_ait2.addr1:      0x00
> [15] s_ait2.addr2:      0x0000e92f
>       s_ait2.address:    59695
>
> **Output of `s2p s`:**
>
> [1] s_magic:            '    '          [16] s_aim2.len:     0
> [2] s_version:          0               [17] s_aim2.addr1:   0x00
> [3] s_size:     0x0000000000000000      [18] s_aim2.addr2:   0x00000000
> [4] s_bsize:            0                    s_aim2.address: 0
> [5] s_l2bsize:          0               [19] s_logdev:       0x00000000
> [6] s_l2bfactor:        0               [20] s_logserial:    0x00000000
> [7] s_pbsize:           0               [21] s_logpxd.len:   0
> [8] s_l2pbsize:         0               [22] s_logpxd.addr1: 0x00
> [9]  s_agsize:          0x00000000      [23] s_logpxd.addr2: 0x00000000
> [10] s_flag:            0x00000000           s_logpxd.address:  0
>                                          [24] s_fsckpxd.len:     0
>                                          [25] s_fsckpxd.addr1: 0x00
>                                          [26] s_fsckpxd.addr2: 0x00000000
>                                               s_fsckpxd.address: 0
> [11] s_state:           0x00000000      [27] s_fsckloglen:      0
>                  CLEAN                   [28] s_fscklog:         0
> [12] s_compress:        0               [29] s_fpack:  '        '
> [13] s_ait2.len:        0
> [14] s_ait2.addr1:      0x00
> [15] s_ait2.addr2:      0x00000000
>       s_ait2.address:    0
>
> Now the question is: which values in the superblock(s) do I have to
> manipulate so that either *jfs_fsck* can be convinced to go to work, or
> so that I can mount the filesystem (using the Linux jfs kernel
> filesystem implementation)?
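>
> (To make that concrete: something like the following is what I mean by
> manipulating the superblocks, i.e. copying the primary, which `su p`
> decodes fine, over the zeroed secondary, on a throwaway snapshot only.
> The offsets are my assumption from jfs_filsys.h (SUPER1_OFF = 0x8000,
> SUPER2_OFF = 0x9000) and would need to be verified against the jfsutils
> sources before writing anything:)
>
>      # dump the 4 KiB region holding the primary superblock (offset 0x8000);
>      # the on-disk superblock struct itself is much smaller than that
>      sudo dd if=/dev/loop0 of=sb_primary.bin bs=4096 skip=8 count=1
>      # write it over the secondary superblock location (offset 0x9000)
>      sudo dd if=sb_primary.bin of=/dev/loop0 bs=4096 seek=9 conv=notrunc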
>
> It would also be perfectly acceptable to have some tool that chews
> through the filesystem image and spits out everything it considers to be
> a file in there; think `lost+found`.
>
> ----
>
> ### End Notes
>
> Due to events somewhat under my control, a Linux mdadm RAID-5 with a JFS
> filesystem on it got corrupted, in a failure mode I still don't fully
> understand to this day. This FS was part of a NAS I shared with another
> dude, and I repeatedly reminded him that RAIDs don't replace backups.
> Despite these warnings the other dude has some non-backed-up data on
> that RAID (*and is now in a state of being semi-angry… at me, I have the
> impression*). The immediate action was to remove the disks from the
> system and create block-level copies of them. Any kind of manipulation
> I'm going to do happens on *snapshots* of these copies, so whatever
> fuckup happens in the recovery process, I can roll back at any time.
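>
> (One way to get such rollback-able snapshots is a copy-on-write overlay
> on top of the raw copies; the qcow2/NBD sketch below is just an example
> of that, with placeholder paths, not necessarily the exact setup used
> here:)
>
>      # create a copy-on-write overlay on top of a raw block-level copy
>      qemu-img create -f qcow2 -b /data/copies/disk1.img -F raw /data/work/disk1-try1.qcow2
>      # expose the overlay as a block device; all writes land in the qcow2
>      sudo modprobe nbd
>      sudo qemu-nbd --connect=/dev/nbd0 /data/work/disk1-try1.qcow2
>      # ... assemble the md array from the /dev/nbdX devices, experiment ...
>      sudo qemu-nbd --disconnect /dev/nbd0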
>
> Recently I was finally able to put the mdadm RAID into a (hopefully
> consistent) state that allows for further steps.
>
> Basically I want to recover what's there, call it a day and write off
> the irrecoverable losses.
>
> Regards,
>
> Wolfgang
>
>

