> On Sun, 2006-10-08 at 19:23 +0200, [EMAIL PROTECTED] wrote:
>> Hi!
>>
>> A while ago I accidentally ran mke2fs on my root volume. Long story
>> short - I blame the fact that the device numbering differed between my
>> gentoo installation and the ubuntu live cd. I had backups of the most
>> important data, but not of the newest digital photos and some other
>> stuff.
>>
>> I have not found any good recovery tools aimed at jfs. I did run
>> magicrescue and PhotoRec. Both look for potential file starts and ends
>> and extract the data in between. This works ok on some filesystems and
>> on volumes with low fragmentation. I did extract a few hundred thousand
>> possible images, and the ones that I was interested in had random
>> corruption, or parts of the images were shuffled around.
>>
>> I guess this happens because of fragmentation: the files were not
>> allocated in one large continuous extent. After browsing the archives of
>> this mailing list I got the idea that it might be possible to scan for
>> inodes with a simple sanity check. I found this thread quite
>> interesting:
>> http://sourceforge.net/mailarchive/forum.php?thread_id=8137509&forum_id=43911
>>
>> I am currently reading up on JFS (trying to understand the 'layout'
>> paper) and studying the source code. It seems like there are a lot of
>> usable functions in jfs_debugfs.
>>
>> My idea is a program that does a read-only extraction of files from a
>> trashed JFS filesystem (on disk or image). As there may be random errors
>> on the volume, one has to take into account that the metadata read from
>> the device may be wrong.
>>
>> My approach is to scan the volume for sane inodes, and then try to write
>> their extents to a file on another (mounted) device. If the filename is
>> extractable then use it, else use a file serial number (like
>> dinode.di_number). It sounds quite simple but I guess it's not. Is it a
>> feasible approach? I will gladly spend some time to (try to) implement
>> this. Any advice/ideas/help is greatly appreciated!
>
> Finding the file data may be about that easy.  Finding the file name
> would require parsing the directory inodes.  This is doable, but would
> probably double the amount of work you'd need to do.
>
> I think the thread you found should contain the information you need to
> identify a group of inodes.  If you have any questions, direct them my
> way.
>

I now have a semi-working program that can extract files with filenames
and paths. I am test-driving it on a healthy jfs image. It does not work
all that reliably yet. So, here are my questions:

Does an inode extent _always_ consist of 32 consecutive inodes, with
consecutive di_number? The reason I ask is that I found inodes that seem
to be valid, but which are part of a much smaller extent (5-25 inodes or
so). Is this possible? It might also be because neighbouring inodes,
which should have been part of the extent, failed the sanity check
(mentioned elsewhere in this thread).

I am getting some dtree.header.nextindex == -1 values when parsing the
dtrees. Is this really a valid value, or does it indicate that there are
errors in the dtree?

Best regards,
Simon


_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
