On Wed, 2005-09-07 at 15:33 +0200, Andreas Engelbert wrote:
> Greetings!
>
> All the special JFS structures, such as superblocks, free-block maps,
> inode maps, and reserved inodes, on a 1.6 TB partition have been
> overwritten by mkfs.ext2.
>
> Most of the inodes are probably untouched, because JFS dynamically
> allocates inodes at unused blocks, right? But they are cut off from
> the B-Tree.

Correct.

> I thought it might be a rather easy task to investigate each aggregate
> block and guess whether the data, cast to a jfs_dinode, is plausible.
> For example, skip inode candidates with timestamps in the future, and
> so on. After this time-consuming investigation, the mapping of inode
> numbers to blocks could be rebuilt. It should also be possible to
> crawl through the discovered inodes and mark their used blocks.

This is plausible. Inodes are allocated in 16K extents of 32 inodes
(512 bytes each), so if you find 32 consecutive inode numbers in the
right positions, that would be at least part of a decent sanity check
on whether you have found an inode extent.
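If it helps as a starting point, a rough sketch of that per-window test
might look like the following. This is untested and only a sketch:
candidate_dinode is a trimmed stand-in rather than the real on-disk
layout, so take struct dinode from jfsutils' jfs_dinode.h for the
actual fields and offsets, and treat the link-count cutoff and the
32-inode alignment check as assumptions to verify. On-disk fields are
little-endian, so byte swapping is needed on big-endian hosts.

/* Sketch of a plausibility test for candidate JFS inode extents.
 * NOTE: candidate_dinode is a trimmed stand-in, NOT the real on-disk
 * layout; use struct dinode from jfsutils' jfs_dinode.h instead.
 */
#include <stdint.h>
#include <time.h>

#define DISIZE         512     /* on-disk inode size                  */
#define INOSPEREXT     32      /* 16K inode extent / 512-byte inodes  */

struct candidate_dinode {      /* stand-in for struct dinode          */
	uint32_t di_inostamp;
	uint32_t di_fileset;
	uint32_t di_number;    /* inode number                        */
	uint32_t di_gen;
	uint32_t di_nlink;
	uint32_t di_mode;
	uint32_t di_mtime_sec; /* seconds part of di_mtime            */
	unsigned char di_rest[DISIZE - 28];
};

/* Reject candidates with obviously bogus fields. */
static int plausible_inode(const struct candidate_dinode *ip, time_t now)
{
	if (ip->di_number == 0)
		return 0;                       /* unused / cleared    */
	if ((time_t) ip->di_mtime_sec > now)
		return 0;                       /* timestamp in future */
	if (ip->di_nlink == 0 || ip->di_nlink > 65535)
		return 0;                       /* bogus link count    */
	return 1;
}

/* Count how many of the 32 slots in a 16K window look like inodes with
 * consecutive numbers; the caller picks a threshold for treating the
 * window as an inode extent. */
static int plausible_extent(const unsigned char *buf, time_t now)
{
	const struct candidate_dinode *first =
		(const struct candidate_dinode *) buf;
	int i, hits = 0;

	/* Assumption: inode numbers within an extent start on a
	 * 32-inode boundary.  Drop this check if it turns out to be
	 * wrong. */
	if (first->di_number % INOSPEREXT != 0)
		return 0;

	for (i = 0; i < INOSPEREXT; i++) {
		const struct candidate_dinode *ip =
			(const struct candidate_dinode *) (buf + i * DISIZE);
		if (plausible_inode(ip, now) &&
		    ip->di_number == first->di_number + (uint32_t) i)
			hits++;
	}
	return hits;
}

The driver loop would step a 16K window through the aggregate at
aggregate-block-size steps (4K by default, worth confirming from your
original mkfs parameters) and keep any window whose hit count clears
whatever threshold you pick; the surviving extents give you the raw
material for rebuilding the inode-number-to-block mapping.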
> So we end up with a lot of cut-off subtrees. The next task is to find
> the root of each subtree and to join them under a recreated global
> fileset root, with everything else rebuilt as jfs_mkfs would have
> done it.

jfs_fsck does something like this to verify that the directory tree is
sane. jfs_mkfs does not deal with it, since it only needs to create an
empty root directory.

> Have I forgotten something, or got it completely wrong? I've done a
> little coding, but more Java than C, and I don't feel competent for
> that task. A filesystem guru here would probably have written such a
> tool already, if it were possible. Is there a way to automate
> jfs_debugfs and do the necessary steps with it?

What you propose sounds reasonable, but as far as I know, such a tool
doesn't exist. There is probably code in jfs_debugfs that you could
use, though I'm not sure about driving jfs_debugfs in an automated
way. Another example of simple code that reads the JFS data structures
is in the grub source.

It would be nice to have a much more aggressive fsck that tried to
rebuild anything that has been destroyed and to salvage anything
recoverable, but that's not how the current jfs_fsck was designed, and
I don't see it happening. Backups are still the best way to plan for
catastrophic file system damage.

> For me, it's not so much the lost data as the challenge of getting it
> back together, and maybe learning a bit about the filesystem I'm
> using. But realistically, it's a bit too hard for me, so I'm grateful
> for any help.

I don't have the time to provide any real help with this, but I don't
mind answering questions.

> (please excuse my bad english)

I didn't have any trouble with your English.

Shaggy
-- 
David Kleikamp
IBM Linux Technology Center
