On Mon, 2005-09-12 at 11:43 +0200, Andreas Engelbert wrote:

> You point me to agg ino 16, which is the "filedescriptor" for the IAG
> pages of the first fileset.  So I don't have to touch the aggregate inode
> table/map, nor the secondary structures?

I don't think so.  I think what is created by mkfs.jfs should be okay.

> I did investigate a bit in the jfs layout paper, but wasn't able to get
> it all into my head. Here is my rough plan:
> 
> Agg ino 16 modification. A contiguous imap makes for a simple B+tree
> with just one xad struct: off=0; len=entire [in blocks]; addr=block
> number of the imap. But how are the 40 bits of addr split into addr1 and
> addr2?

addr1 is the high order part, so addr = (addr1 << 32) | addr2
(see addressXAD in jfs_xtree.h)
> Now the imap. The steps for finding an inode in the imap start with
> iag key = (inum / 4096) * 4096 + 4096. There is an offset of one page. I'm
> not sure what has to be done with the first 4k.

This is the inode map control page struct dinomap (or dinomap_disk,
which is explicitly cast to be little-endian).  You'll probably have to
make sure that in_nextiag is accurate.  It should be one more than the
number of iags.  mkfs should initialize in_nbperiext & in_l2nbperiext,
and I think fsck will take care of the rest.

> I go for a contiguous strip of enough IAG pages to span till the largest
> found inode number.
> 
> The agstart and iagnum problem. Should I forget about the 102 AGs on my
> aggregate? Probably the old inodes are aligned to the same
> AG-partitioning. Would an incorrect iagnum in the IAG struct cause any
> problems for read access?

My first thought was that it is probably easier to create new IAGs, but
then if you can locate the existing ones, you don't have to worry about
using disk blocks that may be used elsewhere.  The first IAG (and the
first inode extent) will have been re-created by mkfs, so you'll have to
add any other inode extents that belong to IAG 0 to it.

It doesn't look like iagnum is looked at for read access, but it may
cause problems writing.  I'm not sure whether fsck will fix a bad iagnum
or not.

> The only thing I understand about the fwd and back pointers is that
> they are recreated by fsck. But the rest is pretty much clear to me.
> Just set all bits to not-allocated, then flip them for every inode in
> the list which has at least one link. And of course set the pxd_t to
> the inode extent addresses.

This sounds right.  I think you can let fsck worry about the fwd & back
pointers.

> Assuming I get this far, I hand over to fsck for final clearance. I
> hope its routines are not too quick in dumping inconsistent inodes. But
> if so, that would only result in overwriting something like nlink and
> inostamp, and should be reversible once the inodes have been found, right?

I think all fsck will do is set di_nlink = 0.

If you want to be cautious, you can probably write the inode extents to
a sparse file before trying to run fsck against the partition.  If you
have the space, you could make a copy of the whole partition, but I know
it's rather large.

A utility to save the inode extents would do something like:

while read extent-offset {
        seek(partition, extent-offset)
        read(buffer, 16k)
        seek(file, extent-offset)
        write(buffer, 16k)
}

> I'm getting more optimistic now. It seems that the really difficult
> stuff, like traversing all the directory trees, is not necessary. But
> there are probably more problems ahead.

I'm sure you'll run into something.  Hopefully, the problems won't be
too difficult to overcome.
> 
> Andreas

-- 
David Kleikamp
IBM Linux Technology Center


