Hi Eric,

Eric Schrock wrote:
> Max -
>
> Sorry for the late reply.
>
> On Tue, Sep 18, 2007 at 08:48:58AM +0200, max at bruningsystems.com wrote:
>   
>> Well... Currently, I changed mdb for all raw targets to load ctf for
>> the entire kernel.  I had wanted to do "::loadctf module", but found
>> that because a lot of the basic data types are defined in other
>> modules (notably unix/genunix), it was easier to just do what mdb does
>> for the "-k" option, and load everything.  So, right now, I hacked in
>> several pieces of code from kt_activate() and various other mdb_kvm.c
>> code related to ctf into mdb_rawfile.c.  To be honest, I expected to
>> have to do more work.  Once I had a "shadow" kt_data_t under the
>> rf_data_t, the ::print stuff magically worked.  The nice thing about
>> this is that the zfs ctf stuff comes free.
>>     
>
> CTF files are optionally uniquified against a common target (typically
> genunix) using 'ctfmerge'.  I would imagine this information is encoded
> in the CTF file somewhere, so that when you load 'ufs' you can notice
> that it depends on CTF data in 'genunix' and load that as well.
>
>   
Yes, I figured this out.  I figured I could either add a way to include
another object (the raw disk) among the files being examined with
mdb -k (or mdb on a kernel symtab/object), or add the functionality to
mdb on raw files.  I added the functionality to raw files.  I did not
want to re-invent a lot of code, but some of the routines I am using
are statically defined in mdb_kvm.c, so I made copies of the routines
I needed and changed the names (and one or two other minor things).
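
For what it's worth, the parent/child relationship Eric mentions does
show up through libctf: a uniquified module records its parent
container (typically genunix), which can be imported before resolving
types.  Something along these lines -- just a sketch of the idea, not
the actual code I put into mdb_rawfile.c, and the paths and error
handling are made up:

#include <stdio.h>
#include <libctf.h>

/*
 * Open a module's CTF and, if it was uniquified by ctfmerge against a
 * parent (typically genunix), import the parent so that shared types
 * resolve.  Illustrative only.
 */
ctf_file_t *
open_module_ctf(const char *modpath, ctf_file_t *parent)
{
    int err;
    ctf_file_t *fp = ctf_open(modpath, &err);

    if (fp == NULL) {
        (void) fprintf(stderr, "ctf_open %s: %s\n", modpath,
            ctf_errmsg(err));
        return (NULL);
    }

    /* ctf_parent_name() is non-NULL for a uniquified child */
    if (ctf_parent_name(fp) != NULL && parent != NULL)
        (void) ctf_import(fp, parent);

    return (fp);
}
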
>> Right now, it seems to work for single disks, but I am not getting
>> data that looks reasonable.  I posted a question on the zfs-discuss
>> list about the uberblock_t dva (blkptr) and what it refers to, as what
>> I see using ::print objset_phys_t does not look right.  The problem of
>> multiple disks is currently beyond my scope as I do not have enough
>> hardware (or money) to get into that.  Having said that, I would think
>> I should be able to use the nvpair stuff at the beginning of any raw
>> disk in the pool to get the configuration info that is needed to 
>> handle this.
>>
>> The main reason I wanted this change in mdb in the first place was to
>> be able to actually figure out what IS the on-disk format.  The white
>> paper at the zfs community web site basically shows (label 0)
>> consisting of 8k of blank space, 8k of boot header, 112k of nvpairs,
>> and a 128k uberblock_t array.  This is followed by a repeat of the
>> same info (label 1), and then a cloud for the remaining xxxGB/TB until
>> the end where label 0 is again repeated twice.  What I want to do is,
>> given the uberblock (or an inumber, or a znode), find the data
>> corresponding to this on the disk in zfs, similarly to what I can do
>> with ufs.  So far, I'm not there...  I think an ability to 
>> do this will greatly enhance understanding (at least, my
>> understanding), of how zfs works.
>>     
>
> Definitely.  It certainly seems useful for examining a single-disk,
> uncompressed (including metadata) pool.  To make this truly useful for
> ZFS in general, we would have to develop a ZFS-specific backend that
> understood things like multiple devices, compression, RAID-Z, etc.
>   
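
(As an aside, for reference while reading the label layout quoted
above: the 256K label works out to roughly the struct below.  This is
my paraphrase of vdev_label_t from sys/vdev_impl.h, so treat the field
names as approximate.)

typedef struct vdev_label_sketch {
    char vl_pad[8 * 1024];          /* 8K blank space */
    char vl_boot_header[8 * 1024];  /* 8K boot header */
    char vl_vdev_phys[112 * 1024];  /* 112K of nvpairs (nvlist) */
    char vl_uberblock[128 * 1024];  /* 128K uberblock_t array */
} vdev_label_sketch_t;  /* 256K total; two copies at the front of the
                           disk, two more at the end */
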
It turns out that, by default, all zfs metadata is compressed.  There
is a flag to turn off compression, but it apparently only works for
indirect blkptrs.  I have figured out a way to show the correct data.
Currently, I have a dcmd that reads the compressed data from the disk,
uncompresses it, writes the uncompressed data to a temp file, and then
runs the modified mdb on that temp file to ::print the data.  Next week
I shall add a ::zprint command that takes the disk address of a blkptr,
uses the blkptr info to figure out what object the data refers to and
the compression (if any) used, then does the uncompression, dumps the
result in a temp file, and finally runs ::print against the temp file.
It's not terribly efficient, but it will work.  When I get this
working, I'll turn my attention to multiple devices, RAID-Z, etc.  I
should be able to get the information needed from the zfs label info
and the block pointers themselves (I think).
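
Concretely, the mapping I am assuming (and still need to verify against
the source) is that a DVA offset counts 512-byte sectors from the start
of the allocatable space, which begins after the two front labels and
the boot reserve, i.e. 4M into the device.  A sketch of the arithmetic
::zprint would do, with a made-up offset value:

#include <stdio.h>
#include <stdint.h>

#define SECTOR_SHIFT   9               /* 512-byte sectors */
#define LABEL_RESERVE  (4ULL << 20)    /* 2 x 256K labels + boot = 4M
                                          (my assumption) */

/* physical byte offset on the vdev for a raw DVA offset field */
uint64_t
dva_to_byte_offset(uint64_t dva_offset)
{
    return ((dva_offset << SECTOR_SHIFT) + LABEL_RESERVE);
}

int
main(void)
{
    uint64_t off = 0x4a22;  /* hypothetical value read from a blkptr */

    (void) printf("seek to 0x%llx\n",
        (unsigned long long)dva_to_byte_offset(off));
    return (0);
}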

But first, I am doing a little more work on the ufs side.  I have a
walker that, given the disk location of a directory's inode, walks the
directory entries.  And I should finish a dcmd today that, given an
inumber, returns the disk address of the inode.  A few students who
work in ufs sustaining have said they would like to be able to examine
the intent log, so I may spend a little time there as well.
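
The calculation that dcmd does is the usual superblock arithmetic.
Something like the following, going from memory of the macro names in
sys/fs/ufs_fs.h (so double-check them), with error handling omitted and
"fs" assumed to be the struct fs read from the superblock of the raw
device:

#include <sys/types.h>
#include <sys/param.h>
#include <sys/fs/ufs_fs.h>
#include <sys/fs/ufs_inode.h>

/*
 * Byte offset on the raw device of the on-disk inode for "ino":
 * itod() gives the filesystem block holding the inode, fsbtodb()
 * converts that to a 512-byte disk sector, and itoo() gives the
 * inode's index within the block.
 */
uint64_t
inumber_to_diskaddr(struct fs *fs, ino_t ino)
{
    uint64_t sector = (uint64_t)fsbtodb(fs, itod(fs, ino));
    uint64_t off = (uint64_t)itoo(fs, ino) * sizeof (struct dinode);

    return (sector * DEV_BSIZE + off);
}
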
>   
>> A "webrev"?  How do I do that?
>>     
>
> You can find this tool in the SUNWonbld tool package, which you should
> have if you are building ON sources.  If you're building from the
> Mercurial sources I'm not sure how it works since I'm still using
> teamware.  You may want to ask tools-discuss if it isn't obvious.
>   
I took a look at someone else's webrev posting, and, as I recall, there 
are instructions.  I don't know whether I should "clean up" the code 
first or post it as is.  Any suggestions?

Hey, thanks for getting back to me.  Other than you, Alan D., Mike S., 
and John L., I was beginning
to think either no one understood the usefulness, or no one cared...  Or 
maybe, as Joerg S. suggested (I think it was him),
everyone is waiting for this to degenerate into a licensing 
discussion... Oh wait, that's on the opensolaris-discuss list.

max


