Eric Schrock wrote:
> Max -
>
> Interesting stuff.  It seems to me that there are basically two separate
> things here:
>
> 1. Something like ::loadctf to load arbitrary CTF data from files.
>
> 2. A mechanism to identify a raw target as a UFS device and auto-load the CTF 
> data from /kernel/fs/ufs.
>   
Well... Currently, I changed mdb so that all raw targets load CTF for the
entire kernel.  I had wanted to do "::loadctf module", but found that because
a lot of the basic data types are defined in other modules (notably
unix/genunix), it was easier to just do what mdb does for the "-k" option and
load everything.  So, right now, I have hacked several pieces of code from
kt_activate() and various other CTF-related mdb_kvm.c code into
mdb_rawfile.c.  To be honest, I expected to have to do more work.  Once I had
a "shadow" kt_data_t under the rf_data_t, the ::print stuff magically worked.
The nice thing about this is that the zfs CTF stuff comes for free.
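
To give a feel for what that boils down to outside of mdb, here is a rough
standalone sketch using the private libctf interfaces from <libctf.h> (built
with -lctf).  It is not the actual mdb change, and the paths and the
"struct inode" lookup are just examples, but it shows why genunix has to come
along for the ride:

#include <stdio.h>
#include <libctf.h>

int
main(void)
{
        ctf_file_t *gen, *ufs;
        ctf_id_t id;
        int err;

        /* on amd64 the paths would be /kernel/amd64/genunix, etc. */
        if ((gen = ctf_open("/kernel/genunix", &err)) == NULL ||
            (ufs = ctf_open("/kernel/fs/ufs", &err)) == NULL) {
                (void) fprintf(stderr, "ctf_open: %s\n", ctf_errmsg(err));
                return (1);
        }

        /* the ufs CTF is "uniquified" against genunix, so import its types */
        if (ctf_import(ufs, gen) != 0) {
                (void) fprintf(stderr, "ctf_import: %s\n",
                    ctf_errmsg(ctf_errno(ufs)));
                return (1);
        }

        if ((id = ctf_lookup_by_name(ufs, "struct inode")) == CTF_ERR) {
                (void) fprintf(stderr, "no struct inode: %s\n",
                    ctf_errmsg(ctf_errno(ufs)));
                return (1);
        }
        (void) printf("struct inode is %ld bytes\n",
            (long)ctf_type_size(ufs, id));

        ctf_close(ufs);
        ctf_close(gen);
        return (0);
}

Without the ctf_import() step, the lookup falls over on all the basic types,
which is essentially why I gave up on a per-module ::loadctf and just load
everything the way "-k" does.
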
> Both of these seem useful in their own right, so I hope your solution
> can be adapted to fit both needs.
>
> Note that ZFS is going to be vastly more difficult, depending on how
> much you want to try to accomplish.  The problem is that ZFS data can be
> spread across multiple devices, so you don't have a single raw target.
> For a dynamically striped or mirrored pool, you could look at a single
> disk, but if any DVA referenced another toplevel vdev, you'd have to
> start up another MDB session on the other device and look at it there.
> Tackling RAID-Z is even more difficult because the raw block you're
> looking for is spread across several devices.
>   
Right now, it seems to work for single disks, but I am not getting data that
looks reasonable.  I posted a question on the zfs-discuss list about the
uberblock_t dva (blkptr) and what it refers to, as what I see using
::print objset_phys_t does not look right.  The problem of multiple disks is
currently beyond my scope, as I do not have enough hardware (or money) to
get into that.  Having said that, I would think I should be able to use the
nvpair stuff at the beginning of any raw disk in the pool to get the
configuration info that is needed to handle this.
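
For illustration, a rough user-level sketch of that (not mdb code, built
against libnvpair; the 16k offset and 112k size follow the label layout from
the white paper I mention below, and error handling is minimal):

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <libnvpair.h>

#define LABEL_NVLIST_OFF        (16 * 1024)     /* 8k blank + 8k boot header */
#define LABEL_NVLIST_SIZE       (112 * 1024)    /* packed nvpair region */

int
main(int argc, char **argv)
{
        char *buf = malloc(LABEL_NVLIST_SIZE);
        nvlist_t *config;
        int fd;

        if (argc != 2 || buf == NULL ||
            (fd = open(argv[1], O_RDONLY)) == -1)
                return (1);
        if (pread(fd, buf, LABEL_NVLIST_SIZE, LABEL_NVLIST_OFF) !=
            LABEL_NVLIST_SIZE)
                return (1);
        /*
         * The tail of the region is a checksum trailer, but the unpack
         * routine only consumes as much as the encoded list needs.
         */
        if (nvlist_unpack(buf, LABEL_NVLIST_SIZE, &config, 0) != 0) {
                (void) fprintf(stderr, "cannot unpack label nvlist\n");
                return (1);
        }
        nvlist_print(stdout, config);   /* pool name, guid, vdev tree, ... */
        nvlist_free(config);
        return (0);
}

Run against one of my usb disks, that should at least print the vdev tree
that would be needed to stitch multiple devices together later.
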

The main reason I wanted this change in mdb in the first place was to be able
to actually figure out what IS the on-disk format.  The white paper at the
zfs community web site basically shows label 0 consisting of 8k of blank
space, 8k of boot header, 112k of nvpairs, and a 128k uberblock_t array.
This is followed by a repeat of the same info (label 1), and then a cloud for
the remaining xxxGB/TB until the end, where the label is again repeated
twice.  What I want to do is, given the uberblock (or an inumber, or a
znode), find the corresponding data on the disk in zfs, similarly to what I
can do with ufs.  So far, I'm not there...  I think an ability to do this
will greatly enhance understanding (at least, my understanding) of how zfs
works.
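
As a starting point, I have been treating the "active" uberblock as simply
the array entry with a valid magic and the highest txg.  A rough sketch of
that scan is below; the offsets and the 1k slot size follow the layout above,
and the little struct only mirrors the leading fields of uberblock_t, so
treat it as an assumption rather than gospel:

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>

#define UB_MAGIC        0x00bab10cULL   /* "oo-ba-bloc" */
#define UB_ARRAY_OFF    (128 * 1024)    /* past blank + boot + nvpair areas */
#define UB_SLOT         1024            /* one uberblock per 1k slot */
#define UB_COUNT        128             /* 128k array */

typedef struct ub_head {
        uint64_t ub_magic;
        uint64_t ub_version;
        uint64_t ub_txg;
        uint64_t ub_guid_sum;
        uint64_t ub_timestamp;
        /* blkptr_t ub_rootbp follows on disk */
} ub_head_t;

int
main(int argc, char **argv)
{
        int fd, i, best = -1;
        uint64_t best_txg = 0;
        ub_head_t ub;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) == -1)
                return (1);
        for (i = 0; i < UB_COUNT; i++) {
                if (pread(fd, &ub, sizeof (ub),
                    UB_ARRAY_OFF + (off_t)i * UB_SLOT) != sizeof (ub))
                        continue;
                if (ub.ub_magic != UB_MAGIC)    /* ignoring byteswapped case */
                        continue;
                if (ub.ub_txg >= best_txg) {
                        best_txg = ub.ub_txg;
                        best = i;
                }
        }
        (void) printf("active uberblock: slot %d, txg %llu\n", best,
            (unsigned long long)best_txg);
        return (0);
}

If that picks the right uberblock, the next step is following ub_rootbp, which
is where my ::print objset_phys_t output stops making sense.
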
> One could imagine a 'ZFS pool' backend instead of a 'raw + CTF' backend.
> The problem then becomes the fact that the fundamental address in ZFS
> is a DVA, which is 128 bits.  MDB uses uintptr_t sized values for
> virtually everything, so getting this to work would be quite a
> challenge (but extremely cool).
>   
I had thought of the 128 bit problem, but don't think it is a problem on the
(very small) usb hard drives that I am using...  (But even these can be a
problem for blocks that are past the first 4GB.)
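
For what it's worth, on a single vdev the interesting half of the DVA is the
63-bit sector offset, which fits in 64 bits just fine.  This is the decoding
I have been assuming (per my reading of sys/spa.h: offset in 512-byte
sectors, relative to the start of the allocatable area 4MB into the device,
past the two front labels and the boot area); the values in main() are made
up:

#include <stdio.h>
#include <stdint.h>

#define SPA_MINBLOCKSHIFT       9               /* 512-byte sectors */
#define VDEV_LABEL_START        (4ULL << 20)    /* 2 front labels + boot */

/*
 * Decode the two 64-bit words of a dva_t: word 0 holds the vdev id in the
 * top 32 bits and the allocated size in the low 24 bits; word 1 holds the
 * gang-block bit on top of a 63-bit sector offset.
 */
static void
dva_decode(uint64_t word0, uint64_t word1)
{
        uint64_t vdev = word0 >> 32;
        uint64_t asize = (word0 & 0xffffff) << SPA_MINBLOCKSHIFT;
        uint64_t offset = (word1 & ~(1ULL << 63)) << SPA_MINBLOCKSHIFT;

        (void) printf("vdev %llu, asize 0x%llx, device byte offset 0x%llx\n",
            (unsigned long long)vdev, (unsigned long long)asize,
            (unsigned long long)(offset + VDEV_LABEL_START));
}

int
main(void)
{
        /* example only: made-up DVA words pasted from ::print-style output */
        dva_decode(0x0000000000000003ULL, 0x0000000000001234ULL);
        return (0);
}
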
> Anyway, what you have now sounds interesting, it'd be cool to see a
> webrev of what you have so far.
>   
A "webrev"?  How do I do that?
> - Eric
>
> --
> Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
>
>   

