On Sun, Sep 23, 2007 at 08:56:21AM +0200, max at bruningsystems.com wrote:
>
> Yes, I figured this out.  I figured I could either add a way to allow
> another object (the raw disk) to the files being examined with mdb -k,
> (or mdb on a kernel symtab/object), or add functionality to mdb on raw
> files.  I added the functionality to raw files.  I did not want to
> re-invent a lot of code.  However, some of the routines I am using are
> statically defined in mdb_kvm.c.  So I made copies of the routines I
> needed and changed names (and one or two other minor things).
>

Definitely using the raw target seems like the right approach.  I would
imagine you would want to expand on the functionality in rf_activate() to
optionally pull in CTF data according to the disk type gathered from
libfstyp.  We already have some code along these lines for loading the
DOF debugging module:

        /*
         * Load any debugging support modules that match the file type, as
         * determined by our poor man's /etc/magic.  If many clients need
         * to use this feature, rf_magic[] should be computed dynamically.
         */
        for (m = rf_magic; m->rfm_str != NULL; m++) {
                char *buf = mdb_alloc(m->rfm_len, UM_SLEEP);

                if (mdb_tgt_vread(t, buf, m->rfm_len, 0) == m->rfm_len &&
                    bcmp(buf, m->rfm_str, m->rfm_len) == 0) {
                        (void) mdb_module_load(m->rfm_mod,
                            MDB_MOD_LOCAL | MDB_MOD_SILENT);
                }

                mdb_free(buf, m->rfm_len);
        }

I would add a call to fstyp_ident() to grab the filesystem type (if
available), and then attempt to load the CTF data for the corresponding
module (as well as genunix or other linked CTF data).  This could be
expanded to pull in dmods as well, to allow for UFS dcmds that examine
raw disks.
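
As a sketch of what that extension might look like: the table below
mirrors the rf_magic[] loop above but matches magic bytes at arbitrary
offsets and maps them to a module name.  The table contents, offsets,
and detect_fstype() are illustrative stand-ins; real code would call
fstyp_ident() from libfstyp against the target's fd and then hand the
result to mdb_module_load().

```c
/*
 * Sketch of extending rf_activate()'s "poor man's /etc/magic" to
 * filesystem detection.  The table, offsets, and names here are
 * illustrative; a real implementation would use libfstyp instead
 * of matching magic numbers by hand.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct fs_magic {
	const char *fm_str;	/* magic bytes found at fm_off */
	size_t fm_len;
	size_t fm_off;
	const char *fm_mod;	/* dmod/CTF module to load */
} fs_magic_t;

static const fs_magic_t fs_magic[] = {
	/* UFS: fs_magic 0x011954 (LE) at offset 1372 of the superblock */
	{ "\x54\x19\x01\x00", 4, 8192 + 1372, "ufs" },
	{ NULL, 0, 0, NULL }
};

/*
 * Given a prefix of the raw device, return the name of the module
 * whose magic matches, or NULL if nothing is recognized.
 */
static const char *
detect_fstype(const uint8_t *buf, size_t buflen)
{
	const fs_magic_t *m;

	for (m = fs_magic; m->fm_str != NULL; m++) {
		if (m->fm_off + m->fm_len <= buflen &&
		    memcmp(buf + m->fm_off, m->fm_str, m->fm_len) == 0)
			return (m->fm_mod);
	}
	return (NULL);
}
```

The same lookup result could drive both the CTF load and the dmod load,
so UFS dcmds and ZFS dcmds come along for free once the type is known.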

> It turns out that, by default, all zfs metadata is compressed.  There
> is a flag to turn off compression, but it apparently only works for
> indirect blkptrs.  I have figured out a way to show the correct data.
> Currently, I have a dcmd that reads the compressed data from the disk,
> uncompresses it, writes the uncompressed data to a temp file, and then
> runs the modified mdb on the uncompressed file to ::print the data.
> Next week I shall add a ::zprint command that takes a blkptr address
> on disk and uses the blkptr info to figure out what object the data
> refers to and the compression (if any) used, then do the
> un-compression and dump the result in a temp file, and finally, run
> ::print with the temp file.  It's not terribly efficient, but it will
> work.  When I get this working, I'll turn my attention to multiple
> devices, RAID-Z, etc.  I should be able to get the information needed
> from the zfs label info and the block pointers themselves (I think).
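
For reference, the information a ::zprint-style dcmd needs lives mostly
in the 64-bit blk_prop word of the blkptr.  A sketch of decoding it (the
bit layout follows the on-disk format; the helper names are mine, not
the real zfs headers):

```c
/*
 * Sketch of decoding blk_prop, the 64-bit property word in a ZFS
 * blkptr: the logical and physical sizes tell you how much to read
 * and how big the decompressed result is, and the compression field
 * tells you which function to undo.
 */
#include <stdint.h>

#define	SPA_MINBLOCKSHIFT	9	/* sizes stored as 512-byte sectors */

/* bits 0-15: logical (uncompressed) size in sectors, minus one */
static uint64_t
bp_lsize(uint64_t prop)
{
	return (((prop & 0xffff) + 1) << SPA_MINBLOCKSHIFT);
}

/* bits 16-31: physical (on-disk) size in sectors, minus one */
static uint64_t
bp_psize(uint64_t prop)
{
	return ((((prop >> 16) & 0xffff) + 1) << SPA_MINBLOCKSHIFT);
}

/* bits 32-39: compression function (2 = off, 3 = lzjb) */
static unsigned
bp_compress(uint64_t prop)
{
	return ((prop >> 32) & 0xff);
}

/* bits 48-55: DMU object type the block belongs to */
static unsigned
bp_type(uint64_t prop)
{
	return ((prop >> 48) & 0xff);
}
```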

As you go down this path, you will eventually want to transition from
raw file + CTF into a special ZFS target.  This would allow you to
transparently decompress data behind the scenes by natively dealing with
DVAs (modulo the 128-bit problem), and you wouldn't have to create
special commands like '::zprint'.  It's also what you'll need to do if
you ever hope to work with multiple devices.
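
Concretely, the per-DVA translation such a target would do before each
read is small.  A sketch under the on-disk format's layout (the struct
and helper names are illustrative, not the real zfs headers):

```c
/*
 * Sketch of the DVA-to-device-offset translation a dedicated ZFS
 * target would perform behind the scenes before reading (and then
 * decompressing) a block.  The 4MB skip covers the two front vdev
 * labels plus the boot block reserve.
 */
#include <stdint.h>

#define	SPA_MINBLOCKSHIFT	9		/* DVA offsets are in sectors */
#define	VDEV_LABEL_START_SIZE	0x400000ULL	/* 2 x 256K labels + 3.5M boot */

typedef struct dva {
	uint64_t dva_word[2];
} dva_t;

/* word 0, bits 32-63: vdev id (which device in the pool) */
static uint64_t
dva_vdev(const dva_t *dva)
{
	return (dva->dva_word[0] >> 32);
}

/* word 1, bit 63: gang block flag */
static int
dva_gang(const dva_t *dva)
{
	return ((int)(dva->dva_word[1] >> 63));
}

/*
 * word 1, bits 0-62: allocated offset in sectors, relative to the
 * end of the front labels; convert to an absolute byte offset on
 * the vdev identified by dva_vdev().
 */
static uint64_t
dva_byte_offset(const dva_t *dva)
{
	uint64_t off = dva->dva_word[1] & ~(1ULL << 63);

	return ((off << SPA_MINBLOCKSHIFT) + VDEV_LABEL_START_SIZE);
}
```

With this in the target's read path, a dcmd never sees compressed bytes
or temp files; it just asks for a blkptr's contents and gets the logical
data back.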

> I took a look at someone else's webrev posting, and, as I recall,
> there are instructions.  I don't know if I should "clean up" the code,
> or post as is.  Any suggestions?

It's up to you.  I think some of us are interested in what you have
right now, but if you want to clean it up first and make it look like
something that could be putback to OpenSolaris, that's your decision.

> Hey, thanks for getting back to me.  Other than you, Alan D., Mike S.,
> and John L., I was beginning to think either no one understood the
> usefulness, or no one cared...

The power of debugging, in general, is lost on most developers.  Thanks
to the work we've done in OpenSolaris (DTrace, MDB, CTF, ptools, etc),
the average OpenSolaris developer is vastly more engaged than your
typical developer.  But as we've seen time and time again, you need to
build the tools first before developers can get excited about them.  Just the
other day I found myself wanting ::loadctf, because I had used ::context
to examine a userland process from a crash dump, but I had no CTF data
available to print structures.  It was sitting on disk, but there was no
way to tell MDB to load it.  There are definitely some powerful things
you are doing that will be useful to developers.  The mdb fanatics
among us are definitely hoping you'll continue your work.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
