Thanks to everyone who took the time to comment on our proposal. We have 
replied to each one in private discussion, and now present this summary 
of our responses.

The first two items result in some changes or additions to our proposal.


Garrett D'Amore wrote:
> +1.  (I cheated -- I've seen the case materials ahead of time.  :-)
> This enhancement (and the need to perform subsequent actions to 
> decompress the crash dump) probably deserves special mention in the 
> Release Notes.  I'd also add, in retrospect, it seems like perhaps mdb 
> ought to have its man page updated with at least a passing reference 
> to compressed crash dumps (and the step required to decompress them.)
We agree that the mdb man page should describe vmdump.X, as it already 
mentions vmcore.X and unix.X. We will provide an updated mdb man page 
shortly. We'll follow up during integration and make sure that the 
Release Notes are updated as well.


Darren J Moffat wrote:
>>      -z y | n                                                      |
>>          Modify the dump configuration to control  the  operation  |
>>          of savecore on reboot. The options are y (yes) to enable  |
>>          saving core files in a compressed  format,  and  n  (no)  |
>>          automatically   uncompress  the  crash  dump  file.  The  |
>>          default is yes, because crash dump  files  can  be  very  |
>>          large  and  will require less file system space if saved  |
>>          in a compressed format.                                   |
>
> That is a very clunky interface.  I suspect it is partly that way 
> because "-y and -n" are already used in dumpadm(1M) to determine if 
> savecore should run or not and -u is already used to mean update.
>
> Instead of "-z y" what about -Z
> Instead of "-z n" what about -U
>
> Or "-S compress" "-S uncompress"  "-S" denoting "savecore options".
We discussed a few alternatives with Darren.

We agreed to keep the "-z" flag: z connotes compression in the Unix 
world, and defining two separate flags with different mnemonics to 
enable and disable it would be harder for users to remember. It would 
also consume two flags for every new binary option added, leaving less 
room for future expansion.

We noted that the zfs(1M) utility uses "on | off".

We will change our proposal to "-z on | off", and update the dumpadm man 
page.
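Under the revised proposal, the interface would look like the sketch 
below (the output and exact man page wording are still being finalized; 
this shows intended usage only):

```shell
# Enable compressed crash dumps (the proposed default): on reboot,
# savecore writes a single compressed vmdump.N file.
dumpadm -z on

# Disable compression: savecore writes uncompressed unix.N/vmcore.N.
dumpadm -z off

# With no arguments, dumpadm prints the current dump configuration.
dumpadm
```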



Alan Hargreaves wrote:
> It's not completely clear to me from the notes whether or not to 
> uncompress a vmdump.N needs to be done on the machine that generated 
> the vmdump.N, or if it can be done anywhere else. From a support 
> perspective, the latter would be nice. i.e. Customer uploads a 
> vmcore.N to us and we uncompress it.
>
> What I was getting at was, do I need to run savecore a second time on 
> the machine that generated the dump; or could I run savecore elsewhere.
>
Yes, vmdump.N can definitely be moved to another machine and 
uncompressed there; that is a primary goal of the project. The fast-track 
notes omit some of the project description, as the emphasis here is on 
interfaces. Please see the 1-pager and design spec in the supporting 
materials at 
http://arc.opensolaris.org/caselog/PSARC/2009/330/materials.
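Concretely, the support workflow Alan describes would look roughly like 
this (hostnames and paths are hypothetical; savecore's -f option is the 
piece that performs the decompression):

```shell
# On the support machine, after the customer uploads the compressed dump:
cd /var/tmp/case1234
savecore -f vmdump.0 .     # expands vmdump.0 into unix.0 and vmcore.0
mdb unix.0 vmcore.0        # debug the uncompressed dump as usual
```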


Ivek Szczesniak wrote (same thread):
> Storing the file as compressed data is not as easy as you think. You
> will need a specialized unpack command as the stock version of
> /usr/bin/bunzip does not handle sparse files, vmdump.N is sparse and
> mdb will no longer be able to access the files via mmap(2).
>   
savecore(1M) is the specialized unpack command. It can be run on the 
same machine, or vmdump.N can be uploaded to another machine and 
savecore run there. mdb(1) can access the files normally once they are 
uncompressed. 


Jörg Schilling wrote (same thread):
> There is bunzip2 or gunzip, which program are you talking about?
>
> bzip2 compresses null bytes in an efficient way and it would not be hard
> to add support for doing a lseek() instead of a write if a block of 
> uncompressed data appears to contain only nulls.
The bzip2 library is built into the kernel and savecore. A dump image 
has several sections, not all of them compressed. So, there is no 
outside compress/uncompress utility involved. It does indeed compress 
zeros remarkably well. In addition, savecore does not write out zero 
pages. This leaves holes in vmcore.N that mdb reads as zeros.
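Both effects are easy to reproduce with stock tools on any modern 
system; a small illustration (paths are hypothetical and not part of 
the proposal):

```shell
# bzip2 collapses a megabyte of zero bytes down to a few dozen bytes:
head -c 1048576 /dev/zero | bzip2 -c | wc -c

# A sparse file: extending a file without writing data leaves a "hole"
# that occupies no disk blocks, yet reads back as zeros -- the same way
# mdb sees the zero pages that savecore never writes to vmcore.N.
truncate -s 1048576 /tmp/sparse.img
ls -ls /tmp/sparse.img        # apparent size vs. blocks actually used
cmp /tmp/sparse.img <(head -c 1048576 /dev/zero) \
    && echo "hole reads back as zeros"
```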



Ivek Szczesniak wrote:
> The stdio implementation in libc is among the slowest stdio
> versions out there. If you want to achieve better performance you
> should use the stdio implementation in libast or use mmap(2).
This is an interesting implementation suggestion, but it is outside the 
scope of the PSARC review because it does not affect the interfaces 
being proposed. We did achieve quite a speedup over the old method, and 
we'll take another look.


Cyril Plisko wrote:
> "    Several compression methods were compared before bzip2 was
>     chosen.
> "
>
> Can you please share the results of that comparison ?
>   
The best resource is here:
http://en.wikipedia.org/wiki/Comparison_of_file_archivers

We also wrote several small compression utilities and ran them against 
some large core files. Our results showed that, for both compression 
ratio and CPU time, lzjb < gzip < bzip2 < 7zip. 7zip was far too 
CPU-expensive, so we chose bzip2. With enough CPUs, bzip2 can be 
parallelized so that compression CPU time is not the bottleneck, and we 
reap the benefit of bzip2's higher compression ratio.
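A rough version of that comparison can be reproduced with stock 
utilities. The sketch below uses synthetic, highly redundant data as a 
stand-in for a crash dump, so the absolute numbers are illustrative 
only:

```shell
# 4 MiB of redundant sample data (crash dumps compress well because
# kernel memory images contain a great deal of redundancy).
yes "redundant kernel page contents" | head -c 4194304 > /tmp/sample.bin

# Compressed size for each method (smaller is better).
for tool in gzip bzip2; do
    size=$("$tool" -c /tmp/sample.bin | wc -c)
    echo "$tool: $size bytes (original: 4194304)"
done
```

Parallel bzip2 implementations (pbzip2, for example) split the input 
into independently compressed blocks, which is how compression CPU time 
can be kept off the critical path when enough CPUs are available.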




