On Thu, Feb 23, 2006 at 10:26:37AM +0900, Ken'ichi Ohmichi wrote:

[..]
> 
> >  Maybe filtering can be a two-pass process. In the first pass we get rid
> >  of zero pages. This will not require interfacing with gdb and it
> >  can be a small utility which can be run from the initrd itself in the
> >  second kernel. This utility can be part of "crash" or kexec-tools. I am
> >  hoping that removing zero pages itself should lead to a substantial
> >  reduction in dump image size.
> >
> >  In the second pass, we can filter out the rest of the pages. This second
> >  pass will run from the regular kernel context. This can be interfaced
> >  with crash.
> I think the partial dump feature should be run in the second kernel in
> order to shorten the down time and to decrease the amount of disk space
> used for the crash dump.  The number of zero pages depends on the memory
> usage, and it will be difficult to reliably shorten the down time if only
> zero pages are eliminated in the second kernel.
> 

I agree. Just eliminating zero pages might not help in all situations
and is dependent on memory usage. Hence, complete filtering in the second
kernel itself is a desirable feature.
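
As a rough sketch of what the first-pass zero-page check could look like
(the function and constant names below are my own illustration, not taken
from any existing patch; a real utility would read pages from /proc/vmcore
or the raw dump and consult such a check before writing a page out):

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/*
 * Hypothetical helper for a first-pass filter: returns 1 if the
 * page-sized, suitably aligned buffer contains only zero bytes,
 * 0 otherwise.
 */
static int page_is_zero(const unsigned char *page)
{
	size_t i;

	for (i = 0; i < PAGE_SIZE; i += sizeof(uint64_t)) {
		if (*(const uint64_t *)(page + i) != 0)
			return 0;
	}
	return 1;
}

Something this small has no dependency on kernel data structures at all,
which is why it could live in the initrd of the second kernel.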


> Ideally, the kernel should keep information about various kernel data
> structures for the partial dump utility to access during a crash dump,
> instead of using a vmlinux image compiled with the -g option.
> However, this approach would require modification to the kernel.
> 
> I think a more practical approach would be to build a partial dump
> command which takes advantage of the crash source code.  This way, it
> would be possible to build a utility which has less dependency on the
> kernel mm structures and which could be run in the second kernel at the
> same time.
> 

Are you planning to use a debug vmlinux and gdb as the backend? If yes, then
how would you reduce the memory usage (in comparison to crash) so that this
utility can run in the second kernel? If no, then do you think hardcoding
the structure sizes and fields, and not supporting NUMA machines, is a
sustainable alternative?
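
To illustrate what I mean by hardcoding (all structure names, offsets and
sizes below are placeholders I made up, not values from any real kernel or
config):

#include <string.h>
#include <stddef.h>

/*
 * Illustration only: hardcoding struct layouts means carrying a table
 * like this for every kernel version/config the utility is supposed
 * to support.
 */
struct mem_map_layout {
	const char *kernel_version;
	unsigned long page_flags_offset;   /* offsetof(struct page, flags)  */
	unsigned long page_count_offset;   /* offsetof(struct page, _count) */
	unsigned long sizeof_page;         /* sizeof(struct page)           */
};

static const struct mem_map_layout layouts[] = {
	{ "2.6.15", 0x00, 0x08, 0x28 },    /* hypothetical values */
	{ "2.6.16", 0x00, 0x08, 0x30 },    /* hypothetical values */
};

static const struct mem_map_layout *find_layout(const char *release)
{
	size_t i;

	for (i = 0; i < sizeof(layouts) / sizeof(layouts[0]); i++) {
		if (strcmp(layouts[i].kernel_version, release) == 0)
			return &layouts[i];
	}
	return NULL;   /* unknown kernel: the utility would have to bail out */
}

Every config option that changes struct page (or the NUMA node layout)
means another row in such a table, which is the maintenance burden I am
worried about.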
 
> [..]
> >Looks like you are not regenerating the headers. Don't you have to
> >regenerate the headers after the filtering? After filtering, memory will
> >be much more fragmented and you will require many more ELF headers, with
> >each header describing one contiguous chunk of memory.
> There is no need to change the ELF headers because the pages that the
> program doesn't dump are turned into zero pages with lseek().
> 

How does that help in reducing the file size then? Your dump file size has
not been reduced on disk. The moment you do lseek() to skip a few pages and
later write data, the gap will be filled with zeros. This basically defeats
the purpose of filtering. Am I missing something?
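
A small test like the following could settle whether the lseek()'d gaps
actually occupy disk blocks on the target filesystem (the output path is
hypothetical; the idea is just to compare the apparent size with the
allocated blocks after writing around a hole):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

#define PAGE_SIZE 4096

/*
 * Write one page, seek over one page, write another page, then compare
 * the apparent size (st_size) with the allocated blocks (st_blocks * 512).
 */
int main(void)
{
	char page[PAGE_SIZE];
	struct stat st;
	int fd;

	memset(page, 0xaa, sizeof(page));

	fd = open("/tmp/sparse-test", O_CREAT | O_TRUNC | O_WRONLY, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	write(fd, page, sizeof(page));          /* page 0: data */
	lseek(fd, PAGE_SIZE, SEEK_CUR);         /* page 1: hole */
	write(fd, page, sizeof(page));          /* page 2: data */

	fstat(fd, &st);
	printf("st_size   = %lld\n", (long long)st.st_size);
	printf("allocated = %lld\n", (long long)st.st_blocks * 512LL);

	close(fd);
	return 0;
}

If the allocated blocks come out smaller than st_size, the filesystem is
keeping the skipped pages as holes; if not, the lseek() trick really does
not save anything on that filesystem.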

Thanks
Vivek
_______________________________________________
fastboot mailing list
[email protected]
https://lists.osdl.org/mailman/listinfo/fastboot
