On Thu, Aug 29, 2013 at 04:37:44PM -0700, Eric W. Biederman wrote:

[..]
> This situation is people who run machines of unreasonable size really
> would like to use multiple cpus when generating crash dumps.

Yes. kdump is now in a phase where people are doing scalability work, and
one of the problems is that on multi-terabyte machines, filtering is
taking a long time. Within filtering, it is the compression step in
particular that takes the longest (Hatayama and Cliff Wickman have done
some study here).

So the idea seems to be to bring up more CPUs in the second kernel,
parallelize the compression work, and thereby cut the dump time on
multi-terabyte machines.
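
For illustration only, here is a minimal user-space sketch of that
parallelization idea (not makedumpfile's actual code): the filtered pages
are split into per-CPU chunks and each chunk is compressed in its own
thread with zlib. The thread count, chunk size and in-memory buffers are
assumptions made just for the example.

/*
 * Build with: gcc -pthread parcomp.c -lz
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define NTHREADS   4            /* assumed number of CPUs in the crash kernel */
#define CHUNK_SIZE (1 << 20)    /* assumed 1 MiB of filtered pages per chunk  */

struct chunk {
	unsigned char in[CHUNK_SIZE];
	unsigned char out[CHUNK_SIZE + CHUNK_SIZE / 1000 + 64]; /* zlib worst case */
	uLongf out_len;
};

static void *compress_chunk(void *arg)
{
	struct chunk *c = arg;

	c->out_len = sizeof(c->out);
	/* Each thread compresses its own chunk independently. */
	if (compress2(c->out, &c->out_len, c->in, CHUNK_SIZE,
		      Z_DEFAULT_COMPRESSION) != Z_OK)
		c->out_len = 0;
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct chunk *chunks = calloc(NTHREADS, sizeof(*chunks));
	int i;

	if (!chunks)
		return 1;

	/* A real dumper would fill the chunks from /proc/vmcore instead. */
	for (i = 0; i < NTHREADS; i++)
		memset(chunks[i].in, i, CHUNK_SIZE);

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, compress_chunk, &chunks[i]);

	for (i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		printf("chunk %d: %lu -> %lu bytes\n",
		       i, (unsigned long)CHUNK_SIZE,
		       (unsigned long)chunks[i].out_len);
	}

	free(chunks);
	return 0;
}

Because each chunk is compressed independently, the work scales with the
number of CPUs the second kernel brings up, which is exactly where the
dump time on multi-terabyte machines is currently being lost.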

Thanks
Vivek

_______________________________________________
kexec mailing list
[email protected]
http://lists.infradead.org/mailman/listinfo/kexec