On 04/12/2013 01:24 AM, James Washer wrote:
> Machines are getting ever bigger. I routinely look at crash dumps from
> systems with 2TB or more of memory. I'm finding I'm wasting too much
> time waiting on crash to complete a command. For example, "kmem -s" took
> close to an hour on the dump I'm looking at now.

Most of the hosts where I run crash on large cores are I/O-bound rather than compute-bound when examining dumps in the hundreds-of-GB to TB range.

I'm not sure that simply adding parallelism would do much to improve performance here.
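One rough way to check which case you're in is to run the slow command non-interactively and compare wall-clock time against CPU time; a large gap means crash spent most of its life waiting on the disk rather than computing. This is only a sketch: "vmlinux" and "vmcore" are placeholders for your own kernel image and dump file, and it assumes GNU time is installed as /usr/bin/time.

```shell
# Hedged sketch: is "kmem -s" IO-bound or compute-bound on this dump?
# crash's -s flag suppresses the startup banner; commands are fed on stdin.
/usr/bin/time -f "wall=%es user=%Us sys=%Ss" \
    crash -s vmlinux vmcore <<'EOF'
kmem -s
quit
EOF
# If wall-clock time is far larger than user+sys CPU time, the run was
# dominated by waiting (typically disk reads), and parallelizing the
# computation won't buy much.
```

If wall and CPU time are close instead, the command really is compute-bound and parallelism could help.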

Regards,
Bryn.


--
Crash-utility mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/crash-utility
