On 02/18/2014 11:05 AM, Thomas Bächler wrote:
> On 17.02.2014 21:27, Manuel Reimer wrote:
>> As soon as a larger coredump (about 500 MB) has to be stored, the whole
>> system slows down significantly. Storing that much data seems to take
>> quite a long time and is a very CPU-hungry process...
> I completely agree. Since the kernel ignores the maximum coredump size
> when core_pattern is used, a significant amount of time passes whenever
> a larger process crashes, with no benefit (since the dump never gets
> saved anywhere).
> This is extremely annoying if processes with sizes in the tens or
> hundreds of gigabytes crash, which sadly happened to me quite a few
> times recently.
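(For context: the core_pattern that triggers this looks roughly like the
following here; the exact list of %-specifiers may differ between systemd
versions:

  $ cat /proc/sys/kernel/core_pattern
  |/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e

The leading "|" tells the kernel to pipe the dump to systemd-coredump
instead of writing it to a file, and it is on this piped path that the
size limit described above is no longer applied.)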
If this feature is broken by design, why is it still enabled by default
on Arch Linux? systemd-coredump makes it nearly impossible to debug
larger processes, and it took me quite some time to figure out how to get
coredumps placed in /var/tmp so I could use them to find out why my
process had crashed.
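In case it saves someone else the search: what finally worked for me was
a sysctl override that switches core_pattern back to plain files, roughly
like this (file name and pattern are just what I picked, adjust as needed):

  # /etc/sysctl.d/99-coredump.conf -- sorts after systemd's 50-coredump.conf
  kernel.core_pattern=/var/tmp/core-%e-%p-%t

applied with "sysctl --system" (or a reboot), together with
"ulimit -c unlimited" in the shell that starts the process, so the kernel
actually writes the full dump.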
Yours
Manuel