On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
<manuel.s...@nurfuerspam.de> wrote:
> Hello,
>
> if a bigger application crashes and produces a coredump, systemd-coredump
> seems to have a few problems handling it.
>
> First, there is the 767 MB limit, which simply "drops" all bigger
> coredumps.
>
> But even below this limit it seems to be impossible to store coredumps. I
> did a few tests and found that, with the default configuration, the limit
> seems to be at about 130 MB. Bigger coredumps are just dropped, and I cannot
> find any errors logged anywhere.
>
> It seems to be possible to work around this problem by increasing
> SystemMaxFileSize to 1000M. With this configuration change, bigger coredumps
> can be stored, but this causes another problem.
>
> As soon as a bigger coredump (about 500 MB) is to be stored, the whole
> system slows down significantly. It seems that storing such large amounts of
> data takes quite a long time and is very CPU-hungry...
>
> Can someone please give some information on this? Maybe it's a bad idea to
> store such large amounts of data in the journal? If so, what's the solution?
> Will journald get improvements in this area?
>
> Thank you very much in advance.
>
> Greetings,
>
> Manuel
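
For reference, the SystemMaxFileSize workaround described above is the journal
file size cap in /etc/systemd/journald.conf; it would look roughly like this
(the value here is just illustrative):

    [Journal]
    SystemMaxFileSize=1000M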

I wish there were a good way to install a system debugger which could
inspect the process and its memory at the time of the crash and
generate a short textual report, like libSegFault, or a minidump, like
breakpad. Either would hopefully be small enough to just chuck into the
journal.
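
A minimal sketch of the "short textual report" idea, roughly what glibc's
libSegFault does when preloaded. The function names below are mine, and
backtrace() is not strictly async-signal-safe, so treat this as an
illustration rather than a robust implementation:

    /* crash_report.c: print a short backtrace on SIGSEGV/SIGABRT, then
     * re-raise the signal so a core can still be produced.
     * Build with: gcc -g -rdynamic crash_report.c */
    #include <execinfo.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void crash_report(int sig) {
        static const char msg[] = "*** crash, short backtrace: ***\n";
        void *frames[64];
        int n;

        /* Write straight to stderr; avoid stdio/malloc in the handler. */
        write(STDERR_FILENO, msg, sizeof(msg) - 1);
        n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);

        /* SA_RESETHAND already restored the default action, so re-raising
         * terminates the process (and dumps core if limits allow). */
        raise(sig);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = crash_report;
        sa.sa_flags = SA_RESETHAND;
        sigaction(SIGSEGV, &sa, NULL);
        sigaction(SIGABRT, &sa, NULL);

        /* Deliberately crash to exercise the handler. */
        *(volatile int *)0 = 1;
        return 0;
    }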

core_pattern requires funneling the entire process memory through a
pipe and making a copy of it. LD_PRELOAD seems terribly brittle and
doesn't work on statically linked binaries.
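
For comparison, the pipe hookup itself is just the kernel's core_pattern;
systemd ships a sysctl.d fragment along these lines (the exact helper path
and argument list vary by version; the specifiers are documented in core(5)):

    kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e

The kernel then writes the entire core image into that pipe, which is the
copy overhead mentioned above.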