Hello,

I have a Rails application that produces quite a bit of log output -
about 500MB per day, maybe 3-4 million lines.  Currently this goes
into a flat file with daily rotation.
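
For concreteness, a typical logrotate stanza for this kind of setup
looks like the following (path and retention count are illustrative,
not my exact config):

    /srv/rails/shared/log/production.log {
        daily
        rotate 14
        compress
        delaycompress
        missingok
        notifempty
        # the app keeps the file open, so truncate in place
        copytruncate
    }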

I tried dumping this into journald via stdout so that I could see
everything in one place.  On a standard Google Cloud Platform
instance, this used about 10% extra CPU.  I was willing to live with
that, but more of a problem was how quickly the journal grew on
disk: about 10x the rate of the flat file over the 2 hours I ran
the experiment.  That is, after 2 hours, the usage reported by
'sudo journalctl --disk-usage' was over 400MB, which is not much
less than I would normally see for an entire day's worth of
logging.
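
For reference, the service was pointed at the journal along these
lines (unit contents abbreviated, names and paths are placeholders
rather than my exact setup):

    [Service]
    ExecStart=/srv/rails/bin/start-puma
    StandardOutput=journal
    StandardError=journal
    SyslogIdentifier=rails-app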

I am wondering whether this is to be expected given journald's extra
functionality and complexity - I can imagine the per-entry metadata
fields adding a fixed overhead to millions of short lines - or
whether something looks wrong.  I'm using systemd 229 on Ubuntu
16.04.
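
Are there settings in /etc/systemd/journald.conf that could explain
this?  The knobs that look relevant to me are below (the values are,
I believe, the defaults on 229):

    [Journal]
    # large fields are compressed, but my understanding is this only
    # kicks in above a size threshold, so short log lines may not benefit
    Compress=yes
    # empty means: cap the journal at 10% of the filesystem
    SystemMaxUse=
    # per-service rate limiting defaults
    RateLimitInterval=30s
    RateLimitBurst=1000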

Thank you,
Bill Lipa