On Fri, Oct 12, 2012 at 03:58:06PM +0100, Colin Guthrie wrote:
> 'Twas brillig, and Michael Olbrich at 12/10/12 09:52 did gyre and gimble:
> > On Fri, Oct 12, 2012 at 04:31:46AM -0400, Dave Reisner wrote:
> >> On Fri, Oct 12, 2012 at 09:19:08AM +0100, Colin Guthrie wrote:
> >>> 'Twas brillig, and David Strauss at 12/10/12 05:39 did gyre and gimble:
> >>>> On Thu, Oct 11, 2012 at 3:31 PM, Colin Guthrie <gm...@colin.guthr.ie> wrote:
> >>>>> Something is obviously not good there! journald is using something in
> >>>>> the region of 250MB res.
> >>>>>
> >>>>> What's the best way to debug this?
> >>>>
> >>>> What version are you on? The Fedora 17 journal does excessive mapping
> >>>> that's fixed in current versions.
> >>>
> >>> Argh, I forgot that detail - I'm on 194, no patches to journal-related
> >>> stuff applied.
> >>>
> >>> As others are not seeing it jump out at them, I figured it might be due
> >>> to just using /run and not flushing it to disk etc.
> >
> > Well, the mapped memory is the same physical memory that is used in the
> > tmpfs. You'll probably run out of memory there first, before journald runs
> > out of virtual address space. And that will happen even if journald does
> > not map it. Setting RuntimeMaxUse in journald.conf should help here.
>
> Well, perhaps, but with systemd 189 (I think, perhaps a bit earlier) I
> certainly didn't see this kind of memory usage. I'm certainly hitting
> OOMs much more regularly now unless I restart systemd occasionally.
>
> Also, that explanation doesn't really explain:
>
> [root@jimmy ~]# cat /proc/$(pidof systemd-journald)/status | grep Vm
> VmPeak: 6611116 kB
> VmSize: 6611112 kB
I've tried to flood the journal with messages for a while now. VmPeak is
stable, and VmSize grows until it reaches VmPeak and then drops
significantly. This looks like some limit is reached and then journald
releases most of its memory.

> VmLck:       0 kB
> VmPin:       0 kB
> VmHWM:  253692 kB
> VmRSS:  227680 kB
> VmData:    960 kB
> VmStk:     136 kB
> VmExe:     164 kB
> VmLib:    2804 kB
> VmPTE:   12876 kB
> VmSwap:    172 kB
>
> [root@jimmy ~]# cat /proc/$(pidof systemd-journald)/maps | grep -c /run/log/
> 1939

This goes up and down with VmSize for me.

> [root@jimmy ~]# du -sh /run/log/
> 10M /run/log/
>
> So, I've got about 2k mmaps, using about 230MB RSS, and yet my journal
> data is only 10MB in tmpfs... doesn't sound right to me!
>
> Unless all this RSS memory is just fake and pretend and just different
> maps onto that 10MB and it's really "free", but my understanding was
> that it was the VmSize that was all the virtual mmap stuff, not the
> VmRSS... please correct me if I'm wrong.

VmSize is, as far as I know, the currently used virtual address space.
This does not mean that there is any real memory associated with it.
VmRSS is the total amount of physical memory actually mapped into the
address space of the process.

Usually, mapping a lot of files is no problem: when the memory is
needed, the kernel just frees some pages, since it can read the files
back from disk again later. On a tmpfs, however, that's not possible;
there is no persistent storage backing the files.

What I think happens for you is the following: for some reason, journald
is not releasing old mappings. It still rotates files in
/run/log/journal/, so "du -sh /run/log/" looks correct. However, the old
mappings mean that even though a file has been deleted, its contents are
still there. So the mappings are for files in /run/log/journal/; they
just don't show up because there is no directory entry for them any
more. I guess the algorithm that decides when to release old mappings is
broken for you.
Regards,
Michael

--
Pengutronix e.K.                           |                              |
Industrial Linux Solutions                 | http://www.pengutronix.de/   |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0     |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555  |
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel