I did push some significant changes to the low-memory pageout and kill code to master on August 20. I have MFC'd that to the release branch today (September 8th), so try building a new kernel. The fix deals with an edge case that can occur when programs allocate large amounts of memory all at once: an allocation in excess of cache+free could cause the process killer to activate prematurely.
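A minimal sketch of the sort of allocation pattern that hit the edge case, for anyone who wants to exercise the new kernel; SIZE is a placeholder here and should be set above the cache+free total reported by systat -vm 1 on the test box:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Placeholder size: pick a value larger than cache+free. */
    #define SIZE    (8ULL * 1024 * 1024 * 1024)

    int
    main(void)
    {
        char *p = malloc(SIZE);

        if (p == NULL) {
            perror("malloc");
            return 1;
        }
        memset(p, 1, SIZE);     /* touch every page so it is really allocated */
        printf("survived; the process killer did not fire\n");
        free(p);
        return 0;
    }

On a fixed kernel this should force pageout rather than trip the kill code, assuming there is enough swap to absorb it.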
-Matt

On Tue, Sep 8, 2015 at 12:02 AM, YONETANI Tomokazu <y0n3t...@gmail.com> wrote:
> Hi,
>
> The wired pages stay at slightly less than 1.6G, but active and inactive
> keep growing until it starts killing some processes. A silly workaround
> could be to run something like
> $ perl -e '$x="@";$x=$x x 1048576; $x x 1048576'
> so that the active count gets pushed back to something like 50M, which
> gives me a few to several days without OOM.
>
> Best Regards,
> YONETANI Tomokazu.
>
> On Thu, Aug 20, 2015 at 10:01:31AM -0700, Matthew Dillon wrote:
> > Continue monitoring the wired pages (from systat -vm 1) on your system
> > with the new kernel and see if those tick up from day to day.
> >
> > -Matt
> >
> > On Tue, Aug 18, 2015 at 3:49 AM, YONETANI Tomokazu <y0n3t...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > On Sun, Aug 16, 2015 at 05:09:27PM -0700, Matthew Dillon wrote:
> > > > There are numerous possibilities. For example, tmpfs use. You could
> > > > check if the wired page count has become excessive (could be an
> > > > indication of a leak). There was a bug fix made in master related to
> > > > a memory leak from locked memory that was fixed on July 12
> > > > (a51ba7a69d2c5084f2 in master); you could try cherry-picking that
> > > > one into your local tree and see if it helps.
> > > >
> > > > You'll need to do some more investigation. The most likely
> > > > possibility is tmpfs use. The wired memory leak is a possibility
> > > > too but depends on the system workload.
> > >
> > > Thank you for the hints and for cherry-picking the fix. I updated the
> > > box with the new source this morning, and I'll come back with the new
> > > result later.
> > >
> > > On Sun, Aug 16, 2015 at 10:27:08PM +0800, Nuno Antunes wrote:
> > > > Any clue in vmstat -m ?
> > >
> > > On the older kernel, it looked like this: mostly occupied by vfscache,
> > > then vnodes and HAMMER-inodes, so I'm guessing either hammer cleanup,
> > > updating the locate db, or git pull may have increased the usage.
> > > tmpfs-related numbers were less than 100, on the other hand.
> > >
> > > $ sed -E 's/^(.{20})(.{7})(.*)$/\2&/' vmstat-m-before |sort -nr |sed 's/^.......//' |head -n5
> > > vfscache       881481  81743K   0K     764928K  1033104    0    0
> > > vnodes         420508 170832K   0K  134203388K   420508    0    0
> > > HAMMER-inodes  404482 353922K   0K  134203388K   416101    0    0
> > > HAMMER-others    3994    720K   0K     764928K  3754450    0    0
> > > devfs            3803    602K   0K     764928K     4382    0    0
> > >
> > > Best Regards,
> > > YONETANI Tomokazu.
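If you would rather log the wired count over a few days than keep watching systat -vm 1, a rough sketch along these lines may help; it assumes the vm.stats.vm.v_wire_count sysctl, which reports wired memory in pages (the name is an assumption here, so check 'sysctl vm.stats.vm' on your kernel first):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        for (;;) {
            unsigned int wired;
            size_t len = sizeof(wired);

            /* Assumed sysctl name; verify with 'sysctl vm.stats.vm'. */
            if (sysctlbyname("vm.stats.vm.v_wire_count", &wired, &len,
                             NULL, 0) != 0) {
                perror("sysctlbyname");
                return 1;
            }
            /* The count is in pages; convert to megabytes for logging. */
            printf("wired: %u pages (~%lu MB)\n", wired,
                   (unsigned long)wired * getpagesize() / (1024 * 1024));
            sleep(60);
        }
    }

Left running in the background, the output should show whether wired memory ticks up from day to day, which is what the thread above is trying to establish.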