Hi,

On Sun, Aug 16, 2015 at 05:09:27PM -0700, Matthew Dillon wrote:
> There are numerous possibilities. For example, tmpfs use. You could check
> if the wired page count has become excessive (could be an indication of a
> leak). There was a bug fix made in master related to a memory leak from
> locked memory that was fixed on July 12 (a51ba7a69d2c5084f2 in master), you
> could try cherry-picking that one into your local tree and see if it helps.
>
> You'll need to do some more investigation. The most likely possibility is
> tmpfs use. The wired memory leak is a possibility too but depends on the
> system workload.
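As an aside, the wired-page check Matthew suggests is easy to script. The sketch below compares two snapshots of the "pages wired down" counter; the line format follows FreeBSD-style `vmstat -s` output and the two values are placeholders, so adjust the grep pattern to whatever your DragonFly kernel actually prints.

```shell
# Minimal sketch, assuming FreeBSD/DragonFly-style `vmstat -s` output.
# In practice you would capture one snapshot now and one later, e.g.:
#   vmstat -s | grep 'pages wired down'
# The two values below are made-up placeholders standing in for those
# snapshots.
before="123456 pages wired down"
after="234567 pages wired down"

# A delta that keeps growing between snapshots under a steady workload
# would point at a wired-memory leak.
delta=$(( $(echo "$after" | awk '{print $1}') - $(echo "$before" | awk '{print $1}') ))
echo "wired pages grew by $delta"
```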
Thank you for the hints and for cherry-picking the fix. I updated the box
with the new source this morning, and I'll come back with the new results
later.

On Sun, Aug 16, 2015 at 10:27:08PM +0800, Nuno Antunes wrote:
> Any clue in vmstat -m ?

On the older kernel it looked like this: mostly occupied by vfscache,
then vnodes and HAMMER-inodes, so I'm guessing that either hammer cleanup,
updating the locate db, or a git pull may have increased the usage.
The tmpfs-related numbers, on the other hand, were all less than 100.

$ sed -E 's/^(.{20})(.{7})(.*)$/\2&/' vmstat-m-before |sort -nr |sed 's/^.......//' |head -n5
vfscache        881481  81743K      0K    764928K  1033104    0    0
vnodes          420508 170832K      0K 134203388K   420508    0    0
HAMMER-inodes   404482 353922K      0K 134203388K   416101    0    0
HAMMER-others     3994    720K      0K    764928K  3754450    0    0
devfs             3803    602K      0K    764928K     4382    0    0

Best Regards,
YONETANI Tomokazu.
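For anyone reusing the pipeline above: vmstat -m left-justifies the type name in a fixed-width first column, so `sort -nr` alone cannot sort on the InUse count. The sed trick copies the count column to the front of each line, sorts numerically, then strips the prefix again. A self-contained sketch with made-up rows (the 20-character name column and 7-character count column are assumptions matching the layout above):

```shell
# Hedged sketch of the sort trick: fabricate three vmstat -m style rows
# with a 20-column type name and a 7-column InUse count, then sort them
# descending by that count.
printf '%-20s%7s %7s\n' \
  vfscache 881481 81743K \
  devfs      3803   602K \
  vnodes   420508 170832K |
# Copy columns 21-27 (the count) to the front of the line so sort -nr
# sees a leading number, then strip the 7-character prefix again.
sed -E 's/^(.{20})(.{7})(.*)$/\2&/' | sort -nr | sed 's/^.......//'
```

The same idea generalizes to any fixed-width report: prepend the sort key, sort, strip.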
Attachment: vmstat-m-before.gz (application/gzip)
Attachment: vmstat-m-after.gz (application/gzip)