On Feb 25, 2010, at 12:47 PM, David Birdsong wrote:

> On Thu, Feb 25, 2010 at 8:41 AM, Barry Abrahamson <[email protected]> wrote:
>>
>> On Feb 25, 2010, at 2:26 AM, David Birdsong wrote:
>>
>>> I have seen this happen.
>>>
>>> I have a similar hardware setup, though I changed the multi-SSD RAID
>>> into 3 separate cache file arguments.
>>
>> Did you try RAID and switch to the separate cache files because
>> performance was better?
>
> Seemingly so.
>
> For some reason, enabling block_dump showed that kswapd was always
> writing to those devices despite there not being any swap space on
> them.
>
> I searched around fruitlessly to try to understand the overhead of
> software RAID to explain this, but once I discovered Varnish could
> take multiple cache files, I saw no reason for the software RAID and
> just abandoned it.
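As a sketch of the setup David describes, Varnish accepts multiple `-s` storage arguments, so each SSD can back its own cache file instead of putting one file on a software RAID device. The paths and sizes below are hypothetical, not taken from the thread:

```shell
# Hypothetical example: three separate file-backed caches, one per SSD,
# instead of a single cache file on an md RAID device.
# Mount points and sizes are illustrative only.
varnishd -a :80 \
  -s file,/ssd1/varnish_cache.bin,80G \
  -s file,/ssd2/varnish_cache.bin,80G \
  -s file,/ssd3/varnish_cache.bin,80G
```

With separate `-s` arguments, Varnish spreads objects across the backends itself, so there is no RAID layer between the cache and the disks.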
Interesting - I will try it out! Thanks for the info.

>>> We had roughly 240GB storage space total; after about 2-3 weeks,
>>> sm_bfree reached ~20GB. lru_nuked started incrementing, sm_bfree
>>> climbed to ~60GB, but lru_nuking never stopped.
>>
>> How did you fix it?
>
> I haven't yet.
>
> I'm changing up how I cache content, such that lru_nuking can be
> better tolerated.

In my case, Varnish took a cache of 1 million objects and purged 920k of
them. When there were 80k objects left, the child restarted, thus
dumping the remaining 80k :)

--
Barry Abrahamson | Systems Wrangler | Automattic
Blog: http://barry.wordpress.com

_______________________________________________
varnish-misc mailing list
[email protected]
http://projects.linpro.no/mailman/listinfo/varnish-misc
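For reference, the counters discussed in this thread can be watched from the shell; a minimal sketch, assuming the counter names reported by varnishstat in the Varnish 2.x era:

```shell
# Dump the storage and eviction counters once and filter for the ones
# mentioned above: sm_bfree (bytes free in the file storage backend)
# and the lru_nuked counter (objects evicted to make room for new ones).
varnishstat -1 | egrep 'sm_bfree|lru_nuked'
```

Polling this periodically (e.g. from cron) makes it easy to see whether nuking starts well before the storage actually fills, as described above.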
