Hi Kristian,

Thanks for the explanation. However, I'm still struggling with how to make
sense of the first part of my post. Specifically, how do I know how much of
my cache has been used, and how severely under-allocated it is? If there is
some memory free, why is n_lru_nuked increasing at a rate of ~50/s?
I have another varnish box with 24GB of RAM and a 20GB cache file with the
following stats:

varnishstat -1 -f n_lru_nuked,sm_bfree,sm_balloc
n_lru_nuked          11350022          .   N LRU nuked objects
sm_balloc          4124241920          .   bytes allocated
sm_bfree          17350594560          .   bytes free

According to the calculation above, the used memory would be negative
(4124241920 - 17350594560 = -13226352640, about -12GB). Am I
misunderstanding something?

Thanks,
Damon

On Thu, Sep 8, 2011 at 12:40 AM, Kristian Lyngstol <[email protected]> wrote:

> On Wed, Sep 07, 2011 at 11:46:34AM -0700, Damon Snyder wrote:
> > According to the docs, the key statistic to look at is n_lru_nuked. This
> > value is constantly increasing. Every time you run 'varnishstat -1 -f
> > n_lru_nuked' the value changes. However, the value of sm_bfree seems to
> > always show some space available:
> >
> > varnishstat -1 -f n_lru_nuked,sm_bfree,sm_balloc
> > n_lru_nuked         135193763          .   N LRU nuked objects
> > sm_balloc          5468946432          .   bytes allocated
> > sm_bfree           2047246336          .   bytes free
>
> sm_bfree is a counter of how much memory has been freed, not how much is
> available. Every time an object is removed, expired, etc., this will
> increase, and it is never reduced. _balloc is the counterpart of that,
> and is increased every time something is allocated, and never reduced.
> From these numbers you can calculate how much memory is currently used:
>
> 5468946432 - 2047246336 = 3421700096 (a little over 3GB)
>
> However, you don't have to actually do that yourself, as _nbytes is
> doing exactly that.
>
> An important detail is that the size specified in the -s argument is
> /not/ the total memory footprint Varnish will have. It is only the total
> cache size for actual data, not counting overhead. For Varnish 2.1, we
> know that there's an additional overhead of slightly over 1kB for each
> object stored, assuming 64-bit systems (your mileage may vary, but this
> gives you an idea).
>
> On top of that is a bit of data for things like threads and sessions,
> but I rarely take that into consideration myself, as it will be a
> practically constant size measured in MB. Assuming the memory footprint
> for each thread is 10kB (it's unlikely that it's that large, especially
> considering copy-on-write and whatnot), 1000 threads will give a
> constant overhead of 10MB, so not something to consider.
>
> However, 10 million objects will give 10GB of overhead, so that should
> be accounted for when you decide how much memory the cache will use with
> -s. Examples I've run with are -s malloc,24GB and -s malloc,28GB on two
> different sites, both running on a 32GB system but with different
> average object size.
>
> Hopefully this helps you figure out what's going on.
>
> - Kristian
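As an aside for anyone reading the archives: below is a minimal shell
sketch of the bookkeeping described above. The sm_balloc - sm_bfree
subtraction and the ~1kB-per-object overhead come straight from Kristian's
explanation; using n_object as the current-object counter is my assumption,
so check your own varnishstat output for the right field name.

  # How fast is the cache nuking? Sample n_lru_nuked twice, 10s apart.
  n1=$(varnishstat -1 -f n_lru_nuked | awk '{print $2}')
  sleep 10
  n2=$(varnishstat -1 -f n_lru_nuked | awk '{print $2}')
  echo "nuke rate: $(( (n2 - n1) / 10 ))/s"

  # Cache data currently in use, per the sm_balloc - sm_bfree calculation.
  balloc=$(varnishstat -1 -f sm_balloc | awk '{print $2}')
  bfree=$(varnishstat -1 -f sm_bfree | awk '{print $2}')
  echo "data in use: $((balloc - bfree)) bytes"

  # Rough total footprint: data in use plus ~1kB of overhead per stored
  # object (Varnish 2.1, 64-bit), ignoring the few MB of thread/session
  # overhead. n_object as the counter name is an assumption on my part.
  objects=$(varnishstat -1 -f n_object | awk '{print $2}')
  echo "approx. footprint: ~$(( balloc - bfree + objects * 1024 )) bytes"

Following Kristian's sizing logic, the same overhead works in the other
direction when choosing a cache size: subtract the expected per-object
overhead (e.g. ~10GB for 10 million objects) from the memory you want
Varnish to use before picking the -s malloc size.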
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
