tongueroo wrote:
That's a good point. Hmm, maybe swap is what's happening.
Note there is a "-k" flag to memcached which, depending on underlying
OS, can lock down the pages so they won't page out to swap space. Since
your mongrel memory usage will be variable, you may be able to use that
flag to avoid a situation where memory pressure evicts memcached's pages.
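A minimal sketch of starting memcached with -k. The memory limit and user are assumptions for illustration; locking pages typically needs a raised memlock ulimit and root privileges:

```shell
# Allow the process to lock its full cache into RAM (run as root,
# or raise the memlock limit for the memcached user in limits.conf).
ulimit -l unlimited

# -d daemonize, -u drop to this user, -m cache size in MB,
# -k lock all paged memory so the cache can't be pushed to swap.
memcached -d -u memcached -m 1024 -k
```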
Also, I don't know cruby's internals all that well, but running a quick
rails app here locally seems to show that all of the memory allocations
are in heap. Since those won't be given back until the process exits,
if you're running lots of mongrels without ever exiting you're likely
not as efficient about memory usage as you could be. I only see
a very small amount of memory mmap()'d. If things are evenly
distributed and cruby (and the underlying memory allocator you're using)
are efficient about memory allocation, it may not be too bad.
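One way to check this yourself on Linux: look at a running mongrel's memory map and compare the [heap] segment against anonymous mmap regions. The process name and pgrep pattern here are assumptions; adjust for your setup:

```shell
# Find a mongrel pid ("mongrel_rails" is the usual process name).
pid=$(pgrep -f mongrel_rails | head -n1)

# Show the heap segment vs. anonymous mmap'd regions.
grep -E '\[heap\]' /proc/$pid/maps

# Or get a per-mapping breakdown with totals at the bottom.
pmap -x "$pid" | tail -n 3
```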
One solution for this would be to recycle the mongrels on a regular
basis. Maybe you're already doing that? There are some other solutions
too.
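As a sketch of the recycling idea: mongrel_cluster ships a cluster::restart task, so a cron entry can cycle the mongrels on a schedule. The schedule and config path below are assumptions:

```shell
# Hypothetical crontab entry: restart the mongrel cluster at 4am daily
# so long-lived processes hand their heap back to the OS.
# -C points at your mongrel_cluster config; adjust the path.
0 4 * * * /usr/bin/mongrel_rails cluster::restart -C /etc/mongrel_cluster/app.yml
```

A rolling restart (one mongrel at a time) would avoid dropping requests, but that takes more scripting than this one-liner.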
Though I don't think that we are swapping, that would make the most
sense.
Below is the info for our 9 memcached servers:
1. how much free RAM, as listed by free -m
2. memcached version number
3. how much RAM memcached is using right now; it's near full
again (full at 1 GB)
https://gist.github.com/e886d958a4bc8e103810
Right now it doesn't appear that we are swapping.
However, we do run our memcached instances on the same slices as our
app servers, where our mongrels live. Perhaps spikes in mongrel memory
usage are causing swapping.
Do those free numbers look good?
From what I've been told, the second row of free's output is the more
important one:
-/+ buffers/cache: XXX XXX
That's how much RAM is actually free before we start swapping. A gig
on each slice seems o-plenty.
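For anyone following along, that second row is just arithmetic on the first row: buffers and cache are reclaimable, so the kernel can hand them back before touching swap. A sketch with made-up numbers for a 1 GB slice (all values in MB):

```shell
# Hypothetical first-row values from "free -m" on a 1 GB slice.
total=1024
used=980
free=44
buffers=120
cached=500

# The "-/+ buffers/cache" row: buffers and cached pages count as
# available, since the kernel drops them under memory pressure.
used_real=$((used - buffers - cached))
free_real=$((free + buffers + cached))

echo "-/+ buffers/cache: ${used_real} ${free_real}"
# prints "-/+ buffers/cache: 360 664"
```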
Thanks again for all the helpful feedback and responses thus far.
Tung
On Mar 18, 8:04 pm, Dustin <[email protected]> wrote:
On Mar 18, 8:00 pm, tongueroo <[email protected]> wrote:
memcached reads are reported as very slow. 10+ seconds.
Are you giving it more RAM than you actually have? I would expect
that behavior if it were fetching from swap.