Jed Reynolds wrote:
Putting memcached on the same nodes as your Apache workers leaves you in a position to run memcached into swap during a request spike/flood; at that point you may as well just reboot the node, because performance has fallen away badly.

At Last.fm we run memcached on the same nodes as Apache, but all our webservers are diskless. Although that means that memcached can't swap out, it's not any better: if the node runs out of RAM without swap, the machine will lock up for several minutes while the OOM killer thinks about what to do - and that seriously affects any clients connected to memcached.

Having said that, in normal use we don't hit this problem - we've found that it's possible to control the RAM usage of the machine to within a few tens of megabytes, even with PHP4 leaking memory like a sieve.

We only have ~6 Apache children per core though (between 12 and 48 children per machine); request queuing happens in Perlbal, so we only run as many Apache processes as the machine's CPUs can handle - this helps to keep Apache memory usage under control.
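To make that concrete, here's a rough sketch of how the memory budget on a single box might be carved up - the specific numbers (8 cores, ~64 MB per Apache child, a 4 GB memcached cap) are purely illustrative assumptions, not our actual config:

    # httpd.conf (prefork MPM) - cap children so Apache's worst case is bounded
    # assumed: ~6 children per core on an 8-core box, ~64 MB RSS per child
    StartServers          8
    MinSpareServers       8
    MaxSpareServers      16
    ServerLimit          48
    MaxClients           48        # 48 children * 64 MB ~ 3 GB worst case
    MaxRequestsPerChild 500        # recycle children to contain PHP leaks

    # memcached started with a hard memory cap, so Apache + memcached
    # still fit comfortably inside physical RAM (no swap on diskless nodes)
    memcached -d -m 4096 -u nobody -p 11211

The point is just that if MaxClients times the per-child RSS, plus memcached's -m cap, stays below physical RAM, a request flood can't push the box into swap - or, on diskless nodes, into the OOM killer.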

--
Russ Garrett
Last.fm Ltd.
[EMAIL PROTECTED]
