This post describes about half of the reasons I stopped using Apache
everywhere I could a while ago.
--
Dustin Sallings (mobile)
On Feb 1, 2008, at 23:24, Jed Reynolds <[EMAIL PROTECTED]> wrote:
dormando wrote:
More or less.
The most efficient-for-the-money method to start out with is to add
more RAM to your CPU nodes (dynamic webservers, whatever) and run a
memcached instance on there too. Cheap to add RAM to hardware
you're using for other things, but expensive to get entire boxes
for it.
Very very few people actually peg memcached with CPU usage, and if
you do, you should be able to afford a few dedicated machines :)
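(For illustration, here's roughly what that looks like from the
application side, using the python-memcached client; the hostnames are
made up. Each web node runs its own memcached, and the client hashes
keys across all of them, so the spare RAM on every node pools into one
logical cache.)

    import memcache

    # One memcached instance per web node; keys are hashed across the
    # whole list, so the spare RAM on each node becomes part of a
    # single logical cache.
    mc = memcache.Client([
        'web1.example.com:11211',
        'web2.example.com:11211',
        'web3.example.com:11211',
    ])

    mc.set('greeting', 'hello')   # lands on whichever node the key hashes to
    print(mc.get('greeting'))     # fetched back from that same node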
Putting memcached on the same nodes as you put your apache workers
leaves you in a position to run your memcached into swap during a
request spike/flood and then you may as well just reboot your node
because the performance has fallen away badly. For example, if you
expect to run up to 200 Apache workers per node with a worker size
of 20MB, that means 4GB of RAM. If you want to dedicate 1GB to
memcached, make sure you have RAM left over for the rest of your OS
plus cache and buffers. However, the longer your workers run, and
depending on your app settings, expect your Apache processes to
fatten over time. So if your workers grow from 20MB to 60MB (I
regularly see 66MB httpd processes in my environment), then you've
created a situation where your workers demand 12GB during a request
spike. If you don't have >12GB of RAM... uh... yeah.
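(As a sanity check on those numbers, a quick back-of-the-envelope
script; the 512MB OS/buffer allowance is just an assumption for the
example.)

    # All sizes in MB.
    workers_per_node  = 200    # Apache workers you expect during a spike
    worker_size_fresh = 20     # a freshly forked worker
    worker_size_grown = 60     # a worker that has fattened over time
    memcached_size    = 1024   # 1GB dedicated to memcached
    os_headroom       = 512    # assumed allowance for the OS, buffers and cache

    def ram_needed(worker_size):
        """RAM the node needs to ride out a full worker spike without swapping."""
        return workers_per_node * worker_size + memcached_size + os_headroom

    print(ram_needed(worker_size_fresh) / 1024.0)   # ~5.4GB with fresh workers
    print(ram_needed(worker_size_grown) / 1024.0)   # ~13.2GB once they fatten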
My point: if you want your web nodes to *take a beating* (and I've
seen this happen repeatedly from spambots and trackback botnets),
don't put memcached on your web nodes. Put your memcached on nodes
that are well protected from memory starvation, like dedicated boxes
or an NFS server.
If you're worried about CPU thrashing a lot, you can use utilities
like schedutils to 'pin' memcached to a specific core on a
specific CPU, and 'mask' your webserver processes to all of the
rest. It can help a little bit but isn't usually necessary.
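(For reference, pinning is easy to sketch: taskset from the schedutils
package does it from the shell, and on Linux the same thing can be done
from Python with os.sched_setaffinity. The pidof lookup below is just
one way to find memcached's PID, and this assumes enough privileges to
change another process's affinity.)

    import os
    import subprocess

    # Find memcached's PID (first match if several are running).
    memcached_pid = int(subprocess.check_output(['pidof', 'memcached']).split()[0])

    # Pin memcached to core 0...
    os.sched_setaffinity(memcached_pid, {0})

    # ...and mask the current (web server) process onto the remaining cores.
    remaining = os.sched_getaffinity(0) - {0}
    os.sched_setaffinity(0, remaining)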
I wouldn't worry about httpd instances thrashing the CPU, because
httpd workers handle overload on a multi-CPU box pretty well. I've often
watched a 4GB, 4-core 2.6GHz Xeon box handle 4000-10000 connections
per second at a load average of 10-20 with about 400 Apache workers,
and while it swapped a bit, it kept up surprisingly well. (My httpd
instances were not as large--more like 25MB.) I had my memcached
instances on my NFS node, which never sustained much load. There were
also 3 MySQL servers behind it :-) I appreciated that web server a lot.
Jed