"G.W. Haywood" <[EMAIL PROTECTED]> writes:
> Would it be breaching any confidences to tell us how many
> kilobyte-requests per memory-megabyte or some other equally daft
> dimensionless numbers?
I assume the number you're looking for is an ideal ratio of frontend proxy
processes to backend server processes? No single number exists; you need to
monitor your system and tune.
In theory you can calculate it from the size of the average response and the
time the backend takes to generate it. If your pages take 200ms to generate
and they're 4k on average, then they'll take about 1s to spool out over a
56kbps link, so one backend process can keep roughly five frontend processes
fed and you'd want a 5:1 ratio. In practice, however, it doesn't work out so
cleanly, because the OS is also doing buffering and because it's really the
worst case you're worried about, not the average.
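If you want to see where the 5:1 comes from, here's the back-of-envelope
arithmetic as a quick python sketch; the numbers are just the ones above, and
the 1s is my rough allowance for headers and TCP overhead on top of the raw
transfer time:

    # rough sizing: how many frontend processes one backend process can feed
    page_bits = 4 * 1024 * 8           # 4k average page, in bits
    link_bps  = 56 * 1000.0            # 56kbps modem client
    raw_spool = page_bits / link_bps   # ~0.6s of pure transfer time
    spool_s   = 1.0                    # call it 1s with headers, TCP, etc.
    backend_s = 0.2                    # 200ms to generate the page
    print("ratio about %d:1" % round(spool_s / backend_s))   # -> 5:1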
If you have the memory you could just shoot for the most processes you can
handle; something like 256 frontend to 32 backend processes, for example, is
pretty aggressive. If your backend scripts are written efficiently you'll
probably find the backend processes are nearly all idle.
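To be concrete, that split is just the hard cap on each server's process
pool; assuming the usual thin-proxy-in-front, mod_perl-behind setup, it's
nothing more exotic than

    # frontend proxy httpd.conf: lots of cheap processes to babysit slow clients
    MaxClients 256

    # backend mod_perl httpd.conf: a small pool of heavy perl interpreters
    MaxClients 32

(the numbers are straight from the example above, not a recommendation for
your load).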
I tried using MinSpareServers, MaxSpareServers, and the other similar
parameters to let Apache tune this automatically, and found it didn't work
out well with mod_perl. Starting up a Perl process turned out to be the
single most CPU-intensive thing Apache could do, so as soon as it decided it
needed a new process it slowed down the existing processes, which made it
want even more processes, and it put itself into a feedback loop. I prefer to
force Apache to start a fixed number of processes and just stick with that
number.
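Concretely, pinning the pool is just a matter of setting the spare-server
knobs so Apache never has a reason to spawn or reap; something along these
lines on the backend, with the number adjusted to whatever your memory
allows:

    # backend mod_perl httpd.conf: fixed pool, no spawning under load
    StartServers    32
    MinSpareServers 32
    MaxSpareServers 32
    MaxClients      32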
--
greg