Chuck Swiger wrote:
On Oct 27, 2008, at 11:39 AM, Francis Dubé wrote:
I've read that this is mainly caused by Apache spawning too many
processes. Everyone seems to suggest decreasing the MaxClients
directive in Apache (set to 450 at the moment), but here's the
problem... I need to increase it! During peaks all the processes are
in use, and we even see small drops sometimes because there aren't
enough processes to serve the requests. Our traffic is increasing
slowly over time, so I'm afraid it'll become a real problem soon. Any
tips on how I could deal with this situation, on Apache's or FreeBSD's side?
You need to keep your MaxClients setting limited to what your system can
run under high load; generally the amount of system memory is the
governing factor.  If you set your MaxClients higher than that, your
system will start swapping under the load and once you start hitting VM,
it's game over: your throughput will plummet and clients will start
getting lots of broken connections, just as you describe.
According to top, we have about 2G of inactive RAM with 1.5G active (4G
total RAM, amd64). Swapping is not a problem in this case. After
checking multiple things (MySQL, network, CPU, RAM) when a drop occurs,
we determined that every time there is a drop, the number of Apache
processes (ps aux | grep httpd | wc -l) is at MaxClients, and new HTTP
requests don't get an answer from Apache (the TCP handshake completes
but Apache never pushes any data).
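A small watchdog along these lines can confirm that the drops coincide with hitting the process limit. This is just a sketch: MAXCLIENTS matches the 450 mentioned above, and the 90% warning threshold is an arbitrary choice.

```shell
#!/bin/sh
# Sketch: warn when the httpd process count approaches MaxClients.
# MAXCLIENTS mirrors the Apache config (450 here); THRESHOLD is arbitrary.
MAXCLIENTS=450
THRESHOLD=90   # warn at 90% of MaxClients

check_usage() {
    # $1 = current httpd process count, $2 = configured MaxClients
    pct=$(( $1 * 100 / $2 ))
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARNING: ${1}/${2} httpd processes (${pct}%)"
    else
        echo "OK: ${1}/${2} httpd processes (${pct}%)"
    fi
}

# The [h] trick keeps grep from matching its own process.
count=$(ps aux | grep '[h]ttpd' | wc -l | tr -d ' ')
check_usage "$count" "$MAXCLIENTS"
```

Run it from cron every minute and you'll have a timeline to correlate against the dropped requests.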
Thanks for your reply!
For a rough starting point, divide system RAM by httpd's typical
resident memory size. If your load legitimately exceeds this, you'll
need to beef up the machine or run multiple webserver boxes behind a
load-balancer (round-robin via IPFW or PF is a starting point, but
something like a NetScaler or a Foundry ServerIron is what the big
websites generally use).
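If you go the PF route, the round-robin is a one-line rdr rule. A hypothetical pf.conf fragment (interface name and backend addresses are made up):

```
# Sketch: spread incoming port-80 traffic across two backends, round-robin.
ext_if = "em0"
web_servers = "{ 192.168.1.10, 192.168.1.20 }"
rdr on $ext_if proto tcp from any to any port 80 -> $web_servers round-robin
```

Note this does plain connection distribution with no health checks, so a dead backend keeps receiving its share of connections.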
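The division above is trivial, but here it is spelled out. The numbers are assumptions for illustration: 3G of RAM left for Apache after the OS and MySQL take their share, and 12 MB resident per httpd process (check yours with ps -o rss -U www, or whatever user httpd runs as).

```shell
#!/bin/sh
# Rough MaxClients sizing sketch. Both figures below are assumptions:
# measure your own httpd RSS before trusting the result.
ram_for_apache_mb=3072   # RAM left for Apache (4G box minus OS/MySQL)
httpd_rss_mb=12          # typical per-process resident size

max_clients=$(( ram_for_apache_mb / httpd_rss_mb ))
echo "Suggested MaxClients ceiling: $max_clients"
```

If that ceiling is below the concurrency you actually need, that's the signal to add hardware rather than raise MaxClients further.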