On Oct 27, 2008, at 11:39 AM, Francis Dubé wrote:
I've read that this is mainly caused by Apache spawning too many processes. Everyone seems to suggest decreasing the MaxClients directive in Apache (set to 450 at the moment), but here's the problem... I need to increase it! During peaks all the processes are in use, and we even get occasional drops because there aren't enough processes to serve the requests. Our traffic is increasing slowly over time, so I'm afraid it'll become a real problem soon. Any tips on how I could deal with this situation, on Apache's or FreeBSD's side?


You need to keep your MaxClients setting limited to what your system can run under high load; generally the amount of system memory is the governing factor. [1] If you set your MaxClients higher than that, your system will start swapping under the load and once you start hitting VM, it's game over: your throughput will plummet and clients will start getting lots of broken connections, just as you describe.
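For reference, the relevant knobs live in httpd.conf under the prefork MPM; a sketch with illustrative values (placeholders, not recommendations for your box):

```apache
# Prefork MPM settings -- the numbers here are examples only.
StartServers        10
MinSpareServers     10
MaxSpareServers     30
MaxClients         256     # cap at roughly (usable RAM / per-process RSS)
MaxRequestsPerChild 10000  # recycle children to contain slow memory leaks
```

MaxRequestsPerChild matters here too: with leaky dynamic content, long-lived children grow over time, which silently raises your effective per-process memory footprint.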

For a rough starting point, divide system RAM by httpd's typical resident memory size. If your load legitimately exceeds this, you'll need to beef up the machine or run multiple webserver boxes behind a load-balancer (IPFW round-robin or similar with PF is a starting point, but something like a NetScaler or a Foundry ServerIron is what the big websites generally use).
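The arithmetic above can be sketched like so; all the numbers are illustrative assumptions, so measure your actual httpd resident sizes with top(1) or ps(1) first:

```python
# Rough MaxClients estimate: divide the RAM available to Apache by the
# typical resident size of one httpd process. Values are hypothetical.

def max_clients(total_ram_mb, reserved_mb, httpd_rss_mb):
    """Return a conservative MaxClients value."""
    usable = total_ram_mb - reserved_mb  # leave room for OS, buffers, other daemons
    return max(1, usable // httpd_rss_mb)

# Example: a 4 GB box, 1 GB reserved for the OS and other daemons,
# httpd children averaging ~6 MB resident each.
print(max_clients(4096, 1024, 6))   # -> 512
```

Err on the low side; it's better to queue a few connections briefly than to push the box into swap.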

--
-Chuck

[1]: There can be other bottlenecks; sometimes poorly written external cgi-bin scripts, or dynamic content coming from mod_perl, mod_php, etc., can demand a lot of CPU or end up blocking on some resource (e.g., DB locking), choking webserver performance before it runs out of RAM. But you can run a site getting several million hits a day on a Sun E250 with only 1GB of RAM and 2 x ~400MHz CPUs. :-)
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
