Hello,

So, I just got through a very stressful couple of hours. I've been running a rails application for a few months on a dedicated server. The application isn't really that intensive and has been serving roughly 50k requests/day. It's been handling it really well, really fast, and all is good. The deployment configuration I'm using is apache 2.2.3 with mod_proxy_balancer proxying to 3 mongrel processes. It's been really good to me so far, handling the load like a champ.
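In case the details matter, the proxying itself is the usual mod_proxy_balancer arrangement, something along these lines (the balancer name and ports 8000-8002 are made up here for illustration, not copied from my actual config):

  # pool of the three mongrel backends (example ports)
  <Proxy balancer://mongrel_cluster>
    BalancerMember http://127.0.0.1:8000
    BalancerMember http://127.0.0.1:8001
    BalancerMember http://127.0.0.1:8002
  </Proxy>

  # hand requests through to the mongrels
  ProxyPass / balancer://mongrel_cluster/
  ProxyPassReverse / balancer://mongrel_cluster/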
Now, 2 hours ago, everything is seemingly normal when all of a sudden the site grinds to a halt. Yet it doesn't seem like I'm getting a surge of traffic. I check the server and it is 90% idle. I check my mongrel processes and they are still dishing out the site like champs. The culprit is apache! It takes 20-40 seconds to serve a small static html file. Something is wrong.

This is where me being a server admin noob sucks. After way too long I find out that apache is maxing out its connections. Apache is configured to handle 256 max connections with a 300 second timeout, and keep-alive is off. This has been working for months and for no apparent reason doesn't seem to work anymore. Long story short, I reduced the timeout setting to 15 seconds and it seems fine now; it maxes out at roughly 90 connections.

I pretty much only use apache as a front for mongrel, so I ask here: were my original settings completely dumb, or is something else wrong? Maybe something I can look at in how I configured apache to proxy to mongrel? Why are so many connections staying open? I'm no amazing system administrator, so I'm hoping it's something dumb I did wrong and that the 300 second timeout was just a mistake on my part. Any insight?

-carl

--
EPA Rating: 3000 Lines of Code / Gallon (of coffee)
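P.S. For anyone who wants to eyeball the numbers, the directives I'm talking about are just the stock httpd.conf ones. Roughly what I'm running now (reconstructed from memory, so treat it as a sketch rather than a paste of my config):

  # was 300 when things fell over; dropped to 15 and the connection count recovered
  Timeout 15
  # keep-alive was already off before and after the change
  KeepAlive Off
  # assuming the prefork MPM here; this is the 256-connection ceiling apache was hitting
  MaxClients 256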