Jeremy Hinegardner <[EMAIL PROTECTED]> wrote:
> On Mon, May 05, 2008 at 11:37:56PM -0400, Brian Weaver wrote:
> > Unfortunately a long running request could cause our entire web
> > application to "hang". As these requests are being processed, one or
> > more mongrel instances continue to accept and enqueue new connections
> > they can't immediately service.
Heh, this is the exact problem I designed qrp around, too.

> Another solution:
>
> You could stick HAproxy in front of your mongrel cluster with a
> configuration somewhat like:
>
>   listen application *:80
>     balance roundrobin
>     server mongrel0 127.0.0.1:5000 minconn 1 maxconn 1 check
>     server mongrel1 127.0.0.1:5001 minconn 1 maxconn 1 check
>     server mongrel2 127.0.0.1:5002 minconn 1 maxconn 1 check
>     server mongrel3 127.0.0.1:5003 minconn 1 maxconn 1 check
>     server mongrel4 127.0.0.1:5004 minconn 1 maxconn 1 check
>
> The 'minconn 1 maxconn 1' forces haproxy to queue the requests within
> itself instead of in mongrel, and the 'check' takes a mongrel out of
> rotation if it goes down and puts it back in as soon as it comes back
> up. As soon as one mongrel finishes a request, haproxy hands it one of
> the requests it has queued.
>
> This is a simple version of what I'm doing to load balance clusters of
> mongrels these days.

I initially tried doing this with haproxy before I wrote qrp, but it
made monitoring and testing against an individual backend Mongrel
process far more difficult, since I need Mongrel to accept connections
that haproxy didn't forward; so I disabled concurrency on the Mongrel
side instead.

-- 
Eric Wong
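
For illustration, here is a minimal sketch of one way to "disable
concurrency on the Mongrel side": wrap the handler's work in a Mutex so
the process still accepts connections but only runs one request at a
time. This is a generic example using Mongrel's plain HttpHandler API,
not necessarily what qrp or the setup described above does; the
SerializedHandler name and the trivial response body are made up for
the sketch.

  require 'rubygems'
  require 'mongrel'
  require 'thread'

  # The server keeps accepting connections (useful for monitoring or
  # testing one backend directly), but application work is serialized
  # by the Mutex, so the real request queueing can stay in haproxy
  # with 'maxconn 1' as in the config quoted above.
  class SerializedHandler < Mongrel::HttpHandler
    LOCK = Mutex.new

    def process(request, response)
      LOCK.synchronize do
        response.start(200) do |head, out|
          head["Content-Type"] = "text/plain"
          out.write("handled one request at a time\n")
        end
      end
    end
  end

  server = Mongrel::HttpServer.new("127.0.0.1", "5000")
  server.register("/", SerializedHandler.new)
  server.run.join

With haproxy's 'maxconn 1' in front, each Mongrel normally sees only one
forwarded request at a time anyway; the lock just guarantees that any
connections hitting the backend directly are served in turn as well.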