We've also found it beneficial to partition the mongrel cluster into separate functional groups. For example, we run three sets of mongrels:
The first group handles dynamic pages generated by the app and is as fast as the db can find and serve the information. The second generates images from thumbnails we store on disk; these have a higher CPU cost and are also constrained by the time taken to send_file the resulting data back to the client (don't forget this has to happen before the Rails lock is released). The third group handles image uploads from our members and can take a very long time to process images if they are large or poorly formed.

This separation allowed us to channel requests with similar resource usage into different pools. Using a smarter web front end like nginx also lets requests queue at the web server rather than at the mongrel, which means that when the upload request for the 100 MB TIFF blows up, it doesn't take the 20 other requests sitting on the same mongrel's queue down with it.

We found Apache's mod_proxy_balancer useless at distributing requests properly even after we had segregated the mongrels. My advice is to switch to nginx, or to some smart hardware if you can afford it.

Cheers

Dave

On 27/10/2007, at 10:46 AM, Luis Lavena wrote:

> On 10/26/07, Andrew Arrow <[EMAIL PROTECTED]> wrote:
>> We have a load balancer sending requests to one of X boxes and one of
>> N mongrel processes on that box.
>>
>> Since each mongrel process is multi-threaded but has a mutex
>> around the section that calls Rails, we end up with several requests
>> queued up waiting when they could have gone to another box with a
>> free process.
>>
>> For example, boxA and boxB.
>>
>> boxA has mongrels 1 through 10
>> boxB has mongrels 11 through 20
>>
>> Load balancer sends a request to boxA mongrel 5.
>> Load balancer sends a request to boxB mongrel 12.
>> Load balancer sends a request to boxA mongrel 5 again.
>> It has to wait for the 1st request still running on boxA mongrel 5.
>>
>> How can we help the load balancer know it should have sent the
>> request to any number of other free mongrels vs. queuing up threads
>> that have to wait?
>>
>
> That's down to the logic of your balancer.
>
> If it's a hardware one, check whether there are algorithms you can
> tweak.
>
> For a software-based one, check the configuration file for a
> param about spreading / distributing the requests.
>
> It could also help to mix the box A and box B members in the
> cluster list:
>
> A: 192.168.0.15
> B: 192.168.0.16
>
> A-mongrel1
> B-mongrel11
> A-mongrel2
> B-mongrel12
>
> etc...
>
> --
> Luis Lavena
> Multimedia systems
> -
> Leaders are made, they are not born. They are made by hard effort,
> which is the price which all of us must pay to achieve any goal that
> is worthwhile.
> Vince Lombardi
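For what it's worth, here is a rough sketch of the kind of nginx split described above; it also interleaves members from both boxes along the lines Luis suggests. The upstream names, ports and URL paths are invented for illustration (only the 192.168.0.15/16 addresses come from Luis's example), so adjust them to match your own mongrel layout:

  # Pools for the three functional groups. Interleaving boxes A and B
  # keeps one busy box from stacking up a whole pool.
  upstream mongrel_dynamic {
      server 192.168.0.15:8000;
      server 192.168.0.16:8000;
      server 192.168.0.15:8001;
      server 192.168.0.16:8001;
  }

  upstream mongrel_thumbs {
      server 192.168.0.15:8010;
      server 192.168.0.16:8010;
  }

  upstream mongrel_uploads {
      server 192.168.0.15:8020;
      server 192.168.0.16:8020;
  }

  server {
      listen 80;

      # Slow uploads get their own pool, so the 100 MB TIFF only ties up
      # the upload mongrels.
      location /uploads {
          proxy_pass http://mongrel_uploads;
      }

      # Thumbnail / image serving, which is heavier on CPU and send_file.
      location /images {
          proxy_pass http://mongrel_thumbs;
      }

      # Everything else hits the dynamic-page pool; nginx buffers the
      # request at the web server instead of on a busy mongrel's queue.
      location / {
          proxy_pass http://mongrel_dynamic;
      }
  }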