Re: [Mongrel] Why Rails + mongrel_cluster + load balancing doesn't work for us and the beginning of a solution
On Wed 20.09.2006 11:18, Paul Butcher wrote:
> We have been searching for a Rails deployment architecture which works for us for some time. We've recently moved from Apache 1.3 + FastCGI to Apache 2.2 + mod_proxy_balancer + mongrel_cluster, and it's a significant improvement. But it still exhibits serious performance problems.

Have you ever used haproxy (http://haproxy.1wt.eu/)? It has the following feature, which can help you:

--- http://haproxy.1wt.eu/download/1.2/doc/haproxy-en.txt
3.4) Limiting the number of concurrent sessions on each server
weight, minconn, maxconn
---

This tool can also check the availability of your backends. For SSL you need SSL-termination software such as stunnel or delegate, or whichever software you prefer. On the haproxy site there is a patch for stunnel adding the X-Forwarded-For header, if you need it ;-)

Regards
Alex

___
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
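[A minimal sketch of the haproxy feature Alex points to (section 3.4 of the 1.2 docs): capping each Mongrel at one in-flight request so a slow action cannot queue other requests behind it. The listen address, server names, and ports are invented for illustration.]

```
listen rails 0.0.0.0:8080
        mode http
        balance roundrobin
        # maxconn 1: haproxy holds further requests itself instead of
        # stacking them behind a busy (possibly slow) Mongrel
        server mongrel1 127.0.0.1:9001 maxconn 1 check
        server mongrel2 127.0.0.1:9002 maxconn 1 check
```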
Re: [Mongrel] Why Rails + mongrel_cluster + load balancing doesn't work for us and the beginning of a solution
Hi!

On Wed, Sep 20, 2006 at 11:18:53AM +0100, Paul Butcher wrote:
> We have been searching for a Rails deployment architecture which works for us for some time. We've recently moved from Apache 1.3 + FastCGI to Apache 2.2 + mod_proxy_balancer + mongrel_cluster, and it's a significant improvement. But it still exhibits serious performance problems.

Hey, cool, I really like the simplicity of your approach. However, I tried to solve your problem with Pen, and here's what I got.

Standard pen setup, no special options besides 'no sticky sessions' and 'non-blocking mode':

  pen -fndr 9000 localhost:9001 localhost:9002

  Fast: 13
  Slow: 6

just as predicted by you. However, if I limit *one* of the Mongrels to 1 connection at a time, I get this:

  pen -fdnr 9000 localhost:9001:1 localhost:9002

  Fast: 59
  Slow: 6

When I limit both backend servers to only 1 connection, the test script always bails out, since Pen doesn't keep connections around like you do in your queue, but closes them if it finds no server to dispatch to. So it needs at least one backend process to pile connections up at when the need arises.

I have no idea whether this is a solution that would be useful under real-life conditions, but I found the behaviour quite interesting. I also raised the number of threads requesting the fast action, which led to more successful requests on the fast action and (sometimes) fewer on the slow one, i.e.

  Fast: 67-69
  Slow: 5-6

with 10 threads accessing the fast action.

Whatever load balancing you use, you'll always need more Mongrels than there are concurrent requests for 'slow' actions to avoid delays for clients requesting a 'fast' action. So maybe, if the slow actions are well known, one could reserve one pool of Mongrels for the slow actions and another for the fast ones...

Jens
--
webit!
Gesellschaft für neue Medien mbH www.webit.de
Dipl.-Wirtschaftsingenieur Jens Krämer [EMAIL PROTECTED]
Schnorrstraße 76 Tel +49 351 46766 0
D-01069 Dresden Fax +49 351 46766 66
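[Jens' "two pools" idea could look something like this in the Apache 2.2 + mod_proxy_balancer setup the thread started from. A hypothetical sketch only: the balancer names, ports, and the /reports path are invented for illustration.]

```
# A dedicated pool for known slow actions, so they can never
# delay the fast ones.
<Proxy balancer://slow>
    BalancerMember http://127.0.0.1:9003
</Proxy>
<Proxy balancer://fast>
    BalancerMember http://127.0.0.1:9001
    BalancerMember http://127.0.0.1:9002
</Proxy>

# More specific routes first: slow actions go to their own Mongrels.
ProxyPass /reports balancer://slow/reports
ProxyPass / balancer://fast/
```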
Re: [Mongrel] Why Rails + mongrel_cluster + load balancing doesn't work for us and the beginning of a solution
Zed Shaw <zedshaw at zedshaw.com> wrote:
> But, if your interface is time sensitive then why does it have actions that take too much time? See the conundrum there?

I was kinda expecting that response, Zed, but I didn't want to rob you of the pleasure of saying it ;-)

We are in fact using backgroundrb for a number of long-running actions. It's fantastic, but it isn't ever going to solve the problem for us completely. There are a whole bunch of reasons, which may be specific to our particular application, but I doubt it.

First, the less convincing arguments:

1) We're lazy (in a good way). Our application is used by hundreds of our employees, 24 hours a day. We, correctly, invest a great deal of time making sure that the things that they do over and over again are as efficient as they possibly can be. We don't, however, believe that it's appropriate for us to spend time optimising things which are only used by one or two guys once or twice a day. It's not as if we have a lack of things to spend our time on, and we'd rather spend that time on things that really matter, rather than optimising things which only need to be optimised because the performance characteristics of Rails mean that one poorly performing action results in *all* actions performing poorly.

2) We're not infallible. Sometimes we screw up and release a version of the software where one of our actions takes longer than it should. This is bad if it only affects the action we screwed up, but if that's all it does then it's not a disaster. If one slow action screws up *all* the actions, however, that is a disaster.

3) Even if we spend all of the time we possibly can optimising actions, different actions will always take different amounts of time to execute. It's not as if there's one true execution time for an action. The bad effects become apparent whenever you have any two actions which take noticeably different amounts of time to execute. Yes, in the example that I gave it was 1 second and 10 seconds.
But the same basic effect would be there if we were talking about 1ms and 10ms.

Those arguments would be enough, I believe. But in our case there's one much stronger argument which trumps all the above, IMHO:

4) It's not possible to predict the time that an action, even a heavily optimised one, will take. The vagaries of caching, swapping etc. mean that this is simply out of our hands. In our case, for example, many of our actions involve fulltext searches of the database (we have no choice about this - it's fundamental to the nature of what we do). The performance of a fulltext search is *extremely* unpredictable (in MySQL, anyway). There can be occasions where the first time a particular search is done, it takes 10 seconds; the second time, because the database has got the idea, it can take a few milliseconds.

Backgroundrb cannot solve this problem - if we sent all of these actions to backgroundrb, pretty much *every* action would end up being sent there, and then we'd end up with a very similar load balancing problem - it just becomes a problem of load balancing to backgroundrb instead of to Mongrel.

Make sense?

--
paul.butcher-msgCount++

Snetterton, Castle Combe, Cadwell Park... Who says I have a one track mind?
MSN: [EMAIL PROTECTED]
AIM: paulrabutcher
Skype: paulrabutcher
LinkedIn: https://www.linkedin.com/in/paulbutcher
Re: [Mongrel] Why Rails + mongrel_cluster + load balancing doesn't work for us and the beginning of a solution
I have to chime in and agree with Zed here. We had similar problems at MOG, and came to the conclusion that solving the delegation problem is just curing the symptoms. By systematically offloading a lot of slow requests to background processes, we got more flexibility and the ability to check the progress of such events.

I'd say the general consensus is that any page that needs more than a few seconds to load needs optimization or offloading.

Joshua Sierles
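[The offloading pattern discussed in this thread can be sketched in a few lines of Ruby. This is NOT the backgroundrb API - just a hypothetical illustration of the shape of the idea: the request handler enqueues the slow job and returns immediately, while a single background thread works through the queue, so a slow action never ties up a web process.]

```ruby
require 'thread'

class SlowJobQueue
  def initialize
    @jobs    = Queue.new
    @results = Queue.new
    @worker  = Thread.new do
      while (job = @jobs.pop)      # nil acts as the shutdown sentinel
        @results << job.call       # the slow work runs off the request path
      end
    end
  end

  # Called from a (hypothetical) controller action: constant time,
  # never blocks the web process on the slow work itself.
  def enqueue(&job)
    @jobs << job
    :accepted
  end

  # The client polls (or is notified) for the finished result later.
  def next_result
    @results.pop
  end

  def shutdown
    @jobs << nil
    @worker.join
  end
end
```

[A controller would then do something like `QUEUE.enqueue { run_fulltext_search(params) }` and render a "working..." page straight away, instead of holding a Mongrel for the duration of the search.]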