On Fri, Oct 9, 2009 at 7:11 PM, Eric Wong <normalper...@yhbt.net> wrote:
> Dusty Doris <unic...@dusty.name> wrote:
>> On Fri, Oct 9, 2009 at 6:01 PM, Eric Wong <normalper...@yhbt.net> wrote:
>> > Dusty Doris <unic...@dusty.name> wrote:
>> >> 1.  Simply use mongrels upstream and let it round-robin between all
>> >> the unicorn instances on the different servers?  Or, perhaps use the
>> >> fair-upstream plugin?
>> >>
>> >> nginx -> [unicorns]
>> >
>> > Based on your description of your current setup, this would be the best
>> > way to go.  I would configure a lowish listen() :backlog for the
>> > Unicorns, and fail_timeout=0 in nginx for every server.  This setup means
>> > round-robin by default, but if one machine gets a :backlog overflow,
>> > then nginx will automatically retry on a different backend.
>>
>> Thanks for the recommendation.  I was going to give that a shot first
>> to see how it went, as it would also be the easiest to manage.
>>
>> When you say a lowish backlog, what kind of numbers are you talking
>> about?  Say we had 8 workers running that stayed pretty active.  They
>> are usually quick to respond, with an occasional 2 second response
>> (say 1/100) due to a bad sql query that we need to fix.  Would lowish
>> be 16, 32, 64, 128, 1024?
>
> 1024 is the default in Mongrel and Unicorn, which is very generous.  5 is
> the default backlog Ruby uses when it initializes a socket, so picking
> something in between is recommended.  It really depends on your app and
> comfort level.  You can also tune and refine it over time safely,
> without worrying too much about dropping connections, by configuring
> multiple listeners per instance (see below).
>
> Keep in mind the backlog is rarely an exact setting; it's more of a
> recommendation to the kernel (and the actual value is often higher
> than specified).
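
[To see that hint-like behavior concretely, here is a minimal Ruby sketch;
the address, port 0, and backlog value are just placeholders, not anything
from Eric's setup:]

```ruby
require 'socket'

# Minimal sketch: the backlog passed to listen(2) is a hint to the
# kernel, not a hard limit.  On Linux the effective queue length is
# capped by net.core.somaxconn and may be rounded up internally, so
# the real queue can hold more connections than requested.
server = TCPServer.new('127.0.0.1', 0)  # port 0 = let the OS pick one
server.listen(5)    # Ruby's default backlog; Mongrel/Unicorn use 1024
port = server.addr[1]
server.close
```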
>
>> Oh and thanks for the tip on the fail_timeout.
>
> No problem, I somehow thought it was widely-known by now...
>
>> > You can also try the following, which is similar to what I describe in:
>> >
>> >  http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/31
>> >
>>
>> That's an interesting idea, thanks for sharing it.  I like how the
>> individual server also acts as a load balancer, but only if it's having
>> trouble itself.  Otherwise, it just handles the requests through the
>> socket connection.
>>
>> I appreciate your reply, and especially your work on Unicorn.
>
> You can also try a combination of (1) above and my proposed idea in
> $gmane/31 by configuring two listeners per-Unicorn instance:
>
>   # primary
>   listen 8080, :backlog => 10, :tcp_nopush => true
>
>   # only when all servers overflow the backlog=10 above
>   listen 8081, :backlog => 1024, :tcp_nopush => true
>
> And then putting the 8081s as a backup in nginx like this:
>
>   upstream unicorn_failover {
>     # round-robin between unicorns with small backlogs
>     # as the primary option
>     server 192.168.0.1:8080 fail_timeout=0;
>     server 192.168.0.2:8080 fail_timeout=0;
>     server 192.168.0.3:8080 fail_timeout=0;
>
>     # the "backup" parameter means nginx won't ever try these
>     # unless all of the listeners above fail
>     server 192.168.0.1:8081 fail_timeout=0 backup;
>     server 192.168.0.2:8081 fail_timeout=0 backup;
>     server 192.168.0.3:8081 fail_timeout=0 backup;
>   }
>
> You can monitor the nginx error logs and see how often it fails on the
> low backlog listener, and then increment/decrement the backlog of
> the primary listeners as needed to get better load-balancing.
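
[As a rough sketch of that monitoring step, a small Ruby script could tally
which backend shows up most often in the error log.  The `upstream: "http://host:port/..."`
line format is an assumption based on stock nginx error logs, and the sample
lines below are made up; adjust the regex for your own log format:]

```ruby
# Sketch: count how often each upstream appears in nginx error-log
# lines, to see which backends overflow their small backlog most often.
def count_upstream_failures(lines)
  lines.each_with_object(Hash.new(0)) do |line, counts|
    counts[$1] += 1 if line =~ %r{upstream: "https?://([\d.]+:\d+)}
  end
end

# Hypothetical sample lines standing in for a real error log.
sample = [
  '[error] connect() failed ..., upstream: "http://192.168.0.1:8080/foo"',
  '[error] connect() failed ..., upstream: "http://192.168.0.2:8080/bar"',
  '[error] connect() failed ..., upstream: "http://192.168.0.1:8080/baz"',
]
counts = count_upstream_failures(sample)
counts.sort_by { |_, n| -n }.each { |addr, n| puts "#{addr}: #{n}" }
```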
>
> --
> Eric Wong
>

Awesome!

I am going to give that a shot.
_______________________________________________
mongrel-unicorn mailing list
mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
