On Fri, Oct 9, 2009 at 6:01 PM, Eric Wong <normalper...@yhbt.net> wrote:
> Dusty Doris <unic...@dusty.name> wrote:
>> Thanks for this post Chris, it was very informative and has answered a
>> few questions that I've had in my head over the last couple of days.
>> I've been testing unicorn with a few apps for a couple days and
>> actually already moved one over to it.
>>
>> I have a question for list.
>
> First off, please don't top post, thanks :)

Sorry about that.

>
>> We are currently set up with a load balancer that runs nginx and
>> haproxy.  Nginx simply proxies to haproxy, which then balances
>> requests across multiple mongrel or thin instances that span several
>> servers.  We simply include the public directory on our load balancer
>> so nginx can serve static files right there.  We don't have nginx
>> running on the app servers; they just run mongrel or thin.
>>
>> So, my question.  How would you do a Unicorn deployment when you have
>> multiple app servers?
>
> For me, it depends on the number of static files you serve with nginx
> and also the amount of traffic you handle.
>
> Can I assume you're running Linux 2.6 (with epoll + awesome VFS layer)?
>
> May I also assume your load balancer box is not very stressed right now?
>

Yep.  We serve our CSS, JavaScript, and some images from the load
balancer.  But the majority of our images, and all the dynamically
created ones, are served from dedicated image servers that each run
their own nginx instance.

>> 1.  Simply use mongrel's upstream and let it round-robin between all
>> the unicorn instances on the different servers?  Or, perhaps use the
>> fair-upstream plugin?
>>
>> nginx -> [unicorns]
>
> Based on your description of your current setup, this would be the best
> way to go.  I would configure a lowish listen() :backlog for the
> Unicorns and fail_timeout=0 in nginx for every server.  This setup
> means round-robin by default, but if one machine gets a :backlog
> overflow, then nginx will automatically retry on a different backend.
>

Thanks for the recommendation.  I was going to give that a shot first
to see how it went, as it would also be the easiest to manage.

When you say a lowish backlog, what kind of numbers are you talking
about?  Say we had 8 workers running that stayed pretty active.  They
are usually quick to respond, with an occasional 2-second response
(say 1 in 100) due to a bad SQL query that we need to fix.  Would
lowish be 16, 32, 64, 128, or 1024?

Oh, and thanks for the tip on fail_timeout.
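
For concreteness, here's roughly what I'm picturing on the Unicorn
side.  Just a sketch; the worker count and backlog number are
placeholders until I know what "lowish" should be:

  # config/unicorn.rb (sketch)
  worker_processes 8

  # Unicorn's default :backlog is 1024; a lower value makes an
  # overloaded worker pool overflow quickly, so nginx's fail_timeout=0
  # can retry the request on another backend instead of queueing here.
  listen "0.0.0.0:8080", :backlog => 64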

>> 2.  Keep haproxy in the middle?
>>
>> nginx -> haproxy -> [unicorns]
>
> This is probably not necessary, but it can't hurt a whole lot either.
>
> Also an option for balancing.  If you're uncomfortable with the first
> approach, you can also configure haproxy as a backup server:
>
>  upstream unicorn_failover {
>    # round-robin between unicorn app servers on the LAN:
>    server 192.168.0.1:8080 fail_timeout=0;
>    server 192.168.0.2:8080 fail_timeout=0;
>    server 192.168.0.3:8080 fail_timeout=0;
>
>    # haproxy, configured the same way as you do now
>    # the "backup" parameter means nginx won't hit haproxy unless
>    # all the direct unicorn connections have backlog overflows
>    # or other issues
>    server 127.0.0.1:8080 fail_timeout=0 backup; # haproxy backup
>  }
>
> So your traffic flow would look like the first option in the common
> case, but you get a slightly better-balanced queueing solution in
> case you're completely overloaded.
>
>> 3.  Stick haproxy in front and have it balance between the app servers
>> that run their own nginx?
>>
>> haproxy -> [nginxs] -> unicorn # could use socket instead of tcp in this case
>
> This is probably only necessary if:
>
>  1) you have a lot of static files that don't all fit in the VFS caches
>
>  2) you handle a lot of large uploads/responses and nginx buffering will
>     thrash one box
>
> I know some sites that run this (or a similar) config, but it's mainly
> because it's what they've had for 5-10 years and they don't have the
> time/resources to test new setups.
>
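
If we did try that route, I assume the Unicorn side just means
listening on a unix socket instead of a TCP port, something like the
sketch below (the socket path is a placeholder, and the local nginx
upstream would point at that same path):

  # config/unicorn.rb (sketch for the nginx-per-app-server layout)
  listen "/tmp/unicorn.sock", :backlog => 64
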
>> I would love to hear any opinions.
>
> You can also try the following, which is similar to what I describe in:
>
>  http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/31
>

That's an interesting idea, thanks for sharing it.  I like how the
individual server also acts as a load balancer, but only when it's
having trouble itself.  Otherwise, it just handles requests through
the socket connection.
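
If I'm reading that right, each Unicorn would just combine the two
kinds of listeners: a unix socket like the sketch above for its own
nginx, plus a TCP port that nginx on the other boxes can fall back
to.  Another sketch, with placeholder addresses (Unicorn happily
accepts multiple listen lines):

  # config/unicorn.rb (sketch)
  listen "/tmp/unicorn.sock", :backlog => 64  # local nginx, common case
  listen "0.0.0.0:8080", :backlog => 64       # failover from other boxes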

> Pretty much all the above setups are valid.  The important part is that
> nginx must sit *somewhere* in between Unicorn and the rest of the world.
>
> --
> Eric Wong
>

I appreciate your reply, and thanks especially for Unicorn.

Thanks!
_______________________________________________
mongrel-unicorn mailing list
mongrel-unicorn@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
