Steve Midgley <[EMAIL PROTECTED]> wrote:
> Hi Eric,
> 
> This is very interesting - thanks for your notification on qrp. I have 
> a question for you. I believe there is an nginx module called "fair 
> proxy" which is supposed to be intelligent about queuing requests so 
> that only "free mongrels" (i.e. mongrels without an active Rails task 
> running) receive requests.
> 
> Ezra blogged it here:
> 
> http://brainspl.at/articles/2007/11/09/a-fair-proxy-balancer-for-nginx-and-mongrel

Hi Steve,

I noted it in the README and original announcement:

> > Other existing solutions (and why I chose qrp):
> > 
> >   Fair-proxy balancer patch for nginx - this can keep new connections
> >   away from busy Mongrels, but if all (concurrency-disabled) Mongrels in
> >   your pool get busy, then you'll still get 502 errors.

> I wonder how what you're working on differs and/or improves on this 
> system (obviously the architecture is different but I'm wondering about 
> performance/effect)?
> 
> Would there be a reason to run both? Would your tool be preferred in 
> some circumstances to the nginx fair proxy balancer? If so, what kind 
> of circumstances? Or do they basically solve the same problem at 
> different points in the stack?

I believe my solution is better for the worst-case scenario when all
Mongrels are busy servicing requests and another new request comes in.

The fair proxy balancer would give 502s if I disabled concurrency in the
Mongrels.

Leaving concurrency enabled in the Mongrels while Rails is single-threaded
isn't a comfortable solution either, mainly because new requests can still
end up queued behind whatever request is currently being processed.


Imagine this scenario with the fair proxy balancer + Mongrel concurrency:

  10 mongrels, and 10 concurrent requests => all good, requests would be
  evenly distributed

  If there are 20 concurrent requests on 10 Mongrels, then each Mongrel
  would be processing two requests.  If *one* of the first ten
  requests is a slow one, then the second request in that Mongrel could
  be stuck waiting even after the other 18 requests are happily
  finished.

This is why I'm not comfortable having concurrency in Mongrel + Rails.
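To make the head-of-line blocking concrete, here's a toy Ruby model of
that scenario.  The worker count, timings, and round-robin assignment
are all illustrative assumptions on my part, not measurements of Mongrel
or the fair balancer:

```ruby
# Toy model: 10 single-threaded workers, 20 requests assigned
# round-robin two-deep (as with balancer + Mongrel concurrency).
# Request 0 takes 10s; every other request takes 0.2s.  The second
# request on a worker must wait for the first to finish.

WORKERS = 10
times = Array.new(20) { 0.2 }
times[0] = 10.0  # the one slow request

finish = {}
WORKERS.times do |w|
  first, second = times[w], times[w + WORKERS]
  finish[w]           = first            # first request done at t = first
  finish[w + WORKERS] = first + second   # second waits behind the first
end

puts finish[10]  # the request stuck behind the slow one: ~10.2s
puts finish[11]  # a lucky second request on another worker: ~0.4s
```

So 18 of the 20 requests finish in well under half a second, while one
0.2s request takes over ten seconds purely because of where it was queued.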


I haven't benchmarked the two, but if I had to bet money on it, I'd say
nginx fair balancer + Mongrel concurrency would be slightly faster in the
average case than my solution.  It's just the odd pathological case I'm
worried about.

So shaving the average processing time of a request from 200ms to 150ms
doesn't mean a whole lot to me; what bothers me is somebody stuck behind
a 10s request when all they want is a 200ms response.


Running both could work, but I'd prefer to keep nginx as stupidly simple
as possible.  I considered adding retry/queueing logic to nginx itself,
but it would be a more complex addition to the event loop.  I actually
discovered how the "backup" directive worked while reading the nginx
source and investigating adding queueing to it.

Mine relies on nginx 0.6.7 or later for the "backup" directive.  Using
nginx 0.6.x may be too scary for some people, but the site I wrote qrp
for has been using nginx 0.6.x since before I was hired to work on it,
and it works pretty well.
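For reference, the nginx side of this looks roughly like the upstream
block below.  This is a hypothetical sketch from me, not copied from the
qrp README; the addresses, ports, and upstream name are invented for
illustration:

```nginx
# nginx >= 0.6.7 required for the "backup" parameter.
upstream mongrels {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    # a "backup" server is only tried once the normal servers fail,
    # which is where a queueing proxy like qrp could sit:
    server 127.0.0.1:8500 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://mongrels;
    }
}
```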

> Thanks for any additional detail on your very interesting project!

You're welcome :>

-- 
Eric Wong
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users