Ryan, thanks for explaining this.  I appreciate it.

I do have one other question:

> Thin can accept many connections concurrently but processes (i.e.,
> reads the request, dispatches to app, writes response) only one of
> them at a time.

Wow, I had thought that Thin could handle different stages of these
steps for different connections at the same time.  So, it's really
linear then?  One full connection is handled at a time, like this:

[ read | dispatch | response ]
[ read | dispatch | response ]
...
[ read | dispatch | response ]
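If that's the case, my mental model is roughly the sketch below.  This
is hypothetical code, not Thin's actual implementation -- `handle`, the
lambda app, and the StringIO stand-ins for connections are all made up
to illustrate the strictly serial ordering:

```ruby
require 'stringio'

# Hypothetical sketch of strictly serial handling: each connection is
# fully read, dispatched, and answered before the next one begins.
def handle(conn, app)
  request_line = conn.gets                          # read
  status, body = app.call(request_line)             # dispatch to app
  conn.write("HTTP/1.1 #{status}\r\n\r\n#{body}")   # write response
end

app = ->(req) { [200, 'hello'] }

# Two queued "connections" (StringIO stand-ins); the second one is not
# touched until the first is completely finished.
conns = [StringIO.new("GET / HTTP/1.1\r\n".dup),
         StringIO.new("GET / HTTP/1.1\r\n".dup)]
conns.each { |c| handle(c, app) }
```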

I'm just trying to do some back-of-the-envelope modeling for my
service on Heroku.
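For example, here's the kind of ceiling I'm trying to estimate (the
dyno count and per-request time below are invented, not measurements
of anything):

```ruby
# Back-of-the-envelope throughput ceiling, assuming 1 dyno serves
# exactly 1 request at a time and requests queue at the balancer.
dynos            = 4
avg_request_secs = 0.25   # mean time the app spends per request

max_rps = dynos / avg_request_secs
puts max_rps   # requests/sec before the balancer backlog starts growing
```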

Thanks!


On Jul 26, 3:45 pm, Ryan Tomayko <[email protected]> wrote:
> On Sun, Jul 26, 2009 at 12:16 PM, Brian Hammond <[email protected]> wrote:
>
> > On Jul 25, 5:55 pm, Ryan Tomayko <[email protected]> wrote:
> >> > As I understand it, each "dyno" handles one connection at a time.  So,
> >> > N dynos means N connections at a time.
>
> >> Kind of. The request path through heroku looks something like this:
>
> >> Client -> Nginx -> Varnish -> Balancer -> Dyno
>
> > Right.  I should've said "request" instead of "connection".
>
> >> A dyno can process a single request at a time (1 dyno = 1 single
> >> threaded Thin process). Multiple requests from a single client -- even
> >> over a single keep-alive connection -- may be routed to different
> >> dynos. Requests backlog at the balancer and are not sent to a dyno
> >> until one is free.
>
> > Isn't the point of Thin to scale with many thousands of concurrent
> > connections (via EventMachine via epoll on Debian)?
>
> Thin can handle many thousands of connections but processes a single
> request at a time (unless you run with the --threaded option,
> which is still considered experimental and only allows for 20
> concurrent requests using threads). You do not get all of the
> advantages you would expect from an async web server. The issue is due
> partially to how Thin dispatches requests but also to how Rack works.
> The Rack spec is entirely synchronous and blocking, so it's at odds
> with async programming styles.
>
> All that being said, there are some async extensions to Rack and Thin
> that make true async possible (e.g.,
> http://github.com/raggi/async_sinatra). These break the Rack spec but
> allow a large number of requests to be active at any given time
> without the use of threads or fibers. I haven't tested these on heroku
> but my guess is that they would have little benefit since only a
> single request is routed to a backend at a time.
>
> >  Are you saying that Thin is configured to only allow *one* connection
> > at a time?
>
> Thin can accept many connections concurrently but processes (i.e.,
> reads the request, dispatches to app, writes response) only one of
> them at a time. There's been some discussion recently around using
> fibers to overcome this limitation on the Thin mailing list. See:
>
> http://groups.google.com/group/thin-ruby/browse_thread/thread/8649153...
> http://groups.google.com/group/thin-ruby/browse_thread/thread/adca85d...
> http://groups.google.com/group/thin-ruby/browse_thread/thread/194c132...
>
> >> That's right. Keep-alive can still reduce the number of connections
> >> each client needs to establish but long lived / streaming responses
> >> aren't possible with Heroku's architecture.
>
> > OK.  Thanks for the verification.
>
> Thanks,
> Ryan
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Heroku" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/heroku?hl=en
-~----------~----~----~----~------~----~------~--~---
