That was a good explanation and would make a good addition to the
docs. So what is the difference between a dyno and a compute unit?

Carl

On Mon, May 25, 2009 at 3:58 PM, Adam Wiggins <[email protected]> wrote:
>
> This doc is in the works right now, thanks for giving us a friendly
> nudge on getting it finished. :)
>
> In the meantime, here's a rough cut of the explanation:
>
> The backlog is the number of requests waiting to be processed.  If
> you're using the default (one dyno), it can only process one request
> at a time.  This is more than enough for a single user or even a few
> users.  For example, if your app takes 100ms to serve a request, the
> single dyno can serve 10 requests per second.  (This doesn't count
> static assets or cached pages, which are served straight from Varnish
> and never touch your dyno.)
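
The arithmetic above can be sketched in a couple of lines (a minimal illustration using the 100ms figure from the example, not anything from Heroku itself):

```python
# One dyno processes requests serially, so its throughput is just
# 1000 ms per second divided by the time each request takes.
latency_ms = 100                   # average time to serve one request
throughput = 1000 / latency_ms     # requests per second for one dyno
print(throughput)                  # 10.0 requests/sec
```
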
>
> However, imagine the case where you're getting 15 requests per second.
>  Your single dyno can serve 10 per second (1000ms in a second / 100ms
> per request = 10 reqs/sec).  So after one second you'll have served 10
> requests but have 5 in the backlog waiting to be processed.  This
> would be fine if no more requests came in, as those 5 requests would
> be served in the next 0.5 seconds.  However, if you have another 15
> requests come in, at the end of the 2nd second you'll have 5 + 15 - 10
> = 10 requests in your backlog.  After 3 seconds, the backlog will be
> 15 deep, and so on.
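
The second-by-second walkthrough can be simulated directly (a rough sketch assuming a steady 15 requests/sec arriving and the 10 requests/sec service rate from the example):

```python
# Simulate the backlog described above: 15 requests arrive each second,
# but a single dyno can only serve 10 of them per second.
arrival_rate = 15   # requests arriving per second
service_rate = 10   # requests one dyno serves per second (1000ms / 100ms)

backlog = 0
for second in range(1, 4):
    backlog = max(0, backlog + arrival_rate - service_rate)
    print(f"after second {second}: backlog = {backlog}")
# after second 1: backlog = 5
# after second 2: backlog = 10
# after second 3: backlog = 15
```

As long as the arrival rate exceeds the service rate, the backlog grows by the difference every second, which is why the routing mesh eventually gives up on it.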
>
> If that volume keeps up, the backlog will just keep getting deeper.
> Eventually, the Heroku routing mesh will detect that you're hopelessly
> behind, and start turning away some of the requests.
>
> You have three options:
>
> 1. Increase the number of dynos you have.  You can calculate how many
> you need by taking your average time spent serving a request (100ms in
> the example above) and multiplying it by how many requests you wish to
> serve per second.  15 requests per second at 100ms per request would
> require 2 dynos.  30 requests per second would require 3 dynos,
> although you might want to go with 4 or even 5 to give yourself a
> little headroom, since traffic often comes in spikes.
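
The dyno calculation in option 1 boils down to one formula (a hypothetical helper, not a Heroku API; the function name is mine):

```python
import math

# Dynos needed = target requests/sec * average seconds per request,
# rounded up.  E.g. 15 req/s at 100 ms each -> 1.5 -> 2 dynos.
def dynos_needed(target_rps: float, latency_ms: float) -> int:
    return math.ceil(target_rps * latency_ms / 1000)

print(dynos_needed(15, 100))   # 2
print(dynos_needed(30, 100))   # 3
```

Add one or two on top of that result if you want headroom for spikes.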
>
> 2. Speed up your response time.  A performance analysis tool like New
> Relic is invaluable here.  If you're spending more than 100ms in your
> dyno processing the request, you probably have room for optimization.
> This is a big topic, but some options here include use of memcached,
> HTTP caching, optimizing your database queries, or using a faster web
> layer like Rails Metal.
>
> 3. If your traffic is low-value, such that you don't feel it's worth
> paying for more dynos, then perhaps you are comfortable with turning
> users away during traffic spikes.  In that case, take no action, and
> be aware that some users will get this error message.
>
> Adam
>
