Awesome. That's even better than I'd hoped! :-)

 - A

On Aug 17, 12:00 pm, Adam Wiggins <[email protected]> wrote:
> On Sun, Aug 16, 2009 at 10:39 AM, Alex Chaffee <[email protected]> wrote:
> > During the few seconds the restart is happening, what happens to
> > incoming HTTP requests? Are they queued or do they fail? If the
> > latter, where's the error page and can we change it?
>
> > And what's the timing like for applying the change to multiple dynos?
> > Is it possible that one dyno is still running the old code while
> > another is running the new code?
>
> Requests will always be going to one and only one revision of the code
> at a time.  The sequence is like this:
>
> 1. You git push, which compiles a slug.
>
> 2. If successful, the slug is distributed to our dyno grid and dynos
> are started for the new slug.  But no traffic is being sent to them
> yet - the routing mesh is routing all traffic to the dynos running the
> old slug.
>
> 3. Once all the new dynos are up and ready to receive connections, the
> routing mesh updates its routing table.  Existing requests will
> complete on the old dynos, any new requests will be sent to the new
> ones.
>
> In summary, the switch is instantaneous and seamless, with no period
> of error pages or requests being routed to the wrong dyno.
>
> You can test this yourself by running a simple load test during a
> deploy.  Something like:
>
> $ ab -c 1 -n 50 http://myapp.heroku.com/ | egrep '^(Complete|Failed)' &
> $ git push heroku
>
> When I ran this I got the following output (after all the git push /
> heroku slug compile messages):
>
> Complete requests:      50
> Failed requests:        0
>
> Adam
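The switchover Adam describes in steps 1-3 can be sketched conceptually. This is
not Heroku's actual routing-mesh code, just an illustrative model: the mesh holds
a list of backend dynos and swaps the whole list atomically, so every request is
routed against exactly one release at a time.

```python
# Conceptual sketch only -- RoutingMesh, route(), and switch() are
# hypothetical names, not a real Heroku API.
import threading

class RoutingMesh:
    def __init__(self, dynos):
        self._lock = threading.Lock()
        self._dynos = list(dynos)  # dynos running the current slug
        self._next = 0             # round-robin cursor

    def route(self):
        # Every request picks a dyno from whichever list is current.
        with self._lock:
            dyno = self._dynos[self._next % len(self._dynos)]
            self._next += 1
            return dyno

    def switch(self, new_dynos):
        # Called only once all new dynos are up and accepting
        # connections; the swap is a single atomic assignment, so
        # no request ever sees a mix of old and new dynos.
        with self._lock:
            self._dynos = list(new_dynos)
            self._next = 0

mesh = RoutingMesh(["old-1", "old-2"])
print(mesh.route())              # served by an old dyno
mesh.switch(["new-1", "new-2"])
print(mesh.route())              # all later requests hit new dynos
```

In-flight requests finish on the dyno they were already routed to; only the
choice made for each *new* request changes, which is why the `ab` run below
shows zero failed requests.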
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Heroku" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/heroku?hl=en
-~----------~----~----~----~------~----~------~--~---