I'm stumped, so I'm throwing this out there in hopes that someone has experienced something similar.
I have a fairly large app (it uses Spree models to serve up about 1,200 products, but with custom views). The app runs like a champ locally, up and responsive within a second or two since upgrading to Rails 3.2. Switching to production mode on my local box makes it even faster; I never have to wait.

On Heroku, however, it's a different story. I deploy, and the dynos start up as expected, but then I get good ol' Error H12 (Request timeout) for about 5 to 10 minutes. I have 2 dynos spinning up; one becomes responsive at about the 5-minute mark, and the other continues to time out. After 10 minutes, everything seems fine. This happens every time a dyno restarts, too: if a dyno chokes and Heroku is kind enough to restart it for me, the site times out on all requests hitting that dyno for 5 to 10 minutes.

I have hooked up New Relic to see if I can spot anything weird going on, but whatever request comes in just hangs. There is no SQL being executed, and there are no infinite loops. Just the stinkin' timeout.

The only code common to all pages builds my navigation from my Spree::Taxon hierarchy, so maybe my next step is to make that static, since it's not likely to change any time soon. Other than that, I'm stumped.

If it matters, I'm using Thin to serve up the pages, and I saw the same behavior with WEBrick. I'm considering trying Unicorn, but it doesn't feel like the server is the problem.

Any ideas? Have you seen anything similar? Any tips for finding where things are actually hanging?

Thanks in advance.

Joe
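For what it's worth, the "make it static" idea could also be a per-process memo, so the taxon tree is built once instead of on every request. Here's a minimal sketch in plain Ruby; NavCache and the stubbed builder lambda are hypothetical names I made up for illustration — in the real app the builder would walk the Spree::Taxon hierarchy, and Rails fragment caching would be another way to get the same effect.

```ruby
# Sketch: memoize an expensive navigation-tree build so it runs once per
# process instead of once per request. The builder lambda stands in for
# whatever currently walks the Spree::Taxon hierarchy.
module NavCache
  def self.tree(builder)
    @tree ||= builder.call # builder runs only on the first call
  end
end

calls = 0
builder = -> { calls += 1; { "Clothing" => ["Shirts", "Pants"] } }

NavCache.tree(builder) # builds the tree
NavCache.tree(builder) # returns the cached copy; builder is not called again
puts calls             # => 1
```

The obvious trade-off is staleness: the tree lives until the dyno restarts, which is fine here since the taxonomy rarely changes.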
