> This worked perfectly for 2 years. Looks like after we upgraded to unicorn 
> 4.4, the normal requests started to get stuck in the queue. That happens 
> randomly, several times per day. When that happens, requests wait for up to 7 
> seconds to be served. At that time most or all of the workers are available 
> and not doing anything. Unicorn restart fixes the problem.

Monitoring the queue is important, and that's exactly what raindrops
is for. I have a small, handy Munin script to keep track of it:
https://github.com/troex/raindrops-watcher-munin
Live example with 8 workers:
http://mon.melkov.net/PeterHost-RU/spb1.melkov.net/raindrops_watcher.html
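
For reference, here's a minimal sketch of the kind of check such a
plugin performs, assuming a Linux box and a Unicorn listener on
0.0.0.0:8080 (adjust the address to whatever your "listen" line says):

  require 'raindrops'

  # Linux-only: raindrops reads the active/queued counts for a
  # listener straight from the kernel.
  addr  = '0.0.0.0:8080'
  stats = Raindrops::Linux.tcp_listener_stats([addr])[addr]

  # Munin-style output: one .value line per field
  puts "active.value #{stats.active}"   # requests being handled now
  puts "queued.value #{stats.queued}"   # requests waiting in the backlog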

If your Unicorn queue grows, something is probably blocking the app.
In my example you can see a small spike on the daily graph at the
beginning of each hour - that's a rake task hammering the DB. The app
becomes slower and the queue grows, so clients can end up waiting
3-4 seconds even though the Rails log only shows average response
times that are 50-100ms higher.
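
If you want to catch spikes like that as they happen, a quick sketch
(same assumed 0.0.0.0:8080 listener as above) is to poll the queued
count and print timestamped lines you can grep against your cron/rake
schedule:

  require 'time'
  require 'raindrops'

  addr = '0.0.0.0:8080'
  loop do
    stats = Raindrops::Linux.tcp_listener_stats([addr])[addr]
    # one timestamped line per sample, easy to correlate with cron
    puts "#{Time.now.utc.iso8601} queued=#{stats.queued} active=#{stats.active}"
    sleep 5
  end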


From my experience I have never had problems between nginx and
unicorn; if it gets stuck, then 99% of the time it's some blocker in
the app.

As Eric said, the harder-to-trace issues are when a worker dies and
you don't get any logging from the Rails app. For that case, take a
look at a previous post about that problem, which helped me track
down the problem in my app:
http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/1269


Also, request-log-analytics and new_relic are a good way to start
tracing blockers and slowdowns in the app. And raindrops is great for
monitoring the queue before requests get processed by the app.