I agree with you in principle, absolutely. However, when you have a code-base the size of ours (100Ks of lines of Ruby) with 100s of developers, and new ones coming on every month with no prior Ruby or Rails experience, we can't rely on everyone doing the right thing all the time. With a surface area that large, things will be missed, especially when you run gems like Liquid that allow billions of different ways of composing templates; some of those paths, unfortunately, are going to be slow. We definitely chase down all the worst offenders, and when new ones creep up, we hunt them down as time allows. Using this hook allows us to monitor when that happens, how often it happens, and for which endpoints. With 100Ks of lines of code, 100s of developers, and 10s of thousands of requests per second, a one-in-a-million event happens every couple of seconds. Multiply that by the size of the code-base, and Unicorn timeouts due to the conditions below will happen somewhat often.
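As a rough illustration of that monitoring, a hook can be quite small. This sketch uses Unicorn's `after_worker_exit` (called in the master with the server, the worker, and a `Process::Status`); the `StatsD` reporter and the endpoint-attribution comment are hypothetical, not our exact setup:

```ruby
# unicorn.rb -- hypothetical sketch of timeout monitoring, not our real config.
# The Unicorn master reaps a timed-out worker with SIGKILL, so inside the hook
# we can distinguish a timeout kill from a clean worker exit via the status.
after_worker_exit do |server, worker, status|
  next if status.success?  # normal exit: nothing to record

  if status.signaled? && status.termsig == Signal.list["KILL"]
    # Report the kill. Attributing it to a specific endpoint requires the
    # worker to have written its current request somewhere the master can
    # read (e.g. a per-worker file); that bookkeeping is elided here.
    StatsD.increment("unicorn.worker_killed", tags: ["worker:#{worker.nr}"])
  end
end
```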
It becomes difficult, because sometimes you have legitimate requests that take 10-20s, because the merchant's data set is so large that it exposes anomalies. Again, with the size of our code-base, we need this wiggle room in the global timeout so we don't just error on users. You can have endpoints that do 4 HTTP requests, 5 RPC requests, 4 MySQL queries, and 30 calls to Memcached. In that case, your worst case is the sum of the timeouts of all of those actions, which easily exceeds the Unicorn timeout.

We've debated having "budgets" and "shitlisting" (http://sirupsen.com/shitlists/) paths that obviously take longer than the budget for a single resource. The probability of more than one resource being very slow at once is quite low (and if it happens, again, we rely on the Unicorn timeout). In other words, the Unicorn timeout is not a crutch for timeouts in the application, but a global timeout as a last line of defense against many simultaneous timeouts, or some bug we didn't foresee. This seems unavoidable in my eyes, unless you have very aggressive timeouts and meticulously keep track of the budgets in a testing environment and raise if a budget is exceeded. When we hit many timeouts, we use Semian (http://github.com/shopify/semian) to trigger circuit breakers, so the reliance on Unicorn should be brief. Some of these bugs are even deep in Ruby: Jean B, one of my co-workers, submitted a bug about there being no write_timeout in Net::HTTP (you even replied!): https://bugs.ruby-lang.org/issues/13396

BTW, we deployed 5.3.0 and replaced our `before_murder` hook with `after_worker_exit`. Everything works perfectly, and we're finally not running a forked version of Unicorn anymore. Thanks for the release!
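To put rough numbers on the worst-case budget math above (every per-call timeout here is a made-up but plausible value, not our real configuration):

```ruby
# Worst case for the example endpoint: every external call hits its timeout.
worst_case_seconds =
  4  * 5.0 +  # 4 HTTP requests,   assumed 5s timeout each
  5  * 1.0 +  # 5 RPC requests,    assumed 1s timeout each
  4  * 2.0 +  # 4 MySQL queries,   assumed 2s timeout each
  30 * 0.2    # 30 Memcached gets, assumed 200ms timeout each

worst_case_seconds # => 39.0, past a typical 30s Unicorn timeout
```

Each individual budget looks reasonable on its own; it's only the sum under pathological conditions that blows past the global timeout, which is why the Unicorn timeout stays as the backstop rather than the primary defense.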
