Alexander Dymo <[email protected]> wrote:
> In short:
> - we have two groups of workers:
>   - one serving long-running requests that take more than 10 sec,
>     listening to a '/tmp/long_requests_unicorn.sock' socket
>   - another serving normal requests, listening to '/tmp/unicorn.sock'
>     socket
> - nginx determines which request goes to which socket.
>
> This worked perfectly for 2 years. It looks like after we upgraded to
> unicorn 4.4, the normal requests started to get stuck in the queue.
> That happens randomly, several times per day. When it happens,
> requests wait for up to 7 seconds to be served, even though at that
> time most or all of the workers are available and not doing anything.
> A unicorn restart fixes the problem.
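For reference, the setup described above presumably looks something like the sketch below: two separate unicorn masters, each with its own config and socket (worker counts, timeout, and backlog here are made-up illustrative values, not taken from the original report).

```ruby
# Sketch of the "normal" group's unicorn config. The long-request group
# would be a second unicorn master whose config instead contains
#   listen "/tmp/long_requests_unicorn.sock"
# nginx then picks the upstream socket per location/URL.
worker_processes 4                          # illustrative value
listen "/tmp/unicorn.sock", :backlog => 64  # illustrative backlog
timeout 30                                  # illustrative value
```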
How are you determining that requests get stuck for up to 7 seconds?
Just hitting the app? How is the system (CPU/RAM/swap usage) around
this time?

Are you using Raindrops::LastDataRecv or Raindrops::Watcher? (If not
and you're on Linux, please give them a try [1].)

Anything in the stderr logs? Dying/restarted workers might cause this.
Otherwise, I'd look for unexpectedly long-running requests in your
Rails logs.

> Has anyone seen freezes like that? I'd appreciate any help with
> debugging and understanding this problem.

I certainly have not. Did you perform any other upgrades around this
point? Can you try reverting to 4.3.1 (or earlier, without changing
anything else) and see if the problem presents itself there? Also,
which OS/version is this?

[1] http://raindrops.bogomips.org/

_______________________________________________
Unicorn mailing list - [email protected]
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying
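As an illustration of the stderr-log check suggested above: the unicorn master logs a "reaped" line whenever a worker dies and is respawned, so a quick scan for those lines can reveal worker churn around the stall times. This is only a sketch; the sample log excerpt below is made up (timestamps, pid, worker number), though the "reaped ... worker=N" shape matches what unicorn's master emits.

```ruby
# Extract the worker numbers from unicorn "reaped" log lines.
def reaped_workers(log_text)
  # One capture group per match => scan returns [["3"], ...]
  log_text.scan(/reaped .* worker=(\d+)/).flatten.map(&:to_i)
end

# Illustrative stderr excerpt (values are made up):
sample = <<~LOG
  I, [2012-11-01T12:00:00]  INFO -- : reaped #<Process::Status: pid 1234 SIGKILL (signal 9)> worker=3
  I, [2012-11-01T12:00:01]  INFO -- : worker=3 spawned pid=1300
LOG

reaped_workers(sample) # => [3]
```

If this turns up reaped workers at the times requests stall, the queue delay is likely requests waiting while replacement workers boot.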
