Whew! After a long day of debugging, I think I've gotten closer to
isolating this.

I've upgraded us to your pre-release gem, but I can still reproduce the
problem (both before & after a lot of the fixes below).

>> After a USR2 signal our process tree winds up looking like this, with
>> several master-esque processes listed as children (but without the
>> "worker[N]" label):
>> 
>> app      14402  4.4  0.8 199612 70264 ?        S    14:07   0:04 
>> unicorn_rails master -c config/unicorn.rb -E production -D
>> app      14433  0.0  0.8 204540 68504 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[0] -c config/unicorn.rb -E production -D
>> app      14435  0.0  0.8 204540 68508 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[1] -c config/unicorn.rb -E production -D
>> app      14438  0.0  0.8 199748 65840 ?        S    14:07   0:00  \_ 
>> /usr/bin/ruby1.8 /usr/bin/unicorn_rails -c config/unicorn.rb -E production -D
>> app      14440  0.0  0.8 204540 68508 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[3] -c config/unicorn.rb -E production -D
>> app      14442  0.0  0.8 204540 68508 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[4] -c config/unicorn.rb -E production -D
>> app      14445  0.0  0.8 199760 65840 ?        S    14:07   0:00  \_ 
>> /usr/bin/ruby1.8 /usr/bin/unicorn_rails -c config/unicorn.rb -E production -D
>> app      14447  0.0  0.8 204540 68508 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[6] -c config/unicorn.rb -E production -D
>> app      14449  0.0  0.8 204780 69272 ?        Sl   14:07   0:00  \_ 
>> unicorn_rails worker[7] -c config/unicorn.rb -E production -D
>> 
>> Sending another USR2 signal brings a new master into the mix as a
>> child, which spins up a single worker of its own (one that also
>> resembles the "/usr/bin/ruby1.8" master-esque processes) and then
>> fails to continue.
> 
> Anything in your before_fork/after_fork hooks?  Since it looks like
> you're on a Linux system, can you strace the master while you send
> it a USR2 and see if anything strange happens?

The only real content of our before_fork hook is the usual
send-QUIT-to-the-old-master-on-first-worker step (roughly the sketch
below), which I swapped out for the default SIGTTOU behavior. No change.
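
(Paraphrased from memory -- essentially the stock send-QUIT example from
the Unicorn docs, not our exact config:)

  before_fork do |server, worker|
    # as soon as the new master's first worker forks, ask the old
    # master to quit so the new generation takes over
    old_pid = "#{server.config[:pid]}.oldbin"
    if old_pid != server.pid
      begin
        Process.kill(:QUIT, File.read(old_pid).to_i)
      rescue Errno::ENOENT, Errno::ESRCH
        # old master already gone
      end
    end
  end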


> I assume you're using regular "unicorn" to run your Sinatra apps and not
> "unicorn_rails".  I made some largish cleanups to both for the 0.97.0
> release and perhaps some bugs slipped into the "_rails" variant.
> 
> Not sure if it's a problem, but with Bundler I assume Rack itself is a
> bundled dependency; however, you're starting unicorn_rails out of
> /usr/bin/unicorn_rails, which indicates Unicorn is not bundled (and won't
> use the bundled Rack).  Can you ensure your unbundled Rack is the same
> version as the bundled one, to be on the safe side?
> 

My system & bundled rack versions match.
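
(Compared along these lines -- Rack.release is the version string Rack
reports about itself:)

  ruby -e 'require "rubygems"; require "rack"; puts Rack.release'
  bundle exec ruby -e 'require "rack"; puts Rack.release'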

Swapped to vanilla "unicorn" instead of "unicorn_rails" -- also no dice.

I switched to using "bundle exec unicorn", which uses
RAILS_ROOT/vendor/bundler_gems/bin/unicorn instead of /usr/bin/unicorn. I
was convinced this would be it, but no dice.

Attaching some relevant straces... the new "orphan" master & its 1st child
are pretty boring while just hanging out:

Process 20738 attached - interrupt to quit
futex(0x2aaaaafb23c0, FUTEX_WAIT, 2, NULL

Sending a USR2 to the new, orphaned master...

Process 20738 attached - interrupt to quit
select(10, [9], [], [], {23, 661000})   = ? ERESTARTNOHAND (To be restarted)
--- SIGUSR2 (User defined signal 2) @ 0 (0) ---
rt_sigreturn(0xc)                       = -1 EINTR (Interrupted system call)
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
clock_gettime(CLOCK_MONOTONIC, {2782804, 359708496}) = 0
select(0, [], [], [], {0, 0})           = 0 (Timeout)
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
fcntl(5, F_GETFL)                       = 0x801 (flags O_WRONLY|O_NONBLOCK)
write(5, "."..., 1)                     = 1
clock_gettime(CLOCK_MONOTONIC, {2782804, 360046496}) = 0
select(10, [9], [], [], {20, 464341}
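
(That write(5, ".") looks like the master waking itself with the classic
self-pipe trick: signal handlers only queue the signal and write one byte,
so the sleeping select returns and the main loop handles it. A minimal
sketch of the pattern -- illustrative only, not Unicorn's actual source:)

  SELF_PIPE    = IO.pipe
  SIGNAL_QUEUE = []

  trap(:USR2) do
    SIGNAL_QUEUE << :USR2
    begin
      SELF_PIPE[1].write_nonblock('.')  # the write(5, ".") above
    rescue Errno::EAGAIN
      # a wakeup byte is already pending; nothing to do
    end
  end

  loop do
    # the master's sleep; returns early when a handler writes to the pipe
    if IO.select([SELF_PIPE[0]], nil, nil, 30)
      begin
        SELF_PIPE[0].read_nonblock(16)  # drain the wakeup byte(s)
      rescue Errno::EAGAIN
      end
    end
    while (sig = SIGNAL_QUEUE.shift)
      puts "woke up for SIG#{sig}"  # a real master would re-exec, reap, etc.
    end
  end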


I've also produced straces of the *original* master during USR2 restarts
-- both a success trace and a failure trace.
Here's a tarball with both complete traces as well as filtered/grepped ones:
http://jamiedubs.com/files/unicorn-strace.tgz
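
(The traces were captured roughly like so -- the pid-file path is a
placeholder for ours:)

  strace -f -o unicorn-usr2.trace -p $(cat /path/to/unicorn.pid) &
  kill -USR2 $(cat /path/to/unicorn.pid)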

I've also found that kill -9'ing the 1st worker of the new orphaned master
allows it to resume normal operation (spinning up workers and taking
control from the original master) -- suggesting something is up with just
that first worker (!). I'm going to keep noodling with
before_fork/after_fork strategies.

-jamie