John Joseph Bachir wrote:
> Hi folks.
>
> Using mongrel_rails and the mongrel_cluster capistrano recipes, I often
> encounter a situation where some of the mongrel processes don't die in
> time to be restarted. The output of capistrano will tell me something
> like "mongrel on port 8001 is already up", but that's only because
> capistrano/mongrel_rails failed to take it down in the first place.
>
> The solution is to do a full deploy:stop a couple times to make sure
> they are all down, and then do a deploy:start.
>
> Is my problem typical? Is there a solution? Seems like mongrel_rails
> and/or the capistrano recipes should wait for the processes to stop
> before attempting to restart them.
>
> Thanks for any insight,
> John
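For what it's worth, one way around the race John describes is to make the restart wait until the old mongrel processes are actually gone before starting new ones. Below is a rough Capistrano sketch along those lines; the config path, pid file locations, and the 30-second timeout are assumptions about a typical mongrel_cluster setup, so adjust them to match yours.

  namespace :deploy do
    # Stop the cluster, wait for the old pids to die, then start it again.
    # Assumes the stock mongrel_cluster layout (config/mongrel_cluster.yml,
    # pid files under tmp/pids/) and that the deploy user owns the mongrels.
    task :careful_restart, :roles => :app do
      run "cd #{current_path} && mongrel_rails cluster::stop -C config/mongrel_cluster.yml"

      # Poll the pid files; give up after ~30 seconds. If there are still
      # stragglers at that point you probably want to kill -9 them rather
      # than start new mongrels on top of them.
      run <<-CMD
        cd #{current_path} &&
        n=0;
        while [ $n -lt 30 ]; do
          alive="";
          for f in tmp/pids/mongrel.*.pid; do
            [ -e "$f" ] && kill -0 `cat "$f"` 2>/dev/null && alive=yes;
          done;
          [ -z "$alive" ] && break;
          sleep 1; n=`expr $n + 1`;
        done
      CMD

      run "cd #{current_path} && mongrel_rails cluster::start -C config/mongrel_cluster.yml"
    end
  end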
Most of the responses assume that waiting for your mongrels to stop is better than sending them the signal and moving straight on to starting a new batch of servers. I don't see a problem with the latter, unless the old processes, once they've finished off any requests already in the pipeline, start picking up new ones. Can anyone verify that a "stop" command to a mongrel cluster keeps the mongrel(s) that were sent the signal from serving new requests? Assuming that is true, then it already amounts to a "rolling restart", as I understand it.
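One quick way to check that is to issue the stop against a single mongrel and then poll its port: if connections start getting refused right away, the "rolling restart" reading holds; if it keeps answering until it exits, it doesn't. Here's a rough Ruby sketch of that poll (the localhost address, default port 8001, and one-second interval are just placeholders):

  #!/usr/bin/env ruby
  # Poll one mongrel's port after issuing the stop, to see whether it keeps
  # accepting new requests while it drains the old ones.
  require 'net/http'

  port = (ARGV[0] || 8001).to_i

  10.times do |i|
    begin
      res = Net::HTTP.start('127.0.0.1', port) { |http| http.get('/') }
      puts "t+#{i}s: still serving new requests (HTTP #{res.code})"
    rescue Errno::ECONNREFUSED, Errno::ECONNRESET, Timeout::Error => e
      puts "t+#{i}s: not accepting connections (#{e.class})"
    end
    sleep 1
  end

Start it against one port, fire the stop from another shell, and watch what it prints.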