Re: [Mongrel] monit vs mongrel cluster

2007-04-08 Thread Ezra Zygmuntowicz
Henry-

That is what was quoted earlier in this email. Here is the monitrc
for one mongrel using mongrel_cluster:

check process mongrel_USERNAME_5000
   with pidfile /data/USERNAME/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/USERNAME/current/config/mongrel_cluster.yml --clean --only 5000"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/USERNAME/current/config/mongrel_cluster.yml --only 5000"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sys-admin
   group mongrel


You need one of those entries for each mongrel you need to run.
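For reference, here is a sketch of the mongrel_cluster.yml file that the -C flag points at. The field names are mongrel_cluster's standard options, but the values are illustrative (USERNAME stands in for the deploy user, as above), not taken from the original message:

```yaml
# Sketch of a mongrel_cluster.yml matching the stanza above (values
# are illustrative). mongrel_cluster derives per-port pidfile names
# such as mongrel.5000.pid from the pid_file base name.
cwd: /data/USERNAME/current
environment: production
address: 127.0.0.1
port: "5000"
servers: 3
pid_file: /data/USERNAME/shared/log/mongrel.pid
```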

Cheers-
-Ezra


On Apr 7, 2007, at 7:54 AM, Henry wrote:

 Ezra,

 Would you mind sharing the portion of your monit.conf that handles
 the cluster?

 Many thanks,
 Henry



Re: [Mongrel] monit vs mongrel cluster

2007-04-08 Thread Henry
Thanks, Ezra.  Apologies for not checking the whole thread.


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Ezra Zygmuntowicz

On Apr 3, 2007, at 1:39 PM, snacktime wrote:

 Is there anything mongrel cluster gives you that monit doesn't?  I'll
 be using monit to monitor a number of other services anyways, so it
 seems logical to just use it for everything including mongrel.

 Chris


Chris-

When you use monit you can still use mongrel_cluster to manage the  
mongrels. You need the latest pre-release of mongrel_cluster. This is  
the best configuration I've been able to come up with for 64-bit  
systems. If you're on a 32-bit system you can lower the memory limits  
by about 20-30%.

check process mongrel_<%= @username %>_5000
   with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sys-admin
   group mongrel

check process mongrel_<%= @username %>_5001
   with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sys-admin
   group mongrel

check process mongrel_<%= @username %>_5002
   with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sys-admin
   group mongrel
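The stanzas above are an ERB template: <%= @username %> is interpolated per deployment. A minimal sketch of that expansion using Ruby's stdlib ERB (the MonitrcContext class, the 'deploy' username, and the trimmed-down template here are hypothetical, for illustration only):

```ruby
require 'erb'

# Hypothetical: render one monit stanza per mongrel port from an ERB
# template like the one above. Class name and values are illustrative.
class MonitrcContext
  def initialize(username, port)
    @username = username
    @port     = port
  end

  def render(template)
    # binding exposes @username and @port to the template
    ERB.new(template).result(binding)
  end
end

TEMPLATE = <<'TPL'
check process mongrel_<%= @username %>_<%= @port %>
   with pidfile /data/<%= @username %>/shared/log/mongrel.<%= @port %>.pid
   group mongrel
TPL

# One stanza per port, e.g. 5000..5002 for a three-mongrel cluster:
monitrc = [5000, 5001, 5002].map { |p| MonitrcContext.new('deploy', p).render(TEMPLATE) }.join("\n")
puts monitrc
```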


I went for a while using my own scripts to start and stop mongrel  
without using mongrel_cluster. But it works more reliably when I use  
mongrel_cluster and monit together.

Cheers-
-- Ezra Zygmuntowicz 
-- Lead Rails Evangelist
-- [EMAIL PROTECTED]
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)


___
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Kevin Williams
I do pretty much the same thing with monit, except I don't use
mongrel_cluster. Because monit needs to handle each instance of
mongrel separately, I didn't see the point in using a clustering tool
to handle single instances. It's a minor difference, really -
specifying the mongrel_rails options in the monitrc file vs. the
config/mongrel_cluster.yml file. Either way, I use monit to restart a
mongrel cluster (or group as monit calls it).
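A per-instance stanza in that style, driving mongrel_rails directly from monit with no mongrel_cluster in between, might look like the following. The paths and options are illustrative assumptions, not Kevin's actual monitrc:

```
check process mongrel_5000
   with pidfile /data/deploy/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails start -d -e production -p 5000 -c /data/deploy/current -P /data/deploy/shared/log/mongrel.5000.pid"
   stop program = "/usr/bin/mongrel_rails stop -P /data/deploy/shared/log/mongrel.5000.pid"
   group mongrel
```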

I'd say monit offers _more_ than mongrel_cluster, but I'm no expert.


-- 
Cheers,

Kevin Williams
http://www.almostserio.us/

Any sufficiently advanced technology is indistinguishable from
Magic. - Arthur C. Clarke


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread snacktime
Makes sense that mongrel_cluster would handle a lot of edge cases
better than monit.  Is it mainly the pid file handling that has been
the issue so far?

Have you tried daemontools?  Seems to me like it would be more
reliable since you wouldn't have to deal with pid files and
backgrounding mongrel.
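For comparison, a daemontools setup runs mongrel in the foreground under supervise, so there is no pidfile or backgrounding at all. A hypothetical run script (the paths, user, and options here are illustrative assumptions):

```sh
#!/bin/sh
# e.g. /service/mongrel_5000/run -- supervise tracks the foreground
# process directly, so no pidfile is involved. No -d flag: mongrel
# must not daemonize itself.
exec 2>&1
exec setuidgid deploy /usr/bin/mongrel_rails start \
  -c /data/deploy/current -e production -p 5000
```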

Chris



Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Ezra Zygmuntowicz

Yes, mongrel_cluster handles the pid files. It also does a better job  
of stopping mongrels. The problem I had when I used monit and  
mongrel_rails without mongrel_cluster was that if a mongrel used too  
much memory, monit would sometimes be unable to stop it, so  
execution would fail and time out.

Using mongrel_cluster avoids this problem completely. Trust me, I've  
tried it all different ways. I ran monit without mongrel_cluster for  
about a full month on close to 200 servers, then switched them  
all to monit and mongrel_cluster and got much better results.

-Ezra


-- Ezra Zygmuntowicz 
-- Lead Rails Evangelist
-- [EMAIL PROTECTED]

Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Zack Chandler
Ezra,

The --clean option is only available in the 1.0.1.1 beta, I believe?  Are
you finding it stable enough for production environments (EY)?

I've been bitten by orphaned pids many times - I'm looking forward to
putting this into production...
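The orphaned-pid problem that --clean addresses can be sketched in a few lines. This is only an illustration of a stale-pidfile check, not mongrel_cluster's actual implementation:

```ruby
# Illustrative stale-pidfile check (the kind of thing a --clean style
# option guards against); NOT mongrel_cluster's actual code.
def stale_pidfile?(path)
  return false unless File.exist?(path)
  pid = File.read(path).to_i
  return false if pid <= 0       # empty or garbled pidfile: leave it alone
  Process.kill(0, pid)           # signal 0: existence check, sends nothing
  false                          # process is alive, so the pidfile is valid
rescue Errno::ESRCH
  true                           # no such process: the pidfile is orphaned
rescue Errno::EPERM
  false                          # process exists but belongs to another user
end

# Delete the pidfile only when its process is demonstrably gone.
def clean_pidfile(path)
  File.delete(path) if stale_pidfile?(path)
end
```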

--
Zack Chandler
http://depixelate.com


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Alexey Verkhovsky
On 4/3/07, Ezra Zygmuntowicz [EMAIL PROTECTED] wrote:
 mongrel_cluster handles the pid files. Also it does a better job
 of stopping mongrels.

Is there some fundamental reason why Mongrel itself cannot handle
these issues well, or does it just need more work in this area?

Alex