Re: [Mongrel] monit vs mongrel cluster

2007-04-08 Thread Henry
Thanks, Ezra.  Apologies for not checking the whole thread.

On Apr 8, 2007, at 12:40 PM, Ezra Zygmuntowicz wrote:

> Henry-
>
>   That is what was quoted earlier in this email. Here is the monitrc
> for one mongrel using mongrel_cluster:
>
> check process mongrel_USERNAME_5000
>    with pidfile /data/USERNAME/shared/log/mongrel.5000.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/USERNAME/current/config/mongrel_cluster.yml --clean --only 5000"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/USERNAME/current/config/mongrel_cluster.yml --only 5000"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>    group mongrel
>
>
>   You need one of those entries for each mongrel you need to run.
>
> Cheers-
> -Ezra
>
>
> On Apr 7, 2007, at 7:54 AM, Henry wrote:
>
>> Ezra,
>>
>> Would you mind sharing the portion of your monit.conf that handles
>> the cluster?
>>
>> Many thanks,
>> Henry
>>
>>
>> On Apr 3, 2007, at 6:28 PM, Ezra Zygmuntowicz wrote:
>>
>>>
>>> Yes, mongrel_cluster handles the pid files. It also does a better job
>>> of stopping mongrels. The problem I had when I used monit and
>>> mongrel_rails without mongrel_cluster was that if a mongrel used too
>>> much memory, monit would sometimes not be able to stop it, and so
>>> execution would fail and time out.
>>>
>>> Using mongrel_cluster avoids this problem completely. Trust me, I've
>>> tried it all different ways. I ran monit without mongrel_cluster for
>>> about a full month on close to 200 servers and then switched them
>>> all to monit and mongrel_cluster and got much better results.
>>>
>>> -Ezra
>>>
>>> On Apr 3, 2007, at 3:00 PM, snacktime wrote:
>>>
Makes sense that mongrel_cluster would handle a lot of edge cases
better than monit.  Is it mainly the pid file handling that has been
the main issue so far?

 Have you tried daemontools?  Seems to me like it would be more
 reliable since you wouldn't have to deal with pid files and
 backgrounding mongrel.

 Chris

 On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>
> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>
>> Is there anything mongrel cluster gives you that monit doesn't?
>> I'll
>> be using monit to monitor a number of other services anyways,
>> so it
>> seems logical to just use it for everything including mongrel.
>>
>> Chris
>>
>
> Chris-
>
> When you use monit you can still use mongrel_cluster to manage it.
> You need the latest prerelease of mongrel_cluster. This is the best
> configuration I've been able to come up with for 64-bit systems. If
> you're on a 32-bit system, you can lower the memory limits by about
> 20-30%.
>
> check process mongrel_<%= @username %>_5000
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>    group mongrel
>
> check process mongrel_<%= @username %>_5001
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% f

Re: [Mongrel] monit vs mongrel cluster

2007-04-08 Thread Ezra Zygmuntowicz
Henry-

That is what was quoted earlier in this email. Here is the monitrc
for one mongrel using mongrel_cluster:

check process mongrel_USERNAME_5000
   with pidfile /data/USERNAME/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/USERNAME/current/config/mongrel_cluster.yml --clean --only 5000"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/USERNAME/current/config/mongrel_cluster.yml --only 5000"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart   # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
   if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
   group mongrel


You need one of those entries for each mongrel you need to run.
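
Since every port gets an essentially identical stanza, the per-port entries can be generated from a single ERB template, in the spirit of the `<%= @username %>` templates shown elsewhere in this thread. A minimal sketch; the `deploy` username, the port range, and the trimmed-down stanza are illustrative, not a real deployment:

```ruby
require 'erb'

# One monit "check process" stanza per mongrel port, rendered from a
# template. Paths and thresholds follow the examples in this thread.
STANZA = ERB.new(<<~'MONIT')
  check process mongrel_<%= user %>_<%= port %>
     with pidfile /data/<%= user %>/shared/log/mongrel.<%= port %>.pid
     start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= user %>/current/config/mongrel_cluster.yml --clean --only <%= port %>"
     stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= user %>/current/config/mongrel_cluster.yml --clean --only <%= port %>"
     if totalmem is greater than 110.0 MB for 4 cycles then restart
     group mongrel
MONIT

# Render one stanza per port and join them into a monitrc fragment.
def monit_entries(user, ports)
  ports.map { |port| STANZA.result_with_hash(user: user, port: port) }.join("\n")
end

puts monit_entries('deploy', 5000..5002)
```

Regenerating the fragment whenever the cluster size changes keeps the per-port stanzas from drifting apart.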

Cheers-
-Ezra


On Apr 7, 2007, at 7:54 AM, Henry wrote:

> Ezra,
>
> Would you mind sharing the portion of your monit.conf that handles
> the cluster?
>
> Many thanks,
> Henry
>
>
> On Apr 3, 2007, at 6:28 PM, Ezra Zygmuntowicz wrote:
>
>>
>>  Yes, mongrel_cluster handles the pid files. It also does a better job
>> of stopping mongrels. The problem I had when I used monit and
>> mongrel_rails without mongrel_cluster was that if a mongrel used too
>> much memory, monit would sometimes not be able to stop it, and so
>> execution would fail and time out.
>>
>>  Using mongrel_cluster avoids this problem completely. Trust me, I've
>> tried it all different ways. I ran monit without mongrel_cluster for
>> about a full month on close to 200 servers and then switched them
>> all to monit and mongrel_cluster and got much better results.
>>
>> -Ezra
>>
>> On Apr 3, 2007, at 3:00 PM, snacktime wrote:
>>
>>> Makes sense that mongrel_cluster would handle a lot of edge cases
>>> better than monit.  Is it mainly the pid file handling that has been
>>> the main issue so far?
>>>
>>> Have you tried daemontools?  Seems to me like it would be more
>>> reliable since you wouldn't have to deal with pid files and
>>> backgrounding mongrel.
>>>
>>> Chris
>>>
>>> On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:

 On Apr 3, 2007, at 1:39 PM, snacktime wrote:

> Is there anything mongrel cluster gives you that monit doesn't?
> I'll
> be using monit to monitor a number of other services anyways,  
> so it
> seems logical to just use it for everything including mongrel.
>
> Chris
>

 Chris-

When you use monit you can still use mongrel_cluster to manage it.
You need the latest prerelease of mongrel_cluster. This is the best
configuration I've been able to come up with for 64-bit systems. If
you're on a 32-bit system, you can lower the memory limits by about
20-30%.

check process mongrel_<%= @username %>_5000
   with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart   # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
   if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
   group mongrel

check process mongrel_<%= @username %>_5001
   with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart   # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
   if 20 restarts within 20 cycles then timeout   # something is wrong, cal

Re: [Mongrel] monit vs mongrel cluster

2007-04-08 Thread Henry
Ezra,

Would you mind sharing the portion of your monit.conf that handles  
the cluster?

Many thanks,
Henry


On Apr 3, 2007, at 6:28 PM, Ezra Zygmuntowicz wrote:

>
>   Yes, mongrel_cluster handles the pid files. It also does a better job
> of stopping mongrels. The problem I had when I used monit and
> mongrel_rails without mongrel_cluster was that if a mongrel used too
> much memory, monit would sometimes not be able to stop it, and so
> execution would fail and time out.
>
>   Using mongrel_cluster avoids this problem completely. Trust me, I've
> tried it all different ways. I ran monit without mongrel_cluster for
> about a full month on close to 200 servers and then switched them
> all to monit and mongrel_cluster and got much better results.
>
> -Ezra
>
> On Apr 3, 2007, at 3:00 PM, snacktime wrote:
>
>> Makes sense that mongrel_cluster would handle a lot of edge cases
>> better than monit.  Is it mainly the pid file handling that has been
>> the main issue so far?
>>
>> Have you tried daemontools?  Seems to me like it would be more
>> reliable since you wouldn't have to deal with pid files and
>> backgrounding mongrel.
>>
>> Chris
>>
>> On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>>>
>>> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>>>
 Is there anything mongrel cluster gives you that monit doesn't?
 I'll
 be using monit to monitor a number of other services anyways, so it
 seems logical to just use it for everything including mongrel.

 Chris

>>>
>>> Chris-
>>>
>>> When you use monit you can still use mongrel_cluster to manage it.
>>> You need the latest prerelease of mongrel_cluster. This is the best
>>> configuration I've been able to come up with for 64-bit systems. If
>>> you're on a 32-bit system, you can lower the memory limits by about
>>> 20-30%.
>>>
>>> check process mongrel_<%= @username %>_5000
>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>>    group mongrel
>>>
>>> check process mongrel_<%= @username %>_5001
>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>>    group mongrel
>>>
>>> check process mongrel_<%= @username %>_5002
>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>>    group mongrel
>>>
>>>
>>> I went for a while using my own scripts to start and stop mongrel
>>> without using mongrel_cluster, but it works more reliably when I use
>>> mongrel_cluster and monit together.
>>>
>>> Cheers-
>>> -- Ezra Zygmuntowi

Re: [Mongrel] monit vs mongrel cluster

2007-04-04 Thread Alexey Verkhovsky
On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
> The reason it's not in mongrel is that Bradley made the
> mongrel_cluster gem, so Zed saw no reason to add the same stuff to
> mongrel.
>
> I have asked for a --clean option for the mongrel_rails command that
> would clean up PIDs, but I haven't had time to make a patch,

Same story here. I've been meaning to sit down and write that patch
for the last 10 days or so. If you get there first, please ping me
off-list.

Another daemonization problem to look at is this: when there is no pid
file but the port is bound to some other process, 'mongrel_rails
start' silently exits with exit code zero. Meanwhile, the .pid file is
created a few seconds AFTER 'mongrel_rails start' is done. This is not
as important as the above, but it's not quite right, and in the same area.
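
The two guards this suggests for a wrapper script can be sketched in Ruby: refuse to start when the port is already bound, and poll for the pid file with a deadline instead of trusting the exit code. The helper names are hypothetical, not part of mongrel_rails:

```ruby
require 'socket'
require 'timeout'

# True if something is already listening on the port -- the case in
# which 'mongrel_rails start' would still exit 0.
def port_in_use?(port, host = '127.0.0.1')
  TCPSocket.new(host, port).close
  true
rescue Errno::ECONNREFUSED
  false
end

# Mongrel writes its pid file a few seconds after 'start' returns,
# so poll for it with a deadline rather than trusting the exit status.
def wait_for_pidfile(path, seconds = 10)
  Timeout.timeout(seconds) { sleep 0.2 until File.exist?(path) }
  Integer(File.read(path).strip)
end
```

A wrapper would abort if `port_in_use?` is true before starting, and treat a `Timeout::Error` from `wait_for_pidfile` as a failed start.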

Alex
___
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Ezra Zygmuntowicz

On Apr 3, 2007, at 4:19 PM, Zack Chandler wrote:

> Ezra,
>
> The --clean option is only available in the 1.0.1.1 beta, I believe?  Are
> you finding it stable enough for production environments (EY)?
>
> I've been bitten by orphaned pids many times - I'm looking forward to
> putting this into production...
>

Zack-

Yes, the prerelease is production-worthy on Linux; it may still have
an issue on FreeBSD. If you are on Linux, I highly recommend the
upgrade. It is running fine on hundreds of servers now and works much
better than any other way I've had this stuff set up.


On Apr 3, 2007, at 4:32 PM, Alexey Verkhovsky wrote:
> On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>> mongrel_cluster handles the pid files. Also it does a better job
>> of stopping mongrels.
>
> Is there some fundamental reason why Mongrel itself cannot handle
> these issues well, or does it just need more work in this area?
>
> Alex

The reason it's not in mongrel is that Bradley made the
mongrel_cluster gem, so Zed saw no reason to add the same stuff to
mongrel.

I have asked for a --clean option for the mongrel_rails command that
would clean up PIDs, but I haven't had time to make a patch,
especially since mongrel_cluster does a great job managing this stuff.
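
What a --clean option could plausibly do, in outline: before starting, check whether the pid recorded in a leftover file is still alive, and delete the file if not. A sketch of the idea, not mongrel_cluster's actual implementation:

```ruby
# Probe for a process without sending it a signal: Process.kill(0, pid)
# raises Errno::ESRCH when no such process exists.
def process_alive?(pid)
  Process.kill(0, pid)
  true
rescue Errno::ESRCH
  false
rescue Errno::EPERM
  true  # the pid exists but belongs to another user
end

# Delete a pid file whose process is gone (or whose contents are
# garbage), so a subsequent start is not blocked by a stale file.
def clean_stale_pidfile(path)
  return unless File.exist?(path)
  pid = begin
          Integer(File.read(path).strip)
        rescue ArgumentError
          nil
        end
  File.delete(path) if pid.nil? || !process_alive?(pid)
end
```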


Cheers-

-- Ezra Zygmuntowicz 
-- Lead Rails Evangelist
-- [EMAIL PROTECTED]
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)




Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Alexey Verkhovsky
On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
> mongrel_cluster handles the pid files. Also it does a better job
> of stopping mongrels.

Is there some fundamental reason why Mongrel itself cannot handle
these issues well, or does it just need more work in this area?

Alex


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Zack Chandler
Ezra,

The --clean option is only available in the 1.0.1.1 beta, I believe?  Are
you finding it stable enough for production environments (EY)?

I've been bitten by orphaned pids many times - I'm looking forward to
putting this into production...

--
Zack Chandler
http://depixelate.com

On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>
> Yes, mongrel_cluster handles the pid files. It also does a better job
> of stopping mongrels. The problem I had when I used monit and
> mongrel_rails without mongrel_cluster was that if a mongrel used too
> much memory, monit would sometimes not be able to stop it, and so
> execution would fail and time out.
>
> Using mongrel_cluster avoids this problem completely. Trust me, I've
> tried it all different ways. I ran monit without mongrel_cluster for
> about a full month on close to 200 servers and then switched them
> all to monit and mongrel_cluster and got much better results.
>
> -Ezra
>
> On Apr 3, 2007, at 3:00 PM, snacktime wrote:
>
> > Makes sense that mongrel_cluster would handle a lot of edge cases
> > better than monit.  Is it mainly the pid file handling that has been
> > the main issue so far?
> >
> > Have you tried daemontools?  Seems to me like it would be more
> > reliable since you wouldn't have to deal with pid files and
> > backgrounding mongrel.
> >
> > Chris
> >
> > On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
> >>
> >> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
> >>
> >>> Is there anything mongrel cluster gives you that monit doesn't?
> >>> I'll
> >>> be using monit to monitor a number of other services anyways, so it
> >>> seems logical to just use it for everything including mongrel.
> >>>
> >>> Chris
> >>>
> >>
> >> Chris-
> >>
> >> When you use monit you can still use mongrel_cluster to manage it.
> >> You need the latest prerelease of mongrel_cluster. This is the best
> >> configuration I've been able to come up with for 64-bit systems. If
> >> you're on a 32-bit system, you can lower the memory limits by about
> >> 20-30%.
> >>
> >> check process mongrel_<%= @username %>_5000
> >>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
> >>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
> >>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
> >>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
> >>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
> >>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
> >>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
> >>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
> >>    group mongrel
> >>
> >> check process mongrel_<%= @username %>_5001
> >>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
> >>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
> >>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
> >>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
> >>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
> >>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
> >>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
> >>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
> >>    group mongrel
> >>
> >> check process mongrel_<%= @username %>_5002
> >>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
> >>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
> >>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
> >>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
> >>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
> >>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
> >>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
> >>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
> >>    group 

Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Ezra Zygmuntowicz

Yes, mongrel_cluster handles the pid files. It also does a better job
of stopping mongrels. The problem I had when I used monit and
mongrel_rails without mongrel_cluster was that if a mongrel used too
much memory, monit would sometimes not be able to stop it, and so
execution would fail and time out.

Using mongrel_cluster avoids this problem completely. Trust me, I've
tried it all different ways. I ran monit without mongrel_cluster for
about a full month on close to 200 servers and then switched them
all to monit and mongrel_cluster and got much better results.

-Ezra

On Apr 3, 2007, at 3:00 PM, snacktime wrote:

> Makes sense that mongrel_cluster would handle a lot of edge cases
> better than monit.  Is it mainly the pid file handling that has been
> the main issue so far?
>
> Have you tried daemontools?  Seems to me like it would be more
> reliable since you wouldn't have to deal with pid files and
> backgrounding mongrel.
>
> Chris
>
> On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>>
>> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>>
>>> Is there anything mongrel cluster gives you that monit doesn't?   
>>> I'll
>>> be using monit to monitor a number of other services anyways, so it
>>> seems logical to just use it for everything including mongrel.
>>>
>>> Chris
>>>
>>
>> Chris-
>>
>> When you use monit you can still use mongrel_cluster to manage it.
>> You need the latest prerelease of mongrel_cluster. This is the best
>> configuration I've been able to come up with for 64-bit systems. If
>> you're on a 32-bit system, you can lower the memory limits by about
>> 20-30%.
>>
>> check process mongrel_<%= @username %>_5000
>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>    group mongrel
>>
>> check process mongrel_<%= @username %>_5001
>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>    group mongrel
>>
>> check process mongrel_<%= @username %>_5002
>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>>    group mongrel
>>
>>
>> I went for a while using my own scripts to start and stop mongrel
>> without using mongrel_cluster, but it works more reliably when I use
>> mongrel_cluster and monit together.
>>
>> Cheers-
>> -- Ezra Zygmuntowicz
>> -- Lead Rails Evangelist
>> -- [EMAIL PROTECTED]
>> -- Engine Yard, Serious Rails Hosting
>> -- (866) 518-YARD (9273)
>>
>>

Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread snacktime
Makes sense that mongrel_cluster would handle a lot of edge cases
better than monit.  Is it mainly the pid file handling that has been
the main issue so far?

Have you tried daemontools?  Seems to me like it would be more
reliable since you wouldn't have to deal with pid files and
backgrounding mongrel.

Chris

On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>
> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>
> > Is there anything mongrel cluster gives you that monit doesn't?  I'll
> > be using monit to monitor a number of other services anyways, so it
> > seems logical to just use it for everything including mongrel.
> >
> > Chris
> >
>
> Chris-
>
> When you use monit you can still use mongrel_cluster to manage it.
> You need the latest prerelease of mongrel_cluster. This is the best
> configuration I've been able to come up with for 64-bit systems. If
> you're on a 32-bit system, you can lower the memory limits by about
> 20-30%.
>
> check process mongrel_<%= @username %>_5000
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>    group mongrel
>
> check process mongrel_<%= @username %>_5001
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>    group mongrel
>
> check process mongrel_<%= @username %>_5002
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart   # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart   # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout   # something is wrong, call the sys-admin
>    group mongrel
>
>
> I went for a while using my own scripts to start and stop mongrel
> without using mongrel_cluster, but it works more reliably when I use
> mongrel_cluster and monit together.
>
> Cheers-
> -- Ezra Zygmuntowicz
> -- Lead Rails Evangelist
> -- [EMAIL PROTECTED]
> -- Engine Yard, Serious Rails Hosting
> -- (866) 518-YARD (9273)
>
>


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Kevin Williams
I do pretty much the same thing with monit, except I don't use
mongrel_cluster. Because monit needs to handle each instance of
mongrel separately, I didn't see the point in using a clustering tool
to handle single instances. It's a minor difference, really -
specifying the mongrel_rails options in the monitrc file vs. the
config/mongrel_cluster.yml file. Either way, I use monit to restart a
mongrel cluster (or "group" as monit calls it).

I'd say monit offers _more_ than mongrel_cluster, but I'm no expert.


On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>
> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>
> > Is there anything mongrel cluster gives you that monit doesn't?  I'll
> > be using monit to monitor a number of other services anyways, so it
> > seems logical to just use it for everything including mongrel.
> >
> > Chris
> >
>
> Chris-
>
> When you use monit you can still use mongrel_cluster to manage it.
> You need the latest prerelease of mongrel_cluster. This is the best
> configuration I've been able to come up with for 64-bit systems. If
> you're on a 32-bit system, you can lower the memory limits by about
> 20-30%.
>
> check process mongrel_<%= @username %>_5000
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart             # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
>    group mongrel
>
> check process mongrel_<%= @username %>_5001
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart             # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
>    group mongrel
>
> check process mongrel_<%= @username %>_5002
>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart             # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
>    group mongrel
>
>
> I went for a while using my own scripts to start and stop mongrel
> without using mongrel_cluster, but everything works more reliably
> when I use mongrel_cluster and monit together.
>
> Cheers-
> -- Ezra Zygmuntowicz
> -- Lead Rails Evangelist
> -- [EMAIL PROTECTED]
> -- Engine Yard, Serious Rails Hosting
> -- (866) 518-YARD (9273)
>
>


-- 
Cheers,

Kevin Williams
http://www.almostserio.us/

"Any sufficiently advanced technology is indistinguishable from
Magic." - Arthur C. Clarke


Re: [Mongrel] monit vs mongrel cluster

2007-04-03 Thread Ezra Zygmuntowicz

On Apr 3, 2007, at 1:39 PM, snacktime wrote:

> Is there anything mongrel cluster gives you that monit doesn't?  I'll
> be using monit to monitor a number of other services anyways, so it
> seems logical to just use it for everything including mongrel.
>
> Chris
>

Chris-

When you use monit you can still use mongrel_cluster to manage the
mongrels. You need the latest prerelease of mongrel_cluster. This is
the best configuration I've been able to come up with for 64-bit
systems. If you're on a 32-bit system you can lower the memory limits
by about 20-30%.
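
Both the start and stop commands point monit at the same mongrel_cluster.yml via -C. For context, a minimal sketch of what that file typically contains (all paths and values here are illustrative, not taken from this thread; `mongrel_rails cluster::configure` generates one for you):

```yaml
# config/mongrel_cluster.yml -- illustrative values only
cwd: /data/<username>/current
environment: production
address: 127.0.0.1
port: "5000"       # first port; three servers listen on 5000, 5001, 5002
servers: 3
pid_file: /data/<username>/shared/log/mongrel.pid   # expands to mongrel.5000.pid etc.
```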

check process mongrel_<%= @username %>_5000
   with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
   group mongrel

check process mongrel_<%= @username %>_5001
   with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
   group mongrel

check process mongrel_<%= @username %>_5002
   with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
   start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
   stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
   if totalmem is greater than 110.0 MB for 4 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 2 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 3 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 8 cycles then restart       # bad, bad, bad
   if 20 restarts within 20 cycles then timeout                     # something is wrong, call the sysadmin
   group mongrel
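
The three stanzas above differ only in the port number, which makes them a natural fit for generation from a single ERB template (the configs already use `<%= @username %>` placeholders). A sketch under modern Ruby; the `deploy` username, port list, and trimmed-down rule set are illustrative, not a prescription:

```ruby
require 'erb'

# One monit stanza as an ERB template. `username` and `port` are filled
# in per mongrel instance; paths and limits mirror the config above.
STANZA = ERB.new(<<~'MONIT')
  check process mongrel_<%= username %>_<%= port %>
     with pidfile /data/<%= username %>/shared/log/mongrel.<%= port %>.pid
     start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= username %>/current/config/mongrel_cluster.yml --clean --only <%= port %>"
     stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= username %>/current/config/mongrel_cluster.yml --clean --only <%= port %>"
     if totalmem is greater than 110.0 MB for 4 cycles then restart
     if cpu is greater than 80% for 3 cycles then restart
     if 20 restarts within 20 cycles then timeout
     group mongrel
MONIT

# Render one stanza per port and join them into a monitrc fragment.
def monit_config(username, ports)
  ports.map { |port| STANZA.result_with_hash(username: username, port: port) }.join("\n")
end

puts monit_config('deploy', [5000, 5001, 5002])
```

Writing the output to a file included from monitrc keeps the per-port entries from drifting apart as you hand-edit copies.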


I went for a while using my own scripts to start and stop mongrel
without using mongrel_cluster, but everything works more reliably
when I use mongrel_cluster and monit together.

Cheers-
-- Ezra Zygmuntowicz 
-- Lead Rails Evangelist
-- [EMAIL PROTECTED]
-- Engine Yard, Serious Rails Hosting
-- (866) 518-YARD (9273)




[Mongrel] monit vs mongrel cluster

2007-04-03 Thread snacktime
Is there anything mongrel cluster gives you that monit doesn't?  I'll
be using monit to monitor a number of other services anyways, so it
seems logical to just use it for everything including mongrel.

Chris