Thanks, Ezra.  Apologies for not checking the whole thread.

On Apr 8, 2007, at 12:40 PM, Ezra Zygmuntowicz wrote:

> Henry-
>
> 	That is what I quoted earlier in this email. Here is the monitrc
> for one mongrel using mongrel_cluster:
>
> check process mongrel_USERNAME_5000
>    with pidfile /data/USERNAME/shared/log/mongrel.5000.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/USERNAME/current/config/mongrel_cluster.yml --clean --only 5000"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/USERNAME/current/config/mongrel_cluster.yml --only 5000"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart       # eating up memory?
>    if cpu is greater than 50% for 2 cycles then alert                   # send an email to admin
>    if cpu is greater than 80% for 3 cycles then restart                 # hung process?
>    if loadavg(5min) greater than 10 for 8 cycles then restart           # bad, bad, bad
>    if 20 restarts within 20 cycles then timeout                         # something is wrong, call the sysadmin
>    group mongrel
>
>
>       You need one of those entries for each mongrel you need to run.
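>
> 	If you don't want to type that stanza once per mongrel, a tiny
> Ruby script like this will print them for you. This is just a sketch;
> the username, port range, and limits are placeholders to adjust:
>
> # generate_monitrc.rb -- hypothetical helper, prints one monit stanza per port
> username = "USERNAME"                 # placeholder
> ports    = (5000..5002)               # one mongrel per port
> conf     = "/data/#{username}/current/config/mongrel_cluster.yml"
> ports.each do |port|
>   puts <<-STANZA
> check process mongrel_#{username}_#{port}
>    with pidfile /data/#{username}/shared/log/mongrel.#{port}.pid
>    start program = "/usr/bin/mongrel_rails cluster::start -C #{conf} --clean --only #{port}"
>    stop program = "/usr/bin/mongrel_rails cluster::stop -C #{conf} --only #{port}"
>    if totalmem is greater than 110.0 MB for 4 cycles then restart
>    if cpu is greater than 80% for 3 cycles then restart
>    if 20 restarts within 20 cycles then timeout
>    group mongrel
>   STANZA
> end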
>
> Cheers-
> -Ezra
>
>
> On Apr 7, 2007, at 7:54 AM, Henry wrote:
>
>> Ezra,
>>
>> Would you mind sharing the portion of your monit.conf that handles
>> the cluster?
>>
>> Many thanks,
>> Henry
>>
>>
>> On Apr 3, 2007, at 6:28 PM, Ezra Zygmuntowicz wrote:
>>
>>>
>>>     Yes, mongrel_cluster handles the pid files. It also does a better
>>> job of stopping mongrels. The problem I had when I used monit and
>>> mongrel_rails without mongrel_cluster was that if a mongrel used too
>>> much memory, monit would sometimes not be able to stop it, so
>>> execution would fail and time out.
>>>
>>>     Using mongrel_cluster avoids this problem completely. Trust me, I've
>>> tried it all different ways. I ran monit without mongrel_cluster for
>>> about a full month on close to 200 servers, then switched them
>>> all to monit plus mongrel_cluster and got much better results.
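>>>
>>>     For reference, the mongrel_cluster.yml that the -C flag points at
>>> looks roughly like this (the paths, user, and server count here are
>>> just placeholders for illustration, not pulled from a real box):
>>>
>>> ---
>>> cwd: /data/USERNAME/current
>>> port: "5000"
>>> environment: production
>>> address: 127.0.0.1
>>> pid_file: log/mongrel.pid
>>> servers: 3
>>> user: USERNAME
>>> group: USERNAME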
>>>
>>> -Ezra
>>>
>>> On Apr 3, 2007, at 3:00 PM, snacktime wrote:
>>>
>>>> Makes sense that mongrel_cluster would handle a lot of edge cases
>>>> better than monit. Is it mainly the pid file handling that has
>>>> been the issue so far?
>>>>
>>>> Have you tried daemontools?  Seems to me like it would be more
>>>> reliable since you wouldn't have to deal with pid files and
>>>> backgrounding mongrel.
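>>>>
>>>> Something like this run script is what I have in mind (just a sketch,
>>>> the paths and username are placeholders):
>>>>
>>>> #!/bin/sh
>>>> # /service/mongrel_5000/run -- keep mongrel in the foreground so
>>>> # supervise watches it directly; no pid file or daemonizing involved
>>>> exec setuidgid USERNAME /usr/bin/mongrel_rails start \
>>>>   -c /data/USERNAME/current -e production -p 5000 2>&1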
>>>>
>>>> Chris
>>>>
>>>> On 4/3/07, Ezra Zygmuntowicz <[EMAIL PROTECTED]> wrote:
>>>>>
>>>>> On Apr 3, 2007, at 1:39 PM, snacktime wrote:
>>>>>
>>>>>> Is there anything mongrel cluster gives you that monit doesn't?
>>>>>> I'll
>>>>>> be using monit to monitor a number of other services anyways,
>>>>>> so it
>>>>>> seems logical to just use it for everything including mongrel.
>>>>>>
>>>>>> Chris
>>>>>>
>>>>>
>>>>> Chris-
>>>>>
>>>>>         When you use monit you can still use mongrel_cluster to
>>>>> manage it. You need the latest pre-release of mongrel_cluster.
>>>>> This is the best configuration I've been able to come up with for
>>>>> 64-bit systems. If you're on a 32-bit system then you can lower
>>>>> the memory limits by about 20-30%.
>>>>>
>>>>> check process mongrel_<%= @username %>_5000
>>>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5000.pid
>>>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5000"
>>>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart       # eating up memory?
>>>>>    if cpu is greater than 50% for 2 cycles then alert                   # send an email to admin
>>>>>    if cpu is greater than 80% for 3 cycles then restart                 # hung process?
>>>>>    if loadavg(5min) greater than 10 for 8 cycles then restart           # bad, bad, bad
>>>>>    if 20 restarts within 20 cycles then timeout                         # something is wrong, call the sysadmin
>>>>>    group mongrel
>>>>>
>>>>> check process mongrel_<%= @username %>_5001
>>>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5001.pid
>>>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5001"
>>>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart       # eating up memory?
>>>>>    if cpu is greater than 50% for 2 cycles then alert                   # send an email to admin
>>>>>    if cpu is greater than 80% for 3 cycles then restart                 # hung process?
>>>>>    if loadavg(5min) greater than 10 for 8 cycles then restart           # bad, bad, bad
>>>>>    if 20 restarts within 20 cycles then timeout                         # something is wrong, call the sysadmin
>>>>>    group mongrel
>>>>>
>>>>> check process mongrel_<%= @username %>_5002
>>>>>    with pidfile /data/<%= @username %>/shared/log/mongrel.5002.pid
>>>>>    start program = "/usr/bin/mongrel_rails cluster::start -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>>>>    stop program = "/usr/bin/mongrel_rails cluster::stop -C /data/<%= @username %>/current/config/mongrel_cluster.yml --clean --only 5002"
>>>>>    if totalmem is greater than 110.0 MB for 4 cycles then restart       # eating up memory?
>>>>>    if cpu is greater than 50% for 2 cycles then alert                   # send an email to admin
>>>>>    if cpu is greater than 80% for 3 cycles then restart                 # hung process?
>>>>>    if loadavg(5min) greater than 10 for 8 cycles then restart           # bad, bad, bad
>>>>>    if 20 restarts within 20 cycles then timeout                         # something is wrong, call the sysadmin
>>>>>    group mongrel
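>>>>>
>>>>>         On a 32-bit system, per the 20-30% note above, the totalmem
>>>>> rule in each stanza would come down to roughly 77-88 MB, for example:
>>>>>
>>>>>    if totalmem is greater than 85.0 MB for 4 cycles then restart       # within the lowered 77-88 MB range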
>>>>>
>>>>>
>>>>>         I went for a while using my own scripts to start and stop
>>>>> mongrel without using mongrel_cluster, but it works more reliably
>>>>> when I use mongrel_cluster and monit together.
>>>>>
>>>>> Cheers-
>>>>> -- Ezra Zygmuntowicz
>>>>> -- Lead Rails Evangelist
>>>>> -- [EMAIL PROTECTED]
>>>>> -- Engine Yard, Serious Rails Hosting
>>>>> -- (866) 518-YARD (9273)
>>>>>
>>>>>
>>>
>>> -- Ezra Zygmuntowicz 
>>> -- Lead Rails Evangelist
>>> -- [EMAIL PROTECTED]
>>> -- Engine Yard, Serious Rails Hosting
>>> -- (866) 518-YARD (9273)
>>>
>>>
>>
>>
>
> -- Ezra Zygmuntowicz 
> -- Lead Rails Evangelist
> -- [EMAIL PROTECTED]
> -- Engine Yard, Serious Rails Hosting
> -- (866) 518-YARD (9273)
>
>

_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
