If you plan on regularly killing your application (for whatever reason),
then this is a pretty good option.  It's a common practice for Apache
modules and FastCGI applications, a hold-over from dealing with older,
leaky C apps.

I'd personally prefer for my Ruby web apps to re-run the GC rather than pay
the startup/shutdown/config-parsing/reconnection costs, but that's because
they are far less likely to leak memory that the GC can't catch or to get
into an unstable state.
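To make that concrete, here's roughly the kind of wrapper I have in mind. This is only a minimal sketch; DeferredGC, the interval, and the counter are made up for illustration and aren't Mongrel or Rails API:

```ruby
# Sketch: keep GC disabled while a request is being served, then collect
# between requests every so often.  DeferredGC is a hypothetical name.
class DeferredGC
  def initialize(interval = 10)
    @interval = interval  # run GC.start once per this many requests
    @count = 0
  end

  # Wrap one request: GC stays off while the block runs, and is
  # re-enabled afterward even if the request raises.
  def around_request
    GC.disable
    yield
  ensure
    GC.enable
    @count += 1
    GC.start if (@count % @interval).zero?
  end
end
```

The ensure block is the important part: it keeps a request that raises from leaving GC permanently disabled.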

- scott

On Fri, Mar 21, 2008 at 1:22 PM, Evan Weaver <[EMAIL PROTECTED]> wrote:

> > The alternative is to terminate your application after N number of
> > requests and never run the GC, which I'm not a fan of.
>
> WSGI (Python) can do that, and it's a pretty nice alternative to
> having Monit kill a leaky app that may have a bunch of requests queued
> up (Mongrel soft shutdown notwithstanding).
>
> Evan
>
> On Fri, Mar 21, 2008 at 3:23 PM, Scott Windsor <[EMAIL PROTECTED]> wrote:
> > On Fri, Mar 21, 2008 at 11:49 AM, Kirk Haines <[EMAIL PROTECTED]> wrote:
> >
> > >
> > > On Fri, Mar 21, 2008 at 12:12 PM, Scott Windsor <[EMAIL PROTECTED]> wrote:
> > > > Sorry for the re-post, but I'm new to the mailing list and wanted
> > > > to bring back up an old topic I saw in the archives.
> > > >
> > > >
> > > > http://rubyforge.org/pipermail/mongrel-users/2008-February/004991.html
> > > >
> > > > I think a patch to delay garbage collection and run it later is
> > > > pretty important for high performance web applications.  I do
> > > > understand the
> > >
> > > In the vast majority of cases you are going to do a worse job of
> > > determining when and how often to run the GC than even MRI Ruby's
> > > simple algorithms.  MRI garbage collection stops the world -- nothing
> > > else happens while the GC runs --  so when talking about overall
> > > throughput on an application, you don't want it to run any more than
> > > necessary.
> > >
> > > I don't use Rails, but in the past I have experimented with this quite
> > > a lot under IOWA, and in my normal applications (i.e. not using
> > > RMagick) I could never come up with an algorithm of self-managed
> > > GC.disable/GC.enable/GC.start that gave the same overall level of
> > > throughput that I got by letting Ruby start the GC according to its
> > > own algorithms.  That experience makes me skeptical of that approach
> > > in the general case, though there are occasional specific cases where
> > > it can be useful.
> > >
> > >
> > > Kirk Haines
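A rough way to repeat that throughput comparison on one's own workload is a harness like the one below. handle_request is a made-up stand-in for a real handler, the interval of 50 is arbitrary, and the results will depend entirely on the app's actual allocation pattern:

```ruby
require 'benchmark'

# Hypothetical stand-in for serving one request: allocate a pile of
# short-lived objects, as a typical request handler would.
def handle_request
  1_000.times.map { |i| "row-#{i}" }.join(",").length
end

N = 500

# Strategy 1: let MRI decide when to collect.
default_gc = Benchmark.realtime do
  N.times { handle_request }
end

# Strategy 2: self-managed GC.disable/GC.enable/GC.start, collecting
# only between "requests".
manual_gc = Benchmark.realtime do
  N.times do |i|
    GC.disable
    handle_request
    GC.enable
    GC.start if (i % 50).zero?
  end
end

puts format("default: %.3fs  manual: %.3fs", default_gc, manual_gc)
```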
> >
> > I understand that the GC is quite knowledgeable about when to run
> > garbage collection when examining the heap.  But the GC doesn't know
> > anything about my application or its state.  The fact that when the GC
> > runs everything stops is why I'd prefer to limit when the GC will run.
> > I'd rather it run outside of serving a web request rather than right
> > in the middle of serving requests.
> >
> > I know that the ideal situation is to not need to run the GC, but the
> > reality is that I'm using various gems and plugins and not all are well
> > behaved and free of memory leaks.  Rails itself may also have regular
> > leaks from time to time, and I'd prefer to have my application
> > consistently be slow than randomly (and unexpectedly) be slow.  The
> > alternative is to terminate your application after N number of requests
> > and never run the GC, which I'm not a fan of.
> >
> > - scott
> >
> > _______________________________________________
> >  Mongrel-users mailing list
> >  Mongrel-users@rubyforge.org
> >  http://rubyforge.org/mailman/listinfo/mongrel-users
> >
>
>
>
> --
> Evan Weaver
> Cloudburst, LLC