> You'll likely either end up using more RAM than you otherwise would
> have in between GC calls, resulting in bigger processes

This is definitely true. Keep in mind that the in-struct mark phase
means that the entire process has to lurch out of swap whenever the GC
runs. Since the process is now much bigger and its pages have idled
longer (so they're more likely to be swapped out), that can be a
pretty brutal hit.
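For reference, a minimal sketch of the disable/enable dance being discussed: defer collection while a request is in flight, then pay the whole mark/sweep cost in one lump between requests. `handle_request` here is a hypothetical stand-in, not anything from Mongrel or Rails.

```ruby
# Hypothetical request handler that churns through throwaway objects.
def handle_request
  10_000.times.map { |i| "row-#{i}" }  # throwaway allocations
  "200 OK"
end

GC.disable              # no collections while the request is in flight
status = handle_request
GC.enable
GC.start                # one big pause, outside the request path
puts status             # => "200 OK"
```

The catch Evan describes is that deferring GC makes the eventual `GC.start` walk a larger, colder heap, so the pause you moved out of the request path gets bigger.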

Evan

On Fri, Mar 21, 2008 at 4:19 PM, Kirk Haines <[EMAIL PROTECTED]> wrote:
> On Fri, Mar 21, 2008 at 1:23 PM, Scott Windsor <[EMAIL PROTECTED]> wrote:
>
>  > I understand that the GC is quite knowledgeable about when to run garbage
>  > collection when examining the heap.  But the GC doesn't know anything about
>  > my application or its state.  The fact that everything stops when the GC
>  > runs is why I'd prefer to limit when the GC will run.  I'd rather it run
>  > outside of serving a web request than right in the middle of serving
>  > requests.
>
>  It doesn't matter if one is looking at overall throughput.  And how
>  long do your GC runs take?  If you have a GC invocation that is
>  noticeable on a single request, your processes must be gigantic, which
>  would suggest to me that there's a more fundamental problem with the
>  app.
>
>
>  > I know that the ideal situation is to not need to run the GC, but the
>  > reality is that I'm using various gems and plugins and not all are well
>  > behaved and free of memory leaks.  Rails itself may also have regular leaks
>
>  No, it's impractical to never run the GC.  The ideal situation, at
>  least where execution performance and throughput on a high performance
>  app is concerned, is to just intelligently reduce how often it needs
>  to run by paying attention to your object creation.  In particular,
>  pay attention to the throwaway object creation.
>
>
>  > from time to time, and I'd prefer to have my application be consistently
>  >  slow rather than randomly (and unexpectedly) slow.  The alternative is to
>  >  terminate your application after N requests and never run the GC, which
>  >  I'm not a fan of.
>
>  If your goal is to deal with memory leaks, then you really need to
>  define what that means in a GC'd language like Ruby.
>  To me, a leak is something that consumes memory in a way that eludes
>  the GC's ability to track it and reuse it.  The fundamental nature of
>  that sort of thing is that the GC can't help you with it.
>
>  If by leaks, you mean code that just creates a lot of objects that the
>  GC needs to clean up, then those aren't leaks.  It may be inefficient
>  code, but it's not a memory leak.
>
>  And in the end, while disabling GC over the course of a request may
>  result in processing that one request more quickly than it would have
>  been processed otherwise, the disable/enable dance is going to cost
>  you something.
>
>  You'll likely either end up using more RAM than you otherwise would
>  have in between GC calls, resulting in bigger processes, or you end up
>  calling GC more often than you otherwise would have, reducing your
>  high performance app's throughput.
>
>  And for the general cases, that's not an advantageous situation.
>
>  To be more specific, if excessive RAM usage and GC costs that are
>  noticeable to the user during requests is a common thing for Rails
>  apps, and the reason for that is bad code in Rails and not just bad
>  user code, then the Rails folks should be the targets of a
>  conversation on the matter.  Mongrel itself, though, does not need to
>  be, and should not be playing manual memory management games on the
>  behalf of a web framework.
>
>
>
>
>  Kirk Haines
>  _______________________________________________
>  Mongrel-users mailing list
>  Mongrel-users@rubyforge.org
>  http://rubyforge.org/mailman/listinfo/mongrel-users
>
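Kirk's advice about throwaway object creation can be sketched like this (an illustration of the general technique, not code from the thread): building a string with `+` allocates a fresh String on every step, while `<<` appends to a single buffer in place, leaving the GC far less garbage to sweep.

```ruby
def wasteful(parts)
  out = ""
  parts.each { |p| out = out + p }  # new String object per iteration
  out
end

def frugal(parts)
  out = ""
  parts.each { |p| out << p }       # mutates one buffer in place
  out
end

parts = %w[alpha beta gamma]
puts wasteful(parts)  # => "alphabetagamma"
puts frugal(parts)    # => "alphabetagamma"
```

Both produce the same result; the difference is purely in how many short-lived objects the GC later has to mark and sweep.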



-- 
Evan Weaver
Cloudburst, LLC
