Navneet Aron <navneeta...@gmail.com> wrote:
> Hi Folks, I have a Rails application in a production environment.
> The app has both a website and REST APIs that are called by mobile
> devices. It has an Apache front end and 8 mongrel_rails processes
> running when I start the cluster. Each of them stabilizes around
> 60 MB initially. After that, pretty rapidly, the memory usage of one
> or two of the mongrel_rails processes climbs up to 300 MB (within
> 30 min). If I leave the system running for 2 hours, pretty much all
> the processes will have reached upwards of 300 MB. (There are also
> times when I can leave the system running pretty much the whole day
> and memory usage will NOT go up to 300 MB.)

Hi,

This sort of stuff depends on your application, too:

* Rule #1: Don't slurp in your application:

  - LIMIT all your SELECT statements in SQL and use will_paginate
    to display results (or whatever pagination helper is hot these
    days)

  - don't read entire files into memory; read in blocks of 8K - 1M
    depending on your IO performance.  Mongrel itself tries to read
    off the socket in 16K chunks.

  - if you run commands that output a lot of crap, read their output
    incrementally with IO.popen, or redirect it to a tempfile and
    read that incrementally; `command` will slurp all of the output
    into memory.  (There are short sketches of the chunked-read and
    IO.popen styles after this list.)

  A huge class of memory usage problems can be solved by avoiding slurping.
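
  Here's a minimal sketch of the non-slurping reads above; the chunk
  size, filenames, command, and the process() helper are all made up
  for illustration:

    # read a file in fixed-size chunks instead of File.read(path)
    File.open("/var/log/big.log") do |f|
      while chunk = f.read(16384)  # 16K at a time, tune to your IO
        process(chunk)             # hypothetical handler
      end
    end

    # read a command's output incrementally instead of `du -a /tmp`
    IO.popen("du -a /tmp") do |pipe|
      pipe.each_line { |line| process(line) }
    end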

* Do you have slow actions that could cause a lot of clients to
  bunch up behind them?  Make those actions faster, and then set
  num_processors to a low-ish number (1-30) in your Mongrel config
  if you have to (example below).  Otherwise one Mongrel could have
  900+ threads queued up waiting on one slow one.  Make all your
  actions fast(ish).
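
  Something like this on the command line, if memory serves; the
  --num-procs flag is what maps to num_processors (double-check
  `mongrel_rails start -h` on your version):

    mongrel_rails start -e production -p 8000 --num-procs 30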

  The _only_ way Mongrel itself can be blamed for memory growth like
  that is having too many threads running; in all other cases it's
  solely the application's/framework's fault :)

I assume you log your requests; look at your logs and find out
whether certain requests are taking a long time (a rough sketch for
doing that follows below).  Or, see if there's a sudden burst of
traffic within a short time period ("short time period" meaning
around the time it takes the longest request on your site to finish).
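
A throwaway script along these lines can help; it assumes the older
Rails "Completed in 0.12345 ..." log format with the request time as
a float in seconds, so adjust the regexp to match your logs:

  # print requests that took longer than one second
  File.foreach("log/production.log") do |line|
    puts line if line =~ /Completed in (\d+\.\d+)/ && $1.to_f > 1.0
  end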

If all requests finish pretty quickly and there was no traffic
spike, then it could be one or a few bad requests that cause your
application to eat memory like mad.  For your idempotent requests,
it would be worth it to set up an isolated instance with one Mongrel,
replay the request logs against it, and log memory growth before and
after each request (sketch below).
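
A rough sketch of that replay loop; it assumes GET-only requests, a
single Mongrel on localhost:8000, Linux /proc for the RSS readings,
and a made-up requests.log with one URL path per line:

  require 'net/http'

  MONGREL_PID = 12345  # pid of the isolated Mongrel

  # resident set size in KB, from the VmRSS line (Linux-only)
  def rss_kb
    File.read("/proc/#{MONGREL_PID}/status")[/^VmRSS:\s*(\d+)/, 1].to_i
  end

  Net::HTTP.start("localhost", 8000) do |http|
    File.foreach("requests.log") do |path|
      path.chomp!
      before = rss_kb
      http.get(path)
      puts "#{rss_kb - before}\tKB growth\t#{path}"
    end
  end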


Back to Rule #1, I semi-recently learned of a change to glibc malloc
that probably caused a lot of issues for slurpers:

  http://www.canonware.com/~ttt/2009/05/mr-malloc-gets-schooled.html

Since Ruby doesn't expose the malloc(3) function, I've released a
(very lightly tested) gem here:
  http://bogomips.org/mall/ ( gem install mall )

> The entire site becomes really slow and I have to restart the
> server. We wanted to profile the app, but we couldn't find a good
> profiling tool for Ruby 1.8.7.

Evan's bleak_house was alright the last time I needed it (ages ago) but
not the easiest to get going.  I haven't needed to use anything lately
but I haven't been doing much Ruby.

Other things to look out for in your app:

  OpenStruct - just avoid them; use Struct or Hash instead.  I can't
               remember exactly what was wrong with them, even, but
               they were BAD (IIRC every attribute you set defines
               singleton methods on that one instance, which gets
               expensive fast).

  finalizers - make sure the blocks you pass to them don't have the
               object you're finalizing bound to them; it's a common
               mistake, and it keeps the object alive forever.  See
               the sketch below.
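
A quick illustration of both points (class and file names made up):

  require 'ostruct'

  os = OpenStruct.new
  os.x = 1                    # defines singleton methods per-instance

  Point = Struct.new(:x, :y)  # methods defined once, on the class
  pt = Point.new(1, 2)

  class Scratch
    def initialize
      @path = "/tmp/scratch.#{$$}"

      # BAD: any proc created here captures self through its binding
      # (even if it never mentions self or @path), so the object can
      # never be collected and the finalizer never runs:
      #   ObjectSpace.define_finalizer(self, proc { File.unlink(@path) })

      # GOOD: build the proc in class scope, passing in only the path
      ObjectSpace.define_finalizer(self, self.class.cleanup(@path))
    end

    # the proc built here can't accidentally capture the instance
    def self.cleanup(path)
      proc { File.unlink(path) }
    end
  end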

-- 
Eric Wong