Some further findings: a "Hello World" Rails application on my rather humble rig (a Dell D620 laptop running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2) can handle over 500 hits per second on the following action:
  def say_hi
    render :text => 'Hi!'
  end

It also doesn't leak any memory at all *when it is not overloaded*. E.g., under maximum non-concurrent load (a single-threaded test client that fires the next request immediately upon receiving the response to the previous one), it stays up forever. When Mongrel + "Hello, World" is overloaded, there is a memory leak to the tune of 6 MB per hour. I have yet to figure out where it is coming from.
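For the record, by "single-threaded test client" I mean nothing fancier than the following sketch (the URL, port and iteration count are placeholders, not what I actually ran):

  require 'net/http'

  # Fire requests back to back over one connection; there is never more
  # than one request in flight, so Mongrel sees zero concurrency.
  Net::HTTP.start('localhost', 3000) do |http|
    10_000.times do
      res = http.get('/hello/say_hi')
      abort "bad response: #{res.code}" unless res.code == '200'
    end
  end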
On 3/7/07, Zed A. Shaw <[EMAIL PROTECTED]> wrote:

> I'd say first off the solution is: just quit doing that.
This is something I can wholeheartedly agree with :)

> maxing out your Mongrel servers then you're seriously screwed anyway
> and there's nothing but -n to help you.
After a couple of hours quietly pumping iron in the gym, I came to the same conclusion. Let me explain myself, however. The situation I am concerned about is 20 to 50 apps clustered on the same set of boxes, written by one group of people and supervised by another. Think "large corporate IT", or shared hosting a la TextDrive. I want to maximize throughput under heavy load, but the more important problem is to keep one app from screwing up the other apps on the same box(es).
> It's as simple as you make threads, threads take ram, threads don't go
> away fast enough.
What I was thinking is that by decoupling the request from its thread, you could probably max out all the capabilities (CPU, I/O, Rails) of a 4-core commodity box with only 15-30 threads: 10-20 request handlers (which either serve static stuff or carry the request to the Rails queue), one Rails handler that loops over the request queue, feeds requests to Rails and drops the responses off in the response queue, and 5-10 response handlers (whose only job is to copy the Rails response from the response queue to the originating socket). Right now, as far as I understand the code, a request is carried all the way through by the same thread. On second thought, though, this is asynchronous communication between threads within the process (sketched below), and far too much design and maintenance overhead for the marginal benefit it may (or may not) bring. Basically, just me being stupid by trying to be too smart. :)
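Just to make the idea concrete, here is a toy sketch of the queue hand-off I had in mind. (handle_with_rails is a made-up stand-in for the actual Rails dispatch, the handler counts are arbitrary, and Mongrel does nothing like this today.)

  require 'thread'

  requests  = Queue.new   # filled by acceptor/request-handler threads
  responses = Queue.new   # drained by response-handler threads

  # Stand-in for the real Rails dispatch (hypothetical):
  def handle_with_rails(request)
    "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nHi!"
  end

  # One Rails handler: Rails is effectively single-threaded anyway, so a
  # single consumer serializes access to it.
  Thread.new do
    loop do
      socket, request = requests.pop
      responses.push([socket, handle_with_rails(request)])
    end
  end

  # A handful of response handlers: copy finished responses to the
  # originating sockets and get out of the way.
  5.times do
    Thread.new do
      loop do
        socket, body = responses.pop
        socket.write(body)
        socket.close
      end
    end
  end

  # Demo: push one fake "request" through the pipeline.
  rd, wr = IO.pipe
  requests.push([wr, "GET /say_hi HTTP/1.1"])
  puts rd.read   # prints the canned response once the handlers finish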
> Until Ruby's IO, GC, and threads improve drastically you'll keep hitting
> these problems.
Yes. In the meantime, the recipe apparently is "serve static stuff through an upstream web server, and use smaller values of --num-procs". A Mongrel that only receives dynamic requests is, essentially, a single-threaded process anyway. The only reason to have more than one (1) thread is so that other requests can queue up while it is doing something that takes time. Cool. By the way, is Ruby 1.9 going to solve all of these issues?
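In command-line terms, that boils down to something like this (the port and the thread cap here are arbitrary numbers of my own, not recommendations):

  mongrel_rails start -e production -p 8000 --num-procs 50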
> No one who is sane is trying to really run a Rails app on a 64 meg VPS --
> thats just asking for a lot of pain.

Well, entry-level slices on most Rails VPS services are 64 MB. My poking around so far seems to say "it's doable, but you need to tune it".

Best regards,
Alex
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users