On 3/6/07, Ken Wei <[EMAIL PROTECTED]> wrote:

> Hey,
>
> Looks like we are on the same stage of the learning curve about this
> stuff. So, let's share our discoveries with the rest of the world :)
>
>   httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test \
>     --num-call 1 --num-conn 10000
>
> The memory usage of the mongrel server grows from 20M to 144M in 20
> seconds, it's [...]
This is exactly what Mongrel does when it cannot cope with the incoming traffic; I discovered the same effect today. You are definitely overloading it with 80 requests per second. After all, it's a single-threaded instance of a fairly CPU-heavy framework; with no page caching it should cope with ~10 to 30 requests per second, max.

The crappy part is that after the overload condition is over, the Mongrel process stays at 150 MB. Not a problem when you are hosting one app on the box, but it becomes one when you're hosting ten.

By the way, check the errors section of the httperf report, and the production.log. See if there are "fd-unavailable" socket errors in the former, and probably some complaints about "too many open files" in the latter. If there are, you need to either increase the number of file descriptors the Linux kernel allows, or decrease the max number of open sockets in the Mongrel(s) with the -n option. I don't know whether that also solves the "RAM footprint growing to 150 MB" problem... I will know first thing tomorrow morning :)

Alex
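
P.S. For anyone following along, here is a rough sketch of how to check and raise the descriptor limits, and how to cap the number of sockets a Mongrel will hold open. The concrete numbers and the extra mongrel_rails flags (environment, port, daemonize) are just illustrative examples, not tested recommendations:

  # per-process fd limit for the current shell
  ulimit -n

  # system-wide limit enforced by the kernel
  cat /proc/sys/fs/file-max

  # raise the per-process limit before starting Mongrel
  # (put it in /etc/security/limits.conf to make it stick across logins)
  ulimit -n 4096

  # start Mongrel with a lower cap on concurrent open sockets (-n)
  mongrel_rails start -e production -p 3000 -n 128 -d

Rerun the same httperf command afterwards and compare the errors section of the report.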
