adam
On 5/10/06, Roberto Saccon <[EMAIL PROTECTED]> wrote:
Zed, thanks very much for this detailed description. One question about 2.a): if no real server just one hop away from the entry web server (or load balancer) is available (e.g. you've only rented a single dedicated server), what is the best alternative:
1) run httperf on the same machine as the web server
2) run httperf on the same machine as the web server, but in its own Xen virtual machine
3) run httperf from the development machine over a DSL connection
regards
Roberto

On 5/10/06, Zed Shaw <[EMAIL PROTECTED]> wrote:
On Wed, 2006-05-10 at 16:15 -0400, Adam Denenberg wrote:
> is there a way to determine the best number of mongrel processes to
> start? Right now I am running 2 in production, but I see some people
> run about 8 or so. What is the cutoff and determining factor for this?
There is no set number that is "best" since it depends on factors like
the type of application, the hardware you run on, how dynamic the
application is, etc.
I've found that 8-12 mongrel processes per CPU is about right, but I
determined this by starting with 1 and then doing the following:
1) You'll need a URL to a small file that is served directly by your
Apache server and not by Mongrel at all. This URL will be your "best
possible baseline".
2) Build your baseline measurement first. Using httperf, measure the
speed of your URL from #1 above so that you know how fast you could
possibly go if you served everything static under ideal conditions.
a) ***** Make sure you do this from a different machine over an ideal
network. Not your damn wifi over a phone line through sixteen poorly
configured routers. Right next to the box you're testing, with a fast
switch and only one hop, is the best test situation. This removes
network latency from your test as a confounding factor. *****
3) Pick a page that's representative of your application. Make sure you
disable logins to make this test easier to run. Hit this Rails page and
compare it to your baseline page.
a) If your *rails* measurement is FASTER than your baseline
measurement then you screwed up. Rails shouldn't be faster than a file
off your static server. Check your config.
b) If your Rails measurement is horribly slow compared to the baseline,
then you've got some configuration to do before you even start tuning
the number of processes. Repeat this test until one mongrel is as fast
as possible.
4) Once you've got a Rails page going at a reasonable speed, then you'll
want to increase the --rate setting to make sure that it can handle the
reported rate.
5) Finally, you alternate between adding a mongrel process and running
test #4 at the next rate you'd expect to get. You basically stop when
adding one more mongrel doesn't improve your rate.
a) Make sure you run one round of test #4 to get the server "warmed
up", and then run the real one. Hell, run like 5 or 6 just to make sure
you're not getting a possibly bad reading.
b) Example: you run #4 and find out the --rate one mongrel can support
is 120 req/second. You add another mongrel and run the test again with
--rate 240. It handles this just fine, so you add another and try --rate
360. OK, you add one more and it dies: giving --rate 480 gets you an
actual rate of only 100. Your server has hit its max and broken down.
Try tuning the --rate down at this point and see if it's totally busted
(like, 4 mongrels only gets you --rate 380) or if it's pretty close to
480. A rough command-line sketch of this loop follows.
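Something like this, assuming your mongrels sit behind the proxy on
ports 8000 and up; the ports, counts and rates here are made-up
examples, and double-check the mongrel_rails flags (-e, -p, -d) against
"mongrel_rails start -h" for your version:

# one mongrel is already on port 8000 and handled --rate 120
mongrel_rails start -e production -p 8001 -d   # add a second mongrel
# (add port 8001 to your Apache/load balancer config, then re-test)
httperf --server www.theserver.com --port 80 --uri /tested --num-conns 2400 --rate 240
mongrel_rails start -e production -p 8002 -d   # add a third mongrel
httperf --server www.theserver.com --port 80 --uri /tested --num-conns 3600 --rate 360
# stop when raising --rate no longer raises the request rate httperf reports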
That should do it. A good practice is to also look at the CPUs on the
server with top and see what kind of thrashing you give the server.
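If you want a quick way to watch that while httperf runs, plain Linux
tools on the server are enough (nothing Mongrel-specific; adjust to
taste):

top -d 1      # refresh every second and watch the mongrel/apache processes
vmstat 1 10   # one-second samples; swapping (si/so) or zero idle CPU means you're thrashing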
HTTPERF
Here are the commands I use for each test, but read the man page for
httperf so that you learn to use it. It's an important tool, and just
cut-and-pasting what I have here is not going to do it for you.
#2 && #3) httperf --server www.theserver.com --port 80 --uri /tested --num-conns <10 second count>

#4) httperf --server www.theserver.com --port 80 --uri /tested --num-conns <10 second count> --rate <reported req/sec>
Where <10 second count> means you put in enough connections to make the
test run for about 10 seconds. Start off with something like 100 and
keep raising it until the run lasts 10 seconds.
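For example (made-up numbers): if the page you're hitting comes back at
around 500 req/second, then 5000 connections gives you roughly a 10
second run:

httperf --server www.theserver.com --port 80 --uri /tested --num-conns 5000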
Where <reported req/sec> means you put in whatever httperf reported as
the estimated requests/second in #3. What you're doing here is seeing
if the server really can handle that much concurrency. Try raising it
and dropping it to see the impact of higher loads on performance.
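Continuing the made-up numbers from above: if #3 reported about 120
req/second for one mongrel, you'd probe around that rate like so,
keeping --num-conns at roughly ten times the --rate:

httperf --server www.theserver.com --port 80 --uri /tested --num-conns 1200 --rate 120
httperf --server www.theserver.com --port 80 --uri /tested --num-conns 1500 --rate 150
httperf --server www.theserver.com --port 80 --uri /tested --num-conns 1000 --rate 100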
Have fun.
--
Zed A. Shaw
http://www.zedshaw.com/
http://mongrel.rubyforge.org/
_______________________________________________
Mongrel-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/mongrel-users
--
Roberto Saccon
