Sean,
I think it would be helpful to further explain your testing scenario.
How do you simulate concurrent users?
What is RSTav?
Is "usersps" sessions per second?
I think most folks use Apache Bench (ab)
http://httpd.apache.org/docs/2.0/programs/ab.html
as a fairly standard tool for HTTP server performance testing.
Would you consider rerunning your test with ab as well?
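For example, something like this would approximate your 1500-user case
(the URL is a placeholder; adjust the counts to match your test):

  ab -n 50000 -c 1500 http://your-app-server/test

Here -c sets the number of concurrent requests and -n the total number
of requests for the run.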
Alternatively, you might look at httperf (see the haproxy web page for
some notes).
One tuning change you might try is lowering your timeouts.
You currently have (values are in milliseconds):
timeout connect 10000
timeout client 300000
timeout server 300000
I typically use considerably smaller values:
timeout connect 5000
timeout client 50000
timeout server 50000
(these are the example defaults listed in section 2.3 of the HAProxy docs)
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
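For reference, those would normally sit in your defaults section,
something like this (a minimal sketch, assuming HTTP mode; keep the
rest of your settings as they are):

  defaults
      mode http
      timeout connect 5000
      timeout client  50000
      timeout server  50000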
Best of luck,
Joel
On 1/29/11 10:53 AM, Sean Hess wrote:
I'm performing real-world load tests for the first time, and my results
aren't making a lot of sense.
Just to make sure I have the test harness working, I'm not testing
"real" application code yet; I'm just hitting a web page that simulates
an IO delay (500 ms) and then serializes out some JSON (about 85 bytes
of content). It's not accessing the database or doing anything other
than printing out that data. My application servers are written in
node.js, on 512MB VPSes on Rackspace (CentOS 5.5).
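The handler is essentially equivalent to this sketch (simplified; the
port and payload here are placeholders, not my exact code):

  var http = require('http');

  http.createServer(function (req, res) {
    // simulate the 500 ms IO delay, then serialize ~85 bytes of JSON
    setTimeout(function () {
      var body = JSON.stringify({ status: 'ok', message: 'simulated response' });
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(body);
    }, 500);
  }).listen(8000);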
Here are the results that don't make sense:
https://gist.github.com/802082
When I run this test against a single application server (the bottom
one), you can see that it stays pretty flat (about 550 ms response
time) until it gets to 1500 simultaneous users, at which point it
starts to error out and slow down.
When I run it against an haproxy instance in front of 4 of the same
nodes (the top one), my performance is worse. It doesn't drop any
connections, but the response time edges up much earlier than against a
single node.
Does this make any sense to you? Does haproxy need more RAM? I was
watching the box while the test was running and the haproxy process
didn't get higher than 20% CPU and 10% RAM.
Please help, thanks!