OK, here are the results from ApacheBench (ab) *before* making any of the
other changes to the system (upgrading to 1.4, timeouts, etc.).

The Test - https://gist.github.com/802251
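Since ab doesn't step through concurrency levels on its own, a stepped run has to be wrapped around it. A rough sketch of what that wrapper could look like (host, port, and step values are placeholders, not what the gist actually uses):

```shell
# Hypothetical stepping wrapper around ab; the real test parameters
# are in the gist above.
for c in 250 500 1000 1500 2000; do
  n=$((c * 10))  # keep total requests proportional to concurrency
  echo "=== ${c} concurrent clients, ${n} requests ==="
  if command -v ab >/dev/null 2>&1; then
    ab -n "$n" -c "$c" -s 2 -q "http://10.0.0.5:8000/" \
      | grep -E 'Requests per second|Failed requests' || true
  fi
done
```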

The results against the cluster (one 256 MB haproxy instance in front of four
512 MB node instances) -
https://gist.github.com/802268
Here's the haproxy status page after the test -
http://dl.dropbox.com/u/1165308/ab_haproxy.png

The results against a single 512 MB node instance - https://gist.github.com/802271

So, the ab numbers look about the same as my harness's: the single instance
still outperforms the cluster, which doesn't make any sense. I'll try those
changes and see if it gets any better.
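For reference, the kind of timeout and queueing settings I'll be trying looks something like this (a sketch assuming haproxy 1.4 syntax; the addresses and values are made up, not my actual config):

```
defaults
    mode http
    # fail fast on dead connects, but leave headroom above the 500 ms responses
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen web *:80
    balance roundrobin
    # per-server maxconn keeps excess requests queued in haproxy
    server node1 10.0.0.11:8000 maxconn 500 check
    server node2 10.0.0.12:8000 maxconn 500 check
    server node3 10.0.0.13:8000 maxconn 500 check
    server node4 10.0.0.14:8000 maxconn 500 check
```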



On Sat, Jan 29, 2011 at 2:44 PM, Sean Hess <[email protected]> wrote:

> Thanks Joel,
>
> I'm working on converting the test to ab (shouldn't take long) and trying
> out 1.4, but to answer your questions: RSTavg is average response time.
> There's a 500 ms timer in the HTTP response, plus some serialization, and
> it's over the local network, so it should be about 550 ms under no load.
>
> Users per second, yes.
>
> I didn't use ab to start because I'm not interested in response time per
> se, but in the load at which response time starts to degrade. I don't know an
> effective way to do that with ab, partly because it doesn't support stepping
> (my test steps through the concurrency levels specified by "users"; I should
> rename Usersps to sessions per second, because if a "user" finishes in less
> than 1 second it starts again right away). My testing harness also lets me
> write tests in my application language, blah blah.. you get the idea. But
> yes, I'll run ab and see if I get the same results.
>
> I'll also try your changes to the timeouts. Thanks for your help!
>
>
>
> On Sat, Jan 29, 2011 at 1:00 PM, Joel Krauska <[email protected]> wrote:
>
>> Speculation, but using a newer version of haproxy (1.4) might also improve
>> performance for you.
>>
>> --Joel
>>
>>
>> On 1/29/11 10:53 AM, Sean Hess wrote:
>>
>>> I'm performing real-world load tests for the first time, and my results
>>> aren't making a lot of sense.
>>>
>>> Just to make sure I have the test harness working, I'm not testing
>>> "real" application code yet. I'm just hitting a web page that simulates
>>> an IO delay (500 ms) and then serializes out some JSON (about 85 bytes
>>> of content). It's not accessing the database or doing anything other
>>> than printing out that data. My application servers are written in
>>> node.js, on 512 MB VPSes on Rackspace (CentOS 5.5).
>>>
>>> Here are the results that don't make sense:
>>>
>>> https://gist.github.com/802082
>>>
>>> When I run this test against a single application server (bottom one),
>>> you can see that it stays pretty flat (about 550 ms response time) until
>>> it gets to 1500 simultaneous users, when it starts to error out and slow
>>> down.
>>>
>>> When I run it against an haproxy instance in front of 4 of the same
>>> nodes (top one), my performance is worse. It doesn't drop any
>>> connections, but the response time edges up much earlier than against a
>>> single node.
>>>
>>> Does this make any sense to you? Does haproxy need more RAM? I was
>>> watching the box while the test was running and the haproxy process
>>> didn't get higher than 20% CPU and 10% RAM.
>>>
>>> Please help, thanks!
>>>
>>
>>
>
