On Sun, Jan 30, 2011 at 06:34:34AM -0700, Sean Hess wrote:
Ah, shoot, I forgot to mention that I'm running that ab test on ten boxes in
parallel, so actual concurrency level is 10x what ab spits out. That's why I
put "1500" above the one that had a concurrency of 150.
Sorry for the confusion.
I'm currently looking into keepalive stuff, going with a simpl
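On the keepalive point: in haproxy 1.4, client-side keepalive is typically enabled with "option http-server-close", which keeps the client connection open while closing the server side after each response so per-request load balancing still works. A minimal sketch (section contents and timeout values are illustrative assumptions, not taken from the gists in this thread):

```
defaults
    mode http
    option http-server-close   # keep client connections alive,
                               # close the server side per request
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```

With this in place, adding "-k" to the ab command line lets the benchmark clients reuse connections instead of paying a TCP handshake for every request.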
On Sat, Jan 29, 2011 at 04:01:33PM -0700, Sean Hess wrote:
> Unfortunately, the results are almost exactly the same with haproxy 1.4 and
> those changes you recommended. I'm so confused...
Your numbers indicate a big problem somewhere:
Concurrency Level: 200
Requests per second: 176.56 [#/sec]
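Those numbers can be sanity-checked with Little's law: with a fixed per-request service time, throughput is bounded by concurrency divided by latency. A quick sketch (the 0.5 s figure is the simulated IO delay mentioned elsewhere in this thread; the helper name is mine):

```python
# Back-of-envelope check (Little's law): with C concurrent connections
# and a fixed per-request latency L, throughput cannot exceed C / L.
def max_throughput(concurrency, latency_s):
    """Upper bound on requests/second for a fixed per-request latency."""
    return concurrency / latency_s

ceiling = max_throughput(200, 0.5)   # 400 req/s should be achievable
observed = 176.56                    # what ab actually reported

# Mean latency implied by the observed rate:
implied_latency = 200 / observed     # about 1.13 s per request
extra_delay = implied_latency - 0.5  # roughly 630 ms of unexplained delay
```

The gap between the 400 req/s ceiling and the observed 176.56 req/s implies each request is spending over 600 ms somewhere beyond the simulated delay, which is why connection setup costs are a prime suspect.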
Hi Sean,
> So, it looks about the same. The single instance outperforms the
> cluster, which doesn't make any sense. I'll try those changes and see if
> it gets any better.
at first glance it looks like your problem (increased latency) could be
related to connection setup costs:
You're using "o
Unfortunately, the results are almost exactly the same with haproxy 1.4 and
those changes you recommended. I'm so confused...
Thanks for your help!
On Sat, Jan 29, 2011 at 3:25 PM, Sean Hess wrote:
Ok, here are the results from apache benchmark *before* making any other
changes to the system (1.4, timeouts, etc).
The Test - https://gist.github.com/802251
The Results against the 1*256 haproxy -> 4*512 node cluster -
https://gist.github.com/802268
Here's the haproxy status after the test -
ht
(Sorry for the double-post Joel, I accidentally only sent this to you
instead of the mailing list)
Thanks Joel,
I'm working on converting the test to ab (shouldn't take long) and trying
out 1.4, but to answer your questions. RSTavg is average response time.
There's a 500ms timer in the http respo
Sean,
I think it would be helpful to further explain your testing scenario.
How do you simulate concurrent users?
What is RSTav?
Usersps is sessions per second??
I think most folks use Apache Bench
http://httpd.apache.org/docs/2.0/programs/ab.html
as a fairly common industry standard for HTTP
Oh, here's my haproxy config
https://gist.github.com/802098
and here's what my haproxy status looks like shortly after the test
http://dl.dropbox.com/u/1165308/haproxy.png
On Sat, Jan 29, 2011 at 11:53 AM, Sean Hess wrote:
I'm performing real-world load tests for the first time, and my results
aren't making a lot of sense.
Just to make sure I have the test harness working, I'm not testing "real"
application code yet, I'm just hitting a web page that simulates an IO delay
(500 ms), and then serializes out some json (
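Sean's test page itself isn't shown in the thread, but a minimal stand-in for "sleep 500 ms, then serialize out some JSON" could look like this Python sketch (the handler name, port, and JSON fields are assumptions, not his actual code):

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class DelayedJSONHandler(BaseHTTPRequestHandler):
    """Sleep 500 ms to fake an IO wait, then return a small JSON body."""

    def do_GET(self):
        time.sleep(0.5)                       # the simulated IO delay
        body = json.dumps({"status": "ok", "delay_ms": 500}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass                                  # keep benchmark runs quiet

def serve(port=8080):
    """Blocks forever; point ab at http://host:8080/ to drive it."""
    HTTPServer(("", port), DelayedJSONHandler).serve_forever()
```

With a fixed 500 ms sleep per request, any throughput below concurrency/0.5 req/s points at overhead outside the handler itself.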