My goal in this testing was to produce a report for management showing how
haproxy with the four tomcats increases capacity and improves scale versus
direct access to a single tomcat. I thought it would be pretty straightforward
to produce that graph. Unfortunately, the results I received did not line up,
so I figure I have a misconfiguration of some sort. Here are the details.

I have 3 scenarios:
Scenario 1:

                                                 /--> tomcat1
                                /--> machine 3 --
                               /                 \--> tomcat2
  ab on machine 1 --> haproxy on machine 2
                               \                 /--> tomcat3
                                \--> machine 4 --
                                                 \--> tomcat4

Scenario 2:

                                     /--> tomcat1
                    /--> machine 3 --
                   /                 \--> tomcat2
  ab and haproxy on machine 2
                   \                 /--> tomcat3
                    \--> machine 4 --
                                     \--> tomcat4

Scenario 3:

                                  /--> tomcat1
  ab and haproxy on machine 3 --
                                  \--> tomcat2
For all of the scenarios, each box is a VM with 4 processors, 12 GB of RAM, and
Fibre Channel I/O. They are all using the latest drivers from VMware and are
fully patched Red Hat EL 5.5 (latest). Here is the uname string:
2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT 2010 x86_64 x86_64 x86_64
GNU/Linux
In all cases, I tested by running the following commands:
ab -n 10000 -c 100 http://tomcat1:8080/examples/jsp/
ab -n 10000 -c 100 http://haproxy:81/examples/jsp/
I also ran these tests at various levels of concurrency (specifically, at
powers of 2) and received the same results.
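The powers-of-2 sweep can be sketched as a small loop. This is a dry run (it
only echoes the commands); the host and path are the ones from the tests above,
and you would swap in tomcat1:8080 for the direct-to-tomcat runs:

```shell
#!/bin/sh
# Dry-run sketch of the concurrency sweep: ab at powers of 2 from 2 to 1024.
# Remove the leading "echo" to actually drive load.
c=2
while [ "$c" -le 1024 ]; do
    echo ab -n 10000 -c "$c" http://haproxy:81/examples/jsp/
    c=$((c * 2))
done
```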
Here is an example of the results that I receive:
Scenario 1:
ab -n 10000 -c 100 http://tomcat1:8080/examples/jsp/
Concurrency Level: 100
Time taken for tests: 1.602049 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10001
Total transferred: 1630163 bytes
HTML transferred: 0 bytes
Requests per second: 6242.01 [#/sec] (mean)
Time per request: 16.020 [ms] (mean)
Time per request: 0.160 [ms] (mean, across all concurrent requests)
Transfer rate: 993.10 [Kbytes/sec] received
ab -n 10000 -c 100 http://haproxy:81/examples/jsp/
Concurrency Level: 100
Time taken for tests: 1.821838 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10000
Total transferred: 1930000 bytes
HTML transferred: 0 bytes
Requests per second: 5488.96 [#/sec] (mean)
Time per request: 18.218 [ms] (mean)
Time per request: 0.182 [ms] (mean, across all concurrent requests)
Transfer rate: 1034.12 [Kbytes/sec] received
Scenario 2:
ab -n 10000 -c 100 http://tomcat1:8080/examples/jsp/
Concurrency Level: 100
Time taken for tests: 1.558019 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10002
Total transferred: 1630326 bytes
HTML transferred: 0 bytes
Requests per second: 6418.41 [#/sec] (mean)
Time per request: 15.580 [ms] (mean)
Time per request: 0.156 [ms] (mean, across all concurrent requests)
Transfer rate: 1021.81 [Kbytes/sec] received
ab -n 10000 -c 100 http://haproxy:81/examples/jsp/
Concurrency Level: 100
Time taken for tests: 1.915394 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10012
Total transferred: 1932316 bytes
HTML transferred: 0 bytes
Requests per second: 5220.86 [#/sec] (mean)
Time per request: 19.154 [ms] (mean)
Time per request: 0.192 [ms] (mean, across all concurrent requests)
Transfer rate: 985.18 [Kbytes/sec] received
Scenario 3:
ab -n 10000 -c 100 http://tomcat1:8080/examples/jsp/
Concurrency Level: 100
Time taken for tests: 1.765041 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10010
Total transferred: 1631630 bytes
HTML transferred: 0 bytes
Requests per second: 5665.59 [#/sec] (mean)
Time per request: 17.650 [ms] (mean)
Time per request: 0.177 [ms] (mean, across all concurrent requests)
Transfer rate: 902.53 [Kbytes/sec] received
ab -n 10000 -c 100 http://haproxy:81/examples/jsp/
Concurrency Level: 100
Time taken for tests: 2.978128 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Non-2xx responses: 10023
Total transferred: 1934439 bytes
HTML transferred: 0 bytes
Requests per second: 3357.81 [#/sec] (mean)
Time per request: 29.781 [ms] (mean)
Time per request: 0.298 [ms] (mean, across all concurrent requests)
Transfer rate: 634.29 [Kbytes/sec] received
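As a sanity check on the figures above, ab's requests-per-second number is just
completed requests divided by wall time; recomputing scenario 1's
direct-to-tomcat run from the raw fields reproduces it:

```shell
# Recompute scenario 1's req/s: 10000 complete requests / 1.602049 seconds,
# both taken from the ab output above.
rps=$(awk 'BEGIN { printf "%.2f", 10000 / 1.602049 }')
echo "$rps"   # matches the 6242.01 [#/sec] ab reported
```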
I appreciate any insights you can provide, or even just ideas :)
LES
On Oct 7, 2010, at 12:18 AM, Hank A. Paulson wrote:
> What did the haproxy stats web page show during the test?
> How long was each test run? Many people seem to run ab for only a few seconds.
> Was tomcat "doing" anything for the test URLs? I am a bit shocked you got
> 3700 rps from tomcat. Most apps I have seen on it fail at much lower rps.
>
> Raise the maxconn for each server and for the frontend and see if you get
> better results.
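That maxconn suggestion maps onto config lines like these (the numbers here are
illustrative, chosen only to sit well above the test's peak concurrency; they
are not from the original mails):

```
global
    maxconn 20000          # posted config had 500 per process

listen erp_cluster_https 0.0.0.0:81
    maxconn 20000          # frontend limit
    server tomcat01-instance1 192.168.60.156:8080 cookie A check maxconn 5000
```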
>
> On 10/6/10 7:11 PM, Les Stroud wrote:
>> I did a little more digging and found several blogs suggesting that I will
>> take a performance hit on virtual platforms. In fact, this guy
>> (http://www.mail-archive.com/[email protected]/msg03119.html) seems to
>> have the same problem. The part that concerns me is not the overall
>> performance, but that I am getting worse performance with 4 servers than I am
>> with 1 server. I realize there are a lot of complications, but I have to be
>> doing something very wrong to get a decrease.
>>
>> I have even tried putting haproxy on the same server as 2 tomcat servers and
>> using 127.0.0.1 to take as much of the network out of the picture as
>> possible. I still get a lower number of requests per second when going
>> through haproxy to the 2 tomcats (as opposed to going directly to one of the
>> tomcats). This test uses ab locally on the same machine.
>>
>> I have tried all of the sysctl settings that I have found listed on the
>> board.
>> Is there anything I am missing??
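For reference, the sysctl tunings most commonly suggested on this list for
benchmarking look like the fragment below (values are illustrative, not taken
from the original mails):

```
# /etc/sysctl.conf fragment -- typical load-testing tunings
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for ab/haproxy
net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for outbound connects
net.core.somaxconn = 4096                   # larger accept backlog
net.ipv4.tcp_max_syn_backlog = 8192         # larger SYN backlog under bursts
```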
>>
>> I appreciate the help,
>> Les Stroud
>>
>> On Oct 6, 2010, at 3:56 PM, Les Stroud wrote:
>>
>>> I figured I would find answers to this in the archive, but have been unable
>>> to, so I appreciate the time.
>>>
>>> I am setting up an haproxy instance in front of some tomcat instances. As a
>>> test, I ran ab against one of the tomcat instances directly with an
>>> increasing number of concurrent connections. I then repeated the same test
>>> with haproxy fronting 4 tomcat servers. I was hoping to see that the haproxy
>>> setup would perform a higher number of requests per second and hold that
>>> higher number with increasingly high traffic. Unfortunately, it did not.
>>>
>>> Hitting the tomcat servers directly, I was able to get in excess of 3700
>>> rqs/s. With haproxy in front of that tomcat instance and three others (using
>>> roundrobin), I never surpassed 2500. I also did not find that I was able to
>>> handle an increased amount of concurrency (both started giving errors around
>>> 20000).
>>>
>>> I have tuned the TCP params on the Linux side per the suggestions I have
>>> seen on here. Are there any other places I can look to figure out what I
>>> have wrong in my configuration?
>>>
>>> Thanx,
>>> LES
>>>
>>>
>>> ---
>>>
>>> haproxy.cfg
>>>
>>> global
>>>     #log loghost local0 info
>>>     maxconn 500
>>>     nbproc 4
>>>     stats socket /tmp/haproxy.sock level admin
>>>
>>> defaults
>>>     log global
>>>     clitimeout 60000
>>>     srvtimeout 30000
>>>     contimeout 4000
>>>     retries 3
>>>     option redispatch
>>>     option httpclose
>>>     option abortonclose
>>>
>>> listen stats 192.168.60.158:8081
>>>     mode http
>>>     stats uri /stat   # change this to serve the stats page at a different path
>>>     stats enable
>>>
>>> listen erp_cluster_https 0.0.0.0:81
>>>     mode http
>>>     balance roundrobin
>>>     option forwardfor except 0.0.0.0
>>>     reqadd X-Forwarded-Proto:\ https
>>>     cookie SERVERID insert indirect
>>>     server tomcat01-instance1 192.168.60.156:8080 cookie A check
>>>     server tomcat01-instance2 192.168.60.156:18080 cookie A check
>>>     server tomcat02-instance1 192.168.60.157:8080 cookie A check
>>>     server tomcat02-instance2 192.168.60.157:18080 cookie A check
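One way to capture what the stats show while ab is running: the
`stats socket /tmp/haproxy.sock level admin` line in the config above makes the
counters scriptable. A minimal sketch, assuming socat is installed (any
UNIX-domain-socket client would do); it just prints the command when the socket
is absent:

```shell
#!/bin/sh
# Sample haproxy's counters via the admin socket from the config above.
SOCK=/tmp/haproxy.sock
for q in "show info" "show stat"; do
    if [ -S "$SOCK" ]; then
        echo "$q" | socat stdio "$SOCK"
    else
        echo "would run: echo '$q' | socat stdio $SOCK"
    fi
done
```

Snapshotting "show stat" before and after a run gives per-server session and
error counts to compare against ab's numbers.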
>>
>