Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-18 Thread Willy Tarreau
On Thu, Aug 16, 2012 at 01:43:50PM -0400, Saul Waizer wrote:
 Well, it turns out it was option httpclose that was set in the
 defaults section.
 
 I commented out both httpclose and http-server-close and got the desired
 throughput, 2k+ req/sec; then I enabled http-server-close, ran the test
 again, and still got the desired throughput. Enabling httpclose made it go
 down to 100 req/sec. Why would this cause such behavior, though?

It may be many things. It's possible that you're saturating a firewall
somewhere in the chain between the client and haproxy. Since we don't
know whether the 100 req/s is stable during the whole test or is just an
average measured at the end, it could very well be that you reach the
total number of connections the firewall supports and have to wait for
them to time out.
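A back-of-the-envelope sketch of that hypothesis (the 60 s TIME_WAIT and the 65536-entry default conntrack table are assumptions for illustration, not measurements from this setup):

```python
# Rough estimate of stateful-firewall (conntrack) pressure when every
# HTTP request opens and closes its own TCP connection through the proxy.

TARGET_RPS = 2000          # throughput the backend servers can sustain
CONNS_PER_REQUEST = 2      # client->haproxy plus haproxy->server
TIME_WAIT = 60             # seconds a closed connection lingers (assumed)
CONNTRACK_MAX = 65536      # common default table size (assumed)

# Steady-state table entries needed to sustain the target rate:
entries_needed = TARGET_RPS * CONNS_PER_REQUEST * TIME_WAIT
print(entries_needed)      # 240000 -- far above the 65536 default

# Conversely, the rate such a table can sustain before filling up:
sustainable_rps = CONNTRACK_MAX / (CONNS_PER_REQUEST * TIME_WAIT)
print(round(sustainable_rps))  # 546 req/s at best, less with other traffic
```

Once the table is full, new connections stall until old entries expire, which caps throughput well below what the servers can do.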

The ratio is too high to be just a matter of virtual vs physical, because
there are not *that* many extra packets: by default, without tuning, closing
the connection roughly doubles the packet count per request.

It's also possible that jmeter takes a lot of time to establish a connection,
for instance because it's designed to work with connection pools by default
and recycles closed connections inefficiently. You should vary the components
you're using in order to find the issue, because quite clearly a limit of
100 req/s is extremely low; it could be handled by a watch!

Regards,
Willy




Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-17 Thread Baptiste
Hi,

To summarize: with httpclose, you get around 100 rps.
With no option set, you get 2K rps (which means your servers can do HTTP
keep-alive).
When you enable option http-server-close only, you also get 2K rps, because
HAProxy still maintains HTTP keep-alive on the client side (only the
server-side connection is closed after each response).
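In configuration terms, the fast combination would look like this sketch (the backend name and server addresses are placeholders):

```
defaults
    mode http
    # no "option httpclose": clients keep their connections open
    option http-server-close   # HAProxy closes only the server-side connection

backend app
    balance roundrobin
    server app1 192.0.2.11:8080 check
    server app2 192.0.2.12:8080 check
```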

Two possibilities:
1. There is a magic setting in JMeter which allows a maximum of 100
connections opened per bench
(have a look at the concurrent pool size and tell us its value, if any).
2. Since you're in a virtual environment, when you disable keep-alive
on the client side, you're doing a lot of network exchanges with very
small packets, which is the worst case for any hypervisor.
Have a look at the latest graph on this page, which shows the
performance loss of each virtual network layer currently available
on the market:
http://blog.exceliance.fr/2012/04/24/hypervisors-virtual-network-performance-comparison-from-a-virtualized-load-balancer-point-of-view/

Cheers



Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-17 Thread Amol
Thanks guys,
yeah, I read more about HTTP keep-alive and figured that using option
http-server-close would be advisable for performance/concurrency's sake.

Thanks again..





Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-16 Thread Saul Waizer
Hey Willy, thanks for your response, answers below:


- Is this load stable or does it vary a lot during the test ?

The load is pretty stable; it doesn't seem to go above 0.70 max.

- Do you have conntrack loaded on the LB ?

It's installed, any specific command you want me to try?

- Is the LB a real or virtual machine ?

Virtual, the entire environment is virtual including the openAM servers

- Are you observing a high CPU or network usage anywhere in the chain ?

There is an initial spike in CPU when Jmeter starts, but that's normal if you
don't use a ramp-up period.

- If you remove one of your servers, does the throughput remain the same or
does it drop by half ?

Stays exactly the same, probably because of the sticky session?

The only thing I'm seeing that is wrong in your config is that you should
remove the option httpclose statement in the defaults section and in the
backend section, but I'm pretty sure that at such a low load, it won't make
any difference.

I have removed it and tested with and without it; it makes no difference. The
strangest thing is that it seems like you reach a limit and it won't go over
80-100 req/sec.

One last thing I forgot to mention, I am testing on a hot standby HAproxy
that is configured exactly as the first one and I use Keepalived for high
availability, so Keepalived is the only other process running on the box.

Any ideas?
Thanks





Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-16 Thread Baptiste
 Any ideas?
 Thanks



Hi,

It could be interesting to have a look at the HAProxy logs :)
They may provide useful information about network and application
response times (enable the http-server-close option).
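With option httplog enabled (a minimal sketch; the log target and facility are placeholders), HAProxy 1.4 records per-request timers:

```
defaults
    log 127.0.0.1 local0
    mode http
    option httplog             # detailed HTTP logs with per-request timers
    option http-server-close
```

In the resulting HTTP log lines, the Tq/Tw/Tc/Tr/Tt fields break each request down into request-read, queue, server-connect, server-response, and total time, so for instance a slow backend connect shows up as a large Tc.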

cheers



Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-16 Thread Saul Waizer
Well, it turns out it was option httpclose that was set in the
defaults section.

I commented out both httpclose and http-server-close and got the desired
throughput, 2k+ req/sec; then I enabled http-server-close, ran the test
again, and still got the desired throughput. Enabling httpclose made it go
down to 100 req/sec. Why would this cause such behavior, though?

Thanks




Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-16 Thread Amol
The only thing I'm seeing that is wrong in your config is that you should
remove the option httpclose statement in the defaults section and in the
backend section,

hi willy, i am trying to understand why option httpclose would be a problem?
is it because haproxy has to do more work on both ends to add the
Connection: close header?
i have had it set in my environments for a while now...
should i be using option forceclose instead?







RE: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-16 Thread Lukas Tribus

 i am trying to understand why option httpclose would be a problem?

With httpclose in your configuration, you need 2 TCP sessions per *request* on
your haproxy box. When you disable httpclose and enable only http-server-close,
you will use keep-alive between the client and haproxy.

With keep-alive, you will have a lower number of TCP sessions and a far lower
number of packets on your network (think about the packets exchanged if every
HTTP request needs a full 3-way TCP handshake). Haproxy is not the bottleneck
with httpclose, but something else is.

That can be conntrack on the haproxy box, conntrack on the client box, or any
firewall in between. The bottleneck can also be your virtualization system.
Make sure you assign static, dedicated resources to the VM (disable anything
like RAM ballooning) and unload conntrack from every box (client and haproxy).
Also, check for packet loss, input and CRC errors on your NICs.
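A rough per-request packet count illustrates the difference (a simplified sketch: it ignores ACK coalescing and assumes one-packet requests and responses):

```python
# Simplified TCP packet counts per HTTP request, on one side of the proxy.

HANDSHAKE = 3   # SYN, SYN-ACK, ACK
TRANSFER = 4    # request, ACK, response, ACK (small messages assumed)
TEARDOWN = 4    # FIN, ACK, FIN, ACK

# httpclose: every request pays for connection setup and teardown.
per_request_close = HANDSHAKE + TRANSFER + TEARDOWN
print(per_request_close)       # 11 packets

# keep-alive: setup/teardown are amortized over many requests.
per_request_keepalive = TRANSFER
print(per_request_keepalive)   # 4 packets
```

And since the proxy sits on two connections per request, that overhead is paid twice per request with httpclose.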

  


major performance decrease in total throughput with HAproxy 1.4.20

2012-08-15 Thread Saul Waizer
Hey list,

I am having a strange issue with my latest implementation of HAproxy. I
have 2 openAM servers (tomcat) behind my haproxy box running version 1.4.20
on Ubuntu 10 X_86, all properly configured to be behind a load balancer. I
used Jmeter to test the openAM servers individually and both give
consistent results of ~1600-1800 req/sec, however, when I run the same
exact test through the HAproxy I can barely get 100 req/sec! This setup in
theory should allow me to double my throughput.

Note: This is a pretty decent server, 4 GB of RAM and 4 processors, with
nothing other than HAproxy running.

My relevant HAproxy config below:

#-
# Global settings Main
#-
global
    log 127.0.0.1 local0 info
    pidfile /var/run/haproxy.pid
    # stats socket /var/run/haproxy.stat mode 666
    maxconn 65000
    user    haproxy
    group   haproxy
    daemon


defaults
    mode    http
    log     global
    option  dontlognull
    option  httplog
    option  httpclose
    option  http-server-close
    option  forwardfor except 127.0.0.0/8
    option  redispatch
    stats enable
    stats uri /st
    timeout connect 5000  # default 5-second timeout if a backend is not found
    timeout client  300s
    timeout server  300s
    #timeout http-request    10s
    #timeout queue           1m
    #timeout http-keep-alive 10s
    timeout check   5s
    maxconn 65000
    retries 3


frontend sso *:8080
 default_backend   sso

acl sso1 hdr_dom(Host) -i auth.mydomain.lan

use_backend sso if sso1

backend sso
mode http
stats enable
option httpclose
cookie SERVERID insert nocache
#appsession amlbcookie len 20 timeout 3h request-learn
option httpchk HEAD /opensso/isAlive.jsp HTTP/1.0
balance roundrobin
server openam 10.1.1.5:8080 cookie 01 id 1001 check weight 100
server openam2 10.1.1.6:8080 cookie 02 id 1002 check weight 100


Thank you in advance for any assistance in this matter.


Re: major performance decrease in total throughput with HAproxy 1.4.20

2012-08-15 Thread Willy Tarreau
Hi Saul,

On Wed, Aug 15, 2012 at 02:43:57PM -0400, Saul Waizer wrote:
 Hey list,
 
 I am having a strange issue with my latest implementation of HAproxy. I
 have 2 openAM servers (tomcat) behind my haproxy box running version 1.4.20
 on Ubuntu 10 X_86, all properly configured to be behind a load balancer. I
 used Jmeter to test the openAM servers individually and both give
 consistent results of ~1600-1800 req/sec, however, when I run the same
 exact test through the HAproxy I can barely get 100 req/sec! This setup in
 theory should allow me to double my throughput.

Wow, 100 req/s is pretty low. Is this load stable or does it vary a lot
during the test ? Do you have conntrack loaded on the LB ? Is the LB a
real or virtual machine ? Are you observing a high CPU or network usage
anywhere in the chain ? If you remove one of your servers, does the
throughput remain the same or does it drop by half ?

The only thing I'm seeing that is wrong in your config is that you should
remove the option httpclose statement in the defaults section and in the
backend section, but I'm pretty sure that at such a low load, it won't make
any difference.

Regards,
Willy