You aren't using HTTPS on the frontend when benchmarking haproxy, but plain 
HTTP when benchmarking the original server directly, are you? That alone could 
explain the performance difference.


Anyway, you do want to enable keepalive; to do that, remove 
"option httpclose" from the sections and add "option http-server-close" to 
both the frontend and the backend [1].
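With the section names from the config you posted, the change would look 
roughly like this (only the relevant lines shown, untested):

    frontend ft_test
        # keep client-side connections open between requests
        option http-server-close

    backend bk_site.com
        # remove "option httpclose" from this section
        option http-server-close

The same applies to your "listen HAPROXYSERVER" section, where 
"option http-server-close" is currently commented out and "option httpclose" 
is still active.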

You can unload the conntrack module in your kernel with "modprobe -r <module>" 
or "rmmod <module>" (insmod only loads modules; check what is loaded with 
lsmod), but consider that you may still need it for stateful iptables rules 
(check for "state" or "ESTABLISHED" matches in the "iptables -vnL" output).


Lukas


[1] 
http://cbonte.github.com/haproxy-dconv/configuration-1.5.html#option%20http-server-close



________________________________
> Date: Thu, 7 Mar 2013 12:32:24 +0200 
> From: [email protected] 
> CC: [email protected] 
> Subject: Re: Lots of TIME_WAITs and slow response time 
> 
> Lukas, 
> thanks for your feedback on this. 
> 
> To be honest I am not sure about keepalive on either side (I just 
> started reading about this to get a better overview), but here are some 
> numbers from running the command 
> netstat -a -n | grep "IP:80" | wc -l 
> - on the client side, there are 24K connections 
> - on the server side (each server) there are only some 10 connections 
> 
> I am not sure I understand what you mean by "unload conntrack for best 
> performance" 
> 
> There are no network issues, I double-checked. 
> 
> 
> Here's my HAproxy config 
> global 
> maxconn 16384 
> pidfile /var/run/haproxy.pid 
> daemon 
> 
> defaults 
> mode http 
> retries 3 
> option redispatch 
> maxconn 14000 
> timeout client 70s 
> timeout server 70s 
> timeout connect 5s 
> 
> frontend ft_test 
> mode http 
> bind IP1_HAPROXY:443 ssl crt /etc/haproxy/certificates/cert1.pem 
> bind IP2_HAPROXY:443 ssl crt /etc/haproxy/certificates/cer2.pem 
> default_backend bk_site.com 
> 
> backend bk_site.com 
> mode http 
> cookie Site insert indirect nocache 
> option abortonclose 
> option httpclose 
> option forwardfor 
> 
> server NS31 IP_SERVER1:80 check inter 5000 rise 2 fall 3 cookie Server1 maxconn 7000 maxqueue 5000 
> server NS32 IP_SERVER2:80 check inter 5000 rise 2 fall 3 cookie Server2 maxconn 7000 maxqueue 5000 
> 
> listen HAPROXYSERVER 0.0.0.0:80 
> mode http 
> #option http-server-close 
> stats enable 
> stats auth user:pass 
> balance roundrobin 
> cookie Site insert indirect nocache 
> #option http-server-close 
> option httpchk HEAD /v2/index.php HTTP/1.1\r\nHost:\ api.site.com 
> option abortonclose 
> option httpclose 
> option forwardfor 
> option tcp-smart-accept 
> option tcp-smart-connect 
> 
> 
> #config with IP affinity 
> server NS31 IP_SERVER1:80 check inter 5000 rise 2 fall 3 cookie Server1 maxconn 7000 maxqueue 5000 
> server NS32 IP_SERVER2:80 check inter 5000 rise 2 fall 3 cookie Server2 maxconn 7000 maxqueue 5000 
> 
> listen HAPROXYSERVERMySQL 0.0.0.0:3306 
> mode tcp 
> #option httpchk GET /mysqlchk/?port=3306 
> balance roundrobin 
> #option abortonclose 
> #option tcp-smart-accept 
> #option tcp-smart-connect 
> 
> server NS31 IP_SERVER1:3306 check inter 5000 rise 2 fall 3 maxconn 7000 maxqueue 5000 
> server NS32 IP_SERVER1:3306 check inter 5000 rise 2 fall 3 maxconn 7000 maxqueue 5000 
> 
> 
> 
> On 03/06/2013 08:31 PM, Lukas Tribus wrote: 
> 
> 
> Are you using keepalive on haproxy? Perhaps you are benchmarking nginx with 
> keepalive enabled but haproxy with client-side keepalive disabled? 
> 
> Can you share the haproxy config? 
> 
> Thomas is right, you should probably unload conntrack for best performance. 
> 
> Also make sure you don't have any network issues (packet loss or reordering). 
> 
> 
> Regards, 
> 
> Lukas 
> 
> 
> ---------------------------------------- 
> 
> 
> Date: Wed, 6 Mar 2013 17:27:19 +0200 
> From: [email protected] 
> To: [email protected] 
> Subject: Lots of TIME_WAITs and slow response time 
> 
> 
> Hi, 
> I have a 3 server architecture, a HAproxy that sends balanced traffic to 
> 2 Nginx servers. 
> 
> I noticed that if I run "ab -n 1000 -c 100" directly against the Nginx server 
> I get response times between 40 and 80 ms. 
> If I run the same test via HAproxy I get results between 120 and 
> 4000 ms, and that's if I'm lucky: sometimes I have to wait up to 30 seconds 
> to get all the replies back from the HAproxy server. 
> 
> I've been struggling to tune the kernel parameters and other things for days 
> now, but I still cannot make this work properly. 
> I noticed that the HAproxy server always has at least 20K 
> connections in TIME_WAIT (netstat -a -n | grep TIME_WAIT | wc -l), 
> while on the Nginx servers that number is about 600 each. 
> 
> I suspect this is why the service doesn't always 
> perform well, but I would surely appreciate any advice from you. 
> Also, let me know what kernel params or configuration files you might 
> need me to share with you in order to get a better understanding. 
> 
> Thank you, 
> Alex 
> 
> 
> 
> 
> 
> 
> -- 
> Alex Florescu 
> http://PagePeeker.com 
> [email protected] 
> twitter: @PagePeeker 
> facebook: https://www.facebook.com/pagepeeker 
                                          
