Re: client keep-alive when servers
Hi Willy,

On 01/02/2013 08:44, Willy Tarreau wrote:
> I have some vague memories of someone here reporting this on the tomcat
> ML, resulting in a fix one or two years ago, but I may be confusing it
> with something else. Maybe you should experiment with newer versions?
> What you need is just for the server to emit Content-Length with every
> response, including close ones, and to fall back to chunked encoding
> when the content length is unknown.

You're right, this was discussed here on the mailing list:
http://thread.gmane.org/gmane.comp.web.haproxy/2755

Then, Óscar opened a thread on the tomcat one:
http://marc.info/?t=12701162342r=1w=2

The issue was fixed in Tomcat 6.0.27 and 5.5.29.

Cheers

--
Cyril Bonté
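As an aside, the framing requirement Willy describes can be sketched in a few lines. This is plain Python and purely illustrative (not Tomcat or HAProxy code): a keep-alive client can only reuse a connection if it can tell where the response body ends, either via Content-Length or via chunked encoding.

```python
# Two ways a server can delimit a keep-alive response body:
# 1) declare the size up front with Content-Length,
# 2) stream it with Transfer-Encoding: chunked when the size is unknown.
# Function names here are made up for illustration.

def frame_with_content_length(body: bytes) -> bytes:
    # Size known in advance: the header tells the client where the body ends.
    return b"Content-Length: %d\r\n\r\n%s" % (len(body), body)

def frame_chunked(parts) -> bytes:
    # Size unknown: each chunk is prefixed with its hex length; a
    # zero-length chunk terminates the body.
    out = b"Transfer-Encoding: chunked\r\n\r\n"
    for p in parts:
        out += b"%x\r\n%s\r\n" % (len(p), p)
    return out + b"0\r\n\r\n"

print(frame_with_content_length(b"hello"))   # b'Content-Length: 5\r\n\r\nhello'
print(frame_chunked([b"hel", b"lo"]))
```

Without either of these, the server's only way to signal end-of-body is to close the connection, which is exactly what defeats keep-alive.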
Re: client keep-alive when servers
Hi Cyril,

On Fri, Feb 01, 2013 at 09:07:35AM +0100, Cyril Bonté wrote:
> You're right, this was discussed here on the mailing list:
> http://thread.gmane.org/gmane.comp.web.haproxy/2755
> Then, Óscar opened a thread on the tomcat one:
> http://marc.info/?t=12701162342r=1w=2
> The issue was fixed in Tomcat 6.0.27 and 5.5.29.

Great, thanks a lot for the pointers!

Willy
Comparison to nginx
Hi,

I'm looking for some advice in comparing HAProxy to nginx. I've been happily using HAProxy for all my load balancing needs for the past few years and in my opinion it's great.

I've recently been working to deploy it in my latest role, but am coming up against resistance from supporters of nginx, which, granted, is already a technology widely used in the company, but not one that I have any experience with.

Below is the configuration I have developed for my requirements with HAProxy. I was hoping that someone familiar with both technologies could comment on anything I would be losing if I did give in and use nginx instead. Comments on improvements to the HAProxy configuration are also welcome.

Thanks,
Will Lewis

---

global
    daemon
    quiet
    maxconn 20
    pidfile /local/haproxy/haproxy.pid
    uid 60003
    gid 1001
    chroot /local/haproxy/run
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    log-tag haproxy

defaults
    log global
    balance roundrobin
    mode http
    monitor-uri /haproxy
    http-check send-state
    retries 3
    timeout connect 6000
    timeout client 102
    timeout server 102
    timeout http-request 6000
    option abortonclose
    option forwardfor except 127.0.0.1
    option http-pretend-keepalive
    option http-server-close
    option httplog
    option log-health-checks
    option log-separate-errors
    option redispatch
    option tcpka
    option splice-auto
    errorfile 200 /local/haproxy/errorfiles/200.http
    errorfile 400 /local/haproxy/errorfiles/400.http
    errorfile 403 /local/haproxy/errorfiles/403.http
    errorfile 408 /local/haproxy/errorfiles/408.http
    errorfile 500 /local/haproxy/errorfiles/500.http
    errorfile 502 /local/haproxy/errorfiles/502.http
    errorfile 503 /local/haproxy/errorfiles/503.http

listen stats :7000
    mode http
    stats uri /

frontend external
    bind *:8081
    bind *:8443 ssl crt /local/haproxy/certs/main.pem crt /local/haproxy/certs/
    bind *:8444 ssl crt /local/haproxy/certs/partner.pem crt /local/haproxy/certs/
    acl is_secure dst_port eq 8443 8444
    maxconn 20
    # Capture User-Agent and X-Forwarded-For headers to the log
    capture request header User-agent len 45
    capture request header X-Forwarded-For len 15
    # Capture any 302 redirects to the log
    capture response header Location len 20
    # Capture content length to the log
    capture response header Content-length len 9
    compression algo gzip
    compression type text/cmd text/css text/csv text/html text/javascript text/plain text/vcard text/xml application/json application/x-www-form-urlencoded application/javascript application/x-javascript
    compression offload
    # Remove any X-Proto header added by an external source
    reqidel ^X-Proto:.*
    # Presence of the X-Proto: SSL header now genuinely indicates we received the request over SSL
    reqadd X-Proto:\ SSL if is_secure
    # We keep track of connection rates and connection counts...
    stick-table type ip size 200k expire 2m store conn_rate(3s),conn_cur
    # ...per source address
    tcp-request connection track-sc1 src
    acl source_rate_abuser sc1_conn_rate gt 500
    acl source_connections_abuser sc1_conn_cur gt 5000
    use_backend be_sf-slow if source_rate_abuser || source_connections_abuser
    default_backend be_sf

backend be_sf
    cookie srv-eu insert domain .example.com
    server srv_1 10.0.0.1:9081 cookie b802 check inter 5000 maxconn 700
    server srv_2 10.0.0.2:9081 cookie b803 check inter 5000 maxconn 700
    server srv_3 10.0.0.3:9081 cookie b804 check inter 5000 maxconn 700
    server srv_4 10.0.0.4:9081 cookie b805 check inter 5000 maxconn 700
    server srv_5 10.0.0.5:9081 cookie b806 check inter 5000 maxconn 700
    server srv_6 10.0.0.6:9081 cookie b807 check inter 5000 maxconn 700
    server srv_7 10.0.0.7:9081 cookie b808 check inter 5000 maxconn 700
    server srv_8 10.0.0.8:9081 cookie b809 check inter 5000 maxconn 700
    server srv_9 10.0.0.9:9081 cookie b80a check inter 5000 maxconn 700

backend be_sf-slow
    cookie srv-eu insert domain .example.com
    server srv_1 10.0.0.1:9081 cookie b802 check inter 5000 maxconn 100
    server srv_2 10.0.0.2:9081 cookie b803 check inter 5000 maxconn 100
    server srv_3 10.0.0.3:9081 cookie b804 check inter 5000 maxconn 100
    server srv_4 10.0.0.4:9081 cookie b805 check inter 5000 maxconn 100
    server srv_5 10.0.0.5:9081 cookie b806 check inter 5000 maxconn 100
    server srv_6 10.0.0.6:9081 cookie b807 check inter 5000 maxconn 100
    server srv_7 10.0.0.7:9081 cookie b808 check inter 5000 maxconn 100
    server srv_8 10.0.0.8:9081 cookie b809 check inter 5000 maxconn 100
    server srv_9 10.0.0.9:9081 cookie b80a check inter 5000 maxconn 100
Re: Comparison to nginx
Hi William,

I'm not sure I'd change anything that wasn't causing me pain. If nginx is working nicely, then there are probably other things that aren't, which are more deserving of attention. Are there any pain points that you currently have? Maybe haproxy could improve some of those.

Thanks,
Steven

On 1 February 2013 11:09, William Lewis m...@wlewis.co.uk wrote:
> Hi
>
> I'm looking for some advice in comparing haproxy to nginx. I've been
> happily using haproxy for all my load balancing needs for the past few
> years and in my opinion I think its great. [...]
> [quoted configuration snipped]
Re: Comparison to nginx
Hi Steve,

It's not a question of replacing nginx with haproxy. The existing solution was DNS round robin directly to the application servers, which then proxy on to a different node if they don't hold the required state (which is horrible). I've deployed haproxy in front of this setup, but I'm now being asked to replace it again with nginx to harmonize with other infrastructure in the company, and I'm trying to understand what I might lose (other than my time and sanity) in doing that.

Thanks,
Will

On Feb 1, 2013, at 11:15 AM, Steven Acreman steven.acre...@alfresco.com wrote:
> Hi William,
>
> I'm not sure I'd change anything that wasn't causing me pain. [...]
> [earlier messages and quoted configuration snipped]
Re: Comparison to nginx
On Fri, Feb 1, 2013 at 11:22 AM, William Lewis m...@wlewis.co.uk wrote:
> Hi Steve,
>
> Its not a question of replacing nginx with haproxy. The existing
> solution was dns round robin directly to application servers, that then
> proxy on to a different node if they didn't hold the required state
> (which is horrible) [...]
> [earlier messages and quoted configuration snipped]
Re: Comparison to nginx
I couldn't agree more, but I'm really in need of more concrete reasons for pushing back against this.

On Feb 1, 2013, at 12:40 PM, shouldbe q931 shouldbeq...@gmail.com wrote:
> On Fri, Feb 1, 2013 at 11:22 AM, William Lewis m...@wlewis.co.uk wrote:
>> Hi Steve,
>>
>> Its not a question of replacing nginx with haproxy. [...]
>> [earlier messages and quoted configuration snipped]
RE: Comparison to nginx
For example, nginx doesn't have URI-based load balancing; you have to code it yourself. We also tried to use nginx as a load balancer on a 10Gbit infrastructure and ran into IOps problems because it does not use splice(): we topped out at 3Gbit/s with nginx, while with HAProxy you get 9.6Gbit/s for the same volume of requests and files.

From: William Lewis [mailto:will...@netproteus.net] On Behalf Of William Lewis
Sent: Friday 1 February 2013 13:50
To: shouldbe q931
Cc: Steven Acreman; haproxy@formilux.org
Subject: Re: Comparison to nginx

> I couldn't agree more, but I'm really in need of more concrete reasons
> for pushing back against this.
> [earlier messages and quoted configuration snipped]
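To make the URI-balancing point concrete: HAProxy supports it natively with the `balance uri` directive. A minimal sketch (backend name, server names, and addresses are made up for illustration):

```
backend be_cache
    # Hash the request URI so the same path always maps to the same server,
    # which is useful for cache locality.
    balance uri
    server cache1 10.0.1.1:80 check
    server cache2 10.0.1.2:80 check
```

This is the kind of thing the poster above says has to be hand-coded on the nginx side.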
Re: client keep-alive when servers
On 02/01/2013 03:07 AM, Cyril Bonté wrote:
> You're right, this was discussed here on the mailing list:
> http://thread.gmane.org/gmane.comp.web.haproxy/2755
> Then, Óscar opened a thread on the tomcat one:
> http://marc.info/?t=12701162342r=1w=2
> The issue was fixed in tomcat 6.0.27 and 5.5.29.

Thank you for the details. The pieces are all fitting together -- this particular service happens to be stuck on 6.0.20.
Re: Comparison to nginx
How about going the other way: fully comment the config, send it to them, and ask them how they would implement everything you are using in HAProxy in nginx. If they pass it back to you as "that's your job", then you can reasonably ask whether, given that you already have a working solution in HAProxy, this would not be a waste of resources. You could even suggest that it might be better for the organisation to move to HAProxy instead of nginx for its advanced reverse proxy and load balancing capabilities.
Re: HAProxy on multi-CPU Hardware
Search for nbproc in http://haproxy.1wt.eu/download/1.4/doc/configuration.txt, which explains how HAProxy handles multiple CPUs in a box.

Chris

On 01/02/2013 15:54, Peter Mellquist wrote:
> Hi!
> My understanding is that HAProxy uses a single-process event model,
> which utilizes a single CPU even when running on SMP / multi-CPU
> systems. Have there been any considerations for having HAProxy fork or
> thread, allowing a single config file to feed proxies across many CPUs
> on the same box? There are various design models for doing this, but I
> am interested in what has already been done.
> Thanks, Peter.
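For reference, the directive in question is a one-liner in the global section. A minimal sketch (the process count is illustrative, not a recommendation):

```
global
    daemon
    # Fork 4 independent worker processes at startup; each accepts
    # connections on the configured listeners.
    nbproc 4
```

One caveat worth knowing up front: the forked processes do not share state with each other, so things like stick-tables and per-process counters apply per process rather than globally.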
Re: HAProxy on multi-CPU Hardware
Excellent, just what I was looking for!

Peter.

On Fri, Feb 1, 2013 at 9:28 AM, Chris Sarginson ch...@sargy.co.uk wrote:
> Search for nbproc in
> http://haproxy.1wt.eu/download/1.4/doc/configuration.txt, which explains
> how HaProxy handles multiple CPUs in a box.
> [quoted text snipped]
Re: HAProxy on multi-CPU Hardware
On 1 February 2013, 12:48 PM, Peter Mellquist wrote:
> Excellent, just what I was looking for! Peter.

Just remember that the admin socket will not work as expected with this, as it round-robins between all the running processes. You would also have to size any ACLs that deal with session rate or maxconn accordingly, or your limits will be too high. You could also investigate installing an IPVS load balancer (keepalived) sending to multiple haproxy processes spawned with different loopback IPs.
Re: HAProxy on multi-CPU Hardware
Thanks for the good input, Tait. This would be a code mod, but maybe it would be possible to have different management ports for each process -- say baseport+N, where N is the process number. I like the IPVS option, but keeping it all within HAProxy seems nicer. This definitely gives me some cool stuff to prototype over the weekend!

The real goal I am interested in is running HAProxy on virtual machines, or in the cloud, where I can provision a load balancer on a multi-CPU VM with HAProxy tuned with enough processes to handle a specific workflow. Before this information, I was thinking that this would be limited to VMs with a single CPU.

Peter.

On Fri, Feb 1, 2013 at 10:26 AM, Tait Clarridge t...@taiter.com wrote:
> Just remember that the admin socket will not work as expected with this
> as it round robins between all the running processes. [...]
haproxy invoked oom-killer
oom-killer just killed my haproxy instance. Anyone know if there is a way to prioritize haproxy and have it get killed after something else? Or any tuning that might help? It looked like I had plenty of swap space available when it decided to kill haproxy. Thanks for any advice.

Linux 3.3.7-1.fc16.x86_64
HA-Proxy version 1.4.20

# free -m
             total       used       free     shared    buffers     cached
Mem:           995        357        637          0          3         25
-/+ buffers/cache:        328        667
Swap:         2015         92       1923

messages:
Feb 1 15:48:03 prx2 kernel: [21556065.639023] sched: RT throttling activated
Feb 1 15:48:03 prx2 heartbeat: [15556]: WARN: Gmain_timeout_dispatch: Dispatch function for check for signals was delayed 1470 ms ( 1010 ms) before being called (GSource: 0x20b4c20)
Feb 1 15:48:03 prx2 heartbeat: [15556]: info: Gmain_timeout_dispatch: started at 2588817760 should have started at 2588817613
Feb 1 15:48:14 prx2 kernel: [21556076.952895] oom_kill_process: 997778 callbacks suppressed
Feb 1 15:48:14 prx2 kernel: [21556076.952900] haproxy invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
Feb 1 15:48:14 prx2 kernel: [21556076.952934] haproxy cpuset=/ mems_allowed=0
Feb 1 15:48:14 prx2 kernel: [21556076.952946] Pid: 9654, comm: haproxy Not tainted 3.3.7-1.fc16.x86_64 #1
Feb 1 15:48:14 prx2 kernel: [21556076.952948] Call Trace:
Feb 1 15:48:14 prx2 kernel: [21556076.952978] [810c7811] ? cpuset_print_task_mems_allowed+0x91/0xa0
Feb 1 15:48:14 prx2 kernel: [21556076.952993] [81123cd0] dump_header+0x80/0x1d0
Feb 1 15:48:14 prx2 kernel: [21556076.952997] [81124125] oom_kill_process+0x85/0x290
Feb 1 15:48:14 prx2 kernel: [21556076.953000] [81124770] out_of_memory+0x1c0/0x400
Feb 1 15:48:14 prx2 kernel: [21556076.953004] [81129d7f] __alloc_pages_nodemask+0x8df/0x8f0
Feb 1 15:48:14 prx2 kernel: [21556076.953016] [81521652] ? __ip_local_out+0xa2/0xb0
Feb 1 15:48:14 prx2 kernel: [21556076.953022] [81160a93] alloc_pages_current+0xa3/0x110
Feb 1 15:48:14 prx2 kernel: [21556076.953025] [8152adde] tcp_sendmsg+0x53e/0xdf0
Feb 1 15:48:14 prx2 kernel: [21556076.953031] [81550e74] inet_sendmsg+0x64/0xb0
Feb 1 15:48:14 prx2 kernel: [21556076.953043] [8126dc63] ? selinux_socket_sendmsg+0x23/0x30
Feb 1 15:48:14 prx2 kernel: [21556076.953052] [814ced17] sock_sendmsg+0x117/0x130
Feb 1 15:48:14 prx2 kernel: [21556076.953055] [81521652] ? __ip_local_out+0xa2/0xb0
Feb 1 15:48:14 prx2 kernel: [21556076.953065] [81067d6e] ? mod_timer+0x13e/0x2f0
Feb 1 15:48:14 prx2 kernel: [21556076.953069] [814d220d] sys_sendto+0x13d/0x190
Feb 1 15:48:14 prx2 kernel: [21556076.953073] [810d345c] ? __audit_syscall_entry+0xcc/0x310
Feb 1 15:48:14 prx2 kernel: [21556076.953076] [810d3a76] ? __audit_syscall_exit+0x3d6/0x410
Feb 1 15:48:14 prx2 kernel: [21556076.953084] [815fc529] system_call_fastpath+0x16/0x1b
Feb 1 15:48:14 prx2 kernel: [21556076.953086] Mem-Info:
Feb 1 15:48:14 prx2 kernel: [21556076.953088] Node 0 DMA per-cpu:
Feb 1 15:48:14 prx2 kernel: [21556076.953198] CPU0: hi:0, btch: 1 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953200] CPU1: hi:0, btch: 1 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953203] CPU2: hi:0, btch: 1 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953205] CPU3: hi:0, btch: 1 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953206] Node 0 DMA32 per-cpu:
Feb 1 15:48:14 prx2 kernel: [21556076.953209] CPU0: hi: 186, btch: 31 usd: 56
Feb 1 15:48:14 prx2 kernel: [21556076.953210] CPU1: hi: 186, btch: 31 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953212] CPU2: hi: 186, btch: 31 usd: 0
Feb 1 15:48:14 prx2 kernel: [21556076.953214] CPU3: hi: 186, btch: 31 usd: 29
Feb 1 15:48:14 prx2 kernel: [21556076.953218] active_anon:138 inactive_anon:194 isolated_anon:0
Feb 1 15:48:14 prx2 kernel: [21556076.953219] active_file:24 inactive_file:80 isolated_file:0
Feb 1 15:48:14 prx2 kernel: [21556076.953220] unevictable:4373 dirty:0 writeback:213 unstable:0
Feb 1 15:48:14 prx2 kernel: [21556076.953221] free:12235 slab_reclaimable:47686 slab_unreclaimable:25122
Feb 1 15:48:14 prx2 kernel: [21556076.953222] mapped:1506 shmem:2 pagetables:725 bounce:0
Feb 1 15:48:14 prx2 kernel: [21556076.953224] Node 0 DMA free:4640kB min:680kB low:848kB high:1020kB active_anon:44kB inactive_anon:84kB active_file:0kB inactive_file:44kB unevictable:352kB isolated(anon):0kB isolated(file):0kB present:15656kB mlocked:352kB dirty:0kB writeback:112kB mapped:352kB shmem:0kB slab_reclaimable:328kB slab_unreclaimable:276kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:592 all_unreclaimable? yes
Feb 1 15:48:14 prx2 kernel: [21556076.953236] lowmem_reserve[]: 0 992 992 992
Feb 1
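[On the kernel shown here (3.3), the knob for the "get killed after something else" part of the question is /proc/<pid>/oom_score_adj, which appears as oom_score_adj=0 in the log above. A sketch; the pidfile path in the comment is an assumption, and the write itself needs root:]

```shell
# Build the per-process OOM control path; valid values range from
# -1000 (never OOM-kill this process) to 1000 (kill it first).
# A negative value makes the kernel prefer other victims.
oom_path() { printf '/proc/%s/oom_score_adj' "$1"; }

# Example (as root), assuming haproxy's pidfile lives at this path:
#   echo -500 > "$(oom_path "$(cat /var/run/haproxy.pid)")"
oom_path 1234   # prints /proc/1234/oom_score_adj
```

Note this setting is per-process and is not inherited across a restart, so it would need to be reapplied from the init script.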
Re: HAProxy on multi-CPU Hardware
Peter Mellquist mailto:pemellqu...@gmail.com 1 February, 2013 4:10 PM Thanks for the good input Tait. This would be a code mod, but maybe it would be possible to have different mgmt ports for each process, maybe baseport+N where N is the process number. I like the IPVS option but keeping it all within HAProxy seems nicer. This definitely gives me some cool stuff to prototype over the weekend! Peter. The real goal I am interested in is running HAProxy on virtual machines, or in the cloud, where I can provision a load balancer on a multi-CPU VM with HAProxy tuned with enough processes to handle a specific work flow. Before this information, I was thinking that this would be limited to only VMs with a single CPU. No problem. There is a way to pin specific processes to ports so you can get the stats via the web UI and aggregate them with your graphing/monitoring tools; can't find the specific page that shows how to do it, but Google will be able to help there. The admin socket AFAIK can't be split yet, so if you don't use it to dynamically set weights for slow servers, or to disable servers during a code update, or if it is spitting out errors, then you should be fine without it (but where's the fun in not diving deeper). Cheers, Tait
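[A sketch of the per-process stats pinning Tait and Peter are discussing. This assumes the per-bind "process" parameter from recent 1.5-dev builds (availability depends on the exact version); the port numbers just follow Peter's baseport+N idea:]

```
global
    nbproc 2

# One stats listener per process, bound on baseport+N,
# so each process's web UI shows only its own counters.
listen stats_proc1
    bind :7001 process 1
    mode http
    stats uri /

listen stats_proc2
    bind :7002 process 2
    mode http
    stats uri /
```

An external tool could then scrape :7001 and :7002 and sum the counters to get box-wide figures.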
Can HAProxy use a response header value to redirect to a different URL?
Hi, I am trying to figure out if a solution I am considering is logically/technically viable, not asking for anyone to do my work :-) We have been using a pair of failover HAProxy servers to balance load across a number of backends in different data centers, as well as to shape traffic coming to our site: if the user agent is mobile, and they have not set a cookie to avoid our mobile site, we send them to the home page of our mobile site. As we deploy more responsive pages on the full site (http://my.full.site.com) we want to send that mobile traffic there rather than to the dedicated mobile site. We also want to be able to send requests for specific full-site pages to specific mobile-site pages. The use case is as follows:

Inbound HTTP request (http://my.full.site.com/specific/URI/page.html) is evaluated to see if the user agent is a mobile type (we have an ACL to do this).
IF mobile is False, THEN the request is passed to one of the full-site backends and the response is served without further manipulation.
IF mobile is True, THEN the request is passed to the backend and the response is evaluated for the presence of two custom response headers (e.g. responsive_page AND mobile_url).
    IF custom response header responsive_page = True, THEN pass the page back to the requester without manipulation.
    IF custom response header mobile_url is populated with an alternate URL (mobile_url: http://my.mobile.com/different-specific/URI/page.html), THEN the request is redirected to that specific mobile_url.
    ELSE return a default mobile page (http://my.mobile.com/home.html).

At present, all mobile traffic is evaluated, and if mobile is true then ALL REQUESTS go to a single default mobile page on the mobile server (http://my.mobile.com/home.html). We also want to be able to match some specific page requests to the full site, and respond to those with custom pages on the mobile site, rather than the generic default mobile page.
If the page developers can populate the responsive_page and mobile_url response header values themselves, then they can migrate pages without having to request proxy changes. I realize I am taking an inbound request to HAProxy, evaluating the user agent, then if mobile, evaluating the response returned to HAProxy, and if there is a value in the response header, using that to initiate a new backend request/response cycle to fulfill the original inbound request. I can see that is a loop. I know it cannot be the most efficient thing, but it would allow us to have custom redirection that can be maintained by the owners of the backend pages. Any feedback would be appreciated. Robert

Robert Snyder
Outreach Technology Services
The Pennsylvania State University
The 329 Building, Suite 306E
University Park PA 16802
Phone: 814-865-0912
E-mail: rsny...@psu.edu
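[For reference, the current "send all mobile traffic to one default page" behaviour described above might look roughly like this in haproxy terms. The ACL patterns, cookie name, and backend name are made up for illustration; Robert's real mobile-detection ACL already exists:]

```
frontend full_site
    bind *:80
    # Hypothetical mobile detection (stand-in for the existing ACL)
    acl is_mobile   hdr_sub(User-Agent) -i iphone android blackberry
    # Hypothetical opt-out cookie letting users stay on the full site
    acl wants_full  hdr_sub(Cookie) fullsite=1
    redirect location http://my.mobile.com/home.html if is_mobile !wants_full
    default_backend full_site_servers
```

The open part of the question is different: redirect rules like the one above act on the request, before any backend response exists, so deciding based on a server-supplied mobile_url header is not something this rule form can express.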
Re: Comparison to nginx
Hi, The reason is simple: you need a load-balancer. HAProxy is a load-balancer with advanced features: many weighted algorithms, many different persistence types (even using application cookies), advanced reporting, etc. Nginx, despite having some very basic load-balancing features, isn't a load-balancer. That said, it can be used in simple deployments. cheers On Fri, Feb 1, 2013 at 2:14 PM, shouldbe q931 shouldbeq...@gmail.com wrote: How about going the other way and fully commenting the config, sending it to them and asking them how they would implement all of the things that you are using in HAProxy in nginx. If they pass it back to you as "that's your job", then you can reasonably ask them, as you have a working solution in HAProxy, would this not be a waste of resource? You could even suggest that it might be better for the organisation to move to using HAProxy instead of nginx for its advanced reverse proxy and load balancing capabilities.
Re: SSL offloading with NTLM auth
Could you please remove this pretend-keepalive option from your configuration and give it a try? HAProxy may close the connection because of it. And yes, a tcpdump between haproxy and the CAS server may help as well. cheers On Fri, Feb 1, 2013 at 7:11 AM, Roland r...@bayreuth.tk wrote: Hi Baptiste, thanks a lot! If I connect the same computer with the same account and unchanged settings (except the URL of webaccess) directly to the CAS, it works without any problems. The connection is established immediately. I also verified with Microsoft Remote Connectivity Analyzer. It stops with an error: = Attempting to ping RPC proxy mc.nkd.com. RPC Proxy can't be pinged. Additional Details: An HTTP 401 Unauthorized response was received from the remote Unknown server. This is usually the result of an incorrect username or password. If you are attempting to log onto an Office 365 service, ensure you are using your full User Principal Name (UPN). = All tests before (HTTP authentication methods, IIS configuration, SSL credentials and so on) are running fine. I'm absolutely clueless. I think I'll try to narrow down the problem with tcpdump - maybe the connection is forcibly closed on some side. Cheers, Roland On Thu, 31 Jan 2013, Baptiste wrote: Hi, 401 is absolutely normal in NTLM. There are 2 or 3 request/response exchanges before the user is really authenticated when using NTLM. When HAProxy load-balances NTLM based services, the only log line you'll see will be 401 errors, even if the connection works properly. This is due to the tunnel mode, which seems to be properly configured in your conf, as far as I can see. In tunnel mode, haproxy analyzes the first request, logs the first response (hence the 401), and creates a tunnel between the client and the server. From then on, on this connection, HAProxy will only transmit payload; even if that's HTTP, nothing will be analyzed anymore.
The tunnel mode is mandatory for NTLM, because if you change the TCP source port during the connection, it breaks the authentication. Could you confirm your Outlook session works? I mean that your client is well connected to your Exchange server? I can confirm HAProxy works properly with Exchange 2010 and with 2013 as well. Cheers On Thu, Jan 31, 2013 at 4:13 PM, Roland r...@bayreuth.tk wrote: Hi! I'm using haproxy 1.5dev17 and try to balance traffic destined for MS Exchange 2010 CAS servers. OWA and ActiveSync are working without any problems, but Outlook Anywhere (RPC over HTTP with NTLM auth) produces an error 401 even with Microsoft's Remote Connectivity Analyzer. HAProxy runs in SSL offload mode. The cert is an officially signed one. My haproxy.conf is (partially):

...
defaults
    mode http
    maxconn 5
    contimeout 4000
    clitimeout 5
    srvtimeout 5
    balance roundrobin
    log global
    option tcplog
    option redispatch
    option contstats
    option dontlognull
    timeout connect 5s
    timeout http-keep-alive 5s
    timeout http-request 15s
    timeout queue 30s
    timeout client 300s
    timeout server 300s
    default-server inter 3s rise 2 fall 3 backlog 1
    option http-pretend-keepalive

frontend WebAccess
    maxconn 5
    bind 172.17.336.433:666 ssl crt /usr/local/etc/haproxy-certs/mc.dom.com.pem
    mode http
    option httplog
    log global
    no option httpclose
    acl ACLRPC path_beg -i /rpc/rpcproxy.dll
    use_backend OutlookAnywhere if ACLRPC
...

backend OutlookAnywhere
    stick-table type ip size 10240k expire 60m
    stick on src
    cookie SRV insert nocache
    balance roundrobin
    option redispatch
    server juno 172.17.336.433:80 cookie oasrv1 weight 1 check
...

The one active CAS server used for testing purposes (juno) is configured for SSL offloading for RPC. All other Exchange directories in IIS are set to not require SSL on this system.
When running HAProxy in debug mode, an Outlook Anywhere session looks like:

0005:WebAccess.clireq[000d:]: RPC_IN_DATA /Rpc/RpcProxy.dll?lips.dom.intl:6001 HTTP/1.1
0005:WebAccess.clihdr[000d:]: Accept: application/rpc
0005:WebAccess.clihdr[000d:]: User-Agent: MSRPC
0005:WebAccess.clihdr[000d:]: Authorization: NTLM TlRMTVNTUAABB4IIogAGAbEdDw==
0005:WebAccess.clihdr[000d:]: Host: mc.dom.com
0005:WebAccess.clihdr[000d:]: Content-Length: 0
0005:OutlookAnywhere.srvrep[000d:000e]: HTTP/1.1 401 Unauthorized
0005:OutlookAnywhere.srvhdr[000d:000e]: Content-Type: text/html
0005:OutlookAnywhere.srvhdr[000d:000e]: Server: Microsoft-IIS/7.5
0005:OutlookAnywhere.srvhdr[000d:000e]: WWW-Authenticate: NTLM
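[Baptiste's tcpdump suggestion earlier in the thread could be run on the haproxy box roughly like this. The CAS address below is a placeholder, not the real IP; port 80 matches the offloaded backend in the config above:]

```shell
# Build the capture command; substitute the real CAS server address.
CAS_IP="192.0.2.10"   # placeholder for the CAS server
CAPTURE_CMD="tcpdump -i any -s 0 -w cas-ntlm.pcap host $CAS_IP and tcp port 80"
echo "$CAPTURE_CMD"
# Run the printed command as root, reproduce the Outlook Anywhere failure,
# then open cas-ntlm.pcap in wireshark and look at who sends FIN/RST first
# during the NTLM 401 exchange to see which side closes the connection.
```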