TCP ESTABLISHED count in http-keep-alive mode is twice that of tunnel mode
Hi, all! I was confused: when we use haproxy with option http-keep-alive, the ESTABLISHED count is twice that of tunnel mode, while the other TCP states stayed at the same level.

#the tunnel mode
LISTEN 5
FIN_WAIT_1 325
FIN_WAIT_2 254
SYN_SENT 49
LAST_ACK 399
CLOSING 16
CLOSE_WAIT 70
CLOSED 247
SYN_RCVD 13
TIME_WAIT 338
ESTABLISHED 5797

#the http-keep-alive mode
LISTEN 5
FIN_WAIT_1 166
FIN_WAIT_2 426
SYN_SENT 103
LAST_ACK 819
CLOSING 5
CLOSE_WAIT 137
CLOSED 410
SYN_RCVD 24
TIME_WAIT 346
ESTABLISHED 10019

And the configuration we use:

#2013##
global
    log 192.168.149.1:10602 local4 info
    pidfile /var/run/haproxy.pid
    maxconn 10
    maxpipes 5
    daemon
    stats socket /tmp/haproxy.sock mode 755 level admin
    nbproc 1
    spread-checks 5
    tune.rcvbuf.client 16384
    tune.rcvbuf.server 32768
    tune.sndbuf.client 65536
    tune.sndbuf.server 16384
    node haproxy

defaults
    #TCP SECTION
    maxconn 20
    backlog 32768
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout queue 30s
    timeout check 5s
    timeout http-request 5s
    timeout http-keep-alive 10s
    timeout tunnel 3600s
    #HTTP SECTION
    option accept-invalid-http-request
    option accept-invalid-http-response
    option redispatch
    retries 2
    option httplog
    no option checkcache
    option http-keep-alive
    #SYSTEM SECTION
    option dontlog-normal
    option dontlognull
    option log-separate-errors

# frontend ##
frontend tcp-in-tos02
    bind :2001 mss 1360 transparent
    mode tcp
    log global
    option tcplog
    no option http-keep-alive
    no option accept-invalid-http-request
    #distinguish HTTP and non-HTTP
    tcp-request inspect-delay 60s
    tcp-request content accept if HTTP
    acl check_SquidCluster-tos02 nbsrv(SquidCluster-tos02) 0
    #ACTION
    use_backend Direct if !HTTP
    use_backend SquidCluster-tos02 if !check_SquidCluster-tos02
    default_backend Direct

backend SquidCluster-tos02
    mode http
    option forwardfor header X-Client
    balance hdr(Host)
    source 0.0.0.0
    option httpchk GET http://www.yahoo.com
    server sq-L1-n1a 192.168.138.1:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n1b 192.168.138.1:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n1c 192.168.138.1:3003 weight 20 check inter 5s maxconn 1
    server sq-L1-n2a 192.168.138.2:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n2b 192.168.138.2:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n3a 192.168.138.3:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n3b 192.168.138.3:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n3c 192.168.138.3:3003 weight 20 check inter 5s maxconn 1
    server sq-L1-n3d 192.168.138.3:3004 weight 20 check inter 5s maxconn 1

backend Direct
    mode tcp
    log global
    option tcplog
    no option http-keep-alive
    no option httpclose
    no option http-server-close
    no option accept-invalid-http-response
    no option http-pretend-keepalive
    source 0.0.0.0 usesrc clientip
    option transparent

Can anyone help me explain this?
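One plausible explanation (an assumption, not confirmed against this setup): in keep-alive mode, each client connection stays ESTABLISHED for up to "timeout http-keep-alive" after a response while waiting for the next request, so at the same request rate more sockets are open concurrently; in tunnel mode the connection lifetime tracks the transfer instead. If that is the cause, shrinking the keep-alive window should shrink the ESTABLISHED count. A minimal sketch against the defaults section above:

```
defaults
    # Shorter idle window between requests on a kept-alive connection;
    # idle client sockets are closed sooner, lowering the number of
    # concurrent ESTABLISHED sockets (at the cost of less reuse).
    timeout http-keep-alive 2s
    option http-keep-alive
```

This trades connection reuse for fewer concurrent sockets, so it is only worth it if socket count, not handshake latency, is the constraint.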
Feature request: add setfib option to bind
Hi, all! Referencing http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#bind — bind already has a lot of options. Could you add another option, setfib=number, for your FreeBSD users? Thanks! We have a situation in which we have to use it.

setfib=number: this parameter would set the associated routing table (FIB, the SO_SETFIB socket option) for the listening socket, on FreeBSD.
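For context, a sketch of the two shapes this could take. The per-bind directive below is hypothetical syntax for the requested feature (it does not exist in haproxy 1.5); the setfib(1) wrapper is the existing coarse-grained FreeBSD workaround, which assigns a FIB to every socket the process creates rather than per listener:

```
# HYPOTHETICAL syntax -- the shape of the requested per-bind option,
# not a working haproxy 1.5 directive:
frontend tcp-in
    bind :2001 setfib 4 transparent

# Existing workaround on FreeBSD: start the whole daemon inside FIB 4
# via setfib(1), affecting all of its sockets:
#   setfib 4 haproxy -f /usr/local/etc/haproxy.conf
```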
Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi! Thanks for your reply! We finally found out that this directive in our haproxy.conf caused the error:

    tcp-request inspect-delay 30s

I think this is because of the global setting in our defaults:

    timeout client 60s

The tcp-request inspect-delay 30s in our frontend didn't line up with that timeout. The documentation (http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tcp-request inspect-delay) says:

    Note that the client timeout must cover at least the inspection delay, otherwise it will expire first.

After we changed tcp-request inspect-delay 30s to tcp-request inspect-delay 60s, it works like a charm!
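The fix described above, restated as a config fragment. The documented rule is that the inspection delay must not exceed the client timeout; here the delay was raised until it matches the 60s client timeout exactly (section names taken from the thread's own configuration):

```
defaults
    # per the docs, this must cover at least the inspection delay,
    # otherwise the client timeout expires first during inspection
    timeout client 60s

frontend tcp-in-tos02
    # raised from 30s to 60s, matching timeout client
    tcp-request inspect-delay 60s
    tcp-request content accept if HTTP
```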
Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi, Lukas!

> Like I said, you will need to reproduce the problem on a box with no
> traffic at all - so the impact of a single connection can be analyzed
> (socket status on the frontend/backend, for example). It's nearly
> impossible to do this on a busy box with a lot of production traffic.
> Also, the configuration needs to be trimmed down to a single, specific
> use case (you already said you suspect a specific backend).

I followed your idea, but it seems a single connection can't reproduce the problem. The socket stats disappear very quickly when I open my browser, and I analyzed the traffic with Wireshark and didn't see any abnormal flows or packets.
Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi, Lukas!

> The configuration is fairly complex. I suggest you try reproducing the
> problem on a staging system with a minimalistic configuration and
> traffic. Also, if you are able to reproduce it, try the same in
> FreeBSD 9.2. On your production box, I'm afraid we have too much noise
> (complex configuration and lots of traffic) to understand where things
> go wrong.

Thanks very much for your answer! Actually, we used FreeBSD 9.2 with the same configuration before, and the situation was almost the same :( Is there any other possible reason? Or are there any tools we could use to track down the problem?
Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi, all! Recently we deployed haproxy 1.5-dev21 in production, hoping to get the benefit of http-keep-alive. But after we added option http-keep-alive and deployed the new version of haproxy, we found that the number of connections in the FIN_WAIT_2, CLOSED and ESTABLISHED states increased quickly. When we changed back to tunnel mode, it decreased.

root@Haproxy01:~ # session-count.sh
LISTEN 8
FIN_WAIT_1 245
FIN_WAIT_2 22836
SYN_SENT 46
LAST_ACK 943
CLOSING 4
CLOSE_WAIT 1151
CLOSED 21940
SYN_RCVD 11
TIME_WAIT 255
ESTABLISHED 13894

And some related configuration below:

defaults
    #TCP SECTION
    maxconn 20
    backlog 32768
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout queue 30s
    timeout check 5s
    timeout http-request 5s
    timeout http-keep-alive 10s
    timeout tunnel 3600s
    #option nolinger
    #option http-no-delay
    #HTTP SECTION
    option accept-invalid-http-request
    option accept-invalid-http-response
    option redispatch
    retries 2
    option httplog
    no option checkcache
    option http-keep-alive

# frontend ##
frontend tcp-in
    bind :2001 mss 1360 transparent
    mode tcp
    log global
    option tcplog
    no option http-keep-alive
    no option accept-invalid-http-request
    #distinguish HTTP and non-HTTP
    tcp-request inspect-delay 30s
    tcp-request content accept if HTTP
    #ACL DEFINE
    acl squid_incompatiable-Host hdr_reg(Host) -f /usr/local/etc/acl-define.d/squid_incompatiable-Host.txt
    #ACL DEFINE of websocket
    acl missing_host hdr_cnt(Host) eq 0
    acl has_range hdr_cnt(Range) gt 0
    acl check_SquidCluster-tos02 nbsrv(SquidCluster-tos02) 0
    #ACL DEFINE of websocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    acl matches_media url_reg -i -f /usr/local/etc/acl-define.d/whitelist.txt
    acl check_bk_SquidMediaCluster-tos02 nbsrv(SquidMediaCluster-tos02) 0
    #ACTION
    use_backend Direct if !HTTP
    use_backend Direct if HTTP_1.1 missing_host
    use_backend Direct if METH_CONNECT
    use_backend NginxClusterWebsockets if is_websocket
    use_backend NginxClusterNormal if HTTP squid_incompatiable-Host
    use_backend SquidMediaCluster-tos02 if HTTP matches_media !check_bk_SquidMediaCluster-tos02
    use_backend SquidCluster-tos02 if !check_SquidCluster-tos02
    default_backend Direct

backend SquidCluster-tos02
    mode http
    option forwardfor header X-Client
    balance hdr(Host)
    log global
    acl mgmt-src src -f /usr/local/etc/acl-define.d/mgmt-src.txt
    acl is_internal_error status ge 500
    #reqadd Internal-Proto:\ 02
    rspideny . if is_internal_error !mgmt-src
    rspidel ^via:.* unless mgmt-src
    rspidel ^x-cache:* unless mgmt-src
    rspidel ^x-cache-lookup:* unless mgmt-src
    rspidel ^X-Ecap:* unless mgmt-src
    source 0.0.0.0
    option httpchk GET http://www.baidu.com
    server sq-L1-n1a 192.168.138.1:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n1b 192.168.138.1:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n1c 192.168.138.1:3003 weight 20 check inter 5s maxconn 1
    server sq-L1-n2a 192.168.138.2:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n2b 192.168.138.2:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n3a 192.168.138.3:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n3b 192.168.138.3:3002 weight 20 check inter 5s maxconn 1
    server sq-L1-n3c 192.168.138.3:3003 weight 20 check inter 5s maxconn 1
    server sq-L1-n3d 192.168.138.3:3004 weight 20 check inter 5s maxconn 1

backend Direct
    mode tcp
    log global
    option tcplog
    no option http-keep-alive
    no option httpclose
    no option http-server-close
    no option accept-invalid-http-response
    no option http-pretend-keepalive
    source 0.0.0.0 usesrc clientip
    option transparent

We also found that the increased connections did not come from backend SquidCluster-tos02; almost all of them came from backend Direct.

root@Haproxy01:~ # netstat -na | egrep '(3001|3002|3003|3004)' | wc -l
1761

Can anyone help to fix this?
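The session-count.sh script referenced above is not shown in the thread; a minimal sketch of what such a per-state tally might look like, assuming BSD-style netstat -an output with the TCP state in the last column (the script name and parsing details are assumptions, not the poster's actual script):

```shell
#!/bin/sh
# Tally TCP connections by state from netstat-style input on stdin.
# Assumes the state (ESTABLISHED, FIN_WAIT_2, ...) is the last column
# of each "tcp" line, as in FreeBSD and Linux "netstat -an" output.
count_states() {
  awk '$1 ~ /^tcp/ { n[$NF]++ } END { for (s in n) print s, n[s] }'
}

# Typical use: netstat -an | count_states
```

Filtering by address before the tally (e.g. piping through "grep '\.2001 '" first) gives the per-frontend or per-backend breakdown Lukas asked for later in the thread.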
Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi, Lukas! Thanks for your reply! My OS is:

FreeBSD Haproxy01 10.0-BETA2 FreeBSD 10.0-BETA2 #0 r257417: Thu Oct 31 13:02:48 CST 2013

haproxy version:

root@Haproxy01:/usr/ports/net/haproxy-devel # haproxy -vv
HA-Proxy version 1.5-dev21-6b07bf7 2013/12/17
Copyright 2000-2013 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = freebsd
  CPU     = generic
  CC      = gcc47
  CFLAGS  = -O2 -fno-strict-aliasing -pipe -msse3 -I/usr/local/include -L/usr/local/lib -fno-omit-frame-pointer -Wl,--eh-frame-hdr -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-freebsd 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-freebsd 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
     kqueue : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.
And my whole configuration below:

#2013##
global
    pidfile /var/run/haproxy.pid
    maxconn 10
    maxpipes 5
    daemon
    stats socket /tmp/haproxy.sock mode 755 level admin
    nbproc 1
    spread-checks 5
    tune.rcvbuf.client 16384
    tune.rcvbuf.server 32768
    tune.sndbuf.client 65536
    tune.sndbuf.server 16384
    node haproxy

defaults
    #TCP SECTION
    maxconn 20
    backlog 32768
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout queue 30s
    timeout check 5s
    timeout http-request 5s
    timeout http-keep-alive 10s
    timeout tunnel 3600s
    #option nolinger
    #option http-no-delay
    #HTTP SECTION
    option accept-invalid-http-request
    option accept-invalid-http-response
    option redispatch
    retries 2
    option httplog
    no option checkcache
    option http-keep-alive
    #SYSTEM SECTION
    option dontlog-normal
    option dontlognull
    option log-separate-errors

##
listen admin_stat
    bind :2101
    mode http
    log global
    stats enable
    stats refresh 30s
    stats uri /admin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:haproxy2012
    stats hide-version

# frontend ##
frontend tcp-in-tos02
    bind :2001 mss 1360 transparent
    mode tcp
    log global
    option tcplog
    no option http-keep-alive
    no option accept-invalid-http-request
    #distinguish HTTP and non-HTTP
    tcp-request inspect-delay 30s
    tcp-request content accept if HTTP
    #ACL DEFINE
    acl squid_incompatiable-Host hdr_reg(Host) -f /usr/local/etc/acl-define.d/squid_incompatiable-Host.txt
    #ACL DEFINE of websocket
    acl missing_host hdr_cnt(Host) eq 0
    acl has_range hdr_cnt(Range) gt 0
    acl check_SquidCluster-tos02 nbsrv(SquidCluster-tos02) 0
    #ACL DEFINE of websocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    acl matches_media url_reg -i -f /usr/local/etc/acl-define.d/whitelist.txt
    acl check_bk_SquidMediaCluster-tos02 nbsrv(SquidMediaCluster-tos02) 0
    #ACTION
    use_backend Direct if !HTTP
    use_backend Direct if HTTP_1.1 missing_host
    use_backend Direct if METH_CONNECT
    use_backend NginxClusterWebsockets if is_websocket
    use_backend NginxClusterNormal if HTTP squid_incompatiable-Host
    use_backend SquidMediaCluster-tos02 if HTTP matches_media !check_bk_SquidMediaCluster-tos02
    use_backend SquidCluster-tos02 if !check_SquidCluster-tos02
    default_backend Direct
    #default_backend SquidCluster-tos02

backend SquidCluster-tos02
    mode http
    option forwardfor header X-Client
    balance hdr(Host)
    log global
    acl mgmt-src src -f /usr/local/etc/acl-define.d/mgmt-src.txt
    acl is_internal_error status ge 500
    #reqadd Internal-Proto:\ 02
    rspideny . if is_internal_error !mgmt-src
    rspidel ^via:.* unless mgmt-src
    rspidel ^x-cache:* unless mgmt-src
    rspidel ^x-cache-lookup:* unless mgmt-src
    rspidel ^X-Ecap:* unless mgmt-src
    source 0.0.0.0
    option httpchk GET http://www.baidu.com
    server sq-L1-n1a 192.168.138.1:3001 weight 20 check inter 5s maxconn 1
    server sq-L1-n1b
Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7
Hi, Lukas!

On Wed, Jan 8, 2014 at 3:12 AM, Lukas Tribus luky...@hotmail.com wrote:
> Hi,
>
>> Recently, we use haproxy1.5-dev21 in our product. And we want to get
>> the benefit of http-keep-alive. But after we added the option
>> http-keep-alive and deployed new version of haproxy, we found that the
>> connections in FIN_WAIT_2, CLOSED and ESTABLISHED increased quickly.
>> When we change to the tunnel mode, it decreased.
>
> What release did you previously run? Please also specify your kernel
> release and the output of ./haproxy -vv.
>
>> root@Haproxy01:~ # session-count.sh
>> LISTEN 8 FIN_WAIT_1 245 FIN_WAIT_2 22836 SYN_SENT 46 LAST_ACK 943
>> CLOSING 4 CLOSE_WAIT 1151 CLOSED 21940 SYN_RCVD 11 TIME_WAIT 255
>> ESTABLISHED 13894
>
> But we don't know where the high numbers are: backend or frontend (or
> both, equally distributed). Can you try (by matching your frontend port):
> netstat -nat | grep :2001 | wc -l

I counted the connections in the frontend and in backend Direct (as FreeBSD doesn't show port 2001 in netstat -na for the client-side connections).

root@Haproxy01:~ # sh frontend_tcp_conns.sh
FIN_WAIT_1 129
FIN_WAIT_2 25729
LAST_ACK 1730
CLOSING 5
CLOSE_WAIT 1560
CLOSED 211
SYN_RCVD 33
TIME_WAIT 466
ESTABLISHED 13161

root@Haproxy01:~ # sh direct_tcp_conns.sh
FIN_WAIT_1 176
FIN_WAIT_2 244
SYN_SENT 1326
LAST_ACK 523
CLOSING 4
CLOSE_WAIT 1579
CLOSED 24321
TIME_WAIT 36
ESTABLISHED 7206
Re: Feature request: TOS based ACL.
Hi, all! What I want to do is use an ACL to match the TOS field of incoming http-request traffic.

On Thu, Jan 2, 2014 at 10:29 AM, Ge Jin altman87...@gmail.com wrote:
> Hi, Lukas!
> That's great, but could there be anything like this?
>
>     acl bad_guys tos-acl 0x20
>     block if bad_guys
>
> On Tue, Dec 31, 2013 at 7:14 PM, Lukas Tribus luky...@hotmail.com wrote:
>> Hi,
>>
>>> Could haproxy add a tos based acl?
>>> http://en.wikipedia.org/wiki/Type_of_service
>>> We want to take some action on the traffic based on the TOS field.
>>
>> Should work already with something like this:
>>
>>     acl local_net src 192.168.0.0/16
>>     http-response set-tos 46 if local_net
>>
>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response
>>
>> Regards, Lukas
Re: Feature request: TOS based ACL.
Hi, Lukas! That's great, but could there be anything like this?

    acl bad_guys tos-acl 0x20
    block if bad_guys

On Tue, Dec 31, 2013 at 7:14 PM, Lukas Tribus luky...@hotmail.com wrote:
> Hi,
>
>> Could haproxy add a tos based acl?
>> http://en.wikipedia.org/wiki/Type_of_service
>> We want to take some action on the traffic based on the TOS field.
>
> Should work already with something like this:
>
>     acl local_net src 192.168.0.0/16
>     http-response set-tos 46 if local_net
>
> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response
>
> Regards, Lukas
Feature request: TOS based ACL.
Hi, all! Could haproxy add a TOS-based ACL? http://en.wikipedia.org/wiki/Type_of_service
We want to take some action on traffic based on the TOS field.
Re: HTTP and send-proxy
Hi, Baptiste! Thanks for your reply. I found an incorrect setting in my configuration.

On Sat, Oct 12, 2013 at 5:47 PM, Baptiste bed...@gmail.com wrote:
> Hi Jinge,
> None of your servers are available in the farm, so HAProxy returns 503.
> You should have a look at your logs or run a tcpdump between HAProxy
> and the server to know the issue. Maybe your HTTP check URL is wrong or
> you need a Host header.
> Baptiste

On Sat, Oct 12, 2013 at 4:48 AM, jinge altman87...@gmail.com wrote:

Hi all! I want to use the HAProxy PROXY protocol for our use case, to send our clients' IP addresses to the peer haproxy. But after I configured send-proxy and accept-proxy, the web request is never successfully responded to; the 503 error is always there.

The configuration:

ha-L0.conf
--
# frontend ##
frontend tcp-in
    bind 192.168.137.41:2220
    bind 192.168.132.41:2221
    bind 192.168.133.41:
    mode tcp
    log global
    option tcplog
    #distinguish HTTP and non-HTTP
    tcp-request inspect-delay 30s
    tcp-request content accept if HTTP
    #ACL DEFINE
    acl squid_incompatiable-Host hdr_reg(Host) -f /usr/local/etc/acl-define.d/squid_incompatiable-Host.txt
    acl direct-dstip dst -f /usr/local/etc/acl-define.d/direct_out-dst.txt
    #ACL DEFINE of websocket
    acl missing_host hdr_cnt(Host) eq 0
    acl QQClient hdr(User-Agent) -i QQClient
    acl has_range hdr_cnt(Range) gt 0
    #ACTION
    use_backend Direct if !HTTP
    use_backend Direct if HTTP_1.1 missing_host
    use_backend Direct if direct-dstip
    use_backend Direct if METH_CONNECT
    use_backend Direct if QQClient
    default_backend HAL1

backend HAL1
    mode http
    log global
    source 0.0.0.0
    server ha2-l1-n1 localhost:3330 send-proxy

ha-L1.conf
--
# frontend ##
frontend localhostlister
    bind localhost:3330 accept-proxy
    mode http
    #ACL DEFINE
    acl direct-dstip dst -f /usr/local/etc/acl-define.d/direct_out-dst.txt
    #ACL DEFINE of websocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    acl missing_host hdr_cnt(Host) eq 0
    acl QQClient hdr(User-Agent) -i QQClient
    acl has_range hdr_cnt(Range) gt 0
    #ACTION
    use_backend NginxClusterWebsockets if is_websocket
    default_backend SquidCluster

backend SquidCluster
    mode http
    option forwardfor header X-Client
    balance uri whole
    log global
    acl mgmt-src src -f /usr/local/etc/acl-define.d/mgmt-src.txt
    errorfile 502 /usr/local/etc/errorfiles/504.http
    acl is_internal_error status ge 500
    rspideny . if is_internal_error !mgmt-src
    rspidel ^via:.* unless mgmt-src
    rspidel ^x-cache:* unless mgmt-src
    rspidel ^x-cache-lookup:* unless mgmt-src
    rspidel ^X-Ecap:* unless mgmt-src
    source 0.0.0.0
    option httpchk GET http://192.168.172.4/check.txt
    server sq-L1-n1a x.x.x.x:3129 weight 20 check inter 5s maxconn 1

And using haproxy -d we found that ha0 never seems to send the message to ha1:

0090:HAL1.clireq[0019:]: GET http://www.taobao.com/ HTTP/1.1
0090:HAL1.clihdr[0019:]: User-Agent: curl/7.26.0
0090:HAL1.clihdr[0019:]: Host: www.taobao.com
0090:HAL1.clihdr[0019:]: Accept: */*
0090:HAL1.clihdr[0019:]: Proxy-Connection: Keep-Alive
008d:HAL1.clicls[000e:001a]
008d:HAL1.closed[000e:001a]

Can anyone help with what the problem is here?

---
Regards
Jinge
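The send-proxy / accept-proxy pairing itself looks consistent in the configs above (the first hop's server line sends the header, the second hop's bind expects it). A stripped-down sketch of the pair, reusing the names from the post, that can help bisect this kind of 503 -- if a minimal pair like this works, the problem is likely elsewhere, e.g. the health check marking the backend server DOWN:

```
# ha-L0.conf -- first hop prepends the PROXY protocol header
backend HAL1
    mode http
    server ha2-l1-n1 localhost:3330 send-proxy

# ha-L1.conf -- second hop must expect the header on the same bind
frontend localhostlister
    bind localhost:3330 accept-proxy
    mode http
    default_backend SquidCluster
```

Two gotchas worth checking here: the header must be agreed on by both sides (send-proxy without accept-proxy, or vice versa, makes every request fail to parse), and in haproxy 1.5 a health check toward a send-proxy server does not itself send the PROXY header unless the server line also has check-send-proxy, so enabling plain checks on such a server can mark it DOWN and produce exactly this 503.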