V1.9 SSL engine and ssl-mode-async is unstable
Hi HAProxy Team, I am trying to use Intel QAT with HAProxy 1.9.0, but it is very unstable. I also tried HAProxy 1.8.16 and it works well. How can I find out what is wrong? 1.8.16 and 1.9.0 were compiled and run on the same hardware and system, with the same config file; the attached file is the config file. Thanks for any help. Best regards haproxy.conf Description: Binary data
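For reference, QAT offload is enabled with the "ssl-engine" and "ssl-mode-async" global keywords. A minimal sketch; the engine name "qatengine" and the algorithm list are assumptions that depend on how the QAT OpenSSL engine is installed locally:

```
global
    # load the QAT OpenSSL engine for the listed algorithms
    # (engine name depends on the local installation)
    ssl-engine qatengine algo RSA,EC
    # let haproxy submit SSL operations to the engine asynchronously
    ssl-mode-async
```

Since the report is that 1.8.16 works and 1.9.0 does not with the same config, comparing behaviour with "ssl-mode-async" removed may help narrow down whether the async path is the unstable part.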
Re: [PATCH] CLEANUP: h2: Remove debug printf in mux_h2.c
On Fri, Jan 25, 2019 at 12:56:59AM +0100, Tim Duesterhus wrote: > It was introduced by 1915ca273832ba542d72eb0645dd7ccb6d5b945f > and should be backported to 1.9. Oops, and I was very careful when rebasing my patches :-( Thanks Tim, Willy
h1-client to h2-server host header / authority conversion failure.?
Hi List,

Attached is a regtest which I 'think' should pass. It fails with:

**   s1    0.0 === expect tbl.dec[1].key == ":authority"
     s1    0.0 EXPECT tbl.dec[1].key (host) == ":authority" failed

It seems to me the Host <-> :authority conversion isn't happening properly? But maybe I'm just making a mistake in the test case... I was using HA-Proxy version 2.0-dev0-f7a259d 2019/01/24 with this test. The test was inspired by the attempt to connect to mail.google.com, as discussed in the "haproxy 1.9.2 with boringssl" mail thread. Not sure if this is the main problem, but it seems suspicious to me.

Regards,
PiBa-NL (Pieter)

varnishtest "Check H1 client to H2 server with HTX."

feature ignore_unknown_macro

syslog Slog_1 -repeat 1 -level info {
    recv
} -start

server s1 -repeat 2 {
    rxpri
    stream 0 {
        txsettings
        rxsettings
        txsettings -ack
    } -run
    stream 1 {
        rxreq
        expect tbl.dec[1].key == ":authority"
        expect tbl.dec[1].value == "domain.tld"
        txresp
    } -run
} -start

haproxy h1 -conf {
    global
        log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err

    defaults
        mode http
        timeout client 2s
        timeout server 2s
        timeout connect 1s
        log global
        option http-use-htx

    frontend fe1
        option httplog
        bind "fd@${fe1}"
        default_backend b1

    backend b1
        server s1 ${s1_addr}:${s1_port} proto h2

    frontend fe2
        option httplog
        bind "fd@${fe2}" proto h2
        default_backend b2

    backend b2
        server s2 ${s1_addr}:${s1_port} proto h2
} -start

client c1 -connect ${h1_fe1_sock} {
    txreq -url "/" -hdr "host: domain.tld"
    rxresp
    expect resp.status == 200
} -run

client c2 -connect ${h1_fe2_sock} {
    txpri
    stream 0 {
        txsettings -hdrtbl 0
        rxsettings
    } -run
    stream 1 {
        txreq -req GET -url /3 -litIdxHdr inc 1 huf "domain.tld"
        rxresp
        expect resp.status == 200
    } -run
} -run

#syslog Slog_1 -wait
Re: DDoS protection: ban clients with high HTTP error rates
I've been doing something similar for years. No need for fail2ban.

frontend fe-main
    acl host_dynamic hdr_dom(host) -i newgrounds.com
    acl limit_exceeded sc1_http_err_rate(be-dynamic) gt XXX
    tcp-request content track-sc1 src table be-dynamic if host_dynamic
    use_backend be-rate-limit if limit_exceeded
    use_backend be-dynamic if host_dynamic

backend be-rate-limit
    # haproxy normally returns a 503 but we want to return a 429 here.
    errorfile 503 /etc/haproxy/errorfiles/429.http
    # This may flood your error logs, so you can set this:
    # http-request set-log-level silent

backend be-dynamic
    stick-table type ipv6 size 100k expire 1m store http_err_rate(1m),http_req_rate(1m)
    # other stuff

Hope this helps!

--
Brendon Colby
Senior DevOps Engineer
Newgrounds.com

On Wed, Jan 23, 2019 at 9:19 AM Marco Colli wrote:
>
> Hello!
>
> I use HAProxy in front of a web app / service and I would like to add DDoS
> protection and rate limiting. The problem is that each part of the
> application has different request rates, and for some customers we must
> accept very high request rates and bursts, while this is not allowed for
> unauthenticated users, for example. So I was thinking about this solution:
>
> 1. Based on advanced conditions (e.g. current user), our Rails application
> decides whether to return a normal response (e.g. 2xx) or a 429 (Too Many
> Requests); it can also return other errors, like 401
> 2. HAProxy bans clients if they produce too many 4xx errors
>
> What do you think about this solution?
> Also, is it correct to use HAProxy directly, or is it more performant to
> use fail2ban on HAProxy logs?
>
> This is the HAProxy configuration that I would like to use:
>
> frontend www-frontend
>     tcp-request connection reject if { src_http_err_rate(st_abuse) ge 5 }
>     http-request track-sc0 src table st_abuse
>     ...
>     default_backend www-backend
>
> backend www-backend
>     ...
>
> backend st_abuse
>     stick-table type ipv6 size 1m expire 10s store http_err_rate(10s)
>
> Do you think that the above rules are correct? Am I missing something?
> Also, is it correct to mix *tcp*-request and src_*http*_err_rate in the
> frontend?
> Is it possible to include only the 4xx errors (and not 5xx) in
> http_err_rate?
>
> Any suggestion would be greatly appreciated.
> Thank you
> Marco Colli
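On Marco's last question (counting only 4xx, not 5xx): one way is to leave http_err_rate aside and count 4xx responses explicitly into a general-purpose counter. A rough, untested sketch reusing the frontend/backend/table names from his config; the threshold and rate period are illustrative:

```
backend st_abuse
    # gpc0 is only incremented for 4xx responses (see www-backend)
    stick-table type ipv6 size 1m expire 10s store gpc0,gpc0_rate(10s)

frontend www-frontend
    tcp-request connection reject if { src_gpc0_rate(st_abuse) ge 5 }
    http-request track-sc0 src table st_abuse
    default_backend www-backend

backend www-backend
    # count only client errors (4xx), not 5xx, against the tracked client
    http-response sc-inc-gpc0(0) if { status ge 400 } { status lt 500 }
```

The src_gpc0_rate lookup in the tcp-request rule works at connection time because it matches the source address against the table directly, independently of the later HTTP-level tracking.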
[PATCH] CLEANUP: h2: Remove debug printf in mux_h2.c
It was introduced by 1915ca273832ba542d72eb0645dd7ccb6d5b945f and should be
backported to 1.9.
---
 src/mux_h2.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/mux_h2.c b/src/mux_h2.c
index 586ff516..2972ca29 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -2092,8 +2092,6 @@ static int h2c_frt_handle_data(struct h2c *h2c, struct h2s *h2s)
 		goto strm_err;
 	}
 
-	printf("bl=%d dfl=%d dpl=%d\n", (int)h2s->body_len, (int)h2c->dfl, (int)h2c->dpl);
-
 	if (!h2_frt_transfer_data(h2s))
 		return 0;
-- 
2.20.1
Re: H2 Server Connection Resets (1.9.2)
Hi Luke,

On Wed, Jan 23, 2019 at 05:16:04PM +, Luke Seelenbinder wrote:
> Hi Willy,
>
> This is all very good to hear. I'm glad you were able to get to the bottom
> of it all!
>
> Feel free to send along patches if you want me to test before the 1.9.3
> release. I'm more than happy to do so.

OK, so instead of sending you a boring series, I can propose that you run a test on 2.0-dev, which contains all the fixes I had to go through because of tiny issues everywhere related to this. If you're using git, just clone the master and check out commit f7a259d46f8. Otherwise you can simply wait for the next nightly snapshot. Just let me know if that's OK for you.

I found a number of issues that were causing server aborts, mainly due to the late GOAWAY frame. Once we hit this one, the connection is quickly closed by the server, causing our output packets to be rejected and the connection to be in error. I have not yet investigated in detail whether the close happens after we got the last data or in the middle, though.

But now you have a new server parameter called "max-reuse". This allows you to limit the number of times a server connection is reused. For example you can set it to 990 when you know that the server limits to 1000. In the tests I've run here, I managed to address all the problems related to excessive use of idle connections resulting in too many streams being sent. In addition, most of the rare cases that still happen when you don't have max-reuse are properly handled as a retry.

Regarding the fact that in your case the client's close seems to cause the server-side issue, I couldn't yet reproduce it, though I have a few theories about it. One of them would be an unexpected response from the server causing the connection to turn to an error state. The other one would be that we'd incorrectly abort our stream and/or session and bring the connection down with us.
I'll submit these theories to Olivier once he's back so that he can tell me I'm saying crap regarding some of them and we can focus on what remains :-) Regards, Willy
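The "max-reuse" parameter described above goes on the server line. A minimal sketch; the backend/server names, address, and the assumption that the server caps a connection at 1000 requests are all illustrative:

```
backend be_h2
    # the server is assumed to allow about 1000 requests per
    # connection, so stop reusing it a little before that limit
    server app1 192.0.2.10:443 ssl verify none alpn h2 max-reuse 990
```

Setting the limit slightly below the server's own cap avoids ever hitting the server's GOAWAY on a connection haproxy still considers reusable.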
Re: haproxy 1.9.2 with boringssl
On 24.01.2019 at 15:09, Aleksandar Lazic wrote:
> On 24.01.2019 at 03:49, Willy Tarreau wrote:
>> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>>> On 23.01.2019 at 21:27, Willy Tarreau wrote:
>>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>>> Should it be possible to have fe with h1 and be server h2 (alpn h2), as
>>>>> I expect this or a similar return value when I go through haproxy?
>>>>
>>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>>> the issues reported by Luke.
>>>
>>> Okay, perfect.
>>>
>>> Would you like to share your config so that I can see what's wrong with
>>> my config, thanks.
>>
>> Sure, here's a copy-paste, hoping I don't mess with anything :-)
>>
>> defaults
>>     mode http
>>     option http-use-htx
>>     option httplog
>>     log stdout format raw daemon
>>     timeout connect 4s
>>     timeout client 10s
>>     timeout server 10s
>>
>> frontend decrypt
>>     bind :4445
>>     bind :4446 proto h2
>>     bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
>>     default_backend trace
>>
>> backend trace
>>     stats uri /stat
>>     server s1 127.0.0.1:443 ssl alpn h2 verify none
>>     #server s2 127.0.0.1:80
>>     #server s3 127.0.0.1:80 proto h2
>>
>> As you can see you just connect to port 4445.
>
> Many thanks.

Please ignore this mail. There is a problem within the container, as curl inside the container has the same problem as haproxy, so it's related to the container runtime.

> Sorry for the long mail thread, but I'm not able to get a proper answer
> from the ssl backend.
>
> I have made the setup simpler.
>
> This setup does not return the stats page:
> curl => haproxy-19 with openssl => openssl s_server internal stats page
>
> This setup does return the stats page:
>
> ###
> curl -vk https://207.154.204.236:4443
> * About to connect() to 207.154.204.236 port 4443 (#0)
> *   Trying 207.154.204.236...
> * Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
> * Initializing NSS with certpath: sql:/etc/pki/nssdb
> * skipping SSL peer certificate verification
> * SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
> * Server certificate:
> *       subject: CN=h2test.livesystem.at
> *       start date: Jan 24 12:18:25 2019 GMT
> *       expire date: Apr 24 12:18:25 2019 GMT
> *       common name: h2test.livesystem.at
> *       issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
> > GET / HTTP/1.1
> > User-Agent: curl/7.29.0
> > Host: 207.154.204.236:4443
> > Accept: */*
> >
> * HTTP 1.0, assume close after body
> < HTTP/1.0 200 ok
> < Content-type: text/html
> <
>
> s_server -www -alpn h2 -cert
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
> -key
> /root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
> -accept 4443 -debug -msg
> Secure Renegotiation IS supported
> Ciphers supported in s_server binary
> .
> ###
>
> # openssl version
> OpenSSL 1.0.2k-fips 26 Jan 2017
>
> # curl -V
> curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7
> libidn/1.28 libssh2/1.4.3
> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3
> pop3s rtsp scp sftp smtp smtps telnet tftp
> Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz
> unix-sockets
>
> defaults
>     mode http
>     option http-use-htx
>     option httplog
>     log stdout format raw daemon debug
>     timeout connect 4s
>     timeout client 10s
>     timeout server 10s
>
> frontend decrypt
>     bind :4445
>     bind :4446 proto h2
>     #bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
>     default_backend trace
>
> backend trace
>     stats uri /stat
>     # localhost's ip
>     server s1 207.154.204.236:4443 ssl alpn h2 verify none
>
> podman run --rm -it \
>     -e SERVICE_DEST=mail.google.com \
>     -e LOGLEVEL=debug \
>     -e NUM_THREADS=8 \
>     -e DNS_SRV001=1.1.1.1 \
>     -e DNS_SRV002=8.8.8.8 \
>     -e STATS_PORT=7411 \
>     -e STATS_USER=test \
>     -e STATS_PASSWORD=test \
>     -e SERVICE_TCP_PORT=8443 \
>     -e SERVICE_NAME=google-mail \
>     -e SERVICE_DEST_IP=mail.google.com \
>     -e SERVICE_DEST_PORT=443 \
>     -e CONFIG_FILE=/mnt/haproxy2.cfg \
>     -e DEBUG=1 -v /tmp/:/mnt/ \
>     -p 4445 --expose 4445 \
>     --net host \
>     me2digital/haproxy19
>
> ###
> openssl s_server -www -alpn h2 \
>     -cert ~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt \
>     -key ~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key \
>     -accept 4443 -debug -msg
> ###
>
> ###
> [root@doh-001 ~]# curl -vk http://127.0.0.1:4445
> * About to connect() to 127.0.0.1 port 4445 (#0)
> *   Trying 127.0
Re: haproxy 1.9.2 with boringssl
On 24.01.2019 at 03:49, Willy Tarreau wrote:
> On Wed, Jan 23, 2019 at 09:37:46PM +0100, Aleksandar Lazic wrote:
>> On 23.01.2019 at 21:27, Willy Tarreau wrote:
>>> On Wed, Jan 23, 2019 at 09:08:00PM +0100, Aleksandar Lazic wrote:
>>>> Should it be possible to have fe with h1 and be server h2 (alpn h2), as
>>>> I expect this or a similar return value when I go through haproxy?
>>>
>>> Yes absolutely. That's even what I'm doing on my tests to try to fix
>>> the issues reported by Luke.
>>
>> Okay, perfect.
>>
>> Would you like to share your config so that I can see what's wrong with
>> my config, thanks.
>
> Sure, here's a copy-paste, hoping I don't mess with anything :-)
>
> defaults
>     mode http
>     option http-use-htx
>     option httplog
>     log stdout format raw daemon
>     timeout connect 4s
>     timeout client 10s
>     timeout server 10s
>
> frontend decrypt
>     bind :4445
>     bind :4446 proto h2
>     bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
>     default_backend trace
>
> backend trace
>     stats uri /stat
>     server s1 127.0.0.1:443 ssl alpn h2 verify none
>     #server s2 127.0.0.1:80
>     #server s3 127.0.0.1:80 proto h2
>
> As you can see you just connect to port 4445.

Many thanks.

Sorry for the long mail thread, but I'm not able to get a proper answer from the ssl backend.

I have made the setup simpler.

This setup does not return the stats page:
curl => haproxy-19 with openssl => openssl s_server internal stats page

This setup does return the stats page:

###
curl -vk https://207.154.204.236:4443
* About to connect() to 207.154.204.236 port 4443 (#0)
*   Trying 207.154.204.236...
* Connected to 207.154.204.236 (207.154.204.236) port 4443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*       subject: CN=h2test.livesystem.at
*       start date: Jan 24 12:18:25 2019 GMT
*       expire date: Apr 24 12:18:25 2019 GMT
*       common name: h2test.livesystem.at
*       issuer: CN=Let's Encrypt Authority X3,O=Let's Encrypt,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 207.154.204.236:4443
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 ok
< Content-type: text/html
<

s_server -www -alpn h2 -cert
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt
-key
/root/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key
-accept 4443 -debug -msg
Secure Renegotiation IS supported
Ciphers supported in s_server binary
.
###

# openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017

# curl -V
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.34 zlib/1.2.7
libidn/1.28 libssh2/1.4.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3
pop3s rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz
unix-sockets

defaults
    mode http
    option http-use-htx
    option httplog
    log stdout format raw daemon debug
    timeout connect 4s
    timeout client 10s
    timeout server 10s

frontend decrypt
    bind :4445
    bind :4446 proto h2
    #bind :4443 ssl crt rsa+dh2048.pem npn h2 alpn h2
    default_backend trace

backend trace
    stats uri /stat
    # localhost's ip
    server s1 207.154.204.236:4443 ssl alpn h2 verify none

podman run --rm -it \
    -e SERVICE_DEST=mail.google.com \
    -e LOGLEVEL=debug \
    -e NUM_THREADS=8 \
    -e DNS_SRV001=1.1.1.1 \
    -e DNS_SRV002=8.8.8.8 \
    -e STATS_PORT=7411 \
    -e STATS_USER=test \
    -e STATS_PASSWORD=test \
    -e SERVICE_TCP_PORT=8443 \
    -e SERVICE_NAME=google-mail \
    -e SERVICE_DEST_IP=mail.google.com \
    -e SERVICE_DEST_PORT=443 \
    -e CONFIG_FILE=/mnt/haproxy2.cfg \
    -e DEBUG=1 -v /tmp/:/mnt/ \
    -p 4445 --expose 4445 \
    --net host \
    me2digital/haproxy19

###
openssl s_server -www -alpn h2 \
    -cert ~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.crt \
    -key ~/.caddy/acme/acme-v02.api.letsencrypt.org/sites/h2test.livesystem.at/h2test.livesystem.at.key \
    -accept 4443 -debug -msg
###

###
[root@doh-001 ~]# curl -vk http://127.0.0.1:4445
* About to connect() to 127.0.0.1 port 4445 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 4445 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:4445
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< cache-control: no-cache
< content-type: text/html
<
503 Service Unavailable
No server is available to handle this request.
* Closing connection 0
###

HAProxy output:
exec /usr/local/sbin/haproxy -f /mnt/haproxy2.cfg -d
Note: setting global.maxconn to 2000.