Re: Haproxy running on ipv6 and http-in/&lt;NOSRV&gt;

2023-11-30 Thread Jarno Huuskonen
Hi,

On Tue, 2023-11-28 at 16:29 +0100, Christoph Kukulies wrote:
> I'm wondering why I see haproxy running on ipv6 (Ubuntu 22.04):
> 
> Excerpt from haproxy.cfg:
> 
> frontend http-in
> #    bind *:80
>     bind :::80 v4v6
> #    bind *:443 ssl crt /etc/haproxy/certs/xx.pem 
>     bind :::443 v4v6 ssl crt /etc/haproxy/certs/xx.pem
>     bind quic4@0.0.0.0:443 name quic443 ssl crt
> /etc/haproxy/certs/xxx.pem proto quic alpn h3,h3-29,h3-28,h3-27 npn
> h3,h3-29,h3-28,h3-27 allow-0rtt curves secp521r1:secp384r1
>     http-response add-header alt-svc 'h3=":443"; ma=7200,h3-29=":443";
> ma=7200,h3-Q050=":443"; ma=7200,h3-Q046=":443"; ma=7200,h3-
> Q043=":443"; ma=7200,quic=":443"; ma=7200'
> 
>     http-request return status 200 content-type text/plain lf-string
> "%[path,field(-1,/)].${ACCOUNT_THUMBPRINT}\n" if { path_beg '/.well-
> known/acme-challenge/' }
> 

This and "use_backend letsencrypt-backend if letsencrypt-acl" look like
duplicates; only one of them is used ?

>     # Redirect if HTTPS is *not* used
>     redirect scheme https code 301 if !{ ssl_fc }
>     acl letsencrypt-acl path_beg /.well-known/acme-challenge/
> 
>     use_backend letsencrypt-backend if letsencrypt-acl
>     default_backend website
> 
> In my haproxy.log I see:
> 
> Nov 28 16:10:19 mail haproxy[59727]: :::88.181.85.41:63772
> [28/Nov/2023:16:10:19.728] http-in http-in/&lt;NOSRV&gt; 0/-1/-1/-1/0 301 97 - -
> LR-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
> 
> This stems from a request I did that way:
> 
> curl http://www.kukulies.org
> 

Seems normal: the status code is 301 and you have "redirect scheme https code
301 if !{ ssl_fc }".
Is this what you expect, or do you think there are some errors ?
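(For what it's worth: if the intent is to serve the ACME challenge over plain
HTTP instead of redirecting it, one option is to exclude it from the redirect.
An untested sketch, reusing the letsencrypt-acl from the config above:)

```haproxy
frontend http-in
    bind :::80 v4v6
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    # Redirect to HTTPS only when the request is not an ACME challenge
    redirect scheme https code 301 if !{ ssl_fc } !letsencrypt-acl
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend website
```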

-Jarno


-- 
Jarno Huuskonen



Re: Old style OCSP not working anymore?

2023-07-24 Thread Jarno Huuskonen
Hello,

On Fri, 2023-07-21 at 17:31 +0200, Remi Tricot-Le Breton wrote:
> I found the faulty commit for Jarno's issue ("cc346678d MEDIUM: ssl: Add 
> ocsp_certid in ckch structure and discard ocsp buffer early").
> Here's a patch that should fix it. If you want to try it with your 
> setups be my guests, otherwise it should be merged soon if William is ok 
> with the patch.

Thanks Remi. With haproxy 2.8.1 + the patch, haproxy returns the OCSP response
for both binds.

-Jarno

-- 
Jarno Huuskonen



Re: Old style OCSP not working anymore?

2023-07-21 Thread Jarno Huuskonen
Hi,

On Thu, 2023-07-20 at 20:27 +0200, Sander Klein wrote:
> > The best thing to do is to test with `openssl s_client -showcerts
> > -connect some.hostname.nl:443` with both your versions to identify what
> > changed.
> 
> I've tested with 'openssl s_client -showcerts -connect mydomain.com:443 
> -servername mydomain.com -status -tlsextdebug'
> 

Does 2.8.1 send an OCSP response if you connect with the IPv4 address:
openssl s_client -showcerts -connect ipaddress:443 ...
(with or without -servername)
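(For example, something along these lines; the addresses and hostname are
placeholders:)

```shell
# Compare IPv4 vs IPv6: a stapled response prints an "OCSP Response Status:
# successful" block, otherwise openssl prints "OCSP response: no response sent".
openssl s_client -connect 192.0.2.10:443 -servername mydomain.com -status </dev/null 2>/dev/null | grep -i 'OCSP response'
openssl s_client -connect '[2001:db8::10]:443' -servername mydomain.com -status </dev/null 2>/dev/null | grep -i 'OCSP response'
```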

> On 2.6.14 I get an OCSP response, on 2.8.1 I get:
> 
> "OCSP response: no response sent"
> 
> It really looks like HAProxy doesn't want to send the response coming 
> from the file. Is there any more information I can gather?

I get the same result as Sander (2.6.x sends the OCSP response and 2.8.1
doesn't). I have IPv4 and IPv6 binds; for an IPv4 connection haproxy (2.8.1)
sends the OCSP response, and for IPv6 it doesn't.

bind ipv4@:443 name v4ssl ssl crt-list /etc/haproxy/ssl/example.crtlist
bind ipv6@:::443 name v6ssl ssl crt-list /etc/haproxy/ssl/example.crtlist

(And example.crtlist:
/etc/haproxy/ssl/somecertfile.pem.ecdsa [alpn h2,http/1.1]
)
(and somecertfile.pem.ecdsa.ocsp in /etc/haproxy/ssl)

If I change the order of the IPv4 / IPv6 binds (so bind ipv6@:::443 name
v6ssl... is first), then haproxy (2.8.1) sends the OCSP response for an IPv6
connection and not for IPv4.

-Jarno

-- 
Jarno Huuskonen



Re: Theoretical limits for a HAProxy instance

2022-12-12 Thread Jarno Huuskonen
Hi,

On Mon, 2022-12-12 at 09:47 +0100, Iago Alonso wrote:
> 

Can you share haproxy -vv output ?

> HAProxy config:
> global
>     log /dev/log len 65535 local0 warning
>     chroot /var/lib/haproxy
>     stats socket /run/haproxy-admin.sock mode 660 level admin
>     user haproxy
>     group haproxy
>     daemon
>     maxconn 200
>     maxconnrate 2500
>     maxsslrate 2500

Based on your graphs (haproxy_process_current_ssl_rate /
haproxy_process_current_connection_rate) you might be hitting the
maxconnrate/maxsslrate limits.
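(One way to confirm this at runtime is to compare the current rates with the
configured limits over the stats socket; a sketch using socat and the socket
path from the config above:)

```shell
# ConnRate / SslRate are the current per-second rates; ConnRateLimit /
# SslRateLimit reflect the configured maxconnrate / maxsslrate.
echo "show info" | socat stdio /run/haproxy-admin.sock \
  | grep -E '^(ConnRate|ConnRateLimit|SslRate|SslRateLimit):'
```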

-Jarno

-- 
Jarno Huuskonen


Re: Rate Limit a specific HTML request

2022-11-22 Thread Jarno Huuskonen
Hi,

On Tue, 2022-11-22 at 20:57 +, Branitsky, Norman wrote:
> I have the following "generic" rate limit defined - 150 requests in 10s
> from the same IP address:
> stick-table  type ip size 100k expire 30s store http_req_rate(10s)
> http-request track-sc0 src unless { src -f
> /etc/CONFIG/haproxy/cidr.lst }
> http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150 }
>  
> Is it possible to rate limit a specific "computationally expensive" HTML
> request from the same IP address to a much smaller number?

Untested, but try using sc1 for the search url:
http-request track-sc1 src table search_table if
acl_matching_datamart_searchbyname !acl_exclude_cidr_lst

http-request deny deny_status 429 if { sc1_http_req_cnt(search_table) gt 5 }

backend search_table
stick-table type ... store http_req_cnt,http_req_rate...
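(Spelled out, a hypothetical complete version of the sketch above; the ACL
names, the search path and the threshold of 5 are made up for illustration:)

```haproxy
frontend fe_main
    acl acl_exclude_cidr_lst src -f /etc/CONFIG/haproxy/cidr.lst
    acl acl_matching_datamart_searchbyname path_beg /datamart/searchbyname
    # Generic per-IP limit (sc0) as in the original config would go here,
    # plus a separate, stricter counter (sc1) for the expensive search URL:
    http-request track-sc1 src table search_table if acl_matching_datamart_searchbyname !acl_exclude_cidr_lst
    http-request deny deny_status 429 if { sc1_http_req_cnt(search_table) gt 5 }
    default_backend be_app

backend search_table
    # Used only as a stick-table holder, no servers needed
    stick-table type ip size 100k expire 30s store http_req_cnt,http_req_rate(10s)
```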

-Jarno

-- 
Jarno Huuskonen


Re: How to return 429 Status Code instead of 503

2022-11-16 Thread Jarno Huuskonen
Hello,

On Tue, 2022-11-08 at 09:30 +0530, Chilaka Ramakrishna wrote:
> On queue timeout, currently HAProxy throws 503, But i want to return 429,
> I understand that 4xx means a client problem and client can't help here.
> But due to back compatibility reasons, I want to return 429 instead of
> 503. Is this possible ?

errorfile 503 /path/to/429.http 
(http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#4-errorfile)

Or maybe it's possible with http-error
(http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#http-error)
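(With the errorfile approach, the file contains a complete raw HTTP response,
so the client sees a 429 even though haproxy internally handled a 503. A
sketch of what /path/to/429.http could contain; headers and body are
illustrative:)

```http
HTTP/1.1 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/plain
Content-Length: 19

Too many requests.
```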

-Jarno

-- 
Jarno Huuskonen


Re: Possible problem with custom error pages -- backend server returns 503, haproxy logs 503, but the browser gets 403

2022-08-22 Thread Jarno Huuskonen

Hello,

On 8/22/22 17:37, Shawn Heisey wrote:
> The same problem also happens with 2.6.4, built with the same options as
> the dev version.
> 
> HAProxy version 2.6.4 2022/08/22 - https://haproxy.org/
> 
> I have documentation for the problem details in another project's bug
> tracker:
> 
> https://issues.apache.org/jira/browse/SOLR-16327?focusedCommentId=17582990=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17582990





Does this happen only with HTTP/3 (QUIC), or also with HTTP/1.1 and
HTTP/2 ?


Are you able to capture the response coming from Solr when haproxy
sends the wrong error ?


Testing with (2.6.4)+curl and this config (http/2 / http/1.1 only):
...
frontend test
bind ipv4@127.0.0.1:8001 alpn h2,http/1.1 ssl crt somecrt.pem

errorfiles myerrors
http-response return status 404 default-errorfiles if { status 404 }
http-response return status 403 default-errorfiles if { status 403 }
http-response return status 500 default-errorfiles if { status 500 }
http-response return status 502 default-errorfiles if { status 502 }
http-response return status 503 default-errorfiles if { status 503 }
http-response return status 504 default-errorfiles if { status 504 }
default_backend test_be

backend test_be
server srv1 127.0.0.1:9000 id 1

listen responder
bind ipv4@127.0.0.1:9000
http-request deny deny_status 503

And I receive the correct error file.

-Jarno

--
Jarno Huuskonen



Re: haproxy 2.6.0 and quic

2022-06-03 Thread Jarno Huuskonen
Hi,

On Fri, 2022-06-03 at 14:47 +0200, Markus Rietzler wrote:
> 
> Hi,
> 
> we are using haproxy 2.4.17 at the moment. i have compiled haproxy 2.6
> with quic support and quctls
> 
> when i no check my config i get
> 
> /opt/haproxy-260# /opt/haproxy-260/sbin/haproxy -c -f haproxy.cfg
> [NOTICE]   (35905) : haproxy version is 2.6.0-a1efc04
> [NOTICE]   (35905) : path to executable is /opt/haproxy-260/sbin/haproxy
> [WARNING]  (35905) : config : parsing [haproxy.cfg:100]: 'log-format'
> overrides previous 'option httplog' in 'defaults' 
> section.
> [ALERT]    (35905) : config : parsing [haproxy.cfg:213] : 'bind' :
> unsupported stream protocol for datagram family 2 
> address 'quic4@:4443'; QUIC is not compiled in if this is what you were
> looking for.

I don't think you have QUIC support compiled in. You're missing the
USE_QUIC=1 build option.

> 
> my build command was
> 
> make TARGET=linux-glibc USE_OPENSSL=1 SSL_INC=/opt/quictls/include
> SSL_LIB=/opt/quictls/lib64 
> LDFLAGS="-Wl,-rpath,/opt/quictls/lib64" ADDLIB="-lz -ldl" USE_ZLIB=1
> USE_PCRE=1 USE_PCRE=yes USE_LUA=1 
> LUA_LIB_NAME=lua5.3  LUA_INC=/usr/include/lua5.3 ;
> 
> 
> -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC -PROMEX -MEMORY_PROFILING

-QUIC --> QUIC support missing.
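(For reference, the original build command with USE_QUIC=1 added might look
like this; a sketch keeping the quictls paths from above and dropping the
duplicate USE_PCRE:)

```shell
make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
     SSL_INC=/opt/quictls/include SSL_LIB=/opt/quictls/lib64 \
     LDFLAGS="-Wl,-rpath,/opt/quictls/lib64" ADDLIB="-lz -ldl" \
     USE_ZLIB=1 USE_PCRE=1 USE_LUA=1 \
     LUA_LIB_NAME=lua5.3 LUA_INC=/usr/include/lua5.3
```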

-Jarno

-- 
Jarno Huuskonen



Re: possible bug in haproxy: backend switching with map file does not work with HTTP/2

2022-03-30 Thread Jarno Huuskonen
_THREADS=64, default=1).
> Built with OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
> Running on OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.3
> Built with the Prometheus exporter as a service
> Built with network namespace support.
> Built with libslz for stateless compression.
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Support for malloc_trim() is enabled.
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Built with PCRE2 version : 10.34 2019-11-21
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with gcc compiler version 9.4.0
>  
> Available polling systems :
>   epoll : pref=300,  test result OK
>    poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>  
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>   h2 : mode=HTTP   side=FE|BE mux=H2  
> flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
>     fcgi : mode=HTTP   side=BE    mux=FCGI
> flags=HTX|HOL_RISK|NO_UPG
>     <default> : mode=HTTP   side=FE|BE mux=H1   flags=HTX
>   h1 : mode=HTTP   side=FE|BE mux=H1  
> flags=HTX|NO_UPG
>     <default> : mode=TCP    side=FE|BE mux=PASS flags=
>     none : mode=TCP    side=FE|BE mux=PASS
> flags=NO_UPG
>  
> Available services : prometheus-exporter
> Available filters :
>     [SPOE] spoe
>     [CACHE] cache
>     [FCGI] fcgi-app
>     [COMP] compression
>     [TRACE] trace
>  
> 
> We set response-header „X-Info“, to see which backend is chosen.
> When we use http/1.1 everything works fine:
>  
> root@ubuntu2004:/etc/haproxy# curl --http1.1 -kv https://127.0.0.1/x
> ...
> < HTTP/1.1 200 OK
> < date: Wed, 30 Mar 2022 12:05:21 GMT
> < server: Apache/2.4.41 (Ubuntu)
> < last-modified: Wed, 30 Mar 2022 11:25:27 GMT
> < etag: "5-5db6dcd63b259"
> < accept-ranges: bytes
> < content-length: 5
> < x-info: defaultbackend : default_1   <--default backend
> OK
> < 
> test
> * Connection #0 to host 127.0.0.1 left intact
>  
> root@ubuntu2004:/etc/haproxy# curl --http1.1 -kv https://127.0.0.1/2/x
> ...
> < HTTP/1.1 200 OK
> < date: Wed, 30 Mar 2022 12:05:24 GMT
> < server: Apache/2.4.41 (Ubuntu)
> < last-modified: Wed, 30 Mar 2022 11:28:30 GMT
> < etag: "7-5db6dd8521aec"
> < accept-ranges: bytes
> < content-length: 7
> < x-info: backend_2 : default_2 <-- backend_2 OK
>  
> < 
> test 2
> * Connection #0 to host 127.0.0.1 left intact
>  
> root@ubuntu2004:/etc/haproxy# curl --http1.1 -kv https://127.0.0.1/3/x
> ...
> < HTTP/1.1 200 OK
> < date: Wed, 30 Mar 2022 12:05:26 GMT
> < server: Apache/2.4.41 (Ubuntu)
> < last-modified: Wed, 30 Mar 2022 11:46:32 GMT
> < etag: "7-5db6e18c50c11"
> < accept-ranges: bytes
> < content-length: 7
> < x-info: backend_3 : default_3 <-- backend_3 OK
>  
> < 
> test 3
> * Connection #0 to host 127.0.0.1 left intact
>  
>  
> When we use HTTP/2, switching to backend_2 does not work:
>  
> root@ubuntu2004:/etc/haproxy# curl --http2 -kv https://127.0.0.1/2/x
> …
> < HTTP/2 200
> < date: Wed, 30 Mar 2022 12:09:04 GMT
> < server: Apache/2.4.41 (Ubuntu)
> < last-modified: Wed, 30 Mar 2022 11:28:30 GMT
> < etag: "7-5db6dd8521aec"
> < accept-ranges: bytes
> < content-length: 7
> < x-info: defaultbackend : default_1  <-- here we expect backend_2
> < 
> test 2
> * Connection #0 to host 127.0.0.1 left intact
>  
> Can you please check this?
>  
> Kind Regards
> Ralf Saier
> Senior Software Developer
> Tel. +49 721 663035-253
> e-mail sa...@econda.de
>  
> Angaben zum Absender:
> econda GmbH, Zimmerstr. 6, 76137 Karlsruhe
> Geschäftsführer: Christian Hagemeyer, Dr. Philipp Sorg
> Handelsregister: Amtsgericht Mannheim HRB 110559
>  
>  
>  
>  

-- 
Jarno Huuskonen


Re: Haproxy, Logging more TCP details?

2021-11-22 Thread Jarno Huuskonen

Hi,

On 11/22/21 16:33, Ben Hart wrote:
> Hey there! I’ve got a handful of Haproxy servers that are serving LDAPS
> and HTTPS front/back ends. I am new to this, so I built these and
> reused the config from the older Haproxy servers we had.
> 
> Anyway I mention that because I likely have little idea what I should be
> done here. So far everything is working.. we are able to bind and
> perform lookups successfully. What’s not working like I think it should
> is logging. I have Firewalld setup that is blocking all traffic inbound
> from the same internal subnet as the server, and allowing 0.0.0.0/0 in
> from all other sources for ports 636 and 443.
> 
> Rsyslog is matching on program name ‘haproxy’ and the default UNIX
> socket /dev/log and forwarding all info to /var/log/haproxy.log
> 
> Rsyslog is matching on program name ‘firewalld’ and sending all info to
> /var/log/firewalld.log
> 
> If I tail both files, I see many inbound connections allowed to port
> 636, but no corresponding events in the haproxy.log file.  So I’m hoping


Do you get any logs in haproxy.log ? (Any logs from "frontend 
ecorp_https" ?)


> that maybe I have something on the Haproxy side that’s not quite what it
> should be.  The thought is, Maybe the connection attempts are coming in,
> but Haproxy is not fulfilling them for some reason. And I don’t have the
> appropriate log options or formats setup to determine that.
> 
> Attached is my sanitized haproxy.cfg


> global
> log /dev/log local0
> log /dev/log local1 notice
> #   log 127.0.0.1   local1
> chroot /var/lib/haproxy


You're using chroot; is rsyslog configured to listen on
/var/lib/haproxy/dev/log ? (And if this is a CentOS/RHEL-based system, check
that SELinux allows rsyslog to create the socket and haproxy to connect to it.)
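(For reference, the rsyslog side of a chrooted haproxy might look roughly like
this; a sketch using rsyslog's imuxsock module with the chroot path from the
config above:)

```
# /etc/rsyslog.d/haproxy.conf
module(load="imuxsock")
# Extra listen socket inside haproxy's chroot:
input(type="imuxsock" Socket="/var/lib/haproxy/dev/log" CreatePath="on")
if $programname == 'haproxy' then /var/log/haproxy.log
& stop
```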


Have you checked that haproxy sends logs at all, for example by enabling
logging to 127.0.0.1 and using tcpdump -nn -XX -i lo port 514 or
something similar ?


> frontend ecorp_https
>  option tcplog

You probably don't want to use option tcplog
(https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#4-option%20tcplog)
with mode http. AFAIK it overrides your custom log-format.


-Jarno

--
Jarno Huuskonen



Re: host-based be routing with H2

2021-10-05 Thread Jarno Huuskonen
Hi,

On Tue, 2021-10-05 at 15:56 +0200, Ionel GARDAIS wrote:
> Hi,
> 
> I'm having trouble with backend-routing based on host header when H2 is
> enabled.
> Frontend is https only and all backends are HTTP1.
> We're using v2.4.4.
> 
> When the user browser is directed to app2.example.com, it switches to
> app1.example.com.
> There is one public IP address, certificate is wildcard for the domain, so
> app1 and app2 share the same IP and certificate.
> When H2 is disabled, all is working fine.
> 
> Currently, backend selection is made with
> use_backend %[req.hdr(host),lower]
> 

Have you looked at this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg40652.html
your issue sounds similar.

Is one backend the default_backend (where HTTP/2 requests go) ?

Does it work with something like:
use_backend %[req.hdr(host),lower,regsub(:\d+$,,)]
or
use_backend %[req.hdr(host),lower,word(1,:)]
(https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/)

or using maps:
https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/
(use_backend
%[req.hdr(host),lower,map_dom(/etc/haproxy/maps/hosts.map,be_default)])
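(For illustration, the map variant with the :port stripped before the lookup;
the backend names and map path are made up:)

```haproxy
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/wildcard.pem alpn h2,http/1.1
    # HTTP/2 clients may include the port in :authority, so strip it
    # before the map lookup; fall back to be_default on no match.
    use_backend %[req.hdr(host),lower,word(1,:),map_dom(/etc/haproxy/maps/hosts.map,be_default)]

# /etc/haproxy/maps/hosts.map:
#   app1.example.com be_app1
#   app2.example.com be_app2
```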

-Jarno

> Would
> use_backend %[ssl_fc_sni,lower] # Layer 5
> or
> use_backend %[req.ssl_sni,lower] # Layer 6
> help with H2 ?
> 
> Thanks,
> Ionel
> 
> 

-- 
Jarno Huuskonen



Re: double // after domain causes ERR_HTTP2_PROTOCOL_ERROR after upgrade to 2.4.3

2021-08-20 Thread Jarno Huuskonen

Hi,

On 8/20/21 2:20 PM, Lukas Tribus wrote:
> On Fri, 20 Aug 2021 at 13:08, Илья Шипицин wrote:
> > double slashes behaviour is changed in BUG/MEDIUM:
> > h2: match absolute-path not path-absolute for :path · haproxy/haproxy@46b7dff
> > (github.com)
> 
> Actually, I think the patch you are referring to would *fix* this
> particular issue, as it was committed AFTER the last releases:
> 
> https://github.com/haproxy/haproxy/commit/46b7dff8f08cb6c5c3004d8874d6c5bc689a4c51
> 
> It was this fix that probably caused the issue:
> https://github.com/haproxy/haproxy/commit/4b8852c70d8c4b7e225e24eb58258a15eb54c26e
> 
> Using the latest git, applying the patch manually or running a
> 20210820 snapshot would fix this.



Yes, 2.4.3+"BUG/MEDIUM: h2: match absolute-path not path-absolute for 
:path" and https://www.example.com// appears to work again.


-Jarno

--
Jarno Huuskonen



Re: double // after domain causes ERR_HTTP2_PROTOCOL_ERROR after upgrade to 2.4.3

2021-08-20 Thread Jarno Huuskonen

Hi,

On 8/20/21 1:46 PM, Olaf Buitelaar wrote:
> After we upgraded to haproxy version 2.4.3 from 2.4.2, urls with a double
> slash after the domain stopped working. We're running the standard
> docker image.
> 
> For example:
> https://www.example.com//
> https://www.example.com//some/path/
> https://www.haproxy.org//
> 
> the browser gives a ERR_HTTP2_PROTOCOL_ERROR
> while on 2.4.2 this worked fine. probably this has something to do with
> the mitigations of
> https://www.mail-archive.com/haproxy@formilux.org/msg41041.html
> 
> our bind line looks like:
> bind *:8443 allow-0rtt ssl crt /usr/local/etc/haproxy/xxx.bundle.pem crt
> /usr/local/etc/haproxy/yyy.bundle.pem alpn h2,http/1.1
> 
> Generally our backend servers are http1.


Same thing happens to me with 2.4.3 and 2.2.16.

Seems to happen only for https://www.example.com// but not for 
https://www.example.com/somepath//something


-Jarno

--
Jarno Huuskonen



Re: Question about available fetch-methods for http-request

2021-08-12 Thread Jarno Huuskonen

Hello,

On 8/12/21 8:59 AM, Maya Lena Ayleen Scheu wrote:
> Your solution would work if I had only one static context path. The
> tricky thing is that I would like to have it dynamic, so that the word
> between the first two “/“ always becomes the subdomain if a certain
> condition is true.
> That’s where I am stuck: I don’t know how to grab that information and
> put it in front of my domain without being able to use the path_reg method.


Take a look at field,word and regsub:
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-field
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-regsub
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-word

And maybe path, variables and concat with field,word.

regsub can probably rewrite the whole url into a context_path.domain.com Host header.
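(For example, something like this might work; an untested sketch where
host_example is assumed to be an ACL defined elsewhere, and the
.ms.example.com suffix comes from the original question:)

```haproxy
# On a path like /context_path/abc/etc, field(2,/) picks "context_path"
http-request set-var(req.ctx) path,field(2,/) if host_example
http-request set-header Host %[var(req.ctx)].ms.example.com if host_example
```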

-Jarno



Best Regards, Maya

On 12. Aug 2021, at 04:15, Igor Cicimov wrote:


Hi Maya,

Maybe try this:

http-request set-header Host context_path.ms.example.com if { path_beg /context_path } { hdr(Host) -i example.com }


*From:* Maya Lena Ayleen Scheu
*Sent:* Wednesday, August 11, 2021 9:58 PM
*To:* haproxy@formilux.org
*Subject:* Question about available fetch-methods for http-request
Hi there,

I have some questions regarding Haproxy Configuration in Version 
HA-Proxy version 2.0.23, which is not clear by reading the official 
documentation. I hope you would have some ideas how this could be solved.



*What I wish to accomplish:*

A frontend application is called by an url with a context path in it.
Haproxy should set a Header in the backend section with `http-request 
set-header Host` whereas the set Host contains the context_path found 
in the url-path. I try to make it clear with an example:


The called url looks like: `https://example.com/context_path/abc/etc`
Out of this url I would need to set the following Host Header:
`context_path.ms.example.com`, while the path remains `/context_path/abc/etc`


While I find many fetch-examples for ACLs, I had to learn that most of 
them don’t work on `http-request set-header or set-env`. I tried to 
use `path_beg` or `path_reg`, which parses with errors, that the fetch 
method is unknown.


So something like this doesn’t work:
`http-request set-header Host %[path_reg(...)].ms.example.domain.com if host_example`


or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

*Question:*

I am certain that this should somehow be possible, as I found even 
solutions to set variables or Headers by urlp, cookies, etc.
What would be the explanation, why fetch methods like path_beg are not 
available in this context? And how to work around it?


Thank you in advance and best regards,
Maya Scheu




--
Jarno Huuskonen



Re: [EXTERNAL] Re: built in ACL, REQ_CONTENT

2021-06-08 Thread Jarno Huuskonen
Hello,

On Tue, 2021-06-08 at 12:25 +, Godfrin, Philippe E wrote:
> OK, I see. An associated question, how do I gain access to that content to
> interrogate/parse the data in that content?

req.body
(https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.6-req.body)
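(A minimal sketch; note that req.body only sees what has been buffered, so
option http-buffer-request is needed. The matched string and port are made
up:)

```haproxy
frontend fe_api
    bind :8080
    # Wait for the full request body (up to buffer size) before the rules run
    option http-buffer-request
    # Deny requests whose body contains the substring "forbidden-word"
    http-request deny deny_status 403 if { req.body -m sub forbidden-word }
    default_backend be_app
```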

Can you explain a little bit what you're trying to do ?

-Jarno

> pg
> 
> -Original Message-
> From: Lukas Tribus  
> Sent: Monday, June 7, 2021 4:08 PM
> To: Godfrin, Philippe E 
> Cc: haproxy@formilux.org
> Subject: [EXTERNAL] Re: built in ACL, REQ_CONTENT
> 
> Use caution when interacting with this [EXTERNAL] email!
> 
> Hello,
> 
> On Mon, 7 Jun 2021 at 14:51, Godfrin, Philippe E 
> wrote:
> > 
> > Greetings!
> > 
> > I can’t seem to find instructions on how to use this builtin ACL. Can
> > someone point me in the right direction, please?
> 
> There is nothing specific about it, you use just like every other ACL.
> 
> http-request deny if REQ_CONTENT
> 
> http-request deny unless REQ_CONTENT
> 
> 
>  Lukas
> 
> 
> 
> 

-- 
Jarno Huuskonen


Re: Bad backend selected

2021-06-07 Thread Jarno Huuskonen
Hello,

On Mon, 2021-06-07 at 16:46 +0200, Artur wrote:
> Hello,
> 
> I'm currently running haproxy 2.4.0 and I can see something strange in
> the way haproxy selects a backend for processing some requests.
> 
> This is simplified frontend configuration that should select between
> static and dynamic (websocket) content URIs based on path_beg.
> 
> frontend wwws
>     bind 0.0.0.0:443 ssl crt /etc/haproxy/ssl/server.pem alpn
> h2,http/1.1
>     mode http
> 
>     acl is_static_prod31    path_beg /p31/
>     acl is_dynamic_prod31   path_beg /n/p31/
>     acl is_domain_name hdr(host) -i domain.name
> 
>     use_backend ws_be_prod31 if is_dynamic_prod31 is_domain_name
>     use_backend www_be_prod  if is_static_prod31 is_domain_name
> 
>     default_backend www_be_prod
> 
> What I can see in logs is that some requests are correctly processed and
> redirected to dynamic backends (websockets servers) for processing :
> 
> Jun  7 15:44:41 host haproxy[9384]: 1.2.3.4:56952
> [07/Jun/2021:15:43:31.926] wwws~ ws_be_prod31/s1 5/0/1/3/70015 101 421 -
> - --VN 34/34/27/8/0 0/0 "GET https://domain.name/n/p31/socket.io/...
> HTTP/2.0"
> 
> While others are wrongly processed by the static web server :
> 
> Jun  7 15:50:06 host haproxy[9384]: 1.2.3.4:61037
> [07/Jun/2021:15:50:06.157] wwws~ www_be_prod/web1 6/0/1/1/7 404 9318 - -
>  34/34/0/0/0 0/0 "GET https://domain.name:443/n/p31/socket.io/...
> HTTP/2.0"
> 
> However the only difference is the 443 port explicitly specified in the
> later request.
> I am not sure it's something specific to 2.4.0, but I've never seen it
> before.
> Is it an expected behaviour ? If so, how can I change my acls to correct
> it ?

Does it work if you use hdr_dom
(https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-req.hdr)
for the host header acl:
(acl is_domain_name hdr_dom(host) -i domain.name)
(or some other match that ignores the port in the Host header)?

-Jarno

-- 
Jarno Huuskonen


Re: HAPROXY CAN NOT POINT IN TO PORT 5000 OF PATRONI

2021-04-22 Thread Jarno Huuskonen
Hi,

On Thu, 2021-04-22 at 11:39 +0200, Maciej Zdeb wrote:
> Hi,
> try removing those two lines from config:
> option httpchk
> http-check expect status 200
> 
> it is postgres (tcp backend), you should not expect http response on
> health check.
> 

I think httpchk/port 8008 is for the patroni REST API:
https://patroni.readthedocs.io/en/latest/rest_api.html
(but you should test without option httpchk / check port 8008 to rule this
out).

Also, the listen postgres section doesn't have an explicit mode tcp (it's in
defaults though), and listen stats has mode tcp (but stats should probably be
mode http).

If the system has SELinux enabled, I'd check that setsebool -P
haproxy_connect_any=On is set, because AFAIK the default policy doesn't allow
all/random ports.

(And logs probably tell what's wrong ...)
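(On a CentOS/RHEL system the checks might look like this; setsebool needs
root:)

```shell
getenforce                          # Enforcing / Permissive / Disabled
setsebool -P haproxy_connect_any=On # allow haproxy to connect to any port
ausearch -m avc -c haproxy          # show SELinux denials for haproxy, if any
```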

-Jarno

> 
> > On Thu, 22 Apr 2021 at 02:38, thủy bùi wrote:
> > I have open all port by setting firewall rules, But still get the same
> > error
> > 
> > On Wed, 21 Apr 2021 at 23:01, thủy bùi wrote:
> > > I have change my configuration to TCP mode as below and restart haproxy
> > > but still got the same error: 
> > > global
> > > maxconn 100
> > > 
> > > defaults
> > > log global
> > > mode tcp
> > > retries 2
> > > timeout client 30m
> > > timeout connect 4s
> > > timeout server 30m
> > > timeout check 5s
> > > 
> > > listen stats
> > > mode tcp
> > > bind *:8009
> > > stats enable
> > > stats uri /
> > > 
> > > listen postgres
> > > bind *:5000
> > > option httpchk
> > > http-check expect status 200
> > > default-server inter 3s fall 3 rise 2 on-marked-down shutdown-
> > > sessions
> > > server postgresql_10.128.0.10_5432 10.128.0.10:5432 maxconn 100
> > > check port 8008
> > > server postgresql_10.128.0.11_5432 10.128.0.11:5432 maxconn 100
> > > check port 8008
> > > 
> > > On Wed, 21 Apr 2021 at 22:42, Jarno Huuskonen <jarno.huusko...@uef.fi> wrote:
> > > > Hei,
> > > > 
> > > > On Wed, 2021-04-21 at 16:27 +0100, Andrew Smalley wrote:
> > > > > From the look of  your configuration you are using  HTTP Mode, for
> > > > > PostgreSQL, you will need a TCP VIP
> > > > > 
> > > > > I noted this because of the HTTP check
> > > > > 
> > > > > try using  "mode tcp"
> > > > > 
> > > > 
> > > > defaults has mode tcp:
> > > > 
> > > > defaults
> > > >     log global
> > > >     mode tcp
> > > > ...
> > > > 
> > > > -Jarno
> > > > 
> > > > > 
> > > > > On Wed, 21 Apr 2021 at 16:25, Jarno Huuskonen
> > > > 
> > > > > wrote:
> > > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > On Wed, 2021-04-21 at 21:55 +0700, thủy bùi wrote:
> > > > > > > Dear HAproxy dev,I have install all the requirement of HAproxy
> > > > into the
> > > > > > > system alongside with patroni and etcd, but finding error while
> > > > call
> > > > > > > into
> > > > > > > port 5000.
> > > > > > > The information is provided as below.
> > > > > > > Please help me find out the issue.
> > > > > > > I have running HAproxy successfully
> > > > > > > 
> > > > > > > But unable to connect to my database throught port 5000
> > > > > > > 
> > > > > > > 
> > > > > > > psql: error: server closed the connection unexpectedly
> > > > > > > This probably means the server terminated abnormally
> > > > > > > before or while processing the request.
> > > > > > > What is your configuration?
> > > > > > > 
> > > > > > ...
> > > > > > 
> > > > > > Does haproxy run when you run it from cli
> > > > > > (haproxy -d -f /path/to/yourconfig.cfg) ?
> > > > > > 
> > > > > > And do you have errors in your logs ?
> > > > > > 
> > > > > > > Linux postgre02 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16
> > > > 18:28:22
> > > > > > > UTC
> > > > > > > 2021 x86_64 x86_64 x86_64 GNU/Linux
> > > > > > 
> > > > > > Looks like you're running on CentOS/RHEL 7 ? Do you have selinux
> > > > enabled
> > > > > > (getenforce) ? You'll probably need to allow haproxy to connect to
> > > > all
> > > > > > ports
> > > > > > (or allow required ports). (setsebool -P haproxy_connect_any=On
> > > > might
> > > > > > help).
> > > > > > 
> > > > > > (Your logs should show if connections are denied).
> > > > > > 
> > > > > > -Jarno
> > > > > > 
> > > > > > --
> > > > > > Jarno Huuskonen
> > > > 
> > > 
> > > 
> > > -- 
> > > BUI THANH THUY 
> > > Tel: 0348672994
> > > Email: buithuy.13...@gmail.com




Re: HAPROXY CAN NOT POINT IN TO PORT 5000 OF PATRONI

2021-04-21 Thread Jarno Huuskonen
Hei,

On Wed, 2021-04-21 at 16:27 +0100, Andrew Smalley wrote:
> From the look of  your configuration you are using  HTTP Mode, for
> PostgreSQL, you will need a TCP VIP
> 
> I noted this because of the HTTP check
> 
> try using  "mode tcp"
> 

defaults has mode tcp:

defaults
log global
mode tcp
...

-Jarno

> 
> On Wed, 21 Apr 2021 at 16:25, Jarno Huuskonen 
> wrote:
> > 
> > Hi,
> > 
> > On Wed, 2021-04-21 at 21:55 +0700, thủy bùi wrote:
> > > Dear HAproxy dev,I have install all the requirement of HAproxy into the
> > > system alongside with patroni and etcd, but finding error while call
> > > into
> > > port 5000.
> > > The information is provided as below.
> > > Please help me find out the issue.
> > > I have running HAproxy successfully
> > > 
> > > But unable to connect to my database throught port 5000
> > > 
> > > 
> > > psql: error: server closed the connection unexpectedly
> > > This probably means the server terminated abnormally
> > > before or while processing the request.
> > > What is your configuration?
> > > 
> > ...
> > 
> > Does haproxy run when you run it from cli
> > (haproxy -d -f /path/to/yourconfig.cfg) ?
> > 
> > And do you have errors in your logs ?
> > 
> > > Linux postgre02 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22
> > > UTC
> > > 2021 x86_64 x86_64 x86_64 GNU/Linux
> > 
> > Looks like you're running on CentOS/RHEL 7 ? Do you have selinux enabled
> > (getenforce) ? You'll probably need to allow haproxy to connect to all
> > ports
> > (or allow required ports). (setsebool -P haproxy_connect_any=On might
> > help).
> > 
> > (Your logs should show if connections are denied).
> > 
> > -Jarno
> > 
> > --
> > Jarno Huuskonen



Re: HAPROXY CAN NOT POINT IN TO PORT 5000 OF PATRONI

2021-04-21 Thread Jarno Huuskonen
Hi,

On Wed, 2021-04-21 at 21:55 +0700, thủy bùi wrote:
> Dear HAproxy dev,I have install all the requirement of HAproxy into the
> system alongside with patroni and etcd, but finding error while call into
> port 5000.
> The information is provided as below.
> Please help me find out the issue.
> I have running HAproxy successfully
> 
> But unable to connect to my database throught port 5000
> 
> 
> psql: error: server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> What is your configuration?
> 
...

Does haproxy run when you run it from cli
(haproxy -d -f /path/to/yourconfig.cfg) ?

And do you have errors in your logs ?

> Linux postgre02 3.10.0-1160.21.1.el7.x86_64 #1 SMP Tue Mar 16 18:28:22 UTC
> 2021 x86_64 x86_64 x86_64 GNU/Linux

Looks like you're running on CentOS/RHEL 7 ? Do you have selinux enabled
(getenforce) ? You'll probably need to allow haproxy to connect to all ports
(or allow required ports). (setsebool -P haproxy_connect_any=On might help).

(Your logs should show if connections are denied).

-Jarno

-- 
Jarno Huuskonen


Re: changed IP messages overrunning /var/log ?

2021-04-15 Thread Jarno Huuskonen
Hello,

On Thu, 2021-04-15 at 01:43 -0600, Jim Freeman wrote:
> This is puzzling, since haproxy.cfg directs all logs to local*
> After some investigation, it turns out that the daemon.log and syslog
> entries arrive via facility.level=daemon.info.  I've made rsyslog cfg
> changes that now stop the haproxy msgs from overrunning daemon.log and
> syslog (and allow only a representative fraction to hit haproxy.log).
> 
> Two questions :
>  1) What is different about 2.0 that "changed its IP" entries are so
> voluminous ?
>  2) Why is daemon.info involved in the logging, when the haproxy.cfg
> settings only designate local* facilities ?

Are you running haproxy as systemd service ? Those logs could be
coming from systemd (haproxy stdout/stderr).
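If so, a small rsyslog filter can keep the journald-forwarded daemon-facility copies out of daemon.log/syslog while still routing the local* facility from haproxy.cfg to its own file. A rough sketch (file names, paths and the local0 facility are assumptions, not from the original setup):

```
# /etc/rsyslog.d/49-haproxy.conf (hypothetical example)

# Messages haproxy sends itself via its "log ... local0" directive:
local0.* /var/log/haproxy.log
& stop

# haproxy stdout/stderr captured by systemd arrives with the daemon
# facility; file it separately instead of letting it flood daemon.log:
if ($programname == 'haproxy') and ($syslogfacility-text == 'daemon') then {
    action(type="omfile" file="/var/log/haproxy-stderr.log")
    stop
}
```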

-Jarno

-- 
Jarno Huuskonen


Re: [PATCH] JWT payloads break b64dec convertor

2021-04-13 Thread Jarno Huuskonen
Hello,

On Tue, 2021-04-06 at 01:58 +0200, Moemen MHEDHBI wrote:
> Thanks Willy and Tim for your feedback.
> 
> You can find attached the updated patches with fixed coding style (now
> set correctly in my editor), updated commit message, entry doc in sorted
> order, size_t instead of int in both enc/dec  and corresponding reg-test.

Could you add a cross reference from b64dec/base64 to ub64dec/ub64enc in
configuration.txt. Something like:
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15020,11 +15020,14 @@ and()
 b64dec
   Converts (decodes) a base64 encoded input string to its binary
   representation. It performs the inverse operation of base64().
+  For base64url("URL and Filename Safe Alphabet" (RFC 4648)) variant
+  see "ub64dec".
 
 base64
   Converts a binary input sample to a base64 string. It is used to log or
   transfer binary content in a way that can be reliably transferred (e.g.
-  an SSL ID can be copied in a header).
+  an SSL ID can be copied in a header). For base64url("URL and Filename Safe
+  Alphabet" (RFC 4648)) variant see "ub64enc".
 
 bool
   Returns a boolean TRUE if the input value of type signed integer is


-Jarno

-- 
Jarno Huuskonen


Re: 2.2.12 and rsa/ecdsa cert regression (crash on startup) ?

2021-04-02 Thread Jarno Huuskonen
Hello,

On Thu, 2021-04-01 at 16:03 +0200, William Lallemand wrote:
> On Thu, Apr 01, 2021 at 02:26:07PM +0200, William Lallemand wrote:
> > On Thu, Apr 01, 2021 at 10:19:31AM +0000, Jarno Huuskonen wrote:
> > > Hello,
> > > 
> > > I'm seeing a regression with 2.2.12 and using rsa and ecdsa certs on
> > > bind.
> > > (cert1.pem.ecdsa
> > > cert1.pem.ecdsa.ocsp
> > > cert1.pem.ocsp
> > > cert1.pem.rsa
> > > cert1.pem.rsa.ocsp
> > > )
> > > 
> > 
> > Thanks for the report, I can reproduce the problem, I'm investigating.
> > 
> 
> Could you try the attached patch?

Thanks William, with 2.2.12 + patch haproxy starts and serves both rsa/ecdsa
certs.

I'm attaching a regtest patch that attempts to check that haproxy starts
with multi-bundle cert and serves both rsa/ecdsa certs.
(the test itself is not well tested so handle with care :)
(for example I'm not sure if the ciphers ECDHE-RSA-AES128-GCM-SHA256 /
ECDHE-ECDSA-AES256-GCM-SHA384 are needed/useful and work with
BoringSSL/LibreSSL).

-Jarno

-- 
Jarno Huuskonen
From b0aec4e620404ea38dae0fe50046ab0f2cb48398 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Fri, 2 Apr 2021 09:39:39 +0300
Subject: [PATCH] REGTESTS: ssl: Minimal multi-bundle certificates bind check.

This adds minimal test to check that multi-bundle (rsa/ecdsa) bind
works (for BUG/MEDIUM: ssl: ckch_inst->ctx not assigned with
 multi-bundle certificates) and both rsa/ecdsa certs are served.
---
 reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa |  1 +
 reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa   |  1 +
 reg-tests/ssl/set_ssl_cert.vtc | 31 ++
 3 files changed, 33 insertions(+)
 create mode 12 reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
 create mode 12 reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa

diff --git a/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
new file mode 12
index 0..16276ab88
--- /dev/null
+++ b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
@@ -0,0 +1 @@
+ecdsa.pem
\ No newline at end of file
diff --git a/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa
new file mode 12
index 0..1b7cb2c3c
--- /dev/null
+++ b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa
@@ -0,0 +1 @@
+common.pem
\ No newline at end of file
diff --git a/reg-tests/ssl/set_ssl_cert.vtc b/reg-tests/ssl/set_ssl_cert.vtc
index a606b477d..022e8d6c3 100644
--- a/reg-tests/ssl/set_ssl_cert.vtc
+++ b/reg-tests/ssl/set_ssl_cert.vtc
@@ -16,6 +16,9 @@
 # any SNI. The test consists in checking that the used certificate is the right one after
 # updating it via a "set ssl cert" call.
 #
+# listen other-rsaecdsa-ssl / other-rsaecdsa checks that haproxy can bind and serve multi-bundle
+# (rsa/ecdsa) certificate.
+#
 # If this test does not work anymore:
 # - Check that you have socat
 
@@ -74,6 +77,21 @@ haproxy h1 -conf {
 bind "${tmpdir}/other-ssl.sock" ssl crt-list ${testdir}/set_default_cert.crt-list
 server s1 ${s1_addr}:${s1_port}
 
+# check that we can bind with: rsa_and_ecdsa_bind.pem.rsa / rsa_and_ecdsa_bind.pem.ecdsa
+listen other-rsaecdsa-ssl
+bind "${tmpdir}/other-rsaecdsa-ssl.sock" ssl crt ${testdir}/rsa_and_ecdsa_bind.pem
+http-request deny deny_status 200
+server s1 ${s1_addr}:${s1_port}
+
+# use other-rsa_ecdsa-ssl to check both rsa and ecdsa certs are returned
+listen other-rsaecdsa
+bind "fd@${otherrsaecdsa}"
+http-response set-header X-SSL-Server-SHA1 %[ssl_s_sha1,hex]
+use-server s1rsa if { path_end -i .rsa }
+use-server s1ecdsa if { path_end -i .ecdsa }
+server s1rsa "${tmpdir}/other-rsaecdsa-ssl.sock" ssl verify none force-tlsv12 sni str(www.test1.com) ciphers ECDHE-RSA-AES128-GCM-SHA256
+server s1ecdsa "${tmpdir}/other-rsaecdsa-ssl.sock" ssl verify none force-tlsv12 sni str(localhost) ciphers ECDHE-ECDSA-AES256-GCM-SHA384
+
 } -start
 
 
@@ -202,3 +220,16 @@ client c1 -connect ${h1_clearlst_sock} {
 expect resp.http.X-SSL-Server-SHA1 == "9DC18799428875976DDE706E9956035EE88A4CB3"
 expect resp.status == 200
 } -run
+
+# Check that other-rsaecdsa serves both rsa and ecdsa certificate
+client c1 -connect ${h1_otherrsaecdsa_sock} {
+txreq -req GET -url /dummy.rsa
+rxresp
+expect resp.http.X-SSL-Server-SHA1 == "2195C9F0FD58470313013FC27C1B9CF9864BD1C6"
+expect resp.status == 200
+
+txreq -req GET -url /dummy.ecdsa
+rxresp
+expect resp.http.X-SSL-Server-SHA1 == "A490D069DBAFBEE66DE434BEC34030ADE8BCCBF1"
+expect resp.status == 200
+} -run
-- 
2.26.3



2.2.12 and rsa/ecdsa cert regression (crash on startup) ?

2021-04-01 Thread Jarno Huuskonen
Hello,

I'm seeing a regression with 2.2.12 and using rsa and ecdsa certs on bind.
(cert1.pem.ecdsa
cert1.pem.ecdsa.ocsp
cert1.pem.ocsp
cert1.pem.rsa
cert1.pem.rsa.ocsp
)

haproxy crashes on startup:
(gdb) bt
#0  0x7710f159 in SSL_CTX_up_ref () from /lib64/libssl.so.1.1
#1  0x0042e1a3 in ssl_sock_load_cert_sni (ckch_inst=0x9adf30,
bind_conf=bind_conf@entry=0x9a6590) at src/ssl_sock.c:2866
#2  0x0043186f in ssl_sock_load_ckchs (path=<optimized out>,
ssl_conf=<optimized out>, sni_filter=<optimized out>, 
fcount=<optimized out>, err=0x7fffdb68, ckch_inst=0x7fffba08,
bind_conf=0x9a6590, ckchs=0x9a6ad0) at src/ssl_sock.c:3587
#3  ssl_sock_load_ckchs (path=<optimized out>, ckchs=0x9a6ad0,
bind_conf=0x9a6590, ssl_conf=<optimized out>, sni_filter=<optimized out>, 
fcount=<optimized out>, ckch_inst=0x7fffba08, err=0x7fffdb68) at
src/ssl_sock.c:3572
#4  0x00431b84 in ssl_sock_load_cert (path=path@entry=0x9703b8
"/etc/haproxy/ssl/cert1.pem", 
bind_conf=bind_conf@entry=0x9a6590, err=err@entry=0x7fffdb68) at
src/ssl_sock.c:3740
#5  0x0043bfbe in bind_parse_crt (args=0x7fffdc10,
cur_arg=<optimized out>, px=<optimized out>, conf=0x9a6590,
err=0x7fffdb68)
at src/cfgparse-ssl.c:645
#6  0x0048e57b in cfg_parse_listen (file=0x99b060
"/etc/haproxy/haproxy.cfg", linenum=116, args=0x7fffdc10, kwm=<optimized out>)
at src/cfgparse-listen.c:605
#7  0x0047fcab in readcfgfile (file=0x99b060
"/etc/haproxy/haproxy.cfg") at src/cfgparse.c:2087
#8  0x0052dd7c in init (argc=<optimized out>, argc@entry=6,
argv=<optimized out>, argv@entry=0x7fffe2f8) at src/haproxy.c:2050
#9  0x0041e3ca in main (argc=6, argv=0x7fffe2f8) at
src/haproxy.c:3180

(This is on rhel8:
HA-Proxy version 2.2.12-a723e77 2021/03/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2
2025.
Known bugs: http://www.haproxy.org/bugs/bugs-2.2.12.html
Running on: Linux 4.18.0-240.15.1.el8_3.x86_64 #1 SMP Wed Feb 3 03:12:15 EST
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-
unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-
missing-field-initializers -Wno-stringop-overflow -Wno-cast-function-type -
Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -
Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
USE_ZLIB=1 USE_SYSTEMD=1
  DEBUG   = 

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE -
STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H
+GETADDRINFO +OPENSSL -LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ
+CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -
OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1g FIPS  21 Apr 2020
Running on OpenSSL version : OpenSSL 1.1.1g FIPS  21 Apr 2020
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 8.3.1 20191121 (Red Hat 8.3.1-5)
Built with the Prometheus exporter as a service
)

Crash doesn't happen if I use just ecdsa or rsa cert file:
cert1.pem
cert1.pem.ocsp

(Crash also doesn't happen on 2.2.10, 2.2.11, 2.3.9 and 2.4dev(haproxy-ss-
20210401))

Git bisect points to this commit:
commit b87c8899d872843c12b3516ad51da84b22538d91
BUG/MINOR: ssl: Fix update of default certificate


Something like this config should be able to reproduce:
frontend FE_crash
bind ipv4@:443 name crashv4ssl ssl crt /etc/haproxy/ssl/cert1.pem
alpn h2,http/1.1
bind ipv6@:::443 name crashv6ssl ssl crt /etc/haproxy/ssl/cert1.pem
alpn h2,http/1.1
mode http

default_backend BE_crash

backend BE_crash   
server crash 192.168.1.105:8081 id 1 check

(And cert1.pem is multiple files:
cert1.pem.ecdsa
cert1.pem.ecdsa.ocsp
cert1.pem.ocsp
cert1.pem.rsa
cert1.pem.rsa.ocsp
)

-Jarno

-- 
Jarno Huuskonen


Re: Setting up haproxy for tomcat SSL Valve

2021-02-24 Thread Jarno Huuskonen
Hi,

On Thu, 2021-02-25 at 03:24 +0100, Aleksandar Lazic wrote:
> Hi.
> 
> I try to setup HAProxy (precisely  OpenShift Router :-)) to send the TLS/SSL
> Client
> Information's to tomcat.
> 
> On the SSL Valve page are the following parameters available.
> 
> http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#SSL_Valve
> 
> SSL_CLIENT_CERT string  PEM-encoded client certificate
> ?
> 
> The only missing parameter is "SSL_CLIENT_CERT in PEM format". There is one
> in DER Format
> ssl_c_der in HAProxy but the code in SSL-Valve expects the PEM format.
> 
> https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/valves/SSLValve.java#L125
> 
> Have I overseen something in the HAProxy code or doc or isn't there
> currently an option to get
> the  client certificate out of HAProxy in PEM format?

It should be possible (had this working years ago):
(https://www.mail-archive.com/haproxy@formilux.org/msg20883.html
http://shibboleth.net/pipermail/users/2015-July/022674.html)

Something like:
http-request add-header X-SSL-Client-Cert -----BEGIN\ CERTIFICATE-----\
%[ssl_c_der,base64]\ -----END\ CERTIFICATE-----\ # don't forget last space
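The header value is just PEM framing with spaces in place of newlines (HTTP header values can't carry newlines); the SSLValve restores them on the Tomcat side. The equivalent transformation, sketched in Python for illustration (the function name is made up, and the input below is dummy bytes rather than a real certificate):

```python
import base64
import textwrap

def der_to_pem(der: bytes) -> str:
    """Wrap DER certificate bytes in PEM armor, roughly what
    ssl_c_der,base64 plus the BEGIN/END markers produce in haproxy
    (with newlines here instead of spaces)."""
    body = "\n".join(textwrap.wrap(base64.b64encode(der).decode("ascii"), 64))
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----"

pem = der_to_pem(b"\x30\x82\x01\x0a")  # dummy bytes, not a real certificate
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```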

-Jarno

-- 
Jarno Huuskonen


Re: Inquiry

2021-01-29 Thread Jarno Huuskonen
Hi,

On Tue, 2021-01-26 at 15:55 +0100, Alexander Rossow wrote:
> Im currently using HAProxy as a http proxy infront of another http
> superproxy. HAProxy is used for authentication and then changes the Proxy
> credentials to those of the superproxy. However, we need to keep track of
> the data usage for each user. Since the superproxy is not made from our
> end, we cannot influence it at all and it ignores any "Connection: Close"
> headers. Therefore if a client uses our proxy (haproxy) and the client
> does not close the socket, the socket can stay open for multiple minutes.
> During this time we will not be able to account for the usage as the usage
> is only logged once the socket is closed. This then leads to the user
> being able to use our service for a greatly longer duration that he/she is
> supposed to. This is why theres 2 solutions I can think of.

Have you thought about using stick-tables to track clients/users:

-Jarno

> Solution A)
> A way that haproxy logs frequently during the entire socket duration so we
> can then reload haproxy to close all sockets once a user runs out of data
> to use.
> 
> Solution B)
> A way to use LUA during the actual tunneling (after the HTTP tunnel is
> established) so that we can reauthenticate users and log the usage
> ourselves.
> 
> Am Di., 26. Jan. 2021 um 15:12 Uhr schrieb Jarno Huuskonen <
> jarno.huusko...@uef.fi>:
> > Hi,
> > 
> > On Tue, 2021-01-26 at 14:32 +0100, Alexander Rossow wrote:
> > > Hi there,
> > > I would like to know if it is possible to update the logs while the
> > socket
> > > is open. Currently the logs are updated only after closing the socket,
> > > which causes issues. We have already tried the http close and the
> > https
> > > close server options. Unfortunately without success
> > > Thanks in advance
> > > 
> > 
> > option logasap ?
> > (
> > https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-option%20logasap
> > )

-- 
Jarno Huuskonen


Re: Inquiry

2021-01-26 Thread Jarno Huuskonen
Hi,

On Tue, 2021-01-26 at 14:32 +0100, Alexander Rossow wrote:
> Hi there,
> I would like to know if it is possible to update the logs while the socket
> is open. Currently the logs are updated only after closing the socket,
> which causes issues. We have already tried the http close and the https
> close server options. Unfortunately without success
> Thanks in advance
> 

option logasap ?
(https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-option%20logasap)

-Jarno

-- 
Jarno Huuskonen


Re: issue after upgrading haproxy 2.3.2

2021-01-19 Thread Jarno Huuskonen
Hello,

On Tue, 2021-01-19 at 15:40 +0530, Amol Arote wrote:
> After upgrading haproxy 2.3.2 we are facing the below ssl tls issue while
> connecting links internally, but when we check web browsing its auto
> getting tls 1.2 there is no such issue.when connecting internal links its
> not getting tls 1.2 its showing tls 1.0 and showing below error message.
> 
> org.apache.commons.httpclient.HttpMethodDirector executeWithRetry
> INFO: I/O exception (javax.net.ssl.SSLException) caught when processing
> request: Received fatal alert: protocol_version
> 
> earlier we are using HA-Proxy version 2.1.2 where everything working fine
> we are using centos 7.6 and Java 1.7

AFAIK haproxy-2.2 defaults to tls1.2 on bind:
(MEDIUM: ssl: use TLSv1.2 as the minimum default on bind lines)
https://www.haproxy.org/download/2.3/src/CHANGELOG

Can you connect to haproxy with tls1.0:
openssl s_client -connect your-haproxy-ip:443 -tls1

You can try to enable tls1.0 on server bind with:
ssl-min-ver TLSv1.0
https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#5.1-ssl-min-ver
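For example (the certificate path is a placeholder):

```
frontend fe_https
    # Re-allow legacy clients down to TLSv1.0 on this listener only;
    # haproxy >= 2.2 defaults to TLSv1.2 as the minimum on bind lines.
    bind :443 ssl crt /etc/haproxy/certs/site.pem ssl-min-ver TLSv1.0
```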

(Also AFAIK an up-to-date java-1.7 should be able to use tls1.2).

-Jarno

-- 
Jarno Huuskonen


Re: Heath check responds up even when server is down

2020-11-04 Thread Jarno Huuskonen
Hi,

On Fri, 2020-10-30 at 00:49 +, Wesley Lukehart wrote:
> To recap;
> Exchange says component is Inactive
> IIS is up and still serving content
> healthcheck.htm page does not load, is down, unavailable, what have you
> haproxy gets 200 response from health check that supposedly isn’t
> available

Have you tested with curl / wget from the haproxy server whether IIS/Exchange
returns status=200 for /oab/healthcheck.htm ?

curl -v -k https://ip.addr.e.ss/oab/healthcheck.htm
and
# this probably sends "correct" iis.exchange.domain.com SNI to iis server,
# maybe iis/exchange needs SNI to serve correct file/status ?
curl -v -k --resolve iis.exchange.domain.com:443:iis.ip.here https://iis.exchange.domain.com/oab/healthcheck.htm


> Here are relevant haproxy logs showing the health check as good and
> content still being proxied, even though the component is inactive (ie
> health check page is not accessible)
>  Oct 29 14:51:39  haproxy: [WARNING] 302/145139 (93952) :
> Health check for server be_ex2019_oab/ succeeded, reason:
> Layer7 check passed, code: 200, info: "HTTP status check returned code
> <3C>200<3E>", check duration: 8ms, status: 3/3 UP.
 
> Looking at the IIS logs, when the component is active, I see the GET
> requests from my workstations IP. When the component is inactive, no GET
> request is logged from my workstation.
> In addition, weather the service is active or inactive, IIS logs GET
> requests from the haproxy servers:
>  2020-10-30 00:13:01 10.168.99.91 GET /oab/healthcheck.htm - 443 -
>  - - 200 0 0 1
>  2020-10-30 00:13:11 10.168.99.91 GET /oab/healthcheck.htm - 443 -
>  - - 200 0 0 1

So both the haproxy and IIS logs show that /oab/healthcheck.htm is served with
status=200 to haproxy ?

When you test /oab/healthcheck.htm with browser what url do you use:
https://correct.domain.com/oab/healthcheck.htm
or https://ip.addr.es.s/oab/healthcheck.htm ? Do you get a different result
with the ip than with the hostname ?

-Jarno

-- 
Jarno Huuskonen


Re: TCP Proxy for database connections

2020-10-29 Thread Jarno Huuskonen
Hi,

On Thu, 2020-10-29 at 10:21 +0200, Jonathan Matthews wrote:
> I don’t think haproxy is what you’re looking for. You’re looking for more
> than a TCP proxy: you need a DB-specific-protocol-proxy. Haproxy can
> listen for HTTP, above the TCP layer, but not any specific DB protocols.
> 
> I think you need to look for a proxy that’s designed to work with the
> specific DB you’re wanting to expose. 
> 
> For mysql, “mysql-proxy” and “mysql-router” come to mind. -proxy never
> went GA, and I’ve not used -router. 

For mysql there are MaxScale and ProxySQL.

But I don't think you'll find a proxy that has all the features you'll need
especially if you need to support multiple DB protocols (mysql, postgresql,
oracle, mssql).

-Jarno

-- 
Jarno Huuskonen


Re: Heath check responds up even when server is down

2020-10-15 Thread Jarno Huuskonen
Hi,

On Thu, 2020-10-15 at 01:27 +, Wesley Lukehart wrote:
> Hello fine people. Short time lurker, first time poster.
>  
> Was on version 2.0.5 with CentOS 7.6 and everything was working fine with
> Exchange 2019.
> Upgraded to 2.2.3 and now when we put Exchange into maintenance mode
> HAProxy does not change status – it reports that all services are still up
> (L7OK/200).
>  
> Example backend:
> backend be_ex2019_oab
>   mode http
>   balance roundrobin
>   option httpchk GET /oab/healthcheck.htm
>   option log-health-checks
>   http-check expect status 200
>   server  :443 check ssl inter 15s verify required
> ca-file 
>   server  :443 check ssl inter 15s verify required
> ca-file 
>  
> If I stop the app pool for a service in IIS, or stop all of IIS, HAProxy
> will properly show the service/services as down – as it gets a non 200
> response (503 or 404).
>  
> When putting the Exchange server into maintenance mode, there is no http
> response.
> When I check with a browser I get “ERR_HTTP2_PROTOCOL_ERROR” or “Secure
> Connection Failed”. Basically no response.
> When I check with wget from the haproxy server I get “HTTP request sent,
> awaiting response... Read error (Connection reset by peer) in headers.”
> Yet HAProxy is happy and continues to try to send mail to the down server
> – not good.
>  
> Any Ideas?

Does the health check work if you try with something like this:
option httpchk
http-check connect ssl
http-check send meth GET uri /oab/healthcheck.htm ver HTTP/1.1 hdr Host somehost.example.org
http-check expect status 200
(
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-http-check%20connect
)

-Jarno

-- 
Jarno Huuskonen


Re: HAProxy feature request.

2020-08-27 Thread Jarno Huuskonen
Hello,

On Thu, 2020-08-27 at 13:33 +0530, Roshan Singh wrote:
> Dear HAProxy Technical Support Team,
> 
> REQUEST: HAProxy supports IPv4 Header manipulation for QoS.
> 
> ISSUE: I have been trying to pass the ToS value received from client to
> backend server for DSCP. But i can't manipulate DSCP value.
> 
> STEPS:
> 1.Request from client: # curl HAProxy_node_IP -H 'x-tos:0x48'
> 2. Below is the log captured from wireshark on HAProxy node.
> 3. DSCP value should be update with value as 'af21'. but this only goes
> from HTTPHeader when added below line in fronted : http-request set-header 
> x-ipheader % [req.hdr(x-tos)]

Try http-request set-tos: 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-http-request%20set-tos
(or http-response set-tos 
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-http-response%20set-tos
)
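A minimal sketch of wiring the client-supplied header to set-tos (the header name and DSCP value come from the report; the acl match is illustrative and untested):

```
frontend fe_qos
    bind :80
    acl want_af21 req.hdr(x-tos) -m str 0x48
    # 72 == 0x48, i.e. DSCP AF21 in the upper six bits of the ToS byte
    http-request set-tos 72 if want_af21
```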

I'm not sure the documentation is correct here: it says for both
http-request set-tos and http-response set-tos that the value applies to
"packets sent to the client", while http-request probably should say
"packets sent to the server" ?

-Jarno

> Please let me know if this feature has been already implemented or can be
> used by any third party tool.

-- 
Jarno Huuskonen


Re: graceful tcp shutdown ?

2020-08-04 Thread Jarno Huuskonen
Hi,

On Tue, 2020-08-04 at 16:54 +0500, Илья Шипицин wrote:
> for example, I'm running tcp balancing with several backend.
> is it possible to mark first backend for "established" connections ?

You want to set the whole backend (and not just one server) to drain
(established connections only) ?

> i.e. all connections that established, still go to marked backend. no new
> connection are established (once I see there are no more connections, I
> can turn it off).

With servers it should be possible:
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4-http-check%20disable-on-404
or setting server weight to 0 or state to drain:
https://cbonte.github.io/haproxy-dconv/2.2/management.html#9.3-set%20server

Maybe you could set all servers in the backend to weight 0/drain or use
some kind of acl + use_backend ?

Can you show a short/sanitized example config of your tcp backends ?

-Jarno

-- 
Jarno Huuskonen


Re: ssl_c_sha256 ?

2020-06-29 Thread Jarno Huuskonen
Hi,

On Mon, 2020-06-29 at 12:37 +0200, Tim Düsterhus wrote:
> Stephane,
> 
> Am 29.06.20 um 12:01 schrieb Stephane Martin (stepham2):
> > In haproxy documentation I don't see any option to work with the sha256
> > fingerprint of the peer certificate.
> > 
> > - Is there any other way to get that ?
> 
> Yes, see this commit message:
> https://github.com/haproxy/haproxy/commit/d4376302377e4f51f43a183c2c91d929b27e1ae3
> 
> The ssl_c_sha1 is simply a hash of the DER representation of the
> certificate. So you can just hash it with the sha2 converter:
> 
> ssl_c_sha256,sha2(256)

I think the first fetch should be ssl_c_der ?
(ssl_c_der,sha2(256))

-Jarno

-- 
Jarno Huuskonen


Re: ssl_c_sha256 ?

2020-06-29 Thread Jarno Huuskonen
Hi,

On Mon, 2020-06-29 at 10:01 +, Stephane Martin (stepham2) wrote:
> Hello,
> 
> I’m trying to setup TLS mutual authentication using pinned certificates in
> haproxy, ie. only accept a precise known certificate from the peer.
> 
> It is definitively possible using ACL and ssl_c_sha1, so that the route
> will only be accessible if the peer certificate has the right SHA1
> fingerprint.
> 
> But sha1 usage is strongly not recommended for compliancy (you can
> understand why...).
> 
> In haproxy documentation I don't see any option to work with the sha256
> fingerprint of the peer certificate.
> 
> - Is there any other way to get that ?

With haproxy 2.2(dev) this might work:
ssl_c_der,digest(sha256),hex
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.1-digest
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.4-ssl_c_der

And with haproxy 2.1:
ssl_c_der,sha2,hex
(https://cbonte.github.io/haproxy-dconv/2.1/configuration.html#7.3.1-sha2)

(I didn't test if these examples actually work).
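What those converter chains compute is just a SHA-256 over the certificate's DER bytes, hex-encoded (haproxy's hex converter emits uppercase). For illustration, with a stand-in byte string rather than a real certificate:

```python
import hashlib

der = b"abc"  # stand-in for the ssl_c_der certificate bytes
fingerprint = hashlib.sha256(der).hexdigest().upper()
print(fingerprint)
# BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD
```

An ACL comparing against a pinned fingerprint would then match this hex string.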

> - If it needs to be implemented in haproxy, would you have any clue where
> to start ?

(Backport digest from haproxy-2.2 to required version ?)

-Jarno

-- 
Jarno Huuskonen


Re: how can I add an HTTP to prevent clickjacking to the stats page?

2020-06-18 Thread Jarno Huuskonen
Hi,

On Thu, 2020-06-18 at 10:06 +0300, Cristian Grigoriu wrote:
> Hello everyone,
> 
> The vulnerability scanner has flagged the stats page as being vulnerable
> to clickjacking. I am trying to fix this, by publishing the stats on its
> own frontend and add a header:
> 
> frontend stats
>  bind 10.11.12.13:9000
>  stats enable
>  stats uri /stats
>  stats refresh 10s
>  #rspadd X-Frame-Options:\ SAMEORIGIN
>  http-response set-header X-Frame-Options sameorigin
> 
> Neither rspadd nor http-response work, as no header is being added to the
> response.
> 
> Any pointer into the right direction is much appreciated.

As a workaround chaining two proxies should add the required header:

listen fakestats
bind 10.11.12.13:9000
http-response set-header X-Frame-Options sameorigin
server realstat abns@statssrv

frontend stats
bind abns@statssrv
stats enable
stats uri /stats
stats refresh 10s

Can you share your haproxy -vv ? There could be a better way to do this.

-Jarno

-- 
Jarno Huuskonen


Re: 2.0.14 + htx / retry-on all-retryable-errors -> sometimes wrong backend/server used

2020-05-19 Thread Jarno Huuskonen
Hi,

On Tue, 2020-05-19 at 15:58 +0200, Christopher Faulet wrote:
> It was already reported on github and seems to be fixed. We are just
> waiting a 
> feedback to be sure it is fixed before backporting the patch. See 
> https://github.com/haproxy/haproxy/issues/623.
> 
> If you try the latest 2.2 snapshot, it should be good. You may also
> try to 
> cherry-pick the commit 8cabc9783 to the 2.0.

Thanks Christopher (and Tim), I'll try with a 2.2 snapshot (and/or
cherry-picking 8cabc9783) and report how it goes.

-Jarno

-- 
Jarno Huuskonen


2.0.14 + htx / retry-on all-retryable-errors -> sometimes wrong backend/server used

2020-05-19 Thread Jarno Huuskonen
NT
IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
)

-- 
Jarno Huuskonen


Re: 404 + VN when enabling h2 in front of keycloak

2020-04-26 Thread Jarno Huuskonen
Hi Ionel,

On Sat, 2020-04-25 at 11:22 +0200, Ionel GARDAIS wrote:
> I tried to enable h2 in our haproxy setup.

What's your haproxy version ?

> Most proxied servers work well except Keycloak (SSO solution)
> 
> While everything works fine in HTTP/1.1, Keycloak returns a 404 and
> haproxy shows a --VN status in h2.

Have you tested without HTX (no "option http-use-htx",
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4-option%20http-use-htx
) ?

Does keycloak log anything useful ?

> As there are two Keycloak servers working in pair, the backend is
> defined as 
> 
> backend bck-keycloak
> cookie AUTH_SESSION_ID prefix
> server keycloak 192.168.8.27:8080 check cookie s1
> server keycloak-bck 192.168.8.28:8080 check cookie s2
> 
> Are their specific tuning required for h2 to work correctly ?

Maybe keycloak is case sensitive on some http headers ?
Have you tried comparing http/1.1 and http/2 request headers going to
keycloak server ?

(
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#h1-case-adjust
)
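If case sensitivity does turn out to be the problem, the global h1-case-adjust keywords can restore the expected casing on requests going to the server. A hedged sketch (the header names here are only examples, not known Keycloak requirements):

```
global
    # Map the lowercase names HTX emits back to the mixed case a picky
    # HTTP/1.1 server expects (or use h1-case-adjust-file for a list).
    h1-case-adjust content-length Content-Length
    h1-case-adjust host Host

backend bck-keycloak
    # Apply the adjustments to requests sent to this backend's servers.
    option h1-case-adjust-bogus-server
```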

-Jarno

-- 
Jarno Huuskonen


Re: HAProxy concurrent HTTP query limit based on header

2020-04-18 Thread Jarno Huuskonen
Hi,

On Fri, 2020-04-17 at 20:22 +0200, Olivier D wrote:
> Hello everyone,
> I would like to implement a "max concurrent connection" in HAProxy.
> This is easy to do at TCP level : 
> 
> stick-table  type ipv6 size 100k  expire 30s  store conn_cur
> http-request track-sc0 src
> http-request deny deny_status 429 if { src_conn_cur ge 20 }
> 
> But now, I want to do the same for concurrent HTTP queries, based on
> header 'X-Forwarded-For'. For example, I want to send a 429 error
> code if someone is sending an HTTP query when he already have 20
> ongoing.
> 
> My first tries are based on something like this : 
>stick-table type ipv6 size 100k  expire 30s  store
> http_req_rate(10s)
>http-request track-sc0 req.hdr( X-Forwarded-For )

Does it work if you use:
http-request track-sc0 req.hdr_ip(X-Forwarded-For)
(
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.6-req.hdr_ip
)

Do you get any entries in the stick-table (show table ... command to
stats socket).
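Put together, the original snippet with the fixed fetch would look something like this (untested sketch; note that conn_cur counts TCP connections, not in-flight HTTP requests, so keep-alive clients may count differently than expected):

```
frontend fe_api
    stick-table type ipv6 size 100k expire 30s store conn_cur
    http-request track-sc0 req.hdr_ip(X-Forwarded-For)
    http-request deny deny_status 429 if { sc0_conn_cur ge 20 }
```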

-Jarno

>http-request deny deny_status 429 if { sc0_conn_cur ge 20 }
> 
> but it doesn't seem to work the way I want ...

-- 
Jarno Huuskonen


Re: Haproxy loadbalancing out going mail to Antispam servers

2020-01-23 Thread Jarno Huuskonen
Hi,

On Wed, Jan 22, Brent Clark wrote:
> We have a project where we are trying to load balance to our outbound
> Spamexperts Antispam relays / servers.
> 
> We hit a snag where our clients servers are getting 'Too many concurrent
> SMTP connections from this IP address'. As a result the mail queue is
> building up on the servers.

What generates this error (the antispam servers ?)
Your antispam servers probably see all connections coming from haproxy
ip-address (and not the clients address).

> After reverting our change, the problem went away.
> 
> Our setup is:
> (CLIENT SERVERS INDC) ---> 587 (HAPROXY) ---> (ANTISPAM) ---> (INTERNET)

Do you control the antispam servers and do the antispam servers support
for example proxy-protocol (postfix, exim etc) ?
(https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.2-send-proxy)
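
As an illustration, the haproxy side of such a PROXY-protocol setup might look like the sketch below (addresses are placeholders); on the Postfix side the matching knob would be `smtpd_upstream_proxy_protocol = haproxy` in main.cf:

```haproxy
backend be_antispam
    mode tcp
    balance roundrobin
    # send-proxy prepends the original client address so the MTA sees it
    server spam1 192.0.2.10:587 check send-proxy
    server spam2 192.0.2.11:587 check send-proxy
```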

-Jarno

-- 
Jarno Huuskonen



Re: How to "unit" test HAProxy configurations (and HTTP servers in general)

2019-12-18 Thread Jarno Huuskonen
Hi,

On Wed, Dec 18, Ciprian Dorin Craciun wrote:
> Hello all!
> 
> [First of all this question isn't only HAProxy specific, but can be
> extended to any HTTP server (including Apache, Nginx, and any other
> web application out there);  however I think it is especially
> important for HAProxy given how many HTTP-routing / mangling
> capabilities it has.]
> 
> I have a quite complex HAProxy configuration and I want to make sure
> that while changing some ACL's and rules I don't break something else.
> 
> Therefore I set out to find a tool that allows me to "unit test" an
> HTTP server (thus including HAProxy).  And to my surprise I didn't
> find one...  Obviously there are many "framework" unit test platforms
> out-there, each tailored for the underlying framework, such as Django,
> Flask, Rails, Spring, Go-lang, etc.;  however nothing "generic" that
> can test a web server by plain HTTP requests.
> 
> 
> So my question to the HAProxy community is if anyone knows / uses a
> generic HTTP unit testing framework.
> 
> (I have already written a few custom Bash scripts that given an URL
> create a file on disk, and given it resides in a Git repository, I can
> easily `git diff ./tests/responses` to see if anything changed, but
> this is too "barbaric"...)  :)

Have you looked into varnishtest (vtest)? There are examples in the haproxy
source reg-tests directory.

-Jarno

-- 
Jarno Huuskonen



Re: Configuration question

2019-12-12 Thread Jarno Huuskonen
Hi,

On Thu, Dec 12, Aleh Kurnitsou wrote:
> Could you please explain me how can I change my current config line:
> “server-template server- 3 nginx-service:80 check resolvers docker
> init-addr libc,none”?
> 
> 
> I would to use sticky session, but i can't find any good way how it
> possible to do with server template.
> 

I haven't tested it, but have you tried dynamic cookies with
dynamic-cookie-key
(https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#dynamic-cookie-key)
(https://www.haproxy.com/blog/whats-new-haproxy-1-8/#dynamic-cookies)
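
Combined with the server-template line from the question, a sketch might be (the key is a placeholder; untested):

```haproxy
backend be_nginx
    balance roundrobin
    # derive a stable per-server cookie value from ip:port and this key
    dynamic-cookie-key MySecretKey123
    cookie SRVID insert indirect nocache dynamic
    server-template server- 3 nginx-service:80 check resolvers docker init-addr libc,none
```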

-Jarno

> I tried to use config from my dedicated servers:
> cookie serverid insert indirect nocache maxidle 15m maxlife 1h
> server s1 10.0.1.3:80 cookie s1 check
> 
> 
> But it works only when docker service " nginx-service " runned, after
> scaling or when i'm replacing containers it doesn't work. :(

-- 
Jarno Huuskonen



Re: [PATCH] bugfix to make do-resolve to use DNS cache

2019-11-05 Thread Jarno Huuskonen
Hi,

On Tue, Nov 05, Baptiste wrote:
> David Birdsong reported a bug last week about http do-resolve action not
> using the DNS cache.
> The patch in attachment fixes this issue.
> There is no github issue associated to this bug.
> Backport status is up to 2.0.

Quick question: are the printf's there on purpose or leftover debug
outputs ?

-Jarno

@@ -73,6 +73,7 @@ int check_trk_action(struct act_rule *rule, struct proxy *px, 
char **err)
 int act_resolution_cb(struct dns_requester *requester, struct dns_nameserver 
*nameserver)
 {
struct stream *stream;
+printf("%s %d\n", __FUNCTION__, __LINE__);
 
if (requester->resolution == NULL)
return 0;
@@ -89,6 +90,7 @@ int act_resolution_cb(struct dns_requester *requester, struct 
dns_nameserver *na
 int act_resolution_error_cb(struct dns_requester *requester, int error_code)
 {
struct stream *stream;
+printf("%s %d\n", __FUNCTION__, __LINE__);
 
if (requester->resolution == NULL)
    return 0;

-- 
Jarno Huuskonen



Re: http-request do-resolve Woes

2019-10-30 Thread Jarno Huuskonen
Hi,

On Tue, Oct 29, David Birdsong wrote:
> I've narrowed down a behavior that I think might be a bug, but is
> definitely not ideal.
> 
> This minimal configuration copies header: X-Host into Host and performs a
> dynamic DNS query against that field name, stores the output in a txn var,
> and then uses a backend whic sets the dest ip to that txn var.
> 
> For any requests with an X-Host header that matches a name already tracked
> by DNS in a backend, I see that haproxy spends 4-9 seconds reading the
> request from the client while any X-Host values which are not currently
> tracked by a backend show haproxy spending 1ms reading in the request from
> the client (normal.)
> 
> unnamed, fast: curl -v -H "X-Host: google.com" http://127.0.0.1:8080/foo
> 
> named, very slow:  curl -v -H "X-Host: mixpanel.com"
> http://127.0.0.1:8080/foo
> 
> Config:
> https://gist.github.com/davidbirdsong/1c3ec695fdbab10f64783437ffab901c
> haproxy -vv
> https://gist.github.com/davidbirdsong/d4c1c71e715d8461ad73a4891caca6f1

I tested this on latest 2.1dev3 snapshot. What happens if you add
timeouts to your main_resolver resolvers:
  hold valid   15s

For me increasing hold valid makes be_named requests take even longer
and if I add timeout client(to defaults) < hold valid then (be_named) requests 
fail with:
cR-- status

-Jarno

-- 
Jarno Huuskonen



Re: Mode TCP and acl to choose backend

2019-10-28 Thread Jarno Huuskonen
Hi,

On Mon, Oct 28, Philipp Kolmann wrote:
> I load-balance TCP Port 25 on a haproxy. This works perfect.
> 
> Now I need to check, if the connection is coming for a special host, then a
> different backend smtp server should be used. I thought I could use acl and
> use_backend but this seems only to work for http connections.

What does special host mean in this context ? Is it something you can
get from layer4 (src,src_port,dst,dst_port) or something from
for example SMTP protocol ?

> Has anyone a tip how to achieve this with mode tcp?

If you can get special host from layer4 then for example:
use_backend specialhost if { dst 10.10.10.10 }
might work.
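
A fuller sketch of that idea (addresses and backend names invented):

```haproxy
frontend fe_smtp
    mode tcp
    bind :25
    # route connections that arrived on the special destination address
    use_backend be_special if { dst 10.10.10.10 }
    default_backend be_smtp
```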

-Jarno

-- 
Jarno Huuskonen



Re: Deprecating a few keywords for 2.1+

2019-10-28 Thread Jarno Huuskonen
On Mon, Oct 28, Aleksandar Lazic wrote:
> Am 27.10.2019 um 20:16 schrieb David Birdsong:
> > I'm just curious: what replaces monitor-uri? I'm putting up a new proxy
> > tier at my new company and can steer to use the more up-to-date method,
> > but combing the docs and nothing jumps out at me.
> > 
> > I'm guessing something in either http-re{quest,response}, but I don't
> > see anything that synthesizes responses in there.
> 
> I would think about to use errorfile for this.
> 
> https://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4-errorfile
> 
> Could this work?
> 
> ```
> global
>   ...
> default
>   ...
> frontend
>   ...
>   use_backend b_health if { path_beg /health }
>   ...
> 
> backend b_health
>   errorfile 200 /etc/haproxy/errorfiles/200health.http
>   ...
> 
> ```

Or maybe something like:
http-request deny deny_status 500 if { path_beg /health } { nbsrv(yourbackend) lt 1 }
http-request deny deny_status 200 if { path_beg /health }
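
In later versions (2.2+), `http-request return` can synthesize the healthy response directly; an untested sketch with a placeholder backend name:

```haproxy
frontend fe_main
    bind :80
    # fail the probe when the watched backend has no usable server left
    http-request deny deny_status 503 if { path /health } { nbsrv(be_app) lt 1 }
    http-request return status 200 content-type text/plain string "OK" if { path /health }
```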

-Jarno

> > On Sat, Oct 26, 2019 at 8:14 AM Willy Tarreau <w...@1wt.eu> wrote:
> > 
> > Hi,
> > 
> > a few months ago while working on cleaning up and stabilizing the
> > connection layers, I figured that we still have ugly hacks bypassing
> > the whole stack around the "mode health", "monitor-net" and 
> > "monitor-uri"
> > directives, that were all used to respond to health checks from an
> > external LB. Since SSL was introduced, these started not to make much
> > sense anymore, with raw data being sent directly to the socket and
> > bypassing the SSL stack, and now with muxes it's even worse.
> > 
> > Given their obvious obsolescence I don't expect anyone to be using these
> > anymore and to have switched to other mechanisms like HTTP redirects,
> > errorfiles or Lua instead which are all way more versatile and
> > configurable.
> > 
> > Thus I was thinking about marking them deprecated for 2.1 and then
> > removing them from 2.3. Or even better, removing them from 2.1, but
> > since we have not sent a prior deprecation warning, it would really
> > require confirmation that really nobody is using them at all anymore
> > (which I think is likely the case starting with 1.5).
> > 
> > Any opinion on this ?
> > 
> > Thanks,
> > Willy
> > 
> 
> 

-- 
Jarno Huuskonen



Re: healthchecks (to uwsgi) possible regression 1.9.8 -> 1.9.9

2019-10-09 Thread Jarno Huuskonen
Hi,

Thanks Willy for looking into this !

On Tue, Oct 08, Willy Tarreau wrote:
> On Fri, Oct 04, 2019 at 07:28:15PM +0300, Jarno Huuskonen wrote:
> > I sent pcap/strace offlist.
> 
> Thanks, that was very useful.
> 
> > (strace -f -o -ttt, tcpdump -n -p -s 16384 -w ... host 127.0.0.1 and
> > port 8080).
> > 
> > I think in packet capture the second health checks causes
> > "client_addr: 127.0.0.1 client_port: 2779] hr_read(): Connection reset by 
> > peer [plugins/http/http.c line 917]"
> > (I think uswgi logs client_port incorrectly, ntohs(2779) gives 56074
> > (and port 56074 is in packet capture)).
> 
> It's as usual, when you start to troubleshoot a bug you find a myriad of
> other ones around :-)
> 
> > (haproxy version: HA-Proxy version 2.1-dev2 2019/10/01).
> > 
> > I tried to reproduce with very minimal flask/uwsgi hello world app
> > and there hr_read happens very rarely. 
> > With alerta(.io) app this happens more regularly (AFAIK not with every 
> > check).
> > So maybe this is weird timing issue or bug in uwsgi.
> 
> So it is indeed a timing issue. The check running on port 56062 shows the
> following sequence:
> 
> 17:10:57.877876 IP localhost.56062 > localhost.8080: Flags [S]
> 17:10:57.877898 IP localhost.8080 > localhost.56062: Flags [S.]
> 17:10:57.877909 IP localhost.56062 > localhost.8080: Flags [.]
> 17:10:57.878065 IP localhost.56062 > localhost.8080: Flags [P.]
> 17:10:57.878078 IP localhost.8080 > localhost.56062: Flags [.]
> 17:10:57.879933 IP localhost.8080 > localhost.56062: Flags [P.]
> 17:10:57.879939 IP localhost.56062 > localhost.8080: Flags [.]
> 17:10:57.880008 IP localhost.8080 > localhost.56062: Flags [F.]
> 17:10:57.880333 IP localhost.56062 > localhost.8080: Flags [F.]
> 17:10:57.880341 IP localhost.8080 > localhost.56062: Flags [.]
> 
> Note the FIN sent 75 microseconds after the response. This resulted in
> recvfrom() returning zero and the connection to be cleanly closed. Now
> regarding port 56074 that is causing excess logs :
> 
> 17:11:04.132867 IP localhost.56074 > localhost.8080: Flags [S]
> 17:11:04.132890 IP localhost.8080 > localhost.56074: Flags [S.]
> 17:11:04.132904 IP localhost.56074 > localhost.8080: Flags [.]
> 17:11:04.133083 IP localhost.56074 > localhost.8080: Flags [P.]
> 17:11:04.133098 IP localhost.8080 > localhost.56074: Flags [.]
> 17:11:04.135101 IP localhost.8080 > localhost.56074: Flags [P.]
> 17:11:04.135107 IP localhost.56074 > localhost.8080: Flags [.]
> 17:11:04.135316 IP localhost.56074 > localhost.8080: Flags [R.]
> 
> As you can see, even 215 microseconds after the response there is still
> no FIN. I've checked and in both cases the headers are the same. There
> is indeed a "connection: close" header emitted by haproxy, none is
> advertised in the response, but clearly it's a matter of delay. And
> this causes recvfrom() to return EAGAIN, so haproxy is forced to hard-
> close the connection (as if it were doing option http-server-close).
> Please also note that returned data are properly ACKed so there is not
> much more we can do there.
> 
> Before the patch that you bisected, you didn't face this because of a
> bug that was preventing the hard close from working, and instead you
> were accumulating TIME_WAIT sockets on the client side. So I'm afraid
> to say that for now the best I can say is that you'll still have to
> live with these occasional logs :-/

Thanks. Glad to see this wasn't a bug on haproxy side.

I tested the alerta app with gunicorn and gunicorn doesn't seem to mind
that haproxy uses forced close on the health check, so switching
uwsgi->gunicorn is another option.

> I'd really like to see a massive rework being done on the checks. Your
> report made me realize that there is one feature we're currently missing,
> which is the ability to wait for a close once we've got a response. At
> the moment we cannot add this because checks work in 2 steps :
> 
>  1) connect and send the "request"
>  2) receive the "reponse" and close.

And AFAIK this works just fine with most sane backend servers.

> There's no real state machine there, it's roughly "if request was not
> sent, send it, otherwise try to receive a response; if response is
> received, then process it and close otherwise wait for response". So
> we cannot remain in a "waiting for closing" state after we receive a
> response. I'm wondering if that couldn't be achieved using tcp-checks
> however. I quickly tried but couldn't find a reliable way of doing this
> but I'm thing that we could possibly extend the tcp-check rules to have
> "tcp-check expect close" that would wait for a read0.

Should there be some kind of timeout for how long to wait for the close ? I'm
thinking about backend servers that have very long/infinite keepalive
(and/or don't respect Connection: close).
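
For comparison, the existing two-step sequence can already be spelled out with tcp-check rules (sketch reusing the check request from this thread; no wait-for-close step exists at this point):

```haproxy
backend be_app
    option tcp-check
    tcp-check connect
    # step 1: send the request (spaces escaped per config syntax)
    tcp-check send GET\ /_\ HTTP/1.1\r\nHost:\ host.name.fi\r\nConnection:\ close\r\n\r\n
    # step 2: read the response, after which haproxy closes
    tcp-check expect string OK
    server app1 127.0.0.1:8080 check
```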

-Jarno

-- 
Jarno Huuskonen



Re: healthchecks (to uwsgi) possible regression 1.9.8 -> 1.9.9

2019-10-04 Thread Jarno Huuskonen
Hi Willy,

On Fri, Oct 04, Willy Tarreau wrote:
> Hi Jarno,
> 
> On Wed, Oct 02, 2019 at 01:08:14PM +0300, Jarno Huuskonen wrote:
> > Hello,
> > 
> > I was testing haproxy -> uwsgi (alerta.io) and noticed a possible regression
> > with healthchecks(httpchk).
> > With 1.9.9 uwsgi logs:
> > [uwsgi-http key: host.name.fi client_addr: 127.0.0.1 client_port: 45715] 
> > hr_read(): Connection reset by peer [plugins/http/http.c line 917]
> > 
> > health checks work
> > (option httpchk GET /_ HTTP/1.1\r\nHost:\ host.name.fi\r\nUser-Agent:\ 
> > haproxy)
> > but uwsgi logs the hr_read() warning/error.
> > 
> > I bisected 1.9.9 and this commit is probably the commit that changes
> > behaviour between 1.9.8 and 1.9.9:
> > 5d0cb90eb78f869e8801b34eddfdfd5dd8360e71 is the first bad commit
> > commit 5d0cb90eb78f869e8801b34eddfdfd5dd8360e71
> > Author: Olivier Houchard 
> > Date:   Fri Jun 14 15:26:06 2019 +0200
> > 
> > BUG/MEDIUM: connections: Don't call shutdown() if we want to disable 
> > linger.
> > 
> > In conn_sock_shutw(), avoid calling shutdown() if linger_risk is set. 
> > Not
> > doing so will result in getting sockets in TIME_WAIT for some time.
> > This is particularly observable with health checks.
> > 
> > This should be backported to 1.9.
> > 
> > (cherry picked from commit fe4abe62c7c5206dff1802f42d17014e198b9141)
> > Signed-off-by: Christopher Faulet 
> 
> Hmmm that's annoying, really, because we've opened a huge can of worms
> when fixing the first failed check and we're constantly displacing the
> problem somewhere else :-/
> 
> Yes, please do provide an strace, and a tcpdump, that would be nice.
> I suspect that we'll possibly see a FIN from the server, without the
> equivalent recv()==0 in haproxy, and that the setsockopt() call
> resulting in the RST being sent doesn't ack the FIN. Normally we
> should perform a clean shutdown+close if the FIN was received with
> the response and detected in time.

I sent pcap/strace offlist.
(strace -f -o -ttt, tcpdump -n -p -s 16384 -w ... host 127.0.0.1 and
port 8080).

I think in packet capture the second health checks causes
"client_addr: 127.0.0.1 client_port: 2779] hr_read(): Connection reset by peer 
[plugins/http/http.c line 917]"
(I think uswgi logs client_port incorrectly, ntohs(2779) gives 56074
(and port 56074 is in packet capture)).

(haproxy version: HA-Proxy version 2.1-dev2 2019/10/01).

I tried to reproduce with very minimal flask/uwsgi hello world app
and there hr_read happens very rarely. 
With alerta(.io) app this happens more regularly (AFAIK not with every check).
So maybe this is a weird timing issue or a bug in uwsgi.

> I'm seeing that your check request doesn't contain "connection: close",
> so actually it's possible that your server doesn't send the SYN, in
> which case we really need to close with RST. Could you please try to
> add "connection: close" to your httpchk line ?

I had Connection: close, but removed it after I added
http-check expect string OK
(http-check expect adds Connection: close).

-Jarno

-- 
Jarno Huuskonen



healthchecks (to uwsgi) possible regression 1.9.8 -> 1.9.9

2019-10-02 Thread Jarno Huuskonen
Hello,

I was testing haproxy -> uwsgi (alerta.io) and noticed a possible regression
with healthchecks(httpchk).
With 1.9.9 uwsgi logs:
[uwsgi-http key: host.name.fi client_addr: 127.0.0.1 client_port: 45715] 
hr_read(): Connection reset by peer [plugins/http/http.c line 917]

health checks work
(option httpchk GET /_ HTTP/1.1\r\nHost:\ host.name.fi\r\nUser-Agent:\ haproxy)
but uwsgi logs the hr_read() warning/error.

I bisected 1.9.9 and this commit is probably the commit that changes
behaviour between 1.9.8 and 1.9.9:
5d0cb90eb78f869e8801b34eddfdfd5dd8360e71 is the first bad commit
commit 5d0cb90eb78f869e8801b34eddfdfd5dd8360e71
Author: Olivier Houchard 
Date:   Fri Jun 14 15:26:06 2019 +0200

BUG/MEDIUM: connections: Don't call shutdown() if we want to disable linger.

In conn_sock_shutw(), avoid calling shutdown() if linger_risk is set. Not
doing so will result in getting sockets in TIME_WAIT for some time.
This is particularly observable with health checks.

This should be backported to 1.9.

(cherry picked from commit fe4abe62c7c5206dff1802f42d17014e198b9141)
Signed-off-by: Christopher Faulet 

Also 1.9.11, 2.0.7 and 2.1-dev2 has the same problem with uwsgi hr_read().
If I revert commits 6c7e96a3e1abb331e414d1aabb45d9fedb0254c2 and
fe4abe62c7c5206dff1802f42d17014e198b9141 from 2.1-dev2 then the uwsgi hr_read()
disappears.

If this seems worth digging into I can get packet captures or strace.
(I'm testing this on a rhel8 vm with 4.18.0-80.11.1.el8_0.x86_64 kernel).

This is fairly minimal config for testing:
frontend FE_alerta
bind ipv4@:8443 name alertav4ssl ssl crt /etc/haproxy/ssl/crtname.pem alpn h2,http/1.1

mode    http
option  dontlognull
option  http-ignore-probes  # ignore "pre-connect" requests
timeout http-request8s

capture request header Host len 40

option contstats
option forwardfor   except 127.0.0.0/8

# remove incoming X-Forwarded-For headers
http-request set-header X-Forwarded-Proto https

default_backend BE_alertaapi

#
# Alerta uwsgi backend
#
backend BE_alertaapi
option httpchk GET /_ HTTP/1.1\r\nHost:\ demo3.uef.fi\r\nUser-Agent:\ UEFHaproxy
http-check expect string OK
http-check disable-on-404

retries 2
option  redispatch
option  prefer-last-server
balance roundrobin

timeout connect 4500ms
timeout server  30s
timeout queue   4s
timeout check   5s

# uwsgi alerta app expects /alerts (not /api/alerts), strip /api
#http-request replace-uri ^/api/?(.*) /\1

# inter fast for uwsgi hr_read() testing
default-server inter 6s downinter 25s rise 2
server alertaapi1 127.0.0.1:8080 id 1 check

-Jarno

-- 
Jarno Huuskonen



Re: Haproxy timeouts and returns NULL as response

2019-09-10 Thread Jarno Huuskonen
Hi,

On Fri, Aug 30, Santhosh Kumar wrote:
>  We have a client-haproxy-server setup like this https://imgur.com/bxV3BA9,
> we use apache and jetty httpclient for http1.1 and http2 requests
> respectively. Our http request will take 2 secs to 10 mins for processing a
> request depends on the request type. Some of the requests returns null as
> response(whereas, the request is received and processed succesfully by
> server which I can verify via server logs) which triggers
> *org.apache.http.MalformedChunkCodingException:
> Unexpected content at the end of chunk *on the client side and this problem
> is happening with http1.1 requests for a speicifc type of requests, I tried
> tweaking timeouts and tried to fix this but its doesnt help me and timeout
> does not have a pattern. Each request timeout is having diff timeout values
> like 5 secs, 12secs, 27 secs or even 45secs. This error dissapears if I
> remove haproxy and connect directly yo server. my config file as follows,

What version of haproxy are you using (haproxy -vv) ?
If version 2.x have you tried with "no option http-use-htx" and latest
2.0.x version ?

Do you get haproxy logs for these failed req/responses ?

-Jarno

-- 
Jarno Huuskonen



Re: rate limiting

2019-09-06 Thread Jarno Huuskonen
Hi,

On Thu, Sep 05, Sander Klein wrote:
> I was looking at implementing rate limiting in our setup. But, since we are
> handling both IPv4 and IPv6 in the same frontends and backends, I was
> wondering how I could do that.
> 
> AFAIK a stick table is either IPv4 or IPv6 and you can only have one stick
> table per frontend or backend.
> 
> Is there a way to do this without splitting up the frontends and backends
> based on the IP version?

Are you going to use src address as stick table key ?
If you use ipv6 stick table (type ipv6) then ipv4 addresses are
stored as ipv4 mapped ipv6 addresses (::ffff:127.0.0.1).
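
So a single ipv6 table can cover a dual-stack frontend; a hedged sketch (the table size and rate limit are placeholders):

```haproxy
frontend fe_dual
    bind ipv4@:80
    bind ipv6@:80
    stick-table type ipv6 size 100k expire 30s store http_req_rate(10s)
    # IPv4 clients land in the same table as ::ffff:a.b.c.d entries
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc0_http_req_rate gt 100 }
```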

-Jarno

-- 
Jarno Huuskonen



Re: Get http connection client/server ip/port

2019-07-04 Thread Jarno Huuskonen
Hi,

On Thu, Jul 04, Peter Hudec wrote:
> I have maybe found some bug in haproxy, submitted as 
> https://github.com/haproxy/haproxy/issues/154.

1.8.4 is fairly old, can you reproduce on more recent 1.8.x or latest 2.0.x ?

> The variables dst, dst_port are identical with the src, src_port.
> 
> Is there any other way how to get these /in this case dst/ values ??
> 
> What do I need is ..
> 
> http-request set-header X-Server-IP %[dst]
> http-request set-header X-Server-Port %[dst_port]
> http-request set-header X-Client-IP %[src]
> http-request set-header X-Client-Port %[src_port]
> 
> result is ;(
> 
>   'HTTP_X_CLIENT_PORT' => '22696',
>   'HTTP_X_CLIENT_IP' => '217.73.20.190',
>   'HTTP_X_SERVER_PORT' => '22696',
>   'HTTP_X_SERVER_IP' => '217.73.20.190',

With this simple test config dst, dst_port etc. seem to work for me, does
this config work for you ?

global
stats socket /tmp/stats level admin

defaults
mode http
log global
option httplog

frontend test
bind :8080

default_backend test_be

backend test_be
http-request set-header X-Server-IP %[dst]
http-request set-header X-Server-Port %[dst_port]
http-request set-header X-Client-IP %[src]
http-request set-header X-Client-Port %[src_port]

server srv1 127.0.0.1:9000 id 1

listen yeah
bind ipv4@127.0.0.1:9000
http-request deny deny_status 200

run with for example haproxy -d -f tmp.conf and
curl http://127.0.0.1:8080 and you should see the headers from haproxy debug
output.

-Jarno

-- 
Jarno Huuskonen



Re: Match response status code with regular expression

2019-06-26 Thread Jarno Huuskonen
Hi,

On Tue, Jun 25, Ricardo Fraile wrote:
> I'm trying to set an acl for multiple status codes. As example, using
> only for one works:
> 
>   http-response set-header Cache-Control max-age=60 if { status 302 }
> 
> but with more than one, trying with a regex, fails because it is not
> implemented in http-response:
> 
>   http-response set-header Cache-Control max-age=60 if { rstatus 3* }
> 
> produces the following error:
> 
>   error detected while parsing an 'http-response set-header' condition :
> unknown fetch method 'rstatus' in ACL expression 'rstatus'.
> 
> 
> 
> The "rstatus" is available only under "http-check expect". Are there any
> equivalence to the regext status matching?

You can use multiple conditions:
http-response set-header Cache-Control max-age=60 if { status ge 300 } { status lt 400 }
should match if status is between 300-399.
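
Integer matching also accepts ranges, so (assuming range syntax is available for this fetch) the same test can be a single condition:

```haproxy
http-response set-header Cache-Control max-age=60 if { status 300:399 }
```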

-Jarno

-- 
Jarno Huuskonen



Re: H/2 via Unix Sockets fails

2019-06-04 Thread Jarno Huuskonen
Hi Christian,

On Thu, Apr 25, Christian Ruppert wrote:
> 
> listen genlisten_10320-cust1.tls-tcp
>   acl REQ_TLS_HAS_ECC req.ssl_ec_ext eq 1
>   tcp-request content accept if { req_ssl_hello_type 1 } # Match
> Client SSL Hello
> 
>   use-server socket-10320-rsa if !REQ_TLS_HAS_ECC
>   server socket-10320-rsa unix@/run/haproxy-10320-rsa.sock send-proxy-v2
> 
>   use-server socket-10320-ecc if REQ_TLS_HAS_ECC
>   server socket-10320-ecc unix@/run/haproxy-10320-ecc.sock send-proxy-v2

Do you need this tcp frontend for just serving both rsa/ecc
certificates ?
If so I think haproxy can do this(with openssl >= 1.0.2) with crt keyword:
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.1-crt
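
A sketch of the collapsed setup (file names are placeholders): with OpenSSL >= 1.0.2, pointing `crt` at `site.pem` makes haproxy also load sibling `site.pem.rsa` / `site.pem.ecdsa` files and choose one per handshake, removing the need for the req.ssl_ec_ext front layer:

```haproxy
frontend fe_tls
    # one bind serves both RSA-only and ECC-capable clients
    bind :10320 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    default_backend be_app
```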

-Jarno

> listen genlisten_10320-cust1.tls
> 
>   bind unix@/run/haproxy-10320-rsa.sock accept-proxy user haproxy
> group root mode 600 ssl crt /etc/haproxy/test-rsa.pem alpn
> h2,http/1.1 process 3
>   bind unix@/run/haproxy-10320-ecc.sock accept-proxy user haproxy
> group root mode 600 ssl crt /etc/haproxy/test-ecc.pem alpn
> h2,http/1.1 process 4-8

-- 
Jarno Huuskonen



Re: Haproxy infront of exim cluster - SMTP protocol synchronization error

2019-05-23 Thread Jarno Huuskonen
Hi,

On Wed, May 22, Brent Clark wrote:
> 2019-05-22 12:23:15 SMTP protocol synchronization error (input sent
> without waiting for greeting): rejected connection from
> H=smtpgatewayserver [IP_OF_LB_SERVER] input="PROXY TCP4 $MY_IP
> $IP_OF_LB_SERVER 39156 587\r\n"

Seems like proxy protocol is not enabled on exim.

> We use Exim and I set:
> hostlist haproxy_hosts = IP.OF.LB

Do you have
hosts_proxy(https://www.exim.org/exim-html-current/doc/html/spec_html/ch-proxies.html)
 set/enabled ? 

-Jarno

> My haproxy config:
> https://pastebin.com/raw/JYAXkAq4
> 
> If I run
> openssl s_client -host smtpgatewayserver -port 587 -starttls smtp -crlf
> 
> openssl says connected, but SSL-Session is empty.
> 
> I would like to say, if I change 'send-proxy' to 'check', the
> everything works, BUT the IP logged by Exim, is that of the LB, and
> not the client.
> 
> If anyone could please review the haproxy config / my setup, it
> would be appreciated.
> 
> Many thanks
> Brent Clark
> 
> 

-- 
Jarno Huuskonen



Re: Host header and sni extension differ

2019-05-16 Thread Jarno Huuskonen
Hi,

On Thu, May 16, Joao Morais wrote:
> 
> Hi list! The symptom is as follow: when logging Host: header I receive 
> `myapp.io` while in the same request the sni extension says `anotherapp.com`.
> 
> This happens in a very few requests (about 0.5%) but this is enough to make 
> some noise - regarding server certificate used in the handshake, and also the 
> ca-file used in handshakes with client certs. When they differ, the header is 
> right and the sni is wrong.
> 
> I can confirm that every "myapp.io" or "anotherapp.com" resolves to the same 
> haproxy cluster. I can also confirm that all agents are browsers (Chrome and 
> Firefox) running in Linux and, based on the "myapp.io" and "anotherapp.com" 
> samples I saw together in the logs, the user is using both applications at 
> the same time, probably from the same instance of the browser.

Do the myapp.io and anotherapp.com share same certificate (ie.
certificate has both myapp.io and anotherapp.com SAN) ?

AFAIK browser can reuse the same tls connection if the certificate
covers both names. When the host/sni differ do you have an earlier
connection (for example from same ip/port) using matching sni/host in your
logs ?

-Jarno

-- 
Jarno Huuskonen



Re: H/2 via Unix Sockets fails

2019-04-24 Thread Jarno Huuskonen
Hi,

On Tue, Apr 23, Christian Ruppert wrote:
> we have an older setup using nbproc >1 and having a listener for the
> initial tcp connection and one for the actual SSL/TLS, also using
> tcp mode which then goes to the actual frontend using http mode.
> Each being bound to different processes.
> So here's the test config I've used:

(Your config seems quite similar to what I tested in this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg32255.html)

It kind of works if you add a second bind (with proto h2) to some_frontend
and from h2test_tcp.tls use server h2 if { ssl_fc_alpn h2 }
(BUT client can (at least in theory) choose alpn h2 and speak http/1.1).

So using mode http on h2test_tcp.tls is probably safer choice.

> listen h2test_tcp
> mode tcp
> bind :444
> option tcplog
> log global
> server socket-444-h2test unix@/run/haproxy-444-h2test.sock
> send-proxy-v2
> 
> listen h2test_tcp.tls
> mode tcp
> option tcplog
> log global
> bind unix@/run/haproxy-444-h2test.sock accept-proxy user haproxy
> group haproxy mode 600 ssl crt /etc/haproxy/ssl/h2test.pem alpn
> h2,http/1.1
> server socket-444_2 unix@/run/haproxy-444_2-h2test.sock
> send-proxy-v2
> 
> frontend some_frontend
> mode http
> log global
> bind unix@/run/haproxy-444_2-h2test.sock id 444 accept-proxy
> user haproxy group haproxy mode 600
> bind :80
> 
> ...

[...]

> curl says:
> # curl -k4vs https://127.0.0.1:444/ --http2
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to 127.0.0.1 (127.0.0.1) port 444 (#0)
> * ALPN, offering h2
> * ALPN, offering http/1.1
> * Cipher selection:

[...]

> * Using HTTP2, server supports multi-use
> * Connection state changed (HTTP/2 confirmed)
> * Copying HTTP/2 data in stream buffer to connection buffer after
> upgrade: len=0
> * Using Stream ID: 1 (easy handle 0x56087e29b770)
> >GET / HTTP/2
> >Host: 127.0.0.1:444
> >User-Agent: curl/7.64.1
> >Accept: */*
> >
> * http2 error: Remote peer returned unexpected data while we
> expected SETTINGS frame.  Perhaps, peer does not support HTTP/2
> properly.
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection 0
> 
> Can anybody else confirm that? Tested with HAProxy 1.9.6.
> Any ideas what might be the reason? Right now, I'd guess that's a
> Problem with H/2 and those sockets on the HAProxy side.

I think the problem is that "bind unix@/run/haproxy-444_2-h2test.sock"
expects/speaks http/1.1.

-Jarno

-- 
Jarno Huuskonen



Re: Chained http -> http frontends: http/2 error 400 vs http/1.1 error 502

2019-03-26 Thread Jarno Huuskonen
Hi,

On Tue, Mar 26, Christopher Faulet wrote:
> Le 26/03/2019 à 08:48, Jarno Huuskonen a écrit :
> >Testing with 2.0-dev2(2.0-dev2 2019/03/26) I get kind of strange results
> >with http2:
> >- curl seems to retry in a infinite loop
> >- firefox tries few times with both H2 / HTTP1.1 and then shows
> >   "Secure Connection Failed"
> >- chrome tries few times (3 times w/H2 and 3 times w/HTTP/1.1) and
> >   then shows "ERR_SPDY_SERVER_REFUSED_STREAM"
> >
> >(With HTTP/1.1 all three show 502 error page).
> >
> 
> Hi Jarno,
> 
> The 502 response code in HTTP/1.1 is detected by curl as a transient
> error (timeout, 408/5xx response code). If you add the option
> '--retry 1', curl will retry to perform the request one time. My
> Firefox seems to retry 1 time before giving up. Note that in
> HTTP/1.1, such retries are only possible on idempotent request.
> 
> In HTTP/2, because nothing was sent to the client, HAProxy closes
> the stream sending a RST_STREAM frame with the error code
> REFUSED_STREAM. It is a guarantee that a request has not been
> processed. So the client may automatically retry it (see RFC7540 - #
> 8.1.4) . My Firefox retries 9 times before giving up. But curl
> retries in loop. The option "--retry" is ignored. So I guess it is a
> bug from curl.
> 
> So everything seems to work as expected from the HAproxy point of view.

Thank you for the explanation, makes sense. (And also thank you for
working on this:)

-Jarno

-- 
Jarno Huuskonen



Re: Chained http -> http frontends: http/2 error 400 vs http/1.1 error 502

2019-03-26 Thread Jarno Huuskonen
Hello,

On Fri, Mar 01, Christopher Faulet wrote:
> Le 01/03/2019 à 14:36, Jarno Huuskonen a écrit :
> >Hi,
> >
> >Pinging this thread in case this is an actual error/bug and not
> >a configuration error.
> >(current 2.0-dev1-8dca19-40 2019/03/01 sends 400 error to client when
> >http/2 is used).
> >
> 
> It is not an expected behavior, of course. And after a quick check,
> it is a bug. Instead of catching an error from the server side, we
> handle it as an abort from the client.
> 
> I have to investigate a bit more because abortonclose seems to be
> broken too. And when HTX is enable, the H1 is buggy too.

Testing with 2.0-dev2 (2.0-dev2 2019/03/26) I get kind of strange results
with http2:
- curl seems to retry in an infinite loop
- firefox tries few times with both H2 / HTTP1.1 and then shows
  "Secure Connection Failed"
- chrome tries few times (3 times w/H2 and 3 times w/HTTP/1.1) and
  then shows "ERR_SPDY_SERVER_REFUSED_STREAM"

(With HTTP/1.1 all three show 502 error page).

(Also the vtest attached to the start of this thread shows refused
stream).

Here's the test config:
defaults
mode http
option http-use-htx
timeout connect 1s
timeout client  2s
timeout server  4s
timeout tarpit  3s

listen HTTPS_in
mode http
bind 127.0.0.1:8443 ssl crt common.pem alpn h2,http/1.1

server tmpserver abns@proc1 send-proxy-v2

frontend fe
bind abns@proc1 accept-proxy
http-request reject if TRUE

default_backend be

backend be
    server h1srv 127.0.0.1:8082

listen H2_server
bind 127.0.0.1:8082

-- 
Jarno Huuskonen - System Administrator | jarno.huuskonen atsign uef.fi



Re: 400 SC on h2 xhr post

2019-03-26 Thread Jarno Huuskonen
Hi Max,

On Thu, Mar 21, Maximilian Böhm wrote:
> thanks for your suggestions. It was not successful.
> 
> However, I managed to make it reproducible. I would be really happy if 
> someone more experienced would take a look at this.
> 
> Setup
> Client (Chrome) -> Haproxy (Docker) -> Jetty (Docker)
> 
> The client executes the following script, it can be saved on the local disk, we 
> can ignore the CORS logging. Do not change the '3', otherwise it will not 
> occur.
> 
> function loopMe () {
>setTimeout(function () {
>   var xhr = new XMLHttpRequest();
>   xhr.open('POST', 'https://[DOMAIN]/app/docker-jetty.json');
>   xhr.send();
>   loopMe();
>}, 3)
> }
> loopMe();
> 
> 

Can you test with 2.0-dev2? AFAIK it contains H2 abortonclose fixes that might
help with this.

And somewhat out of curiosity, does changing jetty's default
jetty.http.idleTimeout=3 make it easier to reproduce?

-Jarno

> Haproxy.cfg
> global
> daemon
> tune.ssl.default-dh-param 2048
> stats socket /var/run/haproxy.stat
> 
> defaults
> mode http
> option httplog
> log stdout format raw daemon
> timeout connect  5m
> timeout client  5m
> timeout server  5m
> 
> frontend frontend_h2
> bind *:443 ssl crt /usr/local/etc/haproxy/ssl/ alpn h2,http/1.1
> use_backend backend_jetty
>   
> backend backend_jetty
> server web850 127.0.0.1:81
>  
> Commands for starting the container
> 1) docker run -p 443:443 -v [LOCAL_DIR_HAPROXY]:/usr/local/etc/haproxy/ -it 
> haproxy:1.9.4
> 2) docker run -d --name jetty -v [LOCAL_DIR_JETTY]:/var/lib/jetty/webapps -p 
> 81:8080 jetty
> 
> Substitute the following variables:
> 1) [DOMAIN]: Domain you have a certificate for or generate one
> 2) [LOCAL_DIR_HAPROXY]: Local directory where you need to put the 
> "haproxy.cfg" and your certificate (subdirectory "ssl")
> 3) [LOCAL_DIR_JETTY]: Local directory, create a subdirectory called "app" and 
> create an empty file named "docker-jetty.json"). 
> 
> Substitute the variables, start the container and open the script in the 
> browser. After 10-15 requests you should get a SC 400
> 
> At first sight, it looks like jetty is doing something terribly wrong. But, 
> and that's the problem, it does not occur if I have just http/1.1 enabled 
> between the client and haproxy. Any ideas?
> 
> Thanks,
> Max
> 
> -Ursprüngliche Nachricht-
> Von: Jarno Huuskonen  
> Gesendet: Mittwoch, 20. März 2019 12:59
> An: Maximilian Böhm 
> Cc: haproxy@formilux.org
> Betreff: Re: 400 SC on h2 xhr post
> 
> Hi Max,
> 
> On Wed, Mar 20, Maximilian Böhm wrote:
> > >> If the 400 errors happen within 3mins, have you tried changing 
> > >> client/keep-alive timeouts to see if anything changes ?
> > They do most often happen in the first 3 mins. But that's not always the 
> > case. And if it's really a timeout, shouldn't it be more clearly recurring? 
> > Like every tenth request fails. But that's also not the case. Sometimes 
> > it's the 3rd request, sometimes the 20th or even later.
> > However, I did increase the previously set timeouts (40min). But this did 
> > not change anything at all. Is there another timeout which explicitly only 
> > affects h2 on the client side?
> 
> I'm not aware of any more timeouts to test. I think possible timeouts are in 
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.1
> 
> Have you tested different values for http-reuse (never to always) ?
> (https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-reuse)
> (Probably doesn't make any difference).
> 
> This could be related to 
> https://www.mail-archive.com/haproxy@formilux.org/msg32959.html
> that test case also returns 400 error with state CH-- with http2.
> 
> -Jarno
> 
> > -Ursprüngliche Nachricht-
> > Von: Jarno Huuskonen 
> > Gesendet: Dienstag, 19. März 2019 17:34
> > An: Maximilian Böhm 
> > Cc: haproxy@formilux.org
> > Betreff: Re: 400 SC on h2 xhr post
> > 
> > Hi,
> > 
> > On Tue, Mar 19, Maximilian Böhm wrote:
> > > The problem I experience is within a legacy javascript application which 
> > > periodically checks if the user is still logged in. It does so by sending 
> > > an xhr request every 30 seconds (I said, it's a legacy app, right? It 
> > > does so by POST not GET...). As you may guess, this behavior works using 
> >

Re: 400 SC on h2 xhr post

2019-03-20 Thread Jarno Huuskonen
Hi Max,

On Wed, Mar 20, Maximilian Böhm wrote:
> >> If the 400 errors happen within 3mins, have you tried changing 
> >> client/keep-alive timeouts to see if anything changes ?
> They do most often happen in the first 3 mins. But that's not always the 
> case. And if it's really a timeout, shouldn't it be more clearly recurring? 
> Like every tenth request fails. But that's also not the case. Sometimes it's 
> the 3rd request, sometimes the 20th or even later.
> However, I did increase the previously set timeouts (40min). But this did not 
> change anything at all. Is there another timeout which explicitly only 
> affects h2 on the client side?

I'm not aware of any more timeouts to test. I think possible timeouts
are in https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.1

Have you tested different values for http-reuse (never to always) ?
(https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-reuse)
(Probably doesn't make any difference).

This could be related to 
https://www.mail-archive.com/haproxy@formilux.org/msg32959.html
that test case also returns 400 error with state CH-- with http2.

-Jarno

> -Ursprüngliche Nachricht-
> Von: Jarno Huuskonen  
> Gesendet: Dienstag, 19. März 2019 17:34
> An: Maximilian Böhm 
> Cc: haproxy@formilux.org
> Betreff: Re: 400 SC on h2 xhr post
> 
> Hi,
> 
> On Tue, Mar 19, Maximilian Böhm wrote:
> > The problem I experience is within a legacy javascript application which 
> > periodically checks if the user is still logged in. It does so by sending 
> > an xhr request every 30 seconds (I said, it's a legacy app, right? It does 
> > so by POST not GET...). As you may guess, this behavior works using http1.1 
> > quasi infinitely. But as soon as I activate HTTP/2, I'll get the following 
> > output (sooner or later): 
> > 172.17.0.1:46372 [19/Mar/2019:12:10:13.465] [fntnd] [bknd] 0/0/0/14/14 200 
> > 368 - -  1/1/0/1/0 0/0 "POST   [URL] HTTP/1.1"
> > 172.17.0.1:46372 [19/Mar/2019:12:10:43.465] [fntnd] [bknd] 0/0/0/-1/8 400 
> > 187 - - CH-- 1/1/0/0/0 0/0 "POST [URL] HTTP/1.1"
> > 
> > Which means, the developer toolbar announces a response code "400" and 
> > "400 Bad requestYour browser sent an invalid 
> > request.". I was not yet successful reproduce this behavior 
> > with OkHttp (java http2-capable library). Jetty - on the backend site - 
> > does not report any requests in its ncsa request log.
> 
> I've seen some (very few, maybe one or two a day) 400 bad requests with haproxy 
> 1.9.4 (http2) to an apache+php (http/1.1) backend. These requests also have 
> CH.. state in logs.
> (400 errors have also happened for GET requests).
> 
> > It is not directly reproducible (like every second time) but it usually 
> > happens with the first 3 minutes. I experienced this behavior in Chrome 
> > (73.0.3683.75), Firefox (65.0.2 (32-Bit)) and Edge (44.17763.1.0). I also 
> > tried with different networks and different internet connections.
> > 
> > Any ideas? Maybe a similar bug is known? What shall/can I do next? Setting 
> > up Wireshark with MITM and comparing the requests? Right now, I can't 
> > imagine the error is on side of the client nor on the backend (the backend 
> > is not changed). 
> 
> If the 400 errors happen within 3mins, have you tried changing 
> client/keep-alive timeouts to see if anything changes ?
> 
> > timeout queue   2m
> > timeout client  2m
> > timeout http-keep-alive 2m



Re: 400 SC on h2 xhr post

2019-03-19 Thread Jarno Huuskonen
Hi,

On Tue, Mar 19, Maximilian Böhm wrote:
> The problem I experience is within a legacy javascript application which 
> periodically checks if the user is still logged in. It does so by sending an 
> xhr request every 30 seconds (I said, it's a legacy app, right? It does so by 
> POST not GET...). As you may guess, this behavior works using http1.1 quasi 
> infinitely. But as soon as I activate HTTP/2, I'll get the following output 
> (sooner or later): 
> 172.17.0.1:46372 [19/Mar/2019:12:10:13.465] [fntnd] [bknd] 0/0/0/14/14 200 
> 368 - -  1/1/0/1/0 0/0 "POST   [URL] HTTP/1.1"
> 172.17.0.1:46372 [19/Mar/2019:12:10:43.465] [fntnd] [bknd] 0/0/0/-1/8 400 187 
> - - CH-- 1/1/0/0/0 0/0 "POST [URL] HTTP/1.1"
> 
> Which means, the developer toolbar announces a response code "400" and 
> "400 Bad requestYour browser sent an invalid 
> request.". I was not yet successful reproduce this behavior 
> with OkHttp (java http2-capable library). Jetty - on the backend site - does 
> not report any requests in its ncsa request log.

I've seen some (very few, maybe one or two a day) 400 bad requests
with haproxy 1.9.4 (http2) to an apache+php (http/1.1) backend. These
requests also have CH.. state in logs.
(400 errors have also happened for GET requests).

> It is not directly reproducible (like every second time) but it usually 
> happens with the first 3 minutes. I experienced this behavior in Chrome 
> (73.0.3683.75), Firefox (65.0.2 (32-Bit)) and Edge (44.17763.1.0). I also 
> tried with different networks and different internet connections.
> 
> Any ideas? Maybe a similar bug is known? What shall/can I do next? Setting up 
> Wireshark with MITM and comparing the requests? Right now, I can't imagine 
> the error is on side of the client nor on the backend (the backend is not 
> changed). 

If the 400 errors happen within 3mins, have you tried changing
client/keep-alive timeouts to see if anything changes ?

> timeout queue   2m
> timeout client  2m
> timeout http-keep-alive 2m

-Jarno

-- 
Jarno Huuskonen



Re: Adding Configuration parts via File

2019-03-08 Thread Jarno Huuskonen
Hi,

On Fri, Mar 08, Philipp Kolmann wrote:
> On 3/8/19 2:50 PM, Patrick Hemmer wrote:
> >
> >You can use external files in two cases. See the following blog articles:
> >
> >https://www.haproxy.com/blog/introduction-to-haproxy-acls/ (search
> >for "acl file")
> >
> >https://www.haproxy.com/blog/introduction-to-haproxy-maps/
> 
> thanks for the hint with the maps. This looks quite promising for my
> other issue I am facing:
> 
>     acl mandant_IT_email path_beg -i /it/Service-One
>     http-request redirect code 302 location "/long/URL/84" if
> mandant_IT_email
> 
> Is there any possibility to achive such a redirect if path_beg via maps?
> 
>     http-request redirect code 302 location *value* if path_beg *key
> *

Yes (probably :), something like this might work for you:

acl is_redirect_match path,map_beg(redir.map) -m found
http-request redirect code 302 location %[path,map_beg(redir.map)] if is_redirect_match

and in the redir.map file:
/a/b    /ab/somewhere
/a/c    /ac/somewhere
/a1/b   /a1b/somewhere

-Jarno

-- 
Jarno Huuskonen



Re: read async auth date from file

2019-03-06 Thread Jarno Huuskonen
Hi,

On Sun, Mar 03, Jeff wrote:
> I need to add an authorization header for a target server, e.g.
>http-request add-header Authorization Bearer\ MYTOKENDATA
> 
> where MYTOKENDATA is read from a file for each proxy message.

Does this mean that each http request needs to read the MYTOKENDATA from
file (file read access for each and every request) ?

> (MYTOKENDATA is written asynchronously to the file by another
> process.)
> 
> How to do this in HAProxy?

Just a few ideas from top of my head:
- use maps / cli to update map values
  (https://www.haproxy.com/blog/introduction-to-haproxy-maps/)
- lua could probably do this, but AFAIK doing file io will block the rest
  of haproxy, so it might be better to read the MYTOKENDATA from redis
  or memcache (or something similar).
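
The maps idea could look roughly like this (a sketch; the map file path and
the constant lookup key "token" are assumptions, not from the thread):

```
# haproxy.cfg: look the token up from a map file with a fixed key,
# so no per-request file read is needed
http-request add-header Authorization "Bearer %[str(token),map(/etc/haproxy/token.map)]"
```

The other process would then push a new value through the runtime API instead
of writing a file, e.g.:
echo "set map /etc/haproxy/token.map token NEWTOKEN" | socat stdio /var/run/haproxy.stat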

Is your use case something like described here:
https://www.haproxy.com/blog/using-haproxy-as-an-api-gateway-part-2-authentication/

-Jarno

-- 
Jarno Huuskonen



Re: Chained http -> http frontends: http/2 error 400 vs http/1.1 error 502

2019-03-01 Thread Jarno Huuskonen
Hi,

Pinging this thread in case this is an actual error/bug and not
a configuration error.
(current 2.0-dev1-8dca19-40 2019/03/01 sends 400 error to client when
http/2 is used).

-Jarno

On Sat, Feb 02, Jarno Huuskonen wrote:
> (This is kind of related to this thread:
> https://www.mail-archive.com/haproxy@formilux.org/msg32255.html).
> 
> I'm seeing different behaviour between http1.1 / http2 when chaining
> two frontends with mode http and the last frontend closes
> connection with http-request reject (or tcp-request content reject).
> 
> When client uses http/1.1 then client receives 502 error (I think
> this is expected because the "server" for the first frontend just closes
> connection).
> 
> But when client uses http/2 then client will receive error 400.
> (Tested with latest 2.0dev (2.0-dev0-32211a-258)).
> I'm not sure if this is a bug, but it at least seems to be different behaviour
> between http/1.1 and http/2. (option http-use-htx doesn't seem to make
> difference).
> 
> The attached varnishtest should explain what I mean. I put some debug
> printf output to proto_htx and with http/2 the status 400 comes
> from /* 3: client abort with an abortonclose */
> (proto_htx.c line 1535, s->req.flags 0x9c42020).
> 
> With http/1.1 status 502 comes from /* 4: close from server, capture
> the response if the server has started to respond */
> (proto_htx.c line 1559, s->req.flags 0x9842000).
> (If I interpret s->req.flags correctly then http/2 has
> CF_READ_DONTWAIT and CF_SHUTR set and http/1.1 doesn't).
> 
> -Jarno
> 
> varnishtest "h2 chaining 400 error"
> #REQUIRE_VERSION=1.9
> feature ignore_unknown_macro
> 
> haproxy h1 -conf {
> defaults
> mode http
> ${no-htx} option http-use-htx
> timeout connect 1s
> timeout client  1s
> timeout server  1s
> 
> listen HTTP_in
> bind "fd@${HTTP_in}"
> server tmpserver abns@proc1 send-proxy-v2
> 
> listen HTTP2_in
> bind "fd@${HTTP2_in}" proto h2
> server tmpserver abns@proc1 send-proxy-v2
> 
> frontend fe
> bind abns@proc1 accept-proxy
> http-request reject if TRUE
> default_backend be
> 
> backend be
> server s1 ${s1_addr}:${s1_port}
> 
> } -start
> 
> client c1h1 -connect ${h1_HTTP_in_sock} {
> txreq
> rxresp
> expect resp.status == 502
> } -run
> 
> client c1h2 -connect ${h1_HTTP2_in_sock} {
>   txpri
>   stream 0 {
>   txsettings
>   rxsettings
>   txsettings -ack
>   rxsettings
>   expect settings.ack == true
>   } -run
>   stream 1 {
>   # warning: -req, -scheme, -url MUST be placed first otherwise
>   # the H2 protocol is invalid since they are pseudo-headers
>   txreq \
> -req GET \
> -scheme "https" \
> -url /path/to/file.ext
> 
>   rxhdrs
>   expect resp.status == 502
>   #rxdata -all
>   } -run
> } -run
> 

-- 
Jarno Huuskonen



Re: http2-issue with http2 enabled on frontend and on backend

2019-02-26 Thread Jarno Huuskonen
Hi,

On Tue, Feb 26, Tom wrote:
> 1)
> When I enable "errorfile 503 /etc/haproxy/503.html" in the
> defaults-section, then haproxy comes not up and logs the following
> error:
> "Unable to convert message in HTX for HTTP return code 503."

Does it work if you move the errorfile 503 to frontend/backend ?

> 2)
> When I enable removing the server-header from the backend with
> "rspidel ^Server:.*", then the haproxy-workers are terminating with
> Segmentation fault and the website via haproxy is not working:

Does http-response del-header Server work (instead of rspidel) ?
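
For reference, the HTX-compatible replacement for the quoted rspidel line
would be along these lines:

```
# drop the Server header from all backend responses
# (equivalent in effect to: rspidel ^Server:.*)
http-response del-header Server
```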

-Jarno

-- 
Jarno Huuskonen



Re: Require info on ACL for rate limiting on per URL basis.

2019-02-21 Thread Jarno Huuskonen
Hi,

On Thu, Feb 21, Badari Prasad wrote:
> But by replacing 'src' with 'path',  rate-limiting did not work. My current
> config after the change is :
> 
> backend st_src_as2_monte
> stick-table type string len 64 size 1m expire 1s store http_req_rate(1s)

(for testing it helps to use a longer expire, e.g. 60s, and a longer rate
window (60s): it's then easier to view the stick table values from the admin
socket and see whether the table is being updated).
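
For testing, the quoted table could be declared with longer windows, e.g.
(a sketch; the table name and sizes are kept from the quoted config):

```
backend st_src_as2_monte
    # a 60s expire/rate window keeps entries around long enough to
    # inspect with "show table st_src_as2_monte" on the admin socket
    stick-table type string len 64 size 1m expire 60s store http_req_rate(60s)
```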

> frontend scef
> bind 0.0.0.0:80
> bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> mode http
> option forwardfor
> 
> http-request track-sc1 path table st_src_as2_monte

You're using sc1 here.

> acl monte_as2_api_url path_beg /api/v1/monitoring-event/A02/
> #500 requests per second.
> acl monte_as1_exceeds_limit sc0_http_req_rate(st_src_as1_monte) gt 500

And sc0 here, change this to sc1 (or use track-sc1).
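
Putting it together, the tracker and the ACL have to reference the same
counter, e.g. (a sketch based on the quoted config):

```
http-request track-sc1 path table st_src_as2_monte
# sc1_* must match track-sc1 above (the quoted config mixed sc0 and sc1)
acl monte_as2_exceeds_limit sc1_http_req_rate(st_src_as2_monte) gt 500
```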

-Jarno

> http-request deny deny_status 429 if monte_as2_api_url
> monte_as2_exceeds_limit
> use_backend nodes
> Appreciate the response on this, and going further I will have to extend
> the rate limiting to multiple url's .
> 
> 
> Thanks
>  badari
> 
> 
> 
> On Wed, Feb 20, 2019 at 11:13 PM Jarno Huuskonen 
> wrote:
> 
> > Hi,
> >
> > On Wed, Feb 20, Badari Prasad wrote:
> > >  Thank you for responding. Came up with based on the inputs:
> > >
> > > #printf "as2monte" | mkpasswd --stdin --method=md5
> > > userlist AuthUsers_MONTE_AS2
> > > user appuser_as2  password $1$t25fZ7Oe$bjthsMcXgbCt2EJvQo8r0/
> > >
> > > backend st_src_as2_monte
> > > stick-table type string len 64 size 1000 expire 1s store
> > > http_req_rate(1s)
> > >
> > > frontend scef
> > > bind 0.0.0.0:80
> > > bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> > > mode http
> > > #option httpclose
> > > option forwardfor
> > >
> > > acl monte_as2_api_url url_beg /api/v1/monitoring-event/A02/
> > > #500 requests per second.
> > > acl monte_as2_exceeds_limit src_http_req_rate(st_src_as2_monte) gt
> > 500
> > > http-request track-sc1 src table st_src_as2_monte unless
> > > monte_as2_exceeds_limit
> > > http-request deny deny_status 429 if monte_as2_api_url
> > > monte_as2_exceeds_limit
> >
> > I'm confused :) about what your requirements are, but I think with
> > this configuration each src address can have a rate of 500 to
> > /api/v1/monitoring-event/A02/. (so with 10 different src addresses
> > you can have a rate of 5000 to /api/v1/monitoring-event/A02/).
> >
> > (And you're using type string stick table, type ip or ipv6 is better
> > fit for tracking src).
> >
> > But if it fits your requirements then I'm glad you found a working
> > solution.
> >
> > -Jarno
> >
> > > http-request auth realm basicauth if monte_as2_api_url
> > > !authorized_monte_as2
> > >
> > > use_backend nodes
> > >
> > > With this config I was able to rate limit per url basis.
> > >
> > > Thanks
> > >  badari
> > >
> > >
> > >
> > > On Tue, Feb 19, 2019 at 10:01 PM Jarno Huuskonen  > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 11, Badari Prasad wrote:
> > > > >I want to rate limit based on url
> > > > > [/api/v1/monitoring-event/A01, /api/v1/client1/transfer_data,
> > > > > /api/v1/client2/transfer_data  ]  no matter what the source ip
> > address
> > > > is.
> > > >
> > > > Something like this might help you. Unfortunately at the moment
> > > > I don't have time to create a better example.
> > > >
> > > > acl api_a1 path_beg /a1
> > > > acl api_b1 path_beg /b1
> > > > acl rate_5 sc0_http_req_rate(test_be) gt 5
> > > > acl rate_15 sc0_http_req_rate(test_be) gt 15
> > > >
> > > > # You might want to add acl so you'll only track paths you're
> > > > # interested in.
> > > > http-request track-sc0 path table test_be
> > > > # if you want to track only /a1 /b1 part of path
> > > > # you can use for example field converter:
> > > > #http-request track-sc0 path,field(1,/,2) table test_be
> > > > #http-request set-header X-Rate %[sc0_http_req_rate(test_be)]
> > >

Re: Require info on ACL for rate limiting on per URL basis.

2019-02-20 Thread Jarno Huuskonen
Hi,

On Wed, Feb 20, Badari Prasad wrote:
>  Thank you for responding. Came up with based on the inputs:
> 
> #printf "as2monte" | mkpasswd --stdin --method=md5
> userlist AuthUsers_MONTE_AS2
> user appuser_as2  password $1$t25fZ7Oe$bjthsMcXgbCt2EJvQo8r0/
> 
> backend st_src_as2_monte
> stick-table type string len 64 size 1000 expire 1s store
> http_req_rate(1s)
> 
> frontend scef
> bind 0.0.0.0:80
> bind 0.0.0.0:443 ssl crt /etc/ssl/private/as1.pem
> mode http
> #option httpclose
> option forwardfor
> 
> acl monte_as2_api_url url_beg /api/v1/monitoring-event/A02/
> #500 requests per second.
> acl monte_as2_exceeds_limit src_http_req_rate(st_src_as2_monte) gt 500
> http-request track-sc1 src table st_src_as2_monte unless
> monte_as2_exceeds_limit
> http-request deny deny_status 429 if monte_as2_api_url
> monte_as2_exceeds_limit

I'm confused :) about what your requirements are, but I think with
this configuration each src address can have a rate of 500 to
/api/v1/monitoring-event/A02/. (so with 10 different src addresses
you can have a rate of 5000 to /api/v1/monitoring-event/A02/).

(And you're using type string stick table, type ip or ipv6 is better
fit for tracking src).

But if it fits your requirements then I'm glad you found a working
solution.

-Jarno

> http-request auth realm basicauth if monte_as2_api_url
> !authorized_monte_as2
> 
> use_backend nodes
> 
> With this config I was able to rate limit per url basis.
> 
> Thanks
>  badari
> 
> 
> 
> On Tue, Feb 19, 2019 at 10:01 PM Jarno Huuskonen 
> wrote:
> 
> > Hi,
> >
> > On Mon, Feb 11, Badari Prasad wrote:
> > >I want to rate limit based on url
> > > [/api/v1/monitoring-event/A01, /api/v1/client1/transfer_data,
> > > /api/v1/client2/transfer_data  ]  no matter what the source ip address
> > is.
> >
> > Something like this might help you. Unfortunately at the moment
> > I don't have time to create a better example.
> >
> > acl api_a1 path_beg /a1
> > acl api_b1 path_beg /b1
> > acl rate_5 sc0_http_req_rate(test_be) gt 5
> > acl rate_15 sc0_http_req_rate(test_be) gt 15
> >
> > # You might want to add acl so you'll only track paths you're
> > # interested in.
> > http-request track-sc0 path table test_be
> > # if you want to track only /a1 /b1 part of path
> > # you can use for example field converter:
> > #http-request track-sc0 path,field(1,/,2) table test_be
> > #http-request set-header X-Rate %[sc0_http_req_rate(test_be)]
> >
> > http-request deny deny_status 429 if api_a1 rate_5
> >     http-request deny deny_status 403 if api_b1 rate_15
> >
> > # adjust len and size etc. to your needs
> > backend test_be
> > stick-table type string len 40 size 20 expire 180s store
> > http_req_rate(60s)
> >
> > -Jarno
> >
> > > On Mon, Feb 11, 2019 at 7:34 PM Jarno Huuskonen 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 11, Badari Prasad wrote:
> > > > > Thank you for the response. I came up with my own haproxy cfg,
> > where
> > > > i
> > > > > would want to rate limit based on event name and client id in url.
> > > > > URL ex : /api/v1//
> > > > >
> > > > > Have attached a file for my haproxy cfg.  But it does not seems to be
> > > > rate
> > > > > limiting the incoming requests.
> > > >
> > > > > backend st_src_monte
> > > > > stick-table type string size 1m expire 10s store
> > http_req_rate(10s)
> > > > > ...
> > > > >
> > > > >acl monte_as1_exceeds_limit src_http_req_rate(st_src_as1_monte)
> > gt 990
> > > > >acl monte_in_limit src_http_req_rate(st_src_as1_monte) lt 1000
> > > > >http-request track-sc0 src table st_src_as1_monte
> > > >
> > > > There's no st_src_as1_monte table in your example config, there's
> > > > st_src_monte table.
> > > >
> > > > >http-request deny deny_status 429 if { path_beg
> > > > /api/v1/monitoring-event/A01 AND monte_as1_exceeds_limit }
> > > >
> > > > You're tracking connections with src, but the stick table is type
> > string,
> > > > have you checked from admin socket that the stick table has entries,
> > > > something like:
> > > > echo 'show table st_src_monte' | nc -U /var/lib/haproxy/stats
> > > > (instead of nc -U, socat stdio /var/lib/haproxy/stats should also work).
> > > >
> > > > If you want to track src ip, then stick-table type ip or ipv6 is
> > > > probably better.
> > > >
> > > > >> I would want to configure 1000 tps for url
> > > > /api/v1/client1/transfer_data or
> > > > >> 500 tps for /api/v1/client2/user_data and so on
> > > >
> > > > Do you mean that only 1000 tps goes to
> > > > /api/v1/client1/transfer_data (no matter what the source ip addresses
> > > > are) or each source ip can send 1000 tps to
> > /api/v1/client1/transfer_data ?
> >
> > --
> > Jarno Huuskonen
> >

-- 
Jarno Huuskonen



Re: Tune HAProxy in front of a large k8s cluster

2019-02-20 Thread Jarno Huuskonen
Hi,

On Wed, Feb 20, Baptiste wrote:
> I would use a variable instead of a header:
>   http-request set-var(req.myvar) req.hdr(host),concat(,path)

Nitpicking here: AFAIK this won't work as is, because concat expects a variable
(https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.1-concat)

(so something like:
http-request set-var(req.mypath) path
http-request set-var(req.myvar) req.hdr(host),concat(,req.mypath)
(but I guess there are other ways to do this)).

-Jarno

-- 
Jarno Huuskonen



Re: Require info on ACL for rate limiting on per URL basis.

2019-02-19 Thread Jarno Huuskonen
Hi,

On Mon, Feb 11, Badari Prasad wrote:
>I want to rate limit based on url
> [/api/v1/monitoring-event/A01, /api/v1/client1/transfer_data,
> /api/v1/client2/transfer_data  ]  no matter what the source ip address is.

Something like this might help you. Unfortunately at the moment
I don't have time to create a better example.

acl api_a1 path_beg /a1
acl api_b1 path_beg /b1
acl rate_5 sc0_http_req_rate(test_be) gt 5
acl rate_15 sc0_http_req_rate(test_be) gt 15

# You might want to add acl so you'll only track paths you're
# interested in.
http-request track-sc0 path table test_be
# if you want to track only /a1 /b1 part of path
# you can use for example field converter:
#http-request track-sc0 path,field(1,/,2) table test_be
#http-request set-header X-Rate %[sc0_http_req_rate(test_be)]

http-request deny deny_status 429 if api_a1 rate_5
http-request deny deny_status 403 if api_b1 rate_15

# adjust len and size etc. to your needs
backend test_be
stick-table type string len 40 size 20 expire 180s store http_req_rate(60s)

-Jarno

> On Mon, Feb 11, 2019 at 7:34 PM Jarno Huuskonen 
> wrote:
> 
> > Hi,
> >
> > On Mon, Feb 11, Badari Prasad wrote:
> > > Thank you for the response. I came up with my own haproxy cfg, where
> > i
> > > would want to rate limit based on event name and client id in url.
> > > URL ex : /api/v1//
> > >
> > > Have attached a file for my haproxy cfg.  But it does not seems to be
> > rate
> > > limiting the incoming requests.
> >
> > > backend st_src_monte
> > > stick-table type string size 1m expire 10s store http_req_rate(10s)
> > > ...
> > >
> > >acl monte_as1_exceeds_limit src_http_req_rate(st_src_as1_monte) gt 990
> > >acl monte_in_limit src_http_req_rate(st_src_as1_monte) lt 1000
> > >http-request track-sc0 src table st_src_as1_monte
> >
> > There's no st_src_as1_monte table in your example config, there's
> > st_src_monte table.
> >
> > >http-request deny deny_status 429 if { path_beg
> > /api/v1/monitoring-event/A01 AND monte_as1_exceeds_limit }
> >
> > You're tracking connections with src, but the stick table is type string,
> > have you checked from admin socket that the stick table has entries,
> > something like:
> > echo 'show table st_src_monte' | nc -U /var/lib/haproxy/stats
> > (instead of nc -U, socat stdio /var/lib/haproxy/stats should also work).
> >
> > If you want to track src ip, then stick-table type ip or ipv6 is
> > probably better.
> >
> > >> I would want to configure 1000 tps for url
> > /api/v1/client1/transfer_data or
> > >> 500 tps for /api/v1/client2/user_data and so on
> >
> > Do you mean that only 1000 tps goes to
> > /api/v1/client1/transfer_data (no matter what the source ip addresses
> > are) or each source ip can send 1000 tps to /api/v1/client1/transfer_data ?

-- 
Jarno Huuskonen



Re: Require info on ACL for rate limiting on per URL basis.

2019-02-11 Thread Jarno Huuskonen
Hi,

On Mon, Feb 11, Badari Prasad wrote:
> Thank you for the response. I came up with my own haproxy cfg, where i
> would want to rate limit based on event name and client id in url.
> URL ex : /api/v1//
> 
> Have attached a file for my haproxy cfg.  But it does not seem to be rate
> limiting the incoming requests.

> backend st_src_monte
> stick-table type string size 1m expire 10s store http_req_rate(10s)
> ...
> 
>acl monte_as1_exceeds_limit src_http_req_rate(st_src_as1_monte) gt 990
>acl monte_in_limit src_http_req_rate(st_src_as1_monte) lt 1000
>http-request track-sc0 src table st_src_as1_monte

There's no st_src_as1_monte table in your example config, there's
st_src_monte table.

>http-request deny deny_status 429 if { path_beg 
> /api/v1/monitoring-event/A01 AND monte_as1_exceeds_limit }

You're tracking connections with src, but the stick table is type string,
have you checked from admin socket that the stick table has entries,
something like:
echo 'show table st_src_monte' | nc -U /var/lib/haproxy/stats
(instead of nc -U, socat stdio /var/lib/haproxy/stats should also work).

If you want to track src ip, then stick-table type ip or ipv6 is
probably better.
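
For tracking source addresses, the table and rules could look like this
(a sketch; sizes and limits copied from the quoted config, with an ip-typed
table, and the two conditions written as separate anonymous ACL groups, which
haproxy ANDs implicitly; there is no "AND" keyword inside a condition):

```
backend st_src_monte
    stick-table type ip size 1m expire 10s store http_req_rate(10s)

frontend scef
    http-request track-sc0 src table st_src_monte
    # two brace groups on one condition line are ANDed implicitly
    http-request deny deny_status 429 if { path_beg /api/v1/monitoring-event/A01 } { sc0_http_req_rate(st_src_monte) gt 990 }
```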

>> I would want to configure 1000 tps for url /api/v1/client1/transfer_data or
>> 500 tps for /api/v1/client2/user_data and so on

Do you mean that only 1000 tps goes to
/api/v1/client1/transfer_data (no matter what the source ip addresses
are) or each source ip can send 1000 tps to /api/v1/client1/transfer_data ?

-Jarno

-- 
Jarno Huuskonen



Re: HAProxy returns a 502 error when ssl offload and response has a large header

2019-02-07 Thread Jarno Huuskonen
Hi,

On Thu, Feb 07, Willy Tarreau wrote:
> On Thu, Feb 07, 2019 at 04:50:12PM +0200, Jarno Huuskonen wrote:
> > Hi,
> > 
> > On Thu, Feb 07, Steve GIRAUD wrote:
> > > Thanks Jarno for the investigation.
> > 
> > No problem.
> > 
> > > The large header is only on response and there is only one large header 
> > > (18k).
> > > 
> > > haproxy + ssl + http2+ tune.bufsize:32768  --> request fails
> > 
> > Did you check with curl or chrome if you get the same framing error
> > that I got (Error in the HTTP2 framing layer / ERR_SPDY_FRAME_SIZE_ERROR)?
> > 
> > > haproxy + ssl + http1.1 + tune.bufsize:32768  --> request ok
> > > 
> > > If I request my backend directly in h2 + ssl but without haproxy, the 
> > > request is ok.
> > 
> > I'm CC:ing Willy, in case this is something that a config option can fix
> > or possibly a incompatability/bug with http2 implementation.
> 
> I might have an idea. The default H2 max-frame-size is 16kB (by the
> spec). It is possible that your server ignores the setting and tries
> to push a frame size that is larger than the agreed limit, which
> becomes a protocol violation. Or it is possible as well that the
> server doesn't know how to send such a large header with this frame
> size and simply aborts the response.

At least on my test case haproxy listens http2 and uses http/1.1
to backend server
(example config and example backend server (in go) are in earlier
mail: https://www.mail-archive.com/haproxy@formilux.org/msg32727.html
(just increase the header size (l in the go server) > 16309 and the
http2 connection between client <-> haproxy fails with frame error).

So the test setup is something like:
client(curl/chrome) -> http2/haproxy -> http/1.1/go server port 8081

I'll try with h2c and see if I can put it between client and haproxy.
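As an aside, the 16kB limit discussed above is RFC 7540's default
SETTINGS_MAX_FRAME_SIZE (2^14 bytes); a peer that wants to receive larger
frames must advertise it in a SETTINGS frame. A minimal illustrative sketch
of that frame's wire format (not haproxy code):

```python
import struct

# RFC 7540 6.5: a SETTINGS frame carries (16-bit id, 32-bit value)
# pairs; SETTINGS_MAX_FRAME_SIZE is id 0x5 and defaults to 2^14.
SETTINGS_MAX_FRAME_SIZE = 0x5
DEFAULT_MAX_FRAME_SIZE = 1 << 14  # 16384 bytes

def settings_frame(max_frame_size: int) -> bytes:
    """Serialize a SETTINGS frame advertising one setting."""
    payload = struct.pack("!HI", SETTINGS_MAX_FRAME_SIZE, max_frame_size)
    # 9-byte frame header: 24-bit length, type (0x4 = SETTINGS),
    # flags, and a 31-bit stream id (0 for connection-level frames).
    header = struct.pack("!I", len(payload))[1:] + bytes([0x4, 0x0]) + struct.pack("!I", 0)
    return header + payload

frame = settings_frame(32768)
```

A frame larger than the advertised (or default) size is a connection
error, which is consistent with the framing-layer errors seen here when
tune.bufsize exceeds 16kB.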

-Jarno

> You could install h2c between haproxy and your server, in wiretap mode,
> it's very convenient to see what is exchanged :
> 
>h2c_linux_amd64 wiretap 127.0.0.1: 127.0.0.1:
> 
> Then you configure haproxy to communicate to 127.0.0.1: to join the
> server while your server in fact listens on :.
> 
> Depending on what you see, we may have the possibility to work around
> it by advertising a larger max-frame-size in the settings frame when
> the buffers are larger.
> 
> Regards,
> Willy
> 

-- 
Jarno Huuskonen



Re: HAProxy returns a 502 error when ssl offload and response has a large header

2019-02-07 Thread Jarno Huuskonen
Hi,

On Thu, Feb 07, Steve GIRAUD wrote:
> Thanks Jarno for the investigation.

No problem.

> The large header is only on response and there is only one large header (18k).
> 
> haproxy + ssl + http2+ tune.bufsize:32768  --> request fails

Did you check with curl or chrome if you get the same framing error
that I got (Error in the HTTP2 framing layer / ERR_SPDY_FRAME_SIZE_ERROR))?

> haproxy + ssl + http1.1 + tune.bufsize:32768  --> request ok
> 
> If I request my backend directly in h2 + ssl but without haproxy, the request 
> is ok.

I'm CC:ing Willy, in case this is something that a config option can fix
or possibly an incompatibility/bug with the http2 implementation.

-Jarno

> Hi,
> 
> On Wed, Feb 06, Steve GIRAUD wrote:
> > Effectively, the header size is 17 556 bytes.
> 
> Is the large header(s) only on response (and not on request) ?
> (Is it one large header 17k header ?)
> 
> > If I increase the bufsize to 40 000 bytes and the maxrewrite to 20 000 the 
> > request failed.
> 
> For me (tested with current 2.0dev) increasing global tune.bufsize to
> 32768 allowed larger response header. With my limited testing http/https on
> frontend didn't make difference.
> (Does my test config work for you (you'll need to comment option htx
> with haproxy-1.8) ?)
> 
> But if I use curl --http2 to haproxy+ssl frontend and my silly
> httpsrv.go sends x-dummy larger than 16309 then curl --http2 fails
> with curl: (16) Error in the HTTP2 framing layer
> (chrome reports ERR_SPDY_FRAME_SIZE_ERROR).
> 
> Is haproxy trying / sending a larger http2 frame than clients are
> willing to receive (SETTINGS_MAX_FRAME_SIZE?) ?
> 
> (Same request with --http1.1 to haproxy+ssl frontend works).
> 
> I'm attaching my test config and the httpsrv.go that I used as a
> backend server.
> Maybe http2 gurus can take a look and see if the frame size error is
> expected or not ?
> 
> -Jarno
> 
> > De : Jarno Huuskonen 
> > Envoyé : mercredi 6 février 2019 09:36
> > À : Steve GIRAUD
> > Cc : haproxy@formilux.org
> > Objet : Re: HAProxy returns a 502 error when ssl offload and response has a 
> > large header
> >
> > Hi,
> >
> > On Wed, Feb 06, Steve GIRAUD wrote:
> > > Hello everybody,
> > > Has anyone ever found that HAProxy returns a 502 error when ssl offload 
> > > is enabled and the http response contains a very long header.
> > > If I turn off SSL offload , all is OK with the same header.
> >
> > What's the size of the (very long) headers (how many bytes) ?
> > Is it by any chance larger than the bufsize or maxrewrite ?
> >
> > > Default settings :
> > >  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

-- 
Jarno Huuskonen



Re: HAProxy returns a 502 error when ssl offload and response has a large header

2019-02-06 Thread Jarno Huuskonen
Hi,

On Wed, Feb 06, Steve GIRAUD wrote:
> Effectively, the header size is 17 556 bytes.

Is the large header(s) only on response (and not on request) ?
(Is it one large header 17k header ?)

> If I increase the bufsize to 40 000 bytes and the maxrewrite to 20 000 the 
> request failed.

For me (tested with current 2.0dev) increasing global tune.bufsize to
32768 allowed larger response header. With my limited testing http/https on
frontend didn't make difference.
(Does my test config work for you (you'll need to comment option htx
with haproxy-1.8) ?)

But if I use curl --http2 to haproxy+ssl frontend and my silly
httpsrv.go sends x-dummy larger than 16309 then curl --http2 fails
with curl: (16) Error in the HTTP2 framing layer
(chrome reports ERR_SPDY_FRAME_SIZE_ERROR).

Is haproxy trying / sending a larger http2 frame than clients are
willing to receive (SETTINGS_MAX_FRAME_SIZE?) ?

(Same request with --http1.1 to haproxy+ssl frontend works).

I'm attaching my test config and the httpsrv.go that I used as a
backend server.
Maybe http2 gurus can take a look and see if the frame size error is
expected or not ?

-Jarno

> De : Jarno Huuskonen 
> Envoyé : mercredi 6 février 2019 09:36
> À : Steve GIRAUD
> Cc : haproxy@formilux.org
> Objet : Re: HAProxy returns a 502 error when ssl offload and response has a 
> large header
> 
> Hi,
> 
> On Wed, Feb 06, Steve GIRAUD wrote:
> > Hello everybody,
> > Has anyone ever found that HAProxy returns a 502 error when ssl offload is 
> > enabled and the http response contains a very long header.
> > If I turn off SSL offload , all is OK with the same header.
> 
> What's the size of the (very long) headers (how many bytes) ?
> Is it by any chance larger than the bufsize or maxrewrite ?
> 
> > Default settings :
> >  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> -Jarno
> 
> --
> Jarno Huuskonen

-- 
Jarno Huuskonen
global
tune.bufsize 32768

defaults
mode http
option http-use-htx
timeout connect 1s
timeout client  2s
timeout server  4s
timeout tarpit  3s

listen HTTPS_in
mode http
bind 127.0.0.1:8443 ssl crt common.pem alpn h2,http/1.1
bind 127.0.0.1:8080

server go-http 127.0.0.1:8081

package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

const letterBytes = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

func handler(w http.ResponseWriter, r *http.Request) {
	l := 16309 // >= 16310 breaks haproxy http2 FE (curl: (16) Error in the HTTP2 framing layer); chrome also reports ERR_SPDY_FRAME_SIZE_ERROR
	b := make([]byte, l)
	for i := range b {
		b[i] = letterBytes[rand.Int63()%int64(len(letterBytes))]
	}
	s := string(b[:l])
	w.Header().Set("X-Dummy", s)
	fmt.Fprintf(w, "Howdy neighbour!\n")
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8081", nil)
}


Re: HAProxy returns a 502 error when ssl offload and response has a large header

2019-02-06 Thread Jarno Huuskonen
Hi,

On Wed, Feb 06, Steve GIRAUD wrote:
> Hello everybody,
> Has anyone ever found that HAProxy returns a 502 error when ssl offload is 
> enabled and the http response contains a very long header.
> If I turn off SSL offload , all is OK with the same header.

What's the size of the (very long) headers (how many bytes) ?
Is it by any chance larger than the bufsize or maxrewrite ?

> Default settings :
>  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

-Jarno

-- 
Jarno Huuskonen



Chained http -> http frontends: http/2 error 400 vs http/1.1 error 502

2019-02-02 Thread Jarno Huuskonen
Hi,

(This is kind of related to this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg32255.html).

I'm seeing different behaviour between http1.1 / http2 when chaining
two frontends with mode http and the last frontend closes
connection with http-request reject (or tcp-request content reject).

When client uses http/1.1 then client receives 502 error (I think
this is expected because the "server" for the first frontend just closes
connection).

But when client uses http/2 then client will receive error 400.
(Tested with latest 2.0dev (2.0-dev0-32211a-258)).
I'm not sure if this is a bug, but at least seems to be different behaviour
between http/1.1 and http/2. (option http-use-htx doesn't seem to make
difference).

The attached varnishtest should explain what I mean. I put some debug
printf output to proto_htx and with http/2 the status 400 comes
from /* 3: client abort with an abortonclose */
(proto_htx.c line 1535, s->req.flags 0x9c42020).

With http/1.1 status 502 comes from /* 4: close from server, capture
the response if the server has started to respond */
(proto_htx.c line 1559, s->req.flags 0x9842000).
(If I interpret s->req.flags correctly then http/2 has
CF_READ_DONTWAIT and CF_SHUTR set and http/1.1 doesn't).

-Jarno

varnishtest "h2 chaining 400 error"
#REQUIRE_VERSION=1.9
feature ignore_unknown_macro

haproxy h1 -conf {
defaults
mode http
${no-htx} option http-use-htx
timeout connect 1s
timeout client  1s
timeout server  1s

listen HTTP_in
bind "fd@${HTTP_in}"
server tmpserver abns@proc1 send-proxy-v2

listen HTTP2_in
bind "fd@${HTTP2_in}" proto h2
server tmpserver abns@proc1 send-proxy-v2

frontend fe
bind abns@proc1 accept-proxy
http-request reject if TRUE
default_backend be

backend be
server s1 ${s1_addr}:${s1_port}

} -start

client c1h1 -connect ${h1_HTTP_in_sock} {
txreq
rxresp
expect resp.status == 502
} -run

client c1h2 -connect ${h1_HTTP2_in_sock} {
txpri
stream 0 {
txsettings
rxsettings
txsettings -ack
rxsettings
expect settings.ack == true
} -run
stream 1 {
# warning: -req, -scheme, -url MUST be placed first otherwise
# the H2 protocol is invalid since they are pseudo-headers
txreq \
  -req GET \
  -scheme "https" \
  -url /path/to/file.ext

rxhdrs
expect resp.status == 502
    #rxdata -all
} -run
} -run

-- 
Jarno Huuskonen



Re: Rate-limit relating to the healthy servers count

2019-01-23 Thread Jarno Huuskonen
Hi,

On Wed, Jan 23, Thomas Hilaire wrote:
> Hi,
> 
> I want to implement a rate-limit system using the sticky table of
> HAProxy. Consider that I have 100 servers, and a limit of 10
> requests per server, the ACL would be:
> 
>     http-request track-sc0 int(1) table GlobalRequestsTracker
>     http-request deny deny_status 429 if {
> sc0_http_req_rate(GlobalRequestsTracker),div(100) gt 10 }
> 
> Now if I want to make this dynamic depending on the healthy servers
> count, I need to replace the hardcoded `100` per the `nbsrv`
> converter like this:
> 
>     http-request track-sc0 int(1) table GlobalRequestsTracker
>     http-request deny deny_status 429 if {
> sc0_http_req_rate(GlobalRequestsTracker),div(nbsrv(MyBackend)) gt 10
> }
> 
> But I'm getting the error:
> 
>     error detected while parsing an 'http-request deny' condition :
> invalid args in converter 'div' : expects an integer or a variable
> name in ACL expression
> 'sc0_http_req_rate(GlobalRequestsTracker),div(nbsrv(MyBackend))'.
> 
> Is there a way to use `nbsrv` as a variable inside the `div` operator?

Untested: does something like this work:

http-request set-var(req.dummy) nbsrv(MyBackend)
http-request deny deny_status 429 if { sc0_http_req_rate(GlobalRequestsTracker),div(req.dummy) gt 10 }

-Jarno

-- 
Jarno Huuskonen



Re: [RFC PATCH] couple of reg-tests

2019-01-09 Thread Jarno Huuskonen
Hello Frederic,

On Mon, Jan 07, Frederic Lecaille wrote:
> 
> reg-tests/http-rules/h3.vtc fails on my side due to a typo in
> the regex with this error:
> 
>  h10.0 CLI regexp error: 'missing opening brace after \o'
> (@48) (^0x[a-f0-9]+ example\.org
> https://www\.example.\org\n0x[a-f0-9]+ subdomain\.example\.org
> https://www\.subdomain\.example\.org\n$)
> 
> .\org should be replaced by \.org
> 
> Could you check on your side why you did not notice this issue please?

For some reason the buggy regex works for me, maybe it depends on
pcre version.
My varnishtest links to centos7 default pcre (pcre-8.32-17.el7.x86_64).

> After checking this issue we will merge your patches. Great work!

I'm attaching the patches again, with fixed regex in h3.vtc.
The patches are for recent 2.0dev.

Should reg-tests/README default to [-Dno-htx='#'] instead of -Dno-htx= ?

-Jarno

-- 
Jarno Huuskonen
>From 1a5a90641ec072d62babbb8ed65c6831998bbdee Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Wed, 9 Jan 2019 13:41:19 +0200
Subject: [PATCH 1/4] REGTESTS: test case for map_regm commit 271022150d

Minimal test case for map_regm commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee.
Config and test is adapted from: Daniel Schneller's example
(https://www.mail-archive.com/haproxy@formilux.org/msg30523.html).
---
 reg-tests/http-rules/b0.map |  1 +
 reg-tests/http-rules/b0.vtc | 77 +
 2 files changed, 78 insertions(+)
 create mode 100644 reg-tests/http-rules/b0.map
 create mode 100644 reg-tests/http-rules/b0.vtc

diff --git a/reg-tests/http-rules/b0.map b/reg-tests/http-rules/b0.map
new file mode 100644
index 000..08ffcfb
--- /dev/null
+++ b/reg-tests/http-rules/b0.map
@@ -0,0 +1 @@
+^(.*)\.(.*)$ \1_AND_\2
diff --git a/reg-tests/http-rules/b0.vtc b/reg-tests/http-rules/b0.vtc
new file mode 100644
index 000..897c3b4
--- /dev/null
+++ b/reg-tests/http-rules/b0.vtc
@@ -0,0 +1,77 @@
+#commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+#BUG/MINOR: map: fix map_regm with backref
+#
+#Due to a cascade of get_trash_chunk calls the sample is
+#corrupted when we want to read it.
+#
+#The fix consist to use a temporary chunk to copy the sample
+#value and use it.
+
+varnishtest "map_regm get_trash_chunk test"
+feature ignore_unknown_macro
+
+#REQUIRE_VERSION=1.6
+syslog S1 -level notice {
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv info
+# not expecting ${h1_pid} with master-worker
+expect ~ "[^:\\[ ]\\[[[:digit:]]+\\]: .* fe1 be1/s1 
[[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+ 200 
[[:digit:]]+ - -  .* \"GET / HTTP/(1|2)(\\.1)?\""
+} -start
+
+server s1 {
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == example_AND_org
+   expect req.http.x-mapped-from-var == example_AND_org
+   txresp
+
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == www.example_AND_org
+   expect req.http.x-mapped-from-var == www.example_AND_org
+   txresp
+} -start
+
+haproxy h1 -conf {
+  global
+log ${S1_addr}:${S1_port} local0 debug err
+
+  defaults
+mode http
+${no-htx} option http-use-htx
+log global
+option httplog
+timeout connect 15ms
+timeout client  20ms
+timeout server  20ms
+
+  frontend fe1
+bind "fd@${fe1}"
+# Remove port from Host header
+http-request replace-value Host '(.*):.*' '\1'
+# Store host header in variable
+http-request set-var(txn.host) req.hdr(Host)
+# This works correctly
+http-request set-header X-Mapped-From-Header 
%[req.hdr(Host),map_regm(${testdir}/b0.map,"unknown")]
+# This breaks before commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+http-request set-header X-Mapped-From-Var 
%[var(txn.host),map_regm(${testdir}/b0.map,"unknown")]
+
+default_backend be1
+
+backend be1
+server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe1_sock} {
+txreq -hdr "Host: example.org:8443"
+rxresp
+expect resp.status == 200
+
+txreq -hdr "Host: www.example.org"
+rxresp
+expect resp.status == 200
+} -run
+
-- 
1.8.3.1

>From 27b305721d62d5809f8ec400f0c236dd6a51e149 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Wed, 9 Jan 2019 13:44:44 +0200
Subject: [PATCH 2/4] REGTESTS: Basic tests for concat,strcmp,word,field,ipmask
 converters

---
 reg-tests/http-rules/h2.map |   1 +
 reg-tests/http-rules/h2.vtc | 220 
 2 files changed, 221 insertions(+)
 create mode 100644 reg-tests/http-rules/h2.map
 create mode 100644 reg-tests/http-rules/h2.vtc

[PATCH] DOC: http-request cache-use / http-response cache-store expects cache name

2019-01-04 Thread Jarno Huuskonen
Hi,

Small patch for doc/configuration.txt that adds missing cache name
option to http-request cache-use / http-response cache-store.

Also adds optional if/unless condition doc to
10.2.2. Proxy section: http-request cache-use / http-response cache-store

-Jarno

-- 
Jarno Huuskonen
>From b130a0676a621b6008333c97b56485d6ca23064b Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Fri, 4 Jan 2019 14:05:02 +0200
Subject: [PATCH] DOC: http-request cache-use / http-response cache-store
 expects cache name

Adds missing cache name option to http-request cache-use and
http-response cache-store documentation.

Also adds optional if/unless condition to
10.2.2. Proxy section: http-request cache-use / http-response cache-store
---
 doc/configuration.txt | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6ca63d6..855c0b1 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4109,7 +4109,7 @@ http-request auth [realm <realm>] [ { if | unless } <condition> ]
 acl auth_ok http_auth_group(L1) G1
 http-request auth unless auth_ok
 
-http-request cache-use [ { if | unless } <condition> ]
+http-request cache-use <name> [ { if | unless } <condition> ]
 
   See section 10.2 about cache setup.
 
@@ -4658,7 +4658,7 @@ http-response allow [ { if | unless } <condition> ]
   This stops the evaluation of the rules and lets the response pass the check.
   No further "http-response" rules are evaluated for the current section.
 
-http-response cache-store [ { if | unless } <condition> ]
+http-response cache-store <name> [ { if | unless } <condition> ]
 
   See section 10.2 about cache setup.
 
@@ -17868,13 +17868,13 @@ max-age <seconds>
 10.2.2. Proxy section
 -
 
-http-request cache-use <name>
+http-request cache-use <name> [ { if | unless } <condition> ]
  Try to deliver a cached object from the cache <name>. This directive is also
   mandatory to store the cache as it calculates the cache hash. If you want to
   use a condition for both storage and delivering that's a good idea to put it
   after this one.
 
-http-response cache-store <name>
+http-response cache-store <name> [ { if | unless } <condition> ]
   Store an http-response within the cache. The storage of the response headers
   is done at this step, which means you can use others http-response actions
   to modify headers before or after the storage of the response. This action
-- 
1.8.3.1



HTTPS(nbproc > 1) and HTTP/2 help

2019-01-03 Thread Jarno Huuskonen
Hi,

I'm trying to convert "legacy" haproxy (haproxy 1.9.0) config that has
mode tcp https listen (bind-process 2 ...) feeding bind-process 1
frontend via abns socket. Something like this:

listen HTTPS_in
# missing bind-process etc.
mode tcp
tcp-request inspect-delay 3s
bind 127.0.0.1:8443 ssl crt common.pem alpn h2,http/1.1

#use-server h2 if { ssl_fc_alpn h2 }
#use-server h1 unless { ssl_fc_alpn h2 }
server h1 abns@proc1 send-proxy-v2
#server h2 abns@proc1h2 send-proxy-v2

frontend fe
mode http
bind abns@proc1 accept-proxy
bind abns@proc1h2 accept-proxy proto h2
tcp-request inspect-delay 5s
tcp-request content track-sc1 src table table1

# sc1_http_req_cnt(table1) gt 4 || 1 are just examples
tcp-request content reject if { sc1_http_req_cnt(table1) gt 4 }
        http-request deny deny_status 429 if { sc1_http_req_cnt(table1) gt 1 }

default_backend be

backend be
mode http
http-request deny deny_status 200 # or some real servers

backend table1
        stick-table type ipv6 size 100 expire 120s store http_req_cnt,http_req_rate(30s)

This doesn't work with alpn h2,http/1.1 (HTTP/2 doesn't work, as expected).

Changing HTTPS_in to "mode http" kind of works, client gets error 400 (HTTP/2)
or 502 (HTTP/1.1) when (tcp-request content reject) reject's the connection.

mode tcp and use-server with ssl_fc_alpn h2 also seems to work, but can the
client choose not to use HTTP/2 with alpn h2 (at least the ssl_fc_alpn
documentation suggests this) ? 

So it seems that some/best alternatives are:
- use "mode http" and use http-request deny instead of tcp-request content 
reject (sends response instead of silently closing connection -> no error 
400/502)
- use nbproc 1 / nbthread > 1 and move HTTPS_in functionality to fe frontend

Are there any more alternatives/tricks on using more than 1 core for
SSL and enabling HTTP/2 ? Are there any gotchas etc. to look out for
when converting nbproc to nbthread config ?

Thanks,
-Jarno
 
-- 
Jarno Huuskonen



[PATCH] DOC: Fix typo in req.ssl_alpn example (commit 4afdd138424ab...)

2019-01-02 Thread Jarno Huuskonen
Also link to ssl_fc_alpn.
---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index dc1f222..03a567d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15472,13 +15472,13 @@ req.ssl_alpn : string
   request buffer and not to the contents deciphered via an SSL data layer, so
   this will not work with "bind" lines having the "ssl" option. This is useful
   in ACL to make a routing decision based upon the ALPN preferences of a TLS
-  client, like in the example below.
+  client, like in the example below. See also "ssl_fc_alpn".
 
   Examples :
  # Wait for a client hello for at most 5 seconds
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
- use_backend bk_acme if { req_ssl.alpn acme-tls/1 }
+ use_backend bk_acme if { req.ssl_alpn acme-tls/1 }
  default_backend bk_default
 
 req.ssl_ec_ext : boolean
-- 
1.8.3.1




[RFC PATCH] couple of reg-tests

2019-01-02 Thread Jarno Huuskonen
Hello,

I started playing with reg-tests and came up with a couple of reg-tests.
Is there a better subdirectory for these than http-rules ? Maybe
map/b0.vtc and converter/h* ?

I'm attaching the tests for comments.

-Jarno

-- 
Jarno Huuskonen
>From e75f2ef8b461caa164e81e2d39630e3b2e8791f4 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Thu, 27 Dec 2018 11:58:13 +0200
Subject: [PATCH 1/4] REGTESTS: test case for map_regm commit 271022150d

Minimal test case for map_regm commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee.
Config and test is adapted from: Daniel Schneller's example
(https://www.mail-archive.com/haproxy@formilux.org/msg30523.html).
---
 reg-tests/http-rules/b0.map |  1 +
 reg-tests/http-rules/b0.vtc | 77 +
 2 files changed, 78 insertions(+)
 create mode 100644 reg-tests/http-rules/b0.map
 create mode 100644 reg-tests/http-rules/b0.vtc

diff --git a/reg-tests/http-rules/b0.map b/reg-tests/http-rules/b0.map
new file mode 100644
index 000..08ffcfb
--- /dev/null
+++ b/reg-tests/http-rules/b0.map
@@ -0,0 +1 @@
+^(.*)\.(.*)$ \1_AND_\2
diff --git a/reg-tests/http-rules/b0.vtc b/reg-tests/http-rules/b0.vtc
new file mode 100644
index 000..bdc3b34
--- /dev/null
+++ b/reg-tests/http-rules/b0.vtc
@@ -0,0 +1,77 @@
+#commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+#BUG/MINOR: map: fix map_regm with backref
+#
+#Due to a cascade of get_trash_chunk calls the sample is
+#corrupted when we want to read it.
+#
+#The fix consist to use a temporary chunk to copy the sample
+#value and use it.
+
+varnishtest "map_regm get_trash_chunk test"
+feature ignore_unknown_macro
+
+#REQUIRE_VERSION=1.6
+syslog S1 -level notice {
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv info
+# not expecting ${h1_pid} with master-worker
+expect ~ "[^:\\[ ]\\[[[:digit:]]+\\]: .* fe1 be1/s1 
[[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+/[[:digit:]]+ 200 
[[:digit:]]+ - -  .* \"GET / HTTP/(1|2)(\\.1)?\""
+} -start
+
+server s1 {
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == example_AND_org
+   expect req.http.x-mapped-from-var == example_AND_org
+   txresp
+
+   rxreq
+   expect req.method == "GET"
+   expect req.http.x-mapped-from-header == www.example_AND_org
+   expect req.http.x-mapped-from-var == www.example_AND_org
+   txresp
+} -start
+
+haproxy h1 -conf {
+  global
+log ${S1_addr}:${S1_port} local0 debug err
+
+  defaults
+mode http
+${no-htx} option http-use-htx
+log global
+option httplog
+timeout connect 15ms
+timeout client  20ms
+timeout server  20ms
+
+  frontend fe1
+bind "fd@${fe1}"
+# Remove port from Host header
+http-request replace-value Host '(.*):.*' '\1'
+# Store host header in variable
+http-request set-var(txn.host) req.hdr(Host)
+# This works correctly
+http-request set-header X-Mapped-From-Header 
%[req.hdr(Host),map_regm(${testdir}/b0.map,"unknown")]
+# This breaks before commit 271022150d7961b9aa39dbfd88e0c6a4bc48c3ee
+http-request set-header X-Mapped-From-Var 
%[var(txn.host),map_regm(${testdir}/b0.map,"unknown")]
+
+default_backend be1
+
+backend be1
+server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe1_sock} {
+txreq -hdr "Host: example.org:8443"
+rxresp
+expect resp.status == 200
+
+txreq -hdr "Host: www.example.org"
+rxresp
+expect resp.status == 200
+} -run
+
-- 
1.8.3.1

>From cd8c246769267bfcf69acef29104cef86ace4032 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Tue, 1 Jan 2019 13:39:52 +0200
Subject: [PATCH 2/4] REGTESTS: Basic tests for using maps to redirect requests
 / select backend

---
 reg-tests/http-rules/h3-be.map |   4 +
 reg-tests/http-rules/h3.map|   3 +
 reg-tests/http-rules/h3.vtc| 174 +
 3 files changed, 181 insertions(+)
 create mode 100644 reg-tests/http-rules/h3-be.map
 create mode 100644 reg-tests/http-rules/h3.map
 create mode 100644 reg-tests/http-rules/h3.vtc

diff --git a/reg-tests/http-rules/h3-be.map 
b/reg-tests/http-rules/h3-be.map
new file mode 100644
index 000..c8822fc
--- /dev/null
+++ b/reg-tests/http-rules/h3-be.map
@@ -0,0 +1,4 @@
+# These entries are used for use_backend rules
+test1.example.com  test1_be
+test1.example.invalid  test1_be
+test2.example.com  test2_be
diff --git a/reg-tests/http-rules/h3.map b/reg-tests/http-rules/h3.map
new file mode 100644
index 000..a0cc02d
--- /dev/null
+++ b/reg-tests/http-rules/h3.map
@@ -0,0 +1,3 @@
+# These entries are used for http-request redirect rules
+

Re: Http HealthCheck Issue

2018-12-19 Thread Jarno Huuskonen
Hi,

On Wed, Dec 19, Jonathan Matthews wrote:
> On Wed, 19 Dec 2018 at 19:23, UPPALAPATI, PRAVEEN  wrote:
> >
> > Hmm. Wondering why do we need host header? I was able to do curl without 
> > the header. I did not find anything in the doc.
> 
> "curl" automatically adds a Host header unless you are directly
> hitting an IP address.

Even curl [-v] http://ip.add.re.ss adds host header (Host:
ip.add.re.ss). (At least the version I'm using (the one that comes with
centos 7.6)).

-Jarno

-- 
Jarno Huuskonen



Re: Http HealthCheck Issue

2018-12-18 Thread Jarno Huuskonen
Hi,

On Tue, Dec 18, UPPALAPATI, PRAVEEN wrote:
> My backend config is:
> 
> backend bk_8093_read
> balance source
> http-response set-header X-Server %s
> option log-health-checks
> option httpchk get 
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nAuthorization:\ Basic\ 

Change get to GET; at least apache, nginx and tomcat expect GET, not get.
Or test with for example netcat that your server1 accepts get.

Something like: nc server1.add.re.ss 8093
get /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
HTTP/1.1
Host: ...
Authorization: Basic ...

> server primary8093r :8093 check verify none
> server backUp08093r ::8093 check backup verify none
> server backUp18093r ::8093 check backup verify none
> 
> Output of log:
> 
> [Dec 18 05:22:51]  Health check for server bk_8093_read/primary8093r failed, 
> reason: Layer7 wrong status, code: 400, info: "No Host", check duration: 
> 543ms, status: 0/2 DOWN.

Like Jonathan said "No Host" is telling you what's wrong.
(HTTP/1.1 requests expect Host: header).

-Jarno

-- 
Jarno Huuskonen



Re: Http HealthCheck Issue

2018-12-17 Thread Jarno Huuskonen
Hi,

On Mon, Dec 17, UPPALAPATI, PRAVEEN wrote:
> I am trying to use Option httpHealth Check is not working and is marking all 
> servers as down:
> 
> 
> [haproxy@zld05596 ~]$ cat //opt/app/haproxy/etc/haproxy.cfg | grep /nexus/v1
> option httpchk get 
> /nexus/v1/repository/rawcentral/com.att.swm.attpublic/healthcheck.txt 
> HTTP/1.1\r\nAuthorization:\ Basic\ 

s/get/GET/

Do you have check enabled on server lines ? Can you show the backend
config (with sensitive information obscured/removed) ?

> [haproxy@zld05596 ~]$ cat //opt/app/haproxy/log/haproxy.log | grep /nexus/v1

Is your logging working (you'll get logs in /opt/app/haproxy/log/haproxy.log) ?

grep 'Health check for' /opt/app/haproxy/log/haproxy.log

-Jarno

-- 
Jarno Huuskonen



Re: SOAP service healthcheck

2018-12-06 Thread Jarno Huuskonen
Hi,

On Thu, Dec 06, Māra Grīnberga wrote:
> I'm new to Haproxy and I've a task for which I can't seem to find a
> solution online. Probably, I'm not looking in the right places.
> I need to check if a SOAP service responds before sending requests to the
> server. I've read about this option:
>option httpchk GET /check
> http-check expect string OK
> I think, it's what I need. But is there a way to pass SOAP envelope to this
> "/check" service?

Do you mean a POST to /check where the POST body is the SOAP envelope ?

> Any suggestions and help would be appreciated!

I think you can (ab)use http version to send body with option httpchk
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#option%20httpchk)

One example for sending xml post:
https://discourse.haproxy.org/t/healthcheck-with-xml-post-in-body/733
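For illustration only (untested; the URI, header values and body are
placeholders, and the Content-Length must match the real envelope size),
the pattern from that thread looks like:

```haproxy
backend soap_be
    option httpchk POST /check HTTP/1.1\r\nHost:\ soap.example.com\r\nContent-Type:\ text/xml\r\nContent-Length:\ 25\r\n\r\n<e>SOAP-envelope-here</e>
    http-check expect string OK
```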

-Jarno

-- 
Jarno Huuskonen



Re: apache proxy pass rules in HAproxy

2018-10-29 Thread Jarno Huuskonen
Hi,

Can you describe how you would like this (haproxy -> apache+shib -> jupyter?)
setup ? (Perhaps with some kind of diagram with desired urls / ips
etc).

From what I understand you'd like to use public ip/url only on haproxy
and everything else on private ip's (accessed only from the haproxy host).

AFAIK something like this might work:
haproxy listens on public ip:443 and sends all /jhub /Shibboleth.sso
traffic to apache(shib)+jupyter backend server on port 8443(w/out ssl):

haproxy:
...
acl host_web3 path_beg /jhub
acl host_web3_saml2 path_beg /Shibboleth.sso
use_backend web3_cluster if host_web3 || host_web3_saml2
...
backend web3_cluster
server  apache_server_privateip:8443 check inter 2000 cookie w1
# If you've more than 1 server then you'll probably need persistence

apache vhost (plain http vhost, no ssl configured)
Listen 8443
<VirtualHost *:8443>
HostnameLookups off
ServerName https://proxy.example.com
UseCanonicalName On
SetEnv HTTPS on

<Location />
... # your jupyter proxypass / shibboleth auth (remote_user)/ wss config
# Also make sure apache passes or sets:
# X-Scheme/X-Forwarded-Proto and X-Real-Ip/X-Forwarded-For
</Location>
</VirtualHost>

Configure shibboleth to use https://proxy.example.com/Shibboleth.sso
urls.

Configure jupyter to trust X- headers: NotebookApp.trust_xheaders
and maybe you need to use NotebookApp.custom_display_url so jupyter
knows its url is https://proxy.example.com/jhub.
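A sketch of the matching jupyter_notebook_config.py fragment (option names
from the Notebook docs; the values are illustrative):

```python
c.NotebookApp.trust_xheaders = True        # trust X-Forwarded-* set by the proxy
c.NotebookApp.base_url = '/jhub'           # the path prefix haproxy routes
c.NotebookApp.custom_display_url = 'https://proxy.example.com/jhub'
```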

-Jarno

-- 
Jarno Huuskonen



Re: HAproxy ssh connection closes fast , after logon

2018-10-20 Thread Jarno Huuskonen
Hi,

On Fri, Oct 19, Imam Toufique wrote:
> I am working on setting up ssh / sftp capability with HAProxy,
> initial setup is done ( thanks to some of the members in the haproxy email
> list for help! ) .  I ran into an issue  -- as soon as I ssh via the proxy
> node, within a minutes or so, ssh connection closes on me.

Is the connection idle ? "in 50.3 seconds" matches your
timeout client/server 50000(ms)
(Your (haproxy)logs should give more information why the connection
was closed).

> here is my config file:
> -

...

>timeout client 50000
>timeout server 50000

...

> backend http_back
>timeout connect 900000
>timeout server 900000

900000(ms) (= 15 minutes) connect timeout for http seems long ...
(AFAIK this is how long haproxy will wait for tcp connection
to a backend http server).

> backend www-ssh-proxy-backend
>mode tcp
>balance roundrobin
>stick-table type ip size 200k expire 30m
>stick on src
>default-server inter 1s
>server web1 10.1.100.156:22 check id 1
>server web2 10.1.100.160:22 check id 2

Try using longer timeout server on www-ssh-proxy-backend.
(and/or longer timeout client on www-ssh-proxy).

(You could also try to play with sshd_config: ClientAliveInterval and
TCPKeepAlive)
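For example (an untested sketch; the two-hour timeouts are only illustrative):

```
frontend www-ssh-proxy
    mode tcp
    timeout client 2h        # idle timeout towards the ssh client
    option clitcpka          # tcp keep-alives towards the client
    default_backend www-ssh-proxy-backend

backend www-ssh-proxy-backend
    mode tcp
    timeout server 2h        # idle timeout towards the backend sshd
    option srvtcpka          # tcp keep-alives towards the server
```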

> Transferred: sent 3312, received 3184 bytes, in 50.3 seconds

-Jarno

-- 
Jarno Huuskonen



Re: need help with sftp and http config on a single config file

2018-10-19 Thread Jarno Huuskonen
Hi,

On Thu, Oct 18, Imam Toufique wrote:
> *[root@crsplabnet2 examples]# haproxy -c -V -f /etc/haproxy/haproxy.cfg*
> *Configuration file is valid*
> 
> *when trying to start HA proxy, i see the following:*
> 
> *[root@crsplabnet2 examples]# haproxy -D -f /etc/haproxy/haproxy.cfg -p
> /var/run/haproxy.pid*
> *[ALERT] 290/234618 (5889) : Starting frontend www-ssh-proxy: cannot bind
> socket [0.0.0.0:22 <http://0.0.0.0:22>]*

Do you have sshd already running on the haproxy server ?
(Use netstat -tunapl or ss, e.g. ss -tlnp '( dport = :ssh or sport = :ssh )',
to see if sshd is already listening on port 22).

If you've sshd running on port 22 then you have to use different port or
ipaddress for sshd / haproxy(www-ssh-proxy)

-Jarno

-- 
Jarno Huuskonen



Re: Bug when passing variable to mapping function

2018-08-01 Thread Jarno Huuskonen
Hi,

On Tue, Jul 17, Emeric Brun wrote:
> > On Fri, 29 Jun 2018 at 07:15, Jarno Huuskonen  
> > wrote:
> >> On Thu, Jun 28, Jarno Huuskonen wrote:
> >>> I think this is the commit that breaks map_regm in this case:
> >>> b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
> >>> acls/maps thread safe).
> >>>
> >>> If I revert this commit from pattern.c:pattern_exec_match
> >>> then the map_regm \1 backref seems to work.
> >>
> >> I think I found what's replacing the \000 as first char:
> >> in (map.c) sample_conv_map:
> >> /* In the regm case, merge the sample with the input. */
> >> if ((long)private == PAT_MATCH_REGM) {
> >> str = get_trash_chunk();
> >> str->len = exp_replace(str->str, str->size, 
> >> smp->data.u.str.str,
> >>pat->data->u.str.str,
> >>(regmatch_t *)smp->ctx.a[0]);
> >>
> >> Before call to get_trash_chunk() smp->data.u.str.str is for example
> >> 'distri.com' and after get_trash_chunk() smp->data.u.str.str
> >> is '\000istri.com'.
> 
> Could you try the patch in attachment? i hope it will fix the issue

Sorry I've been away from keyboard. Just tested the patch w/1.8.12 and
for me the patch fixes the map_regm issue with Daniel's example config
(https://www.mail-archive.com/haproxy@formilux.org/msg30523.html).

Thanks,
-Jarno

-- 
Jarno Huuskonen



Re: Reverse String (or get 2nd level domain sample)?

2018-06-30 Thread Jarno Huuskonen
Hi,

On Fri, Jun 29, Baptiste wrote:
> converters are just simple C functions, (or could be Lua code as well), and
> are quite trivial to write.
> Instead of creating a converter that reverse the order of chars in a
> string, I would rather patch current "word" converter to support negative
> integers.
> IE: -2 would means you extract the second word, starting at the end of the
> string.

No need to patch "word" for negative indexes. The functionality is
already there:
commit 9631a28275b7c04f441f7d1c3706a765586844e7
Author: Marcin Deranek 
Date:   Mon Apr 16 14:30:46 2018 +0200

MEDIUM: sample: Extend functionality for field/word converters

Extend functionality of field/word converters, so it's possible
to extract field(s)/word(s) counting from the beginning/end and/or
extract multiple fields/words (including separators) eg.

str(f1_f2_f3__f5),field(2,_,2)  # f2_f3
str(f1_f2_f3__f5),field(2,_,0)  # f2_f3__f5
str(f1_f2_f3__f5),field(-2,_,3) # f2_f3_
str(f1_f2_f3__f5),field(-3,_,0) # f1_f2_f3

str(w1_w2_w3___w4),word(3,_,2)  # w3___w4
str(w1_w2_w3___w4),word(2,_,0)  # w2_w3___w4
str(w1_w2_w3___w4),word(-2,_,3) # w1_w2_w3
str(w1_w2_w3___w4),word(-3,_,0) # w1_w2
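Applied to Daniel's original question, an untested sketch using the extended
converter (variable and header names are illustrative):

```
frontend fe_web
    # sub.sample.example.com -> example
    # field(1,:) drops an optional :port, field(-2,.) picks the
    # 2nd label counting from the right
    http-request set-var(txn.sld) req.hdr(host),field(1,:),field(-2,.)
    http-request set-header X-SLD %[var(txn.sld)]
```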

-Jarno

> On Mon, Jun 25, 2018 at 12:29 PM, Daniel Schneller <
> daniel.schnel...@centerdevice.com> wrote:
> 
> > Hi!
> >
> > Just double checking to make sure I am not simply blind: Is there a way to
> > reverse a string using a sample converter?
> >
> > Background: I need to extract just the second level domain from the host
> > header. So for sub.sample.example.com I need to fetch "example".
> >
> > Using the "word" converter and a "." as the separator I can get at the
> > individual components, but because the number of nested subdomains varies,
> > I cannot use that directly.
> >
> > My idea was to just reverse the full domain (removing a potential port
> > number first), get word(2) and reverse again. Is that possible? Or is there
> > an even better function I can use? I am thinking this must be a common use
> > case, but googling "haproxy" and "reverse" will naturally turn up lots of
> > results talking about "reverse proxying".
> >
> > If possible, I would like to avoid using maps to keep this thing as
> > generic as possible.
> >
> > Thanks a lot!
> >
> > Daniel

-- 
Jarno Huuskonen



Re: Bug when passing variable to mapping function

2018-06-28 Thread Jarno Huuskonen
Hi,

On Thu, Jun 28, Jarno Huuskonen wrote:
> I think this is the commit that breaks map_regm in this case:
> b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
> acls/maps thread safe).
> 
> If I revert this commit from pattern.c:pattern_exec_match
> then the map_regm \1 backref seems to work.

I think I found what's replacing the \000 as first char:
in (map.c) sample_conv_map:
/* In the regm case, merge the sample with the input. */
if ((long)private == PAT_MATCH_REGM) {
str = get_trash_chunk();
str->len = exp_replace(str->str, str->size, smp->data.u.str.str,
   pat->data->u.str.str,
   (regmatch_t *)smp->ctx.a[0]);

Before call to get_trash_chunk() smp->data.u.str.str is for example
'distri.com' and after get_trash_chunk() smp->data.u.str.str
is '\000istri.com'.

At the moment I don't have time to dig deeper, but hopefully this
helps a little bit.

-Jarno

-- 
Jarno Huuskonen



Re: Bug when passing variable to mapping function

2018-06-28 Thread Jarno Huuskonen
Hi,

On Mon, Jun 25, Daniel Schneller wrote:
> This is the contents of the map file:
>  hostmap.txt -
> ^(.*)\.(.*)$ \1
> --

Setting this to:
^(.*)\.(.*)$ \2

And I get
X-Distri-Mapped-From-Var: com

and with map_regm: ^(.*)\.(.*)\.(.*)$ \2.\3
(Host: www.distri.com)

I get X-Distri-Mapped-From-Var: distri.com

So it looks like only backref \1 has first char set to \000
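A minimal sketch to reproduce (approximating Daniel's setup; bind address,
file path and header name are illustrative):

```
# /etc/haproxy/hostmap.txt contains one line:  ^(.*)\.(.*)$ \1

frontend test_fe
    bind ipv4@127.0.0.1:8080
    http-request set-var(txn.host) req.hdr(host)
    http-response set-header X-Mapped-From-Var %[var(txn.host),map_regm(/etc/haproxy/hostmap.txt)]
    default_backend test_be

backend test_be
    http-request deny deny_status 200
```

On an affected 1.8.x the first character of the \1 backref comes back
as \000, as described above.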

> See the X-Distri-Mapped-From-Var header's value. It has what seems to be a 
> nul-byte
> instead of the first character of the domain name. The other X- headers
> before it are meant to narrow down where the bug actually happens.
> 
> It would appear that it is somehow related to passing a variable's value
> into the mapping function or its return from there. Interestingly, the
> issue does _not_ show when simply putting the variable value into a header
> (X-Distri-Direct-From-Var) or when calling the mapping function with the
> header lookup instead of the intermediate variable 
> (X-Distri-Mapped-From-Header).
> 
> 
> One more tidbit: If I change the mapping file to this:
> --
> ^(.*)\.(.*)$ a\1
> --
> 
> The generated header header changes to:
> --
> X-Distri-Mapped-From-Var: aaistri
> --
> 
> Looks like some off-by-one error?

AFAIK this works on 1.7.11 but seems to be broken on all 1.8.x.

I think this is the commit that breaks map_regm in this case:
b5997f740b21ebb197e10a0f2fe9dc13163e1772 (MAJOR: threads/map: Make
acls/maps thread safe).

If I revert this commit from pattern.c:pattern_exec_match
then the map_regm \1 backref seems to work.

-Jarno

-- 
Jarno Huuskonen



Re: Haproxy client ip

2018-06-25 Thread Jarno Huuskonen
Hi,

On Mon, Jun 25, Simos Xenitellis wrote:
> On Sat, Jun 23, 2018 at 1:43 AM, Daniel Augusto Esteves
>  wrote:
> > Hi
> >
> > I am setting up haproxy with keepalived and i need to know if is possible
> > pass client ip for destination log server using haproxy in tcp mode?
> >
> 
> That can be done with the "proxy protocol". See more at
> https://www.haproxy.com/blog/haproxy/proxy-protocol/

There's also source usesrc clientip:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-source
if your backend servers don't support proxy-protocol.
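For example (an untested sketch; usesrc needs transparent-proxy support in
the kernel (TPROXY), matching routing rules and sufficient privileges; the
address is illustrative):

```
backend tcp_back
    mode tcp
    # open the backend connection with the client's own source address
    source 0.0.0.0 usesrc clientip
    server log1 192.0.2.20:514
```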

-Jarno

-- 
Jarno Huuskonen




Re: http-response add-header

2018-06-25 Thread Jarno Huuskonen
Hi,

On Sat, Jun 23, mlist wrote:
> using this config no header is added to client from haproxy:
> 
> acl is_test hdr_dom(host) -i www.url1.url2.com
> 
> http-response add-header X-Custom-Header YES if is_test

Most likely the host header is not available for the http-response/acl.

For example with this config:
frontend test_fe
bind ipv4@127.0.0.1:8080
acl is_test hdr_dom(host) -i www.url1.url2.com
http-response add-header X-Custom-Header YES if is_test
default_backend test_be

backend test_be
http-request deny deny_status 200

haproxy complains:
[WARNING] 175/094858 (14971) : parsing [tmp_resp_header.conf:24] : acl 
'is_test' will never match because it only involves keywords that are 
incompatible with 'frontend http-response header rule'

You can use captures / variables to "store" the host header:
https://www.haproxy.com/blog/whats-new-in-haproxy-1-6/

So for example:
frontend test_fe
bind ipv4@127.0.0.1:8080
declare capture request len 64
http-request capture req.hdr(Host) id 0
acl is_test capture.req.hdr(0) -m beg -i www.url1.url2.com
http-response add-header X-Custom-Header YES if is_test

-Jarno

-- 
Jarno Huuskonen



Re: haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-06-07 Thread Jarno Huuskonen
Hi Olivier / Willy,

On Thu, Jun 07, Olivier Houchard wrote:
> Hi Willy,
> 
> On Thu, Jun 07, 2018 at 11:45:39AM +0200, Willy Tarreau wrote:
> > Hi Olivier,
> > 
> > On Wed, Jun 06, 2018 at 06:40:05PM +0200, Olivier Houchard wrote:
> > > You're right indeed, that code was not written with abns sockets in mind.
> > > The attached patch should fix it. It was created from master, but should
> > > apply to 1.8 as well.
> > > 
> > > Thanks !
> > > 
> > > Olivier
> > 
> > > >From 3ba0fbb7c9e854aafb8a6b98482ad7d23bbb414d Mon Sep 17 00:00:00 2001
> > > From: Olivier Houchard 
> > > Date: Wed, 6 Jun 2018 18:34:34 +0200
> > > Subject: [PATCH] MINOR: unix: Make sure we can transfer abns sockets as 
> > > well  on seamless reload.
> > 
> > Would you be so kind as to tag it "BUG" so that our beloved stable
> > team catches it for the next 1.8 ? ;-)
> > 
> 
> Sir yes sir.
> 
> > > diff --git a/src/proto_uxst.c b/src/proto_uxst.c
> > > index 9fc50dff4..a1da337fe 100644
> > > --- a/src/proto_uxst.c
> > > +++ b/src/proto_uxst.c
> > > @@ -146,7 +146,12 @@ static int uxst_find_compatible_fd(struct listener 
> > > *l)
> > >   after_sockname++;
> > >   if (!strcmp(after_sockname, ".tmp"))
> > >   break;
> > > - }
> > > + /* abns sockets sun_path starts with a \0 */
> > > + } else if (un1->sun_path[0] == 0
> > > + && un2->sun_path[0] == 0
> > > + && !strncmp(&un1->sun_path[1], &un2->sun_path[1],
> > > + sizeof(un1->sun_path) - 1))
> > > + break;
> > 
> > It may still randomly fail here because null bytes are explicitly permitted
> > in the sun_path. Instead I'd suggest this :
> > 
> > } else if (un1->sun_path[0] == 0 &&
> >memcmp(un1->sun_path, un2->sun_path, sizeof(un1->sun_path)) == 0)
> > 
> > Jarno, if you still notice occasional failures, please try with this.
> > 
> 
> You're right, as unlikely as it can be in our current scenario, better safe
> than sorry.
> The attached patch is updated to reflect that.

Thanks !
My minimal test config works with the patch (applied on top of
1.8.9, doing reloads/curl in a loop).

I'll test with my normal/production config when I'll have more time
(probably few days).

-Jarno

-- 
Jarno Huuskonen



Re: HAProxy - Server Timeout and Client Timeout

2018-06-06 Thread Jarno Huuskonen
Hi,

On Tue, Jun 05, Martel, Michael H. wrote:
> We're running HAproxy 1.5.18 on RedHat Enterprise 7.4, as the load balancer 
> for our LMS (Moodle).  We have found that the course backup feature in Moodle 
> will return a 5xx error on some backups.  We have determined that the 
> "timeout server" value needed to be increased.

Do these backup requests have specific urls that you can match with acl ?

If you use separate backend for moodle backups then it should be
possible to increase timeout server for just the backup requests.

Something like
frontend fe_moodle
  acl backup_req path_sub /something/backup
  use_backend moodle_backup if backup_req
  default_backend moodle
...
backend moodle
  timeout server 1m
...

backend moodle_backup
  timeout server 12m
  server moodle1 ... track moodle/moodle1 ...
  server moodle2 ... track moodle/moodle2 ...

> Initially we were using a "timeout client 1m" and "timeout server 1m" .  
> Adjusting the server to "timeout server 12m" fixes the problem and does not 
> appear to introduce any other issues in our testing.
> 
> I can't see any reason that I should have the "timeout client" and the 
> "timeout server" set to the same value.
> 
> Is there anything I should watch out for after increasing the "timeout 
> server" by such a large amount ?

Probably not, but AFAIK if the backend server "dies" after haproxy has
forwarded the request (and before server responds) then client has to
wait for timeout server (in reality I think everyone will just click
stop or reload instead of waiting for the really long timeout).

-Jarno

-- 
Jarno Huuskonen



Re: Rewrite image path based on HTTP_REQUEST

2018-05-23 Thread Jarno Huuskonen
Hi,

On Sat, May 19, Aleksandar Lazic wrote:
> On 17/05/2018, Lotic Lists wrote:
> > How can I rewrite a image path based on URL?
> > 
> > Example, users request the url www.example.com/images/logo.png, haproxy just
> > balance to backend servers normally.
> > 
> > Now users request www.newdomain.com, I need rewrite URI to
> > /images/new-logo.png
> 
> Well what have you already tried?
> 
> I would try, untested.
> 
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-reqrep
> 
> acl old_dom hdr(host) -i www.example.com
> acl old_path path_beg -i /images/logo.png
> 
> reqrep "^([^ :]*) /images/logo.png" "\1 /images/new-logo.png" if old_dom && 
> old_path
> reqirep "^Host: www.example.com" "Host: www.newdomain.com" if old_dom && 
> old_path

If you just need to change the path then http-request set-path should be
enough(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-http-request).

Something like:
acl new_dom hdr_dom(host) -i www.newdomain.com
acl old_path path_beg -i /images/logo.png
http-request set-path "/images/new-logo.png" if new_dom old_path

-Jarno

-- 
Jarno Huuskonen - System Administrator |  jarno.huuskonen atsign uef.fi



Re: Haproxy support for handling concurrent requests from different clients

2018-05-15 Thread Jarno Huuskonen
Hi,

On Fri, May 11, Mihir Shirali wrote:
> I did look up some examples for setting 503 - but all of them (as you've
> indicated) seem based on src ip or src header. I'm guessing this is more
> suitable for a DOS/DDOS  attack? In our deployment, the likelihood of
> getting one request from multiple clients is more than multiple requests
> from a single client.

Can you explain how/when (on what condition) you'd like to limit the number
of requests and have haproxy return a 503 status to clients (429 seems a more
appropriate status code for this) ?

If you just want haproxy to return 503 for all new requests when
there're X number of sessions/connections/session rate then
take a look at fe_conn, fe_req_rate, fe_sess_rate, be_conn and
be_sess_rate
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.2-fe_conn)
so for example something like
http-request deny deny_status 503 if { fe_req_rate gt 50 }
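For example, rejecting new requests early with a retry-friendly status
(an untested sketch; the threshold is illustrative):

```
frontend fe_app
    # refuse new requests once the frontend sees more than ~50 req/s;
    # 429 invites well-behaved clients to retry later
    http-request deny deny_status 429 if { fe_req_rate gt 50 }
```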

> As an update the rate-limit directive has helped. However, the only problem
> is that the client does not know that the server is busy and *could* time
> out. It would be great if it were possible to somehow send a 503 out , so
> the clients could retry after a random time.

-Jarno

-- 
Jarno Huuskonen



Re: req.body_param([])

2018-05-14 Thread Jarno Huuskonen
Hi Simon,

On Mon, May 14, Simon Schabel wrote:
> HA-Proxy version 1.7.5-2~bpo8+1 2017/05/27
> 
> The setting for the logging was done in the /default /section as:
> 
>    log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %st\
> %B\ %cc\ %cs\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\
> %[capture.req.hdr(0)]\ %{+Q}r
>     option log-separate-errors
>     option log-health-checks
> 
> and in the /http /and /https /section the body parameter capturing
> is activated as:
> 
>     # enable HTTP body logging
>     option http-buffer-request
>     declare capture request len 4
>     http-request capture req.body_param(customerId) id 0
> 
> As my haproxy version differs from yours I'm unsure where I might
> made a configuration error.

I tested with 1.8.8 and 1.7.5 and with both versions I managed to
log customerId (with simple curl -X PUT/POST).

Are the POST/PUT requests large, is it possible that the customerId doesn't
fit in haproxy buffer (default 16k (I think)) ?

Can you test with curl to see if customerId is logged then:
curl -v -X PUT -d'customerId=911' http://yourhost.yourdomain/yourpath

# bigfile is some random file much larger than 16k
and curl -v -X PUT -d@bigfile -d'customerId=912' 
http://yourhost.yourdomain/yourpath

-Jarno

-- 
Jarno Huuskonen


