Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 21:52, Andrew Anderson wrote:


On Wed, Jan 12, 2022 at 11:58 AM Aleksandar Lazic <al-hapr...@none.at> wrote:

Well, looks like you want a forward proxy like squid not a reverse proxy 
like haproxy.


The application being load balanced is itself a proxy, so option http_proxy is not a good fit (and, as you mention, it is on the deprecation list), but haproxy as a load balancer is much better at front-ending this environment than any other solution available.


We upgraded to 2.4 recently, and a Java application that uses these proxy servers is what exposed this 
issue for us.  Even if we were to use squid, we would still run into this, as I would want to ensure that 
squid was highly available for the environment, and we would hit the same code path when going through 
haproxy to connect to squid.


The only option currently available in 2.4 that I am aware of is to set up internal-only frontend/backend paths with accept-invalid-http-request configured, exclusively for Java clients to use. This is effectively how we have worked around this for now:


listen proxy
     bind :8080
     mode http
     option httplog
     server proxy1 192.0.2.1:8080
     server proxy2 192.0.2.2:8080

listen proxy-internal
     bind :8081
     mode http
     option httplog
     option accept-invalid-http-request
     server proxy1 192.0.2.1:8080 track proxy/proxy1
     server proxy2 192.0.2.2:8080 track proxy/proxy2

This is a viable workaround for us in the short term, but it is not a solution that would work for everyone.  If the uri parser patches I found in the 2.5/2.6 branches are the right ones to make haproxy more permissive when matching the authority against the host in CONNECT requests, that will remove the need for the parallel frontend/backends with validation disabled.  I hope to have time to test a 2.4 build with those patches included over the next few days.


By design, HAProxy is a reverse proxy in front of origin servers, not a forwarding proxy, which is why the CONNECT method is treated as invalid.

Because of that, I would not use "mode http" for the squid backend/servers, given the issues you described.
Why not "mode tcp" with the PROXY protocol (http://www.squid-cache.org/Doc/config/proxy_protocol_access/) if you need the client IP?
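A minimal sketch of that suggestion, reusing the addresses from the workaround config earlier in this thread (untested; squid would also need a proxy_protocol_access rule permitting the load balancer's address):

```
listen proxy
     bind :8080
     mode tcp
     option tcplog
     server proxy1 192.0.2.1:8080 send-proxy
     server proxy2 192.0.2.2:8080 send-proxy
```

In "mode tcp" haproxy never parses the CONNECT request, so the authority/Host validation is skipped entirely, while "send-proxy" still carries the original client IP to squid.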


Regards
Alex



Re: invalid request

2022-01-12 Thread Aleksandar Lazic



On 12.01.22 17:06, Andrew Anderson wrote:



On Thu, Dec 30, 2021 at 10:15 PM Willy Tarreau <w...@1wt.eu> wrote:

On Wed, Dec 29, 2021 at 12:29:11PM +0100, Aleksandar Lazic wrote:
 > >     0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
 > >     00043  Host: download.eclipse.org\r\n
 > >     00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
 > >     00124  \r\n

It indeed looks like a recently fixed problem related to the mandatory
comparison between the authority part of the request and the Host header
field, which do not match above since only one contains a port.


I don't know how pervasive this issue is on non-Java clients, but the 
sendCONNECTRequest() method from
Java's HttpURLConnection API is responsible for the authority/host mismatch 
when using native Java HTTP
support, and has been operating this way for a very long time:

     /**
      * send a CONNECT request for establishing a tunnel to proxy server
      */
     private void sendCONNECTRequest() throws IOException {
         int port = url.getPort();

         requests.set(0, HTTP_CONNECT + " " + connectRequestURI(url)
                          + " " + httpVersion, null);
         requests.setIfNotSet("User-Agent", userAgent);

         String host = url.getHost();
         if (port != -1 && port != url.getDefaultPort()) {
             host += ":" + String.valueOf(port);
         }
         requests.setIfNotSet("Host", host);
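To make the mismatch concrete: for https://download.eclipse.org the CONNECT request line carries "download.eclipse.org:443", while the JDK logic above omits the port from Host whenever it is absent or equal to the scheme's default. A standalone sketch of that Host computation (hypothetical helper, not actual JDK code):

```java
public class HostHeaderDemo {
    // Mirrors the JDK logic quoted above: append the port only when it
    // is explicit (not -1) and differs from the scheme's default port.
    public static String hostHeader(String host, int port, int defaultPort) {
        String h = host;
        if (port != -1 && port != defaultPort) {
            h += ":" + port;
        }
        return h;
    }

    public static void main(String[] args) {
        // CONNECT authority sent by the client: download.eclipse.org:443
        // Host header computed here lacks the port, hence the mismatch.
        System.out.println(hostHeader("download.eclipse.org", 443, 443));
    }
}
```

Either way the client arrives at a Host without a port, so strict authority/Host comparison in haproxy fails until scheme-based normalization drops the default port.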

The Apache-HttpClient library has a similar issue as well (as demonstrated 
above).

More recent versions are applying scheme-based normalization which consists
in dropping the port from the comparison when it matches the scheme
(which is implicitly https here).


Is there an option other than using "accept-invalid-http-request" available to 
modify this behavior on the
haproxy side in 2.4?  I have also run into this with Java 8, 11 and 17 clients.

Are these commits what you are referring to about scheme-based normalization 
available in more recent
versions (2.5+):

https://github.com/haproxy/haproxy/commit/89c68c8117dc18a2f25999428b4bfcef83f7069e
(MINOR: http: implement http uri parser)
https://github.com/haproxy/haproxy/commit/8ac8cbfd7219b5c8060ba6d7b5c76f0ec539e978
(MINOR: http: use http uri parser for scheme)
https://github.com/haproxy/haproxy/commit/69294b20ac03497e33c99464a0050951bdfff737
(MINOR: http: use http uri parser for authority)

If so, I can pull those into my 2.4 build and see if that works better for Java 
clients.


Well, it looks like you want a forward proxy like squid, not a reverse proxy like haproxy.
https://en.wikipedia.org/wiki/HTTP_tunnel

As you haven't shared your config, I assume you are trying to use option http_proxy, which is deprecated.
http://cbonte.github.io/haproxy-dconv/2.5/configuration.html#4-option%20http_proxy


Andrew


Regards Alex



Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic

On 04.01.22 14:10, Christopher Faulet wrote:

On 1/4/22 at 10:26, Aleksandar Lazic wrote:


On 04.01.22 10:16, Christopher Faulet wrote:

On 12/25/21 at 23:59, Aleksandar Lazic wrote:


Hi.

as the message tells us to report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
    txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
    rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
    ab=(nil),0 csb=0x559faad7dcf0,1a0
    
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
    
cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
    filters={0x559faa29c520="cache store filter"}]



Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop, but I can freeze a stream indefinitely. It is probably just a matter of timing. On my side, it is related to L7 retries. Could you confirm you have a "retry-on" parameter in your configuration?


Yes I can confirm.

```
defaults http
    log global
    mode http
    retry-on all-retryable-errors
    option forwardfor
    option redispatch
    option http-ignore-probes
    option httplog
    option dontlognull
    option ssl-hello-chk
    option log-health-checks
    option socket-stats
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    http-reuse safe
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
...
```



Thanks Alex, I pushed a fix. It will be backported as far as 2.0 ASAP.


Thank you Christopher




Re: HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2022-01-04 Thread Aleksandar Lazic



On 04.01.22 10:16, Christopher Faulet wrote:

On 12/25/21 at 23:59, Aleksandar Lazic wrote:


Hi.

as the message tells us to report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
   txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
   rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
   ab=(nil),0 csb=0x559faad7dcf0,1a0
   
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
   cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
   filters={0x559faa29c520="cache store filter"}]



Hi Alex,

I think I found the issue. I'm unable to reproduce the spinning loop, but I can freeze a stream indefinitely. It is probably just a matter of timing. On my side, it is related to L7 retries. Could you confirm you have a "retry-on" parameter in your configuration?


Yes I can confirm.

```
defaults http
  log global
  mode http
  retry-on all-retryable-errors
  option forwardfor
  option redispatch
  option http-ignore-probes
  option httplog
  option dontlognull
  option ssl-hello-chk
  option log-health-checks
  option socket-stats
  timeout connect 5s
  timeout client  50s
  timeout server  50s
  http-reuse safe
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
...
```


Thanks !


Regards
Alex



Re: Troubles with AND in acl

2022-01-01 Thread Aleksandar Lazic

Hi.

On 01.01.22 20:56, Henning Svane wrote:

Hi

I have used it for some time in pfSense, but now I have made a Linux installation and the configuration is giving me some trouble.


What have I done wrong here below?

I cannot see what I should have done differently, but sudo haproxy -c -f /etc/haproxy/haproxy01.cfg gives the following errors:


error detected while parsing ACL 'XMail_EAS' : unknown fetch method 'if' in ACL 
expression 'if'.

error detected while parsing an 'http-request track-sc1' condition : unknown fetch method 'XMail_EAS' 
in ACL expression 'XMail_EAS'.


I have tried with { } around it, but that did not help.


"if" is not a valid keyword on an "acl" line.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7


Configuration:

bind 10.40.61.10:443 ssl crt /etc/haproxy/crt/mail_domain_com.pem alpn 
h2,http/1.1

acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com

http-request redirect scheme https code 301 if !{ ssl_fc }

acl XMail_EAS if XMail AND {url_beg -i /microsoft-server-activesync}



This works.

  acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com
  acl MS_ACT url_beg -i /microsoft-server-activesync

  http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if XMail MS_ACT

The AND is implicit.
http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.2


http-request track-sc1 src table Table_SRC_XMail_EAS_L4 if { XMail_EAS } { 
status 401 }  { status 403 }

http-request tarpit deny_status 429 if  { XMail_EAS} { sc_http_req_rate(1) gt 
10 }


Please can you share some more information?
haproxy -vv


Regards

Henning


Regards
Alex





Re: invalid request

2021-12-29 Thread Aleksandar Lazic

Hi.

On 28.12.21 19:35, brendan kearney wrote:

list members,

I am running haproxy and see some errors with requests.  I am trying to
understand why the errors are being thrown; haproxy version and error
info below.  I am thinking that the host header is being exposed outside
the TLS encryption, but I cannot be sure that is what is going on.

Of note, the gnome weather extension runs into a similar issue, and so
does the eclipse IDE when trying to call out to the download site.

Where can I find more about what is going wrong with the requests and
why haproxy is blocking them?  If it matters, the calls are from apps to
an http VIP in haproxy, load balancing to squid backends.

# haproxy -v
HA-Proxy version 2.1.11-9da7aab 2021/01/08 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.11.html


As you can see on that page, 108 bugs have been fixed in later versions.
Maybe you should update to the latest 2.4 and see if the behavior is still the same.


Running on: Linux 5.11.22-100.fc32.x86_64 #1 SMP Wed May 19 18:58:25 UTC
2021 x86_64

[28/Dec/2021:12:17:14.412] frontend proxy (#2): invalid request
    backend  (#-1), server  (#-1), event #154, src 
192.168.1.90:44228
    buffer starts at 0 (including 0 out), 16216 free,
    len 168, wraps at 16336, error at position 52
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT admin.fedoraproject.org:443 HTTP/1.1\r\n


Do you use http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4-option%20http_proxy ?
It would help if you shared your haproxy config.


    00046  Host: admin.fedoraproject.org\r\n
    00077  Accept-Encoding: gzip, deflate\r\n
    00109  User-Agent: gnome-software/40.4\r\n
    00142  Connection: Keep-Alive\r\n
    00166  \r\n

[28/Dec/2021:12:48:34.023] frontend proxy (#2): invalid request
    backend  (#-1), server  (#-1), event #166, src 
192.168.1.90:44350
    buffer starts at 0 (including 0 out), 16258 free,
    len 126, wraps at 16336, error at position 49
    H1 connection flags 0x, H1 stream flags 0x0012
    H1 msg state MSG_HDR_L2_LWS(24), H1 msg flags 0x1410
    H1 chunk len 0 bytes, H1 body len 0 bytes :

    0  CONNECT download.eclipse.org:443 HTTP/1.1\r\n
    00043  Host: download.eclipse.org\r\n
    00071  User-Agent: Apache-HttpClient/4.5.10 (Java/11.0.13)\r\n
    00124  \r\n

thanks in advance,

brendan






HAP 2.3.16 A bogus STREAM [0x559faa07b4f0] at "cache store filter"

2021-12-25 Thread Aleksandar Lazic



Hi.

as the message tells us to report this to the developers, I do so :-)


```
Dec 24 01:10:31 lb1 haproxy[20008]: A bogus STREAM [0x559faa07b4f0] is spinning 
at 204371 calls per second
and refuses to die, aborting now!
Please report this error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0 csb=0x559faad7dcf0,1a0
 cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]

Dec 24 01:10:31 lb1 haproxy[4818]: [ALERT] 357/011031 (20008) : A bogus STREAM 
[0x559faa07b4f0] is spinning
at 204371 calls per second and refuses to die, aborting now! Please report this 
error to developers
[strm=0x559faa07b4f0,12390e src=:::79.183.184.235 fe=https-in be=be_api 
dst=api_main2
 txn=0x559faab233e0,44000 txn.req=MSG_DONE,d txn.rsp=MSG_RPBEFORE,0 
rqf=48c4e068 rqa=4
 rpf=a000a860 rpa=0 sif=CLO,2c8002 sib=CLO,1280112 af=(nil),0 
csf=0x559faa07ba10,1059a0
 ab=(nil),0
 csb=0x559faad7dcf0,1a0 
cof=0x7f224212e5d0,80003300:H2(0x559faa7d7b00)/SSL(0x7f22424fc7a0)/tcpv6(2162)
 cob=0x7f2240f79fe0,8982300:H1(0x559faa0ab840)/SSL(0x7f2263517770)/tcpv4(1490)
 filters={0x559faa29c520="cache store filter"}]
```

Here is the cache config from haproxy.

```
cache default_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

cache api_cache
total-max-size 1024 # MB
# max-object-size 1  # bytes
max-age 300 # seconds

backend be_default
  log global

  http-request cache-use default_cache
  http-response cache-store default_cache

backend be_api
  log global

  http-request cache-use api_cache
  http-response cache-store api_cache
```

Here is the haproxy version; we plan to update to 2.4 ASAP.

```
ubuntu@lb1:~$ haproxy -vv
HA-Proxy version 2.3.16-1ppa1~bionic 2021/11/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.16.html
Running on: Linux 4.15.0-139-generic #143-Ubuntu SMP Tue Mar 16 01:30:17 UTC 
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 
-fdebug-prefix-map=/build/haproxy-1kKZLK/haproxy-2.3.16=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 
USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
          h2 : mode=HTTP   side=FE|BE  mux=H2
        fcgi : mode=HTTP   side=BE     mux=FCGI
   <default> : mode=HTTP   side=FE|BE  mux=H1
   <default> : mode=TCP    side=FE|BE  mux=PASS

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] 

Re: Getting rid of outdated haproxy apt ppa repo

2021-12-20 Thread Aleksandar Lazic



Hi.

On 20.12.21 09:40, Christoph Kukulies wrote:

Due to some recent action I took, following possibly outdated instructions for haproxy 1.6 under Ubuntu, I have a leftover broken haproxy repo which comes up every time I do apt updates:

Ign:3 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic InRelease
Hit:4 http://ppa.launchpad.net/vbernat/haproxy-1.8/ubuntu bionic InRelease
Err:5 http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic Release
   404  Not Found [IP: 91.189.95.85 80]
Hit:6 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:7 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu bionic 
Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore 
disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration 
details.


Any clues how I can get rid of this?


Well, 1.6 is end of life.

https://www.haproxy.org/

You should replace haproxy-1.6 with 2.4, IMHO.
https://haproxy.debian.net/#?distribution=Ubuntu=bionic=2.4

How to handle PPAs can be found on the Internet; here is an example page from a quick search:
https://linuxhint.com/ppa_repositories_ubuntu/
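To remove the stale PPA itself, something like this should work (untested here; the PPA name is taken from the error output above, and alternatively you can delete the matching file under /etc/apt/sources.list.d/):

```
sudo add-apt-repository --remove ppa:vbernat/haproxy-1.6
sudo apt update
```

After that, apt update should no longer report the 404 for the haproxy-1.6 Release file.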


—
Christoph


Regards
Alex



Re: Add HAProxy to quicwg Implementations wiki

2021-12-19 Thread Aleksandar Lazic



On 19.12.21 13:52, Willy Tarreau wrote:

Hi Aleks,

On Sun, Dec 19, 2021 at 01:43:01PM +0100, Aleksandar Lazic wrote:

Do you agree that we can now add HAProxy to that list :-)

https://github.com/quicwg/base-drafts/wiki/Implementations


Ideally we should submit it once we have a public server with it. There
are still low-level issues that Fred and Amaury are working on before
this can happen, but based on the progress I'm seeing on the interop
page at https://interop.seemann.io/  I definitely expect that these
will be addressed soon and that haproxy.org will be delivered over QUIC
before 2.6 is released :-)


Cool thanks for the update :-)


Willy


Regards
Alex



Add HAProxy to quicwg Implementations wiki

2021-12-19 Thread Aleksandar Lazic



Hi.

Do you agree that we can now add HAProxy to that list :-)

https://github.com/quicwg/base-drafts/wiki/Implementations

My suggestion; please help me fill in the ??:

IETF QUIC Transport

HAProxy:

QUIC implementation in HAProxy

Language: C
Version: draft-29??
Roles: Server, Client
Handshake: TLS 1.3
Protocol IDs: ??
ALPN: ??
Public server:
Is there a public server?
#

HTTP/3

Implementation of QUIC and HTTP/3 support in HAProxy

Language: C
Version: draft-http-34??
Roles: Server, Client
Handshake: TLSv1.3
Protocol IDs: ??
Public server: -
###

Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-14 Thread Aleksandar Lazic

Hi.

On 14.12.21 10:18, Olivier D wrote:

Hi,

On Mon, Dec 13, 2021 at 19:38, John Lauro <johnala...@gmail.com> wrote:

http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) 
-i "\$\{jndi:" }
was not catching the bad traffic.  I think the escapes were causing issues 
in the matching.

The following did work:
                 http-request deny deny_status 405 if { url_sub -i -f 
/etc/haproxy/bad_header.lst }
                 http-request deny deny_status 405 if { hdr_sub(user-agent) 
-i -f /etc/haproxy/bad_header.lst }
and in bad_header.lst
${jndi:


I tried
http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) -i "\$\{jndi:" }
and
http-request deny deny_status 405 if { url_sub -i ${jndi: or hdr_sub(user-agent) -i ${jndi: }

without success. Can anyone tell me what's wrong with both syntaxes, and how to escape special chars correctly?
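One likely explanation, offered as an untested sketch: haproxy expands "$" inside double quotes (weak quoting), so the "${jndi:" pattern never reaches the matcher literally, and "or" cannot appear inside a single anonymous ACL in any case. The configuration manual's strong quoting with single quotes keeps everything literal, so something along these lines may work:

```
http-request deny deny_status 405 if { url_sub -i '${jndi:' } or { hdr_sub(user-agent) -i '${jndi:' }
```

The file-based variant quoted earlier in the thread sidesteps the quoting question entirely, which is probably why it worked.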


There is now a blog post on haproxy.com about how to configure haproxy to protect backend applications against the log4j attack.

https://www.haproxy.com/blog/december-2021-log4shell-mitigation/


Olivier


Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic



On 13.12.21 14:53, Lukas Tribus wrote:

On Mon, 13 Dec 2021 at 14:43, Aleksandar Lazic  wrote:

Well I go the other way around.

The application must know what data are allowed, verify the input and if the 
input is not valid discard it.´


You clearly did not understand my point so let me try to phrase it differently:

The log4j vulnerability is about "allowed data" triggering a software
vulnerability which was impossible to predict.


Ah okay, then please accept my apologies for misunderstanding you.


Lukas



Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic

On 13.12.21 14:03, Lukas Tribus wrote:

On Mon, 13 Dec 2021 at 13:25, Aleksandar Lazic  wrote:

1. Why is input from outside the application passed unchecked to the logging library?


Because you can't predict the future.

When you know that your backend is SQL, you escape what's necessary to
avoid SQL injection (or use prepared statements) before sending
commands against the database.
When you know your output is HTML, you escape HTML special characters,
so untrusted inputs can't inject HTML tags.

That's what input validation means.

How exactly do you verify and sanitise inputs to protect against an
unknown vulnerability with an unknown syntax in a logging library that
is supposed to handle all strings just fine? You don't, it doesn't
work this way, and that's not what input validation means.


Well, I go the other way around.

The application must know what data are allowed, verify the input, and discard it if it is not valid.
In any case, user input should never be sent directly to the database!
There are plenty of options in many different languages to quote or prepare queries *before* they are sent to the database.
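A minimal sketch of that allow-list idea, using a hypothetical username field (the pattern and class name are made up for illustration): input that does not match the expected shape is rejected before it can reach a logger or a database.

```java
public class InputValidator {
    // Allow-list: a username may contain only letters, digits, dot,
    // dash and underscore, up to 64 characters. Anything else, such
    // as a "${jndi:...}" payload, is rejected outright.
    private static final java.util.regex.Pattern USERNAME =
            java.util.regex.Pattern.compile("^[A-Za-z0-9._-]{1,64}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice"));                 // true
        System.out.println(isValidUsername("${jndi:ldap://evil/a}")); // false
    }
}
```

The same approach applies per field: each input gets its own expected shape, and anything outside it is discarded rather than passed along.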

I know that this is a lot of work, because I do it in almost every one of my programs, but security and error handling are a must in current applications; I would say they make up at least a third of an application.

We see this quite well in haproxy: there is a huge amount of checks for null, for expected types and much else, which is why haproxy is so robust and secure, imho.

But I think this is now off topic; let's continue off-list, okay?


Lukas


Regards
Alex



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Aleksandar Lazic

On 13.12.21 11:48, Olivier D wrote:

Hello there,

If you don't know yet, a CVE was published on Friday about the log4j library, allowing remote code execution with a crafted HTTP request.

We would like to filter these requests on HAProxy to lower the exposure. At peak times, 20% of our web traffic is scanners probing for this bug!

The offending string is "${jndi:". It must be filtered in any field that could reach the log servers:
- URL
- User-Agent
- User name

What would be the easiest way to do that? Here is my first try:

http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or hdr_sub(user-agent) -i 
"\$\{jndi:" }


What do you think?


Basically, it could be any header which the application uses and sends unchecked and unverified to the logging library.

Assuming the fields are valid is not enough, from my point of view.

There is a quite nice blog post about this topic: https://isc.sans.edu/diary/28120

From my point of view, the key statement in the blog is:
"as long as it reads some input from an attacker, and passes that to the log4j library"

There are 2 main questions here.
1. Why is input from outside the application passed unchecked to the logging library?
2. Is the lookup really necessary for the application, or is it only a lazy way to solve some problems?

This CVE creates a lot of noise, but I haven't seen anyone ask these simple questions anywhere.
The sad fact is that one of the main development rules, and quite an old one, is broken here by the developers:

Check and verify EVERY input from the "user".

From my point of view, the "http-request deny" rule can be added, but which other headers should be included?

The "Referer" header is also a nice injection option, because some apps want to know from which location a request is coming, and it is a well-known header. How about some app-specific "X-???" headers?


Olivier


Jm2c

Regards
Alex




Re: Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-08 Thread Aleksandar Lazic

On 08.12.21 10:20, Christopher Faulet wrote:

On 12/6/21 at 08:25, Christopher Faulet wrote:

On 12/4/21 at 13:25, Aleksandar Lazic wrote:


Hi.

I am trying to capture the response header "dst_conn" set by "http-request return", but the value is not in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" in "http-after-response".
The question is also whether a capture makes sense after "http-request return" at all, as the documentation says that return stops the evaluation of any other rules, including capture:

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."


Hi Alex,

Unfortunately, it is indeed not possible for now. First, the captures via "capture request" and "capture response" directives are performed very early, and on received messages only. Thus it is not possible to capture info from generated responses at this stage. However, it is probably possible to add a "capture" action to the "http-after-response" ruleset. This would enable you to capture your header with the following config:

 declare capture response len 4
 http-after-response capture hdr(dst_conn) id 0

At first glance it seems trivial. I will check that.


Hi,

I added it to 2.6-DEV. The patch is small enough to be backported to 2.5.


Cool thank you.



Re: Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-08 Thread Aleksandar Lazic

Hi.

Can anyone help with protecting the backend using the peer-shared counters?

Regards
Alex

On 05.12.21 11:42, Aleksandar Lazic wrote:


Hi.

I am trying to protect a backend server against overload in a master/master setup.
The test setup looks like this

lb1: 8081 \
    -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl, the "tracker/quota1" gpc is increased and the second request is denied.
The problem is that the peer on lb2 does not get the counter data, so the backend is not protected on lb2 too.

Please, can anybody help me find my mistake and a proper solution?


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081
 > User-Agent: curl/7.68.0
 > Accept: */*
 >
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=1 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:2 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:2 
last_status=NAME last_hdshk=5m31s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=2 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x56

Re: Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-06 Thread Aleksandar Lazic

On 06.12.21 08:25, Christopher Faulet wrote:

Le 12/4/21 à 13:25, Aleksandar Lazic a écrit :


Hi.

I am trying to capture the response header "dst_conn" set by "http-request return", but 
the value does not show up in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" action in "http-after-response".
The question is also whether a capture after "http-request return" makes sense at all, 
since the documentation says that "return" stops the evaluation of any other rules, 
including captures:

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."


Hi Alex,

Unfortunately, it is indeed not possible for now. First, the captures via the "capture 
request" and "capture response" directives are performed very early, and on received 
messages only. Thus it is not possible to capture info from generated responses at this 
stage. However, it is probably possible to add a "capture" action to the 
"http-after-response" ruleset. This would allow you to capture your header 
with the following config:


    declare capture response len 4
    http-after-response capture hdr(dst_conn) id 0

At first glance it seems trivial. I will check that.
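In context, the full frontend could then look like the sketch below (assuming the 
proposed "http-after-response capture" action is available; it was not yet part of 
2.4 at the time of writing):

```
frontend http
    mode http
    log global
    log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
    declare capture response len 4
    # hypothetical action discussed above, not available in 2.4
    http-after-response capture hdr(dst_conn) id 0

    bind :::8080 v4v6
    default_backend nginx
```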


Thank you Christopher.

Regards
Alex




Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-05 Thread Aleksandar Lazic


Hi.

I am trying to protect a backend server against overload in a master/master setup.
The test setup looks like this:

lb1: 8081 \
   -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl, the "tracker/quota1" gpc counter is increased and the 
second request is denied.
The problem is that the peer on lb2 does not receive the counter data, so the backend 
is not protected on lb2 as well.

Can anybody please help me fix my mistake and find a proper solution?
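For reference, a minimal sketch of the kind of peers/stick-table wiring involved 
(all names, addresses, and ports are illustrative). Note that "last_status=NAME" in 
the dumps below suggests the handshake failed on a peer-name mismatch, so it may be 
worth checking that each instance's local peer name matches its own server line 
(e.g. via the -L option):

```
peers tracker
    bind 127.0.0.1:20001
    server h1                   # local peer, no address needed
    server h2 127.0.0.1:20002   # remote peer (lb2)

    table quota1 type string size 100 store gpc0
    table quota2 type string size 100 store gpc1
```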


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=1 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:2 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:2 
last_status=NAME last_hdshk=5m31s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=2 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae8510e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  

Is it expected that "capture response" does not get headers when "http-request return" is used

2021-12-04 Thread Aleksandar Lazic



Hi.

I am trying to capture the response header "dst_conn" set by "http-request return", but 
the value does not show up in %hs.

```
podman logs -f haproxy-dest
[NOTICE]   (1) : New worker #1 (3) forked
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.437] 200 58 - - LR-- {} "GET / HTTP/1.1"
<6>[04/Dec/2021:12:14:34.438] 200 58 - - LR-- {} "GET / HTTP/1.1"

```

I haven't seen any "capture" action in "http-after-response".
The question is also whether a capture after "http-request return" makes sense at all, 
since the documentation says that "return" stops the evaluation of any other rules, 
including captures?

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-response%20return
"This stops the evaluation of the rules and immediately returns a response."

My config
```
global
log stdout format short daemon debug
maxconn 1

defaults
timeout connect 1s
timeout server 1s
timeout client 1s

frontend http
mode http
log global
log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
# declare capture response len 4
capture response header dst_conn len 4

bind :::8080 v4v6

default_backend nginx

backend nginx
mode http
# bind :::8081

http-request return status 200 hdr dst_conn "%[dst_conn]"
```

Haproxy version
```
podman exec haproxy-dest haproxy -vv
HAProxy version 2.4.8-d1f8d41 2021/11/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.8.html
Running on: Linux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 
UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_PROMEX=1
  DEBUG   =
...
```

Regards
Alex



Re: Maybe stupid question but should "maxconn 0" work?

2021-12-02 Thread Aleksandar Lazic

On 02.12.21 15:12, Frank Wall wrote:

On 2021-12-02 02:16, Aleksandar Lazic wrote:

I try to test some limits with peers and wanted to test "maxconn 0"
before I start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log, but both
curls get a 200.


Maybe I got your question wrong, but "maxconn 0" is not supposed to block
all connections:

   The default value is "0" which means unlimited.
(http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#maxconn%20(Server%20and%20default-server%20options)


Thanks Frank for the answer.
So the answer to my question is "Yes, it's a stupid question because RTFM!" :-)


Regards
- Frank


Best regards
Alex




Maybe stupid question but should "maxconn 0" work?

2021-12-01 Thread Aleksandar Lazic



Hi.

I try to test some limits with peers and wanted to test "maxconn 0" before I 
start with the peers.
Should "maxconn 0" work?
I expect to get connection refused or similar and a 500 in the log, but both 
curls get a 200.

```
# curl -v http://127.0.0.1:8080/; curl -v http://127.0.0.1:8080/
```

```
podman exec haproxy-dest haproxy -vv
HAProxy version 2.4.8-d1f8d41 2021/11/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.8.html
Running on: Linux 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 
UTC 2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_PROMEX=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM -ZLIB +SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
+PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=12).
Built with OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
Running on OpenSSL version : OpenSSL 1.1.1k  25 Mar 2021
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2   flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
fcgi : mode=HTTP   side=BE    mux=FCGI flags=HTX|HOL_RISK|NO_UPG
<default> : mode=HTTP   side=FE|BE mux=H1   flags=HTX
  h1 : mode=HTTP   side=FE|BE mux=H1   flags=HTX|NO_UPG
<default> : mode=TCP    side=FE|BE mux=PASS flags=
none : mode=TCP    side=FE|BE mux=PASS flags=NO_UPG

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
```

Haproxy config
```
global
log stdout format short daemon debug
maxconn 0

defaults
timeout connect 1s
timeout server 5s
timeout client 5s

frontend http
mode http
log global
log-format "[%tr] %ST %B %CC %CS %tsc %hr %hs %{+Q}r"
declare capture response len 4

bind :::8080 v4v6

default_backend nginx

listen nginx
mode http
bind :::8081

http-request return status 200 content-type text/plain string "static" hdr x-host 
"%[req.hdr(host)]"
```
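Note that a global "maxconn 0" means unlimited rather than zero connections; to 
actually cap traffic, a small positive maxconn would be set, for example on the 
frontend (illustrative value, untested sketch):

```
frontend http
    mode http
    bind :::8080 v4v6
    # with maxconn 1, excess connections wait in the kernel backlog
    # instead of being rejected with an error
    maxconn 1
    default_backend nginx
```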

Regards
Alex



Re: Limit requests with peers on 2 independent HAProxies to one backend

2021-11-10 Thread Aleksandar Lazic

Hi Joao.

Thank you very much. I will give it a try.

Regards
Alex

On 10.11.21 22:25, Joao Morais wrote:




Em 8 de nov. de 2021, à(s) 08:26, Aleksandar Lazic  
escreveu:


Hi.

I have 2 LB's which should limit the connection to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Does anyone have such a solution already in place?


Hi Alex, I’ve already posted another question with a similar config which 
worked like a charm in my tests:

 https://www.mail-archive.com/haproxy@formilux.org/msg39753.html

~jm




That's my assumption for the config.

```
peers be_pixel_peers
  bind 9123
  log global
  localpeer {{ ansible_nodename }}
  server lb1 lb1.domain.com:1024
  server lb2 lb2.domain.com:1024


backend be_pixel_persons
  log global

  acl port_pixel dst_port {{ dst_ports["pixel"] }}
  tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

  option httpchk GET /alive
  http-check connect ssl
  timeout check 20s
  timeout server 300s

  # limit connection to backend

  stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
  http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

  

  http-request capture req.fhdr(Referer) id 0
  http-request capture req.fhdr(User-Agent) id 1
  http-request capture req.hdr(host) id 2
  http-request capture var(txn.cap_alg_keysize)  id 3
  http-request capture var(txn.cap_cipher) id 4
  http-request capture var(txn.cap_protocol) id 5

  http-response set-header X-Server %s

  balance roundrobin

  server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```
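As a side note, "table_table_conn_cur(sc_conn_cur)" does not look like an existing 
sample fetch or converter; if the intent is to deny a source once its shared 
connection count exceeds 100, a sketch closer to the documented fetches would be 
(untested; the backend/table name is assumed):

```
    stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
    http-request track-sc0 src table be_pixel_persons
    http-request deny if { sc0_conn_cur gt 100 }
```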

Regards
Alex









Re: Limit requests with peers on 2 independent HAProxies to one backend

2021-11-10 Thread Aleksandar Lazic

Hi.

Have anybody some hints or tips about the question?

Regards
Alex

On 08.11.21 12:26, Aleksandar Lazic wrote:


Hi.

I have 2 LB's which should limit the connection to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Does anyone have such a solution already in place?

That's my assumption for the config.

```
peers be_pixel_peers
   bind 9123
   log global
   localpeer {{ ansible_nodename }}
   server lb1 lb1.domain.com:1024
   server lb2 lb2.domain.com:1024


backend be_pixel_persons
   log global

   acl port_pixel dst_port {{ dst_ports["pixel"] }}
   tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

   option httpchk GET /alive
   http-check connect ssl
   timeout check 20s
   timeout server 300s

   # limit connection to backend

   stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
   http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

   

   http-request capture req.fhdr(Referer) id 0
   http-request capture req.fhdr(User-Agent) id 1
   http-request capture req.hdr(host) id 2
   http-request capture var(txn.cap_alg_keysize)  id 3
   http-request capture var(txn.cap_cipher) id 4
   http-request capture var(txn.cap_protocol) id 5

   http-response set-header X-Server %s

   balance roundrobin

   server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
   server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
   server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```

Regards
Alex






Limit requests with peers on 2 independent HAProxies to one backend

2021-11-08 Thread Aleksandar Lazic



Hi.

I have 2 LB's which should limit the connection to one backend.

I would try to use "conn_cur" in a stick table and share it via peers.
Does anyone have such a solution already in place?

That's my assumption for the config.

```
peers be_pixel_peers
  bind 9123
  log global
  localpeer {{ ansible_nodename }}
  server lb1 lb1.domain.com:1024
  server lb2 lb2.domain.com:1024


backend be_pixel_persons
  log global

  acl port_pixel dst_port {{ dst_ports["pixel"] }}
  tcp-request content silent-drop if port_pixel !{ src -f 
/etc/haproxy/whitelist.acl }

  option httpchk GET /alive
  http-check connect ssl
  timeout check 20s
  timeout server 300s

  # limit connection to backend

  stick-table type ip size 1m expire 10m store conn_cur peers be_pixel_peers
  http-request deny if { src,table_table_conn_cur(sc_conn_cur) gt 100 }

  

  http-request capture req.fhdr(Referer) id 0
  http-request capture req.fhdr(User-Agent) id 1
  http-request capture req.hdr(host) id 2
  http-request capture var(txn.cap_alg_keysize)  id 3
  http-request capture var(txn.cap_cipher) id 4
  http-request capture var(txn.cap_protocol) id 5

  http-response set-header X-Server %s

  balance roundrobin

  server pixel_persons1 {{ hosts["pixel_persons1"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons2 {{ hosts["pixel_persons2"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 2 weight 20
  server pixel_persons3 {{ hosts["pixel_persons3"] }}:8184 resolvers mydns ssl 
check check-ssl ca-file /etc/haproxy/letsencryptauthorityx3.pem maxconn 8 weight 80

```

Regards
Alex



Re: Last-minute proposal for 2.5 about httpslog

2021-11-04 Thread Aleksandar Lazic

On 04.11.21 15:28, Willy Tarreau wrote:

Hello,

as some of you know, 2.5 will come with a new "option httpslog" to ease
logging some useful TLS info by default.

While running some tests in production with the error-log-format, I
realized that we're not logging the SNI in "httpslog", and that it's
probably a significant miss that we ought to fix before the release.
I think it could be particularly useful for those using long crt-lists
with a default domain, as it will allow to figure which ones have been
handled by the default one possibly due to a missing certificate or a
misconfiguration.

Right now the default HTTPS format is defined this way :

 log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
%[fc_conn_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\
%[ssl_c_ca_err]/%[ssl_fc_is_resumed] %sslv/%sslc"

As it is, it closely matches the httplog one so that tools configured to
process the latter should also work unmodified with the new one.

The question is, should we add "ssl_fc_sni" somewhere in this line, and
if so, where? Logging it at the end seems sensible to me so that even if
it's absent we're not missing anything. But maybe there are better options
or opinions on the subject.
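For illustration, one possible placement appends the SNI at the very end of the 
current default (a sketch of the idea, not a committed format):

```
 log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
%[fc_conn_err]/%[ssl_fc_err,hex]/%[ssl_c_err]/\
%[ssl_c_ca_err]/%[ssl_fc_is_resumed] %sslv/%sslc %[ssl_fc_sni]"
```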


A big bold +1 for adding the SNI to the log.


Feel free to suggest so that we put something there before tomorrow and
have it in a last dev13 before the release.

Thanks,
Willy






Re: [ANNOUNCE] haproxy-2.5-dev10

2021-10-18 Thread Aleksandar Lazic

On 16.10.21 16:22, Willy Tarreau wrote:

Hi,

HAProxy 2.5-dev10 was released on 2021/10/16. It added 75 new commits
after version 2.5-dev9.

The smoke is progressively being blown away and we're starting to see
clearer what final 2.5 will look like.

In completely random order, here are the main changes I noticed in this
release:

   - some fixes for OpenSSL 3.0.0 support from Rémi and William; regression
 tests were fixed as well and the version in the CI was upgraded from
 alpha17 to 3.0.0

   - Rémi's JWT patches were merged. Now it becomes possible to decode
 JWT tokens and check their integrity. There are still a few pending
 patches for it but they're essentially cosmetic, so the code is
 expected to be already operational. Those who've been waiting for
 this are strongly invited to give it a try so that any required
 change has a chance to be merged before 2.5. Alex ?


That's great that the JWT feature is in HAProxy :-)

Sadly I'm no longer involved in the project in which I had planned to use it, 
therefore I can't test it in a real-world scenario.

Thank you Rémi for implementing it.

Regards
Alex



Re: BoringSSL commit dddb60e breaks compilation of HAProxy

2021-09-08 Thread Aleksandar Lazic

On 08.09.21 11:07, Willy Tarreau wrote:

On Wed, Sep 08, 2021 at 01:58:00PM +0500, Илья Шипицин wrote:

On Wed, 8 Sep 2021 at 13:54, Willy Tarreau wrote:


On Wed, Sep 08, 2021 at 12:05:23PM +0500, Илья Шипицин wrote:

Hello, Bob

I tracked an issue  https://github.com/haproxy/haproxy/issues/1386


let's track activity there


Quite frankly, I'm seriously wondering how long we'll want to keep
supporting that constantly breaking library. Does it still provide



by "let us track activity" I do not mean that we are going to maintain
BoringSSL :)

people will come from time to time with BoringSSL support request. Existing
github issue is good to redirect them to.


Oh this is how I understood it as well, I just think that you and a
handful of others have already spent a lot of energy on that lib and
I was only encouraging you not to spend way more than what you find
reasonable after this issue is created :-)


Is there another library which has the QUIC stuff implemented that can be used 
for QUIC development?


Willy






Re: Clarification about http-reuse

2021-08-18 Thread Aleksandar Lazic

On 17.08.21 16:58, Willy Tarreau wrote:

Hi Alex,

On Tue, Aug 17, 2021 at 02:19:38PM +0200, Aleksandar Lazic wrote:

```
3424 if ((curproxy->mode != PR_MODE_HTTP) && 
(curproxy->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR)
3425 curproxy->options &= ~PR_O_REUSE_MASK;
```

Does this mean that even when no "http-reuse ..." is set, "http-reuse safe" will be 
set on the proxy?


Yes, that's since 2.0. Reuse in "safe" mode is enabled by default.
You can forcefully disable it using "http-reuse never" if you want
(e.g. for debugging or if you suspect a bug in the server). But
"safe" is as safe as regular keep-alive.

Hoping this helps,


Yes, thanks.


Willy






Clarification about http-reuse

2021-08-17 Thread Aleksandar Lazic

Hi.

In the doc is this part

http://cbonte.github.io/haproxy-dconv/2.4/configuration.html#4-http-reuse

```
By default, a connection established between HAProxy and the backend server
which is considered safe for reuse is moved back to the server's idle
connections pool so that any other request can make use of it. This is the
"safe" strategy below.
```

and in the code this.

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/cfgparse.c;hb=2883fcf65bc09d4acf25561bcd955c6ca27c0438#l3424


```
3424 if ((curproxy->mode != PR_MODE_HTTP) && 
(curproxy->options & PR_O_REUSE_MASK) != PR_O_REUSE_NEVR)
3425 curproxy->options &= ~PR_O_REUSE_MASK;
```

Does this mean that even when no "http-reuse ..." is set, "http-reuse safe" will be 
set on the proxy?

Regards
Alex



Re: [WARNING] (1) : We generated two equal cookies for two different servers.

2021-08-11 Thread Aleksandar Lazic

On 11.08.21 09:04, Willy Tarreau wrote:

Hi Aleks,

On Mon, Aug 09, 2021 at 06:40:29PM +0200, Aleksandar Lazic wrote:

Hi.

We use the HAProxy 2.4 image which have now HAProxy 2.4.2.
https://hub.docker.com/layers/haproxy/library/haproxy/2.4/images/sha256-d5e2a5261d6367c31c8ce9b2e692fe67237bdc29f37f2e153d346e8b0dc7c13b?context=explore

I get this message for dynamic cookies.

```
[WARNING]  (1) : We generated two equal cookies for two different servers.
Please change the secret key for 'my-haproxy'.
```

But from my point of view, with server-template and dynamic-cookie-key this message 
makes no sense, or am I wrong?


The problem is that when using dynamic cookies, the dynamic-cookie-key,
the server's IP, and its port are hashed together to generate a fixed
cookie value that will be stable across a cluster of haproxy LBs, but
hashes are never without collisions despite being 64-bit, and here you
apparently faced one. Given how unlikely it is, I suspect that the issue
in fact is that you might have multiple servers on the same address.
Maybe just during some DNS transitions. If that's the case, maybe we
should improve the collision check to only report it if it happens for
servers with different addresses.


Well, not the same IP, but quite similar.
Your explanation can be the reason for the warning.

```
dig cloud-service.namespace.svc.cluster.local

cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.111
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.112
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.113
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.114
cloud-service.namespace.svc.cluster.local. 5IN A 10.128.2.115
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.83
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.84
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.85
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.86
cloud-service.namespace.svc.cluster.local. 5IN A 10.129.9.87
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.233
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.234
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.235
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.236
cloud-service.namespace.svc.cluster.local. 5IN A 10.131.4.237
```


Willy






[WARNING] (1) : We generated two equal cookies for two different servers.

2021-08-09 Thread Aleksandar Lazic

Hi.

We use the HAProxy 2.4 image which have now HAProxy 2.4.2.
https://hub.docker.com/layers/haproxy/library/haproxy/2.4/images/sha256-d5e2a5261d6367c31c8ce9b2e692fe67237bdc29f37f2e153d346e8b0dc7c13b?context=explore

I get this message for dynamic cookies.

```
[WARNING]  (1) : We generated two equal cookies for two different servers.
Please change the secret key for 'my-haproxy'.
```

But from my point of view, with server-template and dynamic-cookie-key this message 
makes no sense, or am I wrong?

Here the full haproxy config.

```
global
daemon
log 127.0.0.1:8514 local1 debug
maxconn 1

resolvers azure-dns
  accepted_payload_size 65535
  nameserver ocpresolver tcp@172.30.0.10:53
  resolve_retries   3
  timeout resolve   1s
  timeout retry 1s
  hold other   30s
  hold refused 30s
  hold nx  30s
  hold timeout 30s
  hold valid   10s
  hold obsolete30s

defaults
  mode http
  log global
  timeout connect 10m
  timeout client  1h
  timeout server  1h
  log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %si %sp %H %CC 
%CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

frontend stats
  bind *:9000
  stats enable
  stats uri /stats
  stats refresh 10s
  stats admin if LOCALHOST

listen my-haproxy
  bind :"8080" ssl crt /mnt/haproxy/certs/default.pem
  cookie PHPSESSID insert indirect nocache dynamic
  dynamic-cookie-key testphrase
  balance roundrobin

  server-template my 20 
my-cloud-service.my-namespace.svc.cluster.local:29099 resolvers azure-dns check
```

Regards
Alex



Re: Help

2021-07-16 Thread Aleksandar Lazic

Hi.

On 16.07.21 14:34, Anilton Silva Fernandes wrote:

Hi there…

Can I get another HELP:

This time, I want to receive a request and check the URL to know which backend 
should be called.

This is my config:

frontend web_accounts
     mode tcp
     bind 10.15.1.12:443
     default_backend accounts_servers

frontend web_apimanager
     mode tcp
     bind 10.15.1.13:443

     use_backend apiservices if { path_beg /api/ }   # if the URL starts with /api/, send to apiservices
     use_backend apimanager unless { path_beg /api } # otherwise, send to apimanager


This is not possible with TCP mode.
You have to switch to HTTP mode.

In this Blog post is such a example documented and more about HAProxy acls.

https://www.haproxy.com/blog/introduction-to-haproxy-acls/
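Concretely, routing on the path requires HTTP mode, which in turn means terminating 
TLS on the frontend so the request line is visible (a sketch; the certificate path 
is illustrative):

```
frontend web_apimanager
    mode http
    bind 10.15.1.13:443 ssl crt /etc/haproxy/certs/apimanager.pem
    use_backend apiservices if { path_beg /api }
    default_backend apimanager
```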


backend accounts_servers
    mode tcp
    balance roundrobin
    server  accounts1 10.16.18.128:443 check

backend apimanager
    mode tcp
    balance roundrobin
    server  apimanager1 10.16.18.129:9445 check

backend apiservices
    mode tcp
    balance roundrobin
    server  apimanagerqa.cvt.cv 10.16.18.129:8245 check

Thank you

From: Emerson Gomes [mailto:emerson.go...@gmail.com]
Sent: 7 July 2021 12:34
To: Anilton Silva Fernandes
Cc: haproxy@formilux.org
Subject: Re: Help

Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the 
directory where your PEM file(s) resides.

Also, make sure that both the certificate and private key are contained within 
the same PEM file.

It should look like this:

-BEGIN CERTIFICATE-
    xxx
-END CERTIFICATE-
-BEGIN PRIVATE KEY-
   xxx
-END PRIVATE KEY-

BR.,

Emerson

On Wed, 7 Jul 2021 at 14:47, Anilton Silva Fernandes <anilton.fernan...@cvt.cv> wrote:

Hi there.

Can I get some help from you.

I’m configuring HAProxy as a frontend on HTTPS with centified and I want 
clients to be redirect to BACKEND on HTTPS as well (443) but I want clients to 
see only HAProxy certificate, as the backend one is not valid.

Below is the schematic of my design (diagram not shown).

This is the configuration file I’m using:




frontend haproxy
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
    default_backend wso2

backend wso2
    mode http
    option forwardfor
    redirect scheme https if !{ ssl_fc }
    server my-api 10.16.18.128:443 check ssl verify none
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }



frontend web_accounts
    mode tcp
    bind 192.168.1.214:443
    default_backend accounts_servers

frontend web_apimanager
    mode tcp
    bind 192.168.1.215:443
    default_backend apimanager_servers

backend accounts_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check

backend apimanager_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check





The first one works, but we get SSL problems due to invalid certificates
on the backend;

The second one is what we would like, but it does not work and reports some errors:

[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind *:443' :
unable to load SSL private key from PEM file '/etc/ssl/cvt.cv/accounts_cvt.pem'.

[ALERT] 187/114337 (7823) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

[ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified 
for bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').

[ALERT] 187/114337 (7823) : Fatal errors found in configuration.

Errors in configuration file, check with haproxy check.

This is on CentOS 6

Thank you

Best regards


*Anilton Fernandes | Plataformas, Sistemas e Infraestruturas*

Cabo Verde Telecom, SA

Group Cabo Verde Telecom

Rua Cabo Verde Telecom, 1, Edificio CVT

198, Praia, Santiago, República de Cabo Verde

Phone: +238 3503934 | Mobile: +238 9589123 | Email – anilton.fernan...@cvt.cv 








Re: FYI: kubernetes api deprecation in 1.22

2021-07-16 Thread Aleksandar Lazic

On 16.07.21 10:27, Илья Шипицин wrote:

I wonder if Kubernetes has some sort of ingress conformance test, or is it up
to the ingress itself?


Yes, there is such a thing but I never used it.
https://github.com/kubernetes-sigs/ingress-controller-conformance


On Fri, Jul 16, 2021, 1:21 PM Aleksandar Lazic mailto:al-hapr...@none.at>> wrote:

Hi.

FYI, Kubernetes 1.22 has some changes which also impact Ingress and
Endpoints.

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22

Regards
Alex






FYI: kubernetes api deprecation in 1.22

2021-07-16 Thread Aleksandar Lazic

Hi.

FYI, Kubernetes 1.22 has some changes which also impact Ingress and Endpoints.

https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22

Regards
Alex



Re: Long broken option http_proxy: should we kill it ?

2021-07-10 Thread Aleksandar Lazic

On 08.07.21 19:44, Aleksandar Lazic wrote:

On 08.07.21 18:33, Willy Tarreau wrote:

Hi all,

Amaury discovered that "option http_proxy" was broken. I quickly checked
when it started, and it got broken with the introduction of HTX in 1.9
three years ago. It still used to work in legacy mode in 1.9 and 2.0
but 2.0 uses HTX by default and legacy disappeared from 2.1. Thus to
summarize it, no single version emitted during the last 2.5 years saw it
working.

As such I was considering removing it from 2.5 without prior deprecation.
My opinion is that something that doesn't work for 2.5 years and that
triggers no single report is a sufficient indicator of non-use. We'll
still need to deploy reasonable efforts to see under what conditions it
can be fixed and the fix backported, of course. Does anyone object to
this ?

For a bit of background, this option was added 14 years ago to extract
an IP address and a port from an absolute URI, rewrite it to relative
and forward the request to the original IP:port, thus acting like a
non-resolving proxy. Nowadays one could probably achieve the same
by doing something such as the following:

 http-request set-dst url_ip
 http-request set-dst-port url_port
 http-request set-uri %[path]
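
In context, the three rules could be combined in a simple listen section; the following is only a sketch under the assumption of a plain-HTTP frontend, with hypothetical section and server names:

```
listen forward-proxy
    mode http
    bind :8080
    # rewrite the absolute URI to a relative one and forward the
    # request to the IP:port taken from the original URI
    http-request set-dst url_ip
    http-request set-dst-port url_port
    http-request set-uri %[path]
    # placeholder address: the real destination comes from set-dst/set-dst-port
    server target 0.0.0.0:0
```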

And it could even involve the do_resolve() action to resolve names to
addresses. That's why I'm in favor of not even trying to keep this one
further.


+1 to remove


Funny part, there was a question in SO about this topic ;-)

https://stackoverflow.com/questions/68321275/unable-to-implement-haproxy-as-forward-proxy-for-https


Thanks,
Willy









Re: Long broken option http_proxy: should we kill it ?

2021-07-08 Thread Aleksandar Lazic

On 08.07.21 18:33, Willy Tarreau wrote:

Hi all,

Amaury discovered that "option http_proxy" was broken. I quickly checked
when it started, and it got broken with the introduction of HTX in 1.9
three years ago. It still used to work in legacy mode in 1.9 and 2.0
but 2.0 uses HTX by default and legacy disappeared from 2.1. Thus to
summarize it, no single version emitted during the last 2.5 years saw it
working.

As such I was considering removing it from 2.5 without prior deprecation.
My opinion is that something that doesn't work for 2.5 years and that
triggers no single report is a sufficient indicator of non-use. We'll
still need to deploy reasonable efforts to see under what conditions it
can be fixed and the fix backported, of course. Does anyone object to
this ?

For a bit of background, this option was added 14 years ago to extract
an IP address and a port from an absolute URI, rewrite it to relative
and forward the request to the original IP:port, thus acting like a
non-resolving proxy. Nowadays one could probably achieve the same
by doing something such as the following:

 http-request set-dst url_ip
 http-request set-dst-port url_port
 http-request set-uri %[path]

And it could even involve the do_resolve() action to resolve names to
addresses. That's why I'm in favor of not even trying to keep this one
further.


+1 to remove


Thanks,
Willy






Re: Proposal about new default SSL log format

2021-07-03 Thread Aleksandar Lazic

On 03.07.21 13:27, Илья Шипицин wrote:



Sat, 3 Jul 2021 at 16:22, Aleksandar Lazic mailto:al-hapr...@none.at>> wrote:

Hi Remi.

On 02.07.21 16:26, Remi Tricot-Le Breton wrote:
 > Hello list,
 >
 > Some work is ongoing to ease connection error and SSL handshake error logging.
 > This will rely on some new sample fetches that could be added to a custom
 > log-format string.
 > In order to ease SSL logging and debugging, we will also add a new default
 > log format for SSL connections. Now is the right time to find the best
 > format for everyone.
 > The proposed format looks like the HTTP one to which the SSL specific
 > information is added. But if anybody sees a missing information that 
could be
 > beneficial for everybody, feel free to tell it, nothing is set in stone 
yet.
 >
 > The format would look like this :
 >      >>> Jul  1 18:11:31 haproxy[143338]: 127.0.0.1:37740 [01/Jul/2021:18:11:31.517] \
 >    ssl_frontend~ ssl_frontend/s2 0/0/0/7/+7 \
 >    0/0/0/0 2750  1/1/1/1/0 0/0 TLSv1.3 TLS_AES_256_GCM_SHA384
 >
 >    Field   Format                                                     Extract from the example above
 >    1   process_name '[' pid ']:'                                      haproxy[143338]:
 >    2   client_ip ':' client_port                                      127.0.0.1:37740
 >    3   '[' request_date ']'                                           [01/Jul/2021:18:11:31.517]
 >    4   frontend_name                                                  ssl_frontend~
 >    5   backend_name '/' server_name                                   ssl_frontend/s2
 >    6   TR '/' Tw '/' Tc '/' Tr '/' Ta*                                0/0/0/7/+7
 >    7   *conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy*     0/0/0/0
 >    8   bytes_read*                                                    2750
 >    9   termination_state
 >   10   actconn '/' feconn '/' beconn '/' srv_conn '/' retries*        1/1/1/1/0
 >   11   srv_queue '/' backend_queue                                    0/0
 >   12   *ssl_version*                                                  TLSv1.3
 >   13   *ssl_ciphers*                                                  TLS_AES_256_GCM_SHA384
 >
 >
 > The equivalent log-format string would be the following :
 >      "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta \
 > %[conn_err_code]/%[ssl_fc_hsk_err]/%[ssl_c_err]/%[ssl_c_ca_err] \
 >          %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %sslv %sslc
 >
 > The fields in bold are the SSL specific ones and the statuses ones will 
come
 > from a not yet submitted code so the names and format might slightly 
change.
 >
 > Feel free to suggest any missing data, which could come from log-format
 > specific fields or already existing sample fetches.

How about combining ssl_version/ssl_ciphers in one line?

It would be helpful to see also the backend status.
Maybe add a 14th and 15th line with following fields

*backend_name '/' conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy*
*backend_name '/' ssl_version '/' ssl_ciphers*

I had in the past several issues with the backend where the backend CA 
wasn't in the CA File which
was quite difficult to debug.

+1 to the suggestion from Илья Шипицин to use iso8601 which is already in 
haproxy since
2019/10/01:2.1-dev2.

I haven't found a sub-second format parameter in the strftime call, therefore I
assume the strftime call has this ".00" as a fixed value.

```
strftime(iso_time_str, sizeof(iso_time_str), "%Y-%m-%dT%H:%M:%S.00+00:00", &tm)
```

Maybe another option is to use TAI for timestamps.


many analysis tools, for example Microsoft LogParser, ClickHouse, can perform 
queries right on top
of TSV files with iso8601 time.


Agree.
The output could be a TSV, and to get sub-second information, TAI could be used.

https://en.wikipedia.org/wiki/International_Atomic_Time 
https://cr.yp.to/proto/utctai.html 
http://www.madore.org/~david/computers/unix-leap-seconds.html 


 > Thanks
 >
 > Rémi

Jm2c

Alex






Re: Proposal about new default SSL log format

2021-07-03 Thread Aleksandar Lazic

Hi Remi.

On 02.07.21 16:26, Remi Tricot-Le Breton wrote:

Hello list,

Some work is ongoing to ease connection error and SSL handshake error logging.
This will rely on some new sample fetches that could be added to a custom
log-format string.
In order to ease SSL logging and debugging, we will also add a new default log
format for SSL connections. Now is the right time to find the best format
for everyone.
The proposed format looks like the HTTP one, to which the SSL-specific
information is added. But if anybody sees missing information that could be
beneficial for everybody, feel free to say so; nothing is set in stone yet.

The format would look like this :
     >>> Jul  1 18:11:31 haproxy[143338]: 127.0.0.1:37740 
[01/Jul/2021:18:11:31.517] \
   ssl_frontend~ ssl_frontend/s2 0/0/0/7/+7 \
   0/0/0/0 2750  1/1/1/1/0 0/0 TLSv1.3 TLS_AES_256_GCM_SHA384

   Field   Format    Extract from the example above
   1   process_name '[' pid ']:'   haproxy[143338]:
   2   client_ip ':' client_port127.0.0.1:37740
   3   '[' request_date ']'  [01/Jul/2021:18:11:31.517]
   4   frontend_name  ssl_frontend~
   5   backend_name '/' server_name ssl_frontend/s2
   6   TR '/' Tw '/' Tc '/' Tr '/' Ta*   0/0/0/7/+7
   7 *conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy* 0/0/0/0
   8 bytes_read*   2750
   9 termination_state 
  10   actconn '/' feconn '/' beconn '/' srv_conn '/' retries*    1/1/1/1/0
  11   srv_queue '/' backend_queue  0/0
  12 *ssl_version*  TLSv1.3
  13 *ssl_ciphers*   TLS_AES_256_GCM_SHA384


The equivalent log-format string would be the following :
     "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta \
%[conn_err_code]/%[ssl_fc_hsk_err]/%[ssl_c_err]/%[ssl_c_ca_err] \
         %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %sslv %sslc

The fields in bold are the SSL specific ones and the statuses ones will come
from a not yet submitted code so the names and format might slightly change.

Feel free to suggest any missing data, which could come from log-format
specific fields or already existing sample fetches.


How about combining ssl_version/ssl_ciphers in one line?

It would be helpful to see also the backend status.
Maybe add a 14th and 15th line with following fields

*backend_name '/' conn_status '/' SSL hsk error '/' SSL vfy '/' SSL CA vfy*
*backend_name '/' ssl_version '/' ssl_ciphers*

In the past I had several issues where the backend CA wasn't in the CA file,
which was quite difficult to debug.

+1 to the suggestion from Илья Шипицин to use iso8601 which is already in 
haproxy since 2019/10/01:2.1-dev2.

I haven't found a sub-second format parameter in the strftime call, therefore I
assume the strftime call has this ".00" as a fixed value.

```
strftime(iso_time_str, sizeof(iso_time_str), "%Y-%m-%dT%H:%M:%S.00+00:00", &tm)
```

Maybe another option is to use TAI for timestamps.

https://en.wikipedia.org/wiki/International_Atomic_Time
https://cr.yp.to/proto/utctai.html
http://www.madore.org/~david/computers/unix-leap-seconds.html


Thanks

Rémi


Jm2c

Alex



Line 47 in src/queue.c "s * queue's lock."

2021-06-24 Thread Aleksandar Lazic

Hi.

When someone works on src/queue.c again, could this typo be fixed:

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/queue.c;h=6d3aa9a12bcd6078d1b5a76969da4104a6adb1bd;hb=HEAD#l47

```
  44  *   - a pendconn_add() is only performed by the stream which will own the
  45  * pendconn ; the pendconn is allocated at this moment and returned ; 
it is
  46  * added to either the server or the proxy's queue while holding this
  47 s * queue's lock.
  48  *
```

Regards
Alex



Re: Weird behavior of spoe between http and https requests

2021-06-11 Thread Aleksandar Lazic

Hi.

On 11.06.21 18:07, Aleksandar Lazic wrote:

Hi.

I use haproxy 2.4 with this fe config.

```
global
     log stdout format raw daemon
     daemon
     maxconn 2
     stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd 
listeners
     stats timeout 30s

     tune.ssl.default-dh-param 2048

     # Default SSL material locations
     ca-base /etc/ssl/certs
     crt-base /etc/ssl/private


     # See 
https://ssl-config.mozilla.org/#server=haproxy=2.1=old=1.1.1d=5.4
     ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
     ssl-default-bind-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
     ssl-default-bind-options no-tls-tickets ssl-min-ver TLSv1.0

     ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
     ssl-default-server-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
     ssl-default-server-options no-tls-tickets ssl-min-ver TLSv1.0


defaults http
   log global
   mode http
   retry-on all-retryable-errors
   option forwardfor
   option redispatch
   option http-ignore-probes
   option httplog
   option dontlognull
   option log-health-checks
   option socket-stats
   timeout connect 5s
   timeout client  50s
   timeout server  50s
   http-reuse safe
   errorfile 400 /etc/haproxy/errors/400.http
   errorfile 403 /etc/haproxy/errors/403.http
   errorfile 408 /etc/haproxy/errors/408.http
   errorfile 500 /etc/haproxy/errors/500.http
   errorfile 502 /etc/haproxy/errors/502.http
   errorfile 503 /etc/haproxy/errors/503.http
   errorfile 504 /etc/haproxy/errors/504.http

frontend http-in
   bind *:80
   mode http

   unique-id-format %rt
   http-request set-var(sess.my_fe_path) path
   http-request set-var(sess.my_fe_src) src
   http-request set-var(sess.my_fe_referer) req.hdr(Referer)
   http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

   # define the spoe agents
   filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
   filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf

frontend https-in

   bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file 
/etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

   unique-id-format %rt
   http-request set-var(sess.my_fe_path) path
   http-request set-var(sess.my_fe_src) src
   http-request set-var(sess.my_fe_referer) req.hdr(Referer)
   http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

   # define the spoe agents
   filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
   filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf
```

And with this spoe config.
```
[agent-on-http-req]
spoe-agent agent-on-http-req

     log global

     messages agent-on-http-req

     option var-prefix feevents

     timeout hello  2s
     timeout idle   2m
     timeout processing 1s

     use-backend agent-on-http-req

spoe-message agent-on-http-req
     args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id 
my_req_host=req.hdr(Host)
     event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

     log global

     messages agent-on-http-res

     option var-prefix feevents

     timeout hello  2s
     timeout idle   2m
     timeout processing 1s

     use-backend agent-on-http-res

spoe-message agent-on-http-res
     args my_path=var(sess.my_fe_path) my_src=src 
my_referer=var(sess.my_fe_referer) my_sid=unique-id 
my_req_host=var(sess.my_fe_requestedhost)
     event on-http-response
```

Now when I make an HTTP request I get all values and args.
```
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Name  
:agent-on-http-req:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Count :5:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Arg Name  
:my_path:
Jun 11 16:01

Weird behavior of spoe between http and https requests

2021-06-11 Thread Aleksandar Lazic

Hi.

I use haproxy 2.4 with this fe config.

```
global
log stdout format raw daemon
daemon
maxconn 2
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd 
listeners
stats timeout 30s

tune.ssl.default-dh-param 2048

# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private


# See 
https://ssl-config.mozilla.org/#server=haproxy=2.1=old=1.1.1d=5.4
ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-default-bind-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options no-tls-tickets ssl-min-ver TLSv1.0

ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-default-server-ciphersuites 
TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-server-options no-tls-tickets ssl-min-ver TLSv1.0


defaults http
  log global
  mode http
  retry-on all-retryable-errors
  option forwardfor
  option redispatch
  option http-ignore-probes
  option httplog
  option dontlognull
  option log-health-checks
  option socket-stats
  timeout connect 5s
  timeout client  50s
  timeout server  50s
  http-reuse safe
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

frontend http-in
  bind *:80
  mode http

  unique-id-format %rt
  http-request set-var(sess.my_fe_path) path
  http-request set-var(sess.my_fe_src) src
  http-request set-var(sess.my_fe_referer) req.hdr(Referer)
  http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

  # define the spoe agents
  filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
  filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf

frontend https-in

  bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file 
/etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

  unique-id-format %rt
  http-request set-var(sess.my_fe_path) path
  http-request set-var(sess.my_fe_src) src
  http-request set-var(sess.my_fe_referer) req.hdr(Referer)
  http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

  # define the spoe agents
  filter spoe engine agent-on-http-req config /etc/haproxy/spoe-url.conf
  filter spoe engine agent-on-http-res config /etc/haproxy/spoe-url.conf
```

And with this spoe config.
```
[agent-on-http-req]
spoe-agent agent-on-http-req

log global

messages agent-on-http-req

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-req

spoe-message agent-on-http-req
args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id 
my_req_host=req.hdr(Host)
event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

log global

messages agent-on-http-res

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-res

spoe-message agent-on-http-res
args my_path=var(sess.my_fe_path) my_src=src 
my_referer=var(sess.my_fe_referer) my_sid=unique-id 
my_req_host=var(sess.my_fe_requestedhost)
event on-http-response
```

Now when I make an HTTP request I get all values and args.
```
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Name  
:agent-on-http-req:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Msg Count :5:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Arg Name  
:my_path:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 2021/06/11 16:01:01 Arg Value 
:/test:
Jun 11 16:01:01 reggata-001 spoe-url[112969]: 

[PATCH] DOC: use the req.ssl_sni in examples

2021-06-05 Thread Aleksandar Lazic

Hi.

This patch fixes the usage of req_ssl_sni in the doc.

Any plan to remove the old keyword or add some warning that this
keyword is deprecated?

Regards
Alex
>From 84fe0fa89548c384322f47bc3eb37ea9843d0eb8 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sat, 5 Jun 2021 13:23:08 +0200
Subject: [PATCH] DOC: use the req.ssl_sni in examples

This patch should be backported to at least 2.0
---
 doc/configuration.txt | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6b7cc2666..5b1768e89 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13228,16 +13228,16 @@ use-server <server> unless <condition>
   The "use-server" statement works both in HTTP and TCP mode. This makes it
   suitable for use with content-based inspection. For instance, a server could
   be selected in a farm according to the TLS SNI field when using protocols with
-  implicit TLS (also see "req_ssl_sni"). And if these servers have their weight
+  implicit TLS (also see "req.ssl_sni"). And if these servers have their weight
   set to zero, they will not be used for other traffic.
 
   Example :
  # intercept incoming TLS requests based on the SNI field
- use-server www if { req_ssl_sni -i www.example.com }
+ use-server www if { req.ssl_sni -i www.example.com }
  server www 192.168.0.1:443 weight 0
- use-server mail if { req_ssl_sni -i mail.example.com }
+ use-server mail if { req.ssl_sni -i mail.example.com }
  server mail 192.168.0.1:465 weight 0
- use-server imap if { req_ssl_sni -i imap.example.com }
+ use-server imap if { req.ssl_sni -i imap.example.com }
  server imap 192.168.0.1:993 weight 0
  # all the rest is forwarded to this server
  server  default 192.168.0.2:443 check
@@ -18727,7 +18727,7 @@ ssl_fc_sni : string
   matching the HTTPS host name (253 chars or less). The SSL library must have
   been built with support for TLS extensions enabled (check haproxy -vv).
 
-  This fetch is different from "req_ssl_sni" above in that it applies to the
+  This fetch is different from "req.ssl_sni" above in that it applies to the
   connection being deciphered by HAProxy and not to SSL contents being blindly
   forwarded. See also "ssl_fc_sni_end" and "ssl_fc_sni_reg" below. This
   requires that the SSL library is built with support for TLS extensions
@@ -18998,13 +18998,13 @@ req_ssl_sni : string (deprecated)
   the example below. See also "ssl_fc_sni".
 
   ACL derivatives :
-req_ssl_sni : exact string match
+req.ssl_sni : exact string match
 
   Examples :
  # Wait for a client hello for at most 5 seconds
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
- use_backend bk_allow if { req_ssl_sni -f allowed_sites }
+ use_backend bk_allow if { req.ssl_sni -f allowed_sites }
  default_backend bk_sorry_page
 
 req.ssl_st_ext : integer
-- 
2.25.1
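
For reference, the pattern the patch documents — routing TCP connections on the TLS SNI with the modern req.ssl_sni fetch — could look like this in a complete frontend. The names and addresses below are hypothetical:

```
frontend tls_passthrough
    mode tcp
    bind :443
    # wait for the TLS client hello before routing on SNI
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_www  if { req.ssl_sni -i www.example.com }
    use_backend bk_mail if { req.ssl_sni -i mail.example.com }
    default_backend bk_sorry_page
```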



Re: Proxy Protocol - any browser proxy extensions that support ?

2021-06-04 Thread Aleksandar Lazic

On 04.06.21 21:32, Jim Freeman wrote:

https://developer.chrome.com/docs/extensions/reference/proxy/
supports SOCKS4/SOCKS5

Does anyone know of any in-browser VPN/proxy extensions that support
Willy's Proxy Protocol ?
https://www.haproxy.com/blog/haproxy/proxy-protocol/ enumerates some
of the state of support, but doesn't touch on browser VPN/proxy
extensions, and my due-diligence googling is coming up short ...


Well not a real browser but a Swedish army knife :-)

https://github.com/curl/curl/commit/6baeb6df35d24740c55239f24b5fc4ce86f375a5

`haproxy-protocol`


Thanks,
...jfree






Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-04 Thread Aleksandar Lazic

On 02.06.21 11:38, Christopher Faulet wrote:

On 6/1/21 8:26 PM, Aleksandar Lazic wrote:

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:
This phrasing is understandable to me, but now I'm wondering if this is the best 
solution. Maybe the already existing user-configurable unique request ID should 
instead be sent to the SPOE and then logged?


https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinion from the other community Members?



The SID provided in the SPOE log message is the one used in the SPOP frame header. 
This way it is possible to match a corresponding log message emitted by the agent.


The "unique-id-format %rt" fixes the issue for me.

Regarding the format for this log message, its original purpose was to diagnose 
problems. Instead of adding custom information, I guess the best would be to have 
a "log-format" directive. At least to not break existing tools parsing those 
log messages. But to do so, all parts of the current message must be available
via log variables and/or sample fetches. And, at first glance, it will be hard 
to achieve (sample fetches are probably easier though).


Regarding the stream_uniq_id sample fetch, it is a good idea to add it. 
In fact, when it makes sense, a log variable must also be accessible via a 
sample fetch. Tim's remarks about the patch are valid. For the scope, INTRN or 
L4CLI, I don't know. I'm inclined to choose INTRN.


Let me withdraw my patch because I use the following configs to satisfy my
requirement.


```
global
log stdout format raw daemon
# daemon
maxconn 2

defaults
log global
mode http
option  httplog
option  dontlognull
timeout connect 5000
timeout client  5
timeout server  5

frontend haproxynode
bind *:9080
mode http

unique-id-format %rt
http-request set-var(sess.my_fe_path) path
http-request set-var(sess.my_fe_src) src
http-request set-var(sess.my_fe_referer) req.hdr(Referer)
http-request set-var(sess.my_fe_requestedhost) req.hdr(Host)

# define the spoe agents
filter spoe engine agent-on-http-req config resources/haproxy/spoe-url.conf
filter spoe engine agent-on-http-res config resources/haproxy/spoe-url.conf

# map the spoe response to acl variables
# acl authenticated var(sess.allevents.info) -m bool

http-response set-header x-spoe %[var(sess.feevents.info)]
default_backend streams

backend agent-on-http-req
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend agent-on-http-res
mode tcp
log global

server spoe 127.0.0.1:9000 check

backend streams
log global

server socat 127.0.0.1:1234 check
```

```
[agent-on-http-req]
spoe-agent agent-on-http-req

log global

messages agent-on-http-req

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-req

spoe-message agent-on-http-req
args my_path=path my_src=src my_referer=req.hdr(Referer) my_sid=unique-id 
my_req_host=req.hdr(Host)
event on-frontend-http-request

[agent-on-http-res]
spoe-agent agent-on-http-res

log global

messages agent-on-http-res

option var-prefix feevents

timeout hello  2s
timeout idle   2m
timeout processing 1s

use-backend agent-on-http-res

spoe-message agent-on-http-res
args my_path=var(sess.my_fe_path) my_src=src 
my_referer=var(sess.my_fe_referer) my_sid=unique-id 
my_req_host=var(sess.my_fe_requestedhost)
event on-http-response
```



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic

On 01.06.21 14:23, Tim Düsterhus wrote:

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include this 
specific ID somewhere (just) for logging purposes.


Yep.
Any opinion from the other community Members?


Best regards
Tim Düsterhus






Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Aleksandar Lazic
Tim,

Jun 1, 2021 9:50:17 AM Tim Düsterhus :

> Aleks,
>
> On 6/1/21 1:03 AM, Aleksandar Lazic wrote:
>>>>  srv_conn([/]) : integer
>>>>    Returns an integer value corresponding to the number of currently 
>>>> established
>>>>    connections on the designated server, possibly including the connection 
>>>> being
>>>> @@ -17514,6 +17509,9 @@ stopping : boolean
>>>>  str(<string>) : string
>>>>    Returns a string.
>>>>
>>>> +stream_uniq_id : integer
>>>> +  Returns the uniq stream id.
>>>> +
>>>
>>> This explanation is not useful to the reader (even I don't understand it).
>> […]
>> This is shown on the SPOE log line as sid and therefore I think it should be
>> possible to get the same ID also within HAProxy as fetch method.
>> ```
>> SPOE: [agent-on-http-req]  sid=88 st=0 
>> 0/0/0/0/0 1/1 0/0 10/33
>> ```
>> […]
>> ```
>> This fetch method returns the internal Stream ID, if a stream is available. The
>> internal Stream ID is used in several places in HAProxy to trace the stream
>> inside HAProxy. It is also used in SPOE as the "sid" value.
>> ```
>>
>
> This phrasing is understandable to me, but now I'm wondering if this is the 
> best solution. Maybe the already existing user-configurable unique request ID 
> should instead be sent to the SPOE and then logged?
>
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id
>
> The request_counter (%rt) you mentioned could be embedded into this unique-id.

Well, this unique-id is not sent as the Stream ID to the SPOA receiver; because 
of that you can't tell which stream is the troubled one.

> Best regards
> Tim Düsterhus

Regards
Alex


Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Tim.

On 31.05.21 23:23, Tim Düsterhus wrote:

Aleks,

On 5/31/21 9:35 PM, Aleksandar Lazic wrote:

While trying to get the stream ID from the SPOA, I noticed that there is no 
fetch method for the stream ID.


Attached is a patch which adds the sample fetch for the stream ID.
I assume it could be backported up to version 2.0.


The backporting information should be part of the commit message. But I don't 
think it's going to be backported that far.

Further comments inline.


From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.

-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-


Good catch, but please split moving this around into a dedicated patch (DOC).


Done.


 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.

+stream_uniq_id : integer
+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand it).


Hm. Well it fetches the uniq_id from the stream struct.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=include/haproxy/stream-t.h;h=9499e94d77feea0dad787eb3bd7b6b0375ca0148;hb=HEAD#l120
120 unsigned int uniq_id;   /* unique ID used for the traces */

This is shown on the SPOE log line as sid and therefore I think it should be
possible to get the same ID also within HAProxy as fetch method.

```
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33
```

In the log this is the variable "%rt" when a stream is available; when no stream
is available, it is the "global.req_count".

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/log.c;h=7dabe16f8fa54631f6eab815eb73f77d058d0368;hb=HEAD#l2178

In the doc it is described as request_counter, which is only true when no stream
is available; when a stream is available, %rt is the uniq id.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=doc/configuration.txt;h=11c38945c29d2d28c9afb13afed60b30a97069cb;hb=HEAD#l20576
20576   |   | %rt  | request_counter (HTTP req or TCP session) | numeric  |

So, yes I agree it's difficult to describe it in the doc for the normal user.

How about this wording.

```
This fetch method returns the internal Stream ID, if a stream is available. The
internal Stream ID is used in several places in HAProxy to trace the stream
inside HAProxy. It is also used in SPOE as the "sid" value.
```
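If the patch were applied, the proposed fetch could be used like any other internal sample fetch. A purely illustrative sketch (not part of the patch), exposing the internal stream ID in a response header and in the logs so it can be correlated with the SPOE "sid":

```
# Hypothetical usage of the proposed stream_uniq_id fetch.
frontend fe
    bind :8080
    mode http
    http-response set-header X-Stream-ID "%[stream_uniq_id]"
    log-format "%ci:%cp [%tr] %ft sid=%[stream_uniq_id] %ST"
    default_backend be
```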



 table_avl([<table>]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.

+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/src/sample.c b/src/sample.c
index 09c272c48..5d3b06b10 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4210,6 +4210,18 @@ static int smp_fetch_uuid(const struct arg *args, struct sample *smp, const char
 return 0;
 }

+/* returns the stream uniq_id */
+static int
smp_fetch_stream_uniq_id(const struct arg *args, struct sample *smp, const char *kw, void *private)


I believe the 'static int' should go on the same line.


Well, I copied it from "smp_fetch_cpu_calls", but yes, most of the other fetches
have it on the same line, so I will put it on the same line.


+{
+    if (!smp->strm)
+    return 0;
+
+    smp->data.type = SMP_T_SINT;
+    smp->data.u.sint = smp->strm->uniq_id;
+    return 1;
+}
+
 /* Note: must not be declared <const> as its list will be 

[PATCH] DOC/MINOR: move uuid in the configuration to the right, alphabetical order

2021-05-31 Thread Aleksandar Lazic

Fix alphabetical order of uuid
>From bb84a45b848b879f41ab37343b50057323a6ff19 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Tue, 1 Jun 2021 00:27:01 +0200
Subject: [PATCH] DOC/MINOR: move uuid in the configuration to the right
 alphabetical order

This patch can be backported up to 2.1 where the uuid fetch was
introduced

---
 doc/configuration.txt | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..9264f03ce 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-
 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17528,6 +17523,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+  
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
-- 
2.25.1



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

On 31.05.21 14:23, Aleksandar Lazic wrote:

Hi.

While trying to get the stream ID from the SPOA, I noticed that there is no 
fetch method for the stream ID.


Attached is a patch which adds the sample fetch for the stream ID.
I assume it could be backported up to version 2.0.

Regards
Alex


The discussion is here.
https://github.com/criteo/haproxy-spoe-go/issues/28

That's the sid in filter spoa log output.
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/flt_spoe.c;h=a68f7b9141025963e8f4ad79c0d1617a4c59774e;hb=HEAD#l2815

```
2815         if (ctx->status_code || !(conf->agent_fe.options2 & PR_O2_NOLOGNORM))
2816                 send_log(&conf->agent_fe, (!ctx->status_code ? LOG_NOTICE : LOG_WARNING),
2817                          "SPOE: [%s] <%s> sid=%u st=%u %ld/%ld/%ld/%ld/%ld %u/%u %u/%u %llu/%llu\n",
2818                          agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code,
                                                             ^^
2819                          ctx->stats.t_request, ctx->stats.t_queue, ctx->stats.t_waiting,
2820                          ctx->stats.t_response, ctx->stats.t_process,
2821                          agent->counters.idles, agent->counters.applets,
2822                          agent->counters.nb_sending, agent->counters.nb_waiting,
2823                          agent->counters.nb_errors, agent->counters.nb_processed);

```

It looks to me that the %rt log format has the stream id, right?

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=doc/configuration.txt;h=a13a9a77f8a077a6ac798b1dccc8a0f2f3f67396;hb=HEAD#l20576

|   | %rt  | request_counter (HTTP req or TCP session) | numeric |

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l3175
3175 case LOG_FMT_COUNTER: // %rt

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l2202
2202         uniq_id = _HA_ATOMIC_FETCH_ADD(&global.req_count, 1);

Regards
Alex
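Tim's suggestion from this thread — embedding the request counter into a configurable unique-id — can be sketched in a config fragment. The unique-id-format string below is the example given in the HAProxy documentation; the frontend around it is made up for illustration:

```
# Sketch: embed %rt (request counter / stream ID) into the unique-id,
# send it to the client, and log it for correlation.
frontend fe
    bind :8080
    mode http
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
    log-format "%ci:%cp [%tr] %ft %ID"
    default_backend be
```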



>From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([<version>]) : string
-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-
 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.
 
+stream_uniq_id : integer
+  Returns the uniq stream id.
+
 table_avl([<table>]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([<version>]) : string
+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/reg-tests/sample_fetches/stream_id.vtc b/reg-tests/sample_fetches/stream_id.vtc
new file mode 100644
index 0..ec512b198
--- /dev/null
+++ b/reg-tests/sample_fetches/stream_id.vtc
@@ -0,0 +1,33 @@
+varnishtest "stream id sample fetch Test"
+
+#REQUIRE_VERSION=2.0
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+} -start
+
+haproxy h1 -conf {
+defaults
+mode http
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe
+bind "fd@${fe}"
+http-response set-header stream-id   "

Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Aleksandar Lazic

Hi.

While trying to get the stream ID from the SPOA, I noticed that there is no 
fetch method for the stream ID.

The discussion is here.
https://github.com/criteo/haproxy-spoe-go/issues/28

That's the sid in filter spoa log output.
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/flt_spoe.c;h=a68f7b9141025963e8f4ad79c0d1617a4c59774e;hb=HEAD#l2815

```
2815         if (ctx->status_code || !(conf->agent_fe.options2 & PR_O2_NOLOGNORM))
2816                 send_log(&conf->agent_fe, (!ctx->status_code ? LOG_NOTICE : LOG_WARNING),
2817                          "SPOE: [%s] <%s> sid=%u st=%u %ld/%ld/%ld/%ld/%ld %u/%u %u/%u %llu/%llu\n",
2818                          agent->id, spoe_event_str[ev], s->uniq_id, ctx->status_code,
                                                             ^^
2819                          ctx->stats.t_request, ctx->stats.t_queue, ctx->stats.t_waiting,
2820                          ctx->stats.t_response, ctx->stats.t_process,
2821                          agent->counters.idles, agent->counters.applets,
2822                          agent->counters.nb_sending, agent->counters.nb_waiting,
2823                          agent->counters.nb_errors, agent->counters.nb_processed);

```

It looks to me that the %rt log format has the stream id, right?

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=doc/configuration.txt;h=a13a9a77f8a077a6ac798b1dccc8a0f2f3f67396;hb=HEAD#l20576

|   | %rt  | request_counter (HTTP req or TCP session) | numeric |

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l3175
3175 case LOG_FMT_COUNTER: // %rt

http://git.haproxy.org/?p=haproxy-2.4.git;a=blob;f=src/log.c;hb=c5c5bc4e36ce4a6f3bc113c8e16824fdb276c220#l2202
2202         uniq_id = _HA_ATOMIC_FETCH_ADD(&global.req_count, 1);

Regards
Alex



Old Github Issue

2021-05-25 Thread Aleksandar Lazic

Hi.

I wanted to clean up some old issues but was not able to, because I'm not sure
whether the bugs are still valid, especially for 1.8/1.9 and previous versions.

https://github.com/haproxy/haproxy/issues?page=10&q=is%3Aissue+is%3Aopen

It would be nice if someone with more knowledge than me could take a look
and close the issues that are no longer relevant or already fixed.

Regards
Alex



Re: Brainstorming to add JWT verify to HAPoxy (was: Re: What's the "best" way to read a file in a sample converter)

2021-05-02 Thread Aleksandar Lazic

On 01.05.21 19:45, Julien Pivotto wrote:

On 01 May 18:40, Aleksandar Lazic wrote:


On 01.05.21 14:38, Julien Pivotto wrote:

I do not know what you are trying to achieve.


I'm trying to add, at the first line of defense => HAProxy, the possibility to
protect the backends from attacks without talking to anything outside of HAProxy.


Did you see https://github.com/criteo/haproxy-spoe-auth ?


Yes, but this also requires an external component, much like a Lua script does.
I would like to have the verification inside HAProxy.


Well yes, thanks for sharing.

There are some environments where you can't use SPOE and therefore it would be
nice to have the option to verify the token before any connection goes to any
backend or SPOE agent.


Did you also see the other approach
https://github.com/haproxytech/haproxy-lua-jwt then?






On 01 May 13:42, Aleksandar Lazic wrote:


On 30.04.21 02:01, Aleksandar Lazic wrote:

Hi.

I am thinking about integrating "l8w8jwt_decode(...)" into HAProxy.
https://github.com/GlitchedPolygons/l8w8jwt

The RS* methods require some "RSA_PRIVATE_KEY[] = ..." and I'm not sure
what the best method is for a sample converter to read such a key in HAProxy.

My suggestion for the converter name.

jwt_verify(alg,key) : boolean

Example call:
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,HSKEY)
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,"path_to_RS_PEM")

Any opinions?


Some more examples and questions.

I have such a sequence in mind.
```

# check if the request has a Bearer Token
# https://tools.ietf.org/html/rfc6750
acl bearer_header_exist req.hdr(Authorization) -m beg Bearer

# Get the right HMAC or PEM-File into the variable jwt_verify_value
http-request set-var(txn.jwt_verify_value) 
req.hdr(host),map_str(jwt_pem.lst),read_file_to_string if bearer_header_exist

# Extract the JSON Web Algorithms (JWA) from Bearer Token.
http-request set-var(txn.jwt_algo) 
req.hdr(Authorization),word(1,.),ub64dec,json_query('$.alg')   if 
bearer_header_exist


# Verify the JWT Token with the right HMAC and PEM
http-request set-var(txn.jwt_check) 
req.hdr(Authorization),ub64dec,jwt_verify(%[var(txn.jwt_algo)],%[var(txn.jwt_verify_value)])
 \

if  bearer_header_exist { 
jwt_valid_algo(%[var(txn.jwt_algo)]) }

```
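For illustration, the map file referenced by map_str(jwt_pem.lst) above could look like this. This is entirely hypothetical — the jwt_verify converter and the file-reading behaviour discussed here do not exist yet, and the hosts and paths are made up:

```
# jwt_pem.lst (hypothetical): maps the request's Host header to the key
# material the proposed jwt_verify converter would use for that host.
api.example.com   /etc/haproxy/jwt/api_rs256_public.pem
login.example.com MY_HS256_SHARED_SECRET
```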

jwt_valid_algo will be similar to fix_is_valid; it will check whether the
'$.alg' is a supported JSON Web Algorithm.

Do I need to call some functions in the converters (jwt_verify, jwt_valid_algo)
to look up '%[var(...)]'?
I haven't found a function which does the read_file_to_string; does such a
function exist in HAProxy?
Can I create a $MAP or $DATA_STRUCTURE to avoid reading the file on every
request?
Is there a max size of a variable in HAProxy?

Any feedback is very welcome.

Regards
Alex













Re: Brainstorming to add JWT verify to HAPoxy

2021-05-01 Thread Aleksandar Lazic

On 01.05.21 15:08, Tim Düsterhus wrote:

Aleks,

On 5/1/21 1:42 PM, Aleksandar Lazic wrote:

# Extract the JSON Web Algorithms (JWA) from Bearer Token.
http-request set-var(txn.jwt_algo) 
req.hdr(Authorization),word(1,.),ub64dec,json_query('$.alg')  if 
bearer_header_exist


Trusting the algorithm specified in the JWT is unsafe and a common source of 
security issues.


Agree. I was a bad example.


Best regards
Tim Düsterhus





Re: Brainstorming to add JWT verify to HAPoxy (was: Re: What's the "best" way to read a file in a sample converter)

2021-05-01 Thread Aleksandar Lazic



On 01.05.21 14:38, Julien Pivotto wrote:

I do not know what you are trying to achieve.


I'm trying to add, at the first line of defense => HAProxy, the possibility to
protect the backends from attacks without talking to anything outside of HAProxy.


Did you see https://github.com/criteo/haproxy-spoe-auth ?



Well yes, thanks for sharing.

There are some environments where you can't use SPOE and therefore it would be
nice to have the option to verify the token before any connection goes to any
backend or SPOE agent.




On 01 May 13:42, Aleksandar Lazic wrote:


On 30.04.21 02:01, Aleksandar Lazic wrote:

Hi.

I am thinking about integrating "l8w8jwt_decode(...)" into HAProxy.
https://github.com/GlitchedPolygons/l8w8jwt

The RS* methods require some "RSA_PRIVATE_KEY[] = ..." and I'm not sure
what the best method is for a sample converter to read such a key in HAProxy.

My suggestion for the converter name.

jwt_verify(alg,key) : boolean

Example call:
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,HSKEY)
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,"path_to_RS_PEM")

Any opinions?


Some more examples and questions.

I have such a sequence in mind.
```

# check if the request has a Bearer Token
# https://tools.ietf.org/html/rfc6750
acl bearer_header_exist req.hdr(Authorization) -m beg Bearer

# Get the right HMAC or PEM-File into the variable jwt_verify_value
http-request set-var(txn.jwt_verify_value) 
req.hdr(host),map_str(jwt_pem.lst),read_file_to_string if bearer_header_exist

# Extract the JSON Web Algorithms (JWA) from Bearer Token.
http-request set-var(txn.jwt_algo) 
req.hdr(Authorization),word(1,.),ub64dec,json_query('$.alg')   if 
bearer_header_exist


# Verify the JWT Token with the right HMAC and PEM
http-request set-var(txn.jwt_check) 
req.hdr(Authorization),ub64dec,jwt_verify(%[var(txn.jwt_algo)],%[var(txn.jwt_verify_value)])
 \

   if  bearer_header_exist { 
jwt_valid_algo(%[var(txn.jwt_algo)]) }

```

jwt_valid_algo will be similar to fix_is_valid; it will check whether the
'$.alg' is a supported JSON Web Algorithm.

Do I need to call some functions in the converters (jwt_verify, jwt_valid_algo)
to look up '%[var(...)]'?
I haven't found a function which does the read_file_to_string; does such a
function exist in HAProxy?
Can I create a $MAP or $DATA_STRUCTURE to avoid reading the file on every
request?
Is there a max size of a variable in HAProxy?

Any feedback is very welcome.

Regards
Alex








Re: [ANNOUNCE] haproxy-2.4-dev18

2021-05-01 Thread Aleksandar Lazic

Hi.

On 01.05.21 09:14, Willy Tarreau wrote:

Hi,

HAProxy 2.4-dev18 was released on 2021/05/01. It added 51 new commits
after version 2.4-dev17.

It seems that it's been quite a calm week in terms of development, with
most of the time having been spent on old bugs that are not even *that*
serious. Most of them were corner cases occasionally causing peers to be
desynchronized on reload. These were rare enough to have remained unnoticed
since 1.6 for some of them, but some users reloading extremely frequently
managed to trigger them in visible ways and Emeric finally managed to get
rid of all of them.

A new URI normalizer "percent-decode-unreserved" was added, the
"default-path" directive was implemented to chdir into a configurable
path for each config file (absolute, relative to the config file,
relative to its parent) in order to ease packaging of external files
(maps, certs, Lua, errorfiles etc).

A new minor but convenient CLI feature was added, the ability to atomically
replace a map or ACL. We already had everything available internally, it
only had no CLI equivalent, and that was causing annoying limitations to
the ingress controller. So it's now possible to create a new version of a
file, add values to that specific version, then commit it once complete
without leaving any single period with an incomplete map.

Code cleanup has continued in various areas (channel, config parser, Lua,
HTX), and doc cleanups continued as well in the config manual.

I'm almost done with the changes I had in my queue, only the config defines
are left pending. I know that Christopher still has a few patches that we
need to discuss, and that Amaury is finishing the mechanism to gracefully
close idle client connections on reload, so both should be ready next week.

I'm thinking about doing a final pass on the "help" text on the CLI to
re-align it and make it readable again. We long tried to keep the 80-column
format but it's already broken *and* ugly. We can probably push the limit
slightly further but make it readable again. Or we may even shorten the
help description a bit.

Another point that I'd like to clean up is the format of the ALERT/WARNING/
NOTICE messages emitted on stderr. They're followed by numbers that nobody
knows what they correspond to. Now I know since I went into the code to see
that but I'm sure that one month from now I'll have forgotten. And I'm
pretty sure nobody else knows either. Hint: it's the day of year and the
time. I was thinking about changing that output format to present only the
message level and the PID since the date is usually present in whatever log
this lands into. These were added 20 years ago when haproxy was mostly used
as a debugging tool in foreground, to ease having a quick glance at a server's
console... Something tells me that use cases have changed since then :-)

If you're relying on them or if you have some reasonable suggestion about
a convenient variant, feel free to propose, but quickly (and I will not go
into a bikeshedding discussion).

It was also mentioned that for some use cases it could be convenient to have
the process' start time and uptime expressed in milliseconds or microseconds.
I initially thought about adding an extra field to present these extensions
but I'm rolling back on this idea because nothing guarantees they will be
dumped at the same moment as the field they extend. So I'm left with 3
possibilities on which I'd like to collect opinions:
   - add new fields "Uptime_usec", "Start_time_usec" which dump these info
 as 64-bit integers with the micro-second precision ;

   - modify existing fields to dump them as floats, at the risk of breaking
 existing parsers which wouldn't like to see a dot in a value, though
 Start_time_sec was added in 2.4-dev and can still be changed ;

   - pass an option to "show info" to automatically append the sub-second
 precision but leave default format as-is

If we'd go for the last option, maybe the approach would instead be to say
that the parser supports float values, and that we could over time improve
the precision of other measures (like connection rates) so I tend to think
it wouldn't be that bad an investment for the long term. Ideas welcome as
usual.


I also vote for the last option.


I think that the following weeks will be mostly focused on doc and final
code cleanups (and likely on last-minute fixes, as happens with every
version). So unless we meet some last-minute painful bugs that require
heavy head-scratching, or we figure we've forgotten something really
important, I think it's reasonable to expect a final release any time
between ~10 days from now and the end of the month.

If you haven't started testing 2.4-dev yet, I'd encourage you to do so
now, at least to verify that it matches your use cases and expectations,
and still have an opportunity to report any concern before the release.

In the mean time, have a nice week-end!

Please find the usual URLs below :
Site index   : 

Brainstorming to add JWT verify to HAPoxy (was: Re: What's the "best" way to read a file in a sample converter)

2021-05-01 Thread Aleksandar Lazic



On 30.04.21 02:01, Aleksandar Lazic wrote:

Hi.

I am thinking about integrating "l8w8jwt_decode(...)" into HAProxy.
https://github.com/GlitchedPolygons/l8w8jwt

The RS* methods require some "RSA_PRIVATE_KEY[] = ..." and I'm not sure
what the best method is for a sample converter to read such a key in HAProxy.

My suggestion for the converter name.

jwt_verify(alg,key) : boolean

Example call:
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,HSKEY)
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,"path_to_RS_PEM")

Any opinions?


Some more examples and questions.

I have such a sequence in mind.
```

# check if the request has a Bearer Token
# https://tools.ietf.org/html/rfc6750
acl bearer_header_exist req.hdr(Authorization) -m beg Bearer

# Get the right HMAC or PEM-File into the variable jwt_verify_value
http-request set-var(txn.jwt_verify_value) 
req.hdr(host),map_str(jwt_pem.lst),read_file_to_string if bearer_header_exist

# Extract the JSON Web Algorithms (JWA) from Bearer Token.
http-request set-var(txn.jwt_algo) 
req.hdr(Authorization),word(1,.),ub64dec,json_query('$.alg')   if 
bearer_header_exist


# Verify the JWT Token with the right HMAC and PEM
http-request set-var(txn.jwt_check) 
req.hdr(Authorization),ub64dec,jwt_verify(%[var(txn.jwt_algo)],%[var(txn.jwt_verify_value)])
 \

  if  bearer_header_exist { 
jwt_valid_algo(%[var(txn.jwt_algo)]) }

```

jwt_valid_algo will be similar to fix_is_valid; it will check whether the
'$.alg' is a supported JSON Web Algorithm.

Do I need to call some functions in the converters (jwt_verify, jwt_valid_algo)
to look up '%[var(...)]'?
I haven't found a function which does the read_file_to_string; does such a
function exist in HAProxy?
Can I create a $MAP or $DATA_STRUCTURE to avoid reading the file on every
request?
Is there a max size of a variable in HAProxy?

Any feedback is very welcome.

Regards
Alex



What's the "best" way to read a file in a sample converter

2021-04-29 Thread Aleksandar Lazic

Hi.

I am thinking about integrating "l8w8jwt_decode(...)" into HAProxy.
https://github.com/GlitchedPolygons/l8w8jwt

The RS* methods require some "RSA_PRIVATE_KEY[] = ..." and I'm not sure
what the best method is for a sample converter to read such a key in HAProxy.

My suggestion for the converter name.

jwt_verify(alg,key) : boolean

Example call:
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,HSKEY)
http-request set-var(txn.jwt_verified) 
req.hdr(Authorization),ub64dec,jwt_verify(alg,"path_to_RS_PEM")

Any opinions?

Regards
Alex



[PATCH] DOC: general: fix example in set-timeout

2021-04-27 Thread Aleksandar Lazic

Hi.

attach the fix for set-timeout.

Regards
Alex
>From 8ca8f7385a16894a6c74cd31d1b8507fc32fb36e Mon Sep 17 00:00:00 2001
From: Alex 
Date: Tue, 27 Apr 2021 12:57:07 +0200
Subject: [PATCH] DOC: general: fix example in set-timeout

The alternative arguments are always in curly brackets, let's fix it for
set-timeout.
The Example in set-timeout does not have the one of the required argument.

This commit makes the PR https://github.com/cbonte/haproxy-dconv/pull/34
obsolete.

---
 doc/configuration.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 0453380f6..0808433ec 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6571,7 +6571,7 @@ http-request set-src-port <expr> [ { if | unless } <condition> ]
   the address family supports a port, otherwise it forces the source address to
   IPv4 "0.0.0.0" before rewriting the port.
 
-http-request set-timeout server|tunnel { <timeout> | <expr> }
+http-request set-timeout { server | tunnel } { <timeout> | <expr> }
 [ { if | unless } <condition> ]
 
   This action overrides the specified "server" or "tunnel" timeout for the
@@ -6586,8 +6586,8 @@ http-request set-timeout server|tunnel { <timeout> | <expr> }
   results.
 
   Example:
-http-request set-timeout server 5s
-http-request set-timeout hdr(host),map_int(host.lst)
+http-request set-timeout tunnel 5s
+http-request set-timeout server req.hdr(host),map_int(host.lst)
 
 http-request set-tos  [ { if | unless }  ]
 
-- 
2.25.1
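With the corrected syntax from the patch above, a complete usage sketch in a frontend could look like this (backend, path and map names are made up for illustration):

```
frontend fe
    bind :8080
    mode http
    # static value, applied conditionally
    http-request set-timeout tunnel 5s if { path_beg /ws }
    # timeout looked up per Host header from a map file
    http-request set-timeout server req.hdr(host),map_int(host.lst)
    default_backend be
```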



[PATCH] DOC: general: fix white spaces for HTML converter

2021-04-24 Thread Aleksandar Lazic

Hi.

The HTML converter expects specific formatting in order to recognize that a
keyword is a keyword.

Regards
alex
>From 9ed588c09a3ceb3af62bc9e4f9c7950fe0c58c7f Mon Sep 17 00:00:00 2001
From: Alex 
Date: Sat, 24 Apr 2021 13:02:21 +0200
Subject: [PATCH] DOC: general: fix white spaces for HTML converter

The HTML converter expects specific formatting in order to recognize that a
keyword is a keyword.

---
 doc/configuration.txt | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 375eedafa..65831e242 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16255,7 +16255,7 @@ mod(<value>)
   This prefix is followed by a name. The separator is a '.'. The name may only
   contain characters 'a-z', 'A-Z', '0-9', '.' and '_'.
 
-mqtt_field_value(,)
+mqtt_field_value(,)
   Returns value of  found in input MQTT payload of type
   .
can be either a string (case insensitive matching) or a numeric
@@ -17318,17 +17318,17 @@ srv_sess_rate([<backend>/]<server>) : integer
 acl srv2_full srv_sess_rate(be1/srv2) gt 50
 use_backend be2 if srv1_full or srv2_full
 
-srv_iweight([<backend>/]<server>): integer
+srv_iweight([<backend>/]<server>) : integer
   Returns an integer corresponding to the server's initial weight. If 
   is omitted, then the server is looked up in the current backend. See also
   "srv_weight" and "srv_uweight".
 
-srv_uweight([<backend>/]<server>): integer
+srv_uweight([<backend>/]<server>) : integer
   Returns an integer corresponding to the user visible server's weight. If
is omitted, then the server is looked up in the current
   backend. See also "srv_weight" and "srv_iweight".
 
-srv_weight([<backend>/]<server>): integer
+srv_weight([<backend>/]<server>) : integer
   Returns an integer corresponding to the current (or effective) server's
   weight. If  is omitted, then the server is looked up in the current
   backend. See also "srv_iweight" and "srv_uweight".
-- 
2.25.1



Re: HAproxy Origin header 403 forbidden

2021-04-17 Thread Aleksandar Lazic
Hi.

Please can you share your config and the output of haproxy -vv?

Regards
Alex

Apr 17, 2021 5:34:38 PM Marcello Lorenzi :

> Hi All,
> We're experiencing an issue on our haproxy 2.2 instance. We configured some 
> backends and all worked fine but if we tried to forward some requests with 
> the header Origin we received a 403 error, but we didn't have any extra 
> config.
> 
> Could you help us to identify the issue?
> 
> Thanks,
> Marcello



Re: [PATCH v2 0/8] URI normalization / Issue #714

2021-04-17 Thread Aleksandar Lazic

On 17.04.21 13:23, Tim Düsterhus wrote:

Willy,

On 4/17/21 12:09 PM, Willy Tarreau wrote:

With the renaming already made I consider the configuration syntax to be
stable enough for a 2.4. I'll leave the final decision regarding that up to
you, though. Especially since 2.4 is going to be an LTS.


What we can possibly do, if you're not completely sure about the naming
(it's often a very difficult aspect to deal with), is to merge the series,
ask users in the next release announcement to have a look and possibly
suggest updates before the release. We can then mark the new actions as


With the new names (1st patch this morning) I think that the naming is good. It 
is descriptive and follows a well-defined naming scheme that allows for future 
extension.


experimental in the doc, and remove the experimental status after a while.
Or if the features look solid enough and you're feeling ready to deal with
occasionally possible bug reports, we can merge them and even not pass via
an experimental status.


I added the experimental marking in the 3rd patch this morning.

Generally I think that it looks solid enough, though. During development I 
carefully researched the relevant documentation (e.g. the URI RFC) and tested 
the behavior of different clients and servers. It also comes with quite a few 
tests ensuring that the normalizers behave like I expect them to.

Nonetheless I might have missed something and correct handling of URIs is a 
sensitive part of the request handling, so an experimental note still is 
appropriate.

All in all: I think that the 8 v2 patches + the 3 patches from this morning 
together result in something that is appropriate for HAProxy 2.4.


I'm open to various options. Anyway I do think that URI normalization is
a useful feature to have.

I think that some of the actions will probably end up being replicated
as converters, so maybe in the end the sequence below:

    http-request normalize-uri path-merge-slashes
    http-request normalize-uri path-strip-dotdot

could end up like this:

    http-request set-path %[path,path-merge-slashes,path-strip-dotdot]

The pre-release period is the right one to evaluate such options, so
I'm not worried about any outcome.


I would advise against making them into converters, because it forces the user 
to think about the appropriate fetch to use. As an example, the 
path-strip-dotdot normalizer probably should not be applied to the query 
string! The actions hide this type of detail from the user, which I consider 
to be a good thing.


Well I think also that the usage of

http-request set-path %[path,path-merge-slashes,path-strip-dotdot]

would be more "natural".
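For reference, the two spellings side by side in haproxy configuration syntax. Note that path-merge-slashes and path-strip-dotdot exist only as arguments of the normalize-uri action; the converter spelling is the hypothetical one discussed in this thread:

```
frontend fe
    bind :8080
    # Action form, as merged (marked experimental for 2.4):
    http-request normalize-uri path-merge-slashes
    http-request normalize-uri path-strip-dotdot

    # Hypothetical converter form -- these converters do not exist:
    # http-request set-path %[path,path-merge-slashes,path-strip-dotdot]
```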

jm2c


Best regards
Tim Düsterhus






Bandwidth limitation in HAProxy

2021-04-16 Thread Aleksandar Lazic

Hi.

How difficult would it be to add bandwidth limitation to HAProxy, similar to
the nginx feature?

https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

Regards

Aleks



Re: [PATCH] MINOR: sample: add json_string

2021-04-15 Thread Aleksandar Lazic

On 15.04.21 17:09, Willy Tarreau wrote:

On Thu, Apr 15, 2021 at 04:49:00PM +0200, Aleksandar Lazic wrote:

#define JSON_INT_MAX ((1ULL << 53) - 1)

^
Sorry I was not clear, please drop that 'U' here.


I'm also sorry, I was in a tunnel :-/

Attached now the next patches.


Thank you! Now applied. I just fixed this remaining double indent issue
and that was all:


+   if (arg[1].data.str.data != 0) {
+   if (strcmp(arg[1].data.str.area, "int") != 0) {
+   memprintf(err, "output_type only supports \"int\" as 
argument");
+   return 0;
+   } else {
+   arg[1].type = ARGT_SINT;
+   arg[1].data.sint = 0;
+   }
+   }


Thanks Aleks! You see it wasn't that hard in the end :-)


Cool ;-) :-)

Now the statement of what I wanted to say ;-)

HAProxy now has at least 4 possibilities to route traffic and
extract some data:

HTTP fields
GRPC fields
FIX fields
JSON fields

Have I missed something?

I love this Project and the community.
Thanks Willy and Tim for your passion and precise reviews ;-)


Willy



Best regards
Aleks



Re: [PATCH] MINOR: sample: add json_string

2021-04-15 Thread Aleksandar Lazic

On 15.04.21 16:09, Willy Tarreau wrote:

On Thu, Apr 15, 2021 at 04:05:27PM +0200, Aleksandar Lazic wrote:

Well I don't think so because 4 is still bigger than -9007199254740991 ;-)


This is because *you* think it is -9007199254740991 but the reality
is that it's not, due to ULL:

   #define JSON_INT_MAX ((1ULL << 53) - 1)
   #define JSON_INT_MIN (-JSON_INT_MAX)

=> it's -9007199254740991ULL hence 18437736874454810625 so 4 is
definitely not larger than this.



Never the less I have changed the defines and rerun the tests.
Btw, this vtest is a great enhancement to haproxy ;-)


Yes I totally agree. And you can't imagine how many times I'm angry
at it when it detects an error after a tiny change I make, just to
realize that I did really break something and that it was right :-)
Like all tools it just needs to be reasonably used, not excessively
trusted but used as a good hint that something unexpected changed,
and it helps a lot!



```
#define JSON_INT_MAX ((1ULL << 53) - 1)

   ^
Sorry I was not clear, please drop that 'U' here.


I'm also sorry, I was in a tunnel :-/

Attached now the next patches.


Willy



Regards
Aleks
>From 2f0673eb3e8a41e173221933021af2392d9a8ca4 Mon Sep 17 00:00:00 2001
From: Alex 
Date: Thu, 15 Apr 2021 16:45:15 +0200
Subject: [PATCH 2/2] MINOR: sample: converter: Add json_query converter

With json_query, a JSON value can be extracted from a header
or body of the request and saved to a variable.

This converter makes it possible to handle some JSON workload
to route requests to different backends.
---
 doc/configuration.txt  | 24 
 reg-tests/converter/json_query.vtc | 95 ++
 src/sample.c   | 88 +++
 3 files changed, 207 insertions(+)
 create mode 100644 reg-tests/converter/json_query.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f242300e7..61c2a6dd9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15961,6 +15961,30 @@ json([])
   Output log:
  {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}
 
+json_query(<json_path>,[<output_type>])
+  The json_query converter supports the JSON types string, boolean and
+  number. Floating point numbers will be returned as a string. By
+  specifying the output_type 'int' the value will be converted to an
+  Integer. If conversion is not possible the json_query converter fails.
+
+  <json_path> must be a valid JSON Path string as defined in
+  https://datatracker.ietf.org/doc/draft-ietf-jsonpath-base/
+
+  Example:
+ # get a integer value from the request body
+ # "{"integer":4}" => 5
+ http-request set-var(txn.pay_int) req.body,json_query('$.integer','int'),add(1)
+
+ # get a key with '.' in the name
+ # {"my.key":"myvalue"} => myvalue
+ http-request set-var(txn.pay_mykey) req.body,json_query('$.my\\.key')
+
+ # {"boolean-false":false} => 0
+ http-request set-var(txn.pay_boolean_false) req.body,json_query('$.boolean-false')
+
+ # get the value of the key 'iss' from a JWT Bearer token
+ http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss')
+
 language(<value>[,<default>])
   Returns the value with the highest q-factor from a list as extracted from the
   "accept-language" header using "req.fhdr". Values with no q-factor have a
diff --git a/reg-tests/converter/json_query.vtc b/reg-tests/converter/json_query.vtc
new file mode 100644
index 0..ade7b4ccb
--- /dev/null
+++ b/reg-tests/converter/json_query.vtc
@@ -0,0 +1,95 @@
+varnishtest "JSON Query converters Test"
+#REQUIRE_VERSION=2.4
+
+feature ignore_unknown_macro
+
+server s1 {
+	rxreq
+	txresp
+} -repeat 8 -start
+
+haproxy h1 -conf {
+defaults
+	mode http
+	timeout connect 1s
+	timeout client  1s
+	timeout server  1s
+	option http-buffer-request
+
+frontend fe
+	bind "fd@${fe}"
+	tcp-request inspect-delay 1s
+
+	http-request set-var(sess.header_json) req.hdr(Authorization),json_query('$.iss')
+	http-request set-var(sess.pay_json) req.body,json_query('$.iss')
+	http-request set-var(sess.pay_int) req.body,json_query('$.integer',"int"),add(1)
+	http-request set-var(sess.pay_neg_int) req.body,json_query('$.negativ-integer',"int"),add(1)
+	http-request set-var(sess.pay_double) req.body,json_query('$.double')
+	http-request set-var(sess.pay_boolean_true) req.body,json_query('$.boolean-true')
+	http-request set-var(sess.pay_boolean_false) req.body,json_query('$.boolean-false')
+	http-request set-var(sess.pay_mykey) req.body,json_query('$.my\\.key')
+
+	http-response set-header x-var_header %[var(sess.header_json)]
+	http-response set-header x-var_body %[var(sess.pay_json)]
+	http-response set-header x-var_body_int %[var(sess.pay_int)]
+	http-response set-header x-var_body_neg_int %[v

Re: [PATCH] MINOR: sample: add json_string

2021-04-15 Thread Aleksandar Lazic

On 15.04.21 15:55, Willy Tarreau wrote:

On Thu, Apr 15, 2021 at 03:41:18PM +0200, Aleksandar Lazic wrote:

Now when I remove the check "smp->data.u.sint < 0" every positive value is
bigger than JSON_INT_MIN and returns 0.


But don't you agree that this test DOES nothing ? If it changes anything
it means the issue is somewhere else and is a hidden bug waiting to strike
and that we must address it.

Look:

  if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)

is exactly equivalent to:

  if (smp->data.u.sint < JSON_INT_MIN && smp->data.u.sint < 0)

JSON_INT_MIN < 0 so the first part implies the second one. Said differently,
there is no value of sint that validates the first condition without also
validating the second.

I think it checks if the value is negative or positive and then verifies if the
value is bigger than the max allowed value, +/-.

Maybe I'm thinking wrong, so let us work with concrete values.

```
printf("\n\n>> smp->data.u.sint :%lld: < JSON_INT_MIN :%lld: if-no-check:%d:<<\n",   
smp->data.u.sint,JSON_INT_MIN,(smp->data.u.sint < JSON_INT_MIN));
printf(">> smp->data.u.sint :%lld: > JSON_INT_MAX :%lld: if-no-check:%d:<<\n\n", 
smp->data.u.sint,JSON_INT_MAX,(smp->data.u.sint > JSON_INT_MAX));

if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)
return 0;
else if (smp->data.u.sint > 0 && smp->data.u.sint > JSON_INT_MAX)
return 0;

```

Input is here 4.

smp->data.u.sint :4: < JSON_INT_MIN :-9007199254740991: if-no-check:1:<<
smp->data.u.sint :4: > JSON_INT_MAX :9007199254740991:  if-no-check:0:<<


Input is here -4.


smp->data.u.sint :-4: < JSON_INT_MIN :-9007199254740991: if-no-check:0:<<
smp->data.u.sint :-4: > JSON_INT_MAX :9007199254740991:  if-no-check:1:<<


OK I think I got it. It's just because your definitions of JSON_INT_MIN
and JSON_INT_MAX are unsigned and the comparison is made in unsigned mode.
So when you do "4 < JSON_INT_MIN" it's in fact "4 < 2^64-(1<<53)-1" so it's
true. And conversely for the other one.

I'm pretty sure that if you change your constants to:

   #define JSON_INT_MAX ((1LL << 53) - 1)
   #define JSON_INT_MIN (-JSON_INT_MAX)

It will work :-)


Well I don't think so because 4 is still bigger than -9007199254740991 ;-)

Never the less I have changed the defines and rerun the tests.
Btw, this vtest is a great enhancement to haproxy ;-)

```
#define JSON_INT_MAX ((1ULL << 53) - 1)
#define JSON_INT_MIN (-JSON_INT_MAX)

printf("\n\n>> smp->data.u.sint :%lld: < JSON_INT_MIN :%lld: if-no-check:%d:<<\n", 
smp->data.u.sint,JSON_INT_MIN,(smp->data.u.sint < JSON_INT_MIN));
printf(">> smp->data.u.sint :%lld: > JSON_INT_MAX :%lld: if-no-check:%d:<<\n\n", 
smp->data.u.sint,JSON_INT_MAX, (smp->data.u.sint > JSON_INT_MAX));

if (smp->data.u.sint < JSON_INT_MIN)
return 0;
else if (smp->data.u.sint > JSON_INT_MAX)
return 0;
```

>> smp->data.u.sint :4: < JSON_INT_MIN :-9007199254740991: if-no-check:1:<<
>> smp->data.u.sint :4: > JSON_INT_MAX :9007199254740991: if-no-check:0:<<



That's among the historical idiocies of the C language that considers
the signedness as part of the variable instead of being the mode of the
operation applied to the variable. This results in absurd combinations.

Willy






Re: [PATCH] MINOR: sample: add json_string

2021-04-15 Thread Aleksandar Lazic

On 15.04.21 14:48, Willy Tarreau wrote:

On Thu, Apr 15, 2021 at 02:17:45PM +0200, Aleksandar Lazic wrote:

I, by far, prefer Tim's proposal here, as I do not even understand the
first one, sorry Aleks, please don't feel offended :-)


Well you know my focus is to support HAProxy and therefore it's okay.
The contribution was in the past much easier, but you know time changes.


It's not getting harder, we've always had numerous round trips,
however now there are more people participating and it's getting
increasingly difficult to maintain a constant level of quality so
it is important to take care about maintainability, which implies
being careful about the coding style (which is really not strict)
and a good level of English in the doc (which remains achievable
as most of the contributors are not native speakers so we're not
using advanced English). In addition there's nothing wrong with
saying "I need someone to reword this part where I don't feel at
ease", it's just that nobody will force it on you as it would not
be kind nor respectful of your work.

In fact I'd say that it's got easier because most of the requirements
have been formalized by now, or are not unique to this project but
shared with other ones.


Okay, got you.


  From my point of view it is necessary to check if the value is a negative
value, and only then check whether the max '-' range is reached.


But the first one is implied by the second. It looks like a logical
error when read like this, it makes one think the author had something
different in mind. It's like writing "if (a < 0 && a < -2)". It is
particularly confusing.


Well, then this does not work anymore


If so it precisely shows that a problem remains somewhere else.


Hm, maybe.


http-request set-var(sess.pay_int) req.body,json_query('$.integer',"int"),add(1)

with the given defines.

#define JSON_INT_MAX ((1ULL << 53) - 1)
#define JSON_INT_MIN (0 - JSON_INT_MAX)

Because "{"integer":4}" => 5" and 5 is bigger than JSON_INT_MIN which is 
(0-JSON_INT_MAX)

This sequence works because I check if the value is negative
("smp->data.u.sint < 0") and only then check if the negative max border
"JSON_INT_MIN" is reached.


I'm sorry but I don't get it.


if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)

The same belongs to the positive max int.

Now when I remove the check "smp->data.u.sint < 0" every positive value is
bigger than JSON_INT_MIN and returns 0.


But don't you agree that this test DOES nothing ? If it changes anything
it means the issue is somewhere else and is a hidden bug waiting to strike
and that we must address it.

Look:

 if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)

is exactly equivalent to:

 if (smp->data.u.sint < JSON_INT_MIN && smp->data.u.sint < 0)

JSON_INT_MIN < 0 so the first part implies the second one. Said differently,
there is no value of sint that validates the first condition without also
validating the second.

I think it checks if the value is negative or positive and then verifies if the
value is bigger than the max allowed value, +/-.

Maybe I'm thinking wrong, so let us work with concrete values.

```
printf("\n\n>> smp->data.u.sint :%lld: < JSON_INT_MIN :%lld: if-no-check:%d:<<\n",   
smp->data.u.sint,JSON_INT_MIN,(smp->data.u.sint < JSON_INT_MIN));
printf(">> smp->data.u.sint :%lld: > JSON_INT_MAX :%lld: if-no-check:%d:<<\n\n", 
smp->data.u.sint,JSON_INT_MAX,(smp->data.u.sint > JSON_INT_MAX));

if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)
return 0;
else if (smp->data.u.sint > 0 && smp->data.u.sint > JSON_INT_MAX)
return 0;

```

Input is here 4.
>> smp->data.u.sint :4: < JSON_INT_MIN :-9007199254740991: if-no-check:1:<<
>> smp->data.u.sint :4: > JSON_INT_MAX :9007199254740991:  if-no-check:0:<<

Input is here -4.

>> smp->data.u.sint :-4: < JSON_INT_MIN :-9007199254740991: if-no-check:0:<<
>> smp->data.u.sint :-4: > JSON_INT_MAX :9007199254740991:  if-no-check:1:<<

It looks to me like, when the comparison is done with a positive value, it
will be true for JSON_INT_MIN, and when it is done with a negative value, it
will be true for JSON_INT_MAX.

So the concrete question is how to check the value in the positive and negative
range without the "smp->data.u.sint < 0" or "smp->data.u.sint > 0" checks.

I haven't found any other solution; I'm open to any suggestions.


Willy



Regards
Aleks



Re: [PATCH] MINOR: sample: add json_string

2021-04-15 Thread Aleksandar Lazic

On 15.04.21 09:08, Willy Tarreau wrote:

On Wed, Apr 14, 2021 at 09:52:31PM +0200, Aleksandar Lazic wrote:

+   - string  : This is the default search type and returns a String;
+   - boolean : If the JSON value is not a String or a Number
+   - number  : When the JSON value is a Number then will the value be
+   converted to a String. If its known that the value is a
+   integer then add 'int' to the  which helps
+   haproxy to convert the value to a integer for further usage;


I'd probably completely rephrase this as:

The json_query converter supports the JSON types string, boolean and
number. Floating point numbers will be returned as a string. By specifying
the output_type 'int' the value will be converted to an Integer. If
conversion is not possible the json_query converter fails.


Well I would like to hear also some other opinions about the wording.


I, by far, prefer Tim's proposal here, as I do not even understand the
first one, sorry Aleks, please don't feel offended :-)


Well you know my focus is to support HAProxy and therefore it's okay.
Contributing was much easier in the past, but you know, times change.


+    switch(tok) {
+    case MJSON_TOK_NUMBER:
+    if (args[1].type == ARGT_SINT) {
+    smp->data.u.sint = atoll(p);
+
+    if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN) {
+    /* JSON integer too big negativ value */


This comment appears to be useless. It is implied by the 'if'. I also believe that 
the 'sint < 0' check is not needed.


Well I prefer to document in the comment what the if is doing.


OK but then please be careful about spelling, or it will force Ilya to
send yet another spell-checker patch.


 From my point of view it is necessary to check if the value is a negative
value, and only then check whether the max '-' range is reached.


But the first one is implied by the second. It looks like a logical
error when read like this, it makes one think the author had something
different in mind. It's like writing "if (a < 0 && a < -2)". It is
particularly confusing.


Well, then this does not work anymore

http-request set-var(sess.pay_int) req.body,json_query('$.integer',"int"),add(1)

with the given defines.

#define JSON_INT_MAX ((1ULL << 53) - 1)
#define JSON_INT_MIN (0 - JSON_INT_MAX)

Because "{"integer":4}" => 5" and 5 is bigger than JSON_INT_MIN which is 
(0-JSON_INT_MAX)

This sequence works because I check if the value is negative
("smp->data.u.sint < 0") and only then check if the negative max border
"JSON_INT_MIN" is reached.

if (smp->data.u.sint < 0 && smp->data.u.sint < JSON_INT_MIN)

The same belongs to the positive max int.

Now when I remove the check "smp->data.u.sint < 0" every positive value is
bigger than JSON_INT_MIN and returns 0.

How about to add this information into the comments?


Maybe there is a better solution, I'm open for suggestions.

I can move the comment above the 'if'.


You have the choice as long as it's clear:
   - above the if, you describe what you're testing and why
   - inside the if, you describe the condition you've validated.

As it is now, it's best inside the if.

Thanks!
Willy






Re: [PATCH] MINOR: sample: add json_string

2021-04-14 Thread Aleksandar Lazic

On 14.04.21 18:41, Tim Düsterhus wrote:

Aleks,

On 4/14/21 1:19 PM, Aleksandar Lazic wrote:

From 46ddac8379324b645c662e19de39d5de4ac74a77 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 14 Apr 2021 13:11:26 +0200
Subject: [PATCH 2/2] MINOR: sample: converter: Add json_query converter

With the json_query can a JSON value be extacted from a Header
or body of the request and saved to a variable.

This converter makes it possible to handle some JSON Workload
to route requests to differnt backends.


Typo: different.


---
 doc/configuration.txt  |  32 
 reg-tests/converter/json_query.vtc | 116 +
 src/sample.c   |  95 +++
 3 files changed, 243 insertions(+)
 create mode 100644 reg-tests/converter/json_query.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f242300e7..374e7939b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15961,6 +15961,38 @@ json([])
   Output log:
  {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}

+


This empty line should not be there.


+json_query(<json_path>,[<output_type>])
+  This converter searches for the key given by  and returns
+  the value.
+  <json_path> must be a valid JSONPath String as defined in


I'd use string in lowercase.


+  https://datatracker.ietf.org/doc/draft-ietf-jsonpath-base/
+
+  A floating point value will always be returned as String.
+
+  The follwing JSON types are recognized.


Typo: following.
I'd also use a ':' instead of '.'.


+   - string  : This is the default search type and returns a String;
+   - boolean : If the JSON value is not a String or a Number
+   - number  : When the JSON value is a Number then will the value be
+   converted to a String. If its known that the value is a
+   integer then add 'int' to the <output_type> which helps
+   haproxy to convert the value to a integer for further usage;


I'd probably completely rephrase this as:

The json_query converter supports the JSON types string, boolean and number. 
Floating point numbers will be returned as a string. By specifying the 
output_type 'int' the value will be converted to an Integer. If conversion is 
not possible the json_query converter fails.


Well I would like to hear also some other opinions about the wording.


+  Example:
+ # get the value of the key 'iss' from a JWT Bearer token
+ http-request set-var(txn.token_payload) 
req.hdr(Authorization),word(2,.),ub64dec,json_query('$.iss')
+
+ # get a integer value from the request body
+ # "{"integer":4}" => 5
+ http-request set-var(txn.pay_int) 
req.body,json_query('$.integer','int'),add(1)
+
+ # get a key with '.' in the name
+ # {"my.key":"myvalue"} => myvalue
+ http-request set-var(txn.pay_mykey) req.body,json_query('$.my\\.key')
+
+ # {"boolean-false":false} => 0
+ http-request set-var(txn.pay_boolean_false) 
req.body,json_query('$.boolean-false')


These examples look good to me. I'd just move the JWT example to the bottom, so 
that the simple examples come first.


I prefer to keep it like this.



 language(<value>[,<default>])
   Returns the value with the highest q-factor from a list as extracted from the
   "accept-language" header using "req.fhdr". Values with no q-factor have a
diff --git a/reg-tests/converter/json_query.vtc 
b/reg-tests/converter/json_query.vtc
new file mode 100644
index 0..88ef58a0c
--- /dev/null
+++ b/reg-tests/converter/json_query.vtc
@@ -0,0 +1,116 @@
+varnishtest "JSON Query converters Test"
+#REQUIRE_VERSION=2.4
+
+feature ignore_unknown_macro
+
+server s1 {
+    rxreq
+    txresp
+
+    rxreq
+    txresp
+
+    rxreq
+    txresp
+
+ rxreq
+ txresp
+
+ rxreq
+ txresp
+
+ rxreq
+ txresp
+
+ rxreq
+ txresp
+
+ rxreq
+ txresp
+} -start


You can use '-repeat 8' to simplify the server definition.


Good hint, thanks.


+haproxy h1 -conf {
+    defaults
+    mode http
+    timeout connect 1s
+    timeout client  1s
+    timeout server  1s
+    option http-buffer-request
+
+    frontend fe
+    bind "fd@${fe}"
+    tcp-request inspect-delay 1s
+
+    http-request set-var(sess.header_json) 
req.hdr(Authorization),json_query('$.iss')
+ http-request set-var(sess.pay_json) req.body,json_query('$.iss')
+ http-request set-var(sess.pay_int) 
req.body,json_query('$.integer',"int"),add(1)
+ http-request set-var(sess.pay_neg_int) 
req.body,json_query('$.negativ-integer',"int"),add(1)


Inconsistent indentation here.


+    http-request set-var(sess.pay_double) req.body,json_query('$.double')
+ http-request set-var(sess.pay_boolean_true) 
req.body,json_query('$.boolean-true')
+ http-request set-var(sess.pay_boolean_false) 
req.body,json_query('$.boolean-false')
+ http-request set-var(sess.pay_my

Re: [PATCH] MINOR: sample: add json_string

2021-04-14 Thread Aleksandar Lazic

Hi.

here now the current version of the patches.

Regards
Aleks.

On 14.04.21 10:45, Aleksandar Lazic wrote:

On 14.04.21 04:36, Willy Tarreau wrote:

On Wed, Apr 14, 2021 at 03:02:20AM +0200, Aleksandar Lazic wrote:

But then, could it make sense to also support "strict integers": values
that can accurately be represented as integers and which are within the
JSON valid range for integers (-2^52 to 2^52 with no decimal part) ?
This would then make the converter return nothing in case of violation
(i.e. risk of losing precision). This would also reject NaN and infinite
that the lib supports.


You mean the same check which is done in arith_add().


Not exactly because arith_add only checks for overflows after addition
and tries to cap the result, but I'd rather just say that if the decoded
number is <= -2^53 or >= 2^53 then the converter should return a no match
in case an integer was requested.


Okay got you.

There is such a check in stats.c which I copied to sample.c but this does
not look right.

Maybe I should create an include/haproxy/json-t.h and add the values there,
what do you think?


```
/* Limit JSON integer values to the range [-(2**53)+1, (2**53)-1] as per
* the recommendation for interoperable integers in section 6 of RFC 7159.
*/
#define JSON_INT_MAX ((1ULL << 53) - 1)
#define JSON_INT_MIN (0 - JSON_INT_MAX)

/* This sample function get the value from a given json string.
* The mjson library is used to parse the json struct
*/
static int sample_conv_json_query(const struct arg *args, struct sample *smp, 
void *private)
{
 struct buffer *trash = get_trash_chunk();
 const char *p; /* holds the temporary string from mjson_find */
 int tok, n;    /* holds the token enum and the length of the value */
 int rc;    /* holds the return code from mjson_get_string */

 tok = mjson_find(smp->data.u.str.area, smp->data.u.str.data,
 args[0].data.str.area, &p, &n);

 switch(tok) {
     case MJSON_TOK_NUMBER:
     if (args[1].type == ARGT_SINT) {
     smp->data.u.sint = atoll(p);

     if (smp->data.u.sint < JSON_INT_MIN || smp->data.u.sint > 
JSON_INT_MAX)
     return 0;

     smp->data.type = SMP_T_SINT;
     } else {
...
```


Hmmm that's not your fault but now I'm seeing that we already have a
converter inappropriately called "json", so we don't even know in which
direction it works by just looking at its name :-(  Same issue as for
base64.

May I suggest that you call yours "json_decode" or maybe shorter
"json_dec" so that it's more explicit that it's the decode one ? Because
for me "json_string" is the one that will emit a json string from some
input (which it is not). Then we could later create "json_enc" and warn
when "json" alone is used. Or even "jsdec" and "jsenc" which are much
shorter and still quite explicit.


How about "json_query" because it's exactly what it does :-)


I'm not familiar with the notion of "query" to decode and extract contents
but I'm not the most representative user and am aware of the "jq" command-
line utility that does this. So if it sounds natural to others I'm fine
with this.


I'm seeing that there's a very nice mjson_find() which does *exactly* what
you need:

 "In a JSON string s, len, find an element by its JSONPATH path. Save
  found element in tokptr, toklen. If not found, return JSON_TOK_INVALID.
  If found, return one of: MJSON_TOK_STRING, MJSON_TOK_NUMBER,
  MJSON_TOK_TRUE, MJSON_TOK_FALSE, MJSON_TOK_NULL, MJSON_TOK_ARRAY,
  MJSON_TOK_OBJECT.
  If a searched key contains ., [ or ] characters, they should be escaped
  by a backslash."

So you get the type in return. I think you can then call one of the
related functions depending on what is found, which is more reliable
than iterating over multiple attempts.


Oh yes, this sounds like a better approach.
I have now used this suggestion and I hope you can help me to fix the double
parsing issue, or is it acceptable to parse the input twice?


 From what I've seen in the code in the lib you have no other option.
I thought it might be possible to call mjson_get_string() on the
resulting pointer but you would need to find a way to express that
you want to extract the immediate content, maybe by having an empty
key designation or something like this. This point is not clear to
me and the unit tests in the project all re-parse the input string
after mjson_find(), so probably this is the way to do it.


The check functions handles the int arg now as suggested.

```
/* This function checks the "json_query" converter's arguments. */
static int sample_check_json_query(struct arg *arg, struct sample_conv *conv,
    const char *file, int line, char **err)
{
if (arg[0].data.str.data == 0) { /* 

Re: [PATCH] MINOR: sample: add json_string

2021-04-14 Thread Aleksandar Lazic

On 14.04.21 04:36, Willy Tarreau wrote:

On Wed, Apr 14, 2021 at 03:02:20AM +0200, Aleksandar Lazic wrote:

But then, could it make sense to also support "strict integers": values
that can accurately be represented as integers and which are within the
JSON valid range for integers (-2^52 to 2^52 with no decimal part) ?
This would then make the converter return nothing in case of violation
(i.e. risk of losing precision). This would also reject NaN and infinite
that the lib supports.


You mean the same check which is done in arith_add().


Not exactly because arith_add only checks for overflows after addition
and tries to cap the result, but I'd rather just say that if the decoded
number is <= -2^53 or >= 2^53 then the converter should return a no match
in case an integer was requested.


Okay got you.

There is such a check in stats.c which I copied to sample.c but this does
not look right.

Maybe I should create an include/haproxy/json-t.h and add the values there,
what do you think?


```
/* Limit JSON integer values to the range [-(2**53)+1, (2**53)-1] as per
 * the recommendation for interoperable integers in section 6 of RFC 7159.
 */
#define JSON_INT_MAX ((1ULL << 53) - 1)
#define JSON_INT_MIN (0 - JSON_INT_MAX)

/* This sample function get the value from a given json string.
 * The mjson library is used to parse the json struct
 */
static int sample_conv_json_query(const struct arg *args, struct sample *smp, 
void *private)
{
struct buffer *trash = get_trash_chunk();
const char *p; /* holds the temporary string from mjson_find */
int tok, n;/* holds the token enum and the length of the value */
int rc;/* holds the return code from mjson_get_string */

tok = mjson_find(smp->data.u.str.area, smp->data.u.str.data,
args[0].data.str.area, &p, &n);

switch(tok) {
case MJSON_TOK_NUMBER:
if (args[1].type == ARGT_SINT) {
smp->data.u.sint = atoll(p);

if (smp->data.u.sint < JSON_INT_MIN || 
smp->data.u.sint > JSON_INT_MAX)
return 0;

smp->data.type = SMP_T_SINT;
} else {
...
```


Hmmm that's not your fault but now I'm seeing that we already have a
converter inappropriately called "json", so we don't even know in which
direction it works by just looking at its name :-(  Same issue as for
base64.

May I suggest that you call yours "json_decode" or maybe shorter
"json_dec" so that it's more explicit that it's the decode one ? Because
for me "json_string" is the one that will emit a json string from some
input (which it is not). Then we could later create "json_enc" and warn
when "json" alone is used. Or even "jsdec" and "jsenc" which are much
shorter and still quite explicit.


How about "json_query" because it's exactly what it does :-)


I'm not familiar with the notion of "query" to decode and extract contents
but I'm not the most representative user and am aware of the "jq" command-
line utility that does this. So if it sounds natural to others I'm fine
with this.


I'm seeing that there's a very nice mjson_find() which does *exactly* what
you need:

 "In a JSON string s, len, find an element by its JSONPATH path. Save
  found element in tokptr, toklen. If not found, return JSON_TOK_INVALID.
  If found, return one of: MJSON_TOK_STRING, MJSON_TOK_NUMBER,
  MJSON_TOK_TRUE, MJSON_TOK_FALSE, MJSON_TOK_NULL, MJSON_TOK_ARRAY,
  MJSON_TOK_OBJECT.
  If a searched key contains ., [ or ] characters, they should be escaped
  by a backslash."

So you get the type in return. I think you can then call one of the
related functions depending on what is found, which is more reliable
than iterating over multiple attempts.


Oh yes, this sounds like a better approach.
I have now used this suggestion, and I hope you can help me fix the
double-parsing issue; or is it acceptable to parse the input twice?


From what I've seen in the code of the lib, you have no other option.
I thought it might be possible to call mjson_get_string() on the
resulting pointer but you would need to find a way to express that
you want to extract the immediate content, maybe by having an empty
key designation or something like this. This point is not clear to
me and the unit tests in the project all re-parse the input string
after mjson_find(), so probably this is the way to do it.


The check function now handles the int arg as suggested.

```
/* This function checks the "json_query" converter's arguments. */
static int sample_check_json_query(struct arg *arg, struct sample_conv *conv,
const char *file, int line, char **err)
{
if (ar

Re: [PATCH] MINOR: sample: add json_string

2021-04-13 Thread Aleksandar Lazic

On 13.04.21 11:26, Willy Tarreau wrote:

Hi Aleks,

On Mon, Apr 12, 2021 at 10:09:08PM +0200, Aleksandar Lazic wrote:

Hi.

another patch which honors the feedback.


Thank you. FWIW I agree with all the points reported by Tim. I'll add
a few comments and/or suggestions below. On a general note, please be
careful about your indenting, as it can very quickly become a total
mess. Similarly, please pay attention not to leave trailing spaces
that make Git complain:

   Applying: MINOR: sample: converter: add JSON Path handling
   .git/rebase-apply/patch:39: trailing whitespace.
  - number  : When the JSON value is a number then will the value be
   .git/rebase-apply/patch:40: trailing whitespace.
  converted to a string. If you know that the value is a
   .git/rebase-apply/patch:41: trailing whitespace.
  integer then can you help haproxy to convert the value
   .git/rebase-apply/patch:46: trailing whitespace.
 This converter extracts the value located at  from the JSON
   .git/rebase-apply/patch:47: trailing whitespace.
 string in the input value.
   warning: squelched 10 whitespace errors
   warning: 15 lines add whitespace errors.

All these lines are easily noticed this way:

 $ git show | grep -c '^+.*\s$'
 15

A good way to avoid this once and for all is to enable colors in Git and to
always make sure not to leave red areas in "git diff" or "git show" :

 $ git config --global color.ui true


Cool tip, I have set it now.


And even if it's of low importance for the code itself, it's particularly
important in a review because such cosmetic issues constantly remind the
reader that the patch is far from being final, so it's possibly not yet
the moment to focus on not critically important stuff. Thus in the end
they increase the number of round trips.


Thanks. I will take care about it.


The doc will be enhanced, but I have a question about that sequence.
This should write the double value to the string, but I think I have an
issue here.

```
printf("\n>>>DOUBLE rc:%d: double:%f:\n", rc, double_val);
trash->size = snprintf(trash->area,
                       trash->data,
                       "%g", double_val);
smp->data.u.str = *trash;
smp->data.type = SMP_T_STR;
```


Yeah, as Tim mentioned, you mixed size and data. "data" is the amount of
data bytes used in a chunk. "size" is its allocated size.
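Said differently, the call above passes the two fields in the wrong roles. A minimal standalone sketch of the intended pattern follows; note that this struct buffer is a simplified stand-in for HAProxy's real one, for illustration only:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for HAProxy's struct buffer: <size> is the
 * allocated size of <area>, <data> the number of bytes in use. */
struct buffer {
	char *area;
	size_t size;    /* allocated size: this bounds snprintf() */
	size_t data;    /* used length: this receives snprintf()'s result */
};

/* Render a double into the buffer: bound the write by <size> and
 * record the produced length in <data>, not the other way round. */
static void format_double(struct buffer *trash, double val)
{
	int n = snprintf(trash->area, trash->size, "%g", val);
	trash->data = (n < 0) ? 0 : (size_t)n;
}
```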


Fixed now, you can see it in the snippet below.


>From 8cb1bc4aaedd17c7189d4985a57f662ab1b533a4 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 12 Apr 2021 22:01:04 +0200
Subject: [PATCH] MINOR: sample: converter: add JSON Path handling

With json_path can a JSON value be extacted from a Header
or body


In the final version, please add a few more lines to describe the name
of the added converter and what it's used for. As a reminder, think that
you're trying to sell your artwork to me or anyone else who would make
you proud by backporting your work into their version :-)


Will do it :-)


+json_query(,[])
+  The  is mandatory.
+  By default will the follwing JSON types recognized.
+   - string  : This is the default search type and returns a string;
+   - number  : When the JSON value is a number then will the value be
+   converted to a string. If you know that the value is a
+   integer then can you help haproxy to convert the value
+   to a integer when you add "sint" to the ;


Just thinking loud, I looked at the rest of the doc and noticed we never
mention "sint" anywhere else, so I think it's entirely an internal type.
However we do mention "int" which is used as the matching method for
integers, so we could have:

  ... json_query("blah",sint) -m int 12

As such I would find it more natural to call this type "int" so that it
matches the same as the one used in the match. Maps already use "int" as
the output type name by the way.

In any case, I too am a bit confused by the need to force an output type.
As a user, I'd expect the type to be implicit and not to have to know
about it in the configuration. Of course we can imagine situations where
we'd want to force the type (like we sometimes do by adding 0 or
concatenating an empty string for example) but this is still not very
clear to me if we want it by default. Or maybe when dealing with floats
where we'd have to decide whether to emit them verbatim as strings or
to convert them to integers.

But then, could it make sense to also support "strict integers": values
that can accurately be represented as integers and which are within the
JSON valid range for integers (-2^52 to 2^52 with no decimal part) ?
This would then

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-12 Thread Aleksandar Lazic

Hi Moemen,

any chance to get this feature before 2.4 is released?

Regards
Aleks

On 06.04.21 09:13, Willy Tarreau wrote:

Hi Moemen,

On Tue, Apr 06, 2021 at 01:58:11AM +0200, Moemen MHEDHBI wrote:

Only part unclear:
On 02/04/2021 15:04, Tim Düsterhus wrote:

+int base64urldec(const char *in, size_t ilen, char *out, size_t olen) {
+char conv[ilen+2];


This looks like a remotely triggerable stack overflow.


You mean in case ilen is too big?


Yes that's it, I didn't notice it during the first review. It's
particularly uncommon to use variable-sized arrays, and it should never
be done. The immediate effect of this is that it will reserve some
room in the stack for a size as large as ilen+2 bytes. The problem
is that on most platforms the stack grows down, so the beginning of
the buffer is located at a point which is very far away from the
current stack. This memory is in fact not allocated, so the system
detects the first usage through a page fault and allocates the
necessary space. But in order to know that the page fault is within
the stack, it has to apply a certain margin. And this margin varies
between OSes and platforms. Some compilers will explicitly initialize
such a large stack from top to bottom to avoid a crash. Other ones
will not do and may very well crash at 64kB. On Linux, I can make the
above crash by using a 8 MB ilen, just because by default the stack
size limit is 8 MB. That's large but not overly excessive for those
who would like to perform some processing on bodies. And I recall
that some other OSes default to way smaller limits (I recall 64kB
on OpenBSD a long time ago though this might have been raised to a
megabyte or so by now).


In such a case, should we rather use dynamic allocation?


No, there are two possible approaches. One of them is to use a trash
buffer using get_trash_chunk(). The trash buffers are "large enough"
for anything that comes from outside. A second, cleaner solution
simply consists in not using a temporary buffer but doing the conversion
on the fly. Indeed, looking closer, what the function does is to first
replace a few chars on the whole chain to then call the base64 conversion
function. So it doubles the work on the string and one side effect of
this double work is that you need a temporary storage.

Other approaches would consist in either reimplementing the functions
with a different alphabet, or modifying the existing ones to take an
extra argument for the conversion table, and make one set of functions
making use of the current table and another set making use of your new
table.

Willy
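To make the on-the-fly idea concrete: base64url only differs from standard base64 in two alphabet characters (plus padding), so the translation can be done per character into a caller-bounded buffer instead of a `char conv[ilen+2]` variable-length array. The following is a standalone sketch, under the assumption that a real decoder would then run on the translated chunk; it is not the patch that was eventually merged:

```c
#include <assert.h>
#include <stddef.h>

/* Translate the base64url alphabet ('-', '_') to the standard base64
 * alphabet ('+', '/'), writing into a caller-provided buffer. The
 * caller's <olen> bounds the work, so an attacker-sized input cannot
 * grow the stack the way a variable-length array would.
 * Returns the number of bytes written, or -1 if the input does not fit. */
static int b64url_translate(const char *in, size_t ilen, char *out, size_t olen)
{
	size_t i;

	if (ilen > olen)
		return -1;
	for (i = 0; i < ilen; i++) {
		if (in[i] == '-')
			out[i] = '+';
		else if (in[i] == '_')
			out[i] = '/';
		else
			out[i] = in[i];
	}
	return (int)ilen;
}
```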






Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Aleksandar Lazic

Hi.

another patch which honors the feedback.

The doc will be enhanced, but I have a question about that sequence.
This should write the double value to the string, but I think I have an
issue here.

```
printf("\n>>>DOUBLE rc:%d: double:%f:\n", rc, double_val);
trash->size = snprintf(trash->area,
                       trash->data,
                       "%g", double_val);
smp->data.u.str = *trash;
smp->data.type = SMP_T_STR;
```

I have also add more tests with some specific JSON types.

Regards
Aleks

On 11.04.21 13:04, Tim Düsterhus wrote:

Aleks,

On 4/11/21 12:28 PM, Aleksandar Lazic wrote:

Agree. I have now rethought how to do it and suggest adding an output type.

```
json_query(,)
   The  and  are mandatory.
   This converter uses the mjson library https://github.com/cesanta/mjson
   This converter extracts the value located at  from the JSON
   string in the input value.
    must be a valid JsonPath string as defined at
   https://goessner.net/articles/JsonPath/

   These are the possible output types.
    - "bool"   : A boolean is expected;
    - "sint"   : A signed 64bits integer type is expected;
    - "str"    : A string is expected. This could be a simple string or
 a JSON sub-object;

   A floating point value will always be converted to sint!
```


The converter should be able to detect the type on its own. The types are part 
of the JSON after all! The output_type argument just moves the explicit type 
specification from the converter name into an argument. Not much of an 
improvement.

I don't know how the library works exactly, but after extracting the value 
something like the following should work:

If the first character is '"' -> string
If the first character is 't' -> bool(true)
If the first character is 'f' -> bool(false)
If the first character is 'n' -> null (This should probably result in the 
converter failing).
If the first character is a digit -> number
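Tim's heuristic above can be sketched as a small classifier. This is a standalone illustration only: a real converter would of course still parse the value, and the '-' case for negative numbers (not in the list above) is added here:

```c
#include <assert.h>

enum json_type { JSON_STRING, JSON_BOOL, JSON_NULL, JSON_NUMBER, JSON_OTHER };

/* Guess the JSON type of a raw value from its first character, as
 * suggested above. Arrays and objects fall into JSON_OTHER. */
static enum json_type json_classify(const char *v)
{
	switch (v[0]) {
	case '"':
		return JSON_STRING;
	case 't':
	case 'f':
		return JSON_BOOL;
	case 'n':
		return JSON_NULL;  /* converter should probably fail here */
	default:
		if ((v[0] >= '0' && v[0] <= '9') || v[0] == '-')
			return JSON_NUMBER;
		return JSON_OTHER; /* '{', '[', ... */
	}
}
```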


+    { "json_string", sample_conv_json_string, ARG1(1,STR), 
sample_check_json_string , SMP_T_STR, SMP_USE_CONST },


While testing something I also just noticed that SMP_USE_CONST is incorrect
here. I cannot apply e.g. the sha1 converter to the output of json_string.


Okay. I will change both to SMP_T_ANY because the return values can be bool, 
int or str.


The input type should remain as SMP_T_STR, because you are parsing a JSON 
*string*.


While implementing the suggested options above I struggle with checking the
params.
Arg0 is quite clear, but how do I make an efficient check for Arg1, the output type?


The efficiency of the check is less of a concern. That happens only once during 
configuration checking.



```
/* This function checks the "json_query" converter's arguments.
  */
static int sample_check_json_query(struct arg *arg, struct sample_conv *conv,
    const char *file, int line, char **err)
{
 if (arg[0].data.str.data == 0) { /* empty */
 memprintf(err, "json_path must not be empty");
 return 0;
 }

 /* this doen't work */
 int type = smp_to_type[arg[1].data.str.area];


The output_type argument should not exist. I'll answer the question 
nonetheless: You have to compare strings explicitly in C. So you would have to
use strcmp for each of the cases.
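For completeness, the strcmp approach could look like the sketch below. The enum values are illustrative placeholders, not HAProxy's real SMP_T_* constants; and as noted above, the argument should ideally not exist at all:

```c
#include <assert.h>
#include <string.h>

/* Illustrative placeholders for the sample types discussed above. */
enum smp_type { SMP_T_BOOL, SMP_T_SINT, SMP_T_STR, SMP_T_INVALID };

/* Resolve the textual output-type argument: C offers no lookup of a
 * string in a table like "smp_to_type[...]", so each candidate is
 * compared explicitly with strcmp(). */
static enum smp_type parse_output_type(const char *s)
{
	if (strcmp(s, "bool") == 0)
		return SMP_T_BOOL;
	if (strcmp(s, "sint") == 0)
		return SMP_T_SINT;
	if (strcmp(s, "str") == 0)
		return SMP_T_STR;
	return SMP_T_INVALID;
}
```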


 switch (type) {
 case SMP_T_BOOL:
 case SMP_T_SINT:
 /* These type are not const. */
 break;

 case SMP_T_STR:

```

I would do the conversion from double to int like "smp->data.u.sint = (long long
int)double_val;" but is this efficient? I haven't done this for a long time, so I
would like to have a "2nd pair of eyes" on this.



I'd probably return a double as a string instead. At least that doesn't destroy 
information.

Best regards
Tim Düsterhus


>From 8cb1bc4aaedd17c7189d4985a57f662ab1b533a4 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 12 Apr 2021 22:01:04 +0200
Subject: [PATCH] MINOR: sample: converter: add JSON Path handling

With json_path can a JSON value be extacted from a Header
or body
---
 Makefile   |3 +-
 doc/configuration.txt  |   29 +
 include/import/mjson.h |  209 ++
 reg-tests/converter/json_query.vtc |   94 +++
 src/mjson.c| 1048 
 src/sample.c   |   94 +++
 6 files changed, 1476 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/converter/json_query.vtc
 create mode 100644 src/mjson.c

diff --git a/Makefile b/Makefile
index 9b22fe4be..56d0aa28d 100644
--- a/Makefile
+++ 

Re: [PATCH] MINOR: sample: add json_string

2021-04-11 Thread Aleksandar Lazic

On 10.04.21 13:22, Tim Düsterhus wrote:

Aleks,

On 4/10/21 12:24 AM, Aleksandar Lazic wrote:

+json_string() : string

I don't like the name. A few suggestions:

- json_query
- json_get
- json_decode


maybe json_get_string because there could be some more getters like bool, int, 
...


The '_string' suffix does not make sense to me, because why should the user 
need to write about the expected type when using the converter? Samples already 
store their type in HAProxy and they are automatically casted to an appropriate 
type if required (i.e. there is little difference between a numeric string and 
an int).

It should be valid to do something like this.

str('{"s": "foo", "i": 1}'),json_query('$.s'),sha1,hex

and likewise

str('{"s": "foo", "i": 1}'),json_query('$.i'),add(7)


Agree. I have now rethought how to do it and suggest adding an output type.

```
json_query(,)
  The  and  are mandatory.
  This converter uses the mjson library https://github.com/cesanta/mjson
  This converter extracts the value located at  from the JSON
  string in the input value.
   must be a valid JsonPath string as defined at
  https://goessner.net/articles/JsonPath/

  These are the possible output types.
   - "bool"   : A boolean is expected;
   - "sint"   : A signed 64bits integer type is expected;
   - "str": A string is expected. This could be a simple string or
a JSON sub-object;

  A floating point value will always be converted to sint!
```


+  # get the value from the key kubernetes.io/serviceaccount/namespace
+  # => openshift-logging
+  http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.kubernetes\\.io/serviceaccount/namespace')
+ +  # get the value from the key iss
+  # => kubernetes/serviceaccount
+  http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.iss')


I don't like that the example is so specific to Kubernetes usage. A more 
general example would be preferred, because it makes it easier to understand 
the concept.


The '$.iss' is the generic JWT field.
https://tools.ietf.org/html/rfc7519#section-4.1
"iss" (Issuer) Claim


But even a JWT is a very narrow use-case ...


Agree. I will add some generic examples.


But maybe I could look for a "normal" JSON string and not only JWT.



... I suggest to use something generic like my example above (with "foo" as a 
common placeholder value). Examples should explain the concept, not a specific use case. 
Users are smart enough to understand that they can use this to extract values from a JWT 
if this is what they need to do.


diff --git a/reg-tests/sample_fetches/json_string.vtc 
b/reg-tests/sample_fetches/json_string.vtc
new file mode 100644
index 0..fc387519b
--- /dev/null
+++ b/reg-tests/sample_fetches/json_string.vtc


Again, this is a converter. Move the test into the appropriate folder. And 
please make sure you understand the difference between fetches and converters.


Yeah, the difference between fetches and converters is not fully clear to me.
I think when a value is fetched from any data then it's a fetcher, like this
JSON "fetcher".


The use of correct terminology is important, because everything else introduces 
confusion. It is extra important if it is used in persistent documentation (vs. 
say a discussion in IRC where it can easily be clarified).

The difference is explained in configuration.txt in the introduction of section 
7 and again at the beginning of section 7.3.1:


Sample fetch methods may be combined with transformations to be applied on top
of the fetched sample (also called "converters"). These combinations form what
is called "sample expressions" and the result is a "sample".


Fetches *fetch* data from e.g. the connection and then return a *sample*.

Converters *convert* data from an existing *sample* and then return a new 
*sample*.
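A concrete example combining both, using two standard HAProxy keywords: req.hdr(host) is the fetch (it produces a sample from the request), and lower is a converter applied to that sample:

```
http-request set-var(txn.host) req.hdr(host),lower
```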



That nails it down, thanks.


-


+ */
+static int sample_check_json_string(struct arg *arg, struct sample_conv *conv,
+   const char *file, int line, char **err)
+{
+    DPRINTF(stderr, "%s: arg->type=%d, arg->data.str.data=%ld\n",
+    __FUNCTION__,
+    arg->type, arg->data.str.data);


Debug code above.


This was intentional. I asked myself why no debug option is set.
This will only be printed with 'DEBUG=-DDEBUG_FULL'.
Maybe there should be a "DEBUG_SAMPLES" like the other "DEBUG_*" options.


Imagine how the code and also the debug output would look if every converter would 
output several lines of debug output. Additionally there's not much useful 
information in the output here. arg->type is always going to be ARGT_STR, 
because HAProxy will automatically cast the argument based on the converter 
definition. The length 

Re: [PATCH] MINOR: sample: add json_string

2021-04-09 Thread Aleksandar Lazic

Tim.

On 09.04.21 18:55, Tim Düsterhus wrote:

Aleks,


> I have taken a first look. Find my remarks below. Please note that for the 
actual
> source code there might be further remarks by Willy (put in CC) or so. I 
might have
> missed something or might have told you something incorrect. So maybe before 
making
> changes wait for their opinion.

Thank you for your feedback.

> Generally I must say that I don't like the mjson library, because it uses 
'int' for
> sizes. It doesn't really bring the point home that it is a safe library. This 
one
> looks much better to me: https://github.com/FreeRTOS/coreJSON. It does not 
support
> JSON path, though. Not sure how much of an issue that would be?

Well, I have created an issue in coreJSON about how to handle the "." in the key.
https://github.com/FreeRTOS/coreJSON/issues/92

I have chosen the mjson library because it is small and offers the JSON path
feature.


On 4/8/21 10:21 PM, Aleksandar Lazic wrote:

From 7ecb80b1dfe37c013cf79bc5b5b1caa3c0112a6a Mon Sep 17 00:00:00 2001
From: Alekesandar Lazic 
Date: Thu, 8 Apr 2021 21:42:00 +0200
Subject: [PATCH] MINOR: sample: add json_string


I'd add 'converter' to the subject line to make it clear that this is a 
converter.



This sample get's the value of a JSON key


Typo: It should be 'gets'.


---
 Makefile |    3 +-
 doc/configuration.txt    |   15 +
 include/import/mjson.h   |  213 +
 reg-tests/sample_fetches/json_string.vtc |   25 +
 src/mjson.c  | 1052 ++
 src/sample.c |   63 ++
 6 files changed, 1370 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/sample_fetches/json_string.vtc
 create mode 100644 src/mjson.c

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 01a01eccc..7f2732668 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -19043,6 +19043,21 @@ http_first_req : boolean


This is the 'fetch' section. Move the documentation to the 'converter' section.

>

   from some requests when a request is not the first one, or to help grouping
   requests in the logs.

+json_string() : string

I don't like the name. A few suggestions:

- json_query
- json_get
- json_decode


maybe json_get_string because there could be some more getter like bool, int, 
...


+  Returns the string value of the given json path.


It should be "JSON" (in uppercase) here and everywhere else.


Okay and agree.


+  The  is required.
+  This sample uses the mjson library https://github.com/cesanta/mjson
+  The json path syntax is defined in this repo 
https://github.com/json-path/JsonPath


Overall the description of the converter does not read nicely / feels 
inconsistent compared to other converters / uses colloquial language.


Let me suggest something like:

Extracts the value located at  from the JSON string in the input 
value.  must be a valid JsonPath string as defined at https://goessner.net/articles/JsonPath/


I changed the link, because that appears to be the canonical reference.


Okay.


+  Example :


No space in front of the colon.


+  # get the value from the key kubernetes.io/serviceaccount/namespace
+  # => openshift-logging
+  http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.kubernetes\\.io/serviceaccount/namespace')
+ +  # get the value from the key iss
+  # => kubernetes/serviceaccount
+  http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.iss')


I don't like that the example is so specific to Kubernetes usage. A more 
general example would be preferred, because it makes it easier to understand the concept.


The '$.iss' is the generic JWT field.
https://tools.ietf.org/html/rfc7519#section-4.1
"iss" (Issuer) Claim

But maybe I could look for a "normal" JSON string and not only JWT.



 method : integer + string
   Returns an integer value corresponding to the method in the HTTP request. For
   example, "GET" equals 1 (check sources to establish the matching). Value 9
diff --git a/include/import/mjson.h b/include/import/mjson.h
new file mode 100644
index 0..ff46e7950
--- /dev/null
+++ b/include/import/mjson.h
@@ -0,0 +1,213 @@
[...]
+// Aleksandar Lazic
+// git clone from 2021-08-04 because of this fix
+// 
https://github.com/cesanta/mjson/commit/7d8daa8586d2bfd599775f049f26d2645c25a8ee


Please don't edit third party libraries, even if it is just a comment. This 
makes updating hard.


Okay.


diff --git a/reg-tests/sample_fetches/json_string.vtc 
b/reg-tests/sample_fetches/json_string.vtc
new file mode 100644
index 0..fc387519b
--- /dev/null
+++ b/reg-tests/sample_fetches/json_string.vtc


Again, this is a converter. Move the test into the appropriate folder. And please make 
sure you understand the difference be

Re: [PATCH] MINOR: sample: add json_string

2021-04-08 Thread Aleksandar Lazic

Hi.

Sorry, I have only now seen the copy-paste error.
Please use this patch.

Regards
Alex

On 08.04.21 21:55, Aleksandar Lazic wrote:

Hi.

Attached the patch to add the json_string sample.

In combination with the JWT patch, a pre-validation of the bearer token part is
possible.

I have something like this in mind.

http-request set-var(sess.json) 
req.hdr(Authorization),word(2,.),ub64dec,json_string('$.iss')
http-request deny unless { var(sess.json) -m str 'kubernetes/serviceaccount' }

Regards
Aleks


>From 7ecb80b1dfe37c013cf79bc5b5b1caa3c0112a6a Mon Sep 17 00:00:00 2001
From: Alekesandar Lazic 
Date: Thu, 8 Apr 2021 21:42:00 +0200
Subject: [PATCH] MINOR: sample: add json_string

This sample get's the value of a JSON key
---
 Makefile |3 +-
 doc/configuration.txt|   15 +
 include/import/mjson.h   |  213 +
 reg-tests/sample_fetches/json_string.vtc |   25 +
 src/mjson.c  | 1052 ++
 src/sample.c |   63 ++
 6 files changed, 1370 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/sample_fetches/json_string.vtc
 create mode 100644 src/mjson.c

diff --git a/Makefile b/Makefile
index 9b22fe4be..559248867 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o \
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o \
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   \
+src/mjson.o
 
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
diff --git a/doc/configuration.txt b/doc/configuration.txt
index 01a01eccc..7f2732668 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -19043,6 +19043,21 @@ http_first_req : boolean
   from some requests when a request is not the first one, or to help grouping
   requests in the logs.
 
+json_string() : string
+  Returns the string value of the given json path.
+  The  is required.
+  This sample uses the mjson library https://github.com/cesanta/mjson
+  The json path syntax is defined in this repo https://github.com/json-path/JsonPath
+
+  Example :
+  # get the value from the key kubernetes.io/serviceaccount/namespace
+  # => openshift-logging
+  http-request set-var(sess.json) req.hdr(Authorization),b64dec,json_string('$.kubernetes\\.io/serviceaccount/namespace')
+  
+  # get the value from the key iss
+  # => kubernetes/serviceaccount
+  http-request set-var(sess.json) req.hdr(Authorization),b64dec,json_string('$.iss')
+
 method : integer + string
   Returns an integer value corresponding to the method in the HTTP request. For
   example, "GET" equals 1 (check sources to establish the matching). Value 9
diff --git a/include/import/mjson.h b/include/import/mjson.h
new file mode 100644
index 0..ff46e7950
--- /dev/null
+++ b/include/import/mjson.h
@@ -0,0 +1,213 @@
+// Copyright (c) 2018-2020 Cesanta Software Limited
+// All rights reserved
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+// SOFTWARE.
+
+// Aleksandar Lazic
+// git clone from 2021-08-04 because of this fix
+// https://github.com/cesanta/mjson/commit/7d8daa8586d2bfd599775f049f26d2645c25a8ee
+
+#ifndef MJSON_H
+#define MJSON_H
+
+#include 
+#include 
+#include 
+
+#ifndef MJSON_ENABLE_PRINT
+#define MJSON_ENABLE_PRINT 1
+#endif
+
+#ifndef MJSON_ENABLE_RPC
+#define MJSON_ENABLE_RPC 1
+#endif
+
+#ifndef MJSON_ENABLE_BASE64
+#define MJSON_ENABLE_BASE64 1
+#endif
+
+#ifndef MJSON_ENABLE_MERGE
+#define MJSON_ENABLE_MERGE 0
+#elif MJSON_ENABLE_MERGE
+#define

[PATCH] MINOR: sample: add json_string

2021-04-08 Thread Aleksandar Lazic

Hi.

Attached the patch to add the json_string sample.

In combination with the JWT patch, a pre-validation of the bearer token part is
possible.

I have something like this in mind.

http-request set-var(sess.json) 
req.hdr(Authorization),word(2,.),ub64dec,json_string('$.iss')
http-request deny unless { var(sess.json) -m str 'kubernetes/serviceaccount' }

Regards
Aleks
>From 7ecb80b1dfe37c013cf79bc5b5b1caa3c0112a6a Mon Sep 17 00:00:00 2001
From: Alekesandar Lazic 
Date: Thu, 8 Apr 2021 21:42:00 +0200
Subject: [PATCH] MINOR: sample: add json_string

This sample get's the value of a JSON key
---
 Makefile |3 +-
 doc/configuration.txt|   15 +
 include/import/mjson.h   |  213 +
 reg-tests/sample_fetches/json_string.vtc |   25 +
 src/mjson.c  | 1052 ++
 src/sample.c |   63 ++
 6 files changed, 1370 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/sample_fetches/json_string.vtc
 create mode 100644 src/mjson.c

diff --git a/Makefile b/Makefile
index 9b22fe4be..559248867 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o \
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o \
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   \
+src/mjson.o
 
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
diff --git a/doc/configuration.txt b/doc/configuration.txt
index 01a01eccc..7f2732668 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -19043,6 +19043,21 @@ http_first_req : boolean
   from some requests when a request is not the first one, or to help grouping
   requests in the logs.
 
+json_string() : string
+  Returns the string value of the given json path.
+  The  is required.
+  This sample uses the mjson library https://github.com/cesanta/mjson
+  The json path syntax is defined in this repo https://github.com/json-path/JsonPath
+
+  Example :
+  # get the value from the key kubernetes.io/serviceaccount/namespace
+  # => openshift-logging
+  http-request set-var(sess.json) req.hdr(Authorization),b64dec,json_string('$.kubernetes\\.io/serviceaccount/namespace')
+  
+  # get the value from the key iss
+  # => kubernetes/serviceaccount
+  http-request set-var(sess.json) req.hdr(Authorization),b64dec,json_string('$.iss')
+
 method : integer + string
   Returns an integer value corresponding to the method in the HTTP request. For
   example, "GET" equals 1 (check sources to establish the matching). Value 9
diff --git a/include/import/mjson.h b/include/import/mjson.h
new file mode 100644
index 0..ff46e7950
--- /dev/null
+++ b/include/import/mjson.h
@@ -0,0 +1,213 @@
+// Copyright (c) 2018-2020 Cesanta Software Limited
+// All rights reserved
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+// SOFTWARE.
+
+// Aleksandar Lazic
+// git clone from 2021-08-04 because of this fix
+// https://github.com/cesanta/mjson/commit/7d8daa8586d2bfd599775f049f26d2645c25a8ee
+
+#ifndef MJSON_H
+#define MJSON_H
+
+#include 
+#include 
+#include 
+
+#ifndef MJSON_ENABLE_PRINT
+#define MJSON_ENABLE_PRINT 1
+#endif
+
+#ifndef MJSON_ENABLE_RPC
+#define MJSON_ENABLE_RPC 1
+#endif
+
+#ifndef MJSON_ENABLE_BASE64
+#define MJSON_ENABLE_BASE64 1
+#endif
+
+#ifndef MJSON_ENABLE_MERGE
+#define MJSON_ENABLE_MERGE 0
+#elif MJSON_ENABLE_MERGE
+#define MJSON_ENABLE_NEXT 1
+#endif
+
+#ifndef MJSON_ENABLE_PRETTY
+#define MJSON_ENABLE_PRETTY 0
+#elif MJSON_ENABLE_PRETTY
+#define

Re: help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Aleksandar Lazic

Tim,

you are great ;-)

On 08.04.21 18:14, Tim Düsterhus wrote:

Aleks,

On 4/8/21 5:07 PM, Aleksandar Lazic wrote:
http-request set-var(sess.json) %[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")] 


http-request set-var() does not expect the %[] syntax, because it always takes 
a sample. Even the following returns the same error message:


http-request set-var(sess.json) %[req.hdr(Authorization)]

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this assumption 
right?


The assumption is not correct, because you are not searching for a fetch. You 
want a converter, because you are converting an existing sample. I suggest you 
take a look at the "digest" converter. You can find it in sample.c.


Best regards
Tim Düsterhus


Thanks ,this was the hint I needed ;-)

Regards
Alex



help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Aleksandar Lazic

Hi.

I am trying to implement "sample_fetch_json_string" based on
https://github.com/cesanta/mjson.

Because I haven't implemented a fetch function until now, it would be nice if
somebody could help me and point me in the right direction. Maybe I have
overlooked some documentation in the doc directory.

Let's assume there are these haproxy lines.

```
# get the namespace from a bearer token
http-request set-var(sess.json) 
%[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")]
http-request return status 200 content-type text/plain lf-string 
%[date,ltime(%Y-%m-%d_%H-%M-%S)] hdr x-var "val=%[var(sess.json)]"
```

When I run this I get the following message, which I also don't understand,
because I have added "sample_fetch_json_string" to the
"static struct sample_fetch_kw_list smp_kws = ...".

```
./haproxy -d -f ../test-haproxy.conf
[NOTICE] 097/170201 (1105377) : haproxy version is 2.4-dev15-909947-31
[NOTICE] 097/170201 (1105377) : path to executable is ./haproxy
[ALERT] 097/170201 (1105377) : parsing [../test-haproxy.conf:10] : error 
detected in frontend 'fe1' while parsing 'http-request set-var(sess.json)' rule 
: missing fetch method.
[ALERT] 097/170201 (1105377) : Error(s) found in configuration file : 
../test-haproxy.conf
```

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this assumption 
right?

As you can see, I have some open questions which I hope someone can answer.

That's the function signature.

https://github.com/cesanta/mjson#mjson_get_string
// s, len is a JSON string [ "abc", "de\r\n" ]
int mjson_get_string(const char *s, int len, const char *path, char *to, int 
sz);

I think that this line isn't right, but what's the right one?

rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

Attached are the WIP diff and the test config.

It's a similar concept to the env fetch function.
What I don't know is which struct holds what.


``` from sample.c
smp_fetch_env(const struct arg *args, struct sample *smp, const char *kw, void 
*private)

/* This sample function fetches the value from a given json string.
 * The mjson library is used to parse the json struct
*/
static int sample_fetch_json_string(const struct arg *args, struct sample *smp, 
const char *kw, void *private)
{
struct buffer *tmp;
int rc;

tmp = get_trash_chunk();
/* arguments: json string, json string length,
 * search pattern, value buffer, value buffer size
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data,
 "$.kubernetes\\.io/serviceaccount/namespace", tmp->area, tmp->size);
*/
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

smp->flags |= SMP_F_CONST;
smp->data.type = SMP_T_STR;
smp->data.u.str.area = tmp->area;
smp->data.u.str.data = tmp->data;
return 1;
}
```
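As a cross-check of what the converter should produce, the intended lookup can be sketched in plain Python. This is an analogy only, not the mjson C API: mjson resolves the JSONPath itself, while the flat dictionary key below mirrors how the token stores dotted names.

```python
import base64
import json

# Sample bearer-token payload, as in dev/json/test-data.json (shortened)
payload = {
    "iss": "kubernetes/serviceaccount",
    "kubernetes.io/serviceaccount/namespace": "openshift-logging",
}
token = base64.b64encode(json.dumps(payload).encode()).decode()

# b64dec step: recover the JSON document from the header value
doc = json.loads(base64.b64decode(token))

# json_string('$.kubernetes\.io/serviceaccount/namespace') should yield:
namespace = doc["kubernetes.io/serviceaccount/namespace"]
iss = doc["iss"]
```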

Regards
Alex
diff --git a/Makefile b/Makefile
index 9b22fe4be..7f6998cdc 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o \
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o \
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   \
+src/mjson.o
 
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
@@ -946,6 +947,10 @@ dev/poll/poll:
 dev/tcploop/tcploop:
 	$(Q)$(MAKE) -C dev/tcploop tcploop CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
 
+dev/json/json: dev/json/json.o dev/json/mjson/src/mjson.o src/chunk.o
+	$(cmd_LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+	#$(Q)$(MAKE) -C dev/json json CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
+
 # rebuild it every time
 .PHONY: src/version.c
 
diff --git a/dev/json/test-data.json b/dev/json/test-data.json
new file mode 100644
index 0..fdda596e9
--- /dev/null
+++ b/dev/json/test-data.json
@@ -0,0 +1 @@
+{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"openshift-logging","kubernetes.io/serviceaccount/secret.name":"deployer-token-m98xh","kubernetes.io/serviceaccount/service-account.name":"deployer","kubernetes.io/serviceaccount/service-account.uid":"35dddefd-3b5a-11e9-947c-fa163e480910","sub":"system:serviceaccount:openshift-logging:deployer"}
\ No newline at end of file
diff --git a/dev/json/test-data.json.base64 b/dev/json/test-data.json.base64
new file mode 100644
index 0..75cddd3ac
--- /dev/null
+++ b/dev/json/test-data.json.base64
@@ -0,0 +1 @@

Re: [HAP 2.4-dev] Quotes in str fetch sample

2021-04-08 Thread Aleksandar Lazic

Hi.

Never mind. I have sent the header in base64 and decoded it.

```shell
curl -vH 'Authorization: '$(< 
/datadisk/git-repos/haproxy/dev/json/test-data.json.base64 ) http://127.0.0.1:8080

*   Trying 127.0.0.1:8080...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.68.0
> Accept: */*
> Authorization: 
eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbG9nZ2luZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3llci10b2tlbi1tOTh4aCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZXBsb3llciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM1ZGRkZWZkLTNiNWEtMTFlOS05NDdjLWZhMTYzZTQ4MDkxMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpvcGVuc2hpZnQtbG9nZ2luZzpkZXBsb3llciJ9
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< x-var: 
json={"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"openshift-logging","kubernetes.io/serviceaccount/secret.name":"deployer-token-m98xh","kubernetes.io/serviceaccount/service-account.name":"deployer","kubernetes.io/serviceaccount/service-account.uid":"35dddefd-3b5a-11e9-947c-fa163e480910","sub":"system:serviceaccount:openshift-logging:deployer"}
 val=
< content-length: 10
< content-type: text/plain

```

```
http-request set-var(req.json)  req.hdr(Authorization),b64dec
http-request return status 200 content-type text/plain lf-string %[date] hdr x-var 
"json=%[var(req.json)] val=%[var(sess.json)]"

```
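For completeness, the header value is just the raw file content base64-encoded; a minimal sketch of how test-data.json.base64 relates to test-data.json (content shortened here for illustration):

```python
import base64

# Shortened stand-in for dev/json/test-data.json
raw = b'{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:openshift-logging:deployer"}'

# What goes into the Authorization header (i.e. test-data.json.base64)
header_value = base64.b64encode(raw).decode()

# What haproxy's b64dec converter recovers into req.json
recovered = base64.b64decode(header_value)
```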

regards
alex

On 08.04.21 01:27, Aleksandar Lazic wrote:

Hi.

I am trying to implement "sample_fetch_json_string" based on
https://github.com/cesanta/mjson.
My current test looks good, but I'm struggling with the test setup.

```
git-repos/haproxy$ ./haproxy -c -f ../test-haproxy.conf
[NOTICE] 097/012132 (1043229) : haproxy version is 2.4-dev15-8daf8d-30
[NOTICE] 097/012132 (1043229) : path to executable is ./haproxy
[ALERT] 097/012132 (1043229) : parsing [../test-haproxy.conf:9] : error 
detected in frontend 'fe1' while parsing
'http-request set-var(req.json)' rule : fetch method 'str' : expected ')' before 
',\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'.

[ALERT] 097/012132 (1043229) : Error(s) found in configuration file : 
../test-haproxy.conf

```

That's the config.
```
defaults
    mode http
    timeout connect 1s
    timeout client  1s
    timeout server  1s

frontend fe1
    bind "127.0.0.1:8080"
    http-request set-var(req.json)  
'str({\"iss\":\"kubernetes/serviceaccount\",\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'

```

I have tried several combos, like:
str("...")
str('...')
str(...)
I have also added more '\' in the string.

But I always get the error above.

Any idea how to fix the error?

Regards
Alex






[HAP 2.4-dev] Quotes in str fetch sample

2021-04-07 Thread Aleksandar Lazic

Hi.

I am trying to implement "sample_fetch_json_string" based on
https://github.com/cesanta/mjson.
My current test looks good, but I'm struggling with the test setup.

```
git-repos/haproxy$ ./haproxy -c -f ../test-haproxy.conf
[NOTICE] 097/012132 (1043229) : haproxy version is 2.4-dev15-8daf8d-30
[NOTICE] 097/012132 (1043229) : path to executable is ./haproxy
[ALERT] 097/012132 (1043229) : parsing [../test-haproxy.conf:9] : error 
detected in frontend 'fe1' while parsing
'http-request set-var(req.json)' rule : fetch method 'str' : expected ')' before 
',\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'.

[ALERT] 097/012132 (1043229) : Error(s) found in configuration file : 
../test-haproxy.conf

```

That's the config.
```
defaults
mode http
timeout connect 1s
timeout client  1s
timeout server  1s

frontend fe1
bind "127.0.0.1:8080"
http-request set-var(req.json)  
'str({\"iss\":\"kubernetes/serviceaccount\",\"kubernetes.io/serviceaccount/namespace\":\"openshift-logging\",\"kubernetes.io/serviceaccount/secret.name\":\"deployer-token-m98xh\",\"kubernetes.io/serviceaccount/service-account.name\":\"deployer\",\"kubernetes.io/serviceaccount/service-account.uid\":\"35dddefd-3b5a-11e9-947c-fa163e480910\",\"sub\":\"system:serviceaccount:openshift-logging:deployer\"})'

```

I have tried several combos, like:
str("...")
str('...')
str(...)
I have also added more '\' in the string.

But I always get the error above.

Any idea how to fix the error?

Regards
Alex



Re: zlib vs slz (perfoarmance)

2021-03-30 Thread Aleksandar Lazic

+1

On 30.03.21 08:17, Илья Шипицин wrote:

I would really like to know whether zlib was chosen on purpose or by chance.

And yes, some marketing campaign makes sense

On Tue, Mar 30, 2021, 10:35 AM Dinko Korunic <dinko.koru...@gmail.com> wrote:


 > On 29.03.2021., at 23:06, Lukas Tribus <lu...@ltri.eu> wrote:
 >

[…]

 > Like I said last year, this needs a marketing campaign:
 > https://www.mail-archive.com/haproxy@formilux.org/msg38044.html 

 >
 >
 > What about the docker images from haproxytech? Are those zlib or slz
 > based? Perhaps that would be a better starting point?
 >
 > https://hub.docker.com/r/haproxytech/haproxy-alpine 




Hi Lukas,

I am maintaining the haproxytech Docker images and I can easily make that (slz
being used) happen, if that's what the community would like to see.


Kind regards,
D.

-- 
Dinko Korunic                   ** Standard disclaimer applies **

Sent from OSF1 osf1v4b V4.0 564 alpha







Re: Is there a way to deactivate this "message repeated x times"

2021-03-29 Thread Aleksandar Lazic

On 29.03.21 18:55, Lukas Tribus wrote:

Hello,

On Mon, 29 Mar 2021 at 15:25, Aleksandar Lazic  wrote:


Hi.

I need to create some log statistics with awffull stats, and I assume these
messages mean that only one line is written for 3 requests; is this assumption
right?

Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ ::::49445 [28/Mar/2021:14:04:07.234] 
https-in~ be_api/api_prim 0/0/0/13/13 200 2928 - -  930/900/8/554/0 0/0 {|Mozilla/5.0 (Macintosh; Intel 
Mac OS X 10.13; rv:86.0) Gecko/20100101 Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET 
https:/// HTTP/2.0"]

Can this behavior be disabled?


This is not haproxy, this is your syslog server. Refer to the
documentation of the syslog server.


Oh yes of course, *clap on head*.

Looks like RepeatedMsgReduction is on by default on Ubuntu 18.04.5 LTS.

https://www.rsyslog.com/doc/v8-stable/configuration/action/rsconf1_repeatedmsgreduction.html

I have solved it with this ansible snippet.

```
- name: Deactivate RepeatedMsgReduction in rsyslog
  lineinfile:
backup: yes
line: $RepeatedMsgReduction off
path: /etc/rsyslog.conf
regexp: '^\$RepeatedMsgReduction on'
  tags: haproxy,all,syslog
  register: syslog

- name: Restart syslog
  service:
name: rsyslog
state: restarted
  when: syslog.changed
  tags: haproxy,all,syslog
```


Lukas


Regards
Alex




Is there a way to deactivate this "message repeated x times"

2021-03-29 Thread Aleksandar Lazic

Hi.

I need to create some log statistics with awffull stats, and I assume these
messages mean that only one line is written for 3 requests; is this assumption
right?

Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ ::::49445 [28/Mar/2021:14:04:07.234] 
https-in~ be_api/api_prim 0/0/0/13/13 200 2928 - -  930/900/8/554/0 0/0 {|Mozilla/5.0 (Macintosh; Intel 
Mac OS X 10.13; rv:86.0) Gecko/20100101 Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET 
https:/// HTTP/2.0"]

Can this behavior be disabled?

Regards
Alex



[HAP 2.3.8] some missunderstandint of Session state and server correlation

2021-03-27 Thread Aleksandar Lazic

Hi.

If I understand the LH and LR flags right, no server should be involved.
I expected a "<NOSRV>" in the https-in line too, but instead there is
"be_default/default_prim".
Do I misunderstand the 'L' flag, which is described as below?

```
the session was locally processed by haproxy and was not passed to
a server. This is what happens for stats and redirects.
```

The client address is always the same.

Mar 27 13:33:35 lb1 haproxy[634]: :58572 [27/Mar/2021:13:33:35.713]
http-in http-in/<NOSRV> 0/-1/-1/-1/0 301 121 - - LR-- 964/2/0/0/0 0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: :58572 [27/Mar/2021:13:33:35.713]
http-in http-in/<NOSRV> 0/-1/-1/-1/0 301 121 - - LR-- 964/2/0/0/0 0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: ::::36964 
[27/Mar/2021:13:33:35.837]
https-in~ be_default/default_prim 0/0/44/-1/57 200 266 - - LH-- 971/946/2/2/0 
0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)||128|TLS_AES_128_GCM_SHA256|TLSv1.3|}
"GET /robots.txt HTTP/1.1"

Mar 27 13:33:35 lb1 haproxy[634]: ::::36964 
[27/Mar/2021:13:33:35.837]
https-in~ be_default/default_prim 0/0/44/-1/57 200 266 - - LH-- 971/946/2/2/0 
0/0
{|Mozilla/5.0 (compatible; Googlebot/2.1; 
+http://www.google.com/bot.html)||128|TLS_AES_128_GCM_SHA256|TLSv1.3|}
"GET /robots.txt HTTP/1.1"

It's haproxy 2.3.8 and this is are the frontend sections.

```
frontend http-in
  bind *:80

  http-request capture req.fhdr(Referer) len 128
  http-request capture req.fhdr(User-Agent) len 256
  http-request capture req.hdr(host) len 148
  http-request set-var(txn.req_path) path

  http-response return content-type text/plain string "User-agent: *\nAllow: 
/\n" if { var(txn.req_path) /robots.txt }
  http-response return status 404 if { var(txn.req_path) /sitemap.txt }

  acl host_redir hdr(host),map(/etc/haproxy/redirect.map) -m found
  http-request redirect code 301 location 
%[req.hdr(host),map(/etc/haproxy/redirect.map)] if host_redir

  http-request redirect code 301 location 
https://%[hdr(host)]%[capture.req.uri] if ! { path_beg 
/.well-known/acme-challenge/ }

  use_backend be_nginx if { path_beg /.well-known/acme-challenge/ }

frontend https-in

  bind :::443 v4v6 alpn h2,http/1.1 ssl ca-file 
/etc/haproxy/letsencryptauthorityx3.pem crt /etc/ssl/haproxy/

  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  http-request deny if { src -f /etc/haproxy/denylist.acl }

  http-request set-var(txn.req_path) path

  http-response return content-type text/plain string "User-agent: *\nAllow: 
/\n" if { var(txn.req_path) /robots.txt }
  http-response return status 404 if { var(txn.req_path) /sitemap.txt }

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # collect ssl infos.
  http-request set-var(txn.cap_alg_keysize) ssl_fc_alg_keysize
  http-request set-var(txn.cap_cipher) ssl_fc_cipher
  http-request set-var(txn.cap_protocol) ssl_fc_protocol

  declare capture request len 128
  declare capture request len 256
  declare capture request len 148
  declare capture request len 148
  declare capture request len 148
  declare capture request len 148

  http-request capture req.hdr(host) len 148

  # Add CORS response header
  acl is_cors_preflight method OPTIONS
  http-response add-header Access-Control-Allow-Origin "*" if is_cors_preflight
  http-response add-header Access-Control-Allow-Methods "GET,POST" if 
is_cors_preflight
  http-response add-header Access-Control-Allow-Credentials "true" if 
is_cors_preflight
  http-response add-header Access-Control-Max-Age "600" if is_cors_preflight

  # 
https://www.haproxy.com/blog/haproxy-and-http-strict-transport-security-hsts-header-in-http-redirects/
  http-response set-header Strict-Transport-Security "max-age=15768000; 
includeSubDomains"
  http-response set-header X-Frame-Options  "SAMEORIGIN"
  http-response set-header X-Xss-Protection "1; mode=block"
  http-response set-header X-Content-Type-Options   "nosniff"
  http-response set-header Referrer-Policy  "origin-when-cross-origin"

  use_backend be_nginx if { path_beg /.well-known/acme-challenge/ }
  use_backend 
%[req.hdr(host),lower,map(/etc/haproxy/haproxy_backend.map,be_default)]

```

regards
Alex



Re: [HAP 2.3.8] Is there a way to see why "<BADREQ>" and "SSL handshake failure" happens

2021-03-27 Thread Aleksandar Lazic

On 27.03.21 12:01, Lukas Tribus wrote:

Hello,

On Sat, 27 Mar 2021 at 11:52, Aleksandar Lazic  wrote:


Hi.

I have a lot of such entries in my logs.

```
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/<NOSRV> -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 "<BADREQ>"
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/<NOSRV> -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 "<BADREQ>"


Use show errors on the admin socket:
https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.3-show%20errors


Thanks.


Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure


That's currently a pain point:

https://github.com/haproxy/haproxy/issues/693


Thanks.


Lukas



Regards
Alex



[HAP 2.3.8] Is there a way to see why "<BADREQ>" and "SSL handshake failure" happens

2021-03-27 Thread Aleksandar Lazic

Hi.

I have a lot of such entries in my logs.

```
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/<NOSRV> -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 "<BADREQ>"
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 [27/Mar/2021:11:48:20.523] https-in~ 
https-in/<NOSRV> -1/-1/-1/-1/0 0 0 - - PR-- 1041/1011/0/0/0 0/0 "<BADREQ>"

Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
[27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
```

Is there an easy way to see why this happens?

```
root@lb1:~# haproxy -vv
HA-Proxy version 2.3.8-1ppa1~bionic 2021/03/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.8.html
Running on: Linux 4.15.0-139-generic #143-Ubuntu SMP Tue Mar 16 01:30:17 UTC 
2021 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-ot86Gj/haproxy-2.3.8=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 
USE_SYSTEMD=1
  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
+GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
fcgi : mode=HTTP   side=BE    mux=FCGI
<default> : mode=HTTP   side=FE|BE mux=H1
<default> : mode=TCP    side=FE|BE mux=PASS

Available services : prometheus-exporter
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
```

Regards
Alex



Which mode for Quic?

2021-03-02 Thread Aleksandar Lazic

Hi.

I assume that QUIC will be a dedicated mode, right?

Something like
   h3 : mode=QUIC   side=FE|BE mux=H3


```
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
fcgi : mode=HTTP   side=BE    mux=FCGI
<default> : mode=HTTP   side=FE|BE mux=H1
<default> : mode=TCP    side=FE|BE mux=PASS
```

Regards
Aleks



Re: Setting up haproxy for tomcat SSL Valve

2021-02-25 Thread Aleksandar Lazic

On 25.02.21 07:38, Jarno Huuskonen wrote:

Hi,

On Thu, 2021-02-25 at 03:24 +0100, Aleksandar Lazic wrote:

Hi.

I am trying to set up HAProxy (more precisely, the OpenShift Router :-)) to send
the TLS/SSL client information to Tomcat.

The following parameters are available on the SSL Valve page.

http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#SSL_Valve

SSL_CLIENT_CERT string  PEM-encoded client certificate
?

The only missing parameter is SSL_CLIENT_CERT in PEM format. HAProxy has
ssl_c_der in DER format, but the code in SSLValve expects PEM.

https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/valves/SSLValve.java#L125

Have I overlooked something in the HAProxy code or docs, or is there currently
no option to get the client certificate out of HAProxy in PEM format?


It should be possible (had this working years ago):
(https://www.mail-archive.com/haproxy@formilux.org/msg20883.html
http://shibboleth.net/pipermail/users/2015-July/022674.html)

Something like:
http-request add-header X-SSL-Client-Cert -----BEGIN\ CERTIFICATE-----\
%[ssl_c_der,base64]\ -----END\ CERTIFICATE-----\ # don't forget last space
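As a sanity check, PEM is just the base64 of the DER bytes between BEGIN/END markers (RFC 7468); the single-header trick above uses escaped spaces, which Tomcat's SSLValve turns back into line breaks. A hedged Python sketch of the equivalent transformation, using dummy bytes rather than a real certificate:

```python
import base64
import textwrap

def der_to_pem(der: bytes) -> str:
    # Base64-encode the DER bytes and wrap at 64 columns, as PEM expects
    b64 = base64.b64encode(der).decode()
    body = "\n".join(textwrap.wrap(b64, 64))
    return ("-----BEGIN CERTIFICATE-----\n"
            + body +
            "\n-----END CERTIFICATE-----\n")

der = b"\x30\x82\x01\x0a" + bytes(32)  # dummy bytes, not a real certificate
pem = der_to_pem(der)
```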


Cool thanks.


-Jarno



Best regards
Alex



Setting up haproxy for tomcat SSL Valve

2021-02-24 Thread Aleksandar Lazic

Hi.

I am trying to set up HAProxy (more precisely, the OpenShift Router :-)) to send
the TLS/SSL client information to Tomcat.

The following parameters are available on the SSL Valve page.

http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#SSL_Valve

```
sslClientCertHeader:
Allows setting a custom name for the ssl_client_cert header. If not specified, 
the default
of "ssl_client_cert" is used.

sslCipherHeader:
Allows setting a custom name for the ssl_cipher header. If not specified, the 
default
of "ssl_cipher" is used.

sslSessionIdHeader:
Allows setting a custom name for the ssl_session_id header. If not specified, 
the default
of "ssl_session_id" is used.

sslCipherUserKeySizeHeader:
Allows setting a custom name for the ssl_cipher_usekeysize header. If not 
specified, the
default of "ssl_cipher_usekeysize" is used.
```

I have found some corresponding variables on the mod_ssl page and the HAProxy
samples; at least I hope I found the right ones on the HAProxy side.

https://httpd.apache.org/docs/current/mod/mod_ssl.html#envvars

SSL_CLIENT_CERT string  PEM-encoded client certificate
?

SSL_CIPHER  string  The cipher specification name
http-request set-header ssl_cipher   %[ssl_fc_cipher]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_cipher

SSL_SESSION_ID  string  The hex-encoded SSL session id
http-request set-header ssl_session_id %[ssl_fc_session_id,hex]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_session_id

SSL_CIPHER_USEKEYSIZE   number  Number of cipher bits (actually used)
http-request set-header ssl_cipher_usekeysize %[ssl_fc_alg_keysize]
http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#7.3.4-ssl_fc_alg_keysize

The only missing parameter is SSL_CLIENT_CERT in PEM format. HAProxy has
ssl_c_der in DER format, but the code in SSLValve expects PEM.

https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/valves/SSLValve.java#L125

Have I overlooked something in the HAProxy code or docs, or is there currently
no option to get the client certificate out of HAProxy in PEM format?

Regards
Alex



Re: Apache Proxypass mimicing ?

2021-02-22 Thread Aleksandar Lazic

Hi.

On 22.02.21 01:31, Igor Cicimov wrote:


But if I do some configuration tweaks in "wp-config.php", like adding the 
following two lines :

define('WP_HOME', 'https://front1.domain.local');
define('WP_SITEURL', 'https://front1.domain.local');

It seems to work correctly.

It is not an acceptable solution however, as these WP instances will be 
managed by people who are not really tech-savvy.


So I wonder if HAProxy could provide a setup with all the required modifications, 
rewritings, ... allowing both worlds to coexist in a transparent way :

- usable WP site while browsing the "real" URLs from the backend
- usable WP site while browsing through HAProxy.

Right now WP is my concern, but I am sure this is a reusable "pattern" for 
future needs.

Regards


This is a requirement for most apps behind a reverse proxy -- you simply have to 
tell the app that it is behind a reverse proxy so it can set correct links where needed.


In your case, if you google for "wordpress behind reverse proxy", I'm sure
you'll find a ton of resources that can point you in the right direction for
your use case, like using X-Forwarded-* headers, or whatever suits you.


Full ack to Igor's statement.

As a further idea, maybe you can rewrite the response:
http://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4.2-http-response%20replace-header
http://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4.2-http-response%20replace-value

It could be tricky for a huge number of hosts; because of that, I suggest
setting up WP with WP_HOME and WP_SITEURL, which is possible via the wp-admin
GUI :-)

You can also create a small setup tool which adds the values to wp-config.php
and adds the haproxy map entry for the domain.
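Such a setup tool could be as small as this sketch. The function name, file paths, and map format here are assumptions for illustration, not a finished tool; you would still need to reload haproxy (or push the entry over the runtime API) afterwards.

```python
import os
import tempfile

def add_wp_site(domain: str, backend: str,
                wp_config: str, backend_map: str) -> None:
    # Append the WP_HOME/WP_SITEURL defines to the site's wp-config.php
    with open(wp_config, "a") as f:
        f.write(f"define('WP_HOME', 'https://{domain}');\n")
        f.write(f"define('WP_SITEURL', 'https://{domain}');\n")
    # Append a "domain backend" line to the haproxy backend map file
    with open(backend_map, "a") as f:
        f.write(f"{domain} {backend}\n")

# Demo on throwaway files; real paths would be the site's wp-config.php
# and e.g. /etc/haproxy/haproxy_backend.map
tmp = tempfile.mkdtemp()
wp_conf = os.path.join(tmp, "wp-config.php")
bk_map = os.path.join(tmp, "haproxy_backend.map")
add_wp_site("front1.domain.local", "be_wp1", wp_conf, bk_map)
```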

Regards
Alex












Re: Apache Proxypass mimicing ?

2021-02-18 Thread Aleksandar Lazic

HI.

On 18.02.21 10:12, spfma.t...@e.mail.fr wrote:

Hi,
I would like to set up a reverse proxy with SSL termination to allow something
like:



https://front1.domain proxying http://back1.otherdomain:8000 (and maybe one day 
back2)
https://front2.domain proxying http://back3.otherdomain:5000

>

Common things I already configured using Apache's mod_proxy.
I am not an HAProxy expert, I only used it in tcp mode for simple and efficient 
load balancing.


I would suggest to take a look into the following articles.

https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/
https://www.haproxy.com/blog/introduction-to-haproxy-maps/

I have read this very interresting article https://www.haproxy.com/fr/blog/howto-write-apache-proxypass-rules-in-haproxy/ 
but it seems directives belong to former versions, and I was not able to get the expected result.

>

One of my important use-case is Apache backends hosting WordPress.
There are numerous examples here and there, but I always end up with URLs like https://front1.domain/wp-admin 
redirected to http://front1.domain:8000/wp-admin or https://back1.otherdomain:8000/wp-admin and so on ...

I know WP redirects to URLs derived from its configured URLs, so I guess some 
header rewriting is required, but I don't know how to do that.
I am looking for a generic way to perform the required rewrites, without depending 
on fixed URL patterns. Is this even possible with HAProxy? Some very old posts 
suggested it was not, but they were from around nine years ago.
I have not been able to find answers so far (some search results show appealing 
descriptions but sites are not responding) so I am looking for some help here.


Well, you will need some pattern that the computer can follow.

For example: based on which criteria should a program know what it should do 
with the URL?

Request: https://front1.domain/wp-admin

Redirect to http://front1.domain:8000/wp-admin under which conditions?
Send the request to https://back1.otherdomain:8000/wp-admin under which conditions?

I would start with this config:
https://github.com/Tyrell66/SoHo/blob/master/haproxy-2020.05.02.cfg

Here is a slightly adapted version.


```
frontend http-in
  bind *:80

  # Prevent DDoS
  stick-table type ip size 100k expire 30s store http_req_rate(10s)
  http-request track-sc0 src
  http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }

  http-request add-header X-Forwarded-Proto http
  redirect scheme https if !{ ssl_fc }


frontend https-in
  # /etc/haproxy/certs/ contains the .pem files for the default and second domain names.
  bind *:443 ...

  http-response replace-header Location ^http://(.*)$ https://\1

  http-request set-header X-Forwarded-Proto https
  http-request set-header X-Forwarded-Port 443
  capture request header X-Forwarded-For len 15

  # Strip off Proxy headers to prevent HTTPoxy (https://httpoxy.org/)
  http-request del-header Proxy

  ## Secure headers:
  ## https://blog.devcloud.hosting/securing-haproxy-and-nginx-via-http-headers-54020d460283
  ## Test your config with https://securityheaders.com/
  ## and https://observatory.mozilla.org/
  http-response set-header X-XSS-Protection "1; mode=block"
  http-response set-header X-Content-Type-Options nosniff
  http-response set-header Referrer-Policy no-referrer-when-downgrade
  http-response set-header X-Frame-Options SAMEORIGIN
  http-response del-header X-Powered-By
  http-response del-header Server

  # This line is for HSTS:
  http-response set-header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"

  use_backend %[req.hdr(host),lower,map(hosts.map,be_static)]

backend be_static
  server default_static xxx.xxx.xx

backend be_domain1
  # The path prefix is rewritten here; a server address must not contain a path.
  http-request replace-uri ^/gc/(.*) /guacamole/\1
  server host1 192.168.1.13:58080

...

```

file hosts.map
```
front1.domain be_domain1
front2.domain be_domain2

```

You can also set up maps for paths, and for hosts with ports.
As you can see, HAProxy should be able to fulfill your requirement, as long as 
you can define it precisely enough for the program/computer ;-)
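As a hedged sketch of that idea (the map file names and entries below are made up): a second map keyed on Host including the port, and a path-prefix map as fallback.

```
# hosts-ports.map: "host:port backend" pairs
# front1.domain:8443 be_domain1
#
# paths.map: "path-prefix backend" pairs
# /blog be_wordpress

frontend https-in
  bind *:443 ...
  # First try the Host header including the port ...
  use_backend %[req.hdr(host),lower,map(hosts-ports.map)]
  # ... then fall back to the path prefix, with be_static as default.
  use_backend %[path,map_beg(paths.map,be_static)]
```

A `use_backend` rule whose map lookup returns nothing simply does not switch, so the rules chain naturally from most to least specific.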

Maybe this article can also help you to protect the WP installations from 
attacks.
https://www.haproxy.com/blog/wordpress-cms-brute-force-protection-with-haproxy/
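In the spirit of that article, a minimal hedged sketch of login brute-force protection (the thresholds and names are made up, not taken from the article):

```
frontend https-in
  bind *:443 ...
  acl wp_login path_beg /wp-login.php

  # Track the request rate per source IP, but only for login attempts
  stick-table type ip size 100k expire 10m store http_req_rate(60s)
  http-request track-sc1 src if wp_login
  http-request deny deny_status 429 if wp_login { sc_http_req_rate(1) gt 10 }
```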


Thanks


Welcome

Alex



[PATCH] DOC/MINOR: ROADMAP: adopt the Roadmap to the current state

2021-02-05 Thread Aleksandar Lazic

Hi.

attached a patch for the Roadmap.

Bandwidth limitation is still listed as an open entry; from this fact I assume
it's not easy to handle bandwidth limitation within haproxy.

Regards
Aleks
>From 8a77687ca480feb286fd394d533570b079d4be27 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 5 Feb 2021 11:09:35 +0100
Subject: [PATCH] DOC/MINOR: ROADMAP: adopt the Roadmap to the current state

---
 ROADMAP | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/ROADMAP b/ROADMAP
index a797b84eb..67cd7edaa 100644
--- a/ROADMAP
+++ b/ROADMAP
@@ -3,9 +3,10 @@ Medium-long term wish list - updated 2019/06/15
 Legend: '+' = done, '-' = todo, '*' = done except doc
 
 2.1 or later :
-  - return-html code xxx [ file "xxx" | text "xxx" ] if 
-
-  - return-raw  [ file "xxx" | text "xxx" ] if 
+  + return-html code xxx [ file "xxx" | text "xxx" ] if 
+  + return-raw  [ file "xxx" | text "xxx" ] if 
+Both are available since 2.2
+https://www.haproxy.com/blog/announcing-haproxy-2-2/#native-response-generator
 
   - have multi-criteria analysers which subscribe to req flags, rsp flags, and
 stream interface changes. This would result in a single analyser to wait
@@ -34,8 +35,10 @@ Legend: '+' = done, '-' = todo, '*' = done except doc
   - add support for event-triggered epoll, and maybe change all events handling
 to pass through an event cache to handle temporarily disabled events.
 
-  - evaluate the changes required for multi-process+shared mem or multi-thread
+  + evaluate the changes required for multi-process+shared mem or multi-thread
 +thread-local+fast locking.
+HAProxy has used threads since 1.8; threading was enhanced in 2.0.
+https://www.haproxy.com/blog/haproxy-2-0-and-beyond/#cloud-native-threading-logging
 
 Old, maybe obsolete points :
   - clarify licence by adding a 'MODULE_LICENCE("GPL")' or something equivalent.
@@ -61,7 +64,7 @@ Unsorted :
 
   - random cookie generator
 
-  - fastcgi to servers
+  + fastcgi to servers. Available since 2.1 https://www.haproxy.com/blog/haproxy-2-1/#fastcgi
 
   - hot config reload
 
@@ -73,4 +76,4 @@ Unsorted :
 
   - dynamic weights based on check response headers and traffic response time
 
-  - various kernel-level acceleration (multi-accept, ssplice, epoll2...)
+  + various kernel-level acceleration (multi-accept, ssplice, epoll2...)
-- 
2.25.1



Re: HAProxy ratelimit based on bandwidth

2021-02-05 Thread Aleksandar Lazic

On 26.01.21 20:27, Aleksandar Lazic wrote:

Hi.

On 26.01.21 05:54, Sangameshwar Babu wrote:
 > Hello Team,
 >
 > I would like to get some suggestions on setting up ratelimit on HAProxy 1.8 
version,
 > my current setup is as below.
 >
 > 1000+ rsyslog clients(TCP) -> HAProxy (TCP mode) -> backend centralized 
rsyslog server.
 >
 > I have the below stick table and acl's through which I am able to mark a 
source as
 > "abuse" if the client crosses the limit post which all new connections from 
the
 > same client are rejected until stick table timer expires.
 >
 > haproxy.cfg
 > -
 >  stick-table type ip size 200k expire 2m store 
gpc0,conn_rate(2s),bytes_in_rate(1s),bytes_in_cnt
 >
 >  acl data_rate_abuse  sc1_bytes_in_rate ge 100
 >  acl data_size_abuse  sc1_kbytes_in ge 1
 >
 > tcp-request connection silent-drop if data_rate_abuse
 >  tcp-request connection reject if data_size_abuse
 >
 > However I would like to configure in such a way that once a client sends 
about
 > "x bytes" of data the connection should be closed instantly instead of 
marking it
 > abuse and simultaneous connections being rejected.

+1
I have a similar issue and hope that we get suggestions and an answer here.

 > Kindly let me know if the above can be configured with HAProxy version 1.8.

I will need it for 2.2+


Looks like this feature is not yet available when I look into the roadmap.

There is a "bandwidth limits" entry.
http://git.haproxy.org/?p=haproxy.git;a=blob;f=ROADMAP;h=a797b84eb95298807cefa03edaa69583d8007c5b;hb=HEAD#l22

I have also seen some points there which are already implemented, therefore I 
will send a patch to update the roadmap.


 > BR
 > Sangam


Regards
Aleks




Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-29 Thread Aleksandar Lazic

On 29.01.21 12:27, Christopher Faulet wrote:

Le 22/01/2021 à 07:08, Willy Tarreau a écrit :

On Thu, Jan 21, 2021 at 11:09:33PM +0100, Aleksandar Lazic wrote:

On 21.01.21 21:57, Christopher Faulet wrote:

Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?



Hi,

It is not possible right now. But it will be very very soon. Amaury implemented 
the
H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
tricky fixes on the tunnel management that must be carefully reviewed. It is a
nightmare to support all tunnel combinations. But I've almost done the review. I
must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
will do my best to push it very soon. Anyway, it will be a feature for the 2.4.


Wow that sounds really great. Thank you for your answer.


And by the way, initially we thought we'd backport Amaury's work to 2.3,
but given the dependency on the tunnel stuff that opened this Pandora's
box, now I'm pretty sure we won't :-)

One nice point is that he managed to natively support the WS handshake,
it's not just a blind tunnel anymore, so that it's possible to have WS
using either H1 or H2 on the frontend, and either H1 or H2 on the backend.
Now we're really seeing the benefits of HTX because while at each extremity
we have a very specific WS handshake, in the middle we just have a tunnel
using a WS protocol, which allows a CONNECT on one side to become a GET on
the other side.

As Christopher said, the tunnel changes are extremely complicated because
these uncovered some old limitations at various levels, and each time we
reviewed the pending changes we could imagine a situation where an odd use
case would break if we don't recursively go into another round of refactoring
at yet another deeper level. But we're on the right track now, things start
to look good.



FYI, the HTTP/2 websockets support is now available and will be part of the 
next 2.4-dev release (2.4-dev7)
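As a hedged sketch of what this enables in 2.4 (the names and addresses are made up): ALPN negotiates H1 or H2 on the frontend, while the server line speaks H2 to the backend, and the WebSocket handshake is translated between the two sides natively.

```
frontend ws-in
  mode http
  bind :443 ssl crt /etc/haproxy/ws.pem alpn h2,http/1.1
  default_backend ws-out

backend ws-out
  mode http
  # WebSocket streams over an H2 backend connection use the
  # RFC 8441 Extended CONNECT handshake.
  server app1 192.0.2.10:8443 ssl verify none alpn h2
```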


Cool thanks.



Re: HAProxy ratelimit based on bandwidth

2021-01-26 Thread Aleksandar Lazic

Hi.

On 26.01.21 05:54, Sangameshwar Babu wrote:
> Hello Team,
>
> I would like to get some suggestions on setting up ratelimit on HAProxy 1.8 
version,
> my current setup is as below.
>
> 1000+ rsyslog clients(TCP) -> HAProxy (TCP mode) -> backend centralized 
rsyslog server.
>
> I have the below stick table and acl's through which I am able to mark a 
source as
> "abuse" if the client crosses the limit post which all new connections from 
the
> same client are rejected until stick table timer expires.
>
> haproxy.cfg
> -
>  stick-table type ip size 200k expire 2m store 
gpc0,conn_rate(2s),bytes_in_rate(1s),bytes_in_cnt
>
>  acl data_rate_abuse  sc1_bytes_in_rate ge 100
>  acl data_size_abuse  sc1_kbytes_in ge 1
>
> tcp-request connection silent-drop if data_rate_abuse
>  tcp-request connection reject if data_size_abuse
>
> However I would like to configure in such a way that once a client sends about
> "x bytes" of data the connection should be closed instantly instead of 
marking it
> abuse and simultaneous connections being rejected.

+1
I have a similar issue and hope that we get suggestions and an answer here.

> Kindly let me know if the above can be configured with HAProxy version 1.8.

I will need it for 2.2+

> BR
> Sangam

Regards
Aleks



Re: Question about substring match (*_sub)

2021-01-23 Thread Aleksandar Lazic

On 23.01.21 07:36, Илья Шипицин wrote:

the following usually works for performance profiling.


1) setup work stand (similar to what you use in production)

2) use valgrind + callgrind for collecting traces

3) put workload

4) aggregate using kcachegrind

most probably you were going to do very similar things already :)


Thanks for the tips ;-)

The issue here is that for sub-string matching several parameters are
important, like the pattern, the pattern length, the text, the text
length and the alphabet.

My question was focused to hear some "common" setups to be able to
create some valid tests for the different algorithms to compare it.

I think something like the examples below. As I haven't used _sub
in the past, it's difficult for me alone to create some valid use
cases as they are used out there. It's okay to send examples only
to me, in case of security or privacy concerns.

acl allow_from_int hdr_sub(x-forwarded-for) 192.168.4.5
acl admin_access   hdr_sub(user) admin
acl test_url       path_sub test=1

Should UTF-* be considered a valid alphabet, or only ASCII?

If _sub is a very rare case then it's okay as it is, isn't it?

Opinions?
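For reference, the same kind of checks can also be written with the generic `-m sub` matching method; a sketch (the header names are only examples, not taken from a real deployment):

```
frontend fe_demo
  bind :8080
  acl allow_from_int req.hdr(x-forwarded-for) -m sub 192.168.4.5
  acl admin_access   req.hdr(user) -m sub admin
  http-request deny if !allow_from_int !admin_access
  default_backend be_demo

backend be_demo
  server s1 127.0.0.1:8081
```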


сб, 23 янв. 2021 г. в 03:18, Aleksandar Lazic mailto:al-hapr...@none.at>>:

Hi.

I would like to take a look into the substring match implementation because 
of
the comment there.


http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767
 

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different algorithms about 
sub-string
match I would like to know which pattern and length is a "common" use case
for the user here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks






Question about substring match (*_sub)

2021-01-22 Thread Aleksandar Lazic

Hi.

I would like to take a look into the substring match implementation because of
the comment there.

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/pattern.c;h=8729769e5e549bcd4043ae9220ceea440445332a;hb=HEAD#l767

"NB: Suboptimal, should be rewritten using a Boyer-Moore method."

Now before I take a deeper look into the different algorithms about sub-string
match I would like to know which pattern and length is a "common" use case
for the user here?

There are so many different algorithms which are mostly implemented in the
Smart Tool ( https://github.com/smart-tool/smart ) therefore it would be
interesting to know some metrics about the use cases.

Thanks for sharing.
Best regards

Aleks



Re: Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

On 21.01.21 21:57, Christopher Faulet wrote:
> Le 21/01/2021 à 21:19, Aleksandar Lazic a écrit :
>> Hi.
>>
>> I'm not sure if I have missed something, because there are so many great 
features
>> now in HAProxy, therefore I just ask here.
>>
>> Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy 
now?
>>
>
> Hi,
>
> It is not possible right now. But it will be very very soon. Amaury 
implemented the
> H2 websocket support and it works pretty well. Unfortunately, this relies on 
some
> tricky fixes on the tunnel management that must be carefully reviewed. It is a
> nightmare to support all tunnel combinations. But I've almost done the 
review. I
> must split a huge patch in 2 or 3 smaller and more manageable ones. I'm on it 
and I
> will do my best to push it very soon. Anyway, it will be a feature for the 
2.4.

Wow that sounds really great. Thank you for your answer.

Regards
Aleks



Question about rfc8441 (Bootstrapping WebSockets with HTTP/2)

2021-01-21 Thread Aleksandar Lazic

Hi.

I'm not sure if I have missed something, because there are so many great 
features
now in HAProxy, therefore I just ask here.

Is the rfc8441 (Bootstrapping WebSockets with HTTP/2) possible in HAProxy now?

Regards

Aleks


