Re: some QUIC questions

2024-05-06 Thread Shawn Heisey

On 5/6/24 06:02, Björn Jacke wrote:

frontend ft_443
   bind :::443 ssl crt /ssl/combined.pem
   bind quic6@:443 ssl crt /ssl/combined.pem alpn h3
   option tcp-smart-accept
   http-after-response add-header alt-svc 'h3=":443"; ma=600; persistent=1'





frontend ft_quic_test
     mode tcp
     bind quic6@:443 ssl crt /ssl/combined.pem
     use_backend local_smb

this results in this config check error, though:

[ALERT]    (3611777) : config : frontend 'ft_quic_test' : MUX protocol 'quic' is not usable for 'bind quic6@:443' at [/etc/haproxy/haproxy.cfg:73].


So a setup like this is not supported by HAProxy's QUIC implementation 
currently, right?  Is QUIC in HAProxy HTTP3 only for now?


The alpn on the first config snippet only includes h3.  Here are alpn 
and npn settings that allow some of the older h3 draft variations as 
well as h3 itself:


alpn h3,h3-29,h3-28,h3-27 npn h3,h3-29,h3-28,h3-27

The second one is a tcp frontend ... I feel pretty sure that h3/quic 
requires mode http in the frontend, not mode tcp.
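If mode http is the missing piece, a frontend like this sketch should pass the config check (it reuses the cert path from the snippets above; the backend name is invented, since an HTTP/3 frontend cannot usefully route to an SMB backend):

```
# Hedged sketch: QUIC binds require an HTTP frontend.
frontend ft_quic_test
    mode http
    bind quic6@:443 ssl crt /ssl/combined.pem alpn h3
    default_backend local_web
```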


Thanks,
Shawn




Re: Odd warnings when using "option forwarded"

2024-04-26 Thread Shawn Heisey

On 4/26/24 10:51, Aurelien DARRAGON wrote:

This is expected because forwarded cannot be used on frontend unlike
forwardfor:


That's interesting, because I already had `option forwardfor except 
127.0.0.1` in the frontend, which works perfectly.  Should that be in 
the backend too?
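Aurelien's point above can be sketched as config (all names illustrative): forwardfor is valid in a frontend, while forwarded is a backend-side option.

```
frontend web
    bind :80
    option forwardfor except 127.0.0.1
    default_backend app

backend app
    option forwarded
    server app1 192.0.2.10:8080
```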


I was trying to find 'option forwarded' in the documentation, but I 
couldn't click through to it after finding it with the search box, and I 
couldn't search all the documentation with the browser since it isn't 
all on one page.


Thanks,
Shawn




Odd warnings when using "option forwarded"

2024-04-26 Thread Shawn Heisey
I was just reading about the new "option forwarded" capability in 2.9, 
so I added it to my config and now I get warnings when checking the config.


[WARNING]  (253408) : config : parsing [/etc/haproxy/haproxy.cfg:45] : 'option forwarded' ignored because frontend 'web80' has no backend capability.
[WARNING]  (253408) : config : parsing [/etc/haproxy/haproxy.cfg:60] : 'option forwarded' ignored because frontend 'web' has no backend capability.


This is the line I added to the frontends:

option  forwarded except 127.0.0.1

I realized I don't need it on 'web80', so I removed that one.

The 'web' frontend does use backends.  Its default backend has no 
servers and denies all requests -- the request must match one of the 
ACLs to choose a backend that actually does something.
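The routing described above might look something like this sketch (all names invented):

```
frontend web
    bind :443 ssl crt /ssl/combined.pem
    acl host_blog hdr(host) -i blog.example.org
    use_backend be_blog if host_blog
    # anything that matches no ACL falls through to the deny backend
    default_backend be_deny

backend be_deny
    # no servers; reject everything that reaches it
    http-request deny

backend be_blog
    server blog1 192.0.2.11:81
```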


I suspect that I can ignore this warning, but I want to check here and 
make sure.  Can I set something so it doesn't emit the warning?


Thanks,
Shawn




Re: How to check if a domain is known to HAProxy

2024-04-03 Thread Shawn Heisey

On 4/3/24 06:02, Froehlich, Dominik wrote:
I fear that strict-sni won’t get us far. The issue is that the SNI is 
just fine (it is in the crt-list), however we also need to check if the 
host-header is part of the crt-list. E.g.


William's answer should work.

The strict-sni setting makes sure that the SNI is in the cert list.  If 
it's not, then TLS negotiation will fail and as a result the request 
will not complete.


Then the following ACL in William's reply checks that the host header 
actually matches SNI:


   http-request set-var(txn.host) hdr(host)
   # Check whether the client is attempting domain fronting.
   acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0

If SNI matches the Host header, then that ACL will be true.  Combined 
with strict-sni ensuring that the SNI matches one of your certs, this 
will get you what you want.


You can also reverse the ACL so it is false if there is no match.  The 
docs for 2.8 do not mention "ne" as a possible operator, so this ACL 
checks for greater than and less than:


   acl ssl_sni_http_host_no_match ssl_fc_sni,strcmp(txn.host) lt 0
   acl ssl_sni_http_host_no_match ssl_fc_sni,strcmp(txn.host) gt 0
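Putting both pieces together, a hedged sketch of a frontend that rejects domain fronting (the crt-list path is an assumption, not from the thread):

```
frontend web
    bind :443 ssl crt-list /etc/haproxy/crt-list.txt strict-sni
    http-request set-var(txn.host) hdr(host)
    acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0
    # deny (403 by default) whenever Host does not match the SNI
    http-request deny unless ssl_sni_http_host_match
```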

Thanks,
Shawn




Re: [*EXT*] Re: Public-facing haproxy : recommandation about headers

2023-12-10 Thread Shawn Heisey

On 12/10/2023 15:07, Tristan wrote:
Cool topic! A few things struck me (of various levels of pertinence, and 
sorry in advance for the digressions):



  http-request del-header [Ff]orwarded.+ -m reg
  http-request del-header [Xx]-[Ff]orwarded.+ -m reg


I wonder about the regex use here to handle header case.
The HTTP spec mandates that header names are case-insensitive 
(https://www.rfc-editor.org/rfc/rfc9110.html#fields), and the 
documentation (here for example: 
https://docs.haproxy.org/2.9/configuration.html#4.2-option%20h1-case-adjust-bogus-client) suggests that HAProxy does the right thing and normalizes them all to lower-case by default:


Could be that I don't need the checks for different case.  Pretty sure 
it isn't hurting anything to have it there, so I keep it.  I'm no expert 
at this!



  http-request set-header X-H3 true if { so_name -i -m beg quic443 }

you can use HTTP_3.0 as a more readable fetch here I think
since: 
https://github.com/haproxy/haproxy/commit/89da4e9e5d8ef467d52beb9234f832aa9aa87bce in v2.9.0
Though if you are looking to get "GET /" 200 HTTP/3.0 in logs for 
example, you can use %HV in logs (well, almost... see: 
https://github.com/haproxy/haproxy/issues/2095#issuecomment-1803179697 
the portion about HTTP versions).


I used the "name" option on all my bind lines.  The ones for quic all 
start with "quic443".  By checking for that and setting a header, 
applications can know that a given request uses HTTP3 in the browser. 
The custom "X-H3" header is used by a silly little PHP program I wrote:


https://http3test.elyograg.org

If there is something more standard, I can always adjust that.


  http-request set-header X-Scheme https
  http-request set-header X-Forwarded-Scheme https
  http-request set-header X-Forwarded-Proto https
  http-request set-header X-Forwarded-HTTPS true
  http-request set-header X-Forwarded-Host %[req.hdr(Host)]
  http-request set-header X-Forwarded-SSL true
  http-request set-header X-HTTPS on
  http-request set-header X-SSL %[ssl_fc]
I don't know how demanding your backends/developers are, but I'd 
consider threatening them with the nearest sharp object if they asked 
all of these from me...


I haven't got any developers.  It's all personal websites, mostly using 
off-the-shelf applications.  Gitlab, Wordpress, Plex, and others.  I 
probably don't need all those headers, I was just setting everything I 
could think of that a web application might use to detect that the 
browser is using https.


My backend connections are not encrypted, except for plex.

Thanks,
Shawn




Re: [*EXT*] Re: Public-facing haproxy : recommandation about headers

2023-12-09 Thread Shawn Heisey

On 12/8/23 14:35, Ionel GARDAIS wrote:

Thanks Tristan.

So typically I’d say to add to every single http frontend:


 > http-request set-header X-Forwarded-For %[src]

http-request set-header X-Forwarded-Host %[hdr(Host)]
http-request set-header X-Forwarded-Proto %[ssl_fc,iif(https,http)]
http-request set-header Forwarded 
"by=${HOSTNAME};for=%[src];host=%[hdr(Host)];proto=%[ssl_fc,iif(https,http)]"|



Almost what I already did :)
What about using %[hdr(host,1)] to forcefully use the first Host header 
if multiple headers are sent ?


What I have done:

In "defaults" I have:

 option  forwardfor except 127.0.0.1

In the https frontend, I have the following header-related options, in 
this order:


 http-request del-header [Ff]orwarded.+ -m reg
 http-request del-header [Xx]-[Ff]orwarded.+ -m reg
 http-request add-header Forwarded "by=\"${HOSTNAME}\"; for=\"%[src]\"; 
host=\"%[hdr(Host)]\"; proto=https"

 http-request set-header X-H3 true if { so_name -i -m beg quic443 }
 http-request set-header X-Scheme https
 http-request set-header X-Forwarded-Scheme https
 http-request set-header X-Forwarded-Port %fp
 http-request set-header X-Forwarded-Proto https
 http-request set-header X-Forwarded-HTTPS true
 http-request set-header X-Forwarded-Host %[req.hdr(Host)]
 http-request set-header X-Forwarded-SSL true
 http-request set-header X-Haproxy-Current-Date %T
 http-request set-header X-HTTPS on
 http-request set-header X-SSL %[ssl_fc]
 http-request set-header X-SSL-Session_ID %[ssl_fc_session_id,hex]
 http-after-response set-header Strict-Transport-Security 
"max-age=1600; includeSubDomains; preload;"

 http-after-response add-header alt-svc 'h3=":443"; ma=7200'

Question for the group: Does that look like a good config?  Should I be 
doing something different?


Thanks,
Shawn




Re: Logging port #

2023-11-19 Thread Shawn Heisey

On 11/18/2023 08:07, Christoph Kukulies wrote:

For haproxy I don't have a log-format string.
defaults
         log     global
         mode    http
         option  httplog
         option  dontlognull
         timeout connect 5000
         timeout client  5
         timeout server  5
         errorfile 400 /etc/haproxy/errors/400.http
         errorfile 403 /etc/haproxy/errors/403.http
         errorfile 408 /etc/haproxy/errors/408.http
         errorfile 500 /etc/haproxy/errors/500.http
         errorfile 502 /etc/haproxy/errors/502.http
         errorfile 503 /etc/haproxy/errors/503.http
         errorfile 504 /etc/haproxy/errors/504.http
     compression algo gzip
     compression type text/html text/css text/plain text/vcard 
text/vnd.rim.location.xloc text/vtt text/x-component 
text/x-cross-domain-policy application/atom+xml application/javascript 
application/x-javascript application/json application/ld+json 
application/manifest+json application/rss+xml application/vnd.geo+json 
application/vnd.ms-fontobject application/x-font-ttf 
application/x-web-app-manifest+json application/xhtml+xml 
application/xml font/opentype image/bmp image/svg+xml image/x-icon 
text/cache-manifest

     balance roundrobin
     option dontlog-normal
     option dontlognull
     option httpclose
     option forwardfor


If that's the extent of the logging configuration, then I wonder whether 
you've actually got logging set up.  Or if you do, maybe it's not going 
where you think it's going.


These are the lines logged by haproxy when I tell my browser to go to my 
blog:


Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:64098 
[18/Nov/2023:15:47:55.449] web~ be_smeagol_81/smeagol 0/0/2/227/229 200 
18218 - - --NI 1/1/0/0/0 0/0 {purg.atory.org} "GET 
https://purg.atory.org/ HTTP/2.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.708] web~ be_smeagol_81/smeagol 0/0/0/3/3 200 2625 
- - --VN 1/1/4/4/0 0/0 {purg.atory.org} "GET 
/wp-includes/blocks/navigation/style.min.css?ver=6.4.1 HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.709] web~ be_smeagol_81/smeagol 0/0/0/5/5 200 2484 
- - --VN 1/1/3/3/0 0/0 {purg.atory.org} "GET 
/wp-content/themes/twentytwentytwo/style.css?ver=1.6 HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.709] web~ be_smeagol_81/smeagol 0/0/0/13/13 200 
1474 - - --VN 1/1/3/3/0 0/0 {purg.atory.org} "GET 
/wp-includes/blocks/navigation/view.min.js?ver=e3d6f3216904b5b42831 
HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.709] web~ be_smeagol_81/smeagol 0/0/0/17/17 200 
1057 - - --VN 1/1/2/2/0 0/0 {purg.atory.org} "GET 
/wp-includes/js/wp-embed.min.js?ver=6.4.1 HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.709] web~ be_smeagol_81/smeagol 0/0/0/38/38 200 
12360 - - --VN 1/1/2/2/0 0/0 {purg.atory.org} "GET 
/wp-includes/js/dist/interactivity.min.js?ver=6.4.1 HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.714] web~ be_smeagol_81/smeagol 0/0/0/33/34 200 
103948 - - --VN 1/1/1/1/0 0/0 {purg.atory.org} "GET 
/wp-content/themes/twentytwentytwo/assets/images/flight-path-on-transparent-d.png 
HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.746] web~ be_smeagol_81/smeagol 0/0/0/9/30 200 
428789 - - --VN 1/1/0/0/0 0/0 {purg.atory.org} "GET 
/wp-content/themes/twentytwentytwo/assets/fonts/source-serif-pro/SourceSerif4Variable-Roman.ttf.woff2 
HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.779] web~ be_smeagol_81/smeagol 0/0/0/-1/166 404 
1634 - - LHVN 1/1/1/1/0 0/0 {purg.atory.org} "GET 
/wp-content/uploads/2022/09/sylized-e.jpeg HTTP/3.0"
Nov 18 15:47:55 - haproxy[5915] 192.168.217.1:54363 
[18/Nov/2023:15:47:55.864] web~ be_smeagol_81/smeagol 0/0/0/-1/84 400 
1232 - - CDVN 1/1/0/0/0 0/0 {purg.atory.org} "GET 
/wp-content/uploads/2022/09/sylized-e.jpeg HTTP/3.0"


That logging doesn't include port numbers, but that information is 
obtainable by reference.  You'll see that it says "web~ 
be_smeagol_81/smeagol" ... which means that the frontend in use is named 
"web" ... the ~ means that it is using TLS ... the backend is named 
"be_smeagol_81" and the server within the backend is named "smeagol". 
It just so happens that this backend goes to port 81, and I did put that 
number in the backend name, but not everyone does.


I send my haproxy logging to syslog.  The system runs rsyslog, which is 
responsible for getting them into actual logs on disk.


global
log 127.0.0.1 len 65535 format rfc5424 local0
log 127.0.0.1 len 65535 format rfc5424 local1 notice
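Since the original question was about logging port numbers, a hedged sketch: the stock HTTP log format doesn't print the frontend or server ports, but log-format variables for them exist.

```
# %ci:%cp = client addr:port, %fi:%fp = frontend addr:port,
# %si:%sp = server addr:port chosen by the backend
frontend web
    log-format "%ci:%cp -> %fi:%fp [%tr] %ft %b/%s %si:%sp %ST %B %{+Q}r"
```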

Thanks,
Shawn




Re: Can't display the certificate: Not found or the certificate is a bundle!

2023-11-13 Thread Shawn Heisey

On 11/13/23 02:09, William Lallemand wrote:

"show ssl cert" shows the certificate in the haproxy memory, and not on
the filesystem. Start by doing "show ssl cert" without any argument to
see the list of certificates whcih were loaded by haproxy.


That makes complete sense now!  I saw an error on the other file because 
that file was not loaded by haproxy.


I have never looked at the documentation for this, so I do not know 
whether it is complete enough ... but there is an opportunity for an 
improved error message here.  It seems like haproxy could detect that 
the requested file is not loaded into memory and tell the user so.


Thanks,
Shawn




Re: Can't display the certificate: Not found or the certificate is a bundle!

2023-11-12 Thread Shawn Heisey

On 11/12/2023 02:37, Christoph Kukulies wrote:
As far as inadvertently publicly disclosing my private key is concerned: 
I obfuscated the excerpts of my .pem file by putting XX into the 
string.  Destroying part of it would suffice, I think.


Up to you.  I wouldn't trust it personally.

As far as the actual issue is concerned: it looks like haproxy (2.8) 
can't cope with the type of the certificate.  An ECC (256 bit) key seems 
to be generated by the acme.sh challenge by default.


My certificate is also using 256 bit EC keys because that's what recent 
versions of Lets Encrypt certbot provide by default, but my PEM file 
does not actually say "EC" like yours does, and haproxy works with it 
perfectly, including the "show" command with the socket.  The weird 
problem with the file on a tmpfs filesystem is probably unrelated.


This is the line in /etc/fstab that creates the filesystem for /tmp:

tmpfs /tmp tmpfs size=4096m,mode=1777 0 1

Another test with interesting results:

I made a copy of the cert with identical permissions (in the same 
directory) and made one change:


I changed "-BEGIN PRIVATE KEY-" and the matching END line to 
include the "EC" that yours has.  With that change, I get the exact 
error that you are getting.


I would bet that if you removed the "EC" from your PEM file, it would 
start working.
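One way to make that edit mechanically (a sketch with a throwaway key; paths are illustrative): openssl can rewrite a SEC1 "EC PRIVATE KEY" PEM into the PKCS#8 "PRIVATE KEY" form that worked here.

```shell
# Generate a demo SEC1 EC key (header: "BEGIN EC PRIVATE KEY"),
# then rewrite it as PKCS#8 ("BEGIN PRIVATE KEY"), unencrypted.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/demo-sec1.key
openssl pkcs8 -topk8 -nocrypt -in /tmp/demo-sec1.key -out /tmp/demo-pkcs8.key
head -1 /tmp/demo-pkcs8.key    # -----BEGIN PRIVATE KEY-----
```

For a real certificate, point -in at the actual key file and splice the converted key back into the combined PEM.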


My haproxy is compiled with the quictls fork of openssl, version 3.1.4.

Thanks,
Shawn




Re: Can't display the certificate: Not found or the certificate is a bundle!

2023-11-11 Thread Shawn Heisey

On 11/11/2023 02:26, Christoph Kukulies wrote:
The file is definitely there, and the command works on a different file 
when I apply it to the previously used certificate fullchain.pem.

The file which is not working, has the following structure:

-----BEGIN EC PRIVATE KEY-----


I think you have just publicly disclosed the private key for your 
certificate.  If so, you should immediately replace that certificate 
with a new one that uses a different key, and if it is a certificate 
generated by a public CA, see about getting it revoked.


On your issue:

This is very strange.

I ran your command with my LE certificate and it worked.

echo "show ssl cert 
/etc/ssl/certs/local/elyograg_org.wildcards.combined.pem" | socat 
/etc/haproxy/stats.socket -


Then I made a copy of the certificate file as /tmp/fff/ddd and the same 
command with that file returned the error you are getting!


echo "show ssl cert /tmp/fff/ddd" | socat /etc/haproxy/stats.socket -

The root filesystem is ext4 and /tmp is a tmpfs (ramdisk).  Unix 
permissions are not an issue, and I have never configured ACLs on this 
system.  SELinux is not active, and the apparmor service is 
stopped/disabled.  It does look like snapd has activated apparmor for 
snaps, which seems odd because the service is stopped.


root@smeagol:/var/log# apparmor_status
apparmor module is loaded.
59 profiles are loaded.
54 profiles are in enforce mode.
   /snap/snapd/20092/usr/lib/snapd/snap-confine

/snap/snapd/20092/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /snap/snapd/20290/usr/lib/snapd/snap-confine

/snap/snapd/20290/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   docker-default
   snap-update-ns.certbot
   snap-update-ns.certbot-dns-route53
   snap-update-ns.chromium
   snap-update-ns.crypto
   snap-update-ns.cups
   snap-update-ns.firefox
   snap-update-ns.gradle
   snap-update-ns.snap-store
   snap.certbot-dns-route53.hook.post-refresh
   snap.chromium.chromedriver
   snap.chromium.chromium
   snap.chromium.hook.configure
   snap.crypto.crypto
   snap.cups.accept
   snap.cups.cancel
   snap.cups.cups-browsed
   snap.cups.cupsaccept
   snap.cups.cupsctl
   snap.cups.cupsd
   snap.cups.cupsdisable
   snap.cups.cupsenable
   snap.cups.cupsfilter
   snap.cups.cupsreject
   snap.cups.cupstestppd
   snap.cups.driverless
   snap.cups.gs
   snap.cups.ippeveprinter
   snap.cups.ippfind
   snap.cups.ipptool
   snap.cups.lp
   snap.cups.lpadmin
   snap.cups.lpc
   snap.cups.lpinfo
   snap.cups.lpoptions
   snap.cups.lpq
   snap.cups.lpr
   snap.cups.lprm
   snap.cups.lpstat
   snap.cups.reject
   snap.firefox.firefox
   snap.firefox.geckodriver
   snap.firefox.hook.configure
   snap.firefox.hook.connect-plug-host-hunspell
   snap.firefox.hook.disconnect-plug-host-hunspell
   snap.firefox.hook.post-refresh
   snap.snap-store.hook.configure
   snap.snap-store.snap-store
   snap.snap-store.ubuntu-software
   snap.snap-store.ubuntu-software-local-file
5 profiles are in complain mode.
   snap.certbot.certbot
   snap.certbot.hook.configure
   snap.certbot.hook.prepare-plug-plugin
   snap.certbot.renew
   snap.gradle.gradle
0 profiles are in kill mode.
0 profiles are in unconfined mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
0 processes are in mixed mode.
0 processes are in kill mode.

Thanks,
Shawn




Re: unsupported protocol family 2 for address 'quic4@0.0.0.0:4

2023-11-08 Thread Shawn Heisey

On 11/8/23 10:11, Christoph Kukulies wrote:

frontend web80
         bind 0.0.0.0:80 name web80
         default_backend be-local-81


Normally you definitely would not want this in your production config... 
typically any request coming in on port 80 should be redirected to https 
without ever being sent to a backend webserver.


That config is only useful as-is for my CI pipeline.  I have updated it 
so it's much more in line with how my production setup is configured. 
This is how I configure port 80:


frontend web80
description Redirect to https
bind 0.0.0.0:80 name web80
redirect scheme https
default_backend be_deny

backend be_deny
description Back end with no servers that denies all requests.
no log
log 127.0.0.1 len 65535 format rfc5424 local0 notice err
http-request deny

Thanks,
Shawn




Re: unsupported protocol family 2 for address 'quic4@0.0.0.0:4

2023-11-08 Thread Shawn Heisey

On 11/8/23 05:37, Frederic Lecaille wrote:

The 0.0.0.0 special address has been forbidden for QUIC bindings.  Have a
look at the "bind" keyword documentation.


My gitlab CI/CD pipeline for this project uses 0.0.0.0 in the bind line 
and it passes.  The pipeline uses a special curl with HTTP3 support to 
validate that HTTP3 actually functions.  The gitlab-runner VM only has 
one IP address.


There's really only a problem with 0.0.0.0 if the system has multiple IP 
addresses on the NIC: replies from a socket bound to the wildcard address 
may go out with a different source IP than the one the client sent to, 
which is a quirk of UDP that I'm pretty sure haproxy cannot fix.


Thanks,
Shawn




Re: No Private Key found in '/etc/letsencrypt/live/www.mydomain.org/fullchain.pem.key

2023-11-05 Thread Shawn Heisey

On 11/5/2023 02:48, Christoph Kukulies wrote:

I git cloned haproxy and compiled it :

root@mail:~/haproxy# ./haproxy --version
HAProxy version 2.9-dev8-ce7501-38 2023/11/04 - https://haproxy.org/ 


Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open 

Running on: Linux 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 
UTC 2023 x86_64
Usage : haproxy [-f <cfgfile|cfgdir>]* [ -vdVD ] [ -n <maxconn> ] [ -N <maxconn> ]


Probably this is not what I want? Better 2.8 stable?
I compiled with

make TARGET=linux-glibc


Many projects have a single git repo for all versions and use branches 
to separate them.  Haproxy doesn't.  2.8 is developed in a completely 
separate git repository from the one that you cloned.  This is the repo 
that you want:


https://git.haproxy.org/git/haproxy-2.8.git

My scripts just make things easier.  They will compile/install haproxy 
2.8 and the latest 3.1.x version of quictls/openssl (currently 3.1.4) 
with only a few commands.  The repo does not contain binaries ... all 
scripts can be examined to verify that nothing shady is happening. 
Today I pushed up some fixes.


The quictls repo is a fork of openssl, which has been patched to include 
QUIC functions that haproxy can use to provide QUIC/HTTP3:

https://github.com/quictls/openssl

Thanks,
Shawn




Re: No Private Key found in '/etc/letsencrypt/live/www.mydomain.org/fullchain.pem.key

2023-11-04 Thread Shawn Heisey

On 11/4/2023 01:42, Christoph Kukulies wrote:
How does one install haproxy directly under Ubuntu, also to be more up 
to date?


I created this set of scripts that will automate the build and install 
of the latest haproxy 2.8 version with support for HTTP/3.  It builds 
directly from the 2.8 dev repo, so what it installs may be even newer 
than the most recent 2.8.x release:


https://github.com/elyograg/haproxy-scripts

The prep-source script installs a whole bunch of packages -- everything 
that is needed to build quictls and haproxy.  It also modifies 
/etc/apt/sources.list to uncomment the source repos.  If you have not 
touched your /etc/apt/sources.list file, it should work perfectly and 
not break your APT setup.


I built this to work on Ubuntu, where it has been well-tested, but it 
should also work on RHEL and its derivatives.


Although I do include a sample haproxy config, it is not really suitable 
as-is for production.  It is a config used by the gitlab CI/CD that I 
built for the project.


Thanks,
Shawn




Re: No Private Key found in '/etc/letsencrypt/live/www.mydomain.org/fullchain.pem.key

2023-11-02 Thread Shawn Heisey

On 11/2/2023 02:35, Christoph Kukulies wrote:

In /etc/letsencrypt/live/www.mydomain.org I have:

lrwxrwxrwx 1 root root  41 Oct 23 17:22 cert.pem -> ../../archive/www.mydomain.org/cert12.pem
lrwxrwxrwx 1 root root  42 Oct 23 17:22 chain.pem -> ../../archive/www.mydomain.org/chain12.pem
lrwxrwxrwx 1 root root  46 Oct 23 17:22 fullchain.pem -> ../../archive/www.mydomain.org/fullchain12.pem
lrwxrwxrwx 1 root root  13 Nov  1 12:12 fullchain.pem.key -> fullchain.pem
lrwxrwxrwx 1 root root  44 Oct 23 17:22 privkey.pem -> ../../archive/www.mydomain.org/privkey12.pem
lrwxrwxrwx 1 root root  11 Nov  1 12:11 privkey.pem.key -> privkey.pem
-rw-r--r-- 1 root root 692 Nov 13  2021 README

But note that the files ending in .key were put there on an experimental 
basis, because I read somewhere in the haproxy docs that one could put a 
file with the extension .key there and haproxy would then interpret it as 
the private key.  The location of this hint escapes me for the moment.


The link named 'fullchain.pem.key' is not pointing at a key.  It is 
pointing at the fullchain, which as already mentioned, does NOT contain 
the private key.


If you change that symlink to point at privkey.pem instead of 
fullchain.pem, haproxy might start working.  You do not need the 
privkey.pem.key symlink.


If you're going to use the fullchain file in haproxy, then you should 
also use the ssl-skip-self-issued-ca config that William mentioned so 
the root cert is not sent to browsers.
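For reference, that option lives in the global section (a sketch):

```
# Hedged sketch: tells haproxy not to send the self-issued (root)
# certificate in the TLS handshake even if the loaded PEM contains it.
global
    ssl-skip-self-issued-ca
```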


Thanks,
Shawn




Re: No Private Key found in '/etc/letsencrypt/live/www.mydomain.org/fullchain.pem.key

2023-11-01 Thread Shawn Heisey

On 11/1/23 05:20, Christoph Kukulies wrote:
'bind *:443' : No Private Key found in 
'/etc/letsencrypt/live/www.mydomain.org/fullchain.pem.key' 
.


I have the following in my
/etc/letsencrypt/live/www.mydomain.org :

lrwxrwxrwx 1 root root  41 Oct 23 17:22 cert.pem -> 
../../archive/www.mydomain.org/cert12.pem
lrwxrwxrwx 1 root root  42 Oct 23 17:22 chain.pem -> 
../../archive/www.mydomain.org/chain12.pem
lrwxrwxrwx 1 root root  46 Oct 23 17:22 fullchain.pem -> 
../../archive/www.mydomain.org/fullchain12.pem

lrwxrwxrwx 1 root root  13 Nov  1 12:12 fullchain.pem.key -> fullchain.pem
lrwxrwxrwx 1 root root  44 Oct 23 17:22 privkey.pem -> 
../../archive/www.mydomain.org/privkey12.pem

lrwxrwxrwx 1 root root  11 Nov  1 12:11 privkey.pem.key -> privkey.pem
-rw-r--r-- 1 root root 692 Nov 13  2021 README


This is what I have:

root@smeagol:/etc/letsencrypt/archive/elyograg.org-0022# ls -al 
/etc/letsencrypt/live/elyograg.org-0022

total 12
drwxr-xr-x  2 root root 4096 Nov  1 00:00 .
drwx------ 53 root root 4096 Nov  1 00:02 ..
lrwxrwxrwx  1 root root   41 Nov  1 00:00 cert.pem -> 
../../archive/elyograg.org-0022/cert1.pem
lrwxrwxrwx  1 root root   42 Nov  1 00:00 chain.pem -> 
../../archive/elyograg.org-0022/chain1.pem
lrwxrwxrwx  1 root root   46 Nov  1 00:00 fullchain.pem -> 
../../archive/elyograg.org-0022/fullchain1.pem
lrwxrwxrwx  1 root root   44 Nov  1 00:00 privkey.pem -> 
../../archive/elyograg.org-0022/privkey1.pem

-rw-r--r--  1 root root  692 Nov  1 00:00 README
root@smeagol:/etc/letsencrypt/archive/elyograg.org-0022# ls -al
total 28
drwxr-xr-x  2 root root 4096 Nov  1 00:00 .
drwx------ 53 root root 4096 Nov  1 00:02 ..
-rw-r--r--  1 root root 2329 Nov  1 00:00 cert1.pem
-rw-r--r--  1 root root 3749 Nov  1 00:00 chain1.pem
-rw-r--r--  1 root root 6078 Nov  1 00:00 fullchain1.pem
-rw-------  1 root root  241 Nov  1 00:00 privkey1.pem

The LE fullchain file does not contain the key.  It contains three 
certificates: the server cert, the issuing cert, and the root cert, 
which is not what you want.  For letsencrypt, the file that you give 
to haproxy must contain the server cert, the issuing cert, and the 
private key.  You do not want to include the root certificate.  It 
would be ignored by the browser even if it were included, and it will 
probably slow down TLS negotiation by a small amount.  The presence of 
the root certificate in the TLS handshake should not actually break 
anything in most cases, but it could result in a lower score on the 
Qualys Labs SSL test.


When my renewal script finishes, I have a file containing four things: 
The server cert, the issuing cert, the private key, and a unique 4096 
bit DHPARAM.  This combination is ideal for haproxy.
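A sketch of what such a renewal step can look like, using demo files in place of the real certbot output (a real hook would read from /etc/letsencrypt/live/<domain>, and whether chain.pem includes a root depends on the chain in use; the dhparam step is optional):

```shell
# Demo stand-ins for certbot's output files; a real hook would read
# cert.pem, chain.pem, and privkey.pem from the live directory.
LIVE=$(mktemp -d)
printf 'server cert\n'  > "$LIVE/cert.pem"
printf 'issuing cert\n' > "$LIVE/chain.pem"
printf 'private key\n'  > "$LIVE/privkey.pem"

# Combined PEM for haproxy: server cert + issuing cert + private key.
# umask keeps the output unreadable to other users (it holds the key).
umask 077
cat "$LIVE/cert.pem" "$LIVE/chain.pem" "$LIVE/privkey.pem" > "$LIVE/combined.pem"

# Optional, as described above: append a unique 4096-bit dhparam.
# openssl dhparam 4096 >> "$LIVE/combined.pem"
cat "$LIVE/combined.pem"
```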


The version of certbot that I am using generates 256-bit ECDSA keys by 
default.  You might be thinking that a 256 bit ECDSA key cannot be as 
secure as a 2048 bit RSA key, but that is incorrect:


https://www.baeldung.com/cs/encryption-asymmetric-algorithms#3-key-length

Some of the equipment I use will not work with ECDSA keys, so I have a 
second cert with a subset of names that I build using 4096 bit RSA.


Thanks,
Shawn




Re: OCSP update restarts all proxies

2023-10-11 Thread Shawn Heisey

On 10/4/23 09:18, William Lallemand wrote:

Nothing in haproxy initiate a service reload, are sure you don't have an
external process which is doing it? The systemd support within HAProxy
is only meant to provide a status to systemd, it does not send it
actions.


I found the issue.  I am not surprised to learn that it was a PEBCAK 
problem. :)


I have a certs webapp I wrote in PHP for Lets Encrypt certificate 
generation and management.  One of the things it does is update a 
whitelist and reload haproxy, and it has an hourly cronjob to make sure 
that the whitelist is always current.


I have updated the code for this so that it actually checks whether the 
whitelist has changed, and only issues a reload when there is a change.  
This will eliminate the hourly reloads.
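A minimal sketch of that "reload only on change" cron logic, comparing a stored checksum (file paths and the reload command are illustrative; the echo stands in for `systemctl reload haproxy`):

```shell
# Reload only when the whitelist file's checksum differs from the
# checksum recorded on the previous run.
maybe_reload() {
    whitelist=$1 stamp=$2
    new_sum=$(md5sum "$whitelist" | awk '{print $1}')
    old_sum=$(cat "$stamp" 2>/dev/null)
    if [ "$new_sum" != "$old_sum" ]; then
        echo "$new_sum" > "$stamp"
        echo reload            # here: systemctl reload haproxy
    fi
}

# demo run against temporary files
wl=$(mktemp); st=$(mktemp -u)
echo "192.0.2.10" > "$wl"
maybe_reload "$wl" "$st"       # first run: checksum differs -> reload
maybe_reload "$wl" "$st"       # unchanged -> no reload
```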


I actually developed this webapp for $DAYJOB and deployed it on my own 
server as well.  A co-worker noticed frequent alerts from zabbix about 
haproxy restarting on that system and asked me about it.  I have never 
employed my OCSP updating script on that system ... the only thing the 
two systems have in common is my certs webapp.  I had forgotten about 
the hourly cronjob.


I actually don't need the cronjob on my own server.  My personal haproxy 
doesn't use that whitelist.  It's in place for $DAYJOB so that only 
certain public IP addresses can reach my webapp.


Thank you for helping me work out that it was not haproxy's OCSP update 
that caused the anomaly; it just happened to be occurring at the same time.


Shawn




Re: OCSP update restarts all proxies

2023-10-04 Thread Shawn Heisey

On 10/4/23 05:34, Remi Tricot-Le Breton wrote:

You just have to run the following commands :

$ echo "update ssl ocsp-response " | socat 
/path_to_socket/haproxy.sock -


When I do this, the update is successful and shows in the logfile 
created by rsyslogd ... but unlike when haproxy does the automatic 
hourly update, there is no service reload, so the proxies are not stopped.


When my old ocsp updating script sent an ocsp response to the stats 
socket, there was no service reload either.


I couldn't follow what's in the src/ssl_ocsp.c file.  It has been a 
REALLY long time since I actually wrote C code myself.  I was hoping to 
find out whether or not that code was initiating a service reload when 
systemd support is enabled.


I have tried to find something external to haproxy that might be 
initiating the reload, but I haven't found anything.


Thanks,
Shawn




Re: OCSP update restarts all proxies

2023-10-03 Thread Shawn Heisey

On 10/3/23 01:33, Remi Tricot-Le Breton wrote:
This command relies on the same task that performs the automatic update. 
What it does is basically add the certificate at the top of the task's 
update list and wakes it up. The update is asynchronous so we can't 
return a status to the CLI command.
In order to check if the update was successful you can display the 
contents of the updated OCSP response via the "show ssl ocsp-response" 
command. If the response you updated is also set to be updated 
automatically, you can also use the "show ssl ocsp-updates" command that 
gives the update success and failure numbers as well as the last update 
status for all the responses registered in the auto update list.


I have no idea how to get an interactive session going on the stats 
socket so that I can see whatever response a command generates.  The 
only command I know for the socket is for the old-style OCSP update 
where the OCSP response is obtained with openssl, converted to base64, 
and sent to the socket.  No response comes back when using socat in this 
way.
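
For what it's worth, socat can also hold an interactive session against the stats socket; a sketch, assuming the socket path used elsewhere in this thread and a socat built with readline support:

```
# One-shot: send a command and print whatever comes back
echo "show ssl ocsp-updates" | socat stdio UNIX-CONNECT:/etc/haproxy/stats.socket

# Interactive: type "prompt" first so the session stays open
# after the first command instead of closing immediately
socat readline UNIX-CONNECT:/etc/haproxy/stats.socket
```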


Here is my old script for OCSP updates, which I stopped using once I 
learned how to set up haproxy to do it automatically:


https://paste.elyograg.org/view/5e88c914

(seems that I removed the final \ that made the blank lines necessary. 
oops!)


Thanks,
Shawn




Re: OCSP update restarts all proxies

2023-09-30 Thread Shawn Heisey

On 9/28/23 02:29, Remi Tricot-Le Breton wrote:
That's really strange, the OCSP update mechanism does not have anything 
to do with proxies. Are you sure you did not have a crash and 
autorestart of your haproxy ?


I did not think that I had autorestart for haproxy, but it turns out 
that the service file created by the systemd stuff in the source repo 
DOES have "Restart=always".
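
If anyone else hits this, the restart policy can be overridden without editing the generated unit file; a sketch of a systemd drop-in (assuming the unit is named haproxy.service):

```
# systemctl edit haproxy
# creates /etc/systemd/system/haproxy.service.d/override.conf containing:
[Service]
Restart=no
```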


After I changed that to never and did systemctl daemon-reload, I 
discovered that at the top of the hour, something caused systemd to 
reload the service.  From systemctl status haproxy:


Sep 30 01:00:02 smeagol haproxy[234282]: [WARNING]  (234282) : Proxy 
be_gitlab_8881 stopped (cumulated conns: FE: 0, BE: 0).
Sep 30 01:00:02 smeagol haproxy[234282]: [WARNING]  (234282) : Proxy 
be_gitlab2_8881 stopped (cumulated conns: FE: 0, BE: 0).
Sep 30 01:00:02 smeagol haproxy[234282]: [WARNING]  (234282) : Proxy 
be_artifactory_8082 stopped (cumulated conns: FE: 0, BE: 0).
Sep 30 01:00:02 smeagol haproxy[234282]: [WARNING]  (234282) : Proxy 
be_zabbix_81 stopped (cumulated conns: FE: 0, BE: 0).
Sep 30 01:00:02 smeagol haproxy[234279]: [NOTICE]   (234279) : New 
worker (236124) forked
Sep 30 01:00:02 smeagol haproxy[234279]: [NOTICE]   (234279) : Loading 
success.

Sep 30 01:00:02 smeagol systemd[1]: Reloaded HAProxy Load Balancer.
Sep 30 01:00:02 smeagol haproxy[234279]: [NOTICE]   (234279) : haproxy 
version is 2.8.3-0499db-3
Sep 30 01:00:02 smeagol haproxy[234279]: [NOTICE]   (234279) : path to 
executable is /usr/local/sbin/haproxy
Sep 30 01:00:02 smeagol haproxy[234279]: [WARNING]  (234279) : Former 
worker (234282) exited with code 0 (Exit)


There are no relevant systemd timers, nothing in user crontabs, nothing 
in the various cron.* directories that could cause this.  I did compile 
haproxy with systemd support ... can haproxy itself ask systemd for a 
reload?


A way to check for a possible crash in the OCSP update code would be to 
use the "update ssl ocsp-response " from the CLI as well. It 
would use most of the OCSP update code so if a crash were to happen you 
might see it this way.


Can you explain to me how to do this and see any output?  I tried piping 
the command to socat talking to the stats proxy socket, and got no 
response.  I think I do not know how to use socat correctly for this.


Thanks,
Shawn




OCSP update restarts all proxies

2023-09-27 Thread Shawn Heisey

The haproxy -vv output is at the end of this message.

I got the built-in OCSP updating mechanism working.  Works beautifully.

Today I discovered that once an hour when the OCSP gets updated, haproxy 
stops all its proxies and starts them back up. syslog:


Sep 27 15:00:01 - haproxy[3520801] Proxy web80 stopped (cumulated conns: 
FE: 42, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy web stopped (cumulated conns: 
FE: 1403, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_deny stopped (cumulated 
conns: FE: 0, BE: 122).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_raspi1_81 stopped (cumulated 
conns: FE: 0, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_raspi2_81 stopped (cumulated 
conns: FE: 0, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_raspi3_81 stopped (cumulated 
conns: FE: 0, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_smeagol_81 stopped 
(cumulated conns: FE: 0, BE: 700).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_plex_32400_tls stopped 
(cumulated conns: FE: 0, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_gitlab_8881 stopped 
(cumulated conns: FE: 0, BE: 235).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_gitlab2_8881 stopped 
(cumulated conns: FE: 0, BE: 180).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_artifactory_8082 stopped 
(cumulated conns: FE: 0, BE: 0).
Sep 27 15:00:01 - haproxy[3520801] Proxy be_zabbix_81 stopped (cumulated 
conns: FE: 0, BE: 969).
Sep 27 15:00:01 - haproxy[3545799] -:- [27/Sep/2023:15:00:01.668]  /etc/ssl/certs/local/REDACTED_org.wildcards.combined.pem 1 "Update successful" 0 1
Sep 27 15:00:01 - haproxy[3545799] -:- [27/Sep/2023:15:00:01.795]  /etc/ssl/certs/local/REDACTED2.com.wildcards.combined.pem 1 "Update successful" 0 1
Sep 27 15:00:01 - haproxy[3520801] -:- [27/Sep/2023:15:00:01.944]  /etc/ssl/certs/local/REDACTED_org.wildcards.combined.pem 1 "Update successful" 0 2
Sep 27 15:00:02 - haproxy[3520801] -:- [27/Sep/2023:15:00:01.998]  /etc/ssl/certs/local/REDACTED2.com.wildcards.combined.pem 1 "Update successful" 0 2

The really irritating effect is that once an hour, my Zabbix server 
records an event saying haproxy has been restarted:


https://imgur.com/a/WPkKoFa
(imgur will claim the image has mature content.  it doesn't.)

It looks like the only thing that resets back to zero on the stats page 
is the uptime in the "status" column for each backend.  That's good 
news, but I would hope for none of the data to be reset.


I have one big concern, which may be unfounded:  I'm worried that the 
proxies going down will mean that in-flight connections will be 
terminated.  I'm guessing that the work for seamless reloads will ensure 
that doesn't happen, I just want to be sure.
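
As I understand it, seamless reloads depend on the old worker passing its listening sockets to the new one over the stats socket; a config sketch (the path and options here are examples, not from my actual config):

```
global
    # "expose-fd listeners" lets the new process inherit the listening
    # sockets on reload, so established connections are not cut
    stats socket /run/haproxy/admin.sock mode 600 level admin expose-fd listeners
```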


Not knowing a lot about how haproxy is architected, I do not know if 
there is some reason that the backends have to be cycled.  Seems like 
only frontends that listen with TLS would need that.  I would hope it 
would be possible to even avoid that ... maybe have OCSP data be copied 
from a certain memory location every time a frontend needs it, and when 
OCSP gets updated, overwrite the data in that memory location in a 
thread-safe way.  I know a fair amount about thread safety in Java, but 
nothing about it in C.


Final questions for today:

1) Can the OCSP update interval be changed?  I don't recall exactly what 
the validity for a LetsEncrypt OCSP response is, but I know it was at 
least 24 hours, and I think it might have even been as long as a week. I 
would like to increase the interval to 8-12 hours if I can.
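
If I am reading the 2.8 global keywords correctly, something like this should widen the window; a sketch with values in seconds (the 8- and 12-hour numbers are just my example targets):

```
global
    # minimum and maximum delay between two updates of the same OCSP response
    tune.ssl.ocsp-update.mindelay 28800   # 8 hours
    tune.ssl.ocsp-update.maxdelay 43200   # 12 hours
```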


2) There are two certs being used in my setup, and haproxy logs updates 
for both of them twice.  I would have hoped for that to only happen 
once.  I'm a bit mystified by the fact that it is done twice.  I would 
have expected either one time or four times ... I have one frontend that 
listens with TLS, with four bind lines all using exactly the same 
certificate list.  (one TCP, and three UDP)



-
HAProxy version 2.8.3-0499db-3 2023/09/14 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2028.

Known bugs: http://www.haproxy.org/bugs/bugs-2.8.3.html
Running on: Linux 6.1.0-1022-oem #22-Ubuntu SMP PREEMPT_DYNAMIC Wed Sep 
6 08:19:34 UTC 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC 

Re: problem with automatic OCSP update -- getting ipv6 address for ocsp endpoint

2023-08-15 Thread Shawn Heisey

On 8/15/23 19:17, Tristan wrote:
 > A common error that can happen with let's encrypt certificates is if 
the DNS


resolution provides an IPv6 address and your system does not have a valid
outgoing IPv6 route. In such a case, you can either create the appropriate
route or set the "httpclient.resolvers.prefer ipv4" option in the global
section.


Adding `httpclient.resolvers.prefer ipv4` to the global section fixed 
it.  Thank you!  I did some searching, but didn't come across that nugget.
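
For anyone searching later, this is where the option ended up in my config (a minimal sketch):

```
global
    # make the internal HTTP client (used by the OCSP updater) prefer
    # A records over AAAA when resolving the responder's hostname
    httpclient.resolvers.prefer ipv4
```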


Thanks,
Shawn



problem with automatic OCSP update -- getting ipv6 address for ocsp endpoint

2023-08-15 Thread Shawn Heisey
I've got another haproxy install on which I am trying to enable 
automatic OCSP updating.  The ones I asked about before are personal, 
this one is for work.


When haproxy looks up the host where it can get OCSP responses, it is 
getting an ipv6 address.


Aug 15 18:27:30 - haproxy[11234] -:- [15/Aug/2023:18:27:30.103] 
 /etc/ssl/certs/local/imat_us.wildcards.combined.pem 2 
"HTTP error" 1 0
Aug 15 18:27:30 - haproxy[11234] -:- [15/Aug/2023:18:27:30.104] 
 -/- 48/0/-1/-1/46 503 217 - - SC-- 0/0/0/0/3 0/0 
{2600:1405:7400:13::17de:1b94} "GET 
http://r3.o.lencr.org/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgRA%2BzJf7gt%2BI21Isq6Sy8pDxg%3D%3D 
HTTP/1.1"


If I try the URL in the second log line with curl, I get the proper 
response.  The curl program is getting an ipv4 address.


I thought it might be doing this because the machine did have an ipv6 
local link address, so I completely disabled ipv6 with the grub 
commandline and rebooted.  Now there is no ipv6 address, but haproxy is 
still getting an ipv6 address for r3.o.lencr.org.


I couldn't locate any config for haproxy that would disable ipv6.  Is 
there a way to fix this problem?


HAProxy version 2.8.2 2023/08/09 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2028.

Known bugs: http://www.haproxy.org/bugs/bugs-2.8.2.html
Running on: Linux 4.18.0-477.21.1.el8_8.x86_64 #1 SMP Thu Aug 10 
13:51:50 EDT 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL 
-OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL -PROMEX -PTHREAD_EMULATION +QUIC +RT +SHM_OPEN -SLZ +SSL 
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL +ZLIB


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=4).

Built with OpenSSL version : OpenSSL 3.1.2+quic 1 Aug 2023
Running on OpenSSL version : OpenSSL 3.1.2+quic 1 Aug 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 8.5.0 20210514 (Red Hat 8.5.0-18)

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE        mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE     mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE        mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE     mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE     mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE     mux=PASS  flags=
       none : mode=TCP   side=FE|BE     mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace




Re: Old style OCSP not working anymore?

2023-07-13 Thread Shawn Heisey

On 7/13/23 17:56, Shawn Heisey wrote:
I do still use this script on one of my servers where I can't get 
haproxy's built-in ocsp updating to work right.  It is haproxy 2.8.1.


A few minutes ago, I fixed the problem on that server with haproxy's 
built-in OCSP updater, so the script is officially retired.


Thanks,
Shawn



Re: Weird issue with OCSP updating

2023-07-13 Thread Shawn Heisey

On 7/13/23 15:00, Cyril Bonté wrote:

Hi Shawn,

Le 13/07/2023 à 18:48, Shawn Heisey a écrit :
Looks like on my last edit I deleted it and didn't add it to 
defaults, so I was wrong in what I said.  It throws a different error 
when added to defaults: 
Because it should be in the global section, not the defaults one ;) 


It didn't work in global either.  It threw an error message that I did 
not understand at first.


After a little poking around with google, I added this section to the 
config (with the ipv4 resolver setting in global) and that made it work:


resolvers default
    nameserver dns1 127.0.0.1:53
    nameserver dns2 8.8.8.8:53
    accepted_payload_size 8192 # allow larger DNS payloads

Further investigation revealed that systemd-resolved was not setting 
/etc/resolv.conf to the usual symlink.  It was a real zero byte file.


Fixing that so it is a symlink to /run/systemd/resolve/stub-resolv.conf 
and commenting out the new resolvers section in haproxy.cfg has 
completely fixed the issue.


I didn't think it was a bug in haproxy, but couldn't figure out why it 
was misbehaving.  Now I know it was a problem with /etc/resolv.conf.  I 
didn't think to look there because I could connect to things by name 
from the shell prompt, so I assumed everything was good.


Thanks,
Shawn




Re: Old style OCSP not working anymore?

2023-07-13 Thread Shawn Heisey

On 7/13/23 09:01, Sander Klein wrote:
I tried upgrading from 2.6.14 to 2.8.1, but after the upgrade I couldn't 
connect to any of the sites behind it.


While looking at the error it seems like OCSP is not working anymore. 
Right now I have a setup in which I provision the certificates with the 
corresponding ocsp file next to it. Is this not supported anymore?


Does your certificate have "must-staple" configured?  That is the only 
way I can imagine an OCSP problem would keep websites from working.  I 
do ocsp stapling with haproxy, but I don't use "must-staple".  I do not 
believe that ocsp stapling is supported widely enough yet to declare 
that it MUST happen.


If you are relying only on the .ocsp file and are not informing haproxy 
when there is a new response, then you have to restart (or maybe reload) 
haproxy when you update the ocsp file.  If you don't, then the ocsp 
response that haproxy is using will quickly expire in a matter of days, 
as the .ocsp file is only read at startup.


I uploaded a script to github.  This is the script I used before haproxy 
gained the ability to do its own OCSP updates.  The script updates the 
.ocsp file(s) and informs haproxy about the new response(s) so haproxy 
does not need to be restarted.:


https://github.com/elyograg/haproxy-ocsp-elyograg

The script relies on mktemp, openssl, socat, and base64.
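
The core of that flow, for anyone who doesn't want to read the whole script, looks roughly like this (a simplified sketch, not the script itself; paths and file names here are made up):

```
#!/bin/sh
# Simplified sketch of old-style OCSP updating; adjust paths to taste.
CERT=/etc/ssl/certs/local/example.com.combined.pem
ISSUER=/etc/ssl/certs/local/example.com.issuer.pem
SOCK=/etc/haproxy/stats.socket

# Ask the certificate where its OCSP responder lives
URL=$(openssl x509 -in "$CERT" -noout -ocsp_uri)

# Fetch a fresh response next to the certificate
openssl ocsp -issuer "$ISSUER" -cert "$CERT" -url "$URL" \
    -no_nonce -respout "$CERT.ocsp"

# Hand it to the running haproxy so no restart/reload is needed
echo "set ssl ocsp-response $(base64 -w0 "$CERT.ocsp")" | socat "$SOCK" -
```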

I do still use this script on one of my servers where I can't get 
haproxy's built-in ocsp updating to work right.  It is haproxy 2.8.1.


Thanks,
Shawn



Re: Weird issue with OCSP updating

2023-07-13 Thread Shawn Heisey

On 7/12/23 04:13, Remi Tricot-Le Breton wrote:

On 11/07/2023 22:22, Shawn Heisey wrote:

On 7/11/23 01:30, Remi Tricot-Le Breton wrote:
That directive didn't work in "global" but it was accepted when I 
moved it to "defaults".  But it didn't change the behavior.  IPv6 is 
completely disabled on the server.


Didn't work as in an error was raised ? I have a local configuration 
file with this option in the global section and it seems to work fine.


It failed the config check that is done by the systemd service before 
restarting.  It seems to indicate I am missing additional config that it 
needs.


elyograg@bilbo:~$ sudo haproxy -dD -c -f /etc/haproxy/haproxy.cfg
[NOTICE]   (521767) : haproxy version is 2.8.1
[NOTICE]   (521767) : path to executable is /usr/local/sbin/haproxy
[ALERT](521767) : config : Proxy '': Can't find 
resolvers section 'default' for do-resolve action.
[ALERT](521767) : config : Proxy '': Can't find 
resolvers section 'default' for do-resolve action.
[DIAG] (521767) : config : Generating a random cluster secret. You 
should define your own one in the configuration to ensure consistency 
after reload/restart or across your whole cluster.

[ALERT](521767) : config : Fatal errors found in configuration.


You can use the "httpclient" CLI command this way:
echo "expert-mode on; httpclient GET 
http://r3.o.lencr.org/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgOq9K0xVAXkgj8X4cNGeMutQw%3D%3D" | socat  -


I get an error from that, and it makes no sense to me.


elyograg@bilbo:~$ echo "expert-mode on; httpclient GET 
http://r3.o.lencr.org/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgOq9K0xVAXkgj8X4cNGeMutQw%3D%3D" 
| sudo socat /etc/haproxy/stats.socket -

Permission denied

This command is restricted to expert mode only.


Looks like on my last edit I deleted it and didn't add it to defaults, 
so I was wrong in what I said.  It throws a different error when added 
to defaults:


elyograg@bilbo:~$ sudo haproxy -dD -c -f /etc/haproxy/haproxy.cfg
[NOTICE]   (521883) : haproxy version is 2.8.1
[NOTICE]   (521883) : path to executable is /usr/local/sbin/haproxy
[ALERT](521883) : config : parsing [/etc/haproxy/haproxy.cfg:32] : 
unknown keyword 'httpclient.resolvers.prefer' in 'defaults' section
[ALERT](521883) : config : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

[ALERT](521883) : config : Fatal errors found in configuration.

Thanks,
Shawn



Re: Weird issue with OCSP updating

2023-07-11 Thread Shawn Heisey

On 7/11/23 01:30, Remi Tricot-Le Breton wrote:
The OCSP update mechanism uses the internal http_client which then uses 
the resolvers. The only time when I had some strange resolver-related 
issues is when the name resolution returned IPv6 addresses which were 
not properly managed on my machine. The workaround I had in this case 
was to add "httpclient.resolvers.prefer ipv4" in the global section.


That directive didn't work in "global" but it was accepted when I moved 
it to "defaults".  But it didn't change the behavior.  IPv6 is 
completely disabled on the server.


You could also try to perform the same kind of request using the 
http_client directly from the CLI.


Can you explain how to do this?  When I make the request with curl, it 
works, but I don't know how to do what you are saying here.


Everything works on another server running a newer version of Ubuntu. 
That uses a newer version of gnu libc, which affects pretty much 
everything on the system, and a large number of other libraries are 
newer as well.


Thanks,
Shawn



Re: Weird issue with OCSP updating

2023-07-10 Thread Shawn Heisey

On 7/8/23 21:33, Shawn Heisey wrote:
Here's the very weird part.  It seems that haproxy is sending the OCSP 
request to localhost, not the http://r3.o.lencr.org URL that it SHOULD 
be sending it to.  Right before the above log entry is this one:


Jul  8 21:15:38 - haproxy[4075] 127.0.0.1:57696 
[08/Jul/2023:21:15:38.447] web80 web80/ 0/-1/-1/-1/0 302 230 - - 
LR-- 1/1/0/0/0 0/0 "GET 
/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgOq9K0xVAXkgj8X4cNGeMutQw%3D%3D HTTP/1.1"


Anyone have any idea why haproxy is sending the ocsp request to 
127.0.0.1 when it should be going to a public address obtained from the 
dns name r3.o.lencr.org?


If I do this command on the same machine, it works correctly:

curl -v -o response.ocsp 
"http://r3.o.lencr.org/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgOq9K0xVAXkgj8X4cNGeMutQw%3D%3D"


Thanks,
Shawn




Weird issue with OCSP updating

2023-07-08 Thread Shawn Heisey
I have a strange problem with OCSP updating.  On one server everything 
works.  That server is in my basement, running Ubuntu 22.04.


Another system, Ubuntu 20.04 in AWS using exactly the same certificates 
and exactly the same crt-list file is failing to do an OCSP update.


Jul  8 21:15:38 - haproxy[4075] -:- [08/Jul/2023:21:15:38.446] 
 /etc/ssl/certs/local/elyograg_org.wildcards.combined.pem 2 
"HTTP error" 1 0


Here's the very weird part.  It seems that haproxy is sending the OCSP 
request to localhost, not the http://r3.o.lencr.org URL that it SHOULD 
be sending it to.  Right before the above log entry is this one:


Jul  8 21:15:38 - haproxy[4075] 127.0.0.1:57696 
[08/Jul/2023:21:15:38.447] web80 web80/ 0/-1/-1/-1/0 302 230 - - 
LR-- 1/1/0/0/0 0/0 "GET 
/MFMwUTBPME0wSzAJBgUrDgMCGgUABBRI2smg%2ByvTLU%2Fw3mjS9We3NfmzxAQUFC6zF7dYVsuuUAlA5h%2BvnYsUwsYCEgOq9K0xVAXkgj8X4cNGeMutQw%3D%3D 
HTTP/1.1"


Both systems are using a local bind9 caching resolver, which does work.

If I manually make the OCSP request with curl on the system, I get a 
response.


Both servers are running haproxy 2.8.1 compiled from source with quictls 
branch openssl-3.1.0+quic+locks.  It was compiled locally on each 
server.  I don't know if it makes any difference ... the one that works 
was compiled with make -j12 (24 CPU cores) and the one that doesn't was 
compiled with make -j3 (2 CPU cores).


Below is haproxy -vv from both servers, first the one that works:


HAProxy version 2.8.1 2023/07/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2028.

Known bugs: http://www.haproxy.org/bugs/bugs-2.8.1.html
Running on: Linux 6.1.0-1015-oem #15-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 
16 09:51:49 UTC 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL 
-OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL -PROMEX -PTHREAD_EMULATION +QUIC +RT +SHM_OPEN -SLZ +SSL 
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL +ZLIB


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=48).

Built with OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
Running on OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.39 2021-10-29
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE        mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE     mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE        mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE     mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE     mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE     mux=PASS  flags=
       none : mode=TCP   side=FE|BE     mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace



HAProxy version 2.8.1 2023/07/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2028.

Known bugs: http://www.haproxy.org/bugs/bugs-2.8.1.html
Running on: Linux 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 
12:21:12 UTC 2023 x86_64

Build options :
  TARGET 

Re: Some notes about what happens with HTTP/1.0 requests

2023-07-05 Thread Shawn Heisey

On 7/5/23 15:27, Pavlos Parissis wrote:

There is a list of pre-defined ACLs, see 
http://docs.haproxy.org/2.8/configuration.html#7.4, and in that list
you have HTTP_1.0 acl to match traffic for that version of HTTP protocol.

So, you can add the snippet below to block HTTP/1.0 traffic:

http-request deny if HTTP_1.0


I have a little more information.

It looks like Solr has a problem when it gets h2c requests from haproxy.
Interestingly, the problem does not happen if the frontend request is 
HTTP/2.  It does happen if the frontend request is 1.0, 1.1, or 3.  I 
have not yet configured Solr to use TLS, so I don't know if it's unique 
to h2c or also affects h2.


If I remove "proto h2 check-proto h2" from the server line so Solr only 
gets 1.1 requests, then the problem is entirely gone.  Very strange.  I 
think haproxy is working correctly and Solr has the problem.
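
For context, the server line in question looks something like this (a sketch; the address and names are taken from the examples in this thread):

```
backend be_solr
    # h2c to Solr on the backend side; removing "proto h2 check-proto h2"
    # falls back to HTTP/1.1 and makes the problem disappear
    server solr1 172.31.8.104:8983 check proto h2 check-proto h2
```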


Thanks,
Shawn



Some notes about what happens with HTTP/1.0 requests

2023-07-05 Thread Shawn Heisey
I have a backend in haproxy for my Solr server.  Solr lives unencrypted 
on port 8983, haproxy provides TLS for it, on a name like 
`solr.example.com`.


Everything works fully as expected with HTTP 1.1, 2, or 3.

If I send a request with curl using any HTTP version to 
https://solr.example.com/, it results in a 302 response.


If the request is HTTP/1.0, Solr is revealing the internal IP address -- 
the location header is https://172.31.8.104:8983/solr/ which will not 
work -- the port isn't exposed to the Internet, isn't using TLS, and the 
private IP address is only valid within the AWS VPC.  An interesting 
detail:  If I send the HTTP/1.0 request directly to Solr, it does NOT 
reveal the internal address.  That only happens for requests relayed by 
haproxy.


The backend connection is HTTP/2, as I have "proto h2" on the server line.

The curl command gets a response that's HTTP/1.1 even though it sent 1.0.

What I would like to do is deny HTTP/1.0 requests, but I have not been 
able to figure out a way to do that.  Alternately, if there is a way for 
haproxy to intercept headers with the internal address and replace them 
with a hostname, I can do that instead.
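
Both options can be expressed in haproxy config; a sketch (HTTP_1.0 is one of the predefined ACLs, and the replace-header line assumes the address and hostname from this message):

```
frontend web
    # Option 1: refuse HTTP/1.0 outright
    http-request deny if HTTP_1.0

    # Option 2: rewrite the leaked internal address in redirect responses
    http-response replace-header Location https://172\.31\.8\.104:8983/(.*) https://solr.example.com/\1
```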


While looking into this, I found that Solr is logging HTTP/1.0 requests 
for haproxy check requests, even though I configured "check-proto h2" on 
the server line.  Actual requests are logged as HTTP/2 as expected, but 
check requests (which use /solr/ as the URL path) are being logged as 
HTTP/1.0:


172.31.8.104 - - [05/Jul/2023:18:08:54 +] "GET /solr/ HTTP/1.0" 200 
17035
172.31.8.104 - - [05/Jul/2023:18:08:57 +] "POST /solr/dovecot/update 
HTTP/2.0" 200 180
172.31.8.104 - - [05/Jul/2023:18:08:57 +] "POST /solr/dovecot/update 
HTTP/2.0" 200 155
172.31.8.104 - - [05/Jul/2023:18:09:04 +] "GET /solr/ HTTP/1.0" 200 
17035
172.31.8.104 - - [05/Jul/2023:18:09:15 +] "GET /solr/ HTTP/1.0" 200 
17035
172.31.8.104 - - [05/Jul/2023:18:09:25 +] "GET /solr/ HTTP/1.0" 200 
17035


haproxy -vv output:
HAProxy version 2.8.1 2023/07/03 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2028.

Known bugs: http://www.haproxy.org/bugs/bugs-2.8.1.html
Running on: Linux 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 
12:21:12 UTC 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   =

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL 
-OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL -PROMEX -PTHREAD_EMULATION +QUIC +RT +SHM_OPEN -SLZ +SSL 
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL +ZLIB


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=2).

Built with OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
Running on OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
      quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
        h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
      fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
        h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
      none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Re: Is tune.quic.backend.max-idle-timeout missing from the documentation?

2023-06-28 Thread Shawn Heisey

On 6/28/23 08:17, Nick Ramirez wrote:
The HAProxy source code indicates that there is a directive named 
'tune.quic.backend.max-idle-timeout' (haproxy/src/cfgparse-quic.c at 
f473eb72066e02d44837fd77110b6ca5bdea97e2 · haproxy/haproxy on github.com). 
But I do not find it in the documentation. Is it missing? There is 
'tune.quic.frontend.max-idle-timeout'.


Not an expert in this ... but I see in "haproxy -vv" output for HAProxy 
version 2.8.0-8ee9a5-30 2023/06/22 that the QUIC multiplexer only 
applies to frontend, not backend.  Could be that the option was put in 
the code so it's already there when/if QUIC backend support is added.
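For reference, the documented frontend counterpart is set in the global 
section. A minimal sketch (the 30s value is an illustrative assumption, 
not taken from this thread):

```
global
    # documented keyword; applies only to frontend QUIC connections
    tune.quic.frontend.max-idle-timeout 30s
```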


Thanks,
Shawn



Re: Debian + QUIC / HTTP/3

2023-06-05 Thread Shawn Heisey

On 6/5/23 01:41, Artur wrote:
What is suggested/recommended way to get QUIC / HTTP/3 working in 
haproxy on Debian ?


I have been debating for a while whether or not to get the work I have 
done on build scripts out into the world.  Just mirrored the repo from 
my gitlab server to github, so have fun with it!


https://github.com/elyograg/haproxy-scripts

I happened to have a debian 11 VM, i386 architecture.  I tested the 
scripts there with these steps:


mkdir ~/git
cd ~/git
sudo apt -y install git
git clone https://github.com/elyograg/haproxy-scripts.git
cd haproxy-scripts
./prep-source
sudo mkdir -p /etc/haproxy
./install-haproxy-service git-haproxy-2.8
./fullstack

They're shell scripts, so there is no mystery about what they do.

The prep-source script will install a whole lot of packages ... 
compilers, libraries needed for the compile, and some other tools.


The "repo_overrides" file is pre-configured to force a specific branch of 
quictls.  If you remove that, it will get the newest 3.1.x branch that 
ends in +quic.


The scripts do not attempt to install /etc/haproxy/haproxy.cfg ... you 
will have to handle that.  You can use the `ci-haproxy-cfg.txt` file as 
a starting point for your own config.  It's the barebones config that I 
use for the gitlab CI job I built.  It uses a self-signed certificate 
that is also included but not copied by default to the right directory 
for the config.


The scripts that compile the software will figure out how many physical 
CPU cores you have, divide that by 2, then set the number of threads for 
`make` to that value or a minimum value of 3.
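That heuristic can be sketched in shell. This is only an illustration 
under stated assumptions (a Linux /proc/cpuinfo topology and the floor 
of 3 described above), not the actual code from the repository:

```shell
# Count unique physical cores (socket:core pairs); fall back to nproc
# when /proc/cpuinfo does not expose the topology.
physical_cores() {
    n=$(awk -F: '/^physical id/{p=$2} /^core id/{print p ":" $2}' \
        /proc/cpuinfo 2>/dev/null | sort -u | wc -l)
    if [ "$n" -gt 0 ] 2>/dev/null; then echo "$n"; else nproc; fi
}

# Half the physical core count, with a floor of 3 make threads.
make_threads() {
    t=$(( $(physical_cores) / 2 ))
    if [ "$t" -lt 3 ]; then t=3; fi
    echo "$t"
}
```

It would then be used as `make -j "$(make_threads)"`.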


With that, the scripts have now been tested on Ubuntu, Debian, and 
CentOS 7, and were found to work on all three.


Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-03 Thread Shawn Heisey

On 6/3/23 15:37, Shawn Heisey wrote:

On 6/3/23 15:28, Shawn Heisey wrote:
So maybe a completely separate global option makes sense.  The 
crt-list requirement is not really a burden for me, but for someone 
who uses a LOT of certificates that change frequently, it probably 
would become a burden.


Unless it is possible to have a directory as an entry in the crt-list 
file like it is for the crt option.  The crt-list doc does not say that 
this is possible, and I have not tested it.


Using a directory as an entry in the crt-list file causes `haproxy -c 
-f` to hang, which I think means that crt-list doesn't support directories.


How hard would it be to add that support?  I would hope that most of the 
code needed is already present in the part that parses crt options.


Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-03 Thread Shawn Heisey

On 6/3/23 15:28, Shawn Heisey wrote:
So maybe a completely separate global option makes sense.  The crt-list 
requirement is not really a burden for me, but for someone who uses a 
LOT of certificates that change frequently, it probably would become a 
burden.


Unless it is possible to have a directory as an entry in the crt-list 
file like it is for the crt option.  The crt-list doc does not say that 
this is possible, and I have not tested it.


Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-03 Thread Shawn Heisey

On 6/2/23 14:42, Lukas Tribus wrote:

I suggest we make it configurable on the bind line like other ssl
options, so it will work for the common use cases that don't involve
crt-lists, like a simple crt statement pointing to a certificate or a
directory.

It could also be a global option *as well*, but imho it does need to
be a bind line configuration option, just like strict-sni, alpn and
ciphers, so we can enable it specifically (per frontend, per bind
line) without requiring crt-list.


One of the places I tried to add it (which of course did not work) was 
ssl-default-bind-options.


It might make sense to have it configurable there.  Though that would 
imply of course that it is also an option on each bind line, which was 
the other place I tried to configure it.


So maybe a completely separate global option makes sense.  The crt-list 
requirement is not really a burden for me, but for someone who uses a 
LOT of certificates that change frequently, it probably would become a 
burden.


A question arises on where to log failures in getting OCSP data.  I have 
haproxy using two different syslog targets, but the way this config 
evolved is lost to time.


TL;DR:

In global, I have:

log 127.0.0.1 len 65535 format rfc5424 local0
log 127.0.0.1 len 65535 format rfc5424 local1 notice
tune.http.logurilen 49152

In defaults I have:

log global
option  httplog
option  dontlognull

In each backend, I have:

no log
log 127.0.0.1 len 65535 format rfc5424 local0 notice err

In /etc/rsyslog.d/99-haproxy.conf I have:

local0.info /var/log/debug-haproxy
local1.*    /var/log/haproxy

In /etc/rsyslog.d/0001-remote.conf I have:

module(load="imudp")
input(type="imudp" port="514")
$MaxMessageSize 64k

$template BindLog,"/var/log/rsyslog/bind/log"
$template CudoLog,"/var/log/rsyslog/cudo/log"
$template UFWLog,"/var/log/rsyslog/ufw/log"
$template RemoteLogs,"/var/log/rsyslog/%HOSTNAME%/log"
$template RemoteHostFileFormat,"%TIMESTAMP% %fromhost% 
%syslogfacility-text% 
%syslogtag%%msg:::sp-if-no-1st-sp%%msg:::space-cc,drop-last-lf%\n"


if $msg contains 'UFW' then {
  *.* -?UFWLog;RemoteHostFileFormat
  stop
}

if $syslogtag contains 'cudo' then {
  *.* -?CudoLog;RemoteHostFileFormat
  stop
}

if $syslogtag contains 'named' then {
  *.* -?BindLog;RemoteHostFileFormat
  stop
}

if $inputname == 'imudp' then {
  if $fromhost-ip != '127.0.0.1' then {
if $fromhost != '-' then {
  *.* -?RemoteLogs;RemoteHostFileFormat
  stop
}
  }
}

The effective result of all this is that all log messages are logged to 
/var/log/debug-haproxy and anything more severe than a request is also 
logged to /var/log/haproxy.  This makes it so that I do not need to wade 
through megabytes of request logs to see other problems, though I do 
have the option of seeing the problem inline with requests in the other 
logfile.


I came up with this config back in the 1.4 to 1.5 days, and I cannot 
remember how it evolved.  There was some valid reason why I needed to do 
the "no log" followed by "log" in the backend, but I cannot remember 
what that reason was.


Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-01 Thread Shawn Heisey

On 6/1/23 16:19, Shawn Heisey wrote:
I asked ChatGPT for help, and with that info, I was able to work out 
what to do.


-
elyograg@smeagol:/etc/haproxy$ cat crt-list.txt
/etc/ssl/certs/local/REDACTED1.combined.pem [ocsp-update on]
/etc/ssl/certs/local/REDACTED2.combined.pem [ocsp-update on]
-


Instead of two "crt" options, I now have "crt-list 
/etc/haproxy/crt-list.txt" on each bind line.  Haproxy handles getting 
and updating the OCSP response for stapling.  It's beautiful.


@Matthias I have no idea whether crt-list can load all certs in a 
directory like crt can.  If it can't, then you will probably need a 
script for starting/restarting haproxy that generates the cert list 
file.  If you want that script to be automatically run whenever someone 
does `systemctl restart haproxy`, you could use the ExecStartPre and 
ExecReloadPre options in a systemd service file to run your script.
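A minimal sketch of such a generator script (the paths and the systemd 
hookup are assumptions; the `[ocsp-update on]` flag matches the crt-list 
examples elsewhere in this thread):

```shell
# Build a crt-list file from a directory of combined PEM files,
# tagging every certificate with [ocsp-update on].
gen_crt_list() {
    dir=$1
    out=$2
    : > "$out"                      # truncate/create the output file
    for pem in "$dir"/*.pem; do
        [ -e "$pem" ] || continue   # directory may be empty
        printf '%s [ocsp-update on]\n' "$pem" >> "$out"
    done
}

# Example: gen_crt_list /etc/ssl/certs/local /etc/haproxy/crt-list.txt
```

It could then be wired in with something like 
`ExecStartPre=/usr/local/bin/gen-crt-list` in a systemd drop-in (that 
script path is hypothetical).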


My certificate files contain the server cert, the issuer cert, the 
private key, and DH PARAMETERS that are unique to that cert.


Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-01 Thread Shawn Heisey

On 6/1/23 15:42, Willy Tarreau wrote:

So this means that the doc is still not clear enough and we need to
improve this. And indeed, I'm myself confused because William told me
a few days ago that "ocsp-update" was for crt-list lines only and it's
found in the "bind line options" section. And of course, when there are
examples, they're not the ones you're looking for, that's classical!


I looked at the 2.8.0 documentation for crt-list and it was not very 
clear what to actually put in the config to use it.


I asked ChatGPT for help, and with that info, I was able to work out 
what to do.


-
elyograg@smeagol:/etc/haproxy$ cat crt-list.txt
/etc/ssl/certs/local/REDACTED1.combined.pem [ocsp-update on]
/etc/ssl/certs/local/REDACTED2.combined.pem [ocsp-update on]
-

I commented the crontab entry that was handling ocsp renewal, deleted 
the *.ocsp files from the certificate location, restarted haproxy, and 
did a fresh Qualys SSL test.  That test indicated that it is still 
stapling OCSP.


Awesome new feature!

Thanks,
Shawn



Re: OCSP renewal with 2.8

2023-06-01 Thread Shawn Heisey

On 5/31/23 23:25, Matthias Fechner wrote:
I just saw in the release notes for 2.8 that an automatic OCSP renewal 
is now included and I would like to get rid of my manual scripts that 
are currently injecting the OCSP information.


I checked the documentation a bit here:
https://docs.haproxy.org/2.8/configuration.html#ocsp-update
https://docs.haproxy.org/2.8/configuration.html#5.1-crt-list


I can't figure out where to put the option.  I've tried several 
different places and the config check fails every time.


Upgraded from dev13 to 2.8.0 and that didn't help.

It will be very cool for haproxy to handle ocsp renewal itself so I can 
retire my script.


The doc said that it would need the issuer cert, which is included in 
the file referenced by the crt option.  Is that enough?


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Shawn Heisey

On 5/29/23 20:38, Willy Tarreau wrote:

Have you verified that the CPU is saturated ?


The CPU on the machine running the test settles at about 1800 percent 
for my test program.  12 real cores, hyperthreaded.


The CPU on the frontend haproxy process is barely breathing hard.  Never 
saw it get above 150%.  That server has 24 real cores.


The CPU on the backend haproxy running on the raspberry pi hovers 
between 250 and 280%.  It's a 3B, so it has four CPU cores.


Those CPU values gathered with the test program running 24 threads with 
quictls 1.1.1t.  With 200 threads, the CPU usage on all 3 systems is 
even lower.


So I would say I am not saturating the CPU.  I need a different test 
methodology ... this Java program is not really doing much to haproxy.



Without keep-alive nor TLS resume, you should see roughly 1000 connections
per second per core, and with TLS resume you should see roughly 4000 conns/s
per core. So if you have 12 cores you should see 12000 or 48000 conns/s
depending if you're using TLS resume or full rekey.
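The quoted figures are a straight per-core multiplication; a quick 
sketch (the core count is the one from the quote, and the per-core rates 
are the rough estimates given above):

```shell
cores=12
# ~1000 conn/s per core with a full handshake, ~4000 with TLS resume
full_rekey=$(( cores * 1000 ))
tls_resume=$(( cores * 4000 ))
echo "full rekey: ${full_rekey} conn/s"   # 12000
echo "TLS resume: ${tls_resume} conn/s"   # 48000
```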


It's doing whatever Apache's httpclient does with Java's TLS.  I know 
it's not doing keepalive, I explicitly pass the connection close header. 
 I do not know if it uses TLS resume or not, and I do not know how to 
discover that info.


I'm not seeing anywhere near that connection rate.  Not even with an 
haproxy backend.



Hmmm are you sure you didn't build the client with OpenSSL 3.0 ? I'm asking
because that was our first concern when we tested the perf on Intel's SPR
machine. No way to go beyond 400 conn/s, with haproxy totally idle and the
client at 100% on 48 cores... The cause was OpenSSL 3. Rebuilding under 1.1.1
jumped to 74000, almost 200 times more!


The client is a Java program running in Java 11, with nothing to have it 
use anything but Java's TLS.  It should not be using any version of openssl.



https://asciinema.elyograg.org/haproxyssltest1.html


Hmmm host not found here.


Oops.  I did not get that name in my public DNS.  Fixed.  The run it 
shows is from earlier, before I set up a backend running haproxy.  That 
run is using 200 threads.  When it ends, it reports the connection rate 
at 244.69 per second.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Shawn Heisey

On 5/29/23 01:43, Aleksandar Lazic wrote:

HAProxies FE => HAProxies BE => Destination Servers

Where the Destination Servers are also HAProxies which just returns a 
static content or any high performance low latency HTTPS Server.

With such a Setup can you test also the Client mode of the OpenSSL.


Oops.  Mistype sent that message before I could finish it.

Interesting idea.

I set up haproxy on raspberry pi and configured it to serve a static web 
page with https.  Running the same version of haproxy on both the main 
server and the raspi, running with the same version of quictls.


https://raspi1.elyograg.org

Side note: compiling and installing quictls and haproxy is a lot slower 
on a raspberry pi than on a dell server.  84 seconds on the dell server 
and 2591 seconds on the pi.  Make gets 12 threads on the server, 2 on 
the pi ... I give it half of the physical core count, rounded up to 2.


It took a while to get this info due to the slow compile speeds on the 
pi.  I wish build systems could give me an accurate estimate of how far 
done the build is.  The quictls one doesn't say ANYTHING.


The requests are taking more time in general.  This is due to another 
round trip (including TLS) from the server to the raspberry pi that did 
not occur before.  With the other URL, it was forwarding to Apache on 
the same server, port 81 without TLS.


I still wouldn't call it a smoking gun, but this test shows evidence of 
1.1 handling the concurrency better than 3.0.


1.1.1t:
20:31:21.177 [main] INFO  o.e.t.h.MainSSLTest Count 24000 310.31/s
20:31:21.177 [main] INFO  o.e.t.h.MainSSLTest 10th % 53 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest 25th % 60 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest Median 69 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest 75th % 81 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest 95th % 125 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest 99th % 163 ms
20:31:21.178 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 633 ms

3.0.8:
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest Count 24000 290.48/s
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest 10th % 59 ms
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest 25th % 66 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest Median 75 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 75th % 87 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 95th % 123 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 99th % 161 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 1004 ms

3.1.0+locks:
The quictls compile failed on the pi.  So I couldn't test this one.  I 
suppose I could have done it without TLS, but I didn't do that.  Here's 
the log from the compile:


/usr/bin/ld: unknown architecture of input file 
`libcrypto.a(libdefault-lib-pbkdf2_fips.o)' is incompatible with aarch64 
output

collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:22146: fuzz/cmp-test] Error 1
make[1]: *** Waiting for unfinished jobs
/usr/bin/ld: unknown architecture of input file 
`libcrypto.a(libdefault-lib-pbkdf2_fips.o)' is incompatible with aarch64 
output

collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:22270: fuzz/punycode-test] Error 1
make: *** [Makefile:3278: build_sw] Error 2

I wonder why that happened.  1.1.1t and 3.0.8 compiled just fine.  All 
three work on x86_64.


I should set up my third server to serve the static page from haproxy. 
It's x86_64.  Maybe when I find all that free time I am looking for!


Slightly interesting detail, not sure what it means:  The backend for 
haproxy on the pi shows L6OK on the stats page instead of L7OK like all 
the other backends.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Shawn Heisey

On 5/29/23 19:52, Shawn Heisey wrote:

Interesting idea.


So sorry.  I was writing up the new reply, and my fingers got confused 
for a moment, accidentally did Ctrl-Enter which tells Thunderbird to 
send the message.  Will send a new complete reply.




Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Shawn Heisey

On 5/29/23 01:43, Aleksandar Lazic wrote:

HAProxies FE => HAProxies BE => Destination Servers

Where the Destination Servers are also HAProxies which just returns a 
static content or any high performance low latency HTTPS Server.

With such a Setup can you test also the Client mode of the OpenSSL.


Interesting idea.

I set up haproxy on raspberry pi and configured it to serve a static web 
page with https.  Running the same version of haproxy on both the main 
server and the raspi, running with the same version of quictls.


https://raspi1.elyograg.org

Side note: compiling and installing quictls and haproxy is a lot slower 
on a raspberry pi than on a dell server.  84 seconds on the dell server 
and 2591 seconds on the pi.  Make gets 12 threads on the server, 2 on 
the pi ... I give it half of the physical core count, rounded up to 2.


It took a while to get this info due to the slow compile speeds on the 
pi.  I wish build systems could give me an accurate estimate of how far 
done the build is.  The quictls one doesn't say ANYTHING.


The requests are taking more time in general.  This is due to another 
round trip (including SSL) from the server to the raspberry pi that did 
not occur before.  With the other URL, it was forwarding to Apache on 
the same server, port 81 without ssl.


1.1.1t:

3.0.8:
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest Count 24000 290.48/s
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest 10th % 59 ms
19:22:12.281 [main] INFO  o.e.t.h.MainSSLTest 25th % 66 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest Median 75 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 75th % 87 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 95th % 123 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 99th % 161 ms
19:22:12.282 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 1004 ms

3.1.0+locks:

Couldn't do this one.  Compile fails:





Re: Followup on openssl 3.0 note seen in another thread

2023-05-27 Thread Shawn Heisey

On 5/27/23 18:03, Shawn Heisey wrote:

On 5/27/23 14:56, Shawn Heisey wrote:
Yup.  It was using keepalive.  I turned keepalive off and repeated the 
tests.


I did the tests again with 200 threads.  The system running the tests 
has 12 hyperthreaded cores, so this definitely pushes its capabilities.


I had forgotten a crucial fact that means all my prior testing work was 
invalid:  Apache HttpClient 4.x defaults to a max simultaneous 
connection count of 2.  Not going to exercise concurrency with that!


I have increased that to 1024, my program's max thread count, and now 
the test is a LOT faster ... it's actually running 200 threads at the 
same time.  Two runs per branch here, one with 200 threads and one with 
24 threads.


Still no smoking gun showing 3.0 as the slowest of the bunch.  In fact, 
3.0 is giving the best results!  So my test method is still probably the 
wrong approach.



1.1.1t:
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest Count 20 234.54/s
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 10th % 54 ms
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 25th % 94 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest Median 188 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 75th % 991 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 95th % 3698 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 99th % 6924 ms
21:06:45.390 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11983 ms
-
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest Count 24000 355.56/s
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 25th % 46 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest Median 57 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 75th % 71 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 95th % 126 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99th % 168 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 721 ms

3.0.8:
20:50:12.916 [main] INFO  o.e.t.h.MainSSLTest Count 20 244.69/s
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 10th % 56 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 25th % 93 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest Median 197 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 75th % 949 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 95th % 3425 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99th % 6679 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11582 ms
-
21:23:22.076 [main] INFO  o.e.t.h.MainSSLTest Count 24000 404.78/s
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest Median 53 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 75th % 63 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 95th % 90 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 99th % 121 ms
21:23:22.078 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 671 ms

3.1.0+locks:
20:33:32.805 [main] INFO  o.e.t.h.MainSSLTest Count 20 238.02/s
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 10th % 58 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 25th % 95 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest Median 196 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 75th % 1001 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 95th % 3475 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99th % 6288 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 10700 ms
-
21:26:24.555 [main] INFO  o.e.t.h.MainSSLTest Count 24000 402.89/s
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 10th % 39 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest Median 52 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 75th % 64 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 95th % 93 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 99th % 127 ms
21:26:24.557 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 689 ms



Re: Followup on openssl 3.0 note seen in another thread

2023-05-27 Thread Shawn Heisey

On 5/27/23 14:56, Shawn Heisey wrote:
Yup.  It was using keepalive.  I turned keepalive off and repeated the 
tests.


I did the tests again with 200 threads.  The system running the tests 
has 12 hyperthreaded cores, so this definitely pushes its capabilities.


The system running haproxy has 24 hyperthreaded cores.  There is no 
thread or process info in haproxy.cfg.


200 threads takes so long to run that I didn't do multiple runs per 
branch.  Any inconsistencies created by the fact that haproxy has just 
been restarted will hopefully be leveled out due to how long the run takes.


The request times for 200 threads vs. 24 threads shows that the speed 
went down.  I think I have definitely saturated the test system, and 
hopefully also the haproxy server.  Still no smoking gun showing the 
lock problems in 3.0.  I had hoped that would be apparent.


1.1.1t:
15:52:18.666 [main] INFO  o.e.t.h.MainSSLTest Count 20 56.82/s
15:52:18.668 [main] INFO  o.e.t.h.MainSSLTest 10th % 31 ms
15:52:18.668 [main] INFO  o.e.t.h.MainSSLTest 25th % 47 ms
15:52:18.668 [main] INFO  o.e.t.h.MainSSLTest Median 994 ms
15:52:18.669 [main] INFO  o.e.t.h.MainSSLTest 75th % 4953 ms
15:52:18.669 [main] INFO  o.e.t.h.MainSSLTest 95th % 14205 ms
15:52:18.669 [main] INFO  o.e.t.h.MainSSLTest 99th % 23581 ms
15:52:18.669 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 37396 ms

3.0.8:
16:59:03.645 [main] INFO  o.e.t.h.MainSSLTest Count 20 58.34/s
16:59:03.647 [main] INFO  o.e.t.h.MainSSLTest 10th % 30 ms
16:59:03.648 [main] INFO  o.e.t.h.MainSSLTest 25th % 35 ms
16:59:03.648 [main] INFO  o.e.t.h.MainSSLTest Median 368 ms
16:59:03.648 [main] INFO  o.e.t.h.MainSSLTest 75th % 4606 ms
16:59:03.648 [main] INFO  o.e.t.h.MainSSLTest 95th % 14840 ms
16:59:03.649 [main] INFO  o.e.t.h.MainSSLTest 99th % 25561 ms
16:59:03.649 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 40826 ms

3.1.0+locks:
18:01:04.198 [main] INFO  o.e.t.h.MainSSLTest Count 20 56.69/s
18:01:04.198 [main] INFO  o.e.t.h.MainSSLTest 10th % 31 ms
18:01:04.198 [main] INFO  o.e.t.h.MainSSLTest 25th % 39 ms
18:01:04.199 [main] INFO  o.e.t.h.MainSSLTest Median 455 ms
18:01:04.199 [main] INFO  o.e.t.h.MainSSLTest 75th % 4759 ms
18:01:04.199 [main] INFO  o.e.t.h.MainSSLTest 95th % 15071 ms
18:01:04.199 [main] INFO  o.e.t.h.MainSSLTest 99th % 25729 ms
18:01:04.200 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 41308 ms



Re: Followup on openssl 3.0 note seen in another thread

2023-05-27 Thread Shawn Heisey

On 5/27/23 02:59, Willy Tarreau wrote:

The little difference makes me think you've sent your requests over
a keep-alive connection, which is fine, but which doesn't stress the
TLS stack anymore.


Yup.  It was using keepalive.  I turned keepalive off and repeated the 
tests.


I'm still not seeing a notable difference between the branches, so I 
have to wonder whether I need a completely different test.  Or whether I 
simply don't need to worry about it at all because my traffic needs are 
so small.


Requests per second is down around 60 instead of 1200, and the request 
time percentile values went up.  I've included two runs per branch here. 
 24 threads, each doing 1000 requests.  The haproxy logs indicate the 
page I'm hitting returns 829 bytes, while the actual index.html is 1187 
bytes.  I think gzip compression and the HTTP headers explains the 
difference.  Without keepalive, the overall test takes a lot longer, 
which is not surprising.


The high percentiles are not encouraging.  7 seconds to get a web page 
under 1 KB, even with 1.1.1t?


This might be interesting to someone:

https://asciinema.elyograg.org/haproxyssltest1.html

I put the project in github.

https://github.com/elyograg/haproxytestssl

quictls branch: OpenSSL_1_1_1t+quic
14:15:57.496 [main] INFO  o.e.t.h.MainSSLTest Count 24000 64.65/s
14:15:57.498 [main] INFO  o.e.t.h.MainSSLTest 10th % 28 ms
14:15:57.499 [main] INFO  o.e.t.h.MainSSLTest 25th % 28 ms
14:15:57.499 [main] INFO  o.e.t.h.MainSSLTest Median 31 ms
14:15:57.499 [main] INFO  o.e.t.h.MainSSLTest 75th % 65 ms
14:15:57.500 [main] INFO  o.e.t.h.MainSSLTest 95th % 2690 ms
14:15:57.500 [main] INFO  o.e.t.h.MainSSLTest 99th % 5058 ms
14:15:57.500 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 9342 ms
-
14:22:19.922 [main] INFO  o.e.t.h.MainSSLTest Count 24000 65.39/s
14:22:19.924 [main] INFO  o.e.t.h.MainSSLTest 10th % 28 ms
14:22:19.924 [main] INFO  o.e.t.h.MainSSLTest 25th % 28 ms
14:22:19.924 [main] INFO  o.e.t.h.MainSSLTest Median 31 ms
14:22:19.925 [main] INFO  o.e.t.h.MainSSLTest 75th % 62 ms
14:22:19.925 [main] INFO  o.e.t.h.MainSSLTest 95th % 2683 ms
14:22:19.925 [main] INFO  o.e.t.h.MainSSLTest 99th % 4978 ms
14:22:19.925 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 7291 ms

quictls branch: openssl-3.1.0+quic+locks
13:15:28.901 [main] INFO  o.e.t.h.MainSSLTest Count 24000 63.43/s
13:15:28.903 [main] INFO  o.e.t.h.MainSSLTest 10th % 29 ms
13:15:28.903 [main] INFO  o.e.t.h.MainSSLTest 25th % 29 ms
13:15:28.903 [main] INFO  o.e.t.h.MainSSLTest Median 32 ms
13:15:28.904 [main] INFO  o.e.t.h.MainSSLTest 75th % 66 ms
13:15:28.904 [main] INFO  o.e.t.h.MainSSLTest 95th % 2660 ms
13:15:28.904 [main] INFO  o.e.t.h.MainSSLTest 99th % 4879 ms
13:15:28.905 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 9241 ms
-
13:23:15.119 [main] INFO  o.e.t.h.MainSSLTest Count 24000 62.99/s
13:23:15.121 [main] INFO  o.e.t.h.MainSSLTest 10th % 29 ms
13:23:15.122 [main] INFO  o.e.t.h.MainSSLTest 25th % 29 ms
13:23:15.122 [main] INFO  o.e.t.h.MainSSLTest Median 32 ms
13:23:15.122 [main] INFO  o.e.t.h.MainSSLTest 75th % 61 ms
13:23:15.123 [main] INFO  o.e.t.h.MainSSLTest 95th % 2275 ms
13:23:15.123 [main] INFO  o.e.t.h.MainSSLTest 99th % 6189 ms
13:23:15.123 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11406 ms

quictls branch: openssl-3.0.8+quic
13:34:25.780 [main] INFO  o.e.t.h.MainSSLTest Count 24000 64.57/s
13:34:25.783 [main] INFO  o.e.t.h.MainSSLTest 10th % 28 ms
13:34:25.783 [main] INFO  o.e.t.h.MainSSLTest 25th % 28 ms
13:34:25.783 [main] INFO  o.e.t.h.MainSSLTest Median 33 ms
13:34:25.783 [main] INFO  o.e.t.h.MainSSLTest 75th % 66 ms
13:34:25.784 [main] INFO  o.e.t.h.MainSSLTest 95th % 2642 ms
13:34:25.784 [main] INFO  o.e.t.h.MainSSLTest 99th % 4994 ms
13:34:25.784 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 7503 ms
-
14:08:33.750 [main] INFO  o.e.t.h.MainSSLTest Count 24000 63.06/s
14:08:33.753 [main] INFO  o.e.t.h.MainSSLTest 10th % 28 ms
14:08:33.753 [main] INFO  o.e.t.h.MainSSLTest 25th % 29 ms
14:08:33.754 [main] INFO  o.e.t.h.MainSSLTest Median 33 ms
14:08:33.754 [main] INFO  o.e.t.h.MainSSLTest 75th % 64 ms
14:08:33.754 [main] INFO  o.e.t.h.MainSSLTest 95th % 2904 ms
14:08:33.754 [main] INFO  o.e.t.h.MainSSLTest 99th % 5216 ms
14:08:33.755 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 8287 ms



Re: Followup on openssl 3.0 note seen in another thread

2023-05-26 Thread Shawn Heisey

On 5/25/23 09:08, Willy Tarreau wrote:

The problem definitely is concurrency, so 1000 curl will show nothing
and will not even match production traffic. You'll need to use a load
generator that allows you to tweak the TLS resume support, like we do
with h1load's argument "--tls-reuse". Also I don't know how often the
recently modified locks are used per server connection and per client
connection, that's what the SSL guys want to know since they're not able
to test their changes.


I finally got a test program together.  After trying and failing with 
the Jetty HttpClient and Apache HttpClient version 5 (both options that 
would have let me do HTTP/2) I got a program together with Apache 
HttpClient version 4.  I had one version that shelled out to curl, but 
it ran about ten times slower.


I know lots of people are going to have bad things to say about writing 
a test in Java.  It's the only language where I already know how to 
write multi-threaded code.  I would have to spend a bunch of time 
learning how to do that in another language.


It fires up X threads, each of which make 1000 consecutive requests to 
the URL specified.  It records the time in milliseconds for each 
request, and when all the threads finish, prints out statistics.  These 
runs are with 24 threads.  I ran it on a different system so that it 
would not affect CPU usage on the server running haproxy.  Here's the 
results:


quictls branch: OpenSSL_1_1_1t+quic
23:01:19.067 [main] INFO  o.e.t.h.MainSSLTest Count 24000 1228.69/s
23:01:19.069 [main] INFO  o.e.t.h.MainSSLTest Median 7562839 ns
23:01:19.069 [main] INFO  o.e.t.h.MainSSLTest 75th % 25138492 ns
23:01:19.070 [main] INFO  o.e.t.h.MainSSLTest 95th % 70603313 ns
23:01:19.070 [main] INFO  o.e.t.h.MainSSLTest 99th % 120502022 ns
23:01:19.070 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 355829439 ns

quictls branch: openssl-3.1.0+quic+locks
22:56:11.457 [main] INFO  o.e.t.h.MainSSLTest Count 24000 1267.96/s
22:56:11.459 [main] INFO  o.e.t.h.MainSSLTest Median 6827111 ns
22:56:11.459 [main] INFO  o.e.t.h.MainSSLTest 75th % 23239248 ns
22:56:11.460 [main] INFO  o.e.t.h.MainSSLTest 95th % 70625628 ns
22:56:11.460 [main] INFO  o.e.t.h.MainSSLTest 99th % 129494323 ns
22:56:11.460 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 307070582 ns

quictls branch: openssl-3.0.8+quic
22:59:12.614 [main] INFO  o.e.t.h.MainSSLTest Count 24000 1163.24/s
22:59:12.616 [main] INFO  o.e.t.h.MainSSLTest Median 6930268 ns
22:59:12.616 [main] INFO  o.e.t.h.MainSSLTest 75th % 26238752 ns
22:59:12.616 [main] INFO  o.e.t.h.MainSSLTest 95th % 75464869 ns
22:59:12.616 [main] INFO  o.e.t.h.MainSSLTest 99th % 132522508 ns
22:59:12.617 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 445411125 ns

The stats don't show any kind of smoking gun like I had hoped they 
would.  Not a lot of difference there.


Differences in the requests per second are also not huge, but more in 
line with what I was expecting.  If I can believe those numbers, and I 
admit that this kind of micro-benchmark is not the most reliable way to 
test performance, it looks like 3.1.0 with the lock fixes is slightly 
faster than 1.1.1t. 24 threads might not be enough to really exercise 
the concurrency though.
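For what it's worth, the percentile bookkeeping behind the numbers above can be sketched like this. It is a simplified, network-free stand-in for the actual harness: the class and method names are made up, and the real latency list is filled from timed HttpClient requests rather than placeholder data.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Each worker thread records one latency per request; once all workers
// finish, nearest-rank percentiles are computed over the merged list.
public class PercentileSketch {

    // Nearest-rank percentile over an ascending-sorted list of latencies.
    static long percentile(List<Long> sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(0, Math.min(idx, sorted.size() - 1)));
    }

    public static void main(String[] args) {
        List<Long> latenciesNs = new ArrayList<>();
        // Placeholder data; the real harness fills this from timed requests.
        for (long i = 1; i <= 1000; i++) latenciesNs.add(i * 1_000L);
        Collections.sort(latenciesNs);
        System.out.println("Median " + percentile(latenciesNs, 50.0) + " ns");
        System.out.println("95th % " + percentile(latenciesNs, 95.0) + " ns");
        System.out.println("99.9 % " + percentile(latenciesNs, 99.9) + " ns");
    }
}
```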


I will poke at it a little more tomorrow, trying more threads.



Re: Followup on openssl 3.0 note seen in another thread

2023-05-25 Thread Shawn Heisey

On 3/11/23 22:52, Willy Tarreau wrote:

According to the OpenSSL devs, 3.1 should be "4 times better than 3.0",
so it could still remain 5-40 times worse than 1.1.1. I intend to run
some tests soon on it on a large machine, but preparing tests takes a
lot of time and my progress got delayed by the painful bug of last week.
I'll share my findings anyway.


Just noticed that quictls has a special branch for lock changes in 3.1.0:

https://github.com/quictls/openssl/tree/openssl-3.1.0+quic+locks

I am not sure how to go about proper testing for performance on this.  I 
did try a very basic "curl a URL 1000 times in bash" test back when 
3.1.0 was released, but that showed 3.0.8 and 3.1.0 were faster than 
1.1.1, so concurrency is likely required to see a problem.


Thanks,
Shawn



Re: Latest 2.8-dev not doing TLS 1.2

2023-05-20 Thread Shawn Heisey

On 5/19/23 14:21, Zakharychev, Bob wrote:

ssl-default-bind-options no-tls-tickets ssl-min-ver TLSv1.2





I'd suggest you try with ssl-default-bind-options as in my config, and maybe
ssl-default-bind-ciphers as well as these are for TLS 

I have been unknowingly hampered in my tests by the fact that my 
pacemaker cluster has been malfunctioning and moved the VIP to a 
different server that did not have everything up to date.  I added a 
check for pacemaker status into my build scripts so it will warn me 
about that particular problem with pacemaker.  It keeps happening when I 
reboot the servers for updates.


After thrashing the pacemaker cluster into obedience, I now have 
everything fully functional and once again getting an A+ grade with this 
config, haproxy 2.8dev12, and quictls 3.1.0:


ssl-default-bind-ciphers 
ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256
ssl-default-bind-ciphersuites 
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256

ssl-default-bind-options    ssl-min-ver TLSv1.2

Thanks,
Shawn



Latest 2.8-dev not doing TLS 1.2

2023-05-19 Thread Shawn Heisey
I have a config that I have had in place for a while now.  It did TLS 
1.2 and 1.3, and got an A+ rating at SSL Labs.


Today I was running the SSL test again and it only got an A rating 
instead of A+.  Looking deeper at the results, I saw that it was no 
longer doing TLS 1.2 ... only TLS 1.3.


Below are the global section, the defaults section, the bind lines from 
the frontend, and haproxy -vv output.  If there is something missing 
that would shine a light on the issue, please let me know.


I haven't changed any TLS-related config for a LONG time now.  Is there 
something I am doing wrong that has disabled TLS 1.2 in 2.8-dev?


Thanks,
Shawn

---
global
log 127.0.0.1 len 65535 format rfc5424 local0
log 127.0.0.1 len 65535 format rfc5424 local1 notice
maxconn 4096
daemon
#debug
#quiet
spread-checks   2
tune.h2.max-concurrent-streams  1000
tune.bufsize    65536
tune.http.logurilen 49152
ssl-server-verify   none
tune.ssl.default-dh-param   4096
tune.ssl.cachesize  10
tune.ssl.lifetime   900
	ssl-default-bind-ciphers 
ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384
	ssl-default-bind-ciphersuites 
TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384

ssl-default-bind-options    no-sslv3 no-tlsv10 no-tlsv11
#   ssl-default-server-ciphers  RC4-MD5
	ssl-default-server-ciphers 
RC4-MD5:ECDHE-RSA-AES256-SHA384:AES256-SHA:AES256-SHA256:ECDHE-RSA-AES128-GCM-SHA256

stats socket /etc/haproxy/stats.socket

defaults
log global
mode    http
option  forwardfor except 127.0.0.1
option  socket-stats
balance leastconn
option  httplog
option  dontlognull
option  redispatch
# commented because http3/quic doesn't like it
#   option  abortonclose
retries 1
compression algo gzip
	compression type text/css text/html text/javascript 
application/javascript text/plain text/xml application/json application/css

timeout connect 5s
timeout client  15s
timeout server  120s
timeout http-keep-alive 5s
timeout check   9990
retry-on all-retryable-errors
http-errors myerrors
errorfile 400 /etc/haproxy/errors/400.http
errorfile 404 /etc/haproxy/errors/404.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/50x.http
errorfile 503 /etc/haproxy/errors/50x.http
errorfile 504 /etc/haproxy/errors/50x.http
---
	bind 0.0.0.0:443 name web443 ssl crt 
/etc/ssl/certs/local/REDACTED1.wildcards.combined.pem crt 
/etc/ssl/certs/local/REDACTED2.wildcards.combined.pem alpn h2,http/1.1 
npn h2,http/1.1 allow-0rtt curves secp521r1:secp384r1
	bind quic4@192.168.217.170:443 name quic443_vip ssl crt 
/etc/ssl/certs/local/REDACTED1.wildcards.combined.pem crt 
/etc/ssl/certs/local/REDACTED2.wildcards.combined.pem proto quic alpn h3 
npn h3 allow-0rtt curves secp521r1:secp384r1
	bind quic4@192.168.217.200:443 name quic443_smeagol ssl crt 
/etc/ssl/certs/local/REDACTED1.wildcards.combined.pem crt 
/etc/ssl/certs/local/REDACTED2.wildcards.combined.pem proto quic alpn h3 
npn h3 allow-0rtt curves secp521r1:secp384r1
	bind quic4@192.168.217.202:443 name quic443_gandalf ssl crt 
/etc/ssl/certs/local/REDACTED1.wildcards.combined.pem crt 
/etc/ssl/certs/local/REDACTED2.wildcards.combined.pem proto quic alpn h3 
npn h3 allow-0rtt curves secp521r1:secp384r1

---
HAProxy version 2.8-dev12-ffdf6a-1 2023/05/17 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 6.1.0-1012-oem #12-Ubuntu SMP PREEMPT_DYNAMIC Tue May 
9 17:12:06 UTC 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL 
-OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL -PROMEX -PTHREAD_EMULATION +QUIC +RT +SHM_OPEN -SLZ +SSL 
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL 

Re: gitlab server behind haproxy never switches to http/3

2023-04-28 Thread Shawn Heisey

On 4/27/23 20:59, Tristan wrote:
Then yep, you're in the same boat as we were. It switched for no reason 
one day. Even trying HTTPS/SVCB DNS records did nothing for us until it 
"magically" decided to use H3.


Today it is using h3.  I didn't change anything, other than installing 
updates on my desktop (including the kernel) and rebooting it.  Which 
might cause infinitesimal differences in timing.  Firefox hasn't been 
updated in a few days.


When I try to get a response locally using a curl docker container that 
supports http3, the request times out.  If I use that curl to pull 
www.google.com, that works.


The same sites that timeout with that version of curl now work in 
Firefox and Chrome using http3 on the same client system, and test OK on 
http3check.net.  I don't understand that at all.  This is the docker 
command you can try yourself:


docker run --rm ymuski/curl-http3 curl -v -m 5 -s -f 
"https://www.google.com/" --http3


Redirect stdout to /dev/null to more easily see the h3/tls negotiation.

Thanks,
Shawn



Re: gitlab server behind haproxy never switches to http/3

2023-04-27 Thread Shawn Heisey

On 4/27/23 20:59, Tristan wrote:
Hmm.  The web server is on the local gigabit LAN with the client.  
Would that give TCP a significant enough boost that it could beat UDP?


Hard to say; in our case it seemed more random than actually driven by 
anything remotely close to clear. I'm merely quoting official word on 
it, but have yet to try and dig the actual source code for how it 
happens. The only platform I know gets reliable QUIC enabling is CF, so 
maybe someone from their side could chime in if they ever see this.


I know the NICs in the server do TCP handling/acceleration.  It is 
likely that the NIC in my desktop also does it.


A packet capture shows that the server NIC also handles UDP (outbound 
packets have a bad checksum because it is actually calculated by the 
NIC) ... but maybe the TCP handling is a lot more efficient than the UDP 
handling.  It would receive more attention from hardware engineers 
because the majority of IP traffic in the wild is TCP.


Thanks for your insights.  I have not delved into quic as deeply as you 
have.  I know just enough about it that I can get it working on haproxy 
and I can ask annoying questions. :)


Thanks,
Shawn



Re: gitlab server behind haproxy never switches to http/3

2023-04-27 Thread Shawn Heisey
I did figure out that ufw was not allowing udp/443.  So it turns out 
that wasn't working for any website on that install.


I have another install in AWS that IS working, and it turns out that 
when I was seeing the green lightning bolt in firefox, it was one of 
those websites, not the ones in my local install.  Sometimes I forget 
which install covers specific websites.


But even after allowing udp/443, it is still not switching.  If I enter 
the URL into https://http3check.net/ it does say that http3 works.


On 4/27/23 20:26, Tristan wrote:

As far as I know, the main way it happens is that the browser:
- races H2 and H3 and picks the fastest (then remember it)
- retries on H2 in case of H3 issue (then remember it)


Hmm.  The web server is on the local gigabit LAN with the client.  Would 
that give TCP a significant enough boost that it could beat UDP?  What I 
learned from forcing quic (see below) seems to support this notion.


> You can try something like that to force it to use H3 and reveal
> whatever issue it might be having:
> chromium-browser --enable-quic
> --origin-to-force-quic-on=your-gitlab-host.com:443

I have chrome, not chromium.  Substituting /opt/google/chrome/chrome for 
chromium and running it with those options, it DOES do http/3.  The 
lightning bolt is orange instead of blue.


Thanks,
Shawn



gitlab server behind haproxy never switches to http/3

2023-04-27 Thread Shawn Heisey
I have haproxy installed, doing http/3.  The http/3 is working for 
almost everything.  I build it from the main branch and update it fairly 
often.


One of the websites I have behind it is my gitlab server.  That is 
always http/2, it never switches to http/3.


Does anyone know why that happens, and whether there is anything I can 
do about it?  The alt-svc header IS received by the browser.


elyograg@smeagol:/storage0/build/haproxy$ haproxy -vv
HAProxy version 2.8-dev8-d2f61d-40 2023/04/27 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 6.1.0-1009-oem #9-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 
31 09:59:10 UTC 2023 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_QUIC=1 
USE_PCRE2_JIT=1

  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY 
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE 
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH 
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL 
-OPENSSL_WOLFSSL -OT -PCRE +PCRE2 +PCRE2_JIT -PCRE_JIT +POLL +PRCTL 
-PROCCTL -PROMEX -PTHREAD_EMULATION +QUIC +RT +SHM_OPEN -SLZ +SSL 
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL +ZLIB


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=48).

Built with OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
Running on OpenSSL version : OpenSSL 3.1.0+quic 14 Mar 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.39 2021-10-29
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
      quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
        h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
      fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
        h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
      none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2023-03-11 Thread Shawn Heisey

On 12/14/22 07:15, Willy Tarreau wrote:

On Wed, Dec 14, 2022 at 07:01:59AM -0700, Shawn Heisey wrote:

On 12/14/22 06:07, Willy Tarreau wrote:

By the way, are you running with OpenSSL
3.0 ?  That one is absolutely terrible and makes extreme abuse of
mutexes and locks, to the point that certain workloads were divided
by 2-digit numbers between 1.1.1 and 3.0. It took me one day to
figure that my load generator which was caping at 400 conn/s was in
fact suffering from an accidental build using 3.0 while in 1.1.1
the perf went back to 75000/s!


Is this a current problem with the latest openssl built from source?


Yes and deeper than that actually, there's even a meta-issue to try to
reference the many reports for massive performance regressions on the
project:


A followup to my followup.  Time flies!

I was just reading on the openssl mailing list about what's coming in 
version 3.1.  The first release highlight is:


* Refactoring of the OSSL_LIB_CTX code to avoid excessive locking

Is anyone enough in tune with openssl happenings to know whether that 
fixes the issues that Willy was advising me about?  Or maybe improves 
the situation but doesn't fully resolve it?


I tried to figure this out for myself based on data in the CHANGES.md 
file, but didn't see anything that looked relevant to my very untrained 
eye.  Reading the code wouldn't help, as I am completely clueless when 
it comes to encryption code.


Thanks,
Shawn



Re: Trying to understand how to do SSL properly.

2023-01-30 Thread Shawn Heisey

On 1/30/2023 12:08 AM, Jeremy Hansen wrote:
It’s working as of now but are you saying the connection from HAProxy to 
the real server won’t be encrypted?  I assumed at this point it’s all 
passthrough.  The browser isn’t complaining at the moment.


Redirecting this back to the mailing list.

When I looked at the config the first time, I missed that it is "mode 
tcp".  In which case haproxy is not involved in the TLS at all, and my 
previous statement is invalid.  My apologies.


My haproxy configs are "mode http" and haproxy is handling the TLS that 
browsers see.  The only backend I have that uses TLS is the one for 
plex, because I couldn't find a reasonable way around that.  All the 
other backends are unencrypted.


I am pretty sure that many of the things I have haproxy doing would not 
be possible in tcp mode.  If you do not need those, then tcp mode should 
work very well.


Thanks,
Shawn



Re: Trying to understand how to do SSL properly.

2023-01-29 Thread Shawn Heisey

On 1/29/2023 10:43 PM, Jeremy Hansen wrote:
Figured out my issue.  I was doing something really stupid.  Make sure 
if you’re using conf.d/, you name your file .cfg instead of .conf.


I don't think haproxy does a conf.d setup out of the box. You (or your 
OS) would have to set that up.  Or were you talking about something 
other than haproxy?



backend nodes
     mode tcp
     balance roundrobin
     option ssl-hello-chk
     server web01 192.168.10.30:443 check


If the backend is doing TLS as well, you need "ssl" after the IP:PORT in 
the server line.  If the back end is not expecting the same hostname in 
the Host header or SNI that the end user inputs, you'd probably have to 
change that before it gets to the backend.  Changing the host header 
would not be hard, but I have no idea whether that would also change SNI.
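For what it's worth, in http mode the Host header and the SNI sent to the backend are controlled independently; changing the header alone does not change the SNI. A hedged sketch, with placeholder hostnames:

```
backend nodes
    mode http
    # Rewrite the Host header seen by the backend.
    http-request set-header Host web01.example.com
    # "sni" on the server line controls the name in the TLS handshake;
    # it takes a sample expression such as str() or req.hdr(host).
    server web01 192.168.10.30:443 ssl verify none sni str(web01.example.com) check
```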


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-08 Thread Shawn Heisey

On 1/7/23 09:59, Willy Tarreau wrote:

Also if you want you can show the IP:ports by adding "stats show-legends"
in your stats section. However, be aware that it will also show server IP
addresses, configured stickiness cookies etc. Thus only do this if access
to your stats page is restricted to authorized users only.


I have auth configured for my stats URI.  It's open to the world other 
than that.  All the addresses are RFC1918, so they aren't useful unless 
somebody manages to exploit a vulnerability on my server.


I did configure show-legends, and after finally figuring out how the 
info was presented (tooltip), I noticed something:  It does not indicate 
whether the listener is TCP or UDP.  The fact that I have "quic" in the 
name makes it easy for me to figure it out, but I do think it should 
indicate protocol.


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 10:35, Shawn Heisey wrote:
I made that change, and it's still not including the alt-svc header on 
the stats page.


Once again bitten by my pacemaker config!  Turns out that it had 
switched the VIP to the other server, which still had the old config.  I 
think one of my haproxy restarts was noticed by pacemaker and it 
declared the server bad.


http-after-response is working.

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 09:59, Willy Tarreau wrote:

No, you just have one entry per "bind" line. If it's only a matter of
listening on multiple host:port and you want them merged, you could
probably put all the addresses on the same line separated by commas
and see if it's better:

   bind quic4@1.1.1.1:443,quic4@2.2.2.2:443,quic4@3.3.3.3:443 ssl crt ...


That's how I had it configured and it showed three lines on stats.  I 
now have three lines, all with different names.  quic443-vip and 
quic443- for each of the real servers.
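For reference, a sketch of the split-out form with one named bind line per address (addresses, names, and cert path here are placeholders, not my real config), plus the "option socket-stats" needed for per-listener stats:

```
frontend ft_quic
    option socket-stats
    bind quic4@192.0.2.10:443 name quic443-vip   ssl crt /path/combined.pem alpn h3
    bind quic4@192.0.2.11:443 name quic443-node1 ssl crt /path/combined.pem alpn h3
    bind quic4@192.0.2.12:443 name quic443-node2 ssl crt /path/combined.pem alpn h3
```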


Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 10:04, Willy Tarreau wrote:

However there's a solution nowadays, you can use "http-after-response"
instead of "http-response". It will apply *after* the response, and will
happily overwrite stats, redirects and even error pages. I'd say that
it's the recommended way to place alt-svc now. /me just realises that I
should update the example on the haproxy.org main page by the way

I made that change, and it's still not including the alt-svc header on 
the stats page.


Initially I changed every http-response to http-after-response, but that 
didn't work for my error lines like this:


  http-response return status 404 default-errorfiles if { status 404 }

still on haproxy 2.7.1.
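In case it helps anyone else reading the archives, the split that follows from this thread looks roughly like the fragment below (frontend name and cert path are placeholders): keep "return" directives on http-response, and move only header additions to http-after-response.

```
frontend ft_example
    bind :443 ssl crt /path/combined.pem alpn h2,http/1.1
    # "return" generates the response, so it must stay on http-response.
    http-response return status 404 default-errorfiles if { status 404 }
    # http-after-response also covers stats, redirects and error pages.
    http-after-response add-header alt-svc 'h3=":443"; ma=7200'
```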

I love this community.  Thank you for all your efforts.

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 09:41, Shawn Heisey wrote:
That's really cool.  But I have an oddity I'd like to share and see if 
it needs some work.


Semi-related but separate:  I have this line in that frontend:

  stats uri /redacted

Which works well ... but it never switches to http/3.  The alt-svc 
header that the frontend specifies is not in the response, even if I 
move that config before stats.


  http-response add-header alt-svc 'h3=":443"; ma=7200'

That's not causing me any problems ... it's not like http/2 is slow. :) 
But I did want to mention it, see if you might want to change it.  I 
suspect that for the stats uri there is a lot of frontend config that is 
ignored.


In case it's important, haproxy -vv:

HAProxy version 2.7.1 2022/12/19 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.1.html
Running on: Linux 6.0.0-1009-oem #9-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 8 
07:13:10 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
-ENGINE +GETADDRINFO +OPENSSL -OPENSSL_WOLFSSL -LUA +ACCEPT4 -CLOSEFROM 
+ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL 
+SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT 
+QUIC -PROMEX -MEMORY_PROFILING +SHM_OPEN


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=48).

Built with OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
Running on OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.39 2021-10-29
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
      quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
        h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
      fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
        h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
 <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
      none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Thanks,
Shawn



Re: Question about the "name" option for the bind line

2023-01-07 Thread Shawn Heisey

On 1/7/23 07:46, Willy Tarreau wrote:

Indeed, you need "option socket-stats" in the frontend that has such
listeners, so that the stats are collected per-listening socket (this
is not the case by default).


That's really cool.  But I have an oddity I'd like to share and see if 
it needs some work.


https://www.dropbox.com/s/m54wp15wkmkrzcp/haproxy_option_socket-stats.png?dl=0

The bind config line I have for quic lists three host:port combos (the 
two real server IPs and the VIP), which is why it's there three times. 
I know you probably don't want to put info like the host:port on the 
stats page.  Because all three host:port combos are part of the same 
bind line, I think it probably should have only listed the name once. 
Thoughts?


I will be adjusting that to three bind lines with separate names and the 
same options, because I do like the idea of having them separate ... but 
I think by default it probably should have only showed one line, not 
three with the same name, because it's currently one config line.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-16 Thread Shawn Heisey

On 12/16/22 01:59, Shawn Heisey wrote:

On 12/16/22 00:26, Willy Tarreau wrote:
 > Both work for me using firefox (green flash after reload).

It wasn't working when I tested it.  I rebooted for a kernel upgrade and 
it still wasn't working.


And then a while later I was poking around in my zabbix UI and saw the 
green lightning bolt.  No idea what changed.  Glad it's working, but 
problems that fix themselves annoy me because I usually never learn what 
happened.


I think I know what happened.

I was having problems with my pacemaker cluster where it got very 
confused about the haproxy resource.  I had the haproxy service enabled 
at boot for both systems.  I have now disabled that in systemd so it's 
fully under the control of pacemaker.  I'm pretty sure that pacemaker 
was confused because it saw the service running on a system where it 
should have been disabled and pacemaker didn't start it ... and it 
decided that was unacceptable and basically broke the cluster.


So for a while I had the virtual IP resource on the "lesser" server and 
the haproxy resource on the main server.  But because I had haproxy 
enabled at boot time, it was actually running on both.  The haproxy 
config is the same between both systems, but the other server was still 
running a broken haproxy version.  Most of the backends are actually on 
the better server accessed by br0 IP address rather than localhost, so 
the broken haproxy was still sending them to the right place.  This also 
explains why I was not seeing traffic with tcpdump filtering on "udp 
port 443".  I have a ways to go before I've got true HA for my websites. 
 Setting up a database cluster is going to be challenging, I think.


I got pacemaker back in working order after I was done with my testing, 
so both resources were colocated on the better server and haproxy was 
not running on the other one.  I think you tried the URLs after I had 
fixed pacemaker, and when I saw it working on zabbix, that was also 
definitely after I fixed pacemaker.


On that UDP bind thing ... I now have three binds defined.  The virtual 
IP, the IP of the first server, and the IP of the second server.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-16 Thread Shawn Heisey

On 12/16/22 00:26, Willy Tarreau wrote:
> Both work for me using firefox (green flash after reload).

It wasn't working when I tested it.  I rebooted for a kernel upgrade and 
it still wasn't working.


And then a while later I was poking around in my zabbix UI and saw the 
green lightning bolt.  No idea what changed.  Glad it's working, but 
problems that fix themselves annoy me because I usually never learn what 
happened.


> You indeed need to
> bind to both the native and the virtual IP addresses (you can have the
> two on the same "bind" line, delimited by comma).

That's the little bit of info that I needed.  Now it works the way I was 
expecting with both IP addresses.  I have a lot less experience with UDP 
than TCP, I wasn't aware of that gotcha.  It does make perfect sense now 
that it's been pointed out.
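For anyone searching the archives later, the comma-delimited form Willy describes looks like this (addresses and cert path are placeholders; the first address would be the VIP, the second the native IP):

```
bind quic4@203.0.113.10:443,quic4@203.0.113.11:443 ssl crt /path/combined.pem alpn h3
```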


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-16 Thread Shawn Heisey

On 12/16/22 00:01, Willy Tarreau wrote:

   - if you want to use QUIC, use quictls-1.1.1. Once you have to build
 something yourself, you definitely don't want to waste your time on
 the performance-crippled 3.0, and 1.1.1 will change less often than
 3.0 so that also means less package updates.

   - if you want to experiment with QUIC and help developers, running
 compatibility tests with the latest haproxy master and the latest
 WolfSSL master could be useful. I just don't know if the maintainers
 are ready to receive lots of uncoordinated reports yet, I'm aware
 that they're still in the process of fixing a few basic integration
 issues that will make things run much smoother soon. Similarly,
 LibreSSL's QUIC support is very recent (3.6) and few people seem to
 use LibreSSL, I don't know how well it's supported in distros these
 days. More tests on this one would probably be nice and may possibly
 encourage its support.


I'd say that I am somewhere in between these two.  Helping the devs is 
not an EXPLICIT goal, but I am already tinkering with this stuff for 
myself, so it's not a lot of extra effort to be involved here.  I think 
my setup can provide a little bit of useful data and another test 
environment.  Pursuing http3 has been fun.


Straying offtopic:

I find that being a useful member of open source communities is an 
awesome experience.  For this one I'm not as much use at the code level 
as I am for other communities.  My experience with C was a long time ago 
... it was one of my first languages.  I spend more time with Bash and 
Java than anything else these days.  Occasionally delve into Perl, which 
I really like.


On the subject of building things myself ... way back in the 90s I used 
to build all my own Linux kernels, enabling only what I needed, building 
it into the kernel directly, and optimizing for the specific CPU in the 
machine.  And I tended to build most of the software I used from source 
as well.


These days, some distros have figured out how to do all these things 
better than I ever could, so I mostly install from apt repos.  For 
really mainstream software, they keep up with recent versions pretty well.


For some software, haproxy being one of the most prominent, the distro 
packages are so far behind what's current that I pretty much have to 
build it myself if I want useful features.  I got started using haproxy 
with version 1.4, and quickly went to 1.5-dev because I was pursuing the 
best TLS setup I could get.  In those days I wasn't using source 
repositories, I would download tarballs from 1wt.eu.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-15 Thread Shawn Heisey

On 12/15/22 21:49, Willy Tarreau wrote:

There's currently a great momentum around WolfSSL that was already
adopted by Apache, Curl, and Ngtcp2 (which is the QUIC stack that
powers most HTTP/3-compatible agents). Its support on haproxy is
making fast progress thanks to the efforts on the two sides, and it's
pleasant to speak to people who care about performance.


What would be your recommendation right now for a quic-enabled library 
to use with haproxy?  Are there any choices better than quictls 1.1.1?


Is wolfSSL support far enough along that I could build and try it and 
have some hope of success, or should I stick with quictls for now?  My 
websites certainly aren't anything mission-critical, but there are 
people that would be annoyed if I have problems.  Email is more 
important than the websites, and that's directly on the Internet in my 
AWS instance, not going through haproxy.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-15 Thread Shawn Heisey

On 12/15/22 02:19, Willy Tarreau wrote:

I guess you'll get them only while the previous version remains maintained
(i.e. use a package from the previous LTS distro). But regardless you'll
also need to use executables linked with that version and that's where it
can become a pain.


When I upgraded my main server from Ubuntu 20 to Ubuntu 22, it still had 
openssl 1.1.x installed as an unmanaged package not part of any repo. 
Little by little I got my third-party APT repos updated to jammy.  The 
last holdout was Gitlab, and I got that resolved just a few days ago. 
Then I was able to remove the 1.1 package.


I'm sure the performance issue has been brought to the attention of the 
OpenSSL project ... what did they have to say about the likelihood and 
timeline for providing a fix?  Is there an article or bug filed I can 
read for more information?


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-15 Thread Shawn Heisey

On 12/15/22 09:47, Shawn Heisey wrote:
The version of curl with http3 support is not available in any of the 
distro repos for my Ubuntu machines, so I found a docker image with it. 
That works in cases where a browser won't switch, but that's because it 
never tries TCP, it goes straight to UDP.  The problem doesn't break H3, 
it just breaks a browser's ability to transition from TCP to UDP.


With the provided patch, h3 is working well on the machine with this URL:


https://http3test.elyograg.org/

But it doesn't work correctly on the machine with this URL:

https://admin.elyograg.org/

Testing with the curl docker image works on both servers.  Testing with 
https://http3check.net also works with both servers.


The configs are not completely identical, but everything related to 
quic/h3 for those URLs is identical.  The only significant difference I 
have found so far between the two systems is that the one that works is 
Ubuntu 20.04 with edge kernel 5.15, and the one that doesn't work is 
Ubuntu 22.04 with edge kernel 6.0.


Both have quictls 1.1.1s compiled with exactly the same options, and the 
same haproxy 2.7 version with the same options -- up to date master with 
that one line patch.  They have different openssl versions, but haproxy 
should not be using that, it should just be using quictls.


The hardware is very different.  The one that works is an AWS t3a.large 
instance, 2 CPUs (linux reports AMD EPYC 7571) and 8GB RAM.  The one 
that doesn't work is a Dell R720xd in my basement with two of the 
following CPU, each with 12 cores, and 88GB RAM:


Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz

I have been through the configs working out minor differences, which 
resulted in changes to both configs.  Nothing new -- the AWS instance is 
still working, the basement server isn't.  The backends are the same 
except for the names and IP addresses.


H3 used to work on the basement machine, and I couldn't say when it 
stopped working.  I had seen the green lightning bolt on my zabbix 
install that runs on the basement machine, not sure when it disappeared. 
 I noticed it first on the AWS machine when I was switching quictls 
versions.  I usually update both servers haproxy together, so they 
probably stopped working about the same time.  The patched version works 
well on one, but not the other.


I downgraded the basement to 437fd289f2e32e56498d2d4da63852d483f284ef 
which should be the 2.7.0 release.  That didn't help, so maybe there is 
something else going on.


I believe that haproxy works intimately with kernel code ... could the 
difference of 5.15 and 6.0 (both with all of ubuntu's patches) be enough 
to explain this?


These are very much homegrown configs.  I cobbled together info from the 
documentation, info obtained on this mailing list, and random articles 
found with google.  I might be doing things substantially different than 
a true expert would.


This is how I configure quictls.  If this should be adjusted, I'm open 
to that.


-
CONFARGS="--prefix=/opt/quictls enable-tls1_3 no-idea no-mdc2 no-rc5 
no-zlib no-ssl3 enable-unit-test no-ssl3-method enable-rfc3779 
enable-cms no-capieng threads"


if [ "$(uname -i)" == "x86_64" ]; then
  CONFARGS="${CONFARGS} enable-ec_nistp_64_gcc_128"
fi
-

And here is the latest haproxy -vv:

HAProxy version 2.7.0-e557ae-43 2022/12/14 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.0.html
Running on: Linux 5.15.0-1026-aws #30~20.04.2-Ubuntu SMP Fri Nov 25 
14:53:22 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
-ENGINE +GETADDRINFO +OPENSSL -OPENSSL_WOLFSSL -LUA +ACCEPT4 -CLOSEFROM 
+ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL 
+SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT 
+QUIC -PROMEX -MEMORY_PROFILING +SHM_OPEN


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=2).

Built with OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
Running on OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
OpenSSL library supports TLS extensions : yes

Re: Followup on openssl 3.0 note seen in another thread

2022-12-15 Thread Shawn Heisey

On 12/15/22 00:58, Amaury Denoyelle wrote:

I seem to be able to reach your website with H3 currently. Did you
revert to an older version ? Regarding this commit, it rejects requests
with invalid headers (with uppercase or non-HTTP tokens in the field
name). Have you tried with several browsers and with command-line
clients ?


Yes, once I found the problem commit, I reverted to the commit just 
prior, which is why you saw it working.


Had to use --3way to apply the patch from your other message to apply to 
the 2.8-dev master branch.  Got that built and deployed.  H3 works. 
Looking forward to the fix coming to 2.7.


I did try with firefox, chrome, and a special version of curl.

The version of curl with http3 support is not available in any of the 
distro repos for my Ubuntu machines, so I found a docker image with it. 
That works in cases where a browser won't switch, but that's because it 
never tries TCP, it goes straight to UDP.  The problem doesn't break H3, 
it just breaks a browser's ability to transition from TCP to UDP.


With the commit just prior to the one that broke H3 in a browser, H3 is 
a lot more stable than it has been in the past.  Before, by clicking 
around between folders in my webmail, I could eventually (after maybe a 
dozen clicks) reach a point where the website becomes unresponsive until 
I shift-reload to get it back to H2 and then reload to have it switch to 
H3 again.  That did not happen with the newer commit.  Building with 
your patch also handles webmail flawlessly.


Looks like you meant that I was supposed to apply the patch to the 2.7 
master branch, not 2.8-dev.  It applied there without --3way, and that 
also fixes the problem.


Just got a look at the patch.  One line code fixes are awesome.

Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 21:23, Илья Шипицин wrote:

Can you try to bisect?


I had made some incorrect assumptions about what's needed to use bisect. 
 With a little bit of research I figured it out and it was a LOT easier 
than I had imagined.


I suspect that it won't help, browsers tend to remember things in their 
own way


One thing I have learned in my testing is that doing shift-reload on the 
page means it will never switch to h3.  So I use shift-reload followed 
by a couple of regular reloads as a way of resetting what the browser 
remembers.  That seems to work.


The bisect process only took a few runs to find the problem commit:

3ca4223c5e1f18a19dc93b0b09ffdbd295554d46 is the first bad commit
commit 3ca4223c5e1f18a19dc93b0b09ffdbd295554d46
Author: Amaury Denoyelle 
Date:   Wed Dec 7 14:31:42 2022 +0100

BUG/MEDIUM: h3: reject request with invalid header name

Reject request containing invalid header name. This concerns every
header containing uppercase letter or a non HTTP token such as a space.

For the moment, this kind of errors triggers a connection close. In the
future, it should be handled only with a stream reset. To reduce
backport surface, this will be implemented in another commit.

Thanks to Yuki Mogi from FFRI Security, Inc. for having reported this.

This must be backported up to 2.6.

(cherry picked from commit d6fb7a0e0f3a79afa1f4b6fc7b62053c3955dc4a)
Signed-off-by: Christopher Faulet 

 src/h3.c | 30 +-
 1 file changed, 29 insertions(+), 1 deletion(-)



Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 19:33, Shawn Heisey wrote:
With quictls 3.0.7 it was working.  I will try rebuilding and see 
whether it still does.  There was probably an update to haproxy as well 
as changing quictls -- my build script pulls the latest from the 2.7 git 
repo.


Rebuilding with quictls 3.0.7 didn't change the behavior -- browsers 
still don't switch to http3 as they did before, so the obvious conclusion 
is that something changed in haproxy.


If you would like me to do anything to help troubleshoot, please let me 
know.


This is the simplest test I have.  Reloading this page used to switch to 
http3:


https://http3test.elyograg.org/

I also built and installed the latest 2.8.0-dev version with quictls 
1.1.1s.  It doesn't switch to h3 either.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 07:15, Willy Tarreau wrote:

Should I switch to quictls 1.1.1 instead?

Possibly, yes


I did this, and now browsers do not switch to http3.  A direct request 
that forces http3 works, but browsers are not switching to it based on 
the alt-svc header.  Tried both firefox and chrome which have been 
successful for me in the past.
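
For reference, the browser-side switch is driven entirely by the alt-svc 
header that the TCP listener emits; a minimal sketch of the advertising 
pair, with an assumed certificate path:

```
frontend ft_https
    # TCP listener answers first and advertises h3 on UDP/443.
    bind :443 ssl crt /etc/haproxy/combined.pem alpn h2,http/1.1
    # QUIC listener the browser is expected to switch to.
    bind quic4@:443 ssl crt /etc/haproxy/combined.pem alpn h3
    http-after-response add-header alt-svc 'h3=":443"; ma=600'
```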


I grabbed a sniffer trace of UDP/443 when I ask for the page in firefox. 
 Here is a wireshark view of that when following the UDP stream:


https://www.dropbox.com/s/5sc8ylxt82mn0gf/h3_udp_capture_follow.png?dl=0

That certainly looks to me like a significant amount of two-way 
communication, but as it's encrypted, I have no idea what it might mean. 
 The browser's console reports that the connection is http/2.


With quictls 3.0.7 it was working.  I will try rebuilding and see 
whether it still does.  There was probably an update to haproxy as well 
as changing quictls -- my build script pulls the latest from the 2.7 git 
repo.


Output from haproxy -vv:

HAProxy version 2.7.0-e557ae-43 2022/12/14 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.0.html
Running on: Linux 5.15.0-1026-aws #30~20.04.2-Ubuntu SMP Fri Nov 25 
14:53:22 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
-ENGINE +GETADDRINFO +OPENSSL -OPENSSL_WOLFSSL -LUA +ACCEPT4 -CLOSEFROM 
+ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL 
+SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT 
+QUIC -PROMEX -MEMORY_PROFILING +SHM_OPEN


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=2).

Built with OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
Running on OpenSSL version : OpenSSL 1.1.1s+quic  1 Nov 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 12:06, Shawn Heisey wrote:
I built a gitlab CI config to test out changes to my build/install 
scripts.  I'm having some trouble with that where haproxy is not working 
right, I'll start a new thread.


Turned out that most of those problems were due to docker-related 
issues.  And then I discovered that in my tiny little test config for 
haproxy I had the bind line for udp/443 all wrong.


The following command may be of interest to anyone testing out 
http3/quic support.  It requires that you have docker installed.  On 
Ubuntu that can be installed with "apt install docker.io".


sudo docker run --add-host=host.docker.internal:host-gateway --rm 
ymuski/curl-http3 curl -v -m 4 -s -f -k 
"https://host.docker.internal/test_file; --http3 && echo GOOD


The curl options configure a 4 second absolute timeout, suppress the 
usual progress meter that curl shows, turn 4xx or 5xx response codes 
into a nonzero exit status, and disable certificate validation.  Perfect 
for a CI/CD pipeline.


Thanks,
Shawn



Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 07:15, Willy Tarreau wrote:

Should I switch to quictls 1.1.1 instead?


Possibly, yes. It's more efficient in every way from what we can see.
For users who build themselves (and with QUIC right now you don't have
a better choice), it should not change anything and will keep robustness.
For those relying on the distro's package, I don't know if it's possible
to install the previous distro's package side-by-side, but in any case
it can start to become a mess to deal with.


Bonus, 1.1.1s compiles noticeably faster than 3.0.7.  Install seems 
about the same, but I figured out how to have the install NOT do the 
docs, which brought install time down to 3 seconds.


I built a gitlab CI config to test out changes to my build/install 
scripts.  I'm having some trouble with that where haproxy is not working 
right, I'll start a new thread.


Thanks,
Shawn



Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Shawn Heisey

On 12/14/22 06:07, Willy Tarreau wrote:
> By the way, are you running with OpenSSL
> 3.0 ?  That one is absolutely terrible and makes extreme abuse of
> mutexes and locks, to the point that certain workloads were divided
> by 2-digit numbers between 1.1.1 and 3.0. It took me one day to
> figure that my load generator which was caping at 400 conn/s was in
> fact suffering from an accidental build using 3.0 while in 1.1.1
> the perf went back to 75000/s!

Is this a current problem with the latest openssl built from source? I'm 
running my 2.7.x installs with quictls 3.0.7, which aside from the QUIC 
support should be the same as openssl.


400 connections per second is a lot more than I need, but if it's that 
inefficient, seems like overall system performance would take a hit even 
if it's not completely saturated.  My primary server has dual E5-2697 v2 
CPUs, but my mail server is a 2-CPU AWS instance.


Should I switch to quictls 1.1.1 instead?

Thanks,
Shawn



Links on haproxy.org using http

2022-12-03 Thread Shawn Heisey
Since the release of 2.7.0, I changed the branch I build from to 2.7 
instead of master.  I was noticing that some of the links in the table 
at the top of haproxy.org are http instead of https.


All the links under Branch plus the git, web, and bugs links under Links 
are http.


Those webservers do https, so if I were creating the website, I would 
make all the links https.  My haproxy configs include this config to 
redirect anything that comes in unencrypted on port 80:


frontend web80
    description Redirect to https
    bind 0.0.0.0:80 name web80
    redirect scheme https
    default_backend be-deny

That frontend actually blocks a lot of bots ... when some of them get 
the 302 response, they don't go any further.  I see a bunch of requests 
hitting port 80 with no followup https request.


There may be some reason the links are http, so this is just a heads up, 
not a demand for change.  Looking through the html, there are also a 
number of other http links.


Thanks,
Shawn




Possible problem with custom error pages -- backend server returns 503, haproxy logs 503, but the browser gets 403

2022-08-22 Thread Shawn Heisey

Here is the full haproxy -vv:

HAProxy version 2.7-dev4-16972e-5 2022/08/22 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.15.0-1017-aws #21~20.04.1-Ubuntu SMP Fri Aug 5 
11:44:14 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD -PTHREAD_EMULATION +BACKTRACE -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H 
-ENGINE +GETADDRINFO +OPENSSL -LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ 
+CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD 
-OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT +QUIC -PROMEX 
-MEMORY_PROFILING


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, 
default=2).

Built with OpenSSL version : OpenSSL 3.0.5+quic 5 Jul 2022
Running on OpenSSL version : OpenSSL 3.0.5+quic 5 Jul 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
    [BWLIM] bwlim-in
    [BWLIM] bwlim-out
    [CACHE] cache
    [COMP] compression
    [FCGI] fcgi-app
    [SPOE] spoe
    [TRACE] trace


The same problem also happens with 2.6.4, built with the same options as 
the dev version.


HAProxy version 2.6.4 2022/08/22 - https://haproxy.org/

I have documentation for the problem details in another project's bug 
tracker:


https://issues.apache.org/jira/browse/SOLR-16327?focusedCommentId=17582990=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17582990

It appears so far as if haproxy is getting a 503 from the backend, 
logging a 503, but actually sending a 403.  Here is the config snippet 
when it works correctly:


A top-level config section:
http-errors myerrors
    errorfile 404 /etc/haproxy/errors/404.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/50x.http
    errorfile 503 /etc/haproxy/errors/50x.http
    errorfile 504 /etc/haproxy/errors/50x.http


In the frontend:
    errorfiles myerrors
    http-response return status 404 default-errorfiles if 
!real_errors { status 404 }
    http-response return status 403 default-errorfiles if 
!real_errors { status 403 }
    http-response return status 500 default-errorfiles if 
!real_errors { status 500 }
    http-response return status 502 default-errorfiles if 
!real_errors { status 502 }
    http-response return status 503 default-errorfiles if 
!real_errors { status 503 }
    http-response return status 504 default-errorfiles if 
!real_errors { status 504 }


Removing the "!real_errors" part and restarting haproxy is when the 
problem occurs.  I created and used the real_errors acl as a working 
bandaid for the issue -- turn off the custom error pages for the solr 
hostname.





haproxy listening on lots of UDP ports

2022-08-05 Thread Shawn Heisey
I am running haproxy in a couple of places.  It is listening on multiple 
seemingly random high UDP ports.


The one running "2.6.2-ce3023-30 2022/08/03" has the following ports.  
This server is in AWS.  The first three lines are expected:


elyograg@bilbo:/var/log$ sudo lsof -Pn -i | grep haproxy
haproxy   1928967    root    6u  IPv4 2585012  0t0 UDP *:443
haproxy   1928967    root    7u  IPv4 2585013  0t0 TCP *:80 
(LISTEN)
haproxy   1928967    root    8u  IPv4 2585014  0t0 TCP *:443 
(LISTEN)

haproxy   1928967    root   16u  IPv4 2587974  0t0 UDP *:57183
haproxy   1928967    root   17u  IPv4 2585855  0t0 UDP *:60746

The one running "2.7-dev2-f9d4a7-78 2022/08/05" is in my basement and 
has the following ports.  The first four lines are expected.  There are 
a lot more UDP ports active on this one.


elyograg@smeagol:~/git/lucene-solr$ sudo lsof -Pn -i | grep haproxy
haproxy   1469717  root    6u  IPv4 14230127 0t0  UDP 
192.168.217.170:443
haproxy   1469717  root    7u  IPv4 14230128 0t0  TCP *:8983 
(LISTEN)
haproxy   1469717  root    8u  IPv4 14230129 0t0  TCP *:80 
(LISTEN)
haproxy   1469717  root    9u  IPv4 14230130 0t0  TCP *:443 
(LISTEN)

haproxy   1469717  root   46u  IPv4 14242826 0t0  UDP *:45727
haproxy   1469717  root   47u  IPv4 14212730 0t0  UDP *:40101
haproxy   1469717  root   49u  IPv4 14209917 0t0  UDP *:34584
haproxy   1469717  root   50u  IPv4 14212920 0t0  UDP *:55409
haproxy   1469717  root   51u  IPv4 14209875 0t0  UDP *:46192
haproxy   1469717  root   52u  IPv4 14229139 0t0  UDP *:36370
haproxy   1469717  root   53u  IPv4 14209916 0t0  UDP *:50898
haproxy   1469717  root   55u  IPv4 14242839 0t0  UDP *:45456
haproxy   1469717  root   56u  IPv4 14242890 0t0  UDP *:37717
haproxy   1469717  root   57u  IPv4 14240387 0t0  UDP *:45547
haproxy   1469717  root   58u  IPv4 14240302 0t0  UDP *:33960
haproxy   1469717  root   60u  IPv4 14240885 0t0  UDP *:42145

These extra ports are not exposed to the world.  The external firewalls 
are locked down pretty well.  And the hosts also have firewalls (ufw) 
that are similarly restricted.


What are these ports for?  They are not in the haproxy config files.  I 
did try searching for an explanation, and didn't find anything.


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-07-09 Thread Shawn Heisey

On 7/9/22 18:08, William Lallemand wrote:

But is there any certificates in the /opt/quictls/ssl/certs/ directory ?


No, it is empty.  I didn't think to actually look inside it because it 
didn't occur to me that it would be empty.  I just checked an install of 
stock openssl 3 and it also has no certs.


I see that /usr/lib/ssl/certs is a symlink to /etc/ssl/certs ... so I 
have one more build step to put in my script -- make that symlink for 
quictls.  Once I did that manually, no more warning.
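
That build step boils down to one symlink; a sketch of the idea in a 
scratch directory (the real targets, per this thread, would be 
/opt/quictls/ssl/certs pointing at /etc/ssl/certs):

```shell
# Recreate the layout: an empty quictls certs dir, a populated system one.
prefix=$(mktemp -d)
mkdir -p "$prefix/etc/ssl/certs" "$prefix/opt/quictls/ssl/certs"
touch "$prefix/etc/ssl/certs/ca-certificates.crt"
# Replace the empty certs directory with a symlink to the system CA store.
rmdir "$prefix/opt/quictls/ssl/certs"
ln -s "$prefix/etc/ssl/certs" "$prefix/opt/quictls/ssl/certs"
# The quictls openssldir now resolves to the system certificates.
ls "$prefix/opt/quictls/ssl/certs/"    # prints: ca-certificates.crt
```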


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-07-08 Thread Shawn Heisey

On 7/8/22 03:30, William Lallemand wrote:

HAProxy uses the ca-certificates provided by OpenSSL.
The SSL_CERT_DIR by default is set to the "certs" directory inside your
openssldir. You can check your openssldir by using the "openssl" binary
you compiled with your library (not the one of your distribution).

   $ openssl version -d
   OPENSSLDIR: "/usr/lib/ssl"

So you might want to set the SSL_CERT_DIR environment variable before
starting HAProxy or doing a symlink from your openssldir to the real
path of your ca-certificates ( /etc/ssl/certs ? )

This warning is emitted when trying to load the ca-certificates into the
httpclient at startup with an empty directory. (Which is not supposed to
happen on the openssl build of your distribution)



The openssl that haproxy is compiled against is in /opt/quictls/ssl ... 
but there is a distribution-provided openssl package in /usr/lib/ssl as 
well.  Both locations contain "certs".


Setting either environment variable that you have mentioned does not 
eliminate the warning.


root@bilbo:~# SSL_CERT_DIR=/opt/quictls/ssl/certs haproxy -c -f 
/etc/haproxy/haproxy.cfg

[NOTICE]   (2379692) : haproxy version is 2.6.1
[NOTICE]   (2379692) : path to executable is /usr/local/sbin/haproxy
[WARNING]  (2379692) : config : ca-file: 0 CA were loaded from '@system-ca'
Warnings were found.
Configuration file is valid
root@bilbo:~# OPENSSLDIR=/opt/quictls/ssl haproxy -c -f 
/etc/haproxy/haproxy.cfg

[NOTICE]   (2379701) : haproxy version is 2.6.1
[NOTICE]   (2379701) : path to executable is /usr/local/sbin/haproxy
[WARNING]  (2379701) : config : ca-file: 0 CA were loaded from '@system-ca'
Warnings were found.
Configuration file is valid

My setup has no need to verify certificates, so the warning doesn't 
actually matter for me.  But it could be a problem for someone else.


I did figure out the correct way to run the "version -d" command you 
mentioned on the quictls install:


elyograg@smeagol:~$ LD_LIBRARY_PATH=/opt/quictls/lib64 
/opt/quictls/bin/openssl version -d

OPENSSLDIR: "/opt/quictls/ssl"

My install does quic/http3 correctly, so I know it is finding and using 
quictls.


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-07-07 Thread Shawn Heisey

On 7/6/22 09:50, Илья Шипицин wrote:

haproxy is built in CI against latest quictls, for example quictls-3.0.5

https://github.com/haproxy/haproxy/runs/721404?check_suite_focus=true

please open an issue on github with failure details, no known build 
failures so far


Shortly after I saw this message, I tried the build again.  My script 
does "git pull" on the repo.  There were a bunch of updates to the 
quictls repo, and now haproxy compiles and runs.


I am getting a new config warning, though:

elyograg@bilbo:/usr/local/src$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
[NOTICE]   (2080586) : haproxy version is 2.6.1
[NOTICE]   (2080586) : path to executable is /usr/local/sbin/haproxy
[WARNING]  (2080586) : config : ca-file: 0 CA were loaded from '@system-ca'
Warnings were found.
Configuration file is valid

This is happening on Ubuntu 20, Ubuntu 22, and two different OSes for 
arm64 hardware.  I have the latest from the haproxy 2.6 git repo and the 
quictls 3.0.5 branch.


elyograg@bilbo:/usr/local/src$ haproxy -vv
HAProxy version 2.6.1 2022/06/21 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 
2027.

Known bugs: http://www.haproxy.org/bugs/bugs-2.6.1.html
Running on: Linux 5.15.0-1014-aws #18~20.04.1-Ubuntu SMP Wed Jun 15 
21:28:54 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   = -DDEBUG_H3 -DDEBUG_QPACK

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY 
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO 
+OPENSSL -LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL 
+RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL 
-PROCCTL +THREAD_DUMP -EVPORTS -OT +QUIC -PROMEX -MEMORY_PROFILING


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 3.0.5+quic 5 Jul 2022
Running on OpenSSL version : OpenSSL 3.0.5+quic 5 Jul 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
    [CACHE] cache
    [COMP] compression
    [FCGI] fcgi-app
    [SPOE] spoe
    [TRACE] trace




Re: Thoughts on QUIC/HTTP3

2022-07-06 Thread Shawn Heisey

On 5/31/22 08:16, Amaury Denoyelle wrote:
Thanks for continuing your journey on HTTP/3 :)


Yesterday I pulled down the haproxy 2.6 and quictls git repos. The 
branch for quictls was openssl-3.0.4+quic.  I built and installed 
quictls and then haproxy.


This combination is working better than previous combinations.  It does 
look like some of my sites are still having issues on http3, but 
roundcube webmail is working very well, and that was one of the sites 
where I had a LOT of trouble before.


A note for you, which you may already know:  If the openssl-3.0.5+quic 
branch of quictls is installed instead of earlier 3.0.x releases, then 
haproxy will not compile.


Thanks,
Shawn




Re: haproxy 2.6.0 and quic

2022-06-03 Thread Shawn Heisey

On 6/3/22 06:47, Markus Rietzler wrote:

my build command was

make TARGET=linux-glibc USE_OPENSSL=1 SSL_INC=/opt/quictls/include 
SSL_LIB=/opt/quictls/lib64 LDFLAGS="-Wl,-rpath,/opt/quictls/lib64" 
ADDLIB="-lz -ldl" USE_ZLIB=1 USE_PCRE=1 USE_PCRE=yes USE_LUA=1 
LUA_LIB_NAME=lua5.3  LUA_INC=/usr/include/lua5.3 ;


You will need to add USE_QUIC=1 to the build flags.  A small note: you 
have USE_PCRE twice.  IMHO, you should install PCRE2 and configure 
USE_PCRE2_JIT=1 instead.  The original PCRE library isn't being 
maintained, only version 2 will see bugfixes.
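Putting those suggestions together, the build command might look like this (same paths and Lua options as the original, with the duplicate USE_PCRE replaced by PCRE2 and USE_QUIC added; a sketch, not a tested invocation):

```shell
make TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
     SSL_INC=/opt/quictls/include SSL_LIB=/opt/quictls/lib64 \
     LDFLAGS="-Wl,-rpath,/opt/quictls/lib64" ADDLIB="-lz -ldl" \
     USE_ZLIB=1 USE_PCRE2_JIT=1 \
     USE_LUA=1 LUA_LIB_NAME=lua5.3 LUA_INC=/usr/include/lua5.3
```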


A word of warning that you would probably also get from the devs here:  
HTTP3/QUIC support is still new and not entirely working. I have it 
configured and it only works correctly for VERY simple websites.  Any 
complex webapp I try it on will fail in some way, but if I disable HTTP3 
and use HTTP2, it works.


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-05-29 Thread Shawn Heisey

On 5/29/2022 12:49 PM, Илья Шипицин wrote:

roundcube runs automatic browser tests

https://github.com/roundcube/roundcubemail/runs/6642129873?check_suite_focus=true

I think we can try to run those tests with haproxy between browser and 
roundcube


That looks cool.  Are there instructions somewhere for setting up and 
configuring the test suite to run it against my install?  I didn't see 
any with a quick look, but I will keep on the search for a little while.


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-05-29 Thread Shawn Heisey

On 4/29/2022 10:10 AM, Shawn Heisey wrote:
I did a build and install this morning, a bunch of quic-related 
changes in that.  Now everything seems to be working on my paste 
site.  Large pastes work, and I can reload the page a ton of times 
without it hanging until browser restart.


I have found that the roundcube package doesn't work completely right 
with quic/http3. It has proven to be an excellent way of revealing 
problems.  The roundcube package is a php webmail app that utilizes a 
database and an imap server.  It's using many of the features that are 
possible with modern browsers.


When I first started this http3 journey, I couldn't even log in to 
roundcube because any POST request was failing ... but that problem got 
fixed.  Now I find that when quickly jumping between folders in my big 
mailbox, the site stops responding.  If I switch back to http/2, 
everything works.  If it would help, I can give one of the devs full 
access to use my roundcube install and experiment with the config, see 
if maybe I am doing something wrong.


Here's my haproxy -vv output.  I built it from the master branch right 
after I saw the dev12 release announcement:


HAProxy version 2.6-dev12 2022/05/27 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.13.0-1023-aws #25~20.04.1-Ubuntu SMP Mon Apr 25 
19:28:27 UTC 2022 x86_64

Build options :
  TARGET  = linux-glibc
  CPU = native
  CC  = cc
  CFLAGS  = -O2 -march=native -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -Wfatal-errors -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type 
-Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 
USE_QUIC=1

  DEBUG   = -DDEBUG_H3 -DDEBUG_QPACK

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
+PCRE2_JIT +POLL +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY 
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO 
+OPENSSL -LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL 
+RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL 
-PROCCTL +THREAD_DUMP -EVPORTS -OT +QUIC -PROMEX -MEMORY_PROFILING


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 3.0.3+quic 3 May 2022
Running on OpenSSL version : OpenSSL 3.0.3+quic 3 May 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Support for malloc_trim() is enabled.
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.34 2019-11-21
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 9.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
       quic : mode=HTTP  side=FE     mux=QUIC  flags=HTX|NO_UPG|FRAMED
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
    [CACHE] cache
    [COMP] compression
    [FCGI] fcgi-app
    [SPOE] spoe
    [TRACE] trace




Re: What does HAProxy do?

2022-05-24 Thread Shawn Heisey

On 5/24/22 07:01, Turritopsis Dohrnii Teo En Ming wrote:

Subject: What does HAProxy do?

Good day from Singapore,

I notice that my company/organization uses HAProxy. What does it do?

How do I setup and configure it? Are there excellent and well written
guides on doing so?


The first hit on a google search for haproxy is the haproxy website.  
That page says what it is right at the top left corner. There's a very 
visible "Documentation" link too, and at that link, there is a "Starter 
Guide" link for each version, as well as a full manual.


https://www.haproxy.org

If you talk to your IT department, they can probably explain EXACTLY 
what haproxy is doing in your organization.


Thanks,
Shawn

https://lmgtfy.app/?q=haproxy



Re: Latest http/3 info

2022-05-08 Thread Shawn Heisey

On 5/8/2022 3:16 AM, Willy Tarreau wrote:

There's no good solution to this, except by forcing the exact address
yourself. The BSD socket API doesn't permit to send UDP packets from a
specific source, so the commonly used approach for clients is to bind
while sending the first packet, but that doesn't work for a server that
faces many clients, as it would restrict the traffic to the first IP
used.


Thanks for that info.  I got it working.  I set the wildcard entry in my 
internal DNS to the VIP, configured a specific name to point to the 
machine's primary address, and then bound quic directly to the VIP 
address only.  TCP bindings are still 0.0.0.0.  Then I changed the port 
forwarding in my router to point ports 22/tcp, 80/tcp, 443/tcp, and 
443/udp to the VIP.


Adding documentation about this quirk of UDP sounds like an excellent 
idea.  The doc for QUIC should point the user to the doc for UDP for 
details.



Note that in order for the two haproxy nodes to bind to the virtual
address you'll likely have to enable ip_nonlocal_bind, but I guess you
already have it.


When I had two haproxy instances, I didn't need ip_nonlocal_bind. 
Probably because I used 0.0.0.0 for all bindings and the VIP didn't 
exist at the time.  The dev version has proven stable enough for my 
purposes that I eliminated the second instance.  If I have a problem 
with it in the near future, I can roll back to a prior commit and rebuild.


Thanks,
Shawn




Re: Latest http/3 info

2022-05-07 Thread Shawn Heisey

On 5/7/2022 9:11 AM, Shawn Heisey wrote:
A couple of days ago I noticed that quictls had made a 3.0.3 version 
available.  I upgraded and then tried to rebuild haproxy (master 
branch).  The compile failed.  Don't they know they shouldn't change 
API in a point release?  (It's not even a good idea in a minor release 
unless there is backward compatibility)


Doing a git pull today on both quictls and haproxy before rebuilding has 
fixed this problem.  It looks like the haproxy pull didn't update any 
quic code, so I'm betting the quictls team didn't mean to break things 
and have now fixed it.  They don't seem to have any actual releases yet, 
so maybe I shouldn't be expecting stability from them.


If you look closely at the tcpdump output, you'll notice that when 
haproxy replies, it replies from the actual IP address of the machine 
(.200) rather than the ucarp VIP (.170) where it received the 
request.  Is this something that can be fixed?


Is it haproxy or quictls that is handling the udp/443 packets?  I think 
that for tcp/443 it is haproxy handling the tcp connection, and it would 
be my guess that haproxy also does this for udp/443, but I do not know 
that for sure.  I know that much of TCP is actually handled by the OS, 
but apps CAN get very involved if they want to.  I have not dived into 
haproxy source to the level required to answer these questions.


Thanks,
Shawn




Latest http/3 info

2022-05-07 Thread Shawn Heisey
A couple of days ago I noticed that quictls had made a 3.0.3 version 
available.  I upgraded and then tried to rebuild haproxy (master 
branch).  The compile failed.  Don't they know they shouldn't change API 
in a point release?  (It's not even a good idea in a minor release 
unless there is backward compatibility)


Something interesting.  I set up ucarp so haproxy wouldn't go down even 
if my main server dies.  When I have my dd-wrt router forwarding udp 443 
to the ucarp VIP, http/3 doesn't work.  UDP 443 is getting through:


https://paste.elyograg.org/view/8731d4cb

If you look closely at the tcpdump output, you'll notice that when 
haproxy replies, it replies from the actual IP address of the machine 
(.200) rather than the ucarp VIP (.170) where it received the request.  
Is this something that can be fixed?
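This matches standard wildcard-bind UDP behaviour: a socket bound to 0.0.0.0 has no fixed source IP, so the kernel picks one per destination when a packet is first routed, usually the primary address of the outgoing interface rather than a VIP. A small standard-library sketch of the kernel choosing the source for a wildcard-bound socket:

```python
import socket

# A UDP socket bound to the wildcard address has no fixed source IP.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", 0))

# connect() on UDP sends no packet; it only fixes the peer and makes
# the kernel commit to a route, and thereby to a source address.
s.connect(("127.0.0.1", 443))

# Prints the kernel-chosen source (the loopback address for a loopback
# peer), not 0.0.0.0.
print(s.getsockname()[0])
s.close()
```

Binding the listener to the VIP itself instead of the wildcard pins the reply source, which is the usual fix for servers answering on a virtual IP.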


If I change the port forwarding in haproxy to send to the main .200 
address, the external http/3 check works.


Thanks,
Shawn




Re: Can HAProxy function as a firewall?

2022-05-04 Thread Shawn Heisey

On 5/4/22 05:30, Tom Browder wrote:
From what I've seen of HAProxy's configuration, it seems it may be 
able to be used as an easy-to-configure firewall immediately 
downstream from my ISP's router and inside a small Debian computer 
feeding another router.


Does that sound feasible? Or is there a physical router available that 
incorporates HAProxy?


While this could theoretically be a possible use case for haproxy, it is 
not something I would try.  Haproxy is designed to be a proxy server and 
load balancer.  In those capacities, it is good enough for 
mission-critical deployment.


But haproxy is not designed to fill the role of a firewall.  I mean no 
disrespect to Willy or the other people that spend their valuable time 
working on haproxy when I say this.  I love haproxy ... it is one of 
the best pieces of software in my problem-solving toolkit.  But for a 
software firewall, I will look elsewhere, for something that is designed 
for that role.


Some things that I can think of that I don't think haproxy can do that 
you'd expect from a firewall:


* Permit or deny any traffic other than TCP or UDP.
** Examples:  ICMP, IGMP, GRE, ESP.
* Examine certain application protocols to track and automatically allow 
related connections.

** FTP and RPC are the examples that come to mind.

TL;DR: While I am not part of the development team for this project, I 
am part of the development team for Apache Solr. Something that we are 
very often telling people on the Solr users list:  Solr is a search 
engine.  It is not a database.  It's a discussion similar to your 
question.  The response is:  When you have a software need, find 
software that is designed for the role.


Thanks,
Shawn




Re: PEM Certificates for HAproxy

2022-04-29 Thread Shawn Heisey

On 4/29/22 12:42, Branitsky, Norman wrote:


If you include the following in your HAProxy configuration global 
section you don't need to include DH Params in the certificate:


tune.ssl.default-dh-param 2048



It takes several minutes to generate params, so I doubt that with that 
option there would be different params for each certificate.  It is 
my understanding that when they are included in the cert file, each cert 
can have different params.  Part of my automated cert renewal process 
included generating brand new dh params.


I know that a fresh install can be instantly operational with TLS, 
suggesting that it is not generating them on the fly ... so I really 
wonder how secure the default params are.  I wonder what is being used 
when there are no params in the cert file. Does it get something 
hardcoded and use that until params generated in the background can be 
swapped in?


Thanks,
Shawn




Re: PEM Certificates for HAproxy

2022-04-29 Thread Shawn Heisey

On 4/29/22 11:16, Henning Svane wrote:

I have tried to build a PEM Certificate, but with no luck.

What should it include and in which order?



I use certs issued by LetsEncrypt.

My certificate file that I use for haproxy and most other software doing 
TLS has four PEM-encoded items in it:


Server cert
LetsEncrypt Issuing cert
Private Key
DH Params

The file is owned by root and has 600 permissions.

The only thing that might be important there as far as order would be to 
have the server cert before the issuing cert.


You do not normally need to include the CA's root certificate in the 
file -- the browser already has root certificates for any authority that 
it trusts ... that is how trust is established. Unless you created the 
cert yourself, what you want to have in your file is certs for the 
entire trust chain *EXCEPT* for the root cert.


Most software will ignore DH Params in the certificate file.  It is my 
understanding that haproxy actually uses it.  So each cert file that I 
employ gets its own 4096 bit DH Params.  My cert is also 4096 bit.
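Assembling such a file from typical LetsEncrypt output might look like this (filenames follow certbot's layout and are an assumption; fullchain.pem already holds the server cert followed by the issuing cert, giving the four items in the order listed above):

```shell
cat fullchain.pem privkey.pem dhparam.pem > /ssl/combined.pem
chmod 600 /ssl/combined.pem
```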


Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-04-29 Thread Shawn Heisey

On 4/25/22 10:55, Shawn Heisey wrote:
I was testing with the master branch from 
https://github.com/haproxy/haproxy.git. Just pulled down the latest 
changes, built it, and installed it.  Now I am sometimes seeing 
different behavior on the large POST.  It will load a page quickly 
sometimes, returning to the same page with blank fields, just as it 
would when first going there.  Another time, apache returned a 504 
error, which is very weird.  When haproxy got the 504, it sent its own 
error page. 


I did a build and install this morning, a bunch of quic-related changes 
in that.  Now everything seems to be working on my paste site.  Large 
pastes work, and I can reload the page a ton of times without it hanging 
until browser restart.


I changed the URL of my paste website, and now that everything seems to 
be working with http3, it's still using http3:


https://stikked.elyograg.org/

Thanks,
Shawn




Re: Thoughts on QUIC/HTTP3

2022-04-25 Thread Shawn Heisey

On 4/25/22 08:13, Amaury Denoyelle wrote:

I would not put too much faith in it for the near future. The OpenSSL
team seems to have put aside a simple QUIC API integration in favor of a
brand new full QUIC stack, which should take quite some time. So for
now, manually rebuilding your SSL library seems the only way to go


Ah.  Not Invented Here syndrome?


We already experienced this kind of problem but we thought we had fixed
it. It seems the connection closure is not always properly handled on
the haproxy side, which leaves the client with no notification that it
should open a new connection. It may help to increase the client timeout
to be greater than the default QUIC idle timeout, which is hardcoded to
10s on the haproxy side. For haproxy.org, we use a value of 1m and it
seems to be working. Please tell me if this makes your problem disappear
or not.


My client timeout is set to 15 seconds in the "defaults" section, which 
is larger than that hardcoded default.  Should I go with something higher?
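If a larger value is worth trying, it is a one-line change; 1m mirrors the haproxy.org setting mentioned above:

```
defaults
    timeout client 1m
```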



We did not have enough time to work on POST so there are still bugs in
this part. I just recently fixed two bugs which may be enough to fix
your situation with the latest 2.6-dev7. However, I still have issues
when I use large payloads.

Thanks for your kind compliment on haproxy's reliability. We hope one day
we will reach this level for QUIC, but for now this objective is still
far off.


I was testing with the master branch from 
https://github.com/haproxy/haproxy.git.  Just pulled down the latest 
changes, built it, and installed it.  Now I am sometimes seeing 
different behavior on the large POST.  It will load a page quickly 
sometimes, returning to the same page with blank fields, just as it 
would when first going there.  Another time, apache returned a 504 
error, which is very weird.  When haproxy got the 504, it sent its own 
error page.


If there is anything I can do to help troubleshoot, give me instructions 
on how to do it.


Thanks,
Shawn






Thoughts on QUIC/HTTP3

2022-04-23 Thread Shawn Heisey
After seeing http/3 (orange lightning bolt with the HTTP Version 
Indicator extension) talking to a lot of websites, I had thought the 
standard was further along than it is.  I see that the openssl team is 
discussing it, and plans to fully embrace it, but hasn't actually 
started putting QUIC code in openssl yet, and it may be quite some time 
before something usable shows up even in their master branch.


It's been fun fiddling with it using haproxy with quictls, and I hope I 
can provide useful information to stabilize the implementation.


I'd like to say thank you to Willy and all the other people who make 
haproxy one of the best things in my problem-solving arsenal.  It 
handles the internet side of all my web deployments.  I haven't yet put 
other services behind it.  At a previous $DAYJOB I had been testing FTP 
load-balancing, which I did get working, but didn't actually get to the 
deployment stage.


At the moment, I am experiencing two problems with http3.  The second 
problem might actually just be another instance of the first problem.


First problem:  If I do enough fiddling with an HTTP3 page, in either 
Firefox or Chrome, eventually that page will stop loading and the only 
way I've found to fix it is to completely close the browser and reopen 
it.  Restarting haproxy or Apache doesn't seem to help.


Second problem:  If I try pasting a REALLY large block of text into my 
paste website at the following URL while I have it configured to use 
HTTP/3, it won't work.  The page never loads. I can't tell if this is a 
separate problem from the one above, or just another occurrence of it 
that triggers more readily because there is more data transferred.  The 
reason I think it might be actually the first problem is that if I open 
another tab, I can't get to the website ... but if I close the browser 
and reopen it, then I can get to the website again.


https://paste.elyograg.org/

If I remove the paste website from the http3 ACL so it doesn't send the 
alt-svc header, then everything works once I can convince the browser to 
stop using HTTP/3.


I don't have these issues talking to other sites using HTTP/3 
extensively, like facebook and google.


Thanks,
Shawn




Re: HTTP/3 -- POST requests not working

2022-04-15 Thread Shawn Heisey

On 4/15/22 06:40, Shawn Heisey wrote:
The 403 is random.  While clicking around in my webmail, going to 
different folders, I occasionally see a red box that has an error 
message pop up, an error message I can't recall at the moment. That's 
when the 403 is logged.



I noticed there was another change to a quic src file in the git repo 
when I did a "git pull" ... so I recompiled and reinstalled.  Now the 
webmail seems to work without problems.


One other thing to check ... switching my paste website back to http3 
and trying to create a paste with tens of thousands of lines.  That 
wasn't working before, had to switch it back to http2.  (insert final 
jeopardy music) ... And that still isn't working.  It just hangs, 
continually indicating the page is still loading.  Same as happened 
before.  I will need to get a packet capture for that problem, and 
compare the upload with http2 to the upload with http3.


Thanks,
Shawn




Re: HTTP/3 -- POST requests not working

2022-04-15 Thread Shawn Heisey

On 4/15/2022 1:20 AM, Amaury Denoyelle wrote:

Hum this is strange. Do you have a way to reproduce it easily ?


The 403 is random.  While clicking around in my webmail, going to 
different folders, I occasionally see a red box that has an error 
message pop up, an error message I can't recall at the moment. That's 
when the 403 is logged.  This is the matching Apache log entr for the 
haproxy log entry I sent earlier:


127.0.0.1 - - [14/Apr/2022:07:11:15 -0600] "POST 
/mail/?_task=mail&_action=refresh HTTP/2.0" 403 363 
"https://webmail.elyograg.org/mail/?_task=mail&_mbox=Sent" "Mozilla/5.0 
(X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/100.0.4896.75 Safari/537.36"


The 403 is not being generated by haproxy.  It is coming from Apache.  
But this does not happen when the connection from the user to haproxy 
uses http/2, only when it is http/3.  When I am more rested, I can grab 
a packet capture of the traffic between haproxy and apache.  This is 
simple because that connection is h2c.
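A capture of that loopback leg might be taken with something like the following (the interface and backend port are assumptions about the local setup):

```shell
tcpdump -i lo -s 0 -w h2c.pcap 'tcp port 80'
```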



Otherwise, please know that as QUIC is still in an experimental status,
we did not have the time to test various config options with it.


Yes, I saw that designation.  I am hoping that my experiences and 
assistance can help make it more stable.



If you encounter a recurring bug, I advise you
to switch to a simple config file and inspect if the issue is still
there.


What parts of my config would you suggest taking out?  Can you give me 
an example of such a simple config?  Although my config is a little 
long, most of that is due to long ACLs.  I have never really thought of 
that config as complex.  :)



Hum, we have already encountered this issue because we did not send a
CONNECTION_CLOSE on connection closing. Now most cases seem to be fixed,
but maybe there are still cases where the connection dies without a
notification to the client. Do you observe this frequently ?


Quite frequently.  The browser will stop loading a page that is using 
http3 and I have to completely close the browser to get it working 
again, which makes testing more difficult.  I have not been able to 
determine what circumstances trigger the problem.


The list of domains that I am serving over http3 has shrunk because POST 
requests don't always work.


On the system where haproxy 2.4.15 is listening on TCP ports and 
2.6-dev5 is listening on UDP/443, this is the list of domains:


    acl http3 var(txn.host) -m end unknown.elyograg.org
    acl http3 var(txn.host) -m end raspi.elyograg.org
    acl http3 var(txn.host) -m end raspi1.elyograg.org
    acl http3 var(txn.host) -m end raspi2.elyograg.org
    acl http3 var(txn.host) -m end shawnheisey.com
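An ACL like that is typically consumed by the rule that advertises h3; a hedged sketch consistent with the snippets earlier in the thread:

```
http-after-response add-header alt-svc 'h3=":443"; ma=600' if http3
```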

On the system where the from-git build is the only instance running, it 
only advertises http3 for http3test.elyograg.org, a simple PHP script.  
None of the sites where I have currently enabled http3 use POST 
requests.  The webmail is on the same system as the http3test site.  It 
is not widely used by anyone but me, so it is a perfect testing ground 
of a complex webapp over http3.  I can create a mailbox on that system 
so you can try things yourself.


Please let me know how I can be helpful with further testing.

Thanks,
Shawn



