Re: Seeking Clarity on Timeout and Option Redispatch Settings

2024-08-29 Thread Lukas Tribus
Hello,

a connect timeout is used only *when it is actually needed*, that is
when we send a SYN but get no response whatsoever from a backend.

However, when the backend server clearly responds with a RST or an ICMP
unreachable - which is the case when there is no application running on
that particular port - the error is passed up the stack immediately and
haproxy can handle it without waiting for a timeout.

If you want to test timeout settings, point haproxy somewhere you
don't get a response at all.
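The distinction is easy to observe with a quick socket test, a sketch that is not haproxy-specific: a closed port on a live host fails immediately (the RST surfaces as "connection refused"), while only a silently dropped SYN would actually consume the connect timeout.

```python
import socket
import time

def connect_result(host, port, timeout=5.0):
    """Try a TCP connect and report how it ended and how long it took."""
    start = time.monotonic()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected", time.monotonic() - start
    except ConnectionRefusedError:
        # RST answer: the error is immediate, no timeout involved
        return "refused", time.monotonic() - start
    except socket.timeout:
        # no answer at all: only this path waits for the full timeout
        return "timeout", time.monotonic() - start
    finally:
        s.close()

# Find a local port that is almost certainly closed by binding and
# releasing one, then connect to it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

result, elapsed = connect_result("127.0.0.1", closed_port)
print(result, round(elapsed, 3))
```

The "refused" result comes back in well under a second, regardless of how large the configured timeout is.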


Lukas




Re: HAProxy returning 502 with SH--

2024-08-27 Thread Lukas Tribus
Also, before doing anything else, try using:

tune.disable-zero-copy-forwarding or tune.h1.zero-copy-fwd-recv off

as there is currently an open bug that doesn't fully match your case
but is still close enough that it may be worth a try:

https://github.com/haproxy/haproxy/issues/2665
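If it helps, both directives go in the global section; a minimal sketch (pick one of the two, details of the rest of the configuration omitted):

```
global
    # either disable zero-copy forwarding entirely ...
    tune.disable-zero-copy-forwarding
    # ... or only the H1 receive side implicated in issue #2665:
    # tune.h1.zero-copy-fwd-recv off
```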



Lukas




Re: HAProxy returning 502 with SH--

2024-08-27 Thread Lukas Tribus
Hello,

On Tue, 27 Aug 2024 at 18:09, BJ Taylor  wrote:
>
> Here are the 502 logs from the last run after the config changes.
>
> 2024-08-26T09:29:02.547581-06:00 testserver haproxy[284569]: <134>Aug 26 
> 09:29:02 haproxy[284569]: 192.168.69.101:45382 [26/Aug/2024:09:29:02.545] 
> www~ front3/pdafront32 0/0/0/-1/1 502 208 - - SH-- 5/5/3/3/0 0/0 
> {front3.domain.com|} "POST https://front3.domain.com/front1 HTTP/2.0"
> 2024-08-26T11:27:20.748921-06:00 testserver haproxy[284569]: <134>Aug 26 
> 11:27:20 haproxy[284569]: 192.168.69.101:50606 [26/Aug/2024:11:27:20.746] 
> www~ front3/pdafront32 0/0/0/-1/1 502 208 - - SH-- 5/5/3/3/0 0/0 
> {front3.domain.com|} "POST https://front3.domain.com/front1 HTTP/2.0"
> 2024-08-26T14:11:11.289987-06:00 testserver haproxy[284569]: <134>Aug 26 
> 14:11:11 haproxy[284569]: 192.168.69.101:40516 [26/Aug/2024:14:11:11.285] 
> www~ front3/pdafront32 0/0/0/-1/2 502 208 - - SH-- 15/15/7/7/0 0/0 
> {front3.domain.com|} "POST https://front3.domain.com/front1 HTTP/2.0"
> 2024-08-26T17:40:55.801154-06:00 testserver haproxy[284569]: <134>Aug 26 
> 17:40:55 haproxy[284569]: 192.168.69.101:53952 [26/Aug/2024:17:40:55.798] 
> www~ front3/pdafront32 0/0/0/-1/1 502 208 - - SH-- 10/10/1/1/0 0/0 
> {front3.domain.com|} "POST https://front3.domain.com/front1 HTTP/2.0"

This indicates that your backend application crashes, or at least does
not complete the HTTP response headers, after 208 bytes.

It is possible that "show errors" on the haproxy admin socket gives you
more insight into what the HTTP response of your server looks like
(and where exactly it aborts after those 208 bytes).

Try switching off H2 in the backend to see if this is H2-related.

If you can switch off SSL on the backend and you can still reproduce
the issue, you may have an easier time debugging this with network
traces.

Otherwise - if you have no way to troubleshoot at the backend
application, "show errors" is not useful, and you cannot disable SSL on
the backend - you need to be able to decrypt the backend traffic from a
network trace. Reproducing with a non-FS cipher will allow you to
decrypt the SSL traffic with the certificate's private key; otherwise
you have to use client random logging [1] and then decrypt the traffic
in wireshark before analyzing what happens in those last bytes of the
208-byte incomplete HTTP response.
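For reference, a sketch of what client random logging can look like. The directive name is from the linked documentation; the log-format shown covers TLS 1.2, TLS 1.3 needs the additional per-secret sample fetches, and the exact fetch names should be checked against your haproxy version:

```
global
    tune.ssl.keylog on   # expose TLS key material via sample fetches

frontend www
    bind :443 ssl crt /etc/haproxy/site.pem
    # emit an NSS-keylog-style line that wireshark can import
    log-format "CLIENT_RANDOM %[ssl_fc_client_random,hex] %[ssl_fc_session_key,hex]"
```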



Regards,
Lukas


[1] http://docs.haproxy.org/3.0/configuration.html#3.2-tune.ssl.keylog




Re: HAProxy returning 502 with SH--

2024-08-23 Thread Lukas Tribus
On Fri, 23 Aug 2024 at 18:55, BJ Taylor  wrote:
>
> We are trying to deploy HAProxy into our environment. We have a script that
> does some 600k api calls during approximately 24 hours.

How many concurrent connections / transactions though?


>  During that time, when haproxy is in place, there are a handful (8-12) of
> responses that come back as 502 with SH--.

As per the documentation:

 SH   The server aborted before sending its full HTTP response headers, or
  it crashed while processing the request. Since a server aborting at
  this moment is very rare, it would be wise to inspect its logs to
  control whether it crashed and why. The logged request may indicate a
  small set of faulty requests, demonstrating bugs in the application.
  Sometimes this might also be caused by an IDS killing the connection
  between HAProxy and the server.


You should disable custom format logging, enable httplog format and
share the affected SH log line.



>tune.bufsize 8388608
>tune.maxrewrite 1024

bufsize should usually be 16K, maybe 32K for some specific applications
requiring huge headers, but not 8M. I think 8M is unreasonable and I
think it will lead to issues one way or another.


>timeout connect 86400s
>timeout client  86400s
>timeout server  86400s

1-day-long timeouts will probably also lead to issues at some point,
due to sessions not expiring.
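For contrast, a more conventional starting point might look like this (the values are illustrative, not tuned for any particular workload):

```
global
    maxconn 20000          # think about this value explicitly

defaults
    timeout connect 5s     # plenty for a SYN/SYN-ACK round trip
    timeout client  30s
    timeout server  30s
    # leave tune.bufsize at its 16K default unless huge headers require more
```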




Lukas




Re: [ANNOUNCE] haproxy-3.1-dev4

2024-07-25 Thread Lukas Tribus
On Wed, 24 Jul 2024 at 23:19, William Lallemand  wrote:
>
> On Wed, Jul 24, 2024 at 10:32:16PM +0200, Aleksandar Lazic wrote:
> > Does this announcement have any impact to HAProxy?
> >
> > "Intent to End OCSP Service"
> > https://letsencrypt.org/2024/07/23/replacing-ocsp-with-crls.html
> > https://news.ycombinator.com/item?id=41046956
> >
>
> I read about this yesterday and my impression is that they are trying to use 
> the excuse of the privacy problems to end a
> service that they have difficulties to scale.

I agree.

Google disabled online/active OCSP requests a long time ago - more
than a decade.

Here's more argumentation:
https://docs.google.com/document/d/180T6cDSWPy54Rb5d6R4zN7MuLEMShaZ4IRLQgdPqE98/edit?pli=1

They claim OCSP must-staple is basically unused, OCSP stapling itself
is not used much (8%), it's complicated, and they do not want to rely
on it.


I guess it's one less feature we have to care about, but I wish they
would have made up their mind 10 years ago and spared us all the pain.


Lukas




Re: [ANNOUNCE] haproxy-3.1-dev3 (more infos on the story with fd-hard-limit and systemd)

2024-07-17 Thread Lukas Tribus
On Wed, 17 Jul 2024 at 11:25, Willy Tarreau  wrote:
>
> At this point, do you (or anyone else) still have any objection against
> backporting the DEFAULT_MAXFD patch so as to preserve the current
> defaults for users, and/or do you have any alternate proposal, or just
> want to discuss other possibilities ?

No, I don't have objections.

Regards,
Lukas


Lukas



Re: [ANNOUNCE] haproxy-3.1-dev3 (more infos on the story with fd-hard-limit and systemd)

2024-07-16 Thread Lukas Tribus
Hi Valentine, hi Willy,


after spending some time testing I agree tuning maxconn/fd-limits is hard ...


With 8GB RAM we can still OOM with 1M FDs / 500k maxconn (no TLS), but
that appears to be around the sweet spot.

I thought it would require more memory, considering that we suggest
1GB of memory for 20k non-TLS connections or 8k TLS connections, but
my test was indeed synthetic with zero features used, and it's not
only about haproxy userspace but about the system as well.

lukas@dev:~/haproxy$ git grep -B3 -A1 "GB of RAM"
doc/configuration.txt-  global maxconn. Also, keep in mind that a connection contains two buffers
doc/configuration.txt-  of tune.bufsize (16kB by default) each, as well as some other data resulting
doc/configuration.txt-  in about 33 kB of RAM being consumed per established connection. That means
doc/configuration.txt:  that a medium system equipped with 1GB of RAM can withstand around
doc/configuration.txt-  20000-25000 concurrent connections if properly tuned.
--
doc/intro.txt-  - 1300 HTTPS connections per second using TLS connections renegotiated with
doc/intro.txt-    RSA2048;
doc/intro.txt-
doc/intro.txt:  - 20000 concurrent saturated connections per GB of RAM, including the memory
doc/intro.txt-    required for system buffers; it is possible to do better with careful tuning
doc/intro.txt-    but this result it easy to achieve.
doc/intro.txt-
doc/intro.txt:  - about 8000 concurrent TLS connections (client-side only) per GB of RAM,
doc/intro.txt-    including the memory required for system buffers;
lukas@dev:~/haproxy$
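The rough arithmetic behind those figures, as I understand them: the 33 kB per-connection figure is quoted from the doc above, and the doc's 20000-25000 estimate is lower than the raw division because kernel socket buffers also count.

```python
TUNE_BUFSIZE = 16 * 1024   # default buffer size
PER_CONN = 33 * 1024       # ~two buffers plus session overhead, per the doc
RAM = 1 * 1024**3          # 1 GB

# haproxy-only upper bound, before system/socket buffers eat into it
raw_estimate = RAM // PER_CONN
print(raw_estimate)
```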



On Thu, 11 Jul 2024 at 08:05, Willy Tarreau  wrote:
>
> What I would really like is to no longer see any maxconn in a regular
> configuration because there's no good value and we've seen them copied
> over and over.

By setting a global maxconn you force yourself to at least think about it.

I'm assuming by "regular configuration" you mean small scale/size? In
this case I agree.



> I'm confused now, I don't see how, given that the change only *lowers*
> an existing limit, it never raises it. It's precisely because of the
> risk of OOM with OSes switching the default from one million FDs to one
> billion that we're proposing to keep the previous limit of 1 million as
> a sane upper bound. The only risk I'm seeing would be users discovering
> that they cannot accept more than ~500k concurrent connections on a large
> system. But I claim that those dealing with such loads *do* careful size
> and configure their systems and services (RAM, fd, conntrack, monitoring
> tools etc). Thus I'm not sure which scenario you have in mind that this
> change could result in such a report as above.

True, I confused memory required for initialization with memory
allocated when actually used.



On Thu, 11 Jul 2024 at 14:44, Willy Tarreau  wrote:
>
> My take on this limit is that most users should not care. Those dealing
> with high loads have to do their homework and are used to doing this,
> and those deploying in extremely small environments are also used to
> adjusting limits (even sometimes rebuilding with specific options), and
> I'm fine with leaving a bit of work for both extremities.

Considering how non-trivial tuning maxconn/fd-hard-limit/ulimit for a
specific memory size and configuration is, I (now) have to agree.



On Tue, 16 Jul 2024 at 16:22, Valentine Krasnobaeva
 wrote:
>
> Our issue in GITHUB: https://github.com/haproxy/haproxy/issues/2621
>
>>You agree that this is the environment systemd sets us up with, right?
>
> Yes, as it was investigated by Apollon systemd/256~rc3-3 now sets the
> file descriptor hard limit to kernel max on boot.

Yes, I will comment with some additional systemd context on the
github issue (but nothing changes for haproxy).



On Tue, 16 Jul 2024 at 16:22, Valentine Krasnobaeva
 wrote:
>
> It is obscure for some users 'fd-hard-limit'. And a lot of them
> may ask: "What is the best value, according to my environment,
> which I should put here ?", "What will be the impact?"

This is completely true, and it is why I prefer that users think about
maxconn instead; but like I said, even that is hard.



Regards,
Lukas



[PATCH] DOC: install: don't reference removed CPU arg

2024-07-16 Thread Lukas Tribus
Remove reference to the removed CPU= build argument in commit 018443b8a1
("BUILD: makefile: get rid of the CPU variable").
---
 INSTALL | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/INSTALL b/INSTALL
index ee91757e0..46ff3cd52 100644
--- a/INSTALL
+++ b/INSTALL
@@ -383,10 +383,15 @@ systems, by passing "USE_SLZ=" to the "make" command.
 
 Please note that SLZ will benefit from some CPU-specific instructions like the
 availability of the CRC32 extension on some ARM processors. Thus it can further
-improve its performance to build with "CPU=native" on the target system, or
-"CPU=armv81" (modern systems such as Graviton2 or A55/A75 and beyond),
-"CPU=a72" (e.g. for RPi4, or AWS Graviton), "CPU=a53" (e.g. for RPi3), or
-"CPU=armv8-auto" (automatic detection with minor runtime penalty).
+improve its performance to build with:
+
+  - "CPU_CFLAGS=-march=native" on the target system or
+  - "CPU_CFLAGS=-march=armv81" (on modern systems such as Graviton2 or A55/A75
+ and beyond)
+  - "CPU_CFLAGS=-march=a72" (e.g. for RPi4, or AWS Graviton)
+  - "CPU_CFLAGS=-march=a53" (e.g. for RPi3)
+  - "CPU_CFLAGS=-march=armv8-auto" (automatic detection with minor runtime
+ penalty)
 
 A second option involves the widely known zlib library, which is very likely
 installed on your system. In order to use zlib, simply pass "USE_ZLIB=1" to the
-- 
2.17.1




Re: [ANNOUNCE] haproxy-3.1-dev3

2024-07-11 Thread Lukas Tribus
Hi,

I will get back to this for further research and discussion in about a week.

In the meantime, do we agree that the environment we are developing the fix
for is the following:

the hard limit is always set to the maximum available in the kernel,
which on amd64 is one billion - with a B; whether the system has 128M
or 2T of memory is irrelevant.

You agree that this is the environment systemd sets us up with, right?


Thanks,
Lukas


Re: [ANNOUNCE] haproxy-3.1-dev3

2024-07-10 Thread Lukas Tribus
On Wed, 10 Jul 2024 at 21:30, Lukas Tribus  wrote:
> There are ways to push defaults like this out if really needed: with
> default configuration files, like we have in examples/ and like
> distributions provide in their repositories. This default the users
> will then find in the configuration file and can look it up in the
> documentation if they want.


So here a few proposals:

Proposal 1:

- remove fd-hard-limit as it was a confusing mistake in the first place
- exit with a configuration error when global maxconn is not set
- put global maxconn in all example configurations, encourage
Debian/RH to do the same
- document accordingly


Proposal 2:

- keep fd-hard-limit
- exit with a configuration error when fd-hard-limit needs to guess 1M
- put fd-hard-limit in all example configurations, encourage Debian/RH
to do the same
- document accordingly



Otherwise the next bug report will be that haproxy OOMs (in
production, and only under load) by default on systems with less than
16 GB of RAM. The same bug reporter just needs a VM with 8 GB of RAM
or less.



Sometimes the hard choices need to be up to the user. I believe this
is one of those times.


cheers
lukas



Re: [ANNOUNCE] haproxy-3.1-dev3

2024-07-10 Thread Lukas Tribus
Hello,


On Wed, 10 Jul 2024 at 16:39, Willy Tarreau  wrote:
>
> Another change that will need to be backported after some time concerns
> the handling of default FD limits. For a few decades, operating systems
> would advertise safe limits (i.e. those they were able to deal with based
> on their amount of RAM). We've seen a few attempts at bumping the hard
> limit beyond 1 billion FDs many years ago that were reverted due to
> breaking many apps. Now it seems it's coming back, via systemd-256 setting
> the hard-limit from the kernel's nr_open variable (which in itself is not
> necessarily a bad thing -- proof that I'm not always bashing systemd, only
> when needed :-)). But with some machines showing extreme nr_open (I still
> don't know why) we're back to square one where it's possible for haproxy
> to try to start with a limit set to one billion FDs. Not only this would
> eat at least 64GB of RAM just for the fdtab itself, it also takes ages to
> start, and fortunately the watchdog quickly puts an end to this mess...
> We already have an fd-hard-limit global setting that allows to fix a hard
> limit to the number of FDs, but not everyone knows about it nor uses it.
> What we've found to be the most reasonable is to consider that
> fd-hard-limit now has a default value of 1048576, which matches what was
> almost always the default hard limit, so that when not set, it's like it
> used to be till now. That's sufficient for the vast majority of use cases,
> and trust me, the rare users who need to support more than 500k concurrent
> connections are pretty much aware of all related tunables and already make
> use of them, so it's expected that nobody should observe any change.

I wholeheartedly hate default implicit limits and I also pretty much
disagree with fd-hard-limit in general, but allow me to quote your own
post here from github issue #2043 comment
https://github.com/haproxy/haproxy/issues/2043#issuecomment-1433593837

> we used to have a 2k maxconn limit for a very long time and it was causing
> much more harm than such an error: the process used to start well and was
> working perfectly fine until the day there was a big rush on the site and it
> wouldn't accept more connections than the default limit. I'm not that much
> tempted by setting new high default limits. We do have some users running
> with 2+ million concurrent connections, or roughly 5M FDs. That's already way
> above what most users would consider an acceptable default limit, and anything
> below this could mean that such users wouldn't know about the setting and 
> could
> get trapped.

I disagree that we need to heuristically guess those values, as I
believe I have said in the past.

"But containers ..." should not be an argument to forgo the principle
of least surprise.

There are ways to push defaults like this out if really needed: with
default configuration files, like we have in examples/ and like
distributions provide in their repositories. Users will then find this
default in the configuration file and can look it up in the
documentation if they want.

At the very least we need a *stern* configuration warning that we now
default to 1M FDs, although I would personally consider this situation
(lack of fd-hard-limit, ulimit and global maxconn, leading to a
heuristic fd-hard-limit) a critical error.

I also consider backporting this change - even with a configuration
warning - dangerous.
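Willy's 64GB figure from the announcement is easy to sanity-check; the 64 bytes per fdtab entry below is my assumption, derived from that figure rather than measured.

```python
NR_OPEN = 1_073_741_816   # typical kernel maximum for fs.nr_open on amd64
BYTES_PER_FD = 64         # assumed fdtab entry size implied by the 64GB figure

# memory needed just to allocate the fd table at that limit
gib = NR_OPEN * BYTES_PER_FD / 2**30
print(round(gib))
```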


cheers,
lukas



Re: How to configure DH groups for TLS 1.3

2024-05-03 Thread Lukas Tribus
On Thu, 2 May 2024 at 19:50, Lukas Tribus  wrote:
>
> On Thu, 2 May 2024 at 17:14, Froehlich, Dominik
>  wrote:
> > The closest I’ve gotten is the “curves” property: 
> > https://docs.haproxy.org/2.8/configuration.html#5.1-curves
> >
> > However, I think it only restricts the available elliptic curves in a ECDHE 
> > handshake, but it does not prevent a TLS 1.3 client from selecting a 
> > non-ECDHE prime group, for example “ffdhe8192”.
>
> If I understand the code correctly, both nginx and haproxy call
> SSL_CTX_set1_curves_list(), what exactly makes you think that haproxy
> does something different?

More to the point:

curve and group is the same exact thing in openssl:


https://www.openssl.org/docs/man3.0/man3/SSL_CONF_cmd.html

> -curves groups
> This is a synonym for the -groups command.


https://www.openssl.org/docs/man3.0/man3/SSL_CTX_set1_curves.html

> The curve functions are synonyms for the equivalently named group functions 
> and are identical in every respect. They exist because, prior to TLS1.3, 
> there was only the concept of supported curves. In TLS1.3 this was renamed to 
> supported groups, and extended to include Diffie Hellman groups. The group 
> functions should be used in preference.


https://github.com/openssl/openssl/issues/18089#issuecomment-1096748557

> In TLSv1.3 the old "supported_curves" extension was renamed to 
> "supported_groups". This renaming has been followed through to the OpenSSL 
> API so that SSL_CTX_set1_curves_list is synonymous with 
> SSL_CTX_set1_groups_list, and the the -curves command line argument is 
> synonymous with -groups. So in the above issue you are not just constraining 
> the EC curves - you are constraining all the groups available for use in 
> TLSv1.3. This includes FFDH groups - so the above configuration prevents 
> either ECDH or FFDH being used in TLSv1.3.


Setting openssl curves (groups) via SSL_CTX_set1_curves_list just like
nginx does is supported since Haproxy 1.8:

https://github.com/haproxy/haproxy/commit/e7f2b7301c0a6625654056356cca56853a14cd68
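So restricting TLS 1.3 groups (FFDH included) is done with the very keyword you already found; a sketch, with an illustrative certificate path and group list:

```
frontend www
    # the list passed to SSL_CTX_set1_curves_list() constrains *all*
    # TLS 1.3 groups, so ffdhe* groups not listed here cannot be negotiated
    bind :443 ssl crt /etc/haproxy/site.pem curves X25519:P-256
```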


Lukas



Re: How to configure DH groups for TLS 1.3

2024-05-02 Thread Lukas Tribus
On Thu, 2 May 2024 at 17:14, Froehlich, Dominik
 wrote:
> The closest I’ve gotten is the “curves” property: 
> https://docs.haproxy.org/2.8/configuration.html#5.1-curves
>
> However, I think it only restricts the available elliptic curves in a ECDHE 
> handshake, but it does not prevent a TLS 1.3 client from selecting a 
> non-ECDHE prime group, for example “ffdhe8192”.

If I understand the code correctly, both nginx and haproxy call
SSL_CTX_set1_curves_list(), what exactly makes you think that haproxy
does something different?


Lukas



Re: maxconn definition in frontend or backend section ?

2024-05-02 Thread Lukas Tribus
On Thu, 2 May 2024 at 15:22, Roberto Carna  wrote:
>
> Dear all, I have HAproxy in front of a web server node.
>
> I want the web server node to accept just 1000 concurrent connections.
>
> So I want to use the maxconn parameter in order to let new connections
> above 1000 to wait until the web service has free workers.
>
> According to what I read, if I define maxconn in frontend section of
> haproxy.cfg, the incoming connections above 1000 will wait in the
> kernel socket queue, and if I define the parameter in the backend
> section the connections above 1000 will wait in the web server node
> until there are workers free.
>
> So where is the best section to define the maxconn parameter???

If you want to limit the connections to a server, that's exactly
where you put the limit: *server* maxconn.
There is no "backend" maxconn; it's always defined per server.

Leave more room in the frontend, e.g. 2000 or 3000, and set the global
maxconn way above the total amount of connections, considering both
frontend and backend/server connections (like 10000 in this example).

This way, haproxy handles the queuing. If you leave that up to the
kernel, haproxy will not see the connections above this threshold at
all: no logging, no stats, and you may even lose access to the
haproxy stats interface if it is on the same frontend.
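In config terms, the layering looks like this (names and numbers are illustrative):

```
global
    maxconn 10000        # way above the sum of all front + server connections

frontend www
    bind :80
    maxconn 3000         # leave headroom above the server limit
    default_backend app

backend app
    # beyond 1000 concurrent connections, haproxy queues - with logs & stats
    server web1 192.168.0.10:80 maxconn 1000
```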


Lukas



Re: [PATCH] MINOR: systemd: Include MONOTONIC_USEC field in RELOADING=1 message

2024-04-04 Thread Lukas Tribus
On Thu, 4 Apr 2024 at 16:00, Tim Düsterhus  wrote:
>
> Hi
>
> On 4/4/24 14:35, William Lallemand wrote:
> > I'm not against merging this, but I don't see any change comparing to the
> > current model?
> >
>
> I mainly stumbled upon this new mode in the documentation while looking
> into replacing libsystemd, where you beat me to it :-)

Ah, that's nice, it's already gone in commit aa3632962 ("MEDIUM:
mworker: get rid of libsystemd") - I was thinking about the very same
thing ;)


Lukas



Re: [PATCH] DOC/MINOR: userlists: musl performance

2024-02-12 Thread Lukas Tribus
On Mon, 12 Feb 2024 at 18:10, Nicolas CARPi  wrote:
>
> Dear Lukas, Willy,
>
> Please find another patch attached, addressing your comments.
>
> Willy: s/gcc/glibc/
>
> Lukas: I shifted the focus on the rounds/cost solution, while still
> mentioning the musl issue, as this problem is clearly more visible on
> Alpine Linux, as the github issues show.

Thank you, I agree.

Acked-by: Lukas Tribus 


Lukas



Re: [PATCH] DOC/MINOR: userlists: musl performance

2024-02-12 Thread Lukas Tribus
On Mon, 12 Feb 2024 at 14:13, Nicolas CARPi  wrote:
>
> Hello everyone,
>
> Please find attached my very first patch to the documentation. Hope I
> did everything good! :)
>
> Based on a comment from @bugre:
> https://github.com/haproxy/haproxy/issues/2251#issuecomment-1716594046
>
> (and also because I've been bitten by this!)

This is getting confusing and I'm not sure I agree with this patch.

The problem is neither the libc nor the hash itself, but the iterations.
Documenting that one libc performs worse, or even much worse, is beside
the point. The point is that strong hashes with high iteration counts
are designed to be a self-DoS, and that is exactly how they behave in
haproxy, on all libcs.

Worse, this suggests, at least in some way, that a configuration like
this is acceptable on glibc.
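The effect is easy to demonstrate with any iterated KDF. This uses hashlib's PBKDF2 purely as an illustration - haproxy userlists go through the libc's crypt(), not this function - but the principle is the same: per-request cost scales roughly linearly with the iteration count, on every libc.

```python
import hashlib
import time

def kdf_seconds(iterations: int) -> float:
    """Time one password hash at the given iteration count."""
    start = time.monotonic()
    hashlib.pbkdf2_hmac("sha256", b"password", b"salt", iterations)
    return time.monotonic() - start

cheap = kdf_seconds(1_000)
expensive = kdf_seconds(1_000_000)
# every authenticated request pays the full cost again
print(expensive > cheap)
```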


Lukas



Re: ACL and operator

2024-02-02 Thread Lukas Tribus
On Fri, 2 Feb 2024 at 18:42, John Lauro  wrote:
>
> Seems like a lint style checker that doesn't require AI.
> For example, it could recognize that the / in /api isn't valid for 
> req.hdr(host)
> [...]
> The _ in path_beg is also questionable.  You can have _ in dns names,
> but are not valid in host names.

[ CCing the mailing list again ]

A primary use-case for ACLs is to match invalid values and headers (
for example in case of zero days).

We can't restrict ACLs to valid things only, that would defeat the
purpose of ACLs.


Lukas



[PATCH] DOC: install: clarify WolfSSL chroot requirements

2024-02-02 Thread Lukas Tribus
---
 INSTALL | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/INSTALL b/INSTALL
index 18eb67f311..8ebf8d298c 100644
--- a/INSTALL
+++ b/INSTALL
@@ -293,6 +293,18 @@ Please also note that wolfSSL supports many 
platform-specific features that may
 affect performance, and that for production uses it might be a good idea to
 check them using "./configure --help". Please refer to the lib's documentation.
 
+When running wolfSSL in chroot, either mount /dev/[u]random devices into the
+chroot:
+
+  $ mkdir -p /path/to/chrootdir/dev/
+  $ mknod -m 444 /path/to/chrootdir/dev/random c 1 8
+  $ mknod -m 444 /path/to/chrootdir/dev/urandom c 1 9
+
+Or, if your OS supports it, enable the getrandom() syscall by appending the
+following argument to the wolfSSL configure command:
+
+  EXTRA_CFLAGS=-DWOLFSSL_GETRANDOM=1
+
 Building HAProxy with wolfSSL requires to specify the API variant on the "make"
 command line, for example:
 
-- 
2.17.1




Re: [PATCH] DOC: install: enable WOLFSSL_GETRANDOM

2024-02-02 Thread Lukas Tribus
On Fri, 2 Feb 2024 at 08:43, Willy Tarreau  wrote:
>
> Hi Lukas!
>
> On Thu, Feb 01, 2024 at 02:52:10PM +, Lukas Tribus wrote:
> > On Thu, 1 Feb 2024 at 12:08, William Lallemand  
> > wrote:
> > >
> > > That's interesting, however I'm surprised the init does not work before 
> > > the chroot,
> > > we are doing a RAND_bytes() with OpenSSL before the chroot to achieve 
> > > this.
> >
> > This approach can actually hide chroot issues leading to nasty
> > operational issues like "Haproxy 1.8 with OpenSSL 1.1.1-pre4 stops
> > working after 1 hour" (see [1]  and [2]). It's also not unrealistic to
> > cause issue with process management, like FD leaks [3].
> >
> > Stable OpenSSL on stable OS release branches today use getrandom() and
> > not /dev/urandom.
> >
> > I think using the filesystems for CRNG is a footgun. At least let us
> > fail fast and immediately if there is an issue with CRNG seeding from
> > chroot.
> >
> > I consider getrandom() a modern and simple solution to all those problems.
>
> It's not that black and white actually. I pretty well remember that we
> had to patch some code (I think it was openssl) in the past to force
> to switch back from getrandom() to /dev/urandom, precisely because it
> could block (particularly during early boot), while /dev/urandom would
> never. It's a bit old, it was around kernel 4.4 I think, and since then
> there has been endless discussions on the usual topics of "should a
> program work but be less safe or should it fail to protect users against
> themselves" and "how to generate a random MAC address on a headless
> system when the only entropy source is the network", leading to a debate
> around the addition of a GRND_INSECURE flag, but I don't know what the
> status is nowadays, especially in LTS distros. I just found an abstract
> of this thread here:
>
> https://lwn.net/Articles/800509/
>
> It might be one reason why it's not enabled by default, though I can't
> say, really.

Right, to behave exactly like /dev/urandom we need GRND_INSECURE.

GRND_INSECURE is in linux 5.6:
https://github.com/torvalds/linux/commit/75551dbf112c992bc6c99a972990b3f272247e23

defined in glibc-2.32:
https://github.com/bminor/glibc/commit/319d2a7b60cc0d06bb5c29684c23475d41a7f8b7

As such it is in RHEL 9, Ubuntu 22.04 LTS and Debian Bookworm (Stable).
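In Python terms - a sketch, not what haproxy/wolfSSL actually do: the flag value is taken from linux/random.h, os.getrandom() is a thin wrapper over the syscall, and the fallback covers kernels older than 5.6 where the flag raises EINVAL.

```python
import os

GRND_INSECURE = 0x0004  # never block, never credit entropy (linux >= 5.6)

def urandom_like(n: int) -> bytes:
    """Mimic /dev/urandom semantics via getrandom(2)."""
    try:
        return os.getrandom(n, GRND_INSECURE)
    except OSError:
        # EINVAL on kernels < 5.6: default flags may block at early boot,
        # but otherwise behave the same
        return os.getrandom(n)

print(len(urandom_like(16)))
```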


There was actually an attempt at making /dev/urandom secure/blocking
in v5.18-rc1, but it didn't last for more than a few days:

https://github.com/torvalds/linux/commit/6f98a4bfee72c22f50aedb39fb761567969865fe
https://github.com/torvalds/linux/commit/0313bc278dac7cd9ce83a8d384581dc043156965
https://github.com/torvalds/linux/commit/48bff1053c172e6c7f340e506027d118147c8b7f



> Maybe we should *recommend* enabling getrandom, explaining however that
> its reliability may vary depending on the OS version and the amount of
> entropy sources.

For the record I think everything in INSTALL is a recommendation,
without any one size fits all.

WolfSSL support in HAProxy is experimental to the point that not only
does it require compiling the library and the application from source,
it also requires tinkering with LD paths to even start the binary, so
it's not as if the INSTALL instructions are "ready to roll" for
production.

I will make a V2 of the patch indicating to either enable getrandom(),
mount the /dev/[u]random devices into the chroot, or disable chroot.


cheers,
Lukas



Re: ACL and operator

2024-02-02 Thread Lukas Tribus
On Fri, 2 Feb 2024 at 15:09, Tom Braarup  wrote:
>
> Hi,
>
> The config validator does not seems to catch this error in syntax and Haproxy 
> ignores the second part of the expression:
>
> use_backend api.example.com if { req.hdr(host) -i example.com and path_beg 
> /api }

This is correct syntax, and it matches when the host header corresponds
to any of the 4 possible values that have been indicated:

"example.com"
"and"
"path_beg"

as well as:
"/api"


Just like
acl url_static  path_beg /static /images /img /css

matches 4 different URI prefixes.

You and I understand that this is not what you wanted, but it will
never be possible for the configuration parser to know that, unless it
is an AI.



BR,
Lukas



Re: [PATCH] DOC: install: enable WOLFSSL_GETRANDOM

2024-02-01 Thread Lukas Tribus
Hello William,

On Thu, 1 Feb 2024 at 17:52, William Lallemand  wrote:

> > I consider getrandom() a modern and simple solution to all those problems.
>
> Unfortunately this is still a fallback solution if getrandom() is not
> accessible or if the support is not built, as this is a fallback in
> openssl too.I don't want HAProxy to require getrandom() to work, even if
> this is not an ideal solution, there is no reason it shouldn't work
> without it, at least for the sake of portability.

Although freebsd has getrandom(), openbsd uses a different API and I'm
sure there are lots of other operating systems that do not provide
this API.

So yes, a single code path would have been nice, but you are right, it
isn't realistic.



> > > I'll check if we can do something like this instead of needing a explicit 
> > > option, but
> > > if that's not possible we will require GETRANDOM in the --enable-haproxy 
> > > build option.
> >
> > Actually I think wolfssl should add feature detection just like it
> > does with other optional syscalls. But that is not what the suggested
> > wolfssl 5.6.6 release does.
>
> It does not seem to be the wolfSSL philosophy :/ Everything needs to be
> compiled manually and there is a lot of options, it's quite complicated
> to obtain an optimized built... We recently saw that they've done
> something like this for AES-NI, so maybe we could try to push them to a
> more dynamic build system.

Wolfssl sure can be hard to love sometimes.


> I opened a wolfSSL issue https://github.com/wolfSSL/wolfssl/issues/7197
> feel free to participate, we could try to push them to the detection of
> getrandom() during the build of their library, and fix their urandom
> Implementation.

Yes, thank you, I will subscribe and participate.


> We could also try a call to RAND_bytes() after the chroot and exit with
> an error saying that the library is not compatible with chroot.

I definitely prefer failing fast/early, to avoid getting blindsided.


Let's see what can be improved at wolfssl.



Thank you,

Lukas



Re: [PATCH] DOC: install: enable WOLFSSL_GETRANDOM

2024-02-01 Thread Lukas Tribus
On Thu, 1 Feb 2024 at 12:08, William Lallemand  wrote:
>
> That's interesting, however I'm surprised the init does not work before the 
> chroot,
> we are doing a RAND_bytes() with OpenSSL before the chroot to achieve this.

This approach can actually hide chroot issues, leading to nasty
operational issues like "Haproxy 1.8 with OpenSSL 1.1.1-pre4 stops
working after 1 hour" (see [1] and [2]). It's also not unrealistic to
cause issues with process management, like FD leaks [3].

Stable OpenSSL on stable OS release branches today uses getrandom(),
not /dev/urandom.

I think using the filesystem for CRNG is a footgun. At least let us
fail fast and immediately if there is an issue with CRNG seeding from
chroot.

I consider getrandom() a modern and simple solution to all those problems.


> I'll check if we can do something like this instead of needing a explicit 
> option, but
> if that's not possible we will require GETRANDOM in the --enable-haproxy 
> build option.

Actually I think wolfssl should add feature detection just like it
does with other optional syscalls. But that is not what the suggested
wolfssl 5.6.6 release does.


Regards,
Lukas

[1] https://www.mail-archive.com/haproxy@formilux.org/msg29592.html
[2] https://github.com/openssl/openssl/issues/5330
[3] https://github.com/haproxy/haproxy/issues/314



[RFC PATCH] DOC: httpclient: add dedicated httpclient section

2024-01-30 Thread Lukas Tribus
Move httpclient keywords into their own section and explain them by
adding an introductory paragraph.

Also see Github issue #2409

Should be backported to 2.6; but note that:
2.7 does not have httpclient.resolvers.disabled
2.6 does not have httpclient.retries and httpclient.timeout.connect
---
 doc/configuration.txt | 131 ++
 1 file changed, 69 insertions(+), 62 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 208b474471..402fa3d317 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -58,6 +58,7 @@ Summary
 3.8.  HTTP-errors
 3.9.  Rings
 3.10. Log forwarding
+3.11. httpclient
 
 4.Proxies
 4.1.  Proxy keywords matrix
@@ -1953,68 +1954,6 @@ http-fail-codes [+-][,...] [...]
   exactly the same as for http-err-codes above. See also "http-err-codes" and
   "http_fail_cnt".
 
-httpclient.resolvers.disabled 
-  Disable the DNS resolution of the httpclient. Prevent the creation of the
-  "default" resolvers section.
-
-  Default value is off.
-
-httpclient.resolvers.id 
-  This option defines the resolvers section with which the httpclient will try
-  to resolve.
-
-  Default option is the "default" resolvers ID. By default, if this option is
-  not used, it will simply disable the resolving if the section is not found.
-
-  However, when this option is explicitly enabled it will trigger a
-  configuration error if it fails to load.
-
-httpclient.resolvers.prefer 
-  This option allows to chose which family of IP you want when resolving,
-  which is convenient when IPv6 is not available on your network. Default
-  option is "ipv6".
-
-httpclient.retries 
-  This option allows to configure the number of retries attempt of the
-  httpclient when a request failed. This does the same as the "retries" keyword
-  in a backend.
-
-  Default value is 3.
-
-httpclient.ssl.ca-file 
-  This option defines the ca-file which should be used to verify the server
-  certificate. It takes the same parameters as the "ca-file" option on the
-  server line.
-
-  By default and when this option is not used, the value is
-  "@system-ca" which tries to load the CA of the system. If it fails the SSL
-  will be disabled for the httpclient.
-
-  However, when this option is explicitly enabled it will trigger a
-  configuration error if it fails.
-
-httpclient.ssl.verify [none|required]
-  Works the same way as the verify option on server lines. If specified to 
'none',
-  servers certificates are not verified. Default option is "required".
-
-  By default and when this option is not used, the value is
-  "required". If it fails the SSL will be disabled for the httpclient.
-
-  However, when this option is explicitly enabled it will trigger a
-  configuration error if it fails.
-
-httpclient.timeout.connect 
-  Set the maximum time to wait for a connection attempt by default for the
-  httpclient.
-
-  Arguments :
- is the timeout value specified in milliseconds by default, but
-  can be in any other unit if the number is suffixed by the unit,
-  as explained at the top of this document.
-
-  The default value is 5000ms.
-
-
 insecure-fork-wanted
   By default HAProxy tries hard to prevent any thread and process creation
   after it starts. Doing so is particularly important when using Lua files of
@@ -4597,6 +4536,74 @@ maxconn 
 timeout client 
   Set the maximum inactivity time on the client side.
 
+3.11. httpclient
+
+
+httpclient is an internal HTTP library; it can be used by various subsystems,
+for example in Lua scripts. httpclient is not used in the data path; in other
+words it has nothing to do with HTTP traffic passing through HAProxy.
+
+httpclient.resolvers.disabled 
+  Disable the DNS resolution of the httpclient. Prevent the creation of the
+  "default" resolvers section.
+
+  Default value is off.
+
+httpclient.resolvers.id 
+  This option defines the resolvers section with which the httpclient will try
+  to resolve.
+
+  Default option is the "default" resolvers ID. By default, if this option is
+  not used, it will simply disable the resolving if the section is not found.
+
+  However, when this option is explicitly enabled it will trigger a
+  configuration error if it fails to load.
+
+httpclient.resolvers.prefer 
+  This option allows to chose which family of IP you want when resolving,
+  which is convenient when IPv6 is not available on your network. Default
+  option is "ipv6".
+
+httpclient.retries 
+  This option allows to configure the number of retries attempt of the
+  httpclient when a request failed. This does the same as the "retries" keyword
+  in a backend.
+
+  Default value is 3.
+
+httpclient.ssl.ca-file 
+  This option defines the ca-file which should be used to verify the server
+  certificate. It takes the same parameters as the "ca-file" option on the
+  server line.
+
+  By default and when this option is not used, the value is
+  "@system-ca" which tries to 

[PATCH] DOC: install: enable WOLFSSL_GETRANDOM

2024-01-30 Thread Lukas Tribus
Suggest enabling getrandom() syscall in wolfssl to avoid chroot
problems when using wolfssl.
---
Also see:

https://discourse.haproxy.org/t/haproxy-no-responses-when-built-with-wolfssl-while-working-with-openssl/9320/15

---
 INSTALL | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/INSTALL b/INSTALL
index 18eb67f311..284b9825ba 100644
--- a/INSTALL
+++ b/INSTALL
@@ -285,7 +285,8 @@ least WolfSSL 5.6.6 is needed, but a development version 
might be needed for
 some of the features:
 
   $ cd ~/build/wolfssl
-  $ ./configure --enable-haproxy --enable-quic --prefix=/opt/wolfssl-5.6.6/
+  $ ./configure --enable-haproxy --enable-quic \
+  --prefix=/opt/wolfssl-5.6.6/ EXTRA_CFLAGS=-DWOLFSSL_GETRANDOM=1
   $ make -j $(nproc)
   $ make install
 
-- 
2.17.1




Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Lukas Tribus
On Mon, 16 Oct 2023 at 19:41, Aleksandar Lazic  wrote:
>
>
>
> On 2023-10-16 (Mo.) 19:29, Илья Шипицин wrote:
> > Does 1.8 support http/2?
>
> No.

Actually haproxy 1.8 supports H2 (without implementing HTX), as per
the documentation and announcements:

https://www.mail-archive.com/haproxy@formilux.org/msg28004.html
http://docs.haproxy.org/1.8/configuration.html#5.1-alpn


It does so by downgrading H2 to HTTP/1.1.
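
For reference, a minimal sketch of how H2 was typically enabled on a
1.8 frontend (section names and the certificate path are illustrative
only); on that branch H2 requires TLS with ALPN negotiation on the
bind line:

```
frontend fe_main
    mode http
    # advertise h2 via ALPN; haproxy 1.8 then translates each H2
    # stream to HTTP/1.1 internally before further processing
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app
```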


I don't know whether haproxy 1.8 actually is affected by the rapid
reset vulnerability or not. I suppose it's possible.

Lukas



FYI: OpenWrt switches from wolfssl to mbedtls as default

2023-10-13 Thread Lukas Tribus
Hello,


an interesting move from the OpenWRT project:


> Switch from wolfssl to mbedtls as default
> =
>
> OpenWrt has transitioned its default cryptographic library from wolfssl
> to mbedtls. This shift brings several changes and implications:
>
>   * Size Efficiency: mbedtls is considerably smaller, making it an
> optimal choice for systems where storage space is paramount.
>   * LTS and ABI Stability: mbedtls consistently provides updates via its
> Long Term Support (LTS) branch, ensuring both security and a stable
> application binary interface (ABI). In contrast, wolfssl does not
> offer an LTS release, and its stable ABI is limited to a specific set
> of functions.
>   * TLS 1.3 Support: Users should be aware that mbedtls 2.28 no longer
> supports TLS 1.3.
>
> While mbedtls is now the default, users who have specific needs or
> preferences can still manually switch back to wolfssl or choose openssl.

As per:
http://lists.openwrt.org/pipermail/openwrt-announce/2023-October/47.html

Size Efficiency does not matter a lot in the context of haproxy, and
TLSv1.3 is a must-have, but I'm surprised by the point about LTS
and ABI Stability in wolfssl and I'm wondering if this is really the
case?


cheers,
lukas



Re: Options for mitigating CVE-2023-44487 with HAProxy

2023-10-10 Thread Lukas Tribus
On Tue, 10 Oct 2023 at 20:22, Willy Tarreau  wrote:
>
> So at this point I'm still failing to find any case where this attack
> hurts haproxy more than any of the benchmarks we're routinely inflicting
> it, given that it acts exactly like a client configured with a short
> timeout (e.g. if you configure haproxy with "timeout server 1" and
> have an h2 server, you will basically get the same traffic pattern).

This is pretty much the situation with nginx as well:

https://mailman.nginx.org/pipermail/nginx-devel/2023-October/S36Q5HBXR7CAIMPLLPRSSSYR4PCMWILK.html



Lukas



GCP: The novel HTTP/2 ‘Rapid Reset’ DDoS attack

2023-10-10 Thread Lukas Tribus
FYI

https://cloud.google.com/blog/products/identity-security/how-it-works-the-novel-http2-rapid-reset-ddos-attack



haproxy.org bug pages broken (missing html headers and footer?)

2023-09-27 Thread Lukas Tribus
Hello,

looks like the bug pages are broken; they contain the table of bugs
but there is really no formatting happening and it appears the entire
HTML header and footer is missing:

Example:
http://www.haproxy.org/bugs/bugs-2.4.html
http://www.haproxy.org/bugs/bugs-2.6.2.html


BR,

Lukas



Re: maxconn limit not working after reload / sighup

2023-09-21 Thread Lukas Tribus
On Thu, 21 Sept 2023 at 01:20, Björn Jacke  wrote:
>
> Hello,
>
> I just experienced that maxconn can easily not work as expected and lead
> to unavailable services. Take this example backend configuration of a
> 2.8.3 haproxy setup:
>
> backend bk_example
>balance first
>server server1   192.168.4.1:8000  id 1  maxconn 10
>server server2   192.168.4.2:8000  id 2  maxconn 10
>server server3   192.168.4.3:8000  id 3  maxconn 10
>...
>
> Each server here is only able to handle 10 requests, if it receives more
> requests it will just return an error. So usually the above
> configuration works fine, server1 receives up to 10 connections, after
> that connections are sent to server2, if also that has the maxconn limit
> reached, server3 receives requests and so on.
>
> So far so good. If haproxy however receives a SIGHUP because of some
> reconfiguration, then all the connections to the backend servers are
> kept alive but haproxy thinks that the servers have 0 connections and it
> will send up to 10 new connections to backend servers, even if they
> already had 10 connections, which are still active and still correctly
> processed by haproxy. So each server receives up to 20 connections and
> the backend servers just return errors in this case.
>
> This is very unexpected and it looks like unintended behavior actually.
> I also never heard about this and never read a warning note for such a
> side effect when a haproxy reload is being done. Maybe a
> server-state-file configuration might work around this problem but it
> was not obvious till now that this is a requirement if maxconn is being
> used. Can someone shed some light on this?

This is expected behaviour.

global maxconn docs do define that this is per-process, although
frontend or backend server maxconn docs do not specifically reiterate
that we are always talking about a per process counter.


I think it's pretty much impossible to solve, actually.


These are counters that can change every millisecond. Haproxy doesn't
do any IO at all during run time. The state file is useless for this
and I think it would be really hard to sync this between different
instances, considering how time critical this is.

But even if the master process would somehow pass information back and
forth every second, this could only theoretically work when the
configuration remains the same.


What if the old configuration has:

backend http
 server s1 s1:80 maxconn 5
 server s2 s2:80 maxconn 10
 server s3 s3:80 maxconn 20


and the new configuration has:

backend http
 server s1 s1:80 maxconn 3
 server s2 s2:80 maxconn 7
 server s3 s3:80 maxconn 30


Heuristics between old and new configuration to find middle ground? I
don't think that's even theoretically possible, but it would also be
completely unexpected.

I'm all for improving the docs where we can, but I doubt we can do
more than that in this case.

There are features to limit the amount of time old processes are
running (hard-stop-after) as well as features that ramp up the numbers
in new processes slowly. Other than that you'd have to account for
this overhead.
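
As a minimal sketch of the first mitigation (the value is illustrative):
bounding the lifetime of old processes with hard-stop-after caps how
long the per-process maxconn counters can effectively double after a
reload:

```
global
    # after a reload, force-quit lingering old workers after 15 minutes,
    # limiting the window in which old and new processes each apply
    # their own per-process maxconn limits
    hard-stop-after 15m
```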



Lukas



Re: Haproxy 2.8 with Proxy Protocol v2 does not close connections

2023-09-07 Thread Lukas Tribus
On Thu, 7 Sept 2023 at 14:03, Tom Braarup  wrote:
>
> Hello,
>
> After upgrading Haproxy from 2.7 to 2.8, with Nginx (1.25.0) as
> backends and Proxy Protocol v2, the connections are not closed,
> CLOSE_WAIT is increasing over time. No configuration changes apart from
> the Haproxy version.

2.8.3 was just released with several important fixes. I suggest you
give it a try when it hits Vincent's PPA.


Lukas



Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-07 Thread Lukas Tribus
On Fri, 7 Jul 2023 at 00:26, Tristan  wrote:
>
> Hi Willy,
>
> Thanks for sharing that. First, I'm amazed that such a hacky method
> works well-enough to get QUIC (nearly-fully) working.
>
> Now for your concerns... Honestly, I agree with you and really don't
> want to see a brand new protocol compromised on.
>
> Whether one calls it "ossification" or "degraded", it would be a
> compromise on a spec that hasn't even reached mass adoption yet, because
> of one single stubborn library's committee.
>
> So many hours of our lives (in IT) are wasted on dealing with poor
> decisions from the past, leading to inconsistent specs, surprising
> behaviours, and security holes left and right.
> And these *always* have some obscure justification like "but at the
> time, popular software $XYZ wasn't compliant/nice/$whatever, so everyone
> was forced to give in, and this is why the spec is now crap".
>
> I've spent far too long complaining and cursing about these things to
> not feel strongly against it as it unfolds in my own time for once,
> instead of feeling helpless because it happened 25 years ago.
>
> Now that said, I'd understand if HAProxy went forward with it, even if
> that feels bad.
> I think it'd be a smart thing for the project to do.
> Not for technical reasons, but from a "marketing" standpoint (and I
> don't mean the word in a negative way). If other similar projects adopt
> it, then HAProxy would be the only one missing it, and be perceived as
> lacking a feature from a prospective user's point of view. After all,
> people *will* keep comparing HAProxy with nginx & friends. And they
> won't really care about the hacks that were necessary to get there.
>
> So while it'd be sad, and I'd much much prefer seeing wolfSSL become the
> new standard instead, it would not be that unreasonable for HAProxy to
> adopt the patch either...
>
> Between a rock and a hard place,
> Tristan


I can only agree with everything Tristan said. I too would like to see
full wolfssl support.

Perhaps the focus should be to get SSL feature parity (for generic
SSL) as well as H3 support with wolfssl and only then implement a
openssl hack?

That way at least we are not the ones pushing the hack without
providing a proper alternative. Because once it's out there, people
will certainly stick to it.


Lukas



Re: regression? scheme and hostname logged with %r with 2.6.13

2023-06-07 Thread Lukas Tribus
Hello,


yes, H2 behaves very differently; due to protocol differences but also
due to other changes. In the beginning H2 was only implemented in the
frontend and every transaction was downgraded to HTTP/1.1 internally.
This was later changed to an internal generic "HTX" representation
that allowed to unify the protocol stack.


To return URIs in relative form in the logs, I guess you could
reconstruct the string manually with pathq (untested):
\"%HM %[pathq] %HV\" as opposed to %r


pathq is a HTTP sample designed to do exactly this, always returning
the URI in a relative format:

http://docs.haproxy.org/2.6/configuration.html#7.3.6-pathq
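
To make that concrete, a hedged sketch of a custom log-format line
using pathq (untested, and the field list is abbreviated for
illustration); the quoted part rebuilds an HTTP/1-style relative
request line from method, path+query, and version:

```
frontend fe_main
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    # logs "METHOD /relative/path?query HTTP/x.y" even for H2 requests
    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B \"%HM %[pathq] %HV\""
```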


Not sure what %HU does; I assume it refers to the URL, not pathq.



I agree that doc updates are needed at least in section "8.2.3. HTTP
log format" and "8.2.6. Custom log format".



Lukas



Re: OCSP renewal with 2.8

2023-06-05 Thread Lukas Tribus
On Sat, 3 Jun 2023 at 14:30, William Lallemand  wrote:
> That's what we've done in the first place, but I decided to remove it
> because I was not happy with the architecture. And once you have
> something like this, you have to keep the configuration compatibility
> for the next versions and then you are stuck with something awful.
>
> My concern here, is that the ocsp-update option was never a "bind"
> option, it's a feature which applies on the internal storage part, which
> is not directly exposed in the configuration. So for example if you use
> the same certificate on multiple bind lines, setting "ocsp-update on" on
> one line and "ocsp-update off" on the other doesn't make sense.

I understand, I just think that those are tradeoffs that need to be made.

We could document it well, and trigger configuration warnings or
alerts (depending on severity) for conflicts.

Not providing bind lines configuration support to avoid conflicting
configurations in a small number of cases, while not supporting the
most commonly used configuration does not seem like a good tradeoff.


Note that I'm not saying conflicting configuration warnings for this
are trivial to implement or anything like that. I don't actually know;
I'm just saying this sounds like in this case the cure may be worse
than the disease.



> We are well aware on the current limitations of this model, and we are
> working on it, that's why it landed in the crt-list for now, but that
> will evolve!

Great, thank you!


Lukas



Re: OCSP renewal with 2.8

2023-06-02 Thread Lukas Tribus
On Fri, 2 Jun 2023 at 21:55, Willy Tarreau  wrote:
> Initially during the design phase we thought about having 3 states:
> "off", "on", "auto", with the last one only enabling updates for certs
> that already had a .ocsp file. But along discussions with some users
> we were told that it was not going to be that convenient (I don't
> remember why, but I think that Rémi and/or William probably remember
> the reason), and it ended up dropping "auto".
>
> Alternately maybe instead of enabling for all certs, what would be
> useful would be to just change the default, because if you have 100k
> certs, it's likely that 99.9k work one way and the other ones the other
> way, and what you want is to indicate the default and only mention the
> exception for those concerned.

I suggest we make it configurable on the bind line like other ssl
options, so it will work for the common use cases that don't involve
crt-lists, like a simple crt statement pointing to a certificate or a
directory.

It could also be a global option *as well*, but imho it does need to
be a bind line configuration option, just like strict-sni, alpn and
ciphers, so we can enable it specifically (per frontend, per bind
line) without requiring crt-list.


Lukas



Re: http-request del-header removes Authorization header before authenticated on haproxy

2023-05-25 Thread Lukas Tribus
Did you try putting the "del-header" configuration in the backend section?


On Thu, 25 May 2023 at 15:25, pham lan  wrote:
>
> Hello,
>
> We use haproxy for basic authentication. And afterward, remove the 
> Authorization header from the backend section before forwarding the request 
> to backend.
> It works fine with reqidel. Lately, we upgraded haproxy to the latest version 
> and use http-request del-header Authorization for the same purpose but the 
> request fails to be authenticated. It looks like the header was removed 
> before the authentication happened.
> Am I missing something? Is there anyone experiencing my case?
>
> Thanks,
> Lan Pham



Re: [OPINIONS DESIRED] (was Re: [PATCH] BUG/MINOR: Fix typo in `TotalSplicedBytesOut` field name)

2023-04-23 Thread Lukas Tribus
On Sun, 23 Apr 2023 at 13:08, Willy Tarreau  wrote:
>
> On Sun, Apr 23, 2023 at 12:39:25PM +0200, Tim Düsterhus, WoltLab GmbH wrote:
> > Willy,
> >
> > On 3/27/23 20:25, Willy Tarreau wrote:
> > > OK, let's see what other users and participants think about it. If I get
> > > at least one "please don't change it" I'll abide, otherwise it may make
> > > sense to fix it before it ossifies and annoys some future users.
> > >
> > > Anyone has any opinion here ?
> >
> > Wanted to bump this thread to make sure it is resolved one way or another
> > before the release.
>
> Ah thanks, I remembered there was something pending but didn't remember what.
> Yeah I think we should just fix it since nobody objected. I'll deal with that
> next week (I did too much of haproxy for this week-end).

I agree that we should fix this, considering the possible breakage is
pretty unlikely and very much non-fatal.


Lukas



Re: Problems using custom error files with HTTP/2

2023-04-17 Thread Lukas Tribus
On Sat, 15 Apr 2023 at 23:08, Willy Tarreau  wrote:
>
> On Sat, Apr 15, 2023 at 10:59:42PM +0200, Willy Tarreau wrote:
> > Hi Nick,
> >
> > On Sat, Apr 15, 2023 at 09:44:32PM +0100, Nick Wood wrote:
> > > And here is my configuration - I've slimmed it down to the absolute 
> > > minimum
> > > to reproduce the problem:
> > >
> > > If the back end is down, the custom 503.http page should be served.
> > >
> > > This works on HTTP/1.1 but not over HTTP/2:
> >
> > Very useful, thank you. In fact it's irrelevant to the errorfile but
> > it's the 503 that is not produced in this case. I suspect that it's
> > interpreted on the server side as only a retryable connection error
> > and that if the HTTP/1 client had faced it on its second request it
> > would have been the same (in H1 there's a special case for the first
> > request on a connection, that is not automatically retryable, but
> > after the first one we have the luxry of closing silently to force
> > the client to retry, something that H2 supports natively).
> >
> > I'm still trying to figure when this problem appeared, and it looks
> > like even 2.4.0 did behave like this. I'm still digging.
>
> And indeed, this issue appeared with this commit in 1.9-dev10 4 years ago:
>
>   746fb772f ("MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new 
> conn_streams.")
>
> So it makes h2 behave like the second and more H1 requests which are silent
> about this. We overlooked this specificity, it would need to be rethought a
> little bit I guess.

Even though we had this issue for a long time and nobody noticed, we
should probably not enable H2 on a massive scale with new 2.8 defaults
before this is fixed to avoid silently breaking this error condition.


Lukas



Re: Opinions desired on HTTP/2 config simplification

2023-04-15 Thread Lukas Tribus
Hi,

On Sat, 15 Apr 2023 at 11:32, Willy Tarreau  wrote:
> Thus you're seeing me coming with my question: does anyone have any
> objection against turning "alpn h2,http/1.1" on by default for HTTP
> frontends, and "alpn h3" by default for QUIC frontends, and have a new
> "no-alpn" option to explicitly turn off ALPN negotiation on HTTP
> frontends e.g. for debugging ?

For H2 I agree, and just to state which I think it's obvious: when
alpn is already configured the behaviour will remain as-is, the change
would only impact bind lines without alpn keyword.

For QUIC frontends, they are explicit anyway, so yes, I think "alpn
h3" should be implicit in HTTP mode, unless we expect to negotiate
different alpn protocols on top of QUIC (but I guess in this case the
frontend would be in TCP mode).



> And if we change this default, do you prefer that we do it for 2.8 that
> will be an LTS release and most likely to be shipped with next year's
> LTS distros, or do you prefer that we skip this one and start with 2.9,
> hence postpone to LTS distros of 2026 ?

I agree this should be in 2.8 LTS.


Lukas



Re: HAProxy CE Docker Alpine image with QUIC

2023-03-19 Thread Lukas Tribus
On Sat, 18 Mar 2023 at 20:01, Aleksandar Lazic  wrote:
>
> Hi Dinko.
>
> On 17.03.23 20:59, Dinko Korunic wrote:
> > Dear community,
> >
> > Upon many requests, we have started building HAProxy CE for 2.6, 2.7 and
> > 2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic Release 1) as
> > Docker Alpine 3.17 images.
>
> That's great news :-).
>
> What should keep in mind is that Apline's musl libc does not handle TCP
> DNS queries, which limits the answers for dns Queries to ~30 entries.

I don't think you'd use the libc resolver at all when using DNS
discovery, considering that it only runs once during startup of
haproxy. Therefore I think it's unlikely that this would be a real
world problem.


Unrelated: the development branch of musl now has full DNS TCP support
[1], so this will ultimately trickle down to Alpine at some point,
resolving the issue once and for all.


Regards,
Lukas

[1] 
https://git.musl-libc.org/cgit/musl/commit/?id=51d4669fb97782f6a66606da852b5afd49a08001



Re: stick-table replication not working anymore after Version-Upgrade

2023-03-01 Thread Lukas Tribus
On Wed, 1 Mar 2023 at 10:09, bjun...@gmail.com  wrote:
>
> Hi,
>
> i've upgraded from HAProxy 2.4.15 (OS: Ubuntu 18.04) to 2.4.22 (OS: Ubuntu 
> 22.04). Now the stick-table synchronization between peers isn't working 
> anymore.
>
> The peers listener is completely not existing (lsof output).
>
> HAProxy config:
>
> peers LB
> peer s017.domain.local 192.168.120.207:1234
> peer s018.domain.local 192.168.120.208:1234

Is it possible the kernel rejects the bind to those IP addresses after
the OS upgrade?

Can you bind to those ports with something like nc?

nc -l -s 192.168.120.207 -p 1234



Lukas



Re: Haproxy (2.2.26) Wont Start - cannot find default_backend

2023-01-12 Thread Lukas Tribus
Hello,


On Thu, 12 Jan 2023 at 09:35, Aurelien DARRAGON  wrote:
>
> Hi,
>
> > I am having trouble with Haproxy using a configuration was previously
> > worked and am getting a very odd to me error
> >
> >
> >
> > Jan 11 13:58:00 ca04vlhaproxy01 haproxy[16077]: [ALERT] 010/135800
> > (16077) : Proxy 'graylog_back': unable to find required default_backend:
> > '#002'.
>
> You might want to manually update your haproxy from the tip of 2.2
> branch while waiting for a new version:
>
> A regression only affecting 2.2.26 was recently discovered (see below),
> and the fix is available on top of 2.2.26
> (https://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=a7d662bda95c33131126b2ad45bbb073d93b85d3)

Are we lacking tests with ring configurations in the 2.2 branch?

I agree with Jason that a new release is needed.


Lukas



Re: dsr and haproxy

2022-11-04 Thread Lukas Tribus
On Fri, 4 Nov 2022 at 16:50, Szabo, Istvan (Agoda)
 wrote:
>
> Yeah, that’s why I’m curious anybody ever made it work somehow?

Perhaps I should have been clearer.

It's not supported because it's not possible.

Haproxy (the OSS project) uses the socket API; haproxy cannot forward
IP packets arbitrarily, which is required for DSR.

This is a hard no, not a "we do not support this configuration because
nobody ever tried it and we can't guarantee it will work".


Lukas



Re: dsr and haproxy

2022-11-04 Thread Lukas Tribus
On Fri, 4 Nov 2022 at 16:32, Aleksandar Lazic  wrote:
>
> Hi.
>
> On 04.11.22 12:24, Szabo, Istvan (Agoda) wrote:
> > Hi,
> >
> > Is there anybody successfully configured haproxy and dsr?
>
> Well maybe this Blog Post is a good start point.
>
> https://www.haproxy.com/blog/layer-4-load-balancing-direct-server-return-mode/

The TLDR is:

Haproxy (the OSS project) does not support DSR and never will; it's an
application working with sockets. You need IPVS or similar
technologies for DSR.
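
For anyone landing here looking for a starting point, a hedged sketch
of an IPVS direct-routing setup (VIP, real-server addresses and the
scheduler are purely illustrative; this is kernel configuration run as
root, not haproxy configuration):

```
# on the director: create a virtual service and add real servers in
# gatewaying (-g) mode, i.e. direct routing
ipvsadm -A -t 203.0.113.10:80 -s rr
ipvsadm -a -t 203.0.113.10:80 -r 192.168.0.11:80 -g
ipvsadm -a -t 203.0.113.10:80 -r 192.168.0.12:80 -g

# on each real server: own the VIP without ARPing for it, so return
# traffic goes straight to the client
ip addr add 203.0.113.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```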


Lukas



Fwd: [oss-security] Forthcoming OpenSSL Releases

2022-10-26 Thread Lukas Tribus
FYI a CRITICAL openssl vulnerability will be fixed in 3.0.7 and 1.1.1s
to be released Tue, Nov 1st between 1300-1700 UTC:

https://www.openwall.com/lists/oss-security/2022/10/25/4
https://www.openwall.com/lists/oss-security/2022/10/25/6
https://www.openssl.org/policies/general/security-policy.html


cheers,
lukas

-- Forwarded message -
From: Ing. Martin Koci, MBA 
Date: Tue, 25 Oct 2022 at 14:54
Subject: [oss-security] Forthcoming OpenSSL Releases
To: , ,
, 


Hello,

The OpenSSL project team would like to announce the forthcoming release
of OpenSSL version 3.0.7.

This release will be made available on Tuesday 1st November 2022 between
1300-1700 UTC.

OpenSSL 3.0.7 is a security-fix release. The highest severity issue
fixed in this release is CRITICAL:

https://www.openssl.org/policies/general/security-policy.html

Yours
The OpenSSL Project Team


OpenPGP_signature
Description: PGP signature


Re: most probably next LibreSSL release will come with ... QUIC

2022-08-31 Thread Lukas Tribus
Hello,


wolfSSL has also chosen to use the same API for QUIC:

https://www.wolfssl.com/wolfssl-quic-support/

> The wolfSSL QUIC API is aligned with the corresponding APIs in other *SSL 
> libraries, making integration with QUIC protocol stacks easier and protecting 
> investments. This is a departure from past customs where OpenSSL used to 
> drive the design of APIs. However OpenSSL declined to participate and offers 
> no QUIC support for the foreseeable future.


This is probably less useful for haproxy specifically, given that we
don't support wolfssl in the first place, but interesting nonetheless.


Lukas

On Wed, 31 Aug 2022 at 15:55, William Lallemand  wrote:
>
> On Mon, Aug 29, 2022 at 11:20:29PM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > Provide the remaining QUIC API. · libressl-portable/openbsd@635aa39
> > (github.com)
> > 
> >
> >
>
> That's good to read! It didn't make it to libressl-portable for now but
> we will definitively try it once it's available.
> --
> William Lallemand
>



Re: V2.3 allow use of TLSv1.0

2022-06-09 Thread Lukas Tribus
On Thu, 9 Jun 2022 at 08:42,  wrote:
>
> Hi,
>
> I need to enable TLS V1.0 because of some legacy clients which have just been 
> "discovered" and won't be updated.

Configure "ssl-default-bind-ciphers" as per:
https://ssl-config.mozilla.org/#server=haproxy&version=2.3&config=old&openssl=1.1.1k&guideline=5.6

If you don't allow TLSv1.0 ciphers, TLSv1.0 can't be used.
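For context, a configuration sketch of what this can look like on 2.3 (hypothetical global section: the real cipher list should be taken from the Mozilla generator above, and "@SECLEVEL=1" is an OpenSSL mechanism that may be needed on newer OpenSSL builds):

```
global
    # allow the legacy protocol explicitly; recent builds may otherwise
    # refuse TLSv1.0 handshakes regardless of the cipher list
    ssl-default-bind-options ssl-min-ver TLSv1.0
    # "@SECLEVEL=1" relaxes OpenSSL's minimum security level so legacy
    # ciphers are accepted; "ALL" is a placeholder - use the "old"
    # profile list from the Mozilla generator instead
    ssl-default-bind-ciphers "ALL:@SECLEVEL=1"
```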

Also it's possible OpenSSL is so new it needs additional convincing.
Share the full output of haproxy -vv, including the OpenSSL release
please.



> Can someone tell me what I am missing ? I have found a few messages
> about adding other cipher suites,  but nothing lead to an improvement.

You will have to share more data. Full output of haproxy -vv, full ssl
configuration. Can't really troubleshoot without configurations and
exact software releases (openssl).


Lukas



Re: Stupid question about nbthread and maxconn

2022-04-26 Thread Lukas Tribus
Hello,


> > Let's say we have the following setup.
> >
> > ```
> > maxconn 2
> > nbthread 4
> > ```
> >
> > My understanding is that HAProxy will accept 2 concurrent connection,
> > right? Even when I increase the nbthread will HAProxy *NOT* accept more then
> > 2 concurrent connection, right?

Yes.


> > What confuses me is "maximum per-process" in the maxconn docu part, will 
> > every
> > thread handle the maxconn or is this for the whole HAProxy instance.

Per process limits apply to processes, they do not apply to threads.

Maxconn is per process. It is NOT per thread.

Multithreading shares that single per-process limit across all threads.
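A sketch with the numbers from the question, to make the per-process scope concrete:

```
global
    maxconn 2     # process-wide cap: 2 concurrent connections in total
    nbthread 4    # all 4 threads share that single per-process limit
```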


Lukas



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Lukas Tribus
Hello Willy,

On Sat, 26 Mar 2022 at 10:22, Willy Tarreau  wrote:
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.

While I agree with the change, what is actually documented is the previous behavior.

So this is a change in behavior, and documentation will need updating
as well to actually reflect this new behavior (patch incoming).

I have to say I don't like the idea of backporting such changes. We
have documented and trained users that H2 doesn't respect "timeout
http-keep-alive" and that it uses "timeout client" instead. We even
argued that this is a good thing because we want H2 connections to
stay up longer. I suggest not changing documented behavior in bugfix
releases of stable and stable/LTS releases.


cheers,
lukas



[PATCH] DOC: reflect H2 timeout changes

2022-03-26 Thread Lukas Tribus
Reverts 75df9d7a7 ("DOC: explain HTTP2 timeout behavior") since H2
connections now respect "timeout http-keep-alive".

If commit 15a4733d5d ("BUG/MEDIUM: mux-h2: make use of http-request
and keep-alive timeouts") is backported, this DOC change needs to
be backported along with it.
---
 doc/configuration.txt | 6 --
 1 file changed, 6 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8385e81be..87ae43809 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13034,8 +13034,6 @@ timeout client 
   during startup because it may result in accumulation of expired sessions in
   the system if the system's timeouts are not configured either.
 
-  This also applies to HTTP/2 connections, which will be closed with GOAWAY.
-
   See also : "timeout server", "timeout tunnel", "timeout http-request".
 
 
@@ -13130,10 +13128,6 @@ timeout http-keep-alive 
   set in the frontend to take effect, unless the frontend is in TCP mode, in
   which case the HTTP backend's timeout will be used.
 
-  When using HTTP/2 "timeout client" is applied instead. This is so we can keep
-  using short keep-alive timeouts in HTTP/1.1 while using longer ones in HTTP/2
-  (where we only have one connection per client and a connection setup).
-
   See also : "timeout http-request", "timeout client".
 
 
-- 
2.17.1




Re: Is there some kind of program that mimics a problematic HTTP server?

2022-03-03 Thread Lukas Tribus
Hello,

take a look at how we are using tests with vtc/vtest in
doc/regression-testing.txt.

Maybe this tool can be useful for your use-case.


Lukas



Re: Question about http compression

2022-02-21 Thread Lukas Tribus
Hello,


On Mon, 21 Feb 2022 at 14:25, Tom Browder  wrote:
>
> I'm getting ready to try 2.5 HAProxy on my system
> and see http compression is recommended.

I'm not sure we are actively encouraging to enable HTTP compression.
Where did you see this recommendation?


> From those sources I thought https should not use compression
> because of some known exploit, so I'm not currently using it.

You are talking about BREACH [1], and I'm afraid there is no magic fix
for that. The mitigations on the BREACH website apply.


Lukas


[1] http://www.breachattack.com/#mitigations



Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic

2022-02-19 Thread Lukas Tribus
On Sat, 19 Feb 2022 at 18:38, Carlos Renato  wrote:
>
> Yes,
>
> In stats, server2 is DOWN after I dropped the VM's network card.

Provide detailed logs please.


Lukas



Re: HAProxy thinks Plex is down when it's not

2022-02-19 Thread Lukas Tribus
Hello,


On Sat, 19 Feb 2022 at 17:46, Moutasem Al Khnaifes
 wrote:
> but for some reason HAProxy thinks that Plex is down

John already explained this perfectly.



> the status page is inaccessible

Your configuration is:

> listen  stats
> bind localhost:1936
[...]
> stats uri  /haproxy?stats

*NEVER* configure a bind line requiring a DNS lookup, especially a
hostname returning both address families; whether haproxy listens on
127.0.0.1 or ::1 or something else entirely depends on compile
options (USE_GETADDRINFO) and libc behavior.

If haproxy is listening on 127.0.0.1:1936, then you'd access the stats
socket as per your configuration with:

"http://127.0.0.1:1936/haproxy?stats"


But it could be listening on its IPv6 equivalent ::1 as well.


I suggest you change the configuration to your actual intention, which
probably is:

bind 127.0.0.1:1936
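Putting that together with the quoted stats section (sketch; the uri is the one from the original configuration):

```
listen stats
    bind 127.0.0.1:1936    # explicit IPv4 loopback, no DNS lookup involved
    stats enable
    stats uri /haproxy?stats
```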



cheers,
lukas



Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic

2022-02-19 Thread Lukas Tribus
On Sat, 19 Feb 2022 at 16:15, Carlos Renato  wrote:
>
> Hi Lukas,
>
> Thanks for the reply and willingness to help.
>
> I did a test and it didn't work. I dropped the server2 interface and only 
> server1 was UP.
> Traffic continues to exit through the main backend. My wish is that the
> traffic is directed to the backup server.

Did haproxy recognize server2 was down?

With this configuration the backup keyword needs to be removed from
serverBKP (because the rule is implemented with a use_backend
directive); this was wrong in my earlier config.


Lukas



Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic

2022-02-19 Thread Lukas Tribus
Hello,

I suggest you put your backup server in a dedicated backend and select
it in the frontend. I guess the same could be done with use-server in
a single backend, but I feel like this is cleaner:



frontend haproxy
  option forwardfor
  bind server.lab.local:9191
  use_backend backup_servers if { nbsrv(backend_servers) lt 2 }
  default_backend backend_servers

backend backend_servers
  server server1 192.168.239.151:9090 check
  server server2 192.168.239.152:9090 check

backend backup_servers
  server serverBKP 192.168.17.2:9090 backup



Lukas

On Sat, 19 Feb 2022 at 14:17, Carlos Renato  wrote:
>
> Can anyone help me?
>
> How to create an ACL to use the backup server if a server goes DOWN. So, if 
> the two backend servers are UP, I use the registered servers. If one (only 
> one) becomes unavailable, traffic is directed to the backup server.
>
> Below my settings.
>
> frontend haproxy
>   option forwardfor
>   bind server.lab.local:9191
>   default_backend backend_servers
>
> backend backend_servers
>   server server1 192.168.239.151:9090 check
>   server server2 192.168.239.152:9090 check
>   server serverBKP 192.168.17.2:9090 backup
>
> Thank you for your help
>
>
>
>
> --
>
>



[PATCH] BUG/MINOR: mailers: negotiate SMTP, not ESMTP

2022-02-17 Thread Lukas Tribus
As per issue #1552 the mailer code currently breaks on ESMTP multiline
responses. Let's negotiate SMTP instead.

Should be backported to 2.0.
---
 src/mailers.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/mailers.c b/src/mailers.c
index 3d01d7532..34eaa5bb6 100644
--- a/src/mailers.c
+++ b/src/mailers.c
@@ -195,7 +195,7 @@ static int enqueue_one_email_alert(struct proxy *p, struct server *s,
 		goto error;
 
 	{
-		const char * const strs[4] = { "EHLO ", p->email_alert.myhostname, "\r\n" };
+		const char * const strs[4] = { "HELO ", p->email_alert.myhostname, "\r\n" };
 		if (!add_tcpcheck_send_strs(&alert->rules, strs))
 			goto error;
 	}
-- 
2.17.1




Re: haproxy in windows

2022-02-10 Thread Lukas Tribus
I'd suggest you give WSL/WSL2 a try.

Lukas

On Thu, 10 Feb 2022 at 11:25, Gowri Shankar  wrote:
>
> I'm trying to install HAProxy for load balancing for my servers, but I'm
> not able to install it from my Windows system. Is HAProxy available for
> Windows? Please help us with documentation.
>
>
>
>
>
>
>
>



Re: 2.0.26 breaks authentication

2022-01-18 Thread Lukas Tribus
On Mon, 17 Jan 2022 at 19:37,  wrote:
>
> Hi
>
> Configuration uses 'no option http-use-htx' in defaults because of case
> insensitivity.
> Statistics path haproxy?stats is behind simple username/password and
> both credentials are specified in config.
> When accessing haproxy?stats, 2.0.25 works fine, but 2.0.26 returns 401:

Confirmed and filed:
https://github.com/haproxy/haproxy/issues/1516

Bug will be fixed, but for the long term:

- the legacy HTTP code is gone from newer haproxy branches, 'no option
http-use-htx' is no more
- in HTX mode, if you have non-compliant clients or servers, use
h1-case-adjust to work around those case problems
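A sketch of the h1-case-adjust workaround in HTX mode (the header name and section names are illustrative, not taken from the reported configuration):

```
global
    # map the lowercase wire form back to the casing the peer expects
    h1-case-adjust content-length Content-Length

frontend fe_main
    bind :80
    # re-apply the adjusted casing in responses to non-compliant clients
    option h1-case-adjust-bogus-clients
```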


Regards,

Lukas



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Lukas Tribus
On Mon, 13 Dec 2021 at 19:51, Valters Jansons  wrote:
>
> Is this thread really "on-topic" for HAProxy?
>
> Attempts to mitigate Log4Shell at HAProxy level to me feel similar
> to.. looking at a leaking roof of a house and thinking "I should put
> an umbrella above it, so the leak isn't hit by rain". Generally, it
> might work, but it's not something that you can expect to hold up in
> the long run, and it's not something construction folks would advise.

This is about reducing the attack surface temporarily.

I would rather avoid thousands of euros of water damage in my house or
millions of dollars of damage at my employer, just because a
contractor can't immediately provide a long term fix. A temporary and
incomplete mitigation is better than nothing at all, that doesn't mean
it's an alternative to properly fixing the issue.



> So just patch/update your vulnerable applications; and where vendors
> provide mitigation steps - apply those instead.

That is often easier said than done; especially when there is no time.


Lukas



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Lukas Tribus
On Mon, 13 Dec 2021 at 14:43, Aleksandar Lazic  wrote:
> Well I go the other way around.
>
> The application must know what data are allowed, verify the input and if the 
> input is not valid, discard it.

You clearly did not understand my point so let me try to phrase it differently:

The log4j vulnerability is about "allowed data" triggering a software
vulnerability which was impossible to predict.


Lukas



Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Lukas Tribus
On Mon, 13 Dec 2021 at 13:25, Aleksandar Lazic  wrote:
> 1. Why is input from outside of the application passed unchecked to the
> logging library!

Because you can't predict the future.

When you know that your backend is SQL, you escape what's necessary to
avoid SQL injection (or use prepared statements) before sending
commands against the database.
When you know your output is HTML, you escape HTML special characters,
so untrusted inputs can't inject HTML tags.

That's what input validation means.

How exactly do you verify and sanitise inputs to protect against an
unknown vulnerability with an unknown syntax in a logging library that
is supposed to handle all strings just fine? You don't, it doesn't
work this way, and that's not what input validation means.


Lukas



[PATCH] DOC: config: fix error-log-format example

2021-12-08 Thread Lukas Tribus
In commit 6f7497616 ("MEDIUM: connection: rename fc_conn_err and
bc_conn_err to fc_err and bc_err"), fc_conn_err became fc_err, so
update this example.
---
Should be backported to 2.5.
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1e049012b..b8a40d574 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -21405,7 +21405,7 @@ would have passed through a successful stream, hence will be available as
 regular traffic log (see option httplog or option httpslog).
 
# detailed frontend connection error log
-   error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_conn_err]/\
+   error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_err]/\
  %[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] \
  %[ssl_fc_sni]/%sslv/%sslc"
 
-- 
2.17.1




Re: [PATCH] DOC: config: retry-on list is space-delimited

2021-12-08 Thread Lukas Tribus
Hello,

On Wed, 8 Dec 2021 at 17:50, Tim Düsterhus  wrote:
>
> Lukas,
>
> On 12/8/21 11:33 AM, Lukas Tribus wrote:
> > We are using comma-delimited list for init-addr for example, let's
> > document that this is space-delimited to avoid the guessing game.
>
> Shouldn't this rather be fixed by unifying the delimiter instead of
> updating the docs? e.g. add support for the comma as the delimiter and
> then deprecate the use of spaces with a warning?

I agree, but I'm also not able to contribute more than a doc change here.

Also there is more than just those 2 uses of lists, here just a few,
certainly incomplete, some space delimited, some comma delimited, some
undocumented:

user/group
ssl-engine <name> [algo <comma-separated list of algorithms>]
wurfl-information-list [<capability>]*
51degrees-property-name-list [<string> ...]


So first of all we would have to find all existing lists users in the
haproxy configuration and then make a determination about what to do
(and which parts to touch). That requires more work than a single line
doc patch, which is all I can contribute at the moment.


Lukas



Re: [ANNOUNCE] haproxy-2.5.0

2021-12-08 Thread Lukas Tribus
Hello Cyril,

On Tue, 23 Nov 2021 at 17:18, Willy Tarreau  wrote:
>
> Hi,
>
> HAProxy 2.5.0 was released on 2021/11/23. It added 9 new commits after
> version 2.5-dev15, fixing minor last-minute details (bind warnings
> that turned to errors, and an incorrect free in the backend SSL cache).

could you run haproxy-dconv for haproxy 2.5 again? The last update is
from May and lots of doc updates (regarding new features) have been
submitted since then.

You could also add the new 2.6-dev branch at that point.


Thank you!

Lukas



[PATCH] DOC: config: retry-on list is space-delimited

2021-12-08 Thread Lukas Tribus
We are using comma-delimited list for init-addr for example, let's
document that this is space-delimited to avoid the guessing game.
---
 doc/configuration.txt | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1e049012b..c810fa918 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -10124,17 +10124,18 @@ retries <value>
   See also : "option redispatch"
 
 
-retry-on [list of keywords]
+retry-on [space-delimited list of keywords]
   Specify when to attempt to automatically retry a failed request.
   This setting is only valid when "mode" is set to http and is silently ignored
   otherwise.
   May be used in sections:defaults | frontend | listen | backend
  yes   |no|   yes  |   yes
   Arguments :
-  <keywords>  is a list of keywords or HTTP status codes, each representing a
-              type of failure event on which an attempt to retry the request
-              is desired. Please read the notes at the bottom before changing
-              this setting. The following keywords are supported :
+  <keywords>  is a space-delimited list of keywords or HTTP status codes, each
+              representing a type of failure event on which an attempt to
+              retry the request is desired. Please read the notes at the
+              bottom before changing this setting. The following keywords are
+              supported :
 
   none  never retry
 
@@ -10207,6 +10208,9 @@ retry-on [list of keywords]
 
   The default is "conn-failure".
 
+  Example:
+retry-on 503 504
+
   See also: "retries", "option redispatch", "tune.bufsize"
 
 server  [:[port]] [param*]
-- 
2.17.1




Re: How to compile with packaged openssl when custom openssl installed?

2021-11-03 Thread Lukas Tribus
Use the instructions in INSTALL to build openssl statically. Building
and installing a custom shared build of openssl on a OS is something
that I'd suggest you avoid, because it will become complicated.

Lukas



Re: Haproxy + LDAPS+ SNI

2021-11-03 Thread Lukas Tribus
Hello Ben,

On Wed, 3 Nov 2021 at 12:55, Ben Hart  wrote:
>
> Thanks again Lukas!
> So the server directive's use of a cert or CA file is only to
> verify the identity of the server in question.

No, "crt" (a certificate including private key) and "ca-file" (the
public certificate of a CA) are two completely different things (see
below).


> So the SSL crt speciied in the frontend, does that secure only the connection 
> to Haproxy

"Secure" is the wrong word. It authenticates the server to the far end
client connecting to port 443 on haproxy. It has nothing to do with
backend traffic.


>or is it passed-through to the server connection as well?  I might be 
>misunderstanding how this part of Haproxy works fundamentally...

No, there is nothing passing-through to the backend/backend-servers.


CA (as in "ca-file") means Certificate authority (a public key of the
CA certificate), and it is required to verify the certificate on the
other side:

- for the frontend this is required when you are using client
certificate authentication (you are not)
- for the backend this means that this CA is used to verify the
servers certificate

Certificate (as in "crt") is a certificate including a private key,
and is therefore the *local* certificate:

- for the frontend this is about the standard server certificate that
haproxy responds with on port 443 (the classic SSL configuration)
- for the backend, this is about the case when you need haproxy to
authenticate with a "client" certificate (in this case, haproxy is the
client) against the backend server

In your case, only a certificate ("crt") on the frontend, and a CA
("ca-file") on the backend is necessary, as you are not using mutual
SSL certificate authentication.
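As a sketch of that split (all names, addresses and paths hypothetical): "crt" goes on the frontend bind line, "ca-file" plus "verify" on the backend server line:

```
frontend fe_ldaps
    mode tcp
    # "crt": haproxy's own certificate + private key, shown to clients
    bind :636 ssl crt /etc/haproxy/certs/haproxy.pem
    default_backend be_ldaps

backend be_ldaps
    mode tcp
    # "ca-file": CA used to verify the LDAP server's certificate;
    # "sni" supplies the name to match (otherwise use "verifyhost")
    server ldap1 ldap1.example.com:636 ssl verify required ca-file /etc/haproxy/certs/internal-ca.pem sni str(ldap1.example.com)
```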


Lukas



Re: Haproxy + LDAPS+ SNI

2021-11-03 Thread Lukas Tribus
Hello Ben,


On Wed, 3 Nov 2021 at 03:54, Ben Hart  wrote:
>
> I wonder, can I ask if the server directives are correct insofar as
> making a secured connection to the backend server entries?
>
> I'm told that HAP might be connecting by IP in which case the
> SSL cert would be useless

The documentation of the verify keyword in the server section clarifies this:

http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-verify

"The certificate provided by the server is verified using CAs from 'ca-file' and
optional CRLs from 'crl-file' after having checked that the names provided in
the certificate's subject and subjectAlternateNames attributes match either
the name passed using the "sni" directive, or if not provided, the static
host name passed using the "verifyhost" directive. When no name is found, the
certificate's names are ignored. For this reason, without SNI it's important
to use "verifyhost".


Lukas



Re: Haproxy + LDAPS+ SNI

2021-11-02 Thread Lukas Tribus
Hello,



On Tue, 2 Nov 2021 at 21:24, Ben Hart  wrote:
>
>  In the config (pasted here
> https://0bin.net/paste/1aOh1F4y#qStfT0m0mER3rhI3DonDbCsr0NRmVuH9XiwvagEkAiE)
> My questions surround the syntax of the config file..

Most likely those clients don't send SNI. Capture the SSL handshake
and verify to make sure.

Although you don't need the "tcp-request*" keywords (we are not
extracting SNI "manually" from a TCP connection buffer, but locally
deciphering it and accessing it through the OpenSSL API), I don't see
any obvious errors in your configuration.


Regards,

Lukas



Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-28 Thread Lukas Tribus
On Thu, 28 Oct 2021 at 21:20, Shawn Heisey  wrote:
>
> On 10/28/21 10:02 AM, Lukas Tribus wrote:
> > You seem to be trying very hard to find a problem where there is none.
> >
> > Definitely do NOT overwrite CPU flags in production. This is to *test*
> > AES acceleration, I put the link to the blog post in there for
> > context, not because I think you need to force this on.
>
> I wouldn't call this production.  It's the server in my basement. It
> runs most of my personal websites.  I do my experimentation there.  I'm
> OK with those experiments causing the occasional problem, because for
> the most part I know how to fix it if I make a mistake.

I get it. Despite this, I don't want you to make matters worse. Also
other people may read this thread and try the same.



> I did just think of a way that I MIGHT be able to test. Time a simple
> shell script using wget to hit a tiny static web page using https 1
> times.  For that test, the run with haproxy started normally actually
> took longer:

No, that's not the way to test AES-NI. AES-NI doesn't help TLS
handshakes at all. Testing many handshakes and downloads with small
files is exactly the case where AES-NI won't improve anything.

You would have to run a single request causing a large download, and
run haproxy through a cpu profiler, like perf, and compare outputs.

Make sure you run wget with -O /dev/null and run it on a
different box, so it won't steal CPU from haproxy. Also make sure an
AES cipher is actually negotiated.


Lukas



Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-28 Thread Lukas Tribus
On Thu, 28 Oct 2021 at 15:49, Shawn Heisey  wrote:
>
> On 10/28/21 7:34 AM, Shawn Heisey wrote:
> > Does haproxy's use of openssl turn on the same option that the
> > commandline does with the -evp argument?  If it does, then I think
> > everything is probably OK.
>
>
> Running "grep -r EVP ." in the haproxy source tree turns up a lot of
> hits in the TLS/SSL code.  So I think that haproxy is most likely using
> EVP, and since I am running haproxy on bare metal and not in a VM (which
> might mask the aes CPU flag), it probably is using acceleration.  Just
> in case, I did add the openssl bitmap environment variable (the one with
> + instead of ~) to my haproxy systemd unit.

You seem to be trying very hard to find a problem where there is none.

Definitely do NOT overwrite CPU flags in production. This is to *test*
AES acceleration, I put the link to the blog post in there for
context, not because I think you need to force this on.


You cannot compare command line arguments of an openssl tool with
openssl library API calls, those are two different things.

If this keeps you up at night, I'd suggest you ask on the
openssl-users mailing list for clarification, or set brakepoints in
gdb and debug openssl when running from haproxy, or find a platform
where you have both a CPU with and without aesni support, and compile
openssl and haproxy with aesni and then move the executable over. It
will be a waste of your time though.


Lukas



Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-28 Thread Lukas Tribus
On Thu, 28 Oct 2021 at 08:31, Lukas Tribus  wrote:
>
> Hi,
>
> On Thursday, 28 October 2021, Shawn Heisey  wrote:
>>
>> On 10/27/2021 2:54 PM, Lukas Tribus wrote:
>>>
>>> I'd be surprised if the OpenSSL API calls we are using doesn't support 
>>> AES-NI.
>>
>>
>> Honestly that would surprise me too.  But I have no idea how to find out 
>> whether it's using the acceleration or not, and the limited (and possibly 
>> incorrect) evidence I had suggested that maybe it was disabled by default, 
>> so I wanted to ask the question.  I have almost zero knowledge about openssl 
>> API or code, so I can't discern the answer from haproxy code.
>
>
>
> You want evidence.
>
> Then get a raspberry pi, and run haproxy manually, fake the cpu
> flag aes-ni and it should crash when using aes acceleration,
> because the cpu doesn't support it.

For some reason, openssl itself doesn't crash on my raspberry pi:
OPENSSL_ia32cap="+0x202" openssl speed -elapsed -evp aes-128-gcm

Most likely openssl is compiled without aes-ni support here, so the
test doesn't work.

Lukas



Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-27 Thread Lukas Tribus
Hi,

On Thursday, 28 October 2021, Shawn Heisey  wrote:

> On 10/27/2021 2:54 PM, Lukas Tribus wrote:
>
>> I'd be surprised if the OpenSSL API calls we are using doesn't support
>> AES-NI.
>>
>
> Honestly that would surprise me too.  But I have no idea how to find out
> whether it's using the acceleration or not, and the limited (and possibly
> incorrect) evidence I had suggested that maybe it was disabled by default,
> so I wanted to ask the question.  I have almost zero knowledge about
> openssl API or code, so I can't discern the answer from haproxy code.



You want evidence.

Then get a raspberry pi, and run haproxy manually, fake the cpu flag aes-ni
and it should crash when using aes acceleration, because the cpu doesn't
support it.

https://romanrm.net/force-enable-openssl-aes-ni-usage


Lukas


Re: Does haproxy utlize openssl with AES-NI if present?

2021-10-27 Thread Lukas Tribus
Hello,


On Wed, 27 Oct 2021 at 22:17, Shawn Heisey  wrote:
>
> I am building haproxy from source.
>
> For some load balancers that I used to manage, I also built openssl from
> source, statically linked, and compiled haproxy against that, because
> the openssl included with the OS (CentOS 6 if I recall correctly) was
> ANCIENT.  I don't know how to get haproxy to use the alternate openssl
> at runtime, which is why I compiled openssl to be linked statically.
>
> For my own servers, I am running Ubuntu 20, which has a reasonably
> current openssl version already included.
>
> I know that openssl on Ubuntu is compiled with support for Intel's
> AES-NI instructions for accelerating crypto.  That can be seen by
> running these two commands on a system with a proper CPU and comparing
> the reported speeds:
>
> openssl speed -elapsed aes-128-cbc
> openssl speed -elapsed -evp aes-128-cbc
>
> But the fact that it requires the -evp arg on the commandline to get the
> acceleration makes me wonder if maybe openssl 1.1.1 has CPU acceleration
> turned off by default, requiring an explicit enable to use it.

You are not comparing aes-ni on vs aes-ni off, you are comparing two
very different crypto interfaces, one supports aes-ni the other
doesn't, but there is also a big difference in the interface speed
itself.

The proper comparison is:

# aes-ni as per cpu flags:
openssl speed -elapsed -evp aes-128-cbc

# aes-ni cpu flag hidden:
OPENSSL_ia32cap="~0x202" openssl speed -elapsed -evp aes-128-cbc


I'd be surprised if the OpenSSL API calls we are using doesn't support AES-NI.


Lukas



PCRE (1) end of life and unmaintained

2021-10-18 Thread Lukas Tribus
Hello,

PCRE (1) is end of life and unmaintained now (see below). Not a huge
problem, because PCRE2 has been supported since haproxy 1.8.

However going forward (haproxy 2.5+) should we:

- warn when compiling with PCRE?
- remove PCRE support?
- both, but start with a warning in 2.5?
- maintain PCRE support as is?


PCRE is end of life and unmaintained :

from http://pcre.org/

> The older, but still widely deployed PCRE library, originally released in 
> 1997,
> is at version 8.45. This version of PCRE is now at end of life, and is no
> longer being actively maintained. Version 8.45 is expected to be the final
> release of the older PCRE library


> the older, unmaintained PCRE library from an unofficial mirror at SourceForge:

> bugs in the legacy PCRE release are unlikely to be looked at or fixed; and
> please don't use the SourceForge bug tracking system, as it is not
> normally monitored.


from the main PCRE author at:
https://github.com/PhilipHazel/pcre2/issues/26#issuecomment-944916343

> For 6 years after the release of PCRE2 we continued to maintain PCRE1
> before declaring End-of-Life. There will never be another release. If people
> want to continue to use the legacy version, that's fine, of course, but they
> must arrange their own maintenance. I would venture to suggest that the
> amount of effort needed to do any maintenance is probably less that
> what is needed to convert to PCRE2. A number of issues that caused
> problems in PCRE1 have already been addressed in PCRE2.
>
> I am sorry to have to sound harsh here, but the maintainers are just
> volunteers with only so much time for this work, and furthermore, after
> nearly 7 years I for one have forgotten how the PCRE1 code works. I
> am therefore going to close this issue "invalid" and "wontfix".



Lukas



Re: CVE-2021-40346, the Integer Overflow vulnerability

2021-09-08 Thread Lukas Tribus
Hello Jonathan,

On Wed, 8 Sept 2021 at 21:28, Jonathan Greig  wrote:
>
> Hello! My name is Jonathan Greig and I'm a reporter for ZDNet. I'm
> writing a story about CVE-2021-40346 and I was wondering if
> HAProxy had any comment about the vulnerability.

Just making sure you are aware that this is a public mailing list:
https://www.mail-archive.com/haproxy@formilux.org/msg41140.html

You can find the CVE-2021-40346 announcement with comments here on
this mailing list:
https://www.mail-archive.com/haproxy@formilux.org/msg41114.html

Short blog article on haproxy.com:
https://www.haproxy.com/blog/september-2021-duplicate-content-length-header-fixed/

Long Jfrog article with (lots) of technical details:
https://jfrog.com/blog/critical-vulnerability-in-haproxy-cve-2021-40346-integer-overflow-enables-http-smuggling/


Hope this helps,
Lukas



Re: double // after domain causes ERR_HTTP2_PROTOCOL_ERROR after upgrade to 2.4.3

2021-08-20 Thread Lukas Tribus
On Fri, 20 Aug 2021 at 13:08, Илья Шипицин  wrote:
>
> double slashes behaviour is changed in BUG/MEDIUM:
> h2: match absolute-path not path-absolute for :path · haproxy/haproxy@46b7dff 
> (github.com)

Actually, I think the patch you are referring to would *fix* this
particular issue, as it was committed AFTER the last releases:

https://github.com/haproxy/haproxy/commit/46b7dff8f08cb6c5c3004d8874d6c5bc689a4c51

It was this fix that probably caused the issue:
https://github.com/haproxy/haproxy/commit/4b8852c70d8c4b7e225e24eb58258a15eb54c26e


Using the latest git, applying the patch manually or running a
20210820 snapshot would fix this.


Lukas



Re: [ANNOUNCE] HTTP/2 vulnerabilities from 2.0 to 2.5-dev

2021-08-18 Thread Lukas Tribus
On Thursday, 19 August 2021, James Brown  wrote:

> Are there CVE numbers coming for these vulnerabilities?
>
>

CVE-2021-39240: -> 2) Domain parts in ":scheme" and ":path"
CVE-2021-39241: -> 1) Spaces in the ":method" field
CVE-2021-39242: -> 3) Mismatch between ":authority" and "Host"


Lukas


Re: HAProxy Network Namespace Support issues, and I also found a security flaw.

2021-07-19 Thread Lukas Tribus
Hello,


On Tue, 20 Jul 2021 at 08:13, Peter Jin  wrote:
> 2. There is a stack buffer overflow found in one of the files. Not
> disclosing it here because this email will end up on the public mailing
> list. If there is a "security" email address I could disclose it to,
> what is it?

It's secur...@haproxy.org, it's somehow well hidden in doc/intro.txt
(that is the *starter* guide).

I would definitely suggest putting it on the website haproxy.org, and
in the repository move it to a different file, like MAINTAINERS.


Lukas



Re: Replying to spam [was: Some Spam Mail]

2021-07-15 Thread Lukas Tribus
On Thu, 15 Jul 2021 at 11:27, Илья Шипицин  wrote:
>
> I really wonder what they will suggest.
>
> I'm not a spam source, since we do not have "opt in" policy, anybody can send 
> mail. so they do.
> please address the issue properly, either change list policy or be calm with 
> my experiments.

It's about common sense, not list policy. Please do your SPAM
responding experiments without the list in CC.


Thank you,

Lukas



Re: set mss on backend site on version 1.7.9

2021-07-13 Thread Lukas Tribus
Hello Stefan,

On Tue, 13 Jul 2021 at 14:10, Stefan Fuhrmann
 wrote:
>
> Hello all,
>
>
> First, we can not change to newer version so fast within the project.
>
> We are having on old installation of haproxy (1.7.9) and we have the
> need to configure tcp- mss- value on backend site.
>
>
>
> Is that possible to change the mss- value on backend site? How?

No.

You can set the MSS on the frontend socket, but not on the backend socket.

You need to work with your OS/kernel configuration.
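For reference, the frontend-side knob mentioned above is the "mss" bind parameter. A minimal sketch (the port and the value 1400 are illustrative):

```
frontend fe_main
    # clamp the TCP MSS advertised on the *frontend* listening socket;
    # there is no equivalent server-side keyword
    bind :80 mss 1400
    default_backend be_app
```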


Lukas



Re: [PATCH 0/1] Replace issue templates by issue forms

2021-06-23 Thread Lukas Tribus
Hello,


On Wed, 23 Jun 2021 at 22:25, Willy Tarreau  wrote:
>
> Hi Tim, Max,
>
> On Wed, Jun 23, 2021 at 09:38:12PM +0200, Tim Duesterhus wrote:
> > Hi Willy, Lukas, List!
> >
> > GitHub finally launched their next evolution of issue templates, called 
> > issue
> > forms, as a public beta: 
> > https://github.blog/changelog/2021-06-23-issues-forms-beta-for-public-repositories/
> >
> > Instead of prefilling the issue creation form with some markdown that can be
> > ignored or accidentally deleted issue forms will create different textareas,
> > automatically formatting the issue correctly. The end result will be regular
> > markdown that can be edited as usual.
> >
> > Beta implies that they might still slightly change in the future, possibly
> > requiring some further adjustments. However as the final result no longer
> > depends on the form itself we are not locking ourselves into some feature
> > for eternity.
> >
> > Max and I worked together to migrate the existing templates to issue forms,
> > cleaning up the old stuff that is no longer required.
> >
> > You can find an example bug report here:
> >
> > https://github.com/TimWolla/haproxy/issues/7
> >
> > It looks just like before!
>
> Indeed, and I like the issue description and the proposed fix :-)
>
> > The new forms can be tested here: 
> > https://github.com/TimWolla/haproxy/issues/new/choose.
> >
> > I have deleted the old 'Question' template, because it no longer is 
> > required,
> > as the user can't simply delete the template from the field when there's
> > separate fields :-)
>
> At first glance it indeed looks better than before. I'm personally fine
> with the change. I'll wait for Lukas' Ack (or comments) before merging
> though.

Thanks for this, I like it!

What I'm missing in the new UI is the possibility for the user to
preview the *entire* post, I'm always extensively using preview
features everywhere. So this feels like a loss, although the user can
preview the content of the individual input box.

But that's not reason enough to hold up this change, I just wish the
"send us your feedback" button [1] would actually work.

Full Ack from me for this change, this will be very beneficial as we
get higher quality reports and people won't be required to navigate
through raw markdown, which is not user-friendly at all.



cheers,
lukas



[1] 
https://github.blog/changelog/2021-06-23-issues-forms-beta-for-public-repositories/



Re: SSL Labs says my server isn't doing ssl session resumption

2021-06-20 Thread Lukas Tribus
Hello Shawn,

On Sun, 20 Jun 2021 at 14:03, Shawn Heisey  wrote:
>
> On 6/20/2021 1:52 AM, Lukas Tribus wrote:
> > Can you try disabling threading, by putting nbthread 1 in your config?
>
> That didn't help.  From testssl.sh:
>
>   SSL Session ID support   yes
>   Session Resumption   Tickets: yes, ID: no

It's a haproxy bug, affecting 2.4 releases, I've filed an issue in our tracker:

https://github.com/haproxy/haproxy/issues/1297



Willy wrote:
> I don't know if the config is responsible for this but I've just tested
> on haproxy.org and it does work there:
>
>   Session resumption (caching)  Yes
>   Session resumption (tickets)  Yes

demo.haproxy.org suggests the code running there is quite old though:

# curl -s http://demo.haproxy.org/ | grep released
<a href="http://www.haproxy.org/" style="text-decoration: none;">HAProxy
version 1.7.12-84aad5b, released 2019/10/25</a>
#



Regards,
Lukas



Re: SSL Labs says my server isn't doing ssl session resumption

2021-06-20 Thread Lukas Tribus
Hello Shawn,


On Sun, 20 Jun 2021 at 08:39, Shawn Heisey  wrote:
> This is what SSL Labs now says for the thing that started this thread:
>
> Session resumption (caching)No (IDs assigned but not accepted)
> Session resumption (tickets)Yes
>
> I'd like to get the caching item fixed, but I haven't figured that out
> yet.

Can you try disabling threading, by putting nbthread 1 in your config?

An upgrade to 2.4.1 would also be advisable, it actually fixes a
locking issue with SSL session cache (not sure whether that could
really be the root cause though).


Lukas



Re: SSL Labs says my server isn't doing ssl session resumption

2021-06-16 Thread Lukas Tribus
On Wed, 16 Jun 2021 at 17:03, Илья Шипицин  wrote:
>
> ssl sessions are for tls1.0  (disabled in your config)
> tls1.2 uses tls tickets for resumption

That is not true, you can disable TLS tickets and still get resumption
on TLSv1.2. Disabling TLSv1.0 does not mean disabling Session ID
caching.


What do you see with testssl.sh ?


Lukas



Re: [EXTERNAL] Re: built in ACL, REQ_CONTENT

2021-06-08 Thread Lukas Tribus
Hello,


On Tue, 8 Jun 2021 at 17:36, Godfrin, Philippe E
 wrote:
>
> Certainly,
>
> Postrgres sends this message across the wire:
>
> Jun  2 21:14:40 ip-172-31-77-193 haproxy[9031]: 0x00: 00 00 00 4c 00 03 00 00   75 73 65 72 00 74 73 64   |...Luser.tsd|
> Jun  2 21:14:40 ip-172-31-77-193 haproxy[9031]: 0x10: 62 00 64 61 74 61 62 61   73 65 00 74 73 64 62 00   |b.database.tsdb.|
> Jun  2 21:14:40 ip-172-31-77-193 haproxy[9031]: 0x20: 61 70 70 6c 69 63 61 74   69 6f 6e 5f 6e 61 6d 65   |application_name|
> Jun  2 21:14:40 ip-172-31-77-193 haproxy[9031]: 0x30: 00 70 73 71 6c 00 63 6c   69 65 6e 74 5f 65 6e 63   |.psql.client_enc|
> Jun  2 21:14:40 ip-172-31-77-193 haproxy[9031]: 0x40: 6f 64 69 6e 67 00 55 54   46 38 00 00               |oding.UTF8..|
>
>
>
> Bytes 8-12 are "user\0"; byte 13 starts the userid. I would like to be
> able to test that userid and make a routing decision on it. This is
> what the HAProxy docs suggest:
>
>
>
> acl check-rw req.payload(8,32),hex -m sub  757365720074736462727700

And I don't see how this is supposed to match:

62727700 is not what's in your trace.

Is the username tsdb, like in your trace, or is it tsdbrw, like in your ACL?


Also, put a "tcp-request inspect-delay 5s" in front of the ACL (you
can optimize performance later) and share the entire configuration.
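Putting those pieces together, a minimal sketch of the frontend (backend names are illustrative, and the hex string must encode the actual username bytes seen on the wire):

```
frontend pg_in
    mode tcp
    bind :5432
    # buffer enough of the PostgreSQL startup message before the ACL runs
    tcp-request inspect-delay 5s
    # match "user\0tsdbrw\0" (hex-encoded) anywhere in the payload window
    acl check-rw req.payload(8,32),hex -m sub 757365720074736462727700
    use_backend pg_rw if check-rw
    default_backend pg_ro
```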


Please try to ask the actual question directly next time, so we can
help you right away (https://xyproblem.info/).



Thanks,
Lukas



Re: built in ACL, REQ_CONTENT

2021-06-07 Thread Lukas Tribus
Hello,

On Mon, 7 Jun 2021 at 14:51, Godfrin, Philippe E
 wrote:
>
> Greetings!
>
> I can’t seem to find instructions on how to use this builtin ACL. Can someone 
> point me in the right direction, please?

There is nothing specific about it; you use it just like every other ACL.

http-request deny if REQ_CONTENT

http-request deny unless REQ_CONTENT


 Lukas



Re: how to write to a file safely in haproxy

2021-05-26 Thread Lukas Tribus
Hello,

On Wed, 26 May 2021 at 13:29, reshma r  wrote:
>
> Hello all,
> Periodically I need to write some configuration data to a file.
> However I came across documentation that warned against writing to a file at 
> runtime.
> Can someone give me advice on how I can achieve this safely?

You'll have to elaborate on what you are talking about.

Are you talking about writing to the filesystem from a Lua script or
other runtime code within haproxy? Then yes, don't do it: it will
block the event loop and you will be in a world of hurt.
Are you talking about writing and changing the configuration file
prior to a reload, manually or from an external process? That's not a
problem at all.

The issue is blocking filesystem access in the event-loop of the
haproxy process itself.
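As an illustration of the second (safe) case, a sketch of an external update flow; the paths, the generator command and the service name are all assumptions:

```
# generate the new configuration from an external process
generate-config > /etc/haproxy/haproxy.cfg.new

# validate it before putting it live, then reload
haproxy -c -f /etc/haproxy/haproxy.cfg.new \
  && mv /etc/haproxy/haproxy.cfg.new /etc/haproxy/haproxy.cfg \
  && systemctl reload haproxy
```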

Explaining the problem you are trying to solve will get you accurate
proposals and solutions faster than asking about one particular
solution (XY problem).


Lukas



Re: haproxy hung with CPU usage at 100% Heeeelp, please!!!

2021-05-14 Thread Lukas Tribus
The first thing I'd try is to disable multithreading (by putting
nbthread 1 in the global section of the configuration), to see if that
helps.
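A minimal sketch of that change:

```
global
    # temporarily disable multithreading to rule out thread-related bugs
    nbthread 1
```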


Lukas



Re: Table sticky counters decrementation problem

2021-03-30 Thread Lukas Tribus
Hi Willy,

On Tue, 30 Mar 2021 at 17:56, Willy Tarreau  wrote:
>
> Guys,
>
> out of curiosity I wanted to check when the overflow happened:
>
> $ date --date=@$(( (($(date +%s) * 1000) & -0x80000000) / 1000 ))
> Mon Mar 29 23:59:46 CEST 2021
>
> So it only affects processes started since today. I'm quite tempted not
> to wait further and to emit 2.3.9 urgently to fix this before other
> people get trapped after reloading their process. Any objection ?

No objection, but also: what a coincidence. I suggest you get a
lottery ticket today.
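The arithmetic in the quoted one-liner can be sanity-checked in a few lines, assuming the mask clears the low 31 bits (i.e. an internal millisecond clock wrapping every 2^31 ms, roughly 24.9 days):

```python
# Find the most recent 2^31-ms boundary at or before a given Unix
# timestamp, mirroring the quoted shell one-liner:
#   date --date=@$(( (($(date +%s) * 1000) & -0x80000000) / 1000 ))
WRAP_MS = 1 << 31  # 2^31 ms, about 24.85 days

def last_wrap(epoch_s: int) -> int:
    """Epoch second of the last wrap boundary at or before epoch_s."""
    return ((epoch_s * 1000) & -WRAP_MS) // 1000

# For 2021-03-30 15:56:00 UTC (around when the quoted mail was sent)
# this lands on 2021-03-29 21:59:46 UTC, i.e. 23:59:46 CEST -- the
# exact date printed above:
print(last_wrap(1617119760))  # -> 1617055186
```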


cheers,
lukas



Re: Stick table counter not working after upgrade to 2.2.11

2021-03-30 Thread Lukas Tribus
Hi Willy,


On Tue, 23 Mar 2021 at 09:32, Willy Tarreau  wrote:
>
> Guys,
>
> These two patches address it for me, and I could verify that they apply
> on top of 2.2.11 and work there as well. This time I tested with two
> counters at different periods 500 and 2000ms.

Both Sander and Thomas now report that the issue is at least partially
still there in 2.3.8 (which has all fixes, or 2.2.11 with patches),
and that downgrading to 2.3.7 (which has none of the commits) works
around the issue:

https://www.mail-archive.com/haproxy@formilux.org/msg40093.html
https://www.mail-archive.com/haproxy@formilux.org/msg40092.html


Let's not yet publish stable bugfix releases, until this is properly diagnosed.



Lukas



Re: Table sticky counters decrementation problem

2021-03-30 Thread Lukas Tribus
Hello Thomas,


this is a known issue in any release train other than 2.3 ...

https://github.com/haproxy/haproxy/issues/1196

However neither 2.3.7 (does not contain the offending commits), nor
2.3.8 (contains all the fixes) should be affected by this.


Are you absolutely positive that you are running 2.3.8 and not
something like 2.2 or 2.0 ? Can you provide the full output of haproxy
-vv?



Thanks,

Lukas



Re: zlib vs slz (performance)

2021-03-29 Thread Lukas Tribus
Hello,


On Mon, 29 Mar 2021 at 20:54, Илья Шипицин  wrote:
>> > Dear list,
>> >
>> > on browser load (html + js + css) I observe 80% of cpu spent on gzip.
>> > also, I observe that zlib is probably one of the slowest implementation
>> > my personal benchmark correlate with https://github.com/inikep/lzbench
>> >
>> > if so, shouldn't we switch to slz by default ? or am I missing something ?
>>
>> There is no default.
>>
>> zlib is optional.
>> slz is optional.
>>
>> Haproxy is compiled with either zlib, slz or no compression library at all.
>>
>>
>> What specifically are you suggesting to change in the haproxy source tree?
>
>
> for example, docker image
> https://hub.docker.com/_/haproxy

So nothing we control directly.

Docker images, package repositories, etc. This means filing requests
at those places, convincing other people to switch from a well known
library to an unknown library that isn't even packaged yet in most
places, that those maintainers then have to support. I'm not sure how
realistic that is.

Like I said last year, this needs a marketing campaign:
https://www.mail-archive.com/haproxy@formilux.org/msg38044.html


What about the docker images from haproxytech? Are those zlib or slz
based? Perhaps that would be a better starting point?

https://hub.docker.com/r/haproxytech/haproxy-alpine


Lukas



Re: Is there a way to deactivate this "message repeated x times"

2021-03-29 Thread Lukas Tribus
Hello,

On Mon, 29 Mar 2021 at 15:25, Aleksandar Lazic  wrote:
>
> Hi.
>
> I need to create some log statistics with awffull stats and I assume this 
> messages
> means that only one line is written for 3 requests, is this assumption right?
>
> Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ 
> ::::49445 [28/Mar/2021:14:04:07.234] https-in~ be_api/api_prim 
> 0/0/0/13/13 200 2928 - -  930/900/8/554/0 0/0 {|Mozilla/5.0 
> (Macintosh; Intel Mac OS X 10.13; rv:86.0) Gecko/20100101 
> Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET 
> https:/// HTTP/2.0"]
>
> Can this behavior be disabled?

This is not haproxy, this is your syslog server. Refer to the
documentation of the syslog server.
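For example, if the syslog daemon happens to be rsyslog, this grouping is controlled by its repeated-message reduction setting, which can be turned off (a sketch; the config file location varies by distribution):

```
# /etc/rsyslog.conf -- log every message individually instead of
# collapsing duplicates into "message repeated x times"
$RepeatedMsgReduction off
```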


Lukas



Re: zlib vs slz (performance)

2021-03-29 Thread Lukas Tribus
Hi Ilya,


On Mon, 29 Mar 2021 at 15:34, Илья Шипицин  wrote:
>
> Dear list,
>
> on browser load (html + js + css) I observe 80% of cpu spent on gzip.
> also, I observe that zlib is probably one of the slowest implementation
> my personal benchmark correlate with https://github.com/inikep/lzbench
>
> if so, shouldn't we switch to slz by default ? or am I missing something ?

There is no default.

zlib is optional.
slz is optional.

Haproxy is compiled with either zlib, slz or no compression library at all.
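For illustration, that choice is made at build time via make flags (the TARGET value is an assumption about the platform):

```
# build with the slz compression library instead of zlib
make TARGET=linux-glibc USE_SLZ=1

# or with zlib
make TARGET=linux-glibc USE_ZLIB=1
```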


What specifically are you suggesting to change in the haproxy source tree?


Regards,
Lukas



Re: HAProxy proxy protocol

2021-03-28 Thread Lukas Tribus
Double post on discourse, please refrain from this practice in the future!

https://discourse.haproxy.org/t/haproxy-proxy-protocol/6413/2


Thanks,
Lukas



Re: [HAP 2.3.8] Is there a way to see why "" and "SSL handshake failure" happens

2021-03-27 Thread Lukas Tribus
Hello,

On Sat, 27 Mar 2021 at 11:52, Aleksandar Lazic  wrote:
>
> Hi.
>
> I have a lot of such entries in my logs.
>
> ```
> Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 
> [27/Mar/2021:11:48:20.523] https-in~ https-in/ -1/-1/-1/-1/0 0 0 - - 
> PR-- 1041/1011/0/0/0 0/0 ""
> Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 
> [27/Mar/2021:11:48:20.523] https-in~ https-in/ -1/-1/-1/-1/0 0 0 - - 
> PR-- 1041/1011/0/0/0 0/0 ""

Use show errors on the admin socket:
https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.3-show%20errors
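For example, assuming the stats socket lives at /var/run/haproxy.sock (the path depends on the "stats socket" line in your config):

```
# dump the last invalid requests/responses captured per proxy
echo "show errors" | socat stdio /var/run/haproxy.sock
```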


> Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
> [27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure
> Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 
> [27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure

That's currently a pain point:

https://github.com/haproxy/haproxy/issues/693


Lukas


