Re: V2.3 allow use of TLSv1.0
On Thu, 9 Jun 2022 at 08:42, wrote:
>
> Hi,
>
> I need to enable TLS V1.0 because of some legacy clients which have just been
> "discovered" and won't be updated.

Configure "ssl-default-bind-ciphers" as per:

https://ssl-config.mozilla.org/#server=haproxy&version=2.3&config=old&openssl=1.1.1k&guideline=5.6

If you don't allow TLSv1.0 ciphers, TLSv1.0 can't be used. Also it's
possible OpenSSL is so new that it needs additional convincing. Share the
full output of haproxy -vv, including the OpenSSL release, please.

> Can someone tell me what I am missing ? I have found a few messages
> about adding other cipher suites, but nothing led to an improvement.

You will have to share more data: the full output of haproxy -vv and the
full SSL configuration. We can't really troubleshoot without
configurations and exact software releases (OpenSSL).

Lukas
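The advice above boils down to explicitly re-allowing TLSv1.0-capable
ciphers and lowering the protocol floor. A minimal, hypothetical sketch —
the actual cipher string should come from the Mozilla generator linked
above, not from this example:

```
global
    # hypothetical example values; generate the real cipher list with the
    # Mozilla SSL configuration generator ("old" profile)
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:AES128-GCM-SHA256:AES128-SHA:DES-CBC3-SHA
    ssl-default-bind-options ssl-min-ver TLSv1.0
```

Without both the permissive cipher list and the lowered minimum version,
legacy TLSv1.0-only clients cannot complete the handshake.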
Re: Stupid question about nbthread and maxconn
Hello,

> > Let's say we have the following setup.
> >
> > ```
> > maxconn 2
> > nbthread 4
> > ```
> >
> > My understanding is that HAProxy will accept 2 concurrent connections,
> > right? Even when I increase nbthread, HAProxy will *NOT* accept more than
> > 2 concurrent connections, right?

Yes.

> > What confuses me is "maximum per-process" in the maxconn documentation:
> > will every thread handle the maxconn, or is this for the whole HAProxy
> > instance?

Per-process limits apply to processes; they do not apply to threads.
Maxconn is per process. It is NOT per thread. Multithreading solves those
issues.

Lukas
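The per-process semantics above can be illustrated with a minimal
(hypothetical) configuration sketch:

```
global
    maxconn 2    # per process: at most 2 concurrent connections in total
    nbthread 4   # all 4 threads share the same per-process maxconn budget
```

With this configuration, the third concurrent connection waits regardless
of how many threads are configured.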
Re: [ANNOUNCE] haproxy-2.6-dev4
Hello Willy,

On Sat, 26 Mar 2022 at 10:22, Willy Tarreau wrote:
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.

While I agree with the change, what is actually documented is the
previous behavior. So this is a change in behavior, and the documentation
will need updating as well to reflect the new behavior (patch incoming).

I have to say I don't like the idea of backporting such changes. We have
documented and trained users that H2 doesn't respect "timeout
http-keep-alive" and that it uses "timeout client" instead. We even
argued that this is a good thing, because we want H2 connections to stay
up longer.

I suggest not changing documented behavior in bugfix releases of stable
and stable/LTS branches.

cheers,
lukas
[PATCH] DOC: reflect H2 timeout changes
Reverts 75df9d7a7 ("DOC: explain HTTP2 timeout behavior") since H2
connections now respect "timeout http-keep-alive".

If commit 15a4733d5d ("BUG/MEDIUM: mux-h2: make use of http-request and
keep-alive timeouts") is backported, this DOC change needs to be
backported along with it.
---
 doc/configuration.txt | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8385e81be..87ae43809 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13034,8 +13034,6 @@ timeout client
   during startup because it may result in accumulation of expired sessions in
   the system if the system's timeouts are not configured either.
 
-  This also applies to HTTP/2 connections, which will be closed with GOAWAY.
-
   See also : "timeout server", "timeout tunnel", "timeout http-request".
 
@@ -13130,10 +13128,6 @@ timeout http-keep-alive
   set in the frontend to take effect, unless the frontend is in TCP mode, in
   which case the HTTP backend's timeout will be used.
 
-  When using HTTP/2 "timeout client" is applied instead. This is so we can keep
-  using short keep-alive timeouts in HTTP/1.1 while using longer ones in HTTP/2
-  (where we only have one connection per client and a connection setup).
-
   See also : "timeout http-request", "timeout client".
-- 
2.17.1
Re: Is there some kind of program that mimics a problematic HTTP server?
Hello,

Take a look at how we are using tests with vtc/vtest in
doc/regression-testing.txt. Maybe this tool can be useful for your
use case.

Lukas
Re: Question about http compression
Hello,

On Mon, 21 Feb 2022 at 14:25, Tom Browder wrote:
>
> I'm getting ready to try 2.5 HAProxy on my system
> and see http compression is recommended.

I'm not sure we are actively encouraging users to enable HTTP
compression. Where did you see this recommendation?

> From those sources I thought https should not use compression
> because of some known exploit, so I'm not currently using it.

You are talking about BREACH [1], and I'm afraid there is no magic fix
for that. The mitigations on the BREACH website apply.

Lukas

[1] http://www.breachattack.com/#mitigations
Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic
On Sat, 19 Feb 2022 at 18:38, Carlos Renato wrote: > > Yes, > > In stats server2 is DOWN. accept the VM's network card. Provide detailed logs please. Lukas
Re: HAProxy thinks Plex is down when it's not
Hello,

On Sat, 19 Feb 2022 at 17:46, Moutasem Al Khnaifes wrote:
> but for some reason HAProxy thinks that Plex is down

John already explained this perfectly.

> the status page is inaccessible

Your configuration is:

> listen stats
>     bind localhost:1936
[...]
>     stats uri /haproxy?stats

*NEVER* configure a bind line requiring a DNS lookup, especially a
hostname returning both address families. Whether haproxy ends up
listening on 127.0.0.1 or ::1 or something else entirely depends on
compile options (USE_GETADDRINFO) and libc behavior.

If haproxy is listening on 127.0.0.1:1936, then as per your configuration
you'd access the stats page at:

"http://127.0.0.1:1936/haproxy?stats"

But it could be listening on its IPv6 equivalent ::1 as well. I suggest
you change the configuration to your actual intention, which probably is:

bind 127.0.0.1:1936

cheers,
lukas
Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic
On Sat, 19 Feb 2022 at 16:15, Carlos Renato wrote:
>
> Hi Lukas,
>
> Thanks for the reply and willingness to help.
>
> I did a test and it didn't work. I dropped the server2 interface and only
> server1 was UP.
> Traffic continues to exit through the main backend. My wish is that the
> traffic is directed to the backup server.

Did haproxy recognize server2 was down?

With this configuration the backup keyword needs to be removed from
serverBKP (because the rule is implemented with a use_backend directive);
this was wrong in my earlier config.

Lukas
Re: ACL HAPROXY (check servers UP and DOWN) and redirect traffic
Hello,

I suggest you put your backup server in a dedicated backend and select it
in the frontend. I guess the same could be done with use-server in a
single backend, but I feel like this is cleaner:

frontend haproxy
    option forwardfor
    bind server.lab.local:9191
    use_backend backup_servers if { nbsrv(backend_servers) lt 2 }
    default_backend backend_servers

backend backend_servers
    server server1 192.168.239.151:9090 check
    server server2 192.168.239.152:9090 check

backend backup_servers
    server serverBKP 192.168.17.2:9090 backup

Lukas

On Sat, 19 Feb 2022 at 14:17, Carlos Renato wrote:
>
> Can anyone help me?
>
> How to create an ACL to use the backup server if a server goes DOWN. So, if
> the two backend servers are UP, I use the registered servers. If one (only
> one) becomes unavailable, traffic is directed to the backup server.
>
> Below my settings.
>
> frontend haproxy
>     option forwardfor
>     bind server.lab.local:9191
>     default_backend backend_servers
>
> backend backend_servers
>     server server1 192.168.239.151:9090 check
>     server server2 192.168.239.152:9090 check
>     server serverBKP 192.168.17.2:9090 backup
>
> Thank you for your help
[PATCH] BUG/MINOR: mailers: negotiate SMTP, not ESMTP
As per issue #1552 the mailer code currently breaks on ESMTP multiline
responses. Let's negotiate SMTP instead.

Should be backported to 2.0.
---
 src/mailers.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/mailers.c b/src/mailers.c
index 3d01d7532..34eaa5bb6 100644
--- a/src/mailers.c
+++ b/src/mailers.c
@@ -195,7 +195,7 @@ static int enqueue_one_email_alert(struct proxy *p, struct server *s,
 		goto error;
 
 	{
-		const char * const strs[4] = { "EHLO ", p->email_alert.myhostname, "\r\n" };
+		const char * const strs[4] = { "HELO ", p->email_alert.myhostname, "\r\n" };
 		if (!add_tcpcheck_send_strs(&alert->rules, strs))
 			goto error;
 	}
-- 
2.17.1
Re: haproxy in windows
I'd suggest you give WSL/WSL2 a try.

Lukas

On Thu, 10 Feb 2022 at 11:25, Gowri Shankar wrote:
>
> Im trying to install haproxy for loadbalancing for my servers, but im not
> able to install from my windows system. Is there haproxy available for
> windows? please give and help us with documentation.
Re: 2.0.26 breaks authentication
On Mon, 17 Jan 2022 at 19:37, wrote:
>
> Hi
>
> Configuration uses 'no option http-use-htx' in defaults because of case
> insensitivity.
> Statistics path haproxy?stats is behind simple username/password and
> both credentials are specified in config.
> When accessing haproxy?stats, 2.0.25 works fine, but 2.0.26 returns 401:

Confirmed and filed: https://github.com/haproxy/haproxy/issues/1516

The bug will be fixed, but for the long term:

- the legacy HTTP code is gone from newer haproxy branches, so 'no option
  http-use-htx' is no more
- in HTX mode, if you have non-compliant clients or servers, use
  h1-case-adjust to work around those case problems

Regards,
Lukas
Re: Blocking log4j CVE with HAProxy
On Mon, 13 Dec 2021 at 19:51, Valters Jansons wrote:
>
> Is this thread really "on-topic" for HAProxy?
>
> Attempts to mitigate Log4Shell at HAProxy level to me feel similar
> to.. looking at a leaking roof of a house and thinking "I should put
> an umbrella above it, so the leak isn't hit by rain". Generally, it
> might work, but it's not something that you can expect to hold up in
> the long run, and it's not something construction folks would advise.

This is about reducing the attack surface temporarily. I would rather
avoid thousands of euros of water damage in my house, or millions of
dollars of damage at my employer, just because a contractor can't
immediately provide a long term fix.

A temporary and incomplete mitigation is better than nothing at all; that
doesn't mean it's an alternative to properly fixing the issue.

> So just patch/update your vulnerable applications; and where vendors
> provide mitigation steps - apply those instead.

That is often easier said than done, especially when there is no time.

Lukas
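As a rough illustration of such a temporary, incomplete mitigation — this
is not an official or complete rule set; the frontend name, backend name
and patterns are hypothetical, and obfuscated payloads will bypass it:

```
frontend www
    bind :80
    # crude sketch: reject requests whose decoded URL or User-Agent
    # contains a JNDI lookup marker; this only narrows the attack
    # surface temporarily, it is no substitute for patching log4j
    http-request deny if { url,url_dec -m sub -i jndi: }
    http-request deny if { req.hdr(user-agent),url_dec -m sub -i jndi: }
    default_backend app
```

Similar checks would be needed on every other header the application
logs, which is exactly why this can only ever be a stopgap.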
Re: Blocking log4j CVE with HAProxy
On Mon, 13 Dec 2021 at 14:43, Aleksandar Lazic wrote:
> Well I go the other way around.
>
> The application must know what data are allowed, verify the input and if
> the input is not valid discard it.

You clearly did not understand my point, so let me try to phrase it
differently: the log4j vulnerability is about "allowed data" triggering a
software vulnerability which was impossible to predict.

Lukas
Re: Blocking log4j CVE with HAProxy
On Mon, 13 Dec 2021 at 13:25, Aleksandar Lazic wrote:
> 1. Why is a input from out site of the application passed unchecked to
>    the logging library!

Because you can't predict the future.

When you know that your backend is SQL, you escape what's necessary to
avoid SQL injection (or use prepared statements) before sending commands
to the database. When you know your output is HTML, you escape HTML
special characters, so untrusted inputs can't inject HTML tags. That's
what input validation means.

How exactly do you verify and sanitise inputs to protect against an
unknown vulnerability with an unknown syntax in a logging library that is
supposed to handle all strings just fine? You don't; it doesn't work this
way, and that's not what input validation means.

Lukas
[PATCH] DOC: config: fix error-log-format example
In commit 6f7497616 ("MEDIUM: connection: rename fc_conn_err and
bc_conn_err to fc_err and bc_err"), fc_conn_err became fc_err, so update
this example.
---
Should be backported to 2.5.
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1e049012b..b8a40d574 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -21405,7 +21405,7 @@
   would have passed through a successful stream, hence will be available as
   regular traffic log (see option httplog or option httpslog).
 
     # detailed frontend connection error log
-    error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_conn_err]/\
+    error-log-format "%ci:%cp [%tr] %ft %ac/%fc %[fc_err]/\
         %[ssl_fc_err,hex]/%[ssl_c_err]/%[ssl_c_ca_err]/%[ssl_fc_is_resumed] \
         %[ssl_fc_sni]/%sslv/%sslc"
-- 
2.17.1
Re: [PATCH] DOC: config: retry-on list is space-delimited
Hello,

On Wed, 8 Dec 2021 at 17:50, Tim Düsterhus wrote:
>
> Lukas,
>
> On 12/8/21 11:33 AM, Lukas Tribus wrote:
> > We are using comma-delimited list for init-addr for example, let's
> > document that this is space-delimited to avoid the guessing game.
>
> Shouldn't this rather be fixed by unifying the delimiter instead of
> updating the docs? e.g. add support for the comma as the delimiter and
> then deprecate the use of spaces with a warning?

I agree, but I'm also not able to contribute more than a doc change here.

Also there is more than just those 2 uses of lists. Here are just a few,
certainly incomplete — some space-delimited, some comma-delimited, some
undocumented:

user/group
ssl-engine [algo]
wurfl-information-list []*
51degrees-property-name-list [ ...]

So first of all we would have to find all existing uses of lists in the
haproxy configuration and then make a determination about what to do (and
which parts to touch). That requires more work than a single-line doc
patch, which is all I can contribute at the moment.

Lukas
Re: [ANNOUNCE] haproxy-2.5.0
Hello Cyril,

On Tue, 23 Nov 2021 at 17:18, Willy Tarreau wrote:
>
> Hi,
>
> HAProxy 2.5.0 was released on 2021/11/23. It added 9 new commits after
> version 2.5-dev15, fixing minor last-minute details (bind warnings
> that turned to errors, and an incorrect free in the backend SSL cache).

Could you run haproxy-dconv for haproxy 2.5 again? The last update is
from May, and lots of doc updates (regarding new features) have been
submitted since then.

You could also add the new 2.6-dev branch at that point.

Thank you!
Lukas
[PATCH] DOC: config: retry-on list is space-delimited
We are using comma-delimited list for init-addr for example, let's
document that this is space-delimited to avoid the guessing game.
---
 doc/configuration.txt | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 1e049012b..c810fa918 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -10124,17 +10124,18 @@ retries
   See also : "option redispatch"
 
-retry-on [list of keywords]
+retry-on [space-delimited list of keywords]
   Specify when to attempt to automatically retry a failed request.
   This setting is only valid when "mode" is set to http and is silently
   ignored otherwise.
   May be used in sections:    defaults | frontend | listen | backend
                                  yes   |    no    |   yes  |   yes
   Arguments :
-       is a list of keywords or HTTP status codes, each representing a
-       type of failure event on which an attempt to retry the request
-       is desired. Please read the notes at the bottom before changing
-       this setting. The following keywords are supported :
+       is a space-delimited list of keywords or HTTP status codes, each
+       representing a type of failure event on which an attempt to
+       retry the request is desired. Please read the notes at the
+       bottom before changing this setting. The following keywords are
+       supported :
 
       none              never retry
 
@@ -10207,6 +10208,9 @@ retry-on [list of keywords]
 
   The default is "conn-failure".
 
+  Example:
+    retry-on 503 504
+
   See also: "retries", "option redispatch", "tune.bufsize"
 
 server [:[port]] [param*]
-- 
2.17.1
Re: How to compile with packaged openssl when custom openssl installed?
Use the instructions in INSTALL to build openssl statically. Building and
installing a custom shared build of openssl on an OS is something that
I'd suggest you avoid, because it will become complicated.

Lukas
Re: Haproxy + LDAPS+ SNI
Hello Ben,

On Wed, 3 Nov 2021 at 12:55, Ben Hart wrote:
>
> Thanks again Lukas!
> So the server directive's use of a cert or CA file is only to
> verify the identity of the server in question.

No, "crt" (a certificate including private key) and "ca-file" (the public
certificate of a CA) are two completely different things (see below).

> So the SSL crt specified in the frontend, does that secure only the
> connection to Haproxy

"Secure" is the wrong word. It authenticates the server to the far end
client connecting to port 443 on haproxy. It has nothing to do with
backend traffic.

> or is it passed-through to the server connection as well? I might be
> misunderstanding how this part of Haproxy works fundamentally...

No, there is nothing passed through to the backend/backend-servers.

CA (as in "ca-file") means Certificate Authority (the public key of the
CA certificate), and it is required to verify the certificate on the
other side:

- for the frontend, this is required when you are using client
  certificate authentication (you are not)
- for the backend, this means that this CA is used to verify the server's
  certificate

Certificate (as in "crt") is a certificate including a private key, and
is therefore the *local* certificate:

- for the frontend, this is the standard server certificate that haproxy
  responds with on port 443 (the classic SSL configuration)
- for the backend, this is for the case when you need haproxy to
  authenticate with a "client" certificate (in this case, haproxy is the
  client) against the backend server

In your case, only a certificate ("crt") on the frontend and a CA
("ca-file") on the backend are necessary, as you are not using mutual SSL
certificate authentication.

Lukas
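Put together, a minimal sketch of the resulting configuration — file
names and addresses here are hypothetical:

```
frontend ldaps_in
    # "crt": haproxy's own certificate + private key, presented to clients
    bind :636 ssl crt /etc/haproxy/frontend-cert.pem
    default_backend ldap_servers

backend ldap_servers
    # "ca-file": the CA's public certificate, used to verify the
    # certificate presented by the backend server
    server ldap1 192.0.2.10:636 ssl verify required ca-file /etc/haproxy/ldap-ca.pem
```

Note the asymmetry: "crt" appears only on the bind line, "ca-file" only
on the server line, matching the roles described above.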
Re: Haproxy + LDAPS+ SNI
Hello Ben,

On Wed, 3 Nov 2021 at 03:54, Ben Hart wrote:
>
> I wonder, can I ask if the server directives are correct insofar as
> making a secured connection to the backend server entries?
>
> I'm told that HAP might be connecting by IP in which case the
> SSL cert would be useless

The documentation of the verify keyword in the server section clarifies
this:

http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-verify

"The certificate provided by the server is verified using CAs from
'ca-file' and optional CRLs from 'crl-file' after having checked that the
names provided in the certificate's subject and subjectAlternateNames
attributes match either the name passed using the "sni" directive, or if
not provided, the static host name passed using the "verifyhost"
directive. When no name is found, the certificate's names are ignored.
For this reason, without SNI it's important to use "verifyhost"."

Lukas
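In practice that documentation translates to a server line along these
lines — server name, address, paths and hostname are hypothetical; "sni"
sets the name sent in the TLS handshake, and "verifyhost" pins the name
checked against the certificate when no SNI is sent:

```
backend ldap_servers
    # verify the server certificate and check its name explicitly,
    # instead of trusting whatever IP the connection happens to use
    server ldap1 192.0.2.10:636 ssl verify required ca-file /etc/haproxy/ldap-ca.pem sni str(ldap.example.com) verifyhost ldap.example.com
```

With neither "sni" nor "verifyhost" set, the certificate's names are
ignored, which defeats much of the point of "verify required".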
Re: Haproxy + LDAPS+ SNI
Hello,

On Tue, 2 Nov 2021 at 21:24, Ben Hart wrote:
>
> In the config (pasted here
> https://0bin.net/paste/1aOh1F4y#qStfT0m0mER3rhI3DonDbCsr0NRmVuH9XiwvagEkAiE)
> My questions surround the syntax of the config file..

Most likely those clients don't send SNI. Capture the SSL handshake and
verify to make sure.

Although you don't need the "tcp-request*" keywords (we are not
extracting SNI "manually" from a TCP connection buffer, but locally
deciphering it and accessing it through the OpenSSL API), I don't see any
obvious errors in your configuration.

Regards,
Lukas
Re: Does haproxy utlize openssl with AES-NI if present?
On Thu, 28 Oct 2021 at 21:20, Shawn Heisey wrote:
>
> On 10/28/21 10:02 AM, Lukas Tribus wrote:
> > You seem to be trying very hard to find a problem where there is none.
> >
> > Definitely do NOT overwrite CPU flags in production. This is to *test*
> > AES acceleration, I put the link to the blog post in there for
> > context, not because I think you need to force this on.
>
> I wouldn't call this production. It's the server in my basement. It
> runs most of my personal websites. I do my experimentation there. I'm
> OK with those experiments causing the occasional problem, because for
> the most part I know how to fix it if I make a mistake.

I get it. Despite this, I don't want you to make matters worse. Also,
other people may read this thread and try the same.

> I did just think of a way that I MIGHT be able to test. Time a simple
> shell script using wget to hit a tiny static web page using https 1
> times. For that test, the run with haproxy started normally actually
> took longer:

No, that's not the way to test AES-NI. AES-NI doesn't help TLS handshakes
at all. Testing many handshakes and downloads with small files is exactly
the case where AES-NI won't improve anything.

You would have to run a single request causing a large download, and run
haproxy through a CPU profiler, like perf, and compare outputs. Make sure
you run wget with -O /dev/null and run it on a different box, so it won't
steal CPU from haproxy. Also make sure an AES cipher is actually
negotiated.

Lukas
Re: Does haproxy utlize openssl with AES-NI if present?
On Thu, 28 Oct 2021 at 15:49, Shawn Heisey wrote:
>
> On 10/28/21 7:34 AM, Shawn Heisey wrote:
> > Does haproxy's use of openssl turn on the same option that the
> > commandline does with the -evp argument? If it does, then I think
> > everything is probably OK.
>
> Running "grep -r EVP ." in the haproxy source tree turns up a lot of
> hits in the TLS/SSL code. So I think that haproxy is most likely using
> EVP, and since I am running haproxy on bare metal and not in a VM (which
> might mask the aes CPU flag), it probably is using acceleration. Just
> in case, I did add the openssl bitmap environment variable (the one with
> + instead of ~) to my haproxy systemd unit.

You seem to be trying very hard to find a problem where there is none.

Definitely do NOT overwrite CPU flags in production. This is to *test*
AES acceleration; I put the link to the blog post in there for context,
not because I think you need to force this on.

You cannot compare command line arguments of an openssl tool with openssl
library API calls; those are two different things.

If this keeps you up at night, I'd suggest you ask on the openssl-users
mailing list for clarification, or set breakpoints in gdb and debug
openssl when running from haproxy, or find a platform where you have both
a CPU with and without AES-NI support, compile openssl and haproxy with
AES-NI and then move the executable over. It will be a waste of your time
though.

Lukas
Re: Does haproxy utlize openssl with AES-NI if present?
On Thu, 28 Oct 2021 at 08:31, Lukas Tribus wrote:
>
> Hi,
>
> On Thursday, 28 October 2021, Shawn Heisey wrote:
> >
> > On 10/27/2021 2:54 PM, Lukas Tribus wrote:
> > >
> > > I'd be surprised if the OpenSSL API calls we are using doesn't support
> > > AES-NI.
> >
> > Honestly that would surprise me too. But I have no idea how to find out
> > whether it's using the acceleration or not, and the limited (and possibly
> > incorrect) evidence I had suggested that maybe it was disabled by default,
> > so I wanted to ask the question. I have almost zero knowledge about
> > openssl API or code, so I can't discern the answer from haproxy code.
>
> You want evidence.
>
> Then get a raspberry pi, and run haproxy manually, fake the cpu
> flag aes-ni and it should crash when using aes acceleration,
> because the cpu doesn't support it.

For some reason, openssl itself doesn't crash on my raspberry pi:

OPENSSL_ia32cap="+0x202" openssl speed -elapsed -evp aes-128-gcm

Most likely openssl is compiled without aes-ni support here, so the test
doesn't work.

Lukas
Re: Does haproxy utlize openssl with AES-NI if present?
Hi,

On Thursday, 28 October 2021, Shawn Heisey wrote:
>
> On 10/27/2021 2:54 PM, Lukas Tribus wrote:
> >
> > I'd be surprised if the OpenSSL API calls we are using doesn't support
> > AES-NI.
>
> Honestly that would surprise me too. But I have no idea how to find out
> whether it's using the acceleration or not, and the limited (and possibly
> incorrect) evidence I had suggested that maybe it was disabled by default,
> so I wanted to ask the question. I have almost zero knowledge about
> openssl API or code, so I can't discern the answer from haproxy code.

You want evidence.

Then get a raspberry pi, run haproxy manually, fake the cpu flag aes-ni,
and it should crash when using aes acceleration, because the cpu doesn't
support it.

https://romanrm.net/force-enable-openssl-aes-ni-usage

Lukas
Re: Does haproxy utlize openssl with AES-NI if present?
Hello,

On Wed, 27 Oct 2021 at 22:17, Shawn Heisey wrote:
>
> I am building haproxy from source.
>
> For some load balancers that I used to manage, I also built openssl from
> source, statically linked, and compiled haproxy against that, because
> the openssl included with the OS (CentOS 6 if I recall correctly) was
> ANCIENT. I don't know how to get haproxy to use the alternate openssl
> at runtime, which is why I compiled openssl to be linked statically.
>
> For my own servers, I am running Ubuntu 20, which has a reasonably
> current openssl version already included.
>
> I know that openssl on Ubuntu is compiled with support for Intel's
> AES-NI instructions for accelerating crypto. That can be seen by
> running these two commands on a system with a proper CPU and comparing
> the reported speeds:
>
> openssl speed -elapsed aes-128-cbc
> openssl speed -elapsed -evp aes-128-cbc
>
> But the fact that it requires the -evp arg on the commandline to get the
> acceleration makes me wonder if maybe openssl 1.1.1 has CPU acceleration
> turned off by default, requiring an explicit enable to use it.

You are not comparing aes-ni on vs aes-ni off, you are comparing two very
different crypto interfaces: one supports aes-ni, the other doesn't, but
there is also a big difference in the interface speed itself.

The proper comparison is:

# aes-ni as per cpu flags:
openssl speed -elapsed -evp aes-128-cbc

# aes-ni cpu flag hidden:
OPENSSL_ia32cap="~0x202" openssl speed -elapsed -evp aes-128-cbc

I'd be surprised if the OpenSSL API calls we are using didn't support
AES-NI.

Lukas
PCRE (1) end of life and unmaintained
Hello,

PCRE (1) is end of life and unmaintained now (see below). Not a huge
problem, because PCRE2 has been supported since haproxy 1.8.

However, going forward (haproxy 2.5+), should we:

- warn when compiling with PCRE?
- remove PCRE support?
- both, but start with a warning in 2.5?
- maintain PCRE support as is?

PCRE is end of life and unmaintained:

from http://pcre.org/
> The older, but still widely deployed PCRE library, originally released
> in 1997, is at version 8.45. This version of PCRE is now at end of
> life, and is no longer being actively maintained. Version 8.45 is
> expected to be the final release of the older PCRE library
> the older, unmaintained PCRE library from an unofficial mirror at
> SourceForge:
> bugs in the legacy PCRE release are unlikely to be looked at or fixed;
> and please don't use the SourceForge bug tracking system, as it is not
> normally monitored.

from the main PCRE author at:
https://github.com/PhilipHazel/pcre2/issues/26#issuecomment-944916343
> For 6 years after the release of PCRE2 we continued to maintain PCRE1
> before declaring End-of-Life. There will never be another release. If
> people want to continue to use the legacy version, that's fine, of
> course, but they must arrange their own maintenance. I would venture to
> suggest that the amount of effort needed to do any maintenance is
> probably less than what is needed to convert to PCRE2. A number of
> issues that caused problems in PCRE1 have already been addressed in
> PCRE2.
>
> I am sorry to have to sound harsh here, but the maintainers are just
> volunteers with only so much time for this work, and furthermore, after
> nearly 7 years I for one have forgotten how the PCRE1 code works. I am
> therefore going to close this issue "invalid" and "wontfix".

Lukas
Re: CVE-2021-40346, the Integer Overflow vulnerability
Hello Jonathan,

On Wed, 8 Sept 2021 at 21:28, Jonathan Greig wrote:
>
> Hello! My name is Jonathan Greig and I'm a reporter for ZDNet. I'm
> writing a story about CVE-2021-40346 and I was wondering if
> HAProxy had any comment about the vulnerability.

Just making sure you are aware that this is a public mailing list:
https://www.mail-archive.com/haproxy@formilux.org/msg41140.html

You can find the CVE-2021-40346 announcement with comments here on this
mailing list:
https://www.mail-archive.com/haproxy@formilux.org/msg41114.html

Short blog article on haproxy.com:
https://www.haproxy.com/blog/september-2021-duplicate-content-length-header-fixed/

Long JFrog article with lots of technical details:
https://jfrog.com/blog/critical-vulnerability-in-haproxy-cve-2021-40346-integer-overflow-enables-http-smuggling/

Hope this helps,
Lukas
Re: double // after domain causes ERR_HTTP2_PROTOCOL_ERROR after upgrade to 2.4.3
On Fri, 20 Aug 2021 at 13:08, Илья Шипицин wrote:
>
> double slashes behaviour is changed in BUG/MEDIUM:
> h2: match absolute-path not path-absolute for :path · haproxy/haproxy@46b7dff
> (github.com)

Actually, I think the patch you are referring to would *fix* this
particular issue, as it was committed AFTER the last releases:

https://github.com/haproxy/haproxy/commit/46b7dff8f08cb6c5c3004d8874d6c5bc689a4c51

It was this fix that probably caused the issue:

https://github.com/haproxy/haproxy/commit/4b8852c70d8c4b7e225e24eb58258a15eb54c26e

Using the latest git, applying the patch manually or running a 20210820
snapshot would fix this.

Lukas
Re: [ANNOUNCE] HTTP/2 vulnerabilities from 2.0 to 2.5-dev
On Thursday, 19 August 2021, James Brown wrote:
> Are there CVE numbers coming for these vulnerabilities?

CVE-2021-39240 -> 2) Domain parts in ":scheme" and ":path"
CVE-2021-39241 -> 1) Spaces in the ":method" field
CVE-2021-39242 -> 3) Mismatch between ":authority" and "Host"

Lukas
Re: HAProxy Network Namespace Support issues, and I also found a security flaw.
Hello,

On Tue, 20 Jul 2021 at 08:13, Peter Jin wrote:
> 2. There is a stack buffer overflow found in one of the files. Not
> disclosing it here because this email will end up on the public mailing
> list. If there is a "security" email address I could disclose it to,
> what is it?

It's secur...@haproxy.org; it's somewhat well hidden in doc/intro.txt
(that is, the *starter* guide).

I would definitely suggest putting it on the website haproxy.org, and in
the repository moving it to a different file, like MAINTAINERS.

Lukas
Re: Replying to spam [was: Some Spam Mail]
On Thu, 15 Jul 2021 at 11:27, Илья Шипицин wrote:
>
> I really wonder what they will suggest.
>
> I'm not a spam source, since we do not have "opt in" policy, anybody can
> send mail. so they do.
> please address the issue properly, either change list policy or be calm
> with my experiments.

It's about common sense, not list policy. Please do your SPAM responding
experiments without the list in CC.

Thank you,
Lukas
Re: set mss on backend site on version 1.7.9
Hello Stefan,

On Tue, 13 Jul 2021 at 14:10, Stefan Fuhrmann wrote:
>
> Hello all,
>
> First, we can not change to a newer version so fast within the project.
>
> We are having an old installation of haproxy (1.7.9) and we have the
> need to configure the tcp-mss value on the backend side.
>
> Is it possible to change the mss value on the backend side? How?

No. You can set the MSS on the frontend socket, but not on the backend
socket. You need to work with your OS/kernel configuration.

Lukas
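For reference, the frontend-side MSS is set directly on the bind line;
the backend side has no equivalent keyword. A minimal sketch — the
frontend name, port and MSS value here are hypothetical:

```
frontend www
    # clamp the MSS advertised to clients on frontend connections only
    bind :80 mss 1400
    default_backend app
```

For the server-facing side, the MSS has to be influenced at the OS level
instead (routing/interface settings), not in haproxy.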
Re: [PATCH 0/1] Replace issue templates by issue forms
Hello, On Wed, 23 Jun 2021 at 22:25, Willy Tarreau wrote: > > Hi Tim, Max, > > On Wed, Jun 23, 2021 at 09:38:12PM +0200, Tim Duesterhus wrote: > > Hi Willy, Lukas, List! > > > > GitHub finally launched their next evolution of issue templates, called > > issue > > forms, as a public beta: > > https://github.blog/changelog/2021-06-23-issues-forms-beta-for-public-repositories/ > > > > Instead of prefilling the issue creation form with some markdown that can be > > ignored or accidentally deleted issue forms will create different textareas, > > automatically formatting the issue correctly. The end result will be regular > > markdown that can be edited as usual. > > > > Beta implies that they might still slightly change in the future, possibly > > requiring some further adjustments. However as the final result no longer > > depends on the form itself we are not locking ourselves into some feature > > for eternity. > > > > Max and I worked together to migrate the existing templates to issue forms, > > cleaning up the old stuff that is no longer required. > > > > You can find an example bug report here: > > > > https://github.com/TimWolla/haproxy/issues/7 > > > > It looks just like before! > > Indeed, and I like the issue description and the proposed fix :-) > > > The new forms can be tested here: > > https://github.com/TimWolla/haproxy/issues/new/choose. > > > > I have deleted the old 'Question' template, because it no longer is > > required, > > as the user can't simply delete the template from the field when there's > > separate fields :-) > > At first glance it indeed looks better than before. I'm personally fine > with the change. I'll wait for Lukas' Ack (or comments) before merging > though. Thanks for this, I like it! What I'm missing in the new UI is the possibility for the user to preview the *entire* post, I'm always extensively using preview features everywhere. So this feels like a loss, although the user can preview the content of the individual input box. 
But that's not reason enough to hold up this change, I just wish the "send us your feedback" button [1] would actually work. Full Ack from me for this change, this will be very beneficial as we get higher quality reports and people won't be required to navigate through raw markdown, which is not user-friendly at all. cheers, lukas [1] https://github.blog/changelog/2021-06-23-issues-forms-beta-for-public-repositories/
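For readers unfamiliar with the feature: an issue form is a YAML file under .github/ISSUE_TEMPLATE/ in the repository. A minimal sketch of the syntax (field names and labels are illustrative, this is not HAProxy's actual template):

```yaml
# .github/ISSUE_TEMPLATE/bug.yml -- illustrative sketch, not the real template
name: Bug Report
description: Report a problem in haproxy
body:
  - type: textarea
    id: description
    attributes:
      label: Detailed description of the problem
    validations:
      required: true
  - type: textarea
    id: haproxy-vv
    attributes:
      label: Output of haproxy -vv
      render: plain      # rendered as a code block in the resulting issue
    validations:
      required: true
```

Each textarea becomes its own input box in the UI, and the submitted issue ends up as regular markdown, as described above.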
Re: SSL Labs says my server isn't doing ssl session resumption
Hello Shawn, On Sun, 20 Jun 2021 at 14:03, Shawn Heisey wrote: > > On 6/20/2021 1:52 AM, Lukas Tribus wrote: > > Can you try disabling threading, by putting nbthread 1 in your config? > > That didn't help. From testssl.sh: > > SSL Session ID support yes > Session Resumption Tickets: yes, ID: no It's a haproxy bug, affecting 2.4 releases, I've filed an issue in our tracker: https://github.com/haproxy/haproxy/issues/1297 Willy wrote: > I don't know if the config is responsible for this but I've just tested > on haproxy.org and it does work there: > > Session resumption (caching) Yes > Session resumption (tickets) Yes demo.haproxy.org suggests the code running there is quite old though: # curl -s http://demo.haproxy.org/ | grep released <a href="http://www.haproxy.org/" style="text-decoration: none;">HAProxy version 1.7.12-84aad5b, released 2019/10/25</a> # Regards, Lukas
Re: SSL Labs says my server isn't doing ssl session resumption
Hello Shawn, On Sun, 20 Jun 2021 at 08:39, Shawn Heisey wrote: > This is what SSL Labs now says for the thing that started this thread: > > Session resumption (caching)No (IDs assigned but not accepted) > Session resumption (tickets)Yes > > I'd like to get the caching item fixed, but I haven't figured that out > yet. Can you try disabling threading, by putting nbthread 1 in your config? An upgrade to 2.4.1 would also be advisable, it actually fixes a locking issue with SSL session cache (not sure whether that could really be the root cause though). Lukas
Re: SSL Labs says my server isn't doing ssl session resumption
On Wed, 16 Jun 2021 at 17:03, Илья Шипицин wrote: > > ssl sessions are for tls1.0 (disabled in your config) > tls1.2 uses tls tickets for resumption That is not true, you can disable TLS tickets and still get resumption on TLSv1.2. Disabling TLSv1.0 does not mean disabling Session ID caching. What do you see with testssl.sh ? Lukas
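An easy way to verify this: disable TLS tickets entirely and re-run testssl.sh, which should then still report ID-based resumption on TLSv1.2. A minimal haproxy sketch:

```
global
    # disable RFC 5077 ticket-based resumption; Session ID caching still works
    ssl-default-bind-options no-tls-tickets
```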
Re: [EXTERNAL] Re: built in ACL, REQ_CONTENT
Hello, On Tue, 8 Jun 2021 at 17:36, Godfrin, Philippe E wrote: > > Certainly, > > Postgres sends this message across the wire: > > Jun 2 21:14:40 ip-172-31-77-193 haproxy[9031]: #0110x00: 00 00 00 4c 00 > 03 00 00 75 73 65 72 00 74 73 64 |...Luser.tsd| > Jun 2 21:14:40 ip-172-31-77-193 haproxy[9031]: #0110x10: 62 00 64 61 74 > 61 62 61 73 65 00 74 73 64 62 00 |b.database.tsdb.| > Jun 2 21:14:40 ip-172-31-77-193 haproxy[9031]: #0110x20: 61 70 70 6c 69 > 63 61 74 69 6f 6e 5f 6e 61 6d 65 |application_name| > Jun 2 21:14:40 ip-172-31-77-193 haproxy[9031]: #0110x30: 00 70 73 71 6c > 00 63 6c 69 65 6e 74 5f 65 6e 63 |.psql.client_enc| > Jun 2 21:14:40 ip-172-31-77-193 haproxy[9031]: #0110x40: 6f 64 69 6e 67 > 00 55 54 46 38 00 00 |oding.UTF8..| > > > > Bytes, 8 – are user\0 Byte 13 starts the userid. I would like to be able to > test that userid and make a routing decision on that. This is what the > HAProxy docs suggest: > > > > acl check-rw req.payload(8,32),hex -m sub 757365720074736462727700 I don't see how this is supposed to match: 62727700 is not what's in your trace. Is the username tsdb, like in your trace, or is it tsdbrw, like in your ACL? Also, put a "tcp-request inspect-delay 5s" in front of the ACL (you can optimize performance later) and share the entire configuration. Please try to ask the actual question directly next time, so we can help you right away (https://xyproblem.info/). Thanks, Lukas
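For illustration, a sketch of what a working setup could look like, assuming the username on the wire really is tsdb as in the trace ("user\0tsdb\0" = 75 73 65 72 00 74 73 64 62 00); frontend and backend names are hypothetical:

```
frontend pg_in
    mode tcp
    # give haproxy up to 5s to see the startup packet before evaluating payload ACLs
    tcp-request inspect-delay 5s
    # match "user<NUL>tsdb<NUL>" within 32 bytes starting at offset 8
    acl check-rw req.payload(8,32),hex -m sub 75736572007473646200
    use_backend pg_rw if check-rw
    default_backend pg_ro
```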
Re: built in ACL, REQ_CONTENT
Hello, On Mon, 7 Jun 2021 at 14:51, Godfrin, Philippe E wrote: > > Greetings! > > I can’t seem to find instructions on how to use this builtin ACL. Can someone > point me in the right direction, please? There is nothing specific about it, you use it just like every other ACL.

http-request deny if REQ_CONTENT
http-request deny unless REQ_CONTENT

Lukas
Re: how to write to a file safely in haproxy
Hello, On Wed, 26 May 2021 at 13:29, reshma r wrote: > > Hello all, > Periodically I need to write some configuration data to a file. > However I came across documentation that warned against writing to a file at > runtime. > Can someone give me advice on how I can achieve this safely? You'll have to elaborate on what you are talking about. Are you talking about writing to the filesystem from a Lua script or other runtime code within haproxy? Then yes, don't do it, it will block the event loop and you will be in a world of hurt. Are you talking about writing and changing the configuration file prior to a reload, manually or from an external process? That's not a problem at all. The issue is blocking filesystem access in the event-loop of the haproxy process itself. Explaining what the problem is you are trying to solve can get you more accurate proposals and solutions faster than asking about a particular solution (XY problem). Lukas
Re: haproxy hung with CPU usage at 100% Heeeelp, please!!!
The first thing I'd try is to disable multithreading (by putting nbthread 1 in the global section of the configuration), to see if that helps. Lukas
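A minimal sketch of that change:

```
global
    # temporarily disable multithreading to see whether the 100% CPU hang is thread-related
    nbthread 1
```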
Re: Table sticky counters decrementation problem
Hi Willy, On Tue, 30 Mar 2021 at 17:56, Willy Tarreau wrote: > > Guys, > > out of curiosity I wanted to check when the overflow happened: > > $ date --date=@$(((($(date +%s) * 1000) & -0x800) / 1000)) > Mon Mar 29 23:59:46 CEST 2021 > > So it only affects processes started since today. I'm quite tempted not > to wait further and to emit 2.3.9 urgently to fix this before other > people get trapped after reloading their process. Any objection ? No objection, but also: what a coincidence. I suggest you get a lottery ticket today. cheers, lukas
Re: Stick table counter not working after upgrade to 2.2.11
Hi Willy, On Tue, 23 Mar 2021 at 09:32, Willy Tarreau wrote: > > Guys, > > These two patches address it for me, and I could verify that they apply > on top of 2.2.11 and work there as well. This time I tested with two > counters at different periods 500 and 2000ms. Both Sander and Thomas now report that the issue is at least partially still there in 2.3.8 (which has all fixes, or 2.2.11 with patches), and that downgrading to 2.3.7 (which has none of the commits) works around the issue: https://www.mail-archive.com/haproxy@formilux.org/msg40093.html https://www.mail-archive.com/haproxy@formilux.org/msg40092.html Let's not yet publish stable bugfix releases, until this is properly diagnosed. Lukas
Re: Table sticky counters decrementation problem
Hello Thomas, this is a known issue in any release train other than 2.3 ... https://github.com/haproxy/haproxy/issues/1196 However neither 2.3.7 (does not contain the offending commits), nor 2.3.8 (contains all the fixes) should be affected by this. Are you absolutely positive that you are running 2.3.8 and not something like 2.2 or 2.0 ? Can you provide the full output of haproxy -vv? Thanks, Lukas
Re: zlib vs slz (performance)
Hello, On Mon, 29 Mar 2021 at 20:54, Илья Шипицин wrote: >> > Dear list, >> > >> > on browser load (html + js + css) I observe 80% of cpu spent on gzip. >> > also, I observe that zlib is probably one of the slowest implementation >> > my personal benchmark correlate with https://github.com/inikep/lzbench >> > >> > if so, should'n we switch to slz by default ? or am I missing something ? >> >> There is no default. >> >> zlib is optional. >> slz is optional. >> >> Haproxy is compiled with either zlib, slz or no compression library at all. >> >> >> What specifically are you suggesting to change in the haproxy source tree? > > > for example, docker image > https://hub.docker.com/_/haproxy So nothing we control directly. Docker images, package repositories, etc. This means filing requests at those places, convincing other people to switch from a well known library to an unknown library that isn't even packaged yet in most places, that those maintainers then have to support. I'm not sure how realistic that is. Like I said last year, this needs a marketing campaign: https://www.mail-archive.com/haproxy@formilux.org/msg38044.html What about the docker images from haproxytech? Are those zlib or slz based? Perhaps that would be a better starting point? https://hub.docker.com/r/haproxytech/haproxy-alpine Lukas
Re: Is there a way to deactivate this "message repeated x times"
Hello, On Mon, 29 Mar 2021 at 15:25, Aleksandar Lazic wrote: > > Hi. > > I need to create some log statistics with awffull stats and I assume this > messages > means that only one line is written for 3 requests, is this assumption right? > > Mar 28 14:04:07 lb1 haproxy[11296]: message repeated 3 times: [ > ::::49445 [28/Mar/2021:14:04:07.234] https-in~ be_api/api_prim > 0/0/0/13/13 200 2928 - - 930/900/8/554/0 0/0 {|Mozilla/5.0 > (Macintosh; Intel Mac OS X 10.13; rv:86.0) Gecko/20100101 > Firefox/86.0||128|TLS_AES_128_GCM_SHA256|TLSv1.3|} "GET > https:/// HTTP/2.0"] > > Can this behavior be disabled? This is not haproxy, this is your syslog server. Refer to the documentation of the syslog server. Lukas
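If the syslog daemon is rsyslog, the coalescing is its repeated-message reduction feature and can be turned off; a hedged sketch (file location and syntax variant depend on the distribution and rsyslog version):

```
# /etc/rsyslog.conf -- emit every line instead of "message repeated x times"
$RepeatedMsgReduction off
```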
Re: zlib vs slz (performance)
Hi Ilya, On Mon, 29 Mar 2021 at 15:34, Илья Шипицин wrote: > > Dear list, > > on browser load (html + js + css) I observe 80% of cpu spent on gzip. > also, I observe that zlib is probably one of the slowest implementation > my personal benchmark correlate with https://github.com/inikep/lzbench > > if so, should'n we switch to slz by default ? or am I missing something ? There is no default. zlib is optional. slz is optional. Haproxy is compiled with either zlib, slz or no compression library at all. What specifically are you suggesting to change in the haproxy source tree? Regards, Lukas
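For context, the compression library is selected when compiling haproxy; a sketch (the TARGET value is an assumption about the build platform):

```
# build against libslz (use either USE_SLZ or USE_ZLIB, not both)
make TARGET=linux-glibc USE_SLZ=1

# or build against zlib
make TARGET=linux-glibc USE_ZLIB=1
```

haproxy -vv then shows which library the binary was built with.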
Re: HAProxy proxy protocol
Double post on discourse, please refrain from this practice in the future! https://discourse.haproxy.org/t/haproxy-proxy-protocol/6413/2 Thanks, Lukas
Re: [HAP 2.3.8] Is there a way to see why "" and "SSL handshake failure" happens
Hello, On Sat, 27 Mar 2021 at 11:52, Aleksandar Lazic wrote: > > Hi. > > I have a lot of such entries in my logs. > > ``` > Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 > [27/Mar/2021:11:48:20.523] https-in~ https-in/ -1/-1/-1/-1/0 0 0 - - > PR-- 1041/1011/0/0/0 0/0 "" > Mar 27 11:48:20 lb1 haproxy[14556]: ::::23167 > [27/Mar/2021:11:48:20.523] https-in~ https-in/ -1/-1/-1/-1/0 0 0 - - > PR-- 1041/1011/0/0/0 0/0 "" Use show errors on the admin socket: https://cbonte.github.io/haproxy-dconv/2.0/management.html#9.3-show%20errors > Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 > [27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure > Mar 27 11:48:20 lb1 haproxy[14556]: ::::23166 > [27/Mar/2021:11:48:20.440] https-in/sock-1: SSL handshake failure That's currently a pain point: https://github.com/haproxy/haproxy/issues/693 Lukas
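The "show errors" dump can be fetched from the stats socket like this (the socket path is whatever your config's stats socket line defines; socat is one common client):

```
echo "show errors" | socat stdio /var/run/haproxy.sock
```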
Fwd: OpenSSL Security Advisory
FYI -- Forwarded message - From: OpenSSL Date: Thu, 25 Mar 2021 at 15:03 Subject: OpenSSL Security Advisory To: , OpenSSL User Support ML , OpenSSL Announce ML -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 OpenSSL Security Advisory [25 March 2021] = CA certificate check bypass with X509_V_FLAG_X509_STRICT (CVE-2021-3450) Severity: High The X509_V_FLAG_X509_STRICT flag enables additional security checks of the certificates present in a certificate chain. It is not set by default. Starting from OpenSSL version 1.1.1h a check to disallow certificates in the chain that have explicitly encoded elliptic curve parameters was added as an additional strict check. An error in the implementation of this check meant that the result of a previous check to confirm that certificates in the chain are valid CA certificates was overwritten. This effectively bypasses the check that non-CA certificates must not be able to issue other certificates. If a "purpose" has been configured then there is a subsequent opportunity for checks that the certificate is a valid CA. All of the named "purpose" values implemented in libcrypto perform this check. Therefore, where a purpose is set the certificate chain will still be rejected even when the strict flag has been used. A purpose is set by default in libssl client and server certificate verification routines, but it can be overridden or removed by an application. In order to be affected, an application must explicitly set the X509_V_FLAG_X509_STRICT verification flag and either not set a purpose for the certificate verification or, in the case of TLS client or server applications, override the default purpose. OpenSSL versions 1.1.1h and newer are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. This issue was reported to OpenSSL on 18th March 2021 by Benjamin Kaduk from Akamai and was discovered by Xiang Ding and others at Akamai. The fix was developed by Tomáš Mráz. 
NULL pointer deref in signature_algorithms processing (CVE-2021-3449) = Severity: High An OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. If a TLSv1.2 renegotiation ClientHello omits the signature_algorithms extension (where it was present in the initial ClientHello), but includes a signature_algorithms_cert extension then a NULL pointer dereference will result, leading to a crash and a denial of service attack. A server is only vulnerable if it has TLSv1.2 and renegotiation enabled (which is the default configuration). OpenSSL TLS clients are not impacted by this issue. All OpenSSL 1.1.1 versions are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1k. OpenSSL 1.0.2 is not impacted by this issue. This issue was reported to OpenSSL on 17th March 2021 by Nokia. The fix was developed by Peter Kästle and Samuel Sapalski from Nokia. Note OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. The impact of these issues on OpenSSL 1.1.0 has not been analysed. Users of these versions should upgrade to OpenSSL 1.1.1. References == URL for this Security Advisory: https://www.openssl.org/news/secadv/20210325.txt Note: the online version of the advisory may be updated with additional details over time. 
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html -BEGIN PGP SIGNATURE- iQEzBAEBCAAdFiEEhlersmDwVrHlGQg52cTSbQ5gRJEFAmBcl6sACgkQ2cTSbQ5g RJGvnAgAtG6I7rfokDC9E5yB26KC3k0Vasfq5iH/aZz0CNRyOokWJBUyyNIVjqr0 2eZP7VsQT7zRM+tgh9c8MwH3FIghtpwJRJls4qZDHKoXts7JH4Ul4NLPd546x7xA GcKNwTD4NkZbTqtZ72NTgliInzrj0MCC8jqQrIIkcAIleGNzvZ0f64jdE+vBXoqX M2FOhWiA/JkAKtB3W7pthIt25qkOwHbrpTy+UUp/S5QD779NJ/EOYcsOFBRfLZiP gA6QILuW2L55lhG6Y2u+nVE3UI2hqd2hGgSAvDIPr2lVJxq0LQpgHca7Gj5bfIRo GLDz7n0FhN6n7NBqetP+nlHmYivcSg== =XIXK -END PGP SIGNATURE-
Re: Stick table counter not working after upgrade to 2.2.11
Hello, just a heads-up, this was also reported for 1.8: https://discourse.haproxy.org/t/counter-issues-on-1-8-29/6381/ Lukas On Tue, 23 Mar 2021 at 09:32, Willy Tarreau wrote: > > Guys, > > These two patches address it for me, and I could verify that they apply > on top of 2.2.11 and work there as well. This time I tested with two > counters at different periods 500 and 2000ms. > > Thanks, > Willy
Re: [ANNOUNCE] haproxy-1.6.16
Hello Willy, On Sat, 20 Mar 2021 at 10:09, Willy Tarreau wrote: > > 1.6 was EOL last year, I don't understand why there is a last release. > > There were some demands late last year and early this year to issue a > last one with pending fixes to "flush the pipe" but it was terribly > difficult to find enough time to go through the whole chain with the > other nasty bugs that kept us busy. > > > Both 1.6 and 1.7 are marked for critical fixes but many fixes are pushed > > in it. The risk is to introduce a late regression in this last version. > > There's always such a risk when doing backports unfortunately and it's > always difficult to set the boundary between what is needed and what > not. A lot of the issues I'm seeing there are crash-related, and > others address not-so-funny recent changes in compilers behaviors > leading to bad code generation. There are also some that were possibly > not strictly necessary, but then they're obvious fixes (like the one > on the timer that's incorrectly compared), and whose possible > consequences are not always trivial to imagine (such as checks looping > at 100% CPU every 24 days maybe depending on the tick's sign). I agree that finding the sweet spot can be difficult, but I have to say I share Vincent's concerns. I do feel like currently we backport *a lot*, especially on those near-EOLed trees. When looking at the list of backported patches, I don't feel like the age and remaining lifetime is taken into consideration. I don't want to be the monday morning quarterback, but in 1.7 we have 853926a9ac ("BUG/MEDIUM: ebtree: use a byte-per-byte memcmp() to compare memory blocks") and I quote from the commit message: > This is not correct and definitely confuses address sanitizers. In real > life the problem doesn't have visible consequences. > [...] > there is no way such a multi-byte access will cross a page boundary > and end up reading from an unallocated zone. This is why it was > never noticed before. 
This sounds like a backport candidate for "warm" stable branches (maybe), but 1.7 and 1.8 feel too "cold" for this, even 8 - 9 months ago. This backport specifically causes a build failure of 1.7.13 on musl (#760) - because of a missing backport, but that's just an example. 39 "MINOR" patches made it into 1.6.16, 62 patches in 1.7.13. While it is true that a lot of "MINOR" tagged patches are actually important, this is still a large number for a tree that is supposed to die so soon. Very rarely do users build from source from such old trees anyway (and those users would be especially conservative, definitely not interested in generic, non-important improvements). > But with this in mind, there were two options: > - not releasing the latest fixes You are talking about whether to publish a release or not for tree X.Y, with the backports that are already in the tree. I don't think that's the issue. I think the discussion should be about what commits land in those old trees in the first place. And I don't believe it is scalable to make those decisions during your backporting sessions. Instead I believe we should be more conservative when suggesting backports in the commit message. Currently, we say "should/can be backported to X.Y" based on whether it is *technically* possible to do so for supported trees, not if it makes sense considering the age and lifetime of the suggested tree. This is why I'm proposing a commit author should make such considerations when suggesting backports. Avoiding backports to cold trees of no-impact improvements and minor fixes for rare corner cases should be a goal. Unless we insist every single bug needs to be fixed on every single supported release branch. lukas
Re: [PATCH 1/1] MINOR: build: force CC to set a return code when probing options
Hello Bertrand, On Sun, 7 Mar 2021 at 00:53, Bertrand Jacquin wrote: > I am not proposing the haproxy build system use -Werror here, I'm only > proposing to use -Werror when probing for options supported by the > compiler, as effectively clang returns a code of 0 even if an option is > not supported. The fact haproxy does not use -Wno-clobbered today with > clang builds is because of an internal bug in clang with how haproxy is > using stdin/stderr indirection. > > With the proposal above, Werror is only used to probe options, it is not > reflected in SPEC_CFLAGS. Got it, thanks for clarifying. Lukas
Re: [PATCH 1/1] MINOR: build: force CC to set a return code when probing options
Hello, On Sat, 6 Mar 2021 at 21:25, Bertrand Jacquin wrote: > > gcc returns a non-zero code if an option is not supported (tested > from 6.5 to 10.2). > > $ gcc -Wfoobar -E -xc - -o /dev/null < /dev/null > /dev/null 2>&1 ; echo $? > 1 > > clang always returns 0 if an option is not recognized unless > -Werror is also passed, preventing a correct probing of options > supported by the compiler (tested with clang 6.0.1 to 11.1.0) -Werror is more than just "-Wunknown-warning-option" on clang. -Werror/ERR is specifically optional in the Makefile. If we want to change the haproxy build-system to do -Werror from now on we need to a) discuss it and b) fix it all up. We can't hardcode -Werror and at the same time claim that it's an optional parameter. Lukas
Re: minconn, maxconn and fullconn (again, sigh!)
On Thu, 11 Feb 2021 at 05:31, Victor Sudakov wrote: > > Lukas Tribus wrote: > > > > On Wed, 10 Feb 2021 at 16:55, Victor Sudakov wrote: > > > > > > I can even phrase my question in simpler terms. What happens if the sum > > > total of all servers' maxconns in a backend is less than the maxconn > > > value in the frontend pointing to the said backend? > > > > Queueing for "timeout queue" amount of time, and then return 503 error > > And what happens in TCP mode, after the "timeout queue" amount of time? > I suppose the TCP connection with the client is reset? Yes, that's the only choice. > > > > > See: > > > > timeout queue > > https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-timeout%20queue > > > > maxconn > > https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-maxconn > > > > > > I really suggest you ignore minconn and fullconn, and focus on maxconn > > instead. The latter is a must-have (and must-understand). Really > > maxconn (global, frontend and per server ) is the single most > > important performance knob in haproxy. > > Maxconn is rather clear, especially when one is sure about two things: > > 1. A server's maxconn value is always a hard limit (notwithstanding if > there is a minconn configured for the server). Yes. > 2. Connections outnumbering the sum total of a backend's servers > maxconns are queued for the "timeout queue" amount of time and then > dropped. If, for any reason, we can't use another server with free slots, yes. > It would be nice however to understand minconn/fullconn too. If a > backend has several servers with identical minconn, maxconn and weight, > what's the point of having minconn? The load will be always distributed > evenly between all the servers notwithstanding minconn/fullconn, > correct? If the load is REALLY the same, sure. 
That's just never the case in real life for a number of reasons:

- different load-balancing algorithms
- different client behavior
- session persistence
- long-running TCP connections (websocket, et al.)

But yes, like I already mentioned, minconn/fullconn is addressing a very specific requirement that I don't think comes up very often. Lukas
Re: minconn, maxconn and fullconn (again, sigh!)
Hello Victor, On Wed, 10 Feb 2021 at 16:55, Victor Sudakov wrote: > > I can even phrase my question in simpler terms. What happens if the sum > total of all servers' maxconns in a backend is less than the maxconn > value in the frontend pointing to the said backend? Queueing for the "timeout queue" amount of time, and then a 503 error is returned (and this really is desirable as opposed to hitting maxconn on a frontend or, even worse, global maxconn, because a few milliseconds of delay do not hurt and returning a proper HTTP error in distress is way better than some obscure connection timeout). See: timeout queue https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4.2-timeout%20queue maxconn https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#5.2-maxconn I really suggest you ignore minconn and fullconn, and focus on maxconn instead. The latter is a must-have (and must-understand). Really, maxconn (global, frontend and per server) is the single most important performance knob in haproxy. Lukas
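To illustrate how the limits relate (all numbers are placeholders): requests above a server's maxconn wait in the backend queue for up to "timeout queue" before a 503 is returned, while the frontend and global maxconn cap total concurrency:

```
global
    maxconn 20000        # hard limit for the whole process

defaults
    timeout queue 5s     # how long to queue before giving up with a 503

frontend fe_web
    maxconn 10000        # limit for this frontend
    default_backend be_web

backend be_web
    server s1 192.0.2.10:80 maxconn 500   # per-server concurrency limit
    server s2 192.0.2.11:80 maxconn 500
```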
Re: TCP mode and ultra short lived connection
Hello, On Mon, 8 Feb 2021 at 18:14, Максим Куприянов wrote: > > Hi! > > I faced a problem dealing with l4 (tcp mode) haproxy-based proxy over > Graphite's component receiving metrics from clients and clients who are > connecting just to send one or two Graphite-metrics and disconnecting right > after. > > It looks like this > 1. Client connects to haproxy (SYN/SYN-ACK/ACK) > 2. Client sends one line of metric > 3. Haproxy acknowledges receiving this line (ACK to client) > 4. Client disconnects (FIN, FIN-ACK) > 5. Haproxy writes 1/-1/0/0 CC-termination state to log without even trying to > connect to a backend and send client's data to it. > 6. Metric is lost :( > > If the client is slow enough between steps 1 and 2 or it sends a bunch of > metrics so haproxy has time to connect to a backend – everything works like a > charm. The issue though is the client disconnect. If we delay the client disconnect, it could work. Try playing with tc by delaying the incoming FIN packets for a few hundred milliseconds (make sure you only apply this to this particular traffic, for example this particular destination port). > Example. First column is a time delta in seconds between packets It would be useful to have both front and backend tcp connections in the same output (and absolute time stamps - delta from the first packet, not the previous). You may also want to accelerate the server connect with options like: option tcp-smart-connect https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#4-option%20tcp-smart-connect tfo (needs server support) https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#tfo%20%28Server%20and%20default-server%20options%29 > How can I deal with these send-and-forget clients? In TCP mode, we need to propagate the close from one side to the other, as we are not aware of the protocol. Not sure if it is possible (or a good idea) to keep sending buffer contents to the backend server when the client is already gone. 
"[no] option abortonclose" only affects HTTP, according to the docs. Maybe Willy can confirm/deny this. Lukas
Re: HAproxy soft reload timeout?
Hello Dominik, you are looking for hard-stop-after: http://cbonte.github.io/haproxy-dconv/2.2/configuration.html#hard-stop-after Regards, Lukas On Thu, 4 Feb 2021 at 11:40, Froehlich, Dominik wrote: > > Hi, > > > > I am currently experimenting with the HAproxy soft reload functionality (USR2 > to HAproxy master process). > > From what I understood, a new worker is started and takes over the listening > sockets while the established connections remain on the previous worker until > they are finished. > > The worker then terminates itself once all work is done. > > > > I’ve noticed there are some quite long-lived connections on my system (e.g. > websockets or tcp-keepalive connections from slow servers). So when I am > doing the reload, the previous instance > > of HAproxy basically lives as long as the last connection is still going. > Which potentially could go on forever. > > > > So when I keep reloading HAproxy because the config has changed, I could end > up with dozens, even hundreds of HAproxy instances running with old > connections. > > > > My question: Is there a way to tell the old haproxy instances how much time > they have to get done with their work and leave? > > I know it’s a tradeoff. I want my users to not be disrupted in their > connections but I also need to protect the ingress machines from overloading. > > > > Any best practices here? > > > > Cheers, > > D
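A minimal sketch (the 30-minute limit is an arbitrary example value):

```
global
    # an old worker is killed at most 30 minutes after the reload,
    # bounding how many stale processes can accumulate
    hard-stop-after 30m
```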
Re: (possibly off topic) how to handle Chrome on SSL mass hosting ?
On Wed, 3 Feb 2021 at 18:47, Илья Шипицин wrote: >> while I do not mind to have such optimization, but when 'a.example.com" >> responds with http2 GOAWAY, that affects also "b.example.com" and " >> c.example.com". Chrome is not clever enough to open new connections instead >> of abandoned one. > > above approach works for Chrome (and does not work for Safari) > unfortunately we found Safari is using connection reuse just like Chrome, and > Safari does not handle 421 properly In which exact case is GOAWAY sent to the browser and how does that impact the browser behavior exactly? You are probably doing routing based on the host header, not the SNI value, so you wouldn't have a routing problem. I'm unsure what the actual problem is that you are trying to solve. Lukas
Re: SSL session resumption
Hello, On Wed, 3 Feb 2021 at 17:44, Илья Шипицин wrote: > > TLS1.2 uses tls tickets, when TLS1.0 uses ssl sessions. I believe this is incorrect, TLSv1.2 works just fine with Session ID's (RFC5246) and TLS 1.0 works fine with TLS tickets (RFC5077). I'm not aware of any restrictions between TLSv1.0/1.2 and session ID's vs TLS tickets. Lukas
Re: SSL session resumption
Hello Johan, we are gonna need the outputs of "haproxy -vv" from both situations, as well as at the very least *all* the ssl configuration parameters in haproxy that you are using. However, I do not believe it is likely that we can find the root cause, without access to those handshakes, since it cannot be reproduced by openssl s_client. What definitely changed in haproxy 2.2 is that the default minimum TLS version is now 1.2. To rollback to TLS 1.0 you can configure: global ssl-default-bind-options ssl-min-ver TLSv1.0 Regards, Lukas On Wed, 3 Feb 2021 at 13:36, Johan Andersson wrote: > > To whom it may concern > > > > We have recently upgraded out HAProxy version from 2.1.3 to 2.2.4. > > After the upgrade we got customer complaints that the data usage of their > devices had gone up. Our company sells proprietary hardware that logs data > and sends that to a web service which we host. These devices are often > deployed remotely and connected via shaky 3G connections with data-capped SIM > cards, so low data usage is very important. > > After some digging with Wireshark, we found that the SSL sessions are not > resumed. Instead a new handshake is initiated every time the device sends > data. Which is typically once an hour. > > We have set the global tune.ssl.lifetime parameter to 24h and the > tune.ssl.cachesize to 10 and this has worked since HAProxy version 1.6.9 > when we first introduced it. > > We have also tested with the latest 2.1.11 release of HAProxy and it behaves > the same way as the 2.1.3 version. We have also tested with 2.2.0 and 2.2.8 > and they behave the same as 2.2.4. > > > > We have tried reproducing this with openssl s_client, saving the session id > between requests but can’t reproduce it that way. > > We have also pored over the change logs between versions to see if there is > some change that could make HAProxy behave this way. > > > > We’re at a loss here, what could cause this behavior, and how can we fix it? 
> > Best regards > > Johan Andersson > > Development Engineer > > Global Platforms Cloud Team > > HMS Industrial Networks AB > > Stationsgatan 37, Box 4126 > > 300 04 Halmstad, Sweden > > Email: j...@hms-networks.com
Re: How can I enable the HTTP/3 (QUIC) in HAProxy?
Jimmy, On Thu, 21 Jan 2021 at 09:45, Tim Düsterhus wrote: > > Hi List, > > Am 21.01.21 um 08:59 schrieb jimmy: > > I found the fact that HAProxy 2.3 higher supports HTTP/3 (QUIC) through > > [this > > link](https://www.haproxy.com/blog/announcing-haproxy-2-3/#connection-improvements). > This is a duplicate of this comment on the issue tracker: > > https://github.com/haproxy/haproxy/issues/680#issuecomment-764475902 You also cross-posted on discourse: https://discourse.haproxy.org/t/how-to-enable-http-3-quic-in-haproxy/6159 Understand that this is considered rude, try to avoid that please. Thanks, Lukas
Re: end all sessions for specific user
Hello, On Friday, 4 December 2020, Yossi Nachum wrote: > If I will change the map file via admin socket > Will it shutdown old/current sessions? Better: you don't need to shut down anything, because HTTP authentication works at the HTTP transaction level, so each request is authenticated, even if it belongs to an old session. You do need to modify the map file on disk as well though, otherwise you will reload back into the old configuration the next time you make a config change. Lukas
Re: end all sessions for specific user
Hello, On Thu, 3 Dec 2020 at 16:17, Yossi Nachum wrote: > > Hi, > I'm using haproxy 1.8 > This is my global and frontend configuration which include user auth: > [...] > acl network_allowed src,map_ip_int(/etc/haproxy/allowed_ips.lst,0) -m int > eq 1 > acl users_allowed hdr(MD5UP),map(/etc/haproxy/allowed_users.lst) -m found > http-request auth realm Bis if network_allowed BASIC_AUTH !users_allowed > http-request auth realm Bis if !users_allowed !network_allowed > http-request reject unless network_allowed || users_allowed I assume you are reloading haproxy to apply this change. This means that an older haproxy process will keep running with the old data. Some ideas: - restart instead of reloading, dropping all sessions immediately (but also killing in-flight transactions) - configure hard-stop-after to an acceptable value for you, to limit the amount of time haproxy runs with old configurations - apply the changes to the map file via admin socket, instead of requiring a new haproxy process to spawn Haproxy can't know whether a session has an old password or not. This is handled at transaction level, not at session level. The only thing you can do is kill all sessions with an IP address that is not in network_allowed, manually. cheers, lukas
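The third idea above, sketched with hypothetical paths and assuming socat is available (the "set map"/"del map" commands are part of the runtime API):

```
# global section: expose an admin-level runtime socket
global
    stats socket /var/run/haproxy.sock mode 600 level admin

# then, to disable a user in the running process without a reload:
#   echo "del map /etc/haproxy/allowed_users.lst <key>" | socat stdio /var/run/haproxy.sock
# remember to also edit the file on disk, or the next reload reverts the change
```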
Re: end all sessions for specific user
Hello, On Thu, 3 Dec 2020 at 15:32, Yossi Nachum wrote: > > Hi, > > > > I have haproxy configuration that based on a file with username and password. > > When I disable a user his new sessions are blocked with 407 but his > old/current sessions are still processed Please share your configuration and haproxy release. I think you may be in tunnel mode, where haproxy does not have visibility to subsequent transactions. Lukas
Fwd: Forthcoming OpenSSL Release
FYI -- Forwarded message - From: Paul Nelson Date: Tue, 1 Dec 2020 at 11:15 Subject: Forthcoming OpenSSL Release To: The OpenSSL project team would like to announce the forthcoming release of OpenSSL version 1.1.1i. This release will be made available on Tuesday 8th December 2020 between 1300-1700 UTC. OpenSSL 1.1.1i is a security-fix release. The highest severity issue fixed in this release is HIGH: https://www.openssl.org/policies/secpolicy.html#high Yours The OpenSSL Project Team
Re: Logging mTLS handshake errors
Hello Dominik, On Wed, 18 Nov 2020 at 15:06, Froehlich, Dominik wrote: > > Hi everyone, > > > > Some of our customers are using mTLS to authenticate clients. There have been > complaints that some certificates don’t work > > but we don’t know why. To shed some light on the matter, I’ve tried to add > more info to our log format regarding TLS validation: This is a known pain point: https://github.com/haproxy/haproxy/issues/693 Lukas
Re: Disable client keep-alive using ACL
Hi Tim, On Tue, 17 Nov 2020 at 13:35, Tim Düsterhus, WoltLab GmbH wrote: > > Hi > > Am 09.11.20 um 12:36 schrieb Tim Düsterhus, WoltLab GmbH: > > is it possible to reliably disable client keep-alive on demand based on > > the result of an ACL? > > > > I was successful for HTTP/1 requests by using: > > > > http-after-response set-header connection close if foo > > > > But apparently that has no effect for HTTP/2 requests. I was unable to > > find anything within the documentation with regard to this either. I don't think there is a way. In HTTP/2 you'd need to send a GOAWAY message to close the connection. There are no instructions in the HTTP headers regarding the connection. I *think/hope* we are actually sending GOAWAY messages when: - some timeouts are reached - hard-stop-after triggers - a "shutdown session ..." is triggered You could check if sending a "421 Misdirected Request" error to the client could achieve your goal, but it certainly behaves differently than a close in H1 (you can't get a successful answer to the client). It's also a workaround. Triggering GOAWAY/full H2 connection teardown dynamically would need to be implemented. I think in HTX all connection headers are immediately dropped (they are not "translated" and applied to the connection). cheers, lukas [1] https://tools.ietf.org/html/rfc7540#section-9.1.2
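The 421 workaround could be sketched like this on 2.2 or later, where "http-request return" exists (the ACL name "foo" is taken from Tim's example and is hypothetical, as are the paths):

```
frontend fe
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    # no config knob forces a GOAWAY; answering 421 allows a well-behaved
    # H2 client to retry the request on a fresh connection (RFC 7540 #9.1.2)
    http-request return status 421 if foo
```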
Re: do we want to keep CentOS 6 builds?
Hello Ilya, On Mon, 16 Nov 2020 at 22:48, Илья Шипицин wrote: > we run CI only for master branch. Exactly! > do all those people want to run latest unstable haproxy on oldish RHEL 6 ? No, but since we *only test* master, this is the only way we get *some* coverage for the changes we are backporting to stable branches. After all, a large percentage of them come from master. How do we know that a fix that we are backporting to 1.8 won't break the build on an older libc or gcc? There is a chance that this would have failed a test on master. This is *NOT* about CentOS 6 specifically. This is about having at least one old Linux system we are testing with, so we don't break things that *we don't want to break*. How sure are you that there are no supported OSes out there that still use gcc 4.4 or glibc 2.12, which we are testing here "for free" and before backporting the fix to 1.8? I am very sympathetic to dropping support for old systems, *if the maintenance overhead becomes a burden* - and I don't set this bar high. My only point is that we should be discussing the problem we are trying to solve (the effort that goes into supporting and testing an obsolete system). I don't know how much hand-holding the tests require - I can't quantify the effort that goes into this, which is why I would like this discussion to be about that, as opposed to bikeshedding around EOLs. So, is this about OpenSSL? Thanks, Lukas
Re: do we want to keep CentOS 6 builds?
Hello, On Sun, 15 Nov 2020 at 17:14, Илья Шипицин wrote: > > Hello, > > we still run cirrus-ci builds. > CentOS 6 is EOL. > > should we drop it? I think CentOS 6 gives us good feedback about older operating systems that we may not necessarily want to break. The question for me is not so much whether we want to test with CentOS 6; it's more about whether we want to be aware of, and fix, build issues on those old systems. If the answer to the latter is yes, then we should keep the tests. If the answer is no, then we should drop them of course. How much of a burden is it to a) keep testing and b) keep supporting (by making sure it builds) on old kernels, libc and libraries? I don't have a strong opinion, but I would tend to keep the support (even though it's EOL). However if this is a continued pain in the ass, for example with openssl, then it's probably better to drop it. Lukas
Re: fronted/bind ordering
Hello, On Fri, 13 Nov 2020 at 21:21, Willy Tarreau wrote: > > > I'd suggest you run haproxy with noreuseport [1] temporarily, and > > > check if your kernel refuses to bind() to those IP's - it likely will. > > > This indicates an unsupported configuration (by your kernel, not by > > > haproxy). > > > > It indeed does fail. Hmmm, that's a shame as this was a really nice > > "feature" to have this fallback. I guess it's back to the drawing board > > unless you or anyone else have any other suggestions. > > OK so that confirms it. Well, you didn't lose anything it's just that > before the moment noreuseport was introduced, you simply did not benefit > from it. Note that for two sockets to support binding with SO_REUSEPORT, > *both* need to have it. So in your case if you value having it for whatever > reason, you may want to keep it enabled for one of the bind lines and > remove it on the other ones. I'm afraid noreuseport is a global option in haproxy. But the noreuseport was merely a suggestion to confirm that the kernel is not OK with this config. The change in behavior with conflicting sockets with SO_REUSEPORT enabled could be due to support for BPF (SO_ATTACH_REUSEPORT_CBPF) and eBPF (SO_ATTACH_REUSEPORT_EBPF) introduced in 4.5. Lukas
Re: fronted/bind ordering
Hello Bartosz, On Fri, 13 Nov 2020 at 10:08, Bartosz wrote: > > Are we really the only ones with this issue? Has no one else seen this > change in behaviour? Or otherwise have any idea where it's coming from? > > Or at least confirm whether they do or don't see the same behaviour. I don't think you can set up sockets this way; I think this is an incorrect configuration leading to undefined behavior. The fact that the behavior then changes across different kernels is not a surprise. I'd suggest you run haproxy with noreuseport [1] temporarily, and check if your kernel refuses to bind() to those IPs - it likely will. This indicates an unsupported configuration (by your kernel, not by haproxy). Lukas [1] https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#noreuseport
Re: DNS Load balancing needs feedback and advice.
Hello Willy, On Fri, 6 Nov 2020 at 10:59, Willy Tarreau wrote: > > > hate the noise that some people regularly make about "UDP support" > > > > I am *way* more concerned about what to tell people when they report > > redundant production systems meltdowns because of the traps that we > > knew about for a long time and never improved. Like when the DNS > > response size surpasses accepted_payload_size and we don't have a TCP > > fallback, > > This one should be addressed once TCP support is implemented. But here > again, I'm not interested in implementing fallbacks. We're not supposed > to be dealing with unknown public servers when it comes to discovery > (which is the case where large responses are expected), so I'd rather > have resolvers configured for a simple resolving use case (e.g. server > address change, UDP) or discovery (TCP only). All DNS servers are supposed to support TCP; in my opinion using *only* a TCP DNS client would be fine (I'm not sure about using different code paths regarding address change and discovery). Apart from the code changes though, this would be a big change that the user needs to know about. I'm not sure if a big item in the release notes is enough. I'm always concerned about changes that are not immediately obvious (because they don't cause configuration warnings or errors). > > or we don't force the users to specify the address-family > > for resolution, which is of course very wrong on a load-balancer. > > Are you suggesting that we should use IPv4 only unless IPv6 is set, and > that under no circumstance we switch from one to the other ? I remember > that this was a difficult choice initially because we were trying to deal > with servers migrating to another region and being available using a > different protocol (I'm not personally fan of mixing address families > for a same server in a load balancer, but I'm pretty certain we clearly > identified this need). But again while it's understandable for certain > (older?) 
cases, it's very likely that it makes absolutely no sense for > discovery. I'm suggesting: zero assumptions. Currently we have "resolve-prefer" to set a *preferred* address-family. I suggest keeping this keyword as is. However I also suggest that we introduce a keyword to restrict the resolver to a specific address-family: "resolve-family" [ipv4|ipv6] to only send either A or AAAA queries. But more importantly I suggest: reject a configuration that has neither "resolve-prefer", nor "resolve-family" (if implemented). This is a hard breaking change that can easily be fixed by adjusting the configuration. It could also be just a config warning for one major release and only become an error in the subsequent release (although I don't think that is needed, since the fix for the user is so easy). I'm confident that the only way to get out of this address-family mess is by stopping the assumptions altogether and forcing the user to make the decision. A load-balancer is not a browser; we must not do happy eyeballs, and the current default behavior is pretty close to "undefined" (first valid response is accepted, on errors switching from one AF to another). Regards, Lukas
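For context, the keyword that exists today expresses only a preference, not a restriction; something like this (resolver name and addresses are made up, and "resolve-family" remains only a proposal in this thread):

```
resolvers mydns
    nameserver ns1 192.0.2.53:53
    accepted_payload_size 8192

backend be
    # resolve-prefer only states a preference; haproxy may still
    # accept a record of the other address family
    server app1 app1.example.com:80 check resolvers mydns resolve-prefer ipv4
```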
Re: DNS Load balancing needs feedback and advice.
Hello Willy, On Wed, 4 Nov 2020 at 15:36, Willy Tarreau wrote: > I think it's a reasonable tradeoff because those who insist on this are > also those who want to use so-called "modern" tools (placing "modern" > and DNS in the same sentence always leaves me a strange feeling that > something 37 years old is still modern). > > @Lukas, to respond to your concern, I don't like DNS either I don't think I got my point across. I never said I don't like DNS (the protocol). Let me be a little more blunt then: What I don't like are code/subsystems that are not sufficiently covered maintenance- and maintainer-wise (whatever the reason may be). In my opinion, the resolver code is like that today: - issues (including bugs) are open for years - it's riddled with traps for the users that will suddenly blow up in their faces (lack of TCP support, IPv4 vs IPv6) - important discussions have come to a halt It's obvious from the language in this thread (from Emeric and Willy), that YOU don't like DNS, and it's obvious from the condition of the existing dns subsystem that there is a complete lack of time for it as well. I'm not blaming Baptiste, I understand time is a rare resource, I'm just honestly describing the situation as I see it. I cannot help here (other than explaining why some current behaviours are bad and triaging the bugs on GH, which is also lacking: most dns issues do not even have the dns subsystem label). All this blunt critique without providing suggestions to improve the situation is rude, but since we are discussing DNS load-balancing (which sounds like adding new fuel to the fire to me), apparently with the same amount of resources and enthusiasm, I am concerned that we will end up in the same or worse situation, which is why I have to share my (negative) opinion about the current situation. 
> hate the noise that some people regularly make about "UDP support" I am *way* more concerned about what to tell people when they report redundant production systems meltdowns because of the traps that we knew about for a long time and never improved. Like when the DNS response size surpasses accepted_payload_size and we don't have a TCP fallback, or we don't force the users to specify the address-family for resolution, which is of course very wrong on a load-balancer. Of course I understand the DNS resolver code has nothing to do with future DNS load-balancing code. But the fact of the matter is that a new subsystem/featureset requires sustained effort, time and frankly also interest. lukas
Re: DNS Load balancing needs feedback and advice.
Hello Emeric, On Mon, 2 Nov 2020 at 15:41, Emeric Brun wrote: > > Hi All, > > We are currently studying to develop a DNS messages load balancer (into > haproxy core) I find this a little surprising given that there already is a great DNS load-balancer out there (dnsdist) from the folks at powerdns, and when I look at the status of the haproxy resolver, I don't feel like DNS sparks a huge amount of developer interest. Loadbalancing DNS will certainly require more attention and enthusiasm than what the resolver code gets today, and even more importantly: long-term maintenance. > After a global pass on RFCs (DNS, DNS over TCP, eDNS, DNSsec ...) we noticed > that practices on DNS have largely evolved > since stone age. > > Since the last brainstorm meeting I had with Baptiste Assmann and Willy > Tarreau, we were attempted to make some > assumptions and choices and we want to submit them to community to have your > thoughts. > > Reading RFCs, I notice multiple fallback cases (if server not support eEDNS > we should retry request without eDNS The edns fallback should be obsolete and has been disabled on the large public resolvers on flagday 2019. https://dnsflagday.net/2019/ > or if response is truncated we should retry over TCP This is and always will be very necessary. Deploying the haproxy resolver feature would be a lot less dangerous if we supported this (or made all requests over TCP in the first place). > So we decide to make the assumption that nowadays, all modern DNS servers > support both TCP (and pipelined requests > as defined in rfc 7766) and eDNS. In this case the DNS loadbalancer will > forward messages received from clients in UDP > or TCP (supporting eDNS or not) to server via pipelined TCP conn. That's probably a good idea. You still have to handle all the UDP pain on the frontend though. 
> In addition, I had a more technical question: eDNS first purpose is clearly > to bypass the 512 bytes limitation of standard DNS over UDP, > but I did'nt find details about usage of eDNS over TCP which seems mandatory > if we want to perform DNSsec (since DNSsec > exloit some eDNS pseudo-header fields). The main question is how to handle > the payload size field of the eDNS pseudo header > if messages are exchanged over TCP. I'm not sure what the RFC specifically says, but I'd say don't send the "UDP payload size" field if the transport is TCP and ignore/filter it when received over TCP. not a dns expert here though, lukas
Re: IP binding and standby health-checks
Hello, On Tue, 20 Oct 2020 at 05:36, Dave Hall wrote: > HAProxy Active/Standby pair using keepalived and a virtual IP. > Load balance SSH connections to a group of user access systems (long-running > Layer 4 connections). > Using Fail2Ban to protect against password attacks, so using send-proxy-v2 > and go-mmproxy to present client IP to target servers. > > Our objective is to preserve connections through a fail-over. This is not possible today, and I doubt it ever will be. Haproxy is terminating the Layer 4 sessions on both ends and thus would have to migrate the sockets from one box to another. While Linux does have "TCP connection repair", I'm not sure it's actually possible to use it in the load-balancer scenario, where the active box would just suddenly die (as opposed to a graceful and planned failover). You need to look at a solution that does not involve socket termination, like IPVS Connection Synchronization for example. Or look at what the hyperscalers do nowadays: Google's Maglev, GitHub's glb-director or Facebook's katran probably can give some inspiration. Lukas
Re: Removal / obsolescence of keywords in 2.3 and future
Hello, On Wed, 14 Oct 2020 at 15:29, Willy Tarreau wrote: > For "nbproc", given that I had no response in the previous question and > I anticipate some surprises if I play games with it, I'll probably apply > William's suggestion, consisting in starting to emit a warning about it, > and asking users to either remove it, or explicitly mention "nbthread 1" > to get rid of the warning, and to report their use case. What about multi-threading performance across NUMA nodes? On discourse someone asked why nbproc and nbthread can't be combined: https://discourse.haproxy.org/t/cpu-affinity-nbproc-1-and-nbthread-1/5675 I'm not sure if this is a real use-case or simply a case of overengineering ... lukas
Re: source algorithm - question.
Hello, On Thu, 24 Sep 2020 at 11:40, Łukasz Tasz wrote: > > Hi all, > haproxy is gr8 - simply. > > Till today I was using roundobin algorithm, but after studying documentation > it popped up that source might be better. > I'm using haproxy in tcp mode, version 1.8, load from one client sometimes > requires more then few servers from the backend. > > but also initialization of handling requests takes some cost - I considered > picking a source as an algorithm - to avoid picking the next server according > to roundrobin. > But I realised that the behaviour of the queue has changed. With a source > algorithm for every source(host, client) there is a separate queue and one > server picked. > would it be possible that when one server reaches it's slots, the next one is > allocated (and next one, and next one)? where I can find detailed information > on how queues are managed depending on the algorithm which is used? It sounds like what you are looking for is "balance first". You can read more about this in the documentation about balance: https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-balance Lukas
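A minimal sketch of "balance first" (addresses and maxconn values are illustrative): each server needs a maxconn, and a server only receives new connections once the ones before it are full, which matches the "next one is allocated" behaviour asked about:

```
backend ssh_pool
    mode tcp
    balance first
    # servers are filled in declaration order, each up to its maxconn
    server srv1 192.0.2.11:22 check maxconn 50
    server srv2 192.0.2.12:22 check maxconn 50
    server srv3 192.0.2.13:22 check maxconn 50
```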
Re: [PATCH] BUILD: makefile: Update feature flags for Solaris / FreeBSD / NetBSD / OpenBSD
On Tue, 15 Sep 2020 at 09:05, Brad Smith wrote: > >> NetBSD 8.0 adds support for accept4() and closefrom(). Enable > >> getaddrinfo(). > > We just had to disable threading on OpenBSD 6.7 for the build to succeed: > > > > https://github.com/haproxy/haproxy/issues/725 > > > > Did you actually test all those combinations? Because otherwise it > > does not seem like a good idea to commit this. > > I know. I saw that. That was wrong. The other diff I submitted switching > the compiler default > as well as this came from that bogus diff. > > 2 of the 4 targets are (FreeBSD / OpenBSD). The other 2 are just based > on being aware of the API > and what has been implemented and when. I'll split up my diff and send > in the tested bits first. Ok thanks (I was concerned about disabling threading on OpenBSD as well). Please add the information that your previous patch switching gcc to cc is required for the OpenBSD change to work; it's obvious to me now, but when looking at the single change we are not always aware of the history (especially later during bisects). Thanks, Lukas
Re: [PATCH] BUILD: makefile: Update feature flags for Solaris / FreeBSD / NetBSD / OpenBSD
Hello Brad, On Sun, 13 Sep 2020 at 09:08, Brad Smith wrote: > > The following diff updates the feature flags for Solaris / FreeBSD / NetBSD / > OpenBSD. > > Bump the baseline Solaris to 9 which intruduced closefrom(). > > FreeBSD 10 is already EOL for support but its the new baseline. Introduces > accept4(). > Enable getaddrinfo(). The FreeBSD port enables these. > > OpenBSD 6.3 is pretty old but it brings a compiler that supports TLS for the > threads > support. 5.7 already supported closefrom(). Enable getaddrinfo(). > > NetBSD 8.0 adds support for accept4() and closefrom(). Enable getaddrinfo(). We just had to disable threading on OpenBSD 6.7 for the build to succeed: https://github.com/haproxy/haproxy/issues/725 Did you actually test all those combinations? Because otherwise it does not seem like a good idea to commit this. Lukas
Re: [RFC PATCH] MAJOR: ssl: Support for validating backend certificates with URI SANs (subjectAltName)
On Tue, 8 Sep 2020 at 12:39, Teo Klestrup Röijezon wrote: > > Hey Willy, sorry about the delay.. managed to get sick right after that stuff. > > > I don't understand what you mean here in that it does not make sense to > > you. Actually it's not even about overriding verifyhost, it's more that > > we match that the requested host (if any) is indeed supported by the > > presented certificate. The purpose is to make sure that the connection > > is not served by a server presenting a valid cert which doesn't match > > the authority we're asking for. And if we don't send any servername, > > then we can still enforce the check against a hard-coded servername > > presented in verifyhost. > > To my mind, `verifyhost` is more or less an acknowledgement that "no, this > isn't quite set up perfectly, but we can at least verify with some caveats". It's about proper certificate validation. > Otherwise, the host could just be taken from the address in the `connect` > keyword before SNI? No, because the address often is an actual IP address, not a hostname, and if it is a hostname, then it often does not match the actual certificate SAN. In a common setup we'd be serving an HTTPS site like www.example.org, so that is the certificate. However, the backend servers that haproxy accesses are not www.example.org - because www.example.org would actually point to haproxy, not a backend server. Backend servers could be www1.example.org and www2.example.org, so there is a mismatch. So in the most common configuration, those hostnames do not match, which is why we need to specify it, with the intention to fully validate it. Lukas
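The www1/www2 mismatch described above is typically handled with explicit per-server validation settings, roughly like this (addresses and paths are made up):

```
backend be_tls
    # haproxy serves www.example.org; each backend server presents its own
    # cert, so validation has to target the server's name, not the site's
    server www1 192.0.2.21:443 ssl verify required ca-file /etc/haproxy/ca.pem sni str(www1.example.org) verifyhost www1.example.org
    server www2 192.0.2.22:443 ssl verify required ca-file /etc/haproxy/ca.pem sni str(www2.example.org) verifyhost www2.example.org
```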
[PATCH] DOC: overhauling github issue templates
as per the suggestions from Cyril and Willy on the mailing list: Message-ID: and with direct contributions from Tim, this changes large parts of the bug issue template. The Feature template is also updated as well as a new template for Code Reports is introduced. Co-authored-by: Tim Duesterhus --- .github/ISSUE_TEMPLATE/Bug.md | 67 --- .github/ISSUE_TEMPLATE/Code-Report.md | 55 .github/ISSUE_TEMPLATE/Feature.md | 30 3 files changed, 117 insertions(+), 35 deletions(-) create mode 100644 .github/ISSUE_TEMPLATE/Code-Report.md diff --git a/.github/ISSUE_TEMPLATE/Bug.md b/.github/ISSUE_TEMPLATE/Bug.md index 54f0475..038330a 100644 --- a/.github/ISSUE_TEMPLATE/Bug.md +++ b/.github/ISSUE_TEMPLATE/Bug.md @@ -25,27 +25,22 @@ Thanks for understanding, and for contributing to the project! --> -## Output of `haproxy -vv` and `uname -a` - +## Detailed description of the problem -``` -(paste your output here) -``` + -## What's the configuration? +## Expected behavior -``` -(paste your output here) -``` - ## Steps to reproduce the behavior -## Expected behavior +``` +(paste your output here) +``` + +## Output of `haproxy -vv` and `uname -a` + + + +``` +(paste your output here) +``` + +## If HAProxy crashed: Last outputs and backtraces -## Do you have any idea what may have caused this? +``` +(paste your output here) +``` -## Do you have an idea how to solve the issue? 
+## Additional information (if helpful) + diff --git a/.github/ISSUE_TEMPLATE/Code-Report.md b/.github/ISSUE_TEMPLATE/Code-Report.md new file mode 100644 index 000..d1bcd49 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/Code-Report.md @@ -0,0 +1,55 @@ +--- +name: Code Report +about: File a Code Report (for example from coverity or valgrind) +labels: 'type: code-report' +--- + + + +## Code Report + + + +Tool: (tool name goes here) + +``` +(paste your output here) +``` + + +## Output of `haproxy -vv` and `uname -a` + + + +``` +(paste your output here) +``` diff --git a/.github/ISSUE_TEMPLATE/Feature.md b/.github/ISSUE_TEMPLATE/Feature.md index 8f1a9df..eaaa7e5 100644 --- a/.github/ISSUE_TEMPLATE/Feature.md +++ b/.github/ISSUE_TEMPLATE/Feature.md @@ -25,27 +25,12 @@ Thanks for understanding, and for contributing to the project! --> -## Output of `haproxy -vv` and `uname -a` - - - -``` -(paste your output here) -``` - ## What should haproxy do differently? Which functionality do you think we should add? - ## What are you trying to do? +## Output of `haproxy -vv` and `uname -a` + + + +``` +(paste your output here) +``` -- 2.7.4
Re: github template
Hi, I prepared: - changes to Bug.md as per this discussion - changes to Features.md (just different sequence here) - added a new label "type: code-report" and a new issue template for those as well The changes can be seen here: https://github.com/lukastribus/hap-issue-trial/issues/new/choose If you agree, I will send the patches. Lukas
Re: Is the "source" keyword supported on FreeBSD?
On Wed, 12 Aug 2020 at 21:03, Jerome Magnin wrote: > > Hi Frank, > > On Wed, Aug 12, 2020 at 11:50:05AM +0200, Frank Wall wrote: > > Hi, > > > > this *feels* like a silly question and I may have missed something > > pretty obvious, but... I've tried to use the "source" keyword and > > it doesn't work. HAProxy does not use the specified IP address when > > connecting to the server. > > > > Is this keyword supposed to work on FreeBSD or are there any known caveats? > > Yes it is supposed to work on FreeBSD. The only caveat I know is that > you must use ipfw if you want to do "full transparent proxy" mode, which > mean using the client IP addresses to establish connections on the > backend side, because divert-reply is not available in FreeBSD's pf but > this is not what you are trying to do. Aren't those two different things? Just bind()ing to a source IP should not require any transparent mode or ipfw settings. Of course, the IP needs to be actually configured on the system. Running haproxy through truss would show what happens to this bind() call. Lukas
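For reference, the plain (non-transparent) source binding being discussed is just this (addresses are examples); the source address must actually exist on the FreeBSD host:

```
backend be
    # plain bind() to a local source IP; unlike "source ... usesrc clientip",
    # this needs no ipfw/pf support
    source 192.0.2.10
    server app1 198.51.100.5:80 check
```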
Re: github template
Hello, On Mon, 20 Jul 2020 at 06:35, Willy Tarreau wrote: > > (Another case is when I try to follow the issue reports during vacation) > > > > I think it could be easier and quicker by only changing the sections order > > like this : > > 1. Expected behavior > > 2. Actual behavior > > 3. Steps to reproduce the behavior > > 4. Do you have any idea what may have caused this? > > 5. Do you have an idea how to solve the issue? > > 6. What's the configuration? > > 7. Output of haproxy -vv and uname -a > > > > What do you think about it ? > > Actually I'm wondering whether we should merge points 1 and 2 above into > "detailed description of the problem": sometimes it's easier to mention > "I'm seeing this which I find strange" without knowing exactly what the > expected behavior should be. We could indicate in the comments for this > section "please clearly describe the whole problem, preferably starting > with the expected behavior and followed by the actual behavior". > > And even then, I'm now thinking that most users would probably more > naturally first describe what they observed then explain what they > would have expected. This would flow more naturally: > >1) subject of the bug report Not sure whether you are just referring to the title of the issue (which actually is the subject already) or whether you are proposing to add a new section: I feel like that's redundant and would advise against it. >2) more detailed description matching the subject above: "when I > do this, haproxy does that" That's what "Actual behavior" is. Are you suggesting we rephrase? I agree it should be at the beginning, before "expected behavior". 
>3) expected behavior: explain why what's described above is > considered as wrong and what was expected instead (probably > a mismatch with the doc, can be skipped if it's about a crash) >4, 5, 6, 7) unchanged >8) haproxy's last output if it crashed, gdb output if a core was > produced >9) any additional info that may help (local patches, environment > specificities, unusual workload, observations, coincidences > with events on other components ...) Agreed. Features.md needs something similar (moving all the boring stuff below). Let me know about 1) and 2) above, and I will send a patch. Lukas
Re: Can I help with the 2.1 release?
Hello, On Thu, 30 Jul 2020 at 20:49, Valter Jansons wrote: > > On Thu, Jul 30, 2020 at 6:44 PM Harris Kaufmann > wrote: > > my company really needs the next 2.1 release but we want to avoid > > deploying a custom, self compiled version. > > > > Is there something I can do to help with the release? I guess there > > are no blocking issues left? > > For mission-critical setups you should be running the LTS release > lines. The 2.1 release line was more of a technical preview line for > the following 2.2 LTS release, to keep changes flowing, and you should > not expect regular new release tags on the 2.1 line considering the > 2.2 line has shipped. I am not involved in the release process but I > would assume the team will push a new 2.1 tag some day however I do > not see that being a high priority for them in any way. > > As a result, I would instead rephrase the question in the other > direction: Are there any blockers for you to upgrade to 2.2? I'm not sure I agree. I would be reluctant to suggest upgrading mission-critical setups to 2.2, it's not even a month old at this point. Unless you expect to run into bugs and have time and resources to troubleshoot it. 2.1 is not a technical preview, it's a proper release train with full support. Support for it will cease in 2021-Q1, but I don't think you can conclude that that means it's getting less love now. Lukas
Re: SLZ vs ZLIB
Hello, On Wed, 29 Jul 2020 at 19:19, Илья Шипицин wrote: > however, ZLIB is enabled by default in many distros and docker images. > any idea why ZLIB is chosen by default ? Because zlib is known, packaged and used everywhere and by everyone while slz is a niche library. It would need a marketing campaign I guess. lukas
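For anyone who wants to compare the two, a hedged sketch of the build knobs involved (USE_ZLIB, USE_SLZ, SLZ_INC and SLZ_LIB are HAProxy Makefile variables; the TARGET and the slz paths below are illustrative):

```
# zlib build (the common distro default)
make TARGET=linux-glibc USE_ZLIB=1

# slz build (stateless, much cheaper in CPU and memory per stream)
make TARGET=linux-glibc USE_SLZ=1 SLZ_INC=/opt/slz/src SLZ_LIB=/opt/slz
```

Only one of the two compression libraries is selected at build time, which is part of why distro packages default to the one everybody already ships.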
Re: Several CVEs in Lua 5.4
Hello, On Wed, 29 Jul 2020 at 11:16, Froehlich, Dominik wrote: > > Hi Lukas, > > Thanks for the reply. > My query goes along the lines of which Lua version is compatible with HAproxy > and contains fixes to those CVEs. > I could not find a specific instruction as to which Lua version can be used > to build HAproxy / has been tested for production use. Currently LUA 5.3 is supported, patches will be committed soon for LUA 5.4 support: https://github.com/haproxy/haproxy/issues/730#issuecomment-664555213 But the way to fix this is not to rush to a new major LUA release, but to backport the fixes to LUA 5.3 instead. Lukas
Re: Several CVEs in Lua 5.4
Hello, On Wed, 29 Jul 2020 at 10:23, Froehlich, Dominik wrote: > > Hello everyone, > > Not sure if this is already addressed. Today I got a CVE report of several > issues with Lua 5.3.5 up to 5.4. > > I believe Lua 5.4 is currently recommended to build with HAproxy 2.x? > > Before I open an issue on github I would like to ask if these are already > known / addressed: I don't understand, specifically what are you asking us to do here? It's not like we ship LUA ... Lukas
Re: http-reuse and Proxy protocol
On Mon, 27 Jul 2020 at 13:14, Willy Tarreau wrote: > > However on a unix domain socket like this we never had this issue in > > the first place, as connection-reuse cannot be used on it by > > definition, correct? > > No, it doesn't change anything. We consider the connection, the protocol > family it uses is irrelevant. I don't know why, but I always wrongly assumed that a unix domain socket could only be a datagram socket, while really it's up to the application. And of course we use stream sockets. Glad I could eliminate this wrong assumption :) Lukas
Re: http-reuse and Proxy protocol
Hello, On Thu, 23 Jul 2020 at 14:34, Willy Tarreau wrote: > > defaults > > http-reuse always > > > > backend abuse > > timeout server 60s > > balance roundrobin > > hash-balance-factor 0 > > server s_abuse u...@abuse.sock send-proxy-v2 maxconn 4 > > > > listen l_abuse > > bind u...@abuse.sock accept-proxy > > http-request set-var(req.delay) int(500) > > http-request lua.add_delay > > server 192.168.000.aaa:80 maxconn 1 > > server 192.168.000.bbb:80 maxconn 1 > > server z 192.168.000.ccc:80 maxconn 1 > > > > Is it OK ? Because i have no warning when verifying the configuration, or > > should i add a "http-reuse never" in "backend abuse" ? > > It is now properly dealt with, by marking the connection private, which > means it will not be shared at all. So what you'll see simply is that > there is no reuse for connections employing send-proxy. So your config > is safe, but you will just not benefit from the reuse. > > Anyway it's generally not a good idea to use proxy protocol over HTTP > from an HTTP-aware agent. Better use Forward/X-Forwarded-for that passes > the info per request and that nowadays everyone can consume. However on a unix domain socket like this we never had this issue in the first place, as connection-reuse cannot be used on it by definition, correct? Lukas
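A hedged sketch of what Willy suggests at the end (the directives are standard HAProxy config; the backend name and address are invented): passing the client address per request via a header keeps backend connections shareable, whereas send-proxy-v2 marks them private:

```
defaults
    mode http
    http-reuse always
    timeout connect 5s
    timeout client 30s
    timeout server 30s

backend app
    # per-request client address via X-Forwarded-For
    # instead of per-connection proxy protocol
    option forwardfor
    server app1 192.0.2.10:80   # no send-proxy-v2, so reuse remains possible
```

With `option forwardfor` each request carries the client address itself, so nothing ties the backend connection to a single client and reuse can actually happen.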
Re: github template
I will comment next week, but I generally agree that we should move the version output to the end, as I noticed the same issue. expected/actual behaviour sections are painful in the obvious cases (dont crash/crash), but oftentimes users just assume their itent is obvious when it's really not. lukas
[PATCH] MINOR: doc: ssl: req_ssl_sni needs implicit TLS
req_ssl_sni is not compatible with protocols negotiating TLS explicitly, like SMTP on port 25 or 587 and IMAP on port 143.

Fix an example referring to 587 (SMTPS port with implicit TLS is 465) and amend the req_ssl_sni documentation.

This doc fix should be backported to supported versions.
---
 doc/configuration.txt | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index e2e9d88..93137ca 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12259,15 +12259,16 @@ use-server <server> unless <condition>
   The "use-server" statement works both in HTTP and TCP mode. This makes it
   suitable for use with content-based inspection. For instance, a server could
-  be selected in a farm according to the TLS SNI field. And if these servers
-  have their weight set to zero, they will not be used for other traffic.
+  be selected in a farm according to the TLS SNI field when using protocols with
+  implicit TLS (also see "req_ssl_sni"). And if these servers have their weight
+  set to zero, they will not be used for other traffic.
 
   Example :
      # intercept incoming TLS requests based on the SNI field
      use-server www if { req_ssl_sni -i www.example.com }
      server www 192.168.0.1:443 weight 0
      use-server mail if { req_ssl_sni -i mail.example.com }
-     server mail 192.168.0.1:587 weight 0
+     server mail 192.168.0.1:465 weight 0
      use-server imap if { req_ssl_sni -i imap.example.com }
      server imap 192.168.0.1:993 weight 0
      # all the rest is forwarded to this server
@@ -17670,13 +17671,15 @@ req_ssl_sni : string (deprecated)
   contains data that parse as a complete SSL (v3 or superior) client hello
   message. Note that this only applies to raw contents found in the request
   buffer and not to contents deciphered via an SSL data layer, so this will not
-  work with "bind" lines having the "ssl" option. SNI normally contains the
-  name of the host the client tries to connect to (for recent browsers). SNI is
-  useful for allowing or denying access to certain hosts when SSL/TLS is used
-  by the client. This test was designed to be used with TCP request content
-  inspection. If content switching is needed, it is recommended to first wait
-  for a complete client hello (type 1), like in the example below. See also
-  "ssl_fc_sni".
+  work with "bind" lines having the "ssl" option. This will only work for actual
+  implicit TLS based protocols like HTTPS (443), IMAPS (993), SMTPS (465),
+  however it will not work for explicit TLS based protocols, like SMTP (25/587)
+  or IMAP (143). SNI normally contains the name of the host the client tries to
+  connect to (for recent browsers). SNI is useful for allowing or denying access
+  to certain hosts when SSL/TLS is used by the client. This test was designed to
+  be used with TCP request content inspection. If content switching is needed,
+  it is recommended to first wait for a complete client hello (type 1), like in
+  the example below. See also "ssl_fc_sni".
 
 ACL derivatives :
   req_ssl_sni : exact string match
-- 
2.7.4
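As a side note, the "first wait for a complete client hello (type 1)" advice in the documentation above typically translates to a TCP frontend along these lines (a sketch; frontend/backend names and the port are invented, the directives are standard HAProxy config):

```
frontend tls_passthrough
    mode tcp
    bind :443
    # buffer the start of the connection, then only proceed once a
    # complete TLS client hello (type 1) has been seen
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend bk_www if { req_ssl_sni -i www.example.com }
    default_backend bk_other
```

Without the inspect-delay/accept pair, the SNI match may run against an incomplete buffer and silently never match.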
Re: Documentation
Hello, On Sat, 11 Jul 2020 at 13:20, Jonathan Matthews wrote: > > On Sat, 11 Jul 2020 at 12:14, Tofflan wrote: >> >> Hello! >> >> I'm trying to set up HAProxy on my pfSense router, but the links under >> documentation don't work, for example: >> https://cbonte.github.io/haproxy-dconv/2.3/intro.html and >> https://cbonte.github.io/haproxy-dconv/2.3/configuration.html >> Is there any way to read or download them somewhere? > > > Hey there, > > I’m not sure if someone jumped the gun by updating the site’s doc links to > reference the unreleased 2.3 version, but you’ll have better luck changing > the “2.3” to either 2.2 or 2.0, depending on the version you’re trying to > install :-) Right, 2.3 is a development tree that you will not use in production anyway. Use the documentation that matches what you are actually running, and what you should be running is a stable release. If you are on a bleeding-edge development tree, you should be looking at the files in the doc/ directory anyway, because those are as recent as the code itself. But yes, I'm sure Cyril will publish the 2.3-dev documentation shortly, and then the links on haproxy.org will work. cheers, lukas
proposing a haproxy 2.0.16 release (was [BUG] haproxy retries dispatch to wrong server)
Hello, On Fri, 10 Jul 2020 at 08:08, Christopher Faulet wrote: > Hi, > > I finally pushed this fix in the 2.0. Note the same bug affected the HTTP proxy > mode (using http_proxy option). In this case, the connection retries is now > disabled (on the 2.0 only) because the destination address is definitely lost. > It was the easiest way to work around the bug without backporting a bunch of > sensitive patches from the 2.1. Given the importance and impact of this bug you just fixed (at least 5 independent people already reported it on GH and ML) and the amount of other important fixes already in the tree (including at least one crash fix), I'm suggesting to release 2.0.16. Unless there are other important open bugs with ongoing troubleshooting?

lukas@dev:~/haproxy-2.0$ git log --oneline v2.0.15..
d982a8e BUG/MEDIUM: stream-int: Disable connection retries on plain HTTP proxy mode
e8d2423 BUG/MAJOR: stream: Mark the server address as unset on new outgoing connection
b1e9407 MINOR: http: Add support for http 413 status
c3db7c1 BUG/MINOR: backend: Remove CO_FL_SESS_IDLE if a client remains on the last server
0d881b2 BUG/MEDIUM: connection: Continue to recv data to a pipe when the FD is not ready
847271d MINOR: connection: move the CO_FL_WAIT_ROOM cleanup to the reader only
39bb227 BUG/MEDIUM: mux-h1: Subscribe rather than waking up in h1_rcv_buf()
e0ca6ad BUG/MEDIUM: mux-h1: Disable splicing for the conn-stream if read0 is received
0528ae2 BUG/MINOR: mux-h1: Disable splicing only if input data was processed
8e8168a BUG/MINOR: mux-h1: Don't read data from a pipe if the mux is unable to receive
afadc9a BUG/MINOR: mux-h1: Fix the splicing in TUNNEL mode
8e4e357 BUG/MINOR: http_act: don't check capture id in backend (2)
c55e3e1 DOC: configuration: fix alphabetical ordering for tune.pool-{high,low}-fd-ratio
a5e11c0 DOC: configuration: add missing index entries for tune.pool-{low,high}-fd-ratio
ab06f88 BUG/MINOR: proxy: always initialize the trash in show servers state
ca212e5 BUG/MINOR: proxy: fix dump_server_state()'s misuse of the trash
135899e BUG/MEDIUM: pattern: Add a trailing \0 to match strings only if possible
0b77c18 DOC: ssl: add "allow-0rtt" and "ciphersuites" in crt-list
4271c77 MINOR: cli: make "show sess" stop at the last known session
8ba978b BUG/MEDIUM: fetch: Fix hdr_ip misparsing IPv4 addresses due to missing NUL
9bd736c REGTEST: ssl: add some ssl_c_* sample fetches test
15080cb REGTEST: ssl: tests the ssl_f_* sample fetches
d6cd2b3 MINOR: spoe: Don't systematically create new applets if processing rate is low
1b4cc2e BUG/MINOR: http_ana: clarify connection pointer check on L7 retry
d995d5f BUG/MINOR: spoe: correction of setting bits for analyzer
26e1841 REGTEST: Add a simple script to tests errorfile directives in proxy sections
8645299 BUG/MINOR: systemd: Wait for network to be online
b88a37c MEDIUM: map: make the "clear map" operation yield
c5034a3 REGTEST: http-rules: test spaces in ACLs with master CLI
4cdff8b REGTEST: http-rules: test spaces in ACLs
c3a2e35 BUG/MINOR: mworker/cli: fix semicolon escaping in master CLI
da9a2d1 BUG/MINOR: mworker/cli: fix the escaping in the master CLI
7ed43aa BUG/MINOR: cli: allow space escaping on the CLI
249346d BUG/MINOR: spoe: add missing key length check before checking key names
1b7f58f BUG/MEDIUM: ebtree: use a byte-per-byte memcmp() to compare memory blocks
47a5600 BUG/MINOR: tcp-rules: tcp-response must check the buffer's fullness
9f3bda0 MINOR: http: Add 404 to http-request deny
c09f797 MINOR: http: Add 410 to http-request deny
lukas@dev:~/haproxy-2.0$

Thanks, Lukas
Re: [BUG] haproxy retries dispatch to wrong server
Hello Michael, On Tue, 7 Jul 2020 at 15:16, Michael Wimmesberger wrote: > > Hi, > > I might have found a potentially critical bug in haproxy. It occurs when > haproxy is retrying to dispatch a request to a server. If haproxy fails > to dispatch a request to a server that is either up or has no health > checks enabled it dispatches the request to a random server on any > backend in any mode (tcp or http) as long as they are in the up state > (via tcp-connect or httpchk health checks). In addition haproxy logs the > correct server although it dispatches the request to a wrong server. > > I could not reproduce this issue on 2.0.14 or any 2.1.x version. This > happens in tcp and http mode and http requests might be dispatched to > tcp servers and vice versa. > > I have tried to narrow this problem down in source using git bisect, > which results in this commit marked as the first bad one: > 7b69c91e7d9ac6d7513002ecd3b06c1ac3cb8297. Makes sense that 2.1 is not affected because this commit was specifically written for 2.0 (it's not a backport). Exceptionally detailed and thorough reporting here, this will help a lot, thank you. A bug has been previously filed, but the details mentioned in this thread will help get things going: https://github.com/haproxy/haproxy/issues/717 Lukas
Re: [PATCH v2 0/2] Warnings for truncated lines
Hello, On Monday, 22 June 2020, Willy Tarreau wrote: > > > Configuration file is valid > > Looks good to me. > > > I guess a truncated last line cannot be differentiated from file that > > does not > > end with a new line, because fgets() consumes the full line (triggering > the > > eof), even if reading a NUL byte? > > Definitely! At least we're giving info with the warning and that's what > matters to me. > > Lukas is that also OK for you ? Yes, looks good to me. lukas