Re: HAProxy and musl (was: Re: HAproxy Error)

2023-09-14 Thread Aleksandar Lazic

Hi.

Resuscitating this old thread with a musl libc update.

https://musl.libc.org/releases.html

```
musl-1.2.4.tar.gz (sig) - May 1, 2023

This release adds TCP fallback to the DNS stub resolver, fixing the 
longstanding inability to query large DNS records and incompatibility 
with recursive nameservers that don't give partial results in truncated 
UDP responses. It also makes a number of other bug fixes and 
improvements in DNS and related functionality, including making both the 
modern and legacy API results differentiate between NODATA and NxDomain 
conditions so that the caller can handle them differently.
```
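
To illustrate that last point, a caller of the modern API could tell the two cases apart roughly like this (a minimal sketch, assuming the NODATA case is reported as EAI_NODATA as the note suggests; glibc only exposes that constant with _GNU_SOURCE, and the host name is just a placeholder):

```
#define _GNU_SOURCE          /* glibc hides EAI_NODATA without this */
#include <stdio.h>
#include <sys/socket.h>
#include <netdb.h>

int main(int argc, char **argv)
{
	/* Ask only for AAAA records so a NODATA answer is easy to trigger. */
	struct addrinfo hints = { .ai_family = AF_INET6 }, *res = NULL;
	const char *name = argc > 1 ? argv[1] : "example.com";
	int err = getaddrinfo(name, NULL, &hints, &res);

	if (err == 0) {
		printf("%s: resolved\n", name);
		freeaddrinfo(res);
	}
	else if (err == EAI_NONAME)
		printf("%s: NxDomain (name does not exist)\n", name);
	else if (err == EAI_NODATA)
		printf("%s: NODATA (name exists, but no AAAA record)\n", name);
	else
		printf("%s: %s\n", name, gai_strerror(err));
	return 0;
}
```

Before 1.2.4, musl typically reported both conditions as EAI_NONAME, so a caller could not tell an empty record set from a non-existent name.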

Regards
Alex


On 2020-04-16 (Thu) 13:26, Willy Tarreau wrote:

On Thu, Apr 16, 2020 at 12:29:42PM +0200, Tim Düsterhus wrote:

FWIW musl seems to work OK here when building for linux-glibc-legacy.


Yes. HAProxy linked against Musl is smoke tested as part of the Docker
Official Images program, because the Alpine-based Docker images use Musl
as their libc. In fact you can even use TARGET=linux-glibc + USE_BACKTRACE=.
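
For illustration, that workaround boils down to something like this on an Alpine/musl box (a sketch; everything besides TARGET and the empty USE_BACKTRACE= is an assumed extra option):

```
make -j"$(nproc)" TARGET=linux-glibc USE_BACKTRACE= USE_OPENSSL=1 USE_ZLIB=1
```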


By the way, I initially thought I was the only one building with musl
for my EdgeRouter-x that I'm using as a distcc load balancer for the
build farm at work. But if there are other users, we'd rather add
a linux-musl target, as the split between OS and library was precisely
made for this purpose!

Does anyone object to something like this (+ the appropriate entries
in other places and doc)?


diff --git a/Makefile b/Makefile
index d5841a5..a3dad36 100644
--- a/Makefile
+++ b/Makefile
@@ -341,6 +341,18 @@ ifeq ($(TARGET),linux-glibc-legacy)
  USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_GETADDRINFO)
  endif
  
+# For linux >= 2.6.28 and musl
+ifeq ($(TARGET),linux-musl)
+  set_target_defaults = $(call default_opts, \
+USE_POLL USE_TPROXY USE_LIBCRYPT USE_DL USE_RT USE_CRYPT_H USE_NETFILTER  \
+USE_CPU_AFFINITY USE_THREAD USE_EPOLL USE_FUTEX USE_LINUX_TPROXY  \
+USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_NS USE_TFO \
+USE_GETADDRINFO)
+ifneq ($(shell echo __arm__/__aarch64__ | $(CC) -E -xc - | grep '^[^\#]'),__arm__/__aarch64__)
+  TARGET_LDFLAGS=-latomic
+endif
+endif
+
  # Solaris 8 and above
  ifeq ($(TARGET),solaris)
# We also enable getaddrinfo() which works since solaris 8.
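
If the hunk above were merged, a musl build would simply select the new target, for example (a sketch; the USE_* options and the cross-toolchain name are assumptions, not part of the proposal):

```
# Native build on a musl-based system
make TARGET=linux-musl USE_OPENSSL=1 USE_ZLIB=1

# Cross-build for a MIPS device such as the EdgeRouter-X
# (toolchain prefix assumed)
make TARGET=linux-musl CC=mipsel-linux-musl-gcc
```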

Willy




Re: mux-h2: Backend stream is not fully closed if frontend keeps stream open

2023-09-14 Thread Christopher Faulet

On 14/09/2023 at 01:36, Valters Jansons wrote:

I set up a small PoC repository at https://github.com/sigv/grpcopen
with a server and a client. There is a Ping endpoint, which works fine
(the frontend client is first to close). There is also a Foobar
endpoint, which is intentionally mangled, to ensure the backend server
returns a gRPC error before the frontend client closes its side.
Currently `client/main.go` sends both requests; to only observe the
failing request, `ping(stream, ctx)` invocation can be removed from
the end of the file.

The only requirement to run it is to have Go available locally. The
README file in the repository covers how to download and unpack Go, if
you want to run this on a fresh VM. It also includes a minimal HAProxy
configuration that I can reproduce the issue with.
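
For readers who don't want to clone the repository, an end-to-end HTTP/2 proxy for gRPC generally looks something like the following (an illustrative sketch only, with made-up names and ports; it is not the exact configuration from the repository):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend grpc_in
    bind :8080 proto h2
    default_backend grpc_out

backend grpc_out
    # gRPC needs HTTP/2 on the server side as well
    server app 127.0.0.1:50051 proto h2
```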

Hopefully having a sample application locally makes it easier for you
to look at the raw traffic, and trace HAProxy itself.

Please let me know if I can help out in any other way!



Hi,

First of all, thanks for your reproducer. It is really helpful.

After a discussion with Willy, we've hopefully found a way to fix the issue by 
delaying detection of the server abort on the request processing side when there 
is a response to forward to the client. It should do the trick in your case and 
it should be safe.


However, the fix remains sensitive. It is really hard to be sure it will not 
introduce a regression. The worst case would be a blocked session or a loop 
caused by an unhandled event.


Thus it would be good if you could test it on your side, if possible. The patch is 
attached. It can be applied on top of 2.9 or 2.8. Is this possible for you?
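
For what it's worth, applying and building the attached patch could look like this (a sketch; the patch file name is hypothetical and the build options should match whatever you normally use):

```
git am 0001-BUG-MEDIUM-http-ana-Try-to-handle-response-before-ha.patch
make TARGET=linux-glibc USE_OPENSSL=1
```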


--
Christopher Faulet
From 04892caae72eb13605e4a32b4a182ec22fcc30bf Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Thu, 14 Sep 2023 11:12:32 +0200
Subject: [PATCH] BUG/MEDIUM: http-ana: Try to handle response before handling
 server abort

In the request analyser responsible for forwarding the request, we try to detect
a server abort in order to stop the request forwarding. However, we must be
careful not to block the response processing, if any. Indeed, it is possible to
get the response and the server abort at the same time. In this case, we must
try to forward the response to the client first.

So, to fix the issue, in the request analyser we no longer handle the server
abort if the response channel is not empty. In the end, the response
analyser is able to detect the server abort if it is relevant. Otherwise,
the stream will be woken up after the response forwarding and the server
abort should be handled at this stage.

This patch should be backported as far as 2.7 only, because the risk of
breakage is high. And it is probably a good idea to wait a bit before
backporting it.
---
 src/http_ana.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/src/http_ana.c b/src/http_ana.c
index 819472c17..2d2ce90d1 100644
--- a/src/http_ana.c
+++ b/src/http_ana.c
@@ -985,8 +985,12 @@ int http_request_forward_body(struct stream *s, struct channel *req, int an_bit)
 
 	if ((s->scb->flags & SC_FL_SHUT_DONE) && co_data(req)) {
 		/* request errors are most likely due to the server aborting the
-		 * transfer. */
-		goto return_srv_abort;
+		 * transfer. But handle server aborts only if there is no
+		 * response. Otherwise, give a chance to forward the response
+		 * first.
+		 */
+		if (htx_is_empty(htxbuf(&s->res.buf)))
+			goto return_srv_abort;
 	}
 
 	http_end_request(s);
@@ -1023,8 +1027,13 @@ int http_request_forward_body(struct stream *s, struct channel *req, int an_bit)
 
  waiting:
 	/* waiting for the last bits to leave the buffer */
-	if (s->scb->flags & SC_FL_SHUT_DONE)
-		goto return_srv_abort;
+	if (s->scb->flags & SC_FL_SHUT_DONE) {
+		/* Handle server aborts only if there is no response. Otherwise,
+		 * give a chance to forward the response first.
+		 */
+		if (htx_is_empty(htxbuf(&s->res.buf)))
+			goto return_srv_abort;
+	}
 
 	/* When TE: chunked is used, we need to get there again to parse remaining
 	 * chunks even if the client has closed, so we don't want to set CF_DONTCLOSE.
-- 
2.41.0