Hi,
HAProxy 2.3-dev6 was released on 2020/10/10. It added 141 new commits
after version 2.3-dev5.
Around 20 bugs were fixed since dev5, hopefully we added fewer! A nice set
of build cleanups on BSD platforms was brought by Brad Smith, lifting their
feature support to match the currently supported versions. We're seeing
accept4(), closefrom() or getaddrinfo() appear on some of them, and
DragonFlyBSD made its entrance into the party.
Amaury joined the team and started by extending the stats so that various
modules can now register their own counters. For example it will now be
possible for the various muxes (H1, H2, FCGI), Lua, or the SSL layer to
register their own stats on proxies, servers, or listeners. Till now it was
particularly difficult because these parts are optional in the build
process, and reserving some entries for them in the stats structures as
well as some fields in the output would be quite cumbersome. Now the idea
is that in the stats dump there is a delimiter ("-") after which the stats
fields are not necessarily stable across restarts. As such, simple
monitoring systems which look at the core stats whose field positions are
documented in management.txt can stop on "-" and the smarter ones which can
match a column according to its name can get everything. For now no new
stats were added but that will allow anyone to much more easily add some
over time (typically I'm really missing H2 stats right now). In addition to
this he implemented support for stats domains. Each domain corresponds to
a certain type of stats: the default domain covers proxies, and a DNS
domain was added. We could imagine adding peers, SPOE, Lua etc in the
future, and maybe even more (polling, sched, threads etc for example).
Overall I really count on
this to make sure there is no excuse anymore for not adding stats to a new
subsystem being developed.
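For monitoring tools the practical consequence is simple: parse fields up
to the delimiter and you keep a stable schema. Here is a minimal sketch in
Python (the CSV layout and the "h2_total_streams" field name are invented
for illustration; only the idea of stopping at the "-" column comes from
the description above):

```python
import csv
import io

def stable_fields(csv_dump: str) -> list[dict]:
    """Parse a stats CSV dump, keeping only the fields before the "-"
    delimiter, whose positions are stable across restarts."""
    text = csv_dump.replace("# ", "", 1)  # strip the leading comment marker
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    cut = header.index("-")  # a simple monitoring parser stops here
    return [dict(zip(header[:cut], row[:cut])) for row in reader if row]

# Hypothetical dump: core fields, then "-", then module-registered fields.
dump = "# pxname,svname,scur,smax,-,h2_total_streams\nfront,FRONTEND,3,10,-,42\n"
print(stable_fields(dump))
```

A name-matching parser would instead index every column by its header name
and happily read past the delimiter.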
Emeric completed the new syslog load balancing feature. Now log-forward
supports receiving TCP syslog in addition to UDP or UNIX logs, and can
forward them to servers using the same or different protocols. And since this
is done using the standard communication layers, the usual bind options
also apply. As such we can for example receive TCP logs over SSL with
cert-based client authentication and forward them to a pair of local UDP
servers each taking 50% of the traffic and duplicate that to another TCP
server using a different format. I'm pretty sure that some requests to
turn these logs to other formats (like JSON) will soon come :-)
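As a sketch of what such a setup could look like (section names, addresses
and certificate paths are made up for the example, and the exact keywords
should be double-checked against the 2.3 documentation):

```
ring tcplogs
    description "full copy sent to a remote TCP syslog server"
    format rfc5424
    size 32768
    timeout connect 5s
    timeout server 10s
    server remote-syslog 192.168.0.10:601

log-forward syslog-in
    # TCP syslog over TLS with certificate-based client authentication
    bind :6514 ssl crt /etc/haproxy/logs.pem ca-file /etc/haproxy/clients-ca.pem verify required
    # plain UDP syslog on the standard port
    dgram-bind :514
    # two local UDP servers, each taking 50% of the messages
    log 127.0.0.1:10001 sample 1:2 local0
    log 127.0.0.1:10002 sample 2:2 local0
    # and a duplicate of everything to the TCP ring above
    log ring@tcplogs local0
```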
While trying to clean up the connection layers, Christopher identified a
long-time issue with our abuse of the "tcp-request content" rule sets. It
revolves around the track-sc feature. A long time ago, when TCP was always
processed before HTTP, it used to be the only way to perform tracking.
When keep-alive started to appear we had to make a choice, and in order
to maintain compatibility with L7 fetches that appeared here and there in
TCP rules, it was decided that "tcp-request content" is per-request in HTTP,
so that these rulesets still see the request before HTTP processing starts.
But later with the introduction of muxes required to support H2, these TCP
rules started to become really ambiguous because they only see already valid,
parsed requests and not incomplete nor invalid ones. Worse, in H2, we have
to cheat and re-encode them in H1 for the time needed to analyse them. Now
that there are http-request rule sets, all these hacks make no sense anymore,
but they've been kept for compatibility with old configs. And sadly our
fingers have been trained to type them, even if we know they won't see
certain things and will not behave similarly in H1 and H2 at all. Now all
the hacks added to continue to support them are causing quite some trouble
in the architecture. To give just one example, when the H1 mux detects a
bad request, it has to instantiate a stream just to execute them and emit
the error! The problem in fact only exists when such requests are used in
an HTTP proxy for things that do not depend on HTTP. For this reason a new
warning was added for this case, which recommends migrating the rule to
"tcp-request session" instead, indicating that the current behavior is not
reliable and no longer guaranteed for the long term.
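To illustrate the kind of migration the warning suggests (the frontend
name, certificate path and stick-table parameters are invented for the
example), a rule that only tracks the source address and does not depend
on HTTP contents can move from a content rule to a session rule:

```
frontend fe_web
    bind :443 ssl crt /etc/haproxy/site.pem
    stick-table type ip size 100k expire 30s store conn_rate(10s)

    # old style: runs per-request in HTTP, behaves differently in H1 and H2
    #tcp-request content track-sc0 src

    # recommended: track once, when the session is accepted
    tcp-request session track-sc0 src
```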
Finally, let's talk about the horror movie. I've been spending more than one
month on something I initially imagined would take only 3 days: splitting
the listeners in two parts, one for the socket layer and one for the stream
layer. The goal is to support QUIC which will have its own stack and yet
will depend on a socket that the listeners must not manipulate but must
configure. This experience was, hmmm, particular. Many parts of my body still
feel sore! It's obvious we've been slowly accumulating crap over crap for more
than a decade there, and for a good reason: listeners very rarely need to be
touched, so nobody feels like going through one month of rework when only a
1-hour hack can solve a problem. But my problem here was the t