Hi,
HAProxy 2.0.0 was released on 2019/06/16. It added 63 new commits
after version 2.0-dev7.
There were a few last-minute bug reports that started to make me worry
a bit but in the end these were nothing dramatic and quickly addressed.
Aside from the usual bug fixes, this version mostly contains some
documentation and build updates. The most visible change, which will
concern Linux users (most users in fact), is due to the removal of the
totally obsolete "linux22", "linux24", "linux24e" and "linux2628" targets
in the Makefile. Now there is a single "linux-glibc" target provided for
environments running a recent (read "still supported") Linux kernel with
a similarly recent glibc. No more worries about what was still enabled
in 2.6.28 or in your libc! As
a side effect of this, namespaces, TCP Fast Open and getaddrinfo() are now
enabled by default on this combination. The very few remaining
improvements were, in no particular order:
- a small change on the state file to use the server name only and not a
  more-or-less random combination of the server ID and name anymore,
  as people were clearly confused by this in the past and we now have
  everything needed to rely solely on names across a cluster ;
- some HTX updates to pass the H2 scheme between the two sides so that
applications that expect to be called as "http:" will receive it ;
- a new "http-request replace-uri" directive to more easily get rid of
any remaining reqrep users may still have in their configurations and
which will be removed from 2.1 ;
- report of each process' version in "show proc" on the master process
command line (nice to detect failed upgrades) ;
- 51degrees now supports HTX and provided a dummy library so that there
  is no longer any code that we cannot build in Travis, and I hope we
  can continue to enforce such a rule ;
I must say I am very happy with this release, especially with how
smoothly things went when approaching the release over the last few
weeks. Contributors were reasonable in general, refraining from sending
risky patches, and testers provided great coverage and reported
interesting issues. Seeing the type of issues we've had lately is very
encouraging and is what made me want to release now without waiting any
longer. Thanks to all these improvements,
I think we now have the cleanest version ever issued at the dot-zero release.
I sincerely hope new versions will continue on this trend that benefits a
lot from the new development cycle.
For those who haven't followed the development cycle closely, I'll try to
quickly summarize the changes since the previous LTS version (1.8). As
most of you know, 1.9 will not be maintained for a long time and should
mostly be seen as a technological preview or technical foundation for 2.0.
Thus, here is what was added since 1.8 :
- a new internal native HTTP representation called HTX, which was
  already in 1.9 and is now enabled by default in 2.0 ;
- end-to-end HTTP/2 support including trailers and continuation frames,
as needed for gRPC ; HTTP/2 may also be upgraded from HTTP/1.1 using
the H2 preface;
- server connection pooling and more advanced reuse, with ALPN protocol
negotiation (already in 1.9) ;
- layer 7 retries, allowing the use of 0-RTT and TCP Fast Open towards
  the servers as well as on the frontend ;
- much more scalable multi-threading, which is even enabled by default on
platforms where it was successfully tested ; by default, as many threads
are started as the number of CPUs haproxy is allowed to run on. This
removes a lot of configuration burden in VMs and containers ;
- automatic maxconn setting for the process and the frontends, directly
based on the number of available FDs (easier configuration in containers
and with systemd) ;
- logging to stdout for use in containers and systemd (already in 1.9).
Logs can now provide micro-second resolution for some events ;
- peers now support SSL, declaration of multiple stick-tables directly in
the peers section, and synchronization of server names, not just IDs ;
- In master-worker mode, the master process now exposes its own CLI and
  can communicate with all other processes (including the stopping ones),
  even allowing one to connect to their CLI and check their state. It is also
possible to start some sidecar programs and monitor them from the master,
and the master can automatically kill old processes that survived too
many reloads ;
- the incoming connections are load-balanced between all threads depending
on their load to minimize the processing time and maximize the capacity
(already in 1.9) ;
- the SPOE connection load-balancing was significantly improved in order
to reduce high percentiles of SPOA response time (already in 1.9) ;
- the "random" load balancing algorithm and a power-of-two-choices variant
were introduced ;
- statistics improvements with per-thread counters for certain things, and
a prometheus exporter for all o