Re: [ANNOUNCE] haproxy-2.2.18

2021-11-08 Thread Jim Freeman
Great to hear - thanks !

On Sat, Nov 6, 2021 at 12:58 AM Vincent Bernat  wrote:

>  ❦  5 November 2021 17:05 -06, Jim Freeman:
>
> > Might this (or something 2.4-ish) be heading towards bullseye-backports ?
> > https://packages.debian.org/search?keywords=haproxy
> > https://packages.debian.org/bullseye-backports/
>
> 2.4 will be in bullseye-backports.
> --
> Don't patch bad code - rewrite it.
> - The Elements of Programming Style (Kernighan & Plauger)
>


Re: [ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Vincent Bernat
 ❦  5 November 2021 17:05 -06, Jim Freeman:

> Might this (or something 2.4-ish) be heading towards bullseye-backports ?
> https://packages.debian.org/search?keywords=haproxy
> https://packages.debian.org/bullseye-backports/

2.4 will be in bullseye-backports.
-- 
Don't patch bad code - rewrite it.
- The Elements of Programming Style (Kernighan & Plauger)



Re: [ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Jim Freeman
Might this (or something 2.4-ish) be heading towards bullseye-backports ?
https://packages.debian.org/search?keywords=haproxy
https://packages.debian.org/bullseye-backports/

Thanks,
...jfree

On Fri, Nov 5, 2021 at 8:51 AM Christopher Faulet 
wrote:

> Hi,
>
> HAProxy 2.2.18 was released on 2021/11/04. It added 66 new commits
> after version 2.2.17.
>
...


[ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Christopher Faulet

Hi,

HAProxy 2.2.18 was released on 2021/11/04. It added 66 new commits
after version 2.2.17.

After 2.3, it is 2.2's turn. This one contains almost the same fixes,
accumulated over the last two months. Willy already did the hard part of
describing them, so I'm shamelessly stealing everything from the 2.3.15
announcement:

  - if an HTTP/1 message was blocked during analysis, waiting for more
    room, it could sometimes remain stuck indefinitely, leaving a few
    never-expiring entries in "show sess".
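
    For reference, these entries can be inspected on the CLI socket. A
    quick way to do it, assuming the stats socket is bound at
    /var/run/haproxy.sock (adjust the path to your setup):

        $ echo "show sess" | socat stdio /var/run/haproxy.sock

    A stuck stream showed up as an entry whose age kept growing and that
    never went away.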

  - A very old bug was fixed in the Lua part. The wrong function was used
    to start Lua tasks, leading to a process freeze if the call was
    performed while the internal time was wrapping, a one-millisecond
    window that opens every 49.7 days. During this exact millisecond, a
    Lua task could be queued with no expiration date, preventing all
    subsequent timers from being seen as expired.
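
    For the curious: the internal time is a 32-bit millisecond tick, and
    2^32 ms is about 49.71 days, hence the once-per-49.7-days window. A
    minimal sketch of a wrap-safe expiration test (hypothetical code, not
    HAProxy's actual ticks API):

        #include <stdint.h>

        /* Wrap-safe: the unsigned subtraction wraps together with the
         * counters, so the test stays correct across the 2^32 ms
         * rollover, where a naive 'expire <= now' comparison breaks. */
        static inline int tick_is_expired(uint32_t expire, uint32_t now)
        {
            return (int32_t)(expire - now) <= 0;
        }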

  - Some bugs were fixed in filter management, to properly handle client
    aborts and to make sure allocated filters are always released when a
    stream is released.

  - The LDAP health-check was fixed to make it compatible with Active
    Directory servers. The response parsing was improved to also support
    servers using multi-byte length-encoding. Active Directory servers
    seem to systematically encode message and element lengths on 4 bytes,
    while other servers use 1-byte length-encoding whenever possible. 1-,
    2- and 4-byte length-encodings are now supported, which should be
    good enough to enable LDAP health-checks on Active Directory servers.
  - The build system was improved in many ways. Several -Wundef warnings
were fixed.
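
    As a reminder, -Wundef fires when an undefined macro is evaluated in
    an #if expression, where it silently counts as 0:

        /* -Wundef warns here if USE_FOO is not defined anywhere,
         * since the undefined identifier silently evaluates to 0: */
        #if USE_FOO
        #endif

        /* quiet and explicit: */
        #if defined(USE_FOO) && USE_FOO
        #endif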

  - The HTTP "TE" header is now sanitized when a request is sent to a
    server: only the "trailers" token is forwarded. This is mandatory
    because HAProxy only understands chunked encoding; other transfer
    encodings are not supported.
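
    For example, a client sending "TE: gzip, trailers" now has the header
    rewritten to "TE: trailers" before the request is forwarded. A rough
    sketch of the idea on a plain string (HAProxy's real code works on
    its internal HTX representation, not on strings):

        #include <stddef.h>
        #include <strings.h>

        /* Return "trailers" if that token is present in the TE header
         * value, or NULL if the header should be dropped entirely. */
        static const char *sanitize_te(const char *value)
        {
            const char *p = value;

            while (*p) {
                while (*p == ' ' || *p == ',')
                    p++;
                if (strncasecmp(p, "trailers", 8) == 0 &&
                    (p[8] == 0 || p[8] == ',' || p[8] == ' ' || p[8] == ';'))
                    return "trailers";
                while (*p && *p != ',')   /* skip to the next token */
                    p++;
            }
            return NULL;
        }

    So sanitize_te("gzip, trailers") yields "trailers", while
    sanitize_te("gzip") yields NULL and the header is removed.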

  - A health-check bug was fixed for the case where a sample fetch
    depending on the execution context was used in a tcpcheck ruleset
    defined in a defaults section.

  - The evaluation of tcp-request and tcp-response content rules is now
    interrupted if a read error or the end of input is detected on the
    corresponding channel. This change fixes a known bug in HAProxy 2.3
    and prior, which does not seem to affect 2.4.
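
    For context, these are the rules used in content inspection setups
    such as this one (illustrative configuration):

        frontend ssl_in
            bind :443
            tcp-request inspect-delay 5s
            tcp-request content accept if { req.ssl_hello_type 1 }
            default_backend be_ssl

    Previously, a read error occurring during the inspect-delay did not
    interrupt the rule evaluation.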

  - resolvers: there were a large number of structural issues in this
    code and, quite frankly, we're not proud of the solutions, but it's
    impossible to do anything more elegant in the current state without
    a major rewrite. What matters here is that all race conditions are
    addressed and that the code works reliably. While the 2.5 fixes add
    a lookup tree that brings significant CPU savings on SRV records,
    that code was not backported because it carries further changes that
    do not seem necessary in the current situation. We got confirmation
    from one of the reporters that the issue is now fixed.

  - an interesting bug in the ring API caused the boundary checks for the
    wrapping at the end of the buffer to be shifted by one in both the
    producer and the consumer; the two errors cancel each other out and
    are not observable... until the byte after the buffer is not mapped
    or belongs to another area. One crash was encountered on boot (since
    startup messages are duplicated into a ring for later retrieval), and
    it is possible that those sending logs over TCP have faced it as
    well; otherwise it's extremely unlikely to be observed outside of
    these use cases.
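
    The general shape of the bug, on a simplified circular-buffer cursor
    (hypothetical code, not the actual ring implementation):

        #include <stddef.h>

        /* Advance a cursor one byte within a circular buffer of <size>
         * bytes. The buggy version tested 'ptr > buf + size', letting
         * ptr == buf + size escape the wrap: the next access then hits
         * the byte just past the buffer, which is harmless while that
         * byte happens to be mapped, and crashes once it is not. */
        static char *ring_next(char *buf, size_t size, char *ptr)
        {
            ptr++;
            if (ptr >= buf + size)
                ptr = buf;
            return ptr;
        }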

  - using the tarpit could lead to head-of-line blocking of an H2
    connection because the pending data were not drained. In other
    protocols, the presence of these pending data could cause a wakeup
    loop between the mux and the stream, which usually ended with the
    process being detected as faulty and killed by the safety checks.
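
    As a reminder, the tarpit is typically used to slow down abusive
    clients, along these lines (illustrative configuration):

        frontend web
            bind :80
            timeout tarpit 10s
            http-request tarpit if { src -f /etc/haproxy/abusers.lst }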

  - the h2spec tests in the CI were regularly failing on a few tests
    expecting HTTP/2 GOAWAY frames that were in fact sent (they were even
    visible in strace). The problem was that we didn't perform a graceful
    shutdown, which copes badly with bidirectional communications: unread
    pending data cause the connection to be reset and the frame to be
    lost. This was addressed by performing a clean shutdown. It's
    unlikely that anyone ever noticed this, given that it essentially
    happens when communication errors are reported (i.e. when the client
    has little reason to complain).
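
    The underlying TCP behaviour is easy to reproduce: calling close() on
    a socket that still holds unread incoming data makes the kernel emit
    an RST, which can destroy the last outgoing frame (here the GOAWAY)
    before the peer reads it. A simplified sketch of a clean shutdown
    that avoids this:

        #include <sys/socket.h>
        #include <unistd.h>

        /* After our last frame was sent: announce the end of writes,
         * drain whatever the peer still sends until EOF, then close(),
         * so the close cannot turn into an RST. A real implementation
         * also needs a timeout so a hostile peer cannot hold the
         * connection open forever. */
        static void graceful_close(int fd)
        {
            char buf[4096];

            shutdown(fd, SHUT_WR);
            while (read(fd, buf, sizeof(buf)) > 0)
                ;
            close(fd);
        }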

  - some users complained that TLS handshakes were renewed too often in
    some cases. Emeric found that with the migration to the muxes in
    1.9-2.0, we lost the clean shutdown at the end of the connection,
    which is also what commits the TLS session cache entry. For HTTP/2
    this was addressed as a side effect of the fix above; for HTTP/1, a
    fix was produced to also perform a clean shutdown on keep-alive
    connections (it used to work correctly only for "close" connections).

  - the validity checks for sample fetch functions were