On 29.01.2019 at 06:52, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9.3 was released on 2019/01/29. It added 35 new commits after
> version 1.9.2.
> 
> It mainly addresses a few stability issues affecting versions up to 1.9.2.
> Several of these issues are only reproducible when using H2 to connect to
> the servers, and are caused by incorrect or insufficient error handling
> when facing failures during connection reuse. Another issue was a side
> effect of the fixes to the mailers (which still use the checks
> infrastructure) that resulted in a crash when using agent-check. A final
> minor fix to the checks addressed a timeout issue, and checks are
> expected to be in better shape now.
> 
> Another issue was reported in the way our SSL stack deals with the
> KeyUpdate messages that are part of TLS 1.3. These were mistaken for
> renegotiation attempts and dropped, causing communication issues with
> Chrome when it started making use of them. Apparently we were not the
> only ones affected; it's a side effect of reusing a mechanism that has
> long had to be disabled everywhere. The issue is now addressed, and it's
> important that distros using OpenSSL 1.1.1 update their packages to get
> this part fixed, so that we don't leave early bugs on the net which
> prevent security features from being used reliably. This patch was also
> backported into the 1.8 branch and will be present in the next 1.8
> release.
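> 
> A quick way to exercise this part of the fix (a minimal sketch: the
> address is a placeholder, and it assumes OpenSSL 1.1.1's s_client) is
> to open a TLS 1.3 connection and type 'K' followed by Enter, which
> makes s_client send a KeyUpdate message and request one back; a fixed
> haproxy keeps the connection alive instead of dropping it:
> 
>     openssl s_client -connect haproxy.example.com:443 -tls1_3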
> 
> Among the less important issues, tighter control over stream limits is
> now enforced on outgoing H2 connections. We used to observe batches of
> errors when the server refused too-high stream IDs after sending a
> GOAWAY; now we can react faster. In addition, to avoid this situation
> entirely (nginx closes by default after 1000 streams over the same
> connection), we've added a "max-reuse" server parameter indicating how
> many times a connection may be reused. For example, setting it to 990
> is enough to always stop reusing a connection before nginx sends its
> GOAWAY.
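> 
> As a minimal illustration (the backend name and address below are
> placeholders, and "option http-use-htx" reflects that H2 to servers
> requires HTX mode in 1.9):
> 
>     backend h2_servers
>         mode http
>         option http-use-htx
>         # stop reusing each connection before nginx's default limit
>         # of 1000 streams per connection is reached
>         server nginx1 192.0.2.10:443 ssl verify none alpn h2 max-reuse 990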
> 
> The H2 mux was not respecting the reserve in HTX mode, making it
> impossible to manipulate headers and causing some request or response
> errors. Some other small issues affecting the reserve size in HTX were
> addressed as well, though the details of some of them are now a bit
> foggy to me.
> 
> That's about all for this release. I still have some pending fixes that
> I preferred to delay a bit and that I'll backport for the next 1.9:
>   - make outgoing connection reuse failures fail more gracefully and
>     support a retry; we have everything needed for this, it just
>     requires a few changes in the connection setup code that I didn't
>     feel bold enough to integrate into this release.
> 
>   - H2 will check that the content-length header matches the amount of
>     DATA (standards compliance)
> 
>   - H2 currently doesn't use the server's advertised
>     MAX_CONCURRENT_STREAMS setting and only uses its global one, but
>     it's not very complicated to address. I expect that we may run into
>     this sooner or later.
> 
>   - there's this ":authority" header field missing from H2 requests that
>     we should apparently add when upgrading H1 to H2.
> 
>   - regarding the reported issue of some large object transfers over H2
>     from certain clients being truncated during reloads, I brought the
>     issue to the IETF HTTP working group. Some members gave me examples
>     showing that my initial idea of watching WINDOW_UPDATE messages will
>     not work. However, I managed to design another solution that I will
>     experiment with soon in 2.0-dev. If it ends up working well enough,
>     we'll backport it to 1.9.
> 
> Last, if you'd like to contribute but don't know where to start, please
> have a look at the issue tracker (see the URL below), browse the bugs,
> and if you think you can work on one of them, just mention it in the
> issue and propose a patch.
> 
> Please find the usual URLs below:
>    Site index       : http://www.haproxy.org/
>    Discourse        : http://discourse.haproxy.org/
>    Slack channel    : https://slack.haproxy.org/
>    Issue tracker    : https://github.com/haproxy/haproxy/issues
>    Sources          : http://www.haproxy.org/download/1.9/src/
>    Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>    Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>    Changelog        : http://www.haproxy.org/download/1.9/src/CHANGELOG
>    Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

The Docker images have also been updated:

https://hub.docker.com/r/me2digital/haproxy19
https://hub.docker.com/r/me2digital/haproxy-19-boringssl

Both show some errors with `make reg-tests`; I think this could be a
problem with containerized testing (a reproduction sketch follows the
job logs below).
Does anyone run haproxy in a container with H2?

###
https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/152750687

Testing with haproxy version: 1.9.3
#    top  TEST ./reg-tests/connection/b00000.vtc FAILED (8.793) exit=2
#    top  TEST ./reg-tests/http-messaging/h00002.vtc FAILED (0.749) exit=2
2 tests failed, 0 tests skipped, 32 tests passed


https://gitlab.com/aleks001/haproxy19-centos/-/jobs/152883903

Testing with haproxy version: 1.9.3
#    top  TEST ./reg-tests/http-messaging/h00002.vtc FAILED (0.737) exit=2
1 tests failed, 0 tests skipped, 33 tests passed
#####
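
For reference, a sketch of how the tests are run in the images (the
image name is a placeholder, and this assumes varnishtest/vtest is
installed in the container):

    docker run --rm -it my/haproxy-build sh -c \
        'cd /usr/src/haproxy && make reg-tests'

A single failing test can presumably be replayed through the wrapper
script, e.g.:

    ./scripts/run-regtests.sh reg-tests/http-messaging/h00002.vtc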

> Willy

Regards
Aleks

> ---
> Complete changelog:
> Christopher Faulet (4):
>       BUG/MINOR: check: Wake the check task if the check is finished in wake_srv_chk()
>       BUG/MINOR: proto-htx: Return an error if all headers cannot be received at once
>       BUG/MEDIUM: mux-h2/htx: Respect the channel's reserve
>       BUG/MINOR: mux-h1: Apply the reserve on the channel's buffer only
> 
> Dirkjan Bussink (1):
>       BUG/MEDIUM: ssl: Fix handling of TLS 1.3 KeyUpdate messages
> 
> Jérôme Magnin (1):
>       BUG/MINOR: server: don't always trust srv_check_health when loading a server state
> 
> Miroslav Zagorac (1):
>       BUG/MINOR: spoe: corrected fragmentation string size
> 
> Olivier Houchard (3):
>       BUG/MEDIUM: servers: Make assign_tproxy_address work when ALPN is set.
>       BUG/MEDIUM: connections: Add the CO_FL_CONNECTED flag if a send succeeded.
>       BUG/MEDIUM: servers: Attempt to reuse an unfinished connection on retry.
> 
> PiBa-NL (1):
>       REGTEST: checks basic stats webpage functionality
> 
> Uman Shahzad (1):
>       BUG/MINOR: startup: certain goto paths in init_pollers fail to free
> 
> Willy Tarreau (23):
>       BUG/MEDIUM: checks: fix recent regression on agent-check making it crash
>       DOC: mention the effect of nf_conntrack_tcp_loose on src/dst
>       BUG/MINOR: mux-h1: avoid copying output over itself in zero-copy
>       BUG/MAJOR: mux-h2: don't destroy the stream on failed allocation in h2_snd_buf()
>       BUG/MEDIUM: backend: also remove from idle list muxes that have no more room
>       BUG/MEDIUM: mux-h2: properly abort on trailers decoding errors
>       MINOR: h2: declare new sets of frame types
>       BUG/MINOR: mux-h2: CONTINUATION in closed state must always return GOAWAY
>       BUG/MINOR: mux-h2: headers-type frames in HREM are always a connection error
>       BUG/MINOR: mux-h2: make it possible to set the error code on an already closed stream
>       BUG/MINOR: hpack: return a compression error on invalid table size updates
>       MINOR: server: make sure pool-max-conn is >= -1
>       BUG/MINOR: stream: take care of synchronous errors when trying to send
>       BUG/MINOR: mux-h2: always check the stream ID limit in h2_avail_streams()
>       BUG/MINOR: mux-h2: refuse to allocate a stream with too high an ID
>       BUG/MEDIUM: backend: never try to attach to a mux having no more stream available
>       MINOR: server: add a max-reuse parameter
>       MINOR: mux-h2: always consider a server's max-reuse parameter
>       DOC: nbthread is no longer experimental.
>       BUG/MINOR: listener: always fill the source address for accepted socketpairs
>       BUG/MINOR: mux-h2: do not report available outgoing streams after GOAWAY
>       BUG/MINOR: task: fix possibly missed event in inter-thread wakeups
>       BUG/MEDIUM: backend: always call si_detach_endpoint() on async connection failure
> 
> ---
> 

