Hi.

On 12.09.2018 at 19:11, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 1.9-dev2 was released on 2018/09/12. It added 199 new commits
> after version 1.9-dev1.
> 
> Let's be clear about this one : it's mostly aimed at developers to resync
> their ongoing work. We've changed a number of pretty sensitive things and
> definitely expect to spot some interesting bugs starting with "this is
> impossible" and ending with "I didn't remember we supported this". Warning
> given :-)
> 
> Since 1.9-dev1, a number of additional changes were merged. This is always
> a good sign, as it indicates that the developers do not intend to
> significantly modify their design anymore. We're still not close to a
> release though, but
> the steady efforts are paying off.
> 
> Among the recent changes (forgive me if I forget anyone) that should be
> user-visible, I can list :
>   - removal of the synchronization point in favour of a thinner rendez-vous
>     point. This is used to propagate server status changes. It used to
>     induce huge CPU consumption on all threads when servers were changing
>     very often. Now we don't need to wake the sleeping threads up anymore
>     so the health checks are much less noticeable.
> 
>   - added support for the new keyword "proto" on "bind" and "server" lines.
>     This makes it possible to force the application level protocol (which is
>     normally negotiated via ALPN). We temporarily use it to develop the new
>     HTTP/1 mux and it is also useful to process clear-text HTTP/2. For
>     example it is possible to strip TLS using front servers and then forward
>     the traffic to a single listener for HTTP processing (see the example
>     after this list).
> 
>   - the server queues used to be ordered only on request arrival time. Now
>     they also support priority classes and time-based offsets. This means it
>     is possible to decide which of the requests pending in a server queue
>     will be picked first, based on its priority. The classes are always
>     considered first, which means that all requests in a low-numbered class
>     will be processed before any request in a high-numbered class. Then the
>     offsets are applied to the queuing time, so it is possible to ensure,
>     for example, that requests matching a given criterion will experience
>     250ms less queuing time than other requests. This can be useful to
>     deliver javascript/CSS before images, or to boost premium level users
>     compared to others (see the configuration sketch after this list).
> 
>   - a few new sample fetch functions were added to determine the number
>     of available connections on a server or on a backend.
> 
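> For example, a listener accepting clear-text HTTP/2 by forcing the h2 mux
> could look roughly like the untested sketch below (addresses are made up;
> "proto" takes the name of one of the multiplexer protocols reported by
> "haproxy -vv") :
> 
>       # clear-text HTTP/2 listener, fed for instance by TLS-stripping front
>       # proxies; without "proto h2" the protocol would have to be negotiated
>       # via ALPN on a TLS bind
>       frontend fe_h2c
>           mode http
>           bind :8082 proto h2
>           default_backend be_app
> 
>       backend be_app
>           mode http
>           server app1 192.168.0.10:8080
> 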
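> Here is also a rough and untested sketch of the queue priority directives
> (the exact directive names may still change before the final release, and
> the offset is assumed to be expressed in milliseconds) :
> 
>       backend be_app
>           mode http
>           acl is_static  path_end .js .css
>           acl is_premium req.cook(tier) -m str premium
>           # lower classes are dequeued before higher ones
>           http-request set-priority-class int(1)  if is_premium
>           http-request set-priority-class int(10) unless is_premium
>           # negative offset : about 250ms less effective queuing time
>           http-request set-priority-offset int(-250) if is_static
>           server app1 192.168.0.10:8080 maxconn 100
> 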
> The rest is mostly infrastructure changes which are not (well, should not
> be) user-visible :
>   - the logs can now be emitted without a stream. This will be required to
>     emit logs from the muxes. The HTTP/2 mux now emits a few logs on some
>     rare error cases.
> 
>   - make error captures work outside of a stream. The purpose is to be able
>     to continue to capture bad messages at various protocol layers. Soon we
>     should get the ability to capture invalid H2 frames and invalid HPACK
>     sequences just like we used to do at the HTTP level. This will be needed
>     to preserve this functionality when HTTP/1 processing moves down to the
>     mux. It could possibly even be used in other contexts (peers or SPOE
>     maybe ?).
> 
>   - the error captures are now dynamically allocated. For now it remains
>     simple but the purpose is to have the ability to keep only a limited
>     number of captures (total and per proxy) in order to significantly
>     reduce the memory usage for those with tens of thousands of backends,
>     and at the same time to maintain several logs per proxy, and not just
>     the last one for each direction.
> 
>   - the master-worker model has improved quite a bit. Now instead of using
>     pipes between the master and the workers, it uses socket pairs. The
>     goal here is to allow the master to better communicate with the workers.
>     More specifically, a second patch set introducing support for
>     communicating via socket pairs over UNIX sockets will be usable in
>     conjunction with this, allowing the CLI to bounce from the master to
>     the workers (not finished yet; see the sketch after this list).
> 
>   - the HTTP definitions and semantics code were moved to a new file,
>     http.c, which is version-agnostic. Thanks to this, proto_http.c now
>     mostly deals with the analysers and sample processing. This is aimed
>     at simplifying the porting of the existing code to the new model.
> 
>   - the new HTTP/1 parser that was implemented to convert HTTP/1 responses
>     to HTTP/2 was completed so that it will be reusable by the upcoming
>     HTTP/1 mux.
> 
>   - the connection scheduling was reworked to feature true async-I/O at
>     every level : till now, when an upper layer wanted to send some data
>     (let's say a response to a client), it had to enable data polling on
>     the underlying conn_stream which itself requested polling of the FD
>     in the fd_cache from the mux, resulting in this FD being seen as
>     active, the send() callback being called, and in turn the snd_buf()
>     function of the data layer being called to deliver the data to the
>     lower level. It was an insane amount of round trips, especially for
>     some protocols like H2 where the connection state is irrelevant to
>     this. Now a direct snd_buf() attempt is performed, and if it fails,
>     the caller is subscribed to the lower layer to be called when it
>     becomes possible. This way only the required number of layers are
>     crossed and woken up. It also allows delivering more fine-grained
>     information between them (e.g. using flags to filter on certain events).
>     Due to the very long history of working in the previous way (since
>     version 1.0), it would not be surprising if this change wakes up some
>     long-buried zombie bugs. So any CLOSE_WAIT or CPU loop report would
>     be welcome in this case.
> 
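> Regarding the master-worker changes above : nothing changes in the way the
> mode is enabled, the socket pair rework is purely internal. As a reminder,
> a minimal (untested) setup looks like this, either with the "-W" command
> line option or in the configuration :
> 
>       global
>           master-worker
>           # the regular stats socket remains the CLI entry point for now;
>           # bouncing from the master to the workers is not finished yet
>           stats socket /var/run/haproxy.sock mode 600 level admin
> 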
> There's also one initially announced feature which will probably not make
> it into 1.9 : the SSL certificate update from the CLI. After looking deeply
> at the impacts with Emeric and William, we figured that the only way to
> make it reliable and future-proof requires some non-trivial changes to the
> way the certificates are currently indexed. It's not an enormous change but
> one significant enough to require a full-time person for a few weeks, so
> unless someone steps up and is autonomous enough on this one we'll have to
> postpone it. Likewise, the changes that William made to load certificates
> and private keys from distinct files are supposedly mergeable but we have
> to double-check before putting ourselves into a dead end.
> 
> Now the good news for those who have made the effort to read this far is
> that we'll increase the release rate to two releases per year. The
> observation is that while the current model works reasonably well, it's
> extremely challenging to try to complete all the foundation changes before
> starting to work on the promised features which entirely depend on them,
> and once the release date approaches, we see pressure build up and heavy
> conflicts starting to appear between various branches. Additionally, we
> all know that the vast majority of users test only after a release, causing
> massive reports of issues at different layers just after the release. Last
> but not least, distros which ship with a version tend to ship with an early
> one which is not yet very stable, and this complicates their ability to
> follow fixes.
> 
> So in order to improve the situation, we'll proceed differently : we'll
> release a version around October-November as we do now, and this version
> will be focused on mostly technical stuff. Very often it will in fact
> include some optimizations that were developed in the context of the
> previous release but which were not dry enough to be merged. An example
> of this could be the lockless fd_cache changes that were merged very early
> in 1.9. And another release will happen around May for mostly functional
> changes. These ones will be much more user-visible and will present a much
> lower risk of regression. The May version will continue to be maintained
> for a long time while the November version will be short-lived (around 1
> year, mostly for advanced users).
> 
> As such, we'll release 1.9 as planned, around October-November depending
> on how things go, and we'll emit a 2.0 around May with more user-visible
> changes. 1.9 will stop getting fixes once 2.1 is released in November 2019,
> but 2.0 will continue to be maintained. This means that distros should
> definitely avoid packaging odd releases (including 1.9) and should only
> focus on even ones (starting with 2.0).
> 
> I also predict that with this improved model we'll start to see the
> emergence of topic branches, which are merged once ready. It will be less
> problematic to skip a release because it will postpone a feature for only 6
> months rather than a full year as it does now. This will cause fewer
> last-minute conflicts
> between the various activities and should result in a better overall
> stability in even versions, and a much smoother distributed development
> cycle where it's even likely that different maintainers will be able to
> emit development releases (specifically the technical ones which nobody
> thinks about doing).
> 
> Now... back to the code ;-)
> 
> Willy
> 
> Please find the usual URLs below :
>    Site index       : http://www.haproxy.org/
>    Discourse        : http://discourse.haproxy.org/
>    Sources          : http://www.haproxy.org/download/1.9/src/
>    Git repository   : http://git.haproxy.org/git/haproxy.git/
>    Git Web browsing : http://git.haproxy.org/?p=haproxy.git
>    Changelog        : http://www.haproxy.org/download/1.9/src/CHANGELOG
>    Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

A new Docker image is also available. I have also updated OpenSSL to the
1.1.1 release.

https://hub.docker.com/r/me2digital/haproxy19/

###
HA-Proxy version 1.9-dev2 2018/09/12
Copyright 2000-2018 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3

Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP       side=FE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace
###

> Willy
> ---
> Complete changelog :

[snip]
Regards
Aleks
