Thu, 16 May 2019 at 02:02, Aleksandar Lazic <al-hapr...@none.at>:

> Am 15.05.2019 um 18:52 schrieb Willy Tarreau:
> > Hi,
> >
> > HAProxy 2.0-dev3 was released on 2019/05/15. It added 393 new commits
> > after version 2.0-dev2.
> >
> > This is another huge version. Having been distracted by a number of bugs
> > lately, this one was postponed a bit too much for my taste. As usual for a
> > development version, I'll skip over the bugfixes, which are uninteresting
> > for this changelog.
> >
> > The main points of this release are :
> >   - HTX enabled by default on all proxies. The only showstopper used to
> >     be the lack of ability to upgrade from TCP to HTTP in HTX mode when
> >     branching from a TCP frontend to an HTTP backend. Since it now works
> >     there is no reason for staying in legacy mode anymore. This means
> >     that all features (backend H2, etc.) are implicitly allowed without
> >     the need for an extra option. It is still possible to disable HTX in
> >     case of regression or suspicion using "no option http-use-htx". Keep
> >     in mind that any problem ought to be reported as the intent is to
> >     remove legacy mode with 2.1, so 2.0 will be the last one supporting
> >     both modes.
>
> Yes ;-)
>
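As a side note for anyone who hits a regression while testing: based on the
option named above, my understanding is that falling back to legacy mode is a
one-liner per proxy (or in defaults). A rough, untested sketch:

```
defaults
    mode http
    # HTX is now the default; this reverts the proxies to the legacy HTTP mode
    no option http-use-htx
```
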
> >   - HTTP/2 is now supported on HTTP/1 ports (in HTX mode). Whenever the
> >     H2 preface is met on an H1 listener, the connection is automatically
> >     switched to H2.
> >
> >   - significant scheduler improvements to increase fairness between all
> >     tasks in multi-threaded mode. There used to be a situation where some
> >     tasks could starve other ones, which was observable by some CLI
> >     commands timing out too early when doing "echo foo|socat"
> >
> >   - lockup bug detection: if a task loops forever and uses all the CPU,
> >     this is a bug and haproxy will be killed. Similarly, if a task locks
> >     up for a long time, haproxy is killed. This is enabled for now in
> >     development, and maybe it will stay enabled by default after the
> >     release as it would have helped a number of users to recover faster
> >     from some annoying bugs. If you see haproxy crash in an abort() and
> >     dump a core, first you'll know you've hit a serious bug and it managed
> >     to stop it; second, keep in mind that there are developers who could
> >     be interested in knowing what was detected, so please don't erase the
> >     trace and the core immediately. I still have some watchdog code under
> >     development that is even able to detect deadlocks and crash the
> >     process in this case; I need to polish it.
> >
> >   - Layer 7 retries: <rant> many of you know my disgust for such a
> >     feature, essentially requested by incompetent admins trying to hide
> >     their horribly bogus applications and who prefer to shoot themselves
> >     in the foot instead of fixing the code, but there are a few valid
> >     (read: riskless) use cases. One of them concerns the use of TCP
> >     fastopen to connect to the servers. It is not usable without such
> >     retries. Another one concerns 0-RTT to the servers, where it's highly
> >     desirable that haproxy retries itself if the server ignores the early
> >     data. In addition to this there are some more legitimate users with
> >     known idempotent applications (static file servers and applications
> >     using replay-safe transaction numbers) where this can be
> >     understandable. The thing is that all these use cases require exactly
> >     the same mechanism. So now that this was implemented, it will also be
> >     available for those who want to do whatever and who will complain that
> >     haproxy multiplies their payment requests or kills all their servers
> >     in a domino effect. They'd rather not complain here or I may reserve
> >     them a selection of not-so-kind words. It is possible to finely
> >     enumerate the situations where a retry is permitted (see "retry-on"),
> >     and a few status codes are permitted (404 was included as this one is
> >     sometimes requested by content providers). In addition there is a new
> >     HTTP request action "disable-l7-retry" which allows preventing such
> >     retries from happening (e.g. a POST to an application not specifically
> >     designed to be replay-safe). Of course it is not enabled by
> >     default.</rant>
> >
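For those wanting to try the new retries carefully, here is a rough sketch of
how I understand "retry-on" and "disable-l7-retry" fit together (untested,
addresses and names made up; the exact event keywords are in the docs):

```
backend app
    mode http
    retries 3
    # only retry on events believed safe to replay
    retry-on conn-failure empty-response response-timeout 0rtt-rejected 404
    # and never L7-retry POSTs to an application that is not replay-safe
    http-request disable-l7-retry if METH_POST
    server s1 192.168.0.10:8080 check
```
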
> >   - TFO is now supported when talking to servers. It is one of the
> >     positive effects of having L7 retries. Similarly, 0-RTT can now be
> >     replayed without going back to the client.
> >
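If I read this right, the server-side part is mostly a matter of server
keywords plus a 0-RTT-aware retry policy. A hedged sketch (I'm assuming the
server keywords are "tfo" and "allow-0rtt"; please check the 2.0 docs):

```
backend app
    mode http
    retries 3
    retry-on conn-failure 0rtt-rejected
    # "tfo" asks for TCP Fast Open towards the server, "allow-0rtt" sends
    # early data; both rely on L7 retries if the server rejects them
    server s1 192.168.0.10:443 ssl verify none tfo allow-0rtt
```
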
> >   - stick-tables can now be declared inside peers sections. Many of those
> >     using tons of stick-tables have many backends with only one
> >     stick-table line. These backends also pollute the stats. And these
> >     stick-tables have to reference a peers section to be synchronized. We
> >     figured that since it is not possible to synchronize stick-tables
> >     between multiple peers sections, it made quite some sense to be able
> >     to declare several of them directly inside peers sections so that they
> >     are easily found, automatically synchronized, and require less
> >     configuration. They will be accessible using the peers section name
> >     followed by a slash and the stick-table name.
> >
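Nice one. For reference, a minimal sketch of how I understand the new syntax
(names and sizes invented; the "table" keyword inside peers and the
"peers-name/table-name" reference are my reading of the feature):

```
peers mypeers
    peer lb1 192.168.0.1:1024
    peer lb2 192.168.0.2:1024
    # declared directly in the peers section, so it is synchronized here
    table req_rates type ip size 100k expire 10m store http_req_rate(10s)

frontend fe
    mode http
    bind :8080
    # referenced as <peers-section-name>/<table-name>
    http-request track-sc0 src table mypeers/req_rates
```
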
> >   - http-request/tcp-request action "do-resolve", which takes an
> >     argument, submits it to the DNS resolvers and sets the result back
> >     into a variable. It can be used to resolve anything on the fly. I
> >     already hear some people asking if we'll become a forward proxy; the
> >     response is "no" :-)  But Baptiste had a working demo of something
> >     like this just for fun.
> >
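A rough sketch of how I picture "do-resolve" being used, assuming the
do-resolve/set-dst combination works as described (resolver address and
variable names are made up, untested):

```
resolvers mydns
    nameserver dns1 10.0.0.1:53

frontend fe
    mode http
    bind :8080
    # resolve the Host header on the fly and store the address in a variable
    http-request do-resolve(txn.dstip,mydns,ipv4) hdr(Host),lower
    # then route the request to the resolved address
    http-request set-dst var(txn.dstip)
    default_backend be_dynamic

backend be_dynamic
    mode http
    # 0.0.0.0 placeholder: the destination address set above is used instead
    server clear 0.0.0.0:80
```
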
> >   - log sampling and load balancing. The idea is to specify ranges within
> >     a larger interval for which logs will be sent to a given server. Thus
> >     it is possible, for example, to send only 1 log every 100 to a server
> >     to perform some sampling, or to send 1/3 to log server 1, 2/3 to log
> >     server 2 and 3/3 to log server 3 and perform some log load balancing.
> >     It's likely that over the long term we could add some hashing rules so
> >     that logs belonging to the same session end up on the same log server,
> >     but one thing at a time :-)
> >
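If I understand the syntax correctly, the sampling ranges live on the "log"
lines themselves. A hedged sketch (addresses invented, untested):

```
global
    # sampling: send only 1 log out of every 100 to this server
    log 10.0.0.1:514 sample 1:100 local0
    # load balancing: spread every group of 3 logs across three servers
    log 10.0.1.1:514 sample 1:3 local0
    log 10.0.1.2:514 sample 2:3 local0
    log 10.0.1.3:514 sample 3:3 local0
```
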
> >   - it is possible to load sidecar programs from the global section using
> >     the "program" keyword in master-worker mode. They will be monitored by
> >     the master process. This is mainly aimed at simplifying some complex
> >     setups and allowing haproxy + extra components to start/stop together.
> >     For example some may want to load a syslog relay. In the very distant
> >     past we could have imagined loading stud or stunnel to offload SSL.
> >
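This one looks handy for container images. My understanding is that it ends
up as a dedicated "program" section next to global, something along these
lines (the relay binary and its path are just an example, untested):

```
global
    master-worker

program syslog-relay
    # started and monitored by the master process alongside the workers
    command /usr/sbin/syslog-ng -F
```
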
> >   - idle server connections are better controlled now so that we don't
> >     enter a situation where a single session could collect tons of them
> >     and not reuse them. Some heuristics are applied so that we give back
> >     idle connections more often.
> >
> >   - the WURFL device detection was reintroduced. The Scientiamobile team
> >     has done a pretty good job at addressing all the issues that were
> >     raised and led to their removal, so there was no reason to keep them
> >     out anymore. One nice improvement is that they provided a dummy
> >     library which allows compiling their code without any external
> >     dependency. This was the main issue developers were facing, and it
> >     turned out to be quite easy. Thus DeviceAtlas followed on the same
> >     principle and 51Degrees said they'll contribute such a thing soon as
> >     well. It will then be possible to detect internal API regressions
> >     affecting any of them during development, so that these issues should
> >     only be bad memories by now. We should even enable them in Travis
> >     builds by the way. There are still a few WURFL patches pending for
> >     review but nothing dramatic.
> >
> >   - DeviceAtlas implemented support for HTX mode, so it's already
> >     2.0-ready as well.
> >
> >   - some systemd unit file changes were brought to ease the activation of
> >     the master socket. My understanding is that it will look at a few
> >     config files to figure the options passed on the command line so it
> >     should work on multiple distros.
> >
> >   - Just like we used to rely on "hard-stop-after" to limit the number of
> >     old processes upon reload, it is now possible to limit the number of
> >     reloads a process survives (see "mworker-max-reloads") before being
> >     actively killed. Those reloading very frequently will probably like
> >     this one!
> >
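A small sketch of how I would combine this with hard-stop-after (the values
are just examples):

```
global
    master-worker
    # old workers may not linger more than 30s after a reload
    hard-stop-after 30s
    # and a worker that survived more than 3 reloads gets actively killed
    mworker-max-reloads 3
```
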
> >   - new "set-dumpable" global keyword. It tries its best to re-enable
> >     core dumps. It will do the equivalent of "ulimit -c unlimited" and
> >     of enabling dumps after setuid, which should save lots of trouble
> >     to users willing to provide some help on bug reports.
> >
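That will help with bug reports indeed. For completeness, my understanding is
that it is a single global keyword:

```
global
    # best effort to re-enable core dumps ("ulimit -c unlimited" equivalent,
    # plus keeping the process dumpable after setuid)
    set-dumpable
```
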
> >   - lots of cleanups and reorganization of the regtests. They have real
> >     names now, which makes them more convenient to manipulate, and their
> >     dependencies are cleaner as they can depend on individual build
> >     options.
> >
> >   - I discovered an old SPOA server that Thierry implemented more than a
> >     year ago, and which provides SPOA to Python and Lua programs. I could
> >     verify that it starts, so I merged it; it can be useful to a number of
> >     people, including developers who want an example of a more complex
> >     application than the basic examples.
>
> This "contrib/spoa_server" is now in this images with USE_LUA. As the
> python
> version in the makefile is 2.7 I'm not sure how difficult is it to add 2.7
> into
> CentOS and debian builds. Due to the fact that no python is by default in
> rhel8
> I will stay with lua for now.
>
> The server is installed in /usr/local/bin/spoa
>
> ```
> /usr/local/bin/spoa -h
> Usage: /usr/local/bin/spoa [-h] [-d] [-p <port>] [-n <num-workers>] -f <file>
>     -h                  Print this message
>     -d                  Enable the debug mode
>     -p <port>           Specify the port to listen on (default: 12345)
>     -n <num-workers>    Specify the number of workers (default: 5)
>     -f <file>           Specify the file which contains the processing code.
>                         This argument can be specified more than once.
> ```
>
> >   - Travis-CI integration: the patches we push are now automatically
> >     tested in about a dozen setups (OS, SSL versions) and the reg tests
> >     are run. This has already saved quite some time detecting bugs. Thanks
> >     to Ilya for working on this.
> >
> >   - addressed some build issues, mainly old AIX support and LibreSSL
> >     compatibility issues caused by their creative numbering (they pretend
> >     to be OpenSSL 2.0.0, complicating many compatibility tests). Now it
> >     should not break every morning anymore. Also some build issues of the
> >     "ist" strings affecting at least Cygwin should be addressed now (once
> >     I get a confirmation I can backport this to 1.9).
> >
> > Yes I know it's a long list. There are still a few things pending but
> > we're seeing the end of the tunnel. Some SSL layering changes that will be
> > needed for QUIC were started and are currently being finished. I really
> > want to have them in 2.0 so that we don't have two distinct architectures
> > to deal with between 2.0 (which is long-term supported) and 2.1+. Manu has
> > proposed the support of Solaris' event ports as a much better poller than
> > poll(). I reviewed it, he's doing the final polishing and it should be
> > ready soon. Some deprecated keywords which do not generate a warning
> > should be addressed as well or we'll never manage to get rid of them. I
> > know that Christopher is still addressing some HTX design concepts that
> > could make the long term maintenance much easier and that I'd rather see
> > merged early. Tim already has some patches for this. Alec Liu proposed to
> > integrate the support of SOCKS4. At first I was a bit worried but it turns
> > out the protocol could be supported in a very non-intrusive way, so if
> > it's ready in time I'm fine with integrating it. I'm aware of a few other
> > things people are working on, we'll see. I'm not disclosing them to avoid
> > putting needless pressure!
> >
> > I've also seen, based on recent reports and patch submissions, that a few
> > harmless bugs here and there might still be present, but nothing to be
> > alarmed about. Given that recently we've been working on lots of bug
> > reports and that things are starting to cool down, I consider that we're
> > getting much better.
> >
> > I'd like to emit a new -dev release next week with the rest of the pending
> > stuff, aiming at a final release by the end of this month. Please do test
> > and report issues so that we don't get all of them in the last 3 days as
> > usual. We all know releases slip a bit and I'm fine with this, but at
> > least I'd like this to be for a good reason. Oh and keep in mind, this
> > is *development*, so please be careful with it. We all really appreciate
> > seeing bugs reported on live traffic, but please don't use it as an excuse
> > for switching all your LBs to it, or it may bite you hard!
>
> ;-)
>
> > I'm going to open a -next branch to collect the pending stuff for 2.1.
> > This one will periodically be rebased on top of master so that it can
> > become the next master after the release.
> >
> > Have fun!
> > Willy
> >
> > ---
> > Please find the usual URLs below :
> >    Site index       : http://www.haproxy.org/
> >    Discourse        : http://discourse.haproxy.org/
> >    Slack channel    : https://slack.haproxy.org/
> >    Issue tracker    : https://github.com/haproxy/haproxy/issues
> >    Sources          : http://www.haproxy.org/download/2.0/src/
> >    Git repository   : http://git.haproxy.org/git/haproxy.git/
> >    Git Web browsing : http://git.haproxy.org/?p=haproxy.git
> >    Changelog        : http://www.haproxy.org/download/2.0/src/CHANGELOG
> >    Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
>
> Docker images.
>
> OSSL: https://hub.docker.com/r/me2digital/haproxy20-centos
> BSSL: https://hub.docker.com/r/me2digital/haproxy20-boringssl
>
>
> Openssl build log:
> ########################## Starting vtest ##########################
> Testing with haproxy version: 2.0-dev3
> #    top  TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (2.259)
> exit=2
> 1 tests failed, 0 tests skipped, 39 tests passed
>
> https://gitlab.com/aleks001/haproxy20-centos/-/jobs/213234777
>
> Boringssl build log:
>
> ########################## Starting vtest ##########################
> Testing with haproxy version: 2.0-dev3
> #    top  TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (2.220)
> exit=2
> 1 tests failed, 0 tests skipped, 38 tests passed
>
> https://gitlab.com/aleks001/haproxy20-boringssl/-/jobs/213241924
>
> As we use the CI features of GitHub more and more, what's the community's
> opinion on using these features to create and push container images to the
> Docker registry via CI?
>


gitlab.com allows mirroring an arbitrary repo and running CI (the feature
is called "CI/CD for external repos"); the only requirement is having a
.gitlab-ci.yml.

so ... we can add a .gitlab-ci.yml to the main repo ... and use mirroring,
i.e. git.haproxy.org --> gitlab.com


>
> I'm fine with keeping it as it is, but from a project point of view it
> could be better to have everything together in one place, right?
>
> I'm also pretty curious about the first QUIC version ;-)
>
> > Willy
> > ---
> > Complete changelog :
>
> [snip]
>
> Regards
> Aleks
>
>
