`option http_proxy` DNS and HTTPS support
Hi, since haproxy now has DNS support, is it now possible to make `option http_proxy` do DNS and HTTPS? In some cases we need to let part of the requests go directly to the local network. Thanks in advance.
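For illustration, a minimal sketch of what such a setup might look like with the runtime "do-resolve" action instead of `option http_proxy` (all names and addresses here are placeholders, and this is an assumption about the intended use, not a confirmed configuration):

```haproxy
# Hypothetical sketch: resolve the Host header at request time via the
# configured resolvers and connect to the resulting address directly.
resolvers mydns
    nameserver local 192.168.0.1:53

frontend fe_proxy
    bind :3128
    # store the resolved IPv4 address in a transaction variable
    http-request do-resolve(txn.dstip,mydns,ipv4) hdr(Host),lower
    use_backend be_direct

backend be_direct
    # connect to the address resolved above; the 0.0.0.0:0 server takes
    # its address from the destination set by set-dst
    http-request set-dst var(txn.dstip)
    server clear 0.0.0.0:0
```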
Re: [ANNOUNCE] haproxy-2.0-dev3
Tim,

On 16.05.2019 at 00:32, Tim Düsterhus wrote:
> Aleks,
>
> On 15.05.19 at 22:59, Aleksandar Lazic wrote:
>> As we use more and more the CI features of github, what's the opinion of
>> the community on using these features to create and push container images
>> to the docker registry via the CI?
>>
>> I'm fine with keeping it as it is, but from a project point of view it
>> could be better to have it all together in one place, right?
>
> As an avid Docker user: I tend to absolutely avoid any Docker images that
> are not built using Docker Hub's autobuilder, because I cannot verify
> the Dockerfile myself (or cannot verify that the resulting image
> actually matches the Dockerfile). And for the images using the
> autobuilder: they are super crap more often than not.

Sorry, I don't understand this statement, what do you mean?

> I don't see any benefit whatsoever for HAProxy to provide images
> themselves. The image in the docker-official-images program is timely
> updated using a scraper [1], it is of high quality (of course) and the
> fact that it's part of the DOI program makes it highly trusted among
> Docker users. Also I keep half an eye on that image to make necessary
> adjustments.

The reason why I build the images myself is that I need newer libraries, for example OpenSSL with TLS 1.3 support, which is not part of the official build.
```
podman run -it --rm --entrypoint haproxy haproxy -vv
HA-Proxy version 1.9.7 2019/04/25 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0j 20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.0j 20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace
```

Does "they" accept pull requests which change a lot in the Dockerfile, or how do they behave?
I don't have a very good feeling about this docker-official-images program, as from my point of view docker has changed so many parts in the past (APIs, GUI, ...) that I'm not sure how they will behave in the future.

But you know I'm open for suggestions, so if we agree that the docker-official-images image is the one which the haproxy community commits to, I'm fine with it.

> Concluding: The time and effort is better spent elsewhere (i.e. in
> actually improving HAProxy itself).
>
> [1] https://github.com/docker-library/haproxy/
>
> Best regards
> Tim Düsterhus

Best regards
Aleks
Re: [ANNOUNCE] haproxy-2.0-dev3
Aleks,

On 15.05.19 at 22:59, Aleksandar Lazic wrote:
> As we use more and more the CI features of github, what's the opinion of
> the community on using these features to create and push container images
> to the docker registry via the CI?
>
> I'm fine with keeping it as it is, but from a project point of view it
> could be better to have it all together in one place, right?

As an avid Docker user: I tend to absolutely avoid any Docker images that are not built using Docker Hub's autobuilder, because I cannot verify the Dockerfile myself (or cannot verify that the resulting image actually matches the Dockerfile). And for the images using the autobuilder: they are super crap more often than not.

I don't see any benefit whatsoever for HAProxy to provide images themselves. The image in the docker-official-images program is timely updated using a scraper [1], it is of high quality (of course) and the fact that it's part of the DOI program makes it highly trusted among Docker users. Also I keep half an eye on that image to make necessary adjustments.

Concluding: The time and effort is better spent elsewhere (i.e. in actually improving HAProxy itself).

[1] https://github.com/docker-library/haproxy/

Best regards
Tim Düsterhus
Re: [ANNOUNCE] haproxy-2.0-dev3
On 15.05.2019 at 18:52, Willy Tarreau wrote:
> Hi,
>
> HAProxy 2.0-dev3 was released on 2019/05/15. It added 393 new commits
> after version 2.0-dev2.
>
> This is another huge version; having been distracted by a number of bugs
> lately, this one was postponed a bit too much for my taste. As usual for
> a development version, I'll skip over the bugfixes, which are
> uninteresting for this changelog.
>
> The main points of this release are :
>
>   - HTX enabled by default on all proxies. The only showstopper used to
>     be the lack of ability to upgrade from TCP to HTTP in HTX mode when
>     branching from a TCP frontend to an HTTP backend. Since it now works,
>     there is no reason for staying in legacy mode anymore. This means
>     that all features (backend H2 etc) are implicitly allowed without
>     the need for an extra option. It is still possible to disable HTX in
>     case of regression or suspicion using "no option http-use-htx". Keep
>     in mind that any problem ought to be reported, as the intent is to
>     remove legacy mode with 2.1, so 2.0 will be the last one supporting
>     both modes.

Yes ;-)

>   - HTTP/2 is now supported on HTTP/1 ports (in HTX mode). Whenever the
>     H2 preface is met on an H1 listener, the connection is automatically
>     switched to H2.
>
>   - significant scheduler improvements to improve fairness between all
>     tasks in multi-threaded mode. There used to be a situation where
>     some tasks could starve other ones, which was observable by some CLI
>     commands timing out too early when doing "echo foo|socat".
>
>   - lockup bug detection : if a task loops forever and uses all the CPU,
>     this is a bug and haproxy will be killed. Similarly, if a task locks
>     up for a long time, haproxy is killed. This is enabled for now in
>     development, and maybe it will stay enabled by default after the
>     release, as it would have helped a number of users to recover faster
>     from some annoying bugs. If you see haproxy crash in an abort() and
>     dump a core, first you'll know you've hit a serious bug and it
>     managed to stop it; second, keep in mind that there are developers
>     who could be interested in knowing what was detected, so please
>     don't erase the trace and the core immediately. I still have some
>     watchdog code under development that is even able to detect
>     deadlocks and crash the process in this case; I need to polish it.
>
>   - Layer 7 retries : many of you know my disgust for such a feature,
>     essentially requested by incompetent admins trying to hide their
>     horribly bogus applications and who prefer to shoot themselves in
>     the foot instead of fixing the code, but there are a few valid
>     (read: riskless) use cases. One of them concerns the use of TCP
>     Fast Open to connect to the servers; it is not usable without such
>     retries. Another one concerns 0-RTT to the servers, where it's
>     highly desirable that haproxy retries itself if the server ignores
>     the early data. In addition to this there are some more legitimate
>     users with known idempotent applications (static file servers and
>     applications using replay-safe transaction numbers) where this can
>     be understandable. The thing is that all these use cases require
>     exactly the same mechanism. So now that this was implemented, it
>     will also be available for those who want to do whatever and who
>     will complain that haproxy multiplies their payment requests or
>     kills all their servers in a domino effect. They'd rather not
>     complain here or I may reserve them a selection of not-so-kind
>     words. It is possible to finely enumerate the situations where a
>     retry is permitted (see "retry-on"), and a few status codes are
>     permitted (404 was included as this one is sometimes requested by
>     content providers). In addition there is a new HTTP request action
>     "disable-l7-retry" which allows to prevent such retries from
>     happening (e.g. POST to an application not specifically designed to
>     be replay-safe). Of course it is not enabled by default.
>
>   - TFO is now supported when talking to servers. It is one of the
>     positive effects of having L7 retries. Similarly, 0-RTT can now be
>     replayed without going back to the client.
>
>   - stick-tables can now be declared inside peers sections. Many of
>     those using tons of stick-tables have many backends with only one
>     stick-table line. These backends also pollute the stats. And these
>     stick-tables have to reference a peers section to be synchronized.
>     We figured that since it is not possible to synchronize
>     stick-tables between multiple peers sections, it made quite some
>     sense to be able to declare several of them directly inside peers
>     sections so that they are easily found, automatically synchronized,
>     and require less configuration. These ones will be accessible using
>     the peers section name followed by a slash and the stick-table name.
Re: PATCH: enable cirrus-ci (freebsd builds)
Hi Ilya,

On Thu, May 16, 2019 at 12:05:47AM +0500, Илья Шипицин wrote:
> Hello,
>
> can we enable cirrus-ci ? it's like travis-ci, it allows running freebsd
> builds

I have no opinion on it, I don't know anything about it at all, so since you appear to know what it involves, you'll have to give me some info.

Cheers,
Willy
Re: [ANNOUNCE] haproxy-1.9.8
On 13.05.2019 at 16:57, Willy Tarreau wrote:
> Hi,
>
> HAProxy 1.9.8 was released on 2019/05/13. It added 53 new commits
> after version 1.9.7.
>
> The most important bugs fall into 3 main categories here :
>
>   - a possible crash in multi-threaded mode when issuing "show map" or
>     "show acl" on the CLI in parallel to "clear map" or "clear acl" on
>     another CLI session ;
>
>   - an incorrect handling in H2 of the HTX end-of-message mark after
>     the response trailers, which can lead to an endless loop between
>     the caller seeing there's still something to send and the callee
>     seeing it cannot send this block alone. This one gave a few of us
>     some difficulties and helped us see how we can improve HTX for
>     future versions by making certain cases more straightforward.
>     Thanks to Patrick Hemmer for providing backtraces exhibiting the
>     issue.
>
>   - multiple incorrect list handlings in the H2 mux resulting in
>     endless loops for some users with large objects. The assumptions
>     that once were granted in this code evolved several times during
>     1.9-dev and have led to situations where the presence of an element
>     in the send list was not guarded anymore by some previous
>     conditions. Multiple iterations of fixes were only pushing the
>     problem forward to the next point. Now that these issues were
>     addressed, we've figured how certain parts can be simplified to
>     significantly reduce the probability that similar issues appear in
>     the future. We owe a big thanks to Maciej Zdeb for testing
>     countless patches and reporting detailed traces, and even core
>     dumps.
>
> There were some other annoying issues, among which :
>
>   - occasionally a 100% CPU condition (but traffic not interrupted) on
>     certain fragmented H2 HEADERS frames. Thanks go to Yves Lafon for
>     providing cores and traces.
>
>   - missing locks on source port ranges occasionally causing
>     connections running on different threads to pick the same outgoing
>     source port, resulting in connection failures.
>
>   - a missing lock on the server slowstart code causing deadlocks on
>     the roundrobin algorithm when using threads and slowstart.
>
> The rest is either a bit less important or became confusing to me after
> having dealt with the ones above, to be honest.
>
> I'm quite confident this one works way better than previous ones, and
> at the same time that someone will soon raise their hand saying "I
> think I have a problem". Having said that, with 305 bugs fixed since
> 1.9.0 was released, you have no valid reason for still using an earlier
> release now that this one is available.
>
> I would generally like to thank all the early adopters who are running
> on 1.9, because they are the ones going through all the problems above
> and taking the risks for others, and thanks to them we can expect a
> much calmer 2.0. So please continue to report any issue you'll meet!
>
> Please find the usual URLs below :
>    Site index       : http://www.haproxy.org/
>    Discourse        : http://discourse.haproxy.org/
>    Slack channel    : https://slack.haproxy.org/
>    Issue tracker    : https://github.com/haproxy/haproxy/issues
>    Sources          : http://www.haproxy.org/download/1.9/src/
>    Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
>    Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
>    Changelog        : http://www.haproxy.org/download/1.9/src/CHANGELOG
>    Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Docker images:
OSSL: https://hub.docker.com/r/me2digital/haproxy19
BSSL: https://hub.docker.com/r/me2digital/haproxy-19-boringssl

OpenSSL build log:

```
## Starting vtest ##
Testing with haproxy version: 1.9.8
#    top  TEST ./reg-tests/http-capture/h0.vtc FAILED (0.123) exit=2
#    top  TEST ./reg-tests/http-messaging/h0.vtc FAILED (0.124) exit=2
2 tests failed, 0 tests skipped, 33 tests passed
```

https://gitlab.com/aleks001/haproxy19-centos/-/jobs/213200457

BoringSSL build log:

```
## Starting vtest ##
Testing with haproxy version: 1.9.8
#    top  TEST ./reg-tests/http-capture/h0.vtc FAILED (0.118) exit=2
#    top  TEST ./reg-tests/connection/b0.vtc FAILED (8.184) exit=2
#    top  TEST ./reg-tests/http-messaging/h0.vtc FAILED (0.113) exit=2
3 tests failed, 0 tests skipped, 31 tests passed
```

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/213200704

> Willy
> ---
> Complete changelog :
[snipped]

Regards
Aleks
Re: [PATCH 0/6] Kill deprecated configuration options
On 15.05.2019 at 17:09, Tim Düsterhus wrote:
> Willy,
>
> On 15.05.19 at 11:31, Tim Düsterhus wrote:
>>>> 2. 'req*' and 'rsp*'. I remember that they allow some modification that
>>>>    cannot easily be replicated otherwise (but I'll have to check that
>>>>    first).
>>>
>>> Sure but practically speaking such modifications do not make sense in
>>> the real world (e.g. rename many header names at once). And the "excuse"
>>
>> I believe it was some kind of request path rewriting that was not easily
>> possible with `http-request set-path` (maybe because of syntax
>> limitations). I'll definitely check and report back.
>
> Okay, I looked it up. It's simple: everything that needs capturing
> groups for the path modifications is not exactly trivial or clean to
> replicate otherwise (but it's possible, as I found out by scrolling
> through the docs and seeing http-request replace-header):
>
> Consider I have URLs in a folder that I want to map to “git object
> directory style” hashed folders (that's a hypothetical example, but I've
> used something similar in production):
>
> /foo/1234.png will become /12/34
>
> This is my config:
>
>> defaults
>>     timeout server 1s
>>     mode http
>>     option httpclose
>>
>> listen fe
>>     bind :8080
>>     reqrep ^([^\ :]*)\ /foo/(.{2})(.*).png \1\ /\2/\3
>>     http-request set-header FE 1
>>
>>     server example localhost:8082
>>
>> listen fe2
>>     bind :8081
>>     http-request set-header XXX %[path]
>>     http-request replace-header XXX /foo/(.{2})(.*).png /\1/\2
>>     http-request set-path %[req.hdr(XXX)]
>>     http-request del-header XXX
>>     http-request set-header FE 2
>>
>>     server example localhost:8082
>
> Both frontends will do the correct replacement, but IMO the reqrep one
> is more readable (not that any of these are really readable):
>
>> [timwolla@~]begin
>> nc -l 127.0.0.1 8082 &
>> curl -q localhost:8080/foo/1234.png
>> wait
>> echo
>> nc -l 127.0.0.1 8082 &
>> curl -q localhost:8081/foo/1234.png
>> wait
>> end
>> GET /12/34 HTTP/1.1
>> host: localhost:8080
>> user-agent: curl/7.47.0
>> accept: */*
>> fe: 1
>>
>> 504 Gateway Time-out
>> The server didn't respond in time.
>>
>>
>> GET /12/34 HTTP/1.1
>> host: localhost:8081
>> user-agent: curl/7.47.0
>> accept: */*
>> fe: 2
>> Connection: close
>>
>> 504 Gateway Time-out
>> The server didn't respond in time.
>
> The obvious `http-request set-path %[path,regsub(...)]` as suggested in
> the docs for `http-request set-query` does *NOT* work, because the
> `regsub` parameters cannot contain the closing parenthesis required for
> capture groups.

Uh, yes, that's really unhandy. How about reusing `create_cond_regex_rule(...)`, which uses `chain_regex(...)`, for

  http-(request|response) replace <search> <replace> [ { if | unless } <condition> ]
  http-(request|response) ireplace <search> <replace> [ { if | unless } <condition> ]

Or maybe it's easier to add a backtick possibility to `regsub`?

> Best regards
> Tim Düsterhus

Best regards
Aleks
Re: PATCH: enable cirrus-ci (freebsd builds)
Hello,

can we enable cirrus-ci? It's like travis-ci; it allows running freebsd builds.

On Wed, May 1, 2019 at 14:11, Илья Шипицин wrote:
> hello!
>
> can you please enable cirrus-ci on https://github.com/haproxy/haproxy ?
>
>
> ---------- Forwarded message ----------
> From: Илья Шипицин
> Date: Tue, Apr 30, 2019 at 15:35
> Subject: PATCH: enable cirrus-ci (freebsd builds)
> To: HAProxy
>
> Hello.
>
> I enabled basic freebsd CI (yet, it needs to be enabled on github.com in
> a few easy steps)
> thanks!
>
> Ilya Shipitcin
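For readers unfamiliar with cirrus-ci: its configuration lives in a `.cirrus.yml` file in the repository. The following is only a hypothetical sketch of what a minimal FreeBSD build task might look like — the image name and make invocation are assumptions, not the contents of the actual patch:

```yaml
# Hypothetical minimal .cirrus.yml sketch for a FreeBSD build
freebsd_task:
  freebsd_instance:
    image_family: freebsd-12-0   # assumed image name
  install_script:
    - pkg install -y gmake
  build_script:
    - gmake TARGET=freebsd
```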
[ANNOUNCE] haproxy-2.0-dev3
Hi,

HAProxy 2.0-dev3 was released on 2019/05/15. It added 393 new commits after version 2.0-dev2.

This is another huge version; having been distracted by a number of bugs lately, this one was postponed a bit too much for my taste. As usual for a development version, I'll skip over the bugfixes, which are uninteresting for this changelog.

The main points of this release are :

  - HTX enabled by default on all proxies. The only showstopper used to
    be the lack of ability to upgrade from TCP to HTTP in HTX mode when
    branching from a TCP frontend to an HTTP backend. Since it now works,
    there is no reason for staying in legacy mode anymore. This means
    that all features (backend H2 etc) are implicitly allowed without
    the need for an extra option. It is still possible to disable HTX in
    case of regression or suspicion using "no option http-use-htx". Keep
    in mind that any problem ought to be reported, as the intent is to
    remove legacy mode with 2.1, so 2.0 will be the last one supporting
    both modes.

  - HTTP/2 is now supported on HTTP/1 ports (in HTX mode). Whenever the
    H2 preface is met on an H1 listener, the connection is automatically
    switched to H2.

  - significant scheduler improvements to improve fairness between all
    tasks in multi-threaded mode. There used to be a situation where
    some tasks could starve other ones, which was observable by some CLI
    commands timing out too early when doing "echo foo|socat".

  - lockup bug detection : if a task loops forever and uses all the CPU,
    this is a bug and haproxy will be killed. Similarly, if a task locks
    up for a long time, haproxy is killed. This is enabled for now in
    development, and maybe it will stay enabled by default after the
    release, as it would have helped a number of users to recover faster
    from some annoying bugs.
    If you see haproxy crash in an abort() and dump a core, first you'll
    know you've hit a serious bug and it managed to stop it; second,
    keep in mind that there are developers who could be interested in
    knowing what was detected, so please don't erase the trace and the
    core immediately. I still have some watchdog code under development
    that is even able to detect deadlocks and crash the process in this
    case; I need to polish it.

  - Layer 7 retries : many of you know my disgust for such a feature,
    essentially requested by incompetent admins trying to hide their
    horribly bogus applications and who prefer to shoot themselves in
    the foot instead of fixing the code, but there are a few valid
    (read: riskless) use cases. One of them concerns the use of TCP
    Fast Open to connect to the servers; it is not usable without such
    retries. Another one concerns 0-RTT to the servers, where it's
    highly desirable that haproxy retries itself if the server ignores
    the early data. In addition to this there are some more legitimate
    users with known idempotent applications (static file servers and
    applications using replay-safe transaction numbers) where this can
    be understandable. The thing is that all these use cases require
    exactly the same mechanism. So now that this was implemented, it
    will also be available for those who want to do whatever and who
    will complain that haproxy multiplies their payment requests or
    kills all their servers in a domino effect. They'd rather not
    complain here or I may reserve them a selection of not-so-kind
    words. It is possible to finely enumerate the situations where a
    retry is permitted (see "retry-on"), and a few status codes are
    permitted (404 was included as this one is sometimes requested by
    content providers). In addition there is a new HTTP request action
    "disable-l7-retry" which allows to prevent such retries from
    happening (e.g. POST to an application not specifically designed to
    be replay-safe). Of course it is not enabled by default.
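To make the mechanism above concrete, here is a minimal hypothetical sketch (backend name and address are made up; which "retry-on" events are safe to enable depends entirely on how replay-safe the application is):

```haproxy
defaults
    mode http
    # total number of connect/L7 retries for a request
    retries 3

backend be_static
    # only retry in situations considered safe for this application,
    # plus the 404 status mentioned above
    retry-on conn-failure empty-response response-timeout 404
    # never L7-retry requests that are not replay-safe
    http-request disable-l7-retry if METH_POST
    server s1 192.168.0.10:8080 check
```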
  - TFO is now supported when talking to servers. It is one of the
    positive effects of having L7 retries. Similarly, 0-RTT can now be
    replayed without going back to the client.

  - stick-tables can now be declared inside peers sections. Many of
    those using tons of stick-tables have many backends with only one
    stick-table line. These backends also pollute the stats. And these
    stick-tables have to reference a peers section to be synchronized.
    We figured that since it is not possible to synchronize
    stick-tables between multiple peers sections, it made quite some
    sense to be able to declare several of them directly inside peers
    sections so that they are easily found, automatically synchronized,
    and require less configuration. These ones will be accessible using
    the peers section name followed by a slash and the stick-table name.

  - http-request/tcp-request action "do-resolve", which takes an
    argument, submits it to the DNS resolvers and sets the result back
    into a variable.
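As an illustration of the peers-level declaration described above, a minimal sketch (peer names and addresses are placeholders); the table is then referenced as `<peers-section>/<table-name>`:

```haproxy
peers mypeers
    peer hap1 192.168.0.1:1024
    peer hap2 192.168.0.2:1024
    # declared directly in the peers section, so it is automatically
    # synchronized between the two peers
    table src_track type ip size 100k expire 10m store http_req_rate(10s)

listen fe
    bind :8080
    # reference the table as <peers section>/<table name>
    http-request track-sc0 src table mypeers/src_track
    server s1 192.168.0.10:8080
```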
Re: [PATCH 0/6] Kill deprecated configuration options
Willy,

On 15.05.19 at 11:31, Tim Düsterhus wrote:
>>> 2. 'req*' and 'rsp*'. I remember that they allow some modification that
>>>    cannot easily be replicated otherwise (but I'll have to check that
>>>    first).
>>
>> Sure but practically speaking such modifications do not make sense in
>> the real world (e.g. rename many header names at once). And the "excuse"
>
> I believe it was some kind of request path rewriting that was not easily
> possible with `http-request set-path` (maybe because of syntax
> limitations). I'll definitely check and report back.

Okay, I looked it up. It's simple: everything that needs capturing groups for the path modifications is not exactly trivial or clean to replicate otherwise (but it's possible, as I found out by scrolling through the docs and seeing http-request replace-header):

Consider I have URLs in a folder that I want to map to “git object directory style” hashed folders (that's a hypothetical example, but I've used something similar in production):

/foo/1234.png will become /12/34

This is my config:

> defaults
>     timeout server 1s
>     mode http
>     option httpclose
>
> listen fe
>     bind :8080
>     reqrep ^([^\ :]*)\ /foo/(.{2})(.*).png \1\ /\2/\3
>     http-request set-header FE 1
>
>     server example localhost:8082
>
> listen fe2
>     bind :8081
>     http-request set-header XXX %[path]
>     http-request replace-header XXX /foo/(.{2})(.*).png /\1/\2
>     http-request set-path %[req.hdr(XXX)]
>     http-request del-header XXX
>     http-request set-header FE 2
>
>     server example localhost:8082

Both frontends will do the correct replacement, but IMO the reqrep one is more readable (not that any of these are really readable):

> [timwolla@~]begin
> nc -l 127.0.0.1 8082 &
> curl -q localhost:8080/foo/1234.png
> wait
> echo
> nc -l 127.0.0.1 8082 &
> curl -q localhost:8081/foo/1234.png
> wait
> end
> GET /12/34 HTTP/1.1
> host: localhost:8080
> user-agent: curl/7.47.0
> accept: */*
> fe: 1
>
> 504 Gateway Time-out
> The server didn't respond in time.
>
>
> GET /12/34 HTTP/1.1
> host: localhost:8081
> user-agent: curl/7.47.0
> accept: */*
> fe: 2
> Connection: close
>
> 504 Gateway Time-out
> The server didn't respond in time.

The obvious `http-request set-path %[path,regsub(...)]` as suggested in the docs for `http-request set-query` does *NOT* work, because the `regsub` parameters cannot contain the closing parenthesis required for capture groups.

Best regards
Tim Düsterhus
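For illustration, a hypothetical sketch of the form that fails: the ")" closing a capture group is taken as the end of the regsub() converter's own argument list, so a line like the commented one below is rejected at startup — which is exactly the limitation described above (the regex itself is just an example):

```haproxy
listen fe3
    bind :8083
    # Does NOT parse: the first ")" inside the regex terminates the
    # regsub(...) argument list prematurely.
    #http-request set-path %[path,regsub(/foo/(.{2})(.*)\.png,/\1/\2)]
    server example localhost:8082
```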
Re: Zero RTT in backend server side
On Wed, May 15, 2019 at 2:10 PM, Olivier Houchard wrote:
> We usually only add options in ssl-default-bind-options that can later be
> overridden on a per-bind basis, but right now, there's no option to
> disable 0RTT.

Thanks for the explanation!
--
William
Re: Zero RTT in backend server side
Hi William,

On Wed, May 15, 2019 at 01:10:37PM +0200, William Dauchy wrote:
> Hello Olivier,
>
> In another subject related to 0rtt, I was wondering why it is not
> available in ssl-default-bind-options?

We usually only add options in ssl-default-bind-options that can later be overridden on a per-bind basis, but right now, there's no option to disable 0RTT.

Regards,
Olivier
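For context, a minimal sketch of how 0-RTT is enabled today (certificate path and addresses are placeholders): "allow-0rtt" is set per "bind" line, and since there is no corresponding keyword to turn it back off, accepting it in ssl-default-bind-options would leave no way to override it on an individual bind:

```haproxy
listen fe_tls
    # 0-RTT is accepted only on this bind line; the crt path is made up
    bind :443 ssl crt /etc/haproxy/site.pem allow-0rtt
    server s1 192.168.0.10:8080
```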
Re: Zero RTT in backend server side
Hello Olivier,

In another subject related to 0rtt, I was wondering why it is not available in ssl-default-bind-options?

Thanks,
--
William
Re: [PATCH 0/6] Kill deprecated configuration options
Willy,

On 15.05.19 at 05:06, Willy Tarreau wrote:
> Hi Tim,
>
> On Tue, May 14, 2019 at 08:57:55PM +0200, Tim Duesterhus wrote:
>> Okay, I did a sweep through the configuration parser and:
>>
>> 1. Made deprecated directives fatal and removed them from the docs. The
>>    error messages speak of "HAProxy 2.1", thus it should be merged into
>>    some kind of 'next' branch.
>> 2. Made deprecated directives actually warn and removed them from the
>>    docs as well. No need to document deprecated options, users can
>>    simply peek into the old docs. Also the error messages are pretty
>>    clear on what needs to be done to fix it.
>
> OK, I think we can take all of these for a next branch indeed. Two of
> them, the one adding a warning and the one removing the unused keyword
> could be picked for 2.0 if you agree. Also I'll rename them "MEDIUM" as

I'll leave this up to you. Applying them all to 'next' only possibly makes the history a bit cleaner, though.

> they do change something in a way that is fixable by configuration,
> these are not just functionally equivalent code cleanups.

ack.

>> 2. 'req*' and 'rsp*'. I remember that they allow some modification that
>>    cannot easily be replicated otherwise (but I'll have to check that
>>    first).
>
> Sure but practically speaking such modifications do not make sense in
> the real world (e.g. rename many header names at once). And the "excuse"

I believe it was some kind of request path rewriting that was not easily possible with `http-request set-path` (maybe because of syntax limitations). I'll definitely check and report back.

> above has been the reason for continually postponing their removal. I'd
> instead vote for warning on them in 2.0 and removing them very early in
> 2.1. If someone has a compelling use case, we'll get some feedback
> thanks to the warning and it will still be possible to figure how to
> implement a replacement using http-request rules.

Best regards
Tim Düsterhus
Re: haproxy 1.9.6 segfault in srv_update_status
Hi Patrick,

On Wed, May 15, 2019 at 01:22:41AM -0400, Patrick Hemmer wrote:
> We haven't had a chance to update to 1.9.8 yet, so we're still running
> 1.9.6 (Linux) in production, and just had 2 segfaults happen a little
> over an hour apart. When I look at the core dumps from them, the stack
> trace is the same. I'm not sure if this is an issue already fixed, so
> providing just in case.

In 1.9.6 some locking was missing in the roundrobin LB algorithm as well as in the slowstart function. Any server state update there (weight change, up/down etc.) only exercises your luck :-)

> There was one oddity going on at the time these segfaults occurred. We
> had maxed out the Linux kernel's conntrack table. So haproxy would have
> been experiencing timeouts when attempting new connections, with health
> checks failing all over the place.

Yes, good point, that's very likely what happened, which could indicate that the rest of the time your servers are quite stable and you don't trigger these code paths.

Thanks!
Willy