Re: [ANNOUNCE] haproxy-1.9-dev1
Hi.

On 03/08/2018 19:42, Aleksandar Lazic wrote:
> Hi.
> On 02/08/2018 19:23, Willy Tarreau wrote:
>> Hi,
>> HAProxy 1.9-dev1 was released on 2018/08/02. It added 651 new commits
>> after version 1.9-dev0.
> Great news and work ;-)
> The image is also ready.
> https://hub.docker.com/r/me2digital/haproxy19/

As an attentive reader mentioned, there is an old SSL library in CentOS. Because of this I have now added the 1.1.1-pre8 version to this image, and while I was at it I also updated the Lua version ;-)

I don't think a more on-the-edge setup is possible now, unless you build it from git.

###
HA-Proxy version 1.9-dev1 2018/08/02
Copyright 2000-2018 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

-> Built with OpenSSL version : OpenSSL 1.1.1-pre8 (beta) 20 Jun 2018
-> Running on OpenSSL version : OpenSSL 1.1.1-pre8 (beta) 20 Jun 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
-> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.
Available polling systems :
      epoll : pref=300, test result OK
       poll : pref=200, test result OK
     select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [SPOE] spoe
    [COMP] compression
    [TRACE] trace
###

FYI, the Dockerfile is here:
https://gitlab.com/aleks001/haproxy19-centos/blob/master/Dockerfile

Regards
Aleks

Yes, I know what some of you are thinking: "what, 651 patches for a first development release?". Last year, 1.8-dev1 was emitted with half that in April, 4 months earlier. But back then we had only pushed fixes and some new features to flush the pipe, and the 1.8-dev2 and -dev3 that followed carried even more patches once cumulated. Here, after 1.8, we had a longer trail of difficult bugs to deal with, and the 1.9 changes were very low-level stuff that doesn't bring any functional value: mostly rearchitectures of certain sensitive parts, aimed at building the new features on top of them. So we could have emitted useless and broken versions, but... I don't like to discourage our users. Thus, 8 months after 1.9-dev0 was created, here comes the first version really worth testing.

Those looking for eye-candy stuff will be a bit disappointed, I prefer to warn. Among the ~300 patches that were not backported to 1.8.x (hence that were not bug fixes), I can see :

- a rework of our task scheduler. It now scales much better with large thread counts. There are 3 levels now: one priority-aware queue shared between all threads, a lockless priority-aware one per thread, and a per-thread list of already started tasks that can be used for I/O as well. The result is that most of the scheduling work is performed without any lock, which scales far better. Another nice benefit of the lock removal is that when haproxy has to coexist with another process on the same CPU, the impact on other threads is much lower, since the threads are very rarely context-switched while holding a lock.

- the applets scheduler was killed and replaced by the new scheduler above. Not only could the previous applets scheduler use quite some CPU, it didn't make use of priorities, so many applets could consume a lot of CPU bandwidth. I already noticed this with the first attempt at implementing H2 using applets. Now that the task's nice value is respected, the CLI is much more responsive even under very high loads, and the stats page can be tuned to have less impact on the traffic. The same goes for peers and SPOE; we'll see whether they benefit from either a boost or a reduced priority.

- a new test suite was introduced, based on "varnishtest" from Varnish Cache. It was extended to support haproxy, and we can now write test cases, which are placed into the reg-tests directory. It is very convenient, because testing a proxy is a particularly complex task that depends on a lot of elements, and varnishtest makes it easier to write reproducible test patterns.

- the buffers were completely changed (again). Buffers get redesigned every 5 years, it seems. I probably find it funny. No I don't, in fact.
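[For readers unfamiliar with the reg-tests mentioned above, here is a rough sketch of what a varnishtest (VTC) case for haproxy looks like. The exact names, macros and options are illustrative, from memory, and may not match the files in the reg-tests directory.]

# Hypothetical minimal reg-test: one server, one haproxy instance, one client.
varnishtest "Basic HTTP request/response through haproxy"

server s1 {
    rxreq
    txresp -status 200
} -start

haproxy h1 -conf {
    defaults
        mode http
        timeout connect 1s
        timeout client  1s
        timeout server  1s

    frontend fe1
        bind "fd@${fe1}"
        default_backend be1

    backend be1
        server srv1 ${s1_addr}:${s1_port}
} -start

client c1 -connect ${h1_fe1_sock} {
    txreq -url "/"
    rxresp
    expect resp.status == 200
} -run

The test spins up a scripted origin server, starts haproxy with an inline configuration, then drives a client through it and asserts on the response.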
Re: [ANNOUNCE] haproxy-1.9-dev1
Hi.

On 02/08/2018 19:23, Willy Tarreau wrote:
> Hi,
> HAProxy 1.9-dev1 was released on 2018/08/02. It added 651 new commits
> after version 1.9-dev0.

Great news and work ;-)

The image is also ready.
https://hub.docker.com/r/me2digital/haproxy19/

###
HA-Proxy version 1.9-dev1 2018/08/02
Copyright 2000-2018 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300, test result OK
       poll : pref=200, test result OK
     select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [SPOE] spoe
    [COMP] compression
    [TRACE] trace
###

Regards
Aleks

Yes, I know what some of you are thinking: "what, 651 patches for a first development release?". Last year, 1.8-dev1 was emitted with half that in April, 4 months earlier. But back then we had only pushed fixes and some new features to flush the pipe, and the 1.8-dev2 and -dev3 that followed carried even more patches once cumulated. Here, after 1.8, we had a longer trail of difficult bugs to deal with, and the 1.9 changes were very low-level stuff that doesn't bring any functional value: mostly rearchitectures of certain sensitive parts, aimed at building the new features on top of them. So we could have emitted useless and broken versions, but... I don't like to discourage our users. Thus, 8 months after 1.9-dev0 was created, here comes the first version really worth testing.

Those looking for eye-candy stuff will be a bit disappointed, I prefer to warn. Among the ~300 patches that were not backported to 1.8.x (hence that were not bug fixes), I can see :

- a rework of our task scheduler. It now scales much better with large thread counts. There are 3 levels now: one priority-aware queue shared between all threads, a lockless priority-aware one per thread, and a per-thread list of already started tasks that can be used for I/O as well. The result is that most of the scheduling work is performed without any lock, which scales far better. Another nice benefit of the lock removal is that when haproxy has to coexist with another process on the same CPU, the impact on other threads is much lower, since the threads are very rarely context-switched while holding a lock.

- the applets scheduler was killed and replaced by the new scheduler above. Not only could the previous applets scheduler use quite some CPU, it didn't make use of priorities, so many applets could consume a lot of CPU bandwidth. I already noticed this with the first attempt at implementing H2 using applets. Now that the task's nice value is respected, the CLI is much more responsive even under very high loads, and the stats page can be tuned to have less impact on the traffic. The same goes for peers and SPOE; we'll see whether they benefit from either a boost or a reduced priority.

- a new test suite was introduced, based on "varnishtest" from Varnish Cache. It was extended to support haproxy, and we can now write test cases, which are placed into the reg-tests directory. It is very convenient, because testing a proxy is a particularly complex task that depends on a lot of elements, and varnishtest makes it easier to write reproducible test patterns.

- the buffers were completely changed (again). Buffers get redesigned every 5 years, it seems. I probably find it funny. No I don't, in fact. With the introduction of the mux layer, we suffered a bit from the old design mixing input and output areas in the same buffer, as it didn't make any sense there and we had to arbitrarily use either side depending on the data direction, making it impossible to share code between the two sides. Now the buffers are much simpler, and the code using them at the various layers was significantly simplified.
haproxy and changing ELB IPs
Hi,

We are running into a problem and would like to hear any advice.

Our setup:
- We use haproxy 1.7.7 with two backends. One of the backends is an AWS ELB.
- The haproxy is running on a Linux machine in our data center (on premises).

Problem:
The ELB is available in 3 AZs, so the endpoint can resolve to 3 IPs at a given time. After startup, when one of the ELB's IPs changes, our haproxy shows the ELB as down with L4TMOUT and never recovers the backend.

From the 1.7.7 doc, under section 5.3.1, we see the below:

"A few other events can trigger a name resolution at run time:
 - when a server's health check ends up in a connection timeout: this may be
   because the server has a new IP address. So we need to trigger a name
   resolution to know this new IP."

It's not clear if the resolvers section is required for the above statement to be true. Unfortunately, we cannot define the name servers' IPs in our configuration, since these IPs can change. On the system (/etc/resolv.conf), these are automatically updated by dhclient.

Questions:
1. Is there any way by which haproxy can use the latest DNS entries from the system config at runtime?
2. Is there any way to configure haproxy to expire name resolutions after X seconds without defining the nameserver IPs?

Please advise.

Thanks,
Karthik
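[Editor's note: in haproxy 1.7, runtime re-resolution only applies to servers attached to an explicit "resolvers" section; without one, the hostname is resolved once by libc at startup, which matches the behaviour described above. A minimal sketch of the documented mechanism follows; the nameserver address, backend name, hostname and timers are placeholders, not values from this thread.]

    # Placeholder addresses and names, for illustration only.
    resolvers mydns
        nameserver dns1 10.0.0.2:53
        resolve_retries 3
        timeout retry   1s
        hold valid      10s

    backend aws_elb
        server elb my-elb.example.com:443 check resolvers mydns resolve-prefer ipv4

With this in place, a health-check timeout or an expired "hold valid" period triggers a fresh DNS query against dns1, so the server follows the ELB's changing IPs. It does not, however, answer the poster's question about following /etc/resolv.conf updates, which 1.7 cannot do.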
Re: Digital Services and Web Solutions for haproxy.org
Dear business owner haproxy.org, I just wanted to know if you require a better solution to manage your Digital Campaigns, Competitor Website Analysis, Keyword Research, Reporting etc. We provide a focused approach for the following Digital Services:
- Search Engine Optimization
- Social Media Optimization
- Pay Per Click
- Web Designing/Development
- Mobile App Development
- Content Writing
- Conversion Rate Optimization
- Secure Web Hosting (HTTPS)
- Software Development
We can manage all of these, as we have an expert team of professionals who can help you drive highly targeted web traffic to your website using our highly targeted Digital strategies. Using our re-seller program, you can save a hefty amount on hiring resources while getting 24*7 superior support to excel in every project and ensure its delivery in an organized and timely manner. If this is something you are interested in, then allow me to send you a no-obligation audit report. Please let us know in case you are interested. You can give me your Skype ID or phone number to discuss more.
Best Regards,
Steve Ray
Business Development Executive
http-request set-src without PROXY protocol
Hi,

I'm currently experimenting with "http-request set-src". When I use it in a backend with the PROXY protocol configured, it works and the IP is written in the PROXY protocol header.

But what does "set-src" do if no PROXY protocol is used or can be used? Is the "http-request set-src" feature only intended for use with the PROXY protocol? If not, what are the requirements when not using the PROXY protocol?

Example:

    frontend fe
        mode http
        http-request set-header X-FakeIP 192.168.99.5
        default_backend be

    backend be
        mode http
        http-request set-src hdr(X-FakeIP)
        server s1 172.16.0.10:80

Best Regards / Mit freundlichen Grüßen
Bjoern
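[Editor's note: without the PROXY protocol, the address rewritten by set-src is only visible to the server if haproxy actually connects from it, which requires transparent binding. A sketch, assuming a Linux kernel with TPROXY support and haproxy built with transparent proxy support; addresses are taken from the example above:]

    backend be
        mode http
        http-request set-src hdr(X-FakeIP)
        # Transparent binding: connect to the server from the (rewritten)
        # client address. Needs root or CAP_NET_ADMIN, and the server must
        # route its replies back through the haproxy host.
        source 0.0.0.0 usesrc clientip
        server s1 172.16.0.10:80

Without either the PROXY protocol or transparent binding, set-src only changes the source address haproxy itself uses internally (logs, ACLs, stick tables); the server still sees haproxy's own IP.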
Re: Using haproxy together with NFS
Hi,

You might want to have a look at IPVS, for instance in combination with Keepalived. You could then even use UDP mounts if you want. Just my 2 cents.

Regards,
Sander

> On 2 Aug 2018, at 18:40, Lucas Rolff wrote:
>
> I indeed removed the send-proxy - then I had to put the IP of haproxy in the
> NFS exports file instead to be able to mount the share (which makes sense
> seen from an NFS perspective).
>
> Making the NFS server support the proxy protocol isn't something I think will
> happen - I rely on the upstream packages (CentOS 7 packages in this case).
>
> And using transparency mode - I think relying on stuff going via haproxy for
> routing won't be a possibility in this case - so I guess I have to drop my
> wish about haproxy + NFS in this case. I'd like something that is fairly
> standard, without too many modifications on the current NFS infrastructure
> (since it would introduce more complexity).
>
> Thanks for your replies, both of you!
>
> Best Regards,
>
> On 02/08/2018, 18.09, "Willy Tarreau" wrote:
>
>> On Thu, Aug 02, 2018 at 04:05:24AM +, Lucas Rolff wrote:
>>> Hi Michael,
>>>
>>> Without the send-proxy, the client IP in the export would have to be the
>>> haproxy server in that case, right?
>>
>> That's it. But Michael is absolutely right: your NFS server doesn't support
>> the proxy protocol, and the lines it emits below indicate it:
>>
>>> Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel:
>>> RPC: fragment too large: 1347571544
>>> Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel:
>>> RPC: fragment too large: 1347571544
>>> Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel:
>>> RPC: fragment too large: 1347571544
>>> Aug 01 21:44:45 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel:
>>> RPC: fragment too large: 1347571544
>>
>> This fragment size (1347571544) is "PROX" encoded in big endian, which are
>> the first 4 chars of the proxy protocol header :-)
>>
>>> The issue there is then that all clients with access to haproxy can
>>> suddenly mount all shares in NFS, which I would like to prevent.
>>
>> Maybe you can modify your NFS server to support the proxy protocol; that
>> could possibly make sense for your use case? Otherwise, on Linux you may
>> be able to configure haproxy to work in transparent mode using "source
>> 0.0.0.0 usesrc clientip", but beware that it requires some specific iptables
>> rules to divert the traffic and send it back to haproxy. It will also require
>> that all your NFS servers route the clients via haproxy for the response
>> traffic. This is not always very convenient.
>>
>> Regards,
>> Willy
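[Editor's note: Willy's decoding of the log value is easy to verify. The NFS server's RPC layer reads the first 4 bytes of the incoming stream as a big-endian fragment header; when haproxy sends the PROXY protocol line first, those bytes are the ASCII characters "PROX":]

```python
import struct

# Interpret the first four bytes of the PROXY protocol header ("PROX")
# as a big-endian unsigned 32-bit integer, the way the RPC record
# marking code would.
(frag,) = struct.unpack(">I", b"PROX")
print(frag)  # -> 1347571544, the "fragment too large" value from the log
```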