Re: Blocking log4j CVE with HAProxy
On Mon, 13 Dec 2021 at 12:51, Olivier D wrote: > If you don't know yet, a CVE was published on Friday about library log4j, > allowing a remote code execution with a crafted HTTP request. [snip] > We would like to filter these requests on HAProxy to lower the exposure. At > peak times, 20% of our web traffic is scanners probing for this bug! [snip] > http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or > hdr_sub(user-agent) -i "\$\{jndi:" } > What do you think? I don't have an explicit example, but my understanding is that log4j's "${foo}" strings, which need to result in "${jndi:ldap://}" to trigger the exploit, are recursively expanded. Thus (again, having neither an example nor any expertise with log4j) the space of things you'd need to filter out becomes rather large. I believe there are string casing operators available, leading to options like (but probably not precisely!) "${j${lower:n}di:ldap://...". Whilst I'm sure you could reduce your malicious traffic volumes with a static rule, as you mention, I'm not sure it's a good idea to give anyone the impression that this can be anything more than a /very/ incomplete sticking plaster over the issue! All the best, Jon -- Jonathan Matthews https://jpluscplusm.com
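[Editor's note: for anyone who still wants the stop-gap rule discussed above, a sketch follows. Note that `or` cannot sit inside a single `{ }` ACL block as in the original suggestion, and that a literal `${` in an haproxy config can collide with haproxy's own environment-variable expansion, hence matching on the bare `jndi:` substring instead. This is an illustrative, knowingly leaky filter, to be tested before use:]

```haproxy
# Matches the bare "jndi:" substring rather than "${jndi:", which
# catches some -- but far from all -- obfuscated variants. Still only
# a sticking plaster: patch log4j itself.
http-request deny deny_status 405 if { url_sub -i jndi: } or { hdr_sub(user-agent) -i jndi: }
```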
Re: Add 401, 403 retries at l7
On Thu, 12 Nov 2020 at 12:21, Julien Pivotto wrote: > Dear, > > Please find a patch to add 401 and 403 l7 retries, see > https://github.com/haproxy/haproxy/issues/948 Hey Julien, This really feels like an anti-feature, to be frank! If a specific backend server can’t auth anyone, don’t have it in the pool of servers which process auth requests. If it can’t auth anyone, only some of the time, take it out of the auth pool based on health checks. If it can’t auth *some* people, *some* of the time, while other servers can: A) fix your broken server; don’t enlarge a nice piece of middleware like haproxy! B) you probably want a redispatch, not a retry; I *think* a retry can end up on the same server, which isn’t what you want. I might be wrong there, though. I think retrying on 4XX, without modifying the request, is a terrible idea. It’s pretty much the opposite of what the HTTP spec says, and isn’t something haproxy should learn how to do :-) I know it already knows how to do it on 404 (& 408), which I can see a /slight/ rationale for, in a bulk-file-hosting, round-robin-until-a-server-has-a-file situation. That’s still, IMHO, the wrong place for this to be implemented - it should be in-app, not in-proxy. I genuinely don’t think we should expand the set of 4XX responses that can be automatically retried! J > <https://github.com/haproxy/haproxy/issues/948> -- Jonathan Matthews https://jpluscplusm.com
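[Editor's note: for reference, the existing behaviour the reply alludes to — the already-supported 404/408 l7 retries, combined with redispatch so a retry can land on a different server — looks roughly like this in 2.x. Directive names are from the docs; server names and addresses are illustrative:]

```haproxy
backend bk_files
    retries 2
    option redispatch     # let a retried request go to a *different* server
    retry-on 404 408      # the existing (and debated) retryable 4XX set
    server s1 10.0.0.1:8080 check
    server s2 10.0.0.2:8080 check
```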
Re: TCP Proxy for database connections
On Thu, 29 Oct 2020 at 03:41, Anand Rao wrote: > Hi, > > I'm looking for a TCP proxy that can proxy the connection between a > database client and the database server. I want to be able to look at the > traffic and log the queries etc for mining later. I also want to use the > proxy to remove human knowledge of passwords. The users will point their > client to the TCP Port proxy is listening on and will specify a username > which will be a pointer to a vault account (like cyberark or beyondtrust). > The proxy upon receiving this information will then connect to this vault > and get the password and plug the password in for the connection to the > database server. After the connection is established - all traffic should > be proxied through and logged. > > Would HAProxy be a product that can achieve this? If not, I'd like to ask > this knowledgeable community if they can recommend any other projects that > might be closer to achieve the above. I understand this is a very niche > requirement. Any TCP proxy with the ability to script/transform the packets > on the way to the destination would be helpful. I'm trying to find > something in the open source community that I can use for my needs than > having to write one myself. > Hey Anand, I don’t think haproxy is what you’re looking for. You’re looking for more than a TCP proxy: you need a DB-specific-protocol-proxy. Haproxy can listen for HTTP, above the TCP layer, but not any specific DB protocols. I think you need to look for a proxy that’s designed to work with the specific DB you’re wanting to expose. For mysql, “mysql-proxy” and “mysql-router” come to mind. -proxy never went GA, and I’ve not used -router. Given your requirement for the proxy to dynamically fetch credentials, out of band from the connection, I think you’ll find your options to be limited. I know mysql-proxy had Lua embedded (I don’t know about mysql-router) but I’m not sure if it exposed enough Lua libraries to achieve what you’re looking for. 
For postgres, I’m afraid I’m only aware of “pgbouncer”. If none of these tools does 100% of what you want, you might be able to combine them with haproxy to achieve something closer to what you need. Your “everything is logged” requirement, depending on the level to which you need things logged, will likely be a sticking point. Best of luck, Jonathan > -- Jonathan Matthews https://jpluscplusm.com
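[Editor's note: to make the boundary described above concrete — in pure TCP mode haproxy will happily balance and log the *connections*, but the payload (queries, the auth handshake) passes through opaquely, so credential injection and query logging are out of reach. A minimal sketch, with illustrative addresses:]

```haproxy
listen mysql_tcp
    bind :3306
    mode tcp
    option tcplog          # connection-level logging only: no query visibility
    server db1 10.0.0.10:3306 check
    server db2 10.0.0.11:3306 check backup
```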
Re: Question regarding the use of the Community Edition for commercial purposes
On Mon, 28 Sep 2020 at 12:15, Tobias Wengenroth wrote: > Dear HAProxy Support Team, > Hi Tobias. Just FYI, this is the public mailing list for the open source project, not a support team :-) > Our customer needs a loadbalancer for their media webservers and we think > about it to use HAProxy. Can we use your community edition of HAProxy for > commercial purposes? > For the avoidance of doubt: I’m not a lawyer, and this is *not* legal advice :-) The open source project is licensed under the GPL, with some being covered by the LGPL, as described here: http://git.haproxy.org/?p=haproxy-2.2.git;a=blob_plain;f=LICENSE;hb=refs/heads/master You should evaluate your use of the software as you would any other licensed project: by reading the license and talking to your lawyers! My personal experience is that I have used it in many commercial projects without issue, and without individualised permissions being sought from anyone. Does your company have a policy on the use of Free / Open Source software; perhaps one which explicitly deals with the (L)GPL? All the best, Jonathan -- Jonathan Matthews https://jpluscplusm.com
Re: Right way to get file version with Data Plane API?
I’ve not used the API yet, but my reading of those docs, alongside some previous experience, suggests to me that the version should be an entirely user-specified, opaque-to-haproxy string, whose purpose is to avoid stale config writes from concurrent clients. However, the fact that the version seems to be required outside of transactions, and indeed that changes outside transactions are even possible, makes me think I’ve got that wrong. I agree with OP: there should be a “show me the current version ID” endpoint. I don’t see how the best (apparent!) candidate ( /v2/services/haproxy/configuration/raw ) can be used to achieve this, as it also requires a version to be provided. I’m interested in what folks suggest :-) J On Mon, 21 Sep 2020 at 09:55, Ricardo Fraile wrote: > For example, to start a new transaction, as the documentation [1] > > points: > > > > version / required > > Configuration version on which to work on > > > > Or the blog post about it [2]: > > > > Call the /v1/services/haproxy/transactions endpoint to create a new > > transaction. This requires a version parameter in the URL, but the > > commands inside the transaction don’t need one. Whenever a POST, PUT, or > > DELETE command is called, a version must be included, which is then > > stamped onto the HAProxy configuration file. This ensures that if > > multiple clients are using the API, they’ll avoid conflicts. If the > > version you pass doesn’t match the version stamped onto the > > configuration file, you’ll get an error. When using a transaction, that > > version is specified up front when creating the transaction. > > > > What is the right way to get the version stamped on the configuration > > file? > > > > Thanks, > > > > [1] - > > > https://www.haproxy.com/documentation/dataplaneapi/latest/#operation/startTransaction > > [2] - https://www.haproxy.com/blog/new-haproxy-data-plane-api/ > > > > > > > what do you mean by "file version" ? > > > > -- Jonathan Matthews https://jpluscplusm.com
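[Editor's note: the concurrency scheme the version parameter appears to implement — every write must quote the version it was based on, and a stale write is rejected — is classic optimistic concurrency control. A toy illustration of that concept follows; this is NOT the Data Plane API, just the idea behind it:]

```python
# Toy optimistic-concurrency store: writes must name the version they
# were based on, so two clients can't silently clobber each other.
class VersionedConfig:
    def __init__(self, text=""):
        self.text = text
        self.version = 1

    def read(self):
        return self.version, self.text

    def write(self, based_on_version, new_text):
        if based_on_version != self.version:
            raise ValueError("version mismatch: config changed underneath you")
        self.text = new_text
        self.version += 1
        return self.version

cfg = VersionedConfig("maxconn 100")
v, _ = cfg.read()
cfg.write(v, "maxconn 200")      # fine: based on the current version
try:
    cfg.write(v, "maxconn 300")  # stale: v is now out of date
    conflict = False
except ValueError:
    conflict = True
```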
Re: "balance uri whole" in haproxy 2.2.3 differ from haproxy 2.0.17
Perhaps you could expand upon the differences you perceive, and the effect and impact they’re having? On Mon, 21 Sep 2020 at 07:51, Sehoon Kim wrote: > Hi, > > We are upgrading from haproxy 2.0.17 to 2.2.3. > And we use path rewriting and "balance uri whole" before server selection. > > But server selection in 2.2.3 seems to be different from 2.0.17. > > - > backend bk_test > balance uri whole > hash-type consistent > > # Rewrite Host Header > http-request set-header Host %[path,map_beg(/etc/haproxy/acl/test.path, > test.com)] > http-request deny if { hdr(Host) test.com } > http-request replace-path /[\w.]*/(.*) /\1 > > server test1 10.1.1.1:80 check > server test2 10.1.1.2:80 check > server test3 10.1.1.3:80 check > server test4 10.1.1.4:80 check > server test5 10.1.1.5:80 check > server test6 10.1.1.6:80 check > server test7 10.1.1.7:80 check > server test8 10.1.1.8:80 check > server test9 10.1.1.9:80 check > server test10 10.1.1.10:80 check > > > Thanks, > Seri > > > -- Jonathan Matthews https://jpluscplusm.com
Re: check successful reload using master cli
On Tue, 15 Sep 2020 at 16:42, Tim Düsterhus wrote: > Why not use the Tab (0x09) as separator and make the output a proper TSV? I’m only 2% joking when I point out that ASCII already has single-byte inter-field and inter-record delimiters defined, just waiting to be used ... :-) https://ronaldduncan.wordpress.com/2009/10/31/text-file-formats-ascii-delimited-text-not-csv-or-tab-delimited-text/ http://html-codes.info/ascii/standard/What-is-the-ASCII-value-of-record%20separator_30 -- Jonathan Matthews https://jpluscplusm.com
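[Editor's note: the control characters referenced above are US, 0x1F (unit/field separator) and RS, 0x1E (record separator). A tiny demonstration of why they're appealing — a two-record, two-field "table" round-trips with no quoting rules at all:]

```python
# ASCII's built-in delimiters, as per the linked articles.
FS = "\x1f"  # unit separator: between fields
RS = "\x1e"  # record separator: between records

records = [["show", "servers state"], ["reload", "ok"]]
encoded = RS.join(FS.join(fields) for fields in records)
decoded = [rec.split(FS) for rec in encoded.split(RS)]
```

The catch, of course, is that almost no mainstream tooling renders or edits these characters comfortably, which is why TSV keeps winning.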
Re: response header for CORS
On Thu, 27 Aug 2020 at 15:49, Senthil Naidu wrote: > Hi > > I am trying to enable CORS with the below in my frontend > > capture request header origin len 128 > Don’t the (1.6) docs suggest that line needs to be prefixed with “declare”? J -- Jonathan Matthews https://jpluscplusm.com
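[Editor's note: per the configuration manual, `capture request header` is valid directly in a frontend; the `declare capture` form is what backends need. A minimal CORS-reflection sketch using the capture slot — frontend/backend names illustrative, and verify the syntax against your haproxy version's docs:]

```haproxy
frontend fe_web
    bind :80
    capture request header origin len 128
    # Reflect the captured Origin header back, only when one was sent
    http-response set-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
    default_backend be_app
```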
Re: Ha-proxy ignoring context after first digit
Hey there. Just to start by double-checking you know this is the public mailing list for the open source haproxy project, and not a commercial support contact ... :-) From near the top of your configuration: what do you reckon these lines do? acl path_mtc-jenkins-1 path_beg /mtc-jenkins-1 use_backend mtc-jenkins-1_1564 if path_mtc-jenkins-1 There’s probably a relevant “amendment” you could make there :-) If you need a hand figuring it out, let us know how far you get and where you get stuck! HTH, Jonathan -- Jonathan Matthews https://jpluscplusm.com
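[Editor's note, spoiling the exercise for anyone reading the archives: `path_beg` is plain string-prefix matching, so `/mtc-jenkins-1` also matches `/mtc-jenkins-10`, `/mtc-jenkins-11`, and so on — hence "ignoring context after first digit". One illustrative amendment, anchoring on the trailing slash while still matching the bare path:]

```haproxy
# Both acl lines share a name, so either match selects the backend;
# /mtc-jenkins-10 no longer falls into the -1 backend.
acl path_mtc-jenkins-1 path /mtc-jenkins-1
acl path_mtc-jenkins-1 path_beg /mtc-jenkins-1/
use_backend mtc-jenkins-1_1564 if path_mtc-jenkins-1
```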
Re: Ha-proxy ignoring context after first digit
On Tue, 14 Jul 2020 at 08:47, wrote: > We are using Ha-proxy 1.8. Recently we started facing issue with Ha-Proxy > ignoring context after first digit. > Do you perhaps mean “Host” rather than Context? > Please check and help us on this. > Whilst I’m not ruling out a bug in haproxy causing this, it is *vastly* more likely that this is either inadvertently caused by your haproxy configuration or another layer 7/HTTP device in your traffic flow. Please post the smallest haproxy config which exhibits this issue so folks can help you figure it out! J -- Jonathan Matthews https://jpluscplusm.com
Re: Documentation
On Sat, 11 Jul 2020 at 12:14, Tofflan wrote: > Hello! > > I'm trying to set up HAProxy on my pfSense router, but the links under > documentation don't work, for example: > https://cbonte.github.io/haproxy-dconv/2.3/intro.html and > https://cbonte.github.io/haproxy-dconv/2.3/configuration.html > Is there any way to read or download them somewhere? > Hey there, I’m not sure if someone jumped the gun by updating the site’s doc links to reference the unreleased 2.3 version, but you’ll have better luck changing the “2.3” to either 2.2 or 2.0, depending on the version you’re trying to install :-) J > -- Jonathan Matthews https://jpluscplusm.com
Re: HAProxy 2.2 release date
On Thu, 25 Jun 2020 at 14:52, Venkat Kandhari -X (khvenkat - INFOSYS LIMITED at Cisco) wrote: > Hi Team: > > Can someone please let me know when is HAProxy 2.2 GA version planned to > release ? > “GA”? It’s an open source project! There’s no beta product hiding somewhere ;-) The last 2.2 -dev release was a week ago - here’s the announcement: https://www.mail-archive.com/haproxy@formilux.org/msg37687.html That reads to me like there’s a non-trivial amount still to do before release, not least in fixing the perf regressions Willy mentions. I’m not sure that tends towards any particular release schedule - or, at least, not an overly firm one! I’m more than happy to be corrected on that, but if I were you, I’d not make any commitments based on an assumption of any specific 2.2 release date :-) All the best, J > -- Jonathan Matthews London, UK https://jpluscplusm.com
Following redirects [was: haproxy on embedded device]
On Wed, 24 Jun 2020 at 14:15, Thomas Schmiedl wrote: > Hi, > > when trying to download a .ts-file with haproxy on the embedded > device/router, haproxy logs http-status 302. When using wget on the > router (please see attached output), wget "follows" the url and in the > end it's http-status 200. Is this "following" also possible in haproxy? It’s possible ( https://stackoverflow.com/questions/50844292/how-to-make-ha-proxy-to-follow-redirects-by-itself) but you’re really solving the wrong problem. Use an HTTP client which supports 3xx redirects; they’re at least 21 years old! https://tools.ietf.org/html/rfc2616 J > -- Jonathan Matthews London, UK https://jpluscplusm.com
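[Editor's note: a self-contained demonstration of the point above — a redirect-aware client makes the 302 hop itself. Here a local `http.server` stands in for the remote host; `urllib` follows the redirect and ends up with the 200 body, which is exactly what haproxy will not do for you:]

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            # Redirect, as the poster's .ts URL does
            self.send_response(302)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"final content")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/old" % server.server_port
with urllib.request.urlopen(url) as resp:  # follows the 302 itself
    status, body = resp.status, resp.read()
server.shutdown()
```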
Re: Doing directory based access control (Survey / Poll of admin expectations)
On Mon, 22 Jun 2020 at 20:16, Tim Düsterhus wrote: > This off-the-shelf PHP application has an integrated admin control panel > within the /admin/ directory. The frontend consists of several "old > style" PHP files, handling the various paths (e.g. login.php, > register.php, create-thread.php). During upgrades of this off-the-shelf > software new files might be added for new features. > > My boss asked me to restrict the access to the admin control panel to > our internal network (192.168.0.0/16) for security reasons. Access to > the user frontend files must not be restricted. If I were solving this problem solely at the haproxy layer, I'd do something like this: acl internal_net src 192.168.0.0/16 acl admin_request path_beg /admin/ http-request deny if admin_request !internal_net Though by preference I'd put app policy logic as close to, or best of all inside, the app itself; which would have X-Forwarded-For implications. I may have misunderstood your question though! I'm intrigued by what common problems you foresee here. I suppose the Front Controller pattern might be ... interesting to deal with? J -- Jonathan Matthews London, UK https://jpluscplusm.com
Re: SMTP error : TLS error on connection (recv): The TLS connection was non-properly terminated. due to haproxy in the middle
Without wishing to second guess your operational setup, are all of those services (client machines, haproxy, anti-spam boxes) on your network, i.e. do they *need* TLS? Given the insecure nature of email, and the lack of guarantees which you (or anyone) can make about subsequent point-to-point transport layer security, would it not simply be easier to disable all TLS in that setup? Just a thought :-) J On Tue, 9 Jun 2020 at 12:34, Brent Clark wrote: > Good day Guys > > I was hoping I can pick your brain and ask for your help. > If anyone can help and share pointers, it would gratefully be appreciated. > > Where I work, we just inherited a series of third-party outgoing spam > servers. > For various reasons, we need to loadbalance but more importantly direct > traffic for when we need to perform maintenance on these servers. > > What we decided to do is put haproxy in front. > > The intended topology is: > [clients MTA servers] - 587 -> [haproxy] - 587 -> [outgoing spamservers] > > On odd occasions we see the following error message(s) on the clients' > MTAs. And the mail just sits in the queue. When we revert back, it all > flows. > > - > TLS error on connection (recv): The TLS connection was non-properly > terminated. > > Remote host closed connection in response to end of data. > - > > We can't figure out why. > What we think is happening is that there is a cert mismatch. And as a > result Exim just refuses to send or accept the mail. > > Here is a snippet of when I run exim4 -d -M ID of a mail in the queue on > the client MTA. > > gnutls_handshake was successful > TLS certificate verification failed (certificate invalid): > peerdn="CN=antispam6-REMOVED" > TLS verify failure overridden (host in tls_try_verify_hosts) > 5:02 > Calling gnutls_record_recv(0x5634066e64a0, 0x7fffc4a62180, 4096) > LOG: MAIN >H=se-balancer.REMOVED [REMOVEDIP] TLS error on connection (recv): The > TLS connection was non-properly terminated. 
>SMTP(closed)<< > ok=0 send_quit=0 send_rset=1 continue_more=0 yield=1 first_address is > not NULL > tls_close(): shutting down TLS >SMTP(close)>> > LOG: MAIN > > One of the things we were thinking is, is that name of the LB is not in > the SAN cert of the out going spam server. > The other thing we realized is, we do not do / use SSL termination on > the haproxy. Do we need to do that? > > We are not an experts on TLS and crypto protocols. > > If anyone can help. It would be great. > > Kindest regards and many thanks. > Brent Clark > > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: DNS resolution problem since 1.8.14
Hey Patrick, Have you looked at the fixes in 1.8.16? They sound kinda-sorta related to your problem ... J On Sun, 23 Dec 2018 at 16:17, Patrick Valsecchi wrote: > I did a tcpdump. My config is modified to point to a local container (www) > in a docker compose (I'm trying to simplify my setup). You can see the DNS > answers correctly: > > 16:06:00.181533 IP (tos 0x0, ttl 64, id 63816, offset 0, flags [DF], proto > UDP (17), length 68) > 127.0.0.11.53 > localhost.40994: 63037 1/0/0 www. A 172.20.0.17 (40) > > Could it be related to that? > https://github.com/haproxy/haproxy/commit/8d4e7dc880d2094658fead50dedd9c22c95c556a > On 23.12.18 13:59, Patrick Valsecchi wrote: > > Hi, > > Since haproxy version 1.8.14 and including the last 1.9 release, haproxy > puts all my backends in MAINT after around 31s. They first work fine, but > then they are put in MAINT. > > The logs look like that: > > <149>Dec 23 12:45:11 haproxy[1]: Proxy www started. > <149>Dec 23 12:45:11 haproxy[1]: Proxy plain started. > [NOTICE] 356/124511 (1) : New worker #1 (8) forked > <150>Dec 23 12:45:13 haproxy[8]: 89.217.194.174:49752 > [23/Dec/2018:12:45:13.098] plain www/linked 0/0/16/21/37 200 4197 - - > 1/1/0/0/0 0/0 "GET / HTTP/1.1" > [WARNING] 356/124542 (8) : Server www/linked is going DOWN for maintenance > (DNS timeout status). 0 active and 0 backup servers left. 0 sessions > active, 0 requeued, 0 remaining in queue. > <145>Dec 23 12:45:42 haproxy[8]: Server www/linked is going DOWN for > maintenance (DNS timeout status). 0 active and 0 backup servers left. 0 > sessions active, 0 requeued, 0 remaining in queue. > [ALERT] 356/124542 (8) : backend 'www' has no server available! > <144>Dec 23 12:45:42 haproxy[8]: backend www has no server available! 
> > I run haproxy using docker: > > docker run --name toto -ti --rm -v > /home/docker-compositions/web/proxy/conf.test:/etc/haproxy/:ro -p 8080:80 > haproxy:1.9 haproxy -f /etc/haproxy/ > > And my config is this: > > global > log stderr local2 > chroot /tmp > pidfile /run/haproxy.pid > maxconn 4000 > max-spread-checks 500 > > master-worker > > user nobody > group nogroup > > resolvers dns > nameserver docker 127.0.0.11:53 > hold valid 1s > > defaults > mode http > log global > option httplog > option dontlognull > option http-server-close > option forwardfor except 127.0.0.0/8 > option redispatch > retries 3 > timeout http-request 10s > timeout queue 1m > timeout connect 10s > timeout client 10m > timeout server 10m > timeout http-keep-alive 10s > timeout check 10s > maxconn 3000 > default-server init-addr last,libc,none > > errorfile 400 /usr/local/etc/haproxy/errors/400.http > errorfile 403 /usr/local/etc/haproxy/errors/403.http > errorfile 408 /usr/local/etc/haproxy/errors/408.http > errorfile 500 /usr/local/etc/haproxy/errors/500.http > errorfile 502 /usr/local/etc/haproxy/errors/502.http > errorfile 503 /usr/local/etc/haproxy/errors/503.http > errorfile 504 /usr/local/etc/haproxy/errors/504.http > > backend www > option httpchk GET / HTTP/1.0\r\nUser-Agent:\ healthcheck > http-check expect status 200 > default-server inter 60s fall 3 rise 1 > server linked www.topin.travel:80 check resolvers dns > > frontend plain > bind :80 > > http-request set-header X-Forwarded-Proto http > http-request set-header X-Forwarded-Host %[req.hdr(host)] > http-request set-header X-Forwarded-Port %[dst_port] > http-request set-header X-Forwarded-For %[src] > http-request set-header X-Real-IP %[src] > > compression algo gzip > compression type text/css text/html text/javascript > application/javascript text/plain text/xml application/json > > # Forward to the main linked container by default > default_backend www > > > Any idea what is happening? 
I've tried to increase the DNS resolve timeout > to 5s and it didn't help. My feeling is that the newer versions of haproxy > cannot talk with the DNS provided by docker. > > Thanks > > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Http HealthCheck Issue
On Wed, 19 Dec 2018 at 19:23, UPPALAPATI, PRAVEEN wrote: > > Hmm. Wondering why do we need host header? I was able to do curl without the > header. I did not find anything in the doc. "curl" automatically adds a Host header, derived from the URL you request, so you never had to set one by hand; haproxy's health checks only send exactly what you configure.
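[Editor's note: hence the usual fix for a "No Host" 400 on HTTP/1.1 health checks — spell the header out in the check itself. This sketch reuses the backend/server names from the log line in this thread; the path and hostname are illustrative, and this is the pre-2.2 single-line `option httpchk` syntax:]

```haproxy
backend bk_8093_read
    option httpchk GET /healthcheck.txt HTTP/1.1\r\nHost:\ www.example.com
    http-check expect status 200
    server primary8093r 10.0.0.20:8093 check
```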
Re: Http HealthCheck Issue
On Tue, 18 Dec 2018 at 14:56, UPPALAPATI, PRAVEEN wrote: > > wcentral/com.att.swm.attpublic/healthcheck.txt HTTP/1.1\r\nAuthorization:\ > Basic\ > > [Dec 18 05:22:51] Health check for server bk_8093_read/primary8093r > failed, reason: Layer7 wrong status, code: 400, info: "No Host", check > duration: 543ms, status: 0/2 DOWN Hey there, Praveen. This log line is literally telling you what your problem is! I know different folks like the satisfaction of discovering their own solutions, so I'll ask before simply telling you the solution: do you need help in finding the error hidden in that log line, or can you manage to fix it? All the best, Jon -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: HA-Proxy configuration
On Wed, 10 Oct 2018 at 07:08, anjireddy.komire...@wipro.com < anjireddy.komire...@wipro.com> wrote: > Hi Team, > > > I am looking for HA-Proxy configuration help in our project, can I know > someone who can give more information on configuration using 2 different > HA-Proxy > servers for high availability. > > > Feel free to contact me on - 9849916124 > Hey there, Welcome to the public mailing list for users of the open source haproxy tool. You'd probably do best by posting the configuration and HA setup as far as you've managed to get it going, and asking questions about specific problems you encounter along the way. You're more likely to get help via email than via telephone! Here is the starter guide for the current stable version: http://cbonte.github.io/haproxy-dconv/1.8/intro.html. There are links along the top of that page to the configuration and management manuals, which will be of interest as you evolve your HA setup. If, instead, you feel you would like to trade time for money, and want to take advantage of a commercial support option, some are listed here: http://www.haproxy.org/#supp As a backstop, my UK company is already set up as a supplier inside Wipro's procurement system. Do get in touch if the routes I've mentioned above don't meet your needs :-) All the best, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Need Clarification
On Tue, 21 Aug 2018 at 17:53, Jordan Finsbel wrote: > Hello my name is Jordan Finsbell and interested to get involved That's great! What areas are you interested in? J -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: HaProxy question
Did you miss the two mails from Igor containing suggestions? Like this email, they went both to the list and directly to yourself. Maybe check your spam folder. J On Sat, 11 Aug 2018 at 02:28, Jonathan Opperman wrote: > *bump* > > Anyone? > > On Tue, 7 Aug 2018, 11:43 Jonathan Opperman, wrote: > >> Hi All, >> >> I am hoping someone can give me some tips and pointers on getting >> something working >> in haproxy that could do the following: >> >> I have installed haproxy and put a web server behind it, the proxy has 2 >> interfaces, >> eth0 (public) and eth1 (proxy internal) >> >> I've got a requirement where I want to only proxy some source ip >> addresses based on >> their source address so we can gradually add our customers to haproxy so >> that we can >> support TLS1.2 and strong ciphers >> >> I have added an iptables rule and can then bypass haproxy with: >> >> for ip in $INBOUNDEXCLUSIONS ; do >> ipset -N inboundexclusions iphash >> ipset -A inboundexclusions $ip >> done >> $IPTABLES -t nat -N HTTPSINBOUNDBYPASS >> $IPTABLES -t nat -A HTTPSINBOUNDBYPASS -m state --state NEW -j >> LOG --log-prefix " [>] SOURCE TO DEMO BYPASSING HAPROXY" >> $IPTABLES -t nat -A HTTPSINBOUNDBYPASS -d 10.0.0.92 -p tcp >> --dport 443 -j DNAT --to $JONODEMO1:443 >> $IPTABLES -t nat -A PREROUTING -m set ! --match-set >> inboundexclusions src -d 10.0.0.92 -p tcp --dport 443 -j HTTPSINBOUNDBYPASS >> >> Testing was done and I was happy with the solution. I then had a >> requirement >> to have a proxy with multiple IP addresses on eth0 (so created eth0:1 >> eth0:2) etc >> and changed my haproxy frontend config from bind 0.0.0.0:443 transparent >> to bind 10.0.0.92:443 transparent but now my dnat doesn't work if haproxy >> is running; if I stop haproxy the traffic gets dnatted fine. 
>> >> I am not sure if I am being very clear in here but basically wanted to >> know if there is >> a way to do selective ssl offloading on the haproxy or bypass >> ssl offloading on the >> server that sits behind the proxy? This is required so that customers >> that do not support >> TLS1.2 and strong ciphers we can still let them connect so actually >> bypassing >> the ssl offloading on the proxy. >> >> Thanks very much for your time reading this. >> >> Regards, >> Jonathan >> >> -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Regarding HA proxy configuration with denodo
On Thu, 26 Jul 2018 at 07:12, aditya.ana...@wipro.com < aditya.ana...@wipro.com> wrote: > We have two different denodo servers installed on two machines (LINUX) > installed on AWS and one load balancer installed on one of those machines . > Can you please provide the steps required or the configuration that need to > be done to connect HA proxy with the available denodo servers . HA proxy > should be able to connect either of the denodo server available . > Hello. This is the public mailing list for users of the open source haproxy tool. You would be best served by posting the configuration as far as you've managed to get it going, and asking questions about specific problems you encounter along the way. Here is the starter guide for the current stable version: http://cbonte.github.io/haproxy-dconv/1.8/intro.html. There are links along the top of that page to the configuration and management manuals. If, instead, you feel you would like to trade time for money, and want to take advantage of a commercial support option, some are listed here: http://www.haproxy.org/#supp As a backstop, my UK company is already set up as a supplier inside Wipro's procurement system. Do get in touch if the routes I've mentioned above don't meet your needs :-) All the best, Jonathan > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Help with environment variables in config
No. Sudo doesn't pass envvars through to its children by default: https://stackoverflow.com/questions/8633461/how-to-keep-environment-variables-when-using-sudo Read that page *and* the comments - in particular be aware that you have to request (at the CLI) that sudo preserve envvars, and you also have to have been granted permission to do this, via the sudoers config file. If this is all sounding a bit complicated, that's because it is. You've chosen a relatively uncommon way of running haproxy - directly, via sudo. Consider running via an init script or systemd unit (?) or, failing that, just a script which is itself the sudo target, which sets the envvars in the privileged environment. J On Sat, 21 Jul 2018 at 17:31, jdtommy wrote: > would this chain of calls not work? > > ubuntu@ip-172-31-30-4:~$ export GRAPH_ADDRESS=graph.server.com > ubuntu@ip-172-31-30-4:~$ export GRAPH_PORT=8182 > ubuntu@ip-172-31-30-4:~$ sudo haproxy -d -V -f /etc/haproxy/haproxy.cfg > > On Sat, Jul 21, 2018 at 3:26 AM Igor Cicimov < > ig...@encompasscorporation.com> wrote: > >> On Sat, Jul 21, 2018 at 7:12 PM, Jonathan Matthews < >> cont...@jpluscplusm.com> wrote: >> >>> On Sat, 21 Jul 2018 at 09:12, jdtommy wrote: >>> >>>> I am setting them before I start haproxy in the terminal. I tried both >>>> starting it as a service and starting directly, but neither worked. It >>>> still would not forward it along. >>>> >>> >>> Make sure that, as well as setting them, you're *exporting* the envvars >>> before asking a child process (i.e. haproxy) to use them. >>> >>> J >>> -- >>> Jonathan Matthews >>> London, UK >>> http://www.jpluscplusm.com/contact.html >>> >> >> As Jonathan said, plus make sure they are included/exported in the init >> script or systemd file for the service. >> >> > > -- > Jarad Duersch > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
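[Editor's note: for completeness, `sudo` can also be told to keep specific variables with `--preserve-env=GRAPH_ADDRESS,GRAPH_PORT`, subject to sudoers policy. Once the variables do reach haproxy's environment, the config can reference them as below — a sketch using the poster's variable names; check your haproxy version's docs for `presetenv` availability:]

```haproxy
global
    # Fallback value, applied only if the variable isn't already
    # set in haproxy's environment
    presetenv GRAPH_PORT 8182

backend graph
    mode http
    server graph1 "${GRAPH_ADDRESS}:${GRAPH_PORT}" check
```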
Re: Help with environment variables in config
On Sat, 21 Jul 2018 at 09:12, jdtommy wrote: > I am setting them before I start haproxy in the terminal. I tried both > starting it as a service and starting directly, but neither worked. It > still would not forward it along. > Make sure that, as well as setting them, you're *exporting* the envvars before asking a child process (i.e. haproxy) to use them. J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Setting up per-domain logging with haproxy
Hey Shawn, On 17 July 2018 at 19:59, Shawn Heisey wrote: [snip] > Can haproxy be configured to create multiple logfiles? Can the filename > of each log be controlled easily in the haproxy config? Can I use > dynamic info for the logfile name like the value in the Host header? Haproxy has absolutely nothing to do with the logfile creation! It doesn't name them, rotate them or write into them. That's *entirely* your local syslog daemon's responsibility - configure it appropriately, and it'll do what you want. Here's someone from 2011 doing exactly that: https://tehlose.wordpress.com/2011/10/10/a-log-file-for-each-virtual-host-with-haproxy-and-rsyslog/ > The *format* of the haproxy logfile is fine as it is, except that I > would like to have more than the 1024 bytes that syslog allows. Read the haproxy docs on this - you want to tune the "length" parameter: http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-log As the docs say: some syslog servers allow messages >1024, some don't. Use one that does :-) Cheers, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
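[Editor's note: the rsyslog side of the linked approach looks roughly like the fragment below — a dynamic filename template fed by `dynaFile`. This sketch splits per sending host just to show the mechanism; splitting on the HTTP Host header, as asked, would additionally need haproxy to capture that header into its log line (via a custom log-format or header capture) and rsyslog to extract it into the template. Paths and names are illustrative:]

```rsyslog
template(name="perhost" type="string"
         string="/var/log/haproxy/%hostname%.log")
if $programname == "haproxy" then {
    action(type="omfile" dynaFile="perhost")
    stop
}
```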
Re: Need Help!
You may not have had many replies as your email was marked as spam. You might want to address this by, amongst other things, using plain text and not HTML. On 24 June 2018 at 18:32, Ray Jender wrote: > I am sending rtmp from OBS with the streaming set to rtmp://”HAproxy server > IP”:1935/LPC1 > frontend rtmp-in > mode tcp > acl url_LPCX path_beg -i /LPC1/ > use_backend LPC1-backend if url_LPCX > And here is the log after restarting HAproxy with mode=http: > And here is the log after restarting HAproxy with mode=tcp: You can't usefully use HTTP mode, as the traffic isn't HTTP. Haproxy doesn't speak RTMP so, in TCP mode, haproxy doesn't know how to extract path information (or anything protocol-specific) from the traffic. It can't evaluate the ACL "url_LPCX", so you can't select a backend based on it. Your best option is to have 4 frontends (or listeners) on 4 different ports, and route using that information. Jonathan
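As a sketch of the four-listener approach (ports and backend addresses are illustrative, and untested):

```
# One TCP listener per RTMP stream; OBS publishes to the matching port.
listen LPC1
    mode tcp
    bind :1935
    server media1 10.0.0.11:1935 check

listen LPC2
    mode tcp
    bind :1936
    server media2 10.0.0.12:1935 check

# ... and so on for LPC3/LPC4 on :1937/:1938
```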
Re: [PATCH] REGTEST: stick-tables: Test expiration when used with table_*
On Thu, 21 Jun 2018 at 19:45, Willy Tarreau wrote: > Oh indeed I didn't even notice! The correct solution is to use the > example.com domain for this, as explained in RFC2606/6761. No other > domain possibly pointing to a valid location now or in the future > should appear in test nor example files [Gmail on mobile; forgive any formatting fubar] Example\.com resolves. There's a "you can use this domain in documentation" site there. *Someone* is absorbing the traffic to that domain - I suggest not putting it in .vtc files :-) I think the same RFC reserves .invalid as a TLD. Perhaps missing.haproxy.invalid for when a DNS entry needs not to exist, and ... something else for when it needs a real backend? I'm out of ideas on that 2nd use case ... J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: [Feature request] Call fan-out to all endpoints.
On 10 June 2018 at 08:44, amotz wrote: > I found myself needing the options to do "fantout" for a call. Meaning > making 1 call to haproxy and have it pass that call to all of the endpoint > currently active. > I don't mind implementing this myself and push to code review Is this a > feature you would be interested in ? Hey Amotz, I'm merely an haproxy user (not a dev and nothing to do with the project from a feature/code/merging point of view), but I'd be interested in using this. I feel like an important part of it would be how you'd handle the merge of the different server responses. I.e. the fan-in part. I can see various merge strategies which would be useful in different situations. e.g. "Reply with *this* backend's response but totally ignore this other backend's response" could be useful in a logging/audit scenario. "Merge the response bodies in this defined order" could be useful for structured data/responses being assembled. "Merge the response bodies in any order, so long as they gave an HTTP response code in the range of X-Y" could be useful for unstructured or self-contained data (e.g. a catalog API). "Merge these N distinct JSON documents into one properly formed JSON response" could be really handy, but would obviously move haproxy's job up the stack somewhat, and might well be an anti-feature! I could have used all the above strategies at various points in my career. I think all but the first strategy might well be harder to implement, as you'll have to cater for a situation where you've received a response but the admin's configured merging strategy dictates that you can't serve the response to the requestor yet. You'll have to find somewhere to cache entire individual response bodies for an amount of time. I don't have any insight into doing that - I can just see that it might be ... interesting :-) If Willy and the rest of the folks who'd have to support this in the future feel like this feature is worth it, please take this as an enthusiastic "yes please!" from a user! Jonathan
Re: JWT payloads break b64dec convertor
On Mon, 28 May 2018 at 14:26, Willy Tarreau wrote: > On Mon, May 28, 2018 at 01:43:41PM +0100, Jonathan Matthews wrote: > > Improvements and suggestions welcome; flames and horror -> /dev/null ;-) > > Would anyone be interested in adding two new converters for this, > working exactly like base64/b64dec but with the URL-compatible > base64 encoding instead ? We could call them : > > u64dec > u64enc I like that idea, and have already retrieved my K&R from the loft :-) J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: JWT payloads break b64dec convertor
On 28 May 2018 at 12:32, Jonathan Matthews wrote: > I think with your points and ccripy's sneaky (kudos!) padding > insertion, I can do something which suffices for my current audit > needs. For the list, here's my working v1 that I ended up with. I'm sure various things can be improved! :-) I couldn't get ccripy's concat() and length() converters to work, but I've stolen the basic idea - many thanks!

acl ACL_jwt_payload_4x_chars_long var(txn.jwtpayload) -m reg ^(.{4})+$
http-request set-var(txn.jwtpayload) req.hdr(jwt)
http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub($,=) if !ACL_jwt_payload_4x_chars_long
http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub($,=) if !ACL_jwt_payload_4x_chars_long
http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub(-,+,g)
http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub(_,/,g)
log-format " jwt-payload:%[var(txn.jwtpayload),b64dec]"

Improvements and suggestions welcome; flames and horror -> /dev/null ;-) Jonathan
Re: JWT payloads break b64dec convertor
On 28 May 2018 at 09:19, Adis Nezirovic wrote: > On 05/26/2018 04:27 PM, Jonathan Matthews wrote: >> Hello folks, >> >> The payload (and other parts) of a JSON Web Token (JWT, a popular and >> growing auth standard: https://tools.ietf.org/html/rfc7519) is base64 >> encoded. >> >> Unfortunately, the payload encoding (specified in >> https://tools.ietf.org/html/rfc7515) is defined as the "URL safe" >> variant. This variant allows for the lossless omission of base64 >> padding ("=" or "=="), which the haproxy b64dec convertor doesn't >> appear to be able cope with. The result of > > Jonathan, > > It's not just padding, urlsafe base64 replaces '+' with '-', and '/' > with '_'. You're right. I'd noticed those extra substitutions but, for some reason I'd assumed they were applied after decoding. Brain fart! > For now, I guess the easiest way would be to write a simple > converter in Lua, which just returns the original string, and send > payload somewhere for further processing. One nice thing about the JWT format is that it's unambiguously formatted as "header.payload.signature", so the payload can be trivially parsed out of a sacrificial header with a http-request replace-header copy-of-jwt [^.]+\.([^.]+)\..+ \1 ... or some such manipulation. Here, for clarity, I'm double-passing it through an abns@ frontend-backend-listen chain, hence the additional header and not a variable, as per your example. I think with your points and ccripy's sneaky (kudos!) padding insertion, I can do something which suffices for my current audit needs. I suspect you're right that a Lua convertor is probably the more supportable way forwards, however. Many thanks, both! J
JWT payloads break b64dec convertor
Hello folks, The payload (and other parts) of a JSON Web Token (JWT, a popular and growing auth standard: https://tools.ietf.org/html/rfc7519) is base64 encoded. Unfortunately, the payload encoding (specified in https://tools.ietf.org/html/rfc7515) is defined as the "URL safe" variant. This variant allows for the lossless omission of base64 padding ("=" or "=="), which the haproxy b64dec convertor doesn't appear to be able to cope with. The result of log-format %[,b64dec] ... when faced with such an unpadded string is just "-", which I take to mean decoding failed. I believe it's failing on line 84 of src/base64.c. I've tried and failed to use a regex convertor to add padding to the end, based on looking at the string's remainder after matching clusters with '(.{4})+'. Annoyingly I can't make this work in the regsub convertor as I believe it would require the use of grouping parentheses, which aren't permitted by the parser currently. I'm personally interested in this for logging the contents of JWT payloads for audit. Is anyone else working with JWT in haproxy, in this or any other context, and could share any tactics for dealing with this problem? Many thanks! Jonathan
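For what it's worth, the repair the thread converges on can be sketched in plain shell - URL-safe characters swapped back, padding restored, then a standard decode. The sample payload here is illustrative:

```shell
# Repair a URL-safe, unpadded base64 string (as in a JWT payload), then decode.
s='eyJzdWIiOiIxMjM0NTY3ODkwIn0'           # illustrative sample payload
s=$(printf '%s' "$s" | tr '_-' '/+')      # undo the URL-safe character swaps
case $(( ${#s} % 4 )) in                  # restore the dropped '=' padding
  2) s="$s==" ;;
  3) s="$s=" ;;
esac
decoded=$(printf '%s' "$s" | base64 -d)
echo "$decoded"   # → {"sub":"1234567890"}
```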
Re: WAF with HA Proxy.
On Wed, 9 May 2018 at 18:43, Mark Lakes wrote: > For commercial purposes, see Signal Sciences Next Gen WAF solution: > https://www.signalsciences.com/waf-web-application-firewall/ > That page says it supports "Nginx, Nginx Plus, Apache and IIS". Does it integrate with HAProxy? Via what mechanism? J > <https://www.signalsciences.com/waf-web-application-firewall/> > <https://www.signalsciences.com/waf-web-application-firewall/> > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Use SNI with healthchecks
[Top post; fight me] You could either read an environment variable inherited from outside the process, or use "setenv" or "presetenv" as appropriate to DRY your config out. The fine manual describes how you would refer to this envvar in section 2.3, regardless of which of those options you use to set it. J On Tue, 24 Apr 2018 at 16:45, GALLISSOT VINCENT wrote: > I migrated to 1.8 and sni + check-sni *are working fine* with the > following code: > > > > 88 > > backend cloudfront > http-request set-header Host 123456789abcde.cloudfront.net > option httpchk HEAD /check HTTP/1.1\r\nHost:\ > 123456789abcde.cloudfront.net > server applaunch 123456789abcde.cloudfront.net:443 check resolvers > mydns no-sslv3 ssl verify required ca-file ca-certificates.crt sni > req.hdr(host) > check-sni 123456789abcde.cloudfront.net > > 88 > > > Obviously I cannot use %[req.hdr(host)] for "option httpchk" nor for > "check-sni" directives. > > > Do you know how can I define only one time my Host header in the code > above ? > > > Thanks, > > Vincent > > > -- > *De :* GALLISSOT VINCENT > *Envoyé :* lundi 23 avril 2018 17:33 > *À :* Lukas Tribus > *Cc :* haproxy@formilux.org > *Objet :* RE: Use SNI with healthchecks > > > Thank you very much for your answers, > > I'll migrate to 1.8 asap to fix this. > > > Vincent > > > > -- > *De :* lu...@ltri.eu de la part de Lukas Tribus < > lu...@ltri.eu> > *Envoyé :* lundi 23 avril 2018 17:18 > *À :* GALLISSOT VINCENT > *Cc :* haproxy@formilux.org > *Objet :* Re: Use SNI with healthchecks > > Hello Vincent, > > > On 23 April 2018 at 16:38, GALLISSOT VINCENT > wrote: > > Does anybody know how can I use healthchecks over HTTPS with SNI support > ? 
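As a sketch of what the "setenv" version might look like - assuming the section 2.3 expansion applies in each of these spots, with an illustrative variable name, and entirely untested:

```
global
    setenv CHECK_HOST 123456789abcde.cloudfront.net

backend cloudfront
    http-request set-header Host "${CHECK_HOST}"
    option httpchk HEAD /check HTTP/1.1\r\nHost:\ "${CHECK_HOST}"
    server applaunch "${CHECK_HOST}:443" check resolvers mydns no-sslv3 ssl verify required ca-file ca-certificates.crt sni req.hdr(host) check-sni "${CHECK_HOST}"
```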
> > You need haproxy 1.8 for this, it contains the check-sni directive > which allows to set SNI to a specific string for the health check: > > http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-check-sni > > > > > Regards, > > Lukas > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Version 1.5.12, getting 502 when server check fails, but server is still working
On Sun, 15 Apr 2018 at 20:56, Shawn Heisey wrote: > Would I need to upgrade beyond 1.5 to get that working? I don't have any info about your precise problem, but here's a quote from Willy's 1.9 thread within the last couple of months: "Oh, before I forget, since nobody asked for 1.4 to continue to be maintained, I've just marked it "unmaintained", and 1.5 now entered the "critical fixes only" status. 1.4 will have lived almost 8 years (1.4.0 was released on 2010-02-26). Given that it doesn't support SSL, it's unlikely to be found exposed to HTTP traffic in sensitive places anymore. If you still use it, there's nothing wrong for now, as it's been one of the most stable versions of all times. But please at least regularly watch the activity on the newer ones and consider upgrading it once you see that some issues might affect it. For those who can really not risk to face a bug, 1.6 is a very good candidate now and is still well supported 2 years after its birth." > > You might get a solution to this and your other 1.5 problem on the list - it has a very helpful and knowledgeable population :-) But if you can possibly upgrade to 1.6 or later, I suspect the frequency of answers you get and the flexibility they'll have to help you will improve markedly. HTH! J -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: resolvers - resolv.conf fallback
On 14 April 2018 at 05:13, Willy Tarreau wrote: > On Fri, Apr 13, 2018 at 03:48:19PM -0600, Ben Draut wrote: >> How about 'parse-resolv-conf' for the current feature, and we reserve >> 'use-system-resolvers' for the feature that Jonathan described? > > Perfect! "parse" is quite explicit at least! Works for me :-)
Re: resolvers - resolv.conf fallback
On Fri, 13 Apr 2018 at 15:09, Willy Tarreau wrote: > On Fri, Apr 13, 2018 at 08:01:13AM -0600, Ben Draut wrote: > > How about this: > > > > * New directive: 'use_system_nameservers' > > OK, just use dashes ('-') instead of underscores as this is what we mostly > use on other keywords, except a few historical mistakes. I'm *definitely* not trying to bikeshed here, but from an Ops perspective a reasonable implication of "use_system_nameservers" would be for the resolution process to track the currently configured contents of resolv.conf over time. AIUI this will actually parse once, at proxy startup, which I suggest should be made more obvious in the naming. If I'm wrong, or splitting hairs, please ignore! J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Health Checks not run before attempting to use backend
On Fri, 13 Apr 2018 at 00:01, Dave Chiluk wrote: > Is there a way to force haproxy to not use a backend until it passes a > healthcheck? I'm also worried about the side affects this might cause as > requests start to queue up in the haproxy > I asked about this in 2014 ("Current solutions to the soft-restart-healthcheck-spread problem?") and I don't recall seeing a fix since then. Very interested in whatever you find out! J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Logs full TCP incoming and outgoing packets
On 10 April 2018 at 00:04, wrote: > Hello everybody, > > For an application, I use haproxy in TCP mode but I would need to log, from > the main load balancer machine, all the TCP transactions (incoming packets > sent to the node then the answer that is sent back from the node to the > client through the haproxy load balancer machine). > > Is it possible to do such a thing ? I started to dig in the ML and found few > information about capturing the tcp-request, which does not work for now... > and I need the response as well... so preferred to ask if someone have got > an experience doing this. Sure, it will have a performance penalty but > exhaustive logging is more important than that and it it the best solution > to avoid a lot of changes in the existing infrastructure we just > load-balanced. I don't believe this is possible inside haproxy right now. If I *had* to do this, I'd start by saying "no", and then I'd work out how to run a tcpdump process on the machine with carefully tuned filters and a -w parameter. Then I'd drink something strong. J
Re: New HTTP action: DNS resolution at run time
On 30 January 2018 at 09:04, Baptiste wrote: > Hi all, > > Please find enclosed a few patches which adds a new HTTP action into > HAProxy: do-resolve. > This action can be used to perform DNS resolution based on information found > in the client request and the result is stored in an HAProxy variable (to > discover the IP address of the server on the fly or logging purpose, > etc...). Hello folks, Did this feature ever go anywhere? I'm trying to write some ACLs matching X-Forwarded-For headers against a DNS record, and I *think* this set of patches is my only way to achieve this, without using an external lookup process to modify ACLs via the admin socket ... Many thanks! Jonathan
Re: skip logging some query parameters during GET request
I *think* you're going to have to fully construct your logging format with a whitelist of params you want, rather than an exclusion list. I'm not sure you can scope this by HTTP method, however. Given your use of this as a forward proxy, I assume you could scope it by Host header ... but that *might* require a double pass through haproxy, with an "abns@" style listener containing the logging format configuration. HTH, J On Tue, 13 Mar 2018 at 12:51, Dave Cottlehuber wrote: > Hi, > > I'm using haproxy to handle TLS termination to a 3rd party API that > requires authentication (username/password) to be passed as query > parameters to a GET call. > > I want to log the request as usual, just not all the query parameters. > Obviously for a POST the parameters would not be logged at all, but is it > possible to teach haproxy to exclude one specific query parameters on a GET > request? > > the request: > > GET /api?username=seriously&password=ohnoes&command=locate&item=chocolat > > desired log something like: > > GET /api?username=seriously&command=locate&item=chocolat > > I can do this downstream in rsyslog but I'd prefer to cleanse the urls up > front. > > A+ > Dave > > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
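As a sketch of the scrub-before-logging idea - variable name and log fields illustrative, assuming a haproxy version with the regsub converter, and untested:

```
# Log a scrubbed copy of the URL instead of the raw request line.
http-request set-var(txn.scrubbed) url,regsub(password=[^&]*,password=REDACTED)
log-format "%ci:%cp [%t] %ft %b/%s %ST %B %{+Q}[var(txn.scrubbed)]"
```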
Re: BUG/MINOR: limiting the value of "inter" parameter for Health check
On Wed, 7 Mar 2018 at 09:50, Nikhil Kapoor wrote > As currently, no parsing error is displayed when larger value is given to > "inter" parameter in config file. > > After applying this patch the maximum value of “inter” is set to 24h (i.e. > 86400000 ms). > I regret to inform you, with no little embarrassment, that some years ago I designed a system which relied upon this parameter being set higher than 24 hours. I was not proud of this system, and it served absolutely minimal quantities of traffic ... but it was a valid setup. What's the rationale for having *any* maximum value here - saving folks from unintentional misconfigurations, or something else? J -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: Active-Passive HAProxy Issue enquiry
On 15 February 2018 at 10:08, Swarup Saha wrote: > Hi, > I need help from HAProxy organization. Hello there. This is the haproxy user mailing list. It is received and read by a wider range of users across the world, many of whom read it in an individual capacity. If you want *commercial* support, then here is a link to organisations which provide it: https://www.haproxy.org/#supp > We all know that when we configure HAProxy in the Active-Passive manner then > there is a VIP. Outside service will access the VIP and the traffic will be > routed to appropriate inner services via Active Load Balancer. > > I have configured one Active Load Balancer in Site 1 and Passive Load > Balancer in Site 2, They are connected via LAN, Outside traffic will be > routed through VIP. > > Now, my question is if the LAN connectivity between the Active-Passive > HAProxy goes down will the VIP still exist? This is *entirely* the concern of the technology and methods you use to create the VIP across multiple haproxy instances in your different sites. It isn't under the control of haproxy, which deals with failover of the *backend* services you're load balancing. Failure of an entire loadbalancer, and how your setup deals with that, is *100%* a concern of the technology (not haproxy) with which you've chosen to implement resilience. People *might* be able to assist on this list if you gave some more detail about the technologies you're using. HTH! J
Re: How can I map bindings to the correct backend?
Unless I'm missing something, wouldn't you be rather better off just having a dedicated frontend for each set of ports that forwards to each distinct backend server? Or are you doing this at webscale, or something? :-) J -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: cannot bind socket - Need help with config file
On 11 January 2018 at 00:03, Imam Toufique wrote: > So, I have everything in the listen section commented out: > > frontend main >bind :2200 >default_backend sftp >timeout client 5d > > > #listen stats > # bind *:2200 > # mode tcp > # maxconn 2000 > # option redis-check > # retries 3 > # option redispatch > # balance roundrobin > > #use_backend sftp_server > backend sftp > balance roundrobin > server web 10.0.15.21:2200 check weight 2 > server nagios 10.0.15.15:2200 check weight 2 > > Is that what I need, right? I suspect you won't need to have your *backend*'s ports changed to 2200. Your SSH server on those machines is *probably* also your SFTP server. I don't recall if you can serve a different/sync'd host key per port in sshd, but this might be a reason to run a different daemon on a higher port as you're doing. As an aside, it's not clear why you're trying to do this. You've already hit the host-key-changing problem, and unless you have a *very* specific use case, your users will hit the "50% of the time I connect, my files have gone away" problem soon. So you've probably got to solve the shared-storage problem on your backends ... which turns them into stateless SFTP-to-FS servers. In my opinion adding haproxy as a TCP proxy in your architecture adds very little, if anything. If I were you, I'd strongly consider just sync'ing the same host key to each server, putting their IPs in a low-TTL DNS record, and leaving haproxy out of the setup. J
Re: cannot bind socket - Need help with config file
On Mon, 8 Jan 2018 at 08:29, Imam Toufique wrote: > [ALERT] 007/081940 (1416) : Starting frontend sftp-server: cannot bind > socket [0.0.0.0:22] > [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [ > 10.0.15.23:22] > [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [ > 0.0.0.0:22] > I would strongly suspect that the server already has something bound to port 22. It's probably your SSH daemon. You'll need to fix that, by dedicating either a different port or interface to the SFTP listener. J > -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
Re: haproxy without balancing
On 5 January 2018 at 10:28, Johan Hendriks wrote: > BTW if this is the wrong list please excuse me. This looks to me like it might be the right list :-) > We have an application running over multiple servers which all have > there own subdomain, there are about 12 of them. > We can live without loadbalancing, so there is no failover, each server > serves a couple of subdomains. What protocols are these servers serving? - HTTP - HTTPS - if HTTPS, do you control the TLS certificates and their private keys? - Something else? - if something else, what? > At this moment every server has its own ip, and so every subdomain has a > different DNS entry. What we want is a single point of entry and use > haproxy to route traffic to the right backend server. Are the DNS entries for every subdomain under your control? How painful would it be to change one of them? How painful would it be to change all of them? > Replacing an server is not easy at the moment. We have a lot of history > to deal with. We are working on it to leave that behind but till then we > need an solution. > > I looked at this and i think i have two options. > Create for each server in the backend an ip on the haproxy machine and > connect a frontend for that IP to the desired backend server. > This way we still have multiple ipadresses, but they can stay the same > if servers come and go. > > Secondly we could use a single ip and use ACL to route the traffic to > the right backend server. > The problem with the second option is that we have around 2000 different > subdomains and this number is still growing. So my haproxy config will > then consists over 4000 lines of acl rules. > and I do not know if haproxy can deal with that or if it will slowdown > request to much. Haproxy will happily cope with that number of ACLs, but at first glance I don't think you need to do it that way. 
Assuming you're using HTTP/S, you would probably be able to use a map, as described in this blog post: https://www.haproxy.com/blog/web-application-name-to-backend-mapping-in-haproxy/ Also, assuming you're using HTTP/S, if you can relatively easily change DNS for all the subdomains to a single IP then I would *definitely* do that. If you're using HTTPS, then SNI client support (https://en.wikipedia.org/wiki/Server_Name_Indication#Support) would be something worth checking, but as a datapoint I've not bothered supporting non-SNI clients for several years now. All the best, J -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
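As a sketch of the map approach - file path and backend names illustrative, untested:

```
# subdomains.map: one "<hostname> <backend-name>" pair per line, e.g.
#   app1.example.com  be_app1
#   app2.example.com  be_app2
frontend http-in
    bind :80
    use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/subdomains.map,be_default)]

backend be_default
    server fallback 192.0.2.10:80
```

Adding a subdomain then becomes a one-line change to the map file, not a new ACL pair.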
Re: Poll: haproxy 1.4 support ?
On 2 January 2018 at 15:12, Willy Tarreau wrote: > So please simply voice in. Just a few "please keep it alive" will be > enough to convince me, otherwise I'll mark it unmaintained. I don't use 1.4, but I do have a small reason to say please *do* mark it as unmaintained. The sustainability of haproxy is linked to the amount of work you (and a /relatively/ small set of people) both have to do and want to do. I would very much like it to continue happily, so I would vote to reduce your mental load and to mark 1.4 as unmaintained. Thank you for haproxy, and here's to a great 2018, with 1.8 and beyond :-) Jonathan
Re: Why HAProxy is not a web server?
On 27 November 2017 at 01:09, wrote: > Why HAProxy is not a web server? Because it's a load balancer. It talks to multiple other web servers, often called backends or origins, which provide the content for it to serve to consumers. HTH, J
Re: Tagging a 1.8 release?
On 20 October 2017 at 17:17, Willy Tarreau wrote: > I'd like to collect all the pending stuff by the end of next week and issue > a release candidate. Don't expect too much stability yet though, but your > tests and reports will obviously be welcome. Are you still finger-in-the-air aiming for a "November 2017" 1.8? I don't recall where I saw that quote, but I'm pretty sure it was an intention mentioned ... /somewhere/! No pressure - just wondering :-) J
Re: counters for specific http status code
On 12 Jul 2016 05:43, "Willy Tarreau" wrote: > That could possibly be ssh $host "halog -st < /var/log/haproxy.log" or > anything like this. On behalf of people running busy load balancers / edge proxies / etc, please don't do this ;-) Instead laptop$ halog -st <(ssh -C $balancer "cat /var/log/haproxy.log") ... is likely to be slightly kinder to my contended servers :-) J
Re: AWS ELB with SSL backend adds proxy protocol inside SSL stream
Hello Hector - On 5 May 2016 at 12:11, Hector Rivas Gandara wrote: > * If not, is there a better way to 'chain' the config as I did above. I don't have any insight into the protocol layering problem you're having, I'm afraid, but if you do end up with the chained solution you describe, I have a suggestion. Take a look at the "abns@" syntax and feature documented here: https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#bind. It's excellent for HAP->HAP links, as you're using. I'm using it in production *inside* Cloud Foundry, for the record :-) As an aside, I'd be interested in even a brief summary of how/if you resolved your problem, given that I've not seen it described on the list before. I wonder if you're the first to run into this specific problem ... All the best, Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html
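For reference, a minimal sketch of an abns@ chain between two proxies in the same process - names illustrative, untested:

```
frontend public
    bind :443
    default_backend to_inner

backend to_inner
    # hop over an abstract namespace socket instead of a TCP port
    server inner abns@inner send-proxy-v2

frontend inner
    bind abns@inner accept-proxy
    default_backend real_servers
```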
Re: Erroneous error code on wrong configuration.
On 29 Apr 2016 11:29, "Mayank Jha" wrote: > > I am facing the following in haproxy 1.5. I get the following error, with error code "SC" which is very misleading, for the below mentioned config. Why do you think it's misleading? > haproxy[6379]: 127.0.0.1:53010 [29/Apr/2016:12:05:40.552] my_frontend my_frontend/ -1/-1/-1/-1/1 503 212 - - SC-- 0/0/0/0/0 0/0 "GET / HTTP/1.1" > > With the following config. > > frontend my_frontend > bind :80 > acl global hdr(host) -i blablabla > use_backend my_backend if global > backend my_backend > server google www.google.com:80 Given that you don't alter the Host header before submitting the request to Google, I'm not sure what you're expecting to happen. I think there's a fair bit of extra information you'll need to provide before I (at least; not speaking for anyone else!) understand what your problem actually *is*. You're assuming we know more than we do about your setup, aims, and expected outcomes :-) J
Re: unique-id-header set twice
On 29 Apr 2016 06:55, "Willy Tarreau" wrote: > > On Fri, Apr 22, 2016 at 04:37:04PM +0200, Erwin Schliske wrote: > > Hello, > > > > for some of our services requests pass haproxy twice. As we have set the > > global option unique-id-header this header is added twice. [snip] > > I don't know what could cause this. Would you happen to have it in a > defaults section maybe, with your traffic passing through a frontend > and a backend ? If that's what causes it, I think we have a mistake > in the implementation and should ensure it's done only once, just like > x-forwarded-for. I /think/ you're talking at slight cross-porpoises! My reading of the OP is that when a request comes in to a frontend/listener with the configured unique-Id header already present, then a second UID header is added. My reading of your post, Willy, is that this would be a bug (which might suggest why unique-id-header isn't ACL-able?). But I may have misunderstood - you may be talking solely about when a request crosses a frontend/backend boundary, and not when the request comes in the front door anew (even if it was, as per the OP, a request coming back in directly from a backend). Am I right, both? I only ask because this has bugged me slightly in the past, and it'd be great to clear up the definition of the UID header option: When enabled, is the header's addition predicated on its initial absence? J
Re: Dynamic backend routing base on header
On 17 January 2016 at 17:54, Michel Blanc wrote: > Dear all, > > I am trying to get haproxy routing to a specific server if a header > (with the server nickname) is set. Can you adapt http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/ to achieve what you want? J
Re: SSL and Piranha conversion
On 8 September 2015 at 20:56, Daniel Zenczak wrote: > Hello Jonathan, > > Thank you for the response. That old gateway workstation is > not going to be used anymore (the HDDs failed on it and the RAID board > didn’t warn/detect/tell us when it happened). I have spun up Ubuntu Server > inside one of our Virtual Servers to act as the new Load Balancer. Is this > what you mean by migrating the hardware as well as the software? [on-list reply] Daniel - You have to swap out your hardware because it failed. You don't have to swap out your software as it has not failed. Whilst a move to HAProxy is a great plan, I would not be doing it whilst trying to fix your web servers' redundancy and bringing both web servers back into service. My professional advice in your situation would be to change the minimum number of things necessary to restore resilient service, which in this case sounds like only your hardware - whether you fix it by replacing the hardware or by virtualising the server. I would not include swapping Piranha for HAProxy and CentOS for Ubuntu in this work. I'd do both of those later. HTH, Jonathan
Re: SSL and Piranha conversion
On 8 Sep 2015 20:07, "Daniel Zenczak" wrote: > > Hello All, > > First time caller, short time listener. So this is the deal. My organization was running a CentOS box with Piranha on it to work as our load balancer between our two web servers. Well the CentOS box was a Gateway workstation from 2000 and it finally gave up the ghost. May I suggest you reconsider migrating your hardware and software at the same time, both whilst under pressure? It will be massively simpler to install your preexisting choice of (known "good") software on your new hardware. Jonathan
"stats uri" doesn't inherit from defaults sections
Hi all - A bit of lunchtime playing around today has exposed the fact that a "stats uri" in a defaults section has no effect on backends to which the defaults section /should/ apply. Stats-serving backends only obey the compile-time default ("/haproxy?stats") in my tests, until an explicit "stats uri" is placed inside the backend definition. The docs state that "stats uri" is valid in defaults sections, so let me ask: is this a documentation bug (which I'll happily submit a patch for!) or something else? To my mind, it absolutely makes sense to have this statement as settable in a defaults section. I've only tested this on the latest Debian backports version, 1.5.8, but I don't see anything related in the changelog since then which makes me think it's been fixed. The docs for 1.5.11 currently state it's a defaults-settable config statement. Cheers, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html
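In config terms, a minimal sketch of the symptom and the workaround (section names and the URI here are illustrative):

```
defaults
    mode http
    stats enable
    stats uri /my-stats        # documented as valid here, but had no effect in my tests

backend stats-serving
    stats uri /my-stats        # workaround: repeating it in the backend does take effect
```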
Re: How to profile "stats" web page users
I think you want ACL-driven "stats scope" statements, which don't exist to the best of my knowledge. In your case, rather than open a bunch of different ports, I'd give people different FQDNs to hit, and point a wildcard DNS record at a single port 80. (Well, a :443 with TLS, if I were doing it, but you're using :80 in your example)

DNS: *.haproxy-stats.example.com -> IP_ADDRESS

frontend stats
    bind IP_ADDRESS:80
    mode http
    option httplog
    compression algo gzip
    use_backend stats-foo if { hdr(host) foo.haproxy-stats.example.com }
    use_backend stats-bar if { hdr(host) bar.haproxy-stats.example.com }
    default_backend always_returns_400

defaults for-stats-backends-in-effect-until-next-defaults-section
    mode http
    option httplog
    stats enable
    stats uri /haproxystats
    stats refresh 60s
    stats show-legends
    stats scope .

backend stats-foo
    stats scope foo-frontend-1
    stats scope foo-backend-2
    stats auth user1:password1

backend stats-bar
    stats scope bar-frontend-1
    stats scope bar-frontend-2
    stats scope bar-backend-3
    stats auth user2:password2

defaults reset-defaults-disable-stats

(typed but not tested ...) Yes, this isn't too different from what you proposed :-) Note the use of multiple "defaults" sections to move as much common config as possible out of the individual backends. You might also find userlists handy: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.4 They'll let you make the "stats auth" definitions a fair bit cleaner, moving the user/password lists elsewhere in your configuration file. HTH, J
Re: How to profile "stats" web page users
Have you looked at "stats scope"? https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stats%20scope Jonathan
Re: Graceful shutdown
On 5 April 2015 at 10:33, Cohen Galit wrote: > How can I perform a graceful shutdown to HAProxy? > > I mean, not by killing process with pid. Please could you describe the behaviours you expect from a "graceful shutdown" which you don't get from killing the process? I would expect a `service haproxy stop`, which almost certainly translates to a `kill -TERM <pid>`, to be about as graceful as it gets ... (Note that haproxy also treats SIGUSR1 as a soft stop, letting existing connections finish before the process exits.)
Re: Environment variable in port part of peer definition not resolved
On 25 March 2015 at 23:14, Dennis Jacobfeuerborn wrote: > Hi, > I'm trying to make the haproxy configuration more dynamic using > environment variables and while this works for the definition of the pid > file and the stats socket when I try to use an env. variable as the port > of a peer definition I get an error: Given that `peer` explicitly references this envvar usage (http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#peer), are you sure you're exporting those exact envvars to the child process?
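For reference, a sketch of the form the `peer` docs describe; the variable names here are illustrative, and each one must actually be exported into the environment haproxy is started with (e.g. from the init script):

```
peers mypeers
    peer haproxy1 "${PEER_ADDR}:${PEER_PORT}"
```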
Re: Debian (wheezy) official backport stuck at 1.5.8?
On 10 March 2015 at 16:36, Vincent Bernat wrote: > ❦ 10 mars 2015 15:48 GMT, Jonathan Matthews : > >> http://backports.debian.org/wheezy-backports/overview/ reports that >> it's up to date with 1.5, but is only making 1.5.8 available. Does >> anyone have any insight into why this might be and how/if one might >> help the situation? > > To be in "wheezy-backports" a package has to be in "jessie" (the next > version of Debian). Currently, "jessie" is frozen because the release is > imminent, so it is not possible to push newer versions. Once "jessie" is > released, it will be possible to get more recent versions for "wheezy" > through "wheezy-backports-sloppy" (or "jessie-backports" if you upgrade > to "jessie"). > > Also note that critical fixes have been integrated in this version (in > "wheezy-backports"). See the changelog: > > https://tracker.debian.org/media/packages/h/haproxy/changelog-1.5.8-2~bpo70%2B1 > > Once 1.6~dev1 is released, I will push more repositories to give more > choices (1.4, 1.5, 1.6, all distributions, "stable" or "latest" > versions). Thank you, that's cleared it up. I had wondered if it was jessie-related - it's good to get confirmation :-) Jonathan
Debian (wheezy) official backport stuck at 1.5.8?
Hi all - http://backports.debian.org/wheezy-backports/overview/ reports that it's up to date with 1.5, but is only making 1.5.8 available. Does anyone have any insight into why this might be and how/if one might help the situation? Cheers, Jonathan
Re: Sharing configuration between multiple backends
On 9 March 2015 at 00:12, Thrawn wrote: > Hi, all. > > Is there a way to share configuration between multiple backends? > > The use case for this is that we would like to configure different response > headers for different parts of our application, based on the request URL, but > otherwise route traffic the same way. Specifically, we want to specify > 'X-Frame-Options: ALLOW-FROM ' across most of the application, but > just use 'X-Frame-Options: DENY' on the admin area. > > We could do this, of course, by sending the admin traffic to a different > backend, and setting the response header differently in that backend, but > then we'd need to repeat our server configuration, which is otherwise the > same. Something like this: > > frontend foo > listen x.x.x.x > acl admin url_beg /admin > default_backend foo > use_backend foo_admin if admin > > backend foo > rspadd "X-Frame-Options: ALLOW-FROM some-trusted-server.com" > complex > configuration > goes > here> > > backend foo_admin > rspadd "X-Frame-Options: DENY" > configuration > goes > here> > > To reduce the duplication, is it possible to have one backend delegate to > another, or specify a named list of servers that can be referenced from > different places? I don't know about your specific *question*, but to solve your specific *problem*, you might just use rspadd's conditional form:

frontend foo
    acl admin url_beg /admin
    rspadd "X-Frame-Options: DENY" if admin
    rspadd "X-Frame-Options: ALLOW-FROM some-trusted-server.com" unless admin
    default_backend whatever

As per https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#rspadd. Dictated but not tested ;-) Jonathan
Re: How to compare two haproxy.cfg files?
On 8 March 2015 at 18:46, Tom Limoncelli wrote: > The first step is to put the sections in a fixed order: First general, > the defaults, then each listen/frontend/backend sorted by name. That > works fine and has been a big help. Not a huge amount of help with your task, but don't forget that multiple defaults sections are valid, and take effect on the non-defaults sections following them - but only up to the /next/ defaults section. I.e. (IIRC!) with this:

defaults A
backend #1
listener #2
frontend #3
defaults B
listener #4

A's settings affect #1, #2 and #3, and B's settings affect #4. It would be different, and quite possibly materially so, if you concatenated all the defaults together at the top and only then defined #1-#4. HTH, Jonathan
Re: 1.5.9 crashes every 4 hours, like clockwork
On 11 Dec 2014 14:27, "David Adams" wrote: > > We are running 1.5.9 on Centos 6.5. It crashes 10 seconds (give or take a few seconds) after 1am, 5am, 9am, 1pm, 5pm and 9pm, like clockwork; let's call that CRASHTIME. Previously we'd been using 1.5.3 on the same hardware for some months without crashes. Once the crashes started we moved to 1.5.9 but they continue. If we manually restart it a minute or two before CRASHTIME it still crashes when CRASHTIME arrives a minute or two later. > > We've looked at all cron jobs that run on the server for anything that could be causing the problem but found nothing. We've even dumped a process list every 1 second in the minutes before and after CRASHTIME and there is nothing untoward. Traffic levels don't change and besides, that it happens every 4 hours at exactly the same time suggests it's not traffic related. Presumably that also rules out any kind of malformed request or similar causing it. I would check my ssh logs. In the absence of an on-system cron/at process doing this, I'd be looking /really/ externally :-) Perhaps disable sshd a few minutes before the "crash" and enable it a few minutes afterwards. I bet something like that (or a zabbix agent's cmd.run; or something else originating on another system) is screwing with you ... HTH, Jonathan
Re: Config reload to take out backend server still getting traffic
On 11 December 2014 at 07:58, Kasim wrote: > Hi, > > I am running haproxy on Ubuntu 14.04. After I added following config: > stick-table type ip size 2m expire 5m > stick on src > > Taking out a server and reloading haproxy still sends traffic to that server > even after the stick table expires. For example, I have > server s1 > server s2 > > After commenting s1 out and reloading config, s1 still gets traffic. This > does not happen without the stick-table and stick on config. > > Any pointer or explanation? Could not find it in the doc or online. I /suspect/ you'll find that, after the reload, there's an old haproxy process sticking around to deal with connections which clients are keeping open. This traffic will be going to both your s1 and s2 backends, but you're only noticing it on s1 as you're expecting it to have stopped completely. HTH, Jonathan
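One quick way to confirm that (the process name assumes a stock install) is to list all haproxy processes after the reload; more than one line means an old process is still hanging around draining connections:

```
pgrep -a haproxy
ps -o pid,lstart,args -C haproxy
```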
Re: Can't get HAProxy to support Forward Secrecy FS
On 8 December 2014 at 22:44, Sander Rijken wrote: > System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9 > > OpenSSL> version > OpenSSL 1.0.1 14 Mar 2012 > > > I'm currently using the following, started with the suggested [stanzas][1] > (formatted for readability, it is one long line in my config): > > bind 0.0.0.0:443 ssl crt mycert.pem no-tls-tickets ciphers \ > ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384: \ > > ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384: \ > > ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256: \ > AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3 > > [1]: https://gist.github.com/rnewson/8384304 > > ssllabs.com indicates FS is not used. When I disable all algorithms except > the ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so > something on the system doesn't support FS. > > Any ideas? I'm not best placed to help you debug your setup, but you might diff your versions and setup against what I have on my personal site, which SSLlabs says has "Robust" forward secrecy. 
I followed the server-side recommendations of the "Modern" setup, here: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility Here's some data you can check against, along with the commands I used to generate it:

user:~$ /usr/sbin/haproxy -vv
HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300, test result OK
       poll : pref=200, test result OK
     select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
user:~$ ldd /usr/sbin/haproxy
	linux-gate.so.1 => (0xe000)
	libcrypt.so.1 => /lib/i386-linux-gnu/i686/cmov/libcrypt.so.1 (0xb76b4000)
	libz.so.1 => /lib/i386-linux-gnu/libz.so.1 (0xb769b000)
	libssl.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libssl.so.1.0.0 (0xb7641000)
	libcrypto.so.1.0.0 => /usr/lib/i386-linux-gnu/i686/cmov/libcrypto.so.1.0.0 (0xb7483000)
	libpcre.so.3 => /lib/i386-linux-gnu/libpcre.so.3 (0xb7445000)
	libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb72e)
	libdl.so.2 => /lib/i386-linux-gnu/i686/cmov/libdl.so.2 (0xb72dc000)
	/lib/ld-linux.so.2 (0xb76f9000)

user:~$ apt-cache policy openssl haproxy | grep -i -e install -e ^[a-z]
openssl:
  Installed: 1.0.1e-2+deb7u13
haproxy:
  Installed: 1.5.8-1~bpo70+1

user:~$ openssl version
OpenSSL 1.0.1e 11 Feb 2013

user:~$ openssl ciphers
ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:SRP-DSS-AES-256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA:ECDH-ECDSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:CAMELLIA256-SHA:PSK-AES256-CBC-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:SRP-DSS-3DES-EDE-CBC-SHA:SRP-RSA-3DES-EDE-CBC-SHA:SRP-3DES-EDE-CBC-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:ECDH-RSA-DES-CBC3-SHA:ECDH-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA:PSK-3DES-EDE-CBC-SHA:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:SRP-DSS-AES-128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:DHE-RSA-SEED-SHA:DHE-DSS-SEED-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-SHA256:ECDH-RSA-AES128-SHA:ECDH-ECDSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:SEED-SHA:CAMELLIA128-SHA:PSK-AES128-CBC-SHA:ECDHE-
Re: Can't find an old example of haproxy failover setup with 2 locations
On 8 Dec 2014 15:10, "Aleksandr Vinokurov" wrote: > > > I've seen it 2 years ago. If I remember it right, Willy Tarreau was the author and it had ASCII graphics for network schema. It depicts step by step the configuration from one location and one server to 2 locations and 4 (or only 2) Haproxy servers. > > Will be **very** glad if smb. can share a link to it. Might you be referring to www.haproxy.com/static/media/uploads/eng/resources/art-2006-making_applications_scalable_with_lb.pdf ? J
Re: Disable HTTP logging for specific backend in HAProxy
On 7 December 2014 at 20:54, Alexander Minza wrote: > How does one adjust logging level or disable logging altogether for specific > backends in HAProxy? > > In the example below, both directives "http-request set-log-level err" and > "no log" seem to have no effect - the logs are swamped with lines of > successful HTTP status 200 OK records. [snip] >> backend static >> http-request set-log-level err >> no log Are you /absolutely/ sure that these log lines aren't being emitted by the frontend or listener through which your backend must have received the request? Are you expecting that "no log" will percolate back to the frontend? I don't /think/ it works that way ... (though I've not tested). [ As an aside, the way I read what you've written above is "mark *all* logs from the static backend as "err" level". Whereas your global section's "log /dev/log local1 notice" line says "log everything that is notice-or-more-severe to /dev/log". I know your "no log" looks like it should override this logging, but I just thought I'd mention it as it looks a little odd. ] Regards, Jonathan
Re: Can not set or clear a table when the Key contains "\"
On 5 December 2014 at 07:05, Nick wrote: > when i try the command --echo -e "set table RD01-CSN-1 key PVG\\PENGZ > data.server_id 3 " | socat /var/run/haproxy.stat stdio, the unix socket > seems excluded the backslash "\\", so i cannot successfully edit the > Haproxy tables. > the same problem when i try the command echo -e "clear table RD01-CSN-1 > key PVG\\PENGZ data.server_id 3 " | socat /var/run/haproxy.stat stdio. I think you're having a generic shell escaping problem, which has nothing to do with haproxy or the unix socket. Try using single quotes around the string you pass in, and without giving echo that "-e" parameter. Jonathan
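To illustrate the quoting fix (printf is used here instead of echo, since it never reinterprets backslashes in its arguments):

```shell
# Single quotes pass the backslash through untouched: no -e, no double quotes.
cmd='clear table RD01-CSN-1 key PVG\PENGZ data.server_id 3'

# printf '%s' emits the string byte-for-byte; in real use this would be
# piped into the stats socket:
#   printf '%s\n' "$cmd" | socat /var/run/haproxy.stat stdio
printf '%s\n' "$cmd"
```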
Re: Significant number of 400 errors..
On 27 November 2014 at 10:39, Alexey Zilber wrote: > That's part of what I'm trying to figure out.. where are the junk bytes > coming from. Is it from the client, server, haproxy, or networking issue? That's what tcpdump is useful for. Use it at different places in your end-to-end client/backend path, and you'll discover where the junk originates. Jonathan
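For example (interface names and ports are illustrative; run one capture on each leg of the path):

```
tcpdump -i eth0 -s0 -w client-side.pcap port 443   # client <-> haproxy leg
tcpdump -i eth1 -s0 -w server-side.pcap port 80    # haproxy <-> backend leg
```

Comparing the two captures shows whether the junk bytes exist on the wire before haproxy, after it, or not at all.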
Re: POST body not getting forwarded
On 20 November 2014 05:17, Rodney Smith wrote: > I have a problem where a client is sending audio data via POST, and while > the request line and headers reach the server, the body of the POST does > not. However, if the client uses the header "Transfer-Encoding: chunked" and > chunks the data, it does get sent. What can I do to get the POST body sent > without the chunking? > What can be changed to get the incoming raw data packets to get forwarded? > > I'm using HAProxy in a forward proxy mode (option http_proxy). The function > http_request_forward_body() has the message in the HTTP_MSG_DONE state, and > the log line in process_session() line 1785 shows the incoming data is > accumulating "rqh=(s->req->buf->i)". I don't have a direct answer on your observed problem, but I would point out that, judging by my archives, the use of the http_proxy option is /extremely/ underrepresented on this list. I have no information if this might be the case, but I suggest that it would be possible for a bug to creep in and remain hidden for longer in this code path because of its relative rareness. This mean-time-to-bug-discovery might be compounded by the very (very!) broad demographic generalisation that people using this simplistic feature of haproxy /might/ be less inclined to upgrade for feature-based reasons, due to their architectures perhaps relying less on a fully-featured proxy being inline. In the absence of any other information, my next step in your situation would be to see if I could replicate this problem in a different haproxy mode, not using option http_proxy. I absolutely recognise that that might not be possible, and I'm sure others on the list will help you discover the true root cause of the problem. I only mention it because it might not be obvious that this isn't a commonly discussed (and hence perhaps not commonly used) feature of haproxy. HTH, Jonathan
Re: Haproxy - time to split traffic on servers
On 14 November 2014 22:59, Gorj Design ( Dragos ) wrote: > Hello, > > I have been using Haproxy to split the traffic between my servers. > I have a haproxy server and 2 servers that receive the traffic using round > robin. > The traffic is usually split very well: 50% on one server and 50% on the > other. > > But at some point, the traffic gets in very fast, for example > 2014-11-14T20:43:15.702Z > 2014-11-14T20:43:15.703Z > 2014-11-14T20:43:15.704Z > 2014-11-14T20:43:15.705Z > 2014-11-14T20:43:15.706Z > .. > From 15.702 to ..15.706 hundreds of incoming requests are coming in, > and all are sent to server one. > > Can I set it somehow so the traffic is split evenly even when it arrives > within such a small interval? I don't believe you /should/ be seeing this pattern/problem with a simple round-robin setup. Are you *positive* that neither server went down for any period, no matter how small? http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#balance describes your load balancing algorithm choices. I know it warns against leastconn with short-lived connections, but I've never had any problems with using that algorithm for HTTP :-) Jonathan
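A minimal sketch of switching that backend to leastconn (server names and addresses here are placeholders):

```
backend web
    balance leastconn
    server web1 192.0.2.10:80 check
    server web2 192.0.2.11:80 check
```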
Re: Wrong certificate via openssl s_client -connect
On 23 July 2014 09:53, Martin van Diemen wrote: > Hi, > > I'm using multiple certificates for haproxy. All certificates are placed in > one folder and this works great when using a web browser. [snip] > When I run "openssl s_client -connect subdomain.domain.tld:443" I get the > wrong certificate. [snip] > I can not figure out why the wrong certificate is returned. Maybe someone > could help me. Maybe this is a bug in haproxy? No; your problem is twofold: 1) You're expecting the s_client tool to do more, automatically, than it actually does. Have a look here: http://rt.openssl.org/Ticket/Display.html?id=2548&user=guest&pass=guest 2) The *only* reason your setup works in the browser is because you are using one which supports SNI. Read the wikipedia page for a decent summary of it: http://en.wikipedia.org/wiki/Server_Name_Indication. Note this paragraph: "Users whose browsers do not support SNI will be presented with a default certificate and hence are likely to receive certificate warnings, unless the server is equipped with a wildcard certificate that matches the name of the website." You have 3 options to solve this for your users, as far as I'm aware:

a) use SNI
b) allocate a separate IP for each HTTPS site you're hosting
c) use a wildcard or UCC/SAN certificate

HTH, Jonathan
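For what it's worth, s_client can be told to send SNI explicitly with the -servername flag, which should then return the certificate your browser sees:

```
openssl s_client -connect subdomain.domain.tld:443 -servername subdomain.domain.tld
```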
Re: Using a Whitelist to Redirect Users not on the Whitelist
On 17 Jul 2014 18:15, "JDzialo John" wrote: > I am creating a whitelist of subnets allowed to access HAPROXY during maintenance. Basically I want to redirect everyone to our maintenance page other than users in the whitelisted file. > > This is not working and is forwarding everyone to the maintenance page despite being a member of a whitelisted subnet. (10.0.0.0/8) > > Is using the hdr_ip(X-Forwarded-For) in the acl the way to go Unless your traffic is passing through another reverse proxy which inserts this header before it hits HAProxy, no. Why are you choosing to use that header?
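If clients hit haproxy directly, a sketch of the same idea keyed on the client source address instead (the ACL name, subnet and maintenance URL are illustrative):

```
frontend www
    acl maint_whitelist src 10.0.0.0/8
    redirect location /maintenance.html code 302 if !maint_whitelist
```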
Re: How can I force all frontend traffic to be temporarily queued/buffered by HAProxy?
On 17 Jul 2014 14:50, "Abe Voelker" wrote: > So basically I'm wondering if there is a way to "expire" these pre-existing sessions or connections or somehow force them to behave like a new one so that they will queue up in HAProxy? I believe 1.5 has the "on-marked-down shutdown-sessions" option to close connections when backends fail healthchecks. I don't recall what effect it has on the weighting change operation you're doing, however. I can't speak for the sanity of the approach, but I've used (tcp)cutter to terminate connections through a Linux firewall before. Maybe you could script that, or the similar tool tcpkill. Overall, however, I'd personally choose to address this at the DB or app layers - perhaps with a lock, perhaps with a code change to make the app be more forgiving during the outage. Doing this in the network feels error-prone and wrong. Cheers, Jonathan
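That option lives on the server line; a sketch (address and names illustrative):

```
backend app
    server app1 192.0.2.20:80 check on-marked-down shutdown-sessions
```

When a health check marks app1 down, its established sessions are closed immediately instead of being left to drain.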
Re: Adding Serial Number to POST Requests
On 16 Jul 2014 16:56, "Zuoning Yin" wrote: > > We later also got the help from Willy. He provided us a configuration which solved our problem. To benefit other people, I just posted it here. I had meant to chime in on this thread earlier. What happens when your HAProxy layer loses state - be it reboot, service restart or data centre power cut? Are you risking resetting the counter and overwriting existing data on the backend? Are you in fact treating HAP as a single point of truth? J
Re: Filing bugs.. found a bug in 1.5.1 (http-send-name-header is broken)
On 7 Jul 2014 14:44, "Alexey Zilber" wrote: > > Hey guys, > > I couldn't find a bug tracker for HAProxy, and I found a serious bug in 1.5.1 that may be a harbinger of other broken things in header manipulation. > > The bug is: > > I added 'http-send-name-header sfdev1' under the defaults section of haproxy.cfg. > > When we would do a POST with that option enabled, we would get 'sf' injected into a random variable. When posting with a time field like '07/06/2014 23:43:01' we would get back '07/06/2014 23:43:sf' consistently. Alex - Would you be able to post a (redacted) config that causes haproxy to exhibit this behaviour, along with a fuller example of exactly where this unwanted data appears in context? If you could post a packet capture of the data being inserted, that will probably help people to home in on the cause of the problem. Don't forget to redact anything from the capture as you feel necessary, such as auth creds, public IPs and host headers. (Anything you're content /not/ to redact could only help, however!) Jonathan
Re: haproxy 1.4 and ssl
On 16 Jun 2014 13:19, wrote: > > hi all > > i have the following situation: > i have 4 real servers (two exchange2013 and 2 citrix) which should get loadbalanced behind haproxy 1.4 (because this is the version shipped with redhat). > these backend servers should talk: > exchanges: https, pop3, imap, pop3s, imaps > citrix: https > > https should get passed through, eg. the certificates are on the real servers and NOT on the loadbalancer. This is the only option with 1.4, as it can't terminate HTTPS. > i would like to have a single check for all exchange-services. this check is https://exchange/ews/healthcheck.html > if this fails all services for exchange should switch over. It sounds to me like you should investigate haproxy's healthcheck server tracking functionality, and have your services all hanging off a single healthcheck per real server. I forget the option name, but search the main docs.txt for "tracking" and you should find it. > and for citrix i would like to fail if the real-servers fail, eg with httperrorcode 503. it works if a service goes down completely, but not with 503. currently this does not fail, as a connect is possible. This doesn't look possible with 1.4 to me. The most it can do is the ssl-hello-chk, which doesn't actually examine any HTTP response code. Can you expose the healthcheck page via http? [snip] > *) is this possible with haproxy 1.4? Some things you're trying to do look possible, others less so. > *) can haproxy check for a "local" file residing on the loadbalancer itself? maybe by file:///tmp/healthcheck.txt Not at runtime, but I /guess/ you want this so as to fail over different services simultaneously - which is already possible with tracking servers. > *) is there any release-schedule for the next stable version? If you mean 1.5, then Willy's previously posted that it'll be Real Soon Now :-) Possibly weeks; possibly days. Neither of which options will update the version in your distro ... 
so I'd suggest just giving the latest 1.5 a spin and seeing if your experiences with it can produce a better stable release for everyone :-) HTH, Jonathan
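For the tracking idea mentioned above, the keyword is the server-line "track" argument. A sketch for the Exchange case, assuming the health page is also reachable over plain http on the real servers, since 1.4 can't run checks over SSL (names, ports and addresses are illustrative):

```
backend exchange_https
    option httpchk GET /ews/healthcheck.html
    server ex1 192.0.2.30:443 check port 80

backend exchange_imap
    server ex1 192.0.2.30:143 track exchange_https/ex1

backend exchange_pop3
    server ex1 192.0.2.30:110 track exchange_https/ex1
```

If the tracked check fails, the imap and pop3 servers go down with it.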
Re: HAProxy connection remains but web page stream is cut off prematurely
John, Willy already replied to your original thread. I suggest you engage with his detailed reply, there, instead of starting a new thread.
Recommended strategy for running 1.5 in production
Hi all - I've been running 1.4 for a number of years, but am pondering moving some as-yet-unreleased apps to 1.5, for SSL and ACL-ish reasons. I'd like to ask how you, 1.5 sysadmins and devs, track the development version, and how you decide which version to run in production. Do you just run 1.5-dev${LATEST}? The latest snapshot? Do you follow the list here and cherry-pick important bug fixes? I don't feel I have a firm understanding of the status of the different, co-existing codebases that one could call "1.5" at any given time. And nor do I have the C-skills and time to review every commit. What do /you/ do, fellow sysadmins? How do you run, upgrade and maintain confidence in your chosen version of 1.5 in production? All opinions and information welcome! Jonathan
Re: Interaction between SSL and send-proxy
On 26 March 2014 11:01, Lukas Tribus wrote: > Hi, > > >> Basic question on send-proxy: >> >> If the HAProxy server configuration has both SSL and send-proxy, should >> the proxy protocol header be sent encrypted within the SSL packet? > > Good question. In my opinion send_proxy should be cleartext, as a proxy > may or may not terminate SSL. +1 J
Current solutions to the soft-restart-healthcheck-spread problem?
Hi all - [ tl;dr How do you stop haproxy using failed backend servers immediately after reload? Haproxy devs, please consider implementing a consider-servers-initially-DOWN option! ] I wonder if people could outline how they're dealing with the combination of these two haproxy behaviours: 1) On restart/reload/disabled-server-now-enabled-via-admin-interface, haproxy considers a server to be 1 health check away from going down, but considers it *initially* up. 2) On restart/reload, haproxy spreads out each backend's(?) initial server health checks over the entire health check interval. (If I'm slightly off with either of those statements, please forgive the inaccuracy and let it slide for the purposes of this discussion; do let me know if I'm /meaningfully/ wrong of course!) The combination of these facts in a high traffic environment seems to imply that an unhealthy-but-just-enabled server which is listed last in an haproxy backend may receive requests for a longer-than-expected period of time, resulting in a non-trivial number of requests failing. In such an environment, where multiple load balancers are involved and can be reloaded sequentially (such as mine!), it would be preferable to take a pessimistic approach and /not/ expose servers to traffic until you're positive that the backend is healthy, rather than haproxy's current default-optimism approach. I've been considering some methods to deal with this, but haven't got a working config yet. It's getting somewhat convoluted and stick-table heavy, so I thought I'd ask everyone: Where you have decided that this is something you actually need to deal with, *how* are you doing that? (I totally recognise that the combination of a frequent health check interval and non-insane traffic volumes may mask this issue, leading many -- myself included in previous jobs! 
-- not to consider it a problem in the first place) It's worth pointing out that I /believe/ this situation could be easily solved (operationally) by a global, per-backend or per-server option which switches on the pessimistic behaviour mentioned above. I recognise that this may not be easy from an /implementation/ perspective, of course. [Willy: any chance of an option to start each server as if it were down, but being 1 check away from going up, rather than the opposite? :-)] It's also worth pointing out that, whilst the "persist haproxy state over soft restarts" concept that's been mentioned previously on list would solve this for orderly restarts, it wouldn't solve it for crashes, reboots or otherwise. I think the option I mentioned above would be one way to solve it nicely, for multiple use cases. [ For a *not* nice solution, I'll post a follow up when I get my stick-table concept going. It's /nasty/. IMHO. Don't make me put it into production! ;-) ] Cheers, Jonathan
Re: When is it needed to reload HAProxy process?
On 24 February 2014 18:31, Behrooz Nobakht wrote: > Hello there, > > I am not an expert in HAProxy and tried to find an answer in the docs but > not really yet clear to me. > > Here is my situation, I have a script that modifies haproxy.cfg; e.g. it may > add new endpoints, remove them, enable or disable them. > > I want to verify on which of the situations above, it is required that > HAProxy process is reloaded/restarted so that the configuration takes > effect? To answer your exact question: absolutely no changes to the config file on disk will be picked up by the HAProxy process(es) without you first restarting/reloading the process. NB Do note that that doesn't answer the wider question about making changes to a running process' *idea* of what its configuration should be. That can be done in a variety of ways, but not via the config file itself. I'm not best placed to help you with that, however. Jonathan
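For reference, the usual dance for picking up an edited config file is validate-then-soft-reload, something along these lines (paths and pidfile location are examples -- adjust for your install; "-sf" asks the old process(es) to finish their connections and exit):

```
# check the edited config parses cleanly before touching the running process
haproxy -c -f /etc/haproxy/haproxy.cfg

# start a new process, soft-stopping the old one(s)
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
```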
Re: how to disable/enable TCP_NODELAY socket option in TCP mode?
On 7 February 2014 08:29, Татаркин Евгений wrote: > I can`t find in haproxy documentation any information about Nagle`s > algorithm or TCP_NODELAY option To quote http://marc.info/?l=haproxy&m=132173719731861&w=2, which I discovered via searching http://marc.info/?l=haproxy&w=2&r=1&s=nagle&q=b: 'the "http-no-delay" option [...] forces TCP_NODELAY on every outgoing segment and prevents the system from merging them.' Willy warns, however: "Doing so can increase load time on high latency networks due to output window being filled earlier with incomplete segments and due to the receiver having to ACK every segment, which can lead to uplink saturation on asymmetric links (ADSL, HSDPA)." HTH, J
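So, per-proxy, something like this should do it -- a sketch only; see the "option http-no-delay" section of the configuration manual for the authoritative description and caveats:

```
defaults
    mode http
    # force TCP_NODELAY on outgoing segments: trades bandwidth
    # efficiency for lower per-segment latency
    option http-no-delay
```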
Re: Question about logging in HAProxy
On 4 Feb 2014 20:06, "Kuldip Madnani" wrote: > > Hi, > > I want to redirect the logs generated by HAProxy into some specific file. I read that in the global section in log option i can put a file location instead of IP address. I suspect (but can't confirm as I'm on a mobile browser that can't cope with the docs!) that this filesystem location is solely for specifying a socket that will accept logs - HAProxy will not manage the log file/s on disk for you. J
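To illustrate what I mean, a minimal sketch: haproxy logs to the local syslog socket, and it's then your syslog daemon's job (rsyslog, syslog-ng, ...) to route the local0 facility into an actual file:

```
global
    # a datagram socket, not a file that HAProxy writes or rotates itself
    log /dev/log local0

defaults
    log global
```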
Re: How to write multiple config file in haproxy
On 1 February 2014 12:32, Sukanta Saha wrote: > Thanks for all your help, I will try, > > I have one more question that is about the haproxy.conf file , in this file > we have written so many backends which are getting called from the > frontends. > Is there a way that I can separate out the backends in multiple config files > and from my main haproxy.conf file I will call those files. > So that the main file looks clean and nice and I will have multiple config > files for my each service or backends . If I need to change anything for a > service I will change the corresponding config file not the main file. HAProxy can accept multiple "-f " parameters when it's started, but I /believe/ there are some constraints on the files' contents, such as requiring each section to be fully defined in a single file. I forget the exact details and don't have them to hand. You'll probably need to change your init script to support this as well. Also, there isn't an "include" directive you can use inline, in the config file. There is some talk about it on this list, but I don't believe it's available yet - if ever. You may also find people have written init wrappers that simulate this or other multiple-config-file behaviours. I don't have a link to them myself, but you may find them mentioned somewhere in the list archives: http://marc.info/?l=haproxy Cheers, Jonathan
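In other words, something along these lines -- filenames invented for illustration, and remember each section needs to be complete within its own file:

```
haproxy -f /etc/haproxy/global.cfg \
        -f /etc/haproxy/frontends.cfg \
        -f /etc/haproxy/backends.cfg
```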
Re: haproxy
On 1 February 2014 06:46, Amit wrote: > Hi All, > I would like to hear about haproxy related issues and its resoultions. The 1.4 and 1.5 changelogs are http://haproxy.1wt.eu/download/1.4/src/CHANGELOG and http://haproxy.1wt.eu/download/1.5/src/CHANGELOG. Have fun.
Re: Question concerning stats server
On 31 January 2014 14:31, Andreas Mock wrote: > Hi Jonathan, > > this answer came in really fast. Thank you. Happy to help :-) Don't forget to turn on HTTP auth in a public-facing environment! Search "stats auth" in http://haproxy.1wt.eu/download/1.4/doc/configuration.txt ...
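A minimal sketch, building on the listen block from my earlier reply (the credentials are obviously placeholders -- pick your own!):

```
listen stats
    bind :80
    mode http
    maxconn 20
    stats uri /hap
    stats auth admin:changeme
```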
Re: Question concerning stats server
On 31 January 2014 14:21, Andreas Mock wrote: > Hi all, > > I need a little help to understand how the html stats page can be accessed: > How can I setup a stats page without having one backend? Is this possible? > Do I have to provide a frontend too? This works for me:

root@foo:/# tail -5 /etc/haproxy/haproxy.cfg
listen stats
    bind :80
    mode http
    maxconn 20
    stats uri /hap

Jonathan
Re: Haproxy as simple proxy forwarding each request
On 29 January 2014 17:59, Ricardo wrote: > Hello, > > Is a bit mess situation but I can't configure Haproxy as a simple proxy. > > The behaviour I'm looking for is an Haproxy listen in port 80, receiving > request to any url and forward each request to the appropriate domain through > his own gateway. It sounds to me like you're looking for a /forward/ proxy, which *really* isn't HAProxy's forte. I seem to recall it can /just/ about be mangled into doing something like what you want, but you'll have much more luck looking at Squid for this - that's one of its primary use cases. To confirm that you are actually looking for a forward proxy, answer this: are you able to deterministically list *all* of the domains that you wish to load-balance? Or are you looking to balance "whatever a user might type into their web browser"? Also - when you mentioned the internet gateway, do you really just mean a router? I.e. a box which is *just* moving packets, and not looking inside each HTTP request and then routing them based on the Host header it finds? Back to forward proxying: if you don't like Squid, then Nginx can, with a bit of force, be made to do the job pretty well. Varnish may also be able to achieve it with its more recent kinda dynamic backends [citation required; possible rubbish being spouted]. But I wouldn't personally go through the pain of trying to make HAProxy do this. Jonathan
Re: HAProxy for Solaris 10 X86
On 21 January 2014 13:17, Vinoth M wrote: > Hi, > > 1) I am using Solaris 10 x86. Could you please let me know if there a pre > compiled package available for it. > 2) Also let me know if HAproxy is supported for Solaris 10 x86. I can't help with these 2 questions ... > 3) My requirement is to load balance FTP (not http). Let me know if i can > use HAProxy for the same. ... but I answered this exact question on the Nginx mailing list only this morning. Here's what I posted; I believe pretty much all the same points apply to you :-) Wow - the dream of the 90s really *is* alive in $WHEREVER_YOU_ARE! ;-) Seriously - it's 2014. We have better alternatives than the insecure and awful mess that is FTP. Any company that thinks otherwise deserves all the pain that comes with FTP ... Anyway, Nginx doesn't talk FTP to the best of my knowledge. Whilst I'd normally suggest a TCP load balancer for this, FTP has certain properties which make it annoying to load balance that you have to take into account. This came up after a moment's googling. It might help: http://ben.timby.com/?page_id=210 Regards, Jonathan
Re: URL path in backend servers
Rakesh - I replied to your identical question about this, yesterday, suggesting what you could do to help yourself diagnose your problem. Please don't start new threads for the same question. Jonathan
Re: Forward request with the URL path
On 15 January 2014 07:36, Rakesh G K wrote: > Hello, > Is it possible to forward an incoming request to the backend by retaining > the URL path in http mode?. > Using ACLs I was able to categorize the incoming requests to different > rulesets, but on using a specific backend for a certain URL path, I could > not figure out how to send the request to the underlying server with the URL > path?. I don't quite understand. Are you finding that, without having configured it to do so, haproxy is *changing* the URI path when it proxies the request to your backends? Have you verified this with tcpdump? I would expect the opposite to be the case, as *not* changing the path is the default behaviour. I think you've probably got something else going wrong here, that causes your backend to produce "Not Found on Accelerator" (as per the SO question). That error isn't one that haproxy generates. Get tcpdump out. It'll show you where the problem is. Jonathan
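Something like this, run on the haproxy box, will show you the raw request line haproxy actually sends to a backend (interface and filter expression are examples -- substitute your backend's real address):

```
tcpdump -i any -A -s 0 'tcp dst port 80 and dst host <backend-ip>'
```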
Re: Tuning HAProxy for Production
On 2 January 2014 20:09, Jordan Arentsen wrote: > I'm trying to prepare HAProxy for a production, and I'm trying to figure out > some good default configuration settings that will at least give me a good > place to start. > > My main question revolves around the maxconn option and the various > timeouts. I was thinking about setting the maxconn to 15k or so, is this a > bad place to start? Any other advice on baseline performance tuning? > > Mostly this will be routing to various front-end web servers based on the > incoming url. There is a main PHP application running on a couple servers, a > Tomcat server running authentication, and a few node.js servers. Mostly the > PHP servers will be handling the bulk of the load for now. Is that the > information you were looking for, or is there something I can dig into more > in-depth? That sounds pretty vanilla, so my suggestion would be to start with the defaults and see where that gets your specific application and workload. HAProxy's defaults are sane (I /think/ the default queue timeout and queue size might need increasing, but it's been a while since I've set up a greenfield app from scratch). Remember the sine qua nons of performance tuning are to change one thing at a time, measure things precisely and accurately, and make sure you're comparing apples with apples. You should have an idea of what you'll need maxconn to be, based on either existing logs or your business' traffic predictions. If you have neither of these, set it high and drop it down as you observe you're able to over time. Others may well have more specific recommendations, but that's where I'd start. Jonathan
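Purely to illustrate the shape of a starting config -- these numbers are invented for the example, not recommendations:

```
global
    maxconn 15000

defaults
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout queue   30s
```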
Re: disable backend through socket
On 22 Dec 2013 20:32, "Patrick Hemmer" wrote: > > That disables a server. I want to disable a backend. No, you want to disable all the servers in a backend. I'm not sure there's a shortcut that's better than just doing them one by one. Others may be able to advise about alternatives, but is that an option for you? Jonathan
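e.g. a loop over the admin socket -- this assumes something like "stats socket /var/run/haproxy.sock level admin" in your global section, and the backend/server names are invented for illustration:

```
for srv in web1 web2 web3; do
    echo "disable server mybackend/$srv" | socat stdio /var/run/haproxy.sock
done
```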
Replying to spam threads
My apologies to list members for replying to a spam thread and potentially screwing up mail classification at your end. My mistake. Jonathan
Re: XForwardfor Varnish behind HaProxy
On 7 December 2013 11:08, Clémence Varroi wrote: > I think I've made a mistake with my configuration. I can't retrieve the IP > addresses of my clients in my logs, I just have the IP address of my varnish > and I am going mad... It strikes me that the culprit is probably your nginx real_ip setup, and you need to test it with a set of synthetic, curl'd requests. If real_ip /were/ working, and translating the X-F-F into logged IPs, I think you'd see (at a *minimum*) the HAProxy server's IP in nginx's logs - not the Varnish server's IP. Don't forget to ensure real_ip is happy with: a) multiple X-F-F request headers (which are legal but whose ordering may be confusing nginx) and b) X-F-F headers with multiple IPs (e.g. "X-F-F: 1.2.3.4, 2.3.4.5", etc) and c) combinations of (a) and (b). Cheers, Jonathan
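For those synthetic tests, something along these lines against your haproxy frontend, then watch what lands in nginx's access log (hostname and IPs are placeholders):

```
# (a) single X-F-F header
curl -s -o /dev/null -H 'X-Forwarded-For: 1.2.3.4' http://your-haproxy/

# (b) one header carrying multiple IPs
curl -s -o /dev/null -H 'X-Forwarded-For: 1.2.3.4, 2.3.4.5' http://your-haproxy/

# (c) multiple X-F-F headers in one request
curl -s -o /dev/null -H 'X-Forwarded-For: 1.2.3.4' -H 'X-Forwarded-For: 2.3.4.5' http://your-haproxy/
```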