Re: DNS resolution problem since 1.8.14

2018-12-23 Thread Jonathan Matthews
Hey Patrick,

Have you looked at the fixes in 1.8.16? They sound kinda-sorta related to
your problem ...


On Sun, 23 Dec 2018 at 16:17, Patrick Valsecchi  wrote:

> I did a tcpdump. My config is modified to point to a local container (www)
> in a docker compose (I'm trying to simplify my setup). You can see the DNS
> answers correctly:
> 16:06:00.181533 IP (tos 0x0, ttl 64, id 63816, offset 0, flags [DF], proto
> UDP (17), length 68)
> > localhost.40994: 63037 1/0/0 www. A (40)
> Could it be related to that?
> On 23.12.18 13:59, Patrick Valsecchi wrote:
> Hi,
> Since haproxy version 1.8.14 and including the last 1.9 release, haproxy
> puts all my backends in MAINT after around 31s. They first work fine, but
> then they are put in MAINT.
> The logs look like that:
> <149>Dec 23 12:45:11 haproxy[1]: Proxy www started.
> <149>Dec 23 12:45:11 haproxy[1]: Proxy plain started.
> [NOTICE] 356/124511 (1) : New worker #1 (8) forked
> <150>Dec 23 12:45:13 haproxy[8]:
> [23/Dec/2018:12:45:13.098] plain www/linked 0/0/16/21/37 200 4197 - - 
> 1/1/0/0/0 0/0 "GET / HTTP/1.1"
> [WARNING] 356/124542 (8) : Server www/linked is going DOWN for maintenance
> (DNS timeout status). 0 active and 0 backup servers left. 0 sessions
> active, 0 requeued, 0 remaining in queue.
> <145>Dec 23 12:45:42 haproxy[8]: Server www/linked is going DOWN for
> maintenance (DNS timeout status). 0 active and 0 backup servers left. 0
> sessions active, 0 requeued, 0 remaining in queue.
> [ALERT] 356/124542 (8) : backend 'www' has no server available!
> <144>Dec 23 12:45:42 haproxy[8]: backend www has no server available!
> I run haproxy using docker:
> docker run --name toto -ti --rm -v
> /home/docker-compositions/web/proxy/conf.test:/etc/haproxy/:ro -p 8080:80
> haproxy:1.9 haproxy -f /etc/haproxy/
> And my config is that:
> global
> log stderr local2
> chroot  /tmp
> pidfile /run/
> maxconn 4000
> max-spread-checks 500
> master-worker
> user    nobody
> group   nogroup
> resolvers dns
>   nameserver docker
>   hold valid 1s
> defaults
> mode    http
> log global
> option  httplog
> option  dontlognull
> option http-server-close
> option forwardfor   except
> option  redispatch
> retries 3
> timeout http-request10s
> timeout queue   1m
> timeout connect 10s
> timeout client  10m
> timeout server  10m
> timeout http-keep-alive 10s
> timeout check   10s
> maxconn 3000
> default-server init-addr last,libc,none
> errorfile 400 /usr/local/etc/haproxy/errors/400.http
> errorfile 403 /usr/local/etc/haproxy/errors/403.http
> errorfile 408 /usr/local/etc/haproxy/errors/408.http
> errorfile 500 /usr/local/etc/haproxy/errors/500.http
> errorfile 502 /usr/local/etc/haproxy/errors/502.http
> errorfile 503 /usr/local/etc/haproxy/errors/503.http
> errorfile 504 /usr/local/etc/haproxy/errors/504.http
> backend www
> option httpchk GET / HTTP/1.0\r\nUser-Agent:\ healthcheck
> http-check expect status 200
> default-server inter 60s fall 3 rise 1
> server linked check resolvers dns
> frontend plain
> bind :80
> http-request set-header X-Forwarded-Proto   http
> http-request set-header X-Forwarded-Host %[req.hdr(host)]
> http-request set-header X-Forwarded-Port %[dst_port]
> http-request set-header X-Forwarded-For %[src]
> http-request set-header X-Real-IP   %[src]
> compression algo gzip
> compression type text/css text/html text/javascript
> application/javascript text/plain text/xml application/json
> # Forward to the main linked container by default
> default_backend www
> Any idea what is happening? I've tried to increase the DNS resolve timeout
> to 5s and it didn't help. My feeling is that the newer versions of haproxy
> cannot talk with the DNS provided by docker.
> Thanks
> --
Jonathan Matthews
London, UK
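[For anyone landing on this thread later: the resolvers section in the quoted config leaves every timeout and retry at its default. A hedged sketch of a more explicit setup follows; the nameserver address is an assumption (Docker's embedded DNS usually answers on 127.0.0.11:53 inside a user-defined network, but verify for your environment):]

```
resolvers dns
    # Docker's embedded DNS normally listens on 127.0.0.11:53
    # inside a user-defined network; adjust for your setup.
    nameserver docker 127.0.0.11:53
    resolve_retries  3
    timeout resolve  1s
    timeout retry    1s
    # keep the last valid answer longer than a single resolve
    # period, so one lost UDP packet doesn't DOWN the server
    hold valid       10s
    # larger answers (e.g. SRV records) may need more room
    accepted_payload_size 8192
```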

Re: Http HealthCheck Issue

2018-12-19 Thread Jonathan Matthews
On Wed, 19 Dec 2018 at 19:23, UPPALAPATI, PRAVEEN  wrote:
> Hmm. Wondering why do we need host header? I was able to do curl without the 
> header. I did not find anything in the doc.

"curl" automatically adds a Host header unless you are directly
hitting an IP address.

Re: Http HealthCheck Issue

2018-12-18 Thread Jonathan Matthews
On Tue, 18 Dec 2018 at 14:56, UPPALAPATI, PRAVEEN  wrote:

> wcentral/com.att.swm.attpublic/healthcheck.txt HTTP/1.1\r\nAuthorization:\
> Basic\ 
> [Dec 18 05:22:51]  Health check for server bk_8093_read/primary8093r
> failed, reason: Layer7 wrong status, code: 400, info: "No Host", check
> duration: 543ms, status: 0/2 DOWN

Hey there, Praveen.

This log line is literally telling you what your problem is!

I know different folks like the satisfaction of discovering their own
solutions, so I'll ask before simply telling you the solution: do you need
help in finding the error hidden in that log line, or can you manage to fix
it yourself?

All the best,
Jonathan Matthews
London, UK
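[For readers who want the spoiler: the check in the quoted config speaks HTTP/1.1, which requires a Host header; the server rejects the header-less check with "400 No Host". A hedged sketch of a check that sends one; the path and host name here are placeholders, not the poster's real values:]

```
backend bk_8093_read
    # an HTTP/1.1 health check must carry a Host header, or the
    # backend may answer "400 No Host" as in the log line above
    option httpchk GET /healthcheck.txt HTTP/1.1\r\nHost:\ backend.example.invalid
    http-check expect status 200
```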

Re: HA-Proxy configuration

2018-10-10 Thread Jonathan Matthews
On Wed, 10 Oct 2018 at 07:08, <> wrote:

> Hi Team,
> I am looking for HA-Proxy configuration Help in over project, can i know
> some one who can give more information on configuration using 2 different 
> HA-Proxy
> servers for high availability.
> Feel free to contact me on - 9849916124

Hey there,

Welcome to the public mailing list for users of the open source haproxy tool.

You'd probably do best by posting the configuration and HA setup as far as
you've managed to get it going, and asking questions about specific
problems you encounter along the way. You're more likely to get help via
email than via telephone!

Here is the starter guide for the current stable version. There are links along
the top of that page to the configuration and management manuals, which
will be of interest as you evolve your HA setup.

If, instead, you feel you would like to trade time for money, and want to
take advantage of a commercial support option, some are listed here:

As a backstop, my UK company is already set up as a supplier inside Wipro's
procurement system. Do get in touch if the routes I've mentioned above
don't meet your needs :-)

All the best,



Jonathan Matthews

London, UK


Re: Need Clarification

2018-08-21 Thread Jonathan Matthews
On Tue, 21 Aug 2018 at 17:53, Jordan Finsbel 

> Hello my name is Jordan Finsbell and interested to get involved

That's great! What areas are you interested in?

Jonathan Matthews
London, UK

Re: HaProxy question

2018-08-10 Thread Jonathan Matthews
Did you miss the two mails from Igor containing suggestions?

Like this email, they went both to the list and directly to yourself. Maybe
check your spam folder.


On Sat, 11 Aug 2018 at 02:28, Jonathan Opperman  wrote:

> *bump*
> Anyone?
> On Tue, 7 Aug 2018, 11:43 Jonathan Opperman,  wrote:
>> Hi All,
>> I am hoping someone can give me some tips and pointers on getting
>> something working
>> in haproxy that could do the following:
>> I have installed haproxy and put a web server behind it, the proxy has 2
>> interfaces,
>> eth0 (public) and eth1 (proxy internal)
>> I've got a requirement where I want to only proxy some source ip
>> addresses based on
>> their source address so we can gradually add our customers to haproxy so
>> that we can
>> support TLS1.2 and strong ciphers
>> I have added an iptables rule and can then bypass haproxy with:
>> for ip in $INBOUNDEXCLUSIONS ; do
>> ipset -N inboundexclusions iphash
>> ipset -A inboundexclusions $ip
>> done
>> $IPTABLES -t nat -A HTTPSINBOUNDBYPASS -m state --state NEW -j
>> --dport 443 -j DNAT --to $JONODEMO1:443
>> $IPTABLES -t nat -A PREROUTING -m set ! --match-set
>> inboundexclusions src -d -p tcp --dport 443 -j HTTPSINBOUNDBYPASS
>> Testing was done and I was happy with the solution, I then had a
>> requirement
>> to have a proxy with multiple IP address on eth0 (So created eth0:1
>> eth0:2) etc
>> and changed my haproxy frontend config from  bind transparent
>> to bind transparent but now my dnat doesn't work if haproxy
>> is running, if I stop haproxy the traffic gets dnatted fine.
>> I am not sure if I am being very clear in here but basically wanted to
>> know if there is
>> a way to do selective ssl offloading on the haproxy or bypass
>> ssl offloading on the
>> server that sits behind the proxy? This is required so that customers
>> that do not support
>> TLS1.2 and strong ciphers we can still let them connect so actually
>> bypassing
>> the ssl offloading on the proxy.
>> Thanks very much for your time reading this.
>> Regards,
>> Jonathan
>> --
Jonathan Matthews
London, UK

Re: Regarding HA proxy configuration with denodo

2018-07-26 Thread Jonathan Matthews
On Thu, 26 Jul 2018 at 07:12, <> wrote:

> We have two different denodo servers installed on two machines (LINUX)
> installed on AWS and one load balancer installed on one of those machines .
> Can you please provide the steps required or the configuration that need to
> be done to connect HA proxy with the available denodo servers . HA proxy
> should be able to connect either of the denodo server available .


This is the public mailing list for users of the open source haproxy tool.

You would be best served by posting the configuration as far as you've
managed to get it going, and asking questions about specific problems you
encounter along the way.

Here is the starter guide for the current stable version. There are links along
the top of that page to the configuration and management manuals.

If, instead, you feel you would like to trade time for money, and want to
take advantage of a commercial support option, some are listed here:

As a backstop, my UK company is already set up as a supplier inside Wipro's
procurement system. Do get in touch if the routes I've mentioned above
don't meet your needs :-)

All the best,

> --
Jonathan Matthews
London, UK

Re: Help with environment variables in config

2018-07-21 Thread Jonathan Matthews
No. Sudo doesn't pass envvars through to its children by default:

Read that page *and* the comments - in particular be aware that you have to
request (at the CLI) that sudo preserve envvars, and you also have to have
been granted permission to do this, via the sudoers config file.

If this is all sounding a bit complicated, that's because it is.

You've chosen a relatively uncommon way of running haproxy - directly, via
sudo. Consider running via an init script or systemd unit (?) or, failing
that, just a script which is itself the sudo target, which sets the envvars
in the privileged environment.
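[Once the variable actually reaches haproxy's environment (via sudo's env_keep, a wrapper script, or a systemd Environment= line), referencing it in the config is straightforward. A sketch reusing the GRAPH_PORT variable from this thread; the frontend and backend names are invented for illustration:]

```
# haproxy.cfg - environment variables are expanded inside
# double-quoted strings at parse time (manual section 2.3)
frontend graph
    bind ":${GRAPH_PORT}"
    default_backend graph_servers
```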


On Sat, 21 Jul 2018 at 17:31, jdtommy  wrote:

> would this chain of calls not work?
> ubuntu@ip-172-31-30-4:~$ export
> ubuntu@ip-172-31-30-4:~$ export GRAPH_PORT=8182
> ubuntu@ip-172-31-30-4:~$ sudo haproxy -d -V -f /etc/haproxy/haproxy.cfg
> On Sat, Jul 21, 2018 at 3:26 AM Igor Cicimov <
>> wrote:
>> On Sat, Jul 21, 2018 at 7:12 PM, Jonathan Matthews <
>>> wrote:
>>> On Sat, 21 Jul 2018 at 09:12, jdtommy  wrote:
>>>> I am setting them before I start haproxy in the terminal. I tried both
>>>> starting it as a service and starting directly, but neither worked. It
>>>> still would not forward it along.
>>> Make sure that, as well as setting them, you're *exporting* the envvars
>>> before asking a child process (i.e. haproxy) to use them.
>>> J
>>> --
>>> Jonathan Matthews
>>> London, UK
>> ​
>> As Jonathan said, plus make sure they are included/exported in the init
>> script or systemd file for the service.
> --
> Jarad Duersch
Jonathan Matthews
London, UK

Re: Help with environment variables in config

2018-07-21 Thread Jonathan Matthews
On Sat, 21 Jul 2018 at 09:12, jdtommy  wrote:

> I am setting them before I start haproxy in the terminal. I tried both
> starting it as a service and starting directly, but neither worked. It
> still would not forward it along.

Make sure that, as well as setting them, you're *exporting* the envvars
before asking a child process (i.e. haproxy) to use them.


> --
Jonathan Matthews
London, UK

Re: Setting up per-domain logging with haproxy

2018-07-17 Thread Jonathan Matthews
Hey Shawn,

On 17 July 2018 at 19:59, Shawn Heisey  wrote:
> Can haproxy be configured to create multiple logfiles?  Can the filename
> of each log be controlled easily in the haproxy config?  Can I use
> dynamic info for the logfile name like the value in the Host header?

Haproxy has absolutely nothing to do with the logfile creation! It
doesn't name them, rotate them or write into them.

That's *entirely* your local syslog daemon's responsibility -
configure it appropriately, and it'll do what you want.

Here's someone from 2011 doing exactly that:

> The *format* of the haproxy logfile is fine as it is, except that I
> would like to have more than the 1024 bytes that syslog allows.

Read the haproxy docs on this - you want to tune the "len" parameter of the
"log" directive.

As the docs say: some syslog servers allow messages >1024, some don't.

Use one that does :-)
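[The length tuning mentioned above is set per "log" line. A sketch; the address and facility are placeholders:]

```
global
    # raise the maximum emitted message length above the
    # 1024-byte default; the syslog server must accept it too
    log 127.0.0.1:514 len 4096 local2
```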


Jonathan Matthews
London, UK

Re: Need Help!

2018-06-26 Thread Jonathan Matthews
You may not have had many replies as your email was marked as spam.
You might want to address this by, amongst other things, using plain
text and not HTML.

On 24 June 2018 at 18:32, Ray Jender  wrote:
> I am sending rtmp from OBS with the streaming set to  rtmp://”HAproxy server
> IP”:1935/LPC1

> frontend rtmp-in
> mode tcp
> acl url_LPCX path_beg -i /LPC1/
> use_backend LPC1-backend if url_LPCX

> And here is the log after restarting HAproxy with mode=http:
> And here is the log after restarting HAproxy with mode=tcp:

You can't usefully use HTTP mode, as the traffic isn't HTTP.

Haproxy doesn't speak RTMP so, in TCP mode, haproxy doesn't know how
to extract path information (or anything protocol-specific) from the traffic.

It can't evaluate the ACL "url_LPCX", so you can't select a backend based on it.

Your best option is to have 4 frontends (or listeners) on 4 different
ports, and route using that information.
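[A hedged sketch of the one-frontend-per-port approach; only LPC1-backend appears in the quoted config, so the second port and backend name are invented for illustration:]

```
# RTMP is opaque to haproxy in tcp mode, so route on the
# listening port instead of the (unreadable) stream path
frontend rtmp-lpc1
    mode tcp
    bind :1935
    default_backend LPC1-backend

frontend rtmp-lpc2
    mode tcp
    bind :1936
    default_backend LPC2-backend

# ...and likewise for the remaining two streams
```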


Re: [PATCH] REGTEST: stick-tables: Test expiration when used with table_*

2018-06-21 Thread Jonathan Matthews
On Thu, 21 Jun 2018 at 19:45, Willy Tarreau  wrote:

> Oh indeed I didn't even notice! The correct solution is to use the
> domain for this, as explained in RFC2606/6761. No other
> domain possibly pointing to a valid location now or in the future
> should appear in test nor example files

[Gmail on mobile; forgive any formatting fubar]

Example\.com resolves. There's a "you can use this domain in documentation"
site there. *Someone* is absorbing the traffic to that domain - I suggest
not putting it in .vtc files :-)

I think the same RFC reserves .invalid as a TLD. Perhaps
missing.haproxy.invalid for when a DNS entry needs not to exist, and ...
something else for when it needs a real backend? I'm out of ideas on that
2nd use case ...


> --
Jonathan Matthews
London, UK

Re: [Feature request] Call fan-out to all endpoints.

2018-06-10 Thread Jonathan Matthews
On 10 June 2018 at 08:44, amotz  wrote:
> I found myself needing the options to do  "fantout" for a call. Meaning
> making 1 call to haproxy and have it pass that call to all of the endpoint
> currently active.
> I don't mind implementing this myself and push to code review Is this a
> feature you would be interested in ?

Hey Amotz,

I'm merely an haproxy user (not a dev and nothing to do with the
project from a feature/code/merging point of view), but I'd be
interested in using this.

I feel like an important part of it would be how you'd handle the
merge of the different server responses. I.e. the fan-in part.

I can see various merge strategies which would be useful in different
situations:

e.g. "Reply with *this* backend's response but totally ignore this
other backend's response" could be useful in a logging/audit scenario.

"Merge the response bodies in this defined order" could be useful for
structured data/responses being assembled.

"Merge the response bodies in any order, so long as they gave an HTTP
response code in the range of X-Y" could be useful for unstructured or
self-contained data (e.g. a catalog API).

"Merge these N distinct JSON documents into one properly formed JSON
response" could be really handy, but would obviously move haproxy's
job up the stack somewhat, and might well be an anti-feature!

I could have used all the above strategies at various points in my career.

I think all but the first strategy might well be harder to implement,
as you'll have to cater for a situation where you've received a
response but the admin's configured merging strategy dictates that you
can't serve the response to the requestor yet. You'll have to find
somewhere to cache entire individual response bodies for an amount of
time. I don't have any insight into doing that - I can just see that
it might be ... interesting :-)

If Willy and the rest of the folks who'd have to support this in the
future feel like this feature is worth it, please take this as an
enthusiastic "yes please!" from a user!


Re: JWT payloads break b64dec convertor

2018-05-28 Thread Jonathan Matthews
On Mon, 28 May 2018 at 14:26, Willy Tarreau <> wrote:

> On Mon, May 28, 2018 at 01:43:41PM +0100, Jonathan Matthews wrote:
> > Improvements and suggestions welcome; flames and horror -> /dev/null ;-)
> Would anyone be interested in adding two new converters for this,
> working exactly like base64/b64dec but with the URL-compatible
> base64 encoding instead ? We could call them :
>   u64dec
>   u64enc

I like that idea, and have already retrieved my K from the loft :-)


> --
Jonathan Matthews
London, UK

Re: JWT payloads break b64dec convertor

2018-05-28 Thread Jonathan Matthews
On 28 May 2018 at 12:32, Jonathan Matthews <> wrote:
> I think with your points and ccripy's sneaky (kudos!) padding
> insertion, I can do something which suffices for my current audit
> needs.

For the list, here's my working v1 that I ended up with. I'm sure
various things can be improved! :-)

I couldn't get ccripy's concat() and length() converters to work, but
I've stolen the basic idea - many thanks!

  acl ACL_jwt_payload_4x_chars_long var(txn.jwtpayload) -m reg ^(.{4})+$

  http-request set-var(txn.jwtpayload) req.hdr(jwt)
  http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub($,=) if !ACL_jwt_payload_4x_chars_long
  http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub($,=) if !ACL_jwt_payload_4x_chars_long
  http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub(-,+,g)
  http-request set-var(txn.jwtpayload) var(txn.jwtpayload),regsub(_,/,g)

  log-format " jwt-payload:%[var(txn.jwtpayload),b64dec]"

Improvements and suggestions welcome; flames and horror -> /dev/null ;-)


Re: JWT payloads break b64dec convertor

2018-05-28 Thread Jonathan Matthews
On 28 May 2018 at 09:19, Adis Nezirovic <> wrote:
> On 05/26/2018 04:27 PM, Jonathan Matthews wrote:
>> Hello folks,
>> The payload (and other parts) of a JSON Web Token (JWT, a popular and
>> growing auth standard) is base64
>> encoded.
>> Unfortunately, the payload encoding (specified in
>> is defined as the "URL safe"
>> variant. This variant allows for the lossless omission of base64
>> padding ("=" or "=="), which the haproxy b64dec convertor doesn't
>> appear to be able cope with. The result of
> Jonathan,
> It's not just padding, urlsafe base64 replaces '+' with '-', and '/'
> with '_'.

You're right. I'd noticed those extra substitutions but, for some
reason I'd assumed they were applied after decoding. Brain fart!

> For now, I guess the easiest way would be to write a simple
> converter in Lua, which just returns the original string, and send
> payload somewhere for further processing.

One nice thing about the JWT format is that it's unambiguously
formatted as "header.payload.signature", so the payload can be
trivially parsed out of a sacrificial header with a

  http-request replace-header copy-of-jwt [^.]+\.([^.]+)\..+ \1

... or some such manipulation. Here, for clarity, I'm double-passing
it through an abns@ frontend-backend-listen chain, hence the
additional header and not a variable, as per your example.

I think with your points and ccripy's sneaky (kudos!) padding
insertion, I can do something which suffices for my current audit needs.

I suspect you're right that a Lua convertor is probably the more
supportable way forwards, however.

Many thanks, both!

JWT payloads break b64dec convertor

2018-05-26 Thread Jonathan Matthews
Hello folks,

The payload (and other parts) of a JSON Web Token (JWT, a popular and
growing auth standard) is base64 encoded.

Unfortunately, the payload encoding is defined as the "URL safe"
variant. This variant allows for the lossless omission of base64
padding ("=" or "=="), which the haproxy b64dec convertor doesn't
appear to be able to cope with. The result of

  log-format %[,b64dec]

... when faced with such an unpadded string is just "-", which I take
to mean decoding failed. I believe it's failing on line 84 of

I've tried and failed to use a regex convertor to add padding to the
end, based on looking at the string's remainder after matching
clusters with '(.{4})+'. Annoyingly I can't make this work in the
regsub convertor as I believe it would require the use of grouping
parentheses, which aren't permitted by the parser currently.

I'm personally interested in this for logging the contents of JWT
payloads for audit. Is anyone else working with JWT in haproxy, in
this or any other context, and could share any tactics for dealing
with this problem?

Many thanks!

Re: WAF with HA Proxy.

2018-05-09 Thread Jonathan Matthews
On Wed, 9 May 2018 at 18:43, Mark Lakes <> wrote:

> For commercial purposes, see Signal Sciences Next Gen WAF solution:

That page says it supports "Nginx, Nginx Plus, Apache and IIS". Does it
integrate with HAProxy? Via what mechanism?


Jonathan Matthews
London, UK

Re: Use SNI with healthchecks

2018-04-24 Thread Jonathan Matthews
[Top post; fight me]

You could either read an environment variable inherited from outside the
process, or use "setenv" or "presetenv" as appropriate to DRY your config.

The fine manual describes how you would refer to this envvar in section
2.3, regardless of which of those options you use to set it.
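[A sketch of the setenv approach applied to the config in the quoted mail below. The host value and server address are placeholders; environment variables expand inside double-quoted strings at parse time, though whether that expansion works inside the escaped httpchk string may depend on your version, so test before relying on it:]

```
global
    # define the check host once, reuse it everywhere below
    setenv CHECK_HOST origin.example.invalid

backend cloudfront
    http-request set-header Host "${CHECK_HOST}"
    option httpchk HEAD /check HTTP/1.1\r\nHost:\ "${CHECK_HOST}"
    server applaunch applaunch.example.invalid:443 ssl verify none check check-sni "${CHECK_HOST}"
```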


On Tue, 24 Apr 2018 at 16:45, GALLISSOT VINCENT <>

> I migrated to 1.8 and sni + check-sni *are working fine* with the
> following code:
> 88
> backend cloudfront
> http-request set-header Host
> option httpchk HEAD /check HTTP/1.1\r\nHost:\
> server applaunch check resolvers
> mydns  no-sslv3 ssl verify required ca-file ca-certificates.crt sni 
> req.hdr(host)
> check-sni
> 88
> Obviously I cannot use %[req.hdr(host)] for "option httpchk" nor for
> "check-sni" directives.
> Do you know how can I define only one time my Host header in the code
> above ?
> Thanks,
> Vincent
> --
> *Envoyé :* lundi 23 avril 2018 17:33
> *À :* Lukas Tribus
> *Cc :*
> *Objet :* RE: Use SNI with healthchecks
> Thank you very much for your answers,
> I'll migrate to 1.8 asap to fix this.
> Vincent
> --
> *De :* <> de la part de Lukas Tribus <
> *Envoyé :* lundi 23 avril 2018 17:18
> *Cc :*
> *Objet :* Re: Use SNI with healthchecks
> Hello Vincent,
> On 23 April 2018 at 16:38, GALLISSOT VINCENT <>
> wrote:
> > Does anybody know how can I use healthchecks over HTTPS with SNI support
> ?
> You need haproxy 1.8 for this, it contains the check-sni directive
> which allows to set SNI to a specific string for the health check:
> Regards,
> Lukas
Jonathan Matthews
London, UK

Re: Version 1.5.12, getting 502 when server check fails, but server is still working

2018-04-16 Thread Jonathan Matthews
On Sun, 15 Apr 2018 at 20:56, Shawn Heisey <> wrote:

> Would I need to upgrade beyond 1.5 to get that working?

I don't have any info about your precise problem, but here's a quote from
Willy's 1.9 thread within the last couple of months:

"Oh, before I forget, since nobody asked for 1.4 to continue to be
maintained, I've just marked it "unmaintained", and 1.5 now entered
the "critical fixes only" status. 1.4 will have lived almost 8 years
(1.4.0 was released on 2010-02-26). Given that it doesn't support
SSL, it's unlikely to be found exposed to HTTP traffic in sensitive
places anymore. If you still use it, there's nothing wrong for now,
as it's been one of the most stable versions of all times. But please
at least regularly watch the activity on the newer ones and consider
upgrading it once you see that some issues might affect it. For those
who can really not risk to face a bug, 1.6 is a very good candidate
now and is still well supported 2 years after its birth."
You might get a solution to this and your other 1.5 problem on the list -
it has a very helpful and knowledgeable population :-)

But if you can possibly upgrade to 1.6 or later, I suspect the frequency of
answers you get and the flexibility they'll have to help you will improve.

Jonathan Matthews
London, UK

Re: resolvers - resolv.conf fallback

2018-04-14 Thread Jonathan Matthews
On 14 April 2018 at 05:13, Willy Tarreau  wrote:
> On Fri, Apr 13, 2018 at 03:48:19PM -0600, Ben Draut wrote:
>> How about 'parse-resolv-conf' for the current feature, and we reserve
>> 'use-system-resolvers' for the feature that Jonathan described?
> Perfect! "parse" is quite explicit at least!

Works for me :-)
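[For readers finding this thread later: a directive of this kind shipped in later haproxy versions under the name agreed above. A minimal sketch:]

```
resolvers sys
    # import nameservers from /etc/resolv.conf at startup;
    # note this parses once, it does not track later edits
    parse-resolv-conf
    hold valid 10s
```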

Re: resolvers - resolv.conf fallback

2018-04-13 Thread Jonathan Matthews
On Fri, 13 Apr 2018 at 15:09, Willy Tarreau <> wrote:

> On Fri, Apr 13, 2018 at 08:01:13AM -0600, Ben Draut wrote:
> > How about this:
> >
> > * New directive: 'use_system_nameservers'
> OK, just use dashes ('-') instead of underscores as this is what we mostly
> use on other keywords, except a few historical mistakes.

I'm *definitely* not trying to bikeshed here, but from an Ops perspective a
reasonable implication of  "use_system_nameservers" would be for the
resolution process to track the currently configured contents of
resolv.conf over time.

AIUI this will actually parse once, at proxy startup, which I suggest
should be made more obvious in the naming.

If I'm wrong, or splitting hairs, please ignore!


> --
Jonathan Matthews
London, UK

Re: Health Checks not run before attempting to use backend

2018-04-13 Thread Jonathan Matthews
On Fri, 13 Apr 2018 at 00:01, Dave Chiluk <> wrote:

> Is there a way to force haproxy to not use a backend until it passes a
> healthcheck?  I'm also worried about the side affects this might cause as
> requests start to queue up in the haproxy

I asked about this in 2014 ("Current solutions to the
soft-restart-healthcheck-spread problem?") and I don't recall seeing a fix
since then. Very interested in whatever you find out!


> --
Jonathan Matthews
London, UK

Re: Logs full TCP incoming and outgoing packets

2018-04-09 Thread Jonathan Matthews
On 10 April 2018 at 00:04,   wrote:
> Hello everybody,
> For an application, I use haproxy in TCP mode but I would need to log, from
> the main load balancer machine, all the TCP transactions (incoming packets
> sent to the node then the answer that is sent back from the node to the
> client through the haproxy load balancer machine).
> Is it possible to do such a thing? I started to dig in the ML and found little
> information about capturing the tcp-request, which does not work for now...
> and I need the response as well... so preferred to ask if someone have got
> an experience doing this. Sure, it will have a performance penalty but
> exhaustive logging is more important than that and it it the best solution
> to avoid a lot of changes in the existing infrastructure we just
> load-balanced.

I don't believe this is possible inside haproxy right now.

If I *had* to do this, I'd start by saying "no", and then I'd work out
how to run a tcpdump process on the machine with carefully tuned
filters and a -w parameter. Then I'd drink something strong.


Re: New HTTP action: DNS resolution at run time

2018-03-17 Thread Jonathan Matthews
On 30 January 2018 at 09:04, Baptiste  wrote:
> Hi all,
> Please find enclosed a few patches which adds a new HTTP action into
> HAProxy: do-resolve.
> This action can be used to perform DNS resolution based on information found
> in the client request and the result is stored in an HAProxy variable (to
> discover the IP address of the server on the fly or logging purpose,
> etc...).

Hello folks,

Did this feature ever go anywhere?

I'm trying to write some ACLs matching X-Forwarded-For headers against
a DNS record, and I *think* this set of patches is my only way to
achieve this, without using an external lookup process to modify ACLs
via the admin socket ...

Many thanks!
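[For reference, the patch series being asked about proposes an http-request action along these lines. A sketch based on the thread; the resolver address is a placeholder and the exact syntax may differ by version:]

```
resolvers mydns
    nameserver ns1 192.0.2.53:53

frontend fe
    # resolve the Host header at run time and store the result
    # in a variable for later use (routing, logging, ACLs)
    http-request do-resolve(txn.dstip,mydns,ipv4) req.hdr(host),lower
    http-request capture var(txn.dstip) len 40
```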

Re: skip logging some query parameters during GET request

2018-03-13 Thread Jonathan Matthews
I *think* you're going to have to fully construct your logging format with
a whitelist of params you want, rather than an exclusion list. I'm not sure
you can scope this by HTTP method, however.

Given your use of this as a forward proxy, I assume you could scope it by
Host header ... but that *might* require a double pass through haproxy,
with an "abns@" style listener containing the logging format configuration.


On Tue, 13 Mar 2018 at 12:51, Dave Cottlehuber <> wrote:

> Hi,
> I'm using haproxy to handle TLS termination to a 3rd party API that
> requires authentication (username/password) to be passed as query
> parameters to a GET call.
> I want to log the request as usual, just not all the query parameters.
> Obviously for a POST the parameters would not be logged at all, but is it
> possible to teach haproxy to exclude one specific query parameters on a GET
> request?
> the request:
> GET /api?username=seriously=ohnoes=locate=chocolat
> desired log something like:
> GET /api?username=seriously=locate=chocolat
> I can do this downstream in rsyslog but I'd prefer to cleanse the urls up
> front.
> A+
> Dave
> --
Jonathan Matthews
London, UK

Re: BUG/MINOR: limiting the value of "inter" parameter for Health check

2018-03-07 Thread Jonathan Matthews
On Wed, 7 Mar 2018 at 09:50, Nikhil Kapoor <> wrote

> As currently, no parsing error is displayed when larger value is given to
> "inter" parameter in config file.
> After applying this patch the maximum value of “inter” is set to 24h (i.e.
> 86400000 ms).

I regret to inform you, with no little embarrassment, that some years ago I
designed a system which relied upon this parameter being set higher than 24
hours.

I was not proud of this system, and it served absolutely minimal quantities
of traffic ... but it was a valid setup.

What's the rationale for having *any* maximum value here - saving folks
from unintentional misconfigurations, or something else?

Jonathan Matthews
London, UK

Re: Active-Passive HAProxy Issue enquiry

2018-02-15 Thread Jonathan Matthews
On 15 February 2018 at 10:08, Swarup Saha  wrote:
> Hi,
> I need help from HAProxy organization.

Hello there. This is the haproxy user mailing list. It is received and
read by a wide range of users across the world, many of whom read it
in an individual capacity.
If you want *commercial* support, then here is a link to organisations
which provide it:

> We all know that when we configure HAProxy in the Active-Passive manner then
> there is a VIP. Outside service will access the VIP and the traffic will be
> routed to appropriate inner services via Active Load Balancer.
> I have configured one Active Load Balancer in Site 1 and Passive Load
> Balancer in Site 2, They are connected via LAN, Outside traffic will be
> routed through VIP.
> Now, my question is if the LAN connectivity between the Active-Passive
> HAProxy goes down will the VIP still exist?

This is *entirely* the concern of the technology and methods you use
to create the VIP across multiple haproxy instances in your different sites.
It isn't under the control of haproxy, which deals with failover of
the *backend* services you're load balancing.

Failure of an entire loadbalancer, and how your setup deals with that,
is *100%* a concern of the technology (not haproxy) with which you've
chosen to implement resilience.

People *might* be able to assist on this list if you gave some more
detail about the technologies you're using.


Re: How can I map bindings to the correct backend?

2018-01-25 Thread Jonathan Matthews
Unless I'm missing something, wouldn't you be rather better off just having
a dedicated frontend for each set of ports that forwards to each distinct
backend server?

Or are you doing this at webscale, or something? :-)

Jonathan Matthews
London, UK

Re: cannot bind socket - Need help with config file

2018-01-11 Thread Jonathan Matthews
On 11 January 2018 at 00:03, Imam Toufique  wrote:
> So, I have everything in the listen section commented out:
> frontend main
>bind :2200
>default_backend sftp
>timeout client 5d
> #listen stats
> #   bind *:2200
> #   mode tcp
> #   maxconn 2000
> #   option redis-check
> #   retries 3
> #   option redispatch
> #   balance roundrobin
> #use_backend sftp_server
> backend sftp
> balance roundrobin
> server web check weight 2
> server nagios check weight 2
> Is that what I need, right?

I suspect you won't need to have your *backend*'s ports changed to
2200. Your SSH server on those machines is *probably* also your SFTP
server. I don't recall if you can serve a different/sync'd host key
per port in sshd, but this might be a reason to run a different daemon
on a higher port as you're doing.

As an aside, it's not clear why you're trying to do this. You've
already hit the host-key-changing problem, and unless you have a
*very* specific use case, your users will hit the "50% of the time I
connect, my files have gone away" problem soon. So you've probably got
to solve the shared-storage problem on your backends ... which turns
them in to stateless SFTP-to-FS servers.

In my opinion adding haproxy as a TCP proxy in your architecture adds
very little, if anything. If I were you, I'd strongly consider just
sync'ing the same host key to each server, putting their IPs in a
low-TTL DNS record, and leaving haproxy out of the setup.


Re: cannot bind socket - Need help with config file

2018-01-08 Thread Jonathan Matthews
On Mon, 8 Jan 2018 at 08:29, Imam Toufique <> wrote:

> [ALERT] 007/081940 (1416) : Starting frontend sftp-server: cannot bind
> socket []
> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [
> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [

I would strongly suspect that the server already has something bound to
port 22. It's probably your SSH daemon.

You'll need to fix that, by dedicating either a different port or interface
to the SFTP listener.


--
Jonathan Matthews
London, UK

Re: haproxy without balancing

2018-01-05 Thread Jonathan Matthews
On 5 January 2018 at 10:28, Johan Hendriks <> wrote:
> BTW if this is the wrong list please excuse me.

This looks to me like it might be the right list :-)

> We have an application running over multiple servers which all have
> there own subdomain, there are about 12 of them.
> We can live without loadbalancing, so there is no failover, each server
> serves a couple of subdomains.

What protocols are these servers serving?

  - if HTTPS, do you control the TLS certificates and their private keys?
  - Something else?
    - if something else, what?

> At this moment every server has its own ip, and so every subdomain has a
> different DNS entry. What we want is a single point of entry and use
> haproxy to route traffic to the right backend server.

Are the DNS entries for every subdomain under your control?
How painful would it be to change one of them?
How painful would it be to change all of them?

> Replacing an server is not easy at the moment. We have a lot of history
> to deal with. We are working on it to leave that behind but till then we
> need an solution.
> I looked at this and i think i have two options.
> Create for each server in the backend an ip on the haproxy machine and
> connect a frontend for that IP to the desired backend server.
> This way we still have multiple ipadresses, but they can stay the same
> if servers come and go.
> Secondly we could use a single ip and use ACL to route the traffic to
> the right backend server.
> The problem with the second option is that we have around 2000 different
> subdomains and this number is still growing. So my haproxy config will
> then consists over 4000 lines of acl rules.
> and I do not know if haproxy can deal with that or if it will slowdown
> request to much.

Haproxy will happily cope with that number of ACLs, but at first
glance I don't think you need to do it that way.

Assuming you're using HTTP/S, you would probably be able to use a map,
as described in this blog post:
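Roughly (and untested), the map approach collapses all those ACLs into a single lookup; the map file path and fallback backend name here are invented:

```haproxy
frontend shared
    bind :80
    # look the Host header up in a "subdomain -> backend" map file,
    # falling back to be_default when no entry matches
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/,be_default)]
```

The map file itself is just two columns per line: a hostname, then the name of the backend that should serve it.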

Also, assuming you're using HTTP/S, if you can relatively easily
change DNS for all the subdomains to a single IP then I would
*definitely* do that.

If you're using HTTPS, then SNI client support would
be something worth checking, but as a datapoint I've not bothered
supporting non-SNI clients for several years now.

All the best,
Jonathan Matthews
London, UK

Re: Poll: haproxy 1.4 support ?

2018-01-02 Thread Jonathan Matthews
On 2 January 2018 at 15:12, Willy Tarreau  wrote:
> So please simply voice in. Just a few "please keep it alive" will be
> enough to convince me, otherwise I'll mark it unmaintained.

I don't use 1.4, but I do have a small reason to say please *do* mark
it as unmaintained.

The sustainability of haproxy is linked to the amount of work you (and
a /relatively/ small set of people) both have to do and want to do.
I would very much like it to continue happily, so I would vote to
reduce your mental load and to mark 1.4 as unmaintained.

Thank you for haproxy, and here's to a great 2018, with 1.8 and beyond :-)


Re: Why HAProxy is not a web server?

2017-11-28 Thread Jonathan Matthews
On 27 November 2017 at 01:09,   wrote:
> Why HAProxy is not a web server?

Because it's a load balancer.
It talks to multiple other web servers, often called backends or
origins, which provide the content for it to serve to consumers.


Re: Tagging a 1.8 release?

2017-10-23 Thread Jonathan Matthews
On 20 October 2017 at 17:17, Willy Tarreau  wrote:
> I'd like to collect all the pending stuff by the end of next week and issue
> a release candidate. Don't expect too much stability yet though, but your
> tests and reports will obviously be welcome.

Are you still finger-in-the-air aiming for a "November 2017" 1.8? I
don't recall where I saw that quote, but I'm pretty sure it was an
intention mentioned ... /somewhere/!

No pressure - just wondering :-)


Re: counters for specific http status code

2016-07-12 Thread Jonathan Matthews
On 12 Jul 2016 05:43, "Willy Tarreau"  wrote:
> That could possibly be ssh $host "halog -st < /var/log/haproxy.log" or
> anything like this.

On behalf of people running busy load balancers / edge proxies / etc,
please don't do this ;-)

laptop$ halog -st <(ssh -C $balancer "cat /var/log/haproxy.log")

... is likely to be slightly kinder to my contended servers :-)


Re: AWS ELB with SSL backend adds proxy protocol inside SSL stream

2016-05-10 Thread Jonathan Matthews
Hello Hector -

On 5 May 2016 at 12:11, Hector Rivas Gandara
<> wrote:
>  * If not, is there a better way to 'chain' the config as I did above.

I don't have any insight into the protocol layering problem you're
having, I'm afraid, but if you do end up with the chained solution you
describe, I have a suggestion.

Take a look at the "abns@" syntax and feature documented here:
It's excellent for HAP->HAP links, as you're using. I'm using it in
production *inside* Cloud Foundry, for the record :-)

As an aside, I'd be interested in even a brief summary of how/if you
resolved your problem, given that I've not seen it described on the
list before. I wonder if you're the first to run into this specific
problem ...

All the best,
Jonathan Matthews
London, UK

Re: Erroneous error code on wrong configuration.

2016-05-01 Thread Jonathan Matthews
On 29 Apr 2016 11:29, "Mayank Jha"  wrote:
> I am facing the following in haproxy 1.5. I get the following error, with
error code "SC" which is very misleading, for the below mentioned config.

Why do you think it's misleading?

> haproxy[6379]: [29/Apr/2016:12:05:40.552] my_frontend
my_frontend/ -1/-1/-1/-1/1 503 212 - - SC-- 0/0/0/0/0 0/0 "GET /
> With the following config.
> frontend my_frontend
> bind :80
> acl global hdr(host) -i blablabla
> use_backend my_backend if global
> backend my_backend
> server google

Given that you don't alter the Host header before submitting the request to
Google, I'm not sure what you're expecting to happen.

I think there's a fair bit of extra information you'll need to provide
before I (at least; not speaking for anyone else!) understand what your
problem actually *is*. You're assuming we know more than we do about your
setup, aims, and expected outcomes :-)


Re: unique-id-header set twice

2016-04-30 Thread Jonathan Matthews
On 29 Apr 2016 06:55, "Willy Tarreau"  wrote:
> On Fri, Apr 22, 2016 at 04:37:04PM +0200, Erwin Schliske wrote:
> > Hello,
> >
> > for some of our services requests pass haproxy twice. As we have set the
> > global option unique-id-header this header is added twice.
> I don't know what could cause this. Would you happen to have it in a
> defaults section maybe, with your traffic passing through a frontend
> and a backend ? If that's what causes it, I think we have a mistake
> in the implementation and should ensure it's done only once, just like
> x-forwarded-for.

I /think/ you're talking at slight cross-porpoises!

My reading of the OP is that when a request comes in to a frontend/listener
with the configured unique-Id header already present, then a second UID
header is added.

My reading of your post, Willy, is that this would be a bug (which might
suggest why unique-id-header isn't ACL-able?). But I may have misunderstood
- you may be talking solely about when a request crosses a frontend/backend
boundary, and not when the request comes in the front door anew (even if it
was, as per the OP, a request coming back in directly from a backend).

Am I right, both? I only ask because this has bugged me slightly in the
past, and it'd be great to clear up the definition of the UID header
option: When enabled, is the header's addition predicated on its initial absence from the request?
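For anyone reading along later, the two directives under discussion look like this; the format string is the documented example, not anything from the original post:

```haproxy
frontend fe
    bind :80
    # build a per-request ID and send it upstream in this header
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
```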


Re: Dynamic backend routing base on header

2016-01-18 Thread Jonathan Matthews
On 17 January 2016 at 17:54, Michel Blanc  wrote:
> Dear all,
> I am trying to get haproxy routing to a specific server if a header
> (with the server nickname) is set.

Can you adapt
to achieve what you want?
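The general shape (typed but not tested; the header and backend names are invented) would be something like:

```haproxy
frontend fe
    bind :80
    # route to a dedicated single-server backend when the nickname
    # header matches; otherwise balance across everything as normal
    use_backend be_s1 if { req.hdr(X-Target-Server) -m str s1 }
    use_backend be_s2 if { req.hdr(X-Target-Server) -m str s2 }
    default_backend be_all
```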


Re: SSL and Piranha conversion

2015-09-08 Thread Jonathan Matthews
On 8 Sep 2015 20:07, "Daniel Zenczak"  wrote:
> Hello All,
> First time caller, short time listener. So this is the
deal.  My organization was running a CentOS box with Piranha on it to work
as our load balancer between our two web servers.  Well the CentOS box was
a Gateway workstation from 2000 and it finally gave up the ghost.

May I suggest you reconsider migrating your hardware and software at the
same time, both whilst under pressure? It will be massively simpler to
install your preexisting choice of (known "good") software on your new hardware.


Re: SSL and Piranha conversion

2015-09-08 Thread Jonathan Matthews
On 8 September 2015 at 20:56, Daniel Zenczak  wrote:
> Hello Jonathan,
> Thank you for the response.  That old gateway workstation is
> not going to be used anymore (the HDDs failed on it and the RAID board
> didn’t warn/detect/tell us when it happened).  I have spun up Ubuntu Server
> inside one of our Virtual Servers to act as the new Load Balancer.  Is this
> what you mean by migrating the hardware as well as the software?

[on-list reply]

Daniel -

You have to swap out your hardware because it failed.
You don't have to swap out your software as it has not failed.

Whilst a move to HAProxy is a great plan, I would not be doing it
whilst trying to fix your web servers' redundancy and bringing both
web servers back into service.

My professional advice in your situation would be to change the
minimum number of things necessary to restore resilient service, which
in this case sounds like only your hardware - whether you fix it by
replacing the hardware or by virtualising the server.

I would not include swapping Piranha for HAProxy and CentOS for Ubuntu
in this work. I'd do both of those later.


Re: How to profile stats web page users

2015-04-09 Thread Jonathan Matthews
I think you want ACL-driven stats scope statements, which don't
exist to the best of my knowledge.

In your case, rather than open a bunch of different ports, I'd give
people different FQDNs to hit, and point a wildcard DNS record at a
single port 80. (Well, a :443 with TLS, if I were doing it, but you're
using :80 in your example)


frontend stats
  bind IP_ADDRESS:80
  mode http
  option httplog
  compression algo gzip
  use_backend stats-foo if { hdr(host) }
  use_backend stats-bar if { hdr(host) }
  default_backend always_returns_400

defaults for-stats-backends-in-effect-until-next-defaults-section
  mode http
  option httplog
  stats enable
  stats uri /haproxystats
  stats refresh 60s
  stats show-legends
  stats scope .

backend stats-foo
  stats scope foo-frontend-1
  stats scope foo-backend-2
  stats auth user1:password1

backend stats-bar
  stats scope bar-frontend-1
  stats scope bar-frontend-2
  stats scope bar-backend-3
  stats auth user2:password2

defaults reset-defaults-disable-stats

rest of config

(typed but not tested ...)

Yes, this isn't too different from what you proposed :-) Note the use
of the multiple defaults section to move as much common config out
of the individual backends.

You might also find userlists handy:
They'll let you make the stats auth definitions a fair bit cleaner,
moving the user/passwords lists elsewhere in your configuration file.
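A sketch of the userlist variant (typed but not tested, credentials invented):

```haproxy
userlist stats-users
    user user1 insecure-password password1
    user user2 insecure-password password2

backend stats-foo
    stats scope foo-frontend-1
    # authenticate against the shared userlist instead of an
    # inline "stats auth user:password" pair
    acl valid-stats-user http_auth(stats-users)
    stats http-request auth unless valid-stats-user
```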


Re: How to profile stats web page users

2015-04-09 Thread Jonathan Matthews
Have you looked at stats scope?


stats uri doesn't inherit from defaults sections

2015-04-09 Thread Jonathan Matthews
Hi all -

A bit of lunchtime playing around today has exposed the fact that a
stats uri in a defaults section has no effect on backends to which
the defaults section /should/ apply. Stats-serving backends only obey
the compile-time default (/haproxy?stats) in my tests, until an
explicit stats uri is placed inside the backend definition.

The docs state that stats uri is valid in defaults sections, so let
me ask: is this a documentation bug (which I'll happily submit a patch
for!) or something else? To my mind, it absolutely makes sense to have
this statement as settable in a defaults section.

I've only tested this on the latest Debian backports version, 1.5.8,
but I don't see anything related in the changelog since then which
makes me think it's been fixed. The docs for 1.5.11 currently state
it's a defaults-settable config statement.

Jonathan Matthews
Oxford, London, UK

Re: Gracefull shutdown

2015-04-05 Thread Jonathan Matthews
On 5 April 2015 at 10:33, Cohen Galit wrote:
 How can I perform a graceful shutdown to HAProxy?

 I mean, not by killing process with pid.

Please could you describe the behaviours you expect from a graceful
shutdown which you don't get from killing the process? I would expect
a `service haproxy stop`, which almost certainly translates to a `kill
-TERM PID`, to be about as graceful as it gets ...

Re: Environment variable in port part of peer definition not resolved

2015-03-25 Thread Jonathan Matthews
On 25 March 2015 at 23:14, Dennis Jacobfeuerborn wrote:
 I'm trying to make the haproxy configuration more dynamic using
 environment variables and while this works for the definition of the pid
 file and the stats socket when I try to use an env. variable as the port
 of a peer definition I get an error:

Given that the docs for `peer` explicitly reference this envvar usage,
are you sure you're exporting those exact envvars to the child process?
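For reference, the shape I'd expect to work once the variable is exported into haproxy's environment (peer name and address invented):

```haproxy
# assumes HAPROXY_PEER_PORT is exported by whatever launches haproxy,
# e.g. "export HAPROXY_PEER_PORT=1024" in the init script
peers mypeers
    peer lb1 "${HAPROXY_PEER_PORT}"
```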

Re: Debian (wheezy) official backport stuck at 1.5.8?

2015-03-12 Thread Jonathan Matthews
On 10 March 2015 at 16:36, Vincent Bernat wrote:
  ❦ 10 mars 2015 15:48 GMT, Jonathan Matthews : reports that
 it's up to date with 1.5, but is only making 1.5.8 available. Does
 anyone have any insight into why this might be and how/if one might
 help the situation?

 To be in wheezy-backports a package has to be in jessie (the next
 version of Debian). Currently, jessie is frozen because the release is
 imminent, so it is not possible to push newer versions. Once jessie is
 released, it will be possible to get more recent versions for wheezy
 through wheezy-backports-sloppy (or jessie-backports if you upgrade
 to jessie).

 Also note that critical fixes have been integrated in this version (in
 wheezy-backports). See the changelog:

 Once 1.6~dev1 is released, I will push more repositories to give more
 choices (1.4, 1.5, 1.6, all distributions, stable or latest

Thank you, that's cleared it up. I had wondered if it was
jessie-related - it's good to get confirmation :-)


Debian (wheezy) official backport stuck at 1.5.8?

2015-03-12 Thread Jonathan Matthews
Hi all - reports that
it's up to date with 1.5, but is only making 1.5.8 available. Does
anyone have any insight into why this might be and how/if one might
help the situation?


Re: How to compare two haproxy.cfg files?

2015-03-09 Thread Jonathan Matthews
On 8 March 2015 at 18:46, Tom Limoncelli wrote:
 The first step is to put the sections in a fixed order: First general,
 the defaults, then each listen/frontend/backend sorted by name.  That
 works fine and has been a big help.

Not a huge amount of help with your task, but don't forget that
multiple default sections are valid, and take effect on the
non-defaults sections following them - but only up to the /next/
defaults section.

I.e. (IIRC!) with this:

  defaults A
  backend #1
  listener #2
  frontend #3
  defaults B
  listener #4

A's settings affect #1, #2 and #3, and B's settings affect #4.

It would be different and quite possibly materially different if you
concatenated all the defaults together at the top, and only then
defined #1-#4.
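To make that concrete (section names and timeout values invented; typed but not tested):

```haproxy
# each proxy section inherits only from the most recent
# defaults section above it, not from all of them combined
defaults A
    timeout client 30s

backend b1
    # inherits A's 30s client timeout

defaults B
    timeout client 5s

backend b2
    # inherits B's 5s client timeout, not A's 30s
```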


Re: Sharing configuration between multiple backends

2015-03-09 Thread Jonathan Matthews
On 9 March 2015 at 00:12, Thrawn wrote:
 Hi, all.

 Is there a way to share configuration between multiple backends?

 The use case for this is that we would like to configure different response 
 headers for different parts of our application, based on the request URL, but 
 otherwise route traffic the same way. Specifically, we want to specify 
 'X-Frame-Options: ALLOW-FROM some site' across most of the application, but 
 just use 'X-Frame-Options: DENY' on the admin area.

 We could do this, of course, by sending the admin traffic to a different 
 backend, and setting the response header differently in that backend, but 
 then we'd need to repeat our server configuration, which is otherwise the 
 same. Something like this:

 frontend foo
   listen x.x.x.x
   acl admin url_beg /admin
   default_backend foo
   use_backend foo_admin if admin

 backend foo
   rspadd X-Frame-Options: ALLOW-FROM

 backend foo_admin
   rspadd X-Frame-Options: DENY

 To reduce the duplication, is it possible to have one backend delegate to 
 another, or specify a named list of servers that can be referenced from 
 different places?

I don't know about your specific *question*, but to solve your
specific *problem*, you might just use rspadd's conditional form:

frontend foo
  acl admin url_beg /admin
  rspadd X-Frame-Options: DENY if admin
  rspadd X-Frame-Options: ALLOW-FROM unless admin
  default_backend whatever

As per
Dictated but not tested ;-)


Re: Config reload to take out backend server still getting traffic

2014-12-11 Thread Jonathan Matthews
On 11 December 2014 at 07:58, Kasim wrote:

 I am running haproxy on Ubuntu 14.04. After I added following config:
 stick-table type ip size 2m expire 5m
 stick on src

 Taking out a server and reloading haproxy still sends traffic to that server
 ever after the stick table expires. For example, I have
 server s1 
 server s2 

 After commenting s1 out and reloading config, s1 still gets traffic. This
 does not happen without the stick-table and stick on config.

 Any pointer or explanation? Could not find it in the doc or online.

I /suspect/ you'll find that, after the reload, there's an old haproxy
process sticking around to deal with connections which clients are
keeping open. This traffic will be to both your s1 and s2 backends,
but you're only noticing it on s1 as you're expecting it to have stopped.


Re: 1.5.9 crashes every 4 hours, like clockwork

2014-12-11 Thread Jonathan Matthews
On 11 Dec 2014 14:27, David Adams wrote:

 We are running 1.5.9 on Centos 6.5.  It crashes 10 seconds (give or take
a few seconds) after 1am, 5am, 9am, 1pm, 5pm and 9pm, like clockwork; let's
call that CRASHTIME.  Previously we'd been using 1.5.3 on the same hardware
for some months without crashes.  Once the crashes started we moved to
1.5.9 but they continue.  If we manually restart it a minute or two before
CRASHTIME it still crashes when CRASHTIME arrives a minute or two later.

 We've looked at all cron jobs that run on the server for anything that
could be causing the problem but found nothing.  We've even dumped a
process list every 1 second in the minutes before and after CRASHTIME and
there is nothing untoward.  Traffic levels don't change and besides, that
it happens every 4 hours at exactly the same time suggests it's not traffic
related.  Presumably that also rules out any kind of malformed request or
similar causing it.

I would check my ssh logs. In the absence of an on-system cron/at process
doing this, I'd be looking /really/ externally :-)

Perhaps disable sshd a few minutes before the crash and enable it a few
minutes afterwards. I bet something like that (or a zabbix agent's actions,
or something else originating on another system) is screwing with you ...


Re: Can not set or clear a table when the Key contains \

2014-12-08 Thread Jonathan Matthews
On 5 December 2014 at 07:05, Nick wrote:
 when i try the command `echo -e "set table RD01-CSN-1 key PVG\\PENGZ
 data.server_id 3" | socat /var/run/haproxy.stat stdio`, the unix socket
 seems to exclude the backslash \\, so i cannot successfully edit the
 Haproxy tables.
 the same problem when i try the command `echo -e "clear table RD01-CSN-1
 key PVG\\PENGZ data.server_id 3" | socat /var/run/haproxy.stat stdio`.

I think you're having a generic shell escaping problem, which has
nothing to do with haproxy or the unix socket.
Try using single quotes around the string you pass in, and without
giving echo that -e parameter.
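To illustrate, using the table and key from your post (the socat pipe is commented out since it needs a running haproxy):

```shell
# Single quotes stop the shell consuming the backslash, and printf
# avoids echo's -e escape processing entirely.
cmd='set table RD01-CSN-1 key PVG\PENGZ data.server_id 3'
printf '%s\n' "$cmd"
# printf '%s\n' "$cmd" | socat /var/run/haproxy.stat stdio
```

That prints `set table RD01-CSN-1 key PVG\PENGZ data.server_id 3` with the backslash intact.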


Re: Disable HTTP logging for specific backend in HAProxy

2014-12-08 Thread Jonathan Matthews
On 7 December 2014 at 20:54, Alexander Minza wrote:
 How does one adjust logging level or disable logging altogether for specific
 backends in HAProxy?

 In the example below, both directives http-request set-log-level err and
 no log seem to have no effect - the logs are swamped with lines of
 successful HTTP status 200 OK records.
 backend static
   http-request set-log-level err
   no log

Are you /absolutely/ sure that these log lines aren't being emitted by
the frontend or listener through which your backend must have received
the request? Are you expecting your "no log" to percolate back to the
frontend? I don't /think/ it works that way ... (though I've not checked.)

[ As an aside, the way I read what you've written above is "mark *all*
logs from the static backend as err level". Whereas your global
section's "log /dev/log local1 notice" line says "log everything that
is notice-or-more-severe to /dev/log". I know your "no log" looks
like it should override this logging, but I just thought I'd mention
it as it looks a little odd. ]


Re: Can't find an old example of haproxy failover setup with 2 locations

2014-12-08 Thread Jonathan Matthews
On 8 Dec 2014 15:10, Aleksandr Vinokurov wrote:

 I've seen it 2 years ago. If I remember it right, Willy Tarreau was the
author and it had ASCII graphics for network schema. It depicts step by
step the configuration from one location and one server to 2 locations and
4 (or only 2) Haproxy servers.

 Will be **very** glad if smb. can share a link to it.

Might you be referring to


Re: Can't get HAProxy to support Forward Secrecy FS

2014-12-08 Thread Jonathan Matthews
On 8 December 2014 at 22:44, Sander Rijken wrote:
 System is Ubuntu 12.04 LTS server, with openssl 1.0.1 and haproxy 1.5.9

 OpenSSL version
 OpenSSL 1.0.1 14 Mar 2012

 I'm currently using the following, started with the suggested [stanzas][1]
 (formatted for readability, it is one long line in my config):

 bind ssl crt mycert.pem no-tls-tickets ciphers \


 AES128-SHA:AES256-SHA256:AES256-SHA no-sslv3

 [1]:
 SSLLabs indicates FS is not used. When I disable all algorithms except
 the ECDHE ones, I get SSL connection error (ERR_SSL_PROTOCOL_ERROR), so
 something on the system doesn't support FS.

 Any ideas?

I'm not best placed to help you debug your setup, but you might diff
your versions and setup against what I have on my personal site, which
SSLlabs says has Robust forward secrecy. I followed the server-side
recommendations of the Modern setup, here:

Here's some data you can check against, along with the commands I used
to generate it:

user:~$ /usr/sbin/haproxy -vv
HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

user:~$ ldd /usr/sbin/haproxy =  (0xe000) = /lib/i386-linux-gnu/i686/cmov/ (0xb76b4000) = /lib/i386-linux-gnu/ (0xb769b000) =
/usr/lib/i386-linux-gnu/i686/cmov/ (0xb7641000) =
/usr/lib/i386-linux-gnu/i686/cmov/ (0xb7483000) = /lib/i386-linux-gnu/ (0xb7445000) = /lib/i386-linux-gnu/i686/cmov/ (0xb72e) = /lib/i386-linux-gnu/i686/cmov/ (0xb72dc000)
/lib/ (0xb76f9000)

user:~$ apt-cache policy openssl haproxy | grep -i -e install -e ^[a-z]
  Installed: 1.0.1e-2+deb7u13
  Installed: 1.5.8-1~bpo70+1

user:~$ openssl version
OpenSSL 1.0.1e 11 Feb 2013

user:~$ openssl ciphers

Re: Significant number of 400 errors..

2014-11-27 Thread Jonathan Matthews
On 27 November 2014 at 10:39, Alexey Zilber wrote:
 That's part of what I'm trying to figure out.. where are the junk bytes
 coming from.  Is it from the client, server, haproxy, or networking issue?

That's what tcpdump is useful for. Use it at different places in your
end-to-end client/backend path, and you'll discover where the junk bytes are coming from.


Re: POST body not getting forwarded

2014-11-20 Thread Jonathan Matthews
On 20 November 2014 05:17, Rodney Smith wrote:
 I have a problem where a client is sending audio data via POST, and while
 the request line and headers reach the server, the body of the POST does
 not. However, if the client uses the header Transfer-Encoding: chunked and
 chunks the data, it does get sent. What can I do to get the POST body sent
 without the chunking?
 What can be changed to get the incoming raw data packets to get forwarded?

 I'm using HAProxy in a forward proxy mode (option http_proxy). The function
 http_request_forward_body() has the message in the HTTP_MSG_DONE state, and
 the log line in process_session() line 1785 shows the incoming data is
 accumulating rqh=(s->req->buf->i).

I don't have a direct answer on your observed problem, but I would
point out that judging by my archives, the use of the http_proxy
option is /extremely/ underrepresented on this list. I have no
information if this might be the case, but I suggest that it would be
possible for a bug to creep in and remain hidden for longer in this
code path because of its relative rareness.

This mean-time-to-bug-discovery might be compounded by the very
(very!) broad demographic generalisation that people using this
simplistic feature of haproxy /might/ be less inclined to upgrade for
feature-based reasons, due to their architectures perhaps relying less
on a fully-featured proxy being inline.

In the absence of any other information, my next steps in your
situation would be to see if I could replicate this problem in a
different haproxy mode, not using option http_proxy.

I absolutely recognise that that might not be possible, and I'm sure
others on the list will help you discover the true root cause of the
problem. I only mention it as it might not be obvious that this isn't
a commonly discussed, hence maybe used, feature of haproxy.


Re: Haproxy - time to split traffic on servers

2014-11-14 Thread Jonathan Matthews
On 14 November 2014 22:59, Gorj Design ( Dragos ) wrote:

 I have been using Haproxy to split the traffic between my servers.
 I have a haproxy server and 2 servers that receive the traffic using round
 robin .
 The traffic is split usually very good  50 % on one server and 50 % on the other.

 But at some point, the traffic gets in so fast for example
 From 15.702 to ..15.706 hundreads of incomming traffic are comming
 and all are sent to server one .

 Can I set it somehow so the traffic is split even if it comes at such a low
 milliseconds difference ?

I don't believe you /should/ be seeing this pattern/problem with a
simple round-robin setup. Are you *positive* that neither server
polled down for any period, no matter how small?
describes your load balancing algorithm choices. I know it warns
against leastconn with short-lived connections, but I've never had any
problems with using that algorithm for HTTP :-)


Re: How can I force all frontend traffic to be temporarily queued/buffered by HAProxy?

2014-07-17 Thread Jonathan Matthews
On 17 Jul 2014 14:50, Abe Voelker wrote:
 So basically I'm wondering if there is a way to expire these
pre-existing sessions or connections or somehow force them to behave like a
new one so that they will queue up in HAProxy?

I believe 1.5 has the on-marked-down shutdown-sessions option to close
connections when backends fail healthchecks. I don't recall what effect it
has on the weighting change operation you're doing, however.

I can't speak for the sanity of the approach, but I've used (tcp)cutter to
terminate connections through a Linux firewall before. Maybe you could
script that, or the similar tool tcpkill.

Overall, however, I'd personally choose to address this at the DB or app
layers - perhaps with a lock, perhaps with a code change to make the app be
more forgiving during the outage. Doing this in the network feels
error-prone and wrong.


Re: Using a Whitlist to Redirect Users not on the Whitelist

2014-07-17 Thread Jonathan Matthews
On 17 Jul 2014 18:15, JDzialo John wrote:
 I am creating a whitelist of subnets allowed to access HAPROXY during
maintenance.  Basically I want to redirect everyone to our maintenance page
other than users in the whitelisted file.

 This is not working and is forwarding everyone to the maintenance page
despite being a member of a whitelisted subnet.

 Is using the hdr_ip(X-Forwarded-For) in the acl the way to go

Unless your traffic is passing through another reverse proxy which inserts
this header before it hits HAProxy, no. Why are you choosing to use that header?

Re: Adding Serial Number to POST Requests

2014-07-16 Thread Jonathan Matthews
On 16 Jul 2014 16:56, Zuoning Yin wrote:

 We later also got the help from Willy.  He provided us a configuration
which solved our problem. To benefit other people,  I just posted it here.

I had meant to chime in on this thread earlier.

What happens when your HAProxy layer loses state - be it reboot, service
restart or  data centre power cut? Are you risking resetting the counter
and overwriting existing data on the backend? Are you in fact treating HAP
as a single point of truth?


Re: Filing bugs.. found a bug in 1.5.1 (http-send-name-header is broken)

2014-07-07 Thread Jonathan Matthews
On 7 Jul 2014 14:44, Alexey Zilber wrote:

 Hey guys,

   I couldn't find a bug tracker for HAProxy, and I found a serious bug in
1.5.1 that may be a harbinger of other broken things in header manipulation.

  The bug is:

   I added 'http-send-name-header sfdev1' under the defaults section of

 When we would do a POST with that option enabled, we would get 'sf'
injected into a random variable.   When posting with a time field like
'07/06/2014 23:43:01' we would get back '07/06/2014 23:43:sf' consistently.

Alex -

Would you be able to post a (redacted) config that causes haproxy to
exhibit this behaviour, along with a fuller example of exactly where this
unwanted data appears in context?

If you could post a packet capture of the data being inserted, that will
probably help people to home in on the cause of the problem. Don't forget
to redact anything from the capture as you feel necessary, such as auth
creds, public IPs and host headers. (Anything you're content /not/ to
redact could only help, however!)


Re: haproxy 1.4 and ssl

2014-06-16 Thread Jonathan Matthews
On 16 Jun 2014 13:19, wrote:

 hi all

 i have the following situation:
 i have 4 real servers (two exchange2013 and 2 citrix) which should get
loadbalanced behind haproxy 1.4 (because this is the version shipped with
 these backend servers should talk:
 exchanges: https, pop3, imap, pop3s, imaps
 citrix: https

 https should get passed through, i.e. the certificates are on the real
servers and NOT on the loadbalancer.

This is the only option with 1.4, as it can't terminate HTTPS.

 i would like to have a single check for all exchange-services. this check
is https://exchange/ews/healthcheck.html
 if this fails all services for exchange should switch over.

It sounds to me like you should investigate haproxy's healthcheck server
tracking functionality, and have your services all hanging off a single
healthcheck per real server. I forget the option name, but search the main
docs.txt for tracking and you should find it.
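The keyword in question is 'track'. A sketch under assumed names and addresses (the real setup would hang the single check off the Exchange healthcheck URL):

```
backend exchange_https
    # The one "real" check per physical server
    server ex1 192.0.2.10:443 check inter 5s fall 3 rise 2

backend exchange_imaps
    mode tcp
    # No check of its own: this server mirrors the state of
    # exchange_https/ex1, so all Exchange services fail over together
    server ex1 192.0.2.10:993 track exchange_https/ex1
```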

 and for citrix i would like to fail if the real servers fail, e.g. with
HTTP error code 503. it works if a service goes down completely, but not with
503. currently this does not fail, as a connect is possible.

This doesn't look possible with 1.4 to me. The most it can do is the
ssl-hello-chk, which doesn't actually examine any HTTP response code. Can
you expose the healthcheck page via http?

 *) is this possible with haproxy 1.4?

Some things you're trying to do look possible, others less so.

 *) can haproxy check for a local file residing on the loadbalancer
itself? maybe by file:///tmp/healthcheck.txt

Not at runtime, but I /guess/ you want this so as to fail over different
services simultaneously - which is already possible with tracking servers.

 *) is there any release-schedule for the next stable version?

If you mean 1.5, then Willy's previously posted that it'll be Real Soon Now
:-) Possibly weeks; possibly days. Neither of which options will update the
version in your distro ... so I'd suggest just giving the latest 1.5 a spin
and seeing if your experiences with it can produce a better stable release
for everyone :-)


Re: HAProxy connection remains but web page stream is cut off prematurely

2014-05-19 Thread Jonathan Matthews
John, Willy already replied to your original thread. I suggest you engage
with his detailed reply, there, instead of starting a new thread.

Recommended strategy for running 1.5 in production

2014-04-14 Thread Jonathan Matthews
Hi all -

I've been running 1.4 for a number of years, but am pondering moving
some as-yet-unreleased apps to 1.5, for SSL and ACL-ish reasons.

I'd like to ask how you, 1.5 sysadmins and devs, track the development
version, and how you decide which version to run in production.

Do you just run 1.5-dev${LATEST}? The latest snapshot? Do you follow
the list here and cherry-pick important bug fixes?

I don't feel I have a firm understanding of the status of the
different, co-existing codebases that one could call 1.5 at any
given time. Nor do I have the C skills or the time to review every change.

What do /you/ do, fellow sysadmins? How do you run, upgrade and
maintain confidence in your chosen version of 1.5 in production?

All opinions and information welcome!

Re: Interaction between SSL and send-proxy

2014-03-26 Thread Jonathan Matthews
On 26 March 2014 11:01, Lukas Tribus wrote:

 Basic question on send-proxy:

 If the HAProxy server configuration has both SSL and send-proxy, should
 the proxy protocol header be sent encrypted within the SSL packet?

 Good question. In my opinion send_proxy should be cleartext, as a proxy
 may or may not terminate SSL.

+1 /AOL


Current solutions to the soft-restart-healthcheck-spread problem?

2014-03-06 Thread Jonathan Matthews
Hi all -

[ tl;dr How do you stop haproxy using failed backend servers immediately
after reload?
Haproxy devs, please consider implementing a
consider-servers-initially-DOWN option! ]

I wonder if people could outline how they're dealing with the combination
of these two haproxy behaviours:

1) On restart/reload/disabled-server-now-enabled-via-admin-interface,
haproxy considers a server to be 1 health check away from going down, but
considers it *initially* up.

2) On restart/reload, haproxy spreads out each backend's(?) initial server
health checks over the entire health check interval.

(If I'm slightly off with either of those statements, please forgive the
inaccuracy and let it slide for the purposes of this discussion; do let me
know if I'm /meaningfully/ wrong of course!)

The combination of these facts in a high traffic environment seems to imply
that an unhealthy-but-just-enabled server which is listed last in an
haproxy backend may receive requests for a longer-than-expected period of
time, resulting in a non-trivial number of requests failing.

In such an environment, where multiple load balancers are involved and can
be reloaded sequentially (such as mine!), it would be preferable to take a
pessimistic approach and /not/ expose servers to traffic until you're
positive that the backend is healthy, rather than haproxy's current
default-optimism approach.

I've been considering some methods to deal with this, but haven't got a
working config yet. It's getting somewhat convoluted and stick-table heavy,
so I thought I'd ask everyone:

Where you have decided that this is something you actually need to deal
with, *how* are you doing that? (I totally recognise that the combination
of a frequent health check interval and non-insane traffic volumes may mask
this issue, leading many -- myself included in previous jobs! -- not to
consider it a problem in the first place)

It's worth pointing out that I /believe/ this situation could be easily
solved (operationally) by a global, per-backend or per-server option which
switches on the pessimistic behaviour mentioned above. I recognise that
this may not be easy from an /implementation/ perspective, of course.
[Willy: any chance of an option to start each server as if it were down,
but being 1 check away from going up, rather than the opposite? :-)]

It's also worth pointing out that, whilst the "persist haproxy state over
soft restarts" concept that's been mentioned previously on list would solve
this for orderly restarts, it wouldn't solve it for crashes, reboots or
otherwise. I think the option I mentioned above would be one way to solve
it nicely, for multiple use cases.

[ For a *not* nice solution, I'll post a follow up when I get my
stick-table concept going. It's /nasty/. IMHO. Don't make me put it into
production! ;-) ]


Re: how to disable/enable TCP_NODELAY soket option in TCP mode?

2014-02-07 Thread Jonathan Matthews
On 7 February 2014 08:29, Татаркин Евгений wrote:
 I can`t find in haproxy documentation any information about Nagle`s
 algorithm or TCP_NODELAY option

To quote the docs, which I discovered via searching:

'the http-no-delay option [...] forces TCP_NODELAY on every outgoing segment
and prevents the system from merging them.'

Willy warns, however:

Doing so can increase load time
on high latency networks due to output window being filled earlier with
incomplete segments and due to the receiver having to ACK every segment,
which can lead to uplink saturation on asymmetric links (ADSL, HSDPA).
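For reference, the option lives in a defaults, frontend or backend section; a minimal sketch:

```
defaults
    mode http
    # Trade bandwidth efficiency for latency: send every outgoing
    # segment immediately rather than letting the kernel merge them
    option http-no-delay
```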


Re: Question about logging in HAProxy

2014-02-04 Thread Jonathan Matthews
On 4 Feb 2014 20:06, Kuldip Madnani wrote:


 I want to redirect the logs generated by HAProxy into some specific file.
I read that in the global section's log option I can put a file location
instead of an IP address.

I suspect (but can't confirm as I'm on a mobile browser that can't cope
with the docs!) that this filesystem location is solely for specifying a
socket that will accept logs - HAProxy will not manage the log file/s on
disk for you.
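A sketch of the usual arrangement (paths and the facility are illustrative, and rsyslog is assumed): haproxy emits syslog messages, and the syslog daemon is what actually writes the file.

```
global
    # A filesystem path here must be a listening syslog socket;
    # haproxy never opens or rotates a log file itself
    log /dev/log local2
```

The matching rsyslog rule would then be something like `local2.* /var/log/haproxy.log` in a file under /etc/rsyslog.d/.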


Re: How to write multiple config file in haproxy

2014-02-01 Thread Jonathan Matthews
On 1 February 2014 12:32, Sukanta Saha wrote:
 Thanks for all your help, I will try,

 I have one more question that is about the haproxy.conf file , in this file
 we have written so many backends which are getting called from the
 Is there a way that I can separate out the backends in multiple config files
 and from my main haproxy.conf file I will call those files.
 So that the main files looks clean and nice and I will have multiple config
 files for my each service or  backends . If I need to change anything for a
 service I will change the corresponding config file not the main file.

HAProxy can accept multiple "-f configfile" parameters when it's
started, but I /believe/ there are some constraints on the files'
contents, such as each section needing to be fully defined in a single file.
I forget the exact details and don't have them to hand. You'll
probably need to change your init script to support this as well.
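An invocation sketch (the file names are illustrative; order matters, since later files can override defaults set in earlier ones):

```
haproxy -f /etc/haproxy/00-global.cfg \
        -f /etc/haproxy/10-frontends.cfg \
        -f /etc/haproxy/20-backends.cfg
```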

Also, there isn't an include directive you can use inline, in the
config file. There is some talk about it on this list, but I don't
believe it's available yet - if ever.

You may also find people have written init wrappers that simulate this
or other multiple-config-file behaviours. I don't have a link to them
myself, but you may find them mentioned somewhere in the list
archives.

Re: Question concerning stats server

2014-01-31 Thread Jonathan Matthews
On 31 January 2014 14:21, Andreas Mock wrote:
 Hi all,

 I need a little help to understand how the html stats page can be accessed:
 How can I setup a stats page without having one backend? Is this possible?
 Do I have to provide a frontend too?

This works for me:

root@foo:/# tail -5 /etc/haproxy/haproxy.cfg
listen stats
  bind :80
  mode http
  maxconn 20
  stats uri /hap


Re: Haproxy as simple proxy forwarding each request

2014-01-29 Thread Jonathan Matthews
On 29 January 2014 17:59, Ricardo wrote:

 It is a bit of a messy situation, but I can't configure Haproxy as a simple proxy.

 The behaviour I'm looking for is Haproxy listening on port 80, receiving 
 each request to any URL and forwarding it to the appropriate domain through 
 its own gateway.

It sounds to me like you're looking for a /forward/ proxy, which
*really* isn't HAProxy's forte. I seem to recall it can /just/ about
be mangled into doing something like what you want, but you'll have
much more luck looking at Squid for this - that's one of its primary
use cases.

To confirm that you are actually looking for a forward proxy, answer
this: are you able to deterministically list *all* of the domains that
you wish to load-balance? Or are you looking to balance whatever a
user might type into their web browser?

Also - when you mentioned the internet gateway, do you really just
mean a router? I.e. a box which is *just* moving packets, and not
looking inside each HTTP request and then routing them based on the
Host header it finds?

Back to forward proxying: if you don't like Squid, then Nginx can,
with a bit of force, be made to do the job pretty well. Varnish may
also be able to achieve it with its more recent kinda dynamic backends
[citation required; possible rubbish being spouted].

But I wouldn't personally go through the pain of trying to make HAProxy do this.


Re: HAProxy for Solaris 10 X86

2014-01-21 Thread Jonathan Matthews
On 21 January 2014 13:17, Vinoth M wrote:

 1) I am using Solaris 10 x86. Could you please let me know if there is a pre
 compiled package available for it.
 2) Also let me know if HAproxy is supported for Solaris 10 x86.

I can't help with these 2 questions ...

 3) My requirement is to load balance FTP (not HTTP). Let me know if I can
 use HAProxy for the same.

... but I answered this exact question on the Nginx mailing list only
this morning.

Here's what I posted; I believe pretty much all the same points apply to you :-)

Wow - the dream of the 90s really *is* alive in $WHEREVER_YOU_ARE! ;-)

Seriously - it's 2014. We have better alternatives than the insecure
and awful mess that is FTP. Any company that thinks otherwise deserves
all the pain that comes with FTP ...

Anyway, Nginx doesn't talk FTP to the best of my knowledge. Whilst I'd
normally suggest a TCP load balancer for this, FTP has certain
properties which make it annoying to load balance that you have to
take into account.

This came up after a moment's googling. It might help:


Re: URL path in backend servers

2014-01-16 Thread Jonathan Matthews
Rakesh -

I replied to your identical question about this, yesterday, suggesting
what you could do to help yourself diagnose your problem. Please don't
start new threads for the same question.


Re: Forward request with the URL path

2014-01-15 Thread Jonathan Matthews
On 15 January 2014 07:36, Rakesh G K wrote:
 Is it possible to forward an incoming request to the backend by retaining
 the URL path in http mode?
 Using ACLs I was able to categorize the incoming requests to different
 rulesets, but on using a specific backend for a certain URL path, I could
 not figure out how to send the request to the underlying server with the URL

I don't quite understand. Are you finding that, without having
configured it to do so, haproxy is *changing* the URI path when it
proxies the request to your backends? Have you verified this with tcpdump?

I would expect the opposite to be the case, as *not* changing the path is
the default behaviour. I think you've probably got something else
going wrong here, that causes your backend to produce "Not Found on
Accelerator" (as per the SO question). That error isn't one that
haproxy generates.

Get tcpdump out. It'll show you where the problem is.


Re: Tuning HAProxy for Production

2014-01-02 Thread Jonathan Matthews
On 2 January 2014 20:09, Jordan Arentsen wrote:
 I'm trying to prepare HAProxy for a production, and I'm trying to figure out
 some good default configuration settings that will at least give me a good
 place to start.

 My main question revolves around the maxconn option and the various
 timeouts. I was thinking about setting the maxconn to 15k or so, is this a
 bad place to start? Any other advice on baseline performance tuning?

 Mostly this will be routing to various front-end web servers based on the 
 incoming url. There is a main PHP application running on a couple servers, a 
 Tomcat server running authentication, and a few node.js servers. Mostly the 
 PHP servers will be handling the bulk of the load for now. Is that the 
 information you were looking for, or is there something I can dig into more 

That sounds pretty vanilla, so my suggestion would be to start with
the defaults and see where that gets your specific application and traffic.

HAProxy's defaults are sane (I /think/ the default queue timeout and
queue size might need increasing, but it's been a while since I've set
up a greenfield app from scratch). Remember the sine qua nons of
performance tuning are to change one thing at a time, measure things
precisely and accurately, and make sure you're comparing apples with
apples.
You should have an idea of what you'll need maxconn to be, based on
either existing logs or your business' traffic predictions. If you
have neither of these, set it high and drop it down as you observe
you're able to over time.

Others may well have more specific recommendations, but that's where I'd start.


Re: disable backend through socket

2013-12-22 Thread Jonathan Matthews
On 22 Dec 2013 20:32, Patrick Hemmer wrote:

 That disables a server. I want to disable a backend.

No, you want to disable all the servers in a backend. I'm not sure there's
a shortcut that's better than just doing them one by one. Others may be
able to advise about alternatives, but is that an option for you?
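For completeness, a sketch of the one-by-one approach over the admin socket. This assumes a running instance with a "stats socket" configured and socat installed; the backend name and socket path are illustrative:

```
# List the servers in backend "www" from the CSV stats, then disable each
for srv in $(echo "show stat" | socat stdio /var/run/haproxy.sock |
             awk -F, '$1 == "www" && $2 != "FRONTEND" && $2 != "BACKEND" {print $2}'); do
    echo "disable server www/$srv" | socat stdio /var/run/haproxy.sock
done
```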


Replying to spam threads

2013-12-11 Thread Jonathan Matthews
My apologies to list members for replying to a spam thread and
potentially screwing up mail classification at your end. My mistake.


Re: performance problem about haproxy with libvirt20131201

2013-12-01 Thread Jonathan Matthews
On 1 December 2013 13:20, Lukas Tribus wrote:
 I'm not sure what libvirt exactly does and how/why it affects performance

I don't believe libvirt is the issue, as it's merely an orchestration
abstraction over the top of a variety of hypervisors.

I think XuXinkun merely means s/he's running under virtualisation, and
is surprised at the performance degradation observed in that setup.

XuXinkun - the extent to which your performance will degrade when
running under virtualisation will not only depend on coarse factors
such as the hypervisor used and the resource allocation you specify,
but also on a *wide* array of low-level tuning parameters at the
hypervisor, network IO, disk IO, kernel, application and HAProxy layers.

I'm not the best person to help you with tuning these - others on list
may choose to. I *strongly* (STRONGLY!) suggest you browse this list's
archives to find information that's come up in the past on this exact
topic. This list is archived in a few different places - one such
place is here:

Jonathan Matthews
Oxford, London, UK

Re: RES: RES: RES: RES: RES: RES: RES: High CPU Usage (HaProxy)

2013-11-05 Thread Jonathan Matthews
On 5 November 2013 11:16, Willy Tarreau wrote:
 It is a Xeon E5-2650 Dual (So we have 16 physical cores to use here and 32

 OK. Do you know if you have a single or multiple interrupts on your NICs,
 and if they're delivered to a single core, multiple cores, or floating
 around more or less randomly ?

 I still don't know why you have that high a context switch rate. Are you
 running with more processes than CPUs ?

Fred is running with at least 30 separate haproxy processes (as per
his top output in message-id
col129-ds31e074947100ad71da09cb0...@phx.gbl) and 16 real (32 H/T) cores.

I haven't seen a mail in this thread where Fred's shown that his
problems persist after moving to a single haproxy instance.

/wood-for-the-trees :-)


Re: HTTP and send-proxy

2013-10-29 Thread Jonathan Matthews
On 29 October 2013 08:30, Ge Jin wrote:
 Hi, Baptiste!

 Thanks for your reply, I found there is an incorrect configuration in my

... email client? ;-)

Re: Haproxy for high load/High availability Mail Server on SSL

2013-10-12 Thread Jonathan Matthews
On 12 October 2013 19:11, Abhishek Sharma wrote:

 I am evaluating HaProxy (after being recommended very highly by some of the
 tech gurus i know) for one of my requirements. I have a mail server which
 scales very well for multiple concurrent connections. The mail server uses
 encrypted channel *SMTPS/IMAPS/POPS*
 , basically ports 465/995/993 on SSL.

 My requirement is to put a filtering mechanism just before my mail server.
 What I need is to filter incoming mails for certain rules and accordingly
 either forward the mail to server or drop it.

HAProxy doesn't talk SMTP, IMAP or POP3, so the criteria you'll be
able to use to reject *connections* will pretty much be restricted to
the remote IP address and other non-protocol-specific information. You
might be able to enforce some TLS-/SSL-level restrictions, but I
suspect this isn't what you have in mind.

Note that I said "connections", above. Because HAProxy won't look
inside each opaque connection, you'll find multiple mails may be sent
by the remote server on any one connection.

 Now biggest challenge here
 being the ssl/encrypted data. So I used stunnel/Stud and was able to
 evaluate the architecture. It worked, but the trouble is I couldn't get it
 to scale to high load. I want something that could handle 3000-4000
 concurrent mail connections at any given moment.

 How can I leverage haproxy for this architecture?

I wouldn't, personally, for all sorts of reasons.

Put something that speaks SMTP/etc as your first hop in the chain or,
if you're still keen to shoehorn HAProxy in there, make sure you
really *really* understand the nature of the spam and abuse you'll
have to deal with because you opened up a SMTP port online.

Just my 2 cents. Other opinions are available ;-)
Jonathan Matthews
Oxford, London, UK

Re: Redirect help

2013-10-07 Thread Jonathan Matthews
Have you tried searching online for the answer?

Re: HaProxy service usage

2013-10-04 Thread Jonathan Matthews
On 4 October 2013 16:17, Alan Xu wrote:
 Hello HAProxy,

Hi Alan. This is the public, archived mailing list for users of the
open-source tool HAProxy. It is also inhabited by the developers of
HAProxy (of whom I am not one!)

Commercial load-balancers do exist which build on HAProxy's core, but
the product itself is free and freely available upstream, or from your
distribution's package repository. Be aware that distro repos usually
carry an older version than the current stable (i.e. recommended) release.

Please feel free to ask any questions you have on this list.

Jonathan Matthews
Oxford, London, UK

Re: Haproxy SSL certificat exception with root

2013-10-01 Thread Jonathan Matthews
On 1 October 2013 11:51, Matthieu Boret wrote:

 I've setup Haproxy 1.5 dev 19 to handle my http and https traffic.

 All works fine except when I request the root url in https:

 My certificate is a wildcard *

This happens because your wildcard does *not* match your
root/naked/apex/etc domain.

In other words, even though it looks like it might, strictly speaking
a request for the bare domain is not matched by the wildcard, so the browser
rejects the cert.

This is a problem commonly experienced when people purchase wildcard
certs from a vendor who hasn't added the root domain to the cert in
the SAN field.
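One way to see the problem concretely: generate a throwaway wildcard-only certificate and inspect its SAN list. This is a sketch, not anyone's real cert - it assumes OpenSSL 1.1.1+ for -addext, and all domains and file paths are illustrative.

```shell
# Create a cert whose SAN lists ONLY the wildcard, like a badly-sold one
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/wild.key -out /tmp/wild.pem \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com" 2>/dev/null

# Only DNS:*.example.com appears below - the bare "example.com" is
# absent, so browsers reject this cert on the apex domain
openssl x509 -in /tmp/wild.pem -noout -text | grep -A1 "Subject Alternative Name"
```

A correctly issued cert would list both `DNS:*.example.com` and `DNS:example.com` in that SAN extension.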

 What is the solution to remove this error?

The solution is to use a correctly set up cert. You need to talk to
your cert provider. They may charge you extra for this.

 An url rewrite and add www?

As David said, there is nothing that HAProxy can do to help here.

Jonathan Matthews
Oxford, London, UK

Re: HTTP Content-Check

2013-08-12 Thread Jonathan Matthews
On 12 August 2013 12:35, Wolfgang Routschka wrote:
 Hi Guys,

 on question today about option httpchk  in haproxy 1.5-dev19.

 Is it possible to check the content of URI in option httpchk?

This is available in 1.4 and 1.5. Here're the 1.4 docs for the
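A minimal sketch of such a content check (the backend name, URI and expected string are illustrative):

```
backend app
    option httpchk GET /health
    # The server is only considered up if the response body
    # contains this exact string
    http-check expect string OK
    server app1 192.0.2.20:8080 check
```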

Jonathan Matthews
Oxford, London, UK

Re: Major differences between 1.4 and 1.5

2013-07-25 Thread Jonathan Matthews
On 25 July 2013 17:13, Errol Neal wrote:
 Hi. Can anyone elaborate on this. I promise I'm not being too lazy, but I 
 can't find a side-by-side feature comparison against these two releases.

1.4 is stable.
1.5 is still in dev, but brings at least SSL and (peer-shared?) stick
tables in as features.

That stable/dev difference may decide it for you - it would for me in
most circumstances. I'm sure more knowledgable people will chime in
with more detailed comparisons.

Jonathan Matthews
Oxford, London, UK

Re: Prevent HAProxy from toggeling back from fallback to primary

2013-07-23 Thread Jonathan Matthews
On 23 July 2013 13:47, Claudio M. wrote:
 Hi, I've the following configuration

 backend web1.mi.ext
 mode http
 option httpchk
 balance roundrobin
 option httplog
 option http-server-close
 cookie CLUSTID insert
 source usesrc clientip
 server web1b.mi cookie web1b.mi check port 80 maxconn 
 server web1d.mi cookie web1d.mi check port 80 maxconn 
 server web1.bk cookie web1.bk  check port 80 maxconn 100 backup

 where web1.bk is obviously the backup server

 I need that, when both primary servers go down and web1.bk goes online, 
 haproxy does not switch back when web1b.mi and/or web1d.mi come back up

 Is this possible?

A bit-of-a-hack way to achieve this is to put a high number of
required successes for the non-backup servers' health checks. I.e.
several thousand (or whatever - do the mathS that makes sense for your
situation) in the per-server rise setting:
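Applied to a config like the one quoted above, the hack looks roughly like this (the addresses and the exact rise count are illustrative):

```
backend web1.mi.ext
    # After recovering, a primary must pass 5000 consecutive checks
    # (hours, at one check every 2s) before it takes traffic back
    server web1b.mi 192.0.2.1:80 cookie web1b.mi check inter 2s rise 5000 fall 3
    server web1d.mi 192.0.2.2:80 cookie web1d.mi check inter 2s rise 5000 fall 3
    server web1.bk  192.0.2.3:80 cookie web1.bk  check backup
```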

Jonathan Matthews
Oxford, London, UK

Re: Meaning of hrsp_2xx in show_stat

2013-06-12 Thread Jonathan Matthews
On 12 June 2013 10:20, Ashish Jaiswal wrote:
 The problem is that the rate field is showing one thing and the
 hrsp_2xx field is showing something different.
 # 33. rate: number of sessions per second over last elapsed second

This is what it says it is: sessions/sec over the last second.

 # 39. hrsp_1xx: http responses with 1xx code
 # 40. hrsp_2xx: http responses with 2xx code
 # 41. hrsp_3xx: http responses with 3xx code
 # 42. hrsp_4xx: http responses with 4xx code
 # 43. hrsp_5xx: http responses with 5xx code

These are absolute incremental counters over the lifetime of the
HAProxy process.

I don't think the two (a rate vs. counters) are directly comparable, and I'd
definitely expect them to show different values.

Jonathan Matthews // Oxford, London, UK

Re: Question about HTTP load balancing using HAProxy

2013-06-06 Thread Jonathan Matthews
On 4 June 2013 09:09, Ali Majdzadeh wrote:

 Jonathan, Lukas
 Thanks for your valuable comments. Would you please indicate some of those 
 moving parts that could fail during a single download, Jonathan?

Sorry Ali, I don't think that's appropriate to the HAProxy mailing
list. Other people may help you with this, but it's too close to my
usual job for me to spend time on, on the wrong list. Contact me
professionally off-list if you like.

 From Lukas' comments, I realized that at least some parts of the problem are 
 related to the client agent, is that right? I mean, for example, with the 
 primary server failed, if the client agent retries the download request, 
 HAProxy can proxy the new request to the other back-end server and download 
 continues from where it was interrupted, is this conclusion correct?

That probably won't happen. You'll need explicit support on both
client and server for HTTP Range requests, which I'm not sure you'll
get if you're just exposing download links and expecting a user to
re-click after a failed download. Check out the HTTP Range request spec for
some more information on this.

Jonathan Matthews // Oxford, London, UK

Re: Question about HTTP load balancing using HAProxy

2013-06-03 Thread Jonathan Matthews
On 3 June 2013 22:36, Ali Majdzadeh wrote:

 Hello All,
 I am totally new to HAProxy. What I am looking for is a solution for HTTP 
 load balancing and according to what I have read about HAProxy, I think this 
 is the right choice.

Hi Ali. I've found HAProxy to be a really good HTTP load-balancer.
Here's my unofficial take on your questions.

 Concerning HTTP, HAProxy is session-aware. Does this mean that all the 
 requests initiated from a specific client goes only to a specific back-end 

This can be configured, yes.

 What happens if suddenly the back-end server fails?

Your backend should expose a health check URI that can be requested by
HAProxy frequently (very frequently, if possible). If it stops
returning a 200, then that backend will stop receiving new requests.
The details of this are all configurable - timeouts, number of failed
health checks required to fail a backend, etc.
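A minimal sketch of such a check (the URI, server names and timings are all illustrative):

```
backend app
    # Poll each server's health URI every 2s: three consecutive failures
    # mark a server down, two consecutive successes bring it back
    option httpchk GET /healthz
    server app1 192.0.2.11:8080 check inter 2s fall 3 rise 2
    server app2 192.0.2.12:8080 check inter 2s fall 3 rise 2
```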

 For example, suppose that a user is downloading a file from a back-end web 
 server and the same file exists on another back-end web server. The load on 
 these two servers is balanced by HAProxy. Now, what happens if the first web 
 server dies whilst the file is being downloaded?

The client will receive an incomplete download. This is unavoidable -
any solution mitigating this would have to be highly coupled to the
backend service being provided, as a generic solution would almost
certainly not be HTTP RFC compliant.

 Does the download continues from the second server from where it was 

No. Another request will need to be made, through HAProxy (and not
/by/ HAProxy) to the backend.

 I am not sure whether I have got the correct understanding of 
 session-awareness feature in HAProxy. I do appreciate your comments.

I don't think that HAProxy's sessions are related to the kind of
in-request failure you seem to be concerned about. But I haven't run
the current 1.5 development version, so I may be out of touch with
what it does in this area.

Jonathan Matthews // Oxford, London, UK

Re: Question about HTTP load balancing using HAProxy

2013-06-03 Thread Jonathan Matthews
On 3 June 2013 23:19, Ali Majdzadeh wrote:

 Thomas, Jonathan
 Thanks for your responses. Well, the problem I currently face, supposing that 
 I am using the correct terminology, is how to maintain a failed HTTP request; 
 (For example, a file download request). Are you aware of any solutions 
 regarding this issue?

You'll need to write code. Either client-side or server-side (in the
middle, in front of HAProxy), you'll need to solve part of this in
code. However ...

 What is your suggested plan in order to achieve such a solution?

... I suggest you don't.

Sincerely. Don't bother solving this problem, would be my strong
professional suggestion, until a cost/benefit analysis from The
Business demonstrates that you have to.

And if there /isn't/ such an analysis -- if you're just up against
morons who spout "failure is not an option; 100% uptime is essential"
without showing /why/ it's essential -- then your problem isn't a
technical problem; it's a person problem :-)

Here's a very simplified rationale for this: as you write down the
list of moving parts that /could/ fail during a single download, you
will eventually come to the conclusion that solving the problem for
each of those parts *when*they*are*in*an*exceptional*situation* would
take hugely more engineering effort than lost sales will cost you.

Well, probably. I don't know what you're actually serving over HTTP,
of course :-)

 Does HAProxy help in this context?

It can do, due to its extremely configurable health checks. If you've
committed to solving this problem entirely, however, it will need more
than just HAProxy. That's the sort of situation into which I normally
step wearing my consultant hat, however, and charge you money ;-)

Jonathan Matthews // Oxford, London, UK

Re: Meaning of hrsp_2xx in show stat

2013-05-30 Thread Jonathan Matthews
IIRC, the meanings are:

 # 33. rate: number of sessions per second over last elapsed second

== Number of sessions initiated at the TCP level over the last second,
irrespective of the HTTP response.

 # 39. hrsp_1xx: http responses with 1xx code
 # 40. hrsp_2xx: http responses with 2xx code
 # 41. hrsp_3xx: http responses with 3xx code
 # 42. hrsp_4xx: http responses with 4xx code
 # 43. hrsp_5xx: http responses with 5xx code

== Continually incrementing count of [12345]xx response codes (i.e.
not a per-period rate).

Does this match what you're seeing? Remember that #33 is useful if
you're looking at it at a single point in time, but if you're trying
to graph it, you might find it more useful to collect stot directly
and calculate rates from that instead.
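To illustrate the stot approach: a minimal sketch of deriving a
per-second rate from two successive `show stat` CSV snapshots. It
assumes a trimmed-down CSV with just the columns used here; the real
output carries many more fields, but DictReader ignores the extras.

```python
import csv, io

def stot_rate(prev_csv, curr_csv, interval_s):
    """Compute per-second session rates from two successive
    'show stat' CSV snapshots, using the cumulative stot counter.
    Returns {(pxname, svname): rate}."""
    def totals(text):
        # The header line starts with '# ', which csv can't parse as-is
        rows = csv.DictReader(io.StringIO(text.lstrip("# ")))
        return {(r["pxname"], r["svname"]): int(r["stot"]) for r in rows}
    prev, curr = totals(prev_csv), totals(curr_csv)
    # Only report rates for entries present in both snapshots
    return {k: (curr[k] - prev[k]) / interval_s
            for k in curr if k in prev}
```

Sampling stot every N seconds and diffing like this also survives
haproxy restarts more gracefully than graphing #33 directly, since you
can detect and discard the counter reset.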

Jonathan Matthews // Oxford, London, UK

Re: smtpchk when using proxy protocol

2013-05-27 Thread Jonathan Matthews
On 27 May 2013 08:40, Vit Dua wrote:

> The log also said:
>
> May 27 14:39:11 localhost haproxy[1278]: Proxy ft_postfix started.
> May 27 14:39:11 localhost haproxy[1278]: Server ft_postfix/postfix01
> is DOWN, reason: Layer7 invalid response, info: ESMTP Postfix
> (Ubuntu), check duration: 1ms. 0 active and 0 backup servers left.
> 0 sessions active, 0 requeued, 0 remaining in queue.
> May 27 14:39:11 localhost haproxy[1278]: proxy ft_postfix has no server

This is just a guess (it's been a while since I've run SMTP in anger!) but:

You see the layer 7 response is 220-...? Well, that hyphen in the
4th character usually means that this is a response that's going to
spill over to the next line. Check out the example in
- see the difference between the 220 and the 250 responses? Only the
/last/ 250 response is without a hyphen.

I wonder if using the PROXY protocol is making the server respond with
more than one line, which is making the smtpchk fail because the first
reply it sees doesn't match [0-9][0-9][0-9]space... any more.
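To make the 220-/220 distinction concrete, here's a small sketch (the
helper name and sample banners are mine, not from the thread) of how
an SMTP client collects a reply: a hyphen after the three-digit code
means another line follows, while a space marks the final line, which
is the only shape smtpchk's pattern accepts.

```python
import re

# What smtpchk effectively expects: a 3-digit code followed by a space
SMTP_FINAL = re.compile(r"^[0-9]{3} ")

def read_smtp_reply(lines):
    """Collect one (possibly multi-line) SMTP reply from a line
    iterator: '220-' continues the reply, '220 ' terminates it."""
    reply = []
    for line in lines:
        reply.append(line)
        if SMTP_FINAL.match(line):
            break
    return reply
```

So if the banner's /first/ line is "220-something", a checker that only
reads one line and applies that pattern will see a mismatch even though
the overall reply is perfectly valid SMTP.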

Is the server definitely set up to accept the PROXY protocol? Remember
that it's generally an SMTP protocol violation for the client to talk
before receiving an SMTP banner - which is what (I /believe/) the
PROXY protocol does. Which would suggest the server has to explicitly
support the PROXY protocol.

Either way - whether it's just something being tickled in the postfix
code which replies with a multi-line response, or whether it doesn't
understand PROXY messages at all - I'm afraid I don't have any
suggestions for fixing it. You might need to dig further and let us
know what you find.
Jonathan Matthews // Oxford, London, UK

Re: Redirect 1 time per day

2013-05-24 Thread Jonathan Matthews
On 24 May 2013 09:14, swapan wrote:

> I want to set the redirection for blackberry mobiles along with other
> mobile phones. What exact acl should I put in the haproxy config file?

Your question is overly broad and can't be answered without more
detail, but I would suggest you start by looking at for a generic solution to this
problem. I'm not sure if anyone has it integrated with HAProxy, though.

Jonathan Matthews // Oxford, London, UK

Re: HAProxy and MySQL failover

2013-05-16 Thread Jonathan Matthews
In the past I've used a ludicrously high setting on the primary for
rise, the number of health checks it has to pass before it's
considered to be up again.

It's definitely a hack, though that's not to say I haven't used it in
production ;-)
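A sketch of that hack, with placeholder addresses and a placeholder
check user; the oversized rise means that once the primary fails its
checks, it effectively never comes back up on its own, so traffic
stays on the backup until an operator intervenes:

```haproxy
backend mysql
    option mysql-check user haproxy_check
    # primary must pass 100000 consecutive checks before being
    # considered up again, i.e. effectively never automatically
    server primary 10.0.0.1:3306 check rise 100000 fall 3
    server replica 10.0.0.2:3306 check backup
```

The point of the high rise is to avoid flap-back: a briefly-recovered
primary rejoining the pool mid-failover is usually worse for MySQL
consistency than staying on the replica.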


Jonathan Matthews
Oxford, London, UK

Re: Websockets and RTMP

2013-05-12 Thread Jonathan Matthews
On 12 May 2013 10:03, pablo platt wrote:
> Can you please explain how to use ssl_fc?
> I couldn't find it in the configuration docs.
>
> Please see below the global and defaults sections which I get when
> installing the haproxy-1.4.18 deb package on ubuntu 12.04

ssl_fc is only in HAProxy 1.5.
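For reference, a hedged sketch of typical 1.5-era usage (the bind
lines and certificate path are placeholders, not from this thread):

```haproxy
frontend www
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    # ssl_fc is true when the frontend connection arrived over SSL/TLS,
    # so this redirects any plain-HTTP request to HTTPS
    redirect scheme https if !{ ssl_fc }
```

On 1.4 there is no built-in SSL at all, so the usual workaround was to
terminate TLS in front of HAProxy (e.g. stunnel) and key decisions off
a header the terminator adds.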

Jonathan Matthews // Oxford, London, UK

Balancing SIP

2013-04-12 Thread Jonathan Matthews
Does anyone have anything they could share about using HAProxy for
load-balancing SIP? Positive /or/ negative, of course! :-)

