Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2018-12-23 Thread PiBa-NL

Changed the subject of the patch to the required 'REGTEST' prefix.

On 23-12-2018 at 21:17, PiBa-NL wrote:

Hi List,

Attached is a new test to verify that the 'mailers' section is working 
properly.

Currently, with 1.9, the mailers section sends thousands of mails for my setup...

As the test is rather slow, I have marked it with a starting letter 's'.

Note that the test also fails on 1.6/1.7/1.8, but can be 'fixed' there 
by adding a 'timeout mail 200ms' (except on 1.6, which doesn't have 
that setting).


I don't think that should be needed, though, if everything was working 
properly?
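
For reference, the workaround is just the 'timeout mail' setting in a 
mailers section; a minimal sketch (names and addresses are made up, and 
the 'timeout mail' line must be dropped on 1.6):

    mailers mymailers
        mailer smtp1 127.0.0.1:1025
        timeout mail 200ms

    backend be1
        email-alert mailers mymailers
        email-alert from from@example.com
        email-alert to to@example.com
        email-alert level notice
        server srv1 127.0.0.1:8080 check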


If the test could be committed, and the issues it exposes fixed, that 
would be neat ;)


Thanks in advance,

PiBa-NL (Pieter)



From 8d63f5a39a9b4b326b636e42ccafcf0c2173d752 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:06:31 +0100
Subject: [PATCH] REGTEST: mailers: add new test for 'mailers' section

This test verifies that the mailers section works properly by checking that it
sends the proper amount of mails when health checks are changing and/or marking
a server up/down.

The test currently fails on all versions of haproxy I tried, with varying
results:
1.9.0 produces thousands of mails.
1.8.14 only sends 1 mail; needs a 200ms 'timeout mail' to succeed.
1.7.11 only sends 1 mail; needs a 200ms 'timeout mail' to succeed.
1.6 only sends 1 mail (does not have the 'timeout mail' setting implemented).
---
 reg-tests/mailers/shealthcheckmail.lua | 105 +
 reg-tests/mailers/shealthcheckmail.vtc |  75 ++
 2 files changed, 180 insertions(+)
 create mode 100644 reg-tests/mailers/shealthcheckmail.lua
 create mode 100644 reg-tests/mailers/shealthcheckmail.vtc

diff --git a/reg-tests/mailers/shealthcheckmail.lua b/reg-tests/mailers/shealthcheckmail.lua
new file mode 100644
index 00000000..9c75877b
--- /dev/null
+++ b/reg-tests/mailers/shealthcheckmail.lua
@@ -0,0 +1,105 @@
+
+local vtc_port1 = 0
+local mailsreceived = 0
+local mailconnectionsmade = 0
+local healthcheckcounter = 0
+
+core.register_action("bug", { "http-res" }, function(txn)
+   data = txn:get_priv()
+   if not data then
+   data = 0
+   end
+   data = data + 1
+   print(string.format("set to %d", data))
+   txn.http:res_set_status(200 + data)
+   txn:set_priv(data)
+end)
+
+core.register_service("luahttpservice", "http", function(applet)
+   local response = "?"
+   local responsestatus = 200
+   if applet.path == "/setport" then
+   vtc_port1 = applet.headers["vtcport1"][0]
+   response = "OK"
+   end
+   if applet.path == "/svr_healthcheck" then
+   healthcheckcounter = healthcheckcounter + 1
+   if healthcheckcounter < 2 or healthcheckcounter > 6 then
+   responsestatus = 403
+   end
+   end
+
+   applet:set_status(responsestatus)
+   if applet.path == "/checkMailCounters" then
+   response = "MailCounters"
+   applet:add_header("mailsreceived", mailsreceived)
+   applet:add_header("mailconnectionsmade", mailconnectionsmade)
+   end
+   applet:start_response()
+   applet:send(response)
+end)
+
+core.register_service("fakeserv", "http", function(applet)
+   applet:set_status(200)
+   applet:start_response()
+end)
+
+function RecieveAndCheck(applet, expect)
+   data = applet:getline()
+   if data:sub(1,expect:len()) ~= expect then
+   core.Info("Expected: "..expect.." but 
got:"..data:sub(1,expect:len()))
+   applet:send("Expected: "..expect.." but got:"..data.."\r\n")
+   return false
+   end
+   return true
+end
+
+core.register_service("mailservice", "tcp", function(applet)
+   core.Info("# Mailservice Called #")
+   mailconnectionsmade = mailconnectionsmade + 1
+   applet:send("220 Welcome\r\n")
+   local data
+
+   if RecieveAndCheck(applet, "EHLO") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "MAIL FROM:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "RCPT TO:") == false then
+   return
+   end
+   applet:send("250 OK\r\n")
+   if RecieveAndCheck(applet, "DATA") == false then
+   return
+   end
+   applet:send("354 OK\r\n")
+   core.Info(" Send your mailbody")
+   local endofmail = false
+   local subject = ""
+   while endofmail ~= true do
+   data = applet:getline() -- BODY CONTENT
+   --core.Info(data)
+   if data:sub(1, 9) == "Subject: " then
+   subject = data
+   end
+   if (data == "\r\n") then
+   data = applet:getline() -- BODY CONTENT
+   core.Info(data)
+   if (data == 

[PATCH] REGTEST: filters: add compression test

2018-12-23 Thread PiBa-NL

Hi Frederic,

As requested, hereby the regtest sent for inclusion into the git 
repository, without randomization and with your .diff applied. It also 
outputs the expected and actual checksums if the test fails, so it's 
clear that that is the issue detected.


Is it okay like this? Should the blob be bigger, as you mentioned 
needing a 10MB output to reproduce the original issue on your machine?
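
For scale: assuming the generator loop in the patch below runs 10000 
iterations, each response chunk works out to roughly 185 KB, and since 
it is sent ten times the whole download is about 1.9 MB. A standalone 
Lua sketch of that estimate:

    -- rough size estimate for the blob built in b5.lua
    -- (assumes the loop count in the patch is 10000)
    local size = 0
    for i = 1, 10000 do
      -- each iteration appends "\r\n" .. i .. data:sub(1, i % 27)
      size = size + 2 + #tostring(i) + (i % 27)
    end
    print(size, size * 10) -- chunk bytes, total bytes over 10 sends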


Regards,

PiBa-NL (Pieter)

From 64460dfeacef3d04af4243396007a606c2e5dbf7 Mon Sep 17 00:00:00 2001
From: PiBa-NL 
Date: Sun, 23 Dec 2018 21:21:51 +0100
Subject: [PATCH] REGTEST: filters: add compression test

This test checks that data transferred with compression is correctly received
at different download speeds.
---
 reg-tests/filters/b5.lua | 19 
 reg-tests/filters/b5.vtc | 58 
 2 files changed, 77 insertions(+)
 create mode 100644 reg-tests/filters/b5.lua
 create mode 100644 reg-tests/filters/b5.vtc

diff --git a/reg-tests/filters/b5.lua b/reg-tests/filters/b5.lua
new file mode 100644
index 00000000..6dbe1d33
--- /dev/null
+++ b/reg-tests/filters/b5.lua
@@ -0,0 +1,19 @@
+
+local data = "abcdefghijklmnopqrstuvwxyz"
+local responseblob = ""
+for i = 1,10000 do
+  responseblob = responseblob .. "\r\n" .. i .. data:sub(1, math.floor(i % 27))
+end
+
+http01applet = function(applet) 
+  local response = responseblob
+  applet:set_status(200) 
+  applet:add_header("Content-Type", "application/javascript") 
+  applet:add_header("Content-Length", string.len(response)*10) 
+  applet:start_response() 
+  for i = 1,10 do
+applet:send(response) 
+  end
+end
+
+core.register_service("fileloader-http01", "http", http01applet)
diff --git a/reg-tests/filters/b5.vtc b/reg-tests/filters/b5.vtc
new file mode 100644
index 00000000..5216cdaf
--- /dev/null
+++ b/reg-tests/filters/b5.vtc
@@ -0,0 +1,58 @@
+# Checks that compression doesn't cause corruption.
+
+varnishtest "Compression validation"
+#REQUIRE_VERSION=1.6
+
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+global
+#  log stdout format short daemon 
+   lua-load ${testdir}/b5.lua
+
+defaults
+   mode http
+   log global
+   option  httplog
+
+frontend main-https
+   bind"fd@${fe1}" ssl crt ${testdir}/common.pem
+   compression algo gzip
+   compression type text/html text/plain application/json application/javascript
+   compression offload
+   use_backend TestBack  if  TRUE
+
+backend TestBack
+   server  LocalSrv ${h1_fe2_addr}:${h1_fe2_port}
+
+listen fileloader
+   mode http
+   bind "fd@${fe2}"
+   http-request use-service lua.fileloader-http01
+} -start
+
+shell {
+HOST=${h1_fe1_addr}
+if [ "${h1_fe1_addr}" = "::1" ] ; then
+HOST="\[::1\]"
+fi
+
+md5=$(which md5 || which md5sum)
+
+if [ -z "$md5" ] ; then
+echo "MD5 checksum utility not found"
+exit 1
+fi
+
+expectchecksum="4d9c62aa5370b8d5f84f17ec2e78f483"
+
+for opt in "" "--limit-rate 300K" "--limit-rate 500K" ; do
+checksum=$(curl --compressed -k "https://$HOST:${h1_fe1_port}" $opt | $md5 | cut -d ' ' -f1)
+if [ "$checksum" != "$expectchecksum" ] ; then 
+  echo "Expecting checksum $expectchecksum"
+  echo "Received checksum: $checksum"
+  exit 1; 
+fi
+done
+
+} -run
-- 
2.18.0.windows.1



[PATCH] MINOR: sample: add ssl_sni_check converter

2018-12-23 Thread Moemen MHEDHBI
Hi,

The attached patch adds the ssl_sni_check converter, which returns true
if the sample input string matches a loaded certificate's CN/SAN.

This can be useful, for example, to check whether a Host header matches a
loaded certificate's CN/SAN before doing a redirect:

frontend fe_main
  bind 127.0.0.1:80
  bind 127.0.0.1:443 ssl crt /etc/haproxy/ssl/
  http-request redirect scheme https if !{ ssl_fc } { hdr(host),ssl_sni_check() }
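
With the optional argument, the lookup can be restricted to the
certificates of a single named bind line; a variant of the above (the
bind name 'mybind' is made up):

frontend fe_main
  bind 127.0.0.1:80
  bind 127.0.0.1:443 ssl crt /etc/haproxy/ssl/ name mybind
  http-request redirect scheme https if !{ ssl_fc } { hdr(host),ssl_sni_check(mybind) }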


This converter may become even more useful once certificates can be
added/removed at runtime.

++

-- 
Moemen MHEDHBI
>From 14ed628ab9badbb06c45bab324eb00f998de49af Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Sun, 23 Dec 2018 20:50:04 +0100
Subject: [PATCH] MINOR: sample: add ssl_sni_check converter

This adds the ssl_sni_check converter. The converter returns
true if the sample input string matches a loaded certificate's CN/SAN.
Lookup can be done through the certificates of a specified bind line (by
<name>); otherwise the search will include all bind lines of the current
proxy.
---
 doc/configuration.txt |  6 ++
 src/ssl_sock.c| 43 +++
 2 files changed, 49 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6ca63d64a..0be043e73 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13651,6 +13651,12 @@ sha1
   Converts a binary input sample to a SHA1 digest. The result is a binary
   sample with length of 20 bytes.
 
+ssl_sni_check(<name>)
+  Returns true if the sample input string matches a loaded certificate's CN/SAN.
+  Otherwise false is returned. When <name> is provided, the lookup is done only
+  through the certificates of the bind line named <name>; if not, all bind
+  lines of the current frontend will be searched.
+
 strcmp()
   Compares the contents of  with the input value of type string. Returns
   the result as a signed integer compatible with strcmp(3): 0 if both strings
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 282b85ddd..b24d78978 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -7276,6 +7276,41 @@ smp_fetch_ssl_c_verify(const struct arg *args, struct sample *smp, const char *k
 	return 1;
 }
 
+/* boolean, returns true if input string matches a loaded certificate's CN/SAN. */
+/* The lookup is done only for the bind named <name> if the param is provided. */
+static int smp_conv_ssl_sni_check(const struct arg *args, struct sample *smp, void *private)
+{
+	struct proxy *px = smp->px;
+	struct listener *l;
+	struct ebmb_node *node = NULL;
+	char *wildp = NULL;
+	int i;
+
+	for (i = 0; i < trash.size && i < smp->data.u.str.data; i++) {
+		trash.area[i] = tolower(smp->data.u.str.area[i]);
+		if (!wildp && (trash.area[i] == '.'))
+			wildp = &trash.area[i];
+	}
+	trash.area[i] = 0;
+
+	list_for_each_entry(l, &px->conf.listeners, by_fe) {
+		if ( args->type == ARGT_STR && l->name && (strcmp(args->data.str.area, l->name) != 0))
+			continue;
+		/* lookup in full qualified names */
+		node = ebst_lookup(&l->bind_conf->sni_ctx, trash.area);
+		/* lookup in wildcards names */
+		if (!node && wildp)
+			node = ebst_lookup(&l->bind_conf->sni_w_ctx, wildp);
+		if (node != NULL)
+			break;
+	}
+
+	smp->data.type = SMP_T_BOOL;
+	smp->data.u.sint = !!node;
+	smp->flags = SMP_F_VOL_TEST;
+	return 1;
+}
+
 /* parse the "ca-file" bind keyword */
 static int ssl_bind_parse_ca_file(char **args, int cur_arg, struct proxy *px, struct ssl_bind_conf *conf, char **err)
 {
@@ -9047,6 +9082,14 @@ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 
 INITCALL1(STG_REGISTER, sample_register_fetches, &sample_fetch_keywords);
 
+/* Note: must not be declared <const> as its list will be overwritten */
+static struct sample_conv_kw_list sample_conv_kws = {ILH, {
+	{ "ssl_sni_check", smp_conv_ssl_sni_check, ARG1(0,STR), NULL, SMP_T_STR, SMP_T_BOOL },
+	{ /* END */ },
+}};
+
+INITCALL1(STG_REGISTER, sample_register_convs, &sample_conv_kws);
+
 /* Note: must not be declared <const> as its list will be overwritten.
  * Please take care of keeping this list alphabetically sorted.
  */
-- 
2.19.2



Re: DNS resolution problem since 1.8.14

2018-12-23 Thread Jonathan Matthews
Hey Patrick,

Have you looked at the fixes in 1.8.16? They sound kinda-sorta related to
your problem ...

J

On Sun, 23 Dec 2018 at 16:17, Patrick Valsecchi  wrote:

> I did a tcpdump. My config is modified to point to a local container (www)
> in a docker compose (I'm trying to simplify my setup). You can see the DNS
> answers correctly:
>
> 16:06:00.181533 IP (tos 0x0, ttl 64, id 63816, offset 0, flags [DF], proto
> UDP (17), length 68)
> 127.0.0.11.53 > localhost.40994: 63037 1/0/0 www. A 172.20.0.17 (40)
>
> Could it be related to that?
> https://github.com/haproxy/haproxy/commit/8d4e7dc880d2094658fead50dedd9c22c95c556a
> On 23.12.18 13:59, Patrick Valsecchi wrote:
>
> Hi,
>
> Since haproxy version 1.8.14 and including the last 1.9 release, haproxy
> puts all my backends in MAINT after around 31s. They first work fine, but
> then they are put in MAINT.
>
> The logs look like that:
>
> <149>Dec 23 12:45:11 haproxy[1]: Proxy www started.
> <149>Dec 23 12:45:11 haproxy[1]: Proxy plain started.
> [NOTICE] 356/124511 (1) : New worker #1 (8) forked
> <150>Dec 23 12:45:13 haproxy[8]: 89.217.194.174:49752
> [23/Dec/2018:12:45:13.098] plain www/linked 0/0/16/21/37 200 4197 - - 
> 1/1/0/0/0 0/0 "GET / HTTP/1.1"
> [WARNING] 356/124542 (8) : Server www/linked is going DOWN for maintenance
> (DNS timeout status). 0 active and 0 backup servers left. 0 sessions
> active, 0 requeued, 0 remaining in queue.
> <145>Dec 23 12:45:42 haproxy[8]: Server www/linked is going DOWN for
> maintenance (DNS timeout status). 0 active and 0 backup servers left. 0
> sessions active, 0 requeued, 0 remaining in queue.
> [ALERT] 356/124542 (8) : backend 'www' has no server available!
> <144>Dec 23 12:45:42 haproxy[8]: backend www has no server available!
>
> I run haproxy using docker:
>
> docker run --name toto -ti --rm -v
> /home/docker-compositions/web/proxy/conf.test:/etc/haproxy/:ro -p 8080:80
> haproxy:1.9 haproxy -f /etc/haproxy/
>
> And my config is that:
>
> global
> log stderr local2
> chroot  /tmp
> pidfile /run/haproxy.pid
> maxconn 4000
> max-spread-checks 500
>
> master-worker
>
> usernobody
> group   nogroup
>
> resolvers dns
>   nameserver docker 127.0.0.11:53
>   hold valid 1s
>
> defaults
> modehttp
> log global
> option  httplog
> option  dontlognull
> option http-server-close
> option forwardfor   except 127.0.0.0/8
> option  redispatch
> retries 3
> timeout http-request10s
> timeout queue   1m
> timeout connect 10s
> timeout client  10m
> timeout server  10m
> timeout http-keep-alive 10s
> timeout check   10s
> maxconn 3000
> default-server init-addr last,libc,none
>
> errorfile 400 /usr/local/etc/haproxy/errors/400.http
> errorfile 403 /usr/local/etc/haproxy/errors/403.http
> errorfile 408 /usr/local/etc/haproxy/errors/408.http
> errorfile 500 /usr/local/etc/haproxy/errors/500.http
> errorfile 502 /usr/local/etc/haproxy/errors/502.http
> errorfile 503 /usr/local/etc/haproxy/errors/503.http
> errorfile 504 /usr/local/etc/haproxy/errors/504.http
>
> backend www
> option httpchk GET / HTTP/1.0\r\nUser-Agent:\ healthcheck
> http-check expect status 200
> default-server inter 60s fall 3 rise 1
> server linked www.topin.travel:80 check resolvers dns
>
> frontend plain
> bind :80
>
> http-request set-header X-Forwarded-Proto   http
> http-request set-header X-Forwarded-Host%[req.hdr(host)]
> http-request set-header X-Forwarded-Port%[dst_port]
> http-request set-header X-Forwarded-For %[src]
> http-request set-header X-Real-IP   %[src]
>
> compression algo gzip
> compression type text/css text/html text/javascript
> application/javascript text/plain text/xml application/json
>
> # Forward to the main linked container by default
> default_backend www
>
>
> Any idea what is happening? I've tried to increase the DNS resolve timeout
> to 5s and it didn't help. My feeling is that the newer versions of haproxy
> cannot talk with the DNS provided by docker.
>
> Thanks
>
> --
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html


Re: DNS resolution problem since 1.8.14

2018-12-23 Thread Patrick Valsecchi
I did a tcpdump. My config is modified to point to a local container 
(www) in a docker compose (I'm trying to simplify my setup). You can see 
that the DNS answers correctly:


   16:06:00.181533 IP (tos 0x0, ttl 64, id 63816, offset 0, flags [DF],
   proto UDP (17), length 68)
    127.0.0.11.53 > localhost.40994: 63037 1/0/0 www. A 172.20.0.17
   (40)

Could it be related to that? 
https://github.com/haproxy/haproxy/commit/8d4e7dc880d2094658fead50dedd9c22c95c556a


On 23.12.18 13:59, Patrick Valsecchi wrote:


Hi,

Since haproxy version 1.8.14, and including the latest 1.9 release, 
haproxy puts all my backends in MAINT after around 31s. They first 
work fine, but then they are put in MAINT.


The logs look like this:

<149>Dec 23 12:45:11 haproxy[1]: Proxy www started.
<149>Dec 23 12:45:11 haproxy[1]: Proxy plain started.
[NOTICE] 356/124511 (1) : New worker #1 (8) forked
<150>Dec 23 12:45:13 haproxy[8]: 89.217.194.174:49752
[23/Dec/2018:12:45:13.098] plain www/linked 0/0/16/21/37 200 4197
- - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
[WARNING] 356/124542 (8) : Server www/linked is going DOWN for
maintenance (DNS timeout status). 0 active and 0 backup servers
left. 0 sessions active, 0 requeued, 0 remaining in queue.
<145>Dec 23 12:45:42 haproxy[8]: Server www/linked is going DOWN
for maintenance (DNS timeout status). 0 active and 0 backup
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 356/124542 (8) : backend 'www' has no server available!
<144>Dec 23 12:45:42 haproxy[8]: backend www has no server available!

I run haproxy using docker:

docker run --name toto -ti --rm -v
/home/docker-compositions/web/proxy/conf.test:/etc/haproxy/:ro -p
8080:80 haproxy:1.9 haproxy -f /etc/haproxy/

And my config is this:

global
    log stderr local2
    chroot  /tmp
    pidfile /run/haproxy.pid
    maxconn 4000
    max-spread-checks 500

    master-worker

    user    nobody
    group   nogroup

resolvers dns
  nameserver docker 127.0.0.11:53
  hold valid 1s

defaults
    mode    http
    log global
    option  httplog
    option  dontlognull
    option http-server-close
    option forwardfor   except 127.0.0.0/8
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue   1m
    timeout connect 10s
    timeout client  10m
    timeout server  10m
    timeout http-keep-alive 10s
    timeout check   10s
    maxconn 3000
    default-server init-addr last,libc,none

    errorfile 400 /usr/local/etc/haproxy/errors/400.http
    errorfile 403 /usr/local/etc/haproxy/errors/403.http
    errorfile 408 /usr/local/etc/haproxy/errors/408.http
    errorfile 500 /usr/local/etc/haproxy/errors/500.http
    errorfile 502 /usr/local/etc/haproxy/errors/502.http
    errorfile 503 /usr/local/etc/haproxy/errors/503.http
    errorfile 504 /usr/local/etc/haproxy/errors/504.http

backend www
    option httpchk GET / HTTP/1.0\r\nUser-Agent:\ healthcheck
    http-check expect status 200
    default-server inter 60s fall 3 rise 1
    server linked www.topin.travel:80 check resolvers dns

frontend plain
    bind :80

    http-request set-header X-Forwarded-Proto http
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Real-IP %[src]

    compression algo gzip
    compression type text/css text/html text/javascript
application/javascript text/plain text/xml application/json

    # Forward to the main linked container by default
    default_backend www


Any idea what is happening? I've tried to increase the DNS resolve 
timeout to 5s and it didn't help. My feeling is that the newer 
versions of haproxy cannot talk with the DNS provided by docker.


Thanks



DNS resolution problem since 1.8.14

2018-12-23 Thread Patrick Valsecchi

Hi,

Since haproxy version 1.8.14, and including the latest 1.9 release, haproxy 
puts all my backends in MAINT after around 31s. They first work fine, 
but then they are put in MAINT.


The logs look like this:

   <149>Dec 23 12:45:11 haproxy[1]: Proxy www started.
   <149>Dec 23 12:45:11 haproxy[1]: Proxy plain started.
   [NOTICE] 356/124511 (1) : New worker #1 (8) forked
   <150>Dec 23 12:45:13 haproxy[8]: 89.217.194.174:49752
   [23/Dec/2018:12:45:13.098] plain www/linked 0/0/16/21/37 200 4197 -
   - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
   [WARNING] 356/124542 (8) : Server www/linked is going DOWN for
   maintenance (DNS timeout status). 0 active and 0 backup servers
   left. 0 sessions active, 0 requeued, 0 remaining in queue.
   <145>Dec 23 12:45:42 haproxy[8]: Server www/linked is going DOWN for
   maintenance (DNS timeout status). 0 active and 0 backup servers
   left. 0 sessions active, 0 requeued, 0 remaining in queue.
   [ALERT] 356/124542 (8) : backend 'www' has no server available!
   <144>Dec 23 12:45:42 haproxy[8]: backend www has no server available!

I run haproxy using docker:

   docker run --name toto -ti --rm -v
   /home/docker-compositions/web/proxy/conf.test:/etc/haproxy/:ro -p
   8080:80 haproxy:1.9 haproxy -f /etc/haproxy/

   And my config is this:

   global
    log stderr local2
    chroot  /tmp
    pidfile /run/haproxy.pid
    maxconn 4000
    max-spread-checks 500

    master-worker

    user    nobody
    group   nogroup

   resolvers dns
  nameserver docker 127.0.0.11:53
  hold valid 1s

   defaults
    mode    http
    log global
    option  httplog
    option  dontlognull
    option http-server-close
    option forwardfor   except 127.0.0.0/8
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue   1m
    timeout connect 10s
    timeout client  10m
    timeout server  10m
    timeout http-keep-alive 10s
    timeout check   10s
    maxconn 3000
    default-server init-addr last,libc,none

    errorfile 400 /usr/local/etc/haproxy/errors/400.http
    errorfile 403 /usr/local/etc/haproxy/errors/403.http
    errorfile 408 /usr/local/etc/haproxy/errors/408.http
    errorfile 500 /usr/local/etc/haproxy/errors/500.http
    errorfile 502 /usr/local/etc/haproxy/errors/502.http
    errorfile 503 /usr/local/etc/haproxy/errors/503.http
    errorfile 504 /usr/local/etc/haproxy/errors/504.http

   backend www
    option httpchk GET / HTTP/1.0\r\nUser-Agent:\ healthcheck
    http-check expect status 200
    default-server inter 60s fall 3 rise 1
    server linked www.topin.travel:80 check resolvers dns

   frontend plain
    bind :80

    http-request set-header X-Forwarded-Proto http
    http-request set-header X-Forwarded-Host %[req.hdr(host)]
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Real-IP %[src]

    compression algo gzip
    compression type text/css text/html text/javascript
   application/javascript text/plain text/xml application/json

    # Forward to the main linked container by default
    default_backend www


Any idea what is happening? I've tried to increase the DNS resolve 
timeout to 5s and it didn't help. My feeling is that the newer versions 
of haproxy cannot talk with the DNS provided by docker.
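
For reference, increasing the resolve timeout amounts to a resolvers 
section along these lines (a sketch; the hold/timeout values are 
illustrative):

   resolvers dns
     nameserver docker 127.0.0.11:53
     resolve_retries 3
     timeout resolve 5s
     timeout retry   1s
     hold valid 10s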


Thanks