Re: TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-12 Thread Igor Cicimov
Hi Martin,

On Thu, Jul 12, 2018 at 6:55 PM, Martin RADEL <
martin.ra...@rbinternational.com> wrote:

> Hi all,
>
>
>
> we have a strange situation with our HAProxy, running on Version 1.8.8
> with OpenSSL.
>
> (See the details in the setup listed below - some lines are intentionally
> omitted. It's a config snippet with just the interesting parts.)
>
>
>
> Initial situation:
>
> We run a HAProxy instance which enforces mutual TLS on the frontend,
> allowing clients to connect only when they present a specific
> certificate.
>
> The HAProxy also does mutual TLS to the backend, presenting its frontend
> server certificate to the backend as a client certificate.
>
> The backend only allows connections when the HAProxy’s certificate is
> presented to it.
>
> To have a proper TLS handshake to the backend, and to be able to identify
> a man-in-the-middle scenario, we use the “verify required” directive
> together with the “verifyhost” directive.
>
>
>
> The HAProxy is not able to resolve the backend’s real DNS-hostname, so
> it’s using the IP of the server instead (10.1.1.1)
>
> The backend is presenting a wildcard server certificate with a
> DNS-hostname looking like “*.foo.bar”
>
>
>
>
>
> In this configuration, one could assume that there is always a certificate
> name mismatch with the TLS handshake:
>
> Backend server will present its server certificate with a proper DNS
> hostname in it, and the HAProxy will find out that it doesn’t match the
> initially used connection name “10.1.1.1”.
>
>
>

Just checking: has the IP by any chance been included in the certificate's
subject alternative names (SAN)?

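For reference, the SAN entries the backend actually presents can be dumped with
openssl from the proxy host (a sketch; it assumes the proxy host can reach
10.1.1.1 directly and reuses the client certificate path from the config below):

  # print the Subject Alternative Name extension of the certificate served on 10.1.1.1:443
  echo | openssl s_client -connect 10.1.1.1:443 \
      -cert /etc/haproxy/certs/frontend-server-certificate.pem 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If an "IP Address:10.1.1.1" entry shows up there, the name check would
legitimately succeed.
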
>
>
> Issue:
>
> In fact the connection to the backend works all the time, even when there
> is a name mismatch and even if we use the “verify required” option together
> with “verifyhost”.
>
> It seems as if HAProxy completely ignores the mismatch, as if we were using
> the option "verify none".
>
>
>
>
>
> According to HAProxy documentation, this is clearly not the expected
> behavior:
>
> http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-verify
>
>
>
>
>
> Can somebody please share some knowledge why this is working, or can
> confirm that this is a bug?
>
>
>
>
>
> #-
>
> # Global settings
>
> #-
>
> global
>
> log /dev/log local2
>
> pidfile /run/haproxy/haproxy.pid
>
> maxconn 2
>
> ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!RC4
>
> ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
>
> stats socket /var/lib/haproxy/stats
>
>
>
> #-
>
> # common defaults that all the 'listen' and 'backend' sections will
>
> # use if not designated in their block
>
> #-
>
> defaults
>
> mode http
>
> log global
>
> option  http-server-close
>
> option  redispatch
>
> retries 3
>
> maxconn 2
>
> errorfile 503   /etc/haproxy/errorpage.html
>
> default-server  init-addr last,libc,none
>
>
>
> # 
>
> #  HAPROXY CONFIG WITH WILDCARD CERTIFICATE ON BACKEND
>
> # 
>
> # --- FRONTEND1 (TLS with mutual authentication) ---
>
> frontend FRONTEND1
>
> option  forwardfor except 127.0.0.0/8
>
> acl authorizedClient ssl_c_s_dn(cn) -m str -f /etc/haproxy/authorized_clients.cfg
>
> bind *:443 ssl crt /etc/haproxy/certs/frontend-server-certificate.pem
> ca-file /etc/haproxy/certs/frontend-ca-certificates.crt verify required
>
> use_backend BACKEND1 if authorizedClient frontend
>
>
>
> # --- BACKEND1
>
> backend BACKEND1
>
> option  forwardfor except 127.0.0.0/8
>
> server BACKEND1-server 10.1.1.1:443 check inter 30s  verify required
> ssl verifyhost *.foo.bar  ca-file
> /etc/haproxy/certs/backend-ca-certificates.crt crt
> /etc/haproxy/certs/frontend-server-certificate.pem

Re: active-active haproxy behind Azure Load Balancer

2018-07-12 Thread Christopher Cox
I don't speak "Azure", but if they have something that claims to be a 
load balancer, then "sure", just have to deal with stickiness issues and 
of course the fact that you're load balancing load balancers.


(you likely need Application Gateway)

On 07/12/2018 05:50 PM, musafir wrote:
Hey Folks, is it possible to set up a 2-node active-active HAProxy cluster
behind Azure Load Balancer, i.e. (Azure Load Balancer -> 2x HAProxy (active-active)
-> web servers)? Any suggestions?




active-active haproxy behind Azure Load Balancer

2018-07-12 Thread musafir
Hey Folks, is it possible to set up a 2-node active-active HAProxy cluster
behind Azure Load Balancer, i.e. (Azure Load Balancer -> 2x HAProxy (active-active)
-> web servers)? Any suggestions?


Re: [PATCH][MINOR] Implement resolve-opts with 2 new options

2018-07-12 Thread Willy Tarreau
On Thu, Jul 12, 2018 at 05:10:49PM +0200, Baptiste wrote:
> Hi all,
> 
> This patch adds a new keyword "resolve-opts" which can take a list of comma
> separated options.
(...)

applied, thank you Baptiste.

Willy



Re: [PATCH] REGTEST/MINOR: Wrong URI syntax.

2018-07-12 Thread Willy Tarreau
On Thu, Jul 12, 2018 at 11:05:30AM +0200, Frederic Lecaille wrote:
> This is a patch to fix the issue reported by Ilya Shipitsin in this thread.

Applied, thank you Fred.

Willy



Re: [PATCH] MINOR: mworker: exit with 0 on successful exit

2018-07-12 Thread Willy Tarreau
On Thu, Jul 12, 2018 at 05:38:34PM +0200, William Lallemand wrote:
> On Thu, Jul 12, 2018 at 04:42:01PM +0200, Vincent Bernat wrote:
> > >  ❦ 12 July 2018 16:25 +0200, William Lallemand  :
> > 
> > > Maybe we could take your first patch for the unit file and backport it in 
> > > 1.8,
> > > and then make the appropriate changes for 1.9 once the master was
> > > redesigned.
> > 
> > Yes, no problem. The first patch should apply without any change on 1.8.
> > I am using it in Debian packages and so far, nobody complained.
> 
> Okay, thanks!
> 
> @Willy, could you apply the first patch "[PATCH] MINOR: systemd: consider exit
> status 143 as successful"?

Sure, now done, thank you guys!

Willy



Re: [PATCH] MINOR: mworker: exit with 0 on successful exit

2018-07-12 Thread William Lallemand
On Thu, Jul 12, 2018 at 04:42:01PM +0200, Vincent Bernat wrote:
> >  ❦ 12 July 2018 16:25 +0200, William Lallemand  :
> 
> > Maybe we could take your first patch for the unit file and backport it in 
> > 1.8,
> > and then make the appropriate changes for 1.9 once the master was
> > redesigned.
> 
> Yes, no problem. The first patch should apply without any change on 1.8.
> I am using it in Debian packages and so far, nobody complained.

Okay, thanks!

@Willy, could you apply the first patch "[PATCH] MINOR: systemd: consider exit
status 143 as successful"?

Thanks.

-- 
William Lallemand
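
For reference, the effect of that first patch can be approximated locally with a
systemd drop-in (a sketch only, under the assumption that the patch relies on
SuccessExitStatus; the unit file actually shipped may differ):

  # treat exit status 143 (128 + SIGTERM) of the master as a clean stop
  mkdir -p /etc/systemd/system/haproxy.service.d
  printf '[Service]\nSuccessExitStatus=143\n' > /etc/systemd/system/haproxy.service.d/exit-status.conf
  systemctl daemon-reload

With that in place, stopping the master no longer leaves the unit marked as
failed when it exits with status 143.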



[PATCH][MINOR] Implement resolve-opts with 2 new options

2018-07-12 Thread Baptiste
Hi all,

This patch adds a new keyword "resolve-opts" which can take a list of comma
separated options.
2 options have been implemented for now:
* prevent-dup-ip: (default and historical way of working for HAProxy)
ensure this server will be the only one configured with a given IP address,
when sharing the same FQDN as other servers in the same backend
* allow-dup-ip: allow multiple servers (they must all have this option
enabled) sharing the same FQDN to get an IP address which is already used by
another server

The resolve-opts keyword is compatible with server, default-server and
server-template. The last configured value wins.
E.g.:
backend foobar
 default-server resolve-opts allow-dup-ip
 server s1 www.domain.tld
 server s2 www.domain.tld
 server s3 www.domain.tld resolve-opts prevent-dup-ip

==> only s1 and s2 may end up sharing the same IP.
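
While testing, the address each server actually picked up can be inspected on
the stats socket (a sketch; it assumes an admin-level stats socket at
/var/run/haproxy.sock and the backend name used in the example above):

  # dump the runtime state of the servers in backend "foobar",
  # including their currently resolved addresses
  echo "show servers state foobar" | socat stdio /var/run/haproxy.sock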

Note that if the DNS server returns 2 records, there is no guarantee that
IP A will be assigned to s1 and s2 and IP B to s3.
That's because, for now, the resolution is "atomic" and linked to the
server itself, and because the algorithm still searches for a different IP
before allowing a failover to an already used one (when allowed to).

The first 3 patches are cleanups and the code is in the 4th one.

Note that I may move the other resolve-* keywords into the resolve-opts
(older keywords will still be valid for backward compatibility).

Baptiste
From 348effd9e5182687a51b52312ac054286599af07 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Fri, 22 Jun 2018 15:04:43 +0200
Subject: [PATCH 4/4] MINOR: dns: new DNS options to allow/prevent IP address
 duplication

By default, HAProxy's DNS resolution at runtime ensure that there is no
IP address duplication in a backend (for servers being resolved by the
same hostname).
There are a few cases where people want, on purpose, to disable this
feature.

This patch introduces a couple of new server side options for this purpose:
"resolve-opts allow-dup-ip" or "resolve-opts prevent-dup-ip".
---
 doc/configuration.txt | 34 ++
 include/types/dns.h   |  2 ++
 src/dns.c |  6 +-
 src/server.c  | 27 +++
 4 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index e901d7e..b443de6 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -11682,6 +11682,40 @@ rise 
   after  consecutive successful health checks. This value defaults to 2
   if unspecified. See also the "check", "inter" and "fall" parameters.
 
+resolve-opts ,,...
+  Comma separated list of options to apply to DNS resolution linked to this
+  server.
+
+  Available options:
+
+  * allow-dup-ip
+By default, HAProxy prevents IP address duplication in a backend when DNS
+resolution at runtime is in operation.
+That said, for some cases, it makes sense that two servers (in the same
+backend, being resolved by the same FQDN) have the same IP address.
+For such case, simply enable this option.
+This is the opposite of prevent-dup-ip.
+
+  * prevent-dup-ip
+Ensure HAProxy's default behavior is enforced on a server: prevent re-using
+an IP address already set to a server in the same backend and sharing the
+same fqdn.
+This is the opposite of allow-dup-ip.
+
+  Example:
+backend b_myapp
+  default-server init-addr none resolvers dns
+  server s1 myapp.example.com:80 check resolve-opts allow-dup-ip
+  server s2 myapp.example.com:81 check resolve-opts allow-dup-ip
+
+  With the option allow-dup-ip set:
+  * if the nameserver returns a single IP address, then both servers will use
+it
+  * If the nameserver returns 2 IP addresses, then each server will pick up a
+different address
+
+  Default value: not set
+
 resolve-prefer 
   When DNS resolution is enabled for a server and multiple IP addresses from
   different families are returned, HAProxy will prefer using an IP address
diff --git a/include/types/dns.h b/include/types/dns.h
index 9b1d08d..488d399 100644
--- a/include/types/dns.h
+++ b/include/types/dns.h
@@ -245,6 +245,8 @@ struct dns_options {
 		} mask;
 	} pref_net[SRV_MAX_PREF_NET];
 	int pref_net_nb; /* The number of registered prefered networks. */
+	int accept_duplicate_ip; /* flag to indicate whether the associated object can use an IP address
+already set to an other object of the same group */
 };
 
 /* Resolution structure associated to single server and used to manage name
diff --git a/src/dns.c b/src/dns.c
index 018c86a..77bf5c0 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -962,8 +962,10 @@ int dns_get_ip_from_response(struct dns_response_packet *dns_p,
 	int currentip_sel;
 	int j;
 	int score, max_score;
+	int allowed_duplicated_ip;
 
 	family_priority   = dns_opts->family_prio;
+	allowed_duplicated_ip = dns_opts->accept_duplicate_ip;
 	*newip = newip4   = newip6 = NULL;
 	currentip_found   = 0;
 	*newip_sin_family = AF_UNSPEC;
@@ -1027,7 +1029,9 @@ int 

Re: [PATCH] MINOR: mworker: exit with 0 on successful exit

2018-07-12 Thread Vincent Bernat
 ❦ 12 July 2018 16:25 +0200, William Lallemand  :

> Maybe we could take your first patch for the unit file and backport it in 1.8,
> and then make the appropriate changes for 1.9 once the master was
> redesigned.

Yes, no problem. The first patch should apply without any change on 1.8.
I am using it in Debian packages and so far, nobody complained.
-- 
Hell is empty and all the devils are here.
-- Wm. Shakespeare, "The Tempest"



Re: Issue with parsing DNS from AWS

2018-07-12 Thread Jim Deville
Thanks for the update. We will see what we can do, and I appreciate your help!


Jim



Re: [PATCH] MINOR: mworker: exit with 0 on successful exit

2018-07-12 Thread William Lallemand
On Thu, Jul 12, 2018 at 04:14:34PM +0200, Vincent Bernat wrote:
>  ❦ 22 June 2018 22:03 +0200, Vincent Bernat  :
> 
> > Without this patch, when killing the master process, the SIGTERM
> > signal is forwarded to all children. The last children will likely exit
> > with a "killed by signal SIGTERM" status, which is then converted into an
> > exit status of 143 for the master process.
> >
> > With this patch, the master process takes note it is requesting its
> > children to stop and will convert "killed by signal SIGTERM" to an
> > exit status of 0. Therefore, the master process will exit with status
> > 0 if everything happens as expected.
> 
> I think this patch may have slipped through the cracks!
> -- 


Hi Vincent,

Sorry I forgot to reply to this mail. I'm currently reworking the code of the
master so I don't want to rebase everything on top of your patch :-)

Maybe we could take your first patch for the unit file and backport it in 1.8,
and then make the appropriate changes for 1.9 once the master was redesigned.

What do you think?


-- 
William Lallemand



Re: [PATCH] MINOR: mworker: exit with 0 on successful exit

2018-07-12 Thread Vincent Bernat
 ❦ 22 June 2018 22:03 +0200, Vincent Bernat  :

> Without this patch, when killing the master process, the SIGTERM
> signal is forwarded to all children. The last children will likely exit
> with a "killed by signal SIGTERM" status, which is then converted into an
> exit status of 143 for the master process.
>
> With this patch, the master process takes note it is requesting its
> children to stop and will convert "killed by signal SIGTERM" to an
> exit status of 0. Therefore, the master process will exit with status
> 0 if everything happens as expected.

I think this patch may have slipped through the cracks!
-- 
Be careful of reading health books, you might die of a misprint.
-- Mark Twain



Re: Issue with parsing DNS from AWS

2018-07-12 Thread Baptiste
Hi Jim,

"hold obsolete" defaults to 0, so basically, HAProxy may evince servers
from your backend quite frequently (the bigger the farm, the more chance it
happens).
Furthermore, most of those changes are "false positive" (since the server
may still be healthy).

DNS over TCP won't help.
As I stated in my previous mail, AWS DNS servers only return 8 records per
response (they are round-robined), even over TCP (I did try with the "drill"
DNS client).
So, your only way to go is to use the "hold obsolete" timer.
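
If you want to see that behaviour outside HAProxy, the same queries can be
replayed with dig (a sketch; the SRV name and nameserver address are
placeholders for your own setup):

  # count the SRV records returned over UDP with a large EDNS0 buffer...
  dig +bufsize=8192 +short SRV _myservice._tcp.example.internal @10.0.0.2 | wc -l
  # ...and over TCP, for comparison
  dig +tcp +short SRV _myservice._tcp.example.internal @10.0.0.2 | wc -l

Running either a few times in a row also shows the round-robin rotation of the
records mentioned above.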


On Thu, Jul 5, 2018 at 3:49 PM, Jim Deville 
wrote:

> Hi Baptiste,
>
>
> I appreciate you taking time for this, we had tried increasing the
> response size, but I believe we left hold obsolete at defaults and that
> probably led to flapping. How often does HAProxy re-poll DNS for this? I'm
> curious what limits this really sets for how many servers we can scale to
> with this. Also, will DNS over TCP help any? Seems like it still needs
> roughly the same settings given the round-robin responses.
>
>
> In the meantime, we will look into these settings to see if we can make
> them work as well.
>
>
> Jim
> --
> *From:* Baptiste 
> *Sent:* Tuesday, July 3, 2018 9:20:53 AM
>
> *To:* Jim Deville
> *Cc:* haproxy@formilux.org; Jonathan Works
> *Subject:* Re: Issue with parsing DNS from AWS
>
> Ah yes, I also added the following "init-addr none" statement on the
> server-template line.
> This prevents HAProxy from using libc resolvers, which might end up in
> unpredictable behavior in that environment
>
> Baptiste
>
> On Tue, Jul 3, 2018 at 3:18 PM, Baptiste  wrote:
>
> Well, I can partially reproduce the issue you're facing and I can see some
> weird behavior of AWS's DNS servers.
>
> First, by default, HAProxy only supports DNS over UDP and can accept up to
> 512 bytes of payload in the DNS response.
> DNS over TCP is not yet available and the accepted payload size can be
> increased using the EDNS0 extension.
>
> There is a "magic" number of SRV records with AWS and default HAProxy
> accepted payload size, at around 4 SRV records, the response payload may be
> bigger than 512 bytes.
> And so, AWS DNS server does not return any data, simply returns an empty
> response, with the TRUNCATED flag.
> In such case, a client is supposed to replay the request over TCP...
>
> Another magic value with AWS DNS servers is that they won't return more
> than 8 SRV records, even if you have 10 servers in your service (even over
> TCP).
> AWS DNS servers will simply return a round-robin list of the records; some
> will disappear, some will reappear at some point in time.
>
>
> Conclusion, to make HAProxy work in such an environment, you want to
> configure it that way:
> resolvers awsdns
>   nameserver dns0 NAMESERVER:53 # <=== please remove the double quotes
>   accepted_payload_size 8192    # <=== workaround for too short accepted payload
>   hold obsolete 30s             # <=== workaround for limited number of records returned by AWS
>
> You may want to read the documentation of HAProxy's resolver. There are a
> few other timeout / hold periods you could tune.
>
> With the configuration above, I could easily scale from 2 to 10, back to
> 2, passing through 4, 8, etc... successfully and without any server
> flapping.
> I did not try to go higher than 10. Bear in mind the "hold obsolete"
> period is the period during which HAProxy considers a server as available
> even if the DNS server did not return it in the SRV record list.
>
> Baptiste
>
>
>
>
>
>
>
> On Tue, Jul 3, 2018 at 1:26 PM, Baptiste  wrote:
>
> Answering myself... I found my way in the menu to be able to allow port
> 9000 to read the stats page and to find the public IP associated to my
> "app".
> That said, I still can't get a shell on the running container, but I think
> I found an AWS documentation page for this purpose.
>
> I keep you updated.
>
> On Tue, Jul 3, 2018 at 1:06 PM, Baptiste  wrote:
>
> Hi Jim,
>
> I think I have something running...
> At least, terraform did not complain and I can see "stuff" in my AWS
> dashboard.
> Now, I have no idea how I can get connected to my running HAProxy
> container, nor how I can troubleshoot what's happening :)
>
> Any help would be (again) appreciated.
>
> Baptiste
>
>
>
> On Tue, Jul 3, 2018 at 11:39 AM, Baptiste  wrote:
>
> Hi Jim,
>
> Sorry for the long pause :)
> I was dealing with some travel, conferences and catching up on my backlog.
> So, the good news, is that this issue is now my priority :)
>
> I'll try to first reproduce it and come back to you if I have any issue
> during that step.
> (by the way, thanks for the github repo to help me speed up in that step).
>
> Baptiste
>
>
>
>
> On Mon, Jun 25, 2018 at 10:54 PM, Jim Deville 
> wrote:
>
> Hi Baptiste,
>
>
> I just wanted to follow up to see if you were able to repro and perhaps
> had a patch we could try?
>
>
> Jim
> --
> *From:* Jim Deville
> *Sent:* 

Re: haproxy bug: healthcheck not passing after port change when statefile is enabled

2018-07-12 Thread Baptiste
Hi Sven,

Thanks for the clarification.
It's a bit more complicated than it is supposed to be.
I think we may want to apply the port only if it has been changed at
runtime (changed by DNS SRV records).

The status is the following: I have a pending patch which brings SRV record
information into the state file. (WIP, but last mile)
Once it has been merged, we'll be able to fix this issue (by applying the
port only when the server is being managed by an SRV record).
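
In the meantime, the port a reload will actually re-apply can be read back from
the saved state (a sketch; the socket path, state file and names come from the
config quoted below):

  # what the running process currently holds for that listen/backend section
  echo "show servers state banaan-443-ipv4" | socat stdio /var/run/haproxy.sock
  # what was pinned into the state file that the next reload will load
  grep banaan-vps /var/run/haproxy/state/test

The srv_port field in both outputs is the value that ends up winning over the
configured port when load-server-state-from-file is enabled.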

Baptiste


On Tue, Jul 3, 2018 at 3:41 PM, Sven Wiltink  wrote:

> Hey Baptiste,
>
>
> Thank you for looking into it.
>
>
> The bug is triggered by running haproxy with the following config:
>
>
> global
> maxconn 32000
> tune.maxrewrite 2048
> user haproxy
> group haproxy
> daemon
> chroot /var/lib/haproxy
> nbproc 1
> maxcompcpuusage 85
> spread-checks 0
> stats socket /var/run/haproxy.sock mode 600 level admin process 1 user
> haproxy group haproxy
> server-state-file test
> server-state-base /var/run/haproxy/state
> master-worker no-exit-on-failure
>
> defaults
> load-server-state-from-file global
> log global
> timeout http-request 5s
> timeout connect  2s
> timeout client   300s
> timeout server   300s
> mode http
> option dontlog-normal
> option http-server-close
> option redispatch
> option log-health-checks
>
> listen stats
> bind :1936
> bind-process 1
> mode http
> stats enable
> stats uri /
> stats admin if TRUE
>
> listen banaan-443-ipv4
> bind :443
> mode tcp
> server banaan-vps 127.0.0.1:443 check inter 2000
>
>
> - Then start haproxy (it will do healthchecks to port 443)
> - change server banaan-vps 127.0.0.1:443 check inter 2000 to server
> banaan-vps 127.0.0.1:80 check inter 2000
> - save the state using /bin/sh -c "echo show servers state |
> /usr/bin/socat /var/run/haproxy.sock - > /var/run/haproxy/state/test"
> (this is normally done using the systemd file on reload, see initial mail)
> - reload haproxy (it still does healthchecks to port 443 while port 80 was
> expected)
>
> if you delete the statefile and reload haproxy it will start healthchecks
> for port 80 as expected
>
> -Sven
>
>
>
>
>
>
> --
> *Van:* Baptiste 
> *Verzonden:* dinsdag 3 juli 2018 11:38:14
> *Aan:* Sven Wiltink
> *CC:* haproxy@formilux.org
> *Onderwerp:* Re: haproxy bug: healthcheck not passing after port change
> when statefile is enabled
>
> Hi Sven,
>
> Thanks a lot for your feedback!
> I'll check how we could handle this use case with the state file.
>
> Just to ensure I'm going to troubleshoot the right issue, could you please
> summarize how you trigger this issue in a few simple steps?
> IE:
> - conf v1, server port is X
> - generate server state (where port is X)
> - update conf to v2, where port is Y
> reload HAProxy => X is applied, while you expect to get Y instead
>
> Baptiste
>
>
>
> On Mon, Jun 25, 2018 at 12:55 PM, Sven Wiltink 
> wrote:
>
> Hello,
>
>
> So we've dug a little deeper and the issue seems to be caused by the port
> value in the statefile. When the target port of a server has changed
> between reloads the port specified in the state file is leading. When
> running tcpdump you can see the healthchecks are being performed for the
> old port. After stopping haproxy and removing the statefile the healthcheck
> is performed for the right port. When manually editing the statefile to a
> random port the healthchecks will be performed for that port instead of the
> one specified by the config.
>
>
> The code responsible for this is line http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/server.c;h=523289e3bda7ca6aa15575f1928f5298760cf582;hb=HEAD#l2931
>
> from commit http://git.haproxy.org/?p=haproxy-1.8.git;a=commitdiff;h=3169471964fdc49963e63f68c1fd88686821a0c4.
>
>
> A solution would be invalidating the state when the ports don't match.
>
>
> -Sven
>
>
>
> --
> *Van:* Sven Wiltink
> *Verzonden:* dinsdag 12 juni 2018 17:01:18
> *Aan:* haproxy@formilux.org
> *Onderwerp:* haproxy bug: healthcheck not passing after port change when
> statefile is enabled
>
> Hello,
>
> There seems to be a bug in the loading of state files after a
> configuration change. When changing the destination port of a server the
> healthchecks never start passing if the state before the reload was down.
> This bug has been introduced after 1.7.9 as we cannot reproduce it on
> machines running that version of haproxy. You can use the following steps
> to reproduce the issue:
>
> Start with a fresh debian 9 install
> install socat
> install haproxy 1.8.9 from backports
>
> create a systemd file /etc/systemd/system/haproxy.service.d/60-haproxy-server_state.conf with the following contents:
> [Service]
> ExecStartPre=/bin/mkdir -p /var/run/haproxy/state
> ExecReload=
> ExecReload=/usr/sbin/haproxy -f ${CONFIG} -c -q $EXTRAOPTS
> 

Re: haproxy ci (again), gitlab.com ?

2018-07-12 Thread William Lallemand
On Thu, Jul 12, 2018 at 02:54:43PM +0500, Илья Шипицин wrote:
> hello,
> 

Hello,

> I have the following suggestion
> 
> 1) I will add .gitlab-ci.yml to the haproxy repo (it will include "centos
> 7" and "fedora 28" builds, just to cover openssl-1.0.2 and openssl-1.1.0)
> 

It could be a better idea to provide a patch for the official repository in 
fact.

> 2) that .gitlab-ci.yml will run reg tests

Good, I worked on a CI file a long time ago which was only doing builds, to test
whether the build still works.

> 
> 3) anyone can follow to https://gitlab.com --> new --> CI for external repo
> --> mirror

> and (without any change) commits to haproxy repo will be tested.
> 
> what do you think ?
>

That's a good idea :-)
 
> 
> second question. I tried to register https://gitlab.com/haproxy/ - it is
> already owned by someone (and it is private, no access), what's that ?
> 

In fact I already registered https://gitlab.com/haproxy.org a long time ago. I
don't know who registered the /haproxy/ repository; we might want to contact
gitlab to know what's going on with it.

We could share the same gitlab.

> cheers,
> Ilya Shipitsin


Cheers,

-- 
William Lallemand



haproxy ci (again), gitlab.com ?

2018-07-12 Thread Илья Шипицин
hello,

I have the following suggestion

1) I will add .gitlab-ci.yml to the haproxy repo (it will include "centos
7" and "fedora 28" builds, just to cover openssl-1.0.2 and openssl-1.1.0)

2) that .gitlab-ci.yml will run reg tests

3) anyone can follow to https://gitlab.com --> new --> CI for external repo
--> mirror

and (without any change) commits to haproxy repo will be tested.

what do you think ?
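
To give an idea of the scope, such a job would essentially boil down to a build
plus the reg tests (a rough sketch only; package names, build targets and the way
the .vtc files are driven are assumptions, not the final .gitlab-ci.yml):

  # inside a centos:7 or fedora:28 build image (yum works on both)
  yum -y install gcc make openssl-devel pcre-devel
  make -j"$(nproc)" TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1
  ./haproxy -vv
  # the reg tests are the varnishtest scripts under reg-tests/ discussed
  # elsewhere in this digest; they need a varnishtest binary that supports
  # the "haproxy" command in .vtc files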


second question. I tried to register https://gitlab.com/haproxy/ - it is
already owned by someone (and it is private, no access), what's that ?

cheers,
Ilya Shipitsin


[PATCH] REGTEST/MINOR: Wrong URI syntax.

2018-07-12 Thread Frederic Lecaille

This is a patch to fix the issue reported by Ilya Shipitsin in this thread.

Fred.
From 47ca7696d0ccca5989929940db323e9e9255ae4a Mon Sep 17 00:00:00 2001
From: Frédéric Lécaille 
Date: Thu, 12 Jul 2018 10:48:06 +0200
Subject: [PATCH] REGTEST/MINOR: Wrong URI syntax.

Ilya Shipitsin reported that with some curl versions this reg test
may fail due to a wrong URI syntax with ::1 ipv6 local address in
this varnishtest script. This patch fixes this syntax issue and
replaces the iteration of "procees" commands by a "shell" command
to start curl processes (must be faster).

Thanks to Ilya Shipitsin for having reported this VTC file bug.
---
 reg-tests/ssl/h0.vtc | 21 ++---
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/reg-tests/ssl/h0.vtc b/reg-tests/ssl/h0.vtc
index 0765cb4..819f385 100644
--- a/reg-tests/ssl/h0.vtc
+++ b/reg-tests/ssl/h0.vtc
@@ -31,14 +31,13 @@ haproxy h1 -conf {
 http-request redirect location /
 } -start
 
-process p1 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p2 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p3 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p4 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p5 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-
-process p1 -wait
-process p2 -wait
-process p3 -wait
-process p4 -wait
-process p5 -wait
+shell {
+HOST=${h1_frt_addr}
+if [ "${h1_frt_addr}" = "::1" ] ; then
+HOST="[::1]"
+fi
+for i in 1 2 3 4 5; do
+curl -i -k https://$HOST:${h1_frt_port} & pids="$pids $!"
+done
+wait $pids
+}
-- 
2.1.4



TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-12 Thread Martin RADEL
Hi all,

we have a strange situation with our HAProxy, running on Version 1.8.8 with 
OpenSSL.
(See the details in the setup listed below - some lines are intentionally omitted.
It's a config snippet with just the interesting parts.)

Initial situation:
We run a HAProxy instance which enforces mutual TLS on the frontend, allowing
clients to connect only when they present a specific certificate.
The HAProxy also does mutual TLS to the backend, presenting its frontend server 
certificate to the backend as a client certificate.
The backend only allows connections when the HAProxy's certificate is presented 
to it.
To have a proper TLS handshake to the backend, and to be able to identify a 
man-in-the-middle scenario, we use the "verify required" directive together 
with the "verifyhost" directive.

The HAProxy is not able to resolve the backend's real DNS-hostname, so it's 
using the IP of the server instead (10.1.1.1)
The backend is presenting a wildcard server certificate with a DNS-hostname 
looking like "*.foo.bar"


In this configuration, one could assume that there is always a certificate name 
mismatch with the TLS handshake:
Backend server will present its server certificate with a proper DNS hostname 
in it, and the HAProxy will find out that it doesn't match the initially used 
connection name "10.1.1.1".


Issue:
In fact the connection to the backend works all the time, even when there is a 
name mismatch and even if we use the "verify required" option together with 
"verifyhost".
It seems as if HAProxy completely ignores the mismatch, as if we were using the
option "verify none".


According to HAProxy documentation, this is clearly not the expected behavior:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-verify


Can somebody please share some knowledge why this is working, or can confirm 
that this is a bug?


#-
# Global settings
#-
global
log /dev/log local2
pidfile /run/haproxy/haproxy.pid
maxconn 2
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!RC4
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
stats socket /var/lib/haproxy/stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode http
log global
option  http-server-close
option  redispatch
retries 3
maxconn 2
errorfile 503   /etc/haproxy/errorpage.html
default-server  init-addr last,libc,none

# 
#  HAPROXY CONFIG WITH WILDCARD CERTIFICATE ON BACKEND
# 
# --- FRONTEND1 (TLS with mutual authentication) ---
frontend FRONTEND1
option  forwardfor except 127.0.0.0/8
acl authorizedClient ssl_c_s_dn(cn) -m str -f 
/etc/haproxy/authorized_clients.cfg
bind *:443 ssl crt /etc/haproxy/certs/frontend-server-certificate.pem 
ca-file /etc/haproxy/certs/frontend-ca-certificates.crt verify required
use_backend BACKEND1 if authorizedClient frontend

# --- BACKEND1
backend BACKEND1
option  forwardfor except 127.0.0.0/8
server BACKEND1-server 10.1.1.1:443 check inter 30s  verify required ssl 
verifyhost *.foo.bar ca-file 
/etc/haproxy/certs/backend-ca-certificates.crt crt 
/etc/haproxy/certs/frontend-server-certificate.pem
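
For what it's worth, the same check can be reproduced outside HAProxy with
openssl s_client (a sketch; it assumes OpenSSL 1.1.0+ for -verify_hostname, and
"backend.foo.bar" stands in for whatever concrete name the wildcard is expected
to match):

  # connect the way HAProxy does (by IP), present the client certificate, and
  # ask OpenSSL to verify the peer name; the "Verification"/"Verify return code"
  # line shows what a strict client should conclude
  echo | openssl s_client -connect 10.1.1.1:443 \
      -CAfile /etc/haproxy/certs/backend-ca-certificates.crt \
      -cert /etc/haproxy/certs/frontend-server-certificate.pem \
      -verify_hostname backend.foo.bar 2>/dev/null | grep -i 'verif'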







This message and any attachment ("the Message") are confidential. If you have 
received the Message in error, please notify the sender immediately and delete 
the Message from your system, any use of the Message is forbidden. 
Correspondence via e-mail is primarily for information purposes. RBI neither 
makes nor accepts legally binding statements via e-mail unless explicitly 
agreed otherwise. Information pursuant to ? 14 Austrian Companies Code: 
Raiffeisen Bank International AG; Registered Office: Am Stadtpark 9, 1030 
Vienna,Austria; Company Register Number: FN 122119m at the Commercial Court of 
Vienna (Handelsgericht Wien).


Re: how h1_frt_addr is defined during reg tests?

2018-07-12 Thread Илья Шипицин
yes, it fixed the build:

https://gitlab.com/chipitsine/haproxy/-/jobs/81225803

Thu, 12 Jul 2018 at 13:28, Frederic Lecaille :

> On 07/11/2018 09:12 PM, Илья Шипицин wrote:
> > Hello,
> >
> > I'm playing with reg tests. Sometimes they fail for weird reasons.
> > (for example, fedora 28 on gitlab ci)
> >
> > https://gitlab.com/chipitsine/haproxy/-/jobs/81106855
> >
> >
> > curl -i -k https://${h1_frt_addr}:${h1_frt_port}
> >
> > became
> >
> > curl -i -k https://::1:38627
> >
> > which is not correct.
> > but I could not find any definition of h1_frt_addr, how is it defined?
>
> Well, it is defined thanks to varnish lib APIs. The correct syntax
> should be curl -i -k https://[::1]:38627 in this case
> (https://www.ietf.org/rfc/rfc2732.txt).
>
> But h1_frt_addr could also be defined as 127.0.0.1 I guess on hosts
> where ipv6 is not enabled.
>
> So could you try this patch attached to this mail? Note that with this
> patch the test is a bit faster.
>
> Fred.
>
>


Re: how h1_frt_addr is defined during reg tests?

2018-07-12 Thread Frederic Lecaille

On 07/11/2018 09:12 PM, Илья Шипицин wrote:

Hello,

I'm playing with reg tests. Sometimes they fail for weird reasons.
(for example, fedora 28 on gitlab ci)

https://gitlab.com/chipitsine/haproxy/-/jobs/81106855


curl -i -k https://${h1_frt_addr}:${h1_frt_port}

became

curl -i -k https://::1:38627

which is not correct.
but I could not find any definition of h1_frt_addr, how is it defined?


Well, it is defined thanks to varnish lib APIs. The correct syntax 
should be curl -i -k https://[::1]:38627 in this case 
(https://www.ietf.org/rfc/rfc2732.txt).


But h1_frt_addr could also be defined as 127.0.0.1 I guess on hosts 
where ipv6 is not enabled.


So could you try this patch attached to this mail? Note that with this 
patch the test is a bit faster.


Fred.

diff --git a/reg-tests/ssl/h0.vtc b/reg-tests/ssl/h0.vtc
index 0765cb4..819f385 100644
--- a/reg-tests/ssl/h0.vtc
+++ b/reg-tests/ssl/h0.vtc
@@ -31,14 +31,13 @@ haproxy h1 -conf {
 http-request redirect location /
 } -start
 
-process p1 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p2 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p3 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p4 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-process p5 "curl -i -k https://${h1_frt_addr}:${h1_frt_port}; -start
-
-process p1 -wait
-process p2 -wait
-process p3 -wait
-process p4 -wait
-process p5 -wait
+shell {
+HOST=${h1_frt_addr}
+if [ "${h1_frt_addr}" = "::1" ] ; then
+HOST="[::1]"
+fi
+for i in 1 2 3 4 5; do
+curl -i -k https://$HOST:${h1_frt_port} & pids="$pids $!"
+done
+wait $pids
+}