Re: Debian upgrade to haproxy 1.7.5: tcp-check fails with Socket error, info: "No port available for the TCP connection"

2017-07-13 Thread Lukas Tribus
Hello!


Am 29.06.2017 um 16:14 schrieb Philipp Kolmann:
> Hi Lukas,
>
> On 06/19/17 21:23, Lukas Tribus wrote:
>> Am 19.06.2017 um 11:27 schrieb Philipp Kolmann:
>>> This config works in 1.5.8 but fails to tcp-check in 1.7.5.
>>>
>>> The errors in the logfile look like this:
>>>
>>> Jun 19 10:52:57 testha2 haproxy[5042]: Server mail-exchtest-smtp/mbx13a is 
>>> DOWN, reason: Socket error, info: "No port available for the TCP 
>>> connection", check duration: 0ms. 3 active and 0 backup servers left. 0 
>>> sessions active, 0 requeued, 0 remaining in queue.
>>>
>> Bug introduced in 95db2bcfee ("MAJOR: check: find out which port to use
>> for health check at run time"): the AF check at line 1521 does not trigger
>> in this case ("tcp-check connect port" configuration).
>>
>> Partially reverting the check to the old one appears to work, but that's
>> probably not the correct fix.
>>
>>
>> diff --git a/src/checks.c b/src/checks.c
>> index 1af862e..5a34609 100644
>> --- a/src/checks.c
>> +++ b/src/checks.c
>> @@ -1518,7 +1518,7 @@ static int connect_conn_chk(struct task *t)
>>  		conn->addr.to = s->addr;
>>  	}
>> 
>> -	if ((conn->addr.to.ss_family == AF_INET) || (conn->addr.to.ss_family == AF_INET6)) {
>> +	if (check->port) {
>>  		int i = 0;
>> 
>>  		i = srv_check_healthcheck_port(check);
> Thanks for the patch. I added the changed line and rebuilt the Debian package.
> Now the ports come up again.
>
>> A quick config workaround that reduces the check to a single port consists
>> of adding "port 25" to each server configuration (after the check keyword).
>
> Adding the port works for the SMTP setup. For IMAP, where the port is
> SSL-enabled, it still fails:

With the patch above, does the IMAP/SSL check work?
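
Just to be explicit, the workaround I have in mind looks roughly like this
(sketch only: the backend and server names are taken from your log, the
addresses and the SMTP banner check are placeholders):

backend mail-exchtest-smtp
    mode tcp
    option tcp-check
    tcp-check connect port 25
    tcp-check expect rstring ^220
    # "port 25" after "check" pins the health check to a single port
    server mbx13a 192.0.2.11:25 check port 25
    server mbx13b 192.0.2.12:25 check port 25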


We also had a report on discourse:
https://discourse.haproxy.org/t/option-tcp-check-failed-with-ha-proxy-version-1-7-8/1443


Baptiste, can you please take a look at this?



Thanks,
Lukas



Re: peers synchronization issue

2017-07-13 Thread Willy Tarreau
Hi Fred,

On Thu, Jul 13, 2017 at 09:30:13AM +0200, Frederic Lecaille wrote:
> Hello,
> 
> I have noticed that when several stick-table backends are attached to
> several 'peers' sections, only the stick-tables attached to the last
> 'peers' section could be synchronized ;).
> 
> This patch fixes this issue.

Thanks! I'm pretty sure we've already been hit by this global "peers"
variable sometimes being masked by another one carrying the same name in
a function. Now, with your renaming, we should definitely be safe.
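
The trap, reduced to a toy example (illustration only, not the actual
haproxy code; the function name is made up):

struct peers {
	const char *id;
	struct peers *next;
};

struct peers *peers;    /* global: head of all parsed "peers" sections */

/* The parameter silently shadows the global above, so the body only ever
 * walks the list it was handed, never the full configuration. */
static void sync_sections(struct peers *peers)
{
	struct peers *p;

	for (p = peers; p; p = p->next) {
		/* resync p ... */
	}
}

With the global renamed to cfg_peers, such a local can no longer mask it
unnoticed.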

Willy



peers synchronization issue

2017-07-13 Thread Frederic Lecaille

Hello,

I have noticed that when several stick-table backends are attached to
several 'peers' sections, only the stick-tables attached to the last
'peers' section could be synchronized ;).


This patch fixes this issue.
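
For reference, a minimal configuration that reproduces the problem looks
roughly like this (sketch only; peer names, addresses and table parameters
are made up):

peers section_a
    peer ha1 192.0.2.1:1024
    peer ha2 192.0.2.2:1024

peers section_b
    peer ha1 192.0.2.1:1025
    peer ha2 192.0.2.2:1025

backend app_a
    mode tcp
    stick-table type ip size 10k expire 30m peers section_a
    stick on src

backend app_b
    mode tcp
    # before the patch, only the tables bound to the last parsed
    # 'peers' section (here section_b) were actually synchronized
    stick-table type ip size 10k expire 30m peers section_b
    stick on src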

Regards,

Fred.
From 53908a325155836ddb8b0c8a1b8c56e1fa13139d Mon Sep 17 00:00:00 2001
From: Frédéric Lécaille
Date: Thu, 13 Jul 2017 09:07:09 +0200
Subject: [PATCH] BUG/MINOR: peers: peer synchronization issue (with several
 peers sections).

When several stick-tables were configured with several peers sections,
only some of them could be synchronized: the ones attached to the last
parsed 'peers' section. This was because, at least, the peer I/O handler
referred to the wrong 'peers' section list, in fact always the same one:
the last one parsed.

The fact that the global peers section list was named "struct peers *peers"
led to this issue. This variable name is dangerous ;).

So this patch renames the global 'peers' variable to 'cfg_peers' to ensure
that no such wrong references remain, and all the functions which used the
old 'peers' variable have been modified to refer to the correct list.

Must be backported to 1.6 and 1.7.
---
 include/types/peers.h |  2 +-
 src/cfgparse.c        | 18 +++++++++---------
 src/haproxy.c         | 10 +++++-----
 src/peers.c           | 40 ++++++++++++++++++++++--------------------
 src/proxy.c           |  6 +++---
 5 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/include/types/peers.h b/include/types/peers.h
index 105dffb..a77a094 100644
--- a/include/types/peers.h
+++ b/include/types/peers.h
@@ -91,7 +91,7 @@ struct peers {
 };
 
 
-extern struct peers *peers;
+extern struct peers *cfg_peers;
 
 #endif /* _TYPES_PEERS_H */
 
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 600f273..ecd4c9f 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -1941,7 +1941,7 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 			goto out;
 		}
 
-		for (curpeers = peers; curpeers != NULL; curpeers = curpeers->next) {
+		for (curpeers = cfg_peers; curpeers != NULL; curpeers = curpeers->next) {
 			/*
 			 * If there are two proxies with the same name only following
 			 * combinations are allowed:
@@ -1959,8 +1959,8 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 			goto out;
 		}
 
-		curpeers->next = peers;
-		peers = curpeers;
+		curpeers->next = cfg_peers;
+		cfg_peers = curpeers;
 		curpeers->conf.file = strdup(file);
 		curpeers->conf.line = linenum;
 		curpeers->last_change = now.tv_sec;
@@ -2040,7 +2040,7 @@ int cfg_parse_peers(const char *file, int linenum, char **args, int kwm)
 		if (strcmp(newpeer->id, localpeer) == 0) {
 			/* Current is local peer, it define a frontend */
 			newpeer->local = 1;
-			peers->local = newpeer;
+			cfg_peers->local = newpeer;
 
 			if (!curpeers->peers_fe) {
 				if ((curpeers->peers_fe  = calloc(1, sizeof(struct proxy))) == NULL) {
@@ -8087,9 +8087,9 @@ int check_config_validity()
 		}
 
 		if (curproxy->table.peers.name) {
-			struct peers *curpeers = peers;
+			struct peers *curpeers;
 
-			for (curpeers = peers; curpeers; curpeers = curpeers->next) {
+			for (curpeers = cfg_peers; curpeers; curpeers = curpeers->next) {
 				if (strcmp(curpeers->id, curproxy->table.peers.name) == 0) {
 					free((void *)curproxy->table.peers.name);
 					curproxy->table.peers.p = curpeers;
@@ -9108,15 +9108,15 @@ out_uri_auth_compat:
 		if (curproxy->table.peers.p)
 			curproxy->table.peers.p->peers_fe->bind_proc |= curproxy->bind_proc;
 
-	if (peers) {
-		struct peers *curpeers = peers, **last;
+	if (cfg_peers) {
+		struct peers *curpeers = cfg_peers, **last;
 		struct peer *p, *pb;
 
 		/* Remove all peers sections which don't have a valid listener,
 		 * which are not used by any table, or which are bound to more
 		 * than one process.
 		 */
-		last = &peers;
+		last = &cfg_peers;
 		while (*last) {
 			curpeers = *last;
 
diff --git a/src/haproxy.c b/src/haproxy.c
index a425744..2316100 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1448,7 +1448,7 @@ static void init(int argc, char **argv)
 		struct peers *pr;
 		struct proxy *px;
 
-		for (pr = peers; pr; pr = pr->next)
+		for (pr = cfg_peers; pr; pr = pr->next)
 			if (pr->peers_fe)
 				break;
 
@@ -1662,11 +1662,11 @@ static void init(int argc, char **argv)
 	if (global.stats_fe)
 		global.maxsock += global.stats_fe->maxconn;
 
-	if (peers) {
+	if (cfg_peers) {
 		/* peers also need to bypass global maxconn */
-		struct peers *p = peers;
+		struct peers *p = cfg_peers;
 
-		for (p = peers; p; p = p->next)
+		for (p = cfg_peers; p; p = p->next)
 			if (p->peers_fe)
 				global.maxsock += p->peers_fe->maxconn;
 	}
@@ -2653,7 +2653,7 @@ int main(int argc, char **argv)
 		}
 
 		/* we might have to unbind some peers sections from some processes */
-		for (curpeers = peers; curpeers; curpeers = curpeers->next) {
+		for (curpeers = cfg_peers; curpeers; curpeers = curpeers->next) {

Re: DNS suffix for resolver

2017-07-13 Thread Aleksandar Lazic

Hi Baptiste,

Baptiste wrote on 13.07.2017:

On Wed, Jul 12, 2017 at 11:41 PM, Aleksandar Lazic  wrote:

Hi,

Today I used my haproxy image
https://hub.docker.com/r/me2digital/haproxy17/ again in OpenShift.

There is a variable SERVICE_DEST which holds the destination hostname for
the server line in haproxy.

When I use just 'mongodb', which is the service name in OpenShift, it does
not resolve because the resolver does not respect the 'search' line.

###
oc rsh haproxy17-3-gd94c cat /etc/resolv.conf
search 1-mongodb-test.svc.cluster.local svc.cluster.local cluster.local esrv.local
nameserver 10.40.96.55
nameserver 10.40.96.55
options ndots:5
###

Will this be handled better with the 1.8 DNS code?

Hi Aleksandar,

There is nothing in 1.8 regarding the search option; for now, DNS resolution only supports FQDNs.
I'm seeing interest in this kind of feature, and I have seen a few use cases myself where it would be worth supporting.
I can dig into it and see whether it can be done in a simple yet efficient way. I can't promise anything for the 1.8 release though...

Baptiste



Thanks for taking a look.

-- 
Best Regards
Aleks




Re: DNS suffix for resolver

2017-07-13 Thread Baptiste
On Wed, Jul 12, 2017 at 11:41 PM, Aleksandar Lazic 
wrote:

> Hi,
>
> Today I used my haproxy image
> https://hub.docker.com/r/me2digital/haproxy17/ again in OpenShift.
>
> There is a variable SERVICE_DEST which holds the destination hostname for
> the server line in haproxy.
>
> When I use just 'mongodb', which is the service name in OpenShift, it does
> not resolve because the resolver does not respect the 'search' line.
>
> ###
> oc rsh haproxy17-3-gd94c cat /etc/resolv.conf
> search 1-mongodb-test.svc.cluster.local svc.cluster.local cluster.local
> esrv.local
> nameserver 10.40.96.55
> nameserver 10.40.96.55
> options ndots:5
> ###
>
> Will this be handled better with the 1.8 DNS code?
>
>

Hi Aleksandar,

There is nothing in 1.8 regarding the search option; for now, DNS
resolution only supports FQDNs.
I'm seeing interest in this kind of feature, and I have seen a few use
cases myself where it would be worth supporting.
I can dig into it and see whether it can be done in a simple yet efficient
way. I can't promise anything for the 1.8 release though...
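
In the meantime the name has to be written out fully, along these lines
(sketch only: the nameserver address and domain are taken from your
resolv.conf, the MongoDB port and the section/backend names are
assumptions):

resolvers openshift
    nameserver dns1 10.40.96.55:53
    hold valid 10s

backend mongodb_back
    mode tcp
    # the string is sent to the nameserver as-is, so the short name
    # "mongodb" fails while the fully qualified one resolves
    server mongo1 mongodb.1-mongodb-test.svc.cluster.local:27017 check resolvers openshift resolve-prefer ipv4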

Baptiste