Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Baptiste
Unfortunately, this may not be backported into 1.8.
We only backport bug fixes, and this is a feature.

Baptiste

On Mon, Jan 8, 2018 at 10:20 PM, Jim Freeman  wrote:

> Your proposal aligns with what I was thinking over the weekend.
>
> I'll try to be clean/small enough to tempt a back-port to 1.8 :-)
>
> On Mon, Jan 8, 2018 at 1:17 PM, Baptiste  wrote:
>
>> Hi Jim,
>>
>> I very much welcome this feature. Actually, I have wanted to add it myself
>> for some time now.
>> I currently work around it using an init script whenever I want to use the
>> name servers provided by resolv.conf.
>>
>> I propose the following: if no nameserver directives are found in the
>> resolvers section, then we fall back to parsing resolv.conf.
>>
>> If you feel comfortable enough, please send me / the ML a patch and I can
>> review it.
>> If you have any questions about the design, don't hesitate to ask.
>>
>> Baptiste
>>
>>
>> On Mon, Jan 8, 2018 at 1:56 PM, Jim Freeman  wrote:
>>
>>> No new libs needed.
>>>
>>> libc/libresolv's res_ninit() suffices ...
>>>
>>> http://man7.org/linux/man-pages/man3/resolver.3.html
>>>
>>> On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:
>>>
 Hi Jim,


 On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman 
 wrote:
 > Looks like libresolv's res_ninit() parses out /etc/resolv.conf's
 > nameservers [resolv.h], so haproxy won't have to parse it either ...
 >
 > Will keep poking.

 Do give it some time to discuss the implementation here first though,
 before you invest a lot of time in a specific direction (especially if
 you link to new libraries).

 CC'ing Baptiste and Willy.



 cheers,
 lukas

>>>
>>>
>>
>


Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Jim Freeman
Your proposal aligns with what I was thinking over the weekend.

I'll try to be clean/small enough to tempt a back-port to 1.8 :-)

On Mon, Jan 8, 2018 at 1:17 PM, Baptiste  wrote:

> Hi Jim,
>
> I very much welcome this feature. Actually, I have wanted to add it myself
> for some time now.
> I currently work around it using an init script whenever I want to use the
> name servers provided by resolv.conf.
>
> I propose the following: if no nameserver directives are found in the
> resolvers section, then we fall back to parsing resolv.conf.
>
> If you feel comfortable enough, please send me / the ML a patch and I can
> review it.
> If you have any questions about the design, don't hesitate to ask.
>
> Baptiste
>
>
> On Mon, Jan 8, 2018 at 1:56 PM, Jim Freeman  wrote:
>
>> No new libs needed.
>>
>> libc/libresolv's res_ninit() suffices ...
>>
>> http://man7.org/linux/man-pages/man3/resolver.3.html
>>
>> On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:
>>
>>> Hi Jim,
>>>
>>>
>>> On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
>>> > Looks like libresolv's res_ninit() parses out /etc/resolv.conf's
>>> > nameservers [resolv.h], so haproxy won't have to parse it either ...
>>> >
>>> > Will keep poking.
>>>
>>> Do give it some time to discuss the implementation here first though,
>>> before you invest a lot of time in a specific direction (especially if
>>> you link to new libraries).
>>>
>>> CC'ing Baptiste and Willy.
>>>
>>>
>>>
>>> cheers,
>>> lukas
>>>
>>
>>
>


Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Baptiste
Hi Jim,

I very much welcome this feature. Actually, I have wanted to add it myself
for some time now.
I currently work around it using an init script whenever I want to use the
name servers provided by resolv.conf.

I propose the following: if no nameserver directives are found in the
resolvers section, then we fall back to parsing resolv.conf.

If you feel comfortable enough, please send me / the ML a patch and I can
review it.
If you have any questions about the design, don't hesitate to ask.

Baptiste
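
To make the proposal concrete, a resolvers section under that behaviour could
hypothetically look like the sketch below (the fallback itself is not
implemented yet, and all names here are made up):

resolvers sys-dns
    # no "nameserver" lines: fall back to the entries in /etc/resolv.conf
    # (proposed behaviour, not available in any released version)
    hold valid 10s

backend app
    server-template app 4 app.example.internal:80 resolvers sys-dns check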


On Mon, Jan 8, 2018 at 1:56 PM, Jim Freeman  wrote:

> No new libs needed.
>
> libc/libresolv's res_ninit() suffices ...
>
> http://man7.org/linux/man-pages/man3/resolver.3.html
>
> On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:
>
>> Hi Jim,
>>
>>
>> On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
>> > Looks like libresolv's res_ninit() parses out /etc/resolv.conf's
>> > nameservers [resolv.h], so haproxy won't have to parse it either ...
>> >
>> > Will keep poking.
>>
>> Do give it some time to discuss the implementation here first though,
>> before you invest a lot of time in a specific direction (especially if
>> you link to new libraries).
>>
>> CC'ing Baptiste and Willy.
>>
>>
>>
>> cheers,
>> lukas
>>
>
>


1.8.3 dns resolver ipv4/ipv6 undesirable behaviour

2018-01-08 Thread Marc Fournier

Hello,

Using the following (simplified) configuration, all the servers go into (and
stay in) maintenance mode about 30 seconds after startup or a config reload
(the default "hold" timeout of the resolvers section, I guess). These log
lines get emitted:

2018-01-08T15:38:10.209195+00:00:  Proxy dockercloud_hello-world started.
2018-01-08T15:38:10.209200+00:00:  Proxy tutum_hello-world started.
[...]
2018-01-08T15:38:41.565222+00:00:  Server tutum_hello-world/tutum1 is going DOWN for maintenance (DNS timeout status). 3 active and 1 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2018-01-08T15:38:41.565233+00:00:  Server tutum_hello-world/tutum2 is going DOWN for maintenance (DNS timeout status). 2 active and 1 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2018-01-08T15:38:41.565238+00:00:  Server tutum_hello-world/tutum3 is going DOWN for maintenance (DNS timeout status). 1 active and 1 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
2018-01-08T15:38:41.565244+00:00:  Server tutum_hello-world/tutum4 is going DOWN for maintenance (DNS timeout status). 0 active and 1 backup servers left. Running on backup. 0 sessions active, 0 requeued, 0 remaining in queue.
2018-01-08T15:38:41.565250+00:00:  Server dockercloud_hello-world/dockercloud1 is going DOWN for maintenance (DNS timeout status). 0 active and 1 backup servers left. Running on backup. 0 sessions active, 0 requeued, 0 remaining in queue.


resolvers rancher
nameserver dnsmasq 169.254.169.250:53

backend dockercloud_hello-world
default-server inter 2000 rise 2 fall 3 port 80
server sorry 127.0.0.1:8082 backup
server-template dockercloud 1 hello-world.dockercloud.rancher.internal:80 resolvers rancher check

backend tutum_hello-world
default-server inter 2000 rise 2 fall 3 port 80
server sorry 127.0.0.1:8082 backup
server-template tutum 4 hello-world.tutum.rancher.internal:80 resolvers rancher check


I tcpdumped the DNS traffic and noticed the AAAA answers were empty.
(full output of wireshark's "text export" here:
https://gist.github.com/mfournier/0642a32df759ee0b1fbbd505a862f191)

Simply adding "resolve-prefer ipv4" makes the symptom go away, so no big
deal. But I wanted to point this out, as it might bite others, and I'm
pretty sure 1.7.x didn't have this issue.
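
For reference, the workaround simply adds that keyword to the server-template
lines of the config above, roughly:

backend tutum_hello-world
default-server inter 2000 rise 2 fall 3 port 80
server sorry 127.0.0.1:8082 backup
server-template tutum 4 hello-world.tutum.rancher.internal:80 resolvers rancher resolve-prefer ipv4 check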

Also, it doesn't seem right to me that a whole backend can get knocked
out by an incomplete DNS config (i.e. the setup works well for the first 30
seconds, as long as only the IPv4 A records get considered).

Thanks !

Marc



[PATCH] dns: Handle SRV record weights correctly

2018-01-08 Thread Olivier Houchard
Hi,

The attached patch attempts to map the SRV record weight to the haproxy
weight correctly. A SRV weight goes from 0 to 65535 while haproxy uses 0 to
256, so we have to divide it by 256; and a SRV weight of 0 doesn't mean the
server shouldn't be used, so we use a minimum weight of 1.

Regards,

Olivier
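
As a quick illustration of the mapping the patch implements (integer division
by 256, with a floor of 1 so that a 0-weight SRV record is still usable):

SRV weight (0-65535)    resulting haproxy weight
0                       1
100                     1
256                     1
1000                    3
65535                   255
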
From 8e8ab23223274ac75fdf1cfe2847337133fd59d2 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Mon, 8 Jan 2018 16:28:57 +0100
Subject: [PATCH] MINOR: Handle SRV record weight correctly.

A SRV record weight can range from 0 to 65535, while haproxy weight goes
from 0 to 255, so we have to divide it by 256 before handing it to haproxy.
Also, a SRV record with a weight of 0 doesn't mean the server shouldn't be
used, so use a minimum weight of 1.

This should probably be backported to 1.8.
---
 include/types/dns.h |  2 +-
 src/dns.c           | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/include/types/dns.h b/include/types/dns.h
index b1f068a61..9b1d08df7 100644
--- a/include/types/dns.h
+++ b/include/types/dns.h
@@ -143,7 +143,7 @@ struct dns_answer_item {
     int16_t          class;      /* query class */
     int32_t          ttl;        /* response TTL */
     int16_t          priority;   /* SRV type priority */
-    int16_t          weight;     /* SRV type weight */
+    uint16_t         weight;     /* SRV type weight */
     int16_t          port;       /* SRV type port */
     int16_t          data_len;   /* number of bytes in target below */
     struct sockaddr  address;    /* IPv4 or IPv6, network format */
diff --git a/src/dns.c b/src/dns.c
index fceef2e48..22af18dc9 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -522,10 +522,17 @@ static void dns_check_dns_response(struct dns_resolution *res)
             if (srv->srvrq == srvrq && srv->svc_port == item->port &&
                 item->data_len == srv->hostname_dn_len &&
                 !memcmp(srv->hostname_dn, item->target, item->data_len)) {
-                if (srv->uweight != item->weight) {
+                int ha_weight;
+
+                /* We still want to use a 0 weight server */
+                if (item->weight < 256)
+                    ha_weight = 1;
+                else
+                    ha_weight = item->weight / 256;
+                if (srv->uweight != ha_weight) {
                     char weight[9];
 
-                    snprintf(weight, sizeof(weight), "%d", item->weight);
+                    snprintf(weight, sizeof(weight), "%d", ha_weight);
                     server_parse_weight_change_request(srv, weight);
                 }
                 HA_SPIN_UNLOCK(SERVER_LOCK, &srv->lock);
@@ -547,6 +554,7 @@ static void dns_check_dns_response(struct dns_resolution *res)
             if (srv) {
                 const char *msg = NULL;
                 char weight[9];
+                int ha_weight;
                 char hostname[DNS_MAX_NAME_SIZE];
 
                 if (dns_dn_label_to_str(item->target, item->data_len+1,
@@ -563,7 +571,14 @@ static void dns_check_dns_response(struct dns_resolution *res)
                 if ((srv->check.state & CHK_ST_CONFIGURED) &&
                     !(srv->flags & SRV_F_CHECKPORT))
                     srv->check.port = item->port;
-                snprintf(weight, sizeof(weight), "%d", item->weight);
+
+                /* We still want to use a 0 weight server */
+                if (item->weight < 256)
+                    ha_weight = 1;
+                else
+                    ha_weight = item->weight / 256;
+
+                snprintf(weight, sizeof(weight), "%d", ha_weight);
                 server_parse_weight_change_request(srv, weight);
                 HA_SPIN_UNLOCK(SERVER_LOCK, &srv->lock);
             }
-- 
2.14.3



Redémarrage Haproxy (Restarting HAProxy)

2018-01-08 Thread Christophe Fourmy RJ
Hello,

We have been using HAProxy for some time on our production servers. We
occasionally have to restart a server, and therefore HAProxy. The problem is
that HAProxy then comes back up with the initial configuration from the .conf
file, which does not necessarily match the state it was in when it was
stopped. Is there a way to work around this problem?

Have a good day


-- 

*Christophe Fourmy*
*Architect*
Mobile | Tel. 02.90.89.60.00
Fax 02.23.44.80.45
cfou...@regionsjob.com

#ToutRegionsJob
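
One commonly used way to approach this question (a sketch only, assuming
HAProxy 1.6 or later; socket and file paths are placeholders) is to dump the
runtime server state to a file before stopping haproxy and to load it back at
startup:

# haproxy.cfg
global
    stats socket /var/run/haproxy.sock mode 600 level admin
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global

# run this just before stopping/restarting haproxy:
echo "show servers state" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/server-state

This restores the servers' runtime state (administrative state, weight,
address) after a restart; it does not replay every other kind of runtime
change made through the socket.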


Redirect traffic from one frontend to another based on ip address

2018-01-08 Thread Utkarsh Kattishettar
Hey,

I was wondering if the following is possible.

I would like to redirect traffic on haproxy from one frontend to another
based on IP/subnet. I'm aware of how to do this via ACLs after receiving it
on one frontend. However, I don't want the traffic to be decrypted /
SSL-terminated before it is forwarded to the other frontend. Basically, the
SSL termination should take place at the last frontend I want the traffic to
be received on. How can I achieve this?
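
For what it's worth, the ACL approach mentioned above can be done in TCP mode
so that the first frontend never terminates TLS and only passes bytes
through; a rough sketch (all names, addresses, networks and certificate paths
below are placeholders):

frontend tls-router
    mode tcp
    bind *:443
    acl from_branch src 10.1.0.0/16
    use_backend pass_to_branch if from_branch
    default_backend pass_to_main
    # (pass_to_main would be defined like pass_to_branch, pointing at the main frontend)

backend pass_to_branch
    mode tcp
    server branch-fe 127.0.0.1:8443 send-proxy

frontend branch-tls
    mode tcp
    bind 127.0.0.1:8443 accept-proxy ssl crt /etc/haproxy/branch.pem
    default_backend branch_app

backend branch_app
    mode tcp
    server app1 192.0.2.10:80

Routing on the source IP needs no TLS inspection, so the routing frontend
stays a plain TCP pass-through and the TLS session is only terminated on the
second frontend.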


Regards,
Utkarsh Kattishettar



Re: mworker: seamless reloads broken since 1.8.1

2018-01-08 Thread Pierre Cheynier
Hi,

On 08/01/2018 10:24, Lukas Tribus wrote:
>
> FYI there is a report on discourse mentioning this problem, and the
> poster appears to be able to reproduce the problem without nbthread
> parameter as well:
>
> https://discourse.haproxy.org/t/seamless-reloads-dont-work-with-systemd/1954
>
>
> Lukas
I retried this morning; I confirm that on 1.8.3, using

$ haproxy -vv
HA-Proxy version 1.8.3-205f675 2017/12/30
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -DTCP_USER_TIMEOUT=18
  OPTIONS = USE_LINUX_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

I get RSTs (i.e. not seamless reloads) when I introduce nbthread X in the
global section, after a systemctl restart of haproxy.
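
For clarity, the only configuration difference between the working and the
failing setup here is the thread setting, i.e. something like this (the value
is just an example):

global
    # adding this line is what turns reloads into connection resets in my tests
    nbthread 4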

Pierre




QUESTION

2018-01-08 Thread Rachel Stinson

Hi,

I was just dropping in a quick line to ask whether I could send some great
article ideas your way for a guest post on your website haproxy.com.


If you like my suggested ideas, I can then provide you with high-quality FREE
CONTENT/ARTICLE. In return, I would expect just the favor of a backlink
from within the main body of the article.


Do let me know if I can interest you with some great topic ideas.

Best Regards,

Rachel Stinson





Re: 1.8 resolvers - start vs. run

2018-01-08 Thread Jim Freeman
No new libs needed.

libc/libresolv's res_ninit() suffices ...

http://man7.org/linux/man-pages/man3/resolver.3.html
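
As a standalone illustration of what res_ninit() exposes (this is only a
sketch against the glibc resolver API, not the proposed haproxy change; build
with "cc res_probe.c -o res_probe -lresolv"):

#include <stdio.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <arpa/nameser.h>
#include <resolv.h>

int main(void)
{
    struct __res_state rs;
    char addr[INET_ADDRSTRLEN];
    int i;

    if (res_ninit(&rs) != 0) {
        fprintf(stderr, "res_ninit failed\n");
        return 1;
    }

    /* nscount/nsaddr_list hold the IPv4 nameservers parsed from
     * /etc/resolv.conf; IPv6 entries live in the _u._ext extension. */
    for (i = 0; i < rs.nscount; i++) {
        inet_ntop(AF_INET, &rs.nsaddr_list[i].sin_addr, addr, sizeof(addr));
        printf("nameserver %s port %u\n", addr, ntohs(rs.nsaddr_list[i].sin_port));
    }

    res_nclose(&rs);
    return 0;
}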

On Fri, Dec 29, 2017 at 2:26 PM, Lukas Tribus  wrote:

> Hi Jim,
>
>
> On Fri, Dec 29, 2017 at 10:14 PM, Jim Freeman  wrote:
> > Looks like libresolv's res_ninit() parses out /etc/resolv.conf's
> > nameservers [resolv.h], so haproxy won't have to parse it either ...
> >
> > Will keep poking.
>
> Do give it some time to discuss the implementation here first though,
> before you invest a lot of time in a specific direction (especially if
> you link to new libraries).
>
> CC'ing Baptiste and Willy.
>
>
>
> cheers,
> lukas
>


Re: cannot bind socket - Need help with config file

2018-01-08 Thread Lukas Tribus
Hello Imam,


On Mon, Jan 8, 2018 at 11:24 AM, Jonathan Matthews
 wrote:
> On Mon, 8 Jan 2018 at 08:29, Imam Toufique  wrote:
>>
>> [ALERT] 007/081940 (1416) : Starting frontend sftp-server: cannot bind socket [0.0.0.0:22]
>> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [10.0.15.23:22]
>> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [0.0.0.0:22]
>
>
> I would strongly suspect that the server already has something bound to port
> 22. It's probably your SSH daemon.
>
> You'll need to fix that, by dedicating either a different port or interface
> to the SFTP listener.

Correct.

Also:
- you can't bind the stats socket to the same port as your actual frontend
- you are already binding twice for the stats section (you must not have
"bind :ABC" AND "listen stats 1.2.3.4:ABC", as that will cause 2 different
sockets to be created; don't specify the IP and port on the "listen" line
to avoid that kind of confusion)


Lukas
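
A corrected stats section along those lines might look roughly like this (the
port is only a placeholder; the address appears once, on the bind line):

listen stats
    bind 10.0.15.23:8404
    mode http
    stats enable
    stats uri /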



Re: [PATCH 1/2] BUG/MINOR: lua: Fix default value for pattern in Socket.receive

2018-01-08 Thread Thierry Fournier
Hi Tim,

Thanks for the patch. Good catch!
Willy, you can apply it.

Thierry

> On 4 Jan 2018, at 19:32, Tim Duesterhus  wrote:
> 
> The default value of the pattern in `Socket.receive` is `*l` according
> to the documentation and in the `socket.tcp.receive` method of Lua.
> 
> The default value of `wanted` in `int hlua_socket_receive(struct lua_State *)`
> reflects this requirement, but the function fails to ensure this
> nonetheless:
> 
> If no parameter is given the top of the Lua stack will have the index 1.
> `lua_pushinteger(L, wanted);` then pushes the default value onto the stack
> (with index 2).
> The following `lua_replace(L, 2);` then pops the top index (2) and tries to
> replace the index 2 with it.
> I am not sure why exactly that happens (possibly, because one cannot replace
> non-existent stack indices), but this causes the stack index to be lost.
> 
> `hlua_socket_receive_yield` then tries to read the stack index 2, to
> determine what to read and get the value `0`, instead of the correct
> HLSR_READ_LINE, thus taking the wrong branch.
> 
> Fix this by ensuring that the top of the stack is not replaced by itself.
> 
> This bug was introduced in commit 7e7ac32dad1e15c19152d37aaf9ea6b3f00a7226
> (which is the very first commit adding the Socket class to Lua). This
> bugfix should be backported to every branch containing that commit:
> - 1.6
> - 1.7
> - 1.8
> 
> A test case for this bug is as follows:
> 
> The 'Test' response header will contain an HTTP status line with the
> patch applied and will be empty without the patch applied. Replacing
> the `sock:receive()` with `sock:receive("*l")` will cause the status
> line to appear with and without the patch
> 
> http.lua:
>  core.register_action("bug", { "http-req" }, function(txn)
>   local sock = core.tcp()
>   sock:settimeout(60)
>   sock:connect("127.0.0.1:80")
>   sock:send("GET / HTTP/1.0\r\n\r\n")
>   response = sock:receive()
>   sock:close()
>   txn:set_var("txn.foo", response)
>  end)
> 
> haproxy.cfg (bits omitted for brevity):
>  global
>   lua-load /scratch/haproxy/http.lua
> 
>  frontend fe
>   bind 127.0.0.1:8080
>   http-request lua.bug
>   http-response set-header Test %[var(txn.foo)]
> 
>   default_backend be
> 
>  backend be
>   server s 127.0.0.1:80
> ---
> src/hlua.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/src/hlua.c b/src/hlua.c
> index abd096d03..285d25589 100644
> --- a/src/hlua.c
> +++ b/src/hlua.c
> @@ -1869,7 +1869,10 @@ __LJMP static int hlua_socket_receive(struct lua_State *L)
> 
>   /* Set pattern. */
>   lua_pushinteger(L, wanted);
> - lua_replace(L, 2);
> +
> + /* Check if we would replace the top by itself. */
> + if (lua_gettop(L) != 2)
> +         lua_replace(L, 2);
> 
>   /* init bufffer, and fiil it wih prefix. */
>   luaL_buffinit(L, &socket->b);
> -- 
> 2.15.1
> 




Re: cannot bind socket - Need help with config file

2018-01-08 Thread Jonathan Matthews
On Mon, 8 Jan 2018 at 08:29, Imam Toufique  wrote:

> [ALERT] 007/081940 (1416) : Starting frontend sftp-server: cannot bind socket [0.0.0.0:22]
> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [10.0.15.23:22]
> [ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [0.0.0.0:22]
>

I would strongly suspect that the server already has something bound to
port 22. It's probably your SSH daemon.

You'll need to fix that, by dedicating either a different port or interface
to the SFTP listener.
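
Concretely (the port numbers below are only placeholder choices), that could
mean moving sshd to a dedicated management port and letting haproxy own port
22 for SFTP:

# /etc/ssh/sshd_config on the load balancer host
Port 2222

# haproxy.cfg
frontend sftp-server
    mode tcp
    bind *:22
    default_backend sftp_server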

J

> --
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html


Re: mworker: seamless reloads broken since 1.8.1

2018-01-08 Thread Lukas Tribus
Hello,


On Fri, Jan 5, 2018 at 4:44 PM, William Lallemand
 wrote:
> I'm able to reproduce, looks like it happens with the nbthread parameter only,
> I'll try to find the problem in the code.

FYI there is a report on discourse mentioning this problem, and the
poster appears to be able to reproduce the problem without nbthread
> parameter as well:

https://discourse.haproxy.org/t/seamless-reloads-dont-work-with-systemd/1954


Lukas



cannot bind socket - Need help with config file

2018-01-08 Thread Imam Toufique
Hi,

I need some help figuring out why my config below is failing to start the
haproxy daemon.  I am totally new to this.

Below is my config:


global
#   local2.* /var/log/haproxy.log
#
   log 127.0.0.1 local2
   #local2.* /var/log/haproxy.log
   chroot /var/log/haproxy
   #stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode tcp
   option tcplog
   option dontlognull
   timeout connect 5000
   timeout client 5
   timeout server 5


frontend sftp-server
   bind *:22
   default_backend sftp_server
   timeout client 1h


listen stats 10.0.15.23:22
bind :22
mode tcp
maxconn 2000
option redis-check
retries 3
option redispatch
balance roundrobin

use_backend sftp_server
backend sftp_server
balance roundrobin
server web 10.0.15.21:22 check weight 2
server nagios 10.0.15.15:22 check weight 2

When I run a config check, I get this:

[root@file haproxy]# haproxy -f ./haproxy.cfg -c
Configuration file is valid

When I try to start haproxy, I get the following error:

[root@file haproxy]# haproxy -f ./haproxy.cfg -d
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[ALERT] 007/081940 (1416) : Starting frontend sftp-server: cannot bind socket [0.0.0.0:22]
[ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [10.0.15.23:22]
[ALERT] 007/081940 (1416) : Starting proxy stats: cannot bind socket [0.0.0.0:22]

In the config above, I am trying to set up 2 SFTP servers load-balanced with
haproxy. I would like to use port 22 for SFTP.

Please help, I need to get this going.

thanks.