Re: IPv6 resolvers seems not works

2017-04-10 Thread Frederic Lecaille

On 04/10/2017 04:17 PM, Frederic Lecaille wrote:

On 04/10/2017 01:42 PM, Павел Знаменский wrote:

Hello,


Hello,


I'm trying to add an IPv6 address as a nameserver, to be able to resolve
addresses in an IPv6-only environment:

resolvers google_dns_10m
nameserver google_dns1 2001:4860:4860::8888:53
nameserver google_dns2 2001:4860:4860::8844:53
hold valid 10m
resolve_retries 2

But I'm getting this error:
[ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
nameserver: can't connect socket.

As I understand it, the resolver uses AF_INET when connecting to the
nameserver, and that's why IPv6 doesn't work.


In fact, the address families used in the socket() and connect() syscalls
differ.

This is revealed by strace:

socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
connect(4, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6,
"2001:4860:4860::8844", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0},
28) = -1 EAFNOSUPPORT (Address family not supported by protocol)

should be:

socket(PF_INET6, ...)


This patch fixes the issue.



From 09fbee7c67dea87761165c35e8a1c0450575504c Mon Sep 17 00:00:00 2001
From: Frédéric Lécaille
Date: Tue, 11 Apr 2017 08:46:37 +0200
Subject: [PATCH] MINOR: dns: Wrong address family used when creating IPv6
 sockets.

AF_INET address family was always used to create sockets to connect
to name servers. This prevented any connection over IPv6 from working.
---
 src/dns.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dns.c b/src/dns.c
index 075a701..a118598 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1022,7 +1022,7 @@ int dns_init_resolvers(int close_socket)
 			dgram->data = &resolve_dgram_cb;
 
 			/* create network UDP socket for this nameserver */
-			if ((fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
+			if ((fd = socket(curnameserver->addr.ss_family, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
 Alert("Starting [%s/%s] nameserver: can't create socket.\n", curr_resolvers->id,
 		curnameserver->id);
 free(dgram);
-- 
2.1.4



typo nits @doc

2017-04-10 Thread Jim Freeman
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html
s/formated/formatted/g



Re: low load client payload intermittently dropped with a "cD" error (v1.7.3)

2017-04-10 Thread Bryan Talbot

> On Apr 8, 2017, at 2:24 PM, Lincoln Stern 
> wrote:
> 
> I'm not sure how to interpret this, but it appears that haproxy is dropping
> client payload intermittently (1/100).  I have included tcpdumps and logs to
> show what is happening.
> 
> Am I doing something wrong?  I have no idea what could be causing this or how
> to go about debugging it.  I cannot reproduce it, but I do observe in 
> production ~2 times
> a day across 20 instances and 2K connections.
> 
> Any help or advice would be greatly appreciated.
> 
> 
> 

You’re in TCP mode with 60 second timeouts. So, if the connection is idle for 
that long then the proxy will disconnect. If you need idle connections to stick 
around longer and mix http and tcp traffic then you probably want to set 
“timeout tunnel” to however long you’re willing to let idle tcp connections sit 
around and not impact http timeouts. If you only need long-lived tcp “tunnel” 
connections, then you can instead just increase both your “timeout client” and 
“timeout server” timeouts to cover your requirements.
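In configuration terms, the distinction looks roughly like this (the values are illustrative only, not a recommendation):

```
defaults
    mode tcp
    timeout client  60s
    timeout server  60s
    # Once a connection is treated as a tunnel, this single timeout
    # replaces both the client and server timeouts for its remaining life:
    timeout tunnel  1h
```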

-Bryan



> What I'm trying to accomplish is to provide HA availability over two routes
> (i.e. internet providers).  One acts as primary and I gave it a "static-rr"
> "weight" of 256 and the other as backup and has a weight of "1".  Backup
> should only be used in case of primary failure.
> 
> 
> log:
> Apr  4 18:55:27 app055 haproxy[13666]: 127.0.0.1:42262 
>  [04/Apr/2017:18:54:41.585] ws-local servers/server1 
> 1/86/45978 4503 5873 -- 0/0/0/0/0 0/0
> Apr  4 22:46:37 app055 haproxy[13666]: 127.0.0.1:47130 
>  [04/Apr/2017:22:46:36.931] ws-local servers/server1 
> 1/62/663 7979 517 -- 0/0/0/0/0 0/0
> Apr  4 22:46:38 app055 haproxy[13666]: 127.0.0.1:32931 
>  [04/Apr/2017:22:46:37.698] ws-local servers/server1 
> 1/55/405 3062 553 -- 1/1/1/1/0 0/0
> Apr  4 22:46:43 app055 haproxy[13666]: 127.0.0.1:41748 
>  [04/Apr/2017:22:46:43.190] ws-local servers/server1 
> 1/115/452 7979 517 -- 2/2/2/2/0 0/0
> Apr  4 22:46:46 app055 haproxy[13666]: 127.0.0.1:57226 
>  [04/Apr/2017:22:46:43.576] ws-local servers/server1 
> 1/76/3066 2921 538 -- 1/1/1/1/0 0/0
> Apr  4 22:46:47 app055 haproxy[13666]: 127.0.0.1:39656 
>  [04/Apr/2017:22:46:47.072] ws-local servers/server1 
> 1/67/460 8254 528 -- 1/1/1/1/0 0/0
> Apr  4 22:47:38 app055 haproxy[13666]: 127.0.0.1:39888 
>  [04/Apr/2017:22:46:38.057] ws-local servers/server1 
> 1/63/60001 0 0 cD 0/0/0/0/0 0/0 
> Apr  5 08:44:55 app055 haproxy[13666]: 127.0.0.1:42650 
>  [05/Apr/2017:08:44:05.529] ws-local servers/server1 
> 1/53/49645 4364 4113 -- 0/0/0/0/0 0/0
> 
> 
> tcpdump:
> 22:46:38.057127 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [S], seq 
> 2113072542, win 43690, options [mss 65495,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.057156 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [S.], seq 
> 3284611992, ack 2113072543, win 43690, options [mss 65495,sackOK,TS val 
> 82055529 ecr 82055529,nop,wscale 7], length 0
> 22:46:38.057178 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 1, win 
> 342, options [nop,nop,TS val 82055529 ecr 82055529], length 0
> 22:46:38.057295 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [S], seq 
> 35567, win 29200, options [mss 1460,sackOK,TS val 82055529 ecr 
> 0,nop,wscale 7], length 0
> 22:46:38.060539 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 1:199, 
> ack 1, win 342, options [nop,nop,TS val 82055530 ecr 82055529], length 198
> 22:46:38.060598 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [.], ack 199, win 
> 350, options [nop,nop,TS val 82055530 ecr 82055530], length 0
> ... client payload acked ...
> 22:46:38.120527 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [S.], seq 
> 4125907118, ack 35568, win 28960, options [mss 1460,sackOK,TS val 
> 662461622 ecr 82055529,nop,wscale 8], length 0
> 22:46:38.120619 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 1, 
> win 229, options [nop,nop,TS val 82055545 ecr 662461622], length 0
> ... idle timeout by server 5 seconds later...
> 22:46:43.183207 IP 99.99.99.99.8000 > 10.10.10.10.34289: Flags [F.], seq 1, 
> ack 1, win 114, options [nop,nop,TS val 662466683 ecr 82055545], length 0
> 22:46:43.183387 IP 127.0.0.1.9011 > 127.0.0.1.39888: Flags [F.], seq 1, ack 
> 199, win 350, options [nop,nop,TS val 82056810 ecr 82055530], length 0
> 22:46:43.184011 IP 10.10.10.10.34289 > 99.99.99.99.8000: Flags [.], ack 2, 
> win 229, options [nop,nop,TS val 82056811 ecr 662466683], length 0
> 22:46:43.184025 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [.], ack 2, win 
> 342, options [nop,nop,TS val 82056811 ecr 82056810], length 0
> 22:46:43.184715 IP 127.0.0.1.39888 > 127.0.0.1.9011: Flags [P.], seq 199:206, 
> ack 2, win 342, options [nop,nop,TS val 82056811 ecr 82056810], length 7
> 22:46:43.184795 IP 127.0.0.1.9011 > 1

Re: [RFC][PATCHES] seamless reload

2017-04-10 Thread Olivier Houchard
On Mon, Apr 10, 2017 at 11:08:56PM +0200, Pavlos Parissis wrote:
> On 10/04/2017 08:09 , Olivier Houchard wrote:
> > 
> > Hi,
> > 
> > On top of those patches, here are 3 more patches.
> > The first one makes the systemd wrapper check for a HAPROXY_STATS_SOCKET
> > environment variable. If set, it will use that as an argument to -x, when
> > reloading the process.
> 
> I see you want to introduce a specific environment variable for this
> functionality, which is then fetched in the code with getenv(). This is one
> way to do it.
> 
> IMHO: I prefer to pass a value to an argument, for instance -x. It is also
> consistent with the haproxy binary, where someone uses the -x argument as well.
> 

I'm not sure what you mean by this?
It is supposed to be exactly the value given to -x as an argument.

> > The second one sends listening unix sockets, as well as IPv4/v6 sockets.
> > I see no reason not to, and that means we no longer have to wait until
> > the old process closes the socket before being able to accept new connections
> > on it.
> 
> > The third one adds a new global option, nosockettransfer: if set, we assume
> > we will never try to transfer listening sockets through the stats socket,
> > and close any socket not bound to our process, to save a few file
> > descriptors.
> > 
> 
> IMHO: a better name would be 'stats nounusedsockets', as it is referring to a
> generic functionality of the UNIX stats socket, rather than to a very specific
> functionality.
> 

Well, really, it is a global setting; maybe my wording isn't great, but it
makes haproxy close all sockets not bound to this process, as it used to,
instead of keeping them around in case somebody asks for them.

> I hope tomorrow I will find some time to test your patches.
> 

Thanks a lot ! This is greatly appreciated.

Regards,

Olivier



simgle ?

2017-04-10 Thread Jim Freeman
https://github.com/haproxy/haproxy/search?q=simgle

single ?
simple ?



Re: [RFC][PATCHES] seamless reload

2017-04-10 Thread Olivier Houchard
On Mon, Apr 10, 2017 at 10:49:21PM +0200, Pavlos Parissis wrote:
> On 07/04/2017 11:17 , Olivier Houchard wrote:
> > On Fri, Apr 07, 2017 at 09:58:57PM +0200, Pavlos Parissis wrote:
> >> On 06/04/2017 04:57 , Olivier Houchard wrote:
> >>> On Thu, Apr 06, 2017 at 04:56:47PM +0200, Pavlos Parissis wrote:
>  On 06/04/2017 04:25 , Olivier Houchard wrote:
> > Hi,
> >
> > The attached patchset is the first cut at an attempt to work around the
> > linux issues with SOREUSEPORT that makes haproxy refuse a few new 
> > connections
> > under heavy load.
> > This works by transferring the existing sockets to the new process via 
> > the
> > stats socket. A new command-line flag has been added, -x, that takes the
> > path to the unix socket as an argument, and if set, will attempt to 
> > retrieve
> > all the listening sockets;
> > Right now, any error, either while connecting to the socket, or 
> > retrieving
> > the file descriptors, is fatal, but maybe it'd be better to just fall 
> > back
> > to the previous behavior instead of opening any missing socket ? I'm 
> > still
> > undecided about that.
> >
> > Any testing, comments, etc would be greatly appreciated.
> >
> 
>  Does this patch set support HAProxy in multiprocess mode (nbproc > 1) ?
> 
> >>>
> >>> Hi Pavlos,
> >>>
> >>> If it does not, it's a bug :)
> >>> In my few tests, it seemed to work.
> >>>
> >>> Olivier
> >>>
> >>
> >>
> >> I run systems with systemd and I can't see how I can test the seamless
> >> reload functionality (thanks for that) without proper support for the UNIX
> >> socket file argument (-x) in the haproxy-systemd-wrapper.
> >>
> >> I believe you need to modify the wrapper to accept a -x argument for a
> >> single UNIX socket file, or -X for a directory path with multiple UNIX
> >> socket files, when HAProxy runs in multi-process mode.
> >>
> >> What do you think?
> >>
> >> Cheers,
> >> Pavlos
> >>
> >>
> >>
> > 
> > 
> > Hi Pavlos,
> > 
> > I didn't consider systemd, so it may be I have to investigate there.
> > You don't need to talk to all the processes to get the sockets, in the new
> > world order, each process does have all the sockets, although it will accept
> > connections only for those for which it is configured to do so (I plan to 
> > add
> > a configuration option to restore the old behavior, for those who don't 
> > need 
> > that, and want to save file descriptors).
> > Reading the haproxy-systemd-wrapper code, it should be trivial.
> > I just need to figure out how to properly provide the socket path to the
> >  wrapper.
> > I see that you already made use of a few environment variables in
> > haproxy.service. Would it be reasonable to add a new one, that could
> > be set in haproxy.service.d/overwrite.conf ? I'm not super-used to systemd,
> > and I'm not sure of how people are doing that kind of things.
> > 
> 
> I believe you are referring to the $CONFIG and $PIDFILE environment variables.
> Those two variables are passed to two arguments which were already present,
> but whose input was impossible to adjust; switching to variables allowed
> people to overwrite their input.
> 
> In this case, since we are talking about new functionality, I guess the best
> approach would be to have ExecStart using EXTRAOPTS:
> ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS
> 
> This will allow users to set a value for the new argument, and any other
> argument they want:
> cat /etc/systemd/system/haproxy.service.d/overwrite.conf
> [Service]
> Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
> "EXTRAOPTS=-x /foobar"
> 
> or using default configuration file /etc/default/haproxy
> 
> [Service]
> Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
> EnvironmentFile=-/etc/default/haproxy
> ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS
> 
> cat /etc/default/haproxy
> EXTRAOPTS="-x /foobar"
> 
> I hope it helps,
> Cheers,
> 



Hi Pavlos,

Yeah, I see what you mean; it is certainly doable, though -x is a bit special,
because you don't use it the first time you run haproxy, only for reloading,
so the wrapper would need special knowledge about it, and would have to remove
it from the user-supplied command line the first time it's called. I'm a bit
uneasy about that, but if it's felt that's the best way to do it, I'll go ahead.

Regards,

Olivier



Re: [RFC][PATCHES] seamless reload

2017-04-10 Thread Pavlos Parissis
On 10/04/2017 08:09 PM, Olivier Houchard wrote:
> 
> Hi,
> 
> On top of those patches, here are 3 more patches.
> The first one makes the systemd wrapper check for a HAPROXY_STATS_SOCKET
> environment variable. If set, it will use that as an argument to -x, when
> reloading the process.

I see you want to introduce a specific environment variable for this
functionality, which is then fetched in the code with getenv(). This is one
way to do it.

IMHO: I prefer to pass a value to an argument, for instance -x. It is also
consistent with the haproxy binary, where someone uses the -x argument as well.

> The second one sends listening unix sockets, as well as IPv4/v6 sockets.
> I see no reason not to, and that means we no longer have to wait until
> the old process closes the socket before being able to accept new connections
> on it.

> The third one adds a new global option, nosockettransfer: if set, we assume
> we will never try to transfer listening sockets through the stats socket,
> and close any socket not bound to our process, to save a few file
> descriptors.
> 

IMHO: a better name would be 'stats nounusedsockets', as it is referring to a
generic functionality of the UNIX stats socket, rather than to a very specific
functionality.

I hope tomorrow I will find some time to test your patches.

Thanks a lot for your work on this,
Pavlos





Re: [RFC][PATCHES] seamless reload

2017-04-10 Thread Pavlos Parissis
On 07/04/2017 11:17 PM, Olivier Houchard wrote:
> On Fri, Apr 07, 2017 at 09:58:57PM +0200, Pavlos Parissis wrote:
>> On 06/04/2017 04:57 , Olivier Houchard wrote:
>>> On Thu, Apr 06, 2017 at 04:56:47PM +0200, Pavlos Parissis wrote:
 On 06/04/2017 04:25 , Olivier Houchard wrote:
> Hi,
>
> The attached patchset is the first cut at an attempt to work around the
> linux issues with SOREUSEPORT that makes haproxy refuse a few new 
> connections
> under heavy load.
> This works by transferring the existing sockets to the new process via the
> stats socket. A new command-line flag has been added, -x, that takes the
> path to the unix socket as an argument, and if set, will attempt to 
> retrieve
> all the listening sockets;
> Right now, any error, either while connecting to the socket, or retrieving
> the file descriptors, is fatal, but maybe it'd be better to just fall back
> to the previous behavior instead of opening any missing socket ? I'm still
> undecided about that.
>
> Any testing, comments, etc would be greatly appreciated.
>

 Does this patch set support HAProxy in multiprocess mode (nbproc > 1) ?

>>>
>>> Hi Pavlos,
>>>
>>> If it does not, it's a bug :)
>>> In my few tests, it seemed to work.
>>>
>>> Olivier
>>>
>>
>>
>> I run systems with systemd and I can't see how I can test the seamless reload
>> functionality (thanks for that) without proper support for the UNIX socket
>> file argument (-x) in the haproxy-systemd-wrapper.
>>
>> I believe you need to modify the wrapper to accept a -x argument for a single
>> UNIX socket file, or -X for a directory path with multiple UNIX socket files,
>> when HAProxy runs in multi-process mode.
>>
>> What do you think?
>>
>> Cheers,
>> Pavlos
>>
>>
>>
> 
> 
> Hi Pavlos,
> 
> I didn't consider systemd, so it may be I have to investigate there.
> You don't need to talk to all the processes to get the sockets, in the new
> world order, each process does have all the sockets, although it will accept
> connections only for those for which it is configured to do so (I plan to add
> a configuration option to restore the old behavior, for those who don't need 
> that, and want to save file descriptors).
> Reading the haproxy-systemd-wrapper code, it should be trivial.
> I just need to figure out how to properly provide the socket path to the
>  wrapper.
> I see that you already made use of a few environment variables in
> haproxy.service. Would it be reasonable to add a new one, that could
> be set in haproxy.service.d/overwrite.conf ? I'm not super-used to systemd,
> and I'm not sure of how people are doing that kind of things.
> 

I believe you are referring to the $CONFIG and $PIDFILE environment variables.
Those two variables are passed to two arguments which were already present,
but whose input was impossible to adjust; switching to variables allowed
people to overwrite their input.

In this case, since we are talking about new functionality, I guess the best
approach would be to have ExecStart using EXTRAOPTS:
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS

This will allow users to set a value for the new argument, and any other
argument they want:
cat /etc/systemd/system/haproxy.service.d/overwrite.conf
[Service]
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
"EXTRAOPTS=-x /foobar"

or using default configuration file /etc/default/haproxy

[Service]
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
EnvironmentFile=-/etc/default/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS

cat /etc/default/haproxy
EXTRAOPTS="-x /foobar"

I hope it helps,
Cheers,





Admin socket server state and MAINT flag issues

2017-04-10 Thread Dennis Jacobfeuerborn
Hi,
I'm currently playing with the values that the admin socket returns when
the "show servers state" command is issued, and I noticed two things:

1. When using an abstract namespace socket as the address on a server line,
the srv_addr "field" will be empty, which technically isn't a problem, but
the documentation doesn't seem to mention how exactly fields are delimited.
Example:

...
5 sites-front-ssl 1 clear  2 0 1 1 63797 1 0 2 0 0 0 0
...
(notice the two spaces after "clear")

In my first attempt at parsing the output I used whitespace as a
delimiter, which failed for this particular server line. Once I used a
single space character as a delimiter, the parsing worked fine.
It would probably be good to document this explicitly to make parsers
more robust. Alternatively, one could use whitespace as a delimiter and
output fields that have no content as "-".

2. The server administrative state flags are defined like this:

enum srv_admin {
SRV_ADMF_FMAINT= 0x01,
SRV_ADMF_IMAINT= 0x02,
SRV_ADMF_MAINT = 0x23,
SRV_ADMF_CMAINT= 0x04,
SRV_ADMF_FDRAIN= 0x08,
SRV_ADMF_IDRAIN= 0x10,
SRV_ADMF_DRAIN = 0x18,
SRV_ADMF_RMAINT= 0x20,
};

Shouldn't the SRV_ADMF_MAINT value include the CMAINT flag and thus
actually have a value of 0x27?

Regards,
  Dennis



Re: server templates

2017-04-10 Thread Willy Tarreau
On Mon, Apr 10, 2017 at 08:29:05PM +0200, Aleksandar Lazic wrote:
> In case I have understood you all correctly, I will be able to add and remove
> servers without reloading/restarting haproxy, just with some CLI commands,
> right?
> That would be great ;-)

Yep that's it.

> Will this also be possible for listen/frontend/backend in the next step?
> This will be a big challenge because there are so many combinations and
> permission issues, which can create a lot of headaches 8-O

We've had some discussions in the past regarding what could be done for
backends. It's still uncertain and maybe it's the wrong approach, since
a backend is nothing less than a config container and an LB farm with
its LB algorithm. Maybe radically different approaches like splitting
backends to use server groups and being able to associate them on the
fly could make things easier. Or maybe we'll adopt very similar designs
for backend templates as for server templates. This still needs more
thinking. For frontends it's even worse and frontends are also directly
involved in maxconn computation and such stuff. Also most of the time
it will not be possible to bind a frontend after haproxy has lost its
privileges, let alone listen to a unix socket from a chroot. But other
approaches like passing an FD over the CLI could work to some extents.
Also frontends will require passing new certs, something we still can't
do for now. This alone could be a nice improvement for many users compared
to adding frontends (much rarer even in very dynamic environments).

Cheers,
Willy



Re: ACL with dynamic pattern

2017-04-10 Thread Aleksandar Lazic

Am 10-04-2017 10:55, schrieb Alexander Lebedev:


Hello!

I want to implement CSRF check with haproxy.
I want to check that the cookie value matches the header value, and deny the
request if they don't match.


Something like this:
acl token_valid req.cook(token) %[req.hdr(token)]
http-request deny unless token_valid


and when you add -m, does this help?

acl token_valid req.cook(token) -m %[req.hdr(token)]

or

acl token_valid %[req.hdr(token)] -m req.cook(token)

from
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.1.3
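For comparison, a sketch of how this comparison can be expressed on versions that have the strcmp converter (added in HAProxy 1.8); the 'token' cookie and header names follow the question above:

```
# Store the header value in a transaction variable, then compare the
# cookie against it; strcmp returns 0 when the two strings are equal.
http-request set-var(txn.hdr_token) req.hdr(token)
http-request deny unless { req.cook(token),strcmp(txn.hdr_token) eq 0 }
```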

Btw.: what's the output of
haproxy -vv


But I can't find the way to perform this check.
Is it really impossible?

Alexander Lebedev


Regards
aleks



Re: server templates

2017-04-10 Thread Aleksandar Lazic



Am 10-04-2017 20:19, schrieb Willy Tarreau:

On Mon, Apr 10, 2017 at 05:00:14PM +0200, Baptiste wrote:

On Mon, Apr 10, 2017 at 2:30 PM, Willy Tarreau  wrote:

> On Mon, Apr 10, 2017 at 10:02:29AM +0200, Frederic Lecaille wrote:
> > With server templates, haproxy could preallocate 'server' objects which
> > would derive from 'default-server' (with same settings as default server
> > settings), but with remaining parameters which are unknown at parsing
> time
> > (for instance their names, addresses, anything else). In fact here,
> names or
> > addresses are particular settings: a default server has not any default
> name
> > or default address.
>
> Absolutely. And this combined with the recent features of dynamic
> consistent
> cookies and with Baptiste's upcoming DNS patches will easily result in
> pretty
> dynamic backends!
>
> Willy
>
>
I just had a look at the implementation of server-templates. To make it
work with DNS resolution, we need to find a way to provide a fqdn to the
default-server directive. This might not be too complicated.


I hadn't thought about this one, good point. At least as a first step
you'll just have to use the same fqdn for all servers (and thus 
templates).

We definitely want it to be changeable over the CLI.


Thanks all for explanation.

In case I have understood you all correctly, I will be able to add and remove
servers without reloading/restarting haproxy, just with some CLI commands,
right?

That would be great ;-)

Will this also be possible for listen/frontend/backend in the next step?
This will be a big challenge because there are so many combinations and
permission issues, which can create a lot of headaches 8-O


regards
Aleks



Re: server templates

2017-04-10 Thread Willy Tarreau
On Mon, Apr 10, 2017 at 05:00:14PM +0200, Baptiste wrote:
> On Mon, Apr 10, 2017 at 2:30 PM, Willy Tarreau  wrote:
> 
> > On Mon, Apr 10, 2017 at 10:02:29AM +0200, Frederic Lecaille wrote:
> > > With server templates, haproxy could preallocate 'server' objects which
> > > would derive from 'default-server' (with same settings as default server
> > > settings), but with remaining parameters which are unknown at parsing
> > time
> > > (for instance their names, addresses, anything else). In fact here,
> > names or
> > > addresses are particular settings: a default server has not any default
> > name
> > > or default address.
> >
> > Absolutely. And this combined with the recent features of dynamic
> > consistent
> > cookies and with Baptiste's upcoming DNS patches will easily result in
> > pretty
> > dynamic backends!
> >
> > Willy
> >
> >
> I just had a look at the implementation of server-templates. To make it
> work with DNS resolution, we need to find a way to provide a fqdn to the
> default-server directive. This might not be too complicated.

I hadn't thought about this one, good point. At least as a first step
you'll just have to use the same fqdn for all servers (and thus templates).
We definitely want it to be changeable over the CLI.
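A rough sketch of how a template combined with a resolvers section can look, using the server-template syntax as it eventually appeared in HAProxy 1.8; the names and addresses here are made up:

```
resolvers mydns
    nameserver dns1 192.168.0.1:53

backend app
    # Pre-allocates srv1..srv5; addresses are filled in from DNS at runtime.
    server-template srv 1-5 app.example.com:80 check resolvers mydns
```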

Willy



Re: [RFC][PATCHES] seamless reload

2017-04-10 Thread Olivier Houchard

Hi,

On top of those patches, here are 3 more patches.
The first one makes the systemd wrapper check for a HAPROXY_STATS_SOCKET
environment variable. If set, it will use that as an argument to -x when
reloading the process.
The second one sends listening unix sockets, as well as IPv4/v6 sockets.
I see no reason not to, and that means we no longer have to wait until
the old process closes the socket before being able to accept new connections
on it.
The third one adds a new global option, nosockettransfer: if set, we assume
we will never try to transfer listening sockets through the stats socket,
and close any socket not bound to our process, to save a few file
descriptors.

Regards,

Olivier
From 8d6c38b6824346b096ba31757ab62bc986a433b3 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Sun, 9 Apr 2017 16:28:10 +0200
Subject: [PATCH 7/9] MINOR: systemd wrapper: add support for passing the -x
 option.

Make the systemd wrapper check whether HAPROXY_STATS_SOCKET is set.
If set, it will use it as an argument to the "-x" option, which makes
haproxy ask for any listening socket on the stats socket, in order
to achieve reloads with no new connection lost.
---
 contrib/systemd/haproxy.service.in |  2 ++
 src/haproxy-systemd-wrapper.c  | 10 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/contrib/systemd/haproxy.service.in 
b/contrib/systemd/haproxy.service.in
index dca81a2..05bb716 100644
--- a/contrib/systemd/haproxy.service.in
+++ b/contrib/systemd/haproxy.service.in
@@ -3,6 +3,8 @@ Description=HAProxy Load Balancer
 After=network.target
 
 [Service]
+# You can point the environment variable HAPROXY_STATS_SOCKET to a stats
+# socket if you want seamless reloads.
 Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
 ExecStartPre=@SBINDIR@/haproxy -f $CONFIG -c -q
 ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE
diff --git a/src/haproxy-systemd-wrapper.c b/src/haproxy-systemd-wrapper.c
index f6a9c85..1d00111 100644
--- a/src/haproxy-systemd-wrapper.c
+++ b/src/haproxy-systemd-wrapper.c
@@ -92,11 +92,15 @@ static void spawn_haproxy(char **pid_strv, int nb_pid)
pid = fork();
if (!pid) {
char **argv;
+   char *stats_socket = NULL;
int i;
int argno = 0;
 
/* 3 for "haproxy -Ds -sf" */
-   argv = calloc(4 + main_argc + nb_pid + 1, sizeof(char *));
+   if (nb_pid > 0)
+   stats_socket = getenv("HAPROXY_STATS_SOCKET");
+   argv = calloc(4 + main_argc + nb_pid + 1 +
+   (stats_socket != NULL ? 2 : 0), sizeof(char *));
if (!argv) {
fprintf(stderr, SD_NOTICE "haproxy-systemd-wrapper: 
failed to calloc(), please try again later.\n");
exit(1);
@@ -121,6 +125,10 @@ static void spawn_haproxy(char **pid_strv, int nb_pid)
argv[argno++] = "-sf";
for (i = 0; i < nb_pid; ++i)
argv[argno++] = pid_strv[i];
+   if (stats_socket != NULL) {
+   argv[argno++] = "-x";
+   argv[argno++] = stats_socket;
+   }
}
argv[argno] = NULL;
 
-- 
2.9.3

From df5e6e70f2e73fca9e28ba273904ab5c5acf53d3 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Sun, 9 Apr 2017 19:17:15 +0200
Subject: [PATCH 8/9] MINOR: cli: When sending listening sockets, send unix
 sockets too.

Send unix sockets, as well as IPv4/IPv6 sockets, so that we don't have to
wait for the old process to die before being able to bind those.
---
 src/cli.c|  6 --
 src/proto_uxst.c | 50 ++
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/src/cli.c b/src/cli.c
index d5ff11f..533f792 100644
--- a/src/cli.c
+++ b/src/cli.c
@@ -1067,7 +1067,8 @@ static int _getsocks(char **args, struct appctx *appctx, 
void *private)
list_for_each_entry(l, &px->conf.listeners, by_fe) {
/* Only transfer IPv4/IPv6 sockets */
if (l->proto->sock_family == AF_INET ||
-   l->proto->sock_family == AF_INET6)
+   l->proto->sock_family == AF_INET6 ||
+   l->proto->sock_family == AF_UNIX)
tot_fd_nb++;
}
px = px->next;
@@ -1120,7 +1121,8 @@ static int _getsocks(char **args, struct appctx *appctx, 
void *private)
/* Only transfer IPv4/IPv6 sockets */
if (l->state >= LI_LISTEN &&
(l->proto->sock_family == AF_INET ||
-   l->proto->sock_family == AF_INET6)) {
+   l->proto->sock_family == AF_INET6 ||
+   l->p

Re: Certificate order

2017-04-10 Thread Sander Hoentjen
This is a corrected patch against 1.7.5.

On 04/10/2017 05:00 PM, Sander Hoentjen wrote:
> No scratch that, this is wrong.
>
> On 04/10/2017 04:57 PM, Sander Hoentjen wrote:
>> The attached patch against haproxy 1.7.5 honours crt order also for
>> wildcards.
>>
>> On 04/07/2017 03:42 PM, Sander Hoentjen wrote:
>>> Hi Sander,
>>>
>>> On 04/06/2017 02:06 PM, Sander Klein wrote:
 Hi Sander,

 On 2017-04-06 10:45, Sander Hoentjen wrote:
> Hi guys,
>
> We have a setup where we sometimes have multiple certificates for a
> domain. We use multiple directories for that and would like the
> following behavior:
> - Look in dir A for any match, use it if found
> - Look in dir B for any match, use it if found
> - Look in dir .. etc
>
> This works great, except for wildcards. Right now a domain match in dir
> B takes precedence over a wildcard match in dir A.
>
> Is there a way to get haproxy to behave the way I describe?
 I have been playing with this some time ago and my solution was to
 just think about the order of certificate loading. I then found out
 that the last certificate was preferred if it matched. Not sure if
 this has changed over time.
>>> This does not work for wildcard certs, it seems they are always tried last.
>>>
>>> Regards,
>>> Sander
>>>
>

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index f947c99..ad70783 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -130,6 +130,7 @@
 
 int sslconns = 0;
 int totalsslconns = 0;
+int order = 0;
 
 #if (defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB && TLS_TICKETS_NO > 0)
 struct list tlskeys_reference = LIST_HEAD_INIT(tlskeys_reference);
@@ -1453,9 +1454,12 @@
 			break;
 		}
 	}
-	if (!node && wildp) {
+	if (wildp) {
 		/* lookup in wildcards names */
-		node = ebst_lookup(&s->sni_w_ctx, wildp);
+		n = ebst_lookup(&s->sni_w_ctx, wildp);
+		if (!node || n && container_of(n, struct sni_ctx, name)->order < container_of(node, struct sni_ctx, name)->order) {
+			node = n;
+		}
 	}
 	if (!node || container_of(node, struct sni_ctx, name)->neg) {
 		SSL_CTX *ctx;
@@ -2265,7 +2269,6 @@
 	X509 *x = NULL, *ca;
 	int i, err;
 	int ret = -1;
-	int order = 0;
 	X509_NAME *xname;
 	char *str;
 	pem_password_cb *passwd_cb;


Re: OpenSSL engine and async support

2017-04-10 Thread Grant Zhang

> On Apr 10, 2017, at 07:42, Emeric Brun  wrote:
> 

>> * openssl version (1.1.0b-e?)
> compiled 1.1.0e
>> 
>> 
> Could you provide patches rebased on current dev master branch?

I am kinda busy with another project but will try to provide rebased patches
this week.

Thanks, 

Grant


Re: Certificate order

2017-04-10 Thread Sander Hoentjen
No scratch that, this is wrong.

On 04/10/2017 04:57 PM, Sander Hoentjen wrote:
> The attached patch against haproxy 1.7.5 honours crt order also for
> wildcards.
>
> On 04/07/2017 03:42 PM, Sander Hoentjen wrote:
>> Hi Sander,
>>
>> On 04/06/2017 02:06 PM, Sander Klein wrote:
>>> Hi Sander,
>>>
>>> On 2017-04-06 10:45, Sander Hoentjen wrote:
 Hi guys,

 We have a setup where we sometimes have multiple certificates for a
 domain. We use multiple directories for that and would like the
 following behavior:
 - Look in dir A for any match, use it if found
 - Look in dir B for any match, use it if found
 - Look in dir .. etc

 This works great, except for wildcards. Right now a domain match in dir
 B takes precedence over a wildcard match in dir A.

 Is there a way to get haproxy to behave the way I describe?
>>> I have been playing with this some time ago and my solution was to
>>> just think about the order of certificate loading. I then found out
>>> that the last certificate was preferred if it matched. Not sure if
>>> this has changed over time.
>> This does not work for wildcard certs, it seems they are always tried last.
>>
>> Regards,
>> Sander
>>




Re: IPv6 resolvers seems not works

2017-04-10 Thread Frederic Lecaille

On 04/10/2017 01:42 PM, Павел Знаменский wrote:

Hello,


Hello,


I'm trying to add an IPv6 address as a nameserver, to be able to resolve
addresses in an IPv6-only environment:

resolvers google_dns_10m
nameserver google_dns1 2001:4860:4860:::53
nameserver google_dns2 2001:4860:4860::8844:53
hold valid 10m
resolve_retries 2

But I get this error:
[ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
nameserver: can't connect socket.

As I understand it, the resolver uses AF_INET when connecting to the
nameserver, which is why IPv6 doesn't work.


Indeed, the address families used during the socket() and connect() syscalls
are different.


This is revealed by strace:

socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
connect(4, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, 
"2001:4860:4860::8844", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 
28) = -1 EAFNOSUPPORT (Address family not supported by protocol)


should be:

socket(PF_INET6, ...)



Is it possible to add IPv6 support for resolvers?

Thanks.





Re: server templates

2017-04-10 Thread Baptiste
On Mon, Apr 10, 2017 at 2:30 PM, Willy Tarreau  wrote:

> On Mon, Apr 10, 2017 at 10:02:29AM +0200, Frederic Lecaille wrote:
> > With server templates, haproxy could preallocate 'server' objects which
> > would derive from 'default-server' (with same settings as default server
> > settings), but with remaining parameters which are unknown at parsing
> time
> > (for instance their names, addresses, anything else). In fact here,
> names or
> > addresses are particular settings: a default server has not any default
> name
> > or default address.
>
> Absolutely. And this combined with the recent features of dynamic
> consistent
> cookies and with Baptiste's upcoming DNS patches will easily result in
> pretty
> dynamic backends!
>
> Willy
>
>
I just had a look at the implementation of server templates. To make it
work with DNS resolution, we need to find a way to provide an FQDN to the
default-server directive. This might not be too complicated.
After this, the magic will happen!

Great work Frederic :)

Baptiste


Re: Certificate order

2017-04-10 Thread Sander Hoentjen
The attached patch against haproxy 1.7.5 honours crt order also for
wildcards.

On 04/07/2017 03:42 PM, Sander Hoentjen wrote:
> Hi Sander,
>
> On 04/06/2017 02:06 PM, Sander Klein wrote:
>> Hi Sander,
>>
>> On 2017-04-06 10:45, Sander Hoentjen wrote:
>>> Hi guys,
>>>
>>> We have a setup where we sometimes have multiple certificates for a
>>> domain. We use multiple directories for that and would like the
>>> following behavior:
>>> - Look in dir A for any match, use it if found
>>> - Look in dir B for any match, use it if found
>>> - Look in dir .. etc
>>>
>>> This works great, except for wildcards. Right now a domain match in dir
>>> B takes precedence over a wildcard match in dir A.
>>>
>>> Is there a way to get haproxy to behave the way I describe?
>> I have been playing with this some time ago and my solution was to
>> just think about the order of certificate loading. I then found out
>> that the last certificate was preferred if it matched. Not sure if
>> this has changed over time.
> This does not work for wildcard certs, it seems they are always tried last.
>
> Regards,
> Sander
>

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index f947c99..ad70783 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -1453,9 +1453,12 @@
 			break;
 		}
 	}
-	if (!node && wildp) {
+	if (wildp) {
 		/* lookup in wildcards names */
-		node = ebst_lookup(&s->sni_w_ctx, wildp);
+		n = ebst_lookup(&s->sni_w_ctx, wildp);
+		if (!node || n && container_of(n, struct sni_ctx, name)->order < container_of(node, struct sni_ctx, name)->order) {
+			node = n;
+		}
 	}
 	if (!node || container_of(node, struct sni_ctx, name)->neg) {
 		SSL_CTX *ctx;


Re: OpenSSL engine and async support

2017-04-10 Thread Emeric Brun
Hi Grant,

On 04/01/2017 02:01 AM, Grant Zhang wrote:
> Hi Emeric,
> 
> Sorry for my delayed reply.
> 
> 
> On 03/28/2017 01:47 AM, Emeric Brun wrote:
>>
>>> This is an atom C2518 and it seems that --disable-prf has cut the 
>>> performance
>>> in half. We should receive a 8920 soon.
>>>
> Stopping the injection, the haproxy process continue to steal cpu doing 
> nothing (top shows ~50% of one core, mainly in user):
 Hmm, an idle haproxy process with qat enabled consumes about 5% of a core 
 in
 my test. 50% is too much:-(.
>>> In theory it should not consume anything anymore if it has nothing to do,
>>> so maybe the 5% you observed will help understand what is happening.
>> I've just noticed 50% cpu usage directly at start-up if we enable the engine 
>> (w or wout ssl-async):
>> global
>>  tune.ssl.default-dh-param 2048
>> ssl-engine qat
>> #ssl-async
>>
>> listen gg
>>  mode http
>>  bind 0.0.0.0:9443 ssl crt /root/2048.pem ciphers AES
>>  redirect location
> Somehow I cannot reproduce the cpu usage issue using the above config. In my 
> test with the above config, when haproxy is idle, pidstat shows 4% cpu usage
> 
> 11:49:14 PM   359247   3.33   1.33   0.00   4.67   1   haproxy_nodebug
> 11:49:17 PM   359247   3.33   1.33   0.00   4.67   1   haproxy_nodebug
> 11:49:20 PM   359247   2.67   1.33   0.00   4.00   1   haproxy_nodebug
> 
> When it is under load test the cpu usage jumps to 100%(single process mode):
> 11:51:26 PM   359247   85.67   21.67   0.00   107.33   8   haproxy_nodebug
> 
> I am not sure whether it is the different hardware(c2000 vs. 895X), or some 
> difference in software. Just something to check:
We've just received a DH8920, but the QAT dh89xx config fails to load with it.

> * your kernel version (I tested with 4.4/4.7/4.9 without problem though), and 
> qat driver version?
I'm using CentOS as described in Intel's doc:
[root@centos QAT_Engine]# uname -a
Linux centos 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux

and for qat
qatmux.l.2.6.0-60 (QAT1.5)
> * openssl version (1.1.0b-e?)
compiled 1.1.0e
> * are you using the latest QAT_ENGINE https://github.com/01org/QAT_Engine
Yes, I am.
> * I assume you use qat_contig_mem kernel module?
Yes, I am.
> * are you using the following config file for your c2000 card? 
> https://github.com/01org/QAT_Engine/blob/master/qat/config/c2xxx/multi_process_optimized/c2xxx_qa_dev0.conf
I'm using the one provided with the driver, reviewed and patched by the
Intel guys because it was not compatible with my chip: the provided one is
for 2 engines while mine has only one.
> Thanks,
> 
> Grant
> 
Could you provide patches rebased on current dev master branch?

R,
Emeric



Re: [PATCH]: BUG/MINOR

2017-04-10 Thread Willy Tarreau
On Fri, Apr 07, 2017 at 07:52:42PM +0100, David CARLIER wrote:
> Hi all,
> 
> I was trying to compile the 1.8 branch under DragonflyBSD and went into a
> build failure, thus
> this patch proposal.

Ah OK thanks David now I see the problem, it was also reported by Steven
(in CC). I'm merging it. Thanks,

Willy



Re: [PATCH] BUILD: fix for non-transparent builds

2017-04-10 Thread Willy Tarreau
Hi Steven,

On Thu, Apr 06, 2017 at 04:02:36PM -0700, Steven Davidovitz wrote:
> Broke in dba9707713eb49a39b218f331c252fb09494c566.

Strange, what OS/build options are you using ? Also, the commit above
doesn't seem to exist so it's not easy to find the extent of the issue.

Willy



Re: [PATCH] DOC: stick-table is available in frontend sections

2017-04-10 Thread Willy Tarreau
On Thu, Apr 06, 2017 at 04:31:39PM +0100, Adam Spiers wrote:
> Fix the proxy keywords matrix to reflect that it's permitted to use
> stick-table in frontend sections.

Applied, thanks Adam!
Willy



Re: [PATCH] minor cleanup to the dynamic cookie code

2017-04-10 Thread Willy Tarreau
On Tue, Apr 04, 2017 at 10:33:00PM +0200, Olivier Houchard wrote:
> Willy, I think it is mostly safe and you can apply it.

Applied, thanks Olivier!
Willy



Re: server templates

2017-04-10 Thread Willy Tarreau
On Mon, Apr 10, 2017 at 10:02:29AM +0200, Frederic Lecaille wrote:
> With server templates, haproxy could preallocate 'server' objects which
> would derive from 'default-server' (with same settings as default server
> settings), but with remaining parameters which are unknown at parsing time
> (for instance their names, addresses, anything else). In fact here, names or
> addresses are particular settings: a default server has not any default name
> or default address.

Absolutely. And this combined with the recent features of dynamic consistent
cookies and with Baptiste's upcoming DNS patches will easily result in pretty
dynamic backends!

Willy



IPv6 resolvers seems not works

2017-04-10 Thread Павел Знаменский
Hello,
I'm trying to add an IPv6 address as a nameserver, to be able to resolve
addresses in an IPv6-only environment:

resolvers google_dns_10m
nameserver google_dns1 2001:4860:4860:::53
nameserver google_dns2 2001:4860:4860::8844:53
hold valid 10m
resolve_retries 2

But I get this error:
[ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
nameserver: can't connect socket.

As I understand it, the resolver uses AF_INET when connecting to the
nameserver, which is why IPv6 doesn't work.
Is it possible to add IPv6 support for resolvers?

Thanks.


ACL with dynamic pattern

2017-04-10 Thread Alexander Lebedev
Hello!

I want to implement a CSRF check with haproxy: check that a cookie value
matches a header value, and deny the request if they are not equal.

Something like this:
acl token_valid req.cook(token) %[req.hdr(token)]
http-request deny unless token_valid

But I can't find a way to perform this check.
Is it really impossible?

Alexander Lebedev
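
For what it's worth, one possible direction (an untested sketch, not an answer given on the list: it relies on variables and the strcmp converter, which only appeared in haproxy versions newer than 1.7, and the variable name txn.hdr_token is mine) is to copy the header into a transaction variable and compare the cookie against it:

```
# hypothetical sketch -- requires a haproxy version with set-var and strcmp
http-request set-var(txn.hdr_token) req.hdr(token)
acl token_valid req.cook(token),strcmp(txn.hdr_token) eq 0
http-request deny unless token_valid
```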


Re: server templates

2017-04-10 Thread Frederic Lecaille

On 04/08/2017 01:27 AM, Aleksandar Lazic wrote:

Hi Frederic


Hi Aleksandar,


Am 07-04-2017 15:00, schrieb Frederic Lecaille:

Hello Haproxy ML,

Here are patches attached to this mail to add "server templates"
feature to haproxy.


Please can you explain a little bit more the use case, thanks.
I'm sure there is a valid use case but I don't understand it.


haproxy allocates, as far as possible, everything it requires before
starting its job. This is the case for 'server' objects: a static,
predefined list of servers, written in the configuration files, is
allocated for each backend during configuration file parsing.


With server templates, haproxy could preallocate 'server' objects which
would derive from 'default-server' (with the same settings as the default
server), but whose remaining parameters are unknown at parsing time (for
instance their names and addresses). In fact, names and addresses are
particular settings here: a default server has no default name or default
address.


With server templates, haproxy would not have to allocate any 'server'
after having parsed its configuration files. It could use an already
allocated server template and set the remaining settings which were
unknown at configuration parsing time.


This may be useful to instantiate servers which are discovered after the
configuration files have been parsed.



The first two patches consist in moving code to be reused both during
'server' line parsing and during server template initialization.

A new CLI command has also been added (see "init server-templates
backend").



I think this is a more general request which could be discussed in
another thread. From my point of view, since server templates are more
dynamic than 'server' lines, they are not supposed to be saved, as they
may differ between two haproxy runs.



Regards,

Fred.