Re: Observations about reloads and DNS SRV records

2018-06-07 Thread Tait Clarridge
Hi Baptiste, thanks for the response.

On Wed, Jun 6, 2018 at 6:32 PM Baptiste  wrote:

>
> This should not happen and it's a known issue that we're working on.
>
>
Excellent, figured you guys were probably already aware of it. Let me know
if I can assist in testing.


>
> Actually, I tested many DNS servers and some of them simply did not send
> the additional records when they could not fit in the response (too small
> a payload for the number of SRV records).
> Technically, we could try to use additional records if available and then
> fall back to the current way of working if none are found.
>
>
True, not a lot of them do, and I don't have a lot of hosts. I assumed haproxy
already parsed them based on reading
https://www.haproxy.com/documentation/aloha/9-5/traffic-management/lb-layer7/dns-srv-records/
and took that into account when writing my DNS service.

I'm a little swamped with other work at the moment, but when I get a chance
I can provide a DNS server (written in Go) that returns additional records
to test with, if that helps.
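
For anyone following along, here is a sketch (hypothetical names) of what
"additional records" means in this context: the A records for the SRV targets
are carried in the additional section of the same DNS response, so the
resolver would not need a second lookup per target.

    ; SRV records for a backend, plus the matching A records that a DNS
    ; server may (or may not) include in the additional section
    _http._tcp.be1.example.com. 10 IN SRV 0 0 8080 srv1.be1.example.com.
    _http._tcp.be1.example.com. 10 IN SRV 0 0 8080 srv2.be1.example.com.
    srv1.be1.example.com.       10 IN A   10.0.0.1
    srv2.be1.example.com.       10 IN A   10.0.0.2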

>
>
>> I'm happy with the workaround I'll be pursuing for now, where my SD service
>> (which originally was going to be a resolver and populate via SRV records)
>> is going to write all the backend definitions to disk, so this is not a
>> pressing issue; just thought I'd share the limitations I discovered. My
>> knowledge of C (and the internal workings of HAProxy) is not great,
>> otherwise this would probably be a patch submission for #1 :)
>>
>> Tait
>>
>>
> I'll check that for you. (In the meantime, please keep answering
> Aleksandar's emails; the more info I have, the better.)
>
> Baptiste
>

Thanks again. I don't have a lot of time to do any testing right now but
hope to soon.


Re: regression testing for haproxy

2018-06-07 Thread Frederic Lecaille

On 06/07/2018 03:14 PM, Frederic Lecaille wrote:

Hi all,

We have recently worked in collaboration with Poul-Henning Kamp to adapt 
varnishtest, the script-driven regression testing tool for Varnish HTTP 
Cache, so that it is also capable of testing haproxy.


Note that here we are speaking about *regression* testing, which has 
nothing to do with other classes of tests (unit, integration, 
performance, etc.). The aim of such tests is to prevent old bugs from 
coming back!


A nice reference manual for varnishtest may be found here:

https://varnish-cache.org/docs/6.0/reference/varnishtest.html

In a few words, the varnishtest program is able to start HTTP clients and 
servers and put one (or several) varnish-cache processes in the middle to 
test it, making these clients and servers interact with each other. So, 
here we wanted to do the same thing for haproxy.


Of course, it will not always be possible to write a regression test 
script for every bug to come.


Please find attached to this mail a first patch documenting how to use 
the varnishtest regression testing tool with haproxy and helping you write 
regression test files.


We will have to discuss how to organize the test files for these 
regression tests.


A big thank you to Poul-Henning Kamp for having helped us during this 
interesting project.



Regards,

Fred.


Well... this is the patch matching the previous text file.

Fred
>From deb0549ad58e7a4a91a631c6b673f832ace1f428 Mon Sep 17 00:00:00 2001
From: Frédéric Lécaille 
Date: Thu, 7 Jun 2018 14:57:30 +0200
Subject: [PATCH] DOC: regression testing: Add a short starting guide.

This documentation describes how to write Varnish Test Case (VTC)
files to regression-test haproxy.
---
 doc/regression-testing.txt | 697 +
 1 file changed, 697 insertions(+)
 create mode 100644 doc/regression-testing.txt

diff --git a/doc/regression-testing.txt b/doc/regression-testing.txt
new file mode 100644
index 000..2df67cb
--- /dev/null
+++ b/doc/regression-testing.txt
@@ -0,0 +1,697 @@
+   +---------------------------------------------+
+   | HAProxy regression testing with varnishtest |
+   +---------------------------------------------+
+
+
+The information found in this file is a short starting guide to help you
+write VTC (Varnish Test Case) scripts (or VTC files) for haproxy regression testing.
+Such VTC files are currently used to test the Varnish cache application developed by
+Poul-Henning Kamp. A very big thank you to him for having helped us add
+our haproxy C modules to the varnishtest tool.
+
+A lot of general information about how to write VTC files may be found in 'man/vtc.7'
+manual. It is *highly* recommended to read this manual before asking. This
+documentation only deals with the varnishtest support for haproxy.
+
+
+varnishtest installation
+
+
+To use varnishtest you will have to download and compile the recent Varnish cache
+sources found at https://github.com/varnishcache/varnish-cache.
+
+To compile Varnish cache :
+
+$ ./autogen.sh
+$ ./configure
+$ make
+
+The varnishtest sources may be found in 'bin/varnishtest' directory.
+'bin/varnishtest/tests' is full of VTC files for Varnish cache. After having
+compiled these sources, the varnishtest executable location is
+'bin/varnishtest/varnishtest'.
+
+varnishtest is able to search for the haproxy executable file it is supposed to
+launch in the PATH environment variable. To force the executable to be used by
+varnishtest, the HAPROXY_PROGRAM environment variable for varnishtest may be
+typically set as follows:
+
+ $ HAPROXY_PROGRAM=~/srcs/haproxy/haproxy varnishtest ...
+
+
+varnishtest execution
+---------------------
+
+varnishtest program comes with interesting options. The most interesting are:
+
+-t  Timeout in seconds, to abort the test if some launched program does not
+    terminate in time.
+-v  By default, varnishtest does not dump the outputs of the processes it launched
+    when the test passes. With this option the outputs are dumped even
+    when the test passes.
+-L  to always keep the temporary VTC directories.
+-l  to keep the temporary VTC directories only when the test fails.
+
+When haproxy is launched by varnishtest, its -d option is enabled by default.
+
+
+How to write VTC files
+----------------------
+
+A VTC file must start with a "varnishtest" command line followed by a descriptive
+line enclosed by double quotes. This is not specific to the VTC files for haproxy.
+
+The VTC files for haproxy must also contain a "feature ignore_unknown_macro" line
+if any macro is used for haproxy in this file. This is due to the fact that
+varnishtest parser code for haproxy commands generates macros the varnishtest
+parser code for varnish has no knowledge of. This line prevents varnishtest from
+failing in such cases.
+
+To make varnishtest capable of testing haproxy, two new VTC 

regression testing for haproxy

2018-06-07 Thread Frederic Lecaille

Hi all,

We have recently worked in collaboration with Poul-Henning Kamp to adapt 
varnishtest, the script-driven regression testing tool for Varnish HTTP 
Cache, so that it is also capable of testing haproxy.


Note that here we are speaking about *regression* testing, which has 
nothing to do with other classes of tests (unit, integration, 
performance, etc.). The aim of such tests is to prevent old bugs from 
coming back!


A nice reference manual for varnishtest may be found here:

https://varnish-cache.org/docs/6.0/reference/varnishtest.html

In a few words, the varnishtest program is able to start HTTP clients and 
servers and put one (or several) varnish-cache processes in the middle to 
test it, making these clients and servers interact with each other. So, 
here we wanted to do the same thing for haproxy.


Of course, it will not always be possible to write a regression test 
script for every bug to come.


Please find attached to this mail a first patch documenting how to use 
the varnishtest regression testing tool with haproxy and helping you write 
regression test files.


We will have to discuss how to organize the test files for these 
regression tests.


A big thank you to Poul-Henning Kamp for having helped us during this 
interesting project.



Regards,

Fred.
   +---------------------------------------------+
   | HAProxy regression testing with varnishtest |
   +---------------------------------------------+


The information found in this file is a short starting guide to help you
write VTC (Varnish Test Case) scripts (or VTC files) for haproxy regression
testing.
Such VTC files are currently used to test the Varnish cache application developed by
Poul-Henning Kamp. A very big thank you to him for having helped us add
our haproxy C modules to the varnishtest tool.

A lot of general information about how to write VTC files may be found in the
'man/vtc.7' manual. It is *highly* recommended to read this manual before
asking. This documentation only deals with the varnishtest support for haproxy.


varnishtest installation
------------------------

To use varnishtest you will have to download and compile the latest Varnish
cache sources found at https://github.com/varnishcache/varnish-cache.

To compile Varnish cache :

$ ./autogen.sh
$ ./configure
$ make

The varnishtest sources may be found in the 'bin/varnishtest' directory.
'bin/varnishtest/tests' is full of VTC files for Varnish cache. After having
compiled these sources, the varnishtest executable location is
'bin/varnishtest/varnishtest'.

varnishtest searches the PATH environment variable for the haproxy executable
it is supposed to launch. To force the executable used by varnishtest, the
HAPROXY_PROGRAM environment variable may typically be set as follows:

 $ HAPROXY_PROGRAM=~/srcs/haproxy/haproxy varnishtest ...


varnishtest execution
---------------------

The varnishtest program comes with interesting options. The most interesting are:

-t  Timeout in seconds, to abort the test if some launched program does not
    terminate in time.
-v  By default, varnishtest does not dump the outputs of the processes it
    launched when the test passes. With this option the outputs are dumped
    even when the test passes.
-L  to always keep the temporary VTC directories.
-l  to keep the temporary VTC directories only when the test fails.

When haproxy is launched by varnishtest, its -d option is enabled by default.
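
For example, a typical invocation might look like this (the VTC file name is
only an illustration):

 $ HAPROXY_PROGRAM=~/srcs/haproxy/haproxy ./bin/varnishtest/varnishtest -l -t 10 basic.vtc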


How to write VTC files
--

A VTC file must start with a "varnishtest" command line followed by a
descriptive line enclosed in double quotes. This is not specific to the VTC
files for haproxy.

The VTC files for haproxy must also contain a "feature ignore_unknown_macro"
line if any macro is used for haproxy in this file. This is due to the fact
that the varnishtest parser code for haproxy commands generates macros that
the varnishtest parser code for varnish has no knowledge of. This line
prevents varnishtest from failing in such cases.

To make varnishtest capable of testing haproxy, two new VTC commands have been
implemented: "haproxy" and "syslog". "haproxy" is used to start haproxy
processes. "syslog" is used to start syslog servers (at this time, only used
by haproxy).

As haproxy cannot work without a configuration file, a VTC file for haproxy
must embed the configuration file contents for the haproxy instances it
declares. This may be done using the following intuitive syntax construction:
-conf {...}. Here -conf is an argument of the "haproxy" VTC command, used to
declare the configuration file of the haproxy instance it also declares (see
the "Basic HAProxy test" VTC file below).
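
For illustration only, here is a rough sketch of what such a VTC file could
look like. The macro forms used here (${s1_addr}, ${s1_port}, "fd@${fe1}",
${h1_fe1_sock}) are assumptions based on the conventions described in this
guide; the real "Basic HAProxy test" file is the authoritative example.

    varnishtest "Basic HAProxy test (sketch)"

    feature ignore_unknown_macro

    # dummy HTTP server playing the role of the backend application
    server s1 {
        rxreq
        txresp
    } -start

    # an haproxy process with its configuration embedded via -conf {...}
    haproxy h1 -conf {
        defaults
            mode http
            timeout connect 1s
            timeout client  1s
            timeout server  1s

        frontend fe1
            bind "fd@${fe1}"
            default_backend be1

        backend be1
            server srv1 ${s1_addr}:${s1_port}
    } -start

    # HTTP client connecting to the frontend declared above
    client c1 -connect ${h1_fe1_sock} {
        txreq -url "/"
        rxresp
        expect resp.status == 200
    } -run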

As for varnish VTC files, the parser of VTC files for haproxy automatically
generates macros for the declared frontends to be reused by the clients later
in the script, so after having written the "haproxy" command sections.
The syntax 

Re: haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-06-07 Thread Willy Tarreau
On Thu, Jun 07, 2018 at 03:32:31PM +0300, Jarno Huuskonen wrote:
> My minimal test config with the patch works (on top of
> 1.8.9): (doing reloads/curl in loop).

Thanks, not surprising anyway ;-)

Now merged.

Willy



Re: haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-06-07 Thread Jarno Huuskonen
Hi Olivier / Willy,

On Thu, Jun 07, Olivier Houchard wrote:
> Hi Willy,
> 
> On Thu, Jun 07, 2018 at 11:45:39AM +0200, Willy Tarreau wrote:
> > Hi Olivier,
> > 
> > On Wed, Jun 06, 2018 at 06:40:05PM +0200, Olivier Houchard wrote:
> > > You're right indeed, that code was not written with abns sockets in mind.
> > > The attached patch should fix it. It was created from master, but should
> > > apply to 1.8 as well.
> > > 
> > > Thanks !
> > > 
> > > Olivier
> > 
> > > >From 3ba0fbb7c9e854aafb8a6b98482ad7d23bbb414d Mon Sep 17 00:00:00 2001
> > > From: Olivier Houchard 
> > > Date: Wed, 6 Jun 2018 18:34:34 +0200
> > > Subject: [PATCH] MINOR: unix: Make sure we can transfer abns sockets as 
> > > well  on seamless reload.
> > 
> > Would you be so kind as to tag it "BUG" so that our beloved stable
> > team catches it for the next 1.8 ? ;-)
> > 
> 
> Sir yes sir.
> 
> > > diff --git a/src/proto_uxst.c b/src/proto_uxst.c
> > > index 9fc50dff4..a1da337fe 100644
> > > --- a/src/proto_uxst.c
> > > +++ b/src/proto_uxst.c
> > > @@ -146,7 +146,12 @@ static int uxst_find_compatible_fd(struct listener 
> > > *l)
> > >   after_sockname++;
> > >   if (!strcmp(after_sockname, ".tmp"))
> > >   break;
> > > - }
> > > + /* abns sockets sun_path starts with a \0 */
> > > + } else if (un1->sun_path[0] == 0
> > > + && un2->sun_path[0] == 0
> > > + && !strncmp(&un1->sun_path[1], &un2->sun_path[1],
> > > + sizeof(un1->sun_path) - 1))
> > > + break;
> > 
> > It may still randomly fail here because null bytes are explicitly permitted
> > in the sun_path. Instead I'd suggest this :
> > 
> > } else if (un1->sun_path[0] == 0 &&
> >            memcmp(un1->sun_path, un2->sun_path, sizeof(un1->sun_path)) == 0)
> > 
> > Jarno, if you still notice occasional failures, please try with this.
> > 
> 
> You're right, as unlikely as it can be in our current scenario, better safe
> than sorry.
> The attached patch is updated to reflect that.

Thanks !
My minimal test config with the patch works (on top of
1.8.9), doing reloads/curl in a loop.

I'll test with my normal/production config when I have more time
(probably in a few days).

-Jarno

-- 
Jarno Huuskonen



RE: Set-Cookie Secure

2018-06-07 Thread Roberto Cazzato
Hi,

your code, like the original:

acl https_sess ssl_fc
acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure
rspirep ^(set-cookie:.*) \1;\ Secure if https_sess !secured_cookie

works only for cookies inserted by the backend servers:
(backend sets cookie) -> (haproxy intercepts Set-Cookie and adds “Secure”) ->
(client receives Set-Cookie WITH Secure)

It doesn’t work in general for every cookie, e.g. those inserted by haproxy itself:
(haproxy adds a cookie with “cookie insert” or “rspadd
Set-Cookie”) -> (client receives Set-Cookie WITHOUT Secure)

Is there a stage where haproxy can add Secure in all cases?
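
(One possible angle, sketched only and untested, relying on the "secure"
keyword of the persistence "cookie" directive and on the txn variable trick
Igor suggested below; the acl names id_web1 / back_cookie_present are the ones
from the earlier configuration, and the server address is a placeholder:)

    # cookie inserted by haproxy itself: flag it Secure at insertion time
    backend be_http
        cookie SERVERID insert indirect nocache secure httponly
        server web1 10.0.0.1:80 cookie web1_

    # cookie added with rspadd: carry the attribute in the string itself,
    # conditioned on the https acl built from the txn variable
    frontend fe_https
        http-request set-var(txn.req_ssl) ssl_fc
        acl https_sess var(txn.req_ssl)
        rspadd Set-Cookie:\ x_cookie_servedby=web1_;\ path=/;\ Secure if id_web1 !back_cookie_present https_sess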

Thank you

PS: is there somewhere a logic schema of haproxy (like those for netfilter,
e.g. https://gist.github.com/nerdalert/a1687ae4da1cc44a437d) so one can know
which commands apply where in haproxy?
  I found it not so simple to control haproxy behavior more deeply.





Dott. Roberto Cazzato
Sicurezza ICT e Cloud
Area Tecnica

APKAPPA s.r.l. sede legale Via F. Albani, 21 20149 Milano | p.iva/vat no. 
IT-08543640158
sede amministrativa e operativa Reggio Emilia (RE) via M. K. Gandhi, 24/A 42123 
- sede operativa Magenta (MI) via Milano 89/91 20013
tel.  02 91712 000 | fax  02 91712 339 www.apkappa.it






Pursuant to the Italian personal data protection law (Legislative Decree
196/03 and related provisions), this email is intended solely for the persons
indicated above and the information it contains is to be considered strictly
confidential.
This email is confidential, do not use the contents for any purpose whatsoever 
nor disclose them to anyone else. If you are not the intended recipient, you 
should not copy, modify, distribute or take any action in reliance on it. If 
you have received this email in error, please notify the sender and delete this 
email from your system.





From: Igor Cicimov 
Sent: lunedì 9 ottobre 2017 06:38
To: mlist 
Cc: HAProxy 
Subject: Re: Set-Cookie Secure


Maybe try something like:

http-request set-var(txn.req_ssl) ssl_fc

acl https_sess var(txn.req_ssl)
acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure
rspirep ^(set-cookie:.*) \1;\ Secure if https_sess !secured_cookie

So the first line sets a transaction-scoped variable valid for both the request AND
the response, and then we use it in the https_sess acl for the response.

On Sat, Oct 7, 2017 at 9:30 PM, mlist 
mailto:ml...@apsystems.it>> wrote:
I prefer to use only one frontend for all requests, so I can control many
settings centrally and avoid replicating rules that are not so simple to
maintain. But centralizing means managing the non-default cases, so by default
all http is converted to https unless some conditions (acl) are met (for
applications we impose https, for web sites we leave the choice, …).
We also use stick tables as a base for ddos control, etc., for now with only
basic rules, and we use the cookie mechanism for normal persistence and for a
special client-side app persistence needed to identify the backend server in
special situations.
Config file attached.
From: Igor Cicimov 
[mailto:ig...@encompasscorporation.com]
Sent: venerdì 6 ottobre 2017 02:11

To: mlist mailto:ml...@apsystems.it>>
Cc: HAProxy mailto:haproxy@formilux.org>>
Subject: Re: Set-Cookie Secure
Hi,
On Fri, Oct 6, 2017 at 2:50 AM, mlist 
mailto:ml...@apsystems.it>> wrote:
Hi Igor, some news about this ?
From: mlist
Sent: venerdì 22 settembre 2017 08:58
To: 'Igor Cicimov' 
mailto:ig...@encompasscorporation.com>>
Cc: 'HAProxy' mailto:haproxy@formilux.org>>
Subject: RE: Set-Cookie Secure
I have acls to leave some sites on http (not redirected to https), so
unconditionally adding the Secure flag on rspadd is not an option.
From: Igor Cicimov [mailto:ig...@encompasscorporation.com]
Sent: venerdì 22 settembre 2017 02:35
To: mlist mailto:ml...@apsystems.it>>
Cc: HAProxy mailto:haproxy@formilux.org>>
Subject: Re: Set-Cookie Secure
Then you can unconditionally include Secure in your "rspadd Set-Cookie ..." 
since the communication between the client and HAP is always over SSL. Or am I 
missing something?
On Fri, Sep 22, 2017 at 10:18 AM, mlist 
mailto:ml...@apsystems.it>> wrote:
Hi Igor, I use fe_https:443-> be_http
From: Igor Cicimov 
[mailto:ig...@encompasscorporation.com]
Sent: venerdì 22 settembre 2017 00:44
To: rob.mlist mailto:rob.ml...@apsystems.it>>
Cc: HAProxy mailto:haproxy@formilux.org>>
Subject: Re: Set-Cookie Secure
On 18 Sep 2017 10:37 pm, "rob.mlist" 
mailto:rob.ml...@apsystems.it>> wrote:
I set 2 cookies on behalf of Backend Servers: one with these configuration 
lines at Frontend:
rspadd Set-Cookie:\ x_cookie_servedby=web1_;\ path=/ if id_web1 
!back_cookie_present
rspadd Set-Cookie:\ x_cookie_servedby=web4_;\ path=/ if id_web4 
!back_cookie_present
rspadd Set-Cookie:\ x_cookie_servedby=web10_;\ path=/ if id_web10 
!back_cookie_present
one at the Backend with this line (and the Backend cookie directive on each server):
cookie 

Re: haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-06-07 Thread Olivier Houchard
Hi Willy,

On Thu, Jun 07, 2018 at 11:45:39AM +0200, Willy Tarreau wrote:
> Hi Olivier,
> 
> On Wed, Jun 06, 2018 at 06:40:05PM +0200, Olivier Houchard wrote:
> > You're right indeed, that code was not written with abns sockets in mind.
> > The attached patch should fix it. It was created from master, but should
> > apply to 1.8 as well.
> > 
> > Thanks !
> > 
> > Olivier
> 
> > >From 3ba0fbb7c9e854aafb8a6b98482ad7d23bbb414d Mon Sep 17 00:00:00 2001
> > From: Olivier Houchard 
> > Date: Wed, 6 Jun 2018 18:34:34 +0200
> > Subject: [PATCH] MINOR: unix: Make sure we can transfer abns sockets as 
> > well  on seamless reload.
> 
> Would you be so kind as to tag it "BUG" so that our beloved stable
> team catches it for the next 1.8 ? ;-)
> 

Sir yes sir.

> > diff --git a/src/proto_uxst.c b/src/proto_uxst.c
> > index 9fc50dff4..a1da337fe 100644
> > --- a/src/proto_uxst.c
> > +++ b/src/proto_uxst.c
> > @@ -146,7 +146,12 @@ static int uxst_find_compatible_fd(struct listener *l)
> > after_sockname++;
> > if (!strcmp(after_sockname, ".tmp"))
> > break;
> > -   }
> > +   /* abns sockets sun_path starts with a \0 */
> > +   } else if (un1->sun_path[0] == 0
> > +   && un2->sun_path[0] == 0
> > +   && !strncmp(&un1->sun_path[1], &un2->sun_path[1],
> > +   sizeof(un1->sun_path) - 1))
> > +   break;
> 
> It may still randomly fail here because null bytes are explicitly permitted
> in the sun_path. Instead I'd suggest this :
> 
>   } else if (un1->sun_path[0] == 0 &&
>              memcmp(un1->sun_path, un2->sun_path, sizeof(un1->sun_path)) == 0)
> 
> Jarno, if you still notice occasional failures, please try with this.
> 

You're right, as unlikely as it can be in our current scenario, better safe
than sorry.
The attached patch is updated to reflect that.

Regards,

Olivier
>From b6c8bd3102abcf1bb1660429b9b737fcd7a60b61 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Wed, 6 Jun 2018 18:34:34 +0200
Subject: [PATCH] BUG/MINOR: unix: Make sure we can transfer abns sockets on
 seamless reload.

When checking if a socket we got from the parent is suitable for a listener,
we just checked that the path matched sockname.tmp; however, this is
unsuitable for abns sockets, where we don't have to create a temporary
file and rename it later.
To detect that, check that the first character of the sun_path is 0 for
both, and if so, that the sun_path from offset 1 onward is the same too.

This should be backported to 1.8.
---
 src/proto_uxst.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/proto_uxst.c b/src/proto_uxst.c
index 9fc50dff4..ab788bde7 100644
--- a/src/proto_uxst.c
+++ b/src/proto_uxst.c
@@ -146,7 +146,12 @@ static int uxst_find_compatible_fd(struct listener *l)
after_sockname++;
if (!strcmp(after_sockname, ".tmp"))
break;
-   }
+   /* abns sockets sun_path starts with a \0 */
+   } else if (un1->sun_path[0] == 0
+   && un2->sun_path[0] == 0
+   && !memcmp(&un1->sun_path[1], &un2->sun_path[1],
+   sizeof(un1->sun_path) - 1))
+   break;
}
xfer_sock = xfer_sock->next;
}
-- 
2.14.3



Re: remaining process after (seamless) reload

2018-06-07 Thread Willy Tarreau
On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
>   /* block signal delivery during processing */
> +#ifdef USE_THREAD
> + pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
> +#else
>   sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
> +#endif
 
I think for the merge we'd rather put a wrapper into hathreads.h, like
"ha_sigmask()" which uses either pthread_sigmask() or sigprocmask().

That will remove ifdefs and lower the risk of reusing these unsafe
calls.
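
For illustration, such a wrapper could look roughly like this (a sketch only,
not actual haproxy code; the name and location come from the suggestion above):

    /* hathreads.h (sketch): signal mask wrapper usable with and without threads */
    static inline int ha_sigmask(int how, const sigset_t *set, sigset_t *oldset)
    {
    #ifdef USE_THREAD
    	return pthread_sigmask(how, set, oldset);
    #else
    	return sigprocmask(how, set, oldset);
    #endif
    }

Callers would then use ha_sigmask(SIG_SETMASK, &blocked_sig, &old_sig)
unconditionally, whether or not threads are compiled in.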

What do you think ?

thanks,
Willy



Re: remaining process after (seamless) reload

2018-06-07 Thread William Lallemand
Hi guys,

Sorry for the late reply, I managed to reproduce and fix what seems to be the
bug.
The signal management was not handled correctly with threads.

Could you try these patches and see if they fix the problem?

Thanks.

-- 
William Lallemand
>From d695242fb260538bd8db323715d627c4a9deacc7 Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Thu, 7 Jun 2018 09:46:01 +0200
Subject: [PATCH 1/3] BUG/MEDIUM: threads: handle signal queue only in thread 0

Signals were handled in all threads, which caused some signals to be lost
from time to time. To avoid a complicated locking system (threads+signals),
we prefer handling the signals in one thread, avoiding concurrent access.

The side effect of this bug was that some processes were not leaving from
time to time during a reload.
---
 src/haproxy.c | 21 ++---
 src/signal.c  |  8 
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 4628d8296..e984ca573 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2398,8 +2398,10 @@ static void run_poll_loop()
 		/* Process a few tasks */
 		process_runnable_tasks();
 
-		/* check if we caught some signals and process them */
-		signal_process_queue();
+		/* check if we caught some signals and process them in the
+		 first thread */
+		if (tid == 0)
+			signal_process_queue();
 
 		/* Check if we can expire some tasks */
 		next = wake_expired_tasks();
@@ -2416,7 +2418,7 @@ static void run_poll_loop()
 			activity[tid].wake_tasks++;
 		else if (active_applets_mask & tid_bit)
 			activity[tid].wake_applets++;
-		else if (signal_queue_len)
+		else if (signal_queue_len && tid == 0)
 			activity[tid].wake_signal++;
 		else
 			exp = next;
@@ -3006,6 +3008,7 @@ int main(int argc, char **argv)
 		unsigned int *tids    = calloc(global.nbthread, sizeof(unsigned int));
 		pthread_t    *threads = calloc(global.nbthread, sizeof(pthread_t));
 		int  i;
+		sigset_t blocked_sig, old_sig;
 
 		THREAD_SYNC_INIT((1UL << global.nbthread) - 1);
 
@@ -3013,6 +3016,15 @@ int main(int argc, char **argv)
 		for (i = 0; i < global.nbthread; i++)
 			tids[i] = i;
 
+		/* ensure the signals will be blocked in every thread */
+		sigfillset(&blocked_sig);
+		sigdelset(&blocked_sig, SIGPROF);
+		sigdelset(&blocked_sig, SIGBUS);
+		sigdelset(&blocked_sig, SIGFPE);
+		sigdelset(&blocked_sig, SIGILL);
+		sigdelset(&blocked_sig, SIGSEGV);
+		pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
+
 		/* Create nbthread-1 thread. The first thread is the current process */
 		threads[0] = pthread_self();
 		for (i = 1; i < global.nbthread; i++)
@@ -3046,6 +3058,9 @@ int main(int argc, char **argv)
 		}
 #endif /* !USE_CPU_AFFINITY */
 
+		/* when multithreading we need to let only the thread 0 handle the signals */
+		pthread_sigmask(SIG_SETMASK, &old_sig, NULL);
+
 		/* Finally, start the poll loop for the first thread */
 		run_thread_poll_loop(&tids[0]);
 
diff --git a/src/signal.c b/src/signal.c
index a0975910b..f1f682188 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -31,7 +31,6 @@ struct pool_head *pool_head_sig_handlers = NULL;
 sigset_t blocked_sig;
 int signal_pending = 0; /* non-zero if t least one signal remains unprocessed */
 
-__decl_hathreads(HA_SPINLOCK_T signals_lock);
 
 /* Common signal handler, used by all signals. Received signals are queued.
  * Signal number zero has a specific status, as it cannot be delivered by the
@@ -71,9 +70,6 @@ void __signal_process_queue()
 	struct signal_descriptor *desc;
 	sigset_t old_sig;
 
-	if (HA_SPIN_TRYLOCK(SIGNALS_LOCK, _lock))
-		return;
-
 	/* block signal delivery during processing */
 	sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
 
@@ -100,7 +96,6 @@ void __signal_process_queue()
 
 	/* restore signal delivery */
 	sigprocmask(SIG_SETMASK, &old_sig, NULL);
-	HA_SPIN_UNLOCK(SIGNALS_LOCK, _lock);
 }
 
 /* perform minimal intializations, report 0 in case of error, 1 if OK. */
@@ -112,8 +107,6 @@ int signal_init()
 	memset(signal_queue, 0, sizeof(signal_queue));
 	memset(signal_state, 0, sizeof(signal_state));
 
-	HA_SPIN_INIT(_lock);
-
 	/* Ensure signals are not blocked. Some shells or service managers may
 	 * accidently block all of our signals unfortunately, causing lots of
 	 * zombie processes to remain in the background during reloads.
@@ -148,7 +141,6 @@ void deinit_signals()
 			pool_free(pool_head_sig_handlers, sh);
 		}
 	}
-	HA_SPIN_DESTROY(_lock);
 }
 
 /* Register a function and an integer argument on a signal. A pointer to the
-- 
2.16.1

>From 1501eddeb506897126d0d3d60a36ca780b24ffdf Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Thu, 7 Jun 2018 09:49:04 +0200
Subject: [PATCH 2/3] BUG/MINOR: don't ignore SIG{BUS,FPE,ILL,SEGV} during
 signal processing

---
 src/signal.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/signal.c b/src/signal.c
index f1f682188..0dadd762c 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -120,6 +120,14 @@ int signal_init()
 
 	sigfillset(&blocked_sig);
 	sigdelset(&blocked_sig, SIGPROF);
+	/* man sigprocmask: If SIGBUS, SIGFPE, SIGILL, or 

Re: haproxy-1.8.8 seamless reloads failing with abns@ sockets

2018-06-07 Thread Willy Tarreau
Hi Olivier,

On Wed, Jun 06, 2018 at 06:40:05PM +0200, Olivier Houchard wrote:
> You're right indeed, that code was not written with abns sockets in mind.
> The attached patch should fix it. It was created from master, but should
> apply to 1.8 as well.
> 
> Thanks !
> 
> Olivier

> >From 3ba0fbb7c9e854aafb8a6b98482ad7d23bbb414d Mon Sep 17 00:00:00 2001
> From: Olivier Houchard 
> Date: Wed, 6 Jun 2018 18:34:34 +0200
> Subject: [PATCH] MINOR: unix: Make sure we can transfer abns sockets as well  
> on seamless reload.

Would you be so kind as to tag it "BUG" so that our beloved stable
team catches it for the next 1.8 ? ;-)

> diff --git a/src/proto_uxst.c b/src/proto_uxst.c
> index 9fc50dff4..a1da337fe 100644
> --- a/src/proto_uxst.c
> +++ b/src/proto_uxst.c
> @@ -146,7 +146,12 @@ static int uxst_find_compatible_fd(struct listener *l)
>   after_sockname++;
>   if (!strcmp(after_sockname, ".tmp"))
>   break;
> - }
> + /* abns sockets sun_path starts with a \0 */
> + } else if (un1->sun_path[0] == 0
> + && un2->sun_path[0] == 0
> + && !strncmp(&un1->sun_path[1], &un2->sun_path[1],
> + sizeof(un1->sun_path) - 1))
> + break;

It may still randomly fail here because null bytes are explicitly permitted
in the sun_path. Instead I'd suggest this :

} else if (un1->sun_path[0] == 0 &&
           memcmp(un1->sun_path, un2->sun_path, sizeof(un1->sun_path)) == 0)

Jarno, if you still notice occasional failures, please try with this.
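
As a quick standalone illustration (not haproxy code) of why the full-width
memcmp() matters: abstract-namespace addresses may contain embedded null
bytes, and strncmp() silently stops comparing at the first one.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        /* two distinct abstract addresses: "\0foo" and "\0foo\0bar" */
        struct sockaddr_un a = { .sun_family = AF_UNIX };
        struct sockaddr_un b = { .sun_family = AF_UNIX };

        memcpy(a.sun_path, "\0foo", 4);
        memcpy(b.sun_path, "\0foo\0bar", 8);

        /* strncmp() stops at the '\0' after "foo": false match */
        printf("strncmp: %s\n",
               strncmp(a.sun_path + 1, b.sun_path + 1,
                       sizeof(a.sun_path) - 1) == 0 ? "equal" : "different");

        /* memcmp() compares every byte of sun_path: correctly different */
        printf("memcmp:  %s\n",
               memcmp(a.sun_path, b.sun_path,
                      sizeof(a.sun_path)) == 0 ? "equal" : "different");
        return 0;
    }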

Thanks
Willy



Re: maxsslconn vs maxsslrate

2018-06-07 Thread Mihir Shirali
Hi Alexander,

I have looked at the link. What I am looking for is an answer to the
difference between maxsslconn and maxsslrate. The former does not result in
CPU savings while the latter does. Also, the former does result in a large
number of TCP connection resets while the latter does not. What I'd like to
know and understand is why that is the case.
I am using nbproc set to 2.
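
For reference, a sketch of where the two settings sit (numbers arbitrary; the
comments only restate what is observed in this thread and the general intent
of the rate limits, so worth double-checking against the configuration manual):

    global
        maxconn 100000
        # hard cap on concurrent SSL connections: clients beyond the cap are
        # rejected (the TCP resets seen here), but every handshake that is
        # accepted still costs CPU, so usage stays high
        maxsslconn 40000
        # cap on new SSL handshakes per second: excess connections are simply
        # not accepted yet and wait in the kernel queue, which is what bounds
        # the CPU spent on handshakes
        maxsslrate 200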

On Thu, Jun 7, 2018 at 2:43 PM, Aleksandar Lazic  wrote:

> On 07/06/2018 14:30, Mihir Shirali wrote:
>
>> We have a large number of ip phones connecting to this port. They could
>> be as large as 80k. They request for a file from a custom
>> application. haproxy front ends the tls connection and then forwards
>> the request to the application's http port.
>>
>
> Have you taken a look at the link below for some tuning of the system
> and haproxy?
>
> HA-Proxy version 1.8.8 2018/04/19
>> Copyright 2000-2018 Willy Tarreau 
>>
>
> [snipp]
>
> Any chance to update to 1.8.9?
>
> Thanks, can you also send the anonymized haproxy conf.
> The main question is: do you use threads and/or nbproc?
> This will be answered by the conf.
>
> Best regards
> aleks
>
>
> On Thu, Jun 7, 2018 at 2:13 PM, Aleksandar Lazic 
>> wrote:
>>
>> Hi Mihir.
>>>
>>> On 07/06/2018 10:27, Mihir Shirali wrote:
>>>
>>> Hi Team,

 We use haproxy to front tls for a large number of endpoints; haproxy
 processes the TLS session and then forwards the request to the backend
 application.

 What we have noticed is that if there are a large number of connections
 from different clients, the CPU usage goes up significantly. This is
 primarily because haproxy is handling a lot of SSL connections. I came
 across the 2 options above and tested them out.


>>> What do you mean with *large number*?
>>>
>>> https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to
>>> -achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27
>>>
>>> With maxsslrate - CPU is better controlled and if I combine this with
>>>
 503 response in the front end I see great results. Is there a
 possibility of connection timeout on the client here if there are a
 very large number of requests?

 With maxsslconn, CPU is still pegged high - and clients receive a tcp
 reset. This is also good, because there is no chance of tcp time out on
 the client. Clients can retry after a bit and they are aware that the
 connection is closed instead of waiting on timeout. However, CPU still
 seems pegged high. What is the reason for high CPU on the server here -
 Is it because SSL stack is still hit with this setting?


>>> SSL/TLS handling isn't that easy.
>>>
>>> Please can you share some more information, because the latest
>>> versions of haproxy introduce a lot of optimisations, also for TLS.
>>>
>>> haproxy -vv
>>>
>>> Anonymized haproxy conf.
>>>
>>> --
>>>
 Regards,
 Mihir


>>> Best regards
>>> Aleks
>>>
>>
>> --
>> Regards,
>> Mihir
>>
>


-- 
Regards,
Mihir


Re: maxsslconn vs maxsslrate

2018-06-07 Thread Mihir Shirali
We have a large number of IP phones connecting to this port. There could be
as many as 80k. They request a file from a custom application. haproxy
front-ends the TLS connection and then forwards the request to the
application's http port.

HA-Proxy version 1.8.8 2018/04/19
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-unused-label
  OPTIONS = USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Running on OpenSSL version : OpenSSL 1.0.2l.6.2.83
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built without PCRE or PCRE2 support (using libc's regex instead)
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace


On Thu, Jun 7, 2018 at 2:13 PM, Aleksandar Lazic  wrote:

> Hi Mihir.
>
> On 07/06/2018 10:27, Mihir Shirali wrote:
>
>> Hi Team,
>>
>> We use haproxy to front tls for a large number of endpoints; haproxy
>> processes the TLS session and then forwards the request to the backend
>> application.
>>
>> What we have noticed is that if there are a large number of connections
>> from different clients, the CPU usage goes up significantly. This is
>> primarily because haproxy is handling a lot of SSL connections. I came
>> across the 2 options above and tested them out.
>>
>
> What do you mean with *large number*?
>
> https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to
> -achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27
>
> With maxsslrate - CPU is better controlled and if I combine this with
>> 503 response in the front end I see great results. Is there a
>> possibility of connection timeout on the client here if there are a
>> very large number of requests?
>>
>> With maxsslconn, CPU is still pegged high - and clients receive a tcp
>> reset. This is also good, because there is no chance of tcp time out on
>> the client. Clients can retry after a bit and they are aware that the
>> connection is closed instead of waiting on timeout. However, CPU still
>> seems pegged high. What is the reason for high CPU on the server here -
>> Is it because SSL stack is still hit with this setting?
>>
>
> SSL/TLS handling isn't that easy.
>
> Please can you share some more information, because the latest
> versions of haproxy introduce a lot of optimisations, also for TLS.
>
> haproxy -vv
>
> Anonymized haproxy conf.
>
> --
>> Regards,
>> Mihir
>>
>
> Best regards
> Aleks
>



-- 
Regards,
Mihir


Re: maxsslconn vs maxsslrate

2018-06-07 Thread Aleksandar Lazic

Hi Mihir.

On 07/06/2018 10:27, Mihir Shirali wrote:

Hi Team,

We use haproxy to front tls for a large number of endpoints; haproxy
processes the TLS session and then forwards the request to the backend
application.

What we have noticed is that if there are a large number of connections
from different clients, the CPU usage goes up significantly. This is
primarily because haproxy is handling a lot of SSL connections. I came
across the 2 options above and tested them out.


What do you mean by *large number*?

https://medium.freecodecamp.org/how-we-fine-tuned-haproxy-to-achieve-2-000-000-concurrent-ssl-connections-d017e61a4d27


With maxsslrate - CPU is better controlled and if I combine this with
503 response in the front end I see great results. Is there a
possibility of connection timeout on the client here if there are a
very large number of requests?

With maxsslconn, CPU is still pegged high - and clients receive a tcp
reset. This is also good, because there is no chance of tcp time out on
the client. Clients can retry after a bit and they are aware that the
connection is closed instead of waiting on timeout. However, CPU still
seems pegged high. What is the reason for high CPU on the server here -
Is it because SSL stack is still hit with this setting?


SSL/TLS handling isn't that easy.

Please can you share some more information, because the latest
versions of haproxy introduce a lot of optimisations, also for TLS.

haproxy -vv

Anonymized haproxy conf.


--
Regards,
Mihir


Best regards
Aleks