Re: Warning: upgrading to openssl master+ enable_tls1_3 (coming v1.1.1) could break handshakes for all protocol versions.

2018-01-12 Thread Gibson, Brian (IMS)
The way I read it, you just have to be sure to specify a valid TLS 1.3 cipher. I have not attempted the configuration to confirm, though. From: Pavlos Parissis Sent: Friday, January 12, 2018 4:55

Re: Warning: upgrading to openssl master+ enable_tls1_3 (coming v1.1.1) could break handshakes for all protocol versions.

2018-01-12 Thread Pavlos Parissis
On 12/01/2018 03:57 PM, Emeric Brun wrote: > Hi All, > > FYI: upgrading to the next openssl-1.1.1 could break your prod if you're using a > forced cipher list, because the > handshake will fail regardless of the TLS protocol version if you don't specify > a cipher valid for TLSv1.3 > in your cipher list. >

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 11:06:32AM -0600, Samuel Reed wrote: > On 1.8-git, similar results on the new process: > > % time seconds  usecs/call calls    errors syscall > -- --- --- - - >  93.75    0.265450  15 17805  

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Samuel Reed
On 1.8-git, similar results on the new process:

    % time     seconds  usecs/call     calls    errors syscall
    ------ ----------- ----------- --------- --------- ----------
     93.75    0.265450          15     17805           epoll_wait
      4.85    0.013730          49       283           write

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 10:13:55AM -0600, Samuel Reed wrote: > Excellent! Please let me know if there's any other output you'd like > from this machine. > > Strace on that new process shows thousands of these types of syscalls, > which vary slightly, > > epoll_wait(3, {{EPOLLIN, {u32=206,

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Samuel Reed
Excellent! Please let me know if there's any other output you'd like from this machine. Strace on that new process shows thousands of these types of syscalls, which vary slightly, epoll_wait(3, {{EPOLLIN, {u32=206, u64=206}}}, 200, 239) = 1 and these: epoll_wait(3, {}, 200, 0)   =
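For readers unfamiliar with the pattern in these traces: `epoll_wait(..., 0)` returns immediately, so a loop that keeps issuing zero-timeout polls never sleeps in the kernel and pins a full core. A small illustrative sketch (standard-library Python, not haproxy code) of the difference between a zero and a positive timeout; on Linux, `DefaultSelector` is epoll-backed, the same primitive seen in the strace output above:

```python
import selectors
import socket
import time

# Register one idle end of a socketpair: no data is pending on it.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

# A zero timeout returns immediately with no events; a loop built
# around such calls is a busy-wait that burns 100% of a CPU.
events = sel.select(timeout=0)
print(events)  # [] -- nothing ready, yet the call did not block

# A positive timeout parks the thread until data arrives or time runs out.
start = time.monotonic()
sel.select(timeout=0.05)  # sleeps up to 50 ms in the kernel
elapsed = time.monotonic() - start

sel.close()
a.close()
b.close()
```

The bug reports in this thread boil down to haproxy ending up in the first shape (thousands of `epoll_wait(..., 0)` calls per second) instead of the second.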

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 09:50:58AM -0600, Samuel Reed wrote: > To accelerate the process, I've increased the number of threads from 4 > to 8 on a 16-core machine. Ran strace for about 5s on each. > > Single process (8 threads): > > $ strace -cp 16807 > % time seconds  usecs/call calls   

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Samuel Reed
To accelerate the process, I've increased the number of threads from 4 to 8 on a 16-core machine. Ran strace for about 5s on each. Single process (8 threads): $ strace -cp 16807 % time seconds  usecs/call calls    errors syscall -- --- --- - -

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 09:28:54AM -0600, Samuel Reed wrote: > Thanks for your quick answer, Willy. > > That's a shame to hear but makes sense. We'll try out some ideas for > reducing contention. We don't use cpu-map with nbthread; I considered it > best to let the kernel take care of this,

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Samuel Reed
Thanks for your quick answer, Willy. That's a shame to hear but makes sense. We'll try out some ideas for reducing contention. We don't use cpu-map with nbthread; I considered it best to let the kernel take care of this, especially since there are some other processes on that box. I don't really
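For reference, the cpu-map/nbthread combination being discussed looks roughly like the sketch below in 1.8 syntax (`<process>/<thread>` notation). The core numbers are illustrative only, and as noted above, whether pinning helps depends on what else runs on the box:

```
global
    nbproc 1
    nbthread 4
    # 1.8 syntax: cpu-map <process>/<thread> <cpu-set>
    # Pin each of the 4 threads of process 1 to its own core (illustrative).
    cpu-map 1/1 0
    cpu-map 1/2 1
    cpu-map 1/3 2
    cpu-map 1/4 3
```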

Warning: upgrading to openssl master+ enable_tls1_3 (coming v1.1.1) could break handshakes for all protocol versions.

2018-01-12 Thread Emeric Brun
Hi All, FYI: upgrading to the next openssl-1.1.1 could break your prod if you're using a forced cipher list, because the handshake will fail regardless of the TLS protocol version if you don't specify a cipher valid for TLSv1.3 in your cipher list. https://github.com/openssl/openssl/issues/5057
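A hedged illustration of the remedy: TLSv1.3 suites are not selected through the classic `ciphers` list, and haproxy later gained a separate `ciphersuites` keyword for them (added after this thread, so the exact keyword availability depends on your version). The bind line, certificate path, and suite names below are an illustrative sketch, not a tested configuration:

```
# Sketch only: 'ciphersuites' covers TLSv1.3, 'ciphers' covers <= TLSv1.2.
# Paths and suite choices are illustrative.
frontend fe_tls
    bind :443 ssl crt /etc/haproxy/site.pem \
        ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256 \
        ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384
    default_backend be_app
```

With only a forced `ciphers` list and none of the TLSv1.3 suites available, the handshake fails as described in the linked OpenSSL issue.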

Re: High load average under 1.8 with multiple draining processes

2018-01-12 Thread Willy Tarreau
Hi Samuel, On Thu, Jan 11, 2018 at 08:29:15PM -0600, Samuel Reed wrote: > Is there a regression in the 1.8 series with SO_REUSEPORT and nbthread > (we didn't see this before with nbproc) or somewhere we should start > looking? In fact no, nbthread is simply new so it's not a regression but we're

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Emmanuel Hocdet
> On 12 Jan 2018 at 15:23, Aleksandar Lazic wrote: > > > -- Original message -- > From: "Willy Tarreau" > To: "Emmanuel Hocdet" > Cc: "haproxy" > Sent: 12.01.2018 13:04:02 > Subject: Re: [BUG] 100% cpu on

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Cyril Bonté
Hi all, - Original mail - > From: "Willy Tarreau" > To: "Emmanuel Hocdet" > Cc: "haproxy" > Sent: Friday, 12 January 2018 15:24:54 > Subject: Re: [BUG] 100% cpu on each threads > > On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Emmanuel Hocdet
> On 12 Jan 2018 at 15:24, Willy Tarreau wrote: > > On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel Hocdet wrote: >> When the syndrome appears, I see such a line in syslog: >> (for one or all servers) >> >> Server tls/L7_1 is DOWN, reason: Layer4 connection problem, info: "Bad

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel Hocdet wrote: > When the syndrome appears, I see such a line in syslog: > (for one or all servers) > > Server tls/L7_1 is DOWN, reason: Layer4 connection problem, info: "Bad file > descriptor", check duration: 2018ms. 0 active and 1 backup servers

Re[2]: [BUG] 100% cpu on each threads

2018-01-12 Thread Aleksandar Lazic
-- Original message -- From: "Willy Tarreau" To: "Emmanuel Hocdet" Cc: "haproxy" Sent: 12.01.2018 13:04:02 Subject: Re: [BUG] 100% cpu on each threads On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel Hocdet wrote: When

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Willy Tarreau
On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel Hocdet wrote: > When the syndrome appears, I see such a line in syslog: > (for one or all servers) > > Server tls/L7_1 is DOWN, reason: Layer4 connection problem, info: "Bad file > descriptor", check duration: 2018ms. 0 active and 1 backup servers left.

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Emmanuel Hocdet
Hi Willy > On 12 Jan 2018 at 11:38, Willy Tarreau wrote: > > Hi Manu, > > On Fri, Jan 12, 2018 at 11:14:57AM +0100, Emmanuel Hocdet wrote: >> >> Hi, >> >> with 1.8.3 + threads (with mworker) >> I notice 100% CPU per thread (epoll_wait + gettimeofday in a loop) >>

Re: Segfault on haproxy 1.7.10 with state file and slowstart

2018-01-12 Thread Willy Tarreau
Hello Raghu, On Thu, Jan 11, 2018 at 02:20:34PM +0530, Raghu Udiyar wrote: > Hello, > > Haproxy 1.7.10 segfaults when the srv_admin_state is set to > SRV_ADMF_CMAINT (0x04) > for a backend server, and that backend has the `slowstart` option set. > > The following configuration reproduces it:
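The reporter's actual reproducing configuration is truncated above. A hypothetical minimal sketch combining the two ingredients named in the report (a loaded server-state file plus `slowstart`) would look something like this; the paths, names, and addresses are invented, not the reporter's config:

```
# Hypothetical sketch of the reported crash ingredients; not the
# original configuration, which is truncated in the archive.
global
    server-state-file /var/lib/haproxy/server-state

defaults
    mode http
    load-server-state-from-file global

backend app
    # Crash scenario per the report: the state file restores
    # srv_admin_state = 0x04 (SRV_ADMF_CMAINT) for a server that
    # also has slowstart configured.
    server s1 192.0.2.10:80 check slowstart 30s
```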

Re: [BUG] 100% cpu on each threads

2018-01-12 Thread Willy Tarreau
Hi Manu, On Fri, Jan 12, 2018 at 11:14:57AM +0100, Emmanuel Hocdet wrote: > > Hi, > > with 1.8.3 + threads (with mworker) > I notice 100% CPU per thread (epoll_wait + gettimeofday in a loop) > The syndrome appears regularly on start/reload. We got a similar report yesterday affecting 1.5 to

[BUG] 100% cpu on each threads

2018-01-12 Thread Emmanuel Hocdet
Hi, with 1.8.3 + threads (with mworker) I notice 100% CPU per thread (epoll_wait + gettimeofday in a loop). The syndrome appears regularly on start/reload. My configuration includes one bind line with ssl in tcp mode. Is it a known issue? ++ Manu