[PATCH] BUG/MINOR: fix maxaccept computation according to the frontend process range

2016-04-14 Thread Cyril Bonté
commit 7c0ffd23 only considers the explicit use of the "process" keyword on the listeners. But at this step, if it is not defined in the configuration, the listener's bind_proc mask is set to 0. As a result, the code computes the maxaccept value based on only 1 process, which is not always
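
A minimal C sketch of the computation being discussed (not the actual HAProxy code; global_maxaccept, frontend_nbproc and bind_proc are placeholder names): the per-listener accept budget is divided by the number of processes the listener is bound to, and the bug above is that an unset "process" keyword leaves the mask at 0, so a naive popcount assumes a single process unless a fallback to the frontend's process range is added.

/* sketch: scale maxaccept by the number of processes bound to the listener */
#include <stdio.h>

static int popcount64(unsigned long long mask)
{
    int n = 0;
    while (mask) {
        mask &= mask - 1;   /* clear the lowest set bit */
        n++;
    }
    return n;
}

int main(void)
{
    int global_maxaccept = 64;            /* hypothetical tunable */
    int frontend_nbproc = 4;              /* processes in the frontend range */
    unsigned long long bind_proc = 0;     /* "process" keyword not set */

    int nprocs = popcount64(bind_proc);
    if (!nprocs)
        nprocs = frontend_nbproc;         /* fall back to the frontend range */

    printf("per-listener maxaccept = %d\n", global_maxaccept / nprocs);
    return 0;
}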

Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 02:22:36PM +0200, Janusz Dziemidowicz wrote: > 2016-04-14 12:05 GMT+02:00 Willy Tarreau : > > Hi David, > > > > On Wed, Apr 13, 2016 at 03:19:45PM -0500, David Martin wrote: > >> This is my first attempt at a patch, I'd love to get some feedback on this. > >> >

HAProxy 1.6, override for dns/NXdomains on parsing

2016-04-14 Thread Michel Belleau
Hi Baptiste. (cc: HAProxy mailing-list) I recently came across one of your posts from last year (http://permalink.gmane.org/gmane.comp.web.haproxy/22841) regarding how DNS records are resolved when loading new configuration values (either at parsing during initial startup, or on dynamic
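
For context, a minimal C sketch of what "resolving at parsing time" boils down to (an illustration, not HAProxy code): a one-shot libc lookup when the configuration is loaded, where an NXDOMAIN typically surfaces as EAI_NONAME and the loader has to decide whether that is fatal or whether the runtime resolvers take over later. The hostname below is a placeholder.

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *server_host = "backend.example.invalid";  /* hypothetical server address */
    struct addrinfo hints, *res = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(server_host, "80", &hints, &res);
    if (err != 0) {
        /* an NXDOMAIN answer usually maps to EAI_NONAME here */
        fprintf(stderr, "startup resolution failed: %s\n", gai_strerror(err));
        return 1;
    }
    freeaddrinfo(res);
    return 0;
}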

[no subject]

2016-04-14 Thread Michel Belleau

Quotation

2016-04-14 Thread info
Dear Seller, My name is Ryan Williams from Tiestlin Ventures (we are a trading company based in the United States). We are interested in your products. Our company is looking for a reliable supplier who can provide a long-term customer relationship and maintain our customer's specification items. Supplier

Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread David Martin
Here's a revised patch: it throws a fatal config error if SSL_CTX_set1_curves_list() fails. The default ecdhe option is used, so current configurations should not be impacted. Sorry Janusz, forgot the list on my reply. On Thu, Apr 14, 2016 at 10:37 AM, David Martin wrote: >
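
For readers following along, a hedged C sketch of the approach being discussed (not the submitted patch itself): SSL_CTX_set1_curves_list(), available since OpenSSL 1.0.2, returns 1 on success, and a failure is treated as fatal as described above; SSL_CTX_set_ecdh_auto() enables automatic curve selection on 1.0.2 and is a no-op macro from 1.1.0 onwards. The curve list below is just an example value.

#include <openssl/ssl.h>
#include <stdio.h>
#include <stdlib.h>

static void setup_ecdhe(SSL_CTX *ctx, const char *curves_list)
{
#if defined(SSL_CTX_set_ecdh_auto)
    SSL_CTX_set_ecdh_auto(ctx, 1);               /* let OpenSSL pick the curve */
#endif
    if (curves_list && SSL_CTX_set1_curves_list(ctx, curves_list) != 1) {
        fprintf(stderr, "unable to set curves list '%s'\n", curves_list);
        exit(1);                                 /* treat as a fatal config error */
    }
}

int main(void)
{
    SSL_library_init();
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    setup_ecdhe(ctx, "P-256:P-384");             /* example curve list */
    SSL_CTX_free(ctx);
    return 0;
}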

Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread Janusz Dziemidowicz
2016-04-14 12:05 GMT+02:00 Willy Tarreau : > Hi David, > > On Wed, Apr 13, 2016 at 03:19:45PM -0500, David Martin wrote: >> This is my first attempt at a patch, I'd love to get some feedback on this. >> >> Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2. >

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 01:54:27PM +0200, Daniel Schneller wrote: > Trying not to hijack the thread here, but it seems to fit well in the context: > > Does this mean that the following could happen due to the difference in > BSD/Linux SO_REUSEPORT: > > 1. haproxy process “A” binds say

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Daniel Schneller
Trying not to hijack the thread here, but it seems to fit well in the context: Does this mean that the following could happen due to the difference in BSD/Linux SO_REUSEPORT: 1. haproxy process “A” binds say port 1234 2. client A connects to 1234 and keeps the connection open 3.

Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread Willy Tarreau
Hi David, On Wed, Apr 13, 2016 at 03:19:45PM -0500, David Martin wrote: > This is my first attempt at a patch, I'd love to get some feedback on this. > > Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2. > From 05bee3e95e5969294998fb9e2794ef65ce5a6c1f Mon Sep 17

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 10:17:10AM +0200, Willy Tarreau wrote: > So I guess that indeed, if not all the processes a frontend is bound to > have a corresponding bind line, this can cause connection issues as some > incoming connections will be distributed to queues that nobody listens to. I said
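
A minimal Linux-only C sketch of the mechanism behind this (illustration only, not HAProxy code): every socket bound with SO_REUSEPORT gets its own accept queue and the kernel hashes incoming connections across all of them, so a queue that nobody accept()s from silently absorbs its share of connections. The port number is arbitrary.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int reuseport_listener(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in sin;

    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sin.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0 || listen(fd, 128) < 0) {
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* two accept queues on the same port; only the first is ever drained */
    int active = reuseport_listener(12345);
    int idle = reuseport_listener(12345);

    printf("listening twice on 127.0.0.1:12345 (fds %d and %d)\n", active, idle);
    for (;;) {
        /* connections hashed to 'idle' complete the TCP handshake but are
         * never handed to the application -- the queue nobody listens to */
        int conn = accept(active, NULL, NULL);
        if (conn >= 0)
            close(conn);
    }
}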

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 11:49:47AM +0200, Christian Ruppert wrote: > Yep, that did it. With this setting there is no more performance decrease on > the http bind. Thanks! > I'm just not sure if that will (negatively) affect anything else. It may, depending on your setup (eg: if some frontends can

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
Hi Christian, On Thu, Apr 14, 2016 at 11:06:02AM +0200, Christian Ruppert wrote: > I've applied your patch and I just looked at the performance so far. The > performance is still the same, so the lessperformant one is still less > performant than the moreperformant.cfg. So from the performance

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Christian Ruppert
Hi Willy, On 2016-04-14 10:17, Willy Tarreau wrote: On Thu, Apr 14, 2016 at 08:55:47AM +0200, Lukas Tribus wrote: Let me put it this way: frontend haproxy_test bind-process 1-8 bind :12345 process 1 bind :12345 process 2 bind :12345 process 3 bind :12345 process 4 Leads to 8 processes,

Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-14 Thread Smain Kahlouch
Is it planned to support backend configuration via the stats socket? 2016-04-13 16:09 GMT+02:00 Smain Kahlouch : > Ok, thank you, > I'll have a look at SmartStack. > > 2016-04-13 16:03 GMT+02:00 B. Heath Robinson : > >> SmartStack was mentioned earlier in

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 08:55:47AM +0200, Lukas Tribus wrote: > Let me put it this way: > > frontend haproxy_test > bind-process 1-8 > bind :12345 process 1 > bind :12345 process 2 > bind :12345 process 3 > bind :12345 process 4 > > > Leads to 8 processes, and the master process binds the

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Willy Tarreau
On Thu, Apr 14, 2016 at 08:55:47AM +0200, Lukas Tribus wrote: > Hi Willy, > > > Am 14.04.2016 um 07:08 schrieb Willy Tarreau: > >Hi Lukas, > > > >On Thu, Apr 14, 2016 at 12:14:15AM +0200, Lukas Tribus wrote: > >>For example, the following configuration load balances the traffic across > >>all 40

Re: nbproc 1 vs >1 performance

2016-04-14 Thread Lukas Tribus
Hi Willy, Am 14.04.2016 um 07:08 schrieb Willy Tarreau: Hi Lukas, On Thu, Apr 14, 2016 at 12:14:15AM +0200, Lukas Tribus wrote: For example, the following configuration load balances the traffic across all 40 processes, expected or not? frontend haproxy_test bind-process 1-40 bind