Re: Multiple balance statements in a backend

2020-04-03 Thread Igor Cicimov
On Fri, Apr 3, 2020 at 11:23 PM Willy Tarreau  wrote:

> On Fri, Apr 03, 2020 at 09:38:58PM +1100, Igor Cicimov wrote:
> > >> And in general, how are duplicate statements handled in the code,
> > >> i.e. is the first one or the last one considered valid, and are
> there
> > >> maybe any special statements that are exempt from the rule (like
> hopefully
> > >> balance :-) )
>
> And just to clarify this point, with balance like most exclusive
> directives, the last one overrides the previous ones. There's a
> reason for this that's easy to remember: the values are first
> pre-initialized from the defaults section's values, so each keyword
> needs to be able to override any previous occurrence.
>
> Willy
>

Got it, thanks Willy.

Igor


Re: haproxy 2.0.14 failing to bind peer sockets

2020-04-03 Thread Willy Tarreau
On Fri, Apr 03, 2020 at 02:27:05PM +0200, Willy Tarreau wrote:
> On Thu, Apr 02, 2020 at 12:32:32PM -0700, James Brown wrote:
> > I reverted that commit, but it doesn't appear to have fixed the issue.
> > 
> > I also tried adding a stick-table using this peers group to my config (this
> > test cluster didn't actually have any stick-tables), but it still fails at
> > startup with the same error.
> 
> James, just to confirm, does it fail to start from a cold start or only
> on reloads ?

I'm trying with this config and this command:

    global
        stats socket /tmp/sock1 mode 666 level admin expose-fd listeners
        stats timeout 1d

    peers p
        peer peer1 127.0.0.1:8521
        peer peer2 127.0.0.1:8522

    listen l
        mode http
        bind 127.0.0.1:2501
        timeout client 10s
        timeout server 10s
        timeout connect 10s
        stick-table size 200 expire 10s type ip peers p store server_id
        stick on src
        server s 127.0.0.1:8000

    $ ./haproxy -D -L peer1 -f peers.cfg -p /tmp/haproxy.pid
    $ ./haproxy -D -L peer1 -f peers.cfg -p /tmp/haproxy.pid -sf $(pidof haproxy) -x /tmp/sock1
    $ ./haproxy -D -L peer1 -f peers.cfg -p /tmp/haproxy.pid -sf $(pidof haproxy) -x /tmp/sock1
    $ ./haproxy -D -L peer1 -f peers.cfg -p /tmp/haproxy.pid -sf $(pidof haproxy) -x /tmp/sock1

For now I can't figure out how to reproduce it :-/ If you manage to modify
this config to trigger the issue, that would be great!

Willy



Re: haproxy 2.0.14 failing to bind peer sockets

2020-04-03 Thread Willy Tarreau
On Thu, Apr 02, 2020 at 12:32:32PM -0700, James Brown wrote:
> I reverted that commit, but it doesn't appear to have fixed the issue.
> 
> I also tried adding a stick-table using this peers group to my config (this
> test cluster didn't actually have any stick-tables), but it still fails at
> startup with the same error.

James, just to confirm, does it fail to start from a cold start or only
on reloads ?

Willy



Re: Multiple balance statements in a backend

2020-04-03 Thread Willy Tarreau
On Fri, Apr 03, 2020 at 09:38:58PM +1100, Igor Cicimov wrote:
> >> And in general, how are duplicate statements handled in the code,
> >> i.e. is the first one or the last one considered valid, and are there
> >> maybe any special statements that are exempt from the rule (like hopefully
> >> balance :-) )

And just to clarify this point, with balance like most exclusive
directives, the last one overrides the previous ones. There's a
reason for this that's easy to remember: the values are first
pre-initialized from the defaults section's values, so each keyword
needs to be able to override any previous occurrence.
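As a hypothetical sketch (section and server names are made up), the backend below ends up using roundrobin: the last `balance` keyword seen overrides both the earlier occurrence and the value inherited from the defaults section:

```haproxy
defaults
    balance leastconn          # pre-initializes every backend

backend app
    balance hdr(Authorization) # overridden below
    balance roundrobin         # last one seen: this is what the backend uses
    server s1 192.0.2.10:80
```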

Willy



Re: [PATCH] add DEBUG_STRICT to travis, upgrade openssl to 1.1.1f

2020-04-03 Thread Willy Tarreau
On Fri, Apr 03, 2020 at 12:34:31AM +0500, Илья Шипицин wrote:
> Hello,
> 
> the patch is urgent.
> openssl has changed its download path; I guess it was done on purpose (to
> signal people that they are downloading an outdated openssl)
> 
> so ... we need to upgrade to 1.1.1f

Both patches applied, thanks Ilya. By the way, thanks for having
started to improve your commit messages, it's pleasant to find a
bit more info there now :-)

Willy



Re: [PATCH] CI: minor cleanup on SSL linking

2020-04-03 Thread Willy Tarreau
On Thu, Apr 02, 2020 at 11:46:58PM +0500, Илья Шипицин wrote:
> Hello,
> 
> this PR cleans up SSL linking.
> It is well aligned with the "how to link to custom openssl" documentation.

It's indeed cleaner, thanks!
Willy



Re: regtest: abns should work now :-)

2020-04-03 Thread Илья Шипицин
On Fri, Apr 3, 2020 at 16:56, Илья Шипицин wrote:

>
>
> On Fri, Apr 3, 2020 at 16:33, Martin Grigorov wrote:
>
>> Hi everyone,
>>
>> On Mon, Mar 23, 2020 at 11:11 AM Martin Grigorov 
>> wrote:
>>
>>> Hi Илья,
>>>
>>> On Mon, Mar 23, 2020 at 10:52 AM Илья Шипицин 
>>> wrote:
>>>
 well, I tried to reproduce the abns failures on x86_64.
 I chose an MS Azure VM of a completely different size, both in number of
 CPUs and RAM.
 It was never reproduced, say over 1000 executions in a loop.

 so, I decided "it looks like something with memory alignment".
 I also tried to run arm64 emulation on VirtualBox. No luck yet.

>>>
>>>
>> 
>>
>>
>>> Have you tried with multiarch Docker ?
>>>
>>> 1) execute
>>> docker run --rm --privileged multiarch/qemu-user-static:register --reset
>>> to register QEMU
>>>
>>> 2) create Dockerfile
>>> for Centos use: FROM multiarch/centos:7-aarch64-clean
>>> for Ubuntu use: FROM multiarch/ubuntu-core:arm64-bionic
>>>
>>> 3) enjoy :-)
>>>
>>
>> Here is a PR for Varnish Cache project where I use Docker + QEMU to build
>> and package for several Linux distros and two architectures:
>> https://github.com/varnishcache/varnish-cache/pull/3263
>> They use CircleCI but I guess the same approach can be applied on GitHub
>> Actions.
>> If you are interested in this approach I could give it a try.
>>
>
> I tried custom docker images in Github Actions.
>
> some parts of the GitHub runner are executed inside the container; for
> example, it breaks CentOS 6:
> https://github.com/actions/runner/issues/337
>


here's the corresponding workflow:
https://github.com/chipitsine/haproxy/commit/20fabcd005dc9e3bac54a84bf44631f177fa79c2


>
> however, I was able to run Fedora Rawhide.
>
> if that works, why not ?
> if you get it working on CircleCI, I do not mind. CircleCI is nice.
>
>
>>
>>
>> Regards,
>> Martin
>>
>>
>>>
>>>

 On Mon, Mar 23, 2020 at 13:43, Willy Tarreau wrote:

> Hi Ilya,
>
> I think this time I managed to fix the ABNS test. To make a long story
> short, it was by design extremely sensitive to the new process's
> startup
> time, which is increased with larger FD counts and/or less powerful VMs
> and/or noisy neighbors. This explains why it started to misbehave with
> the commit which relaxed the maxconn limitations. A starting process
> stealing a few ms of CPU from the old one could make its keep-alive
> timeout expire before it got a new request on a reused connection,
> resulting in an empty response as reported by the client.
>
> I'm going to issue dev5 now. s390x is currently down but all x86 ones
> build and run fine for now.
>
> Cheers,
> Willy
>



Re: regtest: abns should work now :-)

2020-04-03 Thread Илья Шипицин
On Fri, Apr 3, 2020 at 16:33, Martin Grigorov wrote:

> Hi everyone,
>
> On Mon, Mar 23, 2020 at 11:11 AM Martin Grigorov 
> wrote:
>
>> Hi Илья,
>>
>> On Mon, Mar 23, 2020 at 10:52 AM Илья Шипицин 
>> wrote:
>>
>>> well, I tried to reproduce the abns failures on x86_64.
>>> I chose an MS Azure VM of a completely different size, both in number of
>>> CPUs and RAM.
>>> it was never reproduced, say over 1000 executions in a loop.
>>>
>>> so, I decided "it looks like something with memory alignment".
>>> I also tried to run arm64 emulation on VirtualBox. no luck yet.
>>>
>>
>>
> 
>
>
>> Have you tried with multiarch Docker ?
>>
>> 1) execute
>> docker run --rm --privileged multiarch/qemu-user-static:register --reset
>> to register QEMU
>>
>> 2) create Dockerfile
>> for Centos use: FROM multiarch/centos:7-aarch64-clean
>> for Ubuntu use: FROM multiarch/ubuntu-core:arm64-bionic
>>
>> 3) enjoy :-)
>>
>
> Here is a PR for Varnish Cache project where I use Docker + QEMU to build
> and package for several Linux distros and two architectures:
> https://github.com/varnishcache/varnish-cache/pull/3263
> They use CircleCI but I guess the same approach can be applied on GitHub
> Actions.
> If you are interested in this approach I could give it a try.
>

I tried custom docker images in GitHub Actions.

some parts of the GitHub runner are executed inside the container; for
example, it breaks CentOS 6:
https://github.com/actions/runner/issues/337

however, I was able to run Fedora Rawhide.

if that works, why not ?
if you get it working on CircleCI, I do not mind. CircleCI is nice.
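As an illustration only (the workflow keys and package list below are my assumptions, not taken from the linked workflow), a GitHub Actions job running inside a Fedora Rawhide container might look like:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container: fedora:rawhide      # job steps run inside this container
    steps:
      - uses: actions/checkout@v2
      - name: Install toolchain
        run: dnf install -y gcc make openssl-devel pcre-devel
      - name: Build
        run: make -j$(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_PCRE=1
```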


>
>
> Regards,
> Martin
>
>
>>
>>
>>>
>>> On Mon, Mar 23, 2020 at 13:43, Willy Tarreau wrote:
>>>
 Hi Ilya,

 I think this time I managed to fix the ABNS test. To make a long story
 short, it was by design extremely sensitive to the new process's startup
 time, which is increased with larger FD counts and/or less powerful VMs
 and/or noisy neighbors. This explains why it started to misbehave with
 the commit which relaxed the maxconn limitations. A starting process
 stealing a few ms of CPU from the old one could make its keep-alive
 timeout expire before it got a new request on a reused connection,
 resulting in an empty response as reported by the client.

 I'm going to issue dev5 now. s390x is currently down but all x86 ones
 build and run fine for now.

 Cheers,
 Willy

>>>


Re: regtest: abns should work now :-)

2020-04-03 Thread Martin Grigorov
Hi everyone,

On Mon, Mar 23, 2020 at 11:11 AM Martin Grigorov 
wrote:

> Hi Илья,
>
> On Mon, Mar 23, 2020 at 10:52 AM Илья Шипицин 
> wrote:
>
>> well, I tried to reproduce the abns failures on x86_64.
>> I chose an MS Azure VM of a completely different size, both in number of
>> CPUs and RAM.
>> it was never reproduced, say over 1000 executions in a loop.
>>
>> so, I decided "it looks like something with memory alignment".
>> I also tried to run arm64 emulation on VirtualBox. no luck yet.
>>
>
>



> Have you tried with multiarch Docker ?
>
> 1) execute
> docker run --rm --privileged multiarch/qemu-user-static:register --reset
> to register QEMU
>
> 2) create Dockerfile
> for Centos use: FROM multiarch/centos:7-aarch64-clean
> for Ubuntu use: FROM multiarch/ubuntu-core:arm64-bionic
>
> 3) enjoy :-)
>

Here is a PR for Varnish Cache project where I use Docker + QEMU to build
and package for several Linux distros and two architectures:
https://github.com/varnishcache/varnish-cache/pull/3263
They use CircleCI but I guess the same approach can be applied on GitHub
Actions.
If you are interested in this approach I could give it a try.
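A sketch of steps 1 and 2 combined (the RUN line and package names are my assumption of what a HAProxy build would need, not part of the original recipe):

```dockerfile
# Step 1 (run once on the host, registers the QEMU binfmt handlers):
#   docker run --rm --privileged multiarch/qemu-user-static:register --reset

# Step 2: an aarch64 CentOS 7 build image
FROM multiarch/centos:7-aarch64-clean
RUN yum install -y gcc make openssl-devel pcre-devel zlib-devel
WORKDIR /build
```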


Regards,
Martin


>
>
>>
>> On Mon, Mar 23, 2020 at 13:43, Willy Tarreau wrote:
>>
>>> Hi Ilya,
>>>
>>> I think this time I managed to fix the ABNS test. To make a long story
>>> short, it was by design extremely sensitive to the new process's startup
>>> time, which is increased with larger FD counts and/or less powerful VMs
>>> and/or noisy neighbors. This explains why it started to misbehave with
>>> the commit which relaxed the maxconn limitations. A starting process
>>> stealing a few ms of CPU from the old one could make its keep-alive
>>> timeout expire before it got a new request on a reused connection,
>>> resulting in an empty response as reported by the client.
>>>
>>> I'm going to issue dev5 now. s390x is currently down but all x86 ones
>>> build and run fine for now.
>>>
>>> Cheers,
>>> Willy
>>>
>>


Re: Multiple balance statements in a backend

2020-04-03 Thread Igor Cicimov
Hi Baptiste,

On Fri, Apr 3, 2020 at 5:28 PM Baptiste  wrote:

>
>
> On Fri, Apr 3, 2020 at 5:21 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> Hi all,
>>
>> Probably another quite basic question that I can't find an example of in
>> the docs (at least as a warning not to do it because it does not make
>> sense or is bad practice) or on the net. It is regarding the usage of
>> multiple balance statements in a backend, like this:
>>
>> balance leastconn
>> balance hdr(Authorization)
>>
>> So basically, is this a valid use case where we can expect both options to
>> be considered when load balancing, or is one ignored as a duplicate (in
>> which case which one)?
>>
>> And in general, how are duplicate statements handled in the code,
>> i.e. is the first one or the last one considered valid, and are there
>> maybe any special statements that are exempt from the rule (like hopefully
>> balance :-) )
>>
>> Thanks in advance.
>>
>> Igor
>>
>>
>
> Hi Igor,
>
> duplicate statement processing depends on the keyword: very few are
> cumulative, and most of them use "last found match".
>
> To come back to the original point, you already have a way to get 2 LB
> algorithms: if you do 'balance hdr(Authorization)' and no Authorization
> header can be found, then HAProxy falls back to a round robin mode.
> Now, if you need persistence, I think you can enable "balance leastconn"
> and then use a stick table to route known Authorization headers to the
> right server.
> More information here:
>
> https://www.haproxy.com/fr/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
>
> Baptiste
>

Thanks for confirming this, great stuff!

Cheers,
Igor


Re: TLV problem after updating to 2.1.14

2020-04-03 Thread Tim Düsterhus
Hativ,

Am 03.04.20 um 00:38 schrieb Hativ:
> Any ideas what's wrong?
> 

I would assume that this patch changed the behavior there:
https://github.com/haproxy/haproxy/commit/7f26391bc51ad56c31480d03f56e1db604f1c617

Can you try reverting that to check whether it is the cause?

Best regards
Tim Düsterhus



Re: [PATCH] MINOR: ssl: skip self issued CA in cert chain for ssl_ctx

2020-04-03 Thread Emmanuel Hocdet


> On 31 March 2020 at 18:40, William Lallemand wrote:
> 
> On Thu, Mar 26, 2020 at 06:29:48PM +0100, William Lallemand wrote:
>> 
>> After some thinking and discussing with people involved in this part of
>> HAProxy, I'm not feeling very comfortable with setting this behavior by
>> default; on top of that, the next version is an LTS, so it's not a good
>> idea to change this behavior yet. I think in most cases it won't be a
>> problem, but it would be better if it were enabled by an option in the
>> global section.
>> 
> 
> Hi Manu,
> 
> Could you take a look at this? Because I already merged your first
> patch, so if we don't do anything about it we may revert it before the
> release.
> 
> Thanks a lot!

Hi William,

It's really safe, because a self-issued CA is the end of the X509 chain by
definition, but yes, it changes the behaviour.
Why not an option in the global section.

++
Manu




[RFC] Consistent Hashing for Replica Sharding

2020-04-03 Thread Dario Di Pasquale
Hi! I am writing on behalf of Immobiliare.it, an Italian company that is a
leader in the real-estate services and advertising market; we use HAProxy
almost exclusively for our load balancing. In particular, we are using a
patched version of HAProxy to balance requests to our cache servers.

Long story short, we set up an improved version of the replicated, sharded
cache pattern: each server maintains both its own cache entries and a subset
of the entries of the other servers. Doing that, we ensure that a request for
an entity can be sent either to its server (the one selected by HAProxy's
consistent hashing algorithm) or to its second server (the next server in the
tree after the one chosen by the consistent hashing algorithm). This way, all
the entries are still present if a single server crashes, and when many
servers crash, only a small subset of the entries is lost.

To do that, we created a new consistent hashing algorithm (consistent-2x)
that, once it has found the server that has to serve the request, also looks
for another server in the tree (which should reside on a different host) and
sends the request either to the first or to the second in a random fashion,
still considering server weights.
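A rough sketch of the idea in Python (my own illustration; the names, the virtual-node count, and the weighting details are assumptions, not the actual patch):

```python
import bisect
import hashlib
import random


def _hash(key: str) -> int:
    # Stable hash so the ring layout is reproducible across runs
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHash2x:
    """Toy "consistent-2x" ring: a key maps to a primary node and to the next
    node on a *different* host; requests are split between the two by weight."""

    def __init__(self, nodes):
        # nodes: iterable of (name, host, weight)
        self.nodes = {name: (host, weight) for name, host, weight in nodes}
        self.ring = sorted(
            (_hash(f"{name}#{v}"), name)
            for name, (_host, weight) in self.nodes.items()
            for v in range(weight * 10)  # weight -> number of virtual nodes
        )

    def pick(self, key: str, rng=random):
        i = bisect.bisect(self.ring, (_hash(key), "")) % len(self.ring)
        primary = self.ring[i][1]
        # Walk the ring for the next node hosted on a different machine
        replica = primary
        for j in range(1, len(self.ring)):
            cand = self.ring[(i + j) % len(self.ring)][1]
            if self.nodes[cand][0] != self.nodes[primary][0]:
                replica = cand
                break
        # Randomly send to primary or replica, proportionally to their weights
        w1 = self.nodes[primary][1]
        w2 = self.nodes[replica][1]
        return primary if rng.random() < w1 / (w1 + w2) else replica
```

Losing the primary only loses the share of traffic it served, since its replica already holds a copy of the hot entries.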


The whole implementation of the above-mentioned algorithm requires the
following changes:

- the tree we build is not driven by weights but is static, so if a server
  crashes HAProxy does not re-balance requests (the consistent-2x algorithm
  does this); weights are used instead to choose one of the two selected
  servers;
- servers' IDs should follow a given pattern: servers on the same machine
  should give the same result when their IDs are divided by 1000;
- the consistent-2x algorithm itself.

We are looking forward to hearing from you. We hope our contribution can be
useful to as many people as possible.


Best,

Dario Di Pasquale

Immobiliare.it



Re: Multiple balance statements in a backend

2020-04-03 Thread Baptiste
On Fri, Apr 3, 2020 at 5:21 AM Igor Cicimov 
wrote:

> Hi all,
>
> Probably another quite basic question that I can't find an example of in
> the docs (at least as a warning not to do it because it does not make sense
> or is bad practice) or on the net. It is regarding the usage of multiple
> balance statements in a backend, like this:
>
> balance leastconn
> balance hdr(Authorization)
>
> So basically, is this a valid use case where we can expect both options to
> be considered when load balancing, or is one ignored as a duplicate (in
> which case which one)?
>
> And in general, how are duplicate statements handled in the code,
> i.e. is the first one or the last one considered valid, and are there
> maybe any special statements that are exempt from the rule (like hopefully
> balance :-) )
>
> Thanks in advance.
>
> Igor
>
>

Hi Igor,

duplicate statement processing depends on the keyword: very few are
cumulative, and most of them use "last found match".

To come back to the original point, you already have a way to get 2 LB
algorithms: if you do 'balance hdr(Authorization)' and no Authorization
header can be found, then HAProxy falls back to a round robin mode.
Now, if you need persistence, I think you can enable "balance leastconn"
and then use a stick table to route known Authorization headers to the
right server.
More information here:
https://www.haproxy.com/fr/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
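A minimal sketch of that suggestion (the backend name, table sizing, and server addresses are made up for illustration):

```haproxy
backend app
    balance leastconn
    # remember which server each Authorization value was routed to
    stick-table type string len 64 size 100k expire 30m
    stick on req.hdr(Authorization)
    server s1 192.0.2.10:80
    server s2 192.0.2.11:80
```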

Baptiste