Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Luke Seelenbinder
Hi Willy, Aleks,

I will try the things suggested this afternoon (hopefully) or tomorrow and get 
back to you.

> At least if nginx does this it should send a GOAWAY
> frame indicating that it will stop after stream #2001.

That's my understanding as well (and the docs say as much). I assumed HAProxy 
would handle it properly as well, so perhaps something else nefarious is going 
on in our particular setup. There is still the possibility that the bug 
fixed by Aleks' patches regarding HTX & headers was causing this issue in a 
back-handed sort of way. I will apply those patches, establish that the headers 
bug is fixed, and then try the recommendations from this bug to rule out any 
interactions on that side (a badly written header in our situation could result 
in a 404, which seemed to be the worst user-facing case of this bug).

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

‐‐‐ Original Message ‐‐‐
On Tuesday, January 22, 2019 9:37 AM, Willy Tarreau  wrote:

> Hi Luke,
> 

> On Mon, Jan 21, 2019 at 09:30:39AM +, Luke Seelenbinder wrote:
> 

> > After enabling h2 backends (technically `server ... alpn h2,http/1.1`), we
> > began seeing a high number of backend /server/ connection resets. A
> > reasonable number of client-side connection resets due to timeouts, etc., is
> > normal, but the server connection resets were new.
> > I believe the root cause is that our backend servers are NGINX servers, 
> > which
> > by default have a 1000 request limit per h2 connection
> > (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests).
> > As far as I can tell there's no way to set this to unlimited. That resulted
> > in NGINX resetting the HAProxy backend connections and thus resulted in user
> > requests being dropped
> 

> That's rather strange. At least if nginx does this it should send a GOAWAY
> frame indicating that it will stop after stream #2001. We normally respect
> stream limits advertised by the server before deciding if a connection is
> still usable (but we could very well have a bug of course). If it only
> rejects new stream creation, that's extremely inefficient and unfriendly
> to clients, so I doubt it's doing something like this.
> 

> We'll need to run some interoperability tests on nginx to see what happens.
> It might indeed be that the only short-term solution would be to add an
> option to limit the total number of streams per connection. I don't see
> any value in doing something as gross, except working around some memory
> leak bugs, but we also need to be able to adapt to such servers.
> 

> Could you try h2load on your server to see if it reports errors ? Just
> use a single connection (-c 1) and a few streams (-m 10), and no more
> than 10k requests (-n 1). It could give us some hints about how it
> works and behaves.
> 

> Thanks,
> Willy





Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Willy Tarreau
Hi Luke,

On Mon, Jan 21, 2019 at 09:30:39AM +, Luke Seelenbinder wrote:
> After enabling h2 backends (technically `server ... alpn h2,http/1.1`), we
> began seeing a high number of backend /server/ connection resets. A
> reasonable number of client-side connection resets due to timeouts, etc., is
> normal, but the server connection resets were new.
> 
> I believe the root cause is that our backend servers are NGINX servers, which
> by default have a 1000 request limit per h2 connection
> (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests).
> As far as I can tell there's no way to set this to unlimited. That resulted
> in NGINX resetting the HAProxy backend connections and thus resulted in user
> requests being dropped

That's rather strange. At least if nginx does this it should send a GOAWAY
frame indicating that it will stop after stream #2001. We normally respect
stream limits advertised by the server before deciding if a connection is
still usable (but we could very well have a bug of course). If it only
rejects new stream creation, that's extremely inefficient and unfriendly
to clients, so I doubt it's doing something like this.
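
For reference, the reuse rule described above boils down to something like the
following (a simplified model for illustration only, not haproxy's actual data
structures or code):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the decision: once the server has sent GOAWAY
 * with a last-stream-id, no new stream above that id may be opened on
 * the connection, so it must not be reused for further requests. */
struct h2_conn_state {
    uint32_t next_stream_id;        /* next client-initiated stream id (odd)    */
    uint32_t goaway_last_stream_id; /* from GOAWAY; UINT32_MAX if none seen     */
    uint32_t max_concurrent;        /* server's SETTINGS_MAX_CONCURRENT_STREAMS */
    uint32_t open_streams;          /* streams currently open on the connection */
};

static bool h2_conn_reusable(const struct h2_conn_state *c)
{
    if (c->next_stream_id > c->goaway_last_stream_id)
        return false;               /* server asked us to stop after that id */
    if (c->open_streams >= c->max_concurrent)
        return false;               /* no room for another concurrent stream */
    return true;
}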

We'll need to run some interoperability tests on nginx to see what happens.
It might indeed be that the only short-term solution would be to add an
option to limit the total number of streams per connection. I don't see
any value in doing something as gross, except working around some memory
leak bugs, but we also need to be able to adapt to such servers.

Could you try h2load on your server to see if it reports errors ? Just
use a single connection (-c 1) and a few streams (-m 10), and no more
than 10k requests (-n 1). It could give us some hints about how it
works and behaves.

Thanks,
Willy



Re: Some test case for HTTP/2 failed, are those bugs?

2019-01-22 Thread Willy Tarreau
On Tue, Jan 22, 2019 at 09:32:22AM +0100, Lukas Tribus wrote:
> Hello,
> 
> 
> On Tue, 22 Jan 2019 at 03:04, 高和东  wrote:
> >
> > Dear willy:
> >
> > I am a follower of haproxy. I tested the HTTP/2 function in haproxy_1.8.17 
> > with the tool h2spec, but some test cases failed. I wonder if those are 
> > bugs in haproxy.
> > See the tool here https://github.com/summerwind/h2spec .
> 
> Version 1.9.2 should fix most of those issues:
> https://www.mail-archive.com/haproxy@formilux.org/msg32461.html

Yes definitely. The CONTINUATION and Trailers stuff cannot be backported
to 1.8 at all; it was made possible thanks to the new architecture. As for
the case reporting an unexpected DATA frame, I'm used to seeing it on
certain configs: it's a race that h2spec cannot detect due to response
time and/or bytes in flight. In short, h2spec sends an incorrect frame
and expects an error response, but there's already a response flowing
back and the error comes afterwards. h2spec cannot guess that and reports
an error. All h2spec reports have to be interpreted in their context, like
with any compliance tool. I'll check my configs, but from what I
remember the test config I'm using for h2spec uses http-buffer-request
and an 8kB-long error page, in order not to trigger false alarms in
h2spec, and with this I correctly get 0 errors on 1.9.2.

Willy



Re: Some test case for HTTP/2 failed, are those bugs?

2019-01-22 Thread Lukas Tribus
Hello,


On Tue, 22 Jan 2019 at 03:04, 高和东  wrote:
>
> Dear willy:
>
> I am a follower of haproxy. I tested the HTTP/2 function in haproxy_1.8.17 
> with the tool h2spec, but some test cases failed. I wonder if those are bugs 
> in haproxy.
> See the tool here https://github.com/summerwind/h2spec .

Version 1.9.2 should fix most of those issues:
https://www.mail-archive.com/haproxy@formilux.org/msg32461.html

Regards,
Lukas



Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Willy Tarreau
On Tue, Jan 22, 2019 at 09:42:53AM +, Luke Seelenbinder wrote:
> Hi Willy, Aleks,
> 
> I will try the things suggested this afternoon (hopefully) or tomorrow and 
> get back to you.
> 
> > At least if nginx does this it should send a GOAWAY
> > frame indicating that it will stop after stream #2001.
> 
> That's my understanding as well (and the docs say as much).

OK.

> I assumed HAProxy
> would properly handle it, as well, so perhaps it's something else nefarious
> going on in our particular setup.

Or we might have a bug there as well. I'll recheck the code just in case
I spot anything.

> There is still the possibility that the bug
> fixed by Aleks' patches regarding HTX & headers were causing this issue in a
> back-handed sort of way. I will apply those patches, establish that the
> headers bug is fixed, and then try the recommendations from this bug to rule
> out any interactions on that side (a badly written header in our situation
> could result in a 404, which seemed to be the worst user-facing case of this
> bug).

Sure, let's address one problem at a time :-)

Willy



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Emmanuel Hocdet


> On 21 Jan 2019 at 19:07, Dirkjan Bussink  wrote:
> 
> Hi Manu,
> 
>> On 21 Jan 2019, at 09:49, Emmanuel Hocdet  wrote:
>> 
>> BoringSSL does not have SSL_OP_NO_RENEGOTIATION and needs KeyUpdate to work.
>> As a workaround, SSL_OP_NO_RENEGOTIATION could be set to 0 in openssl-compat.h.
> 
> Hmm, then we will need a different #define though since we can’t rely on the 
> constant not being defined in that case to disable the logic. We would need a 
> separate way to detect this then. Is there a good example of this or should I 
> change the logic then to version checks instead? And how about LibreSSL in 
> that case?
> 
> Does BoringSSL need any of the logic in the first place? There’s not really 
> versions of it, so is the target there always current master or something 
> else? 
> 


No need to change anything: SSL_OP_NO_RENEGOTIATION is now in BoringSSL (thanks, Adam), 
and renegotiation is disabled by default there.
For LibreSSL: no TLSv1.3, no SSL_OP_NO_RENEGOTIATION.

++
Manu






Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Emeric Brun
Hi Willy,

On 1/21/19 6:38 PM, Dirkjan Bussink wrote:
> Hi Emeric,
> 
>> On 21 Jan 2019, at 08:06, Emeric Brun  wrote:
>>
>> Interesting, it would be good to skip the check using the same method.
>>
>> We must stay careful not to put the OP_NO_RENEG flag on the client part 
>> (when haproxy connects to the server), because reneg from the server is authorized,
>> but I think infocbk is called only on the frontend/accept side.
>>
>> so a patch which do:
>>
>> #ifdef  SSL_OP_NO_RENEGOTIATION
>> SSL_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
>> #endif
>>
>> without condition during init
>>
>> and adding #ifndef SSL_OP_NO_RENEGOTIATION around the CVE check should fix 
>> the issue mentioned about KeyUpdate and will fix the CVE the clean way if the 
>> version of openssl supports it.
> 
> I have implemented this and attached the patch for it. What do you think of 
> this approach? 
> 
> Cheers,
> 
> Dirkjan Bussink
> 

I think you can merge this

R,
Emeric



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Emeric Brun
Hi Willy,

On 1/21/19 6:38 PM, Dirkjan Bussink wrote:
> Hi Emeric,
> 
>> On 21 Jan 2019, at 08:06, Emeric Brun  wrote:
>>
>> Interesting, it would be good to skip the check using the same method.
>>
>> We must stay careful not to put the OP_NO_RENEG flag on the client part 
>> (when haproxy connects to the server), because reneg from the server is authorized,
>> but I think infocbk is called only on the frontend/accept side.
>>
>> so a patch which do:
>>
>> #ifdef  SSL_OP_NO_RENEGOTIATION
>> SSL_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
>> #endif
>>
>> without condition during init
>>
>> and adding #ifndef SSL_OP_NO_RENEGOTIATION around the CVE check should fix 
>> the issue mentioned about KeyUpdate and will fix the CVE the clean way if the 
>> version of openssl supports it.
> 
> I have implemented this and attached the patch for it. What do you think of 
> this approach? 
> 
> Cheers,
> 
> Dirkjan Bussink
> 
I think you can merge this.

Thx Dirkjan.

R,
Emeric



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Emmanuel Hocdet


> On 21 Jan 2019 at 19:31, Adam Langley  wrote:
> 
> On Mon, Jan 21, 2019 at 10:16 AM Dirkjan Bussink  wrote:
>> Ah ok, I recently added support in HAProxy to handle the new 
>> SSL_CTX_set_ciphersuites option since OpenSSL handles setting TLS 1.3 
>> ciphers separate from the regular ones. Are those things that BoringSSL 
>> would also want to adopt then?
> 
> SSL_CTX_set_ciphersuites is more than a compatibility hack like adding
> a dummy #define, and the considerations are more complex. I'm not sure
> that we want to allow TLS 1.3 ciphersuite to be configured: the
> excessive number of cipher suites prior to TLS 1.3 was a problem, as
> was the excessive diversity of configurations. Also, string-based APIs
> have historically been expensive because they prevent easy static
> analysis. So we could add a dummy SSL_CTX_set_ciphersuites that always
> returns zero, but most applications would probably take that to be a
> fatal error so that wouldn't be helpful. So SSL_CTX_set_ciphersuites
> might be a case where a #ifdef is the best answer. But we'll always
> think about such things if asked.
> 

I agree, no need for SSL_CTX_set_ciphersuites. If a security issue appears in a
cipher, I suppose BoringSSL will ship a fix by default.

> (If you happen to know, I would be curious who is using BoringSSL with 
> HAProxy.)
> 
We have been using BoringSSL in production for 1.5 years.

++
Manu




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:57, Tim Düsterhus wrote:

> Aleks,
> 
> On 22.01.19 at 20:50, Aleksandar Lazic wrote:
>> This means that the function in haproxy works but the check should be 
>> adapted to match both cases, right?
> 
> At least one should investigate what exactly is happening here (the
> differences between the libc is a guess) and possibly file a bug for
> either glibc or musl. I believe what musl is doing here is correct and
> thus glibc must be incorrect.
> 
> Consider filing a tracking bug in haproxy's issue tracker to verify
> where / who exactly is doing something wrong.

Done.
https://github.com/haproxy/haproxy/issues/23

>> Do you think that in general the alpine/musl is a good idea or should I stay 
>> on
>> centos as for my other images?
> 
> FWIW: There already is an Alpine image for haproxy in Docker Official
> Images:
> https://github.com/docker-library/haproxy/blob/master/1.9/alpine/Dockerfile

Yep, I know; that one uses OpenSSL. I was curious how difficult it is to run haproxy 
with boringssl.

Nevertheless, this Dockerfile has "only" 2 failed tests.


## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.904) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.804) exit=2
2 tests failed, 0 tests skipped, 31 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.3d3a039a"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) == 
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-26-25.BmFdCB/vtc.1383.06fe4e21"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
make: *** [Makefile:1102: reg-tests] Error 1


This matches your assumption that musl and glibc handle IPv6 differently.


> Personally I'm a Debian guy, for containers I prefer Debian based and
> CentOS / RHEL I don't use at all.

Interestingly, even the Debian-based image has failed tests

https://github.com/docker-library/haproxy/tree/master/1.9

But this could be a known bug that is already fixed in the current git

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.808) exit=2
1 tests failed, 0 tests skipped, 32 tests passed
## Gathering results ##
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
Makefile:1102: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1
+ egrep -r ^ /tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log 
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc ##
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log:## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/failedtests.log: c27.0 
EXPECT resp.http.mailsreceived (11) == "16" failed
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/INFO:Test case: 
./reg-tests/mailers/k_healthcheckmail.vtc
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/stats.sock" 
level admin mode 600
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:stats 
socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:global
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
lua-load /usr/src/haproxy/./reg-tests/mailers/k_healthcheckmail.lua
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:defaults
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
frontend femail
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
mode tcp
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
bind "fd@${femail}"
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:
tcp-request content use-service lua.mailservice
/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1/h1/cfg:

Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Adam Langley
On Tue, Jan 22, 2019 at 11:45 AM Aleksandar Lazic  wrote:
> Can it be reused to test a specific server like?
>
> ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443

Not easily: it drives the implementation under test by forking a
process and has quite a complex interface via command-line arguments.
(I.e. 
https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/test_config.h)

> or should be a small c/go program be used for that test?

You could easily tweak transport_common.cc to call SSL_key_update
before each SSL_write or so.


Cheers

AGL

-- 
Adam Langley a...@imperialviolet.org https://www.imperialviolet.org



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:54, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:45 AM Aleksandar Lazic  wrote:
>> Can it be reused to test a specific server like?
>>
>> ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443
> 
> Not easily: it drives the implementation under test by forking a
> process and has quite a complex interface via command-line arguments.
> (I.e. 
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/test_config.h)
> 
>> or should be a small c/go program be used for that test?
> 
> You could easily tweak transport_common.cc to call SSL_key_update
> before each SSL_write or so.

Great.

To be on the safe side, I would like to add the following lines

###
if (!SSL_key_update(ssl, SSL_KEY_UPDATE_NOT_REQUESTED)) {
  fprintf(stderr, "SSL_key_update failed.\n");
  return false;
}
###

before this line.

https://boringssl.googlesource.com/boringssl/+/master/tool/transport_common.cc#706

Sorry for my dumb question, I just want to be safe and not break something.

It would be nice to have the option '-key-update' in client.cc and server.cc.
Where can I put this feature request for boringssl?

That would make the test easy with this command.

`./tool/bssl s_client -key-update -connect $test-haproxy-instance `

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Adam Langley
On Tue, Jan 22, 2019 at 12:13 PM Aleksandar Lazic  wrote:
> Sorry for my dump question, I just want to be save not to break something.
>
> It would be nice to have the option '-key-update' in client.cc and server.cc
> where can I put this feature request for boringssl?
>
> That would be make the test easy with this command.
>
> `./tool/bssl s_client -key-update -connect $test-haproxy-instance `

bssl is just for human experimentation, it shouldn't be used in
something like a test because we break the interface from
time-to-time. (Also note that BoringSSL in general "is not intended
for general use, as OpenSSL is. We don't recommend that third parties
depend upon it." https://boringssl.googlesource.com/boringssl)

You may well be better off using OpenSSL for a test like that, or
perhaps writing a C/C++ program (which will probably work for either
OpenSSL or BoringSSL).
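
A minimal sketch of such a program could look like this (assuming OpenSSL >=
1.1.1 or BoringSSL; the host:port and request line are placeholders, and
certificate checking plus most error handling are omitted):

#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *target = argc > 1 ? argv[1] : "127.0.0.1:8443";
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    BIO *bio;
    SSL *ssl = NULL;
    char buf[4096];
    int n;

    /* KeyUpdate only exists in TLS 1.3, so require it for the test. */
    SSL_CTX_set_min_proto_version(ctx, TLS1_3_VERSION);

    bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, target);
    BIO_get_ssl(bio, &ssl);

    if (BIO_do_handshake(bio) <= 0) {
        fprintf(stderr, "handshake failed\n");
        return 1;
    }
    /* Queue a KeyUpdate; it is sent along with the next write. */
    if (SSL_key_update(ssl, SSL_KEY_UPDATE_REQUESTED) != 1) {
        fprintf(stderr, "SSL_key_update failed\n");
        return 1;
    }
    BIO_puts(bio, "GET / HTTP/1.1\r\nHost: test\r\nConnection: close\r\n\r\n");
    while ((n = BIO_read(bio, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}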


Cheers

AGL

-- 
Adam Langley a...@imperialviolet.org https://www.imperialviolet.org



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread PiBa-NL

Hi Aleksandar,

Just FYI.

On 22-1-2019 at 22:08, Aleksandar Lazic wrote:

But this could be a known bug that is already fixed in the current git

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.808) exit=2
1 tests failed, 0 tests skipped, 32 tests passed
## Gathering results ##
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_20-56-31.QMI0Ue/vtc.5740.39907fe1"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed


This was indeed identified as a bug, and is fixed in current master.

The impact of this was rather low though, and this specific issue of a 
few 'missing' mails under certain configuration circumstances existed 
for years before it was spotted with the regtest.


https://www.mail-archive.com/haproxy@formilux.org/msg32190.html
http://git.haproxy.org/?p=haproxy.git;a=commit;h=774c486cece942570b6a9d16afe236a16ee12079

Regards,
PiBa-NL (Pieter)




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 21:45, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 12:13 PM Aleksandar Lazic  wrote:
>> Sorry for my dump question, I just want to be save not to break something.
>>
>> It would be nice to have the option '-key-update' in client.cc and server.cc
>> where can I put this feature request for boringssl?
>>
>> That would be make the test easy with this command.
>>
>> `./tool/bssl s_client -key-update -connect $test-haproxy-instance `
> 
> bssl is just for human experimentation, it shouldn't be used in
> something like a test because we break the interface from
> time-to-time. (Also note that BoringSSL in general "is not intended
> for general use, as OpenSSL is. We don't recommend that third parties
> depend upon it." https://boringssl.googlesource.com/boringssl)

Yes, I have read it and was surprised, but it is what it is.

> You may well be better off using OpenSSL for a test like that, or
> perhaps writing a C/C++ program (which will probably work for either
> OpenSSL or BoringSSL).

Well, thanks.
Currently I have no time to look into this topic.

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Tim Düsterhus
Aleks,

On 22.01.19 at 20:50, Aleksandar Lazic wrote:
> This means that the function in haproxy works but the check should be adapted
> to match both cases, right?

At least one should investigate what exactly is happening here (the
difference between the libcs is a guess) and possibly file a bug for
either glibc or musl. I believe what musl is doing here is correct and
thus glibc must be incorrect.

Consider filing a tracking bug in haproxy's issue tracker to verify
where / who exactly is doing something wrong.

> Do you think that in general the alpine/musl is a good idea or should I stay 
> on
> centos as for my other images?

FWIW: There already is an Alpine image for haproxy in Docker Official
Images:
https://github.com/docker-library/haproxy/blob/master/1.9/alpine/Dockerfile

Personally I'm a Debian guy; for containers I prefer Debian-based images, and
I don't use CentOS / RHEL at all.

> Any Idea for the other failed tests?

No idea.

Best regards
Tim Düsterhus

> -
> ## Starting vtest ##
> Testing with haproxy version: 1.9.2
> #top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.859) exit=2
> #top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.739) exit=2
> #top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
> #top  TEST ./reg-tests/log/b0.vtc FAILED (10.001) signal=9
> #top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.752) exit=2
> 4 tests failed, 0 tests skipped, 29 tests passed
> ## Gathering results ##
> ## Test case: ./reg-tests/http-messaging/h2.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.7739e83e"
>  c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
> ## Test case: ./reg-tests/log/b0.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.2776263d"
> ## Test case: ./reg-tests/http-rules/h2.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.0900be1e"
>  s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
> "2001:db8:c001:c01a:0::10:0" failed
> ## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
> ## test results in: 
> "/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.506e5b2b"
>  c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
> -
> 
>> Best regards
>> Tim Düsterhus
> 
> Regards
> Aleks
> 



Re: HTX & tune.maxrewrite [1.9.2]

2019-01-22 Thread Luke Seelenbinder
Hi Christopher,

I can confirm the patches fixed the issue. Thanks again for fixing this up!

Best,
Luke


—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

‐‐‐ Original Message ‐‐‐
On Monday, January 21, 2019 2:07 PM, Christopher Faulet  
wrote:

> On 18/01/2019 at 14:23, Luke Seelenbinder wrote:
> 

> > Quick clarification on the previous message.
> > The code emitting the warning is almost assuredly here: 
> > https://github.com/haproxy/haproxy/blob/ed7a066b454f09fee07a9ffe480407884496461b/src/proto_htx.c#L3242
> >  not in proto_http.c, seeing how this is in htx mode not http mode.
> > I've traced the issue to likely being caused by the following condition 
> > being false:
> > https://github.com/haproxy/haproxy/blob/202c6ce1a27c92d21995ee82c71b2f70c636e3ea/src/htx.c#L93
> > We are dealing with a lot of larger responses (PNGs, 50-100KB/request on 
> > avg) with perhaps 10 simultaneous initial requests on the same h2 
> > connection being very common. That sounds like I may in fact need to tweak 
> > some buffer settings somewhere. In http/1.1 mode, these requests were 
> > spread out across four connections with browsers blocking until the 
> > previous connection finished.
> > The documentation is only somewhat helpful for tune.bufsize and 
> > tune.maxrewrite, http/2 and large requests. If this isn't a bug, would 
> > someone be willing to offer some guidance into good values for these buffer 
> > sizes?
> > Thanks for your help!
> > Best,
> > Luke
> 

> Hi Luke,
> 

> Could you try following patches please ?
> 

> Thanks,
> 

> --
> 

> Christopher Faulet





Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Luke Seelenbinder
Hi Willy,

I just confirmed the other patchset works, so I will start going down this 
road. :-)

While testing the other issue, I discovered something fascinating. Our 
application is typically used by clients that cancel requests with reasonable 
frequency (5+%). When zooming in and out of maps quickly, it's pretty common 
for map tiles to be requested and then canceled client-side, because they are 
no longer required (out of view, etc.).

There is a strong correlation between client connections canceling requests 
(resulting in an HTTP log termination state of CD--) and a whole string of 
requests immediately afterwards ending in server connection resets (SD--). 
I've included some example log lines below. This behavior causes no issues when 
HTX & h2 mode are disabled for backends (still using h2 on the frontend). If 
the additional information triggers any ideas, let me know; otherwise I'm 
starting down the list recommended by you and Aleks.

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/5/5/0 0/0 {} "GET 
/tiles/osm_bright/9/344/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/4/4/0 0/0 {} "GET 
/tiles/osm_bright/9/348/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/3/3/0 0/0 {} "GET 
/tiles/osm_bright/9/344/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/2/2/0 0/0 {} "GET 
/tiles/osm_bright/9/348/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/1/1/0 0/0 {} "GET 
/tiles/osm_bright/9/344/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/202/-1/266 -1 0 - - CD-- 2/1/0/0/0 0/0 {} "GET 
/tiles/osm_bright/9/348/1...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/456 -1 0 - - SD-- 2/1/5/5/0 0/0 {} "GET 
/tiles/osm_bright/10/690/3...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/456 -1 0 - - SD-- 2/1/4/4/0 0/0 {} "GET 
/tiles/osm_bright/10/695/3...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/456 -1 0 - - SD-- 2/1/3/3/0 0/0 {} "GET 
/tiles/osm_bright/10/690/3...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/454 -1 0 - - SD-- 2/1/2/2/0 0/0 {} "GET 
/tiles/osm_bright/10/695/3...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/454 -1 0 - - SD-- 2/1/1/1/0 0/0 {} "GET 
/tiles/osm_bright/10/690/3...@2x.png HTTP/2.0"
stadiamaps~ tile/tile1 0/0/0/-1/454 -1 0 - - SD-- 2/1/0/0/0 0/0 {} "GET 
/tiles/osm_bright/10/695/3...@2x.png HTTP/2.0"

‐‐‐ Original Message ‐‐‐
On Tuesday, January 22, 2019 11:33 AM, Willy Tarreau  wrote:

> On Tue, Jan 22, 2019 at 09:42:53AM +, Luke Seelenbinder wrote:
> 

> > Hi Willy, Aleks,
> > I will try the things suggested this afternoon (hopefully) or tomorrow and 
> > get back to you.
> > 

> > > At least if nginx does this it should send a GOAWAY
> > > frame indicating that it will stop after stream #2001.
> > 

> > That's my understanding as well (and the docs say as much).
> 

> OK.
> 

> > I assumed HAProxy
> > would properly handle it, as well, so perhaps it's something else nefarious
> > going on in our particular setup.
> 

> Or we might have a bug there as well. I'll recheck the code just in case
> I spot anything.
> 

> > There is still the possibility that the bug
> > fixed by Aleks' patches regarding HTX & headers were causing this issue in a
> > back-handed sort of way. I will apply those patches, establish that the
> > headers bug is fixed, and then try the recommendations from this bug to rule
> > out any interactions on that side (a badly written header in our situation
> > could result in a 404, which seemed to be the worst user-facing case of this
> > bug).
> 

> Sure, let's address one problem at a time :-)
> 

> Willy





Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Willy Tarreau
Hi guys,

On Tue, Jan 22, 2019 at 03:22:38PM +0100, Emeric Brun wrote:
> I think you can merge this.

OK. I still find it very fragile in that we usually don't distinguish
between an absent define and the same macro declared as zero, and
most SSL_OP_* entries are defined this way in ssl_sock.c, but I don't
see many other options here. I think that the #ifndef at least
deserves a comment indicating that it may also match a zero value used to
detect safe implementations, so that we are not tempted later to refactor
this and break BoringSSL.

We can also add a Reported-By to ack Adam's original work on the issue.

Just let me know if I need to adjust it myself or if anyone wants to take
care of it.

Thanks,
Willy



Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Willy Tarreau
On Tue, Jan 22, 2019 at 02:57:23PM +, Luke Seelenbinder wrote:
> There is a strong correlation between client connections canceling requests
> (resulting in a HTTP log string of CD--) and then a whole string of requests
> immediately after resulting in server connection resets (resulting in SD--).
> I've included some example log lines below. This behavior causes no issues
> when HTX & h2 mode are disabled for backends (still using h2 on the
> frontend). If the additional information triggers any ideas, let me know,
> otherwise I'm starting down the list recommended by you and Aleks.

Very interesting observation! I suspect we may face a situation where aborted
streams leave the connection in an unrecoverable state leading to GOAWAYs
being sent. Maybe we have excessive tests in the HPACK encoder resulting in
compression errors being emitted while in fact it's only the stream which is
dead. I see how to test this, I can try to investigate there.

Thanks!
Willy



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Willy Tarreau
Hi Dirkjan,

On Tue, Jan 22, 2019 at 11:07:07PM -0800, Dirkjan Bussink wrote:
> I have adjusted the patch to make it more robust and more match the style of
> how we use other options. How does this look to you?

Unfortunately it does introduce the problem I feared for BoringSSL :


+#if !defined(SSL_OP_NO_RENEGOTIATION) || SSL_OP_NO_RENEGOTIATION == 0
if (where & SSL_CB_HANDSHAKE_START) {
/* Disable renegotiation (CVE-2009-3555) */
if ((conn->flags & (CO_FL_CONNECTED | CO_FL_EARLY_SSL_HS | 
CO_FL_EARLY_DATA)) == CO_FL_CONNECTED) {
@@ -1475,6 +1476,7 @@ void ssl_sock_infocbk(const SSL *ssl, int where, int ret)
conn->err_code = CO_ER_SSL_RENEG;
}
}
+#endif


As you can see it will enable this code when SSL_OP_NO_RENEGOTIATION=0,
which is what BoringSSL does and it needs this code to be disabled. Thus
I think it's better to simply do this :

+#ifndef SSL_OP_NO_RENEGOTIATION
+   /* Please note that BoringSSL defines this macro to zero so don't
+    * change this to #if and do not assign a default value to this macro!
+    */
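
Put together, the intent is roughly the following (a simplified, self-contained
sketch for illustration; the function names are made up and this is not the
literal ssl_sock.c change):

#include <openssl/ssl.h>

static void ctx_disable_reneg(SSL_CTX *ctx)
{
#ifdef SSL_OP_NO_RENEGOTIATION
    /* Let the library refuse renegotiation itself when the option exists.
     * BoringSSL defines the macro to zero (renegotiation is already off
     * by default there), so this call is a harmless no-op with BoringSSL. */
    SSL_CTX_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
#else
    (void)ctx;
#endif
}

static void info_cbk_sketch(const SSL *ssl, int where, int ret)
{
    (void)ssl; (void)ret;
#ifndef SSL_OP_NO_RENEGOTIATION
    /* Manual CVE-2009-3555 workaround, kept only when the option does not
     * exist at all. This must stay #ifndef: with "#if" or a zero default,
     * BoringSSL's zero-valued macro would re-enable it and break TLS 1.3
     * KeyUpdate handling. */
    if (where & SSL_CB_HANDSHAKE_START) {
        /* flag the connection as in error and abort the renegotiation */
    }
#else
    (void)where;
#endif
}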

Thanks,
Willy



Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Willy Tarreau
Hi Luke,

I've placed an nginx instance behind my local haproxy dev config, and
found something which might explain what you're observing : the process
apparently leaks FDs and fails once in a while, causing 500s to be returned :

2019/01/23 08:22:13 [crit] 25508#0: *36705 open() 
"/usr/local/nginx/html/index.html" failed (24: Too many open files), client: 1>
2019/01/23 08:22:13 [crit] 25508#0: accept4() failed (24: Too many open files)

127.0.0.1 - - [23/Jan/2019:08:22:13 +0100] "GET / HTTP/2.0" 500 579 "-" 
"Mozilla/4.0 (compatible; MSIE 7.01; Windows)"

Here is how these are seen by haproxy :

127.0.0.1:47098 [23/Jan/2019:08:22:13.589] decrypt trace/ngx 0/0/0/0/0 500 701 
- -  1/1/0/0/0 0/0 "GET / HTTP/1.1"

And at this point the connection is closed and reopened for new requests.
There's never any GOAWAY sent.

I managed to work around the problem by limiting the total number of
requests per connection. I find this extremely dirty, but if it helps...
I just need to figure out how best to do it, so that we can use it for
H2 as well as for H1.
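
For what it's worth, the workaround itself is just a per-connection counter,
along these lines (illustrative names only; no such keyword exists in 1.9 yet):

#include <stdbool.h>

/* Count the requests sent over a backend connection and retire it once a
 * configured limit is reached. */
struct be_conn {
    unsigned int req_count;    /* requests already sent on this connection */
    unsigned int max_requests; /* configured limit, 0 meaning unlimited    */
};

static bool be_conn_take(struct be_conn *conn)
{
    if (conn->max_requests && conn->req_count >= conn->max_requests)
        return false;   /* stop reusing it and open a fresh connection */
    conn->req_count++;
    return true;
}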

Best regards,
Willy



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-22 Thread Dirkjan Bussink
Hi Willy,

> On 22 Jan 2019, at 07:07, Willy Tarreau  wrote:
> 
> Hi guys,
> 
> On Tue, Jan 22, 2019 at 03:22:38PM +0100, Emeric Brun wrote:
>> I think you can merge this.
> 
> OK. I still find it very fragile in that we usually don't make a
> difference between an absent define and the same declared as zero, and
> most SSL_OP_* entries are defined this way in ssl_sock.c, but I don't
> see that many other options here. I think that the #ifndef at least
> deserves a comment indicating that it may also match a zero value to
> detect safe implementations so that we are not tempted later to refactor
> this and break BoringSSL.
> 
> We can also add a Reported-By to ack Adam's original work on the issue.
> 
> Just let me know if I need to adjust it myself or if anyone wants to take
> care of it.

I have adjusted the patch to make it more robust and better match the style of 
how we use other options. How does this look to you?

Cheers,

Dirkjan



0001-BUG-MEDIUM-ssl-Fix-handling-of-TLS-1.3-KeyUpdate-mes.patch
Description: Binary data


Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 19:54, Aleksandar Lazic wrote:
> Cool, thanks.
> 
> Do have boringssl a similar tool like s_client?
> 
> I don't like to build openssl just for s_client call :-)

Answering my own question:

bssl is the boringssl tool command.

The open question is why the tests fail in the container.

> Regards
> Aleks
> 
> 
>  Original Message 
> From: Janusz Dziemidowicz 
> Sent: 22 January 2019 19:49:15 CET
> To: Aleksandar Lazic 
> CC: HAProxy 
> Subject: Re: haproxy 1.9.2 with boringssl
> 
> On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic  wrote:
>>
>> Hi.
>>
>> I have now build haproxy with boringssl and it looks quite good.
>>
>> Is it the recommended way to simply make a git clone without any branch or 
>> tag?
>> Does anyone know how the KeyUpdate can be tested?
> 
> openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
> Just type 'K' and press enter. If the server is broken then connection
> will be aborted.
> 
> www.github.com:443, currently broken:
> read R BLOCK
> K
> KEYUPDATE
> read R BLOCK
> read:errno=0
> 
> mail.google.com:443, working:
> read R BLOCK
> K
> KEYUPDATE
> 
> 
> 




Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Tim Düsterhus
Aleks,

On 22.01.19 at 19:38, Aleksandar Lazic wrote:
> ## test results in: 
> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
>  s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
> "2001:db8:c001:c01a:0::10:0" failed

The difference here is that the test expects an IPv6 address that's not
maximally compressed, while you get an IPv6 address that *is* maximally
compressed. I would guess that this is the difference in behaviour
between glibc and musl (as you are using an Alpine container).
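
A quick way to compare the two is to print what the local libc's inet_ntop()
produces for that address (this assumes the formatting ultimately goes through
inet_ntop, which is a guess on my part):

#include <arpa/inet.h>
#include <stdio.h>

/* Prints how the local libc renders the address from the failing
 * expectation; compare the output under glibc and under musl. */
int main(void)
{
    struct in6_addr a;
    char out[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET6, "2001:db8:c001:c01a:0:0:10:0", &a) != 1)
        return 1;
    if (!inet_ntop(AF_INET6, &a, out, sizeof(out)))
        return 1;
    printf("%s\n", out);
    return 0;
}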

Best regards
Tim Düsterhus



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:30, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 11:16 AM Aleksandar Lazic  wrote:
>> Agree that I get a 400 with this command.
>>
>> `echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`
> 
> (Note that "K" on its own line does not send a KeyUpdate message with
> BoringSSL's bssl tool. It just sends "K\n".)
> 
>> How does boringssl test if the KeyUpdate on a server works?
> 
> If you're asking how BoringSSL's internal tests exercise KeyUpdates
> then we maintain a fork of Go's TLS stack that is extensively modified
> to be able to generate a large variety of TLS patterns. That is used
> to exercise KeyUpdates in a number of ways:
> https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/runner/runner.go#2779

Thanks.

Can it be reused to test a specific server, like this?

ssl/test/runner/runner -test "KeyUpdate-ToServer" 127.0.0.1:8443

or should a small C/Go program be used for that test?

> Cheers
> 
> AGL

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Adam Langley
On Tue, Jan 22, 2019 at 11:16 AM Aleksandar Lazic  wrote:
> Agree that I get a 400 with this command.
>
> `echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`

(Note that "K" on its own line does not send a KeyUpdate message with
BoringSSL's bssl tool. It just sends "K\n".)

> How does boringssl test if the KeyUpdate on a server works?

If you're asking how BoringSSL's internal tests exercise KeyUpdates
then we maintain a fork of Go's TLS stack that is extensively modified
to be able to generate a large variety of TLS patterns. That is used
to exercise KeyUpdates in a number of ways:
https://boringssl.googlesource.com/boringssl/+/eadef4730e66f914d7b9cbb2f38ecf7989f992ed/ssl/test/runner/runner.go#2779


Cheers

AGL

-- 
Adam Langley a...@imperialviolet.org https://www.imperialviolet.org



Re: Automatic Redirect transformations using regex?

2019-01-22 Thread Bruno Henc

Hello Joao Guimaraes,


The following lines should accomplish what you described in your email:


    acl is_main_site hdr(Host) -i www.mysite.com mysite.com
    http-request set-var(req.scheme) str(https) if { ssl_fc }
    http-request set-var(req.scheme) str(http) if !{ ssl_fc }

    http-request redirect code 301 location 
%[var(req.scheme)]://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] 
if !is_main_site



Explained line by line, the ACL is_main_site prevents a redirect loop 
(www.mysite.com redirected to mysite.www or some other terrible 
monstrosity often found when dealing with redirects). I highly recommend 
thoroughly testing any redirect before deploying to production, as 
redirect loops are quite nasty to debug.



The second and third line define a variable req.scheme that is used to 
redirect either to http or https versions of a site. If you're doing 
HTTPS only, you can drop these two lines and hardcode the following line 
to redirect directly to HTTPS:



    acl is_main_site hdr(Host) -i www.mysite.com mysite.com
    http-request redirect code 301 location 
https://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] 
if !is_main_site


Please note that set-var requires haproxy 1.7 or any later version.

Also, if you are not performing SSL termination on the HAProxy instance 
doing the redirect, you will probably need to read a header value (most 
likely X-Forwarded-Proto) instead of using { ssl_fc } to correctly set 
the req.scheme variable (alternatively, you can use the header value 
directly by starting the redirect with %[hdr(X-Forwarded-Proto)]://).



Finally, the redirect itself can be explained:

  http-request redirect code 301 location 
%[var(req.scheme)]://%[req.hdr(Host),regsub(www.,,i),lower,field(2,'.')].mysite.com%[capture.req.uri] 
if !is_main_site



As explained above, this part sets the HTTP scheme to either http:// or 
https://.


This part strips the www. prefix (if present) from e.g. www.mysite.fr to 
leave only mysite.fr. The i flag in regsub means that a 
case-insensitive match is performed. If you need to match multiple 
patterns (e.g. pictures.mysite.fr), chain multiple regsub statements.


Lower simply turns everything lowercase.

Field does the magic in this redirect and splits the prepared header 
string by the separator '.' into a list (starting with index 1). We are 
only interested in the 2nd part, that is, the TLD. Please note that any 
insanity with ccTLDs (mysite.co.uk), multilevel subdomains 
(my.pictures.mysite.fr) or similar won't work with this redirect. If you 
need a redirect with general support for those, I recommend using 
reqirep. Alternatively, if you need to cover just one ccTLD, you can use 
regsub to replace .co.uk with .uk. Also, as Aleksandar Lazic mentioned 
in his reply, haproxy map files are an option. Map files might be more 
pleasant than reqirep if you need to handle something exotic.



capture.req.uri saves the whole URI (path + query string), so if you 
accessed mysite.fr/cute.php?cat the redirect would go to 
fr.mysite.com/cute.php?cat. If you just used path, you would lose the 
?cat query parameter at the end.



Hope this helps. My apologies for the longer email, but covering the 
general case of the problem requires mentioning the major caveats you 
might run into. It turns out rewriting URLs is a non-trivial (and rather 
not-fun) exercise.


Let me know if you have any questions.

Best regards,

Bruno Henc


On 1/21/19 11:40 PM, Joao Guimaraes wrote:

Hi Haproxy team!

I've been trying to figure out how to perform automatic redirects 
based on source URL transformations.


Basically I need the following redirect:

mysite.abc redirected to abc.mysite.com.


Note that mysite.abc is not fixed; it must apply to whatever abc wants to be.

Other examples:

mysite.fr  TO fr.mysite.com 
mysite.es  TO es.mysite.com 
mysite.us  TO us.mysite.com 
mysite.de  TO de.mysite.com 
mysite.uk  TO uk.mysite.com 


Thanks in advance!
Joao Guimaraes




Re: H2 Server Connection Resets (1.9.2)

2019-01-22 Thread Luke Seelenbinder
Hi Aleksandar,

Thanks for your tips.

> Do you have such info in the nginx log?
> 

> "http2 flood detected"

I did not find this in any of the logs from when the buggy configuration was 
deployed.

> Can you try to set some timeout values for `timeout http-keep-alive`

I do have this set already:

timeout http-keep-alive 3m

> Would you mind creating an issue for that if there isn't one already?

Can do!

> Isn't `unsigned int` enough?
> How many idle connections do you have, and for how long?

We cycle through idle connections pretty quickly, so I can certainly bump the 
NGINX limit. My issue is that we have a very real possibility of reusing a 
connection many thousands of times. We pretty consistently serve hundreds of 
req/s on some of the instances of this deployment, which means it has a lot of 
opportunity to keep a backend connection around. Thus, if there is any sort of 
upper limit on our backend server, it feels like we may very well hit that 
limit.

> Can you try to increase the max-requests to 20 in nginx

I can certainly try this. I'm not certain if that will entirely eliminate the 
issue, given my last paragraph. (Which makes me somewhat reluctant to put this 
back into production with the strong possibility of affecting user requests.)

> Just for my curiosity, have you seen any changes for your solution with the 
> htx
> /H2 e2e?

I wouldn't say we've seen any particular benefits of htx/h2 e2e simply because 
we only ran it for a few hours in one region. Once we have some bugs ironed 
out, I'll be able to better answer your question. :-) I expect to see better 
overall response times, since fast requests won't be blocked by slow requests 
(in theory).

Running h2 fe and h1.1 be has definitely made our solution more performant!

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

‐‐‐ Original Message ‐‐‐
On Monday, January 21, 2019 3:16 PM, Aleksandar Lazic  
wrote:

> Hi Luke.
> 

> On 21.01.2019 at 10:30, Luke Seelenbinder wrote:
> 

> > Hi all,
> > One more bug (or configuration hole) from our transition to 1.9.x using 
> > end-to-end h2 connections.
> > After enabling h2 backends (technically `server … alpn h2,http/1.1`), we 
> > began seeing a high number of backend /server/ connection resets. A 
> > reasonable number of client-side connection resets due to timeouts, etc., 
> > is normal, but the server connection resets were new.
> > I believe the root cause is that our backend servers are NGINX servers, 
> > which by default have a 1000 request limit per h2 connection 
> > (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests).
> >  As far as I can tell there's no way to set this to unlimited. That 
> > resulted in NGINX resetting the HAProxy backend connections and thus 
> > resulted in user requests being dropped or returning 404s (oddly enough; 
> > though this may be as a result of the outstanding bug related to header 
> > manipulation and HTX mode).
> 

> Do you have such info in the nginx log?
> 

> "http2 flood detected"
> 

> It's the message from this lines
> 

> https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2.c#L4517
> 

> > This wouldn't be a problem if one of the following were true:
> > 

> > -   HAProxy could limit the number of times it reused a connection
> 

> Can you try to set some timeout values for `timeout http-keep-alive`
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#timeout 
> http-keep-alive
> 

> I assume that this timeout could be helpful because of this block in the doc
> 

> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html
> 

>   - KAL : keep alive ("option http-keep-alive") which is the default mode 
> : all
> requests and responses are processed, and connections remain open but 
> idle
> between responses and new requests.
> 

> 

> and this code part
> 

> https://github.com/haproxy/haproxy/blob/v1.9.0/src/backend.c#L1164
> 

> > -   HAProxy could retry a failed request due to backend server connection 
> > reset (possibly coming in 2.0 with L7 retries?)
> 

> Would you mind creating an issue for that if there isn't one already?
> 

> > -   NGINX could set that limit to unlimited.
> 

> Isn't `unsigned int` enough?
> How many idle connections do you have, and for how long?
> 

> > Our http-reuse is set to aggressive, but that doesn't make much difference, 
> > I don't think, since safe would result in the same behavior (the connection 
> > is reusable…but only for a limited number of requests).
> > We've worked around this by only using h/1.1 on the backends, which isn't a 
> > big problem for us, but I thought I would raise the issue, since I'm sure a 
> > lot of folks are using haproxy <-> nginx pairings, and this is a bit of a 
> > subtle result of that in full h2 mode.
> 

> Can you try to increase the max-requests to 20 in nginx
> 

> The `max_requests` is defined as `ngx_uint_t` which is `unsigned int`
> 

> I 

Re: Automatic Redirect transformations using regex?

2019-01-22 Thread Aleksandar Lazic
On 21.01.2019 at 23:40, Joao Guimaraes wrote:
> Hi Haproxy team!
> 
> I've been trying to figure out how to perform automatic redirects based on
> source URL transformations. 
> 
> Basically I need the following redirect:
> 
> mysite.abc redirected to abc.mysite.com.

Maybe you can reuse the solution from reg-tests dir.

47 # redirect Host: example.org / subdomain.example.org
48 http-request redirect prefix
%[req.hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map)] code 301
if { hdr(Host),lower,regsub(:\d+$,,),map_str(${testdir}/h3.map) -m found }

This solution uses a map for redirect.

http://git.haproxy.org/?p=haproxy-1.9.git;a=blob;f=reg-tests/http-rules/h3.vtc;h=55bb2687d3abe02ee74eca5283e50b039d6d162e;hb=HEAD#l47

> Note that mysite.abc is not fixed, must apply to whatever abc wants to be.
> 
> Other examples:
> 
> mysite.fr TO fr.mysite.com
> mysite.es TO es.mysite.com
> mysite.us TO us.mysite.com
> mysite.de TO de.mysite.com
> mysite.uk TO uk.mysite.com
> 
> 
> Thanks in advance!
> Joao Guimaraes

Best regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Janusz Dziemidowicz
On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic  wrote:
>
> Hi.
>
> I have now build haproxy with boringssl and it looks quite good.
>
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?

openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
Just type 'K' and press enter. If the server is broken then connection
will be aborted.

www.github.com:443, currently broken:
read R BLOCK
K
KEYUPDATE
read R BLOCK
read:errno=0

mail.google.com:443, working:
read R BLOCK
K
KEYUPDATE


-- 
Janusz Dziemidowicz



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
On 22.01.2019 at 20:04, Adam Langley wrote:
> On Tue, Jan 22, 2019 at 10:54 AM Aleksandar Lazic  wrote:
>> Do have boringssl a similar tool like s_client?
> 
> BoringSSL builds tool/bssl (in the build directory), which is similar.
> However it doesn't have any magic inputs that can trigger a KeyUpdate
> message like OpenSSL's s_client.

Thanks.
The test was already running when I got your answer.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149540960

Agreed, I get a 400 with this command.

`echo 'K' | ./tool/bssl s_client -connect mail.google.com:443`

How does boringssl test if the KeyUpdate on a server works?

> Cheers
> 
> AGL

Regards
Aleks



haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Hi.

I have now build haproxy with boringssl and it looks quite good.

Is the recommended way simply to make a git clone without any branch or tag?
Does anyone know how the KeyUpdate can be tested?

###
HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1
USE_THREAD=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_TFO=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : BoringSSL
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTX        side=FE|BE
  h2 : mode=HTTP       side=FE
   <default> : mode=HTX        side=FE|BE
   <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
###

I also wanted to run the reg-tests but they fail.

https://gitlab.com/aleks001/haproxy-19-boringssl/-/jobs/149523589

-
...
+ cd /usr/src/haproxy
+ VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy
make reg-tests
...
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.856) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.742) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.008) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.745) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.357fd753"
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.477fdc0b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.7aab2925"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
make: *** [Makefile:1102: reg-tests] Error 1
-
###

Has anyone tried to run the tests in a containerized environment?

Regards
Aleks



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Cool, thanks.

Does boringssl have a similar tool to s_client?

I don't like to build openssl just for the s_client call :-)

Regards
Aleks


 Original Message 
From: Janusz Dziemidowicz 
Sent: 22 January 2019 19:49:15 CET
To: Aleksandar Lazic 
CC: HAProxy 
Subject: Re: haproxy 1.9.2 with boringssl

On Tue, 22 Jan 2019 at 19:40, Aleksandar Lazic  wrote:
>
> Hi.
>
> I have now build haproxy with boringssl and it looks quite good.
>
> Is it the recommended way to simply make a git clone without any branch or 
> tag?
> Does anyone know how the KeyUpdate can be tested?

openssl s_client -connect HOST:PORT (openssl >= 1.1.1)
Just type 'K' and press enter. If the server is broken then connection
will be aborted.

www.github.com:443, currently broken:
read R BLOCK
K
KEYUPDATE
read R BLOCK
read:errno=0

mail.google.com:443, working:
read R BLOCK
K
KEYUPDATE





Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Adam Langley
On Tue, Jan 22, 2019 at 10:54 AM Aleksandar Lazic  wrote:
> Do have boringssl a similar tool like s_client?

BoringSSL builds tool/bssl (in the build directory), which is similar.
However it doesn't have any magic inputs that can trigger a KeyUpdate
message like OpenSSL's s_client.


Cheers

AGL



Re: haproxy 1.9.2 with boringssl

2019-01-22 Thread Aleksandar Lazic
Tim.

On 22.01.2019 at 20:26, Tim Düsterhus wrote:
> Aleks,
> 
> On 22.01.19 at 19:38, Aleksandar Lazic wrote:
>> ## test results in: 
>> "/tmp/haregtests-2019-01-22_18-28-24.aBghMD/vtc.3398.76167f9e"
>>  s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
>> "2001:db8:c001:c01a:0::10:0" failed
> 
> The difference here is that the test expects an IPv6 address that's not
> maximally compressed, while you get a IPv6 address that *is* maximally
> compressed. I would guess that this is the difference in behaviour
> between glibc and musl (as you are using an Alpine container).

Ah, that explains this error.

This means that the function in haproxy works but the check should be adapted to
match both cases, right?

Do you think that alpine/musl is a good idea in general, or should I stay on
centos as for my other images?

Any idea about the other failed tests?

-
## Starting vtest ##
Testing with haproxy version: 1.9.2
#top  TEST ./reg-tests/http-rules/h2.vtc FAILED (0.859) exit=2
#top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.739) exit=2
#top  TEST ./reg-tests/log/b0.vtc TIMED OUT (kill -9)
#top  TEST ./reg-tests/log/b0.vtc FAILED (10.001) signal=9
#top  TEST ./reg-tests/http-messaging/h2.vtc FAILED (0.752) exit=2
4 tests failed, 0 tests skipped, 29 tests passed
## Gathering results ##
## Test case: ./reg-tests/http-messaging/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.7739e83e"
 c1h2  0.0 Wrong frame type HEADERS (1) wanted WINDOW_UPDATE
## Test case: ./reg-tests/log/b0.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.2776263d"
## Test case: ./reg-tests/http-rules/h2.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.0900be1e"
 s10.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a:::10:0) ==
"2001:db8:c001:c01a:0::10:0" failed
## Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ##
## test results in: 
"/tmp/haregtests-2019-01-22_19-34-55.EKMMnc/vtc.3399.506e5b2b"
 c27.0 EXPECT resp.http.mailsreceived (11) == "16" failed
-

> Best regards
> Tim Düsterhus

Regards
Aleks