Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-21 Thread Willy Tarreau
On Tue, Jan 22, 2019 at 12:36:41AM +0100, PiBa-NL wrote:
> The regtest works for me as well with this patch, without needing the
> 'timeout mail' setting.
> 
> I think we can call it fixed once committed.

OK so I've now merged it in dev and 1.9.

Thanks!
Willy



Some test case for HTTP/2 failed, are those bugs?

2019-01-21 Thread 高和东
Dear Willy,

I am a follower of haproxy. I tested the HTTP/2 function in haproxy 1.8.17 with
the tool h2spec, but some test cases failed. I wonder whether these are bugs in
haproxy.
See the tool here: https://github.com/summerwind/h2spec


Those failed cases are as follows:
gaohd@host:~/.golang/gopath/src/github.com/summerwind/h2spec$ ./h2spec http2 -h www.axddos.com -p 443 -t -k
Failures:

Generic tests for HTTP/2 server
  3. Frame Definitions
    3.10. CONTINUATION
      × 1: Sends a CONTINUATION frame
        -> The endpoint MUST accept CONTINUATION frame.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed
      × 2: Sends multiple CONTINUATION frames
        -> The endpoint MUST accept multiple CONTINUATION frames.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed

  4. HTTP Message Exchanges
    × 4: Sends a POST request with trailers
      -> The endpoint MUST respond to the request.
         Expected: HEADERS Frame (stream_id:1)
           Actual: Connection closed

Hypertext Transfer Protocol Version 2 (HTTP/2)
  4. HTTP Frames
    4.2. Frame Size
      × 3: Sends a large size HEADERS frame that exceeds the SETTINGS_MAX_FRAME_SIZE
        -> The endpoint MUST respond with a connection error of type FRAME_SIZE_ERROR.
           Expected: GOAWAY Frame (Error Code: FRAME_SIZE_ERROR)
                     Connection closed
             Actual: DATA Frame (length:624, flags:0x01, stream_id:1)

  5. Streams and Multiplexing
    5.1. Stream States
      × 13: closed: Sends a CONTINUATION frame
        -> The endpoint MUST treat this as a connection error of type STREAM_CLOSED.
           Expected: GOAWAY Frame (Error Code: STREAM_CLOSED)
                     GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                     Connection closed
             Actual: RST_STREAM Frame (length:4, flags:0x00, stream_id:1)

  6. Frame Definitions
    6.10. CONTINUATION
      × 1: Sends multiple CONTINUATION frames preceded by a HEADERS frame
        -> The endpoint must accept the frame.
           Expected: HEADERS Frame (stream_id:1)
             Actual: Connection closed
      × 4: Sends a CONTINUATION frame preceded by a HEADERS frame with END_HEADERS flag
        -> The endpoint MUST respond with a connection error of type PROTOCOL_ERROR.
           Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                     Connection closed
             Actual: RST_STREAM Frame (length:4, flags:0x00, stream_id:1)

  8. HTTP Message Exchanges
    8.1. HTTP Request/Response Exchange
      8.1.2. HTTP Header Fields
        8.1.2.6. Malformed Requests and Responses
          × 1: Sends a HEADERS frame with the "content-length" header field which does not equal the DATA frame payload length
            -> The endpoint MUST treat this as a stream error of type PROTOCOL_ERROR.
               Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                         RST_STREAM Frame (Error Code: PROTOCOL_ERROR)
                         Connection closed
                 Actual: DATA Frame (length:182, flags:0x01, stream_id:1)
          × 2: Sends a HEADERS frame with the "content-length" header field which does not equal the sum of the multiple DATA frames payload length
            -> The endpoint MUST treat this as a stream error of type PROTOCOL_ERROR.
               Expected: GOAWAY Frame (Error Code: PROTOCOL_ERROR)
                         RST_STREAM Frame (Error Code: PROTOCOL_ERROR)
                         Connection closed
                 Actual: DATA Frame (length:182, flags:0x01, stream_id:1)

HPACK: Header Compression for HTTP/2
  6. Binary Format
    6.3. Dynamic Table Size Update
      × 1: Sends a dynamic table size update larger than the value of SETTINGS_HEADER_TABLE_SIZE
        -> The endpoint MUST treat this as a decoding error.
           Expected: GOAWAY Frame (Error Code: COMPRESSION_ERROR)
                     Connection closed
             Actual: DATA Frame (length:624, flags:0x01, stream_id:1)

Finished in 18.9586 seconds
145 tests, 135 passed, 0 skipped, 10 failed

And haproxy info:
./haproxy -vv
HA-Proxy version 1.8.17 2019/01/08
Copyright 2000-2019 Willy Tarreau 


Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-format-truncation -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1 USE_TFO=1 USE_NS=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200


Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
Running on OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Encrypted password support via

Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-21 Thread PiBa-NL

Hi Christopher,
On 21-1-2019 at 15:28, Christopher Faulet wrote:


Hi Pieter,

About the timing issue, could you try the following patch please ? 
With it, I can run the regtest about email alerts without any error.


Thanks,
--
Christopher Faulet


The regtest works for me as well with this patch, without needing the
'timeout mail' setting.


I think we can call it fixed once committed.

Thanks,
PiBa-NL (Pieter)




Automatic Redirect transformations using regex?

2019-01-21 Thread Joao Guimaraes
Hi Haproxy team!

I've been trying to figure out how to perform automatic redirects based on
source URL transformations.

Basically I need the following redirect:

mysite.abc redirected to abc.mysite.com

Note that mysite.abc is not fixed; the rule must apply to whatever abc happens to be.

Other examples:

mysite.fr TO fr.mysite.com
mysite.es TO es.mysite.com
mysite.us TO us.mysite.com
mysite.de TO de.mysite.com
mysite.uk TO uk.mysite.com
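
Not an authoritative answer, but one way this is often done is with the
regsub converter in a log-format redirect rule. A hedged sketch (untested;
the frontend name and port are illustrative, and it assumes a HAProxy
version that has the regsub converter, i.e. 1.7 or later):

```
frontend fe_main
    bind :80
    # Strip the leading "mysite." from the Host header to recover the TLD
    # (e.g. "mysite.fr" -> "fr"), then rebuild the target host around it.
    http-request redirect location http://%[req.hdr(host),regsub(^mysite\.,,)].mysite.com%[path] code 301 if { hdr_beg(host) -i mysite. }
```

The hdr_beg(host) condition keeps the rule from matching again once the
browser lands on the rewritten <tld>.mysite.com host.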


Thanks in advance!
Joao Guimaraes


Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Adam Langley
On Mon, Jan 21, 2019 at 10:16 AM Dirkjan Bussink  wrote:
> Ah ok, I recently added support in HAProxy to handle the new
> SSL_CTX_set_ciphersuites option, since OpenSSL handles setting TLS 1.3 ciphers
> separately from the regular ones. Are those things that BoringSSL would also
> want to adopt then?

SSL_CTX_set_ciphersuites is more than a compatibility hack like adding
a dummy #define, and the considerations are more complex. I'm not sure
that we want to allow TLS 1.3 ciphersuites to be configured: the
excessive number of cipher suites prior to TLS 1.3 was a problem, as
was the excessive diversity of configurations. Also, string-based APIs
have historically been expensive because they prevent easy static
analysis. So we could add a dummy SSL_CTX_set_ciphersuites that always
returns zero, but most applications would probably take that to be a
fatal error, so that wouldn't be helpful. So SSL_CTX_set_ciphersuites
might be a case where an #ifdef is the best answer. But we'll always
think about such things if asked.

(If you happen to know, I would be curious who is using BoringSSL with HAProxy.)


Cheers

AGL

-- 
Adam Langley a...@imperialviolet.org https://www.imperialviolet.org



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Dirkjan Bussink
Hi Adam,

> On 21 Jan 2019, at 10:09, Adam Langley  wrote:
> 
> HAProxy isn't a user that we have on our radar, but BoringSSL dislikes
> pushing compatibility hacks into downstream projects. (You can always
> ask for these things to be included in BoringSSL instead.)

Ah ok, I recently added support in HAProxy to handle the new
SSL_CTX_set_ciphersuites option, since OpenSSL handles setting TLS 1.3 ciphers
separately from the regular ones. Are those things that BoringSSL would also
want to adopt then?

Cheers,

Dirkjan Bussink


Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Adam Langley
On Mon, Jan 21, 2019 at 9:49 AM Emmanuel Hocdet  wrote:
> Boringssl does not have SSL_OP_NO_RENEGOTIATION and needs KeyUpdate to work.
> As a workaround, SSL_OP_NO_RENEGOTIATION could be defined to 0 in openssl-compat.h.

HAProxy isn't a user that we have on our radar, but BoringSSL dislikes
pushing compatibility hacks into downstream projects. (You can always
ask for these things to be included in BoringSSL instead.)

Thus I've just added SSL_OP_NO_RENEGOTIATION as a no-op to BoringSSL:
https://boringssl.googlesource.com/boringssl/+/20f4a043eb8c148d044777b849a1cc1e0168b9ee

(Renegotiation is disabled by default in BoringSSL already. Also,
there's only the current version of BoringSSL so no need to wait for
any releases.)


Cheers

AGL

-- 
Adam Langley a...@imperialviolet.org https://www.imperialviolet.org



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Dirkjan Bussink
Hi Manu,

> On 21 Jan 2019, at 09:49, Emmanuel Hocdet  wrote:
> 
> Boringssl does not have SSL_OP_NO_RENEGOTIATION and needs KeyUpdate to work.
> As a workaround, SSL_OP_NO_RENEGOTIATION could be defined to 0 in openssl-compat.h.

Hmm, then we will need a different #define though, since we can't rely on the
constant not being defined in that case to disable the logic. We would need a
separate way to detect this. Is there a good example of this, or should I
change the logic to version checks instead? And what about LibreSSL in that
case?

Does BoringSSL need any of the logic in the first place? There aren't really
versions of it, so is the target there always current master, or something else?

Cheers,

Dirkjan


Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Emmanuel Hocdet
Hi,

> On 21 Jan 2019, at 17:06, Emeric Brun wrote:
> 
> Interesting, it would be good to skip the check using the same method.
> 
> We must stay careful not to put the OP_NO_RENEG flag on the client part (when
> haproxy connects to a server), because reneg from the server is authorized,
> but I think infocbk is called only on the frontend/accept side.
> 
> So a patch which does:
> 
> #ifdef  SSL_OP_NO_RENEGOTIATION
> SSL_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
> #endif
> 
> unconditionally during init,
> 
> and adds #ifndef SSL_OP_NO_RENEGOTIATION around the CVE check, should fix
> the issue mentioned about keyupdate and will fix the CVE the clean way
> if the openssl version supports it.
> 

Boringssl does not have SSL_OP_NO_RENEGOTIATION and needs KeyUpdate to work.
As a workaround, SSL_OP_NO_RENEGOTIATION could be defined to 0 in openssl-compat.h.

++
Manu







Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Dirkjan Bussink
Hi Emeric,

> On 21 Jan 2019, at 08:06, Emeric Brun  wrote:
> 
> Interesting, it would be good to skip the check using the same method.
> 
> We must stay careful not to put the OP_NO_RENEG flag on the client part (when
> haproxy connects to a server), because reneg from the server is authorized,
> but I think infocbk is called only on the frontend/accept side.
> 
> So a patch which does:
> 
> #ifdef  SSL_OP_NO_RENEGOTIATION
> SSL_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
> #endif
> 
> unconditionally during init,
> 
> and adds #ifndef SSL_OP_NO_RENEGOTIATION around the CVE check, should fix
> the issue mentioned about keyupdate and will fix the CVE the clean way
> if the openssl version supports it.

I have implemented this and attached the patch for it. What do you think of 
this approach? 

Cheers,

Dirkjan Bussink



0001-BUG-MEDIUM-ssl-Fix-handling-of-TLS-1.3-KeyUpdate-mes.patch
Description: Binary data


Re: Seamless reloads: file descriptors utilization in LUA

2019-01-21 Thread William Lallemand
Hello,

On Mon, Jan 21, 2019 at 06:53:12AM +0300, Wert wrote:
> Hi,
> 
> I'm talking only about the performance side.
> 
> About the socket:
> I use UDP for sending, so there is no reason for delays.
> However, my bad - I misunderstood some FDs in "lsof". It is not related to
> that UDP sending; that part is OK.
> 
> About the file system:
> I open a file from disk for GeoIP, but it ends up cached in memory. I don't
> think there would be a difference if I moved it to a ramdisk.
> 
> Example for Redis (works same as for GeoIP):
> 1) luarocks install redis-lua
> 2) At the top in lua-file (or inside core.register_init()):
> redisClient = pcall(redis.connect,'unix:///run/redis/redis.sock')
> 3) For cfg just "lua-load" param should be enough.
> 
> For each reload "lsof" would show one additional unix-socket on master and 
> worker.
> 
> As I understand it:
> - The Lua initialization executes in the master process and creates an FD.
> - During a reload, the "re-executed" master process keeps the old FD and gets
> one more from the new Lua init.
> - Workers inherit everything from the master.

To avoid FD leaks during a reload of the master process, those FDs must be
marked FD_CLOEXEC. I don't know how you are opening these files, but you should
try to do a fcntl() on the FD.
 
-- 
William Lallemand



Re: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Willy Tarreau
On Mon, Jan 21, 2019 at 04:37:32PM +, Ben Shillito wrote:
> Hi Willy,
> 
> Ah yes, thanks, I missed the S first time reading it.
> 
> There are actually a couple of things I'd like to check over a bit more
> thoroughly like the caching used in 51d.c, so it will probably be more like
> tomorrow.

No problem. We'll probably issue another 1.9 on Wednesday. If you can
have something by then it could be backported into the next 1.9,
otherwise it can still wait a bit. It's been waiting for more than one
year, we're not in a hurry anymore :-)

Thanks,
Willy



RE: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Ben Shillito
Hi Willy,

Ah yes, thanks, I missed the S first time reading it.

There are actually a couple of things I'd like to check over a bit more 
thoroughly like the caching used in 51d.c, so it will probably be more like 
tomorrow.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 21 January 2019 16:06
To: Ben Shillito 
Cc: haproxy@formilux.org
Subject: Re: Does anyone *really* use 51d or WURFL ?

On Mon, Jan 21, 2019 at 04:00:13PM +, Ben Shillito wrote:
> Hi Willy,
>
> I agree, setting the flag from the HAProxy USE_THREADS is probably the
> neatest solution.

Yep. Be careful, it's "USE_THREAD" (without trailing S).

> I will get a patch over to you later on today.

Fine, no emergency though. I prefer that you take the time to verify that it 
works before submitting :-)

Thanks,
Willy
This email and any attachments are confidential and may also be privileged. If 
you are not the named recipient, please notify the sender immediately and do 
not disclose, use, store or copy the information contained herein. This is an 
email from 51Degrees.mobi Limited, 5 Charlotte Close, Reading. RG47BY. T: +44 
118 328 7152; E: i...@51degrees.com; 51Degrees.mobi Limited t/as 51Degrees.



Re: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Willy Tarreau
Hi Patrick,

On Mon, Jan 21, 2019 at 10:54:17AM -0500, Patrick Hemmer wrote:
> We do use 51Degrees at my place of employment. However a couple of
> caveats in that statement.

Great, thank you for the feedback!

> One is that we're still running on 1.7.

No problem.

> We'll
> likely be upgrading to 1.9 soon, but still a couple months out.

That's definitely a reasonable approach.

> The other caveat is that we run with threading disabled. Until the
> statement on 'nbthread' that "THREADS SUPPORT IN HAPROXY IS HIGHLY
> EXPERIMENTAL" goes away, we'll be leaving it off.

Ah, good catch, that's always the problem when placing statements like
this in the doc at a place that has no chance of being revisited! I'll
remove it for -dev and 1.9. It was totally true at the time 1.8 was
released, but now thread-specific issues are much less common than
other ones.

I consider 1.8 safe now regarding threads, however they're suboptimal
and the scalability isn't that great above ~4 threads. I should
possibly revisit the statement there as well to reflect this.

Thanks!
Willy



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Emeric Brun
On 1/21/19 3:37 PM, Dirkjan Bussink wrote:
> Hi all,
> 
>> On 21 Jan 2019, at 02:01, Emeric Brun  wrote:
>>
>> Is there a way to check that this is a keyupdate message which triggers the
>> callback (and not another)?
> 
> Sadly there is not. I had taken a look at the OpenSSL code and it triggers 
> the callback without any additional information available at 
> https://github.com/openssl/openssl/blob/OpenSSL_1_1_1a/ssl/statem/statem_srvr.c#L4059.
>  
> 
>> And what happens for natural I/O operations: do they return something when
>> receiving the message, or is this completely hidden by openssl (i.e. what
>> does SSL_read/write return, for instance, when a keyupdate has just been
>> received)?
> 
> This is completely hidden by OpenSSL; as a library user you don't see that
> this message happened, as it's transparently handled.
> 
> I think the other option to consider is that, since this logic was added,
> OpenSSL has gotten support for flags to control renegotiation and prevent
> insecure renegotiation, so the question is whether we still need this check at
> all. I think it can be removed (which is also what nginx did in the commit
> that Janusz mentioned in another email in this thread). From the commit
> message at
> https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c:
> 
> --- 
> 
> Following 7319:dcab86115261, as long as SSL_OP_NO_RENEGOTIATION is
> defined, it is OpenSSL library responsibility to prevent renegotiation,
> so the checks are meaningless.
> 
> Additionally, with TLSv1.3 OpenSSL tends to report SSL_CB_HANDSHAKE_START
> at various unexpected moments - notably, on KeyUpdate messages and
> when sending tickets. This change prevents unexpected connection
> close on KeyUpdate messages and when finishing handshake with upcoming
> early data changes.
> 
> --- 

Interesting, it would be good to skip the check using the same method.

We must stay careful not to put the OP_NO_RENEG flag on the client part (when
haproxy connects to a server), because reneg from the server is authorized,
but I think infocbk is called only on the frontend/accept side.

So a patch which does:

#ifdef  SSL_OP_NO_RENEGOTIATION
SSL_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
#endif

unconditionally during init,

and adds #ifndef SSL_OP_NO_RENEGOTIATION around the CVE check, should fix
the issue mentioned about keyupdate and will fix the CVE the clean way
if the openssl version supports it.

R,
Emeric



Re: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Willy Tarreau
On Mon, Jan 21, 2019 at 04:00:13PM +, Ben Shillito wrote:
> Hi Willy,
> 
> I agree, setting the flag from the HAProxy USE_THREADS is probably the
> neatest solution.

Yep. Be careful, it's "USE_THREAD" (without trailing S).

> I will get a patch over to you later on today.

Fine, no emergency though. I prefer that you take the time to verify
that it works before submitting :-)

Thanks,
Willy



RE: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Ben Shillito
Hi Willy,

I agree, setting the flag from the HAProxy USE_THREADS is probably the neatest 
solution.

I will get a patch over to you later on today.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 21 January 2019 15:44
To: Ben Shillito 
Cc: haproxy@formilux.org
Subject: Re: Does anyone *really* use 51d or WURFL ?

Hi Ben,

First, thanks for your quick response.

On Mon, Jan 21, 2019 at 03:05:08PM +, Ben Shillito wrote:
> Hi Willy,
>
> I'd like to point out that the 51Degrees API does in fact support
> multi-threaded operation by default. The HAProxy makefile however,
> explicitly uses the FIFTYONEDEGREES_NO_THREADING compile option to
> disable this when building 
> (https://github.com/haproxy/haproxy/blob/master/Makefile#L740).

OK!

> Multi-threaded operation in the API is tested under maximum load
> before every release as can be seen here:
> https://github.com/51Degrees/Device-Detection/blob/master/VisualStudio
> /UnitTests/Performance/Base.cs#L89
>
> Multi-threaded operation is also stress tested in conjunction with
> many threads detecting User-Agents, and another reloading the data
> file repeatedly as seen here:
> https://github.com/51Degrees/Device-Detection/blob/master/examples/Rel
> oadFromFile.c

Oh, rest assured that I have no doubt about the API's support itself, what I 
mean is that 51d.c in haproxy is not thread-compatible right now (at least due 
to the makefile entry above and to the test in 51d.c) while haproxy is built 
with threading on by default, and I find it suspicious that nobody noticed 
after more than one year. But it's very possible that for now nobody is 
interested in deploying it with threads.

At the moment I'm concerned by the fact that in the whole project there remains 
exactly two files which are incompatible with thread support and as you can 
imagine it's always a pain to have to deal with exceptions like this.

> If multi-threaded support is an issue, then it is a trivial change to
> the makefile to align the HAProxy build to the 51Degrees default, and
> give the option to disable threading for those who require that.

Excellent! What I can suggest then is to set this define only when USE_THREAD 
is not set. This way you can make sure that 51d.c and the rest of haproxy are 
always aligned regarding thread support.

Feel free to send a patch, I'll happily merge it and even backport it so that
we don't have to worry anymore about the compatibility between threads and
various options.

Thanks!
Willy



Re: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Patrick Hemmer


On 2019/1/21 09:36, Willy Tarreau wrote:
> Hi all,
>
> recently it was figured that the buffer API changes caused some breakage
> to da.c and 51d.c (both fixed since), I don't know if wurfl builds at all
> by the way since the last update to the module is its introduction more
> than 2 years ago. But more importantly I'm realizing that neither 51d nor
> wurfl will start with threads enabled. This has been the case for about
> 15 months now, while we've had threads enabled on all relevant operating
> systems.
>
> Thus it leads me to the (possibly wrong) conclusion that absolutely
> nobody ever uses these options. Am I wrong ? I'm asking because each
> time we perform some API changes it's always a pain to go through these
> which don't build out of the box without external dependencies, and I
> suspect we're simply wasting our time and we should get rid of them
> if nobody uses them (or at the very least move them to contrib/).
>
> But if anyone is using them and disables threads for this, then it's
> fine, but in this case it would be nice if these two would check the
> thread safety of their code so that the thread restriction can be
> removed for the benefit of their users.
>
> Please advise.
>
> Thanks,
> Willy
>

We do use 51Degrees at my place of employment. However a couple of
caveats in that statement. One is that we're still running on 1.7. We'll
likely be upgrading to 1.9 soon, but still a couple months out.

The other caveat is that we run with threading disabled. Until the
statement on 'nbthread' that "THREADS SUPPORT IN HAPROXY IS HIGHLY
EXPERIMENTAL" goes away, we'll be leaving it off.

-Patrick


Re: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Willy Tarreau
Hi Ben,

First, thanks for your quick response.

On Mon, Jan 21, 2019 at 03:05:08PM +, Ben Shillito wrote:
> Hi Willy,
> 
> I'd like to point out that the 51Degrees API does in fact support
> multi-threaded operation by default. The HAProxy makefile however, explicitly
> uses the FIFTYONEDEGREES_NO_THREADING compile option to disable this when
> building (https://github.com/haproxy/haproxy/blob/master/Makefile#L740).

OK!

> Multi-threaded operation in the API is tested under maximum load before every
> release as can be seen here:
> https://github.com/51Degrees/Device-Detection/blob/master/VisualStudio/UnitTests/Performance/Base.cs#L89
> 
> Multi-threaded operation is also stress tested in conjunction with many
> threads detecting User-Agents, and another reloading the data file repeatedly
> as seen here:
> https://github.com/51Degrees/Device-Detection/blob/master/examples/ReloadFromFile.c

Oh, rest assured that I have no doubt about the API's support itself,
what I mean is that 51d.c in haproxy is not thread-compatible right now
(at least due to the makefile entry above and to the test in 51d.c) while
haproxy is built with threading on by default, and I find it suspicious
that nobody noticed after more than one year. But it's very possible that
for now nobody is interested in deploying it with threads.

At the moment I'm concerned by the fact that in the whole project there
remains exactly two files which are incompatible with thread support and
as you can imagine it's always a pain to have to deal with exceptions
like this.

> If multi-threaded support is an issue, then it is a trivial change to the
> makefile to align the HAProxy build to the 51Degrees default, and give the
> option to disable threading for those who require that.

Excellent! What I can suggest then is to set this define only when
USE_THREAD is not set. This way you can make sure that 51d.c and the
rest of haproxy are always aligned regarding thread support.
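
As a sketch, the kind of Makefile change being suggested might look like this
(variable names illustrative; the real Makefile wiring may differ):

```make
# Only pass FIFTYONEDEGREES_NO_THREADING when haproxy itself is built
# without thread support (USE_THREAD unset), so 51d.c and the rest of
# haproxy stay aligned on threading.
ifeq ($(USE_THREAD),)
OPTIONS_CFLAGS += -DFIFTYONEDEGREES_NO_THREADING
endif
```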

Feel free to send a patch, I'll happily merge it and even backport
it so that we don't have to worry anymore about the compatibility
between threads and various options.

Thanks!
Willy



RE: Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Ben Shillito
Hi Willy,

I'd like to point out that the 51Degrees API does in fact support 
multi-threaded operation by default. The HAProxy makefile however, explicitly 
uses the FIFTYONEDEGREES_NO_THREADING compile option to disable this when 
building (https://github.com/haproxy/haproxy/blob/master/Makefile#L740).

Multi-threaded operation in the API is tested under maximum load before every 
release as can be seen here: 
https://github.com/51Degrees/Device-Detection/blob/master/VisualStudio/UnitTests/Performance/Base.cs#L89

Multi-threaded operation is also stress tested in conjunction with many threads
detecting User-Agents, and another reloading the data file repeatedly as seen 
here: 
https://github.com/51Degrees/Device-Detection/blob/master/examples/ReloadFromFile.c

If multi-threaded support is an issue, then it is a trivial change to the 
makefile to align the HAProxy build to the 51Degrees default, and give the 
option to disable threading for those who require that.

Regards,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 21 January 2019 14:36
To: haproxy@formilux.org
Subject: Does anyone *really* use 51d or WURFL ?

Hi all,

recently it was figured that the buffer API changes caused some breakage to 
da.c and 51d.c (both fixed since), I don't know if wurfl builds at all by the 
way since the last update to the module is its introduction more than 2 years 
ago. But more importantly I'm realizing that neither 51d nor wurfl will start 
with threads enabled. This has been the case for about
15 months now, while we've had threads enabled on all relevant operating 
systems.

Thus it leads me to the (possibly wrong) conclusion that absolutely nobody ever 
uses these options. Am I wrong ? I'm asking because each time we perform some 
API changes it's always a pain to go through these which don't build out of the 
box without external dependencies, and I suspect we're simply wasting our time 
and we should get rid of them if nobody uses them (or at the very least move 
them to contrib/).

But if anyone is using them and disables threads for this, then it's fine, but 
in this case it would be nice if these two would check the thread safety of 
their code so that the thread restriction can be removed for the benefit of 
their users.

Please advise.

Thanks,
Willy




Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Dirkjan Bussink
Hi all,

> On 21 Jan 2019, at 02:01, Emeric Brun  wrote:
> 
> Is there a way to check that this is a keyupdate message which triggers the
> callback (and not another)?

Sadly there is not. I had taken a look at the OpenSSL code and it triggers the 
callback without any additional information available at 
https://github.com/openssl/openssl/blob/OpenSSL_1_1_1a/ssl/statem/statem_srvr.c#L4059.
 

> And what happens for natural I/O operations: do they return something when
> receiving the message, or is this completely hidden by openssl (i.e. what
> does SSL_read/write return, for instance, when a keyupdate has just been
> received)?

This is completely hidden by OpenSSL; as a library user you don't see that this
message happened, as it's transparently handled.

I think the other option to consider is that, since this logic was added,
OpenSSL has gotten support for flags to control renegotiation and prevent
insecure renegotiation, so the question is whether we still need this check at
all. I think it can be removed (which is also what nginx did in the commit that
Janusz mentioned in another email in this thread). From the commit message at
https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c:

--- 

Following 7319:dcab86115261, as long as SSL_OP_NO_RENEGOTIATION is
defined, it is OpenSSL library responsibility to prevent renegotiation,
so the checks are meaningless.

Additionally, with TLSv1.3 OpenSSL tends to report SSL_CB_HANDSHAKE_START
at various unexpected moments - notably, on KeyUpdate messages and
when sending tickets. This change prevents unexpected connection
close on KeyUpdate messages and when finishing handshake with upcoming
early data changes.

--- 

Cheers,

Dirkjan Bussink


Does anyone *really* use 51d or WURFL ?

2019-01-21 Thread Willy Tarreau
Hi all,

recently it was figured that the buffer API changes caused some breakage
to da.c and 51d.c (both fixed since), I don't know if wurfl builds at all
by the way since the last update to the module is its introduction more
than 2 years ago. But more importantly I'm realizing that neither 51d nor
wurfl will start with threads enabled. This has been the case for about
15 months now, while we've had threads enabled on all relevant operating
systems.

Thus it leads me to the (possibly wrong) conclusion that absolutely
nobody ever uses these options. Am I wrong ? I'm asking because each
time we perform some API changes it's always a pain to go through these
which don't build out of the box without external dependencies, and I
suspect we're simply wasting our time and we should get rid of them
if nobody uses them (or at the very least move them to contrib/).

But if anyone is using them and disables threads for this, then it's
fine, but in this case it would be nice if these two would check the
thread safety of their code so that the thread restriction can be
removed for the benefit of their users.

Please advise.

Thanks,
Willy



Re: reg-tests situation in haproxy 1.8

2019-01-21 Thread Frederic Lecaille

On 1/19/19 8:53 AM, Willy Tarreau wrote:

Hi Lukas,

On Fri, Jan 18, 2019 at 12:43:34PM +0100, Lukas Tribus wrote:

Hello,


currently we have 4 reg-tests in haproxy-1.8, backported due to the
actual bugfix commit, which included a test. We also have a broken
symbolic link in reg-tests/lua/common.pem, which causes at least some
confusion [1].


This is getting dirty :-/


We don't have any test infrastructure in haproxy-1.8 (Makefile,
reg-tests/README) afaik.

So my question is, where do we go from here? Dropping them all, and
remove the test part in future backports? Or backport the actual
infrastructure so that 1.8 can be tested?


I was interested in backporting them to 1.8 once we have more experience
with them and they're better organized, so that we avoid backporting
reorg patches. I'd say we've made quite some progress now and we could
possibly backport them. But I wouldn't be surprised if we'd soon rename
many of them again since the relation between the level and the prefix
letter has to be looked up into the makefile each time, so probably this
is something we should improve.


Note that a "reg-tests-help" makefile target dumps the list of LEVELs:



$ make reg-tests-help

To launch the reg tests for haproxy, first export the VTEST_PROGRAM variable
to your environment, pointing it to your vtest program:

$ export VTEST_PROGRAM=/opt/local/bin/vtest
or
$ setenv VTEST_PROGRAM /opt/local/bin/vtest

The same thing may be done to set your haproxy program with 
HAPROXY_PROGRAM but with ./haproxy as default value.


To run all the tests:
$ make reg-tests

You can also set the programs to be used on the command line:
$ VTEST_PROGRAM=<...> HAPROXY_PROGRAM=<...> make reg-tests

To run tests with specific levels:
$ LEVEL=1,3,4   make reg-tests  #list of levels
$ LEVEL=1-3,5-6 make reg-tests  #list of range of levels

About the levels:
LEVEL 1 scripts are dedicated to pure haproxy compliance tests 
(prefixed with 'h' letter).

LEVEL 2 scripts are slow scripts (prefixed with 's' letter).
LEVEL 3 scripts are low interest scripts (prefixed with 'l' letter).
LEVEL 4 scripts are in relation with bugs they help to reproduce 
(prefixed with 'b' letter).
LEVEL 5 scripts are scripts triggering known broken behaviors for 
which there is still no fix (prefixed with 'k' letter).
LEVEL 6 scripts are experimental, typically used to develop new 
scripts (prefixed with 'e' letter).




On my side I always run all the VTC files. When I add a new reg test, it 
can be tested like that without using LEVEL:


$ make reg-tests 


We could set the level with strings:

h*.vtc -> haproxy
s*.vtc -> slow
l*.vtc -> low   (perhaps this one should be removed).
b*.vtc -> bug
k*.vtc -> broken
e*.vtc -> exp

only list of levels could be permitted:

$ LEVEL=haproxy,bug make reg-tests ...

As there is no longer a notion of levels here, perhaps we should rename the 
LEVEL environment variable to VTC_TYPES, REGTEST_TYPES or something else.





Re: [PATCH] REG-TEST: mailers: add new test for 'mailers' section

2019-01-21 Thread Christopher Faulet

Le 23/12/2018 à 21:17, PiBa-NL a écrit :

Hi List,

Attached a new test to verify that the 'mailers' section is working
properly.
Currently with 1.9 the mailers section sends thousands of mails in my setup...

As the test is rather slow i have marked it with a starting letter 's'.

Note that the test also fails on 1.6/1.7/1.8 but can be 'fixed' there by
adding a 'timeout mail 200ms' (except on 1.6, which doesn't have that
setting).

I don't think that should be needed though if everything was working
properly?

If the test could be committed, and related issues exposed fixed that
would be neat ;)

Thanks in advance,

PiBa-NL (Pieter)



Hi Pieter,

About the timing issue, could you try the following patch please ? With 
it, I can run the regtest about email alerts without any error.


Thanks,
--
Christopher Faulet
From 8b7822ad2c7d9e4f9fe6b19f57d1d1ff01336a70 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 21 Jan 2019 14:15:50 +0100
Subject: [PATCH] BUG/MINOR: check: Wake the check task if the check is
 finished in wake_srv_chk()

With tcp-check, the result of the check is set by the function tcpcheck_main()
from the I/O layer. So it is important to wake up the check task to handle the
result and finish the check. Otherwise, we will wait for the task timeout to
handle the result of a tcp-check, delaying the next check by as much.

This patch also fixes a problem about email alerts reported by PiBa-NL (Pieter)
on the ML [1], affecting all versions since 1.6. So this patch must be backported
from 1.9 down to 1.6.

[1] https://www.mail-archive.com/haproxy@formilux.org/msg32190.html
---
 src/checks.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/src/checks.c b/src/checks.c
index 1a81ccdb3..b0de77088 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1442,12 +1442,13 @@ static int wake_srv_chk(struct conn_stream *cs)
 	}
 
 	if (check->result != CHK_RES_UNKNOWN) {
-		/* We're here because nobody wants to handle the error, so we
-		 * sure want to abort the hard way.
-		 */
+		/* Check complete or aborted. If connection not yet closed do it
+		 * now and wake the check task up to be sure the result is
+		 * handled ASAP. */
 		conn_sock_drain(conn);
 		cs_close(cs);
 		ret = -1;
+		task_wakeup(check->task, TASK_WOKEN_IO);
 	}
 
 	if (check->server)
-- 
2.20.1



Re: H2 Server Connection Resets (1.9.2)

2019-01-21 Thread Aleksandar Lazic
Hi Luke.

Am 21.01.2019 um 10:30 schrieb Luke Seelenbinder:
> Hi all,
> 
> One more bug (or configuration hole) from our transition to 1.9.x using 
> end-to-end h2 connections.
> 
> After enabling h2 backends (technically `server … alpn h2,http/1.1`), we 
> began seeing a high number of backend /server/ connection resets. A 
> reasonable number of client-side connection resets due to timeouts, etc., is 
> normal, but the server connection resets were new.
> 
> I believe the root cause is that our backend servers are NGINX servers, which 
> by default have a 1000 request limit per h2 connection 
> (https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests). 
> As far as I can tell there's no way to set this to unlimited. That resulted 
> in NGINX resetting the HAProxy backend connections and thus resulted in user 
> requests being dropped or returning 404s (oddly enough; though this may be as 
> a result of the outstanding bug related to header manipulation and HTX mode).

Do you have such a info in the nginx log?

"http2 flood detected"

It's the message from this lines

https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2.c#L4517


> This wouldn't be a problem if one of the following were true:
> 
> - HAProxy could limit the number of times it reused a connection

Can you try to set some timeout values for `timeout http-keep-alive`
https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#timeout%20http-keep-alive

I assume that this timeout could be helpful because of this block in the doc

https://cbonte.github.io/haproxy-dconv/1.9/configuration.html

```
  - KAL : keep alive ("option http-keep-alive") which is the default mode : all
requests and responses are processed, and connections remain open but idle
between responses and new requests.
```

and this code part

https://github.com/haproxy/haproxy/blob/v1.9.0/src/backend.c#L1164

> - HAProxy could retry a failed request due to backend server connection reset 
> (possibly coming in 2.0 with L7 retries?)

Would you mind creating an issue for that, if there isn't one already?

> - NGINX could set that limit to unlimited.

Isn't `unsigned int` enough?
How many idle connections do you have, and for how long?

> Our http-reuse is set to aggressive, but that doesn't make much difference, I 
> don't think, since safe would result in the same behavior (the connection is 
> reusable…but only for a limited number of requests).
> 
> We've worked around this by only using h/1.1 on the backends, which isn't a 
> big problem for us, but I thought I would raise the issue, since I'm sure a 
> lot of folks are using haproxy <-> nginx pairings, and this is a bit of a 
> subtle result of that in full h2 mode.

Can you try to increase max_requests to 20 in nginx?

The `max_requests` is defined as `ngx_uint_t` which is `unsigned int`

I have found this in the nginx source.

https://www.nginx.com/resources/wiki/extending/api/main/#ngx-uint-t
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.h#L27
https://trac.nginx.org/nginx/browser/nginx/src/http/v2/ngx_http_v2_module.c#L85

> Thanks again for such great software—I've found it pretty fantastic to run in 
> production. :)

Just out of curiosity, have you seen any changes in your setup with the
htx/H2 end-to-end mode?

> Best,
> Luke

Best regards
Aleks

> —
> Luke Seelenbinder
> Stadia Maps | Founder
> stadiamaps.com
> 




Re: reg-tests situation in haproxy 1.8

2019-01-21 Thread Frederic Lecaille

On 1/19/19 8:53 AM, Willy Tarreau wrote:

Hi Lukas,

On Fri, Jan 18, 2019 at 12:43:34PM +0100, Lukas Tribus wrote:

Hello,


currently we have 4 reg-tests in haproxy-1.8, backported due to the
actual bugfix commit, which included a test. We also have a broken
symbolic link in reg-tests/lua/common.pem, which causes at least some
confusion [1].


This is getting dirty :-/


We don't have any test infrastructure in haproxy-1.8 (Makefile,
reg-tests/README) afaik.

So my question is, where do we go from here? Dropping them all, and
remove the test part in future backports? Or backport the actual
infrastructure so that 1.8 can be tested?


I was interested in backporting them to 1.8 once we have more experience
with them and they're better organized, so that we avoid backporting
reorg patches. I'd say we've made quite some progress now and we could
possibly backport them. But I wouldn't be surprised if we'd soon rename
many of them again since the relation between the level and the prefix
letter has to be looked up into the makefile each time, so probably this
is something we should improve.

Over time I also find that purely numeric tests names are not *that*
convenient and as the mechanism develops, it will even cause more
confusion because multiple people will propose tests with a similar
name, either to document or reproduce an issue, or just because they
want to complete the regtests. And when reading an error report from
vtest, it is quite difficult to figure what was the original test
since we have many identical names in different directories. I see
that Pieter has already migrated to using fully descriptive names in
his new tests, which is probably what we should do.


I propose this renaming (based on a modified "tree" command output; the 
new names follow the colon characters):


$ tree reg-tests
reg-tests
├── cache
│   └── h0.vtc : h_basic.vtc
├── checks
│   ├── common.pem -> ../ssl/common.pem
│   ├── s0.vtc : s_4be_1srv_health_checks.vtc
│   ├── s1.vtc : s_1be_40srv_odd_health_checks.vtc
│   ├── s2.vtc : s_40be_2srv_odd_health_checks.vtc
│   ├── s3.vtc : s_4be_1srv_smtpchk_httpchk_layer47errors.vtc
│   └── s4.vtc : s_tls_health_checks.vtc
├── compression
│   ├── common.pem -> ../ssl/common.pem
│   ├── h0.vtc : h_basic.vtc
│   ├── s0.lua
│   └── s0.vtc : h_lua_validation.vtc
├── connection
│   ├── b0.vtc : b_proxy_protocol_random_fail.vtc
│   ├── common.pem -> ../ssl/common.pem
│   └── h1.vtc : h_dispatch.vtc
├── http-capture
│   └── h0.vtc : h_multiple_headers.vtc
├── http-cookies
│   └── h0.vtc : h_cookie_insert_indirect.vtc
├── http-messaging
│   ├── h0.vtc : h_h1_to_h1.vtc
│   ├── h2.vtc : h_h2_to_h1.vtc
│   └── h3.vtc : h_http_request_buffer.vtc
├── http-rules
│   ├── b0.map
│   ├── b0.vtc : b_map_regm_with_backref.vtc
│   ├── h0.vtc : h_h1_to_h1c.vtc
│   ├── h1.vtc : h_h1or2_to_h1c.vtc
│   ├── h2.map
│   ├── h2.vtc : h_converters_ipmask_concat_strcmp_field_word.vtc
│   ├── h3-be.map
│   ├── h3.map
│   └── h3.vtc : h_map_redirect.vtc
├── log
│   └── b0.vtc : b_wrong_ip_port_logging.vtc
├── lua
│   ├── b0.lua
│   ├── b0.vtc : b_wrong_types_usage.vtc
│   ├── b1.lua
│   ├── b1.vtc : b_bad_http_clt_req_duration.vtc
│   ├── b2.lua
│   ├── b2_print_r.lua
│   ├── b2.vtc : b_txn_get_priv.vtc
│   ├── b3.lua
│   ├── b3.vtc : b_close_wait_lf.vtc
│   ├── common.pem -> ../ssl/common.pem
│   ├── h1.lua
│   ├── h1.vtc : h_txn_get_privv.vtc
│   ├── h2.lua
│   └── h2.vtc : h_lua_socket.vtc
├── mailers
│   ├── k_healthcheckmail.lua
│   └── k_healthcheckmail.vtc
├── peers
├── README
├── seamless-reload
│   └── b0.vtc : b_abns_socket.vtc
├── server
│   └── b0.vtc : b_cli_set_fdqn.vtc
├── spoe
│   └── b0.vtc : b_wrong_init.vtc
├── ssl
│   ├── b0.vtc : b_wrong_ctx_storage.vtc
│   ├── common.pem
│   └── README
├── stick-table
│   ├── b0.vtc : b_unknown_key.vtc
│   └── b1.vtc : b_converteers_ref_cnt_never_dec.vtc
└── webstats
    └── h_webstats-scope-and-post-change.vtc

18 directories, 55 files


If agreed, I will provide a patch.


Fred.



Re: HTX & tune.maxrewrite [1.9.2]

2019-01-21 Thread Christopher Faulet

Le 18/01/2019 à 14:23, Luke Seelenbinder a écrit :

Quick clarification on the previous message.

The code emitting the warning is almost assuredly here:
https://github.com/haproxy/haproxy/blob/ed7a066b454f09fee07a9ffe480407884496461b/src/proto_htx.c#L3242
not in proto_http.c, seeing how this is in htx mode, not http mode.

I've traced the issue to likely being caused by the following condition being false:

https://github.com/haproxy/haproxy/blob/202c6ce1a27c92d21995ee82c71b2f70c636e3ea/src/htx.c#L93

We are dealing with a lot of larger responses (PNGs, 50-100KB/request on avg) 
with perhaps 10 simultaneous initial requests on the same h2 connection being 
very common. That sounds like I may in fact need to tweak some buffer settings 
somewhere. In http/1.1 mode, these requests were spread out across four 
connections with browsers blocking until the previous connection finished.

The documentation is only somewhat helpful for tune.bufsize and 
tune.maxrewrite, http/2 and large requests. If this isn't a bug, would someone 
be willing to offer some guidance into good values for these buffer sizes?

Thanks for your help!

Best,
Luke


Hi Luke,

Could you try following patches please ?

Thanks,
--
Christopher Faulet
From ccea4c140c8958507e8c91f14354e986eb8aabe6 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 21 Jan 2019 11:24:38 +0100
Subject: [PATCH 1/3] BUG/MINOR: proto-htx: Return an error if all headers
 cannot be received at once

When an HTX stream is waiting for a request or a response, it reports an error
(400 for the request or 502 for the response) if a parsing error is reported by
the mux (HTX_FL_PARSING_ERROR). The mux-h1 uses this error, among other things,
when the headers are too big to be analyzed at once. But the mux-h2 doesn't. So
the stream must also report an error if the multiplexer is unable to emit all
headers at once. The multiplexers must always emit all the headers at once
otherwise it is an error.

There are 2 ways to detect this error:

  * The end-of-headers marker was not received yet _AND_ the HTX message is not
empty.

  * The end-of-headers marker was not received yet _AND_ the multiplexer have
some data to emit but it is waiting for more space in the channel's buffer.

Note the mux-h2 is buggy for now when HTX is enabled. It does not respect the
reserve. So there is no way to hit this bug.

This patch must be backported to 1.9.
---
 src/proto_htx.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/src/proto_htx.c b/src/proto_htx.c
index 9fa820653..9e285f216 100644
--- a/src/proto_htx.c
+++ b/src/proto_htx.c
@@ -126,9 +126,12 @@ int htx_wait_for_request(struct stream *s, struct channel *req, int an_bit)
 	 */
 	if (unlikely(htx_is_empty(htx) || htx_get_tail_type(htx) < HTX_BLK_EOH)) {
 		/*
-		 * First catch invalid request
+		 * First catch invalid request because of a parsing error or
+		 * because only part of headers have been transfered.
+		 * Multiplexers have the responsibility to emit all headers at
+		 * once.
 		 */
-		if (htx->flags & HTX_FL_PARSING_ERROR) {
+		if ((htx->flags & HTX_FL_PARSING_ERROR) || htx_is_not_empty(htx) || (s->si[0].flags & SI_FL_RXBLK_ROOM)) {
 			stream_inc_http_req_ctr(s);
 			stream_inc_http_err_ctr(s);
 			proxy_inc_fe_req_ctr(sess->fe);
@@ -1086,7 +1089,7 @@ int htx_wait_for_request_body(struct stream *s, struct channel *req, int an_bit)
 	 * been received or if the buffer is full.
 	 */
 	if (htx_get_tail_type(htx) >= HTX_BLK_EOD ||
-	htx_used_space(htx) + global.tune.maxrewrite >= htx->size)
+	channel_htx_full(req, htx, global.tune.maxrewrite))
 		goto http_end;
 
  missing_data:
@@ -1457,9 +1460,14 @@ int htx_wait_for_response(struct stream *s, struct channel *rep, int an_bit)
 	 */
 	if (unlikely(co_data(rep) || htx_is_empty(htx) || htx_get_tail_type(htx) < HTX_BLK_EOH)) {
 		/*
-		 * First catch invalid response
+		 * First catch invalid response because of a parsing error or
+		 * because only part of headers have been transfered.
+		 * Multiplexers have the responsibility to emit all headers at
+		 * once. We must be sure to have forwarded all outgoing data
+		 * first.
 		 */
-		if (htx->flags & HTX_FL_PARSING_ERROR)
+		if (!co_data(rep) &&
+		((htx->flags & HTX_FL_PARSING_ERROR) || htx_is_not_empty(htx) || (s->si[1].flags & SI_FL_RXBLK_ROOM)))
 			goto return_bad_res;
 
 		/* 1: have we encountered a read error ? */
-- 
2.20.1

From 1654f144f9815a46be126ded3ac04db7fc68c40e Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 21 Jan 2019 11:49:37 +0100
Subject: [PATCH 2/3] BUG/MEDIUM: mux-h2/htx: Respect the channel's reserve by
 checking the flag CO_RFL_KEEP_RSV

When data are pushed in the channel's buffer, in h2_rcv_buf(), the mux-h2 must
respect the reserve if the flag CO_RFL_KEEP_RSV is set. In HTX, because the
stream-interface always sees the buffer as full, there is no other way to know
the reserve must be respected.

This patch must be backported to 1.9.

Re: [PATCH] MINOR: startup: certain goto paths in init_pollers fail to free

2019-01-21 Thread Uman Shahzad
Hi

On Mon, Jan 21, 2019, at 08:49, Willy Tarreau wrote:
> On Mon, Jan 21, 2019 at 04:39:53AM +0100, Willy Tarreau wrote:
> > Hi,
> > 
> > On Thu, Jan 17, 2019 at 08:21:39AM +, Uman Shahzad wrote:
> > > If we fail to initialize pollers due to fdtab/fdinfo/polled_mask
> > > not getting allocated, we free any of those that were allocated
> > > and exit. However the ordering was incorrect, and there was an old
> > > unused and unreachable "fail_cache" path as well.
> > 
> > Good catch. The issue was introduced in 1.9-dev with this patch :
> > 
> > cb92f5c ("MINOR: pollers: move polled_mask outside of struct fdtab.")
> > 
> > 1.8 is OK. I'll mention this in the commit message so that we don't
> > try to backport it too far.
> 
> By the way, I will even adjust it a little bit because there is still
> a mistake at the end : if no poller works, we return zero without
> freeing anything (this never happens). We need to remove the return 0
> at the end and to keep free(fdinfo).
> 
> Willy

Oh, nice, thanks for fixing that up! And thanks for handling this!

Re: Lots of mail from email alert on 1.9.x

2019-01-21 Thread Johan Hendriks


Op 13-01-19 om 18:47 schreef Willy Tarreau:
> Hi Olivier,
>
> On Sun, Jan 13, 2019 at 06:40:56PM +0100, Olivier Houchard wrote:
>>> Indeed, this function should not have any special effect in this case,
>>> it is needed to prepend this at the beginning of chk_report_conn_err() :
>>>
>>> if (!check->server)
>>> return;
>>>
>>> We need to make sure that check->server is properly tested everywhere.
>>> With a bit of luck this one was the only remnant.
>>>
>> I'd rather just avoid calling dns_trigger_resolution() if there's no server,
>> it seems it is the only use of check->server in chk_report_conn_err(), so
>> that set_server_check_status() is call, and the check's status and result
>> may be updated.
> OK, that's fine with me as well, I hesitated between the two.
>
>> Not sure it is really needed, but I'd rather not offend the Check Gods.
> :-)
>
>> The attached patches are updated to od just that.
> Thank you. I'll merge them tomorrow. The ugliness of this code tells me
> it's becoming urgent to perform a serious lifting to the whole checks
> code :-/
>
> Thanks,
> Willy
>
Sorry for the late reply, I have been struck down by the flu.
I can confirm that with version 1.9.2 all is fine with the e-mail alerts
again.

Thanks all.






Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Janusz Dziemidowicz
On Mon, Jan 21, 2019 at 00:10, Adam Langley wrote:
> No idea, I'm afraid. If you have a server to test, it looks like one
> can use OpenSSL 1.1.1's `openssl s_client` tool to send a KeyUpdate
> message by writing "K" on a line by itself.

I tested all my servers and I've noticed that nginx is broken too. I
am running nginx 1.14.2 with OpenSSL 1.1.1a The nginx source contains
exactly the same function as haproxy:
https://trac.nginx.org/nginx/browser/nginx/src/event/ngx_event_openssl.c?rev=ebf8c9686b8ce7428f975d8a567935ea3722da70#L850

However, it seems that it might have been fixed in 1.15.2 by this commit:
https://trac.nginx.org/nginx/changeset/e3ba4026c02d2c1810fd6f2cecf499fc39dde5ee/nginx/src/event/ngx_event_openssl.c

It might also be a better approach for haproxy to just use
SSL_OP_NO_RENEGOTIATION if possible. Older OpenSSL versions do not have
it, but they also don't support TLS 1.3.

And just for reference, I've found Chrome bug with this problem (as I
am interested when this will get enabled to keep all my systems
updated) https://bugs.chromium.org/p/chromium/issues/detail?id=923685

-- 
Janusz Dziemidowicz



Re: HAProxy with OpenSSL 1.1.1 breaks when TLS 1.3 KeyUpdate is used.

2019-01-21 Thread Emeric Brun
Hi Adam,
On 1/20/19 10:12 PM, Adam Langley wrote:
> KeyUpdate messages are a feature of TLS 1.3 that allows the symmetric
> keys of a connection to be periodically rotated. It's
> mandatory-to-implement in TLS 1.3, but not mandatory to use. Google
> Chrome tried enabling KeyUpdate and promptly broke several sites, at
> least some of which are using HAProxy.
> 
> The cause is that HAProxy's code to disable TLS renegotiation[1] is
> triggering for TLS 1.3 post-handshake messages. But renegotiation has
> been removed in TLS 1.3 and post-handshake messages are no longer
> abnormal. Thus I'm attaching a patch to only enforce that check when
> the version of a TLS connection is <= 1.2.
> 
> Since sites that are using HAProxy with OpenSSL 1.1.1 will break when
> Chrome reenables KeyUpdate without this change, I'd like to suggest it
> as a candidate for backporting to stable branches.
> 
> [1] https://github.com/haproxy/haproxy/blob/master/src/ssl_sock.c#L1472
> 
> 
> Thank you
> 
> AGL

Is there a way to check that it is a KeyUpdate message which triggered the 
callback (and not another one)?

And what happens for normal I/O operations: do they return something when 
receiving the message, or is this completely hidden by OpenSSL (i.e. what 
does SSL_read/SSL_write return, for instance, when a KeyUpdate has just 
been received)?

R,
Emeric



H2 Server Connection Resets (1.9.2)

2019-01-21 Thread Luke Seelenbinder
Hi all,

One more bug (or configuration hole) from our transition to 1.9.x using 
end-to-end h2 connections.

After enabling h2 backends (technically `server … alpn h2,http/1.1`), we began 
seeing a high number of backend /server/ connection resets. A reasonable number 
of client-side connection resets due to timeouts, etc., is normal, but the 
server connection resets were new.

I believe the root cause is that our backend servers are NGINX servers, which 
by default have a 1000 request limit per h2 connection 
(https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests). As 
far as I can tell there's no way to set this to unlimited. That resulted in 
NGINX resetting the HAProxy backend connections and thus resulted in user 
requests being dropped or returning 404s (oddly enough; though this may be as a 
result of the outstanding bug related to header manipulation and HTX mode).

This wouldn't be a problem if one of the following were true:

- HAProxy could limit the number of times it reused a connection
- HAProxy could retry a failed request due to backend server connection reset 
(possibly coming in 2.0 with L7 retries?)
- NGINX could set that limit to unlimited.

Our http-reuse is set to aggressive, but that doesn't make much difference, I 
don't think, since safe would result in the same behavior (the connection is 
reusable…but only for a limited number of requests).

We've worked around this by only using h/1.1 on the backends, which isn't a big 
problem for us, but I thought I would raise the issue, since I'm sure a lot of 
folks are using haproxy <-> nginx pairings, and this is a bit of a subtle 
result of that in full h2 mode.

Thanks again for such great software—I've found it pretty fantastic to run in 
production. :)

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com
