Re: Recommendations for deleting headers by regexp in 2.x?

2020-09-21 Thread Tim Düsterhus
Ricardo,

On 21.09.20 at 11:29, Ricardo Fraile wrote:
> One mail on this thread said

Your MUA does not appear to set the 'References' or 'In-Reply-To'
headers that are required for proper threading. This makes it
unnecessarily hard to find the thread.

To save everyone else from searching through their emails: the mailing
list thread in question is this one (from January 2020):
https://www.mail-archive.com/haproxy@formilux.org/msg36060.html

Message-ID:
calgeukyhmvpqmcbp2hi-z2dvutgyhdrsn_esgadjcdic2t-...@mail.gmail.com

Best regards
Tim Düsterhus



Naming and discoverability of fetches and converters

2020-09-17 Thread Tim Düsterhus
Hi List,

I'd like to discuss something that's bothering me for quite some time
now: The naming, namespacing and discoverability of fetches and converters.

Frankly, it's a small mess right now. I assume this is due to historic
growth and features being added over time. Let me give a few examples:

I want to encode something using base64? I use the `base64` converter.
Decoding? **Obviously** that's `b64dec` ;)

Hex encoding? Easily done with `hex`. Decoding back to a string? Not
possible. But you can use `hex2i` to turn a hexadecimal string into an
integer.

We have `url_dec`. But there isn't a `url_enc`, or maybe I am just not
able to find it.
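
To make the asymmetry concrete, this is what the current pairs look like
in a configuration (the header names are made up for illustration):

  # encode / decode pairs with unrelated names
  http-request set-header x-token-b64 %[req.hdr(x-token),base64]
  http-request set-header x-token-raw %[req.hdr(x-token-b64),b64dec]
  # hex has no string decoder, only the integer conversion
  http-request set-header x-id-hex    %[req.hdr(x-id),hex]
  http-request set-header x-id-int    %[req.hdr(x-id-hex),hex2i]
  # decode only, no encoder counterpart
  http-request set-var(txn.path)      path,url_dec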

Then there are quite a few redundant converters. We have `sha1`, then
`sha2`, and most recently `digest`.

Then, within the description of the converters, variable naming and
scoping are explained over and over again for each converter that takes
variable names.

For fetches it's not as bad; many have proper prefixes. But there's also
an inconsistency in naming: sometimes the namespace is separated by a
dot (`req.hdr`), sometimes by an underscore (`ssl_fc`).

Long story short: Is it feasible to update the canonical names of
fetches and converters, preserving backwards compatibility with old
versions and adding the missing "duals" for encoders / decoders, or do
we have to live with the current situation?

---

Let me give a proposal of what makes sense to me:

Make 's.' contain string related converters.

s.field() instead of field()
s.word() instead of word()
s.json() instead of json()

Make 'http.' contain HTTP related converters.

http.language() instead of language()

Make 'math.' contain mathematics related converters.

math.add() instead of add()
math.sub() instead of sub()

Make 'enc.' contain encoding related converters.

enc.base64() instead of base64()
enc.unbase64() instead of b64dec()
enc.hex() instead of hex()
enc.unhex() being the new reverse of hex()

Make 'sec.' contain security related converters.

sec.digest() instead of digest()
deprecate sha1, sha2 in favor of digest()
sec.secure_memcmp() instead of secure_memcmp()

You get the concept.

The biggest benefit to namespacing for me is that related converters
appear below each other within configuration.txt. The most commonly
required converters could still remain in the default namespace (or
perhaps rather aliased into the default namespace). I don't believe that
's.' for the string stuff is too bad, though.
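
To illustrate, a configuration using the proposed names could read like
this (all of these namespaced converters are hypothetical, none of them
exist today):

  # hypothetical spelling under the proposal above
  http-request set-header x-token-b64 %[req.hdr(x-token),enc.base64]
  http-request set-header x-token-raw %[req.hdr(x-token-b64),enc.unbase64]
  http-request set-var(txn.first)     path,s.field(2,/)
  http-request set-var(txn.sum)       var(txn.a),math.add(txn.b)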

Opinions?

Best regards
Tim Düsterhus



Re: check successful reload using master cli

2020-09-15 Thread Tim Düsterhus
William,

On 15.09.20 at 17:36, William Lallemand wrote:
> Oh right... the space in "[was: ]" is troublesome for cutting the string,
> we must remove it.
> 

Why not use the Tab (0x09) as separator and make the output a proper TSV?

Best regards
Tim Düsterhus



Re: Dynamic Googlebot identification via lua?

2020-09-12 Thread Tim Düsterhus
Reinhard,

On 12.09.20 at 16:43, Reinhard Vicinus wrote:
>>> thanks, for your reply and the information. Sorry for my late reply, but
>>> I had only today time to test. I did try to get the spoa server working
>>> on a ubuntu bionic (18.04.4) with haproxy 2.2.3-2ppa1~bionic from the
>>> vbernat ppa. I could compile the spoa server with python 3.6 support
>>> from the latest github sources without obvious problems and it also
>>> started without problems with the example python script (./spoa -d -f
>>> ps_python.py).
>>>
>>> If I start haproxy with the following command:
>>>
>>> haproxy -f spoa-server.conf -d
>>>
>>> haproxy seg faults on the first request to port 10001
>> This is a bug in HAProxy then. Do you happen to have a core dump / stack
>> trace?
> Yes I have a core dump. But I am somewhat rusty in analyzing them. So
> any pointers what to do with the core dump is appreciated. Also the
> segmentation fault only occurs if the spoa server is running so the
> problem is probably somewhere in the code regarding the connection to
> the spoa server.

Install the haproxy-dbgsym package to get the debug symbols, so that
the stack trace becomes readable. Then:

gdb haproxy <core file>

In gdb use 'bt' to get the backtrace of the thread that killed the
process. Possibly 't a a bt' to see what the other threads were doing.

Ideally file an issue in the tracker:
https://github.com/haproxy/haproxy/issues. It's easier to track it
there and there's a template to fill in.

>>> If I start haproxy with the additional parameter -Ws then it does not
>>> seg fault, but only the first and every 4th request get (correctly?)
>>> forwarded to the spoa server, the 3 requests in between get answered
>>> with an empty %[var(sess.iprep.ip_score)].
>>>
>>> [...]
>>>
>>> I am unsure if I am making some stupid mistakes, or if I should test it
>>> with an older haproxy version or how to debug the issue further. So any
>>> pointers are very much appreciated.
>> Can you share the configuration you attempted to use?
> Sorry, I forgot to mention that the configuration used is the example
> configuration from haproxy repository. But here it is, to ensure that
> there were no changes in the meantime:

Unfortunately I can't comment on the SPOA functionality, because I never
used it. Hopefully the information is sufficient for someone more
proficient to tell what's going wrong there.

Best regards
Tim Düsterhus



Re: Dynamic Googlebot identification via lua?

2020-09-12 Thread Tim Düsterhus
Reinhard,

On 12.09.20 at 12:45, Reinhard Vicinus wrote:
> thanks, for your reply and the information. Sorry for my late reply, but
> I had only today time to test. I did try to get the spoa server working
> on a ubuntu bionic (18.04.4) with haproxy 2.2.3-2ppa1~bionic from the
> vbernat ppa. I could compile the spoa server with python 3.6 support
> from the latest github sources without obvious problems and it also
> started without problems with the example python script (./spoa -d -f
> ps_python.py).
> 
> If I start haproxy with the following command:
> 
> haproxy -f spoa-server.conf -d
> 
> haproxy seg faults on the first request to port 10001

This is a bug in HAProxy then. Do you happen to have a core dump / stack
trace?

> If I start haproxy with the additional parameter -Ws then it does not
> seg fault, but only the first and every 4th request get (correctly?)
> forwarded to the spoa server, the 3 requests in between get answered
> with an empty %[var(sess.iprep.ip_score)].
> 
> [...]
> 
> I am unsure if I am making some stupid mistakes, or if I should test it
> with an older haproxy version or how to debug the issue further. So any
> pointers are very much appreciated.

Can you share the configuration you attempted to use?

Best regards
Tim Düsterhus



Re: [PATCH] MINOR: Add either(<true>,<false>) converter

2020-09-11 Thread Tim Düsterhus , WoltLab GmbH
Miroslav,

On 11.09.20 at 17:10, Miroslav Zagorac wrote:
> there is a small typo in the patch, if says 'iff' instead of 'iif':
> 
> ---
> +  Example:
> +    http-request set-header x-forwarded-proto %[ssl_fc,iff(https,http)]
> ---
> 

Fun. I didn't receive your reply on company mail. I only got it from the
list using my personal subscription. I hope this message threads properly.

Muscle memory is too strong :-/ I even used search and replace to adjust
the name in all files, except configuration.txt. I even had to fix the
'either' within the GPL license comment.

Willy, find a patch attached. I've put Miroslav as 'Reported-by'.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P
From e9ea0196ab20d8cd92ba8a9651e0da7e9575f9ae Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Fri, 11 Sep 2020 17:13:12 +0200
Subject: [PATCH] DOC: Fix typo in iif() example
To: haproxy@formilux.org
Cc: w...@1wt.eu

It should read 'iif', not 'iff'.

Reported-by: Miroslav Zagorac 
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 0e34f620d..075d3d5c8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15155,7 +15155,7 @@ iif(<true>,<false>)
   string otherwise.
 
   Example:
-http-request set-header x-forwarded-proto %[ssl_fc,iff(https,http)]
+http-request set-header x-forwarded-proto %[ssl_fc,iif(https,http)]
 
in_table(<table>)
   Uses the string representation of the input sample to perform a look up in
-- 
2.28.0



Re: [PATCH] MINOR: Add either(<true>,<false>) converter

2020-09-11 Thread Tim Düsterhus , WoltLab GmbH
Willy,

On 11.09.20 at 16:46, Willy Tarreau wrote:
> First, I really like the feature, that's a great idea.

:-)

>>> - choice (my initial choice)
>>> - ifor / if_or
>>> - ifelse / if_else
>>> - iftrue (with the <false> argument being optional)
>>
>> Maybe something like this would be appropriate (IIF)?
> 
> I was thinking about "select" which I've seldom seen, I never heard about
> "iif" previously but it has the benefit of being quite short (convenient
> for long expressions) and already in use elsewhere. And it also has the
> benefit that someone searching for "if" would easily stumble on it.
> 
> I'm fine with the patch BTW, so if everyone agrees on "iif", let's suggest
> we go with it ?
> 

I consider 'iif' a bit obscure. It easily looks like a typo. Similar to
'iff' for 'if and only if' which tends to generate a number of questions
as well.

But I've certainly seen 'iif' being used before and I don't strongly
care for the naming. I'm happy with anything.

I've attached an updated patch and leave it up to you whether you want
to merge it and whether you'd like to wait for more opinions or not :-)

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P
From 9022a2f0b3e2ab449debfe6db919312aa97a5262 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Fri, 11 Sep 2020 14:25:23 +0200
Subject: [PATCH] MINOR: Add iif(<true>,<false>) converter
To: haproxy@formilux.org
Cc: w...@1wt.eu

iif() takes a boolean as input and returns one of the two argument
strings depending on whether the boolean is true.

This converter most likely is most useful to return the proper scheme
depending on the value returned by the `ssl_fc` fetch, e.g. for use within
the `x-forwarded-proto` request header.

However it can also be useful for use within a template that is sent to
the client using `http-request return` with a `lf-file`. It allows the
administrator to implement a simple condition, without needing to prefill
variables within the regular configuration using `http-request
set-var(req.foo)`.
---
 doc/configuration.txt   |  7 ++
 reg-tests/converter/iif.vtc | 45 +
 src/sample.c| 22 ++
 3 files changed, 74 insertions(+)
 create mode 100644 reg-tests/converter/iif.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c1f6f8219..dd8d5889a 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15150,6 +15150,13 @@ http_date([<offset>])
   microseconds since epoch. Offset is assumed to have the same unit as
   input timestamp.
 
+iif(<true>,<false>)
+  Returns the <true> string if the input value is true. Returns the <false>
+  string otherwise.
+
+  Example:
+http-request set-header x-forwarded-proto %[ssl_fc,iff(https,http)]
+
in_table(<table>)
   Uses the string representation of the input sample to perform a look up in
   the specified table. If the key is not found in the table, a boolean false
diff --git a/reg-tests/converter/iif.vtc b/reg-tests/converter/iif.vtc
new file mode 100644
index 0..360b63f96
--- /dev/null
+++ b/reg-tests/converter/iif.vtc
@@ -0,0 +1,45 @@
+varnishtest "iif converter Test"
+
+feature ignore_unknown_macro
+
+server s1 {
+	rxreq
+	txresp
+} -repeat 3 -start
+
+haproxy h1 -conf {
+defaults
+	mode http
+	timeout connect 1s
+	timeout client  1s
+	timeout server  1s
+
+frontend fe
+	bind "fd@${fe}"
+
+	 requests
+	http-request set-var(txn.iif) req.hdr_cnt(count),iif(ok,ko)
+	http-response set-header iif %[var(txn.iif)]
+
+	default_backend be
+
+backend be
+	server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+	txreq
+	rxresp
+	expect resp.status == 200
+	expect resp.http.iif == "ko"
+	txreq \
+		-hdr "count: 1"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.iif == "ok"
+	txreq \
+		-hdr "count: 1,2"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.iif == "ok"
+} -run
diff --git a/src/sample.c b/src/sample.c
index 3a1534865..a9c08ef54 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -3121,6 +3121,26 @@ static int sample_conv_secure_memcmp(const struct arg *arg_p, struct sample *smp
 }
 #endif
 
+/* Takes a boolean as input. Returns the first argument if that boolean is true and
+ * the second argument otherwise.
+ */
+static int sample_conv_iif(const struct arg *arg_p, struct sample *smp, void *private)
+{
+	smp->data.type = SMP_T_STR;
+	smp->flags |= SMP_F_CONST;
+
+	if (smp->data.u.sint) {
+		smp->data.u.str.data = arg_p[0].data.str.data;
+		smp->data.u.str.area = arg_p[0].data.str.area;
+	}
+	else {
+		smp->data.u.str.data = arg_p[1].data.str.data;
+		smp->data.u.str.area = arg_p[1].data.str.area;
+	}
+
+	return

[PATCH] MINOR: Add either(<true>,<false>) converter

2020-09-11 Thread Tim Düsterhus , WoltLab GmbH
Willy,

[keep this email in CC, it's not subscribed to the list]

"either() takes a boolean as input and returns one of the two argument
strings depending on whether the boolean is true."

Find the full details in the attached patch.

---

We've had a bit of discussion regarding the naming of the converter. I
wanted to avoid calling it `if`, because then we could have stuff like this:

  http-request set-var(txn.foo) bool(1),if(bar,baz)

which can easily be confused with:

  http-request set-var(txn.foo) bool(1) if bar

Some other naming suggestions that came up:

- choice (my initial choice)
- ifor / if_or
- ifelse / if_else
- iftrue (with the <false> argument being optional)

I'm happy to adjust the patch based on your preferences.

I'd also appreciate it if this one could make a backport into 2.2. But
I would fully understand if you refuse to backport this into an LTS
branch. November is not too far away :-)

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P
From 52e50548f97a795cb891adf0f18a8ebc1e38b188 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Fri, 11 Sep 2020 14:25:23 +0200
Subject: [PATCH] MINOR: Add either(<true>,<false>) converter
To: haproxy@formilux.org
Cc: w...@1wt.eu

either() takes a boolean as input and returns one of the two argument
strings depending on whether the boolean is true.

This converter most likely is most useful to return the proper scheme
depending on the value returned by the `ssl_fc` fetch, e.g. for use within
the `x-forwarded-proto` request header.

However it can also be useful for use within a template that is sent to
the client using `http-request return` with a `lf-file`. It allows the
administrator to implement a simple condition, without needing to prefill
variables within the regular configuration using `http-request
set-var(req.foo)`.
---
 doc/configuration.txt  |  7 ++
 reg-tests/converter/either.vtc | 45 ++
 src/sample.c   | 22 +
 3 files changed, 74 insertions(+)
 create mode 100644 reg-tests/converter/either.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c1f6f8219..197b5cec5 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15094,6 +15094,13 @@ djb2([<avalanche>])
   32-bit hash is trivial to break. See also "crc32", "sdbm", "wt6", "crc32c",
   and the "hash-type" directive.
 
+either(<true>,<false>)
+  Returns the <true> string if the input value is true. Returns the <false>
+  string otherwise.
+
+  Example:
+http-request set-header x-forwarded-proto %[ssl_fc,either(https,http)]
+
 even
   Returns a boolean TRUE if the input value of type signed integer is even
   otherwise returns FALSE. It is functionally equivalent to "not,and(1),bool".
diff --git a/reg-tests/converter/either.vtc b/reg-tests/converter/either.vtc
new file mode 100644
index 0..8a155c356
--- /dev/null
+++ b/reg-tests/converter/either.vtc
@@ -0,0 +1,45 @@
+varnishtest "either converter Test"
+
+feature ignore_unknown_macro
+
+server s1 {
+	rxreq
+	txresp
+} -repeat 3 -start
+
+haproxy h1 -conf {
+defaults
+	mode http
+	timeout connect 1s
+	timeout client  1s
+	timeout server  1s
+
+frontend fe
+	bind "fd@${fe}"
+
+	 requests
+	http-request set-var(txn.either) req.hdr_cnt(count),either(ok,ko)
+	http-response set-header either %[var(txn.either)]
+
+	default_backend be
+
+backend be
+	server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+	txreq
+	rxresp
+	expect resp.status == 200
+	expect resp.http.either == "ko"
+	txreq \
+		-hdr "count: 1"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.either == "ok"
+	txreq \
+		-hdr "count: 1,2"
+	rxresp
+	expect resp.status == 200
+	expect resp.http.either == "ok"
+} -run
diff --git a/src/sample.c b/src/sample.c
index 3a1534865..d6baf6c89 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -3121,6 +3121,26 @@ static int sample_conv_secure_memcmp(const struct arg *arg_p, struct sample *smp
 }
 #endif
 
+/* Takes a boolean as input. Returns the first argument if that boolean is true and
+ * the second argument otherwise.
+ */
+static int sample_conv_either(const struct arg *arg_p, struct sample *smp, void *private)
+{
+	smp->data.type = SMP_T_STR;
+	smp->flags |= SMP_F_CONST;
+
+	if (smp->data.u.sint) {
+		smp->data.u.str.data = arg_p[0].data.str.data;
+		smp->data.u.str.area = arg_p[0].data.str.area;
+	}
+	else {
+		smp->data.u.str.data = arg_p[1].data.str.data;
+		smp->data.u.str.area = arg_p[1].data.str.area;
+	}
+
+	return 1;
+}
+
 #define GRPC_MSG_COMPRESS_FLAG_SZ 1 /* 1 byte */
 #define GRPC_MSG_LENGTH_SZ4 /* 4 bytes */
 #define GRPC_MSG_HEADER_SZ(GRPC_MSG_COMPRESS_FLAG_

Re: http2 smuggling

2020-09-11 Thread Tim Düsterhus
Willy,

On 11.09.20 at 09:42, Willy Tarreau wrote:
> On Fri, Sep 11, 2020 at 09:02:57AM +0200, Tim Düsterhus wrote:
>> According to the article performing a h2c upgrade via TLS is not valid
>> according to the spec. HAProxy implements the H2 spec.
> 
> "according to the article" :-) There's no such mention in the spec
> itself from what I remember, it's just that it's usually pointless,
> but there may be a lot of situations where it's considered better to
> forward an upgradable connection over TLS to the next intermediary
> because the intermediary network is not safe.

I might misunderstand it, but I'd say that RFC 7540#3.3 specifically
disallows h2c for TLS:

>HTTP/2 over TLS uses the "h2" protocol identifier.  The "h2c"
>protocol identifier MUST NOT be sent by a client or selected by a
>server; the "h2c" protocol identifier describes a protocol that does
>not use TLS.

-

>> Question 1: Should HAProxy reject requests that set Upgrade: h2c over
>> TLS? I think it should. Basically the following rule should be applied
>> automatically to my understanding.
>>
>> http-request deny deny_status 400 if { req.hdr(upgrade) h2c } { ssl_fc }
> 
> No I disagree. Let's say you have an h2c client on datacenter 1 and
> an h2c server on datacenter 2. This rule would prevent you from using
> the local haproxy to secure the connection, while providing zero benefit.

If I just wanted to secure the connection I would use 'mode tcp',
where HAProxy is a dumb pipe, and this rule would not apply.
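
For reference, this is roughly what I mean by a dumb pipe; the
addresses and the CA file are placeholders:

  # DC1 side: take the local cleartext h2c traffic and wrap it in TLS towards DC2
  listen h2c-tunnel
      mode tcp
      bind 10.0.0.1:8080
      server dc2 192.0.2.10:443 ssl verify required ca-file /etc/haproxy/dc2-ca.pem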

> By the way, it's fun to see that a discussion started a few days ago
> regarding the uselessness of h2c and its removal from the next H2 spec
> because "nobody implemented it yet" :-)  And actually the guy had to
> implement its own server to find a complying one.

I believe nginx can do h2c.

Best regards
Tim Düsterhus



Re: http2 smuggling

2020-09-11 Thread Tim Düsterhus
Willy,

On 11.09.20 at 08:07, Willy Tarreau wrote:
> On Fri, Sep 11, 2020 at 01:55:10PM +1000, Igor Cicimov wrote:
>> Should we be worried?
>>
>> https://portswigger.net/daily-swig/http-request-smuggling-http-2-opens-a-new-attack-tunnel
> 
> But this stuff is total non-sense. Basically the guy is complaining
> that the products he tested work exactly as desired, designed and
> documented!
> 
> The principle of the upgrade at the gateway level precisely is to say
> "OK both the client and the server want to speak another protocol you
> agreed upon, let me retract" and let them talk over a tunnel. That's
> exactly what is needed to support WebSocket for example. The simple
> fact that he found that many proxies/gateways work like this should
> ring a bell about the intended behavior!
> 

I'm very well aware of your opinion regarding access control at the
edge by now; however, I have two questions at this point.

I've read the official write-up of the findings of the researcher:
https://labs.bishopfox.com/tech-blog/h2c-smuggling-request-smuggling-via-http/2-cleartext-h2c.
It contains quite a few more details and I recommend taking a look at
it before answering my questions.

According to the article, performing an h2c upgrade over TLS is not
valid per the spec. HAProxy implements the H2 spec.

Question 1: Should HAProxy reject requests that set Upgrade: h2c over
TLS? I think it should. Basically, to my understanding, the following
rule should be applied automatically.

http-request deny deny_status 400 if { req.hdr(upgrade) h2c } { ssl_fc }



Further, the article says that the HTTP2-Settings header is a
hop-by-hop header. It should not be forwarded by a proxy. According to
the article, HAProxy *does* forward it.

Question 2: Should HAProxy automatically strip the HTTP2-Settings header
when forwarding requests?
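
Until something is decided, both could of course be handled explicitly
in the configuration; a rough sketch (untested):

  # drop the hop-by-hop header before forwarding the request
  http-request del-header HTTP2-Settings
  # refuse the (invalid) h2c upgrade attempt on TLS listeners
  http-request deny deny_status 400 if { req.hdr(upgrade) -i h2c } { ssl_fc }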

Best regards
Tim Düsterhus



Re: Not sure if my mails to haproxy mailing lists are being blocked.

2020-09-09 Thread Tim Düsterhus
Badari,

On 09.09.20 at 04:10, Badari Prasad wrote:
> Hi Admin,
>  Need help here , not sure if my mails to the mailing lists are being
> blocked. Can you kindly check.
> 

I am not the Admin, but I can confirm that your emails reach the list
just fine. You can easily check yourself using the mailing list
archives: https://www.mail-archive.com/haproxy@formilux.org/

You should be a bit more patient, though. Bumping your thread after just
roughly 24 hours is not nice.

However, I'm not able to help with your issue; that's why I did not reply.

Best regards
Tim Düsterhus



Re: Dynamic Googlebot identification via lua?

2020-09-08 Thread Tim Düsterhus
Reinhard,
Björn,

On 08.09.20 at 21:39, Björn Jacke wrote:
>> the only official supported way to identify a google bot is to run a
>> reverse DNS lookup on the accessing IP address and run a forward DNS
>> lookup on the result to verify that it points to accessing IP address
>> and the resulting domain name is in either googlebot.com or google.com
>> domain.
>> ...
> 
> thanks for asking this again, I brought this up earlier this year and I
> got no answer:
> 
> https://www.mail-archive.com/haproxy@formilux.org/msg37301.html
> 
> I would expect that this is something that most sites would actually
> want to check and I'm surprised that there is no solution for this or at
> least none that is obvious to find.

The usually recommended solution for this kind of check is either Lua
or SPOA, running the actual logic out of process.

For Lua my haproxy-auth-request script is a batteries-included solution
to query an arbitrary HTTP service:
https://github.com/TimWolla/haproxy-auth-request. It comes with the
drawback that Lua runs single-threaded within HAProxy, so you might not
want to use this if the checks need to run in the hot path, handling
thousands of requests per second.

It should be possible to cache the results of the script using a stick
table or a map.
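
For reference, wiring up the script looks roughly like this; the
backend address, the check path and the exact variable handling are
examples here, see the script's README for the authoritative usage:

  global
      # path to the script is a placeholder
      lua-load /usr/share/haproxy/auth-request.lua

  defaults
      mode http
      timeout connect 5s
      timeout client  30s
      timeout server  30s

  # the HTTP service that performs the reverse + forward DNS check
  backend googlebot_check
      server check 127.0.0.1:8000

  frontend fe
      bind :80
      # ask the service; it answers 2xx for a verified Googlebot
      http-request lua.auth-request googlebot_check /is-googlebot
      http-request set-header x-verified-googlebot true if { var(txn.auth_response_successful) -m bool }
      default_backend app

  backend app
      server app 127.0.0.1:8080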

Back in nginx times I used nginx' auth_request to query a local service
that checked whether the client IP address was a Tor exit node. It
worked well.

For SPOA there's this random IP reputation service within the HAProxy
repository:
https://github.com/haproxy/haproxy/tree/master/contrib/spoa_example. I
never used the SPOA feature, so I can't comment on whether that example
generally works and how hard it would be to extend it. It certainly
comes with the restriction that you are limited to C or Python (or a
manual implementation of the SPOA protocol) vs anything that speaks HTTP.

Best regards
Tim Düsterhus



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-08 Thread Tim Düsterhus
William,

On 08.09.20 at 17:21, William Lallemand wrote:
>> Yes, this generally  makes sense to me. Unfortunately the code frankly
>> is a foreign language to me here.
>>
>> I have not the slightest idea what steps I would need to perform to get
>> the headers of the cached response within 'http_action_req_cache_use'.
>> That's the primary reason why I implemented the logic within
>> 'http_cache_io_handler' and not in 'http_action_req_cache_use'.
> 
> Indeed it's kind of complicated in the current state of the cache,
> because the data are chunked in the shctx so you can't use the htx
> functions to do it, so we need to dump the headers before.
> 
>> I would also say that doing this in 'http_cache_io_handler' is
>> logically the correct place, because we are already dealing with the
>> response here.  However I understand that needing to malloc() the
>> if-none-match is non-ideal.
>>
>> Do you have any advice how I can efficiently retrieve the ETag header
>> from the cached response in 'http_action_req_cache_use'?
> 
> 
> I think it's better to handle the headers in the action, like it's done
> for the store_cache where the action store the headers and the filters
> stores the body.
> 
> So what could be done is to dump the headers directly from the action,
> so the applet is created only in the case there is a body to dump, that
> could even improve the performances.
> 
> It will be a requirement in the future if we want to handle properly the
> Vary header because we will need to chose the right object depending on
> the header before setuping the appctx.

Unfortunately that very much sounds like it is above my "pay grade" as
a community contributor to HAProxy. I *do not* plan on implementing
this, because I don't expect to get it right without investing much
effort.

Feel free to take the first 3 patches if you want. I believe they are
good and useful for whoever is going to implement this. I'm going to
link the mailing list thread in the issue for reference.

> Also, when reading the RFC about the 304, I notice that they impose to
> remove some of the entity headers in the case of the weak etag, so the
> output is not exactly the same as the HEAD.
> https://tools.ietf.org/html/rfc2616#section-10.3.5

Oh, yes, indeed. For a weak ETag some headers need to be dropped, good
catch.

Best regards
Tim Düsterhus



Re: [PATCH v3 0/4] Add support for if-none-match for cache responses

2020-09-07 Thread Tim Düsterhus
William,

On 02.09.20 at 07:50, Christopher Faulet wrote:
> It seems good for me. If it's good too for William, I will merge it.
> 

Did you have time to take a look, yet? The final patch version is the
'v3' one.

Best regards
Tim Düsterhus



Re: [RFC PATCH] MEDIUM: Add support for if-none-match for cache responses

2020-09-01 Thread Tim Düsterhus
Christopher,

On 01.09.20 at 10:37, Christopher Faulet wrote:
> I've also a small patch (not pushed yet) that extends the
> http_replace_res_status() function to pass a extra reason. It may be
> IST_NULL to preserve the existing one. But if set, it is replaced. It is
> a small patch so if you prefer to push you function it's ok for me. Let
> me know.

No, please push that one. I will adjust my series to make use of it.

Thanks.
Tim Düsterhus



Re: [RFC PATCH v2 0/4] Add support for if-none-match for cache responses

2020-08-31 Thread Tim Düsterhus
Christopher,

On 31.08.20 at 10:36, Christopher Faulet wrote:
> There is no applet for uncached requests. So there is no reason to call
> the release callback function. Looking at your 4th patch, you should
> manually release 'if_none_match' variable for uncached requests in the
> http_action_req_cache_use() function.

Ah, you are correct of course. What solution would you prefer here?

1. Manually freeing in the else part of `if(res)`
2. Moving the whole If-N-M logic down into the `si_register_handler`
part? I initially didn't do this, because I assumed that this was within
a lock, but now I realize that there's an unlock before the
`si_register_handler`.

> Another comment, it could be good to test the return value for
> http_replace_res_status() and http_replace_res_reason() calls.
> 

That makes sense. I've already adjusted my local branch.

Best regards
Tim Düsterhus



Re: [RFC PATCH] MEDIUM: Add support for if-none-match for cache responses

2020-08-31 Thread Tim Düsterhus
Christopher,

On 31.08.20 at 10:24, Christopher Faulet wrote:
> No, there is no way to set the status code and the reason in one
> function call. But it may be good to have one. For now, we can live with
> two fun calls I guess.
> 

I've added such a function to my series locally.

Best regards
Tim Düsterhus



Re: [RFC PATCH] MEDIUM: Add support for if-none-match for cache responses

2020-08-27 Thread Tim Düsterhus
William,

On 27.08.20 at 18:53, William Lallemand wrote:
> Sorry for the late reply,

All good, it's not arriving earlier than 2.3 anyway :-)

> On Tue, Aug 18, 2020 at 11:19:05PM +0200, Tim Duesterhus wrote:
>> @@ -875,6 +877,7 @@ static void http_cache_io_handler(struct appctx *appctx)
>>  unsigned int len;
>>  size_t ret, total = 0;
>>  
>> +req_htx = htxbuf(&req->buf);
>> res_htx = htxbuf(&res->buf);
>>  total = res_htx->data;
>>  
>> @@ -899,15 +902,31 @@ static void http_cache_io_handler(struct appctx 
>> *appctx)
>>  }
>>  
>>  if (appctx->st0 == HTX_CACHE_HEADER) {
>> +struct http_hdr_ctx inm_ctx = { .blk = NULL };
>> +int etag_match = 0;
>> +
>>  /* Headers must be dump at once. Otherwise it is an error */
>>  len = first->len - sizeof(*cache_ptr) - appctx->ctx.cache.sent;
>>  ret = htx_cache_dump_msg(appctx, res_htx, len, HTX_BLK_EOH);
>> -if (!ret || (htx_get_tail_type(res_htx) != HTX_BLK_EOH) ||
>> -!htx_cache_add_age_hdr(appctx, res_htx))
>> +if (!ret || (htx_get_tail_type(res_htx) != HTX_BLK_EOH))
>> +goto error;
>> +
>> +if (http_find_header(req_htx, ist("if-none-match"), &inm_ctx, 
>> 0)) {
>> +struct http_hdr_ctx etag_ctx = { .blk = NULL };
>> +if (http_find_header(res_htx, ist("etag"), &etag_ctx, 
>> 0)) {
>> +if (isteq(inm_ctx.value, etag_ctx.value)) {
>> +etag_match = 1;
>> +http_replace_res_status(res_htx, (const 
>> struct ist) IST("304"));
>> +http_replace_res_reason(res_htx, (const 
>> struct ist) IST("Not Modified"));
>> +}
>> +}
>> +}
>> +
>> +if (!htx_cache_add_age_hdr(appctx, res_htx))
>>  goto error;
>>  
>>  /* Skip response body for HEAD requests */
>> -if (si_strm(si)->txn->meth == HTTP_METH_HEAD)
>> +if (si_strm(si)->txn->meth == HTTP_METH_HEAD || etag_match)
>>  appctx->st0 = HTX_CACHE_EOM;
>>  else
>>  appctx->st0 = HTX_CACHE_DATA;
>>
>>
> 
> Changing the status and reason in the IO handler is the right thing to
> do since it's for the response.

Okay, good. Is there an easier way to generate a proper status line than
the two function calls?

Just using 'http_replace_res_status' is not sufficient, because then
the result will be something like '304 Ok'. It would be okay if it
cleared the reason, but '304 Ok' is ugly.

> However I don't think that's a good idea to access the request from the
> cache_io_handler. There can be some cases where you can't access the
> headers of the request here. It's suppose to have finished everything in
> the request once here. So you probably need to do the matching in the
> http-request action ( http_action_req_cache_use ) and then store a state
> in the applet which will be read in the io_handler.
> 

It's the same with http-response actions not being able to access
req.hdr. That makes sense to me.

So basically I would need to simply store the 'if-none-match' of the
request within the `appctx`, yes?

Best regards
Tim Düsterhus



Re: Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-27 Thread Tim Düsterhus , WoltLab GmbH
Hi List,

On 27.08.20 at 11:45, Tim Düsterhus, WoltLab GmbH wrote:
>> You can see two h2c connections, one is on the frontend (hence the 'F')
>> and the other one is on the backend. The h2s is the H2 stream, unique to
>> a request (though it comes from a pool, so it will be reused for new
>> streams on any connection once this one is finished). Thus bu just
>> filtering on the h2 and correlating with what you see in the logs (with
>> your source ports), and the date of course, you can quite reliably match
>> connections, hence requests.
> 
> Okay, that works sufficiently well for me. I've extracted the relevant
> parts, redacted any private information and will check with a coworker
> to see if I missed anything. I'll send you the traces + logs in private
> after the contents have been ACKed :-)
> 

Willy and I discussed this in private. My understanding is that this
issue is caused by an inherent limitation of H2 with regard to
head-of-line blocking.

In simple terms:

If the data at the beginning of the backend connection belongs to a
client that is reading its own client connection very slowly or not at
all, then everything within that backend connection will stall until
the slow client finally gets around to reading the data intended for
it.

Basically, a very slow client can block *other* clients that are
unfortunate enough to share the backend connection.

Our resolution is simple: We disable H2 for the backend connections and
use good old H1. We don't *need* end to end H2.
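
Concretely that just means no longer negotiating h2 on the server line;
a sketch of the change, assuming h2 was enabled via ALPN (address and
TLS settings are placeholders):

  backend nginx
      # before: server nginx 127.0.0.1:8443 ssl verify none alpn h2,http/1.1
      # after: stick to HTTP/1.1, so no H2 streams are shared between clients
      server nginx 127.0.0.1:8443 ssl verify none alpn http/1.1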

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-27 Thread Tim Düsterhus , WoltLab GmbH
Willy,

On 27.08.20 at 10:29, Willy Tarreau wrote:
>> I now have the trace results and my HAProxy log where I can correlate
>> the slow requests using the timestamp and path. Unfortunately the trace
>> does not appear to contain the unique-id of the request.
> 
> Sadly, due to the way HPACK works we can't decode the headers being sent.
> There's already a ugly hack being done for the URI and method. I seem to
> remember we peak them from the HTX's start line, but it could not reliably
> be done for all headers. I'd still like to improve this :-/

The unique-id is a property of the stream ((struct stream).unique_id),
so that one should technically be available, no? I would also be happy
with using the internal request counter (the %rt log value), because
that one is part of my unique ID format anyway.

>> Can I somehow filter down the trace file to just the offending requests
>> + possible the requests within the same H2 connection? For privacy
>> reasons I would not like to provide the full trace log, even if it's in
>> a non-public email.
> 
> Of course I understand. You can rely on the "h2c" pointer that appears
> in the trace, it's the H2 connection. For example below, I'm forwarding
> an H2 request from a frontend to a backend:
> 
>   <0>2020-08-27T10:23:26.117936+02:00 [01|h2|0|mux_h2.c:2541] rcvd H2 request 
>  : h2c=0x7f59f0026880(F,FRH) : [1] H2 REQ: GET http://127.0.0.1:4446/ HTTP/2.0
>   <0>2020-08-27T10:23:26.128922+02:00 [01|h2|0|mux_h2.c:5152] sent H2 request 
> : h2c=0x7f59f00382a0(B,FRH) h2s=0x7f59f0037fa0(1,IDL) : [1] H2 REQ: GET 
> http://mail.google.com/ HTTP/2.0
>   <0>2020-08-27T10:23:26.149012+02:00 [01|h2|0|mux_h2.c:2632] rcvd H2 
> response : h2c=0x7f59f00382a0(B,FRH) : [1] H2 RES: HTTP/2.0 301 
>   <0>2020-08-27T10:23:26.149041+02:00 [01|h2|0|mux_h2.c:4790] sent H2 
> response : h2c=0x7f59f0026880(F,FRH) h2s=0x7f59f0027dc0(1,HCR) : [1] H2 RES: 
> HTTP/2.0 301 
> 
> You can see two h2c connections, one is on the frontend (hence the 'F')
> and the other one is on the backend. The h2s is the H2 stream, unique to
> a request (though it comes from a pool, so it will be reused for new
> streams on any connection once this one is finished). Thus bu just
> filtering on the h2 and correlating with what you see in the logs (with
> your source ports), and the date of course, you can quite reliably match
> connections, hence requests.

Okay, that works sufficiently well for me. I've extracted the relevant
parts, redacted any private information and will check with a coworker
to see if I missed anything. I'll send you the traces + logs in private
after the contents have been ACKed :-)

There's even a PHP request within there with an 800ms difference between
nginx and HAProxy.

> Do not hesitate to let me know if you think of something that can be
> improved, traces are still quite young and under exploited, I'm quite
> open to studying potential improvements.

Nothing else for now. Grepping for the pointers worked sufficiently
well. The bigger issue is identifying the request in question in the
first place. I was lucky that the file in this case was fairly
low-trafficked so that I was able to accurately identify it based on the
timestamp and path.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-27 Thread Tim Düsterhus , WoltLab GmbH
Willy,

On 26.08.20 at 17:51, Willy Tarreau wrote:
>> As I said: It's not really reproducible.
> 
> Yeah but we all have a definition of "not really reproducible". As you
> consider that it doesn't happen in HTTP/1 after only 4 hours, for me
> this means you expect it to usually happen at least once in 4 hours.
> If the load is not too high (no more than a few thousands requests per
> second) and you have some disk space, the h2 trace method could prove
> to be useful once you have isolated a culprit in the logs. You could
> even gzip the output on the fly to take less space, they compress very
> well.

My definition of reproducible is "I can write up a list of steps that
makes the issue happen somewhat reliably". This is not the case here.
If I test with e.g. nghttp -a to pull the HTML + all resources, then
everything works smoothly. The same goes if I attempt to pull down the
same static file after seeing an issue within the logs.

An update regarding the H1 numbers: In the 20 hours or so with HTTP/1
enabled, a total of 15 (!) static requests took longer than 45ms, the
maximum being 77ms. This is still more than I would like, but nothing
compared to the H2 performance.

This morning I re-enabled H2 for the backend communication and then
plugged in the tracing. In the half hour since I re-enabled H2 I'm
seeing 160 static requests taking longer than 45ms, with the worst
ones being > 800ms.

I now have the trace results and my HAProxy log where I can correlate
the slow requests using the timestamp and path. Unfortunately the trace
does not appear to contain the unique-id of the request.

Can I somehow filter down the trace file to just the offending
requests, plus possibly the requests within the same H2 connection? For privacy
reasons I would not like to provide the full trace log, even if it's in
a non-public email.

>>> Another thing you can try is to artificially limit
>>> tune.h2.max-concurrent-streams just in case there is contention in
>>> the server's connection buffers. By default it's 100, you can try with
>>> much less (e.g. 20) and see if it seems to make any difference at all.
>>>
>>
>> The fact that disabling HTTP/2 helps could indicate that something like
>> this is the case here. I'll try that tomorrow, thanks.
> 

I've not done this yet; I'd first like to hear how we proceed with the
trace I've collected.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-26 Thread Tim Düsterhus , WoltLab GmbH
Willy,

On 26.08.20 at 16:59, Willy Tarreau wrote:
> On Wed, Aug 26, 2020 at 12:59:55PM +0200, Tim Düsterhus, WoltLab GmbH wrote:
>> Within the HAProxy logs we're sometimes seeing largish 'Tr' values for
>> static file requests. For example 734ms for the example request below.
> 
> That sounds huge. I could have imagined a 200ms time, possibly indicating
> that somewhere between the two, an extra MSG_MORE was present and prevented
> a request and/or response from leaving the machine. But 734ms doesn't
> correspond to anything usual.

There are even worse ones:

nginx:

> Aug 26 09:56:50 *snip* abd21141cdb4[5615]: 5F4631E22E407 "GET *snip*.gif 
> HTTP/2.0" 200 139 0.000

HAProxy:

> Aug 26 09:57:17 *snip* haproxy[3066]: *snip*:57086 [26/Aug/2020:09:56:50.265] 
> https~ *snip*/nginx 0/0/0/27340/27340 200 335 - -  82/82/17/17/0 0/0 
> {*snip*|*snip*} {*snip*|} "GET *snip*.gif HTTP/2.0" 5F4631E22E407

Yes, that's 27340ms. I don't have any more details regarding the
connection reuse for that, unfortunately :-(

FWIW: I disabled HTTP/2 for the backend connection after sending my
initial email and I am not seeing anything worse than 45ms for static
files any more.

>> The two requests in nginx' log:
>>
>>> Aug 26 10:23:21 *snip* abd21141cdb4[5615]: 5F46381900A0 "GET *snip* 
>>> HTTP/2.0" 200 27049 0.070
>>> Aug 26 10:23:26 *snip* abd21141cdb4[5615]: 5F46381E00F5 "GET *snip*.gif 
>>> HTTP/2.0" 200 6341 0.000
>>
>> Note how the timing for the PHP request matches up what HAProxy was
>> seeing: Roughly 70ms / 0.070s. The timing for the GIF request is *way*
>> off, though.
> 
> Stupid question, do the PHP request *always* match ? I'm asking because

I just arbitrarily checked 3 PHP requests and they match. The static
ones *mostly* do as well. It's just that static files taking ages stick
out like a sore thumb, while PHP tends to be all over the place anyway.

It basically was a "grep for static file requests", then "halog -pct",
and wondering why there's a large number at the bottom.

> one point is that haproxy cannot tell the difference between the two.
> However the static one very likely responds instantly, which could justify
> a less common code path being taken and a bug to be triggered.
> 
> However static files might also cause some disk access, and not knowing
> anything about the environment, I'd say it's also possible that the whole
> nginx process gets frozen for the time it takes open() to respond, and if
> for whatever reason there are lots of I/Os and it takes that long, it might
> be possible that it doesn't notice it, depending where the total time is
> retrieved.

See above: HTTP/1 did not show the issue within the last 4 hours, so I
would rule out nginx / the disk being the culprit here.

Static files are served from fast local SSDs with plenty of page
cache. I don't expect disk access to become a bottleneck in the
three-digit millisecond range.

> But it could definitely be a bug in the connection reuse or mux code, or
> course. It's just that I have no idea what could cause this for now.
> 
> If you see it often enough, it might be useful to enable h2 traces and
> watch them on the CLI, or even run a tcpdump between the two to observe
> what's going on. The traces might be easier as a first step as you'll
> see already decoded stuff. For example:
> 
>   (echo "trace h2 sink buf0; trace h2 level user; trace h2 verbosity \
>complete; trace h2 start now;show events buf0 -w -n" ; cat ) | \
>   socat - /path/to/socket > /path/to/logs
> 
> Once you're confident you got it, you can disable all traces using the
> short form "trace 0".

As I said: It's not really reproducible. I'm mostly seeing it after
the fact. I can't even tell whether HAProxy's measurement or nginx's
measurement is correct (i.e. which latency the client actually
experienced).

However tcpdump definitely is not going to be fun, because it's all TLS
with PFS ciphers.

> Another thing you can try is to artificially limit
> tune.h2.max-concurrent-streams just in case there is contention in
> the server's connection buffers. By default it's 100, you can try with
> much less (e.g. 20) and see if it seems to make any difference at all.
> 

The fact that disabling HTTP/2 helps could indicate that something like
this is the case here. I'll try that tomorrow, thanks.
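
For the record, that would just be the following in the global section
(20 being the value suggested above; the default is 100):

  global
      # default is 100; lower it to reduce per-connection buffer contention
      tune.h2.max-concurrent-streams 20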

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-26 Thread Tim Düsterhus , WoltLab GmbH
Ilya,

On 26.08.20 at 13:13, Илья Шипицин wrote:
> nginx buffers output by default.
> 
> can you try
> 
> proxy_request_buffering off;
> proxy_buffering off;
> 

The GIF file with the 734ms difference in response time between the two
logs is a static file served from the local file system.

The proxy_* options do not apply here. In fact nginx is not performing
any proxying at all, it's just doing FastCGI. But FastCGI is not the
thing I am concerned about.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Large difference between response time as measured by nginx and as measured by HAProxy

2020-08-26 Thread Tim Düsterhus , WoltLab GmbH
Hi List,

please keep this email in Cc for all responses, I am not subscribed to
the list.

We are running HAProxy 2.1.8 in front of nginx serving static files and
acting as a FastCGI gateway. HAProxy is communicating with nginx using
HTTP/2. Clients are regular web browsers, mostly using HTTP/2.

Within the HAProxy logs we're sometimes seeing largish 'Tr' values for
static file requests. For example 734ms for the example request below.
The corresponding request in the nginx log shows a $request_time of
0.000s
(http://nginx.org/en/docs/http/ngx_http_log_module.html#var_request_time).

HAProxy log line for a static GIF file:

> Aug 26 10:23:27 *snip* haproxy[3066]: *snip*:1167 [26/Aug/2020:10:23:26.028] 
> https~ *snip*/nginx 0/0/0/734/1719 200 6540 - -  19/19/12/12/0 0/0 
> {*snip*|*snip*} {*snip*|} "GET *snip*.gif HTTP/2.0" *snip*:59348 5F46381E00F5

Some notes regarding the log format: This is the standard HTTP log
format with '%bi:%bp %ID' appended.
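
For completeness, the resulting log-format is approximately this (the
part up to %{+Q}r being the stock HTTP log format from the
documentation):

  log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %bi:%bp %ID"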

According to the '%bp' value this was not the first request within the
backend connection. The backend connection was established 5 seconds
earlier for an HTTP/1.1 request going to PHP. According to the request ID
this was the 160th (00A0) request after a reload (I just recently added
the logging of %bp).

> Aug 26 10:23:21 *snip* haproxy[3066]: *snip*:11594 [26/Aug/2020:10:23:21.148] 
> https~ *snip*/nginx 0/0/2/69/72 200 27432 - -  19/19/0/0/0 0/0 
> {*snip*|*snip*} {*snip*|} "GET *snip* HTTP/1.1" *snip*:59348 5F46381900A0

The two requests in nginx' log:

> Aug 26 10:23:21 *snip* abd21141cdb4[5615]: 5F46381900A0 "GET *snip* HTTP/2.0" 
> 200 27049 0.070
> Aug 26 10:23:26 *snip* abd21141cdb4[5615]: 5F46381E00F5 "GET *snip*.gif 
> HTTP/2.0" 200 6341 0.000

Note how the timing for the PHP request matches up with what HAProxy
was seeing: roughly 70ms / 0.070s. The timing for the GIF request is *way*
off, though.

Any idea what is happening there? It's not really reproducible in any
way. I'll be happy to add more stuff to my logs if that would help. I
can also provide specific configuration settings on request.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Logging using %HP (path) produce different results with H1 and H2

2020-08-25 Thread Tim Düsterhus
Willy,

On 25.08.20 at 14:53, Willy Tarreau wrote:
> So I think it's the right place to open such a discussion (what we should
> log and whether or not it loses info by default or requires to duplicate
> some data while waiting for the response), so that we can reach a better
> and more modern solution. I'm open to proposals.
> 

%ID by default, please.

See also this issue:
https://github.com/haproxy/haproxy/issues/401#issuecomment-562776569
and this:
https://github.com/haproxy/haproxy/issues/771

Best regards
Tim Düsterhus



Re: stable-bot: Bugfixes waiting for a release 2.2 (18), 2.1 (13), 2.0 (8), 1.8 (6)

2020-08-19 Thread Tim Düsterhus
Daniel,

On 19.08.20 at 02:00, stable-...@haproxy.com wrote:
> The current list of patches in the queue is:
>  - 2.2   - MAJOR   : dns: disabled servers through SRV 
> records never recover
>  - 2.2   - MEDIUM  : ssl: fix the ssl-skip-self-issued-ca 
> option
>  - 1.8, 2.0  - MEDIUM  : mux-h2: Don't fail if nothing is 
> parsed for a legacy chunk response
>  - 2.2   - MEDIUM  : ssl: never generates the chain from 
> the verify store
>  - 2.0, 2.1, 2.2 - MEDIUM  : mux-h1: Refresh H1 connection 
> timeout after a synchronous send
>  - 2.0, 2.1, 2.2 - MEDIUM  : htx: smp_prefetch_htx() must always 
> validate the direction
>  - 2.1, 2.2  - MEDIUM  : ssl: memory leak of ocsp data at 
> SSL_CTX_free()
>  - 1.8, 2.0, 2.1, 2.2- MEDIUM  : map/lua: Return an error if a 
> map is loaded during runtime

The indentation is messed up here. Is this caused by the removal of 1.9?

>  - 2.1, 2.2  - MINOR   : arg: Fix leaks during arguments 
> validation for fetches/converters
>  - 1.8, 2.0, 2.1, 2.2- MINOR   : lua: Check argument type to 
> convert it to IP mask in arg validation
>  - 2.1, 2.2  - MINOR   : ssl: fix memory leak at OCSP loading
>  - 2.0, 2.1, 2.2 - MINOR   : snapshots: leak of snapshots on 
> deinit()
>  - 1.8, 2.0, 2.1, 2.2- MINOR   : stats: use strncmp() instead of 
> memcmp() on health states
>  - 2.2   - MINOR   : ssl: ssl-skip-self-issued-ca 
> requires >= 1.0.2
>  - 2.1, 2.2  - MINOR   : lua: Duplicate map name to load it 
> when a new Map object is created
>  - 2.2   - MINOR   : spoa-server: fix size_t format 
> printing
>  - 1.8, 2.0, 2.1, 2.2- MINOR   : lua: Check argument type to 
> convert it to IPv4/IPv6 arg validation
>  - 1.8   - MINOR   : dns: ignore trailing dot
>  - 2.1, 2.2  - MINOR   : converters: Store the sink in an arg 
> pointer for debug() converter
>  - 2.1, 2.2  - MINOR   : lua: Duplicate lua strings in sample 
> fetches/converters arg array
> 

I also have another proposal for readability of this list: Can the
versions be ordered in descending order and possibly aligned within
columns? Here's an example of what I'm thinking of:

- 2.2- MINOR: stuff
- 2.2, 2.1   - MINOR: stuff
- 2.2- MINOR: stuff
- 2.2, 2.1, 2.0  - MINOR: stuff
-   2.0, 1.8 - MINOR: stuff

You can quickly scan the columns of your version that way and do not
have to search for your version in each of the rows.

Best regards
Tim Düsterhus



Re: [PATCH] MEDIUM: cfgparse: Emit hard error on truncated lines

2020-08-19 Thread Tim Düsterhus
William,

On 19.08.20 at 10:54, William Lallemand wrote:
> I think it broke
> reg-tests/stick-table/converteers_ref_cnt_never_dec.vtc:
> 
> https://travis-ci.com/github/haproxy/haproxy/jobs/375236964
> 
> But I can't reproduce localy.

Looking at the test I would suspect that removing the indentation in
front of the brace in line 62 fixes the issue. But I did not test this
either:

https://github.com/haproxy/haproxy/blob/ff4d86becd658f61e45692815f3b9e7a1e66107f/reg-tests/stick-table/converteers_ref_cnt_never_dec.vtc#L62

Best regards
Tim Düsterhus



Re: github template

2020-08-17 Thread Tim Düsterhus
Lukas,

On 16.08.20 at 22:27, Lukas Tribus wrote:
> The changes can be seen here:
> 
> https://github.com/lukastribus/hap-issue-trial/issues/new/choose
> 
> If you agree, I will send the patches.
> 

I created a pull request for each of the three templates. If you are
happy with those then please add the following to the squashed commit:

Co-authored-by: Tim Duesterhus 

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.3-dev3

2020-08-14 Thread Tim Düsterhus
Willy,

On 14.08.20 at 19:23, Willy Tarreau wrote:
> HAProxy 2.3-dev3 was released on 2020/08/14. It added 38 new commits
> after version 2.3-dev2.

I'm not seeing the tag. Did you forget to 'git push'?

Best regards
Tim Düsterhus



Re: Hello ! May I ask you one question ?

2020-08-10 Thread Tim Düsterhus
Axel,

On 10.08.20 at 19:02, Axel DUMAS wrote:
> Can I ask my question here ? Or do I have to write it in an other place ?

This is the appropriate place. However I must admit that your email
looked like spam based on the Subject at first glance.

> In order to test load-balancing, in the config file of HAProxy, I wrote
> theses two things :
> *frontend* java_srv
> *bind* 192.168.0.19:26001
> *mode* tcp
> *default_backend* all_java_srv
> 
> *backend* all_java_srv
> *balance* roundrobin
> *mode* tcp
> *server* srv_1 192.168.0.19:26001

Can you please give your *literal* configuration instead of just the
options you consider important? And please give the configuration in the
original format without the asterisks in there.

> Some fast exchanges seems to be made between the IoT device and my
> server, but when it have to wait, the connection is cut and the device
> is disconnected.

Any error messages? Did you enable logging in HAProxy (you should)? Can
you provide a log message?
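
If logging is not configured yet, a minimal setup could look like this
(the syslog target is just an example):

  global
      log /dev/log local0

  defaults
      log global
      mode tcp
      option tcplog
      timeout connect 5s
      timeout client  30s
      timeout server  30s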

Best regards
Tim Düsterhus



Re: SUBVERS for 2.2 snapshots not correctly filled in?

2020-08-04 Thread Tim Düsterhus
Willy,

On 04.08.20 at 18:34, Willy Tarreau wrote:
> That's rather strange. I'm still comparing the two repos, they look
> exactly similar.
> 
> Ah found it! We don't just need the "atttributes" file but also the
> info/attributes one. I don't remember which one is used when. I'm
> going to update my release check list. We're tackling one release bug
> at a time for each release, hopefully 2.3 will be perfect!
> 

Or you could just commit the .gitattributes file, as this would also
fix the GitHub downloads :-)

Best regards
Tim Düsterhus



SUBVERS for 2.2 snapshots not correctly filled in?

2020-08-04 Thread Tim Düsterhus
Willy,

It appears that the SUBVERS file is broken for 2.2 snapshots:

> [timwolla@/tmp]curl -fsSL 
> 'http://www.haproxy.org/download/2.2/src/snapshot/haproxy-ss-LATEST.tar.gz' 
> |tar -O -xvz --wildcards '*/SUBVERS'
> haproxy-ss-20200801/SUBVERS
> -$Format:%h$

It looks good for 2.1:

> [timwolla@/tmp]curl -fsSL 
> 'http://www.haproxy.org/download/2.1/src/snapshot/haproxy-ss-LATEST.tar.gz' 
> |tar -O -xvz --wildcards '*/SUBVERS'
> haproxy-ss-20200801/SUBVERS
> -3a5bfcc
> 

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-1.9.16

2020-07-31 Thread Tim Düsterhus
Christopher,
Willy,

On 31.07.20 at 14:14, Christopher Faulet wrote:
> The 1.9 branch is EOL now. Thus, it is the last 1.9 release. No further 
> release 
> should be expected. It contains all pending patches marked to be backported 
> to 
> the 1.9 to leave it in a proper state. Have a look at the changelog below for 
> the complete list of fixes.
> 
> You should have no reason to deploy it in a production environment. Use the 
> 2.0 
> or above instead. No specific support should no longer be expected on the 
> 1.9. 
> This branch will not receive fixes anymore.
> 
> Thanks everyone for you help and your contributions !

Don't forget to update the color in the version table on haproxy.org

Best regards
Tim Düsterhus



Re: HAProxy 1.9 Snapshot generation broken?

2020-07-31 Thread Tim Düsterhus
Willy,

On 31.07.20 at 04:22, Willy Tarreau wrote:
>> either I am dumb or it appears that the generation of HAProxy 1.9
>> snapshots is broken. In
>> http://www.haproxy.org/download/1.9/src/snapshot/ the newest tarball is
>> from Jun, 12th.
> 
> Wow, seems you're right! However I really don't understand what's
> broken: when I start the script by hand it works, and the *only*

Thanks. I just wanted to confirm that the Lua receive loop fix in
HAProxy 1.9 works before the last 1.9 release. It does:
https://github.com/TimWolla/haproxy-auth-request/pull/20/checks?check_run_id=930857201

> difference with other working versions is exactly the branch number
> in the BRANCH variable :-(  And permissions are the same everywhere.
> 
> I'll check logs in case I find anything.

Maybe Christopher only pushed yesterday for some reason?

Best regards
Tim Düsterhus




Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-30 Thread Tim Düsterhus
Ilya,

Am 29.07.20 um 21:15 schrieb Илья Шипицин:
> as for migrating to GH actions in general, I suggest the following
> 
> 1) I will have a look at red builds (I did not expect red LibreSSL builds)

They look like a fluke to me; the test for the field converter failed.
But I did not bother to investigate for a PoC patch.

> 2) I'll run arm64/s390/ppc64le for big matrix to see if there are red builds

I would be fine with leaving those architectures on Travis for the time
being. amd64 is by far the most important one and we can
benefit from the improved GitHub Actions there. For the other arches a
basic smoke test (does it compile?) should catch most of the issues.

> 3) can we track "pro et contra" of travis <--> GH in dedicated GH issue ?
> issue is much like mailing list, but it is pinned

Feel free to create an issue if you feel it is helpful. I don't
particularly care either way (mailing list vs issue).

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-30 Thread Tim Düsterhus
Ilya,

Am 30.07.20 um 08:50 schrieb Илья Шипицин:
> as for failing osx builds, I think it might be due to long folder name
> 
> https://github.com/TimWolla/haproxy/runs/924097118?check_suite_focus=true#step:10:361
> 
> it is 100+ characters.  there's also unix socket in that dir, which has
> limitation on filename length.
> I use TMPDIR=/tmp in travis to make path shorter

Ah, yeah. That makes sense, I remember that issue from Travis.

> (but it would be nice to output logs on failure to see that error in log)

Yes, I acknowledged that as "Known Issues". My patch is a very rough
first version that will need some love. But I wanted to get some
feedback as it is in an acceptable state now.

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-29 Thread Tim Düsterhus
Ilya,

Am 29.07.20 um 19:05 schrieb Илья Шипицин:
>> Known issues:
>> - Apparently the git commit is not properly detected during build. The HEAD
>>   currently shows as 2.3-dev1.
>>
> 
> there's something wrong with git checkout (or the way we use it)
> 
> have a look (do not ask me why, I've no idea why it is 2.0-dev...)
> 
> Run ./haproxy -vv
> HA-Proxy version 2.0-dev2-f7c257-3122 2020/07/29 - https://haproxy.org/
> Status: development branch - not safe for use in production.
> 
> https://github.com/chipitsine/haproxy/runs/924210892?check_suite_focus=true
> 

That's interesting. I suspect that the shallow clones are at fault here.
That's why I attempted to increase the fetch-depth to 100. Travis uses 50.
Unfortunately that did not help. We'll have to investigate this.

Or maybe someone knows the necessary git internals off the top of their head
and can chime in :-)

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-29 Thread Tim Düsterhus
Ilya,

Am 29.07.20 um 18:55 schrieb Илья Шипицин:
>> +for ssl in [
>> +  "stock",
>> +  "OPENSSL_VERSION=1.0.2u",
>> +  "OPENSSL_VERSION=1.1.0l",
>> +  "OPENSSL_VERSION=1.1.1f",
>> +  "LIBRESSL_VERSION=2.9.2",
>> +  "LIBRESSL_VERSION=3.0.2",
>> +  "LIBRESSL_VERSION=3.1.1",
>> +]:
>> +  flags = ["USE_OPENSSL=1"]
>> +  if ssl != "stock":
>> +flags.append("SSL_LIB=${HOME}/opt/lib")
>> +flags.append("SSL_INC=${HOME}/opt/include")
>>
> 
> I do not like that polluting assignment. I know where it came from, it was
> me :)
> I'm going to get rid of it, hope in 1-2 weeks.
> 

Yes, I copied that from Travis, because the workflow aims to be a Travis
CI replacement.

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-29 Thread Tim Düsterhus
Ilya,

Am 29.07.20 um 19:22 schrieb Илья Шипицин:
> there's also gitlab official mirror (ran by William)
> https://gitlab.com/haproxy.org/haproxy
> 
> 
> in theory we can use gitlab-ci as well :)
> 

Please no more external services! I've specifically reimplemented the
Travis tests in GitHub Actions, because they are native to GitHub and
their dynamic matrix generation support is pretty awesome. I am
hoping that we can turn off Travis sooner rather than later.

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-29 Thread Tim Düsterhus
Ilya,

Am 29.07.20 um 18:58 schrieb Илья Шипицин:
> generally, there was an idea of keeping "developer velocity" as high as
> possible.
> i.e. to perform check in 3-5 min after the push.
> 
> Github Actions allows 4 parallel builds (same as Travis CI), I thought of
> moving some builds to GH actions.
> also, that was a reason to move some rare configurations to "cron" trigger.
> 
> Big matrix takes 30 min. It is not acceptable.
> 

GitHub Actions allows 20 concurrent jobs:

https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#usage-limits

And with self hosted runners (maybe HAProxy Technologies is willing to
sponsor some?) there is no limit.

Anyhow: The Workflow can be optimized for sure. It currently performs no
caching at all. But a simple gcc build with all feature flags still
completes in 1:45 minutes and gcc with a custom OpenSSL in 3:09 minutes.
That's super fast. The Mac OS one is pretty slow, because all the VTest
tests keep timing out for some reason. It will be faster if the tests pass.

Overall the whole workflow shows as 8:10 minutes for me, which can get
into "3 to 5 minute" territory once some optimizations are performed.

Best regards
Tim Düsterhus



Re: [PATCH] CI: Expand use of GitHub Actions for CI

2020-07-29 Thread Tim Düsterhus
Ilya,
List,

Am 29.07.20 um 18:49 schrieb Tim Duesterhus:
> [long snip]

You can find an example run of that GitHub Action workflow here:

https://github.com/TimWolla/haproxy/actions/runs/187396816

I've attached a screenshot of how nice it looks in the commit overview
compared to Travis.

The build output itself also is much cleaner, because it clearly
separates the steps and even allows collapsing substeps (e.g. the ldd
output):

https://github.com/TimWolla/haproxy/runs/924096838?check_suite_focus=true

Ilya, feel free to adjust around in that workflow file. But please take
care to keep it as clean as possible. I hope that the Python support
helps here. The Travis one is a small mess to be honest :-)

Best regards
Tim Düsterhus


Re: DOC: Update documentation / comments to be gender neutral

2020-07-26 Thread Tim Düsterhus
Willy,

Am 26.07.20 um 22:34 schrieb Willy Tarreau:
> Just a small point, Jackie, as I noticed you fixed several bad constructs
> of "he" instead of "it", it's worth noting that in some latin languages
> (like French or Spanish), a number of common words are arbitrarily male
> or female. A car is female, a truck is male, a plane is male and a space
> shuttle is female. I could give plenty of examples like this, as there's
> almost no equivalent for "it". [...]

Today I learned. I thought that having gendered nouns was unique to
German or as Mark Twain said in "The Awful German Language"
(https://en.wikipedia.org/wiki/The_Awful_German_Language):

"Every noun has a gender, and there is no sense or system in
distribution; so the gender of each must be learned separately and by
heart. There is no other way. To do this one has to have a memory like a
memorandum-book. In German, a young lady has no sex, while a turnip has.
Think what overwrought reverence that shows for the turnip, and what
callous disrespect for the girl."

And indeed sometimes I accidentally slip in a gendered pronoun when
communicating in English.

> This comforts my beliefs that avoiding mentions of the user in general is
> by far the best solution, and that when not possible (due to generalizing),
> it's best to use a plural form (since in this case what is described doesn't
> apply to a single user but to all of them). In this case instead of writing

Please note that the "they" is not a plural form here. It's a so-called
"Singular they": https://en.wikipedia.org/wiki/Singular_they

In German we don't have an equivalent for that. I don't know about
French. By now it feels pretty natural to me, though.

Best regards
Tim Düsterhus



Re: DOC: Update documentation / comments to be gender neutral

2020-07-26 Thread Tim Düsterhus
Jackie,

Am 26.07.20 um 19:02 schrieb Jackie Tapia:
> Thanks for the suggestion, Tim.
> 
> That commit message sounds good to me.
> 

Your patch looks good to me now. I've put Willy into Cc to perform the
final review and commit the patch.

Best regards
Tim Düsterhus



Re: DOC: Update documentation / comments to be gender neutral

2020-07-26 Thread Tim Düsterhus
Jackie,

Am 26.07.20 um 18:36 schrieb Jackie Tapia:
> I've attached an updated patch.
> 
Thank you, the Diff LGTM now (however I did not check whether you missed
any gendered phrasing).

Unfortunately it appears that you missed my remark regarding the missing
commit message body.

Please make sure to always use a single line commit title *plus* at
least a single line commit message body. If the title wraps within the
generated patch files it is probably too long.

Based off your patch and existing commit message, may I suggest the
following?

---
DOC: Use gender neutral language

This patch updates the documentation files and code comments to avoid
the use of gender specific phrasing in favor of "they" or "it".
---

Best regards
Tim Düsterhus




Re: SRV records resolution failure if Authority section is present

2020-07-26 Thread Tim Düsterhus
Jerome,

Am 26.07.20 um 17:25 schrieb Jerome Magnin:
> Please find a proper fix attached. We already know if we have entries in
> the Authority section (dns_p->header.nscount > 0), so just skip them
> when they are present and only use the Additional records.

Regarding the commit message: Please add backporting information to the
end of the commit message body (I believe it should be 2.2+).

Regarding the patch:

+   /* skip 2 bytes for ttl */
+   reader += 4;

Either the comment or the code is incorrect here.

Best regards
Tim Düsterhus



Show: GitHub Action for easy installation of HAProxy and VTest

2020-07-25 Thread Tim Düsterhus
Hi List,
Ilya,
Adis,

some of you might know my haproxy-auth-request Lua script [1]. After
having good experiences with VTest for HAProxy itself I started building
VTest based tests for haproxy-auth-request running on GitHub Actions [2]
to test the script for all HAProxy branches starting 1.8.

Initially I performed all the necessary preparation steps manually
within my workflow file, but felt it was unnecessarily verbose having to
compile HAProxy manually.

Thus today I worked on moving the download and compilation of HAProxy
and VTest into a re-usable GitHub Action that can easily be embedded
without needing to reinvent the wheel everywhere:

https://github.com/TimWolla/action-install-haproxy

You can use the test.yml workflow in my haproxy-auth-request repository
as a guidance on how to use it. Currently the only supported build flags
are USE_OPENSSL and USE_LUA. The former because it is generally useful,
the latter, because I need it.

I hope it comes in useful to those of you who build Lua libraries or
develop an SPOA.

I'd like to hear your feedback and I'll happily accept pull requests in
case you run into issues.

I've put Ilya into Cc, because he might be interested after doing all
this CI work on HAProxy and Adis, because I plan to finally use the
haproxy-lua-http library once
https://github.com/haproxy/haproxy/issues/744 is backported to 1.8.

Best regards
Tim Düsterhus

[1] https://github.com/TimWolla/haproxy-auth-request
[2]
https://github.com/TimWolla/haproxy-auth-request/blob/master/.github/workflows/test.yml



Re: [ANNOUNCE] haproxy-2.2.1

2020-07-23 Thread Tim Düsterhus
Willy,

Am 23.07.20 um 09:46 schrieb Willy Tarreau:
> these ones. I think we'll soon emit another 2.1 given that quite some fixes
> have been accumulating since 2.1.7.

Any ETA for that?

You also might want to do the final 1.9 as suggested previously:
https://www.mail-archive.com/haproxy@formilux.org/msg37580.html. It
would allow to remove the 1.9 label from the issue tracker and clean out
quite a few already-fixed issues from the list.

Best regards
Tim Düsterhus



Re: DOC: Update documentation / comments to be gender neutral

2020-07-23 Thread Tim Düsterhus
Jackie,

First: I'm a community contributor, so my review might not necessarily
reflect that of the HAProxy project. I'm also not a native English speaker.

Regarding your patch I have a few comments:

From your PR message:

> Also, remove some trailing whitespaces from the files modified for funsies.

I disagree with this change. This makes the patch larger than it needs
to be and it distracts from the important parts of it. If anything it
should become a dedicated patch that is separately committed.

Regarding your commit messages:

Please have a look at the CONTRIBUTING file you modified. Commit
messages need to be prefixed with a "tag" ("DOC:" would probably be
appropriate) and also contain a body:

>As a rule of thumb, your patch MUST NEVER be made only of a subject line,
>it *must* contain a description. Even one or two lines, or indicating
>whether a backport is desired or not. It turns out that single-line commits
>are so rare in the Git world that they require special manual (hence
>painful) handling when they are backported, and at least for this reason
>it's important to keep this in mind.

(https://github.com/haproxy/haproxy/blob/f1ea47d8960730c79cc71fc634b3d7e5909d5683/CONTRIBUTING#L562-L567)

Then to the patch itself:

- It must not modify the license files doc/lgpl.txt and doc/gpl.txt,
because they are standard license texts:
https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
- In lua.txt it now reads "They just works carefully". I believe it
should be "They just work carefully".
- In proxy-protocol.txt it now reads "to hide it's activities". I
believe it should be "to hide its activities".

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.0.16

2020-07-20 Thread Tim Düsterhus
Christopher,

Am 17.07.20 um 17:19 schrieb Christopher Faulet:
> Le 17/07/2020 à 16:58, Tim Düsterhus a écrit :
>> Christopher,
>>
>> Am 17.07.20 um 16:34 schrieb Christopher Faulet:
>>> HAProxy 2.0.16 was released on 2020/07/17. It added 45 new commits
>>> after version
>>> 2.0.15.
>>
>> I'm a bit surprised seeing a new 2.0 before a new 2.1 / 2.2. Will they
>> also come today? I'm especially interested in 2.1.8 because of this
>> patch:
>> http://git.haproxy.org/?p=haproxy-2.1.git;a=commit;h=4cbc30d405b06e7ff1ba6a6f3be58af786af81bc
>>
>>
>> While I deployed a custom built 2.2 today for #758 I don't want to do
>> this on all machines due to the increased operational effort.
>>
>> Also releasing a new 2.0 before the newer branches violates the contract
>> that upgrading HAProxy to a newer branch will never reintroduce fixed
>> bugs. I believe I read this contract somewhere in the past, but I can't
>> find it right now.
>>
> 
> I released the 2.0.16 in first because of a major bug on the 2.0.15
> about random backend connections on connection retry. So, I guess we may

Yes, I understand that 2.0.16 was a very important release for users of
2.0.x.

> emit a 2.1 too. There are some annoying medium bugs and the 2.1.7 is 1.5
> month old. For the 2.2, there are still pending bugs we should fix
> before emitting a new release. At least the issue #758 as you said, but
> also the #756.
> 

I found the "contract" I was referring to:
https://github.com/haproxy/haproxy/blob/a267b5df4a20016a7daee5f457b219c1a2c48f39/BRANCHES#L98-L108

I quote, highlighting mine:

> The fix will be backported into 1.9 and from there into 1.8. 1.9.5
**will be issued with the fix before** 1.8.19 will be issued. This
guarantees that for any version 1.8 having the fix, there always exists
a version 1.9 with it as well.

I understand that releasing a new version takes time to write the
announcement and stuff and I also understand that 2.0 had priority. But
I don't think that excuses violating one's own rules that exist for good
reason. Users upgrading from 2.0.16 to 2.2.0 will now see bugs that have
already been fixed in 2.0.16, but not 2.2.0.

Best regards
Tim Düsterhus



Re: Segfault on 2.2.0 and UUID fetch regression

2020-07-17 Thread Tim Düsterhus
Luke,

Am 17.07.20 um 17:55 schrieb Luke Seelenbinder:
> To follow up on this—is it easier if I create this as a GitHub issue?
> 

It will certainly not get lost in the depths of the email archive and it
makes it easy to link commits to issues and issues to commits just by
mentioning the number in the message.

Personally I create issues for everything, even if I fix them myself a
few minutes later. In case my patch isn't correct at least the issue
will still remain in the tracker.

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.0.16

2020-07-17 Thread Tim Düsterhus
Christopher,

Am 17.07.20 um 16:34 schrieb Christopher Faulet:
> HAProxy 2.0.16 was released on 2020/07/17. It added 45 new commits after 
> version
> 2.0.15.

I'm a bit surprised seeing a new 2.0 before a new 2.1 / 2.2. Will they
also come today? I'm especially interested in 2.1.8 because of this
patch:
http://git.haproxy.org/?p=haproxy-2.1.git;a=commit;h=4cbc30d405b06e7ff1ba6a6f3be58af786af81bc

While I deployed a custom built 2.2 today for #758 I don't want to do
this on all machines due to the increased operational effort.

Also releasing a new 2.0 before the newer branches violates the contract
that upgrading HAProxy to a newer branch will never reintroduce fixed
bugs. I believe I read this contract somewhere in the past, but I can't
find it right now.

Best regards
Tim Düsterhus



Re: proposing a haproxy 2.0.16 release (was [BUG] haproxy retries dispatch to wrong server)

2020-07-15 Thread Tim Düsterhus
William,
Christopher,

Am 11.07.20 um 00:03 schrieb Tim Düsterhus:
> Lukas,
> 
> Am 10.07.20 um 21:04 schrieb Lukas Tribus:
>>> I finally pushed this fix in the 2.0. Note the same bug affected the HTTP 
>>> proxy
>>> mode (using http_proxy option). In this case, the connection retries is now
>>> disabled (on the 2.0 only) because the destination address is definitely 
>>> lost.
>>> It was the easiest way to work around the bug without backporting a bunch of
>>> sensitive patches from the 2.1.
>>
>> Given the importance and impact of this bug you just fixed (at least 5
>> independent people already reported it on GH and ML) and the amount of
>> other important fixes already in the tree (including at least one
>> crash fix), I'm suggesting to release 2.0.16. Unless there are other
>> important open bugs with ongoing troubleshooting?
>>
>>
> 
> Agreed. With 2.2 being released this is a good opportunity to release
> updates for all branches down to 1.9 at least to properly mark the
> latter as EoL.
> 
> I'd also appreciate at least backports happening without a release for
> 1.6 and 1.7, allowing us to close off old, fixed, issues on GitHub.

Bumping this.

Best regards
Tim Düsterhus



Re: [PATCH v2 0/2] Certificate Generation Enhancements

2020-07-11 Thread Tim Düsterhus
Shimi,

Am 11.07.20 um 09:28 schrieb Gersner:
> Gentle ping on this. Can I assist with providing more information?

William responded on the v1 of your patch. I assume he didn't see that
there was a v2, because it's a separate email thread. I put him in Cc.

https://www.mail-archive.com/haproxy@formilux.org/msg37884.html
https://www.mail-archive.com/haproxy@formilux.org/msg37885.html

Best regards
Tim Düsterhus



Re: proposing a haproxy 2.0.16 release (was [BUG] haproxy retries dispatch to wrong server)

2020-07-10 Thread Tim Düsterhus
Lukas,

Am 10.07.20 um 21:04 schrieb Lukas Tribus:
>> I finally pushed this fix in the 2.0. Note the same bug affected the HTTP 
>> proxy
>> mode (using http_proxy option). In this case, the connection retries is now
>> disabled (on the 2.0 only) because the destination address is definitely 
>> lost.
>> It was the easiest way to work around the bug without backporting a bunch of
>> sensitive patches from the 2.1.
> 
> Given the importance and impact of this bug you just fixed (at least 5
> independent people already reported it on GH and ML) and the amount of
> other important fixes already in the tree (including at least one
> crash fix), I'm suggesting to release 2.0.16. Unless there are other
> important open bugs with ongoing troubleshooting?
> 
> 

Agreed. With 2.2 being released this is a good opportunity to release
updates for all branches down to 1.9 at least to properly mark the
latter as EoL.

I'd also appreciate at least backports happening without a release for
1.6 and 1.7, allowing us to close off old, fixed, issues on GitHub.

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.2.0

2020-07-09 Thread Tim Düsterhus
Ryan,

Am 09.07.20 um 20:34 schrieb Ryan O'Hara:
> I'm currently packaging this for Fedora. It seems to build just fine on
> Fedora 32 and rawhide. Is there any new build options or dependencies to be
> aware of? I'm looking at the Makefile now and nothing jumps out at me. That
> said, I am totally capable of missing something.
> 

I've just run `git diff v2.2-dev0..v2.3-dev0 -- Makefile`. The only
thing I'm seeing that might be of interest to you is HLUA_PREPEND_PATH /
HLUA_PREPEND_CPATH if you plan to ship any HAProxy specific Lua
libraries that don't make sense in the global Lua library path.

See this mailing list thread for details:
https://www.mail-archive.com/haproxy@formilux.org/msg35839.html

Specifically this email for the patch:
https://www.mail-archive.com/haproxy@formilux.org/msg35896.html

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.2-dev12

2020-07-04 Thread Tim Düsterhus
Willy,

Am 04.07.20 um 12:11 schrieb Willy Tarreau:
> Thank you. I now rebased next and applied your series on top of it. I'm
> not going to review what goes into -next, or it defeats the purpose of
> using it as a pending queue. There's enough time so that anyone can
> suggest a change and even if some issues get merged, we have 6 months
> to spot them till next release.

Okay, fair enough. Just make sure to read the backporting information
when moving next to master after the release. For some of the patches
backporting might make sense, for others not so much.

But I suppose those are going to be treated just like regular BUGs and
will automatically be presented during the backporting sessions?

>> I had a little dispute with the limits I configured within my mail
>> server. My own account is restricted to sending 15 emails per hour for
>> abuse prevention in case my password ever leaks. So 2 patches did not
>> make it through the first time. I hope I did not completely mess up
>> threading when sending the last 2. My email limit is now up to 20 :-)
> 
> I would be terribly handicaped with such a configuration, that would
> force me to stop even more frequently and chat with coworkers :-)
> 

15 emails per hour is one every 4 minutes. If I were sending emails at
that rate I'd just use an actual chat :-) That is of course unless I
send largish patch bombs in an automated fashion.

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.2-dev12

2020-07-04 Thread Tim Düsterhus
Willy,

Am 04.07.20 um 11:13 schrieb Willy Tarreau:
>> Or you tell me to send it now and apply it to 'next'.
> 
> It's up to you. I personally like to keep my topic branch till the last
> moment because that leaves more more freedom to revisit my patches later,
> but when I'm certain patches are definitive, I prefer to get them merged
> and to forget them. So you decide, and both are fine to me.
> 

Okay I sent them now. Details are in the cover letter of 'v2'.

I had a little dispute with the limits I configured within my mail
server. My own account is restricted to sending 15 emails per hour for
abuse prevention in case my password ever leaks. So 2 patches did not
make it through the first time. I hope I did not completely mess up
threading when sending the last 2. My email limit is now up to 20 :-)

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.2-dev12

2020-07-04 Thread Tim Düsterhus
Willy,

Am 04.07.20 um 08:10 schrieb Willy Tarreau:
> Tim also had some post-2.2 fixes pending to improve free() calls and
> remove some valgrind complaints on exit.
> 

Good that you mention it, because I forgot to give a heads-up: The series
grew a little bit further. My 'deinit' branch as of now is 15 commits
ahead of master. So I'll send an updated series once 2.2 is out.

Or you tell me to send it now and apply it to 'next'. While I'm here:
There's already one commit sitting in the next branch to be applied
after the release.

Best regards
Tim Düsterhus



[PATCH] BUG/MINOR: http_act: don't check capture id in backend (2)

2020-07-03 Thread Tim Düsterhus , WoltLab GmbH
Willy,

find the patch attached.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

-- 

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P
From ea6bdbfa54b98d0b8a39e4e25ea5271de933867a Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Fri, 3 Jul 2020 13:43:42 +0200
Subject: [PATCH] BUG/MINOR: http_act: don't check capture id in backend (2)
To: haproxy@formilux.org
Cc: w...@1wt.eu

Please refer to commit 19a69b3740702ce5503a063e9dfbcea5b9187d27 for all the
details. This follow up commit fixes the `http-response capture` case, the
previous one only fixed the `http-request capture` one. The documentation was
already updated and the change to `check_http_res_capture` is identical to
the `check_http_req_capture` change.

This patch must be backported together with 19a69b3740702ce5503a063e9dfbcea5b9187d27.
Most likely this is 1.6+.
---
 src/http_act.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/http_act.c b/src/http_act.c
index 1c7a1d4e6..2eac12549 100644
--- a/src/http_act.c
+++ b/src/http_act.c
@@ -723,7 +723,10 @@ static int check_http_res_capture(struct act_rule *rule, struct proxy *px, char
 	if (rule->action_ptr != http_action_res_capture_by_id)
 		return 1;
 
-	if (rule->arg.capid.idx >= px->nb_rsp_cap) {
+	/* capture slots can only be declared in frontends, so we can't check their
+	 * existence in backends at configuration parsing step
+	 */
+	if (px->cap & PR_CAP_FE && rule->arg.capid.idx >= px->nb_rsp_cap) {
 		memprintf(err, "unable to find capture id '%d' referenced by http-response capture rule",
 			  rule->arg.capid.idx);
 		return 0;
-- 
2.27.0



Re: ssl_c_sha256 ?

2020-06-29 Thread Tim Düsterhus
Stephane,

Am 29.06.20 um 12:56 schrieb Stephane Martin (stepham2):
> Thank you for your quick answers!
> 
> So I understand that it is possible for haproxy >= 2.1. For haproxy 2.0, got 
> to backport the sha2 filter, right ?

That is correct. I expect the commit I linked to apply pretty seamlessly
to HAProxy 2.0, it contains all you need.

One small note: The correct terminology for "sha2 filter" is "sha2
converter".

Best regards
Tim Düsterhus



Re: ssl_c_sha256 ?

2020-06-29 Thread Tim Düsterhus
Jarno,

Am 29.06.20 um 12:46 schrieb Jarno Huuskonen:
>> The ssl_c_sha1 is simply a hash of the DER representation of the
>> certificate. So you can just hash it with the sha2 converter:
>>
>> ssl_c_sha256,sha2(256)
> 
> I think the first fetch should be ssl_c_der ?
> (ssl_c_der,sha2(256))
> 

You are right, of course.

While adjusting the example from the commit message I replaced the 'der'
instead of the 'f'.
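
For anyone finding this thread later, a minimal sketch of how the
corrected expression could be used (the header name is made up and the
bind line of course needs 'verify' for a client certificate to exist):

  http-request set-header X-SSL-Client-SHA256 %[ssl_c_der,sha2(256),hex]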

Best regards
Tim Düsterhus



Re: ssl_c_sha256 ?

2020-06-29 Thread Tim Düsterhus
Stephane,

Am 29.06.20 um 12:01 schrieb Stephane Martin (stepham2):
> In haproxy documentation I don't see any option to work with the sha256 
> fingerprint of the peer certificate.
> 
> - Is there any other way to get that ?

Yes, see this commit message:
https://github.com/haproxy/haproxy/commit/d4376302377e4f51f43a183c2c91d929b27e1ae3

The ssl_c_sha1 is simply a hash of the DER representation of the
certificate. So you can just hash it with the sha2 converter:

ssl_c_sha256,sha2(256)

Best regards
Tim Düsterhus



Re: HAProxy 2.2 release date

2020-06-26 Thread Tim Düsterhus
Willy,

Am 26.06.20 um 11:44 schrieb Willy Tarreau:
>> If you need to plan ahead for migrations due to any reason, you can
>> stay on LTS releases and you will have plenty of time for carrying out
>> such migrations.
>> Currently, that is either version 1.8 with EOL planned 2022-Q4, or
>> version 2.0 with EOL planned 2024-Q2.
> 
> That's exactly the purpose of LTS versions! I'd add that in addition,
> if there's anything of interest in 2.2 for your deployment you should
> definitely try it *before* it's released, to make sure any possible
> bug that would affect you is addressed in time.
> 

FWIW: As of the 20200624 snapshot I'm running it on my personal box
without issues. Moderately complex configuration with HTTP / TCP / DNS /
Proxy Protocol / etc. Very low traffic, though.

However I'm not yet testing out the new shinys in the configuration
(http-after-response), in case I have to revert back.

My two previous attempts failed due to heap / allocator corruption
(issues #681 and #700).

Best regards
Tim Düsterhus



Re: Doing directory based access control (Survey / Poll of admin expectations)

2020-06-26 Thread Tim Düsterhus
Ilya,
Willy,

Am 26.06.20 um 11:19 schrieb Willy Tarreau:
>> Tim, can we schedule this for 2.3 ? It seems to be "too much" for 2.2
> 
> Rest assured that for me it's not even imaginable to break 2.2 with
> such sort of things. We have sufficient issues to address right now!

Agreed. This is something non-trivial and the solution should not be
rushed. Personally I've already adjusted my rules.

I just filed a tracking bug for it on GitHub, so that the discussion
will not get lost in the depths of the email archive:

https://github.com/haproxy/haproxy/issues/714

Best regards
Tim Düsterhus



Re: Doing directory based access control (Survey / Poll of admin expectations)

2020-06-25 Thread Tim Düsterhus
Hi List,

Am 22.06.20 um 21:13 schrieb Tim Düsterhus:
> What kind of (configuration) advice would you give me? Do you have any
> concerns? I consider *anything* a valid answer here and I'd like to hear
> from both experienced admins and "newbies".
> 
> I'll give the "solution" once I get some replies :-)
> 

I've also forwarded the question to the #haproxy channel on Freenode and
I also got some replies in private. The suggestions mostly looked
something like Jonathan's reply:

  acl internal_net src 192.168.0.0/16
  acl admin_request path_beg /admin/
  http-request deny if admin_request !internal_net

Indeed that's also something I did myself within my production
configurations. However it's insecure.

One correct solution is the one given by Ilya and hinted at by Jonathan:
Don't do this in HAProxy; instead do it in the backend, which knows
exactly how it's going to interpret the path.

However that might be painful if authentication is not going to be
performed based on the IP address but e.g. on TLS client certificates,
because you would need to ship the DN within a header to the backend.
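
Roughly something like this untested sketch (the header name is made up):

  http-request set-header X-SSL-Client-DN %[ssl_c_s_dn]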

Another correct solution would be a strict whitelist using
`http-request allow [...] if [...]` with an unconditional `http-request
deny` at the end.
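
A rough sketch of that approach, with made-up paths:

  acl internal_net src 192.168.0.0/16
  http-request allow if internal_net
  http-request allow if { path_beg /public/ }
  http-request deny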

However that might be painful for the situation I described in the
initial mail ("During upgrades of this off-the-shelf software new files
might be added for new features.").

So what exactly is the issue with the http-request deny rule as given above?

The issue I was concerned about is that HAProxy does not perform
normalization when comparing the path against something within an ACL.
Specifically HAProxy does not perform normalization of non-reserved
percent-encoded characters (RFC 3986#2.3). Nginx does.

So a simple `curl localhost/%61dmin` would circumvent the rule in
HAProxy. But nginx happily interprets this as a request to the /admin/
folder, allowing access to unauthorized users.

While this percent normalization could possibly be done generically,
because these URLs are defined in RFC 3986 to be equivalent, other types
of normalization would be harder.

So a request to `curl localhost/public/../admin/` might or might not
cause the backend to interpret the request as a request to /admin/, but
a blanket normalization would be incorrect here, because API backends
might not refer to paths on a file system. This can be fixed with a
`http-request deny if { path_sub ../ }` or something similar.

On Windows a request to `curl localhost/ADMIN` might also access the
admin/ folder, but that is easily fixed with the `-i` flag (unless
unicode characters are used, I guess?).
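
That is, the ACL from the example above would become something like the
following (ignoring the percent-encoding problem for a moment):

  acl admin_request path_beg -i /admin/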

However the percent-encoding normalization can not be implemented within
the configuration as of right now (to the best of my knowledge). Using
the url_dec converter is incorrect, because it also decodes stuff like
`%2F` to `/` which is disallowed during normalization and can introduce
issues on its own.

The disagreement I had with Willy was that I said that percent
normalization should happen by default (because of the RFC). However
Willy responded that this would take away some visibility, because you
would never see these unnormalized URLs in the logs and thus might not
notice an ongoing attack. Also some bogus clients and backends might
rely on these unnormalized forms.

Willy: Please correct me if I misrepresented your arguments or left out
something important.

Concluding:
- The documentation should be updated to warn administrators that
http-request deny must be used very carefully when combining it with a path.
- HAProxy should gain the ability to correctly normalize URLs (i.e. not
using the url_dec version). How exactly that would look is not yet clear.
  - It could be a `http-request normalize-path percent,dotdot,Y` action.
  - It could be a `normalize(percent)` converter
  - The `path` fetch could be extended to gain an argument, specifying
the normalization requested.
  - An option within the `defaults` section could enable normalization
for everything.

If you have anything to add after reading this mail: Please do!

Best regards
Tim Düsterhus



Re: [PATCH v2 0/2] Warnings for truncated lines

2020-06-22 Thread Tim Düsterhus
Willy,
Lukas,

sorry, I forgot to hit save on my cover-letter before running git
send-email. Here's what I *actually* wanted to say:
Am 22.06.20 um 22:10 schrieb Lukas Tribus:
>> So what we could do if we want to do something clean is the following:
>>   - detect \0 or truncation on a line and note its position ;
>>   - if we find another line once a truncated line has been found, we
>> emit a warning about "stray NUL character at position X on line Y,
>> this will not be supported in 2.3 anymore" and reset this position.
>>   - at the end of the loop, if the last NUL position is still set, we
>> emit a warning about "missing LF on last line, file might have been
>> truncated, this will not be supported in 2.3 anymore".
>>
>> And in 2.3 we turn that into errors.
>>
>> What do you guys think about it ?
> 
> I like it, yes.

Find a patch attached.

I tested this with:

printf "listen foo\nbind *:8080\x00test\nmode http\x00stuff\nlog
global\x00bad\n"

And this is the result:

$ ./haproxy -c -f ./example.cfg
[WARNING] 173/225406 (30776) : parsing [./example.cfg:2]: Stray NUL character at position 12. This will become a hard error in HAProxy 2.3.
[WARNING] 173/225406 (30776) : parsing [./example.cfg:3]: Stray NUL character at position 10. This will become a hard error in HAProxy 2.3.
[WARNING] 173/225406 (30776) : parsing [./example.cfg:4]: Missing LF on last line, file might have been truncated at position 11. This will become a hard error in HAProxy 2.3.
[WARNING] 173/225406 (30776) : config : missing timeouts for proxy 'foo'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Warnings were found.
Configuration file is valid

I guess a truncated last line cannot be differentiated from a file that
does not end with a newline, because fgets() consumes the full line
(triggering the eof), even if reading a NUL byte?

Best regards
Tim Düsterhus



Doing directory based access control (Survey / Poll of admin expectations)

2020-06-22 Thread Tim Düsterhus
Hi List,

I was having a bit of off-list disagreement with Willy regarding how
HAProxy ACLs should work and what (experienced) administrators may or
may expect from them. I am arguing about something I believe many
administrators might accidentally do incorrectly. I'm intentionally
being vague here, to not spoil any results of this survey.

Let's pretend I'm new to this list and send the following request for help:

---

We are using an off-the-shelf PHP 7.2 application (think some bulletin
board software), running behind nginx as the FastCGI gateway and static
file server. In front of that nginx we are running HAProxy 2.0 in 'mode
http'.

This off-the-shelf PHP application has an integrated admin control panel
within the /admin/ directory. The frontend consists of several "old
style" PHP files, handling the various paths (e.g. login.php,
register.php, create-thread.php). During upgrades of this off-the-shelf
software new files might be added for new features.

My boss asked me to restrict the access to the admin control panel to
our internal network (192.168.0.0/16) for security reasons. Access to
the user frontend files must not be restricted.

How can I do this?

---

What kind of (configuration) advice would you give me? Do you have any
concerns? I consider *anything* a valid answer here and I'd like to hear
from both experienced admins and "newbies".

I'll give the "solution" once I get some replies :-)

Best regards
Tim Düsterhus



Re: [PATCH] BUG/MINOR: cfgparse: Support configurations without newline at EOF

2020-06-22 Thread Tim Düsterhus
Lukas,

Am 22.06.20 um 18:41 schrieb Lukas Tribus:
> On Mon, 22 Jun 2020 at 18:16, Tim Duesterhus  wrote:
>>
>> Fix parsing of configurations if the configuration file does not end with
>> an LF.
> 
> ... but it's also warning about it at the same time.
> 
> So it's unclear to me:
> 
> Do we support a configuration without trailing LF or not?
> 
> If yes, there is no point in a warning (just as < 2.1).
> If no, we should abort and error out.
> 
> We can "just warn" if we plan to deprecate it in future release, and
> at that point, explain that fact. But I find it strange to warn about
> something, without a clear indication about *WHY* (unsupported,
> deprecated, etc).
> 
> 
> Thoughts?
> 
I consider leaving out a trailing newline a bug for these reasons:

1. POSIX defines a line as "A sequence of zero or more non-<newline>
characters plus a terminating <newline> character."
(https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_206).
A non-terminated line thus is not a line and handling non-terminated
lines is a bit wonky.

2. Leaving out the trailing newline causes ./haproxy -f config/ with
config/ being a folder not to be equivalent to ./haproxy -f config with
config being the catenation of all files of the config/ folder (⏎
indicates a missing trailing newline in my shell):

> $ ls -alh
> total 84K
> drwxrwxr-x  2 timwolla timwolla 4.0K Jun 22 18:47 .
> drwxrwxrwt 29 root root  68K Jun 22 18:47 ..
> -rw-rw-r--  1 timwolla timwolla   22 Jun 22 18:47 a.cfg
> -rw-rw-r--  1 timwolla timwolla   23 Jun 22 18:47 b.cfg
> $ cat a.cfg
> listen foo
> bind *:8080⏎
> $ cat b.cfg
> listen foo2
> bind *:8081⏎
> $ cat a.cfg b.cfg > cat
Then:

> $ ./haproxy -d -f /tmp/example/
> [WARNING] 173/185127 (17393) : parsing [/tmp/example//a.cfg:2]: line is not 
> terminated by a newline (LF / '\n').
> [WARNING] 173/185127 (17393) : parsing [/tmp/example//b.cfg:2]: line is not 
> terminated by a newline (LF / '\n').
> [WARNING] 173/185127 (17393) : config : missing timeouts for proxy 'foo'.
>| While not properly invalid, you will certainly encounter various problems
>| with such a configuration. To fix this, please ensure that all following
>| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
> [WARNING] 173/185127 (17393) : config : missing timeouts for proxy 'foo2'.
>| While not properly invalid, you will certainly encounter various problems
>| with such a configuration. To fix this, please ensure that all following
>| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
> Note: setting global.maxconn to 524276.
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result FAILED
> Total: 3 (2 usable), will use epoll.
> 
> Available filters :
>   [SPOE] spoe
>   [COMP] compression
>   [TRACE] trace
>   [CACHE] cache
>   [FCGI] fcgi-app
> Using epoll() as the polling mechanism.
> ^C⏎
> $ ./haproxy -d -f /tmp/example/cat
> [NOTICE] 173/185130 (17415) : haproxy version is 2.2-dev10-55729a-1
> [NOTICE] 173/185130 (17415) : path to executable is ./haproxy
> [ALERT] 173/185130 (17415) : parsing [/tmp/example/cat:2] : 'bind 
> *:8080listen' unknown keyword 'foo2'. Registered keywords :
> [STAT] level 
> [STAT] expose-fd 
> [STAT] severity-output 
> [ ALL] accept-netscaler-cip 
> [ ALL] accept-proxy
> [ ALL] backlog 
> [ ALL] id 
> [ ALL] maxconn 
> [ ALL] name 
> [ ALL] nice 
> [ ALL] process 
> [ ALL] proto 
> [UNIX] gid 
> [UNIX] group 
> [UNIX] mode 
> [UNIX] uid 
> [UNIX] user 
> [ TCP] defer-accept
> [ TCP] interface 
> [ TCP] mss 
> [ TCP] tcp-ut 
> [ TCP] tfo
> [ TCP] transparent
> [ TCP] v4v6
> [ TCP] v6only
> [ TCP] namespace 
> [WARNING] 173/185130 (17415) : parsing [/tmp/example/cat:3]: line is not 
> terminated by a newline (LF / '\n').
> [ALERT] 173/185130 (17415) : Error(s) found in configuration file : 
> /tmp/example/cat
> [ALERT] 173/185130 (17415) : Fatal errors found in configuration.
Thus even if we never drop support for leaving out the trailing newline,
I consider that something worthwhile to warn about.

Best regards
Tim Düsterhus



Re: how can I add an HTTP to prevent clickjacking to the stats page?

2020-06-18 Thread Tim Düsterhus
Cristian,

Am 18.06.20 um 15:20 schrieb Cristian Grigoriu:
> Thank you for your workaround, it works!
> 
> Here's the output of my haproxy -vv command:
> 
> HA-Proxy version 1.7.5-2 2017/05/17

You really should upgrade to HAProxy 1.7.12 at the very least:
http://www.haproxy.org/bugs/bugs-1.7.5.html

Starting with the *upcoming* HAProxy 2.2 the following will work
(http-after-response instead of http-response):

Config:

frontend stats
mode http
bind *:8080
stats enable
stats uri /
http-after-response set-header X-Frame-Options sameorigin

Example:

$ http --headers localhost:8080
HTTP/1.1 200 OK
cache-control: no-cache
content-type: text/html
transfer-encoding: chunked
x-frame-options: sameorigin

Best regards
Tim Düsterhus



Re: Conditional request logging ?

2020-06-18 Thread Tim Düsterhus
Mariusz,

Am 18.06.20 um 12:59 schrieb Mariusz Gronczewski:
> Is there a way to log requests that match the given ACL (and only that
> ACL) ? I know I can capture headers by ACL but I can't seem to find any
> way to do that for whole log entries.
> 

Use http-response set-log-level silent. See:
http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#4.2-http-response%20set-log-level
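
A minimal, untested sketch (ACL name and path are made up); note that the
same action also exists in the http-request rule set if you want to match
on request contents directly:

  frontend www
    mode http
    bind *:80
    default_backend app
    acl interesting path_beg /api/
    # keep log lines only for requests matching the ACL, drop the rest
    http-request set-log-level silent if !interesting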

Best regards
Tim Düsterhus



Re: VTest does not test deinit

2020-06-16 Thread Tim Düsterhus
William,

Am 16.06.20 um 11:27 schrieb William Lallemand:
>> Actually we must always remember that while convenient, VTest's
>> primary goal is to test a proxy by synchronizing the two sides (which
>> is what basically no other testing tool can reliably do). If we want
>> to run deeper tests on other process-oriented behaviors, it can make
>> sense to rely on other tools. For example vtest is not the best suited
>> to testing command line or process stability. It has no process
>> management, can leak background processes and needs to wait for a
>> timeout to detect a crash.
>>
>> My point above is that as long as *proxy* tests are compatible with
>> stopping using SIGUSR1 I think I'd be fine with the change. But if
>> doing this actually results in more pain to test the proxy features,
>> I'd rather stay away from this and switch to a distinct test series
>> for this. That's where I'd draw the line.
>>
> 
> I think that's a good idea but It will probably because it will let us
> test the deinit() with all the diversity of configuration we have in the
> reg-tests.

Yes, that was the intention behind my suggestion. The reg-tests already
test various configurations during regular operation. It should be easy
enough to also test a clean deinit() for those configurations by using
SIGUSR1 instead of SIGINT.

I don't expect any major issues with the normal testing operation,
because the deinit() happens after the regular proxy tests have
finished. And a buggy deinit() still is a bug.

> But I also agree with Willy and we should be careful about the
> consequences of this change. If there is too much changes to handle it
> may be painful to do it before the 2.2 release.
> 

I'd definitely postpone changing anything about VTest until after 2.2.
Fixes for any bugs found that way will be backported anyway, so nothing
is really lost by waiting for the release.

Best regards
Tim Düsterhus



Re: Broken SNI with crt-list for HAProxy 2.1.x after upgrade from Stretch to Buster

2020-06-15 Thread Tim Düsterhus
William,

Am 15.06.20 um 14:56 schrieb William Lallemand:
> I think I found the problem, could you try the attached patch for 2.1?
> 

I'd prefer not to, because I don't have a staging system where I could
easily reproduce the issue (and generating SSL certs to test this
properly is annoying). I was encountering the issue on a production box
and I don't like to mess around with manually compiled HAProxy versions
more than necessary.

For me the issue is sufficiently resolved by specifying the DH bit size
(and the fact that I plan to upgrade to 2.2 as soon as packages are
available).

I trust you that the patch fixes *a* bug, even if it might not be *my*
bug, thus feel free to apply.

Best regards
Tim Düsterhus



VTest does not test deinit

2020-06-14 Thread Tim Düsterhus
Hi List,
Willy,
Ilya,

I noticed that the reg-tests were unable find the issue reported by
William here:
https://www.mail-archive.com/haproxy@formilux.org/msg37637.html

This is because VTest never performs a "soft" shutdown of HAProxy;
instead it uses SIGINT -> SIGTERM -> SIGKILL. Thus the deinit() will
never trigger.

Fixing this will require a change to VTest:
https://github.com/vtest/VTest/blob/b9e9e03fdeebd494783ae1dd8e6008f5c1e3a4bc/src/vtc_haproxy.c#L786
needs to be SIGUSR1 (and the switch below adjusted).

Making the change in my local repository and running the tests that
don't need any additional USE_XXX falgs causes 3 tests to fail with 2 of
them triggering an assertion within VTest.

It would have caught the bug reported by William, though.

Concluding: It probably makes sense to adjust VTest to use SIGUSR1
instead of SIGINT; the latter is handled like SIGTERM in HAProxy anyway,
so there is no need for this distinction. I did not yet look into the
details of the failing tests, though.

Best regards
Tim Düsterhus



Re: [PATCH 3/3] BUG/MINOR: haproxy: Free rule->arg.vars.expr during deinit_act_rules

2020-06-14 Thread Tim Düsterhus
William,

Am 14.06.20 um 16:59 schrieb Tim Düsterhus:
> I can reproduce this with the following config:
> 
> frontend http
> mode http
> bind 127.0.0.1:80
> 
> http-request redirect scheme https if METH_GET
> 
>> $ valgrind ./haproxy -c -f ./crasher.cfg
>> [...]
>> ==6484== Invalid read of size 8
>> ==6484==at 0x4C4747: release_sample_expr (sample.c:1427)
>> ==6484==by 0x525A43: deinit_act_rules (haproxy.c:2558)
>> ==6484==by 0x5271CA: deinit (haproxy.c:2706)
>> ==6484==by 0x528017: deinit_and_exit (haproxy.c:2871)
>> ==6484==by 0x528E90: init (haproxy.c:2205)
>> ==6484==by 0x41F382: main (haproxy.c:3127)
>> ==6484==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
> 
> I would assume that this is related to METH_GET being a default acl.

I spoke too soon. I can even remove the `if METH_GET` and it segfaults
for a simple `http-request redirect scheme https`.

Best regards
Tim Düsterhus



Re: [PATCH 3/3] BUG/MINOR: haproxy: Free rule->arg.vars.expr during deinit_act_rules

2020-06-14 Thread Tim Düsterhus
William,

Am 14.06.20 um 16:28 schrieb William Dauchy:
> After this patch, I'm getting a segfault after firing an USR1 signal
> to trigger the deinit:
> 
> #0  0x55b653aaec34 in release_sample_expr (expr=0x55b654766bc0) at
> src/sample.c:1427
> b1427   list_for_each_entry_safe(conv_expr, conv_exprb,
> >conv_exprs, list)
> (gdb) bt
> #0  0x55b653aaec34 in release_sample_expr (expr=0x55b654766bc0) at
> src/sample.c:1427
> #1  0x55b653b0d884 in deinit_act_rules (rules=0x55b6547644b0) at
> src/haproxy.c:2559
> #2  0x55b653b0f09d in deinit () at src/haproxy.c:2707
> #3  0x55b653b0fef8 in deinit_and_exit (status=0) at src/haproxy.c:2872
> #4  0x55b6539f7256 in main (argc=, argv= out>) at src/haproxy.c:3771
> 

I can reproduce this with the following config:

frontend http
mode http
bind 127.0.0.1:80

http-request redirect scheme https if METH_GET

> $ valgrind ./haproxy -c -f ./crasher.cfg
> [...]
> ==6484== Invalid read of size 8
> ==6484==at 0x4C4747: release_sample_expr (sample.c:1427)
> ==6484==by 0x525A43: deinit_act_rules (haproxy.c:2558)
> ==6484==by 0x5271CA: deinit (haproxy.c:2706)
> ==6484==by 0x528017: deinit_and_exit (haproxy.c:2871)
> ==6484==by 0x528E90: init (haproxy.c:2205)
> ==6484==by 0x41F382: main (haproxy.c:3127)
> ==6484==  Address 0x0 is not stack'd, malloc'd or (recently) free'd

I would assume that this is related to METH_GET being a default acl.

Best regards
Tim Düsterhus



Re: Broken SNI with crt-list for HAProxy 2.1.x after upgrade from Stretch to Buster

2020-06-13 Thread Tim Düsterhus
William,

Am 13.06.20 um 16:46 schrieb Tim Düsterhus:
> tune.ssl.default-dh-param 2048 solved the issue for me.
> 
> I'd argue that this is a bug in HAProxy nonetheless, because apparently
> the crt-list file is not fully parsed in case of DH parameter warnings
> (not errors). In fact I can remember that some similar issue was
> previously fixed.
> 

Could this be another version of this issue:
https://github.com/haproxy/haproxy/issues/483? Should I file a bug report?

Best regards
Tim Düsterhus



Re: Broken SNI with crt-list for HAProxy 2.1.x after upgrade from Stretch to Buster

2020-06-13 Thread Tim Düsterhus
Dear List,

Am 13.06.20 um 16:11 schrieb Tim Düsterhus:
> Any ideas?
> 

Looking at the startup warnings is always a good idea:

> Jun 13 14:40:52 *snip* haproxy[15815]: [WARNING] 164/144052 (15815) : 
> Reexecuting Master process
> Jun 13 14:40:52 *snip* haproxy[15815]: [WARNING] 164/144052 (15815) : parsing 
> [/usr/share/haproxy//004-http.cfg:96] : 'bind :::443' :
> Jun 13 14:40:52 *snip* haproxy[15815]:   'crt-list' : error processing line 1 
> in file '/usr/share/haproxy/crt-list' : unable to load default 1024 bits DH 
> parameter for certificate '/usr/share/haproxy/ssl/*snip*
> Jun 13 14:40:52 *snip* haproxy[15815]:   , SSL library will use an 
> automatically generated DH parameter.
> Jun 13 14:40:52 *snip* haproxy[15815]: [WARNING] 164/144052 (15815) : parsing 
> [/usr/share/haproxy//010-*snip*.cfg:7] : 'bind :::993' :
> Jun 13 14:40:52 *snip* haproxy[15815]:   unable to load default 1024 bits DH 
> parameter for certificate '/usr/share/haproxy/ssl/*snip*'.
> Jun 13 14:40:52 *snip* haproxy[15815]:   , SSL library will use an 
> automatically generated DH parameter.
> Jun 13 14:40:52 *snip* haproxy[15815]: [WARNING] 164/144052 (15815) : parsing 
> [/usr/share/haproxy//030-*snip*.cfg:14] : 'bind :::6' :
> Jun 13 14:40:52 *snip* haproxy[15815]:   unable to load default 1024 bits DH 
> parameter for certificate '/usr/share/haproxy/ssl/*snip*'.
> Jun 13 14:40:52 *snip* haproxy[15815]:   , SSL library will use an 
> automatically generated DH parameter.
> Jun 13 14:40:52 *snip* haproxy[15815]: [WARNING] 164/144052 (15815) : parsing 
> [/usr/share/haproxy//999-stats.cfg:2] : 'bind *:1936' :
> Jun 13 14:40:52 *snip* haproxy[15815]:   'crt-list' : error processing line 1 
> in file '/usr/share/haproxy/crt-list' : unable to load default 1024 bits DH 
> parameter for certificate '/usr/share/haproxy/ssl/*snip*
> Jun 13 14:40:52 *snip* haproxy[15815]:   , SSL library will use an 
> automatically generated DH parameter.

tune.ssl.default-dh-param 2048 solved the issue for me.
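
For reference, that is a one-liner in the global section (alternatively
ssl-dh-param-file can point to a custom DH parameters file):

  global
    tune.ssl.default-dh-param 2048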

I'd argue that this is a bug in HAProxy nonetheless, because apparently
the crt-list file is not fully parsed in case of DH parameter warnings
(not errors). In fact I can remember that some similar issue was
previously fixed.

Best regards
Tim Düsterhus



Broken SNI with crt-list for HAProxy 2.1.x after upgrade from Stretch to Buster

2020-06-13 Thread Tim Düsterhus
Dear List,

I finally got around to upgrading my personal box from Debian Stretch to
Debian Buster. Unfortunately after the upgrade all my HTTP hosts failed
to work, because I use `strict-sni` as well as a strict `crt-list`.

Software versions before the upgrade:

HAProxy 2.1.7-1~bpo9+1
libssl  1.1.0l-1~deb9u1

Software versions after the upgrade:

HAProxy 2.1.7-1~bpo10+1
libssl  1.1.1d-0+deb10u3

HAProxy 2.0.14-1~bpo10+1 on Buster works with SNI, but breaks my Dovecot
due to the TLV stuff.

Relevant bits and pieces from my config:

Broken with 2.1.7 on Buster:

bind :::443 ssl crt-list /usr/share/haproxy/crt-list strict-sni alpn
h2,http/1.1

Working with 2.1.7 on Buster:

bind :::443 ssl crt /usr/share/haproxy/ssl strict-sni alpn h2,http/1.1

My crt-list looks like this:

*snip*.pem
*snip*.pem [ca-file  /etc/haproxy/ssl-client/*snip*  verify  required]

With a total of 24 single-SAN certificates, 3 of those having a ca-file
with verify required (the same ca-file for all of them).

For the time being I used `crt` with a folder without a crt-list and
killed off the domains protected by client certificate authentication.
But of course this is no real solution :-)

Any ideas?

Best regards
Tim Düsterhus



Re: [PATCH 0/2] Clean up disabled build warnings

2020-06-11 Thread Tim Düsterhus
Willy,
William

Am 29.05.20 um 14:35 schrieb Tim Duesterhus:
> These two patches re-enable some build warnings in the Makefile. The first one
> only enables those that did not trigger for me at all.
> 
> The second one re-enables -Wimplicit-fallthrough. I could not adjust all
> places, because I was not sure whether they work as intended.
> 
> I've Cc't Willy (stream_interface, pattern) and William (cli), because they
> where the ones that wrote the switch statements. I've also added Ilya, because
> applying the second patch will most likely fail the Travis build.
> 
> I'd say that the one in 'pattern' is a bug and that the other two just need a
> /* fall through */ comment.
> 
> See the build log below for what I'm seeing within a debian:sid Docker
> container:
> 
> [...]
> 
> Tim Düsterhus (2):
>   BUILD: Remove nowarn for warnings that do not trigger
>   BUILD: Re-enable -Wimplicit-fallthrough
> 
>  Makefile | 3 ---
>  src/acl.c| 2 ++
>  src/checks.c | 4 ++--
>  src/hlua.c   | 2 ++
>  src/peers.c  | 8 
>  5 files changed, 10 insertions(+), 9 deletions(-)
> 

With the new -dev9 I'd like to move this thread up the INBOX again, in
case you want to resolve this before the release.

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.2-dev9

2020-06-11 Thread Tim Düsterhus
Willy,

Am 11.06.20 um 11:06 schrieb Willy Tarreau:
> So we're getting pretty good for a release very soon. I think that this
> might be the last development release so it can be considered almost as a
> release candidate, and that if everything goes well, we could release by the
> end of next week, which is close to initial estimates.
> 

I would assume that the release of 2.2 is also the date where 1.9 goes
from EoL to unmaintained? You might or might not want to plan a final
1.9 release then as well.

Best regards
Tim Düsterhus



Re: No access to git.haproxy.org from Travis CI

2020-06-11 Thread Tim Düsterhus
Bjoern,
Willy,

Am 11.06.20 um 11:24 schrieb bjun...@gmail.com:
> i have a Travis CI job that is pulling/cloning a repo from git.haproxy.org,
> but unfortunately this isn't working anymore (i believe since May, 12).
> 
> Output Travis CI job:
> 
> [...]
> 
> Is there any IP blocking or something else in place?
> 

See also:
https://github.com/haproxy/haproxy/issues/49#issuecomment-633521350

Best regards
Tim Düsterhus



Re: Proposal to resolve (again) the include dependency hell

2020-06-07 Thread Tim Düsterhus
Willy,

Am 05.06.20 um 21:09 schrieb Willy Tarreau:
> I'm just asking here that the regular contributors have a glance at the
> branch named "20200605-rework-include-final" and honestly say how they
> feel about adopting this (even if they don't care, that's fine). Such
> work may only be done very late in the LTS cycle, and always has an
> impact on backports later, so I'd rather not do that again for the next
> few years!

I don't care. My editor of choice allows me to easily jump to the
declaration of whatever I want to look up. So I don't really think in
terms of files anyway.

Best regards
Tim Düsterhus



Re: Peers Protocol "Table Type"

2020-06-02 Thread Tim Düsterhus
Emeric,
Willy,

Am 02.06.20 um 15:10 schrieb Emeric Brun:
> Thank you Tim!
> 
> Here the updated patch.

This looks good to me now. I trust that you actually tested the changes.

Reviewed-by: Tim Duesterhus 

Best regards
Tim Düsterhus



Re: Peers Protocol "Table Type"

2020-06-02 Thread Tim Düsterhus
Emeric,

Am 02.06.20 um 11:29 schrieb Emeric Brun:
> In attachement a proposed patch for this issue.
> 

Thanks. The changes to the doc look good to me.

Regarding peers.c:

> +/* network key types;
> + * network types were directly and mistakenly
> + * mapped on sample types, to keep backward
> + * compatiblitiy we keep those values but
> + * we now use a internal/network mapping
> + * to avoid further mistakes adding or
> + * modifying internals types
> + */
> +enum {
> +PEER_KT_ANY = 0,  /* any type */
> +PEER_KT_RESV1,/* UNUSED */
> +PEER_KT_SINT, /* signed 64bits integer type */
> +PEER_KT_RESV2,/* UNUSED */

Maybe call this RESV3 to make it clear that the numeric value is '3'.

> +PEER_KT_IPV4, /* ipv4 type */
> +PEER_KT_IPV6, /* ipv6 type */
> +PEER_KT_STR,  /* char string type */
> +PEER_KT_BIN,  /* buffer type */
> +PEER_KT_TYPES /* number of types, must always be last */
> +};

-

> + * Note: Undeclared mapping maps entry to SMP_ST_ANY == 0

This should read SMP_T_ANY.

Also backporting instructions are missing from the commit message.

Other than that the patch looks good to me, but I didn't actually test a
binary compiled from it.

Best regards
Tim Düsterhus



Re: [PATCH 0/2] Clean up disabled build warnings

2020-05-29 Thread Tim Düsterhus
Willy,

Am 29.05.20 um 14:57 schrieb Willy Tarreau:
> Hi Tim,
> 
> On Fri, May 29, 2020 at 02:35:49PM +0200, Tim Duesterhus wrote:
>> Hi there,
>>
>> These two patches re-enable some build warnings in the Makefile. The first 
>> one
>> only enables those that did not trigger for me at all.
> 
> OK, I agree with not removing warnings that don't trigger.

Should the 'not' in front of 'removing' be removed in that sentence?

>> The second one re-enables -Wimplicit-fallthrough. I could not adjust all
>> places, because I was not sure whether they work as intended.
>>
>> I've Cc't Willy (stream_interface, pattern) and William (cli), because they
>> were the ones that wrote the switch statements. I've also added Ilya, 
>> because
>> applying the second patch will most likely fail the Travis build.
> 
> Indeed if there are so few left, it might be worth trying to address
> them and re-enabling the warning.

It's also super easy to silence any false positives of that warning by
adding a comment.
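
For illustration, a standalone sketch (not taken from the HAProxy sources)
of the kind of comment GCC's -Wimplicit-fallthrough recognizes; the bare
comment right before the next case label is what matters:

    #include <stdio.h>

    enum state { STATE_INIT, STATE_RUN };

    static void handle(enum state s)
    {
        switch (s) {
        case STATE_INIT:
            printf("init\n");
            /* fall through */
        case STATE_RUN:
            printf("run\n");
            break;
        }
    }

    int main(void)
    {
        handle(STATE_INIT); /* prints "init" followed by "run" */
        return 0;
    }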

>> I'd say that the one in 'pattern' is a bug and that the other two just need a
>> /* fall through */ comment.
> 
> No opinion yet since I've not read that code for a while. However I'd like
> the code to be fixed before we enable the warning, as I absolutely don't
> want to see a build warning during regular development. We encourage
> building with -Werror and this would force developers to remove it which
> would be a huge step backwards!

I am fine with that. That's why I put all the involved developers in Cc.

> That reminds me, I seem to have noticed some warnings a few months ago
> when building for i386, maybe something related to printf formats having
> %l when ints were used instead. It's been a long time and I don't even
> remember if we've fixed that. It just happens to be the right period in
> the development cycle to start to address such things. The cleaner the
> better.

Possibly also have a look at -Wclobbered. It only triggers at two places
within Lua with something regarding 'setjmp'. It might be possible to
re-enable that one as well, but I don't understand that code.

Best regards
Tim Düsterhus



Re: haproxy 2.2-dev8-7867525 - 100% cpu usage on 1 core after config 'reload'

2020-05-28 Thread Tim Düsterhus
Pieter,

Am 29.05.20 um 00:45 schrieb PiBa-NL:
> I 'suspect' it has something to do with the healthchecks though... (and
> their refactoring as i think happened.?.)

This appears to be correct.

> Anyhow perhaps this is already enough for someone to take a closer look.?
> If more info is needed ill try and provide :).
> 
> Regards,
> PiBa-NL (Pieter)
> 
> *Reproduction (works 99% of the time..):*
>   haproxy -W -f /var/etc/haproxy-2020/haproxy.cfg
>   kill -s USR2 17683
> 
> *haproxy.cfg*
> frontend www
>     bind            127.0.0.1:81
>     mode            http
> backend testVPS_ipv4
>     mode            http
>     retries            3
>     option            httpchk OPTIONS /Test HTTP/1.1\r\nHost:\ test.test.nl
>     server            vps2a 192.168.30.10:80 id 10109 check inter 15000
> backend O365mailrelay
>     mode            tcp
>     option            smtpchk HELO
>     no option log-health-checks
> server-template            O365smtp 2 test.mail.protection.outlook.com:25 id 122 check inter 1
> 

I can reproduce the issue with your configuration on

> HA-Proxy version 2.2-dev8-fa9d78-30 2020/05/28 - https://haproxy.org/
> Status: development branch - not safe for use in production.
> Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
> Running on: Linux 4.4.0-179-generic #209-Ubuntu SMP Fri Apr 24 17:48:44 UTC 
> 2020 x86_64


Backtrace is as follows:

> (gdb) bt full
> #0  eb_next (node=0x19e2e60) at ebtree/ebtree.h:571
> t = 0x19e2e61
> #1  ebpt_next (ebpt=0x19e2e60) at ebtree/ebpttree.h:77
> No locals.
> #2  deinit_tcpchecks () at src/checks.c:5523
> rs = 
> r = 
> rb = 
> node = 0x19e2e60
> #3  0x004d1da3 in deinit () at src/haproxy.c:2762

strace does not show any further activity.

Best regards
Tim Düsterhus



Debian packaging note (was: stable-bot: Bugfixes waiting for a release 2.1 (52), 2.0 (45))

2020-05-28 Thread Tim Düsterhus
Vincent,

Am 28.05.20 um 12:41 schrieb Tim Düsterhus:
> Okay, I've done what I really wanted to avoid and built my own HAProxy.
> I'm now running HAProxy 2.1.5-1~~~timwolla+1 and I hope that it will
> smoothly upgrade to Vincent's build once it is released.
> 

While researching how to build a 2.1.5 .deb based off your 2.1.4 sources
I noticed that Debian QA complained that HAProxy's compiler flags were
hidden [1]. You should be able to fix that by adjusting MAKEARGS in
debian/rules to include 'V=1':

> MAKEARGS=V=1\
>DESTDIR=debian/haproxy \
>PREFIX=/usr \
>IGNOREGIT=true \
>MANDIR=/usr/share/man \
>DOCDIR=/usr/share/doc/haproxy \
>USE_PCRE2=1 \
>USE_PCRE2_JIT=1 \
>USE_OPENSSL=1 \
>USE_ZLIB=1 \
>USE_LUA=1 \
>LUA_INC=/usr/include/lua5.3 \
>EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"

Best regards
Tim Düsterhus

[1] https://qa.debian.org/bls/packages/h/haproxy.html



Re: stable-bot: Bugfixes waiting for a release 2.1 (52), 2.0 (45)

2020-05-28 Thread Tim Düsterhus
Willy,

Am 28.05.20 um 09:23 schrieb Willy Tarreau:
> Please do me a favor, just check that this pre-release is OK for you:
> 
>http://git.haproxy.org/?p=haproxy-2.1.git;a=snapshot;h=HEAD;sf=tgz
> 
> I'd really hate having to release it just to have to emit yet another
> one to fix the same issue again :-/
> 

Okay, I've done what I really wanted to avoid and built my own HAProxy.
I'm now running HAProxy 2.1.5-1~~~timwolla+1 and I hope that it will
smoothly upgrade to Vincent's build once it is released.

> [root@~]haproxy -vv
> HA-Proxy version 2.1.5-1~~~timwolla+1 2020/05/28 - https://haproxy.org/
> Status: stable branch - will stop receiving fixes around Q1 2021.
> Known bugs: http://www.haproxy.org/bugs/bugs-2.1.5.html
> Running on: Linux 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64
> Build options :
>   TARGET  = linux-glibc
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/pwd/haproxy-2.1.5=. 
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
> -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement 
> -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
> -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value 
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
>   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
> USE_ZLIB=1 USE_SYSTEMD=1
> 
> Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT 
> +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +REGPARM 
> -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
> +CRYPT_H -VSYSCALL +BACKTRACE +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 
> -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES 
> -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
> 
> Default settings :
>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with multi-threading support (MAX_THREADS=64, default=8).
> Built with OpenSSL version : OpenSSL 1.1.0l  10 Sep 2019
> Running on OpenSSL version : OpenSSL 1.1.0l  10 Sep 2019
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
> Built with Lua version : Lua 5.3.3
> Built with network namespace support.
> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
> IP_FREEBIND
> Built with PCRE2 version : 10.22 2016-07-29
> PCRE2 library supports JIT : yes
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Running on zlib version : 1.2.8
> Compression algorithms supported : identity("identity"), deflate("deflate"), 
> raw-deflate("deflate"), gzip("gzip")
> Built with the Prometheus exporter as a service
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>   h2 : mode=HTTP   side=FE|BE mux=H2
> fcgi : mode=HTTP   side=BEmux=FCGI
> <default> : mode=HTTP   side=FE|BE mux=H1
> <default> : mode=TCP    side=FE|BE mux=PASS
> 
> Available services :
>   prometheus-exporter
> 
> Available filters :
>   [SPOE] spoe
>   [CACHE] cache
>   [FCGI] fcgi-app
>   [TRACE] trace
>   [COMP] compression

My Postfix + Dovecot still works, as evidenced by the fact that I am able
to read your email and send a reply. My HTTP services also work.

Best regards
Tim Düsterhus



Re: [PATCH] skip reg-tests/connection/proxy_protocol_send_unique_id_alpn.vtc on CentOS 6

2020-05-27 Thread Tim Düsterhus
Ilya,

Am 27.05.20 um 22:53 schrieb Илья Шипицин:
> Hello,
> 
> let us skip new test on CentOS6
> 

There definitely should be a smarter solution than "delete test" to skip
tests that depend on OpenSSL's features.

Or maybe we should just get rid of the CentOS 6 tests; CentOS 6 will reach
end of life on November 30th anyway.

Best regards
Tim Düsterhus



Re: stable-bot: Bugfixes waiting for a release 2.1 (52), 2.0 (45)

2020-05-27 Thread Tim Düsterhus
Hi List,
Willy,

Am 27.05.20 um 02:00 schrieb stable-...@haproxy.com:
> Last release 2.1.4 was issued on 2020-04-02.  There are currently 52 patches 
> in the queue cut down this way:
> - 1 MAJOR, first one merged on 2020-05-20
> - 20 MEDIUM, first one merged on 2020-05-01
> - 31 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.1.5 would be 2020-04-30, which was 
> four weeks ago.
> 
> Last release 2.0.14 was issued on 2020-04-02.  There are currently 45 patches 
> in the queue cut down this way:
> - 1 MAJOR, first one merged on 2020-05-22
> - 18 MEDIUM, first one merged on 2020-05-07
> - 26 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.0.15 would be 2020-04-30, which 
> was four weeks ago.

I already asked 2 weeks ago [1], but I'll ask again:

> Is there any date planned for 2.1.5? I'm still running 2.1.3 on one
> machine, because I use Dovecot.

And I only just realized that 2.1.3 is affected by CVE-2020-11100, which
makes the current situation especially ugly. Either I run a version with a
critical bug and no workaround, I break Dovecot, or I compile my own
HAProxy.

Best regards
Tim Düsterhus

[1] https://www.mail-archive.com/haproxy@formilux.org/msg37344.html



Re: [PATCH] REGTEST: Add connection/proxy_protocol_send_unique_id_alpn

2020-05-27 Thread Tim Düsterhus
Christopher,

Am 27.05.20 um 13:38 schrieb Christopher Faulet:
> Thanks Tim. Now applied.
> 

Ugh. I realize that I messed up all the commit hashes within the commit
message, because I've taken the hashes from my testing branch where I
cherry-picked them onto an old commit to verify the test fails without
them and passes with them :-/

For posterity, the correct commits are:

68ad53cb3781010ccde7c781b6a3a1e926b5ed23
3ab504f5ff53968ae70d592cba4c1c7da6a0e7ff
d82056c319814f9328db07dd50ab90785ec6c95c

Best regards
Tim Düsterhus



Re: RFC: set minimum default TLS version to 1.2 for HAProxy 2.2

2020-05-27 Thread Tim Düsterhus
Ilya,

Am 27.05.20 um 13:33 schrieb Илья Шипицин:
>> As a data point:
>>
>> The OpenSSL shipped with Debian Buster does not support anything below
>> TLS 1.2 by default [1]. The same is true starting with Ubuntu 20.04 LTS.
>>
> 
> 
> I know several real-world cases when people had to build their own openssl
> on Debian Buster in order get TLS1.0 back
> 

Sure. But admins that are capable enough to compile their own OpenSSL
will be capable enough to add the following to their HAProxy configuration:

ssl-default-bind-options ssl-min-ver TLSv1.0

However in the general case you won't get far as a client in today's
Internet without supporting TLS 1.2. For my machines I dropped support
for anything < 1.2 on port 443 more than 2 years ago.
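
In configuration terms that simply boils down to something like this
(sketch of the relevant global lines, adjust as needed):

    global
        ssl-default-bind-options ssl-min-ver TLSv1.2
        # the server side can be pinned the same way if desired
        ssl-default-server-options ssl-min-ver TLSv1.2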

Best regards
Tim Düsterhus



Re: RFC: set minimum default TLS version to 1.2 for HAProxy 2.2

2020-05-27 Thread Tim Düsterhus
William,

Am 27.05.20 um 12:40 schrieb William Lallemand:
> Hello List,
> 
> Since HAProxy 1.8, the minimum default TLS version for bind lines is
> TLSv10. I was thinking to increase this minimum default to TLSv11 before
> the 2.2 release. But when we discussed the other day about the DH
> param set to 2048 by default, I read that RHEL 8 was also disabling
> TLSv11 by default. TLSv12 now exists for 12 years, it is widely-spread
> nowadays.
> 
> So in my opinion we should do the same, and set the minimum version to
> TLSv12 by default on bind lines. It's still configurable with
> min-ssl-ver if you want the support for prior TLS versions.
> 
> Does anybody have any objections?
> 

As a data point:

The OpenSSL shipped with Debian Buster does not support anything below
TLS 1.2 by default [1]. The same is true starting with Ubuntu 20.04 LTS.

Best regards
Tim Düsterhus

[1]
https://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#openssl-defaults



Re: [ANNOUNCE] haproxy-2.2-dev8

2020-05-22 Thread Tim Düsterhus
Willy,

Am 22.05.20 um 17:16 schrieb Willy Tarreau:
>   - new "http-error" directive and unification with the "return", "deny"
> and "tarpit" rules. This means that it's now possible to handle a

Is there any functional difference between 'return' and 'deny' now or
can 'deny' be considered deprecated?

> deny or a return exactly the same way by specifiying headers and body
> independently using raw text or log-format, and that all processing
> errors can now be dynamic. I know this used to be a very long awaited
> feature which will for example allow to define errorfile templates
> which embed a unique ID or at least be a bit more user-friendly.

Sweet. I always wanted to embed the unique ID in the error messages.
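
Something along these lines is what I have in mind (untested sketch based
on my reading of the new options; the unique-id-format is just the example
from the documentation):

    frontend fe_http
        bind :80
        unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
        # hypothetical use of the new rule: a dynamic error body that
        # embeds the request's unique ID via log-format
        http-request return status 503 content-type "text/plain" lf-string "Service unavailable (request id: %[unique-id])" if { path_beg /maintenance }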

Best regards
Tim Düsterhus



Re: [PATCH 0/6] Lua variable handling

2020-05-19 Thread Tim Düsterhus
Adis,

Am 19.05.20 um 15:57 schrieb Adis Nezirovic:
> However, nothing in the code checks that variable name is formed
> correctly (with correct prefix, such as "txn."), or that the rest of the
> variable name is correct.
> 
> Lua code using set_var() with wrong name silently fails, the variable is
> not set, and nothing is logged.
> 
> 
> With your proposal set_var() now can return result, and we could even
> log warning for "variable name syntax error".
> 

Have a look at the reg-test in patch 5/6. I specifically added a test
that checks the return value for incorrect variable names.
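
For illustration, a minimal Lua sketch of the kind of check the test
performs (assuming the return value semantics added by this series: truthy
on success, falsy for a rejected name):

    core.register_action("var-demo", { "http-req" }, function(txn)
        -- a name with a known scope prefix is accepted
        if not txn:set_var("txn.example", "value") then
            core.Warning("unexpected: txn.example was rejected")
        end
        -- an unknown scope prefix is rejected and set_var() reports it
        if not txn:set_var("bogus.example", "value") then
            core.Warning("bogus.example was rejected, as expected")
        end
    end)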

> ---8<---
> 
> Now back on topic. Instead of adding more parameters to set_var(), I'd
> prefer a warning instead.
> 
> If someone is using set_var() from Lua, and that variable is never used
> or set in the rest of the config, we would know that. Additionally, one
> can set "zero-warning" option to prevent abuse by Lua scripts or to
> prevent bugs.

This would break my use case for haproxy-auth-request. The plan is that
the Lua script unconditionally sets a variable for all response
headers and that HAProxy with my patches applied drops the variables
that are never going to be read. I specifically want to avoid that the
administrator needs to configure the headers to expose. Details are in:
https://github.com/TimWolla/haproxy-auth-request/pull/13

> For your use case, maybe you could use maps from Lua? They should be
> backed by ebtree and be faster right?
> 

To my understanding Maps are not scoped per request and thus would not
be usable for haproxy-auth-request.

Best regards
Tim Düsterhus



Re: 2.0.14 + htx / retry-on all-retryable-errors -> sometimes wrong backend/server used

2020-05-19 Thread Tim Düsterhus
Jarno,

Am 19.05.20 um 15:36 schrieb Jarno Huuskonen:
> I think I found a case when haproxy-2.0.14 with htx and retry-on
> all-retryable-errors sometimes seems to select wrong backend/server
> to retry. (Doesn't happen on every retry).
> 

I believe this is a duplicate of this issue:
https://github.com/haproxy/haproxy/issues/623

Best regards
Tim Düsterhus



Re: decode key created with url32+src

2020-05-17 Thread Tim Düsterhus
Aleks,

Am 18.05.20 um 00:48 schrieb Aleksandar Lazic:
> Is there a easy way to know which URL+src the key is?
> [...]
>   http-request track-sc1 url32+src table per_ip_and_url_rates unless { 
> path_end .css .js .png .gif }

No, as per the documentation:

> url32+src : binary
> This returns the concatenation of the "url32" fetch and the "src" fetch. The
> resulting type is of type binary, with a size of 8 or 20 bytes depending on
> the source address family. This can be used to track per-IP, per-URL counters.

and

> url32 : integer
> This returns a 32-bit hash of the value obtained by concatenating the first
> Host header and the whole URL including parameters (not only the path part of
> the request, as in the "base32" fetch above). This is useful to track per-URL
> activity. A shorter hash is stored, saving a lot of memory. The output type
> is an unsigned integer.

Thus you only have a hash value of the URL in question. However the IP
address is stored in clear at the end of the resulting key. You might
need to hex decode it.
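
If it helps, here is a rough standalone Lua sketch of that decoding,
assuming the layout described above (4-byte url32 hash followed by the 4-
or 16-byte source address):

    local function decode_key(hex)
        -- turn the hex string back into raw bytes
        local raw = hex:gsub("..", function(b) return string.char(tonumber(b, 16)) end)
        local hash = 0
        for i = 1, 4 do hash = hash * 256 + raw:byte(i) end
        local parts = {}
        if #raw == 8 then
            -- remaining 4 bytes: IPv4 address in network byte order
            for i = 5, 8 do parts[#parts + 1] = tostring(raw:byte(i)) end
            return hash, table.concat(parts, ".")
        end
        -- remaining 16 bytes: IPv6 address, printed as hex groups
        for i = 5, #raw, 2 do
            parts[#parts + 1] = string.format("%02x%02x", raw:byte(i), raw:byte(i + 1))
        end
        return hash, table.concat(parts, ":")
    end

    print(decode_key("1a2b3c4dc0a80a01")) -- url32 hash 439041101, src 192.168.10.1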

Best regards
Tim Düsterhus



Re: stable-bot: Bugfixes waiting for a release 2.1 (27), 2.0 (24)

2020-05-14 Thread Tim Düsterhus
Hi List,

Am 13.05.20 um 02:00 schrieb stable-...@haproxy.com:
> Last release 2.1.4 was issued on 2020-04-02.  There are currently 27 patches 
> in the queue cut down this way:
> - 10 MEDIUM, first one merged on 2020-05-01
> - 17 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.1.5 would be 2020-04-30, which was 
> two weeks ago.
> 
> Last release 2.0.14 was issued on 2020-04-02.  There are currently 24 patches 
> in the queue cut down this way:
> - 12 MEDIUM, first one merged on 2020-05-07
> - 12 MINOR, first one merged on 2020-04-02
> 
> Thus the computed ideal release date for 2.0.15 would be 2020-04-30, which 
> was two weeks ago.

Is there any date planned for 2.1.5? I'm still running 2.1.3 on one
machine, because I use Dovecot.

Best regards
Tim Düsterhus



Re: [PATCH] MINOR: crypto: Add digest and hmac converters

2020-05-07 Thread Tim Düsterhus
Patrick,

Am 07.05.20 um 18:54 schrieb Patrick Gansterer:
>> I skipped the patch (that's why I did not ACK the first, but only the
>> second one), because it was a very large change of code that was just
>> moved. Nonetheless I should have noticed the missing body and noted that
>> during my review.
> 
> IMHO, that's a patch you usually do not want review.

Yes, that's why I did not. However the commit message is not directly
part of the patch, so I should have.

>> Please note that I review patches on a voluntary basis. I'm not an
>> "employed first level reviewer".
> 
> That's not what I meant. I thought that when regular participants on
> this list do not spot the errors of first time contributes, it can't be
> that obvious and directing to CONTRIBUTING might not be enough.

I agree with you that Willy's reply was overly blunt in this case.
Especially since you demonstrated an effort to take my review into
account and he noticed my Ack (implying it can't be too bad).

>> Liking HAProxy and wanting to give something back is my motivation as
>> well. I am very sorry to see how this experience went for you. If it is
>> of any help to you: This is definitely not how it usually goes.
> 
> Then here is my next try. ;-)
> 
> I've rebased my changes to reflect the recent changes and added the
> missing description to the first patch.

I've now taken a look at both patches and both are:

Reviewed-by: Tim Duesterhus 

Best regards
Tim Düsterhus



Re: [PATCH] MINOR: crypto: Add digest and hmac converters

2020-05-07 Thread Tim Düsterhus
Patrick,

Am 07.05.20 um 18:04 schrieb Patrick Gansterer:
> On 07.05.20 17:35, Willy Tarreau wrote:
>> Indeed. I encourage to ping again after one week because usually when you
>> restart with a new week of work, the previous one is definitely in old
>> history and will only be revisited by pure luck.
> 
> I don't want to look impatient, so I waited 2 weeks. ;-)
> 
>> With this said, I remember having noticed Tim's ack and was about to take
>> the series until I noticed there was a first patch with no commit message
>> to justify the change and postponed the reply because I had other things
>> to do than to rehash what's already in CONTRIBUTING again and again :-/
> 
> I'm not sure how I should read this. I wrote an explanation into the
> commit message and tried to match already existing messages. If I look
> at commits in the last days like 0e9d87bf06 or de80201460 there are no
> very long commit messages either.
> 
> If I should write an essay about my use case into the commit message I
> could do that, but that's something a "first level reviewer" could
> demand already.

This is about your "[PATCH 1/2] MINOR: crypto: Move aes_gcm_dec
implementation into new file". It does not contain a body, but instead
just a subject.

I skipped the patch (that's why I did not ACK the first, but only the
second one), because it was a very large change of code that was just
moved. Nonetheless I should have noticed the missing body and noted that
during my review.

Please note that I review patches on a voluntary basis. I'm not an
"employed first level reviewer".

> I read the CONTRIBUTING text before submitting my patch, since I don't
> like to repeat myself also, but it's hard to get everything right in the
> first patch. Sorry for that.

The "missing body" Willy was referring to is here, if you'd like to read
up on it:
https://github.com/haproxy/haproxy/blob/f82ea4ae4ca4a6fca70a6e874643db887a39f037/CONTRIBUTING#L562-L567

> I really like haproxy and want to give something back, but I'm not sure
> if I want to do that in the future with the experience I had so far. :-(
> 

Liking HAProxy and wanting to give something back is my motivation as
well. I am very sorry to see how this experience went for you. If it is
of any help to you: This is definitely not how it usually goes.

Best regards
Tim Düsterhus


