Re: Proposal patch to improve CI error reporting

2021-11-26 Thread Tim Düsterhus

Willy,

On 11/26/21 4:18 PM, Willy Tarreau wrote:

Do you have any objection against this being merged ? Would you prefer
to change it a bit (e.g. delimit the output one way or another) ? I'm
open to suggestions, knowing that in its current raw form it did the
job for me, so the rest is cosmetic.


No objection. If you believe it is useful to you, then by all means: Go 
ahead.


I have just one request: Please make this collapsible. You can do so 
easily by basically replacing the comment by another echo:


echo "::group::Show platform specific defines"
echo | ${{ matrix.CC }} -dM -xc -E -
echo "::endgroup::"

Documentation is here: 
https://docs.github.com/en/actions/learn-github-actions/workflow-commands-for-github-actions#grouping-log-lines


Best regards
Tim Düsterhus



Re: [PATCH 1/1] BUG/MINOR: lua: remove loop initial declarations

2021-11-24 Thread Tim Düsterhus

Bertrand,

On 11/24/21 10:16 PM, Bertrand Jacquin wrote:

No backport needed as this issue was introduced in v2.5-dev10~69 with
commit 9e5e586e35c5 ("BUG/MINOR: lua: Fix lua error handling in
`hlua_config_prepend_path()`")


Oh no, that's mine :-( Actually a backport is needed, because 2.5 was 
released yesterday.


Acked-by: Tim Duesterhus 

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.5.0

2021-11-23 Thread Tim Düsterhus

Willy,

On 11/23/21 5:18 PM, Willy Tarreau wrote:

As a reminder, this is a stable version which will receive fixes for
around 12 months. Its initially scheduled EOL is 2023-Q1 but it can be
slightly extended depending on adoption and feedback.


You're living in the future. haproxy.org shows 2022-11-23 as the release 
date for 2.5 (s/2022/2021/ to fix).


Best regards
Tim Düsterhus



Re: should we keep both OpenSSL-3.0.0 and QuicTLS builds in CI ?

2021-11-20 Thread Tim Düsterhus

Ilya,

On 11/20/21 7:14 PM, Илья Шипицин wrote:

QuicTLS is built on top of OpenSSL-3.0.0 and may be considered as "kind of
OpenSSL ... with QUIC enabled".

should we keep both or just the second?
(I'm fine with either of the two)



I'd say keep both: They are different products and might fail 
independently (e.g. due to some bug in QuicTLS also affecting regular 
TCP-TLS).


Best regards
Tim Düsterhus



Re: [PATCH 0/6] Probably final Coccinelle Cleanup

2021-11-08 Thread Tim Düsterhus

Willy,

On 11/8/21 11:43 AM, Willy Tarreau wrote:

You're totally right. Not only is it redundant, but it is wrong (which
is why it is redundant). By being called strncat() one would hope that
it never copies more than the source, but here it ignores the source's
size and always takes the len argument. I suggest that we replace it
with chunk_memcat() everywhere and remove it to avoid any issue. At
first glance for now this is harmless, fortunately, but for how long?


I'm going to do it, and have already applied your first 6 patches here.


Thanks for handling this.

Can you please revert b9656e48377a9e5359494bce6a413a9915c8f74b then? 
This Coccinelle patch is no longer needed / correct now that the 
function is gone.


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-11-05 Thread Tim Düsterhus

Willy,

On 11/5/21 4:43 PM, Willy Tarreau wrote:

Yes, sorry, but I will really deal with that stuff *after* we're done
with haproxy's release.


Yes, absolutely.

I just came across this point on my list while I was double-checking 
whether I had anything important / any loose ends for 2.5 and thought 
I'd sent an email so that I don't forget about this myself :-)


Best regards
Tim Düsterhus



Re: [PATCH] halog stuff

2021-11-05 Thread Tim Düsterhus , WoltLab GmbH

Willy,

On 10/28/21 6:50 PM, Willy Tarreau wrote:

Just one thing (but do not worry, I'll rearrange it later), I initially
took great care of limiting the output to 80 columns for having suffered
a few times from it, and I think it broke when time ranges were added.


This has not happened yet. So here's a reminder before 2.5.

When you do, please also add the missing ']' in front of '< log'  in the 
help text. I accidentally removed it in 
66255f7bbf9dfa18545d96f87d7a0f6fb8684d1c.


Best regards
Tim Düsterhus
Developer WoltLab GmbH

--

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-11-05 Thread Tim Düsterhus

Willy,

On 10/11/21 5:15 PM, Tim Düsterhus wrote:

please also apply to https://github.com/wtarreau/libslz/.
[...]


Now applied, thanks!


Not seeing anything in the libslz repository yet. Did you forget to push?
:-)


No, I've applied to the haproxy copy only for now, will do slz later,
and after double-checking that no legacy code still uses the function
(because in the very first incarnation it had to be called from the
main code, which was not very handy). But I guess that any such code
I could find would just be test code.


A reminder, in case you forgot about this rather than just not having
gotten around to it.



Another reminder, because you did not reply to my previous one.

Best regards
Tim Düsterhus



Re: [EXTERNAL] Re: [PATCH 1/2] MINOR: jwt: Make invalid static JWT algorithms an error in `jwt_verify` converter

2021-11-04 Thread Tim Düsterhus

Remi,

On 11/3/21 9:47 AM, Remi Tricot-Le Breton wrote:

As for the second one, it would have been easier to simply add a string
length comparison before the strcmp in my opinion. We would have had a
one line fix instead of a full conversion of strXXX calls into ist
equivalents (most of which worked fine thanks to the constant algorithm
string length).



Your patch is already merged and the bug is fixed. However I'd like to 
comment on the reasons behind why I refactored the whole function to use 
the ist API:


I *strongly* dislike code that just works because of some implicit 
assumptions that might or might not hold after future changes. This is 
C, every messed up boundary check can easily result in some major issues.


In this specific case the implicit assumption is that all supported 
algorithms are exactly 5 characters long. This is true with the current 
implementation in HAProxy which just supports the HS, RS, ES, and PS 
families. However the JOSE specification [1] also specifies other values 
for the 'alg' field (though most of them for encrypted instead of just 
signed tokens) and they have differing lengths.


If someone in the future wants to add support for a 6 character 
algorithm, then they need to remember to add the length check to not run 
into the same strncmp() issue as you did. My experience is that if a
mistake is made once, it is going to be made again.


The ist API already contains a large number of functions for *safe* 
handling of strings that are not NUL terminated. It is very hard to 
misuse them, because they all include the appropriate checks. If you see 
isteq(dynamic, ist("static")) then it is very clear that this will match
the string "static" exactly. For strncmp you will need to perform the
length check yourself. You also need to remember to manually subtract 1 
for the trailing NUL when using sizeof, whereas this is done 
automatically by the ist() function.
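To make the point concrete, here is a minimal, self-contained sketch of a length-tracked string type in the spirit of the ist API. The names (`mini_ist`, `MINI_IST`, `mini_isteq`) are made up for illustration; this is not HAProxy's actual implementation.

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch of a length-tracked string, in the spirit of
 * HAProxy's ist API (illustrative only, not the real code). */
struct mini_ist {
	const char *ptr;
	size_t len;
};

/* Build a mini_ist from a string literal; the compiler computes the
 * length, so there is no manual "sizeof(...) - 1" to get wrong. */
#define MINI_IST(str) ((struct mini_ist){ (str), sizeof(str) - 1 })

/* Equality checks the length first, so "HS256" can never match a
 * longer input such as "HS256X" the way a bare strncmp(a, b, 5) would. */
static int mini_isteq(struct mini_ist a, struct mini_ist b)
{
	return a.len == b.len && memcmp(a.ptr, b.ptr, a.len) == 0;
}
```

With a bare `strncmp(input, "HS256", 5)`, the longer input `"HS256X"` still compares equal over the first 5 characters; the length check above rules that out by construction.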


While I am not an expert in optimization and getting the last percent of 
performance out of a function, I don't think the ist API is going to be 
measurably slower than strncmp. It is used everywhere within HTX after 
all! For the JWT the OpenSSL crypto is going to take up the majority of 
the time anyway.


Hope this helps clarify why I'm strongly pushing the use of the ist
API and why I've contributed a fair number of additional functions in 
the past when I encountered situations where the current functions were 
not ergonomic to use. One example is the istsplit() function which I 
added for uri_normalizer.c. It makes tokenization of strings very easy 
and safe.
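As a rough illustration of that idea (hypothetical names and signature, not the actual istsplit() from HAProxy), a split helper over a length-tracked string never reads past the tracked length:

```c
#include <assert.h>
#include <string.h>

/* Length-tracked string for this sketch. */
struct span {
	const char *ptr;
	size_t len;
};

/* Return the token before the next delimiter and advance the input
 * past it; never touches memory beyond in->len. */
static struct span span_split(struct span *in, char delim)
{
	struct span tok = { in->ptr, 0 };

	while (tok.len < in->len && in->ptr[tok.len] != delim)
		tok.len++;
	if (tok.len < in->len) {
		/* skip the token and the delimiter itself */
		in->ptr += tok.len + 1;
		in->len -= tok.len + 1;
	} else {
		/* last token: input is exhausted */
		in->ptr += in->len;
		in->len = 0;
	}
	return tok;
}
```

Each call consumes one token; when the input is exhausted its length is zero, so there is no sentinel NUL byte to forget.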


Best regards
Tim Düsterhus

[1] https://www.iana.org/assignments/jose/jose.xhtml



Re: [PATCH] CLEANUP: http_fetch: Use ist helpers in smp_fetch_http_auth_bearer()

2021-10-29 Thread Tim Düsterhus

Willy,

On 10/29/21 8:50 AM, Willy Tarreau wrote:

I don't see how this can ever match:

   - we search for a space in the  first characters starting at 
   - if we find one such space, we check if these characters are exactly
 equal to the string "Bearer" (modulo the case), and if so we take the
 value.
   => by definition this cannot match since there is no space in "Bearer"

It made me doubt about strncasecmp() to the point that I had to write some
code to verify I wasn't mistaken about how it worked.

Rémi, am I missing something or is it just that this code snippet indeed
has a bug that was not spotted by the regtests (which I'm fine with,
they're regression tests, not unit tests seeking 100% coverage) ?


You are not missing anything. This is indeed broken, because no reg-test 
covers calling the `http_auth_bearer` fetch with a string parameter.


Thus the tests always go into the 'else' part where `get_http_auth()` 
handles the parsing.


The correct fix would then of course be that the parsing logic is moved 
into a dedicated function (ideally based on the ist API) and not copied 
into several different functions.
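Such a dedicated helper could look roughly like this (a hedged sketch of the suggested refactoring, with made-up names; not HAProxy's actual code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <strings.h> /* strncasecmp() */

/* Sketch of a shared "Bearer <token>" parser: compare only the scheme
 * against "Bearer" (case-insensitively), then return the value after
 * the space. Returns NULL if the header is not a Bearer credential. */
static const char *parse_bearer(const char *hdr, size_t len, size_t *token_len)
{
	const char *space = memchr(hdr, ' ', len);

	if (!space)
		return NULL;
	/* the scheme is everything before the space */
	if (space - hdr != 6 || strncasecmp(hdr, "Bearer", 6) != 0)
		return NULL;
	*token_len = len - 7; /* skip "Bearer " */
	return space + 1;
}
```

The key point is that only the scheme is compared against "Bearer"; the broken snippet compared the scheme, the space, and part of the value in a single strncasecmp(), which can never match.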


I retract my patch.

Remi: Shall I file an issue to track the duplicated parsing logic or 
will you handle this based on this ML thread?


Best regards
Tim Düsterhus



Re: [PATCH] halog stuff

2021-10-28 Thread Tim Düsterhus , WoltLab GmbH

Willy,

On 10/28/21 6:50 PM, Willy Tarreau wrote:

Just one thing (but do not worry, I'll rearrange it later), I initially
took great care of limiting the output to 80 columns for having suffered
a few times from it, and I think it broke when time ranges were added.


ack


 From 2e09f62a989dfe3ef2a0965c8f42681db9af937e Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Thu, 28 Oct 2021 17:24:02 +0200
Subject: [PATCH 5/5] MINOR: halog: Add support for extracting captures using
  -hdr
To: haproxy@formilux.org
Cc: w...@1wt.eu

This resolves GitHub issue #1146.


On this one, please provide a description. It's fine to mention issues or
any external info, but commit messages will outlive any external site, and
they must contain a description of what is done. It seems to me that you're
extracting the fields from the capture, and also that you're doing a special
case of the quotes. I'm not sure how they appear on the output though. We
don't need many details, but just a few lines about this would help.



The check for the quote is to detect the start of the request method. 
I've taken that logic from `filter_count_url`. This function skips the 
captures by searching for the `"` in front of `GET` (or whatever method 
there is).


I've indicated the quote in question in this example log line:


127.0.0.1:54972 [28/Oct/2021:19:14:25.534] test test/ 0/-1/-1/-1/0 403 192 - - PR-- 2/2/0/0/3 0/0 {foo||bar} {||} "GET / HTTP/1.1"
                                                                                                                  ^


I've attached an updated patch with an extensive explanation :-)

Best regards
Tim Düsterhus
Developer WoltLab GmbH

>From db0356b61a32ccd7b4015cea66b4e2b7d8ff0097 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Thu, 28 Oct 2021 17:24:02 +0200
Subject: [PATCH 5/5] MINOR: halog: Add support for extracting captures using
 -hdr
To: haproxy@formilux.org
Cc: w...@1wt.eu

This patch adds support for extracting captured header fields to halog. A field
can be extracted by passing the `-hdr <block>:<field>` output filter.

Both `<block>` and `<field>` are 1-indexed.

`<block>` refers to the index of the brace-delimited list of headers. If both
request and response headers are captured, then request headers are referenced
by `<block> = 1`, response headers are `2`. If only one direction is captured,
there will only be a single block `1`.

`<field>` refers to a single field within the selected block.

The output will contain one line, possibly empty, per log line processed.
Passing a non-existent `<block>` or `<field>` will result in an empty line.

Example:

capture request  header a len 50
capture request  header b len 50
capture request  header c len 50
capture response header d len 50
capture response header e len 50
capture response header f len 50

`-hdr 1:1` will extract request  header `a`
`-hdr 1:2` will extract request  header `b`
`-hdr 1:3` will extract request  header `c`
`-hdr 2:3` will extract response header `f`

This resolves GitHub issue #1146.
---
 admin/halog/halog.c | 109 ++--
 1 file changed, 105 insertions(+), 4 deletions(-)

diff --git a/admin/halog/halog.c b/admin/halog/halog.c
index fe11aa81d..9b202d731 100644
--- a/admin/halog/halog.c
+++ b/admin/halog/halog.c
@@ -119,6 +119,7 @@ struct url_stat {
 
 #define FILT2_TIMESTAMP 0x01
 #define FILT2_PRESERVE_QUERY0x02
+#define FILT2_EXTRACT_CAPTURE   0x04
 
 unsigned int filter = 0;
 unsigned int filter2 = 0;
@@ -139,6 +140,7 @@ void filter_count_term_codes(const char *accept_field, const char *time_field, s
 void filter_count_status(const char *accept_field, const char *time_field, struct timer **tptr);
 void filter_graphs(const char *accept_field, const char *time_field, struct timer **tptr);
 void filter_output_line(const char *accept_field, const char *time_field, struct timer **tptr);
+void filter_extract_capture(const char *accept_field, const char *time_field, unsigned int, unsigned int);
 void filter_accept_holes(const char *accept_field, const char *time_field, struct timer **tptr);
 
 void usage(FILE *output, const char *msg)
@@ -147,9 +149,11 @@ void usage(FILE *output, const char *msg)
 		"%s"
 		"Usage: halog [-h|--help] for long help\n"
 		"   halog [-q] [-c] [-m ]\n"
-		"   {-cc|-gt|-pct|-st|-tc|-srv|-u|-uc|-ue|-ua|-ut|-uao|-uto|-uba|-ubt|-ic}\n"
+		"   {-cc|-gt|-pct|-st|-tc|-srv|-u|-uc|-ue|-ua|-ut|-uao|-uto|-uba|-ubt|-ic\n"
+		"   |-hdr :\n"
+		"   }\n"
 		"   [-s ] [-e|-E] [-H] [-rt|-RT ] [-ad ] [-ac ] [-query]\n"
-		"   [-v] [-Q|-QS] [-tcn|-TCN ] [ -hs|-HS [min][:[max]] ] [ -time [min][:[max]] ] < log\n"
+		"   [-v] [-Q|-QS] [-tcn|-TCN ] [ -hs|-HS [m

[PATCH] halog stuff

2021-10-28 Thread Tim Düsterhus , WoltLab GmbH

Willy,

please find another halog series attached.

1. Some small changes to the new -qry/-query flag.
2. A new -hdr flag, resolving my own GitHub issue.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

>From b5f3f0a1dafc85854574c77f3df0e3a31ee0136d Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Thu, 28 Oct 2021 15:53:37 +0200
Subject: [PATCH 1/5] DOC: halog: Move the `-qry` parameter into the correct
 section in help text
To: haproxy@formilux.org
Cc: w...@1wt.eu

This is not an output filter, but instead a modifier. Specifically "only one
may be used at a time" is not true.

see 24b8d693b202b01b649f64ed878d8f9dd1b242e4
---
 admin/halog/halog.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/admin/halog/halog.c b/admin/halog/halog.c
index 2d6c17f45..76e68cebf 100644
--- a/admin/halog/halog.c
+++ b/admin/halog/halog.c
@@ -183,6 +183,7 @@ void help()
 	   " -m   limit output to the first  lines\n"
" -s   skip n fields from the beginning of a line (default %d)\n"
" you can also use -n to start from earlier then field %d\n"
+	   " -qrypreserve the query string for per-URL (-u*) statistics\n"
"\n"
 	   "Output filters - only one may be used at a time\n"
 	   " -conly report the number of lines that would have been printed\n"
@@ -197,8 +198,7 @@ void help()
 	   "   -u : by URL, -uc : request count, -ue : error count\n"
 	   "   -ua : average response time, -ut : average total time\n"
 	   "   -uao, -uto: average times computed on valid ('OK') requests\n"
-	   "   -uba, -ubt: average bytes returned, total bytes returned\n"
-	   " -qry  preserve the query string for per-URL statistics\n",
+	   "   -uba, -ubt: average bytes returned, total bytes returned\n",
SOURCE_FIELD,SOURCE_FIELD
 	   );
 	exit(0);
-- 
2.33.1

>From 50d5f579bc35fcffbf06a2fa2dbc8a4fd6944135 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Thu, 28 Oct 2021 16:36:03 +0200
Subject: [PATCH 2/5] MINOR: halog: Rename -qry to -query
To: haproxy@formilux.org
Cc: w...@1wt.eu

With the query flag moved into the correct help section, there is enough space
for two additional characters.
---
 admin/halog/halog.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/admin/halog/halog.c b/admin/halog/halog.c
index 76e68cebf..89590cfed 100644
--- a/admin/halog/halog.c
+++ b/admin/halog/halog.c
@@ -148,7 +148,7 @@ void usage(FILE *output, const char *msg)
 		"Usage: halog [-h|--help] for long help\n"
 		"   halog [-q] [-c] [-m ]\n"
 		"   {-cc|-gt|-pct|-st|-tc|-srv|-u|-uc|-ue|-ua|-ut|-uao|-uto|-uba|-ubt|-ic}\n"
-		"   [-s ] [-e|-E] [-H] [-rt|-RT ] [-ad ] [-ac ] [-qry]\n"
+		"   [-s ] [-e|-E] [-H] [-rt|-RT ] [-ad ] [-ac ] [-query]\n"
 		"   [-v] [-Q|-QS] [-tcn|-TCN ] [ -hs|-HS [min][:[max]] ] [ -time [min][:[max]] ] < log\n"
 		"\n",
 		msg ? msg : ""
@@ -183,7 +183,7 @@ void help()
 	   " -m   limit output to the first  lines\n"
" -s   skip n fields from the beginning of a line (default %d)\n"
" you can also use -n to start from earlier then field %d\n"
-	   " -qrypreserve the query string for per-URL (-u*) statistics\n"
+	   " -query  preserve the query string for per-URL (-u*) statistics\n"
"\n"
 	   "Output filters - only one may be used at a time\n"
 	   " -conly report the number of lines that would have been printed\n"
@@ -833,7 +833,7 @@ int main(int argc, char **argv)
 			filter |= FILT_COUNT_URL_BAVG;
 		else if (strcmp(argv[0], "-ubt") == 0)
 			filter |= FILT_COUNT_URL_BTOT;
-		else if (strcmp(argv[0], "-qry") == 0)
+		else if (strcmp(argv[0], "-query") == 0)
 			filter2 |= FILT2_PRESERVE_QUERY;
 		else if (strcmp(argv[0], "-ic") == 0)
 			filter |= FILT_COUNT_IP_COUNT;
-- 
2.33.1

>From d0cdbf68f41bfd43f5cbb1bb0cc1b745fe261dfd Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Thu, 28 Oct 2021 15:55:49 +0200
Subject: [PATCH 3/5] CLEANUP: halog: Use consistent indentation in help()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Consistently use 1 Tab per line.
---
 admin/halog/halog.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/admin/halog/halog.c b/admin/halog/halog.c
index 89590cfed..47

Re: [PR] Typos fixed "it" should be "is"

2021-10-28 Thread Tim Düsterhus

Willy,

On 10/14/21 7:23 PM, PR Bot wrote:

This is an automated relay of the Github pull request:
Typos fixed "it" should be "is"

Patch title(s):
Typos fixed "it" should be "is"

Link:
https://github.com/haproxy/haproxy/pull/1415


The patch contents were good. Please find a version with a cleaned up 
commit message attached. I've used Anubhav's phrasing as-is to not 
modify more than necessary.


Best regards
Tim Düsterhus
>From f4aad97d7967c5da7e20bb6586fc7af2d2147c15 Mon Sep 17 00:00:00 2001
From: Anubhav 
Date: Thu, 14 Oct 2021 22:28:25 +0530
Subject: [PATCH] DOC: Typo fixed "it" should be "is"
To: haproxy@formilux.org
Cc: w...@1wt.eu

This patch was proposed in GitHub PR #1415.

Reviewed-by: Tim Duesterhus 
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index d90c39e66..20c0f7454 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12528,7 +12528,7 @@ tcp-request session  [{if | unless} ]
   it is possible to evaluate some conditions to decide whether this session
   must be accepted or dropped or have its counters tracked. Those conditions
   cannot make use of any data contents because no buffers are allocated yet and
-  the processing cannot wait at this stage. The main use case it to copy some
+  the processing cannot wait at this stage. The main use case is to copy some
   early information into variables (since variables are accessible in the
   session), or to keep track of some information collected after the handshake,
   such as SSL-level elements (SNI, ciphers, client cert's CN) or information
-- 
2.33.1



Re: [PATCH] CLEANUP: http_fetch: Use ist helpers in smp_fetch_http_auth_bearer()

2021-10-28 Thread Tim Düsterhus

Willy,

On 10/14/21 7:48 PM, Tim Duesterhus wrote:

please find a suggested cleanup for your JWT patch series. I think that
using the ist functions results in easier to understand code, because you
don't need to manually calculate lengths and offsets.



Here's another patch that did not receive any attention. I originally 
sent it to Remi (and the list) only.


Best regards
Tim Düsterhus



Re: Problem with the var() sample fetch function

2021-10-27 Thread Tim Düsterhus

Willy,

On 10/27/21 6:32 PM, Willy Tarreau wrote:

So the question is (as you see me coming), is it acceptable to fix this in
2.5+ by making var() match the doc, returning the type "any", and mandating
the matching method, implying that this bogus config which does not work:

  http-request set-var(txn.foo) int(-1)
  http-request deny if { var(txn.foo) lt 0 }

will need to be written like this:

  http-request set-var(txn.foo) int(-1)
  http-request deny if { var(txn.foo) -m int lt 0 }

My local checks showing that roughly 30% of the configs not using -m are
bogus does not even make me want to continue to support this without
telling users their config will not work. And any config fixed with this
will also be fixed and start to work for earlier versions.

I would love to have the option to emit a warning for older ones but I
really cannot promise anything. I expect that 2.5 users will be experienced
users and that it will not be a big deal to rely on this to fix their
configs once for all. I'd rather not do this during an odd->even switch
however.

I'd like to get some opinions quickly on this as while the error will
just be a one-liner fix, the warnings are way more complicated and will
take some time.



Personally I'd prefer to see my config loudly rejected if it is 
incorrect rather than have it silently work incorrectly. It's not like 
I'm going to roll out 2.5 in my fleet without testing the config at least once.


So: +1 to make this an error.

Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: Consistently `unsigned int` for bitfields

2021-10-18 Thread Tim Düsterhus

Willy,

On 10/18/21 10:51 AM, Willy Tarreau wrote:

On Mon, Oct 18, 2021 at 09:18:12AM +0200, Tim Düsterhus wrote:

Hu, interesting. Is the GitHub Mirror Sync broken? I'm seeing changes in
https://git.haproxy.org/?p=haproxy.git, but not in GitHub.


So it was in relation with the Painful Access Token apparently. The
mirror user was not allowed anymore to push to .github/workflows:

   $ git push github
   Counting objects: 99, done.
   Delta compression using up to 2 threads.
   Compressing objects: 100% (63/63), done.
   Writing objects: 100% (63/63), 6.69 KiB | 0 bytes/s, done.
   Total 63 (delta 51), reused 0 (delta 0)
   remote: Resolving deltas: 100% (51/51), completed with 34 local objects.
   To https://github.com/haproxy/haproxy.git
! [remote rejected] master -> master (refusing to allow a Personal Access 
Token to create or update workflow `.github/workflows/codespell.yml` without 
`workflow` scope)

I don't really see the relation with any of the recent changes. Thus I
switched to SSH and got rid of the stupid clear-text PAT and now it's
OK again.



GitHub Action Workflows are pretty powerful and can do all kinds of 
stuff within a repository. I assume that GitHub wanted to increase the 
security by not allowing arbitrary tokens to change them, when the 
purpose of the token is something entirely different.


I assume that simply using SSH will be stable going forward. That's all 
I've ever used for write access since I signed up to GitHub.


Best regards
Tim Düsterhus



[PATCH] MINOR: halog: Add -qry parameter allowing to preserve the query string in -uX

2021-10-18 Thread Tim Düsterhus , WoltLab GmbH

Willy,

please find the patch attached.


Our use-case for this is a dynamic application that performs routing based on
the query string. Without this option all URLs will just point to the central
entrypoint of this location, making the output completely useless.

Best regards
Tim Düsterhus
Developer WoltLab GmbH

>From 6095a454dee425487083674ec9d35be7a59f7ef6 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Mon, 18 Oct 2021 12:12:02 +0200
Subject: [PATCH] MINOR: halog: Add -qry parameter allowing to preserve the
 query string in -uX
To: haproxy@formilux.org
Cc: w...@1wt.eu

Our use-case for this is a dynamic application that performs routing based on
the query string. Without this option all URLs will just point to the central
entrypoint of this location, making the output completely useless.
---
 admin/halog/halog.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/admin/halog/halog.c b/admin/halog/halog.c
index 5d256209b..2d6c17f45 100644
--- a/admin/halog/halog.c
+++ b/admin/halog/halog.c
@@ -117,7 +117,8 @@ struct url_stat {
 #define FILT_COUNT_COOK_CODES 0x4000
 #define FILT_COUNT_IP_COUNT   0x8000
 
-#define FILT2_TIMESTAMP	0x01
+#define FILT2_TIMESTAMP 0x01
+#define FILT2_PRESERVE_QUERY0x02
 
 unsigned int filter = 0;
 unsigned int filter2 = 0;
@@ -147,7 +148,7 @@ void usage(FILE *output, const char *msg)
 		"Usage: halog [-h|--help] for long help\n"
 		"   halog [-q] [-c] [-m ]\n"
 		"   {-cc|-gt|-pct|-st|-tc|-srv|-u|-uc|-ue|-ua|-ut|-uao|-uto|-uba|-ubt|-ic}\n"
-		"   [-s ] [-e|-E] [-H] [-rt|-RT ] [-ad ] [-ac ]\n"
+		"   [-s ] [-e|-E] [-H] [-rt|-RT ] [-ad ] [-ac ] [-qry]\n"
 		"   [-v] [-Q|-QS] [-tcn|-TCN ] [ -hs|-HS [min][:[max]] ] [ -time [min][:[max]] ] < log\n"
 		"\n",
 		msg ? msg : ""
@@ -196,7 +197,8 @@ void help()
 	   "   -u : by URL, -uc : request count, -ue : error count\n"
 	   "   -ua : average response time, -ut : average total time\n"
 	   "   -uao, -uto: average times computed on valid ('OK') requests\n"
-	   "   -uba, -ubt: average bytes returned, total bytes returned\n",
+	   "   -uba, -ubt: average bytes returned, total bytes returned\n"
+	   " -qry  preserve the query string for per-URL statistics\n",
SOURCE_FIELD,SOURCE_FIELD
 	   );
 	exit(0);
@@ -831,6 +833,8 @@ int main(int argc, char **argv)
 			filter |= FILT_COUNT_URL_BAVG;
 		else if (strcmp(argv[0], "-ubt") == 0)
 			filter |= FILT_COUNT_URL_BTOT;
+		else if (strcmp(argv[0], "-qry") == 0)
+			filter2 |= FILT2_PRESERVE_QUERY;
 		else if (strcmp(argv[0], "-ic") == 0)
 			filter |= FILT_COUNT_IP_COUNT;
 		else if (strcmp(argv[0], "-o") == 0) {
@@ -1576,7 +1580,8 @@ void filter_count_url(const char *accept_field, const char *time_field, struct t
 	/* stop at end of field or first ';' or '?', takes avg 64 ns per line */
 	e = b;
 	do {
-		if (*e == ' ' || *e == '?' || *e == ';') {
+		if (*e == ' '||
+		(!(filter2 & FILT2_PRESERVE_QUERY) && (*e == '?' || *e == ';'))) {
 			*(char *)e = 0;
 			break;
 		}
-- 
2.33.0



Re: [PATCH] CLEANUP: Consistently `unsigned int` for bitfields

2021-10-18 Thread Tim Düsterhus

Willy,

On 10/18/21 9:15 AM, Willy Tarreau wrote:

On Mon, Oct 18, 2021 at 09:09:01AM +0200, Tim Düsterhus wrote:

Feel free to replace 'unsigned int' with 'uint' and reformat the struct as
needed.


Done and pushed, thank you!
Willy



Hu, interesting. Is the GitHub Mirror Sync broken? I'm seeing changes in 
https://git.haproxy.org/?p=haproxy.git, but not in GitHub.


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: Consistently `unsigned int` for bitfields

2021-10-18 Thread Tim Düsterhus

Willy,

On 10/18/21 7:22 AM, Willy Tarreau wrote:

On Sat, Oct 16, 2021 at 06:24:18PM +0200, Tim Duesterhus wrote:

see 6a0dd733906611dea958cf74b9f51bb16028ae20

Found using GitHub's CodeQL scan.
---
  include/haproxy/stick_table-t.h | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/haproxy/stick_table-t.h b/include/haproxy/stick_table-t.h
index 3b1f2b3ef..133f992b5 100644
--- a/include/haproxy/stick_table-t.h
+++ b/include/haproxy/stick_table-t.h
@@ -125,8 +125,8 @@ struct stktable_data_type {
const char *name; /* name of the data type */
int std_type; /* standard type we can use for this data, STD_T_* */
int arg_type; /* type of optional argument, ARG_T_* */
-   int is_array:1;   /* this is an array of gpc/gpt */
-   int is_local:1;   /* this is local only and never learned */
+   unsigned int is_array:1;   /* this is an array of gpc/gpt */
+   unsigned int is_local:1;   /* this is local only and never learned */


Please note, since last patch we've added shorter type names like "uint"
for the unsigned int, that help preserve alignment. Do you mind if I adapt
your patch ?


Feel free to replace 'unsigned int' with 'uint' and reformat the struct 
as needed.


Best regards
Tim Düsterhus



Re: [PR] Typos fixed "it" should be "is"

2021-10-14 Thread Tim Düsterhus

Hi,

On 10/14/21 7:23 PM, PR Bot wrote:

Description:
Could you please add the label `hacktoberfest-accepted` to the PR
(obviously if you feel that it's not invalid/spam and should be
merged)



For the information of the list readers:

I replied to this request on the PR on GitHub itself: 
https://github.com/haproxy/haproxy/pull/1415#issuecomment-943565693


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-10-11 Thread Tim Düsterhus

Willy,

On 9/24/21 3:36 PM, Willy Tarreau wrote:

On Fri, Sep 24, 2021 at 03:33:14PM +0200, Tim Düsterhus wrote:

Willy,

On 9/24/21 3:09 PM, Willy Tarreau wrote:

please also apply to https://github.com/wtarreau/libslz/.
[...]


Now applied, thanks!


Not seeing anything in the libslz repository yet. Did you forget to push?
:-)


No, I've applied to the haproxy copy only for now, will do slz later,
and after double-checking that no legacy code still uses the function
(because in the very first incarnation it had to be called from the
main code, which was not very handy). But I guess that any such code
I could find would just be test code.


A reminder, in case you forgot about this rather than just not having
gotten around to it.


Best regards
Tim Düsterhus



Re: host-based be routing with H2

2021-10-05 Thread Tim Düsterhus

Jarno,

On 10/5/21 5:19 PM, Jarno Huuskonen wrote:

Have you looked at this thread:
https://www.mail-archive.com/haproxy@formilux.org/msg40652.html
your issue sounds similar.

Is one backend the default_backend (where HTTP/2 requests go) ?

Does it work with something like:
use_backend %[req.hdr(host),lower,regsub(:\d+$,,)]
or
use_backend %[req.hdr(host),lower,word(1,:)]


That issue with the port being part of the host header for Firefox 
Websockets should already be fixed in HAProxy 2.4.4.


Best regards
Tim Düsterhus



Re: host-based be routing with H2

2021-10-05 Thread Tim Düsterhus

Ionel,

On 10/5/21 3:56 PM, Ionel GARDAIS wrote:

Currently, backend selection is made with
use_backend %[req.hdr(host),lower]

Would
use_backend %[ssl_fc_sni,lower] # Layer 5
or
use_backend %[req.ssl_sni,lower] # Layer 6
help with H2 ?



That would be a big fat NO.

SNI is ***never*** the correct solution to perform routing.

In fact it will make the situation even worse for you.

-

req.hdr(host) is the correct solution and I am surprised that it does 
not work for you.


Consider adding 'capture request header Host len 50' to your frontend 
and then share the log lines for affected requests. With the httplog 
format they should then indicate both the host as seen by HAProxy as 
well as the backend/server selected.
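A minimal sketch of such a frontend (hypothetical names and certificate path, assuming a TLS bind with H2; adapt to your setup):

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    option httplog
    capture request header Host len 50
    use_backend %[req.hdr(host),lower]
```

With `option httplog`, the captured Host value then shows up in braces in each log line, next to the backend/server that was selected.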


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-09-24 Thread Tim Düsterhus

Willy,

On 9/24/21 3:09 PM, Willy Tarreau wrote:

please also apply to https://github.com/wtarreau/libslz/.
[...]


Now applied, thanks!


Not seeing anything in the libslz repository yet. Did you forget to 
push? :-)


Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.5-dev7

2021-09-12 Thread Tim Düsterhus

Willy,

On 9/12/21 12:06 PM, Willy Tarreau wrote:

Tim Duesterhus (3):
   CLEANUP: Add haproxy/xxhash.h to avoid modifying import/xxhash.h
   CLEANUP: Move XXH3 macro from haproxy/compat.h to haproxy/xxhash.h
   BUG/MEDIUM lua: Add missing call to RESET_SAFE_LJMP in hlua_filter_new()

Tim Düsterhus (1):
   CLEANUP: ebmbtree: Replace always-taken elseif by else


I'm annoyed that I appear twice. Consider applying the attached UTF-8 
encoded (!) patch to fix that.


The .mailmap in base64 is:

VGltIETDvHN0ZXJodXMgPHRpbUBiYXN0ZWxzdHUuYmU+Cg==

with hash:

SHA256 (.mailmap) = 
a8134231950cd8733a21770538747516394b1b8ba7b1ac32e85e7d70d9c8cd99


Best regards
Tim Düsterhus
>From 28f0740576947b0f4b2a78677142d0e6d408221b Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 15:55:04 +0200
Subject: [PATCH] DOC: Add .mailmap
To: haproxy@formilux.org
Cc: w...@1wt.eu

Ensure that Tim's last name is consistent no matter how the patch is generated
and applied, preventing Tim from showing up as two different persons in the
shortlog in releases.
---
 .mailmap | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 .mailmap

diff --git a/.mailmap b/.mailmap
new file mode 100644
index 0..82cb385ae
--- /dev/null
+++ b/.mailmap
@@ -0,0 +1 @@
+Tim Düsterhus 
-- 
2.33.0



[PATCH] Huge number of cleanup patches

2021-09-12 Thread Tim Düsterhus

Willy,

please find 42 patches attached. Sending as attachments to not 
completely bomb the list.


I'll leave it up to you whether you want to squash patches 0001 through 0041.

0042 is sufficiently different to warrant a dedicated commit.

Best regards
Tim Düsterhus
>From 87ca986b06663344bdb18b1db5ca69c74145a56c Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 12:49:33 +0200
Subject: [PATCH 01/42] CLEANUP: Fix prototype of ha_backtrace_to_stderr()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Explicitly indicate the empty parameter list.
---
 include/haproxy/bug.h   | 4 ++--
 include/haproxy/debug.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/haproxy/bug.h b/include/haproxy/bug.h
index 7a2fa81a0..45c97ba67 100644
--- a/include/haproxy/bug.h
+++ b/include/haproxy/bug.h
@@ -39,12 +39,12 @@
 
 #ifdef DEBUG_USE_ABORT
 /* abort() is better recognized by code analysis tools */
-#define ABORT_NOW() do { extern void ha_backtrace_to_stderr(); ha_backtrace_to_stderr(); abort(); } while (0)
+#define ABORT_NOW() do { extern void ha_backtrace_to_stderr(void); ha_backtrace_to_stderr(); abort(); } while (0)
 #else
 /* More efficient than abort() because it does not mangle the
   * stack and stops at the exact location we need.
   */
-#define ABORT_NOW() do { extern void ha_backtrace_to_stderr(); ha_backtrace_to_stderr(); (*(volatile int*)1=0); } while (0)
+#define ABORT_NOW() do { extern void ha_backtrace_to_stderr(void); ha_backtrace_to_stderr(); (*(volatile int*)1=0); } while (0)
 #endif
 
 /* BUG_ON: complains if  is true when DEBUG_STRICT or DEBUG_STRICT_NOCRASH
diff --git a/include/haproxy/debug.h b/include/haproxy/debug.h
index dd1668db9..c6bdd7c5d 100644
--- a/include/haproxy/debug.h
+++ b/include/haproxy/debug.h
@@ -29,7 +29,7 @@ extern unsigned int debug_commands_issued;
 void ha_task_dump(struct buffer *buf, const struct task *task, const char *pfx);
 void ha_thread_dump(struct buffer *buf, int thr, int calling_tid);
 void ha_dump_backtrace(struct buffer *buf, const char *prefix, int dump);
-void ha_backtrace_to_stderr();
+void ha_backtrace_to_stderr(void);
 void ha_thread_dump_all_to_trash();
 void ha_panic();
 
-- 
2.33.0

>From d3057dd6d1ab2c4bf0018b08ba25b95689048b1e Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 00:34:34 +0200
Subject: [PATCH 02/42] CLEANUP: Fix prototype of ha_cpuset_size()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Explicitly indicate the empty parameter list.
---
 include/haproxy/cpuset.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/haproxy/cpuset.h b/include/haproxy/cpuset.h
index d29c3560b..390115bcd 100644
--- a/include/haproxy/cpuset.h
+++ b/include/haproxy/cpuset.h
@@ -39,6 +39,6 @@ void ha_cpuset_assign(struct hap_cpuset *dst, const struct hap_cpuset *src);
 
 /* Returns the biggest index plus one usable on the platform.
  */
-int ha_cpuset_size();
+int ha_cpuset_size(void);
 
 #endif /* _HAPROXY_CPUSET_H */
-- 
2.33.0

>From c259a76dd9474a0a4e30982767144248ed350b84 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 12:50:04 +0200
Subject: [PATCH 03/42] CLEANUP: Fix prototype of ha_panic()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Explicitly indicate the empty parameter list.
---
 include/haproxy/debug.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/haproxy/debug.h b/include/haproxy/debug.h
index c6bdd7c5d..6d5eba6b8 100644
--- a/include/haproxy/debug.h
+++ b/include/haproxy/debug.h
@@ -31,6 +31,6 @@ void ha_thread_dump(struct buffer *buf, int thr, int calling_tid);
 void ha_dump_backtrace(struct buffer *buf, const char *prefix, int dump);
 void ha_backtrace_to_stderr(void);
 void ha_thread_dump_all_to_trash();
-void ha_panic();
+void ha_panic(void);
 
 #endif /* _HAPROXY_DEBUG_H */
-- 
2.33.0

>From cb39d1db9aa527c5ea57d084565abf7eff6da894 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 01:12:21 +0200
Subject: [PATCH 04/42] CLEANUP: Fix prototype of ha_random64()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Explicitly indicate the empty parameter list.
---
 include/haproxy/tools.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/haproxy/tools.h b/include/haproxy/tools.h
index dec82e6b7..341a93e70 100644
--- a/include/haproxy/tools.h
+++ b/include/haproxy/tools.h
@@ -1027,7 +1027,7 @@ int parse_dotted_uints(const char *s, unsigned int **nums, size_t *sz);
 void ha_generate_uuid(struct buffer *output);
 void ha_random_seed(const unsigned char *seed, size_t len);
 void ha_random_jump96(uint32_t dist);
-uint64_t ha_random64();
+uint64_t ha_random64(void);
 
 static inline uint32_t ha_random32()
 {
-- 
2.33.0

>From 11ee1f68b4ad932499e4347f5c6749ca833b936c Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Sun, 12 Sep 2021 12:50:26 +0200
Subject: [PATCH 05/42] CLEANUP: Fix prototype of ha_thread_dump_all_to_trash()
To: haproxy@formilux.org
Cc: w...@1wt.eu

Expl

Re: [PATCH] BUILD: opentracing: excluded use of haproxy variables for, OpenTracing context

2021-09-10 Thread Tim Düsterhus

Miroslav,

On 9/10/21 11:52 PM, Miroslav Zagorac wrote:

Hello Tim,

On 09/10/2021 10:55 PM, Tim Düsterhus wrote:

If it's not a NULL pointer then it must be a valid pointer. I don't
quite understand why you would need to replace this by an empty constant
string then? Is this about a missing NUL terminator? Otherwise a valid
pointer with .len == 0 would be equivalent to "".



No, v.ptr is not a null pointer and points to the wrong location, v.len
is equal to 0.

For example I added to the source two calls to the printf () function,
one before the htx_get_blk_value() function call and the other after:

-
printf(">>> n = { %p '%s' %zu }, v = { %p '%s' %zu }\n", n.ptr, n.ptr,
n.len, v.ptr, v.ptr, v.len);

v = htx_get_blk_value(htx, blk);

printf(">>> v = { %p '%s' %zu }\n", v.ptr, v.ptr, v.len);
-

I intentionally did not write the string format as '%.*s' so that the
contents can be seen.  These two headers are treated here:

1, 'ot-ctx-uberctx-haproxy_id: '
2. 'ot-ctx-uber-trace-id: 3c4069d777e89019:3c4069d777e89019:0:1'

, and this is the result:

1st header:
  >>> n = { 0x56079098bec1
'ot-ctx-uberctx-haproxy_idot-ctx-uber-trace-id3c4069d777e89019:3c4069d777e89019:0:1'
25 }, v = { (nil) '(null)' 0 }
  >>> v = { 0x56079098beda
'ot-ctx-uber-trace-id3c4069d777e89019:3c4069d777e89019:0:1' 0 }

As can be seen from the example, v.ptr = 0x56079098beda, v.len = 0.


This looks correct to me. The value points to the location right behind 
the name of the header and has length 0. The header is empty, thus the 
'v' is empty as well, as indicated by v.len == 0. Thus accessing any 
byte of this zero-size "buffer" is effectively undefined behavior.



2nd header:
  >>> n = { 0x56079098beda
'ot-ctx-uber-trace-id3c4069d777e89019:3c4069d777e89019:0:1' 20 }, v = {
(nil) '(null)' 0 }
  >>> v = { 0x56079098beee '3c4069d777e89019:3c4069d777e89019:0:1' 37 }


This also looks correct to me.


Maybe I use htx_get_blk_name()/htx_get_blk_value() in this function in
an inappropriate way, so I get such a result.


Unrelated to that: Having worked quite a bit with the ist API I would
generally recommend using the helper functions in ist.h (possibly adding
new ones if they are helpful) instead of directly touching the struct.
It makes refactoring much easier, because it's easy to find the function
calls vs. accessing struct members which might be available on different
structs.

As an example in flt_ot_http_headers_get you could use an ist for prefix
and then simply use isteqi() instead of strncasecmp().


The functions strncasecmp() and isteqi() are not compatible, because
isteqi() requires that the lengths of the strings being compared are
equal, whereas in this function that is not the case.  In addition, the
prefix and len should be put in the ist structure and n.len reduced to
the size of len, which is an unnecessary complication.


You are right of course. For some reason I thought you wanted to compare 
for equality and used the 'n' variant because the strings are not NUL 
terminated.


The programming style in OT is not what I'm used to, but if I understand 
it correctly then maybe istmatchi() is what you are searching for (it 
matches prefixes which should be what you are doing manually)?


Best regards
Tim Düsterhus



Re: [PATCH] BUILD: opentracing: excluded use of haproxy variables for, OpenTracing context

2021-09-10 Thread Tim Düsterhus

Miroslav,

On 9/10/21 8:08 PM, Miroslav Zagorac wrote:

the problem is that in this case after calling the function
htx_get_blk_value(), v.ptr is not a NULL pointer but v.len is equal
to 0, if the http header has no value.



If it's not a NULL pointer then it must be a valid pointer. I don't 
quite understand why you would need to replace this by an empty constant 
string then? Is this about a missing NUL terminator? Otherwise a valid 
pointer with .len == 0 would be equivalent to "".


-

Unrelated to that: Having worked quite a bit with the ist API I would 
generally recommend using the helper functions in ist.h (possibly adding 
new ones if they are helpful) instead of directly touching the struct. 
It makes refactoring much easier, because it's easy to find the function 
calls vs. accessing struct members which might be available on different 
structs.


As an example in flt_ot_http_headers_get you could use an ist for prefix 
and then simply use isteqi() instead of strncasecmp().


> Of course, it could look like this:
>
> if (v.len == 0)
>  v.ptr = "" /* or v = ist("") */;
>

In your example I would prefer v = ist("") over v.ptr = "" and leave the 
heavy lifting up to the compiler for that reason.


Best regards
Tim Düsterhus



Re: [PATCH] BUILD: opentracing: excluded use of haproxy variables for, OpenTracing context

2021-09-10 Thread Tim Düsterhus

Miroslav,

On 9/10/21 12:31 PM, Miroslav Zagorac wrote:

I am sending 4 patches that revert to the opentracing filter function,
with the possibility of using haproxy variables to transfer the
opentracing context excluded.


Patch 2 looks odd to me. Is the issue that .ptr is equal to NULL and 
*that* is causing issues? Then I recommend the following instead:


if (!isttest(v))
  v = ist("");

this is much cleaner, because it does not attempt to cram so much logic 
into that single huge line.


In any case if the issue is a NULL then you must test for that NULL 
instead of testing the length.


Best regards
Tim Düsterhus



Re: PPv2: clarify "protocol block" and how to ignore it?

2021-09-10 Thread Tim Düsterhus

Jacob,

On 9/8/21 8:17 PM, Jacob Champion wrote:

Just to double-check, is there a better way for me to have asked my
question, or something I missed in the ML etiquette doc?



You are in the right place and your email looks fine with regard to 
mailing list etiquette. I just assume that there's very few people 
capable of answering your question and that they might've missed your email.


I worked quite a bit on PPv2 within HAProxy and saw your email, but I 
did not write the specification, thus I was unable to give an answer.


Adding Willy to Cc.

Best regards
Tim Düsterhus



Re: [PATCH] variables cleanup/fixup

2021-08-31 Thread Tim Düsterhus

Willy,

On 8/31/21 9:07 AM, Willy Tarreau wrote:

I've finally implemented the replacement of the global variables table


Okay, please refer to issue #624 in the commit: 
https://github.com/haproxy/haproxy/issues/624. I believe it should be 
resolved afterwards.



with a hash instead. However it now obviously breaks the "ifexist"
argument that you added to Lua's set_var() that was designed to work
around the growing table problem.

Given that the limitation used to be made on the existence of a similarly
named variable anywhere in the process (and not of the variable in the
current list), I conclude that it was only used to preserve precious
resources and never to conditionally set a variable in one's context.
As such we could simply remove this test and silently ignore the ifexist
argument now. Do you agree with this ? I'd really like it if we could
definitely get rid of this old mess!


For the record this is my the repository I care about:

https://github.com/TimWolla/haproxy-auth-request

It includes tests based on VTest. I just ran the tests with patches applied:


$ vtest -Dhaproxy_version=2.5.0 -q -k -t 10 -C test/*.vtc
#top  TEST test/no_variable_leak.vtc FAILED (0.203) exit=2
1 tests failed, 0 tests skipped, 19 tests passed


Naturally the no_variable_leak.vtc test 
(https://github.com/TimWolla/haproxy-auth-request/blob/main/test/no_variable_leak.vtc) 
fails now, as it specifically tests that my detection of the ifexist 
parameter works as expected. I can simply exclude HAProxy 2.5 from that 
test and all is well.


In my case I only care about not bloating HAProxy's memory usage 
infinitely, in case the backend sends headers with randomly 
generated names (these are exposed as req.auth_response_header.*). The 
fact that variables are unavailable to Lua is a side-effect of this, not 
a feature.


It would certainly be preferable for the user if they could simply use 
the variables from Lua, without needing to reference them in the config.


As such: Your patches LGTM, thanks. Please proceed :-)


For reference I'm appending the current patch series.



Best regards
Tim Düsterhus



Re: [PATCH]: MINOR: proc: making the process able to produce a core dump on FreeBSD

2021-08-21 Thread Tim Düsterhus

David,

On 8/21/21 10:35 AM, David CARLIER wrote:

Here a little patch for FreeBSD systems.





diff --git a/Makefile b/Makefile
index 84b1bc0ea..f6d813c27 100644
--- a/Makefile
+++ b/Makefile
@@ -36,6 +36,7 @@
 #   USE_ACCEPT4  : enable use of accept4() on linux. Automatic.
 #   USE_CLOSEFROM: enable use of closefrom() on *bsd, solaris. 
Automatic.
 #   USE_PRCTL: enable use of prctl(). Automatic.
+#   USE_PROCCTL  : enable use of procctl().


Should it read "Automatic." at the end, because it appears to be 
automatically enabled for the freebsd target?


Best regards
Tim Düsterhus



Re: question: ExecStartPre removal from systemd unit file

2021-08-19 Thread Tim Düsterhus

William,

On 8/19/21 1:50 PM, William Lallemand wrote:

The config check should prevent HAProxy from going into wait mode when
the config is bad on a reload. If I am not mistaken it's not possible to
recover from wait mode without a full restart, no?



Well, this line is not used for the reload, but only for the start. For
the reload the first ExecReload line is used:


Ah, yes, indeed. I should have checked instead of just assuming stuff. 
Sorry! I don't see an issue with your proposed change then.


Best regards
Tim Düsterhus



Re: question: ExecStartPre removal from systemd unit file

2021-08-19 Thread Tim Düsterhus

William,

On 8/19/21 12:04 PM, William Lallemand wrote:

I realized yesterday that we have this line in the systemd unit file:

 ExecStartPre=@SBINDIR@/haproxy -f $CONFIG -c -q $EXTRAOPTS

Which does not make any sense to me since starting HAProxy itself
checks the configuration, so it slows the start of the service for
nothing.

I'm going to remove this line.

Is there anyone against it, or did I miss a particular usecase?


The config check should prevent HAProxy from going into wait mode when 
the config is bad on a reload. If I am not mistaken it's not possible to 
recover from wait mode without a full restart, no?


Best regards
Tim Düsterhus



Re: [ANNOUNCE] HTTP/2 vulnerabilities from 2.0 to 2.5-dev

2021-08-17 Thread Tim Düsterhus

Hi Willy, Everyone,

On 8/17/21 5:13 PM, Willy Tarreau wrote:

2) Domain parts in ":scheme" and ":path"

[...] As such HTTP/1 servers are safe and only HTTP/2 servers are exposed.


I'd like to clarify that the above statement is not true. The issue also 
affects H2->HAProxy->H1 connections. It allows forwarding a different 
'host' header than the one HAProxy sees to the backend.


The 'http-request set-uri %[url]' workaround mentioned at the bottom of 
Willy's email also fixes the issue for HTTP/1 backends.
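
As a sketch, the quoted workaround could be placed in a frontend like
this (the frontend name, bind address, certificate path, and backend
name are illustrative assumptions, not from the advisory):

```
# Hypothetical frontend applying the quoted workaround; all names
# and paths here are placeholders.
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    # Rebuild the request line from the parsed URL so that a
    # mismatching authority cannot be forwarded to the backend
    http-request set-uri %[url]
    default_backend be_app
```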


In any case I recommend to upgrade as soon as possible. That way you 
don't have to think whether your setup requires a workaround or not.


Best regards
Tim Düsterhus



Re: [ANNOUNCE] HTTP/2 vulnerabilities from 2.0 to 2.5-dev

2021-08-17 Thread Tim Düsterhus

Vincent,

On 8/17/21 5:49 PM, Vincent Bernat wrote:

For users of haproxy.debian.net or Launchpad PPA, the vulnerabilities
are fixed by patching the previous versions. Launchpad PPA builders are
still running but it should be available in the next hour. I will upload
the new versions later this week.


As a consumer of your packages: A big thank you for your work.

After I noticed Willy's announcement I checked whether any new 2.4.x 
packages were available and indeed they were already there, saving me 
the work of compiling a custom version / adjusting the config.


Best regards
Tim Düsterhus



Re: [PATCH] JA3 TLS Fingerprinting (take 2)

2021-07-15 Thread Tim Düsterhus

Marcin,

On 7/14/21 2:01 PM, Marcin Deranek wrote:

Thank you for all comments I have received regarding JA3 Fingerprinting
patches. Here is the new set of patches which incorporated all your
suggestions.


Sorry I gave a little outdated advice regarding the reg-tests. For any 
new tests please use:


  feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(2.5-dev0)'"

instead of

  #REQUIRE_VERSION=2.5

Other than that the tests LGTM from a glance. I didn't look at your C 
and I also didn't (yet) compare the tests against the documentation you 
have written.


Best regards
Tim Düsterhus



Replying to spam [was: Some Spam Mail]

2021-07-15 Thread Tim Düsterhus

Ilya,

On 7/14/21 5:20 PM, Илья Шипицин wrote:

Yes, go ahead


Would you please stop replying to spam, especially with both the sender 
and the list in Cc? It just causes more spam down the road.


Best regards
Tim Düsterhus



Re: [PATCH] JA3 TLS Fingerprinting

2021-07-12 Thread Tim Düsterhus

Marcin,

On 7/12/21 4:59 PM, Marcin Deranek wrote:

Over a past few weeks I have been working on implementing JA3 compatible
TLS Fingerprinting[1] in the HAProxy. You can find the outcome in
attachments. Feel free to review/comment them.


I can't comment on the correctness of the patches, but please add 
reg-tests where possible. At the very least the new / updated converters 
should should (must?) get a reg-test to ensure correctness.


Also one minor remark regarding the first patch: Please indent jump 
labels (store_capture:) by at least one space. This improves the 
detection of diff hunk headers in some cases.


Best regards
Tim Düsterhus



Re: Proposal about new default SSL log format

2021-07-07 Thread Tim Düsterhus

Remi,

On 7/7/21 4:43 PM, Remi Tricot-Le Breton wrote:
A quick recap of the topics raised by the multiple conversations had in 
this thread:
- we won't use tabs as separators in order to remain consistent with 
the existing log format (especially since this new format will only be 
an extension of the HTTP one)

- the date format won't change right now, it is a whole new subject
- the logged lines will all have the same "date+process_name+pid" prefix 
as the already existing logs
- the HTTP log format won't be touched yet, changing it would be a whole 
new subject as well



The log would then be as follows :


     >>> Jul  1 18:11:31 haproxy[143338]: 127.0.0.1:37740 
[01/Jul/2021:18:11:31.517] \

   ssl_frontend~ ssl_frontend/s2 0/0/0/7/+7 200 2750 - -  \
   1/1/1/1/0 0/0 {1wt.eu} {} "GET /index.html HTTP/1.1 \
   0/0/0/0 TLSv1.3/TLS_AES_256_GCM_SHA384

*snip*

and the corresponding log-format :

     %ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC \
     %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r \
*%[conn_err_code]/%[ssl_fc_hsk_err]/%[ssl_c_err]/%[ssl_c_ca_err]* \
*%sslv/%sslc*



Looks good to me, this is a natural extension of the existing format.

Best regards
Tim Düsterhus



Re: Proposal about new default SSL log format

2021-07-06 Thread Tim Düsterhus

Willy,

On 7/6/21 12:12 PM, Willy Tarreau wrote:

A few points first, that are needed to address various concerns. The
goal here is to defined an HTTPS log format because that's what the
vast majority of users are dealing with day-to-day. For specific usages,
everyone already redefines log formats. But for a very basic setup, right
now it's true that httplog is a bit limited, and all it shows to help guess
there was SSL involved is to append this '~' after the frontend's name.


It was not clear to me that this log format is meant to apply to HTTPS 
requests, because the example given by Remi does not include the HTTP 
verb nor the request URI (unless I need glasses). I thought it 
was a format for TLS errors or something like this.


Is this a mistake in the examples? Or is HAProxy going to emit multiple 
log lines: One for the TLS connection and one for each HTTP request 
passing through this TLS connection?



However, it's also clear that most users will not violently migrate from
httplog to httpslog, and it's important to keep a smooth enough transition.
This also means not to change stuff that would be relevant to httplog as
well (e.g. delimiters, time format etc).

We can (and should) have discussions about what to change in future log
formats, but let's not use the https one as a bootstrap for passing
everyone's missing field. Instead, let's focus on the SSL-specific stuff
that users are always missing from HTTP logs, and try to establish a
reasonable list that will always be there and suit most users without
adding too much for others, and that will require limited adaptations
to parsers.


Agree.

Best regards
Tim Düsterhus



Re: SNI spoofing in HAproxy?

2021-07-05 Thread Tim Düsterhus

Dominik,

On 7/5/21 2:30 PM, Froehlich, Dominik wrote:

I've played around with your solution a bit and I think I may have found two 
issues with it:

- It doesn't check if the client uses SNI at all and it will deny the request 
if no SNI is used


I always use 'strict-sni' on the bind line, so this is not a concern for me.


- It fails if the client adds a port to the host header


Indeed, but this also not a concern for me, because I use standard 
ports. There is a "bug" in Firefox for WebSockets via HTTP/2 where it 
adds the :443 to the 'Host', but this will be worked around in HAProxy. 
See this mailing list thread: 
https://www.mail-archive.com/haproxy@formilux.org/msg40652.html



So to my understanding, it is perfectly fine for a client to not use SNI if 
there is only one certificate to be used.


I don't understand what you mean by "if there is only one certificate to 
be used".



Here is my iteration of your solution:

   http-request set-var(txn.host) hdr(host),field(1,:)
   acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0
   http-request deny deny_status 421 if !ssl_sni_http_host_match { 
ssl_fc_has_sni }

- I am using the field converter to strip away any ports from the host header


This looks good to me.


- I only deny requests that actually use SNI


I disagree, but your mileage may vary.


What are your thoughts on this? I know that technically this would still allow 
clients to do this:

curl https://myhost.com:443 -H "host: myhost.com:1234"

this would then pass and not be denied.
But I don't see any other choice since SNI will never contain a port, I must 
ignore it in the comparison.


You technically could compare the port in the 'host' header, if any, 
with the actual port (you can retrieve it using 'dst_port'). You must 
decide whether this is important for your environment or not.
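
A rough sketch of that comparison, with caveats: the variable and ACL
names are made up, and the casting between the integer 'dst_port' fetch
and the string comparison should be verified against the HAProxy
version in use:

```
# Hypothetical port check; variable and ACL names are placeholders.
# Extract the port part of the Host header (empty if absent)
http-request set-var(txn.hostport) hdr(host),field(2,:)
http-request set-var(txn.dstport) dst_port
acl host_has_port var(txn.hostport) -m found
acl ports_match var(txn.hostport),strcmp(txn.dstport) eq 0
# Reject requests whose explicit Host port disagrees with the
# actual destination port of the connection
http-request deny deny_status 421 if host_has_port !ports_match
```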


Best regards
Tim Düsterhus



Re: Proposal about new default SSL log format

2021-07-05 Thread Tim Düsterhus

Remi,

On 7/5/21 5:15 PM, Remi Tricot-Le Breton wrote:

1) tab separated is better for any log import tool (mixing spaces and
"/" is terrible for import)


I don't have any problems with that apart from inconsistency with the
other default formats. If switching to tabs for this format only does
not bother anyone I'll do it.


This inconsistency bothers me. Tabs also bother me in general, because 
they make it hard for a human to parse the log format, whereas one can 
simply teach a program to understand the HAProxy format. With 'halog' 
there already is a dedicated log parser that works with HAProxy's format.


Regarding my suggestion of %ID: I also don't think it is useful to put 
%ID into the SSL log when it is not in the HTTP log. Please don't do 
this, unless you also introduce an updated 'option httplog2' or 
something like this.


Best regards
Tim Düsterhus



Re: Proposal about new default SSL log format

2021-07-02 Thread Tim Düsterhus

Remi,

On 7/2/21 4:26 PM, Remi Tricot-Le Breton wrote:

But if anybody sees a missing information that could be
beneficial for everybody, feel free to tell it, nothing is set in stone yet.
[…]
Feel free to suggest any missing data, which could come from log-format
specific fields or already existing sample fetches.



Not sure whether the HTTP format will also be updated then, but: Put %ID 
somewhere by default, please.


Best regards
Tim Düsterhus



Re: Bad backend selected

2021-06-26 Thread Tim Düsterhus

Willy, Amaury,

On 6/18/21 5:55 PM, Willy Tarreau wrote:

Amaury started something in this direction but it was only in H2 and
I'd like that we explore the ability to do it the most generic way
possible so as not to have different behaviours depending on where
the requests enter and where they leave. I know it's tricky, which is
one more reason for exploring this possibility. At least regardless
of the final choice, we'll be certain not to have left unknown cases
incorrectly handled.

Thus no need to create a new issue for now, hopefully by next week
we'll all agree on a durable solution.


FYI: Another user in IRC (#haproxy on Libera.Chat) just stumbled upon 
this issue as well. So it appears it's a reasonably common occurrence 
with Firefox and HAProxy 2.4.


Best regards
Tim Düsterhus



Re: [PATCH] BUILD/MEDIUM: tcp-act: set-mark support for FreeBSD

2021-06-26 Thread Tim Düsterhus

David,

On 6/26/21 5:37 PM, David CARLIER wrote:

Alright then :-)


Not sure what happened, but it looks even more off now. To save you the 
work I just applied it locally and rewrapped it automatically with vim 
using 'gq'. I attached an updated patch.


Please confirm it looks good to you, because it's still your name in 
there :-)


Best regards
Tim Düsterhus
>From 19383d2d386fd86e3477afc6f9d859f81f3c708c Mon Sep 17 00:00:00 2001
From: David Carlier 
Date: Sat, 26 Jun 2021 12:04:36 +0100
Subject: [PATCH] BUILD/MEDIUM: tcp: set-mark setting support for FreeBSD.

This platform has a similar socket option from Linux's SO_MARK,
 marking a socket with an id for packet filter purpose, DTrace
monitoring and so on.
---
 doc/configuration.txt| 47 +++-
 include/haproxy/connection.h |  5 +++-
 src/tcp_act.c|  4 +--
 3 files changed, 31 insertions(+), 25 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9ffcc7581..caa864667 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6503,13 +6503,14 @@ http-request set-map()  
 
 http-request set-mark  [ { if | unless }  ]
 
-  This is used to set the Netfilter MARK on all packets sent to the client to
-  the value passed in  on platforms which support it. This value is an
-  unsigned 32 bit value which can be matched by netfilter and by the routing
-  table. It can be expressed both in decimal or hexadecimal format (prefixed by
-  "0x"). This can be useful to force certain packets to take a different route
-  (for example a cheaper network path for bulk downloads). This works on Linux
-  kernels 2.6.32 and above and requires admin privileges.
+  This is used to set the Netfilter/IPFW MARK on all packets sent to the client
+  to the value passed in  on platforms which support it. This value is an
+  unsigned 32 bit value which can be matched by netfilter/ipfw and by the
+  routing table or monitoring the packets through DTrace. It can be expressed
+  both in decimal or hexadecimal format (prefixed by "0x").
+  This can be useful to force certain packets to take a different route (for
+  example a cheaper network path for bulk downloads). This works on Linux
+  kernels 2.6.32 and above and requires admin privileges, as well on FreeBSD.
 
 http-request set-method  [ { if | unless }  ]
 
@@ -7163,13 +7164,14 @@ http-response set-map()  
 
 http-response set-mark  [ { if | unless }  ]
 
-  This is used to set the Netfilter MARK on all packets sent to the client to
-  the value passed in  on platforms which support it. This value is an
-  unsigned 32 bit value which can be matched by netfilter and by the routing
-  table. It can be expressed both in decimal or hexadecimal format (prefixed
-  by "0x"). This can be useful to force certain packets to take a different
-  route (for example a cheaper network path for bulk downloads). This works on
-  Linux kernels 2.6.32 and above and requires admin privileges.
+  This is used to set the Netfilter/IPFW MARK on all packets sent to the client
+  to the value passed in  on platforms which support it. This value is an
+  unsigned 32 bit value which can be matched by netfilter/ipfw and by the
+  routing table or monitoring the packets through DTrace.
+  It can be expressed both in decimal or hexadecimal format (prefixed by "0x").
+  This can be useful to force certain packets to take a different route (for
+  example a cheaper network path for bulk downloads). This works on Linux
+  kernels 2.6.32 and above and requires admin privileges, as well on FreeBSD.
 
 http-response set-nice  [ { if | unless }  ]
 
@@ -11898,14 +11900,15 @@ tcp-request connection  [{if | unless} ]
 fails and the actions evaluation continues.
 
 - set-mark :
-  Is used to set the Netfilter MARK in all packets sent to the client to
-  the value passed in  on platforms which support it. This value is
-  an unsigned 32 bit value which can be matched by netfilter and by the
-  routing table. It can be expressed both in decimal or hexadecimal format
-  (prefixed by "0x"). This can be useful to force certain packets to take a
-  different route (for example a cheaper network path for bulk
-  downloads). This works on Linux kernels 2.6.32 and above and requires
-  admin privileges.
+  Is used to set the Netfilter/IPFW MARK in all packets sent to the client
+  to the value passed in  on platforms which support it. This value
+  is an unsigned 32 bit value which can be matched by netfilter/ipfw and by
+  the routing table or monitoring the packets through DTrace.
+  It can be expressed both in decimal or hexadecimal format (prefixed by
+  "0x"). This can be useful to force certain packets to take a different
+  route (for example a cheaper network path for bulk downloads). This works
+  on Linux kernels 2.6.32 and above and require
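
To make the directive concrete, here is a small configuration sketch
(untested, with invented names and addresses) that marks all client-bound
packets of one listener so a policy-routing rule (ip rule on Linux, ipfw on
FreeBSD) can steer them onto a cheaper path:

```haproxy
frontend fe_bulk
    bind :8080
    # Tag every packet sent back to clients of this listener with the
    # mark 0x2a; requires admin privileges (Linux 2.6.32+ or FreeBSD).
    tcp-request connection set-mark 0x2a
    default_backend be_bulk

backend be_bulk
    server s1 192.0.2.10:8080
```

The mark itself does nothing until the OS routing layer matches it, e.g. with
`ip rule add fwmark 0x2a table 100` on Linux.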

Re: [PATCH] BUILD/MEDIUM: tcp-act: set-mark support for FreeBSD

2021-06-26 Thread Tim Düsterhus

David,

On 6/26/21 4:57 PM, David CARLIER wrote:

Sure. hope the doc changes is alright.



Can you please rewrap the documentation to ensure an uniform look?

Best regards
Tim Düsterhus



Re: SNI spoofing in HAproxy?

2021-06-25 Thread Tim Düsterhus

Dominik,

On 6/25/21 10:42 AM, Froehlich, Dominik wrote:

Your code sends a 421 if the SNI and host header don't match.
Is this the recommended behavior? The RFC is pretty thin here:

"   Since it is possible for a client to present a different server_name
in the application protocol, application server implementations that
rely upon these names being the same MUST check to make sure the
client did not present a different name in the application protocol."
(from https://datatracker.ietf.org/doc/html/rfc6066#section-11.1 )


Indeed it is. See the HTTP/2 RFC: 
https://datatracker.ietf.org/doc/html/rfc7540#section-9.1.1.


Additionally I would recommend using single-domain certificates (if 
possible), then a regular browser will never see a 421, because it will 
not perform connection reuse.



My initial idea was to simply pave over any incoming headers with the SNI:


http-request set-header Host %[ssl_fc_sni] if { ssl_fc_has_sni }


This wouldn't abort the request with a 421 but I am not sure if I MUST abort it 
to be compliant.


This is equivalent to routing based off the SNI directly and it's as 
unsafe. If a browser reuses a TCP connection for an unrelated host you 
will send the request to the wrong backend.



Regarding domain fronting, I thought I might be able to have the cake and eat 
it, too if I still allowed it but only prevented it for mTLS
Requests. Maybe like this:


http-request set-header Host %[ssl_fc_sni] if { ssl_c_used }





Only performing checks when `ssl_c_used` is true should be safe. But I 
*strongly* recommend sending a 421 instead of attempting to fiddle 
around with the 'host' header. It's just too easy to accidentally 
perform incorrect routing. It would then look like this:



http-request   set-var(txn.host) hdr(host)
http-request   deny deny_status 400 unless { req.hdr_cnt(host) eq 1 }
# Verify that SNI and Host header match if a client certificate is sent
http-request   deny deny_status 421 if { ssl_c_used } ! { ssl_fc_sni,strcmp(txn.host) eq 0 }


Best regards
Tim Düsterhus



Re: SNI spoofing in HAproxy?

2021-06-24 Thread Tim Düsterhus

Julien,

On 6/24/21 3:40 PM, Julien Pivotto wrote:

   use_backend bob if { hdr(host) -m dom bob.com }
   use_backend alice if { hdr(host) -m dom alice.com }


Thanks for taking the time to write this report.

SNI and host header are indeed different.

You should consider using req.ssl_sni instead of hdr(host).



NO! Using req.ssl_sni for request routing is unsafe and it will break 
more setups than it fixes.


Browsers can and will open a single TCP / TLS connection for multiple 
unrelated hosts if the certificate presented on the first connection is 
valid for the other host. This is especially true for HTTP/2.


To prevent the follow-up requests from being routed to the wrong backend 
you must use the 'host' header for routing -- or your certificates need 
to be non-overlapping.


In fact HTTP 421 Misdirected Request was invented for the specific case 
of a web browser reusing an existing TCP connection for an unrelated domain.


Best regards
Tim Düsterhus



Re: SNI spoofing in HAproxy?

2021-06-24 Thread Tim Düsterhus

Dominik,

On 6/24/21 3:29 PM, Froehlich, Dominik wrote:

Not sure if you would call this a security issue, hence I am asking this on the 
mailing list prior to opening a github issue:


This is also known as "Domain Fronting" 
(https://en.wikipedia.org/wiki/Domain_fronting). It's not necessarily a 
security issue and in fact might be the desired behavior in certain 
circumstances.



I’ve noticed that it is really easy to bypass the check on client certificates 
of a domain when the client can present a valid certificate for another domain.


Indeed. I also use client certificates for authentication and I'm aware 
of this issue. That's why I validate that the SNI and Host match up for 
my set up. In fact I added the 'strcmp' converter to HAProxy just for 
this purpose. The documentation for 'strcmp' also gives an explicit 
example on how to use it to prevent domain fronting:



http-request set-var(txn.host) hdr(host)
# Check whether the client is attempting domain fronting.
acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0


https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-strcmp

My full rules look like this:


# Verify that SNI and Host header match
http-request   set-var(txn.host) hdr(host)
http-request   deny deny_status 400 unless { req.hdr_cnt(host) eq 1 }
http-request   deny deny_status 421 unless { ssl_fc_sni,strcmp(txn.host) eq 0 }


-


[…]

My questions:

   *   HAproxy does seem to treat SNI (L5) and HTTP Host Header (L7) as 
unrelated. Is this true?


Yes.


   *   Applications offloading TLS to HAproxy usually trust that mTLS requests 
coming in are validated correctly. They usually don’t revalidate the entire 
certificate again and only check for the subject’s identity. Is there a way to 
make SNI vs host header checking more strict?


Yes, see above.


   *   What’s the best practice to dispatch mTLS requests to backends? I’ve 
used a host header based approach here but it shows the above vulnerabilities.


You *must* use the 'Host' header for routing. Using the SNI value for 
routing is even more unsafe. But you also must validate that the SNI 
matches up or that the client presented a valid certificate.


Best regards
Tim Düsterhus



Re: [PATCH 0/1] Replace issue templates by issue forms

2021-06-24 Thread Tim Düsterhus

Willy,

On 6/24/21 4:17 AM, Willy Tarreau wrote:

Thanks for this fast feedback, now merged!



And now we already have the first issue created using an Issue Form: 
https://github.com/haproxy/haproxy/issues/1307


I think the result is looking great. I particularly like that the 
plaintext email no longer contains the HTML comments guiding the user 
through the template.


Best regards
Tim Düsterhus



Re: [PATCH 0/1] Replace issue templates by issue forms

2021-06-23 Thread Tim Düsterhus

Lukas,

On 6/23/21 11:27 PM, Lukas Tribus wrote:

Thanks for this, I like it!

What I'm missing in the new UI is the possibility for the user to
preview the *entire* post, I'm always extensively using preview
features everywhere. So this feels like a loss, although the user can
preview the content of the individual input box.

But that's not reason enough to hold up this change, I just wish the
"send us your feedback" button [1] would actually work.


Looking at the link it appears as if they accidentally created a 
relative link by forgetting to put the `https://`. The correct one 
appears to be:


https://github.com/github/feedback/discussions/categories/issues-feedback


Full Ack from me for this change, this will be very beneficial as we
get higher quality reports and people won't be required to navigate
through raw markdown, which is not user-friendly at all.


Yes, exactly. I regularly edited the issues, because the user 
accidentally used inline code instead of the multiline blocks (single 
backtick vs 3 backticks) making it an unreadable mess and also because 
sometimes they put the actual values into the HTML comments with the 
explanation. Causing them to show in the email notifications, but not in 
the web interface.


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: Prevent channel-t.h from being detected as C++ by GitHub

2021-06-20 Thread Tim Düsterhus

Willy,

On 6/20/21 11:49 AM, Willy Tarreau wrote:

Well, I would say that this heuristic is totally broken because it should at
least ensure there's no more word before a brace on the line. However I agree
that usually we prefer to respect the comment format starting with an asterisk
at the beginning of each line, so I'm OK with the change and merged the patch.


I agree and I certainly would not have proposed it, if it was *just* to 
satisfy this heuristic. With the other reason of matching the comment 
format I considered it acceptable, especially since it did not cause 
rewrapping :-)


Best regards
Tim Düsterhus



Re: Bad backend selected

2021-06-18 Thread Tim Düsterhus

Willy,

On 6/8/21 8:37 AM, Willy Tarreau wrote:

However the only difference is the 443 port explicitly specified in the
latter request.
I am not sure it's something specific to 2.4.0, but I've never seen it
before.
Is it an expected behaviour ? If so, how can I change my acls to correct
it ?


I encountered the same issue (incidentally also with socket.io). It's
happening for WebSockets via HTTP/2. These are newly supported starting with
HAProxy 2.4. The "broken" requests are most likely Firefox, while the
working ones are not Firefox. I already have a private email thread with a
few developers regarding this behavior.


So I had some thoughts about that discussion that started off-list. And
now I think that the right thing to do is to always drop the port part
of the authority when we have a scheme for which it's the default. I.e.
if the scheme is "http" we drop ":80", and if the scheme is "https" we
drop ":443". This will always be consistent with the standards, and by
doing it early (i.e. during conversion to HTX) we're certain to address
both the conversion of CONNECT to GET+Upgrade, and the hdr(host) match.


Is this still on your radar? Should I file an issue for that?

Best regards
Tim Düsterhus



Re: [PATCH] BUG/MINOR: cache: Correctly handle existing-but-empty, 'accept-encoding' header

2021-06-18 Thread Tim Düsterhus , WoltLab GmbH

Remi,

On 6/18/21 3:46 PM, Remi Tricot-Le Breton wrote:
please find a small fix for the 'Vary' support of the cache to 
correctly handle the difference between a missing 'accept-encoding' 
and an empty 'accept-encoding'.



Nice catch, I completely forgot about this case.



Frankly it's a bit dumb that there even is a difference between a 
missing header and an empty header. And also that content-encoding is 
opt-out instead of opt-in.


I found the issue, because HAProxy was serving me a gzip encoded 
response from the cache when I was not providing an 'accept-encoding' 
header and I initially wanted to report *that* as a bug report. Then I 
noticed that this was intentional and that it is the expected behavior 
based on the HTTP specification.


I would not be surprised if this case is not correctly handled by the 
majority of HTTP clients out there, because most (all?) servers tend to 
only serve compressed responses if the client explicitly indicates support.


After sending the patch to HAProxy I also patched the HTTP client 
without compression support in our software to specify 'accept-encoding: 
identity' to be on the safe side :-/
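
The missing-versus-empty distinction from RFC 7231#5.3.4 can be modeled in a
few lines. This is a toy sketch of the semantics only (q-values are ignored),
not HAProxy's actual bitmap code:

```python
def acceptable_encodings(accept_encoding):
    """Decide which content-codings a cached response may carry.

    accept_encoding is None when the header is absent and "" when it is
    present but empty -- the two cases RFC 7231#5.3.4 distinguishes.
    """
    if accept_encoding is None:
        # Absent header: the client accepts any encoding.
        return lambda coding: True
    values = [v.strip().lower() for v in accept_encoding.split(",") if v.strip()]
    if not values:
        # Present-but-empty header: no content-coding is wanted.
        return lambda coding: coding == "identity"
    # Listed codings are acceptable; identity is implicitly acceptable too.
    return lambda coding: coding in values or coding == "identity"

# Missing header: a gzip-encoded cached object may be served.
assert acceptable_encodings(None)("gzip")
# Empty header: only an unencoded (identity) object may be served.
assert not acceptable_encodings("")("gzip")
assert acceptable_encodings("")("identity")
```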


Best regards
Tim Düsterhus
Developer WoltLab GmbH

--

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



[PATCH] BUG/MINOR: cache: Correctly handle existing-but-empty, 'accept-encoding' header

2021-06-18 Thread Tim Düsterhus , WoltLab GmbH

Remi,

please find a small fix for the 'Vary' support of the cache to correctly 
handle the difference between a missing 'accept-encoding' and an empty 
'accept-encoding'.


Best regards
Tim Düsterhus
Developer WoltLab GmbH

--

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P
From e33b3332a138319476d690acbf0aa20395df5598 Mon Sep 17 00:00:00 2001
From: Tim Duesterhus 
Date: Fri, 18 Jun 2021 15:09:28 +0200
Subject: [PATCH] BUG/MINOR: cache: Correctly handle existing-but-empty
 'accept-encoding' header
To: haproxy@formilux.org
Cc: rlebre...@haproxy.com,
w...@1wt.eu

RFC 7231#5.3.4 makes a difference between a completely missing
'accept-encoding' header and an 'accept-encoding' header without any values.

This case was already correctly handled by accident, because an empty accept
encoding does not match any known encoding. However this resulted in the
'other' encoding being added to the bitmap. Usually this also succeeds in
serving cached responses, because the cached response likely has no
'content-encoding', thus matching the identity case instead of not serving the
response, due to the 'other' encoding. But it's technically not 100% correct.

Fix this by special-casing 'accept-encoding' values with a length of zero and
extend the test to check that an empty accept-encoding is correctly handled.
Due to the reasons given above the test also passes without the change in
cache.c.

Vary support was added in HAProxy 2.4. This fix should be backported to 2.4+.
---
 reg-tests/cache/vary.vtc | 47 
 src/cache.c  | 15 +++--
 2 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/reg-tests/cache/vary.vtc b/reg-tests/cache/vary.vtc
index 61d7af671..197f822b3 100644
--- a/reg-tests/cache/vary.vtc
+++ b/reg-tests/cache/vary.vtc
@@ -108,7 +108,18 @@ server s1 {
-hdr "Content-Encoding: gzip" \
   -bodylen 188
 
+   rxreq
+   expect req.url == "/empty-vs-missing"
+   txresp -hdr "Content-Encoding: gzip" \
+   -hdr "Vary: accept-encoding" \
+   -hdr "Cache-Control: max-age=5" \
+   -bodylen 234
 
+   rxreq
+   expect req.url == "/empty-vs-missing"
+   txresp -hdr "Vary: accept-encoding" \
+   -hdr "Cache-Control: max-age=5" \
+   -bodylen 256
 } -start
 
 server s2 {
@@ -344,6 +355,42 @@ client c1 -connect ${h1_fe_sock} {
expect resp.bodylen == 188
expect resp.http.X-Cache-Hit == 0
 
+   # A missing 'Accept-Encoding' implies that anything is acceptable,
+   # while an empty 'Accept-Encoding' implies nothing is acceptable.
+
+   # Start by caching a gzip response.
+   txreq -url "/empty-vs-missing" -hdr "Accept-Encoding: gzip"
+   rxresp
+   expect resp.status == 200
+   expect resp.bodylen == 234
+   expect resp.http.content-encoding == "gzip"
+   expect resp.http.X-Cache-Hit == 0
+
+   # Check that it is cached.
+   txreq -url "/empty-vs-missing" -hdr "Accept-Encoding: gzip"
+   rxresp
+   expect resp.status == 200
+   expect resp.bodylen == 234
+   expect resp.http.content-encoding == "gzip"
+   expect resp.http.X-Cache-Hit == 1
+
+   # Check that the cached response is returned when no accept-encoding is
+   # specified.
+   txreq -url "/empty-vs-missing"
+   rxresp
+   expect resp.status == 200
+   expect resp.bodylen == 234
+   expect resp.http.content-encoding == "gzip"
+   expect resp.http.X-Cache-Hit == 1
+
+   # Check that the cached response is not returned when an empty
+   # accept-encoding is specified.
+   txreq -url "/empty-vs-missing" -hdr "Accept-Encoding:"
+   rxresp
+   expect resp.status == 200
+   expect resp.bodylen == 256
+   expect resp.http.content-encoding == ""
+   expect resp.http.X-Cache-Hit == 0
 
# The following requests are treated by a backend that does not cache
# responses containing a Vary header
diff --git a/src/cache.c b/src/cache.c
index bdb82583f..feab63f07 100644
--- a/src/cache.c
+++ b/src/cache.c
@@ -2280,6 +2280,19 @@ static int accept_encoding_normalizer(struct htx *htx, struct ist hdr_name,
 	/* Iterate over all the ACCEPT_ENCODING_MAX_ENTRIES first accept-encoding
	 * values that might span across multiple accept-encoding headers. */
	while (http_find_header(htx, hdr_name, &ctx, 0) && count < ACCEPT_ENCODING_MAX_ENTRIES) {
+		count++;
+
+		/* As per RFC7231#5.3.4, "An Accept-Encoding header field with a
+		 * combined field-value that is empty implies that the user agent
+		 * does not want any content-coding in 

Re: [PATCH 0/4] Use 'feature cmd' in regtests

2021-06-17 Thread Tim Düsterhus

Willy,

[clearing CC list]

On 6/17/21 3:27 PM, Willy Tarreau wrote:

Whole series merged, thanks Tim. I feel like the selection is slightly
faster than before, but that might just be a placebo effect.



I would assume that it's slightly slower, because we need to launch more 
executables compared to before. 'haproxy -cc' is executed once for each 
test, instead of 'haproxy -vv' being run once and the results stored in 
variables.


Even if it's slower I think it's definitely worth it for the improved
flexibility.


I'll also try to look into replacing the 'EXCLUDE_TARGETS' check with 
calls to 'uname'.


Best regards
Tim Düsterhus



Re: Speeding up opentracing build in CI ?

2021-06-17 Thread Tim Düsterhus

Willy,

On 6/17/21 4:39 PM, Willy Tarreau wrote:

Ah indeed. Well, it even suffers variations that are larger than the
potential savings, ranging from 10'45 to 15'00 over the last few builds.
So we don't really know how much we're saving, we can only hope it saves
a little bit.



To clarify: This specific timing also includes the wait times until the 
build VMs are available. It's basically the wall time elapsed between 
the push happening and the *last* build reporting the status. So the 
time the developer has to wait for all the results to be available :-) 
It's not the duration of the slowest job.


I can confirm that OpenSSL 3 is much faster, though. But you only see 
that when directly looking into a specific run.


Best regards
Tim Düsterhus



Re: Speeding up opentracing build in CI ?

2021-06-17 Thread Tim Düsterhus

Willy, William,

On 6/17/21 3:55 PM, William Lallemand wrote:

OK that's a net win, openssl-3.0.0-alpha17 dropped from 8'29 to 2'55.
I've just excluded versions 1.x from both the parallel build and the
build_sw target and that's good now.

Willy


Great improvement, thanks!



Here's a full filtered overview:

https://github.com/haproxy/haproxy/actions/workflows/vtest.yml?query=branch%3Amaster

It's down from ~11 minutes until the last build finishes to ~9 minutes.

Best regards
Tim Düsterhus



Re: [PATCH] CI: Replace the requirement for 'sudo' with a call to 'ulimit -n'

2021-06-17 Thread Tim Düsterhus

Willy,

On 6/13/21 3:02 PM, Tim Duesterhus wrote:

Using 'sudo' required quite a few workarounds in various places. Setting an
explicit 'ulimit -n' removes the requirement for 'sudo', resulting in a cleaner
workflow configuration.


You might've missed this one. Ilya already acked it and I tested it here:

https://github.com/TimWolla/haproxy/actions/runs/933195103

Best regards
Tim Düsterhus



Re: [PATCH 0/4] Use 'feature cmd' in regtests

2021-06-14 Thread Tim Düsterhus

William,

On 6/14/21 12:12 PM, William Lallemand wrote:


 feature cmd "$HAPROXY_PROGRAM -vv |grep -vq BoringSSL"

and then they won't run for BoringSSL (the example is untested).



That's very interesting, I had a discussion with Willy last week about
this kind of problems. I think that's clearly the cleanest soluton to
the problem.

In my opinion it only lacks a way to check the OpenSSL version from
'haproxy -cc' so we can enable ssl/set_ssl_cert_bundle.vtc again.



Note that it doesn't necessarily need to be part of '-cc'. If a simple 
'grep' is sufficient, then that works as well. Of course if you need to 
check exact versions instead of just product names a 'grep' probably is 
not sufficient.



After this series is applied the output of 'make reg-tests' will change. Tests
that are excluded using 'feature cmd' will still be listed as "Add test" in
the test gathering section. However the final line will show that a few tests
have been skipped:

 0 tests failed, 4 tests skipped, 105 tests passed

I don't think this is going to be an issue. But if it is, please complain!



Hm the only problem I have with this, is that we won't be able to see why a
test was excluded.



VTest logs the reason for the SKIP in the verbose output, but it's true 
that it's not readily available (note that I bypass 'make' for this example):



env HAPROXY_PROGRAM=$(pwd)/haproxy /VTest/vtest -v -k -t 1 
reg-tests/ssl/new_del_ssl_crlfile.vtc reg-tests/ssl/set_ssl_crlfile.vtc 
reg-tests/ssl/set_ssl_cafile.vtc reg-tests/ssl/new_del_ssl_cafile.vtc 
reg-tests/ssl/show_ssl_ocspresponse.vtc 
reg-tests/http-messaging/http_abortonclose.vtc 
reg-tests/startup/check_condition.vtc |grep SKIPPING


Results in:


*top   SKIPPING test, lacking feature: $HAPROXY_PROGRAM -cc 
'feature(OPENSSL)'
*top   SKIPPING test, lacking feature: $HAPROXY_PROGRAM -cc 
'feature(OPENSSL)'
*top   SKIPPING test, lacking feature: $HAPROXY_PROGRAM -cc 
'feature(OPENSSL)'
*top   SKIPPING test, lacking feature: $HAPROXY_PROGRAM -cc 
'feature(OPENSSL)'
*top   SKIPPING test, lacking feature: $HAPROXY_PROGRAM -cc 
'feature(OPENSSL)'
0 tests failed, 5 tests skipped, 2 tests passed


Best regards
Tim Düsterhus



Re: [PATCH] CI: cirrus: add alpine linux to the jobs

2021-06-14 Thread Tim Düsterhus

William,

On 6/14/21 11:54 AM, William Lallemand wrote:

I just remembered that we had travis-ci to achieve that in the past, but
it looks like it's not integrated in the github interface, I don't
remember why. https://travis-ci.com/github/haproxy/haproxy It looks like
it's still running though.



We only run Travis once weekly, because of the limited credits we have. 
Thus only the newest commit at time of running will have a Travis CI 
Status integrated into GitHub.


The most recent one is this: 
https://github.com/haproxy/haproxy/commit/1b095cac9468d0c3eeb157e9b1a2947487bd3c83


Best regards
Tim Düsterhus



Re: [PATCH] CI: Replace the requirement for 'sudo' with a call to 'ulimit -n'

2021-06-13 Thread Tim Düsterhus

Ilya,

On 6/13/21 3:18 PM, Илья Шипицин wrote:

It was in my to do list as well. I recall I ran tests without increasing
limits.

Which test requires 5k open files? Maybe the one currently disabled?

Also, it does not look like a good pattern to open so many files. If we really
use as many files, shouldn't we revisit that area?


The issue is that HAProxy attempts to increase the soft ulimit to the 
hard ulimit during startup. This fails on macOS which has a soft limit 
of ~10k and a hard limit of ~500k, but does not allow actually
increasing the soft limit without being root.


That's why I'm updating the ulimit manually before starting HAProxy. 
This will update both the soft as well as the hard limit, resulting in 
HAProxy not needing to do anything, thus succeeding on macOS.


I surely could've used an even smaller number, but 5k is something that 
worked just fine on my first attempt, so that one it is. In fact the 
limit I set is even lower than the default of macOS.
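
A minimal sketch of the workaround as used in the workflow (the exact limit
value is arbitrary as long as it stays at or below the platform's hard limit):

```shell
# Raise both the soft and the hard fd limit of the current shell; this
# needs no root as long as the new value does not exceed the existing
# hard limit. HAProxy's own soft-to-hard bump then becomes a no-op.
ulimit -n 5000
echo "fd limit: $(ulimit -n)"
# haproxy -f haproxy.cfg   # hypothetical: start HAProxy afterwards
```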


Best regards
Tim Düsterhus



Re: http-response set-header and redirect

2021-06-11 Thread Tim Düsterhus

James,

On 6/11/21 8:28 PM, James Brown wrote:

Is there any reason (performance or otherwise) to use http-response instead
of just turning everything into http-after-response?


There is a difference: If a http-response rule fails [1] then a standard 
error page will be emitted. For this error page the http-after-response 
rules will need to be evaluated. They might fail as well, aborting the
processing and causing a very simple 500 Internal Server Error to be 
emitted. This will suppress any other error (e.g. 503, 403, …).


So complex http-after-response rules might cause additional (debugging) 
issues in error situations.


I recommend using them for the most essential stuff only. In my case 
that is the Strict-Transport-Security header and a request ID response 
header.
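
A sketch of that split (directive values are only examples, not a definitive
setup):

```haproxy
frontend fe_main
    bind :443 ssl crt /etc/haproxy/site.pem
    unique-id-format %[uuid]
    # Non-essential rewriting stays in http-response: it only applies to
    # server responses, and a failure falls back to a normal error page.
    http-response set-header X-Frame-Options "DENY"
    # Only the essentials run on *every* response, including
    # HAProxy-generated redirects and error pages.
    http-after-response set-header Strict-Transport-Security "max-age=31536000"
    http-after-response set-header X-Request-ID "%[unique-id]"
    default_backend be_app
```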


Best regards
Tim Düsterhus

[1] e.g. if there's insufficient memory to add the header.



Re: http-response set-header and redirect

2021-06-11 Thread Tim Düsterhus

James,

On 6/11/21 8:03 PM, James Brown wrote:

Is there any way to set a HTTP header on a redirect being emitted by
haproxy?


To also match HAProxy generated responses (including redirects and error 
pages) you will need to use 'http-after-response':


https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#http-after-response

Best regards
Tim Düsterhus



Re: add alpine linux to the CI

2021-06-11 Thread Tim Düsterhus

William

On 6/11/21 4:01 PM, William Lallemand wrote:

I couldn't find a way to launch an alpine job easily with github actions so
instead I wrote one for cirrus-ci, It will help debugging Docker images
and musl problems.


I believe GitHub Actions also supports running jobs inside a Docker image. I
attempted to build something for Fedora Rawhide, but did not succeed 
either. We already use Cirrus, so if Cirrus works, then Cirrus it is.


Best regards
Tim Düsterhus



Re: [PATCH] CI: cirrus: add alpine linux to the jobs

2021-06-11 Thread Tim Düsterhus

William,

On 6/11/21 4:01 PM, William Lallemand wrote:

This commit adds a CI job to cirrus-ci which builds HAProxy on Alpine
Linux, allowing to build and test HAProxy with musl.

OpenSSL, PCRE2, Lua 5.3 as well as the prometheus exporter are enabled.

GNU grep was purposely installed to run the reg-test script.


LGTM

Best regards
Tim Düsterhus



Re: Speeding up opentracing build in CI ?

2021-06-10 Thread Tim Düsterhus

William,

On 6/10/21 5:48 PM, William Lallemand wrote:

Looks fine to me, but from what I remember when debugging some reg-tests
there was only one CPU available, I hope I'm wrong.



GitHub-provided Action runners are 2-core VMs:

https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

Best regards
Tim Düsterhus



Re: enaling cache in github actions

2021-06-08 Thread Tim Düsterhus

Ilya,

On 6/8/21 3:49 PM, Илья Шипицин wrote:

Tim, maybe you have an idea how to make it work.
I gave up


Ack. I'll add it to my ToDo and I'll try to look into it in the next few days.

Best regards
Tim Düsterhus



Re: [PATCH 3/3] MINOR: haproxy: Add `-cc` argument

2021-06-08 Thread Tim Düsterhus

Willy,

On 6/8/21 11:26 AM, Willy Tarreau wrote:

I couldn't figure how the VTC was OK but not testing it by hand. I finally
found it, the outlen variable was not initialized, it should contain the
size of the allocated area for the output, so if there was some dirt in
the stack, it would use that and be happy, but when run by hand it had
zero, hence the message above :-)


Good catch, thanks! This mistake was part of my initial patch. I'll note 
that I should read the function documentation more carefully.


Reproducing the issue locally it appears that the variable always 
contains a Unix timestamp for me.


However I'm surprised that valgrind does not detect the issue. Usually 
it's able to detect the use of uninitialized memory within a conditional 
expression.


Anyway: Thanks for fixing the issue and taking the series. I've marked 
the GitHub issue as fixed and closed it.


Best regards
Tim Düsterhus



Re: [PATCH] CI: enable openssl-3.0.0 builds

2021-06-07 Thread Tim Düsterhus

William,

On 6/7/21 1:30 PM, William Lallemand wrote:

On Mon, Jun 07, 2021 at 04:02:00PM +0500, Илья Шипицин wrote:

sorry, I do not have much spare time to implement that in short time
perspective.
I think of 2-3 month timeframe.



Isn't it possible to pass a make argument for one specific CI job?

This way we could just do something like:

   make DEBUG_CFLAGS="-g -Wno-deprecated-declarations"



Yes, this would be possible. It's already done to enable QUIC for BoringSSL:

https://github.com/haproxy/haproxy/blob/a9334df5a9bee2bf17b22f965b08df8d47f2f63e/.github/matrix.py#L116-L117

It would need an extra case for OpenSSL 3. You can test whether it works 
locally using:


$ python3 .github/matrix.py push

Best regards
Tim Düsterhus



Re: [PATCH 1/3] CLEANUP: cfgparse: Remove duplication of `MAX_LINE_ARGS + 1`

2021-06-05 Thread Tim Düsterhus

Willy,

On 6/6/21 12:50 AM, Maximilian Mader wrote:

From: Tim Duesterhus 


Acking this one as mine. I sent it to Max to incorporate it into the series.

Best regards
Tim Düsterhus



Re: [PATCH] CI: enable openssl-3.0.0 builds

2021-06-05 Thread Tim Düsterhus

Ilya,

On 6/5/21 5:10 AM, Илья Шипицин wrote:

here are two patches:

- deprecated warnings suppressed
- openssl-3.0.0 enabled



In the second patch you forgot the 'CI:' prefix in the commit message.

Other than that the patches LGTM with regard to the CI changes.

Best regards
Tim Düsterhus



Re: [PATCH] CI: enable openssl-3.0.0 builds

2021-06-02 Thread Tim Düsterhus

Dinko,

On 6/2/21 1:34 PM, Dinko Korunic wrote:

Yes, GItHub Actions has support for flagging some of the jobs in matrix as 
experimental (which will permit job to fail and other jobs in the matrix will 
continue to execute), for instance:

https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#example-including-new-combinations
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#example-preventing-a-specific-failing-matrix-job-from-failing-a-workflow-run



Oh, continue-on-error also exists on the job level. IMO it's badly 
named, but it appears to be what I was looking for.


Based off the other responses we might not need it after all, but it's 
good to know that it is possible. Thanks!


Best regards
Tim Düsterhus



Re: [PATCH] CI: enable openssl-3.0.0 builds

2021-06-02 Thread Tim Düsterhus

Ilya,

On 6/2/21 12:58 PM, Илья Шипицин wrote:

as openssl-3.0.0 is getting close to release, let us add it to build matrix.



I dislike that this is going to report all commits in red until the
build failures with OpenSSL 3 are fixed. I did some quick research on whether
some job could be marked as "experimental" / "allowed failure" with 
GitHub Actions, but could not find anything.


Do you know whether this is possible?

The patch LGTM other than that concern.

Best regards
Tim Düsterhus



Re: Modify POST data (with lua)?

2021-06-02 Thread Tim Düsterhus

Elias,

On 6/2/21 12:16 PM, Elias Abacioglu wrote:

I'm planning on placing an API behind HAProxy and want a way to modify the
times/intervals in the POST data to increase cacheability to the API.


While POST requests may in fact be cached under certain circumstances as 
I learned 2 minutes ago, it is certainly very unusual and also not 
supported by HAProxy's cache.



   Responses to POST requests are only cacheable when they include
   explicit freshness information (see Section 4.2.1 of [RFC7234]).
   However, POST caching is not widely implemented.  For cases where an
   origin server wishes the client to be able to cache the result of a
   POST in a way that can be reused by a later GET, the origin server
   MAY send a 200 (OK) response containing the result and a
   Content-Location header field that has the same value as the POST's
   effective request URI (Section 3.1.4.2).


I would advise against caching POST requests. Instead, move the
parameters into the URL or a request header (make sure to set 'Vary' on
the response) and use a GET request.
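
As a sketch (names invented, untested): once the interval parameters move
into the query string, each variant has its own URL and caches naturally,
with no 'Vary' gymnastics needed:

```haproxy
backend be_api
    # GET /lookup?interval=3600 instead of POSTing the interval: the cache
    # key includes the query string, so each interval caches separately.
    http-request cache-use api-cache if { method GET }
    http-response cache-store api-cache
    server s1 192.0.2.10:8080

cache api-cache
    total-max-size 64   # MB
    max-age 60          # seconds; the backend's Cache-Control can lower this
```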


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Tim Düsterhus

Aleks,

On 6/1/21 10:30 AM, Aleksandar Lazic wrote:

This phrasing is understandable to me, but now I'm wondering if this is the 
best solution. Maybe the already existing user-configurable unique request ID 
should instead be sent to the SPOE and then logged?

https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this unique-id.


Well this unique-id is not sent as Stream ID to the SPOA receiver; due to this
fact you can't debug which stream is the troubled one.


Yes, that's why I suggested that the SPOE is extended to also include 
this specific ID somewhere (just) for logging purposes.
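
The unique-id machinery I am referring to looks roughly like this (the 
format string is the example from the configuration manual; the frontend 
name is made up; %rt embeds the request counter):

```
frontend fe_main
    bind :8080
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
```

If the SPOE learned to carry this ID for logging, the SPOA log line and 
HAProxy's own log could be correlated directly.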


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-06-01 Thread Tim Düsterhus

Aleks,

On 6/1/21 1:03 AM, Aleksandar Lazic wrote:

 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being

@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.

+stream_uniq_id : integer
+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand 
it).


[…]

This is shown on the SPOE log line as sid and therefore I think it 
should be

possible to get the same ID also within HAProxy as fetch method.

```
SPOE: [agent-on-http-req]  sid=88 st=0 
0/0/0/0/0 1/1 0/0 10/33

```

[…]

```
This fetch method returns the internal Stream ID, if a stream is
available. The internal Stream ID is used in several places in HAProxy
to trace the Stream inside HAProxy. It is also used in SPOE as "sid"
value.
```




This phrasing is understandable to me, but now I'm wondering if this is 
the best solution. Maybe the already existing user-configurable unique 
request ID should instead be sent to the SPOE and then logged?


https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#7.3.6-unique-id

The request_counter (%rt) you mentioned could be embedded into this 
unique-id.


Best regards
Tim Düsterhus



Re: Maybe stupid question but, I don't see a fetch method for %rt => StreamID

2021-05-31 Thread Tim Düsterhus

Aleks,

On 5/31/21 9:35 PM, Aleksandar Lazic wrote:
While trying to get the stream ID from the SPOA I noticed that there is 
no fetch method for the stream ID.


Attached a patch which adds the fetch sample for the stream id.
I assume it could be backported up to version 2.0


The backporting information should be part of the commit message. But I 
don't think it's going to be backported that far.


Further comments inline.


From 15a2026c495e64d8165a13a3c8a4e5e19ad7e8d6 Mon Sep 17 00:00:00 2001
From: Alexandar Lazic 
Date: Mon, 31 May 2021 21:28:56 +0200
Subject: [PATCH] MINOR: sample: fetch stream_uniq_id

This fetch sample allows to get the current Stream ID for the
current session.

---
 doc/configuration.txt  | 13 ++
 reg-tests/sample_fetches/stream_id.vtc | 33 ++
 src/sample.c   | 14 +++
 3 files changed, 55 insertions(+), 5 deletions(-)
 create mode 100644 reg-tests/sample_fetches/stream_id.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 11c38945c..7eb7e29cd 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17433,11 +17433,6 @@ rand([<range>]) : integer
   needed to take some routing decisions for example, or just for debugging
   purposes. This random must not be used for security purposes.
 
-uuid([<version>]) : string

-  Returns a UUID following the RFC4122 standard. If the version is not
-  specified, a UUID version 4 (fully random) is returned.
-  Currently, only version 4 is supported.
-


Good catch, but please split moving this around into a dedicated patch 
(DOC).



 srv_conn([<backend>/]<server>) : integer
   Returns an integer value corresponding to the number of currently established
   connections on the designated server, possibly including the connection being
@@ -17514,6 +17509,9 @@ stopping : boolean
 str(<string>) : string
   Returns a string.
 
+stream_uniq_id : integer

+  Returns the uniq stream id.
+


This explanation is not useful to the reader (even I don't understand it).


 table_avl([<table>]) : integer
   Returns the total number of available entries in the current proxy's
   stick-table or in the designated stick-table. See also table_cnt.
@@ -17528,6 +17526,11 @@ thread : integer
   the function, between 0 and (global.nbthread-1). This is useful for logging
   and debugging purposes.
 
+uuid([<version>]) : string

+  Returns a UUID following the RFC4122 standard. If the version is not
+  specified, a UUID version 4 (fully random) is returned.
+  Currently, only version 4 is supported.
+
 var(<var-name>) : undefined
   Returns a variable with the stored type. If the variable is not set, the
   sample fetch fails. The name of the variable starts with an indication
diff --git a/src/sample.c b/src/sample.c
index 09c272c48..5d3b06b10 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -4210,6 +4210,18 @@ static int smp_fetch_uuid(const struct arg *args, struct sample *smp, const char
return 0;
 }
 
+/* returns the stream uniq_id */

+static int
+smp_fetch_stream_uniq_id(const struct arg *args, struct sample *smp, const char *kw, void *private)


I believe the 'static int' should go on the same line.


+{
+   if (!smp->strm)
+   return 0;
+
+   smp->data.type = SMP_T_SINT;
+   smp->data.u.sint = smp->strm->uniq_id;
+   return 1;
+}
+
 /* Note: must not be declared  as its list will be overwritten.
  * Note: fetches that may return multiple types must be declared as the lowest
  * common denominator, the type that can be casted into all other ones. For
@@ -4243,6 +4255,8 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
	{ "bin",  smp_fetch_const_bin,  ARG1(1,STR),  smp_check_const_bin , SMP_T_BIN,  SMP_USE_CONST },
	{ "meth", smp_fetch_const_meth, ARG1(1,STR),  smp_check_const_meth, SMP_T_METH, SMP_USE_CONST },
 
+	{ "stream_uniq_id", smp_fetch_stream_uniq_id, 0,  NULL, SMP_T_SINT, SMP_USE_INTRN },

+


I believe 'SMP_USE_INTRN' is not correct. I believe you need 
'SMP_SRC_L4CLI', but don't quote me on that.



{ /* END */ },
 }};
 
--

2.25.1


Best regards
Tim Düsterhus



Re: how to write to a file safely in haproxy

2021-05-26 Thread Tim Düsterhus

Reshma,

On 5/26/21 4:44 PM, reshma r wrote:

not). There is a requirement that the configuration file may be changed
from an external portal. So whenever a change is made in the portal I would
like to update my configuration file accordingly. To do this, I am making a
socket call which periodically checks whether the portal has been changed
(from within haproxy action). If so I must write the updated values to the
file and it seems I cannot write to a file inside lua action so this is
where I am stuck. To me, It sounds like this cannot be achieved and I must
drop the idea of changing the file from the portal.


You should implement this in a dedicated sidecar process that checks the 
portal API and on a change writes the updated configuration file and 
then either updates the state within the running HAProxy process using 
the stats socket (e.g. using 'set map' [1] or 'set var' [2]) or reloads 
HAProxy depending on your exact requirements.
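
For illustration, the command such a sidecar would send over the stats 
socket can be sketched like this (the variable name, the socket path 
`/var/run/haproxy.sock` and the use of `socat` are assumptions; `set var` 
on the CLI needs a recent HAProxy):

```shell
# sketch: build the CLI command a sidecar would push into the running
# HAProxy process after it detected a change in the portal
portal_value="primary"
cmd="set var proc.portal_value str(${portal_value})"
echo "$cmd"
# a real sidecar would then send it, e.g.:
#   echo "$cmd" | socat stdio /var/run/haproxy.sock
```

The same channel works for 'set map' when the portal change maps to map 
entries rather than a single variable.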


Just because you can do a lot of stuff within HAProxy using Lua doesn't 
mean that you should or that it's the best solution :-)


Best regards
Tim Düsterhus

[1] https://cbonte.github.io/haproxy-dconv/2.4/management.html#9.3-set%20map
[2] https://cbonte.github.io/haproxy-dconv/2.4/management.html#9.3-set%20var



Re: Termination Code 'CC' + HTTP status?

2021-05-25 Thread Tim Düsterhus , WoltLab GmbH

Willy,

On 5/25/21 11:02 AM, Willy Tarreau wrote:

We've already noted that significant improvements in close/abort need to
be made to 1) process them better and 2) report them more accurately.
This is a tedious task which *will* inevitably cause regressions, but
it's the price to pay to improve that.


Shall I file a feature request on GitHub that the 'CC' case does not result
in a 5xx, but a 4xx or something clearly invalid?


Yes, why not. This will save us from forgetting about it.


Okay, here it is:

https://github.com/haproxy/haproxy/issues/1266

Best regards
Tim Düsterhus
Developer WoltLab GmbH

--

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: Termination Code 'CC' + HTTP status?

2021-05-25 Thread Tim Düsterhus , WoltLab GmbH

Willy,

On 5/21/21 6:48 PM, Willy Tarreau wrote:

I'd like your advice on a few log entries that confuse me. I am seeing HTTP
2.0 requests dying with a termination code of 'CC', i.e.:


 C : the TCP session was unexpectedly aborted by the client.
 C : the proxy was waiting for the CONNECTION to establish on the
 server. The server might at most have noticed a connection attempt.


Thus my understanding is that the client is "simply gone". However HAProxy
*also logs* a HTTP status of 503 with 0 bytes sent to the client.

For requests that never even established a connection to the server and that
were aborted by the client I would not have expected any status code to be
logged at all, after all the server could not generate one and there is no
client that would be able to receive it.


I'd say "it depends". If a connection attempt was made, and your servers
are flooded with aborted connections coming from haproxy, I'm pretty sure
you'll come here and ask why haproxy doesn't produce a log that permits
to identify the client and frontend involved in this.


Yes, it absolutely makes sense that this connection attempt is logged in 
HAProxy. It's just that the status code I disagree with.



The 503 is just a byproduct of the aborted connection. That can sound
strange but what happens here is that the status was set after it was
noticed the processing ended without being able to connect to any server.
I agree that a 4xx would be more suitable here, and this problem was
already raised in the past regarding the fact that *certain* aborts or
errors need closer inspection to figure where the problem was.


I believe in some cases a status code of 0 (zero) or -1 is logged, but 
I'm not 100% sure there. I would consider such a status to be 
appropriate as well, because no HTTP response was actually sent to the 
client.



We've already noted that significant improvements in close/abort need to
be made to 1) process them better and 2) report them more accurately.
This is a tedious task which *will* inevitably cause regressions, but
it's the price to pay to improve that.


Shall I file a feature request on GitHub that the 'CC' case does not 
result in a 5xx, but a 4xx or something clearly invalid?



For the time being, status codes are here for a reason and are more


s/status/termination/?


accurate than a status code (status which could also come from a server
by the way). There used to be some corner cases of combined client+server
errors reporting server errors, but these faded away with the muxes and
I don't think there are still any (or only in tricky cases). Thus I'd
encourage you to check the termination flags to be certain.


ack, thank you.

Best regards
Tim Düsterhus
Developer WoltLab GmbH




Termination Code 'CC' + HTTP status?

2021-05-21 Thread Tim Düsterhus , WoltLab GmbH

Hi List

[this email is not subscribed, please keep it in Cc]

I'd like your advice on a few log entries that confuse me. I am seeing 
HTTP 2.0 requests dying with a termination code of 'CC', i.e.:



C : the TCP session was unexpectedly aborted by the client.
C : the proxy was waiting for the CONNECTION to establish on the
server. The server might at most have noticed a connection attempt.


Thus my understanding is that the client is "simply gone". However 
HAProxy *also logs* a HTTP status of 503 with 0 bytes sent to the client.


For requests that never even established a connection to the server and 
that were aborted by the client I would not have expected any status 
code to be logged at all, after all the server could not generate one 
and there is no client that would be able to receive it.


Example entries:


*snip*:59447 [20/May/2021:21:45:12.448] https~ *snip*/nginx 1/0/-1/-1/2 503 0 - - CC-- 
88/87/3/3/0 0/0 {www.example.com|Mozilla/5.0*snip*|} {|} "GET 
https://www.example.com/*snip*.svg HTTP/2.0" 0004-60A6D8681D428
*snip*:59447 [20/May/2021:21:45:12.448] https~ *snip*/nginx 1/0/-1/-1/2 503 0 - - CC-- 
88/87/2/2/0 0/0 {www.example.com|Mozilla/5.0*snip*|} {|} "GET 
https://www.example.com/*snip*.png HTTP/2.0" 0004-60A6D8681D429
*snip*:59447 [20/May/2021:21:45:12.449] https~ *snip*/nginx 1/0/-1/-1/2 503 0 - - CC-- 
88/87/2/2/0 0/0 {www.example.com|Mozilla/5.0*snip*|} {|} "GET 
https://www.example.com/*snip*.jpg/?thumbnail=1 HTTP/2.0" 0004-60A6D8681D42A
*snip*:59447 [20/May/2021:21:45:12.449] https~ *snip*/nginx 2/0/-1/-1/1 503 0 - - CC-- 
88/87/1/1/0 0/0 {www.example.com|Mozilla/5.0*snip*|} {|} "GET 
https://www.example.com/*snip*.jpg/?thumbnail=1 HTTP/2.0" 0004-60A6D8681D42B


Can someone explain what (most likely) happened there? Is HAProxy just 
making up a status code?


The reason I'm asking is that this 503 popped up in my log processing as 
a server / backend error, but I don't think there's anything I could do 
about it. A client closing the connection after 2ms appears to be 
entirely out of my control. I might be misunderstanding it, though.


Best regards
Tim Düsterhus
Developer WoltLab GmbH




Re: [PATCH] CI: enable OpenTracing feature

2021-05-18 Thread Tim Düsterhus

Willy,

On 5/18/21 12:42 PM, Илья Шипицин wrote:

this enables OpenTracing for CI builds.


This one looks good to me.

Best regards
Tim Düsterhus



Re: [PATCH] move VTest installation to scripts/build-vtest.sh

2021-05-18 Thread Tim Düsterhus

Ilya,

On 5/18/21 11:50 AM, Илья Шипицин wrote:

I wonder if we can benefit from dependent steps, i.e. build VTest just once
and deliver binary to other containers (instead of building it every time).



The complexity is not worth it to shave off another half second or so. 
Installing VTest is down to 6 seconds on GitHub Actions. That's 
certainly good enough.


Best regards
Tim Düsterhus



Re: unexpected asan behaviour

2021-05-18 Thread Tim Düsterhus

Ilya,

On 5/17/21 3:57 PM, Илья Шипицин wrote:

FYI, this is not what it supposed to be

BUILD/MINOR: ssl: Fix compilation with SSL enabled ·
haproxy/haproxy@d75b99e (github.com)
<https://github.com/haproxy/haproxy/runs/2599654016>

according to the global ASAN_OPTIONS it was supposed to go to a separate log,
but something went wrong :)


I'm aware of that, I believe we discussed this back when migrating to 
GitHub Actions. But we considered the current state acceptable, as ASAN 
generally works and one can find the output, even if it is a bit hidden. 
I was unable to find out why it didn't work back then.



it is almost always OK to break on ASAN errors; the only degradation is that it
fails on the first error and execution stops (that is OK almost all of the time).

I'll try to figure out why ASAN_OPTIONS did not work.



Thanks!

Best regards
Tim Düsterhus



Re: enaling cache in github actions

2021-05-16 Thread Tim Düsterhus

Ilya,

On 5/15/21 5:30 PM, Илья Шипицин wrote:

I've found that we do not cache "download-cache" and "opt" folders,
thus we build BoringSSL on every build (no cache).

I tried to enable cache

https://github.com/chipitsine/haproxy/blob/master/.github/workflows/vtest.yml#L46-L53

github does not like such caching keys:

Error: Key Validation Error: Linux-Ubuntu, gcc, no features cannot contain
commas.


Tim, do you have an idea how to fix this ?



I suggest the following:

- Cache based off the matrix.ssl value, because that determines what 
type of SSL lib we download and build.
- If we want to cache something else in the future, then we use a 
separate cache head for that specifically.
- And then use `steps.XXX.outputs.cache-hit != 'true'` to check whether 
you need to build anything or not.


See also: https://github.com/actions/cache#example-workflow
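
A sketch of such a step, keyed off matrix.ssl as suggested (the step 
names, the cached path and the build command are assumptions):

```yaml
- name: Cache SSL libraries
  id: cache-ssl
  uses: actions/cache@v2
  with:
    path: '~/opt/'
    key: ssl-${{ matrix.ssl }}

- name: Install SSL library
  if: ${{ steps.cache-ssl.outputs.cache-hit != 'true' }}
  run: env ${{ matrix.ssl }} scripts/build-ssl.sh
```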

Best regards
Tim Düsterhus



Re: [PATCH] move VTest installation to scripts/build-vtest.sh

2021-05-15 Thread Tim Düsterhus

Willy,
Ilya,

On 5/15/21 8:58 AM, Илья Шипицин wrote:

I attached a patch that uses "curl". At a distance it seems to be about
50% faster



This one looks good to me.

Best regards
Tim Düsterhus



Re: adding Coverity badge to some visible place

2021-05-14 Thread Tim Düsterhus

Daniel,
Ilya,

On 5/14/21 5:45 PM, Daniel Corbett wrote:

I would like to improve code quality visibility.
what if we convert README --> README.md and put Coverity badge there ?



Funny, I had a similar thought earlier today about having a README.md.

I would like to help improve it from a visual perspective.

+1 from me and very interested!



Yes, I believe using a README.md to benefit from the better formatting 
for the repository's landing page is useful.


But please don't add a large number of stupid badges there. They do not 
benefit the end user. Especially Coverity with all the false positives 
is going to be absolutely useless and distract from more important stuff.


Best regards
Tim Düsterhus



Re: [PATCH] move VTest installation to scripts/build-vtest.sh

2021-05-14 Thread Tim Düsterhus

Ilya,

On 5/14/21 9:02 PM, Илья Шипицин wrote:

let us unify VTest installation magic.


I disagree with using 'git clone' here, cloning the repository with full 
history is wasteful. Please use the tarball approach from vtest.yml


Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.4.0

2021-05-14 Thread Tim Düsterhus

Willy,

On 5/14/21 11:56 AM, Willy Tarreau wrote:

And of course there's all the invisible stuff being done on the internals
to improve the code, make it more extensible, more reliable or faster. I
think that's about it. If you contributed something that I missed here,
sorry for this, that's already a lot to figure out. Don't feel upset, and
just respond here to point it :-)


URI normalization is missing in your summary :-)


As usual, I've been careful when uploading the new release and I would
not be surprised to see a 404 or two, so do not hesitate to report any
issue. Please be gentle for the doc, as it also requires some manual
post-release adaptations and will take a bit more time.


a) The table on haproxy.org is missing the 'Bugs' link for HAProxy 2.4.
b) Can you please add initial snapshots for new development versions: 
https://www.haproxy.org/download/2.5/src/snapshot/? My 
haproxy-auth-request CI relies on the snapshots and I already added 2.5, 
thus failing the build: 
https://github.com/TimWolla/haproxy-auth-request/runs/2582881096


Best regards
Tim Düsterhus



Re: Inconsistent reading of txn vars from Lua script

2021-05-13 Thread Tim Düsterhus
Looking more closely at vars_get_by_name(), it's only used by Lua's
various get_var() and by the CLI's "get var" that I recently added
without knowing about this unexpected behavior either.

So what I'm proposing is to simply change vars_get_by_name() to call
register_name() with alloc=0 in order to fix this mess. We can then
check during 2.5 how to refine this to also consider the scope with
the variable's name. It's just this, and fixes Joao's test case to
always return 403:

diff --git a/src/vars.c b/src/vars.c
index 996141f5d..15dcb3c3d 100644
--- a/src/vars.c
+++ b/src/vars.c
@@ -583,7 +583,7 @@ int vars_get_by_name(const char *name, size_t len, struct sample *smp)
 enum vars_scope scope;
  
 /* Resolve name and scope. */

-   name = register_name(name, len, &scope, 1, NULL);
+   name = register_name(name, len, &scope, 0, NULL);
 if (!name)
 return 0;
  
Tim, do you agree with this analysis ?




Yes, that change makes sense to me. If you'd see my full use case then I 
recommend taking a look at haproxy-auth-request. It's super simple and 
even comes with VTest tests:


https://github.com/TimWolla/haproxy-auth-request#usage
https://github.com/TimWolla/haproxy-auth-request/blob/main/auth-request.lua#L50
https://github.com/TimWolla/haproxy-auth-request/tree/main/test

Best regards
Tim Düsterhus




Re: Inconsistent reading of txn vars from Lua script

2021-05-12 Thread Tim Düsterhus

Willy,

On 5/12/21 7:47 AM, Willy Tarreau wrote:

Interestingly, the code for variables was initially made for the config,
so it doesn't seem to destroy variable names when they're released since
that was pointless with the config. I think that code should be revisited
in 2.5 to improve the situation (e.g. by marking that the variable was
dynamically allocated maybe), but I don't know this part well so I'll
probably stop before starting to suggest stupidities :-)



There's also this related issue from back when I implemented this 
additional parameter:


https://github.com/haproxy/haproxy/issues/624

Best regards
Tim Düsterhus



Re: Error "line too long" after adding some conditions

2021-05-11 Thread Tim Düsterhus

Willy,

On 5/11/21 5:21 PM, Willy Tarreau wrote:

What do you guys think a more acceptable limit would be for each of them ?
Both can be increased if needed, they will only use a little bit more
memory during the parsing.



In Tom's case two alternative solutions that are much more readable were 
available. I consider these lowish limits a good thing, because it 
forces users to rethink what they are doing and possibly ask on the list 
to find a better solution.


Best regards
Tim Düsterhus



Re: Error "line too long" after adding some conditions

2021-05-11 Thread Tim Düsterhus

Tom,

On 5/11/21 4:15 PM, Tom wrote:

Using haproxy 2.1.3:
I've configured multiple conditions to a "use_backend" directive like this:

use_backend example_192.168.1.30 if host_test.example.com || 
host_a01.test.example.com || host_a02.test.example.com || 
host_a03.test.example.com || host_a04.test.example.com || ... || ...


When I add some more conditions, then I got an error after reloading the 
daemon:


"line too long, truncating at word 65, position 783:..."


How can I configure more conditions in the "use_backend"-directive above 
without having the error?




In this specific case (using ||) you can just repeat the 'use_backend' 
directive in multiple lines:


use_backend example_192.168.1.30 if host_test.example.com
use_backend example_192.168.1.30 if host_a01.test.example.com
use_backend example_192.168.1.30 if host_a02.test.example.com
use_backend example_192.168.1.30 if host_a03.test.example.com

As Adis said you can also define the same ACL multiple times:

acl test_dot_example req.hdr(host) test.example.com
acl test_dot_example req.hdr(host) a01.test.example.com
acl test_dot_example req.hdr(host) a02.test.example.com
acl test_dot_example req.hdr(host) a03.test.example.com

use_backend example_192.168.1.30 if test_dot_example

Best regards
Tim Düsterhus



Re: [PATCH] CI: Build VTest with clang

2021-05-10 Thread Tim Düsterhus

Willy,

On 5/10/21 11:02 PM, Willy Tarreau wrote:

Ah thank you Tim, I was also a bit annoyed by the recent failures.
I've just pushed it, let's see if it works!


Okay, at least VTest builds again. Perfect. Unfortunately some flaky 
tests are back :-(


Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.4-dev19

2021-05-10 Thread Tim Düsterhus

Willy,

On 5/10/21 8:50 AM, Willy Tarreau wrote:

I really don't expect any more significant changes being applied before
the release now, so we'll focus on cosmetic updates, doc and build tests
on various platforms. I'll add a few CPU entries in the Makefile to ease
building on modern ARM platforms and recheck the docs aimed at newcomers
(contributing, reporting issues etc).

Unless a huge bug falls in front of us blocking any progress, I think
we'll release this week, ideally on Friday morning so that those in search
of a distraction for their boring Friday afternoon have something to play
with :-)


As this is expected to be the last "release candidate" I just threw 
2.4-dev19 onto one of our HAProxy machines:


#<PID>          <type>          <relative PID>  <reloads>       <uptime>        <version>
3932            master          0               *snip*          *snip*          2.4~dev19-1
# workers
15082           worker          1               0               0d00h00m12s     2.4~dev19-1
# old workers
10384           worker          [was: 1]        1               *snip*          2.3.10-1~bpo10+1
# programs


Let's see how that goes :-)


So if you still have a comment to formulate, hurry up!


I dropped the ball on the normalizers a bit due to other work, I'm sorry 
about that. I plan to send two patches for 'fragment-strip' and 
'fragment-encode' tonight (based off our private exchange). I expect 
these to be safe, as the feature is marked experimental and clearly 
separated.


Best regards
Tim Düsterhus



Re: [PATCH] MINOR: opentracing: register config file and line number on log servers

2021-05-09 Thread Tim Düsterhus

Ilya,

On 5/8/21 9:19 PM, Илья Шипицин wrote:

I've created "proof of concept" OT support for CI

https://github.com/chipitsine/haproxy/runs/2535689580

Tim, are you ok with that approach ? If yes, I will polish things a bit.


If the build time is reduced appropriately then I am fine with that. 
Please order the 'apt' dependencies alphabetically.



I'm not very happy with unconditionally adding OT flags
https://github.com/chipitsine/haproxy/commit/dcf1812b17bd2907939eaa8d3310378d419d90c5#diff-b525f86b1f4925509959f857496dff18a9c4ed34fcc7f49357c5a6d00fb64d17R91
Tim, do you know how to add it using ${{ contains(matrix.FLAGS, 'USE_OT=1')
}} condition ?



You can simply add these to the 'flags' list within matrix.py, similarly 
to WURFL_INC and WURFL_LIB also being part of that list.


Best regards
Tim Düsterhus



Re: Brainstorming to add JWT verify to HAPoxy

2021-05-01 Thread Tim Düsterhus

Aleks,

On 5/1/21 1:42 PM, Aleksandar Lazic wrote:

# Extract the JSON Web Algorithms (JWA) from Bearer Token.
http-request set-var(txn.jwt_algo) 
req.hdr(Authorization),word(1,.),ub64dec,json_query('$.alg')   if 
bearer_header_exist


Trusting the algorithm specified in the JWT is unsafe and a common 
source of security issues.
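
Building on the set-var line quoted above, a safer variant pins the 
expected algorithm instead of acting on whatever the token announces 
(RS256 is an assumed choice; the deny_status is arbitrary):

```
# reject any token that does not announce the one algorithm we accept,
# before the signature is ever looked at
http-request deny deny_status 401 unless { var(txn.jwt_algo) -m str RS256 }
```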


Best regards
Tim Düsterhus



Re: [PATCH 0/3] Add a `strip-dot` normalizer

2021-04-21 Thread Tim Düsterhus

Willy,

On 4/21/21 7:03 PM, Willy Tarreau wrote:

$ curl -v https://example.com/foo///bar 2>&1 |grep GET

GET /foo///bar HTTP/2

$ curl -v https://example.com/foo/../bar 2>&1 |grep GET

GET /bar HTTP/2

$ curl -v https://example.com/foo//../bar 2>&1 |grep GET

GET /foo/bar HTTP/2


I'm fine with prefixing 'path-' (i.e. 'path-filesystem'). Simply 'path'
might be misleading, because it includes non-standard normalization.


I agree that path alone could be confusing or misleading, at least because
it looks like the similar sample-fetch (and we must absolutely avoid having
conflicting names between sample fetch functions and converters otherwise
we will never be able to merge them). Why not "path-normalize" ? It seems
to describe exactly what it tries to do.


I'm still working on getting the `http-request normalize-uri` action 
that is being used for request ingestion from the clients correct.


  http-request normalize-uri path-normalize

would be pretty redundant. That's why I suggested path-filesystem. But I 
realize that most of the normalizer names could be used as-is for 
converters, but this combined normalizer cannot.


I *really* think that we should leave these combined normalizers out for 
the initial version and wait for user feedback (including real-world 
tests by myself). It's a bit verbose in the configuration, but it works.
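
For reference, the verbose-but-working chain of individual normalizers 
looks like this (normalizer names as merged for 2.4; the frontend is made 
up, and the action is experimental, so it needs 
`expose-experimental-directives`):

```
global
    expose-experimental-directives

frontend fe_main
    bind :8080
    http-request normalize-uri path-strip-dotdot
    http-request normalize-uri path-strip-dot
    http-request normalize-uri path-merge-slashes
```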



By the way be careful about RFC3986, as I remember that there is an
algorithm or an illustration program there explaining how to resolve
paths, and that contains a bug (like every single time code is offered
in RFCs). The ABNF was correct however. It might be mentioned in the
errata but I really don't remember the details, most likely something
related to the inability to resolve a missing component and causing a
".." to point to the wrong place.



I just had a look at the errata. There's indeed something about '../', 
but that is regarding combining relative URLs with a specific base URL. 
Additional leading '../' must be preserved. The normalizer already has 
an appropriate option.


Best regards
Tim Düsterhus


