Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Willy Tarreau
On Mon, Apr 12, 2021 at 10:36:30PM +0200, Tim Düsterhus wrote:
> Aleks,
> 
> [Willy: I believe the patch is in a state that warrants you taking a look at
> it!]

Yeah, I'll have a look today, thanks for your exchanges on this.

Willy



About the 'Hot Restarts' of haproxy

2021-04-12 Thread Rmrf99
In this Slack engineering blog post: 
https://slack.engineering/migrating-millions-of-concurrent-websockets-to-envoy/

they replaced HAProxy with Envoy for its **Hot Restart** capability. Just curious:
will a new HAProxy version take a similar approach, or perhaps offer a better solution in the future?



Sent with ProtonMail Secure Email.





Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Moemen MHEDHBI



On 08/04/2021 21:55, Aleksandar Lazic wrote:
> Hi.
> 
> Attached the patch to add the json_string sample.
> 
> In combination with the JWT patch, a pre-validation of a bearer token
> part is possible.
> 
> I have something like this in mind.
> 
> http-request set-var(sess.json) req.hdr(Authorization),word(2,.),ub64dec,json_string('$.iss')
> http-request deny unless { var(sess.json) -m str 'kubernetes/serviceaccount' }
> 
> Regards
> Aleks

Hi,
I have also thought about something similar.
However, I am not sure using a third-party library is encouraged, because
it may make the code less portable. Also, importing a third-party
library's code directly may be hard to maintain later.
In the end I am wondering whether it would not be easier to handle JSON
parsing via a Lua module, for example.

-- 
Moemen



Re: [PATCH] JWT payloads break b64dec convertor

2021-04-12 Thread Moemen MHEDHBI


On 12/04/2021 23:13, Aleksandar Lazic wrote:
> Hi Moemen,
> 
> any chance to get this feature in before 2.4 is released?
> 
> Regards
> Aleks
> 

Hi Aleksandar,
I have updated the patch (attached) so it can get reviewed and eventually
merged.
I know this is going to be useful for what you are trying to do with
the json converter, so I will try to be more active on this.


On 06/04/2021 09:13, Willy Tarreau wrote:

>> in such case should we rather use dynamic allocation ?
>
> No, there are two possible approaches. One of them is to use a trash
> buffer using get_trash_chunk(). The trash buffers are "large enough"
> for anything that comes from outside. A second, cleaner solution
> simply consists in not using a temporary buffer but doing the conversion
> on the fly. Indeed, looking closer, what the function does is to first
> replace a few chars on the whole chain to then call the base64 conversion
> function. So it doubles the work on the string and one side effect of
> this double work is that you need a temporary storage.

The url variant is not only about a different alphabet that needs to be
translated; it is also a non-padding variant. So the straightforward
algorithm for decoding it is to add the padding to the url-encoded
input and then use the standard base64 decoder.
Even doing this on the fly requires extending the input by at most two
more bytes. Unless I am missing something, an on-the-fly conversion
would otherwise result in an out-of-bounds array access. That's why I
have copied the input into an "inlen+2" string.

In the end I have updated the patch so that the decoding function
extends the input via get_trash_chunk(), to make sure a buffer of size
input+2 is available.
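
A minimal sketch of that shape (illustrative only, not the attached
patch; it assumes HAProxy's get_trash_chunk() and the existing
base64dec() helper):

```c
/* Sketch: copy the input into a trash chunk, translate the URL-safe
 * alphabet back to the standard one, re-add the stripped '=' padding
 * (at most two bytes for valid input), then call the standard decoder.
 */
int base64urldec_sketch(const char *in, size_t ilen, char *out, size_t olen)
{
	struct buffer *b = get_trash_chunk();
	size_t i;

	if (ilen + 2 > b->size)
		return -1;                     /* input too large for a trash chunk */

	for (i = 0; i < ilen; i++) {
		if (in[i] == '-')
			b->area[i] = '+';
		else if (in[i] == '_')
			b->area[i] = '/';
		else
			b->area[i] = in[i];
	}
	while (i % 4)
		b->area[i++] = '=';            /* restore the stripped padding */

	return base64dec(b->area, i, out, olen);
}
```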

You can find attached the patches 0001 and 0002 for this implementation.


> Other approaches would consist in either reimplementing the functions
> with a different alphabet, or modifying the existing ones to take an
> extra argument for the conversion table, and make one set of functions
> making use of the current table and another set making use of your new
> table.
>
> Willy
>
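
As an illustration of the table-argument option (a sketch only, not the
attached 0001-bis/0002-bis patches; the two table names are made up and
would have to be defined from the respective alphabets):

```c
#include <stddef.h>

/* Reverse-lookup tables: map a byte to its 6-bit value, 0xFF if invalid.
 * The two alphabets differ only in characters 62/63 ('+','/' vs '-','_').
 */
extern const unsigned char std_b64_tab[256];
extern const unsigned char url_b64_tab[256];

/* One decoding loop shared by both variants; padding is optional, so the
 * unpadded base64url form is accepted by the same loop.
 */
static int b64dec_table(const char *in, size_t ilen, char *out, size_t olen,
                        const unsigned char *tab)
{
	unsigned int acc = 0;   /* bit accumulator           */
	int nbits = 0;          /* bits held in accumulator  */
	size_t opos = 0;

	while (ilen-- > 0) {
		unsigned char c = (unsigned char)*in++;

		if (c == '=')
			break;              /* padding: stop decoding   */
		if (tab[c] == 0xFF)
			return -1;          /* invalid character        */
		acc = (acc << 6) | tab[c];
		nbits += 6;
		if (nbits >= 8) {
			nbits -= 8;
			if (opos >= olen)
				return -1;  /* output buffer too small  */
			out[opos++] = (char)(acc >> nbits);
		}
	}
	return (int)opos;
}

int base64dec_tab(const char *in, size_t ilen, char *out, size_t olen)
{
	return b64dec_table(in, ilen, out, olen, std_b64_tab);
}

int base64urldec_tab(const char *in, size_t ilen, char *out, size_t olen)
{
	return b64dec_table(in, ilen, out, olen, url_b64_tab);
}
```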

You can find attached the patches 0001-bis and 0002-bis modifying the
existing functions (introducing a url flag) to show how that looks.
This solution may be cleaner (no chunk allocation and we don't loop
twice over the input string) but has the drawback of being more
intrusive to the rest of the code and, imo, less clear about how the
url variant differs from standard base64.
Feel free to pick the one that looks better, otherwise I can continue
with a different implementation if need be.
-- 
Moemen
>From cf0a43dab4f5f88ddf5e5e736127132721b7f18e Mon Sep 17 00:00:00 2001
From: Moemen MHEDHBI 
Date: Thu, 1 Apr 2021 20:53:59 +0200
Subject: [PATCH 1/2] MINOR: sample: add ub64dec and ub64enc converters

ub64dec and ub64enc are the base64url equivalents of the b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
The RFC1421 mention in the base64.c file is deprecated, so it was replaced
with RFC4648, to which the existing converters, base64/b64dec, still apply.

Example:
  HAProxy:
http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
  Client:
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
$ curl -H "Authorization: Bearer ${TOKEN}" http://haproxy.local
{"user":"foo","key":"chae6AhXai6e"}
---
 doc/configuration.txt| 12 ++
 include/haproxy/base64.h |  2 +
 reg-tests/sample_fetches/ubase64.vtc | 45 +++
 src/base64.c | 64 +++-
 src/sample.c | 38 +
 5 files changed, 160 insertions(+), 1 deletion(-)
 create mode 100644 reg-tests/sample_fetches/ubase64.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7048fb63e..c7fe416e5 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -16393,6 +16393,18 @@ table_trackers(<table>)
   connections there are from a given address for example. See also the
   sc_trackers sample fetch keyword.
 
+ub64dec
+  This converter is the base64url variant of b64dec converter. base64url
+	encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
+	It is also the encoding used in JWT (JSON Web Token) standard.
+
+	Example:
+	  # Decoding a JWT payload:
+	  http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec
+
+ub64enc
+  This converter is the base64url variant of base64 converter.
+
 upper
   Convert a string sample to upper case. This can only be placed after a string
   sample fetch function or after a transformation keyword returning a string
diff --git a/include/haproxy/base64.h b/include/haproxy/base64.h
index 1756bc058..532c46a44 100644
--- a/include/haproxy/base64.h
+++ b/include/haproxy/base64.h

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-12 Thread Aleksandar Lazic

Hi Moemen,

any chance to get this feature in before 2.4 is released?

Regards
Aleks

On 06.04.21 09:13, Willy Tarreau wrote:

Hi Moemen,

On Tue, Apr 06, 2021 at 01:58:11AM +0200, Moemen MHEDHBI wrote:

Only part unclear:
On 02/04/2021 15:04, Tim Düsterhus wrote:

+int base64urldec(const char *in, size_t ilen, char *out, size_t olen) {
+char conv[ilen+2];


This looks like a remotely triggerable stack overflow.


You mean in case ilen is too big?


Yes that's it, I didn't notice it during the first review. It's
particularly uncommon to use variable sized arrays and should never
be done. The immediate effect of this is that it will reserve some
room in the stack for a size as large as ilen+2 bytes. The problem
is that on most platforms the stack grows down, so the beginning of
the buffer is located at a point which is very far away from the
current stack. This memory is in fact not allocated, so the system
detects the first usage through a page fault and allocates the
necessary space. But in order to know that the page fault is within
the stack, it has to apply a certain margin. And this margin varies
between OSes and platforms. Some compilers will explicitly initialize
such a large stack from top to bottom to avoid a crash. Other ones
will not do and may very well crash at 64kB. On Linux, I can make the
above crash by using a 8 MB ilen, just because by default the stack
size limit is 8 MB. That's large but not overly excessive for those
who would like to perform some processing on bodies. And I recall
that some other OSes default to way smaller limits (I recall 64kB
on OpenBSD a long time ago though this might have been raised to a
megabyte or so by now).
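
To make the failure mode concrete, here is a standalone illustration
(not HAProxy code; the 8 MB figure matches the default Linux stack
limit mentioned above):

```c
#include <stdlib.h>
#include <string.h>

/* A variable-length array sized from attacker-controlled input reserves
 * stack space far below the current stack pointer; with a large enough
 * length the first write lands beyond the stack guard and the process
 * typically dies with SIGSEGV.
 */
static void decode_like(const char *in, size_t ilen)
{
	char conv[ilen + 2];          /* same pattern as the quoted code */

	memcpy(conv, in, ilen);       /* first touch of the huge VLA     */
	(void)conv;
}

int main(void)
{
	size_t big = 8 * 1024 * 1024; /* roughly the default stack limit  */
	char *in = calloc(1, big);

	if (in)
		decode_like(in, big); /* expected to crash on most setups */
	free(in);
	return 0;
}
```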


in such case should we rather use dynamic allocation ?


No, there are two possible approaches. One of them is to use a trash
buffer using get_trash_chunk(). The trash buffers are "large enough"
for anything that comes from outside. A second, cleaner solution
simply consists in not using a temporary buffer but doing the conversion
on the fly. Indeed, looking closer, what the function does is to first
replace a few chars on the whole chain to then call the base64 conversion
function. So it doubles the work on the string and one side effect of
this double work is that you need a temporary storage.

Other approaches would consist in either reimplementing the functions
with a different alphabet, or modifying the existing ones to take an
extra argument for the conversion table, and make one set of functions
making use of the current table and another set making use of your new
table.

Willy






Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Tim Düsterhus

Aleks,

[Willy: I believe the patch is in a state that warrants you taking a 
look at it!]


thank you. This looks *much* better now. I primarily have some style 
complaints. I'll probably also complain about the documentation a bit 
more, but you already said that you are still working on it.


On 4/12/21 10:09 PM, Aleksandar Lazic wrote:
This should write the double value to the string but I think I have here some issue.


I've responded inline to that.

Patch feedback:


From 8cb1bc4aaedd17c7189d4985a57f662ab1b533a4 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 12 Apr 2021 22:01:04 +0200
Subject: [PATCH] MINOR: sample: converter: add JSON Path handling

With json_path a JSON value can be extracted from a header
or body
---
 Makefile   |3 +-
 doc/configuration.txt  |   29 +
 include/import/mjson.h |  209 ++
 reg-tests/converter/json_query.vtc |   94 +++
 src/mjson.c| 1048 
 src/sample.c   |   94 +++
 6 files changed, 1476 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/converter/json_query.vtc
 create mode 100644 src/mjson.c

diff --git a/Makefile b/Makefile
index 9b22fe4be..56d0aa28d 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o 
src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o 
\
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o 
\
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o
\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   
\
+   src/mjson.o


Incorrect indentation here.


 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
diff --git a/doc/configuration.txt b/doc/configuration.txt
index f21a29a68..4393d5c1f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15958,6 +15958,35 @@ json([<input-code>])
   Output log:
  {"ip":"127.0.0.1","user-agent":"Very \"Ugly\" UA 1\/2"}
 
+json_query(<json_path>,[<output_type>])

+  The <json_path> is mandatory.


This is redundant. It is implied by the method signature (no square 
brackets around json_path).



+  By default the following JSON types are recognized:
+   - string  : This is the default search type and returns a string;
+   - number  : When the JSON value is a number, the value will be
+   converted to a string. If you know that the value is an
+   integer, you can help haproxy convert the value to an
+   integer by adding "sint" to the <output_type>;
+   - boolean : If the JSON value is not a string.
+
+  This converter uses the mjson library https://github.com/cesanta/mjson
+  This converter extracts the value located at <json_path> from the JSON
+  string in the input value.
+  <json_path> must be a valid JsonPath string as defined at
+  https://goessner.net/articles/JsonPath/

+
+  A floating point value will always be returned as string!
+  
+  Example:

+ # get the value of the key kubernetes.io/serviceaccount/namespace
+ # => openshift-logging
+ http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.kubernetes\\.io/serviceaccount/namespace')
+ 
+ # get the value of the key 'iss' from a JWT

+ # => kubernetes/serviceaccount
+ http-request set-var(sess.json) 
req.hdr(Authorization),b64dec,json_string('$.iss')
+
+
+
 language(<value>[,<default>])
   Returns the value with the highest q-factor from a list as extracted from the
   "accept-language" header using "req.fhdr". Values with no q-factor have a
diff --git a/reg-tests/converter/json_query.vtc 
b/reg-tests/converter/json_query.vtc
new file mode 100644
index 0..b7c0e2c3a
--- /dev/null
+++ b/reg-tests/converter/json_query.vtc
@@ -0,0 +1,94 @@
+varnishtest "JSON Query converters Test"
+#REQUIRE_VERSION=2.4
+
+feature ignore_unknown_macro
+
+server s1 {
+   rxreq
+   txresp
+
+rxreq
+expect req.url == "/"
+txresp  -body "{\"iss\":\"kubernetes/serviceaccount\"}"
+
+rxreq
+expect req.url == "/"
+txresp  -body "{\"integer\":4}"
+
+rxreq
+   txresp -body "{\"boolean-true\":true}"
+
+rxreq
+   txresp -body "{\"boolean-false\":false}"
+
+rxreq
+   txresp -body "{\"my.key\":\"myvalue\"}"


Incorrect indentation above. You are mixing tabs and spaces.


+} -start
+
+haproxy h1 -conf {
+defaults
+   mode http
+   timeout connect 1s
+   timeout client  1s
+   timeout server  1s
+
+frontend fe
+   bind "fd@${fe}"
+
+http-request set-var(sess.header_json) 
req.hdr(Authorization),json_string('$.iss')
+http-request set-var(sess.pay_json) req.body,json_string('$.iss')
+http-request set-var(sess.pay_int) 
req.body,json_string('$.integer',"sint"),add(1)
+http-request set-var(sess.pay_boolean_true) 

Re: [PATCH] MINOR: sample: add json_string

2021-04-12 Thread Aleksandar Lazic

Hi.

another patch which honours the feedback.

The doc will be enhanced, but I have a question about this sequence.
It should write the double value to the string, but I think I have an
issue here.

```
printf("\n>>>DOUBLE rc:%d: double:%f:\n", rc, double_val);
trash->size = snprintf(trash->area,
                       trash->data,
                       "%g", double_val);
smp->data.u.str = *trash;
smp->data.type = SMP_T_STR;
```
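
For reference, a corrected sketch of that sequence might look like this
(assuming `trash` comes from get_trash_chunk(): snprintf()'s second
argument should be the buffer size, and its return value is what
belongs in ->data):

```c
	/* Sketch: pass the buffer size to snprintf() and store the number
	 * of characters written in ->data instead of overwriting ->size.
	 */
	trash->data = snprintf(trash->area, trash->size, "%g", double_val);
	smp->data.u.str = *trash;
	smp->data.type = SMP_T_STR;
```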

I have also added more tests with some specific JSON types.

Regards
Aleks

On 11.04.21 13:04, Tim Düsterhus wrote:

Aleks,

On 4/11/21 12:28 PM, Aleksandar Lazic wrote:

Agreed. I have now rethought how to do it and suggest adding an output type.

```
json_query(<json_path>,<output_type>)
   The <json_path> and <output_type> are mandatory.
   This converter uses the mjson library https://github.com/cesanta/mjson
   This converter extracts the value located at <json_path> from the JSON
   string in the input value.
   <json_path> must be a valid JsonPath string as defined at
   https://goessner.net/articles/JsonPath/

   These are the possible output types.
    - "bool"   : A boolean is expected;
    - "sint"   : A signed 64bits integer type is expected;
    - "str"    : A string is expected. This could be a simple string or
 a JSON sub-object;

   A floating point value will always be converted to sint!
```


The converter should be able to detect the type on its own. The types are part 
of the JSON after all! The output_type argument just moves the explicit type 
specification from the converter name into an argument. Not much of an 
improvement.

I don't know how the library works exactly, but after extracting the value 
something like the following should work:

If the first character is '"' -> string
If the first character is 't' -> bool(true)
If the first character is 'f' -> bool(false)
If the first character is 'n' -> null (This should probably result in the 
converter failing).
If the first character is a digit -> number
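
A rough sketch of that dispatch (illustrative only; `json_kind_of()`
and the enum are made-up names, and in HAProxy the result would map to
the SMP_T_* types):

```c
#include <ctype.h>
#include <stddef.h>

/* Sketch: derive the JSON type from the first character of the raw
 * value returned by the path lookup, as described above.
 */
enum json_kind { JK_STRING, JK_BOOL, JK_NULL, JK_NUMBER, JK_OTHER };

static enum json_kind json_kind_of(const char *p, size_t len)
{
	if (!len)
		return JK_OTHER;
	if (p[0] == '"')
		return JK_STRING;                        /* -> string            */
	if (p[0] == 't' || p[0] == 'f')
		return JK_BOOL;                          /* -> bool              */
	if (p[0] == 'n')
		return JK_NULL;                          /* converter should fail */
	if (isdigit((unsigned char)p[0]) || p[0] == '-')
		return JK_NUMBER;                        /* -> number            */
	return JK_OTHER;                                 /* object, array, ...   */
}
```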


+    { "json_string", sample_conv_json_string, ARG1(1,STR), 
sample_check_json_string , SMP_T_STR, SMP_USE_CONST },


While testing something I also just noticed that SMP_USE_CONST is incorrect 
here. I cannot apply e.g. the sha1 converter to the output of json_string.


Okay. I will change both to SMP_T_ANY because the return values can be bool, 
int or str.


The input type should remain as SMP_T_STR, because you are parsing a JSON 
*string*.


While implementing the suggested options above I struggle with checking the
params.
Arg0 is quite clear, but how do I make an efficient check for Arg1, the output type?


The efficiency of the check is less of a concern. That happens only once during 
configuration checking.



```
/* This function checks the "json_query" converter's arguments.
 */
static int sample_check_json_query(struct arg *arg, struct sample_conv *conv,
                                   const char *file, int line, char **err)
{
	if (arg[0].data.str.data == 0) { /* empty */
		memprintf(err, "json_path must not be empty");
		return 0;
	}

	/* this doesn't work */
	int type = smp_to_type[arg[1].data.str.area];
```

The output_type argument should not exist. I'll answer the question nonetheless: you have to compare strings explicitly in C, so you would have to use strcmp for each of the cases.

```
	switch (type) {
	case SMP_T_BOOL:
	case SMP_T_SINT:
		/* These types are not const. */
		break;

	case SMP_T_STR:

```
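
For completeness, the explicit comparison Tim describes would look
roughly like this (sketch only, reusing the names from the quoted
function; his actual recommendation is to drop the output_type argument
altogether):

```c
	/* Sketch: map the textual Arg1 to a sample type with explicit
	 * string comparisons; a C string cannot be used as an array index.
	 */
	int type;

	if (strcmp(arg[1].data.str.area, "bool") == 0)
		type = SMP_T_BOOL;
	else if (strcmp(arg[1].data.str.area, "sint") == 0)
		type = SMP_T_SINT;
	else if (strcmp(arg[1].data.str.area, "str") == 0)
		type = SMP_T_STR;
	else {
		memprintf(err, "unsupported output type '%s'", arg[1].data.str.area);
		return 0;
	}
```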

I would do the conversion from double to int like "smp->data.u.sint = (long long int) double_val;"
Is this efficient? I haven't done this for a long time, so I would like to have a
"2nd pair of eyes" on this.



I'd probably return a double as a string instead. At least that doesn't destroy 
information.

Best regards
Tim Düsterhus


>From 8cb1bc4aaedd17c7189d4985a57f662ab1b533a4 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 12 Apr 2021 22:01:04 +0200
Subject: [PATCH] MINOR: sample: converter: add JSON Path handling

With json_path a JSON value can be extracted from a header
or body
---
 Makefile   |3 +-
 doc/configuration.txt  |   29 +
 include/import/mjson.h |  209 ++
 reg-tests/converter/json_query.vtc |   94 +++
 src/mjson.c| 1048 
 src/sample.c   |   94 +++
 6 files changed, 1476 insertions(+), 1 deletion(-)
 create mode 100644 include/import/mjson.h
 create mode 100644 reg-tests/converter/json_query.vtc
 create mode 100644 src/mjson.c

diff --git a/Makefile b/Makefile
index 9b22fe4be..56d0aa28d 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o 

[ANNOUNCE] haproxy-1.8.30

2021-04-12 Thread Amaury Denoyelle
Hi,

HAProxy 1.8.30 was released on 2021/04/12. It added 8 new commits after
version 1.8.29. The main point of this release is the fix of the
regression on the frequency counters introduced in the latest release.
Thanks to user feedback, the bug was quickly spotted and the fix
validated in a higher version before being backported to the 1.8 tree.
Here is the list of the detailed changes.

- As stated, the bug on the frequency counters is fixed. Since the
  previous release, the time period was not properly calculated, so the
  counters were not properly updated. Willy was able to fix the issue
  thanks to quick user reports.

- The hdr_ip sample fetch is now stricter. It will reject a field if
  there is some garbage after a valid IPv4 address. This ensures that,
  for example, an invalid x-forwarded-for header field is not silently
  accepted, which helps detect a non-conformant HTTP proxy in a network.

- The silent-drop action was not functional for IPv6 connections if the
  haproxy process was executed without admin capabilities. It now
  properly sets the IPv6 header field hop-limit to 1, as explained in the
  documentation.

Thanks to everyone for this release. Enjoy !

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Wiki : https://github.com/haproxy/wiki/wiki
   Sources  : http://www.haproxy.org/download/1.8/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.8.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git
   Changelog: http://www.haproxy.org/download/1.8/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Amaury Denoyelle
---
Complete changelog :
Willy Tarreau (8):
  MINOR: time: also provide a global, monotonic global_now_ms timer
  BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable
  BUG/MEDIUM: time: make sure to always initialize the global tick
  MINOR: tools: make url2ipv4 return the exact number of bytes parsed
  BUG/MINOR: http_fetch: make hdr_ip() reject trailing characters
  BUG/MINOR: tcp: fix silent-drop workaround for IPv6
  BUILD: tcp: use IPPROTO_IPV6 instead of SOL_IPV6 on FreeBSD/MacOS
  BUG/MINOR: http_fetch: make hdr_ip() resistant to empty fields

---



[ANNOUNCE] haproxy-2.0.22

2021-04-12 Thread Amaury Denoyelle
Hi,

HAProxy 2.0.22 was released on 2021/04/12. It added 23 new commits after
version 2.0.21. Most notably, this release fixes a regression affecting
the frequency counters. Thanks to quick user feedback, the bug was
quickly spotted and the fix validated in a higher version before being
backported to the 2.0 tree. There were also important fixes in the DNS
module due to recent changes. Here is the list of all the changes.

- Since the previous release, the time period of frequency counters was
  not properly calculated, so the counters were not correctly updated.
  Willy was able to fix the issue quickly thanks to prompt user reports.

- As mentioned, a bunch of fixes were made on the DNS resolvers. The
  servers managed through the SRV records' additional sections were not
  properly handled, causing servers in MAINT status to never be
  activated when becoming available. Also, the handling of resolution
  errors is now safer.

- The hdr_ip sample fetch is now stricter. It will reject a field if
  there is some garbage after a valid IPv4 address. This ensures that,
  for example, an invalid x-forwarded-for header field is not silently
  accepted, which helps detect a non-conformant HTTP proxy in a network.

- The silent-drop action was not functional for IPv6 connections if the
  haproxy process was executed without admin capabilities. It now
  properly sets the IPv6 header field hop-limit to 1, as explained in the
  documentation.

- The Lua debugging has been slightly improved by Christopher with the
  implementation of an internal function to display the backtrace in
  case of failure. This makes it possible to output the backtrace even
  if a memory allocation failure is the cause of the bug.

- The closing of an H1 connection is now idempotent. This prevents a
  rare occurrence of a crash when closing an already closed H1
  connection.

- On the HTML stats page, a DOWN backend in transition to the UP state
  was displayed with the wrong color, making it indistinguishable from
  backends going DOWN.

- The unix-bind prefix was incorrectly prepended to the master CLI
  socket path.

- A deadlock was fixed for process built with DEBUG_UAF when using
  thread isolation. This option is normally only activated for debugging
  purposes to detect use-after-free problems.

Thanks to everyone for this release. Enjoy !

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Wiki : https://github.com/haproxy/wiki/wiki
   Sources  : http://www.haproxy.org/download/2.0/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.0.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git
   Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Amaury Denoyelle
---
Complete changelog :
Baptiste Assmann (1):
  BUG/MAJOR: dns: disabled servers through SRV records never recover

Christopher Faulet (10):
  MINOR: lua: Slightly improve function dumping the lua traceback
  BUG/MEDIUM: debug/lua: Use internal hlua function to dump the lua 
traceback
  BUG/MEDIUM: lua: Always init the lua stack before referencing the context
  BUG/MEDIUM: thread: Fix a deadlock if an isolated thread is marked as 
harmless
  BUG/MINOR: resolvers: Unlink DNS resolution to set RMAINT on SRV 
resolution
  MINOR: resolvers: Use a function to remove answers attached to a 
resolution
  MINOR: resolvers: Purge answer items when a SRV resolution triggers an 
error
  MINOR: resolvers: Add function to change the srv status based on SRV 
resolution
  MINOR: resolvers: Directly call srvrq_update_srv_state() when possible
  BUG/MEDIUM: resolvers: Don't release resolution from a requester callbacks

Eric Salama (1):
  MINOR/BUG: mworker/cli: do not use the unix_bind prefix for the master 
CLI socket

Florian Apolloner (1):
  BUG/MINOR: stats: Apply proper styles in HTML status page.

Jerome Magnin (1):
  BUG/MAJOR: dns: fix null pointer dereference in snr_update_srv_status

Willy Tarreau (9):
  MINOR: time: also provide a global, monotonic global_now_ms timer
  BUG/MEDIUM: freq_ctr/threads: use the global_now_ms variable
  BUG/MEDIUM: time: make sure to always initialize the global tick
  MINOR: tools: make url2ipv4 return the exact number of bytes parsed
  BUG/MINOR: http_fetch: make hdr_ip() reject trailing characters
  BUG/MEDIUM: mux-h1: make h1_shutw_conn() idempotent
  BUG/MINOR: tcp: fix silent-drop workaround for IPv6
  BUILD: tcp: use IPPROTO_IPV6 instead of SOL_IPV6 on FreeBSD/MacOS
  BUG/MINOR: http_fetch: make hdr_ip() resistant to empty fields

---



Re: [ANNOUNCE] haproxy-2.4-dev16

2021-04-12 Thread Christopher Faulet

On 12/04/2021 at 09:40, Илья Шипицин wrote:


Dear Team,

can we address at least #1112, #1119 before 2.4 is released ?

Of course, thanks for the reminder !

--
Christopher Faulet



Re: [ANNOUNCE] haproxy-2.4-dev16

2021-04-12 Thread Илья Шипицин
Dear Team,

can we address at least #1112, #1119 before 2.4 is released ?

On Fri, 9 Apr 2021 at 20:52, Willy Tarreau wrote:

> Hi,
>
> HAProxy 2.4-dev16 was released on 2021/04/09. It added 37 new commits
> after version 2.4-dev15.
>
> This one is particularly calm, I even hesitated between making it or not.
> But there are a few updates that may affect configuration so I figured it
> was better to emit a new one.
>
> A large part of the patch is essentially caused by the renaming of a few
> atomic ops that were causing some confusion. Now HA_ATOMIC_FOO() will be
> a void statement so that if you want to read from the location you use
> either HA_ATOMIC_FETCH_FOO() or HA_ATOMIC_FOO_FETCH() (pre- or post-
> fetch).  The output code shouldn't change however, and given that it was
> essentially sed-work, as soon as it started to work I was confident in it.
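
Concretely, using the ADD form as an example (a sketch following the
naming pattern described above):

```c
	unsigned int counter = 0, before, after;

	HA_ATOMIC_ADD(&counter, 1);                 /* void statement, no value returned */
	before = HA_ATOMIC_FETCH_ADD(&counter, 1);  /* returns the value before the add  */
	after  = HA_ATOMIC_ADD_FETCH(&counter, 1);  /* returns the value after the add   */
```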
>
> A few changes in the FD code to clean up that messy fdtab structure cause
> another noticeable part of the diff. I obviously managed to break
> something once (hence the BUG/MAJOR fix) but now it's OK. Mistakes at this
> level are never forgiving anyway, either it fully works or it fully fails.
>
> The nice part that makes me think we're progressively approaching what
> could look like the release is that Emeric finally performed the few
> changes in the DNS and log code. For the DNS, the TCP servers in the
> "resolvers" section do not need to be referred to as "server" anymore,
> they are "nameserver" just like the other ones, except that you can
> mention "tcp@" in front of their address to force them to be TCP
> nameservers. No more configuration mixing there. And for the log servers,
> similarly, now you can specify "tcp@" in front of an address on a "log"
> statement, and it will automatically create the ring for you with default
> settings. Previously it was still required to manually declare the ring, I
> found this too cumbersome, and Emeric figured how to handle this.
>
> The rest is essentially small bug fixes and code cleanups from a bunch
> of contributors.
>
> Now speaking about the pending stuff I'm still aware of:
>
>   - I'd like to rename LIST_ADD/LIST_ADDQ to LIST_INSERT/LIST_APPEND
> (or maybe LIST_ADD_HEAD/LIST_ADD_TAIL ?). I've already been trapped
> in using LIST_ADD() when I wanted to append and I know the confusion
> is easy. That's just a quick "sed" once we know what we want.
>
>   - I identified some undesired cache line sharing around some pointers,
> that slow down FD operations and possibly other stuff. I see how to
> arrange this, it just needs to be done (and tested on MacOS since we
> noticed once that section names differ there).
>
>   - we've had a recent discussion about the opportunity to import SLZ
> and to always enable compression by default using it if no other lib
> is specified. I think it could be useful for some users (especially
> those for whom memory usage is important). I'll have a look at this,
> maybe next week, that's only two files to include.
>
>   - regarding the quick discussion two weeks ago about optimization for
> modern ARM CPUs, I saw that one solution could be to build using
> gcc 10.2+, which outlines the atomics into functions. That's ugly
> but the performance impact is small (about 3% in my tests), while
> it provides a tremendous improvement for many-core machines. But if
> we rely on this, I'll probably add two new CPU entries to force
> the use of only an old one (v8.0) or only a modern one (v8.2) so
> that those who build themselves can shave off the last few percent.
>
>   - no progress made on the default-path, but we'll have to reread the
> issue to figure the best choice. I'd like to see it done for the
> release to ease config packaging and deployments.
>
>   - I'd like to add a warning when we detect that nbthreads is forced
> to a higher value than the number of bound CPUs. It's not the first
> time that I see this in a config and the result is catastrophic, so
> a warning is deserved. It just needs to be set at the right place.
>
>   - the shared memory pools cleanup must be completed before the release,
> as the situation is not good for those with a slow malloc() function
> at the moment. I know what to do, I just need to stay focused on it.
>
>   - the date & time cleanups would be nice to have for the long term and
> are not particularly hard to do so better finish them before 2.4.
>
>   - Tim sent a patch series to implement URI normalization. That's
> something
> I'd like to see merged if possible, as it may improve security for some
> users, and at least improve reliability for others.
>
>   - I also noticed Alek's mjson import with a new converter, but didn't
> have a look yet. Maybe it will open opportunities for more converters,
> that's definitely something which deserves being considered before the
> release.
>
>   - Amaury has almost finished some