Re: CI caching improvement

2022-03-21 Thread Tim Düsterhus

William,

On 3/18/22 11:31, William Lallemand wrote:

It looks like it is available on our repositories as well; I just tested
it and it works correctly.

Honestly, I really don't like the dependency on another repository with a
GitHub-specific format.

I agree that a cleaner integration with GitHub using their specific tools
is nice, but I don't want us to be locked into GitHub; we are still
using Cirrus, Travis, sometimes GitLab, and also running some of the
scripts by hand.

We also try to avoid dependencies on other projects; it's much
simpler to have a few shell scripts and a CI configuration in the
repository. And TypeScript is not a language we would want to have to
debug, for example.


Okay, that's fair.


Given that GitHub is offering the job restart feature, we could skip
the VTest caching, since it's a little bit ugly. Only the quictls cache
needs to be fixed.


Perfect, I agree here. QUICTLS caching is useful and VTest caching is 
obsolete with the single-job restart.


Best regards
Tim Düsterhus



MEDIUM: mqtt: support mqtt_is_valid and mqtt_field_value converters for MQTTv3.1

2022-03-21 Thread dhruvjain99
From: "Dhruv Jain" 

In MQTTv3.1, the protocol name is "MQIsdp" and the protocol level is 3. The
mqtt converters (mqtt_is_valid and mqtt_field_value) did not work for clients
on MQTTv3.1 because mqtt_parse_connect() marked the CONNECT message invalid if
either the protocol name was not "MQTT" or the protocol version was other than
v3.1.1 or v5.0. To fix it, we have added the MQTTv3.1 protocol name and
version to the checks.

This patch fixes the mqtt converters to support MQTTv3.1 clients as well
(issue #1600).
It must be backported to 2.4.
---
 include/haproxy/mqtt-t.h     |  1 +
 reg-tests/converter/mqtt.vtc | 11 +++++++++++
 src/mqtt.c                   | 12 +++++++-----
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/haproxy/mqtt-t.h b/include/haproxy/mqtt-t.h
index 937702178..710fd87db 100644
--- a/include/haproxy/mqtt-t.h
+++ b/include/haproxy/mqtt-t.h
@@ -27,6 +27,7 @@
 /* MQTT protocol version
  * In MQTT 3.1.1, version is called "level"
  */
+#define MQTT_VERSION_3_1      3
 #define MQTT_VERSION_3_1_1    4
 #define MQTT_VERSION_5_0      5
 
diff --git a/reg-tests/converter/mqtt.vtc b/reg-tests/converter/mqtt.vtc
index 60458a3fe..fc3dacae1 100644
--- a/reg-tests/converter/mqtt.vtc
+++ b/reg-tests/converter/mqtt.vtc
@@ -42,6 +42,11 @@ server s1 {
     recv 22
     sendhex "2102"
     expect_close
+
+    # MQTT 3.1 CONNECT packet (id: test_sub - username: test - passwd: passwd)
+    accept
+    recv 38
+    sendhex "2002"
 } -start
 
 server s2 {
@@ -225,3 +230,9 @@ client c2_50_1 -connect ${h1_fe2_sock} {
     recv 39
     expect_close
 } -run
+
+client c3_31_1 -connect ${h1_fe1_sock} {
+    # Valid MQTT 3.1 CONNECT packet (id: test_sub - username: test - passwd: passwd)
+    sendhex "102400064d514973647003c20008746573745f7375620004746573740006706173737764"
+    recv 4
+} -run
\ No newline at end of file
diff --git a/src/mqtt.c b/src/mqtt.c
index ebdb57d4e..5688296e5 100644
--- a/src/mqtt.c
+++ b/src/mqtt.c
@@ -40,14 +40,14 @@ uint8_t mqtt_cpt_flags[MQTT_CPT_ENTRIES] = {
 const struct ist mqtt_fields_string[MQTT_FN_ENTRIES] = {
 	[MQTT_FN_INVALID]            = IST(""),
 
-	/* it's MQTT 3.1.1 and 5.0, those fields have no unique id, so we use strings */
+	/* it's MQTT 3.1, 3.1.1 and 5.0, those fields have no unique id, so we use strings */
 	[MQTT_FN_FLAGS]              = IST("flags"),
-	[MQTT_FN_REASON_CODE]        = IST("reason_code"),        /* MQTT 3.1.1: return_code */
+	[MQTT_FN_REASON_CODE]        = IST("reason_code"),        /* MQTT 3.1 and 3.1.1: return_code */
 	[MQTT_FN_PROTOCOL_NAME]      = IST("protocol_name"),
 	[MQTT_FN_PROTOCOL_VERSION]   = IST("protocol_version"),   /* MQTT 3.1.1: protocol_level */
 	[MQTT_FN_CLIENT_IDENTIFIER]  = IST("client_identifier"),
 	[MQTT_FN_WILL_TOPIC]         = IST("will_topic"),
-	[MQTT_FN_WILL_PAYLOAD]       = IST("will_payload"),       /* MQTT 3.1.1: will_message */
+	[MQTT_FN_WILL_PAYLOAD]       = IST("will_payload"),       /* MQTT 3.1 and 3.1.1: will_message */
 	[MQTT_FN_USERNAME]           = IST("username"),
 	[MQTT_FN_PASSWORD]           = IST("password"),
 	[MQTT_FN_KEEPALIVE]          = IST("keepalive"),
@@ -695,6 +695,7 @@ struct ist mqtt_field_value(struct ist msg, int type, int fieldname_id)
 }
 
 /* Parses a CONNECT packet :
+ *   https://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect
  *   https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718028
  *   https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901033
  *
@@ -718,14 +719,15 @@ static int mqtt_parse_connect(struct ist parser, struct mqtt_pkt *mpkt)
 	 */
 	/* read protocol_name */
 	parser = mqtt_read_string(parser, &mpkt->data.connect.var_hdr.protocol_name);
-	if (!isttest(parser) || !isteqi(mpkt->data.connect.var_hdr.protocol_name, ist("MQTT")))
+	if (!isttest(parser) || !(isteqi(mpkt->data.connect.var_hdr.protocol_name, ist("MQTT")) || isteqi(mpkt->data.connect.var_hdr.protocol_name, ist("MQIsdp"))))
 		goto end;
 
 	/* read protocol_version */
 	parser = mqtt_read_1byte_int(parser, &mpkt->data.connect.var_hdr.protocol_version);
 	if (!isttest(parser))
 		goto end;
-	if (mpkt->data.connect.var_hdr.protocol_version != MQTT_VERSION_3_1_1 &&
+	if (mpkt->data.connect.var_hdr.protocol_version != MQTT_VERSION_3_1 &&
+	    mpkt->data.connect.var_hdr.protocol_version != MQTT_VERSION_3_1_1 &&
 	    mpkt->data.connect.var_hdr.protocol_version != MQTT_VERSION_5_0)
 		goto end;
 
-- 
2.27.0
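For illustration, the check this patch relaxes can be sketched as a standalone function. This is a simplified sketch, not HAProxy code: the function name `connect_version_ok` is invented, it assumes a single-byte remaining-length field (so the protocol-name length starts at offset 2), and it ignores the rest of the CONNECT packet.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the patched protocol-name/level check: accept "MQIsdp" with
 * level 3 (MQTT 3.1) in addition to "MQTT" with level 4 (3.1.1) or 5 (5.0).
 * Assumes a single-byte remaining length, i.e. the 2-byte protocol name
 * length starts at offset 2. */
static int connect_version_ok(const uint8_t *buf, size_t len)
{
	size_t name_len;
	uint8_t level;

	if (len < 4 || (buf[0] & 0xF0) != 0x10)    /* CONNECT packet type */
		return 0;
	name_len = ((size_t)buf[2] << 8) | buf[3]; /* protocol name length */
	if (len < 4 + name_len + 1)                /* name + 1 level byte */
		return 0;
	level = buf[4 + name_len];
	if (name_len == 4 && memcmp(buf + 4, "MQTT", 4) == 0)
		return level == 4 || level == 5;   /* MQTT 3.1.1 / 5.0 */
	if (name_len == 6 && memcmp(buf + 4, "MQIsdp", 6) == 0)
		return level == 3;                 /* MQTT 3.1 */
	return 0;
}
```

The opening bytes of the reg-test's sendhex payload (10 24 00 06 "MQIsdp" 03 ...) pass this check, while "MQTT" with a level byte of 3 would still be rejected, as before the patch.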




RE: [EXTERNAL] Re: Self-signed cert at haproxy, formal cert on backend web server

2022-03-21 Thread Moore, Dan [TREAS]
Shawn,

That is very helpful and provided some insight I hadn't considered which will 
definitely help moving forward.  Thank you!

Dan.

-Original Message-
From: Shawn Heisey  
Sent: Friday, March 18, 2022 8:06 PM
To: haproxy@formilux.org
Subject: [EXTERNAL] Re: Self-signed cert at haproxy, formal cert on backend web server


On 3/18/2022 9:28 AM, Moore, Dan [TREAS] wrote:
> This all works except the client browser is showing the connection as 
> insecure.  Would a formal
> certificate at haproxy fix this or is there another way to keep the browser 
> happy using the
> self-signed cert?  The config I'm using is below.  Thanks!

Yes, you need a real cert signed by a public CA on whatever the users 
actually connect to with https; in this case, that's haproxy.

With your setup, the end user will never see the certificate on the 
backend server; they will only see the certificate that haproxy gives them.

The place to use a self-signed certificate is on the backend servers.  
There is an option for haproxy to have it not validate the certificate 
chain; I can't remember what it is.
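The option alluded to here is most likely `verify none` on the server line; a minimal sketch (the backend name, server name, and address below are invented placeholders):

```
backend be_app
    # Encrypt haproxy->backend traffic, but skip certificate chain
    # validation so a self-signed backend cert is accepted.
    server app1 192.0.2.10:443 ssl verify none
```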

I've worked hard on my setup to eliminate the need for SSL on the 
backend, and it was only recently that I figured out how to accomplish 
this on all my sites.  WordPress and a WSGI application called dnote were 
the ones I had the hardest time configuring to force https even though 
the connection to Apache is unencrypted.  Once I figured out how to 
configure those applications, I was able to completely eliminate the SSL 
virtualhosts from my Apache configuration; haproxy now talks to Apache 
on localhost port 81.

TL;DR: The way I got the wsgi application to force https was with this 
directive in the Apache virtualhost:

WSGITrustedProxyHeaders X-Forwarded-For X-Forwarded-Proto

Thanks,
Shawn




Re: CI caching improvement

2022-03-21 Thread Илья Шипицин
Fri, 18 Mar 2022 at 15:32, William Lallemand :

> On Wed, Mar 16, 2022 at 09:31:56AM +0100, Tim Düsterhus wrote:
> > Willy,
> >
> > On 3/8/22 20:43, Tim Düsterhus wrote:
> > >> Yes my point was about VTest. However you made me think about a very
> good
> > >> reason for caching haproxy builds as well :-)  Very commonly, some
> VTest
> > >> randomly fails. Timing etc are involved. And at the moment, it's
> impossible
> > >> to restart the tests without rebuilding everything. And it happens to
> me to
> > >> click "restart all jobs" sometimes up to 2-3 times in a row in order
> to end
> > >
> > > I've looked up that roadmap entry I was thinking about: A "restart this
> > > job" button apparently is planned for Q1 2022.
> > >
> > > see https://github.com/github/roadmap/issues/271 "any individual job"
> > >
> > > Caching the HAProxy binary really is something I strongly advice
> against
> > > based on my experience with GitHub Actions and CI in general.
> > >
> > > I think the restart of the individual job sufficiently solves the issue
> > > of flaky builds (until they are fixed properly).
> > >
> >
> > In one of my repositories I noticed that this button is now there. One
> > can now re-run individual jobs and also all failed jobs. See screenshots
> > attached.
> >
>
> Hello Tim,
>
> It looks like it is available on our repositories as well; I just tested
> it and it works correctly.
>
> Honestly, I really don't like the dependency on another repository with a
> GitHub-specific format.
>
> I agree that a cleaner integration with GitHub using their specific tools
> is nice, but I don't want us to be locked into GitHub; we are still
> using Cirrus, Travis, sometimes GitLab, and also running some of the
> scripts by hand.
>
> We also try to avoid dependencies on other projects; it's much
> simpler to have a few shell scripts and a CI configuration in the
> repository. And TypeScript is not a language we would want to have to
> debug, for example.
>
> Given that GitHub is offering the job restart feature, we could skip
> the VTest caching, since it's a little bit ugly. Only the quictls cache
> needs to be fixed.
>

I think we can adjust the build-ssl.sh script to download a tagged quictls
(and cache it the way we already cache openssl itself); see the Tags page
of the quictls/openssl repository on github.com.
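A rough sketch of what that could look like. The tag name, variable names, and cache-key scheme here are all hypothetical; the real build-ssl.sh would choose them:

```shell
#!/bin/sh
# Sketch: pin quictls to a fixed tag, and derive both the download URL and a
# CI cache key from it, mirroring how the openssl build is cached.
# "OpenSSL_1_1_1m+quic" is a hypothetical example value, not a recommendation.
QUICTLS_TAG="${QUICTLS_TAG:-OpenSSL_1_1_1m+quic}"
QUICTLS_URL="https://github.com/quictls/openssl/archive/refs/tags/${QUICTLS_TAG}.tar.gz"
CACHE_KEY="quictls-${QUICTLS_TAG}"
echo "${CACHE_KEY}"
# The actual fetch/build step (omitted here) would then be along the lines of:
#   curl -sSfL "${QUICTLS_URL}" | tar xz
```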



>
> Regards,
>
> --
> William Lallemand
>


Trouble with HTTP request queueing when using HTTP/2 frontend and HTTP/1.1 backend

2022-03-21 Thread Jens Wahnes

Hello,

I'm a happy user of HAProxy and so far have been able to resolve any 
issues I've had by reading the docs and following the answers given on 
this mailing list or online tutorials. But now I've run into a problem I 
cannot resolve myself and hope someone could help me figure out what 
might be wrong.


The setup I have has been running fine for many months now. HAProxy 
terminates TLS for HTTPS requests and forwards these requests to a 
couple of backends. The backend servers are HTTP/1.1 only. As long as 
the frontend is limited to HTTP/1.1, the special backend I use for file 
uploads is operating exactly as intended. After enabling HTTP/2 on the 
frontend, however, the file upload backends are not working as before. 
The requests will not be processed properly and run into timeouts.


These file upload backends in my HAProxy configuration are somewhat 
special. They try to "serialize" certain HTTP file upload requests made 
by browser clients via AJAX calls (i.e. drag-and-drop of several files 
at once into the browser window). These files need to be processed one 
after the other per client (not in parallel). So the HAProxy backend in 
question uses a number of servers with a "maxconn 1" setting each, which 
will process the first request immediately but queue subsequent HTTP 
requests coming in at the same time until the previous request is 
finished. This approach certainly is not perfect in design, but has been 
working for me when using a somewhat high arbitrary number of 
pseudo-servers to realize it, so that each client making these file 
upload requests will be served by one "server" exclusively. This is what 
the backend definition looks like:


backend upload_ananas
option forwardfor if-none header X-Client-IP #except 127.0.0.0/8
stick-table type string len 32 size 10k expire 2m
stick match req.cook(mycookie),url_dec
stick store-request req.cook(mycookie),url_dec
timeout server 10m
timeout queue  20s
balance hdr(Cookie)
default-server   no-check maxconn 1 maxqueue 20 send-proxy-v2 track xy/ananas source "ipv6@[${BACKENDUSEIPV6}]"

server-template a 32 "${ANANAS}":9876


Once I switched the HTTP frontend to use HTTP/2 (using "alpn 
h2,http/1.1"), this special backend is no longer working as expected. 
All is fine as long as there is only one request present for a certain 
server at any given time. However, when there are two or more requests 
at the same time, i.e. as soon as the queueing mechanism is supposed to 
kick in, the setup is not working the way it does with an HTTP/1.1 
frontend. The parallel requests aren't properly put into the queue (or 
taken out of the queue) in this case. From what I can see in the log 
file, the requests seem to be blocking one another, and nothing happens 
until the timeout set by "timeout queue" is reached. At that point, 1 or 
2 out of 4 requests in an example call may succeed, but the others will 
fail.


Clearly, this kind of setup is quite the opposite of what most people 
will be using. In my case, I'm deliberately trying to stuff requests 
into a queue, whereas normally, one would try to move requests to a 
server that has got slots open for processing. So I think that my use 
case is hitting different code paths than most other setups.


I've read in previous emails on the mailing list that the "maxconn" 
setting nowadays does not limit the number of TCP sessions to the 
backend server, but the number of parallel HTTP requests. This made me 
wonder if the trouble I'm seeing might have to do with the way 
multiplexed HTTP/2 requests are mapped to HTTP/1.1 backends. Could it be 
that when the backend server finishes processing the first request, 
this isn't generating a proper event in HAProxy's backend logic, so that 
the next request is not being processed when it could be? Or maybe there 
is something special about the number of "0" free slots in the server 
definition in this case, once the first slot has been taken?


Trying to work around the problem, I've switched on and off quite a few 
settings that may influence the way processing takes place, but still 
haven't been able to come up with a working configuration in the HTTP/2 
case. Some of the settings I tried were:

  * default-server max-reuse 0
  * http-reuse never
  * option http-server-close
  * option httpclose
  * option http-buffer-request
  * retry-on conn-failure 408 503
  * http-request wait-for-body time 15s at-least 16k if METH_POST

With HTTP/2 active, there will be log entries like this (I re-ordered 
them to be in the order in which processing began).


Mar 18 17:30:31 localhost haproxy[478250]: fd90:1234::21a:52738 
[18/Mar/2022:17:29:51.411] Loadbalancer~ zyx_ananas/a24 
12/0/1/40112/40125 200 1217 mycookie=3BRoK6tyijmqndBJRzLyT9Lq7dsiPmeT - 
 356/356/1/0/0 0/0 {serialize.example.com} "POST 
https://serialize.example.com/services/ajax.php/file/upload HTTP/2.0"
Mar 18 17:30:11 localhost haproxy[478250]: fd90:1234::21a:52738 

Re: [PATCH] REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+

2022-03-21 Thread Willy Tarreau
On Mon, Mar 21, 2022 at 09:25:28AM +0100, Tim Düsterhus wrote:
> Willy,
> 
> On 3/11/22 22:46, Tim Duesterhus wrote:
> > Introduced in:
> > 
> > 0657b9338 MINOR: stream: add "last_rule_file" and "last_rule_line" samples
> 
> I believe you missed this one.

Indeed, for an unknown reason it was marked as read on my side; now merged,
thanks!

Willy



Re: [PATCH] REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+

2022-03-21 Thread Tim Düsterhus

Willy,

On 3/11/22 22:46, Tim Duesterhus wrote:

Introduced in:

0657b9338 MINOR: stream: add "last_rule_file" and "last_rule_line" samples


I believe you missed this one.

Best regards
Tim Düsterhus



Re: [PATCH 0/4] Using Coccinelle the right way

2022-03-21 Thread Willy Tarreau
Hi Tim,

On Tue, Mar 15, 2022 at 01:11:04PM +0100, Tim Duesterhus wrote:
> Willy,
> 
> I wanted to build a simple reproducer for the "ist in struct" issue to post
> on the Coccinelle list and found that it worked if all structs are defined
> in the same .c file. Searching the list archives then revealed the
> 
>   --include-headers-for-types
> 
> flag which fixes the issue we're seeing.
> 
> I've fixed a bug in the ist.cocci, reapplied it on the whole tree and then
> turned the bugfix into another rule and applied that one.

Series now applied, thank you!
Willy