Re: Always add "reason"

2022-03-16 Thread Marco Corte

On 2022-03-11 18:00, Willy Tarreau wrote:

Hi Marco,

On Thu, Mar 03, 2022 at 12:26:09PM +0100, Marco Corte wrote:

Hi!

I can add a "reason phrase" to a response based on the HTTP status 
like

this:

http-response set-status 200 reason OK if { status eq 200 }

Is there any way to add the reason phrase for a set of codes without an
explicit rule for each code?
I would like to write a set of rules like this:

http-response set-status 200 reason OK if { status eq 200 }
http-response set-status %[status] reason NotOK unless { status eq 200 }


Unfortunately I don't see a way to easily do this. I didn't even
remember that the internal representation of the reason was conveyed;
I thought it was built when serializing the status on the wire.

In fact, since the advent of HTTP/2, which completely dropped the reason
phrase, you really cannot trust that element at all anymore. It was
already quite unreliable even before HTTP/2, since plenty of proxies
would place their own reason for a given code, but nowadays it is really
impossible to trust it.

Out of curiosity, what would be your use case?
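
A minimal sketch of the enumeration workaround implied above, i.e. one
explicit set-status rule per code; the specific codes and reason phrases
here are assumptions for illustration only:

# one explicit rule per status code the legacy client cares about
http-response set-status 200 reason OK        if { status eq 200 }
http-response set-status 301 reason Moved     if { status eq 301 }
http-response set-status 404 reason NotFound  if { status eq 404 }
http-response set-status 500 reason NotOK     if { status eq 500 }
http-response set-status 503 reason NotOK     if { status eq 503 }

Tedious to maintain, but it keeps every reason phrase the client sees
under explicit control.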


Hi, Willy

In this case haproxy is used to wrap a service for a legacy(*) client
that uses the "reason" phrase to process the response.


Thank you for your answer!

.marcoc

(*) the correct spelling would be: fu%#!#@-old-out-of-standard client.



Re: Peers using heavily single cpu core

2022-03-16 Thread Maciej Zdeb
Hi Willy,
did you find anything?

I've run some "session shutdown" tests. With a simple configuration
(attached) it is very easy to reproduce a situation where "shutdown session"
works only the first time (a scripted version of the same check is sketched
after step 9 below).

1. Kill haproxy to have a clean start:
root@hap-rnd-1a:/etc/haproxy# killall haproxy

2. Show current sessions:
root@hap-rnd-1a:/etc/haproxy# echo "show sess" | socat
unix-connect:/var/run/haproxy/haproxy1.sock stdio
0x562291719560: proto=tcpv4 src=10.0.0.3:5276 fe=hap-rnd-1a be=hap-rnd-1a
srv= ts=00 epoch=0 age=4s calls=1 rate=0 cpu=0 lat=0
rq[f=c48202h,i=0,an=00h,rx=,wx=,ax=] rp[f=80048202h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,200048h,fd=29,ex=] s1=[8,204058h,fd=-1,ex=] exp=
0x562291719fa0: proto=unix_stream src=unix:1 fe=GLOBAL be= srv=
ts=00 epoch=0 age=0s calls=2 rate=2 cpu=0 lat=0
rq[f=c0c020h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,280008h,fd=30,ex=] s1=[8,204018h,fd=-1,ex=] exp=30s

3. Kill peer session 0x562291719560
root@hap-rnd-1a:/etc/haproxy# echo "shutdown session 0x562291719560" |
socat unix-connect:/var/run/haproxy/haproxy1.sock stdio

4. Show current sessions - confirm peer connection shutdown
root@hap-rnd-1a:/etc/haproxy# echo "show sess" | socat
unix-connect:/var/run/haproxy/haproxy1.sock stdio
0x7f2628026220: proto=unix_stream src=unix:1 fe=GLOBAL be= srv=
ts=00 epoch=0x1 age=0s calls=1 rate=1 cpu=0 lat=0
rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008000h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,240008h,fd=29,ex=] s1=[8,204018h,fd=-1,ex=] exp=

5. Show current sessions - confirm the new peer connection (note that the new
connection has the same id 0x7f2628026220 as the unix_stream in the previous
output, but that is probably just a coincidence)
root@hap-rnd-1a:/etc/haproxy# echo "show sess" | socat
unix-connect:/var/run/haproxy/haproxy1.sock stdio
0x7f2628026220: proto=tcpv4 src=10.0.0.3:5288 fe=hap-rnd-1a be=hap-rnd-1a
srv= ts=00 epoch=0x2 age=3s calls=1 rate=0 cpu=0 lat=0
rq[f=c48200h,i=0,an=00h,rx=,wx=,ax=] rp[f=80048202h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,200048h,fd=29,ex=] s1=[8,204058h,fd=-1,ex=] exp=
0x7f263c026220: proto=unix_stream src=unix:1 fe=GLOBAL be= srv=
ts=00 epoch=0x2 age=0s calls=1 rate=1 cpu=0 lat=0
rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,240008h,fd=30,ex=] s1=[8,204018h,fd=-1,ex=] exp=

6. Again kill peer session 0x7f2628026220
root@hap-rnd-1a:/etc/haproxy# echo "shutdown session 0x7f2628026220" |
socat unix-connect:/var/run/haproxy/haproxy1.sock stdio

7. Show current sessions - note that 0x7f2628026220 was not killed in the
previous step
root@hap-rnd-1a:/etc/haproxy# echo "show sess" | socat
unix-connect:/var/run/haproxy/haproxy1.sock stdio
0x7f267c026a60: proto=unix_stream src=unix:1 fe=GLOBAL be= srv=
ts=00 epoch=0x3 age=0s calls=1 rate=1 cpu=0 lat=0
rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008000h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,240008h,fd=30,ex=] s1=[8,204018h,fd=-1,ex=] exp=
0x7f2628026220: proto=tcpv4 src=10.0.0.3:5288 fe=hap-rnd-1a be=hap-rnd-1a
srv= ts=00 epoch=0x2 age=17s calls=1 rate=0 cpu=0 lat=0
rq[f=c48202h,i=0,an=00h,rx=,wx=,ax=] rp[f=80048202h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,200048h,fd=29,ex=] s1=[8,204058h,fd=-1,ex=] exp=

8. Kill peer session 0x7f2628026220
root@hap-rnd-1a:/etc/haproxy# echo "shutdown session 0x7f2628026220" |
socat unix-connect:/var/run/haproxy/haproxy1.sock stdio

9. Show current sessions - again no effect
root@hap-rnd-1a:/etc/haproxy# echo "show sess" | socat
unix-connect:/var/run/haproxy/haproxy1.sock stdio
0x7f2628026220: proto=tcpv4 src=10.0.0.3:5288 fe=hap-rnd-1a be=hap-rnd-1a
srv= ts=00 epoch=0x2 age=22s calls=1 rate=0 cpu=0 lat=0
rq[f=c48202h,i=0,an=00h,rx=,wx=,ax=] rp[f=80048202h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,200048h,fd=29,ex=] s1=[8,204058h,fd=-1,ex=] exp=
0x7f261c026220: proto=unix_stream src=unix:1 fe=GLOBAL be= srv=
ts=00 epoch=0x4 age=0s calls=1 rate=1 cpu=0 lat=0
rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=]
s0=[8,240008h,fd=30,ex=] s1=[8,204018h,fd=-1,ex=] exp=
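
The same kill-and-recheck cycle can be scripted; a minimal sketch, assuming
the stats socket path used above (the awk extraction of the tcpv4 session
pointer is my own assumption, based on the "show sess" output format shown):

#!/bin/sh
# Repeatedly try to shut down the peer (tcpv4) session and show what remains.
SOCK=/var/run/haproxy/haproxy1.sock
for i in 1 2 3; do
    # pick the address of the first tcpv4 session, i.e. the peer connection
    ADDR=$(echo "show sess" | socat unix-connect:$SOCK stdio \
           | awk '/proto=tcpv4/ { sub(":", "", $1); print $1; exit }')
    [ -n "$ADDR" ] || { echo "no tcpv4 session found"; break; }
    echo "attempt $i: shutdown session $ADDR"
    echo "shutdown session $ADDR" | socat unix-connect:$SOCK stdio
    sleep 2
    echo "show sess" | socat unix-connect:$SOCK stdio
done

On a healthy instance the tcpv4 entry changes address between iterations; in
the broken case above it keeps the same address and its age keeps growing.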

Kind regards,

On Sat, 12 Mar 2022 at 00:20, Willy Tarreau wrote:

> On Fri, Mar 11, 2022 at 10:19:09PM +0100, Maciej Zdeb wrote:
> > Hi Willy,
> >
> > Thank you for such useful info! I've checked the worst HAProxy nodes and
> on
> > every such node all outgoing peers connections are run on the same
> thread:
> (...)
> Indeed. I was pretty sure we were setting them to any thread on creation
> but maybe I'm wrong, I'll have to recheck.
>
> > On one node I was able to rebalance it, but on the node above (and other
> > nodes) I'm not able to shutdown the sessions:
> (...)
> > echo "shutdown session 0x7f0aa402e2c0" | socat
> > unix-connect:/var/run/haproxy.sock stdio
> >
> > echo "show sess 0x7f0aa402e2c0" | socat
> unix-connect:/var/run/haproxy.sock
> > stdio
> > 0x7f0aa402e2c0: [11/Mar/2022:07:17:02.313221] id=0 proto=?
> (...)
>
> That's not expected, another thing I'll have to check.
>
> Thanks for testing. I'll put that on pause for the weekend, though :-)
>
> cheers,
> Willy
>
global
chroot 

Re: CI caching improvement

2022-03-16 Thread Tim Düsterhus

Willy,

On 3/8/22 20:43, Tim Düsterhus wrote:

Yes, my point was about VTest. However, you made me think about a very good
reason for caching haproxy builds as well :-) Very commonly, some VTest
randomly fails; timing etc. is involved. And at the moment, it's impossible
to restart the tests without rebuilding everything. And it sometimes happens
that I click "restart all jobs" up to 2-3 times in a row in order to end


I've looked up that roadmap entry I was thinking about: a "restart this
job" button is apparently planned for Q1 2022.

See https://github.com/github/roadmap/issues/271 ("any individual job").

Caching the HAProxy binary really is something I strongly advise against,
based on my experience with GitHub Actions and CI in general.

I think the restart of the individual job sufficiently solves the issue
of flaky builds (until they are fixed properly).



In one of my repositories I noticed that this button is now there. One 
can now re-run individual jobs and also all failed jobs. See screenshots 
attached.


Best regards
Tim Düsterhus

[PATCH] CI: switch to LibreSSL-3.5.1

2022-03-16 Thread Илья Шипицин
Hello,

As LibreSSL-3.5.1 has been released, let us switch to the most recent release.

thanks,
Ilya
From 7e85be757646d4bd788bfccd74146d317c5595bb Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Wed, 16 Mar 2022 12:10:47 +0500
Subject: [PATCH] CI: github actions: switch to LibreSSL-3.5.1

---
 .github/matrix.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/matrix.py b/.github/matrix.py
index 3c34eaf90..4427b255f 100755
--- a/.github/matrix.py
+++ b/.github/matrix.py
@@ -112,7 +112,7 @@ for CC in ["gcc", "clang"]:
 "OPENSSL_VERSION=1.0.2u",
 "OPENSSL_VERSION=3.0.1",
 "LIBRESSL_VERSION=2.9.2",
-"LIBRESSL_VERSION=3.3.3",
+"LIBRESSL_VERSION=3.5.1",
 "QUICTLS=yes",
 #"BORINGSSL=yes",
 ]:
-- 
2.35.1