On Mon, Feb 08 2021 15:49:02 +0100, William Lallemand wrote:
> Thanks to Rémi development we already have the server crt update
> available from the CLI in the 2.4 tree.
Wow, this proves that I haven't been following closely what's currently happening...
> I'm not sure why
I'm trying to figure out what would be missing to consider server crt-s as
crt-lists (as in bind lines) so that they could be listed via "show ssl
crt-list" APIs and also managed (essentially renewed) this way.
default-server check ssl crt
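For reference, the crt-list format used on bind lines looks roughly like this (the paths and SNI filter below are made up for illustration):

```
# hypothetical /etc/haproxy/crt-list.txt
/etc/haproxy/tls/site1.pem
/etc/haproxy/tls/site2.pem [alpn h2,http/1.1] example.org
```

Entries managed this way show up in "show ssl crt-list" and can be renewed over the CLI with "set ssl cert" followed by "commit ssl cert".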
From: William Dauchy
Sent: Saturday, January 30, 2021 16:21
> this is a follow up of commit c6464591a365bfcf509b322bdaa4d608c9395d75
> ("MAJOR: contrib/prometheus-exporter: move ftd/bkd/srv states to
> labels"). The main goal being to be better aligned with prometheus use
> cases in terms of
Added latest fields: idle_conn_cur, safe_conn_cur, used_conn_cur, need_conn_est
doc/management.txt | 4
1 file changed, 4 insertions(+)
diff --git a/doc/management.txt b/doc/management.txt
index eef05b0fc..9fd7e6c03 100644
On Tue, Aug 25, 2020 at 02:53:05PM +0200, Willy Tarreau wrote:
> Thus an HTTP/2 request effectively "looks like" an HTTP/1 request using
> an absolute URI. What causes the mess in the logs is that such HTTP/1
> requests are rarely used (most only for proxies), but they are perfectly
On Fri, Aug 21, 2020 at 8:11 PM William Dauchy wrote:
So awesome to get the first response from your direct colleague :)
> I believe this is expected; this behaviour has changed since v2.1 though.
Indeed, we haven't used this logging variable for a long time, so I'm not really
able to confirm
We're running HAProxy 2.2.2.
It turns out that logging request paths using the "%HP" variable produces
different results on H1 vs. H2.
H2: https://hostname.domain/path (< I consider this one buggy)
No idea where this comes from exactly; I essentially understand txn->uri
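As a possible workaround while H1 and H2 disagree on %HP, the path component could be logged via the path sample fetch instead; a minimal sketch (the surrounding log-format string is made up, only %[path] is the point here):

```
frontend fe_main
    # %[path] should log only the path component, regardless of whether
    # the request arrived in origin-form (H1) or absolute-form (H2)
    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %[path]"
```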
A typo I identified while having a look at our metric inventory.
contrib/prometheus-exporter/README | 2 +-
contrib/prometheus-exporter/service-prometheus.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/contrib/prometheus-exporter/README
>> My only fear for this point would be to make the code too complicated
>> and harder to maintain.
> And slow down the exporter execution. Moreover, everyone will have an
> opinion on how to aggregate the stats. My first idea was to sum all server
> counters. But Pierre's
> Ok, so it is a new kind of metric. I mean, not exposed by HAProxy. It would
> require an extra loop on all servers for each backend. It is probably doable
> the check_status. For the code, I don't know, because it is not exclusive to
> HTTP checks. It is also used for SMTP and LDAP
> Hi Pierre,
> I addressed this issue based on an idea from William. I also proposed to add a
> filter to exclude all servers in maintenance from the export. Let me know if you
> see a better way to do so. For the moment, from the exporter point of view, it
> is not really hard to do
We've recently tried to switch to the native prometheus exporter, but were
quickly stopped in our initiative given the output on one of our preprod servers:
$ wc -l metrics.out
$ ls -lh metrics.out
-rw-r--r-- 1 pierre pierre 130M Nov 15 15:33 metrics.out
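For context, the exporter in question is enabled with a plain use-service rule; a sketch of a typical setup (the frontend name and port are made up):

```
frontend prometheus
    bind *:8404
    http-request use-service prometheus-exporter if { path /metrics }
```

Recent versions also accept URL parameters on the scrape request to cut the output down (e.g. restricting the scope, or the maintenance filter discussed above); the exact parameter names depend on the HAProxy version.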
Any attempt to put TLS 1.3 ciphers on servers failed with output 'unable
to set TLS 1.3 cipher suites'.
This was due to usage of SSL_CTX_set_cipher_list instead of
SSL_CTX_set_ciphersuites in the TLS 1.3 block (protected by
OPENSSL_VERSION_NUMBER >= 0x10101000L and so on).
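The distinction also exists at the configuration level: TLS <= 1.2 ciphers and TLS 1.3 ciphersuites are set by separate keywords, mirroring the two OpenSSL calls (the cipher strings below are illustrative, not a recommendation):

```
global
    # maps to SSL_CTX_set_cipher_list() — TLS 1.2 and below
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
    # maps to SSL_CTX_set_ciphersuites() — TLS 1.3 only (OpenSSL >= 1.1.1)
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384
```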
> Not really. Maybe we should see how the state file parser works, because
> multiple seconds to parse only 30K lines seems extremely long.
I would even say multiple minutes :)
> I'm just thinking about a few things. Probably that among these 30K servers,
> most of them are in fact
> Hi Pierre,
> The close on the server side is expected, that's a limitation of the current
> design that we're addressing for 1.9 and which is much harder than initially
> expected. The reason is that streams are independent in H2 while in H1 the
> same stream remains idle and recycled
> You'll notice that in the HTTP/2 case, the stream is closed as you mentioned
> (DATA len=0 + ES=1), then HAProxy immediately sends a FIN-ACK to the server.
> Same for the client, just after it forwarded the headers. It never waits for
> the SSE frame.
EDIT: in fact, analyzing my capture, I see
Trying to use load-server-state-from-file to prevent sending traffic to KO
servers and restoring stats numbers, I feel that it slows down the reload a lot.
Any known hint or alternative?
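For reference, a minimal state-file setup looks like this (the paths below are made up); the file itself is produced by dumping the runtime state to the configured path before each reload:

```
global
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global
```

The dump itself is typically done just before reloading, e.g.:
echo "show servers state" | socat stdio /var/run/haproxy.sock > /var/lib/haproxy/server-state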
ttp_3 process 1/3
bind *:443 name https_4 ssl crt /etc/haproxy/tls/fe_main process 1/4 alpn
bind *:443 name https_5 ssl crt /etc/haproxy/tls/fe_main process 1/5 alpn
bind *:443 name https_6 ssl crt /etc/haproxy/tls/fe_main process 1/6 alpn
# Nothing specific in the backend (no override of the aforementioned settings).
We had an issue recently, using 1.8.5. For some reason we ended up entering
the "No enabled listener found" state (I guess the config file was incomplete,
being written at that time, something like that).
Here are the logs:
Apr 03 17:51:49 hostname systemd: Reloaded HAProxy
On 23/01/2018 19:29, Willy Tarreau wrote:
> Pierre, please give a try to the latest 1.8 branch or the next nightly
> snapshot tomorrow morning. It addresses the aforementioned issue, and
> I hope it's the same you're facing.
Willy, I confirm that it works well again running
We have a use case in which the health-check URI depends on the
server name (rest assured, only the health check :) ).
It would be something like:
backend be_testmode
    mode http
    [...]
    option httpchk GET /check HTTP/1.1\r\nHost:\ test.tld
    default-server inter 3s fall 3 rise 2
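On recent versions (2.2+), the same check can be expressed with "http-check send", which splits the method, URI, version and headers into explicit arguments; a sketch of the equivalent of the inline httpchk form:

```
backend be_testmode
    mode http
    option httpchk
    # 2.2+ equivalent of: option httpchk GET /check HTTP/1.1\r\nHost:\ test.tld
    http-check send meth GET uri /check ver HTTP/1.1 hdr Host test.tld
    default-server inter 3s fall 3 rise 2
```

Header values in "http-check send" accept log-format expressions in these versions, so a Host header varying per server is in principle possible, though the exact expression is not shown here.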
On 08/01/2018 14:32, Pierre Cheynier wrote:
> I retried this morning, I confirm that on 1.8.3, using
> I get RSTs (not seamless reloads) when I introduce the global/nbthread
> X, after a systemctl haproxy restart.
Any news on that?
I saw one mworker commit ("execvp fai
On 17/01/2018 15:56, Olivier Houchard wrote:
>> So, as a conclusion, I'm just not sure that producing this warning is
>> relevant in case the IP is duplicated for several servers *if they are
> Or maybe we should just advocate using 0.0.0.0 when we mean "no IP" :)
Not sure about
On 16/01/2018 18:48, Olivier Houchard wrote:
> Not really :) That's not a case I thought of.
> The attached patch disables the generation of the dynamic cookie if the IP
> is 0.0.0.0 or ::, so that it only gets generated when the server gets a real
> IP. Is it OK with you?
I'm not sure
On 16/01/2018 15:43, Olivier Houchard wrote:
> I'm not so sure about this.
> It won't be checked again when servers are enabled, so you won't get the
> warning if it's still the case.
> You shouldn't get those warnings unless multiple servers have the same IP,
> though. What does your
We started to use the server-template approach in which you basically
provision servers in backends using a "check disabled" state, then
re-enabling them using the Runtime API.
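For reference, the pattern described is roughly the following (the backend name and slot count are made up):

```
backend be_app
    dynamic-cookie-key some-secret-phrase
    cookie SRVID insert dynamic
    # slots are provisioned disabled, then filled and enabled at runtime
    server-template srv 1-10 0.0.0.0:80 check disabled
```

Slots are then populated over the Runtime API, e.g. "set server be_app/srv1 addr 10.0.0.1 port 8080" followed by "set server be_app/srv1 state ready".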
I recently noticed that when used with dynamic cookies, we end up
getting these warnings:
Not sure exactly when it happens.
On 09/01/2018 19:37, Pierre Cheynier wrote:
> I'm experimenting with the small-objects cache feature in 1.8; maybe I'm
> doing something obviously wrong, but I don't get what...
> Here is my setup:
> cache static_assets
I'm experimenting with the small-objects cache feature in 1.8; maybe I'm
doing something obviously wrong, but I don't get what...
Here is my setup:
frontend fe_main # HTTP(S) Service
bind *:80 name http
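A cache definition alone is not enough; the use/store directives must also be wired into a proxy. A minimal sketch (sizes and names other than static_assets are made up):

```
cache static_assets
    total-max-size 64   # MB
    max-age 60          # seconds

backend bk_static
    http-request cache-use static_assets
    http-response cache-store static_assets
    server s1 127.0.0.1:8080
```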
On 08/01/2018 10:24, Lukas Tribus wrote:
> FYI there is a report on discourse mentioning this problem, and the
> poster appears to be able to reproduce the problem without nbthread
> parameter as well:
On 05/01/2018 16:44, William Lallemand wrote:
> I'm able to reproduce, looks like it happens with the nbthread parameter only,
Exactly, I observe the same.
At least I have a workaround for now to perform the upgrade.
> I'll try to find the problem in the code.
>>> Your systemd configuration is not up to date.
>>> - make sure haproxy is compiled with USE_SYSTEMD=1
>>> - update the unit file: start haproxy with -Ws instead of -W (ExecStart)
>>> - update the unit file: use Type=notify instead of Type=forking
>> In fact that should
>>> $ cat /usr/lib/systemd/system/haproxy.service
>>> Description=HAProxy Load Balancer
>>> After=syslog.target network.target
>>> ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
>>> ExecStart=/usr/sbin/haproxy -W -f
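Putting the three points above together, the updated unit would look roughly like this (the Environment line and KillMode are assumptions based on common haproxy packaging, not taken from the excerpt):

```
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target

[Service]
Type=notify
Environment="CONFIG=/etc/haproxy/haproxy.cfg"
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
```

With Type=notify and -Ws, systemd only considers the reload complete once the master process reports readiness, instead of fire-and-forget signalling.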
We've recently tried to upgrade from 1.8.0 to 1.8.1, then 1.8.2, 1.8.3
on a preprod environment and noticed that the reload is not so seamless
since 1.8.1 (easily getting TCP RSTs while reloading).
Having a quick look at the haproxy-1.8 git remote and at the changes
Many thanks for that! As you know, we are very interested in this topic.
We'll test your patches soon for sure.
I guess you're using a systemd-based distro. You should have a look at this
The patches were applied to 1.7, but apparently backported to 1.6.11 and 1.5.19
Now I have a clean termination of old processes, no
I hadn't subscribed to the list and noticed that there were several exchanges on
this thread that I hadn't read so far.
To share a bit more of our context:
* we do not reload every 2ms, this was the setting used to be able to reproduce
easily and in a short period of time. Our reload
> A solution I use is to delay next reload in systemd unit until a
> reload is in progress.
Unfortunately, even when doing this you can end up in the situation described
before, because for systemd a reload is basically a SIGUSR2 to send. You do not
wait for some callback saying "I'm now OK and
> Same for all of them. Very interesting, SIGUSR2 (12) is set
> in SigIgn :-) One question is "why", but at least we know we
> have a workaround consisting of unblocking these signals in
> haproxy-systemd-wrapper, as we did in haproxy.
> Care to retry with the attached patch?
Sorry, wrong order in the answers.
> Yes it has something to do with it because it's the systemd-wrapper which
> delivers the signal to the old processes in this mode, while in the normal
> mode the processes get the signal directly from the new process. Another
> important point is that
> Pierre, could you please issue "grep ^Sig /proc/pid/status" for each
> wrapper and haproxy process ? I'm interested in seeing SigIgn and
> SigBlk particularly.
Sure, here is the output for the following pstree:
$ ps fauxww | grep haproxy | grep -v grep
root 43135 0.0 0.0 46340
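The masks can be inspected for any PID; a self-contained example against the current shell (substitute the wrapper/haproxy PIDs in practice):

```shell
# Print the signal-mask lines for a process; /proc/self is used here so
# the example runs standalone — replace with /proc/<pid>/status as needed.
grep ^Sig /proc/self/status
# SigBlk / SigIgn are hexadecimal bitmasks; SIGUSR2 is signal 12, which
# corresponds to bit 11, so a set 0x800 bit means SIGUSR2 is blocked/ignored.
```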
Thanks for your answer, and sorry for the delay.
First, let's clarify again: we are on a systemd-based OS (CentOS 7), so the
reload is done by sending SIGUSR2 to haproxy-systemd-wrapper.
Theoretically, this has absolutely no relation to our current issue (if I
understand correctly the way the old
Any updates/findings on that issue?
> From : Pierre Cheynier
> To: Lukas Tribus; email@example.com
> Sent: Friday, October 14, 2016 12:54 PM
> Subject: RE: HAProxy reloads lets old and outdated processes
> Hi Lukas,
> > I
> I did not mean no-reuseport to work around or "solve" the problem
> definitively, but rather to see if the problems can still be triggered,
> since you can reproduce the problem easily.
This still happens using snapshot 20161005 with no-reuseport set, a bit less
often, probably because reload
I'm experiencing the following behaviour: I'm on 1.6.8 (same behaviour in
1.4/1.5), use systemd, and noticed that when reloads are relatively frequent,
old processes sometimes never die and stay bound to the TCP socket(s), thanks
Here is an example of process tree: