RE: frequently reloading haproxy without sleep time results in old haproxy processes never dying

2017-02-07 Thread Pierre Cheynier
Hi, I guess you're using a systemd-based distro. You should have a look at this thread https://www.mail-archive.com/haproxy@formilux.org/msg23867.html. The patches were applied to 1.7, but apparently backported to 1.6.11 and 1.5.19 since. Now I have a clean termination of old processes, no

RE: HAProxy reloads leave old and outdated processes behind

2016-10-14 Thread Pierre Cheynier
Hi Lukas, > I did not mean no-reuseport to work around or "solve" the problem definitely, but rather to see if the problems can still be triggered, since you can reproduce the problem easily. This still happens using snapshot 20161005 with no-reuseport set, a bit less often probably because reload

RE: HAProxy reloads leave old and outdated processes behind

2016-10-18 Thread Pierre Cheynier
Hi, Any updates/findings on this issue? Many thanks, Pierre > From: Pierre Cheynier > To: Lukas Tribus; haproxy@formilux.org > Sent: Friday, October 14, 2016 12:54 PM > Subject: RE: HAProxy reloads lets old and outdated processes >   > Hi Lukas, > > > I

RE: HAProxy reloads leave old and outdated processes behind

2016-10-25 Thread Pierre Cheynier
Hi, I hadn't subscribed to the list and noticed that there were several exchanges on this thread that I hadn't read so far. To share a bit more of our context: * we do not reload every 2ms, this was the setting used to be able to reproduce easily and in a short period of time. Our reload

RE: HAProxy reloads leave old and outdated processes behind

2016-10-21 Thread Pierre Cheynier
Hi Willy, Thanks for your answer and sorry for my delay. First, let's clarify again: we are on a systemd-based OS (CentOS 7), so the reload is done by sending SIGUSR2 to haproxy-systemd-wrapper. Theoretically, this has absolutely no relation to our current issue (if I correctly understand the way the old

RE: HAProxy reloads leave old and outdated processes behind

2016-10-24 Thread Pierre Cheynier
> Same for all of them. Very interesting, SIGUSR2 (12) is set > in SigIgn :-)  One question is "why", but at least we know we > have a workaround consisting of unblocking these signals in > haproxy-systemd-wrapper, as we did in haproxy. > Care to retry with the attached patch? Same behaviour.

RE: HAProxy reloads leave old and outdated processes behind

2016-10-24 Thread Pierre Cheynier
Hi, Sorry, wrong order in the answers. > Yes it has something to do with it because it's the systemd-wrapper which > delivers the signal to the old processes in this mode, while in the normal > mode the processes get the signal directly from the new process. Another > important point is that

RE: HAProxy reloads leave old and outdated processes behind

2016-10-24 Thread Pierre Cheynier
> A solution I use is to delay the next reload in the systemd unit while a > reload is in progress. Unfortunately, even when doing this you can end up in the situation described before, because for systemd a reload is basically just a SIGUSR2 to send. You do not wait for some callback saying "I'm now OK and

RE: HAProxy reloads leave old and outdated processes behind

2016-10-24 Thread Pierre Cheynier
Hi, > Pierre, could you please issue "grep ^Sig /proc/pid/status" for each > wrapper and haproxy process ? I'm interested in seeing SigIgn and > SigBlk particularly. > Sure, here is the output for the following pstree: $ ps fauxww | grep haproxy | grep -v grep root 43135  0.0  0.0  46340 
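The inspection Willy asks for can be reproduced on any Linux machine. A minimal sketch, using `/proc/self` purely for illustration — on a real setup you would substitute the PIDs of each wrapper and haproxy process taken from the pstree:

```shell
# Show the signal mask lines (pending, blocked, ignored, caught) of a
# process. "self" is a stand-in: replace it with the PID of each
# haproxy-systemd-wrapper and haproxy process under investigation.
grep ^Sig /proc/self/status

# SIGUSR2 is signal 12, i.e. bit 11 of the mask: a SigIgn value with
# 0x800 set means the process ignores SIGUSR2, which is the symptom
# discussed in this thread.
```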

HAProxy reloads leave old and outdated processes behind

2016-10-13 Thread Pierre Cheynier
Hi list, I'm experiencing the following behaviour: I'm on 1.6.8 (same behaviour in 1.4/1.5), use systemd, and noticed that when reloads are relatively frequent, old processes sometimes never die and stay bound to the TCP socket(s), thanks to SO_REUSEPORT. Here is an example of process tree:

RE: [RFC][PATCHES] seamless reload

2017-05-04 Thread Pierre Cheynier
Hi Olivier, Many thanks for that! As you know, we are very interested in this topic. We'll test your patches soon for sure. Pierre

Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
On 05/01/2018 16:44, William Lallemand wrote: > I'm able to reproduce, looks like it happens with the nbthread parameter only, Exactly, I observe the same. At least I have a workaround for now to perform the upgrade. > I'll try to find the problem in the code. > Thanks! Pierre

Cache & ACLs issue

2018-01-09 Thread Pierre Cheynier
I'm experimenting with the small-objects cache feature in 1.8; maybe I'm doing something obviously wrong, but I can't see what... Here is my setup: (...) cache static_assets total-max-size 100 max-age 60 (...) frontend fe_main # HTTP(S) Service     bind *:80 name http     acl
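For readers hitting the same wall: a minimal 1.8 small-objects cache setup looks roughly like the sketch below. Backend and server names are illustrative, not taken from the elided config above; both the cache-use and cache-store rules are needed for objects to be stored and then served.

```
# Declare the cache (total-max-size in MB, max-age in seconds).
cache static_assets
    total-max-size 100
    max-age 60

frontend fe_main
    bind *:80 name http
    default_backend be_app

backend be_app
    # Lookup in the cache on the request side, store eligible
    # responses on the response side.
    http-request cache-use static_assets
    http-response cache-store static_assets
    server app1 192.0.2.10:8080
```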

mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
Hi list, We've recently tried to upgrade from 1.8.0 to 1.8.1, then 1.8.2, 1.8.3 on a preprod environment and noticed that the reload is not so seamless since 1.8.1 (we easily get TCP RSTs while reloading). After a quick look at the haproxy-1.8 git remote for the changes affecting haproxy.c,

Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
> Hi, > >>> $ cat /usr/lib/systemd/system/haproxy.service >>> [Unit] >>> Description=HAProxy Load Balancer >>> After=syslog.target network.target >>> >>> [Service] >>> EnvironmentFile=/etc/sysconfig/haproxy >>> ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q >>> ExecStart=/usr/sbin/haproxy -W -f

Re: mworker: seamless reloads broken since 1.8.1

2018-01-05 Thread Pierre Cheynier
>> Hi, >> >>> Your systemd configuration is not uptodate. >>> >>> Please: >>> - make sure haproxy is compiled with USE_SYSTEMD=1 >>> - update the unit file: start haproxy with -Ws instead of -W (ExecStart) >>> - update the unit file: use Type=notify instead of Type=forking >> In fact that should
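Put together, a unit file following this advice would look roughly like the sketch below. The paths and EnvironmentFile are carried over from the quoted unit; treat the rest as an assumption based on the unit shipped in the HAProxy tree, not as the poster's exact final file.

```
[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
# -Ws = master-worker mode with systemd notification;
# requires a build with USE_SYSTEMD=1.
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
Type=notify
Restart=always

[Install]
WantedBy=multi-user.target
```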

Re: mworker: seamless reloads broken since 1.8.1

2018-01-08 Thread Pierre Cheynier
Hi, On 08/01/2018 10:24, Lukas Tribus wrote: > > FYI there is a report on discourse mentioning this problem, and the > poster appears to be able to reproduce the problem without the nbthread > parameter as well: > > https://discourse.haproxy.org/t/seamless-reloads-dont-work-with-systemd/1954 > > >

Re: mworker: seamless reloads broken since 1.8.1

2018-01-17 Thread Pierre Cheynier
Hi, On 08/01/2018 14:32, Pierre Cheynier wrote: > I retried this morning, I confirm that on 1.8.3, using (...) > I get RSTs (not seamless reloads) when I introduce the global/nbthread > X, after a systemctl haproxy restart. Any news on that? I saw one mworker commit ("execvp fai

Re: Warnings when using dynamic cookies and server-template

2018-01-17 Thread Pierre Cheynier
On 17/01/2018 15:56, Olivier Houchard wrote: > >> So, as a conclusion, I'm just not sure that producing this warning is >> relevant in case the IP is duplicated for several servers *if they are >> disabled*... > Or maybe we should just advocate using 0.0.0.0 when we mean "no IP" :) Not sure about

Warnings when using dynamic cookies and server-template

2018-01-15 Thread Pierre Cheynier
Hello, We started to use the server-template approach, in which you basically provision servers in backends in a "check disabled" state, then re-enable them using the Runtime API. I recently noticed that when used with dynamic cookies, we end up getting these warnings: haproxy.c:149    
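The provisioning pattern described here can be sketched as follows (backend and server names are illustrative, not from the poster's config):

```
backend be_app
    # Pre-allocate 10 server slots with a placeholder address; checks
    # are configured but disabled until a real address is assigned.
    server-template srv 10 0.0.0.0:80 check disabled

# Later, over the Runtime API (stats socket), something like:
#   set server be_app/srv1 addr 192.0.2.10 port 8080
#   set server be_app/srv1 state ready
```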

Re: Cache & ACLs issue

2018-01-15 Thread Pierre Cheynier
(...) when it happens. Regards, Pierre On 09/01/2018 19:37, Pierre Cheynier wrote: > I'm experimenting the small objects cache feature in 1.8, maybe I'm > doing something obviously wrong, but I don't get what... > > Here is my setup: > > (...) > > cache static_assets &g

Re: Warnings when using dynamic cookies and server-template

2018-01-16 Thread Pierre Cheynier
Hi Olivier, On 16/01/2018 15:43, Olivier Houchard wrote: > I'm not so sure about this. > It won't be checked again when server are enabled, so you won't get the > warning if it's still the case. > You shouldn't get those warnings unless multiple servers have the same IP, > though. What does your

Re: Warnings when using dynamic cookies and server-template

2018-01-17 Thread Pierre Cheynier
Hi, On 16/01/2018 18:48, Olivier Houchard wrote: > > Not really :) That's not a case I thought of. > The attached patch disables the generation of the dynamic cookie if the IP > is 0.0.0.0 or ::, so that it only gets generated when the server gets a real > IP. Is it OK with you? I'm not sure

Re: mworker: seamless reloads broken since 1.8.1

2018-01-24 Thread Pierre Cheynier
On 23/01/2018 19:29, Willy Tarreau wrote: > Pierre, please give a try to the latest 1.8 branch or the next nightly > snapshot tomorrow morning. It addresses the aforementioned issue, and > I hope it's the same you're facing. > > Cheers, > Willy Willy, I confirm that it works well again running

Different health-check URI per server : how would you do that ?

2018-01-24 Thread Pierre Cheynier
Hi, We have a use-case in which the health-check URI depends on the server name (be reassured, only the health-check :) ). It would be something like: backend be_testmode http[...] option httpchk GET /check HTTP/1.1\r\nHost: test.tld default-server inter 3s fall 3 rise 2   server srv01
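Since `option httpchk` is backend-wide, one workaround sketch (an assumption on my part, not a suggestion from this thread) is a dedicated check-only backend per URI, with the traffic-carrying server inheriting its check state via `track`:

```
# One tiny backend per health-check URI, used only for checking.
backend be_check_srv01
    option httpchk GET /check/srv01 HTTP/1.1\r\nHost:\ test.tld
    server chk01 10.0.0.1:8080 check inter 3s fall 3 rise 2

backend be_testmode
    # The real server is never checked itself; it tracks the
    # dedicated check server's status instead.
    server srv01 10.0.0.1:8080 track be_check_srv01/chk01
```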

No enabled listener found and reloads triggered an inconsistent state.

2018-04-04 Thread Pierre Cheynier
Hi there, We had an issue recently, using 1.8.5. For some reason we ended up in the "No enabled listener found" state (I guess the config file was incomplete, being written at that time, something like that). Here are the logs: Apr 03 17:51:49 hostname systemd[1]: Reloaded HAProxy

RE: faster than load-server-state-from-file?

2018-10-03 Thread Pierre Cheynier
Hi Willy, > Not really. Maybe we should see how the state file parser works, because > multiple seconds to parse only 30K lines seems extremely long. I would even say multiple minutes :) > I'm just thinking about a few things. Probably that among these 30K servers, > most of them are in fact

h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-21 Thread Pierre Cheynier
(...) process 1/3 bind *:443 name https_4 ssl crt /etc/haproxy/tls/fe_main process 1/4 alpn http/1.1,h2 bind *:443 name https_5 ssl crt /etc/haproxy/tls/fe_main process 1/5 alpn http/1.1,h2 bind *:443 name https_6 ssl crt /etc/haproxy/tls/fe_main process 1/6 alpn http/1.1,h2 (...) # Nothing specific in the backend (no override of the aforementioned settings). Any idea? Best regards, Pierre Cheynier

RE: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Pierre Cheynier
> You'll notice that in the HTTP/2 case, the stream is closed as you mentioned > (DATA len=0 + ES=1), then HAProxy immediately sends FIN-ACK to the server. > Same for the client just after it forwarded the headers. It never waits for > any > SSE frame. EDIT: in fact, analyzing my capture, I see

RE: h2 + text/event-stream: closed on both sides by FIN/ACK?

2018-09-24 Thread Pierre Cheynier
> Hi Pierre, Hi Willy, > The close on the server side is expected, that's a limitation of the current > design that we're addressing for 1.9 and which is much harder than initially >expected. The reason is that streams are independent in H2 while in H1 the > same stream remains idle and recycled

faster than load-server-state-from-file?

2018-09-21 Thread Pierre Cheynier
(...) Trying to use load-server-state-from-file to prevent sending traffic to KO servers and restoring stats numbers, I feel that it slows down the reload a lot (multiple seconds). Any known hint or alternative? Thanks, Pierre Cheynier

[PATCH] ssl: ability to set TLS 1.3 ciphers using ssl-default-server-ciphersuites

2019-03-21 Thread Pierre Cheynier
Any attempt to set TLS 1.3 ciphers on servers failed with the output 'unable to set TLS 1.3 cipher suites'. This was due to the use of SSL_CTX_set_cipher_list instead of SSL_CTX_set_ciphersuites in the TLS 1.3 block (protected by OPENSSL_VERSION_NUMBER >= 0x10101000L & so). Signed-off-by:
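For context, the keyword this patch fixes is used as in the sketch below (the cipher strings are illustrative only):

```
global
    # TLS 1.3 suites go through OpenSSL 1.1.1's dedicated API
    # (SSL_CTX_set_ciphersuites), hence a dedicated keyword:
    ssl-default-server-ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
    # Pre-TLS-1.3 ciphers keep using the historical keyword
    # (SSL_CTX_set_cipher_list underneath):
    ssl-default-server-ciphers ECDHE+AESGCM:ECDHE+AES
```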

native prometheus exporter: retrieving check_status

2019-11-15 Thread Pierre Cheynier
Hi list, We've recently tried to switch to the native prometheus exporter, but were quickly stopped in our initiative given the output on one of our preprod servers: $ wc -l metrics.out 1478543 metrics.out $ ls -lh metrics.out -rw-r--r-- 1 pierre pierre 130M nov. 15 15:33 metrics.out This is
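For reference, a minimal exporter setup looks like the sketch below; the `scope` query-string filter from the exporter's README (and the `no-maint` filter discussed later in this thread) can trim the per-server explosion described here. Port and frontend name are illustrative.

```
frontend prometheus
    bind *:8404
    # Requires a build with the contrib/prometheus-exporter service.
    http-request use-service prometheus-exporter if { path /metrics }
    # e.g. GET /metrics?scope=global&scope=frontend&scope=backend
    # skips the (huge) per-server series.
```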

RE: native prometheus exporter: retrieving check_status

2019-11-20 Thread Pierre Cheynier
> Ok, so it is a new kind of metric. I mean, not exposed by HAProxy. It would > require an extra loop over all servers for each backend. It is probably doable > for > the check_status. For the code, I don't know, because it is not exclusive to > HTTP checks. It is also used for SMTP and LDAP

RE: native prometheus exporter: retrieving check_status

2019-11-20 Thread Pierre Cheynier
>> My only fear for this point would be to make the code too complicated >> and harder to maintain. >> > > And slow down the exporter execution. Moreover, everyone will have a > different > opinion on how to aggregate the stats. My first idea was to sum all servers > counters. But Pierre's

RE: native prometheus exporter: retrieving check_status

2019-11-19 Thread Pierre Cheynier
> Hi Pierre, Hi! > I addressed this issue based on William's idea. I also proposed to add a > filter to exclude all servers in maintenance from the export. Let me know if > you > see a better way to do so. For the moment, from the exporter point of view, > it > is not really hard to do

[PATCH] DOC: Add missing stats fields in the management doc

2020-10-08 Thread Pierre Cheynier
Added latest fields: idle_conn_cur, safe_conn_cur, used_conn_cur, need_conn_est --- doc/management.txt | 4 1 file changed, 4 insertions(+) diff --git a/doc/management.txt b/doc/management.txt index eef05b0fc..9fd7e6c03 100644 --- a/doc/management.txt +++ b/doc/management.txt @@ -1127,6

RE: Logging using %HP (path) produce different results with H1 and H2

2020-08-25 Thread Pierre Cheynier
Hi Willy, On Tue, Aug 25, 2020 at 14:53:05PM +0200, Willy Tarreau wrote: > Thus an HTTP/2 request effectively "looks like" an HTTP/1 request using > an absolute URI. What causes the mess in the logs is that such HTTP/1 > requests are rarely used (most only for proxies), but they are perfectly >

Logging using %HP (path) produce different results with H1 and H2

2020-08-21 Thread Pierre Cheynier
Hi list, We're running HAProxy 2.2.2. It turns out that logging request paths using the "%HP" variable produces different results on H1 vs. H2. H1: /path H2: https://hostname.domain/path (< I consider this one buggy) No idea where this comes from exactly; I essentially understand the txn->uri structure
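One workaround sketch while the two protocols differ (my assumption, not advice from this thread): capture the `path` sample fetch into a transaction variable at request time and log that instead of %HP, which yields the origin-form path under both H1 and H2. The frontend name and log-format line are illustrative.

```
frontend fe_main
    # Evaluate "path" while the request is being processed...
    http-request set-var(txn.path) path
    # ...then reference the variable from the log line (%{+Q} quotes it).
    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %{+Q}[var(txn.path)]"
```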

RE: Logging using %HP (path) produce different results with H1 and H2

2020-08-24 Thread Pierre Cheynier
On Fri, Aug 21, 2020 at 8:11 PM William Dauchy wrote: So awesome to get the first response from your direct colleague :) > I believe this is expected; this behaviour has changed since v2.1 though. Indeed, we haven't used this logging variable for a long time, so I'm not really able to confirm

[PATCH] CLEANUP: contrib/prometheus-exporter: typo fixes for ssl reuse metric

2020-07-07 Thread Pierre Cheynier
A typo I identified while having a look at our metric inventory. --- contrib/prometheus-exporter/README | 2 +- contrib/prometheus-exporter/service-prometheus.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/prometheus-exporter/README

RE: [PATCH 1/9] MAJOR: contrib/prometheus-exporter: move health check status to labels

2021-02-01 Thread Pierre Cheynier
From: William Dauchy Sent: Saturday, January 30, 2021 16:21 > this is a follow up of commit c6464591a365bfcf509b322bdaa4d608c9395d75 > ("MAJOR: contrib/prometheus-exporter: move ftd/bkd/srv states to > labels"). The main goal being to be better aligned with prometheus use > cases in terms of

Should server crt be considered as crt-list and handled via the runtime API?

2021-02-08 Thread Pierre Cheynier
I'm trying to figure out what would be missing to treat server crt-s as crt-lists (as on bind lines) so that they could be listed via the "show ssl crt-list" APIs and also managed (essentially renewed) this way. Example: backend foo-using-client-auth default-server check ssl crt

RE: Should server crt be considered as crt-list and handled via the runtime API?

2021-02-08 Thread Pierre Cheynier
Hi William! On Mon, Feb 08 2021 15:49:02 +0100, William Lallemand wrote: > Thanks to Rémi development we already have the server crt update > available from the CLI in the 2.4 tree. Wow, this proves I haven't been following closely what's currently happening... Awesome, thanks! > I'm not sure why