Re: remaining process after (seamless) reload

2018-06-22 Thread William Dauchy
On Fri, Jun 22, 2018 at 2:28 PM William Lallemand wrote:
> The seamless reload only transfers the listeners, the remaining connections are
> handled in the leaving process. It would be complicated to handle them in a new
> process with a different configuration.

Thanks for the clarification. I am pushing this version on some more machines.

-- 
William



Re: remaining process after (seamless) reload

2018-06-22 Thread Willy Tarreau
On Fri, Jun 22, 2018 at 02:41:44PM +0200, Christopher Faulet wrote:
> You're right, removing the thread from all_threads_mask when it exits is a
> good idea. Here a patch to do so.

merged as well, thanks!
Willy



Re: remaining process after (seamless) reload

2018-06-22 Thread Christopher Faulet

Le 20/06/2018 à 18:29, Willy Tarreau a écrit :

On Wed, Jun 20, 2018 at 04:42:58PM +0200, Christopher Faulet wrote:

When HAProxy is shutting down, it exits the polling loop when there are no jobs
anymore (jobs == 0). When there is no thread, it works pretty well, but when
HAProxy is started with several threads, a thread can decide to exit because the
jobs variable reached 0 while another one is processing a task (e.g. a
health-check). At this stage, the running thread could decide to request a
synchronization. But because at least one of them has already gone, the others
will wait infinitely in the sync point and the process will never die.


Just a comment on this last sentence, I think this is the root cause of the
problem: a thread quits and its thread_mask bit doesn't disappear. In my
opinion, if we're looping, it's precisely because there's no way, by looking
at all_threads_mask, to tell that some threads are missing. Thus I think that a
more reliable long-term solution would require that each exiting thread does at
least "all_threads_mask &= ~tid_bit".

Now I have no idea whether or not the current sync point code is compatible
with this, nor whether this will be sufficient, but I'm pretty sure that over
time we'll have to go this way to fix this inconsistency.



Hi Willy,

You're right, removing the thread from all_threads_mask when it exits is 
a good idea. Here a patch to do so.


Thanks,
--
Christopher Faulet
From cf238c63d4db1567756d388c7473122451adef17 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Thu, 21 Jun 2018 09:57:39 +0200
Subject: [PATCH] MINOR: threads: Be sure to remove threads from
 all_threads_mask on exit

When HAProxy is started with several threads, each running thread holds a bit in
the bitfield all_threads_mask. This bitfield is used here and there to check
which threads are registered to take part in a specific processing. So when a
thread exits, it seems normal to remove it from all_threads_mask.
---
 src/haproxy.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/haproxy.c b/src/haproxy.c
index 5906f7989..11d1d47ce 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2473,6 +2473,8 @@ static void *run_thread_poll_loop(void *data)
 	list_for_each_entry(ptdf, &per_thread_deinit_list, list)
 		ptdf->fct();
 
+	HA_ATOMIC_AND(&all_threads_mask, ~tid_bit);
+
 #ifdef USE_THREAD
 	if (tid > 0)
 		pthread_exit(NULL);
-- 
2.17.1



Re: remaining process after (seamless) reload

2018-06-22 Thread William Lallemand
On Fri, Jun 22, 2018 at 12:03:22PM +0200, William Dauchy wrote:
> On Thu, Jun 21, 2018 at 5:21 PM William Lallemand wrote:
> > Once you are sure this is not a bug and that a client is still connected, you
> > could use the keyword 'hard-stop-after' to force a hard stop.
> 
> After double checking some cases, indeed there are still a few
> remaining established connections. I however was expecting them to be
> transferred to the new process through the seamless reload, but I am
> probably mixing things up.
> Apart from that, no other issues seen so far.
> 

The seamless reload only transfers the listeners, the remaining connections are
handled in the leaving process. It would be complicated to handle them in a new
process with a different configuration.
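
For reference, the seamless part only hands the listening sockets over through
the stats socket; an illustrative setup (paths and PIDs are just examples, as in
the ps output seen earlier in this thread) looks like:

    # haproxy.cfg
    global
        stats socket /var/lib/haproxy/stats mode 600 level admin expose-fd listeners

    # reload as issued by the -Ws master-worker mode
    haproxy -Ws -f /etc/haproxy/haproxy.cfg -sf <old pids> -x /var/lib/haproxy/stats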

Thanks for the feedback!

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-22 Thread William Dauchy
On Thu, Jun 21, 2018 at 5:21 PM William Lallemand wrote:
> Once you are sure this is not a bug and that a client is still connected, you
> could use the keyword 'hard-stop-after' to force a hard stop.

After double checking some cases, indeed there are still a few
remaining established connections. I however was expecting them to be
transferred to the new process through the seamless reload, but I am
probably mixing things up.
Apart from that, no other issues seen so far.
-- 
William



Re: remaining process after (seamless) reload

2018-06-22 Thread Willy Tarreau
On Thu, Jun 21, 2018 at 05:20:19PM +0200, William Lallemand wrote:
> I'm waiting a few days for your feedback and we will probably release a new 1.8
> version including those fixes.

I've merged it in master now.

Willy



Re: remaining process after (seamless) reload

2018-06-21 Thread William Lallemand
On Thu, Jun 21, 2018 at 05:10:35PM +0200, William Dauchy wrote:
> On Thu, Jun 21, 2018 at 5:03 PM William Lallemand wrote:
> > Maybe one client was still connected on a frontend (including the stats socket).
> > The process first unbinds the listeners, and then waits for all clients to leave.
> > It's difficult to see what's going on since the stats socket is unbound at this
> > moment :/
> > But you could try a netstat or ss to see if the listeners are still bound in
> > this process.
> 
> indeed, I will double check those things next time it happens.

Once you are sure this is not a bug and that a client is still connected, you
could use the keyword 'hard-stop-after' to force a hard stop.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-hard-stop-after
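
For illustration, it goes in the global section; the value below is only an
example:

    global
        # force any leaving process to exit at most 30s after the soft stop
        hard-stop-after 30s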

I'm waiting a few days for your feedback and we will probably release a new 1.8
version including those fixes.

> Thanks for the hard work!

Thanks for your tests and useful traces!

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-21 Thread William Dauchy
On Thu, Jun 21, 2018 at 5:03 PM William Lallemand wrote:
> Maybe one client was still connected on a frontend (including the stats socket).
> The process first unbinds the listeners, and then waits for all clients to leave.
> It's difficult to see what's going on since the stats socket is unbound at this
> moment :/
> But you could try a netstat or ss to see if the listeners are still bound in
> this process.

indeed, I will double check those things next time it happens.
Thanks for the hard work!
-- 
William



Re: remaining process after (seamless) reload

2018-06-21 Thread William Lallemand
On Thu, Jun 21, 2018 at 04:47:50PM +0200, William Dauchy wrote:
> Hi Christopher,
> 
> A quick followup from this morning.
> 
> On Thu, Jun 21, 2018 at 10:41 AM William Dauchy  wrote:
> > it seems better now, but not completely gone; in a way, I think we now
> > have a new issue.
> > This morning, on one test machine I have a process which keeps polling
> > traffic.
> 
> so it exited after a while:
> Jun 21 12:04:18 haproxy[745]: [WARNING] 171/120413 (745) : Former
> worker 30845 exited with code 0
> 
> I don't know why it took so much time to exit (~ 5 hours). Any hint?
> 
> Best,

Maybe one client was still connected on a frontend (including the stats socket).
The process first unbinds the listeners, and then waits for all clients to leave.

It's difficult to see what's going on since the stats socket is unbound at this
moment :/
But you could try a netstat or ss to see if the listeners are still bound in
this process.
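
For example, something like this (using the 30845 PID from the ps output earlier
in the thread) would show whether that process still holds listening sockets:

    ss -ltnp | grep 'pid=30845'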


-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-21 Thread William Dauchy
Hi Christopher,

A quick followup from this morning.

On Thu, Jun 21, 2018 at 10:41 AM William Dauchy  wrote:
> it seems better now, but not completely gone; in a way, I think we now
> have a new issue.
> This morning, on one test machine I have a process which keeps polling
> traffic.

so it exited after a while:
Jun 21 12:04:18 haproxy[745]: [WARNING] 171/120413 (745) : Former
worker 30845 exited with code 0

I don't know why it took so much time to exit (~ 5 hours). Any hint?

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-06-21 Thread William Dauchy
Hello Christopher,

Thanks for the followup patch.

On Wed, Jun 20, 2018 at 04:42:58PM +0200, Christopher Faulet wrote:
> Hum, ok, forget the previous patch. Here is a second try. It solves the same
> bug in another way. In this patch, all threads must enter the sync
> point to exit. I hope it will do the trick.

it seems better now, but not completely gone; in a way, I think we now
have a new issue.
This morning, on one test machine I have a process which keeps polling
traffic:

root   745  3.1  1.8 1007252 918760 ?  Ss   Jun20  30:24
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
-sf 41623 39579 30845 -x /var/lib/haproxy/stats
haproxy  30845  4.0  1.9 1277604 949508 ?  Ssl  07:05   3:31  \_
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
-sf 30836 30826 29600 -x /var/lib/haproxy/stats
haproxy  39579 35.0  1.9 1283460 951520 ?  Ssl  08:30   0:43  \_
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
-sf 39568 39534 30845 39439 -x /var/lib/haproxy/stats
haproxy  41623 32.3  1.9 1285932 954988 ?  Ssl  08:32   0:07  \_
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
-sf 39579 30845 -x /var/lib/haproxy/stats
haproxy  44987 58.0  1.9 1282780 950584 ?  Ssl  08:32   0:00  \_
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
-sf 41623 39579 30845 -x /var/lib/haproxy/stats

process 30845 looks weird.

kill -USR1 30845 does nothing

SigQ:   2/192448
SigPnd: 
SigBlk: 0800
SigIgn: 1800
SigCgt: 000180300205

#0  0x7f85e6bee923 in epoll_wait () from /lib64/libc.so.6
#1  0x55c21fd048f5 in _do_poll (p=, exp=) at src/ev_epoll.c:172
#2  0x55c21fd8570b in run_poll_loop () at src/haproxy.c:2432
#3  run_thread_poll_loop (data=data@entry=0x55c2327d3390) at src/haproxy.c:2470
#4  0x55c21fd01856 in main (argc=, argv=) at src/haproxy.c:3072


From c01f5636a0cbe2be18573e455370c4a47f84d59e Mon Sep 17 00:00:00 2001
> From: Christopher Faulet 
> Date: Wed, 20 Jun 2018 16:22:03 +0200
> Subject: [PATCH] BUG/MEDIUM: threads: Use the sync point to check active jobs
>  and exit
>
> When HAProxy is shutting down, it exits the polling loop when there are no jobs
> anymore (jobs == 0). When there is no thread, it works pretty well, but when
> HAProxy is started with several threads, a thread can decide to exit because the
> jobs variable reached 0 while another one is processing a task (e.g. a
> health-check). At this stage, the running thread could decide to request a
> synchronization. But because at least one of them has already gone, the others
> will wait infinitely in the sync point and the process will never die.
>
> To fix the bug, when the first thread (and only this one) detects there are no
> active jobs anymore, it requests a synchronization. And in the sync point, all
> threads will check if the jobs variable reached 0 to exit the polling loop.

it does explain what was going on indeed.
With nbthread = 4, the process was using 300% cpu, which is consistent with
one thread being gone and the others waiting for the sync.

I guess it is a good approach anyway.
Thanks!

-- 
William



Re: remaining process after (seamless) reload

2018-06-20 Thread Willy Tarreau
On Wed, Jun 20, 2018 at 04:42:58PM +0200, Christopher Faulet wrote:
> When HAProxy is shutting down, it exits the polling loop when there are no jobs
> anymore (jobs == 0). When there is no thread, it works pretty well, but when
> HAProxy is started with several threads, a thread can decide to exit because the
> jobs variable reached 0 while another one is processing a task (e.g. a
> health-check). At this stage, the running thread could decide to request a
> synchronization. But because at least one of them has already gone, the others
> will wait infinitely in the sync point and the process will never die.

Just a comment on this last sentence, I think this is the root cause of the
problem: a thread quits and its thread_mask bit doesn't disappear. In my
opinion, if we're looping, it's precisely because there's no way, by looking
at all_threads_mask, to tell that some threads are missing. Thus I think that a
more reliable long-term solution would require that each exiting thread does at
least "all_threads_mask &= ~tid_bit".

Now I have no idea whether or not the current sync point code is compatible
with this, nor whether this will be sufficient, but I'm pretty sure that over
time we'll have to go this way to fix this inconsistency.
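
In practice that would be the atomic form of the statement above, executed by
the exiting thread at the end of its polling loop, something like (sketch of
the idea, which is what the follow-up patch in this thread does):

    HA_ATOMIC_AND(&all_threads_mask, ~tid_bit);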

Willy



Re: remaining process after (seamless) reload

2018-06-20 Thread Christopher Faulet

Le 20/06/2018 à 15:11, William Dauchy a écrit :

Hello Christopher,

Thank you for the quick answer and the patch.

On Wed, Jun 20, 2018 at 11:32 AM Christopher Faulet  wrote:

Here is a patch to prevent a thread from exiting its polling loop while others
are waiting in the sync point. It is a theoretical patch because I was
not able to reproduce the bug.
Could you check if it fixes it, please?


Unfortunately, it does not fix the issue; I am getting the same backtrace:
#0  0x55f30960a10b in thread_sync_barrier (barrier=0x55f309875528
) at src/hathreads.c:114
#1  thread_enter_sync () at src/hathreads.c:127
#2  0x55f3095b27a2 in sync_poll_loop () at src/haproxy.c:2376
#3  run_poll_loop () at src/haproxy.c:2433
#4  run_thread_poll_loop (data=data@entry=0x55f31c4b0c50) at src/haproxy.c:2463
#5  0x55f30952e856 in main (argc=, argv=) at src/haproxy.c:3065

however the backtrace is different on some machines:

#0  0x7f31fd89ff57 in pthread_join () from /lib64/libpthread.so.0
#1  0x55e184a5486a in main (argc=, argv=) at src/haproxy.c:3069



Hum, ok, forget the previous patch. Here is a second try. It solves the
same bug in another way. In this patch, all threads must enter the
sync point to exit. I hope it will do the trick.


--
Christopher Faulet
From c01f5636a0cbe2be18573e455370c4a47f84d59e Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Wed, 20 Jun 2018 16:22:03 +0200
Subject: [PATCH] BUG/MEDIUM: threads: Use the sync point to check active jobs
 and exit

When HAProxy is shutting down, it exits the polling loop when there are no jobs
anymore (jobs == 0). When there is no thread, it works pretty well, but when
HAProxy is started with several threads, a thread can decide to exit because the
jobs variable reached 0 while another one is processing a task (e.g. a
health-check). At this stage, the running thread could decide to request a
synchronization. But because at least one of them has already gone, the others
will wait infinitely in the sync point and the process will never die.

To fix the bug, when the first thread (and only this one) detects there are no
active jobs anymore, it requests a synchronization. And in the sync point, all
threads will check if the jobs variable reached 0 to exit the polling loop.

This patch must be backported in 1.8.
---
 src/haproxy.c | 19 +--
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 1c0455c56..5906f7989 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2372,10 +2372,12 @@ void mworker_pipe_register()
 	fd_want_recv(mworker_pipe[0]);
 }
 
-static void sync_poll_loop()
+static int sync_poll_loop()
 {
+	int stop = 0;
+
 	if (THREAD_NO_SYNC())
-		return;
+		return stop;
 
 	THREAD_ENTER_SYNC();
 
@@ -2389,7 +2391,9 @@ static void sync_poll_loop()
 
 	/* *** } */
   exit:
+	stop = (jobs == 0); /* stop when there's nothing left to do */
 	THREAD_EXIT_SYNC();
+	return stop;
 }
 
 /* Runs the polling loop */
@@ -2410,9 +2414,10 @@ static void run_poll_loop()
 		/* Check if we can expire some tasks */
 		next = wake_expired_tasks();
 
-		/* stop when there's nothing left to do */
-		if (jobs == 0)
-			break;
+		/* the first thread requests a synchronization to exit when
+		 * there is no active jobs anymore */
+		if (tid == 0 && jobs == 0)
+			THREAD_WANT_SYNC();
 
 		/* expire immediately if events are pending */
 		exp = now_ms;
@@ -2431,7 +2436,9 @@ static void run_poll_loop()
 
 
 		/* Synchronize all polling loops */
-		sync_poll_loop();
+		if (sync_poll_loop())
+			break;
+
 		activity[tid].loops++;
 	}
 }
-- 
2.17.1



Re: remaining process after (seamless) reload

2018-06-20 Thread William Dauchy
Hello Christopher,

Thank you for the quick answer and the patch.

On Wed, Jun 20, 2018 at 11:32 AM Christopher Faulet  wrote:
> Here is a patch to prevent a thread from exiting its polling loop while others
> are waiting in the sync point. It is a theoretical patch because I was
> not able to reproduce the bug.
> Could you check if it fixes it, please?

Unfortunately, it does not fix the issue; I am getting the same backtrace:
#0  0x55f30960a10b in thread_sync_barrier (barrier=0x55f309875528
) at src/hathreads.c:114
#1  thread_enter_sync () at src/hathreads.c:127
#2  0x55f3095b27a2 in sync_poll_loop () at src/haproxy.c:2376
#3  run_poll_loop () at src/haproxy.c:2433
#4  run_thread_poll_loop (data=data@entry=0x55f31c4b0c50) at src/haproxy.c:2463
#5  0x55f30952e856 in main (argc=, argv=) at src/haproxy.c:3065

however the backtrace is different on some machines:

#0  0x7f31fd89ff57 in pthread_join () from /lib64/libpthread.so.0
#1  0x55e184a5486a in main (argc=, argv=) at src/haproxy.c:3069

-- 
William



Re: remaining process after (seamless) reload

2018-06-20 Thread Christopher Faulet

Le 19/06/2018 à 16:42, William Dauchy a écrit :

On Tue, Jun 19, 2018 at 4:30 PM William Lallemand wrote:

That's interesting, we can suppose that this bug is no longer related to the
signal problem we had previously.
Looks like it's blocking in the thread sync point.
Are you able to do a backtrace with gdb? That could help a lot.


yes, sorry, forgot to paste it.

#0  0x56269318f0fb in thread_sync_barrier (barrier=0x5626933fa528
) at src/hathreads.c:112
#1  thread_enter_sync () at src/hathreads.c:125
#2  0x5626931377a2 in sync_poll_loop () at src/haproxy.c:2376
#3  run_poll_loop () at src/haproxy.c:2433
#4  run_thread_poll_loop (data=data@entry=0x5626a7aa1000) at src/haproxy.c:2463
#5  0x5626930b3856 in main (argc=, argv=) at src/haproxy.c:3065



Hi William(s),

Here is a patch to prevent a thread from exiting its polling loop while others
are waiting in the sync point. It is a theoretical patch because I was
not able to reproduce the bug.


Could you check if it fixes it, please?

Thanks,
--
Christopher Faulet
From 3576ecdfe108b07c20173d3d82dce5370e796742 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Wed, 20 Jun 2018 11:05:03 +0200
Subject: [PATCH] BUG/MEDIUM: threads: Increase jobs when threads must reach
 the sync point

When a thread wants to pass into the sync point, the jobs number is increased. It's
only done for the first thread requesting the sync point. The jobs number is
decreased when the last thread exits from the sync point.

This is mandatory to prevent a thread from stopping its polling loop while it must
enter the sync point. It is really hard to figure out how to hit the bug. But, in
theory, it is possible for a thread to ask for a synchronization during the
HAProxy shutdown. In this case, we can imagine some threads waiting in the sync
point while others are stopping all jobs (listeners, peers...). So a thread
could exit from its polling loop without passing through the sync point, blocking all
the others in the sync point and finally leaving the process stalled and consuming
all the CPU.

This patch must be backported in 1.8.
---
 src/hathreads.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/hathreads.c b/src/hathreads.c
index 5db3c2197..295a0b304 100644
--- a/src/hathreads.c
+++ b/src/hathreads.c
@@ -71,8 +71,10 @@ void thread_want_sync()
 	if (all_threads_mask) {
 		if (threads_want_sync & tid_bit)
 			return;
-		if (HA_ATOMIC_OR(&threads_want_sync, tid_bit) == tid_bit)
+		if (HA_ATOMIC_OR(&threads_want_sync, tid_bit) == tid_bit) {
+			HA_ATOMIC_ADD(&jobs, 1);
 			shut_your_big_mouth_gcc(write(threads_sync_pipe[1], "S", 1));
+		}
 	}
 	else {
 		threads_want_sync = 1;
@@ -142,6 +144,7 @@ void thread_exit_sync()
 		char c;
 
 		shut_your_big_mouth_gcc(read(threads_sync_pipe[0], &c, 1));
+		HA_ATOMIC_SUB(&jobs, 1);
 		fd_done_recv(threads_sync_pipe[0]);
 	}
 
-- 
2.17.1



Re: remaining process after (seamless) reload

2018-06-19 Thread William Dauchy
On Tue, Jun 19, 2018 at 4:30 PM William Lallemand wrote:
> That's interesting, we can suppose that this bug is no longer related to the
> signal problem we had previously.
> Looks like it's blocking in the thread sync point.
> Are you able to do a backtrace with gdb? That could help a lot.

yes, sorry, forgot to paste it.

#0  0x56269318f0fb in thread_sync_barrier (barrier=0x5626933fa528
) at src/hathreads.c:112
#1  thread_enter_sync () at src/hathreads.c:125
#2  0x5626931377a2 in sync_poll_loop () at src/haproxy.c:2376
#3  run_poll_loop () at src/haproxy.c:2433
#4  run_thread_poll_loop (data=data@entry=0x5626a7aa1000) at src/haproxy.c:2463
#5  0x5626930b3856 in main (argc=, argv=) at src/haproxy.c:3065

-- 
William



Re: remaining process after (seamless) reload

2018-06-19 Thread William Lallemand
On Tue, Jun 19, 2018 at 04:09:51PM +0200, William Dauchy wrote:
> Hello William,
> 
> Not much progress on my side, apart from the fact I forgot to mention
> where the processes are now stuck using all the CPU, in
> src/hathreads.c:112
> while (*barrier != all_threads_mask)
>   pl_cpu_relax();
> 

That's interesting, we can suppose that this bug is no longer related to the
signal problem we had previously.

Looks like it's blocking in the thread sync point.

Are you able to do a backtrace with gdb? That could help a lot.

Thanks,

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-19 Thread William Dauchy
Hello William,

Not much progress on my side, apart from the fact I forgot to mention
where the processes are now stuck using all the CPU, in
src/hathreads.c:112
while (*barrier != all_threads_mask)
  pl_cpu_relax();

-- 
William



Re: remaining process after (seamless) reload

2018-06-15 Thread William Dauchy
Hello,

Thanks for your answer. Here are the information requested.

On Fri, Jun 15, 2018 at 11:22 AM William Lallemand wrote:
> - haproxy -vv

HA-Proxy version 1.8.9-83616ec 2018/05/18
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-unused-label -DTCP_USER_TIMEOUT=18
  OPTIONS = USE_LINUX_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1
USE_REGPARM=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1
USE_TFO=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

> - distribution  w/ glibc version and kernel

CentOS Linux release 7.4.1708 (Core)
glibc:  2.17-196.el7_4.2
kernel: 4.13.16 (I know it is out of support as of today :/)

> - compiler version

gcc centos package: 4.8.5-16.el7_4.2

I will try to come up with more information or reproducer if I can find one.
-- 
William



Re: remaining process after (seamless) reload

2018-06-15 Thread William Lallemand
On Tue, Jun 12, 2018 at 04:56:24PM +0200, William Dauchy wrote:
> On Tue, Jun 12, 2018 at 04:33:43PM +0200, William Lallemand wrote:
> > Those processes are still using a lot of CPU...
> > Are they still delivering traffic?
> 
> they don't seem to handle any traffic (at least I can't see it through strace)
> but that's the main difference here, using lots of CPU.
> 
> > > strace: [ Process PID=14595 runs in x32 mode. ]
> >
> > This part is particularly interesting, I suppose you are not running in 
> > x32, right?
> > I had this problem at some point but was never able to reproduce it...
> 
> yup, I can't really understand where it is coming from. I have this
> issue until I do a complete restart.
> 
> Thanks for your help,

Hey William,

Unfortunately I was not able to reproduce the bug :/, it might be dependent on 
your environment.

Could you share:

- haproxy -vv
- distribution  w/ glibc version and kernel
- compiler version

Thanks,

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-12 Thread William Dauchy
On Tue, Jun 12, 2018 at 04:33:43PM +0200, William Lallemand wrote:
> Those processes are still using a lot of CPU...
> Are they still delivering traffic?

they don't seem to handle any traffic (at least I can't see it through strace)
but that's the main difference here, using lots of CPU.

> > strace: [ Process PID=14595 runs in x32 mode. ]
>
> This part is particularly interesting, I suppose you are not running in x32, 
> right?
> I had this problem at some point but was never able to reproduce it...

yup, I can't really understand where it is coming from. I have this
issue until I do a complete restart.

Thanks for your help,
-- 
William



Re: remaining process after (seamless) reload

2018-06-12 Thread William Lallemand
On Tue, Jun 12, 2018 at 04:00:25PM +0200, William Dauchy wrote:
> Hello William L,
> 
> On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> > That's great news!
> >
> > Here's the new patches. It shouldn't change anything to the fix, it only
> > changes the sigprocmask to pthread_sigmask.
> 
> In fact, I now have a different but similar issue.
> 

:(

> root 18547  3.2  1.3 986660 898844 ?   Ss   Jun08 182:12 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy  14593  299  1.3 1251216 920480 ?  Rsl  Jun11 5882:01  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 14582 14463 -x /var/lib/haproxy/stats
> haproxy  18290  299  1.4 1265028 935288 ?  Ssl  Jun11 3425:51  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 18281 18271 18261 14593 -x /var/lib/haproxy/stats
> haproxy  29663 99.9  1.4 1258024 932796 ?  Ssl  Jun11 1063:08  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 29653 29644 18290 14593 -x /var/lib/haproxy/stats
> haproxy   4203 99.9  1.4 1258804 933216 ?  Ssl  Jun11 1009:27  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 4194 4182 18290 29663 14593 -x /var/lib/haproxy/stats
> haproxy   1445 25.9  1.4 1261680 929516 ?  Ssl  13:51   0:42  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 1436 29663 4203 18290 14593 -x /var/lib/haproxy/stats
> haproxy   1763 18.9  1.4 1260500 931516 ?  Ssl  13:52   0:15  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy   1903 25.0  1.4 1261472 931064 ?  Ssl  13:53   0:14  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy   2063 52.5  1.4 1259568 927916 ?  Ssl  13:53   0:19  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy   2602 62.0  1.4 1262220 928776 ?  Rsl  13:54   0:02  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> 
> 

Those processes are still using a lot of CPU...

> # cat /proc/14593/status | grep Sig
> SigQ:   0/257120
> SigPnd: 
> SigBlk: 0800
> SigIgn: 1800
> SigCgt: 000180300205
> 
> kill -USR1 14593 has no effect:
> 
> # strace - -p 14593
> strace: Process 14593 attached with 3 threads


> strace: [ Process PID=14595 runs in x32 mode. ]

This part is particularly interesting, I suppose you are not running in x32, 
right? 
I had this problem at some point but was never able to reproduce it...

We might find something interesting by looking further..

> [pid 14593] --- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=18547, 
> si_uid=0} ---
> [pid 14593] rt_sigaction(SIGUSR1, {0x558357660020, [USR1], 
> SA_RESTORER|SA_RESTART, 0x7f0e87671270}, {0x558357660020, [USR1], 
> SA_RESTORER|SA_RESTART, 0x7f0e87671270}, 8) = 0
> [pid 14593] rt_sigreturn({mask=[USR2]}) = 7


At least you managed to strace when the process was seen as an x32 one, it 
wasn't my case.

> 
> however, the unix socket is on the correct process:
> 
> # lsof | grep "haproxy/stats" ; ps auxwwf | grep haproxy
> haproxy2602haproxy5u unix 0x880f902e8000   0t0 
> 061798 /var/lib/haproxy/stats.18547.tmp
> haproxy2602  2603  haproxy5u unix 0x880f902e8000   0t0 
> 061798 /var/lib/haproxy/stats.18547.tmp
> haproxy2602  2604  haproxy5u unix 0x880f902e8000   0t0 
> 061798 /var/lib/haproxy/stats.18547.tmp
> haproxy2602  2605  haproxy5u unix 0x880f902e8000   0t0 
> 061798 /var/lib/haproxy/stats.18547.tmp
> 
> So it means it does not cause any issue for the provisioner, which talks
> to the correct process; however, there are remaining processes.

Are they still delivering traffic?


> Should I start a different thread for that issue?
>

That's not necessary, thanks.
 
> it seems harder to reproduce, I got the issue ~2 days after pushing back.
> 
> Thanks,
> 

I'll try to reproduce this again...

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-12 Thread William Dauchy
Hello William L,

On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.

In fact, I now have a different but similar issue.

root 18547  3.2  1.3 986660 898844 ?   Ss   Jun08 182:12 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 
1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy  14593  299  1.3 1251216 920480 ?  Rsl  Jun11 5882:01  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 14582 
14463 -x /var/lib/haproxy/stats
haproxy  18290  299  1.4 1265028 935288 ?  Ssl  Jun11 3425:51  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 18281 
18271 18261 14593 -x /var/lib/haproxy/stats
haproxy  29663 99.9  1.4 1258024 932796 ?  Ssl  Jun11 1063:08  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29653 
29644 18290 14593 -x /var/lib/haproxy/stats
haproxy   4203 99.9  1.4 1258804 933216 ?  Ssl  Jun11 1009:27  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 4194 
4182 18290 29663 14593 -x /var/lib/haproxy/stats
haproxy   1445 25.9  1.4 1261680 929516 ?  Ssl  13:51   0:42  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1436 
29663 4203 18290 14593 -x /var/lib/haproxy/stats
haproxy   1763 18.9  1.4 1260500 931516 ?  Ssl  13:52   0:15  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1445 
14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy   1903 25.0  1.4 1261472 931064 ?  Ssl  13:53   0:14  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1763 
1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy   2063 52.5  1.4 1259568 927916 ?  Ssl  13:53   0:19  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1903 
1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy   2602 62.0  1.4 1262220 928776 ?  Rsl  13:54   0:02  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 
1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats


# cat /proc/14593/status | grep Sig
SigQ:   0/257120
SigPnd: 
SigBlk: 0800
SigIgn: 1800
SigCgt: 000180300205

kill -USR1 14593 has no effect:

# strace - -p 14593
strace: Process 14593 attached with 3 threads
strace: [ Process PID=14595 runs in x32 mode. ]
[pid 14593] --- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=18547, 
si_uid=0} ---
[pid 14593] rt_sigaction(SIGUSR1, {0x558357660020, [USR1], 
SA_RESTORER|SA_RESTART, 0x7f0e87671270}, {0x558357660020, [USR1], 
SA_RESTORER|SA_RESTART, 0x7f0e87671270}, 8) = 0
[pid 14593] rt_sigreturn({mask=[USR2]}) = 7

however, the unix socket is on the correct process:

# lsof | grep "haproxy/stats" ; ps auxwwf | grep haproxy
haproxy2602haproxy5u unix 0x880f902e8000   0t0 
061798 /var/lib/haproxy/stats.18547.tmp
haproxy2602  2603  haproxy5u unix 0x880f902e8000   0t0 
061798 /var/lib/haproxy/stats.18547.tmp
haproxy2602  2604  haproxy5u unix 0x880f902e8000   0t0 
061798 /var/lib/haproxy/stats.18547.tmp
haproxy2602  2605  haproxy5u unix 0x880f902e8000   0t0 
061798 /var/lib/haproxy/stats.18547.tmp

So it means it does not cause any issue for the provisioner, which talks
to the correct process; however, there are remaining processes.
Should I start a different thread for that issue?

it seems harder to reproduce, I got the issue ~2 days after pushing back.

Thanks,

-- 
William



Re: remaining process after (seamless) reload

2018-06-08 Thread Willy Tarreau
On Fri, Jun 08, 2018 at 06:22:39PM +0200, William Dauchy wrote:
> On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> > That's great news!
> >
> > Here's the new patches. It shouldn't change anything to the fix, it only
> > changes the sigprocmask to pthread_sigmask.
> 
> thanks, I attached the backport for 1.8 and started a new test with
> them.
> Feel free to add Tested-by tag :)

Argh! 5 seconds too late, I've just pushed them now for mainline!

Thanks guys,
Willy



Re: remaining process after (seamless) reload

2018-06-08 Thread William Dauchy
On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.

thanks, I attached the backport for 1.8 and started a new test with
them.
Feel free to add Tested-by tag :)

-- 
William

Re: remaining process after (seamless) reload

2018-06-08 Thread William Lallemand
On Fri, Jun 08, 2018 at 06:20:21PM +0200, Willy Tarreau wrote:
> On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> > That's great news!
> >  
> > Here's the new patches. It shouldn't change anything to the fix, it only
> > changes the sigprocmask to pthread_sigmask.
> 
> OK, I can merge them right now if you want. At the very least it will
> kill a whole class of potential bugs and help future troubleshooting.
> Just let me know.
> 
> Thank you guys,
> Willy

Ok for me.

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-08 Thread Willy Tarreau
On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>  
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.

OK, I can merge them right now if you want. At the very least it will
kill a whole class of potential bugs and help future troubleshooting.
Just let me know.

Thank you guys,
Willy



Re: remaining process after (seamless) reload

2018-06-08 Thread William Lallemand
On Fri, Jun 08, 2018 at 02:10:44PM +0200, William Dauchy wrote:
> On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
> > Sorry for the late reply, I managed to reproduce and fix what seems to be the bug.
> > The signal management was not handled correctly with threads.
> > Could you try those patches and see if it fixes the problem?
> 
> Thanks a lot for the proposed patches!
> I started a new test yesterday evening, and it seems ok so far, so I
> deployed on two new machines this morning.
> I hope I don't speak too fast but it looks promising because I was
> previously able to reproduce in a few hours maximum.
> 

That's great news!
 
Here's the new patches. It shouldn't change anything to the fix, it only
changes the sigprocmask to pthread_sigmask.

-- 
William Lallemand
From f2942335e76d98df0da05235b9d8df95bc5636b0 Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Thu, 7 Jun 2018 09:46:01 +0200
Subject: [PATCH 1/3] BUG/MEDIUM: threads: handle signal queue only in thread 0

Signals were handled in all threads, which caused some signals to be lost
from time to time. To avoid a complicated lock system (threads+signals),
we prefer handling the signals in one thread, avoiding concurrent access.

The side effect of this bug was that some processes were not leaving from
time to time during a reload.

This patch must be backported in 1.8.
---
 src/haproxy.c | 21 ++---
 src/signal.c  |  8 
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 766758f62..96b8abcf9 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2402,8 +2402,10 @@ static void run_poll_loop()
 		/* Process a few tasks */
 		process_runnable_tasks();
 
-		/* check if we caught some signals and process them */
-		signal_process_queue();
+		/* check if we caught some signals and process them in the
+		 first thread */
+		if (tid == 0)
+			signal_process_queue();
 
 		/* Check if we can expire some tasks */
 		next = wake_expired_tasks();
@@ -2418,7 +2420,7 @@ static void run_poll_loop()
 			activity[tid].wake_cache++;
 		else if (active_tasks_mask & tid_bit)
 			activity[tid].wake_tasks++;
-		else if (signal_queue_len)
+		else if (signal_queue_len && tid == 0)
 			activity[tid].wake_signal++;
 		else
 			exp = next;
@@ -3008,6 +3010,7 @@ int main(int argc, char **argv)
 		unsigned int *tids= calloc(global.nbthread, sizeof(unsigned int));
 		pthread_t*threads = calloc(global.nbthread, sizeof(pthread_t));
 		int  i;
+		sigset_t blocked_sig, old_sig;
 
 		THREAD_SYNC_INIT((1UL << global.nbthread) - 1);
 
@@ -3015,6 +3018,15 @@ int main(int argc, char **argv)
 		for (i = 0; i < global.nbthread; i++)
 			tids[i] = i;
 
+		/* ensure the signals will be blocked in every thread */
+		sigfillset(&blocked_sig);
+		sigdelset(&blocked_sig, SIGPROF);
+		sigdelset(&blocked_sig, SIGBUS);
+		sigdelset(&blocked_sig, SIGFPE);
+		sigdelset(&blocked_sig, SIGILL);
+		sigdelset(&blocked_sig, SIGSEGV);
+		pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
+
 		/* Create nbthread-1 thread. The first thread is the current process */
 		threads[0] = pthread_self();
 		for (i = 1; i < global.nbthread; i++)
@@ -3048,6 +3060,9 @@ int main(int argc, char **argv)
 		}
 #endif /* !USE_CPU_AFFINITY */
 
+		/* when multithreading we need to let only the thread 0 handle the signals */
+		pthread_sigmask(SIG_SETMASK, &old_sig, NULL);
+
 		/* Finally, start the poll loop for the first thread */
 		run_thread_poll_loop(&tids[0]);
 
diff --git a/src/signal.c b/src/signal.c
index a0975910b..f1f682188 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -31,7 +31,6 @@ struct pool_head *pool_head_sig_handlers = NULL;
 sigset_t blocked_sig;
 int signal_pending = 0; /* non-zero if at least one signal remains unprocessed */
 
-__decl_hathreads(HA_SPINLOCK_T signals_lock);
 
 /* Common signal handler, used by all signals. Received signals are queued.
  * Signal number zero has a specific status, as it cannot be delivered by the
@@ -71,9 +70,6 @@ void __signal_process_queue()
 	struct signal_descriptor *desc;
 	sigset_t old_sig;
 
-	if (HA_SPIN_TRYLOCK(SIGNALS_LOCK, &signals_lock))
-		return;
-
 	/* block signal delivery during processing */
 	sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
 
@@ -100,7 +96,6 @@ void __signal_process_queue()
 
 	/* restore signal delivery */
 	sigprocmask(SIG_SETMASK, &old_sig, NULL);
-	HA_SPIN_UNLOCK(SIGNALS_LOCK, &signals_lock);
 }
 
 /* perform minimal intializations, report 0 in case of error, 1 if OK. */
@@ -112,8 +107,6 @@ int signal_init()
 	memset(signal_queue, 0, sizeof(signal_queue));
 	memset(signal_state, 0, sizeof(signal_state));
 
-	HA_SPIN_INIT(&signals_lock);
-
 	/* Ensure signals are not blocked. Some shells or service managers may
 	 * accidently block all of our signals unfortunately, causing lots of
 	 * zombie processes to remain in the background during reloads.
@@ -148,7 +141,6 @@ void deinit_signals()
 			pool_free(pool_head_sig_handlers, sh);
 		}
 	}
-	HA_SPIN_DESTROY(&signals_lock);
 }
 
 /* Register a function and an integer argument on a 

Re: remaining process after (seamless) reload

2018-06-08 Thread William Dauchy
Hello William L.,

On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
> Sorry for the late reply, I managed to reproduce and fix what seems to be the bug.
> The signal management was not handled correctly with threads.
> Could you try those patches and see if it fixes the problem?

Thanks a lot for the proposed patches!
I started a new test yesterday evening, and it seems ok so far, so I
deployed on two new machines this morning.
I hope I don't speak too fast but it looks promising because I was
previously able to reproduce in a few hours maximum.

Keep me up to date if you need a test on a re-worked version.

Thanks!


> From d695242fb260538bd8db323715d627c4a9deacc7 Mon Sep 17 00:00:00 2001
> From: William Lallemand 
> Date: Thu, 7 Jun 2018 09:46:01 +0200
> Subject: [PATCH 1/3] BUG/MEDIUM: threads: handle signal queue only in thread 0
>
> Signals were handled in all threads, which caused some signals to be lost
> from time to time. To avoid a complicated lock system (threads+signals),
> we prefer handling the signals in one thread, avoiding concurrent access.
>
> The side effect of this bug was that some processes were not leaving from
> time to time during a reload.
> ---
>  src/haproxy.c | 21 ++---
>  src/signal.c  |  8 
>  2 files changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/src/haproxy.c b/src/haproxy.c
> index 4628d8296..e984ca573 100644
> --- a/src/haproxy.c
> +++ b/src/haproxy.c
> @@ -2398,8 +2398,10 @@ static void run_poll_loop()
>   /* Process a few tasks */
>   process_runnable_tasks();
>
> - /* check if we caught some signals and process them */
> - signal_process_queue();
> + /* check if we caught some signals and process them in the
> +  first thread */
> + if (tid == 0)
> + signal_process_queue();
>
>   /* Check if we can expire some tasks */
>   next = wake_expired_tasks();
> @@ -2416,7 +2418,7 @@ static void run_poll_loop()
>   activity[tid].wake_tasks++;
>   else if (active_applets_mask & tid_bit)
>   activity[tid].wake_applets++;
> - else if (signal_queue_len)
> + else if (signal_queue_len && tid == 0)
>   activity[tid].wake_signal++;
>   else
>   exp = next;
> @@ -3006,6 +3008,7 @@ int main(int argc, char **argv)
>   unsigned int *tids= calloc(global.nbthread, sizeof(unsigned 
> int));
>   pthread_t*threads = calloc(global.nbthread, 
> sizeof(pthread_t));
>   int  i;
> + sigset_t blocked_sig, old_sig;
>
>   THREAD_SYNC_INIT((1UL << global.nbthread) - 1);
>
> @@ -3013,6 +3016,15 @@ int main(int argc, char **argv)
>   for (i = 0; i < global.nbthread; i++)
>   tids[i] = i;
>
> + /* ensure the signals will be blocked in every thread */
> + sigfillset(&blocked_sig);
> + sigdelset(&blocked_sig, SIGPROF);
> + sigdelset(&blocked_sig, SIGBUS);
> + sigdelset(&blocked_sig, SIGFPE);
> + sigdelset(&blocked_sig, SIGILL);
> + sigdelset(&blocked_sig, SIGSEGV);
> + pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
> +
>   /* Create nbthread-1 thread. The first thread is the current 
> process */
>   threads[0] = pthread_self();
>   for (i = 1; i < global.nbthread; i++)
> @@ -3046,6 +3058,9 @@ int main(int argc, char **argv)
>   }
>  #endif /* !USE_CPU_AFFINITY */
>
> + /* when multithreading we need to let only the thread 0 handle the signals */
> + pthread_sigmask(SIG_SETMASK, &old_sig, NULL);
> +
>   /* Finally, start the poll loop for the first thread */
>   run_thread_poll_loop(&tids[0]);
>
> diff --git a/src/signal.c b/src/signal.c
> index a0975910b..f1f682188 100644
> --- a/src/signal.c
> +++ b/src/signal.c
> @@ -31,7 +31,6 @@ struct pool_head *pool_head_sig_handlers = NULL;
>  sigset_t blocked_sig;
>  int signal_pending = 0; /* non-zero if at least one signal remains unprocessed */
>
> -__decl_hathreads(HA_SPINLOCK_T signals_lock);
>
>  /* Common signal handler, used by all signals. Received signals are queued.
>   * Signal number zero has a specific status, as it cannot be delivered by the
> @@ -71,9 +70,6 @@ void __signal_process_queue()
>   struct signal_descriptor *desc;
>   sigset_t old_sig;
>
> - if (HA_SPIN_TRYLOCK(SIGNALS_LOCK, &signals_lock))
> - return;
> -
>   /* block signal delivery during processing */
>   sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
>
> @@ -100,7 +96,6 @@ void __signal_process_queue()
>
>   /* restore signal delivery */
>   sigprocmask(SIG_SETMASK, &old_sig, NULL);
> - HA_SPIN_UNLOCK(SIGNALS_LOCK, &signals_lock);
>  }
>
>  /* perform minimal intializations, report 0 in case of error, 1 if OK. */
> @@ -112,8 

Re: remaining process after (seamless) reload

2018-06-08 Thread William Lallemand
On Thu, Jun 07, 2018 at 12:02:46PM +0200, Willy Tarreau wrote:
> On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
> > /* block signal delivery during processing */
> > +#ifdef USE_THREAD
> > +   pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
> > +#else
> > sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
> > +#endif
>  
> I think for the merge we'd rather put a wrapper into hathreads.h, like
> "ha_sigmask()" which uses either pthread_sigmask() or sigprocmask().
> 
> That will remove ifdefs and lower the risk of reusing these unsafe
> calls.
> 
> What do you think ?
> 

Agreed, I'll rework the patches once they are tested.

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-06-07 Thread Willy Tarreau
On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
>   /* block signal delivery during processing */
> +#ifdef USE_THREAD
> + pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
> +#else
>   sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
> +#endif
 
I think for the merge we'd rather put a wrapper into hathreads.h, like
"ha_sigmask()" which uses either pthread_sigmask() or sigprocmask().

That will remove ifdefs and lower the risk of reusing these unsafe
calls.
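
Something like this, just a sketch of the idea (to live in hathreads.h):

    static inline int ha_sigmask(int how, const sigset_t *set, sigset_t *oldset)
    {
    #ifdef USE_THREAD
    	/* per-thread mask when built with thread support */
    	return pthread_sigmask(how, set, oldset);
    #else
    	/* process-wide mask otherwise */
    	return sigprocmask(how, set, oldset);
    #endif
    }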

What do you think ?

thanks,
Willy



Re: remaining process after (seamless) reload

2018-06-07 Thread William Lallemand
Hi guys,

Sorry for the late reply, I managed to reproduce and fix what seems to be the bug.
The signal management was not handled correctly with threads.

Could you try those patches and see if it fixes the problem?

Thanks.

-- 
William Lallemand
From d695242fb260538bd8db323715d627c4a9deacc7 Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Thu, 7 Jun 2018 09:46:01 +0200
Subject: [PATCH 1/3] BUG/MEDIUM: threads: handle signal queue only in thread 0

Signals were handled in all threads, which caused some signals to be lost
from time to time. To avoid a complicated lock system (threads+signals),
we prefer handling the signals in one thread, avoiding concurrent access.

The side effect of this bug was that some processes were not leaving from
time to time during a reload.
---
 src/haproxy.c | 21 ++---
 src/signal.c  |  8 
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/src/haproxy.c b/src/haproxy.c
index 4628d8296..e984ca573 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2398,8 +2398,10 @@ static void run_poll_loop()
 		/* Process a few tasks */
 		process_runnable_tasks();
 
-		/* check if we caught some signals and process them */
-		signal_process_queue();
+		/* check if we caught some signals and process them in the
+		 first thread */
+		if (tid == 0)
+			signal_process_queue();
 
 		/* Check if we can expire some tasks */
 		next = wake_expired_tasks();
@@ -2416,7 +2418,7 @@ static void run_poll_loop()
 			activity[tid].wake_tasks++;
 		else if (active_applets_mask & tid_bit)
 			activity[tid].wake_applets++;
-		else if (signal_queue_len)
+		else if (signal_queue_len && tid == 0)
 			activity[tid].wake_signal++;
 		else
 			exp = next;
@@ -3006,6 +3008,7 @@ int main(int argc, char **argv)
 		unsigned int *tids= calloc(global.nbthread, sizeof(unsigned int));
 		pthread_t*threads = calloc(global.nbthread, sizeof(pthread_t));
 		int  i;
+		sigset_t blocked_sig, old_sig;
 
 		THREAD_SYNC_INIT((1UL << global.nbthread) - 1);
 
@@ -3013,6 +3016,15 @@ int main(int argc, char **argv)
 		for (i = 0; i < global.nbthread; i++)
 			tids[i] = i;
 
+		/* ensure the signals will be blocked in every thread */
+		sigfillset(&blocked_sig);
+		sigdelset(&blocked_sig, SIGPROF);
+		sigdelset(&blocked_sig, SIGBUS);
+		sigdelset(&blocked_sig, SIGFPE);
+		sigdelset(&blocked_sig, SIGILL);
+		sigdelset(&blocked_sig, SIGSEGV);
+		pthread_sigmask(SIG_SETMASK, &blocked_sig, &old_sig);
+
 		/* Create nbthread-1 thread. The first thread is the current process */
 		threads[0] = pthread_self();
 		for (i = 1; i < global.nbthread; i++)
@@ -3046,6 +3058,9 @@ int main(int argc, char **argv)
 		}
 #endif /* !USE_CPU_AFFINITY */
 
+		/* when multithreading we need to let only the thread 0 handle the signals */
+		pthread_sigmask(SIG_SETMASK, &old_sig, NULL);
+
 		/* Finally, start the poll loop for the first thread */
 		run_thread_poll_loop(&tids[0]);
 
diff --git a/src/signal.c b/src/signal.c
index a0975910b..f1f682188 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -31,7 +31,6 @@ struct pool_head *pool_head_sig_handlers = NULL;
 sigset_t blocked_sig;
 int signal_pending = 0; /* non-zero if at least one signal remains unprocessed */
 
-__decl_hathreads(HA_SPINLOCK_T signals_lock);
 
 /* Common signal handler, used by all signals. Received signals are queued.
  * Signal number zero has a specific status, as it cannot be delivered by the
@@ -71,9 +70,6 @@ void __signal_process_queue()
 	struct signal_descriptor *desc;
 	sigset_t old_sig;
 
-	if (HA_SPIN_TRYLOCK(SIGNALS_LOCK, &signals_lock))
-		return;
-
 	/* block signal delivery during processing */
 	sigprocmask(SIG_SETMASK, &blocked_sig, &old_sig);
 
@@ -100,7 +96,6 @@ void __signal_process_queue()
 
 	/* restore signal delivery */
 	sigprocmask(SIG_SETMASK, &old_sig, NULL);
-	HA_SPIN_UNLOCK(SIGNALS_LOCK, &signals_lock);
 }
 
 /* perform minimal intializations, report 0 in case of error, 1 if OK. */
@@ -112,8 +107,6 @@ int signal_init()
 	memset(signal_queue, 0, sizeof(signal_queue));
 	memset(signal_state, 0, sizeof(signal_state));
 
-	HA_SPIN_INIT(&signals_lock);
-
 	/* Ensure signals are not blocked. Some shells or service managers may
 	 * accidently block all of our signals unfortunately, causing lots of
 	 * zombie processes to remain in the background during reloads.
@@ -148,7 +141,6 @@ void deinit_signals()
 			pool_free(pool_head_sig_handlers, sh);
 		}
 	}
-	HA_SPIN_DESTROY(&signals_lock);
 }
 
 /* Register a function and an integer argument on a signal. A pointer to the
-- 
2.16.1

From 1501eddeb506897126d0d3d60a36ca780b24ffdf Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Thu, 7 Jun 2018 09:49:04 +0200
Subject: [PATCH 2/3] BUG/MINOR: don't ignore SIG{BUS,FPE,ILL,SEGV} during
 signal processing

---
 src/signal.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/signal.c b/src/signal.c
index f1f682188..0dadd762c 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -120,6 +120,14 @@ int signal_init()
 
 	sigfillset(&blocked_sig);
 	sigdelset(&blocked_sig, SIGPROF);
+	/* man sigprocmask: If SIGBUS, SIGFPE, SIGILL, or 

Re: remaining process after (seamless) reload

2018-05-30 Thread William Lallemand
On Wed, May 30, 2018 at 07:57:03PM +0200, Tim Düsterhus wrote:
> William,
> 
> Am 30.05.2018 um 19:45 schrieb William Lallemand:
> >> @William Lallemand Possibly the sd_notifyf should be moved below
> >> mworker_unblock_signals in mworker_wait?
> >>
> >  
> > This shouldn't happen with or without systemd.  I can reproduce it without
> > using systemd, we should not rely on an external component to mitigate the
> > problem.
> 
> Yes, I agree. But I believe that readiness should be signaled as late as
> possible to ensure that everything is set-up correctly. So the
> `sd_notifyf` should be moved down below the signal registering, no? Even
> if it does not fix this issue it is technically incorrect.
> 

Even there it's technically incorrect, in fact: readiness should be notified
once all workers are ready, and it's not possible to do this right now.

When the signals are blocked, they are stored within the signal queue in order
to be delivered once they are unblocked, so it does not change much whether
it's before or after the unblock. Even if we send the reload signal just before
the unblocking, it will still work.

> The ideal case would be the workers signaling their status to the master
> (which in turn signals to SystemD). Is this part of your current work on
> master-worker Willy announced in his 1.9 mail?

Better than that, the whole idea started from the need for synchronous
reloading that doesn't rely on signals anymore.

The current problems with the signal method:
- kill -USR2 is asynchronous and can't return an error code
- sd_notify can't signal an error during a reload, which is a big problem
- we use haproxy -c as a hack to stop the reloading if the new config is wrong
- haproxy can fail even if the config was right during the check
- we load the configuration twice because of the -c, which is not convenient
  for people having a heavy configuration to load.

>  Personally I'd like to see that workers are able to send information like
>  the number of requests handled to the master to be able to display them in
>  SystemD [1]
> 

I thought about it too, but people tend to rely on statistics usually. I
already have in mind how to do that kind of thing, but I can't say it will be
there; it will depend on how much time I spend on the other features :)

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-05-30 Thread William Dauchy
On Wed, May 30, 2018 at 5:29 PM, William Lallemand wrote:
> I can reproduce the same situation there, however I disabled the seamless
> reload. When doing a -USR1 & strace on a remaining worker, I can see that
> the signal is not blocked, and that it's still polling.

good news!

> Unfortunately the only way to know when the service is ready is with systemd,
> but I planned to make the status available on the stats socket in the future.

That's exactly what I was looking for!

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-05-30 Thread Tim Düsterhus
William,

Am 30.05.2018 um 19:45 schrieb William Lallemand:
>> @William Lallemand Possibly the sd_notifyf should be moved below
>> mworker_unblock_signals in mworker_wait?
>>
>  
> This shouldn't happen with or without systemd.  I can reproduce it without
> using systemd, we should not rely on an external component to mitigate the
> problem.

Yes, I agree. But I believe that readiness should be signaled as late as
possible to ensure that everything is set-up correctly. So the
`sd_notifyf` should be moved down below the signal registering, no? Even
if it does not fix this issue it is technically incorrect.

The ideal case would be the workers signaling their status to the master
(which in turn signals to SystemD). Is this part of your current work on
master-worker Willy announced in his 1.9 mail? Personally I'd like to
see that workers are able to send information like the number of
requests handled to the master to be able to display them in SystemD [1]

Best regards
Tim Düsterhus

[1]
https://www.freedesktop.org/software/systemd/man/sd_notify.html#STATUS=%E2%80%A6



Re: remaining process after (seamless) reload

2018-05-30 Thread William Lallemand
Hi Tim,

On Tue, May 29, 2018 at 09:33:48PM +0200, Tim Düsterhus wrote:
> 
> @William Lallemand Possibly the sd_notifyf should be moved below
> mworker_unblock_signals in mworker_wait?
> 
 
This shouldn't happen with or without systemd.  I can reproduce it without
using systemd, we should not rely on an external component to mitigate the
problem.

In my opinion your suggestion won't change anything, and I suspect that the
problem is not there, but probably in the worker.

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-05-30 Thread William Lallemand
On Wed, May 30, 2018 at 04:47:31PM +0200, William Dauchy wrote:
> Hello William L.,
> 

Hi William D. :-)

> I did some more testing:
> I simplified my config, removing the multi binding part and cpu-map.
> Conclusion is, I have this issue when I activate nbthread feature
> (meaning no probkem without).
> 
> I tried to kill -USR1 the failing worker, but it remains.
> 
> Here are the Sig* from status file of one of the failing process:
> SigQ:   0/192448
> SigPnd: 
> SigBlk: 0800
> SigIgn: 1800
> SigCgt: 000180300205
> 

I can reproduce the same situation there; however, I disabled the seamless
reload. When doing a -USR1 & strace on a remaining worker, I can see that
the signal is not blocked and that it's still polling.

My guess is that something is preventing the worker from leaving. I tried to
gdb the threads but none of them seems to be deadlocked. I have to investigate
more.

I'm not sure that's related to the timing of the reload at all, but I could be
wrong.

> About the timing of reload, it seems to take a few seconds most of the
> time, so I *think* I am not reloading before another is not yet done,
> but I would appreciate whether I can check this fact through a file
> before sending the reload; do you have any hint?

With Type=notify, I think systemd does not try to reload while a previous
reload is not finished yet. You could grep for 'reloading' in the output of
systemctl status haproxy to check that.

Unfortunately the only way to know when the service is ready is with systemd,
but I planned to make the status available on the stats socket in the future.

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-05-30 Thread William Dauchy
Hello William L.,

I did some more testing:
I simplified my config, removing the multi binding part and cpu-map.
The conclusion is that I have this issue when I activate the nbthread feature
(meaning no problem without it).

I tried to kill -USR1 the failing worker, but it remains.

Here are the Sig* from status file of one of the failing process:
SigQ:   0/192448
SigPnd: 
SigBlk: 0800
SigIgn: 1800
SigCgt: 000180300205
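
For reference, those Sig* lines are hex bitmasks in which bit N-1 stands for
signal N, so on Linux SIGUSR1 (signal 10) is bit 9. A small standalone helper
(hypothetical, not part of haproxy) can translate a mask into signal numbers,
which makes it easy to see whether SIGUSR1 sits in SigBlk or SigIgn of a stuck
worker rather than only in SigCgt:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Decode one of the Sig* hex masks from /proc/<pid>/status,
 * e.g.:  ./decode_sigmask 0800
 * Bit 0 of the mask is signal 1 (SIGHUP), bit 9 is signal 10 (SIGUSR1), ...
 */
int main(int argc, char **argv)
{
	unsigned long long mask;
	int sig;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <SigBlk/SigIgn/SigCgt hex value>\n", argv[0]);
		return 1;
	}
	mask = strtoull(argv[1], NULL, 16);

	for (sig = 1; sig <= 64; sig++)
		if (mask & (1ULL << (sig - 1)))
			printf("%2d  %s\n", sig, sig <= 31 ? strsignal(sig) : "real-time signal");

	return 0;
}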

About the timing of the reload: it seems to take a few seconds most of the
time, so I *think* I am not reloading before the previous one is done,
but I would like to know whether I can check this through a file
before sending the reload; do you have any hint?

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-05-29 Thread Willy Tarreau
On Tue, May 29, 2018 at 08:35:19PM +0200, William Dauchy wrote:
> I however don't see on which part haproxy would
> need to do dns lookup on our side. Front end side is host matching and
> backend side is IP only.

We studied the possibility that a reload happens at the exact moment
the config finishes being parsed and thought about such possibilities
as well. We could imagine that you're using tons of certificates and
that they take a bit of time to be processed on startup. But despite
this, apparently the startup sequence *looks* safe.

> But I will have a closer look at this point. What I am missing for now
> is how to know when haproxy is considered as "ready" to prevent new
> reloads.

From what I understood, it's when the workers are forked; at this point
sd_notify is used. There's very likely a race somewhere there. We imagined
several hypotheses, like the reload signal being delivered before the workers
are started, etc., but William managed to rule them all out so far.
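
For readers who have not used it, the readiness notification itself is a
single call into libsystemd; a minimal sketch of the general pattern (an
illustration only, not haproxy's actual code) where the master forks its
workers and only then reports READY=1:

#include <unistd.h>
#include <systemd/sd-daemon.h>	/* sd_notify(), link with -lsystemd */

#define NB_WORKERS 4

int main(void)
{
	int i;

	for (i = 0; i < NB_WORKERS; i++) {
		if (fork() == 0) {
			/* worker: this is where the polling loop would run */
			pause();
			_exit(0);
		}
	}

	/* master: the workers are forked, tell systemd the service is ready */
	sd_notify(0, "READY=1");

	/* a reload would typically send "RELOADING=1" first, then "READY=1"
	 * again once the new workers are up */
	for (;;)
		pause();
}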

All ideas and silly suggestions are welcome of course.

Cheers,
Willy



Re: remaining process after (seamless) reload

2018-05-29 Thread William Dauchy
Hello Tim,

On Tue, May 29, 2018 at 9:33 PM, Tim Düsterhus  wrote:
> Run systemctl status haproxy. It shows the status:
>
>> [timwolla@/s/haproxy (maxrewrite-warn)]sudo systemctl status haproxy
>> ● haproxy.service - HAProxy Load Balancer
>>Loaded: loaded (/lib/systemd/system/haproxy.service; disabled; vendor 
>> preset: enabled)
>>Active: reloading (reload) since Tue 2018-05-29 21:25:06 CEST; 22s ago
>
> "Active: reloading (reload)" implies that the haproxy master process did
> not signal readiness to SystemD yet. In my case that is because I
> commented out the line signaling readiness. Readiness is signaled once
> the master process has forked off the children, but before unblocking the
> signals in the master process.

Thanks for the precision. I was looking more for a way to easily check
it from my provisioning system, e.g. a file to check instead of parsing
the output of systemctl status.

-- 
William



Re: remaining process after (seamless) reload

2018-05-29 Thread Tim Düsterhus
William,

Am 29.05.2018 um 17:09 schrieb William Dauchy:
> We are using Type=notify. I however cannot guarantee we do not trigger
> a new reload, before the previous one is done. Is there a way to check
> the "ready" state you mentioned?

Run systemctl status haproxy. It shows the status:

> [timwolla@/s/haproxy (maxrewrite-warn)]sudo systemctl status haproxy
> ● haproxy.service - HAProxy Load Balancer
>Loaded: loaded (/lib/systemd/system/haproxy.service; disabled; vendor 
> preset: enabled)
>Active: reloading (reload) since Tue 2018-05-29 21:25:06 CEST; 22s ago

"Active: reloading (reload)" implies that the haproxy master process did
not signal readiness to SystemD yet. In my case that is because I
commented out the line signaling readiness. Readiness is signaled once
the master process has forked off the children, but before unblocking the
signals in the master process.

@William Lallemand Possibly the sd_notifyf should be moved below
mworker_unblock_signals in mworker_wait?

Once the reload finished it should show "Active: active (running)":

> [timwolla@/s/haproxy (maxrewrite-warn)]sudo systemctl status haproxy
> ● haproxy.service - HAProxy Load Balancer
>Loaded: loaded (/lib/systemd/system/haproxy.service; disabled; vendor 
> preset: enabled)
>Active: active (running) since Tue 2018-05-29 21:27:58 CEST; 1s ago

Best regards
Tim Düsterhus



Re: remaining process after (seamless) reload

2018-05-29 Thread William Dauchy
Hello Dave,

On Tue, May 29, 2018 at 5:55 PM, Dave Chiluk  wrote:
> We've battled the same issue with our haproxys.  We root caused it to slow
> dns lookup times while parsing the config was causing haproxy config parsing
> to take so long that we were attempting to reload again before the original
> reload had completed.  I'm still not sure why or where the Signals are
> getting dropped to the old haproxy, but we found by installing a dns cache
> on our haproxy nodes we were able to greatly decrease the likelihood of
> creating zombie haproxy instances.

Interesting findings. However, I don't see where haproxy would need to do
DNS lookups on our side: the frontend side is host matching and the
backend side is IP only.
But I will have a closer look at this point. What I am missing for now
is how to know when haproxy is considered "ready", in order to prevent new
reloads.

> We further improved on that by rearchitecting our micro-service architecture
> to make use of the haproxy dynamic scaling apis, and allocating dummy slots
> for future expansion.  Similar to
> https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/.

that's indeed what we are already doing for servers.

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-05-29 Thread Dave Chiluk
We've battled the same issue with our haproxys. We root-caused it to slow
DNS lookup times while parsing the config: config parsing took so long that
we were attempting to reload again before the original reload had completed.
I'm still not sure why or where the signals to the old haproxy are getting
dropped, but we found that by installing a DNS cache on our haproxy nodes we
were able to greatly decrease the likelihood of creating zombie haproxy
instances.

We further improved on that by rearchitecting our micro-service
architecture to make use of the haproxy dynamic scaling apis, and
allocating dummy slots for future expansion.  Similar to
https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/
.

Good luck, I hope that's the answer to your problem.
Dave.

On Tue, May 29, 2018 at 10:12 AM William Dauchy  wrote:

> Hello William,
>
> Sorry for the late answer.
>
> > Are the problematical workers leaving when you reload a second time?
>
> no, they seem to stay for a long time (forever?)
>
> > Did you try to kill -USR1 the worker ? It should exits with "Former
> worker $PID
> > exited with code 0" on stderr.
> > If not, could you check the Sig* lines in /proc/$PID/status for this
> worker?
>
> will try. I need to put the setup back in shape, and maybe test
> without multiple binding.
>
> > Do you know how much time take haproxy to load its configuration, and do
> you
> > think you tried a reload before it finished to parse and load the config?
> > Type=notify in your systemd unit file should help for this case. If I
> remember
> > well it checks that the service is 'ready' before trying to reload.
>
> We are using Type=notify. I however cannot guarantee we do not trigger
> a new reload, before the previous one is done. Is there a way to check
> the "ready" state you mentioned?
> (We are talking about a reload every 10 seconds maximum though)
>
> > I suspect the SIGUSR1 signal is not received by the worker, but I'm not
> sure
> > either if it's the master that didn't send it or if the worker blocked
> it.
>
> good to know.
>
> Best,
> --
> William
>
>


Re: remaining process after (seamless) reload

2018-05-29 Thread William Dauchy
Hello William,

Sorry for the late answer.

> Are the problematical workers leaving when you reload a second time?

no, they seem to stay for a long time (forever?)

> Did you try to kill -USR1 the worker? It should exit with "Former worker 
> $PID
> exited with code 0" on stderr.
> If not, could you check the Sig* lines in /proc/$PID/status for this worker?

will try. I need to put the setup back in shape, and maybe test
without multiple binding.

> Do you know how much time take haproxy to load its configuration, and do you
> think you tried a reload before it finished to parse and load the config?
> Type=notify in your systemd unit file should help for this case. If I remember
> well it checks that the service is 'ready' before trying to reload.

We are using Type=notify. I however cannot guarantee we do not trigger
a new reload, before the previous one is done. Is there a way to check
the "ready" state you mentioned?
(We are talking about a reload every 10 seconds maximum though)

> I suspect the SIGUSR1 signal is not received by the worker, but I'm not sure
> either if it's the master that didn't send it or if the worker blocked it.

good to know.

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-05-28 Thread William Lallemand
On Thu, May 24, 2018 at 11:00:29PM +0200, William Dauchy wrote:
> On Thu, May 24, 2018 at 12:01:38PM +0200, William Lallemand wrote:
> > I managed to reproduce something similar with the 1.8.8 version. It looks 
> > like
> > letting a socat connected to the socket helps.
> >
> > I'm looking into the code to see what's happening.
> 
> Indeed, after some more hours, I got the same issue on v1.8.8. However it 
> seems to
> be easier to reproduce in v1.8.9, but I might be wrong.
> So now I bet on either thread issue, or bind with reuseport.
> I'll try to do some more tests.
> 
> Best,

Hi,

I don't think I reproduced the same problem, so I have a few questions for you 
:-)

Are the problematical workers leaving when you reload a second time?

Did you try to kill -USR1 the worker? It should exit with "Former worker $PID
exited with code 0" on stderr.

If not, could you check the Sig* lines in /proc/$PID/status for this worker?

Do you know how much time haproxy takes to load its configuration, and do you
think you tried a reload before it finished parsing and loading the config?
Type=notify in your systemd unit file should help for this case. If I remember
correctly, it checks that the service is 'ready' before trying to reload.

I suspect the SIGUSR1 signal is not received by the worker, but I'm not sure
whether it's the master that didn't send it or the worker that blocked it.

Thanks!

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-05-24 Thread William Dauchy
Hi William,

Thank you for your reply.

On Thu, May 24, 2018 at 12:01:38PM +0200, William Lallemand wrote:
> I managed to reproduce something similar with the 1.8.8 version. It looks like
> letting a socat connected to the socket helps.
>
> I'm looking into the code to see what's happening.

Indeed, after some more hours, I got the same issue on v1.8.8. However it seems 
to
be easier to reproduce in v1.8.9, but I might be wrong.
So now I bet on either thread issue, or bind with reuseport.
I'll try to do some more tests.
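
For context on the reuseport suspicion: with SO_REUSEPORT, every process
generation can bind its own listening socket on the same address and port.
A minimal sketch of such a bind, independent of haproxy:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a listening TCP socket on *:<port> that can coexist with listeners
 * opened on the same port by other processes, as long as they all set
 * SO_REUSEPORT (hypothetical helper, not haproxy's bind code). */
int bind_reuseport(int port)
{
	struct sockaddr_in addr;
	int one = 1;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;

	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 1024) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

The kernel then spreads new connections across every socket still bound to
that port, which could explain why an old worker that never closes its
listeners keeps receiving traffic.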

Best,
-- 
William



Re: remaining process after (seamless) reload

2018-05-24 Thread William Lallemand
On Thu, May 24, 2018 at 10:07:23AM +0200, William Dauchy wrote:
> On Wed, May 23, 2018 at 08:45:04PM +0200, William Dauchy wrote:
> > More details which could help understand what is going on:
> >
> > ps output:
> >
> > root 15928  0.3  0.0 255216 185268 ?   Ss   May21  10:11 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 16988 16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats
> > haproxy   6340  2.0  0.0 526172 225476 ?   Ssl  May22  35:03  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 6328 6315 -x /var/lib/haproxy/stats
> > haproxy  28271  1.8  0.0 528720 229508 ?   Ssl  May22  27:13  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 28258 28207 28232 6340 -x /var/lib/haproxy/stats
> > haproxy  30590  265  0.0 527268 225032 ?   Rsl  04:35 2188:55  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 30578 28271 6340 -x /var/lib/haproxy/stats
> > haproxy  30334  197  0.0 526704 224544 ?   Rsl  09:17 1065:59  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 30322 30295 27095 6340 28271 30590 -x /var/lib/haproxy/stats
> > haproxy  16912  1.7  0.0 527544 216552 ?   Ssl  18:14   0:03  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 16899 28271 30590 30334 6340 -x /var/lib/haproxy/stats
> > haproxy  17001  2.2  0.0 528392 214656 ?   Ssl  18:17   0:00  \_ 
> > /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> > 16988 16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats
> >
> >
> > lsof output:
> >
> > haproxy6340haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy6340  6341  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy6340  6342  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy6340  6343  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   17020haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   17020 17021  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   17020 17022  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   17020 17023  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   28271haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   28271 28272  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   28271 28273  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> > haproxy   28271 28274  haproxy5u unix 0x883feec97000   0t0  
> > 679289634 /var/lib/haproxy/stats.15928.tmp
> >
> > (So on unhealthy nodes, I find old processes which are still linked to
> > the socket.)
> >
> > The provisioning part is also seeing data which are supposed to be
> > already updated through the runtime API. I suspect I am getting old
> > data when connecting to the unix socket. The later being still attached
> > to an old process?
> > Indeed, if I try
> > for i in {1..500}; do sudo echo "show info" | sudo socat stdio  
> > /var/lib/haproxy/stats  | grep Pid; done
> >
> > I get "Pid: 17001" most of the time, which is the last process
> > but I sometimes get: "Pid: 28271"(!) which is a > 24 hours old
> > process.
> >
> > Is there something we are doing wrongly?
> 
> After some more testing, I don't have this issue using haproxy v1.8.8
> (rollbacked for > 12 hours). I hope I don't speak too fast.
> 

Hi,

I managed to reproduce something similar with the 1.8.8 version. It looks like
letting a socat connected to the socket helps.

I'm looking into the code to see what's happening.

-- 
William Lallemand



Re: remaining process after (seamless) reload

2018-05-24 Thread William Dauchy
On Wed, May 23, 2018 at 08:45:04PM +0200, William Dauchy wrote:
> More details which could help understand what is going on:
>
> ps output:
>
> root 15928  0.3  0.0 255216 185268 ?   Ss   May21  10:11 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 16988 16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats
> haproxy   6340  2.0  0.0 526172 225476 ?   Ssl  May22  35:03  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 6328 6315 -x /var/lib/haproxy/stats
> haproxy  28271  1.8  0.0 528720 229508 ?   Ssl  May22  27:13  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 28258 28207 28232 6340 -x /var/lib/haproxy/stats
> haproxy  30590  265  0.0 527268 225032 ?   Rsl  04:35 2188:55  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 30578 28271 6340 -x /var/lib/haproxy/stats
> haproxy  30334  197  0.0 526704 224544 ?   Rsl  09:17 1065:59  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 30322 30295 27095 6340 28271 30590 -x /var/lib/haproxy/stats
> haproxy  16912  1.7  0.0 527544 216552 ?   Ssl  18:14   0:03  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 16899 28271 30590 30334 6340 -x /var/lib/haproxy/stats
> haproxy  17001  2.2  0.0 528392 214656 ?   Ssl  18:17   0:00  \_ 
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 
> 16988 16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats
>
>
> lsof output:
>
> haproxy6340haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy6340  6341  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy6340  6342  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy6340  6343  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   17020haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   17020 17021  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   17020 17022  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   17020 17023  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   28271haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   28271 28272  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   28271 28273  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
> haproxy   28271 28274  haproxy5u unix 0x883feec97000   0t0  
> 679289634 /var/lib/haproxy/stats.15928.tmp
>
> (So on unhealthy nodes, I find old processes which are still linked to
> the socket.)
>
> The provisioning part is also seeing data which are supposed to be
> already updated through the runtime API. I suspect I am getting old
> data when connecting to the unix socket. The latter being still attached
> to an old process?
> Indeed, if I try
> for i in {1..500}; do sudo echo "show info" | sudo socat stdio  
> /var/lib/haproxy/stats  | grep Pid; done
>
> I get "Pid: 17001" most of the time, which is the last process
> but I sometimes get: "Pid: 28271"(!) which is a > 24 hours old
> process.
>
> Is there something we are doing wrongly?

After some more testing, I don't have this issue using haproxy v1.8.8
(rolled back for > 12 hours). I hope I'm not speaking too soon.

-- 
William



Re: remaining process after (seamless) reload

2018-05-23 Thread William Dauchy
On Wed, May 23, 2018 at 06:49:09PM +0200, William Dauchy wrote:
> We do frequent reloads (approximately every 10s).
> After a while some processes remain alive and seem to never exit (waited >24
> hours). While stracing them, some of them are still handling traffic and
> doing healthchecks. Some of them are exiting normally after the reload.
> I was wondering how I can help to debug this issue. I assume I won't
> have any other info through the stats socket, since it concerns older
> processes but maybe I missed something.
> Do you have any hint to help me understand what is going on?

More details which could help understand what is going on:

ps output:

root 15928  0.3  0.0 255216 185268 ?   Ss   May21  10:11 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 16988 
16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats
haproxy   6340  2.0  0.0 526172 225476 ?   Ssl  May22  35:03  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 6328 
6315 -x /var/lib/haproxy/stats
haproxy  28271  1.8  0.0 528720 229508 ?   Ssl  May22  27:13  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 28258 
28207 28232 6340 -x /var/lib/haproxy/stats
haproxy  30590  265  0.0 527268 225032 ?   Rsl  04:35 2188:55  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 30578 
28271 6340 -x /var/lib/haproxy/stats
haproxy  30334  197  0.0 526704 224544 ?   Rsl  09:17 1065:59  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 30322 
30295 27095 6340 28271 30590 -x /var/lib/haproxy/stats
haproxy  16912  1.7  0.0 527544 216552 ?   Ssl  18:14   0:03  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 16899 
28271 30590 30334 6340 -x /var/lib/haproxy/stats
haproxy  17001  2.2  0.0 528392 214656 ?   Ssl  18:17   0:00  \_ 
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 16988 
16912 6340 28271 30590 30334 -x /var/lib/haproxy/stats


lsof output:

haproxy6340haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy6340  6341  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy6340  6342  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy6340  6343  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   17020haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   17020 17021  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   17020 17022  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   17020 17023  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   28271haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   28271 28272  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   28271 28273  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp
haproxy   28271 28274  haproxy5u unix 0x883feec97000   0t0  
679289634 /var/lib/haproxy/stats.15928.tmp

(So on unhealthy nodes, I find old processes which are still linked to
the socket.)

The provisioning part is also seeing data which are supposed to be
already updated through the runtime API. I suspect I am getting old
data when connecting to the unix socket. The latter being still attached
to an old process?
Indeed, if I try
for i in {1..500}; do sudo echo "show info" | sudo socat stdio  
/var/lib/haproxy/stats  | grep Pid; done

I get "Pid: 17001" most of the time, which is the last process
but I sometimes get: "Pid: 28271"(!) which is a > 24 hours old
process.

Is there something we are doing wrong?

Best,

-- 
William



remaining process after (seamless) reload

2018-05-23 Thread William Dauchy
Hello,

I am trying to understand a possible issue we have regarding haproxy (seamless)
reloads.
I am using haproxy v1.8.9 with the following config (using nbthread):

global
 log 127.0.0.1 local0 info
 maxconn 262144
 user haproxy
 group haproxy
 nbproc 1
 daemon
 stats socket /var/lib/haproxy/stats level admin mode 644 expose-fd 
listeners
 stats timeout 2m
 tune.bufsize 33792
 ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
 ssl-default-bind-ciphers 
ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
 ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
 ssl-default-server-ciphers 
ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
 ssl-server-verify none
 crt-base /etc/haproxy/tls/
 nbthread 4
 cpu-map auto:1/1-4 0-3

defaults
 log global
 mode http
 retries 3
 timeout connect 10s
 timeout client 180s
 timeout server 180s
 timeout http-keep-alive 10s
 timeout http-request 10s
 timeout queue 1s
 timeout check 5s
 option httplog
 option dontlognull
 option redispatch
 option prefer-last-server
 option dontlog-normal
 option http-keep-alive
 option forwardfor except 127.0.0.0/8
 balance roundrobin
 maxconn 262134
 http-reuse safe
 default-server inter 5s fastinter 1s fall 3 slowstart 20s observe layer7 
error-limit 1 on-error fail-check
 http-check send-state
 http-check expect status 200 string passing

listen stats
 bind *:8080
 stats enable
 stats uri /haproxy_stats


frontend fe_main
# arbitrary split in two for http/https traffic
bind *:80 name http_1 process 1/1
bind *:80 name http_2 process 1/2
bind *:443 name https_3 ssl crt /etc/haproxy/tls/fe_main process 1/3
bind *:443 name https_4 ssl crt /etc/haproxy/tls/fe_main process 1/4
[...]

The rest of the config contains lots of acls/backends (> 1000)

We do frequent reloads (approximately every 10s).
After a while some processes remain alive and seem to never exit (waited >24
hours). While stracing them, some of them are still handling traffic and
doing healthchecks. Some of them are exiting normally after the reload.
I was wondering how I can help to debug this issue. I assume I won't
have any other info through the stats socket, since it concerns older
processes but maybe I missed something.
Do you have any hint to help me understand what is going on?

Best regards,

-- 
William