Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2021-01-12 Thread Amaury Denoyelle
Mark Brookes  wrote:

> Sorry to drag up an old thread, but do we have an ETA for a new
> release of version 1.8 that contains the fix? I noticed that the 2.x
> versions have been updated, and wanted to make sure that 1.8 has not
> been left out by mistake.

I have just updated the backports for haproxy 1.8 and will try to make a
release with your fix soon.

-- 
Amaury Denoyelle



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2021-01-12 Thread Mark Brookes
Sorry to drag up an old thread, but do we have an ETA for a new
release of version 1.8 that contains the fix? I noticed that the 2.x
versions have been updated, and wanted to make sure that 1.8 has not
been left out by mistake.

Thank you

Mark


On Wed, 16 Dec 2020 at 10:03, Peter Statham wrote:
>
> On Wed, 16 Dec 2020 at 08:40, Christopher Faulet  wrote:
> >
> > On 11/12/2020 at 21:34, Peter Statham wrote:
> > >
> > > The patch seems to fix the issue.
> > >
> >
> > Peter,
> >
> > The fix was backported to the 1.8. Thanks!
> >
> > --
> > Christopher Faulet
>
> Hello Christopher,
>
> Thank you for your time finding the cause and the solution to this.
>
> --
>
> Peter Statham
> Loadbalancer.org Ltd.
>


-- 

Mark Brookes
Loadbalancer.org Ltd.
www.loadbalancer.org


+44 (0)330 380 1064
m...@loadbalancer.org



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-16 Thread Peter Statham
On Wed, 16 Dec 2020 at 08:40, Christopher Faulet  wrote:
>
> On 11/12/2020 at 21:34, Peter Statham wrote:
> >
> > The patch seems to fix the issue.
> >
>
> Peter,
>
> The fix was backported to the 1.8. Thanks!
>
> --
> Christopher Faulet

Hello Christopher,

Thank you for your time finding the cause and the solution to this.

-- 

Peter Statham
Loadbalancer.org Ltd.



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-16 Thread Christopher Faulet

On 11/12/2020 at 21:34, Peter Statham wrote:


The patch seems to fix the issue.



Peter,

The fix was backported to the 1.8. Thanks!

--
Christopher Faulet



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-11 Thread Peter Statham
On Fri, 11 Dec 2020 at 15:21, Christopher Faulet  wrote:
>
> > On 11/12/2020 at 11:45, Christopher Faulet wrote:
> > > On 10/12/2020 at 19:38, Peter Statham wrote:
> >>   > Sorry for the delay in getting back to you.  It is the same crash;
> >>   > we've been trying to narrow down the exact combination of compiler,
> >>   > libraries, kernel, hypervisor, etc. that causes the issue now that we
> >>   > know it isn't universal but that's turning out to be trickier than
> >>   > identifying the issue.
> >>   >
> >>   > I only backported the changes to the src/lb_fwlc.c file, but
> >>   > backporting 1b87748ff5 seems to work just as well.  So far we haven't
> >>   > been able to provoke the issue with the changes in 1b87748ff5 applied
> >>   > to the 1.8 tree so that does look like a solution.
> >>   >
> >>   > We will keep testing and trying to narrow the issue down.
> >>
> >> Since I wrote the above, I have managed to replicate the issue on 1.8 with
> >> 1b87748ff5 applied, so it looks as if that was not the solution after all.
> >>
> >> I include a binary built from 1.8.27 with 1b87748ff5 backported and a core 
> >> dump.
> >>
> >> haproxy-1.8.27+1b87748ff5
> >> 
> >> haproxy-1.8.27+1b87748ff5.core
> >> 
> >>
> >
> >
> > Thanks Peter, I'll try to take a look today. Is the reproducer the same?
> >
>
> Ok, in fact it is pretty easy to reproduce. Because I found a similar bug on
> newer versions, I had not tested on the 1.8. Unfortunately, there is a second
> bug, specific to the 1.8.
>
> I attached a patch that should fix it. The bug exists because of the
> rendez-vous point, which was removed in newer versions. But, on 1.8, there may
> be a short delay before server state changes are committed, because we must
> wait for all threads. Thus, we must take care not to use information from the
> next state too early. And this is the bug here: in the leastconn algo, the
> next server weight is used, instead of the current one, to reposition the
> server in the tree. The next server weight must only be used once the server
> state changes are committed.
>
> Peter, could you confirm it fixes your bug?
> --
> Christopher Faulet

The patch seems to fix the issue.

I've built a new version of haproxy 1.8.27 with the patch applied on
both Debian and CentOS under VMWare.  I then ran these builds
concurrently with my previous builds on both platforms using
configuration files that are identical save for the bind address.

I can reproduce the bug with the existing build but not with the one
with your patch applied.

I'll ask some of my colleagues to double check my tests.

--

Peter Statham
Loadbalancer.org Ltd.



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-11 Thread Christopher Faulet

On 11/12/2020 at 11:45, Christopher Faulet wrote:

On 10/12/2020 at 19:38, Peter Statham wrote:

  > Sorry for the delay in getting back to you.  It is the same crash;
  > we've been trying to narrow down the exact combination of compiler,
  > libraries, kernel, hypervisor, etc. that causes the issue now that we
  > know it isn't universal but that's turning out to be trickier than
  > identifying the issue.
  >
  > I only backported the changes to the src/lb_fwlc.c file, but
  > backporting 1b87748ff5 seems to work just as well.  So far we haven't
  > been able to provoke the issue with the changes in 1b87748ff5 applied
  > to the 1.8 tree so that does look like a solution.
  >
  > We will keep testing and trying to narrow the issue down.

Since I wrote the above, I have managed to replicate the issue on 1.8 with
1b87748ff5 applied, so it looks as if that was not the solution after all.

I include a binary built from 1.8.27 with 1b87748ff5 backported and a core dump.

haproxy-1.8.27+1b87748ff5

haproxy-1.8.27+1b87748ff5.core





Thanks Peter, I'll try to take a look today. Is the reproducer the same?



Ok, in fact it is pretty easy to reproduce. Because I found a similar bug on
newer versions, I had not tested on the 1.8. Unfortunately, there is a second
bug, specific to the 1.8.


I attached a patch that should fix it. The bug exists because of the
rendez-vous point, which was removed in newer versions. But, on 1.8, there may
be a short delay before server state changes are committed, because we must
wait for all threads. Thus, we must take care not to use information from the
next state too early. And this is the bug here: in the leastconn algo, the
next server weight is used, instead of the current one, to reposition the
server in the tree. The next server weight must only be used once the server
state changes are committed.


Peter, could you confirm it fixes your bug?
--
Christopher Faulet
From 89c1eb3a9a1cc0b1d9743a74d299a329cbe3b910 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Fri, 11 Dec 2020 15:36:01 +0100
Subject: [PATCH] BUG/MEDIUM: lb-leastconn: Reposition a server using the right
 eweight

Depending on the context, the current eweight or the next one must be used
to reposition a server in the tree. When the server state is updated, for
instance its weight, the next eweight must be used because it is not yet
committed. However, when the server is used under normal conditions, the
current eweight must be used.

This matters because the server state is updated and committed inside the
rendez-vous point. Thus, the next server state may be out of sync with the
current state for a short time, while waiting for all threads to join the
rendez-vous point. It is especially a problem if the next eweight is set to
0: in that case it must not be used to reposition the server in the tree,
otherwise it leads to a divide by 0.

This bug is specific to the 1.8. On previous versions, there is no thread
support. On newer ones, the rendez-vous point was removed. Thus, there is no
upstream commit ID for this patch, and no backport is needed.
---
 src/lb_fwlc.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/src/lb_fwlc.c b/src/lb_fwlc.c
index e2740994e..decb38697 100644
--- a/src/lb_fwlc.c
+++ b/src/lb_fwlc.c
@@ -37,13 +37,19 @@ static inline void fwlc_dequeue_srv(struct server *s)
 	eb32_delete(&s->lb_node);
 }
 
-/* Queue a server in its associated tree, assuming the weight is >0.
+/* Queue a server in its associated tree, assuming the <eweight> is >0.
  * Servers are sorted by #conns/weight. To ensure maximum accuracy,
  * we use #conns*SRV_EWGHT_MAX/eweight as the sorting key.
+ *
+ * NOTE: Depending on the calling context, we use s->next_eweight or
+ *   s->cur_eweight. The next value is used when the server state is updated
+ *   (because the weight changed for instance). During this step, the server
+ *   state is not yet committed. The current value is used to reposition the
+ *   server in the tree. This happens when the server is used.
  */
-static inline void fwlc_queue_srv(struct server *s)
+static inline void fwlc_queue_srv(struct server *s, unsigned int eweight)
 {
-	s->lb_node.key = s->served * SRV_EWGHT_MAX / s->next_eweight;
+	s->lb_node.key = s->served * SRV_EWGHT_MAX / eweight;
 	eb32_insert(s->lb_tree, &s->lb_node);
 }
 
@@ -56,7 +62,7 @@ static void fwlc_srv_reposition(struct server *s)
 	HA_SPIN_LOCK(LBPRM_LOCK, &s->proxy->lbprm.lock);
 	if (s->lb_tree) {
 		fwlc_dequeue_srv(s);
-		fwlc_queue_srv(s);
+		fwlc_queue_srv(s, s->cur_eweight);
 	}
 	HA_SPIN_UNLOCK(LBPRM_LOCK, &s->proxy->lbprm.lock);
 }
@@ -161,7 +167,7 @@ static void fwlc_set_server_status_up(struct server *srv)
 	}
 
 	/* note that eweight cannot be 0 here */
-	fwlc_queue_srv(srv);
+	fwlc_queue_srv(srv, srv->next_eweight)

Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-11 Thread Christopher Faulet

On 10/12/2020 at 19:38, Peter Statham wrote:

 > Sorry for the delay in getting back to you.  It is the same crash;
 > we've been trying to narrow down the exact combination of compiler,
 > libraries, kernel, hypervisor, etc. that causes the issue now that we
 > know it isn't universal but that's turning out to be trickier than
 > identifying the issue.
 >
 > I only backported the changes to the src/lb_fwlc.c file, but
 > backporting 1b87748ff5 seems to work just as well.  So far we haven't
 > been able to provoke the issue with the changes in 1b87748ff5 applied
 > to the 1.8 tree so that does look like a solution.
 >
 > We will keep testing and trying to narrow the issue down.

Since I wrote the above, I have managed to replicate the issue on 1.8 with
1b87748ff5 applied, so it looks as if that was not the solution after all.


I include a binary built from 1.8.27 with 1b87748ff5 backported and a core dump.

haproxy-1.8.27+1b87748ff5 

haproxy-1.8.27+1b87748ff5.core 






Thanks Peter, I'll try to take a look today. Is the reproducer the same?

--
Christopher Faulet



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-10 Thread Peter Statham
On Tue, 8 Dec 2020 at 14:55, Christopher Faulet  wrote:
>
> > On 04/12/2020 at 21:24, Peter Statham wrote:
> > I might have spoken too soon.
> >
> > The latest release of 1.8 works flawlessly on my Debian desktop but
> > still crashes when I attempt the same configuration on a CentOS
> > virtual machine on our VMWare cluster.
> >
> > I'm not sure if this is down to differences in the way memory fencing
> > or thread scheduling work on these platforms or if it is a
> > library/compiler issue.  Backporting the LBPRM spinlocks from 1.9's
> > src/lb_fwlc.c seems to help but I will continue investigating and
> > hopefully rule out some of the other possibilities.
> >
>
> Hum, not good. Peter, is it the same crash or not? I didn't check very
> deeply, but I guess you backported the commit 1b87748ff5 ("BUG/MEDIUM:
> lb/threads: always properly lock LB algorithms on maintenance
> operations"). A comment in the commit message says it may be required on
> the 1.8 if some bugs surface in this area.
>
> However, I'm surprised because locked functions are called for the
> rendez-vous point. It means all threads are blocked at the same point,
> waiting until the updates on the servers are performed.
>
> --
> Christopher Faulet

My apologies for replying to the wrong address, Christopher.  I have pasted
the body of that email here.

> Sorry for the delay in getting back to you.  It is the same crash;
> we've been trying to narrow down the exact combination of compiler,
> libraries, kernel, hypervisor, etc. that causes the issue now that we
> know it isn't universal but that's turning out to be trickier than
> identifying the issue.
>
> I only backported the changes to the src/lb_fwlc.c file, but
> backporting 1b87748ff5 seems to work just as well.  So far we haven't
> been able to provoke the issue with the changes in 1b87748ff5 applied
> to the 1.8 tree so that does look like a solution.
>
> We will keep testing and trying to narrow the issue down.

Since I wrote the above, I have managed to replicate the issue on 1.8 with
1b87748ff5 applied, so it looks as if that was not the solution after all.

I include a binary built from 1.8.27 with 1b87748ff5 backported and a core
dump.

 haproxy-1.8.27+1b87748ff5

 haproxy-1.8.27+1b87748ff5.core


-- 

Peter Statham
Loadbalancer.org Ltd.


Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-08 Thread Christopher Faulet

On 04/12/2020 at 21:24, Peter Statham wrote:

I might have spoken too soon.

The latest release of 1.8 works flawlessly on my Debian desktop but
still crashes when I attempt the same configuration on a CentOS
virtual machine on our VMWare cluster.

I'm not sure if this is down to differences in the way memory fencing
or thread scheduling work on these platforms or if it is a
library/compiler issue.  Backporting the LBPRM spinlocks from 1.9's
src/lb_fwlc.c seems to help but I will continue investigating and
hopefully rule out some of the other possibilities.



Hum, not good. Peter, is it the same crash or not? I didn't check very
deeply, but I guess you backported the commit 1b87748ff5 ("BUG/MEDIUM:
lb/threads: always properly lock LB algorithms on maintenance operations"). A
comment in the commit message says it may be required on the 1.8 if some bugs
surface in this area.


However, I'm surprised because locked functions are called for the rendez-vous
point. It means all threads are blocked at the same point, waiting until the
updates on the servers are performed.


--
Christopher Faulet



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-12-04 Thread Peter Statham
On Mon, 26 Oct 2020 at 13:05, Peter Statham wrote:
>
> On Mon, 19 Oct 2020 at 10:00, Christopher Faulet  wrote:
> >
> > > On 16/10/2020 at 10:04, Christopher Faulet wrote:
> > > > On 13/10/2020 at 14:53, Peter Statham wrote:
> > >> Hello,
> > >>
> > >> We've found an issue when using agent checks in conjunction with the 
> > >> weighted
> > >> least connections algorithm in multithreaded mode.  It seems to me as if 
> > >> it is
> > >> possible for next_eweight in struct server to be modified in another 
> > >> thread
> > >> during the execution of fwlc_srv_reposition.  If next_eweight is set to 
> > >> zero
> > >> then a division by zero occurs on line 59 in src/lb_fwlc.c in 
> > >> fwlc_queue_srv.
> > >>
> > >> I notice that in haproxy-2.0.18 this section of code is protected by
> > >> HA_SPINLOCKs and I've been unable to replicate this issue in that 
> > >> version.
> > >>
> > >> I've written an agent (attached), bad_agent.py, which provokes this 
> > >> condition by
> > >> switching randomly between 1 and 0 percent.  I also include a minimal
> > >> configuration, cfg (also attached), which seems sufficient to cause the 
> > >> issue.
> > >> With these two running, “ab -n 500 -c 500 http://192.168.92.1:8080/”
> > >> will
> > >> quickly crash the haproxy process.
> > >>
> > >> I include links to a coredump and the binary that generated it 
> > >> (unstripped).
> > >> The backtrace of thread 1 follows.
> > >>
> > >
> > > Hi,
> > >
> > > Thanks for the reproducer. I'm able to crash HAProxy too using your 
> > > config and
> > > your agent. It seems to only crash on the 1.8. I'll investigate.
> > >
> >
> > Hi,
> >
> > In fact, it fails in all branches supporting threads. The leastconn and
> > first load-balancing algorithms are affected by this bug. In leastconn, it
> > may crash because of the division by 0 when the server weight is set to 0.
> > But for both algos, the server tree may also be corrupted, leading to
> > stranger and undefined bugs.
> >
> > I pushed a fix (commit 26a52a) and backported it as far as 1.8. So, it 
> > should be
> > fixed in all branches now.
> >
> > Thanks!
> > --
> > Christopher Faulet
>
> Thank you for making a patch for this bug, Christopher.  I've checked
> out the 1.8 master (I would have done so sooner, but I'm afraid I
> didn't have access to my email last week) and I'm happy to say I can't
> replicate the crash. :)
>
> --
> Peter Statham

Hi,

I might have spoken too soon.

The latest release of 1.8 works flawlessly on my Debian desktop but
still crashes when I attempt the same configuration on a CentOS
virtual machine on our VMWare cluster.

I'm not sure if this is down to differences in the way memory fencing
or thread scheduling work on these platforms or if it is a
library/compiler issue.  Backporting the LBPRM spinlocks from 1.9's
src/lb_fwlc.c seems to help but I will continue investigating and
hopefully rule out some of the other possibilities.

-- 
Peter Statham
Loadbalancer.org Ltd.



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-10-26 Thread Peter Statham
On Mon, 19 Oct 2020 at 10:00, Christopher Faulet  wrote:
>
> > On 16/10/2020 at 10:04, Christopher Faulet wrote:
> > > On 13/10/2020 at 14:53, Peter Statham wrote:
> >> Hello,
> >>
> >> We've found an issue when using agent checks in conjunction with the 
> >> weighted
> >> least connections algorithm in multithreaded mode.  It seems to me as if 
> >> it is
> >> possible for next_eweight in struct server to be modified in another thread
> >> during the execution of fwlc_srv_reposition.  If next_eweight is set to 
> >> zero
> >> then a division by zero occurs on line 59 in src/lb_fwlc.c in 
> >> fwlc_queue_srv.
> >>
> >> I notice that in haproxy-2.0.18 this section of code is protected by
> >> HA_SPINLOCKs and I've been unable to replicate this issue in that version.
> >>
> >> I've written an agent (attached), bad_agent.py, which provokes this 
> >> condition by
> >> switching randomly between 1 and 0 percent.  I also include a minimal
> >> configuration, cfg (also attached), which seems sufficient to cause the 
> >> issue.
> >> With these two running, “ab -n 500 -c 500 http://192.168.92.1:8080/”
> >> will
> >> quickly crash the haproxy process.
> >>
> >> I include links to a coredump and the binary that generated it 
> >> (unstripped).
> >> The backtrace of thread 1 follows.
> >>
> >
> > Hi,
> >
> > Thanks for the reproducer. I'm able to crash HAProxy too using your config 
> > and
> > your agent. It seems to only crash on the 1.8. I'll investigate.
> >
>
> Hi,
>
> In fact, it fails in all branches supporting threads. The leastconn and first
> load-balancing algorithms are affected by this bug. In leastconn, it may crash
> because of the division by 0 when the server weight is set to 0. But for both
> algos, the server tree may also be corrupted, leading to stranger and
> undefined bugs.
>
> I pushed a fix (commit 26a52a) and backported it as far as 1.8. So, it should 
> be
> fixed in all branches now.
>
> Thanks!
> --
> Christopher Faulet

Thank you for making a patch for this bug, Christopher.  I've checked
out the 1.8 master (I would have done so sooner, but I'm afraid I
didn't have access to my email last week) and I'm happy to say I can't
replicate the crash. :)

--
Peter Statham



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-10-19 Thread Christopher Faulet

On 16/10/2020 at 10:04, Christopher Faulet wrote:

On 13/10/2020 at 14:53, Peter Statham wrote:

Hello,

We've found an issue when using agent checks in conjunction with the weighted
least connections algorithm in multithreaded mode.  It seems to me as if it is
possible for next_eweight in struct server to be modified in another thread
during the execution of fwlc_srv_reposition.  If next_eweight is set to zero
then a division by zero occurs on line 59 in src/lb_fwlc.c in fwlc_queue_srv.

I notice that in haproxy-2.0.18 this section of code is protected by
HA_SPINLOCKs and I've been unable to replicate this issue in that version.

I've written an agent (attached), bad_agent.py, which provokes this condition by
switching randomly between 1 and 0 percent.  I also include a minimal
configuration, cfg (also attached), which seems sufficient to cause the issue.
With these two running, “ab -n 500 -c 500 http://192.168.92.1:8080/” will
quickly crash the haproxy process.

I include links to a coredump and the binary that generated it (unstripped).
The backtrace of thread 1 follows.



Hi,

Thanks for the reproducer. I'm able to crash HAProxy too using your config and
your agent. It seems to only crash on the 1.8. I'll investigate.



Hi,

In fact, it fails in all branches supporting threads. The leastconn and first
load-balancing algorithms are affected by this bug. In leastconn, it may crash
because of the division by 0 when the server weight is set to 0. But for both
algos, the server tree may also be corrupted, leading to stranger and
undefined bugs.


I pushed a fix (commit 26a52a) and backported it as far as 1.8. So, it should be 
fixed in all branches now.


Thanks!
--
Christopher Faulet



Re: Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-10-16 Thread Christopher Faulet

On 13/10/2020 at 14:53, Peter Statham wrote:

Hello,

We've found an issue when using agent checks in conjunction with the weighted 
least connections algorithm in multithreaded mode.  It seems to me as if it is 
possible for next_eweight in struct server to be modified in another thread 
during the execution of fwlc_srv_reposition.  If next_eweight is set to zero 
then a division by zero occurs on line 59 in src/lb_fwlc.c in fwlc_queue_srv.


I notice that in haproxy-2.0.18 this section of code is protected by 
HA_SPINLOCKs and I've been unable to replicate this issue in that version.


I've written an agent (attached), bad_agent.py, which provokes this condition by 
switching randomly between 1 and 0 percent.  I also include a minimal 
configuration, cfg (also attached), which seems sufficient to cause the issue.  
With these two running, “ab -n 500 -c 500 http://192.168.92.1:8080/” will
quickly crash the haproxy process.


I include links to a coredump and the binary that generated it (unstripped).  
The backtrace of thread 1 follows.




Hi,

Thanks for the reproducer. I'm able to crash HAProxy too using your config and 
your agent. It seems to only crash on the 1.8. I'll investigate.


--
Christopher Faulet



Crash when using wlc in multithreaded mode with agent checks (1.8.26).

2020-10-13 Thread Peter Statham
Hello,

We've found an issue when using agent checks in conjunction with the
weighted least connections algorithm in multithreaded mode.  It seems to me
as if it is possible for next_eweight in struct server to be modified in
another thread during the execution of fwlc_srv_reposition.  If
next_eweight is set to zero then a division by zero occurs on line 59 in
src/lb_fwlc.c in fwlc_queue_srv.

I notice that in haproxy-2.0.18 this section of code is protected by
HA_SPINLOCKs and I've been unable to replicate this issue in that version.

I've written an agent (attached), bad_agent.py, which provokes this
condition by switching randomly between 1 and 0 percent.  I also include a
minimal configuration, cfg (also attached), which seems sufficient to cause
the issue.  With these two running “ab -n 500 -c 500
http://192.168.92.1:8080/” will quickly crash the haproxy process.
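
[Editor's note: the bad_agent.py and cfg attachments referenced above are not
reproduced in this archive. As a rough, illustrative sketch only -- the listen
port, structure and names below are assumptions, not the original script -- an
agent of the kind described could answer every agent-check probe with a weight
of either 0% or 1%, chosen at random:

#!/usr/bin/env python3
# Illustrative stand-in for bad_agent.py (not the original attachment): an
# haproxy agent-check responder that returns a random weight of "0%" or "1%"
# on each probe, so the server's effective weight keeps flapping to zero.
# The listen address and port are arbitrary.
import random
import socketserver

class AgentHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # The agent-check protocol accepts a single ASCII line; a value such
        # as "75%" sets the server weight to that percentage of the weight
        # configured in the backend.
        weight = random.choice((0, 1))
        self.request.sendall("{0}%\n".format(weight).encode("ascii"))

class AgentServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

if __name__ == "__main__":
    with AgentServer(("0.0.0.0", 9999), AgentHandler) as server:
        server.serve_forever()

Per the report, the accompanying configuration would pair "balance leastconn"
with per-server "agent-check" (for instance "agent-port 9999" and a short
"agent-inter") and "nbthread" greater than 1 in the global section, so that
several threads reposition servers while the agent keeps changing the weight.]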

I include links to a coredump and the binary that generated it
(unstripped).  The backtrace of thread 1 follows.

GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./haproxy...done.
[New LWP 11307]
[New LWP 11308]

warning: Unexpected size of section `.reg-xstate/11307' in core file.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./haproxy -db -- cfg'.
Program terminated with signal SIGFPE, Arithmetic exception.

warning: Unexpected size of section `.reg-xstate/11307' in core file.
#0  0x56120ec24a90 in fwlc_queue_srv (s=0x56120f989e80) at
src/lb_fwlc.c:59
59  fwlc_queue_srv(s);
[Current thread is 1 (Thread 0x7f5871e723c0 (LWP 11307))]
(gdb) set pagination off
(gdb) bt full
#0  0x56120ec24a90 in fwlc_queue_srv (s=0x56120f989e80) at
src/lb_fwlc.c:59
No locals.
#1  fwlc_srv_reposition (s=0x56120f989e80) at src/lb_fwlc.c:59
No locals.
#2  0x56120ebf5390 in connect_server (s=s@entry=0x56120fa02650) at
src/backend.c:1234
count = 
cli_conn = 0x7f586c504e70
srv_conn = 0x7f586c099020
srv_cs = 
old_cs = 
srv = 
reuse = 
err = 
#3  0x56120eb990f8 in sess_update_stream_int (s=0x56120fa02650) at
src/stream.c:886
conn_err = 
srv = 0x56120f989e80
si = 0x56120fa028c0
req = 0x56120fa02660
srv = 
si = 
req = 
conn_err = 
#4  process_stream (t=) at src/stream.c:2234
srv = 
s = 0x56120fa02650
sess = 
rqf_last = 
rpf_last = 2147483648
rq_prod_last = 
rq_cons_last = 
rp_cons_last = 
rp_prod_last = 
req_ana_back = 
req = 0x56120fa02660
res = 0x56120fa026a0
si_f = 0x56120fa02898
si_b = 0x56120fa028c0
#5  0x56120ec2007d in process_runnable_tasks () at src/task.c:317
t = 
i = 
max_processed = 
local_tasks = {0x56120fa029b0, 0x7f586c04d340, 0x50004, 0xb2,
0x0, 0x56120ec2527b , 0x50004, 0xb2, 0x0,
0x7f586c529180, 0x2c80, 0x56120ec15513 , 0x0,
0x, 0x7fff32892a50, 0x7fff32892a50}
local_tasks_count = 
final_tasks_count = 0
#6  0x56120ebce414 in run_poll_loop () at src/haproxy.c:2499
next = 
exp = 
next = 
exp = 
#7  run_thread_poll_loop (data=) at src/haproxy.c:2569
ptif = 
ptdf = 
start_lock = 0
#8  0x56120eb4212f in main (argc=, argv=)
at src/haproxy.c:3172
tids = 
threads = 0x56120f9861c0
i = 
old_sig = {__val = {2048, 48, 72057589742960643, 94635570839576,
31, 80, 18446744073709544224, 0, 206158430211, 0, 0, 472446402651,
511101108348, 0, 140017846675184, 140017846656064}}
blocked_sig = {__val = {1844674406710583, 18446744073709551615
}}
err = 
retry = 
limit = {rlim_cur = 4027, rlim_max = 4027}
errmsg = "\000pression support (neither USE_ZLIB nor
\000X%-x\342/6\375\000\000\000\000\000\000\000(-\211\062\377\177\000\000\350+\211\062\377\177\000\000\001\303\307\016\022V\000\000\201\000\000\000\000\000\000\000H,\211\062\377\177\000\000`C\225\017"
pidfd = 
(gdb)

 core-1.8.26

 haproxy-1.8.26