I'm pretty sure that we are seeing "the service is down", though only
briefly. We started looking at the logs because we were seeing test
failures and failures with our code deploys, which check the haproxy
status as part of rolling the code update out to the machines. We aren't
manually having [...]
On Sat, Mar 21, 2020 at 10:08:15AM +0100, Willy Tarreau wrote:
> [...]
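The deploy-time status check mentioned above isn't shown anywhere in the
thread; a rough sketch of one way such a check could work (the admin
socket path and the CSV field position are assumptions, not the poster's
actual setup):

    # a minimal sketch, not the poster's actual deploy check
    systemctl is-active --quiet haproxy || exit 1    # process-level check
    # "show stat" CSV field 18 is the status column; fail if anything is DOWN
    echo "show stat" | socat stdio /run/haproxy/admin.sock \
      | awk -F, '$18 == "DOWN" { down++ } END { exit down ? 1 : 0 }'

A probe like this that happens to run during a reload window would briefly
report the service as down, matching the symptom described above.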
On Fri, Mar 20, 2020 at 08:10:25AM -0600, Sean Reifschneider wrote:
> I grabbed the source from the PPA and rebuilt it, installed the dbg
> package, and here's one of the "bt full"s:
Thanks!
> [...]
I grabbed the source from the PPA and rebuilt it, installed the dbg
package, and here's one of the "bt full"s:

(gdb) bt full
#0  pattern_exec_match (head=head@entry=0x55e4dd275478,
    smp=smp@entry=0x7fbf9ef650c0, fill=fill@entry=0) at src/pattern.c:2541
        __pl_l =
        __pl_r =
[...]
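For reference, the rebuild-and-inspect sequence described here might look
roughly like the following; the exact package names produced by the PPA
build are assumptions:

    # a minimal sketch, assuming deb-src entries for the PPA are enabled
    apt-get source haproxy                 # fetch and unpack the source package
    sudo apt-get build-dep haproxy
    cd haproxy-2.1.* && dpkg-buildpackage -b
    sudo dpkg -i ../haproxy-dbgsym_*.deb   # debug-symbol package name assumed
    coredumpctl gdb haproxy                # open the newest haproxy core in gdb
    # then at the (gdb) prompt: bt full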
On 17/03/2020 at 16:41, Sean Reifschneider wrote:
> [...]
The only place tcp-request appears in my config is in relation to
rate-limiting, which we have set up to track but not enforce. Here are the
associated rules:

frontend main
[...]
    acl rate_whitelist src 10.0.0.1
    acl rate_whitelist src 10.0.1.1
    acl rate_whitelist src 10.0.1.2
[...]
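For context, "track but not enforce" generally means the frontend has a
stick-table and tracking rules but no matching reject/deny action; a
minimal sketch of that pattern (the table size and counter choice are
assumptions, not the poster's actual config):

    frontend main
        # track per-source connection rates without acting on them
        stick-table type ip size 100k expire 30s store conn_rate(10s)
        tcp-request connection track-sc0 src if !rate_whitelist
        # no "tcp-request connection reject ..." rule follows, so the
        # rates are recorded for observation but never enforced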
On 06/03/2020 at 18:53, Sean Reifschneider wrote:
> [...]
❦ 16 March 2020 16:02 -06, Sean Reifschneider:
> [...]
I reverted back to haproxy 2.0.13 from the PPA last Wednesday and have
verified that we get no segfaults on that. If there's anything else I can
provide for you, let me know. Otherwise I'm just gonna close this ticket
in our bugtracker. :-)
Sean
On Fri, Mar 6, 2020 at 10:53 AM Sean [...]
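Reverting to and holding a known-good build with apt looks roughly like
this; the exact PPA version string is an assumption (check with
"apt-cache madison haproxy"):

    sudo apt install haproxy=2.0.13-1ppa1~bionic   # version string assumed
    sudo apt-mark hold haproxy                     # keep apt from upgrading it again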
Here's what the stack traces look like; they all seem to be showing
"pattern_exec_match" and "epoll_wait":

         PID: 14348 (haproxy)
         UID: 0 (root)
         GID: 0 (root)
      Signal: 11 (SEGV)
   Timestamp: Thu 2020-03-05 19:59:05 MST (14h ago)
Command Line: [...]
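That header is a standard systemd-coredump record; listing and inspecting
the crashes it captures works like this:

    coredumpctl list haproxy   # all recorded haproxy crashes
    coredumpctl info 14348     # full metadata for the PID shown above
    coredumpctl gdb 14348      # load that core into gdb, then "bt full"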
❦ 4 March 2020 13:19 -07, Sean Reifschneider:
> I've upgraded back to 2.1 and installed systemd-coredump; I'll update
> when I have additional information. I wasn't able to find a -dbgsym
> package; I even looked in the Debian pool directory for the PPA. We're
> talking like a [...]
(Sorry, I meant version 2.0.13, not 2.0.3.)
On Wed, Mar 4, 2020 at 1:19 PM Sean Reifschneider wrote:
> [...]
It's maybe a little early to say, but 2.0.3 has not segfaulted since I
installed it, around 20 hours ago. The previous 20 hours had maybe a dozen
segfaults, so this might tell us something.
I've upgraded back to 2.1 and installed systemd-coredump; I'll update
when I have additional [...]
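Getting those core dumps captured in the first place is a one-package
install on stock Ubuntu 18.04:

    sudo apt install systemd-coredump   # registers itself as the kernel core handler
    sysctl kernel.core_pattern          # should now pipe cores to systemd-coredump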
❦ 3 March 2020 15:34 -07, Sean Reifschneider:
> [...]
We've been running the haproxy 1.8 series for quite a while. We're currently
in the process of updating to 2.1, and have installed it from the vbernat PPA
on Ubuntu 18.04 using the same old config file.
Now we are seeing segfaults a few times a day:

Mar 03 14:53:52 fw1.dev.realgo.com kernel: [...]
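The upgrade path described here, via Vincent Bernat's PPA, is roughly the
following; validating the old config against the new version first is a
common precaution:

    sudo add-apt-repository ppa:vbernat/haproxy-2.1
    sudo apt update && sudo apt install haproxy
    haproxy -c -f /etc/haproxy/haproxy.cfg   # check the 1.8-era config parses under 2.1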