Hi William,
On Tue, Apr 09, 2019 at 01:54:03PM +, William Dauchy wrote:
> Hello,
>
> Probably a useless report as I don't have a lot of information to provide,
> but we faced an issue where the unix socket was unresponsive, with the
> processes using all CPU (1600% with nbthread 16)
>
> I only
The allocated trash chunk is not freed properly, causing a memory leak
exhibited as growth in the trash pool allocations. The bug was introduced
in commit 271022 (BUG/MINOR: map: fix map_regm with backref).
This should be backported to all branches where the above commit was
backported.
---
src
Thank you; I had missed the context from 1.9.6. I've updated my test machine
and will report back on Monday (or earlier, if it runs into trouble)
--
Richard Russo
to...@enslaves.us
On Fri, Apr 12, 2019, at 4:17 AM, Olivier Houchard wrote:
> Hi,
>
> On Fri, Apr 12, 2019 at 08:37:10AM +0200
Hi Emeric,
On 4/12/19 5:26 PM, Emeric Brun wrote:
Do you have ssl enabled on the server side?
Yes, ssl is on frontend and backend with ssl checks enabled.
If that is the case, could you replace the health check with a simple TCP check
(without ssl)?
What I noticed before that if I (re)start HAProxy
Hi Marcin,
Do you have ssl enabled on the server side? If that is the case, could you
replace the health check with a simple TCP check (without ssl)?
Regarding the show info/lsof it seems there are no more sessions on the client
side but remaining ssl jobs (CurrSslConns) and I suspect the health checks to
m
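For reference, a minimal sketch of the switch Emeric is suggesting (server names and addresses are placeholders, not from the reporter's config): with `ssl` on the server line, health checks use SSL by default, so dropping to a plain TCP connect check means adding `no-check-ssl`.

```
backend be_app
    # Current setup: SSL for traffic, SSL health check (implied by "ssl")
    server app1 192.0.2.10:443 ssl verify none check

    # Suggested test: SSL for traffic, plain TCP connect for the check
    server app1 192.0.2.10:443 ssl verify none check no-check-ssl
```

(Only one of the two `server` lines would be present at a time; they are shown together here for comparison.)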
Hi Emeric,
On 4/10/19 2:20 PM, Emeric Brun wrote:
On 4/10/19 1:02 PM, Marcin Deranek wrote:
Hi Emeric,
Our process limit in QAT configuration is quite high (128) and I was able to
run 100+ openssl processes without a problem. According to Joel from Intel
problem is in cleanup code - presuma
Patrick, Christian,
Could you try the attached patch? It should fix the issue.
Thanks
--
William Lallemand
From 71dae971bf9b59ec8ef2f94cab2c0d762744c2a9 Mon Sep 17 00:00:00 2001
From: William Lallemand
Date: Fri, 12 Apr 2019 14:40:36 +0200
Subject: [PATCH] BUG/MEDIUM: mworker/cli: don't over
Hey Guys,
I can confirm those issues as well as the proposed fix/workaround to
solve the issue.
I upgraded our "nbproc" setup from 1.7.x to 1.9.6 today and noticed some
missing entries from the stats socket, e.g.:
# echo 'show stat' | socat stdio /run/haproxy.stat|wc -l
1442
Which is correct,
yep, I'm going to play in my own fork as well.
I will show something soon
Fri, Apr 12, 2019 at 16:50, Tim Düsterhus :
> Ilya,
>
> On 12.04.19 at 10:54, Илья Шипицин wrote:
> > I wish to enable travis-ci, cirrus-ci builds for github repo
> > I do not see any public member there.
> >
> > who does
Ilya,
On 12.04.19 at 10:54, Илья Шипицин wrote:
> I wish to enable travis-ci, cirrus-ci builds for github repo
> I do not see any public member there.
>
> who controls it?
>
Willy does. And FWIW I've played around with Travis CI in a GitHub fork
of mine: https://github.com/TimWolla/haprox
Hi,
On Fri, Apr 12, 2019 at 08:37:10AM +0200, Maciej Zdeb wrote:
> Hi Richard,
>
> Those patches from Olivier (in streams) are related to my report from
> thread "[1.9.6] One of haproxy processes using 100% CPU", but as it turned
> out it wasn't a single bug and the issue is not entirely fixed yet.
>
hello,
I wish to enable travis-ci, cirrus-ci builds for github repo
I do not see any public member there.
who controls it?
thanks!