On Mon, Mar 26, 2018 at 08:53:07AM +0300, Maxim wrote:
> Hi!
>
> It's been almost 2 weeks since I've installed the patch and there were no
> segfaults since then. It seems that the problem is fixed now. Thank you!
Thank you Maxim for the report, it's much appreciated!
Willy
Hi!
It's been almost 2 weeks since I've installed the patch and there were no
segfaults since then. It seems that the problem is fixed now. Thank you!
2018-03-19 23:16 GMT+03:00 William Dauchy :
> On Mon, Mar 19, 2018 at 08:41:16PM +0100, Willy Tarreau wrote:
> > For me,
On Mon, Mar 19, 2018 at 08:41:16PM +0100, Willy Tarreau wrote:
> For me, "experimental" simply means "we did our best to ensure it works
> but we're realist and know that bug-free doesn't exist, so a risk remains
> that a bug will be hard enough to fix so as to force you to disable the
> feature
On Mon, Mar 19, 2018 at 08:28:14PM +0100, William Dauchy wrote:
> On Mon, Mar 19, 2018 at 07:28:16PM +0100, Willy Tarreau wrote:
> > Threading was clearly released with an experimental status, just like
> > H2, because we knew we'd be facing some post-release issues in these
> > two areas that are
Hi Willy,
Thank you for your detailed answer.
On Mon, Mar 19, 2018 at 07:28:16PM +0100, Willy Tarreau wrote:
> Threading was clearly released with an experimental status, just like
> H2, because we knew we'd be facing some post-release issues in these
> two areas that are hard to get 100% right
Hi William,
On Mon, Mar 19, 2018 at 06:57:50PM +0100, William Dauchy wrote:
> > However, be careful. This new implementation should be thread-safe
> > (hopefully...). But it is not optimal and in some situations, it could be
> > really
> > slower in multi-threaded mode than in single-threaded
Hi Christopher,
On Thu, Mar 15, 2018 at 04:05:04PM +0100, Christopher Faulet wrote:
> From 91b1349b6a1a64d43cc41e8546ff1d1ce17a8e14 Mon Sep 17 00:00:00 2001
> From: Christopher Faulet
> Date: Wed, 14 Mar 2018 16:18:06 +0100
> Subject: [PATCH] BUG/MAJOR: threads/queue: Fix
On 15/03/2018 at 15:50, Willy Tarreau wrote:
> On Thu, Mar 15, 2018 at 02:49:59PM +0100, Christopher Faulet wrote:
> > When we scan a queue, it is locked. So on your mark, the server queue is
> > already locked. To remove a pendconn from a queue, we also need to have a
> > lock on this queue, if it is
On Thu, Mar 15, 2018 at 02:49:59PM +0100, Christopher Faulet wrote:
> When we scan a queue, it is locked. So on your mark, the server queue is
> already locked. To remove a pendconn from a queue, we also need to have a
> lock on this queue, if it is still linked. Else we can safely remove it,
>
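The invariant Christopher describes above (an entry may only be unlinked from a queue while that queue's lock is held, because scanners hold the same lock) can be sketched in plain pthreads. This is an illustrative model only, with simplified stand-in types; it is not HAProxy's actual pendconn code or its locking primitives:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-ins for the real structures (illustrative only). */
struct pendconn {
    struct pendconn *next;
    int linked;                     /* set while the entry sits in a queue */
};

struct queue_head {
    pthread_mutex_t lock;
    struct pendconn *first;
};

/* Add p at the head of q; the queue lock protects the list pointers. */
static void pendconn_add(struct queue_head *q, struct pendconn *p)
{
    pthread_mutex_lock(&q->lock);
    p->next = q->first;
    q->first = p;
    p->linked = 1;
    pthread_mutex_unlock(&q->lock);
}

/* Remove p from q only if it is still linked. Scans and removals both
 * take q->lock, so a concurrent scanner can never observe a
 * half-unlinked entry -- the invariant discussed in the thread. */
static void pendconn_unlink(struct queue_head *q, struct pendconn *p)
{
    pthread_mutex_lock(&q->lock);
    if (p->linked) {
        struct pendconn **pp = &q->first;
        while (*pp && *pp != p)
            pp = &(*pp)->next;
        if (*pp)
            *pp = p->next;
        p->linked = 0;
    }
    pthread_mutex_unlock(&q->lock);
}
```

The `linked` flag makes a second unlink a harmless no-op, which is the property the thread relies on when a stream may be dequeued by another thread first.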
On 15/03/2018 at 12:16, Willy Tarreau wrote:
+static struct stream *pendconn_process_next_strm(struct server *srv, struct proxy *px)
{
+	struct pendconn *p = NULL;
+	struct server *rsrv;
	rsrv = srv->track;
	if (!rsrv)
		rsrv = srv;
+	if
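Reading the hunk above, the function first resolves `rsrv` to the tracked server (if any) before deciding what to dequeue. A hedged, self-contained sketch of that selection order follows; the types and the `usable` field are simplified stand-ins invented for illustration, not the real HAProxy implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins; field names here are illustrative. */
struct pendconn { struct pendconn *next; };
struct server   { struct server *track; int usable; struct pendconn *queue; };
struct proxy    { struct pendconn *queue; };

/* Pick the next pending connection for srv: server state is taken from
 * the tracked server when tracking is used (the rsrv dance in the hunk
 * above), then the server's own queue is served before the proxy's
 * shared one. */
static struct pendconn *next_pendconn(struct server *srv, struct proxy *px)
{
    struct server *rsrv = srv->track;
    struct pendconn *p;

    if (!rsrv)
        rsrv = srv;
    if (!rsrv->usable)           /* tracked server down: dequeue nothing */
        return NULL;

    if ((p = srv->queue)) {      /* server queue has priority */
        srv->queue = p->next;
        return p;
    }
    if ((p = px->queue)) {       /* then the proxy's shared queue */
        px->queue = p->next;
        return p;
    }
    return NULL;
}
```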
On Thu, Mar 15, 2018 at 11:32:41AM +0100, Christopher Faulet wrote:
> I tested with an ugly and really unsafe/buggy hack to migrate the stream and
> its client FD before waking it up, and this problem disappears. So this is
> definitely the right way to handle it. But for now, this could be a
On 15/03/2018 at 07:19, Willy Tarreau wrote:
> Hi Christopher,
> first, thank you for this one, I know how painful it can have been!
Hi Willy,
Thanks for your support :)
On Wed, Mar 14, 2018 at 09:56:19PM +0100, Christopher Faulet wrote:
(...)
But it is not optimal and in some situations,
Hi Christopher,
first, thank you for this one, I know how painful it can have been!
On Wed, Mar 14, 2018 at 09:56:19PM +0100, Christopher Faulet wrote:
(...)
> But it is not optimal and in some situations, it could be really
> slower in multi-threaded mode than in single-threaded one. The
Hi, Christopher!
Thank you very much for the patch. I'll apply it to my canary host today
but it will take a week or even more to make sure that no crashes occur.
Anyway I'll write you back.
2018-03-14 23:56 GMT+03:00 Christopher Faulet :
> On 07/03/2018 at 09:58, Christopher
On 07/03/2018 at 09:58, Christopher Faulet wrote:
I found thread-safety bugs about the management of pending connections.
It is totally fucked up :) It needs to be entirely reworked. I'm on it.
I hope to propose a patch this afternoon.
Hi,
Sorry for the lag. This issue was definitely harder
On Mon, Mar 05, 2018 at 09:19:16PM +0500, Maxim wrote:
> Hi Willy!
>
> I have 2 more haproxy-servers with exactly the same configuration and load.
> Both have threads compiled in but not enabled in the config (no nbthread). And
> there're no segfaults at all. So I'm sure everything is fine
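For readers reproducing this setup: in 1.8, threading is opt-in, so a binary built with thread support still runs single-threaded unless the `global` section enables it (the directive is `nbthread`, singular). A minimal illustrative fragment:

```
global
    # enable 4 worker threads in a single process (HAProxy 1.8+)
    nbthread 4
```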
Hi Maxim,
On Mon, Mar 05, 2018 at 03:08:11PM +0300, Maxim wrote:
> Hi!
>
> I have a backtrace for segfault in haproxy=1.8.4 with 4 threads. It happens
> usually under heavy load. Can you take a look?
>
> Using host libthread_db library "/lib/x86_64-linu
Hi!
I have a backtrace for segfault in haproxy=1.8.4 with 4 threads. It happens
usually under heavy load. Can you take a look?
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/haproxy -f /etc/haproxy/haproxy-market.cfg
-
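For anyone who needs to produce a similar report: once the kernel has written a core file, per-thread backtraces can be extracted from it with gdb. The core path below is an example, not the one from this report:

```
$ gdb /usr/sbin/haproxy /var/crash/core.haproxy    # paths are illustrative
(gdb) thread apply all bt full
(gdb) info threads
```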