Hi Lukas,
On Tue, Nov 10, 2020 at 11:55:33PM +0100, Lukas Tribus wrote:
> Hello Willy,
>
> On Fri, 6 Nov 2020 at 10:59, Willy Tarreau wrote:
> > > > hate the noise that some people regularly make about "UDP support"
> > >
> > > I am *way* more concerned about what to tell people when they report
On Tue, Nov 10, 2020 at 10:30:52PM +0100, Tim Düsterhus wrote:
(...)
> Let me (or Ilya) know if you have any questions or if you notice any
> issues with it. Personally I'm super happy with how it turned out :-)
Many thanks to you and Ilya for handling this. I know from having followed
your exchang
Hi,
This is a friendly bot that watches fixes pending for the next haproxy-stable
release! One such e-mail is sent periodically once patches are waiting in the
last maintenance branch, and an ideal release date is computed based on the
severity of these fixes and their merge date. Responses t
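The scheduling idea the bot describes (an ideal release date derived from the severity of pending fixes and their merge dates) can be sketched roughly as below. The severity names, day weights, and function name are hypothetical illustrations of the idea, not the actual stable-bot code:

```python
from datetime import date, timedelta

# Hypothetical mapping: how many days after its merge date a fix of a
# given severity should ideally be released. The real bot's weights differ.
SEVERITY_DELAY = {"CRITICAL": 2, "MAJOR": 7, "MEDIUM": 14, "MINOR": 28}

def ideal_release_date(pending_fixes):
    """pending_fixes: list of (severity, merge_date) pairs.
    The most urgent pending fix determines the suggested date."""
    return min(merged + timedelta(days=SEVERITY_DELAY[severity])
               for severity, merged in pending_fixes)

fixes = [("MEDIUM", date(2020, 11, 1)), ("MAJOR", date(2020, 11, 5))]
print(ideal_release_date(fixes))  # → 2020-11-12
```

The earliest per-fix deadline wins, so a single critical fix pulls the whole release forward regardless of older, milder fixes.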
Hello Willy,
On Fri, 6 Nov 2020 at 10:59, Willy Tarreau wrote:
> > > hate the noise that some people regularly make about "UDP support"
> >
> > I am *way* more concerned about what to tell people when they report
> > redundant production systems meltdowns because of the traps that we
> > knew abo
On Tue, Nov 10, 2020 at 10:30:52PM +0100, Tim Düsterhus wrote:
> Let me (or Ilya) know if you have any questions or if you notice any
> issues with it. Personally I'm super happy with how it turned out :-)
>
Thanks to both of you, the whole thing is cleaner and quicker from my
point of view :-)
Hi List,
this is a kind-of follow-up to my previous email from July:
https://www.mail-archive.com/haproxy@formilux.org/msg38032.html.
You might or might not have noticed that Travis became a bit slow over the
last few months, and a few days ago they announced a new pricing model,
limiting minutes even
Hi Christopher,
On Tue, Nov 10, 2020 at 09:17:15PM +0100, Christopher Faulet wrote:
> Le 10/11/2020 à 18:12, Maciej Zdeb a écrit :
> > Hi,
> >
> > I'm so happy you're able to replicate it! :)
> >
> > With that patch that disabled pool_flush I still can reproduce on my r&d
> > server and on produ
Le 10/11/2020 à 18:12, Maciej Zdeb a écrit :
Hi,
I'm so happy you're able to replicate it! :)
With that patch that disabled pool_flush I still can reproduce on my r&d server
and on production, just different places of crash:
Hi Maciej,
Could you test the following patch, please? For now I
And it seems our existing load balancer actually works as I expect. The
order of the frontends seems to be irrelevant; the more specific one
gets picked over the fallback one.
This is really puzzling me. Why is it different on this newest VM?
Kernel 3.16 vs 4.9? But then I'm surprised that googlin
Hi,
I'm so happy you're able to replicate it! :)
With that patch that disabled pool_flush I still can reproduce on my r&d
server and on production, just different places of crash:
on r&d:
(gdb) bt
#0 tasklet_wakeup (tl=0xd720c300a000) at include/haproxy/task.h:328
#1 h2s_notify_recv (h2s=h
Hi all,
Is there a way to use some mechanism (SPOE or other) to use modsecurity v3
with haproxy (2.x)?
I found documentation on modsecurity v2 integration with SPOE, but nothing
on v3.
My goal is to protect backends with modsecurity using owasp CRS.
I've setup a nginx with modsecurity v3 on ano
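For reference, the v2 integration hooks the contrib modsecurity agent into haproxy with an SPOE filter roughly like the sketch below; a v3 setup would need an agent speaking the SPOP protocol in the same way. File paths, backend and variable names here are illustrative, loosely based on the v2 contrib example, not a tested v3 recipe:

```
# haproxy.cfg (sketch; names and paths are illustrative)
frontend web
    bind :80
    filter spoe engine modsecurity config /etc/haproxy/spoe-modsecurity.conf
    # deny the request when the agent set a non-zero return code
    http-request deny if { var(txn.modsec.code) -m int gt 0 }
    default_backend app

# backend pointing at the SPOA agent process
backend spoe-modsecurity
    mode tcp
    server agent1 127.0.0.1:12345
```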
On Tue, Nov 10, 2020 at 04:14:52PM +0100, Willy Tarreau wrote:
> Seems like we're getting closer. Will continue digging now.
I found that among the 5 crashes I got, 3 were under pool_flush()
that is precisely called during the soft stopping. I tried to
disable that function with the patch below an
Hi Maciej,
On Tue, Nov 10, 2020 at 03:21:45PM +0100, Maciej Zdeb wrote:
> Hi,
>
> I'm very sorry that my skills in gdb and knowledge of HAProxy and C are not
> sufficient for this debugging process.
Quite frankly, you don't have to be sorry for anything :-)
I could reproduce the crash on 2.2 wi
Hi,
for years we've been using HAProxy, currently on 2.0. And we've had a
setup like this (simplified):
=
frontend
    bind :80
    use_backend
frontend
    bind :80
    use_backend
frontend fallback
    bind *:80
    use_backend
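A minimal sketch of such a setup, with hypothetical addresses and backend names since the originals were elided. When a specific address and a wildcard are bound on the same port, the kernel delivers each connection to the most specific matching socket first, which matches the "more specific one gets picked over the fallback" behaviour described later in the thread:

```
frontend site_a
    bind 192.0.2.10:80            # specific address (hypothetical)
    use_backend bk_site_a

frontend fallback
    bind *:80                     # wildcard: only receives connections
                                  # not matching a specific-address bind
    use_backend bk_fallback
```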
Hi,
I'm very sorry that my skills in gdb and knowledge of HAProxy and C are not
sufficient for this debugging process.
With the patch applied I tried again to use spoa from
"contrib/spoa_example/". Example spoa agent does not understand my
spoe-message and silently ignores it, but it doesn't matt