On 02/12/2018 04:28 PM, Franks Andy (IT Technical Architecture Manager)
wrote:
Hi Fred,
Hi Franks,
Please bottom post when you reply.
Thanks for the reply.
I have two peers synchronising (we use keepalived over the two to control which
is live).
HAProxy config:
peers lb_replication
DNS over TCP :)
To share the solution with everyone, the problem was fixed by a
configuration update.
Mike added "accepted_payload_size 1024" to his resolvers section.
By default, HAProxy announces an accepted payload of 512 bytes, which leaves
room for only 3 of the records reported by consul.
With a payload of 1024, up
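For reference, a minimal sketch of where that directive lives. The resolver name, nameserver address, and port are placeholders; only the "accepted_payload_size 1024" line comes from the fix described above:

```
resolvers consul
    nameserver consul1 127.0.0.1:8600
    accepted_payload_size 1024
```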
-- Forwarded message --
From: David CARLIER
Date: 12 February 2018 at 15:37
Subject: Plans for 1.9
To: w...@1wt.eu
I was thinking, as a contrib work, of making haproxy more fuzzer "compliant"
(AFL and LLVM's libFuzzer, for example), which would mean turning haproxy into a
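To illustrate the idea, a minimal libFuzzer-style harness sketch. The toy_parse() function below is a hypothetical stand-in: in a real harness it would be one of haproxy's parsing routines, exposed once haproxy is built as a fuzz-friendly library — which is exactly the restructuring the mail proposes.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in parser (hypothetical): a real harness would call into
 * an actual haproxy parsing routine instead. */
static int toy_parse(const uint8_t *buf, size_t len)
{
    return (len >= 4 && memcmp(buf, "GET ", 4) == 0) ? 0 : -1;
}

/* libFuzzer entry point: the fuzzing driver calls this repeatedly
 * with mutated inputs; crashes and sanitizer reports surface bugs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    toy_parse(data, size);
    return 0; /* return value is ignored by libFuzzer; 0 by convention */
}
```

Such a harness would typically be built with `clang -g -fsanitize=fuzzer,address harness.c -o harness`.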
Hi Fred,
Thanks for the reply.
I have two peers synchronising (we use keepalived over the two to control which
is live).
HAProxy config:
peers lb_replication
peer server1 10.128.176.141:1024
peer server2 10.128.176.142:1024
backend sourceaddr
stick-table type ip size 10240k
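The snippet above is cut off by the archive; one plausible continuation, tying the stick-table to the peers section, might look like the following. The expire value and the stick rule are assumptions, not taken from the original mail:

```
peers lb_replication
    peer server1 10.128.176.141:1024
    peer server2 10.128.176.142:1024

backend sourceaddr
    stick-table type ip size 10240k expire 30m peers lb_replication
    stick on src
```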
-- Forwarded message --
From: David CARLIER
Date: 12 February 2018 at 14:37
Subject: Re: haproxy-1.8 build failure on FreeBSD/i386 (clang)
To: trtrmi...@gmail.com
I think I'm the one behind this relatively recent change ... why not add
to the condition the
Hi
I have had this patch locally for a couple of weeks; it reliably uses the proper
current year for the verbosity information part, relying on the startup
date.
Kind regards.
From deab708b01f90a14e466dcca29056b007eddd600 Mon Sep 17 00:00:00 2001
From: David Carlier
Date: Fri,
Hi,
This is a small patch to the Lua documentation.
It should be backported to 1.8.
Note that the function prototype is compatible with old versions.
Thierry
From 4feaa411b6cca0b3a57ebe16c13ce056d93eb74a Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER
Date: Mon,
On 02/08/2018 11:22 AM, Franks Andy (IT Technical Architecture Manager)
wrote:
Hi all,
Hello Franks,
Haproxy 1.6.13
I’ve checked the documentation again but can’t see an option for this.
We sometimes clear backup path server use for individual connections and
whilst the peers
Replying to myself :)
I think I spotted a bug in HAProxy as well.
For some reason, when I run HAProxy in debug mode, I never ever have the
issue (all my servers are properly populated and maintained).
I did a strace of the process running in daemon mode in the container, and
I can confirm the
Continuing my investigation, I found another interesting piece of
information:
I run haproxy and my consul environment in a docker host, through
docker-compose and I can reproduce the same issue as you.
Basically, I have a service delivered by 20 containers, and HAProxy in
docker can see only
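A back-of-envelope estimate of why only a few of the backends show up under the default 512-byte DNS payload. All sizes below are illustrative assumptions (header, question, and per-record sizes are not exact wire-format values), but they reproduce the "only 3 records" symptom seen in this thread:

```python
# Rough capacity estimate for DNS answers under a UDP payload cap.
# Sizes are assumptions for illustration, not exact wire-format values.

HEADER = 12        # fixed DNS header size
QUESTION = 40      # assumed size of the encoded question section
PER_RECORD = 120   # assumed bytes per service entry (SRV + A + TXT)

def records_that_fit(payload_size: int) -> int:
    """How many PER_RECORD-sized answers fit under payload_size bytes."""
    return max(0, (payload_size - HEADER - QUESTION) // PER_RECORD)

print(records_that_fit(512))   # default announced payload
print(records_that_fit(1024))  # after accepted_payload_size 1024
```

Under these assumptions, 512 bytes holds 3 records while 1024 holds 8 — which is why raising accepted_payload_size lets HAProxy see more of the fleet.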
First, I confirm the following bug in consul 1.0.5:
- start X instances of a service
- scale the service to X+Y (with Y > 1)
==> then consul crashes...
From time to time, I also saw HAProxy getting only 10 servers from 20 for a
given service.
I'll revert to 1.0.2 for now.
The order of the