Hi Andrew, James,
On Wed, Apr 12, 2017 at 11:46:57PM +0100, Andrew Smalley wrote:
> I do not see how the old haproxy being on a separate PID could do anything
> with a socket created by a new PID.
That's what James explained. The old process tries to clean up before
leaving and tries to clean
On Apr 12, 2017 6:49 PM, "Andrew Smalley" wrote:
Hi James,
Thank you for your reply.
I do not see how the old haproxy being on a separate PID could do anything
with a socket created by a new PID.
How? Easily. Unix domain sockets are presented as files. *Any*
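The sockets-as-files point can be demonstrated directly. The sketch below (hypothetical paths, standard library only, simulating both processes in one script) shows that once any process unlinks a Unix domain socket path, new clients can no longer connect, even though the process that bound it still holds a valid descriptor:

```python
# A minimal sketch (hypothetical paths, standard library only) of how a
# Unix domain socket's filesystem entry can be removed out from under
# the process that bound it.
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sock")

# "New" haproxy: bind and listen on the path.
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(path)
listener.listen(1)

# "Old" haproxy (simulated here in-process): unlink the same path while
# cleaning up. Nothing stops it -- the path is just a directory entry.
os.unlink(path)

# The listener still holds a valid descriptor, but the path is gone,
# so no new client can ever reach it.
err = None
try:
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(path)
except FileNotFoundError as exc:
    err = exc.__class__.__name__

print(os.path.exists(path))  # False
print(err)                   # FileNotFoundError
listener.close()
```

This is exactly the shape of the race being reported: the unlink succeeds regardless of which PID currently owns the bound socket.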
Do you bring up your new instance with real servers in a maintenance
state? This seems to be required to do a correct handover before making
them
Hi Andrew:
Thanks for your feedback, but I'm describing a very specific bug wherein the
old haproxy will unlink the new haproxy's bound unix domain socket upon
reload, due to a race condition in the domain socket cleanup code, if a
listen overflow occurs while the graceful reload is in progress.
On Wed,
Hi James,
When you do a graceful reload of haproxy this is what happens.
1. The old process accepts no more connections; its stats page is
stopped, and so is its socket.
2. A new haproxy instance is started; new clients connect to it, and it
holds the live socket.
3. when the old
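For reference, the handover described above is typically triggered with a command along these lines (paths are illustrative); `-sf` makes the new process tell the old PIDs to finish their existing connections and then exit:

```
haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid \
        -sf $(cat /run/haproxy.pid)
```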
This just hit us again on a different set of load balancers... if there's a
listen socket overflow on a domain socket during a graceful reload, haproxy
completely deletes the domain socket and becomes inaccessible.
On Tue, Feb 21, 2017 at 6:47 PM, James Brown wrote:
> Under load,
Under load, we're sometimes seeing a situation where HAProxy will
completely delete a bound unix domain socket after a reload.
The "bad flow" looks something like the following:
- haproxy is running on pid A, bound to /var/run/domain.sock (via a bind
line in a frontend)
- we run
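For context, a frontend bound to a Unix domain socket looks roughly like this (frontend and backend names are illustrative; the socket path matches the one in the report):

```
frontend www
    bind /var/run/domain.sock mode 660
    default_backend app_servers
```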