Hi all,

Thanks for the responses.

I agree, in most cases having Ansible trigger a reload instead of a restart
is better, and it would prevent the situation I described. We have a few
environments with very long-running sessions where we occasionally change
the configuration to point at different backend IPs. With a reload, we
found that those old sessions would stay active against the old backend IPs
instead of the new ones we configured. That is understandable, of course,
given how reload works, but it's why we have a restart handler in Ansible
when we perform configuration changes.
I will, however, look into the option Lukas mentioned, which sounds like it
will prevent this and allow us to always reload instead:
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#hard-stop-after
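
For reference, my understanding from the documentation is that this would
just be a one-line addition to the global section of haproxy.cfg. A minimal
sketch; the 30s value is purely illustrative:

```
# Sketch only: cap how long soft-stopped (reloaded-away) processes may
# linger. Once the delay expires, remaining sessions are closed and the
# old process exits, so stale connections to old backend IPs cannot
# survive a reload indefinitely.
global
    hard-stop-after 30s
```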

Of course, regardless, it would also be great if the behaviour were made
consistent, as Moemen suggests.
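
For anyone following along: if I understand Moemen's suggestion below
correctly, it would be a change along these lines in the haproxy unit file.
This is only a sketch based on the 1.7 systemd-wrapper setup; the paths and
surrounding directives are assumptions, the KillMode line is the point:

```ini
[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
# Assumed 1.7-style wrapper invocation; adjust paths to your packaging.
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
# The current packaging uses KillMode=mixed: on stop, SIGTERM goes to the
# main process only, and leftover processes in the control group are
# SIGKILLed after the stop timeout. "control-group" instead sends SIGTERM
# to every process in the cgroup, including an old wrapper that is still
# draining connections.
KillMode=control-group

[Install]
WantedBy=multi-user.target
```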

Regards,
Niels Hendriks

On 4 October 2017 at 22:01, Lukas Tribus <[email protected]> wrote:

> Hello Moemen,
>
>
> Am 04.10.2017 um 19:21 schrieb Moemen MHEDHBI:
> >
> > I am wondering if this is actually an expected behaviour and if maybe
> > that restart/stop should just shutdown the process and its open
> connections.
> > I have made the following tests:
> > 1/ Keeping an open connection and then doing a restart works correctly,
> > without waiting for existing connections to be closed.
>
> You're right, I got confused there.
>
> Stop or restart is supposed to kill existing connections without any
> timeouts, and
> systemd would signal it with a SIGTERM to the systemd-wrapper:
>
> https://cbonte.github.io/haproxy-dconv/1.7/management.html#4
>
>
>
> > I think it makes more sense to say that restart will not wait for
> > established connections.
>
> Correct, that's the documented behavior as per haproxy documentation and
> really the implicit assumption when talking about stopping/restarting in a
> process
> management context (I need more coffee).
>
>
>
> >   Otherwise there will be no difference between
> > reload and restart, unless there is something else I am not aware of.
> > If we need to fix 2/, a possible solution would be:
> > - Set killmode to "control-group" rather than "mixed" (the current
> > value) in systemd unit file.
>
> Indeed the mixed killmode was a conscious choice:
> https://marc.info/?l=haproxy&m=141277054505608&w=2
>
>
> I guess the problem is that when a reload happens before a restart and the
> pre-reload systemd-wrapper process is still alive, systemd gets confused
> by that old process and therefore refrains from starting up the new
> instance.
>
> Or systemd doesn't get confused, sends SIGTERM to the old systemd-wrapper
> process as well, but the wrapper doesn't handle SIGTERM after a SIGUSR1
> (a hard stop WHILE we are already gracefully stopping).
>
>
> Should the systemd-wrapper exit after distributing the graceful stop
> message to its processes? I don't think so; that sounds horrible.
>
> Should the systemd-wrapper expect a SIGTERM after a SIGUSR1 and send the
> TERM/INT to its children? I think so, but I'm not 100% sure. Is that even
> the issue?
>
>
>
> We did get rid of the systemd-wrapper in haproxy 1.8-dev and replaced it
> with a master->worker solution, so I'd say there is a chance that this
> doesn't affect 1.8.
>
>
> Niels, I still think what you want is for Ansible to reload instead of
> restart, but I
> agree that this is an issue ("systemctl [stop|restart] haproxy" should work
> regardless of an additional instance that is already gracefully stopping).
>
>
> CC'ing William and Apollon, maybe they can share their opinion?
>
>
>
>
> [1] https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#hard-stop-after
>
>
