On Fri, Jan 26, 2018 at 2:28 PM, TomK <[email protected]> wrote:

> Hey All,
>
> We have UCARP and HAproxy configured and setup between two servers.
> HAproxy is bound to the UCARP VIP between the nodes. There are four
> services per host: four on SRV1 (primary) and the same four apps on SRV2
> (secondary).  We need active / passive behavior, since the apps don't support an
> active / active config.   We have one UCARP VIP for each application.
>
> SRV1 primary
> SRV2 secondary
>
> We need all four VIP's and HAproxy processes to failover to the standby if:
>
> 1) One of the four processes on the primary fails.
> 2) Primary host fails ( This piece is easier. )
>
> When all fail over to the standby, we need them to be able to fail back
> if the secondary (standby) fails in the future.
>
> We can't seem to have all of them stick on the standby (now primary) when
> the primary comes back up or even when the one failed service comes back on
> SRV1 (former primary).
>
> They end up flipping back and we end up in a situation where some of the
> traffic goes to SRV1 and some to SRV2.
>
> We tried the:
>
> stick-table type ip size 1 nopurge peers LB
> stick on dst
>
> As well as rise 9999999
>
> but those eventually fail over.  What is the best practice
> in this case?
>
>
I think you should solve this in your design instead of in haproxy.
Keepalived, for example, supports iptables rules; you can use them to block
the traffic completely on the secondary, so the primary will still think the
secondary instances are out of service even when the server is up and the
services are running. I don't think this is an option with CARP, which
doesn't even support executing scripts on state change. You can also use
Heartbeat or Pacemaker, which will have control over the services too, so
they will only be running if the server owns the VIP.
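To make that concrete, a keepalived setup along these lines could both stop the flip-back (via nopreempt) and block traffic on the backup via notify scripts. This is only a sketch — the interface name, VIP, router id, and script paths are illustrative, not taken from your environment:

```
# /etc/keepalived/keepalived.conf (illustrative)
vrrp_instance VI_APP1 {
    state BACKUP              # start both nodes as BACKUP so nopreempt applies
    nopreempt                 # whoever holds the VIP keeps it; no automatic flip-back
    interface eth0            # illustrative interface
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10/24         # illustrative VIP
    }
    notify_master "/usr/local/bin/vip-unblock.sh"
    notify_backup "/usr/local/bin/vip-block.sh"
}
```

where vip-block.sh would insert a rule like `iptables -I INPUT -d 192.0.2.10 -j DROP` and vip-unblock.sh would remove it with the matching `-D`. The nopreempt option is what keeps the services stuck on the new master after a failover instead of bouncing back when the old primary returns.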


> Second question we have is how to split up HAproxy processes into separate
> start and stop scripts?  Currently we stop and start using only the main
> restart script /etc/init.d/haproxy but that stops all or starts all.  Has
> anyone split these up into separate start stop scripts to control
> individual HAproxy instances?  In other environments we find that they
> start multiple copies of the same HAproxy definition.  We need better fine
> grained control of each one.
>

Not sure I understand this question, or what you refer to when you say
haproxy processes. If you mean a separate haproxy per service, I don't think
that's a good idea; it sounds like a waste of resources. Anyway, changing
your design should solve your problems and avoid needlessly messing around
with haproxy.
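That said, if you really do need to start and stop instances independently, a systemd template unit is one common pattern instead of hand-written init scripts. A sketch only, assuming systemd and illustrative config paths:

```
# /etc/systemd/system/haproxy@.service (illustrative)
[Unit]
Description=HAProxy instance %i
After=network-online.target

[Service]
# Validate the per-instance config before starting
ExecStartPre=/usr/sbin/haproxy -c -f /etc/haproxy/%i.cfg
# -db keeps haproxy in the foreground so systemd can supervise it
ExecStart=/usr/sbin/haproxy -db -f /etc/haproxy/%i.cfg
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Each app then gets its own config file (e.g. /etc/haproxy/app1.cfg) and can be controlled with `systemctl start haproxy@app1` / `systemctl stop haproxy@app1` without touching the other instances.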
