On 1/27/2018 2:21 AM, Igor Cicimov wrote:
Thank you!  That was it.


On 27 Jan 2018 4:44 pm, "TomK" <[email protected]> wrote:

    On 1/26/2018 7:49 PM, Igor Cicimov wrote:



        On Fri, Jan 26, 2018 at 2:28 PM, TomK <[email protected]> wrote:

             Hey All,

             We have UCARP and HAproxy configured and set up between two
             servers. HAproxy is bound to the UCARP VIP between the nodes.
             There are four services per host: four on SRV1 (primary) and
             the same four apps on SRV2 (secondary). We need active/passive
             behavior, since the apps don't support an active/active
             config. We have one UCARP VIP for each application.
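
             (Each VIP is managed by its own ucarp instance, roughly like
             the sketch below; the addresses, vhid, and script paths here
             are hypothetical:)

                 # one ucarp instance per application VIP
                 ucarp --interface=eth0 --srcip=10.0.0.1 --vhid=1 \
                       --pass=secret --addr=10.0.0.100 \
                       --upscript=/etc/ucarp/vip-up.sh \
                       --downscript=/etc/ucarp/vip-down.sh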

             SRV1 primary
             SRV2 secondary

             We need all four VIPs and HAproxy processes to fail over to
             the standby if:

             1) One of the four processes on the primary fails.
             2) The primary host fails (this piece is easier).

             When all of them fail over to the standby, they need to be
             able to fail back if the secondary (standby) fails in the
             future.

             We can't seem to make all of them stick on the standby (now
             primary) when the primary comes back up, or even when the one
             failed service comes back on SRV1 (the former primary).

             They end up flipping back, and we end up in a situation where
             some of the traffic goes to SRV1 and some to SRV2.

             We tried:

             stick-table type ip size 1 nopurge peers LB
             stick on dst

             as well as rise 9999999, but those eventually fail over
             anyway. We wanted to know: what is the best practice in this
             case?
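
             (In full, the relevant part looked roughly like the sketch
             below; the hostnames, IPs, and backend name are hypothetical,
             and the peer names must match each node's hostname:)

                 peers LB
                     peer srv1 10.0.0.1:1024
                     peer srv2 10.0.0.2:1024

                 backend app1
                     # one-entry table shared between both nodes, so the
                     # chosen server survives the primary coming back up
                     stick-table type ip size 1 nopurge peers LB
                     stick on dst
                     server srv1 10.0.0.1:8080 check
                     server srv2 10.0.0.2:8080 check backup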


        I think you should solve this in your design instead of in
        haproxy. Keepalived, for example, supports iptables rules: you can
        use them to block traffic to the secondary completely, so the
        primary will still think the secondary's instances are out of
        service even when the server is up and the services are running.
        I don't think this is an option with CARP, which doesn't even
        support executing scripts on state change. You could also use
        Heartbeat or Pacemaker, which control the services too, so they
        only run on the server that owns the VIP.
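
        (A minimal keepalived sketch of that idea; the interface, VRID,
        VIP, and script paths are hypothetical, and the notify scripts
        are assumed to add or remove the iptables rules:)

            vrrp_instance VI_APP1 {
                state BACKUP
                interface eth0
                virtual_router_id 51
                priority 100
                virtual_ipaddress {
                    10.0.0.100/24
                }
                # on becoming MASTER, remove the blocking rule;
                # on falling back to BACKUP, re-add it
                notify_master "/etc/keepalived/unblock-app.sh"
                notify_backup "/etc/keepalived/block-app.sh"
            }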

    Agreed, CARP is limited.  We see that already.  We're increasingly
    looking at keepalived instead of UCARP; however, a large portion of
    our infrastructure is now using UCARP and HAproxy.  (That won't take
    long to change, though.)

    We just tested using stick tables today and like the results so far;
    however, we notice we can't start haproxy on the second node of a
    cluster as long as haproxy was already started on the first node and
    already bound to the UCARP VIP.


It will start if you set the net.ipv4.ip_nonlocal_bind = 1
kernel parameter.
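
For example (the sysctl.d file name below is just a convention):

    # apply immediately
    sysctl -w net.ipv4.ip_nonlocal_bind=1
    # persist across reboots
    echo "net.ipv4.ip_nonlocal_bind = 1" > /etc/sysctl.d/90-haproxy.conf
    sysctl --system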

    I do recall this being possible, where UCARP runs on both nodes and
    HAproxy starts up on both nodes and successfully binds to the same
    VIP on both.  But I can't access that environment anymore to see what
    the config difference might be.  Is this more of a UCARP question, or
    an HAproxy config question, to allow both HAproxy instances to bind
    to the same VIP and port on two different hosts?  Let me know if I'm
    not being clear.



             Our second question is how to split the HAproxy processes
             into separate start and stop scripts. Currently we stop and
             start using only the main init script /etc/init.d/haproxy,
             but that stops or starts everything. Has anyone split these
             up into separate start/stop scripts to control individual
             HAproxy instances? In other environments we've seen multiple
             copies of the same HAproxy definition started. We need
             finer-grained control of each one; see the sketch below for
             what we mean.
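
             (Roughly, per instance, something like this; the config and
             pidfile paths are hypothetical:)

                 # one config and one pidfile per instance
                 haproxy -D -f /etc/haproxy/app1.cfg \
                         -p /var/run/haproxy-app1.pid

                 # stop only that instance
                 kill $(cat /var/run/haproxy-app1.pid)

                 # or reload just that instance gracefully
                 haproxy -D -f /etc/haproxy/app1.cfg \
                         -p /var/run/haproxy-app1.pid \
                         -sf $(cat /var/run/haproxy-app1.pid)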


        Not sure I understand this question: what do you mean by "haproxy
        processes"? If you mean a separate haproxy per service, I don't
        think that's a good idea; it sounds like a waste of resources.
        Anyway, changing your design should solve your problems and
        remove the need to needlessly mess around with haproxy.






--
Cheers,
Tom K.
-------------------------------------------------------------------------------------

Living on earth is expensive, but it includes a free trip around the sun.

