Hi Willy,

Thanks for the detailed explanation of the two methods. Here is my scenario
(I didn't explain it in much detail earlier).

I have load balancer 1 (LB1), an Ubuntu 10.04 LTS machine running haproxy 1.5,
which handles all my connections; the public IP of LB1 is the DNS entry for
client applications. That said, we are failing a few compliance checks, so I
need to upgrade the VM to 12.04. This could take us about 2-4 hours, because
in the meantime the entire VM has to be rebuilt (I don't want to do an
in-place upgrade).

We only have HTTP connections, and they are very short-lived (1-2 minutes at
most). I also have a load balancer 2 (LB2) configured so that during data
center or networking issues all connections can still fail over (using
keepalived on the public IP of LB1).

So far I have tried using keepalived to fail the public IP over to LB2, but
in that scenario LB1 loses its network connection completely and no updates
or packages can be installed, which makes it impractical. I have also tried
the IP forwarding approach I mentioned earlier, but for some reason
connections aren't failing over and just keep hanging. So I wanted to ask the
group what else I could try to avoid total downtime while still being able to
work on the first LB. I am not very familiar with DNS round robin...

Hopefully my issue is a little clearer now.
From: Willy Tarreau <[email protected]>
To: Amol <[email protected]>
Cc: HAproxy Mailing Lists <[email protected]>
Sent: Sunday, June 7, 2015 12:44 AM
Subject: Re: haproxy upgrade strategy for primary/secondary model
Hi Amol,
On Fri, Jun 05, 2015 at 03:44:35PM +0000, Amol wrote:
> Hi All, I want to get your ideas about a scenario I am facing. I have 2
> haproxy servers as load balancers, primary and secondary; all connections
> always go to the primary, and when the primary fails, keepalived fails the
> connections over to the secondary.
>
> Now I can upgrade the secondary without any issues, as that server never
> has active connections, but my question is: how can I upgrade the primary
> without causing any downtime for my users?
>
> I have 2 Apache servers running behind the load balancers.
> So far I have tried the following on the primary, but with no luck:
>
> echo "1" > /proc/sys/net/ipv4/ip_forward
> iptables -P FORWARD ACCEPT
> iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT
> --to-destination <public_ip_secondary>:80
> iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT
> --to-destination <public_ip_secondary>:443
> iptables -t nat -A POSTROUTING -j MASQUERADE
> iptables -t nat -L -v
>
> My website does not get redirected to the secondary even after I do this...
> any suggestions?
The config above should work. It will only ensure that new connections go
to the backup node and that existing connections will be handled by the
current active node. It will not allow you to reboot the machine without
breaking connections.
In theory this will result in the backup node processing everything and
the active node processing nothing. In practice you may find that some
very long connections remain on the active node for a long time, and be
disappointed to find that both nodes are active and that you can't stop
either. But in general, with HTTP only, this should not happen.
If there is no extra-long connection, there's no reason to go through
this hassle: most of the connections you'd break will be idle, and simply
failing over to the other node (VRRP or so) will be clean. It may impact
just the few connections active at the moment you switch, but most of
them will be in keep-alive, and the browsers will retry them.
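For reference, a plain VRRP failover of this kind is a small keepalived
configuration. The interface name, router id, priority and VIP below are
made-up examples, not values from this thread:

```
# /etc/keepalived/keepalived.conf on the primary (sketch)
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the secondary
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 101            # lower (e.g. 100) on the secondary
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.0.2.10/24       # the shared public IP clients resolve to
    }
}
```

When the primary stops advertising (or you stop keepalived on it), the
secondary claims the VIP and new connections land there.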
If you really want zero cuts at all during a switch, you'll in fact
need to set up a front layer of LVS using source hashing, so that
sessions are deterministic and established sessions always go to the
same node. When you want to switch a node, you just deconfigure it on
LVS, which keeps established sessions on it and redistributes new
sessions to the other node. This also lets you reboot the system once
it no longer has any sessions. Hashing on the source ensures that if
you lose one LVS node (or reboot it), the other node will apply the
same hash to the sessions and pick the same haproxy node. You just
need to avoid losing LVS nodes and haproxy nodes at the same time.
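As a rough sketch, such an LVS front layer can be set up with ipvsadm.
The VIP and the two haproxy node addresses below are illustrative
assumptions (run as root on each LVS node, direct-routing mode assumed):

```shell
# Create a virtual service on the VIP with the source-hash scheduler
# ("sh"), so a given client IP always maps to the same real server.
ipvsadm -A -t 192.0.2.10:80 -s sh

# Add both haproxy nodes as real servers (gatewaying / direct routing).
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.21:80 -g
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.22:80 -g

# To drain a node before upgrading it, quiesce it by setting its
# weight to 0: established sessions keep going there, new ones don't.
ipvsadm -e -t 192.0.2.10:80 -r 192.0.2.21:80 -g -w 0
```

Once `ipvsadm -L -n --stats` shows no remaining connections on the
quiesced node, it can be rebuilt and rebooted safely.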
Such setups can happen in environments with very long sessions such
as RDP, SSH, etc. In my opinion it doesn't make much sense for HTTP.
Willy