On Tue, 19 Mar 2013 20:03:14 +0100 in
<[email protected]>, Lukas Tribus
<[email protected]> wrote:

> 
> > conntrackd also permits sharing TCP states between boxes that will
> > also run iptables
> 
> With conntrackd-syncing you just allow the packet to pass the iptables
> barrier; but the session will still be dropped by the OS because the
> TCP stack doesn't know the socket, and neither does the application.

Right. 

> To do this you would need something like TCP connection repair [1], but
> that requires support in both the kernel and userspace. While this crazy
> feature seems to have made it into the 3.5 kernel, I'm not aware that
> this is supported in haproxy.

I can think of an alternative implementation using ... the netfilter
kernel API. Probably not a good idea, though, since it would require
HAProxy to configure netfilter rules itself, and it would not be portable.
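
To illustrate what I mean by that API: conntrackd itself goes through
libnetfilter_conntrack, and injecting an entry into the local conntrack
table looks roughly like the sketch below. It's untested, and the
addresses, ports and timeout are made up for the example; a real
implementation would take them from the synced state and handle errors
properly.

/*
 * Untested sketch: inject one ESTABLISHED TCP entry into the local
 * conntrack table through libnetfilter_conntrack (the library conntrackd
 * is built on). Addresses, ports and timeout are made-up example values.
 * Build: gcc create.c -lnetfilter_conntrack ; needs CAP_NET_ADMIN.
 */
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <linux/netfilter/nf_conntrack_tcp.h>
#include <libnetfilter_conntrack/libnetfilter_conntrack.h>

int main(void)
{
    struct nf_conntrack *ct = nfct_new();
    if (!ct) { perror("nfct_new"); return 1; }

    nfct_set_attr_u8(ct,  ATTR_L3PROTO, AF_INET);
    nfct_set_attr_u32(ct, ATTR_IPV4_SRC, inet_addr("192.0.2.10")); /* client  */
    nfct_set_attr_u32(ct, ATTR_IPV4_DST, inet_addr("192.0.2.20")); /* backend */
    nfct_set_attr_u8(ct,  ATTR_L4PROTO, IPPROTO_TCP);
    nfct_set_attr_u16(ct, ATTR_PORT_SRC, htons(40000));
    nfct_set_attr_u16(ct, ATTR_PORT_DST, htons(80));
    nfct_setobjopt(ct, NFCT_SOPT_SETUP_REPLY);       /* derive the reply tuple */
    nfct_set_attr_u8(ct,  ATTR_TCP_STATE, TCP_CONNTRACK_ESTABLISHED);
    nfct_set_attr_u32(ct, ATTR_TIMEOUT, 120);

    struct nfct_handle *h = nfct_open(CONNTRACK, 0);
    if (!h) { perror("nfct_open"); nfct_destroy(ct); return 1; }

    if (nfct_query(h, NFCT_Q_CREATE, ct) == -1)
        perror("nfct_query(NFCT_Q_CREATE)");

    nfct_close(h);
    nfct_destroy(ct);
    return 0;
}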

> 
> In fact, while rethinking, I'm not sure TCP connection repair can be used
> for failover anyway, it's just a technology to move the TCP session from one
> host to another gracefully, but it requires both hosts to be alive afaik -
> so it doesn't make sense for failover.

It's not its primary goal, indeed ... it can't be used in an active/active
setup, only in active/passive, with a bit of a hack that, personally, I
would put in the software that ensures the HA, like keepalived: it's a
point-in-time dump and restore of the in-flight packets.
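
For reference, that dump/restore is what the TCP_REPAIR socket option
added in 3.5 exposes (it's what CRIU uses to checkpoint sockets, IIRC).
Here is a rough, untested sketch of the restore half, just to show the
shape of the API: the peer address and sequence numbers are placeholders,
and I'm leaving out the local bind(), the queued data and the TCP options
a real restore would also have to replay.

/*
 * Untested sketch of the "restore" half of TCP connection repair
 * (Linux >= 3.5, needs CAP_NET_ADMIN). Peer address and sequence numbers
 * are placeholders; a real restore would also bind() the original local
 * address and replay queue contents and TCP options.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>

#ifndef TCP_REPAIR              /* older libc headers may lack these */
#define TCP_REPAIR       19
#define TCP_REPAIR_QUEUE 20
#define TCP_QUEUE_SEQ    21
#endif
#ifndef TCP_RECV_QUEUE
#define TCP_RECV_QUEUE   1
#define TCP_SEND_QUEUE   2
#endif

int main(void)
{
    int on = 1, q, fd = socket(AF_INET, SOCK_STREAM, 0);
    unsigned int snd_seq = 123456789, rcv_seq = 987654321;  /* from the dump */

    /* Enter repair mode: the connect() below will not emit any packet. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on)) < 0) {
        perror("TCP_REPAIR");
        return 1;
    }

    /* Restore the sequence numbers captured on the old box (must be done
     * before connect()). */
    q = TCP_SEND_QUEUE;
    setsockopt(fd, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q));
    setsockopt(fd, IPPROTO_TCP, TCP_QUEUE_SEQ, &snd_seq, sizeof(snd_seq));
    q = TCP_RECV_QUEUE;
    setsockopt(fd, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q));
    setsockopt(fd, IPPROTO_TCP, TCP_QUEUE_SEQ, &rcv_seq, sizeof(rcv_seq));

    /* "Connect" to the peer: in repair mode this only fills in the socket
     * state, no SYN goes on the wire. */
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(80) };
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    /* Leave repair mode; the socket is now a live ESTABLISHED connection. */
    on = 0;
    setsockopt(fd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on));
    return 0;
}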

> 
> 
> > HAProxy can share, as of 1.5, its connection table, which is a really
> > appreciated feature :)
> 
> HAProxy can share stick-tables [2], but that doesn't mean you can
> implement stateful failover.

That will at least ensure that session affinity with backends is kept
across several HAProxy instances, which is a good step toward an
active/active HAProxy setup.

 
> If you want to do this with haproxy, you will probably need to drop the
> idea of stateful failover, imho, no user-space software can accomplish
> this.

Well, in a non-portable fashion it's possible, if you reuse the kernel
API that pfsync or conntrackd use to share the connection table between
two boxes. I'm pretty sure one could patch the HAProxy code to share
more than the stickiness criteria of a frontend or a backend. And to
implement this you do not need the fancy TCP_REPAIR socket option, only,
for example, a forked process over an event-driven API to guarantee the
syncing, and of course some time to implement it, because that's not
exactly what I call a trivial piece of code :)
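
The event-driven API in question is ctnetlink, reachable from userspace
through libnetfilter_conntrack again. The listening half of such a sync
process could look roughly like this (untested sketch; a real daemon
would serialize each event and push it to the peer box instead of
printing it):

/*
 * Untested sketch: subscribe to conntrack events (the same ctnetlink
 * interface conntrackd uses) and print them; a real sync daemon would
 * serialize each event and send it to the peer box instead.
 * Build: gcc sync.c -lnetfilter_conntrack ; needs CAP_NET_ADMIN.
 */
#include <stdio.h>
#include <libnetfilter_conntrack/libnetfilter_conntrack.h>

static int event_cb(enum nf_conntrack_msg_type type,
                    struct nf_conntrack *ct, void *data)
{
    char buf[1024];

    nfct_snprintf(buf, sizeof(buf), ct, type, NFCT_O_DEFAULT, 0);
    printf("%s\n", buf);            /* forward to the peer here */
    return NFCT_CB_CONTINUE;
}

int main(void)
{
    /* Subscribe to new/update/destroy conntrack events. */
    struct nfct_handle *h = nfct_open(CONNTRACK, NFCT_ALL_CT_GROUPS);
    if (!h) { perror("nfct_open"); return 1; }

    nfct_callback_register(h, NFCT_T_ALL, event_cb, NULL);
    nfct_catch(h);                  /* blocks, dispatching events to event_cb */

    nfct_close(h);
    return 0;
}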

> On the other side, if maintaining the TCP sessions when a failover occurs
> is a requirement for you, you should stick to LVS + conntrack syncing,
> as that's probably possible. Of course you can work only up to
> Layer 4 and won't see the application Layer on your load-balancer.

It's not a strong requirement for all the applications served, only for
a few that would really prefer a proper solution to avoid renegotiation
on the client side in case of a crash and a restart from scratch. LVS
alone will not fit the functional requirements of the backend
application; I really need more than layer 4 LB criteria.

Regards,    

-- 
Jérôme Benoit aka fraggle
La Météo du Net - http://grenouille.com
OpenPGP Key ID : 9FE9161D
Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
