Hi Baptiste. Thanks for your response. By "context switch" here I mean when HAProxy switches from one backend to another; my bad for not using the correct term. In my case some of the servers are in both backends.
To better understand how HAProxy works, and based on your explanation: is it safe to assume that HAProxy reloads every time it switches from one backend to another? Does it do that by forking?

All that said, is there a way, either through configuration or third-party software, to maintain one backup server for an active pool of "n" servers, such that the backup becomes active when one of the active servers goes down?

Thanks in advance.
/Rama

On 12/2/12 11:44 PM, Baptiste wrote:

Hi Rama,

What do you call a "context switch"? Is that an HAProxy reload?

When reloading, you may have 2 HAProxy processes running in the meantime. The old process keeps handling old connections while the new process handles new connections. Each process manages its own maxconn value; that's why you may see more connections per server than maxconn, especially since your protocol seems to rely on long-lived connections.

There is actually no way to tell 2 HAProxy processes to exchange their information.

cheers

On Mon, Dec 3, 2012 at 3:50 AM, Rama Alebouyeh <[email protected]> wrote:

Hi All,

I am trying to come up with a configuration in which a backup server becomes active as soon as a server in the active pool goes down. To do that, I interleaved the active and backup pools and used an ACL to switch backends based on the number of servers in the active pool.

The issue I am facing is that when the switch happens, the connection count of the servers common to both backends is reset, so a server can go over maxconn. With the configuration below, if srv2 goes down while srv1 and srv2 each hold 80 connections, backup_pool becomes active. srv1 still holds its 80 connections, but its counter in backup_pool starts at zero, so if I push another 50 connections, srv1 ends up at 105 while srv3 only gets 25.

I was wondering if there is any way to share the connection counter between backends to handle this scenario? Or is there any other way I can configure HAProxy for this case? If any code changes are required, I am willing to contribute if this is a valid case.

Regards,
Rama

Here is my config file (I am using haproxy 1.4.21):

======================================================
global
    log /var/run/log local1 debug
    maxconn 1024
    debug
    #quiet

defaults
    log global
    mode tcp
    option tcplog
    option logasap
    timeout server 5000
    timeout client 5000
    timeout connect 5000

frontend proto_in
    bind *:1935
    acl SRV_DOWN nbsrv(active_pool) lt 2
    default_backend active_pool
    use_backend backup_pool if SRV_DOWN

backend active_pool
    mode tcp
    balance leastconn
    timeout server 5000
    timeout connect 5000
    option log-health-checks
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ 
    server srv1.foo.com 10.10.10.75:1935 check port 8086 maxconn 100
    server srv2.foo.com 10.10.10.29:1935 check port 8086 maxconn 100

backend backup_pool
    mode tcp
    balance leastconn
    timeout server 5000
    timeout connect 5000
    option log-health-checks
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ 
    server srv1.foo.com 10.10.10.75:1935 check port 8086 maxconn 100
    server srv2.foo.com 10.10.10.29:1935 check port 8086 maxconn 100
    # including backup
    server srv3.foo.com 10.10.10.28:1935 check port 8086 maxconn 100
======================================================
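P.S. For what it's worth, one alternative I have been sketching is to drop the two-backend ACL switch and keep everything in a single backend, marking srv3 with the "backup" keyword so all servers share one backend's connection accounting. This is only an untested sketch: by default HAProxy only starts using a backup server once ALL non-backup servers are down, which does not quite match my "one active server goes down" requirement, so I am not sure it fits.

======================================================
backend active_pool
    mode tcp
    balance leastconn
    timeout server 5000
    timeout connect 5000
    option log-health-checks
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ 
    server srv1.foo.com 10.10.10.75:1935 check port 8086 maxconn 100
    server srv2.foo.com 10.10.10.29:1935 check port 8086 maxconn 100
    # "backup": only used once all non-backup servers are down;
    # connection counters stay within this one backend, so srv1's
    # count is never reset by a backend switch
    server srv3.foo.com 10.10.10.28:1935 check port 8086 maxconn 100 backup
======================================================

With that layout the frontend would only need "default_backend active_pool", with no ACL and no backup_pool.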
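P.P.S. Regarding the reload behaviour you described: I assume you mean the usual soft reload, something like the following (paths are my assumption), where -sf hands the old process's PID to the new one so it can finish existing connections and exit, with both processes briefly running side by side, each with its own maxconn accounting:

    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)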

