Hi,

On Wed, Feb 22, 2012 at 03:49:16PM +0530, Sachin Shetty wrote:
> Hi,
> 
> We have four web servers in a single backend. Physically these four servers
> are on two different machines. A new session is made sticky by hashing on
> one of the headers. 
> 
> Regular flow is ok, but when one of the webservers is down for an in-flight
> session, the request should be re-dispatched to the webserver on the same
> machine if available. I looked at various options in the config, but
> couldn't figure out a way to do it.  Has anybody achieved any thing similar
> with some config tweaks?

If you're using cookie-based persistence, then multiple servers may share the
same cookie value. If the first one fails, haproxy will try to forward the
request to another one with the same value. Note that this can cause an
imbalance between the servers, as the first server declared with a given
value will always receive the persistent traffic of the others. So it might
not be exactly what you're looking for, but it might give you some ideas.
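For example, a minimal sketch of that idea (server names and addresses are
made up), with the two servers of each machine sharing one cookie value :

   backend app
       cookie SRV insert indirect
       # both servers on machine A share the cookie value "machineA", so
       # if machineA-1 dies, its persistent sessions land on machineA-2
       server machineA-1 192.168.0.1:80 cookie machineA check
       server machineA-2 192.168.0.2:80 cookie machineA check
       server machineB-1 192.168.0.3:80 cookie machineB check
       server machineB-2 192.168.0.4:80 cookie machineB check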

In fact, it seems that what you're looking for is two layers of load
balancing : a first level contains farms, and a second level contains
servers within farms. I've seen two main types of architectures doing that :

1) the first one consists of stacking two layers of LB. The first one
   selects the farm and handles the persistence, the second one picks any
   server within that farm. It only works well if your servers are totally
   stateless :

   listen front-lb
       bind :80
       cookie FARM insert
       server farm1 127.0.0.1:1 cookie farm1
       server farm2 127.0.0.2:1 cookie farm2
       server farm3 127.0.0.3:1 cookie farm3

   listen farm1
       bind 127.0.0.1:1
       balance roundrobin   # or leastconn
       server farm1-1 192.168.1.1:80 check
       server farm1-2 192.168.1.2:80 check
       server farm1-3 192.168.1.3:80 check

   listen farm2
       bind 127.0.0.2:1
       balance roundrobin   # or leastconn
       server farm2-1 192.168.2.1:80 check
       server farm2-2 192.168.2.2:80 check
       server farm2-3 192.168.2.3:80 check

    etc...

2) the second method consists of having one backend per farm plus one
   backend containing all the servers. The complete backend is used for
   load balancing, and the other ones for persistence once the cookie is
   known. You have to format your cookie values properly for this :

   frontend front
       bind :80
       use_backend farm1 if { hdr_sub(cookie) SRV=farm1- }
       use_backend farm2 if { hdr_sub(cookie) SRV=farm2- }
       use_backend farm3 if { hdr_sub(cookie) SRV=farm3- }
       default_backend all-farms

   backend all-farms
       option redispatch
       balance roundrobin
       cookie SRV insert indirect
       server farm1-1 192.168.1.1:80 cookie farm1-1 track farm1/s1
       server farm1-2 192.168.1.2:80 cookie farm1-2 track farm1/s2
       server farm1-3 192.168.1.3:80 cookie farm1-3 track farm1/s3
       server farm2-1 192.168.2.1:80 cookie farm2-1 track farm2/s1
       server farm2-2 192.168.2.2:80 cookie farm2-2 track farm2/s2
       server farm2-3 192.168.2.3:80 cookie farm2-3 track farm2/s3

   backend farm1
       option redispatch
       balance roundrobin
       cookie SRV insert indirect
       server s1 192.168.1.1:80 cookie farm1-1 check
       server s2 192.168.1.2:80 cookie farm1-2 check
       server s3 192.168.1.3:80 cookie farm1-3 check

   backend farm2
       option redispatch
       balance roundrobin
       cookie SRV insert indirect
       server s1 192.168.2.1:80 cookie farm2-1 check
       server s2 192.168.2.2:80 cookie farm2-2 check
       server s3 192.168.2.3:80 cookie farm2-3 check

I really prefer the second option for several reasons :
  - there is one single layer
  - session affinity is maintained per-server and not per-farm
  - in case of fail-over, a new cookie is assigned and the user
    stays on the new server in the same farm
  - the load is smoothed over all servers instead of all farms,
    meaning that if a farm has 2 servers and the other ones have 5,
    the 2 servers will not be overloaded.
  - it's easier to monitor

However, it's a bit tricky to write ; you have to be very careful, or
generate the config using scripts. It's also highly recommended to improve
the use_backend rules to include a check on nbsrv(#backend), so that
traffic is never directed to a backend whose servers are all down.
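For instance (a sketch only ; the ACL names are made up, and the exact
nbsrv syntax should be checked against your haproxy version's
documentation), the frontend above could become :

   frontend front
       bind :80
       acl farm1_up nbsrv(farm1) gt 0
       acl farm2_up nbsrv(farm2) gt 0
       use_backend farm1 if farm1_up { hdr_sub(cookie) SRV=farm1- }
       use_backend farm2 if farm2_up { hdr_sub(cookie) SRV=farm2- }
       default_backend all-farms

This way, when every server in farm1 is down, the request falls through to
all-farms and gets a fresh cookie on a live server.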

Hoping this helps,
Willy

