On Tuesday 31 March 2009 16:04:41, Joseph Mack NA3T wrote:
> On Tue, 31 Mar 2009, Fabio Coatti wrote:
> > Basically, the scenario is the following: 5 servers (RIPs),
> > balanced with ipvs on Linux 2.6.23, using DR. Everything works
> > just fine, but if for some reason the real servers freeze and
> > need a reboot, ipvs keeps all the connections as "alive", so the
> > balancing is no longer done correctly.
>
> ipvs has no way of knowing that a machine has been rebooted,
> so this is the expected behaviour. Health checking is all
> external to ipvs. Since in LVS-DR the director doesn't see
> the returning packets, it has to guess when connections
> expire, and you've set a long timeout.
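(Indeed, the long timeouts and the stale entries are visible on the
director with something like the following; just a rough sketch, the
VIP/port below are placeholders rather than my real addresses:)

  # show the tcp / tcpfin / udp timeout values currently in use
  ipvsadm -L --timeout

  # dump the whole connection table (numeric output), where entries
  # still pointing at the rebooted realserver show up
  ipvsadm -L -c -n

  # per-service view, with weights and ActiveConn/InActConn counters
  ipvsadm -L -n -t 192.168.0.100:3306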
Yes, I know this part, and in fact I don't blame lvs for ending up in
this situation; basically, a quick way to recover (by hand) when
something weird happens and lvs gets confused would be useful.

> Either
>
> o use the -SH scheduler to direct clients to the same
> database server and return the timeouts to the default
> period.

Hm, interesting, but I fear that this will lead to non-optimal
situations when many connections are started from the same machine.

> o use one of the ioctls that Horms wrote to handle crashed
> realservers when ipvs is using persistence. These clear out
> the connection table.

This would indeed be the right solution for my issue. Thanks for the
suggestion; with it I finally found some hints. If I read the
documentation correctly, /proc/sys/net/ipv4/vs/expire_quiescent_template
and /proc/sys/net/ipv4/vs/expire_nodest_conn should help solve the
issue. I'll try asap. Thanks.
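For reference, this is roughly what I plan to try on the director; a
minimal sketch, assuming the standard ipvs sysctls and placeholder
VIP/RIP addresses (192.168.0.100:3306 and 192.168.0.11):

  # expire connections whose destination realserver has been removed,
  # instead of keeping them in the table until the tcp timeout runs out
  echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn

  # also expire persistence templates that point to a quiesced
  # (weight 0) realserver
  echo 1 > /proc/sys/net/ipv4/vs/expire_quiescent_template

  # when a realserver freezes: quiesce it (weight 0) so no new
  # connections are scheduled to it...
  ipvsadm -e -t 192.168.0.100:3306 -r 192.168.0.11:3306 -w 0

  # ...or remove it outright; with expire_nodest_conn=1 the stale
  # entries are then expired as soon as new packets hit them
  ipvsadm -d -t 192.168.0.100:3306 -r 192.168.0.11:3306

If this behaves as documented, quiescing or removing the frozen
realserver should be enough to get rid of the stale entries without
touching ipvs itself.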
