> Hi Benoit,
>
> On Tue, Oct 17, 2017 at 04:17:42AM +0200, Benoit GEORGELIN - Association
> Web4all wrote:
> > Hi members of the list,
> >
> > I would like to know if you have any recommendation about this problem
> > I have.
> >
> > I'm running haproxy with a backend of apache2 web servers to load
> > balance HTTP services. In this model, I get better performance with
> > "balance source", so a visitor has the content delivered by the same
> > server every time.
> >
> > But I have an issue when one "source" over-talks to a website. This
> > can make one of the HTTP servers in the backend become slower, for
> > example because it is handling too many connections. And since I
> > balance with "source", visitors will stay stuck to that server in the
> > backend.
> >
> > So I was thinking I could set "maxconn" on the servers in that
> > backend, but that does not solve my problem from a visitor's point of
> > view: the visitor will be added to the queue, and pages will only be
> > delivered once a slot becomes available on that server. In the end,
> > the visitor still has to wait.
> >
> > My question is: would it be possible to have an option usable with
> > "balance source" that forces a redispatch once "maxconn" reaches
> > 90%? :) Or is there any available option that could help in this
> > scenario?
>
> I think you can use consistent hashing with the hash-balance-factor
> parameter. It was designed for URL-based load balancing for caches: it
> remains sticky as much as possible but tries to balance the load as
> well, and accepts to break the hash to ensure a smoother load. I never
> thought about using it with "balance source", but I don't see any
> reason why it wouldn't work; your use case falls exactly within its
> intended use. So in short, try to do something like this:
>
>     balance source
>     hash-type consistent
>     hash-balance-factor 125  # highest load not more than 125% of average
>
> Please check the doc for this last keyword, it's well explained.
>
> Regards,
> Willy

Thank you very much Willy. I'll try this, it sounds perfectly adapted to
my case.

Cheers,
Benoit
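For completeness, here is a minimal sketch of a full backend section
wired up the way Willy suggests. The backend name, server names,
addresses, and the maxconn value are hypothetical, not taken from the
thread:

    backend web_servers
        balance source
        hash-type consistent
        hash-balance-factor 125      # no server above 125% of average load
        default-server maxconn 200   # per-server connection cap (example value)
        server apache1 192.0.2.11:80 check
        server apache2 192.0.2.12:80 check
        server apache3 192.0.2.13:80 check

With this in place, a client that hashes to a server already carrying
more than 1.25x the backend's average connection count should be mapped
to another server on the consistent-hash ring rather than queued behind
maxconn, which is exactly the redispatch-when-overloaded behaviour Benoit
was asking for.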

