Hi Dirk,

On Thu, Apr 08, 2010 at 09:14:31PM +0200, Dirk Taggesell wrote:
> Hi all,
>
> I have a question regarding the balancing algorithm with
> "balance url_param site": is the distribution of requests to the
> back-ends reproducible or random?
>
> The urls will look like this:
> http://hostname.com?site=www.someplace.com/bla/blubb.html
>
> Actual problem:
> According to the site parameter in the request, our back-end servers
> respond with certain info for the particular url given.
> For some reasons the back-end servers cannot hold the responses for all
> possible site urls, so we want to distribute requests among the back-ends
> so that every site url is always answered by the same server (with two
> back-end servers, each server then only needs to hold half of the
> responses). That's what url_param does.
>
> But what happens if we need two haproxy servers as front-ends (too many
> requests per second for one machine)? Will every haproxy instance
> deliver the same url to the same back-end server as the other haproxy
> machines? Granted that the haproxy configurations are identical.
No problem, the hashes are guaranteed to return the same server as long as
your configs are the same and the servers are seen in the same operational
state. That's also why consistent hashing was a bit hard to get smooth
enough :-)

In 1.4 you should probably use "hash-type consistent" to avoid redispatching
everyone when one server goes down.
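For example, something like this (only a rough sketch; the names and
addresses are placeholders, what matters is that both haproxy machines
carry exactly the same server lines):

    # sketch only -- placeholder names/addresses; keep this section identical
    # on every haproxy front-end so the hash maps each site to the same server
    backend bk_sites
        balance url_param site
        # 1.4: consistent hashing, so that losing one server only remaps
        # the sites that were on it instead of redistributing nearly all of them
        hash-type consistent
        server srv1 192.168.0.11:80 check
        server srv2 192.168.0.12:80 check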
> Or can I force the behaviour with some other means?

Not needed.

> I know that the entire idea is far from ideal, but for the moment I have
> to find a quick solution (and then think about something elegant).

Hashes are typically used for that: the need for something that looks like
persistence and is totally deterministic. Your usage is not stupid at all
and is perfectly suited for hashes.

Willy
