Hi Willy,

there is no problem with the hashing of usernames (we have hundreds of different usernames, and under normal conditions they are distributed evenly).

The problem occurs when I stop my cluster nodes and only ser2 is up: haproxy still routes requests to ser1, which is down (and it was already down when I restarted haproxy). I can see it on the statistics page. Are you able to reproduce the situation? It looks like a bug in the routing algorithm when the combination of a DOWN server and unequal weights occurs.

I am sorry, I am unable to test it on 1.4 until I find an rpm package for RH 5 :(

Jozef

Willy Tarreau wrote:
Hi Jozef,

On Tue, Mar 09, 2010 at 01:13:26PM +0100, Jozef Hovan wrote:
Hi all,

we are using haproxy for load balancing and we have found an interesting bug. We had 3 servers in a cluster, with their weights set to 20, 20 and 10. The situation occurred when only the first server was up and the others were down: traffic was balanced between the first (UP) server and the second (DOWN) server. Maybe I am mistaken, but I expected that no traffic is ever routed to an inactive server.

Affected versions are 1.3.19 and 1.3.23. When I removed the weights from the config, balancing was ok.

It is possible that I didn't understand the configuration well, but routing traffic to an inactive node is probably not a wanted feature :)

Jozef

Here is my testing haproxy.cfg causing the problem:
===================
(...)
       balance url_param USERNAME check_post 4096

See here above? In short, it's quite expected with any form
of hashing that if the population is too small, the balancing
will be uneven. In my opinion, what happens is that out of 10
possible keys, 5 would match ser2_10, 5 ser1_10 and none
ser3_10. Maybe if you have thousands of usernames you'll see
a better distribution. It also explains why changing a server
state changes the distribution: it changes the modulus and
the computation is different.
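
(For illustration only: this is not HAProxy's actual code, just a rough Python sketch of map-based hashing over weight slots, using the server names and weights from Jozef's report, to show how removing a server from the map changes the result for most keys.)

    import zlib

    def pick_server(username, servers):
        # servers: list of (name, weight) pairs for the servers considered usable
        slots = [name for name, weight in servers for _ in range(weight)]
        # map-based hashing: slot = hash(key) % total weight of usable servers
        return slots[zlib.crc32(username.encode()) % len(slots)]

    all_up    = [("ser1_10", 20), ("ser2_10", 20), ("ser3_10", 10)]
    ser3_down = [("ser1_10", 20), ("ser2_10", 20)]  # ser3 dropped from the map

    for user in ("user1", "user2", "user3"):
        print(user, pick_server(user, all_up), pick_server(user, ser3_down))

The modulus is 50 in the first case and 40 in the second, so the same username can land on a different server after a state change.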

I suggest that you try it on 1.4 with the consistent hashing feature
("hash-type consistent"). It is possible that it will be smoother,
and it will also avoid redistributing everyone when the state of
a server changes. Only a small part will move.
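
(A minimal sketch of what such a 1.4 backend section might look like; the backend name and server addresses below are placeholders, only the balance and hash-type lines matter:)

    backend app
           balance url_param USERNAME check_post 4096
           hash-type consistent
           server ser1_10 192.168.0.1:80 weight 20 check
           server ser2_10 192.168.0.2:80 weight 20 check
           server ser3_10 192.168.0.3:80 weight 10 check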

Regards,
Willy
