Hello list. I'm implementing peers in order to share rps and other metrics 
between all instances of a haproxy cluster, so that I have a global view of 
the data. Here is a snippet of my PoC, which simply does a request count:

    global
        localpeer h1
        ...
    listen l1
        ...
        # count this request in our own table (constant key 1)
        http-request track-sc0 int(1) table p/t1
        # increment the local counter and keep the new value
        http-request set-var(req.gpc0) sc_inc_gpc0(0)
        # add each remote instance's counter for the same key
        http-request set-var(req.gpc0) sc_get_gpc0(0,p/t2),add(req.gpc0)
        http-request set-var(req.gpc0) sc_get_gpc0(0,p/t3),add(req.gpc0)
        # return the cluster-wide total
        http-request return hdr x-out %[var(req.gpc0)]
    peers p
        bind :9001
        log stdout format raw local0
        server h1
        server h2 127.0.0.1:9002
        server h3 127.0.0.1:9003
        table t1 type integer size 1 store gpc0
        table t2 type integer size 1 store gpc0
        table t3 type integer size 1 store gpc0

Our biggest cluster currently has 25 haproxy instances, meaning 25 tables per 
instance and 25 set-var lines (one increment plus 24 add() lookups) per 
request per tracked metric. On top of that, each of the 25 instances will 
push its tables to all of the other 24 instances. Building and maintaining 
such a configuration isn't a problem at all because it's automated, but how 
does it scale? Beyond how many instances should I change the approach and 
try, e.g., electing a controller that receives everything from everybody and 
delivers aggregated data? Any advice or best practices would be very much 
appreciated, thanks!
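For reference, the automation boils down to something like the sketch below. 
Instance names (h1..hN), the one-table-per-instance layout, and the 9000+N 
port scheme follow my snippet above; treat it as an illustration of the 
per-request cost (each instance does N-1 remote-table lookups), not the real 
tooling:

```python
# Sketch: generate the per-instance aggregation config for N peers.
# Assumes instances h1..hN, table tK owned by instance K, peers section "p",
# and peer ports 9000+K -- all taken from the example above.

def gen_frontend_rules(n, local):
    """http-request lines for instance `local` (1-based) in a cluster of n."""
    lines = [
        # count this request in our own table (constant key 1)
        f"http-request track-sc0 int(1) table p/t{local}",
        # increment the local counter and keep the new value
        "http-request set-var(req.gpc0) sc_inc_gpc0(0)",
    ]
    # add each remote instance's counter: n-1 lookups per request
    for k in range(1, n + 1):
        if k != local:
            lines.append(
                f"http-request set-var(req.gpc0) sc_get_gpc0(0,p/t{k}),add(req.gpc0)"
            )
    lines.append("http-request return hdr x-out %[var(req.gpc0)]")
    return lines

def gen_peers_section(n, local):
    """peers section for instance `local`: n server lines, n tables."""
    lines = [
        "peers p",
        f"    bind :{9000 + local}",
        "    log stdout format raw local0",
    ]
    for k in range(1, n + 1):
        if k == local:
            lines.append(f"    server h{k}")  # local peer: no address needed
        else:
            lines.append(f"    server h{k} 127.0.0.1:{9000 + k}")
    for k in range(1, n + 1):
        lines.append(f"    table t{k} type integer size 1 store gpc0")
    return lines
```

With n=25 this emits 24 sc_get_gpc0 lookups per request per instance, and 
every instance replicates its updates to the other 24 peers, which is where 
my scaling worry comes from.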

~jm
