peer replication reset values of stick tables
Hi,

I have an issue configuring peer replication with stick tables. Here is my setup:

    peers mypeers
        peer cldev-lb 10.1.1.101:1024

    backend b_35902
        stick-table type ip size 1k store bytes_out_rate(30),bytes_in_rate(30),bytes_out_cnt,bytes_in_cnt peers mypeers
        tcp-request content track-sc2 dst

When reloading haproxy I can see the learning process fetching data on port 1024, and the key remains afterwards, but all the counters are reset.

Before the reload:

    echo "show table b_35902" | socat /var/run/haproxy/admin.sock stdio
    # table: b_35902, type: ip, size:1024, used:1
    0xd82e08: key=172.18.5.5 use=0 exp=0 bytes_in_cnt=3088 bytes_in_rate(30)=3088 bytes_out_cnt=14570 bytes_out_rate(30)=14570

After:

    echo "show table b_35902" | socat /var/run/haproxy/admin.sock stdio
    # table: b_35902, type: ip, size:1024, used:1
    0x175ae08: key=172.18.5.5 use=0 exp=0 bytes_in_cnt=0 bytes_in_rate(30)=0 bytes_out_cnt=0 bytes_out_rate(30)=0

Is this normal? My goal is to keep these counters across reloads.

Thanks for your help.

Regards,
Aurélien
Re: MIB
❦ 25 February 2015 16:17 +0100, Mathieu Sergent mathieu.sergent...@gmail.com:

> I want to know if a MIB for HAProxy is available?

It depends what you call a MIB. Aloha (the packaged HAProxy by HAProxy Tech) comes with a MIB:
https://www.haproxy.com/download/aloha/mibs/EXCELIANCE-MIB.txt

But you need an implementation. You can find mine here (I am not using it anymore, and I don't even remember whether it was for HAProxy 1.4 or 1.5):
https://gist.github.com/vincentbernat/244004c94e1932d86f14

The stat socket should be configured to listen on port 8881. If you are using multiple processes, you need to bind one stat socket for each process, on ports 8881, 8882, 8883, etc. The script should be used as a "pass persist" script with Net-SNMP.

There is also a similar script distributed with HAProxy in contrib/. I didn't test it and it doesn't provide a MIB description.
-- 
Take care to branch the right way on equality.
            -- The Elements of Programming Style (Kernighan & Plauger)
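To make the setup above concrete, here is a hedged sketch of the two pieces of configuration involved. The script path, the placeholder OID, and the nbproc count are illustrative assumptions; the real root OID must be taken from EXCELIANCE-MIB.txt, and the stats-socket syntax shown is the HAProxy 1.5 form.

```
# haproxy.cfg — one stats socket per process when running multi-process
global
    nbproc 3
    stats socket ipv4@127.0.0.1:8881 process 1
    stats socket ipv4@127.0.0.1:8882 process 2
    stats socket ipv4@127.0.0.1:8883 process 3

# snmpd.conf — hand the MIB subtree to the script via pass_persist
# (.1.3.6.1.4.1.99999 is a placeholder; use the root OID declared in
#  EXCELIANCE-MIB.txt, and your actual script path)
pass_persist .1.3.6.1.4.1.99999 /usr/local/bin/haproxy-snmp
```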
Re: acl + map
Hi Willy,

2015-02-25 17:32 GMT+01:00 Willy Tarreau w...@1wt.eu:

> Hi Joris,
>
> On Wed, Feb 25, 2015 at 02:24:45PM +0100, joris dedieu wrote:
> > Hi,
> > I have a list of valid cookies associated with client IPs that I try to match in an ACL. The map format is: cookie-value\tip-address\n
> > This ACL should do:
> >     if (client has cookie plop and plop's value looked up in plop.map returns src) then the acl is valid
> > I tried things like:
> >     acl valid_cookie src %[req.cook(plop),map_str_ip(plop.map)]
> > or
> >     acl valid_cookie req.cook(plop),map_str_ip(plop.map) -m ip %[src]
> > but it clearly doesn't work (error detected while parsing ACL 'valid_cookie': '%[req.cook(plop),map_str_ip(plop.map)]' or %[src] is not a valid IPv4 or IPv6 address). Maybe I misunderstand %[] substitution? Does anyone here know the right way to do that? Maybe the -M switch?
>
> The problem with %[] is that it became widespread enough to let people believe it can be used everywhere. It's only valid in some arguments of the http-request actions, and in log formats of course. It cannot be used to describe ACL patterns since by definition these patterns are constant.

Ok, thanks for this clarification.

> In your case, if you need to check that the combination of (source, cookie) matches one in your table, I think you could proceed like this:
>
> 1) build a composite header which contains $cookie=$ip:
>        http-request add-header blah %[req.cook(plop)]=%[src]
> 2) match this header against your own list of cookie=src entries in an ACL:
>        acl valid_cookie req.hdr(blah) -f valid-cookies.lst
> 3) fill your valid-cookies.lst file with the valid combinations in the form cookie=ip.
> 4) optionally remove the header blah after you've used the valid_cookie ACL.
>
> Hoping this helps,
> Willy

Yes, it helps a lot (even if I'm not really satisfied using this for client identification, but that's another story :)

Best Regards
Joris
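Willy's four steps, assembled into a single sketch. The header name X-Cookie-Src, the deny action, and the sample list entry are illustrative assumptions; note the scratch header should only be removed after every rule that uses the ACL has run.

```
frontend fe_main
    # 1) build a composite "cookie=ip" header
    http-request add-header X-Cookie-Src %[req.cook(plop)]=%[src]

    # 2) match it against the list of valid combinations
    acl valid_cookie req.hdr(X-Cookie-Src) -f valid-cookies.lst

    # 3) valid-cookies.lst holds one cookie=ip entry per line, e.g.:
    #      s3cr3t-token=203.0.113.7

    # use the ACL, then 4) drop the scratch header
    http-request deny if !valid_cookie
    http-request del-header X-Cookie-Src
```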
understanding HAproxy stats
Hello,

I have trouble understanding the stats reports from our HAproxy servers. Can anyone please shed some light on this?

1. On a backend with only one server, scur(BACKEND) > scur(server). How can this be?

    # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
    backend_b,server_a,0,0,1,1,1,92433,167970344,127389583,,0,,0,0,0,0,UP,1,1,0,512,1,20,1,,6373,backend_a/server_a,2,3,,290,92432,0,0,0,0,00,0,0,,,12858,2,530,13826,
    backend_b,BACKEND,48,142,49,143,410,92481,167970344,127389583,0,0,,0,0,0,0,UP,1,1,0,,0,88182,0,,1,20,0,,6373,,1,0,,650,92432,0,0,0,0,0,0,0,0,0,0,183,,,12858,2,530,13826,

2. Same HAproxy instance, on another backend, smax(BACKEND) > slim(server_a) + slim(server_b). Again, how is this possible? It seems like smax(BACKEND) = smax(server_a) + smax(server_b) + qmax(BACKEND)?

    # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
    backend_c,server_a,0,0,0,3,3,95541,232232183,241897167,,0,,0,0,0,0,UP,1,1,0,512,1,19,1,,95507,backend_a/server_a,2,0,,50,95541,0,0,0,0,01,0,2,,,0,1,706,2028,
    backend_c,server_b,0,0,0,3,3,95587,232144919,241789273,,0,,0,0,0,0,UP,1,1,0,512,1,19,2,,95546,backend_a/server_b,2,0,,50,95587,0,0,0,0,00,0,3,,,0,2,675,2082,
    backend_c,BACKEND,0,3,0,9,410,191128,464377102,483686440,0,0,,0,0,0,0,UP,2,2,0,,0,89074,0,,1,19,0,,191053,,1,0,,100,191128,0,0,0,0,1,0,0,0,0,0,2,,,0,1,678,2059,

3. qmax stays at zero on each backend server, even though qmax(BACKEND) > 0. Are queue stats not maintained for each server? (Same CSV as in question 2.)

4. On another HAproxy instance, on a backend with two servers, we always have scur = slim on both servers, but qcur stays at zero. The application servers tell us their threads are ready (zero busy threads). Is there any way to show more info about the sessions which HAproxy thinks are in use?

    # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
    backend_a,server_a,0,0,12,12,12,90,114944,68086,,0,,0,0,0,0,UP,1,1,0,18,6,67872,1015,128,1,9,1,,88,,2,0,,8,L7OK,200,5,0,78,0,0,0,0,00,0,501099,OK,,3,1,42,55,
    backend_a,server_b,0,0,12,12,12,72,87204,47593,,0,,0,0,0,0,UP,1,1,0,25,6,67842,1064,128,1,9,2,,72,,2,0,,7,L7OK,200,4,0,60,0,0,0,0,00,0,501461,OK,,0,1,28,36,
    backend_a,BACKEND,0,3,24,27,410,181,223139,119707,0,0,,5,0,0,0,UP,2,2,0,,6,67872,862,,1,9,0,,160,,1,0,,150,138,0,0,19,0,5,0,0,0,0,0,501099,,,3,1,66,86,

BTW, we are running HAproxy 1.5.9 from the vbernat Ubuntu PPA. Thanks in advance.

Sylvain
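One way to sanity-check numbers like these is to parse the CSV by header name instead of counting commas. Below is a minimal Python sketch (not an official HAProxy tool), with a truncated header and two rows inlined from question 1. The figures also hint at an answer to that question: the BACKEND row's scur appears to include queued sessions, since qcur=48 plus the one active server session equals 49.

```python
import csv
import io

# Stats CSV as emitted by "show stat" on the admin socket; the leading
# "# " marks the header row. Header truncated, two rows inlined from above.
raw = """# pxname,svname,qcur,qmax,scur,smax,slim,stot
backend_b,server_a,0,0,1,1,1,92433
backend_b,BACKEND,48,142,49,143,410,92481
"""

# Strip the "# " prefix so DictReader sees a plain header row.
reader = csv.DictReader(io.StringIO(raw.lstrip("# ")))
rows = {r["svname"]: r for r in reader}

# Compare backend vs server values by name: scur(BACKEND) > scur(server_a),
# and qcur(BACKEND) + scur(server_a) adds up to scur(BACKEND).
print(rows["BACKEND"]["scur"], rows["server_a"]["scur"])  # 49 1
print(int(rows["BACKEND"]["qcur"]) + int(rows["server_a"]["scur"]))  # 49
```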
Re: Integrating a third party library
On Thu, Feb 26, 2015 at 08:30:45AM +0100, Baptiste wrote:
> > > and 2. how could we write a new function in HAProxy which takes a buffer of data in entry and can return a string (or buffer of data)
> >
> > I think that what you want to implement is a sample fetch function. For example, take a look at the recently introduced req.hdr_names function, which iterates over all request headers and produces a string that can be used to build a log line, another header, or whatever. I think it will be straightforward enough for you to understand how to implement this with your lib.
> >
> > Best regards,
> > Willy
>
> Hi Willy,
>
> I think a converter is more suited here. I mean, a fetch can't take a buffer issued from the result of another fetch... The idea would be to configure something like:
>
>     http-request set-header Foobar req.hdr(HEADER),mikefunction(parameters if required)

That's precisely what I don't find clear in Mike's description. Since he said "take a buffer of data", I assumed "use whatever is in the buffer", and that's what a fetch does. Of course if it's just to convert strings or so, a converter is better, but that's not my understanding here.

> Mike, in such case, you want to have a look at this file:
> http://git.haproxy.org/?p=haproxy.git;a=blob_plain;f=src/sample.c;hb=HEAD
> and at the upper, lower, and other converter functions.

Absolutely. Hopefully Mike will show us what he projected to do once it's done!

Willy
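The fetch-versus-converter distinction being debated here is essentially a pipeline: a fetch extracts a sample from the transaction, and converters transform that sample in turn. A toy Python model of the idea (none of these names are HAProxy's, and this is not the C API in src/sample.c; it is purely illustrative):

```python
# Toy model of HAProxy's sample pipeline: a "fetch" pulls a value out of
# the transaction, then zero or more "converters" transform it in turn.

def fetch_req_hdr(txn, name):
    """Fetch: extracts a sample from the request (like req.hdr(NAME))."""
    return txn["headers"].get(name, "")

def conv_upper(sample):
    """Converter: transforms an existing sample (like the 'upper' converter)."""
    return sample.upper()

def run_expr(txn, fetch, convs):
    """Evaluate a sample expression: one fetch, then a chain of converters."""
    sample = fetch(txn)
    for conv in convs:
        sample = conv(sample)
    return sample

txn = {"headers": {"Host": "example.com"}}
# Equivalent in spirit to the expression: req.hdr(Host),upper
result = run_expr(txn, lambda t: fetch_req_hdr(t, "Host"), [conv_upper])
print(result)  # EXAMPLE.COM
```

Mike's function would slot in as another element of the converter chain, which is why it never needs to know where the buffer came from.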
Re: Balancing requests and backup servers
On Thu, Feb 26, 2015 at 3:58 PM, Dmitry Sivachenko trtrmi...@gmail.com wrote:
> Hello!
>
> Given the following configuration:
>
>     backend BC
>         option allbackups
>         server s1 maxconn 30 check
>         server s2 maxconn 30 check
>         server s3 maxconn 30 check
>         server b1 maxconn 30 check backup
>         server b2 maxconn 30 check backup
>
> imagine that s1, s2 and s3 have 30 active sessions and (tcp) checks succeed.

Hi Dmitry. Let me answer inline:

> 1) subsequent requests will be balanced between b1 and b2 because s1, s2 and s3 reached their maxconn

Nope, they'll be queued on the backend until one of the servers has a free slot. b1 and b2 will be used only when s1, s2 AND s3 are all operationally DOWN.

> 2) nbsrv(BC) will still be equal to 3 because checks for s1, s2 and s3 still succeed

Nope, nbsrv is 5, since b1 and b2 should be counted as well.

Baptiste
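Baptiste's two answers can be summed up in one hedged configuration sketch (the addresses and the queue timeout are illustrative assumptions): with option allbackups, overflow beyond maxconn waits in the backend queue rather than spilling onto the backups, and b1/b2 only serve once every active server is DOWN.

```
backend BC
    option allbackups
    # requests beyond 3 x 30 concurrent sessions wait in the backend
    # queue (up to this long); they are NOT sent to the backup servers
    timeout queue 30s
    server s1 10.0.0.1:80 maxconn 30 check
    server s2 10.0.0.2:80 maxconn 30 check
    server s3 10.0.0.3:80 maxconn 30 check
    # used only when s1, s2 and s3 are all DOWN
    server b1 10.0.1.1:80 maxconn 30 check backup
    server b2 10.0.1.2:80 maxconn 30 check backup
```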
Re: peer replication reset values of stick tables
On Thu, Feb 26, 2015 at 4:08 PM, Aurélien Bras aurelien.b...@gmail.com wrote:
> [...] When reloading haproxy I can see the learning process fetching data on port 1024, and the key remains afterwards, but all the counters are reset. [...] Is this normal? My goal is to keep these counters across reloads.

Hi Aurélien,

Yes, this is normal and by design.

Baptiste
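For context on the learning step Aurélien observed: during a reload, the new process learns tables from the old one through the local peer entry, whose name must match the machine's hostname (or whatever name is passed to haproxy with -L). A hedged sketch (hostname and paths are assumptions, not taken from the thread):

```
# haproxy.cfg — the local peer's name ("cldev-lb" here) must match the
# machine's hostname, or the name passed to haproxy with -L
peers mypeers
    peer cldev-lb 10.1.1.101:1024

# reload: the old process (signalled via -sf) pushes its stick-table
# entries to the new one over the peers connection before exiting;
# as Baptiste notes above, keys survive this but the counters are
# reset by design
#
#   haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid \
#           -sf $(cat /var/run/haproxy.pid)
```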