Re: HAProxy clustering
So because one load balancer can reach the service, the others can? The log
spam needs getting rid of anyway: filter it out whether it comes from the
in-service load balancer or one of the out-of-service ones. If you have a
complex health check that creates load, make it a little smarter and cache
its result for a while.

On Fri, 16 Dec 2016 at 19:56, Jeff Palmer <j...@palmerit.net> wrote:
> backend health should be in on the sticktables that are shared between
> all instances, right?
>
> With that in mind, the inactive servers would know the backend states
> if a failover were to occur. No sense in having the log spam, network
> traffic, and load from healthchecks that are essentially useless
> (IMO, of course)
>
> On Fri, Dec 16, 2016 at 2:50 PM, Neil - HAProxy List
> <maillist-hapr...@iamafreeman.com> wrote:
> > Stephan,
> >
> > I'm curious...
> >
> > Why would you want the inactive loadbal not to check the services?
> >
> > If you really really did want that, you could do something horrid like
> > tell keepalived to block access to the backends with iptables when it
> > does not own the service IP.
> >
> > But why? Your healthchecks should be fairly lightweight?
> >
> > Neil
> >
> > On 16 Dec 2016 15:44, "Marco Corte" <ma...@marcocorte.it> wrote:
> >> Hi!
> >>
> >> I use keepalived for IP management.
> >>
> >> I use Ansible on another host to deploy the configuration on the
> >> haproxy nodes.
> >> This setup gives me better control over the configuration: it is split
> >> into several files on the Ansible host, but assembled into a single
> >> config file on the nodes.
> >> This also gives the opportunity to deploy the configuration on one
> >> node only.
> >> On the Ansible host, the configuration changes are tracked with git.
> >>
> >> I also considered an automatic replication of the config between the
> >> nodes but... I did not like the idea.
> >>
> >> .marcoc
>
> --
> Jeff Palmer
> https://PalmerIT.net
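The caching idea suggested above can be wired up with HAProxy's agent-check: point each server at a small agent daemon that runs the expensive probe on its own schedule and merely replies with the cached verdict. A minimal sketch (the address, port, and the agent daemon itself are assumptions, not from the thread):

```
backend app
    # The agent on port 9999 is expected to answer "up" or "down" (or a
    # weight). It can run the heavyweight probe once a minute and serve
    # the cached result in between, so every load balancer polling it
    # adds almost no load to the backend service itself.
    server app1 10.0.0.1:80 check inter 2000 agent-check agent-port 9999
```

This keeps the regular `check` for basic TCP reachability while the agent carries the expensive logic, which also filters out most of the health-check log spam discussed above.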
Re: rspadd X-Frame-Options:\ ALLOW-FROM
Hello the warning explains it. you are attempting to change a response based on a request header. responses dont have access to request headers. there are ways round that this has come up on the list before so archives will have an answer or two Neil On 15 Oct 2016 16:28, "Amol" <mandm_z...@yahoo.com> wrote: > Hi Igor, > Thanks so much for the reply, here is the error/warning i get when i add > your config line > > sudo /etc/init.d/haproxy restart > * Restarting haproxy haproxy [WARNING] 288/112410 (18154) : parsing > [/etc/haproxy/haproxy.cfg:84] : anonymous acl will never match because it > uses keyword 'req.hdr' which is incompatible with 'frontend http-response > header rule' > [WARNING] 288/112410 (18157) : parsing [/etc/haproxy/haproxy.cfg:84] : > anonymous acl will never match because it uses keyword 'req.hdr' which is > incompatible with 'frontend http-response header rule' > >[ OK ] > am i also missing something else? like an acl rule for req.hdr? > > -- > *From:* Igor Cicimov <ig...@encompasscorporation.com> > *To:* Amol <mandm_z...@yahoo.com> > *Cc:* HAproxy Mailing Lists <haproxy@formilux.org> > *Sent:* Friday, October 14, 2016 6:27 PM > *Subject:* Re: rspadd X-Frame-Options:\ ALLOW-FROM > > Amol, > > On Sat, Oct 15, 2016 at 7:21 AM, Amol <mandm_z...@yahoo.com> wrote: > > Hi, > I am trying to configure my LB such that it can allow one of my websites > to render the pages behind this LB. > i am using Ubuntu 12.04 LTS > and > haproxy -v > HA-Proxy version 1.5.14 2015/07/02 > > config file entry > rspadd X-Frame-Options:\ ALLOW-FROM if https://load.example.com > > > You are missing a condition here, try: > > rspadd X-Frame-Options:\ ALLOW-FROM if { req.hdr(Host) -i load.example.com > } > > > > > but i get this error > > [ALERT] 287/161307 (22941) : parsing [/etc/haproxy/haproxy.cfg:83] : error > detected while parsing a 'rspadd' condition : no such ACL : ' > https://load.example.com/'. 
> [ALERT] 287/161307 (22941) : Error(s) found in configuration file : > /etc/haproxy/haproxy.cfg > [ALERT] 287/161307 (22941) : Fatal errors found in configuration. > <https://load.iformbuilder.com/> > > > > my prior setting was > config file entry > rspadd X-Frame-Options:\ SAMEORIGIN > > and that blocked any site from rendering the pages behind this LB. But now > i want it to allow this one link to open the pages. > > Please let me know if anyone has tackled this before. > > > > > -- > Igor Cicimov | DevOps > > > p. +61 (0) 433 078 728 > e. ig...@encompasscorporation.com <http://encompasscorporation.com/> > w*.* www.encompasscorporation.com > > a. > Level 4, 65 York Street, Sydney 2000 > > >
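Since HAProxy 1.6, the usual way round this limitation is to stash the request header in a transaction-scoped variable, which response rules can still read. An untested sketch (the poster is on 1.5.14, so this would need an upgrade; the hostnames are the ones from the thread):

```
frontend https-in
    # Copy the request's Host header into a txn-scoped variable...
    http-request set-var(txn.host) req.hdr(host)
    # ...which, unlike req.hdr, remains available at response time
    http-response set-header X-Frame-Options "ALLOW-FROM https://load.example.com" \
        if { var(txn.host) -i load.example.com }
```

This sidesteps the "anonymous acl will never match" warning because the response rule no longer touches `req.hdr` directly.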
Re: Inform backend about https for http2 connections
Hello

If you can, have the app not specify the scheme for the CSS etc.: just use
//site.com/path, or /path if it is on the same site.

On 6 Aug 2016 04:33, "Igor Cicimov" wrote:
> On 6 Aug 2016 1:31 am, "Matthias Fechner" wrote:
> >
> > Dear all,
> >
> > I use haproxy in tcp mode to have http2 working.
> > Now I have the problem that the backend has to know if the connection
> > was encrypted or not (some websites use this information to add the
> > scheme to css and javascript URIs).
>
> Afaik, since http2 is by default tls encrypted, just specifying h2 as the
> protocol to the backend should be enough, I guess.
>
> > Normally I think a
> > reqadd X-Forwarded-Proto:\ https
> > should do the trick.
> >
> > Will this work if working in tcp mode or are there other tricks to do
> > this?
> >
> > Thanks
> > Matthias
> >
> > --
> > "Programming today is a race between software engineers striving to
> > build bigger and better idiot-proof programs, and the universe trying to
> > produce bigger and better idiots. So far, the universe is winning." --
> > Rich Cook
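Note that `reqadd` cannot help here at all: in tcp mode HAProxy never parses the HTTP stream, so request-header rules are simply not applied. Scheme-relative references sidestep the problem entirely. For example (hostnames are placeholders):

```html
<!-- Scheme-relative: the browser reuses whatever scheme the page itself
     was loaded with, so the app never needs to know http vs https -->
<link rel="stylesheet" href="//site.com/css/main.css">
<!-- Host-relative works too when the asset lives on the same site -->
<script src="/js/app.js"></script>
```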
Re: Only using map file when an entry exists
Thanks Nanad, That works perfectly, thank you On 11 March 2016 at 22:37, Nenad Merdanovic <ni...@nimzo.info> wrote: > Hello Neil, > > You seem to have missed my answer, so I am gonna top post this time :) > > http-request redirect location > %[hdr(host),map(/etc/haproxy/redirect_host.map)] code 301 if { > hdr(host),map(/etc/haproxy/redirect_host.map) -m found } > > Regards, > Nenad > > On 03/11/2016 11:32 PM, Neil - HAProxy List wrote: > > Hello > > > > I've left a little time and no one has said anything more so time for me > > to act and submit a patch. > > > > I want to make functions that can be used in acls and take a map and > > provide has_key and, for completeness, has_value > > > > Are those names uncontroversial/ suitable and, i really hope, is this > > unnecessary as it already exists. > > > > I'm more that a little surprised to find myself the first to want this > > > > Cheers > > > > Neil > > > > On 11 Mar 2016 22:16, "Neil" <n...@iamafreeman.com > > <mailto:n...@iamafreeman.com>> wrote: > > > > Hello > > > > I've left a little time and no one has said anything more so time > > for me to act and submit a patch. > > > > I want to make functions that can be used in acls and take a map and > > provide has_key and, for completeness, has_value > > > > Are those names uncontroversia/ suitablel and, i really hope, is > > this unnecessary as it already exists. > > > > I'm more that a little sutprised to find myself the first to want > this > > > > Cheers > > > > Neil > > > > On 3 Mar 2016 18:08, "Neil - HAProxy List" > > <maillist-hapr...@iamafreeman.com > > <mailto:maillist-hapr...@iamafreeman.com>> wrote: > > > > Thanks Conrad, > > > > That sort of thing looks better that what I had, and I'll give > > it a go. > > > > I still think this is a bit long winded syntax for something > > that probably quite a common things to want to do? A > > map_contains type boolean function still seems like a good to > have? 
> > > > Thanks > > > > Neil > > > > On 3 March 2016 at 13:05, Conrad Hoffmann <con...@soundcloud.com > > <mailto:con...@soundcloud.com>> wrote: > > > > If you are using haproxy >=1.6, you might be able to do > > something like this: > > > > acl no_redir %[req.redir] -m str NO_REDIR > > http-request set-var(req.redir) \ > > %[hdr(host),map(/etc/haproxy/redirect_host.map,NO_REDIR)] > > http-request redirect location %[req.redir] code 301 if > > !no_redir > > > > This is completely made up and untested, but I hope you get > > the idea. > > Avoids a second map lookup altogether, but also map lookups > > are quite fast, > > so unless you map is huge you don't really need to worry > > about this. Also, > > double negation, but this is just to give you some idea > > > > Cheers, > > Conrad > > -- > > Conrad Hoffmann > > Traffic Engineer > > > > SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, > Germany > > > > Managing Director: Alexander Ljung | Incorporated in England > > & Wales > > with Company No. 6343600 | Local Branch Office | AG > > Charlottenburg | > > HRB 110657B > > > > >
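Put together with a sample map file, Nenad's double-lookup answer looks like this (the hostnames come from the original question; otherwise an untested sketch):

```
# /etc/haproxy/redirect_host.map — key is the Host header, value the target
www.oldname.com http://www.shiny.net/collections/oldname

# frontend: the "-m found" guard means hosts absent from the map fall
# through untouched instead of getting a 301 with a blank Location
http-request redirect location %[hdr(host),map(/etc/haproxy/redirect_host.map)] code 301 \
    if { hdr(host),map(/etc/haproxy/redirect_host.map) -m found }
```

The map is consulted twice, but as Conrad notes map lookups are fast, so this only matters for very large maps.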
Re: Only using map file when an entry exists
Hello I've left a little time and no one has said anything more so time for me to act and submit a patch. I want to make functions that can be used in acls and take a map and provide has_key and, for completeness, has_value Are those names uncontroversial/ suitable and, i really hope, is this unnecessary as it already exists. I'm more that a little surprised to find myself the first to want this Cheers Neil On 11 Mar 2016 22:16, "Neil" <n...@iamafreeman.com> wrote: > Hello > > I've left a little time and no one has said anything more so time for me > to act and submit a patch. > > I want to make functions that can be used in acls and take a map and > provide has_key and, for completeness, has_value > > Are those names uncontroversia/ suitablel and, i really hope, is this > unnecessary as it already exists. > > I'm more that a little sutprised to find myself the first to want this > > Cheers > > Neil > On 3 Mar 2016 18:08, "Neil - HAProxy List" < > maillist-hapr...@iamafreeman.com> wrote: > >> Thanks Conrad, >> >> That sort of thing looks better that what I had, and I'll give it a go. >> >> I still think this is a bit long winded syntax for something that >> probably quite a common things to want to do? A map_contains type boolean >> function still seems like a good to have? >> >> Thanks >> >> Neil >> >> On 3 March 2016 at 13:05, Conrad Hoffmann <con...@soundcloud.com> wrote: >> >>> If you are using haproxy >=1.6, you might be able to do something like >>> this: >>> >>> acl no_redir %[req.redir] -m str NO_REDIR >>> http-request set-var(req.redir) \ >>> %[hdr(host),map(/etc/haproxy/redirect_host.map,NO_REDIR)] >>> http-request redirect location %[req.redir] code 301 if !no_redir >>> >>> This is completely made up and untested, but I hope you get the idea. >>> Avoids a second map lookup altogether, but also map lookups are quite >>> fast, >>> so unless you map is huge you don't really need to worry about this. 
>>> Also, >>> double negation, but this is just to give you some idea >>> >>> Cheers, >>> Conrad >>> -- >>> Conrad Hoffmann >>> Traffic Engineer >>> >>> SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany >>> >>> Managing Director: Alexander Ljung | Incorporated in England & Wales >>> with Company No. 6343600 | Local Branch Office | AG Charlottenburg | >>> HRB 110657B >>> >> >>
Re: Only using map file when an entry exists
I'm amazed by the number of typos in one message. ;) On 3 Mar 2016 18:08, "Neil - HAProxy List" <maillist-hapr...@iamafreeman.com> wrote: > Thanks Conrad, > > That sort of thing looks better that what I had, and I'll give it a go. > > I still think this is a bit long winded syntax for something that probably > quite a common things to want to do? A map_contains type boolean function > still seems like a good to have? > > Thanks > > Neil > > On 3 March 2016 at 13:05, Conrad Hoffmann <con...@soundcloud.com> wrote: > >> If you are using haproxy >=1.6, you might be able to do something like >> this: >> >> acl no_redir %[req.redir] -m str NO_REDIR >> http-request set-var(req.redir) \ >> %[hdr(host),map(/etc/haproxy/redirect_host.map,NO_REDIR)] >> http-request redirect location %[req.redir] code 301 if !no_redir >> >> This is completely made up and untested, but I hope you get the idea. >> Avoids a second map lookup altogether, but also map lookups are quite >> fast, >> so unless you map is huge you don't really need to worry about this. Also, >> double negation, but this is just to give you some idea >> >> Cheers, >> Conrad >> -- >> Conrad Hoffmann >> Traffic Engineer >> >> SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany >> >> Managing Director: Alexander Ljung | Incorporated in England & Wales >> with Company No. 6343600 | Local Branch Office | AG Charlottenburg | >> HRB 110657B >> > >
Re: Only using map file when an entry exists
Hello I've left a little time and no one has said anything more so time for me to act and submit a patch. I want to make functions that can be used in acls and take a map and provide has_key and, for completeness, has_value Are those names uncontroversia/ suitablel and, i really hope, is this unnecessary as it already exists. I'm more that a little sutprised to find myself the first to want this Cheers Neil On 3 Mar 2016 18:08, "Neil - HAProxy List" <maillist-hapr...@iamafreeman.com> wrote: > Thanks Conrad, > > That sort of thing looks better that what I had, and I'll give it a go. > > I still think this is a bit long winded syntax for something that probably > quite a common things to want to do? A map_contains type boolean function > still seems like a good to have? > > Thanks > > Neil > > On 3 March 2016 at 13:05, Conrad Hoffmann <con...@soundcloud.com> wrote: > >> If you are using haproxy >=1.6, you might be able to do something like >> this: >> >> acl no_redir %[req.redir] -m str NO_REDIR >> http-request set-var(req.redir) \ >> %[hdr(host),map(/etc/haproxy/redirect_host.map,NO_REDIR)] >> http-request redirect location %[req.redir] code 301 if !no_redir >> >> This is completely made up and untested, but I hope you get the idea. >> Avoids a second map lookup altogether, but also map lookups are quite >> fast, >> so unless you map is huge you don't really need to worry about this. Also, >> double negation, but this is just to give you some idea >> >> Cheers, >> Conrad >> -- >> Conrad Hoffmann >> Traffic Engineer >> >> SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany >> >> Managing Director: Alexander Ljung | Incorporated in England & Wales >> with Company No. 6343600 | Local Branch Office | AG Charlottenburg | >> HRB 110657B >> > >
Re: Only using map file when an entry exists
Thanks Conrad, That sort of thing looks better that what I had, and I'll give it a go. I still think this is a bit long winded syntax for something that probably quite a common things to want to do? A map_contains type boolean function still seems like a good to have? Thanks Neil On 3 March 2016 at 13:05, Conrad Hoffmann <con...@soundcloud.com> wrote: > If you are using haproxy >=1.6, you might be able to do something like > this: > > acl no_redir %[req.redir] -m str NO_REDIR > http-request set-var(req.redir) \ > %[hdr(host),map(/etc/haproxy/redirect_host.map,NO_REDIR)] > http-request redirect location %[req.redir] code 301 if !no_redir > > This is completely made up and untested, but I hope you get the idea. > Avoids a second map lookup altogether, but also map lookups are quite fast, > so unless you map is huge you don't really need to worry about this. Also, > double negation, but this is just to give you some idea > > Cheers, > Conrad > -- > Conrad Hoffmann > Traffic Engineer > > SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany > > Managing Director: Alexander Ljung | Incorporated in England & Wales > with Company No. 6343600 | Local Branch Office | AG Charlottenburg | > HRB 110657B >
Only using map file when an entry exists
Hello

HA-Proxy version 1.5.15 2015/11/01

I've got a service with some redirects for old virtual hosts to new
locations on the main website that I want to store in a map file
/etc/haproxy/redirect_host.map with lines like

www.oldname.com http://www.shiny.net/collections/oldname

My issue is I don't want a redirect to occur when there is no entry in the
map. I started with

http-request redirect location
%[hdr(host),map(/etc/haproxy/redirect_host.map)] code 301

This would take out the whole site, as a request to http://www.shiny.net
gets a redirect with a blank location (and so does
http://www.shiny.net/collections/oldname) - this is because they are all in
the same frontend. So as a hack around, I've taken the first column to
another file and gone with

acl isRedirectHost hdr(host) -i -f /etc/haproxy/acl_isRedirectHost.txt
http-request redirect location
%[hdr(host),map(/etc/haproxy/redirect_host.map)] code 301 if isRedirectHost

This works but is yuck (I'd have to automate generating the acl file from
the map - not hard but not clean).

Ideally I'd like a way to only redirect when a value is in the map. What
would be fine is if there were a contained_in_map function that I could use
something like

http-request redirect location
%[hdr(host),map(/etc/haproxy/redirect_host.map)] code 301 if
%[hdr(host),contained_in_map(/etc/haproxy/redirect_host.map)]

All other suggestions very welcome too

Thank you,
Neil
Re: HAProxy segfault
Hello

Just an observation: you have "option tcpka" twice.

Neil

On 14 May 2015 14:02, David David <dd432...@gmail.com> wrote:
> Hi!
>
> HAProxy 1.6-dev1, CentOS6
>
> Getting a segfault when trying to connect to port 3389.
> segfault at 0 ip (null) sp 7fff18a41268 error 14 in haproxy[40+a4000]
>
> Compiling with next options:
> make TARGET=linux26
> make install
>
> The configuration file is as listed below:
>
> global
>     daemon
>     stats socket /var/run/haproxy.stat mode 600 level admin
>     pidfile /var/run/haproxy.pid
>
> defaults
>     mode http
>     timeout connect 4000
>     timeout client 42000
>     timeout server 43000
>
> listen RDP_Test
>     bind *:3389
>     mode tcp
>     balance leastconn
>     option tcpka
>     tcp-request inspect-delay 5s
>     tcp-request content accept if RDP_COOKIE
>     timeout client 12h
>     timeout server 12h
>     option tcpka
>     option redispatch
>     option abortonclose
>     maxconn 4
>     server TS1 10.64.0.209:3389 weight 100 check agent-check agent-port inter 2000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions
>     server TS2 10.64.0.210:3389 weight 100 check agent-check agent-port inter 2000 rise 2 fall 3 minconn 0 maxconn 0 on-marked-down shutdown-sessions
>
> David.
Re: Access control for stats page
Hello

Yep, there is. Have a frontend send, say, /hastats to a "hastats" backend,
have that backend's stats URL be /hastats too, and set the ACLs in the
frontend. I'll post a config example in a bit.

Neil

On 21 Apr 2015 20:09, CJ Ess <zxcvbn4...@gmail.com> wrote:
> Is there a way to set up an ACL for the haproxy stats page? We do have
> authentication set up for the URL, but we would feel better if we could
> limit access to a whitelist of local networks. Is there a way to do that?
Re: Access control for stats page
Here are some relevant snips. I run this with the same address as the
service.

frontend SSL
    ...
    acl url_hastats url_beg /hastats
    acl location_trusted src 123.123.123.0/24
    acl magic_cookie_trusted hdr_sub(cookie) magicforthissiteonly=foobar_SHA1value_etc
    use_backend hastats if url_hastats location_trusted
    use_backend hastats if url_hastats magic_cookie_trusted
    deny if url_hastats
    ...

backend hastats
    mode http
    stats uri /hastats
    stats realm Service\ Loadbalancer
    stats show-desc <br/><font color='GoldenRod' size='5'>url.domain: Service Loadbalancer</font><br/><font color='blue' size='3'>running on hostname<br/>config version</font>
    stats show-legends
    stats auth admin:password
    stats admin if TRUE

On 21 April 2015 at 21:04, Neil - HAProxy List
<maillist-hapr...@iamafreeman.com> wrote:
> Hello
>
> Yep there is
>
> Have a frontend Send say /hastats to a hastats backend have the backend
> have its stats URL be /hastats too Set the acls in the frontend
>
> I'll post a config example in a bit.
>
> Neil
>
> On 21 Apr 2015 20:09, CJ Ess <zxcvbn4...@gmail.com> wrote:
> > Is there a way to setup an ACL for the haproxy stats page? We do have
> > authentication set up for the URL, but we would feel better if we could
> > limit access to a white list of local networks. Is there a way to do
> > that?
Re: switching backends based on boolean value
If it is http then I'd use a cookie. Apps can decide when to give out that cookie You could use a table and add to the table using socket commands Neil On 17 Apr 2015 05:27, Dennis Jacobfeuerborn denni...@conversis.de wrote: On 17.04.2015 00:51, Igor Cicimov wrote: On Fri, Apr 17, 2015 at 3:26 AM, Dennis Jacobfeuerborn denni...@conversis.de wrote: Hi, I'm trying to find the best way to toggle maintenance mode for a site. I have a regular and a maintenance backend defined an I'm using something like: frontend: acl is_maintenance always_false use_backend back-maintenance if is_maintenance default_backend back Since I saw some ACL modifying command for the unix socket I figured that I could use those to switch the acl dynamically but apparently while there are get/add/del/clear commands there is no actual command to set an acl. Is there a way to accomplish this kind of dynamic switching? Regards, Dennis How about putting the maintenance server as backup in the pool and removing the real server from the pool when due for maintenance and then putting it back when finished. I was hoping to avoid this as I also want to switch between the maintenance/live site based on other ACLs e.g. based on the clients IP address or a header. In that case removing the live servers isn't really an option. Regards, Dennis
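One way to get the dynamic toggle the original poster asks for is a file-backed ACL combined with the runtime `add acl`/`del acl` socket commands: ship an empty list, then add a catch-all entry to flip maintenance on without a reload. An untested sketch (the file path and the socat invocation are assumptions):

```
frontend www
    bind *:80
    # maint.lst ships empty, so nothing matches and traffic flows normally
    acl is_maintenance src -f /etc/haproxy/maint.lst
    use_backend back-maintenance if is_maintenance
    default_backend back

# Toggle at runtime over the stats socket, no reload needed:
#   echo "add acl /etc/haproxy/maint.lst 0.0.0.0/0" | socat stdio /var/run/haproxy.stat
#   echo "del acl /etc/haproxy/maint.lst 0.0.0.0/0" | socat stdio /var/run/haproxy.stat
```

Adding narrower entries (specific client networks instead of 0.0.0.0/0) also covers the per-client switching mentioned at the end of the thread.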
Re: cohaproxy
Hello

That's an error from Tomcat? Raising nofile in /etc/security/limits.conf is
the normal way to fix that.

Neil

On 9 Apr 2015 18:50, ballu balram <ballubalram...@gmail.com> wrote:
> Hi,
>
> After installing haproxy in ubuntu i get this error:
>
> 29-Mar-2015 06:47:43.963 SEVERE [http-nio-9191-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
>  java.io.IOException: Too many open files
>     at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
>     at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:688)
>     at java.lang.Thread.run(Thread.java:745)
>
> My haproxy.cfg is as shown below:
>
> global
>     log 127.0.0.1 local0
>     chroot /var/lib/haproxy
>     stats socket /run/haproxy/admin.sock mode 660 level admin
>     stats timeout 30s
>     user haproxy
>     group haproxy
>     maxconn 300
>     daemon
>     pidfile /var/run/haproxy.pid
>
> defaults
>     log global
>     mode http
>     option httplog
>     option forceclose
>     timeout connect 3
>     timeout client 18
>     timeout server 18
>
> listen stats *:1936
>     stats enable
>     stats uri /stats
>     stats hide-version
>     stats auth xyz:ss
>
> frontend localnodes
>     bind *:9000
>     mode http
>     option forceclose
>     option httplog
>     log global
>     use_backend %[path,map_beg(/usr/local/staging/haproxy_frontend.txt,users)]
>     default_backend users
>
> backend apiproduct
>     mode http
>     balance roundrobin
>     option httpchk HEAD /apiproduct-service/ HTTP/1.0
>     server local hostname:9191 maxconn 10 check inter 1 rise 2 fall 2
>
> Let me know where I am doing wrong. If I check open files with lsof -p it
> gives me max 600 to 800.
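Concretely, raising the open-file limit for the user running Tomcat usually means a pair of lines in /etc/security/limits.conf (the "tomcat" user name and the 65536 value are assumptions; adjust to fit):

```
# /etc/security/limits.conf — raise the file-descriptor limit
# for the user that runs Tomcat
tomcat  soft  nofile  65536
tomcat  hard  nofile  65536
```

The new limit applies to fresh login sessions; `ulimit -n` in a new shell for that user confirms it took effect.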
Re: ldap-check with Active Directory
Hello I was thinking of updating the ldap-check but I think I've a better idea. Macros (well ish). send-binary 300c0201 # LDAP bind request ROOT simple send-binary 01 # message ID send-binary 6007 # protocol Op send-binary 0201 # bind request send-binary 03 # LDAP v3 send-binary 04008000 # name, simple authentication expect binary 0a0100 # bind response + result code: success send-binary 30050201034200 # unbind request could be in a file named macros/ldap-simple-bind then the option tcp-check-macro ldap-simple-bind would use it, I know this is close to includes. similarly macros/smtp-helo-quit connect port 25 expect rstring ^220 send QUIT\r\n expect rstring ^221 or from http://blog.haproxy.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/ # FCGI_BEGIN_REQUEST send-binary 01 # version send-binary 01 # FCGI_BEGIN_REQUEST send-binary 0001 # request id send-binary 0008 # content length send-binary 00 # padding length send-binary 00 # send-binary 0001 # FCGI responder send-binary # flags send-binary # send-binary # # FCGI_PARAMS send-binary 01 # version send-binary 04 # FCGI_PARAMS send-binary 0001 # request id send-binary 0045 # content length send-binary 03 # padding length: padding for content % 8 = 0 send-binary 00 # send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET send-binary 0b055343524950545f4e414d452f70696e67 # SCRIPT_NAME = /ping send-binary 0f055343524950545f46494c454e414d452f70696e67 # SCRIPT_FILENAME = /ping send-binary 040455534552524F4F54 # USER = ROOT send-binary 00 # padding # FCGI_PARAMS send-binary 01 # version send-binary 04 # FCGI_PARAMS send-binary 0001 # request id send-binary # content length send-binary 00 # padding length: padding for content % 8 = 0 send-binary 00 # expect binary 706f6e67 # pong (though for items like send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET I'd prefer a send-as-binary REQUEST_METHOD = GET ) these and many others could be shipped with 
haproxy. this seems to make sense to me as they are small contained logical items Neil On 30 March 2015 at 23:02, Baptiste bed...@gmail.com wrote: you should believe it :) On Mon, Mar 30, 2015 at 11:34 PM, Neil - HAProxy List maillist-hapr...@iamafreeman.com wrote: Hello Thanks so much. That worked well, I now get L7OK/0 in 0ms not sure I believe the 0ms but maybe I should Thanks again, Neil On 30 March 2015 at 22:14, Baptiste bed...@gmail.com wrote: On Mon, Mar 30, 2015 at 10:33 PM, Neil - HAProxy List maillist-hapr...@iamafreeman.com wrote: Hello I'm trying to use ldap-check with active directory and the response active directory gives is not one ldap-check is happy to accept when I give a 389 directory backend ldap server all is well, when I use AD I get 'Not LDAPv3 protocol' I've done a little poking about and found that if ((msglen 2) || (memcmp(check-bi-data + 2 + msglen, \x02\x01\x01\x61, 4) != 0)) { set_server_check_status(check, HCHK_STATUS_L7RSP, Not LDAPv3 protocol); is where I'm getting stopped as msglen is 4 Here is tcpdump of 389 directory response (the one that works) 2 packets 21:29:34.195699 IP 389.ldap HAPROXY.57109: Flags [.], ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 0 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@ ;'...E. 0x0010: 0034 9d07 4000 3f06 3523 ac1b e955 ac18 .4..@ .?.5#...U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8010 (.\...c.w... 0x0030: 0389 2c07 0101 080a 3310 62ca 0137 ..,...3.b..7 0x0040: 2de0 -. 21:29:34.195958 IP 389.ldap HAPROXY.57109: Flags [P.], seq 1:15, ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 14 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@ ;'...E. 0x0010: 0042 9d08 4000 3f06 3514 ac1b e955 ac18 .B..@ .?.5U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8018 (.\...c.w... 
0x0030: 0389 e878 0101 080a 3310 62ca 0137 ...x..3.b..7 0x0040: 2de0 300c 0201 0161 070a 0100 0400 0400 -.0a Here is tcpdump of active directory (broken) 1 packet 21:25:24.519883 IP ADSERVER.ldap HAPROXY.57789: Flags [P.], seq 1:23, ack 15, win 260, options [nop,nop,TS val 1870785 ecr 20331021], length 22 0x: 0050 5688 7042 0050 5688 7780 0800 4500 .PV.pB.PV.w...E. 0x0010: 004a 1d7d 4000 8006 34e3 ac18 280d ac18 .J.}@ ...4...(... 0x0020: 2810 0185 e1bd 5a3f 2ae7 3ced 7b5b 8018 (.Z?*..{[.. 0x0030: 0104 1d7a 0101 080a 001c 8bc1 0136 ...z...6 0x0040: 3a0d 3084 0010 0201
ldap-check with Active Directory
Hello I'm trying to use ldap-check with active directory and the response active directory gives is not one ldap-check is happy to accept when I give a 389 directory backend ldap server all is well, when I use AD I get 'Not LDAPv3 protocol' I've done a little poking about and found that if ((msglen 2) || (memcmp(check-bi-data + 2 + msglen, \x02\x01\x01\x61, 4) != 0)) { set_server_check_status(check, HCHK_STATUS_L7RSP, Not LDAPv3 protocol); is where I'm getting stopped as msglen is 4 Here is tcpdump of 389 directory response (the one that works) 2 packets 21:29:34.195699 IP 389.ldap HAPROXY.57109: Flags [.], ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 0 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@;'...E. 0x0010: 0034 9d07 4000 3f06 3523 ac1b e955 ac18 .4..@.?.5#...U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8010 (.\...c.w... 0x0030: 0389 2c07 0101 080a 3310 62ca 0137 ..,...3.b..7 0x0040: 2de0 -. 21:29:34.195958 IP 389.ldap HAPROXY.57109: Flags [P.], seq 1:15, ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 14 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@;'...E. 0x0010: 0042 9d08 4000 3f06 3514 ac1b e955 ac18 .B..@.?.5U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8018 (.\...c.w... 0x0030: 0389 e878 0101 080a 3310 62ca 0137 ...x..3.b..7 0x0040: 2de0 300c 0201 0161 070a 0100 0400 0400 -.0a Here is tcpdump of active directory (broken) 1 packet 21:25:24.519883 IP ADSERVER.ldap HAPROXY.57789: Flags [P.], seq 1:23, ack 15, win 260, options [nop,nop,TS val 1870785 ecr 20331021], length 22 0x: 0050 5688 7042 0050 5688 7780 0800 4500 .PV.pB.PV.w...E. 0x0010: 004a 1d7d 4000 8006 34e3 ac18 280d ac18 .J.}@...4...(... 0x0020: 2810 0185 e1bd 5a3f 2ae7 3ced 7b5b 8018 (.Z?*..{[.. 
0x0030: 0104 1d7a 0101 080a 001c 8bc1 0136 ...z...6 0x0040: 3a0d 3084 0010 0201 0161 8400 :.0a 0x0050: 070a 0100 0400 0400 this was discussed but not finished before see http://www.serverphorums.com/read.php?10,394453 I can see the string \02\01\01\61 is there but not in the correct place Anyone have any ideas about fixing this so that both (and possibly other) ldap implementations work? Thanks, Neil
Re: ldap-check with Active Directory
Hello Thanks so much. That worked well, I now get *L7OK/0 in 0ms* not sure I believe the 0ms but maybe I should Thanks again, Neil On 30 March 2015 at 22:14, Baptiste bed...@gmail.com wrote: On Mon, Mar 30, 2015 at 10:33 PM, Neil - HAProxy List maillist-hapr...@iamafreeman.com wrote: Hello I'm trying to use ldap-check with active directory and the response active directory gives is not one ldap-check is happy to accept when I give a 389 directory backend ldap server all is well, when I use AD I get 'Not LDAPv3 protocol' I've done a little poking about and found that if ((msglen 2) || (memcmp(check-bi-data + 2 + msglen, \x02\x01\x01\x61, 4) != 0)) { set_server_check_status(check, HCHK_STATUS_L7RSP, Not LDAPv3 protocol); is where I'm getting stopped as msglen is 4 Here is tcpdump of 389 directory response (the one that works) 2 packets 21:29:34.195699 IP 389.ldap HAPROXY.57109: Flags [.], ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 0 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@;'...E. 0x0010: 0034 9d07 4000 3f06 3523 ac1b e955 ac18 .4..@.?.5#...U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8010 (.\...c.w... 0x0030: 0389 2c07 0101 080a 3310 62ca 0137 ..,...3.b..7 0x0040: 2de0 -. 21:29:34.195958 IP 389.ldap HAPROXY.57109: Flags [P.], seq 1:15, ack 15, win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 14 0x: 0050 5688 7042 0064 403b 2700 0800 4500 .PV.pB.d@;'...E. 0x0010: 0042 9d08 4000 3f06 3514 ac1b e955 ac18 .B..@.?.5U.. 0x0020: 2810 0185 df15 5cab ffcd 63ba 77d3 8018 (.\...c.w... 0x0030: 0389 e878 0101 080a 3310 62ca 0137 ...x..3.b..7 0x0040: 2de0 300c 0201 0161 070a 0100 0400 0400 -.0a Here is tcpdump of active directory (broken) 1 packet 21:25:24.519883 IP ADSERVER.ldap HAPROXY.57789: Flags [P.], seq 1:23, ack 15, win 260, options [nop,nop,TS val 1870785 ecr 20331021], length 22 0x: 0050 5688 7042 0050 5688 7780 0800 4500 .PV.pB.PV.w...E. 0x0010: 004a 1d7d 4000 8006 34e3 ac18 280d ac18 .J.}@...4...(... 
0x0020: 2810 0185 e1bd 5a3f 2ae7 3ced 7b5b 8018 (.Z?*..{[.. 0x0030: 0104 1d7a 0101 080a 001c 8bc1 0136 ...z...6 0x0040: 3a0d 3084 0010 0201 0161 8400 :.0a 0x0050: 070a 0100 0400 0400 this was discussed but not finished before see http://www.serverphorums.com/read.php?10,394453 I can see the string \02\01\01\61 is there but not in the correct place Anyone have any ideas about fixing this so that both (and possibly other) ldap implementations work? Thanks, Neil Hi Neil Yes you can switch to the tcp-check checking method. I works with binary protocols as well. Here is what I use for the AD in my lab: option tcp-check tcp-check connect port 389 tcp-check send-binary 300c0201 # LDAP bind request ROOT simple tcp-check send-binary 01 # message ID tcp-check send-binary 6007 # protocol Op tcp-check send-binary 0201 # bind request tcp-check send-binary 03 # LDAP v3 tcp-check send-binary 04008000 # name, simple authentication tcp-check expect binary 0a0100 # bind response + result code: success tcp-check send-binary 30050201034200 # unbind request You could add the same sequence for LDAPs on port 636: tcp-check connect port 636 ssl tcp-check send-binary 300c0201 # LDAP bind request ROOT simple tcp-check send-binary 01 # message ID tcp-check send-binary 6007 # protocol Op tcp-check send-binary 0201 # bind request tcp-check send-binary 03 # LDAP v3 tcp-check send-binary 04008000 # name, simple authentication tcp-check expect binary 0a0100 # bind response + result code: success tcp-check send-binary 30050201034200 # unbind request Note for myself: put this tip on the blog.. Baptiste
Re: no-sslv3 in default
Hello

I'd go further. SSLv3 is an obsolete protocol -- does anyone disagree with that?

For a start, make no-sslv3 the default and add an enable-obsolete-sslv3 option. Or better, make enabling it a compile-time option. Or maybe just get rid of it altogether?

The examples on the web and in this mailing list's archive should be safe for beginners to use without opening themselves up to SSLv3 issues. And it'll save us all having to remember to type 8 chars to disable support for something our clients do not use.

Cheers
Neil

On 15 Oct 2014 20:11, Bryan Talbot bryan.tal...@playnext.com wrote:

With SSLv3 being so old, and in light of new (POODLE) exploits driving additional nails into its coffin, it would be nice to disable SSLv3 in a defaults section so that it doesn't get enabled by accident when someone adds a new bind line. Docs for 1.5 say that no-sslv3 is not supported in a defaults section. Can that option be added and made available in 1.5?

-Bryan
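For the archive: to my understanding, HAProxy 1.5 gained global directives that achieve what Bryan asks for, without needing no-sslv3 on every bind line (check the documentation of your build before relying on them). A sketch:

```
global
    # applied to every subsequent "bind ... ssl" line unless overridden there
    ssl-default-bind-options no-sslv3
    # same idea for outgoing TLS connections to servers
    ssl-default-server-options no-sslv3
```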
Re: Using a WhiteList in HAProxy 1.5
Hi

If you only have one range and it does not change often, then an ACL file can be avoided:

    http-request deny unless { src 123.123.123.123/123 }

If you have more than one range, an ACL should be used. Only if you have many, or they change often, would a file suit. That is clearer, IMHO.

Neil

On 16 Jul 2014 17:10, Baptiste bed...@gmail.com wrote:

On Wed, Jul 16, 2014 at 5:45 PM, JDzialo John jdzi...@edrnet.com wrote:

Hi Guys,

I want to only allow certain internal company IP addresses to have access to one of my web farms. I am using haproxy 1.5 on Debian 7. I am using a whitelist.lst file with the following contents...

    10.0.0.0/8

Here is my frontend configuration...

    frontend https-in
        bind *:443 ssl crt /etc/ssl/xxx.cert.chain.pem
        http-request allow if { src -f /etc/haproxy/whitelist.lst }
        reqadd X-Forwarded-Proto:https
        reqadd X-Forwarded-Port:443
        timeout client 60
        default_backend web

However any IP is still allowed through this frontend. It does not appear to be restricting access to any other IP. Am I missing something in my configuration?

Thanks

John Dzialo | Linux System Administrator
Direct 203.783.8163 | Main 800.352.0050
Environmental Data Resources, Inc.
440 Wheelers Farms Road, Milford, CT 06461
www.edrnet.com | commonground.edrnet.com

Hi John,

Please avoid HTML mails... Give a try to the following configuration:

    http-request deny unless { src -f /etc/haproxy/whitelist.lst }

Baptiste
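The likely reason the original config let everything through: `http-request allow` only short-circuits for matching requests, and with no `deny` rule after it the non-matching traffic still falls through to the default (allow). A sketch of the deny-based form Baptiste suggests, with the inline alternative for a single range (addresses are placeholders):

```
frontend https-in
    bind *:443 ssl crt /etc/ssl/xxx.cert.chain.pem
    # file-based whitelist: one address or CIDR range per line
    http-request deny unless { src -f /etc/haproxy/whitelist.lst }
    # inline alternative for one rarely-changing range:
    # http-request deny unless { src 10.0.0.0/8 }
    default_backend web
```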
Re: Binaries for HAProxy.
And let's not do too much to dampen any pressure to get haproxy 1.5 into RHEL 7 and Ubuntu 14.04.

Neil

On 16 Jul 2014 16:12, Ghislain gad...@aqueos.com wrote:

"Just put http://nd-build-01.linux-appliance.net/repos/centos/haproxy/haproy-centos-6x.repo under /etc/yum.repos.d/ and issue yum install haproxy."

Of course, you do trust the security of your entire server to this repo? Before doing that, just be sure of what this implies :) There is no issue in trusting someone, but remember that you trust this someone to install software as root on your server and to update the package when a new version comes. Do a minimum of homework before authorizing repos.

I do trust the Debian team for the backported 1.5 haproxy package, but nevertheless I asked here if they were legit, verified what I could, and limited the packages I accept from the repo to a minimum, just in case. I think the same goes for CentOS/Red Hat repos: do check the source, and if you are not sure, build it yourself when there are no official sources.

regards,
Ghislain.
Re: 1.5 latest segfault trying to negate acl
Hi

Thank you, I can confirm this fixes the issue for me.

Thanks,
Neil

On 9 April 2014 12:35, Willy Tarreau w...@1wt.eu wrote:

Hi guys,

Sorry it took that long to take a look at it. I've just pushed the patch, it's available here:

http://git.1wt.eu/web?p=haproxy.git;a=commitdiff_plain;h=6a0b6bd648592e73f42fb8e7341bf984d26ba8dc

The bug happens when the sc0_get_gpc0() statement is applied to an explicit table while sc0 is not yet tracked. The implicit table already contained the check for the existence of the tracker, but not the code doing the lookup in an alternate table.

Thanks for reporting this!
Willy
1.5 latest segfault trying to negate acl
Hello

My logs have an uncomforting line:

    kernel: [7302179.685736] haproxy[1766]: segfault at 7c ip 7f6629410a9f sp 7fffdaf98868 error 4 in libc-2.15.so[7f66292ae000+1b5000]

We caused this trying to use this config, which tries to track the source of a connection unless it matches an ACL, following along the lines of http://blog.serverfault.com/2010/08/26/1016491873/

    global
        maxconn 4096
        user haproxy
        group haproxy

    defaults
        mode http
        retries 3
        option redispatch
        maxconn 2000
        timeout connect 5s
        timeout client 20s
        timeout server 60s

    frontend http 0.0.0.0:80
        maxconn 25000
        default_backend be_default
        stick-table type ip size 200 expire 10s store gpc0
        acl on_naughtystep sc0_get_gpc0(http) gt 0
        use_backend be_badman if on_naughtystep
        # Both these directives will make haproxy segfault
        tcp-request connection track-sc0 src if !on_naughtystep
        # tcp-request connection track-sc0 src unless on_naughtystep
        # This one doesn't
        # tcp-request connection track-sc0 src

    backend be_default
        balance roundrobin
        fullconn 1000
        server server server:80 maxconn 50 check inter 2000 rise 2 fall 2

    backend be_badman
        block if TRUE

haproxy running is compiled from head:

    haproxy -vv
    HA-Proxy version 1.5-dev22 2014/02/03
    Copyright 2000-2014 Willy Tarreau w...@1wt.eu

    Build options :
      TARGET  = linux26
      CPU     = generic
      CC      = gcc
      CFLAGS  = -O2 -g -fno-strict-aliasing
      OPTIONS = USE_LINUX_SPLICE=1 USE_OPENSSL=1 USE_PCRE=1

    Default settings :
      maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

    Encrypted password support via crypt(3): yes
    Built without zlib support (USE_ZLIB not set)
    Compression algorithms supported : identity
    Built with OpenSSL version : OpenSSL 1.0.1 14 Mar 2012
    Running on OpenSSL version : OpenSSL 1.0.1 14 Mar 2012
    OpenSSL library supports TLS extensions : yes
    OpenSSL library supports SNI : yes
    OpenSSL library supports prefer-server-ciphers : yes
    Built with PCRE version : 8.12 2011-01-15
    PCRE library supports JIT : no (USE_PCRE_JIT not set)
    Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
    Available polling systems :
          epoll : pref=300, test result OK
           poll : pref=200, test result OK
         select : pref=150, test result OK
    Total: 3 (3 usable), will use epoll.

Any ideas what to do next?

Thanks
Neil
Re: Haproxy 1.4 url redirection issue
Hello Amol

Here is an example of the sort of thing I use. The 3 important things are:

    ServerName https://servicename.domain.com:443
    SetEnv HTTPS on
    UseCanonicalName On

    <VirtualHost *:8080>
      ServerName https://servicename.domain.com:443

      ## Vhost docroot
      DocumentRoot /var/www/

      ## Directories, there should at least be a declaration for /var/www
      <Directory /var/www>
        Options Indexes ExecCGI
        AllowOverride None
        Order allow,deny
        Allow from all
      </Directory>

      ## Logging
      LogLevel warn
      ServerSignature Off

      ## Custom fragment
      # This tricks PHP into believing the script was accessed over SSL
      SetEnv HTTPS on
      DirectoryIndex index.php
      UseCanonicalName On
      ErrorLog "|/usr/bin/cronolog --link /var/log/apache2/servicename_error.log /var/log/apache2/%Y/servicename_error-%Y%m%d.log"
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" direct
      LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxied
      SetEnvIf Remote_Addr ^ direct # make it always set
      SetEnvIf X-Forwarded-For ^.*\..*\..*\..* !direct
      SetEnvIf X-Forwarded-For ^.*\..*\..*\..* proxied
      SetEnvIf Request_URI ^/healthcheck$ !direct # keep these
      SetEnvIf Request_URI ^/healthcheck$ !proxied
      CustomLog "|/usr/bin/cronolog --link /var/log/apache2/servicename_directaccess /var/log/apache2/%Y/servicename_directaccess-%Y%m%d.log" direct env=direct
      CustomLog "|/usr/bin/cronolog --link /var/log/apache2/servicename_access /var/log/apache2/%Y/servicename_access-%Y%m%d.log" proxied env=proxied
    </VirtualHost>

I like to log traffic from the loadbal separately to traffic from the public, and I ignore /healthcheck from the loadbal but not from others. You'll need to tell haproxy to "option forwardfor". Also using cronolog.

Neil

On 1 March 2014 15:27, Baptiste bed...@gmail.com wrote:

Hi

More chance to get an answer from Apache 2.2 and wordpress people...

Baptiste

On Fri, Feb 28, 2014 at 4:12 PM, Amol mandm_z...@yahoo.com wrote:

Well, the application behind haproxy in this case is wordpress on apache2.2, any settings there?
On Friday, February 28, 2014 4:57 AM, Baptiste bed...@gmail.com wrote:

It may not fix the issue, but at least the configuration will do what you expect from it... That said, the issue may be in the application too :) It is commonly seen that applications don't behave properly when SSL offloading is enabled in front of them.

Baptiste

On Thu, Feb 27, 2014 at 4:16 PM, Amol mandm_z...@yahoo.com wrote:

Thanks Baptiste, let me give that a try.

On Thursday, February 27, 2014 9:37 AM, Baptiste bed...@gmail.com wrote:

Hi Amol,

There are a few improvements you can do. First update your frontend acl to:

    acl host_xx hdr(host) -i xx.com

Then in your backend, this ACL should never match:

    acl login_page url_beg /xyz

Replace url_beg by path_beg. Your problem is not there as well: I think your application server is sending hardcoded data or Location headers. Analyzing the body of the pages and HAProxy logs may help here.

Baptiste

On Tue, Feb 25, 2014 at 4:56 PM, Amol mandm_z...@yahoo.com wrote:

Hi, I am using HA-Proxy version 1.4.12 and I have an issue trying to redirect my website to http.

Requirement: when a user types in http://website_name.com he should not be redirected to https://website_name.com. Currently it does that, and some of the video links on our main page do not work (basically vimeo has http links while our page is https, so it throws a security exception). At the same time we need users with http://website_name.com/xyz to be redirected to https://website_name.com/xyz (this helps users log in to the secure application).

So under my current configuration I cannot get the first part to work: basically www.website_name.com works and stays http, but when I type http://website_name.com it does a redirection to https.

    frontend http-in
        bind xx.xx.xx.xx:80 name http
        bind 10.xx.xx.xx:8000 name https # forwarded by stunnel
        acl host_xx hdr_beg(host) -i xx.com
        use_backend xx-http if host_xx
        default_backend xx-https

    backend xx-http
        balance roundrobin
        cookie BALANCEID insert indirect nocache
        option http-server-close
        option httpchk OPTIONS /check.txt HTTP/1.1\r\nHost:\ www
        server xx-app1 xx.xx.xx.xx:80 cookie A check
        server xx-app6 xx.xx.xx.xx:80 cookie B check backup
        acl secure dst_port eq 8000
        acl login_page url_beg /xyz
        redirect prefix https://xx.com if login_page !secure

    backend xx-https
        mode http
        balance roundrobin
        cookie BALANCEID insert indirect nocache
        option http-server-close
        # option forwardfor except 127.0.0.1
        option httpchk OPTIONS /check.txt HTTP/1.1\r\nHost:\ www
        server xx-app1 xx.xx.xx.xx:80 cookie s1 weight 1 maxconn 5000 check
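For clarity, here is Baptiste's fix applied to the relevant part of the backend (a sketch; xx.com and /xyz are the original placeholders). path_beg matches the request path as it arrives in the request line, whereas url_beg on a relative request line never matches the /xyz prefix here:

```
backend xx-http
    acl secure dst_port eq 8000
    # path_beg matches the URL path; url_beg would not fire here
    acl login_page path_beg /xyz
    redirect prefix https://xx.com if login_page !secure
```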
Re: Just a simple thought on health checks after a soft reload of HAProxy....
Hello

Regarding restarts, rather than cold starts: if you configure peers, the state from before the restart should be kept. The new process haproxy creates is automatically a peer to the existing process and gets the state as it was.

Neil

On 23 Feb 2014 03:46, Patrick Hemmer hapr...@stormcloud9.net wrote:

From: Sok Ann Yap sok...@gmail.com
Sent: 2014-02-21 05:11:48 E
To: haproxy@formilux.org
Subject: Re: Just a simple thought on health checks after a soft reload of HAProxy

Patrick Hemmer haproxy@... writes:

From: Willy Tarreau w at 1wt.eu
Sent: 2014-01-25 05:45:11 E

Till now that's exactly what's currently done. The servers are marked almost dead, so the first check gives the verdict. Initially we had all checks started immediately. But it caused a lot of issues at several places where there were a high number of backends or servers mapped to the same hardware, because the rush of connections really caused the servers to be flagged as down. So we started to spread the checks over the longest check period in a farm.

Is there a way to enable this behavior? In my environment/configuration, it causes absolutely no issue that all the checks be fired off at the same time. As it is right now, when haproxy starts up, it takes quite a while to discover which servers are down.

-Patrick

I faced the same problem in http://thread.gmane.org/gmane.comp.web.haproxy/14644

After much contemplation, I decided to just patch away the initial spread-check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-proxy/haproxy/files/haproxy-immediate-first-check.diff

I definitely think there should be an option to disable the behavior. We have an automated system which adds and removes servers from the config, and then bounces haproxy. Every time haproxy is bounced, we have a period where it can send traffic to a dead server. There's also a related bug on this.

The bug is that when I have a config with "inter 30s fastinter 1s" and no httpchk enabled, when haproxy first starts up, it spreads the checks over the period defined as fastinter, but the stats output says "UP 1/3" for the full 30 seconds. It also says "L4OK in 30001ms", when I know it doesn't take the server 30 seconds to simply accept a connection. Yet you get different behavior when using httpchk. When I add "option httpchk", it still spreads the checks over the 1s fastinter value, but the stats output goes to full UP immediately after the check occurs, not "UP 1/3". It also says "L7OK/200 in 0ms", which is what I expect to see.

-Patrick
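For reference, a minimal sketch of the peers mechanism Neil mentions above (names and addresses are placeholders; the peer name must match the local hostname or the value given with haproxy's -L option):

```
peers mypeers
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

backend be_default
    # table contents are synchronised between peers, and the old process
    # hands its entries to the new one on a soft reload (-sf)
    stick-table type ip size 200k expire 30m store gpc0 peers mypeers
```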
Re: Extending Proxy Protocol
On 30 Jan 2014 08:12, Willy Tarreau w...@1wt.eu wrote:

Hi David,

On Wed, Jan 29, 2014 at 10:53:22PM -0500, David S wrote:

I want to use HAProxy to terminate my incoming SSL connections and forward the messages to my server application. My challenge is that my application needs information from the client certificates. The Proxy Protocol is one way that connection information can be forwarded from HAProxy to the receiver. I'm interested in extending the Proxy Protocol to include client certificate information. The Proxy Protocol documentation mentions that this has been considered before.

Yes indeed.

Do you have any advice for someone who might start developing a patch to extend Proxy Protocol?

The difficulty lies more in trying to achieve something compatible with existing implementations than in writing the code. One of the problems is to consider other information some people might want to pass. I'm thinking about the interface name, SSL/TLS version, ciphers, SSL session ID, SNI, local cert, ... We don't need to implement all this, but we need to ensure that whatever we add will not prevent these from being added later.

One possibility would be to add new keywords. But that makes something quite complex to parse for receivers, and it's already very problematic not to know how many bytes to read. For example, in Postfix, postscreen uses recv(MSG_PEEK) and cannot poll, so it would like to know easily how many bytes to read in each block and, if possible, not to have to parse one char at a time.

Another option could be to implement it only in the V2 of the protocol, which is binary. The length is variable and depends on the types present in the header, so the parsing is very fast. I think one solution would be to implement a new command which would represent the encapsulated information, and decide that LOCAL (\x00) or PROXY (\x01) are always final. For example, let's imagine we have SSLCRT=\x02 followed by a length and a cert, then by a command again. By doing so we can easily implement up to 254 extra commands, thus as many extra types.

One point of particular care is that the length of each field is encoded on 8 bits. If that's not always enough, maybe it would be simple enough to decide to cut large data into chunks. Another possibility would be to use a variable-length encoding, but this is not always properly implemented, and is sometimes hard to implement for the sender. For example, if we consider 16kB per data type, we can encode the length using values 0..191 as 1 byte, and values 192..255 as 14 bits (6 bits followed by an extra byte). But as it stands now, I really think that chunking large data into 255-or-less chunks is much easier for everyone, including the receiver, which will need fewer buffers.

Are there alternatives that I should consider?

There's always an alternative, which is to decide whether or not it would make sense to migrate the application to HTTP, where all of this becomes immediately available for free. Maybe at the beginning it's not planned to do so, but in the long term, using HTTP is useful because it adds lots of new possibilities (proxies, compression, cookies, ...).

Regards,
Willy

Another HTTP proxy, 'pound', passes on this information by adding HTTP headers similar to X-Forwarded-For. It would, IMHO, be great to be able to take arbitrary headers from the client, mangle them, and pass them on to backend servers, use them in ACLs, or put them in stick tables. Similarly, passing from backend to client after mangling would be useful.
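Willy's 14-bit length scheme can be sketched as follows. This is purely my illustration of the encoding he describes in the message above, not code from any proxy-protocol implementation:

```python
def encode_len(n):
    """Encode a chunk length: 0..191 in one byte; 192..16383 in two
    bytes, where the first byte carries 6 high bits offset by 192."""
    if n < 0:
        raise ValueError("negative length")
    if n <= 191:
        return bytes([n])
    if n <= 16383:
        hi, lo = divmod(n, 256)       # hi fits in 6 bits (0..63)
        return bytes([192 + hi, lo])
    raise ValueError("chunk larger than 16kB; split it into chunks")

def decode_len(buf):
    """Return (length, bytes_consumed) for a buffer starting with a length."""
    if buf[0] <= 191:
        return buf[0], 1
    return ((buf[0] - 192) << 8) | buf[1], 2
```

A receiver can thus tell from the first byte alone how many bytes the length field itself occupies, which addresses the recv(MSG_PEEK) concern.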
Re: Is there a way to mention ssl password in haproxy.cfg file
Hello

Off the top of my head: you could tell haproxy that the key is in a secured directory, say something like /dev/shm. Then have your own init script that unlocks the private key and puts it where haproxy expects it (openssl will do that). After haproxy starts, it can be deleted. It can do it again for restarts.

Thanks.
Neil

On 28 Jan 2014 07:12, Sukanta Saha ss...@sprinklr.com wrote:

Thanks for your suggestions.

Thanks
Sukanta

On Tue, Jan 28, 2014 at 12:13 PM, Willy Tarreau w...@1wt.eu wrote:

On Mon, Jan 27, 2014 at 10:24:35PM +0100, Baptiste wrote:

Hi,

You can't do this from HAProxy's configuration file. The passphrase is requested by your OpenSSL library. If there is a passphrase on your private key, there is a good reason: keep it secret. Maybe hacking HAProxy's start script with 'expect' could do the trick, but I'm not sure.

By the way, we've been discussing this point for some time with Emeric. It seems that a clean solution would consist in having a password server: an external process that haproxy would query upon startup. This would allow us to use whatever mechanisms are available to feed haproxy with the needed passwords, without having to type them upon every reload and without leaving them in clear in any config. You would, for example, log into the system at boot, start the agent and type your password, and then it would not be needed anymore. A bit like ssh-agent, in fact.

We need to think about some protections though, probably just at the socket level. Another difficulty would be to verify that the correct password was fed the first time. Maybe storing a short hash would work; this is still something to think about.

Any ideas on the subject are welcome, of course!

Willy
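Neil's suggestion as a shell sketch. This is an illustration only: the paths and the throwaway demo key are my own stand-ins (a real init script would decrypt a persistent key into /dev/shm and prompt for the passphrase instead of passing it on the command line):

```shell
set -e
KEYDIR=$(mktemp -d)    # stand-in for a 0700 directory under /dev/shm

# create a throwaway passphrase-protected key so the sketch is self-contained
openssl genrsa -aes256 -passout pass:demo -out "$KEYDIR/site.key.enc" 2048

# unlock the key into the private directory (interactively, openssl prompts)
openssl rsa -in "$KEYDIR/site.key.enc" -passin pass:demo -out "$KEYDIR/site.key"
chmod 600 "$KEYDIR/site.key"

# ... point haproxy's "bind ... ssl crt" at $KEYDIR, start it, then remove
# the unlocked key once the process has loaded it ...
rm -f "$KEYDIR/site.key.enc"
```

The same script can be re-run before each restart, so the passphrase never lands on disk in clear outside the tmpfs.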
Re: HAProxy Next?
If anyone wants me to rebase sflow/haproxy against the latest trunk or a specific release, let me know.

Neil
--
Neil McKee
InMon Corp. http://www.inmon.com

On Tue, Dec 17, 2013 at 1:01 AM, Annika Wickert a.wick...@traviangames.com wrote:

Hi

Hi!

- sflow output

Can't log-format already do this?

Sure, but it might integrate better with the rest of the networking infrastructure if sFlow is supported.

FYI, Neil Mckee has a fork available with sflow support: http://marc.info/?t=13673552702r=1w=2 http://blog.sflow.com/2013/05/haproxy.html https://github.com/sflow/haproxy

I know ;). So it would be nice to merge the code into the official release :).

Regards, Lukas

Regards, Annika
Re: HAProxy Next?
Hi

I'd like the option of a web-based API to replace the functionality of the web admin pages: a service which can be used remotely to monitor and control multiple haproxy instances, and which provides any fancy authentication and auditing outside of the haproxy service, using whichever tech seems appropriate. Exposing the socket via xinetd doesn't really do it, for me at least.

Neil

On 17 Dec 2013 08:16, Annika Wickert a.wick...@traviangames.com wrote:

Hi all,

we did some thinking about how to improve haproxy and which features we'd like to see in next versions. We came up with the following list and would like to discuss if they can be done/should be done or not.

- One global stats socket which can be switched through to see stats of every bind process, and also an overall overview summed up from all backends and frontends.
- One global control socket to control every backend server and set them inactive or active on the fly.
- In general, better nbproc > 1 support.
- The possibility to maintain one config file for each backend/frontend pair.
- CPU pinning in haproxy without manually using taskset/cpuset.
- sFlow output.
- Latency metrics on the stats interface (frontend and backend; avg, 95%, 90%, max, min).
- Access list for the stats socket, or LDAP authentication for the stats socket.

Are there any other things which would be cool? I hope we can have a nice discussion about a "fancy" feature set which could be provided by lovely haproxy.

Best regards,
Annika

---
Systemadministration
Travian Games GmbH
Wilhelm-Wagenfeld-Str. 22
80807 München
Germany
a.wick...@traviangames.com
www.traviangames.de
url32+src - like base32+src but whole url including parameters
Hello

I have a need to limit traffic to each URL from each source address, much like base32+src but with the whole URL including parameters. (This came from looking at the recent 'Haproxy rate limit per matching request' thread.)

Attached is a patch that seems to do the job; it's a copy-and-paste job of the base32 functions. The url32 function seems to work too: using 2 machines to request the same URL locks me out of both if I abuse from either with the url32 key function, and only the one if I use url32_src.

Neil

[Attachment: url32+src (binary data)]
Re: Haproxy rate limit per matching request
Hello

Chris and I followed this example, but found that it limits by URL for all users. That might be what you want in a slashdotting, but it's not what we want for individual users falling asleep with their nose on the F5 (reload) key. We looked at base32+src rather than url, but that excludes the URL parameters. I've started a separate thread with a new url32+src function.

Neil

On 1 November 2013 18:39, Cyril Bonté cyril.bo...@free.fr wrote:

Hi Przemyslaw,

Le 31/10/2013 12:05, Przemysław Hejman a écrit :

Hello guys, it's me once again. I just wanted to share my experiences after several very simple acceptance tests. First of all, I've found that the whitelist did not work -- I had to change my configuration to something like this:

    global
        stats socket /tmp/haproxy.sock

    defaults
        mode http
        timeout connect 5000ms
        timeout client 5ms
        timeout server 5ms

    frontend app
        bind *:8080
        option http-server-close
        stick-table type integer size 200k expire 30m store http_req_cnt
        acl white_list src 127.0.0.1 192.168.1.205 192.168.0.133
        tcp-request content accept if white_list
        tcp-request content track-sc0 urlp(SID,?)
        tcp-request content reject if { sc0_http_req_cnt gt 2 }
        tcp-request inspect-delay 10s
        default_backend web_servers

    backend web_servers
        balance roundrobin
        server web01 127.0.0.1:80 check inter 1000

Therefore, I've decided to do a little test. I've put the request sent by curl in a for loop like this:

    for i in `seq 1 400`; do curl 192.168.0.132:8080/index.html?SID=33?asdf; done

Everything seemed fine, HOWEVER I noticed that several (about 20) requests randomly PASSED.

Sorry, I didn't have time to reply to the configuration you provided last time. But it is normal if it didn't work 100% of the time: this is because you forgot to add a line that waits for a layer-7 information, as Willy said. The important thing was to add:

    tcp-request content reject if !HTTP

Pushing the stick-table and tracking/rejecting operations back to the backend definition solved my problem.

Indeed, this is another way to wait for HTTP data to be complete, as an HTTP frontend will use the backend only once the headers are received.

Thanks for sharing.

--
Cyril Bonté
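Cyril's point, folded back into the frontend from the thread (a sketch; addresses and the SID parameter are the original placeholders). The reject-if-!HTTP line makes the content rules wait for complete layer-7 data before tracking:

```
frontend app
    bind *:8080
    stick-table type integer size 200k expire 30m store http_req_cnt
    tcp-request inspect-delay 10s
    # wait until the request parses as HTTP before evaluating anything else
    tcp-request content reject if !HTTP
    acl white_list src 127.0.0.1 192.168.1.205 192.168.0.133
    tcp-request content accept if white_list
    tcp-request content track-sc0 urlp(SID,?)
    tcp-request content reject if { sc0_http_req_cnt gt 2 }
    default_backend web_servers
```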
Re: haproxy with sFlow instrumentation
Willy,

Thank you for your comments. I agree that the logging you already have in haproxy is more flexible and detailed, and I acknowledge that the benefit of exporting sFlow-HTTP records is not immediately obvious. The value that sFlow brings is that the measurements are standard, and are designed to integrate seamlessly with sFlow feeds from switches, routers, servers and applications to provide a comprehensive end-to-end picture of the performance of large-scale multi-tier systems. So the purpose is not so much to troubleshoot haproxy in isolation, but to analyze the performance of the whole system that haproxy is part of.

Perhaps the best illustration of this is the 1-in-N sampling feature. If you configure sampling.http to be, say, 1-in-400, then you might only see a handful of sFlow records per second from an haproxy instance, but that is enough to tell you a great deal about what is going on -- in real time. And the data will not bury you even if you have a bank of load-balancers, hundreds of web-servers, a huge memcache cluster and a fast network interconnect all contributing their own sFlow feeds to the same analyzer. Dave Mangot of Tagged.com describes how sFlow can help in large-scale environments in this Velocity talk: http://blog.sflow.com/2013/04/velocity-conference-talk.html

Is this helpful? A link from the home page would be great of course, but that's up to you :)

Regards,
Neil

On May 17, 2013, at 1:34 AM, Willy Tarreau wrote:

Hello Neil,

On Tue, Apr 30, 2013 at 01:50:30PM -0700, Neil Mckee wrote:

Hello All, I had a go at adding standard sFlow instrumentation to haproxy: https://github.com/sflow/haproxy This implements the exact same binary-logging-over-UDP export that you get from mod-sflow for apache, nginx-sflow-module, tomcat-sflow-valve, and more. It supports random 1-in-N sampling for scalability, and it is designed to be integrated with host performance counters from hsflowd, along with sFlow traffic monitoring from most network switches. (The goal being holistic end-to-end visibility). Anyone want to try it out? (...)

First, thank you for this work. I'm realizing that it can be frustrating to see no response to your post, but we probably need more time to get some testers. Let me know if you want me to add a link from the main page to your work.

I must say I have zero experience with sFlow, so please don't take my ignorance as a criticism of your work! While I understand why it can be useful at the network level, it's still hard for me to understand the benefit at upper layers, since more precise details can be deduced from the logs, probably at a lower cost since the work just has to be done once. Maybe you could enlighten me on this (and probably others as well, as it's likely that I'm not the only one who doesn't see an obvious benefit from doing this).

Best regards,
Willy
Re: haproxy with sFlow instrumentation
For more info on haproxy + sFlow, see http://blog.sflow.com/2013/05/haproxy.html

Neil

On Apr 30, 2013, at 1:50 PM, Neil Mckee wrote:

[original announcement quoted in full; it appears as its own thread below]
haproxy with sFlow instrumentation
Hello All,

I had a go at adding standard sFlow instrumentation to haproxy: https://github.com/sflow/haproxy

This implements the exact same binary-logging-over-UDP export that you get from mod-sflow for Apache, nginx-sflow-module, tomcat-sflow-valve, and more. It supports random 1-in-N sampling for scalability, and it is designed to be integrated with host performance counters from hsflowd, along with sFlow traffic monitoring from most network switches. (The goal being holistic end-to-end visibility.)

Anyone want to try it out? (If you need it on a particular branch of haproxy, let me know.) These would be the getting-started steps:

(1) make TARGET=linux26 USE_SFLOW=yes
(2) install hsflowd from http://host-sflow.sourceforge.net
(3) edit /etc/hsflowd.conf to set DNSSD=off, configure a manual collector with collector { ip = 127.0.0.1 }, and add a line with sampling.http = 1
(4) /etc/init.d/hsflowd start
(5) download the sflowtool sources from http://www.inmon.com/technology/sflowTools.php (a simple tool for ASCIIfying the binary feed)
(6) configure; make; make install
(7) run sflowtool
(8) generate some requests through haproxy
(9) you should see output in sflowtool looking something like this:

extendedType proxy_socket4
proxy_socket4_ip_protocol 6
proxy_socket4_local_ip 0.0.0.0
proxy_socket4_remote_ip 10.0.0.160
proxy_socket4_local_port 0
proxy_socket4_remote_port 80
flowBlock_tag 0:2100
extendedType socket4
socket4_ip_protocol 6
socket4_local_ip 0.0.0.0
socket4_remote_ip 10.1.3.2
socket4_local_port 0
socket4_remote_port 62902
flowBlock_tag 0:2206
flowSampleType http
http_method 2
http_protocol 1001
http_uri GET /inmsf/Widget?id=base.categorytrend.1height=200width=320ms=1362731388006 HTTP/1.1
http_host 10.0.0.153:8080
http_referrer http://10.0.0.153:8080/inmsf/Home?action=widgets
http_useragent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:19.0) Gecko/20100101 Firefox/19.0
http_mimetype image/png
http_request_bytes 11292
http_bytes 481
http_duration_uS 1434000
http_status 200

(10) Ganglia will accept this feed and display the host and http counters. Other sFlow collectors are listed here: http://sflow.org/products/collectors.php (though not all of these will recognize the sFlow-HTTP structures).

More background and documentation here: http://blog.sflow.com/search?q=HTTP

Neil
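Step (3) condenses several hsflowd.conf edits into one sentence; a minimal sketch of what the resulting file might look like, based only on the settings named above (the exact block layout is an assumption, check the hsflowd documentation for your version):

```
# /etc/hsflowd.conf -- minimal sketch, not a complete config
sflow {
  DNSSD = off                     # disable DNS-SD auto-configuration
  collector { ip = 127.0.0.1 }    # manual collector: sflowtool on this host
  sampling.http = 1               # sample every HTTP transaction
}
```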
Re: HAProxy false negatives with tomcat httpchk Layer7 timeout
Hello

I've looked into this further, and the trouble is not with tomcat looking up the client but with tomcat looking up itself. Changing the haproxy config from

option httpchk GET /

to

option httpchk GET / HTTP/1.1\r\nHost:\ sys-haproxytomcat-test:8080

makes the server return quickly.

This needs some more thought, but I think I'd like an option to add a fixed Host header to the check request, or maybe to take the address:port from the server line in the backend config for each backend server, as that can't be specified in a single check line.

I got the clue from comparing wireshark captures of ab and haproxy, but I should have guessed this was the case, as I was adding reverse entries for the backend servers, not the haproxy server.

Correcting the enableLookups typo is unrelated to the issue, but thanks for spotting it. (It might be nice if Tomcat warned about config options it doesn't recognise.)

Thanks

Neil

On 20/02/11 00:09, Cyril Bonté wrote:
> Hi Neil and Willy,
>
> Le dimanche 20 février 2011 01:01:26, Willy Tarreau a écrit :
> > But I still don't understand how you would like it to behave differently.
> > Haproxy does no DNS lookup during health checks. From what I understand of
> > your identification of the issue, it's caused by the tomcat server trying
> > to resolve haproxy's address to a name (it's dangerous to have a webserver
> > configured like that BTW).
>
> Neil sent me his tomcat configuration yesterday and didn't see an issue, but
> with your last comment, I've reread the file to be sure.
> Well, Neil, there's an error in your Tomcat HTTP connector: please replace
> enableLookup by enableLookups, that should do the trick ;-)

Please access the attached hyperlink for an important electronic communications disclaimer: http://lse.ac.uk/emailDisclaimer
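For reference, the enableLookups fix Cyril describes lives on the HTTP connector in Tomcat's conf/server.xml. A sketch of the corrected attribute (the port and connectionTimeout values here are illustrative, not from the thread):

```
<!-- conf/server.xml: note the plural "enableLookups"; "false" disables the
     reverse DNS lookup that made the health checks take over 5 seconds -->
<Connector port="8080" protocol="HTTP/1.1"
           enableLookups="false"
           connectionTimeout="20000" />
```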
Re: HAProxy false negatives with tomcat httpchk Layer7 timeout
Hello

I think I've got to the bottom of this, thanks to the straces and some lucky guesswork. The trouble occurs when the backend server does not have an entry in the DNS reverse lookup zone. I've now got a cluster of identical backends: the ones that have reverse lookup entries come back straight away, the ones without take over 5 seconds.

The main thing that tipped me off was running haproxy and tomcat on the same server (which did not have a reverse lookup entry); the problem could be solved by adding an /etc/hosts entry for the form of the name used in the server config line.

I guess I'll add entries for all my servers, but perhaps a fix in haproxy is also warranted?

Many thanks,

Neil

On 19/02/11 06:42, Willy Tarreau wrote:
> Hello Neil,
>
> On Wed, Feb 16, 2011 at 05:45:00PM +, Neil Prockter wrote:
> > Hello I'm using tomcat as a backend server and I'd like to use a httpchk.
> > Because tomcat splits the response to the keepalive over a few packets,
> > haproxy is marking it as down. tshark shows the response is a 200, just
> > it's not in the first packet.
>
> That's not expected, because haproxy 1.4 supports multiple-packet responses.
(...)
> > works, uncommenting the httpchk leads to
> > [WARNING] 046/172220 (28956) : Server epd-rewrite/backend-0 is DOWN, reason: Layer7 timeout, check duration: 2003ms.
> > [WARNING] 046/172221 (28956) : Server epd-rewrite/backend-1 is DOWN, reason: Layer7 timeout, check duration: 2004ms.
>
> Those really indicate that the read timeout has fired. We cannot exclude
> that you'd have spotted a bug though.
>
> > I've looked at the mailing list archives and
> > http://marc.info/?l=haproxy&m=126399109503224&w=2 seems relevant but I'm
> > still having the same issue.
>
> This one was different; if you notice, it immediately failed precisely
> because haproxy did only consider the first packet.
>
> Could you take a tcpdump capture of the check request/response so that we
> could try to reproduce exactly the same sequence? Alternatively, the output
> of strace on the running process could also give a lot of indications.
>
> Regards,
> Willy
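The workaround described above, giving each backend a name the server can resolve locally, can be sketched as an /etc/hosts fragment on the Tomcat host (the addresses are placeholders; the names must match the server lines in the haproxy config):

```
# /etc/hosts on the backend host -- hypothetical addresses
10.0.0.10   backend-0
10.0.0.11   backend-1
```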
Re: HAProxy false negatives with tomcat httpchk Layer7 timeout
On 19/02/11 23:46, Willy Tarreau wrote:
> ...
> What fix do you mean ? Haproxy can't force your servers to disable DNS
> resolving.
>
> Regards,
> Willy

Fix is probably the wrong word to have used. I don't think anything in haproxy is broken.

To me, haproxy's health checks are like an http client. Other http clients, like ab, links and other web browsers, do not seem to provoke the same behaviour as haproxy does, in that they don't trigger the same DNS lookups. During the testing I did, ab got quick fetches from servers regardless of whether they had reverse lookup entries. Maybe, just maybe, haproxy's health checks could act in a similar way.

Regards,

Neil
Re: HAProxy false negatives with tomcat httpchk Layer7 timeout
Hello

On 16/02/11 20:29, Cyril Bonté wrote:
> Hi Neil,
>
> Le mercredi 16 février 2011 18:45:00, Neil Prockter a écrit :
> > Hello I'm using tomcat as a backend server and I'd like to use a httpchk.
> > Because tomcat splits the response to the keepalive over a few packets,
> > haproxy is marking it as down. tshark shows the response is a 200, just
> > it's not in the first packet.
>
> I don't understand what you mean with keepalive for these checks.

Sorry, I meant the httpchk request.

(...)
> > # option httpchk HEAD /
> > server backend-0 backend-0:8080 cookie epd0 check inter 2000 rise 2 fall 5
> > server backend-1 backend-1:8080 cookie epd1 check inter 2000 rise 2 fall 5
(...)
> > uncommenting the httpchk leads to
> > [WARNING] 046/172220 (28956) : Server epd-rewrite/backend-0 is DOWN, reason: Layer7 timeout, check duration: 2003ms.
> > [WARNING] 046/172221 (28956) : Server epd-rewrite/backend-1 is DOWN, reason: Layer7 timeout, check duration: 2004ms.
> > [ALERT] 046/172221 (28956) : proxy 'epd-rewrite' has no server available!
> > Please could someone enlighten me as to which options/changes they've
> > found effective.
>
> Are you sure that http://backend-0:8080/ and http://backend-1:8080/ answer
> in less than 2000 ms ? Your logs say they're longer than that.

apachebench doing 1000 requests says the longest is 8ms (average 1.4ms). I'm pretty sure haproxy thinks the httpchk fails because the reply is split over 2 packets.

> Just to check here, which version of tomcat are you using ?

6.0.32 (I tried 6.0.24 first)

> Is your application behind those urls or the ROOT application provided by
> tomcat ?

That url is backed with just a ROOT with an index.html in it for now, so it returns status 200. I'll give it a proper test later.

> Also, nothing to do with the issue, but :
> - I don't think you want to use "cookie SERVERID rewrite"
> - you should use "option httpclose" or "option http-server-close" to not
>   have issues with your cookie stickiness.

Thanks, I'll look at those; they came to be there just because I started afresh with the example that comes with the ubuntu package.
HAProxy false negatives with tomcat httpchk Layer7 timeout
Hello

I'm using tomcat as a backend server and I'd like to use a httpchk. Because tomcat splits the response to the keepalive over a few packets, haproxy is marking it as down. tshark shows the response is a 200, just it's not in the first packet.

However, I'm confused. I've used 1.4.8/9 with different options, in a kind of random throw-it-at-them style, to get it to work in the past, but I'm not clear what I'm meant to do, and I've got a new one to set up that I'm not able to get working.

=
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 5
    srvtimeout 5

listen epd-rewrite 0.0.0.0:10001
    cookie SERVERID rewrite
    balance roundrobin
    # option httpchk HEAD /
    server backend-0 backend-0:8080 cookie epd0 check inter 2000 rise 2 fall 5
    server backend-1 backend-1:8080 cookie epd1 check inter 2000 rise 2 fall 5
=

This works; uncommenting the httpchk leads to

[WARNING] 046/172220 (28956) : Server epd-rewrite/backend-0 is DOWN, reason: Layer7 timeout, check duration: 2003ms.
[WARNING] 046/172221 (28956) : Server epd-rewrite/backend-1 is DOWN, reason: Layer7 timeout, check duration: 2004ms.
[ALERT] 046/172221 (28956) : proxy 'epd-rewrite' has no server available!

Please could someone enlighten me as to which options/changes they've found effective. I've looked at the mailing list archives and http://marc.info/?l=haproxy&m=126399109503224&w=2 seems relevant, but I'm still having the same issue.

Thanks,
Neil

Neil Prockter
Systems Specialist
IT Services
London School of Economics and Political Science
n.prock...@lse.ac.uk
+44 (0) 20 7849 4904
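The failure mode this thread keeps circling, a checker that only trusts the first packet versus one that keeps reading until the connection closes, can be sketched in a few lines of Python. This is a toy illustration, not haproxy's actual check code; the server, function names and port handling are all invented for the example:

```python
import socket
import threading

def start_split_server(host="127.0.0.1", port=0):
    """One-shot server that answers with a 200 response deliberately split
    across two writes, mimicking the multi-packet Tomcat reply."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        conn.recv(4096)                                     # read the HEAD request
        conn.sendall(b"HTTP/1.1 200")                       # first write: partial status line
        conn.sendall(b" OK\r\nContent-Length: 0\r\n\r\n")   # second write: the rest
        conn.close()
        srv.close()

    t = threading.Thread(target=handle)
    t.start()
    return srv.getsockname()[1], t

def http_check(host, port, timeout=2.0):
    """A check that reads until EOF instead of trusting the first recv(),
    then parses the status code from the accumulated bytes."""
    c = socket.create_connection((host, port), timeout=timeout)
    c.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
    data = b""
    while True:
        chunk = c.recv(4096)
        if not chunk:
            break
        data += chunk
    c.close()
    return int(data.split(b" ")[1])

port, t = start_split_server()
status = http_check("127.0.0.1", port)
t.join()
print(status)  # prints 200 even though the status line arrived in two pieces
```

A checker that parsed only the first recv() would see just "HTTP/1.1 200" with no terminating CRLF and could not safely conclude anything, which is why reading to EOF (or to a complete header block) matters for split responses.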