Re: Lua Shell letsencrypt
2015-12-05 23:42 GMT+01:00:
> On Fri, 4 Dec 2015 00:23:53 -0700
> Mela Luca wrote:
>
>> I am looking to automate letsencrypt with lua. The process would be to
>> detect whether the domain already has a cert; if not, it would execute
>> letsencrypt on the domain.
>> Any thoughts on whether this would be possible to do with lua? I am
>> guessing using os.execute.
>
> I'm not sure that you're using the right way to do this:
>
> - I don't know letsencrypt very well, but I heard that the
> letsencrypt framework expects a confirmation that the requester is
> the real owner of the web site. It requires the owner to add a
> special webpage at a special url. So the process is very slow and it
> cannot be done within the timing of an http request.

Also don't forget that you can be flooded by bots using arbitrary Host headers.

> - os.execute() is a blocking action. While HAProxy is waiting for the
> script response, it does nothing, and all the traffic is blocked.
>
> Actually, the Lua in HAProxy only communicates with other processes through
> the Socket provided by the Lua/HAProxy API.

IMHO the right approach is to use async communication (any AMQP middleware, 0MQ, IRC, whatever else...) between haproxy and the letsencrypt client or any ACME protocol implementation. It would also be useful for other stuff.

Joris

> Thierry
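For reference, a hedged sketch of how the HTTP-01 side of this can be wired without any blocking call in haproxy: the ACME challenge path is simply routed to a standalone letsencrypt/ACME client running next to haproxy (the port and backend names below are assumptions, not from the thread):

```
frontend http-in
    bind *:80
    # ACME HTTP-01 validation requests always arrive under this path
    acl is_acme path_beg /.well-known/acme-challenge/
    use_backend acme_client if is_acme
    default_backend web

backend acme_client
    # standalone ACME client answering challenges; issuance itself runs
    # asynchronously in that process, never inside haproxy
    server acme 127.0.0.1:8888

backend web
    server web1 192.168.0.10:80
```

This keeps haproxy purely as a router; the slow certificate workflow happens out of band, as suggested above.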
Re: lua authentication
2015-12-06 3:44 GMT+01:00 Grant Haywood:
> I found a pretty good starting point
>
> https://github.com/morganfainberg/HAProxyKeystoneMiddlware
>
> if I do anything with ldap I'll post it...
>
> - Original Message -
> From: "Grant Haywood"
> To: "thierry fournier"
> Cc: "Igor Cicimov" , "HAProxy" , "Baptiste"
> Sent: Saturday, December 5, 2015 6:48:52 PM
> Subject: Re: lua authentication
>
> I see.
> Still, is there an example of authenticating an HTTP connection in lua?
>
> I'm fairly certain I can do a JWT implementation
>
> and for LDAP, it may still be easier to proxy a simple (non-ldap) message over a
> socket, and write a bridge to the ldap daemon in something that's not lua. (use at
> your own risk/understanding/vetting)
>
> kind of like this https://doc.powerdns.com/md/authoritative/backend-pipe/
> (I know that's not for auth, but same concept)
>
> - Original Message -
> From: "thierry fournier"
> To: "Baptiste"
> Cc: "Grant Haywood" , "Igor Cicimov" , "HAProxy"
> Sent: Saturday, December 5, 2015 3:36:32 PM
> Subject: Re: lua authentication
>
> Hi,
>
> To complement, I would say that Lua bindings for the standard
> OpenLDAP client exist, but unfortunately, the operation is blocking,
> and doesn't run very well with HAProxy.
>
> It seems that a Lua rewrite of the LDAP protocol using the standard Lua
> HAProxy socket is a solution, but this is a big development. Maybe a
> partial implementation (just the binding) would be useful.
>
> Thierry
>
> On Fri, 4 Dec 2015 08:35:41 +0100
> Baptiste wrote:
>
>> current Lua implementation already allows asynchronous network sockets.
>> Now, what you need to do is to code a basic LDAP auth request in Lua
>> and be able to parse the response.
>>
>> Baptiste
>>
>> On Thu, Dec 3, 2015 at 11:58 PM, Grant Haywood wrote:
>> > That's exactly what I am wanting to code, I just need an example of how to
>> > do auth, like userlist, inside of lua.
>> >
>> > - Original Message -
>> > From: "Igor Cicimov"
>> > To: "Grant Haywood"
>> > Cc: "HAProxy"
>> > Sent: Thursday, December 3, 2015 3:58:28 PM
>> > Subject: Re: lua authentication
>> >
>> > Hi Grant,
>> >
>> > On Fri, Dec 4, 2015 at 7:46 AM, Grant Haywood < gr...@iowntheinter.net > wrote:
>> >
>> > Hello,
>> >
>> > I was wondering if there is a basic example of using lua to do
>> > authentication?
>> >
>> > I am specifically interested in constructing 'ldap' and 'jwt' versions of
>> > the 'userlist' block
>> >
>> > thx in advance for your time
>> >
>> > Excellent question. One feature I would love to see in haproxy is support
>> > for ldap authentication. It would be awesome if that could be done via lua.

IMHO it would be easier to use SASL.

Joris

>> >
>> > Thanks,
>> >
>> > Igor
Re: Two questions about lua
Thanks Thierry, for your answers.

2015-11-30 16:53 GMT+01:00 Thierry FOURNIER <thierry.fourn...@arpalert.org>:
> On Mon, 30 Nov 2015 08:37:00 +0100
> joris dedieu <joris.ded...@gmail.com> wrote:
>
>> Hi all,
>>
>> I started to dive into haproxy's lua interface. I produced some code
>> that allows dnsbl lookups and it seems to work.
>>
>> First I have a C wrapper around the libc resolver:
>>
>> #include <sys/types.h>
>> #include <sys/socket.h>
>> #include <netinet/in.h>
>> #include <arpa/inet.h>
>> #include <netdb.h>
>>
>> #include <lua.h>
>> #include <lauxlib.h>
>>
>> static int gethostbyname_wrapper(lua_State *L)
>> {
>>     const char *query = luaL_checkstring(L, 1);
>>     struct hostent *he;
>>     if ((he = gethostbyname(query)) != NULL) {
>>         const char *first_addr =
>>             inet_ntoa(*(struct in_addr *)he->h_addr_list[0]);
>>         lua_pushstring(L, first_addr);
>>         return 1;
>>     }
>>     return 0;
>> }
>>
>> static const luaL_Reg sysdb_methods[] = {
>>     {"gethostbyname", gethostbyname_wrapper},
>>     {NULL, NULL}
>> };
>>
>> LUALIB_API int luaopen_sysdb(lua_State *L) {
>>     luaL_newlib(L, sysdb_methods);
>>     return 1;
>> }
>>
>> I have some doubts about the asyncness of libc operations but on the other
>> hand I don't want to reinvent the wheel. Should I prefer a resolver
>> implementation that uses the lua socket? As far as I tested, libc seems to
>> do the job.
>
> Hello,
>
> I confirm your doubts: gethostbyname is synchronous and it is a
> blocking call. If your hostname resolution is in the /etc/hosts file,
> it blocks while reading the file. If it comes from a DNS server, it blocks
> waiting for the server response (or worse: waiting for the timeout).
>
> So, this code seems to run, but your HAProxy will not be efficient
> because the entire haproxy process is blocked during each
> resolution. For example: if your DNS fails after a 3s timeout, during those
> 3s, HAProxy doesn't process any data.
>
> Otherwise, your code is the right way to build fast Lua/C libraries.
>
> There is no way to simulate blocking access outside of the HAProxy core;
> all the functions written for Lua must be non-blocking.
Ok, I will check for a non-blocking solution (maybe lua socket + pack / unpack in C).

>
>> Then the lua code
>>
>> local sysdb = require("sysdb")
>>
>> core.register_fetches("rbl", function(txn, rbl, ip)
>>     if (not ip) then
>>         ip = txn.sf:src()
>>     end
>>     if (not rbl) then
>>         rbl = "zen.spamhaus.org"
>>     end
>>     local query = rbl
>>     for x in string.gmatch(ip, "[^%.]+") do
>>         query = x .. '.' .. query
>>     end
>>     if (sysdb.gethostbyname(query)) then
>>         return 1
>>     else
>>         return 0
>>     end
>> end)
>>
>> I want to use a stick table as a local cache, so my second question:
>> is there a way to set a gpt0 value from lua?
>
> You can use the sample fetches mapper and use sc_set_gpt0. The
> syntax is like this:
>
> For the read access:
>
>    txn.sf:sc_set_gpt0()
>    txn.sc:table_gpc0()
>
> For the write access, I don't have a direct solution. You must use a Lua
> sample fetch and the following configuration directive:
>
>    http-request sc-set-gpt0 lua.my_sample_fetch

Yes, that's an option.

>
> Maybe it will be a good idea to implement stick table access in Lua.
>
> If you want another manner to store shared data in haproxy, you can use
> maps. The maps are shared by all the HAProxy processes including Lua with
> a special API (see Map class)

I thought about Maps but I didn't find a write access in lua according to
http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html#map-class
and some of my experiments.

Thanks
Joris

>
> Thierry
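As a rough illustration of the directive Thierry quotes, the Lua fetch could be wired into a stick table acting as the cache (the table layout and all names here are assumptions, and a real setup would still need to distinguish "not yet checked" from "checked clean"):

```
backend st_rbl
    # per-source-IP cache of the DNSBL verdict stored in gpt0
    stick-table type ip size 100k expire 1h store gpt0

frontend http-in
    bind *:80
    http-request track-sc0 src table st_rbl
    acl rbl_listed sc0_get_gpt0 gt 0
    # only call the (blocking) Lua fetch when nothing is cached yet
    http-request sc-set-gpt0 lua.rbl if !rbl_listed
    http-request deny if rbl_listed
```

Note that with this naive sketch the deny only takes effect on the request after the verdict has been cached.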
Re: Frontend ACL rewrites URL incorrectly to backend
Hi,

2015-10-04 23:33 GMT+02:00 Daren Sefcik:
> I am trying to make some requests go to specific backends but am finding
> that in certain backends the url gets doubled up or otherwise mangled,
> ie:
>
> request to frontend = http://my.company.com
> what the backend server ends up with =
> http://my.company.comhttp://my.company.com
>
> This does not happen in all of the backends, only a few... a wordpress site

This is typically what happens when WordPress is invoked with a wrong Host header. It must match WP_SITEURL and WP_HOME.

Regards
Joris

> comes to mind as a specific example. Since this does not happen on every
> single backend server I suspect it is instead something happening on the
> receiving server, but since it only happens when I put haproxy in front of it
> there is some connection between them.
>
> Can someone help me understand what haproxy is doing or how to stop this from
> happening?
> Before anyone says it is varnish doing it I should say several of the other
> backends using varnish work fine, it is only a few that get the url messed
> up.
>
> TIA
>
> example ACL:
>
> acl acl_my.company.com hdr(host) -i my.company.com
> use_backend VARNISH_BKEND if acl_my.company.com
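When the backend really serves a single WordPress site, one hedged workaround (the header value and server address are illustrative) is to pin the Host header on the way to it so it always matches WP_SITEURL/WP_HOME:

```
backend VARNISH_BKEND
    # force the Host WordPress expects; only safe if this backend
    # hosts exactly one site
    http-request set-header Host my.company.com
    server varnish1 192.168.0.20:80
```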
Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why
Hi,

Just a few translations Linux -> FreeBSD, as pfSense is FreeBSD based.

2015-10-04 10:56 GMT+02:00 Willy Tarreau:
> On Sat, Oct 03, 2015 at 12:55:33AM -0700, Daren Sefcik wrote:
>> > Is there some kernel messages
>> > Load, swap usage, disk space
>> >
>> again, according to my limited know-how, top and other built-in utilities
>> all report the system is barely doing anything and there is tons of memory
>> and disk space
>
> Just run "free" after a test and "vmstat 1 10" during a test.
>
>> > During stress :
>> > Is there more sys/interrupt than user cpu usage
>> > Link saturation
>> > Packet lost
>> >
>> I am not sure how to check this, I will try and figure this out but if you
>> have any advice that would be appreciated.
>> The LAN interface is a bonded interface with (3) 1000mb NIC cards so I am
>> doubtful it is being saturated by this simple apache bench test.
>> Here is what the Interfaces status shows me:
>>
>> *Status up*
>> MTU 1500
>> Media autoselect
>> LAGG Protocol lacp lagghash l2,l3,l4
>
> That's interesting. Keep in mind that different aggregation algorithms
> exist, and that hashing on l2+l3+l4 will spread different connections to
> different ports. As long as you have enough connections (which seems to
> be your case) your traffic should be evenly spread. But on low connections
> it can happen that you saturate one link without traffic on the other ones.
> So for now let's consider this not a problem.
>
>> LAGG Ports bge3 flags=1c
>> bge2 flags=1c
>> bge1 flags=1c
>> In/out packets 248989670/305051696 (77.73 GB/88.68 GB)
>> In/out packets (pass) 248989670/305051696 (77.73 GB/88.68 GB)
>> In/out packets (block) 4130394/147 (4.75 GB/70 KB)
>> In/out errors 0/608
>> Collisions 0

sysctl net.link.lagg.lacp.debug=1 should provide some interesting information.
Broadcom NICs: you should check man 4 bge and
https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards

>>
>> Suboptimal firewall rules : replay stress packet filter unloaded.
>> >
>> There are only two simple allow firewall rules for LAN access, nothing
>> complicated at all.
>
> No but very likely you're running with conntrack. If it's not properly
> tuned you can quickly end up with a conntrack table full. Please run
> "lsmod" to see the loaded modules, and "dmesg | grep -i conntrack" to
> see if any such message has appeared, as well as "dmesg | grep -i drop"
> to see if the kernel complained it was forced to drop anything. The
> best thing to try to be sure is to unload all firewall modules,
> especially conntrack.

I had a look at the pfSense kernel config: it seems that pf, pflog, pfsync
and all the netgraph and altq stuff are not built as loadable modules (the
output of kldstat should confirm that). So you can't unload them. This is
not ideal, as simply loading a module could enable some features in the
network stack.

You can first test with pfctl -d to disable pf (better to have console
access when doing those things). Also make sure you don't have some QoS
enabled.

grep kernel /var/log/messages to see if something is logged by the kernel
(or dmesg output).

You could also check tcp state evolution during the test (with a Bourne shell):

clear; while : ; do netstat -anp tcp | awk '$6 ~ /^[A-Z]/ && $6 !~ /Foreign|LISTEN/ {print $6}' | sort | uniq -c | sort -g ; sleep 2 ; clear; done

or with csh

clear
while ( 1 == 1 )
    netstat -anp tcp | awk '$6 ~ /^[A-Z]/ && $6 !~ /Foreign|LISTEN/ {print $6}' | sort | uniq -c | sort -g
    sleep 2
    clear
end

Best regards
Joris

>
>> I am really stumped by this problem and am hoping you guys can help me get
>> this figured out. If there are any commands I can run to get info that
>> would be helpful please let me know.
>
> In general the situation you describe is observed in a few cases :
> - too low file descriptor limits.
A non-root user is limited to 1024 > hence about 512 end-to-end connections, but I'm assuming you started > haproxy as a root user to get enough connections ; > > - improperly tuned firewall : this is the most common case. Each end > to end connection uses two conntrack entries, one from the client > to the proxy and one from the proxy to the server. Connections remain > for some time after they are closed due to the TIME_WAIT state and add > to the count. > > - communications in virtualized environments being limited by improper > configuration of the hypervisor. We've got a number of reports, some > even public on the list here where hosting providers were unable to > configure their hypervisor to stand at least the load of a single VM, > so packets were dropped by the hypervisor. > > - bogus NIC firmware. We used to face this situation for a few years > about 5 years ago, some NICs (netxtreme 2 found on a lot of Proliant > servers) were losing up to 12% of the packets, so I let you imagine > how TCP
Re: HAProxy Slows At 1500+ connections Really Need some help to figure out why
On 3 Oct 2015 02:50, "Daren Sefcik" wrote:
>
> So after making the changes (somewhat implied by Cyril) I ran apache bench with 2 concurrent instances of "-n 1 -c 500 -w -k" and the result on the haproxy stats page is:
>
> pid = 18093 (process #1, nbproc = 1)
> uptime = 0d 2h55m08s
> system limits: memmax = unlimited; ulimit-n = 100043
> maxsock = 100043; maxconn = 5; maxpipes = 0
> current conns = 2235; current pipes = 0/0; conn rate = 39/sec
> Running tasks: 1/2252; idle = 85 %
>
> and response times from the client are unacceptable, 15-20 seconds or longer. Once the apache bench tests finish and concurrent conns go down to a few hundred or less the client response times are normal and quick. Not scientific, but during the long wait on the client the browser reports down in the bottom browser bar "waiting for socket..." or "waiting for proxy tunnel..."

Hi,

How is the system doing during stress? Here are a few things I would check in a similar situation.

Is there an accept filter?
Are there some kernel messages?
Load, swap usage, disk space?

During stress:
Is there more sys/interrupt than user cpu usage?
Link saturation?
Packets lost?

Suboptimal firewall rules: replay the stress test with the packet filter unloaded.

Regards
Joris

>
> TIA for any further help anyone can provide, I really would like to get this figured out.
Re: Add query string at redirect
Hi,

2015-10-01 14:44 GMT+02:00 Andreas Mock:
> Hi all,
>
> I really hope that this is doable with haproxy 1.5 and
> I'm just too stupid to find it. After searching around
> for an hour now I hope you can help me.
>
> Currently I use an idiom like this in my config:
>
> acl aclname url_reg something
> redirect location http://my.url.com/ code 301 if aclname
>
> What I need now is that any attached parameter string is
> also attached to the target url. Is this possible?
> If yes, would you be so kind to show me how to do it?
>
> By the way: Is the same thing possible with cookie header lines?
> I mean: a cookie header in the source request shall be inserted in
> the redirect response?

redirect prefix seems to be what you need
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#redirect%20prefix

Regards
Joris

>
> Thank you in advance.
>
> Best regards
> Andreas
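Concretely, the suggestion amounts to replacing `redirect location` with `redirect prefix`, which appends the original path and query string to the target (same acl as in the question):

```
acl aclname url_reg something
# with "prefix", /page?a=b is redirected to http://my.url.com/page?a=b
redirect prefix http://my.url.com code 301 if aclname
```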
Re: Linux or FreeBSD ?
2015-10-01 1:48 GMT+02:00 Rainer Duffner:
>
>> Am 01.10.2015 um 01:22 schrieb Willy Tarreau :
>>
>> I'd be tempted to place my judgement between yours and Jeff's. I'd say
>> that if the company is already using the target OS in any other place,
>> the cost of switching is low. If the load balancer is the opportunity
>> to introduce a new OS, it's a bad idea. By nature a load balancer is
>> very OS-dependent, and has bugs. Sometimes it's not trivial to tell
>> if a bug is in haproxy or the underlying OS until you get network
>> traces and/or strace output (BTW as far as I know, strace still doesn't
>> support amd64 on FreeBSD). Mixing the two can cast a bad image on the
>> new OS just because admins will initially not know well how to tune it
>> for the load and to ensure stability, will not easily troubleshoot
>> tricky issues, and a lot of frustration will result from this.
>
> Probably.
> But OP’s admin will have his reasons for wanting FreeBSD in the picture.
> My guess would be that FreeBSD is the OS he’s more familiar with debugging.
> FreeBSD has ktrace - and dtrace (if you know how to use it, that is…)
>
> Here, most of our LBs run HAProxy on FreeBSD.
> Sometimes, they’re not. Because…reasons ;-)
>
> Why?
> Well, historically, most LBs and reverse-proxies ran FreeBSD (with NGINX).
> So it was more or less a „natural“ choice, with some pushing from my side
> (cough).
>
> FreeBSD has CARP.
> Linux has keepalived.
> etc.

We are really lucky to have almost two production-grade open source operating systems. I am really happy with my mixed infrastructure even if I have to write conditional code in my scripts.

For Heartbleed, all my CentOS 6 machines were affected, my FreeBSD 8 ones weren't. When a nightmarish 0-day occurred in the FreeBSD ELF loader, Linux was not affected... and so on. Sometimes, on critical services, diversity is good for uptime and security.

Joris

> I don’t think we’ll ever get so much traffic that either one will be superior
> to the other.
And I seriously doubt OP will.
>
> FreeBSD 10.1 has most of the optimizations that Netflix uses turned on out of
> the box - but they do file-serving with NGINX.
> In their (extreme) case, it works better.
> Proxying/load-balancing is a bit different.
>
> I like FreeBSD because I can get a very stable, simple, low overhead,
> no-nonsense OS with a reasonable shelf-life and update-cycle while still
> being able to get up-to-date packages directly from upstream.
>
>
>> You should expect roughly the same performance on both OSes so that is
>> not a consideration for switching or not switching. Really keep in
>> mind the admin cost, the cost of it being the exception in all your
>> systems, and possibly different debugging tools. It's very likely that
>> it will not be a problem, but better be aware of this.
>
> That’s what you get by hiring a FreeBSD guy.
> If OP had hired a CentOS guy, I bet he'd want to switch everything to CentOS
> (or even Atomic Server…)
> ;-)
Re: [PATCH] BUILD: IP_TTL: Fix compilation on almost FreeBSD and OpenBSD.
Already fixed by ae459f3.

Joris

2015-09-29 8:21 GMT+02:00 Joris Dedieu <joris.ded...@gmail.com>:
> IP_TTL socket option is defined on some systems that don't have SOL_IP.
> Use IPPROTO_IP in this case.
> ---
>  src/proto_tcp.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/src/proto_tcp.c b/src/proto_tcp.c
> index f698889..3642398 100644
> --- a/src/proto_tcp.c
> +++ b/src/proto_tcp.c
> @@ -1456,7 +1456,11 @@ static enum act_return tcp_exec_action_silent_drop(struct act_rule *rule, struct
>  	 * network and has no effect on local net.
>  	 */
>  #ifdef IP_TTL
> +#ifdef SOL_IP
>  	setsockopt(conn->t.sock.fd, SOL_IP, IP_TTL, &one, sizeof(one));
> +#else
> +	setsockopt(conn->t.sock.fd, IPPROTO_IP, IP_TTL, &one, sizeof(one));
> +#endif
>  #endif
>   out:
>  	/* kill the stream if any */
> --
> 2.3.6
Re: [ANNOUNCE] haproxy-1.6-dev6
Hi Willy,

2015-09-29 18:27 GMT+02:00 Willy Tarreau <w...@1wt.eu>:
> On Tue, Sep 29, 2015 at 02:58:04PM +0200, joris dedieu wrote:
>> kevent(3,0x0,0,{},5,{1.0 }) = 0 (0x0)
>> kevent(3,0x0,0,{0x4,EVFILT_READ,0x0,0,0x1,0x0},5,{1.0 }) = 1 (0x1)
>> accept(4,{ AF_INET 80.247.233.242:48068 },0x7fffe804) = 5 (0x5)
>> fcntl(5,F_SETFL,O_NONBLOCK) = 0 (0x0)
>> recvfrom(5,0x801407000,16384,0x20080,0x0,0x0) ERR#35 'Resource
>> temporarily unavailable'
>> setsockopt(0x5,0x0,0x4,0x48668c,0x4,0x0) = 0 (0x0)
>> setsockopt(0x5,0x6,0x1,0x48668c,0x4,0x0) = 0 (0x0)
>> accept(4,0x7fffe808,0x7fffe804) ERR#35 'Resource
>> temporarily unavailable'
>> shutdown(5,SHUT_WR) = 0 (0x0)
>> close(5) = 0 (0x0)
>>
>> As you can see it doesn't look great.
>
> OK I found the reason: in my case the RST I was seeing was caused by pending
> data, otherwise haproxy didn't send it by itself since we're facing the client.
> I've fixed it so that lingering is *really* disabled this time. You can retry

Ok, that fixes the issue. I get the same tcp sequence on both Linux and FreeBSD.
08:15:25.552414 IP jau31-2-82-236-20-129.fbx.proxad.net.33355 > ladybug2.rmdir.fr.2: Flags [S], seq 1066137200, win 29200, options [mss 1460,sackOK,TS val 15996126 ecr 0,nop,wscale 7], length 0
08:15:25.552427 IP ladybug2.rmdir.fr.2 > jau31-2-82-236-20-129.fbx.proxad.net.33355: Flags [S.], seq 1788015248, ack 1066137201, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 524803316 ecr 15996126], length 0
08:15:25.589643 IP jau31-2-82-236-20-129.fbx.proxad.net.33355 > ladybug2.rmdir.fr.2: Flags [.], ack 1, win 229, options [nop,nop,TS val 15996135 ecr 524803316], length 0
08:15:25.590207 IP ladybug2.rmdir.fr.2 > jau31-2-82-236-20-129.fbx.proxad.net.33355: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 524803354 ecr 15996135], length 0
08:15:25.590340 IP ladybug2.rmdir.fr.2 > jau31-2-82-236-20-129.fbx.proxad.net.33355: Flags [R.], seq 2, ack 1, win 1040, options [nop,nop,TS val 524803354 ecr 15996135], length 0

> with the attached patch if you want. The second one will get rid of the
> useless recvfrom() call if your system doesn't have TCP_QUICKACK. The third

Yes, as far as I know there is no per-socket option for this on FreeBSD (only the system-wide net.inet.tcp.delayed_ack variable). The recvfrom disappears as expected.

Many thanks
Joris

> patch addresses a build issue reported off-list on another FreeBSD machine
> (SOL_IP not defined).
>
> Thanks,
> Willy
Re: [ANNOUNCE] haproxy-1.6-dev6
Hi,

2015-09-29 0:35 GMT+02:00 Willy Tarreau:
> Hi everyone,
>
> this is the end of a harassing week! I wanted to issue dev6 last monday
> to have a calm week dedicated to bug fixes and documentation updates only
> and it ended up completely differently with numerous painful bugs rising
> at the same time while Thierry was testing his Lua update, which uncovered
> a mess at the applet layer (well, shared between applets and Lua). After
> about 260 e-mails exchanged, thousands of tests and probably a lot of hair
> lost due to head scratching, we ended up fixing all the remaining ones last
> night.
>
> So this version comes with a number of important and less important fixes,
> and still a few feature updates that despite the feature freeze were
> desirable to have before the release.
>
> Regarding the bugs first, all reported bugs and all the ones we found
> during the Lua vs applet debugging were fixed in this version, including
> the error on UDP sockets on FreeBSD, the issues causing Lua socket data
> to be truncated, other issues causing the CLI to sometimes ignore client
> disconnect and leak connections, and bugs affecting peers. The complete
> changelog below lists 134 patches among which 35 bug fixes. A few of these
> fixes will be backported to 1.5 as well.
>
> 22 patches concern doc updates, which is in line with our expectations for
> an approaching release. I have still not found the time to write the last
> missing doc piece allowing us to get rid of haproxy-{en,fr}.txt.
>
> Now regarding the last-minute changes that were merged :
>
> - server-state conservation across reload that we've long been talking
>   about was finally merged. Please check the backend directive
>   load-server-state-from-file in the doc.
>
> - cpu-map is now supported on FreeBSD.
>
> - 51Degrees device identification changed their API to support the last
>   version (3.2).
I didn't like this last-minute change but I understand > that sometimes it is better to do that before the release than being > forced to maintain an older API. The new implementation supports both > a fetch method (to inspect all headers) and a converter (to inspect > only a specific one). Please test this as the changes were important! > > - DeviceAtlas also updated their module to support both a sample fetch > function and a converter. Please test this as well, the changes were > much smaller and I'm less worried though. > > - Lua: change in the way actions are registered : instead of calling > random functions from haproxy, only registered ones may be accessed, > this is much safer to avoid namespace collisions over the long term > and to avoid mistakes due to similar looking function names. > > - Lua: do not limit socket addresses to IPv4/IPv6, support the same > address classes as servers (including unix and abstract namespaces). > > - Lua: add support for applet registration usable via the new > "use-service" directive. This allows a script to process contents > that are not limited to the size of a buffer anymore. It provides > easy mapping for TCP and HTTP manipulation so that servers are easy > to write. Thierry showed me that he could reimplement the haproxy > stats page entirely in Lua using this, so that was definitely something > to have before the release so that people don't feel limited anymore in > what they can do in Lua. > > - TCP actions: "silent-drop". Finally it got merged as the actions > registration mechanism made it a no-brainer. It works like a deny except > that it tries to prevent the TCP RST from reaching the client, so that's > quite efficient against certain bots and scripts as their connections > remain established on their side only. It works on Linux and could > possibly work on other systems (not tested). 
I can confirm that silent-drop is not working as expected on FreeBSD

listen drop
    bind 80.247.233.40:2
    tcp-request connection silent-drop

08:31:31.324885 IP 82.236.20.129.60620 > 80.247.233.40.2: Flags [S], seq 1048805770, win 29200, options [mss 1460,sackOK,TS val 14874937 ecr 0,nop,wscale 7], length 0
08:31:31.324903 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [S.], seq 510555620, ack 1048805771, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 1100790208 ecr 14874937], length 0
08:31:31.367359 IP 82.236.20.129.60620 > 80.247.233.40.2: Flags [.], ack 1, win 229, options [nop,nop,TS val 14874946 ecr 1100790208], length 0
08:31:31.367425 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 1100790250 ecr 14874946], length 0
08:31:31.697612 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 1100790581 ecr 14874946], length 0
08:31:32.183981 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val
[PATCH] BUILD: IP_TTL: Fix compilation on almost FreeBSD and OpenBSD.
IP_TTL socket option is defined on some systems that don't have SOL_IP.
Use IPPROTO_IP in this case.
---
 src/proto_tcp.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index f698889..3642398 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -1456,7 +1456,11 @@ static enum act_return tcp_exec_action_silent_drop(struct act_rule *rule, struct
 	 * network and has no effect on local net.
 	 */
 #ifdef IP_TTL
+#ifdef SOL_IP
 	setsockopt(conn->t.sock.fd, SOL_IP, IP_TTL, &one, sizeof(one));
+#else
+	setsockopt(conn->t.sock.fd, IPPROTO_IP, IP_TTL, &one, sizeof(one));
+#endif
 #endif
  out:
 	/* kill the stream if any */
--
2.3.6
Re: [ANNOUNCE] haproxy-1.6-dev6
Hi Willy,

2015-09-29 13:59 GMT+02:00 Willy Tarreau <w...@1wt.eu>:
> Hi Joris,
>
> On Tue, Sep 29, 2015 at 08:56:54AM +0200, joris dedieu wrote:
>> > - TCP actions: "silent-drop". Finally it got merged as the actions
>> >   registration mechanism made it a no-brainer. It works like a deny except
>> >   that it tries to prevent the TCP RST from reaching the client, so that's
>> >   quite efficient against certain bots and scripts as their connections
>> >   remain established on their side only. It works on Linux and could
>> >   possibly work on other systems (not tested).
>>
>> I can confirm that silent-drop is not working as expected on FreeBSD
>>
>> listen drop
>>     bind 80.247.233.40:2
>>     tcp-request connection silent-drop
>>
>> 08:31:31.324885 IP 82.236.20.129.60620 > 80.247.233.40.2: Flags [S], seq 1048805770, win 29200, options [mss 1460,sackOK,TS val 14874937 ecr 0,nop,wscale 7], length 0
>> 08:31:31.324903 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [S.], seq 510555620, ack 1048805771, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 1100790208 ecr 14874937], length 0
>> 08:31:31.367359 IP 82.236.20.129.60620 > 80.247.233.40.2: Flags [.], ack 1, win 229, options [nop,nop,TS val 14874946 ecr 1100790208], length 0
>> 08:31:31.367425 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 1100790250 ecr 14874946], length 0
> (...)
>> [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 1100817450 ecr 14874946], length 0
>> 08:32:22.886834 IP 82.236.20.129.60620 > 80.247.233.40.2: Flags [P.], seq 1:7, ack 1, win 229, options [nop,nop,TS val 14887826 ecr 1100790208], length 6
>> 08:32:22.886850 IP 80.247.233.40.2 > 82.236.20.129.60620: Flags [R], seq 510555621, win 0, length 0
>
> Thanks for your feedback. The fact that the FIN is retransmitted like
> this tends to confirm that the TTL setting works.
> However I feel quite
> concerned about the fact that a FIN was emitted instead of a reset. I
> fear that SO_LINGER doesn't work as expected, which is much more of a
> problem if that happens on the server side too when closing a server
> connection!
>
> Could you please run the same test under strace/truss/whatever you find
> equivalent on your platform?
>
> I'd be interested in seeing each syscall status.

kevent(3,0x0,0,{},5,{1.0 }) = 0 (0x0)
kevent(3,0x0,0,{0x4,EVFILT_READ,0x0,0,0x1,0x0},5,{1.0 }) = 1 (0x1)
accept(4,{ AF_INET 80.247.233.242:48068 },0x7fffe804) = 5 (0x5)
fcntl(5,F_SETFL,O_NONBLOCK) = 0 (0x0)
recvfrom(5,0x801407000,16384,0x20080,0x0,0x0) ERR#35 'Resource temporarily unavailable'
setsockopt(0x5,0x0,0x4,0x48668c,0x4,0x0) = 0 (0x0)
setsockopt(0x5,0x6,0x1,0x48668c,0x4,0x0) = 0 (0x0)
accept(4,0x7fffe808,0x7fffe804) ERR#35 'Resource temporarily unavailable'
shutdown(5,SHUT_WR) = 0 (0x0)
close(5) = 0 (0x0)

As you can see it doesn't look great.

Joris

>
> Thanks,
> Willy
Re: Need Help
Hi,

2015-09-18 3:13 GMT+02:00 Nitesh Kumar Gupta:
> Hi,
>
> I want to set up haproxy in a way that will work on both http and https
> and also tcp, but conditionally: if a particular link comes in, it will
> go via tcp.
>
> So can you help me, how can I set this up?

You may find a lot of useful resources by searching for how to make ssh and https work on the same port with haproxy. This is a common case of using http and tcp stuff on the same port (to bypass corporate proxies, I presume).

Joris

>
> --
> Regards
> Nitesh Kumar Gupta
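The usual sketch of that multiplexing trick looks like this (backend addresses and names are assumptions): the frontend inspects the first bytes of the connection, and anything that does not look like a TLS ClientHello by the end of the inspect delay is routed to the non-TLS backend:

```
frontend tcp-443
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    # accept as soon as a TLS ClientHello (hello type 1) is seen
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend ssh if !{ req.ssl_hello_type 1 }
    default_backend https

backend ssh
    mode tcp
    server sshd 127.0.0.1:22

backend https
    mode tcp
    server web 127.0.0.1:8443
```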
Re: Pop / Imap Haproxy
2015-06-19 23:21 GMT+02:00 Nathan Neulinger nn...@neulinger.org:
> You can use the 'proxy protocol' - but you will have to ensure that your
> target pop/imap daemons are aware of it.

dovecot has preliminary proxy protocol support
http://hg.dovecot.org/dovecot-2.2/rev/4d7a83ddb644
It's not released yet. You will have to use a nightly snapshot
http://www.dovecot.org/nightly/

Joris

> -- Nathan
>
> On 06/19/2015 04:01 PM, anil kumar wrote:
>> Hello,
>>
>> We are trying to set up haproxy for pop and Imap, so I was wondering if we
>> could have the originating IPs using POP/IMAP protocols just like
>> x-forwarded-for for HTTP?
>>
>> Thanks,
>> Anil
>
> --
> Nathan Neulinger nn...@neulinger.org
> Neulinger Consulting (573) 612-1412
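For the haproxy side, a minimal hedged sketch (server address and names assumed): a plain TCP listener with `send-proxy` on the server line, so the PROXY protocol header carrying the original client IP is sent to the proxy-protocol-aware Dovecot:

```
listen imap
    bind *:143
    mode tcp
    # send-proxy prepends the PROXY protocol header with the client's
    # real address; the backend MUST expect it or the session breaks
    server dovecot1 192.168.0.30:143 send-proxy
```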
Re: Rate-limiting specific path
2015-07-08 15:28 GMT+02:00 Bastien Chong bastien...@gmail.com:
> Hi,
> I'd like to rate-limit a specific path. By rate-limit I mean continue
> to accept X req/s, and buffer or drop subsequent requests over the
> limit.

That is what "rate-limit sessions" does, but it is frontend-wide. It's
not optimal, but you can use a pipe:

frontend myfront
    ...
    use_backend pipe_in if { condition }

backend pipe_in
    server pipe_out 127.0.0.1:8080

listen pipe_out
    bind 127.0.0.1:8080
    rate-limit sessions 10
    server ...

> I'm not interested in dropping all requests when the limit is reached;
> the objective is to be gentle on the backend, not to protect against an
> abuser.

So why not use maxconn (and maybe maxqueue) on your backend?

Joris

> Is there any way to achieve that?
> Thanks,
> Bastien
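The maxconn suggestion can be sketched as follows (names and numbers are invented for illustration): instead of limiting the request rate, cap the concurrency per server, so excess requests wait in the queue (bounded by `timeout queue` and `maxqueue`) rather than being refused:

```
backend app
    mode http
    # requests beyond maxconn wait up to 10s in the queue
    timeout queue 10s
    # at most 50 concurrent requests per server, at most 200 queued
    server app1 192.0.2.40:8080 maxconn 50 maxqueue 200 check
```

This is usually the gentler option: the backend never sees more than `maxconn` concurrent requests, and short bursts are absorbed by the queue instead of producing errors.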
Re: Question regarding haproxy nagios setup
2011-05-03 0:23 GMT+02:00 Amol mandm_z...@yahoo.com:

I was using the nagios plugin for haproxy
http://cvs.orion.education.fr/viewvc/viewvc.cgi/nagios-plugins-perl/trunk/plugins/check_haproxy.pl?revision=135view=markup

my nagios installation version is Nagios Core 3.2.0

in my host config i have declared the service as:

define service {
    use                 default-service
    host_name           server1
    service_description HAProxy
    check_command       check_haproxy!'http://url/admin?stats;csv'!user!pass
    servicegroups       linux
}

and my checkcommand.cfg is:

# command 'check_haproxy health'
define command {
    command_name check_haproxy
    command_line perl /etc/nagios3/libexec/check_haproxy.pl -u $ARG1$ -U $ARG2$ -P $ARG3$
}

i have copied the check_haproxy.pl from my download folder to the path
mentioned in the checkcommand

on my nagios admin console i see:

Current Status: CRITICAL (for 0d 1h 4m 44s)
Status Information: (null)
Performance Data:

and an alert is sent to my email even though the load balancing is
working fine. i am able to run the command from the command line:

perl /etc/nagios3/libexec/check_haproxy.pl -u 'http://url/ain?stats;csv' -U user -P pass
HAPROXY OK - din_https (Active: 2/2) wbclus (Active: 2/2) | t=0.135153s;2;10;0; sess_din_https=0sessions;;;0;2000 sess_wbclus=0sessions;;;0;2000

please let me know if i am missing something

Hello,

Maybe you should look at this version:
https://github.com/polymorf/check_haproxy

Regards
Joris
Re: Haproxy 1.6 segfault on FreeBSD
2015-06-12 8:18 GMT+02:00 joris dedieu joris.ded...@gmail.com:
2015-06-12 0:47 GMT+02:00 joris dedieu joris.ded...@gmail.com:

Hi Willy,

2015-06-11 17:04 GMT+02:00 Willy Tarreau w...@1wt.eu:

Hi Joris,

On Thu, Jun 11, 2015 at 03:57:27PM +0200, joris dedieu wrote:
> Ok. I have checked out the main repo. I'm at
> 28b48ccbc879a552f988e6e1db22941e3362b4db
(...)
> Same issue, same solution, with minor adjustments for the patch to
> apply.
>
> Joris

diff --git a/include/common/compat.h b/include/common/compat.h
index ecbc3b1..48ea1f7 100644
--- a/include/common/compat.h
+++ b/include/common/compat.h
@@ -27,6 +27,8 @@
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <arpa/inet.h>
+#include <common/config.h>
+#include <common/standard.h>

 #ifndef BITS_PER_INT
 #define BITS_PER_INT    (8*sizeof(int))
diff --git a/include/common/config.h b/include/common/config.h
index 27b8f14..5833cfc 100644
--- a/include/common/config.h
+++ b/include/common/config.h
@@ -23,7 +23,6 @@
 #define _COMMON_CONFIG_H

 #include <common/compiler.h>
-#include <common/compat.h>
 #include <common/defaults.h>

 /* this reduces the number of calls to select() by choosing appropriate
diff --git a/include/types/global.h b/include/types/global.h
index b3b9672..3812771 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -25,7 +25,6 @@
 #include <netinet/in.h>

 #include <common/config.h>
-#include <common/standard.h>
 #include <import/da.h>
 #include <types/freq_ctr.h>
 #include <types/listener.h>

Since in theory that doesn't make sense at all, I suspect that we're
falling into a case of a #ifndef something where something is defined on
your platform. Could you please try the following with and without the
patch above:

make clean
make [your usual options] DEBUG_CFLAGS="-dM -E"

It will not complete, as it will dump all known defines into each .o
file. By archiving them between the builds, you can diff the trees and
see if a define appears/disappears from one .o file (probably the one
where you see the crash).
I'd bet that we'll find that FreeBSD uses a macro with a similar name as
one in haproxy and that it changes a struct size or something depending
on how the .h are included.

Thanks for the tip! I have not found the guilty header for now, but I
will continue to search for it tomorrow.

2 files attached:
- headers.diff (the output of diff -Nru)
- defined.txt with the output of a small script, with things like:

upstream : src/sessionhash.o : #define MAXSYMLINKS 32
patched : src/uri_auth.o : #define _POSIX_OPEN_MAX 20

MAXSYMLINKS 32 is defined in sessionhash.o without the patch but not
with it. _POSIX_OPEN_MAX 20 is defined in uri_auth.o patched but not in
the upstream version...

Hi,

Here is the list of the values that differ between one version and the
other. You can see that without the patch (last value) some haproxy
defaults apply to some values. I continue this way.

Joris

src/protocol.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/standard.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/chunk.o        MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/queue.o        MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/buffer.o       MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/memory.o       MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/namespace.o    MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/freq_ctr.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/map.o          MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/pipe.o         MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/proto_http.o   MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/stick_table.o  MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/pattern.o      MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/haproxy.o      MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/listener.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/regex.o        MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/frontend.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/raw_sock.o     MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/auth.o         SHRT_MAX          __SHRT_MAX         __SHRT_MAX__
src/auth.o         SCHAR_MIN         __SCHAR_MIN        (-SCHAR_MAX - 1)
src/auth.o         SHRT_MIN          __SHRT_MIN         (-SHRT_MAX - 1)
src/auth.o         SCHAR_MAX         __SCHAR_MAX        __SCHAR_MAX__
src/auth.o         LONG_MAX          __LONG_MAX         __LONG_MAX__
src/auth.o         USHRT_MAX         __USHRT_MAX        (SHRT_MAX * 2 + 1)
src/auth.o         LONG_MIN          __LONG_MIN         (-LONG_MAX - 1L)
src/auth.o         ULONG_MAX         __ULONG_MAX        (LONG_MAX * 2UL + 1UL)
src/auth.o         CHAR_BIT          __CHAR_BIT         __CHAR_BIT__
src/auth.o         UCHAR_MAX         __UCHAR_MAX        (SCHAR_MAX * 2 + 1)
src/auth.o         INT_MAX           __INT_MAX          __INT_MAX__
src/auth.o         INT_MIN           __INT_MIN          (-INT_MAX - 1)
src/auth.o         MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/auth.o         UINT_MAX          __UINT_MAX         (INT_MAX * 2U + 1U)
src/acl.o          MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/checks.o       MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/base64.o       MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/time.o         MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
src/payload.o      MAX_HOSTNAME_LEN  MAXHOSTNAMELEN     64
Re: Haproxy 1.6 segfault on FreeBSD
2015-06-12 18:01 GMT+02:00 Willy Tarreau w...@1wt.eu:

On Fri, Jun 12, 2015 at 05:54:13PM +0200, joris dedieu wrote:
> > I would not be surprised that adding this line to compat.h solves
> > the problem: #include <netinet/in.h>
>
> It was this one. So I finally added:
> * limits.h for things like LONG_MIN or MAX_HOSTNAME_LEN to have the
>   correct definitions
> * netinet/in.h so that struct conn_src has the right size

Great!

OK, thanks. Please just write a short commit message describing your
observation and the conditions where the bug appears so that we can
easily find it later if reported again. You'll be faster than me at
summarizing everything you observed and troubleshot. I'll use that as
the commit message (or even better, do a complete commit and send the
output of git format-patch). Thanks!

Willy

Ok, I sent the patch with git send-email.

Thanks
Joris
[PATCH] BUG/MEDIUM: compat: fix segfault on FreeBSD
Since commit 65d805fd, which removes standard.h from compat.h, some
values were not properly set on FreeBSD. This caused a segfault at
startup when smp_resolve_args is called.

As FreeBSD has IP_BINDANY, CONFIG_HAP_TRANSPARENT is defined. This
causes struct conn_src to be extended with some fields, and the size of
this structure was incorrect. Including netinet/in.h fixes this issue.

While diving into code preprocessing, I found that limits.h was required
to properly set MAX_HOSTNAME_LEN, ULONG_MAX, USHRT_MAX and other system
limits on FreeBSD.
---
 include/common/compat.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/common/compat.h b/include/common/compat.h
index ecbc3b1..07dd01d 100644
--- a/include/common/compat.h
+++ b/include/common/compat.h
@@ -22,11 +22,13 @@
 #ifndef _COMMON_COMPAT_H
 #define _COMMON_COMPAT_H

+#include <limits.h>
 /* This is needed on Linux for Netfilter includes */
 #include <sys/param.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <arpa/inet.h>
+#include <netinet/in.h>

 #ifndef BITS_PER_INT
 #define BITS_PER_INT    (8*sizeof(int))
--
2.3.6
Re: Haproxy 1.6 segfault on FreeBSD
2015-06-12 8:41 GMT+02:00 Willy Tarreau w...@1wt.eu:

On Fri, Jun 12, 2015 at 08:35:15AM +0200, Willy Tarreau wrote:
On Fri, Jun 12, 2015 at 08:27:34AM +0200, joris dedieu wrote:
(...)
> All those ones are innocent. Including sys/limits.h in common/compat.h
> corrects those differences but not the segfault.

OK. Please use limits.h instead, as sys/limits.h is not present on all
systems.

I'm thinking about something else you can test, which is to verify all
object sizes. You need to build with and without the patch, then you can
diff the outputs and check whether some structures would have a
different size. This could help spot the culprit:

$ find . -name '*.o' | sort | xargs nm --size-sort

When doing this here, I found something troubling (joris.out is your
version with the revert):

--- normal.out 2015-06-12 08:37:00.828256778 +0200
+++ joris.out 2015-06-12 08:36:41.509255648 +0200
@@ -91,6 +91,7 @@
 ./src/auth.o:
 0004 B userlist
+0006 C localtimezone
 0041 T auth_find_userlist
 0050 T check_group
 0088 T check_user
@@ -1002,7 +1003,7 @@
 0020 T tcp_req_cont_keywords_register
 0020 T tcp_res_cont_keywords_register
 0027 t port_range_release_port
-0028 r CSWTCH.124
+0028 r CSWTCH.122
 0039 t bind_parse_defer_accept
 0039 t bind_parse_transparent
 0039 T tcp_get_src
@@ -1545,7 +1546,7 @@
 0008 C date
 0008 C now
 0008 C start_date
-0008 b tv_offset.4129
+0008 b tv_offset.3989
 0024 T _tv_isgt
 0024 T _tv_isle
 002f T _tv_ms_elapsed
@@ -1562,6 +1563,7 @@
 023f T tv_update_date
 ./src/uri_auth.o:
+0006 C localtimezone
 001e T stats_set_flag
 001e T stats_set_refresh
 005a T stats_set_realm

As you can see, localtimezone moves to another file. Guess what? It's an
array of 6 chars which is defined in standard.h! That's completely
bogus, I wonder where that comes from! I would not be surprised if you
have a similarly named variable on FreeBSD which is of a smaller size
and which gets overwritten when this variable is modified. Please could
you try to rename it as a quick test?
It only appears at 3 locations:

$ git grep -n localtimezone
include/common/standard.h:827:char localtimezone[6];
src/haproxy.c:588: strftime(localtimezone, 6, "%z", curtime);
src/standard.c:2340: memcpy(dst, localtimezone, 5); // timezone

Done, and unfortunately it still segfaults. But despite Level3's
high-quality route filtering near lunchtime, I think I'm not too far.

As you suggested, I looked at the nm output and found things that made
me think of the struct proxy size. I found that there was a link with
_COMMON_COMPAT_H being defined or not, and finally with
CONFIG_HAP_TRANSPARENT. Commenting out lines 101 to 108 in
include/common/compat.h, without other modifications, stops the
segfault. I will try to continue this way.

Best Regards
Joris

Willy
Re: Haproxy 1.6 segfault on FreeBSD
2015-06-12 16:53 GMT+02:00 Willy Tarreau w...@1wt.eu:

Hi Joris,

On Fri, Jun 12, 2015 at 04:45:04PM +0200, joris dedieu wrote:
> $ git grep -n localtimezone
> include/common/standard.h:827:char localtimezone[6];
> src/haproxy.c:588: strftime(localtimezone, 6, "%z", curtime);
> src/standard.c:2340: memcpy(dst, localtimezone, 5); // timezone
>
> Done, and unfortunately it still segfaults. But despite Level3's
> high-quality route filtering near lunchtime, I think I'm not too far.
> As you suggested, I looked at the nm output and found things that made
> me think of the struct proxy size. I found that there was a link with
> _COMMON_COMPAT_H being defined or not, and finally with
> CONFIG_HAP_TRANSPARENT. Commenting out lines 101 to 108 in
> include/common/compat.h, without other modifications, stops the
> segfault. I will try to continue this way.

Ah, excellent finding. Indeed:
- struct proxy references struct conn_src
- struct conn_src's size depends on CONFIG_HAP_TRANSPARENT
- CONFIG_HAP_TRANSPARENT is set in compat.h when certain #defines are
  found

The conditions to set it are defined in compat.h:

#if defined(IP_FREEBIND) \
 || defined(IP_BINDANY) \
 || defined(IPV6_BINDANY) \
 || defined(SO_BINDANY) \
 || defined(IP_TRANSPARENT) \
 || defined(IPV6_TRANSPARENT)
#define CONFIG_HAP_TRANSPARENT
#endif

I know that we don't have the same ones on FreeBSD and Linux, and it's
very likely that the right one depends on certain includes on FreeBSD
which are not set in compat.h, so that depending on which file includes
compat.h, your define is set or not. I would not be surprised if adding
this line to compat.h solved the problem:

#include <netinet/in.h>

It was this one.
So I finally added:
* limits.h for things like LONG_MIN or MAX_HOSTNAME_LEN to have the
  correct definitions
* netinet/in.h so that struct conn_src has the right size

diff --git a/include/common/compat.h b/include/common/compat.h
index ecbc3b1..07dd01d 100644
--- a/include/common/compat.h
+++ b/include/common/compat.h
@@ -22,11 +22,13 @@
 #ifndef _COMMON_COMPAT_H
 #define _COMMON_COMPAT_H

+#include <limits.h>
 /* This is needed on Linux for Netfilter includes */
 #include <sys/param.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <arpa/inet.h>
+#include <netinet/in.h>

 #ifndef BITS_PER_INT
 #define BITS_PER_INT    (8*sizeof(int))

Thanks for the help.
Nice week-end
Joris

> If it's not this one, please copy the same as in standard.h. Once you
> find it, feel free to propose a patch and I'll merge it.
> Regards,
> Willy
Re: Haproxy 1.6 segfault on FreeBSD
Hi Lukas, This is the last commit available on github for haproxy/haproxy https://github.com/haproxy/haproxy/commit/80b59eb0d20245b4040f8ee0baae0d36b6c446b5 Best regards Joris 2015-06-11 14:17 GMT+02:00 Lukas Tribus luky...@hotmail.com: Hi! Hi everyone, It seems that since some times haproxy 1.6 segfault on freebsd Eg: at commit 80b59eb0d20245b4040f8ee0baae0d36b6c446b5 I can't find that commit? Where are you pulling/cloning from? Lukas
Haproxy 1.6 segfault on FreeBSD
Hi everyone,

It seems that for some time now haproxy 1.6 has segfaulted on FreeBSD.
E.g. at commit 80b59eb0d20245b4040f8ee0baae0d36b6c446b5:

Program received signal SIGSEGV, Segmentation fault.
0x004d5cf7 in smp_resolve_args (p=0x80144b000) at src/sample.c:1080
1080    list_for_each_entry_safe(cur, bak, &p->conf.args.list, list) {
Current language: auto; currently minimal
(gdb) backtrace
#0 0x004d5cf7 in smp_resolve_args (p=0x80144b000) at src/sample.c:1080
#1 0x00444611 in check_config_validity () at src/cfgparse.c:7614
#2 0x00404229 in init (argc=0, argv=0x7fffeae8) at src/haproxy.c:741
#3 0x00406b3f in main (argc=3, argv=0x7fffead0) at src/haproxy.c:1542
(gdb) n
1080    list_for_each_entry_safe(cur, bak, &p->conf.args.list, list) {
(gdb) print cur
$1 = (struct arg_list *) 0x0
(gdb) print *bak
$2 = {list = {n = 0x18b88148f8458b48, p = 0xf0c}, arg = 0x7d8b48002684,
arg_pos = 35383544, ctx = -1924661248,
kw = 0x8b48004f92b2253c <Error reading address 0x8b48004f92b2253c: Bad address>,
conv = 0xa58918b48f84d <Error reading address 0xa58918b48f84d: Bad address>,
file = 0x7ee800b0c6894800 <Error reading address 0x7ee800b0c6894800: Bad address>,
line = 1224735602}
(gdb) print p->conf.args.list
$3 = {n = 0x0, p = 0x0}

Don't ask me why (and I'd really like to know why) but a revert of
65d805fdfc5ceead2645d3107cbae7b7696a1f15 fixes the issue:

diff --git a/include/common/compat.h b/include/common/compat.h
index ecbc3b1..48ea1f7 100644
--- a/include/common/compat.h
+++ b/include/common/compat.h
@@ -27,6 +27,8 @@
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <arpa/inet.h>
+#include <common/config.h>
+#include <common/standard.h>

 #ifndef BITS_PER_INT
 #define BITS_PER_INT    (8*sizeof(int))
diff --git a/include/common/config.h b/include/common/config.h
index 27b8f14..5833cfc 100644
--- a/include/common/config.h
+++ b/include/common/config.h
@@ -23,7 +23,6 @@
 #define _COMMON_CONFIG_H

 #include <common/compiler.h>
-#include <common/compat.h>
 #include <common/defaults.h>

 /* this reduces the number of calls to select() by choosing appropriate
diff --git a/include/types/global.h b/include/types/global.h
index ec6679d..bdc0654 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -25,7 +25,6 @@
 #include <netinet/in.h>

 #include <common/config.h>
-#include <common/standard.h>
 #include <types/freq_ctr.h>
 #include <types/listener.h>
 #include <types/proxy.h>
Re: Haproxy 1.6 segfault on FreeBSD
2015-06-11 14:38 GMT+02:00 Lukas Tribus luky...@hotmail.com:

> Hi Lukas,
> This is the last commit available on github for haproxy/haproxy
> https://github.com/haproxy/haproxy/commit/80b59eb0d20245b4040f8ee0baae0d36b6c446b5

That is an unofficial mirror, updated manually and often outdated (like
right now). Please clone from the official mirror at:
http://git.haproxy.org/git/haproxy.git/

Ok. I have checked out the main repo. I'm at
28b48ccbc879a552f988e6e1db22941e3362b4db

This will probably not help with the issue you are facing, but at least
we have the same commit hash.

Same issue, same solution, with minor adjustments for the patch to
apply.

Joris

diff --git a/include/common/compat.h b/include/common/compat.h
index ecbc3b1..48ea1f7 100644
--- a/include/common/compat.h
+++ b/include/common/compat.h
@@ -27,6 +27,8 @@
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <arpa/inet.h>
+#include <common/config.h>
+#include <common/standard.h>

 #ifndef BITS_PER_INT
 #define BITS_PER_INT    (8*sizeof(int))
diff --git a/include/common/config.h b/include/common/config.h
index 27b8f14..5833cfc 100644
--- a/include/common/config.h
+++ b/include/common/config.h
@@ -23,7 +23,6 @@
 #define _COMMON_CONFIG_H

 #include <common/compiler.h>
-#include <common/compat.h>
 #include <common/defaults.h>

 /* this reduces the number of calls to select() by choosing appropriate
diff --git a/include/types/global.h b/include/types/global.h
index b3b9672..3812771 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -25,7 +25,6 @@
 #include <netinet/in.h>

 #include <common/config.h>
-#include <common/standard.h>
 #include <import/da.h>
 #include <types/freq_ctr.h>
 #include <types/listener.h>

Thanks, Lukas
Re: acl + map
Hi Willy,

2015-02-25 17:32 GMT+01:00 Willy Tarreau w...@1wt.eu:

Hi Joris,

On Wed, Feb 25, 2015 at 02:24:45PM +0100, joris dedieu wrote:
> Hi,
>
> I have a list of valid cookies associated with client IPs that I try
> to match in an acl. The map format is: cookie-value\tip-address\n
>
> This acl should do:
> if (client has cookie plop and plop value lookup in plop.map returns src);
> then the acl is valid
> endif
>
> I tried things like:
> acl valid_cookie src %[req.cook(plop),map_str_ip(plop.map)]
> or
> acl valid_cookie req.cook(plop),map_str_ip(plop.map) -m ip %[src]
>
> but it clearly doesn't work (error detected while parsing ACL
> 'valid_cookie': '%[req.cook(plop),map_str_ip(plop.map)]' or %[src] is
> not a valid IPv4 or IPv6 address). Maybe I misunderstand the %[
> substitution? Does anyone here know the right way to do that? Maybe
> the -M switch?

The problem with %[] is that it became widespread enough to let people
believe it can be used everywhere. It's only valid in some arguments of
the http-request actions, and in log formats of course. It cannot be
used to describe ACL patterns, since by definition these patterns are
constant.

Ok, thanks for this clarification.

In your case, if you need to check that the combination of
(source, cookie) matches one in your table, I think you could proceed
like this:

1) build a composite header which contains $cookie=$ip:
   http-request add-header blah %[req.cook(plop)]=%[src]

2) match this header against your own list of cookie=src entries in an
   ACL:
   acl valid_cookie req.hdr(blah) -f valid-cookies.lst

3) fill your valid-cookies.lst file with the valid combinations in the
   form cookie=ip.

4) optionally remove the header blah after you've used the valid_cookie
   ACL.

Hoping this helps,

Yes, it helps a lot (even if I'm not really satisfied with using this
for client identification, but that's another story :)

Best Regards
Joris

Willy
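Willy's four steps can be combined into one frontend. This is only a sketch (the header name "blah" and the file path follow the example above; the backend names are made up). Note the ordering: the scratch header must still exist when the ACL is evaluated, so the check happens before the header is deleted:

```
frontend www
    bind :80
    # 1) build "cookie-value=client-ip" in a scratch header
    http-request add-header blah %[req.cook(plop)]=%[src]
    # 2)-3) reject requests whose pair is not in the flat file
    #       (one "cookie=ip" entry per line)
    http-request deny if !{ req.hdr(blah) -f /etc/haproxy/valid-cookies.lst }
    # 4) don't leak the scratch header to the servers
    http-request del-header blah
    default_backend app
```

http-request rules run in order, which is what makes the deny-then-delete sequence safe: the anonymous ACL is evaluated while the header is present, and the header is stripped before the request is forwarded.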
acl + map
Hi,

I have a list of valid cookies associated with client IPs that I try to
match in an acl. The map format is: cookie-value\tip-address\n

This acl should do:

if (client has cookie plop and plop value lookup in plop.map returns src);
then the acl is valid
endif

I tried things like:

acl valid_cookie src %[req.cook(plop),map_str_ip(plop.map)]
or
acl valid_cookie req.cook(plop),map_str_ip(plop.map) -m ip %[src]

but it clearly doesn't work (error detected while parsing ACL
'valid_cookie': '%[req.cook(plop),map_str_ip(plop.map)]' or %[src] is
not a valid IPv4 or IPv6 address).

Maybe I misunderstand the %[ substitution? Does anyone here know the
right way to do that? Maybe the -M switch?

Best regards
Joris
Re: make haproxy notice that backend server ip has changed
2013/12/2 Pawel Veselov pawel.vese...@gmail.com:
> Here is my first attempt at this: http://pastebin.com/xXfZJf3f
> The diff is over http://git.1wt.eu/git/haproxy-1.4.git/ ref
> eb9632f7c6ae675bdee4c82eb0d298ba7f37fc52
>
> To enable DNS checks on a server, the host name defined in the
> configuration should be suffixed with @ar, and checks must be enabled
> for the server entry. For example:
>
> server server1 hostname.com@ar:5432 check
>
> Limitations so far:
> - only somewhat tested. I'm yet to test the actual switch-over, plus
> the code that sends DNS queries through the queue has never been
> tested (my requests always sink directly into the socket buffer).
> There are probably DNS responses that will throw the code off
> - only the first nameserver entry is ever picked from resolv.conf

Maybe an external library like ldns
(http://www.nlnetlabs.nl/projects/ldns/) could provide complete protocol
handling (resolv.conf parsing, DNS queries, packet parsing, DNSSEC
validation...)

Joris

> Sorry for dragging in uthash, I didn't find any hash-table
> implementation in haproxy, or think of a better way to index the
> server entries for this.
> Any feedback is greatly appreciated.
Re: haproxy and mobile devices
2013/9/16 Christophe Rahier christo...@qualifio.com:
> Hi,
> It's a very strange problem. Some of our users get a blank page when
> they try to connect to our application, but no more information (very
> easy to debug).

Have you enabled haproxy logs? They contain almost everything useful for
this kind of diagnostic.

> I think it's in HTTP, not in HTTPS. I continue to test but I have no
> problem with my iPad.

Did you try with a GSM connection? It's sometimes very slow (client
timeout...)

Regards
Joris

> Thanks for your help.
> Christophe
>
> On 16/09/13 16:24, david rene comba lareu shadow.of.sou...@gmail.com
> wrote:
> Hi,
> What problems do you have? I had some problems with SSL on mobile
> devices (saying that the site is not signed by a trusted authority):
> they don't have all the cross-root CAs for the certificate, and I
> solved it just by adding them to the PEM.
> Regards.
>
> 2013/9/16 Christophe Rahier christo...@qualifio.com:
> Hi,
> I'd like to know if I need to adapt the haproxy config file for mobile
> devices? We have a lot of customers who encounter problems with our
> application and I'm trying to find the problem.
> Thanks for your help.
> Kind regards,
> Christophe
Re: Does the transparent can't work in FreeBSD?
2013/7/12 jinge altman87...@gmail.com:
> Hi PiBa-NL,
>
> I just followed your advice and found my pf configuration was not
> correct:
>
> rdr on vlan64 proto tcp from any to any -> 127.0.0.1 port
>
> And I changed to ipfw and fwd, then it works correctly:
>
> ipfw add fwd 127.0.0.1, tcp from any to any via vlan64 in
>
> And you told me I can use pf's divert-to, but after a test I found it
> doesn't work. Here is the configuration:
>
> pass in quick on vlan64 inet proto tcp from any to any divert-to 127.0.0.1 port
>
> So can you tell me the right configuration?

You can try to explicitly set the original port:

pass in quick on vlan64 inet proto tcp from any to any port 80 divert-to 127.0.0.1 port

Also check that ipdivert is loaded.

Joris

> Thank you.
> Regards
> Jinge
>
> On 2013-7-11, at 12:07 PM, jinge altman87...@gmail.com wrote:
> Hi PiBa-NL,
> Thanks for your reply! And I will follow your advice!
> Regards
> Jinge
>
> On 2013-7-10, at 4:25 AM, PiBa-NL piba.nl@gmail.com wrote:
> Hi Jinge,
>
> I'm not exactly sure how this is supposed to work, but I did manage to
> get transparent proxying for the server side working (the server is
> presented with a connection from the original client IP). This works
> with haproxy 1.5dev19 on FreeBSD 8.3 with the help of some ipfw fwd
> rules. Your config also seems to be working (I used some parts thereof
> to test). It did require the following ipfw rule for me:
>
> ipfw add 90 fwd localhost tcp from any to any in recv em1
>
> Actually on pfSense it also needs -x haproxy, as it is a bit
> customized. And because I run 'ipfw' combined with 'pf', I also needed
> to configure pf with floating 'pass on match' rules to allow the
> 'strange traffic' that pf cannot handle. If you however have FreeBSD 9
> you might want to look into the divert-to rules that pf can make; it
> might make things simpler if it turns out to work. Please report back
> your required settings (config if it changes) when you manage to get
> it working.
>
> Greetings
> PiBa-NL
>
> Op 9-7-2013 12:55, jinge schreef:
> Hi, all!
> We use haproxy and FreeBSD for our cache system.
And we want to use the transparent option
(http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-option%20transparent)
for some compatibility reasons, but found it doesn't work. Here is the
configuration, which worked on Ubuntu:

frontend tcp-in
    bind :
    mode tcp
    log global
    option tcplog
    # distinguish HTTP and non-HTTP
    tcp-request inspect-delay 30s
    tcp-request content accept if HTTP
    default_backend Direct

backend Direct
    mode tcp
    log global
    option tcplog
    no option httpclose
    no option http-server-close
    no option accept-invalid-http-response
    option transparent

Can anyone tell me whether FreeBSD cannot support transparent here, or
whether my configuration is not correct? And how to make transparent
work right. Thanks!

Regards
Jinge
Re: haproxy as a Windows service
2013/6/7 Tom Huybrechts tom.huybrec...@gmail.com: hi, I'd like to run haproxy as a service in Windows, not just as a background process. It doesn't look like this is supported out of the box, but does anyone have some tips on how to best implement this ? You can try cygrunsrv http://cygwin.wikia.com/wiki/Cygrunsrv Thanks, Tom
Re: LB Layout Question
Hi Syd,

> I'm guessing an NFS share from the 2 webservers to the 1 fileserver.
> However, from a bit of research with load-balanced magento setups
> there seem to be a lot of negative comments about using NFS in this
> way.

It's always better to avoid NFS, as it introduces a point of failure.
Sometimes just syncing the files on both servers with rsync / unison /
snapshots / whatever is preferable (it strongly depends on the number of
files and the number of file changes). A crashed NFS server can leave
inconsistent mount points on the webservers.

Anyway it works, but you must qualify your server and client versions
and setups before putting it in production. Avoid lockd unless it's
absolutely necessary, enable jumbo frames, find the right rsize and
wsize, check and recheck your disks' health, your RAID settings, your
I/O performance. If possible, use varnish on the web servers for caching
static content, or serve the static files directly from the file server
using nginx. Never forget that NFS is slow.

Joris

2013/5/29 Syd s...@summerwinter.com:
> Hi There,
> I've set up a few small load-balanced environments with haproxy,
> usually 2 LBs, 2+ webservers, 1 db server. However, I now have a
> client who needs the above but with an additional file storage server
> for user uploads. So I'm arranging for an extra dedicated server with
> several TB that will be on a private network with the 2 webservers.
> The client uses a custom-coded CMS which allows a path to be specified
> as an upload folder for user file storage.
> Any simple advice for the best method to connect a file server to the
> web servers? I'm guessing an NFS share from the 2 webservers to the 1
> fileserver. However, from a bit of research with load-balanced magento
> setups there seem to be a lot of negative comments about using NFS in
> this way.
Re: compile warning
2013/5/22 Dmitry Sivachenko trtrmi...@gmail.com: Hello! Hi, When compiling the latest haproxy snapshot on FreeBSD-9 I get the following warning:

cc -Iinclude -Iebtree -Wall -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o src/ev_kqueue.c
In file included from include/types/listener.h:33, from include/types/global.h:29, from src/ev_kqueue.c:30:
include/common/mini-clist.h:141:1: warning: LIST_PREV redefined
In file included from /usr/include/sys/event.h:32, from src/ev_kqueue.c:21:
/usr/include/sys/queue.h:426:1: warning: this is the location of the previous definition

For my part I can't reproduce it.

$ uname -a
FreeBSD mailhost2 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29 18:27:25 UTC 2013 r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
$ cc -v
Using built-in specs.
Target: amd64-undermydesk-freebsd
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 4.2.1 20070831 patched [FreeBSD]

rm src/ev_kqueue.o; cc -Iinclude -Iebtree -Wall -Werror -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o src/ev_kqueue.c

Doesn't produce any warning with haproxy-ss-20130515. Could you please tell me how to reproduce it? Joris JFYI.
Re: compile warning
2013/5/23 Dmitry Sivachenko trtrmi...@gmail.com: On 23.05.2013, at 11:22, joris dedieu joris.ded...@gmail.com wrote: For my part I can't reproduce it.

$ uname -a
FreeBSD mailhost2 9.1-RELEASE-p3 FreeBSD 9.1-RELEASE-p3 #0: Mon Apr 29 18:27:25 UTC 2013 r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
$ cc -v
Using built-in specs.
Target: amd64-undermydesk-freebsd
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 4.2.1 20070831 patched [FreeBSD]

rm src/ev_kqueue.o; cc -Iinclude -Iebtree -Wall -Werror -O2 -pipe -O2 -fno-strict-aliasing -pipe -DFREEBSD_PORTS -DTPROXY -DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DUSE_PCRE -I/usr/local/include -DCONFIG_HAPROXY_VERSION=\"1.5-dev18\" -DCONFIG_HAPROXY_DATE=\"2013/04/03\" -c -o src/ev_kqueue.o src/ev_kqueue.c

Doesn't produce any warning with haproxy-ss-20130515. Could you please tell me how to reproduce it? Update to FreeBSD-9-STABLE if you want to reproduce it. This change was MFC'd to 9/stable after 9.1-RELEASE: http://svnweb.freebsd.org/base/stable/9/sys/sys/queue.h?view=log Thanks Dmitry for the clarification.
[PATCH] MINOR: mute warnings while compiling with clang
Hello, I noticed several warnings while compiling haproxy with clang (from the FreeBSD 9.1 base system).

* 145 unused-value warnings regarding mini-clist.h (LIST_ADD, LIST_ADDQ, LIST_DEL) and standard.h (UBOUND):

src/haproxy.c:1206:4: warning: expression result unused [-Wunused-value]
LIST_DEL(&log->list);
^~~~
include/common/mini-clist.h:117:95: note: expanded from macro 'LIST_DEL'
...({ typeof(el) __ret = (el); (el)->n->p = (el)->p; (el)->p->n = (el)->n; (__ret); })
^

The patch mutes them by casting the return value to (void) where it is unused, since the macros' return values may still be useful elsewhere.

* 14 empty-body warnings regarding a gcc warning fix:

src/haproxy.c:1532:73: warning: if statement has empty body [-Wempty-body]
if (write(pidfd, pidstr, strlen(pidstr)) < 0) /* shut gcc warning */;
^
src/haproxy.c:1532:73: note: put the semicolon on a separate line to silence this warning [-Wempty-body]

The patch just applies clang's recommendation to put the semicolon on a separate line to silence this warning (adding a hack to silence another). As they are quite long, the patches are attached. Best regards Joris

clang-unused-value.patch Description: Binary data
clang-empty-body.patch Description: Binary data
Re: reqrep force a trailing slash
2013/5/21 Nick Jennings n...@silverbucket.net: Hi All, When someone visits www.example.com/foobar I'd like to force a trailing slash. Here's what I've got so far, but it doesn't seem to be working, and You may have a look at the redirect section:

# send redirects for requests for articles without a '/'.
acl missing_slash path_reg ^/article/[^/]*$
redirect code 301 prefix / drop-query append-slash if missing_slash

I've tried a number of variations to no avail. reqrep ^([^\ ]*)\ /foobar(.*) \1\ /foobar/\2 Are you really just trying to add a trailing slash here? What I read here is a much more complex URL rewriting. For a simple trailing slash, reqrep ^(.*[^/])$ \1/ may work (untested). If someone goes to example.com/foobar/stuff I'd also like to force a trailing slash. For some reason when pointing to directories, the resource files are not found unless the directory path ends with a slash. But if that second request is too much, I can just stick with the first one. So that people navigating to that first landing page get it properly rendered. Thanks for any help. Cheers Nick
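Both cases (the /foobar landing page and directory paths under it) could be covered with a single redirect rule; a hedged, untested sketch — the path pattern is an assumption, adjust it to the real layout (the dot-exclusion is there so real files like /foobar/img.png are not redirected):

```
# redirect /foobar and /foobar/<subdir> (no dot, no trailing slash) to add '/'
acl foobar_no_slash path_reg ^/foobar(/[^/.]+)*$
redirect code 301 prefix / drop-query append-slash if foobar_no_slash
```

A redirect like this is generally preferable to reqrep here, because the browser learns the canonical URL and resolves relative resource paths correctly afterwards.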
Re: SSL OCSP Stapling
2012/11/7 Hervé COMMOWICK herve.commow...@lizeo-group.com: As of now, on the client side, it is only working on IE9 (not before, not after) and Opera, not so common... It's been enabled in Firefox for a long time (Edit / Preferences / Advanced / Encryption / Validation, or search for ocsp in about:config). See: https://bugzilla.mozilla.org/show_bug.cgi?id=110161 Look at this: http://www.imperialviolet.org/2012/02/05/crlsets.html for Google's thoughts. Short: "On this basis, we're currently planning on disabling online revocation checks in a future version of Chrome. (There is a class of higher-security certificate, called an EV certificate, where we haven't made a decision about what to do yet.)" And this: https://bugzilla.mozilla.org/show_bug.cgi?id=360420#c10 for Mozilla's thoughts. Short: it's busted by design. It can only carry a single response, and hardly any sites have only one OCSP certificate in their chain these days. So it doesn't eliminate the OCSP lookup delay, which is its primary attraction. Hervé C. On 11/06/2012 11:02 PM, Willy Tarreau wrote: Hi Lukas, On Tue, Nov 06, 2012 at 04:57:59PM +0100, Lukas Tribus wrote: Don't know if it helps without some knowledge of the nginx source code, but here [1] you can find the patches applied to nginx to introduce OCSP support. Thanks for the pointer. Anyway as you suspect, source code alone doesn't tell much about the real benefits to expect from this feature, nor how it's supposed to be used (especially by clients). It doesn't seem to be trivial to implement though, because you also need to run (at regular intervals) an OCSP query towards the CA's OCSP server... Amusingly, running a task at regular intervals is the easiest part to do, it's just like health checks. We could decide to dedicate such a task per stapling-enabled bind line and it would not be much of an issue. The overhead would not even be measurable if we were working at insane refresh rates.
What's unclear to me is how many clients do support this nowadays, how many servers do, whether or not users are willing to allow outgoing connections to fetch such cert statuses, whether or not non-stapling aware clients would be impacted by the feature (eg: increased handshake size due to advertised extension and data to everyone) etc... I think we need to take more time to study this in details, but until someone comes with a detailed description of what this will bring to his site, I'm not sure anyone will spend more time on this :-/ Regards, Willy -- Hervé COMMOWICK Ingénieur systèmes et réseaux. http://www.rezulteo.com by Lizeo Online Media Group http://www.lizeo-online-media-group.com/ 42 quai Rambaud - 69002 Lyon (France) ⎮ ☎ +33 (0)4 63 05 95 30
Re: How to update haproxy?
Hi,

cd /tmp/
wget http://haproxy.1wt.eu/download/latest/version.tar.gz
tar -xvzf haproxy-*.tar.gz
cd haproxy-(version)
make TARGET=linux26 USE_PCRE=1                 # replace the make options with whatever fits your needs
mv /usr/sbin/haproxy /usr/sbin/haproxy_v.X.X   # keeps an old copy of the version
cp haproxy /usr/sbin/haproxy                   # your distro may be different
/etc/init.d/haproxy restart                    # or whatever init script you have..

I know some people who do a very close variant. They install the binary with the version in its name, then make haproxy a symlink to the version. That way, it's even easier to revert in case of issues:

cp haproxy /usr/sbin/haproxy-1.5-dev12
ln -sf haproxy-1.5-dev12 /usr/sbin/haproxy

If you want to reproduce the Makefile behavior you may prefer to use install(1), which can invoke strip(1) and save around 50kB:

install -s haproxy /usr/sbin/haproxy-1.5-dev12

That was my 2 euro cents Joris I'm doing that myself too on the main site BTW, except that since I regularly install test versions, I have a more complex naming scheme :-) Also, you can use reload instead of restart to replace the process. The difference is that upon reload, the old process will go away only if the new one managed to start without error. This can avoid service outages! Regards, Willy
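The symlink variant can be tried out end-to-end with plain text files in a scratch directory (the file names below are placeholders standing in for real binaries):

```shell
# Toy demonstration of the versioned-binary + symlink upgrade pattern.
dir=$(mktemp -d)
cd "$dir"
echo "binary 1.5-dev11" > haproxy-1.5-dev11
echo "binary 1.5-dev12" > haproxy-1.5-dev12

ln -sf haproxy-1.5-dev11 haproxy   # current version
cat haproxy                        # prints: binary 1.5-dev11

ln -sf haproxy-1.5-dev12 haproxy   # upgrade: repoint the symlink
cat haproxy                        # prints: binary 1.5-dev12

ln -sf haproxy-1.5-dev11 haproxy   # rollback is just as cheap
cat haproxy                        # prints: binary 1.5-dev11
```

The repoint with `ln -sf` is what makes the revert path trivial: the versioned files are never touched, only the pointer moves.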
Re: HAProxy with native SSL support !
Hi, Willy Thanks for this long time expected feature! Have a lot of fun and please report your success/failures, There is an include issue in this snapshot on FreeBSD (which is not, I think, SSL-related):

gmake TARGET=freebsd USE_OPENSSL=1
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing -DTPROXY -DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DCONFIG_HAPROXY_VERSION=\"1.5-dev11\" -DCONFIG_HAPROXY_DATE=\"2012/06/04\" \
  -DBUILD_TARGET='"freebsd"' \
  -DBUILD_ARCH='""' \
  -DBUILD_CPU='"generic"' \
  -DBUILD_CC='"gcc"' \
  -DBUILD_CFLAGS='"-O2 -g -fno-strict-aliasing"' \
  -DBUILD_OPTIONS='""' \
  -c -o src/haproxy.o src/haproxy.c
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing -DTPROXY -DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DCONFIG_HAPROXY_VERSION=\"1.5-dev11\" -DCONFIG_HAPROXY_DATE=\"2012/06/04\" -c -o src/sessionhash.o src/sessionhash.c
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing -DTPROXY -DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DCONFIG_HAPROXY_VERSION=\"1.5-dev11\" -DCONFIG_HAPROXY_DATE=\"2012/06/04\" -c -o src/base64.o src/base64.c
gcc -Iinclude -Iebtree -Wall -O2 -g -fno-strict-aliasing -DTPROXY -DCONFIG_HAP_CRYPT -DENABLE_POLL -DENABLE_KQUEUE -DUSE_OPENSSL -DCONFIG_HAPROXY_VERSION=\"1.5-dev11\" -DCONFIG_HAPROXY_DATE=\"2012/06/04\" -c -o src/protocols.o src/protocols.c
In file included from src/protocols.c:20:
include/common/standard.h: In function 'is_addr':
include/common/standard.h:572: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:572: error: (Each undeclared identifier is reported only once
include/common/standard.h:572: error: for each function it appears in.)
include/common/standard.h:574: error: 'AF_INET6' undeclared (first use in this function)
include/common/standard.h: In function 'get_net_port':
include/common/standard.h:586: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:588: error: 'AF_INET6' undeclared (first use in this function)
include/common/standard.h: In function 'get_host_port':
include/common/standard.h:598: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:600: error: 'AF_INET6' undeclared (first use in this function)
include/common/standard.h: In function 'get_addr_len':
include/common/standard.h:610: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:612: error: 'AF_INET6' undeclared (first use in this function)
include/common/standard.h:614: error: 'AF_UNIX' undeclared (first use in this function)
include/common/standard.h: In function 'set_net_port':
include/common/standard.h:624: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:626: error: 'AF_INET6' undeclared (first use in this function)
include/common/standard.h: In function 'set_host_port':
include/common/standard.h:636: error: 'AF_INET' undeclared (first use in this function)
include/common/standard.h:638: error: 'AF_INET6' undeclared (first use in this function)
gmake: *** [src/protocols.o] Erreur 1

A workaround is to include sys/socket.h in include/common/standard.h. Once corrected, SSL support successfully builds and runs on FreeBSD 8.3. Joris Willy
Re: HAProxy and DDOS protection
2012/2/27 Baptiste bed...@gmail.com: Hey the list, Just to let you know a new blog post about HAProxy and DDOS protection. The configuration examples apply to the HAProxy 1.5 branch. Have a nice read: http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/ Any feedback is welcome :) I had some success against non-URL-based DDOS (apache fork exhaustion / syn flood) using a strict timeout policy: tight timeout client, timeout connect and timeout server values. cheers Joris cheers
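Such a timeout policy might look like the sketch below; the values are illustrative assumptions, not from the original mail, and need tuning to the real traffic profile:

```
defaults
    mode http
    timeout http-request  5s   # slow request headers (slowloris-style) get cut off
    timeout client       15s   # idle clients are dropped quickly
    timeout connect       5s   # fail fast when a server does not accept
    timeout server       30s   # bound the time a server may take to respond
```

Keeping these short means attack connections occupy sockets and server slots for a bounded time instead of piling up.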
Re: haproxy and multi location failover
2011/11/1 Senthil Naidu senthil.na...@gmail.com: hi, we need to have a setup as follows:

site 1          site 2
LB (ip 1)       LB (ip 2)
 |    |          |    |
srv1 srv2      srv1 srv2

site 1 is primary and site 2 is backup; in case of site 1 LB's failure, or failure of all the servers in site 1, the website should work from the backup location's servers. Unless you have your own routing, if you want no downtime for anybody you have to imagine a more complex scenario. As said below, the only way to switch from one datacenter to another is to use DNS. So you have to find a solution to cover the time until DNS propagation is complete. I'd do something like: 1) if lb1 fails - change DNS - srv1-1 becomes an LB for itself and srv2-1; 2) if srv1-1 and srv2-1 fail - change DNS - lb1 forwards requests to lb2 (maybe slow, but better than nothing). And so on... Joris Regards On Tue, Nov 1, 2011 at 10:31 PM, Gene J gh5...@gmail.com wrote: Please provide more detail about what you are hosting and what you want to achieve with multiple sites. -Eugene On Nov 1, 2011, at 9:58, Senthil Naidu senthil.na...@gmail.com wrote: Hi, thanks for the reply, if the same needs to be done with dns do we need any external dns services or can we use our own ns1 and ns2 for the same. Regards On Tue, Nov 1, 2011 at 9:06 PM, Baptiste bed...@gmail.com wrote: Hi, Do you want to failover the Frontend or the Backend? If this is the frontend, you can do it through DNS or RHI (but you need your own AS). If this is the backend, you have nothing to do: add your servers in the conf in a separate backend, use some ACLs to take the failover decision and you're done. cheers On Tue, Nov 1, 2011 at 2:25 PM, Senthil Naidu senthil.na...@gmail.com wrote: Hi, Is it possible to use haproxy in an active/passive failover scenario between multiple datacenters. Regards
Re: HAProxy Response time performance
2011/6/11 Matt Christiansen ad...@nikore.net: That's good to know; while 2000 concurrent connections is what we do right now, it will be closer to 10,000 concurrent connections come the holiday season, which is closer to 2.5 GB of RAM (still less than what's on the server). One thought I have is that our requests can be very large at times (big headers, super huge cookies), so it may not be packet loss that the bigger buffer is fixing but a better ability to buffer our large requests. Which might explain why nginx wasn't showing this issue whereas haproxy was. We don't have any HP servers or Broadcom NICs (all Intel). I too have had a lot of issues in general with both HP and Broadcom and chose hardware for our LB that didn't have those NICs. Our switches are new, but not super high quality (Netgears); it's possible they are not performing as well as we would like, I'll have to do some more tests on them. I already experienced some negotiation problems with Netgears. Have you tried to force the media settings on the NICs? Cheers Joris I'm working on creating a more production-like lab where I can test a number of different aspects of the LB to see what else I can do in terms of performance. I will make lots of use of halog -srv along with other tools to measure performance and to see if I can track down any issues in our current H/W setup. Thanks for all the help, Matt C On Thu, Jun 9, 2011 at 10:20 PM, Willy Tarreau w...@1wt.eu wrote: On Thu, Jun 09, 2011 at 04:04:26PM -0700, Matt Christiansen wrote: I added in the tun.bufsize 65536 and right away things got better; I doubled that to 131072 and all of the outliers went away. Set at that, with my tests it looks like haproxy is faster than nginx on 95% of responses and on par with nginx for the last 5%, which is fine with me =). Nice, at least we have a good indication of what may be wrong. I'm pretty sure you're having an important packet loss rate. What is the negative to setting this high like that?
If it's just RAM usage, all of our LBs have 16GB of RAM (don't ask why), so if that's all I don't think it will be an issue having it so high. Yes it's just an impact on RAM. There are two buffers per connection, so each connection consumes 256kB of RAM in your case. If you do that times 2000 concurrent connections, that's 512MB, which is still small compared to what is present in the machine :-) However, you should *really* try to spot what is causing the issue, because right now you're just hiding it under the carpet, and it's not completely hidden as retransmits still take some time to be sent. Many people have encountered the same problem with Broadcom NetXtreme2 network cards, which was particularly marked on those shipped with a lot of HP machines (firmware 1.9.6). The issue was a huge Tx drop rate (which is not reported in netstat). A tcpdump on the machine and another one on the next hop can show that some outgoing packets never reach their destination. It is also possible that a piece of equipment is dying (eg: a switch port) and that the issue will get worse with time. You should pass halog -srv on your logs which exhibit the varying times. It will output the average connection times and response times per server. If you see that all servers are affected, you'll conclude that the issue is closer to haproxy. If you see that just a group of servers is affected, you'll conclude that the issue only lies around them (maybe you'll identify a few older servers too). Regards, Willy
Re: [PATCH] bind non local ip on FreeBSD
2010/11/22 joris dedieu joris.ded...@gmail.com: Hi list, FreeBSD (and maybe other BSDs) uses the IP_BINDANY flag to permit bind() to bind a non-local IP (i.e. an IP which is not configured on an interface). In most cases you will use carp to do so, but as I needed it without carp, I made a little quick-and-dirty patch against version 1.4.9. If some here think it's a good feature, I can work on a better version (maybe with a config variable, tests on other OSes...). Thanks for haproxy Joris

diff -Nru a/Makefile.bsd b/Makefile.bsd
--- a/Makefile.bsd	2010-10-29 00:08:44.0 +0200
+++ b/Makefile.bsd	2010-11-22 13:24:41.885445784 +0100
@@ -35,6 +35,9 @@
 COPTS.openbsd = -DENABLE_POLL -DENABLE_KQUEUE
 LIBS.openbsd =
+# FreeBSD: enable non local address binding
+COPTS.freebsd = -DFREEBSD_ALLOW_NON_LOCAL
+
 # CPU dependant optimizations
 COPTS.generic = -O2
 COPTS.i586 = -O2 -march=i586

diff -Nru a/src/proto_tcp.c b/src/proto_tcp.c
--- a/src/proto_tcp.c	2010-10-29 00:08:44.0 +0200
+++ b/src/proto_tcp.c	2010-11-22 13:48:38.841413187 +0100
@@ -525,6 +525,16 @@
 		}
 	}
 #endif
+#if defined FREEBSD_ALLOW_NON_LOCAL
+	if (setsockopt(fd, IPPROTO_IP, IP_BINDANY, (void *) &one, sizeof(one)) == -1) {
+		err |= ERR_RETRYABLE | ERR_ALERT;
+		if (getuid() > 0)
+			msg = "only root can set IP_BINDANY";
+		else
+			msg = "cannot set IP_BINDANY";
+		goto tcp_close_return;
+	}
+#endif
 	if (bind(fd, (struct sockaddr *)&listener->addr, listener->proto->sock_addrlen) == -1) {
 		err |= ERR_RETRYABLE | ERR_ALERT;
 		msg = "cannot bind socket";

I made a better patch for haproxy-1.5dev3. It introduces a "non-local" keyword, e.g.:

listen blabla
    bind 10.0.0.2:10001 non-local
    ...

$ ifconfig -a | grep inet
inet 80.247.233.40 netmask 0xffffffc0 broadcast 80.247.233.63
inet 80.247.233.59 netmask 0xffffffc0 broadcast 80.247.233.63
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
$ sockstat | grep haproxy
99 haproxy 34302 3 tcp4 10.0.0.2:10001 *:*
...

I really don't know if it's right and useful (except for me).
But it was fun to do. As far as I know, it can be implemented on OpenBSD with the SO_BINDANY socket option. Joris

diff --git a/Makefile.bsd b/Makefile.bsd
index ca2347b..6650760 100644
--- a/Makefile.bsd
+++ b/Makefile.bsd
@@ -35,6 +35,9 @@ PCREDIR!= pcre-config --prefix 2>/dev/null || :
 COPTS.openbsd = -DENABLE_POLL -DENABLE_KQUEUE
 LIBS.openbsd =
+# If you want to allow binding non local ip on FreeBSD
+COPTS.freebsd = -DBIND_NON_LOCAL
+
 # CPU dependant optimizations
 COPTS.generic = -O2
 COPTS.i586 = -O2 -march=i586

diff --git a/include/types/protocols.h b/include/types/protocols.h
index 3dcb2e7..62e2a99 100644
--- a/include/types/protocols.h
+++ b/include/types/protocols.h
@@ -75,6 +75,9 @@
 #define LI_O_TCP_RULES  0x0010 /* run TCP rules checks on the incoming connection */
 #define LI_O_CHK_MONNET 0x0020 /* check the source against a monitor-net rule */
 #define LI_O_ACC_PROXY  0x0040 /* find the proxied address in the first request line */
+#ifdef BIND_NON_LOCAL
+#define LI_O_NONLOCAL   0x0080 /* allow to bind a non local ip */
+#endif

 /* The listener will be directly referenced by the fdtab[] which holds its
  * socket.
The listener provides the protocol-specific accept() function to

diff --git a/src/cfgparse.c b/src/cfgparse.c
index d3223ff..f110051 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -1733,6 +1733,17 @@ int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 			continue;
 		}
+#ifdef BIND_NON_LOCAL
+		if (!strcmp(args[cur_arg], "non-local")) {
+			struct listener *l;
+
+			for (l = curproxy->listen; l != last_listen; l = l->next)
+				l->options |= LI_O_NONLOCAL;
+			cur_arg++;
+			continue;
+		}
+#endif
+
 		if (!strcmp(args[cur_arg], "name")) {
 			struct listener *l;

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index 5039db8..2858663 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -527,6 +527,19 @@ int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
 		}
 	}
 #endif
+#if defined(BIND_NON_LOCAL)
+	if (listener->options & LI_O_NONLOCAL) {
+		if (setsockopt(fd, IPPROTO_IP, IP_BINDANY, (void *) &one, sizeof(one)) == -1) {
+			err |= ERR_RETRYABLE | ERR_ALERT;
+			if (getuid() > 0)
+				msg = "only root can set IP_BINDANY";
+			else
[PATCH] bind non local ip on FreeBSD
Hi list, FreeBSD (and maybe other BSDs) uses the IP_BINDANY flag to permit bind() to bind a non-local IP (i.e. an IP which is not configured on an interface). In most cases you will use carp to do so, but as I needed it without carp, I made a little quick-and-dirty patch against version 1.4.9. If some here think it's a good feature, I can work on a better version (maybe with a config variable, tests on other OSes...). Thanks for haproxy Joris

diff -Nru a/Makefile.bsd b/Makefile.bsd
--- a/Makefile.bsd	2010-10-29 00:08:44.0 +0200
+++ b/Makefile.bsd	2010-11-22 13:24:41.885445784 +0100
@@ -35,6 +35,9 @@
 COPTS.openbsd = -DENABLE_POLL -DENABLE_KQUEUE
 LIBS.openbsd =
+# FreeBSD: enable non local address binding
+COPTS.freebsd = -DFREEBSD_ALLOW_NON_LOCAL
+
 # CPU dependant optimizations
 COPTS.generic = -O2
 COPTS.i586 = -O2 -march=i586

diff -Nru a/src/proto_tcp.c b/src/proto_tcp.c
--- a/src/proto_tcp.c	2010-10-29 00:08:44.0 +0200
+++ b/src/proto_tcp.c	2010-11-22 13:48:38.841413187 +0100
@@ -525,6 +525,16 @@
 		}
 	}
 #endif
+#if defined FREEBSD_ALLOW_NON_LOCAL
+	if (setsockopt(fd, IPPROTO_IP, IP_BINDANY, (void *) &one, sizeof(one)) == -1) {
+		err |= ERR_RETRYABLE | ERR_ALERT;
+		if (getuid() > 0)
+			msg = "only root can set IP_BINDANY";
+		else
+			msg = "cannot set IP_BINDANY";
+		goto tcp_close_return;
+	}
+#endif
 	if (bind(fd, (struct sockaddr *)&listener->addr, listener->proto->sock_addrlen) == -1) {
 		err |= ERR_RETRYABLE | ERR_ALERT;
 		msg = "cannot bind socket";
Re: HAproxy FreeBSD no Logging?
2010/4/1 Joe P.H. Chiang jo3chi...@gmail.com: Hi Yes I'm using the net/haproxy port. Yes the log file exists. I've tried your setup... and it's still not logging. I'm using haproxy 1.4.2, I wonder if that has anything to do with the logging... I'm going to downgrade to 1.3.x to see if that makes any difference. Hi, Can you try to put your rule on the first line of syslog.conf, before the default rules?

local1.* /var/log/haproxy.log
*.err;kern.warning;auth.notice;mail.crit /dev/console

Joris On Wed, Mar 31, 2010 at 10:53 PM, Ross West we...@connection.ca wrote: JPHC I've trouble logging my haproxy on freebsd 7.2 HA-Proxy version 1.4.2 JPHC 2010/03/17 Are you using the net/haproxy port? Make sure the log files exist and/or use the -C option (create non-existent log files) for syslogd. Here's an example that works on my test system:

-= /etc/rc.conf
syslogd_enable="YES"
syslogd_flags="-b localhost -C"
-=

-= /usr/local/etc/haproxy.conf
global
    daemon # set to daemonize
    log 127.0.0.1:514 local1 debug # syslog logging
-=

-= /etc/syslog.conf
local1.* /var/log/haproxy.log
-=

Doing a /usr/local/etc/rc.d/haproxy reload generates a bunch of log entries nicely for each config section. You might want to turn down debug mode though. :-) R. -- -- Thanks, Joe
Re: Re[2]: FreeBSD Ports: bumping haproxy from v1.2.18 - v1.4.x
Also, changing -devel right now at the same time will cause all sorts of support issues as people deal with the migration - not everyone reads the UPDATING file before issuing portupgrade -a. An even better solution would be to mark haproxy-devel as Moved (see /usr/ports/MOVED). I see in portupgrade's man page that there is an --ignore-moved switch, so we can suppose that portupgrade reads the MOVED file. So maybe moving haproxy-devel to haproxy13, and creating haproxy14 and a haproxy15-devel (when the time comes), would be a solution. For now, I think the best idea is to open a PR and see what the FreeBSD ports team thinks about it. Cheers, Joris