Re: Cannot enable a config "disabled" server via socket command
On Thu, Sep 17, 2015 at 9:42 PM, Pavlos Parissis <pavlos.paris...@gmail.com> wrote: > On 15/09/2015 08:45 a.m., Cyril Bonté wrote: >> Hi, >> >> >> On 14/09/2015 14:23, Ayush Goyal wrote: >>> Hi, >>> >>> We are testing haproxy-1.6dev4; we have added a server in a backend as >>> disabled, but we are not able >>> to bring it up using a socket command. >>> >>> Our backend conf looks like this: >>> >>> =cut >>> backend apiservers >>> server api101 localhost:1234 maxconn 128 weight 1 >>> check >>> server api102 localhost:1235 disabled maxconn 128 weight 1 >>> check >>> server api103 localhost:1236 disabled maxconn 128 weight 1 >>> check >>> =cut >>> >>> But, when I run the "enable apiservers/api103" command, it is still in >>> MAINT mode. Disabling and enabling of non-"disabled" servers like api101 >>> work properly. >>> >>> Enabling a config-"disabled" server works correctly with haproxy 1.5. Can >>> you confirm whether it's a bug in 1.6-dev4? >> >> This is due to the introduction of the SRV_ADMF_CMAINT flag, which is >> set permanently. The "enable/disable" socket commands will only modify >> the SRV_ADMF_FMAINT and SRV_ADMF_FDRAIN flags. >> >> I add Baptiste to the thread. >> > > That will break our setup as well, where an external tool uses the > socket to disable a server in the running config and regenerate the > configuration with the server disabled. > > I am also interested in knowing the motivation behind this change. > > Cheers, > Pavlos > > Hi all, This "feature" was an early patch for an upcoming feature meant to avoid any impact of reloading HAProxy on server state. I'm currently finishing the development before forwarding the patches to Willy by tomorrow. We needed to know the real reason why a server was in maintenance state: was it because of the configuration or through the socket, so that at the next reload we could apply the right state based on the old running state, the old config state and the new config state. I'm going to check what we can do to fix your issue. Baptiste
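For readers hitting the same thing, the runtime commands under discussion look like this (a sketch only — the socket path and the use of socat are assumptions about the local setup; in 1.6 the CLI verb takes a backend/server pair, i.e. `enable server <backend>/<server>`):

```
# Assumed global section:
#   stats socket /var/run/haproxy.sock level admin

$ echo "disable server apiservers/api101" | socat stdio /var/run/haproxy.sock
$ echo "enable server apiservers/api103" | socat stdio /var/run/haproxy.sock

# Inspect server states; in the CSV of "show stat", field 18 is "status"
$ echo "show stat" | socat stdio /var/run/haproxy.sock | cut -d, -f1,2,18
```

On 1.6-dev4, the second command has no visible effect for api103 because the config-time `disabled` keyword sets the permanent SRV_ADMF_CMAINT flag described above, which the socket commands do not clear.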
Re: [ANNOUNCE] haproxy-1.6-dev5
On Mon, Sep 14, 2015 at 9:12 PM, Willy Tarreau <w...@1wt.eu> wrote: > On Mon, Sep 14, 2015 at 09:08:49PM +0200, PiBa-NL wrote: >> On 14-9-2015 at 18:48, Willy Tarreau wrote: >> >BTW as a general rule, patches being merged are ACKed to their authors >> >or rejected, so if you don't get a response, simply consider it lost. >> I didn't send a patch so to speak; Remi did send a 'diff --git' but >> without the comment to put into the haproxy repository, after which >> Baptiste then wrote he would submit it after confirmation that it did >> solve the issue, which I gave. > > Thanks for the explanation, I indeed missed all this exchange I guess. > >> Anyway it's not that important I suppose, >> otherwise we / you could always issue another dev release.. > > OK so Baptiste will catch it when he has time and forward it to me > once he's OK with it. > >> Also the patch was added to the FreeBSD ports repository, so it should come >> through with the binary repositories building from there. That will >> solve my 'problem' for the moment. > > OK fine. Thanks! > Willy > Willy, The issue is related to the connect() function used to establish the UDP connection. Currently, I use a sizeof() to get the length of the address structure and Remi suggested to use get_addr_len() instead. Pieter confirmed Remi's suggestion fixes the issue. I can reproduce the issue in a FreeBSD VM I have on my computer. I'll show you tomorrow at the office. Baptiste
Re: Chaining haproxy instances for a migration scenario
On Fri, Sep 11, 2015 at 10:41 AM, Tim Verhoeven <tim.verhoeven...@gmail.com> wrote: > Hello everyone, > > I'm mostly passive on this list but a happy haproxy user for more than 2 > years. > > Now, we are going to migrate our platform to a new provider (and new > hardware) in the coming months and I'm looking for a way to avoid a one-shot > migration. > > So I've been doing some googling and it should be possible to use the proxy > protocol to send traffic from one haproxy instance (at the old site) to > another haproxy instance (at the new site). Then at the new site the haproxy > instance there would just accept the traffic as if it came from the internet > directly. > > Is that how it works? Is that possible? > > Ideally the traffic between the 2 haproxy instances would be encrypted with > TLS to avoid having to set up a VPN. > > Now I haven't found any examples of this kind of setup, so any pointers on > how to set this up would be really appreciated. > > Thanks, > Tim

Hi Tim,

Your use case is an interesting scenario for a blog article :)
About your questions, simply update the app backend of the current site in order to add a new 'server' that would be the HAProxy of the new site:

backend myapp
  [...]
  server app1 ...
  server app2 ...
  server newhaproxy [IP]:8443 check ssl send-proxy-v2 ca-file /etc/haproxy/myca.pem crt /etc/haproxy/client.pem

ca-file: to validate the certificate presented by the server using your own CA (or DANGEROUSLY use "ssl-server-verify none" in your global section)
crt: allows you to use a client certificate to get connected on the other HAProxy

On the newhaproxy (in the new instance):

frontend fe_myapp
  bind :80
  bind :443 ssl crt server.pem
  bind :8443 ssl crt server.pem accept-proxy

You can play with the weight on the current site to send a few requests to the newhaproxy box and increase this weight once you're confident.

Baptiste
Re: Client Affinity in HAProxy with MQTT Broker
Hi Sourav, Thanks a lot for the mail and the screenshot. That said, usually, when we ask for a capture, we mean a pcap file, not a png one :) It's fine, I have the information I need. I also used this documentation: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718037 Please note that the MQTT messages are carried on top of TCP. Also, as > mentioned in the previous mail, I am trying to load balance the traffic based > on the topic mentioned in the MQTT message (3rd line from end), which means > that all the traffic corresponding to a particular topic should be > forwarded to a specific server only. > That's doable only under the following conditions: - the client speaks first (no server banner) - the information is available in the first session buffer - the information is always available at the same place It seems MQTT meets the 3 rules above, so we may be able to do something. So considering the above requirement, I have the following questions: > > 1. Is MQTT already supported as an out-of-the-box feature? > No, MQTT is not supported out of the box. > 2. Is there a configurable hook/plugin through which the above can be achieved? > You might be able to code an MQTT protocol parser in Lua. You could even do MQTT routing to a specific farm based on the topic, but load-balancing is another story. > 3. If no configuration/plugins are available, how easy would it be to add > the feature/code in order to retrieve payload information from TCP packets > and parse it as per MQTT? > Very complicated, I'm afraid... > 4. What are the main modules which I should look into in order to implement > the same in case it is not yet implemented? > > Well, as I mentioned above, there might be some stuff we can do.
Please check http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#req.payload_lv

Following your capture, this fetch can be used to retrieve the whole topic: req.payload_lv(1,1) => it fetches the content in the TCP payload buffer whose size is specified at byte 1 (over 1 byte), the data being stored right after the size. This fetch has a corresponding ACL to match against static patterns. Imagine 3 topics: topic1, topic2, topic3; then you could do:

frontend mqtt
  [...]
  use_backend bk_topic1 if { req.payload_lv(1,1),lower topic1 }
  use_backend bk_topic2 if { req.payload_lv(1,1),lower topic2 }
  use_backend bk_topic3 if { req.payload_lv(1,1),lower topic3 }

backend bk_topic1
  [...]
backend bk_topic2
  [...]
backend bk_topic3
  [...]

The same could be applied in a single backend, using the use-server statement. I hope this helps and is enough. Be aware that once the TCP session has been forwarded to a server, all subsequent messages are going to be forwarded to this server, regardless of the next topics set over the same connection. To be routed again, a client must send its next PUBLISH message over a new TCP connection.

Baptiste

On Thu, Sep 10, 2015 at 7:58 PM, Baptiste <bed...@gmail.com> wrote: > On Thu, Sep 10, 2015 at 4:05 PM, Sourav Das <souravdas@gmail.com> > wrote: > > Hi, > > > > I have been going through the HAProxy documentation for my work which > deals > > with scaling and load balancing for MQTT Brokers. > > > > However, I could not find any configuration regarding the Client Affinity > > where the routing of the MQTT traffic is done based on the topic present > in > > the MQTT message. As MQTT is also carried over TCP, is it possible to > use a > > pre-configured hook in HAProxy so that the traffic can be routed to the > > appropriate server based on the MQTT topic. > > > > At present I am not able to find out any hook which enables this to be > done. > > I am a bit curious to know whether the support of MQTT is planned in > future > > releases of HAProxy.
> > > > > > Please let me know if this makes sense. > > > > Regards, > > Sourav > > > Hi Sourav, > > This would be doable only if the information can be retrieved from the > payload of the first request sent by the client. > Could you provide more information about how the MQTT protocol works? Is > there any server banner? > A simple TCP dump containing an example of the message you want to > route would be appreciated and would allow us to deliver an accurate > answer. > > Baptiste > >
Re: Client Affinity in HAProxy with MQTT Broker
On Thu, Sep 10, 2015 at 4:05 PM, Sourav Das <souravdas@gmail.com> wrote: > Hi, > > I have been going through the HAProxy documentation for my work which deals > with scaling and load balancing for MQTT Brokers. > > However, I could not find any configuration regarding the Client Affinity > where the routing of the MQTT traffic is done based on the topic present in > the MQTT message. As MQTT is also carried over TCP, is it possible to use a > pre-configured hook in HAProxy so that the traffic can be routed to the > appropriate server based on the MQTT topic. > > At present I am not able to find out any hook which enables this to be done. > I am a bit curious to know whether the support of MQTT is planned in future > releases of HAProxy. > > > Please let me know if this makes sense. > > Regards, > Sourav Hi Sourav, This would be doable only if the information can be retrieved from the payload of the first request sent by the client. Could you provide more information about how the MQTT protocol works? Is there any server banner? A simple TCP dump containing an example of the message you want to route would be appreciated and would allow us to deliver an accurate answer. Baptiste
Re: haproxy resolvers, DNS query not send / result NXDomain not expected
On Tue, Sep 8, 2015 at 7:58 AM, Baptiste <bed...@gmail.com> wrote: >>> Hi, >>> >>> I wonder why the code send the TCP port in the DNS query... >>> I'm currently installing an opnsense and I'll try to reproduce the >>> problem. >>> >>> I've not used FreeBSD since 5.4 version :) >>> >>> Baptiste >> >> Hi Baptiste, >> >> it seems ipv4 and ipv6 are handled differently regarding the removal of the >> port in server.c and this is apparently done after the initial dns check has >> already been performed.?. as it knows to treat 'google.com' differently from >> 'nu.nl' >> >> /* save hostname and create associated name resolution */ >> switch (sk->ss_family) { >> case AF_INET: { >> /* remove the port if any */ >> Alert("\n AF_INET remove the port if any: %s", args[2]); >> char *c; >> if ((c = rindex(args[2], ':')) != NULL) { >> newsrv->hostname = my_strndup(args[2], c - args[2]); >> } >> else { >> newsrv->hostname = strdup(args[2]); >> } >> } >> break; >> case AF_INET6: >> Alert("\n AF_INET6 args 2 : %s", args[2]); >> newsrv->hostname = strdup(args[2]); >> break; >> default: >> goto skip_name_resolution; >> } >> Alert("\nparse_server newsrv->hostname: %s", newsrv->hostname); >> >> Result: >> AF_INET6 args 2 : www.google.com:80[ALERT] 249/231349 >> (38431) : >> parse_server newsrv->hostname: www.google.com:80[ALERT] 249/231349 (38431) : >> AF_INET remove the port if any: nu.nl:80[ALERT] 249/231349 (38431) : >> parse_server newsrv->hostname: nu.nl[ALERT] 249/231349 (38431) : >> >> Though the tricky part i guess is that ipv6 can contain ':' in a address to >> but not in a hostname i would think.. Maybe the question should be why have >> the different treatment at all? >> >> I would expect this to not occur on FreeBSD systems.? >> >> Hope it helps investigate the issue. 
>> >> PiBa-NL > > > Hi Piba, > > You're right and I think this bug is not linked to FreeBSD at all :) > I think checking the port1 and port2 variables at this moment of the code > is the best way to go instead of relying on ss_family. > I'll send you a patch later today for testing purposes. > > Baptiste Hi Piba, I was able to reproduce the bug on my Linux machine (that said, I now have a nice FreeBSD 10.2 running in a virtualbox). Please apply the patch in attachment and tell me whether it fixes the issue on your side. Here is the new behavior:

09:28:18.097616 IP 10.0.3.20.40165 > 10.0.1.2.53: 20646+ AAAA? www.google.com. (32)
09:28:18.098140 IP 10.0.1.2.53 > 10.0.3.20.40165: 20646 1/0/0 AAAA 2a00:1450:4007:805::1014 (60)
09:28:18.098852 IP 10.0.3.20.38413 > 8.8.8.8.53: 61174+ ANY? www.google.com. (32)
09:28:18.108185 IP 8.8.8.8.53 > 10.0.3.20.38413: 61174 15/0/0 A 159.180.253.42, A 159.180.253.49, A 159.180.253.30, A 159.180.253.53, A 159.180.253.38, A 159.180.253.57, A 159.180.253.59, A 159.180.253.34, A 159.180.253.27, A 159.180.253.45, A 159.180.253.23, A 159.180.253.15, A 159.180.253.19, A 159.180.253.29, A 159.180.253.44 (272)

Baptiste

From c5eb4a02ecd6392d727ad7a08dbe483358e52643 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann <bed...@gmail.com>
Date: Tue, 8 Sep 2015 09:28:39 +0200
Subject: [PATCH 11/11] MINOR: FIX: hostname parsing error led the TCP port to be part of the hostname sent in DNS resolution

A corner case could lead configuration parsing of a server's hostname to use the TCP port as part of the hostname. This happens when the first resolution from libc points to an IPv6 address. The patch in attachment fixes this by relying on the existence of the TCP port instead of the IP address family before saving the server's hostname.
---
 src/server.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/src/server.c b/src/server.c
index fed180b..d364b5b 100644
--- a/src/server.c
+++ b/src/server.c
@@ -936,23 +936,16 @@ int parse_server(const char *file, int linenum, char **args, struct proxy *curpr
 	}
 
 	/* save hostname and create associated name resolution */
-	switch (sk->ss_family) {
-	case AF_INET: {
-		/* remove the port if any */
+	if (!port1 && !port2) {
+		/* no ports set, means no ':' are used to split IP from PORT */
+		newsrv->hostname = strdup(args[2]);
+	}
+	else {
+		/* port set, means ':' is used to
Re: haproxy resolvers, DNS query not send / result NXDomain not expected
Hi Piba, Finally, Willy fixed it in a different (and smarter) way: http://git.haproxy.org/?p=haproxy.git;a=commit;h=07101d5a162a125232d992648a8598bfdeee3f3f Baptiste
Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)
On Mon, Sep 7, 2015 at 10:07 AM, Dmitry Sivachenko <trtrmi...@gmail.com> wrote: > >> On 7 Sept 2015, at 9:36, Lukas Tribus <luky...@hotmail.com> wrote: >> >> >> >> Best would be to strace this, but this is Freebsd amd64, >> so that doesn't work. Can you trace the syscalls with >> the strace equivalent at least? > > > It fails that way: > > socket(PF_INET,SOCK_DGRAM,17) = 4 (0x4) > connect(4,{ AF_INET 8.8.8.8:53 },128) ERR#22 'Invalid argument' > > The 3rd argument of connect() looks wrong for ipv4: > > ERRORS > The connect() system call fails if: > > [EINVAL] The namelen argument is not a valid length for the > address family. > > Ok, excellent. I wonder how this could happen :) Let me check tonight and come back to you. Baptiste
Re: Question about the status of the connection pool
Hi Aleks, > There is an official docker repository for haproxy > > https://hub.docker.com/_/haproxy/ > > Is anyone from the haproxy community involved in this repo? HAProxy Technologies and Docker Inc are going to work together on this repository in order to let us take the lead on it. Baptiste
Re: Re: haproxy resolvers "nameserver: can't connect socket" (on FreeBSD)
On Mon, Sep 7, 2015 at 12:32 PM, Remi Gacogne <rgaco...@coredump.fr> wrote: > Hi, > > On 09/07/2015 10:47 AM, Baptiste wrote: >>> It fails that way: >>> >>> socket(PF_INET,SOCK_DGRAM,17) = 4 (0x4) >>> connect(4,{ AF_INET 8.8.8.8:53 },128) ERR#22 'Invalid argument' >>> >>> 3rd argument for connect() looks wrong for ipv4: >>> >>> ERRORS >>> The connect() system call fails if: >>> >>> [EINVAL] The namelen argument is not a valid length for the >>> address family. >>> >>> >> >> Ok, excellent. >> I wonder how this could happen :) > > It looks like this code is passing the size of a struct > sockaddr_storage to connect(), instead of the size corresponding to the > underlying socket family. Some OS are forgiving, others not so much :)
>
> diff --git a/src/dns.c b/src/dns.c
> index 4bc5448..f725ff4 100644
> --- a/src/dns.c
> +++ b/src/dns.c
> @@ -819,7 +819,7 @@ int dns_init_resolvers(void)
>  			}
>  
>  			/* "connect" the UDP socket to the name server IP */
> -			if (connect(fd, (struct sockaddr *)&curnameserver->addr, sizeof(curnameserver->addr)) == -1) {
> +			if (connect(fd, (struct sockaddr *)&curnameserver->addr, get_addr_len(&curnameserver->addr)) == -1) {
>  				Alert("Starting [%s/%s] nameserver: can't connect socket.\n", curr_resolvers->id, curnameserver->id);
>  				close(fd);

Thanks a lot Remi!

Piba, could you please check that it works with Remi's change? If yes, I'll send a patch to Willy with the fix.

Baptiste
Re: Question about the status of the connection pool
> Good to hear that the vendors talk together ;-) > > Thanks aleks Well, we're not "vendors": we write open source software and propose services around it (our commercial versions of HAProxy are open source to our customers). And both Docker Inc and HAProxy Technologies can benefit from such an alliance. Baptiste
Re: haproxy resolvers, DNS query not send / result NXDomain not expected
>> Hi, >> >> I wonder why the code sends the TCP port in the DNS query... >> I'm currently installing an opnsense and I'll try to reproduce the >> problem. >> >> I've not used FreeBSD since the 5.4 version :) >> >> Baptiste > > Hi Baptiste, > > it seems ipv4 and ipv6 are handled differently regarding the removal of the > port in server.c, and this is apparently done after the initial dns check has > already been performed, as it knows to treat 'google.com' differently from > 'nu.nl':
>
> /* save hostname and create associated name resolution */
> switch (sk->ss_family) {
> case AF_INET: {
>         /* remove the port if any */
>         Alert("\n AF_INET remove the port if any: %s", args[2]);
>         char *c;
>         if ((c = rindex(args[2], ':')) != NULL) {
>                 newsrv->hostname = my_strndup(args[2], c - args[2]);
>         }
>         else {
>                 newsrv->hostname = strdup(args[2]);
>         }
>         }
>         break;
> case AF_INET6:
>         Alert("\n AF_INET6 args 2 : %s", args[2]);
>         newsrv->hostname = strdup(args[2]);
>         break;
> default:
>         goto skip_name_resolution;
> }
> Alert("\nparse_server newsrv->hostname: %s", newsrv->hostname);
>
> Result:
> AF_INET6 args 2 : www.google.com:80 [ALERT] 249/231349 (38431) :
> parse_server newsrv->hostname: www.google.com:80 [ALERT] 249/231349 (38431) :
> AF_INET remove the port if any: nu.nl:80 [ALERT] 249/231349 (38431) :
> parse_server newsrv->hostname: nu.nl [ALERT] 249/231349 (38431) :
>
> Though the tricky part, I guess, is that ipv6 can contain ':' in an address too, but not in a hostname, I would think. Maybe the question should be why have the different treatment at all?
>
> I would expect this to not occur only on FreeBSD systems?
>
> Hope it helps investigate the issue.
>
> PiBa-NL

Hi Piba,

You're right and I think this bug is not linked to FreeBSD at all :)
I think checking the port1 and port2 variables at this moment of the code is the best way to go instead of relying on ss_family.
I'll send you a patch later today for testing purposes.

Baptiste
Re: haproxy resolvers, DNS query not send / result NXDomain not expected
On Mon, Sep 7, 2015 at 10:12 PM, PiBa-NL <piba.nl@gmail.com> wrote: > Hi Remi and Baptiste / haproxy users, > > Thanks for the quick fix for the socket issues. > > Haproxy now starts successfully and sends some DNS requests successfully. > However the google backend server immediately goes down. > Not sure if it's more or less the same issue reported by Conrad. Tried his > fix but that did not seem to solve the issue. > > See below some tcpdump results with the original haproxy code + Remi's patch. > > The googlesite server is marked down almost immediately after starting. It > does not seem to understand the 'NXDomain' reply? > The testsite2 does not send DNS queries; should it not send a dns query > every 10 seconds? > > Or maybe I'm misinterpreting the 'hold valid' description? > Perhaps you guys could take another look? > > Thanks in advance, best regards, > PiBa-NL > > Same environment as before (p.s. if you want to test it yourself, it's quite > easy to install the OPNsense iso into a virtualbox machine, that's how I'm > testing it). > # uname -a > FreeBSD OPNsense.localdomain 10.1-RELEASE-p18 FreeBSD 10.1-RELEASE-p18 #0 > 71275cd(stable/15.7): Sun Aug 23 20:32:26 CEST 2015 > root@sensey64:/usr/obj/usr/src/sys/SMP amd64 > # haproxy -v > [ALERT] 249/200618 (55609) : SSLv3 support requested but unavailable. > HA-Proxy version 1.6-dev4-b7ce424 2015/09/03 > Copyright 2000-2015 Willy Tarreau <wi...@haproxy.org> > > global > maxconn 100 > defaults > mode http > timeout connect 3 > timeout server 3 > timeout client 3 > resolvers globalresolvers > nameserver googleA 8.8.8.8:53 > resolve_retries 3 > timeout retry 1s > hold valid 10s > listen www > bind 0.0.0.0:81 > log global > server googlesite www.google.com:80 check inter 2000 > resolvers globalresolvers > server testsite2 nu.nl:80 check inter 2000 > resolvers globalresolvers > > 19:42:53.843549 IP 192.168.0.112.44128 > 8.8.8.8.53: 46758+ AAAA? > www.google.com.
(32) > 19:42:53.859410 IP 8.8.8.8.53 > 192.168.0.112.44128: 46758 1/0/0 AAAA > 2a00:1450:4013:c01::93 (60) > 19:42:53.859929 IP 192.168.0.112.42866 > 8.8.8.8.53: 57888+ A? nu.nl. (23) > 19:42:53.877414 IP 8.8.8.8.53 > 192.168.0.112.42866: 57888 1/0/0 A > 62.69.166.254 (39) > 19:42:53.877693 IP 192.168.0.112.54655 > 8.8.8.8.53: 983+ AAAA? nu.nl. (23) > 19:42:53.894598 IP 8.8.8.8.53 > 192.168.0.112.54655: 983 0/1/0 (89) > 19:42:55.907078 IP 192.168.0.112.53716 > 8.8.8.8.53: 21069+ ANY? > www.google.com:80. (35) > 19:42:55.924236 IP 8.8.8.8.53 > 192.168.0.112.53716: 21069 NXDomain 0/1/0 > (110) > 19:42:59.923338 IP 192.168.0.112.53716 > 8.8.8.8.53: 52649+ ANY? > www.google.com:80. (35) > 19:42:59.940424 IP 8.8.8.8.53 > 192.168.0.112.53716: 52649 NXDomain 0/1/0 > (110) > 19:43:03.937163 IP 192.168.0.112.53716 > 8.8.8.8.53: 5746+ ANY? > www.google.com:80. (35) > 19:43:03.955002 IP 8.8.8.8.53 > 192.168.0.112.53716: 5746 NXDomain 0/1/0 > (110) > 19:43:07.957851 IP 192.168.0.112.53716 > 8.8.8.8.53: 32478+ ANY? > www.google.com:80. (35) > 19:43:07.973450 IP 8.8.8.8.53 > 192.168.0.112.53716: 32478 NXDomain 0/1/0 > (110) > 19:43:11.977145 IP 192.168.0.112.53716 > 8.8.8.8.53: 48547+ ANY? > www.google.com:80. (35) > 19:43:11.994878 IP 8.8.8.8.53 > 192.168.0.112.53716: 48547 NXDomain 0/1/0 > (110) > 19:43:16.013370 IP 192.168.0.112.53716 > 8.8.8.8.53: 24088+ ANY? > www.google.com:80. (35) > 19:43:16.01 IP 8.8.8.8.53 > 192.168.0.112.53716: 24088 NXDomain 0/1/0 > (110) > 19:43:20.025739 IP 192.168.0.112.53716 > 8.8.8.8.53: 52900+ ANY? > www.google.com:80. (35) > 19:43:20.041989 IP 8.8.8.8.53 > 192.168.0.112.53716: 52900 NXDomain 0/1/0 > (110) > 19:43:24.038682 IP 192.168.0.112.53716 > 8.8.8.8.53: 28729+ ANY? > www.google.com:80. (35) > 19:43:24.055154 IP 8.8.8.8.53 > 192.168.0.112.53716: 28729 NXDomain 0/1/0 > (110) > 19:43:28.060200 IP 192.168.0.112.53716 > 8.8.8.8.53: 27289+ ANY? > www.google.com:80.
(35) > 19:43:28.076947 IP 8.8.8.8.53 > 192.168.0.112.53716: 27289 NXDomain 0/1/0 > (110) > 19:43:32.077052 IP 192.168.0.112.53716 > 8.8.8.8.53: 54796+ ANY? > www.google.com:80. (35) > 19:43:32.092108 IP 8.8.8.8.53 > 192.168.0.112.53716: 54796 NXDomain 0/1/0 > (110) > 19:43:36.094322 IP 192.168.0.112.53716 > 8.8.8.8.53: 4256+ ANY? > www.google.com:80. (35) > 19:43:36.111877 IP 8.8.8.8.53 > 192.168.0.112.53716: 4256 NXDomain 0/1/0 > (110) > 19:43:40.117106 IP 192.168.0.112.53716 > 8.8.8.8.53: 7297+ ANY? > www.google.com:80. (35) > 19:4
Re: DNS: defaulting resolve-prefer to ipv6 can lead to unexpected results
On 6 Sept 2015 at 18:53, "Conrad Hoffmann" <con...@soundcloud.com> wrote: > > Hi, > > I ran into the following problem: I have a server specified by hostname, > using the new DNS feature. I initially did not specify the "resolve-prefer" > parameter. The initial lookup of the server succeeded and produced an IPv4 > address. Unfortunately, that address was then never updated because of the > following situation: > > - In server.c:1034, resolve-prefer silently defaults to ipv6 > - The DNS server gave no records in response to the ANY query > - Resolvers check resolve-prefer, try again with an AAAA query > - The AAAA query yields no results > > This was a little unexpected, because the initial resolution works just > fine. I think there are at least two possible options to improve this > behaviour: > > - Document the defaulting to ipv6 and only allow ipv6 for the initial > lookup as well (in my scenario, this would have led to failure to > start, leading to me adding resolve-prefer ipv4, an acceptable solution) > - Do not default to ipv6, leave it as unspecified instead. If the ANY query > doesn't produce results, check the current address type if no > resolve-prefer is specified > > The attached patch is merely to demonstrate the latter solution. It worked > for me, but I didn't check too hard if leaving resolver_family_priority set > to AF_UNSPEC might lead to other problems elsewhere. > > Maybe there are even other/better solutions? > > Regards, > Conrad > -- > Conrad Hoffmann > Traffic Engineer > > SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany > > Managing Director: Alexander Ljung | Incorporated in England & Wales > with Company No. 6343600 | Local Branch Office | AG Charlottenburg | > HRB 110657B Hi Conrad, Following some recommendations provided by Jan, I sent a few patches to Willy last week to update this behavior. Since some servers ignore the ANY type, we fail over to the family pointed to by 'resolve-prefer'; if that fails again, we fail over to the remaining family.
The patches also trigger a failover if the server answers with a truncated response. I'll send you the patches by tomorrow. I'll write a patch later to make HAProxy send an OPT record to announce the number of bytes it supports as UDP payload. Baptiste
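Until those patches land, the behavior can be pinned down explicitly in the configuration (a sketch — the hostnames, addresses and section names are placeholders; `resolve-prefer` is the relevant keyword):

```
resolvers mydns
    nameserver dns1 8.8.8.8:53
    resolve_retries 3
    timeout retry   1s
    hold valid      10s

backend be_app
    # Without resolve-prefer, 1.6-dev silently prefers ipv6 for runtime
    # queries, even when the initial libc lookup returned an IPv4 address.
    server app1 app.example.com:80 check resolvers mydns resolve-prefer ipv4
```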
Re: Using getaddrinfo_a on configuration load
On Thu, Sep 3, 2015 at 2:41 AM, Cyrus Hall <cy...@justin.tv> wrote: > Hi! > > I've searched the list and not found much on lengthy HAProxy start/config > load times. We tend to run HAProxy with a large number of backends (100+), > and recently noticed that we are seeing lengthy reload times (20+ seconds). > This is most noticeable in locations that are far away from our DNS server. > We've traced this back to HAProxy doing sequential DNS lookups, one address > at a time, and the long RTT to our DNS servers (~200ms). As such, load > times tend to be N*M ms, where N is the number of backends and M is the RTT > to the DNS server. > > While searching the mailing list, there was a little discussion about using > getaddrinfo_a for making DNS queries asynchronous. I cannot find any signs > of this work in the latest development branch release. Is anyone currently > working on it, and if not, is it something that the project would be > interested in seeing? > > Cheers, > Cyrus > > -- > > Cyrus Hall | Lead Software Engineer | Twitch | 720-327-0344 | > cy...@twitch.tv > > Hi Cyrus, A few weeks ago, I discussed a similar issue in a thread: the initial IP address of a server. The purpose was to discuss the introduction of a new statement to let HAProxy know how to handle DNS resolution of servers. This statement would take multiple values, such as 'libc', 'a.b.c.d' (an arbitrary IP address), 'internal resolver', etc... This feature will fix your issue, combined with the new 'resolvers' feature (and maybe the upcoming 'server-state' one, which keeps server IPs when DNS resolution is enabled on a server). Basically, you'll be able to force an IP to 0.0.0.0, then let HAProxy's internal resolver do the resolution (it is asynchronous and performs multiple resolutions in parallel). To speed up startup, the new server-state feature will apply the last resolved IP to servers which rely on DNS to resolve their IP addresses. All of this should be available in 1.6.
In the meantime, I would recommend using a local DNS cache, such as dnsmasq. Baptiste
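For a feel of why parallel lookups help, here is a minimal sketch (my own illustration, not project code): it resolves a set of names concurrently with one thread per name, which is the effect glibc's getaddrinfo_a() provides natively; total wall time becomes roughly one DNS RTT instead of N.

```c
#include <assert.h>
#include <netdb.h>
#include <pthread.h>
#include <stdio.h>

/* One lookup job per backend hostname; "localhost" stands in here for
 * real backend names so the example works without network access. */
struct lookup {
    const char *name;
    int err;                 /* getaddrinfo() return code */
};

static void *resolve_one(void *arg)
{
    struct lookup *l = arg;
    struct addrinfo *res = NULL;

    l->err = getaddrinfo(l->name, "80", NULL, &res);
    if (l->err == 0)
        freeaddrinfo(res);
    return NULL;
}

int main(void)
{
    struct lookup lookups[] = { { "localhost", -1 }, { "localhost", -1 } };
    pthread_t tids[2];
    int i;

    /* All queries are in flight at once instead of back to back. */
    for (i = 0; i < 2; i++)
        pthread_create(&tids[i], NULL, resolve_one, &lookups[i]);
    for (i = 0; i < 2; i++)
        pthread_join(tids[i], NULL);
    for (i = 0; i < 2; i++)
        assert(lookups[i].err == 0);
    puts("all resolved");
    return 0;
}
```

Compile with `cc -pthread`. getaddrinfo_a() (GAI_NOWAIT mode, link with -lanl on older glibc) does the same job without managing threads yourself.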
Re: Fix triggering of runtime DNS resolution?
Hi Conrad,

Please use the two patches in attachment.

Baptiste

From c19188e50313616833f0a6b3d5b1373c8f5bac78 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann <bed...@gmail.com>
Date: Thu, 3 Sep 2015 10:59:39 +0200
Subject: [PATCH 02/10] MINOR: BUGFIX: DNS resolution doesn't start

Patch f046f1156149d3d8563cc45d7608f2c42ef5b596 introduced a regression: DNS resolution doesn't start anymore, while it was supposed to make it start with the first health check. The current patch fixes this issue by triggering a new DNS resolution if the last_resolution time is not set.

---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index e386bee..3fb166b 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2155,7 +2155,7 @@ static struct task *process_chk(struct task *t)
 	 * if there has not been any name resolution for a longer period than
 	 * hold.valid, let's trigger a new one.
 	 */
-	if (tick_is_expired(tick_add(resolution->last_resolution, resolution->resolvers->hold.valid), now_ms)) {
+	if (!resolution->last_resolution || tick_is_expired(tick_add(resolution->last_resolution, resolution->resolvers->hold.valid), now_ms)) {
 		trigger_resolution(s);
 	}
 }
-- 
2.5.0

From 9112dd30064172129af5dbb5ced8f02027075566 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann <bed...@gmail.com>
Date: Thu, 3 Sep 2015 10:55:20 +0200
Subject: [PATCH 01/10] MINOR: dns_resolution structure update: time_t to unsigned int

3 variables of the dns_resolution structure are of the 'time_t' type. Since they are all set from 'now_ms' and used as 'ticks' in HAProxy's internals, it is safer to set them to the same type as now_ms: 'unsigned int'.
---
 include/types/dns.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/types/dns.h b/include/types/dns.h
index d5431a5..d0a835d 100644
--- a/include/types/dns.h
+++ b/include/types/dns.h
@@ -151,9 +151,9 @@ struct dns_resolution {
 	char *hostname_dn;		/* server hostname in domain name label format */
 	int hostname_dn_len;		/* server domain name label len */
 	int resolver_family_priority;	/* which IP family should the resolver use when both are returned */
-	time_t last_resolution;		/* time of the lastest valid resolution */
-	time_t last_sent_packet;	/* time of the latest DNS packet sent */
-	time_t last_status_change;	/* time of the latest DNS resolution status change */
+	unsigned int last_resolution;	/* time of the lastest valid resolution */
+	unsigned int last_sent_packet;	/* time of the latest DNS packet sent */
+	unsigned int last_status_change;	/* time of the latest DNS resolution status change */
 	int query_id;			/* DNS query ID dedicated for this resolution */
 	struct eb32_node qid;		/* ebtree query id */
 	int query_type;			/* query type to send. By default DNS_RTYPE_ANY */
-- 
2.5.0
Re: Fix triggering of runtime DNS resolution?
On Thu, Sep 3, 2015 at 1:11 AM, Baptiste <bed...@gmail.com> wrote: > On Thu, Sep 3, 2015 at 12:56 AM, Conrad Hoffmann <con...@soundcloud.com> > wrote: >> Hello, >> >> it's kind of late and I am not 100% sure I'm getting this right, so would >> be great if someone could double-check this: >> >> Essentially, the runtime DNS resolution was never triggered for me. I >> tracked this down to a signed/unsigned problem in the usage of >> tick_is_expired() from checks.c:2158. >> >> curr_resolution->last_resolution is being initialized to zero >> (server.c:981), which in turn makes it say a few thousand after the value >> of hold.valid is added (also checks.c:2158). It is then compared to now_ms, >> which is an unsigned integer so large that it is out of the signed integer >> range. Thus, the comparison will not get the expected result, as it is done >> on integer values (now_ms cast to integer gave e.g. -1875721083 a few >> minutes ago, which is undeniably smaller then 3000). >> >> One way to fix this is to initialize curr_resolution->last_resolution to >> now_ms instead of zero (attached "patch"), but then it only works because >> both values are converted to negative integers. While I think that this >> will reasonably hide the problem for the time being, I do think there is a >> deeper problem here, which is the frequent passing of an unsigned integer >> into a function that takes signed int as argument. >> >> I see that tick_* is used all over the place, so I thought I would rather >> consult someone before spending lots of time creating a patch that would >> not be used. Also, I would need some more time to actually figure out what >> the best solution would be. >> >> Does anyone have any thoughts on this? Is someone maybe already aware of >> this? >> >> Thanks a lot, >> Conrad >> -- >> Conrad Hoffmann >> Traffic Engineer >> >> SoundCloud Ltd. | Rheinsberger Str. 
76/77, 10115 Berlin, Germany
>>
>> Managing Director: Alexander Ljung | Incorporated in England & Wales
>> with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
>> HRB 110657B
>
> Hi Conrad,
>
> I noticed this as well.
> Please apply the patch in attachment and confirm it fixes this issue.
>
> I introduced this bug while trying to fix another one: DNS resolution
> was supposed to start with the first health check.
> Unfortunately, it started only after the hold.valid period following
> HAProxy's start time.
>
> Please confirm that the attached patch fixes this and that DNS queries
> are correctly sent at startup (and later).
>
> Baptiste

Hi Conrad,

Please note the patch in my previous mail is not the definitive one. I started a private thread with Willy right before your mail to discuss this point, and I'll send the definitive patch today.

Baptiste
Re: Fix triggering of runtime DNS resolution?
On Thu, Sep 3, 2015 at 12:56 AM, Conrad Hoffmann <con...@soundcloud.com> wrote: > Hello, > > it's kind of late and I am not 100% sure I'm getting this right, so would > be great if someone could double-check this: > > Essentially, the runtime DNS resolution was never triggered for me. I > tracked this down to a signed/unsigned problem in the usage of > tick_is_expired() from checks.c:2158. > > curr_resolution->last_resolution is being initialized to zero > (server.c:981), which in turn makes it say a few thousand after the value > of hold.valid is added (also checks.c:2158). It is then compared to now_ms, > which is an unsigned integer so large that it is out of the signed integer > range. Thus, the comparison will not get the expected result, as it is done > on integer values (now_ms cast to integer gave e.g. -1875721083 a few > minutes ago, which is undeniably smaller then 3000). > > One way to fix this is to initialize curr_resolution->last_resolution to > now_ms instead of zero (attached "patch"), but then it only works because > both values are converted to negative integers. While I think that this > will reasonably hide the problem for the time being, I do think there is a > deeper problem here, which is the frequent passing of an unsigned integer > into a function that takes signed int as argument. > > I see that tick_* is used all over the place, so I thought I would rather > consult someone before spending lots of time creating a patch that would > not be used. Also, I would need some more time to actually figure out what > the best solution would be. > > Does anyone have any thoughts on this? Is someone maybe already aware of this? > > Thanks a lot, > Conrad > -- > Conrad Hoffmann > Traffic Engineer > > SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany > > Managing Director: Alexander Ljung | Incorporated in England & Wales > with Company No. 
6343600 | Local Branch Office | AG Charlottenburg | > HRB 110657B

Hi Conrad,

I noticed this as well. Please apply the patch in attachment and confirm it fixes this issue.

I introduced this bug while trying to fix another one: DNS resolution was supposed to start with the first health check. Unfortunately, it started only after the hold.valid period following HAProxy's start time.

Please confirm that the attached patch fixes this and that DNS queries are correctly sent at startup (and later).

Baptiste

From 06ec4730a0ed3fd5e7395d2bac907a60b62f2557 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann <bed...@gmail.com>
Date: Wed, 2 Sep 2015 22:25:50 +0200
Subject: [PATCH] MINOR: FIX: DNS resolution doesn't start

Patch f046f1156149d3d8563cc45d7608f2c42ef5b596 introduced a regression:
DNS resolution doesn't start anymore, while it was supposed to make it
start with the first health check.
The current patch fixes this issue with another method: last_resolution
is set to now_ms - hold.valid - 1 when parsing HAProxy's configuration
file. So at the first check, last_resolution is old enough to trigger a
new resolution.
---
 src/cfgparse.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 6e2bcd7..fc7f0eb 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -8052,8 +8052,10 @@ out_uri_auth_compat:
 			} else {
 				free(newsrv->resolvers_id);
 				newsrv->resolvers_id = NULL;
-				if (newsrv->resolution)
+				if (newsrv->resolution) {
 					newsrv->resolution->resolvers = curr_resolvers;
+					newsrv->resolution->last_resolution = tick_add(now_ms, -1 - newsrv->resolution->resolvers->hold.valid);
+				}
 			}
 		} else {
--
2.5.0
Re: HAProxy - How to filter (all) Headers by Regex
On 28 August 2015 at 06:31, Firman Gautama firman.gaut...@gmail.com wrote: Hello All, I was wondering what the best way is to filter all headers with a certain regex, to block invalid/malicious characters. I read the documentation (CMIIW), but the example there only covers the case where the specific header name is known. Does anybody know how to filter all the HTTP headers with a specific regex, so we could discard all the traffic with invalid headers and only forward the good ones? Regards, Firman Gautama

Hi Firman, This is already haproxy's default behavior. Do you have an example of a 'weird' character which passed through? Baptiste
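If an explicit filter is still wanted on top of the default parsing, HAProxy 1.5's `reqideny` applies a case-insensitive regular expression to the request line and to every header line, denying the request on a match. The sketch below uses an assumed policy (rejecting control characters) and PCRE character-class syntax, so it requires a PCRE-enabled build:

```
frontend ft_web
    bind :80
    # deny any request whose request line or any header line contains
    # a control character (assumed policy; PCRE \xNN escapes)
    reqideny [\x00-\x08\x0b\x0c\x0e-\x1f]
    default_backend bk_web
```

Because `reqideny` runs against each request line independently, one rule covers every header without naming any of them.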
Re: Health check and flapping
On 28 August 2015 at 15:45, dra...@tinet.fr wrote: Hello, We have tcp-check configured on some backends, which works fine except when the service is flapping. If the backend server is in a transitional state, for example transitionally DOWN (going up), the counter is not reset to 0 when tcp-check returns a KO state between some OK states. The result is that if the service is flapping, the backend comes up for a few seconds quite often, even though the OK states are not consecutive. Example of a sequence with rise 3:

KO - 0/3
KO - 0/3
OK - 1/3
KO - 1/3 - should go back to 0/3
KO - 1/3
KO - 1/3
OK - 2/3
KO - 2/3
KO - 2/3
OK - 3/3 - Server UP

Is there a way to configure the counter to reset itself in case of flapping? Thanks.

Hi there, Thanks for reporting this behavior. I'll have a look and come back to you. Baptiste
Re: getting transparent proxy to work.
Hi Rich, That's why I wanted to fix your issue step by step; I didn't want to add too much complexity in the first step. The question you're asking corresponds to the last step. As Igor mentioned, you should use keepalived to create a VIP which will be used as the default gateway by your web servers. You can simply use any of the VIPs handling the web traffic. Baptiste On Thu, Aug 27, 2015 at 4:25 AM, Igor Cicimov ig...@encompasscorporation.com wrote: Obviously you need to have a separate VIP for 10.10.130.30 and 10.10.130.31 and use that as the DGW on the backend servers. On Thu, Aug 27, 2015 at 9:24 AM, Rich Vigorito ri...@ocp.org wrote: Regarding setting up the default gateway on the webservers: I'm confused about how that would work in a load-balanced haproxy environment with keepalived. Attached is our diagram of the haproxy/webserver architecture. When it says to have the default gateway point back to haproxy, does that mean the VIP or the haproxy box IP? If the default gateway is the VIP, how would that work when there are multiple VIPs? If the default gateway is the haproxy box, how would that work in a failover? I wouldn't assume our setup is unique, because most people use haproxy for more than one website, and most have haproxy load balanced with keepalived or pacemaker or something along those lines. Thanks in advance, --Rich -- *From:* Bryan Talbot bryan.tal...@ijji.com *Sent:* Thursday, August 20, 2015 4:27 PM *To:* Rich Vigorito *Cc:* Bryan Talbot; Baptiste; HAProxy *Subject:* Re: getting transparent proxy to work. On Thu, Aug 20, 2015 at 4:05 PM, Rich Vigorito ri...@ocp.org wrote: Reading this: http://blog.haproxy.com/2012/06/05/preserve-source-ip-address-despite-reverse-proxies/ about the PROXY protocol: what needs to happen for the PROXY protocol to be recognized by the web server? The webserver needs to support it.
There is a (probably incomplete) list here: http://blog.haproxy.com/haproxy/proxy-protocol/ Im assuming the haproxy server already does? Yes, of course. -Bryan -- Igor Cicimov | DevOps p. +61 (0) 433 078 728 e. ig...@encompasscorporation.com http://encompasscorporation.com/ w*.* encompasscorporation.com a. Level 4, 65 York Street, Sydney 2000
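The keepalived gateway VIP suggested earlier in this thread can be sketched roughly as below. The address 10.10.130.80, the interface name, and the router id are assumptions for illustration, not values from Rich's setup:

```
vrrp_instance GATEWAY_VIP {
    state BACKUP          # both nodes BACKUP; nopreempt avoids flap-back
    interface eth0
    virtual_router_id 51
    priority 100
    nopreempt
    virtual_ipaddress {
        # a dedicated VIP used only as the web servers' default gateway;
        # it moves to whichever HAProxy node is currently active
        10.10.130.80/24
    }
}
```

The web servers then point their default gateway at 10.10.130.80, so return traffic always flows through the active HAProxy node even after a failover.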
Re: getting transparent proxy to work.
On Tue, Aug 18, 2015 at 6:19 PM, Rich Vigorito ri...@ocp.org wrote: After changing the default gateway of the web servers to 10.10.130.79 this didnt fix it. The site we were testing on, and then all the other sites as well were unresponsive. So what I was unclear on is if we changed the default gateway to the vip of the test site we were using on the web server, how would the other web sites served from the box work. We have 4 sites on that box all w/ different VIPs for each. So we expected the other sites to fail and perhaps the test site to succeed but this wasnt the case. In the case of the test site traffic was getting to the web server to haproxy but not returning to either haproxy or the workstation making the request. Id just like to clarify I few of my assumptions about this doc: http://blog.haproxy.com/2013/09/16/howto-transparent-proxying-and-binding-with-haproxy-and-aloha-load-balancer/ Linux Kernel requirements You have to ensure your kernel has been compiled with the following options: – CONFIG_NETFILTER_TPROXY – CONFIG_NETFILTER_XT_TARGET_TPROXY this to be done on haproxy boxes (not the webservers), ie: [richv@haproxy2 ~]$ lsmod | grep -i tproxy xt_TPROXY 17327 0 nf_defrag_ipv6 34651 2 xt_socket,xt_TPROXY nf_defrag_ipv4 12729 3 xt_socket,xt_TPROXY,nf_conntrack_ipv4 and: [richv@haproxy2 ~]$ grep -i tproxy /boot/* /boot/config-3.10.0-229.4.2.el7.x86_64:CONFIG_NETFILTER_XT_TARGET_TPROXY=m ** note, im using centos 7. in boot file i see CONFIG_NETFILTER_XT_TARGET_TPROXY in lsmod output only see xt_TPROXY. This is correct, I should see both CONFIG_NETFILTER_TPROXY CONFIG_NETFILTER_XT_TARGET_TPROXY in lsmod output or boot file? 
sysctl settings The following sysctls must be enabled: – net.ipv4.ip_forward – net.ipv4.ip_nonlocal_bind this is to be done on the haproxy boxes (not the webservers), ie:

[richv@haproxy2 ~]$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

--- iptables rules You must set up the following iptables rules: iptables -t mangle -N DIVERT iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT iptables -t mangle -A DIVERT -j MARK --set-mark 1 iptables -t mangle -A DIVERT -j ACCEPT this is to be done on the haproxy boxes (not the webservers), ie:

haproxy2 sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source      destination
DIVERT tcp  --  0.0.0.0/0   0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source      destination
MARK   all  --  0.0.0.0/0   0.0.0.0/0   MARK set 0x1
ACCEPT all  --  0.0.0.0/0   0.0.0.0/0

IP route rules Then, tell the Operating System to forward packets marked by iptables to the loopback where HAProxy can catch them: ip rule add fwmark 1 lookup 100 ip route add local 0.0.0.0/0 dev lo table 100 this is to be done on the haproxy boxes (not the webservers), ie:

haproxy2 ip rule show
0:     from all lookup local
32762: from all fwmark 0x1 lookup 100
32766: from all lookup main
32767: from all lookup default
haproxy ip route show table 100
local default dev lo scope host

In summary for my setup, everything in that tutorial is to be performed on the haproxy box, not the web servers? Hi Rich, This has to be performed on the HAProxy box only. On your web servers, you must change the default gateway to your HAProxy box. If you did all of this and it is still not working, then it deserves a deeper analysis of your whole platform with hands on the servers. Baptiste
Re: http-response add-header and stats enable
On Mon, Aug 17, 2015 at 10:35 AM, Lukas Erlacher erlac...@in.tum.de wrote: Hi Lukas, Actually, you're setting response headers with data available only at the request time. This is not possible in HAProxy 1.5 This will be possible in HAProxy 1.6 using the capture statement. Baptiste Hi, thanks for that info. Is there any way to make haproxy tell me these things? Luke Hi Luke, As I said, with the capture statement: http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#http-request Look for 'capture' keyword. Baptiste
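In HAProxy 1.6 syntax, the pattern Baptiste describes looks roughly like the sketch below (certificate paths are placeholders): the SSL sample is captured into a slot at request time, while the connection data is still available, and the slot is read back when the response headers are built:

```
frontend ft_stats
    mode http
    bind :443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required
    # evaluated at request time: store the client certificate CN
    # in request capture slot 0
    http-request capture ssl_c_s_dn(cn) len 64
    # evaluated at response time: the capture slot is still readable
    http-response add-header X-SSL-Client-CN %[capture.req.hdr(0)]
    default_backend bk_stats
```

Each `http-request capture` declaration gets the next slot index, so additional fields (e.g. ssl_c_s_dn(emailAddress)) would be read back as capture.req.hdr(1), and so on.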
Re: Infinite timeout
I would say yes, but better let Willy answer this question. Note: it is very dangerous to do this! Baptiste On Wed, Aug 19, 2015 at 9:18 AM, mihaly.vukov...@t-systems.com wrote: hi, thanks for the answer, I will try that. One question is still open: does setting a timeout to 0 mean infinite, or what? Cheers, Mihaly From: Baptiste [mailto:bed...@gmail.com] Sent: Tuesday, August 18, 2015 12:50 PM To: Vukovics, Mihaly Cc: HAProxy Subject: Re: Infinite timeout On 18 August 2015 at 10:41, mihaly.vukov...@t-systems.com wrote: Hello All, we need to set an infinite timeout for a specific listener; the docs say that an infinite timeout can be set up by not defining the timeout value at all. That means I have to remove the default options and define the timeouts explicitly in the other listeners. My question, which I have not found in the docs: what if I set the timeout to 0 (zero)? Is that equal to infinite? That would mean I could set default values and set the timeout to 0 in one specific listener block. Best Regards, Mihály Vukovics Hi, You could also set up 2 defaults sections: one with timeouts, one without. Baptiste
Re: Infinite timeout
On 18 August 2015 at 10:41, mihaly.vukov...@t-systems.com wrote: Hello All, we need to set an infinite timeout for a specific listener; the docs say that an infinite timeout can be set up by not defining the timeout value at all. That means I have to remove the default options and define the timeouts explicitly in the other listeners. My question, which I have not found in the docs: what if I set the timeout to 0 (zero)? Is that equal to infinite? That would mean I could set default values and set the timeout to 0 in one specific listener block. Best Regards, Mihály Vukovics Hi, You could also set up 2 defaults sections: one with timeouts, one without. Baptiste
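Baptiste's suggestion of two defaults sections can be sketched like this (ports and addresses are placeholders). A defaults section applies to the proxy sections that follow it, and each new defaults section resets everything for the sections below it:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen regular_service
    bind :8080
    server s1 192.0.2.10:80

# a fresh defaults section: no timeouts here, so the listeners below
# inherit none and run with infinite timeouts (HAProxy warns at startup)
defaults
    mode http

listen long_lived_service
    bind :8081
    server s1 192.0.2.11:80
```

This avoids having to repeat explicit timeouts in every ordinary listener just to leave one listener without them.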
Re: http-request set-nice
On Mon, Aug 17, 2015 at 4:13 PM, Andrew Hayworth andrew.haywo...@getbraintree.com wrote: Hi all - I've been tweaking our HAProxy config here, and we have a desire to slow down (but not tarpit) requests that begin coming in at an extraordinarily high rate. I was looking into the 'http-{request,response} set-nice' directive, but I can't seem to find much in the way of documentation, nor people on the Internet using it successfully. My main question is this: what exactly does set-nice do? What priority does it influence, and when (before frontend processing? during frontend processing? in backend processing but before dispatch? etc). I appreciate your collective wisdom. :) -- - Andrew Hayworth Hi Andrew, set-nice won't achieve what you want to do. The purpose of set-nice is to give some requests priority over others for IO processing. So basically, if you're not saturating your HAProxy, set-nice may have a negligible impact for you. In order to slow down abusers, you can redirect them to a backend where your servers have a very low maxconn (say, 1)... In 1.6, you'll be able to pause those users using a Lua script: http://godevops.net/2015/06/24/adding-random-delay-specific-http-requests-haproxy-lua/ Baptiste
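The low-maxconn backend Baptiste suggests might look like this sketch (server addresses and the request-rate threshold are made up for the example). Requests from abusers queue on the server's maxconn instead of being rejected, which slows them down without tarpitting:

```
frontend ft_web
    mode http
    bind :80
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    use_backend bk_slow if { sc0_http_req_rate gt 250 }
    default_backend bk_fast

backend bk_fast
    mode http
    server s1 192.0.2.10:80 maxconn 128

backend bk_slow
    mode http
    # same server, but only one in-flight request at a time: abusers'
    # requests are serialized in the queue rather than dropped
    timeout queue 10s
    server s1 192.0.2.10:80 maxconn 1
```

Tuning maxconn and timeout queue trades off how much the abusers are slowed against how many of their requests eventually get a 503 from queue expiry.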
Re: Distinguishing multiple ssl sites
Search for use-backend here: http://cbonte.github.io/haproxy-dconv/configuration-1.5.html Baptiste On Mon, Aug 17, 2015 at 6:49 PM, Roman Gelfand rgelfa...@gmail.com wrote: I do decipher traffic at haproxy. Could you point me to a sample. On Mon, Aug 17, 2015 at 12:44 PM Baptiste bed...@gmail.com wrote: On Mon, Aug 17, 2015 at 5:13 PM, Roman Gelfand rgelfa...@gmail.com wrote: haproxy is a reverse proxy for 3 distinct (different urls) ssl sites pointing to the same ip address. One of the sites does a post to relative path or urn. Is there a way to retrieve the url of the urn or relative path so it could be used in a rule pointing to the backend? Thanks in advance If you decipher the traffic at HAProxy layer, yes. Baptiste
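Since the frontend already terminates SSL (a `bind ... ssl crt` line), the decoded method and path are available to ACLs; a sketch with assumed names and paths:

```
frontend ft_https
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem
    # traffic is deciphered here, so layer-7 fetches (path, method, ...)
    # can drive the backend choice
    acl is_post     method POST
    acl is_api_path path_beg /api/
    use_backend bk_api if is_post is_api_path
    default_backend bk_web
```

The same approach extends to routing the three sites by Host header (`hdr(host)`) when they share one IP address.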
Re: Distinguishing multiple ssl sites
On Mon, Aug 17, 2015 at 5:13 PM, Roman Gelfand rgelfa...@gmail.com wrote: haproxy is a reverse proxy for 3 distinct (different urls) ssl sites pointing to the same ip address. One of the sites does a post to relative path or urn. Is there a way to retrieve the url of the urn or relative path so it could be used in a rule pointing to the backend? Thanks in advance If you decipher the traffic at HAProxy layer, yes. Baptiste
Re: http-response add-header and stats enable
On Mon, Aug 17, 2015 at 9:54 AM, Lukas Erlacher erlac...@in.tum.de wrote: Hello, I'm a new haproxy user (using haproxy 1.5) and I'm running into a few hitches. I made a stats backend: backend bk_stats log global mode http stats enable stats uri / stats scope ft_submission stats scope bk_postfix And because I wanted to have users authed by ssl client certificate, I put some http-response add-header statements into the frontend for debugging: frontend ft_stats log global mode http bind 131.159.42.4:443 ssl crt myserver.combined.key.pem ca-file mycafile.pem verify required no-sslv3 no-tlsv10 no-tlsv11 http-response add-header X-SSL-Client-CN %[ssl_c_s_dn(cn)] http-response add-header X-SSL-Client-E %[ssl_c_s_dn(emailAddress)] http-response add-header X-SSL-Client-DN %[ssl_c_s_dn] acl cn_allowed ssl_c_s_dn(emailAddress) -f /etc/haproxy/haproxy_admins #acl cn_allowed always_true use_backend bk_ssl_error unless cn_allowed default_backend bk_stats However, these headers won't show up in the response. They also won't show up if I put the add-header statements into the backend. It seems that stats enable disregards http-response lines. There is a stats http-request option but that doesn't allow adding any headers. As a workaround I just shimmed in another frontend and backend where I put the http-request add-header lines. [1] I believe that this is a bug, at least in the way that nothing in the documentation hints that http-request add-header in a /frontend/ will be ignored if the /backend/ has stats enabled. In fact, the documentation for http-response [2] states Since these rules apply on responses, the backend rules are applied first, followed by the frontend's rules. So whatever response the backend delivers to the frontend should have no influence on the headers being added by the frontend. Can anyone more experienced with haproxy tell me if this is really a bug or if I am just doing something wrong? 
Best regards, Luke [1] http://ix.io/kiO [2] https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-http-response Hi Lukas, Actually, you're setting response headers with data available only at the request time. This is not possible in HAProxy 1.5 This will be possible in HAProxy 1.6 using the capture statement. Baptiste
Re: HAProxy reports L4TOUT in 2001ms even if using mode HTTP
On Sat, Aug 15, 2015 at 7:36 PM, Vijay vijay6...@gmail.com wrote: HAProxy reports a backend as down even though it is alive, with the status L4TOUT in 2001ms (stats page). Version 1.5.2-2.el6. The backend is healthy for about 14-15 hours and then goes down with the error L4TOUT. Below is the configuration:

mode http
option httpchk GET /abc/__status
server APP_ELB internal-ELB.us-east-1.elb.amazonaws.com:80 maxconn 150 check

I can successfully do a GET on the backend from the HAProxy instance:

# wget internal-ELB.us-east-1.elb.amazonaws.com/abc/__status
--2015-08-15 13:29:47-- http://internal-ELB.us-east-1.elb.amazonaws.com/abc/__status
Resolving internal-ELB.us-east-1.elb.amazonaws.com... 10.x.x.x, 10.x.x.x
Connecting to internal-ELB.us-east-1.elb.amazonaws.com|10.x.x.x|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1239 (1.2K) [text/plain]
Saving to: “__status.2”
100%[] 1,239 --.-K/s in 0s
2015-08-15 13:29:47 (155 MB/s) - “__status.2” saved [1239/1239]

Any known issues or bugs? An HAProxy reload is enough to get it back alive. Hi Vijay, This is because your ELB has changed its IP address (this is by design). You have to run HAProxy 1.6, which includes DNS resolution of server IPs. That way, you won't have to reload HAProxy each time the ELB changes its IP address; HAProxy will resolve it automatically for you. Baptiste
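The 1.6 mechanism Baptiste refers to is the `resolvers` section, attached to the server via the `resolvers` keyword; a sketch (the nameserver address is a placeholder, typically the VPC DNS in an AWS setup):

```
resolvers vpc_dns
    nameserver dns1 10.0.0.2:53
    hold valid 10s

backend apiservers
    mode http
    option httpchk GET /abc/__status
    # the hostname is re-resolved at runtime, so an ELB IP change
    # no longer requires an HAProxy reload
    server APP_ELB internal-ELB.us-east-1.elb.amazonaws.com:80 maxconn 150 check resolvers vpc_dns
```

`hold valid` controls how long a previous answer is trusted before a new resolution is triggered, so it bounds how long a stale ELB address can linger.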
Re: HAProxy for Static IP redundancy
On Sun, Aug 16, 2015 at 3:20 PM, Mitchell Gurspan mitch...@visualjobmatch.com wrote: Hi – Would you be able to tell me if HAProxy can be used to solve the following problem? I host an IIS 7.5 Windows site on a Comcast business static IP (in office). The internet goes down sometimes and I'd like redundancy. I can't find the proper way to add a second internet provider/static IP for failover when the primary line goes down. I thought maybe DNS round robin, but it looks like an IIS site cannot have multiple bindings for this. Any thoughts? Is there a standard architecture or method for internet connectivity redundancy for one website on one server? Cost is an issue. Thanks! Mitchell Visualjobmatch.com Hi, Simply order a couple of ADSL/cable lines from two providers. Set up your HAProxy with 2 network interfaces (one per ISP), one default gateway for each, and 2 bind lines. HAProxy then simply proxies requests to the server. To ensure HA over your 2 ISPs, you can use a GSLB DNS service, such as Route 53 from Amazon, to point to HAProxy. To learn how to configure your NICs, your gateways and your HAProxy, please read the following article: http://blog.haproxy.com/2014/02/13/asymmetric-routing-multiple-default-gateways-on-linux-with-haproxy/ Baptiste
Re: Regarding using HAproxy for rate limiting
Hi Amol, For example, this one: # Shut the new connection as long as the client has already 40 opened tcp-request connection reject if { src_conn_cur ge 40 } Should be written # Shut the new connection as long as the client has already 40 opened tcp-request connection reject if { sc0_conn_cur ge 40 } Baptiste On Mon, Aug 17, 2015 at 4:53 AM, Amol mandm_z...@yahoo.com wrote: Hi Baptiste, I tried to read about SC0 and SRC, but i am not quite sure what i would gain by changing SRC to SCO for the acl paramters? did u have some example to explain? Thanks From: Amol mandm_z...@yahoo.com To: Baptiste bed...@gmail.com Cc: HAproxy Mailing Lists haproxy@formilux.org Sent: Friday, August 14, 2015 2:06 PM Subject: Re: Regarding using HAproxy for rate limiting Hi Baptiste, Yes sorry i might have confused you with some questions but to answer your questions here, the question is: what kiils your server exactly? A high number of queries from a single users or whatever the number of users? I'm trying to understand what you need... Yes i am trying to protect against high number of requests from a single user who can use API's or even mis-configure API's to generate high load. 
reposting the configuration frontend www-https bind xx.xx.xx.xx:443 ssl crt .pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets # Table definition stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s) # Allow clean known IPs to bypass the filter tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst } # this is sending data defined in the stick-table and storing it the stick-table since by default nothing is restored in it tcp-request connection track-sc0 src # Shut the new connection as long as the client has already 40 opened tcp-request connection reject if { src_conn_cur ge 40 } # if someone has more than 40 connections in over a period of 3 seconds, REJECT tcp-request connection reject if { src_conn_rate ge 40 } # tracking connections that are not rejected from clients that don't have 10 connections/don't have 10 connections/3 seconds #tcp-request connection reject if { src_get_gpc0 gt 0 } acl abuse_err src_http_err_rate ge 10 acl flag_abuser_err src_inc_gpc0 ge 0 acl abuse src_http_req_rate ge 250 #acl flag_abuser src_inc_gpc0 ge 0 #tcp-request content reject if abuse_err flag_abuser_err #tcp-request content reject if abuse flag_abuser use_backend backend_slow_down if abuse flag_abuser use_backend backend_slow_down if abuse_err flag_abuser_err default_backend www-backend backend www-backend balance leastconn cookie BALANCEID insert indirect nocache secure httponly option httpchk HEAD /xxx.php HTTP/1.0 redirect scheme https if !{ ssl_fc } server A1 xx.xx.xx.xx:80 cookie A check server A2 yy.yy.yy.yy:80 cookie B check backend backend_slow_down timeout tarpit 2s errorfile 500 /etc/haproxy/errors/429.http http-request tarpit -- Yes i will check out the difference between SC0 and SRC paramters in config regarding this . 
What i am doing here is that if the http_req_rate 250 then i want to send them to a another backend which gives them a rate limiting message or if the number of concurrent connections are 4, then i want to rate limit their usage and allow on 40 connections to come in. i was trying to make 2 points i guess i should have been more clear... So i was saying that based on my config i am trying to achieve 2 things 1) to rate limit a client with high number of http requests in a certain time span (http_req_rate) 2) to rate limit a client with high number of concurrent connections in the certain time span. (src_conn_cur and src_conn_rate ) Thanks once again for looking into this. From: Baptiste bed...@gmail.com To: Amol mandm_z...@yahoo.com Cc: HAproxy Mailing Lists haproxy@formilux.org Sent: Friday, August 14, 2015 1:40 PM Subject: Re: Regarding using HAproxy for rate limiting Hi Amol, On Fri, Aug 14, 2015 at 4:16 PM, Amol mandm_z...@yahoo.com wrote: Hello, I am been trying to configure my Haproxy for rate limiting our customer usage, and wanted to know/understand some of my options what i am trying to achieve is to throttle any clients requests/api calls that can take lead to high load and can kill my servers. here, the question is: what kiils your server exactly? A high number of queries from a single users or whatever the number of users? I'm trying to understand what you need... First of all here is my configuration i have so far from reading a few articles frontend www-https bind xx.xx.xx.xx:443 ssl crt .pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets # Table definition stick
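The correction Baptiste describes at the top of this thread — reading the counters back through the sc0_* fetches once `track-sc0` is in place, instead of performing fresh src lookups — can be sketched like this (thresholds kept from the posted config):

```
frontend www-https
    stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)
    # Allow clean known IPs to bypass the filter
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    # start tracking this source in the stick-table
    tcp-request connection track-sc0 src
    # read back through the tracked entry (sc0_*), not via a new
    # src lookup, as Baptiste suggests
    tcp-request connection reject if { sc0_conn_cur ge 40 }
    tcp-request connection reject if { sc0_conn_rate ge 40 }
    acl abuse sc0_http_req_rate ge 250
    use_backend backend_slow_down if abuse
    default_backend www-backend
```

The src_* forms would apply the same pattern to the other ACLs in the posted config (http_err_rate, gpc0) as well.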
Re: IP address ACLs
Hi, There is no performance difference between loading from a file and declaring the entries directly in the config file. That said, if you have multiple ACLs with the same name loading many IPs, then you'll perform as many lookups as you have ACLs, while loading the content from a file performs a single lookup. Anyway, there should not be any noticeable performance impact, since IP lookup is very quick in HAProxy (a few hundred nanoseconds in a tree of 1,000,000 IPs). Concerning comments: any string after a sharp ('#') is considered a comment and not loaded into the ACL. Baptiste On Sat, Aug 15, 2015 at 8:28 AM, Nathan Williams nath.e.w...@gmail.com wrote: We use a file for about 40 CIDR blocks, and don't have any problems with load speed. Presumably "large" means more than that, though. We use comments as well, but they have to be at the beginning of their own line, not tagged on after the address. On Fri, Aug 14, 2015, 9:09 PM CJ Ess zxcvbn4...@gmail.com wrote: When doing a large number of IP-based ACLs in HAProxy, is it more efficient to load the ACLs from a file with the -f argument? Or is it just as good to use multiple ACL statements in the cfg file? If I did use a file with the -f parameter, is it possible to put comments in the file?
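A minimal sketch of the file-based form discussed above (paths, names, and networks are placeholders):

```
frontend ft_web
    bind :80
    # one tree lookup per request, regardless of how many entries
    # the file contains
    acl from_office src -f /etc/haproxy/office_nets.lst
    use_backend bk_admin if from_office
    default_backend bk_public
```

Here /etc/haproxy/office_nets.lst would hold one address or CIDR block per line (e.g. 192.0.2.0/24), with '#' introducing comments; reloading HAProxy picks up changes to the file.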
Re: getting transparent proxy to work.
Temporarily, just for the troubleshooting period, to validate that this is the root of your issue. The definitive solution then belongs to you! Please clarify the rest of your email: I don't understand what IPs or loopbacks you're speaking about. Before going further, please apply the default gateway change and confirm it works afterwards. Baptiste On Thu, Aug 13, 2015 at 10:28 PM, Rich Vigorito ri...@ocp.org wrote: A couple of clarifications. What do you mean by temporary? ... this wouldn't be needed indefinitely? What I've articulated is only one site served through the 2 web servers. Our web servers serve multiple sites; how do we accommodate this? I.e. we couldn't have 5 different IPs in the loopback? From: Baptiste bed...@gmail.com Sent: Wednesday, August 12, 2015 11:41 PM To: Rich Vigorito Cc: HAProxy Subject: Re: getting transparent proxy to work. Hi Rich, so here is your problem. Please temporarily change the default gateway of the web servers to the active VIP: 10.10.130.79. What happens, and what you highlighted in your diagrams, is that HAProxy creates the TCP connection with the client IP. By default, the server tries to talk to the client directly, but the client is not aware of HAProxy's connection and refuses it. If you route your traffic back to HAProxy, then HAProxy will handle this connection and relay it to the real client. More information here: http://blog.haproxy.com/2011/08/03/layer-7-load-balancing-transparent-proxy-mode/ Baptiste On Thu, Aug 13, 2015 at 2:29 AM, Rich Vigorito ri...@ocp.org wrote: No, inside the firewall there is one default gateway: 10.10.130.1. The web servers and haproxy servers have one interface, I believe. Sent from my Verizon Wireless 4G LTE DROID Baptiste bed...@gmail.com wrote: Do you mean your web servers have 2 interfaces, each one with its own default gateway? Baptiste On 12 August 2015 at 23:10, Rich Vigorito ri...@ocp.org wrote: Good to hear. Into the firewall 192.168.0.1 and out of the firewall 10.10.130.1. Thanks!
Sent from my Verizon Wireless 4G LTE DROID Baptiste bed...@gmail.com wrote: Hi Rich, Thanks a lot for this info, this is clearer now. In my first mail, I asked you to provide us the default gateway of the web servers. could you please let us know this information ? Baptiste On Wed, Aug 12, 2015 at 5:54 PM, Rich Vigorito ri...@ocp.org wrote: Also for clarification, the config listed in here is the config i used. The only difference between the 2 tests is removing: source 0.0.0.0 usesrc clientip Removing it loadbalancing works, keeping it in the config, load balancing doesnt work -Rich From: Rich Vigorito ri...@ocp.org Sent: Monday, August 10, 2015 5:22 PM To: Baptiste Cc: haproxy@formilux.org Subject: RE: getting transparent proxy to work. Thanks you very much for all the help, and yes, you were correct about the capture i reported being the health check. attached are 2 pngs. one w/ our simple diagram of network topology and the other being what me and the network admin though was happening in our TCP handshake. This was determined by loading a tcpdump into wireshark. Those 2 files are dump.pcap (Which was on haproxy box) and web1_dump.pcap which was taking on the web server). What is happening is I dont think web server knows how to communicate to back to the haproxy box. the iptables rules and the ip rule and ip route commands from the blog post, in my set up would that be done on the haproxy boxes or the web servers? From: Baptiste bed...@gmail.com Sent: Saturday, August 8, 2015 8:38 AM To: Rich Vigorito Cc: haproxy@formilux.org Subject: Re: getting transparent proxy to work. On Fri, Aug 7, 2015 at 11:05 PM, Rich Vigorito ri...@ocp.org wrote: Hello, this is my first time using the mailing list. I have the following issue. 
Followed the steps to enable transparent proxying outlined here: "Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer" (HAProxy Technologies – Aloha Load Balancer). It will not load balance, however, with the following line added:

source 0.0.0.0 usesrc clientip

Here is all the relevant configuration and setup:

$ lsmod | grep -i tproxy
xt_TPROXY      17327  0
nf_defrag_ipv6 34651  2 xt_socket,xt_TPROXY
nf_defrag_ipv4 12729  3 xt_socket,xt_TPROXY,nf_conntrack_ipv4

$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

$ sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source     destination
DIVERT tcp  --  0.0.0.0/0  0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source     destination
MARK   all  --  0.0.0.0/0  0.0.0.0/0   MARK set 0x1
Re: Regarding using HAproxy for rate limiting
Hi Amol,

On Fri, Aug 14, 2015 at 4:16 PM, Amol mandm_z...@yahoo.com wrote: Hello, I have been trying to configure my Haproxy for rate limiting our customers' usage, and wanted to know/understand some of my options. What I am trying to achieve is to throttle any client requests/API calls that can lead to high load and kill my servers.

Here, the question is: what kills your server exactly? A high number of queries from a single user, or the overall number of users? I'm trying to understand what you need...

First of all, here is the configuration I have so far from reading a few articles:

frontend www-https
    bind xx.xx.xx.xx:443 ssl crt .pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets
    # Table definition
    stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)
    # Allow clean known IPs to bypass the filter
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    # this is sending data defined in the stick-table and storing it the stick-table since by default nothing is restored in it
    tcp-request connection track-sc0 src
    # Shut the new connection as long as the client has already 10 opened
    tcp-request connection reject if { src_conn_cur ge 40 }
    # if someone has more than 100 connections in over a period of 3 seconds, REJECT
    tcp-request connection reject if { src_conn_rate ge 40 }
    # tracking connections that are not rejected from clients that don't have 10 connections/don't have 10 connections/3 seconds
    #tcp-request connection reject if { src_get_gpc0 gt 0 }
    acl abuse_err src_http_err_rate ge 10
    acl flag_abuser_err src_inc_gpc0 ge 0
    acl abuse src_http_req_rate ge 250
    #acl flag_abuser src_inc_gpc0 ge 0
    #tcp-request content reject if abuse_err flag_abuser_err
    #tcp-request content reject if abuse flag_abuser
    use_backend backend_slow_down if abuse
    #use_backend backend_slow_down if flag_abuser
    use_backend backend_slow_down if abuse_err flag_abuser_err
    default_backend www-backend

backend www-backend
    balance leastconn
    cookie BALANCEID insert indirect nocache secure httponly
    option httpchk HEAD /xxx.php HTTP/1.0
    redirect scheme https if !{ ssl_fc }
    server A1 xx.xx.xx.xx:80 cookie A check
    server A2 yy.yy.yy.yy:80 cookie B check

backend backend_slow_down
    timeout tarpit 2s
    errorfile 500 /etc/haproxy/errors/429.http
    http-request tarpit

You should use the sc0_conn_* fetches instead of src_conn_*, since you're tracking over sc0. Also, please repost your configuration with the comments updated: for now, some comments don't match the statements you configured, which makes it hard to follow.

What I am doing here is that if the http_req_rate exceeds 250, then I want to send them to another backend which gives them a rate-limiting message; or if the number of concurrent connections is over 4, then I want to rate limit their usage and only allow 40 connections in.

Please be more accurate about the context. Furthermore, you mix rate limiting and concurrent connections for the same purpose in your sentence, and I'm really confused about the real goal you want to achieve.

Please feel free to critique my config. Now on to questions: 1) Is rate limiting based on IP a good way to do this, or has anyone tried other ways?

The closer to the application layer, the better. If you have a cookie or whatever header we can use to perform rate limiting, then it would be much better than the source IP.

2) Am I missing anything critical in the configuration?

No idea, as long as I still don't know what your primary goal is.

3) When does the src_inc_gpc0 counter really increment? Does it increment for every subsequent request from the client in the given timeframe? I have seen it go from 0 to 6 during my test but wasn't sure about it.

Each event may update a counter, such as a new connection or a new HTTP request coming in.

4) Can I not rate limit by just adding maxconn to the server in the backend, or will that throttle everyone instead of the rogue IP...
This will prevent your server from running too many requests in parallel and being overloaded, but it throttles everyone, not just the rogue IP. You can mix both techniques: the server's maxconn protects the servers against a huge load generated by many clients each running one request, while the configuration you set up above prevents a single user from generating too many requests and consuming too many of the connections allowed by maxconn. Baptiste
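Putting the two layers Baptiste describes together, a minimal sketch could look like the following. The addresses, thresholds and backend names are placeholders, not taken from Amol's config; it uses the sc0_* fetches Baptiste recommends above.

```haproxy
frontend fe_www
    bind :443 ssl crt /etc/haproxy/site.pem
    # layer 1: per-source tracking, to reject or slow down a single
    # abusive client (thresholds are placeholders)
    stick-table type ip size 100k expire 30s store conn_cur,http_req_rate(10s)
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_cur ge 40 }
    use_backend be_slow_down if { sc0_http_req_rate ge 250 }
    default_backend be_www

backend be_www
    # layer 2: cap per-server concurrency so no traffic mix,
    # abusive or not, can overload the servers
    server a1 192.0.2.10:80 maxconn 100 check
    server a2 192.0.2.11:80 maxconn 100 check

backend be_slow_down
    # answer abusers slowly with a 429-style error page
    timeout tarpit 2s
    errorfile 500 /etc/haproxy/errors/429.http
    http-request tarpit
```

The point of the split is that maxconn is a global safety net while the stick-table isolates the offender before it consumes the maxconn budget.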
Re: getting transparent proxy to work.
Hi Rich, so here is your problem. Please temporarily change the default gateway of the web servers to the active VIP: 10.10.130.79. What happens, and what you highlighted in your diagrams, is that HAProxy creates the TCP connection with the client IP. By default, the server then tries to talk to the client directly, but the client is not aware of HAProxy's connection and refuses it. If you route the return traffic back through HAProxy, then HAProxy will handle this connection and relay it to the real client. More information here: http://blog.haproxy.com/2011/08/03/layer-7-load-balancing-transparent-proxy-mode/ Baptiste

On Thu, Aug 13, 2015 at 2:29 AM, Rich Vigorito ri...@ocp.org wrote: No, inside the firewall there is one default gateway: 10.10.130.1. The web servers and haproxy servers have one interface, I believe. Sent from my Verizon Wireless 4G LTE DROID

Baptiste bed...@gmail.com wrote: Do you mean your web servers have 2 interfaces, each one with its own default gateway? Baptiste

On 12 August 2015 at 23:10, Rich Vigorito ri...@ocp.org wrote: Good to hear. Into the firewall: 192.168.0.1, and out of the firewall: 10.10.130.1. Thanks! Sent from my Verizon Wireless 4G LTE DROID

Baptiste bed...@gmail.com wrote: Hi Rich, Thanks a lot for this info, this is clearer now. In my first mail, I asked you to provide us the default gateway of the web servers. Could you please share it? Baptiste

On Wed, Aug 12, 2015 at 5:54 PM, Rich Vigorito ri...@ocp.org wrote: Also, for clarification, the config listed here is the config I used. The only difference between the 2 tests is removing:

source 0.0.0.0 usesrc clientip

With it removed, load balancing works; with it kept in the config, load balancing doesn't work. -Rich

From: Rich Vigorito ri...@ocp.org Sent: Monday, August 10, 2015 5:22 PM To: Baptiste Cc: haproxy@formilux.org Subject: RE: getting transparent proxy to work.

Thank you very much for all the help, and yes, you were correct about the capture I reported being the health check. Attached are 2 PNGs: one with our simple diagram of the network topology, and the other being what the network admin and I thought was happening in our TCP handshake. This was determined by loading a tcpdump into Wireshark. Those 2 files are dump.pcap (which was taken on the haproxy box) and web1_dump.pcap (which was taken on the web server). What is happening is I don't think the web server knows how to communicate back to the haproxy box. The iptables rules and the ip rule and ip route commands from the blog post: in my setup, would those be done on the haproxy boxes or the web servers?

From: Baptiste bed...@gmail.com Sent: Saturday, August 8, 2015 8:38 AM To: Rich Vigorito Cc: haproxy@formilux.org Subject: Re: getting transparent proxy to work.

On Fri, Aug 7, 2015 at 11:05 PM, Rich Vigorito ri...@ocp.org wrote: Hello, this is my first time using the mailing list. I have the following issue. Followed the steps to enable transparent proxying outlined here: "Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer" (HAProxy Technologies – Aloha Load Balancer). It will not load balance, however, with the following line added:

source 0.0.0.0 usesrc clientip

Here is all the relevant configuration and setup:

$ lsmod | grep -i tproxy
xt_TPROXY      17327  0
nf_defrag_ipv6 34651  2 xt_socket,xt_TPROXY
nf_defrag_ipv4 12729  3 xt_socket,xt_TPROXY,nf_conntrack_ipv4

$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

$ sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source     destination
DIVERT tcp  --  0.0.0.0/0  0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source     destination
MARK   all  --  0.0.0.0/0  0.0.0.0/0   MARK set 0x1
ACCEPT all  --  0.0.0.0/0  0.0.0.0/0

$ ip rule show
0:     from all lookup local
32762: from all fwmark 0x1 lookup 100
32766: from all lookup main
32767: from all lookup default

$ ip route show table 100
local default dev lo scope host

# haproxy.cfg
frontend layer4-listener
    bind *:80 transparent
    bind *:443 transparent
    bind *:3306
    bind *:8080
    mode tcp
    option tcplog
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    acl is_esp dst 10.10.130.79
    acl is_tls dst_port 443
    use_backend site_http if is_esp !is_tls
    use_backend site_https if is_esp is_tls

backend site_https
    mode tcp
    option tcpka
    option tcp-check
    #source 0.0.0.0 usesrc clientip  ## load balancing only works when commented out
Re: HAProxy - Combination of SSL Termination and Pass through
Hi Sandeep, No, HAProxy is not limited to passing through: it can terminate SSL and then create a new ciphered connection to the server:

listen ssl_reencryption
    mode http
    bind :443 ssl crt /path/to/your/cert
    server srv1 10.0.0.1:443 ssl

What you mean by passthrough would be something like:

listen ssl_passthrough
    mode tcp
    bind :443
    server srv1 10.0.0.1:443

Baptiste

On Thu, Aug 13, 2015 at 4:53 AM, Sandeep Jindal sandeep...@gmail.com wrote: Hi Baptiste, Not sure if that answers my question. What you suggested is to enable SSL for HAProxy. My use case is one step further. Once HAProxy receives the SSL request and decrypts it, the use case requires manipulating headers and then forwarding the request to a backend server which is SSL enabled. It seems HAProxy can pass SSL through but not start a new SSL connection to the backend. Regards Sandeep Jindal 201 604 5277

On Fri, Jul 31, 2015 at 2:11 AM, Baptiste bed...@gmail.com wrote: On Fri, Jul 31, 2015 at 4:12 AM, Sandeep Jindal sandeep...@gmail.com wrote: Hi All, My use case is to manipulate request headers of the incoming request. So, for this, I would need to create a new SSL certificate, but it seems at HTTP level. Can you please suggest if this is possible and how? Regards Sandeep Jindal 201 604 5277

Hi Sandeep, Simply create your certificate with openssl, and enable 'ssl' and 'crt /path/to/your/cert' on your bind line in your HAProxy frontend. Baptiste
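When re-encrypting towards the backend, it is usually worth validating the backend's certificate too. A minimal sketch (the paths, addresses and server name are placeholders, not from the thread):

```haproxy
listen ssl_reencryption
    mode http
    bind :443 ssl crt /etc/haproxy/frontend.pem
    # headers can be manipulated here, since the request is in clear text
    http-request set-header X-Forwarded-Proto https
    # re-encrypt towards the backend and verify its certificate
    # against a trusted CA bundle (placeholder paths)
    server app1 10.0.0.1:443 ssl verify required ca-file /etc/ssl/certs/backend-ca.pem check
```

Without "verify required", the server-side TLS connection is encrypted but the backend's identity is not checked.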
Re: getting transparent proxy to work.
Do you mean your web servers have 2 interfaces, each one with its own default gateway? Baptiste

On 12 August 2015 at 23:10, Rich Vigorito ri...@ocp.org wrote: Good to hear. Into the firewall: 192.168.0.1, and out of the firewall: 10.10.130.1. Thanks! Sent from my Verizon Wireless 4G LTE DROID

Baptiste bed...@gmail.com wrote: Hi Rich, Thanks a lot for this info, this is clearer now. In my first mail, I asked you to provide us the default gateway of the web servers. Could you please share it? Baptiste

On Wed, Aug 12, 2015 at 5:54 PM, Rich Vigorito ri...@ocp.org wrote: Also, for clarification, the config listed here is the config I used. The only difference between the 2 tests is removing:

source 0.0.0.0 usesrc clientip

With it removed, load balancing works; with it kept in the config, load balancing doesn't work. -Rich

From: Rich Vigorito ri...@ocp.org Sent: Monday, August 10, 2015 5:22 PM To: Baptiste Cc: haproxy@formilux.org Subject: RE: getting transparent proxy to work.

Thank you very much for all the help, and yes, you were correct about the capture I reported being the health check. Attached are 2 PNGs: one with our simple diagram of the network topology, and the other being what the network admin and I thought was happening in our TCP handshake. This was determined by loading a tcpdump into Wireshark. Those 2 files are dump.pcap (which was taken on the haproxy box) and web1_dump.pcap (which was taken on the web server). What is happening is I don't think the web server knows how to communicate back to the haproxy box. The iptables rules and the ip rule and ip route commands from the blog post: in my setup, would those be done on the haproxy boxes or the web servers?

From: Baptiste bed...@gmail.com Sent: Saturday, August 8, 2015 8:38 AM To: Rich Vigorito Cc: haproxy@formilux.org Subject: Re: getting transparent proxy to work.

On Fri, Aug 7, 2015 at 11:05 PM, Rich Vigorito ri...@ocp.org wrote: Hello, this is my first time using the mailing list. I have the following issue. Followed the steps to enable transparent proxying outlined here: "Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer" (HAProxy Technologies – Aloha Load Balancer). It will not load balance, however, with the following line added:

source 0.0.0.0 usesrc clientip

Here is all the relevant configuration and setup:

$ lsmod | grep -i tproxy
xt_TPROXY      17327  0
nf_defrag_ipv6 34651  2 xt_socket,xt_TPROXY
nf_defrag_ipv4 12729  3 xt_socket,xt_TPROXY,nf_conntrack_ipv4

$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

$ sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source     destination
DIVERT tcp  --  0.0.0.0/0  0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source     destination
MARK   all  --  0.0.0.0/0  0.0.0.0/0   MARK set 0x1
ACCEPT all  --  0.0.0.0/0  0.0.0.0/0

$ ip rule show
0:     from all lookup local
32762: from all fwmark 0x1 lookup 100
32766: from all lookup main
32767: from all lookup default

$ ip route show table 100
local default dev lo scope host

# haproxy.cfg
frontend layer4-listener
    bind *:80 transparent
    bind *:443 transparent
    bind *:3306
    bind *:8080
    mode tcp
    option tcplog
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    acl is_esp dst 10.10.130.79
    acl is_tls dst_port 443
    use_backend site_http if is_esp !is_tls
    use_backend site_https if is_esp is_tls

backend site_https
    mode tcp
    option tcpka
    option tcp-check
    #source 0.0.0.0 usesrc clientip  ## load balancing only works when commented out
    server site_www1 www1.site.org:443 weight 1 check inter 2000 rise 2 fall 3
    server site_www2 www2.site.org:443 weight 1 check inter 2000 rise 2 fall 3

$ haproxy -vv
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

$ uname -r
3.10.0-229.4.2.el7.x86_64

Our network admin indicated the following:
- A SYN packet from 10.10.130.31 (haproxy2) to 10.10.130.152 (site on web1)
- A SYN-ACK packet from web1 back to haproxy2
- A RST packet from haproxy2 to web1

Anyone able/willing to help and/or give insight into this issue? Thanks

Hi Rich, The information you provide is quite inaccurate. I've already reported this on stackoverflow, where you first posted your question.
Re: Forwarding issue
On Wed, Aug 12, 2015 at 6:34 PM, Roman Gelfand rgelfa...@gmail.com wrote: Why would the following apache directives cause problems for haproxy?

RewriteRule ^/Microsoft-Server-ActiveSync /rpc.php [PT,L,QSA]
RewriteRule .* - [E=HTTP_MS_ASPROTOCOLVERSION:%{HTTP:Ms-Asprotocolversion}]
RewriteRule .* - [E=HTTP_X_MS_POLICYKEY:%{HTTP:X-Ms-Policykey}]
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

Thanks in advance

First, you say 'hi'. Second, you explain your problem and what those apache rules are supposed to do, what type of application they are applied to, and how this application is supposed to work. Without a bit of context, it is impossible to help! Baptiste
Re: getting transparent proxy to work.
Hi Rich, Thanks a lot for this info, this is clearer now. In my first mail, I asked you to provide us the default gateway of the web servers. Could you please share it? Baptiste

On Wed, Aug 12, 2015 at 5:54 PM, Rich Vigorito ri...@ocp.org wrote: Also, for clarification, the config listed here is the config I used. The only difference between the 2 tests is removing:

source 0.0.0.0 usesrc clientip

With it removed, load balancing works; with it kept in the config, load balancing doesn't work. -Rich

From: Rich Vigorito ri...@ocp.org Sent: Monday, August 10, 2015 5:22 PM To: Baptiste Cc: haproxy@formilux.org Subject: RE: getting transparent proxy to work.

Thank you very much for all the help, and yes, you were correct about the capture I reported being the health check. Attached are 2 PNGs: one with our simple diagram of the network topology, and the other being what the network admin and I thought was happening in our TCP handshake. This was determined by loading a tcpdump into Wireshark. Those 2 files are dump.pcap (which was taken on the haproxy box) and web1_dump.pcap (which was taken on the web server). What is happening is I don't think the web server knows how to communicate back to the haproxy box. The iptables rules and the ip rule and ip route commands from the blog post: in my setup, would those be done on the haproxy boxes or the web servers?

From: Baptiste bed...@gmail.com Sent: Saturday, August 8, 2015 8:38 AM To: Rich Vigorito Cc: haproxy@formilux.org Subject: Re: getting transparent proxy to work.

On Fri, Aug 7, 2015 at 11:05 PM, Rich Vigorito ri...@ocp.org wrote: Hello, this is my first time using the mailing list. I have the following issue. Followed the steps to enable transparent proxying outlined here: "Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer" (HAProxy Technologies – Aloha Load Balancer). It will not load balance, however, with the following line added:

source 0.0.0.0 usesrc clientip

Here is all the relevant configuration and setup:

$ lsmod | grep -i tproxy
xt_TPROXY      17327  0
nf_defrag_ipv6 34651  2 xt_socket,xt_TPROXY
nf_defrag_ipv4 12729  3 xt_socket,xt_TPROXY,nf_conntrack_ipv4

$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

$ sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source     destination
DIVERT tcp  --  0.0.0.0/0  0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source     destination
MARK   all  --  0.0.0.0/0  0.0.0.0/0   MARK set 0x1
ACCEPT all  --  0.0.0.0/0  0.0.0.0/0

$ ip rule show
0:     from all lookup local
32762: from all fwmark 0x1 lookup 100
32766: from all lookup main
32767: from all lookup default

$ ip route show table 100
local default dev lo scope host

# haproxy.cfg
frontend layer4-listener
    bind *:80 transparent
    bind *:443 transparent
    bind *:3306
    bind *:8080
    mode tcp
    option tcplog
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    acl is_esp dst 10.10.130.79
    acl is_tls dst_port 443
    use_backend site_http if is_esp !is_tls
    use_backend site_https if is_esp is_tls

backend site_https
    mode tcp
    option tcpka
    option tcp-check
    #source 0.0.0.0 usesrc clientip  ## load balancing only works when commented out
    server site_www1 www1.site.org:443 weight 1 check inter 2000 rise 2 fall 3
    server site_www2 www2.site.org:443 weight 1 check inter 2000 rise 2 fall 3

$ haproxy -vv
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

$ uname -r
3.10.0-229.4.2.el7.x86_64

Our network admin indicated the following:
- A SYN packet from 10.10.130.31 (haproxy2) to 10.10.130.152 (site on web1)
- A SYN-ACK packet from web1 back to haproxy2
- A RST packet from haproxy2 to web1

Anyone able/willing to help and/or give insight into this issue? Thanks

Hi Rich, The information you provide is quite inaccurate. I've already reported this on stackoverflow, where you first posted your question. Here, for example, you ran multiple tests with different configurations, but you don't tell us during which one your network admin saw the traffic he described. First point: the network packets reported by your network admin seem to be a health check... Second, it is hard to help troubleshoot transparent proxying without a network diagram. So please draw and share the simplest one showing a client, haproxy and a server, with their respective interfaces, IPs and default gateways.
Re: ECC certificate
On Wed, Aug 12, 2015 at 11:22 AM, Marc-Antoine marc-antoine.b...@ovh.net wrote: Hi all, I'm trying to use an ECC certificate under haproxy without success:

$ haproxy -vv
HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300, test result OK
       poll : pref=200, test result OK
     select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Configuration:

global
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:!kEDH:!LOW:!EXP:!MD5:!RC4:!aNULL:!eNULL
    ssl-default-bind-options no-sslv3

frontend cluster2:443
    bind 1.2.3.4:443 ssl strict-sni crt /home/provisionning/0.pem crt /home/provisionning/cluster2.d
    default_backend cluster2

Any idea? -- Marc-Antoine

Hi, This might be related to your OpenSSL version :/ Baptiste
Re: getting transparent proxy to work.
On Fri, Aug 7, 2015 at 11:05 PM, Rich Vigorito ri...@ocp.org wrote: Hello, this is my first time using the mailing list. I have the following issue. Followed the steps to enable transparent proxying outlined here: "Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer" (HAProxy Technologies – Aloha Load Balancer). It will not load balance, however, with the following line added:

source 0.0.0.0 usesrc clientip

Here is all the relevant configuration and setup:

$ lsmod | grep -i tproxy
xt_TPROXY      17327  0
nf_defrag_ipv6 34651  2 xt_socket,xt_TPROXY
nf_defrag_ipv4 12729  3 xt_socket,xt_TPROXY,nf_conntrack_ipv4

$ sudo sysctl -p
vm.swappiness = 0
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

$ sudo iptables -L -n -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source     destination
DIVERT tcp  --  0.0.0.0/0  0.0.0.0/0   socket
[...]
Chain DIVERT (1 references)
target prot opt source     destination
MARK   all  --  0.0.0.0/0  0.0.0.0/0   MARK set 0x1
ACCEPT all  --  0.0.0.0/0  0.0.0.0/0

$ ip rule show
0:     from all lookup local
32762: from all fwmark 0x1 lookup 100
32766: from all lookup main
32767: from all lookup default

$ ip route show table 100
local default dev lo scope host

# haproxy.cfg
frontend layer4-listener
    bind *:80 transparent
    bind *:443 transparent
    bind *:3306
    bind *:8080
    mode tcp
    option tcplog
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
    acl is_esp dst 10.10.130.79
    acl is_tls dst_port 443
    use_backend site_http if is_esp !is_tls
    use_backend site_https if is_esp is_tls

backend site_https
    mode tcp
    option tcpka
    option tcp-check
    #source 0.0.0.0 usesrc clientip  ## load balancing only works when commented out
    server site_www1 www1.site.org:443 weight 1 check inter 2000 rise 2 fall 3
    server site_www2 www2.site.org:443 weight 1 check inter 2000 rise 2 fall 3

$ haproxy -vv
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

$ uname -r
3.10.0-229.4.2.el7.x86_64

Our network admin indicated the following:
- A SYN packet from 10.10.130.31 (haproxy2) to 10.10.130.152 (site on web1)
- A SYN-ACK packet from web1 back to haproxy2
- A RST packet from haproxy2 to web1

Anyone able/willing to help and/or give insight into this issue? Thanks

Hi Rich, The information you provide is quite inaccurate. I've already reported this on stackoverflow, where you first posted your question. Here, for example, you ran multiple tests with different configurations, but you don't tell us during which one your network admin saw the traffic he described. First point: the network packets reported by your network admin seem to be a health check... Second, it is hard to help troubleshoot transparent proxying without a network diagram. So please draw and share the simplest one showing a client, haproxy and a server, with their respective interfaces, IPs and default gateways. Last, a tcpdump on the HAProxy box showing the traffic on the interface between haproxy and the server, filtered on the IP address of the client. Baptiste
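To Rich's earlier question about where the plumbing lives: the source/usesrc directive goes in haproxy.cfg, and the kernel-side pieces shown above (the TPROXY module, ip_nonlocal_bind, the mangle/DIVERT iptables chain and the fwmark routing table) are all applied on the HAProxy box, not on the web servers; the web servers only need their default gateway pointed back at HAProxy, as suggested later in the thread. A sketch reusing the thread's own backend:

```haproxy
backend site_https
    mode tcp
    option tcpka
    option tcp-check
    # connect to the servers using the client's source IP; this only
    # works if the kernel-side TPROXY setup is in place on this box
    # AND the servers route their return traffic back through it
    source 0.0.0.0 usesrc clientip
    server site_www1 www1.site.org:443 weight 1 check inter 2000 rise 2 fall 3
    server site_www2 www2.site.org:443 weight 1 check inter 2000 rise 2 fall 3
```

Without the return route, the symptom is exactly what the network admin captured: the server answers the client directly and the handshake is reset.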
Re: REg: Connection field in HTTP header is set to close while sending to backend server
On Fri, Aug 7, 2015 at 1:25 PM, ilan ilan@gmail.com wrote: Hi Support, I configured haproxy to forward requests to a backend server. I did a packet capture between the browser and haproxy and noticed that the Connection field in the HTTP header is set to keep-alive. Then I did a packet capture between haproxy and the backend server, and noticed that the Connection field is set to close. Could you please tell me why haproxy is changing the Connection field to close when sending the request to the backend server? I am new to web programming; my apologies if I did not provide enough information. Thanks for your help in advance. Here is my haproxy configuration:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    contimeout 5000
    clitimeout 5
    srvtimeout 5
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen appname 0.0.0.0:8002
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth root:admin123
    balance roundrobin
    option httpclose
    option forwardfor
    server lamp1 127.0.0.1:8001

Regards, Ilan

Hi Ilan, You have this behavior because of "option httpclose". Remove it and you'll have connection keep-alive. To make it explicit, I would add an "option http-keep-alive" in the defaults section. And why not also add an "option prefer-last-server", which may help keep the connection alive despite the load-balancing algorithm. Baptiste
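Baptiste's two suggestions combined would look roughly like this in a defaults section (a sketch, not Ilan's full config; the timeout values are placeholders):

```haproxy
defaults
    mode http
    # keep client- and server-side connections open between
    # requests instead of closing them after each response
    option http-keep-alive
    # for consecutive requests of the same session, try to reuse
    # the same server connection despite the balancing algorithm
    option prefer-last-server
    timeout connect 5s
    timeout client 50s
    timeout server 50s
```

Note that "option httpclose" anywhere in the chain overrides this, which is exactly what was happening in the config above.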
Re: Resolvers are not applied from default_server line / Incorrect default value for resolve-prefer
On Tue, Aug 4, 2015 at 11:27 PM, BBuhl b_b...@yahoo.de wrote:

Baptiste bed...@gmail.com wrote on Tuesday, 4 August 2015 at 22:21: Hi Benji, Thanks a lot for your feedback! First, about resolve-prefer: I coded it (and documented it) first with IPv4 as the default. That said, Willy asked me to default it to IPv6, and I forgot to update the documentation. I'll send a patch right away to fix the doc. Second, about resolvers in the default-server statement: well, this has never been brought up, and it has never been coded this way (note the documentation doesn't mention that resolvers is available for default-server). We did it this way because it allows admins to mix servers with an IP address and servers with a hostname in the same farm. It also allows the admin to choose on which servers to enable DNS resolution. If you think it makes sense to have it in default-server, then we have to find a way to negate it per server. Baptiste

Hi Baptiste, thanks for your prompt response. I might be mistaken, but I don't see how allowing resolvers in the default-server statement would prohibit the alternative usage scenarios you were describing (e.g. setting resolvers to a different value per server). In that case one would just NOT set resolvers in the default-server statement and instead set it in every server statement, no?

Not only a different one, but also the absence of resolvers on a particular server: some kind of "no-resolvers", or "resolvers none". I'm not against the idea; we just need to ensure we do it the best way for everyone. I'll keep your request in mind and try to address it later.

Anyhow, this is nothing big. I can live with setting the resolvers for every server. But it should be documented that resolvers is not supported in default-server. AFAICS, for all the other keywords this is explicitly documented with the line "Supported in default-server: No".

Patch already sent to Willy. I've already been asked to improve the doc about DNS and will do it asap.
By the way, if you meet any issue with the DNS resolution itself, please let us know. Baptiste
Re: Resolvers are not applied from default_server line / Incorrect default value for resolve-prefer
On Tue, Aug 4, 2015 at 5:34 PM, BBuhl b_b...@yahoo.de wrote: Steps to reproduce:

1. Add a resolvers section mydns:

resolvers mydns
    nameserver dns1 8.8.8.8:53
    nameserver dns2 8.8.4.4:53

2. Add the following line to the backend:

default-server inter 1000 weight 13 resolvers mydns

3. Query "show stat resolvers mydns" from the stats socket.

Expected result: the mydns resolver is used to resolve the server names.
Actual result: the mydns resolver is not used at all.

This configuration does pass a config check (haproxy -f haproxy.config -c) and there is no error about the resolvers keyword not being supported in default-server. The resolve-prefer option, on the other hand, does seem to be set from the default-server. So why not the resolvers as well?

Another possible bug: incorrect default value for resolve-prefer. According to the docs, the default value for resolve-prefer should be IPv4. Looking at the source code, I am pretty sure this is not the case and instead this option defaults to IPv6:

1.6 docs: http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#5.2-resolve-prefer
1.6 trunk: http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/server.c;h=503d53ab36fc5688454b53dd0d7e88577df6ca46;hb=HEAD#l1037

Regards, Benji

Hi Benji, Thanks a lot for your feedback! First, about resolve-prefer: I coded it (and documented it) first with IPv4 as the default. That said, Willy asked me to default it to IPv6, and I forgot to update the documentation. I'll send a patch right away to fix the doc. Second, about resolvers in the default-server statement: well, this has never been brought up, and it has never been coded this way (note the documentation doesn't mention that resolvers is available for default-server). We did it this way because it allows admins to mix servers with an IP address and servers with a hostname in the same farm. It also allows the admin to choose on which servers to enable DNS resolution.
If you think this makes sense to have it in the default-server, then we have to find a way to negate it per server. Baptiste
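Until (and unless) default-server support lands, resolvers has to be repeated on each server line. A sketch of the working per-server form, following the thread (backend name, addresses and hostnames are placeholders):

```haproxy
resolvers mydns
    nameserver dns1 8.8.8.8:53
    nameserver dns2 8.8.4.4:53

backend app
    # other options CAN be factored into default-server...
    default-server inter 1000 weight 13
    # ...but "resolvers" is only honoured on the server line itself,
    # so it must be set per server that needs DNS resolution:
    server app1 app1.example.com:80 resolvers mydns resolve-prefer ipv4 check
    # a server with a literal IP needs no resolvers at all
    server app2 192.0.2.20:80 check
```

This mixed form is exactly the use case Baptiste cites as the reason resolvers stayed per-server.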
Re: Copying request headers to response header
On Tue, Aug 4, 2015 at 10:34 AM, David Reuss shuffle...@gmail.com wrote: Hello, I'm using the unique-id-header, and I'm having some difficulty getting it to work how I want it to. First of all, we have a variety of backends, so I'd like it if what I'm trying to do could be handled entirely by haproxy. We're trying to implement a header for storing a unique request id (and if possible, making it nested), so it's possible to track the entire chain of requests going through our proxies. Basically that means if a request already contains an X-ReqId header, it should just leave it be, and if there is none, it should tack on its own. Now I can get it to set the header just fine (not sure about the "don't set one if already there" part), but I can't seem to figure out how to get this shown in the response (without going outside haproxy). I'm not entirely familiar with the haproxy processing pipeline, and what samples are available where. I tried something like this in the frontend (and tried backends as well), to no avail: http-response set-header X-ReqId %[req.fhdr(x-reqid)] Any pointers on how I can achieve my goal? Hi David, This is not doable in 1.5. In 1.6, you can give a try to the http-request capture statement, to capture at request time, then inject the value back at response time. Baptiste
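A sketch of the 1.6 approach suggested above: capture the request header, then inject the captured value into the response (frontend and backend names are assumptions; this covers only the echo part, not the generation of a new id):

```haproxy
frontend fe_web
    bind :80
    mode http
    # reserve request capture slot 0 (1.6 syntax)
    declare capture request len 64
    http-request capture req.fhdr(X-ReqId) id 0
    # inject the captured value back at response time; the header will
    # be empty when the client did not send one
    http-response set-header X-ReqId %[capture.req.hdr(0)]
    default_backend be_app
```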
Re: Haproxy balancing authenticated servers
Please be more accurate in your answer, otherwise we can't help you! Baptiste On Fri, Jul 31, 2015 at 3:44 PM, Francys Nivea francys.so...@neurotech.com.br wrote: Hello Baptiste, A simple one. Just wanted to send the user and pass together with each server balanced. Peace, *Francys Nivea*Analista de Datacenter +55 81 *3312-2740*+55 81 *9 * *9113-4564* www.neurotech.com.br On 31 July 2015 at 10:42, Baptiste bed...@gmail.com wrote: On Fri, Jul 31, 2015 at 3:38 PM, Francys Nivea francys.so...@neurotech.com.br wrote: Hello all, I'm trying to balance a few authenticated HTTP servers, each has its own credential to access. It would be possible to do this with HAProxy? if so, how? Thank you Peace, Hi Francys, What type of authentification do you use? Baptiste -- Esta mensagem pode conter informação confidencial e/ou privilegiada.Se você não for o destinatário ou a pessoa autorizada a receber esta mensagem, você não deve usar, copiar, divulgar, alterar e não tomar nenhuma ação em relação a esta mensagem ou qualquer informação aqui contida.Se você recebeu esta mensagem erroneamente, por favor entre em contato imediatamente ou responsa por e-mail ao remetente e apague esta mensagem. Opiniões pessoais do remetente não refletem, necessariamente, o ponto de vista da Neurotech, o qual é divulgado somente por pessoas autorizadas. Antes de imprimir este e-mail, veja se realmente é necessário. Ajude a preservar o meio ambiente. This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, please, you must not use, copy, disclose, change, take any action based on this message or any information herein. Personal opinions of the sender do not necessarily reflect the view of Neurotech, which is only divulged by authorized persons. Please consider the environment before printing this email.
Re: Haproxy balancing authenticated servers
Since you know the public IP address, you might be able to get connected to the application and see whether you get a 401 or a web form... From my understanding you simply need stickiness or, worst case, a deterministic algorithm. Try enabling cookie-based persistence first; if it doesn't work because the client is dumb, simply use balance source. Baptiste On Fri, Jul 31, 2015 at 3:53 PM, Francys Nivea francys.so...@neurotech.com.br wrote: Sorry, I don't have control over the balanced servers. The only information I have is IP, port and credentials (user and pass of each server). I have to do load balancing among them. Nowadays all the load goes to only one server, which is generating overhead. We already use HAProxy to balance our applications, and it works great. That's why I wanted to use HAProxy now, but I have no idea how. Peace, *Francys Nivea* Analista de Datacenter +55 81 *3312-2740* +55 81 *9* *9113-4564* www.neurotech.com.br
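The advice above (cookie persistence first, balance source as a fallback) could be sketched like this; all names and addresses are placeholders:

```haproxy
backend authed_servers
    balance roundrobin
    # HAProxy inserts its own cookie; the backend servers never see it
    cookie SRVID insert indirect nocache
    server s1 192.0.2.11:80 check cookie s1
    server s2 192.0.2.12:80 check cookie s2
    # if the client ignores cookies, drop the cookie lines and use:
    # balance source
```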
Re: Haproxy balancing authenticated servers
On Fri, Jul 31, 2015 at 3:38 PM, Francys Nivea francys.so...@neurotech.com.br wrote: Hello all, I'm trying to balance a few authenticated HTTP servers; each has its own credentials for access. Would it be possible to do this with HAProxy? If so, how? Thank you Peace, Hi Francys, What type of authentication do you use? Baptiste
Re: Capture sequencing in logs
In 1.6, %[query] should do the trick. Baptiste On Fri, Jul 31, 2015 at 1:17 AM, Phillip Decker pdecker999+hapr...@gmail.com wrote: And it only kinda works because when there is no question mark then the field will have the uri instead of being empty... On Thu, Jul 30, 2015 at 7:12 PM, Phillip Decker pdecker999+hapr...@gmail.com wrote: Funny, yeah I was just playing with it and couldn't get that to work, so I just did another git pull thinking maybe I just wasn't updated, then came back to my email and saw your second reply. Hrm. Well, something that seems to sorta work is this (in the log-format line): %[capture.req.uri,regsub(^.*\?,)] So, grabbing the full uri and then regex replace everything up to the '?' with nothing, but I don't know what kind of underlying impacts that approach might have, if any... Phillip On Thu, Jul 30, 2015 at 6:25 PM, Cyril Bonté cyril.bo...@free.fr wrote: On 31/07/2015 00:14, Cyril Bonté wrote: Hi Phillip, On 31/07/2015 00:05, Phillip Decker wrote: One other log question in this same vein - I'm trying to duplicate the functionality of the %q flag in Apache, and I don't see a way in the documentation to print _only_ the query string, that is, the information after the question mark in a URI. I see the URI without the query (path), the full URI, and looking up specific parameters in the URI... am I missing an obvious flag somewhere? This is only available in 1.6 development branch : http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#query Oops, I replied too quickly, as such HTTP sample fetches are not available in log-format. Maybe we can discuss adding a %HQ (or %HQS) log variable in the future ? -- Cyril Bonté
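In 1.6, sample fetches became usable in log-format, so the query string can be logged directly without the regsub workaround; a sketch (the surrounding format fields are assumptions):

```haproxy
frontend fe_web
    bind :80
    mode http
    log global
    # %[query] expands to the part after '?', and stays empty when the
    # URI has no query string
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B query=%[query]"
    default_backend be_app
```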
Re: HAProxy - Combination of SSL Termination and Pass through
On Fri, Jul 31, 2015 at 4:12 AM, Sandeep Jindal sandeep...@gmail.com wrote: Hi All, My use case is to manipulate the request headers of incoming requests. For this, it seems I would need to terminate SSL, since the manipulation happens at the HTTP level. Can you please suggest if this is possible and how? Regards Sandeep Jindal 201 604 5277 Hi Sandeep, Simply create your certificate with openssl, and enable 'ssl' and 'crt /path/to/your/cert' on the bind line in your HAProxy frontend. Baptiste
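A sketch of the termination setup described in the reply (path and names are assumptions); the certificate and private key are created beforehand with openssl and concatenated into a single PEM file:

```haproxy
frontend fe_tls
    # site.pem contains the certificate chain followed by its private key
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    # with TLS terminated here, request headers can be manipulated freely
    http-request set-header X-Forwarded-Proto https
    default_backend be_app
```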
Re: http-response set-mark with value from http-response header field
Hi Vinay, According to the documentation: - set-tos is used to set the TOS or DSCP field value of packets sent to the client to the value passed in tos on platforms which support this. This value represents the whole 8 bits of the IP TOS field, and can be expressed both in decimal or hexadecimal format (prefixed by 0x). It does not accept a log-format variable, as you're trying to do. Baptiste On Sun, Jul 26, 2015 at 1:00 PM, Vinay Y S vinay...@gmail.com wrote: Actually I suppose the syntax could be the same as sample fetches. For example: http-response set-tos %[res.hdr_val(X-Tos)] This syntax currently doesn't work. Is it possible to make this work easily? On Sun, Jul 26, 2015 at 3:45 PM Vinay Y S vinay...@gmail.com wrote: Hi, I would like to set set-mark and set-tos to values returned by the backend in an http-response header field. Is this possible? For example a syntax like this would be nice: http-response set-tos $http_resp_hdr[tos] The idea is to have the backend determine the best value for tos and mark on a per-request basis depending on the client ip address, client id etc. Then, based on the tos and mark values, my policy routing setup chooses the outbound interface, traffic queues etc. Thanks, Vinay
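Since set-tos only accepts an immediate value, one possible workaround (a sketch; the X-Tos header name comes from the thread, the values are assumptions) is to branch on the response header with ACLs, one rule per supported value:

```haproxy
# each rule maps one advertised header value to a fixed TOS byte
http-response set-tos 16 if { res.hdr(X-Tos) -m str 16 }
http-response set-tos 8  if { res.hdr(X-Tos) -m str 8 }
```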
Re: Problems compiling HAProxy with Lua Support
Hi Baptiste, can you apply the patch to current git master? Thanks! Bjoern Hi, Only Willy can do this :) I'm nothing else than a humble contributor. Baptiste
Re: tcp-request + gpc ACLs
Hi Baptiste, thank you for answering. At the moment I'm testing 1.6 to bring it into production soon. Do you have an example config snippet for your suggestion? Hi, Unfortunately, no. Baptiste
Re: Service down with TCP
On Tue, Jul 21, 2015 at 6:25 PM, Thibault LABRUT t.lab...@pickup-services.com wrote: Hello, I implemented the tcp flow at my haproxy . The problem is that since haproxy service stops after 5 minutes. I have seen rine especially in logs except this: kernel: Traps : haproxy [ 11939 ] Common IP protection : 7fe1ddc19f1a sp : 7fff12c2d580 error: 0 in haproxy [ + 7fe1ddbd5000 b6000 ] haproxy - systemd -wrapper : haproxy - systemd -wrapper : exit , haproxy RC = 0 Here is a sample configuration: frontend tcp_33101 fashion tcp tcplog option option tcpka capture request header Host len 200 bind 192.168.100.98:33101 default_backend prod_tools_tcp_33101 backend prod_tools_tcp_33101 fashion tcp tcplog option option tcpka server srv- prod_tools_tcp_33101-01 XXX.XXX.XXX.XXX:33101 check weight 100 Best regards, Thibault Hi Thibault, What troubleshooting steps have you already performed? Have you dug into systemd? NOTE: Please don't use a translator with your HAProxy configuration. That's why now, haproxy is in fashion tcp instead of mode tcp. Baptiste
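For reference, the machine-translated snippet above presumably corresponds to something like this in the original configuration ("fashion" being "mode" and "tcplog option" being "option tcplog"; the Host capture line is dropped here since header captures only apply in HTTP mode):

```haproxy
frontend tcp_33101
    bind 192.168.100.98:33101
    mode tcp
    option tcplog
    option tcpka
    default_backend prod_tools_tcp_33101

backend prod_tools_tcp_33101
    mode tcp
    option tcplog
    option tcpka
    server srv-prod_tools_tcp_33101-01 XXX.XXX.XXX.XXX:33101 check weight 100
```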
Re: tcp-request + gpc ACLs
On Mon, Jul 20, 2015 at 8:19 PM, bjun...@gmail.com bjun...@gmail.com wrote: 2015-07-13 18:07 GMT+02:00 bjun...@gmail.com bjun...@gmail.com: Hi, I'm using stick-tables to track requests and block abusers if needed. Abusers should be blocked only for a short period of time, and I want the stick-table entry to expire. Therefore, I have to check whether the client is already marked as an abuser and not track that client. Example config:
frontend fe_http_in
    bind 127.0.0.1:8001
    stick-table type ip size 100k expire 600s store gpc0
    # Not working
    # acl is_overlimit sc0_get_gpc0(fe_http_in) gt 0
    # Working
    # acl is_overlimit src_get_gpc0(fe_http_in) gt 0
    tcp-request connection track-sc0 src if !is_overlimit
    default_backend be
backend be
    ... incrementing gpc0 (with sc0_inc_gpc0) ...
If I use sc0_get_gpc0, the stick-table entry will never expire because the timer will be reset ("tcp-request connection track-sc0 ..." seems to ignore this acl). With src_get_gpc0 everything works as expected. Both ACLs are correct and triggered (verified with debug headers (http-response set-header ...)). What's the difference between these ACLs in conjunction with "tcp-request connection track-sc0 ..."? Is this a bug or intended behaviour? --- Bjoern Has anyone observed the same behaviour, or does anyone know whether this is the correct behaviour? --- Bjoern Hi, This is not doable in 1.5. In the upcoming 1.6, you can copy the data into a blacklist-purpose stick table with an expire argument, then use the in_table converter to know whether a request is blacklisted or not. When you use an sc0_* function, you refresh the data in the table. Baptiste
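One possible shape of the 1.6 workaround described above (a sketch; table and backend names are assumptions): keep abusers in a dedicated table whose entries expire on their own, and test membership with in_table, which does not refresh the expire timer:

```haproxy
# dummy backend hosting the blacklist table
backend bl
    stick-table type ip size 100k expire 600s store gpc0

frontend fe_http_in
    bind 127.0.0.1:8001
    # membership test only; it never resets the entry's expire timer
    acl is_blacklisted src,in_table(bl)
    tcp-request connection reject if is_blacklisted
    # abusers are added to 'bl' elsewhere, e.g. by tracking them into
    # that table once they go over the limit
    default_backend be
```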
Re: Haproxy 1.5.9 logging
Simply use the same statement to choose the severity level based on ACLs. It works on both http-request and http-response. Baptiste On Sun, Jul 19, 2015 at 10:53 AM, Haim Ari haim@startapp.com wrote: Thank you, it works. What would be the best way to separate each log type into different log files: 1. Errors 2. testing 3. some other acl... Thanks again On 07/14/2015 03:30 PM, Baptiste wrote: On Tue, Jul 14, 2015 at 10:57 AM, Haim Ari haim@startapp.com wrote: Hello, How can I log requests only if the path begins with /testing? I already use an acl with path_beg to redirect to a backend. How can I use the same method for logging? (or any other way to achieve the above) Thank you, Haim Hi Haim, Simply use the http-request set-log-level statement, like: http-request set-log-level silent unless { path_beg -i /testing } Baptiste -- Haim Ari */* SysOps Manager M: 972.584563032 */* T: 972.722288367
Re: FW: SSL offloading in HAProxy
Hi, SSL offloading in front of IMAPs (port 993) is supported. If you try to do STARTTLS over IMAP, it is not supported. Baptiste On Wed, Jul 15, 2015 at 10:38 AM, Cohen Galit galit.co...@comverse.com wrote: Hello HAProxy team, I see that the SSL offloading for http protocol is already supported ( http://blog.haproxy.com/2012/09/10/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/ ) I would like to know if there is an option of SSL offloading for IMAP protocol. Thanks, Galit From: Avrahami David Sent: Wednesday, July 01, 2015 3:50 PM To: Cohen Galit Cc: Sabban Gili; Meltser Tiran Subject: SSL offloading in HAProxy Hi Galit, Can you please post the below question to HAProxy forum? I see that the SSL offloading for http protocol is already supported ( http://blog.haproxy.com/2012/09/10/how-to-get-ssl-with-haproxy-getting-rid-of-stunnel-stud-nginx-or-pound/ ) I would like to know if there is an option of SSL offloading for IMAP protocol. Best Regards, David Avrahami Security SE Tel: +972-3-6452374 Mobile: +972-544382374 Email: david.avrah...@comverse.com “This e-mail message may contain confidential, commercial or privileged information that constitutes proprietary information of Comverse Inc. or its subsidiaries. If you are not the intended recipient of this message, you are hereby notified that any review, use or distribution of this information is absolutely prohibited and we request that you delete all copies and contact us by e-mailing to: secur...@comverse.com. Thank You.”
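A sketch of TLS offloading in front of IMAPS (implicit TLS on port 993, no STARTTLS); the certificate path and addresses are assumptions:

```haproxy
frontend imaps_in
    # TLS is terminated here; mail.pem holds the cert chain and key
    bind :993 ssl crt /etc/haproxy/mail.pem
    mode tcp
    default_backend be_imap

backend be_imap
    mode tcp
    # plain IMAP towards the mail server once TLS is stripped
    server mail1 192.0.2.20:143 check
```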
Re: ocsp
Hi Marc-Antoine, no idea, sorry. Maybe some of our SSL experts may help :) Baptiste On Wed, Jul 15, 2015 at 11:06 AM, Marc-Antoine marc-antoine.b...@ovh.net wrote: Hi, nobody knows plz ? On Thu, 9 Jul 2015 13:06:59 +0200, Marc-Antoine marc-antoine.b...@ovh.net wrote : Hi all, I have some problem making ocsp stapling working. here is what i did : I have 8150.pem with chain, cert and key in it. I have 8150.pem.ocsp that seems ok : # openssl ocsp -respin 8150.pem.ocsp -text -CAfile alphassl256.chain OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response Version: 1 (0x0) Responder Id: 9F10D9EDA5260B71A677124526751E17DC85A62F Produced At: Jul 9 09:47:04 2015 GMT Responses: Certificate ID: Hash Algorithm: sha1 Issuer Name Hash: 84D56BF8098BD307B766D8E1EBAD6596AA6B6761 Issuer Key Hash: F5CDD53C0850F96A4F3AB797DA5683E669D268F7 Serial Number: 11216784E7CA1813F3AD922B60EAF6428EE0 Cert Status: good This Update: Jul 9 09:47:04 2015 GMT Next Update: Jul 9 21:47:04 2015 GMT No error/warn at haproxy launching but not sure haproxy is loading .ocsp file because no notice in log. But nothing in tlsextdebug : echo Q | openssl s_client -connect www.beluc.fr:443 -servername www.beluc.fr -tlsextdebug -status -CApath /etc/ssl/certs [...] OCSP response: no response sent [...] Do you see smth wrong ? What can i do in order to debug ? Regards, -- Marc-Antoine
Re: Load Balancing the Load Balancer
On Wed, Jul 15, 2015 at 1:14 PM, mlist ml...@apsystems.it wrote: Hi, we see there is a new feature of HAProxy, peers and shared tables (stick-table). Can this peers feature be used to keep stick cookies in sync, so that if one haproxy goes down the other can take over connections? Yes, the stick table remembers and shares which client is stuck to which server. You can use any criterion of the connection, and of course you can use a cookie set by your application. Alternatively, HAProxy can put its own cookie in the HTTP response and use it for persistence. This mode is useful because you don't need to share the stick table: two unconnected haproxy instances can ensure high availability without losing session affinity. So if we use a shared stick table between 2 HAProxy LBs we don't need a cookie to maintain backend server sessions, and if we use a cookie we don't need to share the stick table? In the latter case, how does the surviving HAProxy know where to route the request, i.e. to the correct backend server defined in haproxy.cfg? It does not work like this :) Persistence is based on your client and server capabilities as well as the type of protocol. IE, if you want persistence over the POP protocol, use the source IP and a stick table. If you want persistence for a webmail application where clients are browsers that can use a cookie, use a cookie set by HAProxy. If you want persistence for a PHP or Java application without inserting a new cookie, store the cookies generated by the application servers in a stick table shared between your HAProxy instances... etc... What is your choice? The choice depends on each problem. HAProxy is very rich and lets you solve many LB and HA issues. Generally I prefer the simplest solution able to solve my issues. I mean your choice to keep the haproxy.cfg file in sync between 2 or more haproxy LBs (rsync, custom script, etc.) rsync or scp...
I mean, it's not only a cfg file, but also your SSL certificates, your ACLs, MAPs, etc... Baptiste
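Sharing a stick table between two HAProxy nodes, as discussed above, could be sketched like this (peer names, addresses and the cookie name are assumptions):

```haproxy
peers lb_peers
    # each node must be started with -L <its own peer name>
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

backend app
    # store the application's own session cookie and replicate it
    stick-table type string len 64 size 100k expire 30m peers lb_peers
    stick store-response res.cook(JSESSIONID)
    stick match req.cook(JSESSIONID)
    server s1 10.0.1.1:8080 check
    server s2 10.0.1.2:8080 check
```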
Re: Rewrite cookie path cookie domain
I have a problem rewriting the cookie path and cookie domain in HAProxy; I have an Nginx configuration, but I want to move from Nginx to HAProxy for this proxy pass. Just for my curiosity, why remove nginx??? This is the Nginx config I want to replace:
location /~xxx/ {
    proxy_cookie_domain ~.* .$site.it;
    proxy_cookie_path ~.* /~xxx/;
    proxy_set_header Host $site.it;
    proxy_pass http://192.168.1.2/;
}
I need the same function as proxy_cookie_domain and proxy_cookie_path; I found this: http://blog.haproxy.com/2014/04/28/howto-write-apache-proxypass-rules-in-haproxy/ but it does not work for me. Now I can change the cookie path with:
rspirep ^(Set-Cookie:.*)\ path=(.*) \1\ path=/~xxx/
It would be easier if you shared a Set-Cookie header sent by your application server. In the meantime, you may use (add an 'if' statement if required):
http-response replace-value Set-Cookie (.*)\ path=.* \1\ path=/~xxx/
Note: nowadays, prefer the http-response rule over rspirep. I also need to add the domain, only if it exists, but with a dynamic hostname; I've tried with
acl hdr_set_cookie_domain_and_path res.hdr(Set-cookie) -m sub domain= res.hdr(Set-cookie) -m sub path=
rspirep ^(Set-Cookie:.*)\ path=(.*) \1\ path=/~xxx/;\ domain=%[hdr(Host)] if hdr_set_cookie_domain_and_path
This does not work like this: you can't use a request header in a response rule. In 1.6, you'll be able to use variables. In 1.5, you can capture the header during the request, then use the value at response time:
capture request header Host len 32 # first capture statement has capture.req.hdr id 0
acl hdr_set_cookie_domain res.hdr(Set-cookie) -m sub domain=
acl hdr_set_cookie_path res.hdr(Set-cookie) -m sub path=
http-response replace-value Set-Cookie (.*) \1;\ domain=%[capture.req.hdr(0)] # put your if statements as you want / need
You can create as many http-response rules as you need to update first the domain, then the path. Baptiste Anyone can help me? Tnx, rr 2015-07-14 21:34 GMT+02:00 Baptiste bed...@gmail.com: Please repost your question.
I can't see it in my mail history. Baptiste On Tue, Jul 14, 2015 at 3:33 PM, rickytato rickytato rickyt...@r2consulting.it wrote: Anyone can help me? I keep using Nginx? 2015-07-07 10:46 GMT+02:00 rickytato rickytato rickyt...@r2consulting.it: 1.5.12 2015-07-06 17:58 GMT+02:00 Aleksandar Lazic al-hapr...@none.at: Dear rickytato rickytato. Am 06-07-2015 15:32, schrieb rickytato rickytato: Hi all, I've problem to rewrite cookie path and cookie domain in HAproxy; I've a Nginx configuration but I want to move from Nginx to HAProxy for this proxy pass. Which Version of haproxy do you use? haproxy -vv ? Cheers Aleks
Re: Server IP resolution using DNS in HAProxy
On Wed, Jul 15, 2015 at 8:28 AM, Marco Corte ma...@marcocorte.it wrote: On 14/07/2015 22:11, Baptiste wrote: - when parsing the configuration, HAProxy uses libc functions and the resolvers provided by the operating system = if the server can't be resolved at this step, then HAProxy can't start [...] First, we want to fix the error where HAProxy fails to start because the resolvers pointed to by the system can't resolve a server's IP address (but HAProxy's resolvers could). The idea here would be to create a new flag on the server to tell HAProxy which IP to use. The server would be enabled when the IP has been provided by the expected tool. Hi Marco, Why not provide an option to start haproxy even if not all servers can be resolved? That's the purpose of my mail. I need this feature, but I want to make it in a way which would satisfy the community. Your proposal of the init-addr could be useful for a trick: I can set a surely unreachable address to let haproxy start, and then force/wait for the name resolution to have a working server. That's what we want. The hidden feature is that you can start large farms and simply turn on DNS when spawning new application servers. Scale up without reloading HAProxy ;) An NX server state would be very nice. Noted, thx Baptiste
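For the record, the proposal discussed here later shipped in HAProxy 1.7 as the init-addr server option; a sketch of that syntax (names are assumptions, and note the thread predates the feature):

```haproxy
backend app
    # 1.7+: try the last known address, then libc resolution, then start
    # with no address at all instead of refusing to load
    default-server init-addr last,libc,none
    server app1 app1.example.com:80 check resolvers mydns
```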
Re: Server IP resolution using DNS in HAProxy
Hi Robin, I don't understand the necessity of the hold valid config option. DNS has something that takes care of this for you, called the TTL. Besides, if hold valid is shorter than the TTL it would be kind of pointless, since the resolvers you are querying won't re-resolve until the TTL expires. Your server won't wait until the end of the TTL to fail ;) So you don't want to follow TTLs and prefer to force HAProxy to resolve more often. In some cases, you don't choose the TTL (amazon), so 'hold valid' allows you to choose your own TTL. Tbh I don't really see the point of configuring the resolvers in haproxy when the OS has perfectly fine working facilities for this? Imagine a big company. Imagine the ops team managing HAProxy and the IT team managing the DNS servers. (It's a real case.) When the ops team starts up a new server, DNS propagation can be long (several minutes) before the DNS servers managed by the IT team are aware of the update (we speak about worldwide deployment). In order to start up the new service asap, the ops team wants to use both the regular DNS servers and their own DNS server... There are many cases like this one, where the ops team doesn't control the DNS servers. The same applies if you use service discovery: HAProxy can point its DNS requests at it instead of the regular DNS servers. What is the benefit besides possibly causing lookups to happen twice, once from the OS resolving stack and once from haproxy's? If you really want exactly the same behavior as described you could always configure a local resolver that queries multiple other resolvers instead of recursing itself. You say this because you control your OS. We have many customers and community users where that's not the case. Once again, HAProxy is a load-balancer; it needs the most accurate information, as fast as possible. You don't want to tune your local bind or powerdns just for HAProxy and prevent any other service from operating as usual. Baptiste
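A resolvers section using 'hold valid' as a local TTL override, per the explanation above (nameserver address and timings are assumptions):

```haproxy
resolvers mydns
    nameserver dns1 10.0.0.53:53
    resolve_retries 3
    timeout retry   1s
    # re-use a valid answer for at most 10s, regardless of the zone's TTL
    hold valid      10s
```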
Re: Server IP resolution using DNS in HAProxy
Actually a local resolver can take care of that for you as well, since every resolver I know allows configuring a different destination on a per-domain basis. Also, as described in the first email, the server has to be resolvable via the OS resolving stack as well, otherwise haproxy won't start. That's the purpose of this thread. We want/need to get rid of this limitation, and that's why we ask our community whether the way we want to fix it makes sense. This means you cannot use custom domains without configuring some sort of custom resolver anyway. HAProxy's internal resolver can be made flexible enough for this purpose without being intrusive in the underlying operating system. Baptiste -Robin- Nenad Merdanovic wrote on 7/15/2015 08:56: Hello Robin, On 07/15/2015 08:49 AM, Robin Geuze wrote: Tbh I don't really see the point of configuring the resolvers in haproxy when the OS has perfectly fine working facilities for this? What is the benefit besides possibly causing lookups to happen twice, once from the OS resolving stack and once from haproxy's? If you really want exactly the same behavior as described you could always configure a local resolver that queries multiple other resolvers instead of recursing itself. Because this would perfectly integrate with things like Consul (https://www.consul.io/docs/agent/dns.html), which are currently very widely used to provide service discovery. -Robin- Regards,
Re: How to disable backend servers without health check
On Thu, Jul 16, 2015 at 5:06 PM, Pavlos Parissis pavlos.paris...@gmail.com wrote: On 16/07/2015 04:02 μμ, Krishna Kumar (Engineering) wrote: Hi John, Your suggestion works very well, and exactly what I was looking for. Thank you very much. You could also try https://github.com/unixsurfer/haproxytool Cheers, Pavlos +1 to Pavlos' tool for this type of task Baptiste
Re: cookie prefix strange behavior
Hi Roberto, Look in your log lines, block 2: HAProxy says --IN. 'IN' is for cookie persistence: 'I' means the cookie sent by the client is invalid, and 'N' means HAProxy did not perform any action on persistence. In such a case, you could try to match that there is no prefix in the cookie and redirect to a page which cleans up the cookie, then redirect the user to the login page. Baptiste On Fri, Jul 17, 2015 at 5:49 PM, mlist ml...@apsystems.it wrote: We found that this behavior does not appear if we manually clean the cookie in the browser. Is there a configuration option to invalidate the old cookie, so the client does not reuse this strange cookie that the server does not recognize? Roberto From: mlist Sent: Friday 17 July 2015 16.19 To: 'haproxy@formilux.org' Subject: cookie prefix strange behavior We have compiled and installed haproxy version 1.6-dev2. If we use cookie insert, everything works, but if we use cookie prefix (on ASP.NET_SessionId), or a stick table in which one has to specify the cookie to be sticked (again using cookie name = ASP.NET_SessionId), we see a strange behavior. BLOCK1: As you can see below, we open a browser and make a request; the cookie prefix mechanism works well, we can log in and use the application (all subsequent requests go to the same server). BLOCK2: But if we open a new browser instance (chrome in this case, but this also happens if we open IE), the client uses a strange "ASP.NET_SessionId" cookie without the haproxy prefix, and the server does not ask the client to set a new cookie. Upon login, clearly, the backend server does not recognize the cookie sent by the client (haproxy does a plain roundrobin distribution, no cookie management is done) and so the backend server returns an error. Could this be a bug in haproxy, or a misconfiguration on our side?
BLOCK1 Jul 17 14:55:49 ha_server1 haproxy[5604]: client_ip:37322 [17/Jul/2015:14:55:49.414] front_end_https~ back_end_https/SERVER1 2/0/44/18/64 302 467 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/ HTTP/1.1 Jul 17 14:55:49 ha_server1 haproxy[5604]: client_ip:37324 [17/Jul/2015:14:55:49.484] front_end_https~ back_end_https/SERVER1 2/0/5/22/29 200 6449 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/login.aspx HTTP/1.1 Jul 17 14:55:52 ha_server1 haproxy[5604]: client_ip:37328 [17/Jul/2015:14:55:52.284] front_end_https~ back_end_https/SERVER1 2/0/1/3/7 404 1424 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/segreteria HTTP/1.1 Jul 17 14:55:59 ha_server1 haproxy[5604]: client_ip:37344 [17/Jul/2015:14:55:59.452] front_end_https~ back_end_https/SERVER1 2/0/1/2/5 301 448 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/anagrafenet HTTP/1.1 Jul 17 14:55:59 ha_server1 haproxy[5604]: client_ip:37345 [17/Jul/2015:14:55:59.461] front_end_https~ back_end_https/SERVER1 2/0/1/32/36 302 435 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/anagrafenet/ HTTP/1.1 Jul 17 14:55:59 ha_server1 haproxy[5604]: client_ip:37346 [17/Jul/2015:14:55:59.501] front_end_https~ back_end_https/SERVER1 2/0/1/27/31 200 6625 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/login.aspx HTTP/1.1 Jul 17 14:56:06 ha_server1 haproxy[5604]: client_ip:37359 [17/Jul/2015:14:56:06.515] front_end_https~ 
back_end_https/SERVER1 1/0/4/64/69 302 6712 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {|946|Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} POST /app1/login.aspx HTTP/1.1 Jul 17 14:56:06 ha_server1 haproxy[5604]: client_ip:37361 [17/Jul/2015:14:56:06.588] front_end_https~ back_end_https/SERVER1 2/0/1/175/179 200 28897 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 0/0/0/0/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/anagrafenet/default.aspx HTTP/1.1 Jul 17 14:56:06 ha_server1 haproxy[5604]: client_ip:37364 [17/Jul/2015:14:56:06.777] front_end_https~ back_end_https/SERVER1 5/0/2/11/18 200 2049 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 5/5/5/5/0 0/0 {||Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, l} GET /app1/templatelibrary/menutop/sf-menu.css HTTP/1.1 Jul 17 14:56:06 ha_server1 haproxy[5604]: client_ip:37363 [17/Jul/2015:14:56:06.776] front_end_https~ back_end_https/SERVER1 6/0/3/13/22 200 3580 ASP.NET_SessionId=SERVER1~fi2b5smpq33tgwfy0qqmog45 - --VN 4/4/4/4
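The prefix setup being debugged above boils down to something like this (a sketch; addresses and the ssl server options are assumptions, names taken from the logs):

```haproxy
backend back_end_https
    # HAProxy prepends "SERVER1~" to the application's own session cookie
    cookie ASP.NET_SessionId prefix
    server SERVER1 192.0.2.31:443 check cookie SERVER1 ssl verify none
    server SERVER2 192.0.2.32:443 check cookie SERVER2 ssl verify none
```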
Re: IP binding and standby health-checks
Hi Nathan, The 'usesrc' keyword triggers this error. It needs root privileges. (just checked in the source code) Baptiste On Thu, Jul 16, 2015 at 5:13 PM, Nathan Williams nath.e.w...@gmail.com wrote: oh, i think this comment thread explains it: http://comments.gmane.org/gmane.comp.web.haproxy/20366. I'll see about assigning CAP_NET_ADMIN On Wed, Jul 15, 2015 at 4:56 PM Nathan Williams nath.e.w...@gmail.com wrote: Hi Baptiste, Sorry for the delayed response, had some urgent things come up that required more immediate attention... thanks again for your continued support. Why not using proxy-protocol between HAProxy and nginx? Sounds interesting; I'd definitely heard of it before, but hadn't looked into it since what we've been doing has been working. My initial impression is that it's a pretty big change from what we're currently doing (looks like it would at least require a brief maintenance to roll out since it requires coordinated change between client and load-balancer), but I'm not fundamentally opposed if there's significant advantages. I'll definitely take a look to see if it satisfies our requirements. I disagree, it would be only 2: the 'real' IP addresses of the load-balancers only. OK, fair point. Maybe it's just being paranoid to think that unless we're explicitly setting the source, we should account for *all* possible sources. The VIP wouldn't be the default route, so we could probably get away with ignoring it. Come to think of it... maybe having keepalived change the default route on the primary and skipping hardcoding the source in haproxy would address what we're aiming for? seems worth further investigation, as I'm not sure whether it supports this out of the box. there is no 0.0.0.0 magic values neither subnet values accepted in nginx XFF module? I wouldn't use 0.0.0.0 whether there is or not, as i wouldn't want it to be that open. 
It might be a different case for a subnet value, if we were able to put the load-balancer cluster in a separate subnet, but our current situation (managed private openstack deployment) doesn't give us quite that much network control. maybe someday soon with VXLAN or another overlay (of course, that comes with performance penalties, so maybe not). Then instead of using a VIP, you can book 2 IPs in your subnet that could be used, whatever the LB is using. Pre-allocating network IPs from the subnet that aren't permitted to be assigned to anything other than whatever instance is currently filling the load-balancer role would certainly work (I like this idea!); that's actually pretty similar to what we're doing for the internal VIP currently (the external VIP is just an openstack floating IP, aka a DNAT in the underlying infrastructure), and then adding it as an allowed address for the instance-associated network port instance in Neutron's allowed-address-pairs... It'd be an extra step when creating an LB node, but a pretty reasonable one I think, and we're already treating them differently from generic instances anyways... definitely food for thought. HAProxy rocks ! +1 * 100. :) Can you start it up with strace ?? Yep! https://gist.github.com/nathwill/ea52324867072183b695 So far, I still like the source 0.0.0.0 usesrc 10.240.36.13 solution the best, as it seems the most direct and easily understood. Fingers crossed the permissions issue is easily overcome. Cheers, Nathan W On Tue, Jul 14, 2015 at 2:58 PM Baptiste bed...@gmail.com wrote: As for details, it's advantageous for us for a couple of reasons... the realip module in nginx requires that you list trusted hosts which are permitted to set the X-Forwarded-For header before it will set the source address in the logs to the x-forwarded-for address. as a result, using anything other than the VIP means: Why not using proxy-protocol between HAProxy and nginx? 
http://blog.haproxy.com/haproxy/proxy-protocol/ So you can get rid of X-FF header limitation in nginx. (don't know if proxy-protocol implementation in nginx suffers from the same limitations). - not using the vip means we have to trust 3 addresses instead of 1 to set x-forwarded-for I disagree, it would be only 2: the 'real' IP addresses of the load-balancers only. - we have to update the list of allowed hosts on all of our backends any time we replace a load-balancer node. We're using config management, so it's automated, but that's still more changes than should ideally be necessary to replace a no-data node that we ideally can trash and replace at will. there is no 0.0.0.0 magic values neither subnet values accepted in nginx XFF module? If not, it deserves a patch ! - there's a lag between the time of a change(e.g. node replacement) and the next converge cycle of the config mgmt on the backends, so for some period the backend config will be out of sync, incorrectly trusting IP(s) that may now
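For readers following along, the proxy-protocol alternative Baptiste suggests would look roughly like the sketch below (names and addresses are invented; nginx gained PROXY protocol support in 1.5.12, so check your version). Note that nginx still needs a trusted source range, but it can be the small, stable set of load-balancer real IPs:

```
# HAProxy side: prepend the PROXY protocol header on connections to nginx
backend back_end_http
    server web1 192.0.2.10:80 check send-proxy

# nginx side (shown as comments): accept the header and use it as the real IP
#   server {
#       listen 80 proxy_protocol;
#       set_real_ip_from 192.0.2.0/24;   # trusted LB range, still required
#       real_ip_header proxy_protocol;
#   }
```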
DOC: usesrc root privileges
Hi,

The documentation is missing the usesrc requirement about root privileges. This patch adds this information to the doc.

Baptiste

From 8537d9b6c136a270c79670ebccf972a11fa86af7 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann bed...@gmail.com
Date: Fri, 17 Jul 2015 21:59:42 +0200
Subject: [PATCH] DOC: usesrc root privileges requirements

The usesrc parameter of the source statement requires root privileges.
---
 doc/configuration.txt | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index a806312..7c90ff4 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -6896,6 +6896,8 @@ source addr[:port] [interface name]
   is possible at the server level using the source server option. Refer to
   section 5 for more information.
 
+  In order to work, usesrc requires root privileges.
+
   Examples :
     backend private
       # Connect to the servers using our 192.168.1.200 source address
-- 
2.4.5
Re: Server IP resolution using DNS in HAProxy
First would be resolution of SRV records, and actually using the port supplied by the SRV record as the port for the server. I looked at the code and it doesn't seem like too much work; most of it would probably be changing the config handling accordingly. You're right, this could be an interesting option. Actually, we thought about using SRV records to populate a full backend server list with IP, name, port and weight using a single DNS query. The other one is... well, you asked for it ;) so here it goes: it would be great to express in the config something like "resolve this name and use up to X servers from the reply". The backend then allocates X servers. Assuming that the initial lookup returns Y results, the (sorted) records get assigned to the first Y servers, and the other X-Y servers get marked as down. Upon a new lookup, same procedure for a potentially changing value of Y. I realize this is a pretty bold feature request for several reasons, but I have actually spent some thought on it and think it might be doable without really violating any of HAProxy's design paradigms. I would also be willing to invest some time (code) into this myself. If you think this might be at least worth a discussion, I'd be happy to share some more detailed thoughts, and it would be great to hear your thoughts on that, too. First, there are some limitations about creating servers on the fly in a backend. So instead, we propose you pre-allocate servers in the configuration and then wait for the DNS to populate them. I don't mean a request per server; I mean one request for the whole farm :) You go one step further than the design we have about SRV records to populate the backend. We thought about using priority to decide whether a server is active or backup. The advantage is that you don't need to reload HAProxy to change your X value ;) I would welcome a contribution about the SRV record type. 
That said, before this, I have to rewrite part of the response parser to store the response in a real DNS packet structure instead of keeping data in a buffer. Baptiste
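For context, an SRV record bundles priority, weight, port and target into one answer, which is why a single query could populate a whole farm. A sketch of what such records look like in a zone file (all names invented):

```
; _service._proto.name.     TTL class type priority weight port target
_http._tcp.api.example.com.  60 IN    SRV  10       10     8080 api1.example.com.
_http._tcp.api.example.com.  60 IN    SRV  10       20     8080 api2.example.com.
_http._tcp.api.example.com.  60 IN    SRV  20       10     8080 backup1.example.com.
```

Under the priority-based design Baptiste describes, the priority-10 targets would become active servers and the priority-20 target a backup.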
Re: Mailer does not work
On Wed, Jul 15, 2015 at 9:48 AM, mlist ml...@apsystems.it wrote: We compiled haproxy-1.6-dev2.tar.gz from source. The new mailers mechanism does not seem to work; we configured it as per the manual: mailers apsmailer1 mailer smtp1 mailserver ip:10025 … … backend somebackend_https mode http balance roundrobin … email-alert mailers apsmailer1 email-alert from from mail email-alert to to mail email-alert level info … We see the server status changes in haproxy.log: Jul 15 09:42:00 localhost.localdomain haproxy[3342]: Server …/server1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue. Jul 15 09:42:00 localhost.localdomain haproxy[3342]: Server …/server1 is UP, reason: Layer6 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue. But no mail alerts are sent, and no error or warning is logged about sending mail. haproxy -f /etc/haproxy/haproxy.cfg -c does not return any error. All seems to be right, but mail alerts are not sent. Roberto

Hi Roberto, Could you please take a tcpdump on port 10025 and confirm HAProxy tries to connect to the SMTP server? Baptiste
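The check Baptiste asks for can be taken with a capture like this (port taken from the mailers section quoted above; a debugging sketch, not a definitive procedure):

```
# watch for HAProxy opening SMTP connections to the mailer on port 10025
tcpdump -ni any tcp port 10025

# also worth verifying in the config: email-alert only fires for events at
# or above the configured level, so 'email-alert level info' should cover
# server UP/DOWN transitions
```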
Re: IP binding and standby health-checks
On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams nath.e.w...@gmail.com wrote: Hi all, I'm hoping I can get some advice on how we can improve our failover setup. At present, we have an active-standby setup. Failover works really well, but on the standby, none of the backend servers are marked as up, since haproxy is bound to the VIP that is currently on the active member (managed with keepalived). As a result, there's an initial period of a second or two, after the failover triggers and the standby claims the VIP, where the backend servers have not yet passed a health-check on the new active member. It seems like the easiest way to sort it out would be if the health-checks weren't also bound to the VIP, so that the standby could complete them successfully. I do still want the proxied requests bound to the VIP though, for the benefit of our backends' real-ip configuration. Is that doable? If not, is there some way to have the standby follow the active member's view of the backends, or another way I haven't seen yet? Thanks! Nathan W

Hi Nathan, Maybe you could share your configuration. Please also let us know the real and virtual IPs configured on your master and slave HAProxy servers. Baptiste
Re: Haproxy 1.5.9 logging
On Tue, Jul 14, 2015 at 10:57 AM, Haim Ari haim@startapp.com wrote: Hello, How can I log requests only if the path begins with /testing? I already use an acl with path_beg to redirect to a backend. How can I use the same method for logging? (or any other way to achieve the above) Thank you, Haim

Hi Haim, Simply use the http-request set-log-level statement, like: http-request set-log-level silent unless { path_beg -i /testing } Baptiste
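In context, a minimal frontend using the suggested statement might look like this (section and backend names are illustrative):

```
frontend web
    bind :80
    option httplog
    # emit logs only for requests whose path begins with /testing;
    # everything else is silenced before the log line is produced
    http-request set-log-level silent unless { path_beg -i /testing }
    default_backend apiservers
```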
Re: Rewrite cookie path cookie domain
Please repost your question. I can't see it in my mail history. Baptiste

On Tue, Jul 14, 2015 at 3:33 PM, rickytato rickytato rickyt...@r2consulting.it wrote: Can anyone help me? Should I keep using Nginx?

2015-07-07 10:46 GMT+02:00 rickytato rickytato rickyt...@r2consulting.it: 1.5.12

2015-07-06 17:58 GMT+02:00 Aleksandar Lazic al-hapr...@none.at: Dear rickytato rickytato. On 06-07-2015 15:32, rickytato rickytato wrote: Hi all, I have a problem rewriting the cookie path and cookie domain in HAProxy; I have an Nginx configuration, but I want to move this proxy pass from Nginx to HAProxy. Which version of haproxy do you use? haproxy -vv? Cheers, Aleks
Re: IP binding and standby health-checks
As for details, it's advantageous for us for a couple of reasons... the realip module in nginx requires that you list trusted hosts which are permitted to set the X-Forwarded-For header before it will set the source address in the logs to the x-forwarded-for address. as a result, using anything other than the VIP means: Why not using proxy-protocol between HAProxy and nginx? http://blog.haproxy.com/haproxy/proxy-protocol/ So you can get rid of X-FF header limitation in nginx. (don't know if proxy-protocol implementation in nginx suffers from the same limitations). - not using the vip means we have to trust 3 addresses instead of 1 to set x-forwarded-for I disagree, it would be only 2: the 'real' IP addresses of the load-balancers only. - we have to update the list of allowed hosts on all of our backends any time we replace a load-balancer node. We're using config management, so it's automated, but that's still more changes than should ideally be necessary to replace a no-data node that we ideally can trash and replace at will. there is no 0.0.0.0 magic values neither subnet values accepted in nginx XFF module? If not, it deserves a patch ! - there's a lag between the time of a change(e.g. node replacement) and the next converge cycle of the config mgmt on the backends, so for some period the backend config will be out of sync, incorrectly trusting IP(s) that may now be associated with another host, or wrongly refusing to set the source ip to the x-forwarded-for address. this is problematic for us, since we have a highly-restricted internal environment, due to our business model (online learn-to-code school) being essentially running untrusted code as a service. Then instead of using a VIP, you can book 2 IPs in your subnet that could be used, whatever the LB is using. So you don't rely on the VIP, whatever the HAProxy box real IP, you configure one of the IP above as an alias and you use it from HAProxy. 
Happily, your suggested solution seems to achieve what we're aiming for (thanks!). The health-checks are coming from the local IP, and proxied requests from clients are coming from the VIP. The standby is seeing backends as UP since they're able to pass the health-checks. Progress! Finally we made it :) HAProxy rocks ! Unfortunately, this seems to cause another problem with our config... though haproxy passes the config validation (haproxy -c -f /etc/haproxy.cfg), it fails to start up, logging an error like Jul 14 20:22:48 lb01.stage.iad01.treehouse haproxy-systemd-wrapper[25225]: [ALERT] 194/202248 (25226) : [/usr/sbin/haproxy.main()] Some configuration options require full privileges, so global.uid cannot be changed.. We can get it to work by removing the user and group directives from the global section and letting haproxy run as root, but having to escalate privileges is also less than ideal... I almost hate to ask for further assistance, but do you have any suggestions related to the above? FWIW, we're using haproxy 1.5.4 and kernel 4.0.4 on CentOS 7. Some features require root privileges, that said, from a documentation point of view, It doesn't seem the 'source' keyword like I asked you to set it up is one of them. Can you start it up with strace ?? Baptiste Regards, Nathan W On Tue, Jul 14, 2015 at 12:31 PM Baptiste bed...@gmail.com wrote: Nathan, The question is: why do you want to use the VIP to get connected on your backend server? Please give a try to the following source line, instead of your current one: source 0.0.0.0 usesrc 10.240.36.13 Baptiste On Tue, Jul 14, 2015 at 9:06 PM, Nathan Williams nath.e.w...@gmail.com wrote: OK, that did not seem to work, so I think the correct interpretation of that addr option must be as an override for what address/port to perform the health-check *against* instead of from (which makes more sense in context of it being a server option). 
i was hoping for an option like health-check-source or similar, if that makes sense; I also tried removing the source directive and binding the frontend to the VIP explicitly, hoping that would cause the proxied requests to originate from the bound IP, but that didn't seem to do it either. While the standby was then able to see the backends as up, the proxied requests to the backends came from the local IP instead of the VIP. Regards, Nathan W On Tue, Jul 14, 2015 at 8:58 AM Nathan Williams nath.e.w...@gmail.com wrote: Hi Baptiste/Jarno, Thanks so much for responding. addr does indeed look like a promising option (though a strangely lacking explanation in the docs, which explains what it makes possible while leaving the reader to deduce what it actually does), thanks for pointing that out. Here's our config: https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f (believe it or not this is the trimmed down version from what we used to have :), but backends, how
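Putting the working pieces of this thread together, the backend would look roughly like the sketch below (the VIP 10.240.36.13 is quoted from the thread; the server name and address are invented). As noted later in the thread, usesrc needs root privileges or CAP_NET_ADMIN:

```
backend nginx_servers
    # proxied traffic leaves from the VIP, so nginx's realip module only
    # has to trust one address; because the bind side is 0.0.0.0, health
    # checks leave from the local address and keep working on the standby
    # node that does not currently hold the VIP
    source 0.0.0.0 usesrc 10.240.36.13
    server web1 10.240.40.21:80 check
```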
Re: Server IP resolution using DNS in HAProxy
On Sun, Jul 12, 2015 at 11:38 PM, Baptiste bed...@gmail.com wrote: Hi all, As you may have noticed already, the HAProxy 1.6-dev2 version has integrated a new feature: server IP address resolution using DNS. The main purpose of this development is to make HAProxy aware of a server IP change when using environments such as AWS or docker. Here is the current status of HAProxy and server name resolution:
- when parsing the configuration, HAProxy uses libc functions and the resolvers provided by the operating system => if the server can't be resolved at this step, then HAProxy can't start
- in order to make DNS resolution operational at run time, health checks must be enabled on the server. Actually, the health check triggers name resolution
- HAProxy uses its own resolvers via the new section called resolvers
- HAProxy queries ALL resolvers and takes the first non-error response
- a resolution is considered in error when ALL resolvers failed (whatever the failure was)
- when a resolution is successful, HAProxy keeps it for the hold valid period. Once hold valid has expired, the next health check will trigger a new DNS resolution

Documentation about it:
- http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#resolvers%20%28Server%20and%20default-server%20options%29
- http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#5.3

Now that the current status is briefly explained, we have a few WIP tasks we want to discuss with the community. We want to hear feedback about additional features we have in mind. First, we want to fix the error where HAProxy fails to start because the resolvers pointed to by the system can't resolve a server's IP address (but HAProxy's resolvers could). The idea here would be to create a new flag on the server to tell HAProxy which IP to use. The server would be enabled when the IP has been provided by the expected tool. 
E.g., a new server directive could be init-addr (for the initial IP address) and would take a list of values from 'libc', 'dns', 'a.b.c.d' (an arbitrary IP address), etc. (non-exhaustive list, more to come...). Currently, HAProxy works like this: init-addr libc,dns. A new value could be: init-addr dns. Or: init-addr 1.2.3.4,dns.

Second, we want to log server IP changes. For now, there are 2 ways to change a server IP address: DNS resolution, or the stats socket command set server addr. 2 options:
- we set up a parameter to enable logging server IP changes, whatever has updated the server IP
- we allow HAProxy to log server IP changes from a specific source only. E.g., log only when DNS changes a server's IP

Third, we have to handle DNS response errors. We thought about the 4 following cases:
- NX domain: all DNS servers can't resolve this host name
- response timeout: no response was received
- query refused: the DNS servers refused our query
- other: all other cases
=> For each error, we can maintain the latest good IP for a period decided by the user. E.g., if you want to keep a server up for 5 minutes while your servers return NX, then set up hold nx 5m in your resolvers section.

Fourth, we need a new server state for when a DNS resolution is in error. Currently, we have 2 types of state: operational or administrative
- administrative states: ready, maint, drain
- operational states: down, failed, stopped
We have to create a new state (it should be operational) which reports that HAProxy is not able to perform a proper DNS resolution for this server. Once in that state, the server won't be able to get new traffic, and health checks will be stopped too. HAProxy will turn the server into this state after the hold period described in step #3.

That's all for now. Looking forward to reading your feedback! Baptiste

Hey everyone! I know the message above is very long, but we really need your feedback! 
Another point I want to add: do you think it would make sense to allow updating the server hostname? It could be useful in environments where people want to pre-configure a farm for scalability, but server host names are not predictable (Amazon??). Baptiste
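As a concrete illustration of the feature described above, a resolvers section and a server using it look like this in 1.6-dev (nameserver address and hostnames invented):

```
resolvers mydns
    nameserver dns1 10.0.0.53:53
    resolve_retries 3
    timeout retry   1s
    hold valid      10s

backend app
    # 'check' is required: the health check is what triggers run-time
    # DNS resolution via the 'mydns' resolvers section
    server app1 app.example.com:80 check resolvers mydns
```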
Re: IP binding and standby health-checks
Nathan, The question is: why do you want to use the VIP to get connected on your backend server? Please give a try to the following source line, instead of your current one: source 0.0.0.0 usesrc 10.240.36.13 Baptiste On Tue, Jul 14, 2015 at 9:06 PM, Nathan Williams nath.e.w...@gmail.com wrote: OK, that did not seem to work, so I think the correct interpretation of that addr option must be as an override for what address/port to perform the health-check *against* instead of from (which makes more sense in context of it being a server option). i was hoping for an option like health-check-source or similar, if that makes sense; I also tried removing the source directive and binding the frontend to the VIP explicitly, hoping that would cause the proxied requests to originate from the bound IP, but that didn't seem to do it either. While the standby was then able to see the backends as up, the proxied requests to the backends came from the local IP instead of the VIP. Regards, Nathan W On Tue, Jul 14, 2015 at 8:58 AM Nathan Williams nath.e.w...@gmail.com wrote: Hi Baptiste/Jarno, Thanks so much for responding. addr does indeed look like a promising option (though a strangely lacking explanation in the docs, which explains what it makes possible while leaving the reader to deduce what it actually does), thanks for pointing that out. Here's our config: https://gist.github.com/nathwill/d30f2e9cc0c97bc5fc6f (believe it or not this is the trimmed down version from what we used to have :), but backends, how they propagate in this microservice-oriented world of ours... ). As for addresses, the VIP is 10.240.36.13, and the active/standby local addresses are .11 and .12. 
The problem is basically that, the way it's currently configured, when .11 is active and has the .13 address, health-checks from haproxy on the .12 host also originate from the .13 address (guessing due to the source line), and so never return and are (rightfully) marked by haproxy as L4CON network timeouts. I'm going to try the addr config and report back; fingers crossed! Cheers, Nathan W

On Tue, Jul 14, 2015 at 5:21 AM Baptiste bed...@gmail.com wrote: On Mon, Jul 13, 2015 at 6:03 PM, Nathan Williams nath.e.w...@gmail.com wrote: Hi all, I'm hoping I can get some advice on how we can improve our failover setup. At present, we have an active-standby setup. Failover works really well, but on the standby, none of the backend servers are marked as up, since haproxy is bound to the VIP that is currently on the active member (managed with keepalived). As a result, there's an initial period of a second or two, after the failover triggers and the standby claims the VIP, where the backend servers have not yet passed a health-check on the new active member. It seems like the easiest way to sort it out would be if the health-checks weren't also bound to the VIP, so that the standby could complete them successfully. I do still want the proxied requests bound to the VIP though, for the benefit of our backends' real-ip configuration. Is that doable? If not, is there some way to have the standby follow the active member's view of the backends, or another way I haven't seen yet? Thanks! Nathan W Hi Nathan, Maybe you could share your configuration. Please also let us know the real and virtual IPs configured on your master and slave HAProxy servers. Baptiste
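For anyone landing on this thread from a search: as Nathan concluded, the server addr parameter changes the address the health check connects *to*, not the address it originates from. A sketch (addresses and names invented):

```
backend servers
    # health-check a dedicated endpoint on port 8080 instead of the
    # service port; regular traffic still goes to 10.240.40.21:80
    server web1 10.240.40.21:80 check addr 10.240.40.21 port 8080
```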
FIX: wrong time unit for some default DNS timers
Hi,

Madison May reported that the timeouts applied by the default configuration are improperly set up. This patch fixes this:
- hold valid defaults to 10s
- timeout retry defaults to 1s

Baptiste

From d84e08b599c30fb1d0d35a3715d76c331ee4c1c4 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann bed...@gmail.com
Date: Tue, 14 Jul 2015 21:42:49 +0200
Subject: [PATCH] FIX: wrong time unit for some DNS default parameters

Madison May reported that the timeouts applied by the default
configuration are improperly set up. This patch fixes this:
- hold valid defaults to 10s
- timeout retry defaults to 1s
---
 src/cfgparse.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 4015804..fb9d45b 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2187,8 +2187,8 @@ int cfg_parse_resolvers(const char *file, int linenum, char **args, int kwm)
 	curr_resolvers->id = strdup(args[1]);
 	curr_resolvers->query_ids = EB_ROOT;
 	/* default hold period for valid is 10s */
-	curr_resolvers->hold.valid = 10;
-	curr_resolvers->timeout.retry = 1;
+	curr_resolvers->hold.valid = 10000;
+	curr_resolvers->timeout.retry = 1000;
 	curr_resolvers->resolve_retries = 3;
 	LIST_INIT(&curr_resolvers->nameserver_list);
 	LIST_INIT(&curr_resolvers->curr_resolution);
-- 
2.4.0
Re: haproxy/hapee Transparent LB
On Tue, Jul 14, 2015 at 7:15 PM, Bearly Breathin bearly.breet...@gmail.com wrote: I'm at a bit of a loss… Last week I tried, quite unsuccessfully, to make haproxy work as I understand it should after reading the docs, FAQs, and a variety of other sources I found with Google. I purchased hapee last Friday so that I could obtain support and hopefully stop banging my head against the wall. I was given a configuration to test (Thanks, Support!), but it did not work. What I need is for hapee to do round-robin load-balancing and source-IP spoofing of syslog traffic to multiple destination hosts via TCP.

Actual Configuration:

    SourceHost-10.0.0.1---                           ---DestHost-10.1.0.1
                         |                           |
    SourceHost-10.0.0.2-10.0.0.254-hapee-10.1.0.254-DestHost-10.1.0.2
                         |                           |
    SourceHost-10.0.0.3---                           ---DestHost-10.1.0.3

Effective Functionality:

                          ---DestHost-10.1.0.1
                          |
    SourceHost-10.0.0.1-- --DestHost-10.1.0.2
                          |
                          ---DestHost-10.1.0.3

                          ---DestHost-10.1.0.1
                          |
    SourceHost-10.0.0.2-- --DestHost-10.1.0.2
                          |
                          ---DestHost-10.1.0.3

                          ---DestHost-10.1.0.1
                          |
    SourceHost-10.0.0.3-- --DestHost-10.1.0.2
                          |
                          ---DestHost-10.1.0.3

Should this be possible with hapee/haproxy? Thanks! BB

Hi, It seems you double-posted, here and on haproxy.com. I'll answer you on haproxy.com and we'll share the definitive response here (I need some private information which you don't want to share on a public mailing list) :) Baptiste
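For what it's worth, the usual open-source HAProxy shape for this kind of setup is full transparent proxying, which needs root (or CAP_NET_ADMIN/CAP_NET_RAW) plus kernel TPROXY support and matching routing rules on the box. A sketch only, using the addresses from the diagram; the haproxy.com support answer is authoritative for hapee:

```
listen syslog_tcp
    bind 10.0.0.254:514
    mode tcp
    balance roundrobin
    # spoof the original client IP toward the destination hosts
    source 0.0.0.0 usesrc clientip
    server log1 10.1.0.1:514 check
    server log2 10.1.0.2:514 check
    server log3 10.1.0.3:514 check
```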
Re: Contribution: change response line
On Mon, Jul 13, 2015 at 7:22 AM, Bowen Ni bowen1...@gmail.com wrote: Hi, With the Lua integration in HAProxy 1.6, one can change the request method, path, uri, header, response header, etc., except the response line.

Hi Bowen, You can already change the fields above using the HAProxy 1.6 statements http-request and http-response. http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#http-request http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#http-response You don't need Lua for this, unless your changes are complicated; you may also find a converter which does the transformation you need: http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#7.3.1 I'd like to contribute the following methods to allow modification of the response line. Actually, that's right: there is currently no http-response set-return-code in HAProxy. I'll let the Lua experts answer the rest of the mail :) Baptiste
Re: LB as a first row of defence against DDoS
Thank you for everything you do. You are one of the unsung heroes who make the guts of the Internet possible. Hehe, don't you feel like you're exaggerating a bit here? :-) Willy

Nope. Baptiste
Re: Need your help on HAProxy Load balancing algorithms
On Wed, Jun 24, 2015 at 10:13 AM, Vinod Kishan Lalbeg vklal...@yahoo.com wrote: Dear Sir/Madam, I am a PhD student in Pune, India. I am working on Dynamic Algorithms for High-Availability Cloud Server Load Balancing in a Linux Environment for QoS. I am very new to these concepts and this technology. As I was reading the Red_Hat_Enterprise_Linux-7-Load_Balancer_Administration-en-US document, I came across the terms Keepalived and HAProxy and started reading. I wanted your help with the load balancing algorithms used in HAProxy. If I can get the source code to study them along with documentation, it will be a great help. I also wanted to know: if I come up with an idea/algorithm for load balancing, can you test and verify it, so that I can have a detailed report on the working of the algorithm? Please respond to my mail. Thanks and regards, Mr. Vinod K. Lalbeg, Asst. Prof., NWIMSR, Pune-1

Hi Vinod, First, good luck with your PhD. For load-balancing algorithms, you want to read this part of the doc: http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#balance As for the source code, it's available here: http://git.haproxy.org/?p=haproxy.git Baptiste
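To make the pointer above concrete, the algorithm is chosen per backend with the balance keyword; a short sketch of the built-ins (servers and weights invented):

```
backend app
    # round-robin honouring server weights; other built-in algorithms:
    #   leastconn   - pick the server with the fewest connections (dynamic)
    #   source      - hash of the client IP (session affinity)
    #   first       - fill servers in declaration order
    #   uri, url_param, hdr(<name>) - content-based hashing variants
    balance roundrobin
    server s1 10.0.0.1:80 weight 10 check
    server s2 10.0.0.2:80 weight 20 check
```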
Re: LB as a first row of defence against DDoS
Hi all, Sorry for not answering sooner, but you know, you say "I'll do it in a couple of minutes", then you focus on something else, then you forget, then you say "I'll do it in a couple of minutes", then... :) First of all, this type of article takes a long time to write, review, fix, test, etc. So I need a long period of focus to write this type of article, and such periods are quite rare; I use them to contribute code to... HAProxy :) That said, I'll write a new DDoS protection article once HAProxy 1.6 is released, since it embeds some new features which are interesting on this topic. Concerning your request, I don't understand it! Could you provide me with your own configuration (or a fake one) that you would like to see protected, adding comments about the type of protection you expect; then I'll see what I can do. Baptiste
Re: Odd SSL performance
Phil, First, use the '-k' option on ab to keep connections alive on the ab side. From a pure benchmark point of view, using the loopback is useless! Even more so if all VMs are hosted on the same hypervisor. You won't be able to draw any accurate conclusion from your test, because the injector VM is impacting the HAProxy VM, which might in turn be impacting the server VMs... Baptiste

On Thu, Jun 18, 2015 at 2:41 PM, Phil Daws ux...@splatnix.net wrote: Hello Lukas: The path is as follows: Internet - HAProxy [Frontend:443 - Backend:80] - 6 x NGINX Yeah, unfortunately, due to the application behind NGINX, our benchmarking has to be without keep-alives :( Thanks, Phil

- On 18 Jun, 2015, at 13:38, Lukas Tribus luky...@hotmail.com wrote: Hi Phil, Hello all: we are rolling out a new system and are testing the SSL performance with some strange results. This is all being performed on a cloud hypervisor instance with the following: You are saying nginx listens on 443 (SSL) and 80, and you connect to those ports directly from ab. Where in that picture is haproxy? Have tried adding the option prefer-last-server but that did not make a great deal of difference. Any thoughts please as to what could be wrong? Without keepalive it won't make any difference. Enable keepalive with ab (-k). Lukas
Re: Odd SSL performance
Phil, without -k, HAProxy spends its time computing TLS keys. Can you run 'openssl speed rsa2048' and report the number here? My guess is that it shouldn't be too far from 400 :) Baptiste

On Thu, Jun 18, 2015 at 3:20 PM, Phil Daws ux...@splatnix.net wrote: Hello Baptiste: we were seeing lower tps from a remote system to the front-end LB, hence trying to exclude client-side issues by using the LB interface. Yes, when we use '-k', we do see a huge difference, but it's interesting that we pretty much always get 390 tps for a single core, and with nbproc 2 then 780. Appreciate the input, Baptiste & Lukas. Thanks, Phil.

- On 18 Jun, 2015, at 14:15, Baptiste bed...@gmail.com wrote: Phil, First, use the '-k' option on ab to keep connections alive on the ab side. From a pure benchmark point of view, using the loopback is useless! Even more so if all VMs are hosted on the same hypervisor. You won't be able to draw any accurate conclusion from your test, because the injector VM is impacting the HAProxy VM, which might in turn be impacting the server VMs... Baptiste On Thu, Jun 18, 2015 at 2:41 PM, Phil Daws ux...@splatnix.net wrote: Hello Lukas: The path is as follows: Internet - HAProxy [Frontend:443 - Backend:80] - 6 x NGINX Yeah, unfortunately, due to the application behind NGINX, our benchmarking has to be without keep-alives :( Thanks, Phil - On 18 Jun, 2015, at 13:38, Lukas Tribus luky...@hotmail.com wrote: Hi Phil, Hello all: we are rolling out a new system and are testing the SSL performance with some strange results. This is all being performed on a cloud hypervisor instance with the following: You are saying nginx listens on 443 (SSL) and 80, and you connect to those ports directly from ab. Where in that picture is haproxy? Have tried adding the option prefer-last-server but that did not make a great deal of difference. Any thoughts please as to what could be wrong? Without keepalive it won't make any difference. Enable keepalive with ab (-k). Lukas
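The two measurements discussed in this thread would be taken roughly like this (the URL is a placeholder; 'ab' is Apache Bench):

```
# with keep-alive: the TLS handshake cost is paid once per connection,
# so you measure request throughput rather than handshake throughput
ab -k -n 10000 -c 50 https://your-haproxy-vip/

# raw single-core RSA signing rate; each new TLS connection costs roughly
# one private-key operation, which is why the rsa2048 sign/s figure tends
# to sit close to the tps observed without keep-alive
openssl speed rsa2048
```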
Re: Location of log file of haproxy
On Thu, Jun 18, 2015 at 7:17 PM, Ajay Kumar ajaykumarm...@gmail.com wrote: Hi, I am using HAProxy in a smartOS VM of Joyent but have failed to locate its log file. I searched the internet too but found nothing more than the following, and there is no /etc/rsyslog.d/ folder in smartOS. http://kvz.io/blog/2010/08/11/haproxy-logging/ https://www.percona.com/blog/2014/10/03/haproxy-give-me-some-logs-on-centos-6-5/ I am looking for the following help: 1. where is the log file of HAProxy? 2. how could I get the logs into my own log file, other than syslog (which has been mentioned in many places on the internet)? Regards, Ajay

Hi Ajay, HAProxy sends its logs to a syslog server. So first, ensure your syslog server and HAProxy are properly configured. Then, reading your syslog configuration will tell you where the files are. Baptiste
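To make Baptiste's answer concrete, the minimal wiring on the haproxy side looks like this; a sketch assuming a syslog daemon listening on the local UDP socket, with an illustrative facility:

```
# haproxy.cfg: ship logs to the local syslog daemon on facility local0
global
    log 127.0.0.1:514 local0

defaults
    log global
    option httplog
```

On the syslog side, a matching rule such as `local0.* /var/log/haproxy.log` (in a file under /etc/rsyslog.d/ on Linux, or directly in the main syslog configuration on systems without that directory, like smartOS) then decides the final file path.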
Re: [ANNOUNCE] haproxy-1.6-dev2
: http://git.haproxy.org/git/haproxy.git/ Git Web browsing : http://git.haproxy.org/?p=haproxy.git Changelog: http://www.haproxy.org/download/1.6/src/CHANGELOG Cyril's HTML doc : http://cbonte.github.com/haproxy-dconv/configuration-1.6.html Regards, Willy

It's a great release! Looking forward to playing with it! Note that in my lab, 1.6-dev performs slightly better than 1.5. Baptiste
Re: Disable/enable server for all backends
On Wed, Jun 17, 2015 at 10:23 PM, jeff saremi jeffsar...@hotmail.com wrote: In the command: disable server backend/server can the backend be left out? or passed as a wildcard? this way when a server is disabled or enabled it will be for all backends. Hi Jeff, This is currently not doable. Baptiste
Re: Health check of backends without explicit health-check?
Hi Krishna, Usually, people use a service discovery tool to do this. Other people use a local service to cache the check response and serve it to all haproxy servers. Baptiste

On Wed, Jun 17, 2015 at 11:38 AM, Krishna Kumar (Engineering) krishna...@flipkart.com wrote: On Tue, Jun 16, 2015 at 4:29 PM, Krishna Kumar (Engineering) krishna...@flipkart.com wrote: I was referring to HAProxy as the LB here. If there is any means to do this, kindly let me know. Thanks, - Krishna Kumar

Hi list, Is there any way to log, report, notify, or identify any backend that is not responding, without using explicit health-checks? The reason for this is that we are planning a big deployment of LBs/servers, something along the lines of: LB1, LB2, ... LB100 or more ^ | v thousands of servers as backends, where many of the LBs could share the same backends. Doing a health-check from many LBs to the same servers is a possible load issue on the servers. Is there any other way, based on response timeout or something else, to determine which of the backends are not responding, and to retrieve that information? Thanks, - Krishna Kumar
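Within haproxy itself there is also a middle ground worth noting: passive health observation, where each LB judges server health from the live traffic it already forwards instead of sending dedicated probes. A hedged sketch, with illustrative names and thresholds:

```
backend be_app
    # "observe layer7" infers health from real HTTP responses rather than
    # dedicated probes; after 10 consecutive errors the server is marked
    # down. A slow active check (inter 30s) is still needed so that a
    # recovered server can be brought back up.
    server app1 192.0.2.10:80 check inter 30s observe layer7 error-limit 10 on-error mark-down
```

This keeps probe load near zero while traffic flows, at the cost of one infrequent active check per server for recovery.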
Re: HAProxy Stats and SSL Problems
As stated by Piba-nl, your error is here: listen stats :44300 bind *:44300 ssl crt /etc/ssl/private/the.pem.withkey.pem When you declare your listen section like this, it is equivalent to: listen stats bind :44300 bind *:44300 ssl crt /etc/ssl/private/the.pem.withkey.pem Which means that 2 listening sockets will get the traffic, one deciphering the traffic, and the other one not... Simply remove the ':44300' from your listen section definition. Baptiste
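Applied to the configuration above, the fixed section would look like this; the stats directives beyond the bind line are illustrative additions, not taken from the original mail:

```
listen stats
    bind *:44300 ssl crt /etc/ssl/private/the.pem.withkey.pem
    stats enable
    stats uri /
```

With the address removed from the section header, only the single deciphering socket receives traffic.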
Re: Need help about ACLs settings
On Thu, Jun 11, 2015 at 11:06 AM, Thibault LABRUT t.lab...@pickup-services.com wrote: Hello, I am going to install HAProxy. My architecture is as follows: - 2 servers in the DMZ = reverse proxy (RP) - 2 servers in the LAN = load balancing (LB) Several applications contact the RP with different IP addresses but always on the same port. With the settings below, the connection is up:

RP settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 default_backend test # Backend backend test server srv_test test.maycompany.local:42 check

LB settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 default_backend test # Backend backend test balance roundrobin server test01 xx.xx.xx.xx:42 check server test02 xx.xx.xx.xx:42 check

But in this case the connection is down:

RP settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 # ACL acl acl_test src 12.34.56.78 (client IP) use_backend test if acl_test # Backend backend test server srv_test test.maycompany.local:42 check

LB settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 # ACL acl acl_test src 12.34.56.78 use_backend test if acl_test # Backend backend test balance roundrobin server test01 xx.xx.xx.xx:42 check server test02 xx.xx.xx.xx:42 check

Can you tell me what is wrong with my settings? Best Regards, Thibault Labrut.

Hi Thibault, In the second case, you don't have any default backend, so you'll get a 503 unless your source IP is 12.34.56.78. Baptiste
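In config terms, the broken frontend only needs one extra line; a sketch reusing the placeholder addresses from the mail:

```
frontend http_test
    bind xx.xx.xx.xx:42
    capture request header Host len 200
    acl acl_test src 12.34.56.78
    use_backend test if acl_test
    # Without this fallback, any client other than 12.34.56.78 gets a 503
    default_backend test
```

If the intent really is to refuse everyone else, the 503 is expected behavior and an explicit deny rule would state that intent more clearly.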
Re: The cause for 504's
On Thu, Jun 11, 2015 at 5:41 PM, Joseph Lynch joe.e.ly...@gmail.com wrote: Jun 10 17:27:33 localhost haproxy[23508]: 10.126.160.11:37139 [10/Jun/2015:17:26:03.027] http-in resub-bb-default/njorch0pe16 30935/0/1/-1/90937 504 194 - - sH-- 16/14/0/0/0 0/0 {569760396|297|RESUB|EMAIL|0|9001|0|0|1.0|NJ|60} POST /somepath HTTP/1.1 The interesting bit of this to me is the timing events: 30935/0/1/-1/90937. My understanding of http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.4 indicates that this took 30s for the proxy to receive the client request and over 90 seconds before timing out. What do you have timeout server set to? The docs suggest multiples of 3 usually indicate packet loss. Well, retransmits occur after 3s, 9s, 27s, etc. In this case, I guess the timeout server is 60s, which is not enough, but obviously already high! So it might be worth running tcpdump on your outgoing traffic on the proxy and on the incoming traffic on your service's server, and trying to see where these seconds are coming from (wireshark can be helpful to find these long sessions). If your application log doesn't show the request, then that to me is more evidence that your requests are having issues getting from your proxy to your backend servers. Very true, tcpdump is your friend! Have you noticed any common pattern between those 504s? Same source IP, same cookie value, same URLs, same server, etc.? Baptiste
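The knob being discussed lives in the defaults (or backend) section. An illustrative sketch, not the poster's actual values:

```
defaults
    timeout connect 5s
    timeout client  60s
    # An sH termination flag with ~90s of total time points at the server
    # side; raise timeout server only if the application is genuinely
    # expected to take that long to produce its response headers.
    timeout server  120s
```

Raising the timeout hides the symptom, though; if tcpdump shows retransmissions, the real fix is on the network or application side.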
Re: Need help about ACLs settings
Or enable the proxy protocol: http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.5.html#send-proxy http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.5.html#accept-proxy Baptiste

On Thu, Jun 11, 2015 at 11:56 AM, Thierry FOURNIER tfourn...@haproxy.com wrote: On Thu, 11 Jun 2015 09:06:43 + Thibault LABRUT t.lab...@pickup-services.com wrote: Hello, I am going to install HAProxy. My architecture is as follows: - 2 servers in the DMZ = reverse proxy (RP) - 2 servers in the LAN = load balancing (LB) Several applications contact the RP with different IP addresses but always on the same port. With the settings below, the connection is up: RP settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 default_backend test # Backend backend test server srv_test test.maycompany.local:42 check LB settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 default_backend test # Backend backend test balance roundrobin server test01 xx.xx.xx.xx:42 check server test02 xx.xx.xx.xx:42 check But in this case the connection is down: # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 # ACL acl acl_test src 12.34.56.78 (client IP) use_backend test if acl_test # Backend backend test server srv_test test.maycompany.local:42 check LB settings # Frontend frontend http_test bind xx.xx.xx.xx:42 capture request header Host len 200 # ACL acl acl_test src 12.34.56.78 use_backend test if acl_test # Backend backend test balance roundrobin server test01 xx.xx.xx.xx:42 check server test02 xx.xx.xx.xx:42 check Can you tell me what is wrong with my settings?

Hi, If I understand correctly, you have two HAProxy instances chained: the RP in front and the LB behind. In this case, the LB load balancer cannot know the original source IP of the connections it receives, because those connections are established by the RP with its own IP.
You can use the X-Forwarded-For header to carry the original source IP; the directive is option forwardfor. On the LB HAProxy, you can then use a sample that returns the content of the x-forwarded-for header, like this: acl acl_test fhdr(x-forwarded-for) -m ipv4 12.34.56.78 Best regards, Thierry Best Regards, Thibault Labrut.
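The two approaches, sketched side by side with the names from the thread; the fhdr match line mirrors Thierry's suggestion, and the section names are illustrative:

```
# Option 1: X-Forwarded-For. On the RP backend:
backend test
    option forwardfor
    server srv_test test.maycompany.local:42 check
# ...and on the LB frontend, match the forwarded header instead of src:
#   acl acl_test fhdr(x-forwarded-for) -m ipv4 12.34.56.78

# Option 2: proxy protocol (Baptiste's suggestion).
# On the RP, add send-proxy to the server line:
#   server srv_test test.maycompany.local:42 check send-proxy
# ...and on the LB, add accept-proxy to the bind line:
#   bind xx.xx.xx.xx:42 accept-proxy
```

With the proxy protocol, src on the LB sees the real client IP, so the existing src-based ACL keeps working unchanged; with X-Forwarded-For, every ACL on the LB has to switch to the header-based sample.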
Re: Limiting concurrent range connections
If you could give more information about the issue: share the haproxy version, compilation procedure, etc., and some gdb output... Baptiste

On Thu, Jun 4, 2015 at 1:43 PM, Sachin Shetty sshe...@egnyte.com wrote: I did try it; it needs 1.6-dev1, and that version segfaults as soon as the request is made: (egnyte_server)egnyte@egnyte-laptop:~/haproxy$ ~/haproxy/sbin/haproxy -f conf/haproxy.conf -d [WARNING] 154/044207 (24974) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear. Note: setting global.maxconn to 2000. Available polling systems : epoll : pref=300, test result OK poll : pref=200, test result OK select : pref=150, test result FAILED Total: 3 (2 usable), will use epoll. Using epoll() as the polling mechanism. :haproxy_l2.accept(0005)=0009 from [192.168.56.102:50119] Segmentation fault Thanks Sachin

On 6/4/15 3:45 PM, Baptiste bed...@gmail.com wrote: Hi Sachin, Look at my conf: I turned your tcp-request content statement into http-request. Baptiste

On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote: Tried it, I don't see the table populating at all. stick-table type string size 1M expire 10m store conn_cur acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download #tcp-request content track-sc1 base32 if is_range is_path_throttled http-request set-header X-track %[url] tcp-request content track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled (egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 | socat /tmp/haproxy.sock stdio # table: haproxy_l2, type: string, size:1048576, used:0 (egnyte_server)egnyte@egnyte-laptop:~$

On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote: Yes, the url sample copies the whole URL as sent by the client. Simply give it a try on a staging server and let us know the status. Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com wrote: Thanks Baptiste - will http-request set-header X-track %[url] help me track URLs with query parameters as well? On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote: On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com wrote: Hi, I am trying to write some throttles that would limit concurrent connections for Range requests + specific URLs. For example, I want to allow only 2 concurrent range requests downloading the file /public-api/v1/fs-content-download. I have a working rule: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download tcp-request content track-sc1 base32 if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled Just wanted to see if there is a better way of doing this? Is this efficient enough? I need to include the query string as well in my tracker, but I could not figure that out. Thanks Sachin Hi Sachin, I would do it like this: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s tcp-request content accept if HTTP acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download http-request set-header X-track %[url] http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled There might be some typos, but you get the idea. Baptiste
Re: Limiting concurrent range connections
Hi Sachin, Look at my conf: I turned your tcp-request content statement into http-request. Baptiste

On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote: Tried it, I don't see the table populating at all. stick-table type string size 1M expire 10m store conn_cur acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download #tcp-request content track-sc1 base32 if is_range is_path_throttled http-request set-header X-track %[url] tcp-request content track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled (egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 | socat /tmp/haproxy.sock stdio # table: haproxy_l2, type: string, size:1048576, used:0 (egnyte_server)egnyte@egnyte-laptop:~$

On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote: Yes, the url sample copies the whole URL as sent by the client. Simply give it a try on a staging server and let us know the status. Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com wrote: Thanks Baptiste - will http-request set-header X-track %[url] help me track URLs with query parameters as well? On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote: On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com wrote: Hi, I am trying to write some throttles that would limit concurrent connections for Range requests + specific URLs. For example, I want to allow only 2 concurrent range requests downloading the file /public-api/v1/fs-content-download. I have a working rule: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download tcp-request content track-sc1 base32 if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled Just wanted to see if there is a better way of doing this? Is this efficient enough? I need to include the query string as well in my tracker, but I could not figure that out. Thanks Sachin

Hi Sachin, I would do it like this: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s tcp-request content accept if HTTP acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download http-request set-header X-track %[url] http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled There might be some typos, but you get the idea. Baptiste
Re: add header or query parameter when redirecting
On Wed, Jun 3, 2015 at 11:58 AM, Sylvain Faivre sylvain.fai...@reservit.com wrote: Hello, I use the redirect directive to redirect users from old sites to a new site, e.g.: redirect prefix http://new-site.com code 301 if old-site I would like to redirect requests from many old sites to the same new site, so I need a way to add info about the old host to the redirected request. I'm looking for a way to add a header to the redirected request to identify the host, for example: X-Orig-Site: old-site-123.com Is this possible? I guess I can't add a header to the request with HAProxy, since HAProxy only sends a new Location header to the browser, and the browser sets the headers. So, is there a way to alter the location sent in the redirect, to include « orig-site=old-site-123.com » ? I think I'm missing something here. Should I use « http-request redirect » instead of « redirect prefix » ? By the way, I tried to use the set-cookie option for this, but it was a bad idea: redirect prefix http://new-site.com code 301 set-cookie ORIG=%[hdr(host)] if old_site This doesn't work for two reasons: 1. The « %[hdr(host)] » part is sent literally in the request: Set-Cookie: ORIG=%[hdr(host)]; path=/; 2. The request sent to new-site.com doesn't seem to include this cookie. Sylvain

Hi Sylvain, The only good way to achieve what you want is to use a query string parameter and http-request and http-response rules coupled with a few sections... Basically, haproxy is not able to modify the headers sent by a redirect rule. So the trick here is to perform the redirect in a dummy frontend section used as a server in a dedicated backend, and to insert a header in the response, like this: backend be_redirect http-request capture req.hdr(host),word(1,:),lower len 32 http-response replace-value Location (.*) \1orig-site=%[capture.req.hdr(0)] if { res.hdr(Location) -m sub ? } http-response replace-value Location (.*) \1?orig-site=%[capture.req.hdr(0)] if !{ res.hdr(Location) -m sub ?
} server dummy_redirect 127.0.0.1:8001 frontend fe_dummy_redirect bind 127.0.0.1:8001 http-request redirect prefix http://new-site.com code 301 Note that this configuration needs HAProxy 1.6 (latest snapshot). Baptiste
Re: Dynamic backend selection using maps
On Wed, Jun 3, 2015 at 2:22 PM, David Reuss shuffle...@gmail.com wrote: Hello, I have this use_backend declaration: use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/worker.map,b_nodes_default)] Which seems to work wonderfully, but say I have foo.com in my map: it will match foo.com.whatever.com, and ideally I'd like to only match if the domain ends with my value (foo.com); also, it should NOT match blahfoo.com. How would I achieve that?

Hi David, Store .foo.com as your map key, then use: %[req.hdr(host),lower,map_end(/etc/haproxy/worker.map,b_nodes_default)] Baptiste
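Put together, the map file and the declaration pair up like this; the map contents and frontend are illustrative:

```
# /etc/haproxy/worker.map: keys start with a dot, so blahfoo.com cannot match
#   .foo.com    b_foo
#   .bar.com    b_bar

frontend fe_main
    bind *:80
    use_backend %[req.hdr(host),lower,map_end(/etc/haproxy/worker.map,b_nodes_default)]
```

One caveat: the bare host foo.com itself does not end with .foo.com, so a request with Host: foo.com falls back to b_nodes_default unless handled separately.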
Re: Limiting concurrent range connections
On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com wrote: Hi, I am trying to write some throttles that would limit concurrent connections for Range requests + specific URLs. For example, I want to allow only 2 concurrent range requests downloading the file /public-api/v1/fs-content-download. I have a working rule: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download tcp-request content track-sc1 base32 if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled Just wanted to see if there is a better way of doing this? Is this efficient enough? I need to include the query string as well in my tracker, but I could not figure that out. Thanks Sachin

Hi Sachin, I would do it like this: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s tcp-request content accept if HTTP acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download http-request set-header X-track %[url] http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled There might be some typos, but you get the idea. Baptiste
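Since Baptiste warns that his reply may contain typos, here is the same idea laid out as an indented fragment; note the accept rule is spelled tcp-request content accept, and the backend name is illustrative:

```
backend be_download
    stick-table type string size 1m expire 10m store conn_cur
    tcp-request inspect-delay 5s
    # wait until the HTTP request has been received, then let it through
    tcp-request content accept if HTTP
    acl is_range hdr_sub(Range) bytes=
    acl is_path_throttled path_beg /public-api/v1/fs-content-download
    # copy the full URL (query string included) into a header we can track
    http-request set-header X-track %[url]
    http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
    http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled
```

Tracking the URL (rather than base32) is what lets the query string participate in the stick-table key, which was Sachin's missing piece.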
Re: add header or query parameter when redirecting
Hi Baptiste, Unfortunately, we are not willing to upgrade to HAProxy 1.6 just yet, so we are going to use another solution for this redirect (change DNS records to resolve old hostnames to the new web server). Thank you for the info anyway, it may be useful another time. Sylvain

Well, HAPEE-1.5-r2 will have this feature and will be available soon. It's part of the backports from 1.6. Contact us at http://www.haproxy.com for more information. Cherry on the cake, you'll have access to our support team in the meantime :) Baptiste
Re: Limiting concurrent range connections
Yes, the url sample copies the whole URL as sent by the client. Simply give it a try on a staging server and let us know the status. Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com wrote: Thanks Baptiste - will http-request set-header X-track %[url] help me track URLs with query parameters as well? On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote: On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com wrote: Hi, I am trying to write some throttles that would limit concurrent connections for Range requests + specific URLs. For example, I want to allow only 2 concurrent range requests downloading the file /public-api/v1/fs-content-download. I have a working rule: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download tcp-request content track-sc1 base32 if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled Just wanted to see if there is a better way of doing this? Is this efficient enough? I need to include the query string as well in my tracker, but I could not figure that out. Thanks Sachin

Hi Sachin, I would do it like this: stick-table type string size 1M expire 10m store conn_cur tcp-request inspect-delay 5s tcp-request content accept if HTTP acl is_range hdr_sub(Range) bytes= acl is_path_throttled path_beg /public-api/v1/fs-content-download http-request set-header X-track %[url] http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled There might be some typos, but you get the idea. Baptiste
Re: Dynamic backend selection using maps
Hi Jim, hdr_end could do the trick if you include the '.' in the matching string. Baptiste

On Wed, Jun 3, 2015 at 4:55 PM, Jim Gronowski jgronow...@ditronics.com wrote: I’m not very familiar with the map function, but does hdr_end(host) work in this context? If so, in order to only match *.foo.com and not blahfoo.com, you’d need to include the dot in your map: ‘.foo.com’ instead of ‘foo.com’.

From: David Reuss [mailto:shuffle...@gmail.com] Sent: Wednesday, June 03, 2015 05:23 To: haproxy@formilux.org Subject: Dynamic backend selection using maps Hello, I have this use_backend declaration: use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/worker.map,b_nodes_default)] Which seems to work wonderfully, but say I have foo.com in my map: it will match foo.com.whatever.com, and ideally I'd like to only match if the domain ends with my value (foo.com); also, it should NOT match blahfoo.com. How would I achieve that?