Re: stick-table replication not working anymore after Version-Upgrade

2023-03-01 Thread bjun...@gmail.com
On Wed, Mar 1, 2023 at 11:49 AM Aurelien DARRAGON <adarra...@haproxy.com> wrote:

> > In the HAProxy configuration i'm using the FQDN name, and it seems
> > HAProxy is just using the short hostname.
> This does seem to be true: the "localpeer" default value is retrieved
> via gethostname() in HAProxy
>
> However, since no obvious changes around "localpeer" handling occurred
> between 2.4.15 and 2.4.22 it looks like your previous system (ubuntu
> 18.04) behaved differently and used to return "s017.domain.local" with
> gethostname() and 'hostname' command (without the -f)
>
> (Please note that hostname command relies on uname() internally, and not
> on gethostname(), but the manual confirms that hostname command without
> arguments "will print the name of the system as returned by the
> gethostname(2) function.")
>
> Could you confirm?
>
> Thanks
>

Yes, your assumption is correct. I've now added the "localpeer" global option
to the config to make this more robust and independent of the OS.
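For reference, a minimal sketch of what that looks like (peer names and addresses are the ones from this thread; adjust to your hosts):

```
global
    # Pin the local peer name explicitly instead of relying on what
    # gethostname() returns, so it always matches the peers section.
    localpeer s017.domain.local

peers LB
    peer s017.domain.local 192.168.120.207:1234
    peer s018.domain.local 192.168.120.208:1234
```

With this, an OS that reports only the short hostname can no longer break peer matching.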

 -
Mit freundlichen Grüßen / Best regards
Bjoern


Re: stick-table replication not working anymore after Version-Upgrade

2023-03-01 Thread bjun...@gmail.com
On Wed, Mar 1, 2023 at 10:49 AM Lukas Tribus wrote:

> On Wed, 1 Mar 2023 at 10:09, bjun...@gmail.com  wrote:
> >
> > Hi,
> >
> > i've upgraded from HAProxy 2.4.15 (OS: Ubuntu 18.04) to 2.4.22 (OS:
> Ubuntu 22.04). Now the stick-table synchronization between peers isn't
> working anymore.
> >
> > The peers listener is completely not existing (lsof output).
> >
> > HAProxy config:
> >
> > peers LB
> > peer s017.domain.local 192.168.120.207:1234
> > peer s018.domain.local 192.168.120.208:1234
>
> Is it possible the kernel rejects the bind to those IP addresses after
> the OS upgrade?
>
> Can you bind to those ports with something like nc?
>
> nc -l -s 192.168.120.207 -p 1234
>
>
>
> Lukas
>

Hi Lukas,

Yes, I can bind to those ports/IPs without any problem. I've also rebooted the
hosts to rule out other possible issues. I've found something in the haproxy logs:

haproxy[16083]: [WARNING]  (16083) : Removing incomplete section 'peers LB'
(no peer named 's017').

Maybe it's some sort of hostname/FQDN lookup "issue":

# hostname
s017

# hostname -f
s017.domain.local

In the HAProxy configuration I'm using the FQDN, and it seems HAProxy is
using only the short hostname.
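The difference is easy to reproduce outside HAProxy, for example with Python's wrappers around the same system calls (a quick sketch; the printed names depend on the host's configuration):

```python
import socket

# gethostname() wraps the libc gethostname() call that HAProxy uses to
# derive the default "localpeer" value; on a host configured with a
# short name it returns e.g. "s017".
print(socket.gethostname())

# getfqdn() additionally resolves the name via /etc/hosts or DNS and
# may return the full "s017.domain.local", like `hostname -f` does.
print(socket.getfqdn())
```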

-
Mit freundlichen Grüßen / Best regards
Bjoern


stick-table replication not working anymore after Version-Upgrade

2023-03-01 Thread bjun...@gmail.com
Hi,

I've upgraded from HAProxy 2.4.15 (OS: Ubuntu 18.04) to 2.4.22 (OS: Ubuntu
22.04). Since then, the stick-table synchronization between peers no longer
works.

The peers listener does not exist at all (verified with lsof).

HAProxy config:

peers LB
peer s017.domain.local 192.168.120.207:1234
peer s018.domain.local 192.168.120.208:1234

backend be_redis
stick-table type ip size 10 peers LB
stick on dst
server s021.domain.local 192.168.120.212:6379 maxconn 1000 on-marked-down shutdown-sessions track be_redis_tracking/s021.domain.local
server s022.domain.local 192.168.120.213:6379 maxconn 1000 on-marked-down shutdown-sessions track be_redis_tracking/s022.domain.local backup


--
Best regards
Bjoern


Re: [ANNOUNCE] HTX vulnerability from 2.0 to 2.5-dev

2021-09-09 Thread bjun...@gmail.com
Hi,

Is HAProxy 2.0.x with "no option http-use-htx" also affected by
this vulnerability?

Best regards / Mit freundlichen Grüßen
Bjoern

On Tue, Sep 7, 2021 at 5:30 PM Willy Tarreau wrote:

> Hi everyone,
>
> Right after the previous announce of HTTP/2 vulnerabilities, a group
> of security researchers from JFrog Security have been looking for the
> possibility of remaining issues around the same topic. While there was
> nothing directly exploitable, Ori Hollander found a bug in the HTTP
> header name length encoding in the HTX representation by which the most
> significant bit of the name's length can slip into the value's least
> significant bit, and figured he could craft a valid request that could
> inject a dummy content-length on input that would be produced on output
> in addition to the other one, resulting in the possibility of a blind
> request smuggling attack ("blind" because the response never gets back
> to the attacker). Quite honestly they've done an excellent job at
> spotting this one because it's not every day that you manage to turn
> a single-bit overflow into an extra request, and figuring this required
> to dig deeply into the layers! It's likely that they'll publish something
> shortly about their finding.
>
> CVE-2021-40346 was assigned to this issue, which affects versions 2.0
> and above. I'm going to emit new maintenance releases for 2.0, 2.2, 2.3
> and 2.4 (2.5 still being in development, it will be released a bit later).
>
> A possible workaround for those who cannot upgrade is to block requests
> and responses featuring more than one content-length header after the
> overflow occurred; these are always invalid because they're always
> resolved during the parsing phase, hence this condition never reaches
> the HTTP layer:
>
>http-request  deny if { req.hdr_cnt(content-length) gt 1 }
>http-response deny if { res.hdr_cnt(content-length) gt 1 }
>
> I'd like to thank the usual distro maintainers for having accepted to
> produce yet another version of their packages in a short time. Hopefully
> now we can all get back to development!
>
> Thanks,
> Willy
>
>


Re: No access to git.haproxy.org from Travis CI

2020-08-12 Thread bjun...@gmail.com
On Sat, Jun 13, 2020 at 10:15 PM Willy Tarreau wrote:

> Hi William,
>
> On Sat, Jun 13, 2020 at 03:13:06PM +0200, William Dauchy wrote:
> > Hi,
> >
> > On Thu, Jun 11, 2020 at 1:10 PM Willy Tarreau  wrote:
> > > Sure but what I wanted to say was that travis seems to be the only
> > > point experiencing such difficulties and we don't know how it works
> > > nor what are the rules in place.
> >
> > I don't know whether this is related to the issue described here but I
> just had:
> >
> > $ git pull --rebase
> > fatal: unable to access 'http://git.haproxy.org/git/haproxy.git/': The
> > requested URL returned error: 502
> > $ git pull --rebase
> > error: The requested URL returned error: 502 (curl_result = 22,
> > http_code = 502, sha1 = f38175cf6ed4d02132e6b21cbf643f73be5ee000)
> > error: Unable to find f38175cf6ed4d02132e6b21cbf643f73be5ee000 under
> > http://git.haproxy.org/git/haproxy.git
> > Fetching objects: 4529, done.
> > Cannot obtain needed commit f38175cf6ed4d02132e6b21cbf643f73be5ee000
> > while processing commit ff0e8a44a4c23ab36b6f67c4052777ac908d4211.
> > error: fetch failed.
> >
> > Third try was ok though :/
>
> Thanks for the info, I'll have a look. However the issue reported about
> travis was a connection error, which likely indicates a routing issue.
>
> Cheers,
> Willy
>

Hi,

FYI: after two months and more than 20 emails, Travis has fixed it.

Best regards / Mit freundlichen Grüßen

Bjoern


Re: Storing src + backend or frontend name in stick-table

2020-07-17 Thread bjun...@gmail.com
Hi Christian,

I'm using the following (I don't know whether you're asking about HTTP mode)
when I need to track multiple sample fetches:

frontend http
  http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[src]
  http-request track-sc0 req.fhdr(X-Concat)
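On HAProxy 2.0 and later, the same thing can be done without a synthetic header by stashing one fetch in a variable and using the concat converter. This is only a sketch; the table name and sizing below are my own assumptions:

```
frontend http
  # Stash the User-Agent, then track "<src>_<user-agent>" as one sc0 key.
  http-request set-var(txn.ua) req.fhdr(User-Agent)
  http-request track-sc0 src,concat(_,txn.ua) table st_src_ua

backend st_src_ua
  stick-table type string len 256 size 100k expire 30m store http_req_rate(10s)
```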


Best regards / Mit freundlichen Grüßen
Björn

On Thu, Jul 16, 2020 at 9:48 PM Christian Ruppert wrote:

> Hi List,
>
> is it possible to store both, IP (src) and the frontend and/or backend
> name in a stick table? We use the IP in some frontends, the
> frontend/backend name is only for visibility/informational purpose.
> We have pretty huge configs with several hundred frontends/backends and
> we'd like to know like where a bot triggered some action and stuff like
> that.
>
> --
> Regards,
> Christian Ruppert
>
>


Re: Dynamic SSL certificate loading with haproxy-2.2-dev

2020-06-17 Thread bjun...@gmail.com
On Wednesday, June 17, 2020, William Lallemand wrote:

> Hello,
>
> On Wed, Jun 17, 2020 at 03:28:19PM +0300, tbn wrote:
> > Hello list,
> >
> >I saw William Lallemand's announcement regarding the possibility of
> > loading dynamic ssl certificates right here
> > https://www.mail-archive.com/haproxy@formilux.org/msg36927.html and
> > the idea of having so much control over the haproxy instance was
> > intriguing.
> >
> >I've set up a test instance of the latest 2.2-dev9 to test out this
> > feature and I seem to have hit a bump in the road. I am unsure if I
> > misunderstood what was supposed to happen, or if I've stumbled across
> > a bug. In my configuration file, I'm instructing haproxy to load all
> > existing certificates from a folder and I'm trying to load a new
> > certificate using the new "new ssl cert/add ssl cert/commit ssl cert"
> > commands through the haproxy socket. The domain with the certificate
> > loaded manually seems to have SNI problems until haproxy is restarted
> > and the certificate is read from the crt folder.
> >
> >I'm using foo.com and bar.com as example domains. The one that
> > haproxy loads from the folder is generated and self-signed (foo.com),
> > while the one I'm trying to load is valid and issued by let's encrypt
> > (bar.com).
> >
> >I've used a slight variation of the config file found in
> > reg-tests/ssl/set_ssl_cert.vtc as follows:
> > --[Start]--
> > global
> > maxconn 4096
> > user root
> > group root
> > daemon
> > log 127.0.0.1 local0 debug
> > stats socket "/tmp/stats" level admin
> >
> > # Default SSL material locations
> > ca-base /etc/ssl/certs
> > crt-base /etc/ssl/private
> >
> > tune.ssl.default-dh-param 2048
> >
> > defaults
> > log global
> > mode http
> > option  httplog
> > option  dontlognull
> > retries 3
> > option  redispatch
> > option  http-server-close
> > option  forwardfor
> > timeout connect 5000
> > timeout client  5
> > timeout server  5
> >
> >
> > listen https-in
> > bind :443 transparent ssl strict-sni crt /etc/haproxy/ssl alpn
> > h2,http/1.1
> > default_backend something
> >
> > backend something
> > mode http
> > server web 192.168.1.144:80 check
> > --[End]--
> >
> > Haproxy starts successfully and the pre-existing certificate in the
> > /etc/haproxy/ssl is present and loaded:
> >
> > --[Start]--
> > ]# haproxy -d -f /etc/haproxy/haproxy.cfg
> > Available polling systems :
> >   epoll : pref=300,  test result OK
> >poll : pref=200,  test result OK
> >  select : pref=150,  test result FAILED
> > Total: 3 (2 usable), will use epoll.
> >
> > Available filters :
> > [SPOE] spoe
> > [COMP] compression
> > [TRACE] trace
> > [CACHE] cache
> > [FCGI] fcgi-app
> > Using epoll() as the polling mechanism.
> > --[Middle]--
> > ]# echo -e "show ssl cert" | socat /tmp/stats stdio
> > # filename
> > /etc/haproxy/ssl/foo.com.pem
> >
> > ]# echo -e "show ssl cert /etc/haproxy/ssl/foo.com.pem" | socat
> /tmp/stats stdio
> > Filename: /etc/haproxy/ssl/foo.com.pem
> > Status: Used
> > Serial: DA0AD0EC8F6C0C30
> > notBefore: Nov  8 15:31:08 2019 GMT
> > notAfter: Dec  8 15:31:08 2019 GMT
> > Subject Alternative Name:
> > Algorithm: RSA2048
> > SHA1 FingerPrint: 81D4AF40722F5F7C704E3327C5695F78DA6DC1E0
> > Subject: /C=RO/ST=SomeState/L=Locality/O=OrganizationalOrg/OU=OrzanizatoricUnit/CN=foo.pem
> > Issuer: /C=RO/ST=SomeState/L=Locality/O=OrganizationalOrg/OU=OrzanizatoricUnit/CN=foo.pem
> > --[End]--
> > Certificate status is "Used", browser loads "foo.com" with the proper
> > certificate"
> >
> > Next I've tried inserting "bar.com" into a running haproxy:
> > --[Start]--
> > ]# cat /root/certificates/bar.com/fullchain.pem
> > /root/certificates/bar.com/privkey.pem | sed '/^$/d' >
> > /etc/haproxy/ssl/bar.com.pem
> > ]# echo -e "new ssl cert /etc/haproxy/ssl/bar.com.pem" | socat
> /tmp/stats stdio
> > New empty certificate store '/etc/haproxy/ssl/bar.com.pem'!
> >
> > # echo -e "set ssl cert /etc/haproxy/ssl/bar.com.pem <<\n$(cat
> > /etc/haproxy/ssl/bar.com.pem)\n" | socat /tmp/stats stdio
> > Transaction created for 

Re: Ubuntu 20.04 + TLSv1

2020-06-12 Thread bjun...@gmail.com
On Fri, Jun 12, 2020 at 4:02 PM Jerome Magnin wrote:

> On Fri, Jun 12, 2020 at 03:09:18PM +0200, bjun...@gmail.com wrote:
> > Hi,
> >
> > currently i'm testing Ubuntu 20.04 and HAProxy 2.0.14.
> >
> > I'm trying to get TLSv1 working (we need this for some legacy clients),
> so
> > far without success.
> >
> > I've read different things, on the one hand Ubuntu has removed
> > TLSv1/TLSv1.1 support completely, otherwise that it can be enabled:
> >
> http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/openssl_1.1.1f-1ubuntu2/changelog
> >
> >
> > Is there anything that can be set in HAProxy? (apart from
> > "ssl-default-bind-options ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.2")
> >
> > Has anybody more information on this matter or has TLSv1 working in
> Ubuntu
> > 20.04 + HAProxy?
> >
>
> Hi,
>
> appending @SECLEVEL=1 to the cipher string I can perform the handshakes
> using TLSv1.0 and higher on ubuntu 20.04. You don't need to rebuild
> openssl. I was not able to use s_client -tls1 or -tls1_2 on the 20.04
> though, had to try with a different client. It's probably something that
> you can handle with openssl.cnf, just like the ciphers.
>
> frontend in
>   bind *:8443 ssl crt ssl.pem ssl-min-ver TLSv1.0  ciphers ALL:@SECLEVEL=1
>
>
> --
> Jérôme
>

Thanks Jérôme, that does the trick.

Best regards / Mit freundlichen Grüßen
Bjoern


Re: Ubuntu 20.04 + TLSv1

2020-06-12 Thread bjun...@gmail.com
On Fri, Jun 12, 2020 at 3:38 PM bjun...@gmail.com <bjun...@gmail.com> wrote:

> On Fri, Jun 12, 2020 at 3:24 PM Lukas Tribus wrote:
>
>> Hello Bjoern,
>>
>>
>> On Fri, 12 Jun 2020 at 15:09, bjun...@gmail.com 
>> wrote:
>> >
>> > Hi,
>> >
>> > currently i'm testing Ubuntu 20.04 and HAProxy 2.0.14.
>> >
>> > I'm trying to get TLSv1 working (we need this for some legacy clients),
>> so far without success.
>> >
>> > I've read different things, on the one hand Ubuntu has removed
>> TLSv1/TLSv1.1 support completely, otherwise that it can be enabled:
>> http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/openssl_1.1.1f-1ubuntu2/changelog
>> >
>> > Is there anything that can be set in HAProxy? (apart from
>> "ssl-default-bind-options ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.2")
>> >
>> > Has anybody more information on this matter or has TLSv1 working in
>> Ubuntu 20.04 + HAProxy?
>>
>>
>> Please try "force-tlsv10" *directly* on the bind line (not from
>> ssl-default-bind-options).
>>
>> There are two issues:
>>
>> Bug 595 [1]: ssl-min-ver does not work from ssl-default-bind-options
>> Bug 676 [2]: ssl-min-ver does not work properly depending on OS defaults
>>
>> If force-tlsv10 works directly on the bind line to enable TLSv1.0,
>> then the next release 2.0.15 should work fine as it contains both
>> fixes.
>>
>>
>>
>> Regards,
>>
>> Lukas
>>
>>
>> [1] https://github.com/haproxy/haproxy/issues/595
>> [2] https://github.com/haproxy/haproxy/issues/676
>
>
>
> Hi Lukas,
>
>  "force-tlsv10" directly on the bind line doesn't work (i've also tried in
> "ssl-default-bind-options", same result).
>
> Best regards / Mit freundlichen Grüßen
> Bjoern
>
>

I'm using the Ubuntu packages from haproxy.debian.net.

# haproxy -vvv | grep -i openssl
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1
Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED
+REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE
+LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4
-MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS
-51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
Built with OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
Running on OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3

Best regards / Mit freundlichen Grüßen
Bjoern


Re: Ubuntu 20.04 + TLSv1

2020-06-12 Thread bjun...@gmail.com
On Fri, Jun 12, 2020 at 3:24 PM Lukas Tribus wrote:

> Hello Bjoern,
>
>
> On Fri, 12 Jun 2020 at 15:09, bjun...@gmail.com  wrote:
> >
> > Hi,
> >
> > currently i'm testing Ubuntu 20.04 and HAProxy 2.0.14.
> >
> > I'm trying to get TLSv1 working (we need this for some legacy clients),
> so far without success.
> >
> > I've read different things, on the one hand Ubuntu has removed
> TLSv1/TLSv1.1 support completely, otherwise that it can be enabled:
> http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/openssl_1.1.1f-1ubuntu2/changelog
> >
> > Is there anything that can be set in HAProxy? (apart from
> "ssl-default-bind-options ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.2")
> >
> > Has anybody more information on this matter or has TLSv1 working in
> Ubuntu 20.04 + HAProxy?
>
>
> Please try "force-tlsv10" *directly* on the bind line (not from
> ssl-default-bind-options).
>
> There are two issues:
>
> Bug 595 [1]: ssl-min-ver does not work from ssl-default-bind-options
> Bug 676 [2]: ssl-min-ver does not work properly depending on OS defaults
>
> If force-tlsv10 works directly on the bind line to enable TLSv1.0,
> then the next release 2.0.15 should work fine as it contains both
> fixes.
>
>
>
> Regards,
>
> Lukas
>
>
> [1] https://github.com/haproxy/haproxy/issues/595
> [2] https://github.com/haproxy/haproxy/issues/676



Hi Lukas,

"force-tlsv10" directly on the bind line doesn't work (I've also tried it in
"ssl-default-bind-options", with the same result).

Best regards / Mit freundlichen Grüßen
Bjoern


Ubuntu 20.04 + TLSv1

2020-06-12 Thread bjun...@gmail.com
Hi,

Currently I'm testing Ubuntu 20.04 with HAProxy 2.0.14.

I'm trying to get TLSv1 working (we need it for some legacy clients), so
far without success.

I've read conflicting things: on the one hand that Ubuntu has removed
TLSv1/TLSv1.1 support completely, on the other that it can be re-enabled:
http://changelogs.ubuntu.com/changelogs/pool/main/o/openssl/openssl_1.1.1f-1ubuntu2/changelog


Is there anything that can be set in HAProxy? (apart from
"ssl-default-bind-options ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.2")

Does anybody have more information on this, or have TLSv1 working with
Ubuntu 20.04 + HAProxy?

Best regards / Mit freundlichen Grüßen
Bjoern


Re: No access to git.haproxy.org from Travis CI

2020-06-11 Thread bjun...@gmail.com
On Thu, Jun 11, 2020 at 3:00 PM Willy Tarreau wrote:

> By the way, if that helps, I've re-added the AAAA records for
> {git,www}.haproxy.org. It will take one hour or so to propagate, but
> you'll be able to see if using IPv6 causes the same issue or not. I'd
> guess it would work better since the routes are likely different.
>
> Willy
>

Thanks for your effort Willy, but the Travis VMs used for my jobs do not
support IPv6. I've informed Travis about the issue; let's wait and see.

Best regards / Mit freundlichen Grüßen
Bjoern


Re: No access to git.haproxy.org from Travis CI

2020-06-11 Thread bjun...@gmail.com
Hi,

One more question: now that I have to use the GitHub mirror, how can I get
the latest changes of the 2.1 tree? (as was possible with
"https://git.haproxy.org/git/haproxy-2.1.git/")

There is no branch or tag on GitHub that tracks the latest changes of the
2.1 tree.

Best regards / Mit freundlichen Grüßen
Bjoern


On Thu, Jun 11, 2020 at 1:17 PM Willy Tarreau wrote:

> On Thu, Jun 11, 2020 at 01:09:37PM +0200, bjun...@gmail.com wrote:
> > Hello Willy,
> >
> > just for clarity, it's not only port 80. I've looked at it, it's
> > definitely some issue/blocking within the travis infrastructure, routing
> > from GCE Cloud (us-east1) is fine.
>
> OK, that's good to know.
>
> Thanks,
> Willy
>


Re: No access to git.haproxy.org from Travis CI

2020-06-11 Thread bjun...@gmail.com
Hello Willy,

just for clarity, it's not only port 80. I've looked into it, and it's
definitely some issue/blocking within the Travis infrastructure; routing
from GCE Cloud (us-east1) is fine.

Best regards / Mit freundlichen Grüßen
Bjoern

On Thu, Jun 11, 2020 at 12:23 PM Willy Tarreau wrote:

> On Thu, Jun 11, 2020 at 03:17:07PM +0500,  ??? wrote:
> > we had to change libslz url as well
> >
> >
> https://github.com/haproxy/haproxy/commit/13dd45178e24504504a02d89d9a81d4b80c63c93#diff-354f30a63fb0907d4ad57269548329e3
>
> Fine!
>
> > however, I did not investigate deeper (traceroute, etc, ...)
>
> And it really seems that only travis is affected, nobody else complains
> apparently. The server there is the same as the one running the mailing
> list and the one that's pushing to github, so I really doubt it would be
> a routing issue, which is why I more likely suspect some filtering of
> port 80 from within travis' infrastructure.
>
> Regards,
> Willy
>


No access to git.haproxy.org from Travis CI

2020-06-11 Thread bjun...@gmail.com
Hello Willy,

I have a Travis CI job that pulls/clones a repo from git.haproxy.org, but
unfortunately this has stopped working (I believe since May 12).

Output Travis CI job:

$ ping -c 4 git.haproxy.org
PING ipv4.haproxy.org (51.15.8.218) 56(84) bytes of data.
--- ipv4.haproxy.org ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3059ms

+ git clone http://git.haproxy.org/git/haproxy-2.1.git/ haproxy
Cloning into 'haproxy'...
fatal: unable to access 'http://git.haproxy.org/git/haproxy-2.1.git/':
Failed to connect to git.haproxy.org port 80: Connection timed out

+ git clone https://git.haproxy.org/git/haproxy-2.1.git/ haproxy
Cloning into 'haproxy'...
fatal: unable to access 'https://git.haproxy.org/git/haproxy-2.1.git/':
Failed to connect to git.haproxy.org port 443: Connection timed out


Is there any IP blocking or something else in place?


Best regards / Mit freundlichen Grüßen

Bjoern


Re: Redirect and rewrite part of query string (using map files)

2020-01-18 Thread bjun...@gmail.com
On Saturday, January 18, 2020, Aleksandar Lazic wrote:

> Hi Bjoern.
>
> On 18.01.20 14:02, bjun...@gmail.com wrote:
>
>> Am Samstag, 18. Januar 2020 schrieb Aleksandar Lazic > <mailto:al-hapr...@none.at>>:
>>
>> Hi.
>>
>> On 18.01.20 13:11, bjun...@gmail.com <mailto:bjun...@gmail.com>
>> wrote:
>>
>> Hi,
>>
>> i want to redirect the following (the value of the code param
>> should be rewritten):
>>
>> abc.de/?v=1&code=1530=3 -> abc.de/?v=1&code=6780=3
>> abc.it/?v=2&code=2400=2 -> abc.it/?v=2&code=7150=2
>> abc.fr ..
>> abc.se ..
>> .
>> .
>>
>> When i don't use maps, i can accomplish the task with the
>> following lines (but this needs many of those lines):
>>
>> http-request set-header X-Redirect-Url
>> %[url,regsub(code=1530,code=6780,g)] if { hdr_reg(host) -i ^abc\.de$ }
>> http-request set-header X-Redirect-Url
>> %[url,regsub(code=2400,code=7150,g)] if { hdr_reg(host) -i ^abc\.it$ }
>>
>> http-request redirect code 302 location https://
>> %[hdr(host)]%[hdr(X-Redirect-Url)]
>>
>>
>> But i want to use map files to reduce duplication and make it
>> easier to add new items.
>>
>> I have these map files (domain is the lookup key):
>>
>> desktop_ids.map:
>> abc.de 1530
>> abc.it 2400
>> .
>> .
>>
>> mobile_ids.map:
>> abc.de 6780
>> abc.it 7150
>> .
>> .
>>
>> http-request set-header X-ID-Desktop
>> %[hdr(host),lower,map_str(/etc/haproxy/desktop_ids.map)]
>> http-request set-header X-ID-Mobile
>> %[hdr(host),lower,map_str(/etc/haproxy/mobile_ids.map)]
>>
>> What i would need is the following:
>> http-request set-header X-Redirect-Url
>> %[url,regsub(code=%[hdr(X-ID-Desktop)],code=%[hdr(X-ID-Mobile)],g)]
>>
>> http-request redirect code 302 location https://
>> %[hdr(host)]%[hdr(X-Redirect-Url)]
>>
>> But that's not possible, you cannot use variables in the regex or
>> substitution field of regsub. I've also tried "http-request
>> replace-header", but same problem, you cannot use variables for the
>> "match-regex".
>>
>> Maybe is it possible to cut the "code" param from the query
>> string and append it with the new value to the query string. But this needs
>> some complex regex and handling of multiple conditions in query string
>> (plus ordering of query string params is different than).
>>
>> Is there a possibility to use variables in regsub or in the
>> "match-regex" of replace-header?
>>
>>
>> I think you will need a small lua action handler which do the job.
>> In the blog post are some examples which can help you to reach your
>> target.
>> https://www.haproxy.com/blog/5-ways-to-extend-haproxy-with-lua/
>>
>> Maybe you can start with the example on stackoverflow and add the map
>> handling
>> into the code.
>>
>> https://stackoverflow.com/questions/50844292/how-to-make-ha-proxy-to-follow-redirects-by-itself/50872246#50872246
>> https://www.arpalert.org/src/haproxy-lua-api/2.0dev/index.html#map-class
>>
>> It would be interesting to know which haproxy version do you use.
>> haproxy -vv
>>
>> Best regards / Mit freundlichen Grüßen
>>
>> Bjoern
>>
>>
>> Hth
>> Aleks
>>
>>
>> Hi Aleks,
>>
>> i'm using HAProxy 1.8. I wanted to avoid using lua for "only" doing some
>> redirects. I was searching for a native method in HAProxy. All "building
>> blocks" are present, except that you cannot use variables in the
>> match-regex etc. (or maybe there is possibility?)
>>
>
> I don't know any possibilities without lua, maybe someone else on the ML.
> Do you see any chance to update to 2.1 with lua?
>
> Please keep the ML in the loop, thanks.
>
> Bjoern
>>
>
> Best regards
> Aleks
>


Hi,

Upgrading to the latest LTS (2.0) is on the roadmap. We're already doing
some other things in Lua with HAProxy 1.8. The Lua Map() class is also
available in 1.8, but I would prefer a native method for the redirects.

Bjoern


Redirect and rewrite part of query string (using map files)

2020-01-18 Thread bjun...@gmail.com
Hi,

I want to redirect the following (the value of the code param should be
rewritten):

abc.de/?v=1&code=1530=3 -> abc.de/?v=1&code=6780=3
abc.it/?v=2&code=2400=2 -> abc.it/?v=2&code=7150=2
abc.fr ..
abc.se ..
.
.

When I don't use maps, I can accomplish the task with the following lines
(but this needs many such lines):

http-request set-header X-Redirect-Url %[url,regsub(code=1530,code=6780,g)]
if { hdr_reg(host) -i ^abc\.de$ }
http-request set-header X-Redirect-Url %[url,regsub(code=2400,code=7150,g)]
if { hdr_reg(host) -i ^abc\.it$ }

http-request redirect code 302 location https://
%[hdr(host)]%[hdr(X-Redirect-Url)]


But I want to use map files to reduce duplication and make it easier to
add new items.

I have these map files (domain is the lookup key):

desktop_ids.map:
abc.de 1530
abc.it 2400
.
.

mobile_ids.map:
abc.de 6780
abc.it 7150
.
.

http-request set-header X-ID-Desktop
%[hdr(host),lower,map_str(/etc/haproxy/desktop_ids.map)]
http-request set-header X-ID-Mobile
%[hdr(host),lower,map_str(/etc/haproxy/mobile_ids.map)]

What i would need is the following:
http-request set-header X-Redirect-Url
%[url,regsub(code=%[hdr(X-ID-Desktop)],code=%[hdr(X-ID-Mobile)],g)]

http-request redirect code 302 location https://
%[hdr(host)]%[hdr(X-Redirect-Url)]

But that's not possible: you cannot use variables in the regex or
substitution field of regsub. I've also tried "http-request
replace-header", but it has the same problem: you cannot use variables
for the "match-regex".

Maybe it is possible to cut the "code" param from the query string and
append it with the new value. But this needs some complex regexes and
handling of multiple conditions in the query string (plus the ordering of
query string params can differ).

Is there a possibility to use variables in regsub or in the "match-regex"
of replace-header?
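The Lua route suggested elsewhere in this thread could look roughly like the following. This is a sketch only: the action name, the txn.redirect_url variable, and the Map._str matching method are my assumptions, and the code is untested.

```lua
-- Load the two map files from the mail once at startup.
local desktop = Map.new("/etc/haproxy/desktop_ids.map", Map._str)
local mobile  = Map.new("/etc/haproxy/mobile_ids.map", Map._str)

core.register_action("rewrite-code", { "http-req" }, function(txn)
    local host = txn.sf:req_fhdr("host")
    if host == nil then return end
    host = host:lower()
    local old = desktop:lookup(host)   -- e.g. "1530" for abc.de
    local new = mobile:lookup(host)    -- e.g. "6780" for abc.de
    if old == nil or new == nil then return end
    -- Swap the code value in the URL and expose the result to the config.
    local url = txn.sf:url():gsub("code=" .. old, "code=" .. new)
    txn:set_var("txn.redirect_url", "https://" .. host .. url)
end)
```

Loaded via lua-load, it would then be wired up in the frontend with something like:
http-request lua.rewrite-code
http-request redirect code 302 location %[var(txn.redirect_url)] if { var(txn.redirect_url) -m found }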


Best regards / Mit freundlichen Grüßen

Bjoern


Re: [PATCH] MINOR: sample: add ssl_sni_check converter

2019-05-17 Thread bjun...@gmail.com
On Fri, May 17, 2019 at 9:15 PM Tim Düsterhus wrote:
>
> Willy,
>
> Am 23.12.18 um 21:20 schrieb Moemen MHEDHBI:
> > Hi,
> >
> > The attached patch adds the ssl_sni_check converter which returns true
> > if the sample input string matches a loaded certificate's CN/SAN.
> >
> > This can be useful to check for example if a host header matches a
> > loaded certificate CN/SAN before doing a redirect:
> >
> > frontent fe_main
> >   bind 127.0.0.1:80
> >   bind 127.0.0.1:443 ssl crt /etc/haproxy/ssl/
> >   http-request redirect scheme https if !{ ssl_fc } { 
> > hdr(host),ssl_sni_check() }
> >
> >
> > This converter may be even more useful when certificates will be
> > added/removed at runtime.
> >
>
> This email serves to bump the patch which appears to have slipped
> through the cracks. For the context see the "Re: Host header and sni
> extension differ" thread.
>
> Best regards
> Tim Düsterhus
>

Definitely thumbs up for this converter. I've implemented on-the-fly
certificate generation for HAProxy with the help of Lua. The converter
would help me reduce or simplify parts of the code and could possibly
improve performance.


Best regards / Mit freundlichen Grüßen

Bjoern



Re: [PATCH] MEDIUM: lua: Add stick table support for Lua

2018-10-10 Thread bjun...@gmail.com
On Sat, Sep 29, 2018 at 8:18 PM Willy Tarreau wrote:
>
> Hi Adis,
>
> On Thu, Sep 27, 2018 at 05:32:22PM +0200, Adis Nezirovic wrote:
> > On Thu, Sep 27, 2018 at 04:52:29PM +0200, Thierry Fournier wrote:
> > > I Adis,
> > >
> > > Sorry for the delay, I processed a quick review, and all seems to be ok 
> > > for me!
> > >
> > > BR,
> > > Thierry
> >
> > Great, happy to hear that, I hope guys will merge it soon.
>
> OK, just merged now.
>
> Thanks to you both!
>
> Willy
>

Hi Adis,

Nice feature, thank you. Are there plans to add write access as well?

Currently I have a use case for this in Lua (get/set a simple shared
lock), and I'm planning to use HAProxy maps as a workaround instead of
stick tables (or writing entries to stick tables from Lua via TCP to the
admin socket).
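The maps workaround is feasible because the Lua API already exposes write access to loaded maps via core.set_map()/core.del_map(), unlike stick tables. A rough, untested sketch (the map path, action name, and variable are my assumptions):

```lua
local LOCKS = "/etc/haproxy/locks.map"
local locks = Map.new(LOCKS, Map._str)

core.register_action("try-lock", { "http-req" }, function(txn)
    local key = txn.sf:src()
    -- Read through the Map class; write through core.set_map().
    if locks:lookup(key) == nil then
        core.set_map(LOCKS, key, "1")
        txn:set_var("txn.lock_acquired", true)
    end
end)
```

Note this lookup-then-set is not an atomic compare-and-set, and in multi-process setups each worker keeps its own copy of the map, which is one reason native stick-table writes would be preferable.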


Best Regards / Mit freundlichen Grüßen

Bjoern



http-request set-src without PROXY protocol

2018-08-03 Thread bjun...@gmail.com
Hi,

I'm currently experimenting with "http-request set-src". When I use it
in a backend with the PROXY protocol configured, it works and the IP
is written into the PROXY protocol header.


But what does "set-src" do if the PROXY protocol isn't (or can't be) used?

Is the "http-request set-src" feature only intended for use with the
PROXY protocol?

If not, what are the requirements when not using the PROXY protocol?


Example:

frontend fe
mode http
http-request set-header X-FakeIP 192.168.99.5
default_backend be

backend be
mode http
http-request set-src hdr(X-FakeIP)
server s1 172.16.0.10:80




Best Regards / Mit freundlichen Grüßen

Bjoern



Re: Possibility to modify PROXY protocol header

2018-08-01 Thread bjun...@gmail.com
2018-07-31 17:56 GMT+02:00 James Brown :
> I think if you use the `http-request set-src` directive it'll populate the
> PROXY headers in addition to the internal logging 
>
> On Fri, Jul 27, 2018 at 7:05 AM bjun...@gmail.com  wrote:
>>
>> Hi,
>>
>> is there any possibilty to modify the client ip in the PROXY Protocol
>> header before it is send to a backend server?
>>
>> My use case is a local integration/functional testing suite (multiple
>> local docker containers for testing the whole stack - haproxy, cache layer,
>> webserver, etc.).
>>
>> I would like to test functionality that depends on/needs specific
>> IP ranges or IP addresses.
>>
>> 
>> Best Regards / Mit freundlichen Grüßen
>>
>> Bjoern
>
>
>
> --
> James Brown
> Engineer

Hi James,


Thanks, that worked like a charm. I didn't know that "set-src" works in
combination with the PROXY protocol.



Best Regards / Mit freundlichen Grüßen

Bjoern



Possibility to modify PROXY protocol header

2018-07-27 Thread bjun...@gmail.com
Hi,

is there any possibility to modify the client IP in the PROXY protocol
header before it is sent to a backend server?

My use case is a local integration/functional testing suite (multiple local
docker containers for testing the whole stack - haproxy, cache layer,
webserver, etc.).

I would like to test functionality that depends on/needs specific
IP ranges or IP addresses.


Best Regards / Mit freundlichen Grüßen

Bjoern


1.7.8 upgrade question

2017-08-02 Thread bjun...@gmail.com
Hi,

we want to roll out 1.7.8 in production (upgrading from 1.6.8).

While preparing the update (reading the changelog/mailing list/git log,
searching for known issues, etc.), I stumbled upon this:

https://www.mail-archive.com/haproxy@formilux.org/msg26282.html


I don't know if I'm interpreting "TUNNEL mode" correctly.

We are using "option http-server-close" in both frontends and backends,
so would this bug affect us?


P.S.: I know a patch exists for the issue in the 1.7 tree, but we usually
only use tagged releases in production.


---
Best Regards / Mit freundlichen Grüßen

Bjoern


Re: Lua + shared memory segment

2017-08-01 Thread bjun...@gmail.com
2017-08-01 10:47 GMT+02:00 Thierry Fournier <thierry.fourn...@arpalert.org>:
>
>> On 31 Jul 2017, at 22:41, bjun...@gmail.com wrote:
>>
>> Hi,
>>
>> i'm experimenting with some Lua code in HAProxy where i need a simple 
>> key/value store (not persistent). I want to avoid Redis or other external 
>> dependency.
>>
>> Is there some sort of shared memory segment in HAProxy Lua integration that 
>> can be used? (or is it possible to access HAProxy stick-tables from Lua?)
>
>
> Adding a shared memory segment in Lua is an interesting approach. Adding a
> key/value system in Lua using shared memory seems very smart, but the
> development of this kind of function may be complicated and needs a bit of
> brainstorming.
>
> As the shared memory is local to the system, the information is not shared
> between nodes distributed across servers.
>
> Also, I have some doubts about the proper usage of such a system, because
> shared memory requires locks (semaphores), and HAProxy doesn't like locks.
>
>
>
>
> It's not possible today to access the stick tables by any method other than
> the sample fetch and converter bindings. Besides, the peer protocol requires
> full mesh connections (I'm using 12 HAProxy nodes, so 66 connections).
>
> I prefer sharing data with stick tables; the mechanisms are already written
> and reliable, but the content that can be stored is limited. Maybe a
> hierarchical peer-protocol concentrator would be welcome.
>
>
>
>
> Note that, for my own usage, I'm brainstorming with Redis. This software is
> very light and fast. My goal is tracking sessions across many servers.
> So I'm thinking of this behaviour:
>
>  - Each HAProxy reads the local Redis,
>
>  - HAProxy sends update messages via multicast/UDP and all nodes receive
>    the update
>    (this way may lose messages)
>
>
>
> Sharing data between HAProxy is not so easy :-)
>
> Thierry
>
>
>
>> --
>> Best Regards
>>
>> Bjoern
>

Hi Thierry,

Thanks for your detailed answer.

Given your remarks, I will go with Redis.


---
Best Regards

Bjoern



Lua + shared memory segment

2017-07-31 Thread bjun...@gmail.com
Hi,

I'm experimenting with some Lua code in HAProxy where I need a simple
key/value store (not persistent). I want to avoid Redis or any other
external dependency.

Is there some sort of shared memory segment in the HAProxy Lua integration
that can be used? (Or is it possible to access HAProxy stick tables from Lua?)


--
Best Regards

Bjoern


Lua core.(m)sleep + http-response

2017-07-31 Thread bjun...@gmail.com
Hi,

I've hit an issue that was already reported some time ago (I'm using
HAProxy 1.7.8):

https://discourse.haproxy.org/t/core-msleep-not-working-in-http-resp-http-response


It seems that the sleep is completely ignored, but the connection hangs
for as long as the value of "timeout connect".


---
Best Regards

Bjoern


Re: tcp-response content tarpit if hdr(X-Tarpit-This)

2017-07-29 Thread bjun...@gmail.com
2017-07-29 16:57 GMT+02:00 Charlie Elgholm :
> Ok, but anyhow, this actually means that I can use http-response to do
> something on the response. That's good. I'll play with it for a while on my
> dev-server. Nice!
>
> Version can be upgraded, of course, if I can just motivate it! :)
>
> Den 29 juli 2017 12:44 em skrev "Igor Cicimov"
> :
>>
>>
>>
>> On Fri, Jul 28, 2017 at 10:00 PM, Charlie Elgholm 
>> wrote:
>>>
>>> Ok, I'm on the 1.5.x branch unfortunately, due to Oracle Linux issues.
>>> Can install manually, but that might raise some eyebrows.
>>>
>>> But what you're telling me is that I can route the request to another
>>> backend (or drop it) in haproxy based on something I received from one
>>> backend??
>>>
>>> Den 28 juli 2017 1:40 em skrev "Igor Cicimov"
>>> :
>>>
>>>
>>>
>>> On Fri, Jul 28, 2017 at 6:03 PM, Charlie Elgholm 
>>> wrote:

 Thanks!

 I was really hoping for acl-validation on the basis of the response from
 the backend server, and not on the incoming request at the frontend.
 And, as much as I really like lua as a language, I'd rather keep my
 haproxy with as small footprint as possible. =)

 Really nice example about all the possibilities though, thanks!

 This is how all examples I find operate:
 incoming request => haproxy => frontend => acl based on what's known
 about the incoming requests => A or B
 A: backend => stream backend response to client
 B: tarpit / reject

 I would like this:
 incoming request => haproxy => frontend => backend => acl based on
 what's known about the response from the backend => A or B
 A: stream backend response to client
 B: tarpit / reject


 2017-07-28 9:52 GMT+02:00 Igor Cicimov :
>
>
>
> On 28 Jul 2017 5:41 pm, "Charlie Elgholm"  wrote:
>
> Hi Folks,
>
> Either I'm too stupid, or it's because it's Friday
>
> Can you tarpit/reject (or other action) based on a response from the
> backend?
> You should be able to, right?
>
> Like this:
> tcp-response content tarpit/reject if res.hdr(X-Tarpit-This)
>
> Can someone explain this to me? (Free beer.)
>
> I have a fairly complex ruleset on my backend server, written in Oracle
> PL/SQL, which monitors Hack- or DoS-attempts, and I would love to tarpit
> some requests on the frontend (by haproxy) based on something that happens
> on my backend.
>
> As I do now I return a 503 response from the server, and iptable-block
> those addresses for a while. But since they see the 503 response they'll
> return at a later date and try again. I would like the connection to just
> die (drop, no response at all) or tarpit (long timeout, so they give up). 
> I
> suppose/hope they'll eventually remove my IP from their databases.
>
> I'm guessing a tarpit is smarter than a reject, since the reject will
> indicate to the attacker that somethings exist behind the server IP.
> An iptable "drop" would be preferable, but I guess that's a little late
> since haproxy has already acknowledged the connection to the attacker.
>
> --
> Regards
> Charlie Elgholm
> Brightly AB
>
> Good example of delay with lua:
> http://godevops.net/2015/06/24/adding-random-delay-specific-http-requests-haproxy-lua/




 --
 Regards
 Charlie Elgholm
 Brightly AB
>>>
>>>
>>> Well the idea is to redirect the response on the backend (based on some
>>> condition) to a local frontend where you can use the tarpit on the request.
>>>
>>> You cam also try:
>>>
>>> http-response silent-drop if { status 503 }
>>>
>>> that you can use in the backed (at least in 1.7.8, not sure for other
>>> versions)
>>>
>>>
>>>
>>
>> I was thinking of something along these lines:
>>
>> frontend ft_tarpit
>>   mode http
>>   bind 127.0.0.1:
>>   default_backend bk_tarpit
>>
>> backend bk_tarpit
>>   mode http
>>   timeout tarpit 3600s
>>   http-request tarpit
>>
>> backend bk_main
>>   mode http
>>   http-response redirect 127.0.0.1: if { status 503 }
>>
>> but you are out of luck again since "http-response redirect" was
>> introduced in 1.6
>>
>

Hi Charlie,

Many ideas come to mind to solve your requirement, but all of them
require HAProxy 1.6 (or better, HAProxy 1.7).

Some keywords:

"http-response silent-drop"
"http-response track-sc" + checking the stick-table entry with an ACL in the frontend
"http-response" + a (random) delay with Lua (my preferred solution)
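
A rough sketch of the second idea (untested; table size, expiry, and proxy
names are placeholders, and it assumes HAProxy 1.6+ for "http-response
track-sc", "sc-inc-gpc0" and "silent-drop"):

```
backend bk_main
    mode http
    stick-table type ip size 100k expire 10m store gpc0
    # remember clients whose requests produced a 503 from the application
    http-response track-sc2 src if { status 503 }
    http-response sc-inc-gpc0(2) if { status 503 }

frontend fe_main
    mode http
    # drop further requests from flagged clients without any response
    http-request silent-drop if { src_get_gpc0(bk_main) gt 0 }
    default_backend bk_main
```

src_get_gpc0 does a direct table lookup, so the frontend check does not
refresh the entry's expiry the way a track-sc rule would.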


---
Best Regards

Bjoern



Re: Rate limiting w/o 429s

2016-08-05 Thread bjun...@gmail.com
Am Freitag, 5. August 2016 schrieb CJ Ess :

> So I know I can use Haproxy to send 429s when a given request rate is
> exceeded.
>
> I have a case where the "user" is mostly screen scrapers and click bots,
> so if I return a 429 they'll just turn around and re-request until
> successful - I can't expect them to voluntarily manage their request rate
> or do any sort of back-off when requests fail. So instead I want to keep
> the connections open and the requests alive, and just delay dispatching
> them to an upstream backend. Is there anyway I can do something like this?
> I'm open to suggestions of alternative ways to achieve the same effect.
>

>

Hi,

I've implemented something similar with the help of Lua:

http://godevops.net/2015/06/24/adding-random-delay-specific-http-requests-haproxy-lua/

Use a condition that fits your use case (user-agent, counters, etc.).


Best Regards,
Bjoern


Re: POST data logging works without option http-buffer-request

2016-02-10 Thread bjun...@gmail.com
2016-02-10 8:17 GMT+01:00 Willy Tarreau <w...@1wt.eu>:

> On Tue, Feb 09, 2016 at 06:10:01PM +0100, bjun...@gmail.com wrote:
> > Hi,
> >
> > i'm currently testing 1.6.3 and request body logging. I'm wondering that
> > logging of req body even works without setting "option
> > http-buffer-request". Also "no option http-buffer-request" seems to have
> no
> > effect.
> >
> >
> > Is this intended or have i missed something?
>
> you're playing with fire, it just happens to work by pure luck. When
> you capture it you're lucky that the POST data were already there. The
> purpose of the aforementionned option is to wait for the data before
> proceeding. Try to connect using telnet and send your POST request by
> hand, you'll see the difference between the two modes :-)
>
> You may also force your client to send "Expect:100-continue" before
> sending the body, chances are that you'll never have the data in time.
>
> Willy
>



Hi Willy,

thanks for quick reply.


Would it be safer / less error-prone to warn the user if "req.body" is
found in the configuration without "option http-buffer-request" being set?
(Or to not evaluate req.body at all in this case?)



-
Best Regards

Bjoern


POST data logging works without option http-buffer-request

2016-02-09 Thread bjun...@gmail.com
Hi,

I'm currently testing 1.6.3 and request-body logging. I'm surprised that
logging of the request body works even without setting "option
http-buffer-request". Also, "no option http-buffer-request" seems to have no
effect.


Is this intended or have i missed something?



simplified config:

frontend fe_http_in

    bind *:8001

    option httplog

    #option http-buffer-request

    declare capture request len 40
    http-request capture req.body id 0

    capture request header Host len 50
    capture request header User-Agent len 200
    capture request header Referer len 300

    #log-format {"%[capture.req.hdr(0)]"}
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r

    reqidel ^X-Forwarded-For:.*
    option forwardfor
    option http-server-close

    default_backend be


backend be

    option http-server-close

    server s 127.0.0.1:80




-
Best Regards

Bjoern


DRAIN status

2015-12-08 Thread bjun...@gmail.com
Hi,

when a health check ("fall 2") on a backend server is failing, the status of
the server changes to "DRAIN 1/2" (I do not manually set the DRAIN state,
nor do I have agent checks).

Does that mean that, for the period until the next health check, the server
is completely removed from load balancing? (Or is "DRAIN" misleading here?)

HAProxy Version: 1.5.14

-
Best Regards

Bjoern


Re: [SPAM] Re: Architecture guide reworked

2015-12-02 Thread bjun...@gmail.com
2015-12-02 17:31 GMT+01:00 Olivier Doucet :

>
>
> 2015-12-02 17:25 GMT+01:00 Olivier Doucet :
>
>>
>> 2015-12-02 15:44 GMT+01:00 Michel Blanc :
>>
>>> Very good idea.
>>>
>>> Do you plan creating a git repo somewhere so people can contribute
>>> and/or create issues ?
>>>
>>> You might be interested in https://www.gitbook.com/ or
>>> https://readthedocs.org for the automated builds and the various output
>>> formats they provide.
>>>
>>
>> I intend to create the guide with text format that works so well for the
>> original documentation. I will also use haproxy-dconv project to test html
>> version.
>>
>> I think I'll create a repo on github to start the rewrite (will be easier
>> for contribs) but when rewrite is over this guide will of course get back
>> into haproxy project.
>>
>>
>> I did not know gitbook or readthedocs. Any other opinion on this ?
>>
>
>  Hi again,
>
> So the gitbook format is specific (using GitHub markdown format), if I
> understand correctly. I'm not sure having a different format from the other
> doc files in HAProxy is a good idea ...
>
>
>
>
Hi Olivier,

Very good proposal, and I would like to contribute as well.

I would like to see a chapter on health checks:

  - simple HTTP health checks
  - checking specific applications (Linux, via an xinetd script)
  - checking specific applications (Windows, via PowerShell + a simple
    webserver (HttpListener))
  - TCP health checks (cleartext protocols)
  - TCP health checks (binary protocols)
  - how to “chain” health checks or create health-check dependencies
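
For the first item, a minimal sketch of a simple HTTP health check (the
backend name, URL, host and timings below are just placeholders):

```
backend be_app
    # mark the server down after 3 failed checks, back up after 2 good ones
    option httpchk GET /health HTTP/1.1\r\nHost:\ app.local
    server app1 10.0.0.10:80 check inter 2s fall 3 rise 2
```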


--
Best Regards,

Bjoern


Re: Chaining haproxy instances for a migration scenario

2015-09-11 Thread bjun...@gmail.com
2015-09-11 10:55 GMT+02:00 Baptiste :

> On Fri, Sep 11, 2015 at 10:41 AM, Tim Verhoeven
>  wrote:
> > Hello everyone,
> >
> > I'm mostly passive on this list but a happy haproxy user for more then 2
> > years.
> >
> > Now, we are going to migrate our platform to a new provider (and new
> > hardware) in the coming months and I'm looking for a way to avoid a
> one-shot
> > migration.
> >
> > So I've been doing some googl'ing and it should be possible to use the
> proxy
> > protocol to send traffic from one haproxy instance (at the old site) to
> the
> > another haproxy instance (at the new site). Then at the new site the
> haproxy
> > instance there would just accept the traffic as it came from the internet
> > directly.
> >
> > Is that how it works? Is that possible?
> >
> > Ideally the traffic between the 2 haproxy instances would be encrypted
> with
> > TLS to avoid having to setup an VPN.
> >
> > Now I haven't found any examples of this kind of setup, so any pointers
> on
> > how to set this up would be really appriciated.
> >
> > Thanks,
> > Tim
>
>
> Hi Tim,
>
> Your usecase is an interesting scenario for a blog article :)
>
> About your questions, simply update the app backend of the current
> site in order to add a new 'server' that would be the HAProxy of the
> new site:
>
> backend myapp
>  [...]
>  server app1 ...
>  server app2 ...
>  server newhaproxy [IP]:8443 check ssl send-proxy-v2 ca-file
> /etc/haproxy/myca.pem crt /etc/haproxy/client.pem
>
> ca-file: to validate the certificate presented by the server using
> your own CA (or use DANGEROUSLY "ssl-server-verify none" in your
> global section)
> crt : allows you to use a client certificate to get connected on the
> other HAProxy
>
> On the newhaproxy (in the new instance):
>
> frontend fe_myapp
>  bind :80
>  bind :443 ssl crt server.pem
>  bind :8443 ssl crt server.pem accept-proxy
>
>
>
> You can play with weights on the current site to send a few requests to
> the newhaproxy box, and increase the weight once you're confident.
>
> Baptiste
>
>

Hi Tim,

I have a similar use case (a smooth migration from 1.5 to 1.6). I've
recently blogged about this:


http://godevops.net/2015/09/07/testing-new-haproxy-versions-with-some-sort-of-ab-testing/


-
Best Regards / Mit freundlichen Grüßen

Bjoern


Re: tcp-request + gpc ACLs

2015-07-20 Thread bjun...@gmail.com
2015-07-13 18:07 GMT+02:00 bjun...@gmail.com bjun...@gmail.com:
 Hi,

 i'm using stick-tables to track requests and block abusers if needed.
 Abusers should be blocked only for a short period of time and i want a
 stick-table entry to expire.

 Therefore, i have to check if the client is already marked as an
 abuser and do not track this client.


 example config:


 frontend fe_http_in

   bind 127.0.0.1:8001

   stick-table type ip size 100k expire 600s store gpc0

   # Not working
   # acl is_overlimit sc0_get_gpc0(fe_http_in) gt 0

   # Working
   # acl is_overlimit src_get_gpc0(fe_http_in) gt 0

   tcp-request connection track-sc0 src if !is_overlimit

   default_backend be


 backend be

   ... incrementing gpc0 ( with sc0_inc_gpc0) ...



 If I use sc0_get_gpc0, the stick-table entry will never expire
 because the timer will be reset (tcp-request connection track-sc0
 ... seems to ignore this acl).


 With src_get_gpc0 everything works as expected.


 Both ACL's are correct and triggered (verified with debug headers
 (http-response set-header ...))


 What's the difference between these ACL's in conjunction with
 tcp-request connection track-sc0 ... ?

 Is this a bug or intended behaviour ?


 ---
 Bjoern



Has anyone observed the same behaviour, or does anyone know whether this is
the correct behaviour?



---
Bjoern



Re: Problems compiling HAProxy with Lua Support

2015-07-20 Thread bjun...@gmail.com
2015-07-16 21:04 GMT+02:00 Vincent Bernat ber...@luffy.cx:
  ❦ 13 juillet 2015 19:58 +0200, Vincent Bernat ber...@luffy.cx :

 I suppose that either -ldl could be added to OPTIONS_LDFLAGS append,
 like this is done for -lm. Or USE_DL section could be moved towards the
 end. I think the first solution is better since libdl seems to be a
 dependency of lua.

 Note that this is not an Ubuntu-specific but they enforce --as-needed by
 default directly in the linker.

 Here is a proposition. An alternative would be to move the ifneq
 ($(USE_DL),) section at the end of the Makefile.



 --
 Keep it right when you make it faster.
 - The Elements of Programming Style (Kernighan & Plauger)



Thanks Vincent,

I can confirm this fixes the issue.



Best Regards,

---
Bjoern



Problems compiling HAProxy with Lua Support

2015-07-13 Thread bjun...@gmail.com
Hi,


I'm trying to build HAProxy 1.6 (git HEAD) with Lua (5.3.1) on Ubuntu 14.04.


This was my first try:


make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=yes
LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/ LDFLAGS=-ldl



resulting error:

.
.
.
gcc -ldl -o haproxy src/haproxy.o src/sessionhash.o src/base64.o
src/protocol.o src/uri_auth.o src/standard.o src/buffer.o src/log.o
src/task.o src/chunk.o src/channel.o src/listener.o src/lru.o
src/xxhash.o src/time.o src/fd.o src/pipe.o src/regex.o src/cfgparse.o
src/server.o src/checks.o src/queue.o src/frontend.o src/proxy.o
src/peers.o src/arg.o src/stick_table.o src/proto_uxst.o
src/connection.o src/proto_http.o src/raw_sock.o src/appsession.o
src/backend.o src/lb_chash.o src/lb_fwlc.o src/lb_fwrr.o src/lb_map.o
src/lb_fas.o src/stream_interface.o src/dumpstats.o src/proto_tcp.o
src/applet.o src/session.o src/stream.o src/hdr_idx.o src/ev_select.o
src/signal.o src/acl.o src/sample.o src/memory.o src/freq_ctr.o
src/auth.o src/proto_udp.o src/compression.o src/payload.o src/hash.o
src/pattern.o src/map.o src/namespace.o src/mailers.o src/dns.o
src/vars.o src/ev_poll.o src/ev_epoll.o src/ssl_sock.o src/shctx.o
src/hlua.o ebtree/ebtree.o ebtree/eb32tree.o ebtree/eb64tree.o
ebtree/ebmbtree.o ebtree/ebsttree.o ebtree/ebimtree.o
ebtree/ebistree.o   -lcrypt  -lz -ldl  -lssl -lcrypto
-L/opt/lua53/lib/ -llua -lm -L/usr/lib -lpcreposix -lpcre
/usr/bin/ld: /opt/lua53/lib//liblua.a(loadlib.o): undefined reference
to symbol 'dlclose@@GLIBC_2.2.5'
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu/libdl.so:
error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make: *** [haproxy] Error 1




Only if I change LDFLAGS to the following is the build successful:



make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=yes
LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/
LDFLAGS=-Wl,--no-as-needed




I'm not aware of the consequences; does anybody have an idea?



---
Bjoern



tcp-request + gpc ACLs

2015-07-13 Thread bjun...@gmail.com
Hi,

I'm using stick tables to track requests and block abusers when needed.
Abusers should only be blocked for a short period of time, and I want the
stick-table entry to expire.

Therefore, I have to check whether the client is already marked as an
abuser and, if so, not track that client.


example config:


frontend fe_http_in

  bind 127.0.0.1:8001

  stick-table type ip size 100k expire 600s store gpc0

  # Not working
  # acl is_overlimit sc0_get_gpc0(fe_http_in) gt 0

  # Working
  # acl is_overlimit src_get_gpc0(fe_http_in) gt 0

  tcp-request connection track-sc0 src if !is_overlimit

  default_backend be


backend be

  ... incrementing gpc0 ( with sc0_inc_gpc0) ...



If I use sc0_get_gpc0, the stick-table entry will never expire
because the timer will be reset ("tcp-request connection track-sc0
..." seems to ignore this ACL).


With src_get_gpc0 everything works as expected.


Both ACLs are correct and are triggered (verified with debug headers
via "http-response set-header ...").


What's the difference between these ACLs in conjunction with
"tcp-request connection track-sc0 ..."?

Is this a bug or intended behaviour?


---
Bjoern



Re: [ANNOUNCE] haproxy-1.5.14

2015-07-05 Thread bjun...@gmail.com
Hi,

is there any workaround if updating to 1.5.14 isn't possible
immediately (e.g. disabling HTTP pipelining)?


---
Best Regards / Mit freundlichen Grüßen

Bjoern



Re: Updating a stick table from the HTTP response

2015-06-30 Thread bjun...@gmail.com
Hi Holger,


"tcp-response content track-" / "http-response track-" would be a nice
feature; I don't know whether it is on the roadmap.


For the moment i can only imagine the following (needs HAProxy 1.6):


http-response lua script.lua


Within this Lua function, you check the HTTP response code and
update/set a stick-table entry via the HAProxy admin socket ("set table ...").

(I don't know whether there is a native Lua function for this case, i.e.
without using the socket.)


I didn't say it's an elegant solution ...  :)
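
To make the idea concrete, a rough, untested sketch (the socket address,
the table name "fe_main", and the assumption that a stats socket is bound
with "level admin" on 127.0.0.1:9999 are all illustrative):

```lua
-- Untested sketch: react to a 401 by bumping a gpc0 counter in a
-- stick table through the admin socket ("set table" command).
function track_401(txn)
    if txn.sf:status() == "401" then
        local sock = core.tcp()  -- non-blocking socket from the HAProxy Lua API
        sock:settimeout(1)
        if sock:connect("127.0.0.1", 9999) then
            sock:send("set table fe_main key " .. txn.sf:src()
                      .. " data.gpc0 1\n")
            sock:close()
        end
    end
end
```

This would then be hooked up via something like "http-response lua
track_401", as mentioned above.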


---
Best Regards / Mit freundlichen Grüßen

Bjoern

2015-06-30 12:37 GMT+02:00 Holger Just hapr...@meine-er.de:
 Hello all,

 Unfortunately, I have not received any feedback on my earlier email so
 I'll try again.

 I am still struggling trying to implement a throttling mechanism based
 on prior HTTP responses of the same client.

 Basically, I have some requests (using Basic Auth or other mechanisms)
 that might result in HTTP 401 responses from the backend server. I want
 to throttle further requests by the same client if the number of such
 responses reaches a certain threshold (responses / second). In that
 case, I want to first delay and finally block new requests completely
 until the error rate goes down.

 In order to implement this, it would be great if we would have the
 possibility to update stick entries based on the response and not only
 the request, e.g.

 tcp-response content track-sc2 src if { status 401 }

 Is something like this feasible, maybe under the restriction that the
 user has to make sure on their own that the data required to update the
 stick table entry is still available?

 Thank you for your feedback.

 --Holger


 Holger Just wrote:
 Hello all,

 with HAProxy 1.5.11, we have implemented rate limiting based on some
 aspects of the request (Host header, path, ...). In our implementation,
 we delay limited requests by forcing a WAIT_END in order to prevent
 brute-force attacks against e.g. passwords or login tokens:


 acl bruteforce_slowdown sc2_http_req_rate gt 20
 acl limited_path path_beg /sensitive/stuff

 stick-table type ip size 100k expire 30m store http_req_rate(300s)
 tcp-request content track-sc2 src if METH_POST limited_path

 # Delay the request for 10 seconds if we have too many requests
 tcp-request inspect-delay 10s
 tcp-request content accept unless bruteforce_slowdown limited_path
 tcp-request content accept if WAIT_END


 As you can see above, we track only certain requests to sensitive
 resources and delay further requests after 20 req / 300 s without taking
 the actual response into account. This is good enough for e.g. a web
 form to login or change a password.

 Now, unfortunately we have some endpoints which are protected with Basic
 Auth which is validated by the application. If the password is
 incorrect, we return an HTTP 401.

 In order to prevent brute-forcing of passwords against these endpoints,
 we would like to employ a similar delay mechanism. Unfortunately, we
 can't detect from the request headers alone if we have a bad request but
 have to inspect the response and increase the sc2 counter only of we
 have seen a 401.

 In the end, I would like to use a fetch similar to sc1_http_err_rate but
 reduced to only specific cases, i.e. 401 responses on certain paths or
 Host names.

 Now the problem is that we apparently can't manipulate the stick table
 from a HTTP response, or more precisely: I have not found a way to do it.

 We would like to do something like


 tcp-response content track-sc2 src if { status 401 }


 which would allow us to track these error-responses similar to the first
 approach and handle the next requests the same way as above.

 Now my questions are:

 * Is something like this possible/feasible right now?
 * Is there some other way to implement rate limiting based on certain
   server responses?
 * If this is not possible right now, would it be feasible to implement
   the possibility to track responses similar to what is possible with
   requests right now?

 Thank you for your feedback,
 Holger Just





Re: Delaying requests with Lua

2015-06-22 Thread bjun...@gmail.com
Hi,

I got it working (with math.random).

I can confirm this is non-blocking with concurrent requests. Thanks, Thierry :)


Full working example:


haproxy.cfg:

-
global
 lua-load /etc/haproxy/delay.lua


defaults
 mode http
 timeout connect 10s
 timeout client 10s
 timeout server 10s


frontend fe
 bind 127.0.0.1:8001 name fe

 http-request lua delay_request if  { ... your condition ... }

 default_backend be


backend be
 server s 127.0.0.1:80
-



delay.lua:

-
function delay_request(txn)

core.msleep(1000 + math.random(1000))

end
-


---
Bjoern

2015-06-19 19:37 GMT+02:00 PiBa-NL piba.nl@gmail.com:
 try it with:  math.rand(1000)

 bjun...@gmail.com schreef op 19-6-2015 om 14:15:

 Hi,


 i've tried Thierry's example:



 function delay_request(txn)
core.msleep(1000 + txn.f.rand(1000))
 end



 The request is forwarded to the backend (but isn't delayed and HAProxy
 throws the following error):

 [ALERT] 169/140715 (8264) : Lua function 'delay_request': runtime
 error: /etc/haproxy/case02.lua:4: bad argument #1 to 'rand' ((null)).


 Problem with the rand() fetch in combination with Lua ?


 Using latest 1.6 git HEAD.


 I'm still learning Lua and the integration in HAProxy. Any help will
 be appreciated.


 --
 Bjoern

 2015-06-18 21:25 GMT+02:00 PiBa-NL piba.nl@gmail.com:

 Ok i didn't realize the msleep to be coming from haproxy itself the
 'core.'
 should have made me think twice before sending that mail.
 Thanks for clearing it up, of course still actual results from Bjoern
 will
 be interesting to hear a well :).

 Thierry schreef op 18-6-2015 om 21:12:

 On Thu, 18 Jun 2015 20:27:07 +0200
 PiBa-NL piba.nl@gmail.com wrote:

 Thing to check, what happens to concurrent connection requests?
 My guess is with 10 concurrent requests it might take up to 20
 seconds(worst case for 10 connections) for some requests instead of the
 expected max 2..

 Note that we don't use the sleep from the standard Lua API, but the
 sleep from the HAProxy Lua API.

 The Lua sleep is not a real sleep. It has the behaviour of a sleep only
 in the Lua code. Its real behaviour is to block the request, set a task
 timeout, and give back the hand to the HAProxy scheduler.

 So, during the sleep, HAProxy is not blocked and continues to process
 other connections. Same behaviour for TCP access; it seems to be
 blocked in the Lua code, but HAProxy is not blocked.

 Thierry FOURNIER schreef op 18-6-2015 om 19:35:

 Hi,

 You can do this with Lua. Its very easy.

 First, you create a lua file containing the following code. The name
 of
 this Lua file is file.lua.

   function delay_request(txn)
  core.msleep(1000 + txn.f.rand(1000))
   end

 Second, you configura haproxy for loading ths file. In the global
 section:

   lua-load file.lua

 In your frontend (or backend)

   http-request lua delay_request if { ... your condition ... }

 Note that I didn't test this configuration, I'm just giving the main
 lines. Please share your results, it's maybe interesting for everyone.

 Thierry



 On Thu, 18 Jun 2015 17:55:31 +0200
 bjun...@gmail.com bjun...@gmail.com wrote:

 Hi,

 i want to delay specific requests and i want to have a random delay
 for every request (for example in a range from 1000ms - 2000ms)


 As an ugly hack, you can use the following (with a static value):


 tcp-request inspect-delay 2000ms
 tcp-request content accept if WAIT_END


 I think i can achieve random delays with Lua. Does anyone have a
 example how this can be realized with Lua ?



 Thanks in advance !



 ---
 Bjoern






Delaying requests with Lua

2015-06-18 Thread bjun...@gmail.com
Hi,

I want to delay specific requests, and I want a random delay
for every request (for example, in a range from 1000 ms to 2000 ms).


As an ugly hack, you can use the following (with a static value):


 tcp-request inspect-delay 2000ms
 tcp-request content accept if WAIT_END


I think I can achieve random delays with Lua. Does anyone have an
example of how this can be done with Lua?



Thanks in advance !



---
Bjoern



dynamic redirect with regex capture groups

2014-10-14 Thread bjun...@gmail.com
Hi,


I would like to redirect the following URLs with HAProxy:


www.example.at.prod.site.local  ->  m.example.at
www.example.de.prod.site.local  ->  m.example.de
.
.
.
.


apache mod_rewrite-rule:


RewriteCond %{HTTP_HOST} ^(www\.)?example\.([a-z]{2,3}).prod\.site\.local$ [NC]
RewriteRule ^/(.*)$ http://m.example.%2/$1 [R=301,NE,QSA,L]


I've looked into "http-request redirect", log-format variables, map ACLs,
etc., but I didn't find a dynamic method.


Is this actually possible in HAProxy?
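
One possibly workable approach, sketched with the regsub converter
(untested; assumes HAProxy 1.6+, where redirect rules accept log-format
expressions): strip the known prefix and suffix from the Host header so
only the TLD remains, then redirect with "prefix" so the original path and
query string are kept:

```
# match www.example.<tld>.prod.site.local (tld = 2-3 letters)
acl mobile_host req.hdr(host) -m reg -i ^(www\.)?example\.[a-z]{2,3}\.prod\.site\.local$

# leave only the TLD in the Host header, then build the target host around it
http-request redirect prefix http://m.example.%[req.hdr(host),regsub(^www\.,),regsub(^example\.,),regsub(\.prod\.site\.local$,)] code 301 if mobile_host
```

The chained regsub calls avoid parentheses inside the converter arguments,
which the configuration parser handles poorly.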


---
Bjoern



Re: Random values with inspect-delay possible ?

2014-09-10 Thread bjun...@gmail.com
2014-09-04 14:33 GMT+02:00 bjun...@gmail.com bjun...@gmail.com:
 Hi,


 i'm using the following in a backend to rate-limit spider or bad
 behavior clients:


 backend be_spider

 tcp-request inspect-delay 2000ms
 tcp-request content accept if WAIT_END

 server node01 192.168.1.10:80 maxconn {LOWVALUE}



 If now an abuser/spider/crawler is making many requests at the same
 time/same second, all requests are delayed for  ms. But if the
 delay is over, all requests are bursting anyway at the same point in
 time.


 What i want to do is to set the inspect-delay in a random fashion for
 every request (for example in a range from 1000ms - 2000ms) to
 distribute the requests over a timeframe and absorb immensive bursts.


 The overall backend capacity is limited with a low maxconn value, but
 i have to control bursts of requests also.


 Is this possible or is there a different method to accomplish this ?

 ---
 Bjoern

Hi,

If this is not possible, I would like to propose it as a feature (if
this is a valid feature request).


---
Bjoern



Random values with inspect-delay possible ?

2014-09-04 Thread bjun...@gmail.com
Hi,


I'm using the following in a backend to rate-limit spiders or badly
behaving clients:


backend be_spider

tcp-request inspect-delay 2000ms
tcp-request content accept if WAIT_END

server node01 192.168.1.10:80 maxconn {LOWVALUE}



If an abuser/spider/crawler makes many requests at the same
time/same second, all requests are delayed for  ms. But once the
delay is over, all requests still burst at the same point in
time.


What I want to do is to set the inspect-delay in a random fashion for
every request (for example in a range from 1000ms to 2000ms) to
distribute the requests over a timeframe and absorb massive bursts.


The overall backend capacity is limited with a low maxconn value, but
I also have to control bursts of requests.


Is this possible or is there a different method to accomplish this ?

---
Bjoern



Re: tracking multiple samples in stick-table

2014-09-03 Thread bjun...@gmail.com
2014-08-25 18:58 GMT+02:00 bjun...@gmail.com bjun...@gmail.com:
 2014-08-20 19:33 GMT+02:00 bjun...@gmail.com bjun...@gmail.com:
 2014-08-18 18:49 GMT+02:00 Emeric Brun eb...@haproxy.com:
 On 08/18/2014 05:49 PM, Baptiste wrote:

 On Sun, Aug 17, 2014 at 4:49 PM, bjun...@gmail.com bjun...@gmail.com
 wrote:

 Hi,

 i was digging through some old threads:


 http://t53814.web-haproxy.webtalks.info/help-with-tcp-request-content-track-sc1-t53814.html
 http://marc.info/?l=haproxy&m=139458469126719&w=2


 I have the same requirement and want to track not only on src (source
 ip), i want to concatenate src + hdr(User-Agent) or hdr(User-Agent) +
 hdr(X-Forward-For).



 Is there a way to actually do this ? (maybe it could be hashed, like
 it is possible with base32+src ?)


 Thanks,

 ---
 Bjoern



 Hi Bjoern,

 There is no way to do this currently in HAProxy.

 Baptiste



 Hi All,

 I think it is possible:

 You need to add a new header to the request, with a concat of these
 different values (http-request add-header and use log format to create the
 value).

 And use the fetch on this header on the stickin rule.

 Regards,
 Emeric







 Hi,


 i've tried the following config, but HAProxy isn't tracking anything :



 frontend http_in_01

   bind  0.0.0.0:80
   log global
   option  httplog

   reqidel ^X-Forwarded-For:.*
   option forwardfor

   option http-server-close

   # http-request set-header X-Concat
 %[req.fhdr(User-Agent)]_%[req.fhdr(X-Forwarded-For,-1)]

  http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[src]


   # stick-table type binary len 180 size 32m expire 1m store
 http_req_rate(10s)

  stick-table type string len 180 size 32m expire 1m store
 http_req_rate(10s)


tcp-request inspect-delay 10s
tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP


unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID


# acl  is_found req.hdr(X-Concat) -m sub Firefox
# http-request set-header X-Found yes if is_found



 Example X-Concat-Header:

 Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101
 Firefox/31.0_192.168.138.185



 Does anyone have any ideas why HAProxy isn't tracking anything or if
 my config is wrong ?


 ---
 Bjoern


 Hi,


 it's working now with the following workaround (config simplified):



 frontend http_in_01

  bind  0.0.0.0:80

  http-request set-header X-Concat 
 %[req.fhdr(User-Agent)]_%[req.fhdr(host)]

  acl  is_found req.fhdr(X-Concat) -m found
  http-request set-header X-Found yes if is_found

  default_backend forward


 backend forward

 server localhost 127.0.0.1:


 frontend internal_real

  bind  127.0.0.1:

  stick-table type string len 180 size 32m expire 1m store
 http_req_rate(10s)

  tcp-request inspect-delay 10s
  tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP

  default_backend live-nodes


 backend live-nodes

   server apache01 127.0.0.1:8090 check inter 2s rise 2 fall 2
 maxconn 250 weight 50



 This is the same workaround that is used here for logging purposes:

 https://github.com/jvehent/haproxy-aws/blob/master/haproxy.cfg




 It seems that if you add a new/custom header in frontend, it is
 available to ACL's in the same frontend (acl is_found is matched), but
 not to stick-table tracking functionality.


 Is this by design and intended behaviour ?



 ---
  Bjoern


Hi,

Does anyone know if this is by design and intended behaviour ?


---
 Bjoern



Re: tracking multiple samples in stick-table

2014-09-03 Thread bjun...@gmail.com
2014-09-03 11:36 GMT+02:00 Baptiste bed...@gmail.com:


 Hi,


 it's working now with the following workaround (config simplified):



 frontend http_in_01

  bind  0.0.0.0:80

  http-request set-header X-Concat 
 %[req.fhdr(User-Agent)]_%[req.fhdr(host)]

  acl  is_found req.fhdr(X-Concat) -m found
  http-request set-header X-Found yes if is_found

  default_backend forward


 backend forward

 server localhost 127.0.0.1:


 frontend internal_real

  bind  127.0.0.1:

  stick-table type string len 180 size 32m expire 1m store
 http_req_rate(10s)

  tcp-request inspect-delay 10s
  tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP

  default_backend live-nodes


 backend live-nodes

   server apache01 127.0.0.1:8090 check inter 2s rise 2 fall 2
 maxconn 250 weight 50



 This is the same workaround that is used here for logging purposes:

 https://github.com/jvehent/haproxy-aws/blob/master/haproxy.cfg




 It seems that if you add a new/custom header in frontend, it is
 available to ACL's in the same frontend (acl is_found is matched), but
 not to stick-table tracking functionality.


 Is this by design and intended behaviour ?



 ---
  Bjoern


 Hi,

 does anyone know if this is by design and intended behaviour ?


 ---
  Bjoern


 Hi Bjoern,

 add a 'http-request track-sc0 (.)' in your frontend and it should work.
 Note this feature is only available in 1.6.

 Baptiste


Hi Baptiste,

Very cool, it's working now with 1.6 and http-request track-sc*.
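For the archives, a sketch of that 1.6 setup without the loopback hop, simplified from the configs quoted above:

```
frontend http_in_01

    bind 0.0.0.0:80

    stick-table type string len 180 size 32m expire 1m store http_req_rate(10s)

    # build the composite key, then track it directly in the frontend
    http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[src]
    http-request track-sc0 req.fhdr(X-Concat)
```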


If I haven't missed something, it's not yet in the 1.6 documentation.


Thanks,

---
 Bjoern



Re: tracking multiple samples in stick-table

2014-08-25 Thread bjun...@gmail.com
2014-08-20 19:33 GMT+02:00 bjun...@gmail.com bjun...@gmail.com:
 2014-08-18 18:49 GMT+02:00 Emeric Brun eb...@haproxy.com:
 On 08/18/2014 05:49 PM, Baptiste wrote:

 On Sun, Aug 17, 2014 at 4:49 PM, bjun...@gmail.com bjun...@gmail.com
 wrote:

 Hi,

 i was digging through some old threads:


 http://t53814.web-haproxy.webtalks.info/help-with-tcp-request-content-track-sc1-t53814.html
 http://marc.info/?l=haproxy&m=139458469126719&w=2


 I have the same requirement and want to track not only on src (source
 ip), i want to concatenate src + hdr(User-Agent) or hdr(User-Agent) +
 hdr(X-Forward-For).



 Is there a way to actually do this ? (maybe it could be hashed, like
 it is possible with base32+src ?)


 Thanks,

 ---
 Bjoern



 Hi Bjoern,

 There is no way to do this currently in HAProxy.

 Baptiste



 Hi All,

 I think it is possible:

 You need to add a new header to the request, with a concat of these
 different values (http-request add-header and use log format to create the
 value).

 And use the fetch on this header on the stickin rule.

 Regards,
 Emeric







 Hi,


 i've tried the following config, but HAProxy isn't tracking anything :



 frontend http_in_01

   bind  0.0.0.0:80
   log global
   option  httplog

   reqidel ^X-Forwarded-For:.*
   option forwardfor

   option http-server-close

   # http-request set-header X-Concat
 %[req.fhdr(User-Agent)]_%[req.fhdr(X-Forwarded-For,-1)]

  http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[src]


   # stick-table type binary len 180 size 32m expire 1m store
 http_req_rate(10s)

  stick-table type string len 180 size 32m expire 1m store
 http_req_rate(10s)


tcp-request inspect-delay 10s
tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP


unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
unique-id-header X-Unique-ID


# acl  is_found req.hdr(X-Concat) -m sub Firefox
# http-request set-header X-Found yes if is_found



 Example X-Concat-Header:

 Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101
 Firefox/31.0_192.168.138.185



 Does anyone have any ideas why HAProxy isn't tracking anything or if
 my config is wrong ?


 ---
 Bjoern


Hi,


It's working now with the following workaround (config simplified):



frontend http_in_01

 bind  0.0.0.0:80

 http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[req.fhdr(host)]

 acl  is_found req.fhdr(X-Concat) -m found
 http-request set-header X-Found yes if is_found

 default_backend forward


backend forward

server localhost 127.0.0.1:


frontend internal_real

 bind  127.0.0.1:

 stick-table type string len 180 size 32m expire 1m store http_req_rate(10s)

 tcp-request inspect-delay 10s
 tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP

 default_backend live-nodes


backend live-nodes

  server apache01 127.0.0.1:8090 check inter 2s rise 2 fall 2 maxconn 250 weight 50



This is the same workaround that is used here for logging purposes:

https://github.com/jvehent/haproxy-aws/blob/master/haproxy.cfg




It seems that if you add a new/custom header in a frontend, it is
available to ACLs in the same frontend (acl is_found is matched), but
not to the stick-table tracking functionality.


Is this by design and intended behaviour ?



---
 Bjoern



Re: tracking multiple samples in stick-table

2014-08-20 Thread bjun...@gmail.com
2014-08-18 18:49 GMT+02:00 Emeric Brun eb...@haproxy.com:
 On 08/18/2014 05:49 PM, Baptiste wrote:

 On Sun, Aug 17, 2014 at 4:49 PM, bjun...@gmail.com bjun...@gmail.com
 wrote:

 Hi,

 i was digging through some old threads:


 http://t53814.web-haproxy.webtalks.info/help-with-tcp-request-content-track-sc1-t53814.html
 http://marc.info/?l=haproxy&m=139458469126719&w=2


 I have the same requirement and want to track not only on src (source
 ip), i want to concatenate src + hdr(User-Agent) or hdr(User-Agent) +
 hdr(X-Forward-For).



 Is there a way to actually do this ? (maybe it could be hashed, like
 it is possible with base32+src ?)


 Thanks,

 ---
 Bjoern



 Hi Bjoern,

 There is no way to do this currently in HAProxy.

 Baptiste



 Hi All,

 I think it is possible:

 You need to add a new header to the request, with a concat of these
 different values (http-request add-header and use log format to create the
 value).

 And use the fetch on this header on the stickin rule.

 Regards,
 Emeric







Hi,


I've tried the following config, but HAProxy isn't tracking anything:



frontend http_in_01

  bind 0.0.0.0:80
  log global
  option httplog

  reqidel ^X-Forwarded-For:.*
  option forwardfor

  option http-server-close

  # http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[req.fhdr(X-Forwarded-For,-1)]
  http-request set-header X-Concat %[req.fhdr(User-Agent)]_%[src]

  # stick-table type binary len 180 size 32m expire 1m store http_req_rate(10s)
  stick-table type string len 180 size 32m expire 1m store http_req_rate(10s)

  tcp-request inspect-delay 10s
  tcp-request content track-sc0 req.fhdr(X-Concat) if HTTP

  unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
  unique-id-header X-Unique-ID

  # acl is_found req.hdr(X-Concat) -m sub Firefox
  # http-request set-header X-Found yes if is_found



Example X-Concat-Header:

Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101
Firefox/31.0_192.168.138.185



Does anyone have any ideas why HAProxy isn't tracking anything or if
my config is wrong ?


---
Bjoern



Re: tracking multiple samples in stick-table

2014-08-18 Thread bjun...@gmail.com
Thanks Emeric, brilliant idea.

I will try this configuration.


---
Bjoern

2014-08-18 18:49 GMT+02:00 Emeric Brun eb...@haproxy.com:
 On 08/18/2014 05:49 PM, Baptiste wrote:

 On Sun, Aug 17, 2014 at 4:49 PM, bjun...@gmail.com bjun...@gmail.com
 wrote:

 Hi,

 i was digging through some old threads:


 http://t53814.web-haproxy.webtalks.info/help-with-tcp-request-content-track-sc1-t53814.html
 http://marc.info/?l=haproxy&m=139458469126719&w=2


 I have the same requirement and want to track not only on src (source
 ip), i want to concatenate src + hdr(User-Agent) or hdr(User-Agent) +
 hdr(X-Forward-For).



 Is there a way to actually do this ? (maybe it could be hashed, like
 it is possible with base32+src ?)


 Thanks,

 ---
 Bjoern



 Hi Bjoern,

 There is no way to do this currently in HAProxy.

 Baptiste



 Hi All,

 I think it is possible:

 You need to add a new header to the request, with a concat of these
 different values (http-request add-header and use log format to create the
 value).

 And use the fetch on this header on the stickin rule.

 Regards,
 Emeric








tracking multiple samples in stick-table

2014-08-17 Thread bjun...@gmail.com
Hi,

I was digging through some old threads:

http://t53814.web-haproxy.webtalks.info/help-with-tcp-request-content-track-sc1-t53814.html
http://marc.info/?l=haproxy&m=139458469126719&w=2


I have the same requirement and want to track not only src (the source
IP); I want to concatenate src + hdr(User-Agent) or hdr(User-Agent) +
hdr(X-Forwarded-For).



Is there a way to actually do this ? (maybe it could be hashed, like
it is possible with base32+src ?)


Thanks,

---
Bjoern



Re: Problem with external healthchecks and haproxy-ss-20140720

2014-08-07 Thread bjun...@gmail.com
2014-08-07 1:16 GMT+02:00 Cyril Bonté cyril.bo...@free.fr:
 Hi Bjoern,

 Le 06/08/2014 22:16, bjun...@gmail.com a écrit :

 Hi Mark,

 trying to test this one, but if i use the frontend/backend-syntax
 (and not the listen-syntax) with external-check command, HAProxy
 segfaults :


 #   /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -d
 Available polling systems :
epoll : pref=300,  test result OK
 poll : pref=200,  test result OK
   select : pref=150,  test result FAILED
 Total: 3 (2 usable), will use epoll.
 Using epoll() as the polling mechanism.
 [ALERT] 216/205611 (1316) : Starting [be_test:node01] check: no listener.
 Segmentation fault (core dumped)


 OK, I could reproduce it. This is happening for several reasons :
 1. External checks can only be used in listen sections.
 This is not clearly documented but it can be guessed by the arguments passed
 to the command : the proxy address and port are required (see [1]).
 I think this is annoying because it's only usable in some specific use
 cases. Maybe we should rework this part of the implementation : I see that
 for unix sockets, the port argument is set to NOT_USED, we could do the
 same for checks in a backend section. Willy, Simon, is it OK for you ?


 2. Because external checks are initialized late in the process (only when
 the first check is started), leaving some variables uninitialized.
 I'm going to send a patch which prepares the external checks early in the
 process initialization and quit in a clean way when no listener is found
 (meaning that we are in a backend section). But definitely, we should work
 on an alternative allowing external checks in backends.

 [1]
 http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#external-check%20command


 --
 Cyril Bonté


Thanks Cyril,

It would be very useful if we could have it in backend sections as well.


--
Bjoern



Re: Problem with external healthchecks and haproxy-ss-20140720

2014-08-06 Thread bjun...@gmail.com
2014-08-04 11:44 GMT+02:00 Mark Brooks m...@loadbalancer.org:
 We have started doing some testing with the external health check
 functionality but unfortunately we cannot get the real servers to be
 marked as online when using this feature.

 This was tested with haproxy-ss-20140720

 When using the external check the real servers are never marked as up.
 The external script being called just exits with a status of 0, we are
 getting this in the log -
 Aug  4 09:00:19 lbmaster haproxy[31688]: Health check for server
 VIP_Name/RIP_Name-1 failed, reason: External check passed, code: 0,
 check duration: 6002ms, status: 0/2 DOWN.

 It looks like the check is passing but its not bringing the service online.

 If we set the script to exit with a status of 1 the log shows -
 Aug  4 09:39:48 lbmaster haproxy[31917]: Health check for server
 VIP_Name/RIP_Name failed, reason: External check error, code: 1, check
 duration: 6001ms, status: 0/2 DOWN.

  The configuration is as follows -

 global
 daemon
 stats socket /var/run/haproxy.stat mode 600 level admin
 pidfile /var/run/haproxy.pid
 log /dev/log local4
 maxconn 4
 ulimit-n 81000
 tune.bufsize 16384
 tune.maxrewrite 1024
 external-check

 defaults
 mode http
 balance roundrobin
 timeout connect 4000
 timeout client 42000
 timeout server 43000
 log global

 listen VIP_Name
 bind 192.168.66.198:81 transparent
 mode tcp
 balance leastconn
 server backup 127.0.0.1:9081 backup  non-stick
 option redispatch
 option abortonclose
 maxconn 4
 log global
 option tcplog
 option log-health-checks
 option external-check
 external-check command /root/check.sh
 server RIP_Name 192.168.66.50:80  weight 1  check  inter 6000  rise 2
 fall 3  minconn 0  maxconn 0  on-marked-down shutdown-sessions
 server RIP_Name-1 192.168.66.51:80  weight 100  check  inter 6000
 rise 2  fall 3  minconn 0  maxconn 0  on-marked-down shutdown-sessions

 The /root/check.sh

 #!/bin/bash

 exit 0

 Thanks

 Mark



Hi Mark,

I'm trying to test this one, but if I use the frontend/backend syntax
(rather than the listen syntax) with external-check command, HAProxy
segfaults:


#   /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -d
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[ALERT] 216/205611 (1316) : Starting [be_test:node01] check: no listener.
Segmentation fault (core dumped)



The version I've tested with is haproxy-ss-20140731.

#   haproxy -vv
HA-Proxy version 1.6-dev0-6bcb0a8 2014/07/30


--
Bjoern



ACL ordering/processing

2014-07-14 Thread bjun...@gmail.com
Hi folks,


I have a question regarding the ordering/processing of ACLs.



Example (HAProxy 1.4.24):




frontend http_in
.
.


acl  is_example.com  hdr_beg(host) -i example.com

acl  check_id  url_reg   code=(1001|1002|)

acl  check_id  url_reg   code=(3000|4001|)

use_backend  node01 if  is_example.com  check_id



acl  is_example.de  hdr_beg(host) -i example.de

acl  check_id  url_reg   code=(6573|7890)

use_backend  node02 if  is_example.de  check_id






I assumed that the “check_id” ACL from the second block wouldn’t be
combined/OR’ed with the two “check_id” ACLs from the first block
(because of the other configuration statements in between).



But they are combined/OR’ed. Is this behavior intended ?
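If I read the configuration manual correctly, this is intended: acl lines that share a name are ORed into a single ACL wherever they appear in the section. Giving each block a distinct name keeps the conditions separate, e.g. (same regexes as above, names illustrative):

```
acl  is_example.com  hdr_beg(host) -i example.com
acl  check_id_com    url_reg   code=(1001|1002|)
acl  check_id_com    url_reg   code=(3000|4001|)
use_backend  node01 if is_example.com check_id_com

acl  is_example.de   hdr_beg(host) -i example.de
acl  check_id_de     url_reg   code=(6573|7890)
use_backend  node02 if is_example.de check_id_de
```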



Thanks,
---

Bjoern



redirect question

2014-05-02 Thread bjun...@gmail.com
Hi,

I'm trying a basic redirect with HAProxy:


frontend http


 acl is_domain hdr_dom(host) -i abc.example.com

 acl root path_reg ^$|^/$


 redirect location http://abc.example.com/?code=1234  code 301  if
is_domain  root


Unfortunately this ends up in a redirect loop.

I suspect the ? character; when I escape it with \, the loop
problem is fixed, but then HAProxy redirects to
"http://abc.example.com/\?code=1234"
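One likely explanation: path does not include the query string, so after the redirect the request path is still "/" and the root ACL matches again, producing the loop. A guard on the query parameter should break it; untested sketch, same names as above:

```
acl is_domain hdr_dom(host) -i abc.example.com
acl root      path_reg ^$|^/$
acl has_code  url_param(code) -m found

# only redirect requests that do not already carry the code parameter
redirect location http://abc.example.com/?code=1234 code 301 if is_domain root !has_code
```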


Thanks,

Bjoern


git clone hangs

2014-02-10 Thread bjun...@gmail.com
Hi Willy,


same problem as mentioned here:


http://comments.gmane.org/gmane.comp.web.haproxy/7172



I've tried for three days in a row.



P.S.: 1.5-dev22 is not linked on the front page, is this intended ?


---
Bjoern


Re: use_backend condition-processing

2013-07-01 Thread bjun...@gmail.com
I'm using 1.4.24.

I've tested some cases in the meantime, but these tests don't give a clear
answer.

Does anybody have an idea ?


2013/6/26 bjun...@gmail.com bjun...@gmail.com

 Hi folks,

 i've a question regarding use_backend and how conditions are processed.


 My Example:


 

  frontend http_in_01

  bind  1.2.3.4:80

  log global
  option  httplog

  capture request header Host len 32
  capture request header User-Agent len 200

  reqidel ^X-Forwarded-For:.*
  option forwardfor

  option http-server-close


  acl  is_domain_abc.de  hdr_dom(host)  -i  abc.de

  acl  is_regex_matching  url_reg .
  acl  is_regex_matching  url_reg .
  acl  is_regex_matching  url_reg 
  acl  is_regex_matching  url_reg 
  acl  is_regex_matching  url_reg 
  .
  .
  ... another 1 lines  is_regex_matching



  use_backend  webfarm01  if  is_domain_abc.de  !is_regex_matching

 ---


 If the first condition in use_backend line is NOT met ( the domain in
 the HTTP-Request is not abc.de ),  is the processing for this
 use_backend line stopped or is the second condition ( the
 heavy regex acl) processed anyway ? (and unnecessarily consume
 resources)



 ---
 Bjoern




use_backend condition-processing

2013-06-26 Thread bjun...@gmail.com
Hi folks,

I have a question regarding use_backend and how conditions are processed.


My Example:




 frontend http_in_01

 bind  1.2.3.4:80

 log global
 option  httplog

 capture request header Host len 32
 capture request header User-Agent len 200

 reqidel ^X-Forwarded-For:.*
 option forwardfor

 option http-server-close


 acl  is_domain_abc.de  hdr_dom(host)  -i  abc.de

 acl  is_regex_matching  url_reg .
 acl  is_regex_matching  url_reg .
 acl  is_regex_matching  url_reg 
 acl  is_regex_matching  url_reg 
 acl  is_regex_matching  url_reg 
 .
 .
 ... another 1 lines  is_regex_matching



 use_backend  webfarm01  if  is_domain_abc.de  !is_regex_matching

---


If the first condition in the use_backend line is NOT met (the domain in the
HTTP request is not abc.de), is the processing for this use_backend
line stopped, or is the second condition (the heavy regex ACL)
processed anyway (and unnecessarily consumes resources)?



---
Bjoern


keepalive + content-switching

2013-06-26 Thread bjun...@gmail.com
Hi folks,


We want to use HTTP keep-alive + content switching with HAProxy.


I would like to ask if it's safe to use content switching with HTTP
keep-alive when we use option http-server-close ?


We want to use content switching with standard matching criteria
(hdr_dom(host), url_reg).



HAProxy 1.4.24

---
Bjoern


Re: IPv6 + option forwardfor produces 502

2012-09-28 Thread bjun...@gmail.com
The issue was not related to HAProxy.


apache error logs:

[Fri Sep 28 14:45:08 2012] [notice] child pid 24745 exit signal
Segmentation  fault (11)

That must be mod_rpaf.


2012/9/28 Baptiste bed...@gmail.com

 or any module manipulating the IP address :)

 Could you reply this on the ML please, so everybody will be aware that
 the issue is not related to HAProxy

 cheers



 On Fri, Sep 28, 2012 at 3:15 PM, bjun...@gmail.com bjun...@gmail.com
 wrote:
  Hi,
 
  thanks Baptiste, you were right.
 
 
  apache error logs:
 
  [Fri Sep 28 14:45:08 2012] [notice] child pid 24745 exit signal
 Segmentation
  fault (11)
 
 
  That must be mod_rpaf.
 
  
  Bjoern
 
 
 
 
  2012/9/28 Baptiste bed...@gmail.com
 
  HAProxy logged a SH termination code.
  From the documentation:
   SH   The server aborted before sending its full HTTP response
  headers, or
it crashed while processing the request. Since a server
 aborting
  at
this moment is very rare, it would be wise to inspect its logs
  to
control whether it crashed and why. The logged request may
  indicate a
small set of faulty requests, demonstrating bugs in the
  application.
Sometimes this might also be caused by an IDS killing the
  connection
between haproxy and the server.
 
 
  I'm pretty sure your rpaf mode does not understand IPv6 and simply
 crashes
  :)
 
  cheers
 
 
 
  On Fri, Sep 28, 2012 at 2:48 PM, bjun...@gmail.com bjun...@gmail.com
  wrote:
   Hi,
  
   thanks for quick reply.
  
  
   Backend is Apache 2.2.14
  
  
   log entry:
  
  
   Sep 28 14:45:08 localhost haproxy[3432]:
   2001:XXX:XXX:X::XX:XXX:6977:41559 [28/Sep/2012:14:45:08.023]
   http_in_v6
   apache/node09 0/0/0/-1/25 502 204 - - SH-- 0/0/0/0/0 0/0 GET /
   HTTP/1.1
  
  
  
   -
   Bjoern
  
  
  
  
   2012/9/28 Baptiste bed...@gmail.com
  
   Hi,
  
   Are you sure your backend server is able to process IPv6 address in
   headers?
   Could you provide HAProxy logs showing the 502?
  
   Regards
  
  
   On Fri, Sep 28, 2012 at 1:07 PM, bjun...@gmail.com 
 bjun...@gmail.com
   wrote:
Hi folks,
   
at the moment I'm testing IPv6 with HAProxy
(IPv6-to-IPv4-Translation).
   
Unfortunately IPv6-to-IPv4 HTTP-Connection doesn't work if you have
option
forwardfor in your IPv6-Frontend.
(produces 502 errors on every connection).
   
   
If I remove option forwardfor from the IPv6-Frontend
 (http_in_v6)
it
is
working as expected.
   
   
Unfortunately our application behind HAProxy uses X-Forward-For -
header
for
different functions and also HTTP-Request-Logging is affected
 (Apache
Access-Log + mod_rpaf, only HAProxy-IP is now logged on
 IPv6-Requests
instead of the real client ip).
   
   
Ubuntu 12.04 amd64, haproxy 1.4.22
   
   
haproxy.cfg :
   
   
global
log 127.0.0.1   local0
log 127.0.0.1   local1 notice
maxconn 2
ulimit-n   65536
user haproxy
group haproxy
daemon
stats socket /var/run/haproxy.sock mode 0600 level admin
   
   
defaults
log global
modehttp
option  httplog
option  dontlognull
retries 3
option redispatch
maxconn 19500
timeout connect 10s
timeout client 60s
timeout server 60s
timeout queue  60s
   
   
frontend http_in_v6
bind  2001:XXX:XXX:37::9:80
   
reqidel ^X-Forwarded-For:.*
option forwardfor
   
option http-server-close
   
default_backend apache
   
   
   
frontend http_in
bind  81.x.x.x:80
   
reqidel ^X-Forwarded-For:.*
option forwardfor
   
option http-server-close
   
default_backend apache
   
   
   
backend apache
balance roundrobin
   
appsession PHPSESSID len 64 timeout 3h request-learn prefix
   
option httpchk GET /health.php HTTP/1.0\r\nUser-Agent:\ HAProxy
http-check expect status 200
   
server apache09 192.168.3.109:80 check inter 1 rise 2
 fall 2
maxconn
250 weight 50