Re: Recommendations for deleting headers by regexp in 2.x?

2020-09-21 Thread Ricardo Fraile

Hello,


I'm testing this behaviour with 2.2.3-0e58a34 with the line 
"http-request del-header x-  -m beg" but it reports an error:


[ALERT] 264/110329 (5812) : parsing [/etc/haproxy//haproxy.cfg:91]: 
'http-request del-header' expects either 'if' or 'unless' followed by a 
condition but found '-m'.


One mail in this thread said "...we both agreed that it makes sense to 
implement it...", but the 2.2 documentation doesn't have any reference to it. 
Maybe it will be in a future release? Or is there any other way to delete 
headers based on a regex or a matching method?
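
In the meantime, the only workaround I can think of (a minimal sketch with
hypothetical header names, since "del-header" currently takes an exact,
case-insensitive name rather than a pattern) is to list the known prefixed
headers explicitly:

   http-request del-header x-private-token
   http-request del-header x-private-debug
   http-request del-header x-private-trace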



Thanks,



Re: Right way to get file version with Data Plane API?

2020-09-21 Thread Ricardo Fraile
For example, to start a new transaction, as the documentation [1] 
states:


version / required
Configuration version on which to work on

Or the blog post about it [2]:

Call the /v1/services/haproxy/transactions endpoint to create a new 
transaction. This requires a version parameter in the URL, but the 
commands inside the transaction don’t need one. Whenever a POST, PUT, or 
DELETE command is called, a version must be included, which is then 
stamped onto the HAProxy configuration file. This ensures that if 
multiple clients are using the API, they’ll avoid conflicts. If the 
version you pass doesn’t match the version stamped onto the 
configuration file, you’ll get an error. When using a transaction, that 
version is specified up front when creating the transaction.


What is the right way to get the version stamped on the configuration 
file?
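
For reference, what I do now (a sketch only; the address, port and credentials
are placeholders, and it assumes the raw endpoint keeps wrapping the
configuration in a JSON object that carries a "_version" field, as it does in
the build I tested):

   curl -s --user user1:password1 \
     http://127.0.0.1:5555/v2/services/haproxy/configuration/raw | jq '._version'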


Thanks,

[1] - 
https://www.haproxy.com/documentation/dataplaneapi/latest/#operation/startTransaction

[2] - https://www.haproxy.com/blog/new-haproxy-data-plane-api/



What do you mean by "file version"?




Right way to get file version with Data Plane API?

2020-09-18 Thread Ricardo Fraile

Hello,

Getting the file version seems to be one of the first things to do when 
starting to use the API, but I can't find an easy and clear way to get it.


It seems strange that there is no dedicated URL to retrieve it. Maybe I'm 
wrong, but I get it from the raw output:


# curl --user user1:password1 
http://127.0.0.1:/v2/services/haproxy/configuration/raw


Is there another, more direct way to get it?


Thanks,



Re: How to debug matching ACLs?

2020-07-24 Thread Ricardo Fraile

Hello Willy,


Following your suggestions, I've been testing the "debug" solution (on 
HAProxy 2.2) with this sample conf:


   http-request use-service prometheus-exporter if { path,debug(buf0) -m 
beg /metrics }


and reading from the socket the entries registered on buf0:

   # echo "show events buf0" | socat stdio /var/run/haproxy.sock
   <0>2020-07-24T09:24:19.598250+02:00 [debug] buf0: type=str 
   <0>2020-07-24T09:24:26.981110+02:00 [debug] buf0: type=str 
   <0>2020-07-24T09:24:34.598446+02:00 [debug] buf0: type=str 


Later on, the same HTTP condition, but now with "set-var":

   http-request use-service prometheus-exporter if { 
path,set-var(txn.last_expr) -m beg /metrics }

   log-format %ci,%[var(txn.last_expr)]

and the resulting log lines:

   Jul 24 09:33:49 server1 haproxy[10291]: 192.168.1.17,/metrics
   Jul 24 09:34:04 server1 haproxy[10291]: 192.168.1.17,/metrics


I think the "set-var" way is more flexible, as you can combine it with any 
other variable that helps identify the client request. The proposed "here" 
converter would help even more.



As an idea, it would be nice if these debug additions to the ACLs could be 
logged based on a condition, for example only for a particular client 
pattern, something like:


   acl client_debug hdr_sub(User-Agent) client1-user-agent
   log-format %ci,%[var(txn.last_expr)] if client_debug
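
In the meantime, something close can be approximated with what already exists
(a sketch, not a real conditional log-format): keep the set-var line above and
simply silence logging for clients that don't match the debug ACL:

   acl client_debug hdr_sub(User-Agent) client1-user-agent
   http-request set-log-level silent unless client_debug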


Thanks for your detailed explanations,



How to debug matching ACLs?

2020-07-23 Thread Ricardo Fraile

Hello,

On a complex configuration with multiple ACLs, is there a way to debug 
which of them were applied to a request?


Is it possible to append the unique IDs of the matching ACLs to the log 
line?



Thanks,



Re: Time applied on DNS resolution with valid response

2020-05-23 Thread Ricardo Fraile

On 2020-05-23 15:48, Baptiste wrote:

On Thu, May 21, 2020 at 11:47 AM Ricardo Fraile 
wrote:


Hello,

I'm facing a strange behaviour with DNS resolution and
timeout/hold
times. As a testing environment, I use HAProxy 1.8.25 and this sample
conf:
conf:

global
master-worker
log /dev/log local5 info
pidfile /var/run/haproxy.pid
nbproc 1

resolvers dns
nameserver dns1  1.1.1.1:53

resolve_retries   3
timeout resolve   5s
timeout retry     10s
hold other   10s
hold valid   60s
hold obsolete     10s
hold refused 10s
hold nx  10s
hold timeout 10s

listen proxy-tcp
mode tcp
bind *:80
default-server check resolvers dns init-addr none resolve-prefer ipv4

server host1 host1:80

On the DNS server, the entry for host1 is valid as noted here:

# dig host1 @1.1.1.1

;; ANSWER SECTION:
host1. 300 IN A 7.7.7.7

But capturing the network traffic on the DNS server I can see the
following:

11:29:31.064136 IP [bal_ip].49967 > dns1: 121+ [1au] A? host1. (62)
11:29:36.065749 IP [bal_ip].49967 > dns1: 14393+ [1au] A? host1.
(62)
11:29:41.067816 IP [bal_ip].49967 > dns1: 35337+ [1au] A? host1.
(62)

Every 5 seconds, as defined in "timeout resolve", it receives a
query.
But as the response is valid, why doesn't HAProxy hold it for the time
defined in
"hold valid", 60 seconds?

Thanks,


Hi Ricardo

Hold valid means that we keep this response for said period if the
server becomes unresponsive or returns NX.
HAProxy carries on performing queries at the timeout.resolve period to
ensure faster convergence in case the response is updated.

Baptiste




Thanks Baptiste, I hadn't clearly understood the concepts from the 
documentation. Your comment matches the behaviour I see.
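
So, if I want fewer queries in steady state, the knob is the resolve period
itself rather than "hold valid". A minimal sketch of what I understand (values
are only an example):

   resolvers dns
       nameserver dns1 1.1.1.1:53
       timeout resolve 60s   # steady-state query interval
       hold valid      60s   # how long the last valid answer is kept if dns1 stops answering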




Time applied on DNS resolution with valid response

2020-05-21 Thread Ricardo Fraile

Hello,


I'm facing a strange behaviour with DNS resolution and timeout/hold 
times. As a testing environment, I use HAProxy 1.8.25 and this sample conf:


global
master-worker
log /dev/log local5 info
pidfile /var/run/haproxy.pid
nbproc 1

resolvers dns
nameserver dns1  1.1.1.1:53

resolve_retries   3
timeout resolve   5s
timeout retry     10s
hold other   10s
hold valid   60s
hold obsolete     10s
hold refused 10s
hold nx  10s
hold timeout 10s

listen proxy-tcp
mode tcp
bind *:80
default-server check resolvers dns init-addr none resolve-prefer ipv4

server host1 host1:80



On the DNS server, the entry for host1 is valid as noted here:

# dig host1 @1.1.1.1

;; ANSWER SECTION:
host1. 300 IN A 7.7.7.7



But capturing the network traffic on the DNS server I can see the 
following:


11:29:31.064136 IP [bal_ip].49967 > dns1: 121+ [1au] A? host1. (62)
11:29:36.065749 IP [bal_ip].49967 > dns1: 14393+ [1au] A? host1. (62)
11:29:41.067816 IP [bal_ip].49967 > dns1: 35337+ [1au] A? host1. (62)

Every 5 seconds, as defined in "timeout resolve", it receives a query. 
But as the response is valid, why doesn't HAProxy hold it for the time 
defined in "hold valid", 60 seconds?




Thanks,



Re: Recommendations for deleting headers by regexp in 2.x?

2020-03-09 Thread Ricardo Fraile
Hello,


+1 for this feature

I have some rspidel and rspirep rules waiting to be migrated to 2.2 when this
feature becomes available.


Thanks,



On Fri, 2020-02-14 at 09:59 +0100, Willy Tarreau wrote:
> Hi James,
> 
> On Fri, Jan 31, 2020 at 12:44:24PM -0800, James Brown wrote:
> > So how should we move this proposal forward? I'm glad to contribute
> > more
> > patches...
> 
> Sorry for the very late response, we needed to discuss this with
> Christopher then both got busy and then forgot :-/
> 
> So after discussion, we both agreed that it makes sense to implement
> it
> following the same model as the ACLs described below :
> 
> > > A variant of this could be to use the same syntax as the options
> > > we already
> > > use on ACL matches, which are "-m reg", "-m beg", "-m end". But
> > > these will
> > > also need to be placed after to avoid the same ambiguity (since
> > > "-m" is a
> > > token hence a valid header name). That would give for example :
> > > 
> > >  http-request del-header server
> > >  http-request del-header x-private-  -m beg
> > >  http-request del-header x-.*company -m reg
> > >  http-request del-header -tracea -m end
> 
> However, do not feel pressured to implement all matching methods! The
> currently known ones are described in section 7.1 of the doc, I think
> that "str", "reg", "sub", "beg" and "end" are the only ones which
> would
> make sense over the long term. In practice we could have "str" being
> the current one and "beg" being the one with the prefix as you need.
> If later others need more modes we can implement them (unless you
> want
> to provide them all at once of course).
> 
> Thanks for whatever you can do in this area and sorry again for
> responding late!
> 
> Willy
> 




Get raw http request after TLS negotiation

2019-12-05 Thread Ricardo Fraile
Hello,


I've been facing an issue related to a malformed request sent by an
external client. The line that HAProxy logged was like this:

Dec  4 07:15:30 balancer haproxy[22482]: 1.1.1.1:35546
[04/Dec/2019:07:15:29.221] proxy-1~ proxy-1/ -1/-1/-1/-1/1096
400 5210 - - CR-- 41/12/0/0/0 0/0 {|} ""

I thought it was due to a block rule, so I disabled them all, but the response
was the same.

I switched the protocol from https to http and captured the stream with
tcpdump; that was the key:

GET / HTTP/1.1
Header1 Authorization
Host: mydomain.com

The headers were configured incorrectly.

In this case I was able to make the switch and get the raw request from the
wire, but what happens if I can't switch away from https?

Does HAProxy offer some raw output after the TLS negotiation? Logging is
good, but in some cases, like this one, it's hard to find the root cause.
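
The closest thing I have found so far (a hedged sketch; the socket path is an
example, and it only covers requests that HAProxy itself rejected as invalid)
is the error capture kept per proxy, which can be dumped from the stats socket
even when the client side was TLS:

   echo "show errors" | socat stdio /var/run/haproxy.sock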



Thanks,



Re: Unify equal acl between backends

2019-07-11 Thread Ricardo Fraile
Hello,

On Wed, 2019-07-10 at 16:09 +0200, Lukas Tribus wrote:
> Hello Ricardo,
> 
> 
> On Wed, 10 Jul 2019 at 15:38, Ricardo Fraile 
> wrote:
> > Hello,
> > 
> > 
> > I have multiple backends and some of them share the same acl for
> > the
> > static content, as example:
> > 
> > 
> > backend back-1
> > acl no-cookie path_end .gif .jpg .png (+15 more)
> > ignore-persist if no-cookie
> > ...
> > 
> > backend back-2
> > acl no-cookie path_end .gif .jpg .png (+15 more)
> > ignore-persist if no-cookie
> > ...
> > 
> > 
> > I'm looking for a way to define the "acl no-cookie" only once, but I
> > can't find a workaround because an acl only works if it is defined in
> > the same backend where it is used.
> > 
> > As an intermediate step, I tried with env vars, but it didn't work:
> > 
> > global
> > setenv px-static .gif .jpg .png
> > 
> > backend back-1
> > acl no-cookie path_end ${px-static}
> > ignore-persist if no-cookie
> 
> Two issues with your use of env vars:
> 
> - must be in double quotes
> - must contain only alphanumerical characters and underscore
> 
> So I suggest
> setenv pxstatic .gif .jpg .png
> 
> and
> acl no-cookie path_end "$pxstatic"
> 
> Also read:
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#2.3
> 
> 
> If you want to do more, you can set a txn variable in the frontend
> (http-request set-var(txn.nocookie) 1 if no-cookie), based on your
> ACL
> and use that variable in the backend (ignore-persist if {
> var(txn.nocookie) 1 }).
> 
> 
> cheers,
> lukas


Setting the suggested configuration doesn't work in v1.8; it looks like
setenv has a limit on the number of arguments:


global
setenv pxstatic .gif .jpg .png

backend xx
acl no-cookie path_end "$pxstatic"
ignore-persist if no-cookie


# haproxy -c -f haproxy.cfg 
[ALERT] 191/092843 (85340) : parsing [haproxy.cfg:18] : 'setenv' cannot
handle unexpected argument '.png'.
[WARNING] 191/092843 (85340) : parsing acl keyword 'path_end' :
  no pattern to match against were provided, so this ACL will never
match.
  If this is what you intended, please add '--' to get rid of this
warning.
  If you intended to match only for existence, please use '-m found'.
  If you wanted to force an int to match as a bool, please use '-m
bool'.

[ALERT] 191/092843 (85340) : Error(s) found in configuration file :
haproxy.cfg
[ALERT] 191/092843 (85340) : Fatal errors found in configuration.

I tried putting the list in single and double quotes; the error disappears,
but it still doesn't work. Using () and {} still produces the error.
Setting only one extension works; with two, only the first in the list is used.

What is the right assignment to use in setenv?
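
As far as I can tell (hedged, from the configuration manual), setenv takes
exactly one value argument, so a multi-word value has to be quoted:

   setenv pxstatic ".gif .jpg .png"

but then "$pxstatic" expands as a single path_end pattern rather than three,
which matches what I observed above.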


The alternative configuration that Aleksandar pointed out works, but I
prefer to keep this small list in the main file.



Thanks for the info,




Unify equal acl between backends

2019-07-10 Thread Ricardo Fraile
Hello,


I have multiple backends and some of them share the same acl for the
static content, for example:


backend back-1
acl no-cookie path_end .gif .jpg .png (+15 more)
ignore-persist if no-cookie
...

backend back-2
acl no-cookie path_end .gif .jpg .png (+15 more)
ignore-persist if no-cookie
...


I'm looking for a way to define the "acl no-cookie" only once, but I
can't find a workaround because an acl only works if it is defined in the
same backend where it is used.

As an intermediate step, I tried with env vars, but it didn't work:

global
setenv px-static .gif .jpg .png

backend back-1
acl no-cookie path_end ${px-static}
ignore-persist if no-cookie
...


Does anyone know an alternative method to avoid defining the same acl in
each backend, or at least to define the list of file extensions only once?
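
One thing I have considered (a sketch only; the file path is just an example)
is to keep the shared list in a pattern file, one extension per line, and
reference it from each backend with the ACL's -f flag:

   backend back-1
   acl no-cookie path_end -f /etc/haproxy/static-ext.lst
   ignore-persist if no-cookie

That still repeats the acl line per backend, but at least the extension list
lives in a single place.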



Thanks,



Match response status code with regular expression

2019-06-25 Thread Ricardo Fraile
Hello,


I'm trying to set an acl for multiple status codes. For example, using it
for only one works:

  http-response set-header Cache-Control max-age=60 if { status 302 }

but with more than one, trying a regex fails because it is not
implemented in http-response:

  http-response set-header Cache-Control max-age=60 if { rstatus 3* }

produces the following error:

  error detected while parsing an 'http-response set-header' condition :
unknown fetch method 'rstatus' in ACL expression 'rstatus'.



The "rstatus" is available only under "http-check expect". Are there any
equivalence to the regext status matching?
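
One thing that seems to cover this particular case (a sketch; hedged, relying
on "status" being an integer fetch that accepts range matching) is to drop the
regex and match the whole 3xx range instead:

   http-response set-header Cache-Control max-age=60 if { status 300:399 }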


Thanks,



Difference between rspdel and http-response del-header use case?

2018-11-15 Thread Ricardo Fraile
Hello,


What is the difference between using one of the following rules instead
of the other?

I think rspdel is the historic way to do it, but maybe it has other
implications.


rspdel ^Server.*

or

http-response del-header Server
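
For what it's worth, the practical difference I can see from the docs (hedged):
rspdel takes a regular expression matched against the whole response header
line, so a single rule can remove several differently named headers at once,
while del-header removes headers by their exact, case-insensitive name:

   rspdel ^(Server|X-Powered-By):.*
   http-response del-header Server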


Thanks,



Re: Combine different ACLs under same name

2018-10-05 Thread Ricardo Fraile
On Fri, 2018-10-05 at 11:38 +0200, Jerome Magnin wrote:
> Hello,
> 
> On Fri, Oct 05, 2018 at 10:46:20AM +0200, Ricardo Fraile wrote:
> > Hello,
> > 
> > 
> > I have tested that some types of acls can't be combined, as example:
> > 
> > Server 192.138.1.1, acl with combined rules:
> > 
> > acl rule1 hdr_dom(host) -i test.com
> > acl rule1 src 192.168.1.2/24
> > redirect prefix https://yes.com code 301 if rule1
> > redirect prefix https://no.com
> > 
> > Request from 192.168.1.2:
> > 
> > $ curl -I -H "host: test.com" 192.138.1.1
> > HTTP/1.1 301 Moved Permanently
> > Content-length: 0
> > Location: https://yes.com/
> > 
> > Request from 192.168.1.3:
> > 
> > $ curl -I -H "host: test.com" 192.138.1.1
> > HTTP/1.1 301 Moved Permanently
> > Content-length: 0
> > Location: https://yes.com/
> > 
> > 
> > 
> > Server 192.138.1.1, acl with two rules:
> > 
> > acl rule1 hdr_dom(host) -i test.com
> > acl rule2 src 192.168.1.2/24
> > redirect prefix https://yes.com code 301 if rule1 rule2
> > redirect prefix https://no.com
> > 
> > Request from 192.168.1.2:
> > 
> > $ curl -I -H "host: test.com" 192.138.1.1
> > HTTP/1.1 301 Moved Permanently
> > Content-length: 0
> > Location: https://yes.com/
> > 
> > Request from 192.168.1.3:
> > 
> > $ curl -I -H "host: test.com" 192.138.1.1
> > HTTP/1.1 301 Moved Permanently
> > Content-length: 0
> > Location: https://no.com/
> > 
> > I looked for this behaviour in the documentation but couldn't find any
> > reference to it. Does anyone know where it is documented?
> > 
> > 
> 
> This is expected behavior.
> 
> when you declare acls with the same name such as:
> 
> acl foo src 1.2.3.4
> acl foo hdr(host) foo.bar
> 
> 
> and use foo as a condition for anything, foo is equivalent to:
> 
>  { src 1.2.3.4 } || { hdr(host) foo.bar }
> 
> There is at least one example of this behavior in the documentation:
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.2
> 
> Splitting the acl into two acls implies an && between the two
> acls, so the behavior is different.
> 
> regards,
> Jérôme


That is definitely clever, indeed.

If possible, as a suggestion, I think this should be made clearer in
the documentation.


Thanks,





Combine different ACLs under same name

2018-10-05 Thread Ricardo Fraile
Hello,


I have found that some types of acls can't be combined, for example:

Server 192.138.1.1, acl with combined rules:

acl rule1 hdr_dom(host) -i test.com
acl rule1 src 192.168.1.2/24
redirect prefix https://yes.com code 301 if rule1 
redirect prefix https://no.com

Request from 192.168.1.2:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/

Request from 192.168.1.3:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/



Server 192.138.1.1, acl with two rules:

acl rule1 hdr_dom(host) -i test.com
acl rule2 src 192.168.1.2/24
redirect prefix https://yes.com code 301 if rule1 rule2
redirect prefix https://no.com

Request from 192.168.1.2:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/

Request from 192.168.1.3:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://no.com/

I looked for this behaviour in the documentation but couldn't find any
reference to it. Does anyone know where it is documented?


Thanks,




Re: Issue after upgrade from 1.7 to 1.8 related with active sessions

2017-12-23 Thread Ricardo Fraile
Hello Willy,


It works perfectly! Problem solved :)

From my side, yesterday afternoon I was walking through the commits to find 
when the change was introduced. I ended up at the same commit "MEDIUM: 
connection: make conn_sock_shutw() aware of lingering", and the workaround I 
found was using "option nolinger". Coincidentally, when I was about to write 
the email, your answer arrived with the right fix.

The doubt I have now is about the trace line "-1 ENOTCONN 
(Transport endpoint is not connected)" and its relationship with the issue...

It still happens, but the problem is solved, so there doesn't seem to be any 
link between the two.

I found that this behaviour was introduced by commit 
"3256073976d4f43e12e7ff97d243fdb8eb56165a - MEDIUM: stream: do not forcefully 
close the client connection anymore", but I can't reproduce it when I send the 
request (a simple curl) from outside the server network over a VPN link. Since 
I can't see any other issue, does it fall within the expected behaviour?


Thanks for your time Willy and Christopher.





From: Willy Tarreau <w...@1wt.eu>
Sent: Friday, 22 December 2017 18:57
To: Ricardo Fraile
Cc: haproxy@formilux.org
Subject: Re: Issue after upgrade from 1.7 to 1.8 related with active sessions

Hi Ricardo,

On Fri, Dec 22, 2017 at 12:37:42PM +0100, Ricardo Fraile wrote:
> Continuing with the investigation, I changed the listen only to this:
>
> listen proxy-test-tcp
> bind *:81
> option tcplog
> server test1 192.168.1.101:80
>
>
> And the difference between 1.7 and 1.8 tracing the process who receive
> only 1 request is that the shutdown of the socket who receives the
> request fails with an ENOTCONN. In 1.8 continue in CLOSE_WAIT a few
> time, meanwhile in 1.7 pass to TIME_WAIT as usual.

(...)

I finally found it thanks to all your information and to Christopher's
bisect. I've just fixed it now with the attached patch. Feel free to
retest it, but I'm confident I can issue 1.8.2 now.

Many thanks for your very detailed report!

Willy


Re: Issue after upgrade from 1.7 to 1.8 related with active sessions

2017-12-22 Thread Ricardo Fraile
L, 2, 75ed50)  = 0
epoll_wait(0, {}, 200, 0)   = 0
recvfrom(2, 0x1012be4, 16384, 0, 0, 0)  = -1 EAGAIN (Resource
temporarily unavailable)
epoll_ctl(0, EPOLL_CTL_ADD, 2, {EPOLLIN|EPOLLRDHUP, {u32=2, u64=2}}) = 0
epoll_wait(0, {{EPOLLIN, {u32=2, u64=2}}}, 200, 1000) = 1
recvfrom(2, "HTTP/1.1 301 Moved Permanently\r\n"..., 16384, 0, NULL,
NULL) = 515
sendto(1, "HTTP/1.1 301 Moved Permanently\r\n"..., 515, MSG_DONTWAIT|
MSG_NOSIGNAL, NULL, 0) = 515
epoll_wait(0, {{EPOLLIN|EPOLLRDHUP, {u32=1, u64=1}}}, 200, 1000) = 1
recvfrom(1, "", 16384, 0, NULL, NULL)   = 0
shutdown(2, SHUT_WR)= 0
epoll_ctl(0, EPOLL_CTL_DEL, 1, 75ed50)  = 0
epoll_wait(0, {{EPOLLIN|EPOLLHUP|EPOLLRDHUP, {u32=2, u64=2}}}, 200,
1000) = 1
recvfrom(2, "", 16384, 0, NULL, NULL)   = 0
close(2)= 0
shutdown(1, SHUT_WR)= 0
close(1)= 0
sendmsg(6, {msg_name(110)={sa_family=AF_LOCAL, sun_path="/dev/log"},
msg_iov(8)=[{"<174>Dec 22 12:09:45 ", 21}, {"haproxy", 7}, {"[", 1},
{"10408", 5}, {"]: ", 3}, {"", 0}, {"192.168.1.117:35835
[22/Dec/2017"..., 129}, {"\n", 1}], msg_controllen=0, msg_flags=0},
MSG_DONTWAIT|MSG_NOSIGNAL) = 167
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, {}, 200, 1000)= 0
epoll_wait(0, Process 10408 detached
 






On Thu, 2017-12-21 at 17:37 +0100, Ricardo Fraile wrote:
> 
> Well, I isolate the service on a load balancer server with the minimal
> configuration, let me detail the problem.
> 
> Two equal (cloned) Debian load balancers with 3.16.7-ckt9 kernel, both
> working with keepalived sharing the ip address of the proxy-tcp service
> (192.168.1.100). In A server the Haproxy is v.1.8.1 and in B v.1.7.4,
> both have the following configuration (only in 1.7 the "master-worker"
> line is removed):
> 
> global
> master-worker
> node balancer
> log /dev/log local1 info
> stats socket /var/run/haproxy.sock mode 0660 user testuser1
> group testuser1 level admin
> pidfile /var/run/haproxy.pid
> maxconn 32000
> nbproc 1
> 
> defaults
> mode    tcp
> log global
> retries 3
> option redispatch
> 
> maxconn 10
> fullconn 10
> 
> timeout connect  5s
> timeout server   50s
> timeout client   50s
> timeout http-keep-alive  60s
> 
> default-server on-marked-down shutdown-sessions inter 5s
> fastinter 1s downinter 1s rise 2 fall 2
> 
> listen proxy-stats
> bind *:80
> mode http
> stats enable
> stats show-legends
> stats uri /
> stats realm   Haproxy\ Statistics
> stats refresh 5s
> 
> listen proxy-tcp
> bind 192.168.1.100:8080
> option tcplog
> balance roundrobin
> 
> option httpchk GET /test
> http-check expect string ok
> 
> server server1 192.168.1.101:8080 check
> server server2 192.168.1.102:8080 check
> 
> 
> After changing the traffic between both servers with keepalived, the
> results are:
> 
> 1.7:
> Session rate: 861
> Sessions: 220
> 
> 1.8:
> Session rate: 835
> Sessions: 31243
> 
> These are the evidences in 1.8 server:
> 
> - The ulimit-n is 64034 and the Sessions Max reach 31999 both in
> Frontend and Backend of the lister proxy-tcp, which I suppose that the
> limit is reached by consequence of the issue.
> 
> - The system log report a lot of "TCP: too many orphaned sockets" and
> some like "net_ratelimit: 2094 callbacks suppressed"
> 
> - The Haproxy log register the total time elapsed between the accept and
> the last close is equal to the 50s assigned to server and client
> timeout.
> 
> - The termination state is ok in 99% of them. Yesterday I said that it
> was "sD", but today I check that is very rare, I put one line here only
> as example.
> 
> Dec 21 15:38:09 balancer haproxy[8094]: 192.168.1.55:58674
> [21/Dec/2017:15:37:19.358] proxy-tcp proxy-tcp/server2 1/0/50001 1087 --
> 30211/30210/30209/15106/0 0/0
> Dec 21 15:38:09 balancer haproxy[8094]: 192.168.1.42:51027
> [21/Dec/2017:15:37:19.356] proxy-tcp proxy-tcp/server2 1/0/50008 5345 sD
> 30214/30213/30210/15106/0 0/0
> Dec 21 15:38:09 balancer haproxy[8094]: 1

Re: Issue after upgrade from 1.7 to 1.8 related with active sessions

2017-12-21 Thread Ricardo Fraile
Well, I isolated the service on a load balancer server with a minimal
configuration; let me detail the problem.

Two identical (cloned) Debian load balancers with a 3.16.7-ckt9 kernel, both
running keepalived and sharing the IP address of the proxy-tcp service
(192.168.1.100). On server A HAProxy is v1.8.1 and on server B v1.7.4;
both have the following configuration (only on 1.7 the "master-worker"
line is removed):

global
master-worker
node balancer
log /dev/log local1 info
stats socket /var/run/haproxy.sock mode 0660 user testuser1
group testuser1 level admin
pidfile /var/run/haproxy.pid
maxconn 32000
nbproc 1

defaults
mode    tcp
log global
retries 3
option redispatch

maxconn 10
fullconn 10

timeout connect  5s
timeout server   50s
timeout client   50s
timeout http-keep-alive  60s

default-server on-marked-down shutdown-sessions inter 5s
fastinter 1s downinter 1s rise 2 fall 2

listen proxy-stats
bind *:80
mode http
stats enable
stats show-legends
stats uri /
stats realm   Haproxy\ Statistics
stats refresh 5s

listen proxy-tcp
bind 192.168.1.100:8080
option tcplog
balance roundrobin

option httpchk GET /test
http-check expect string ok

server server1 192.168.1.101:8080 check
server server2 192.168.1.102:8080 check


After switching the traffic between the two servers with keepalived, the
results are:

1.7:
Session rate: 861
Sessions: 220

1.8:
Session rate: 835
Sessions: 31243

This is the evidence on the 1.8 server:

- The ulimit-n is 64034 and the Sessions Max reaches 31999 both on the
Frontend and the Backend of the listen proxy-tcp; I suppose the limit is
reached as a consequence of the issue.

- The system log reports a lot of "TCP: too many orphaned sockets" and
some lines like "net_ratelimit: 2094 callbacks suppressed"

- The HAProxy log shows that the total time elapsed between the accept and
the last close equals the 50s assigned to the server and client
timeouts.

- The termination state is OK in 99% of them. Yesterday I said it was
"sD", but today I checked and that is actually very rare; I put one line here
only as an example.

Dec 21 15:38:09 balancer haproxy[8094]: 192.168.1.55:58674
[21/Dec/2017:15:37:19.358] proxy-tcp proxy-tcp/server2 1/0/50001 1087 --
30211/30210/30209/15106/0 0/0
Dec 21 15:38:09 balancer haproxy[8094]: 192.168.1.42:51027
[21/Dec/2017:15:37:19.356] proxy-tcp proxy-tcp/server2 1/0/50008 5345 sD
30214/30213/30210/15106/0 0/0
Dec 21 15:38:09 balancer haproxy[8094]: 192.168.1.55:40442
[21/Dec/2017:15:37:19.364] proxy-tcp proxy-tcp/server1 1/0/50003 694 --
30216/30215/30211/15104/0 0/0

- Out of 30522 TCP sockets to the proxy-tcp address, there are 30160 in
CLOSE_WAIT state on local address 192.168.1.100:8080. Yesterday I
said they were on the backend side, but I was wrong: all of them are on the
frontend side.


I have the output of "show sess all" and "show fd", but as it contains a lot
of private information, I'm sending it to you another way. If there
is any clear evidence in it, I will take the time to anonymize and
share it.


Thanks,

On Wed, 2017-12-20 at 18:19 +0100, Willy Tarreau wrote:
> Hello Ricardo,
> 
> On Wed, Dec 20, 2017 at 05:00:33PM +0100, Ricardo Fraile wrote:
> > Hello,
> > 
> > After upgrade from 1.7.4 to 1.8.1, basically with the end of mail conf
> > snippet, the sessions started to grow, as example:
> > 
> > 1.7.4:
> > Active sessions: ~161
> > Active sessions rate: ~425
> > 
> > 1.8.1:
> > Active sessions: ~6700
> > Active sessions rate: ~350
> 
> Ah that's not good :-(
> 
> > Looking into the linux (3.16.7) server, there are a high number of
> > CLOSE_WAIT connections from the bind address of the listen service to
> > the backend nodes.
> 
> Strange, I don't understand well what type of traffic could cause this
> except a loop, that sounds a bit unusual.
> 
> > System logs reported "TCP: too many orphaned sockets", but after
> > increase net.ipv4.tcp_max_orphans value, the message stops but nothing
> > changes.
> 
> Normally orphans correspond to closed sockets for which there are still
> data in the system's buffers so this should be unrelated to the CLOSE_WAIT,
> unless there's a loop somewhere where a backend reconnects to the frontend,
> which can explain both situations at once when the timeout strikes.
> 
> > Haproxy logs reported for that listen the indicator "sD", but only with
> > 1.8.
> 
> Thus a server timeout during the end of the transfer. That doesn't make
> much sense either.
> 
> > Any ideas to dig into the i

Issue after upgrade from 1.7 to 1.8 related with active sessions

2017-12-20 Thread Ricardo Fraile
Hello,


After upgrading from 1.7.4 to 1.8.1, basically with the conf snippet at the
end of this mail, the sessions started to grow, for example:

1.7.4:
Active sessions: ~161
Active sessions rate: ~425

1.8.1:
Active sessions: ~6700
Active sessions rate: ~350

Looking at the Linux (3.16.7) server, there is a high number of
CLOSE_WAIT connections from the bind address of the listen service to
the backend nodes.

System logs reported "TCP: too many orphaned sockets", but after
increasing the net.ipv4.tcp_max_orphans value the message stopped, yet
nothing changed.

The HAProxy logs reported the "sD" indicator for that listen, but only with
1.8.


Any ideas to dig into the issue?



Thanks,





defaults
mode    tcp
retries 3
option redispatch

maxconn 10
fullconn 10

timeout connect  5s
timeout server   50s
timeout client   50s

listen proxy-tcp
bind 192.168.1.1:80
balance roundrobin

server node1 192.168.1.10:80
server node2 192.168.1.11:80
server node3 192.168.1.12:80




Stats with nproc > 1 and Haproxy 1.8

2017-12-19 Thread Ricardo Fraile
Hi Haproxy Team,


If I'm not wrong, in previous versions the stats were kept separately per
process when nbproc > 1 was used. But what is the situation now in 1.8
if the "master-worker" configuration is used?

In the following configuration snippet, the socket is bound to process
1, but does it have the information of all the child processes?


global
master-worker
stats socket /var/run/haproxy.sock level admin expose-fd
listeners process 1
nbproc 8

listen proxy-stats
bind 192.168.1.1:80 process 1
mode http
stats enable
stats uri /stats
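
If it turns out that each process still keeps its own counters (which is what
I understood from previous versions), a workaround sketch would be to expose
one socket per process and query each of them (paths are only an example):

   global
   nbproc 8
   stats socket /var/run/haproxy-1.sock process 1 level admin
   stats socket /var/run/haproxy-2.sock process 2 level admin
   # ... and so on, one socket per process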


Thanks,




[PATCH] Add info about stats report when a reload is done in management.txt

2017-10-05 Thread Ricardo Fraile
Hi,


It would be useful to have, in the section "4. Stopping and restarting
HAProxy" of the management.txt document, some information about the
behaviour of the stats when a restart is done.

As a suggestion, here is my patch.



Thanks,
Ricardo F.
diff --git a/doc/management.txt b/doc/management.txt
index dd886de..2d7f2c4 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -484,6 +484,11 @@ don't have enough load to trigger the race conditions. And for most high traffic
 users, the failure rate is still fairly within the noise margin provided that at
 least SO_REUSEPORT is properly supported on their systems.
 
+Note that when a "restart" is done, the new process takes over the listening
+ports but the old process keeps handling the existing connections until they
+close. All active connections served by the old process keep working but are
+not reported in the statistics, which only reflect the new process.
+
 
 5. File-descriptor limitations
 --


Re: Logging ACL activity

2017-04-27 Thread Ricardo Fraile
Hello,



I ran into a requirement similar to the one discussed in these mails a
few years ago. As the solution is still to use some alternative
workaround, I'll add my 2 cents to what has already been said.

For deny rules, the normal solution is:

frontend 
   
   acl rule_user-agent hdr_sub(User-Agent) -f user-agent.txt
   http-request deny if rule_user-agent



But to see the rule being applied in the logs, it can be changed to
this:

frontend 
   
   acl rule_user-agent hdr_sub(User-Agent) -f user-agent.txt
   use_backend acl-user_agent if rule_user-agent

backend acl-user_agent
   http-request deny


As the request is sent to a dedicated backend, the log line reflects that
information.


Regards,




> Hi Julien
> 
> With HAProxy 1.5, you can change the log severity using http-request
> rules:
>   http-request set-log-level notice if request-too-big
> 
> Then you can easily divert notice logs into a dedicated file in your
> syslog server.
> 
> My 2 cents.
> 
> Baptiste
> 
> 
> On Thu, Mar 13, 2014 at 4:23 AM, Julien Vehent 
> wrote:
> > On 2014-03-12 15:02, Julien Vehent wrote:
> >>
> >> Hi everyone,
> >>
> >> Is there a way to log the activity of an ACL?
> >> I tried to use a header insertion using reqadd, and then log that
> >> header, but it doesn't work.
> >>
> >> # match content-length larger than 500kB
> >> acl request-too-big hdr_val(content-length) gt 50
> >> reqadd X-Haproxy-ACL:\ request-too-big if METH_POST
> >> request-too-big
> >>
> >> capture request header X-Haproxy-ACL len 64
> >>
> >> The goal is to test a bunch of ACLs before enabling them in
> production.
> >>
> >> Any idea on how to do this?
> >
> >
> > I found a workaround, that's kind of a hack, but it works. When the
> custom
> > header is set, I send the request to a backend that is, in fact,
> another
> > haproxy frontend. The header is logged then, and passed to its final
> > backend. I guess I could call that "double backending" :)
> >
> > # ~~~ Requests validation using ACLs ~~~
> > # use a custom HTTP header to store the result of HAProxy's
ACLs.
> The
> > # default value is set to `pass`, and modified by ACLs below
> > http-request set-header X-Haproxy-ACL pass
> >
> > # block content-length larger than 5kB
> > acl request-too-big hdr_val(content-length) gt 5000
> > http-request set-header X-Haproxy-ACL request-too-big if
METH_POST
> > request-too-big
> >
> > # if previous ACL didn't pass, sent to logger backend
> > acl pass-acl-validation req.hdr(X-Haproxy-ACL) -m str pass
> > use_backend acl-logger if !pass-acl-validation
> >
> >
> > frontend acl-logger
> > bind localhost:5
> >
> > capture request header X-Haproxy-ACL len 64
> > capture request header X-Unique-ID len 64
> > default_backend fxa-nodejs
> >
> > backend acl-logger
> > server localhost localhost:5
> >
> > Downside is, in the logs, I know have two log entries for each
request
> that
> > doesn't pass the ACLs. I can use the Unique ID value to
> cross-reference
> > them. In the sample below, the first logged request indicates
> > "request-too-big" in the captured headers.
> >
> >Mar 12 21:32:35 localhost haproxy[23755]: [23755]
[1394659955.945]
> > 2/1/0/0/1/0/0 0/0/0/4/5  127.0.0.1:48120 127.0.0.1:5
> 127.0.0.1:8000
> > acl-logger - - "GET /v1/somethingsomething HTTP/1.1" 404
> fxa-nodejs:nodejs1
> > "-" "{request-too-big|
47B4176E:8E5E_0A977AE4:01BB_5320D273_03FF:5CCB}"
> "-"
> > "" "826 bytes"
> >
> >Mar 12 21:32:35 localhost haproxy[23755]: [23755]
[1394659955.850]
> > 2/1/0/0/1/0/0 94/0/0/5/99  1.10.2.10:36446 10.151.122.228:443
> > 127.0.0.1:5 fxa-https~ ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 "GET
> > /v1/somethingsomething HTTP/1.1" 404 acl-logger:localhost "-"
> > "{||Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/}"
> "-"
> > "" "802 bytes" 47B4176E:8E5E_0A977AE4:01BB_5320D273_03FF:5CCB
> >
> > - Julien
> >
> 
> 




Rate limit by IP based on all the current IPs from a network range

2017-02-02 Thread Ricardo Fraile
Hello,



Taking as a starting point the following rate-limit stick table, in which
requests are tracked by the "X-Client-IP" header and an acl limits them
if there are more than 250 in 1 second:



stick-table type ip size 1m expire 1h store gpc0,http_req_rate(1s)
http-request track-sc0 req.hdr_ip(X-Client-IP,1)

acl rule_average sc0_http_req_rate gt 250

http-request deny if rule_average



With this configuration, a user is blocked if they make more than 250 requests
in a second. For example, at the same time, 192.168.1.1 can make 250
requests and 192.168.1.2 another 250 requests.

But is it possible to apply this limit taking the subnet into account? For
example, if the load balancer receives more than 250 requests from
192.168.1.0/24, limit the individual IPs so that, at the same time,
192.168.1.1 can make 100 and 192.168.1.2 the other 150, but
not more than 250 together.
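
What I have in mind, in case it helps the discussion (a sketch only, assuming
a version that has the ipmask() converter and using a dummy backend to hold a
second table), is to track the /24 alongside the individual IP:

   backend per_subnet
   stick-table type ip size 1m expire 1h store http_req_rate(1s)

   # in the frontend, next to the existing tracking:
   http-request track-sc1 req.hdr_ip(X-Client-IP,1),ipmask(24) table per_subnet
   acl rule_subnet sc1_http_req_rate(per_subnet) gt 250
   http-request deny if rule_average or rule_subnet

But I'm not sure this is the intended way to do it, hence the question.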



Thanks,





[PATCH] MINOR: systemd unit works with cfgdir and cfgfile

2017-01-12 Thread Ricardo Fraile
Hello,


As the 1.7 release allows loading multiple files from a directory:


https://cbonte.github.io/haproxy-dconv/1.7/management.html

 -f <cfgfile|cfgdir> : adds <cfgfile|cfgdir> to the list of configuration files
to be loaded. If <cfgdir> is a directory, all the files (and only files)
it contains are added in lexical order (using LC_COLLATE=C) to the list
of configuration files to be loaded ; only files with ".cfg" extension
are added, only non hidden files (not prefixed with ".") are added.


I think the systemd unit could use the configuration directory
instead of the path to a single file, to allow the same behaviour that the
"-f" option provides.


Thanks in advance,



Regards,
From a4d0ea299144f5f2c5983b1335b8d89241f3c0ec Mon Sep 17 00:00:00 2001
From: Ricardo Fraile <rfra...@rfraile.eu>
Date: Thu, 12 Jan 2017 12:29:44 +0100
Subject: [PATCH] MINOR: systemd unit works with cfgdir and cfgfile

---
 contrib/systemd/haproxy.service.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/systemd/haproxy.service.in b/contrib/systemd/haproxy.service.in
index dca81a2..1986083 100644
--- a/contrib/systemd/haproxy.service.in
+++ b/contrib/systemd/haproxy.service.in
@@ -3,7 +3,7 @@ Description=HAProxy Load Balancer
 After=network.target
 
 [Service]
-Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
+Environment="CONFIG=/etc/haproxy/" "PIDFILE=/run/haproxy.pid"
 ExecStartPre=@SBINDIR@/haproxy -f $CONFIG -c -q
 ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE
 ExecReload=@SBINDIR@/haproxy -f $CONFIG -c -q
-- 
2.1.4



Re: Define path of configuration files in systemd unit

2016-12-19 Thread Ricardo Fraile
Hello Patrick,


You are right, with "exec" it works:


# systemctl status haproxy.service -l
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/etc/systemd/system/haproxy.service; enabled)
   Active: active (running) since Mon 2016-12-19 12:23:28 CET; 1min 17s
ago
  Process: 25403 ExecReload=/bin/kill -USR2 $MAINPID (code=exited,
status=0/SUCCESS)
  Process: 25230 ExecStartPre=/bin/sh -c exec /usr/local/sbin/haproxy -c
-q -- /etc/haproxy/* (code=exited, status=0/SUCCESS)
 Main PID: 25231 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
   ├─25231 /usr/local/sbin/haproxy-systemd-wrapper
-p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
   ├─25234 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
   └─25235 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf



Thanks,







On Tue, 2016-12-13 at 11:56 -0500, Patrick Hemmer wrote:
> On 2016/12/13 11:14, Ricardo Fraile wrote:
> > Hello Jarno,
> > 
> > 
> > Yes, you are right, this is not an elegant solution, and reloading
> > doesn't work. This is the systemd report:
> > 
> > 
> > # systemctl status haproxy.service -l
> > ● haproxy.service - HAProxy Load Balancer
> >Loaded: loaded (/etc/systemd/system/haproxy.service; enabled)
> >Active: active (running) since Tue 2016-12-13 09:25:13 CET; 1s ago
> >   Process: 28736 ExecReload=/bin/kill -USR2 $MAINPID (code=exited,
> > status=0/SUCCESS)
> >   Process: 28764 ExecStartPre=/bin/sh -c /usr/local/sbin/haproxy -c -q
> > -- /etc/haproxy/* (code=exited, status=0/SUCCESS)
> >  Main PID: 28766 (sh)
> >CGroup: /system.slice/haproxy.service
> >├─28766 /bin/sh -c /usr/local/sbin/haproxy-systemd-wrapper
> > -p /run/haproxy.pid -- /etc/haproxy/*
> >├─28769 /usr/local/sbin/haproxy-systemd-wrapper
> > -p /run/haproxy.pid
> > -- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
> >├─28770 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
> > -- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
> >└─28771 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
> > -- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
> > 
> > 
> > Thanks,
> > 
> > 
> > On Mon, 2016-12-12 at 19:36 +0200, Jarno Huuskonen wrote:
> > > Hi Ricardo,
> > > 
> > > On Mon, Dec 12, Ricardo Fraile wrote:
> > > > Yes, shell expansion did the trick, this is the working systemd unit:
> > > > 
> > > > 
> > > > [Unit]
> > > > Description=HAProxy Load Balancer
> > > > After=network.target
> > > > 
> > > > [Service]
> > > > ExecStartPre=/bin/sh -c "/usr/local/sbin/haproxy -c -q
> > > > -- /etc/haproxy/*"
> > > > ExecStart=/bin/sh -c "/usr/local/sbin/haproxy-systemd-wrapper
> > > > -p /run/haproxy.pid -- /etc/haproxy/*"
> > > > ExecReload=/bin/kill -USR2 $MAINPID
> > > Does the /bin/sh -c add extra process to haproxy process tree ?
> > > Does systemctl status haproxy that "Main PID:" belongs to
> > > haproxy-systemd-wrapper process and reloading config works ?
> > > 
> > > -Jarno
> > > 
> 
> You can solve that specific issue easily by adding `exec` to the
> command.
> 
> ExecStart=/bin/sh -c "exec /usr/local/sbin/haproxy-systemd-wrapper
> -p /run/haproxy.pid -- /etc/haproxy/*"
> 
> -Patrick





Re: Define path of configuration files in systemd unit

2016-12-13 Thread Ricardo Fraile
Hello Jarno,


Yes, you are right, this is not an elegant solution, and reloading
doesn't work. This is what systemd reports:


# systemctl status haproxy.service -l
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/etc/systemd/system/haproxy.service; enabled)
   Active: active (running) since Tue 2016-12-13 09:25:13 CET; 1s ago
  Process: 28736 ExecReload=/bin/kill -USR2 $MAINPID (code=exited,
status=0/SUCCESS)
  Process: 28764 ExecStartPre=/bin/sh -c /usr/local/sbin/haproxy -c -q
-- /etc/haproxy/* (code=exited, status=0/SUCCESS)
 Main PID: 28766 (sh)
   CGroup: /system.slice/haproxy.service
   ├─28766 /bin/sh -c /usr/local/sbin/haproxy-systemd-wrapper
-p /run/haproxy.pid -- /etc/haproxy/*
   ├─28769 /usr/local/sbin/haproxy-systemd-wrapper
-p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
   ├─28770 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf
   └─28771 /usr/local/sbin/haproxy -Ds -p /run/haproxy.pid
-- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf /etc/haproxy/zz.conf


Thanks,


On Mon, 2016-12-12 at 19:36 +0200, Jarno Huuskonen wrote:
> Hi Ricardo,
> 
> On Mon, Dec 12, Ricardo Fraile wrote:
> > Yes, shell expansion did the trick, this is the working systemd unit:
> > 
> > 
> > [Unit]
> > Description=HAProxy Load Balancer
> > After=network.target
> > 
> > [Service]
> > ExecStartPre=/bin/sh -c "/usr/local/sbin/haproxy -c -q
> > -- /etc/haproxy/*"
> > ExecStart=/bin/sh -c "/usr/local/sbin/haproxy-systemd-wrapper
> > -p /run/haproxy.pid -- /etc/haproxy/*"
> > ExecReload=/bin/kill -USR2 $MAINPID
> 
> Does the /bin/sh -c add an extra process to the haproxy process tree?
> Does systemctl status haproxy show that "Main PID:" belongs to the
> haproxy-systemd-wrapper process, and does reloading the config work?
> 
> -Jarno
> 





Re: Define path of configuration files in systemd unit

2016-12-12 Thread Ricardo Fraile
Hi Jarno,


Yes, shell expansion did the trick, this is the working systemd unit:


[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/bin/sh -c "/usr/local/sbin/haproxy -c -q
-- /etc/haproxy/*"
ExecStart=/bin/sh -c "/usr/local/sbin/haproxy-systemd-wrapper
-p /run/haproxy.pid -- /etc/haproxy/*"
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target



I didn't know about the new behavior of "-f" in v1.7; that fits better than
the bash pattern substitution ${CONF[@]/#/-f } proposed by Willy.
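
For reference, a sketch of how the unit could look with the 1.7 directory
behaviour (untested here; note that, per the doc quoted above, only ".cfg"
files are loaded from a directory, so my ".conf" files would need renaming):

   [Service]
   ExecStartPre=/usr/local/sbin/haproxy -f /etc/haproxy/ -c -q
   ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -f /etc/haproxy/ -p /run/haproxy.pid
   ExecReload=/bin/kill -USR2 $MAINPID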



Thanks,



On Mon, 2016-12-12 at 16:28 +0200, Jarno Huuskonen wrote:
> Hi,
> 
> On Mon, Dec 12, Ricardo Fraile wrote:
> > But the systemd execution is still a issue with the following unit:
> > 
> > [Unit]
> > Description=HAProxy Load Balancer
> > After=network.target
> > 
> > [Service]
> > ExecStartPre=/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
> > ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
> > -- /etc/haproxy/*
> > ExecReload=/bin/kill -USR2 $MAINPID
> 
> [...]
> 
> > 
> > Executing the same process that return the error from the terminal
> > report 
> > /usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
> > echo $?
> > 0
> 
> I think systemd doesn't do shell expansion on "/etc/haproxy/*". Does
> it work from terminal if you run:
> /usr/local/sbin/haproxy -c -q -- '/etc/haproxy/*' ; echo $?
> (single quotes around '/etc/haproxy/*') ?
> 
> Have you tried with haproxy 1.7 ? With 1.7:
> "  - support of directories for config files : now if the argument to -f
> is a directory, all files found there are loaded in alphabetical
> order. Additionally, files can be specified after "--" without having
> to repeat "-f".
> "
> 
> Or you could try to use something like 
> /bin/sh -c "/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*" as the
> ExecStartPre/ExecStart command. (This might leave extra /bin/sh
> process ...)
> 
> -Jarno
> 





Re: Define path of configuration files in systemd unit

2016-12-12 Thread Ricardo Fraile
Hello Willy,


I modified haproxy-systemd-wrapper with the attached patch and it
works fine from the terminal:

# /usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
<7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -Ds
-p /run/haproxy.pid -- /etc/haproxy/haproxy.conf /etc/haproxy/z.conf 


But the systemd execution is still an issue with the following unit:



[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target






● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/etc/systemd/system/haproxy.service; enabled)
   Active: failed (Result: start-limit) since Mon 2016-12-12 12:28:45
CET; 2s ago
  Process: 5896 ExecStartPre=/usr/local/sbin/haproxy -c -q
-- /etc/haproxy/* (code=exited, status=1/FAILURE)
 Main PID: 5858 (code=exited, status=0/SUCCESS)

Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]:
haproxy.service: control process exited, code=exited status=1
Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]: Failed to
start HAProxy Load Balancer.
Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]: Unit
haproxy.service entered failed state.
Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]:
haproxy.service start request repeated too quickly, refusing to start.
Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]: Failed to
start HAProxy Load Balancer.
Dec 12 12:28:45 balback1b.pre.es.sys.idealista systemd[1]: Unit
haproxy.service entered failed state.



Executing from the terminal the same command that systemd reports as failing
returns 0:
/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
echo $?
0



I'll continue looking into the problem, but now I think it's closer to
the systemd side.


Thanks,





On Mon, 2016-12-05 at 19:40 +0100, Willy Tarreau wrote:
> Hi Ricardo,
> 
> On Mon, Dec 05, 2016 at 11:55:44AM +, Ricardo Fraile wrote:
> > Hello,
> > 
> > Finally I found a workaround. Generate a list with all the configuration
> > files with a script in a ExecStartPre unit option, load the list into a
> > enviroment variable and pass them to the haproxy executable. I tried to 
> > avoid
> > the use of a external script, but due the particularities of systemd I
> > couldn't make it to work.
> 
> (...)
> 
> I *think* that the problem you describe is in fact more related to the
> systemd wrapper itself, am I wrong ? Maybe we need to modify it to
> pass -Ds first
> 
> > 2.- Create a small script into "/usr/local/bin/haproxy-multiconf" with this 
> > content:
> > 
> > #!/bin/bash
> > 
> > for file in /etc/haproxy/*.conf; do
> > test -f $file
> > CNF="$CNF -f $file"
> > done
> > 
> > echo "CONF='$CNF'" > /etc/haproxy/haproxy-multiconf.lst
> 
> Does systemd support bash-like pattern substitution ? In this case, you
> could use something like ${CONF[@]/#/-f } to prepend "-f" in front of
> each file.
> 
> Regards,
> Willy

From f6d0203e8dbf0046203bd105513dd8b55719be63 Mon Sep 17 00:00:00 2001
From: rfraile <rfra...@idealista.com>
Date: Mon, 12 Dec 2016 12:40:11 +0100
Subject: [PATCH] Fix systemd-wrapper issue with multiconf argument

---
 src/haproxy-systemd-wrapper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/haproxy-systemd-wrapper.c b/src/haproxy-systemd-wrapper.c
index f6a9c85..2616e2f 100644
--- a/src/haproxy-systemd-wrapper.c
+++ b/src/haproxy-systemd-wrapper.c
@@ -114,9 +114,9 @@ static void spawn_haproxy(char **pid_strv, int nb_pid)
 
 		locate_haproxy(haproxy_bin, 512);
 		argv[argno++] = haproxy_bin;
+		argv[argno++] = "-Ds";
 		for (i = 0; i < main_argc; ++i)
 			argv[argno++] = main_argv[i];
-		argv[argno++] = "-Ds";
 		if (nb_pid > 0) {
 			argv[argno++] = "-sf";
 			for (i = 0; i < nb_pid; ++i)
-- 
2.1.4



Re: Define path of configuration files in systemd unit

2016-12-05 Thread Ricardo Fraile
Hello,


Finally I found a workaround: generate a list of all the configuration files 
with a script in an ExecStartPre unit option, load the list into an environment 
variable, and pass it to the haproxy executable. I tried to avoid the use of an 
external script, but due to the particularities of systemd I couldn't make it 
work.




1.- Split the HAProxy configuration file.

1.1.- One file called "00-haproxy.conf" with the basic haproxy conf (in my 
case global, defaults and listen stats). It must start with "00-" so the 
script lists it in first place.

1.2.- One file for each listen section of the different balanced services, 
"some_name_a.conf". Each new balanced service gets a new file.

Note: I only define each balanced service in a listen section, not using 
frontend and backend.



2.- Create a small script into "/usr/local/bin/haproxy-multiconf" with this 
content:

#!/bin/bash

for file in /etc/haproxy/*.conf; do
test -f $file
CNF="$CNF -f $file"
done

echo "CONF='$CNF'" > /etc/haproxy/haproxy-multiconf.lst




3.- Change the systemd unit from this:

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -f /etc/haproxy/haproxy.conf -c -q
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.conf 
-p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target


To this:


[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/bin/haproxy-multiconf
EnvironmentFile=/etc/haproxy/haproxy-multiconf.lst
ExecStartPre=/usr/local/sbin/haproxy -c -q $CONF
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid $CONF
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target



4.- Refresh systemd and run it:

systemctl daemon-reload
systemctl restart haproxy.service


I hope this helps someone.

Regards,



From: Ricardo Fraile <rfra...@idealista.com>
Sent: Wednesday, 23 November 2016 12:43:20
To: haproxy@formilux.org
Subject: Define path of configuration files in systemd unit

Hello,

I'm trying to use the "--" option to load multiple files in a systemd
unit, using the following unit file:



[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target



If I run "systemctl start haproxy.service" and "systemctl status
haproxy.service" the ExecStartPre report a Failue with "1".

But running the same command manually report "0"



/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
echo $?
0



Apart from this problem, the next one is that the wrapper adds the
"-Ds" parameter at the end, and the previous "--" catches it as another
argument, resulting in:



/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
<7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy
-p /run/haproxy.pid -- /etc/haproxy/haproxy.conf -Ds
[ALERT] 327/123043 (29118) : Could not open configuration file -Ds : No
such file or directory
<5>haproxy-systemd-wrapper: exit, haproxy RC=256



How can a path with the configuration files be correctly defined inside a
systemd unit?



Thanks,




Define path of configuration files in systemd unit

2016-11-23 Thread Ricardo Fraile
Hello,

I'm trying to use the "--" option to load multiple files in a systemd
unit, using the following unit file:



[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target



If I run "systemctl start haproxy.service" and "systemctl status
haproxy.service" the ExecStartPre report a Failue with "1".

But running the same command manually report "0"



/usr/local/sbin/haproxy -c -q -- /etc/haproxy/*
echo $?
0



Apart from this problem, the next issue is that the wrapper adds the
"-Ds" parameter at the end, and the preceding "--" catches it as another
argument, resulting in:



/usr/local/sbin/haproxy-systemd-wrapper -p /run/haproxy.pid
-- /etc/haproxy/*
<7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy
-p /run/haproxy.pid -- /etc/haproxy/haproxy.conf -Ds 
[ALERT] 327/123043 (29118) : Could not open configuration file -Ds : No
such file or directory
<5>haproxy-systemd-wrapper: exit, haproxy RC=256



How can I correctly define a path with the configuration files
inside a systemd unit?



Thanks,




Issue setting limits from Systemd to Haproxy service

2016-04-26 Thread Ricardo Fraile
Hello,



I try to limit the number of file descriptors using the variable
"LimitNOFILE" inside the following systemd unit:

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper
-f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
LimitNOFILE=5 # For testing only...

[Install]
WantedBy=multi-user.target



But it only works for the first process spawned, which is
haproxy-systemd-wrapper:

root      4421  0.0  0.1  17084  1508 ?  Ss   10:11   0:00 /usr/local/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
nobody    4423  0.0  0.4  30104  4436 ?  S    10:11   0:00  \_ /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
nobody    4424  0.0  0.2  30104  2508 ?  Ss   10:11   0:00      \_ /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

# cat /proc/4421/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            5                    5                    files

# cat /proc/4423/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            64013                64013                files

# cat /proc/4424/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            64013                64013                files



The process that listens on the socket is the last one, 4424, with the bad
settings:

# netstat -ntap | grep haproxy
tcp        0      0 0.0.0.0:80       0.0.0.0:*     LISTEN     4424/haproxy
tcp        0      0 0.0.0.0:8088     0.0.0.0:*     LISTEN     4424/haproxy



Shouldn't haproxy-systemd-wrapper pass these values on?

Is it possible to pass the limits from systemd to the listening haproxy
process?
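
(For reference, and independently of systemd, the descriptor limit can also be
pinned from HAProxy's own global section; a minimal sketch with arbitrary
example values:

global
    maxconn  20000
    ulimit-n 65536

When ulimit-n is not set, haproxy computes and applies its own limit from
maxconn, which may be where the 64013 seen above comes from.)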


Thanks,




What are the random characters in the cookie header?

2014-08-29 Thread Ricardo Fraile
Hello,

When HAProxy is configured with persistence, delivering requests across various 
backend servers with something like:

...
cookie SERVER insert maxidle 60m maxlife 180m indirect

server web1 192.168.1.50:80 cookie A check inter 5s fastinter 1s downinter 1s 
rise 2 fall 2
server web2 192.168.1.51:80 cookie B check inter 5s fastinter 1s downinter 1s 
rise 2 fall 2
...

I see that the inserted cookie has some random characters, in this example 
VAA8l|VAA8l. What are they?

Set-Cookie: SERVER=B|VAA8l|VAA8l; path=/


Thanks,


Re: limit connections by header

2014-08-13 Thread Ricardo Fraile
I only added "tcp-request inspect-delay 5s" to the code indicated by Thierry 
and it works as expected. Before, some requests slipped past the limit.
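
For the archive, the combined snippet looks roughly like this (a sketch only:
the header name and threshold come from Thierry's example, the section and
server names are made up):

listen app
    bind *:80
    mode http
    stick-table type string size 32m expire 1m store http_req_rate(1s)
    tcp-request inspect-delay 5s
    tcp-request content track-sc0 hdr(X-User-Id)
    acl limit_x_user_id sc0_http_req_rate gt 1
    http-request deny if limit_x_user_id
    server web1 192.168.1.50:80 check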

Den, you can change the status code in the error file. The 500 error file can 
reply with a 404, for example:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#errorfile



Thanks,



On Tuesday, August 12, 2014 at 12:59, Den Bozhok undyin...@yandex.ru wrote:
 


Wow! Thanks a lot for this information, it's very useful.
From the documentation, tarpit returns 500 when the timeout is reached. Is it 
possible to change the response code? 500 isn't the best code for a timeout :)
 
Thanks again
 
12.08.2014, 14:18, Thierry FOURNIER tfourn...@haproxy.com:
On Tue, 12 Aug 2014 13:29:45 +0400
Den Bozhok undyin...@yandex.ru wrote:
  
 Well, now I know how to count connections by a user's header:
  
 stick-table type string size 32m expire 1m store conn_cur
 tcp-request content track-sc0 hdr(X-User-Id)
 acl limit_x_user_id sc0_conn_cur gt 500
  
 so the ACL is created, but I only know how to drop the connection once it has 
reached its maximum; is it possible to push the connection into a queue and 
pull it once the limit has passed?
  
Hello,

For information, you can also store the connection rate in the stick
table like this:

   stick-table type string size 32m expire 1m store http_req_rate(1s)
   tcp-request content track-sc0 hdr(X-User-Id)
   acl limit_x_user_id sc0_http_req_rate gt 1 # limit to one request per 
second / per user

The acl to drop the connection is:

   http-request KEYWORD if { limit_x_user_id }

KEYWORD can be:

   tarpit if you want to slow down this user
   redirect if you want to redirect the user to an information page

You can also use block if { limit_x_user_id } to send a 403 to the user.

Thierry
 12.08.2014, 11:44, Ricardo Fraile rfra...@yahoo.es:
   Hello,

   I'm interested in it too.

   Thanks,

Re: limit connections by header

2014-08-12 Thread Ricardo Fraile
Hello,

I'm interested in it too.

Thanks,

Re: Block clients based on header in real time?

2013-07-18 Thread Ricardo Fraile
Hello,

After some time, I'm back to this situation.

I'm trying to implement a white and black list in this stick table. One solution 
is based on storing the IPs and playing with setting data.gpc0 to 1 or 0; OK, it 
works, but the problem now is with networks.


The first issue is with the stick-table: this table stores IPs, not a subnet or 
a piece of one. For this reason, the first thing is to change type ip to 
type string.

Now, the only workaround to match a subnet is storing it in a format that 
matches an 8/16/24 mask:
60.40.0
32.11
44

Well, now I can store what I want:
# table: name-of-back1, type: string, size:1048576, used:2
0x21559c4: key=10.0.0 use=0 exp=0 gpc0=1
0x2155a94: key=10.0.0.1 use=0 exp=0 gpc0=0

In this example, I want to deny the whole 10.0.0.0/24 network except for the 
host 10.0.0.1. But the problem now is matching this situation with this code:
tcp-request content track-sc1 req.hdr(True-Client-IP,1)
http-request deny if { sc1_get_gpc0 gt 0 }

It only works if the exact content is matched in the True-Client-IP header, 
which is impossible in the case of networks.

I found hdr_beg in the doc, but it is listed under "ACL derivatives" and I 
can't get a valid configuration working with it in my tests.

Is it possible to do that, match the first characters of the tracked header? 
Any example conf with hdr_beg running in a tcp-request line?
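
A possible alternative sketch (untested here, file names are just examples):
keep the networks in pattern files referenced by ACLs and update them over the
socket with "add acl", which recent 1.5 snapshots support:

acl from_blocked_net  req.hdr_ip(True-Client-IP) -f /etc/haproxy/blocked-nets.lst
acl from_allowed_host req.hdr_ip(True-Client-IP) -f /etc/haproxy/allowed-hosts.lst
http-request deny if from_blocked_net !from_allowed_host

# blocked-nets.lst may contain CIDR networks such as 10.0.0.0/24, and entries
# can be added at runtime:
echo "add acl /etc/haproxy/blocked-nets.lst 10.0.0.0/24" | socat stdio /var/run/haproxy.sock
echo "add acl /etc/haproxy/allowed-hosts.lst 10.0.0.1" | socat stdio /var/run/haproxy.sock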


Thanks,








 From: Ricardo Fraile rfra...@yahoo.es
To: Baptiste bed...@gmail.com 
CC: haproxy@formilux.org haproxy@formilux.org 
Sent: Wednesday, June 12, 2013 11:03
Subject: Re: Block clients based on header in real time?
 


Fantastic!

With this conf, I can now update the list with a simple:
# echo set table name-of-the-table key 10.0.0.1 data.gpc0 1 | socat stdio 
/var/run/haproxy.sock


And with a curl:
$ curl -I 127.0.0.1:80 -H True-Client-IP: 10.0.0.1
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html

But one more question: if I need to block a subnet, how can I do it? I tried to 
store:
echo set table name-of-the-table key 10.0.0.0/8 data.gpc0 1 | socat stdio 
/var/run/haproxy.sock

but it doesn't work, and the same with just "10." in place of 10.0.0.0/8, 
but nothing.

Thanks, 




 From: Baptiste bed...@gmail.com
To: Ricardo Fraile rfra...@yahoo.es 
CC: haproxy@formilux.org haproxy@formilux.org 
Sent: Saturday, June 8, 2013 8:40
Subject: Re: Block clients based on header in real time?
 

Hi Ricardo,

Actually, this is how I would do the conf:
  stick-table type ip size 1m store gpc0
  tcp-request content track-sc1 req.hdr_ip(True-Client-IP)
  http-request deny if { sc1_get_gpc0 gt 0 }


Then you can insert new data in the stick table using HAProxy UNIX
socket (which can run over TCP) with:
  set table table key key data.data_type value
In example, to block 10.0.0.1:
  set table mybackend key 10.0.0.1 data.gpc0 1

And you're done.

Here is the result when I test it with curl on my laptop:

$ curl 127.0.0.1:8080 -H True-Client-IP: 10.0.0.1

htmlbodyh1403 Forbidden/h1
Request forbidden by administrative rules.
/body/html


$ curl 127.0.0.1:8080

htmlbodyh1503 Service Unavailable/h1
No server is available to handle this request.
/body/html


Baptiste


On Thu, May 30, 2013 at 12:50 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,

    Ok, i update the server to 1.5 version but i have some troubles between 
stick-table and the acl.

    Before, i had:

 listen host1 *:80
     ...
     mode http
     acl block_invalid_client hdr_sub(True-Client-IP) -f true-client-ip.lst
     block if block_invalid_client
     ...

    Now, i try to change the file to a stick table:

 backend host1
     ...

     stick-table type ip size 1m store gpc0
     acl block_invalid_client hdr_ip(True-Client-IP) -- { stick match(host1) }
     http-request deny if block_invalid_client
    
 ...

     But not work:

     error detected while parsing ACL 'block_invalid_client' : '{' is not a 
valid IPv4 or IPv6 address.
     error detected while parsing an 'http-request deny' condition : no such 
ACL : 'block_invalid_client'.


      Is it possible to match an http header inside an ACL against a stick table?

 Thanks,




 - Original Message -
 From: Baptiste bed...@gmail.com
 To: Ricardo Fraile rfra...@yahoo.es
 CC: haproxy@formilux.org haproxy@formilux.org
 Sent: Wednesday, May 29, 2013 14:51
 Subject: Re: Block clients based on header in real time?

 Hi,

 With latest HAProxy version, you could use a stick table and insert
 IPs in the stick table through HAProxy socket.
 Then you can ban all IPs from the stick table.

 Baptiste


 On Wed, May 29, 2013 at 1:05 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,


    I'm looking for a solution for blocking users based on a header, 
x-forwarded-for. I have yet an acl for this but is it possible to update the 
list of ips without restart haproxy?


 Thanks,



Re: Block clients based on header in real time?

2013-06-12 Thread Ricardo Fraile
Fantastic!

With this conf, I can now update the list with a simple:
# echo set table name-of-the-table key 10.0.0.1 data.gpc0 1 | socat stdio 
/var/run/haproxy.sock


And with a curl:
$ curl -I 127.0.0.1:80 -H True-Client-IP: 10.0.0.1
HTTP/1.0 403 Forbidden
Cache-Control: no-cache
Connection: close
Content-Type: text/html

But one more question: if I need to block a subnet, how can I do it? I tried to 
store:
echo set table name-of-the-table key 10.0.0.0/8 data.gpc0 1 | socat stdio 
/var/run/haproxy.sock

but it doesn't work, and the same with just "10." in place of 10.0.0.0/8, 
but nothing.

Thanks, 




 From: Baptiste bed...@gmail.com
To: Ricardo Fraile rfra...@yahoo.es 
CC: haproxy@formilux.org haproxy@formilux.org 
Sent: Saturday, June 8, 2013 8:40
Subject: Re: Block clients based on header in real time?
 

Hi Ricardo,

Actually, this is how I would do the conf:
  stick-table type ip size 1m store gpc0
  tcp-request content track-sc1 req.hdr_ip(True-Client-IP)
  http-request deny if { sc1_get_gpc0 gt 0 }


Then you can insert new data in the stick table using HAProxy UNIX
socket (which can run over TCP) with:
  set table table key key data.data_type value
In example, to block 10.0.0.1:
  set table mybackend key 10.0.0.1 data.gpc0 1

And you're done.

Here is the result when I test it with curl on my laptop:

$ curl 127.0.0.1:8080 -H True-Client-IP: 10.0.0.1

htmlbodyh1403 Forbidden/h1
Request forbidden by administrative rules.
/body/html


$ curl 127.0.0.1:8080

htmlbodyh1503 Service Unavailable/h1
No server is available to handle this request.
/body/html


Baptiste


On Thu, May 30, 2013 at 12:50 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,

    Ok, i update the server to 1.5 version but i have some troubles between 
stick-table and the acl.

    Before, i had:

 listen host1 *:80
     ...
     mode http
     acl block_invalid_client hdr_sub(True-Client-IP) -f true-client-ip.lst
     block if block_invalid_client
     ...

    Now, i try to change the file to a stick table:

 backend host1
     ...

     stick-table type ip size 1m store gpc0
     acl block_invalid_client hdr_ip(True-Client-IP) -- { stick match(host1) }
     http-request deny if block_invalid_client
     ...

     But not work:

     error detected while parsing ACL 'block_invalid_client' : '{' is not a 
valid IPv4 or IPv6 address.
     error detected while parsing an 'http-request deny' condition : no such 
ACL : 'block_invalid_client'.


      Is it possible to match an http header inside an ACL against a stick table?

 Thanks,




  - Original Message -
  From: Baptiste bed...@gmail.com
  To: Ricardo Fraile rfra...@yahoo.es
  CC: haproxy@formilux.org haproxy@formilux.org
  Sent: Wednesday, May 29, 2013 14:51
  Subject: Re: Block clients based on header in real time?

 Hi,

 With latest HAProxy version, you could use a stick table and insert
 IPs in the stick table through HAProxy socket.
 Then you can ban all IPs from the stick table.

 Baptiste


 On Wed, May 29, 2013 at 1:05 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,


    I'm looking for a solution for blocking users based on a header, 
x-forwarded-for. I have yet an acl for this but is it possible to update the 
list of ips without restart haproxy?


 Thanks,



Re: Block clients based on header in real time?

2013-05-30 Thread Ricardo Fraile
Hello,

   Ok, I updated the server to the 1.5 version but I have some trouble between 
the stick-table and the ACL.

   Before, I had:

listen host1 *:80
    ...
    mode http
    acl block_invalid_client hdr_sub(True-Client-IP) -f true-client-ip.lst
    block if block_invalid_client
    ... 

   Now, I try to change the file to a stick table:

backend host1
    ...

    stick-table type ip size 1m store gpc0
    acl block_invalid_client hdr_ip(True-Client-IP) -- { stick match(host1) }
    http-request deny if block_invalid_client
    ...

    But it doesn't work:

    error detected while parsing ACL 'block_invalid_client' : '{' is not a 
valid IPv4 or IPv6 address.
    error detected while parsing an 'http-request deny' condition : no such ACL 
: 'block_invalid_client'.


    Is it possible to match an http header inside an ACL against a stick table?

Thanks, 




- Original Message -
From: Baptiste bed...@gmail.com
To: Ricardo Fraile rfra...@yahoo.es
CC: haproxy@formilux.org haproxy@formilux.org
Sent: Wednesday, May 29, 2013 14:51
Subject: Re: Block clients based on header in real time?

Hi,

With latest HAProxy version, you could use a stick table and insert
IPs in the stick table through HAProxy socket.
Then you can ban all IPs from the stick table.

Baptiste


On Wed, May 29, 2013 at 1:05 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,


    I'm looking for a solution for blocking users based on a header, 
x-forwarded-for. I have yet an acl for this but is it possible to update the 
list of ips without restart haproxy?


 Thanks,




Re: Block clients based on header in real time?

2013-05-30 Thread Ricardo Fraile
I keep trying configurations, looking in the list and some blogs, but I can't 
ban IPs from a stick table, or I don't know how. The last thing I tried:

backend host:80
        stick-table type ip size 1m  store gpc0
        http-request deny if hdr_sub(True-Client-IP) # How do I check here if 
the True-Client-IP is inside the stick-table?


In the table, I put the IPs by hand; it looks like this:

show table host
# table: back-idealista.es-http, type: ip, size:1048576, used:2
0xcae6c4: key=192.168.1.5 use=0 exp=0 gpc0=1
0xcdac34: key=192.168.1.6 use=0 exp=0 gpc0=1


The most similar is this message on the list: 
http://comments.gmane.org/gmane.comp.web.haproxy/9938 but the problem is that 
there the IP of the client is inside a header.


Thanks,



- Original Message -
From: Ricardo Fraile rfra...@yahoo.es
To: haproxy@formilux.org haproxy@formilux.org
CC: 
Sent: Thursday, May 30, 2013 12:50
Subject: Re: Block clients based on header in real time?

Hello,

   Ok, i update the server to 1.5 version but i have some troubles between 
stick-table and the acl.

   Before, i had:

listen host1 *:80
    ...
    mode http
    acl block_invalid_client hdr_sub(True-Client-IP) -f true-client-ip.lst
    block if block_invalid_client
    ... 

   Now, i try to change the file to a stick table:

backend host1
    ...

    stick-table type ip size 1m store gpc0
    acl block_invalid_client hdr_ip(True-Client-IP) -- { stick match(host1) }
    http-request deny if block_invalid_client
    ...

    But not work:

    error detected while parsing ACL 'block_invalid_client' : '{' is not a 
valid IPv4 or IPv6 address.
    error detected while parsing an 'http-request deny' condition : no such ACL 
: 'block_invalid_client'.


    Is it possible to match an http header inside an ACL against a stick table?

Thanks, 




- Original Message -
From: Baptiste bed...@gmail.com
To: Ricardo Fraile rfra...@yahoo.es
CC: haproxy@formilux.org haproxy@formilux.org
Sent: Wednesday, May 29, 2013 14:51
Subject: Re: Block clients based on header in real time?

Hi,

With latest HAProxy version, you could use a stick table and insert
IPs in the stick table through HAProxy socket.
Then you can ban all IPs from the stick table.

Baptiste


On Wed, May 29, 2013 at 1:05 PM, Ricardo Fraile rfra...@yahoo.es wrote:
 Hello,


    I'm looking for a solution for blocking users based on a header, 
x-forwarded-for. I have yet an acl for this but is it possible to update the 
list of ips without restart haproxy?


 Thanks,





Block clients based on header in real time?

2013-05-29 Thread Ricardo Fraile
Hello,


   I'm looking for a solution for blocking users based on a header, 
x-forwarded-for. I already have an ACL for this, but is it possible to update the 
list of IPs without restarting haproxy?


Thanks,



Re: HAProxy with native SSL support !

2012-09-04 Thread Ricardo Fraile
Great!

Thanks Willy,




 From: Willy Tarreau w...@1wt.eu
To: haproxy@formilux.org 
Sent: Tuesday, September 4, 2012 1:37
Subject: HAProxy with native SSL support !
 
Hi all,

today is a great day (could say night considering the time I'm posting) !

After several months of efforts by the Exceliance team, we managed to
rework all the buffer and connection layers in order to get SSL working
on both sides of HAProxy.

The code is still in preview, we can't break it anymore but considering
that we've fixed some bugs today, I'm sure that some still remain in the
100+ patches and 16000 lines of patches this work required (not counting
the many ones that were abandoned or re-merged multiple times).

The code is still going to change because we're getting closer to something
which will allow outgoing connections to be reused, resulting in keep-alive
on both sides. But not yet, be patient.

What's done right now ?

1) connections

Connections are independent entities which can be instantiated without
allocating a full session and its buffers. Connections are responsible
for handshakes and control, and pass data to buffers. Connection-level
TCP-request rules, the PROXY protocol and SSL handshakes are processed
at the connection level.

2) buffers

buffers have been split in three: channel (the tube where the data flows),
buffer (where data is temporarily stored for analysis or forwarding) and
optionally the pipe (stored in kernel area for forwarding only). New buffers
only handle data without consideration for what it's used for. Health checks
are currently being migrated to use this with connections.

3) data I/O

data I/O are now performed between a connection and a buffer. We have
two data-layer operations now : raw and ssl. It is very easy to add
new ones now, we're even wondering whether it would make sense to write
one dedicated to yassl in native mode (without the openssl API).

4) socket I/O

at the moment we only support normal sockets, but the design considered
remote sockets so that we could off-load heavy processing to external
processes (eg: HTTP on one process, SSL on two others). Remote sockets
have not been started yet but surely will. SHMs have also been considered
to emulate sockets.

5) configuration

Configuration has been extended to support the ssl keyword on bind lines
and on server lines. For both, the syntax is :

    ... ssl cert.pem [ciphers suite] [nosslv3] [notlsv1]

    cert.pem is a PEM file made by concatenating the .crt and the .key of a
    certificate.

    eg:   bind :443 ssl /etc/haproxy/pub.pem
          server local 192.168.0.1:443 ssl ciphers EXPORT40 notlsv1
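
    For reference, building such a PEM is just a concatenation (file names are
    illustrative):

        cat site.crt site.key > /etc/haproxy/pub.pem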

6) session management

SSL sessions are stored in a shared memory cache, allowing haproxy to run
with nbproc > 1 and still work correctly. This is the session cache we
developed for stunnel then stud, it was time to adopt it in haproxy. It's
so fast that we don't use openssl's cache at all, since even at one single
process, it's at least as fast.

7) other

A lot remains to be done, mainly some of the aforementioned structures
are still included in other ones, which simplified the split. Once all
the work is over, we should end up with less memory used per connection.
This is important to better handle DDoS.


At the moment, everything we could try seems to work fine. The SSL stacks
well on top of the PROXY protocol, which is very important to build SSL
offload farms (I'm sure Baptiste will want to write a blog article on the
subject of using sub-$1000 machines to build large 100k+tps farms).
Stats work over https too. Right now we're missing ACLs to match whether
the traffic was SSL or clear, as well as logs. Both can be worked around
by using distinct bind lines or even frontends. The doc is still clearly
lacking, but we think that the config will change a little bit.

Only the GNU makefile was updated, neither the BSD nor OSX were, they're
a little trickier. If someone with one of these systems wants to update
them, I'll happily accept the patches.

What else ? Ah yes, 4k. You're there wondering about the results. 4000 SSL
connections per second and 300 Mbps is what we got out of a dual-core Atom
D510 at 1.66 GHz, in SSLv3 running over 4 processes (hyperthreading was
enabled) :-) This is a bit more than stud and obviously much better than
stunnel (which doesn't scale to more than a few hundred connections before
the performance quickly drops).

And older tests seem to indicate that with YaSSL we can get 30-40% more,
maybe even more. We need to work with the YaSSL guys to slightly improve
their cache management before this can become a default build option.

Enough speaking, for those who want to test or even have the hardware to
run more interesting benchmarks, the code was merged into the master
branch and is in today's snapshot (20120904) here :

    http://haproxy.1wt.eu/download/1.5/src/snapshot/

Build it by passing USE_OPENSSL=1 on the make command line. You should
also