Re: haproxy hiding url/minio

2020-12-23 Thread Chad Lavoie

Greetings,

On 12/23/2020 7:10 PM, Jonathan Opperman wrote:


Works perfectly fine. What is the best way to hide /minio so it will 
rather say /storage, so that externally

I hide the fact that we are using minio?


You can do that by using 'http-request set-path 
%[path,regsub(^/storage,/minio)]' to rewrite the path that the backend 
sees from what the client sent.
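A sketch of where such a rule could sit in a frontend (the names here are hypothetical):

   frontend fe_main
       bind *:80
       mode http
       # rewrite the public /storage prefix to the internal /minio prefix
       http-request set-path %[path,regsub(^/storage,/minio)]
       default_backend minio_servers

Note that this rule alone does not rewrite responses, so any /minio prefix in Location headers or links emitted by the backend would still be visible.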


- Chad




Re: comparing stick-table values in acl

2019-12-07 Thread Chad Lavoie

Greetings,

On 12/7/2019 4:48 PM, Björn Jacke wrote:

Hi,

I would like to compare two different stick-table values in an ACL. 
What I tried to do was an obvious comparison like this:


http-request deny if { sc_conn_rate(0) le sc_http_req_rate(1) }


The following should do what you seek:

http-request set-var(req.request_rate) sc_http_req_rate(1)

http-request deny if { sc_conn_rate(0),sub(req.request_rate) lt 0 }


Thanks,

Chad




but this results in:

[ALERT] 340/213554 (9804) : parsing [/etc/haproxy/haproxy.cfg:203] : 
error detected while parsing an 'http-request deny' condition : 
'sc_http_req_rate(1)' is neither a number nor a supported operator.


is that an intended limitation, that no variables are allowed on the 
right-hand side here?

Is there a different way to do something like this?

Thanks
Björn





Re: How to allow Client Requests at a given rate

2019-02-23 Thread Chad Lavoie

Greetings,

On 2/23/2019 3:06 AM, Santos Das wrote:

Hi,

I have a requirement where I need to allow only certain request rate 
for a given URL.


Say /login can be accessed at the rate of 10 RPS. If I get 100 RPS, 
then 10 should be allowed and 90 should be denied.



There are a couple of ways to do that; the easiest method is blocking 
before the track (so that a blocked request doesn't count against the 
limit).  Given that in this example you are already using the table_* 
fetches instead of the sc_* fetches, you can just move both http-request 
set-var lines, the acl line, and the http-request deny line so they are 
above the http-request track-sc0 line.
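Applied to the configuration below, the reordered rules would look roughly like this (a sketch; the directives are unchanged, only moved):

   # evaluate the limit first...
   http-request set-var(req.rate_limit) path,map_beg(/etc/hapee-1.8/maps/rates.map)
   http-request set-var(req.request_rate) base32+src,table_http_req_rate(api_gateway)
   acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0
   http-request deny deny_status 429 if rate_abuse
   # ...then track only requests that were not denied
   http-request track-sc0 base32+src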



Thanks,

- Chad



Any help on how this can be achieved ?

*I tried to use the sticky table, but once it blocks it blocks for 
ever. Please advise.*



frontend api_gateway
    bind 0.0.0.0:80
    mode http
    option forwardfor
    default_backend nodes

    # Set up stick table to track request rates
    stick-table type binary len 8 size 1m expire 10s store http_req_rate(10s)

    # Track client by base32+src (Host header + URL path + src IP)
    http-request track-sc0 base32+src

    # Check map file to get rate limit for path
    http-request set-var(req.rate_limit) path,map_beg(/etc/hapee-1.8/maps/rates.map)

    # Client's request rate is tracked
    http-request set-var(req.request_rate) base32+src,table_http_req_rate(api_gateway)

    # Subtract the current request rate from the limit
    # If less than zero, set rate_abuse to true
    acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0

    # Deny if rate abuse
    http-request deny deny_status 429 if rate_abuse

backend nodes
    mode http
    balance roundrobin
    server echoprgm 10.37.9.30:11001 check





Re: Logging actual fetched URL after request is re-written

2018-03-27 Thread Chad Lavoie

Greetings,

Sorry, pressed wrong button so didn't include on CC.


On 03/27/2018 01:03 PM, Chad Lavoie wrote:


Greetings,


On 03/27/2018 12:49 PM, Franks Andy (IT Technical Architecture 
Manager) wrote:


Hi all,

  Logging with HTTP as standard, the %{+Q}r log variable records the 
requested URL in the logs. I’d like to also record the URL that’s 
actually fetched after an http-request set-path directive is given (for 
debugging purposes). It’s linked to an application that provides next 
to no debugging, and tcpdump isn’t much help either – having it in 
haproxy logs would be really useful.


Can I do this or am I thinking too much outside the box? I tried 
setting a dynamic variable and then setting that in the frontend 
log-format, but it didn’t seem to record anything even despite 
populating the variable.




You should be able to add "http-request capture path len 32" at the 
end of a frontend to capture the path after all the modifications.
Variables should work too, though without knowing exactly what your 
variable rules looked like I can't guess as to why it didn't capture 
anything.
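For example (a sketch; the set-path rule here is hypothetical), with the capture placed as the last rule, the rewritten path shows up as an extra {...} capture field in the standard HTTP log line:

   frontend fe_main
       bind *:80
       mode http
       option httplog
       http-request set-path %[path,regsub(^/old,/new)]
       # placed after all rewrites, so it records the final path
       http-request capture path len 32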


- Chad


Thanks








Re: Monitoring/testing tarpit and connection rejects

2018-02-16 Thread Chad Lavoie

Greetings,

Answers inline.

On 02/16/2018 08:03 AM, Stefan Magnus Landrø wrote:

Hi guys,

We're using some of the DDOS features found in haproxy (e.g. 
https://www.haproxy.com/blog/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/) 



We've performed some basic testing using apache bench, and get 
expected results (connections get dropped etc).


Be careful with tarpitting as it will eat file descriptors and source 
ports.  I recommend http-request deny in most cases as with distributed 
attacks they can easily run you out of them.  In some cases 
"http-request silent-drop" can help with a similar effect, but beware of 
unintended consequences (other stateful devices in your network that you 
could unintentionally DoS).
Also ensure your kernel is tuned with settings like tw_reuse and an 
increased source port range.
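On Linux that tuning usually comes down to a couple of sysctls; the exact values are environment-dependent, so treat this as a sketch:

   # /etc/sysctl.d/90-haproxy.conf (path is arbitrary)
   # allow reuse of TIME_WAIT sockets for outgoing connections
   net.ipv4.tcp_tw_reuse = 1
   # widen the ephemeral port range available for source ports
   net.ipv4.ip_local_port_range = 1024 65535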


Is there a better way to make sure the configuration works as expected?


The first step is to find the specific bottleneck you're hitting; 
dropping connections is a symptom with many potential causes.
Can we somehow monitor the number of requests that get tarpitted or 
connections that get dropped, or is this info not collected/exposed 
by haproxy at all?


My favorite log field, the termination state, will be of interest to 
you.  The first two characters will be LT for a tarpitted request 
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.5).
You will also want to be sure you are graphing the dreq field (and many 
of the others) from the status page 
(https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).


- Chad


BTW - using  haproxy 1.8.4 alpine image

Cheers

Stefan





Re: Is there a more efficient way to use backend webservers with HAproxy+Lua scripts ?

2017-06-29 Thread Chad Lavoie

Greetings,


On 06/29/2017 05:36 PM, Burak Çayır wrote:

Hello,

I am a CS student and I am trying to learn HAproxy and Lua API.

I want to load web server pages faster than before. Is it possible 
with HAproxy Lua API ? If it is possible , which algorithm I should use ?


The correct algorithm depends on the problem you're trying to solve.  If 
you're just trying to route via the Host header I'd not recommend using 
Lua; instead see my method below.
If you are trying to learn the Lua API I'd pick a more complicated task 
that can't be done directly in HAProxy (verifying and routing based on 
an authentication header, perhaps).


my getbackend.cfg:

global
    daemon
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 5
    lua-load /etc/haproxy/getbackend.lua

defaults
    log global
    retries 3
    backlog 1
    maxconn 1
    timeout connect 3s
    timeout client 30s
    timeout server 30s
    timeout tunnel 3600s
    timeout http-keep-alive 1s
    timeout http-request 15s
    timeout queue 30s
    timeout tarpit 60s
    option redispatch
    option http-server-close
    option dontlognull

frontend mywebserver
    bind *:8080
    mode http
    use_backend %[lua.choose_backend]

backend backend1
    balance roundrobin
    mode http
    server ws1 192.168.122.232:8080
    server ws3 192.168.122.219:8080

backend backend2
    balance roundrobin
    mode http
    server ws2 192.168.122.72:8080
    server ws4 192.168.122.172:8080

and my getbackend.lua:

core.register_fetches("choose_backend", function(txn)
    if txn.sf:req_fhdr("host") == 'test.com:8080' then
        return "backend1"
    elseif txn.sf:req_fhdr("host") == 'example.com:8080' then
        return "backend2"
    end
end)
You could select a backend based on the host header in LUA, but that 
will be substantially slower than just doing it directly in the HAProxy 
configuration:

   use_backend backend1 if { hdr(Host) -i test.com }
   use_backend backend2 if { hdr(Host) -i example.com }

That also has the advantage of being case-insensitive.  If you wanted to 
verify the port is 8080, add '{ dst_port 8080 }' to the end of each line; 
the port isn't part of the host header fetch (likely the reason why your 
Lua script isn't doing what you expect).
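Combining both checks would look like this:

   use_backend backend1 if { hdr(Host) -i test.com } { dst_port 8080 }
   use_backend backend2 if { hdr(Host) -i example.com } { dst_port 8080 }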


Thanks,
- Chad

​Kind Regards,
​Burak​
​

--
*Burak Çayır*
539 578 3671
burakcayir.net
Ankara - Türkiye




Re: question about ssl and non-ssl on the same port

2017-04-05 Thread Chad Lavoie

Greetings,


On 04/05/2017 02:19 PM, Jerry Scharf wrote:

Hi,

I have a question that I think I know the answer to.

We have lots of things that are of the form of

bind *:80
redirect scheme https if !{ ssl_fc }
bind *:443 ssl crt xxx

use_backend xxx-be if { ssl_fc_sni www.soundhound.com }

We have an app that we would like to convert in place from non-ssl to 
ssl based. Can I have both binds use the same port? I am guessing not, 
but I want to be sure.


You can if you have a fake TCP frontend which determines if the traffic 
is HTTP or HTTPS using something like the following:


frontend is_ssl_frontend
   mode tcp
   bind *:
   tcp-request inspect-delay 10s
   tcp-request content accept if HTTP
   tcp-request content accept if { req.ssl_hello_type 1 }
   use_backend is_http_backend if HTTP
   default_backend is_https_backend

Each of said backends would then loop back to HAProxy via a socket or 
loopback address (likely with send-proxy-v2 and accept-proxy to keep the 
client IP information) to be handled as HTTP or HTTPS by another frontend.


From your request of using 80/443 I'm not sure if this is what you want 
to do, but just wanted to indicate that it can be done.


Thanks,
- Chad


thanks,

jerry






Re: Redirection append '/' at the end of the Destination URL

2016-11-18 Thread Chad Lavoie

Greetings,


On 11/18/2016 04:55 PM, Qingshan Xie wrote:

Hello!  Experts,
I got one issue when configuring a redirection.  I want to configure 
a redirection from http://a.b.c/ to https:///view/.  The configuration 
is as below,


acl is_map1  base_req  A.B.C/?
redirect prefix https:// /view/ 
if is_map1


however, in the browser when I enter http:// , 
it returns https:///view//.  Notice 
the double "//" at the end; it makes the redirection fail.  Could 
someone tell me what part of the configuration is wrong, or any way 
to avoid this issue?


Just remove the trailing / from the redirect prefix.  Redirect prefix 
appends whatever the original URL was to the end of the specified string, 
and valid URLs start with a /, so you don't need to add one with the prefix.
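In other words, the redirect would be written without the trailing slash (the hostname here is a placeholder):

   acl is_map1 base_req A.B.C/?
   # no trailing slash: the original path, which starts with /, is appended
   redirect prefix https://example.com/view if is_map1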


- Chad


Many Thanks,
Qingshan Xie







Re: stick-table not updated with every request

2016-10-21 Thread Chad Lavoie

Greetings,


On 10/21/2016 08:19 AM, Dennis Jacobfeuerborn wrote:

Hi,
I'm currently experimenting with rate limiting request and while this
sort-of works I see an issue where sometimes the stick-table that
contains the rate-limiting variables isn't update with every request
allowing multiple requests to succeed even if they shouldn't.

I attached the configuration I'm using which basically is supposed to
limit the number of requests to 1 per five seconds and if that limit is
reached the request is diverted to a separate backend that sends a 429
status telling the client to back off.

This works fine as long as the stick-table in the backend abuse-warning
is updated properly but when I use curl from the shell to get the path
/site1/limittest I don't see an entry added in the abuse-warning
stick-table.


From your configuration example I think you need to add "tcp-request 
inspect-delay 10s" to the frontend with the stick table.
HAProxy should print a warning about random matching and suggest that on 
startup.  Without it HAProxy does indeed record some hits and not others, 
which is quite hard to debug if the warning is missed.
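For placement, the directive goes in the frontend (or listen section) whose rules consult the table, for example (a minimal sketch):

   frontend fe_main
       bind *:80
       mode http
       # give content rules a window to inspect the request before matching
       tcp-request inspect-delay 10s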


- Chad

  As long as that entry doesn't appear there i can issue
requests without being limited.

I noticed that the last line of the curl output says:
* Connection #0 to host 192.168.0.100 left intact

I'm wondering if this might have something to do with it. Maybe the
stick-table is only updated when the connection closes? Is there a way
to force the entry to be create immediately?

I'm using haproxy 1.6.9 on a Fedora 24 System.

Regards,
   Dennis





[PATCH] Minor: Escape equals sign on socket dump

2016-10-04 Thread Chad Lavoie

Greetings,

Was recently working with a stick table storing URLs, and one had an 
equals sign in it (e.g. 127.0.0.1/f=ab) which made it difficult to 
split the key and value without a regex.


This patch will change it so that the key looks like 
"key=127.0.0.1/f\=ab" instead of "key=127.0.0.1/f=ab".


Not very important given that there are ways to work around it.

Thanks,

- Chad

diff --git a/src/dumpstats.c b/src/dumpstats.c
index fe34223..f1d3132 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -645,7 +645,7 @@ static void stats_dump_csv_header()
 
 /* print a string of text buffer to . The format is :
  * Non-printable chars \t, \n, \r and \e are * encoded in C format.
- * Other non-printable chars are encoded "\xHH". Space and '\' are also escaped.
+ * Other non-printable chars are encoded "\xHH". Space, '\', and '=' are also escaped.
  * Print stopped if null char or  is reached, or if no more place in the chunk.
  */
 static int dump_text(struct chunk *out, const char *buf, int bsize)
@@ -655,12 +655,12 @@ static int dump_text(struct chunk *out, const char *buf, int bsize)
 
 	while (buf[ptr] && ptr < bsize) {
 		c = buf[ptr];
-		if (isprint(c) && isascii(c) && c != '\\' && c != ' ') {
+		if (isprint(c) && isascii(c) && c != '\\' && c != ' ' && c != '=') {
 			if (out->len > out->size - 1)
 				break;
 			out->str[out->len++] = c;
 		}
-		else if (c == '\t' || c == '\n' || c == '\r' || c == '\e' || c == '\\' || c == ' ') {
+		else if (c == '\t' || c == '\n' || c == '\r' || c == '\e' || c == '\\' || c == ' ' || c == '=') {
 			if (out->len > out->size - 2)
 				break;
 			out->str[out->len++] = '\\';
@@ -671,6 +671,7 @@ static int dump_text(struct chunk *out, const char *buf, int bsize)
 			case '\r': c = 'r'; break;
 			case '\e': c = 'e'; break;
 			case '\\': c = '\\'; break;
+			case '=': c = '='; break;
 			}
 			out->str[out->len++] = c;
 		}


Re: how can i get pass original ip in tcp mode

2016-09-07 Thread Chad Lavoie

Greetings,


On 09/07/2016 10:48 AM, Long Ma wrote:


HI haproxy:

 My haproxy version is 1.6.

 And I use haproxy before my game_server on tcp mode

 Client on A(172.16.77.32)

 HaProxy and game_server on B (172.16.77.37)

 Config file is:

 When I use the client to connect to game_server through haproxy, 
everything is ok, but in my game_server I find the connection remote 
ip is 172.16.77.37, not 172.16.77.32.

 The question is: how can I get the original ip through haproxy in 
tcp mode?




I see you're using send-proxy there; does the software in question 
understand the PROXY protocol 
(http://www.haproxy.org/download/1.6/doc/proxy-protocol.txt)?  If so, it 
should be able to use the client IP address it gets from that.


If you don't want to go that route, you could use full transparent 
proxying mode (see 
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-source); 
but for that to work properly you will need to ensure your network setup 
works with that.
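A minimal sketch of the transparent-proxy route (this assumes a kernel with TPROXY support, HAProxy built with it, and routing that sends the servers' return traffic back through the proxy box; the addresses are hypothetical):

   backend game_servers
       mode tcp
       # connect to the server using the client's own source IP
       source 0.0.0.0 usesrc clientip
       server game1 172.16.77.38:9000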


- Chad


Best Regards

Long Ma

=

百纳(武汉)信息技术有限公司

游戏后台开发马龙

TEL:18616693720

QQ:363419410

ADD:武汉市光谷大道77号光谷金融港A2栋2楼





Re: Rate limiting options using HAProxy

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 05:12 PM, Chad Lavoie wrote:

Greetings,


On 08/30/2016 12:30 PM, Sam kumar wrote:

Hello Sir,

I am trying to implement rate limiting using HA proxy for my HTTP 
restful services.


My requirement is to go implement below two scenario

1.URL based : Every API urls will have different throttle limit


To have limits that differ for different URLs I'd use a list of ACLs 
that look like the following:

http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }


I didn't directly mention it, but if you use the same stick table and 
authorization token the limits will be additive (so 10 requests to one 
URL and 5 to another means a combined rate of 15 is checked against 
each limit).


If you don't want this and don't have an excessive number of unique ones 
I'd advise making a stick table for each.


If you do have an excessive number of them, you may be better off trying 
to track by src+url with the base32+src match instead, or writing a 
converter in Lua to combine the API path and token.
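A sketch of that combination as a Lua fetch (the names are hypothetical; the file would be loaded with lua-load and the fetch used in a track rule):

   -- returns "<path>:<token>" so one string-typed table tracks both
   core.register_fetches("api_token_key", function(txn)
       local token = txn.sf:req_fhdr("X-Authorization") or ""
       return txn.sf:path() .. ":" .. token
   end)

and in the frontend:

   http-request track-sc0 lua.api_token_key table track_api_token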


- Chad


In addition to path you can use path_beg to match against the 
beginning of the path, you can also use url_param 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-url_param) 
and other fetch methods depending on your requirements.
2. Authorization header : Every client has unique authorization token 
so using this I can have a throttle limit for each client.


For this you will want a stick table which stores a string 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-stick-table):

backend track_api_token
stick-table type string len 32 size 1024 store http_req_rate(10s)

Then in your frontend:
http-request track-sc0 hdr(X-Authorization) table track_api_token

From there you can limit using the above rules.

Thanks,
- Chad


I was trying to get help from various other blogs but could not find 
much on this.


Please provide some examples or sample code for the same so that I 
can achieve this functionality


Thanks
Sam







Re: Rate limiting options using HAProxy

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 12:30 PM, Sam kumar wrote:

Hello Sir,

I am trying to implement rate limiting using HA proxy for my HTTP 
restful services.


My requirement is to go implement below two scenario

1.URL based : Every API urls will have different throttle limit


To have limits that differ for different URLs I'd use a list of ACLs 
that look like the following:

http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }

In addition to path you can use path_beg to match against the beginning 
of the path, you can also use url_param 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-url_param) 
and other fetch methods depending on your requirements.
2. Authorization header : Every client has unique authorization token 
so using this I can have a throttle limit for each client.


For this you will want a stick table which stores a string 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-stick-table):

backend track_api_token
stick-table type string len 32 size 1024 store http_req_rate(10s)

Then in your frontend:
http-request track-sc0 hdr(X-Authorization) table track_api_token

From there you can limit using the above rules.
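Putting the two pieces together (the table size and per-path limits are illustrative):

   backend track_api_token
       stick-table type string len 32 size 1024 store http_req_rate(10s)

   frontend fe_api
       bind *:80
       mode http
       http-request track-sc0 hdr(X-Authorization) table track_api_token
       http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
       http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }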

Thanks,
- Chad


I was trying to get help from various other blogs but could not find 
much on this.


Please provide some examples or sample code for the same so that I can 
achieve this functionality


Thanks
Sam





Re: Help Needed || haproxy limiting the connection rate per user

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 01:10 PM, Samrat Roy wrote:

Thank you sir for your quick reply.

I am now able to give custom error code for my HAproxy configuration. 
However I am facing one more issue .


With the above approach HAproxy is rejecting each and every calls once 
the limit has crossed. It is behaving as a circuit breaker . But my 
requirement is to have a throttling for example every 10 second I 
should allow 200 request and anything more than 200 will be rejected.


There are two ways I can think of to interpret your question:
1) You want a tick every 10 seconds which resets the counter to zero
2) You want requests over the limit (which get blocked) not to count 
toward the limit


For 1 you would need a script to talk to the socket, and I'd not advise 
doing that unless you know what you are doing and why there is no 
cleaner alternative.
For 2 I'd add gpc0,gpc0_rate(10s) to the stick table in place of 
conn_rate, then add something like the following after the use_backend 
statement:

http-request allow if { sc_inc_gpc0(0) }

Then instead of checking conn_rate, check sc_gpc0_rate(0) per 
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.3-sc_gpc0_rate.


Because in that case gpc0 will only be incremented if the request 
doesn't end up at the custom backend/blocked/etc, that should fill your 
needs there.


Thanks,
- Chad


Is there any way I can achieve this .Please help me to configure the same.

Thanks in advance
Samrat


On Fri, Aug 26, 2016 at 10:16 PM, Chad Lavoie <clav...@haproxy.com 
<mailto:clav...@haproxy.com>> wrote:


Greetings,


On 08/26/2016 09:14 AM, Samrat Roy wrote:

Hello Sir,






I am trying to achieve rate limiting using HAProxy. I am trying
to follow the "Limiting the connection rate per user" approach. I
am able to achieve this by the below configuration. But facing
one problem, that is, i am not able to send a custom error code
once the rate limit is reached. For example if i reached the rate
limit i want to send HTTP error code 429. In this case the proxy
is simply rejecting the incoming call and users are getting http
status code as 0.



"tcp-request connection reject" rejects the connection, so there
is no status code in this case.  If you want to send a 403 replace
it with "http-request deny if ..." instead.

If you want to respond with HTTP 429 make a backend with no
backend servers (so that all requests will get a 503) and set a
custom 503 error page, editing the headers at the top of the file
so that the response code is 429 (or whatever other
code/message/etc you desire).

- Chad


Please let me know how can i do this

frontend localnodes
    bind *:80
    mode http
    default_backend nodes
    stick-table type ip size 100k expire 30s store conn_rate(5s)
    tcp-request connection reject if { src_conn_rate ge 60 }
    tcp-request connection track-sc1 src

backend nodes
    cookie MYSRV insert indirect nocache
    server srv1 :80 check cookie srv1 maxconn 500


Thanks
Samrat







Re: Help Needed || haproxy limiting the connection rate per user

2016-08-26 Thread Chad Lavoie

Greetings,


On 08/26/2016 09:14 AM, Samrat Roy wrote:

Hello Sir,







I am trying to achieve rate limiting using HAProxy. I am trying to 
follow the "Limiting the connection rate per user" approach. I am able 
to achieve this by the below configuration. But facing one problem, 
that is, i am not able to send a custom error code once the rate limit 
is reached. For example if i reached the rate limit i want to send 
HTTP error code 429. In this case the proxy is simply rejecting the 
incoming call and users are getting http status code as 0.




"tcp-request connection reject" rejects the connection, so there is no 
status code in this case.  If you want to send a 403 replace it with 
"http-request deny if ..." instead.


If you want to respond with HTTP 429 make a backend with no backend 
servers (so that all requests will get a 503) and set a custom 503 error 
page, editing the headers at the top of the file so that the response 
code is 429 (or whatever other code/message/etc you desire).
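A sketch of that setup (the file path is arbitrary; the headers at the top of the error file are what actually set the status the client sees):

   backend be_rate_limited
       mode http
       # no servers, so every request gets the 503 handling,
       # which this file rewrites into a 429 response
       errorfile 503 /etc/haproxy/errors/429.http

where /etc/haproxy/errors/429.http contains a full raw HTTP response:

   HTTP/1.1 429 Too Many Requests
   Cache-Control: no-cache
   Connection: close
   Content-Type: text/html

   <html><body><h1>429 Too Many Requests</h1></body></html>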


- Chad


Please let me know how can i do this

frontend localnodes
    bind *:80
    mode http
    default_backend nodes
    stick-table type ip size 100k expire 30s store conn_rate(5s)
    tcp-request connection reject if { src_conn_rate ge 60 }
    tcp-request connection track-sc1 src

backend nodes
    cookie MYSRV insert indirect nocache
    server srv1 :80 check cookie srv1 maxconn 500



Thanks
Samrat




Re: Adding a custom tcp protocol to HAProxy

2016-07-10 Thread Chad Lavoie
Greetings,

On 7/10/16 6:33 AM, Matt Esch wrote:
> I need to load balance a custom tcp protocol and wonder if HAProxy
> could be configured or extended for my use case.
>
> The protocol is a multiplexed frame-based protocol. An incoming socket
> can send frames in arbitrary order. The first 2 bytes dictate the
> frame length entirely (so max 64k per frame). The frame has a type and
> a header format, followed by a payload.
>
> The multiplexing works by assigning long ids in the frame header and
> pairing the responses based on this id.

Depending on what the IDs actually look like this may or may not work,
but before I started writing a lot of C I'd try something such as
payload() per
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.1.5 to
match an acl for use_backend; or stick on (using a table) via
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick%20on.

Again, entirely possible that it's not usable for your use-case, but if
it is it sounds much easier than trying to code another format.
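As a sketch of the payload() approach (the offsets and byte values here are hypothetical and would have to match the real frame header layout):

frontend mux_in
    mode tcp
    bind *:9000
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.len gt 0 }
    # read 1 byte at offset 2 (after the 2-byte length) as the frame type
    acl frame_type_a req.payload(2,1) -m bin 01
    use_backend be_type_a if frame_type_a
    default_backend be_default

Note this routes per-connection, not per-frame; HAProxy in TCP mode cannot demultiplex individual frames from one connection to different backends.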

- Chad
>
> A load balancer would read frames from the socket, parse the frame,
> select the correct backend from a header and route the request through
> an available backend peer. The response would be matched to the
> incoming socket based on the frame id.
>
> I expect (a lot of) c code will need to be written to support such a
> custom protocol. I'm looking for specific pointers about how to add
> such a protocol to the existing codebase in a way that would fit
> cleanly, and preferably in a modular fashion.
>
> Any hints appreciated
>
>
> ~Matt
>
>




Re: Does haproxy use regex for balance url_param lookup?

2016-06-26 Thread Chad Lavoie
Greetings,

On 6/26/16 7:40 AM, k simon wrote:
> Hi, lists,
>I noticed that haproxy 1.6.5 hogs the cpu periodically on FreeBSD 10 
> with 800K-1M syscalls. I changed the balance algo to "uri" and deleted 
> all the regular expressions to work around it. There may be some bug 
> with PCRE on FreeBSD or some bug in haproxy, but I can't confirm it.
>And does haproxy support wildcard in acl string match ?
Depending on exactly how you need to match the string there are some
match methods that work like wildcards:
https://cbonte.github.io/haproxy-dconv/configuration-1.6.html#7.1.3

That allows for exact/substring/prefix/suffix/subdir/domain matches
without using PCRE.
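For instance (the patterns are illustrative):

acl is_static   path      -m beg /static/
acl is_image    path      -m end .jpg
acl on_example  hdr(host) -m dom example.com
acl has_token   url       -m sub token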

- Chad
>  I can rewrite 
> my acls to avoid the pcre lib totally.
>
>
> Simon
> 20160626





Re: Proposal: auto-reload of ACL files

2016-04-29 Thread Chad Lavoie

Greetings,

On 04/29/2016 11:37 AM, Philipp Buehler wrote:

Am 29.04.2016 17:27 schrieb Chad Lavoie:

HAProxy sockets support "add acl  " to add an ACL entry
or "add map" to add to a map.  Can be used with "clear acl"/"clear
map" to empty the table first to refresh them completely.

See
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-add%20acl 


for details.

If how to use that isn't clear I can provide an example.


Oh, almost there, would love to see an example.
If you have an acl configuration such as "acl admin_ips src -f 
/usr/local/haproxy/admin_ips.acl" in your haproxy configuration, you can 
update it using the following:
# echo "clear acl /usr/local/haproxy/admin_ips.acl" | socat stdio 
/var/run/haproxy.sock
# cat /usr/local/haproxy/admin_ips.acl | sed "s|^|add acl 
/usr/local/haproxy/admin_ips.acl |" | socat stdio /var/run/haproxy.sock


The first line there clears the existing ACL entries, and the second 
line adds the ACL entries from the file.  Depending on the use-case the 
ACL could be updated instead of clearing it and refilling it.


Requires having "stats socket /var/run/haproxy.sock mode 0600 level 
admin" in your haproxy configs "global" section.


- Chad


(damn, 1.6.html has no chapter-9 anymore)

Yet, looks like this would render my proposal to void (always good!).

ciao





Re: Proposal: auto-reload of ACL files

2016-04-29 Thread Chad Lavoie

Greetings,

On 04/29/2016 11:16 AM, Philipp Buehler wrote:

Hi,

I quite like not to reload haproxy every here and there (stats and 
races..) and make

quite some use of 'acl foo .. -f aclfile'.

Now feature-creep mounts, and the aclfile shall be built/extended "on 
demand" (think of something along the lines of fail2ban).
Besides losing stats, that can grow into a problem if multiple events 
within very short times start to reload haproxy.


Following problems with rereading aclfile automatically by haproxy 
come to mind:

 - doing it for every request: disk IO killer
 - doing it at fixed intervals: might not suit "every" use-case 
(and if many aclfiles are around, disk IO again)

 - passing an option per acl line likely be a parser hell
 - more exotic foobars

My modest proposal would go like that - for starters :) :
 - (global) option 'timeout aclfiles 300': will reload "special" 
aclfile every 300s

 - aclfile introduced by -F (instead -f) will flag it as "special"
which leaves a somewhat race when the special file is written while 
the reload happens.


Maybe better: as above, but plus:
 - a flagfile like aclfile.RELOAD has to be present at the 300s mark

Or, in a totally different approach, do what OpenBSD's pf(4) can do, 
have a "table" that can be

manipulated via admin-socket.
HAProxy sockets support "add acl  " to add an ACL entry or 
"add map" to add to a map.  Can be used with "clear acl"/"clear map" to 
empty the table first to refresh them completely.


See 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2-add%20acl 
for details.


If how to use that isn't clear I can provide an example.

- Chad


Thoughts about it?






Re: Help! HAProxy randomly failing health checks!

2016-03-15 Thread Chad Lavoie

Greetings,

On 03/15/2016 02:54 PM, Zachary Punches wrote:


Hello!

My name is Zack, and I have been in the middle of an on going HAProxy 
issue that has me scratching my head.


Here is the setup:

Our setup is hosted by amazon, and our HAProxy (1.6.3) boxes are 
deployed across 3 regions, with 2 HAProxy boxes per region for a 
total of 6 proxy boxes.


These boxes are routed information through route 53. Their entire job 
is to forward data from one of our clients to our database backend. It 
handles this absolutely fine, except between the hours of 7pm PST and 
7am PST. During these hours, our route53 health checks time out thus 
causing the traffic to switch to the other HAProxy box inside of the 
same region.


During the other 12 hours of the day, we receive 0 alerts from our 
health checks.


I have noticed that we get a series of SSL handshake failures (though 
this happens throughout the entire day) that causes the server to hang 
for a second, thus causing the health checks to fail. During the day 
our SSL failures do not cause the server to hang long enough to go 
fail the checks, they only fail at night. I have attached my HAProxy 
config hoping that you guys have an answer for me. Lemme know if you 
need any more info.


Before thinking about less obvious potential causes: is the CPU of the 
instance close to getting capped out during the time in question?
Also, are the connection counts under 15,000 (otherwise I could see it 
ending up with a timeout and trying again)?


- Chad


I have done a few tcpdump captures during the SSL handshake failures 
(not at night during it failing, but during the day when it still gets 
the SSL handshake failures, but doesn’t fail the health check) and it 
seems there is a d/c and a reconnect during the handshake.


Here is my config, I will be running a tcpdump tonight to capture the 
packets during the failure and will attach it if you guys need more info.


#-

# Example configuration for a possible web application.  See the

# full configuration options online.

#

# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

#

#-

#-

# Global settings

#-

global

log 127.0.0.1 local2

pidfile /var/run/haproxy.pid

maxconn 3

user    haproxy

group   haproxy

daemon

ssl-default-bind-options no-sslv3 no-tls-tickets

tune.ssl.default-dh-param 2048

 # turn on stats unix socket

#stats socket /var/lib/haproxy/stats

#-

# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#-

defaults

mode    http

log global

option  httplog

retries 3

timeout http-request5s

timeout queue   1m

timeout connect 31s

timeout client  31s

timeout server  31s

maxconn 15000

# Stats

stats  enable

stats uri  /haproxy?stats

stats realm Strictly\ Private

stats auth  $StatsUser:$StatsPass

#-

# main frontend which proxys to the backends

#-

frontend shared_incoming

maxconn 15000

timeout http-request 5s

#Bind ports of incoming traffic

bind *:1025 accept-proxy # http

bind *:1026 accept-proxy ssl crt /path/to/default/ssl/cert.pem ssl 
crt /path/to/cert/folder/ # https


bind *:1027 # Health checking port

acl gs_texthtml url_reg \/gstext\.html    ## allow gs to do meta tag verification


acl gs_user_agent hdr_sub(User-Agent) -i globalsign    ## allow gs to do meta tag verification


#  Add headers

http-request set-header $Proxy-Header-Ip %[src]

http-request set-header $Proxy-Header-Proto http if !{ ssl_fc }

http-request set-header $Proxy-Header-Proto https if { ssl_fc }

# Route traffic based on domain

use_backend gs_verify if gs_texthtml or gs_user_agent    ## allow gs meta tag verification


use_backend 
%[req.hdr(host),lower,map_dom(/path/to/map/file.map,unknown_domain)]


# Drop unrecognized traffic

default_backend unknown_domain

#-

# Backends

#-

backend server0  ## added to allow gs ssl meta tag verification

reqrep ^GET\ /.*\ (HTTP/.*)    GET\ /GlobalSignVerification\ \1

server server0_http 

Re: peers and stick-table stats

2016-03-15 Thread Chad Lavoie

Greetings Pavlo,

On 03/15/2016 05:23 AM, Pavlo Zhuk wrote:

Hi,


Is there any good way to monitor stick-table utilization?
The first line of a "show table" socket command has the "size" field 
(showing the size as set in the config) and the "used" field (showing 
how many entries are currently used).  For example running 'echo "show 
table <table>" | socat stdio /var/run/haproxy.sock | head -n1' will print:

# table: <table>, type: <type>, size:1048576, used:12331
I'm also searching for any nice stats on peer replication (connectivity, 
failures etc).
I'm not aware of statistics for peer replication which can be extracted 
from HAProxy during runtime.  Someone else might be able to fill in that 
point.


- Chad


It doesn't seems that stats endpoint return any of this info.
Appreciate your feedback.

thnx

--
BR,
Pavlo Zhuk





Re: Asking for help: how to expire haproxy's stick table entry only after the closing of all sessions which used it

2016-03-15 Thread Chad Lavoie

Greetings Hugo,

On 03/15/2016 09:25 AM, Hugo Maia wrote:


Hi, my name is Hugo.

I'm currently using Haproxy 1.5, I have a backend with 2 servers. My 
app servers receive connections from two clients and I want both of 
them to be attributed to the same server. All connections have a url 
parameter X and sessions that should be attributed to the same server 
have the same url parameter X value. I use a stick table to save the 
server that a particular url parameter value uses so that future 
connections can be attributed to the same server.


I want to be able to add app servers as load increases. In order to 
instruct haproxy to move previous connections to the new app server I 
need to expire stick table entries when no session (of either client) 
is active in the server.


Table values can be changed by sending "set table <table> key <key> 
data.<type> <value>" to the HAProxy socket ('socat stdio 
/var/run/haproxy.sock').  "table" is the name of the section the table 
is in, "key" is the key in the table, "type" is the datatype set in the 
type argument to stick table (integer/ip/string/etc), and "value" is 
what you want the value to be set to.
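As a sketch of the setup being described (the url_param name X comes from the original question; the backend name, table sizes, server names and addresses are only illustrative):

```haproxy
backend be_app
    # Key the table on the value of url_param X and remember which
    # server was picked, so both clients land on the same server.
    stick-table type string len 32 size 100k expire 30m store server_id
    stick on url_param(X)
    server app1 192.0.2.11:8080 check
    server app2 192.0.2.12:8080 check
```

With that in place, a script could repoint an entry with something like echo "set table be_app key somevalue data.server_id 2" | socat stdio /var/run/haproxy.sock, where the numeric server_id values can be read back from "show table be_app".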


I don't think there is a good way to delete/expire stick table entries 
via the socket, however (it's entirely possible there is and I just 
overlooked it once I latched onto changing values).


So if I'm reading your problem correctly, you should be able to have a 
script change the value you wanted to expire so it points at the new 
server, instead of removing it.


- Chad


Can you help me with this?

Thanks in advance for any kind of help.

Best Regards,

Hugo Maia






Re: 'show table' is unreliable?

2016-03-11 Thread Chad Lavoie

Greetings,

Ah, is the stats socket also bound to one process?  For example "stats 
socket /var/run/haproxy.sock mode 0600 level admin process 4" to bind it 
to process 4.


Otherwise the process you're querying for the stats will bounce around, 
even if the process with the table doesn't.


- Chad

On 03/11/2016 05:29 PM, Robert Samuel Newson wrote:

ah, yes, nbproc of 2 here, but I should be clear. The stick tables are in a 
proxy pinned to one single process, the other is used to handle TLS decoding.


On 11 Mar 2016, at 18:27, Chad Lavoie <clav...@haproxy.com> wrote:

Greetings,

That should have been "Do you have nbproc set and more than 1?", sorry.

- Chad

On 03/11/2016 01:17 PM, Chad Lavoie wrote:

Greetings,

Do you have nbproc set or more than 1?

If so, then each process has its own copy of the stick table; and depending on 
which process handles the request the values will differ.

Individual frontends can be set to a specific thread with bind-process (or for 
SSL a frontend specifically for SSL termination can be made).  If that is the 
issue you're seeing and you want more examples in that direction let me know what 
your use-case looks like and I'll go into more details there.

- Chad

On 03/11/2016 12:28 PM, Robert Samuel Newson wrote:

Hi,

I'm using haproxy 1.6.3 and think I've uncovered an issue.

I use the stick table feature and as you can see from below, items appear and 
disappear randomly, these samples were taken less than a second apart. 
Obviously the items in the middle have at least 56 seconds remaining before 
expiration, so should have been in all three samples. They reappear if I keep 
sampling, in seemingly random subsets.

I can't easily tell if this is just a display issue (i.e. 'show table' has the bug) 
or whether the table behaves as if it's empty when 'show table' shows it empty.

Any advice?


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:3
0x3c1d9ec: key=user1 use=0 exp=56035 gpc0_rate(1000)=0
0x3c0ff0c: key=user2 use=0 exp=58786 gpc0_rate(1000)=0
0x3c41b2c: key=user3 use=0 exp=59737 gpc0_rate(1000)=0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0











Re: 'show table' is unreliable?

2016-03-11 Thread Chad Lavoie

Greetings,

That should have been "Do you have nbproc set and more than 1?", sorry.

- Chad

On 03/11/2016 01:17 PM, Chad Lavoie wrote:

Greetings,

Do you have nbproc set or more than 1?

If so, then each process has its own copy of the stick table; and 
depending on which process handles the request the values will differ.


Individual frontends can be set to a specific thread with bind-process 
(or for SSL a frontend specifically for SSL termination can be made).  
If that is the issue you're seeing and you want more examples in that 
direction let me know what your use-case looks like and I'll go into 
more details there.


- Chad

On 03/11/2016 12:28 PM, Robert Samuel Newson wrote:

Hi,

I'm using haproxy 1.6.3 and think I've uncovered an issue.

I use the stick table feature and as you can see from below, items 
appear and disappear randomly, these samples were taken less than a 
second apart. Obviously the items in the middle have at least 56 
seconds remaining before expiration, so should have been in all three 
samples. They reappear if I keep sampling, in seemingly random subsets.


I can't easily tell if this is just a display issue (i.e. 'show table' 
has the bug) or whether the table behaves as if it's empty when 'show 
table' shows it empty.


Any advice?


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:3
0x3c1d9ec: key=user1 use=0 exp=56035 gpc0_rate(1000)=0
0x3c0ff0c: key=user2 use=0 exp=58786 gpc0_rate(1000)=0
0x3c41b2c: key=user3 use=0 exp=59737 gpc0_rate(1000)=0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0










Re: 'show table' is unreliable?

2016-03-11 Thread Chad Lavoie

Greetings,

Do you have nbproc set or more than 1?

If so, then each process has its own copy of the stick table; and depending on 
which process handles the request the values will differ.


Individual frontends can be set to a specific thread with bind-process 
(or for SSL a frontend specifically for SSL termination can be made).  
If that is the issue you're seeing and you want more examples in that 
direction let me know what your use-case looks like and I'll go into 
more details there.
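A minimal sketch of that layout (the section names, paths, addresses and the process split are illustrative):

```haproxy
global
    nbproc 2
    # Pin the admin socket to the process that owns the stick table,
    # so "show table" always queries the same copy.
    stats socket /var/run/haproxy.sock mode 600 level admin process 1

frontend fe_ssl
    # SSL termination runs on process 2 only.
    bind-process 2
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend be_loop

backend be_loop
    bind-process 2
    server local 127.0.0.1:8080

frontend fe_app
    # All stick-table work happens on process 1.
    bind-process 1
    bind 127.0.0.1:8080
    default_backend be_app

backend be_app
    bind-process 1
    stick-table type ip size 100k expire 30m store http_req_rate(10s)
    tcp-request content track-sc0 src
    server app1 192.0.2.10:8080 check
```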


- Chad

On 03/11/2016 12:28 PM, Robert Samuel Newson wrote:

Hi,

I'm using haproxy 1.6.3 and think I've uncovered an issue.

I use the stick table feature and as you can see from below, items appear and 
disappear randomly, these samples were taken less than a second apart. 
Obviously the items in the middle have at least 56 seconds remaining before 
expiration, so should have been in all three samples. They reappear if I keep 
sampling, in seemingly random subsets.

I can't easily tell if this is just a display issue (i.e. 'show table' has the bug) 
or whether the table behaves as if it's empty when 'show table' shows it empty.

Any advice?


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:3
0x3c1d9ec: key=user1 use=0 exp=56035 gpc0_rate(1000)=0
0x3c0ff0c: key=user2 use=0 exp=58786 gpc0_rate(1000)=0
0x3c41b2c: key=user3 use=0 exp=59737 gpc0_rate(1000)=0


echo "show table lookup" | socat /var/haproxy.sock -

# table: lookup, type: string, size:51200, used:0







Re: Slowness on deployment

2016-03-10 Thread Chad Lavoie

Greetings,

Error in my last e-mail: I used the word client instead of server; 
fixed inline.


On 03/10/2016 02:34 PM, Chad Lavoie wrote:

Greetings,

Having paged through the logs, I see a lot that seem to have the first 
four numbers fairly small (indicating that the request to the response 
headers finished before times started getting extreme) (Tq, Tw, Tc, 
Tr), but which have an overall time (Tt) in the realm of five minutes.


This would indicate that the backend is getting the request from the 
client (Tq), gets through the queues (Tw), a TCP connection to the 
backend is established (Tc), and it sends the response headers (Tr) in 
a few hundred ms to a couple of seconds; but then most of the time is 
spent with the client sending the body.
Most of the time is spent with the server sending the body, not the 
client data (as per the timings the server already sent the response 
headers, so the client is mostly out of the picture).


- Chad


Before we move on, does that sound reasonable as a potential issue 
location?  If not, I can try running some math on the columns to get a 
better idea (I just looked at a random sampling of slow requests to 
compare to what I've seen as the baseline).


Another thing which is interesting here are the termination states (I 
usually look at them as they give an idea for why connections are 
failing; definitions are at 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5):

  7 CHVN
  9 SDVN
 10 cDVN
 12 LR--
 33 CDVN
 50 SHDN
 92 --NI
113 sHVN
186 SHVN
   2115 --DI
  13896 --VN

The first two chars show the state at termination, and the second two 
talk about the persistence cookie (useful for seeing if first time 
clients are failing, etc).


The ones starting with -- indicate they were successful, so ignoring 
them here.  Other than that we have a bunch starting with SH, 
indicating that the TCP connection to the backend either failed or was 
aborted, and sH indicating that the backend connection attempt timed out.
The numbers are fairly small there in terms of failures vs successes, 
so I'd say that isn't likely to be the primary issue (unless we get to 
talking about individual connections).


If that's the case, the next step would be to figure out why the body 
data takes so long; which is outside of what HAProxy can cleanly help 
with.  Do the backends have logs which would indicate what they are 
doing?  If not, the next thing I'd try would be making a file with 
TCPDump to view in Wireshark to see what is going on between haproxy 
and the backends (how to do that is outside the scope of what makes 
any sense to describe here, though).


- Chad

On 03/10/2016 08:06 AM, matt wrote:

I have the log, but a lot of the data is confidential.
Can I send it to you by email so you can take a look?

We can post an edited version later in order to help others
debug the same issue

Thanks in advance










Re: Slowness on deployment

2016-03-10 Thread Chad Lavoie

Greetings,

Having paged through the logs, I see a lot that seem to have the first 
four numbers fairly small (indicating that the request to the response 
headers finished before times started getting extreme) (Tq, Tw, Tc, Tr), 
but which have an overall time (Tt) in the realm of five minutes.


This would indicate that the backend is getting the request from the 
client (Tq), gets through the queues (Tw), a TCP connection to the 
backend is established (Tc), and it sends the response headers (Tr) in a 
few hundred ms to a couple of seconds; but then most of the time is 
spent with the client sending the body.


Before we move on, does that sound reasonable as a potential issue 
location?  If not, I can try running some math on the columns to get a 
better idea (I just looked at a random sampling of slow requests to 
compare to what I've seen as the baseline).


Another thing which is interesting here are the termination states (I 
usually look at them as they give an idea for why connections are 
failing; definitions are at 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5):

  7 CHVN
  9 SDVN
 10 cDVN
 12 LR--
 33 CDVN
 50 SHDN
 92 --NI
113 sHVN
186 SHVN
   2115 --DI
  13896 --VN

The first two chars show the state at termination, and the second two 
talk about the persistence cookie (useful for seeing if first time 
clients are failing, etc).


The ones starting with -- indicate they were successful, so ignoring 
them here.  Other than that we have a bunch starting with SH, indicating 
that the TCP connection to the backend either failed or was aborted, and 
sH indicating that the backend connection attempt timed out.
The numbers are fairly small there in terms of failures vs successes, so 
I'd say that isn't likely to be the primary issue (unless we get to 
talking about individual connections).


If that's the case, the next step would be to figure out why the body 
data takes so long; which is outside of what HAProxy can cleanly help 
with.  Do the backends have logs which would indicate what they are 
doing?  If not, the next thing I'd try would be making a file with 
TCPDump to view in Wireshark to see what is going on between haproxy and 
the backends (how to do that is outside the scope of what makes any 
sense to describe here, though).


- Chad

On 03/10/2016 08:06 AM, matt wrote:

I have the log, but a lot of the data is confidential.
Can I send it to you by email so you can take a look?

We can post an edited version later in order to help others
debug the same issue

Thanks in advance







Re: Slowness on deployment

2016-03-09 Thread Chad Lavoie

Greetings,

On 03/09/2016 04:28 PM, matt wrote:

Yes. Regarding the different times, I've made some edits in order to 
avoid exposing information about our endpoints/IP addresses, but they 
are normal times.
Okay, just wanted to ensure that you expected a wide variety of times, 
as seeing them for the first time in the logs when looking for another 
issue can confuse debugging (or could indicate another issue to be 
tracked down).


Besides that, sounds great. I'll collect some data tonight (I'm trying 
not to do this now since our traffic is really high).

I'm thinking about requests being queued due to the maxconn parameter 
(I have a global maxconn of 4000, and a default of 3000). Could this be 
the case? I'll take a look at the HAProxy stats too to see if any of 
the limits is reached when the app is being deployed.
As HAProxy won't accept a connection if the global maxconn is reached 
(until a slot opens up), the timings wouldn't show anything interesting 
in that case (though with the logs looking normal and things still being 
slow that would be the next item to be examined).
If a backend's servers are all at maxconn then the request will be 
queued, and the time spent there will show up in the second timing 
column (Tw).

In general I'd advise keeping the global maxconn high enough so that all 
the backend connection slots can get filled (as that way the logs will 
make it clear where the issue is).  The global maxconn should be low 
enough so that the system can't run out of resources, but otherwise I'd 
advise using the backends to limit connections (and that also allows 
returning 5xx errors instead of timeouts, since timeouts can confuse 
the diagnosis when they are the first thing seen).
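As a sketch of that sizing advice (all numbers, names and addresses here are illustrative):

```haproxy
global
    # High enough that the per-server limits below, not the listener,
    # are what actually throttles traffic.
    maxconn 20000

defaults
    mode http
    timeout queue 30s    # requests queued longer than this get a 503, not a silent timeout

backend be_app
    # Excess connections wait in the backend queue; queue time shows
    # up in the Tw field of the log timings.
    server app1 192.0.2.21:80 maxconn 500 check
    server app2 192.0.2.22:80 maxconn 500 check
```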


- Chad


I'll let you know about this data recollection.
Thanks again for the help, is being super productive for me







Re: Slowness on deployment

2016-03-09 Thread Chad Lavoie

Greetings,

In general I just eyeball the numbers, in most cases that gives a good 
idea for what is happening.  Sometimes I pipe a specific column through 
a script to get mean and std deviation, but generally I don't need to go 
that far.


Looking through the numbers I see some GET /'s taking 23 seconds while 
others take 200ms; so looks like there is normally quite a lot of 
variation.  Is that pretty much what you expected, or does such a result 
seem out of place?  Other than that, and a few clients taking a long time 
to send a complete request, I don't see anything that sticks out as unusual.


This gives a baseline for what to look for in the numbers showing the 
issues, which is good to have.


- Chad

On 03/09/2016 01:41 PM, matt wrote:

Chad! Thanks a lot for your response.

I've updated the configuration, and I'm now
logging all the requests.
Is there any tool to process this kind of data?

This is a normal capture:
https://gist.github.com/matiasdecarli/cd138d47a756d7b3d24e

I'm going to fire a deployment in order to look at the
specific timeframe of the slowness.

Can you see anything strange in this logs?









Re: Slowness on deployment

2016-03-09 Thread Chad Lavoie

Greetings,

The first place I'd start looking would be the timings in the HAProxy 
logs to see what part of the process is being slow.


In the logs (if http mode) the default format has five timing values in 
the column after the backend_name/server_name component which will say 
what part of the request is taking too long.


Scroll down on 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.2.3 a 
bit for the definitions of the various values.


If you're in TCP log mode for the connections in question you will only 
have three of said timings instead of five, but it's still the place to 
start I think.


If you paste some examples here I might be able to spot something in the 
timing, or help with figuring out what the values mean if it's not clear.


- Chad

On 03/09/2016 08:26 AM, matt wrote:

Hi guys. I've been using HAProxy for two years now, and I really love the product.
Simple, quick and really well documented.

Lately I've been having an issue that keeps me awake at night,
and maybe you could help me solve it.

I have 4 VM's behind 2 HA proxies. On every VM I have a Docker
container serving on port 80.
So far is running great, but lately im having issues on deployments.

The deployment scenario is like this:
I go through every VM (one at a time),
remove the VM from both LB's with socat, stop the container
and then create a new container.

The thing is, just when I delete the container (not when I
remove it from the LB), the response time of the OTHER VMs
starts increasing, which causes my deploys to
have a peak in response time.

The way I test the response times is an app that
keeps pinging the service from the outside and checks the response
payload to see which server it is, which leads me to two ideas:

1) The apps on the other VMs get overloaded
with the traffic (which I don't believe is the case, because
I've tried using 1 more VM and the issue remains
the same)

2) HAProxy is rerouting some requests in a way that causes slowness

Does this sound familiar to any of you?
How can I debug this kind of event?

Thanks in advance









Re: SSL Cipher stats

2016-03-08 Thread Chad Lavoie

Greetings,

On 03/08/2016 11:20 AM, Jeff Palmer wrote:

I too would be interested in this.

extra points if the info could be gathered for individual backends or frontends.
I didn't explicitly mention it, but my example config tracks by frontend 
id in the stick table (the id was 7 in my example).  If in "tcp-request 
content track-sc0 fe_id() table sslv3-count if { ssl_fc }" fe_id is 
changed to be_id then it will track based on the backend instead.


To translate the ids to names, look at the iid field of "show stat" 
(sent to the socket the same way as "show table") to identify the one 
in question.


Also, I neglected to mention that if you have nbproc >1 it won't add up 
the values across processes, so if it's important to count all of the 
requests, adding them up via a shell script should be able to do that.


- Chad




On Tue, Mar 8, 2016 at 11:18 AM, Stefan Johansson
 wrote:

Hi,



is it possible somehow to extract statistics on the ciphers used (total SSLv3,
total RC4 etc.) without necessarily turning on connection logging and
extracting the data from there?



Thank you.



Regards,

Stefan








Re: SSL Cipher stats

2016-03-08 Thread Chad Lavoie

Greetings,

To do it without logging, the only other ways I can think of to get it 
out of HAProxy will either be headers to the backends for logging there, 
or doing it via stick tables (or sending the stick table stats via a 
header to the backend for logging).
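For the header option, a minimal sketch (the header names, section names and paths are arbitrary):

```haproxy
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/site.pem
    # Pass the negotiated protocol and cipher to the backend, whose
    # access logs can then be aggregated per protocol/cipher.
    http-request set-header X-SSL-Protocol %[ssl_fc_protocol] if { ssl_fc }
    http-request set-header X-SSL-Cipher   %[ssl_fc_cipher]   if { ssl_fc }
    default_backend be_app

backend be_app
    server app1 192.0.2.10:8080 check
```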


To cover the stick table option as it sounds most like what you seek, 
try the following to check for sslv3:


backend sslv3-count
    # Dummy backend, used only to hold the stick table.
    stick-table type integer size 10 expire 24h store http_req_rate(24h),gpc0,gpc0_rate(24h)

frontend fe_main
    # (these rules belong in the frontend that carries the traffic)
    acl sslv3 ssl_fc_protocol SSLv3
    tcp-request inspect-delay 10s
    tcp-request content track-sc0 fe_id() table sslv3-count if { ssl_fc }
    http-request allow if sslv3 { sc_inc_gpc0(0) }

Then to look at the values:
user@server$ echo "show table sslv3-count" | socat stdio 
/var/run/haproxy.sock

# table: sslv3-count, type: integer, size:10, used:1
0x273e69c: key=7 use=0 exp=86398154 gpc0=0 gpc0_rate(8640)=0 
http_req_rate(8640)=2


In this case there have been two requests using SSL in the last 24 
hours, none of which have used SSLv3.


I've not really tested this, more just wrote up a quick configuration 
for the concept, so if it doesn't work let me know and I can use openssl 
to actually try an sslv3 configuration.


Various other SSL values can be tracked by using another sc counter 
(sc1, sc2) and adding another backend for its table; the SSL-related 
sample fetches can be found at 
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.4. If 
you're looking for something that you can't see a way to craft let me 
know and I can provide more details.
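For example, a per-cipher request counter could look roughly like this (an untested sketch in the same spirit as the sslv3 one; the section names, paths and table sizes are illustrative):

```haproxy
backend cipher-count
    # One row per cipher name, counting requests over the last 24h.
    stick-table type string len 64 size 1k expire 24h store http_req_rate(24h)

frontend fe_https
    bind *:443 ssl crt /etc/haproxy/site.pem
    tcp-request inspect-delay 10s
    # Key the table on the negotiated cipher name itself.
    tcp-request content track-sc1 ssl_fc_cipher table cipher-count if { ssl_fc }
    default_backend be_app

backend be_app
    server app1 192.0.2.10:8080 check
```

"show table cipher-count" on the admin socket then lists one key per cipher with its request rate.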


As a side note SSLv3 isn't really considered secure any longer, but 
using the above to keep track of its use is a good step towards 
removing/restricting it.


- Chad

On 03/08/2016 11:18 AM, Stefan Johansson wrote:


Hi,

is it possible somehow to extract statistics on the ciphers used (total 
SSLv3, total RC4 etc.) without necessarily turning on connection 
logging and extracting the data from there?


Thank you.

Regards,

Stefan





Re: acl for denying requests originating from Cloudflare protected servers while behind Cloudflare myself ...

2016-02-15 Thread Chad Lavoie

Greetings,

On 02/14/2016 08:39 PM, Woody Woodpecker wrote:

Hello,

I am struggling to get an acl working to reject traffic originating 
from servers protected by the Cloudflare network, while my servers are 
behind Cloudflare too …


So I allow only traffic from the Cloudflare network to HAProxy, since 
my server is behind Cloudflare too.


This is getting me a bit muddled … comparing the  CF-Connecting-IP 
and X-Forwarded-For headers is making a royal mess.


I am able to block other proxy traffic, but how do I distinguish 
between "clean" proxied traffic via Cloudflare and "unwanted" 
server-generated traffic from Cloudflare?


Would any of you be able to point me in the right direction please?


If you have SSL on the origin I'd advise looking at enabling the 
"Authenticated Origin Pulls" feature of Cloudflare (under the 'Crypto' 
tab), which will then have Cloudflare send a client certificate to the 
origin (for information on verifying client certificates, see 
http://blog.haproxy.com/2013/06/13/ssl-client-certificate-information-in-http-headers-and-logs/).
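On the HAProxy side that verification is essentially a one-line change to the bind (a sketch; the certificate paths and section names are placeholders, and the origin-pull CA certificate must first be downloaded from Cloudflare):

```haproxy
frontend fe_https
    # "verify required" aborts any handshake that does not present a
    # client certificate signed by the origin-pull CA, so only
    # Cloudflare's edge can reach this listener.
    bind *:443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/origin-pull-ca.pem verify required
    default_backend be_app

backend be_app
    server app1 192.0.2.10:8080 check
```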


One could restrict by IP with the headers; however, if it needs to be 
locked down to that level, a half-way measure likely isn't worth it.


- Chad