Re: Rate limiting options using HAProxy

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 05:12 PM, Chad Lavoie wrote:

Greetings,


On 08/30/2016 12:30 PM, Sam kumar wrote:

Hello Sir,

I am trying to implement rate limiting using HAProxy for my HTTP
RESTful services.


My requirement is to implement the below two scenarios:

1. URL based: every API URL will have a different throttle limit


To have limits that differ for different URLs I'd use a list of ACLs
that look like the following:

http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }


I didn't directly mention it, but if you use the same stick table and
authorization token the limits will be additive (so 10 requests to one
URL and 5 to another means a combined count of 15 is checked against each
limit).


If you don't want this and don't have an excessive number of unique URLs,
I'd advise making a separate stick table for each.
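
As a rough sketch of that per-table approach (the table names, paths,
X-Authorization header and limits below are made up for illustration):

backend st_call1
    stick-table type string len 32 size 1024 store http_req_rate(10s)

backend st_call2
    stick-table type string len 32 size 1024 store http_req_rate(10s)

frontend api
    bind *:80
    default_backend api_servers
    # each path gets its own counter, so the limits are no longer additive
    http-request track-sc0 hdr(X-Authorization) table st_call1 if { path /api/call1 }
    http-request track-sc1 hdr(X-Authorization) table st_call2 if { path /api/call2 }
    http-request deny if { path /api/call1 } { sc_http_req_rate(0) gt 10 }
    http-request deny if { path /api/call2 } { sc_http_req_rate(1) gt 20 }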


If you do have an excessive number of them, you may be better off tracking
by src+URL with the base32+src fetch instead, or writing a converter in Lua
to combine the API path and the token.


- Chad


In addition to path you can use path_beg to match against the
beginning of the path; you can also use url_param
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-url_param)
and other fetch methods, depending on your requirements.
2. Authorization header: every client has a unique authorization token,
so using this I can have a throttle limit for each client.


For this you will want a stick table which stores a string 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-stick-table):

backend track_api_token
stick-table type string len 32 size 1024 store http_req_rate(10s)

Then in your frontend:
http-request track-sc0 hdr(X-Authorization) table track_api_token

From there you can limit using the above rules.

Thanks,
- Chad


I was trying to get help from various other blogs but could not find 
much on this.


Please provide some examples or sample code for the same so that I 
can achieve this functionality


Thanks
Sam







[ANNOUNCE] haproxy-1.6.9

2016-08-30 Thread Willy Tarreau
Subject: [ANNOUNCE] haproxy-1.6.9
To: haproxy@formilux.org

Hi,

HAProxy 1.6.9 was released on 2016/08/30. It added 4 new commits after
version 1.6.8. The main fix is for a regression introduced in 1.6.8
while fixing the dead connections. Special thanks to Bartosz Koninski
for reporting it early and providing a reproducer for this bug. The
issue can cause a connection retry to be performed on a random address
and is more likely to try to connect to a server that just released a
connection during the retry. So it's important to upgrade to 1.6.9
especially when your servers go up and down often.

#
Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Sources  : http://www.haproxy.org/download/1.6/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.6.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.6.git
   Changelog: http://www.haproxy.org/download/1.6/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :

Chad Lavoie (1):
  MINOR: cli: allow the semi-colon to be escaped on the CLI

Willy Tarreau (3):
  BUG/MAJOR: stream: properly mark the server address as unset on connect 
retry
  BUG/MINOR: payload: fix SSLv2 version parser
  [RELEASE] Released version 1.6.9

ben51degrees (1):
  DOC: Updated 51Degrees readme.
---





Re: Rate limiting options using HAProxy

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 12:30 PM, Sam kumar wrote:

Hello Sir,

I am trying to implement rate limiting using HAProxy for my HTTP
RESTful services.


My requirement is to implement the below two scenarios:

1. URL based: every API URL will have a different throttle limit


To have limits that differ for different URLs I'd use a list of ACLs
that look like the following:

http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }

In addition to path you can use path_beg to match against the
beginning of the path; you can also use url_param
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.6-url_param)
and other fetch methods, depending on your requirements.
2. Authorization header: every client has a unique authorization token,
so using this I can have a throttle limit for each client.


For this you will want a stick table which stores a string 
(https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-stick-table):

backend track_api_token
stick-table type string len 32 size 1024 store http_req_rate(10s)

Then in your frontend:
http-request track-sc0 hdr(X-Authorization) table track_api_token

From there you can limit using the above rules.
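
Put together, a minimal sketch might look like this (the X-Authorization
header name, the example paths, the limits and the server address are
assumptions to adapt to your setup):

backend track_api_token
    stick-table type string len 32 size 1024 store http_req_rate(10s)

frontend api
    bind *:80
    default_backend api_servers
    # count every request against the client's authorization token
    http-request track-sc0 hdr(X-Authorization) table track_api_token
    # per-URL limits, all measured against the same shared counter
    http-request deny if { sc_http_req_rate(0) gt 10 } { path /api/call1 }
    http-request deny if { sc_http_req_rate(0) gt 20 } { path /api/call2 }

backend api_servers
    server app1 192.0.2.10:8080 check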

Thanks,
- Chad


I was trying to get help from various other blogs but could not find 
much on this.


Please provide some examples or sample code for the same so that I can 
achieve this functionality


Thanks
Sam





Re: Backend: Multiple A records

2016-08-30 Thread Baptiste
> What would happen if I'd configure X = 2 and the following happens:
>
> 1. Initially only 127.0.0.1 is returned.
>

1 UP server available in the backend


> 2. 127.0.0.2 is added and healthy.
>

2 servers UP available in the backend


> 3. 127.0.0.1 is removed from DNS and thus marked DOWN.
>

then only 1 server in the backend. I think we'll have a specific flag to
report a failure due to DNS


> 4. 127.0.0.3 is added (with 127.0.0.2 still being healthy and 127.0.0.1
> still being DOWN / missing from the DNS response)
>
>
then 2 servers in the backend.

Worst case, set X to 10 and you're good ;)



> You said that once an IP address disappears the backend will be marked as
> DOWN and that there is an upper limit.


Not the backend, the corresponding server will be DOWN because of DNS (a
specific flag should be added)


> Are new IP addresses able to push removed IP addresses off the list, or
> will removed IP addresses stay DOWN and take up a slot until they reappear?
>
>
Yes, this is the purpose.
The algorithm will consider each DNS response atomically when updating the
backend server list.



> If new IPs are able to push away old IPs it sounds like it will meet my
> requirements perfectly. I won't have control over the IP addresses assigned
> in the DNS.
>
>
We may be good then, which is nice :)

Baptiste


Re: Help Needed || haproxy limiting the connection rate per user

2016-08-30 Thread Chad Lavoie

Greetings,


On 08/30/2016 01:10 PM, Samrat Roy wrote:

Thank you sir for your quick reply.

I am now able to return a custom error code from my HAProxy configuration.
However, I am facing one more issue.


With the above approach HAProxy is rejecting each and every call once
the limit has been crossed. It is behaving as a circuit breaker. But my
requirement is throttling: for example, every 10 seconds I should allow
200 requests, and anything more than 200 will be rejected.


There are two ways I can think to interpret your question:
1) You want a tick every 10 seconds which resets the counter to zero.
2) You want requests over the limit (which get blocked) not to count
toward the rate.


For 1 you would need a script to talk to the socket, and I'd not advise 
doing that unless you know what you are doing and why there is no 
cleaner alternative.
For 2 I'd add gpc0,gpc0_rate(10s) to the stick table in place of 
conn_rate, then use something like the following:

http-request allow if { sc_inc_gpc0(0) }

Place that after the use_backend statement.  Then instead of checking
conn_rate, check sc_gpc0_rate(0), per
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.3.3-sc_gpc0_rate.


Because in that case gpc0 will only be incremented if the request
doesn't end up at the custom backend (or get blocked, etc.), that should
fill your needs there.
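
As a rough sketch of the counting part of option 2, using a plain
http-request deny so the ordering is easy to see (the 200/10s limit, table
sizes and names are made-up examples; denied requests get HAProxy's
standard 403 here rather than the custom 429 discussed earlier):

frontend localnodes
    bind *:80
    mode http
    default_backend nodes
    stick-table type ip size 100k expire 30s store gpc0,gpc0_rate(10s)
    tcp-request connection track-sc0 src
    # reject once more than 200 counted requests were seen in the last 10s
    http-request deny if { sc_gpc0_rate(0) gt 200 }
    # only requests that were not denied above reach this rule,
    # so blocked requests do not add to the rate
    http-request allow if { sc_inc_gpc0(0) }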


Thanks,
- Chad


Is there any way I can achieve this? Please help me configure this.

Thanks in advance
Samrat


On Fri, Aug 26, 2016 at 10:16 PM, Chad Lavoie wrote:


Greetings,


On 08/26/2016 09:14 AM, Samrat Roy wrote:

Hello Sir,








I am trying to achieve rate limiting using HAProxy. I am trying
to follow the "Limiting the connection rate per user" approach. I
am able to achieve this with the below configuration. But I am
facing one problem: I am not able to send a custom error code
once the rate limit is reached. For example, if I reach the rate
limit I want to send HTTP error code 429. In this case the proxy
simply rejects the incoming call and users get an HTTP status
code of 0.



"tcp-request connection reject" rejects the connection, so there
is no status code in this case.  If you want to send a 403 replace
it with "http-request deny if ..." instead.

If you want to respond with HTTP 429 make a backend with no
backend servers (so that all requests will get a 503) and set a
custom 503 error page, editing the headers at the top of the file
so that the response code is 429 (or whatever other
code/message/etc you desire).
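
A rough sketch of that 429 setup, building on your config (the backend
name, error-file path and thresholds are illustrative, and the error file
is a complete HTTP response you write yourself):

frontend localnodes
    bind *:80
    mode http
    default_backend nodes
    stick-table type ip size 100k expire 30s store conn_rate(5s)
    tcp-request connection track-sc1 src
    # divert over-limit clients to the serverless backend below
    use_backend too_many_requests if { sc1_conn_rate ge 60 }

backend too_many_requests
    mode http
    # no servers here, so every request would normally get a 503;
    # the errorfile replaces that response with a hand-written 429
    errorfile 503 /etc/haproxy/errors/429.http

where /etc/haproxy/errors/429.http contains something like:

HTTP/1.0 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/plain

Too Many Requests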

- Chad


Please let me know how I can do this.

frontend localnodes

    bind *:80
    mode http
    default_backend nodes

    stick-table type ip size 100k expire 30s store conn_rate(5s)
    tcp-request connection reject if { src_conn_rate ge 60 }
    tcp-request connection track-sc1 src

backend nodes

    cookie MYSRV insert indirect nocache
    server srv1 :80 check cookie srv1 maxconn 500


Thanks
Samrat







Re: Backend: Multiple A records

2016-08-30 Thread Tim Düsterhus

Hi

On 30.08.2016 01:49, Maciej Katafiasz wrote:

Right, I missed the "independent healthchecks" in the original
description, in which case it'd work well enough (albeit a low enough
TTL value is still a concern).



Thanks for the heads up. In my case this is not a concern: The DNS 
server is completely under my control, returns a low TTL and is able to 
update the list of nodes almost instantly after a node goes up or down.


Best regards
Tim Düsterhus



Re: Backend: Multiple A records

2016-08-30 Thread Tim Düsterhus

Hi

On 30.08.2016 09:11, Baptiste wrote:

The way we designed the feature is more like a "server template" line which
may be used to pre-configure X servers in memory sharing the same DNS
resolution.
In your case X=2. If you intend to have up to 10 servers for this service,
simply set X to 10.
HAProxy will use the A records to create the servers, and the health checks
will ensure that the servers are available before sending them traffic.
If an A record disappears from the response, the corresponding server will
go down. If a new record is added and fewer than X servers are provisioned,
a new server is provisioned.
This X "upper" limit is there to ensure compatibility with all HAProxy
features (such as hash-based LB algorithms).

Could you let me know if that meets your requirements?
(we can still change this description).



What would happen if I'd configure X = 2 and the following happens:

1. Initially only 127.0.0.1 is returned.
2. 127.0.0.2 is added and healthy.
3. 127.0.0.1 is removed from DNS and thus marked DOWN.
4. 127.0.0.3 is added (with 127.0.0.2 still being healthy and 127.0.0.1 
still being DOWN / missing from the DNS response)


You said that once an IP address disappears the backend will be marked
as DOWN and that there is an upper limit. Are new IP addresses able to
push removed IP addresses off the list, or will removed IP addresses
stay DOWN and take up a slot until they reappear?


If new IPs are able to push away old IPs it sounds like it will meet my 
requirements perfectly. I won't have control over the IP addresses 
assigned in the DNS.


Thanks for your replies so far! Looking forward to it.

Best regards
Tim Düsterhus



Re: Help Needed || haproxy limiting the connection rate per user

2016-08-30 Thread Samrat Roy
Thank you sir for your quick reply.

I am now able to return a custom error code from my HAProxy configuration.
However, I am facing one more issue.

With the above approach HAProxy is rejecting each and every call once the
limit has been crossed. It is behaving as a circuit breaker. But my
requirement is throttling: for example, every 10 seconds I should allow 200
requests, and anything more than 200 will be rejected.

Is there any way I can achieve this? Please help me configure this.

Thanks in advance
Samrat


On Fri, Aug 26, 2016 at 10:16 PM, Chad Lavoie  wrote:

> Greetings,
>
> On 08/26/2016 09:14 AM, Samrat Roy wrote:
>
> Hello Sir,
>
>
>
>
> I am trying to achieve rate limiting using HAProxy. I am trying to follow
> the "Limiting the connection rate per user" approach. I am able to achieve
> this with the below configuration. But I am facing one problem: I am not
> able to send a custom error code once the rate limit is reached. For
> example, if I reach the rate limit I want to send HTTP error code 429. In
> this case the proxy simply rejects the incoming call and users get an HTTP
> status code of 0.
>
>
> "tcp-request connection reject" rejects the connection, so there is no
> status code in this case.  If you want to send a 403 replace it with
> "http-request deny if ..." instead.
>
> If you want to respond with HTTP 429 make a backend with no backend
> servers (so that all requests will get a 503) and set a custom 503 error
> page, editing the headers at the top of the file so that the response code
> is 429 (or whatever other code/message/etc you desire).
>
> - Chad
>
> Please let me know how I can do this.
>
> frontend localnodes
>
> bind *:80
> mode http
> default_backend nodes
>
> stick-table type ip size 100k expire 30s store conn_rate(5s)
> tcp-request connection reject if { src_conn_rate ge 60 }
> tcp-request connection track-sc1 src
>
> backend nodes
>
> cookie MYSRV insert indirect nocache
> server srv1 :80 check cookie srv1 maxconn 500
>
>
> Thanks
> Samrat
>
>
>


Rate limiting options using HAProxy

2016-08-30 Thread Sam kumar
Hello Sir,

I am trying to implement rate limiting using HAProxy for my HTTP
RESTful services.

My requirement is to implement the below two scenarios:

1. URL based: every API URL will have a different throttle limit
2. Authorization header: every client has a unique authorization token, so
using this I can have a throttle limit for each client.

I was trying to get help from various other blogs but could not find much
on this.

Please provide some examples or sample code so that I can achieve this
functionality.

Thanks
Sam


Re: Need help to configure ha proxy

2016-08-30 Thread Jeff Palmer
This config appears to be a decent start and looks to meet your
requirements for HTTP.

Now you just need another frontend configured for 443; it would match
the :80 frontend, aside from the port, the use of SSL, and a path to the
certificates.
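
Something along these lines might do it, as a rough sketch (the certificate
path is an assumption; the .pem must contain the certificate chain and the
private key, and the ACLs/backends are the ones from your existing :80
frontend):

frontend haproxy_https
    bind *:443 ssl crt /etc/ssl/private/beta.example.com.pem

    acl beta.example hdr(host) -i beta.example.com
    acl api.example hdr(host) -i api-example.com

    use_backend b.example if beta.example
    use_backend z.api if api.example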



On Tue, Aug 30, 2016 at 8:47 AM, Harish Chander wrote:
> Hi,
>
>
> I would be really thankful if you could help me configure HAProxy, or tell
> me whether this is possible or not.
>
>
> External ELB - In the external AWS ELB I have 2 HAProxy servers
>
>
> HA Proxy
>
> connect
>
> haproxy > beta.example.com
>
> beta.example.com > api-example.com
>
>
> The beta.example.com server works on both 80 and 443. If I add an A record
> in DNS pointing directly to the server IP, then everything works.
>
>
> Requirement - beta.example.com should work on both 443 and 80; right now it
> is working for 80 only. Please help me out. You can call me at +918529142143
> any time.
>
>
> Current haproxy conf under below
>
>
>
> haproxy.conf
>
>
> global
>
> log /dev/log local0
>
> log /dev/log local1 notice
>
> chroot /var/lib/haproxy
>
> stats socket /run/haproxy/admin.sock mode 660 level admin
>
> stats timeout 30s
>
> user haproxy
>
> group haproxy
>
> daemon
>
>
> # Default SSL material locations
>
> ca-base /etc/ssl/certs
>
> crt-base /etc/ssl/private
>
>
> # Default ciphers to use on SSL-enabled listening sockets.
>
> # For more information, see ciphers(1SSL). This list is from:
>
> #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
>
> ssl-default-bind-ciphers
> ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
>
> ssl-default-bind-options no-sslv3
>
>
> defaults
>
> log global
>
> mode http
>
> option httplog
>
> option dontlognull
>
> timeout connect 5000
>
> timeout client  5
>
> timeout server  5
>
>
> frontend haproxy
>
>bind *:80
>
>stats uri /stats
>
>stats realm Strictly\ Private
>
>stats auth pass:word
>
>
> # Define hosts
>
> #urls
>
> acl beta.example hdr(host) -i beta.example.com
>
>
>
> acl api.example hdr(host) -i api-example.com
>
>
>
> #cluster
>
> use_backend b.example if beta.example
>
>
> use_backend z.api if api.example
>
>
> #Frontend Server
>
>
> backend b.example
>
> mode http
>
> balance roundrobin
>
> option forwardfor
>
>server server01 10.0.0.1:80 check
>
>
> ##API
>
> backend z.api
>
> mode http
>
> balance roundrobin
>
> option forwardfor
>
> server api01 192.168.1.1:80 check
>
>
>
> Regard's
> Harish Chander
> 8529142143
>
>



-- 
Jeff Palmer
https://PalmerIT.net





Re: Backend: Multiple A records

2016-08-30 Thread Baptiste
On Tue, Aug 30, 2016 at 1:49 AM, Maciej Katafiasz <mkatafi...@purestorage.com> wrote:

> On 29 August 2016 at 16:39, Igor Cicimov wrote:
> > On Tue, Aug 30, 2016 at 6:18 AM, Maciej Katafiasz wrote:
> >> Be aware though that DNS round-robin reduces the availability of the
> >> entire setup, since there are no provisions in the protocol for the
> >> eviction of dead nodes. So unless you're very sure there will never be
> >> any in your DNS and also have the TTL set to some very low value,
> >> multiple DNS records will defeat some of the care HAProxy takes to
> >> ensure it only sends requests to backends that can service them.
> >
> > Hmmm, one would think though the backend health check and failover should
> > take care of this ... or maybe not???
> >
> > Anyway, in case you use something like Consul which I mentioned before to
> > provide the DNS records, then Consul itself will remove the failed node
> > from the DNS record.
>
> Right, I missed the "independent healthchecks" in the original
> description, in which case it'd work well enough (albeit a low enough
> TTL value is still a concern).
>
> Cheers,
> Maciej
>
>
The way we designed the feature is more like a "server template" line which
may be used to pre-configure X servers in memory sharing the same DNS
resolution.
In your case X=2. If you intend to have up to 10 servers for this service,
simply set X to 10.
HAProxy will use the A records to create the servers, and the health checks
will ensure that the servers are available before sending them traffic.
If an A record disappears from the response, the corresponding server will
go down. If a new record is added and fewer than X servers are provisioned,
a new server is provisioned.
This X "upper" limit is there to ensure compatibility with all HAProxy
features (such as hash-based LB algorithms).

Could you let me know if that meets your requirements?
(we can still change this description).
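
For reference, the runtime DNS plumbing this builds on already exists in
1.6; a rough hand-written equivalent of X=2 slots might look like the
following (names and addresses are made up, and the template directive
itself is still being designed, so the slots are simply written out by
hand here):

resolvers mydns
    nameserver dns1 192.168.0.53:53
    resolve_retries 3
    timeout retry 1s
    hold valid 10s

backend app
    # two pre-declared slots sharing the same DNS name,
    # each with its own independent health check
    server app1 service.example.com:80 check resolvers mydns resolve-prefer ipv4
    server app2 service.example.com:80 check resolvers mydns resolve-prefer ipv4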

Baptiste