Re: rate-limiting and retry-after header

2022-06-20 Thread Jérôme Magnin

Hello Corin,

On 2022-06-20 13:18, Corin Langosch wrote:

Hi guys,

 I'm using haproxy 2.5 and have some basic rate limiting configured
like this (the actual configuration contains more rules for different
urls):

 backend test
  acl rate_limit_by_ip_exceeds_limit src,table_http_req_rate(rate_limit_by_ip) gt 100
  http-request deny deny_status 429 if rate_limit_by_ip_exceeds_limit
  http-request track-sc0 src table rate_limit_by_ip
  ...

backend rate_limit_by_ip
  stick-table type ipv6 size 1m expire 24h store http_req_rate(5m)


 Is there any way to include a "Retry-After" header when the rate limit is
exceeded?



You can, with an http-after-response rule. See
https://cbonte.github.io/haproxy-dconv/2.5/configuration.html#http-after-response
Or you can use an http-request return rule instead of deny/deny_status,
which sets both the status code and the header with a single rule.

https://cbonte.github.io/haproxy-dconv/2.5/configuration.html#http-request%20return
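
For illustration, a hedged sketch of both options, reusing the ACL from your
config (the 60 second value is only an example):

  # option 1: keep the deny, add the header afterwards
  http-request deny deny_status 429 if rate_limit_by_ip_exceeds_limit
  http-after-response set-header Retry-After 60 if { status 429 }

  # option 2: a single http-request return rule setting status and header
  http-request return status 429 content-type text/plain string "rate limit exceeded" hdr Retry-After 60 if rate_limit_by_ip_exceeds_limit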

--
Jérôme





rate-limit sessions not consistent between ssl and non ssl traffic

2018-01-24 Thread Jérôme Magnin
Hi,

I've been toying with haproxy and rate limiting lately, and noticed an odd
behavior with rate-limit sessions, or maybe I misunderstood how it is supposed
to be used.

I'm using the following config:

global
maxconn 2
log 127.0.0.1 local0
user haproxy
chroot  /usr/share/haproxy
pidfile /run/haproxy.pid
daemon
stats socket /var/run/haproxy.sock

defaults
mode http

frontend  fe_foo
bind *:1234
bind *:1235 ssl crt /etc/haproxy/www.pem
rate-limit sessions 10
default_backend be_foo

backend be_foo
server s1 127.0.0.1:8001

I'm using ab to send traffic to the frontend.

1/ ab -c 40 -n 100 http://127.0.0.1:1234/

the output of show info shows maxconnrate 10 and maxsessrate 10.
This is consistent with the value I set for rate-limit sessions.

2/ ab -c 40 -n 100 https://127.0.0.1:1235/

the output of show info shows maxconnrate, maxsslrate, maxsessrate and
sslfrontendmaxkeyrate all equal to 40, four times the value I set for
rate-limit sessions.
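
For reference, this is roughly how I read the counters, using the stats socket
declared in the config above (socat is only one of several ways to query it):

  echo "show info" | socat stdio /var/run/haproxy.sock | grep -i rate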

Am I doing something wrong here ?

thanks,
Jérôme



Re: haproxy SSL termination performance

2017-12-26 Thread Jérôme Magnin
On Tue, Dec 26, 2017 at 08:43:43AM +, Lucas Rolff wrote:
> Hi guys,
> 
> I’m currently performing a few tests on haproxy and nginx to determine the 
> best software to terminate SSL early in a hosting stack, and also perform 
> some load balancing to varnish machines.
> 
> I’d love to use haproxy for this setup, since haproxy does one thing really 
> great – load balancing and the way it determines if backends are down.
> 
> However, one issue I’m facing is the SSL termination performance, on a single 
> core I manage to terminate 21000 connections per second on nginx, but only 
> 748 connections per second in haproxy.

748 looks like what a single core on a VM can achieve in terms of private key
computation with RSA 2048 certs. You can confirm this by running the following
command in your VM:

openssl speed rsa2048

21000 is too high to be key computation only, which suggests nginx is not
performing a full handshake for every request in that test.

> 
> They’re using the exact same cipher suite 
> (TLSv1.2,ECDHE-RSA-AES128-GCM-SHA256) to minimize the SSL overhead, I decided 
> to go for AES128 since the security itself isn’t super important, but rather 
> just that things are somewhat encrypted (mainly serving static files or 
> non-sensitive content).
> 
> I'm testing with a single apache benchmark client (actually from the
> hypervisor where my VM is running, so the network latency is minimal), to
> rule out networking being the cause and to get the highest possible numbers.

Can you please share the exact ab command you are using for your tests ?
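
For reference, the flag that usually makes the biggest difference in this kind
of test is -k, which enables HTTP keep-alive so the TLS handshake is reused
across requests. Something along these lines (the address is only a
placeholder):

  ab -k -c 40 -n 10000 https://192.0.2.10/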

> 
> I generated a flame graph for both haproxy and nginx using `perf` tool
> 
> Haproxy flame graph can be found here: 
> https://snaps.trollcdn.com/sadiZsJd96twAez0GUiWJdDiEbwsRPWUxJ3sRskLG4.svg
> 
> Nginx flame graph can be found here: 
> https://snaps.trollcdn.com/P7PVyDkjhsxbsXCmK6bzVeqWsHHwnOxRucnCYG084f.svg
> 
> What I find odd is that in haproxy you'll see libcrypto.so.1.0.2k with 81k
> samples, but the function right below it (unknown) only got 8.3k samples,
> whereas in nginx the gap is *a lot* smaller, and I still haven't figured out
> what actually happens in haproxy to cause this gap.
> 
> However, my main concern is the fact that when terminating SSL, nginx
> performs 28 times better.
> 
> I’ve tried running haproxy with both 10 threads, or 10 processes on a 12 core 
> machine – pinning each thread or process to a specific core, and putting RX 
> and TX queues on individual cores as well to ensure that load would be evenly 
> distributed.
> 
> Doing the same comparison still shows 5.5k requests per second on haproxy,
> but 125,000 requests per second on nginx (a 22x difference).
> I got the best performance on haproxy by using processes rather than threads
> – with processes it's not maxing out the CPU, but with threads it is, so I'm
> not sure why this happens either.
> 
> Now, since nginx can serve static files directly, I wanted to replicate the
> same in haproxy so I wouldn't need a backend that opens another connection,
> since that could surely degrade the overall requests per second on haproxy.
> 
> I did this by using an errorfile 200 /etc/haproxy/errorfiles/200.http to just 
> serve a file directly on the frontend.
> 
> My haproxy config looks like this: 
> https://gist.github.com/lucasRolff/36fc84ac44aad559c1d43ab6f30237c8

This configuration has no backend, so each request is answered with a 503
response containing a Connection: close header, which means every request
triggers a new TLS handshake and therefore a new private key computation.
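
For comparison, a minimal hedged sketch of a setup where the handshake cost is
paid once per connection rather than once per request (ports, paths and the
backend are hypothetical, and it assumes the client keeps connections alive,
e.g. ab -k):

  defaults
    mode http
    option http-keep-alive

  frontend fe_ssl
    bind *:8443 ssl crt /etc/haproxy/www.pem
    default_backend be_static

  backend be_static
    server s1 127.0.0.1:8001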


> 
> Does anyone have any suggestions or insight into why haproxy seems to
> terminate SSL connections at a much lower rate per second than, for example,
> nginx? Is there any functionality missing in haproxy that causes nginx to win
> in terms of performance/scalability for terminations?
> 
> There are many things I absolutely love about haproxy, but if there's a 22-28x
> difference in how many SSL terminations it can handle per second, then we're
> talking about a lot of added hardware to be able to handle, let's say, 500k
> requests per second.
> 
I don't think we are comparing the same values here; there definitely isn't a
22-28x difference.


> The VM has AES-NI available as well.
> 

I think AES-NI only applies to ciphering the traffic (symmetric encryption),
not to the key exchange.

cheers,
Jérôme



Re: http/2 Frontend

2017-12-04 Thread Jérôme Magnin
On Mon, Dec 04, 2017 at 11:20:29AM +0100, Daniel wrote:
> Hi There,
> 
> I know that haproxy 1.8 is now able to handle http/2 connections on the
> frontend.
> My problem is, I can't find any documentation for 1.8 on the website.
> 
> Does someone have example configs for me, just to check how I need to
> configure it?
> 
> Cheers
> 
> Daniel
>

Hello Daniel,

look for the alpn keyword in the doc.
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.1-alpn
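
For illustration, a minimal hedged sketch (certificate path and backend are
placeholders); h2 is negotiated through ALPN on the ssl bind line:

  frontend fe_h2
    mode http
    bind *:443 ssl crt /etc/haproxy/www.pem alpn h2,http/1.1
    default_backend be_app

  backend be_app
    mode http
    server s1 127.0.0.1:8080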

regards,
Jérôme 



Re: Will HAProxy community supports mailers section?

2017-08-24 Thread Jérôme Magnin
On Thu, Aug 24, 2017 at 06:50:51PM +0530, Rajesh Kolli wrote:
> Hi Daniel,
> 
> Thanks for your quick response...
> 
> I am getting this error if I use the mailers section in my configuration.
> -
> [root@DS-11-82-R7-CLST-Node1 ~]# systemctl status haproxy.service -l
> haproxy.service - HAProxy Load Balancer
>Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled)
>Active: inactive (dead) since Thu 2017-08-24 18:43:23 IST; 4s ago
>   Process: 6511 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f
> /etc/haproxy/haproxy.cfg -p /run/haproxy.pid (code=exited, status=0/SUCCESS)
>  Main PID: 6511 (code=exited, status=0/SUCCESS)
> 
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 systemd[1]: Starting HAProxy Load
> Balancer...
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 systemd[1]: Started HAProxy Load
> Balancer.
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : parsing [/etc/haproxy/haproxy.cfg:81] : unknown
> keyword 'mailers' in 'listen' section
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : parsing [/etc/haproxy/haproxy.cfg:82] : unknown
> keyword 'mailer' in 'listen' section
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : parsing [/etc/haproxy/haproxy.cfg:117] :
> unknown keyword 'email-alert' in 'backend' section
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : parsing [/etc/haproxy/haproxy.cfg:119] :
> unknown keyword 'email-alert' in 'backend' section
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : parsing [/etc/haproxy/haproxy.cfg:120] :
> unknown keyword 'email-alert' in 'backend' section
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : Error(s) found in configuration file :
> /etc/haproxy/haproxy.cfg
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> [ALERT] 235/184323 (6512) : Fatal errors found in configuration.
> Aug 24 18:43:23 DS-11-82-R7-CLST-Node1 haproxy-systemd-wrapper[6511]:
> haproxy-systemd-wrapper: exit, haproxy RC=256
>

Hello Rajesh,

you are most likely running a version in which the mailers section is not
implemented (older than 1.6).
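
Once on 1.6 or later, the relevant sections look roughly like this (addresses
and names are placeholders):

  mailers mymailers
    mailer smtp1 192.0.2.10:25

  backend be_app
    email-alert mailers mymailers
    email-alert from haproxy@example.com
    email-alert to admin@example.com
    email-alert level alert
    server s1 10.0.0.1:80 check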

Jérôme