Hi Willy,

thanks for your support, this makes perfect sense. I've changed the configuration and will check whether it works.


Marco


On 12.12.19 at 16:39, Willy Tarreau wrote:
Hi Marco,

On Wed, Dec 11, 2019 at 12:50:21PM +0100, Marco Nietz wrote:
Hi,

I'm running HAProxy 1.8.21 on a Debian 9 box.

We use stick-tables to track HTTP connections and the HTTP error rate, and block
clients (bots) that cause a high error rate. But today I noticed a lot of
400 / Bad Request errors (~15/s) in the logs. The rate definitely exceeds
the defined limit, but the IP address wasn't blocked. Hence my question:
does HTTP error 400 not increase the http_err_rate counter?
Yes it does.

The state is CR,
so the requests do not reach our backend servers, but my expectation is
that they get blocked at the load balancer. The configuration below works
fine with 404 or 401 errors.

Here's a sample logfile entry:

Dec 11 11:56:40 lb01 haproxy[41488]: [REDACTED]:3641
[11/Dec/2019:11:56:40.103] production~ production/<NOSRV> -1/-1/-1/-1/98 400
0 - - CR-- 461/461/0/0/0 0/0 "<BADREQ>"

Configuration:

stick-table type ip size 5M expire 2h store
http_req_rate(60s),http_err_rate(60s),gpt0
http-request track-sc0 src unless { src -f /etc/haproxy/whitelist.acl }
http-request sc-set-gpt0(0) 1 if { sc_http_err_rate(0) ge LIMIT }
tcp-request connection reject if { src,table_gpt0(production) eq 1 }
I think I get the issue. It's just that in case of a bad request you
never reach the http-request rules since there's no HTTP request in
the first place. You should move your "track-sc0" rule to tcp-request
connection instead.
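
For reference, an untested sketch of the reordered rules as I understand the suggestion (the table name "production", the LIMIT placeholder, and the whitelist path are taken from the snippet above):

    stick-table type ip size 5M expire 2h store
    http_req_rate(60s),http_err_rate(60s),gpt0
    # Track at connection time, so clients whose requests never parse
    # as HTTP (<BADREQ>, state CR) are still counted against the table
    tcp-request connection track-sc0 src unless { src -f /etc/haproxy/whitelist.acl }
    tcp-request connection reject if { src,table_gpt0(production) eq 1 }
    # Unchanged: flag offenders once their error rate crosses the limit
    http-request sc-set-gpt0(0) 1 if { sc_http_err_rate(0) ge LIMIT }

The key difference is only the "track-sc0" line moving from http-request to tcp-request connection, so tracking no longer depends on a parseable HTTP request.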

Willy

--
practicalbytes | IT workshops & consulting for PostgreSQL, Grafana, and more

In der Rheinau 26
53639 Königswinter
+49 (0) 2223 75599 20

https://practicalbytes.de

