Hi Tristan,

I can’t add anything (yet) besides saying thank you for the write-up.
I’m mostly writing this because I didn’t see the message in the mailing list 
archive and found the actual mail in my junk mail folder for some reason.

Cheers,
Alex

> On 11. May 2022, at 21:34, Tristan <tris...@mangadex.org> wrote:
> 
> Hi,
> 
> I'm trying to find a better approach than what I've used so far for relatively 
> complex rate-limit management with HAProxy.
> 
> The vast majority of documentation out there focuses on a single global 
> rate-limit or a single proxy, with minimal coverage of more granular approaches.
> 
> Unfortunately I'm afraid that without context it will not make much sense, so 
> apologies in advance for the long message...
> 
> ---
> 
> Here's what I'm aiming for:
> 
> 1. Rate-limit "zones" (in nginx parlance)
> 
> Essentially, arbitrary groups of paths/domains/backends/etc that share common 
> rate-limit thresholds and counters.
> 
> That bit isn't terribly complex and a few ACLs do the trick just fine.
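> For concreteness, something like this (names made up here; the real version 
> appears in the config further down):
> 
>    # a "zone" is any ACL-able grouping sharing one set of thresholds/counters
>    acl rl_zone_general hdr(host) -i foo
>    acl rl_zone_backend hdr(host) -i bar
>    acl rl_zone_backend path_beg  -i /api      # same ACL name twice = OR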
> 
> 2. A tiered rate-limit system
> 
> That is, various levels of "infringement" with their own triggers and actions.
> 
> We have a few classes of traffic at all times:
> - Normal: the traffic we love, but irrelevant for this discussion
> - Silly: Public API + many consumers + varying levels of expertise = 
> occasional spam from the likes of clients looping requests ad nauseam despite 429s
> - Infringing: attempting to abuse our services in one way or another 
> (more-or-less polite scrapers, ToS abusers, ...)
> - Malicious: The annoying part of the internet (skiddies and their crappy 
> booters, IPv4 space scanners leaking non-public IPs, vulnerability exploit 
> attempts, ...)
> 
> So while we are typically happy to be heavy-handed and issue blanket bans 
> (users != revenue for us, and we have limited access to compute, so the 
> choice isn't very hard), we'd also like legitimate-but-misguided users to 
> have some leeway.
> 
> Finally, some of the "infringing" behaviors are more complex to detect (they 
> typically rely on faked but believable headers and require multi-request-pattern 
> rules for reliable detection), and we want to react to those in more elaborate 
> ways than just banning them after one request (maybe splitting responses between 
> fake data, invalid responses and conn resets; just being creative in hinting 
> that they should go annoy someone else), since an outright ban would reveal 
> that we identified them and allow them to test evasion methods a little too 
> easily.
> 
> 3. Tracking a couple of other interesting dimensions besides this
> 
> Could be anything ACL-able, for example TLSv1.2 vs. TLSv1.3 adoption.
> This is essentially about deriving extra metrics from anything ACL-able we 
> might care about.
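> As a rough sketch of what I mean for the TLS example (st_misc is a made-up 
> table name, and it of course needs a free sc slot, which is part of the 
> problem described below):
> 
>    acl is_tls13 ssl_fc_protocol TLSv1.3
>    http-request track-sc2 src table st_misc
>    http-request sc-inc-gpc(0,2) if is_tls13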
> 
> 4. Being able to track flagged sources/requests with Prometheus
> 
> We can have 99 GPCs per table, so in theory none of this is an issue; however, 
> that data isn't easily accessible as-is:
> - A log-based approach isn't viable due to resource constraints
> - I'd rather not introduce some admin API parser to extract and process the 
> data if I can avoid it
> - On the other hand, we have a very streamlined/cheap/scalable/etc long-term 
> Prometheus setup, so that is our preferred approach, and HAProxy exposes nice 
> things like per-stick-table entry counts out of the box, so using those is 
> ideal. The same reasoning applies to preferring dedicated backends over 
> frontend-level `http-request return` (per-backend metrics come for free).
> 
> ---
> 
> Now that the requirements are hopefully a bit clearer, here's the general 
> approach I came up with:
> 
> #--- First, a few stick-tables since the number of entries is exported as prom metrics
> 
> # Global source concurrent connections
> backend st_conns from defaults-base
>    stick-table type ip size 100k expire 300s store conn_cur
> 
> # Generic rate-limits, 1 gpc+gpc_rate per zone
> # (I'm fine with not having a dedicated metric per zone as-is)
> backend st_ratelimits from defaults-base
>    stick-table type ip size 100k expire 300s store gpc(2),gpc_rate(2,60s)
> 
> # For multi-request pattern analysis we count infringing requests and flag
> # infringers only after enough "suspicious" requests, to avoid false positives
> backend st_infringing_grace from defaults-base
>    stick-table type ip size 30k expire 600s store gpc(1),gpc_rate(1,60s)
> 
> # Generic counter for silly requests
> # (for example any request that we reject due to rate-limits...)
> backend st_badreqs from defaults-base
>    stick-table type ip size 30k expire 600s store gpc(1),gpc_rate(1,60s)
> 
> # Soft bans, i.e. you get a response page telling you you're banned
> backend st_ban_soft from defaults-base
>    stick-table type ip size 30k expire 600s store gpc(1),gpc_rate(1,60s)
> 
> # Hard bans, i.e. we silent-drop the requests
> backend st_ban_hard from defaults-base
>    stick-table type ip size 30k expire 600s store gpc(1),gpc_rate(1,60s)
> 
> #--- Then a common default for our frontends
> # We have multiple frontends per edge node depending on network sources, which
> # doesn't make this simpler, but at worst it's dealt with via a bit of config
> # templating, so it is not shown here
> 
> defaults defaults-fe-public from defaults-base
>    default_backend reject_req_ext
> 
>    http-request capture hdr(Host) len 48
>    http-request capture hdr(Origin) len 48
>    http-request capture hdr(X-Forwarded-For) len 15
> 
>    # Some conns are direct, some are proxied, so pretend this is dynamically
>    # chosen between src and headers based on some src ACL
>    # This will also serve as our counter key hereafter
>    http-request set-var(txn.connecting_ip) src
> 
>    http-request track-sc0 var(txn.connecting_ip) table st_conns
> 
>    # the web is a mess, etc...
>    http-request set-header X-Forwarded-For %[var(txn.connecting_ip)]
>    http-request set-header X-Forwarded-Proto %[ssl_fc,iif(https,http)]
>    http-request del-header X-Forwarded-Host
>    http-request del-header Cache-Control
>    http-request del-header Pragma
> 
>    # Sample ACLs for zones
>    acl rl_zone_general hdr(host) -i foo
>    acl rl_zone_backend hdr(host) -i bar
> 
>    # Flag counter zone
>    http-request set-var(txn.rl_zone_general) int(1) if rl_zone_general
>    http-request set-var(txn.rl_zone_backend) int(1) if rl_zone_backend
> 
>    # Sample ACLs for paths
>    acl path_infringing path_beg -i /infringing
>    acl path_malicious  path_beg -i /malicious /.
>    acl path_well_known path_beg -i /.well-known
> 
>    # Set "badness" flags
>    http-request set-var(txn.kind_infringing) int(1) if path_infringing
>    http-request set-var(txn.kind_malicious)  int(1) if path_malicious !path_well_known
>    ...
> 
>    # Sample ACLs for silly requests
>    acl urlparams_silly urlp_val(silly) -m found
>    http-request set-var(txn.kind_badreqs) int(1) if urlparams_silly
> 
> #--- Then our private frontends, which extend defaults-base (prometheus, health-check, admin...)
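> For reference, the Prometheus part is basically the stock exporter; a trimmed 
> sketch, assuming HAProxy is built with the bundled exporter (USE_PROMEX):
> 
> frontend prometheus from defaults-base
>    bind 127.0.0.1:8405
>    # the built-in exporter exposes, among other things, per-stick-table entry counts
>    http-request use-service prometheus-exporter if { path /metrics }
>    no log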
> 
> #--- Then our public frontends
> 
> frontend http from defaults-fe-public
>    bind *:80 # we bind by ip in practice, but it'd be more noisy than useful here
>    use_backend https_redirect
> 
> frontend https from defaults-fe-public
>    bind *:443 ssl strict-sni crt /path/to/cert.pem
> 
>    # Here comes the fun part. We need to check severities in decreasing order
>    # And we have 2 kinds of checks here
>    # The first merely checks previous flagging
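>    # (a plain table_gpc lookup keyed on the connecting IP, so no sc slot is consumed here)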
>    acl is_ban_hard var(txn.connecting_ip),table_gpc(0,st_ban_hard) gt 0
>    use_backend be_banned_hard if is_ban_hard
> 
>    acl is_ban_soft sc_get_gpc(0,0,st_ban_soft) gt 0
>    use_backend be_synth_banned if is_ban_soft
> 
>    # ...
> 
>    # The other kind, on the other hand, needs to do a bit of an awkward dance for counting purposes
>    acl rl_zone_general var(txn.rl_zone_general,0) -m bool
>    acl rl_count_general sc_inc_gpc(0,0,st_ratelimits) gt 0
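>    # (sc_inc_gpc returns the post-increment value, so this ACL is effectively always true; it exists only to force the increment)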
>    acl rl_check_general sc_gpc_rate(0,0,st_ratelimits) gt 900
>    use_backend be_synth_ratelimited if rl_zone_general rl_count_general rl_check_general
> 
>    # Then we perform normal request routing...
>    acl domain_web hdr(host) -i foo
>    use_backend web if domain_web
> 
>    # ...
> 
> #--- Then our backends, which also need to perform some incrementing of counters for tracking purposes
> 
> defaults defaults-be-badreq from defaults-base
>    http-request track-sc1 var(txn.connecting_ip) table st_badreqs
>    acl rl_count_badrequest sc_inc_gpc(0,1,st_badreqs) gt 0
>    acl rl_check_badrequest sc_gpc_rate(0,1,st_badreqs) gt 60
> 
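>    # escalation: once the bad-request rate is exceeded, start tracking the source in the hard-ban table and flag it there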
>    http-request track-sc2 var(txn.connecting_ip) table st_ban_hard if rl_count_badrequest rl_check_badrequest
>    acl rl_apply_ban_hard sc_inc_gpc(0,2,st_ban_hard) gt 0
>    http-request return status 403 content-type text/plain string "Now hard banned" if rl_apply_ban_hard
> 
> backend be_synth_ratelimited from defaults-be-badreq
>    http-request return status 429 content-type text/plain string "General rate-limit"
> 
> backend be_synth_banned from defaults-base
>    http-request return status 403 content-type text/plain string "Soft banned [...]"
> 
> backend be_banned_hard from defaults-base
>    http-request track-sc1 var(txn.connecting_ip) table st_ban_hard
>    # A hard-ban is auto-renewed until the user completely goes away for a bit
>    acl rl_renew_ban_hard sc_inc_gpc(0,1,st_ban_hard) gt 0
>    http-request silent-drop if rl_renew_ban_hard
> 
> backend web from defaults-base
>    ...
>    server web-1 1.2.3.4...
>    ...
> 
> #----
> 
> Modulo some small issues I might have introduced while renaming/cutting the 
> full config down for this message, this approach gets close to all I want, as 
> long as I carefully manage which sc1/sc2 slots I use and where (maybe tracking 
> misc data like TLS versions only on routes where the backend won't need those 
> slots for other tables later, etc).
> 
> However, the three sc slots are a somewhat annoying limit in this case... For 
> example, the chain normal->infringing->soft-ban->hard-ban becomes impossible, 
> as it'd require an sc3 to handle it.
> 
> This makes me wonder whether there's a technical drawback (maybe significant 
> overhead?) to having more than 3 counters as an immediate workaround?
> 
> Similarly, I noted sc-inc-gpc(...) (here 
> https://docs.haproxy.org/2.5/configuration.html#4.2-http-request%20sc-inc-gpc) 
> as the somewhat-ideal API for it, except that it doesn't allow selecting a 
> specific table to apply the increment to, and instead uses the table to which 
> the sticky counter is currently bound (if I understand correctly), so it 
> doesn't really solve the problem as far as I can tell?
> 
> Finally, sc_inc_gpc(<idx>,<sc>,<table>) has a table argument, but it seems to 
> be effective only if the sticky counter used has been "bound" to that table 
> via a track-sc call (meaning it's back to at most 3 tables manipulated). I was 
> at first hoping that the <sc> argument was just intended for key selection (so 
> you could use a single sc0 and reuse it for any number of tables, as long as 
> you're happy with the same bucketing process), but that doesn't seem to be the 
> case :/
> 
> Either way, sorry for the light novel; hopefully you know something I don't, 
> and even if not, I realize that I'm somewhat trying to push HAProxy's features 
> beyond their designed scope 😅
> 
> Regards,
> Tristan
> 
