[ 
https://issues.apache.org/jira/browse/CASSANDRA-16663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17364631#comment-17364631
 ] 

Caleb Rackliffe commented on CASSANDRA-16663:
---------------------------------------------

How did we fare for the above client workloads, with {{OverloadedException}} 
thrown when the rate limit is breached?

*Native Protocol V4*

|Rate Limit (requests/second)|Client-Requested Rate|Client p99 (millis)|Client-Observed Rate|Errors/Second|
|n/a|1000|0.39|992.99|0|
|1000|999|0.41|992.02|0|
|1000|1001|0.41|994|0.88|
|n/a|2000|0.37|1986.47|0|
|1000|2000|0.4|1986.45|999.6|

*Native Protocol V5*

|Rate Limit (requests/second)|Client-Requested Rate|Client p99 (millis)|Client-Observed Rate|Errors/Second|
|n/a|1000|0.39|993.01|0|
|1000|999|0.42|992.03|0|
|1000|1001|0.44|994.02|0.89|
|n/a|2000|0.38|1986.49|0|
|1000|2000|0.4|1986.46|999.61|

Like the runs w/ back-pressure above, we don't seem to have any difficulty 
keeping the rate of _accepted_ requests at the configured rate limit. (e.g. In 
the case where we send 2000 requests/second from the client but the server-side 
rate limit is 1000 requests/second, we accept roughly 1000 requests/second, but 
also throw about 1000 errors/second for the ones we reject. The rejected 
requests fail immediately and aren't counted toward the accepted rate.)

There isn't much to remark on in terms of latencies, and in the case where we 
sit just barely under the rate limit (999/1000), we don't observe a single 
{{OverloadedException}}. When the client sends just over the configured limit 
(1001/1000), we see the expected ~1 error/second.
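The accepted-vs-rejected split in the tables above (roughly 1000 accepted and 1000 rejected per second when the client sends 2000 requests/second against a 1000 requests/second limit) can be sketched with a toy fixed-window limiter. This is purely illustrative, not the limiter in the actual patch; all names here are hypothetical:

```java
// Illustrative only: a minimal fixed-window request limiter, NOT the
// implementation in the CASSANDRA-16663 patch. Names are hypothetical.
public final class RequestRateLimiter
{
    private final int permitsPerSecond;
    private long windowStartNanos;
    private int used;

    public RequestRateLimiter(int permitsPerSecond)
    {
        this.permitsPerSecond = permitsPerSecond;
        this.windowStartNanos = System.nanoTime();
    }

    /** Returns true if the request is accepted, false if it breaches the limit. */
    public synchronized boolean tryAcquire()
    {
        long now = System.nanoTime();
        if (now - windowStartNanos >= 1_000_000_000L)
        {
            // New one-second window: reset the counter.
            windowStartNanos = now;
            used = 0;
        }
        if (used < permitsPerSecond)
        {
            used++;
            return true;
        }
        return false;
    }

    public static void main(String[] args)
    {
        // 1000 permits/second; a burst of 2000 requests within one window
        // splits into accepted and rejected halves.
        RequestRateLimiter limiter = new RequestRateLimiter(1000);
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 2000; i++)
        {
            if (limiter.tryAcquire()) accepted++;
            else rejected++;
        }
        System.out.println(accepted + " accepted, " + rejected + " rejected");
    }
}
```

A real limiter would smooth permits over the window rather than resetting a counter, but the accept/reject arithmetic matches the table: excess requests above the limit become errors rather than queued work.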

> Request-Based Native Transport Rate-Limiting
> --------------------------------------------
>
>                 Key: CASSANDRA-16663
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16663
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Messaging/Client
>            Reporter: Caleb Rackliffe
>            Assignee: Caleb Rackliffe
>            Priority: Normal
>             Fix For: 4.x
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> Together, CASSANDRA-14855, CASSANDRA-15013, and CASSANDRA-15519 added support 
> for a runtime-configurable, per-coordinator limit on the number of bytes 
> allocated for concurrent requests over the native protocol. It supports 
> channel back-pressure by default, and optionally supports throwing 
> OverloadedException if that is requested in the relevant connection’s STARTUP 
> message.
> This can be an effective tool to prevent the coordinator from running out of 
> memory, but it may not correspond to how expensive queries are or provide a 
> direct conceptual mapping to how users think about request capacity. I 
> propose adding the option of request-based (or perhaps more correctly 
> message-based) back-pressure, coexisting with (and reusing the logic that 
> supports) the current bytes-based back-pressure.
> _We can roll this forward in phases_, where the server’s cost accounting 
> becomes more accurate, we segment limits by operation type/keyspace/etc., and 
> the client/driver reacts more intelligently to (especially non-back-pressure) 
> overload, _but something minimally viable could look like this_:
> 1.) Reuse most of the existing logic in Limits, et al. to support a simple 
> per-coordinator limit only on native transport requests per second. Under 
> this limit will be CQL reads and writes, but also auth requests, prepare 
> requests, and batches. This is obviously simplistic, and it does not account 
> for the variation in cost between individual queries, but even a fixed cost 
> model should be useful in aggregate.
>  * If the client specifies THROW_ON_OVERLOAD in its STARTUP message at 
> connection time, a breach of the per-node limit will result in an 
> OverloadedException being propagated to the client, and the server will 
> discard the request.
>  * If THROW_ON_OVERLOAD is not specified, the server will stop consuming 
> messages from the channel/socket, which should back-pressure the client, 
> while the current message continues to be processed.
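The two THROW_ON_OVERLOAD branches above can be sketched as a single admission decision. This is a hedged sketch of the described behavior only; the class and method names (admit, pauseReads) are assumptions, not the actual Cassandra code:

```java
// Hedged sketch of the THROW_ON_OVERLOAD decision described in the ticket;
// names here are illustrative assumptions, not Cassandra internals.
public final class OverloadPolicy
{
    static final class OverloadedException extends RuntimeException
    {
        OverloadedException(String msg) { super(msg); }
    }

    /**
     * Decide what to do with a message whose rate-limit permit may have been
     * denied. If the client asked for THROW_ON_OVERLOAD at STARTUP, a denied
     * permit becomes an OverloadedException and the request is discarded.
     * Otherwise we stop reading from the channel (back-pressuring the client)
     * but still process the message we already read.
     */
    static boolean admit(boolean permitGranted, boolean throwOnOverload, Runnable pauseReads)
    {
        if (permitGranted)
            return true;
        if (throwOnOverload)
            throw new OverloadedException("Request breached the native transport rate limit");
        pauseReads.run(); // back-pressure: the client eventually blocks on its socket
        return true;      // the current message is still processed
    }

    public static void main(String[] args)
    {
        boolean[] paused = { false };
        // Back-pressure path: message is still processed, channel reads pause.
        boolean processed = admit(false, false, () -> paused[0] = true);
        System.out.println("processed=" + processed + ", paused=" + paused[0]);
    }
}
```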
> 2.) This limit is infinite by default (or simply disabled), and can be 
> enabled via the YAML config or JMX at runtime. (It might be cleaner to have a 
> no-op rate limiter that's used when the feature is disabled entirely.)
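The "no-op rate limiter when disabled" idea in point 2 keeps the hot path branch-free of feature checks. A minimal sketch, assuming a hypothetical Limiter interface (these names are not from the patch):

```java
// Sketch of point 2: map "limit disabled" onto a no-op limiter so callers
// never check a feature flag. The interface and names are assumptions.
public final class LimiterFactory
{
    interface Limiter
    {
        boolean tryAcquire();
    }

    /** Always grants a permit; used when rate limiting is disabled. */
    static final Limiter NO_OP = () -> true;

    /** Crude counting limiter standing in for a real rate limiter. */
    static Limiter counting(int permits)
    {
        int[] used = { 0 };
        return () -> used[0]++ < permits;
    }

    /** A non-positive limit means "disabled", mapping to the no-op limiter. */
    static Limiter forLimit(int permitsPerSecond)
    {
        return permitsPerSecond <= 0 ? NO_OP : counting(permitsPerSecond);
    }

    public static void main(String[] args)
    {
        Limiter disabled = LimiterFactory.forLimit(0);
        Limiter enabled = LimiterFactory.forLimit(2);
        System.out.println("disabled grants: " + disabled.tryAcquire());
        System.out.println("enabled grants:  " + enabled.tryAcquire() + " "
                           + enabled.tryAcquire() + " " + enabled.tryAcquire());
    }
}
```

Swapping the limiter instance at runtime (via JMX or YAML reload, as point 2 suggests) then requires no change to call sites.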
> 3.) The current value of the limit is available via JMX, and metrics around 
> coordinator operations/second are already available to compare against it.
> 4.) Any interaction with existing byte-based limits will intersect. (i.e. A 
> breach of any limit, bytes or request-based, will actuate back-pressure or 
> OverloadedExceptions.)
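Point 4's intersection semantics reduce to a logical OR over the individual limits: breaching either the byte-based or the request-based limit triggers the overload response. A trivial sketch (hypothetical names):

```java
// Sketch of point 4: the byte-based and request-based limits intersect,
// so a breach of either one counts as overload. Names are hypothetical.
public final class CombinedLimits
{
    static boolean overloaded(boolean bytesBreached, boolean requestsBreached)
    {
        // Either breach alone actuates back-pressure or OverloadedException.
        return bytesBreached || requestsBreached;
    }

    public static void main(String[] args)
    {
        System.out.println(overloaded(false, false)); // neither limit breached
        System.out.println(overloaded(true, false));  // bytes limit alone suffices
    }
}
```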
> In this first pass, explicitly out of scope would be any work on the 
> client/driver side.
> In terms of validation/testing, our biggest concern with anything that adds 
> overhead on a very hot path is performance. In particular, we want to fully 
> understand how the client and server perform along two axes constituting 4 
> scenarios. Those are a.) whether or not we are breaching the request limit 
> and b.) whether the server is throwing on overload at the behest of the 
> client. Having said that, query execution should dwarf the cost of limit 
> accounting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
