[ https://issues.apache.org/jira/browse/CASSANDRA-16663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Caleb Rackliffe updated CASSANDRA-16663:
----------------------------------------
    Description: 
Together, CASSANDRA-14855, CASSANDRA-15013, and CASSANDRA-15519 added support 
for a runtime-configurable, per-coordinator limit on the number of bytes 
allocated for concurrent requests over the native protocol. It supports channel 
back-pressure by default, and optionally supports throwing OverloadedException 
if that is requested in the relevant connection’s STARTUP message.

This can be an effective tool to prevent the coordinator from running out of 
memory, but it may not correspond to how expensive queries are or provide a 
direct conceptual mapping to how users think about request capacity. I propose 
adding the option of request-based (or perhaps more correctly message-based) 
back-pressure, coexisting with (and reusing the logic that supports) the 
current bytes-based back-pressure.

_We can roll this forward in phases_, where the server’s cost accounting 
becomes more accurate, we segment limits by operation type/keyspace/etc., and 
the client/driver reacts more intelligently to (especially non-back-pressure) 
overload, but something minimally viable could look like this:

1.) Reuse most of the existing logic in Limits, et al. to support a simple 
per-coordinator limit only on native transport requests in flight (i.e. where 
requests have a fixed cost of 1). Under this limit will be CQL reads and 
writes, but also auth requests, prepare requests, and batches. This is 
obviously simplistic, and it does not account for the variation in cost between 
individual queries, but even a fixed cost model should be useful in aggregate.
 * If the client specifies THROW_ON_OVERLOAD in its STARTUP message at 
connection time, a breach of the per-node limit will result in an 
OverloadedException being propagated to the client, and the server will discard 
the request.
 * If THROW_ON_OVERLOAD is not specified, the server will stop consuming 
messages from the channel/socket, which should back-pressure the client, while 
the breaching message continues to be processed.
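
Sketched in isolation, the fixed-cost-of-1 accounting in item 1 might look 
like the following (class and method names here are illustrative, not 
Cassandra's actual Limits API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a per-coordinator in-flight request limiter where
// every native transport message has a fixed cost of 1 permit.
final class NativeRequestLimiter
{
    static final long NO_LIMIT = -1;

    private final AtomicLong inFlight = new AtomicLong();
    private volatile long limit;

    NativeRequestLimiter(long limit) { this.limit = limit; }

    // Attempt to admit one message. A false return is a breach, at which
    // point the server either throws OverloadedException or pauses reads
    // from the channel, depending on THROW_ON_OVERLOAD.
    boolean tryAcquire()
    {
        long max = limit;
        if (max == NO_LIMIT)
        {
            inFlight.incrementAndGet();
            return true;
        }
        while (true)
        {
            long current = inFlight.get();
            if (current >= max)
                return false; // limit breached
            if (inFlight.compareAndSet(current, current + 1))
                return true;
        }
    }

    // Release the permit when the request completes (or is discarded).
    void release() { inFlight.decrementAndGet(); }

    // Runtime-adjustable, e.g. via JMX.
    void setLimit(long newLimit) { limit = newLimit; }
}
```

The compare-and-set loop keeps the check-then-increment atomic without a lock, 
which matters on a hot path shared by every native transport message.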

2.) This limit is infinite by default, and can be enabled via the YAML config 
or JMX at runtime. (The literal default of -1 corresponding to “no limit” can 
be used here.)
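
For example, a hypothetical cassandra.yaml entry (the key name is 
illustrative; the actual name would be settled during implementation):

```yaml
# -1 (the default) means no limit; any positive value caps the number of
# concurrent native transport requests per coordinator. Key name is
# hypothetical, not an existing cassandra.yaml setting.
native_transport_max_concurrent_requests: -1
```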

3.) The current value of the limit is available via JMX, and metrics around 
coordinator operations/second are already available to compare against it.

4.) The new request-based limit will intersect with the existing bytes-based 
limits. (i.e. A breach of either limit, bytes- or request-based, will actuate 
back-pressure or OverloadedExceptions.)
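
The intersection in item 4 amounts to requiring both limits to admit a message 
before it proceeds. A hedged sketch, assuming hypothetical bytes-based and 
request-based limiter objects behind a common interface:

```java
// Illustrative composition of the bytes-based and request-based limits;
// both must admit a message, and a breach of either actuates the same
// overload handling (back-pressure or OverloadedException).
final class CompositeLimits
{
    interface Limit
    {
        boolean tryAcquire(long cost);
        void release(long cost);
    }

    static boolean admit(Limit bytesLimit, Limit requestLimit, long messageBytes)
    {
        if (!bytesLimit.tryAcquire(messageBytes))
            return false;                     // bytes-based breach

        if (!requestLimit.tryAcquire(1))      // request-based breach (fixed cost of 1)
        {
            bytesLimit.release(messageBytes); // roll back the bytes reservation
            return false;
        }
        return true;
    }
}
```

Rolling back the first reservation on a second-limit breach keeps the two 
accountings consistent, so a rejected message leaves no residue in either.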

In this first pass, explicitly out of scope would be any work on the 
client/driver side.

In terms of validation/testing, our biggest concern with anything that adds 
overhead on a very hot path is performance. In particular, we want to fully 
understand how the client and server perform along two axes, which together 
constitute four scenarios: a.) whether or not we are breaching the request 
limit, and b.) whether the server is throwing on overload at the behest of the 
client.
Having said that, query execution should dwarf the cost of limit accounting.
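
The two axes multiply into a four-scenario benchmark matrix, which can be 
enumerated mechanically (scenario names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Enumerates the 2x2 validation matrix: (breaching the request limit?)
// x (THROW_ON_OVERLOAD requested?).
final class BenchmarkMatrix
{
    static List<String> scenarios()
    {
        List<String> out = new ArrayList<>();
        for (boolean breaching : new boolean[] { false, true })
            for (boolean throwOnOverload : new boolean[] { false, true })
                out.add((breaching ? "breaching-limit" : "under-limit")
                        + "/" + (throwOnOverload ? "throw" : "back-pressure"));
        return out;
    }
}
```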

  was:
Together, CASSANDRA-14855, CASSANDRA-15013, and CASSANDRA-15519 added support 
for a runtime-configurable, per-coordinator limit on the number of bytes 
allocated for concurrent requests over the native protocol. It supports channel 
back-pressure by default, and optionally supports throwing OverloadedException 
if that is requested in the relevant connection’s STARTUP message.

This can be an effective tool to prevent the coordinator from running out of 
memory, but it may not correspond to how expensive queries are or provide a 
direct conceptual mapping to how users think about request capacity. I propose 
adding the option of request-based (or perhaps more correctly message-based) 
back-pressure, coexisting with (and reusing the logic that supports) the 
current bytes-based back-pressure.

We can roll this forward in phases, where the server’s cost accounting becomes 
more accurate, and the client/driver reacts more intelligently to (especially 
non-back-pressure) overload, but something minimally viable could look like 
this:

1.) Reuse most of the existing logic in Limits, et al. to support a simple 
per-coordinator limit only on native transport requests in flight (i.e. where 
requests have a fixed cost of 1). Under this limit will be CQL reads and 
writes, but also auth requests, prepare requests, and batches. This is 
obviously simplistic, and it does not account for the variation in cost between 
individual queries, but even a fixed cost model should be useful in aggregate.
 * If the client specifies THROW_ON_OVERLOAD in its STARTUP message at 
connection time, a breach of the per-node limit will result in an 
OverloadedException being propagated to the client, and the server will discard 
the request.
 * If THROW_ON_OVERLOAD is not specified, the server will stop consuming 
messages from the channel/socket, which should back-pressure the client, while 
the message continues to be processed.

2.) This limit is infinite by default, and can be enabled via the YAML config 
or JMX at runtime. (The literal default of -1 corresponding to “no limit” can 
be used here.)

3.) The current value of the limit is available via JMX, and metrics around 
coordinator operations/second are already available to compare against it.

4.) The new request-based limit will intersect with the existing bytes-based 
limits. (i.e. A breach of either limit, bytes- or request-based, will actuate 
back-pressure or OverloadedExceptions.)

In this first pass, explicitly out of scope would be any work on the 
client/driver side.

In terms of validation/testing, our biggest concern with anything that adds 
overhead on a very hot path is performance. In particular, we want to fully 
understand how the client and server perform along two axes constituting 4 
scenarios. Those are a.) whether or not we are breaching the request limit and 
b.) whether the server is throwing on overload at the behest of the client. 
Having said that, query execution should dwarf the cost of limit accounting.


> Request-Based Native Transport Rate-Limiting
> --------------------------------------------
>
>                 Key: CASSANDRA-16663
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16663
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Messaging/Client
>            Reporter: Caleb Rackliffe
>            Assignee: Caleb Rackliffe
>            Priority: Normal
>             Fix For: 4.0.x, 4.x
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
