[
https://issues.apache.org/jira/browse/CASSANDRA-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055168#comment-14055168
]
Rick Branson commented on CASSANDRA-7402:
-----------------------------------------
The more "practical" thing here is to make sure nothing on the read/write paths
can potentially allocate big chunks of heap. PostgreSQL's read/write paths, for
example, ensure that the only potentially large allocations (sorts, hash-joins,
etc.) occur in a very controlled manner, with configurable hard limits attached.
This seems like a reasonable thing to ask for at this stage in Cassandra's life.
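A minimal sketch of the "controlled allocation with a hard limit" idea, in the spirit of PostgreSQL's work_mem. The class and method names here are illustrative only, not existing Cassandra APIs: an operation reserves bytes against a configured cap before allocating, and fails fast instead of ballooning the heap.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical per-operation memory budget with a configurable hard limit.
final class OperationMemoryBudget
{
    private final long limitBytes;                      // configured hard cap
    private final AtomicLong reserved = new AtomicLong();

    OperationMemoryBudget(long limitBytes)
    {
        this.limitBytes = limitBytes;
    }

    // Reserve bytes before an allocation; reject it if the cap would be exceeded.
    void reserve(long bytes)
    {
        long now = reserved.addAndGet(bytes);
        if (now > limitBytes)
        {
            reserved.addAndGet(-bytes);                 // roll back the reservation
            throw new IllegalStateException("operation memory limit exceeded: " + now + " > " + limitBytes);
        }
    }

    void release(long bytes)
    {
        reserved.addAndGet(-bytes);
    }
}
{code}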
> limit the on heap memory available to requests
> ----------------------------------------------
>
> Key: CASSANDRA-7402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7402
> Project: Cassandra
> Issue Type: Improvement
> Reporter: T Jake Luciani
> Fix For: 3.0
>
>
> When running a production cluster, one common operational issue is quantifying
> GC pauses caused by in-flight requests.
> Since different queries return varying amounts of data, you can easily get
> yourself into a situation where a couple of bad actors stop the world for the
> whole node. More likely, the aggregate garbage generated on a single node
> across all in-flight requests causes a GC.
> We should be able to set a limit on the max heap we can allocate to all
> outstanding requests, and track the garbage per request, to stop this from
> happening. It should increase a single node's availability substantially.
> In the yaml this would be
> {code}
> total_request_memory_space_mb: 400
> {code}
> It would also be nice to have a log of the queries that generate the most
> garbage, plus a histogram, so operators can track them.
--
This message was sent by Atlassian JIRA
(v6.2#6252)