[ https://issues.apache.org/jira/browse/CASSANDRA-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549953#comment-14549953 ]

Antoine Blanchet commented on CASSANDRA-9413:
---------------------------------------------

Got it. Thank you.

> Add a default limit size (in bytes) for requests
> ------------------------------------------------
>
>                 Key: CASSANDRA-9413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9413
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>         Environment: Cassandra 2.0.10, requested using Thrift
>            Reporter: Antoine Blanchet
>
> We experienced a crash on our production cluster following a massive wide-row
> read using Thrift.
> A client tried to read a wide row (~4GB of raw data) without specifying any
> slice condition, which resulted in the crash of multiple nodes (as many as
> the replication factor) after long garbage collections.
> We know that wide rows should not be that big, but that is not the topic here.
> My question is the following: Is it possible to prevent Cassandra from
> OOM'ing when a client issues this kind of request? I'd rather have an error
> thrown to the client than a multi-server crash.
> The issue has already been discussed on the user mailing list; the thread is
> here: https://www.mail-archive.com/user@cassandra.apache.org/msg42340.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
