[
https://issues.apache.org/jira/browse/CASSANDRA-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550793#comment-14550793
]
Kévin LOVATO commented on CASSANDRA-9413:
-----------------------------------------
@jbellis I agree that we (I'm Antoine's colleague) should migrate this project
to CQL, but wouldn't this OOM problem occur all the same if the user set the
page size to a ridiculously large value, say int.MaxValue?
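For illustration, here is a minimal sketch of what the CQL read could look like
with the DataStax Java driver (contact point, keyspace, table, and key names are
hypothetical): a bounded fetch size makes the coordinator stream the partition
in pages, whereas setFetchSize(Integer.MAX_VALUE) would effectively disable
paging and reintroduce the same OOM risk.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class PagedWideRowRead {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

            // A bounded fetch size tells the coordinator to return the partition
            // in pages of at most 5000 rows instead of materializing everything.
            Statement stmt = new SimpleStatement(
                    "SELECT col, value FROM wide_table WHERE pk = ?", "some_key") // hypothetical table
                    .setFetchSize(5000);

            for (Row row : session.execute(stmt)) {
                // Rows are pulled page by page as iteration advances; a fetch
                // size of Integer.MAX_VALUE would ask for the whole ~4GB row
                // in a single response, just like the unbounded Thrift read.
                System.out.println(row.getString("col"));
            }
            cluster.close();
        }
    }

So paging only protects the server as long as clients pick a sane page size,
which is why a server-side default limit still seems worthwhile.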
> Add a default limit size (in bytes) for requests
> ------------------------------------------------
>
> Key: CASSANDRA-9413
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9413
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Environment: Cassandra 2.0.10, requested using Thrift
> Reporter: Antoine Blanchet
>
> We experienced a crash on our production cluster following a massive wide row
> read using Thrift.
> A client tried to read a wide row (~4GB of raw data) without specifying any
> slice condition, which resulted in the crash of multiple nodes (as many as
> the replication factor) after long garbage collections.
> We know that wide rows should not be that big, but that is not the topic here.
> My question is the following: is it possible to prevent Cassandra from
> OOM'ing when a client makes this kind of request? I'd rather have an error
> thrown to the client than a multi-server crash.
> The issue has already been discussed on the user mailing list; the thread is
> here: https://www.mail-archive.com/[email protected]/msg42340.html
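For reference, the unbounded read Antoine describes can already be bounded on
the client side by capping the Thrift slice. A rough sketch (host, keyspace,
column family, and row key are hypothetical) of a get_slice call whose
SliceRange count limits how many columns come back:

    import java.nio.ByteBuffer;
    import java.util.List;
    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ColumnOrSuperColumn;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;

    public class BoundedSliceRead {
        public static void main(String[] args) throws Exception {
            TFramedTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("my_keyspace"); // hypothetical keyspace

            // Empty start/finish means "whole row", but count bounds the result,
            // so the server never tries to materialize all ~4GB in one response.
            SliceRange range = new SliceRange(
                    ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1000);
            SlicePredicate predicate = new SlicePredicate().setSlice_range(range);

            List<ColumnOrSuperColumn> columns = client.get_slice(
                    ByteBuffer.wrap("row_key".getBytes("UTF-8")), // hypothetical row key
                    new ColumnParent("wide_cf"),                  // hypothetical column family
                    predicate,
                    ConsistencyLevel.ONE);

            System.out.println("fetched " + columns.size() + " columns");
            transport.close();
        }
    }

Nothing forces a client to set a sane count, though, which is exactly why the
ticket asks for a server-side default limit in bytes.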