[ https://issues.apache.org/jira/browse/CASSANDRA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne resolved CASSANDRA-6052.
-----------------------------------------

    Resolution: Duplicate

This is a duplicate of CASSANDRA-4415 in the sense that CASSANDRA-4415 is the 
only reasonable solution to this known problem.

When a user submits a query with either no limit or a huge one, the server has no 
way to know in advance how big the result will be in practice. So the only way to 
avoid OOMing/exhausting memory is to page the result to the client chunk by chunk 
when it is too big. And that is what CASSANDRA-4415 added to Cassandra 2.0.
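For illustration only (not part of this ticket), here is a minimal sketch of what that
paging looks like from an updated client, assuming the DataStax Java driver 2.0+ speaking
native protocol v2 and a hypothetical table ks.huge_table: the client asks for a fixed
number of rows per page and the next page is fetched lazily while iterating, so neither
the server nor the client ever materializes the full result.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedRead {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        Statement stmt = new SimpleStatement("SELECT * FROM huge_table");
        stmt.setFetchSize(1000); // ask the server for 1000 rows per page

        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            // Iterating fetches the next page transparently once the current one is
            // drained, so the whole result set is never held in memory at once.
            System.out.println(row);
        }
        cluster.close();
    }
}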

Unfortunately, such a change requires some adaptation on the client side (the client 
must use native protocol v2), and not all clients have been updated yet, including 
cqlsh (though work is underway to fix that). In the meantime, if you use a client that 
does not speak native protocol v2, you must be careful not to submit queries that 
would yield a huge result set: use a reasonable limit and page in your application if 
need be, as sketched below.
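As an illustration of that workaround, here is a rough sketch of application-side paging
for a client still on protocol v1 (assumptions: a hypothetical table ks.huge_table with
partition key "id", Murmur3Partitioner, and the DataStax Java driver; prepared statements
with a token() range work on protocol v1). Each query is bounded by a fixed LIMIT and the
next query restarts from the last token seen, so no single result set can blow up the heap.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ManualPaging {
    private static final int PAGE_SIZE = 1000;

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // Classic pre-2.0 pattern: scan the token ring in bounded slices.
        PreparedStatement ps = session.prepare(
            "SELECT id, token(id) FROM huge_table WHERE token(id) > ? LIMIT " + PAGE_SIZE);

        long lastToken = Long.MIN_VALUE; // lowest possible Murmur3 token
        boolean more = true;
        while (more) {
            ResultSet rs = session.execute(ps.bind(lastToken));
            int rows = 0;
            for (Row row : rs) {
                lastToken = row.getLong(1); // token(id) is the second selected column
                rows++;
                // handle the row here
            }
            more = (rows == PAGE_SIZE); // a short page means the scan reached the end
        }
        cluster.close();
    }
}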

                
> memory exhaustion
> -----------------
>
>                 Key: CASSANDRA-6052
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6052
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Debian, Cassandra 2.0
>            Reporter: arnaud-lb
>             Fix For: 2.0.1
>
>         Attachments: jconsole.png
>
>
> Issuing queries such as "select * from huge_table limit 1000000000" or "copy 
> hugetable to ..." reliably exhausts Cassandra's heap space (in cqlsh, at 
> least).
> The JVM then gets stuck in a Full GC loop, GC fails to free anything, and 
> Cassandra becomes unresponsive and never recovers.
