[
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291297#comment-14291297
]
Jens Rantil edited comment on CASSANDRA-8574 at 5/3/15 7:57 PM:
----------------------------------------------------------------
I guess to do this, one would also have to be able to receive tombstones in the
result in order to page over them...
was (Author: ztyx):
I guess to do this, one would also have to be able to receive tombstones in the
result in order to page over them...
> Gracefully degrade SELECT when there are lots of tombstones
> -----------------------------------------------------------
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Jens Rantil
> Fix For: 3.x
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on
> Cassandra clusters. Examples are Spark and Hadoop, both offered by DSE.
> The problem with both of these so far is that a single partition key with
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small
> FetchSize. I assume this is a common scenario if you have large rows.
> *Proposal:* Allow a CQL SELECT to gracefully degrade and return a
> smaller batch of results if there are too many tombstones. The tombstones are
> ordered according to the clustering key and one should be able to page through
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through at most 1000 tombstones, _or_ 1000 (CQL) rows, whichever
> comes first.
> I understand that this obviously would degrade performance, but it would at
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I
> guess this would be doable. Let me know what you think.
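The degradation rule proposed above could be sketched roughly as follows. This is a hypothetical simulation, not actual Cassandra code: the function name and the exact paging rule (a page closes once either the tombstone count or the live-row count reaches the limit) are assumptions about how a `LIMIT ... TOMBSTONES` clause might behave.

```python
def page_with_tombstone_limit(cells, limit):
    """Split a clustering-ordered list of cells into pages.

    `cells` is a list of (key, is_tombstone) tuples. A page closes as soon
    as it has accumulated `limit` tombstones or `limit` live rows, whichever
    comes first, so a tombstone-heavy partition still yields partial results
    instead of failing the whole query.
    """
    pages = []
    current, live, dead = [], 0, 0
    for key, is_tombstone in cells:
        current.append((key, is_tombstone))
        if is_tombstone:
            dead += 1
        else:
            live += 1
        if live >= limit or dead >= limit:
            pages.append(current)
            current, live, dead = [], 0, 0
    if current:
        pages.append(current)
    return pages

# Example: 6 live rows interleaved with 6 tombstones (even keys are
# tombstones), with a limit of 3 per page.
cells = [(i, i % 2 == 0) for i in range(12)]
pages = page_with_tombstone_limit(cells, 3)
```

The point of the sketch is only that the client keeps receiving (partial) pages in clustering order, rather than hitting a hard tombstone-overwhelm failure mid-query.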
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)