[ https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

C. Scott Andreas updated CASSANDRA-8574:
----------------------------------------
    Component/s: CQL

> Gracefully degrade SELECT when there are lots of tombstones
> -----------------------------------------------------------
>
>                 Key: CASSANDRA-8574
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Jens Rantil
>            Priority: Major
>             Fix For: 4.x
>
>
> *Background:* There is a lot of tooling out there for doing big-data analysis 
> on Cassandra clusters, for example Spark and Hadoop, both of which are offered 
> through DSE. The problem with both of them is that a single partition key with 
> too many tombstones can make the query job fail hard.
> This happens even when the user sets a rather small fetch size; I assume it is 
> a common scenario if you have larger rows. (A client-side sketch of the current 
> behavior follows the description below.)
> *Proposal:* Allow a CQL SELECT to gracefully degrade and return a smaller 
> batch of results when there are too many tombstones. The tombstones are 
> ordered by clustering key, and one should be able to page through them. 
> Potentially:
>     SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through at most 1000 tombstones _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into the Cassandra code, but conceptually 
> I guess this would be doable. Let me know what you think.
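>
> For illustration, here is a minimal client-side sketch of the current 
> behavior, assuming the DataStax Java driver 3.x; the keyspace, table, column 
> and key names are placeholders. Even with a small fetch size, the coordinator 
> may abort the read once the server-side tombstone_failure_threshold is 
> exceeded while assembling a single page:
>     import com.datastax.driver.core.Cluster;
>     import com.datastax.driver.core.ResultSet;
>     import com.datastax.driver.core.Row;
>     import com.datastax.driver.core.Session;
>     import com.datastax.driver.core.SimpleStatement;
>     import com.datastax.driver.core.Statement;
>
>     public class TombstoneHeavyScan {
>         public static void main(String[] args) {
>             try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
>                  Session session = cluster.connect("mykeyspace")) {
>                 // Ask for small pages: at most 100 live rows per round trip.
>                 Statement stmt = new SimpleStatement(
>                         "SELECT * FROM mytable WHERE partition_key = ?", "some-key")
>                         .setFetchSize(100);
>                 // To fill even one 100-row page the coordinator may have to scan
>                 // far more tombstones than live rows; once it passes
>                 // tombstone_failure_threshold the whole read fails (surfaced to
>                 // the client as a read failure/timeout), so a small fetch size
>                 // alone does not bound the tombstones scanned per page.
>                 ResultSet rs = session.execute(stmt);
>                 int rows = 0;
>                 for (Row row : rs) {   // transparent paging over the result set
>                     rows++;
>                 }
>                 System.out.println("Read " + rows + " rows");
>             }
>         }
>     }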



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
