[
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jens Rantil updated CASSANDRA-8574:
-----------------------------------
Description:
*Background:* There is a lot of tooling out there for doing big-data analysis on
Cassandra clusters; examples are Spark and Hadoop, both offered through DSE. The
problem with both of these so far is that a single partition key with too many
tombstones can make the query job fail hard.
This happens despite the user setting a rather small FetchSize. I assume it is a
common scenario if you have larger rows.
*Proposal:* Allow a CQL SELECT to gracefully degrade and return a smaller batch
of results when there are too many tombstones. Tombstones are ordered by
clustering key, so one should be able to page through them. Potentially:
SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
would page through at most 1000 tombstones _or_ 1000 (CQL) rows, whichever comes
first.
I understand that this would obviously degrade performance, but it would at
least yield a result.
*Additional comment:* I haven't dug into the Cassandra code, but conceptually I
guess this would be doable. Let me know what you think.
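Purely as an illustration of the proposed paging semantics (the names Cell and scan_partition are hypothetical, not Cassandra internals), a page would close once either the row limit or the tombstone limit is hit, whichever comes first:

```python
# Illustrative sketch of "LIMIT n TOMBSTONES" paging semantics.
# Cell and scan_partition are made-up names, not Cassandra code.

from dataclasses import dataclass
from typing import Iterator, List, Tuple


@dataclass
class Cell:
    clustering_key: int
    value: object  # None models a tombstone

    @property
    def is_tombstone(self) -> bool:
        return self.value is None


def scan_partition(cells: List[Cell], limit: int) -> Iterator[Tuple[List[Cell], int]]:
    """Yield (live_rows, tombstones_scanned) pages; a page closes once
    `limit` live rows *or* `limit` tombstones have been scanned."""
    page: List[Cell] = []
    tombstones = 0
    for cell in sorted(cells, key=lambda c: c.clustering_key):
        if cell.is_tombstone:
            tombstones += 1
        else:
            page.append(cell)
        if len(page) >= limit or tombstones >= limit:
            yield page, tombstones
            page, tombstones = [], 0
    if page or tombstones:
        yield page, tombstones


# Example: 6 live rows interleaved with 6 tombstones, page limit 3.
data = [Cell(i, None if i % 2 else i) for i in range(12)]
pages = list(scan_partition(data, 3))
```

The second page here closes on the tombstone limit rather than the row limit, which is exactly the graceful degradation being proposed: the query returns fewer rows per page instead of failing outright.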
> Gracefully degrade SELECT when there are lots of tombstones
> -----------------------------------------------------------
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Jens Rantil
> Fix For: 3.0
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)