[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-09 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13895870#comment-13895870
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Thanks for taking the time to explain.

 TombstoneOverwhelmingException during/after batch insert
 

 Key: CASSANDRA-6674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6674
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.4; 2.0.5 
 Mac OS X
Reporter: Machiel Groeneveld
Priority: Critical

 A select query on a table I'm inserting into fails with a tombstone 
 exception. The database is clean/empty before the inserts; the first query 
 runs after a few thousand records have been inserted. I don't understand 
 where the tombstones are coming from, as I'm not doing any deletes.
 ERROR [ReadStage:41] 2014-02-07 12:16:42,169 SliceQueryFilter.java (line 200) 
 Scanned over 10 tombstones in visits.visits; query aborted (see 
 tombstone_fail_threshold)
 ERROR [ReadStage:41] 2014-02-07 12:16:42,171 CassandraDaemon.java (line 192) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
 at 
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
 at 
 org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
 at 
 org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 ... 3 more



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894426#comment-13894426
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

*Table*
create table IF NOT EXISTS visits.visits(
id text,
cookie_uuid text, external_click_id text, session_id text,
visitor_ip text, user_agent text, uuid_hash text,
shop_product_id int, channel_id int, shop_id int, shop_category_id int,
type int, medium_id int, campaign_id int, channel_affiliate_id int,
default_cpc float,
created_at timestamp, updated_at timestamp, time_id int,
disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
primary key(time_id, id));

*Insert statement*
BEGIN BATCH
insert into visits (
id, cookie_uuid, uuid_hash,
default_cpc,
external_click_id, session_id, visitor_ip, user_agent,
shop_product_id, channel_id, shop_id, shop_category_id,
type, medium_id, campaign_id, channel_affiliate_id,
disabled, has_referer, known_visitor, marketing,
created_at, updated_at, time_id)
VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
USING TTL 7776000;
insert into visits_by_cookie (visit_id, time_id, cookie_uuid, shop_id, 
created_at, enabled_visit)
VALUES(?, ?, ?, ?, ?, ?) USING TTL 7776000;
insert into visits_by_hash (visit_id, time_id, uuid_hash, shop_id, created_at)
VALUES(?, ?, ?, ?, ?) USING TTL 7776000;
APPLY BATCH;

*Select query that fails*
SELECT * FROM VISITS WHERE time_id = 



[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894485#comment-13894485
 ] 

Aleksey Yeschenko commented on CASSANDRA-6674:
--

How many of those inserted values are nulls?



[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894501#comment-13894501
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Just a rough estimate, but one value is null in 99% of the cases, and another 
in about 80% of the records.



[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894505#comment-13894505
 ] 

Aleksey Yeschenko commented on CASSANDRA-6674:
--

Well, inserting a null is equivalent to creating a tombstone - the same as 
doing a delete. That is where your tombstones are coming from. Raise the 
tombstone fail threshold, or do more targeted SELECT queries.



[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894561#comment-13894561
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Is there a way to make the tombstones go away? Can I force a cleanup, for 
instance?
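For context: tombstones are not removed by a flush; they persist until the table's `gc_grace_seconds` window (default 864000 seconds, i.e. 10 days) has elapsed and the SSTables holding them are compacted. A sketch of the usual knob, with an illustrative value - lowering `gc_grace_seconds` is only safe if repair runs more often than the new window (single-node dev setups can go much lower):

```sql
-- cqlsh: make tombstones purgeable sooner (3600 is illustrative)
ALTER TABLE visits.visits WITH gc_grace_seconds = 3600;
```

After the window passes, a major compaction (`nodetool compact visits visits`) drops the purgeable tombstones; avoiding null binds in the INSERTs prevents new ones from accumulating.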
