[jira] [Issue Comment Deleted] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-09 Thread Machiel Groeneveld (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Machiel Groeneveld updated CASSANDRA-6674:
--

Comment: was deleted

(was: Is there a way to make the tombstones go away? Can I force a cleanup, for 
instance?)

 TombstoneOverwhelmingException during/after batch insert
 

 Key: CASSANDRA-6674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6674
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.4; 2.0.5 
 Mac OS X
Reporter: Machiel Groeneveld
Priority: Critical

 Select query on a table where I'm doing insert fails with tombstone 
 exception. The database is clean/empty before doing inserts, doing the first 
 query after a few thousand records inserted. I don't understand where the 
 tombstones are coming from as I'm not doing any deletes.
 ERROR [ReadStage:41] 2014-02-07 12:16:42,169 SliceQueryFilter.java (line 200) 
 Scanned over 10 tombstones in visits.visits; query aborted (see 
 tombstone_fail_threshold)
 ERROR [ReadStage:41] 2014-02-07 12:16:42,171 CassandraDaemon.java (line 192) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 Caused by: org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:202)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
 at 
 org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1607)
 at 
 org.apache.cassandra.db.ColumnFamilyStore$9.computeNext(ColumnFamilyStore.java:1603)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1754)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1718)
 at 
 org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:137)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
 ... 3 more



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-09 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895870#comment-13895870
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Thanks for taking the time to explain.



[jira] [Updated] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Machiel Groeneveld updated CASSANDRA-6674:
--

Summary: TombstoneOverwhelmingException during/after batch insert  (was: 
TombstoneOver)



[jira] [Created] (CASSANDRA-6674) TombstoneOver

2014-02-07 Thread Machiel Groeneveld (JIRA)
Machiel Groeneveld created CASSANDRA-6674:
-

 Summary: TombstoneOver
 Key: CASSANDRA-6674
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6674
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.4; 2.0.5 
Mac OS X
Reporter: Machiel Groeneveld
Priority: Critical


(error log and stack trace, identical to the copy quoted above)


[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894426#comment-13894426
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

*Table*
create table IF NOT EXISTS visits.visits(
id text,
cookie_uuid text, cookie_uuid text, external_click_id text, session_id text,
visitor_ip text, user_agent text, uuid_hash text,
shop_product_id int, channel_id int, shop_id int, shop_category_id int,
type int, medium_id int, campaign_id int, channel_affiliate_id int,
default_cpc float,
created_at timestamp, updated_at timestamp, time_id int,
disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
primary key(time_id, id));

*Insert statement*
BEGIN BATCH
insert into visits (
id, cookie_uuid, uuid_hash,
default_cpc, cookie_uuid,
external_click_id, session_id, visitor_ip, user_agent,
shop_product_id, channel_id, shop_id, shop_category_id,
type, medium_id, campaign_id, channel_affiliate_id,
disabled, has_referer, known_visitor, marketing,
created_at, updated_at, time_id)
VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) 
USING TTL 7776000
insert into visits_by_cookie (visit_id, time_id, cookie_uuid, shop_id, 
created_at, enabled_visit)
VALUES(?, ?, ?, ?, ?, ?) USING TTL 7776000
insert into visits_by_hash (visit_id, time_id, uuid_hash, shop_id, created_at)
VALUES(?, ?, ?, ?, ?) USING TTL 7776000
APPLY BATCH

*Select query that fails*
SELECT * FROM VISITS WHERE time_id = 
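A rough way to see why a few thousand inserted rows can trip the limit, sketched in Python. The key assumption (not confirmed in this thread) is that every null bound value in the batch writes one cell tombstone into the single `time_id` partition that the failing SELECT reads; the 100000 default matches `tombstone_failure_threshold` in the 2.0.x cassandra.yaml:

```python
# Back-of-the-envelope sketch; row counts and null counts are assumptions.
TOMBSTONE_FAIL_THRESHOLD = 100_000  # cassandra.yaml default in 2.0.x


def tombstones_scanned(rows: int, null_columns_per_row: int) -> int:
    """Cell tombstones a slice query over one partition has to read."""
    return rows * null_columns_per_row


def query_aborts(rows: int, null_columns_per_row: int) -> bool:
    """True when the scan would raise TombstoneOverwhelmingException."""
    return tombstones_scanned(rows, null_columns_per_row) > TOMBSTONE_FAIL_THRESHOLD


# With ~24 columns per row, a handful of nulls per row adds up quickly:
print(query_aborts(rows=20_000, null_columns_per_row=6))  # → True
print(query_aborts(rows=3_000, null_columns_per_row=2))   # → False
```

Nothing here is specific to batches: the batch only makes it easy to insert many null-bearing rows into the same partition before the first read.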



[jira] [Updated] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Machiel Groeneveld updated CASSANDRA-6674:
--

Description: 
Select query on a table where I'm doing insert fails with tombstone exception. 
The database is clean/empty before doing inserts, doing the first query after a 
few thousand records inserted. I don't understand where the tombstones are 
coming from as I'm not doing any deletes.

(error log and stack trace, identical to the copy quoted above)

  was: the error log and stack trace only, without the explanatory paragraph

[jira] [Comment Edited] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894426#comment-13894426
 ] 

Machiel Groeneveld edited comment on CASSANDRA-6674 at 2/7/14 11:25 AM:


*Table*
create table IF NOT EXISTS visits.visits(
id text,
cookie_uuid text, cookie_uuid text, external_click_id text, session_id text,
visitor_ip text, user_agent text, uuid_hash text,
shop_product_id int, channel_id int, shop_id int, shop_category_id int,
type int, medium_id int, campaign_id int, channel_affiliate_id int,
default_cpc float,
created_at timestamp, updated_at timestamp, time_id int,
disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
primary key(time_id, id));

*Insert statement*
BEGIN BATCH
insert into visits (
id, cookie_uuid, uuid_hash,
default_cpc, cookie_uuid,
external_click_id, session_id, visitor_ip, user_agent,
shop_product_id, channel_id, shop_id, shop_category_id,
type, medium_id, campaign_id, channel_affiliate_id,
disabled, has_referer, known_visitor, marketing,
created_at, updated_at, time_id)
VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) 
USING TTL 7776000
insert into visits_by_cookie (visit_id, time_id, cookie_uuid, shop_id, 
created_at, enabled_visit)
VALUES(?, ?, ?, ?, ?, ?) USING TTL 7776000
insert into visits_by_hash (visit_id, time_id, uuid_hash, shop_id, created_at)
VALUES(?, ?, ?, ?, ?) USING TTL 7776000
APPLY BATCH

*Select query that fails*
SELECT * FROM VISITS


was (Author: machielg):
*Table*
create table IF NOT EXISTS visits.visits(
id text,
cookie_uuid text, cookie_uuid text, external_click_id text, session_id text,
visitor_ip text, user_agent text, uuid_hash text,
shop_product_id int, channel_id int, shop_id int, shop_category_id int,
type int, medium_id int, campaign_id int, channel_affiliate_id int,
default_cpc float,
created_at timestamp, updated_at timestamp, time_id int,
disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
primary key(time_id, id));

*Insert statement*
BEGIN BATCH
insert into visits (
id, cookie_uuid, uuid_hash,
default_cpc, cookie_uuid,
external_click_id, session_id, visitor_ip, user_agent,
shop_product_id, channel_id, shop_id, shop_category_id,
type, medium_id, campaign_id, channel_affiliate_id,
disabled, has_referer, known_visitor, marketing,
created_at, updated_at, time_id)
VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) 
USING TTL 7776000
insert into visits_by_cookie (visit_id, time_id, cookie_uuid, shop_id, 
created_at, enabled_visit)
VALUES(?, ?, ?, ?, ?, ?) USING TTL 7776000
insert into visits_by_hash (visit_id, time_id, uuid_hash, shop_id, created_at)
VALUES(?, ?, ?, ?, ?) USING TTL 7776000
APPLY BATCH

*Select query that fails*
SELECT * FROM VISITS WHERE time_id = 


[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894501#comment-13894501
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Just a rough estimate, but one value is null in 99% of the cases and another 
value in 80% of the records.
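Under that estimate the tombstone arithmetic is straightforward, if each null column stores one cell tombstone. A sketch with a purely hypothetical row count:

```python
# Hypothetical numbers: only the 99% / 80% null ratios come from the thread.
rows = 100_000                   # assumed batch size, for illustration only
null_col_a = rows * 99 // 100    # column null in ~99% of rows
null_col_b = rows * 80 // 100    # column null in ~80% of rows
total_tombstones = null_col_a + null_col_b
print(total_tombstones)  # → 179000
```

So on average each row would contribute almost two tombstones, well past the default failure threshold at this scale.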



[jira] [Comment Edited] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13894501#comment-13894501
 ] 

Machiel Groeneveld edited comment on CASSANDRA-6674 at 2/7/14 2:19 PM:
---

Just a rough estimate, but one value is null in 99% of the cases and another 
value in 80% of the records.


was (Author: machielg):
Just rough estimate, but one value is 99% if the cases null and one other value 
80% of the records.



[jira] [Commented] (CASSANDRA-6674) TombstoneOverwhelmingException during/after batch insert

2014-02-07 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13894561#comment-13894561
 ] 

Machiel Groeneveld commented on CASSANDRA-6674:
---

Is there a way to make the tombstones go away? Can I force a cleanup, for 
instance?



[jira] [Commented] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-06 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13893194#comment-13893194
 ] 

Machiel Groeneveld commented on CASSANDRA-6528:
---

It would be great if the ticket can be re-opened.

 TombstoneOverwhelmingException is thrown while populating data in recently 
 truncated CF
 ---

 Key: CASSANDRA-6528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6528
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Linux, 6 nodes
Reporter: Nikolai Grigoriev
Priority: Minor

 I am running some performance tests and recently I had to flush the data from 
 one of the tables and repopulate it. I have about 30M rows with a few columns 
 in each, about 5 KB per row in total. In order to repopulate the data I 
 truncate the table from cqlsh and then relaunch the test. The test simply 
 inserts the data into the table and does not read anything. Shortly after 
 restarting the data generator I see this on one of the nodes:
 {code}
  INFO [HintedHandoff:655] 2013-12-26 16:45:42,185 HintedHandOffManager.java 
 (line 323) Started hinted handoff for host: 985c8a08-3d92-4fad-a1d1-7135b2b9774a with IP: /10.5.45.158
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 SliceQueryFilter.java (line 
 200) Scanned over 10 tombstones; query aborted (see tombstone_fail_threshold)
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 CassandraDaemon.java (line 
 187) Exception in thread Thread[HintedHandoff:655,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
 at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$4(HintedHandOffManager.java:281)
 at 
 org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
  INFO [OptionalTasks:1] 2013-12-26 16:45:53,946 MeteredFlusher.java (line 63) 
 flushing high-traffic column family CFS(Keyspace='test_jmeter', 
 ColumnFamily='test_profiles') (estimated 192717267 bytes)
 {code}
 I am inserting the data with CL=1.
 It seems to happen every time I do it. But I do not see any errors on 
 the client side and the node seems to continue operating, which is why I think 
 it is not a major issue. Maybe not an issue at all, but the message is logged 
 as ERROR.





[jira] [Commented] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892315#comment-13892315
 ] 

Machiel Groeneveld commented on CASSANDRA-6528:
---

I have the same issue: after inserting 216258 records (in one row) into a new 
database (I removed all files from the data directory before starting) I 
couldn't run a select query (something like 'select * from partition_key = x').



[jira] [Comment Edited] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892315#comment-13892315
 ] 

Machiel Groeneveld edited comment on CASSANDRA-6528 at 2/5/14 5:14 PM:
---

I have the same issue: after inserting 216258 records (in one row) into a new 
database (I removed all files from the data directory before starting) I 
couldn't run a select query (something like 'select * from partition_key = x').

In the log I get org.apache.cassandra.db.filter.TombstoneOverwhelmingException


was (Author: machielg):
I have the same issue, after inserting 216258 records (in one row) in a new 
database (I removed all files in the data directory files before starting) I 
couldn't run a select query (something like 'select * from partition_key = x')



[jira] [Commented] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892376#comment-13892376
 ] 

Machiel Groeneveld commented on CASSANDRA-6528:
---

create table IF NOT EXISTS visits.visits(
  id text,
  cookie_uuid text, external_click_id text, session_id text,
  visitor_ip text, user_agent text, uuid_hash text,
  shop_product_id int, channel_id int, shop_id int, shop_category_id int,
  type int, medium_id int, campaign_id int, channel_affiliate_id int,
  default_cpc float,
  created_at timestamp, updated_at timestamp, time_id int,
  disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
  primary key(time_id, id));




[jira] [Comment Edited] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892376#comment-13892376
 ] 

Machiel Groeneveld edited comment on CASSANDRA-6528 at 2/5/14 6:07 PM:
---

create table IF NOT EXISTS visits.visits(
  id text,
  cookie_uuid text, external_click_id text, session_id text,
  visitor_ip text, user_agent text, uuid_hash text,
  shop_product_id int, channel_id int, shop_id int, shop_category_id int,
  type int, medium_id int, campaign_id int, channel_affiliate_id int,
  default_cpc float,
  created_at timestamp, updated_at timestamp, time_id int,
  disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
  primary key(time_id, id));

SELECT * FROM VISITS



was (Author: machielg):
create table IF NOT EXISTS visits.visits(
  id text,
  cookie_uuid text, external_click_id text, session_id text,
  visitor_ip text, user_agent text, uuid_hash text,
  shop_product_id int, channel_id int, shop_id int, shop_category_id int,
  type int, medium_id int, campaign_id int, channel_affiliate_id int,
  default_cpc float,
  created_at timestamp, updated_at timestamp, time_id int,
  disabled int, has_referer boolean, known_visitor boolean, marketing boolean,
  primary key(time_id, id));




[jira] [Comment Edited] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892315#comment-13892315
 ] 

Machiel Groeneveld edited comment on CASSANDRA-6528 at 2/5/14 6:10 PM:
---

I have the same issue: after inserting 216258 records (sharing the same 
partition key) into a new database (I reinstalled Cassandra) I couldn't run a 
select query (something like 'select * from partition_key = x'). A 
count(*) on the table also gives me tombstone warnings. I'm not expecting any 
tombstones, as they are all inserts (though I'm not 100% sure about possible 
overwriting).

In the log I get org.apache.cassandra.db.filter.TombstoneOverwhelmingException


was (Author: machielg):
I have the same issue, after inserting 216258 records (in one row) in a new 
database (I removed all files in the data directory files before starting) I 
couldn't run a select query (something like 'select * from partition_key = x')

In the log I get org.apache.cassandra.db.filter.TombstoneOverwhelmingException



[jira] [Commented] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-02-05 Thread Machiel Groeneveld (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892475#comment-13892475
 ] 

Machiel Groeneveld commented on CASSANDRA-6528:
---

Only doing inserts (query below), no updates. I'm not sure about inserting null 
values; I will get back on that.

 BEGIN BATCH
  insert into visits (
id, cookie_uuid, uuid_hash, default_cpc,
external_click_id, session_id, visitor_ip, user_agent,
shop_product_id, channel_id, shop_id, shop_category_id,
type, medium_id, campaign_id, channel_affiliate_id,
disabled, has_referer, known_visitor, marketing,
created_at, updated_at, time_id)
VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?) USING TTL 7776000
  insert into visits_by_cookie (visit_id, time_id, cookie_uuid, 
shop_id, created_at, enabled_visit)
VALUES(?, ?, ?, ?, ?, ?) USING TTL 7776000
  insert into visits_by_hash (visit_id, time_id, uuid_hash, shop_id, 
created_at)
VALUES(?, ?, ?, ?, ?) USING TTL 7776000
APPLY BATCH
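
If null bindings in that batch turn out to be the culprit, one common workaround (a sketch under that assumption; `build_insert` is a hypothetical helper, not a driver API) is to include only the non-null columns in each INSERT. Columns left out of an INSERT are simply absent from the row, whereas columns explicitly bound to null are written as cell tombstones:

```python
def build_insert(table, row, ttl=7776000):
    # Keep only columns that actually have values; omitted columns are
    # absent from the row, while null-bound columns would become tombstones.
    cols = [c for c, v in row.items() if v is not None]
    placeholders = ", ".join("?" for _ in cols)
    return (f"INSERT INTO {table} ({', '.join(cols)}) "
            f"VALUES ({placeholders}) USING TTL {ttl}")

# Hypothetical row: the mostly-null external_click_id is dropped entirely.
row = {"id": "a1", "time_id": 20140205, "external_click_id": None}
print(build_insert("visits", row))
# → INSERT INTO visits (id, time_id) VALUES (?, ?) USING TTL 7776000
```

The trade-off is one prepared statement per column combination instead of a single statement for all rows.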

 TombstoneOverwhelmingException is thrown while populating data in recently 
 truncated CF
 ---

 Key: CASSANDRA-6528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6528
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Linux, 6 nodes
Reporter: Nikolai Grigoriev
Priority: Minor

 I am running some performance tests, and recently I had to flush the data from 
 one of the tables and repopulate it. I have about 30M rows with a few columns 
 each, about 5kb per row in total. To repopulate the data I truncate the table 
 from cqlsh and then relaunch the test. The test simply inserts data into the 
 table and does not read anything. Shortly after restarting the data generator 
 I see this on one of the nodes:
 {code}
  INFO [HintedHandoff:655] 2013-12-26 16:45:42,185 HintedHandOffManager.java 
 (line 323) Started hinted handoff for host: 
 985c8a08-3d92-4fad-a1d1-7135b2b9774a with IP: /10.5.45.158
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 SliceQueryFilter.java (line 
 200) Scanned over 10 tombstones; query aborted (see tombstone_fail_threshold)
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 CassandraDaemon.java (line 
 187) Exception in thread Thread[HintedHandoff:655,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
 at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$4(HintedHandOffManager.java:281)
 at 
 org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
  INFO [OptionalTasks:1] 2013-12-26 16:45:53,946 MeteredFlusher.java (line 63) 
 flushing high-traffic column family CFS(Keyspace='test_jmeter', 
 ColumnFamily='test_profiles') (estimated 192717267 bytes)
 {code}
 I am inserting the data with CL=1.
 It seems to be happening every time I do it. But I do not see any errors on 
 the client side and the node seems to continue operating, this is why I think 
 it is not a major issue. Maybe not an issue at all, but the message is logged 
 as ERROR.
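For context, the abort referenced in the log is driven by two cassandra.yaml settings introduced in the 2.0.x line, tombstone_warn_threshold (default 1000) and tombstone_failure_threshold (default 100000): a single slice query counts the tombstones it scans and is aborted once the failure threshold is exceeded. A simplified Python sketch of that accounting (a model of the SliceQueryFilter behavior, not the actual Java source; thresholds assume the 2.0.x defaults):

```python
# Simplified model of SliceQueryFilter's tombstone accounting. Assumed
# cassandra.yaml defaults in 2.0.x: warn at 1000, fail at 100000.
WARN_THRESHOLD = 1_000
FAIL_THRESHOLD = 100_000

class TombstoneOverwhelming(Exception):
    """Stands in for TombstoneOverwhelmingException."""

def scan(cells):
    """Collect live cells, counting tombstones; abort past the fail threshold."""
    live, tombstones = [], 0
    for cell in cells:
        if cell == "tombstone":
            tombstones += 1
            if tombstones > FAIL_THRESHOLD:
                # mirrors: "Scanned over N tombstones; query aborted"
                raise TombstoneOverwhelming(
                    f"Scanned over {FAIL_THRESHOLD} tombstones; query aborted")
        else:
            live.append(cell)
    if tombstones > WARN_THRESHOLD:
        print(f"Read {len(live)} live and {tombstones} tombstone cells")
    return live
```

This also explains why the client sees nothing while the workload is write-only: the exception fires on the read path (here, hinted handoff replaying hints), so inserts keep succeeding while the scan on the server aborts and logs at ERROR.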


