[jira] [Resolved] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-13 Thread Andrew S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew S resolved CASSANDRA-8089.
-
Resolution: Not a Problem

Thank you for the quick feedback! It was our own mistake: we were inserting data 
into Cassandra with some columns explicitly set to null, which is why we were 
seeing tombstone warnings. Closing the ticket.
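For anyone hitting the same symptom: in CQL, explicitly binding a column to null in an INSERT or UPDATE writes a cell tombstone for that column, while simply omitting the column does not. A minimal illustration against a hypothetical table (all names here are made up, not from this ticket):

{code}
-- Hypothetical table, for illustration only
CREATE TABLE ks.events (
    key text,
    aggregate text,
    t timeuuid,
    payload text,
    PRIMARY KEY ((key, aggregate), t)
);

-- Writes a tombstone for payload; these count toward tombstone_warn_threshold on reads
INSERT INTO ks.events (key, aggregate, t, payload) VALUES ('k', 'a', now(), null);

-- Writes no tombstone; payload is simply left unset
INSERT INTO ks.events (key, aggregate, t) VALUES ('k', 'a', now());
{code}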

 Invalid tombstone warnings / exceptions
 ---

 Key: CASSANDRA-8089
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8089
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Debian 7.6, 3.2.0-4-amd64 GNU/Linux
 java version 1.7.0_51
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Reporter: Andrew S

 Hey,
 We are having a strange issue with tombstone warnings which look like this:
 {code}
 WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
 tombstone_warn_threshold). 500 columns was requested, 
 slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
  delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
 {code}
 What is strange is that the requested row should not contain any tombstones, 
 as we never delete data from that row. (We do delete data from other rows in 
 the same column family.)
 To debug the issue we dumped the data for this row using sstable2json, and 
 the result does not contain any tombstones. (We did this on all nodes that 
 hold the data and on every sstable containing the key.)
 {code}
 ./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
 xxx
 {code}
 We are getting the warnings after issuing a simple query:
 {code}
 select count(*) from xxx where key = 'x' and aggregate='x';
 {code}
 There are only ~500 cells but it issues a warning about scanning 1700 
 tombstones.
 We are very worried about this because for some of the queries we are hitting 
 TombstoneOverwhelmingException for no obvious reason.
 Here is the table definition:
 {code}
 CREATE TABLE Xxxx.xxx (
 key text,
 aggregate text,
 t timeuuid,
 . {date fields }
 PRIMARY KEY ((key, aggregate), t)
 ) WITH CLUSTERING ORDER BY (t ASC)
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = 'we love cassandra'
 AND compaction = {'min_threshold': '6', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.SnappyCompressor'}
 AND dclocal_read_repair_chance = 0.0
 AND default_time_to_live = 0
 AND gc_grace_seconds = 3600
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.1
 AND speculative_retry = '99.0PERCENTILE';
 {code}
 Do you have any ideas on how we can debug this further?
 Thanks,
 Andrew
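A quick way to double-check a dump like the one above is to count cells carrying a deletion flag in the sstable2json output. A rough sketch, assuming the 2.1-era JSON layout in which each cell is an array whose optional fourth element marks the cell kind (e.g. "d" for deleted, "e" for expiring); the field names and sample data are assumptions, not taken from the ticket:

```python
import json

def count_flagged_cells(dump_json):
    """Count cells in an sstable2json-style dump that carry a deletion/expiry flag."""
    tombstones = 0
    for row in json.loads(dump_json):
        for cell in row.get("cells", []):
            # Assumed 2.1-era layout: [name, value, timestamp, flag?, ...]
            # where flag "d" = deleted cell, "e" = expiring (TTL) cell.
            if len(cell) > 3 and cell[3] in ("d", "e"):
                tombstones += 1
    return tombstones

sample = '[{"key": "xxx", "cells": [["c1", "v", 1], ["c2", "", 2, "d"], ["c3", "v", 3, "e", 3600]]}]'
print(count_flagged_cells(sample))  # 2
```

If this count is zero on every node and every sstable holding the key, as reported above, the tombstones seen by the read path must come from somewhere other than the dumped data.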



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew S updated CASSANDRA-8089:

Description: 
Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the requested row should not contain any tombstones, as 
we never delete data from that row. (We do delete data from other rows in the 
same column family.)

To debug the issue we dumped the data for this row using sstable2json, and the 
result does not contain any tombstones. (We did this on all nodes that hold the 
data and on every sstable containing the key.)

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Do you have any ideas on how we can debug this further?

Thanks,
Andrew


[jira] [Comment Edited] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166563#comment-14166563
 ] 

Andrew S edited comment on CASSANDRA-8089 at 10/10/14 8:46 AM:
---

We are also getting the following exception on one of the servers after 
flushing another table with similar structure.

{code}
2014-10-10T07:02:46.402+ su8 ERROR [CompactionExecutor:2] 2014-10-10 09:02:46,396 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:2,1,main]
java.lang.IllegalStateException: Unable to compute ceiling for max when histogram overflowed
	at org.apache.cassandra.utils.EstimatedHistogram.mean(EstimatedHistogram.java:203) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.io.sstable.metadata.StatsMetadata.getEstimatedDroppableTombstoneRatio(StatsMetadata.java:98) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.io.sstable.SSTableReader.getEstimatedDroppableTombstoneRatio(SSTableReader.java:1805) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:297) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:106) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:229) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
	at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{code}
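The exception above comes from Cassandra's EstimatedHistogram: values larger than the largest bucket offset land in a final overflow bucket, and once that bucket is non-empty, mean() and related statistics refuse to compute, which is why the call chain through StatsMetadata.getEstimatedDroppableTombstoneRatio in the trace fails. A simplified Python sketch of the idea (a toy model, not Cassandra's actual implementation; the ~20% bucket growth is only an approximation):

```python
class SketchHistogram:
    """Toy estimated histogram with exponentially growing buckets and an overflow slot."""

    def __init__(self, size=90):
        offsets = [1]
        while len(offsets) < size:
            prev = offsets[-1]
            offsets.append(max(prev + 1, int(prev * 1.2)))  # ~20% growth per bucket
        self.offsets = offsets
        self.buckets = [0] * (size + 1)  # last slot counts values too large to track

    def add(self, value):
        for i, off in enumerate(self.offsets):
            if value <= off:
                self.buckets[i] += 1
                return
        self.buckets[-1] += 1  # overflow: value exceeds the largest bucket offset

    def overflowed(self):
        return self.buckets[-1] > 0

    def mean(self):
        if self.overflowed():
            # Mirrors the spirit of "Unable to compute ceiling for max when histogram overflowed"
            raise ValueError("histogram overflowed")
        total = sum(self.buckets[:-1])
        weighted = sum(o * c for o, c in zip(self.offsets, self.buckets))
        return weighted // total if total else 0

h = SketchHistogram()
h.add(100)              # a small value: fits in a bucket
h.add(110_196_931_118)  # e.g. a ~110GB partition size in bytes: overflows
print(h.overflowed())   # True; mean() would now raise
```

So one pathologically large tracked value (such as a huge partition) is enough to poison the persisted histogram for the whole sstable.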



[jira] [Commented] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166563#comment-14166563
 ] 

Andrew S commented on CASSANDRA-8089:
-



[jira] [Comment Edited] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166563#comment-14166563
 ] 

Andrew S edited comment on CASSANDRA-8089 at 10/10/14 9:03 AM:
---

We are also getting the following exception on one of the servers after 
manually flushing (using nodetool) another table with similar structure.

{code}
2014-10-10T07:02:46.402+ host9 ERROR [CompactionExecutor:2] 2014-10-10 09:02:46,396 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:2,1,main]
java.lang.IllegalStateException: Unable to compute ceiling for max when histogram overflowed
	at org.apache.cassandra.utils.EstimatedHistogram.mean(EstimatedHistogram.java:203) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.io.sstable.metadata.StatsMetadata.getEstimatedDroppableTombstoneRatio(StatsMetadata.java:98) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.io.sstable.SSTableReader.getEstimatedDroppableTombstoneRatio(SSTableReader.java:1805) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:297) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:106) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:229) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
	at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{code}



[jira] [Commented] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166657#comment-14166657
 ] 

Andrew S commented on CASSANDRA-8089:
-

Same error message on all nodes and tables mentioned.

{code}
/opt/apache-cassandra/bin/nodetool --host xxx.xxx.xxx.xxx cfhistograms XXX xxx
nodetool: Unable to compute when histogram overflowed
{code}

From this it looks like the histogram issue may not be related to the tombstone 
warnings: we only get the warnings on one table, while histograms are broken for 
several tables. Do you think these are two separate issues?



[jira] [Commented] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166698#comment-14166698
 ] 

Andrew S commented on CASSANDRA-8089:
-

bq. nodetool cfstats

{code}
Table: table_with_tombstone_issues
SSTable count: 8
Space used (live), bytes: 33464296209
Space used (total), bytes: 33464296209
Space used by snapshots (total), bytes: 67910279993
SSTable Compression Ratio: 0.24045657357329253
Memtable cell count: 118368
Memtable data size, bytes: 9541504
Memtable switch count: 543
Local read count: 34239312
Local read latency: 0.092 ms
Local write count: 580120
Local write latency: 0.086 ms
Pending flushes: 0
Bloom filter false positives: 430017
Bloom filter false ratio: 0.98621
Bloom filter space used, bytes: 23072
Compacted partition minimum bytes: 216
Compacted partition maximum bytes: 30753941057
Compacted partition mean bytes: 6764845
Average live cells per slice (last five minutes): 0.3324931968307692
Average tombstones per slice (last five minutes): 0.6754286242686398

Table: table_unable_to_compute_ceiling_for_max_when_histogram_overflowed
SSTable count: 11
Space used (live), bytes: 139688159520
Space used (total), bytes: 139688160740
Space used by snapshots (total), bytes: 53189649117
SSTable Compression Ratio: 0.23556415797023755
Memtable cell count: 439254
Memtable data size, bytes: 0702
Memtable switch count: 817
Local read count: 0
Local read latency: NaN ms
Local write count: 2692402
Local write latency: 0.081 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 3205776
Compacted partition minimum bytes: 373
Compacted partition maximum bytes: 110196931118
Compacted partition mean bytes: 144738
Average live cells per slice (last five minutes): 0.0
Average tombstones per slice (last five minutes): 0.0
{code}
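A quick way to spot oversized partitions across many tables is to pull the "Compacted partition maximum bytes" line out of nodetool cfstats output. A sketch, assuming the plain-text 2.1-era format shown above (the table names and the 5 GiB threshold are placeholders):

```python
import re

def max_partition_bytes(cfstats_text):
    """Map table name -> 'Compacted partition maximum bytes' from cfstats-style text."""
    result, table = {}, None
    for line in cfstats_text.splitlines():
        line = line.strip()
        if line.startswith("Table:"):
            table = line.split(":", 1)[1].strip()
        m = re.match(r"Compacted partition maximum bytes:\s*(\d+)", line)
        if m and table is not None:
            result[table] = int(m.group(1))
    return result

sample = """Table: t1
Compacted partition maximum bytes: 30753941057
Table: t2
Compacted partition maximum bytes: 110196931118"""

# Flag anything over 5 GiB, the largest partition size expected in this schema
oversized = {t: b for t, b in max_partition_bytes(sample).items() if b > 5 * 2**30}
print(sorted(oversized))  # ['t1', 't2']
```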



[jira] [Commented] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166901#comment-14166901
 ] 

Andrew S commented on CASSANDRA-8089:
-

bq. looks like you have a 110GB partition in that second table, is that expected?

Thank you. The largest partition should be around 5GB; the 110GB one was a 
partition with an empty key. I have removed it, and we no longer get the 
following exception on flush:
{code}
java.lang.IllegalStateException: Unable to compute ceiling for max when histogram overflowed
{code}

cfhistograms still returns the same error for this and the other tables:
{code}
nodetool: Unable to compute when histogram overflowed
{code}

More importantly, do you have any ideas on the first issue with the tombstones?



[jira] [Comment Edited] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166901#comment-14166901
 ] 

Andrew S edited comment on CASSANDRA-8089 at 10/10/14 2:21 PM:
---

 looks like you have a 110GB partition in that second table, is that expected?

Thank you. The largest should be 5GB; this was a partition with an empty key. I have removed it, but we still get the exception:
java.lang.IllegalStateException: Unable to compute ceiling for max when 
histogram overflowed

Cfhistograms still returns the same error for this and the other tables:
nodetool: Unable to compute when histogram overflowed

More importantly do you have any ideas on the first issue with tombstones?


was (Author: andrews):
 looks like you have a 110GB partition in that second table, is that expected?

Thank you. The largest should be 5GB; this was a partition with an empty key. I have removed it, and we no longer get the following exception on flush:
java.lang.IllegalStateException: Unable to compute ceiling for max when 
histogram overflowed

Cfhistograms still returns the same error for this and the other tables:
nodetool: Unable to compute when histogram overflowed

More importantly do you have any ideas on the first issue with tombstones?



[jira] [Comment Edited] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-10 Thread Andrew S (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166901#comment-14166901
 ] 

Andrew S edited comment on CASSANDRA-8089 at 10/10/14 2:27 PM:
---

 looks like you have a 110GB partition in that second table, is that expected?

Thank you. The largest should be 5GB; this was a partition with an empty key, full of TTLed values. I have removed the row, but we still get the exception:
java.lang.IllegalStateException: Unable to compute ceiling for max when 
histogram overflowed

Cfhistograms still returns the same error for this and the other tables:
nodetool: Unable to compute when histogram overflowed

More importantly do you have any ideas on the first issue with tombstones?


was (Author: andrews):
 looks like you have a 110GB partition in that second table, is that expected?

Thank you. The largest should be 5GB; this was a partition with an empty key. I have removed it, but we still get the exception:
java.lang.IllegalStateException: Unable to compute ceiling for max when 
histogram overflowed

Cfhistograms still returns the same error for this and the other tables:
nodetool: Unable to compute when histogram overflowed

More importantly do you have any ideas on the first issue with tombstones?



[jira] [Created] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-09 Thread Andrew S (JIRA)
Andrew S created CASSANDRA-8089:
---

 Summary: Invalid tombstone warnings / exceptions
 Key: CASSANDRA-8089
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8089
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Casasandra 2.1.0
Debian 7.6, 3.2.0-4-amd64 GNU/Linux

java version 1.7.0_51
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Reporter: Andrew S


Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the row requested should not contain any tombstones as 
we never delete data from that row.

To debug the issue we have dumped the data for this row using sstable2json and 
the result does not contain any tombstones.

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}
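
One way to see where those tombstones are being read from is cqlsh request tracing — a debugging sketch, not part of the original report:

{code}
-- In cqlsh: enable tracing, then re-run the query. The trace output
-- includes lines such as "Read N live and M tombstone cells" per
-- replica, plus which sstables and memtables were consulted.
TRACING ON;
select count(*) from xxx where key = 'x' and aggregate='x';
TRACING OFF;
{code}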

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.
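
As the resolution of this ticket notes, the tombstones turned out to come from INSERTs that bound some columns to null: in CQL, writing null to a column creates a cell tombstone just as a DELETE would. A minimal illustration against this table (the value column name is invented):

{code}
-- Each null bound in an INSERT or UPDATE writes a cell tombstone,
-- even though no DELETE is ever issued against the row:
INSERT INTO xxx (key, aggregate, t, some_value)
VALUES ('x', 'x', now(), null);
{code}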

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Thanks,
Andrew





[jira] [Updated] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-09 Thread Andrew S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew S updated CASSANDRA-8089:

Environment: 
Cassandra 2.1.0
Debian 7.6, 3.2.0-4-amd64 GNU/Linux

java version 1.7.0_51
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

  was:
Casasandra 2.1.0
Debian 7.6, 3.2.0-4-amd64 GNU/Linux

java version 1.7.0_51
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)




[jira] [Updated] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-09 Thread Andrew S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew S updated CASSANDRA-8089:

Description: 
Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the row requested should not contain any tombstones as 
we never delete data from that row. (We do delete data from other row in the 
same column family)

To debug the issue we have dumped the data for this row using sstable2json and 
the result does not contain any tombstones.

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Thanks,
Andrew

  was:
Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the row requested should not contain any tombstones as 
we never delete data from that row.

To debug the issue we have dumped the data for this row using sstable2json and 
the result does not contain any tombstones.

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Thanks,
Andrew



[jira] [Updated] (CASSANDRA-8089) Invalid tombstone warnings / exceptions

2014-10-09 Thread Andrew S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew S updated CASSANDRA-8089:

Description: 
Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the row requested should not contain any tombstones as 
we never delete data from that row. (We do delete data from other row in the 
same column family)

To debug the issue we have dumped the data for this row using sstable2json and 
the result does not contain any tombstones. (We have done this on all nodes 
having the data and all sstables containing the key)

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Thanks,
Andrew

  was:
Hey,

We are having a strange issue with tombstone warnings which look like this:

{code}
WARN  12:28:42 Read 129 live and 4113 tombstoned cells in XXX.xxx (see 
tombstone_warn_threshold). 500 columns was requested, 
slices=[31660a4e-4f94-11e4-ac1d-53f244a29642-0a8073aa-4f9f-11e4-87c7-5b3e253389d8:!],
 delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
{code}

What is strange is that the row requested should not contain any tombstones as 
we never delete data from that row. (We do delete data from other row in the 
same column family)

To debug the issue we have dumped the data for this row using sstable2json and 
the result does not contain any tombstones.

{code}
./sstable2json /var/lib/cassandra/data/XXX/xxx/XXX-xxx-ka-81524-Data.db -k 
xxx
{code}

We are getting the warnings after issuing a simple query:

{code}
select count(*) from xxx where key = 'x' and aggregate='x';
{code}

There are only ~500 cells but it issues a warning about scanning 1700 
tombstones.

We are very worried about this because for some of the queries we are hitting 
TombstoneOverwhelmingException for no obvious reason.

Here is the table definition:

{code}
CREATE TABLE Xxxx.xxx (
key text,
aggregate text,
t timeuuid,
. {date fields }
PRIMARY KEY ((key, aggregate), t)
) WITH CLUSTERING ORDER BY (t ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = 'we love cassandra'
AND compaction = {'min_threshold': '6', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.SnappyCompressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 3600
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
{code}

Thanks,
Andrew

