[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-12 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359080#comment-14359080 ]

Davide commented on CASSANDRA-8067:
---

Is there a way to fix the issue without the patch? When is 2.1.4 planned to be 
released?

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.4

 Attachments: 8067.txt


 Hi,
 I have this stack trace in the logs of Cassandra server (v2.1)
 {code}
 ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:14,1,main]
 java.lang.NullPointerException: null
     at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) ~[apache-cassandra-2.1.0.jar:2.1.0]
     at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) ~[apache-cassandra-2.1.0.jar:2.1.0]
     at org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) ~[apache-cassandra-2.1.0.jar:2.1.0]
     at org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) ~[apache-cassandra-2.1.0.jar:2.1.0]
     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.7.0]
     at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.7.0]
     at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0]
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0]
     at java.lang.Thread.run(Unknown Source) [na:1.7.0]
 {code}
 It may not be critical because this error occurred in the AutoSavingCache. 
 However, line 475 touches the CFMetaData, so it may hide a bigger issue...
 {code}
  474 CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, out);
 {code}
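 For illustration, a minimal sketch of the kind of null guard that would avoid this NPE, assuming (as the trace suggests) that {{getCFMetaData}} returns null once the table has been dropped. This is a hypothetical sketch, not necessarily what the attached 8067.txt patch does:
 {code}
 // Hypothetical guard in KeyCacheSerializer.serialize(): if the table was
 // dropped after the key was cached, getCFMetaData returns null, so skip
 // the entry instead of dereferencing a null cfm.
 CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
 if (cfm == null)
     return; // table no longer exists; nothing to save for this key
 cfm.comparator.rowIndexEntrySerializer().serialize(entry, out);
 {code}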
 Regards,
 Eric





[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-02-19 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14328311#comment-14328311 ]

Davide commented on CASSANDRA-8067:
---

Does this affect compaction progress for you guys too? Once my nodes trigger 
this error, compaction seems to be stuck at 1-2% without going forward.

!http://fs.daddye.it/Cswq+!

 NullPointerException in KeyCacheSerializer
 --

 Key: CASSANDRA-8067
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Leleu
Assignee: Aleksey Yeschenko
 Fix For: 2.1.1







[jira] [Commented] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements

2014-12-18 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253005#comment-14253005 ]

Davide commented on CASSANDRA-7304:
---

Hi guys,

I tried hard to understand this issue but it is not entirely clear to me.

I'm one of those affected by a huge number of tombstones (and there isn't a 
single DELETE in our code).

I use prepared statements everywhere.

Say I have a table with columns _A_, _B_, and _C_.

What happens if, using a *prepared statement*, I do:

{code:sql}
INSERT INTO test  (A, B) VALUES (1, 2)
{code}

Will this generate a tombstone because I didn't set the column _C_?

Second question: this problem was reported through the Java driver, but since 
it is a protocol issue I assume it affects all drivers, right?

Last question: the query above *without* prepared statements is not going to 
generate any tombstones, right? _(assuming that is the case with prepared ones)_

Can some of you provide examples of cases where we can generate tombstones 
using prepared statements?
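
One concrete case that, per the issue description quoted below, does generate tombstones: binding NULL explicitly. A minimal sketch with the DataStax Java driver; the keyspace name and the assumption that _a_ is the partition key are made up for illustration:

{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class TombstoneSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_ks"); // hypothetical keyspace

        // Naming all three columns and binding null for c writes a
        // tombstone for c on every insert.
        PreparedStatement all = session.prepare(
                "INSERT INTO test (a, b, c) VALUES (?, ?, ?)");
        session.execute(all.bind(1, 2, null)); // tombstone for c

        // A statement that never names c writes nothing for c at all,
        // so no tombstone, prepared or not.
        PreparedStatement noC = session.prepare(
                "INSERT INTO test (a, b) VALUES (?, ?)");
        session.execute(noC.bind(1, 2)); // no tombstone

        cluster.close();
    }
}
{code}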

 Ability to distinguish between NULL and UNSET values in Prepared Statements
 ---

 Key: CASSANDRA-7304
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Drew Kutcharian
Assignee: Oded Peer
  Labels: cql, protocolv4
 Fix For: 3.0

 Attachments: 7304-03.patch, 7304-04.patch, 7304-2.patch, 7304.patch


 Currently Cassandra inserts tombstones when a value of a column is bound to 
 NULL in a prepared statement. At higher insert rates managing all these 
 tombstones becomes an unnecessary overhead. This limits the usefulness of the 
 prepared statements since developers have to either create multiple prepared 
 statements (each with a different combination of column names, which at times 
 is just unfeasible because of the sheer number of possible combinations) or 
 fall back to using regular (non-prepared) statements.
 This JIRA is here to explore the possibility of either:
 A. Have a flag on prepared statements that once set, tells Cassandra to 
 ignore null columns
 or
 B. Have an UNSET value which makes Cassandra skip the null columns and not 
 tombstone them
 Basically, in the context of a prepared statement, a null value means delete, 
 but we don’t have anything that means ignore (besides creating a new 
 prepared statement without the ignored column).
 Please refer to the original conversation on DataStax Java Driver mailing 
 list for more background:
 https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion
 *EDIT 18/12/14 - [~odpeer] Implementation Notes:*
 The motivation hasn't changed.
 Protocol version 4 specifies that bind variables do not require having a 
 value when executing a statement. Bind variables without a value are called 
 'unset'. The 'unset' bind variable is serialized as the int value '-2' 
 without following bytes.
 \\
 \\
 * An unset bind variable in an EXECUTE or BATCH request
 ** On a {{value}} does not modify the value and does not create a tombstone
 ** On the {{ttl}} clause is treated as 'unlimited'
 ** On the {{timestamp}} clause is treated as 'now'
 ** On a map key or a list index throws {{InvalidRequestException}}
 ** On a {{counter}} increment or decrement operation does not change the counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} will not change the value of counter {{c}}
 ** On a tuple field or UDT field throws {{InvalidRequestException}}
 * An unset bind variable in a QUERY request
 ** On a partition column, clustering column or index column in the {{WHERE}} 
 clause throws {{InvalidRequestException}}
 ** On the {{limit}} clause is treated as 'unlimited'
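 To make the 'unset' behavior concrete, a sketch of how a client could use it, assuming the DataStax Java driver 3.x-era API with protocol v4 (keyspace and table names are illustrative). A bind marker that is simply never given a value is sent as 'unset':
 {code:java}
 import com.datastax.driver.core.BoundStatement;
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.PreparedStatement;
 import com.datastax.driver.core.ProtocolVersion;
 import com.datastax.driver.core.Session;

 public class UnsetSketch {
     public static void main(String[] args) {
         Cluster cluster = Cluster.builder()
                 .addContactPoint("127.0.0.1")
                 .withProtocolVersion(ProtocolVersion.V4) // unset requires protocol v4
                 .build();
         Session session = cluster.connect("my_ks"); // hypothetical keyspace

         PreparedStatement ps = session.prepare(
                 "INSERT INTO test (a, b, c) VALUES (?, ?, ?)");

         BoundStatement bound = ps.bind();
         bound.setInt("a", 1);
         bound.setInt("b", 2);
         // 'c' is left unset on purpose: it is serialized as the int value -2
         // with no following bytes, so the existing value of c is neither
         // overwritten nor tombstoned.
         session.execute(bound);

         cluster.close();
     }
 }
 {code}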





[jira] [Commented] (CASSANDRA-8140) Compaction has no effects

2014-10-20 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177212#comment-14177212 ]

Davide commented on CASSANDRA-8140:
---

Hi Marcus, 

that's the only thing we have in the logs:

{code}
CassandraDaemon.java [line 166] Exception in thread Thread[CompactionExecutor:227,1,RMI Runtime]
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut down
    at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) ~[na:1.7.0_25]
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) ~[na:1.7.0_25]
    at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:150) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110) ~[na:1.7.0_25]
    at org.apache.cassandra.db.ColumnFamilyStore.switchMemtable(ColumnFamilyStore.java:827) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:902) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:863) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:473) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:231) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:202) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_25]
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) ~[na:1.7.0_25]
    at java.util.concurrent.FutureTask.run(FutureTask.java:166) ~[na:1.7.0_25]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_25]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_25]
    at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
{code}

 Compaction has no effects
 -

 Key: CASSANDRA-8140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8140
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Davide
Assignee: Marcus Eriksson


[jira] [Commented] (CASSANDRA-8140) Compaction has no effects

2014-10-20 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177231#comment-14177231 ]

Davide commented on CASSANDRA-8140:
---

Check here: https://gist.github.com/DAddYE/158f6b98253331dc2845
Unfortunately we had to reduce the verbosity due to very large logs.

I hope it helps






[jira] [Commented] (CASSANDRA-8140) Compaction has no effects

2014-10-20 Thread Davide (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177242#comment-14177242 ]

Davide commented on CASSANDRA-8140:
---

Oh, thank you!!! 






[jira] [Created] (CASSANDRA-8140) Compaction has no effects

2014-10-18 Thread Davide (JIRA)
Davide created CASSANDRA-8140:
-

 Summary: Compaction has no effects
 Key: CASSANDRA-8140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8140
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Davide


Hi there,

I'm on Cassandra 2.1, and since upgrading I've found that in some circumstances 
(I can't find a way to reproduce them consistently) minor compactions and full 
compactions have no effect.

We are on a cluster of 5 nodes with around 500 GB of data, no deletions, around 
1.5k updates/s and about the same on reads.

After a repair I saw that a couple of nodes were `slow`. I investigated further 
and found that on these two nodes the number of sstables was over 20,000! We 
use STCS.

So with nodetool I triggered a full compaction. It took less than a minute 
(with nothing in the logs) and of course the number of sstables didn't go down.

Then I drained the node and ran `nodetool compact` again; at that point the 
number of sstables went down to less than 10.

I thought it was a strange one-off problem. However, after a week I noticed 
that one node had ~100 sstables where the others had just 8-10.

I ran the compaction again (it lasted less than a minute with nothing in the 
logs) and it didn't change anything. I drained the node, restarted, then 
compacted, and it took several hours to get it back down to around 2-3 sstables.

What could it be? We never saw this behavior before.

Here is some information about the table:

{code}
CREATE TABLE xyz (
    ppk text PRIMARY KEY,
    .. ten more columns...
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'min_threshold': '4', 'cold_reads_to_omit': '0.0', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';
{code}

Here are the current cfstats:

{code}
SSTable count: 11
Space used (live), bytes: 118007220865
Space used (total), bytes: 118007220865
Space used by snapshots (total), bytes: 170591332257
SSTable Compression Ratio: 0.3643916626015517
Memtable cell count: 920306
Memtable data size, bytes: 70034097
Memtable switch count: 25
Local read count: 5358772
Local read latency: 54.621 ms
Local write count: 4715106
Local write latency: 0.069 ms
Pending flushes: 0
Bloom filter false positives: 53757
Bloom filter false ratio: 0.04103
Bloom filter space used, bytes: 220634056
Compacted partition minimum bytes: 18
Compacted partition maximum bytes: 61214
Compacted partition mean bytes: 1935
Average live cells per slice (last five minutes): 0.8139232271871242
Average tombstones per slice (last five minutes): 0.5493417148555677
{code}

Is there anything else that I can provide?

Thanks!
DD




