[jira] [Comment Edited] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275062#comment-15275062 ] Paulo Motta edited comment on CASSANDRA-11670 at 5/7/16 3:59 AM: - The problem here is that during streaming we can potentially receive more than {{max_mutation_size}} updates for a single partition, and MV updates are later grouped into a single batchlog which in this case will exceed {{max_mutation_size}} and fail streaming/bootstrap/repair since it cannot be written to the commit log. I created a [dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/e7670ef78011a946d096aac2a3e9be43bba70530] to reproduce it. I see two ways of solving this: 1) Split large updates for a single partition received during streaming into multiple mutations, smaller than {{max_mutation_size}} on {{StreamReceiveTask.OnCompletionRunnable}} 2) Split large MV updates into multiple batchlogs smaller than {{max_mutation_size}} on {{StorageProxy.mutateMV}}. Upside of 1) is that we deal with it earlier. Downsides are: have to handle deletions when splitting mutations and we cannot know in advance what the batchlog size will be so will need to estimate batchlog size or use a conservative value to split updates into mutations. Upside of 2) is to split batchlogs more precisely. Downsides are having to special case this in the MV Path. I initially considered only bootstrap (while this can also happen with repairs), and did an [initial implementation|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] based on 2, but supporting repair which goes through the ordinary MV path will probably make this more messy so I'm now leaning more towards 1. WDYT [~carlyeks] ? 
[~aheiss] as an interim workaround you can try manually applying [this preliminary patch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] on cassandra-3.0.5, or a more brute-force approach without patching is to increase {{max_mutation_size_in_kb}} or {{commitlog_segment_size_in_mb}}. was (Author: pauloricardomg): The problem here is that during streaming we can potentially receive more than {{max_mutation_size}} updates for a single partition, and MV updates are later grouped into a single batchlog which in this case will exceed {{max_mutation_size}} and fail streaming/bootstrap/repair. I created a [dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/e7670ef78011a946d096aac2a3e9be43bba70530] to reproduce it. I see two ways of solving this: 1) Split large updates for a single partition received during streaming into multiple mutations, smaller than {{max_mutation_size}} on {{StreamReceiveTask.OnCompletionRunnable}} 2) Split large MV updates into multiple batchlogs smaller than {{max_mutation_size}} on {{StorageProxy.mutateMV}}. Upside of 1) is that we deal with it earlier. Downsides are: have to handle deletions when splitting mutations and we cannot know in advance what the batchlog size will be so will need to estimate batchlog size or use a conservative value to split updates into mutations. Upside of 2) is to split batchlogs more precisely. Downsides are having to special case this in the MV Path. I initially considered only bootstrap (while this can also happen with repairs), and did an [initial implementation|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] based on 2, but supporting repair which goes through the ordinary MV path will probably make this more messy so I'm now leaning more towards 1. WDYT [~carlyeks] ? 
[~aheiss] as an interim workaround you can try manually applying [this preliminary patch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] on cassandra-3.0.5, or a more brute-force approach without patching is to increase {{max_mutation_size_in_kb}} or {{commitlog_segment_size_in_mb}}. > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva >Assignee: Paulo Motta > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubbing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - >
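Option 1 above amounts to chunking a partition's streamed updates so that no single mutation exceeds the commit log cap. A minimal illustrative sketch of that grouping logic (class and method names are hypothetical, not the actual patch, and it ignores the deletion-handling complication mentioned in the comment):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: greedily pack the serialized sizes of the row
// updates received for one partition into chunks that each stay under the
// mutation size cap. Names are hypothetical, not from the Cassandra codebase.
public class MutationSplitter {
    // rowSizes: serialized size (bytes) of each row update for the partition.
    // maxMutationSize: the cap a single mutation must fit under.
    public static List<List<Integer>> split(List<Integer> rowSizes, int maxMutationSize) {
        List<List<Integer>> chunks = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int currentSize = 0;
        for (int size : rowSizes) {
            // Flush the current chunk before it would overflow the cap.
            if (!current.isEmpty() && currentSize + size > maxMutationSize) {
                chunks.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
            // A single row larger than the cap still gets its own chunk;
            // this sketch cannot split below row granularity.
            current.add(size);
            currentSize += size;
        }
        if (!current.isEmpty())
            chunks.add(current);
        return chunks;
    }

    public static void main(String[] args) {
        // Five 10 MB updates under a 32 MB cap become two mutations
        // instead of one 50 MB mutation that the commit log would reject.
        System.out.println(split(List.of(10, 10, 10, 10, 10), 32));
    }
}
```

As the comment notes, the hard part is not the grouping itself but handling deletions when splitting, and the fact that the eventual batchlog size can only be estimated at this point in the stream-receive path.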
[jira] [Comment Edited] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275062#comment-15275062 ] Paulo Motta edited comment on CASSANDRA-11670 at 5/7/16 3:52 AM: - The problem here is that during streaming we can potentially receive more than {{max_mutation_size}} updates for a single partition, and MV updates are later grouped into a single batchlog which in this case will exceed {{max_mutation_size}} and fail streaming/bootstrap/repair. I created a [dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/e7670ef78011a946d096aac2a3e9be43bba70530] to reproduce it. I see two ways of solving this: 1) Split large updates for a single partition received during streaming into multiple mutations, smaller than {{max_mutation_size}} on {{StreamReceiveTask.OnCompletionRunnable}} 2) Split large MV updates into multiple batchlogs smaller than {{max_mutation_size}} on {{StorageProxy.mutateMV}}. Upside of 1) is that we deal with it earlier. Downsides are: have to handle deletions when splitting mutations and we cannot know in advance what the batchlog size will be so will need to estimate batchlog size or use a conservative value to split updates into mutations. Upside of 2) is to split batchlogs more precisely. Downsides are having to special case this in the MV Path. I initially considered only bootstrap (while this can also happen with repairs), and did an [initial implementation|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] based on 2, but supporting repair which goes through the ordinary MV path will probably make this more messy so I'm now leaning more towards 1. WDYT [~carlyeks] ? 
[~aheiss] as an interim workaround you can try manually applying [this preliminary patch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] on cassandra-3.0.5, or a more brute-force approach without patching is to increase {{max_mutation_size_in_kb}} or {{commitlog_segment_size_in_mb}}. was (Author: pauloricardomg): The problem here is that during streaming we can potentially receive more than {{max_mutation_size}} updates for a single partition on {{StreamReceiveTask.OnCompletionRunnable}}, and MV updates are later grouped into a single batchlog on {{StorageProxy.mutateMV}} which in this case will exceed {{max_mutation_size}} and fail streaming/bootstrap/repair. I created a [dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/e7670ef78011a946d096aac2a3e9be43bba70530] to reproduce it. I see two ways of solving this: 1) Split large updates for a single partition received during streaming into multiple mutations, smaller than {{max_mutation_size}} on {{StreamReceiveTask.OnCompletionRunnable}} 2) Split large MV updates into multiple batchlogs smaller than {{max_mutation_size}} on {{StorageProxy.mutateMV}}. Upside of 1) is that we deal with it earlier. Downsides are: have to handle deletions when splitting mutations and we cannot know in advance what the batchlog size will be so will need to estimate batchlog size or use a conservative value to split updates into mutations. Upside of 2) is to split batchlogs more precisely. Downsides are having to special case this in the MV Path. I initially considered only bootstrap (while this can also happen with repairs), and did an [initial implementation|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] based on 2, but supporting repair which goes through the ordinary MV path will probably make this more messy so I'm now leaning more towards 1. WDYT [~carlyeks] ? 
[~aheiss] as an interim workaround you can try manually applying [this preliminary patch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] on cassandra-3.0.5, or a more brute-force approach without patching is to increase {{max_mutation_size_in_kb}} or {{commitlog_segment_size_in_mb}}. > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva >Assignee: Paulo Motta > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubbing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27
[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275062#comment-15275062 ] Paulo Motta commented on CASSANDRA-11670: - The problem here is that during streaming we can potentially receive more than {{max_mutation_size}} updates for a single partition on {{StreamReceiveTask.OnCompletionRunnable}}, and MV updates are later grouped into a single batchlog on {{StorageProxy.mutateMV}} which in this case will exceed {{max_mutation_size}} and fail streaming/bootstrap/repair. I created a [dtest|https://github.com/pauloricardomg/cassandra-dtest/commit/e7670ef78011a946d096aac2a3e9be43bba70530] to reproduce it. I see two ways of solving this: 1) Split large updates for a single partition received during streaming into multiple mutations, smaller than {{max_mutation_size}} on {{StreamReceiveTask.OnCompletionRunnable}} 2) Split large MV updates into multiple batchlogs smaller than {{max_mutation_size}} on {{StorageProxy.mutateMV}}. Upside of 1) is that we deal with it earlier. Downsides are: have to handle deletions when splitting mutations and we cannot know in advance what the batchlog size will be so will need to estimate batchlog size or use a conservative value to split updates into mutations. Upside of 2) is to split batchlogs more precisely. Downsides are having to special case this in the MV Path. I initially considered only bootstrap (while this can also happen with repairs), and did an [initial implementation|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] based on 2, but supporting repair which goes through the ordinary MV path will probably make this more messy so I'm now leaning more towards 1. WDYT [~carlyeks] ? 
[~aheiss] as an interim workaround you can try manually applying [this preliminary patch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11670] on cassandra-3.0.5, or a more brute-force approach without patching is to increase {{max_mutation_size_in_kb}} or {{commitlog_segment_size_in_mb}}. > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubbing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at >
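For context on the numbers in the stack trace above: Cassandra rejects any mutation larger than {{max_mutation_size_in_kb}}, which by default is half of {{commitlog_segment_size_in_mb}}, since a mutation must fit in a single commit log segment. A quick sketch of that arithmetic (the helper name is illustrative, not a Cassandra API):

```java
// Illustrative arithmetic for the "Mutation of 34974901 bytes is too large"
// error: the default mutation cap is half the commit log segment size, so the
// 33554432-byte (32 MiB) limit in the log corresponds to 64 MiB segments.
public class MutationSizeCheck {
    // Default cap when max_mutation_size_in_kb is not set explicitly.
    public static long defaultMaxMutationSizeBytes(long commitlogSegmentSizeMb) {
        return commitlogSegmentSizeMb * 1024 * 1024 / 2;
    }

    public static void main(String[] args) {
        long cap = defaultMaxMutationSizeBytes(64);
        System.out.println(cap);            // 33554432, matching the limit in the log
        System.out.println(34974901 > cap); // true: the MV batchlog mutation is rejected
    }
}
```

This is why the interim workaround of raising {{commitlog_segment_size_in_mb}} (or {{max_mutation_size_in_kb}} directly) makes the error go away: it lifts the cap above the size of the offending batchlog mutation.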
[jira] [Updated] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-11670: Assignee: Paulo Motta Reviewer: Carl Yeksigian > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva >Assignee: Paulo Motta > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubbing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 > StreamReceiveTask.java:214 - Error applying streamed data: > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at
[jira] [Updated] (CASSANDRA-11663) dtest failure in upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test and upgrade_with_wide_partition_reversed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch updated CASSANDRA-11663: --- Reproduced In: 3.x (was: 3.0.x, 3.x) > dtest failure in > upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test > and upgrade_with_wide_partition_reversed_test > -- > > Key: CASSANDRA-11663 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11663 > Project: Cassandra > Issue Type: Bug >Reporter: Russ Hatch > Labels: dtest > Attachments: node1.log, node1_debug.log > > > including two tests here, look to be failing for the same reason, example > failures: > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_test > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_reversed_test/ > Failed on CassCI build trunk_dtest #1152 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (CASSANDRA-11663) dtest failure in upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test and upgrade_with_wide_partition_re
[ https://issues.apache.org/jira/browse/CASSANDRA-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch updated CASSANDRA-11663: --- Comment: was deleted (was: looks to be happening on 3.0.x too) > dtest failure in > upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test > and upgrade_with_wide_partition_reversed_test > -- > > Key: CASSANDRA-11663 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11663 > Project: Cassandra > Issue Type: Bug >Reporter: Russ Hatch > Labels: dtest > Attachments: node1.log, node1_debug.log > > > including two tests here, look to be failing for the same reason, example > failures: > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_test > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_reversed_test/ > Failed on CassCI build trunk_dtest #1152 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11663) dtest failure in upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test and upgrade_with_wide_partition_reversed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch updated CASSANDRA-11663: --- Reproduced In: 3.0.x, 3.x (was: 3.x) > dtest failure in > upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test > and upgrade_with_wide_partition_reversed_test > -- > > Key: CASSANDRA-11663 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11663 > Project: Cassandra > Issue Type: Bug >Reporter: Russ Hatch > Labels: dtest > Attachments: node1.log, node1_debug.log > > > including two tests here, look to be failing for the same reason, example > failures: > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_test > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_reversed_test/ > Failed on CassCI build trunk_dtest #1152 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11663) dtest failure in upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test and upgrade_with_wide_partition_reversed_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274932#comment-15274932 ] Russ Hatch commented on CASSANDRA-11663: looks to be happening on 3.0.x too > dtest failure in > upgrade_tests.storage_engine_upgrade_test.TestStorageEngineUpgrade.upgrade_with_wide_partition_test > and upgrade_with_wide_partition_reversed_test > -- > > Key: CASSANDRA-11663 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11663 > Project: Cassandra > Issue Type: Bug >Reporter: Russ Hatch > Labels: dtest > Attachments: node1.log, node1_debug.log > > > including two tests here, look to be failing for the same reason, example > failures: > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_test > http://cassci.datastax.com/job/trunk_dtest/1152/testReport/upgrade_tests.storage_engine_upgrade_test/TestStorageEngineUpgrade/upgrade_with_wide_partition_reversed_test/ > Failed on CassCI build trunk_dtest #1152 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11732) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_parse_errors
Russ Hatch created CASSANDRA-11732: -- Summary: dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_parse_errors Key: CASSANDRA-11732 URL: https://issues.apache.org/jira/browse/CASSANDRA-11732 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng offheap dtest job, just one recent failure but looks suspect, may be worth trying to repro: http://cassci.datastax.com/job/trunk_offheap_dtest/189/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_parse_errors Failed on CassCI build trunk_offheap_dtest #189 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11731) dtest failure in pushed_notifications_test.TestPushedNotifications.move_single_node_test
Russ Hatch created CASSANDRA-11731: -- Summary: dtest failure in pushed_notifications_test.TestPushedNotifications.move_single_node_test Key: CASSANDRA-11731 URL: https://issues.apache.org/jira/browse/CASSANDRA-11731 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng one recent failure (no vnode job) {noformat} 'MOVED_NODE' != u'NEW_NODE' {noformat} http://cassci.datastax.com/job/trunk_novnode_dtest/366/testReport/pushed_notifications_test/TestPushedNotifications/move_single_node_test Failed on CassCI build trunk_novnode_dtest #366 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11730) [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test
Russ Hatch created CASSANDRA-11730: -- Summary: [windows] dtest failure in jmx_auth_test.TestJMXAuth.basic_auth_test Key: CASSANDRA-11730 URL: https://issues.apache.org/jira/browse/CASSANDRA-11730 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng looks to be failing on each run so far: http://cassci.datastax.com/job/trunk_dtest_win32/406/testReport/jmx_auth_test/TestJMXAuth/basic_auth_test Failed on CassCI build trunk_dtest_win32 #406 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11729) dtest failure in secondary_indexes_test.TestSecondaryIndexes.test_6924_dropping_ks
Russ Hatch created CASSANDRA-11729: -- Summary: dtest failure in secondary_indexes_test.TestSecondaryIndexes.test_6924_dropping_ks Key: CASSANDRA-11729 URL: https://issues.apache.org/jira/browse/CASSANDRA-11729 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng looks to be a single flap. might be worth trying to reproduce. example failure: http://cassci.datastax.com/job/trunk_dtest/1204/testReport/secondary_indexes_test/TestSecondaryIndexes/test_6924_dropping_ks Failed on CassCI build trunk_dtest #1204 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11723) Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to blame)
[ https://issues.apache.org/jira/browse/CASSANDRA-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274849#comment-15274849 ] Stefano Ortolani commented on CASSANDRA-11723: -- Tried Java 8 b91 with no improvements. Everything runs smooth if I disable jemalloc. > Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to > blame) > -- > > Key: CASSANDRA-11723 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11723 > Project: Cassandra > Issue Type: Bug >Reporter: Stefano Ortolani > Fix For: 3.0.x > > > Upgrade seems fine, but any restart of the node might lead to a situation > where the node just dies after 30 seconds / 1 minute. > Nothing in the logs besides many "FailureDetector.java:456 - Ignoring > interval time of 3000892567 for /10.12.a.x" output every second (against all > other nodes) in debug.log plus some spurious GraphiteErrors/ReadRepair > notifications: > {code:xml} > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2373187360 for /10.12.a.x > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2000276196 for /10.12.a.y > DEBUG [ReadRepairStage:24] 2016-05-05 22:29:03,990 ReadCallback.java:234 - > Digest mismatch: > org.apache.cassandra.service.DigestMismatchException: Mismatch for key > DecoratedKey(-152946356843306763, e859fdd2f264485f42030ce261e4e12e) > (d6e617ece3b7bec6138b52b8974b8cab vs 31becca666a62b3c4b2fc0bab9902718) > at > org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at 
java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > DEBUG [GossipStage:1] 2016-05-05 22:29:04,841 FailureDetector.java:456 - > Ignoring interval time of 3000299340 for /10.12.33.5 > ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-05 22:29:05,692 > ScheduledReporter.java:119 - RuntimeException thrown from > GraphiteReporter#report. Exception was suppressed. > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > at > org.apache.cassandra.utils.EstimatedHistogram.rawMean(EstimatedHistogram.java:231) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.metrics.EstimatedHistogramReservoir$HistogramSnapshot.getMean(EstimatedHistogramReservoir.java:103) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > com.codahale.metrics.graphite.GraphiteReporter.reportHistogram(GraphiteReporter.java:252) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:166) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) > ~[metrics-core-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) > ~[metrics-core-3.1.0.jar:3.1.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_60] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {code} > I know this is not 
much but nothing else gets to dmesg or to any other log. > Any suggestion how to debug this further? > I upgraded two nodes so far, and it happened on both nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11416) No longer able to load backups into new cluster if there was a dropped column
[ https://issues.apache.org/jira/browse/CASSANDRA-11416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274800#comment-15274800 ] Jeremiah Jordan commented on CASSANDRA-11416: - Thinking about this more, I'm leaning towards the "barf by default, but with an error telling you how to fix it". If we don't "clean up" the bad column, then if someone adds it back later the old data will show up. Which is the same thing that would happen in 2.1, but not really a great experience... Unless alter table add stored the "timestamp it was added" and filtered out things from before then... But yelling at people and telling them they need to "fix their schema", run "drop column blah" or set the "OK to load sstables with unknown columns" flag before they can load their data is probably a good idea. And even better if sstableloader spits out the error as well as it showing up in server side logs if you dropped the sstables in place... > No longer able to load backups into new cluster if there was a dropped column > - > > Key: CASSANDRA-11416 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11416 > Project: Cassandra > Issue Type: Bug >Reporter: Jeremiah Jordan >Assignee: Aleksey Yeschenko > Fix For: 3.0.x, 3.x > > > The following change to the sstableloader test works in 2.1/2.2 but fails in > 3.0+ > https://github.com/JeremiahDJordan/cassandra-dtest/commit/7dc66efb8d24239f0a488ec5a613240531aeb7db > {code} > CREATE TABLE test_drop (key text PRIMARY KEY, c1 text, c2 text, c3 text, c4 > text) > ...insert data... > ALTER TABLE test_drop DROP c4 > ...insert more data... > {code} > Make a snapshot and save off a describe to backup table test_drop. > Decide to restore the snapshot to a new cluster. First restore the schema > from describe. (column c4 isn't there) > {code} > CREATE TABLE test_drop (key text PRIMARY KEY, c1 text, c2 text, c3 text) > {code} > sstableload the snapshot data. > Works in 2.1/2.2. 
Fails in 3.0+ with: > {code} > java.lang.RuntimeException: Unknown column c4 during deserialization > java.lang.RuntimeException: Failed to list files in > /var/folders/t4/rlc2b6450qbg92762l9l4mt8gn/T/dtest-3eKv_g/test/node1/data1_copy/ks/drop_one-bcef5280f11b11e5825a43f0253f18b5 > at > org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53) > at > org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:544) > at > org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76) > at > org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165) > at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:104) > Caused by: java.lang.RuntimeException: Unknown column c4 during > deserialization > at > org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331) > at > org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:430) > at > org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$193(SSTableLoader.java:121) > at > org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$184(LogAwareFileLister.java:75) > at > java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174) > at > java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2965) > at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at > java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) > at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) > at > java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) > at > org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:77) > at > org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:49) > ... 
4 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
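The failure above occurs because the freshly created schema has no record that {{c4}} ever existed, so the sstable's serialization header references an unknown column. A workaround sometimes suggested for this kind of restore (an assumption here, not something confirmed in this thread) is to re-add the column with its original type and drop it again before running sstableloader, so the table's dropped-columns metadata knows about {{c4}}:

```sql
-- Hypothetical workaround sketch: teach the restored schema about the
-- dropped column before loading old sstables. The re-added column must
-- use the original type (text in this example).
ALTER TABLE test_drop ADD c4 text;
ALTER TABLE test_drop DROP c4;
```

Note this records a fresh drop timestamp, so any {{c4}} data in the loaded sstables should be treated as dropped, which is presumably the desired outcome for a restore; whether it fully unblocks sstableloader depends on how the fix for this ticket lands.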
[jira] [Updated] (CASSANDRA-11728) Incremental repair fails with vnodes+lcs+multi-dc
[ https://issues.apache.org/jira/browse/CASSANDRA-11728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Bailey updated CASSANDRA-11728: Reproduced In: 2.1.12 > Incremental repair fails with vnodes+lcs+multi-dc > - > > Key: CASSANDRA-11728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11728 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey > > Produced on 2.1.12 > We are seeing incremental repair fail with an error regarding creating > multiple repair sessions on overlapping sstables. This is happening in the > following setup > * 6 nodes > * 2 Datacenters > * Vnodes enabled > * Leveled compaction on the relevant tables > When STCS is used instead, we don't hit an issue. This is slightly related to > https://issues.apache.org/jira/browse/CASSANDRA-11461, except in this case > OpsCenter repair service is running all repairs sequentially. Let me know > what other information we can provide. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11728) Incremental repair fails with vnodes+lcs+multi-dc
Nick Bailey created CASSANDRA-11728: --- Summary: Incremental repair fails with vnodes+lcs+multi-dc Key: CASSANDRA-11728 URL: https://issues.apache.org/jira/browse/CASSANDRA-11728 Project: Cassandra Issue Type: Bug Reporter: Nick Bailey Produced on 2.1.12 We are seeing incremental repair fail with an error regarding creating multiple repair sessions on overlapping sstables. This is happening in the following setup * 6 nodes * 2 Datacenters * Vnodes enabled * Leveled compaction on the relevant tables When STCS is used instead, we don't hit an issue. This is slightly related to https://issues.apache.org/jira/browse/CASSANDRA-11461, except in this case OpsCenter repair service is running all repairs sequentially. Let me know what other information we can provide. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11722) No existant cassandra-topology.properties file causes repeating error
[ https://issues.apache.org/jira/browse/CASSANDRA-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274581#comment-15274581 ] Luke Jolly commented on CASSANDRA-11722: This information may help. The file did exist at one time and then was removed. Also, these nodes were recently upgraded from 3.0.3 -> 3.0.5 > No existant cassandra-topology.properties file causes repeating error > - > > Key: CASSANDRA-11722 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11722 > Project: Cassandra > Issue Type: Bug >Reporter: Luke Jolly > > So starting at the same time on two of my Cassandra Nodes this error was > repeatedly thrown: > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: ERROR 21:00:18 Timed run > of class org.apache.cassandra.locator.PropertyFileSnitch$1 failed. > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: > org.apache.cassandra.exceptions.ConfigurationException: unable to locate > cassandra-topology.properties > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.utils.FBUtilities.resourceToFile(FBUtilities.java:299) > ~[apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.utils.ResourceWatcher$WatchedResource.run(ResourceWatcher.java:53) > ~[apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) > [apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.lang.Thread.run(Thread.java:745) [na:1.8.0_65] > I believe it is because it tired to fallback to the > cassandra-topology.properties file which does not exist. Only after I > restarted did it stop erroring. I am running 3.0.5. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11722) No existant cassandra-topology.properties file causes repeating error
[ https://issues.apache.org/jira/browse/CASSANDRA-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274581#comment-15274581 ] Luke Jolly edited comment on CASSANDRA-11722 at 5/6/16 7:17 PM: This information may help. The file did exist at one time and then was removed (though not since last restart). Also, these nodes were recently upgraded from 3.0.3 -> 3.0.5 was (Author: lukejolly): This information may help. The file did exist at one time and then was removed. Also, these nodes were recently upgraded from 3.0.3 -> 3.0.5 > No existant cassandra-topology.properties file causes repeating error > - > > Key: CASSANDRA-11722 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11722 > Project: Cassandra > Issue Type: Bug >Reporter: Luke Jolly > > So starting at the same time on two of my Cassandra Nodes this error was > repeatedly thrown: > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: ERROR 21:00:18 Timed run > of class org.apache.cassandra.locator.PropertyFileSnitch$1 failed. 
> May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: > org.apache.cassandra.exceptions.ConfigurationException: unable to locate > cassandra-topology.properties > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.utils.FBUtilities.resourceToFile(FBUtilities.java:299) > ~[apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.utils.ResourceWatcher$WatchedResource.run(ResourceWatcher.java:53) > ~[apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) > [apache-cassandra-3.0.5.jar:3.0.5] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_65] > May 05 21:00:18 tammy.11-e.ninja cassandra[19471]: at > java.lang.Thread.run(Thread.java:745) [na:1.8.0_65] > I believe it is because it tired to fallback to the > cassandra-topology.properties file which does not exist. Only after I > restarted did it stop erroring. 
I am running 3.0.5. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
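For context, {{PropertyFileSnitch}} registers {{cassandra-topology.properties}} with a {{ResourceWatcher}} that re-reads the file periodically, which is why the error repeats on a timer once the file disappears. If restoring the file is the quickest fix, its format is a simple {{IP=DC:RACK}} mapping; the addresses and names below are placeholders, not values from this report:

```properties
# cassandra-topology.properties (placeholder values)
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC2
10.0.0.3=DC2:RAC1
# used for any node not listed explicitly
default=DC1:RAC1
```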
[jira] [Updated] (CASSANDRA-11723) Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to blame)
[ https://issues.apache.org/jira/browse/CASSANDRA-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11723: Fix Version/s: 3.0.x > Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to > blame) > -- > > Key: CASSANDRA-11723 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11723 > Project: Cassandra > Issue Type: Bug >Reporter: Stefano Ortolani > Fix For: 3.0.x > > > Upgrade seems fine, but any restart of the node might lead to a situation > where the node just dies after 30 seconds / 1 minute. > Nothing in the logs besides many "FailureDetector.java:456 - Ignoring > interval time of 3000892567 for /10.12.a.x" output every second (against all > other nodes) in debug.log plus some spurious GraphiteErrors/ReadRepair > notifications: > {code:xml} > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2373187360 for /10.12.a.x > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2000276196 for /10.12.a.y > DEBUG [ReadRepairStage:24] 2016-05-05 22:29:03,990 ReadCallback.java:234 - > Digest mismatch: > org.apache.cassandra.service.DigestMismatchException: Mismatch for key > DecoratedKey(-152946356843306763, e859fdd2f264485f42030ce261e4e12e) > (d6e617ece3b7bec6138b52b8974b8cab vs 31becca666a62b3c4b2fc0bab9902718) > at > org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > DEBUG [GossipStage:1] 2016-05-05 22:29:04,841 FailureDetector.java:456 - 
> Ignoring interval time of 3000299340 for /10.12.33.5 > ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-05 22:29:05,692 > ScheduledReporter.java:119 - RuntimeException thrown from > GraphiteReporter#report. Exception was suppressed. > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > at > org.apache.cassandra.utils.EstimatedHistogram.rawMean(EstimatedHistogram.java:231) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.metrics.EstimatedHistogramReservoir$HistogramSnapshot.getMean(EstimatedHistogramReservoir.java:103) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > com.codahale.metrics.graphite.GraphiteReporter.reportHistogram(GraphiteReporter.java:252) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:166) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) > ~[metrics-core-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) > ~[metrics-core-3.1.0.jar:3.1.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_60] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {code} > I know this is not much but nothing else gets to dmesg or to any other log. > Any suggestion how to debug this further? 
> I upgraded two nodes so far, and it happened on both nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11723) Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to blame)
[ https://issues.apache.org/jira/browse/CASSANDRA-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefano Ortolani updated CASSANDRA-11723: - Summary: Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to blame) (was: Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes) > Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes (jemalloc to > blame) > -- > > Key: CASSANDRA-11723 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11723 > Project: Cassandra > Issue Type: Bug >Reporter: Stefano Ortolani > > Upgrade seems fine, but any restart of the node might lead to a situation > where the node just dies after 30 seconds / 1 minute. > Nothing in the logs besides many "FailureDetector.java:456 - Ignoring > interval time of 3000892567 for /10.12.a.x" output every second (against all > other nodes) in debug.log plus some spurious GraphiteErrors/ReadRepair > notifications: > {code:xml} > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2373187360 for /10.12.a.x > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2000276196 for /10.12.a.y > DEBUG [ReadRepairStage:24] 2016-05-05 22:29:03,990 ReadCallback.java:234 - > Digest mismatch: > org.apache.cassandra.service.DigestMismatchException: Mismatch for key > DecoratedKey(-152946356843306763, e859fdd2f264485f42030ce261e4e12e) > (d6e617ece3b7bec6138b52b8974b8cab vs 31becca666a62b3c4b2fc0bab9902718) > at > org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at 
java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > DEBUG [GossipStage:1] 2016-05-05 22:29:04,841 FailureDetector.java:456 - > Ignoring interval time of 3000299340 for /10.12.33.5 > ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-05 22:29:05,692 > ScheduledReporter.java:119 - RuntimeException thrown from > GraphiteReporter#report. Exception was suppressed. > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > at > org.apache.cassandra.utils.EstimatedHistogram.rawMean(EstimatedHistogram.java:231) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.metrics.EstimatedHistogramReservoir$HistogramSnapshot.getMean(EstimatedHistogramReservoir.java:103) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > com.codahale.metrics.graphite.GraphiteReporter.reportHistogram(GraphiteReporter.java:252) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:166) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) > ~[metrics-core-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) > ~[metrics-core-3.1.0.jar:3.1.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_60] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {code} > I know this is not 
much but nothing else gets to dmesg or to any other log. > Any suggestion how to debug this further? > I upgraded two nodes so far, and it happened on both nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11723) Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274538#comment-15274538 ] Stefano Ortolani commented on CASSANDRA-11723: -- This is executing on Ubuntu 12.04 with libjemalloc1 as below: {code:xml} Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Description +++-==-==- hi libjemalloc1 2.2.5-1 general-purpose scalable concurrent malloc(3) implementation {code} > Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes > -- > > Key: CASSANDRA-11723 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11723 > Project: Cassandra > Issue Type: Bug >Reporter: Stefano Ortolani > > Upgrade seems fine, but any restart of the node might lead to a situation > where the node just dies after 30 seconds / 1 minute. > Nothing in the logs besides many "FailureDetector.java:456 - Ignoring > interval time of 3000892567 for /10.12.a.x" output every second (against all > other nodes) in debug.log plus some spurious GraphiteErrors/ReadRepair > notifications: > {code:xml} > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2373187360 for /10.12.a.x > DEBUG [GossipStage:1] 2016-05-05 22:29:03,921 FailureDetector.java:456 - > Ignoring interval time of 2000276196 for /10.12.a.y > DEBUG [ReadRepairStage:24] 2016-05-05 22:29:03,990 ReadCallback.java:234 - > Digest mismatch: > org.apache.cassandra.service.DigestMismatchException: Mismatch for key > DecoratedKey(-152946356843306763, e859fdd2f264485f42030ce261e4e12e) > (d6e617ece3b7bec6138b52b8974b8cab vs 31becca666a62b3c4b2fc0bab9902718) > at > org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) > 
~[apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > DEBUG [GossipStage:1] 2016-05-05 22:29:04,841 FailureDetector.java:456 - > Ignoring interval time of 3000299340 for /10.12.33.5 > ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-05 22:29:05,692 > ScheduledReporter.java:119 - RuntimeException thrown from > GraphiteReporter#report. Exception was suppressed. > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > at > org.apache.cassandra.utils.EstimatedHistogram.rawMean(EstimatedHistogram.java:231) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.metrics.EstimatedHistogramReservoir$HistogramSnapshot.getMean(EstimatedHistogramReservoir.java:103) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > com.codahale.metrics.graphite.GraphiteReporter.reportHistogram(GraphiteReporter.java:252) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:166) > ~[metrics-graphite-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) > ~[metrics-core-3.1.0.jar:3.1.0] > at > com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) > ~[metrics-core-3.1.0.jar:3.1.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_60] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_60] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_60] > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_60] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_60] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {code} > I know this is not much but nothing else gets to dmesg or to any other log. > Any suggestion how to debug this further? > I upgraded two nodes so far, and it happened
[jira] [Commented] (CASSANDRA-11723) Cassandra upgrade from 2.1.11 to 3.0.5 leads to unstable nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274535#comment-15274535 ] Stefano Ortolani commented on CASSANDRA-11723: -- Reproduced and managed to catch some output {code:xml} # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x7fed3c5edbf0, pid=29072, tid=140528026334976 # # JRE version: Java(TM) SE Runtime Environment (8.0_60-b27) (build 1.8.0_60-b27) # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.60-b23 mixed mode linux-amd64 compressed oops) # Problematic frame: # C [libjemalloc.so.1+0x8bf0] [error occurred during error reporting (printing problematic frame), id 0xb] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again # # If you would like to submit a bug report, please visit: # http://bugreport.java.com/bugreport/crash.jsp # --- T H R E A D --- Current thread (0x7fed11282000): JavaThread "SharedPool-Worker-93" daemon [_thread_new, id=29652, stack(0x,0x)] siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x7fed948855a8 Registers: RAX=0x106bccb7, RBX=0x7fed1129e000, RCX=0x7fed1129eff0, RDX=0x0002 RSP=0x7fcf3b172b50, RBP=0xffe8, RSI=0x7fed1129e040, RDI=0x R8 =0x7fed3b4009c0, R9 =0x, R10=0x7fcf3b172218, R11=0x0001 R12=0x0020, R13=0x, R14=0x, R15=0x0003 RIP=0x7fed3c5edbf0, EFLAGS=0x00010202, CSGSFS=0x0033, ERR=0x0004 TRAPNO=0x000e Top of Stack: (sp=0x7fcf3b172b50) 0x7fcf3b172b50: 0x7fcf3b172b60: 0020 0x7fcf3b172b70: 0020 7fcf3b172d30 0x7fcf3b172b80: 7fed3c5e8da5 0x7fcf3b172b90: 7fcf3b173700 0x7fcf3b172ba0: 7fcf3b172d30 7fed3c3d1afa 0x7fcf3b172bb0: 0x7fcf3b172bc0: 0x7fcf3b172bd0: 0x7fcf3b172be0: 0x7fcf3b172bf0: 7fcf3b172da8 7fcf3b172d90 0x7fcf3b172c00: 7fcf3b172da0 7fcf3b172d30 0x7fcf3b172c10: 7fed11282000 7fed3a546c94 0x7fcf3b172c20: 0x7fcf3b172c30: 0x7fcf3b172c40: 0x7fcf3b172c50: 0x7fcf3b172c60: 0x7fcf3b172c70: 0x7fcf3b172c80: 
0x7fcf3b172c90: 0x7fcf3b172ca0: 0x7fcf3b172cb0: 0x7fcf3b172cc0: 0x7fcf3b172cd0: 0x7fcf3b172ce0: 0x7fcf3b172cf0: 7fed11294728 0x7fcf3b172d00: 7fed3c3d3104 0x7fcf3b172d10: 0x7fcf3b172d20: 0x7fcf3b172d30: 0009 0x7fcf3b172d40: 7fcf3b174000 Instructions: (pc=0x7fed3c5edbf0) 0x7fed3c5edbd0: 08 00 00 00 00 e9 c4 fe ff ff 66 0f 1f 44 00 00 0x7fed3c5edbe0: 83 e8 01 3b 06 89 46 08 7d 02 89 06 48 8b 4e 10 0x7fed3c5edbf0: 48 8b 2c c1 48 85 ed 0f 85 13 ff ff ff e9 f5 fe 0x7fed3c5edc00: ff ff 66 0f 1f 44 00 00 e8 a3 ae 00 00 0f 1f 00 Register to memory mapping: RAX=0x106bccb7 is an unknown value RBX=0x7fed1129e000 is an unknown value RCX=0x7fed1129eff0 is an unknown value RDX=0x0002 is an unknown value RSP=0x7fcf3b172b50 is an unknown value RBP=0xffe8 is an unknown value RSI=0x7fed1129e040 is an unknown value RDI=0x is an unknown value R8 =0x7fed3b4009c0 is an unknown value R9 =0x is an unknown value R10=0x7fcf3b172218 is an unknown value R11=0x0001 is an unknown value R12=0x0020 is an unknown value R13=0x is an unknown value R14=0x is an unknown value R15=0x0003 is an unknown value Stack: [0x,0x], sp=0x7fcf3b172b50, free space=137234400714k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libjemalloc.so.1+0x8bf0] [error occurred during error reporting (printing native stack), id 0xb] {code} >
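Given the crash above is inside {{libjemalloc.so.1}} (Ubuntu 12.04's libjemalloc1 package ships the quite old 2.2.5 release), one mitigation to try is preventing Cassandra from preloading jemalloc so the JVM runs on the default glibc allocator. The exact line that enables jemalloc in {{cassandra-env.sh}} varies by Cassandra version and package, so the sketch below only demonstrates the edit on a stand-in file; the file path and the {{LD_PRELOAD}} line are assumptions for illustration:

```shell
# Demo on a stand-in file; point env_file at your real cassandra-env.sh
# and inspect the result before restarting the node.
env_file=/tmp/cassandra-env-demo.sh
printf '%s\n' \
  'JVM_OPTS="$JVM_OPTS -Xss256k"' \
  'export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1' > "$env_file"
# Comment out every line mentioning jemalloc, keeping a .bak backup.
sed -i.bak '/jemalloc/ s/^/# /' "$env_file"
grep -n 'jemalloc' "$env_file"
```

Alternatively, upgrading to a newer jemalloc package may avoid the SIGSEGV without giving up the allocator.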
[jira] [Commented] (CASSANDRA-11552) Reduce amount of logging calls from ColumnFamilyStore.selectAndReference
[ https://issues.apache.org/jira/browse/CASSANDRA-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274532#comment-15274532 ] Benedict commented on CASSANDRA-11552: -- FTR, it's kind of meant to do that to bring attention to wherever the bug causing the long wait is. It's a pretty major bug if that code spins for even a few milliseconds, as it's used widely and slowdowns snarl the whole system up. > Reduce amount of logging calls from ColumnFamilyStore.selectAndReference > > > Key: CASSANDRA-11552 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11552 > Project: Cassandra > Issue Type: Improvement >Reporter: Robert Stupp >Assignee: Robert Stupp > Fix For: 2.1.15, 2.2.7, 3.7, 3.0.7 > > > {{org.apache.cassandra.db.ColumnFamilyStore#selectAndReference}} logs two > messages at _info_ level "as fast as it can" if it waits for more than 100ms. > The following code is executed in a while-true fashion in this case: > {code} > logger.info("Spinning trying to capture released readers {}", > released); > logger.info("Spinning trying to capture all readers {}", > view.sstables); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
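The usual shape of a fix for this kind of ticket is to route the message through a rate-limited logger, so the spinning condition is still surfaced without flooding the log. A minimal sketch of that idea follows; the class and method names are invented for illustration (Cassandra has its own utility in this spirit), and only the gating logic is shown, not the logging call itself:

```java
import java.util.concurrent.atomic.AtomicLong;

// Rate-limited ("no-spam") logging guard: shouldLog returns true at most
// once per interval, so a while-true spin loop emits one message per
// interval instead of one per iteration.
class NoSpamLimiter {
    private final long intervalNanos;
    private final AtomicLong nextAllowedNanos = new AtomicLong(Long.MIN_VALUE);

    NoSpamLimiter(long intervalNanos) {
        this.intervalNanos = intervalNanos;
    }

    /** Caller passes a monotonic clock reading, e.g. System.nanoTime(). */
    boolean shouldLog(long nowNanos) {
        long next = nextAllowedNanos.get();
        // CAS so that concurrent spinners agree on a single winner per interval.
        return nowNanos >= next
            && nextAllowedNanos.compareAndSet(next, nowNanos + intervalNanos);
    }
}
```

The CAS makes the guard safe for the widely shared code path the comment describes: when many threads spin at once, exactly one of them logs per interval.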
[jira] [Comment Edited] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274524#comment-15274524 ] Sylvain Lebresne edited comment on CASSANDRA-11721 at 5/6/16 6:30 PM: -- As said above, it's probably not gonna happen too soon, but for the record, if we do go with a DDL syntax, my preference would be to add some {{WITH OPTIONS}} rather than some specific {{NO SNAPSHOT}}. So something like: {noformat} TRUNCATE x WITH OPTIONS = { 'snapshot' : false } {noformat} so that it's somewhat more consistent with other statements and can be easily extended to other options without requiring new syntax every time. was (Author: slebresne): As said above, it's probably not gonna happen too soon, but for the record, if we do got with a DDL syntax, my preference would be to add some {{WITH OPTIONS}} rather than some specific {{NO SNAPSHOT}}. So something like: {{noformat}} TRUNCATE x WITH OPTIONS = { 'snapshot' : false } {{noformat}} so that it's somewhat more consistent with other statements and can be easily extended to other options without requiring new syntax every time. > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now with truncate, it will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, you generally delete something where it goes to the trash. 
If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. > This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. > If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274524#comment-15274524 ] Sylvain Lebresne commented on CASSANDRA-11721: -- As said above, it's probably not gonna happen too soon, but for the record, if we do got with a DDL syntax, my preference would be to add some {{WITH OPTIONS}} rather than some specific {{NO SNAPSHOT}}. So something like: {{noformat}} TRUNCATE x WITH OPTIONS = { 'snapshot' : false } {{noformat}} so that it's somewhat more consistent with other statements and can be easily extended to other options without requiring new syntax every time. > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now with truncate, it will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, you generally delete something where it goes to the trash. If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. > This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. 
> If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option.
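The appeal of the option-map syntax Sylvain proposes is that the grammar stays fixed while the set of options grows: {{WITH OPTIONS = { ... }}} reduces to a string-to-string map that the statement validates. A minimal sketch of that validation pattern (this is a hypothetical illustration, not the actual Cassandra statement code; the class and method names are invented):

```java
import java.util.Map;

// Hypothetical sketch: options from `TRUNCATE x WITH OPTIONS = { 'snapshot' : false }`
// arrive as a generic map, so adding a new option means adding a key here,
// not extending the CQL grammar.
public class TruncateOptions
{
    public final boolean snapshot;

    private TruncateOptions(boolean snapshot) { this.snapshot = snapshot; }

    public static TruncateOptions fromMap(Map<String, String> options)
    {
        // Reject unknown keys so typos fail loudly rather than silently no-op.
        for (String key : options.keySet())
            if (!key.equals("snapshot"))
                throw new IllegalArgumentException("Unknown TRUNCATE option: " + key);

        // Default preserves today's behavior: truncate always snapshots.
        boolean snapshot = Boolean.parseBoolean(options.getOrDefault("snapshot", "true"));
        return new TruncateOptions(snapshot);
    }

    public static void main(String[] args)
    {
        System.out.println(fromMap(Map.of("snapshot", "false")).snapshot); // false
        System.out.println(fromMap(Map.of()).snapshot);                    // true
    }
}
```

Defaulting the absent key to today's behavior is what makes the syntax backward compatible: existing {{TRUNCATE}} statements carry no option map and keep snapshotting.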
[07/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b1efe1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b1efe1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b1efe1 Branch: refs/heads/cassandra-3.0 Commit: 08b1efe11deb336827b3b63fbfe2b4690c252541 Parents: 0687037 483c745 Author: T Jake LucianiAuthored: Fri May 6 14:13:18 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:13:18 2016 -0400 -- --
[jira] [Updated] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-9395: -- Fix Version/s: (was: 3.0.x) (was: 2.2.x) 3.0.7 3.7 2.2.7 > Prohibit Counter type as part of the PK > --- > > Key: CASSANDRA-9395 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9395 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sebastian Estevez >Assignee: Brett Snyder > Labels: lhf > Fix For: 2.2.7, 3.7, 3.0.7 > > Attachments: cassandra-2.1-9395.txt > > > C* let me do this: > {code} > create table aggregated.counter1 ( a counter , b int , PRIMARY KEY (b,a)) > WITH CLUSTERING ORDER BY (a desc); > {code} > and then treated a as an int! > {code} > cqlsh> update aggregated.counter1 set a= a+1 where b = 2 ;Bad Request: > Invalid operation (a = a + 1) for non counter column a > {code} > {code} > insert INTO aggregated.counter1 (b, a ) VALUES ( 3, 2) ; > {code} > (should have given can't insert must update error) > Even though desc table still shows it as a counter type: > {code} > CREATE TABLE counter1 ( > b int, > a counter, > PRIMARY KEY ((b), a) > ) WITH CLUSTERING ORDER BY (a DESC) AND > bloom_filter_fp_chance=0.01 AND > caching='KEYS_ONLY' AND > comment='' AND > dclocal_read_repair_chance=0.10 AND > gc_grace_seconds=864000 AND > index_interval=128 AND > read_repair_chance=0.00 AND > replicate_on_write='true' AND > populate_io_cache_on_flush='false' AND > default_time_to_live=0 AND > speculative_retry='99.0PERCENTILE' AND > memtable_flush_period_in_ms=0 AND > compaction={'class': 'SizeTieredCompactionStrategy'} AND > compression={'sstable_compression': 'LZ4Compressor'}; > {code}
[03/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType. Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/483c7453 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/483c7453 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/483c7453 Branch: refs/heads/cassandra-3.0 Commit: 483c745334619ff19df8469f565b5346e7b2a5d0 Parents: 5a923f6 Author: Brett SnyderAuthored: Fri Oct 2 11:41:04 2015 -0500 Committer: T Jake Luciani Committed: Fri May 6 14:11:16 2016 -0400 -- CHANGES.txt | 1 + .../cassandra/cql3/statements/CreateTableStatement.java | 6 +++--- .../cassandra/cql3/validation/entities/CountersTest.java | 11 +++ 3 files changed, 15 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a46aa56..18cd90b 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.7 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395) * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) * Exit JVM if JMX server fails to startup (CASSANDRA-11540) * Produce a heap dump when exiting on OOM (CASSANDRA-9861) http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index 1b3665c..e761674 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java @@ -268,7 +268,7 @@ public class CreateTableStatement extends SchemaAlteringStatement { 
stmt.keyAliases.add(alias.bytes); AbstractType t = getTypeAndRemove(stmt.columns, alias); -if (t instanceof CounterColumnType) +if (t.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias)); if (staticColumns.contains(alias)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias)); @@ -316,7 +316,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(alias.bytes); AbstractType at = getTypeAndRemove(stmt.columns, alias); -if (at instanceof CounterColumnType) +if (at.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", stmt.columnAliases.get(0))); stmt.comparator = new SimpleDenseCellNameType(at); } @@ -328,7 +328,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(t.bytes); AbstractType type = getTypeAndRemove(stmt.columns, t); -if (type instanceof CounterColumnType) +if (type.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t)); if (staticColumns.contains(t)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java index e5ff251..41b73bc 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java @@ -112,4 +112,15 @@ public class CountersTest extends CQLTester 
row(1L) // no change to the counter value ); }
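The one-line fix in the diff above ({{t.asCQL3Type().getType()}} instead of a raw {{instanceof}}) exists because {{CLUSTERING ORDER BY (a DESC)}} wraps the column's type in a reversed wrapper, and an {{instanceof CounterColumnType}} check on the wrapper is false even though the base type is a counter. A toy model of the two checks (these are stand-in classes invented for illustration, not the real {{org.apache.cassandra.db.marshal}} hierarchy):

```java
// Toy model of why the original validation missed reversed counters:
// the DESC clustering order wraps the counter type, so an instanceof
// check on the wrapper fails; unwrapping to the base type catches it.
abstract class AbstractType
{
    AbstractType baseType() { return this; }
}

class CounterColumnType extends AbstractType {}

class ReversedType extends AbstractType
{
    final AbstractType wrapped;
    ReversedType(AbstractType wrapped) { this.wrapped = wrapped; }
    @Override AbstractType baseType() { return wrapped.baseType(); }
}

public class ReversedCounterCheck
{
    // Pre-patch check: looks only at the outermost type.
    static boolean rejectedByOldCheck(AbstractType t)
    {
        return t instanceof CounterColumnType;
    }

    // Post-patch check: unwraps first, analogous to asCQL3Type().getType().
    static boolean rejectedByNewCheck(AbstractType t)
    {
        return t.baseType() instanceof CounterColumnType;
    }

    public static void main(String[] args)
    {
        AbstractType reversedCounter = new ReversedType(new CounterColumnType());
        System.out.println(rejectedByOldCheck(reversedCounter)); // false: the bug
        System.out.println(rejectedByNewCheck(reversedCounter)); // true: the fix
    }
}
```

A plain (ascending) counter key was already rejected before the patch; only the reversed wrapper slipped through, which is why the CHANGES.txt entry says "Reversed Counter type".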
[jira] [Updated] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-9395: -- Resolution: Fixed Status: Resolved (was: Patch Available) Committed, thanks > Prohibit Counter type as part of the PK > --- > > Key: CASSANDRA-9395 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9395 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sebastian Estevez >Assignee: Brett Snyder > Labels: lhf > Fix For: 2.2.7, 3.7, 3.0.7 > > Attachments: cassandra-2.1-9395.txt > > > C* let me do this: > {code} > create table aggregated.counter1 ( a counter , b int , PRIMARY KEY (b,a)) > WITH CLUSTERING ORDER BY (a desc); > {code} > and then treated a as an int! > {code} > cqlsh> update aggregated.counter1 set a= a+1 where b = 2 ;Bad Request: > Invalid operation (a = a + 1) for non counter column a > {code} > {code} > insert INTO aggregated.counter1 (b, a ) VALUES ( 3, 2) ; > {code} > (should have given can't insert must update error) > Even though desc table still shows it as a counter type: > {code} > CREATE TABLE counter1 ( > b int, > a counter, > PRIMARY KEY ((b), a) > ) WITH CLUSTERING ORDER BY (a DESC) AND > bloom_filter_fp_chance=0.01 AND > caching='KEYS_ONLY' AND > comment='' AND > dclocal_read_repair_chance=0.10 AND > gc_grace_seconds=864000 AND > index_interval=128 AND > read_repair_chance=0.00 AND > replicate_on_write='true' AND > populate_io_cache_on_flush='false' AND > default_time_to_live=0 AND > speculative_retry='99.0PERCENTILE' AND > memtable_flush_period_in_ms=0 AND > compaction={'class': 'SizeTieredCompactionStrategy'} AND > compression={'sstable_compression': 'LZ4Compressor'}; > {code}
[09/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType. Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/411c5601 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/411c5601 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/411c5601 Branch: refs/heads/trunk Commit: 411c56014d3eb8fdd001c2381c376c968cdef499 Parents: 08b1efe Author: T Jake LucianiAuthored: Fri May 6 11:36:04 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:19:57 2016 -0400 -- CHANGES.txt | 4 +++- .../cassandra/cql3/statements/CreateTableStatement.java | 4 ++-- .../cassandra/cql3/validation/entities/CountersTest.java | 10 ++ 3 files changed, 15 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2e2b6af..af8be97 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,9 @@ 3.0.7 * Refactor Materialized View code (CASSANDRA-11475) * Update Java Driver (CASSANDRA-11615) - +Merged from 2.2: + * Prohibit Reversed Counter type as part of the PK (CASSANDRA-9395) + 3.0.6 * Disallow creating view with a static column (CASSANDRA-11602) * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593) http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index c19f970..04f76d3 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java @@ -241,7 +241,7 @@ public 
class CreateTableStatement extends SchemaAlteringStatement { stmt.keyAliases.add(alias); AbstractType t = getTypeAndRemove(stmt.columns, alias); -if (t instanceof CounterColumnType) +if (t.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias)); if (staticColumns.contains(alias)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias)); @@ -255,7 +255,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(t); AbstractType type = getTypeAndRemove(stmt.columns, t); -if (type instanceof CounterColumnType) +if (type.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t)); if (staticColumns.contains(t)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java index 89fd767..c9939c8 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java @@ -186,4 +186,14 @@ public class CountersTest extends CQLTester "SELECT * FROM %s WHERE b = null ALLOW FILTERING"); } } + +/** + * Test for the validation bug of #9395. 
+ */ +@Test +public void testProhibitReversedCounterAsPartOfPrimaryKey() throws Throwable +{ +assertInvalidThrowMessage("counter type is not supported for PRIMARY KEY part a", + InvalidRequestException.class, String.format("CREATE TABLE %s.%s (a counter, b int, PRIMARY KEY (b, a)) WITH CLUSTERING ORDER BY (a desc);", KEYSPACE, createTableName())); +} }
[02/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType. Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/483c7453 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/483c7453 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/483c7453 Branch: refs/heads/trunk Commit: 483c745334619ff19df8469f565b5346e7b2a5d0 Parents: 5a923f6 Author: Brett SnyderAuthored: Fri Oct 2 11:41:04 2015 -0500 Committer: T Jake Luciani Committed: Fri May 6 14:11:16 2016 -0400 -- CHANGES.txt | 1 + .../cassandra/cql3/statements/CreateTableStatement.java | 6 +++--- .../cassandra/cql3/validation/entities/CountersTest.java | 11 +++ 3 files changed, 15 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a46aa56..18cd90b 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.7 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395) * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) * Exit JVM if JMX server fails to startup (CASSANDRA-11540) * Produce a heap dump when exiting on OOM (CASSANDRA-9861) http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index 1b3665c..e761674 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java @@ -268,7 +268,7 @@ public class CreateTableStatement extends SchemaAlteringStatement { 
stmt.keyAliases.add(alias.bytes); AbstractType t = getTypeAndRemove(stmt.columns, alias); -if (t instanceof CounterColumnType) +if (t.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias)); if (staticColumns.contains(alias)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias)); @@ -316,7 +316,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(alias.bytes); AbstractType at = getTypeAndRemove(stmt.columns, alias); -if (at instanceof CounterColumnType) +if (at.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", stmt.columnAliases.get(0))); stmt.comparator = new SimpleDenseCellNameType(at); } @@ -328,7 +328,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(t.bytes); AbstractType type = getTypeAndRemove(stmt.columns, t); -if (type instanceof CounterColumnType) +if (type.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t)); if (staticColumns.contains(t)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java index e5ff251..41b73bc 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java @@ -112,4 +112,15 @@ public class CountersTest extends CQLTester 
row(1L) // no change to the counter value ); } + +
[05/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b1efe1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b1efe1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b1efe1 Branch: refs/heads/cassandra-3.7 Commit: 08b1efe11deb336827b3b63fbfe2b4690c252541 Parents: 0687037 483c745 Author: T Jake LucianiAuthored: Fri May 6 14:13:18 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:13:18 2016 -0400 -- --
[08/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType. Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/411c5601 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/411c5601 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/411c5601 Branch: refs/heads/cassandra-3.7 Commit: 411c56014d3eb8fdd001c2381c376c968cdef499 Parents: 08b1efe Author: T Jake LucianiAuthored: Fri May 6 11:36:04 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:19:57 2016 -0400 -- CHANGES.txt | 4 +++- .../cassandra/cql3/statements/CreateTableStatement.java | 4 ++-- .../cassandra/cql3/validation/entities/CountersTest.java | 10 ++ 3 files changed, 15 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2e2b6af..af8be97 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,9 @@ 3.0.7 * Refactor Materialized View code (CASSANDRA-11475) * Update Java Driver (CASSANDRA-11615) - +Merged from 2.2: + * Prohibit Reversed Counter type as part of the PK (CASSANDRA-9395) + 3.0.6 * Disallow creating view with a static column (CASSANDRA-11602) * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593) http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index c19f970..04f76d3 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java @@ -241,7 +241,7 @@ 
public class CreateTableStatement extends SchemaAlteringStatement { stmt.keyAliases.add(alias); AbstractType t = getTypeAndRemove(stmt.columns, alias); -if (t instanceof CounterColumnType) +if (t.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias)); if (staticColumns.contains(alias)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias)); @@ -255,7 +255,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(t); AbstractType type = getTypeAndRemove(stmt.columns, t); -if (type instanceof CounterColumnType) +if (type.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t)); if (staticColumns.contains(t)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java index 89fd767..c9939c8 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java @@ -186,4 +186,14 @@ public class CountersTest extends CQLTester "SELECT * FROM %s WHERE b = null ALLOW FILTERING"); } } + +/** + * Test for the validation bug of #9395. 
+ */ +@Test +public void testProhibitReversedCounterAsPartOfPrimaryKey() throws Throwable +{ +assertInvalidThrowMessage("counter type is not supported for PRIMARY KEY part a", + InvalidRequestException.class, String.format("CREATE TABLE %s.%s (a counter, b int, PRIMARY KEY (b, a)) WITH CLUSTERING ORDER BY (a desc);", KEYSPACE, createTableName())); +} }
[13/13] cassandra git commit: Merge branch 'cassandra-3.7' into trunk
Merge branch 'cassandra-3.7' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac0036b7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac0036b7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac0036b7 Branch: refs/heads/trunk Commit: ac0036b72a69c8bc7851df99a8dda984e0e6f276 Parents: adbef79 02f8725 Author: T Jake LucianiAuthored: Fri May 6 14:22:44 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:22:44 2016 -0400 -- CHANGES.txt | 1 + .../cassandra/cql3/statements/CreateTableStatement.java | 4 ++-- .../cql3/validation/entities/CountersTest.java | 12 +++- 3 files changed, 14 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac0036b7/CHANGES.txt -- diff --cc CHANGES.txt index 8e545c4,3cee7ae..d9f1688 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -9,9 -3,9 +9,10 @@@ Merged from 3.0 * Refactor Materialized View code (CASSANDRA-11475) * Update Java Driver (CASSANDRA-11615) Merged from 2.2: + * Prohibit Reversed Counter type as part of the PK (CASSANDRA-9395) * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) + 3.6 * Enhanced Compaction Logging (CASSANDRA-10805) * Make prepared statement cache size configurable (CASSANDRA-11555)
[10/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType. Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/411c5601 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/411c5601 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/411c5601 Branch: refs/heads/cassandra-3.0 Commit: 411c56014d3eb8fdd001c2381c376c968cdef499 Parents: 08b1efe Author: T Jake LucianiAuthored: Fri May 6 11:36:04 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:19:57 2016 -0400 -- CHANGES.txt | 4 +++- .../cassandra/cql3/statements/CreateTableStatement.java | 4 ++-- .../cassandra/cql3/validation/entities/CountersTest.java | 10 ++ 3 files changed, 15 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2e2b6af..af8be97 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,9 @@ 3.0.7 * Refactor Materialized View code (CASSANDRA-11475) * Update Java Driver (CASSANDRA-11615) - +Merged from 2.2: + * Prohibit Reversed Counter type as part of the PK (CASSANDRA-9395) + 3.0.6 * Disallow creating view with a static column (CASSANDRA-11602) * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593) http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java index c19f970..04f76d3 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java @@ -241,7 +241,7 @@ 
public class CreateTableStatement extends SchemaAlteringStatement { stmt.keyAliases.add(alias); AbstractType t = getTypeAndRemove(stmt.columns, alias); -if (t instanceof CounterColumnType) +if (t.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias)); if (staticColumns.contains(alias)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias)); @@ -255,7 +255,7 @@ public class CreateTableStatement extends SchemaAlteringStatement stmt.columnAliases.add(t); AbstractType type = getTypeAndRemove(stmt.columns, t); -if (type instanceof CounterColumnType) +if (type.asCQL3Type().getType() instanceof CounterColumnType) throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t)); if (staticColumns.contains(t)) throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/411c5601/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java index 89fd767..c9939c8 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java @@ -186,4 +186,14 @@ public class CountersTest extends CQLTester "SELECT * FROM %s WHERE b = null ALLOW FILTERING"); } } + +/** + * Test for the validation bug of #9395. 
+ */ +@Test +public void testProhibitReversedCounterAsPartOfPrimaryKey() throws Throwable +{ +assertInvalidThrowMessage("counter type is not supported for PRIMARY KEY part a", + InvalidRequestException.class, String.format("CREATE TABLE %s.%s (a counter, b int, PRIMARY KEY (b, a)) WITH CLUSTERING ORDER BY (a desc);", KEYSPACE, createTableName())); +} }
[11/13] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.7
Merge branch 'cassandra-3.0' into cassandra-3.7 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02f8725d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02f8725d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02f8725d Branch: refs/heads/cassandra-3.7 Commit: 02f8725d73bd2eab2552a07bdf2b4fa23700eedd Parents: 886f875 411c560 Author: T Jake LucianiAuthored: Fri May 6 14:22:22 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 14:22:22 2016 -0400 -- CHANGES.txt | 1 + .../cassandra/cql3/statements/CreateTableStatement.java | 4 ++-- .../cql3/validation/entities/CountersTest.java | 12 +++- 3 files changed, 14 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/02f8725d/CHANGES.txt -- diff --cc CHANGES.txt index ff98d48,af8be97..3cee7ae --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -3,75 -2,9 +3,76 @@@ Merged from 3.0 * Refactor Materialized View code (CASSANDRA-11475) * Update Java Driver (CASSANDRA-11615) Merged from 2.2: + * Prohibit Reversed Counter type as part of the PK (CASSANDRA-9395) + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) -3.0.6 +3.6 + * Enhanced Compaction Logging (CASSANDRA-10805) + * Make prepared statement cache size configurable (CASSANDRA-11555) + * Integrated JMX authentication and authorization (CASSANDRA-10091) + * Add units to stress ouput (CASSANDRA-11352) + * Fix PER PARTITION LIMIT for single and multi partitions queries (CASSANDRA-11603) + * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863) + * Clarify ClusteringPrefix hierarchy (CASSANDRA-11213) + * Always perform collision check before joining ring (CASSANDRA-10134) + * SSTableWriter output discrepancy (CASSANDRA-11646) + * Fix potential timeout in NativeTransportService.testConcurrentDestroys (CASSANDRA-10756) + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206) + * Add support to 
rebuild from specific range (CASSANDRA-10406) + * Optimize the overlapping lookup by calculating all the + bounds in advance (CASSANDRA-11571) + * Support json/yaml output in noetool tablestats (CASSANDRA-5977) + * (stress) Add datacenter option to -node options (CASSANDRA-11591) + * Fix handling of empty slices (CASSANDRA-11513) + * Make number of cores used by cqlsh COPY visible to testing code (CASSANDRA-11437) + * Allow filtering on clustering columns for queries without secondary indexes (CASSANDRA-11310) + * Refactor Restriction hierarchy (CASSANDRA-11354) + * Eliminate allocations in R/W path (CASSANDRA-11421) + * Update Netty to 4.0.36 (CASSANDRA-11567) + * Fix PER PARTITION LIMIT for queries requiring post-query ordering (CASSANDRA-11556) + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818) + * Support UDT in CQLSSTableWriter (CASSANDRA-10624) + * Support for non-frozen user-defined types, updating + individual fields of user-defined types (CASSANDRA-7423) + * Make LZ4 compression level configurable (CASSANDRA-11051) + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017) + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295) + * Improve field-checking and error reporting in cassandra.yaml (CASSANDRA-10649) + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507) + * More user friendly error when providing an invalid token to nodetool (CASSANDRA-9348) + * Add static column support to SASI index (CASSANDRA-11183) + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization (CASSANDRA-11434) + * Support LIKE operator in prepared statements (CASSANDRA-11456) + * Add a command to see if a Materialized View has finished building (CASSANDRA-9967) + * Log endpoint and port associated with streaming operation (CASSANDRA-8777) + * Print sensible units for all log messages (CASSANDRA-9692) + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096) + * Break the CQL grammar into separate Parser and Lexer 
(CASSANDRA-11372) + * Compress only inter-dc traffic by default (CASSANDRA-) + * Add metrics to track write amplification (CASSANDRA-11420) + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739) + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411) + * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220) + * Add auto import java.util for UDF code block (CASSANDRA-11392) + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337) + * sstablemetadata should print sstable min/max token (CASSANDRA-7159) + * Do not wrap
[01/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 5a923f65c -> 483c74533
  refs/heads/cassandra-3.0 06870372d -> 411c56014
  refs/heads/cassandra-3.7 886f87571 -> 02f8725d7
  refs/heads/trunk adbef7982 -> ac0036b72

Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.

Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/483c7453
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/483c7453
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/483c7453

Branch: refs/heads/cassandra-2.2
Commit: 483c745334619ff19df8469f565b5346e7b2a5d0
Parents: 5a923f6
Author: Brett Snyder
Authored: Fri Oct 2 11:41:04 2015 -0500
Committer: T Jake Luciani
Committed: Fri May 6 14:11:16 2016 -0400
--
 CHANGES.txt | 1 +
 .../cassandra/cql3/statements/CreateTableStatement.java | 6 +++---
 .../cassandra/cql3/validation/entities/CountersTest.java | 11 +++
 3 files changed, 15 insertions(+), 3 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a46aa56..18cd90b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 * Produce a heap dump when exiting on OOM (CASSANDRA-9861)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
index 1b3665c..e761674 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
@@ -268,7 +268,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 {
 stmt.keyAliases.add(alias.bytes);
 AbstractType t = getTypeAndRemove(stmt.columns, alias);
-if (t instanceof CounterColumnType)
+if (t.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias));
 if (staticColumns.contains(alias))
 throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias));
@@ -316,7 +316,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 stmt.columnAliases.add(alias.bytes);
 AbstractType at = getTypeAndRemove(stmt.columns, alias);
-if (at instanceof CounterColumnType)
+if (at.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", stmt.columnAliases.get(0)));
 stmt.comparator = new SimpleDenseCellNameType(at);
 }
@@ -328,7 +328,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 stmt.columnAliases.add(t.bytes);
 AbstractType type = getTypeAndRemove(stmt.columns, t);
-if (type instanceof CounterColumnType)
+if (type.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t));
 if (staticColumns.contains(t))
 throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
index e5ff251..41b73bc 100644
---
[04/13] cassandra git commit: Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.
Prohibit reversed counter type as part of the primary key. Check the actual CQL3Type to get the base type from the abstract type when comparing against CounterColumnType.

Patch by Brett Snyder; reviewed by tjake for CASSANDRA-9395

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/483c7453
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/483c7453
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/483c7453

Branch: refs/heads/cassandra-3.7
Commit: 483c745334619ff19df8469f565b5346e7b2a5d0
Parents: 5a923f6
Author: Brett Snyder
Authored: Fri Oct 2 11:41:04 2015 -0500
Committer: T Jake Luciani
Committed: Fri May 6 14:11:16 2016 -0400
--
 CHANGES.txt | 1 +
 .../cassandra/cql3/statements/CreateTableStatement.java | 6 +++---
 .../cassandra/cql3/validation/entities/CountersTest.java | 11 +++
 3 files changed, 15 insertions(+), 3 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a46aa56..18cd90b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 * Produce a heap dump when exiting on OOM (CASSANDRA-9861)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
index 1b3665c..e761674 100644
--- a/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java
@@ -268,7 +268,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 {
 stmt.keyAliases.add(alias.bytes);
 AbstractType t = getTypeAndRemove(stmt.columns, alias);
-if (t instanceof CounterColumnType)
+if (t.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", alias));
 if (staticColumns.contains(alias))
 throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", alias));
@@ -316,7 +316,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 stmt.columnAliases.add(alias.bytes);
 AbstractType at = getTypeAndRemove(stmt.columns, alias);
-if (at instanceof CounterColumnType)
+if (at.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", stmt.columnAliases.get(0)));
 stmt.comparator = new SimpleDenseCellNameType(at);
 }
@@ -328,7 +328,7 @@ public class CreateTableStatement extends SchemaAlteringStatement
 stmt.columnAliases.add(t.bytes);
 AbstractType type = getTypeAndRemove(stmt.columns, t);
-if (type instanceof CounterColumnType)
+if (type.asCQL3Type().getType() instanceof CounterColumnType)
 throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", t));
 if (staticColumns.contains(t))
 throw new InvalidRequestException(String.format("Static column %s cannot be part of the PRIMARY KEY", t));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/483c7453/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
index e5ff251..41b73bc 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/CountersTest.java
@@ -112,4 +112,15 @@ public class CountersTest extends CQLTester
 row(1L) // no change to the counter value
 );
 }
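The one-line fix above replaces a plain instanceof test with an unwrap via asCQL3Type().getType(). A minimal, self-contained sketch (hypothetical stand-in classes, not Cassandra's real type hierarchy) of why the original check let a reversed counter slip through: a reversed type wraps its base type rather than extending it, so instanceof on the wrapper never matches the counter type.

```java
// Hypothetical stand-ins for illustration only.
class AbstractType {}

class CounterColumnType extends AbstractType {}

// A reversed-order type wraps a base type; it does NOT extend it.
class ReversedType extends AbstractType {
    final AbstractType base;
    ReversedType(AbstractType base) { this.base = base; }
    AbstractType baseType() { return base; }
}

public class CounterPkCheck {
    // The pre-patch check: only catches a bare counter type.
    static boolean buggyCheck(AbstractType t) {
        return t instanceof CounterColumnType;
    }

    // The patched check: unwrap to the base type first, analogous to
    // t.asCQL3Type().getType() in the real code.
    static boolean fixedCheck(AbstractType t) {
        AbstractType base = (t instanceof ReversedType) ? ((ReversedType) t).baseType() : t;
        return base instanceof CounterColumnType;
    }

    public static void main(String[] args) {
        AbstractType reversedCounter = new ReversedType(new CounterColumnType());
        System.out.println(buggyCheck(reversedCounter)); // false - slips past validation
        System.out.println(fixedCheck(reversedCounter)); // true - correctly rejected
    }
}
```

So a column declared with `CLUSTERING ORDER BY (c DESC)` where `c` is a counter passed the old validation but is rejected by the new one.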
[06/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b1efe1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b1efe1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b1efe1

Branch: refs/heads/trunk
Commit: 08b1efe11deb336827b3b63fbfe2b4690c252541
Parents: 0687037 483c745
Author: T Jake Luciani
Authored: Fri May 6 14:13:18 2016 -0400
Committer: T Jake Luciani
Committed: Fri May 6 14:13:18 2016 -0400
--
--
[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274496#comment-15274496 ] Wei Deng commented on CASSANDRA-11721: -- Option 1 (DDL NO SNAPSHOT) looks good to me and will cause the least amount of confusion to developers and operators. > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now with truncate, it will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, you generally delete something where it goes to the trash. If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. > This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. > If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11658) java.lang.IllegalStateException: Unable to compute ceiling for max when histogram overflowed
[ https://issues.apache.org/jira/browse/CASSANDRA-11658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274479#comment-15274479 ] Stefano Ortolani commented on CASSANDRA-11658: -- Same thing with 3.0.5 > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > > > Key: CASSANDRA-11658 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11658 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: RHEL-6.5 64-bit, Cassandra 2.2.4 >Reporter: Relish Chackochan >Priority: Minor > > On our 8 node Cassandra cluster ( 2.2.4v ) i am seeing the below error on > multiple nodes. > ERROR [CompactionExecutor:3] 2016-04-26 01:24:06,784 CassandraDaemon.java:185 > - Exception in thread Thread[CompactionExecutor:3,1,main] > java.lang.IllegalStateException: Unable to compute ceiling for max when > histogram overflowed > at > org.apache.cassandra.utils.EstimatedHistogram.mean(EstimatedHistogram.java:203) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.io.sstable.metadata.StatsMetadata.getEstimatedDroppableTombstoneRatio(StatsMetadata.java:98) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.io.sstable.format.SSTableReader.getEstimatedDroppableTombstoneRatio(SSTableReader.java:1840) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:372) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:96) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:180) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:85) > 
~[apache-cassandra-2.2.4.jar:2.2.4] > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:241) > ~[apache-cassandra-2.2.4.jar:2.2.4] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_65] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_65] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_65] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_65] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65] > . -- This message was sent by Atlassian JIRA (v6.3.4#6332)
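The stack trace above originates in EstimatedHistogram.mean(), which refuses to compute a mean once any value has landed in the overflow bucket. A rough sketch of that failure mode (a simplified bucket scheme, not Cassandra's actual boundary series): once a value exceeds the largest bucket bound, the histogram no longer knows a ceiling for it, so computing a mean would silently return a bogus number, and throwing is the chosen alternative.

```java
public class OverflowHistogram {
    // Bucket upper bounds; anything above the last bound lands in overflow.
    private final long[] bounds = {10, 100, 1000, 10000};
    private final long[] counts = new long[bounds.length + 1];

    public void add(long value) {
        for (int i = 0; i < bounds.length; i++) {
            if (value <= bounds[i]) { counts[i]++; return; }
        }
        counts[bounds.length]++; // overflowed: true magnitude is unknown
    }

    public boolean isOverflowed() { return counts[bounds.length] > 0; }

    // Mirrors the exception in the trace: no ceiling exists for overflowed
    // values, so a mean cannot be computed honestly.
    public long mean() {
        if (isOverflowed())
            throw new IllegalStateException(
                "Unable to compute ceiling for max when histogram overflowed");
        long total = 0, sum = 0;
        for (int i = 0; i < bounds.length; i++) {
            total += counts[i];
            sum += counts[i] * bounds[i];
        }
        return total == 0 ? 0 : sum / total;
    }
}
```

In the reported case the overflowing quantity feeds getEstimatedDroppableTombstoneRatio(), so a single sstable with out-of-range stats is enough to break background compaction candidate selection.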
[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274469#comment-15274469 ] Ryan Svihla commented on CASSANDRA-11721: - I've thought about it a bit: 1. NO SNAPSHOT is probably the most pure and clean and satisfies even the most pedantic user who wants their temporary data backed up in C* when a drop or typical truncate is called, but comes at the cost of changing truncate and having driver dependencies. 2. table based is easy to implement and satisfies a lot of people even if a couple of people will be sad. They probably can just log their data in another table before they truncate if they're that determined to have it backed up. > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now with truncate, it will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, you generally delete something where it goes to the trash. If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. > This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. 
> If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274443#comment-15274443 ] Jeremy Hanna commented on CASSANDRA-11721: -- It's fine if it's not right away and it's understandable that at those levels, it takes a major version to make the change. People have been living with the limited options for this long :). If we could do the NO SNAPSHOT syntax for everything with a snapshot to be consistent in the DDL and do that as a per cf setting (auto_snapshot), I think both would be nice options. If only one option is considered, then the per operation would be preferable because it gives functionality that the per cf does not. What do you think [~rssvihla] [~weideng]? > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now with truncate, it will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, you generally delete something where it goes to the trash. If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. 
> This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. > If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches
[ https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274388#comment-15274388 ] Martin Grotzke commented on CASSANDRA-10876: +1 > Alter behavior of batch WARN and fail on single partition batches > - > > Key: CASSANDRA-10876 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10876 > Project: Cassandra > Issue Type: Improvement >Reporter: Patrick McFadin >Assignee: Sylvain Lebresne >Priority: Minor > Labels: lhf > Fix For: 3.6 > > Attachments: 10876.txt > > > In an attempt to give operator insight into potentially harmful batch usage, > Jiras were created to log WARN or fail on certain batch sizes. This ignores > the single partition batch, which doesn't create the same issues as a > multi-partition batch. > The proposal is to ignore size on single partition batch statements. > Reference: > [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487] > [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274384#comment-15274384 ] Joel Knighton commented on CASSANDRA-11517: --- Great - note to committer: the patch to apply is in my most recent comment and also linked here [jkni/CASSANDRA-11517-trunk|https://github.com/jkni/cassandra/tree/CASSANDRA-11517-trunk]. > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it's seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Knighton updated CASSANDRA-11517: -- Status: Ready to Commit (was: Patch Available) > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.x > > > I noticed this profiling a query handler implementation that uses UUIDGen to > get handles to track queries for logging purposes. > Under contention threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way it's seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
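The CAS-loop rewrite Ariel describes can be sketched as follows (an illustrative stand-in, not the committed patch): contending threads spin briefly on compareAndSet instead of parking on an intrinsic lock, and each winner is handed a strictly increasing timestamp, which also guarantees uniqueness.

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicTimestamp {
    private static final AtomicLong lastNanos = new AtomicLong();

    // Strictly increasing timestamp in 100ns units (the UUID v1 resolution).
    // Contention is resolved by a CAS spin loop rather than a synchronized
    // block, so threads stay scheduled instead of being descheduled.
    public static long createTimeSafe() {
        while (true) {
            long last = lastNanos.get();
            long now = System.currentTimeMillis() * 10_000; // ms -> 100ns units
            long candidate = Math.max(now, last + 1);       // never repeat or regress
            if (lastNanos.compareAndSet(last, candidate))
                return candidate;
        }
    }
}
```

A losing thread simply re-reads the latest value and retries, which is cheap when the critical section is a single max-and-store; that is the adaptation the intrinsic lock was expected, but observed not, to provide.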
[jira] [Commented] (CASSANDRA-10876) Alter behavior of batch WARN and fail on single partition batches
[ https://issues.apache.org/jira/browse/CASSANDRA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274377#comment-15274377 ] Vassil Lunchev commented on CASSANDRA-10876: How about backporting this change to 3.0.x? I know that it is marked as an improvement, but the patch contains a relatively minor change on the borderline between a fix and an improvement. We are currently using 3.0.x in production and we issue relatively large single partition batches. Our logs are full of warnings about that, but moving to 3.6 just for this trivial fix is a little too much for us. > Alter behavior of batch WARN and fail on single partition batches > - > > Key: CASSANDRA-10876 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10876 > Project: Cassandra > Issue Type: Improvement >Reporter: Patrick McFadin >Assignee: Sylvain Lebresne >Priority: Minor > Labels: lhf > Fix For: 3.6 > > Attachments: 10876.txt > > > In an attempt to give operator insight into potentially harmful batch usage, > Jiras were created to log WARN or fail on certain batch sizes. This ignores > the single partition batch, which doesn't create the same issues as a > multi-partition batch. > The proposal is to ignore size on single partition batch statements. > Reference: > [CASSANDRA-6487|https://issues.apache.org/jira/browse/CASSANDRA-6487] > [CASSANDRA-8011|https://issues.apache.org/jira/browse/CASSANDRA-8011] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11713) Add ability to log thread dump when NTR pool is blocked
[ https://issues.apache.org/jira/browse/CASSANDRA-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274370#comment-15274370 ] Joshua McKenzie commented on CASSANDRA-11713: - * Throwing RTE in SEPExecutor.ctor on failure to registerMBean changes this from an optional feature to an optional feature with mandatory infrastructure / registration demands. I'd recommend logging a warning instead of killing the executor. This will also necessitate a change to confirming registration succeeded and skip the thread dump if not. * Rather than having ThreadDumper be a static set of methods, you could make this an instance variable member of SEPExecutor and encapsulate the initialization, check if init, and also CAS inside the member class and keep that functionality separate from SEPExecutor. * .get() on the AtomicBoolean in SEPExecutor.addTask gives you the possibility of multiple tasks printing a dump during heavy contention. I recommend pushing the CAS up to where the .get() check is, with the caveat that I more strongly recommend encapsulating that logic inside an instance class (see above) > Add ability to log thread dump when NTR pool is blocked > --- > > Key: CASSANDRA-11713 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11713 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > > Thread dumps are very useful for troubleshooting Native-Transport-Requests > contention issues like CASSANDRA-11363 and CASSANDRA-11529. > While they could be generated externally with {{jstack}}, sometimes the > conditions are transient and it's hard to catch the exact moment when they > happen, so it could be useful to generate and log them upon user request when > certain internal condition happens. 
> I propose adding a {{logThreadDumpOnNextContention}} flag to {{SEPExecutor}} > that when enabled via JMX generates and logs a single thread dump on the > system log when the thread pool queue is full. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
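Joshua's third review point, pushing the CAS up into the hot-path check so at most one task dumps per request, can be sketched like this (a hypothetical helper for illustration, not the actual patch):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.util.concurrent.atomic.AtomicBoolean;

// Encapsulates the flag + dump logic outside the executor, as suggested.
public class OneShotThreadDumper {
    private final AtomicBoolean dumpRequested = new AtomicBoolean(false);

    // Operator arms this (via JMX in the real proposal).
    public void logThreadDumpOnNextContention() { dumpRequested.set(true); }

    // Called from the hot path when the task queue is full. Using
    // compareAndSet instead of a plain get() guarantees exactly one winner
    // under heavy contention; everyone else sees false and moves on.
    public boolean maybeLogDump() {
        if (!dumpRequested.compareAndSet(true, false))
            return false;
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(false, false))
            System.out.print(info);
        return true;
    }
}
```

With a plain `get()` followed by a later CAS, several blocked tasks can all observe the flag as set before any of them clears it, producing multiple dumps; folding the read into the CAS closes that window.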
[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes
[ https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274365#comment-15274365 ] Joel Knighton commented on CASSANDRA-11357: --- No problem. I see where you're coming from - feel free to open an issue to discuss/advocate for backporting the change. Others are likely to have a stronger opinion than me, and I didn't mean to speak definitively on the subject. > ClientWarningsTest fails after single partition batch warning changes > - > > Key: CASSANDRA-11357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11357 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Joel Knighton >Priority: Trivial > Fix For: 3.6 > > > We no longer warn on single partition batches above the batch size warn > threshold, but the test wasn't changed accordingly. We should check that we > warn for multi-partition batches above this size and that we don't warn for > single partition batches above this size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes
[ https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274345#comment-15274345 ] Vassil Lunchev commented on CASSANDRA-11357: Sorry Joel, I commented on the wrong issue. I meant backporting CASSANDRA-10876 indeed. It just sounds like a minor change on the borderline between a fix and an improvement, but I knew that the likelihood of that backport happening is low. > ClientWarningsTest fails after single partition batch warning changes > - > > Key: CASSANDRA-11357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11357 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Joel Knighton >Priority: Trivial > Fix For: 3.6 > > > We no longer warn on single partition batches above the batch size warn > threshold, but the test wasn't changed accordingly. We should check that we > warn for multi-partition batches above this size and that we don't warn for > single partition batches above this size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes
[ https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vassil Lunchev updated CASSANDRA-11357: --- Comment: was deleted (was: What do you think about getting this backported to 3.0.6?) > ClientWarningsTest fails after single partition batch warning changes > - > > Key: CASSANDRA-11357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11357 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Joel Knighton >Priority: Trivial > Fix For: 3.6 > > > We no longer warn on single partition batches above the batch size warn > threshold, but the test wasn't changed accordingly. We should check that we > warn for multi-partition batches above this size and that we don't warn for > single partition batches above this size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes
[ https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274338#comment-15274338 ] Joel Knighton commented on CASSANDRA-11357: --- [~vas...@leanplum.com] - I'm not sure what you mean. These test changes are to accommodate the changes in [CASSANDRA-10876] and would not make sense backported to 3.0, which does not contain those changes. If you mean to backport the changes from [CASSANDRA-10876] to 3.0, it seems unlikely, since improvements normally only go to trunk. > ClientWarningsTest fails after single partition batch warning changes > - > > Key: CASSANDRA-11357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11357 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Joel Knighton >Priority: Trivial > Fix For: 3.6 > > > We no longer warn on single partition batches above the batch size warn > threshold, but the test wasn't changed accordingly. We should check that we > warn for multi-partition batches above this size and that we don't warn for > single partition batches above this size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11357) ClientWarningsTest fails after single partition batch warning changes
[ https://issues.apache.org/jira/browse/CASSANDRA-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274330#comment-15274330 ] Vassil Lunchev commented on CASSANDRA-11357: What do you think about getting this backported to 3.0.6? > ClientWarningsTest fails after single partition batch warning changes > - > > Key: CASSANDRA-11357 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11357 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Joel Knighton >Priority: Trivial > Fix For: 3.6 > > > We no longer warn on single partition batches above the batch size warn > threshold, but the test wasn't changed accordingly. We should check that we > warn for multi-partition batches above this size and that we don't warn for > single partition batches above this size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11710) Cassandra dies with OOM when running stress
[ https://issues.apache.org/jira/browse/CASSANDRA-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274283#comment-15274283 ] T Jake Luciani commented on CASSANDRA-11710: Will re-tag thanks > Cassandra dies with OOM when running stress > --- > > Key: CASSANDRA-11710 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11710 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Branimir Lambov > Fix For: 3.6 > > > Running stress on trunk dies with OOM after about 3.5M ops: > {code} > ERROR [CompactionExecutor:1] 2016-05-04 15:01:31,231 > JVMStabilityInspector.java:137 - JVM state determined to be unstable. > Exiting forcefully due to: > java.lang.OutOfMemoryError: Direct buffer memory > at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_91] > at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) > ~[na:1.8.0_91] > at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) > ~[na:1.8.0_91] > at > org.apache.cassandra.utils.memory.BufferPool.allocateDirectAligned(BufferPool.java:519) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.access$600(BufferPool.java:46) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$GlobalPool.allocateMoreChunks(BufferPool.java:276) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$GlobalPool.get(BufferPool.java:249) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$LocalPool.addChunkFromGlobalPool(BufferPool.java:338) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$LocalPool.get(BufferPool.java:381) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.maybeTakeFromPool(BufferPool.java:142) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:114) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:84) > ~[main/:na] > at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:135) > ~[main/:na] > at 
org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) > ~[main/:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807) > ~[caffeine-2.2.6.jar:na] > at > java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) > ~[na:1.8.0_91] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66) > ~[caffeine-2.2.6.jar:na] > at > org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215) > ~[main/:na] > at > org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193) > ~[main/:na] > at > org.apache.cassandra.io.util.LimitingRebufferer.rebuffer(LimitingRebufferer.java:34) > ~[main/:na] > at > org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78) > ~[main/:na] > at > org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:72) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.read(RebufferingInputStream.java:88) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:66) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:400) > ~[main/:na] > at > 
org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(ByteBufferUtil.java:338) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:414) > ~[main/:na] > at > org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:243) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:473) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:451) >
[jira] [Commented] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274239#comment-15274239 ] Brett Snyder commented on CASSANDRA-9395: - Makes sense, thanks! > Prohibit Counter type as part of the PK > --- > > Key: CASSANDRA-9395 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9395 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sebastian Estevez >Assignee: Brett Snyder > Labels: lhf > Fix For: 2.2.x, 3.0.x > > Attachments: cassandra-2.1-9395.txt > > > C* let me do this: > {code} > create table aggregated.counter1 ( a counter , b int , PRIMARY KEY (b,a)) > WITH CLUSTERING ORDER BY (a desc); > {code} > and then treated a as an int! > {code} > cqlsh> update aggregated.counter1 set a= a+1 where b = 2 ;Bad Request: > Invalid operation (a = a + 1) for non counter column a > {code} > {code} > insert INTO aggregated.counter1 (b, a ) VALUES ( 3, 2) ; > {code} > (should have given can't insert must update error) > Even though desc table still shows it as a counter type: > {code} > CREATE TABLE counter1 ( > b int, > a counter, > PRIMARY KEY ((b), a) > ) WITH CLUSTERING ORDER BY (a DESC) AND > bloom_filter_fp_chance=0.01 AND > caching='KEYS_ONLY' AND > comment='' AND > dclocal_read_repair_chance=0.10 AND > gc_grace_seconds=864000 AND > index_interval=128 AND > read_repair_chance=0.00 AND > replicate_on_write='true' AND > populate_io_cache_on_flush='false' AND > default_time_to_live=0 AND > speculative_retry='99.0PERCENTILE' AND > memtable_flush_period_in_ms=0 AND > compaction={'class': 'SizeTieredCompactionStrategy'} AND > compression={'sstable_compression': 'LZ4Compressor'}; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11724) False Failure Detection in Big Cassandra Cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274240#comment-15274240 ] Sylvain Lebresne commented on CASSANDRA-11724: -- bq. that instance-1 has not received any heartbeat after some time from instance-2 because the instance-2 run a long computation process That certainly explains what happens, but we need to know what the "long computation process" is if we want to improve it. Would you be able to get some stack dumps or wire up some profiling to tell us what it is, since you seem to have all the testing set up? > False Failure Detection in Big Cassandra Cluster > > > Key: CASSANDRA-11724 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11724 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeffrey F. Lukman > Labels: gossip, node-failure > Attachments: Workload1.jpg, Workload2.jpg, Workload3.jpg, > Workload4.jpg > > > We are running some tests on Cassandra v2.2.5 stable in a big cluster. In our > setup, each machine has 16 cores and runs 8 Cassandra instances, and we test > clusters of 32, 64, 128, 256, and 512 instances of Cassandra. We use the > default number of vnodes for each instance, which is 256. The data and log > directories are on an in-memory tmpfs file system. > We run several types of workloads on this Cassandra cluster: > Workload1: Just start the cluster > Workload2: Start half of the cluster, wait until it gets into a stable > condition, and start the other half of the cluster > Workload3: Start half of the cluster, wait until it gets into a stable > condition, load some data, and start the other half of the cluster > Workload4: Start the cluster, wait until it gets into a stable condition, > load some data, and decommission one node > For this testing, we measure the total number of false failure detections > inside the cluster.
By a false failure detection we mean that, for example, > instance-1 marks instance-2 down even though instance-2 is not actually down. > Digging into the root cause, we find that instance-1 has not received any > heartbeat from instance-2 for some time because instance-2 was running a long > computation. > Here I attach the graphs of each workload's results. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11517) o.a.c.utils.UUIDGen could handle contention better
[ https://issues.apache.org/jira/browse/CASSANDRA-11517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274247#comment-15274247 ] Ariel Weisberg commented on CASSANDRA-11517: Thanks for catching that test bug (again), fixing the comparison, and running it through JMH. Your changes LGTM. > o.a.c.utils.UUIDGen could handle contention better > -- > > Key: CASSANDRA-11517 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11517 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Minor > Fix For: 3.x > > > I noticed this while profiling a query handler implementation that uses UUIDGen > to get handles to track queries for logging purposes. > Under contention, threads are being unscheduled instead of spinning until the > lock is available. I would have expected intrinsic locks to be able to adapt > to this based on profiling information. > Either way, it seems pretty straightforward to rewrite this to use a CAS > loop and test that it generally produces unique values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
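The CAS-loop rewrite the ticket describes can be sketched as follows. This is a simplified illustration, not the actual UUIDGen patch: the real code builds a 60-bit UUID timestamp with a clock sequence, while this sketch isolates just the contended part, claiming a strictly increasing value without a lock.

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the proposed CAS-loop approach (not the real
// UUIDGen code): under contention, threads spin and retry instead of
// being descheduled while waiting on a monitor.
public class MonotonicClock
{
    private final AtomicLong lastTimestamp = new AtomicLong(0);

    // Returns a strictly increasing value, so concurrent callers
    // never observe duplicates.
    public long nextTimestamp()
    {
        while (true)
        {
            long last = lastTimestamp.get();
            // In real UUIDGen this would be the current time in 100ns
            // units since the UUID epoch; the clock source is irrelevant
            // to the contention-handling pattern shown here.
            long now = System.currentTimeMillis();
            long candidate = Math.max(now, last + 1);
            if (lastTimestamp.compareAndSet(last, candidate))
                return candidate; // we won the race; value is unique
            // CAS failed: another thread advanced the clock; retry.
        }
    }
}
```

Uniqueness here comes from {{last + 1}}: even if the wall clock stalls or goes backwards, each successful CAS hands out a value greater than the previous one.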
[jira] [Commented] (CASSANDRA-11721) Have a per operation truncate ddl "no snapshot" option
[ https://issues.apache.org/jira/browse/CASSANDRA-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274233#comment-15274233 ] Sylvain Lebresne commented on CASSANDRA-11721: -- Not necessarily saying I prefer it, but just mentioning that, if I understand the motivating use case correctly, an alternative could be to have a table option to override the yaml one. In any case, it's worth mentioning that: # adding the option to {{TRUNCATE}} would require a change to the internal truncate VERB, which means a change to the intra-node protocol, which kind of means 4.0 # adding it as a table option requires adding it to the schema table, and that's currently also problematic on minors (CASSANDRA-11382) tl;dr, that's a reasonable option to add but it might not happen right away. > Have a per operation truncate ddl "no snapshot" option > -- > > Key: CASSANDRA-11721 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11721 > Project: Cassandra > Issue Type: Wish > Components: CQL >Reporter: Jeremy Hanna >Priority: Minor > > Right now, truncate will always create a snapshot. That is the right > thing to do most of the time. 'auto_snapshot' exists as an option to disable > that, but it is server wide and requires a restart to change. There are data > models, however, that require rotating through a handful of tables and > periodically truncating them. Currently you either have to operate with no > safety net (some actually do this) or manually clear those snapshots out > periodically. Both are less than optimal. > In HDFS, when you delete something it generally goes to the trash. If you > don't want that safety net, you can do something like 'rm -rf -skiptrash > /jeremy/stuff' in one command. > It would be nice to have something in the truncate ddl to skip the snapshot > on a per operation basis. Perhaps 'TRUNCATE solarsystem.earth NO SNAPSHOT'. 
> This might also be useful in those situations where you're just playing with > data and you don't want something to take a snapshot in a development system. > If that's the case, this would also be useful for the DROP operation, but > that convenience is not the main reason for this option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-9395: -- Fix Version/s: (was: 2.1.x) 3.0.x 2.2.x -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274232#comment-15274232 ] T Jake Luciani commented on CASSANDRA-9395: --- Sorry, this dropped off my radar. The underlying issue here was that the CLUSTERING ORDER was wrapping the CounterColumnType in a ReversedType. I changed your test to check this case. [2.2 branch|https://github.com/tjake/cassandra/tree/counter-pk-fix-2.2] [2.2 testall|http://cassci.datastax.com/job/tjake-counter-pk-fix-2.2-testall] [2.2 dtest|http://cassci.datastax.com/job/tjake-counter-pk-fix-2.2-dtest] [3.0 branch|https://github.com/tjake/cassandra/tree/counter-pk-fix-3.0] [3.0 testall|http://cassci.datastax.com/job/tjake-counter-pk-fix-3.0-testall] [3.0 dtest|http://cassci.datastax.com/job/tjake-counter-pk-fix-3.0-dtest] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
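The fix described above boils down to checking for counters anywhere in the primary key while remembering that a {{CLUSTERING ORDER BY (... DESC)}} column arrives wrapped in a reversed-order type. A toy model of that unwrap-then-check logic (these classes are simplified stand-ins, not Cassandra's real {{AbstractType}} hierarchy):

```java
// Toy stand-ins for Cassandra's marshal types (NOT the real
// AbstractType hierarchy), just to illustrate the bug: validation
// checked the declared type, and a DESC clustering column arrives
// wrapped in a reversed-order type that hid the counter inside.
abstract class ToyType
{
    boolean isCounter() { return false; }
}

class ToyIntType extends ToyType {}

class ToyCounterType extends ToyType
{
    @Override boolean isCounter() { return true; }
}

class ToyReversedType extends ToyType
{
    final ToyType base;
    ToyReversedType(ToyType base) { this.base = base; }

    // The essence of the fix: delegate to the wrapped type so a
    // reversed counter is still detected as a counter.
    @Override boolean isCounter() { return base.isCounter(); }
}

class PrimaryKeyValidator
{
    // Reject counters anywhere in the primary key, reversed or not.
    static void validate(ToyType type)
    {
        if (type.isCounter())
            throw new IllegalArgumentException("counter columns cannot be used in the primary key");
    }
}
```

With delegation in place, the table from the report ({{PRIMARY KEY (b, a)}} with {{a counter}} and {{CLUSTERING ORDER BY (a DESC)}}) is rejected at CREATE time instead of silently treating {{a}} as a plain value.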
[jira] [Updated] (CASSANDRA-9395) Prohibit Counter type as part of the PK
[ https://issues.apache.org/jira/browse/CASSANDRA-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-9395: -- Component/s: CQL -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11710) Cassandra dies with OOM when running stress
[ https://issues.apache.org/jira/browse/CASSANDRA-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274226#comment-15274226 ] Sylvain Lebresne commented on CASSANDRA-11710: -- [~tjake] I believe you may want to check what's above (I think we need to change the cassandra-3.6-tentative flag)
[jira] [Commented] (CASSANDRA-11710) Cassandra dies with OOM when running stress
[ https://issues.apache.org/jira/browse/CASSANDRA-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274219#comment-15274219 ] Branimir Lambov commented on CASSANDRA-11710: - Yes, it is. If we don't include it, nodes spawned with default ccm settings will OOM in 3.6.
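The trace shows direct-buffer exhaustion in the {{BufferPool}} that backs the chunk cache, so until a fix lands the OOM can usually be worked around by shrinking that pool or raising the JVM's direct-memory ceiling. Both knobs below exist today ({{file_cache_size_in_mb}} in cassandra.yaml bounds the buffer pool; {{-XX:MaxDirectMemorySize}} is a standard HotSpot flag); the values are illustrative, not recommendations:

{code}
# cassandra.yaml: cap the buffer pool that backs the chunk cache
file_cache_size_in_mb: 256

# JVM options (jvm.options / JVM_OPTS): raise the direct-memory ceiling
-XX:MaxDirectMemorySize=1G
{code}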
[jira] [Commented] (CASSANDRA-11726) IndexOutOfBoundsException when selecting (distinct) row ids from counter table.
[ https://issues.apache.org/jira/browse/CASSANDRA-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274214#comment-15274214 ] Aleksey Yeschenko commented on CASSANDRA-11726: --- It sounds like corruption to me. Repair would not correct this, but scrub *might*. Can you run scrub on the 'bad' nodes and get back to us with the result? Thanks. > IndexOutOfBoundsException when selecting (distinct) row ids from counter > table. > --- > > Key: CASSANDRA-11726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11726 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: C* 3.5, cluster of 4 nodes. >Reporter: Jaroslav Kamenik > > I have simple table containing counters: > CREATE TABLE tablename ( > object_id ascii, > counter_id ascii, > count counter, > PRIMARY KEY (object_id, counter_id) > ) WITH CLUSTERING ORDER BY (counter_id ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'enabled': 'false'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > Counters are often inc/decreased, whole rows are queried, deleted sometimes. > After some time I tried to query all object_ids, but it failed with: > cqlsh:woc> consistency quorum; > cqlsh:woc> select object_id from tablename; > ServerError: message="java.lang.IndexOutOfBoundsException"> > select * from ..., select where .., updates works well.. > With consistency one it works sometimes, so it seems something is broken at > one server, but I tried to repair table there and it did not help. 
> Whole exception from server log: > java.lang.IndexOutOfBoundsException: null > at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_73] > at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) > ~[na:1.8.0_73] > at > org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:141) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.context.CounterContext.access$100(CounterContext.java:76) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.context.CounterContext$ContextState.(CounterContext.java:758) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.context.CounterContext$ContextState.wrap(CounterContext.java:765) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.context.CounterContext.merge(CounterContext.java:271) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.Conflicts.mergeCounterValues(Conflicts.java:76) > ~[apache-cassandra-3.5.jar:3.5] > at org.apache.cassandra.db.rows.Cells.reconcile(Cells.java:143) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:591) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[apache-cassandra-3.5.jar:3.5] > at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473) > ~[apache-cassandra-3.5.jar:3.5] > at > 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) > ~[apache-cassandra-3.5.jar:3.5] > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[apache-cassandra-3.5.jar:3.5] > at
[jira] [Updated] (CASSANDRA-11725) Check for unnecessary JMX port setting in env vars at startup
[ https://issues.apache.org/jira/browse/CASSANDRA-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11725: -- Labels: lhf (was: ) > Check for unnecessary JMX port setting in env vars at startup > - > > Key: CASSANDRA-11725 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11725 > Project: Cassandra > Issue Type: Improvement > Components: Lifecycle >Reporter: Sam Tunnicliffe >Priority: Minor > Labels: lhf > Fix For: 3.x > > > Since CASSANDRA-10091, C* expects to always be in control of initializing its > JMX connector server. However, if {{com.sun.management.jmxremote.port}} is > set when the JVM is started, the bootstrap agent takes over and sets up the > server before any C* code runs. Because C* is then unable to bind the server > it creates to the specified port, startup is halted and the root cause is > somewhat unclear. > We should add a check at startup so a more informative message can be > provided. This would test for the presence of the system property which would > differentiate from the case where some other process is already bound to the > port. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
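The proposed check is essentially a system-property probe before the connector server is created. A minimal sketch of what it could look like (the method name and message wording are illustrative, not the committed patch):

```java
// Hypothetical startup check along the lines the ticket suggests: if
// the JVM bootstrap agent already claimed the JMX port because
// com.sun.management.jmxremote.port was set, fail fast with a clear
// message instead of an opaque bind failure later on.
public class JmxPortCheck
{
    static final String JMX_PORT_PROPERTY = "com.sun.management.jmxremote.port";

    // Returns an explanatory message if the conflicting property is
    // set, or null if Cassandra is free to manage the JMX server.
    public static String checkForConflictingJmxSetting()
    {
        String port = System.getProperty(JMX_PORT_PROPERTY);
        if (port == null)
            return null;
        return "The JMX connector server was already initialized by the JVM ("
               + JMX_PORT_PROPERTY + "=" + port + "); remove this property and let "
               + "Cassandra initialize JMX itself (see CASSANDRA-10091).";
    }
}
```

Because the property is only set when the bootstrap agent started the server, its presence cleanly distinguishes this case from a genuinely busy port held by another process.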
[jira] [Commented] (CASSANDRA-11712) testJsonThreadSafety is failing / flapping
[ https://issues.apache.org/jira/browse/CASSANDRA-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274190#comment-15274190 ] Alex Petrov commented on CASSANDRA-11712: - [~JoshuaMcKenzie] sorry about that, most likely posted it twice because of the network glitch. > testJsonThreadSafety is failing / flapping > -- > > Key: CASSANDRA-11712 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11712 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Alex Petrov >Priority: Minor > > {{JsonTest::testJsonThreadSafety}} is failing quite often recently: > https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11540-2.2-testall/lastCompletedBuild/testReport/org.apache.cassandra.cql3.validation.entities/JsonTest/testJsonThreadSafety/ > Output looks like > {code} > Stacktrace > java.util.concurrent.TimeoutException > at java.util.concurrent.FutureTask.get(FutureTask.java:201) > at > org.apache.cassandra.cql3.validation.entities.JsonTest.testJsonThreadSafety(JsonTest.java:1028) > WARN 12:19:23 Small commitlog volume detected at > build/test/cassandra/commitlog:30; setting commitlog_total_space_in_mb to > 1982. You can override this in cassandra.yaml > WARN 12:19:23 Small commitlog volume detected at > build/test/cassandra/commitlog:30; setting commitlog_total_space_in_mb to > 1982. You can override this in cassandra.yaml > WARN 12:19:23 Only 5581 MB free across all data volumes. Consider adding > more capacity to your cluster or removing obsolete snapshots > WARN 12:19:23 Only 5581 MB free across all data volumes. Consider adding > more capacity to your cluster or removing obsolete snapshots > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > Seed 889742091470 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-11715: - Priority: Minor (was: Major) > Make GCInspector's MIN_LOG_DURATION configurable > > > Key: CASSANDRA-11715 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11715 > Project: Cassandra > Issue Type: Improvement >Reporter: Brandon Williams >Priority: Minor > Labels: lhf > > It's common for people to run C* with the G1 collector on appropriately-sized > heaps. Quite often, the target pause time is set to 500ms, but GCI fires on > anything over 200ms. We can already control the warn threshold, but these > are acceptable GCs for the configuration and create noise at the INFO log > level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
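The ticket above asks to make GCInspector's hard-coded 200ms MIN_LOG_DURATION configurable. A minimal sketch of the kind of change requested, assuming a hypothetical property name ({{cassandra.gc_log_threshold_in_ms}} is an illustration here, not a committed flag):

```java
// Hypothetical sketch of the requested change: read the GC log threshold
// from a system property instead of a hard-coded constant. The property name
// "cassandra.gc_log_threshold_in_ms" is an illustration, not a committed flag.
class GcLogThreshold
{
    static final long DEFAULT_MIN_LOG_DURATION_MS = 200; // current hard-coded value

    static long minLogDurationMs()
    {
        String override = System.getProperty("cassandra.gc_log_threshold_in_ms");
        if (override == null)
            return DEFAULT_MIN_LOG_DURATION_MS;
        long parsed = Long.parseLong(override);
        // ignore nonsensical values rather than silencing GC logging entirely
        return parsed > 0 ? parsed : DEFAULT_MIN_LOG_DURATION_MS;
    }
}
```

With this shape, a G1 deployment targeting 500ms pauses could raise the threshold at startup without a code change.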
[jira] [Resolved] (CASSANDRA-11712) testJsonThreadSafety is failing / flapping
[ https://issues.apache.org/jira/browse/CASSANDRA-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie resolved CASSANDRA-11712. - Resolution: Duplicate > testJsonThreadSafety is failing / flapping > -- > > Key: CASSANDRA-11712 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11712 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Alex Petrov >Priority: Minor > > {{JsonTest::testJsonThreadSafety}} is failing quite often recently: > https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11540-2.2-testall/lastCompletedBuild/testReport/org.apache.cassandra.cql3.validation.entities/JsonTest/testJsonThreadSafety/ > Output looks like > {code} > Stacktrace > java.util.concurrent.TimeoutException > at java.util.concurrent.FutureTask.get(FutureTask.java:201) > at > org.apache.cassandra.cql3.validation.entities.JsonTest.testJsonThreadSafety(JsonTest.java:1028) > WARN 12:19:23 Small commitlog volume detected at > build/test/cassandra/commitlog:30; setting commitlog_total_space_in_mb to > 1982. You can override this in cassandra.yaml > WARN 12:19:23 Small commitlog volume detected at > build/test/cassandra/commitlog:30; setting commitlog_total_space_in_mb to > 1982. You can override this in cassandra.yaml > WARN 12:19:23 Only 5581 MB free across all data volumes. Consider adding > more capacity to your cluster or removing obsolete snapshots > WARN 12:19:23 Only 5581 MB free across all data volumes. Consider adding > more capacity to your cluster or removing obsolete snapshots > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > WARN 12:19:26 Aggregation query used without partition key > Seed 889742091470 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11710) Cassandra dies with OOM when running stress
[ https://issues.apache.org/jira/browse/CASSANDRA-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274180#comment-15274180 ] Sylvain Lebresne commented on CASSANDRA-11710: -- Is this a problem for the currently frozen 3.6? That is, should we make sure this gets included, or do we have a regression? > Cassandra dies with OOM when running stress > --- > > Key: CASSANDRA-11710 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11710 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Branimir Lambov > Fix For: 3.6 > > > Running stress on trunk dies with OOM after about 3.5M ops: > {code} > ERROR [CompactionExecutor:1] 2016-05-04 15:01:31,231 > JVMStabilityInspector.java:137 - JVM state determined to be unstable. > Exiting forcefully due to: > java.lang.OutOfMemoryError: Direct buffer memory > at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_91] > at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) > ~[na:1.8.0_91] > at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) > ~[na:1.8.0_91] > at > org.apache.cassandra.utils.memory.BufferPool.allocateDirectAligned(BufferPool.java:519) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.access$600(BufferPool.java:46) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$GlobalPool.allocateMoreChunks(BufferPool.java:276) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$GlobalPool.get(BufferPool.java:249) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$LocalPool.addChunkFromGlobalPool(BufferPool.java:338) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool$LocalPool.get(BufferPool.java:381) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.maybeTakeFromPool(BufferPool.java:142) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.takeFromPool(BufferPool.java:114) > ~[main/:na] > at > org.apache.cassandra.utils.memory.BufferPool.get(BufferPool.java:84) > 
~[main/:na] > at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:135) > ~[main/:na] > at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) > ~[main/:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807) > ~[caffeine-2.2.6.jar:na] > at > java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) > ~[na:1.8.0_91] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97) > ~[caffeine-2.2.6.jar:na] > at > com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66) > ~[caffeine-2.2.6.jar:na] > at > org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215) > ~[main/:na] > at > org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193) > ~[main/:na] > at > org.apache.cassandra.io.util.LimitingRebufferer.rebuffer(LimitingRebufferer.java:34) > ~[main/:na] > at > org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78) > ~[main/:na] > at > org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:72) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.read(RebufferingInputStream.java:88) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:66) > ~[main/:na] > at > org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60) > ~[main/:na] > at > 
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:400) > ~[main/:na] > at > org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(ByteBufferUtil.java:338) > ~[main/:na] > at > org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:414) > ~[main/:na] > at > org.apache.cassandra.db.rows.Cell$Serializer.deserialize(Cell.java:243) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:473) > ~[main/:na] > at >
[jira] [Updated] (CASSANDRA-11709) Lock contention when large number of dead nodes come back within short time
[ https://issues.apache.org/jira/browse/CASSANDRA-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-11709: Issue Type: Improvement (was: Bug) > Lock contention when large number of dead nodes come back within short time > --- > > Key: CASSANDRA-11709 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11709 > Project: Cassandra > Issue Type: Improvement >Reporter: Dikang Gu >Assignee: Joel Knighton > Fix For: 2.2.x, 3.x > > > We have a few hundred nodes across 3 data centers, and we are doing a few > million writes per second into the cluster. > We were trying to simulate a data center failure by disabling the gossip on > all the nodes in one data center. After ~20mins, I re-enabled the gossip on > those nodes, doing 5 nodes in each batch and sleeping 5 seconds between > batches. > After that, I saw the latency of read/write requests increase a lot, and > client requests started to time out. > On the node, I can see there are a huge number of pending tasks in GossipStage. 
> = > 2016-05-02_23:55:08.99515 WARN 23:55:08 Gossip stage has 36337 pending > tasks; skipping status check (no nodes will be marked down) > 2016-05-02_23:55:09.36009 INFO 23:55:09 Node > /2401:db00:2020:717a:face:0:41:0 state jump to normal > 2016-05-02_23:55:09.99057 INFO 23:55:09 Node > /2401:db00:2020:717a:face:0:43:0 state jump to normal > 2016-05-02_23:55:10.09742 WARN 23:55:10 Gossip stage has 36421 pending > tasks; skipping status check (no nodes will be marked down) > 2016-05-02_23:55:10.91860 INFO 23:55:10 Node > /2401:db00:2020:717a:face:0:45:0 state jump to normal > 2016-05-02_23:55:11.20100 WARN 23:55:11 Gossip stage has 36558 pending > tasks; skipping status check (no nodes will be marked down) > 2016-05-02_23:55:11.57893 INFO 23:55:11 Node > /2401:db00:2030:612a:face:0:49:0 state jump to normal > 2016-05-02_23:55:12.23405 INFO 23:55:12 Node /2401:db00:2020:7189:face:0:7:0 > state jump to normal > > And I took jstack of the node, I found the read/write threads are blocked by > a lock, > read thread == > "Thrift:7994" daemon prio=10 tid=0x7fde91080800 nid=0x5255 waiting for > monitor entry [0x7fde6f8a1000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.cassandra.locator.TokenMetadata.cachedOnlyTokenMap(TokenMetadata.java:546) > - waiting to lock <0x7fe4faef4398> (a > org.apache.cassandra.locator.TokenMetadata) > at > org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:111) > at > org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:3155) > at > org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1526) > at > org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1521) > at > org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:155) > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1328) > at > 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1270) > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1195) > at > org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:118) > at > org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:275) > at > org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:457) > at > org.apache.cassandra.thrift.CassandraServer.getSliceInternal(CassandraServer.java:346) > at > org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:325) > at > org.apache.cassandra.thrift.Cassandra$Processor$get_slice.getResult(Cassandra.java:3659) > at > org.apache.cassandra.thrift.Cassandra$Processor$get_slice.getResult(Cassandra.java:3643) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > = writer === > "Thrift:7668" daemon prio=10 tid=0x7fde90d91000 nid=0x50e9 waiting for > monitor entry [0x7fde78bbc000] >java.lang.Thread.State:
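The jstack above shows Thrift read and write threads all blocked on the {{TokenMetadata}} monitor inside {{cachedOnlyTokenMap}}. As a rough illustration of the contention pattern (not Cassandra's actual implementation), a snapshot cache can be structured so that readers never queue behind the rebuild lock:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Illustrative sketch (not Cassandra's actual TokenMetadata code) of the
// caching pattern at issue: readers want a snapshot of shared state, but in
// the jstack above they all block on one monitor while the snapshot is
// rebuilt. With tryLock, only one thread rebuilds the shared cache; a reader
// that loses the race builds a private copy instead of queueing on the lock.
class CachedSnapshot<T>
{
    private final AtomicReference<T> cached = new AtomicReference<>();
    private final ReentrantLock rebuildLock = new ReentrantLock();

    T get(Supplier<T> rebuild)
    {
        T current = cached.get();
        if (current != null)
            return current;                // fast path: no locking at all
        if (rebuildLock.tryLock())
        {
            try
            {
                T fresh = rebuild.get();
                cached.set(fresh);         // publish for other readers
                return fresh;
            }
            finally
            {
                rebuildLock.unlock();
            }
        }
        // another thread holds the rebuild lock; don't block behind it
        return rebuild.get();
    }

    // state changes (e.g. gossip endpoint updates) drop the snapshot
    void invalidate() { cached.set(null); }
}
```

When tens of thousands of gossip state changes arrive in a burst, each invalidation forces a rebuild; the point of the sketch is that reads degrade to extra copying rather than to a monitor pile-up.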
[jira] [Updated] (CASSANDRA-11705) clearSnapshots using Directories.dataDirectories instead of CFS.initialDirectories
[ https://issues.apache.org/jira/browse/CASSANDRA-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11705: -- Reviewer: Aleksey Yeschenko Status: Patch Available (was: Open) > clearSnapshots using Directories.dataDirectories instead of > CFS.initialDirectories > -- > > Key: CASSANDRA-11705 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11705 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 3.0.x, 3.7 > > > An oversight in CASSANDRA-10518 prevents snapshots created in data > directories defined outside of cassandra.yaml from being cleared by > {{Keyspace.clearSnapshots}}. {{ColumnFamilyStore.initialDirectories}} should > be used when finding snapshots to clear, not {{Directories.dataDirectories}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11695) Move JMX connection config to cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-11695: - Labels: lhf (was: ) > Move JMX connection config to cassandra.yaml > > > Key: CASSANDRA-11695 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11695 > Project: Cassandra > Issue Type: Improvement > Components: Configuration >Reporter: Sam Tunnicliffe >Priority: Minor > Labels: lhf > Fix For: 3.x > > > Since CASSANDRA-10091, we always construct the JMX connector server > programatically, so we could move its configuration from cassandra-env to > yaml. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail
[ https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274123#comment-15274123 ] Paulo Motta commented on CASSANDRA-8343: While trying to reproduce this once more on 2.2-HEAD before submitting a final patch, I noticed that bootstrap was not failing when secondary index creation took longer than {{streaming_socket_timeout_in_ms}}: even though the stream session failed on the sender side, which closes the socket, it completed successfully on the bootstrapping node. The strange thing is that while the socket was closed on the sender side after {{streaming_socket_timeout_in_ms}}, the receiver still sent the last {{complete}} message on the "closed" socket without failures. I'll see what's going on. > Secondary index creation causes moves/bootstraps to fail > > > Key: CASSANDRA-8343 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8343 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Frisch >Assignee: Paulo Motta > > Node moves/bootstraps are failing if the stream timeout is set to a value in > which secondary index creation cannot complete. This happens because at the > end of the very last stream the StreamInSession.closeIfFinished() function > calls maybeBuildSecondaryIndexes on every column family. If the stream time > + all CF's index creation takes longer than your stream timeout then the > socket closes from the sender's side, the receiver of the stream tries to > write to said socket because it's not null, an IOException is thrown but not > caught in closeIfFinished(), the exception is caught somewhere and not > logged, AbstractStreamSession.close() is never called, and the CountDownLatch > is never decremented. This causes the move/bootstrap to continue forever > until the node is restarted. 
> This problem of stream time + secondary index creation time exists on > decommissioning/unbootstrap as well but since it's on the sending side the > timeout triggers the onFailure() callback which does decrement the > CountDownLatch leading to completion. > A cursory glance at the 2.0 code leads me to believe this problem would exist > there as well. > Temporary workaround: set a really high/infinite stream timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
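The failure mode described in the report — an exception escaping {{closeIfFinished()}} so the {{CountDownLatch}} is never decremented — boils down to counting down only on the success path. A minimal sketch of the defensive structure (hypothetical names, not the actual streaming classes):

```java
import java.util.concurrent.CountDownLatch;

// Minimal sketch of the bug pattern described above: if the post-stream work
// (secondary index build + final socket write) throws and the latch is only
// decremented on the success path, the waiting move/bootstrap hangs forever.
// Moving countDown() into finally guarantees the waiter is released either
// way. Names here are illustrative, not Cassandra's actual classes.
class StreamCompletion
{
    static void finish(CountDownLatch latch, Runnable postStreamWork)
    {
        try
        {
            postStreamWork.run(); // may throw, e.g. writing to a closed socket
        }
        finally
        {
            latch.countDown();    // always reached, so the session can terminate
        }
    }
}
```

The sender side already behaves this way in effect, since its timeout triggers the {{onFailure()}} callback that decrements the latch.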
[jira] [Updated] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail
[ https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-8343: --- Status: Open (was: Patch Available) > Secondary index creation causes moves/bootstraps to fail > > > Key: CASSANDRA-8343 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8343 > Project: Cassandra > Issue Type: Bug >Reporter: Michael Frisch >Assignee: Paulo Motta > > Node moves/bootstraps are failing if the stream timeout is set to a value in > which secondary index creation cannot complete. This happens because at the > end of the very last stream the StreamInSession.closeIfFinished() function > calls maybeBuildSecondaryIndexes on every column family. If the stream time > + all CF's index creation takes longer than your stream timeout then the > socket closes from the sender's side, the receiver of the stream tries to > write to said socket because it's not null, an IOException is thrown but not > caught in closeIfFinished(), the exception is caught somewhere and not > logged, AbstractStreamSession.close() is never called, and the CountDownLatch > is never decremented. This causes the move/bootstrap to continue forever > until the node is restarted. > This problem of stream time + secondary index creation time exists on > decommissioning/unbootstrap as well but since it's on the sending side the > timeout triggers the onFailure() callback which does decrement the > CountDownLatch leading to completion. > A cursory glance at the 2.0 code leads me to believe this problem would exist > there as well. > Temporary workaround: set a really high/infinite stream timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274075#comment-15274075 ] Alexander Heiß commented on CASSANDRA-11670: In our cassandra.yaml we have a *commitlog_segment_size_in_mb* of 32 (the default) and no *max_mutation_size_in_kb*. In the system.log it says *max_mutation_size_in_kb=null* and *commitlog_segment_size_in_mb=32*. The config file and log output are the same on every server in the cluster. > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva > Fix For: 3.0.5 > > > I have a cluster with 2 DCs, with 2 nodes in each DC. I wanted to add 1 node to each > DC. One node was added successfully after I had run scrubbing. > Now I'm trying to add a node to the other DC, but get an error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 > StreamReceiveTask.java:214 - Error applying streamed data: > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5]
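The error above comes from {{CommitLog.add}} rejecting a single mutation larger than the configured maximum; when {{max_mutation_size_in_kb}} is unset, the documented default is half of {{commitlog_segment_size_in_mb}}. Below is a sketch of that arithmetic plus a greedy batch-splitting helper of the kind a fix for this ticket might use — illustrative only, not shipped code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the sizing behind the error above, plus one possible mitigation:
// greedily splitting a partition's updates into batches that each stay under
// max_mutation_size instead of writing them as one oversized mutation.
// Illustrative only; not the actual Cassandra fix.
class MutationSizing
{
    // documented default: max_mutation_size = half the commitlog segment size
    static long defaultMaxMutationSize(int commitlogSegmentSizeInMb)
    {
        return commitlogSegmentSizeInMb * 1024L * 1024L / 2;
    }

    // group update sizes (in bytes) into batches whose total stays <= maxBytes;
    // a single update larger than maxBytes still gets its own batch, since it
    // cannot be split further at this level
    static List<List<Long>> splitBySize(List<Long> updateSizes, long maxBytes)
    {
        List<List<Long>> batches = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentBytes = 0;
        for (long size : updateSizes)
        {
            if (!current.isEmpty() && currentBytes + size > maxBytes)
            {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty())
            batches.add(current);
        return batches;
    }
}
```

This mirrors the trade-off discussed on the ticket: batch sizes can only be estimated up front, so any split threshold has to be conservative relative to the commit log limit.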
cassandra git commit: cassandra-stress should support case sensitive schemas
Repository: cassandra Updated Branches: refs/heads/trunk f580fb0ff -> adbef7982 cassandra-stress should support case sensitive schemas patch by Giampaolo Trapasso; reviewed by tjake for CASSANDRA-11546 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/adbef798 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/adbef798 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/adbef798 Branch: refs/heads/trunk Commit: adbef79823e91627989ba3893931986ded510550 Parents: f580fb0 Author: Giampaolo TrapassoAuthored: Thu Apr 14 10:40:29 2016 +0200 Committer: T Jake Luciani Committed: Fri May 6 09:52:55 2016 -0400 -- CHANGES.txt | 1 + .../apache/cassandra/stress/StressProfile.java | 23 ++-- 2 files changed, 17 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/adbef798/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c6b0af5..8e545c4 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.8 + * cassandra-stress profiles should support case sensitive schemas (CASSANDRA-11546) * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578) * Faster streaming (CASSANDRA-9766) http://git-wip-us.apache.org/repos/asf/cassandra/blob/adbef798/tools/stress/src/org/apache/cassandra/stress/StressProfile.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java index d7b0540..8b59bda 100644 --- a/tools/stress/src/org/apache/cassandra/stress/StressProfile.java +++ b/tools/stress/src/org/apache/cassandra/stress/StressProfile.java @@ -28,6 +28,7 @@ import java.io.Serializable; import java.net.URI; import java.util.*; import java.util.concurrent.TimeUnit; +import java.util.regex.Pattern; import com.google.common.base.Function; import com.google.common.util.concurrent.Uninterruptibles; @@ -87,6 +88,8 @@ public class StressProfile implements Serializable 
transient volatile Map queryStatements; transient volatile Map thriftQueryIds; +private static final Pattern lowercaseAlphanumeric = Pattern.compile("[a-z0-9_]+"); + private void init(StressYaml yaml) throws RequestValidationException { keyspaceName = yaml.keyspace; @@ -243,7 +246,7 @@ public class StressProfile implements Serializable TableMetadata metadata = client.getCluster() .getMetadata() .getKeyspace(keyspaceName) - .getTable(tableName); + .getTable(quoteIdentifier(tableName)); if (metadata == null) throw new RuntimeException("Unable to find table " + keyspaceName + "." + tableName); @@ -386,7 +389,7 @@ public class StressProfile implements Serializable StringBuilder sb = new StringBuilder(); if (!isKeyOnlyTable) { -sb.append("UPDATE \"").append(tableName).append("\" SET "); +sb.append("UPDATE ").append(quoteIdentifier(tableName)).append(" SET "); //PK Columns StringBuilder pred = new StringBuilder(); pred.append(" WHERE "); @@ -401,21 +404,21 @@ public class StressProfile implements Serializable else pred.append(" AND "); -pred.append(c.getName()).append(" = ?"); + pred.append(quoteIdentifier(c.getName())).append(" = ?"); } else { if (firstCol) firstCol = false; else sb.append(','); -sb.append(c.getName()).append(" = "); + sb.append(quoteIdentifier(c.getName())).append(" = "); switch (c.getType().getName()) { case SET: case LIST: case COUNTER: -sb.append(c.getName()).append(" + ?"); +
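The diff above routes table and column names through a {{quoteIdentifier}} helper guarded by the {{lowercaseAlphanumeric}} pattern. A standalone sketch of that rule, mirroring the patch but not the committed method verbatim:

```java
import java.util.regex.Pattern;

// Sketch of the quoting rule the diff above applies: identifiers that are not
// plain lowercase alphanumerics (plus underscore) are wrapped in double
// quotes so CQL treats them case-sensitively. Mirrors the pattern added to
// StressProfile, but is a standalone illustration.
class Identifiers
{
    private static final Pattern LOWERCASE_ALPHANUMERIC = Pattern.compile("[a-z0-9_]+");

    static String quoteIdentifier(String name)
    {
        return LOWERCASE_ALPHANUMERIC.matcher(name).matches()
             ? name                  // safe unquoted: CQL lowercases it anyway
             : '"' + name + '"';     // preserve case with explicit quoting
    }
}
```

Applied uniformly when building UPDATE/SELECT statements, this keeps case-insensitive schemas unchanged while making case-sensitive ones round-trip correctly.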
[jira] [Updated] (CASSANDRA-11546) Stress doesn't respect case-sensitive column names when building insert queries
[ https://issues.apache.org/jira/browse/CASSANDRA-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-11546: --- Resolution: Fixed Fix Version/s: 3.8 Reproduced In: 3.0.5, 2.2.5, 3.6 (was: 2.2.5, 3.0.5, 3.6) Status: Resolved (was: Patch Available) committed with a couple of changes: * regex wouldn't allow fields that started with _ * fetching table metadata needed to also be quoted when tablename was case sensitive * codestyle Thanks! > Stress doesn't respect case-sensitive column names when building insert > queries > --- > > Key: CASSANDRA-11546 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11546 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Joel Knighton >Assignee: Giampaolo >Priority: Trivial > Labels: lhf > Fix For: 3.8 > > Attachments: cassandra-11546-trunk-giampaolo-trapasso.patch, > example.yaml > > > When using a custom stress profile, if the schema uses case sensitive column > names, stress doesn't respect case sensitivity when building insert/update > statements. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274059#comment-15274059 ] Paulo Motta commented on CASSANDRA-11670: - Are you still facing this? If so, you'll probably need to provide more details on why the stream session failed (the cause should appear right before this error in the logs) > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva > Fix For: 3.0.5 > > > I have a cluster with 2 DCs, with 2 nodes in each DC. I wanted to add 1 node to each > DC. One node was added successfully after I had run scrubbing. > Now I'm trying to add a node to the other DC, but get an error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubbing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 > StreamReceiveTask.java:214 - Error applying streamed data: > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at
[jira] [Resolved] (CASSANDRA-11727) Streaming error while Bootstraping Materialized View
[ https://issues.apache.org/jira/browse/CASSANDRA-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta resolved CASSANDRA-11727. - Resolution: Duplicate > Streaming error while Bootstraping Materialized View > > > Key: CASSANDRA-11727 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11727 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: Ubuntu 14.04 > Oracle JDK 1.8.0_11 > 16GB RAM > Cassandra Version 3.0.5 >Reporter: Alexander Heiß > Fix For: 3.0.5 > > > We have a Cluster with 4 Servers in 2 Datacenters (2 in DC A and 2 in DC B), > Root servers. > We have a Replication Factor of 2, so atm we have 100% load on all 4 Servers. > Around 250GB of Data. Everything works fine. Now we want to add 2 more > Servers to the Cluster, one in each Datacenter. But we always get the same > Kind of error while Bootstraping: > {quote}ERROR 13:21:34 Unknown exception caught while attempting to update > MaterializedView! messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 24032623 bytes is too large > for the maxiumum size of 16777216 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > 
~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > {quote} > and > {quote} > WARN 13:21:34 Some data streaming failed. Use nodetool to check bootstrap > state and resume. For more, see `nodetool help bootstrap`. IN_PROGRESS > {quote} > And if we Resume the Bootstrap it starts all over again and then it fails > with the same Error Message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11727) Streaming error while Bootstraping Materialized View
[ https://issues.apache.org/jira/browse/CASSANDRA-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274054#comment-15274054 ] Paulo Motta commented on CASSANDRA-11727: - Closing this as a duplicate of CASSANDRA-11670 so we centralize discussion there. Can you double check that the source node of this streaming session (or other nodes in general) do not have a custom {{commitlog_segment_size_in_mb}} or {{max_mutation_size_in_kb}} configuration set? You may check that by grepping your system.log or cassandra.yaml for these properties. Please report back in the other ticket. > Streaming error while Bootstraping Materialized View > > > Key: CASSANDRA-11727 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11727 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: Ubuntu 14.04 > Oracle JDK 1.8.0_11 > 16GB RAM > Cassandra Version 3.0.5 >Reporter: Alexander Heiß > Fix For: 3.0.5 > > > We have a Cluster with 4 Servers in 2 Datacenters (2 in DC A and 2 in DC B), > Root servers. > We have a Replication Factor of 2, so atm we have 100% load on all 4 Servers. > Around 250GB of Data. Everything works fine. Now we want to add 2 more > Servers to the Cluster, one in each Datacenter. But we always get the same > Kind of error while Bootstraping: > {quote}ERROR 13:21:34 Unknown exception caught while attempting to update > MaterializedView! 
messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 24032623 bytes is too large > for the maxiumum size of 16777216 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > {quote} > and > {quote} > WARN 13:21:34 Some data streaming failed. Use nodetool to check bootstrap > state and resume. For more, see `nodetool help bootstrap`. IN_PROGRESS > {quote} > And if we Resume the Bootstrap it starts all over again and then it fails > with the same Error Message.
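The configuration check requested in the comment above can be scripted; a hedged sketch that scans cassandra.yaml text for explicit overrides of the two size properties (the sample string is illustrative only, not a real config):

```python
import re

# Sketch of the suggested check: scan cassandra.yaml text for explicit
# overrides of commitlog_segment_size_in_mb / max_mutation_size_in_kb.
def find_size_overrides(config_text):
    pattern = re.compile(
        r'^\s*(commitlog_segment_size_in_mb|max_mutation_size_in_kb)\s*:\s*(\S+)',
        re.MULTILINE)
    return dict(pattern.findall(config_text))

sample = "num_tokens: 256\ncommitlog_segment_size_in_mb: 64\n"
print(find_size_overrides(sample))  # {'commitlog_segment_size_in_mb': '64'}
```

Running the same scan over system.log lines (Cassandra echoes its effective settings at startup) covers the case where the yaml on disk has changed since the node booted.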
[jira] [Created] (CASSANDRA-11727) Streaming error while Bootstraping Materialized View
Alexander Heiß created CASSANDRA-11727: -- Summary: Streaming error while Bootstraping Materialized View Key: CASSANDRA-11727 URL: https://issues.apache.org/jira/browse/CASSANDRA-11727 Project: Cassandra Issue Type: Bug Components: Streaming and Messaging Environment: Ubuntu 14.04 Oracle JDK 1.8.0_11 16GB RAM Cassandra Version 3.0.5 Reporter: Alexander Heiß Fix For: 3.0.5 We have a Cluster with 4 Servers in 2 Datacenters (2 in DC A and 2 in DC B), Root servers. We have a Replication Factor of 2, so atm we have 100% load on all 4 Servers. Around 250GB of Data. Everything works fine. Now we want to add 2 more Servers to the Cluster, one in each Datacenter. But we always get the same Kind of error while Bootstraping: {quote}ERROR 13:21:34 Unknown exception caught while attempting to update MaterializedView! messages_dump.messages java.lang.IllegalArgumentException: Mutation of 24032623 bytes is too large for the maxiumum size of 16777216 at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5] at 
org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) [apache-cassandra-3.0.5.jar:3.0.5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_11] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_11] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] {quote} and {quote} WARN 13:21:34 Some data streaming failed. Use nodetool to check bootstrap state and resume. For more, see `nodetool help bootstrap`. IN_PROGRESS {quote} And if we Resume the Bootstrap it starts all over again and then it fails with the same Error Message.
[jira] [Commented] (CASSANDRA-11566) read time out when do count(*)
[ https://issues.apache.org/jira/browse/CASSANDRA-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274025#comment-15274025 ] Benjamin Lerer commented on CASSANDRA-11566: [~alseddnm] Count queries are also slower when they are performed through CQLSH, and that is due to the page size. To perform a count, the coordinator requests all the rows from the other nodes so that it can count them; to avoid an OutOfMemoryException it requests them in pages, using the page size. In CQLSH the page size is 20, so it has to issue many more requests than if the query comes from the Java driver, where the page size is 5000. > read time out when do count(*) > -- > > Key: CASSANDRA-11566 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11566 > Project: Cassandra > Issue Type: Bug > Environment: staging >Reporter: nizar > Fix For: 3.3 > > > Hello I using Cassandra Datastax 3.3, I keep getting read time out even if I > set the limit to 1, it would make sense if the limit is high number .. > However only limit 1 and still timing out sounds odd? > [cqlsh 5.0.1 | Cassandra 3.3 | CQL spec 3.4.0 | Native protocol v4] > cqlsh:test> select count(*) from test.my_view where s_id=? and flag=false > limit 1; > OperationTimedOut: errors={}, last_host= > my key look like this : > CREATE MATERIALIZED VIEW test.my_view AS > SELECT * > FROM table_name > WHERE id IS NOT NULL AND processed IS NOT NULL AND time IS NOT NULL AND id > IS NOT NULL > PRIMARY KEY ( ( s_id, flag ), time, id ) > WITH CLUSTERING ORDER BY ( time ASC ); > I have 5 nodes with replica 3 > CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', > 'dc': '3'} AND durable_writes = true; > Below was the result for nodetool cfstats > Keyspace: test > Read Count: 128770 > Read Latency: 1.42208769123243 ms. > Write Count: 0 > Write Latency: NaN ms.
> Pending Flushes: 0 > Table: tableName > SSTable count: 3 > Space used (live): 280777032 > Space used (total): 280777032 > Space used by snapshots (total): 0 > Off heap memory used (total): 2850227 > SSTable Compression Ratio: 0.24706731995327527 > Number of keys (estimate): 1277211 > Memtable cell count: 0 > Memtable data size: 0 > Memtable off heap memory used: 0 > Memtable switch count: 0 > Local read count: 3 > Local read latency: 0.396 ms > Local write count: 0 > Local write latency: NaN ms > Pending flushes: 0 > Bloom filter false positives: 0 > Bloom filter false ratio: 0.0 > Bloom filter space used: 1589848 > Bloom filter off heap memory used: 1589824 > Index summary off heap memory used: 1195691 > Compression metadata off heap memory used: 64712 > Compacted partition minimum bytes: 311 > Compacted partition maximum bytes: 535 > Compacted partition mean bytes: 458 > Average live cells per slice (last five minutes): 102.92671205446536 > Maximum live cells per slice (last five minutes): 103 > Average tombstones per slice (last five minutes): 1.0 > Maximum tombstones per slice (last five minutes): 1 > Table: my_view > SSTable count: 4 > Space used (live): 126114270 > Space used (total): 126114270 > Space used by snapshots (total): 0 > Off heap memory used (total): 91588 > SSTable Compression Ratio: 0.1652453778228639 > Number of keys (estimate): 8 > Memtable cell count: 0 > Memtable data size: 0 > Memtable off heap memory used: 0 > Memtable switch count: 0 > Local read count: 128767 > Local read latency: 1.590 ms > Local write count: 0 > Local write latency: NaN ms > Pending flushes: 0 > Bloom filter false positives: 0 > Bloom filter false ratio: 0.0 > Bloom filter space used: 96 > Bloom filter off heap memory used: 64 > Index summary off heap memory used: 140 > Compression metadata off heap memory used: 91384 > Compacted partition minimum bytes: 3974 > Compacted partition maximum bytes: 386857368 > Compacted partition mean bytes: 26034715 > Average live 
cells per slice (last five minutes): 102.99462595230145 > Maximum live cells per slice (last five minutes): 103 > Average tombstones per slice (last five minutes): 1.0 > Maximum tombstones per slice (last five minutes): 1 > Thank you. > Nizar
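The round-trip count behind the CQLSH-vs-driver difference described in the comment above is just ceil(rows / page_size); the 20 and 5000 figures below are the ones quoted in the comment, so verify them against your cqlsh and driver versions:

```python
import math

# Coordinator round trips needed to page through total_rows at a given fetch size.
def pages_needed(total_rows, page_size):
    return math.ceil(total_rows / page_size)

rows = 1_000_000
print(pages_needed(rows, 20))    # 50000 pages at cqlsh's quoted page size
print(pages_needed(rows, 5000))  # 200 pages at the Java driver's quoted default
```

The 250x difference in round trips explains why the same count(*) can complete from a driver yet time out in cqlsh.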
[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows
[ https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15274015#comment-15274015 ] Benjamin Lerer commented on CASSANDRA-11528: Sorry, I read your summary too fast. I thought that you only had the problem with count queries on 3.3. From your log I see in several places that the JVM has created heap dumps: {{Dumping heap to java_pid56752.hprof}}. If you could provide us with one of those dumps, it would be useful for seeing where the problem is coming from. > Server Crash when select returns more than a few hundred rows > - > > Key: CASSANDRA-11528 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11528 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: windows 7, 8 GB machine >Reporter: Mattias W >Assignee: Benjamin Lerer > Fix For: 3.x > > Attachments: datastax_ddc_server-stdout.2016-04-07.log > > > While implementing a dump procedure, which did "select * from" from one table > at a row, I instantly kill the server. A simple > {noformat}select count(*) from {noformat} > also kills it. For a while, I thought the size of blobs were the cause > I also try to only have a unique id as partition key, I was afraid a single > partition got too big or so, but that didn't change anything > It happens every time, both from Java/Clojure and from DevCenter. > I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is > so quick, so nothing is recorded there. > There is a Java-out-of-memory in the logs, but that isn't from the time of > the crash. > It only happens for one table, it only has 15000 entries, but there are blobs > and byte[] stored there, size between 100kb - 4Mb. Total size for that table > is about 6.5 GB on disk. > I made a workaround by doing many small selects instead, each only fetching > 100 rows. > Is there a setting a can set to make the system log more eagerly, in order to > at least get a stacktrace or similar, that might help you.
> It is the prun_srv that dies. Restarting the NT service makes Cassandra run > again
[jira] [Updated] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-11615: --- Assignee: Andy Tolbert (was: Eduard Tudenhoefner) > cassandra-stress blocks when connecting to a big cluster > > > Key: CASSANDRA-11615 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11615 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Eduard Tudenhoefner >Assignee: Andy Tolbert > Fix For: 3.7, 3.0.7 > > Attachments: 11615-3.0-2nd.patch, 11615-3.0.patch > > > I had a *100* node cluster and was running > {code} > cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' > 'limit=1000/s' > {code} > Based on the thread dump it looks like it's been blocked at > https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96 > {code} > "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting > for monitor entry [0x7f36cc788000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96) > - waiting to lock <0x0005c003d920> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327) > "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting > for monitor entry [0x7f36cc889000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > 
org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96) > - waiting to lock <0x0005c003d920> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327) > {code} > I was trying the same with with a smaller cluster (50 nodes) and it was > working fine.
[jira] [Updated] (CASSANDRA-11615) cassandra-stress blocks when connecting to a big cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-11615: --- Resolution: Fixed Fix Version/s: (was: 3.0.x) 3.0.7 3.7 Status: Resolved (was: Patch Available) committed thx > cassandra-stress blocks when connecting to a big cluster > > > Key: CASSANDRA-11615 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11615 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Eduard Tudenhoefner >Assignee: Andy Tolbert > Fix For: 3.7, 3.0.7 > > Attachments: 11615-3.0-2nd.patch, 11615-3.0.patch > > > I had a *100* node cluster and was running > {code} > cassandra-stress read n=100 no-warmup cl=LOCAL_QUORUM -rate 'threads=20' > 'limit=1000/s' > {code} > Based on the thread dump it looks like it's been blocked at > https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L96 > {code} > "Thread-20" #245 prio=5 os_prio=0 tid=0x7f3781822000 nid=0x46c4 waiting > for monitor entry [0x7f36cc788000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96) > - waiting to lock <0x0005c003d920> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327) > "Thread-19" #244 prio=5 os_prio=0 tid=0x7f378182 nid=0x46c3 waiting > for monitor entry [0x7f36cc889000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > 
org.apache.cassandra.stress.util.JavaDriverClient.prepare(JavaDriverClient.java:96) > - waiting to lock <0x0005c003d920> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation$JavaDriverWrapper.createPreparedStatement(CqlOperation.java:314) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:77) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109) > at > org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261) > at > org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327) > {code} > I was trying the same with with a smaller cluster (50 nodes) and it was > working fine.
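The thread dumps above show every stress thread waiting on the same monitor guarding the prepared-statement map in {{JavaDriverClient.prepare}}. A common way out of that pattern (illustrative only; the actual fix for CASSANDRA-11615 was a driver update, not this code) is per-key compute-once caching instead of one coarse lock:

```python
import threading

# Per-key compute-once cache: distinct queries prepare concurrently, and each
# query is prepared at most once, instead of all callers serializing on one lock.
class StatementCache:
    def __init__(self):
        self._guard = threading.Lock()   # protects the per-key lock table only
        self._stmts = {}
        self._key_locks = {}

    def prepare(self, query, do_prepare):
        stmt = self._stmts.get(query)    # fast path: already prepared
        if stmt is not None:
            return stmt
        with self._guard:                # brief: just pick/create the key lock
            key_lock = self._key_locks.setdefault(query, threading.Lock())
        with key_lock:                   # slow path held per query, not globally
            if query not in self._stmts:
                self._stmts[query] = do_prepare(query)
            return self._stmts[query]

cache = StatementCache()
print(cache.prepare("SELECT 1", lambda q: "prepared:" + q))  # prepared:SELECT 1
```

With 20 threads all preparing the same handful of statements, only the first caller per statement pays the preparation cost; the rest hit the fast path.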
[3/6] cassandra git commit: Update java driver
Update java driver patch by Andy Tolbert; reviewed by tjake for CASSANDRA-11615 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06870372 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06870372 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06870372 Branch: refs/heads/trunk Commit: 06870372d0e144bb0dd7f567f2efeca1dc996080 Parents: 86ba227 Author: T Jake LucianiAuthored: Thu May 5 12:42:00 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 09:04:31 2016 -0400 -- CHANGES.txt| 3 +- build.xml | 2 +- lib/cassandra-driver-core-3.0.0-shaded.jar | Bin 2433676 -> 0 bytes lib/cassandra-driver-core-3.0.1-shaded.jar | Bin 0 -> 2445093 bytes lib/licenses/cassandra-driver-3.0.0.txt| 177 lib/licenses/cassandra-driver-3.0.1.txt| 177 6 files changed, 180 insertions(+), 179 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 3a49f6a..2e2b6af 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,6 +1,7 @@ 3.0.7 * Refactor Materialized View code (CASSANDRA-11475) - + * Update Java Driver (CASSANDRA-11615) + 3.0.6 * Disallow creating view with a static column (CASSANDRA-11602) * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593) http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/build.xml -- diff --git a/build.xml b/build.xml index 271481f..f4099f7 100644 --- a/build.xml +++ b/build.xml @@ -401,7 +401,7 @@ - + http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/cassandra-driver-core-3.0.0-shaded.jar -- diff --git a/lib/cassandra-driver-core-3.0.0-shaded.jar b/lib/cassandra-driver-core-3.0.0-shaded.jar deleted file mode 100644 index 86093a9..000 Binary files a/lib/cassandra-driver-core-3.0.0-shaded.jar and /dev/null differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/cassandra-driver-core-3.0.1-shaded.jar -- 
diff --git a/lib/cassandra-driver-core-3.0.1-shaded.jar b/lib/cassandra-driver-core-3.0.1-shaded.jar new file mode 100644 index 000..bc269a0 Binary files /dev/null and b/lib/cassandra-driver-core-3.0.1-shaded.jar differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/licenses/cassandra-driver-3.0.0.txt -- diff --git a/lib/licenses/cassandra-driver-3.0.0.txt b/lib/licenses/cassandra-driver-3.0.0.txt deleted file mode 100644 index f433b1a..000 --- a/lib/licenses/cassandra-driver-3.0.0.txt +++ /dev/null @@ -1,177 +0,0 @@ - - Apache License - Version 2.0, January 2004 -http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. 
- - "Work" shall mean the work of authorship, whether in Source
[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.7
Merge branch 'cassandra-3.0' into cassandra-3.7 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/886f8757 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/886f8757 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/886f8757 Branch: refs/heads/trunk Commit: 886f8757143d652c1d30d9bc792e1dbee7d14da4 Parents: a87fd71 0687037 Author: T Jake LucianiAuthored: Fri May 6 09:08:35 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 09:08:35 2016 -0400 -- CHANGES.txt| 3 + build.xml | 2 +- lib/cassandra-driver-core-3.0.0-shaded.jar | Bin 2433676 -> 0 bytes lib/cassandra-driver-core-3.0.1-shaded.jar | Bin 0 -> 2445093 bytes lib/licenses/cassandra-driver-3.0.0.txt| 177 lib/licenses/cassandra-driver-3.0.1.txt| 177 6 files changed, 181 insertions(+), 178 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/886f8757/CHANGES.txt -- diff --cc CHANGES.txt index 882be7c,2e2b6af..ff98d48 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,74 -1,8 +1,77 @@@ -3.0.7 +3.7 ++Merged from 3.0: + * Refactor Materialized View code (CASSANDRA-11475) + * Update Java Driver (CASSANDRA-11615) - -3.0.6 +Merged from 2.2: + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) + +3.6 + * Enhanced Compaction Logging (CASSANDRA-10805) + * Make prepared statement cache size configurable (CASSANDRA-11555) + * Integrated JMX authentication and authorization (CASSANDRA-10091) + * Add units to stress ouput (CASSANDRA-11352) + * Fix PER PARTITION LIMIT for single and multi partitions queries (CASSANDRA-11603) + * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863) + * Clarify ClusteringPrefix hierarchy (CASSANDRA-11213) + * Always perform collision check before joining ring (CASSANDRA-10134) + * SSTableWriter output discrepancy (CASSANDRA-11646) + * Fix potential timeout in NativeTransportService.testConcurrentDestroys (CASSANDRA-10756) + * Support large 
partitions on the 3.0 sstable format (CASSANDRA-11206) + * Add support to rebuild from specific range (CASSANDRA-10406) + * Optimize the overlapping lookup by calculating all the + bounds in advance (CASSANDRA-11571) + * Support json/yaml output in noetool tablestats (CASSANDRA-5977) + * (stress) Add datacenter option to -node options (CASSANDRA-11591) + * Fix handling of empty slices (CASSANDRA-11513) + * Make number of cores used by cqlsh COPY visible to testing code (CASSANDRA-11437) + * Allow filtering on clustering columns for queries without secondary indexes (CASSANDRA-11310) + * Refactor Restriction hierarchy (CASSANDRA-11354) + * Eliminate allocations in R/W path (CASSANDRA-11421) + * Update Netty to 4.0.36 (CASSANDRA-11567) + * Fix PER PARTITION LIMIT for queries requiring post-query ordering (CASSANDRA-11556) + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818) + * Support UDT in CQLSSTableWriter (CASSANDRA-10624) + * Support for non-frozen user-defined types, updating + individual fields of user-defined types (CASSANDRA-7423) + * Make LZ4 compression level configurable (CASSANDRA-11051) + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017) + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295) + * Improve field-checking and error reporting in cassandra.yaml (CASSANDRA-10649) + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507) + * More user friendly error when providing an invalid token to nodetool (CASSANDRA-9348) + * Add static column support to SASI index (CASSANDRA-11183) + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization (CASSANDRA-11434) + * Support LIKE operator in prepared statements (CASSANDRA-11456) + * Add a command to see if a Materialized View has finished building (CASSANDRA-9967) + * Log endpoint and port associated with streaming operation (CASSANDRA-8777) + * Print sensible units for all log messages (CASSANDRA-9692) + * Upgrade Netty to version 4.0.34 
(CASSANDRA-11096) + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372) + * Compress only inter-dc traffic by default (CASSANDRA-) + * Add metrics to track write amplification (CASSANDRA-11420) + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739) + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411) + * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220) + * Add auto import java.util for UDF code block (CASSANDRA-11392) + * Add --hex-format option to
[6/6] cassandra git commit: Merge branch 'cassandra-3.7' into trunk
Merge branch 'cassandra-3.7' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f580fb0f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f580fb0f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f580fb0f Branch: refs/heads/trunk Commit: f580fb0ff2b2356f25839ad328833c11a7a8eed2 Parents: 89a645a 886f875 Author: T Jake LucianiAuthored: Fri May 6 09:11:06 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 09:11:06 2016 -0400 -- CHANGES.txt| 3 + build.xml | 2 +- lib/cassandra-driver-core-3.0.0-shaded.jar | Bin 2433676 -> 0 bytes lib/cassandra-driver-core-3.0.1-shaded.jar | Bin 0 -> 2445093 bytes lib/licenses/cassandra-driver-3.0.0.txt| 177 lib/licenses/cassandra-driver-3.0.1.txt| 177 6 files changed, 181 insertions(+), 178 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f580fb0f/CHANGES.txt -- diff --cc CHANGES.txt index ba64a19,ff98d48..c6b0af5 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,9 -1,7 +1,12 @@@ +3.8 + * Remove DatabaseDescriptor dependency from FileUtils (CASSANDRA-11578) + * Faster streaming (CASSANDRA-9766) + + 3.7 + Merged from 3.0: + * Refactor Materialized View code (CASSANDRA-11475) + * Update Java Driver (CASSANDRA-11615) Merged from 2.2: * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f580fb0f/build.xml --
[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.7
Merge branch 'cassandra-3.0' into cassandra-3.7

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/886f8757
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/886f8757
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/886f8757
Branch: refs/heads/cassandra-3.7
Commit: 886f8757143d652c1d30d9bc792e1dbee7d14da4
Parents: a87fd71 0687037
Author: T Jake Luciani
Authored: Fri May 6 09:08:35 2016 -0400
Committer: T Jake Luciani
Committed: Fri May 6 09:08:35 2016 -0400

 CHANGES.txt                                |   3 +
 build.xml                                  |   2 +-
 lib/cassandra-driver-core-3.0.0-shaded.jar | Bin 2433676 -> 0 bytes
 lib/cassandra-driver-core-3.0.1-shaded.jar | Bin 0 -> 2445093 bytes
 lib/licenses/cassandra-driver-3.0.0.txt    | 177 ----
 lib/licenses/cassandra-driver-3.0.1.txt    | 177 ++++
 6 files changed, 181 insertions(+), 178 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/886f8757/CHANGES.txt

diff --cc CHANGES.txt
index 882be7c,2e2b6af..ff98d48
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,74 -1,8 +1,77 @@@
-3.0.7
+3.7
++Merged from 3.0:
+ * Refactor Materialized View code (CASSANDRA-11475)
+ * Update Java Driver (CASSANDRA-11615)
-
-3.0.6
+Merged from 2.2:
+ * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
+
+3.6
+ * Enhanced Compaction Logging (CASSANDRA-10805)
+ * Make prepared statement cache size configurable (CASSANDRA-11555)
+ * Integrated JMX authentication and authorization (CASSANDRA-10091)
+ * Add units to stress output (CASSANDRA-11352)
+ * Fix PER PARTITION LIMIT for single and multi partitions queries (CASSANDRA-11603)
+ * Add uncompressed chunk cache for RandomAccessReader (CASSANDRA-5863)
+ * Clarify ClusteringPrefix hierarchy (CASSANDRA-11213)
+ * Always perform collision check before joining ring (CASSANDRA-10134)
+ * SSTableWriter output discrepancy (CASSANDRA-11646)
+ * Fix potential timeout in NativeTransportService.testConcurrentDestroys (CASSANDRA-10756)
+ * Support large partitions on the 3.0 sstable format (CASSANDRA-11206)
+ * Add support to rebuild from specific range (CASSANDRA-10406)
+ * Optimize the overlapping lookup by calculating all the
+ bounds in advance (CASSANDRA-11571)
+ * Support json/yaml output in nodetool tablestats (CASSANDRA-5977)
+ * (stress) Add datacenter option to -node options (CASSANDRA-11591)
+ * Fix handling of empty slices (CASSANDRA-11513)
+ * Make number of cores used by cqlsh COPY visible to testing code (CASSANDRA-11437)
+ * Allow filtering on clustering columns for queries without secondary indexes (CASSANDRA-11310)
+ * Refactor Restriction hierarchy (CASSANDRA-11354)
+ * Eliminate allocations in R/W path (CASSANDRA-11421)
+ * Update Netty to 4.0.36 (CASSANDRA-11567)
+ * Fix PER PARTITION LIMIT for queries requiring post-query ordering (CASSANDRA-11556)
+ * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
+ * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
+ * Support for non-frozen user-defined types, updating
+ individual fields of user-defined types (CASSANDRA-7423)
+ * Make LZ4 compression level configurable (CASSANDRA-11051)
+ * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
+ * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
+ * Improve field-checking and error reporting in cassandra.yaml (CASSANDRA-10649)
+ * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
+ * More user friendly error when providing an invalid token to nodetool (CASSANDRA-9348)
+ * Add static column support to SASI index (CASSANDRA-11183)
+ * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization (CASSANDRA-11434)
+ * Support LIKE operator in prepared statements (CASSANDRA-11456)
+ * Add a command to see if a Materialized View has finished building (CASSANDRA-9967)
+ * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
+ * Print sensible units for all log messages (CASSANDRA-9692)
+ * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
+ * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
+ * Compress only inter-dc traffic by default (CASSANDRA-)
+ * Add metrics to track write amplification (CASSANDRA-11420)
+ * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
+ * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
+ * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220)
+ * Add auto import java.util for UDF code block (CASSANDRA-11392)
+ * Add --hex-format
[1/6] cassandra git commit: Update java driver
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 86ba22747 -> 06870372d refs/heads/cassandra-3.7 a87fd715d -> 886f87571 refs/heads/trunk 89a645ac4 -> f580fb0ff Update java driver patch by Andy Tolbert; reviewed by tjake for CASSANDRA-11615 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06870372 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06870372 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06870372 Branch: refs/heads/cassandra-3.0 Commit: 06870372d0e144bb0dd7f567f2efeca1dc996080 Parents: 86ba227 Author: T Jake LucianiAuthored: Thu May 5 12:42:00 2016 -0400 Committer: T Jake Luciani Committed: Fri May 6 09:04:31 2016 -0400 -- CHANGES.txt| 3 +- build.xml | 2 +- lib/cassandra-driver-core-3.0.0-shaded.jar | Bin 2433676 -> 0 bytes lib/cassandra-driver-core-3.0.1-shaded.jar | Bin 0 -> 2445093 bytes lib/licenses/cassandra-driver-3.0.0.txt| 177 lib/licenses/cassandra-driver-3.0.1.txt| 177 6 files changed, 180 insertions(+), 179 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 3a49f6a..2e2b6af 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,6 +1,7 @@ 3.0.7 * Refactor Materialized View code (CASSANDRA-11475) - + * Update Java Driver (CASSANDRA-11615) + 3.0.6 * Disallow creating view with a static column (CASSANDRA-11602) * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593) http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/build.xml -- diff --git a/build.xml b/build.xml index 271481f..f4099f7 100644 --- a/build.xml +++ b/build.xml @@ -401,7 +401,7 @@ - + http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/cassandra-driver-core-3.0.0-shaded.jar -- diff --git a/lib/cassandra-driver-core-3.0.0-shaded.jar b/lib/cassandra-driver-core-3.0.0-shaded.jar deleted file mode 100644 index 86093a9..000 
Binary files a/lib/cassandra-driver-core-3.0.0-shaded.jar and /dev/null differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/cassandra-driver-core-3.0.1-shaded.jar -- diff --git a/lib/cassandra-driver-core-3.0.1-shaded.jar b/lib/cassandra-driver-core-3.0.1-shaded.jar new file mode 100644 index 000..bc269a0 Binary files /dev/null and b/lib/cassandra-driver-core-3.0.1-shaded.jar differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/06870372/lib/licenses/cassandra-driver-3.0.0.txt -- diff --git a/lib/licenses/cassandra-driver-3.0.0.txt b/lib/licenses/cassandra-driver-3.0.0.txt deleted file mode 100644 index f433b1a..000 --- a/lib/licenses/cassandra-driver-3.0.0.txt +++ /dev/null @@ -1,177 +0,0 @@ - - Apache License - Version 2.0, January 2004 -http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. 
- - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form,
[jira] [Commented] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns
[ https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273962#comment-15273962 ]

Alex Petrov commented on CASSANDRA-9842:

As pointed out by [~blerer], the previous patch was working only for the local path. I've added some [dtests|https://github.com/ifesdjeen/cassandra-dtest/tree/7826-trunk] for the same behaviour and fixed the patch for the non-local path in {{CQL3CasRequest}}. The null checks in conditions are now redundant. Changes in {{CQL3CasRequest}} alone wouldn't have been sufficient, though, as the result of {{ModificationStatement::casInternal}} relies on a non-null value.

|| ||2.1||2.2||3.0||trunk||
||code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
||utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
||dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
> Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
> Reporter: Chandra Sekar
> Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Both inserting a row (in a non-existent partition) and updating a static
> column in the same LWT fail. Creating the partition before performing the
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text,
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
> False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
> True
> {code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
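The inconsistency reported above — "IF scol = null" holding for an existing partition whose static column is unset, but not for a partition that doesn't exist at all — comes down to how the condition is evaluated when there is no prior row. The following is a toy model only (an assumed simplification of the check, not Cassandra's actual {{CQL3CasRequest}} code), contrasting a check that requires the partition to exist with one that treats a missing partition as satisfying "= null":

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the '= null' LWT condition (NOT Cassandra's real code).
// A partition is modeled as a Map from column name to value; a missing
// partition is modeled as null.
public class LwtNullConditionDemo {
    // Buggy variant: requires the partition to exist before comparing,
    // so a missing partition makes the condition "fail".
    static boolean buggyApplies(Map<String, Long> partition) {
        if (partition == null)
            return false;
        return partition.get("scol") == null;
    }

    // Consistent variant: a missing partition also satisfies "scol = null".
    static boolean fixedApplies(Map<String, Long> partition) {
        return partition == null || partition.get("scol") == null;
    }

    public static void main(String[] args) {
        Map<String, Long> withUnsetStatic = new HashMap<>(); // partition exists, scol unset
        System.out.println(buggyApplies(null));             // false
        System.out.println(buggyApplies(withUnsetStatic));  // true  <- inconsistent with the line above
        System.out.println(fixedApplies(null));             // true
        System.out.println(fixedApplies(withUnsetStatic));  // true
    }
}
```

In the toy model, the same logical state ("scol is null") yields different answers in the buggy variant depending on whether the partition exists, which mirrors the False/True outcomes in the two cqlsh transcripts above.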
[jira] [Updated] (CASSANDRA-11475) MV code refactor
[ https://issues.apache.org/jira/browse/CASSANDRA-11475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-11475:
-
Resolution: Fixed
Fix Version/s: (was: 3.0.x)
(was: 3.x)
3.0.7
3.7
Status: Resolved (was: Patch Available)

Committed, thanks. (For info, the first run of CI wasn't happy because I wrote the {{removeByName}} method using the iterator's {{remove()}} method, but {{COWArrayList}} doesn't support that. Anyway, that was trivial to fix, and after that CI was happy. I also had to rebase on trunk and I re-ran CI to make sure it was happy there too, which it was. Hence the slight delay in committing.)

> MV code refactor
> 
>
> Key: CASSANDRA-11475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11475
> Project: Cassandra
> Issue Type: Bug
> Reporter: Sylvain Lebresne
> Assignee: Sylvain Lebresne
> Fix For: 3.7, 3.0.7
>
>
> While working on CASSANDRA-5546 I ran into a problem with TTLs on MVs which, looking more closely, is a bug in the MV code. But one thing led to another and I reviewed a good portion of the MV code and found the following correctness problems:
> * If a base row is TTLed, then even if an update removes that TTL, the view entry remains TTLed and expires, leading to an inconsistency.
> * Due to calling the wrong ctor for {{LivenessInfo}}, when a TTL was set on the base table, the view entry was living twice as long as the TTL. Again leading to a temporary inconsistency.
> * When reading existing data to compute view updates, all deletion information is completely ignored (the code uses a {{PartitionIterator}} instead of an {{UnfilteredPartitionIterator}}). This is a serious issue, since it means some deletions could be totally ignored as far as views are concerned, especially when messages are delivered to a replica out of order. I'll note that while the 2 previous points are relatively easy to fix, I didn't find an easy and clean way to fix this one in the current code.
> Further, I think the MV code in general has inefficiencies/code complexities that should be avoidable:
> * {{TemporalRow.Set}} is buffering both everything read and a pretty much complete copy of the updates. That's a potentially high memory requirement. We shouldn't have to copy the updates and we shouldn't buffer all reads, but rather work incrementally.
> * The {{TemporalRow}}/{{TemporalRow.Set}}/{{TemporalCell}} classes are somewhat re-inventing the wheel. They are really just storing both an update we're doing and the corresponding existing data, but we already have {{Row}}/{{Partition}}/{{Cell}} for that. In practice, those {{Temporal*}} classes generate a lot of allocations that we could avoid.
> * The code from CASSANDRA-10060 to avoid multiple reads of the base table with multiple views doesn't work when the update has partition/range tombstones, because the code uses {{TemporalRow.Set.setTombstonedExisting()}} to trigger reuse, but the {{TemporalRow.Set.withNewViewPrimaryKey()}} method is used between views and it does not preserve the {{hasTombstonedExisting}} flag. But that oversight, which is trivial to fix, is kind of a good thing, since if you fix it, you're left with a correctness problem: the read done when there is a partition deletion depends on the view itself (if there are clustering filters in particular), and so reusing that read for other views is wrong. Which makes that whole reuse code really dodgy imo: the read for existing data is in {{View.java}}, suggesting that it depends on the view (which again, it does, at least for partition deletion), but it shouldn't if we're going to reuse the result across multiple views.
> * Even ignoring the previous point, we still potentially read the base table twice if the update mixes both row updates and partition/range deletions, potentially re-reading the same values.
> * It's probably more minor, but the reading code is using {{QueryPager}}, which is probably an artifact of the initial version of the code being pre-8099; it's not necessary anymore (the reads are local, and locally we're completely iterator based), and it adds overhead, especially when we do page. I'll note that despite using paging, the current code still buffers everything in {{TemporalRow.Set}} anyway.
> Overall, I suspect trying to fix the problems above (particularly the fact that existing deletion infos are ignored) is only going to add complexity with the current code, and we'd still have to fix the inefficiencies. So I propose a refactor of that code which does 2 main things:
> # it removes all of {{TemporalRow}} and related classes. Instead, it directly uses the existing {{Row}} (with
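The refactor proposed in the description replaces the {{Temporal*}} copies with a merge-then-diff over existing {{Row}} objects: merge the incoming update into the existing row (last-write-wins per cell), then diff the merged result against the existing row to find which columns actually need view updates. The following is a heavily simplified, self-contained sketch of that idea only — the {{Cell}} record and string-keyed maps are assumptions for illustration, not the actual {{Rows.merge}}/{{Rows.diff}} API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy merge-then-diff model of computing view updates (NOT the real patch).
public class RowMergeDiffDemo {
    // A cell is a (timestamp, value) pair; columns are keyed by name.
    record Cell(long timestamp, String value) {}

    // Merge update into existing: for each column, the cell with the
    // higher timestamp wins (last-write-wins, ties keep the existing cell).
    static Map<String, Cell> merge(Map<String, Cell> existing, Map<String, Cell> update) {
        Map<String, Cell> merged = new HashMap<>(existing);
        update.forEach((col, cell) -> merged.merge(col, cell,
                (oldCell, newCell) -> oldCell.timestamp() >= newCell.timestamp() ? oldCell : newCell));
        return merged;
    }

    // Diff merged against existing: only columns whose merged cell differs
    // from the pre-existing cell drive view updates.
    static Map<String, Cell> diff(Map<String, Cell> existing, Map<String, Cell> merged) {
        Map<String, Cell> changed = new HashMap<>();
        merged.forEach((col, cell) -> {
            if (!cell.equals(existing.get(col)))
                changed.put(col, cell);
        });
        return changed;
    }

    public static void main(String[] args) {
        Map<String, Cell> existing = Map.of("a", new Cell(1, "x"), "b", new Cell(5, "y"));
        Map<String, Cell> update   = Map.of("a", new Cell(2, "x2"), "b", new Cell(3, "stale"));
        Map<String, Cell> merged = merge(existing, update);
        // Only "a" changes: the update to "b" loses last-write-wins to the existing cell.
        System.out.println(diff(existing, merged));
    }
}
```

The point of the sketch is the shape of the computation: no long-lived copy of both existing and update data is retained, and the "stale" update to column "b" produces no view update because the merged row equals the existing row there.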
[jira] [Updated] (CASSANDRA-11475) MV code refactor
[ https://issues.apache.org/jira/browse/CASSANDRA-11475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-11475:
-
Status: Patch Available (was: Open)

> MV code refactor
> 
>
> Key: CASSANDRA-11475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11475
> Project: Cassandra
> Issue Type: Bug
> Reporter: Sylvain Lebresne
> Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
[9/9] cassandra git commit: Merge branch 'cassandra-3.7' into trunk
Merge branch 'cassandra-3.7' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89a645ac Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89a645ac Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89a645ac Branch: refs/heads/trunk Commit: 89a645ac4ca63114d74dedc2e94a869f769b15a2 Parents: 1dd33ec a87fd71 Author: Sylvain LebresneAuthored: Fri May 6 13:47:20 2016 +0200 Committer: Sylvain Lebresne Committed: Fri May 6 13:47:20 2016 +0200 -- .../org/apache/cassandra/config/CFMetaData.java | 6 + .../apache/cassandra/config/ViewDefinition.java | 1 - .../cql3/statements/CreateViewStatement.java| 4 +- .../cql3/statements/SelectStatement.java| 41 +- .../apache/cassandra/db/ColumnFamilyStore.java | 6 +- src/java/org/apache/cassandra/db/Keyspace.java | 2 +- .../db/SinglePartitionReadCommand.java | 33 + src/java/org/apache/cassandra/db/Slices.java| 7 + .../apache/cassandra/db/filter/RowFilter.java | 24 + .../SingletonUnfilteredPartitionIterator.java | 3 +- .../apache/cassandra/db/rows/AbstractCell.java | 5 + .../org/apache/cassandra/db/rows/BTreeRow.java | 35 +- .../apache/cassandra/db/rows/BufferCell.java| 5 + src/java/org/apache/cassandra/db/rows/Cell.java | 2 + .../apache/cassandra/db/rows/ColumnData.java| 2 + .../cassandra/db/rows/ComplexColumnData.java| 8 + .../apache/cassandra/db/rows/NativeCell.java| 5 + src/java/org/apache/cassandra/db/rows/Row.java | 35 +- .../cassandra/db/rows/RowDiffListener.java | 2 +- .../db/rows/UnfilteredRowIterators.java | 2 +- .../apache/cassandra/db/view/TableViews.java| 481 ++ .../apache/cassandra/db/view/TemporalRow.java | 601 -- src/java/org/apache/cassandra/db/view/View.java | 629 ++- .../apache/cassandra/db/view/ViewBuilder.java | 38 +- .../apache/cassandra/db/view/ViewManager.java | 146 + .../cassandra/db/view/ViewUpdateGenerator.java | 549 .../org/apache/cassandra/cql3/ViewTest.java | 52 +- 
.../org/apache/cassandra/db/rows/RowsTest.java | 6 +- 28 files changed, 1401 insertions(+), 1329 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89a645ac/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89a645ac/src/java/org/apache/cassandra/db/Slices.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89a645ac/src/java/org/apache/cassandra/db/rows/BTreeRow.java -- diff --cc src/java/org/apache/cassandra/db/rows/BTreeRow.java index 63aa157,0eed9e1..c699634 --- a/src/java/org/apache/cassandra/db/rows/BTreeRow.java +++ b/src/java/org/apache/cassandra/db/rows/BTreeRow.java @@@ -686,7 -704,11 +714,12 @@@ public class BTreeRow extends AbstractR public void addCell(Cell cell) { assert cell.column().isStatic() == (clustering == Clustering.STATIC_CLUSTERING) : "Column is " + cell.column() + ", clustering = " + clustering; ++ + // In practice, only unsorted builder have to deal with shadowed cells, but it doesn't cost us much to deal with it unconditionally in this case + if (deletion.deletes(cell)) + return; + -cells.add(cell); +getCells().add(cell); hasComplex |= cell.column.isComplex(); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/89a645ac/src/java/org/apache/cassandra/db/rows/ComplexColumnData.java --
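The {{BTreeRow.addCell}} hunk in the merge above drops cells that are shadowed by the row's deletion ({{if (deletion.deletes(cell)) return;}}) instead of storing them. A minimal standalone illustration of that rule follows — the {{Cell}} record and timestamp-only deletion check are toy types for illustration, not the real {{BTreeRow}}/{{Deletion}} API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of skipping shadowed cells while building a row:
// a cell whose timestamp is <= the row deletion's timestamp would never
// be visible, so it is dropped at build time rather than stored.
public class ShadowedCellDemo {
    record Cell(String column, long timestamp) {}

    static List<Cell> addCells(long deletionTimestamp, List<Cell> incoming) {
        List<Cell> kept = new ArrayList<>();
        for (Cell c : incoming) {
            // Analogue of "if (deletion.deletes(cell)) return;" per cell.
            if (c.timestamp() > deletionTimestamp)
                kept.add(c);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Cell> cells = List.of(new Cell("a", 10), new Cell("b", 20));
        // With a row deletion at timestamp 15, only "b" (ts=20) survives.
        System.out.println(addCells(15, cells));
    }
}
```

As the comment in the hunk notes, only the unsorted builder really has to deal with shadowed cells, but checking unconditionally is cheap.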
[7/9] cassandra git commit: Merge commit '86ba227477b9f8595eb610ecaf950cfbc29dd36b' into cassandra-3.7
Merge commit '86ba227477b9f8595eb610ecaf950cfbc29dd36b' into cassandra-3.7 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a87fd715 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a87fd715 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a87fd715 Branch: refs/heads/trunk Commit: a87fd715d6b26128603a404074ec3df42a595b2e Parents: 4e364d7 86ba227 Author: Sylvain LebresneAuthored: Fri May 6 13:43:44 2016 +0200 Committer: Sylvain Lebresne Committed: Fri May 6 13:44:12 2016 +0200 -- .../org/apache/cassandra/config/CFMetaData.java | 6 + .../apache/cassandra/config/ViewDefinition.java | 1 - .../cql3/statements/CreateViewStatement.java| 4 +- .../cql3/statements/SelectStatement.java| 41 +- .../apache/cassandra/db/ColumnFamilyStore.java | 6 +- src/java/org/apache/cassandra/db/Keyspace.java | 2 +- .../db/SinglePartitionReadCommand.java | 33 + src/java/org/apache/cassandra/db/Slices.java| 7 + .../apache/cassandra/db/filter/RowFilter.java | 24 + .../SingletonUnfilteredPartitionIterator.java | 3 +- .../apache/cassandra/db/rows/AbstractCell.java | 5 + .../org/apache/cassandra/db/rows/BTreeRow.java | 34 +- .../apache/cassandra/db/rows/BufferCell.java| 5 + src/java/org/apache/cassandra/db/rows/Cell.java | 2 + .../apache/cassandra/db/rows/ColumnData.java| 2 + .../cassandra/db/rows/ComplexColumnData.java| 8 + .../apache/cassandra/db/rows/NativeCell.java| 5 + src/java/org/apache/cassandra/db/rows/Row.java | 35 +- .../cassandra/db/rows/RowDiffListener.java | 2 +- .../db/rows/UnfilteredRowIterators.java | 2 +- .../apache/cassandra/db/view/TableViews.java| 481 ++ .../apache/cassandra/db/view/TemporalRow.java | 601 -- src/java/org/apache/cassandra/db/view/View.java | 629 ++- .../apache/cassandra/db/view/ViewBuilder.java | 38 +- .../apache/cassandra/db/view/ViewManager.java | 146 + .../cassandra/db/view/ViewUpdateGenerator.java | 549 .../org/apache/cassandra/cql3/ViewTest.java | 52 +- 
.../org/apache/cassandra/db/rows/RowsTest.java | 6 +- 28 files changed, 1400 insertions(+), 1329 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/config/CFMetaData.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/config/ViewDefinition.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/Keyspace.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/Slices.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/filter/RowFilter.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/AbstractCell.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/BTreeRow.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/BufferCell.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/Cell.java --
[6/9] cassandra git commit: Refactor MV code
Refactor MV code

patch by slebresne; reviewed by carlyeks for CASSANDRA-11475

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86ba2274
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86ba2274
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86ba2274

Branch: refs/heads/trunk
Commit: 86ba227477b9f8595eb610ecaf950cfbc29dd36b
Parents: c19066e
Author: Sylvain Lebresne
Authored: Fri Mar 11 14:19:38 2016 +0100
Committer: Sylvain Lebresne
Committed: Fri May 6 13:41:41 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |   3 +
 .../org/apache/cassandra/config/CFMetaData.java |   6 +
 .../apache/cassandra/config/ViewDefinition.java |   1 -
 .../cql3/statements/CreateViewStatement.java    |   4 +-
 .../cql3/statements/SelectStatement.java        |  41 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   6 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |   2 +-
 .../db/SinglePartitionReadCommand.java          |  33 +
 src/java/org/apache/cassandra/db/Slices.java    |   7 +
 .../apache/cassandra/db/filter/RowFilter.java   |  24 +
 .../SingletonUnfilteredPartitionIterator.java   |   3 +-
 .../apache/cassandra/db/rows/AbstractCell.java  |   5 +
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  34 +-
 .../apache/cassandra/db/rows/BufferCell.java    |   5 +
 src/java/org/apache/cassandra/db/rows/Cell.java |   2 +
 .../apache/cassandra/db/rows/ColumnData.java    |   2 +
 .../cassandra/db/rows/ComplexColumnData.java    |   8 +
 src/java/org/apache/cassandra/db/rows/Row.java  |  35 +-
 .../cassandra/db/rows/RowDiffListener.java      |   2 +-
 .../db/rows/UnfilteredRowIterators.java         |   2 +-
 .../apache/cassandra/db/view/TableViews.java    | 481 ++
 .../apache/cassandra/db/view/TemporalRow.java   | 610 --
 src/java/org/apache/cassandra/db/view/View.java | 629 ++-
 .../apache/cassandra/db/view/ViewBuilder.java   |  38 +-
 .../apache/cassandra/db/view/ViewManager.java   | 146 +
 .../cassandra/db/view/ViewUpdateGenerator.java  | 549
 .../org/apache/cassandra/cql3/ViewTest.java     |  52 +-
 .../org/apache/cassandra/db/rows/RowsTest.java  |   6 +-
 28 files changed, 1399 insertions(+), 1337 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 0679e11..3a49f6a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+3.0.7
+ * Refactor Materialized View code (CASSANDRA-11475)
+
 3.0.6
  * Disallow creating view with a static column (CASSANDRA-11602)
  * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java
index 79cd779..e263697 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -31,6 +31,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.MoreObjects;
 import com.google.common.base.Objects;
 import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 import org.apache.commons.lang3.ArrayUtils;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
@@ -612,6 +613,11 @@ public final class CFMetaData
         };
     }

+    public Iterable<ColumnDefinition> primaryKeyColumns()
+    {
+        return Iterables.concat(partitionKeyColumns, clusteringColumns);
+    }
+
     public List<ColumnDefinition> partitionKeyColumns()
     {
         return partitionKeyColumns;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/src/java/org/apache/cassandra/config/ViewDefinition.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/ViewDefinition.java b/src/java/org/apache/cassandra/config/ViewDefinition.java
index b29a8f9..5300f56 100644
--- a/src/java/org/apache/cassandra/config/ViewDefinition.java
+++ b/src/java/org/apache/cassandra/config/ViewDefinition.java
@@ -37,7 +37,6 @@ public class ViewDefinition
     public final UUID baseTableId;
     public final String baseTableName;
     public final boolean includeAllColumns;
-    // The order of partititon columns and clustering columns is important, so we cannot switch these two to sets
     public final CFMetaData metadata;
     public SelectStatement.RawStatement select;
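The `primaryKeyColumns()` helper added to `CFMetaData` above simply concatenates the partition-key and clustering column lists with Guava's `Iterables.concat`, without copying. A minimal, dependency-free sketch of the same pattern, using `java.util.stream.Stream.concat` in place of Guava and hypothetical column names (`pk1`, `pk2`, `ck1`) as stand-ins for real `ColumnDefinition` objects:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PrimaryKeyColumnsSketch
{
    // Hypothetical stand-ins for CFMetaData's partition-key and clustering column lists.
    static final List<String> partitionKeyColumns = Arrays.asList("pk1", "pk2");
    static final List<String> clusteringColumns = Arrays.asList("ck1");

    // Same shape as the new CFMetaData.primaryKeyColumns(): the full primary key is
    // the partition-key columns followed by the clustering columns, concatenated
    // rather than copied element by element. (The real code uses Guava's
    // Iterables.concat; java.util.stream keeps this sketch dependency-free.)
    static List<String> primaryKeyColumns()
    {
        return Stream.concat(partitionKeyColumns.stream(), clusteringColumns.stream())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        System.out.println(primaryKeyColumns()); // [pk1, pk2, ck1]
    }
}
```

The ordering matters (partition key first, then clustering columns), which is also why `ViewDefinition` keeps these as lists rather than sets.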
[8/9] cassandra git commit: Merge commit '86ba227477b9f8595eb610ecaf950cfbc29dd36b' into cassandra-3.7
Merge commit '86ba227477b9f8595eb610ecaf950cfbc29dd36b' into cassandra-3.7

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a87fd715
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a87fd715
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a87fd715

Branch: refs/heads/cassandra-3.7
Commit: a87fd715d6b26128603a404074ec3df42a595b2e
Parents: 4e364d7 86ba227
Author: Sylvain Lebresne
Authored: Fri May 6 13:43:44 2016 +0200
Committer: Sylvain Lebresne
Committed: Fri May 6 13:44:12 2016 +0200

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |   6 +
 .../apache/cassandra/config/ViewDefinition.java |   1 -
 .../cql3/statements/CreateViewStatement.java    |   4 +-
 .../cql3/statements/SelectStatement.java        |  41 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   6 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |   2 +-
 .../db/SinglePartitionReadCommand.java          |  33 +
 src/java/org/apache/cassandra/db/Slices.java    |   7 +
 .../apache/cassandra/db/filter/RowFilter.java   |  24 +
 .../SingletonUnfilteredPartitionIterator.java   |   3 +-
 .../apache/cassandra/db/rows/AbstractCell.java  |   5 +
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  34 +-
 .../apache/cassandra/db/rows/BufferCell.java    |   5 +
 src/java/org/apache/cassandra/db/rows/Cell.java |   2 +
 .../apache/cassandra/db/rows/ColumnData.java    |   2 +
 .../cassandra/db/rows/ComplexColumnData.java    |   8 +
 .../apache/cassandra/db/rows/NativeCell.java    |   5 +
 src/java/org/apache/cassandra/db/rows/Row.java  |  35 +-
 .../cassandra/db/rows/RowDiffListener.java      |   2 +-
 .../db/rows/UnfilteredRowIterators.java         |   2 +-
 .../apache/cassandra/db/view/TableViews.java    | 481 ++
 .../apache/cassandra/db/view/TemporalRow.java   | 601 --
 src/java/org/apache/cassandra/db/view/View.java | 629 ++-
 .../apache/cassandra/db/view/ViewBuilder.java   |  38 +-
 .../apache/cassandra/db/view/ViewManager.java   | 146 +
 .../cassandra/db/view/ViewUpdateGenerator.java  | 549
 .../org/apache/cassandra/cql3/ViewTest.java     |  52 +-
 .../org/apache/cassandra/db/rows/RowsTest.java  |   6 +-
 28 files changed, 1400 insertions(+), 1329 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/config/CFMetaData.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/config/ViewDefinition.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/Keyspace.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/Slices.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/filter/RowFilter.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/AbstractCell.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/BTreeRow.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/BufferCell.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87fd715/src/java/org/apache/cassandra/db/rows/Cell.java
[5/9] cassandra git commit: Refactor MV code
http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/src/java/org/apache/cassandra/db/view/View.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/view/View.java b/src/java/org/apache/cassandra/db/view/View.java
index 1b823aa..845a6ab 100644
--- a/src/java/org/apache/cassandra/db/view/View.java
+++ b/src/java/org/apache/cassandra/db/view/View.java
@@ -32,17 +32,15 @@ import org.apache.cassandra.cql3.statements.SelectStatement;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.config.*;
 import org.apache.cassandra.cql3.ColumnIdentifier;
-import org.apache.cassandra.db.AbstractReadCommandBuilder.SinglePartitionSliceBuilder;
 import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.partitions.AbstractBTreePartition;
-import org.apache.cassandra.db.partitions.PartitionIterator;
-import org.apache.cassandra.db.partitions.PartitionUpdate;
+import org.apache.cassandra.db.partitions.*;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.schema.KeyspaceMetadata;
 import org.apache.cassandra.service.ClientState;
 import org.apache.cassandra.service.pager.QueryPager;
 import org.apache.cassandra.transport.Server;
 import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.btree.BTreeSet;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -50,46 +48,18 @@ import org.slf4j.LoggerFactory;
  * A View copies data from a base table into a view table which can be queried independently from the
  * base. Every update which targets the base table must be fed through the {@link ViewManager} to ensure
  * that if a view needs to be updated, the updates are properly created and fed into the view.
- *
- * This class does the job of translating the base row to the view row.
- *
- * It handles reading existing state and figuring out what tombstones need to be generated.
- *
- * {@link View#createMutations(AbstractBTreePartition, TemporalRow.Set, boolean)} is the "main method"
- *
 */
 public class View
 {
     private static final Logger logger = LoggerFactory.getLogger(View.class);

-    /**
-     * The columns should all be updated together, so we use this object as group.
-     */
-    private static class Columns
-    {
-        // These are the base column definitions in terms of the *views* partitioning.
-        // Meaning we can see (for example) the partition key of the view contains a clustering key
-        // from the base table.
-        public final List<ColumnDefinition> partitionDefs;
-        public final List<ColumnDefinition> primaryKeyDefs;
-        public final List<ColumnDefinition> baseComplexColumns;
-
-        private Columns(List<ColumnDefinition> partitionDefs, List<ColumnDefinition> primaryKeyDefs, List<ColumnDefinition> baseComplexColumns)
-        {
-            this.partitionDefs = partitionDefs;
-            this.primaryKeyDefs = primaryKeyDefs;
-            this.baseComplexColumns = baseComplexColumns;
-        }
-    }
-
     public final String name;
     private volatile ViewDefinition definition;

     private final ColumnFamilyStore baseCfs;

-    private Columns columns;
+    public volatile List<ColumnDefinition> baseNonPKColumnsInViewPK;

-    private final boolean viewPKIncludesOnlyBasePKColumns;
     private final boolean includeAllColumns;
     private ViewBuilder builder;
@@ -104,12 +74,11 @@ public class View
                 ColumnFamilyStore baseCfs)
     {
         this.baseCfs = baseCfs;
-
-        name = definition.viewName;
-        includeAllColumns = definition.includeAllColumns;
-
-        viewPKIncludesOnlyBasePKColumns = updateDefinition(definition);
+        this.name = definition.viewName;
+        this.includeAllColumns = definition.includeAllColumns;
         this.rawSelect = definition.select;
+
+        updateDefinition(definition);
     }

     public ViewDefinition getDefinition()
@@ -118,513 +87,100 @@ public class View
     }

     /**
-     * Lookup column definitions in the base table that correspond to the view columns (should be 1:1)
-     *
-     * Notify caller if all primary keys in the view are ALL primary keys in the base. We do this to simplify
-     * tombstone checks.
-     *
-     * @param columns a list of columns to lookup in the base table
-     * @param definitions lists to populate for the base table definitions
-     * @return true if all view PKs are also Base PKs
-     */
-    private boolean resolveAndAddColumns(Iterable<ColumnIdentifier> columns, List<ColumnDefinition>... definitions)
-    {
-        boolean allArePrimaryKeys = true;
-        for (ColumnIdentifier identifier : columns)
-        {
-            ColumnDefinition cdef = baseCfs.metadata.getColumnDefinition(identifier);
-            assert cdef != null : "Could not resolve column " + identifier.toString();
-
-            for (List<ColumnDefinition> list : definitions)
-            {
-                list.add(cdef);
-            }
-
-            allArePrimaryKeys = allArePrimaryKeys && cdef.isPrimaryKeyColumn();
-        }
-
-        return
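The deleted `resolveAndAddColumns` above follows a simple accumulate-and-check pattern: resolve each view column identifier against the base table, collect the resolved definitions, and report whether every one of them is a primary-key column in the base (which lets the caller simplify tombstone checks). A self-contained sketch of that pattern, using a hypothetical string-keyed schema map in place of `CFMetaData`/`ColumnDefinition`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResolveColumnsSketch
{
    // Hypothetical base-table schema: column name -> whether it is a primary-key column.
    static final Map<String, Boolean> BASE_COLUMNS = new HashMap<>();
    static
    {
        BASE_COLUMNS.put("id", true);     // partition key
        BASE_COLUMNS.put("ck", true);     // clustering column
        BASE_COLUMNS.put("value", false); // regular column
    }

    // Mirrors the shape of the deleted View.resolveAndAddColumns(): resolve each view
    // column against the base table (failing loudly on an unknown column), collect the
    // resolved names, and report whether all of them are primary-key columns in the base.
    static boolean resolveAndAddColumns(Iterable<String> columns, List<String> resolved)
    {
        boolean allArePrimaryKeys = true;
        for (String identifier : columns)
        {
            Boolean isPk = BASE_COLUMNS.get(identifier);
            if (isPk == null)
                throw new AssertionError("Could not resolve column " + identifier);
            resolved.add(identifier);
            allArePrimaryKeys = allArePrimaryKeys && isPk;
        }
        return allArePrimaryKeys;
    }

    public static void main(String[] args)
    {
        List<String> resolved = new ArrayList<>();
        System.out.println(resolveAndAddColumns(Arrays.asList("id", "ck"), resolved));    // true
        System.out.println(resolveAndAddColumns(Arrays.asList("id", "value"), resolved)); // false
    }
}
```

In the refactored code this role is absorbed by `ViewUpdateGenerator` and the `baseNonPKColumnsInViewPK` list, which records exactly the base columns that break the "view PK is a subset of base PK" property.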
[2/9] cassandra git commit: Refactor MV code
Refactor MV code

patch by slebresne; reviewed by carlyeks for CASSANDRA-11475

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86ba2274
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86ba2274
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86ba2274

Branch: refs/heads/cassandra-3.0
Commit: 86ba227477b9f8595eb610ecaf950cfbc29dd36b
Parents: c19066e
Author: Sylvain Lebresne
Authored: Fri Mar 11 14:19:38 2016 +0100
Committer: Sylvain Lebresne
Committed: Fri May 6 13:41:41 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |   3 +
 .../org/apache/cassandra/config/CFMetaData.java |   6 +
 .../apache/cassandra/config/ViewDefinition.java |   1 -
 .../cql3/statements/CreateViewStatement.java    |   4 +-
 .../cql3/statements/SelectStatement.java        |  41 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   6 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |   2 +-
 .../db/SinglePartitionReadCommand.java          |  33 +
 src/java/org/apache/cassandra/db/Slices.java    |   7 +
 .../apache/cassandra/db/filter/RowFilter.java   |  24 +
 .../SingletonUnfilteredPartitionIterator.java   |   3 +-
 .../apache/cassandra/db/rows/AbstractCell.java  |   5 +
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  34 +-
 .../apache/cassandra/db/rows/BufferCell.java    |   5 +
 src/java/org/apache/cassandra/db/rows/Cell.java |   2 +
 .../apache/cassandra/db/rows/ColumnData.java    |   2 +
 .../cassandra/db/rows/ComplexColumnData.java    |   8 +
 src/java/org/apache/cassandra/db/rows/Row.java  |  35 +-
 .../cassandra/db/rows/RowDiffListener.java      |   2 +-
 .../db/rows/UnfilteredRowIterators.java         |   2 +-
 .../apache/cassandra/db/view/TableViews.java    | 481 ++
 .../apache/cassandra/db/view/TemporalRow.java   | 610 --
 src/java/org/apache/cassandra/db/view/View.java | 629 ++-
 .../apache/cassandra/db/view/ViewBuilder.java   |  38 +-
 .../apache/cassandra/db/view/ViewManager.java   | 146 +
 .../cassandra/db/view/ViewUpdateGenerator.java  | 549
 .../org/apache/cassandra/cql3/ViewTest.java     |  52 +-
 .../org/apache/cassandra/db/rows/RowsTest.java  |   6 +-
 28 files changed, 1399 insertions(+), 1337 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 0679e11..3a49f6a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+3.0.7
+ * Refactor Materialized View code (CASSANDRA-11475)
+
 3.0.6
  * Disallow creating view with a static column (CASSANDRA-11602)
  * Reduce the amount of object allocations caused by the getFunctions methods (CASSANDRA-11593)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java
index 79cd779..e263697 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -31,6 +31,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.MoreObjects;
 import com.google.common.base.Objects;
 import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 import org.apache.commons.lang3.ArrayUtils;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
@@ -612,6 +613,11 @@ public final class CFMetaData
         };
     }

+    public Iterable<ColumnDefinition> primaryKeyColumns()
+    {
+        return Iterables.concat(partitionKeyColumns, clusteringColumns);
+    }
+
     public List<ColumnDefinition> partitionKeyColumns()
     {
         return partitionKeyColumns;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86ba2274/src/java/org/apache/cassandra/config/ViewDefinition.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/ViewDefinition.java b/src/java/org/apache/cassandra/config/ViewDefinition.java
index b29a8f9..5300f56 100644
--- a/src/java/org/apache/cassandra/config/ViewDefinition.java
+++ b/src/java/org/apache/cassandra/config/ViewDefinition.java
@@ -37,7 +37,6 @@ public class ViewDefinition
     public final UUID baseTableId;
     public final String baseTableName;
     public final boolean includeAllColumns;
-    // The order of partititon columns and clustering columns is important, so we cannot switch these two to sets
     public final CFMetaData metadata;
     public SelectStatement.RawStatement