[jira] [Commented] (CASSANDRA-10233) IndexOutOfBoundsException in HintedHandOffManager
[ https://issues.apache.org/jira/browse/CASSANDRA-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14972173#comment-14972173 ] Paulo Motta commented on CASSANDRA-10233: - [~fhsgoncalves] [~eitikimura] bq. the problem started when we added 7 new nodes to the cluster a week ago Do you remember if there was any failed bootstrap when you added these nodes? For example, you started bootstrapping, streams hung, you wiped the node and restarted the process again? Or did all bootstraps succeed on the first try? (I'm investigating the root cause of the issue on CASSANDRA-10485) > IndexOutOfBoundsException in HintedHandOffManager > - > > Key: CASSANDRA-10233 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10233 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.2.0 >Reporter: Omri Iluz >Assignee: J.P. Eiti Kimura >Priority: Minor > Fix For: 2.2.4, 2.1.12 > > Attachments: cassandra-2.0-10233.txt, cassandra-2.1-10233-v5.txt, > cassandra-2.1.8-10233-v3.txt, cassandra-2.1.8-10233-v4.txt, > cassandra-2.2.1-10233-v2.txt, cassandra-2.2.1-10233.txt > > > After upgrading our cluster to 2.2.0, the following error started showing > exactly every 10 minutes on every server in the cluster: > {noformat} > INFO [CompactionExecutor:1381] 2015-08-31 18:31:55,506 > CompactionTask.java:142 - Compacting (8e7e1520-500e-11e5-b1e3-e95897ba4d20) > [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-540-big-Data.db:level=0, > ] > INFO [CompactionExecutor:1381] 2015-08-31 18:31:55,599 > CompactionTask.java:224 - Compacted (8e7e1520-500e-11e5-b1e3-e95897ba4d20) 1 > sstables to > [/cassandra/data/system/hints-2666e20573ef38b390fefecf96e8f0c7/la-541-big,] > to level=0. 1,544,495 bytes to 1,544,495 (~100% of original) in 93ms = > 15.838121MB/s. 0 total partitions merged to 4. 
Partition merge counts were > {1:4, } > ERROR [HintedHandoff:1] 2015-08-31 18:31:55,600 CassandraDaemon.java:182 - > Exception in thread Thread[HintedHandoff:1,1,main] > java.lang.IndexOutOfBoundsException: null > at java.nio.Buffer.checkIndex(Buffer.java:538) ~[na:1.7.0_79] > at java.nio.HeapByteBuffer.getLong(HeapByteBuffer.java:410) > ~[na:1.7.0_79] > at org.apache.cassandra.utils.UUIDGen.getUUID(UUIDGen.java:106) > ~[apache-cassandra-2.2.0.jar:2.2.0] > at > org.apache.cassandra.db.HintedHandOffManager.scheduleAllDeliveries(HintedHandOffManager.java:515) > ~[apache-cassandra-2.2.0.jar:2.2.0] > at > org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:88) > ~[apache-cassandra-2.2.0.jar:2.2.0] > at > org.apache.cassandra.db.HintedHandOffManager$1.run(HintedHandOffManager.java:168) > ~[apache-cassandra-2.2.0.jar:2.2.0] > at > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) > ~[apache-cassandra-2.2.0.jar:2.2.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > [na:1.7.0_79] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) > [na:1.7.0_79] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) > [na:1.7.0_79] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > [na:1.7.0_79] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_79] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_79] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
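The stack trace above points at UUIDGen.getUUID reading a 16-byte UUID out of a hint partition key: when the buffer holds fewer than 16 bytes, java.nio.Buffer.checkIndex throws exactly this IndexOutOfBoundsException. A minimal, self-contained sketch of the failure mode, with a defensive length check added (class and method names here are illustrative, not Cassandra's actual code):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidBufferCheck {
    // UUIDGen.getUUID reads two longs (16 bytes) from the hint partition
    // key; a shorter buffer makes Buffer.checkIndex throw the
    // IndexOutOfBoundsException seen in the trace. This guard (illustrative,
    // not Cassandra's real code) fails with a descriptive message instead.
    public static UUID uuidFrom(ByteBuffer raw) {
        if (raw.remaining() < 16)
            throw new IllegalArgumentException(
                "buffer too short for a UUID: " + raw.remaining() + " bytes");
        return new UUID(raw.getLong(raw.position()), raw.getLong(raw.position() + 8));
    }

    public static void main(String[] args) {
        UUID u = UUID.randomUUID();
        ByteBuffer ok = ByteBuffer.allocate(16);
        ok.putLong(u.getMostSignificantBits()).putLong(u.getLeastSignificantBits()).flip();
        System.out.println(uuidFrom(ok).equals(u)); // true

        // A 4-byte buffer reproduces the raw failure: getLong needs 8 bytes.
        try {
            ByteBuffer.allocate(4).getLong(0);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("IndexOutOfBoundsException, as in the hint compaction log");
        }
    }
}
```

This illustrates why a truncated or corrupt hint key (e.g. from a failed bootstrap, as asked above) surfaces as a bare IndexOutOfBoundsException rather than a descriptive error.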
[jira] [Resolved] (CASSANDRA-10582) CorruptSSTableException should print the SSTable name
[ https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anubhav Kale resolved CASSANDRA-10582. -- Resolution: Fixed Fix Version/s: (was: 2.1.9) 3.1 > CorruptSSTableException should print the SS Table Name > -- > > Key: CASSANDRA-10582 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10582 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Azure >Reporter: Anubhav Kale >Priority: Minor > Fix For: 3.1 > > > We should print the SS Table name that's being reported as corrupt to help > with quick recovery. > INFO 16:32:15 Opening > /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214 > (23832772 bytes) > INFO 16:32:15 Opening > /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398 > (149675 bytes) > INFO 16:32:15 Opening > /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707 > (18270 bytes) > INFO 16:32:15 Opening > /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656 > (814588 bytes) > ERROR 16:32:15 Exiting forcefully due to file system exception on startup, > disk failure policy "stop" > org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException > at > org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131) > ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT] > at > org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) > ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT] > at > org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) > ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT] > at > 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) > ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT] > at
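The fix requested here is small: wrap the low-level IOException in an exception whose message carries the offending file's path, so operators immediately know which sstable to remove or re-stream. A minimal sketch with hypothetical stand-in names (not the real Cassandra classes):

```java
import java.io.EOFException;
import java.io.IOException;

public class CorruptSSTableDemo {
    // Hypothetical stand-in for CorruptSSTableException: the point of the
    // ticket is simply that the message should carry the file path.
    public static class CorruptSSTableException extends RuntimeException {
        public CorruptSSTableException(Throwable cause, String path) {
            super("Corrupted sstable: " + path, cause);
        }
    }

    // Simulates CompressionMetadata hitting EOF on a truncated component
    // file; the wrapper preserves the cause and names the file.
    public static void readCompressionMetadata(String path) {
        try {
            throw new EOFException(); // simulated truncated CompressionInfo
        } catch (IOException e) {
            throw new CorruptSSTableException(e, path);
        }
    }

    public static void main(String[] args) {
        try {
            readCompressionMetadata("/mnt/cassandra/data/ks/cf/ks-cf-ka-13656-CompressionInfo.db");
        } catch (CorruptSSTableException e) {
            // The operator-facing log line now identifies the bad file.
            System.out.println(e.getMessage());
        }
    }
}
```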
[jira] [Updated] (CASSANDRA-10554) Batch that updates two or more tables can produce unreadable SSTable (was: Auto Bootstrapping a new node fails)
[ https://issues.apache.org/jira/browse/CASSANDRA-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-10554: Priority: Blocker (was: Critical) > Batch that updates two or more table can produce unreadable SSTable (was: > Auto Bootstraping a new node fails) > - > > Key: CASSANDRA-10554 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10554 > Project: Cassandra > Issue Type: Bug >Reporter: Alan Boudreault >Assignee: Sylvain Lebresne >Priority: Blocker > Fix For: 3.0.0 > > Attachments: 0001-Add-debug.txt, 10554.cql, debug.log, system.log, > test.sh > > > I've been trying to add a new node in my 3.0 cluster and it seems to fail. > All my nodes are using apache/cassandra-3.0.0 branch. At the beginning, I can > see the following error: > {code} > INFO 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare > completed. Receiving 42 files(1910066622 bytes), sending 0 files(0 bytes) > WARN 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for > following error > java.lang.RuntimeException: Unknown column added_time during deserialization > at > org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331) > ~[main/:na] > at > org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136) > ~[main/:na] > at > org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50) > [main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39) > [main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59) > [main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > [main/:na] > at java.lang.Thread.run(Thread.java:745) 
[na:1.8.0_45] > ERROR 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error > occurred > java.lang.IllegalArgumentException: Unknown type 0 > at > org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) > ~[main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > INFO 18:45:55 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Session with > /54.210.187.114 is complete > INFO 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a ID#0] Prepare > completed. Receiving 38 files(2323537628 bytes), sending 0 files(0 bytes) > WARN 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Retrying for > following error > java.lang.RuntimeException: Unknown column added_time during deserialization > at > org.apache.cassandra.db.SerializationHeader$Component.toHeader(SerializationHeader.java:331) > ~[main/:na] > at > org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:136) > ~[main/:na] > at > org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:77) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:50) > [main/:na] > at > org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:39) > [main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:59) > [main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > ERROR 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Streaming error > occurred > java.lang.IllegalArgumentException: Unknown 
type 0 > at > org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:97) > ~[main/:na] > at > org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58) > ~[main/:na] > at > org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > INFO 18:45:56 [Stream #9f95fa90-7691-11e5-931f-5b735851f84a] Session with > /
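The first failure above comes from the serialization-header check: the incoming sstable's header names a column (added_time) that the receiving node's schema does not contain, so deserialization aborts; the subsequent "Unknown type 0" is the stream connection falling out of sync after the retry. A minimal sketch of that header-versus-schema check, with illustrative names rather than Cassandra's actual API:

```java
import java.util.Set;

public class SerializationHeaderCheck {
    // Minimal sketch of the validation that fails in
    // SerializationHeader$Component.toHeader: every column named in an
    // incoming sstable's header must exist in the receiving node's schema.
    // Method and message shape mirror the log above; names are illustrative.
    public static void validateColumns(Set<String> schemaColumns, Set<String> headerColumns) {
        for (String name : headerColumns)
            if (!schemaColumns.contains(name))
                throw new RuntimeException("Unknown column " + name + " during deserialization");
    }

    public static void main(String[] args) {
        Set<String> schema = Set.of("id", "value");
        validateColumns(schema, Set.of("id", "value")); // matching header: fine
        try {
            // A header written against a newer/different schema fails fast:
            validateColumns(schema, Set.of("id", "added_time"));
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The ticket's actual root cause (a batch across tables producing an sstable whose header disagrees with the schema) is upstream of this check; the check is just where it surfaces during streaming.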
[4/5] cassandra git commit: Fix potential issue with LogTransaction only checking single dir for files
Fix potential issue with LogTransaction only checking single dir for files Patch by stefania; reviewed by aweisberg for CASSANDRA-10421 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73781a9a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73781a9a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73781a9a Branch: refs/heads/trunk Commit: 73781a9a497de99d8cf2088d804173a11a3982f0 Parents: 1d28a4a Author: Stefania Alborghetti Authored: Fri Oct 23 17:40:46 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 17:40:46 2015 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/Memtable.java | 28 +- .../db/lifecycle/LifecycleTransaction.java | 39 +- .../db/lifecycle/LogAwareFileLister.java| 3 +- .../apache/cassandra/db/lifecycle/LogFile.java | 263 + .../cassandra/db/lifecycle/LogRecord.java | 192 +-- .../cassandra/db/lifecycle/LogReplica.java | 105 .../cassandra/db/lifecycle/LogReplicaSet.java | 229 .../cassandra/db/lifecycle/LogTransaction.java | 150 +++-- .../apache/cassandra/db/lifecycle/Tracker.java | 4 +- .../cassandra/io/sstable/SSTableTxnWriter.java | 4 +- .../org/apache/cassandra/io/util/FileUtils.java | 20 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../org/apache/cassandra/utils/Throwables.java | 5 + .../unit/org/apache/cassandra/db/ScrubTest.java | 2 +- .../cassandra/db/lifecycle/HelpersTest.java | 2 +- .../db/lifecycle/LogTransactionTest.java| 575 +++ .../db/lifecycle/RealTransactionsTest.java | 6 +- .../io/sstable/SSTableRewriterTest.java | 2 +- 19 files changed, 1232 insertions(+), 400 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 67e06ca..bc7c001 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0 + * Fix LogTransaction checking only a single directory for files (CASSANDRA-10421) * Support encrypted and plain traffic on the same 
port (CASSANDRA-10559) * Fix handling of range tombstones when reading old format sstables (CASSANDRA-10360) * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367) http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/src/java/org/apache/cassandra/db/Memtable.java -- diff --git a/src/java/org/apache/cassandra/db/Memtable.java b/src/java/org/apache/cassandra/db/Memtable.java index f47efe3..96b1775 100644 --- a/src/java/org/apache/cassandra/db/Memtable.java +++ b/src/java/org/apache/cassandra/db/Memtable.java @@ -423,15 +423,25 @@ public class Memtable implements Comparable { // we operate "offline" here, as we expose the resulting reader consciously when done // (although we may want to modify this behaviour in future, to encapsulate full flush behaviour in LifecycleTransaction) -LifecycleTransaction txn = LifecycleTransaction.offline(OperationType.FLUSH, cfs.metadata); -MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.metadata.comparator).replayPosition(context); -return new SSTableTxnWriter(txn, - cfs.createSSTableMultiWriter(Descriptor.fromFilename(filename), - (long)partitions.size(), - ActiveRepairService.UNREPAIRED_SSTABLE, - sstableMetadataCollector, - new SerializationHeader(true, cfs.metadata, columns, stats), - txn)); +LifecycleTransaction txn = null; +try +{ +txn = LifecycleTransaction.offline(OperationType.FLUSH); +MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.metadata.comparator).replayPosition(context); +return new SSTableTxnWriter(txn, + cfs.createSSTableMultiWriter(Descriptor.fromFilename(filename), + (long) partitions.size(), + ActiveRepairService.UNREPAIRED_SSTABLE, + sstableMetadataCollector, +
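The Memtable diff above (truncated in this archive) replaces straight-line writer construction with a create-then-close-on-failure pattern, so a LifecycleTransaction opened for a flush is not leaked if building the SSTable writer throws. A minimal sketch of that pattern using stand-in classes (Txn/Writer are illustrative, not Cassandra's real types):

```java
public class TxnSafetyDemo {
    // Stand-ins for LifecycleTransaction and SSTableTxnWriter.
    public static class Txn implements AutoCloseable {
        public boolean closed;
        @Override public void close() { closed = true; }
    }

    public static class Writer {
        final Txn txn;
        Writer(Txn txn, boolean failCreation) {
            if (failCreation) throw new RuntimeException("writer creation failed");
            this.txn = txn;
        }
    }

    // The shape of the patched code: open the transaction, and if anything
    // after that throws, close it before rethrowing so it cannot leak.
    public static Writer createWriter(boolean failCreation, Txn[] opened) {
        Txn txn = null;
        try {
            txn = new Txn();
            opened[0] = txn; // lets the caller observe the txn in this demo
            return new Writer(txn, failCreation);
        } catch (Throwable t) {
            if (txn != null) txn.close(); // the fix: no leaked open transaction
            throw t;
        }
    }

    public static void main(String[] args) {
        Txn[] opened = new Txn[1];
        try { createWriter(true, opened); } catch (RuntimeException expected) { }
        System.out.println(opened[0].closed); // true: txn was closed on failure
    }
}
```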
[2/5] cassandra git commit: Fix potential issue with LogTransaction only checking single dir for files
Fix potential issue with LogTransaction only checking single dir for files Patch by stefania; reviewed by aweisberg for CASSANDRA-10421 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73781a9a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73781a9a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73781a9a Branch: refs/heads/cassandra-3.0 Commit: 73781a9a497de99d8cf2088d804173a11a3982f0 Parents: 1d28a4a Author: Stefania Alborghetti Authored: Fri Oct 23 17:40:46 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 17:40:46 2015 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/Memtable.java | 28 +- .../db/lifecycle/LifecycleTransaction.java | 39 +- .../db/lifecycle/LogAwareFileLister.java| 3 +- .../apache/cassandra/db/lifecycle/LogFile.java | 263 + .../cassandra/db/lifecycle/LogRecord.java | 192 +-- .../cassandra/db/lifecycle/LogReplica.java | 105 .../cassandra/db/lifecycle/LogReplicaSet.java | 229 .../cassandra/db/lifecycle/LogTransaction.java | 150 +++-- .../apache/cassandra/db/lifecycle/Tracker.java | 4 +- .../cassandra/io/sstable/SSTableTxnWriter.java | 4 +- .../org/apache/cassandra/io/util/FileUtils.java | 20 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../org/apache/cassandra/utils/Throwables.java | 5 + .../unit/org/apache/cassandra/db/ScrubTest.java | 2 +- .../cassandra/db/lifecycle/HelpersTest.java | 2 +- .../db/lifecycle/LogTransactionTest.java| 575 +++ .../db/lifecycle/RealTransactionsTest.java | 6 +- .../io/sstable/SSTableRewriterTest.java | 2 +- 19 files changed, 1232 insertions(+), 400 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 67e06ca..bc7c001 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0 + * Fix LogTransaction checking only a single directory for files (CASSANDRA-10421) * Support encrypted and plain traffic on 
the same port (CASSANDRA-10559) * Fix handling of range tombstones when reading old format sstables (CASSANDRA-10360) * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367) http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/src/java/org/apache/cassandra/db/Memtable.java -- diff --git a/src/java/org/apache/cassandra/db/Memtable.java b/src/java/org/apache/cassandra/db/Memtable.java index f47efe3..96b1775 100644 --- a/src/java/org/apache/cassandra/db/Memtable.java +++ b/src/java/org/apache/cassandra/db/Memtable.java @@ -423,15 +423,25 @@ public class Memtable implements Comparable { // we operate "offline" here, as we expose the resulting reader consciously when done // (although we may want to modify this behaviour in future, to encapsulate full flush behaviour in LifecycleTransaction) -LifecycleTransaction txn = LifecycleTransaction.offline(OperationType.FLUSH, cfs.metadata); -MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.metadata.comparator).replayPosition(context); -return new SSTableTxnWriter(txn, - cfs.createSSTableMultiWriter(Descriptor.fromFilename(filename), - (long)partitions.size(), - ActiveRepairService.UNREPAIRED_SSTABLE, - sstableMetadataCollector, - new SerializationHeader(true, cfs.metadata, columns, stats), - txn)); +LifecycleTransaction txn = null; +try +{ +txn = LifecycleTransaction.offline(OperationType.FLUSH); +MetadataCollector sstableMetadataCollector = new MetadataCollector(cfs.metadata.comparator).replayPosition(context); +return new SSTableTxnWriter(txn, + cfs.createSSTableMultiWriter(Descriptor.fromFilename(filename), + (long) partitions.size(), + ActiveRepairService.UNREPAIRED_SSTABLE, + sstableMetadataCollector, +
[5/5] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a2dd0cf Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a2dd0cf Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a2dd0cf Branch: refs/heads/trunk Commit: 3a2dd0cf60f34c242c53d59ad144e76558dface9 Parents: 87f16ca 73781a9 Author: Joshua McKenzie Authored: Fri Oct 23 17:43:32 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 17:43:32 2015 -0400 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/Memtable.java | 28 +- .../db/lifecycle/LifecycleTransaction.java | 39 +- .../db/lifecycle/LogAwareFileLister.java| 3 +- .../apache/cassandra/db/lifecycle/LogFile.java | 263 + .../cassandra/db/lifecycle/LogRecord.java | 192 +-- .../cassandra/db/lifecycle/LogReplica.java | 105 .../cassandra/db/lifecycle/LogReplicaSet.java | 229 .../cassandra/db/lifecycle/LogTransaction.java | 150 +++-- .../apache/cassandra/db/lifecycle/Tracker.java | 4 +- .../cassandra/io/sstable/SSTableTxnWriter.java | 4 +- .../org/apache/cassandra/io/util/FileUtils.java | 20 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../org/apache/cassandra/utils/Throwables.java | 5 + .../unit/org/apache/cassandra/db/ScrubTest.java | 2 +- .../cassandra/db/lifecycle/HelpersTest.java | 2 +- .../db/lifecycle/LogTransactionTest.java| 575 +++ .../db/lifecycle/RealTransactionsTest.java | 6 +- .../io/sstable/SSTableRewriterTest.java | 2 +- 19 files changed, 1232 insertions(+), 400 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a2dd0cf/CHANGES.txt -- diff --cc CHANGES.txt index 2ecad90,bc7c001..dae9264 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,10 -1,5 +1,11 @@@ +3.2 + * Added graphing option to cassandra-stress (CASSANDRA-7918) + * Abort in-progress queries that time out (CASSANDRA-7392) + * Add transparent data encryption core classes (CASSANDRA-9945) + + 3.0 + * Fix LogTransaction checking 
only a single directory for files (CASSANDRA-10421) * Support encrypted and plain traffic on the same port (CASSANDRA-10559) * Fix handling of range tombstones when reading old format sstables (CASSANDRA-10360) * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367)
[3/5] cassandra git commit: Fix potential issue with LogTransaction only checking single dir for files
http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java -- diff --git a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java index a655fd8..df05d71 100644 --- a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java +++ b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java @@ -20,6 +20,7 @@ package org.apache.cassandra.db.lifecycle; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; +import java.nio.file.Files; import java.util.*; import java.util.function.BiConsumer; import java.util.function.Consumer; @@ -80,6 +81,7 @@ public class LogTransactionTest extends AbstractTransactionalTest { final ColumnFamilyStore cfs; final LogTransaction txnLogs; +final File dataFolder; final SSTableReader sstableOld; final SSTableReader sstableNew; final LogTransaction.SSTableTidier tidier; @@ -88,12 +90,13 @@ public class LogTransactionTest extends AbstractTransactionalTest { this.cfs = cfs; this.txnLogs = txnLogs; -this.sstableOld = sstable(cfs, 0, 128); -this.sstableNew = sstable(cfs, 1, 128); +this.dataFolder = new Directories(cfs.metadata).getDirectoryForNewSSTables(); +this.sstableOld = sstable(dataFolder, cfs, 0, 128); +this.sstableNew = sstable(dataFolder, cfs, 1, 128); assertNotNull(txnLogs); -assertNotNull(txnLogs.getId()); -Assert.assertEquals(OperationType.COMPACTION, txnLogs.getType()); +assertNotNull(txnLogs.id()); +Assert.assertEquals(OperationType.COMPACTION, txnLogs.type()); txnLogs.trackNew(sstableNew); tidier = txnLogs.obsoleted(sstableOld); @@ -131,9 +134,9 @@ public class LogTransactionTest extends AbstractTransactionalTest void assertInProgress() throws Exception { -assertFiles(txnLogs.getDataFolder(), Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(), - sstableOld.getAllFilePaths(), - 
Collections.singleton(txnLogs.getLogFile().file.getPath(); +assertFiles(dataFolder.getPath(), Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(), + sstableOld.getAllFilePaths(), + txnLogs.logFilePaths(; } void assertPrepared() throws Exception @@ -142,12 +145,12 @@ public class LogTransactionTest extends AbstractTransactionalTest void assertAborted() throws Exception { -assertFiles(txnLogs.getDataFolder(), new HashSet<>(sstableOld.getAllFilePaths())); +assertFiles(dataFolder.getPath(), new HashSet<>(sstableOld.getAllFilePaths())); } void assertCommitted() throws Exception { -assertFiles(txnLogs.getDataFolder(), new HashSet<>(sstableNew.getAllFilePaths())); +assertFiles(dataFolder.getPath(), new HashSet<>(sstableNew.getAllFilePaths())); } } @@ -160,7 +163,7 @@ public class LogTransactionTest extends AbstractTransactionalTest private TxnTest(ColumnFamilyStore cfs) throws IOException { -this(cfs, new LogTransaction(OperationType.COMPACTION, cfs.metadata)); +this(cfs, new LogTransaction(OperationType.COMPACTION)); } private TxnTest(ColumnFamilyStore cfs, LogTransaction txnLogs) throws IOException @@ -199,10 +202,11 @@ public class LogTransactionTest extends AbstractTransactionalTest public void testUntrack() throws Throwable { ColumnFamilyStore cfs = MockSchema.newCFS(KEYSPACE); -SSTableReader sstableNew = sstable(cfs, 1, 128); +File dataFolder = new Directories(cfs.metadata).getDirectoryForNewSSTables(); +SSTableReader sstableNew = sstable(dataFolder, cfs, 1, 128); // complete a transaction without keep the new files since they were untracked -LogTransaction log = new LogTransaction(OperationType.COMPACTION, cfs.metadata); +LogTransaction log = new LogTransaction(OperationType.COMPACTION); assertNotNull(log); log.trackNew(sstableNew); @@ -214,18 +218,19 @@ public class LogTransactionTest extends AbstractTransactionalTest Thre
[1/5] cassandra git commit: Fix potential issue with LogTransaction only checking single dir for files
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 1d28a4acf -> 73781a9a4 refs/heads/trunk 87f16ca9a -> 3a2dd0cf6 http://git-wip-us.apache.org/repos/asf/cassandra/blob/73781a9a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java -- diff --git a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java index a655fd8..df05d71 100644 --- a/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java +++ b/test/unit/org/apache/cassandra/db/lifecycle/LogTransactionTest.java @@ -20,6 +20,7 @@ package org.apache.cassandra.db.lifecycle; import java.io.File; import java.io.IOException; import java.io.RandomAccessFile; +import java.nio.file.Files; import java.util.*; import java.util.function.BiConsumer; import java.util.function.Consumer; @@ -80,6 +81,7 @@ public class LogTransactionTest extends AbstractTransactionalTest { final ColumnFamilyStore cfs; final LogTransaction txnLogs; +final File dataFolder; final SSTableReader sstableOld; final SSTableReader sstableNew; final LogTransaction.SSTableTidier tidier; @@ -88,12 +90,13 @@ public class LogTransactionTest extends AbstractTransactionalTest { this.cfs = cfs; this.txnLogs = txnLogs; -this.sstableOld = sstable(cfs, 0, 128); -this.sstableNew = sstable(cfs, 1, 128); +this.dataFolder = new Directories(cfs.metadata).getDirectoryForNewSSTables(); +this.sstableOld = sstable(dataFolder, cfs, 0, 128); +this.sstableNew = sstable(dataFolder, cfs, 1, 128); assertNotNull(txnLogs); -assertNotNull(txnLogs.getId()); -Assert.assertEquals(OperationType.COMPACTION, txnLogs.getType()); +assertNotNull(txnLogs.id()); +Assert.assertEquals(OperationType.COMPACTION, txnLogs.type()); txnLogs.trackNew(sstableNew); tidier = txnLogs.obsoleted(sstableOld); @@ -131,9 +134,9 @@ public class LogTransactionTest extends AbstractTransactionalTest void assertInProgress() throws Exception { -assertFiles(txnLogs.getDataFolder(), 
Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(), - sstableOld.getAllFilePaths(), - Collections.singleton(txnLogs.getLogFile().file.getPath(); +assertFiles(dataFolder.getPath(), Sets.newHashSet(Iterables.concat(sstableNew.getAllFilePaths(), + sstableOld.getAllFilePaths(), + txnLogs.logFilePaths(; } void assertPrepared() throws Exception @@ -142,12 +145,12 @@ public class LogTransactionTest extends AbstractTransactionalTest void assertAborted() throws Exception { -assertFiles(txnLogs.getDataFolder(), new HashSet<>(sstableOld.getAllFilePaths())); +assertFiles(dataFolder.getPath(), new HashSet<>(sstableOld.getAllFilePaths())); } void assertCommitted() throws Exception { -assertFiles(txnLogs.getDataFolder(), new HashSet<>(sstableNew.getAllFilePaths())); +assertFiles(dataFolder.getPath(), new HashSet<>(sstableNew.getAllFilePaths())); } } @@ -160,7 +163,7 @@ public class LogTransactionTest extends AbstractTransactionalTest private TxnTest(ColumnFamilyStore cfs) throws IOException { -this(cfs, new LogTransaction(OperationType.COMPACTION, cfs.metadata)); +this(cfs, new LogTransaction(OperationType.COMPACTION)); } private TxnTest(ColumnFamilyStore cfs, LogTransaction txnLogs) throws IOException @@ -199,10 +202,11 @@ public class LogTransactionTest extends AbstractTransactionalTest public void testUntrack() throws Throwable { ColumnFamilyStore cfs = MockSchema.newCFS(KEYSPACE); -SSTableReader sstableNew = sstable(cfs, 1, 128); +File dataFolder = new Directories(cfs.metadata).getDirectoryForNewSSTables(); +SSTableReader sstableNew = sstable(dataFolder, cfs, 1, 128); // complete a transaction without keep the new files since they were untracked -LogTransaction log = new LogTransaction(OperationType.COMPACTION, cfs.metadata); +LogTransaction log = new LogTransaction(OperationType.COMPACTION); assertNotNull(log);
[jira] [Commented] (CASSANDRA-10578) bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing
[ https://issues.apache.org/jira/browse/CASSANDRA-10578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971890#comment-14971890 ] Yuki Morishita commented on CASSANDRA-10578: dtest PR here: https://github.com/riptano/cassandra-dtest/pull/628 > bootstrap_test.py:TestBootstrap.simultaneous_bootstrap_test dtest failing > - > > Key: CASSANDRA-10578 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10578 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Yuki Morishita > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > This test fails on 2.1, 2.2, and 3.0 versions tested on CassCI: > http://cassci.datastax.com/view/cassandra-2.1/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/350/testReport/junit/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/simultaneous_bootstrap_test/ > It fails with the same error, indicating that the third node, which should > not start while another node is bootstrapping, started. Oddly, the assertion > just before it, looking for a particular error in the logs, succeeds. > This could be a race condition, where one node successfully completes > bootstrapping before the third node is started. However, I don't know how > likely that is, since it fails consistently. Unfortunately, we don't have > enough history on CassCI to show when the test failure started. > I'm assigning [~yukim] for now, feel free to reassign. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10513) Update cqlsh for new driver execution API
[ https://issues.apache.org/jira/browse/CASSANDRA-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Holmberg updated CASSANDRA-10513: -- Attachment: 10513-2.1.txt 10513-2.2.txt Attached patches for 2.1 and 2.2 branches. > Update cqlsh for new driver execution API > - > > Key: CASSANDRA-10513 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10513 > Project: Cassandra > Issue Type: New Feature > Components: Tools >Reporter: Adam Holmberg >Assignee: Paulo Motta >Priority: Minor > Labels: cqlsh > Fix For: 2.2.x, 3.0.x > > Attachments: 10513-2.1.txt, 10513-2.2.txt, 10513.txt > > > The 3.0 Python driver will have a few tweaks to the execution API. We'll need > to update cqlsh in a couple of minor ways. > [Results are always returned as an iterable > ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-368] > [Trace data is always attached to the > ResultSet|https://datastax-oss.atlassian.net/browse/PYTHON-318] (instead of > being attached to a statement) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-9741) cfhistograms dtest flaps on trunk and 2.2
[ https://issues.apache.org/jira/browse/CASSANDRA-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg resolved CASSANDRA-9741. --- Resolution: Fixed Going to call this fixed. Josh and myself have reviewed the test history and the test seems reliable. > cfhistograms dtest flaps on trunk and 2.2 > - > > Key: CASSANDRA-9741 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9741 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Ariel Weisberg > Fix For: 3.0.x > > > {{jmx_test.py:TestJMX.cfhistograms_test}} flaps on CassCI under trunk and 2.2. > On 2.2, it fails one of its assertions when {{'Unable to compute when > histogram overflowed'}} is found in the output of {{nodetool cfhistograms}}. > Here's the failure history for 2.2: > http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/lastCompletedBuild/testReport/junit/jmx_test/TestJMX/cfhistograms_test/history/ > On trunk, it fails when an error about a {{WriteFailureException}} during > hinted handoff is found in the C* logs after the tests run ([example cassci > output|http://cassci.datastax.com/view/trunk/job/trunk_dtest/315/testReport/junit/jmx_test/TestJMX/cfhistograms_test/]). > Here's the failure history for trunk: > http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/junit/jmx_test/TestJMX/cfhistograms_test/history/ > I haven't seen it fail locally yet, but haven't run the test more than a > couple times because it takes a while. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9526) Provide a JMX hook to monitor phi values in the FailureDetector
[ https://issues.apache.org/jira/browse/CASSANDRA-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971857#comment-14971857 ] Ariel Weisberg commented on CASSANDRA-9526: --- |[Updated pull request (branch name is wrong)|https://github.com/apache/cassandra/compare/cassandra-2.2...aweisberg:CASSANDRA-9525-v2?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-9525-v2-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-9525-v2-dtest/]| > Provide a JMX hook to monitor phi values in the FailureDetector > --- > > Key: CASSANDRA-9526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9526 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Ron Kuris >Assignee: Ron Kuris > Labels: docs-impacting > Fix For: 2.2.4, 3.0.0 > > Attachments: Monitor-Phi-JMX.patch, Phi-Log-Debug-When-Close.patch, > Tiny-Race-Condition.patch > > > phi_convict_threshold can be tuned, but there's currently no way to monitor > the phi values to see if you're getting close. > The attached patch adds the ability to get these values via JMX. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
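For context on what such a JMX hook exposes: accrual failure detectors report a continuous suspicion level (phi) rather than a binary up/down verdict, and {{phi_convict_threshold}} (default 8) is the level at which a node is convicted. A rough sketch of the phi value under the usual exponential-interarrival assumption; this mirrors the idea only, not Cassandra's actual FailureDetector internals:

```python
import math

# Sketch of an accrual failure detector's phi value. Under the
# exponential-interarrival simplification, the probability that a
# heartbeat arrives later than t is exp(-t/mean), so
#   phi = -log10(P(later than t)) = t / (mean * ln 10).
# This is an illustration, not Cassandra's FailureDetector code.

def phi(time_since_last_heartbeat, mean_interval):
    """Suspicion level; higher means the node is more likely down."""
    return time_since_last_heartbeat / (mean_interval * math.log(10))

# With heartbeats every 1s on average, 8s of silence gives phi ~ 3.47,
# still below the default phi_convict_threshold of 8.
assert phi(8.0, 1.0) < 8
# When t equals mean * ln(10), phi is exactly 1.
assert abs(phi(math.log(10), 1.0) - 1.0) < 1e-9
```

Monitoring this value over JMX, as the patch does, lets operators see how close nodes hover to the conviction threshold before tuning it.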
[jira] [Commented] (CASSANDRA-9741) cfhistograms dtest flaps on trunk and 2.2
[ https://issues.apache.org/jira/browse/CASSANDRA-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971845#comment-14971845 ] Joshua McKenzie commented on CASSANDRA-9741: [~aweisberg]: History on this test looks clean on both 3.0 and 2.2. Are we good to close this out? > cfhistograms dtest flaps on trunk and 2.2 > - > > Key: CASSANDRA-9741 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9741 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Ariel Weisberg > Fix For: 3.0.x > > > {{jmx_test.py:TestJMX.cfhistograms_test}} flaps on CassCI under trunk and 2.2. > On 2.2, it fails one of its assertions when {{'Unable to compute when > histogram overflowed'}} is found in the output of {{nodetool cfhistograms}}. > Here's the failure history for 2.2: > http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/lastCompletedBuild/testReport/junit/jmx_test/TestJMX/cfhistograms_test/history/ > On trunk, it fails when an error about a {{WriteFailureException}} during > hinted handoff is found in the C* logs after the tests run ([example cassci > output|http://cassci.datastax.com/view/trunk/job/trunk_dtest/315/testReport/junit/jmx_test/TestJMX/cfhistograms_test/]). > Here's the failure history for trunk: > http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/junit/jmx_test/TestJMX/cfhistograms_test/history/ > I haven't seen it fail locally yet, but haven't run the test more than a > couple of times because it takes a while. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10326) Performance is worse in 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971838#comment-14971838 ] Joshua McKenzie commented on CASSANDRA-10326: - Now that CASSANDRA-10403 is committed, can we get some final comparison #'s for 2.2 v. 3.0? > Performance is worse in 3.0 > --- > > Key: CASSANDRA-10326 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10326 > Project: Cassandra > Issue Type: Bug >Reporter: Benedict >Priority: Critical > Fix For: 3.0.x > > > Performance is generally turning out to be worse after 8099, despite a number > of unrelated performance enhancements being delivered. This isn't entirely > unexpected, given a great deal of time was spent optimising the old code, > however things appear worse than we had hoped. > My expectation was that workloads making extensive use of CQL constructs > would be faster post-8099, however the latest tests performed with very large > CQL rows, including use of collections, still exhibit performance below that > of 2.1 and 2.2. > Eventually, as the dataset size grows large enough and the locality of access > is just right, the reduction in size of our dataset will yield a window > during which some users will perform better due simply to improved page cache > hit rates. We seem to see this in some of the tests. However we should be at > least as fast (and really faster) off the bat. > The following are some large partition benchmark results, with as many as 40K > rows per partition, running LCS. There are a number of parameters we can > modify to see how behaviour changes and under what scenarios we might still > be faster, but the picture painted isn't brilliant, and is consistent, so we > should really try and figure out what's up before GA. 
> [trades-with-flags (collections), > blade11b|http://cstar.datastax.com/graph?stats=f0a17292-5a13-11e5-847a-42010af0688f&metric=op_rate&operation=1_user&smoothing=1&show_aggregates=true&xmin=0&xmax=4387.02&ymin=0&ymax=122951.4] > [trades-with-flags (collections), > blade11|http://cstar.datastax.com/graph?stats=e250-5a13-11e5-ae0d-42010af0688f&metric=op_rate&operation=1_user&smoothing=1&show_aggregates=true&xmin=0&xmax=4424.75&ymin=0&ymax=130158.6] > [trades (no collections), > blade11|http://cstar.datastax.com/graph?stats=9b7da48e-570c-11e5-90fe-42010af0688f&metric=op_rate&operation=1_user&smoothing=1&show_aggregates=true&xmin=0&xmax=2682.46&ymin=0&ymax=142547.9] > [~slebresne]: will you have time to look into this before GA? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10501) Failure to start up Cassandra when temporary compaction files are not all renamed after kill/crash (FSReadError)
[ https://issues.apache.org/jira/browse/CASSANDRA-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-10501: Fix Version/s: 3.0.0 > Failure to start up Cassandra when temporary compaction files are not all > renamed after kill/crash (FSReadError) > > > Key: CASSANDRA-10501 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10501 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.1.6 > Redhat Linux >Reporter: Mathieu Roy >Assignee: Marcus Eriksson > Labels: compaction, triage > Fix For: 2.1.x, 2.2.x, 3.0.0 > > > We have seen an issue intermittently but repeatedly over the last few months > where, after exiting the Cassandra process, it fails to start with an > FSReadError (stack trace below). The FSReadError refers to a 'Statistics' > file for an SSTable that doesn't exist, though a corresponding temporary file does > exist (e.g. there is no > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-ka-9-Statistics.db > file, but there is a > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-tmp-ka-9-Statistics.db > file.) > We tracked down the issue to the fact that the process exited with leftover > compactions and some of the 'tmp' files for the SSTable had been renamed to > final files, but not all of them - the issue happens if the 'Statistics' file > is not renamed but others are. The scenario we've seen on the last two > occurrences involves the 'CompressionInfo' file being a final file while all > other files for the SSTable generation were left with 'tmp' names. > When this occurs, Cassandra cannot start until the file issue is resolved; > we've worked around it by deleting the SSTable files from the same > generation, both final and tmp, which at least allows Cassandra to start. 
> Renaming all files to either tmp or final names would also work. > We've done some debugging in Cassandra and have been unable to cause the > issue without renaming the files manually. The rename code at > SSTableWriter.rename() looks like it could result in this if the process > exits in the middle of the rename, but in every occurrence we've debugged > through, the Set of components is ordered and Statistics is the first file > renamed. > However, the comments in SSTableWriter.rename() suggest that the 'Data' file > is meant to serve as the marker that the files were completely renamed. The method > ColumnFamilyStore.removeUnfinishedCompactionLeftovers(), however, will > proceed assuming the compaction is complete if any of the component files has > a final name, and will skip temporary files when reading the list. If the > 'Statistics' file is temporary then it won't be read, and the defaults do > not include a list of ancestors, leading to the NullPointerException. > It appears that ColumnFamilyStore.removeUnfinishedCompactionLeftovers() > should perhaps either ensure that all 'tmp' files are properly renamed before > it uses them, or skip SSTable files that don't have either the 'Data' or > 'Statistics' file in final form. > Stack trace: > {code} > FSReadError in Failed to remove unfinished compaction leftovers (file: > /media/data/cassandraDB/data/clusteradmin/singleton_token-01a92ed069b511e59b2c53679a538c14/clusteradmin-singleton_token-ka-9-Statistics.db). > See log for details. 
> at > org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:617) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) > Caused by: java.lang.NullPointerException > at > org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:609) > ... 3 more > Exception encountered during startup: java.lang.NullPointerException > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
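The consistency check the reporter suggests above amounts to refusing to treat an SSTable generation as complete while its component files disagree about tmp vs. final naming. An illustrative sketch under simplified assumptions (the filename pattern and helper are invented, condensed from the ka-format names quoted in the report, and are not Cassandra's actual removeUnfinishedCompactionLeftovers logic):

```python
import re
from collections import defaultdict

# An SSTable generation is only "complete" if none of its component files
# still carry a tmp name. Simplified ka-style filename pattern:
#   <keyspace>-<table>-[tmp-]ka-<generation>-<Component>.db
NAME = re.compile(
    r"^(?P<ks>\w+)-(?P<cf>\w+)-(?P<tmp>tmp-)?ka-(?P<gen>\d+)-(?P<comp>\w+)\.db$"
)

def incomplete_generations(filenames):
    """Return generations whose components mix final and tmp names."""
    by_gen = defaultdict(lambda: {"tmp": set(), "final": set()})
    for f in filenames:
        m = NAME.match(f)
        if not m:
            continue
        state = "tmp" if m.group("tmp") else "final"
        by_gen[int(m.group("gen"))][state].add(m.group("comp"))
    return sorted(g for g, s in by_gen.items() if s["tmp"] and s["final"])

files = [
    "ks-cf-ka-9-CompressionInfo.db",  # renamed to its final name...
    "ks-cf-tmp-ka-9-Statistics.db",   # ...but Statistics is still tmp
    "ks-cf-ka-8-Data.db",             # generation 8 is fully renamed
    "ks-cf-ka-8-Statistics.db",
]
assert incomplete_generations(files) == [9]
```

A startup check along these lines would flag generation 9 for repair (rename all to final, or delete all) instead of reading the tmp Statistics file and hitting the NullPointerException.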
[jira] [Updated] (CASSANDRA-10508) Remove hard-coded SSL cipher suites and protocols
[ https://issues.apache.org/jira/browse/CASSANDRA-10508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-10508: --- Fix Version/s: 3.x > Remove hard-coded SSL cipher suites and protocols > - > > Key: CASSANDRA-10508 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10508 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Stefan Podkowinski > Fix For: 3.x > > > Currently each SSL connection is initialized using a hard-coded list of > protocols ("SSLv2Hello", "TLSv1", "TLSv1.1", "TLSv1.2") and cipher suites. We > now require Java 8, which comes with solid defaults for these kinds of SSL > settings, and I'm wondering whether the current behavior shouldn't be re-evaluated. > In my impression the way cipher suites are currently whitelisted is > problematic, as this will prevent the JVM from using more recent and more > secure suites that haven't been added to the hard-coded list. JVM updates may > also cause issues in case the limited number of ciphers cannot be used, e.g. > see CASSANDRA-6613. > Looking at the source I've also stumbled upon a bug in the > {{filterCipherSuites()}} method, which would return the filtered list of > ciphers in an undetermined order even though the result is passed to > {{setEnabledCipherSuites()}}. However, the list of ciphers is treated as an > order of preference > ([source|https://bugs.openjdk.java.net/browse/JDK-8087311]), and therefore you > may end up with weaker algorithms at the top. Currently it's not that > critical, as we only whitelist a couple of ciphers anyway. But it adds to the > question of whether it still makes sense to work with the cipher list at all > in the Cassandra code base. > Another way to affect the ciphers used is by changing the security properties. 
> This is a more versatile way to work with cipher lists than relying on > hard-coded values; see > [here|https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#DisabledAlgorithms] > for details. > The same applies to the protocols. Introduced in CASSANDRA-8265 to prevent > SSLv3 attacks, the hard-coded protocol list is not necessary anymore, as SSLv3 is now blacklisted > anyway, and the list will prevent safer protocol sets from being picked up on new JVM releases or by user > request. Again, we should stick with the JVM defaults. The > {{jdk.tls.client.protocols}} system property will still allow restricting > the set of protocols in case another emergency fix is needed. > You can find a patch where I ripped out the mentioned options here: > [Diff > trunk|https://github.com/apache/cassandra/compare/trunk...spodkowinski:fix/ssloptions] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
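The ordering bug described for {{filterCipherSuites()}} comes down to intersecting two lists while preserving the preference order of the whitelist rather than the (effectively unordered) order of the supported set. A minimal order-preserving sketch of that intersection; this is an illustration of the fix's idea, not the Cassandra code:

```python
def filter_cipher_suites(supported, preferred):
    """Intersect while keeping the *preference* order of `preferred`,
    not the unordered iteration order of the supported set."""
    supported = set(supported)
    return [c for c in preferred if c in supported]

# Hypothetical example suites; what matters is only the ordering behavior.
jvm_supported = {
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
}
whitelist = [
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",  # strongest first
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_UNSUPPORTED_EXAMPLE_SUITE",          # dropped: not supported
]
assert filter_cipher_suites(jvm_supported, whitelist) == [
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
]
```

Since setEnabledCipherSuites() treats its argument as an order of preference, iterating the preferred list (not the supported set) is what keeps the strongest suites at the top.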
[jira] [Commented] (CASSANDRA-10495) Improve the way we do streaming with vnodes
[ https://issues.apache.org/jira/browse/CASSANDRA-10495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971787#comment-14971787 ] Jonathan Ellis commented on CASSANDRA-10495: bq. LCS will not want to mix sstables from different levels while STCS can probably just combine everything In the bootstrap case, we can leave things in the original levels, but in the more common repair case we can't. Maybe just "always combine everything" is fine. > Improve the way we do streaming with vnodes > --- > > Key: CASSANDRA-10495 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10495 > Project: Cassandra > Issue Type: Improvement >Reporter: Marcus Eriksson > Fix For: 3.x > > > Streaming with vnodes usually creates a large amount of sstables on the > target node - for example if each source node has 100 sstables and we use > num_tokens = 256, the bootstrapping (for example) node might get 100*256 > sstables > One approach could be to do an on-the-fly compaction on the source node, > meaning we would only stream out one sstable per range. Note that we will > want the compaction strategy to decide how to combine the sstables, for > example LCS will not want to mix sstables from different levels while STCS > can probably just combine everything > cc [~yukim] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-10489) arbitrary order by on partitions
[ https://issues.apache.org/jira/browse/CASSANDRA-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-10489. Resolution: Won't Fix > arbitrary order by on partitions > > > Key: CASSANDRA-10489 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10489 > Project: Cassandra > Issue Type: Improvement >Reporter: Jon Haddad >Priority: Minor > > We've got aggregations; we might as well allow sorting rows within a > partition on arbitrary fields. Currently the advice is "do it client side", > but when combined with a LIMIT clause it makes sense to do this server side. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10241) Keep a separate production debug log for troubleshooting
[ https://issues.apache.org/jira/browse/CASSANDRA-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-10241: --- Labels: doc-impacting (was: ) > Keep a separate production debug log for troubleshooting > > > Key: CASSANDRA-10241 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10241 > Project: Cassandra > Issue Type: New Feature > Components: Config >Reporter: Jonathan Ellis >Assignee: Paulo Motta > Labels: doc-impacting > Fix For: 3.0.0 rc2, 2.2.2 > > Attachments: 2.2-debug.log, 2.2-system.log, 3.0-debug.log, > 3.0-system.log > > > [~aweisberg] had the suggestion to keep a separate debug log for aid in > troubleshooting, not intended for regular human consumption but where we can > log things that might help if something goes wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10241) Keep a separate production debug log for troubleshooting
[ https://issues.apache.org/jira/browse/CASSANDRA-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971769#comment-14971769 ] Jonathan Ellis commented on CASSANDRA-10241: Paulo, looks good. Send it off to dev! > Keep a separate production debug log for troubleshooting > > > Key: CASSANDRA-10241 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10241 > Project: Cassandra > Issue Type: New Feature > Components: Config >Reporter: Jonathan Ellis >Assignee: Paulo Motta > Fix For: 3.0.0 rc2, 2.2.2 > > Attachments: 2.2-debug.log, 2.2-system.log, 3.0-debug.log, > 3.0-system.log > > > [~aweisberg] had the suggestion to keep a separate debug log for aid in > troubleshooting, not intended for regular human consumption but where we can > log things that might help if something goes wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8755) Replace trivial uses of String.replace/replaceAll/split with StringUtils methods
[ https://issues.apache.org/jira/browse/CASSANDRA-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-8755: -- Reviewer: Robert Stupp (was: Jaroslav Kamenik) > Replace trivial uses of String.replace/replaceAll/split with StringUtils > methods > > > Key: CASSANDRA-8755 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8755 > Project: Cassandra > Issue Type: Improvement >Reporter: Jaroslav Kamenik >Priority: Trivial > Labels: lhf > Attachments: trunk-8755.patch, trunk-8755.txt > > > There are places in the code where those regex based methods are used with > plain, not regexp, strings, so StringUtils alternatives should be faster. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10470) Fix upgrade_tests.cql_tests/TestCQL/counters_test dtest
[ https://issues.apache.org/jira/browse/CASSANDRA-10470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971760#comment-14971760 ] Jim Witschey commented on CASSANDRA-10470: -- These tests are being run on 3-node clusters with RF=3 here now: http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/287/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3/counters_test/history/ And, as before, on 2 nodes with RF=1 here: http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/287/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1/counters_test/history/ > Fix upgrade_tests.cql_tests/TestCQL/counters_test dtest > --- > > Key: CASSANDRA-10470 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10470 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: Paulo Motta >Priority: Critical > Fix For: 3.0.0 > > > This test fails on CassCI: > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.cql_tests/TestCQL/counters_test/ > Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is > merged, these tests should also run with this upgrade path on normal 3.0 > jobs. Until then, you can run it with the following command: > {code} > SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 > nosetests 2>&1 upgrade_tests/cql_tests.py:TestCQL.counters_test > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-8655) Exception on upgrade to trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971692#comment-14971692 ] Philip Thompson edited comment on CASSANDRA-8655 at 10/23/15 7:42 PM: -- We no longer have this test, unfortunately. However, it hasn't come up in the new tests that track 2.1->3.0 upgrade or 2.2->3.0 upgrade. You can probably close this. EDIT: Russ showed me where this test set was moved. This exact exception is not there anymore, but I do see others, which are unrelated. We'll open tickets for those. was (Author: philipthompson): We no longer have this test, unfortunately. However, it hasn't come up in the new tests that track 2.1->3.0 upgrade or 2.2->3.0 upgrade. You can probably close this. > Exception on upgrade to trunk > - > > Key: CASSANDRA-8655 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8655 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson > Fix For: 3.0.0 > > > The dtest > upgrade_through_versions_test.TestUpgrade_from_cassandra_2_1_latest_tag_to_trunk_HEAD.upgrade_test_mixed > is failing with the following exception: > {code} > ERROR [Thread-10] 2015-01-20 14:12:44,117 CassandraDaemon.java:170 - > Exception in thread Thread[Thread-10,5,main] > java.lang.NullPointerException: null > at > org.apache.cassandra.db.SliceFromReadCommandSerializer.deserialize(SliceFromReadCommand.java:153) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:157) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:131) > ~[main/:na] > at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) > ~[main/:na] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) > ~[main/:na] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) > ~[main/:na] > at > 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) > ~[main/:na] > {code} > It is trying to execute a simple "SELECT k,v FROM cf WHERE k=X" query on a > trunk node after upgrading from 2.1-HEAD. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8655) Exception on upgrade to trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971692#comment-14971692 ] Philip Thompson commented on CASSANDRA-8655: We no longer have this test, unfortunately. However, it hasn't come up in the new tests that track 2.1->3.0 upgrade or 2.2->3.0 upgrade. You can probably close this. > Exception on upgrade to trunk > - > > Key: CASSANDRA-8655 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8655 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson > Fix For: 3.0.0 > > > The dtest > upgrade_through_versions_test.TestUpgrade_from_cassandra_2_1_latest_tag_to_trunk_HEAD.upgrade_test_mixed > is failing with the following exception: > {code} > ERROR [Thread-10] 2015-01-20 14:12:44,117 CassandraDaemon.java:170 - > Exception in thread Thread[Thread-10,5,main] > java.lang.NullPointerException: null > at > org.apache.cassandra.db.SliceFromReadCommandSerializer.deserialize(SliceFromReadCommand.java:153) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:157) > ~[main/:na] > at > org.apache.cassandra.db.ReadCommandSerializer.deserialize(ReadCommand.java:131) > ~[main/:na] > at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) > ~[main/:na] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) > ~[main/:na] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) > ~[main/:na] > at > org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) > ~[main/:na] > {code} > It is trying to execute a simple "SELECT k,v FROM cf WHERE k=X" query on a > trunk node after upgrading from 2.1-HEAD. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
[ https://issues.apache.org/jira/browse/CASSANDRA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971593#comment-14971593 ] Kai Wang edited comment on CASSANDRA-10583 at 10/23/15 7:36 PM: This seems to be related to bulk loading. To reproduce: 1. Clone https://github.com/depend/issues/tree/master/CASSANDRA-10583. Build and run it, this application will generate an sstable with 10 rows. 2. Load it into C* with sstableloader. 3. {noformat} cqlsh:timeseries_test> select * from double_daily; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) {noformat} 4. {noformat} cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-01 00:00:00-0400'; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) {noformat} 5. {noformat} cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-02 00:00:00-0400'; tag | group | timestamp | value -+---+---+--- (0 rows) {noformat} I wasn't able to find that "equal" condition which returns everything. But query #5 still shows nothing is later than 2002/5/2 which is not true. was (Author: depend): This seems to be related to bulk loading. To reproduce: 1. 
Clone https://github.com/depend/issues/tree/master/CASSANDRA-10583. Build and run it, this application will generate an sstable with 10 rows. 2. Load it into C* with sstableloader. 3. {noformat} cqlsh:timeseries_test> select * from double_daily; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) {noformat} 4. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-01 00:00:00-0400'; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) 5. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-02 00:00:00-0400'; tag | group | timestamp | value -+---+---+--- (0 rows) I wasn't able to find that "equal" condition which returns everything. But query #5 still shows nothing is later than 2002/5/2 which is not true. > After bulk loading CQL query on timestamp column returns wrong result > - > > Key: CASSANDRA-10583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10583 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java > x64 1.8.0_60 >Reporter: Kai Wang > Fix For: 3.x, 2.1.x, 2.2.x > > > I have this table: > {noformat} > CREATE TABLE test ( > tag tex
[jira] [Comment Edited] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
[ https://issues.apache.org/jira/browse/CASSANDRA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971593#comment-14971593 ] Kai Wang edited comment on CASSANDRA-10583 at 10/23/15 7:35 PM: This seems to be related to bulk loading. To reproduce: 1. Clone https://github.com/depend/issues/tree/master/CASSANDRA-10583. Build and run it, this application will generate an sstable with 10 rows. 2. Load it into C* with sstableloader. 3. {noformat} cqlsh:timeseries_test> select * from double_daily; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) {noformat} 4. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-01 00:00:00-0400'; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) 5. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-02 00:00:00-0400'; tag | group | timestamp | value -+---+---+--- (0 rows) I wasn't able to find that "equal" condition which returns everything. But query #5 still shows nothing is later than 2002/5/2 which is not true. was (Author: depend): This seems to be related to bulk loading. To reproduce: 1. Clone https://github.com/depend/issues/tree/master/CASSANDRA-10583. 
Build and run it, this application will generate an sstable with 10 rows. 2. Load it into C* with sstableloader. 3. cqlsh:timeseries_test> select * from double_daily; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) 4. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-01 00:00:00-0400'; tag | group | timestamp| value --+---+--+--- TEST | 1 | 2002-05-01 04:00:00+ | 0 TEST | 1 | 2002-05-02 04:00:00+ | 1 TEST | 1 | 2002-05-03 04:00:00+ | 2 TEST | 1 | 2002-05-04 04:00:00+ | 3 TEST | 1 | 2002-05-05 04:00:00+ | 4 TEST | 1 | 2002-05-06 04:00:00+ | 5 TEST | 1 | 2002-05-07 04:00:00+ | 6 TEST | 1 | 2002-05-08 04:00:00+ | 7 TEST | 1 | 2002-05-09 04:00:00+ | 8 TEST | 1 | 2002-05-10 04:00:00+ | 9 (10 rows) 5. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-02 00:00:00-0400'; tag | group | timestamp | value -+---+---+--- (0 rows) I wasn't able to find that "equal" condition which returns everything. But query #5 still shows nothing is later than 2002/5/2 which is not true. > After bulk loading CQL query on timestamp column returns wrong result > - > > Key: CASSANDRA-10583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10583 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java > x64 1.8.0_60 >Reporter: Kai Wang > Fix For: 3.x, 2.1.x, 2.2.x > > > I have this table: > {noformat} > CREATE TABLE test ( > tag text, > group int, > timestamp timestamp, > value double, > PR
[jira] [Commented] (CASSANDRA-10421) Potential issue with LogTransaction as it only checks in a single directory for files
[ https://issues.apache.org/jira/browse/CASSANDRA-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971656#comment-14971656 ] Ariel Weisberg commented on CASSANDRA-10421: This looks ready to commit. The windows utests and dtests seem to match the 3.0 windows branch. > Potential issue with LogTransaction as it only checks in a single directory > for files > - > > Key: CASSANDRA-10421 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10421 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Stefania >Priority: Blocker > Fix For: 3.0.0 > > > When creating a new LogTransaction we try to create the new logfile in the > same directory as the one we are writing to, but as we use > {{[directories.getDirectoryForNewSSTables()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogTransaction.java#L125]}} > this might end up in "any" of the configured data directories. If it does, > we will not be able to clean up leftovers as we check for files in the same > directory as the logfile was created: > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LogRecord.java#L163 > cc [~Stefania] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
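The bug described above is a cleanup scan scoped to the single directory holding the logfile, while the transaction's sstable files may sit in any of the configured data directories. A hedged sketch of the scan direction the fix implies, checking every data directory rather than one (the helper name and filename shapes are invented, not LogRecord's code):

```python
import os
import tempfile

def leftover_files(data_directories, prefix):
    """Collect files matching a transaction's sstable prefix across *all*
    configured data directories, not just the one holding the logfile."""
    hits = []
    for d in data_directories:
        for name in sorted(os.listdir(d)):
            if name.startswith(prefix):
                hits.append(os.path.join(d, name))
    return hits

# Demo: components of one sstable split across two data directories, as
# directories.getDirectoryForNewSSTables() can produce.
with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    open(os.path.join(d1, "ma-5-big-Data.db"), "w").close()
    open(os.path.join(d2, "ma-5-big-Index.db"), "w").close()
    open(os.path.join(d2, "ma-6-big-Data.db"), "w").close()
    # A single-directory scan of d1 would miss the Index component in d2.
    assert len(leftover_files([d1, d2], "ma-5")) == 2
```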
[jira] [Commented] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
[ https://issues.apache.org/jira/browse/CASSANDRA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971593#comment-14971593 ] Kai Wang commented on CASSANDRA-10583:
--
This seems to be related to bulk loading. To reproduce:
1. Clone https://github.com/depend/issues/tree/master/CASSANDRA-10583. Build and run it; the application will generate an sstable with 10 rows.
2. Load it into C* with sstableloader.
3. cqlsh:timeseries_test> select * from double_daily;
 tag  | group | timestamp                | value
------+-------+--------------------------+-------
 TEST |     1 | 2002-05-01 04:00:00+0000 |     0
 TEST |     1 | 2002-05-02 04:00:00+0000 |     1
 TEST |     1 | 2002-05-03 04:00:00+0000 |     2
 TEST |     1 | 2002-05-04 04:00:00+0000 |     3
 TEST |     1 | 2002-05-05 04:00:00+0000 |     4
 TEST |     1 | 2002-05-06 04:00:00+0000 |     5
 TEST |     1 | 2002-05-07 04:00:00+0000 |     6
 TEST |     1 | 2002-05-08 04:00:00+0000 |     7
 TEST |     1 | 2002-05-09 04:00:00+0000 |     8
 TEST |     1 | 2002-05-10 04:00:00+0000 |     9
(10 rows)
4. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-01 00:00:00-0400';
 tag  | group | timestamp                | value
------+-------+--------------------------+-------
 TEST |     1 | 2002-05-01 04:00:00+0000 |     0
 TEST |     1 | 2002-05-02 04:00:00+0000 |     1
 TEST |     1 | 2002-05-03 04:00:00+0000 |     2
 TEST |     1 | 2002-05-04 04:00:00+0000 |     3
 TEST |     1 | 2002-05-05 04:00:00+0000 |     4
 TEST |     1 | 2002-05-06 04:00:00+0000 |     5
 TEST |     1 | 2002-05-07 04:00:00+0000 |     6
 TEST |     1 | 2002-05-08 04:00:00+0000 |     7
 TEST |     1 | 2002-05-09 04:00:00+0000 |     8
 TEST |     1 | 2002-05-10 04:00:00+0000 |     9
(10 rows)
5. cqlsh:timeseries_test> select * from double_daily where tag='TEST' and group = 1 and timestamp > '2002-05-02 00:00:00-0400';
 tag | group | timestamp | value
-----+-------+-----------+-------
(0 rows)
I wasn't able to reproduce the "equal" condition that returns everything, but query #5 still claims nothing is later than 2002-05-02, which is not true.
> After bulk loading CQL query on timestamp column returns wrong result
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-10583
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10583
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java x64 1.8.0_60
>            Reporter: Kai Wang
>             Fix For: 3.x, 2.1.x, 2.2.x
>
> I have this table:
> {noformat}
> CREATE TABLE test (
>     tag text,
>     group int,
>     timestamp timestamp,
>     value double,
>     PRIMARY KEY (tag, group, timestamp)
> ) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC)
> {noformat}
> First I used CQLSSTableWriter to bulk load a bunch of sstables. Then I ran this query:
> {noformat}
> cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp = '2004-12-15 16:00:00-0500';
>  tag  | group | timestamp                | value
> ------+-------+--------------------------+-------
>  MSFT |     1 | 2004-12-15 21:00:00+0000 | 27.11
>  MSFT |     1 | 2004-12-16 21:00:00+0000 | 27.16
>  MSFT |     1 | 2004-12-17 21:00:00+0000 | 26.96
>  MSFT |     1 | 2004-12-20 21:00:00+0000 | 26.95
>  MSFT |     1 | 2004-12-21 21:00:00+0000 | 27.07
>  MSFT |     1 | 2004-12-22 21:00:00+0000 | 26.98
>  MSFT |     1 | 2004-12-23 21:00:00+0000 | 27.01
>  MSFT |     1 | 2004-12-27 21:00:00+0000 | 26.85
>  MSFT |     1 | 2004-12-28 21:00:00+0000 | 26.95
>  MSFT |     1 | 2004-12-29 21:00:00+0000 | 26.9
>  MSFT |     1 | 2004-12-30 21:00:00+0000 | 26.76
> (11 rows)
> {noformat}
> The result is obviously wrong.
> If I run this query:
> {noformat}
> cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp = '2004-12-16 16:00:00-0500';
>  tag | group | timestamp | value
> -----+-------+-----------+-------
> (0 rows)
> {noformat}
> In DevCenter I tried to create a similar table and insert a few rows but couldn't reproduce this. This may have something to do with the bulk loading process. But still, the fact cqlsh returns data that doesn't match the query is concerning.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
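The timezone arithmetic behind these reports is easy to check independently: cqlsh displays timestamps in UTC, so the query bound '2002-05-02 00:00:00-0400' from comment step 5 is 04:00:00 UTC, and a correct strict `>` filter should still match the eight rows stored for May 3 through May 10. A small sketch using only the standard library:

```python
from datetime import datetime, timedelta, timezone

# The query bound '2002-05-02 00:00:00-0400' expressed in UTC:
bound = datetime(2002, 5, 2, 0, 0, tzinfo=timezone(timedelta(hours=-4)))
bound_utc = bound.astimezone(timezone.utc)  # 2002-05-02 04:00:00 UTC

# The ten stored clustering values (2002-05-01 .. 2002-05-10, all 04:00 UTC):
rows = [datetime(2002, 5, d, 4, 0, tzinfo=timezone.utc) for d in range(1, 11)]

# May 3..10 are strictly greater than the bound, so a correct
# `timestamp > ...` query must return 8 rows, not the 0 rows observed.
matching = [r for r in rows if r > bound_utc]
```

Returning 0 rows for this predicate is therefore wrong regardless of how the timezone offset is interpreted.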
[jira] [Updated] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
[ https://issues.apache.org/jira/browse/CASSANDRA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-10583: Reproduced In: 2.1.10 Fix Version/s: 2.2.x 2.1.x 3.x > After bulk loading CQL query on timestamp column returns wrong result > - > > Key: CASSANDRA-10583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10583 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java > x64 1.8.0_60 >Reporter: Kai Wang > Fix For: 3.x, 2.1.x, 2.2.x > > > I have this table: > {noformat} > CREATE TABLE test ( > tag text, > group int, > timestamp timestamp, > value double, > PRIMARY KEY (tag, group, timestamp) > ) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC) > {noformat} > First I used CQLSSTableWriter to bulk load a bunch of sstables. Then I ran > this query: > {noformat} > cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp > ='2004-12-15 16:00:00-0500'; > tag | group | timestamp| value > --+---+--+--- > MSFT | 1 | 2004-12-15 21:00:00+ | 27.11 > MSFT | 1 | 2004-12-16 21:00:00+ | 27.16 > MSFT | 1 | 2004-12-17 21:00:00+ | 26.96 > MSFT | 1 | 2004-12-20 21:00:00+ | 26.95 > MSFT | 1 | 2004-12-21 21:00:00+ | 27.07 > MSFT | 1 | 2004-12-22 21:00:00+ | 26.98 > MSFT | 1 | 2004-12-23 21:00:00+ | 27.01 > MSFT | 1 | 2004-12-27 21:00:00+ | 26.85 > MSFT | 1 | 2004-12-28 21:00:00+ | 26.95 > MSFT | 1 | 2004-12-29 21:00:00+ | 26.9 > MSFT | 1 | 2004-12-30 21:00:00+ | 26.76 > (11 rows) > {noformat} > The result is obviously wrong. > If I run this query: > {noformat} > cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp > ='2004-12-16 16:00:00-0500'; > tag | group | timestamp | value > -+---+---+--- > (0 rows) > {noformat} > In DevCenter I tried to create a similar table and insert a few rows but > couldn't reproduce this. This may have something to do with the bulk loading > process. 
But still, the fact cqlsh returns data that doesn't match the query > is concerning. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
[ https://issues.apache.org/jira/browse/CASSANDRA-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-10583: Description: I have this table: {noformat} CREATE TABLE test ( tag text, group int, timestamp timestamp, value double, PRIMARY KEY (tag, group, timestamp) ) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC) {noformat} First I used CQLSSTableWriter to bulk load a bunch of sstables. Then I ran this query: {noformat} cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp ='2004-12-15 16:00:00-0500'; tag | group | timestamp| value --+---+--+--- MSFT | 1 | 2004-12-15 21:00:00+ | 27.11 MSFT | 1 | 2004-12-16 21:00:00+ | 27.16 MSFT | 1 | 2004-12-17 21:00:00+ | 26.96 MSFT | 1 | 2004-12-20 21:00:00+ | 26.95 MSFT | 1 | 2004-12-21 21:00:00+ | 27.07 MSFT | 1 | 2004-12-22 21:00:00+ | 26.98 MSFT | 1 | 2004-12-23 21:00:00+ | 27.01 MSFT | 1 | 2004-12-27 21:00:00+ | 26.85 MSFT | 1 | 2004-12-28 21:00:00+ | 26.95 MSFT | 1 | 2004-12-29 21:00:00+ | 26.9 MSFT | 1 | 2004-12-30 21:00:00+ | 26.76 (11 rows) {noformat} The result is obviously wrong. If I run this query: {noformat} cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp ='2004-12-16 16:00:00-0500'; tag | group | timestamp | value -+---+---+--- (0 rows) {noformat} In DevCenter I tried to create a similar table and insert a few rows but couldn't reproduce this. This may have something to do with the bulk loading process. But still, the fact cqlsh returns data that doesn't match the query is concerning. was: I have this table: CREATE TABLE test ( tag text, group int, timestamp timestamp, value double, PRIMARY KEY (tag, group, timestamp) ) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC) First I used CQLSSTableWriter to bulk load a bunch of sstables. 
Then I ran this query: cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp ='2004-12-15 16:00:00-0500'; tag | group | timestamp| value --+---+--+--- MSFT | 1 | 2004-12-15 21:00:00+ | 27.11 MSFT | 1 | 2004-12-16 21:00:00+ | 27.16 MSFT | 1 | 2004-12-17 21:00:00+ | 26.96 MSFT | 1 | 2004-12-20 21:00:00+ | 26.95 MSFT | 1 | 2004-12-21 21:00:00+ | 27.07 MSFT | 1 | 2004-12-22 21:00:00+ | 26.98 MSFT | 1 | 2004-12-23 21:00:00+ | 27.01 MSFT | 1 | 2004-12-27 21:00:00+ | 26.85 MSFT | 1 | 2004-12-28 21:00:00+ | 26.95 MSFT | 1 | 2004-12-29 21:00:00+ | 26.9 MSFT | 1 | 2004-12-30 21:00:00+ | 26.76 (11 rows) The result is obviously wrong. If I run this query: cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp ='2004-12-16 16:00:00-0500'; tag | group | timestamp | value -+---+---+--- (0 rows) In DevCenter I tried to create a similar table and insert a few rows but couldn't reproduce this. This may have something to do with the bulk loading process. But still, the fact cqlsh returns data that doesn't match the query is concerning. > After bulk loading CQL query on timestamp column returns wrong result > - > > Key: CASSANDRA-10583 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10583 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java > x64 1.8.0_60 >Reporter: Kai Wang > Fix For: 3.x, 2.1.x, 2.2.x > > > I have this table: > {noformat} > CREATE TABLE test ( > tag text, > group int, > timestamp timestamp, > value double, > PRIMARY KEY (tag, group, timestamp) > ) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC) > {noformat} > First I used CQLSSTableWriter to bulk load a bunch of sstables. 
Then I ran > this query: > {noformat} > cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp > ='2004-12-15 16:00:00-0500'; > tag | group | timestamp| value > --+---+--+--- > MSFT | 1 | 2004-12-15 21:00:00+ | 27.11 > MSFT | 1 | 2004-12-16 21:00:00+ | 27.16 > MSFT | 1 | 2004-12-17 21:00:00+ | 26.96 > MSFT | 1 | 2004-12-20 21:00:00+ | 26.95 > MSFT | 1 | 2004-12-21 21:00:00+ | 27.07 > MSFT | 1 | 2004-12-22 21:00:00+ | 26.98 > MSFT | 1 | 2004-12-23 21:00:00+ | 27.01 > MSFT | 1 | 2004-12-27 21:00:00+ | 26.85 > MSFT | 1 | 2004-12-28 21:00:00+ | 26.95 > MSFT | 1 | 2004-12-29 21:00:00+ | 26.9 > MSFT | 1 | 2004-12-30 21:00:00+00
[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator
[ https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971574#comment-14971574 ] Sergio Bossa commented on CASSANDRA-9318: - bq. The whole point of this ticket is to avoid the complexity of intra-node backpressure, and instead basing coordinator -> client backpressure on the coordinator's local knowledge. Just to be clear, my proposal _does_ keep the back-pressure decision local to the coordinator, that is, there's no communication between nodes in such regard (which I agree would be a much different matter). The difference in my proposal is that we deal with back-pressure on a per-replica basis and in a fine grained way, rather than with coarse grained global memory limits which would end up flooding replicas before being triggered. > Bound the number of in-flight requests at the coordinator > - > > Key: CASSANDRA-9318 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9318 > Project: Cassandra > Issue Type: Improvement >Reporter: Ariel Weisberg >Assignee: Jacek Lewandowski > Fix For: 2.1.x, 2.2.x > > > It's possible to somewhat bound the amount of load accepted into the cluster > by bounding the number of in-flight requests and request bytes. > An implementation might do something like track the number of outstanding > bytes and requests and if it reaches a high watermark disable read on client > connections until it goes back below some low watermark. > Need to make sure that disabling read on the client connection won't > introduce other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
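The high/low watermark scheme from the ticket description can be sketched as follows. This is a minimal illustrative model, not Cassandra code; the class name and thresholds are invented: outstanding request bytes are tracked at the coordinator, client reads are disabled above a high watermark, and re-enabled only once the total drops below a lower watermark (hysteresis avoids flapping at the boundary).

```python
class InflightLimiter:
    """Coordinator-side sketch: bound in-flight request bytes with
    high/low watermarks (illustrative names and defaults)."""
    def __init__(self, high=1_000_000, low=500_000):
        self.high, self.low = high, low
        self.inflight = 0
        self.reads_enabled = True

    def on_request(self, nbytes):
        self.inflight += nbytes
        if self.inflight >= self.high:
            self.reads_enabled = False  # stop reading from client sockets

    def on_response(self, nbytes):
        self.inflight -= nbytes
        if self.inflight <= self.low:
            self.reads_enabled = True   # resume only below the low mark

lim = InflightLimiter(high=100, low=40)
lim.on_request(60)    # 60 in flight: still under the high watermark
lim.on_request(60)    # 120 in flight: reads disabled
lim.on_response(90)   # 30 in flight: back under the low mark, reads resume
```

Sergio's per-replica proposal would keep one such counter per replica instead of a single global one, while the decision still stays local to the coordinator.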
[jira] [Updated] (CASSANDRA-10585) SSTablesPerReadHistogram seems wrong when row cache hit happend
[ https://issues.apache.org/jira/browse/CASSANDRA-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Burmistrov updated CASSANDRA-10585: Attachment: cassandra-10585.patch Description: SSTablePerReadHistogram metric now not considers case when row has been read from row cache. And so, this metric will have big values even almost all requests processed by row cache (and without touching SSTables, of course). So, it seems that correct behavior is to consider that if we read row from row cache then we read zero SSTables by this request. The patch at the attachment. was: SSTablePerReadHistogram metric now not considers case when row has been read from row cache. And so, this metric will have big values even almost all requests processed by row cache (and without touching SSTables, of course). So, it seems that correct behavior is to consider that if we read row from row cache then we read zero SSTables by this request. > SSTablesPerReadHistogram seems wrong when row cache hit happend > --- > > Key: CASSANDRA-10585 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10585 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Ivan Burmistrov >Priority: Minor > Fix For: 2.1.x > > Attachments: cassandra-10585.patch > > > SSTablePerReadHistogram metric now not considers case when row has been read > from row cache. > And so, this metric will have big values even almost all requests processed > by row cache (and without touching SSTables, of course). > So, it seems that correct behavior is to consider that if we read row from > row cache then we read zero SSTables by this request. > The patch at the attachment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-10585) SSTablesPerReadHistogram seems wrong when row cache hit happend
Ivan Burmistrov created CASSANDRA-10585: --- Summary: SSTablesPerReadHistogram seems wrong when row cache hit happend Key: CASSANDRA-10585 URL: https://issues.apache.org/jira/browse/CASSANDRA-10585 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ivan Burmistrov Priority: Minor Fix For: 2.1.x The SSTablePerReadHistogram metric currently does not account for the case where a row is read from the row cache. As a result, the metric can report large values even when almost all requests are served from the row cache (without touching any SSTables). The correct behavior appears to be to count a row-cache read as touching zero SSTables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
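The proposed behavior can be illustrated with a toy metric (not Cassandra's Metrics implementation): a row-cache hit is recorded as a sample of zero sstables touched, rather than being skipped, so the histogram reflects the whole read path instead of only the cache misses.

```python
class SSTablesPerRead:
    """Toy sstables-per-read metric illustrating the proposed fix:
    a row-cache hit contributes a sample of 0 instead of no sample."""
    def __init__(self):
        self.samples = []

    def record_read(self, cache_hit, sstables_touched):
        # Proposed behavior: a cache hit counts as touching 0 sstables.
        self.samples.append(0 if cache_hit else sstables_touched)

m = SSTablesPerRead()
m.record_read(cache_hit=True, sstables_touched=0)   # served from row cache
m.record_read(cache_hit=False, sstables_touched=3)  # touched 3 sstables
mean = sum(m.samples) / len(m.samples)
```

Without the fix, only the miss would be sampled and the metric would report 3 sstables per read even though half the reads never touched disk.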
[jira] [Commented] (CASSANDRA-10444) Create an option to forcibly disable tracing
[ https://issues.apache.org/jira/browse/CASSANDRA-10444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971494#comment-14971494 ] Jonathan Ellis commented on CASSANDRA-10444: Really a subset of CASSANDRA-8303 but I suppose we could special case it for old versions. > Create an option to forcibly disable tracing > > > Key: CASSANDRA-10444 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10444 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Brandon Williams >Priority: Minor > Fix For: 2.1.x > > > Sometimes people will experience dropped TRACE messages. Ostensibly, trace > is disabled on the server and we know it's from some client, somewhere. With > an inability to locate exactly where client code is causing this, it would be > useful to just be able to kill it entirely on the server side. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10446) Run repair with down replicas
[ https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-10446: --- Fix Version/s: 3.x > Run repair with down replicas > - > > Key: CASSANDRA-10446 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10446 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Priority: Minor > Fix For: 3.x > > > We should have an option of running repair when replicas are down. We can > call it -force. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-10584) reads with EACH_QUORUM on keyspace with SimpleTopologyStrategy throw a ClassCastException
Andy Tolbert created CASSANDRA-10584: Summary: reads with EACH_QUORUM on keyspace with SimpleTopologyStrategy throw a ClassCastException Key: CASSANDRA-10584 URL: https://issues.apache.org/jira/browse/CASSANDRA-10584 Project: Cassandra Issue Type: Bug Reporter: Andy Tolbert Priority: Minor I think this may be a regression introduced w/ [CASSANDRA-9602]. Starting with C* 3.0.0-rc2 an error is returned when querying a keyspace with {{SimpleTopologyStrategy}} using EACH_QUORUM CL: {noformat} cqlsh> create keyspace test with replication = {'class': 'SimpleStrategy', 'replication_factor': 1}; cqlsh> create table test.test (k int PRIMARY KEY, i int); cqlsh> consistency EACH_QUORUM; Consistency level set to EACH_QUORUM. cqlsh> select * from test.test; ServerError: {noformat} The exception yielded in the system logs: {noformat} ERROR [SharedPool-Worker-1] 2015-10-23 13:02:15,405 ErrorMessage.java:336 - Unexpected exception during request java.lang.ClassCastException: org.apache.cassandra.locator.SimpleStrategy cannot be cast to org.apache.cassandra.locator.NetworkTopologyStrategy at org.apache.cassandra.db.ConsistencyLevel.filterForEachQuorum(ConsistencyLevel.java:227) ~[main/:na] at org.apache.cassandra.db.ConsistencyLevel.filterForQuery(ConsistencyLevel.java:188) ~[main/:na] at org.apache.cassandra.db.ConsistencyLevel.filterForQuery(ConsistencyLevel.java:180) ~[main/:na] at org.apache.cassandra.service.StorageProxy$RangeIterator.computeNext(StorageProxy.java:1795) ~[main/:na] at org.apache.cassandra.service.StorageProxy$RangeIterator.computeNext(StorageProxy.java:1762) ~[main/:na] at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[main/:na] at com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1149) ~[guava-18.0.jar:na] at org.apache.cassandra.service.StorageProxy$RangeMerger.computeNext(StorageProxy.java:1814) ~[main/:na] at org.apache.cassandra.service.StorageProxy$RangeMerger.computeNext(StorageProxy.java:1799) 
~[main/:na] at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[main/:na] at org.apache.cassandra.service.StorageProxy$RangeCommandIterator.computeNext(StorageProxy.java:1925) ~[main/:na] at org.apache.cassandra.service.StorageProxy$RangeCommandIterator.computeNext(StorageProxy.java:1892) ~[main/:na] at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[main/:na] at org.apache.cassandra.db.partitions.WrappingPartitionIterator.hasNext(WrappingPartitionIterator.java:33) ~[main/:na] at org.apache.cassandra.db.partitions.CountingPartitionIterator.hasNext(CountingPartitionIterator.java:49) ~[main/:na] at org.apache.cassandra.db.partitions.WrappingPartitionIterator.hasNext(WrappingPartitionIterator.java:33) ~[main/:na] at org.apache.cassandra.db.partitions.CountingPartitionIterator.hasNext(CountingPartitionIterator.java:49) ~[main/:na] at org.apache.cassandra.service.pager.AbstractQueryPager$PagerIterator.hasNext(AbstractQueryPager.java:99) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:610) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:371) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:327) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:213) ~[main/:na] at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:205) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:236) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:221) ~[main/:na] at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) ~[main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) 
[main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
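The shape of the defect is an unconditional downcast: {{filterForEachQuorum}} assumes the keyspace's replication strategy is a {{NetworkTopologyStrategy}}, and a {{SimpleStrategy}} keyspace makes the cast blow up. A minimal Python sketch of a type guard (the fallback-to-no-filtering behavior here is a hypothetical illustration, not necessarily the semantics a real fix would choose):

```python
class SimpleStrategy: ...
class NetworkTopologyStrategy: ...

def filter_for_each_quorum(strategy, replicas):
    """Sketch: per-datacenter filtering only makes sense for
    NetworkTopologyStrategy; guard the downcast instead of assuming it."""
    if not isinstance(strategy, NetworkTopologyStrategy):
        # SimpleStrategy has no datacenter layout; nothing to filter.
        # (Hypothetical fallback for illustration only.)
        return replicas
    # ... per-datacenter EACH_QUORUM filtering would go here ...
    return replicas

result = filter_for_each_quorum(SimpleStrategy(), ["replica1", "replica2"])
```

The unguarded version corresponds to the {{ClassCastException}} at {{ConsistencyLevel.filterForEachQuorum}} in the trace above.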
[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/013ce885 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/013ce885 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/013ce885 Branch: refs/heads/cassandra-2.2 Commit: 013ce88512af839800597d1adc52689679a725a3 Parents: a5053fd 34b8d8f Author: Joshua McKenzie Authored: Fri Oct 23 13:58:59 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 13:58:59 2015 -0400 -- pylib/cqlshlib/test/run_cqlsh.py | 2 +- pylib/cqlshlib/test/test_cqlsh_output.py | 10 +- 2 files changed, 6 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/run_cqlsh.py -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/test_cqlsh_output.py --
[03/10] cassandra git commit: Make cqlsh tests work when authentication is configured
Make cqlsh tests work when authentication is configured Patch by stefania; reviewed by aholmberg for CASSANDRA-10544 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34b8d8fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34b8d8fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34b8d8fc Branch: refs/heads/cassandra-3.0 Commit: 34b8d8fcbf528f21ac7869685e33214af381265c Parents: 5a1d376 Author: Stefania Alborghetti Authored: Fri Oct 23 13:58:19 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 13:58:19 2015 -0400 -- pylib/cqlshlib/test/run_cqlsh.py | 2 +- pylib/cqlshlib/test/test_cqlsh_output.py | 10 +- 2 files changed, 6 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/run_cqlsh.py -- diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py index 88b0ca6..cc929e1 100644 --- a/pylib/cqlshlib/test/run_cqlsh.py +++ b/pylib/cqlshlib/test/run_cqlsh.py @@ -27,7 +27,7 @@ import math from time import time from . 
import basecase -DEFAULT_CQLSH_PROMPT = '\ncqlsh(:\S+)?> ' +DEFAULT_CQLSH_PROMPT = os.linesep + '(\S+@)?cqlsh(:\S+)?> ' DEFAULT_CQLSH_TERM = 'xterm' cqlshlog = basecase.cqlshlog http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 64950e2..e3af8e8 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -522,26 +522,26 @@ class TestCqlshOutput(BaseTestCase): def test_prompt(self): with testrun_cqlsh(tty=True, keyspace=None, cqlver=cqlsh.DEFAULT_CQLVER) as c: -self.assertEqual(c.output_header.splitlines()[-1], 'cqlsh> ') +self.assertTrue(c.output_header.splitlines()[-1].endswith('cqlsh> ')) c.send('\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '\ncqlsh> ') +self.assertTrue(output.endswith('cqlsh> ')) cmd = "USE \"%s\";\n" % get_test_keyspace().replace('"', '""') c.send(cmd) output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '%scqlsh:%s> ' % (cmd, get_test_keyspace())) +self.assertTrue(output.endswith('cqlsh:%s> ' % (get_test_keyspace( c.send('use system;\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, 'use system;\ncqlsh:system> ') +self.assertTrue(output.endswith('cqlsh:system> ')) c.send('use NONEXISTENTKEYSPACE;\n') outputlines = c.read_to_next_prompt().splitlines() self.assertEqual(outputlines[0], 'use NONEXISTENTKEYSPACE;') -self.assertEqual(outputlines[2], 'cqlsh:system> ') +self.assertTrue(outputlines[2].endswith('cqlsh:system> ')) midline = ColoredText(outputlines[1]) self.assertEqual(midline.plain(), 'InvalidRequest: code=2200 [Invalid query] message="Keyspace \'nonexistentkeyspace\' does not exist"')
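The one-character-class change in the patch above is the whole fix: the prompt regex gains an optional {{(\S+@)?}} group so it matches both the anonymous prompt and the {{user@cqlsh}} prompt shown when authentication is configured. The behavior can be checked directly:

```python
import os
import re

# Patched pattern from the commit: optional "user@" prefix before "cqlsh".
DEFAULT_CQLSH_PROMPT = os.linesep + r'(\S+@)?cqlsh(:\S+)?> '
# Pre-patch pattern, which required the prompt to start with a bare "cqlsh".
OLD_PROMPT = os.linesep + r'cqlsh(:\S+)?> '

# The new pattern matches all three prompt shapes:
for prompt in ('cqlsh> ', 'cassandra@cqlsh> ', 'cassandra@cqlsh:system> '):
    assert re.search(DEFAULT_CQLSH_PROMPT, os.linesep + prompt)

# The old pattern never matched the authenticated prompt, hanging the tests:
assert not re.search(OLD_PROMPT, os.linesep + 'cassandra@cqlsh> ')
```

The companion {{assertEqual}} → {{endswith}} changes in {{test_cqlsh_output.py}} relax the expected-output checks for the same reason.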
[01/10] cassandra git commit: Make cqlsh tests work when authentication is configured
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 5a1d37648 -> 34b8d8fcb refs/heads/cassandra-2.2 a5053fd94 -> 013ce8851 refs/heads/cassandra-3.0 535c3ac75 -> 1d28a4acf refs/heads/trunk 71d9dba06 -> 87f16ca9a Make cqlsh tests work when authentication is configured Patch by stefania; reviewed by aholmberg for CASSANDRA-10544 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34b8d8fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34b8d8fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34b8d8fc Branch: refs/heads/cassandra-2.1 Commit: 34b8d8fcbf528f21ac7869685e33214af381265c Parents: 5a1d376 Author: Stefania Alborghetti Authored: Fri Oct 23 13:58:19 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 13:58:19 2015 -0400 -- pylib/cqlshlib/test/run_cqlsh.py | 2 +- pylib/cqlshlib/test/test_cqlsh_output.py | 10 +- 2 files changed, 6 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/run_cqlsh.py -- diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py index 88b0ca6..cc929e1 100644 --- a/pylib/cqlshlib/test/run_cqlsh.py +++ b/pylib/cqlshlib/test/run_cqlsh.py @@ -27,7 +27,7 @@ import math from time import time from . 
import basecase -DEFAULT_CQLSH_PROMPT = '\ncqlsh(:\S+)?> ' +DEFAULT_CQLSH_PROMPT = os.linesep + '(\S+@)?cqlsh(:\S+)?> ' DEFAULT_CQLSH_TERM = 'xterm' cqlshlog = basecase.cqlshlog http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 64950e2..e3af8e8 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -522,26 +522,26 @@ class TestCqlshOutput(BaseTestCase): def test_prompt(self): with testrun_cqlsh(tty=True, keyspace=None, cqlver=cqlsh.DEFAULT_CQLVER) as c: -self.assertEqual(c.output_header.splitlines()[-1], 'cqlsh> ') +self.assertTrue(c.output_header.splitlines()[-1].endswith('cqlsh> ')) c.send('\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '\ncqlsh> ') +self.assertTrue(output.endswith('cqlsh> ')) cmd = "USE \"%s\";\n" % get_test_keyspace().replace('"', '""') c.send(cmd) output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '%scqlsh:%s> ' % (cmd, get_test_keyspace())) +self.assertTrue(output.endswith('cqlsh:%s> ' % (get_test_keyspace( c.send('use system;\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, 'use system;\ncqlsh:system> ') +self.assertTrue(output.endswith('cqlsh:system> ')) c.send('use NONEXISTENTKEYSPACE;\n') outputlines = c.read_to_next_prompt().splitlines() self.assertEqual(outputlines[0], 'use NONEXISTENTKEYSPACE;') -self.assertEqual(outputlines[2], 'cqlsh:system> ') +self.assertTrue(outputlines[2].endswith('cqlsh:system> ')) midline = ColoredText(outputlines[1]) self.assertEqual(midline.plain(), 'InvalidRequest: code=2200 [Invalid query] message="Keyspace \'nonexistentkeyspace\' does not exist"')
[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d28a4ac Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d28a4ac Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d28a4ac Branch: refs/heads/trunk Commit: 1d28a4acf816ee5fdd866627c6f23f3fc5f98a98 Parents: 535c3ac 013ce88 Author: Joshua McKenzie Authored: Fri Oct 23 13:59:14 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 13:59:14 2015 -0400 -- pylib/cqlshlib/test/run_cqlsh.py | 2 +- pylib/cqlshlib/test/test_cqlsh_output.py | 10 +- 2 files changed, 6 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d28a4ac/pylib/cqlshlib/test/test_cqlsh_output.py --
[04/10] cassandra git commit: Make cqlsh tests work when authentication is configured
Make cqlsh tests work when authentication is configured Patch by stefania; reviewed by aholmberg for CASSANDRA-10544 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34b8d8fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34b8d8fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34b8d8fc Branch: refs/heads/trunk Commit: 34b8d8fcbf528f21ac7869685e33214af381265c Parents: 5a1d376 Author: Stefania Alborghetti Authored: Fri Oct 23 13:58:19 2015 -0400 Committer: Joshua McKenzie Committed: Fri Oct 23 13:58:19 2015 -0400 -- pylib/cqlshlib/test/run_cqlsh.py | 2 +- pylib/cqlshlib/test/test_cqlsh_output.py | 10 +- 2 files changed, 6 insertions(+), 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/run_cqlsh.py -- diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py index 88b0ca6..cc929e1 100644 --- a/pylib/cqlshlib/test/run_cqlsh.py +++ b/pylib/cqlshlib/test/run_cqlsh.py @@ -27,7 +27,7 @@ import math from time import time from . 
import basecase -DEFAULT_CQLSH_PROMPT = '\ncqlsh(:\S+)?> ' +DEFAULT_CQLSH_PROMPT = os.linesep + '(\S+@)?cqlsh(:\S+)?> ' DEFAULT_CQLSH_TERM = 'xterm' cqlshlog = basecase.cqlshlog http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/test_cqlsh_output.py -- diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py index 64950e2..e3af8e8 100644 --- a/pylib/cqlshlib/test/test_cqlsh_output.py +++ b/pylib/cqlshlib/test/test_cqlsh_output.py @@ -522,26 +522,26 @@ class TestCqlshOutput(BaseTestCase): def test_prompt(self): with testrun_cqlsh(tty=True, keyspace=None, cqlver=cqlsh.DEFAULT_CQLVER) as c: -self.assertEqual(c.output_header.splitlines()[-1], 'cqlsh> ') +self.assertTrue(c.output_header.splitlines()[-1].endswith('cqlsh> ')) c.send('\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '\ncqlsh> ') +self.assertTrue(output.endswith('cqlsh> ')) cmd = "USE \"%s\";\n" % get_test_keyspace().replace('"', '""') c.send(cmd) output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, '%scqlsh:%s> ' % (cmd, get_test_keyspace())) +self.assertTrue(output.endswith('cqlsh:%s> ' % (get_test_keyspace( c.send('use system;\n') output = c.read_to_next_prompt().replace('\r\n', '\n') -self.assertEqual(output, 'use system;\ncqlsh:system> ') +self.assertTrue(output.endswith('cqlsh:system> ')) c.send('use NONEXISTENTKEYSPACE;\n') outputlines = c.read_to_next_prompt().splitlines() self.assertEqual(outputlines[0], 'use NONEXISTENTKEYSPACE;') -self.assertEqual(outputlines[2], 'cqlsh:system> ') +self.assertTrue(outputlines[2].endswith('cqlsh:system> ')) midline = ColoredText(outputlines[1]) self.assertEqual(midline.plain(), 'InvalidRequest: code=2200 [Invalid query] message="Keyspace \'nonexistentkeyspace\' does not exist"')
[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d28a4ac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d28a4ac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d28a4ac

Branch: refs/heads/cassandra-3.0
Commit: 1d28a4acf816ee5fdd866627c6f23f3fc5f98a98
Parents: 535c3ac 013ce88
Author: Joshua McKenzie
Authored: Fri Oct 23 13:59:14 2015 -0400
Committer: Joshua McKenzie
Committed: Fri Oct 23 13:59:14 2015 -0400

--
 pylib/cqlshlib/test/run_cqlsh.py         |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d28a4ac/pylib/cqlshlib/test/test_cqlsh_output.py
--
[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/013ce885
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/013ce885
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/013ce885

Branch: refs/heads/trunk
Commit: 013ce88512af839800597d1adc52689679a725a3
Parents: a5053fd 34b8d8f
Author: Joshua McKenzie
Authored: Fri Oct 23 13:58:59 2015 -0400
Committer: Joshua McKenzie
Committed: Fri Oct 23 13:58:59 2015 -0400

--
 pylib/cqlshlib/test/run_cqlsh.py         |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/run_cqlsh.py
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/test_cqlsh_output.py
--
[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/013ce885
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/013ce885
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/013ce885

Branch: refs/heads/cassandra-3.0
Commit: 013ce88512af839800597d1adc52689679a725a3
Parents: a5053fd 34b8d8f
Author: Joshua McKenzie
Authored: Fri Oct 23 13:58:59 2015 -0400
Committer: Joshua McKenzie
Committed: Fri Oct 23 13:58:59 2015 -0400

--
 pylib/cqlshlib/test/run_cqlsh.py         |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/run_cqlsh.py
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/013ce885/pylib/cqlshlib/test/test_cqlsh_output.py
--
[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87f16ca9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87f16ca9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87f16ca9

Branch: refs/heads/trunk
Commit: 87f16ca9ac52d8042563ce5198b776f355236617
Parents: 71d9dba 1d28a4a
Author: Joshua McKenzie
Authored: Fri Oct 23 13:59:23 2015 -0400
Committer: Joshua McKenzie
Committed: Fri Oct 23 13:59:23 2015 -0400

--
 pylib/cqlshlib/test/run_cqlsh.py         |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
--
[02/10] cassandra git commit: Make cqlsh tests work when authentication is configured
Make cqlsh tests work when authentication is configured

Patch by stefania; reviewed by aholmberg for CASSANDRA-10544

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/34b8d8fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/34b8d8fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/34b8d8fc

Branch: refs/heads/cassandra-2.2
Commit: 34b8d8fcbf528f21ac7869685e33214af381265c
Parents: 5a1d376
Author: Stefania Alborghetti
Authored: Fri Oct 23 13:58:19 2015 -0400
Committer: Joshua McKenzie
Committed: Fri Oct 23 13:58:19 2015 -0400

--
 pylib/cqlshlib/test/run_cqlsh.py         |  2 +-
 pylib/cqlshlib/test/test_cqlsh_output.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/run_cqlsh.py
--
diff --git a/pylib/cqlshlib/test/run_cqlsh.py b/pylib/cqlshlib/test/run_cqlsh.py
index 88b0ca6..cc929e1 100644
--- a/pylib/cqlshlib/test/run_cqlsh.py
+++ b/pylib/cqlshlib/test/run_cqlsh.py
@@ -27,7 +27,7 @@ import math
 from time import time
 from . import basecase
 
-DEFAULT_CQLSH_PROMPT = '\ncqlsh(:\S+)?> '
+DEFAULT_CQLSH_PROMPT = os.linesep + '(\S+@)?cqlsh(:\S+)?> '
 DEFAULT_CQLSH_TERM = 'xterm'
 
 cqlshlog = basecase.cqlshlog

http://git-wip-us.apache.org/repos/asf/cassandra/blob/34b8d8fc/pylib/cqlshlib/test/test_cqlsh_output.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_output.py b/pylib/cqlshlib/test/test_cqlsh_output.py
index 64950e2..e3af8e8 100644
--- a/pylib/cqlshlib/test/test_cqlsh_output.py
+++ b/pylib/cqlshlib/test/test_cqlsh_output.py
@@ -522,26 +522,26 @@ class TestCqlshOutput(BaseTestCase):
     def test_prompt(self):
         with testrun_cqlsh(tty=True, keyspace=None, cqlver=cqlsh.DEFAULT_CQLVER) as c:
-            self.assertEqual(c.output_header.splitlines()[-1], 'cqlsh> ')
+            self.assertTrue(c.output_header.splitlines()[-1].endswith('cqlsh> '))
 
             c.send('\n')
             output = c.read_to_next_prompt().replace('\r\n', '\n')
-            self.assertEqual(output, '\ncqlsh> ')
+            self.assertTrue(output.endswith('cqlsh> '))
 
             cmd = "USE \"%s\";\n" % get_test_keyspace().replace('"', '""')
             c.send(cmd)
             output = c.read_to_next_prompt().replace('\r\n', '\n')
-            self.assertEqual(output, '%scqlsh:%s> ' % (cmd, get_test_keyspace()))
+            self.assertTrue(output.endswith('cqlsh:%s> ' % (get_test_keyspace())))
 
             c.send('use system;\n')
             output = c.read_to_next_prompt().replace('\r\n', '\n')
-            self.assertEqual(output, 'use system;\ncqlsh:system> ')
+            self.assertTrue(output.endswith('cqlsh:system> '))
 
             c.send('use NONEXISTENTKEYSPACE;\n')
             outputlines = c.read_to_next_prompt().splitlines()
             self.assertEqual(outputlines[0], 'use NONEXISTENTKEYSPACE;')
-            self.assertEqual(outputlines[2], 'cqlsh:system> ')
+            self.assertTrue(outputlines[2].endswith('cqlsh:system> '))
             midline = ColoredText(outputlines[1])
             self.assertEqual(midline.plain(),
                              'InvalidRequest: code=2200 [Invalid query] message="Keyspace \'nonexistentkeyspace\' does not exist"')
[jira] [Updated] (CASSANDRA-10540) RangeAwareCompaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joshua McKenzie updated CASSANDRA-10540:
----------------------------------------
    Reviewer: Carl Yeksigian

> RangeAwareCompaction
> --------------------
>
> Key: CASSANDRA-10540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10540
> Project: Cassandra
> Issue Type: New Feature
> Reporter: Marcus Eriksson
> Assignee: Marcus Eriksson
> Fix For: 3.2
>
> Broken out from CASSANDRA-6696, we should split sstables based on ranges during compaction.
> Requirements:
> * don't create tiny sstables - keep them bunched together until a single vnode is big enough (configurable how big that is)
> * make it possible to run existing compaction strategies on the per-range sstables
> We should probably add a global compaction strategy parameter that states whether this should be enabled or not.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
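The "keep small ranges bunched until one is big enough" requirement can be sketched independently of Cassandra. Everything below (the function name, the range labels, the byte sizes) is illustrative, not the proposed implementation:

```python
# Illustrative sketch: group consecutive vnode ranges into buckets, keeping
# them together until a bucket's combined data exceeds a configurable size,
# so compaction never produces one tiny sstable per vnode.
def bucket_ranges(range_sizes, min_bucket_bytes):
    buckets, current, current_size = [], [], 0
    for rng, size in range_sizes:
        current.append(rng)
        current_size += size
        if current_size >= min_bucket_bytes:
            buckets.append(current)
            current, current_size = [], 0
    if current:
        buckets.append(current)  # leftover small ranges stay bunched together
    return buckets

b = bucket_ranges([('r1', 40), ('r2', 70), ('r3', 120), ('r4', 10)], 100)
assert b == [['r1', 'r2'], ['r3'], ['r4']]
```

Each resulting bucket could then be handed to an ordinary per-range compaction strategy instance, which is the second requirement above.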
[jira] [Created] (CASSANDRA-10583) After bulk loading CQL query on timestamp column returns wrong result
Kai Wang created CASSANDRA-10583:
------------------------------------

             Summary: After bulk loading CQL query on timestamp column returns wrong result
                 Key: CASSANDRA-10583
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10583
             Project: Cassandra
          Issue Type: Bug
          Components: Core
         Environment: Datastax Community Edition 2.1.10, Windows 2008 R2, Java x64 1.8.0_60
            Reporter: Kai Wang

I have this table:

CREATE TABLE test (
    tag text,
    group int,
    timestamp timestamp,
    value double,
    PRIMARY KEY (tag, group, timestamp)
) WITH CLUSTERING ORDER BY (group ASC, timestamp DESC)

First I used CQLSSTableWriter to bulk load a bunch of sstables. Then I ran this query:

cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp = '2004-12-15 16:00:00-0500';

 tag  | group | timestamp                | value
------+-------+--------------------------+-------
 MSFT |     1 | 2004-12-15 21:00:00+0000 | 27.11
 MSFT |     1 | 2004-12-16 21:00:00+0000 | 27.16
 MSFT |     1 | 2004-12-17 21:00:00+0000 | 26.96
 MSFT |     1 | 2004-12-20 21:00:00+0000 | 26.95
 MSFT |     1 | 2004-12-21 21:00:00+0000 | 27.07
 MSFT |     1 | 2004-12-22 21:00:00+0000 | 26.98
 MSFT |     1 | 2004-12-23 21:00:00+0000 | 27.01
 MSFT |     1 | 2004-12-27 21:00:00+0000 | 26.85
 MSFT |     1 | 2004-12-28 21:00:00+0000 | 26.95
 MSFT |     1 | 2004-12-29 21:00:00+0000 |  26.9
 MSFT |     1 | 2004-12-30 21:00:00+0000 | 26.76

(11 rows)

The result is obviously wrong. If I run this query:

cqlsh> select * from test where tag = 'MSFT' and group = 1 and timestamp = '2004-12-16 16:00:00-0500';

 tag | group | timestamp | value
-----+-------+-----------+-------

(0 rows)

In DevCenter I tried to create a similar table and insert a few rows, but couldn't reproduce this. This may have something to do with the bulk loading process. But still, the fact that cqlsh returns data that doesn't match the query is concerning.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
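For reference, the literal in the first WHERE clause and the first row cqlsh printed denote the same instant once cqlsh's UTC display is taken into account, so an exact-match query should have returned exactly that one row. A quick sanity check of the conversion in plain Python, independent of Cassandra:

```python
from datetime import datetime, timedelta, timezone

# The literal used in the WHERE clause, with its -05:00 offset.
queried = datetime(2004, 12, 15, 16, 0, tzinfo=timezone(timedelta(hours=-5)))
# The first row cqlsh printed, shown in UTC.
displayed = datetime(2004, 12, 15, 21, 0, tzinfo=timezone.utc)

# Same instant: an exact equality predicate should match one row,
# not the eleven-row range the reporter observed.
assert queried == displayed
```

That the result set instead looks like everything from 2004-12-15 onward suggests the equality predicate behaved like a range bound, which is consistent with the reporter's suspicion about the bulk-loaded sstables.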
[jira] [Comment Edited] (CASSANDRA-10515) Commit logs back up with move to 2.1.10
[ https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971417#comment-14971417 ] Jeff Griffith edited comment on CASSANDRA-10515 at 10/23/15 5:34 PM: - [~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one i uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch. https://issues.apache.org/jira/secure/attachment/12768344/MultinodeCommitLogGrowth-node1.tar.gz This happened twice during this period and here is the first one. Note the pid changed because our monitoring detected and restarted the node. 
{code}
tpstats_20151023-00:16:02_pid_37996_postpend_0.txt
tpstats_20151023-00:18:08_pid_37996_postpend_1.txt
tpstats_20151023-00:20:14_pid_37996_postpend_0.txt
tpstats_20151023-00:22:19_pid_37996_postpend_3.txt
tpstats_20151023-00:24:25_pid_37996_postpend_133.txt
tpstats_20151023-00:26:30_pid_37996_postpend_809.txt
tpstats_20151023-00:28:35_pid_37996_postpend_1596.txt
tpstats_20151023-00:30:39_pid_37996_postpend_2258.txt
tpstats_20151023-00:32:42_pid_37996_postpend_3095.txt
tpstats_20151023-00:34:45_pid_37996_postpend_3822.txt
tpstats_20151023-00:36:48_pid_37996_postpend_4593.txt
tpstats_20151023-00:38:52_pid_37996_postpend_5363.txt
tpstats_20151023-00:40:55_pid_37996_postpend_6212.txt
tpstats_20151023-00:42:59_pid_37996_postpend_7137.txt
tpstats_20151023-00:45:03_pid_37996_postpend_8559.txt
tpstats_20151023-00:47:06_pid_37996_postpend_9060.txt
tpstats_20151023-00:49:09_pid_37996_postpend_9060.txt
tpstats_20151023-00:51:11_pid_48196_postpend_0.txt
tpstats_20151023-00:53:13_pid_48196_postpend_0.txt
tpstats_20151023-00:55:16_pid_48196_postpend_0.txt
tpstats_20151023-00:57:21_pid_48196_postpend_0.txt
{code}

was (Author: jeffery.griffith):
[~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one I uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch.
https://issues.apache.org/jira/secure/attachment/12768344/MultinodeCommitLogGrowth-node1.tar.gz

This happened twice during this period and here is the first one. Note the pid changed because our monitoring detected and restarted the node.

{code}
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:16 tpstats_20151023-00:16:02_pid_37996_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:18 tpstats_20151023-00:18:08_pid_37996_postpend_1.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:20 tpstats_20151023-00:20:14_pid_37996_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:22 tpstats_20151023-00:22:19_pid_37996_postpend_3.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:24 tpstats_20151023-00:24:25_pid_37996_postpend_133.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:26 tpstats_20151023-00:26:30_pid_37996_postpend_809.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:28 tpstats_20151023-00:28:35_pid_37996_postpend_1596.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:30 tpstats_20151023-00:30:39_pid_37996_postpend_2258.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:32 tpstats_20151023-00:32:42_pid_37996_postpend_3095.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:34 tpstats_20151023-00:34:45_pid_37996_postpend_3822.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:36 tpstats_20151023-00:36:48_pid_37996_postpend_4593.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:38 tpstats_20151023-00:38:52_pid_37996_postpend_5363.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:40 tpstats_20151023-00:40:55_pid_37996_postpend_6212.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:43 tpstats_20151023-00:42:59_pid_37996_postpend_7137.tx
[jira] [Comment Edited] (CASSANDRA-10515) Commit logs back up with move to 2.1.10
[ https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971417#comment-14971417 ] Jeff Griffith edited comment on CASSANDRA-10515 at 10/23/15 5:33 PM: - [~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one i uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch. https://issues.apache.org/jira/secure/attachment/12768344/MultinodeCommitLogGrowth-node1.tar.gz This happened twice during this period and here is the first one. Note the pid changed because our monitoring detected and restarted the node. 
{code}
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:16 tpstats_20151023-00:16:02_pid_37996_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:18 tpstats_20151023-00:18:08_pid_37996_postpend_1.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:20 tpstats_20151023-00:20:14_pid_37996_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:22 tpstats_20151023-00:22:19_pid_37996_postpend_3.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:24 tpstats_20151023-00:24:25_pid_37996_postpend_133.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:26 tpstats_20151023-00:26:30_pid_37996_postpend_809.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:28 tpstats_20151023-00:28:35_pid_37996_postpend_1596.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:30 tpstats_20151023-00:30:39_pid_37996_postpend_2258.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:32 tpstats_20151023-00:32:42_pid_37996_postpend_3095.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:34 tpstats_20151023-00:34:45_pid_37996_postpend_3822.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:36 tpstats_20151023-00:36:48_pid_37996_postpend_4593.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:38 tpstats_20151023-00:38:52_pid_37996_postpend_5363.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:40 tpstats_20151023-00:40:55_pid_37996_postpend_6212.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:43 tpstats_20151023-00:42:59_pid_37996_postpend_7137.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:45 tpstats_20151023-00:45:03_pid_37996_postpend_8559.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2002 Oct 22 20:47 tpstats_20151023-00:47:06_pid_37996_postpend_9060.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2002 Oct 22 20:49 tpstats_20151023-00:49:09_pid_37996_postpend_9060.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2002 Oct 22 20:51 tpstats_20151023-00:51:11_pid_48196_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2002 Oct 22 20:53 tpstats_20151023-00:53:13_pid_48196_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:55 tpstats_20151023-00:55:16_pid_48196_postpend_0.txt
-rw-r--r-- 1 jgriffith Y\Domain Users 2180 Oct 22 20:57 tpstats_20151023-00:57:21_pid_48196_postpend_0.txt
{code}

was (Author: jeffery.griffith):
[~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one I uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch.

https://issues.apache.org/jira/secure/attachment/12768344/MultinodeCommitLogGrowth-node1.tar.gz

> Commit logs back up with move to 2.1.10
> ---------------------------------------
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
> Reporter: Jeff Griffith
> Assignee: Branimir Lambov
> Priority: Critical
> Labels:
[jira] [Comment Edited] (CASSANDRA-10140) Enable GC logging by default
[ https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971405#comment-14971405 ] Ariel Weisberg edited comment on CASSANDRA-10140 at 10/23/15 5:30 PM: -- This doesn't work with the debian package. It doesn't break anything, but the gc log is not in /var/log/cassandra along with system.log and debug.log. was (Author: aweisberg): This doesn't work with the debian package. It doesn't break anything, but the gc log is not in /var/log along with system.log and debug.log. > Enable GC logging by default > > > Key: CASSANDRA-10140 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10140 > Project: Cassandra > Issue Type: Improvement > Components: Config >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Fix For: 2.2.x, 3.0.x > > Attachments: CASSANDRA-10140-2-2.txt, CASSANDRA-10140-v2.txt, > CASSANDRA-10140.txt, cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt > > > Overhead for the gc logging is very small (with cycling logs in 7+) and it > provides a ton of useful information. This will open up more for C* > diagnostic tools to provide feedback as well without requiring restarts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10515) Commit logs back up with move to 2.1.10
[ https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971417#comment-14971417 ] Jeff Griffith edited comment on CASSANDRA-10515 at 10/23/15 5:31 PM: - [~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one i uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch. https://issues.apache.org/jira/secure/attachment/12768344/MultinodeCommitLogGrowth-node1.tar.gz was (Author: jeffery.griffith): [~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one i uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch. 
> Commit logs back up with move to 2.1.10
> ---------------------------------------
>
> Key: CASSANDRA-10515
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10515
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: redhat 6.5, cassandra 2.1.10
> Reporter: Jeff Griffith
> Assignee: Branimir Lambov
> Priority: Critical
> Labels: commitlog, triage
> Attachments: C5commitLogIncrease.jpg, CommitLogProblem.jpg, CommitLogSize.jpg, MultinodeCommitLogGrowth-node1.tar.gz, RUN3tpstats.jpg, cassandra.yaml, cfstats-clean.txt, stacktrace.txt, system.log.clean
>
> After upgrading from cassandra 2.0.x to 2.1.10, we began seeing problems where some nodes break the 12G commit log max we configured and go as high as 65G or more before it restarts. Once it reaches the state of more than 12G commit log files, "nodetool compactionstats" hangs. Eventually C* restarts without errors (not sure yet whether it is crashing but I'm checking into it) and the cleanup occurs and the commit logs shrink back down again. Here is the nodetool compactionstats immediately after restart.
> {code}
> jgriffith@prod1xc1.c2.bf1:~$ ndc
> pending tasks: 2185
>    compaction type   keyspace   table     completed          total   unit   progress
>         Compaction   SyncCore   *cf1*   61251208033   170643574558  bytes     35.89%
>         Compaction   SyncCore   *cf2*   19262483904    19266079916  bytes     99.98%
>         Compaction   SyncCore   *cf3*    6592197093     6592316682  bytes    100.00%
>         Compaction   SyncCore   *cf4*    3411039555     3411039557  bytes    100.00%
>         Compaction   SyncCore   *cf5*    2879241009     2879487621  bytes     99.99%
>         Compaction   SyncCore   *cf6*   21252493623    21252635196  bytes    100.00%
>         Compaction   SyncCore   *cf7*   81009853587    81009854438  bytes    100.00%
>         Compaction   SyncCore   *cf8*    3005734580     3005768582  bytes    100.00%
> Active compaction remaining time : n/a
> {code}
> I was also doing periodic "nodetool tpstats" which were working but not being logged in system.log on the StatusLogger thread until after the compaction started working again.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10515) Commit logs back up with move to 2.1.10
[ https://issues.apache.org/jira/browse/CASSANDRA-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Griffith updated CASSANDRA-10515: -- Attachment: MultinodeCommitLogGrowth-node1.tar.gz [~krummas] [~tjake] Here is a separate instance of commit logs breaking our 12G setting but with different behavior. I have captured the whole thing with thread dumps and tpstats every two minutes. I've embedded pending numbers in the filenames for your convenience to make it easy to see where the backup starts. *-node1.tar.gz is the only one i uploaded since the files were so large, but note in the Dashboard.jpg file that all three nodes break the limit at about the same time. I can upload the others if it is useful. This case seems different from the previous case where there were lots of L0 files causing thread blocking, but even here it seems like the MemtablePostFlush is stopping on a countdownlatch. > Commit logs back up with move to 2.1.10 > --- > > Key: CASSANDRA-10515 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10515 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: redhat 6.5, cassandra 2.1.10 >Reporter: Jeff Griffith >Assignee: Branimir Lambov >Priority: Critical > Labels: commitlog, triage > Attachments: C5commitLogIncrease.jpg, CommitLogProblem.jpg, > CommitLogSize.jpg, MultinodeCommitLogGrowth-node1.tar.gz, RUN3tpstats.jpg, > cassandra.yaml, cfstats-clean.txt, stacktrace.txt, system.log.clean > > > After upgrading from cassandra 2.0.x to 2.1.10, we began seeing problems > where some nodes break the 12G commit log max we configured and go as high as > 65G or more before it restarts. Once it reaches the state of more than 12G > commit log files, "nodetool compactionstats" hangs. Eventually C* restarts > without errors (not sure yet whether it is crashing but I'm checking into it) > and the cleanup occurs and the commit logs shrink back down again. 
> Here is the nodetool compactionstats immediately after restart.
> {code}
> jgriffith@prod1xc1.c2.bf1:~$ ndc
> pending tasks: 2185
>    compaction type   keyspace   table     completed          total   unit   progress
>         Compaction   SyncCore   *cf1*   61251208033   170643574558  bytes     35.89%
>         Compaction   SyncCore   *cf2*   19262483904    19266079916  bytes     99.98%
>         Compaction   SyncCore   *cf3*    6592197093     6592316682  bytes    100.00%
>         Compaction   SyncCore   *cf4*    3411039555     3411039557  bytes    100.00%
>         Compaction   SyncCore   *cf5*    2879241009     2879487621  bytes     99.99%
>         Compaction   SyncCore   *cf6*   21252493623    21252635196  bytes    100.00%
>         Compaction   SyncCore   *cf7*   81009853587    81009854438  bytes    100.00%
>         Compaction   SyncCore   *cf8*    3005734580     3005768582  bytes    100.00%
> Active compaction remaining time : n/a
> {code}
> I was also doing periodic "nodetool tpstats" which were working but not being logged in system.log on the StatusLogger thread until after the compaction started working again.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10140) Enable GC logging by default
[ https://issues.apache.org/jira/browse/CASSANDRA-10140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971405#comment-14971405 ] Ariel Weisberg commented on CASSANDRA-10140: This doesn't work with the debian package. It doesn't break anything, but the gc log is not in /var/log along with system.log and debug.log. > Enable GC logging by default > > > Key: CASSANDRA-10140 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10140 > Project: Cassandra > Issue Type: Improvement > Components: Config >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Fix For: 2.2.x, 3.0.x > > Attachments: CASSANDRA-10140-2-2.txt, CASSANDRA-10140-v2.txt, > CASSANDRA-10140.txt, cassandra-2.2-10140-v2.txt, cassandra-2.2-10140-v3.txt > > > Overhead for the gc logging is very small (with cycling logs in 7+) and it > provides a ton of useful information. This will open up more for C* > diagnostic tools to provide feedback as well without requiring restarts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9304) COPY TO improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971400#comment-14971400 ] Tyler Hobbs commented on CASSANDRA-9304: Interesting. The current formatting call stack is very deep and expensive, which is okay for normal cqlsh usage, but could definitely be the bottleneck for {{COPY TO}}. I bet if we pre-fetched the formatting functions for each expected type and stored them in a local list, we could improve those numbers dramatically. I can open another ticket for that when I review your recent changes later today. > COPY TO improvements > > > Key: CASSANDRA-9304 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9304 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Jonathan Ellis >Assignee: Stefania >Priority: Minor > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > > COPY FROM has gotten a lot of love. COPY TO not so much. One obvious > improvement could be to parallelize reading and writing (write one page of > data while fetching the next). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
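The optimization Tyler suggests — resolving each column's formatting function once per {{COPY TO}} run instead of walking a deep dispatch stack per value — can be sketched outside cqlsh. All names below ({{FORMATTERS}}, {{copy_to_rows}}) are invented for illustration, not cqlsh's actual API:

```python
# Hypothetical sketch: pre-fetch one formatter per column into a local list,
# hoisting type dispatch out of the per-row hot loop.
def fmt_int(v):
    return str(v)

def fmt_text(v):
    return '"%s"' % v

FORMATTERS = {int: fmt_int, str: fmt_text}

def copy_to_rows(rows, column_types):
    # One dictionary lookup per column, done once, before the loop.
    fmts = [FORMATTERS[t] for t in column_types]
    for row in rows:
        yield ','.join(f(v) for f, v in zip(fmts, row))

lines = list(copy_to_rows([(1, 'a'), (2, 'b')], [int, str]))
assert lines == ['1,"a"', '2,"b"']
```

Since an export touches every value of every row, moving the type dispatch out of the inner loop is the kind of change that tends to pay off even when each saved call is cheap.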
[jira] [Created] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name
Anubhav Kale created CASSANDRA-10582:
----------------------------------------

             Summary: CorruptSSTableException should print the SS Table Name
                 Key: CASSANDRA-10582
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
             Project: Cassandra
          Issue Type: Bug
          Components: Core
         Environment: Azure
            Reporter: Anubhav Kale
            Priority: Minor
             Fix For: 2.1.9

We should print the SS Table name that's being reported as corrupt to help with quick recovery.

INFO 16:32:15 Opening /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214 (23832772 bytes)
INFO 16:32:15 Opening /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398 (149675 bytes)
INFO 16:32:15 Opening /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707 (18270 bytes)
INFO 16:32:15 Opening /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656 (814588 bytes)
ERROR 16:32:15 Exiting forcefully due to file system exception on startup, disk failure policy "stop"
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
        at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:131) ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
        at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85) ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
        at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79) ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
        at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72) ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
        at

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
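The requested change amounts to attaching the offending file's path to the corruption error so the log line identifies the bad SSTable directly. A language-neutral sketch of that pattern (in Python rather than Cassandra's Java, with invented names throughout):

```python
# Illustrative only: wrap a low-level EOF in an error that carries the
# sstable path, so operators don't have to correlate "Opening ..." log
# lines to find the corrupt file.
class CorruptSSTableError(Exception):
    def __init__(self, path, cause):
        super().__init__('corrupt sstable %s: %s' % (path, cause))
        self.path = path

def read_compression_metadata(path, read):
    try:
        return read(path)
    except EOFError as e:
        raise CorruptSSTableError(path, e)

def truncated_read(path):
    raise EOFError('unexpected end of file')

try:
    read_compression_metadata('exchangecf-udsuserhourlysnapshot-ka-13656', truncated_read)
except CorruptSSTableError as e:
    msg = str(e)

assert 'ka-13656' in msg
```

With the path in the message, the "disk failure policy" log line above would name the file to delete or re-stream instead of leaving it to guesswork.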
[jira] [Commented] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971349#comment-14971349 ]

Tyler Hobbs commented on CASSANDRA-9813:
----------------------------------------

+1 from me, that's my preferred solution as well.

> cqlsh column header can be incorrect when no rows are returned
> --------------------------------------------------------------
>
> Key: CASSANDRA-9813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9813
> Project: Cassandra
> Issue Type: Bug
> Reporter: Aleksey Yeschenko
> Labels: cqlsh
> Fix For: 3.x, 2.1.x, 2.2.x
>
> Attachments: Test-for-9813.txt
>
> Upon migration, we internally create a pair of surrogate clustering/regular columns for compact static tables. These shouldn't be exposed to the user. That is, for the table
> {code}
> CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE;
> {code}
> {{SELECT * FROM bar}} should not be returning this result set:
> {code}
> cqlsh:test> select * from bar;
>
>  c | column1 | k | value
> ---+---------+---+-------
>
> (0 rows)
> {code}
> Should only contain the defined {{c}} and {{k}} columns.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation, remove in 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971348#comment-14971348 ] Jim Witschey commented on CASSANDRA-10581: -- This should also be looked at and updated to reflect the new behavior: https://github.com/apache/cassandra/blob/0d2ec11c7e0abfb84d872289af6d3ac386cf381f/conf/cassandra-env.sh#L311 > Update cassandra.yaml comments to reflect memory_allocator deprecation, > remove in 3.0 > - > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Robert Stupp > Fix For: 2.2.x, 3.0.0 > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation, remove in 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10581: -- Reviewer: Jim Witschey > Update cassandra.yaml comments to reflect memory_allocator deprecation, > remove in 3.0 > - > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Robert Stupp > Fix For: 2.2.x, 3.0.0 > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation, remove in 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Witschey updated CASSANDRA-10581: - Assignee: Robert Stupp > Update cassandra.yaml comments to reflect memory_allocator deprecation, > remove in 3.0 > - > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey >Assignee: Robert Stupp > Fix For: 2.2.x, 3.0.0 > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10441) Move stress tool into it's own repository and manage dependency on Cassandra externally
[ https://issues.apache.org/jira/browse/CASSANDRA-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971327#comment-14971327 ] T Jake Luciani commented on CASSANDRA-10441: [~nitsanw] We also are happy to incorporate any work you've done (I know you have a fork) We can also perhaps make Stress it's own artifact so it's available to use on snapshot builds? > Move stress tool into it's own repository and manage dependency on Cassandra > externally > --- > > Key: CASSANDRA-10441 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10441 > Project: Cassandra > Issue Type: Wish > Components: Tools >Reporter: Nitsan Wakart >Priority: Minor > > This will: > 1. Allow distinct release/maintenance/contribution cycles > 2. Prevent accidental dependencies from Cassandra into the stress tool > 3. Isolate performance changes in Cassandra from changes to stress tool > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10441) Move stress tool into it's own repository and manage dependency on Cassandra externally
[ https://issues.apache.org/jira/browse/CASSANDRA-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971327#comment-14971327 ] T Jake Luciani edited comment on CASSANDRA-10441 at 10/23/15 4:38 PM: -- [~nitsanw] We also are happy to incorporate any work you've done (I know you have a fork) We can also perhaps make Stress its own artifact so it's available to use on snapshot builds? was (Author: tjake): [~nitsanw] We also are happy to incorporate any work you've done (I know you have a fork) We can also perhaps make Stress it's own artifact so it's available to use on snapshot builds? > Move stress tool into it's own repository and manage dependency on Cassandra > externally > --- > > Key: CASSANDRA-10441 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10441 > Project: Cassandra > Issue Type: Wish > Components: Tools >Reporter: Nitsan Wakart >Priority: Minor > > This will: > 1. Allow distinct release/maintenance/contribution cycles > 2. Prevent accidental dependencies from Cassandra into the stress tool > 3. Isolate performance changes in Cassandra from changes to stress tool > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971323#comment-14971323 ] Aleksey Yeschenko commented on CASSANDRA-9813: -- [~aholmber] that would work for me. [~thobbs] any objections? > cqlsh column header can be incorrect when no rows are returned > -- > > Key: CASSANDRA-9813 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9813 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > Attachments: Test-for-9813.txt > > > Upon migration, we internally create a pair of surrogate clustering/regular > columns for compact static tables. These shouldn't be exposed to the user. > That is, for the table > {code} > CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE; > {code} > {{SELECT * FROM bar}} should not be returning this result set: > {code} > cqlsh:test> select * from bar; > c | column1 | k | value > ---+-+---+--- > (0 rows) > {code} > Should only contain the defined {{c}} and {{k}} columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Witschey updated CASSANDRA-10581: - Fix Version/s: (was: 3.0.x) 3.0.0 > Update cassandra.yaml comments to reflect memory_allocator deprecation > -- > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey > Fix For: 2.2.x, 3.0.0 > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation, remove in 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Witschey updated CASSANDRA-10581: - Summary: Update cassandra.yaml comments to reflect memory_allocator deprecation, remove in 3.0 (was: Update cassandra.yaml comments to reflect memory_allocator deprecation) > Update cassandra.yaml comments to reflect memory_allocator deprecation, > remove in 3.0 > - > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey > Fix For: 2.2.x, 3.0.0 > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation
[ https://issues.apache.org/jira/browse/CASSANDRA-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971321#comment-14971321 ] Aleksey Yeschenko commented on CASSANDRA-10581: --- bq. How do we want to handle the deprecation more generally? Warn on 2.2, remove in 3.0? Yep. > Update cassandra.yaml comments to reflect memory_allocator deprecation > -- > > Key: CASSANDRA-10581 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 > Project: Cassandra > Issue Type: Bug >Reporter: Jim Witschey > Fix For: 2.2.x, 3.0.x > > > Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml > has no effect: > https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 > The instructions in comments on how to use jemalloc haven't been updated to > reflect this change. [~snazy], is that an accurate assessment? > [~iamaleksey] How do we want to handle the deprecation more generally? Warn > on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9813) cqlsh column header can be incorrect when no rows are returned
[ https://issues.apache.org/jira/browse/CASSANDRA-9813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971320#comment-14971320 ] Adam Holmberg commented on CASSANDRA-9813: -- The driver presently models all columns in the metadata, regardless of whether they should appear in generated CQL (whether that's right is debatable). cqlsh uses the metadata model when writing the header. My recommendation would be to instead use the column names from the results metadata. This will require an addition to the driver API. If this approach sounds reasonable, I'll plan on getting that update into the driver 3.0 GA, and we can follow up with the cqlsh updates. > cqlsh column header can be incorrect when no rows are returned > -- > > Key: CASSANDRA-9813 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9813 > Project: Cassandra > Issue Type: Bug >Reporter: Aleksey Yeschenko > Labels: cqlsh > Fix For: 3.x, 2.1.x, 2.2.x > > Attachments: Test-for-9813.txt > > > Upon migration, we internally create a pair of surrogate clustering/regular > columns for compact static tables. These shouldn't be exposed to the user. > That is, for the table > {code} > CREATE TABLE bar (k int, c int, PRIMARY KEY (k)) WITH COMPACT STORAGE; > {code} > {{SELECT * FROM bar}} should not be returning this result set: > {code} > cqlsh:test> select * from bar; > c | column1 | k | value > ---+-+---+--- > (0 rows) > {code} > Should only contain the defined {{c}} and {{k}} columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
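The recommendation above can be sketched as follows: build the header from the result set's own metadata instead of the driver's table model, so surrogate compact-storage columns never leak into an empty result. This is an illustrative Python sketch with hypothetical helper names; the real change needs a driver API addition, as the comment notes.

```python
def header_from_table_metadata(table_columns):
    # What cqlsh effectively did: list every column the driver models for
    # the table, including internal surrogate columns from COMPACT STORAGE.
    return list(table_columns)

def header_from_result_metadata(result_columns):
    # Proposed approach: take the names straight from the result metadata,
    # which only lists columns the server actually returned for this query.
    return list(result_columns)

# The table model contains surrogate 'column1'/'value' columns ...
table_columns = ["c", "column1", "k", "value"]
# ... but the server's result metadata for SELECT * has only the real ones,
# and it is present even when zero rows come back.
result_columns = ["c", "k"]

print(header_from_table_metadata(table_columns))    # wrong header
print(header_from_result_metadata(result_columns))  # correct header, even with 0 rows
```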
[jira] [Created] (CASSANDRA-10581) Update cassandra.yaml comments to reflect memory_allocator deprecation
Jim Witschey created CASSANDRA-10581: Summary: Update cassandra.yaml comments to reflect memory_allocator deprecation Key: CASSANDRA-10581 URL: https://issues.apache.org/jira/browse/CASSANDRA-10581 Project: Cassandra Issue Type: Bug Reporter: Jim Witschey Fix For: 2.2.x, 3.0.x Looks like in 2.2+ changing the {{memory_allocator}} field in cassandra.yaml has no effect: https://github.com/apache/cassandra/commit/0d2ec11c7e0abfb84d872289af6d3ac386cf381f#diff-b66584c9ce7b64019b5db5a531deeda1R207 The instructions in comments on how to use jemalloc haven't been updated to reflect this change. [~snazy], is that an accurate assessment? [~iamaleksey] How do we want to handle the deprecation more generally? Warn on 2.2, remove in 3.0? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
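The "warn on 2.2, remove in 3.0" approach discussed in this ticket is a standard config-deprecation pattern: detect the dead field at startup and tell the operator it no longer does anything. A minimal sketch, assuming a parsed-yaml dict and hypothetical wording (the real check would sit in Cassandra's Java config loading, and jemalloc is actually enabled via cassandra-env.sh):

```python
import warnings

DEPRECATED_FIELDS = {
    # field name -> hint shown to the operator (hypothetical wording)
    "memory_allocator": "ignored since 2.2; configure jemalloc via cassandra-env.sh instead",
}

def check_deprecations(config):
    """Warn (2.2-style) about yaml fields that no longer have any effect."""
    found = []
    for field, hint in DEPRECATED_FIELDS.items():
        if field in config:
            warnings.warn(f"{field} in cassandra.yaml is deprecated: {hint}",
                          DeprecationWarning)
            found.append(field)
    return found

print(check_deprecations({"memory_allocator": "JEMallocAllocator",
                          "cluster_name": "Test Cluster"}))
```

In the 3.0 step of the plan, the same lookup would become a hard error (unknown/removed field) rather than a warning.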
[jira] [Commented] (CASSANDRA-10476) Fix upgrade paging dtest failures on 2.2->3.0 path
[ https://issues.apache.org/jira/browse/CASSANDRA-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971296#comment-14971296 ] Tyler Hobbs commented on CASSANDRA-10476: - I suggest that [~blerer] look into it. (Sorry Benjamin!) > Fix upgrade paging dtest failures on 2.2->3.0 path > -- > > Key: CASSANDRA-10476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10476 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey > Fix For: 3.0.0 > > > The following upgrade tests for paging features fail or flap on the upgrade > path from 2.2 to 3.0: > - {{upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test}} > - > {{upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default}} > - > {{upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size}} > - > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions}} > - > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions}} > - > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions}} > - > {{upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions}} > - > {{upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging/}} > I've grouped them all together because I don't know how to tell if they're > related; once someone triages them, it may be appropriate to break this out > into multiple tickets. 
> The failures can be found here: > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingData/static_columns_paging_test/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingSize/test_undefined_page_size_default/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/42/testReport/upgrade_tests.paging_test/TestPagingSize/test_with_more_results_than_page_size/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_failure_threshold_deletions/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_multiple_cell_deletions/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_cell_deletions/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingWithDeletions/test_single_row_deletions/history/ > http://cassci.datastax.com/view/Upgrades/job/storage_engine_upgrade_dtest-22_tarball-30_HEAD/44/testReport/upgrade_tests.paging_test/TestPagingDatasetChanges/test_cell_TTL_expiry_during_paging/ > Once [this dtest PR|https://github.com/riptano/cassandra-dtest/pull/586] is > merged, these tests should also run with this upgrade path on normal 3.0 > jobs. 
Until then, you can run them with the following command: > {code} > SKIP=false CASSANDRA_VERSION=binary:2.2.0 UPGRADE_TO=git:cassandra-3.0 > nosetests > upgrade_tests/paging_test.py:TestPagingData.static_columns_paging_test > upgrade_tests/paging_test.py:TestPagingSize.test_undefined_page_size_default > upgrade_tests/paging_test.py:TestPagingSize.test_with_more_results_than_page_size > > upgrade_tests/paging_test.py:TestPagingWithDeletions.test_failure_threshold_deletions > > upgrade_tests/paging_test.py:TestPagingWithDeletions.test_multiple_cell_deletions > > upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_cell_deletions > > upgrade_tests/paging_test.py:TestPagingWithDeletions.test_single_row_deletions > upgrade_tests/paging_test.py:TestPagingDatasetChanges.test_cell_TTL_expiry_during_paging > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10258) Counter table written with CQLSSTableWriter generates exceptions and become corrupted at first use
[ https://issues.apache.org/jira/browse/CASSANDRA-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10258: -- Reproduced In: 2.1.8, 2.1.5, 2.0.12 (was: 2.0.12, 2.1.5, 2.1.8) Reviewer: Aleksey Yeschenko > Counter table written with CQLSSTableWriter generates exceptions and become > corrupted at first use > -- > > Key: CASSANDRA-10258 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10258 > Project: Cassandra > Issue Type: Bug > Components: API > Environment: Linux Debian Wheezie 7.8 / Oracle Java 1.7.0_67 > Ubuntu 14.04.3 LTS / Oracle Java 1.7.0_75 > Cassandra 2.0.12 2.1.5 2.1.8 >Reporter: Guillaume VIEL >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We use CQLSStableWriter to produce testing datasets. > Here are the steps to reproduce this issue : > 1) definition of a table with counter > {code} > CREATE TABLE my_counter ( > my_id text, > my_counter counter, > PRIMARY KEY (my_id) > ) > {code} > 2) with CQLSSTableWriter initialize this table (about 2millions entries) with > this insert order (one insert / key only) > {{UPDATE myks.my_counter SET my_counter = my_counter + ? WHERE my_id = ?}} > 3) load the files written by CQLSSTableWriter with sstableloader in your > cassandra cluster (tested on a single node and a 3 nodes cluster) > 4) start a process that updates the counters (we used 3millions entries > distributed on the key my_id) > 5) after a while try to query a key in the my_counter table > {{cqlsh:myks> select * from my_counter where my_id='001';}} > Request did not complete within rpc_timeout. > In the logs of cassandra (2.0.12) : > {code} > ERROR [CompactionExecutor:3] 2015-05-28 15:53:39,491 CassandraDaemon.java > (line 258) Exception in thread Thread[CompactionExecutor:3,1,main] > java.lang.AssertionError: Wrong class type. 
> at > org.apache.cassandra.db.CounterUpdateColumn.reconcile(CounterUpdateColumn.java:70) > at > org.apache.cassandra.db.ArrayBackedSortedColumns.resolveAgainst(ArrayBackedSortedColumns.java:147) > at > org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:126) > at > org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) > at > org.apache.cassandra.db.compaction.PrecompactedRow$1.reduce(PrecompactedRow.java:120) > at > org.apache.cassandra.db.compaction.PrecompactedRow$1.reduce(PrecompactedRow.java:115) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:112) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:191) > at > org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:144) > at > org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:103) > at > org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85) > at > org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:164) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at
[jira] [Commented] (CASSANDRA-10258) Counter table written with CQLSSTableWriter generates exceptions and become corrupted at first use
[ https://issues.apache.org/jira/browse/CASSANDRA-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971292#comment-14971292 ] Aleksey Yeschenko commented on CASSANDRA-10258: --- bq. So really, the simplest and imo best solution is to add "you cannot write sstable yourself with CQLSSTableWriter" to the list of counters limitations. Basically, this. > Counter table written with CQLSSTableWriter generates exceptions and become > corrupted at first use > -- > > Key: CASSANDRA-10258 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10258 > Project: Cassandra > Issue Type: Bug > Components: API > Environment: Linux Debian Wheezie 7.8 / Oracle Java 1.7.0_67 > Ubuntu 14.04.3 LTS / Oracle Java 1.7.0_75 > Cassandra 2.0.12 2.1.5 2.1.8 >Reporter: Guillaume VIEL >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We use CQLSStableWriter to produce testing datasets. > Here are the steps to reproduce this issue : > 1) definition of a table with counter > {code} > CREATE TABLE my_counter ( > my_id text, > my_counter counter, > PRIMARY KEY (my_id) > ) > {code} > 2) with CQLSSTableWriter initialize this table (about 2millions entries) with > this insert order (one insert / key only) > {{UPDATE myks.my_counter SET my_counter = my_counter + ? WHERE my_id = ?}} > 3) load the files written by CQLSSTableWriter with sstableloader in your > cassandra cluster (tested on a single node and a 3 nodes cluster) > 4) start a process that updates the counters (we used 3millions entries > distributed on the key my_id) > 5) after a while try to query a key in the my_counter table > {{cqlsh:myks> select * from my_counter where my_id='001';}} > Request did not complete within rpc_timeout. > In the logs of cassandra (2.0.12) : > {code} > ERROR [CompactionExecutor:3] 2015-05-28 15:53:39,491 CassandraDaemon.java > (line 258) Exception in thread Thread[CompactionExecutor:3,1,main] > java.lang.AssertionError: Wrong class type. 
> at > org.apache.cassandra.db.CounterUpdateColumn.reconcile(CounterUpdateColumn.java:70) > at > org.apache.cassandra.db.ArrayBackedSortedColumns.resolveAgainst(ArrayBackedSortedColumns.java:147) > at > org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:126) > at > org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) > at > org.apache.cassandra.db.compaction.PrecompactedRow$1.reduce(PrecompactedRow.java:120) > at > org.apache.cassandra.db.compaction.PrecompactedRow$1.reduce(PrecompactedRow.java:115) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:112) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:191) > at > org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:144) > at > org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:103) > at > org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85) > at > org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115) > at > org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:164) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executo
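The "Wrong class type" assertion above fires because compaction found a raw counter *update* (an unapplied delta, which only the live write path is allowed to resolve) where it expected a resolved counter cell; CQLSSTableWriter wrote the deltas straight into SSTables. A toy Python sketch of that class mismatch (greatly simplified; the real types are CounterUpdateColumn and CounterColumn in the Java code):

```python
class CounterCell:
    """A resolved counter value, as compaction expects to find in SSTables."""
    def __init__(self, value):
        self.value = value

    def reconcile(self, other):
        # Mirrors the failing assertion: reconcile is only defined between
        # resolved cells, never against an unapplied delta.
        assert isinstance(other, CounterCell), "Wrong class type."
        return CounterCell(self.value + other.value)


class CounterUpdate:
    """An unresolved delta; only the write path may turn it into a cell.
    Writing these directly into SSTables is what corrupts the table."""
    def __init__(self, delta):
        self.delta = delta


print(CounterCell(3).reconcile(CounterCell(4)).value)  # merging resolved cells works

try:
    CounterCell(3).reconcile(CounterUpdate(4))  # what compaction hit
except AssertionError as e:
    print(e)  # Wrong class type.
```

This is why the conclusion in the comment is to document the limitation rather than "fix" the writer: outside a live node there is no way to resolve the deltas into proper counter shards.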
[jira] [Comment Edited] (CASSANDRA-8653) Upgrading to trunk with auth throws exception
[ https://issues.apache.org/jira/browse/CASSANDRA-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971286#comment-14971286 ] Sam Tunnicliffe edited comment on CASSANDRA-8653 at 10/23/15 4:23 PM: -- bq. This ticket mentions trunk, but any reason to think 3.0 is immune to that? It's actually the 3.0 that the test upgrades to, or it would be except it's currently skipped (waiting on CASSANDRA-9704, so should be re-enabled). When I run locally I see no auth problems and the test completes as expected. It fails though because of an unexpected ERROR in the log of node1, which is thrown just after the last node is upgraded: {noformat} ERROR [HintsDispatcher:2] 2015-10-23 17:18:53,942 CassandraDaemon.java:195 - Exception in thread Thread[HintsDispatcher:2,1,main] java.lang.RuntimeException: java.nio.file.NoSuchFileException: /home/sam/.ccm/repository/gitCOLONcassandra-3.0/data/hints/ac459445-1f7f-45f2-b9a8-2b185df34845-1445617063586-1.hints at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:55) ~[main/:na] at org.apache.cassandra.io.util.ChannelProxy.(ChannelProxy.java:66) ~[main/:na] at org.apache.cassandra.hints.ChecksummedDataInput.open(ChecksummedDataInput.java:63) ~[main/:na] at org.apache.cassandra.hints.HintsReader.open(HintsReader.java:77) ~[main/:na] at org.apache.cassandra.hints.HintsDispatcher.create(HintsDispatcher.java:71) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:219) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:198) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] Caused by: java.nio.file.NoSuchFileException: /home/sam/.ccm/repository/gitCOLONcassandra-3.0/data/hints/ac459445-1f7f-45f2-b9a8-2b185df34845-1445617063586-1.hints at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[na:1.8.0_60] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_60] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_60] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[na:1.8.0_60] at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_60] at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_60] at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:51) ~[main/:na] ... 12 common frames omitted {noformat} was (Author: beobal): b.q This ticket mentions trunk, but any reason to think 3.0 is immune to that? It's actually the 3.0 that the test upgrades to, or it would be except it's currently skipped (waiting on CASSANDRA-9704, so should be re-enabled). When I run locally I see no auth problems and the test completes as expected. 
It fails though because of an unexpected ERROR in the log of node1, which is thrown just after the last node is upgraded: {noformat} RROR [HintsDispatcher:2] 2015-10-23 17:18:53,942 CassandraDaemon.java:195 - Exception in thread Thread[HintsDispatcher:2,1,main] java.lang.RuntimeException: java.nio.file.NoSuchFileException: /home/sam/.ccm/repository/gitCOLONcassandra-3.0/data/hints/ac459445-1f7f-45f2-b9a8-2b185df34845-1445617063586-1.hints at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:55) ~[main/:na] at org.apache.cassandra.io.util.ChannelProxy.(ChannelProxy.java:66) ~[main/:na] at org.apache.cassandra.hints.ChecksummedDataInput.open(ChecksummedDataInput.java:63) ~[main/:na] at org.apache.cassandra.hints.HintsReader.open(HintsReader.java:77) ~[main/:na] at org.apache.cassandra.hints.HintsDispatcher.create(HintsDispatcher.java:71) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:219) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:198) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60] at java.util.concurrent.FutureTask.run(FutureTask.java:2
[jira] [Commented] (CASSANDRA-8653) Upgrading to trunk with auth throws exception
[ https://issues.apache.org/jira/browse/CASSANDRA-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971286#comment-14971286 ] Sam Tunnicliffe commented on CASSANDRA-8653: b.q This ticket mentions trunk, but any reason to think 3.0 is immune to that? It's actually the 3.0 that the test upgrades to, or it would be except it's currently skipped (waiting on CASSANDRA-9704, so should be re-enabled). When I run locally I see no auth problems and the test completes as expected. It fails though because of an unexpected ERROR in the log of node1, which is thrown just after the last node is upgraded: {noformat} RROR [HintsDispatcher:2] 2015-10-23 17:18:53,942 CassandraDaemon.java:195 - Exception in thread Thread[HintsDispatcher:2,1,main] java.lang.RuntimeException: java.nio.file.NoSuchFileException: /home/sam/.ccm/repository/gitCOLONcassandra-3.0/data/hints/ac459445-1f7f-45f2-b9a8-2b185df34845-1445617063586-1.hints at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:55) ~[main/:na] at org.apache.cassandra.io.util.ChannelProxy.(ChannelProxy.java:66) ~[main/:na] at org.apache.cassandra.hints.ChecksummedDataInput.open(ChecksummedDataInput.java:63) ~[main/:na] at org.apache.cassandra.hints.HintsReader.open(HintsReader.java:77) ~[main/:na] at org.apache.cassandra.hints.HintsDispatcher.create(HintsDispatcher.java:71) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:219) ~[main/:na] at org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:198) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] Caused by: java.nio.file.NoSuchFileException: /home/sam/.ccm/repository/gitCOLONcassandra-3.0/data/hints/ac459445-1f7f-45f2-b9a8-2b185df34845-1445617063586-1.hints at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[na:1.8.0_60] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_60] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_60] at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[na:1.8.0_60] at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[na:1.8.0_60] at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[na:1.8.0_60] at org.apache.cassandra.io.util.ChannelProxy.openChannel(ChannelProxy.java:51) ~[main/:na] ... 
12 common frames omitted {noformat} > Upgrading to trunk with auth throws exception > - > > Key: CASSANDRA-8653 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8653 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Sam Tunnicliffe > Fix For: 3.0.0 > > Attachments: node1.log, node2.log, node3.log > > > When running Sam's upgrade_internal_auth_dtest, I am seeing the following > exception (amongst others) in the log file of the second node to be upgraded > to trunk from 2.1: > {code} > ERROR [GossipStage:1] 2015-01-20 13:46:21,679 CassandraDaemon.java:170 - > Exception in thread Thread[GossipStage:1,5,main] > java.lang.NoClassDefFoundError: > org/apache/cassandra/transport/Event$TopologyChange$Change > at > org.apache.cassandra.transport.Server$EventNotifier.onJoinCluster(Server.java:374) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1668) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.onChange(StorageService.java:1384) > ~[main/:na] > at > org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1094) > ~[main/:na] > at > org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1076) > ~[main/:na] > at > org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1034) > ~[main/:na] > at > org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:58) > ~[main/:na] > 1554 - Node /127.0.0.1 state jump to normal > ERROR [GossipStage:1] 2015-01-20 13:46:21,679 CassandraDaemon.java > :170 - Exception in thread Thread[GossipStage:1,5,main] > java.lang.NoClassDefFoundErr
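The NoSuchFileException above is thrown when the hints dispatcher tries to open a hints file that is no longer on disk. A minimal sketch of tolerating that race when opening the file follows; this is an illustrative, hypothetical helper (the class and method names are not from the Cassandra source), assuming a missing file can safely be treated as already dispatched:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.FileChannel;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch, not the actual Cassandra fix: open a hints file for
// reading, but treat "file vanished" as "nothing left to dispatch".
public class HintFileOpener
{
    /**
     * Returns an open read channel for the hints file, or null if the file
     * has already been removed (e.g. dispatched and deleted concurrently).
     */
    public static FileChannel openIfPresent(Path hintsFile)
    {
        try
        {
            return FileChannel.open(hintsFile, StandardOpenOption.READ);
        }
        catch (NoSuchFileException e)
        {
            // The file disappeared between scheduling and dispatch; skip it.
            return null;
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }
}
```

A caller would then simply skip dispatch when `openIfPresent` returns null, instead of letting the exception propagate to the executor thread.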
[jira] [Updated] (CASSANDRA-9912) SizeEstimatesRecorder has assertions after decommission sometimes
[ https://issues.apache.org/jira/browse/CASSANDRA-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9912: - Fix Version/s: (was: 2.1.x) 2.1.12 > SizeEstimatesRecorder has assertions after decommission sometimes > - > > Key: CASSANDRA-9912 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9912 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeremiah Jordan > Fix For: 2.1.12 > > > Doing some testing with 2.1.8 adding and decommissioning nodes. Sometimes > after decommissioning the following starts being thrown by the > SizeEstimatesRecorder. > {noformat} > java.lang.AssertionError: -9223372036854775808 not found in > -9223372036854775798, 10 > at > org.apache.cassandra.locator.TokenMetadata.getPredecessor(TokenMetadata.java:683) > ~[cassandra-all-2.1.8.621.jar:2.1.8.621] > at > org.apache.cassandra.locator.TokenMetadata.getPrimaryRangesFor(TokenMetadata.java:627) > ~[cassandra-all-2.1.8.621.jar:2.1.8.621] > at > org.apache.cassandra.db.SizeEstimatesRecorder.run(SizeEstimatesRecorder.java:68) > ~[cassandra-all-2.1.8.621.jar:2.1.8.621] > at > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) > ~[cassandra-all-2.1.8.621.jar:2.1.8.621] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_40] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [na:1.8.0_40] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > [na:1.8.0_40] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > [na:1.8.0_40] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_40] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_40] > at 
java.lang.Thread.run(Thread.java:745) [na:1.8.0_40] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
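The AssertionError above ("-9223372036854775808 not found in -9223372036854775798, 10") comes from a predecessor lookup that assumes the queried token is still a member of the sorted token ring. A simplified model of that lookup (illustrative only, not the actual TokenMetadata code) shows why a stale token left over after a decommission trips the assert:

```java
import java.util.Collections;
import java.util.List;

// Minimal sketch of a predecessor lookup on a sorted ring of tokens.
// Like the real code, it asserts that the queried token is a ring member;
// after a decommission, a recorder holding a stale token violates that.
public class TokenRing
{
    public static long predecessor(List<Long> sortedTokens, long token)
    {
        int index = Collections.binarySearch(sortedTokens, token);
        assert index >= 0 : token + " not found in " + sortedTokens;
        // Wrap around: the predecessor of the first token is the last one.
        return index == 0 ? sortedTokens.get(sortedTokens.size() - 1)
                          : sortedTokens.get(index - 1);
    }
}
```

Calling `predecessor` with a token that was removed from the ring (for example Long.MIN_VALUE, as in the report) makes `binarySearch` return a negative insertion point and fires the assertion.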
[jira] [Updated] (CASSANDRA-10249) Make buffered read size configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10249: -- Fix Version/s: (was: 2.1.x) 2.1.12 > Make buffered read size configurable > > > Key: CASSANDRA-10249 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10249 > Project: Cassandra > Issue Type: Improvement >Reporter: Albert P Tobey >Assignee: Albert P Tobey > Fix For: 2.1.12 > > Attachments: Screenshot 2015-09-11 09.32.04.png, Screenshot > 2015-09-11 09.34.10.png, patched-2.1.9-dstat-lvn10.png, > stock-2.1.9-dstat-lvn10.png, yourkit-screenshot.png > > > On read workloads, Cassandra 2.1 reads drastically more data than it emits > over the network. This causes problems throughout the system by wasting disk > IO and causing unnecessary GC. > I have reproduced the issue on clusters and locally with a single instance. > The only requirement to reproduce the issue is enough data to blow through > the page cache. The default schema and data size with cassandra-stress is > sufficient for exposing the issue. > With stock 2.1.9 I regularly observed disk:network ratios anywhere from 300:1 > to 500:1. That is to say, for 1MB/s of network IO, Cassandra was > doing 300-500MB/s of disk reads, saturating the drive. > After applying this patch for standard IO mode > https://gist.github.com/tobert/10c307cf3709a585a7cf the ratio fell to around > 100:1 on my local test rig. Latency improved considerably and GC became a lot > less frequent. > I tested with 512 byte reads as well, but got the same performance, which > makes sense since all HDDs and SSDs made in the last few years have a 4K block > size (many of them lie and say 512). > I'm re-running the numbers now and will post them tomorrow. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
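The disk:network ratios in the report follow directly from buffer-size arithmetic: every point read fetches at least one full buffer from disk, so small payloads read through a large buffer inflate disk IO by roughly buffer size over payload size. A toy model of that amplification, with hypothetical helper names and the simplifying assumption of one buffer fill per read:

```java
// Illustrative sketch, not Cassandra code: bytes actually fetched from disk
// for a read are rounded up to a whole number of I/O buffers, so a 64 KiB
// buffer turns a 200-byte read into 64 KiB of disk traffic.
public class ReadAmplification
{
    public static long bytesReadFromDisk(long neededBytes, long bufferSize)
    {
        long buffers = (neededBytes + bufferSize - 1) / bufferSize; // ceiling division
        return buffers * bufferSize;
    }

    public static double amplification(long neededBytes, long bufferSize)
    {
        return (double) bytesReadFromDisk(neededBytes, bufferSize) / neededBytes;
    }
}
```

Under this model, shrinking the buffer from 64 KiB toward the drive's 4K block size reduces the amplification proportionally, which is consistent with the ratio falling from ~300-500:1 to ~100:1 after the patch.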
[jira] [Updated] (CASSANDRA-10179) Duplicate index should throw AlreadyExistsException
[ https://issues.apache.org/jira/browse/CASSANDRA-10179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10179: -- Issue Type: Sub-task (was: Improvement) Parent: CASSANDRA-9362 > Duplicate index should throw AlreadyExistsException > --- > > Key: CASSANDRA-10179 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10179 > Project: Cassandra > Issue Type: Sub-task >Reporter: T Jake Luciani >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > If a 2i already exists we currently throw an InvalidQueryException. This > should be an AlreadyExistsException, for consistency with trying to create the > same CQL table twice. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-10441) Move stress tool into its own repository and manage dependency on Cassandra externally
[ https://issues.apache.org/jira/browse/CASSANDRA-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-10441. Resolution: Won't Fix I hear you, but since stress's primary purpose is to inform C* development, I don't think moving it out of tree makes sense. Among other reasons, we play pretty fast and loose with compatibility and we'd like to keep it that way. > Move stress tool into its own repository and manage dependency on Cassandra > externally > --- > > Key: CASSANDRA-10441 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10441 > Project: Cassandra > Issue Type: Wish > Components: Tools >Reporter: Nitsan Wakart >Priority: Minor > > This will: > 1. Allow distinct release/maintenance/contribution cycles > 2. Prevent accidental dependencies from Cassandra into the stress tool > 3. Isolate performance changes in Cassandra from changes to the stress tool > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-7324) Strange warnings during pig-test
[ https://issues.apache.org/jira/browse/CASSANDRA-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-7324. -- Resolution: Won't Fix Fix Version/s: (was: 2.1.x) > Strange warnings during pig-test > > > Key: CASSANDRA-7324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7324 > Project: Cassandra > Issue Type: Bug > Components: Tests >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Minor > > Not present in 2.0, we have strange but seemingly harmless warnings when > running pig-test on 2.1: > {noformat} > [junit] 14/05/29 22:21:53 WARN util.MBeans: > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1779863485 > [junit] javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1779863485 > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415) > [junit] at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) > [junit] at > org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71) > [junit] at > org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2067) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:799) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:87) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77) > [junit] at > 
org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68) > [junit] at > org.apache.cassandra.pig.PigTestBase.oneTimeTearDown(PigTestBase.java:77) > [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [junit] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > [junit] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [junit] at java.lang.reflect.Method.invoke(Method.java:606) > [junit] at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > [junit] at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > [junit] at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > [junit] at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37) > [junit] at org.junit.runners.ParentRunner.run(ParentRunner.java:220) > [junit] at > junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) > [junit] 14/05/29 22:21:53 WARN datanode.FSDatasetAsyncDiskService: > AsyncDiskService has already shut down. > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10405) MV updates should optionally wait for acknowledgement from view replicas
[ https://issues.apache.org/jira/browse/CASSANDRA-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-10405: --- Priority: Minor (was: Major) > MV updates should optionally wait for acknowledgement from view replicas > > > Key: CASSANDRA-10405 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10405 > Project: Cassandra > Issue Type: Improvement >Reporter: Carl Yeksigian >Priority: Minor > Labels: materializedviews > Fix For: 3.x > > > MV updates are currently completely asynchronous in order to provide > parallelism of updates trying to acquire the partition lock. For some use > cases, leaving the MV updates asynchronous is exactly what's needed. > However, there are some use cases where knowing that the update has either > succeeded or failed on the view is necessary, especially when trying to allow > read-your-write behavior. In those cases, we would follow the same code path > as asynchronous writes, but at the end wait on the acknowledgements from the > view replicas before acknowledging our write. This option should be for each > MV separately, since MVs which need the synchronous properties might be mixed > with MV which do not need this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9984) Improve error reporting for malformed schemas in stress profile
[ https://issues.apache.org/jira/browse/CASSANDRA-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9984: - Fix Version/s: (was: 3.x) 3.1 > Improve error reporting for malformed schemas in stress profile > --- > > Key: CASSANDRA-9984 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9984 > Project: Cassandra > Issue Type: Improvement >Reporter: Jim Witschey >Assignee: T Jake Luciani >Priority: Trivial > Fix For: 3.1 > > > See this gist: > https://gist.github.com/mambocab/a78fae8c356223245c63 > for an example of a profile that triggers the bug when used as a stress > profile on trunk. It contains a number of old, now unused, configuration > options in the table schema. The error raised when this schema is executed > isn't propagated because of improper error handling. > To reproduce this error with CCM you can save the file in the gist above as > {{8-columns.yaml}} and run > {code} > ccm create -v git:trunk reproduce-error -n 1 > ccm start --wait-for-binary-proto > ccm stress user profile=8-columns.yaml ops\(insert=1\) n=5K > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10544) Make cqlsh tests work when authentication is configured
[ https://issues.apache.org/jira/browse/CASSANDRA-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10544: -- Issue Type: Improvement (was: Bug) > Make cqlsh tests work when authentication is configured > --- > > Key: CASSANDRA-10544 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10544 > Project: Cassandra > Issue Type: Improvement > Components: Tests, Tools >Reporter: Adam Holmberg >Assignee: Stefania >Priority: Trivial > Labels: cqlsh, test > Fix For: 2.1.x, 2.2.x, 3.1 > > > cqlsh tests break if the runner has an authentication section in their > ~/.cassandra/cqlshrc, because cqlsh changes the prompt and the tests scan > output for a prompt. It manifests as read timeouts while waiting for a prompt > in test/run_cqlsh.py. > [This > pattern|https://github.com/mambocab/cassandra/blob/1c27f9be1ba8ea10dbe843d513e23de6238dede8/pylib/cqlshlib/test/run_cqlsh.py#L30] > could be generalized to match the "@cqlsh..." prompt that arises > with this config. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9984) Improve error reporting for malformed schemas in stress profile
[ https://issues.apache.org/jira/browse/CASSANDRA-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9984: - Issue Type: Improvement (was: Bug) > Improve error reporting for malformed schemas in stress profile > --- > > Key: CASSANDRA-9984 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9984 > Project: Cassandra > Issue Type: Improvement >Reporter: Jim Witschey >Assignee: T Jake Luciani >Priority: Trivial > Fix For: 3.x > > > See this gist: > https://gist.github.com/mambocab/a78fae8c356223245c63 > for an example of a profile that triggers the bug when used as a stress > profile on trunk. It contains a number of old, now unused, configuration > options in the table schema. The error raised when this schema is executed > isn't propagated because of improper error handling. > To reproduce this error with CCM you can save the file in the gist above as > {{8-columns.yaml}} and run > {code} > ccm create -v git:trunk reproduce-error -n 1 > ccm start --wait-for-binary-proto > ccm stress user profile=8-columns.yaml ops\(insert=1\) n=5K > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10544) Make cqlsh tests work when authentication is configured
[ https://issues.apache.org/jira/browse/CASSANDRA-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10544: -- Fix Version/s: (was: 3.0.x) 3.1 > Make cqlsh tests work when authentication is configured > --- > > Key: CASSANDRA-10544 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10544 > Project: Cassandra > Issue Type: Improvement > Components: Tests, Tools >Reporter: Adam Holmberg >Assignee: Stefania >Priority: Trivial > Labels: cqlsh, test > Fix For: 2.1.x, 2.2.x, 3.1 > > > cqlsh tests break if the runner has an authentication section in their > ~/.cassandra/cqlshrc, because cqlsh changes the prompt and the tests scan > output for a prompt. It manifests as read timeouts while waiting for a prompt > in test/run_cqlsh.py. > [This > pattern|https://github.com/mambocab/cassandra/blob/1c27f9be1ba8ea10dbe843d513e23de6238dede8/pylib/cqlshlib/test/run_cqlsh.py#L30] > could be generalized to match the "@cqlsh..." prompt that arises > with this config. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8965) Cassandra retains a file handle to the directory it's writing to for each writer instance
[ https://issues.apache.org/jira/browse/CASSANDRA-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-8965: - Fix Version/s: (was: 3.x) 3.1 > Cassandra retains a file handle to the directory it's writing to for each > writer instance > > > Key: CASSANDRA-8965 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8965 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Benedict >Priority: Trivial > Fix For: 3.1 > > > We could either share this amongst the CF object, or have a shared > ref-counted cache that opens a reference and shares it amongst all writer > instances, closing it once they all close. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9510) assassinating an unknown endpoint could NPE
[ https://issues.apache.org/jira/browse/CASSANDRA-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9510: - Fix Version/s: (was: 3.x) 3.1 > assassinating an unknown endpoint could NPE > --- > > Key: CASSANDRA-9510 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9510 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Dave Brosius >Assignee: Dave Brosius >Priority: Trivial > Fix For: 3.1 > > Attachments: assissinate_unknown.txt > > > If the code assassinates an unknown endpoint, it doesn't generate a 'tokens' > collection, which then does > epState.addApplicationState(ApplicationState.STATUS, > StorageService.instance.valueFactory.left(tokens, computeExpireTime())); > and left(null, time); will NPE -- This message was sent by Atlassian JIRA (v6.3.4#6332)
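The null-safety the ticket asks for can be sketched as follows; this is an illustrative, hypothetical helper (not the attached patch, and the token type is simplified to String), assuming an empty collection is an acceptable stand-in when Gossiper has never seen the endpoint:

```java
import java.util.Collection;
import java.util.Collections;

// Sketch: when assassinating an endpoint with no known state, there is no
// 'tokens' collection to hand to valueFactory.left(...), so substitute an
// empty collection instead of passing null and hitting the NPE.
public class AssassinateGuard
{
    public static Collection<String> tokensOrEmpty(Collection<String> tokens)
    {
        return tokens == null ? Collections.emptyList() : tokens;
    }
}
```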
[jira] [Updated] (CASSANDRA-10374) List and Map values incorrectly limited to 64k size
[ https://issues.apache.org/jira/browse/CASSANDRA-10374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10374: -- Fix Version/s: (was: 3.0.x) 3.1 > List and Map values incorrectly limited to 64k size > --- > > Key: CASSANDRA-10374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10374 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Tyler Hobbs >Assignee: Benjamin Lerer >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.1 > > > With the v3 native protocol, we switched from encoding collection element > sizes with shorts to ints. However, in {{Lists.java}} and {{Maps.java}}, we > still validate that list and map values are smaller than > {{MAX_UNSIGNED_SHORT}}. > Map keys and set elements are stored in the cell name, so they're implicitly > limited to the cell name size limit of 64k. However, for non-frozen > collections, this limitation should not apply, so we probably don't want to > perform this check here for those either. > The fix should include tests where we exceed the 64k limit for frozen and > non-frozen collections. In the case of non-frozen lists and maps, we should > verify that the 64k cell-name size limit is enforced in a friendly way. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
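The mismatch described above is between a value-size check inherited from the v2 protocol era (lengths encoded as an unsigned 16-bit short, so at most 65535 bytes) and the v3 wire format, which writes element sizes as a signed 32-bit int. A simplified sketch of the two, with illustrative names (not the actual Lists.java/Maps.java code):

```java
import java.nio.ByteBuffer;

// Sketch of the inconsistency: the legacy check rejects values over 64k
// even though the v3 encoding below can represent much larger sizes.
public class CollectionValueSizes
{
    public static final int MAX_UNSIGNED_SHORT = 0xFFFF; // 65535

    // Legacy validation that incorrectly survives for non-frozen collections:
    public static boolean fitsLegacyLimit(ByteBuffer value)
    {
        return value.remaining() <= MAX_UNSIGNED_SHORT;
    }

    // v3-style encoding: a signed 32-bit int length prefix, then the bytes.
    public static ByteBuffer writeValueV3(ByteBuffer value)
    {
        ByteBuffer out = ByteBuffer.allocate(4 + value.remaining());
        out.putInt(value.remaining());
        out.put(value.duplicate());
        out.flip();
        return out;
    }
}
```

A 65536-byte value fails the legacy check yet round-trips fine through the int-prefixed encoding, which is exactly the gap the ticket wants closed for non-frozen lists and maps.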
[jira] [Updated] (CASSANDRA-10350) cqlsh describe keyspace output no longer keeps indexes in sorted order
[ https://issues.apache.org/jira/browse/CASSANDRA-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10350: -- Fix Version/s: (was: 3.0.x) 3.1 > cqlsh describe keyspace output no longer keeps indexes in sorted order > --- > > Key: CASSANDRA-10350 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10350 > Project: Cassandra > Issue Type: Bug >Reporter: Andrew Hust >Priority: Minor > Labels: cqlsh > Fix For: 3.1 > > > cqlsh command {{describe keyspace }} no longer keeps indexes in alpha > sorted order. This was caught with a dtest on > [cassci|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/]. > Tested on: C* {{b4544846def2bdd00ff841c7e3d9f2559620827b}} > Can be reproduced with the following: > {code} > ccm stop > ccm remove describe_order > ccm create -n 1 -v git:cassandra-2.2 describe_order > ccm start > cat << EOF | ccm node1 cqlsh > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 1}; > USE ks1; > CREATE TABLE ks1.test (id int, col int, val text, val2 text, val3 text, > PRIMARY KEY(id, col)); > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix3 ON ks1.test (val3); > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix1 ON ks1.test (val); > DESCRIBE KEYSPACE ks1; > EOF > ccm stop > ccm setdir -v git:cassandra-3.0 > ccm start > sleep 15 > cat << EOF | ccm node1 cqlsh > DESCRIBE KEYSPACE ks1; > EOF > ccm stop > {code} > Output on <= cassandra-2.2: > {code} > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix1 ON ks1.test (val); > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix3 ON ks1.test (val3); > {code} > Output on cassandra-3.0: > {code} > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix3 ON ks1.test (val3); > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix1 ON ks1.test (val); > {code} > //CC [~enigmacurry] -- This message was sent
by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10350) cqlsh describe keyspace output no longer keeps indexes in sorted order
[ https://issues.apache.org/jira/browse/CASSANDRA-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971240#comment-14971240 ] Aleksey Yeschenko commented on CASSANDRA-10350: --- [~aholmber] Can we fix this in the driver? > cqlsh describe keyspace output no longer keeps indexes in sorted order > --- > > Key: CASSANDRA-10350 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10350 > Project: Cassandra > Issue Type: Bug >Reporter: Andrew Hust >Priority: Minor > Labels: cqlsh > Fix For: 3.1 > > > cqlsh command {{describe keyspace }} no longer keeps indexes in alpha > sorted order. This was caught with a dtest on > [cassci|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/]. > Tested on: C* {{b4544846def2bdd00ff841c7e3d9f2559620827b}} > Can be reproduced with the following: > {code} > ccm stop > ccm remove describe_order > ccm create -n 1 -v git:cassandra-2.2 describe_order > ccm start > cat << EOF | ccm node1 cqlsh > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 1}; > USE ks1; > CREATE TABLE ks1.test (id int, col int, val text, val2 text, val3 text, > PRIMARY KEY(id, col)); > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix3 ON ks1.test (val3); > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix1 ON ks1.test (val); > DESCRIBE KEYSPACE ks1; > EOF > ccm stop > ccm setdir -v git:cassandra-3.0 > ccm start > sleep 15 > cat << EOF | ccm node1 cqlsh > DESCRIBE KEYSPACE ks1; > EOF > ccm stop > {code} > Output on <= cassandra-2.2: > {code} > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix1 ON ks1.test (val); > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix3 ON ks1.test (val3); > {code} > Output on cassandra-3.0: > {code} > CREATE INDEX ix2 ON ks1.test (val2); > CREATE INDEX ix3 ON ks1.test (val3); > CREATE INDEX ix0 ON ks1.test (col); > CREATE INDEX ix1 ON ks1.test
(val); > {code} > //CC [~enigmacurry] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
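One way to restore deterministic DESCRIBE output regardless of schema-table iteration order, whether done in cqlsh or in the driver's metadata layer as suggested above, is simply to sort the index DDL statements before printing. A minimal sketch (hypothetical helper, not the actual driver or cqlsh code):

```java
import java.util.Comparator;
import java.util.List;

// Sketch: sort CREATE INDEX statements lexicographically before emitting
// them, instead of relying on whatever order the schema tables return.
// For a shared "CREATE INDEX " prefix this orders by index name.
public class DescribeOutput
{
    public static List<String> sortedIndexDdl(List<String> createStatements)
    {
        createStatements.sort(Comparator.naturalOrder());
        return createStatements;
    }
}
```

Applied to the 3.0 output in the report (ix2, ix3, ix0, ix1), this yields the pre-3.0 ordering ix0, ix1, ix2, ix3.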
[jira] [Updated] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns
[ https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10271: -- Issue Type: Improvement (was: Bug) > ORDER BY should allow skipping equality-restricted clustering columns > - > > Key: CASSANDRA-10271 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10271 > Project: Cassandra > Issue Type: Improvement > Components: API, Core >Reporter: Tyler Hobbs >Assignee: Brett Snyder >Priority: Minor > Fix For: 3.x, 2.2.x > > Attachments: cassandra-2.2-10271.txt > > > Given a table like the following: > {noformat} > CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c)); > {noformat} > We should support a query like this: > {noformat} > SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC; > {noformat} > Currently, this results in the following error: > {noformat} > [Invalid query] message="Order by currently only support the ordering of > columns following their declared order in the PRIMARY KEY" > {noformat} > However, since {{b}} is restricted by an equality restriction, we shouldn't > require it to be present in the {{ORDER BY}} clause. > As a workaround, you can use this query instead: > {noformat} > SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10257) InvertedIndex trigger example has not been updated post 8099
[ https://issues.apache.org/jira/browse/CASSANDRA-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10257: -- Reviewer: Aleksey Yeschenko > InvertedIndex trigger example has not been updated post 8099 > > > Key: CASSANDRA-10257 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10257 > Project: Cassandra > Issue Type: Bug > Components: Examples >Reporter: Mike Adamson >Assignee: Mike Adamson >Priority: Minor > Fix For: 3.0.x > > Attachments: 10257.txt > > > The {{InvertedIndex}} example is still using pre-8099 code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9223) ArithmeticException after decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9223: - Fix Version/s: (was: 3.x) 3.1 > ArithmeticException after decommission > -- > > Key: CASSANDRA-9223 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9223 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Priority: Minor > Fix For: 3.1 > > > Also seen on trunk while working on CASSANDRA-8072: > {noformat} > ERROR 19:21:33 Exception in thread Thread[BatchlogTasks:1,5,main] > java.lang.ArithmeticException: / by zero > at > org.apache.cassandra.db.BatchlogManager.replayAllFailedBatches(BatchlogManager.java:173) > ~[main/:na] > at > org.apache.cassandra.db.BatchlogManager.access$000(BatchlogManager.java:61) > ~[main/:na] > at > org.apache.cassandra.db.BatchlogManager$1.runMayThrow(BatchlogManager.java:91) > ~[main/:na] > at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:82) > ~[main/:na] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > [na:1.7.0_76] > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) > [na:1.7.0_76] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) > [na:1.7.0_76] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > [na:1.7.0_76] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [na:1.7.0_76] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_76] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_76] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
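The stack trace points at a division inside BatchlogManager.replayAllFailedBatches. A plausible shape of the bug, sketched with hypothetical names (the actual divisor in the Cassandra source may differ): a per-endpoint replay throttle computed by dividing by the number of peer nodes, which can reach zero right after a node decommissions itself, so the division needs a guard.

```java
// Illustrative sketch, assuming the throttle is split across peers:
// when endpointCount is 0 (e.g. just after decommission) return 0 instead
// of dividing by zero and killing the scheduled task.
public class ReplayThrottle
{
    public static int throttlePerEndpoint(int totalThrottleInKb, int endpointCount)
    {
        return endpointCount == 0 ? 0 : totalThrottleInKb / endpointCount;
    }
}
```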
[jira] [Updated] (CASSANDRA-10096) SerializationHelper should provide a rewindable in-order tester
[ https://issues.apache.org/jira/browse/CASSANDRA-10096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10096: -- Issue Type: Improvement (was: Bug) > SerializationHelper should provide a rewindable in-order tester > --- > > Key: CASSANDRA-10096 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10096 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Benedict >Priority: Minor > Fix For: 3.x > > > When deserializing a row we perform a logarithmic lookup on column name for > every cell. There is also a lot of unnecessary indirection to reach this > method call. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10179) Duplicate index should throw AlreadyExistsException
[ https://issues.apache.org/jira/browse/CASSANDRA-10179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10179: -- Issue Type: Improvement (was: Bug) > Duplicate index should throw AlreadyExistsException > --- > > Key: CASSANDRA-10179 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10179 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > If a 2i already exists we currently throw an InvalidQueryException. This > should be an AlreadyExistsException, for consistency with trying to create the > same CQL table twice. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-10580) When mutations are dropped, the column family should be printed / have a counter per column family
Anubhav Kale created CASSANDRA-10580:
Summary: When mutations are dropped, the column family should be printed / have a counter per column family
Key: CASSANDRA-10580
URL: https://issues.apache.org/jira/browse/CASSANDRA-10580
Project: Cassandra
Issue Type: New Feature
Components: Core
Environment: Production
Reporter: Anubhav Kale
Priority: Minor
Fix For: 2.1.x

In our production cluster, we are seeing a large number of dropped mutations. It would be really helpful to see which column families are actually affected (either through logs or through a dedicated counter for every column family). I have made a hack in StorageProxy (below) to help us with this. I am happy to extend this to a better solution (print the affected CF as logger.debug and then manually grep) if experts agree this additional detail would be helpful in general. Any other suggestions are welcome.

{code}
private static abstract class LocalMutationRunnable implements Runnable
{
    private final long constructionTime = System.currentTimeMillis();
    private IMutation mutation;

    public final void run()
    {
        if (System.currentTimeMillis() > constructionTime + 2000L)
        {
            long timeTaken = System.currentTimeMillis() - constructionTime;
            logger.warn("Anubhav LocalMutationRunnable thread ran after " + timeTaken);
            try
            {
                for (ColumnFamily family : this.mutation.getColumnFamilies())
                {
                    if (family.toString().toLowerCase().contains("udsuserdailysnapshot"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERDAILY);
                    else if (family.toString().toLowerCase().contains("udsuserhourlysnapshot"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERHOURLY);
                    else if (family.toString().toLowerCase().contains("udstenantdailysnapshot"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTDAILY);
                    else if (family.toString().toLowerCase().contains("udstenanthourlysnapshot"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTHOURLY);
                    else if (family.toString().toLowerCase().contains("userdatasetraw"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERDSRAW);
                    else if (family.toString().toLowerCase().contains("tenants"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTS);
                    else if (family.toString().toLowerCase().contains("users"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.USERS);
                    else if (family.toString().toLowerCase().contains("tenantactivity"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.TENANTACTIVITY);
                    else if (family.getKeySpaceName().toLowerCase().contains("system"))
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.SYSTEMKS);
                    else
                    {
                        logger.warn("Anubhav LocalMutationRunnable updating mutations for " + family.toString().toLowerCase());
                        MessagingService.instance().incrementDroppedMessages(MessagingService.Verb.OTHERTBL);
                    }
                }
            }
            catch (Exception e)
            {
                logger.error("Anubhav LocalMutationRunnable Exception ", e);
            }
            MessagingSer
{code}
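The hack above hardcodes one MessagingService.Verb per table, which does not scale to arbitrary schemas. A generic alternative would keep one counter per table name; a self-contained sketch of that idea (class and method names are illustrative, not Cassandra's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: per-table dropped-mutation counters keyed by "keyspace.table",
// instead of a hardcoded branch per table. LongAdder keeps increments cheap
// under contention from many mutation threads.
public class DroppedMutationCounters {
    private final Map<String, LongAdder> dropped = new ConcurrentHashMap<>();

    // Record one dropped mutation for the given table.
    public void recordDropped(String keyspace, String table) {
        dropped.computeIfAbsent(keyspace + "." + table, k -> new LongAdder())
               .increment();
    }

    // Read the current count (0 if the table never dropped anything).
    public long getDropped(String keyspace, String table) {
        LongAdder adder = dropped.get(keyspace + "." + table);
        return adder == null ? 0 : adder.sum();
    }
}
```

Such a map could then be exposed per table via JMX or logged periodically, rather than enumerating tables in code.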
[jira] [Updated] (CASSANDRA-9222) AssertionError after decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9222: - Fix Version/s: (was: 3.x) 3.1 > AssertionError after decommission > - > > Key: CASSANDRA-9222 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9222 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Priority: Minor > Fix For: 3.1 > > > Saw this on trunk while working on CASSANDRA-8072, but it may affect earlier > revisions as well: > {noformat} > INFO 17:48:57 MessagingService has terminated the accept() thread > INFO 17:48:58 DECOMMISSIONED > ERROR 17:52:25 Exception in thread Thread[OptionalTasks:1,5,main] > java.lang.AssertionError: -1011553757645129692 not found in > -9212067178699207814, -9200531256183869940, -9166030381776079682, > -9162013024688602642, -9151724494713671168, -9095828490921521759, > -9035494031488373110, -8993765846966048219, -8912013107131353260, > -8909000788978186800, -8879514397454962673, -8868628980500567099, > -8850730903031889070, -8810378752213886595, -8779200870214886308, > -8758215747589442842, -8751091270073031687, -8727034084505556969, > -8665197275159395069, -8656563059526305598, -8468078121019364990, > -8465001791134178844, -8442193507205463429, -8422069069190372219, > -8342133517826612505, -8341643847610190520, -8340770353573450569, > -8337671516798157281, -8299063757464280571, -8294397037816683529, > -8190643358275415766, -8125907580996325958, -8080821167493102683, > -8058428707430264364, -8033777866368709204, -8018079744052327023, > -8005568943124488030, -7911488756902729132, -7831006227012170930, > -7824529182957931950, -7807286997402075771, -7795080548612350344, > -7778629955912441437, -7771701686959718810, -7759250335393772671, > -7745731940317799541, -7703194536911509010, -7694764467260740698, > -7691909270364954632, -7687121918922986909, -7682707339911246942, > -7517133373189921954, -7482800574078120526, 
-7475897243891441451, > -7334307376946940271, -7326649207653179327, -7258677281263041990, > -7221843646683358238, -7193299656451825680, -7105256682000196035, > -7035269781687029457, -7024278722443497027, -7019197046707993025, > -7015131617238216508, -7003811999522811317, -6980314778696530567, > -6966235125715836473, -691530498397662, -6912703644363131398, > -6881456879008059927, -6861265076865721267, -6850740895102395611, > -6808435504617684311, -6785202117470372844, -6782573711981746574, > -6763604807975420855, -6738443719321921481, -6718513123799422576, > -6711670508127917469, -6709012720615571304, -6645945635050443947, > -6629420613692951932, -6542209628003661283, -6535684002637060628, > -6507671461487774245, -6423206762015678338, -6409843839148310789, > -6404011469157179029, -6381904465334594648, -6311911206861962333, > -6296991709696294898, -6264931794517958640, -6261574198670386500, > -6261382604358802512, -6252257370391459113, -6241897861580419597, > -6227245701986117982, -6199525755295090433, -6180934919369759659, > -6144605078172691818, -6126223305042342065, -6118447361839427651, > -6074679422903704861, -6053157348245110185, -6029489996808528900, > -5984211855143878285, -5976157876053718897, -5960786495011670628, > -5958735514226770035, -5899767639655442330, -5822684184303415148, > -5781417439294763637, -5751460432371890910, -5740166642636309327, > -5695626417612186310, -5640765045723408247, -5617181156049689169, > -5609533985177356591, -5601369236916580549, -5597950494887081576, > -5563417985168606424, -5544827346340456629, -5532661047516804641, > -5522839053491352218, -5515748028172318343, -5503681859719385351, > -5454037971834611841, -5391841126413524561, -5391486446881271229, > -5345799278441821500, -5334673760925625816, -5223383618739305156, > -5221923994481449381, -5201263557535069480, -5146266397250565218, > -5129908985877585855, -5105202808286786842, -5087879514740126453, > -5015647678958926683, -4956601765875516828, -4870012706573251068, > 
-4843165740363419346, -4785540557423875550, -4769272272470020667, > -4743838345902355963, -4652149714081482841, -4651813505681686208, > -4633498525751156636, -4617489888285113964, -4575171285024168183, > -4426852178336308913, -4426400792698710435, -4389286320937036309, > -4324528033603203034, -4310368852323145495, -4302216608677327172, > -4229528661709148440, -4207740831738287983, -4203528661247313570, > -3948641241721335982, -3946554569612854645, -3931865850800685387, > -3925635355333550077, -3834502440481769685, -3827908348147378297, > -3805680095754927988, -3804947918584815385, -3800995210938487618, > -3783564223836955070, -3775028120786497996, -3711629770355538643, > -3710182799291812403, -3643158926306968005, -3625334149683154824, >
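Returning to the CASSANDRA-9223 stack trace above: a "/ by zero" from replayAllFailedBatches suggests a rate computed by dividing a throttle by the number of peer nodes, which can reach zero after a decommission. A guarded version of that computation, purely as an illustration of the failure mode (this is not Cassandra's actual BatchlogManager code):

```java
// Illustrative guard for a per-node replay throttle: when no peers remain
// (e.g. after decommission), return 0 instead of dividing by zero.
public class ReplayThrottle {
    // Returns the per-node throttle in KB/s; 0 means "nothing to replay to".
    public static int perNodeThrottle(int totalThrottleKb, int liveNodes) {
        if (liveNodes <= 0)
            return 0; // avoid ArithmeticException: / by zero
        return totalThrottleKb / liveNodes;
    }
}
```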
[jira] [Resolved] (CASSANDRA-8554) Node where gossip is disabled still shows as UP on that node; other nodes show it as DN
[ https://issues.apache.org/jira/browse/CASSANDRA-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-8554.
Resolution: Cannot Reproduce
Fix Version/s: (was: 3.x)

> Node where gossip is disabled still shows as UP on that node; other nodes show it as DN
>
> Key: CASSANDRA-8554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8554
> Project: Cassandra
> Issue Type: Bug
> Environment: CentOS 6.5, DSE 4.5.1 tarball install
> Reporter: Mark Curtis
> Priority: Minor
>
> When running nodetool drain, the drained node will still show its own status as UP in nodetool status even after the drain has finished. For example, using a 3-node cluster, on one of the nodes that is still operating and not drained we see this:
> {code}
> $ ./dse-4.5.1/bin/nodetool status
> Note: Ownership information does not include topology; for complete information, specify a keyspace
> Datacenter: Central
> ===================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID                               Rack
> UN  192.168.56.21  210.78 KB  256     32.1%  82eb2fca-4f57-467b-a972-93096ec5d69f  RAC1
> DN  192.168.56.23  2.22 GB    256     33.5%  a11bfac1-fad0-440b-bd68-7562a89ce3c7  RAC1
> UN  192.168.56.22  2.22 GB    256     34.4%  4250cb05-97be-4bac-887a-acc307d1bc0c  RAC1
> {code}
> While on the drained node we see this:
> {code}
> [datastax@DSE4 ~]$ ./dse-4.5.1/bin/nodetool drain
> [datastax@DSE4 ~]$ ./dse-4.5.1/bin/nodetool status
> Note: Ownership information does not include topology; for complete information, specify a keyspace
> Datacenter: Central
> ===================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID                               Rack
> UN  192.168.56.21  210.78 KB  256     32.1%  82eb2fca-4f57-467b-a972-93096ec5d69f  RAC1
> UN  192.168.56.23  2.22 GB    256     33.5%  a11bfac1-fad0-440b-bd68-7562a89ce3c7  RAC1
> UN  192.168.56.22  2.22 GB    256     34.4%  4250cb05-97be-4bac-887a-acc307d1bc0c  RAC1
> {code}
> Netstat shows outgoing connections from the drained node to other nodes as still established on port 7000, but the node is no longer listening on port 7000, which I believe is expected.
> However, the output of nodetool status on the drained node could be interpreted as misleading.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9069) debug-cql broken in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9069: - Fix Version/s: (was: 3.x) 3.1 > debug-cql broken in trunk > - > > Key: CASSANDRA-9069 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9069 > Project: Cassandra > Issue Type: Bug >Reporter: Robert Stupp >Priority: Minor > Fix For: 3.1 > > > {{debug-cql}} is broken on trunk. > At startup it just says: > {code} > Error: Exception thrown by the agent : java.lang.NullPointerException > {code} > That exception originates from JMX agent (which cannot bind). > It can be reproduced by starting C* locally and starting {{debug-cql}}. > Workaround is to comment out sourcing of {{cassandra-env.sh}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-5261) Remove token generator
[ https://issues.apache.org/jira/browse/CASSANDRA-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5261: - Fix Version/s: (was: 3.x) 3.0.0 > Remove token generator > -- > > Key: CASSANDRA-5261 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5261 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jonathan Ellis >Priority: Minor > Labels: triaged > Fix For: 3.0.0 > > > Obsoleted by vnodes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-2986) Fix short reads in range (and index?) scans
[ https://issues.apache.org/jira/browse/CASSANDRA-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-2986: - Issue Type: Improvement (was: Bug) > Fix short reads in range (and index?) scans > --- > > Key: CASSANDRA-2986 > URL: https://issues.apache.org/jira/browse/CASSANDRA-2986 > Project: Cassandra > Issue Type: Improvement >Reporter: Jonathan Ellis >Assignee: Jason Brown >Priority: Minor > Fix For: 3.x > > > See CASSANDRA-2643 for the [multi]get fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10511) Index summary downsampling prevents mmap access of large files after restart
[ https://issues.apache.org/jira/browse/CASSANDRA-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10511.
Fix Version/s: (was: 3.0.x) (was: 3.x) 3.1

> Index summary downsampling prevents mmap access of large files after restart
>
> Key: CASSANDRA-10511
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10511
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Benedict
> Fix For: 2.1.x, 2.2.x, 3.1
>
> {{SSTableReader.cloneWithNewSummarySampleLevel}} constructs a {{SegmentedFile.Builder}} but never populates it with any boundaries. For files larger than 2GB, this will result in their being accessed via buffered I/O after a restart.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
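For context on the boundaries mentioned above: a single mapped buffer in Java is limited to Integer.MAX_VALUE bytes (about 2GB), so a larger file must be split into segments at recorded boundaries before it can be mmapped; with no boundaries recorded, the reader has to fall back to buffered reads. An illustrative boundary computation only (this is not Cassandra's actual SegmentedFile code, and it ignores the refinement of aligning boundaries to record starts):

```java
// Sketch: split a file into mmap-able segments of at most maxSegment bytes.
// Each entry is the exclusive end offset of one segment.
public class SegmentBoundaries {
    public static long[] compute(long fileLength, long maxSegment) {
        int n = (int) ((fileLength + maxSegment - 1) / maxSegment); // ceil
        long[] bounds = new long[n];
        for (int i = 0; i < n; i++)
            bounds[i] = Math.min((i + 1) * maxSegment, fileLength);
        return bounds;
    }
}
```

In practice Cassandra records boundaries at safe record starts rather than fixed multiples, which is exactly the information the unpopulated builder in this bug was missing.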