[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when adding nodes to cluster

2016-01-08 Thread Terry Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088931#comment-15088931
 ] 

Terry Ma commented on CASSANDRA-10961:
--

Hi xiaost,
I replaced the patched jar on the new bootstrap node, but got the same error.
Do I need to replace the jar on all nodes?

> Not enough bytes error when adding nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, debug.1.log, 
> debug.logs.zip, netstats.1.log
>
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> 

[jira] [Created] (CASSANDRA-10985) OOM during bulk read (slice query) operation

2016-01-08 Thread sumit thakur (JIRA)
sumit thakur created CASSANDRA-10985:


 Summary: OOM during bulk read (slice query) operation
 Key: CASSANDRA-10985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10985
 Project: Cassandra
  Issue Type: Bug
  Components: Observability
 Environment: OS : Linux 6.5
RAM : 126GB
assign heap size: 8GB
Reporter: sumit thakur


The thread java.lang.Thread @ 0x55000a4f0 Thrift:6 keeps local variables with 
total size 16,214,953,728 (98.23%) bytes.

The memory is accumulated in one instance of "java.lang.Thread" loaded by 
"<system class loader>".
The stacktrace of this Thread is available. See stacktrace.
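The pattern behind this report is a Thrift slice read materializing an entire wide partition in one thread's local variables. The standard mitigation is to page through the slice in bounded chunks. A minimal pure-Python sketch of the paging idea follows; it is illustrative only, and `fetch_page`/the column naming are assumptions, not the Cassandra or driver API:

```python
def paged_slice(fetch_page, page_size=5000):
    """Iterate a large slice in bounded pages instead of one huge list.

    fetch_page(start, limit) is an assumed callable returning up to `limit`
    (column_name, value) pairs strictly after `start` (None means "from the
    beginning"); it stands in for a server-side slice query.
    """
    start = None
    while True:
        page = fetch_page(start, page_size)
        if not page:
            return
        for name, value in page:
            yield name, value
        if len(page) < page_size:
            return           # short page => slice exhausted
        start = page[-1][0]  # resume after the last column seen


# Toy backing store standing in for a wide partition.
COLUMNS = [("col%05d" % i, i) for i in range(12345)]

def fake_fetch(start, limit):
    # Return columns strictly after `start`, at most `limit` of them.
    idx = 0
    if start is not None:
        # Linear scan is fine for a toy example.
        idx = next(i for i, (n, _) in enumerate(COLUMNS) if n == start) + 1
    return COLUMNS[idx:idx + limit]

total = sum(1 for _ in paged_slice(fake_fetch, page_size=1000))
```

With page_size=1000 the reader never holds more than one page in memory at a time, instead of the full 12,345 columns at once.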



Keywords
java.lang.Thread
--
Trace: 

Thrift:6
  at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:48)
  at 
org.apache.cassandra.utils.ByteBufferUtil.read(Ljava/io/DataInput;I)Ljava/nio/ByteBuffer;
 (ByteBufferUtil.java:401)
  at 
org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
 (ByteBufferUtil.java:339)
  at 
org.apache.cassandra.db.marshal.AbstractType.readValue(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
 (AbstractType.java:391)
  at 
org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/LivenessInfo;Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;)Lorg/apache/cassandra/db/rows/Cell;
 (BufferCell.java:298)
  at 
org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;Lorg/apache/cassandra/db/LivenessInfo;)V
 (UnfilteredSerializer.java:453)
  at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;IILorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Row;
 (UnfilteredSerializer.java:431)
  at 
org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Unfiltered;
 (UnfilteredSerializer.java:360)
  at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Lorg/apache/cassandra/db/rows/Unfiltered;
 (UnfilteredRowIteratorSerializer.java:217)
  at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Ljava/lang/Object;
 (UnfilteredRowIteratorSerializer.java:210)
  at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
(AbstractIterator.java:47)
  at org.apache.cassandra.db.transform.BaseRows.hasNext()Z (BaseRows.java:108)
  at 
org.apache.cassandra.db.LegacyLayout$3.computeNext()Lorg/apache/cassandra/db/LegacyLayout$LegacyCell;
 (LegacyLayout.java:658)
  at org.apache.cassandra.db.LegacyLayout$3.computeNext()Ljava/lang/Object; 
(LegacyLayout.java:640)
  at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
(AbstractIterator.java:47)
  at 
org.apache.cassandra.thrift.CassandraServer.thriftifyColumns(Lorg/apache/cassandra/config/CFMetaData;Ljava/util/Iterator;)Ljava/util/List;
 (CassandraServer.java:112)
  at 
org.apache.cassandra.thrift.CassandraServer.thriftifyPartition(Lorg/apache/cassandra/db/rows/RowIterator;ZZI)Ljava/util/List;
 (CassandraServer.java:250)
  at 
org.apache.cassandra.thrift.CassandraServer.getSlice(Ljava/util/List;ZILorg/apache/cassandra/db/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
 (CassandraServer.java:270)
  at 
org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(Ljava/lang/String;Ljava/util/List;Lorg/apache/cassandra/thrift/ColumnParent;ILorg/apache/cassandra/thrift/SlicePredicate;Lorg/apache/cassandra/thrift/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
 (CassandraServer.java:566)
  at 
org.apache.cassandra.thrift.CassandraServer.multiget_slice(Ljava/util/List;Lorg/apache/cassandra/thrift/ColumnParent;Lorg/apache/cassandra/thrift/SlicePredicate;Lorg/apache/cassandra/thrift/ConsistencyLevel;)Ljava/util/Map;
 (CassandraServer.java:348)
  at 
org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Lorg/apache/cassandra/thrift/Cassandra$Iface;Lorg/apache/cassandra/thrift/Cassandra$multiget_slice_args;)Lorg/apache/cassandra/thrift/Cassandra$multiget_slice_result;
 (Cassandra.java:3716)
  at 

[jira] [Commented] (CASSANDRA-7715) Add a credentials cache to the PasswordAuthenticator

2016-01-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088963#comment-15088963
 ] 

Sam Tunnicliffe commented on CASSANDRA-7715:


bq. Any updates here?

Actually, yes. I have a patch pretty much completed, but with a couple of 
smallish things to tidy up (mostly adding more tests). I'll have it ready for 
review early next week.

> Add a credentials cache to the PasswordAuthenticator
> 
>
> Key: CASSANDRA-7715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7715
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Mike Adamson
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.x
>
>
> If the PasswordAuthenticator cached credentials for a short time it would 
> reduce the overhead of user journeys when they need to do multiple 
> authentications in quick succession.
> This cache should work in the same way as the cache in CassandraAuthorizer in 
> that if its TTL is set to 0 the cache will be disabled.
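The described semantics (entries live for a short TTL; a TTL of 0 disables the cache, matching CassandraAuthorizer) can be sketched as follows. This is an illustrative model in Python, not the actual Java patch; the class name, the `load_fn` hook, and the injectable clock are assumptions for the sketch:

```python
import time

class CredentialsCache:
    """Tiny TTL cache modelling the proposed behaviour: ttl == 0 disables it."""

    def __init__(self, ttl_seconds, load_fn, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.load = load_fn      # expensive lookup, e.g. a system_auth query
        self.clock = clock       # injectable for deterministic tests
        self._entries = {}       # username -> (credentials, expiry)

    def get(self, username):
        if self.ttl == 0:
            return self.load(username)   # caching disabled: always hit the store
        hit = self._entries.get(username)
        now = self.clock()
        if hit is not None and hit[1] > now:
            return hit[0]                 # fresh cache hit
        value = self.load(username)
        self._entries[username] = (value, now + self.ttl)
        return value
```

With ttl=0 every authentication round-trips to storage; with a small positive TTL, repeated authentications within the window are served from memory, which is the overhead reduction the issue asks for.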



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-08 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ea3ba687
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ea3ba687
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ea3ba687

Branch: refs/heads/trunk
Commit: ea3ba68722862afefdbe4a70088cf28de46fffd1
Parents: 3fc02df 87d80b4
Author: Sylvain Lebresne 
Authored: Fri Jan 8 10:32:58 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 10:32:58 2016 +0100

--
 NEWS.txt  |  2 ++
 conf/cassandra-env.ps1|  3 +++
 conf/cassandra-env.sh |  3 +++
 conf/jvm.options  | 18 +-
 debian/patches/002cassandra_logdir_fix.dpatch | 18 +++---
 5 files changed, 32 insertions(+), 12 deletions(-)
--




[1/3] cassandra git commit: Enable GC logging by default (3.0 version)

2016-01-08 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 1717e10aa -> 87d80b478
  refs/heads/trunk 3fc02dfcc -> ea3ba6872


Enable GC logging by default (3.0 version)

patch by Chris Lohfink; reviewed by aweisberg for CASSANDRA-10140


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87d80b47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87d80b47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87d80b47

Branch: refs/heads/cassandra-3.3
Commit: 87d80b478bd47770460ecf7a8713e00d2f53fcca
Parents: 1717e10
Author: Ariel Weisberg 
Authored: Tue Dec 29 14:33:26 2015 -0500
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 10:32:22 2016 +0100

--
 NEWS.txt  |  2 ++
 conf/cassandra-env.ps1|  3 +++
 conf/cassandra-env.sh |  3 +++
 conf/jvm.options  | 18 +-
 debian/patches/002cassandra_logdir_fix.dpatch | 18 +++---
 5 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index dcdd309..b6b9e92 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -56,6 +56,8 @@ Upgrading
- Custom index implementation should be aware that the method 
Indexer::indexes()
  has been removed as its contract was misleading and all custom 
implementation
  should have almost surely returned true inconditionally for that method.
+   - GC logging is now enabled by default (you can disable it in the 
jvm.options
+ file if you prefer).
 
 
 3.0

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 0326199..2a9acce 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -333,6 +333,9 @@ Function SetCassandraEnvironment
 
 ParseJVMInfo
 
+#GC log path has to be defined here since it needs to find CASSANDRA_HOME
+$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log"
+
 # Read user-defined JVM options from jvm.options file
 $content = Get-Content "$env:CASSANDRA_CONF\jvm.options"
 for ($i = 0; $i -lt $content.Count; $i++)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 477a4f3..6e1910c 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -156,6 +156,9 @@ if [ "x$MALLOC_ARENA_MAX" = "x" ] ; then
 export MALLOC_ARENA_MAX=4
 fi
 
+#GC log path has to be defined here because it needs to access CASSANDRA_HOME
+JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log"
+
 # Here we create the arguments that will get passed to the jvm when
 # starting cassandra.
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/jvm.options
--
diff --git a/conf/jvm.options b/conf/jvm.options
index ad0edb0..4aec619 100644
--- a/conf/jvm.options
+++ b/conf/jvm.options
@@ -222,14 +222,14 @@
 
 ### GC logging options -- uncomment to enable
 
-#-XX:+PrintGCDetails
-#-XX:+PrintGCDateStamps
-#-XX:+PrintHeapAtGC
-#-XX:+PrintTenuringDistribution
-#-XX:+PrintGCApplicationStoppedTime
-#-XX:+PrintPromotionFailure
+-XX:+PrintGCDetails
+-XX:+PrintGCDateStamps
+-XX:+PrintHeapAtGC
+-XX:+PrintTenuringDistribution
+-XX:+PrintGCApplicationStoppedTime
+-XX:+PrintPromotionFailure
 #-XX:PrintFLSStatistics=1
 #-Xloggc:/var/log/cassandra/gc.log
-#-XX:+UseGCLogFileRotation
-#-XX:NumberOfGCLogFiles=10
-#-XX:GCLogFileSize=10M
+-XX:+UseGCLogFileRotation
+-XX:NumberOfGCLogFiles=10
+-XX:GCLogFileSize=10M
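For context on how the flags above take effect: the env scripts consume jvm.options by reading it line by line and appending each non-comment line to JVM_OPTS, which is why uncommenting a flag enables it. A rough Python model of that parsing (the real logic lives in cassandra-env.sh and cassandra-env.ps1; this is only a sketch):

```python
def parse_jvm_options(text):
    """Collect JVM flags from a jvm.options-style file: one option per line,
    blank lines and '#' comment lines ignored (mirroring cassandra-env)."""
    opts = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            opts.append(line)
    return opts

sample = """\
### GC logging options -- uncomment to enable
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
#-XX:PrintFLSStatistics=1
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=10M
"""
flags = parse_jvm_options(sample)
```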

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/debian/patches/002cassandra_logdir_fix.dpatch
--
diff --git a/debian/patches/002cassandra_logdir_fix.dpatch 
b/debian/patches/002cassandra_logdir_fix.dpatch
index 8836eb4..87387b9 100644
--- a/debian/patches/002cassandra_logdir_fix.dpatch
+++ b/debian/patches/002cassandra_logdir_fix.dpatch
@@ -6,9 +6,9 @@
 
 @DPATCH@
 diff -urNad '--exclude=CVS' '--exclude=.svn' '--exclude=.git' 
'--exclude=.arch' '--exclude=.hg' '--exclude=_darcs' '--exclude=.bzr' 
cassandra~/bin/cassandra cassandra/bin/cassandra
 cassandra~/bin/cassandra   2014-09-15 19:42:28.0 -0500
-+++ cassandra/bin/cassandra2014-09-15 21:15:15.627505503 -0500
-@@ -134,7 +134,7 @@
+--- 

[2/3] cassandra git commit: Enable GC logging by default (3.0 version)

2016-01-08 Thread slebresne
Enable GC logging by default (3.0 version)

patch by Chris Lohfink; reviewed by aweisberg for CASSANDRA-10140


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87d80b47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87d80b47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87d80b47

Branch: refs/heads/trunk
Commit: 87d80b478bd47770460ecf7a8713e00d2f53fcca
Parents: 1717e10
Author: Ariel Weisberg 
Authored: Tue Dec 29 14:33:26 2015 -0500
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 10:32:22 2016 +0100

--
 NEWS.txt  |  2 ++
 conf/cassandra-env.ps1|  3 +++
 conf/cassandra-env.sh |  3 +++
 conf/jvm.options  | 18 +-
 debian/patches/002cassandra_logdir_fix.dpatch | 18 +++---
 5 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index dcdd309..b6b9e92 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -56,6 +56,8 @@ Upgrading
- Custom index implementation should be aware that the method 
Indexer::indexes()
  has been removed as its contract was misleading and all custom 
implementation
  should have almost surely returned true inconditionally for that method.
+   - GC logging is now enabled by default (you can disable it in the 
jvm.options
+ file if you prefer).
 
 
 3.0

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 0326199..2a9acce 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -333,6 +333,9 @@ Function SetCassandraEnvironment
 
 ParseJVMInfo
 
+#GC log path has to be defined here since it needs to find CASSANDRA_HOME
+$env:JVM_OPTS="$env:JVM_OPTS -Xloggc:$env:CASSANDRA_HOME/logs/gc.log"
+
 # Read user-defined JVM options from jvm.options file
 $content = Get-Content "$env:CASSANDRA_CONF\jvm.options"
 for ($i = 0; $i -lt $content.Count; $i++)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 477a4f3..6e1910c 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -156,6 +156,9 @@ if [ "x$MALLOC_ARENA_MAX" = "x" ] ; then
 export MALLOC_ARENA_MAX=4
 fi
 
+#GC log path has to be defined here because it needs to access CASSANDRA_HOME
+JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log"
+
 # Here we create the arguments that will get passed to the jvm when
 # starting cassandra.
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/conf/jvm.options
--
diff --git a/conf/jvm.options b/conf/jvm.options
index ad0edb0..4aec619 100644
--- a/conf/jvm.options
+++ b/conf/jvm.options
@@ -222,14 +222,14 @@
 
 ### GC logging options -- uncomment to enable
 
-#-XX:+PrintGCDetails
-#-XX:+PrintGCDateStamps
-#-XX:+PrintHeapAtGC
-#-XX:+PrintTenuringDistribution
-#-XX:+PrintGCApplicationStoppedTime
-#-XX:+PrintPromotionFailure
+-XX:+PrintGCDetails
+-XX:+PrintGCDateStamps
+-XX:+PrintHeapAtGC
+-XX:+PrintTenuringDistribution
+-XX:+PrintGCApplicationStoppedTime
+-XX:+PrintPromotionFailure
 #-XX:PrintFLSStatistics=1
 #-Xloggc:/var/log/cassandra/gc.log
-#-XX:+UseGCLogFileRotation
-#-XX:NumberOfGCLogFiles=10
-#-XX:GCLogFileSize=10M
+-XX:+UseGCLogFileRotation
+-XX:NumberOfGCLogFiles=10
+-XX:GCLogFileSize=10M

http://git-wip-us.apache.org/repos/asf/cassandra/blob/87d80b47/debian/patches/002cassandra_logdir_fix.dpatch
--
diff --git a/debian/patches/002cassandra_logdir_fix.dpatch 
b/debian/patches/002cassandra_logdir_fix.dpatch
index 8836eb4..87387b9 100644
--- a/debian/patches/002cassandra_logdir_fix.dpatch
+++ b/debian/patches/002cassandra_logdir_fix.dpatch
@@ -6,9 +6,9 @@
 
 @DPATCH@
 diff -urNad '--exclude=CVS' '--exclude=.svn' '--exclude=.git' 
'--exclude=.arch' '--exclude=.hg' '--exclude=_darcs' '--exclude=.bzr' 
cassandra~/bin/cassandra cassandra/bin/cassandra
 cassandra~/bin/cassandra   2014-09-15 19:42:28.0 -0500
-+++ cassandra/bin/cassandra2014-09-15 21:15:15.627505503 -0500
-@@ -134,7 +134,7 @@
+--- cassandra~/bin/cassandra   2015-10-27 14:35:22.0 -0500
 cassandra/bin/cassandra2015-10-27 14:41:38.0 -0500
+@@ -139,7 +139,7 @@
  

[jira] [Commented] (CASSANDRA-10887) Pending range calculator gives wrong pending ranges for moves

2016-01-08 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088892#comment-15088892
 ] 

Branimir Lambov commented on CASSANDRA-10887:
-

Pushed all branches to github to run tests:
|[2.0|https://github.com/blambov/cassandra/tree/kohlisankalp/10887]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-dtest/]|
|[2.1|https://github.com/blambov/cassandra/tree/kohlisankalp/10887-2.1]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-2.1-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-2.1-dtest/]|
|[2.2|https://github.com/blambov/cassandra/tree/kohlisankalp/10887-2.2]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-2.2-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-2.2-dtest/]|
|[3.0|https://github.com/blambov/cassandra/tree/kohlisankalp/10887-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-kohlisankalp-10887-3.0-dtest/]|

3.0 tests are quite unstable, but eventually got a clean enough run. Ready to 
commit.

> Pending range calculator gives wrong pending ranges for moves
> -
>
> Key: CASSANDRA-10887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Richard Low
>Assignee: sankalp kohli
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: CASSANDRA-10887.diff, CASSANDRA_10887_2.2.diff, 
> CASSANDRA_10887_3.0.diff, CASSANDRA_10887_v2.diff, CASSANDRA_10887_v3.diff
>
>
> My understanding is the PendingRangeCalculator is meant to calculate who 
> should receive extra writes during range movements. However, it adds the 
> wrong ranges for moves. An extreme example of this can be seen in the 
> following reproduction. Create a 5 node cluster (I did this on 2.0.16 and 
> 2.2.4) and a keyspace RF=3 and a simple table. Then start moving a node and 
> immediately kill -9 it. Now you see a node as down and moving in the ring. 
> Try a quorum write for a partition that is stored on that node - it will fail 
> with a timeout. Further, all CAS reads or writes fail immediately with 
> unavailable exception because they attempt to include the moving node twice. 
> This is likely to be the cause of CASSANDRA-10423.
> In my example I had this ring:
> 127.0.0.1  rack1   Up Normal  170.97 KB   20.00%  
> -9223372036854775808
> 127.0.0.2  rack1   Up Normal  124.06 KB   20.00%  
> -5534023222112865485
> 127.0.0.3  rack1   Down   Moving  108.7 KB40.00%  
> 1844674407370955160
> 127.0.0.4  rack1   Up Normal  142.58 KB   0.00%   
> 1844674407370955161
> 127.0.0.5  rack1   Up Normal  118.64 KB   20.00%  
> 5534023222112865484
> Node 3 was moving to -1844674407370955160. I added logging to print the 
> pending and natural endpoints. For ranges owned by node 3, node 3 appeared in 
> pending and natural endpoints. The blockFor is increased to 3 so we’re 
> effectively doing CL.ALL operations. This manifests as write timeouts and CAS 
> unavailables when the node is down.
> The correct pending range for this scenario is node 1 is gaining the range 
> (-1844674407370955160, 1844674407370955160). So node 1 should be added as a 
> destination for writes and CAS for this range, not node 3.
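The failure mode described above can be made concrete: for QUORUM with RF=3, blockFor is 2 plus one per pending replica for the range, so wrongly counting the moving node as its own pending replica pushes blockFor to 3 while that node is down. A small illustrative Python model (a deliberate simplification of the real pending-range and blockFor logic):

```python
def can_satisfy(natural, pending, down, rf=3):
    """Can a QUORUM write succeed? blockFor = quorum + |pending|, and only
    live members of natural | pending can respond."""
    block_for = rf // 2 + 1 + len(pending)
    live = (natural | pending) - down
    return len(live) >= block_for

natural = {"n3", "n4", "n5"}   # replicas of the range node 3 is moving

# Bug: the moving node n3 is double-counted as its own pending replica,
# so blockFor = 3 but only n4 and n5 can ever respond -> timeout.
assert not can_satisfy(natural, pending={"n3"}, down={"n3"})

# Fix per the report: the pending replica is the node *gaining* the
# range (n1), so three live responders exist even with n3 down.
assert can_satisfy(natural, pending={"n1"}, down={"n3"})
```

The buggy case is effectively a CL.ALL operation on a replica set containing a dead node, matching the observed write timeouts and CAS unavailables.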





[jira] [Commented] (CASSANDRA-10985) OOM during bulk read (slice query) operation

2016-01-08 Thread sumit thakur (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088957#comment-15088957
 ] 

sumit thakur commented on CASSANDRA-10985:
--

Object / Stack Frame    java.lang.Thread @ 0x55000a4f0
Name                    Thrift:6
Shallow Heap            120
Retained Heap           16,214,953,728
Context Class Loader    sun.misc.Launcher$AppClassLoader @ 0x55000
Is Daemon               true

Total: 6 entries


> OOM during bulk read (slice query) operation
> ---
>
> Key: CASSANDRA-10985
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10985
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
> Environment: OS : Linux 6.5
> RAM : 126GB
> assign heap size: 8GB
>Reporter: sumit thakur
>
> The thread java.lang.Thread @ 0x55000a4f0 Thrift:6 keeps local variables with 
> total size 16,214,953,728 (98.23%) bytes.
> The memory is accumulated in one instance of "java.lang.Thread" loaded by 
> "<system class loader>".
> The stacktrace of this Thread is available. See stacktrace.
> Keywords
> java.lang.Thread
> --
> Trace: 
> Thrift:6
>   at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:48)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.read(Ljava/io/DataInput;I)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:401)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:339)
>   at 
> org.apache.cassandra.db.marshal.AbstractType.readValue(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (AbstractType.java:391)
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/LivenessInfo;Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;)Lorg/apache/cassandra/db/rows/Cell;
>  (BufferCell.java:298)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;Lorg/apache/cassandra/db/LivenessInfo;)V
>  (UnfilteredSerializer.java:453)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;IILorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Row;
>  (UnfilteredSerializer.java:431)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredSerializer.java:360)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredRowIteratorSerializer.java:217)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Ljava/lang/Object;
>  (UnfilteredRowIteratorSerializer.java:210)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at org.apache.cassandra.db.transform.BaseRows.hasNext()Z (BaseRows.java:108)
>   at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext()Lorg/apache/cassandra/db/LegacyLayout$LegacyCell;
>  (LegacyLayout.java:658)
>   at org.apache.cassandra.db.LegacyLayout$3.computeNext()Ljava/lang/Object; 
> (LegacyLayout.java:640)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyColumns(Lorg/apache/cassandra/config/CFMetaData;Ljava/util/Iterator;)Ljava/util/List;
>  (CassandraServer.java:112)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyPartition(Lorg/apache/cassandra/db/rows/RowIterator;ZZI)Ljava/util/List;
>  (CassandraServer.java:250)
>   at 
> org.apache.cassandra.thrift.CassandraServer.getSlice(Ljava/util/List;ZILorg/apache/cassandra/db/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
>  (CassandraServer.java:270)
>   at 
> org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(Ljava/lang/String;Ljava/util/List;Lorg/apache/cassandra/thrift/ColumnParent;ILorg/apache/cassandra/thrift/SlicePredicate;Lorg/apache/cassandra/thrift/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
>  (CassandraServer.java:566)
>   at 
> 

[jira] [Commented] (CASSANDRA-10940) sstableloader should skip streaming SSTable generated in < 3.0.0

2016-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089660#comment-15089660
 ] 

Aleksey Yeschenko commented on CASSANDRA-10940:
---

I think we should first find a valid reason for that, and I doubt there is one.

In order of my preferences:
1. Fix streaming to restore the ability to stream older sstables
2. Error out in the beginning with {{upgradesstables}} instructions, until 1) 
is done 

> sstableloader should skip streaming SSTable generated in < 3.0.0
> 
>
> Key: CASSANDRA-10940
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10940
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Tools
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Since 3.0.0, [streaming does not support SSTable from version less than 
> 3.0.0|https://github.com/apache/cassandra/blob/0f5e780781ce3f0cb3732515dacc7e467571a7c9/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java#L116].
> {{sstableloader}} should skip streaming those files instead of erroring out 
> like below:
> {code}
> Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> java.lang.NullPointerException
> java.lang.RuntimeException: Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53)
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:544)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:101)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:421)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$186(SSTableLoader.java:121)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader$$Lambda$18/712974096.apply(Unknown
>  Source)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$178(LogAwareFileLister.java:75)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$29/1191654595.test(Unknown
>  Source)
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at 
> java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2965)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:77)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:49)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10959:

Assignee: Julien Blondeau

> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>Assignee: Julien Blondeau
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (here used for testing purposes), cqlsh fails with a timeout 
> error regardless of the --connect-timeout option you pass.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated to the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}.
> The workaround is to propagate the {{--connect-timeout}} option in cqlsh.py 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout"):
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}





[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089703#comment-15089703
 ] 

Paulo Motta commented on CASSANDRA-10866:
-

LGTM, let's wait test results before marking as ready to commit.

||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10686]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10686-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10686-dtest/lastCompletedBuild/testReport/]|

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-CF-Dropped-Mutation-Stats.patch, 
> 0001-CFCount.patch, 10866-Trunk.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.





[jira] [Commented] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089751#comment-15089751
 ] 

Aleksey Yeschenko commented on CASSANDRA-10924:
---

It's a static method though. So we could in theory check for existence of the 
overload and call that if available? Maybe.

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes adding the base table's {{CFMetaData}} to the 
> Index's optional static method for validating the custom index options:
> {{public static Map<String, String> validateOptions(CFMetaData cfm, 
> Map<String, String> options);}}
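The reflective fallback floated in the comments — try the new two-argument overload first, fall back to the legacy one — could be sketched as follows. {{TableMetadata}} and both index classes below are hypothetical stand-ins, not Cassandra's actual types; only the dual-signature lookup is the point:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class ValidateDispatch
{
    // Hypothetical stand-in for Cassandra's CFMetaData.
    public static class TableMetadata {}

    // An index written against the proposed new API.
    public static class NewStyleIndex
    {
        public static Map<String, String> validateOptions(TableMetadata cfm, Map<String, String> options)
        {
            return options; // a real index would validate options against cfm's columns
        }
    }

    // An index that only knows the legacy single-argument signature.
    public static class LegacyIndex
    {
        public static Map<String, String> validateOptions(Map<String, String> options)
        {
            return options;
        }
    }

    // Prefer the overload taking table metadata; fall back to the legacy one.
    @SuppressWarnings("unchecked")
    public static Map<String, String> dispatch(Class<?> indexClass, TableMetadata cfm, Map<String, String> options) throws Exception
    {
        try
        {
            Method m = indexClass.getMethod("validateOptions", TableMetadata.class, Map.class);
            return (Map<String, String>) m.invoke(null, cfm, options);
        }
        catch (NoSuchMethodException e)
        {
            Method m = indexClass.getMethod("validateOptions", Map.class);
            return (Map<String, String>) m.invoke(null, options);
        }
    }

    public static void main(String[] args) throws Exception
    {
        Map<String, String> opts = new HashMap<>();
        opts.put("indexed_column", "name");
        // Both index styles resolve without the caller knowing which overload exists.
        System.out.println(dispatch(NewStyleIndex.class, new TableMetadata(), opts));
        System.out.println(dispatch(LegacyIndex.class, new TableMetadata(), opts));
    }
}
```

Since {{getMethod}} matches parameter types exactly, the lookup is unambiguous; the cost is the extra branch in {{IndexMetadata}}'s validation path.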





[jira] [Commented] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089666#comment-15089666
 ] 

Tyler Hobbs commented on CASSANDRA-10959:
-

Thanks for the patch!  There was one other spot that needed the 
{{connect_timeout}}, so I made that quick change in my branch.  Here are the 
pending test runs:
||branch||testall||dtest||
|[CASSANDRA-10959|https://github.com/thobbs/cassandra/tree/CASSANDRA-10959]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10959-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10959-dtest]|
|[CASSANDRA-10959-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-10959-3.0]|none|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10959-3.0-dtest]|
|[CASSANDRA-10959-3.3|https://github.com/thobbs/cassandra/tree/CASSANDRA-10959-3.3]|none|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10959-3.3-dtest]|
|[CASSANDRA-10959-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-10959-trunk]|none|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10959-trunk-dtest]|

I didn't schedule {{testall}} runs for anything above 2.2, since cqlsh isn't 
tested at all by {{testall}}.

> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>  Labels: patch
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (here used for testing purposes), cqlsh fails with a timeout 
> error regardless of the --connect-timeout option you pass.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated to the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}.
> The workaround is to propagate the {{--connect-timeout}} option in cqlsh.py 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout"):
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}





[jira] [Commented] (CASSANDRA-10981) Consider striping view locks by key and cfid

2016-01-08 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089691#comment-15089691
 ] 

Carl Yeksigian commented on CASSANDRA-10981:


+1, once CI is happy.

> Consider striping view locks by key and cfid
> 
>
> Key: CASSANDRA-10981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10981
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> We use a striped lock to protect updates to tables with materialized views, 
> and the lock is currently striped by the partition key of the {{Mutation}}.  
> This causes concurrent updates to separate tables with the same partition key 
> to contend for the same lock, resulting in one or more of the mutations being 
> rescheduled on the {{MUTATION}} threadpool (potentially becoming an 
> asynchronous operation instead of a synchronous one, from the perspective 
> of local internal modifications).
> Since it's probably fairly common to use the same partition key across 
> multiple tables, I suggest that we add the cfid of the affected table to the 
> lock striping, and acquire one lock per affected table (with the same 
> rescheduling-under-contention behavior).
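A minimal sketch of the proposed striping, using plain JDK locks and a fixed stripe count (the class name, stripe count, and hash mixing are illustrative assumptions, not Cassandra's actual implementation):

```java
import java.util.Arrays;
import java.util.UUID;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ViewLockStriping
{
    private static final int STRIPES = 1024;
    private static final Lock[] LOCKS = new Lock[STRIPES];

    static
    {
        for (int i = 0; i < STRIPES; i++)
            LOCKS[i] = new ReentrantLock();
    }

    // Mix the partition key bytes with the table's cfid, so the same key in
    // two different tables maps (with high probability) to different stripes
    // and no longer contends for one lock.
    public static Lock lockFor(byte[] partitionKey, UUID cfId)
    {
        int h = 31 * Arrays.hashCode(partitionKey) + cfId.hashCode();
        return LOCKS[Math.floorMod(h, STRIPES)];
    }
}
```

Acquiring one such lock per affected table preserves the existing reschedule-on-contention behavior while removing the cross-table contention described above.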





[jira] [Commented] (CASSANDRA-9830) Option to disable bloom filter in highest level of LCS sstables

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089775#comment-15089775
 ] 

Paulo Motta commented on CASSANDRA-9830:


bq. The cstar runs that you kicked off didn't work because you aren't on the 
list of repos for cstar, so I kicked off a new ssd test. The increase looks 
modest, but there is an improvement.

Thanks for triggering the test again!

bq. Also, is there any way to get memory usage during the tests?

The last {{nodetool tablestats}} command should print memory usage stats in the 
console, but for some reason the console output is unavailable. Is there any 
way to retrieve it manually [~enigmacurry]?

bq. I think we should be skipping creating the bloom filter for the leveled 
major compaction as well. That's because in major compaction, while we aren't 
always adding at what ends up being the highest level after we are done, we are 
always writing the highest level for a given key. Plus, this will ensure that 
whichever level ends up as the highest will not have bloom filters.

Thanks for the feedback. That's right, we should definitely support this in 
major leveled compaction. I will update the patch and post back soon.

> Option to disable bloom filter in highest level of LCS sstables
> ---
>
> Key: CASSANDRA-9830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9830
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Jonathan Ellis
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: performance
> Fix For: 3.x
>
>
> We expect about 90% of data to be in the highest level of LCS in a fully 
> populated series.  (See also CASSANDRA-9829.)
> Thus if the user is primarily asking for data (partitions) that has actually 
> been inserted, the bloom filter on the highest level only helps reject 
> sstables about 10% of the time.
> We should add an option that suppresses bloom filter creation on top-level 
> sstables.  This will dramatically reduce memory usage for LCS and may even 
> improve performance as we no longer check a low-value filter.
> (This is also an idea from RocksDB.)





[jira] [Commented] (CASSANDRA-10940) sstableloader should skip streaming SSTable generated in < 3.0.0

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089621#comment-15089621
 ] 

Paulo Motta commented on CASSANDRA-10940:
-

+1 to [~iamaleksey]'s comments. We could optionally provide a 
{{--skip-incompatible}} option if there's any value in streaming only old 
sstables for some reason.
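The skip itself could be sketched as a filename-version filter: sstable file names embed a two-letter format version ("la" for 2.2-era files, "ma" for the first 3.0 format) that sorts lexicographically. This is an illustrative sketch under those assumptions, not sstableloader's actual code:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SkipOldSSTables
{
    // First streamable format version in 3.0; anything lexicographically
    // below this predates 3.0 and cannot be streamed.
    static final String MIN_STREAMABLE_VERSION = "ma";

    // e.g. "la-38395-big-Data.db" -> "la"
    static String versionOf(String fileName)
    {
        return fileName.split("-")[0];
    }

    // Keep only files whose format version is streamable, instead of
    // failing the whole bulk load on the first incompatible file.
    public static List<String> streamable(List<String> fileNames)
    {
        return fileNames.stream()
                        .filter(f -> versionOf(f).compareTo(MIN_STREAMABLE_VERSION) >= 0)
                        .collect(Collectors.toList());
    }
}
```

A {{--skip-incompatible}}-style flag would simply toggle between this filtering and the current fail-fast behavior.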

> sstableloader should skip streaming SSTable generated in < 3.0.0
> 
>
> Key: CASSANDRA-10940
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10940
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Tools
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Since 3.0.0, [streaming does not support SSTable from version less than 
> 3.0.0|https://github.com/apache/cassandra/blob/0f5e780781ce3f0cb3732515dacc7e467571a7c9/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java#L116].
> {{sstableloader}} should skip streaming those files instead of erroring 
> out like below:
> {code}
> Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> java.lang.NullPointerException
> java.lang.RuntimeException: Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53)
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:544)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:101)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:421)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$186(SSTableLoader.java:121)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader$$Lambda$18/712974096.apply(Unknown
>  Source)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$178(LogAwareFileLister.java:75)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$29/1191654595.test(Unknown
>  Source)
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at 
> java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2965)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:77)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:49)
> ... 4 more
> {code}





[jira] [Updated] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10959:

  Labels: cqlsh  (was: patch)
Priority: Minor  (was: Major)

> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (here used for testing purposes), cqlsh fails with a timeout 
> error regardless of the --connect-timeout option you pass.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated to the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}.
> The workaround is to propagate the {{--connect-timeout}} option in cqlsh.py 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout"):
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}





[jira] [Commented] (CASSANDRA-10688) Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector

2016-01-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089602#comment-15089602
 ] 

Michael Shuler commented on CASSANDRA-10688:


TE was poking around your branch, and we think that CASSANDRA-9303 being missing 
from this dev branch may be causing the dtest issues. A re-run is currently in 
progress, but a 3.0 HEAD rebase might be more fruitful for a clean comparison 
with 3.0 HEAD, and then we could re-run the dtests. Just a thought.

> Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
> 
>
> Key: CASSANDRA-10688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10688
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Testing
>Reporter: Jeremiah Jordan
>Assignee: Ariel Weisberg
> Fix For: 3.0.x
>
>
> Running some tests against cassandra-3.0 
> 9fc957cf3097e54ccd72e51b2d0650dc3e83eae0
> The tests are just running cassandra-stress write and read while adding and 
> removing nodes from the cluster.  After the test runs when I go back through 
> logs I find the following Stackoverflow fairly often:
> ERROR [Strong-Reference-Leak-Detector:1] 2015-11-11 00:04:10,638  
> Ref.java:413 - Stackoverflow [private java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier.runOnClose,
>  final java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$DropPageCache.andThen, 
> final org.apache.cassandra.cache.InstrumentingCache 
> org.apache.cassandra.io.sstable.SSTableRewriter$InvalidateKeys.cache, private 
> final org.apache.cassandra.cache.ICache 
> org.apache.cassandra.cache.InstrumentingCache.map, private final 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap 
> org.apache.cassandra.cache.ConcurrentLinkedHashCache.map, final 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.evictionDeque, 
> com.googlecode.concurrentlinkedhashmap.Linked 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque.first, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> ... (repeated a whole bunch more)  
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> final java.lang.Object 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.key, 
> public final byte[] org.apache.cassandra.cache.KeyCacheKey.key





[jira] [Created] (CASSANDRA-10989) Move away from SEDA to TPC

2016-01-08 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-10989:
-

 Summary: Move away from SEDA to TPC
 Key: CASSANDRA-10989
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10989
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko


Since its inception, Cassandra has been utilising [SEDA 
|http://www.eecs.harvard.edu/~mdw/papers/seda-sosp01.pdf] at its core.

As originally conceived, it means every request is split into several stages, 
and each stage is backed by a thread pool. That imposes certain challenges:
- thread parking/unparking overheads (partially improved by SEPExecutor in 
CASSANDRA-4718)
- extensive context switching (i-/d- caches thrashing)
- less than optimal multiple writer/multiple reader data structures for 
memtables, partitions, metrics, more
- hard to grok concurrent code
- large number of GC roots, longer TTSP
- increased complexity for moving data structures off java heap
- inability to easily balance writes/reads/compaction/flushing

Latency implications of SEDA have been acknowledged by the authors themselves - 
see 2010 [retrospective on 
SEDA|http://matt-welsh.blogspot.co.uk/2010/07/retrospective-on-seda.html].

To fix these issues (and more), two years ago at NGCC [~benedict] suggested 
moving Cassandra away from SEDA to the more mechanically sympathetic thread per 
core architecture (TPC). See the slides from the original presentation 
[here|https://docs.google.com/presentation/d/19_U8I7mq9JKBjgPmmi6Hri3y308QEx1FmXLt-53QqEw/edit?ts=56265eb4#slide=id.g98ad32b25_1_19].

In a nutshell, each core would become a logical shared nothing micro instance 
of Cassandra, taking over a portion of the node’s range {{*}}.

Client connections will be assigned randomly to one of the cores (sharing a 
single listen socket). A request that cannot be served by the client’s core 
will be proxied to the one owning the data, similar to the way we perform 
remote coordination today.

Each thread (pinned to an exclusive core) would have a single event loop, and 
be responsible for both serving requests and performing maintenance tasks 
(flushing, compaction, repair), scheduling them intelligently.

One notable exception from the original proposal is that we cannot, 
unfortunately, use linux AIO for file I/O, as it's only properly implemented 
for xfs. We might, however, have a specialised implementation for xfs and 
Windows (based on IOCP) later. In the meantime, we have no choice other 
than to hand off I/O that cannot be served from cache to a separate threadpool.

Transitioning from SEDA to TPC will be done in stages, incrementally and in 
parallel.

This is a high-level overview meta-ticket that will track JIRA issues for each 
individual stage.

{{*}} they’ll share certain things still, like schema, gossip, file I/O 
threadpool(s), and maybe MessagingService.
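The per-core routing described above can be illustrated with plain JDK executors. This is a toy sketch of the idea only (class and method names are invented; real TPC would pin threads to cores and use a proper event loop, not a thread pool):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TpcSketch
{
    // Each "core" is a single-threaded event loop owning a slice of the
    // token range; there is no shared mutable state between them.
    private final ExecutorService[] cores;

    public TpcSketch(int nCores)
    {
        cores = new ExecutorService[nCores];
        for (int i = 0; i < nCores; i++)
            cores[i] = Executors.newSingleThreadExecutor();
    }

    // Deterministic ownership: a token always maps to the same core.
    private int owner(long token)
    {
        return Math.floorMod(Long.hashCode(token), cores.length);
    }

    // A request landing on the wrong core is handed off to the owner,
    // mirroring how remote coordination proxies to the owning replica.
    public CompletableFuture<String> submit(long token, String request)
    {
        return CompletableFuture.supplyAsync(
            () -> request + " served by core " + owner(token),
            cores[owner(token)]);
    }

    public void shutdown()
    {
        for (ExecutorService c : cores)
            c.shutdown();
    }
}
```

Because each core's loop is single-threaded, all per-shard data structures (memtables, metrics) can be accessed without locks — which is the main payoff the ticket lists over SEDA's multi-writer structures.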





[jira] [Updated] (CASSANDRA-8520) Prototype thread per core

2016-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8520:
-
Assignee: (was: Aleksey Yeschenko)

> Prototype thread per core
> -
>
> Key: CASSANDRA-8520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8520
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jonathan Ellis
>  Labels: performance
> Fix For: 3.x
>
>
> Let's prototype the best possible scenario for how well we can perform with a 
> thread per core design by simplifying everything we can.  For instance,
> - No HH, no RR, no replication at all
> - No MessagingService
> - No compaction (so test a workload w/o overwrites)
> - No repair
> - Just local writes and reads
> If we can't get a big win (say at least 2x) with these simplifications then I 
> think we can say that it's not worth it.
> If we can get a big win, then we can either refine the prototype to make it 
> more realistic or start working on it in earnest.





[jira] [Updated] (CASSANDRA-10989) Move away from SEDA to TPC

2016-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10989:
--
Labels: performance  (was: )

> Move away from SEDA to TPC
> --
>
> Key: CASSANDRA-10989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10989
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>  Labels: performance
>
> Since its inception, Cassandra has been utilising [SEDA 
> |http://www.eecs.harvard.edu/~mdw/papers/seda-sosp01.pdf] at its core.
> As originally conceived, it means every request is split into several stages, 
> and each stage is backed by a thread pool. That imposes certain challenges:
> - thread parking/unparking overheads (partially improved by SEPExecutor in 
> CASSANDRA-4718)
> - extensive context switching (i-/d- caches thrashing)
> - less than optimal multiple writer/multiple reader data structures for 
> memtables, partitions, metrics, more
> - hard to grok concurrent code
> - large number of GC roots, longer TTSP
> - increased complexity for moving data structures off java heap
> - inability to easily balance writes/reads/compaction/flushing
> Latency implications of SEDA have been acknowledged by the authors themselves 
> - see 2010 [retrospective on 
> SEDA|http://matt-welsh.blogspot.co.uk/2010/07/retrospective-on-seda.html].
> To fix these issues (and more), two years ago at NGCC [~benedict] suggested 
> moving Cassandra away from SEDA to the more mechanically sympathetic thread 
> per core architecture (TPC). See the slides from the original presentation 
> [here|https://docs.google.com/presentation/d/19_U8I7mq9JKBjgPmmi6Hri3y308QEx1FmXLt-53QqEw/edit?ts=56265eb4#slide=id.g98ad32b25_1_19].
> In a nutshell, each core would become a logical shared nothing micro instance 
> of Cassandra, taking over a portion of the node’s range {{*}}.
> Client connections will be assigned randomly to one of the cores (sharing a 
> single listen socket). A request that cannot be served by the client’s core 
> will be proxied to the one owning the data, similar to the way we perform 
> remote coordination today.
> Each thread (pinned to an exclusive core) would have a single event loop, and 
> be responsible for both serving requests and performing maintenance tasks 
> (flushing, compaction, repair), scheduling them intelligently.
> One notable exception from the original proposal is that we cannot, 
> unfortunately, use linux AIO for file I/O, as it's only properly implemented 
> for xfs. We might, however, have a specialised implementation for xfs and 
> Windows (based on IOCP) later. In the meantime, we have no choice other 
> than to hand off I/O that cannot be served from cache to a separate 
> threadpool.
> Transitioning from SEDA to TPC will be done in stages, incrementally and in 
> parallel.
> This is a high-level overview meta-ticket that will track JIRA issues for 
> each individual stage.
> {{*}} they’ll share certain things still, like schema, gossip, file I/O 
> threadpool(s), and maybe MessagingService.





[jira] [Resolved] (CASSANDRA-8520) Prototype thread per core

2016-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8520.
--
   Resolution: Won't Fix
Fix Version/s: (was: 3.x)

Superseded by CASSANDRA-10989.

> Prototype thread per core
> -
>
> Key: CASSANDRA-8520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8520
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jonathan Ellis
>  Labels: performance
>
> Let's prototype the best possible scenario for how well we can perform with a 
> thread per core design by simplifying everything we can.  For instance,
> - No HH, no RR, no replication at all
> - No MessagingService
> - No compaction (so test a workload w/o overwrites)
> - No repair
> - Just local writes and reads
> If we can't get a big win (say at least 2x) with these simplifications then I 
> think we can say that it's not worth it.
> If we can get a big win, then we can either refine the prototype to make it 
> more realistic or start working on it in earnest.





[jira] [Commented] (CASSANDRA-9830) Option to disable bloom filter in highest level of LCS sstables

2016-01-08 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089748#comment-15089748
 ] 

Carl Yeksigian commented on CASSANDRA-9830:
---

The cstar runs that you kicked off didn't work because you aren't on the list 
of repos for cstar, so I kicked off a new [ssd 
test|http://cstar.datastax.com/tests/id/7ebab860-b48f-11e5-9d2a-0256e416528f]. 
The increase looks modest, but there is an improvement.

[~enigmacurry] looks like the bdplab test never kicked off; can you take a 
look? Also, is there any way to get memory usage during the tests?

I think we should be skipping creating the bloom filter for the leveled major 
compaction as well. That's because in major compaction, while we aren't always 
adding at what ends up being the highest level after we are done, we are always 
writing the highest level for a given key. Plus, this will ensure that 
whichever level ends up as the highest will not have bloom filters.

Otherwise, code looks good.

> Option to disable bloom filter in highest level of LCS sstables
> ---
>
> Key: CASSANDRA-9830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9830
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Jonathan Ellis
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: performance
> Fix For: 3.x
>
>
> We expect about 90% of data to be in the highest level of LCS in a fully 
> populated series.  (See also CASSANDRA-9829.)
> Thus if the user is primarily asking for data (partitions) that has actually 
> been inserted, the bloom filter on the highest level only helps reject 
> sstables about 10% of the time.
> We should add an option that suppresses bloom filter creation on top-level 
> sstables.  This will dramatically reduce memory usage for LCS and may even 
> improve performance as we no longer check a low-value filter.
> (This is also an idea from RocksDB.)





[jira] [Commented] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089607#comment-15089607
 ] 

Sam Tunnicliffe commented on CASSANDRA-10924:
-

Hmm, yes I can see how that may make validation tricky for some custom 
implementations. The problem is with changing the method signature in a 
non-major version. I know it will complicate {{IndexMetadata}} somewhat, but 
could we extend the check to look for both signatures and call whichever is 
defined? 
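A minimal sketch of that both-signatures lookup, using plain reflection. 
The {{CFMetaData}} stand-in and the two index classes below are hypothetical 
placeholders, not the real Cassandra types:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Placeholder for the real org.apache.cassandra.config.CFMetaData.
class CFMetaData {}

// Custom index exposing only the old one-argument validator.
class LegacyIndex
{
    public static Map<String, String> validateOptions(Map<String, String> options)
    {
        return new HashMap<>(); // empty map: no unrecognized options
    }
}

// Custom index exposing the proposed two-argument validator.
class NewIndex
{
    public static Map<String, String> validateOptions(CFMetaData cfm, Map<String, String> options)
    {
        return new HashMap<>();
    }
}

public class ValidateDispatch
{
    // Prefer the new (CFMetaData, Map) signature; fall back to (Map) if absent.
    static Method findValidator(Class<?> indexClass)
    {
        try
        {
            return indexClass.getMethod("validateOptions", CFMetaData.class, Map.class);
        }
        catch (NoSuchMethodException e)
        {
            // old-style implementation; try the legacy signature below
        }
        try
        {
            return indexClass.getMethod("validateOptions", Map.class);
        }
        catch (NoSuchMethodException e)
        {
            return null; // the index defines no validator at all
        }
    }
}
```

{{IndexMetadata}} would then invoke the returned {{Method}} with or without 
the table metadata depending on its parameter count.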


> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add the base table's {{CFMetaData}} to the 
> index's optional static method for validating the custom index options:
> {{public static Map<String, String> validateOptions(CFMetaData cfm, 
> Map<String, String> options);}}





[jira] [Updated] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10959:

Fix Version/s: 3.0.x
   2.2.x

> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>  Labels: patch
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (used here for testing purposes), cqlsh fails with a timeout 
> error regardless of the --connect-timeout option passed.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated on the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}
> The workaround is to propagate, in cqlsh.py, the {{--connect-timeout}} option 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout")
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}





[jira] [Assigned] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-01-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-10990:
---

Assignee: Paulo Motta

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> became no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.





[jira] [Assigned] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2016-01-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-10916:
---

Assignee: Joshua McKenzie

> TestGlobalRowKeyCache.functional_test fails on Windows
> --
>
> Key: CASSANDRA-10916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Joshua McKenzie
> Fix For: 3.0.x
>
>
> {{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
> hard on Windows when a node fails to start:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/
> I have not dug much into the failure history, so I don't know how closely the 
> failures are related.





[jira] [Commented] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Julien Blondeau (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089851#comment-15089851
 ] 

Julien Blondeau commented on CASSANDRA-10959:
-

Thanks for the quick turnaround!

> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>Assignee: Julien Blondeau
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (used here for testing purposes), cqlsh fails with a timeout 
> error regardless of the --connect-timeout option passed.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated on the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}
> The workaround is to propagate, in cqlsh.py, the {{--connect-timeout}} option 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout")
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}





[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089915#comment-15089915
 ] 

Paulo Motta commented on CASSANDRA-10866:
-

Tests look good. Marking as ready to commit.

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-CF-Dropped-Mutation-Stats.patch, 
> 0001-CFCount.patch, 10866-Trunk.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-01-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10990:

Reviewer: Yuki Morishita

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Paulo Motta
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> became no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream the older versioned sstables.  We should do 
> some work to make this still possible to be consistent with what 
> CASSANDRA-5772 provided.





[jira] [Commented] (CASSANDRA-10532) Allow LWT operation on static column with only partition keys

2016-01-08 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089895#comment-15089895
 ] 

Carl Yeksigian commented on CASSANDRA-10532:


I pushed up a new version. The two important branches are 
[2.1|https://github.com/carlyeks/cassandra/tree/ticket/10532/2.1] and 
[3.0|https://github.com/carlyeks/cassandra/tree/ticket/10532/3.0]; the rest 
are just merges.

While fixing the 3.0 test to report on the static columns, I realized that 
we don't need to separate out the partition keys from the primary keys, as 
they are checked elsewhere; we also don't need to add a special message, 
since there is no way to get that far without specifying the partition key.

||2.1||3.3||
|[branch|https://github.com/carlyeks/cassandra/tree/ticket/10532/2.1]|[branch|https://github.com/carlyeks/cassandra/tree/ticket/10532/3.3]|
|[utest|http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-10532-2.1-testall/]|[utest|http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-10532-3.3-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-10532-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-10532-3.3-dtest/]|

> Allow LWT operation on static column with only partition keys
> -
>
> Key: CASSANDRA-10532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10532
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 2.2.0
>Reporter: DOAN DuyHai
>Assignee: Carl Yeksigian
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Schema
> {code:sql}
> CREATE TABLE IF NOT EXISTS achilles_embedded.entity_with_static_column(
> id bigint,
> uuid uuid,
> static_col text static,
> value text,
> PRIMARY KEY(id, uuid));
> {code}
> When trying to prepare the following query
> {code:sql}
> DELETE static_col FROM achilles_embedded.entity_with_static_column WHERE 
> id=:id_Eq IF static_col=:static_col;
> {code}
> I got the error *DELETE statements must restrict all PRIMARY KEY columns with 
> equality relations in order to use IF conditions, but column 'uuid' is not 
> restricted*
> Since the mutation only impacts the static column and the CAS check is on the 
> static column, it makes sense to require only the partition key





[jira] [Commented] (CASSANDRA-9830) Option to disable bloom filter in highest level of LCS sstables

2016-01-08 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089900#comment-15089900
 ] 

Carl Yeksigian commented on CASSANDRA-9830:
---

Kicked off a new run without the flush step: 
http://cstar.datastax.com/tests/id/8943a864-b647-11e5-b06f-0256e416528f

Yeah, it seems surprising we wouldn't be using significantly less memory for 
the bloom filters, considering how many of the sstables are in the top level, 
but it might be down to the compact step. We'll see what happens with this run.

> Option to disable bloom filter in highest level of LCS sstables
> ---
>
> Key: CASSANDRA-9830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9830
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Jonathan Ellis
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: performance
> Fix For: 3.x
>
>
> We expect about 90% of data to be in the highest level of LCS in a fully 
> populated series.  (See also CASSANDRA-9829.)
> Thus if the user is primarily asking for data (partitions) that has actually 
> been inserted, the bloom filter on the highest level only helps reject 
> sstables about 10% of the time.
> We should add an option that suppresses bloom filter creation on top-level 
> sstables.  This will dramatically reduce memory usage for LCS and may even 
> improve performance as we no longer check a low-value filter.
> (This is also an idea from RocksDB.)





[jira] [Commented] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089805#comment-15089805
 ] 

Sam Tunnicliffe commented on CASSANDRA-10924:
-

:) yes that's what I was trying (& clearly failing) to say

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add the base table's {{CFMetaData}} to the 
> index's optional static method for validating the custom index options:
> {{public static Map<String, String> validateOptions(CFMetaData cfm, 
> Map<String, String> options);}}





[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2016-01-08 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089947#comment-15089947
 ] 

Anubhav Kale commented on CASSANDRA-10866:
--

Thanks.

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability, Tools
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-CF-Dropped-Mutation-Stats.patch, 
> 0001-CFCount.patch, 10866-Trunk.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.





[jira] [Commented] (CASSANDRA-9830) Option to disable bloom filter in highest level of LCS sstables

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089891#comment-15089891
 ] 

Paulo Motta commented on CASSANDRA-9830:


I managed to extract some cfstats metrics from the [stats 
json|http://cstar.datastax.com/tests/artifacts/7ebab860-b48f-11e5-9d2a-0256e416528f/stats/stats.7ebab860-b48f-11e5-9d2a-0256e416528f.json].
 Below are some observations:
* The bloom filter false positive ratio is always higher in the branch with 
{{skip_top_level_bloom_filter}}. This is expected, as the test case includes 
some reads of non-existent partitions; the actual use case for this option 
is when reads are known to hit existing data, so testing non-existent reads 
doesn't tell us much.
* What surprises me a bit is that the bloom filter memory usage is not 
always lower with the {{skip_top_level_bloom_filter}} option, as can be seen 
in the metrics for the {{blade-11-2a}} node. I suspect this might be due to 
the major compaction step, which does not skip top-level bloom filters in 
the current implementation. Could you trigger another run without the major 
compaction step so we can see if this holds, [~carlyeks]? Do you have any 
other explanation for this? Thanks!

* blade-11-2a
** trunk {noformat}
SSTable count: 25
SSTables in each level: [0, 10, 15, 0, 0, 0, 0, 0, 0]
Space used (live): 4273599852
Space used (total): 4273599852
Bloom filter false positives: 8
Bloom filter false ratio: 0.0
Bloom filter space used: 61442184
Bloom filter off heap memory used: 61441984
{noformat}
** skip_top_level_bloom_filter {noformat}
SSTable count: 26
SSTables in each level: [0, 10, 16, 0, 0, 0, 0, 0, 0]
Space used (live): 4269482588
Space used (total): 4269482588
Bloom filter false positives: 272
Bloom filter false ratio: 0.1
Bloom filter space used: 92524640
Bloom filter off heap memory used: 92524560
{noformat}


* blade-11-3a
** trunk {noformat}
SSTable count: 26
SSTables in each level: [0, 10, 16, 0, 0, 0, 0, 0, 0]
Space used (live): 4318124528
Space used (total): 4318124528
Bloom filter false positives: 17
Bloom filter false ratio: 0.0
Bloom filter space used: 69421560
Bloom filter off heap memory used: 69421352
{noformat}
** skip_top_level_bloom_filter {noformat}
SSTable count: 25
SSTables in each level: [0, 10, 15, 0, 0, 0, 0, 0, 0]
Space used (live): 4195812995
Space used (total): 4195812995
Bloom filter false positives: 364
Bloom filter false ratio: 0.1
Bloom filter space used: 56484240
Bloom filter off heap memory used: 56484160
{noformat}


* blade-11-4a
** trunk {noformat}
SSTable count: 25
SSTables in each level: [0, 10, 15, 0, 0, 0, 0, 0, 0]
Space used (live): 4269592570
Space used (total): 4269592570
Bloom filter false positives: 9
Bloom filter false ratio: 0.0
Bloom filter space used: 61316032
Bloom filter off heap memory used: 61315832
{noformat}
** skip_top_level_bloom_filter {noformat}
SSTable count: 25
SSTables in each level: [0, 10, 15, 0, 0, 0, 0, 0, 0]
Space used (live): 4195876894
Space used (total): 4195876894
Bloom filter false positives: 543
Bloom filter false ratio: 0.2
Bloom filter space used: 56474560
Bloom filter off heap memory used: 56474480
{noformat}

> Option to disable bloom filter in highest level of LCS sstables
> ---
>
> Key: CASSANDRA-9830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9830
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Jonathan Ellis
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: performance
> Fix For: 3.x
>
>
> We expect about 90% of data to be in the highest level of LCS in a fully 
> populated series.  (See also CASSANDRA-9829.)
> Thus if the user is primarily asking for data (partitions) that has actually 
> been inserted, the bloom filter on the highest level only helps reject 
> sstables about 10% of the time.
> We should add an option that suppresses bloom filter creation on top-level 
> sstables.  This will dramatically reduce memory usage for LCS and may even 
> improve performance as we no longer check a low-value filter.
> (This is also an idea from RocksDB.)





cassandra git commit: Stripe MV locks by key plus cfid to reduce contention

2016-01-08 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 273ea780d -> 20e6750df


Stripe MV locks by key plus cfid to reduce contention

Patch by Tyler Hobbs; reviewed by Carl Yeksigian for CASSANDRA-10981


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20e6750d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20e6750d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20e6750d

Branch: refs/heads/trunk
Commit: 20e6750df73cf5cb483659b996911445489ef322
Parents: 273ea78
Author: Tyler Hobbs 
Authored: Fri Jan 8 14:59:41 2016 -0600
Committer: Tyler Hobbs 
Committed: Fri Jan 8 14:59:41 2016 -0600

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/Keyspace.java  | 74 
 .../apache/cassandra/db/view/ViewManager.java   |  4 +-
 3 files changed, 49 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20e6750d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 67cb67b..3efd6a4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.4
+ * Stripe view locks by key and table ID to reduce contention (CASSANDRA-10981)
  * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
  * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20e6750d/src/java/org/apache/cassandra/db/Keyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/Keyspace.java 
b/src/java/org/apache/cassandra/db/Keyspace.java
index 7b4f79b..0ec94ea 100644
--- a/src/java/org/apache/cassandra/db/Keyspace.java
+++ b/src/java/org/apache/cassandra/db/Keyspace.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.db;
 
 import java.io.File;
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.util.*;
 import java.util.concurrent.*;
 import java.util.concurrent.atomic.AtomicLong;
@@ -408,46 +409,60 @@ public class Keyspace
 if (TEST_FAIL_WRITES && metadata.name.equals(TEST_FAIL_WRITES_KS))
 throw new RuntimeException("Testing write failures");
 
-Lock lock = null;
+Lock[] locks = null;
 boolean requiresViewUpdate = updateIndexes && 
viewManager.updatesAffectView(Collections.singleton(mutation), false);
 
 if (requiresViewUpdate)
 {
 mutation.viewLockAcquireStart.compareAndSet(0L, 
System.currentTimeMillis());
-lock = ViewManager.acquireLockFor(mutation.key().getKey());
 
-if (lock == null)
+// the order of lock acquisition doesn't matter (from a deadlock 
perspective) because we only use tryLock()
+Collection<UUID> columnFamilyIds = mutation.getColumnFamilyIds();
+Iterator<UUID> idIterator = columnFamilyIds.iterator();
+locks = new Lock[columnFamilyIds.size()];
+
+for (int i = 0; i < columnFamilyIds.size(); i++)
 {
-if ((System.currentTimeMillis() - mutation.createdAt) > 
DatabaseDescriptor.getWriteRpcTimeout())
+UUID cfid = idIterator.next();
+int lockKey = Objects.hash(mutation.key().getKey(), cfid);
+Lock lock = ViewManager.acquireLockFor(lockKey);
+if (lock == null)
 {
-logger.trace("Could not acquire lock for {}", 
ByteBufferUtil.bytesToHex(mutation.key().getKey()));
-Tracing.trace("Could not acquire MV lock");
-throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
+// we will either time out or retry, so release all 
acquired locks
+for (int j = 0; j < i; j++)
+locks[j].unlock();
+
+if ((System.currentTimeMillis() - mutation.createdAt) > 
DatabaseDescriptor.getWriteRpcTimeout())
+{
+logger.trace("Could not acquire lock for {} and table 
{}", ByteBufferUtil.bytesToHex(mutation.key().getKey()), 
columnFamilyStores.get(cfid).name);
+Tracing.trace("Could not acquire MV lock");
+throw new WriteTimeoutException(WriteType.VIEW, 
ConsistencyLevel.LOCAL_ONE, 0, 1);
+}
+else
+{
+// This view update can't happen right now. so rather 
than keep this thread busy
+// we will re-apply ourself to the queue and try again 
later
+StageManager.getStage(Stage.MUTATION).execute(() -> {
+if 
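The hunk above is hard to follow through the email line wrapping (and is 
truncated), so here is a rough standalone sketch of the commit's core idea: 
stripe MV locks by (partition key, table id), acquire one lock per affected 
table with {{tryLock()}}, and release everything on any failure. The stripe 
count and helper names are assumptions for illustration; the real code uses 
{{ViewManager.acquireLockFor}} rather than a bare lock array:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.UUID;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MvLockStriping
{
    // Stripe count is a made-up constant for illustration.
    static final int LOCK_COUNT = 1024;
    static final Lock[] LOCKS = new Lock[LOCK_COUNT];
    static
    {
        for (int i = 0; i < LOCK_COUNT; i++)
            LOCKS[i] = new ReentrantLock();
    }

    // Key and table id together pick the stripe, so two tables sharing a
    // partition key no longer contend on the same lock.
    static Lock lockFor(byte[] partitionKey, UUID tableId)
    {
        int stripe = Math.floorMod(Objects.hash(Arrays.hashCode(partitionKey), tableId), LOCK_COUNT);
        return LOCKS[stripe];
    }

    // Acquisition order doesn't matter for deadlock purposes because only
    // tryLock() is used: on any failure, every lock taken so far is released.
    static boolean tryAcquireAll(byte[] partitionKey, List<UUID> tableIds, List<Lock> acquired)
    {
        for (UUID tableId : tableIds)
        {
            Lock lock = lockFor(partitionKey, tableId);
            if (!lock.tryLock())
            {
                for (Lock held : acquired)
                    held.unlock();
                acquired.clear();
                return false; // caller times out or requeues the mutation
            }
            acquired.add(lock);
        }
        return true;
    }
}
```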

[jira] [Comment Edited] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090010#comment-15090010
 ] 

Paulo Motta edited comment on CASSANDRA-10907 at 1/8/16 9:54 PM:
-

Overall looks good but we cannot change the methods from 
{{StorageServiceMBean}} as this is a public interface and might be used by 
other systems.

I propose you add a new method {{takeSnapshot(String tag, Map<String, String> 
options, String... entities)}}, where the {{options}} map may only contain the 
{{skipFlush}} option for the time being, but may be extended in the future with 
more options. The {{entities}} array will contain strings in the format 
ks\[.cf\], meaning take a snapshot of keyspaces and/or specific cfs. In this 
way, we don't need to create a new method in the future if we add a new option. 

You should also add a {{@Deprecated}} annotation to the previous methods and 
javadocs, similar to the {{forceRepairAsync}} deprecation notices. It would be 
nice to unify the implementation of {{takeMultipleTableSnapshot}}, 
{{takeTableSnapshot}}, {{takeSnapshot}} to use the new method 
{{takeSnapshot(String tag, Map<String, String> options, String... entities)}}.


was (Author: pauloricardomg):
Overall looks good but we cannot change the methods from 
{{StorageServiceMBean}} as this is a public interface and might be used by 
other systems.

I propose you add a new method {{takeSnapshot(String tag, Map 
options, String... entities)}}, where the {{options}} map may only contain the 
{{skipFlush}} option for the time being, but may be extended in the future with 
more options. The {{entities}} array will contain strings in the format 
ks\[.cf\], meaning take a snapshot of keyspaces and/or specific cfs. In this 
way, we don't need to create a new method in the future if we add a new option. 

You should also add a {{@Deprecated}} annotation to the previous methods and 
javadocs, similar to the {{forceRepairAsync}} deprecation notices. It would be 
nice to unify the implementation of {takeMultipleTableSnapshot}}, 
{{takeTableSnapshot}}, {{takeSnapshot}} to use the new method 
{{takeSnapshot(String tag, Map options, String... entities)}}.
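The proposal above (one options-driven entry point plus deprecated 
delegating overloads) might look roughly like this; class and field names 
here are illustrative stand-ins, not the actual {{StorageService}} code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class SnapshotService
{
    // Records what would be snapshotted, standing in for the real work.
    final List<String> taken = new ArrayList<>();

    // Proposed unified method: options currently holds only "skipFlush";
    // entities are "ks" or "ks.cf" strings.
    public void takeSnapshot(String tag, Map<String, String> options, String... entities)
    {
        boolean skipFlush = Boolean.parseBoolean(options.getOrDefault("skipFlush", "false"));
        for (String entity : entities)
            taken.add(tag + ":" + entity + (skipFlush ? ":noflush" : ":flush"));
    }

    // Existing MBean method kept for compatibility, now delegating.
    @Deprecated
    public void takeSnapshot(String tag, String... keyspaceNames)
    {
        takeSnapshot(tag, Collections.<String, String>emptyMap(), keyspaceNames);
    }
}
```

Adding a future option then only means a new map key, not another MBean 
method.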

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Updated] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10907:

Reviewer: Paulo Motta

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090012#comment-15090012
 ] 

Paulo Motta commented on CASSANDRA-10907:
-

Please click submit patch when you have a new version ready.

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-08 Thread Juliano Vidal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090044#comment-15090044
 ] 

Juliano Vidal commented on CASSANDRA-10961:
---

It works!
Just finished joining the 7th node to my existing cluster.

Thanks all!!

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, debug.1.log, 
> debug.logs.zip, netstats.1.log
>
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 

[jira] [Commented] (CASSANDRA-10952) NullPointerException in Gossiper.getHostId

2016-01-08 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090148#comment-15090148
 ] 

Joel Knighton commented on CASSANDRA-10952:
---

You are correct - this looks to be a different issue than [CASSANDRA-10089]. 
I'm not immediately able to find the problem.

A few more pieces of information would be helpful:
1. Can you provide full logs from both a node that was already in the cluster 
and a bootstrapping node?
2. Is the bootstrapped node missing from all nodes in the cluster or just some?
3. What does nodetool status on a missing bootstrapped node report?

Thanks.

> NullPointerException in Gossiper.getHostId
> --
>
> Key: CASSANDRA-10952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10952
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dikang Gu
>Assignee: Joel Knighton
>
> We added some nodes to our cluster; some of them, after finishing 
> bootstrap, are not shown in the `nodetool status` output of other nodes.
> I checked the logs and found this:
> {code}
> 2015-12-29_05:06:23.89461 ERROR 05:06:23 Exception in thread 
> Thread[GossipStage:9,5,main]
> 2015-12-29_05:06:23.89463 java.lang.NullPointerException: null
> 2015-12-29_05:06:23.89463   at 
> org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:811) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1664)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1485)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1156) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89465   at 
> org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1138) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89465   at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1095) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89466   at 
> org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:58)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89466   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89469   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_45]
> 2015-12-29_05:06:23.89469   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_45]
> 2015-12-29_05:06:23.89469   at java.lang.Thread.run(Thread.java:744) 
> ~[na:1.7.0_45]
> {code}
> This looks different from CASSANDRA-10089, so I created a new JIRA.





[jira] [Commented] (CASSANDRA-10965) Shadowable tombstones can continue to shadow view results when timestamps match

2016-01-08 Thread Taiyuan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090159#comment-15090159
 ] 

Taiyuan Zhang commented on CASSANDRA-10965:
---

I ran the script, and here is the output:

{code}
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-----
 0 | 1 |   1

(1 rows)

 c | k | val
---+---+-----
 0 | 1 |   1

(1 rows)
cqlsh:mykeyspace> UPDATE base USING TIMESTAMP 1 SET c = 1 WHERE k = 1;
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-----
 1 | 1 |   1

(1 rows)

 c | k | val
---+---+-----
{code}

So the problem is: after the update using the same timestamp, the row is no 
longer shown when queried from the materialized view?
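For what it's worth, the tie-breaking the ticket describes can be modeled in a few lines of Python. This is a toy sketch of the timestamp comparison only, not Cassandra's actual deletion code:

```python
# Toy model of shadowable-tombstone suppression (not Cassandra's code).
def is_shadowed(row_timestamp, tombstone_timestamp):
    """A write is shadowed when its timestamp is <= the tombstone's.

    The '=' in this comparison is the heart of CASSANDRA-10965: a row
    re-inserted with the same timestamp as an earlier shadowable
    tombstone stays hidden in the view.
    """
    return row_timestamp <= tombstone_timestamp

# The repro script's sequence: shadow tombstone at ts=2, re-insert at ts=2.
assert is_shadowed(2, 2)      # same timestamp -> still shadowed (the bug)
assert not is_shadowed(3, 2)  # a strictly newer write becomes visible
```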

> Shadowable tombstones can continue to shadow view results when timestamps 
> match
> ---
>
> Key: CASSANDRA-10965
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10965
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
> Attachments: shadow-ts.cql
>
>
> I've attached a script which reproduces the issue. The first time we insert 
> with {{TIMESTAMP 2}}, we are inserting a new row which has the same timestamp 
> as the previous shadow tombstone, and it continues to be shadowed by that 
> tombstone because we shadow values with the same timestamp.





[jira] [Commented] (CASSANDRA-10985) OOM during bulk read(slice query) operation

2016-01-08 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090139#comment-15090139
 ] 

Jack Krupansky commented on CASSANDRA-10985:


How big a slice are you trying to read? I'd recommend reading no more than 5K 
columns in a single request and issuing multiple requests instead.

Very large operations are an anti-pattern even if they do manage to sort of 
work.

Was this working for you before and then suddenly stopped, or is this the 
first time you have tried a slice of this size?

You're dealing with Thrift, so don't expect too much support.
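The advice above (bound each slice and page through the row) can be sketched as follows. `fetch_slice` is a hypothetical stand-in for whatever Thrift `get_slice` wrapper the client uses; the resume logic assumes columns come back in name order:

```python
# Sketch: page a wide row in bounded slices instead of one huge read.
# fetch_slice(start, limit) is a hypothetical client call returning up
# to `limit` (column_name, value) pairs starting at `start`, in order.
def read_in_pages(fetch_slice, limit=5000):
    columns = []
    start = ""  # empty start = beginning of the row
    while True:
        page = fetch_slice(start=start, limit=limit)
        columns.extend(page)
        if len(page) < limit:      # short page -> row exhausted
            return columns
        # Resume just past the last column seen on the next request.
        start = page[-1][0] + "\x00"

# Fake backend over 12,345 columns to show the loop terminates.
data = [("col%08d" % i, i) for i in range(12345)]
def fake_fetch(start, limit):
    return [c for c in data if c[0] >= start][:limit]

assert len(read_in_pages(fake_fetch, limit=5000)) == 12345
```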


> OOM during bulk read(slice query) operation
> ---
>
> Key: CASSANDRA-10985
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10985
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
> Environment: OS : Linux 6.5
> RAM : 126GB
> assign heap size: 8GB
>Reporter: sumit thakur
>
> The thread java.lang.Thread @ 0x55000a4f0 Thrift:6 keeps local variables with 
> total size 16,214,953,728 (98.23%) bytes.
> The memory is accumulated in one instance of "java.lang.Thread" loaded by 
> "".
> The stacktrace of this Thread is available. See stacktrace.
> Keywords
> java.lang.Thread
> --
> Trace: 
> Thrift:6
>   at java.lang.OutOfMemoryError.()V (OutOfMemoryError.java:48)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.read(Ljava/io/DataInput;I)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:401)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:339)
>   at 
> org.apache.cassandra.db.marshal.AbstractType.readValue(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (AbstractType.java:391)
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/LivenessInfo;Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;)Lorg/apache/cassandra/db/rows/Cell;
>  (BufferCell.java:298)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;Lorg/apache/cassandra/db/LivenessInfo;)V
>  (UnfilteredSerializer.java:453)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;IILorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Row;
>  (UnfilteredSerializer.java:431)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredSerializer.java:360)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredRowIteratorSerializer.java:217)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Ljava/lang/Object;
>  (UnfilteredRowIteratorSerializer.java:210)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at org.apache.cassandra.db.transform.BaseRows.hasNext()Z (BaseRows.java:108)
>   at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext()Lorg/apache/cassandra/db/LegacyLayout$LegacyCell;
>  (LegacyLayout.java:658)
>   at org.apache.cassandra.db.LegacyLayout$3.computeNext()Ljava/lang/Object; 
> (LegacyLayout.java:640)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyColumns(Lorg/apache/cassandra/config/CFMetaData;Ljava/util/Iterator;)Ljava/util/List;
>  (CassandraServer.java:112)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyPartition(Lorg/apache/cassandra/db/rows/RowIterator;ZZI)Ljava/util/List;
>  (CassandraServer.java:250)
>   at 
> org.apache.cassandra.thrift.CassandraServer.getSlice(Ljava/util/List;ZILorg/apache/cassandra/db/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
>  (CassandraServer.java:270)
>   at 
> 

[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-01-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090025#comment-15090025
 ] 

DOAN DuyHai commented on CASSANDRA-10661:
-

A minor remark: shouldn't we take the opportunity of this integration into C* 
to *rename* the *SUFFIX* mode to *CONTAINS*?
Indeed, the *NORMAL* and *SPARSE* indexing modes are quite self-explanatory, 
whereas the *SUFFIX* mode not only allows searching on suffixes but also on 
prefixes. 

Users can be confused by the name and think it only works for suffix 
search.
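To illustrate the naming concern with a toy matcher (plain Python, unrelated to SASI's actual index structures): what the mode implements is substring matching, which "CONTAINS" describes and "SUFFIX" does not:

```python
# Toy illustration of why "CONTAINS" names the behavior better than
# "SUFFIX": the mode matches a term anywhere in the value, not only at
# the end. Plain substring matching; this is not SASI code.
def suffix_mode_match(value, term):
    return term in value  # matches prefixes, suffixes, and middles alike

assert suffix_mode_match("cassandra", "cass")   # prefix hit
assert suffix_mode_match("cassandra", "andra")  # suffix hit
assert suffix_mode_match("cassandra", "ssan")   # middle hit
```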

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There 
> are still a couple of things to work out regarding 3.x, since it currently 
> targets the 2.0 release. I want to make this an umbrella issue for all of 
> the things related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline 
> Cassandra 3.x release.





[jira] [Comment Edited] (CASSANDRA-10958) Range query with filtering interacts badly with static columns

2016-01-08 Thread Taiyuan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090127#comment-15090127
 ] 

Taiyuan Zhang edited comment on CASSANDRA-10958 at 1/8/16 10:52 PM:


The cause of this bizarre output is the following code:

{code:title=SelectStatement.java}
// If there is no rows, then provided the select was a full partition selection
// (i.e. not a 2ndary index search and there was no condition on clustering columns),
// we want to include static columns and we're done.
if (!partition.hasNext())
{
    if (!staticRow.isEmpty() && (!restrictions.usesSecondaryIndexing() || cfm.isStaticCompactTable())
        && !restrictions.hasClusteringColumnsRestriction())
    {
        result.newRow(protocolVersion);
        for (ColumnDefinition def : selection.getColumns())
        {
            switch (def.kind)
            {
                case PARTITION_KEY:
                    result.add(keyComponents[def.position()]);
                    break;
                case STATIC:
                    addValue(result, def, staticRow, nowInSec, protocolVersion);
                    break;
                default:
                    result.add((ByteBuffer)null);
            }
        }
    }
    return;
}
{code}

Why do we need to keep the static row? Can anyone give me a case where we need 
to keep the static row?


was (Author: firstprayer):
The cause of this bizarre output is the following code:

{code}
// If there is no rows, then provided the select was a full partition selection
// (i.e. not a 2ndary index search and there was no condition on clustering columns),
// we want to include static columns and we're done.
if (!partition.hasNext())
{
    if (!staticRow.isEmpty() && (!restrictions.usesSecondaryIndexing() || cfm.isStaticCompactTable())
        && !restrictions.hasClusteringColumnsRestriction())
    {
        result.newRow(protocolVersion);
        for (ColumnDefinition def : selection.getColumns())
        {
            switch (def.kind)
            {
                case PARTITION_KEY:
                    result.add(keyComponents[def.position()]);
                    break;
                case STATIC:
                    addValue(result, def, staticRow, nowInSec, protocolVersion);
                    break;
                default:
                    result.add((ByteBuffer)null);
            }
        }
    }
    return;
}
{code}

Why do we need to keep the static row? Can anyone give me a case where we need 
to keep the static row?

> Range query with filtering interacts badly with static columns
> --
>
> Key: CASSANDRA-10958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10958
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Taiyuan Zhang
>Priority: Minor
>
>  I'm playing with Cassandra 3. I added a secondary index on an integer 
> column, and then wanted to do a range query. First it threw an error:
> {code}
> InvalidRequest: code=2200 [Invalid query] message="No supported secondary 
> index found for the non primary key columns restrictions"
> {code}
> So I added ALLOW FILTERING:
> {code}
> cqlsh:mykeyspace> SELECT * FROM test ;
>  id | id2 | age | extra
> ----+-----+-----+-------
>   1 |   1 |   1 |     1
>   2 |   2 |   2 |     2
> (2 rows)
> cqlsh:mykeyspace > CREATE INDEX test_age on test (extra) ;
> cqlsh:mykeyspace > select * FROM test WHERE extra < 2 ALLOW FILTERING ;
>  id | id2  | age | extra
> ----+------+-----+-------
>   1 |    1 |   1 |     1
>   2 | null |   2 |  null
> (2 rows)
> {code}
> My schema is:
> {code}
> CREATE TABLE mykeyspace.test (
> id int,
> id2 int,
> age int static,
> extra int,
> PRIMARY KEY (id, id2)
> ) 
> {code}
> I don't know if this is by design or not, but it really does look like a BUG 
> to me.





[jira] [Comment Edited] (CASSANDRA-10952) NullPointerException in Gossiper.getHostId

2016-01-08 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090148#comment-15090148
 ] 

Joel Knighton edited comment on CASSANDRA-10952 at 1/8/16 11:22 PM:


You are correct - this looks to be a different issue than [CASSANDRA-10089]. 
I'm not immediately able to find the problem.

A few more pieces of information would be helpful:
1. Can you provide full logs from both a node that was already in the cluster 
and a missing bootstrapped node?
2. Is the bootstrapped node missing from all nodes in the cluster or just some?
3. What does nodetool status on a missing bootstrapped node report?

Thanks.


was (Author: jkni):
You are correct - this looks to be a different issue than [CASSANDRA-10089]. 
I'm not immediately able to find the problem.

A few more pieces of information would be helpful:
1. Can you provide full logs from both a node that was already in the cluster 
and a bootstrapping node?
2. Is the bootstrapped node missing from all nodes in the cluster or just some?
3. What does nodetool status on a missing bootstrapped node report?

Thanks.

> NullPointerException in Gossiper.getHostId
> --
>
> Key: CASSANDRA-10952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10952
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dikang Gu
>Assignee: Joel Knighton
>
> We added some nodes to our cluster; some of them, after finishing 
> bootstrap, are not shown in the `nodetool status` output of other nodes.
> I checked the logs and found this:
> {code}
> 2015-12-29_05:06:23.89461 ERROR 05:06:23 Exception in thread 
> Thread[GossipStage:9,5,main]
> 2015-12-29_05:06:23.89463 java.lang.NullPointerException: null
> 2015-12-29_05:06:23.89463   at 
> org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:811) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1664)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1485)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89464   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1156) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89465   at 
> org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1138) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89465   at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1095) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89466   at 
> org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:58)
>  
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89466   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) 
> ~[apache-cassandra-2.1.8+git20151215.c8ed2ab16.jar:2.1.8+git20151215.c8ed2ab16]
> 2015-12-29_05:06:23.89469   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_45]
> 2015-12-29_05:06:23.89469   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_45]
> 2015-12-29_05:06:23.89469   at java.lang.Thread.run(Thread.java:744) 
> ~[na:1.7.0_45]
> {code}
> This looks different from CASSANDRA-10089, so I created a new JIRA.





[jira] [Commented] (CASSANDRA-10981) Consider striping view locks by key and cfid

2016-01-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089637#comment-15089637
 ] 

Tyler Hobbs commented on CASSANDRA-10981:
-

Okay, I've pushed a second commit to the same branch that uses the simpler 
CounterMutation-like approach of building the lock key.
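The striping scheme under discussion can be sketched like this. It is a simplified model of a striped lock keyed by a `(cfid, partition key)` pair, in the spirit of the ticket, not the actual patch (which builds the key differently and uses Guava's `Striped`):

```python
import threading

# Simplified model of striping view locks by (cfid, partition_key)
# rather than by partition key alone, so concurrent updates to
# different tables sharing a partition key no longer contend for the
# same lock. Mirrors the idea in CASSANDRA-10981, not the real code.
N_STRIPES = 1024
stripes = [threading.Lock() for _ in range(N_STRIPES)]

def view_lock(cfid, partition_key):
    # Fold the table id into the stripe choice so the lock differs per
    # table even when the partition key is identical.
    return stripes[hash((cfid, partition_key)) % N_STRIPES]

# Same (table, key) always maps to the same lock object.
assert view_lock("cf-a", b"k1") is view_lock("cf-a", b"k1")
```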

> Consider striping view locks by key and cfid
> 
>
> Key: CASSANDRA-10981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10981
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> We use a striped lock to protect updates to tables with materialized views, 
> and the lock is currently striped by the partition key of the {{Mutation}}.  
> This causes concurrent updates to separate tables with the same partition key 
> to contend for the same lock, resulting in one or more of the mutations being 
> rescheduled on the {{MUTATION}} threadpool (potentially becoming an 
> asynchronous operation instead a synchronous operations, from the perspective 
> of local internal modifications).
> Since it's probably fairly common to use the same partition key across 
> multiple tables, I suggest that we add the cfid of the affected table to the 
> lock striping, and acquire one lock per affected table (with the same 
> rescheduling-under-contention behavior).





[jira] [Updated] (CASSANDRA-10959) missing timeout option propagation in cqlsh (cqlsh.py)

2016-01-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10959:

Description: 
On a slow cluster (used here for testing purposes), cqlsh fails with a timeout 
error regardless of the --connect-timeout option you pass.

Here is a sample call:
{noformat}
cqlsh 192.168.XXX.YYY
Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
OperationTimedOut('errors=None, last_host=None',)})
{noformat}
{noformat}
cqlsh --connect-timeout=30 192.168.XXX.YYY
Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
OperationTimedOut('errors=None, last_host=None',)})
{noformat}

Debugging shows that the timeout is not properly propagated to the underlying 
{{ResponseWaiter.deliver()}} method in 
{{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}

The workaround is to propagate, in cqlsh.py, the {{--connect-timeout}} option 
when initializing the cluster connection object (i.e. add the kwarg 
"control_connection_timeout" in addition to the existing kwarg 
"connect_timeout"):
{noformat}
Cluster(
,
control_connection_timeout=float(connect_timeout),
connect_timeout=connect_timeout)
{noformat}

  was:
On a slow cluster (here used for testing purpose), cqlsh fails with a timeout 
error, whatever --connect-timeout option you can pass.

Here is a sample call:
cqlsh 192.168.XXX.YYY
Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
OperationTimedOut('errors=None, last_host=None',)})

cqlsh --connect-timeout=30 192.168.XXX.YYY
Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
OperationTimedOut('errors=None, last_host=None',)})

Debugging shows that the timeout is not properly propagated on the underlying 
ResponseWaiter.deliver() method in 
/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py

Workaround is to propagate, in cqlsh.py, the --connect-timeout option when 
initialize the cluster connection object (i.e. add kwarg 
"control_connection_timeout" in addition to the existing kwarg 
"connect_timeout")
Cluster(
,
control_connection_timeout=float(connect_timeout),
connect_timeout=connect_timeout)


> missing timeout option propagation in cqlsh (cqlsh.py)
> --
>
> Key: CASSANDRA-10959
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10959
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: linux
>Reporter: Julien Blondeau
>  Labels: patch
> Fix For: 3.x
>
> Attachments: 10959-3.1.1.txt
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> On a slow cluster (used here for testing purposes), cqlsh fails with a 
> timeout error regardless of the --connect-timeout option you pass.
> Here is a sample call:
> {noformat}
> cqlsh 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> {noformat}
> cqlsh --connect-timeout=30 192.168.XXX.YYY
> Connection error: ('Unable to connect to any servers', {'192.168.XXX.YYY': 
> OperationTimedOut('errors=None, last_host=None',)})
> {noformat}
> Debugging shows that the timeout is not properly propagated to the underlying 
> {{ResponseWaiter.deliver()}} method in 
> {{/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/connection.py}}
> The workaround is to propagate, in cqlsh.py, the {{--connect-timeout}} option 
> when initializing the cluster connection object (i.e. add the kwarg 
> "control_connection_timeout" in addition to the existing kwarg 
> "connect_timeout"):
> {noformat}
> Cluster(
> ,
> control_connection_timeout=float(connect_timeout),
> connect_timeout=connect_timeout)
> {noformat}
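The workaround's effect can be sketched as a small helper that builds the keyword arguments cqlsh would hand to the driver's `Cluster`. The kwarg names `connect_timeout` and `control_connection_timeout` are the ones the report cites; the helper itself is hypothetical, not cqlsh.py code:

```python
# Hypothetical helper mirroring the cqlsh.py workaround: apply the
# --connect-timeout value to BOTH driver timeouts, since the control
# connection has its own timeout that is not derived from
# connect_timeout.
def cluster_kwargs(connect_timeout):
    timeout = float(connect_timeout)
    return {
        "connect_timeout": timeout,
        "control_connection_timeout": timeout,
    }

# cqlsh --connect-timeout=30 would then build:
assert cluster_kwargs("30") == {
    "connect_timeout": 30.0,
    "control_connection_timeout": 30.0,
}
```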





[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in 3.0

2016-01-08 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10990:
-
Summary: Support streaming of older version sstables in 3.0  (was: Support 
streaming of older version sstables in (3.0 to 2.1/2.2))

> Support streaming of older version sstables in 3.0
> --
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> is no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream those older versioned sstables.  We should 
> do some work to make this possible again, consistent with what 
> CASSANDRA-5772 provided.





[jira] [Commented] (CASSANDRA-10940) sstableloader should skip streaming SSTables generated in < 3.0.0

2016-01-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089675#comment-15089675
 ] 

Aleksey Yeschenko commented on CASSANDRA-10940:
---

See CASSANDRA-10990.

> sstableloader should skip streaming SSTables generated in < 3.0.0
> 
>
> Key: CASSANDRA-10940
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10940
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Tools
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> Since 3.0.0, [streaming does not support SSTables from versions less than 
> 3.0.0|https://github.com/apache/cassandra/blob/0f5e780781ce3f0cb3732515dacc7e467571a7c9/src/java/org/apache/cassandra/io/sstable/SSTableSimpleIterator.java#L116].
> {{sstableloader}} should skip streaming those files instead of erroring out 
> like below:
> {code}
> Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> java.lang.NullPointerException
> java.lang.RuntimeException: Failed to list files in 
> /home/yuki/.ccm/2.1.11/node1/data/keyspace1/standard1-5242ae50a9b311e585b29dc952593398
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53)
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:544)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:76)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:165)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:101)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:421)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.lambda$openSSTables$186(SSTableLoader.java:121)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader$$Lambda$18/712974096.apply(Unknown
>  Source)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.lambda$innerList$178(LogAwareFileLister.java:75)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$29/1191654595.test(Unknown
>  Source)
> at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
> at 
> java.util.TreeMap$EntrySpliterator.forEachRemaining(TreeMap.java:2965)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
> at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.innerList(LogAwareFileLister.java:77)
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:49)
> ... 4 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10990) Support streaming of older version sstables in (3.0 to 2.1/2.2)

2016-01-08 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10990:
-
Summary: Support streaming of older version sstables in (3.0 to 2.1/2.2)  
(was: Supporting streaming of older version sstables in (3.0 to 2.1/2.2))

> Support streaming of older version sstables in (3.0 to 2.1/2.2)
> ---
>
> Key: CASSANDRA-10990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>
> In 2.0 we introduced support for streaming older versioned sstables 
> (CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
> is no longer supported.  So currently, while 3.0 can read sstables in the 
> 2.1/2.2 format, it cannot stream those older versioned sstables.  We should 
> do some work to make this possible again, to stay consistent with what 
> CASSANDRA-5772 provided.





[jira] [Created] (CASSANDRA-10990) Supporting streaming of older version sstables in (3.0 to 2.1/2.2)

2016-01-08 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-10990:


 Summary: Supporting streaming of older version sstables in (3.0 to 
2.1/2.2)
 Key: CASSANDRA-10990
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10990
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Jeremy Hanna


In 2.0 we introduced support for streaming older versioned sstables 
(CASSANDRA-5772).  In 3.0, because of the rewrite of the storage layer, this 
is no longer supported.  So currently, while 3.0 can read sstables in the 
2.1/2.2 format, it cannot stream those older versioned sstables.  We should do 
some work to make this possible again, to stay consistent with what 
CASSANDRA-5772 provided.





[jira] [Commented] (CASSANDRA-10688) Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector

2016-01-08 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089070#comment-15089070
 ] 

Benedict commented on CASSANDRA-10688:
--

LGTM. There seem to be some issues with dtests, though they are probably down 
to a flaky environment.

> Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
> 
>
> Key: CASSANDRA-10688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10688
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Testing
>Reporter: Jeremiah Jordan
>Assignee: Ariel Weisberg
> Fix For: 3.0.x
>
>
> Running some tests against cassandra-3.0 
> 9fc957cf3097e54ccd72e51b2d0650dc3e83eae0
> The tests are just running cassandra-stress write and read while adding and 
> removing nodes from the cluster.  After the test runs, when I go back through 
> the logs, I find the following stack overflow fairly often:
> ERROR [Strong-Reference-Leak-Detector:1] 2015-11-11 00:04:10,638  
> Ref.java:413 - Stackoverflow [private java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier.runOnClose,
>  final java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$DropPageCache.andThen, 
> final org.apache.cassandra.cache.InstrumentingCache 
> org.apache.cassandra.io.sstable.SSTableRewriter$InvalidateKeys.cache, private 
> final org.apache.cassandra.cache.ICache 
> org.apache.cassandra.cache.InstrumentingCache.map, private final 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap 
> org.apache.cassandra.cache.ConcurrentLinkedHashCache.map, final 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.evictionDeque, 
> com.googlecode.concurrentlinkedhashmap.Linked 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque.first, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> ... (repeated a whole bunch more)  
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> final java.lang.Object 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.key, 
> public final byte[] org.apache.cassandra.cache.KeyCacheKey.key





[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-01-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090476#comment-15090476
 ] 

Pavel Yaskevich commented on CASSANDRA-10661:
-

Sounds good, Doan! We can do that.

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it is currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of 
> the work related to integrating SASI, which is also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline 
> Cassandra 3.x release.





[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090010#comment-15090010
 ] 

Paulo Motta commented on CASSANDRA-10907:
-

Overall this looks good, but we cannot change the methods on 
{{StorageServiceMBean}}, as it is a public interface and may be used by other 
systems.

I propose you add a new method {{takeSnapshot(String tag, Map<String, String> 
options, String... entities)}}, where the {{options}} map may only contain the 
{{skipFlush}} option for the time being but can be extended with more options 
in the future. The {{entities}} array will contain strings in the format 
ks\[.cf\], meaning take a snapshot of whole keyspaces and/or specific cfs. This 
way, we won't need to add a new method every time we add an option.

You should also add a {{@Deprecated}} annotation to the previous methods and 
javadocs, similar to the {{forceRepairAsync}} deprecation notices. It would be 
nice to unify the implementations of {{takeMultipleTableSnapshot}}, 
{{takeTableSnapshot}}, and {{takeSnapshot}} to delegate to the new 
{{takeSnapshot(String tag, Map<String, String> options, String... entities)}}.
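A minimal sketch of how the proposed method could interpret its arguments. The 
method shape and the {{skipFlush}} option come from this proposal; the helper 
names here are hypothetical and not part of Cassandra:

```java
import java.util.Arrays;
import java.util.Map;

// Hypothetical sketch of the proposed options/entities handling; not the
// actual Cassandra implementation.
public class SnapshotArgs {
    // Split a "ks" or "ks.cf" entity string into {keyspace, table-or-null}.
    static String[] parseEntity(String entity) {
        int dot = entity.indexOf('.');
        return dot < 0
            ? new String[] { entity, null }
            : new String[] { entity.substring(0, dot), entity.substring(dot + 1) };
    }

    // Read the skipFlush option from the options map, defaulting to false.
    static boolean skipFlush(Map<String, String> options) {
        return Boolean.parseBoolean(options.getOrDefault("skipFlush", "false"));
    }

    public static void main(String[] args) {
        assert Arrays.equals(parseEntity("ks1"), new String[] { "ks1", null });
        assert Arrays.equals(parseEntity("ks1.cf1"), new String[] { "ks1", "cf1" });
        assert skipFlush(Map.of("skipFlush", "true"));
        assert !skipFlush(Map.of());
        System.out.println("ok");
    }
}
```

An options map keeps the MBean signature stable: a future option becomes a new 
key rather than a new overload.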

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter whether the data is flushed 
> to disk before taking a snapshot, and skipping the flush makes the snapshot 
> process quicker.
> As such, it would be a good idea to add this option to the snapshot command. 
> The wiring from nodetool to the MBean to the VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Comment Edited] (CASSANDRA-10958) Range query with filtering interacts badly with static columns

2016-01-08 Thread Taiyuan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090127#comment-15090127
 ] 

Taiyuan Zhang edited comment on CASSANDRA-10958 at 1/8/16 10:51 PM:


The cause of this bizarre output is the following code:

{code}
// If there is no rows, then provided the select was a full partition selection
// (i.e. not a 2ndary index search and there was no condition on clustering columns),
// we want to include static columns and we're done.
if (!partition.hasNext())
{
    if (!staticRow.isEmpty() && (!restrictions.usesSecondaryIndexing() || cfm.isStaticCompactTable())
        && !restrictions.hasClusteringColumnsRestriction())
    {
        result.newRow(protocolVersion);
        for (ColumnDefinition def : selection.getColumns())
        {
            switch (def.kind)
            {
                case PARTITION_KEY:
                    result.add(keyComponents[def.position()]);
                    break;
                case STATIC:
                    addValue(result, def, staticRow, nowInSec, protocolVersion);
                    break;
                default:
                    result.add((ByteBuffer)null);
            }
        }
    }
    return;
}
{code}

Why do we need to keep the static row? Can anyone give me a case where we need 
to keep the static row?


was (Author: firstprayer):
The cause of this bizarre output is the following code:

{code}
// If there is no rows, then provided the select was a full partition selection
// (i.e. not a 2ndary index search and there was no condition on clustering columns),
// we want to include static columns and we're done.
if (!partition.hasNext())
{
    if (!staticRow.isEmpty() && (!restrictions.usesSecondaryIndexing() || cfm.isStaticCompactTable())
        && !restrictions.hasClusteringColumnsRestriction())
    {
        result.newRow(protocolVersion);
        for (ColumnDefinition def : selection.getColumns())
        {
            switch (def.kind)
            {
                case PARTITION_KEY:
                    result.add(keyComponents[def.position()]);
                    break;
                case STATIC:
                    addValue(result, def, staticRow, nowInSec, protocolVersion);
                    break;
                default:
                    result.add((ByteBuffer)null);
            }
        }
    }
    return;
}
{code}

Why do we need to keep the static row? Can anyone give me a case where we need 
to keep the static row?

> Range query with filtering interacts badly with static columns
> --
>
> Key: CASSANDRA-10958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10958
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Taiyuan Zhang
>Priority: Minor
>
>  I'm playing with Cassandra 3. I added a secondary index on an integer 
> column, then tried to do a range query. At first it threw an error:
> {code}
> InvalidRequest: code=2200 [Invalid query] message="No supported secondary 
> index found for the non primary key columns restrictions"
> {code}
> So I added 'Allow Filtering'
> {code}
> cqlsh:mykeyspace> SELECT * FROM test ;
> id | id2 | age | extra
> +-+-+---
>   1 |   1 |   1 | 1
>   2 |   2 |   2 | 2
> (2 rows)
> cqlsh:mykeyspace > CREATE INDEX test_age on test (extra) ;
> cqlsh:mykeyspace > select * FROM test WHERE extra < 2 ALLOW FILTERING ;
>  id | id2  | age | extra
> +--+-+---
>   1 |1 |   1 | 1
>   2 | null |   2 |  null
> (2 rows)
> {code}
> My schema is:
> {code}
> CREATE TABLE mykeyspace.test (
> id int,
> id2 int,
> age int static,
> extra int,
> PRIMARY KEY (id, id2)
> ) 
> {code}
> I don't know if this is by design or not, but it really does look like a BUG 
> to me.





[jira] [Commented] (CASSANDRA-10958) Range query with filtering interacts badly with static columns

2016-01-08 Thread Taiyuan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090127#comment-15090127
 ] 

Taiyuan Zhang commented on CASSANDRA-10958:
---

The cause of this bizarre output is the following code:

{code}
// If there is no rows, then provided the select was a full partition selection
// (i.e. not a 2ndary index search and there was no condition on clustering columns),
// we want to include static columns and we're done.
if (!partition.hasNext())
{
    if (!staticRow.isEmpty() && (!restrictions.usesSecondaryIndexing() || cfm.isStaticCompactTable())
        && !restrictions.hasClusteringColumnsRestriction())
    {
        result.newRow(protocolVersion);
        for (ColumnDefinition def : selection.getColumns())
        {
            switch (def.kind)
            {
                case PARTITION_KEY:
                    result.add(keyComponents[def.position()]);
                    break;
                case STATIC:
                    addValue(result, def, staticRow, nowInSec, protocolVersion);
                    break;
                default:
                    result.add((ByteBuffer)null);
            }
        }
    }
    return;
}
{code}

Why do we need to keep the static row? Can anyone give me a case where we need 
to keep the static row?

> Range query with filtering interacts badly with static columns
> --
>
> Key: CASSANDRA-10958
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10958
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Taiyuan Zhang
>Priority: Minor
>
>  I'm playing with Cassandra 3. I added a secondary index on an integer 
> column, then tried to do a range query. At first it threw an error:
> {code}
> InvalidRequest: code=2200 [Invalid query] message="No supported secondary 
> index found for the non primary key columns restrictions"
> {code}
> So I added 'Allow Filtering'
> {code}
> cqlsh:mykeyspace> SELECT * FROM test ;
> id | id2 | age | extra
> +-+-+---
>   1 |   1 |   1 | 1
>   2 |   2 |   2 | 2
> (2 rows)
> cqlsh:mykeyspace > CREATE INDEX test_age on test (extra) ;
> cqlsh:mykeyspace > select * FROM test WHERE extra < 2 ALLOW FILTERING ;
>  id | id2  | age | extra
> +--+-+---
>   1 |1 |   1 | 1
>   2 | null |   2 |  null
> (2 rows)
> {code}
> My schema is:
> {code}
> CREATE TABLE mykeyspace.test (
> id int,
> id2 int,
> age int static,
> extra int,
> PRIMARY KEY (id, id2)
> ) 
> {code}
> I don't know if this is by design or not, but it really does look like a BUG 
> to me.





[jira] [Comment Edited] (CASSANDRA-10965) Shadowable tombstones can continue to shadow view results when timestamps match

2016-01-08 Thread Taiyuan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090159#comment-15090159
 ] 

Taiyuan Zhang edited comment on CASSANDRA-10965 at 1/8/16 11:16 PM:


I ran the script, and here is the output:

{code}
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-
 0 | 1 |   1

(1 rows)

 c | k | val
---+---+-
 0 | 1 |   1

(1 rows)
cqlsh:mykeyspace> UPDATE base USING TIMESTAMP 1 SET c = 1 WHERE k = 1;
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-
 1 | 1 |   1

(1 rows)

 c | k | val
---+---+-

(0 rows)
{code}

So the problem is: after the update using the same timestamp, the row is no 
longer shown when queried from the materialized view?


was (Author: firstprayer):
I ran the script, and here is the output:

{code}
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-
 0 | 1 |   1

(1 rows)

 c | k | val
---+---+-
 0 | 1 |   1

(1 rows)
cqlsh:mykeyspace> UPDATE base USING TIMESTAMP 1 SET c = 1 WHERE k = 1;
cqlsh:mykeyspace> SELECT c, k, val FROM base; SELECT c, k, val FROM mv_reuse;

 c | k | val
---+---+-
 1 | 1 |   1

(1 rows)

 c | k | val
---+---+-
{code}

So the problem is: after the update using the same timestamp, the row is no 
longer shown when queried from the materialized view?

> Shadowable tombstones can continue to shadow view results when timestamps 
> match
> ---
>
> Key: CASSANDRA-10965
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10965
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
> Attachments: shadow-ts.cql
>
>
> I've attached a script which reproduces the issue. The first time we insert 
> with {{TIMESTAMP 2}}, we are inserting a new row which has the same timestamp 
> as the previous shadow tombstone, and it continues to be shadowed by that 
> tombstone because we shadow values with the same timestamp.
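The tie at equal timestamps described above can be sketched abstractly. This is 
a hypothetical simplification of the described behavior, not the actual 
shadowable-tombstone code:

```java
// Hypothetical simplification: a value is visible only if its timestamp is
// strictly greater than the shadowable tombstone's, so a value written with
// the same timestamp stays shadowed -- the behavior this ticket describes.
public class ShadowCheck {
    static boolean isShadowed(long tombstoneTs, long valueTs) {
        return valueTs <= tombstoneTs;
    }

    public static void main(String[] args) {
        assert isShadowed(2, 2);   // same timestamp: still shadowed
        assert isShadowed(2, 1);   // older value: shadowed
        assert !isShadowed(2, 3);  // newer value wins
        System.out.println("ok");
    }
}
```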





[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-08 Thread Terry Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090422#comment-15090422
 ] 

Terry Ma commented on CASSANDRA-10961:
--

Yes, it works. My new node bootstrapped well. I got the same error earlier 
because of a mistake I made when replacing the jar.
Thank you.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, debug.1.log, 
> debug.logs.zip, netstats.1.log
>
>
> We hit the same problem every time we add nodes to the cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> 

[jira] [Commented] (CASSANDRA-10982) Put gc.log in -Dcassandra.logdir location by default

2016-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088967#comment-15088967
 ] 

Sylvain Lebresne commented on CASSANDRA-10982:
--

bq. Additionally, I don't think 10140 made it to trunk

Something is wrong with my git fu, but I don't know what. According to the git 
log, the patch from 3.0 _was_ properly merged, but the changes somehow weren't 
applied, as if the merge had been done with {{--strategy=ours}}, even though I'm 
100% sure it wasn't. This is not the only patch I've merged in the last few 
days that had this problem, so if someone knows what I'm doing wrong and can 
enlighten me, that would be highly appreciated. I have fixed this manually in 
the meantime.

> Put gc.log in -Dcassandra.logdir location by default
> 
>
> Key: CASSANDRA-10982
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10982
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> CASSANDRA-10140 turned on gc.log by default and set its location to 
> CASSANDRA_HOME/logs. It would be much better UX if, when -Dcassandra.logdir 
> is set, that location were used instead. This way users wouldn't have to 
> configure gc.log separately from the other log files.
> Additionally, I don't think 10140 made it to trunk, as grepping for `loggc` 
> there shows me nothing in cassandra-env.sh as of 31f67c289.





[jira] [Updated] (CASSANDRA-10972) File based hints don't implement backpressure and can OOM

2016-01-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10972:

Reviewer: Benedict

> File based hints don't implement backpressure and can OOM
> -
>
> Key: CASSANDRA-10972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10972
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> This is something I reproduced in practice. I have what I think is a 
> reasonable implementation of backpressure, but still need to put together a 
> unit test.





[jira] [Commented] (CASSANDRA-10972) File based hints don't implement backpressure and can OOM

2016-01-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089160#comment-15089160
 ] 

Joshua McKenzie commented on CASSANDRA-10972:
-

[~benedict] to review.

> File based hints don't implement backpressure and can OOM
> -
>
> Key: CASSANDRA-10972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10972
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> This is something I reproduced in practice. I have what I think is a 
> reasonable implementation of backpressure, but still need to put together a 
> unit test.
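The general bounded-queue backpressure pattern this ticket calls for can be 
sketched as follows. This is a generic illustration, not the attached patch or 
Cassandra's hints code; all class and method names here are hypothetical:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Generic backpressure sketch: producers block on a bounded queue instead of
// buffering without limit (which is what eventually causes the OOM).
public class BoundedDispatch {
    private final BlockingQueue<byte[]> queue;

    BoundedDispatch(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Blocks the caller when the queue is full -- that blocking IS the
    // backpressure applied to the producer.
    void submit(byte[] hint) {
        try {
            queue.put(hint);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    // Consumer side: drains one element, freeing capacity for producers.
    byte[] take() {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }

    int pending() {
        return queue.size();
    }

    public static void main(String[] args) {
        BoundedDispatch d = new BoundedDispatch(2);
        d.submit(new byte[8]);
        d.submit(new byte[8]);
        // A third submit() would now block until take() frees a slot.
        assert d.pending() == 2;
        d.take();
        d.submit(new byte[8]); // proceeds because capacity is available again
        assert d.pending() == 2;
        System.out.println("ok");
    }
}
```

The fixed capacity bounds memory use: a slow consumer slows the producer down 
rather than growing an unbounded backlog.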





[jira] [Commented] (CASSANDRA-10987) MV add_node_after_mv_test is failing on trunk

2016-01-08 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089194#comment-15089194
 ] 

Alan Boudreault commented on CASSANDRA-10987:
-

This failure is probably the same as CASSANDRA-10978.

> MV add_node_after_mv_test is failing on trunk
> -
>
> Key: CASSANDRA-10987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10987
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Alan Boudreault
> Fix For: 3.x
>
>
> This failure seems to be flaky.
> http://cassci.datastax.com/job/trunk_dtest/897/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test
> {code}
> ==
> ERROR: add_node_after_mv_test (materialized_views_test.TestMaterializedViews)
> --
> Traceback (most recent call last):
>   File "/home/aboudreault/git/cstar/cassandra-dtest/dtest.py", line 558, in 
> tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node4 node log: ['ERROR [main] 2016-01-08 
> 08:03:35,980 MigrationManager.java:164 - Migration task failed to 
> complete\nERROR [main] 2016-01-08 08:03:36,980 MigrationManager.java:164 - 
> Migration task failed to complete']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-W5Ng_M
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-W5Ng_M
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-W5Ng_M] directory
> - >> end captured logging << -
> --
> Ran 1 test in 90.385s
> FAILED (errors=1)
> {code}





[jira] [Updated] (CASSANDRA-10743) Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)

2016-01-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10743:
-
Component/s: Local Write-Read Paths

> Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)
> 
>
> Key: CASSANDRA-10743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS Linux release 7.1.1503, OpenJDK Runtime 
> Environment (build 1.8.0_65-b17), DSC Cassandra 3.0.0 (tar.gz)
>Reporter: Gábor Auth
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
> Attachments: faulty-tables.tar.gz, schema.ddl
>
>
> {code}
> [cassandra@dc01-rack01-cass01 ~]$ 
> /home/cassandra/dsc-cassandra-3.0.0/bin/nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.UnsupportedOperationException
> at 
> org.apache.cassandra.db.rows.CellPath$EmptyCellPath.get(CellPath.java:143)
> at 
> org.apache.cassandra.db.marshal.CollectionType$CollectionPathSerializer.serializedSize(CollectionType.java:226)
> at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.serializedSize(BufferCell.java:325)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.sizeOfComplexColumn(UnfilteredSerializer.java:297)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:282)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:163)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:144)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:112)
> at 
> org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:121)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:397)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (CASSANDRA-10697) Leak detected while running offline scrub

2016-01-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10697:
-
Assignee: Benedict

> Leak detected while running offline scrub
> -
>
> Key: CASSANDRA-10697
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10697
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>Assignee: Benedict
>Priority: Critical
>
> I got couple of those:
> {code}
> ERROR 05:09:15 LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@3b60e162) to class org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1433208674:/var/lib/cassandra/data/sync/entity2-e24b5040199b11e5a30f75bb514ae072/sync-entity2-ka-405434 was not released before the reference was garbage collected
> {code}
> and then:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:99)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
> at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:353)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:444)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
> at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:378)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:348)
> at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:327)
> at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:397)
> at org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
> at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
> at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
> at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:120)
> at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:165)
> at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
> at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:192)
> at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
> at org.apache.cassandra.io.sstable.SSTableRewriter.tryAppend(SSTableRewriter.java:158)
> at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:220)
> at org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:116)
> {code}





[15/22] cassandra git commit: Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0

2016-01-08 Thread slebresne
Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b241c1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b241c1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b241c1

Branch: refs/heads/cassandra-3.3
Commit: 08b241c153dc3c93436703e4c30720a28899d5f0
Parents: 24630b4 23123f0
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:24:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:24:49 2016 +0100

--

--




[09/22] cassandra git commit: Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2

2016-01-08 Thread slebresne
Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/44a05786
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/44a05786
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/44a05786

Branch: refs/heads/cassandra-3.0
Commit: 44a05786aab603b832440391c2fb9051bf1ae36e
Parents: 52d8197 812df9e
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:22:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:22:06 2016 +0100

--

--




[04/22] cassandra git commit: Fix pending range calculation during moves

2016-01-08 Thread slebresne
Fix pending range calculation during moves

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/812df9e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/812df9e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/812df9e8

Branch: refs/heads/trunk
Commit: 812df9e8bc3cb98258a70a4b34cd6e289ff95e27
Parents: 6d6d189
Author: sankalp kohli 
Authored: Tue Jan 5 15:09:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:18:45 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../service/PendingRangeCalculatorService.java  |  36 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  83 +++-
 .../org/apache/cassandra/service/MoveTest.java  | 435 +++
 6 files changed, 557 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 14c5ee6..c167098 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.13
+ * Fix pending range calculation during moves (CASSANDRA-10887)
 * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 81c92a2..618a3f4 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -300,7 +300,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for(Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for(Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git 
a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java 
b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index 0ff8a92..1e7b7bd 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -169,18 +169,44 @@ public class PendingRangeCalculatorService
         // At this stage pendingRanges has been updated according to leaving and bootstrapping nodes.
         // We can now finish the calculation by checking moving and relocating nodes.
 
-        // For each of the moving nodes, we do the same thing we did for bootstrapping:
-        // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : tm.getMovingEndpoints())
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                pendingRanges.put(range, endpoint);
+                moveAffectedRanges.add(range);
+ 

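The `subtractAll` helper added in this patch peels each subtrahend range off the remaining set one at a time, re-subtracting from every surviving piece. A minimal sketch of that iterative-subtraction idea over half-open integer intervals follows; the names `Interval`, `subtract`, and `subtractAll` are illustrative, and unlike Cassandra's token `Range` this sketch does not handle ring wraparound.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of iterative range subtraction: subtractAll(whole, ranges)
// returns the parts of `whole` covered by none of the given ranges.
public class IntervalSubtraction {
    record Interval(int start, int end) {} // half-open [start, end)

    // Subtract one interval from another; yields 0, 1, or 2 pieces.
    static List<Interval> subtract(Interval a, Interval b) {
        List<Interval> out = new ArrayList<>();
        if (b.end() <= a.start() || b.start() >= a.end()) {
            out.add(a); // no overlap, `a` survives intact
            return out;
        }
        if (b.start() > a.start())
            out.add(new Interval(a.start(), b.start())); // left remainder
        if (b.end() < a.end())
            out.add(new Interval(b.end(), a.end()));     // right remainder
        return out;
    }

    // Start from the whole interval and subtract each range in turn
    // from every piece that survived the previous rounds.
    static List<Interval> subtractAll(Interval whole, List<Interval> ranges) {
        List<Interval> result = new ArrayList<>();
        result.add(whole);
        for (Interval r : ranges) {
            List<Interval> next = new ArrayList<>();
            for (Interval piece : result)
                next.addAll(subtract(piece, r));
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        List<Interval> left = subtractAll(new Interval(0, 100),
                List.of(new Interval(10, 20), new Interval(50, 60)));
        System.out.println(left);
        // [Interval[start=0, end=10], Interval[start=20, end=50], Interval[start=60, end=100]]
    }
}
```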
[03/22] cassandra git commit: Fix pending range calculation during moves

2016-01-08 Thread slebresne
Fix pending range calculation during moves

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/812df9e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/812df9e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/812df9e8

Branch: refs/heads/cassandra-2.2
Commit: 812df9e8bc3cb98258a70a4b34cd6e289ff95e27
Parents: 6d6d189
Author: sankalp kohli 
Authored: Tue Jan 5 15:09:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:18:45 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../service/PendingRangeCalculatorService.java  |  36 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  83 +++-
 .../org/apache/cassandra/service/MoveTest.java  | 435 +++
 6 files changed, 557 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 14c5ee6..c167098 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.13
+ * Fix pending range calculation during moves (CASSANDRA-10887)
 * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 81c92a2..618a3f4 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -300,7 +300,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for(Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for(Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git 
a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java 
b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index 0ff8a92..1e7b7bd 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -169,18 +169,44 @@ public class PendingRangeCalculatorService
         // At this stage pendingRanges has been updated according to leaving and bootstrapping nodes.
         // We can now finish the calculation by checking moving and relocating nodes.
 
-        // For each of the moving nodes, we do the same thing we did for bootstrapping:
-        // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : tm.getMovingEndpoints())
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                pendingRanges.put(range, endpoint);
+

[19/22] cassandra git commit: Fix pending range calculation during moves (3.0 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (3.0 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9c1679d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9c1679d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9c1679d1

Branch: refs/heads/trunk
Commit: 9c1679d1bd83d1d25fda6dbf29d1738d8e966da5
Parents: 08b241c
Author: sankalp kohli 
Authored: Thu Jan 7 16:24:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:25:36 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7f6b761..1e7f4ed 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@ Merged from 2.2:
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
 * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 985d6f6..b4fed65 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -291,7 +291,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for(Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for(Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 301613c..f6e9cf7 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -814,14 +814,42 @@ public class TokenMetadata
         // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : movingEndpoints)
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                newPendingRanges.addPendingRange(range, endpoint);
+                moveAffectedRanges.add(range);
+            }
+
+            for (Range<Token> range : moveAffectedRanges)
+            {
+Set 

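The core change in all three versions of this patch is that a moving node's pre-move and post-move ranges are both collected as move-affected, where previously only the post-move ranges were marked pending. A minimal sketch of that union step is below; `ownedRangesBefore` and `ownedRangesAfter` are hypothetical stand-ins for the two `strategy.getAddressRanges(allLeftMetadata).get(endpoint)` snapshots taken before and after `updateNormalToken(...)`.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: during a token move, writes may land in ranges the node owned
// BEFORE the move as well as ranges it will own AFTER it, so the set of
// move-affected ranges is the union of the two ownership snapshots.
public class MovePendingRanges {
    static Set<String> moveAffectedRanges(Set<String> ownedRangesBefore,
                                          Set<String> ownedRangesAfter) {
        Set<String> affected = new HashSet<>(ownedRangesBefore); // pre-move ranges
        affected.addAll(ownedRangesAfter);                       // post-move ranges
        return affected;
    }

    public static void main(String[] args) {
        Set<String> before = Set.of("(0,100]", "(200,300]");
        Set<String> after  = Set.of("(50,150]", "(200,300]");
        // Union of the two snapshots: three distinct affected ranges.
        System.out.println(moveAffectedRanges(before, after).size()); // 3
    }
}
```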
[16/22] cassandra git commit: Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0

2016-01-08 Thread slebresne
Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b241c1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b241c1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b241c1

Branch: refs/heads/trunk
Commit: 08b241c153dc3c93436703e4c30720a28899d5f0
Parents: 24630b4 23123f0
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:24:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:24:49 2016 +0100

--

--




[22/22] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-08 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/267ab31d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/267ab31d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/267ab31d

Branch: refs/heads/trunk
Commit: 267ab31dbd5b9a59dab6a60893ad13f30444f238
Parents: ea3ba68 3ad6090
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:28:27 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:28:27 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/267ab31d/CHANGES.txt
--



[jira] [Assigned] (CASSANDRA-10743) Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)

2016-01-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-10743:


Assignee: Sylvain Lebresne

> Failed upgradesstables (upgrade from 2.2.2 to 3.0.0)
> 
>
> Key: CASSANDRA-10743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10743
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS Linux release 7.1.1503, OpenJDK Runtime 
> Environment (build 1.8.0_65-b17), DSC Cassandra 3.0.0 (tar.gz)
>Reporter: Gábor Auth
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
> Attachments: faulty-tables.tar.gz, schema.ddl
>
>
> {code}
> [cassandra@dc01-rack01-cass01 ~]$ /home/cassandra/dsc-cassandra-3.0.0/bin/nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.UnsupportedOperationException
> at org.apache.cassandra.db.rows.CellPath$EmptyCellPath.get(CellPath.java:143)
> at org.apache.cassandra.db.marshal.CollectionType$CollectionPathSerializer.serializedSize(CollectionType.java:226)
> at org.apache.cassandra.db.rows.BufferCell$Serializer.serializedSize(BufferCell.java:325)
> at org.apache.cassandra.db.rows.UnfilteredSerializer.sizeOfComplexColumn(UnfilteredSerializer.java:297)
> at org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:282)
> at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:163)
> at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:108)
> at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:144)
> at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:112)
> at org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
> at org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
> at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:121)
> at org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
> at org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
> at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:397)
> at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[05/22] cassandra git commit: Fix pending range calculation during moves

2016-01-08 Thread slebresne
Fix pending range calculation during moves

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/812df9e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/812df9e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/812df9e8

Branch: refs/heads/cassandra-3.0
Commit: 812df9e8bc3cb98258a70a4b34cd6e289ff95e27
Parents: 6d6d189
Author: sankalp kohli 
Authored: Tue Jan 5 15:09:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:18:45 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../service/PendingRangeCalculatorService.java  |  36 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  83 +++-
 .../org/apache/cassandra/service/MoveTest.java  | 435 +++
 6 files changed, 557 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 14c5ee6..c167098 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.13
+ * Fix pending range calculation during moves (CASSANDRA-10887)
 * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 81c92a2..618a3f4 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -300,7 +300,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for(Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for(Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git 
a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java 
b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index 0ff8a92..1e7b7bd 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -169,18 +169,44 @@ public class PendingRangeCalculatorService
         // At this stage pendingRanges has been updated according to leaving and bootstrapping nodes.
         // We can now finish the calculation by checking moving and relocating nodes.
 
-        // For each of the moving nodes, we do the same thing we did for bootstrapping:
-        // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : tm.getMovingEndpoints())
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                pendingRanges.put(range, endpoint);
+

[02/22] cassandra git commit: Fix pending range calculation during moves

2016-01-08 Thread slebresne
Fix pending range calculation during moves

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/812df9e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/812df9e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/812df9e8

Branch: refs/heads/cassandra-3.3
Commit: 812df9e8bc3cb98258a70a4b34cd6e289ff95e27
Parents: 6d6d189
Author: sankalp kohli 
Authored: Tue Jan 5 15:09:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:18:45 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../service/PendingRangeCalculatorService.java  |  36 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  83 +++-
 .../org/apache/cassandra/service/MoveTest.java  | 435 +++
 6 files changed, 557 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 14c5ee6..c167098 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.13
+ * Fix pending range calculation during moves (CASSANDRA-10887)
 * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 81c92a2..618a3f4 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -300,7 +300,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for(Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for(Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git 
a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java 
b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index 0ff8a92..1e7b7bd 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -169,18 +169,44 @@ public class PendingRangeCalculatorService
         // At this stage pendingRanges has been updated according to leaving and bootstrapping nodes.
         // We can now finish the calculation by checking moving and relocating nodes.
 
-        // For each of the moving nodes, we do the same thing we did for bootstrapping:
-        // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : tm.getMovingEndpoints())
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                pendingRanges.put(range, endpoint);
+

[06/22] cassandra git commit: Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2

2016-01-08 Thread slebresne
Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/44a05786
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/44a05786
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/44a05786

Branch: refs/heads/cassandra-2.2
Commit: 44a05786aab603b832440391c2fb9051bf1ae36e
Parents: 52d8197 812df9e
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:22:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:22:06 2016 +0100

--

--




[08/22] cassandra git commit: Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2

2016-01-08 Thread slebresne
Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/44a05786
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/44a05786
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/44a05786

Branch: refs/heads/trunk
Commit: 44a05786aab603b832440391c2fb9051bf1ae36e
Parents: 52d8197 812df9e
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:22:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:22:06 2016 +0100

--

--




[11/22] cassandra git commit: Fix pending range calculation during moves (2.2 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (2.2 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23123f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23123f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23123f04

Branch: refs/heads/cassandra-3.3
Commit: 23123f04fd5e5381742b2bae16bb3e03225598c3
Parents: 44a0578
Author: sankalp kohli 
Authored: Thu Jan 7 16:21:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:23:30 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 +++
 .../org/apache/cassandra/service/MoveTest.java  | 491 ++-
 6 files changed, 576 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a26f9e0..e5c4430 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java b/src/java/org/apache/cassandra/dht/Range.java
index 9893531..f2c5996 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -292,7 +292,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implemen
 return rhs.differenceToFetch(this);
 }
 
+public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+{
+Set<Range<T>> result = new HashSet<>();
+result.add(this);
+for(Range<T> range : ranges)
+{
+result = substractAllFromToken(result, range);
+}
+
+return result;
+}
 
+private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+{
+Set<Range<T>> result = new HashSet<>();
+for(Range<T> range : ranges)
+{
+result.addAll(range.subtract(subtract));
+}
+
+return result;
+}
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 00d8ee9..de16fda 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -799,14 +799,42 @@ public class TokenMetadata
 // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
 for (Pair<Token, InetAddress> moving : movingEndpoints)
 {
+//Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+Set<Range<Token>> moveAffectedRanges = new HashSet<>();
 InetAddress endpoint = moving.right; // address of the moving node
+//Add ranges before the move
+for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+{
+moveAffectedRanges.add(range);
+}
 
-//  moving.left is a new token of the endpoint
 allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+//Add ranges after the move
 for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
 {
-newPendingRanges.addPendingRange(range, endpoint);
+moveAffectedRanges.add(range);
+}
+
+for(Range<Token> range : moveAffectedRanges)
+{
+Set 

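[Editor's note] The patch above unions the ranges a moving node serves before its token change with the ranges it serves after it, and treats that union as potentially affected. A minimal standalone sketch of that idea, NOT Cassandra's API: tokens are plain ints, each node owns the half-open interval (predecessorToken, ownToken], and the names `addressRanges` and `moveAffectedRanges` here are simplifications chosen to mirror the patch.

```java
import java.util.*;

// Simplified model of a token ring: ints for tokens, (pred, token] ownership.
public class MovePendingSketch
{
    // For every token on the ring, the (predecessor, token] interval it owns.
    static Map<Integer, int[]> addressRanges(Collection<Integer> ring)
    {
        List<Integer> sorted = new ArrayList<>(ring);
        Collections.sort(sorted);
        Map<Integer, int[]> owned = new HashMap<>();
        for (int i = 0; i < sorted.size(); i++)
        {
            int pred = sorted.get((i - 1 + sorted.size()) % sorted.size());
            owned.put(sorted.get(i), new int[]{ pred, sorted.get(i) });
        }
        return owned;
    }

    // Union of the ranges owned before and after the move, mirroring how
    // the patch builds moveAffectedRanges from two getAddressRanges calls.
    static Set<List<Integer>> moveAffectedRanges(Collection<Integer> ring, int oldToken, int newToken)
    {
        Set<List<Integer>> affected = new HashSet<>();
        int[] before = addressRanges(ring).get(oldToken);
        affected.add(Arrays.asList(before[0], before[1]));

        List<Integer> ringAfter = new ArrayList<>(ring); // apply the move
        ringAfter.remove(Integer.valueOf(oldToken));
        ringAfter.add(newToken);
        int[] after = addressRanges(ringAfter).get(newToken);
        affected.add(Arrays.asList(after[0], after[1]));
        return affected;
    }

    public static void main(String[] args)
    {
        // Ring {10, 20, 30}; the node at 20 moves to 25.
        // Before the move it owns (10, 20]; after it owns (10, 25].
        Set<List<Integer>> affected = moveAffectedRanges(Arrays.asList(10, 20, 30), 20, 25);
        System.out.println(affected.contains(Arrays.asList(10, 20))); // true
        System.out.println(affected.contains(Arrays.asList(10, 25))); // true
    }
}
```

The point of the fix is exactly this two-sided view: the earlier code only looked at ranges after `updateNormalToken`, so ranges the node was giving up could be missed from the pending set.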
[07/22] cassandra git commit: Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2

2016-01-08 Thread slebresne
Merge commit '812df9e8bc3cb98258a70a4b34cd6e289ff95e27' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/44a05786
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/44a05786
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/44a05786

Branch: refs/heads/cassandra-3.3
Commit: 44a05786aab603b832440391c2fb9051bf1ae36e
Parents: 52d8197 812df9e
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:22:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:22:06 2016 +0100

--

--




[10/22] cassandra git commit: Fix pending range calculation during moves (2.2 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (2.2 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23123f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23123f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23123f04

Branch: refs/heads/cassandra-3.0
Commit: 23123f04fd5e5381742b2bae16bb3e03225598c3
Parents: 44a0578
Author: sankalp kohli 
Authored: Thu Jan 7 16:21:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:23:30 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 +++
 .../org/apache/cassandra/service/MoveTest.java  | 491 ++-
 6 files changed, 576 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a26f9e0..e5c4430 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java b/src/java/org/apache/cassandra/dht/Range.java
index 9893531..f2c5996 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -292,7 +292,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implemen
 return rhs.differenceToFetch(this);
 }
 
+public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+{
+Set<Range<T>> result = new HashSet<>();
+result.add(this);
+for(Range<T> range : ranges)
+{
+result = substractAllFromToken(result, range);
+}
+
+return result;
+}
 
+private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+{
+Set<Range<T>> result = new HashSet<>();
+for(Range<T> range : ranges)
+{
+result.addAll(range.subtract(subtract));
+}
+
+return result;
+}
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 00d8ee9..de16fda 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -799,14 +799,42 @@ public class TokenMetadata
 // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
 for (Pair<Token, InetAddress> moving : movingEndpoints)
 {
+//Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+Set<Range<Token>> moveAffectedRanges = new HashSet<>();
 InetAddress endpoint = moving.right; // address of the moving node
+//Add ranges before the move
+for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+{
+moveAffectedRanges.add(range);
+}
 
-//  moving.left is a new token of the endpoint
 allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+//Add ranges after the move
 for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
 {
-newPendingRanges.addPendingRange(range, endpoint);
+moveAffectedRanges.add(range);
+}
+
+for(Range<Token> range : moveAffectedRanges)
+{
+Set 

[01/22] cassandra git commit: Fix pending range calculation during moves

2016-01-08 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 6d6d18904 -> 812df9e8b
  refs/heads/cassandra-2.2 52d8197d1 -> 23123f04f
  refs/heads/cassandra-3.0 24630b4a1 -> 9c1679d1b
  refs/heads/cassandra-3.3 87d80b478 -> 3ad609062
  refs/heads/trunk ea3ba6872 -> 267ab31db


Fix pending range calculation during moves

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/812df9e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/812df9e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/812df9e8

Branch: refs/heads/cassandra-2.1
Commit: 812df9e8bc3cb98258a70a4b34cd6e289ff95e27
Parents: 6d6d189
Author: sankalp kohli 
Authored: Tue Jan 5 15:09:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:18:45 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../service/PendingRangeCalculatorService.java  |  36 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  83 +++-
 .../org/apache/cassandra/service/MoveTest.java  | 435 +++
 6 files changed, 557 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 14c5ee6..c167098 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.13
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java b/src/java/org/apache/cassandra/dht/Range.java
index 81c92a2..618a3f4 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -300,7 +300,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implemen
 return rhs.differenceToFetch(this);
 }
 
+public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+{
+Set<Range<T>> result = new HashSet<>();
+result.add(this);
+for(Range<T> range : ranges)
+{
+result = substractAllFromToken(result, range);
+}
+
+return result;
+}
 
+private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+{
+Set<Range<T>> result = new HashSet<>();
+for(Range<T> range : ranges)
+{
+result.addAll(range.subtract(subtract));
+}
+
+return result;
+}
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/812df9e8/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
--
diff --git a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
index 0ff8a92..1e7b7bd 100644
--- a/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
+++ b/src/java/org/apache/cassandra/service/PendingRangeCalculatorService.java
@@ -169,18 +169,44 @@ public class PendingRangeCalculatorService
 // At this stage pendingRanges has been updated according to leaving and bootstrapping nodes.
 // We can now finish the calculation by checking moving and relocating nodes.
 
-// For each of the moving nodes, we do the same thing we did for bootstrapping:
-// simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
 for (Pair<Token, InetAddress> moving : tm.getMovingEndpoints())
 {
+//Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+Set<Range<Token>> moveAffectedRanges = new HashSet<>();
 InetAddress endpoint = moving.right; // address of the moving node
+//Add ranges before the move
+for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+{
+moveAffectedRanges.add(range);
+}
 
-//  moving.left is a new token of the endpoint
 

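[Editor's note] The `subtractAll` helper added to Range.java in the commit above repeatedly carves each subtrahend range out of a working set. A simplified sketch of the same idea on non-wrapping half-open int ranges (lo, hi]; Cassandra's real `Range<T>` also handles ranges that wrap around the ring, which this deliberately ignores.

```java
import java.util.*;

// Interval subtraction on half-open (lo, hi] int ranges, non-wrapping only.
public class SubtractSketch
{
    // (lo, hi] minus (slo, shi] -> zero, one, or two remaining pieces.
    static List<int[]> subtract(int lo, int hi, int slo, int shi)
    {
        List<int[]> out = new ArrayList<>();
        if (shi <= lo || slo >= hi)            // no overlap: keep whole range
        {
            out.add(new int[]{ lo, hi });
            return out;
        }
        if (slo > lo) out.add(new int[]{ lo, Math.min(slo, hi) }); // left piece
        if (shi < hi) out.add(new int[]{ Math.max(shi, lo), hi }); // right piece
        return out;
    }

    // Subtract every range in 'subtrahends' from (lo, hi], like subtractAll:
    // each pass replaces the working set with whatever survives the subtraction.
    static List<int[]> subtractAll(int lo, int hi, List<int[]> subtrahends)
    {
        List<int[]> result = new ArrayList<>();
        result.add(new int[]{ lo, hi });
        for (int[] s : subtrahends)
        {
            List<int[]> next = new ArrayList<>();
            for (int[] r : result)
                next.addAll(subtract(r[0], r[1], s[0], s[1]));
            result = next;
        }
        return result;
    }

    public static void main(String[] args)
    {
        // (0, 10] minus {(2, 4], (6, 8]} leaves (0, 2], (4, 6], (8, 10].
        List<int[]> out = subtractAll(0, 10, Arrays.asList(new int[]{2, 4}, new int[]{6, 8}));
        System.out.println(out.size()); // 3
    }
}
```

In the fix, pending ranges are then derived from what remains after subtracting the already-covered ranges, so a node is only marked pending for the pieces it does not already serve.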
[20/22] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-08 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ad60906
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ad60906
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ad60906

Branch: refs/heads/cassandra-3.3
Commit: 3ad6090623d1bb146005c5325abc3a513b547dc1
Parents: 87d80b4 9c1679d
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:26:09 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:26:09 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/CHANGES.txt
--
diff --cc CHANGES.txt
index 8aab604,1e7f4ed..92493d7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -52,7 -21,12 +52,8 @@@ Merged from 2.2
   * (cqlsh) show correct column names for empty result sets (CASSANDRA-9813)
   * Add new types to Stress (CASSANDRA-9556)
   * Add property to allow listening on broadcast interface (CASSANDRA-9748)
 - * Fix regression in split size on CqlInputFormat (CASSANDRA-10835)
 - * Better handling of SSL connection errors inter-node (CASSANDRA-10816)
 - * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
 - * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
  Merged from 2.1:
+  * Fix pending range calculation during moves (CASSANDRA-10887)
   * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
   * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
   * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/src/java/org/apache/cassandra/dht/Range.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/src/java/org/apache/cassandra/locator/TokenMetadata.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/test/unit/org/apache/cassandra/Util.java
--



[13/22] cassandra git commit: Fix pending range calculation during moves (2.2 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (2.2 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23123f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23123f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23123f04

Branch: refs/heads/cassandra-2.2
Commit: 23123f04fd5e5381742b2bae16bb3e03225598c3
Parents: 44a0578
Author: sankalp kohli 
Authored: Thu Jan 7 16:21:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:23:30 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 +++
 .../org/apache/cassandra/service/MoveTest.java  | 491 ++-
 6 files changed, 576 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a26f9e0..e5c4430 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java b/src/java/org/apache/cassandra/dht/Range.java
index 9893531..f2c5996 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -292,7 +292,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implemen
 return rhs.differenceToFetch(this);
 }
 
+public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+{
+Set<Range<T>> result = new HashSet<>();
+result.add(this);
+for(Range<T> range : ranges)
+{
+result = substractAllFromToken(result, range);
+}
+
+return result;
+}
 
+private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+{
+Set<Range<T>> result = new HashSet<>();
+for(Range<T> range : ranges)
+{
+result.addAll(range.subtract(subtract));
+}
+
+return result;
+}
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 00d8ee9..de16fda 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -799,14 +799,42 @@ public class TokenMetadata
 // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
 for (Pair<Token, InetAddress> moving : movingEndpoints)
 {
+//Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+Set<Range<Token>> moveAffectedRanges = new HashSet<>();
 InetAddress endpoint = moving.right; // address of the moving node
+//Add ranges before the move
+for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+{
+moveAffectedRanges.add(range);
+}
 
-//  moving.left is a new token of the endpoint
 allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+//Add ranges after the move
 for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
 {
-newPendingRanges.addPendingRange(range, endpoint);
+moveAffectedRanges.add(range);
+}
+
+for(Range<Token> range : moveAffectedRanges)
+{
+Set 

[17/22] cassandra git commit: Fix pending range calculation during moves (3.0 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (3.0 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9c1679d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9c1679d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9c1679d1

Branch: refs/heads/cassandra-3.3
Commit: 9c1679d1bd83d1d25fda6dbf29d1738d8e966da5
Parents: 08b241c
Author: sankalp kohli 
Authored: Thu Jan 7 16:24:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:25:36 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7f6b761..1e7f4ed 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@ Merged from 2.2:
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming throughput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java b/src/java/org/apache/cassandra/dht/Range.java
index 985d6f6..b4fed65 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -291,7 +291,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implemen
 return rhs.differenceToFetch(this);
 }
 
+public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+{
+Set<Range<T>> result = new HashSet<>();
+result.add(this);
+for(Range<T> range : ranges)
+{
+result = substractAllFromToken(result, range);
+}
+
+return result;
+}
 
+private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+{
+Set<Range<T>> result = new HashSet<>();
+for(Range<T> range : ranges)
+{
+result.addAll(range.subtract(subtract));
+}
+
+return result;
+}
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 301613c..f6e9cf7 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -814,14 +814,42 @@ public class TokenMetadata
 // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
 for (Pair<Token, InetAddress> moving : movingEndpoints)
 {
+//Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+Set<Range<Token>> moveAffectedRanges = new HashSet<>();
 InetAddress endpoint = moving.right; // address of the moving node
+//Add ranges before the move
+for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+{
+moveAffectedRanges.add(range);
+}
 
-//  moving.left is a new token of the endpoint
 allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+//Add ranges after the move
 for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
 {
-newPendingRanges.addPendingRange(range, endpoint);
+moveAffectedRanges.add(range);
+}
+
+for(Range<Token> range : moveAffectedRanges)
+{
+  

[14/22] cassandra git commit: Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0

2016-01-08 Thread slebresne
Merge commit '23123f04fd5e5381742b2bae16bb3e03225598c3' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08b241c1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08b241c1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08b241c1

Branch: refs/heads/cassandra-3.0
Commit: 08b241c153dc3c93436703e4c30720a28899d5f0
Parents: 24630b4 23123f0
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:24:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:24:49 2016 +0100

--

--




[jira] [Commented] (CASSANDRA-10963) Can join cluster java.lang.InterruptedException

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089267#comment-15089267
 ] 

Paulo Motta commented on CASSANDRA-10963:
-

If scrubbing doesn't help, could you try the patch on CASSANDRA-10961?

> Can join cluster java.lang.InterruptedException 
> 
>
> Key: CASSANDRA-10963
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10963
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: [cqlsh 5.0.1 | Cassandra 2.2.4 | CQL spec 3.3.1 | Native protocol v4]
> java version "1.8.0_65"
>Reporter: Jack Money
>Assignee: Paulo Motta
>
> Hello,
> I have 2 nodes in 2 DCs.
> Each node owns 100% of the data of keyspace hugespace.
> The keyspace has 21 tables with 2 TB of data.
> The biggest table has 1.6 TB of data.
> The biggest sstable is 1.3 TB.
> Schemas:
> {noformat} 
> CREATE KEYSPACE hugespace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '1'};
> CREATE TABLE hugespace.content (
> y int,
> m int,
> d int,
> ts bigint,
> ha text,
> co text,
> he text,
> ids bigint,
> ifr text,
> js text,
> PRIMARY KEY ((y, m, d), ts, ha)
> ) WITH CLUSTERING ORDER BY (ts ASC, ha ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> CREATE INDEX content_ids_idx ON hugespace.content (ids);
> {noformat}
> I tried to add one node (target: 6 nodes in DC1) to DC1.
> Names:
> Existing node in DC1 = nodeDC1
> Existing node in DC2 = nodeDC2
> New node joining DC1 = joiningDC1
> joiningDC1
> {noformat} 
> INFO  [main] 2016-01-04 12:17:55,535 StorageService.java:1176 - JOINING: Starting to bootstrap...
> INFO  [main] 2016-01-04 12:17:55,802 StreamResultFuture.java:86 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Executing streaming plan for Bootstrap
> INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Starting streaming to /nodeDC1
> INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Starting streaming to /nodeDC2
> DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending stream init for incoming stream
> DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending stream init for incoming stream
> DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,806 ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending stream init for outgoing stream
> DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,806 ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending stream init for outgoing stream
> DEBUG [STREAM-OUT-/nodeDC1] 2016-01-04 12:17:55,810 ConnectionHandler.java:334 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending Prepare (5 requests,  0 files}
> DEBUG [STREAM-OUT-/nodeDC2] 2016-01-04 12:17:55,810 ConnectionHandler.java:334 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Sending Prepare (2 requests,  0 files}
> INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,810 StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, ID#0] Beginning stream session with /nodeDC2
> INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,810 StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, ID#0] Beginning stream session with /nodeDC1
> DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,821 ConnectionHandler.java:266 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Received Prepare (0 requests,  1 files}
> INFO  [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,822 StreamResultFuture.java:168 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4 ID#0] Prepare completed. Receiving 1 files(161 bytes), sending 0 files(0 bytes)
> DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,828 CompressedStreamReader.java:67 - reading file from /nodeDC2, repairedAt = 1451483586917
> DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 

[jira] [Commented] (CASSANDRA-10448) "Unknown type 0" Stream failure on Repair

2016-01-08 Thread Bernhard K. Weisshuhn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089168#comment-15089168
 ] 

Bernhard K. Weisshuhn commented on CASSANDRA-10448:
---

A quick heads up for the watchers: You might want to try the snapshot jar 
posted in CASSANDRA-10961, it seems to fix the problem for me (repair still 
ongoing, but no errors so far).

> "Unknown type 0" Stream failure on Repair
> -
>
> Key: CASSANDRA-10448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10448
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.2.2
> 5 Nodes in Google Compute Engine
> Java 1.8.0_60
>Reporter: Omri Iluz
>Assignee: Paulo Motta
> Fix For: 2.2.x
>
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, casslogs.txt, 
> receiversystem.log, sendersystem.log
>
>
> While running repair after upgrading to 2.2.2 I am getting many stream fail 
> errors:
> {noformat}
> [2015-10-05 23:52:30,353] Repair session 4c181051-6bbb-11e5-acdb-d9a8bbd39330 for range (59694553044959221,86389982480621619] failed with error [repair #4c181051-6bbb-11e5-acdb-d9a8bbd39330 on px/activities, (59694553044959221,86389982480621619]] Sync failed between /10.240.81.104 and /10.240.134.221 (progress: 4%)
> {noformat}
> Logs from both sides of the stream:
> Sides 1 -
> {noformat}
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 StreamResultFuture.java:111 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 ID#0] Creating new streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, ID#0] Received streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52723] 2015-10-05 23:52:30,063 StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, ID#0] Received streaming plan for Repair
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,098 StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 ID#0] Prepare completed. Receiving 13 files(517391317 bytes), sending 10 files(469491729 bytes)
> ERROR [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,234 StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Streaming error occurred
> java.lang.IllegalArgumentException: Unknown type 0
>   at org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:96) ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:57) ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261) ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Session with /10.240.81.104 is complete
> WARN  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 StreamResultFuture.java:209 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Stream failed
> {noformat}
> Side 2 -
> {noformat}
> INFO  [AntiEntropyStage:1] 2015-10-05 23:52:30,060 StreamResultFuture.java:86 
> - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Executing streaming plan for 
> Repair
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,061 
> StreamSession.java:232 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Starting streaming to /10.240.134.221
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,063 
> StreamCoordinator.java:213 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Beginning stream session with /10.240.134.221
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 10 files(469491729 bytes), sending 13 
> files(517391317 bytes)
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.134.221 is complete
> ERROR [STREAM-OUT-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
>   at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:79)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> 

[jira] [Resolved] (CASSANDRA-10986) MV add_node_after_mv_test is failing on trunk

2016-01-08 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault resolved CASSANDRA-10986.
-
Resolution: Duplicate

Connection issue during cluster creation; duplicate of CASSANDRA-10987.

> MV add_node_after_mv_test is failing on trunk
> -
>
> Key: CASSANDRA-10986
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10986
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Alan Boudreault
> Fix For: 3.x
>
>
> This failure seems to be flaky.
> http://cassci.datastax.com/job/trunk_dtest/897/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test
> {code}
> ==
> ERROR: add_node_after_mv_test (materialized_views_test.TestMaterializedViews)
> --
> Traceback (most recent call last):
>   File "/home/aboudreault/git/cstar/cassandra-dtest/dtest.py", line 558, in 
> tearDown
> raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
> errors))
> AssertionError: Unexpected error in node4 node log: ['ERROR [main] 2016-01-08 
> 08:03:35,980 MigrationManager.java:164 - Migration task failed to 
> complete\nERROR [main] 2016-01-08 08:03:36,980 MigrationManager.java:164 - 
> Migration task failed to complete']
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-W5Ng_M
> dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-W5Ng_M
> dtest: DEBUG: clearing ssl stores from [/tmp/dtest-W5Ng_M] directory
> - >> end captured logging << -
> --
> Ran 1 test in 90.385s
> FAILED (errors=1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10448) "Unknown type 0" Stream failure on Repair

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089307#comment-15089307
 ] 

Paulo Motta commented on CASSANDRA-10448:
-

[~mroi], if you still haven't solved the problem, could you try the 
CASSANDRA-10961 patch to see if it works?

If anyone else tries the patch, please report back so I can close this as a 
duplicate of CASSANDRA-10961.
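For anyone puzzling over the quoted {{Unknown type 0}} error: a simplified, hypothetical model of the kind of dispatch that throws it (the type names and codes below are illustrative, not the real stream protocol constants) looks like this. Each stream message starts with a type byte, and a byte that maps to no known type, e.g. 0 when sender and receiver fall out of sync mid-stream, is rejected.

```java
// Hypothetical, simplified sketch of a type-byte dispatch like the one in
// StreamMessage.Type.get. The enum constants and codes are illustrative only.
public class StreamTypeDemo {
    public enum Type {
        PREPARE(1), FILE(2), RECEIVED(3), COMPLETE(4);

        final int code;
        Type(int code) { this.code = code; }

        // Look up a message type by its wire code; an unrecognized code
        // (such as 0 from a desynced or corrupted stream) is an error.
        public static Type get(int code) {
            for (Type t : values())
                if (t.code == code)
                    return t;
            throw new IllegalArgumentException("Unknown type " + code);
        }
    }

    public static void main(String[] args) {
        System.out.println(Type.get(2));        // FILE
        try {
            Type.get(0);                        // corrupt/desynced stream byte
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Unknown type 0
        }
    }
}
```

This is why a framing bug on one side surfaces as {{Unknown type 0}} on the other: once the reader is misaligned, the next byte it interprets as a type code is effectively random.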

> "Unknown type 0" Stream failure on Repair
> -
>
> Key: CASSANDRA-10448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10448
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.2.2
> 5 Nodes in Google Compute Engine
> Java 1.8.0_60
>Reporter: Omri Iluz
>Assignee: Paulo Motta
> Fix For: 2.2.x
>
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, casslogs.txt, 
> receiversystem.log, sendersystem.log
>
>
> While running repair after upgrading to 2.2.2 I am getting many stream fail 
> errors:
> {noformat}
> [2015-10-05 23:52:30,353] Repair session 4c181051-6bbb-11e5-acdb-d9a8bbd39330 
> for range (59694553044959221,86389982480621619] failed with error [repair 
> #4c181051-6bbb-11e5-acdb-d9a8bbd39330 on px/acti
> vities, (59694553044959221,86389982480621619]] Sync failed between 
> /10.240.81.104 and /10.240.134.221 (progress: 4%)
> {noformat}
> Logs from both sides of the stream:
> Side 1 -
> {noformat}
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:111 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Creating new streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52722] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-INIT-/10.240.81.104:52723] 2015-10-05 23:52:30,063 
> StreamResultFuture.java:118 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Received streaming plan for Repair
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 13 files(517391317 bytes), sending 10 
> files(469491729 bytes)
> ERROR [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,234 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> java.lang.IllegalArgumentException: Unknown type 0
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:96)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:57)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> INFO  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.81.104 is complete
> WARN  [STREAM-IN-/10.240.81.104] 2015-10-05 23:52:30,302 
> StreamResultFuture.java:209 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Stream failed
> {noformat}
> Side 2 -
> {noformat}
> INFO  [AntiEntropyStage:1] 2015-10-05 23:52:30,060 StreamResultFuture.java:86 
> - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] Executing streaming plan for 
> Repair
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,061 
> StreamSession.java:232 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Starting streaming to /10.240.134.221
> INFO  [StreamConnectionEstablisher:6] 2015-10-05 23:52:30,063 
> StreamCoordinator.java:213 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550, 
> ID#0] Beginning stream session with /10.240.134.221
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,098 
> StreamResultFuture.java:168 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550 
> ID#0] Prepare completed. Receiving 10 files(469491729 bytes), sending 13 
> files(517391317 bytes)
> INFO  [STREAM-IN-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamResultFuture.java:182 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Session with /10.240.134.221 is complete
> ERROR [STREAM-OUT-/10.240.134.221] 2015-10-05 23:52:30,349 
> StreamSession.java:524 - [Stream #239d8e60-6bbc-11e5-93ac-31bdef2dc550] 
> Streaming error occurred
> org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
>   at 
> org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) 
> ~[apache-cassandra-2.2.2.jar:2.2.2]
>   at 
> org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:79)
>  ~[apache-cassandra-2.2.2.jar:2.2.2]
>  

[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089161#comment-15089161
 ] 

Paulo Motta commented on CASSANDRA-10961:
-

There is no official forecast yet, but I think it should be released in the 
next month or so. Use the patched jar on new nodes until then.

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, debug.1.log, 
> debug.logs.zip, netstats.1.log
>
>
> we got the same problem all the time when we add nodes to cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> 

[jira] [Created] (CASSANDRA-10987) MV add_node_after_mv_test is failing on trunk

2016-01-08 Thread Alan Boudreault (JIRA)
Alan Boudreault created CASSANDRA-10987:
---

 Summary: MV add_node_after_mv_test is failing on trunk
 Key: CASSANDRA-10987
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10987
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Alan Boudreault
 Fix For: 3.x


This failure seems to be flaky.

http://cassci.datastax.com/job/trunk_dtest/897/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test
{code}
==
ERROR: add_node_after_mv_test (materialized_views_test.TestMaterializedViews)
--
Traceback (most recent call last):
  File "/home/aboudreault/git/cstar/cassandra-dtest/dtest.py", line 558, in 
tearDown
raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
errors))
AssertionError: Unexpected error in node4 node log: ['ERROR [main] 2016-01-08 
08:03:35,980 MigrationManager.java:164 - Migration task failed to 
complete\nERROR [main] 2016-01-08 08:03:36,980 MigrationManager.java:164 - 
Migration task failed to complete']
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-W5Ng_M
dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-W5Ng_M
dtest: DEBUG: clearing ssl stores from [/tmp/dtest-W5Ng_M] directory
- >> end captured logging << -

--
Ran 1 test in 90.385s

FAILED (errors=1)
{code}
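For context, the dtest tearDown in the traceback above fails the test whenever a node's log contains ERROR lines. A rough sketch of that kind of log scan (a hypothetical helper written in Java for illustration; the real dtest framework is Python):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the tearDown check above: scan a node's log text
// for ERROR lines, so the test can fail if the node logged any errors.
public class NodeLogCheck {
    public static List<String> findErrors(String log) {
        List<String> errors = new ArrayList<>();
        for (String line : log.split("\n"))
            if (line.startsWith("ERROR"))
                errors.add(line);
        return errors;
    }

    public static void main(String[] args) {
        String log = "INFO  [main] starting\n"
                   + "ERROR [main] 2016-01-08 08:03:35,980 MigrationManager.java:164 - "
                   + "Migration task failed to complete\n";
        // One ERROR line found; dtest would raise AssertionError here.
        System.out.println(findErrors(log).size());
    }
}
```

So the test itself passed; it is the post-test log sweep that flagged the "Migration task failed to complete" errors on node4.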





[jira] [Created] (CASSANDRA-10988) ClassCastException in SelectStatement

2016-01-08 Thread Vassil Hristov (JIRA)
Vassil Hristov created CASSANDRA-10988:
--

 Summary: ClassCastException in SelectStatement
 Key: CASSANDRA-10988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10988
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Vassil Hristov


After we upgraded our cluster to version 2.1.11, we started getting the 
below exceptions for some of our queries. The issue seems to be very similar 
to CASSANDRA-7284.

{code:java}
java.lang.ClassCastException: 
org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast to 
org.apache.cassandra.db.composites.CellName
at 
org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1197)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1205)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1283)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1250)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:276)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:224)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.11.jar:2.1.11]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_66]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.11.jar:2.1.11]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
{code}
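To illustrate the failure mode (with hypothetical, simplified types, not Cassandra's real composite hierarchy): the empty composite is a valid Composite but not a CellName, so an unchecked downcast after parsing an empty buffer throws exactly this ClassCastException.

```java
// Simplified, hypothetical model of why the cast in the trace above fails:
// an empty buffer parses to an "empty composite" sentinel, which is a
// Composite but not a CellName, so a blind downcast blows up.
public class EmptyCompositeCastDemo {
    public interface Composite {}
    public static final class EmptyComposite implements Composite {}
    public static final class CellName implements Composite {
        public final String name;
        public CellName(String name) { this.name = name; }
    }

    // Parsing an empty buffer yields the empty sentinel, not a CellName.
    public static Composite fromByteBuffer(byte[] bytes) {
        return bytes.length == 0 ? new EmptyComposite()
                                 : new CellName(new String(bytes));
    }

    public static void main(String[] args) {
        Composite c = fromByteBuffer(new byte[0]);
        try {
            CellName n = (CellName) c; // unchecked downcast, as in the trace
            System.out.println(n.name);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as reported");
        }
    }
}
```

In other words, the bug is triggered by a query whose slice bound deserializes to the empty composite on a code path that assumes a concrete cell name.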





[jira] [Commented] (CASSANDRA-10985) OOM during bulk read(slice query) operation

2016-01-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089236#comment-15089236
 ] 

Sylvain Lebresne commented on CASSANDRA-10985:
--

Can you tell us:
* whether this only happened once, or if it's reproducible;
* what version of C* this is (it's clearly 3.X, but a more precise version 
can't hurt);
* whether this is a brand new cluster (on 3.X) or an upgraded one;
* anything more about the activity on the cluster when it happened: was a 
node bootstrapping, were schema changes happening concurrently, was the load 
particularly high, etc.?


> OOM during bulk read(slice query) operation
> ---
>
> Key: CASSANDRA-10985
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10985
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
> Environment: OS : Linux 6.5
> RAM : 126GB
> assign heap size: 8GB
>Reporter: sumit thakur
>
> The thread java.lang.Thread @ 0x55000a4f0 Thrift:6 holds local variables 
> with a total size of 16,214,953,728 bytes (98.23% of the heap).
> The memory is accumulated in one instance of "java.lang.Thread" loaded by 
> "".
> The stacktrace of this Thread is available. See stacktrace.
> Keywords
> java.lang.Thread
> --
> Trace: 
> Thrift:6
>   at java.lang.OutOfMemoryError.()V (OutOfMemoryError.java:48)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.read(Ljava/io/DataInput;I)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:401)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithVIntLength(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (ByteBufferUtil.java:339)
>   at 
> org.apache.cassandra.db.marshal.AbstractType.readValue(Lorg/apache/cassandra/io/util/DataInputPlus;)Ljava/nio/ByteBuffer;
>  (AbstractType.java:391)
>   at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/LivenessInfo;Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;)Lorg/apache/cassandra/db/rows/Cell;
>  (BufferCell.java:298)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(Lorg/apache/cassandra/config/ColumnDefinition;Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;Lorg/apache/cassandra/db/LivenessInfo;)V
>  (UnfilteredSerializer.java:453)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;IILorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Row;
>  (UnfilteredSerializer.java:431)
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(Lorg/apache/cassandra/io/util/DataInputPlus;Lorg/apache/cassandra/db/SerializationHeader;Lorg/apache/cassandra/db/rows/SerializationHelper;Lorg/apache/cassandra/db/rows/Row$Builder;)Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredSerializer.java:360)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Lorg/apache/cassandra/db/rows/Unfiltered;
>  (UnfilteredRowIteratorSerializer.java:217)
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext()Ljava/lang/Object;
>  (UnfilteredRowIteratorSerializer.java:210)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at org.apache.cassandra.db.transform.BaseRows.hasNext()Z (BaseRows.java:108)
>   at 
> org.apache.cassandra.db.LegacyLayout$3.computeNext()Lorg/apache/cassandra/db/LegacyLayout$LegacyCell;
>  (LegacyLayout.java:658)
>   at org.apache.cassandra.db.LegacyLayout$3.computeNext()Ljava/lang/Object; 
> (LegacyLayout.java:640)
>   at org.apache.cassandra.utils.AbstractIterator.hasNext()Z 
> (AbstractIterator.java:47)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyColumns(Lorg/apache/cassandra/config/CFMetaData;Ljava/util/Iterator;)Ljava/util/List;
>  (CassandraServer.java:112)
>   at 
> org.apache.cassandra.thrift.CassandraServer.thriftifyPartition(Lorg/apache/cassandra/db/rows/RowIterator;ZZI)Ljava/util/List;
>  (CassandraServer.java:250)
>   at 
> org.apache.cassandra.thrift.CassandraServer.getSlice(Ljava/util/List;ZILorg/apache/cassandra/db/ConsistencyLevel;Lorg/apache/cassandra/service/ClientState;)Ljava/util/Map;
>  (CassandraServer.java:270)
>   at 
> 

[jira] [Commented] (CASSANDRA-10961) Not enough bytes error when add nodes to cluster

2016-01-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089157#comment-15089157
 ] 

Paulo Motta commented on CASSANDRA-10961:
-

Replacing the patched jar on the new node should suffice. Are you still 
getting the same {{java.lang.IllegalArgumentException: Not enough bytes}} 
error?

Please try scrubbing your data first, as it may genuinely be corrupted. If 
that still doesn't work, please attach the debug.log of the source and 
destination nodes.
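As a simplified illustration of the error itself (hypothetical code, not Cassandra's actual AbstractCType): the check that produces "Not enough bytes" amounts to validating a length prefix against the bytes actually left in the buffer, so a truncated or corrupted stream trips it mid-deserialization.

```java
import java.nio.ByteBuffer;

// Hypothetical, simplified model of the kind of validation done by
// AbstractCType.checkRemaining: each component of a compound cell name is
// length-prefixed, and the reader verifies the buffer really holds that
// many bytes before slicing them out.
public class RemainingCheckDemo {
    public static void checkRemaining(ByteBuffer bb, int needed) {
        if (bb.remaining() < needed)
            throw new IllegalArgumentException("Not enough bytes");
    }

    // Reads one length-prefixed component: [unsigned short length][bytes].
    public static byte[] readComponent(ByteBuffer bb) {
        checkRemaining(bb, 2);
        int length = bb.getShort() & 0xFFFF;
        checkRemaining(bb, length);          // fails on truncated input
        byte[] value = new byte[length];
        bb.get(value);
        return value;
    }

    public static void main(String[] args) {
        // Well-formed: length 3 followed by 3 bytes.
        ByteBuffer ok = ByteBuffer.wrap(new byte[] {0, 3, 'a', 'b', 'c'});
        System.out.println(readComponent(ok).length); // 3

        // Truncated: claims 5 bytes but only 2 follow.
        ByteBuffer bad = ByteBuffer.wrap(new byte[] {0, 5, 'a', 'b'});
        try {
            readComponent(bad);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Not enough bytes
        }
    }
}
```

That is also why scrubbing can matter here: if an on-disk SSTable is genuinely corrupted, the same mismatch between the declared length and the available bytes shows up when the file is streamed.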

> Not enough bytes error when add nodes to cluster
> 
>
> Key: CASSANDRA-10961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10961
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: xiaost
>Assignee: Paulo Motta
> Attachments: apache-cassandra-2.2.4-SNAPSHOT.jar, debug.1.log, 
> debug.logs.zip, netstats.1.log
>
>
> we got the same problem all the time when we add nodes to cluster.
> netstats:
> on HostA
> {noformat}
> /la-38395-big-Data.db 14792091851/14792091851 bytes(100%) sent to idx:0/HostB
> {noformat}
> on HostB
> {noformat}
> tmp-la-4-big-Data.db 2667087450/14792091851 bytes(18%) received from 
> idx:0/HostA
> {noformat}
> After a while, Error on HostB
> {noformat}
> WARN  [STREAM-IN-/HostA] 2016-01-02 12:08:14,737 StreamSession.java:644 - 
> [Stream #b91a4e90-b105-11e5-bd57-dd0cc3b4634c] Retrying for following error
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.appendFromStream(BigTableWriter.java:243)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.StreamReader.writeRow(StreamReader.java:173) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:95)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:49)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:58)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  [apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> ERROR [Thread-28] 2016-01-02 12:08:14,737 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66-internal]
> Caused by: java.lang.InterruptedException: null
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
>  ~[na:1.8.0_66-internal]
> at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:350) 
> ~[na:1.8.0_66-internal]
> at 
> 

[jira] [Created] (CASSANDRA-10986) MV add_node_after_mv_test is failing on trunk

2016-01-08 Thread Alan Boudreault (JIRA)
Alan Boudreault created CASSANDRA-10986:
---

 Summary: MV add_node_after_mv_test is failing on trunk
 Key: CASSANDRA-10986
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10986
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Alan Boudreault
 Fix For: 3.x


This failure seems to be flaky.

http://cassci.datastax.com/job/trunk_dtest/897/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test
{code}
==
ERROR: add_node_after_mv_test (materialized_views_test.TestMaterializedViews)
--
Traceback (most recent call last):
  File "/home/aboudreault/git/cstar/cassandra-dtest/dtest.py", line 558, in 
tearDown
raise AssertionError('Unexpected error in %s node log: %s' % (node.name, 
errors))
AssertionError: Unexpected error in node4 node log: ['ERROR [main] 2016-01-08 
08:03:35,980 MigrationManager.java:164 - Migration task failed to 
complete\nERROR [main] 2016-01-08 08:03:36,980 MigrationManager.java:164 - 
Migration task failed to complete']
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-W5Ng_M
dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-W5Ng_M
dtest: DEBUG: clearing ssl stores from [/tmp/dtest-W5Ng_M] directory
- >> end captured logging << -

--
Ran 1 test in 90.385s

FAILED (errors=1)
{code}





[21/22] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-08 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ad60906
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ad60906
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ad60906

Branch: refs/heads/trunk
Commit: 3ad6090623d1bb146005c5325abc3a513b547dc1
Parents: 87d80b4 9c1679d
Author: Sylvain Lebresne 
Authored: Fri Jan 8 15:26:09 2016 +0100
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:26:09 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/CHANGES.txt
--
diff --cc CHANGES.txt
index 8aab604,1e7f4ed..92493d7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -52,7 -21,12 +52,8 @@@ Merged from 2.2
   * (cqlsh) show correct column names for empty result sets (CASSANDRA-9813)
   * Add new types to Stress (CASSANDRA-9556)
   * Add property to allow listening on broadcast interface (CASSANDRA-9748)
 - * Fix regression in split size on CqlInputFormat (CASSANDRA-10835)
 - * Better handling of SSL connection errors inter-node (CASSANDRA-10816)
 - * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
 - * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
  Merged from 2.1:
+  * Fix pending range calculation during moves (CASSANDRA-10887)
   * Sane default (200Mbps) for inter-DC streaming througput (CASSANDRA-9708)
   * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
   * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/src/java/org/apache/cassandra/dht/Range.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/src/java/org/apache/cassandra/locator/TokenMetadata.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad60906/test/unit/org/apache/cassandra/Util.java
--



[18/22] cassandra git commit: Fix pending range calculation during moves (3.0 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (3.0 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9c1679d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9c1679d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9c1679d1

Branch: refs/heads/cassandra-3.0
Commit: 9c1679d1bd83d1d25fda6dbf29d1738d8e966da5
Parents: 08b241c
Author: sankalp kohli 
Authored: Thu Jan 7 16:24:06 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:25:36 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 ++
 .../org/apache/cassandra/service/MoveTest.java  | 496 ++-
 6 files changed, 581 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7f6b761..1e7f4ed 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@ Merged from 2.2:
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming througput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 985d6f6..b4fed65 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -291,7 +291,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for (Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for (Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])
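For readers skimming the archive, the `subtractAll` addition above can be sketched outside Cassandra using plain numeric `(left, right]` intervals. This is a simplified illustration only: it ignores the ring wraparound that Cassandra's `Range` type handles, and the helper names are ours, not the patch's.

```python
# Simplified model of Range.subtractAll: subtract a collection of intervals
# from a starting interval, keeping the leftover pieces. Intervals are
# (left, right] tuples on a plain number line (no ring wraparound).

def subtract(r, s):
    """Return the set of pieces of interval r not covered by interval s."""
    (a, b), (c, d) = r, s
    if d <= a or b <= c:        # no overlap: r survives untouched
        return {r}
    out = set()
    if a < c:                   # piece of r to the left of s
        out.add((a, c))
    if d < b:                   # piece of r to the right of s
        out.add((d, b))
    return out

def subtract_all(r, ranges):
    """Like Range.subtractAll: fold subtract() over every range in turn."""
    result = {r}
    for s in ranges:
        result = {piece for rem in result for piece in subtract(rem, s)}
    return result
```

For example, subtracting `(10, 20]` and `(50, 60]` from `(0, 100]` leaves three leftover intervals, mirroring how the patch computes the parts of an affected range not already covered.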

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9c1679d1/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 301613c..f6e9cf7 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -814,14 +814,42 @@ public class TokenMetadata
         // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : movingEndpoints)
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                newPendingRanges.addPendingRange(range, endpoint);
+                moveAffectedRanges.add(range);
+            }
+
+            for (Range<Token> range : moveAffectedRanges)
+            {

[12/22] cassandra git commit: Fix pending range calculation during moves (2.2 version)

2016-01-08 Thread slebresne
Fix pending range calculation during moves (2.2 version)

patch by kohlisankalp; reviewed by blambov for CASSANDRA-10887


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23123f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23123f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23123f04

Branch: refs/heads/trunk
Commit: 23123f04fd5e5381742b2bae16bb3e03225598c3
Parents: 44a0578
Author: sankalp kohli 
Authored: Thu Jan 7 16:21:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Jan 8 15:23:30 2016 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/dht/Range.java|  21 +
 .../apache/cassandra/locator/TokenMetadata.java |  34 +-
 test/unit/org/apache/cassandra/Util.java|   4 +-
 .../org/apache/cassandra/dht/RangeTest.java |  55 +++
 .../org/apache/cassandra/service/MoveTest.java  | 491 ++-
 6 files changed, 576 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a26f9e0..e5c4430 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -14,6 +14,7 @@
  * Disable reloading of GossipingPropertyFileSnitch (CASSANDRA-9474)
  * Verify tables in pseudo-system keyspaces at startup (CASSANDRA-10761)
 Merged from 2.1:
+ * Fix pending range calculation during moves (CASSANDRA-10887)
  * Sane default (200Mbps) for inter-DC streaming througput (CASSANDRA-9708)
  * Match cassandra-loader options in COPY FROM (CASSANDRA-9303)
  * Fix binding to any address in CqlBulkRecordWriter (CASSANDRA-9309)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/dht/Range.java
--
diff --git a/src/java/org/apache/cassandra/dht/Range.java 
b/src/java/org/apache/cassandra/dht/Range.java
index 9893531..f2c5996 100644
--- a/src/java/org/apache/cassandra/dht/Range.java
+++ b/src/java/org/apache/cassandra/dht/Range.java
@@ -292,7 +292,28 @@ public class Range<T extends RingPosition<T>> extends AbstractBounds<T> implements Comparable<Range<T>>, Serializable
         return rhs.differenceToFetch(this);
     }
 
+    public Set<Range<T>> subtractAll(Collection<Range<T>> ranges)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        result.add(this);
+        for (Range<T> range : ranges)
+        {
+            result = substractAllFromToken(result, range);
+        }
+
+        return result;
+    }
 
+    private static <T extends RingPosition<T>> Set<Range<T>> substractAllFromToken(Set<Range<T>> ranges, Range<T> subtract)
+    {
+        Set<Range<T>> result = new HashSet<>();
+        for (Range<T> range : ranges)
+        {
+            result.addAll(range.subtract(subtract));
+        }
+
+        return result;
+    }
 /**
  * Calculate set of the difference ranges of given two ranges
  * (as current (A, B] and rhs is (C, D])

http://git-wip-us.apache.org/repos/asf/cassandra/blob/23123f04/src/java/org/apache/cassandra/locator/TokenMetadata.java
--
diff --git a/src/java/org/apache/cassandra/locator/TokenMetadata.java 
b/src/java/org/apache/cassandra/locator/TokenMetadata.java
index 00d8ee9..de16fda 100644
--- a/src/java/org/apache/cassandra/locator/TokenMetadata.java
+++ b/src/java/org/apache/cassandra/locator/TokenMetadata.java
@@ -799,14 +799,42 @@ public class TokenMetadata
         // simply add and remove them one by one to allLeftMetadata and check in between what their ranges would be.
         for (Pair<Token, InetAddress> moving : movingEndpoints)
         {
+            // Calculate all the ranges which could be affected. This will include the ranges before and after the move.
+            Set<Range<Token>> moveAffectedRanges = new HashSet<>();
             InetAddress endpoint = moving.right; // address of the moving node
+            // Add ranges before the move
+            for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
+            {
+                moveAffectedRanges.add(range);
+            }
 
-            //  moving.left is a new token of the endpoint
             allLeftMetadata.updateNormalToken(moving.left, endpoint);
-
+            // Add ranges after the move
             for (Range<Token> range : strategy.getAddressRanges(allLeftMetadata).get(endpoint))
             {
-                newPendingRanges.addPendingRange(range, endpoint);
+                moveAffectedRanges.add(range);
+            }
+
+            for (Range<Token> range : moveAffectedRanges)
+            {
+                Set<InetAddress> currentEndpoints

[jira] [Resolved] (CASSANDRA-10927) Stream failed during bootstrap

2016-01-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-10927.
-
Resolution: Duplicate

> Stream failed during bootstrap
> --
>
> Key: CASSANDRA-10927
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10927
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: CentOS 7 x64, Java 1.8.0_65
>Reporter: Kai Wang
>Assignee: Paulo Motta
>
> When I start the new node, I got this error:
> {noformat}
> ERROR [STREAM-IN-/192.168.0.10] 2015-12-22 15:37:56,302 
> StreamSession.java:524 - [Stream #bfc4e100-a8eb-11e5-bec5-67d8099a8b91] 
> Streaming error occurred
> java.nio.channels.ClosedChannelException: null
>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(Unknown Source) 
> ~[na:1.8.0_65]
>   at sun.nio.ch.SocketChannelImpl.read(Unknown Source) ~[na:1.8.0_65]
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:53)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:261)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
>   at java.lang.Thread.run(Unknown Source) [na:1.8.0_65]
> ERROR [Thread-22] 2015-12-22 15:37:56,302 CassandraDaemon.java:185 - 
> Exception in thread Thread[Thread-22,5,main]
> java.lang.RuntimeException: java.lang.InterruptedException
>   at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
>   at java.lang.Thread.run(Unknown Source) ~[na:1.8.0_65]
> Caused by: java.lang.InterruptedException: null
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(Unknown
>  Source) ~[na:1.8.0_65]
>   at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(Unknown 
> Source) ~[na:1.8.0_65]
>   at java.util.concurrent.ArrayBlockingQueue.put(Unknown Source) 
> ~[na:1.8.0_65]
>   at 
> org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:176)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
>   ... 1 common frames omitted
> INFO  [STREAM-IN-/192.168.0.10] 2015-12-22 15:37:56,345 
> StreamResultFuture.java:182 - [Stream #bfc4e100-a8eb-11e5-bec5-67d8099a8b91] 
> Session with /192.168.0.10 is complete
> WARN  [STREAM-IN-/192.168.0.10] 2015-12-22 15:37:56,346 
> StreamResultFuture.java:209 - [Stream #bfc4e100-a8eb-11e5-bec5-67d8099a8b91] 
> Stream failed
> ERROR [main] 2015-12-22 15:37:56,347 StorageService.java:1245 - Error while 
> waiting on bootstrap to complete. Bootstrap will have to be restarted.
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1240)
>  [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:920)
>  [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:705)
>  [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:581)
>  [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:315) 
> [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:529)
>  [apache-cassandra-2.2.4.jar:2.2.4]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:638) 
> [apache-cassandra-2.2.4.jar:2.2.4]
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
>   at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
>   at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.jar:na]
>   at 
> 

[jira] [Updated] (CASSANDRA-10948) CQLSH error when trying to insert non-ascii statement

2016-01-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10948:
--
Priority: Minor  (was: Major)

> CQLSH error when trying to insert non-ascii statement
> -
>
> Key: CASSANDRA-10948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matthieu Nantern
>Priority: Minor
>  Labels: lhf
> Attachments: patch_CASSANDRA-10948
>
>
> We recently upgraded Cassandra to v2.2.4 with CQLSH 5.0.1 and we are now 
> unable to import some CQL file (with French character like 'ê'). It was 
> working on v2.0.12.
> The issue:
> {noformat}
> Using CQL driver: <module 'cassandra' from '/OPT/cassandra/dsc-cassandra-2.2.4/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Traceback (most recent call last):
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1110, in onecmd
> self.handle_statement(st, statementtext)
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1135, in 
> handle_statement
> readline.add_history(new_hist)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 
> 7192: ordinal not in range(128)
> {noformat}
> The issue was corrected by changing line 1135 of cqlsh.py (but I don't know 
> if it's the correct way to do it):
> readline.add_history(new_hist)  -> 
> readline.add_history(new_hist.encode('utf8'))
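The encoding behaviour behind this fix can be reproduced in isolation. Under Python 2, `readline.add_history` needed a byte string, so a unicode statement containing 'ê' (`u'\xea'`) was implicitly ascii-encoded and raised; the patch encodes explicitly as UTF-8. The statement text below is a made-up example, not from the reporter's CQL file:

```python
# Reproduce the cqlsh history bug: ascii-encoding a unicode statement with
# a French character fails, while an explicit utf-8 encode (the fix) succeeds.
statement = u"INSERT INTO t (v) VALUES ('r\xeave')"  # contains 'ê' (u'\xea')

try:
    statement.encode('ascii')   # what Python 2 did implicitly before the fix
    ascii_failed = False
except UnicodeEncodeError:      # "'ascii' codec can't encode character u'\xea'"
    ascii_failed = True

history_entry = statement.encode('utf8')  # the argument the patched call passes
```

This matches the traceback in the report: the failure is in history handling, not in statement execution, which is why the CQL file worked on 2.0.12 where the history line differed.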




