[jira] [Commented] (CASSANDRA-13345) Increasing the per-thread stack size to at least 512k

2017-04-10 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963814#comment-15963814
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13345:
---

[~jasobrown] [~snazy] - Just wanted to know: if I create a branch locally 
under my own account on GitHub, how will it be made public, i.e. how will it 
appear to everyone at https://github.com/apache/cassandra/ ? I was of the opinion 
that a separate branch would be created at 
https://github.com/apache/cassandra/branches (say, for example, ppc64le/trunk) and 
all the changes pertaining to ppc64le would be upstreamed there. 

Let me know your views on this, and how the changes made on a ppc64le 
development branch will be made public via the Apache Cassandra repository. 

> Increasing the per-thread stack size to at least 512k 
> -
>
> Key: CASSANDRA-13345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13345
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Set up details
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):160
> On-line CPU(s) list:   0-159
> Thread(s) per core:8
> Core(s) per socket:5
> Socket(s): 4
> NUMA node(s):  4
> Model: 2.1 (pvr 004b 0201)
> Model name:POWER8E (raw), altivec supported
> CPU max MHz:   3690.
> CPU min MHz:   2061.
> L1d cache: 64K
> L1i cache: 32K
> L2 cache:  512K
> L3 cache:  8192K
> NUMA node0 CPU(s): 0-39
> NUMA node1 CPU(s): 40-79
> NUMA node16 CPU(s):80-119
> NUMA node17 CPU(s):120-159
> $ cat /etc/os-release
> NAME="Ubuntu"
> VERSION="16.04.1 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04.1 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> VERSION_CODENAME=xenial
> UBUNTU_CODENAME=xenial
> $ arch
> ppc64le
> $ java -version
> openjdk version "1.8.0_121"
> OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>
> Hi All,
> I followed the steps below:
> ```
> $ git clone https://github.com/apache/cassandra.git
> $ cd cassandra/
> $ ant
> $ bin/cassandra -f
> The stack size specified is too small, Specify at least 328k
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> ```
> After getting this error, I had to increase the thread stack size to 512k in 
> 'conf/jvm.options':
> ```
> $ git diff conf/jvm.options
> diff --git a/conf/jvm.options b/conf/jvm.options
> index 49b2196..00c03ce 100644
> --- a/conf/jvm.options
> +++ b/conf/jvm.options
> @@ -99,7 +99,7 @@
>  -XX:+HeapDumpOnOutOfMemoryError
>  # Per-thread stack size.
> --Xss256k
> +-Xss512k
>  # Larger interned string table, for gossip's benefit (CASSANDRA-6410)
>  -XX:StringTableSize=1000003
> ```
> Thereafter I was able to start the Cassandra server successfully.
> Could you please consider increasing the stack size to '512k' in 
> 'conf/jvm.options', similar to 
> "https://issues.apache.org/jira/browse/CASSANDRA-13300"? Let me 
> know if we can increase the stack size in the Apache Cassandra trunk.
> Thanks for the support provided so far.
> Regards,
> Amit
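The report above hinges on the JVM's per-thread stack size flag ({{-Xss}}). As a quick illustration (not part of the original report; the class name and thresholds are ours), this probe recurses until the stack overflows and reports the depth reached, which grows roughly in proportion to the stack size when run with {{-Xss512k}} instead of {{-Xss256k}}:

```java
// Probe how many stack frames fit in one thread's stack.
// Run e.g.:  java -Xss256k StackDepthProbe   vs.   java -Xss512k StackDepthProbe
public class StackDepthProbe {
    static int depth = 0;

    static void recurse() {
        depth++;       // count one stack frame
        recurse();     // recurse until the stack is exhausted
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError expected) {
            // depth now holds the number of frames that fit before overflow
            System.out.println("Max recursion depth: " + depth);
        }
    }
}
```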



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13431) Streaming error occurred org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe

2017-04-10 Thread krish (JIRA)
krish created CASSANDRA-13431:
-

 Summary: Streaming error occurred 
org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
 Key: CASSANDRA-13431
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13431
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: ubuntu, cassandra 2.2.7, AWS EC2
Reporter: krish
 Fix For: 2.2.7


I am trying to add a node to the cluster. 
Adding the new node fails with a broken pipe; Cassandra fails within 2 minutes 
of starting. 

I removed the node from the ring. Adding it back fails again. 

OS info: 4.4.0-59-generic #80-Ubuntu SMP x86_64 GNU/Linux.

ERROR [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,410 StreamSession.java:532 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] Streaming error occurred
org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
	at org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:91) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:88) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.applyToChannel(BufferedDataOutputStreamPlus.java:297) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:87) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:90) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:48) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:40) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:47) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:389) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:361) ~[apache-cassandra-2.2.7.jar:2.2.7]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.IOException: Broken pipe
	at sun.nio.ch.FileChannelImpl.transferTo0(Native Method) ~[na:1.8.0_101]
	at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428) ~[na:1.8.0_101]
	at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493) ~[na:1.8.0_101]
	at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608) ~[na:1.8.0_101]
	at org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:140) ~[apache-cassandra-2.2.7.jar:2.2.7]
	... 11 common frames omitted
INFO  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,424 StreamResultFuture.java:183 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] Session with /123.120.56.71 is complete
WARN  [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,425 StreamResultFuture.java:210 - [Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de] Stream failed




[jira] [Updated] (CASSANDRA-13307) The specification of protocol version in cqlsh means the python driver doesn't automatically downgrade protocol version.

2017-04-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13307:
---
Status: Ready to Commit  (was: Patch Available)

> The specification of protocol version in cqlsh means the python driver 
> doesn't automatically downgrade protocol version.
> 
>
> Key: CASSANDRA-13307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matt Byrd
>Assignee: Matt Byrd
>Priority: Minor
> Fix For: 3.11.x
>
>
> Hi,
> Looks like we've regressed on the issue described in:
> https://issues.apache.org/jira/browse/CASSANDRA-9467
> In that we're no longer able to connect from newer cqlsh versions
> (e.g. trunk) to older versions of Cassandra with a lower version of the 
> protocol (e.g. 2.1 with protocol version 3).
> The problem seems to be that we're relying on the ability for the client to 
> automatically downgrade protocol version implemented in Cassandra here:
> https://issues.apache.org/jira/browse/CASSANDRA-12838
> and utilised in the python client here:
> https://datastax-oss.atlassian.net/browse/PYTHON-240
> The problem however comes when we implemented:
> https://datastax-oss.atlassian.net/browse/PYTHON-537
> "Don't downgrade protocol version if explicitly set" 
> (included when we bumped from 3.5.0 to 3.7.0 of the python driver as part of 
> fixing: https://issues.apache.org/jira/browse/CASSANDRA-11534)
> Since we do explicitly specify the protocol version in the bin/cqlsh.py.
> I've got a patch which just adds an option to explicitly specify the protocol 
> version (for those who want to do that) and then otherwise defaults to not 
> setting the protocol version, i.e using the protocol version from the client 
> which we ship, which should by default be the same protocol as the server.
> Then it should downgrade gracefully as was intended. 
> Let me know if that seems reasonable.
> Thanks,
> Matt



[jira] [Commented] (CASSANDRA-13422) CompactionStrategyManager should take write not read lock when handling remove notifications

2017-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963650#comment-15963650
 ] 

Ariel Weisberg commented on CASSANDRA-13422:


Committed as 
[c97514243e8c58bdda0ebf75212a8a217f3d017e|https://github.com/apache/cassandra/commit/c97514243e8c58bdda0ebf75212a8a217f3d017e]

> CompactionStrategyManager should take write not read lock when handling 
> remove notifications
> 
>
> Key: CASSANDRA-13422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13422
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0, 3.11.x
>
>
> {{getNextBackgroundTask}} in various compaction strategies (definitely 
> {{LCS}}) relies on checking the result of {{DataTracker.getCompacting()}} to 
> avoid accessing data and metadata related to tables that have already had 
> their resources released.
> There is a race where this check is unreliable and will claim that a table 
> whose resources have already been released is not compacting, resulting in a 
> use-after-free.
> [{{LeveledCompactionStrategy.findDroppableSSTable}}|https://github.com/apache/cassandra/blob/c794d2bed7ca1d10e13c4da08a3d45f5c755c1d8/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java#L504]
>  for instance has a three-part logical && condition whose first check is 
> against the compacting set, before calling {{worthDroppingTombstones}}, 
> which fails if the table has been released.
> The order of events is basically that CompactionStrategyManager acquires the 
> read lock in getNextBackgroundTask(), then proceeds eventually to 
> findDroppableSSTable and acquires a set of SSTables from the manifest. While 
> the manifest is thread safe it's not accessed atomically WRT to other 
> operations. Once it has acquired the set of tables it acquires (not 
> atomically) the set of compacting SSTables and iterates checking the former 
> against the latter.
> Meanwhile other compaction threads are marking tables obsolete or compacted 
> and releasing their references. Doing this removes them from {{DataTracker}} 
> and publishes a notification to the strategies, but this notification only 
> requires the read lock. After the compaction thread has published the 
> notifications it eventually marks the table as not compacting in 
> {{DataTracker}} or removes it entirely.
> The race is then that the compaction thread generating a new background task 
> acquires the sstables from the manifest on the stack. Any table in that set 
> that was compacting at that time must remain compacting so that it can be 
> skipped. Another compaction thread finishes a compaction and is able to 
> remove the table from the manifest and then remove it from the compacting 
> set. The thread generating the background task then acquires the list of 
> compacting tables which doesn't include the table it is supposed to skip.
> The simple fix appears to be to require threads to acquire the write lock in 
> order to publish notifications of tables being removed from compaction 
> strategies. While holding the write lock it won't be possible for someone to 
> see a view of tables in the manifest where tables that are compacting aren't 
> compacting in the view.
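The fix described above can be sketched with {{java.util.concurrent.locks.ReentrantReadWriteLock}}. This is a minimal illustration, not the actual CompactionStrategyManager code; the class, field, and method names below are ours:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: removal notifications take the WRITE lock so a reader holding the
// read lock always sees a mutually consistent view of the manifest and the
// compacting set.
public class StrategyLockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Set<String> manifest = new HashSet<>();
    private final Set<String> compacting = new HashSet<>();

    public void addCompacting(String table) {
        lock.writeLock().lock();
        try { manifest.add(table); compacting.add(table); }
        finally { lock.writeLock().unlock(); }
    }

    public void finishCompaction(String table) {
        lock.writeLock().lock();
        try { compacting.remove(table); }
        finally { lock.writeLock().unlock(); }
    }

    // Before the fix this path took only the read lock, so a concurrent
    // reader could see the table in the manifest but already gone from the
    // compacting set, and touch released resources.
    public void handleDeletingNotification(String deleted) {
        lock.writeLock().lock();
        try { manifest.remove(deleted); compacting.remove(deleted); }
        finally { lock.writeLock().unlock(); }
    }

    // Reader-side check, analogous to the compacting-set guard in
    // getNextBackgroundTask(): only tables present and not compacting are safe.
    public boolean isSafeToTouch(String table) {
        lock.readLock().lock();
        try { return manifest.contains(table) && !compacting.contains(table); }
        finally { lock.readLock().unlock(); }
    }
}
```

Because writers exclude readers, the reader's two lookups behave atomically with respect to any removal.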



[jira] [Updated] (CASSANDRA-13422) CompactionStrategyManager should take write not read lock when handling remove notifications

2017-04-10 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13422:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> CompactionStrategyManager should take write not read lock when handling 
> remove notifications



[1/3] cassandra git commit: Use write lock not read lock for removing sstables from compaction strategies.

2017-04-10 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 1a7b1ee4d -> c97514243
  refs/heads/trunk aa65c6c54 -> f6f50129d


Use write lock not read lock for removing sstables from compaction strategies.

Patch by Ariel Weisberg; Reviewed by Marcus Eriksson for CASSANDRA-13422


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c9751424
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c9751424
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c9751424

Branch: refs/heads/cassandra-3.11
Commit: c97514243e8c58bdda0ebf75212a8a217f3d017e
Parents: 1a7b1ee
Author: Ariel Weisberg 
Authored: Thu Apr 6 17:53:04 2017 -0400
Committer: Ariel Weisberg 
Committed: Mon Apr 10 16:45:15 2017 -0700

--
 CHANGES.txt  | 1 +
 .../cassandra/db/compaction/CompactionStrategyManager.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9751424/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a7b464a..7998e10 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
  * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
  * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
  * Add charset to Analyser input stream (CASSANDRA-13151)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9751424/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index 5679338..df89e53 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -531,14 +531,14 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 
 private void handleDeletingNotification(SSTableReader deleted)
 {
-readLock.lock();
+writeLock.lock();
 try
 {
 getCompactionStrategyFor(deleted).removeSSTable(deleted);
 }
 finally
 {
-readLock.unlock();
+writeLock.unlock();
 }
 }
 



[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-04-10 Thread aweisberg
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6f50129
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6f50129
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6f50129

Branch: refs/heads/trunk
Commit: f6f50129d72b149a62f7e26e081e4d43097f9236
Parents: aa65c6c c975142
Author: Ariel Weisberg 
Authored: Mon Apr 10 16:45:27 2017 -0700
Committer: Ariel Weisberg 
Committed: Mon Apr 10 16:45:27 2017 -0700

--
 CHANGES.txt  | 1 +
 .../cassandra/db/compaction/CompactionStrategyManager.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6f50129/CHANGES.txt
--
diff --cc CHANGES.txt
index 3535b5f,7998e10..5c38307
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,5 +1,59 @@@
 +4.0
 + * Take number of files in L0 in account when estimating remaining compaction 
tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements 
(CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples 
(CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from 
actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService 
(CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files 
(CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair 
(CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix consistency of incrementally repaired data (CASSANDRA-9143)
 + * Increase commitlog version (CASSANDRA-13161)
 + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425)
 + * Refactor ColumnCondition (CASSANDRA-12981)
 + * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 + * Improved compactions metrics (CASSANDRA-13015)
 + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
 + * Thrift removal (CASSANDRA-5)
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
 + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084)
 + * Require forceful decommission if number of nodes is less than replication 
factor (CASSANDRA-12510)
 + * Allow IN restrictions on column families with collections (CASSANDRA-12654)
 + * Log message size in trace message in OutboundTcpConnection 
(CASSANDRA-13028)
 + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029)
 + * Add mutation size and batch metrics (CASSANDRA-12649)
 + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999)
 + * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
 + * Conditionally update index built status to avoid unnecessary flushes 
(CASSANDRA-12969)
 + * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 + * Add histogram for delay to deliver hints (CASSANDRA-13234)
 +
 +
  3.11.0
+  * Use write lock not read lock for removing sstables from compaction strategies. (CASSANDRA-13422)

[2/3] cassandra git commit: Use write lock not read lock for removing sstables from compaction strategies.

2017-04-10 Thread aweisberg
Use write lock not read lock for removing sstables from compaction strategies.

Patch by Ariel Weisberg; Reviewed by Marcus Eriksson for CASSANDRA-13422


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c9751424
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c9751424
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c9751424

Branch: refs/heads/trunk
Commit: c97514243e8c58bdda0ebf75212a8a217f3d017e
Parents: 1a7b1ee
Author: Ariel Weisberg 
Authored: Thu Apr 6 17:53:04 2017 -0400
Committer: Ariel Weisberg 
Committed: Mon Apr 10 16:45:15 2017 -0700

--
 CHANGES.txt  | 1 +
 .../cassandra/db/compaction/CompactionStrategyManager.java   | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9751424/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a7b464a..7998e10 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
  * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
  * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
  * Add charset to Analyser input stream (CASSANDRA-13151)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9751424/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index 5679338..df89e53 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -531,14 +531,14 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 
 private void handleDeletingNotification(SSTableReader deleted)
 {
-readLock.lock();
+writeLock.lock();
 try
 {
 getCompactionStrategyFor(deleted).removeSSTable(deleted);
 }
 finally
 {
-readLock.unlock();
+writeLock.unlock();
 }
 }
 



[jira] [Comment Edited] (CASSANDRA-13422) CompactionStrategyManager should take write not read lock when handling remove notifications

2017-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15959848#comment-15959848
 ] 

Ariel Weisberg edited comment on CASSANDRA-13422 at 4/10/17 11:42 PM:
--

||Code|utests|dtests||
|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13422-3.11?expand=1]|[utests|https://circleci.com/gh/aweisberg/cassandra/10]|[dtests|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/6/#showFailuresLink]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13422-trunk?expand=1]|[utests|https://circleci.com/gh/aweisberg/cassandra/12]|[dtests|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/5/]|


was (Author: aweisberg):
||Code|utests|dtests||
|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13422-3.11?expand=1]|[utests|https://circleci.com/gh/aweisberg/cassandra/10]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13422-trunk?expand=1]|[utests|https://circleci.com/gh/aweisberg/cassandra/12]|

> CompactionStrategyManager should take write not read lock when handling 
> remove notifications



[jira] [Commented] (CASSANDRA-9945) Add transparent data encryption core classes

2017-04-10 Thread Vincenzo Melandri (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15963553#comment-15963553
 ] 

Vincenzo Melandri commented on CASSANDRA-9945:
--

Hi [~jasobrown], still checking up on this feature and CASSANDRA-9633 to see 
how far it is :) Any updates?

> Add transparent data encryption core classes
> 
>
> Key: CASSANDRA-9945
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9945
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption
> Fix For: 3.2
>
>
> This patch will add the core infrastructure classes necessary for transparent 
> data encryption (file-level encryption), as required for CASSANDRA-6018 and 
> CASSANDRA-9633.  The phrase "transparent data encryption", while not the most 
> aesthetically pleasing, seems to be used throughout the database industry 
> (Oracle, SQL Server, DataStax Enterprise) to describe file-level encryption, 
> so we'll go with that, as well. 



cassandra-builds git commit: Fix comment on ant target trigger

2017-04-10 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master 1ba239fcf -> fc452326e


Fix comment on ant target trigger


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/fc452326
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/fc452326
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/fc452326

Branch: refs/heads/master
Commit: fc452326e1dc6d156637180b493128616fcd91ef
Parents: 1ba239f
Author: Michael Shuler 
Authored: Mon Apr 10 16:52:25 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 16:52:25 2017 -0500

--
 jenkins-dsl/cassandra_job_dsl_seed.groovy | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/fc452326/jenkins-dsl/cassandra_job_dsl_seed.groovy
--
diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy 
b/jenkins-dsl/cassandra_job_dsl_seed.groovy
index 7614dab..e70e730 100644
--- a/jenkins-dsl/cassandra_job_dsl_seed.groovy
+++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy
@@ -259,7 +259,7 @@ cassandraBranches.each {
 testTargets.each {
 def targetName = it
 
-// Run default dtest daily and variations weekly
+// Run default ant test daily and variations weekly
 if (targetName == 'test') {
 triggerInterval = '@daily'
 } else {



cassandra-builds git commit: Drop variation ant test targets to @weekly

2017-04-10 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master 8d12386f4 -> 1ba239fcf


Drop variation ant test targets to @weekly


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/1ba239fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/1ba239fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/1ba239fc

Branch: refs/heads/master
Commit: 1ba239fcf68c6f066d397d22c50e2a045e85d0bc
Parents: 8d12386
Author: Michael Shuler 
Authored: Mon Apr 10 16:50:40 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 16:50:40 2017 -0500

--
 jenkins-dsl/cassandra_job_dsl_seed.groovy | 10 ++
 1 file changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/1ba239fc/jenkins-dsl/cassandra_job_dsl_seed.groovy
--
diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy 
b/jenkins-dsl/cassandra_job_dsl_seed.groovy
index 1ca7108..7614dab 100644
--- a/jenkins-dsl/cassandra_job_dsl_seed.groovy
+++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy
@@ -259,6 +259,13 @@ cassandraBranches.each {
 testTargets.each {
 def targetName = it
 
+// Run default dtest daily and variations weekly
+if (targetName == 'test') {
+triggerInterval = '@daily'
+} else {
+triggerInterval = '@weekly'
+}
+
 // Skip test-cdc on cassandra-2.2 and cassandra-3.0 branches
 if ((targetName == 'test-cdc') && ((branchName == 'cassandra-2.2') || 
(branchName == 'cassandra-3.0'))) {
 println("Skipping ${targetName} on branch ${branchName}")
@@ -269,6 +276,9 @@ cassandraBranches.each {
 configure { node ->
 node / scm / branches / 'hudson.plugins.git.BranchSpec' / 
name(branchName)
 }
+triggers {
+scm(triggerInterval)
+}
 steps {
 
shell("./cassandra-builds/build-scripts/cassandra-unittest.sh ${targetName}")
 }



cassandra-builds git commit: Remove ccm cluster before attempting creation

2017-04-10 Thread mshuler
Repository: cassandra-builds
Updated Branches:
  refs/heads/master 7ec9534c7 -> 8d12386f4


Remove ccm cluster before attempting creation


Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/8d12386f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/8d12386f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/8d12386f

Branch: refs/heads/master
Commit: 8d12386f463dda5c2b9ea096577364a76db67336
Parents: 7ec9534
Author: Michael Shuler 
Authored: Mon Apr 10 16:44:50 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 16:44:50 2017 -0500

--
 build-scripts/cassandra-cqlsh-tests.sh | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/8d12386f/build-scripts/cassandra-cqlsh-tests.sh
--
diff --git a/build-scripts/cassandra-cqlsh-tests.sh 
b/build-scripts/cassandra-cqlsh-tests.sh
index f7218eb..cec28ca 100755
--- a/build-scripts/cassandra-cqlsh-tests.sh
+++ b/build-scripts/cassandra-cqlsh-tests.sh
@@ -48,6 +48,7 @@ fi
 #
 
 
+ccm remove test || true # in case an old ccm cluster is left behind
 ccm create test -n 1
 ccm updateconf "enable_user_defined_functions: true"
 



[jira] [Commented] (CASSANDRA-13276) Regression on CASSANDRA-11416: can't load snapshots of tables with dropped columns

2017-04-10 Thread Matt Kopit (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963501#comment-15963501
 ] 

Matt Kopit commented on CASSANDRA-13276:


Thanks Andrés. Will this be patched for Cassandra ver. 3.0.x, too?

> Regression on CASSANDRA-11416: can't load snapshots of tables with dropped 
> columns
> --
>
> Key: CASSANDRA-13276
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13276
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Matt Kopit
>Assignee: Andrés de la Peña
> Fix For: 3.0.13, 3.11.0, 4.0
>
>
> I'm running Cassandra 3.10 and running into the exact same issue described in 
> CASSANDRA-11416: 
> 1. A table is created with columns 'a' and 'b'
> 2. Data is written to the table
> 3. Drop column 'b'
> 4. Take a snapshot
> 5. Drop the table
> 6. Run the snapshot schema.cql to recreate the table and the run the alter
> 7. Try to restore the snapshot data using sstableloader
> sstableloader yields the error:
> java.lang.RuntimeException: Unknown column b during deserialization





[jira] [Commented] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-04-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963472#comment-15963472
 ] 

Alex Petrov commented on CASSANDRA-10968:
-

I might have overlooked the answer to this question: is this also applicable to 
3.x/trunk? 

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
> Fix For: 2.1.12
>
>
> Noticed indeterminate behaviour when taking snapshots on column families that 
> have secondary indexes set up. The manifest.json created during the snapshot 
> sometimes contains no file names at all and sometimes only some of the file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html





[jira] [Updated] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-04-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10968:

Reviewer:   (was: Alex Petrov)

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
> Fix For: 2.1.12
>
>
> Noticed indeterminate behaviour when taking snapshots on column families that 
> have secondary indexes set up. The manifest.json created during the snapshot 
> sometimes contains no file names at all and sometimes only some of the file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html





[jira] [Comment Edited] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-04-10 Thread Aleksandr Sorokoumov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15961840#comment-15961840
 ] 

Aleksandr Sorokoumov edited comment on CASSANDRA-10968 at 4/10/17 8:17 PM:
---

I was able to reproduce the behavior described in 
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html by 
creating a snapshot on a table with 2 columns and a secondary index:

{CODE}
CREATE KEYSPACE X
  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

USE X;

CREATE TABLE table1 (
  col1 varchar,
  col2 varchar,
  PRIMARY KEY (col1, col2)
);
CREATE INDEX col2_idx ON X.table1 (col2);

INSERT INTO x.table1 (col1, col2) VALUES ('a1', 'a2');
INSERT INTO x.table1 (col1, col2) VALUES ('b1', 'b2');
{CODE}

Before the patch, branch cassandra-2.1.12:
{CODE}
$ bin/nodetool snapshot x
Requested creating snapshot(s) for [x] with snapshot name [1491658291872]
Snapshot directory: 1491658291872

$ cat 
data/data/x/table1-a47092a01aa011e7b2e959ff5fdd622a/snapshots/1491658291872/manifest.json
{"files":["x-table1.col2_idx-ka-1-Data.db"]}
{CODE}

In the manifest above, the base table's sstable is missing; only the secondary 
index file is listed.

After the patch:
{CODE}
$ git checkout 10968-2.1.12
previous HEAD position was a6619e56b1... bump 2.1 versions
Switched to branch '10968-2.1.12'

$ bin/nodetool snapshot x
Requested creating snapshot(s) for [x] with snapshot name [1491658830545]
Snapshot directory: 1491658830545

$ cat 
data/data/x/table1-a47092a01aa011e7b2e959ff5fdd622a/snapshots/1491658830545/manifest.json
{"files":["x-table1-ka-1-Data.db","x-table1.col2_idx-ka-1-Data.db"]}
{CODE}

*Links to the branches:*

* https://github.com/Ge/cassandra/tree/10968-2.1.12
* https://github.com/Ge/cassandra/tree/10968-2.2.4
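The before/after manifest checks above can also be scripted; a minimal hedged sketch in Java (the class is hypothetical, with file names taken from the manifests shown above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: report which expected sstable files are absent from a
// snapshot's manifest.json (naive quoted-substring check, sufficient here).
public class ManifestCheck {
    public static List<String> missing(String manifestJson, List<String> expected) {
        List<String> absent = new ArrayList<>();
        for (String f : expected)
            if (!manifestJson.contains("\"" + f + "\""))
                absent.add(f);
        return absent;
    }

    public static void main(String[] args) {
        // The pre-patch manifest from the comment above: only the index sstable.
        String buggy = "{\"files\":[\"x-table1.col2_idx-ka-1-Data.db\"]}";
        System.out.println(missing(buggy, Arrays.asList(
                "x-table1-ka-1-Data.db", "x-table1.col2_idx-ka-1-Data.db")));
        // prints: [x-table1-ka-1-Data.db]
    }
}
```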


was (Author: ge):
I was able to reproduce the behavior described in 
http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html by 
creating a snapshot on a table with 2 columns and a secondary index:

{CODE}
CREATE KEYSPACE X
  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

USE X;

CREATE TABLE table1 (
  col1 varchar,
  col2 varchar,
  PRIMARY KEY (col1, col2)
);
CREATE INDEX col2_idx ON X.table1 (col2);

INSERT INTO x.table1 (col1, col2) VALUES ('a1', 'a2');
INSERT INTO x.table1 (col1, col2) VALUES ('b1', 'b2');
{CODE}

Before the patch, branch cassandra-2.1.12:
{CODE}
$ bin/nodetool snapshot x
Requested creating snapshot(s) for [x] with snapshot name [1491658291872]
Snapshot directory: 1491658291872

$ cat 
data/data/x/table1-a47092a01aa011e7b2e959ff5fdd622a/snapshots/1491658291872/manifest.json
{"files":["x-table1.col2_idx-ka-1-Data.db"]}
{CODE}

In the manifest above, the base table's sstable is missing; only the secondary 
index file is listed.

After the patch:
{CODE}
$ git checkout 10968-2.1.12
previous HEAD position was a6619e56b1... bump 2.1 versions
Switched to branch '10968-2.1.12'

$ bin/nodetool snapshot x
Requested creating snapshot(s) for [x] with snapshot name [1491658830545]
Snapshot directory: 1491658830545

$ cat 
data/data/x/table1-a47092a01aa011e7b2e959ff5fdd622a/snapshots/1491658830545/manifest.json
{"files":["x-table1-ka-1-Data.db","x-table1.col2_idx-ka-1-Data.db"]}
{CODE}

*Link to the branch* https://github.com/Ge/cassandra/tree/10968-2.1.12

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
> Fix For: 2.1.12
>
>
> Noticed indeterminate behaviour when taking snapshots on column families that 
> have secondary indexes set up. The manifest.json created during the snapshot 
> sometimes contains no file names at all and sometimes only some of the file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html





[jira] [Commented] (CASSANDRA-13329) max_hints_delivery_threads does not work

2017-04-10 Thread Aleksandr Sorokoumov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963421#comment-15963421
 ] 

Aleksandr Sorokoumov commented on CASSANDRA-13329:
--

Hey [~alekiv],

Thank you for the suggestion! I have created a branch for 3.0 - 
https://github.com/Ge/cassandra/tree/13329-3.0
The fix there covers only HintsDispatchExecutor, since PerSSTableIndexWriter 
does not exist on the 3.0 branch.

> max_hints_delivery_threads does not work
> 
>
> Key: CASSANDRA-13329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13329
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fuud
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
>
> HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize  
> == 1 and maxPoolSize==max_hints_delivery_threads and unbounded 
> LinkedBlockingQueue.
> In this configuration additional threads will not be created.
> Same problem with PerSSTableIndexWriter.
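The quoted behaviour is standard java.util.concurrent semantics rather than anything Cassandra-specific: ThreadPoolExecutor only creates threads beyond corePoolSize when the work queue rejects an offer, and an unbounded LinkedBlockingQueue never rejects. A self-contained demonstration (plain ThreadPoolExecutor standing in for JMXEnabledThreadPoolExecutor, which extends it):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Same shape as the reported configuration: core=1, max=8, unbounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 100; i++)
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        Thread.sleep(200); // let the pool settle

        // 100 tasks are queued, yet only the single core thread exists: extra
        // threads are spawned only when queue.offer() fails, which an unbounded
        // LinkedBlockingQueue never does.
        if (pool.getPoolSize() != 1)
            throw new AssertionError("expected 1 thread, got " + pool.getPoolSize());
        System.out.println("pool size = " + pool.getPoolSize()); // prints: pool size = 1

        release.countDown();
        pool.shutdown();
    }
}
```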





[jira] [Comment Edited] (CASSANDRA-13329) max_hints_delivery_threads does not work

2017-04-10 Thread Aleksandr Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963396#comment-15963396
 ] 

Aleksandr Ivanov edited comment on CASSANDRA-13329 at 4/10/17 7:34 PM:
---

[~ifesdjeen], [~Ge]: do you have plans to fix this 
problem (the max_hints_delivery_threads part) in the 3.0.x version?


was (Author: alekiv):
[~ifesdjeen], do you have plans to fix this problem (the 
max_hints_delivery_threads part) in the 3.0.x version?

> max_hints_delivery_threads does not work
> 
>
> Key: CASSANDRA-13329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13329
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fuud
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
>
> HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize  
> == 1 and maxPoolSize==max_hints_delivery_threads and unbounded 
> LinkedBlockingQueue.
> In this configuration additional threads will not be created.
> Same problem with PerSSTableIndexWriter.





[jira] [Commented] (CASSANDRA-13329) max_hints_delivery_threads does not work

2017-04-10 Thread Aleksandr Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963396#comment-15963396
 ] 

Aleksandr Ivanov commented on CASSANDRA-13329:
--

[~ifesdjeen], do you have plans to fix this problem(max_hints_delivery_threads 
part) in 3.0.x version?

> max_hints_delivery_threads does not work
> 
>
> Key: CASSANDRA-13329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13329
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fuud
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
>
> HintsDispatchExecutor creates JMXEnabledThreadPoolExecutor with corePoolSize  
> == 1 and maxPoolSize==max_hints_delivery_threads and unbounded 
> LinkedBlockingQueue.
> In this configuration additional threads will not be created.
> Same problem with PerSSTableIndexWriter.





[jira] [Commented] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Tom van der Woerdt (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963392#comment-15963392
 ] 

Tom van der Woerdt commented on CASSANDRA-13304:


+1 for making this optional, as it adds only overhead for TLS users. In fact, I 
strongly recommend replacing this entire checksumming idea with defaulting 
to self-signed, unverified TLS, which solves the key-rotation problem mentioned 
earlier and would make implementing all this a lot easier for driver authors 
(like myself). People who are concerned about the performance impact could then 
switch to a null cipher that still does checksumming.

Earlier in this thread TLS was mentioned as a best practice. If that's the case, 
then let's make sure our best practices aren't suffering from decisions made to 
cover bad deployment practices, but try to do something about those bad 
practices instead. There are some very easy things we can do here, why go for 
the complex route?

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will ensure we at least catch corrupted data and 
> likely protect ourselves pretty well anyways.
> I didn't go to the trouble of implementing a Snappy Checksum'ed Compressor 
> implementation as it's been deprecated for a while -- is really slow and 
> crappy 
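Reading the {noformat} diagram above, each chunk is: a 4-byte compressed length, a 4-byte uncompressed length, a CRC32 over those eight length bytes, the compressed payload, and a CRC32 over the payload. A hedged sketch of the writer side (an illustration of the layout only, not the attached 13304_v1.diff; 4-byte field widths assumed from the diagram):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class ChunkWriter {
    // Serialize one chunk per the diagram: lengths, CRC32-of-lengths,
    // payload, CRC32-of-payload.
    public static byte[] writeChunk(byte[] compressed, int uncompressedLength)
            throws IOException {
        ByteArrayOutputStream lengths = new ByteArrayOutputStream();
        DataOutputStream lengthsOut = new DataOutputStream(lengths);
        lengthsOut.writeInt(compressed.length);
        lengthsOut.writeInt(uncompressedLength);

        CRC32 lengthsCrc = new CRC32();
        lengthsCrc.update(lengths.toByteArray());
        CRC32 payloadCrc = new CRC32();
        payloadCrc.update(compressed);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.write(lengths.toByteArray());          // both length fields
        out.writeInt((int) lengthsCrc.getValue()); // checksum of the lengths
        out.write(compressed);                     // compressed bytes
        out.writeInt((int) payloadCrc.getValue()); // checksum of the payload
        return buf.toByteArray();
    }
}
```

A corrupted length field is then caught by the lengths checksum before any payload is trusted, which is the property the attachment is after.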

[jira] [Created] (CASSANDRA-13430) Cleanup isIncremental/repairedAt usage

2017-04-10 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13430:
---

 Summary: Cleanup isIncremental/repairedAt usage
 Key: CASSANDRA-13430
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13430
 Project: Cassandra
  Issue Type: Improvement
Reporter: Blake Eggleston
Assignee: Blake Eggleston


Post CASSANDRA-9143, there's no longer a reason to pass around 
{{isIncremental}} or {{repairedAt}} in streaming sessions, as well as some 
places in repair. The {{pendingRepair}} & {{repairedAt}} values should only be 
set at the beginning/finalize stages of incremental repair and just follow 
sstables around as they're streamed. Keeping these values with sstables also 
fixes an edge case where you could leak repaired data back into unrepaired if 
you run full and incremental repairs concurrently.





[jira] [Commented] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963238#comment-15963238
 ] 

Michael Kjellman commented on CASSANDRA-13304:
--

I've benchmarked this in the real world, and even at the p99.9 there is no 
visible increase in latency. We cannot continue to let consumers screw themselves 
over for the potential reward of a couple of "nanoseconds"... sorry, firm -1 on 
making this optional.

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will ensure we at least catch corrupted data and 
> likely protect ourselves pretty well anyways.
> I didn't go to the trouble of implementing a Snappy Checksum'ed Compressor 
> implementation as it's been deprecated for a while -- is really slow and 
> crappy compared to LZ4 -- and we should do everything in our power to make 
> sure no one in the community is still using it. I left it in (for obvious 
> backwards-compatibility reasons) for old clients that don't know about the 
> new protocol.
> The current protocol has a 256MB (max) frame body -- where the serialized 
> contents are simply written in to the frame body.
> If the client sends a compression option in the startup, we will install a 
> FrameCompressor inline. Unfortunately, we went with a decision to treat the 
> 

[jira] [Commented] (CASSANDRA-13420) Pending repair info was added in 4.0

2017-04-10 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963235#comment-15963235
 ] 

Blake Eggleston commented on CASSANDRA-13420:
-

+1

> Pending repair info was added in 4.0
> 
>
> Key: CASSANDRA-13420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13420
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Pending repair information was actually added in 4.0
> https://github.com/krummas/cassandra/commits/marcuse/pendingrepairversion





[jira] [Updated] (CASSANDRA-13420) Pending repair info was added in 4.0

2017-04-10 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13420:

Status: Ready to Commit  (was: Patch Available)

> Pending repair info was added in 4.0
> 
>
> Key: CASSANDRA-13420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13420
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Pending repair information was actually added in 4.0
> https://github.com/krummas/cassandra/commits/marcuse/pendingrepairversion





[jira] [Commented] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963230#comment-15963230
 ] 

Ariel Weisberg commented on CASSANDRA-13265:


Great! Most regular contributors link to a branch in their fork, but there are 
issues with that (people force-update or delete branches). You don't have to 
attach a patch in order to use the "Submit Patch" action. 

I do need a patch for each branch: 2.2, 3.0, 3.11, and trunk. We try to push the 
work of resolving merge conflicts to the assignees rather than the committers. 
Use whatever method you prefer to submit the code.

When you have all of the patches I'll kick off test runs. It's going to take a 
while for the dtests to run right now.


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Fix For: 3.0.x
>
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect? As soon as the Cassandra node has reached a certain 
> number of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: only after 262508 locking operations can a thread progress to 
> actually writing to the queue.
> - Reading: is also blocked, as 324 threads try to do iterator.next() and 
> fully lock the queue.
> This means writing blocks the queue for reading, and readers may even be 
> starved, which makes the situation worse still.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  
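One way out of the pile-up described above is to let at most one thread run expiration at a time, so the other 323 writers return immediately instead of contending on the queue. A sketch of that idea (class and method names are hypothetical; this is not the submitted patch):

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Predicate;

public class Backlog<T> {
    private final ConcurrentLinkedQueue<T> queue = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean expiring = new AtomicBoolean(false);

    public void add(T message) { queue.add(message); }
    public int size() { return queue.size(); }

    // Only one thread at a time walks the queue dropping expired messages;
    // any thread that loses the CAS skips expiration instead of blocking.
    public void maybeExpire(Predicate<T> isExpired) {
        if (!expiring.compareAndSet(false, true))
            return;
        try {
            for (Iterator<T> it = queue.iterator(); it.hasNext(); )
                if (isExpired.test(it.next()))
                    it.remove();
        } finally {
            expiring.set(false);
        }
    }
}
```

The CAS guard turns the worst case from N threads each taking the queue's locks into one expiration pass plus N-1 cheap no-ops.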





[jira] [Commented] (CASSANDRA-13407) test failure at RemoveTest.testBadHostId

2017-04-10 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963226#comment-15963226
 ] 

Joel Knighton commented on CASSANDRA-13407:
---

Patches look good - good catch on 2.2/3.0 difference.

+1.

> test failure at RemoveTest.testBadHostId
> 
>
> Key: CASSANDRA-13407
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13407
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> Example trace:
> {code}
> java.lang.NullPointerException
>   at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:881)
>   at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:876)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2201)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1855)
>   at org.apache.cassandra.Util.createInitialRing(Util.java:216)
>   at org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:89)
> {code} 
> [failure 
> example|https://cassci.datastax.com/job/trunk_testall/1491/testReport/org.apache.cassandra.service/RemoveTest/testBadHostId/]
> [history|https://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/RemoveTest/testBadHostId/history/]





[jira] [Commented] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963109#comment-15963109
 ] 

Adam Holmberg commented on CASSANDRA-13304:
---

+1 on making it a startup option, even if it's opt-out. This aligns with my 
earlier mentioned concern about overhead in CPU-bound applications.

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data, whether due 
> to hardware issues causing bit flips on the sender/client side, on the 
> C*/receiver side, or on the network in between.
> Attaching an implementation that makes checksumming mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and that also adds checksumming for clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (it doesn't checksum lengths and headers). While it would be 
> great to add checksumming across the entire protocol, the proposed 
> implementation ensures we at least catch corrupted data and likely protects 
> us pretty well anyway.
> I didn't go to the trouble of implementing a checksummed Snappy compressor 
> implementation, as Snappy has been deprecated for a while -- it is really 
> slow and crappy compared to LZ4 -- and we should do everything in our power 
> to make sure no one in the community is still using it. I left it in (for 
> obvious backwards-compatibility reasons) for old clients that don't know 
> about the new protocol.
> The current protocol has a 256MB (max) frame body -- where the serialized 
> contents are simply written into the frame body.
> If the client sends a compression option in the startup, we will install a 
> FrameCompressor inline. Unfortunately, we decided to treat the frame body 
> separately from the header bits etc. in a given message. So, instead we put 
> a compressor implementation in 

[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963095#comment-15963095
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 4/10/17 4:20 PM:
--

Done. My highest priority is the 3.0 branch. I created a patch (single file, 
squashed) for 3.0, which I also applied to my GitHub fork: 
https://github.com/christian-esken/cassandra/commits/cassandra-3.0 . I attached 
the patch using the Submit Patch button at the top.


was (Author: cesken):
Done. My highest priority is the 3.0 branch. I created a patch (single file, 
squashed) for 3.0, that I also applied to my Github fork 
https://github.com/christian-esken/cassandra/commits/cassandra-3.0 . Please 
have a look at the attached file 
0001-3.0-Expire-OTC-messages-by-a-single-Thread.patch .

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Fix For: 3.0.x
>
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed 324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of this? As soon as the Cassandra node has reached a 
> certain number of queued messages, it starts thrashing itself to death. Each 
> of the Threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can a Thread progress with 
> actually writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved, which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  





[jira] [Updated] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Christian Esken (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Esken updated CASSANDRA-13265:

Status: Patch Available  (was: Open)

From 6bd3f3fc3b2da3a66b53a94a819446a9ea8ea2cf Mon Sep 17 00:00:00 2001
From: Christian Esken 
Date: Wed, 1 Mar 2017 15:56:36 +0100
Subject: [PATCH] Expire OTC messages by a single Thread

This patch consists of the following aspects related to OutboundTcpConnection:
- Backlog queue expiration by a single Thread
- Drop count statistics
- QueuedMessage.isTimedOut() fix

When backlog queue expiration is done, a single Thread is elected to do the
work. Previously, all Threads would go in and do the same work,
producing high lock contention. The Thread reading from the Queue could
even be starved by not being able to acquire the read lock.
The backlog queue is inspected every otc_backlog_expiration_interval_ms
milliseconds if its size exceeds BACKLOG_PURGE_SIZE. Unit tests were added
for OutboundTcpConnection.
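The single-Thread election described above can be sketched with a compare-and-set guard. This is a minimal illustration under assumed names (BacklogExpirer, maybeExpire, and the threshold value are hypothetical); the actual change lives in OutboundTcpConnection.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch: only the Thread that wins the compareAndSet walks the
// backlog to expire messages; every other Thread skips the expensive
// iteration instead of contending on the queue's lock.
public class BacklogExpirer
{
    public static final int BACKLOG_PURGE_SIZE = 1024;    // assumed threshold
    public static final int EXPIRATION_INTERVAL_MS = 200; // otc_backlog_expiration_interval_ms default

    private final AtomicBoolean expiring = new AtomicBoolean(false);
    private volatile long lastRunMillis = 0;

    /** Returns true if this caller was elected and ran the purge. */
    public boolean maybeExpire(int backlogSize, long nowMillis, Runnable purge)
    {
        if (backlogSize < BACKLOG_PURGE_SIZE)
            return false;                                 // backlog not piling up yet
        if (nowMillis - lastRunMillis < EXPIRATION_INTERVAL_MS)
            return false;                                 // an expiration ran recently enough
        if (!expiring.compareAndSet(false, true))
            return false;                                 // another Thread was elected
        try
        {
            purge.run();                                  // only the elected Thread iterates
            lastRunMillis = nowMillis;
            return true;
        }
        finally
        {
            expiring.set(false);
        }
    }
}
```

The CAS makes the purge effectively mutually exclusive without ever blocking the losing Threads, which is what removes the lock contention described in the ticket.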

Timed-out messages are counted in the dropped statistics. Dropped messages
are additionally counted when it is not possible to write to the
socket, e.g. when there is no connection because a target node is down.

Fix QueuedMessage.isTimedOut(), which used an "a < b" comparison on
nano-time values; this can be wrong due to wrapping of System.nanoTime().
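The wrap-safe way to compare System.nanoTime() values is to subtract them and compare the difference to zero, never to compare the raw values. A small illustrative sketch (the class and method names are hypothetical):

```java
// System.nanoTime() has no absolute meaning and may wrap; only differences
// between two readings are meaningful. A raw "a < b" comparison gives the
// wrong answer when the counter wraps between the two readings, while the
// subtraction below stays correct thanks to two's-complement overflow.
public class TimeoutCheck
{
    public static boolean isTimedOut(long nowNanos, long deadlineNanos)
    {
        return nowNanos - deadlineNanos > 0; // wrap-safe ordering test
    }
}
```

The javadoc for System.nanoTime() recommends exactly this idiom (compare `t1 - t0 < 0`, not `t1 < t0`), since the counter is free to take any value, including negative ones.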

CASSANDRA-13265
---
 conf/cassandra.yaml|   9 ++
 src/java/org/apache/cassandra/config/Config.java   |   6 +
 .../cassandra/config/DatabaseDescriptor.java   |  10 ++
 .../cassandra/net/OutboundTcpConnection.java   | 113 +++---
 .../org/apache/cassandra/service/StorageProxy.java |  10 +-
 .../cassandra/service/StorageProxyMBean.java   |   3 +
 .../cassandra/net/OutboundTcpConnectionTest.java   | 170 +
 7 files changed, 294 insertions(+), 27 deletions(-)
 create mode 100644 
test/unit/org/apache/cassandra/net/OutboundTcpConnectionTest.java

diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 790dfd743b..9c1510b66a 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -985,3 +985,12 @@ windows_timer_interval: 1
 
 # Do not try to coalesce messages if we already got that many messages. This 
should be more than 2 and less than 128.
 # otc_coalescing_enough_coalesced_messages: 8
+
+# How many milliseconds to wait between two expiration runs on the backlog 
(queue) of the OutboundTcpConnection.
+# Expiration is done if messages are piling up in the backlog. Droppable 
messages are expired to free the memory
+# taken by expired messages. The interval should be between 0 and 1000, and in 
most installations the default value
+# will be appropriate. A smaller value could potentially expire messages 
slightly sooner at the expense of more CPU
+# time and queue contention while iterating the backlog of messages.
+# An interval of 0 disables any wait time, which is the behavior of former 
Cassandra versions.
+#
+# otc_backlog_expiration_interval_ms: 200
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 9aaf7ae33e..6a99cd3cbd 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -298,6 +298,12 @@ public class Config
 public int otc_coalescing_window_us = otc_coalescing_window_us_default;
 public int otc_coalescing_enough_coalesced_messages = 8;
 
+/**
+ * Backlog expiration interval in milliseconds for the 
OutboundTcpConnection.
+ */
+public static final int otc_backlog_expiration_interval_ms_default = 200;
+public volatile int otc_backlog_expiration_interval_ms = 
otc_backlog_expiration_interval_ms_default;
+ 
 public int windows_timer_interval = 0;
 
 public boolean enable_user_defined_functions = false;
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 602214f3c6..e9e54c3e20 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1967,6 +1967,16 @@ public class DatabaseDescriptor
 conf.otc_coalescing_enough_coalesced_messages = 
otc_coalescing_enough_coalesced_messages;
 }
 
+public static int getOtcBacklogExpirationInterval()
+{
+return conf.otc_backlog_expiration_interval_ms;
+}
+
+public static void setOtcBacklogExpirationInterval(int intervalInMillis)
+{
+conf.otc_backlog_expiration_interval_ms = intervalInMillis;
+}
+ 
 public static int getWindowsTimerInterval()
 {
 return conf.windows_timer_interval;
diff --git a/src/java/org/apache/cassandra/net/OutboundTcpConnection.java 
b/src/java/org/apache/cassandra/net/OutboundTcpConnection.java
index 46083994df..99ad194b94 100644
--- 

[jira] [Updated] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Christian Esken (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Esken updated CASSANDRA-13265:

Status: Open  (was: Patch Available)

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Fix For: 3.0.x
>
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed 324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of this? As soon as the Cassandra node has reached a 
> certain number of queued messages, it starts thrashing itself to death. Each 
> of the Threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can a Thread progress with 
> actually writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved, which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  





[jira] [Updated] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Christian Esken (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Esken updated CASSANDRA-13265:

Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 2.2.x)
   Status: Patch Available  (was: Reopened)

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Fix For: 3.0.x
>
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed 324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of this? As soon as the Cassandra node has reached a 
> certain number of queued messages, it starts thrashing itself to death. Each 
> of the Threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can a Thread progress with 
> actually writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved, which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  





[jira] [Commented] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-04-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15963095#comment-15963095
 ] 

Christian Esken commented on CASSANDRA-13265:
-

Done. My highest priority is the 3.0 branch. I created a patch (single file, 
squashed) for 3.0, which I also applied to my GitHub fork: 
https://github.com/christian-esken/cassandra/commits/cassandra-3.0 . Please 
have a look at the attached file 
0001-3.0-Expire-OTC-messages-by-a-single-Thread.patch .

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed 324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of this? As soon as the Cassandra node has reached a 
> certain number of queued messages, it starts thrashing itself to death. Each 
> of the Threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can a Thread progress with 
> actually writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved, which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  





[jira] [Commented] (CASSANDRA-13378) DS Cassandra3.0 Adding a datacenter to a cluster procedure not working for us

2017-04-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962976#comment-15962976
 ] 

Jeff Jirsa commented on CASSANDRA-13378:


In that case, CASSANDRA-12681 will eventually make this problem a non-issue, 
though for compatibility reasons it won't be available until 4.0.

> DS Cassandra3.0  Adding a datacenter to a cluster procedure not working for us
> --
>
> Key: CASSANDRA-13378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13378
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: 5 node cluster , 
> • Server Model: Dell R730xd
> • Processor: 2 * Intel Xeon E5-2630 – Total 16 cores 
> • Memory: 192 GB  
> • OS Drive: 2 * 120 GB 
> • Direct attached storage – 8 TB (4 * 2TB SSD) + 1 * 800GB SSD
> • 10Gb Dual Port + 1Gb Dual Port Network Daughter Card 
> OS - OEL 2.6.32-642.15.1.el6.x86_64
> java version "1.8.0_91"
> Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
> 6 nodes in other DC are identical
>Reporter: Kevin Rivait
>Priority: Blocker
> Fix For: 3.0.9
>
> Attachments: cassandra-rackdc.properties_from_dc1_node, 
> cassandra-rackdc.properties_from_dc2_node, cassandra.yaml, 
> DS-add-a-dc-C3.0.rtf
>
>
> i have replicated the issue on my personal cluster using VMs.
> we have many keyspaces and users developing on the Dev Cluster we are trying 
> to stretch.
> With my current VMs (10 total) i can create a 2 DC cluster  dc1 and dc2.
> i rebuild all nodes - clear data dirs and restart dc1.
> i also clear data dirs on dc2 nodes, i do not restart yet,
> now i have single dc1 5 nodes cluster.
> snitch - GossipingPropertyFileSnitch
> i create keyspaces on DC1
> after i alter keyspaces with replication dc1 3   dc2 0i can no longer 
> query tables - not enough replicas available for query at consistency ONE.
> same error with CQL using consistency local_one
> continue with procedure, startup dc2 nodes,  
> alter replication on keyspaces to  dc1 3  dc2 2
> from dc2 nodes , nodetool rebuild -- dc1  fails
> i am attaching detailed steps with errors  and cassandra.yaml





[jira] [Commented] (CASSANDRA-13092) Debian package does not install the hostpot_compiler command file

2017-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962958#comment-15962958
 ] 

Jan Urbański commented on CASSANDRA-13092:
--

Thank you!

> Debian package does not install the hostpot_compiler command file
> -
>
> Key: CASSANDRA-13092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jan Urbański
>Assignee: Michael Shuler
>Priority: Trivial
>  Labels: debian, patch
> Fix For: 2.1.18, 2.2.10, 3.0.13, 3.11.0, 4.0
>
> Attachments: install-jit-compiler-command-file-in-Debian-package.patch
>
>
> The default {{cassandra-env.sh}} file sets a JIT compiler commands file via 
> {{-XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler}} but the Debian 
> package does not install that file, even though it's generated during the 
> build process.
> Trivial patch against trunk attached.





[jira] [Commented] (CASSANDRA-13347) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test

2017-04-10 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962957#comment-15962957
 ] 

Joel Knighton commented on CASSANDRA-13347:
---

There's a subtly different problem at play on 3.11. The issue is that these 
upgrade-through-versions tests use the released branch for all earlier 
versions, so we upgrade from 2.1-current, 2.2-current, and 3.0-current to 
3.11-dev. The patch from [CASSANDRA-13320] went into 3.0 with 3.0.13, which 
hasn't been released. That means running this test on 3.11 in-dev won't bring 
in the fix.

I'm not sure how we want to address this, but it's a dtest fix, not a C* fix, 
so I'm going to close this for now.

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test
> --
>
> Key: CASSANDRA-13347
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13347
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Fix For: 3.0.13, 3.11.0
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_large_dtest/58/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_2_x_To_indev_3_0_x/rolling_upgrade_test
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['upgradesstables', 
> '-a']] exited with non-zero status; exit status: 2; 
> stderr: error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 279, in rolling_upgrade_test
> self.upgrade_scenario(rolling=True)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 345, in upgrade_scenario
> self.upgrade_to_version(version_meta, partial=True, nodes=(node,))
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 446, in upgrade_to_version
> node.nodetool('upgradesstables -a')
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 789, in nodetool
> return 

[jira] [Resolved] (CASSANDRA-13347) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test

2017-04-10 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton resolved CASSANDRA-13347.
---
Resolution: Not A Problem

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test
> --
>
> Key: CASSANDRA-13347
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13347
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Fix For: 3.0.13, 3.11.0
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_large_dtest/58/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_2_x_To_indev_3_0_x/rolling_upgrade_test
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['upgradesstables', 
> '-a']] exited with non-zero status; exit status: 2; 
> stderr: error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 279, in rolling_upgrade_test
> self.upgrade_scenario(rolling=True)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 345, in upgrade_scenario
> self.upgrade_to_version(version_meta, partial=True, nodes=(node,))
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 446, in upgrade_to_version
> node.nodetool('upgradesstables -a')
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 789, in nodetool
> return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', 
> '-p', str(self.jmx_port), cmd.split()])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 2002, in handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}
> Related failures:
> http://cassci.datastax.com/job/cassandra-3.0_large_dtest/58/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_1_x_To_indev_3_0_x/rolling_upgrade_with_internode_ssl_test/
> 

[jira] [Commented] (CASSANDRA-13378) DS Cassandra3.0 Adding a datacenter to a cluster procedure not working for us

2017-04-10 Thread Kevin Rivait (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962952#comment-15962952
 ] 

Kevin Rivait commented on CASSANDRA-13378:
--

DataStax has updated their procedure (one small tweak):
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
We updated our scripts to match and, in that process, realized we had a problem 
with our DC name in the ALTER KEYSPACE statements: there was an extra space in 
front of the DC name.
It looks like the original procedure would also have worked.
We have run through the corrected procedure on EPG and successfully stretched 
an existing cluster.
I will reschedule the activity for our current DEV physical cluster.
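
The whitespace pitfall described above can be sketched in a few lines. This is a hypothetical illustration, not Cassandra code: the `replicas_for_dc` helper and the replication-options dicts below are stand-ins, but they show why a stray leading space in a DC name silently leaves that DC with zero replicas, which matches the "not enough replicas available for query at consistency ONE" symptom reported in this ticket.

```python
# Replication options are keyed by the exact datacenter name, so an option
# typed as " dc1" (leading space) never matches the snitch's DC name "dc1".

def replicas_for_dc(replication_options, dc_name):
    """Look up the replication factor for a DC by exact name match."""
    return int(replication_options.get(dc_name, 0))

# As typed with an accidental leading space before dc1:
bad_options = {"class": "NetworkTopologyStrategy", " dc1": "3", "dc2": "0"}
assert replicas_for_dc(bad_options, "dc1") == 0  # dc1 effectively has no replicas

# Re-running ALTER KEYSPACE with the DC names typed correctly fixes the lookup:
good_options = {k.strip(): v for k, v in bad_options.items()}
assert replicas_for_dc(good_options, "dc1") == 3
```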

> DS Cassandra3.0  Adding a datacenter to a cluster procedure not working for us
> --
>
> Key: CASSANDRA-13378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13378
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: 5 node cluster , 
> • Server Model: Dell R730xd
> • Processor: 2 * Intel Xeon E5-2630 – Total 16 cores 
> • Memory: 192 GB  
> • OS Drive: 2 * 120 GB 
> • Direct attached storage – 8 TB (4 * 2TB SSD) + 1 * 800GB SSD
> • 10Gb Dual Port + 1Gb Dual Port Network Daughter Card 
> OS - OEL 2.6.32-642.15.1.el6.x86_64
> java version "1.8.0_91"
> Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
> 6 nodes in other DC are identical
>Reporter: Kevin Rivait
>Priority: Blocker
> Fix For: 3.0.9
>
> Attachments: cassandra-rackdc.properties_from_dc1_node, 
> cassandra-rackdc.properties_from_dc2_node, cassandra.yaml, 
> DS-add-a-dc-C3.0.rtf
>
>
> I have replicated the issue on my personal cluster using VMs.
> We have many keyspaces and users developing on the Dev Cluster we are trying 
> to stretch.
> With my current VMs (10 total) I can create a two-DC cluster, dc1 and dc2.
> I rebuild all nodes: clear the data dirs and restart dc1.
> I also clear the data dirs on the dc2 nodes but do not restart them yet,
> so I now have a single-DC cluster of five dc1 nodes.
> Snitch: GossipingPropertyFileSnitch.
> I create keyspaces on dc1.
> After I alter the keyspaces with replication dc1: 3, dc2: 0, I can no longer 
> query tables: not enough replicas available for query at consistency ONE.
> Same error with CQL using consistency LOCAL_ONE.
> Continuing with the procedure, I start up the dc2 nodes and alter replication 
> on the keyspaces to dc1: 3, dc2: 2.
> From the dc2 nodes, nodetool rebuild -- dc1 fails.
> I am attaching detailed steps with errors and cassandra.yaml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13429) testall failure in org.apache.cassandra.io.sstable.SSTableRewriterTest.testAbort

2017-04-10 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-13429:
-

 Summary: testall failure in 
org.apache.cassandra.io.sstable.SSTableRewriterTest.testAbort
 Key: CASSANDRA-13429
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13429
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Sean McCarthy
 Attachments: jenkins-cassandra-2.2_testall-670_logs.tar.gz

example failure:

http://cassci.datastax.com/job/cassandra-2.2_testall/670/testReport/org.apache.cassandra.io.sstable/SSTableRewriterTest/testAbort

{code}
Error Message

/home/automaton/cassandra/build/test/cassandra/data:187/SSTableRewriterTest/Standard1-9e58bfd01d1311e795f67f0eb3b48181/lb-61-big
{code}{code}
Stacktrace

junit.framework.AssertionFailedError: 
/home/automaton/cassandra/build/test/cassandra/data:187/SSTableRewriterTest/Standard1-9e58bfd01d1311e795f67f0eb3b48181/lb-61-big
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.validateCFS(SSTableRewriterTest.java:992)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.truncate(SSTableRewriterTest.java:943)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.testAbortHelper(SSTableRewriterTest.java:665)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.testAbort(SSTableRewriterTest.java:652)
{code}{code}
Standard Output

ERROR 11:00:08 LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@dffb15a) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@232035044:/home/automaton/cassandra/build/test/cassandra/data:187/SSTableRewriterTest/Standard1-9e58bfd01d1311e795f67f0eb3b48181/lb-61-big
 was not released before the reference was garbage collected
ERROR 11:00:08 LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@dffb15a) to class 
org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@232035044:/home/automaton/cassandra/build/test/cassandra/data:187/SSTableRewriterTest/Standard1-9e58bfd01d1311e795f67f0eb3b48181/lb-61-big
 was not released before the reference was garbage collected
ERROR 11:00:08 Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@dffb15a:
Thread[main,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1589)
at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:228)
at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:158)
at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:80)
at 
org.apache.cassandra.io.sstable.format.SSTableReader.<init>(SSTableReader.java:216)
at 
org.apache.cassandra.io.sstable.format.big.BigTableReader.<init>(BigTableReader.java:60)
at 
org.apache.cassandra.io.sstable.format.big.BigFormat$ReaderFactory.open(BigFormat.java:116)
at 
org.apache.cassandra.io.sstable.format.SSTableReader.internalOpen(SSTableReader.java:587)
at 
org.apache.cassandra.io.sstable.format.SSTableReader.internalOpen(SSTableReader.java:565)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:346)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$800(BigTableWriter.java:56)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:385)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:169)
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:179)
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:205)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.writeFiles(SSTableRewriterTest.java:969)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.writeFile(SSTableRewriterTest.java:948)
at 
org.apache.cassandra.io.sstable.SSTableRewriterTest.testSSTableSplit(SSTableRewriterTest.java:618)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 

[jira] [Issue Comment Deleted] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Alexandre Dutra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Dutra updated CASSANDRA-13304:

Comment: was deleted

(was: Boxplot charts comparing reads and writes between driver 3.2.0 and driver 
3.2.0 with checksum enabled.)

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will ensure we at least catch corrupted data and 
> likely protect ourselves pretty well anyways.
> I didn't go to the trouble of implementing a checksummed Snappy compressor, 
> as Snappy has been deprecated for a while -- it is really slow and crappy 
> compared to LZ4 -- and we should do everything in our power to make sure no 
> one in the community is still using it. I left it in (for obvious backwards 
> compatibility reasons) for old clients that don't know about the new 
> protocol.
> The current protocol has a 256MB (max) frame body, into which the serialized 
> contents are simply written.
> If the client sends a compression option in the startup, we will install a 
> FrameCompressor inline. Unfortunately, we decided to treat the frame body 
> separately from the header bits etc. in a given message. So, instead we put 
> a compressor implementation in the options and then, if it's not null, we 
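
The per-chunk layout in the {noformat} diagram above can be sketched as follows. This is a simplified illustration, not the exact wire format: the field widths, the byte order, and the omitted chunk-count prefix are assumptions made for the sketch, and the helpers are not part of the attached patch.

```python
import struct
import zlib


def encode_chunk(compressed, uncompressed_len):
    """Frame one chunk as [clen][ulen][crc32(lengths)][payload][crc32(payload)].

    Mirrors the per-chunk fields in the diagram: both lengths are covered by
    their own CRC32 so a corrupted length can't cause a bogus read.
    """
    header = struct.pack(">II", len(compressed), uncompressed_len)
    lengths_crc = struct.pack(">I", zlib.crc32(header) & 0xFFFFFFFF)
    payload_crc = struct.pack(">I", zlib.crc32(compressed) & 0xFFFFFFFF)
    return header + lengths_crc + compressed + payload_crc


def decode_chunk(buf):
    """Verify both checksums; return (compressed payload, uncompressed length).

    Raises ValueError on a mismatch -- the silent corruption this ticket
    is trying to catch.
    """
    clen, ulen = struct.unpack_from(">II", buf, 0)
    (lengths_crc,) = struct.unpack_from(">I", buf, 8)
    if zlib.crc32(buf[:8]) & 0xFFFFFFFF != lengths_crc:
        raise ValueError("corrupted chunk lengths")
    payload = buf[12:12 + clen]
    (payload_crc,) = struct.unpack_from(">I", buf, 12 + clen)
    if zlib.crc32(payload) & 0xFFFFFFFF != payload_crc:
        raise ValueError("corrupted chunk payload")
    return payload, ulen


body = b"some frame body bytes"
frame = encode_chunk(zlib.compress(body), len(body))
payload, ulen = decode_chunk(frame)
assert zlib.decompress(payload) == body

# A single flipped bit in the payload is caught by the payload checksum.
corrupted = bytearray(frame)
corrupted[14] ^= 0x01
try:
    decode_chunk(bytes(corrupted))
except ValueError as e:
    print(e)  # corrupted chunk payload
```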

[jira] [Reopened] (CASSANDRA-13347) dtest failure in upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test

2017-04-10 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy reopened CASSANDRA-13347:
---

Still failing in 3.11: 
http://cassci.datastax.com/job/cassandra-3.11_large_dtest/28/testReport/

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_current_2_2_x_To_indev_3_0_x.rolling_upgrade_test
> --
>
> Key: CASSANDRA-13347
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13347
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Fix For: 3.0.13, 3.11.0
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_large_dtest/58/testReport/upgrade_tests.upgrade_through_versions_test/TestUpgrade_current_2_2_x_To_indev_3_0_x/rolling_upgrade_test
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['upgradesstables', 
> '-a']] exited with non-zero status; exit status: 2; 
> stderr: error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 279, in rolling_upgrade_test
> self.upgrade_scenario(rolling=True)
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 345, in upgrade_scenario
> self.upgrade_to_version(version_meta, partial=True, nodes=(node,))
>   File 
> "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_through_versions_test.py",
>  line 446, in upgrade_to_version
> node.nodetool('upgradesstables -a')
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 789, in nodetool
> return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', 
> '-p', str(self.jmx_port), cmd.split()])
>   File 
> "/home/automaton/venv/local/lib/python2.7/site-packages/ccmlib/node.py", line 
> 2002, in handle_external_tool_process
> raise ToolError(cmd_args, rc, out, err)
> {code}
> Related failures:
> 

[jira] [Updated] (CASSANDRA-13092) Debian package does not install the hostpot_compiler command file

2017-04-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13092:
---
Resolution: Fixed
  Assignee: Michael Shuler
Status: Resolved  (was: Ready to Commit)

Committed {{3dfc78449a402c984d3aa43b1b4fc43d07f92b7e}} to cassandra-2.1 branch 
and merged up.

Thanks Jan!

> Debian package does not install the hostpot_compiler command file
> -
>
> Key: CASSANDRA-13092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jan Urbański
>Assignee: Michael Shuler
>Priority: Trivial
>  Labels: debian, patch
> Fix For: 2.1.18, 2.2.10, 3.0.13, 3.11.0, 4.0
>
> Attachments: install-jit-compiler-command-file-in-Debian-package.patch
>
>
> The default {{cassandra-env.sh}} file sets a JIT compiler commands file via 
> {{-XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler}} but the Debian 
> package does not install that file, even though it's generated during the 
> build process.
> Trivial patch against trunk attached.





[jira] [Updated] (CASSANDRA-13092) Debian package does not install the hostpot_compiler command file

2017-04-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13092:
---
Fix Version/s: 4.0
               3.11.0
               3.0.13
               2.2.10
               2.1.18

> Debian package does not install the hostpot_compiler command file
> -
>
> Key: CASSANDRA-13092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jan Urbański
>Priority: Trivial
>  Labels: debian, patch
> Fix For: 2.1.18, 2.2.10, 3.0.13, 3.11.0, 4.0
>
> Attachments: install-jit-compiler-command-file-in-Debian-package.patch
>
>
> The default {{cassandra-env.sh}} file sets a JIT compiler commands file via 
> {{-XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler}} but the Debian 
> package does not install that file, even though it's generated during the 
> build process.
> Trivial patch against trunk attached.





[jira] [Updated] (CASSANDRA-13092) Debian package does not install the hostpot_compiler command file

2017-04-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13092:
---
Status: Ready to Commit  (was: Patch Available)

> Debian package does not install the hostpot_compiler command file
> -
>
> Key: CASSANDRA-13092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13092
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jan Urbański
>Priority: Trivial
>  Labels: debian, patch
> Fix For: 2.1.18, 2.2.10, 3.0.13, 3.11.0, 4.0
>
> Attachments: install-jit-compiler-command-file-in-Debian-package.patch
>
>
> The default {{cassandra-env.sh}} file sets a JIT compiler commands file via 
> {{-XX:CompileCommandFile=$CASSANDRA_CONF/hotspot_compiler}} but the Debian 
> package does not install that file, even though it's generated during the 
> build process.
> Trivial patch against trunk attached.





[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/590e1512
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/590e1512
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/590e1512

Branch: refs/heads/trunk
Commit: 590e1512a9751c437b170d03da9428d18b152def
Parents: 470f15b 3dfc784
Author: Michael Shuler 
Authored: Mon Apr 10 09:09:22 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:09:22 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/590e1512/debian/cassandra.install
--



[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/590e1512
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/590e1512
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/590e1512

Branch: refs/heads/cassandra-3.0
Commit: 590e1512a9751c437b170d03da9428d18b152def
Parents: 470f15b 3dfc784
Author: Michael Shuler 
Authored: Mon Apr 10 09:09:22 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:09:22 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/590e1512/debian/cassandra.install
--



[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-04-10 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a7b1ee4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a7b1ee4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a7b1ee4

Branch: refs/heads/cassandra-3.11
Commit: 1a7b1ee4d1e7ee44a4de6faab6d222aa5ef28c2b
Parents: fc58dbb f63ea27
Author: Michael Shuler 
Authored: Mon Apr 10 09:11:54 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:11:54 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--




[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/590e1512
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/590e1512
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/590e1512

Branch: refs/heads/cassandra-3.11
Commit: 590e1512a9751c437b170d03da9428d18b152def
Parents: 470f15b 3dfc784
Author: Michael Shuler 
Authored: Mon Apr 10 09:09:22 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:09:22 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/590e1512/debian/cassandra.install
--



[07/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/590e1512
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/590e1512
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/590e1512

Branch: refs/heads/cassandra-2.2
Commit: 590e1512a9751c437b170d03da9428d18b152def
Parents: 470f15b 3dfc784
Author: Michael Shuler 
Authored: Mon Apr 10 09:09:22 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:09:22 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/590e1512/debian/cassandra.install
--



[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-04-10 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a7b1ee4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a7b1ee4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a7b1ee4

Branch: refs/heads/trunk
Commit: 1a7b1ee4d1e7ee44a4de6faab6d222aa5ef28c2b
Parents: fc58dbb f63ea27
Author: Michael Shuler 
Authored: Mon Apr 10 09:11:54 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:11:54 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--




[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f63ea272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f63ea272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f63ea272

Branch: refs/heads/trunk
Commit: f63ea2727654a048922f931915cb10b30e4cead2
Parents: 5e13020 590e151
Author: Michael Shuler 
Authored: Mon Apr 10 09:11:32 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:11:32 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f63ea272/debian/cassandra.install
--
diff --cc debian/cassandra.install
index 706f316,e8da5e9..50db32d
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@@ -5,7 -5,7 +5,8 @@@ conf/commitlog_archiving.properties etc
  conf/cassandra-topology.properties etc/cassandra
  conf/logback.xml etc/cassandra
  conf/logback-tools.xml etc/cassandra
 +conf/jvm.options etc/cassandra
+ conf/hotspot_compiler etc/cassandra
  conf/triggers/* etc/cassandra/triggers
  debian/cassandra.in.sh usr/share/cassandra
  debian/cassandra.conf etc/security/limits.d



[15/15] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-04-10 Thread mshuler
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa65c6c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa65c6c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa65c6c5

Branch: refs/heads/trunk
Commit: aa65c6c548099929a77b17dcd440f09eca7a72c9
Parents: ee6bf10 1a7b1ee
Author: Michael Shuler 
Authored: Mon Apr 10 09:12:10 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:12:10 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--




[05/15] cassandra git commit: Add conf/hostpot_compiler to debian packaging

2017-04-10 Thread mshuler
Add conf/hostpot_compiler to debian packaging

Patch by Jan Urbański; reviewed by Michael Shuler for CASSANDRA-13092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dfc7844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dfc7844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dfc7844

Branch: refs/heads/cassandra-3.11
Commit: 3dfc78449a402c984d3aa43b1b4fc43d07f92b7e
Parents: 64d8a1d
Author: Jan Urbański 
Authored: Mon Apr 10 09:06:13 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:06:13 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dfc7844/debian/cassandra.install
--
diff --git a/debian/cassandra.install b/debian/cassandra.install
index a4654d1..9420949 100644
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@ -6,6 +6,7 @@ conf/commitlog_archiving.properties etc/cassandra
 conf/cassandra-topology.properties etc/cassandra
 conf/logback.xml etc/cassandra
 conf/logback-tools.xml etc/cassandra
+conf/hotspot_compiler etc/cassandra
 conf/triggers/* etc/cassandra/triggers
 debian/cassandra.in.sh usr/share/cassandra
 debian/cassandra.conf etc/security/limits.d



[11/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f63ea272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f63ea272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f63ea272

Branch: refs/heads/cassandra-3.11
Commit: f63ea2727654a048922f931915cb10b30e4cead2
Parents: 5e13020 590e151
Author: Michael Shuler 
Authored: Mon Apr 10 09:11:32 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:11:32 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f63ea272/debian/cassandra.install
--
diff --cc debian/cassandra.install
index 706f316,e8da5e9..50db32d
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@@ -5,7 -5,7 +5,8 @@@ conf/commitlog_archiving.properties etc
  conf/cassandra-topology.properties etc/cassandra
  conf/logback.xml etc/cassandra
  conf/logback-tools.xml etc/cassandra
 +conf/jvm.options etc/cassandra
+ conf/hotspot_compiler etc/cassandra
  conf/triggers/* etc/cassandra/triggers
  debian/cassandra.in.sh usr/share/cassandra
  debian/cassandra.conf etc/security/limits.d



[01/15] cassandra git commit: Add conf/hostpot_compiler to debian packaging

2017-04-10 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 64d8a1d9f -> 3dfc78449
  refs/heads/cassandra-2.2 470f15be6 -> 590e1512a
  refs/heads/cassandra-3.0 5e130209d -> f63ea2727
  refs/heads/cassandra-3.11 fc58dbb2f -> 1a7b1ee4d
  refs/heads/trunk ee6bf10ec -> aa65c6c54


Add conf/hostpot_compiler to debian packaging

Patch by Jan Urbański; reviewed by Michael Shuler for CASSANDRA-13092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dfc7844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dfc7844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dfc7844

Branch: refs/heads/cassandra-2.1
Commit: 3dfc78449a402c984d3aa43b1b4fc43d07f92b7e
Parents: 64d8a1d
Author: Jan Urbański 
Authored: Mon Apr 10 09:06:13 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:06:13 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dfc7844/debian/cassandra.install
--
diff --git a/debian/cassandra.install b/debian/cassandra.install
index a4654d1..9420949 100644
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@ -6,6 +6,7 @@ conf/commitlog_archiving.properties etc/cassandra
 conf/cassandra-topology.properties etc/cassandra
 conf/logback.xml etc/cassandra
 conf/logback-tools.xml etc/cassandra
+conf/hotspot_compiler etc/cassandra
 conf/triggers/* etc/cassandra/triggers
 debian/cassandra.in.sh usr/share/cassandra
 debian/cassandra.conf etc/security/limits.d



[03/15] cassandra git commit: Add conf/hotspot_compiler to debian packaging

2017-04-10 Thread mshuler
Add conf/hotspot_compiler to debian packaging

Patch by Jan Urbański; reviewed by Michael Shuler for CASSANDRA-13092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dfc7844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dfc7844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dfc7844

Branch: refs/heads/trunk
Commit: 3dfc78449a402c984d3aa43b1b4fc43d07f92b7e
Parents: 64d8a1d
Author: Jan Urbański 
Authored: Mon Apr 10 09:06:13 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:06:13 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dfc7844/debian/cassandra.install
--
diff --git a/debian/cassandra.install b/debian/cassandra.install
index a4654d1..9420949 100644
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@ -6,6 +6,7 @@ conf/commitlog_archiving.properties etc/cassandra
 conf/cassandra-topology.properties etc/cassandra
 conf/logback.xml etc/cassandra
 conf/logback-tools.xml etc/cassandra
+conf/hotspot_compiler etc/cassandra
 conf/triggers/* etc/cassandra/triggers
 debian/cassandra.in.sh usr/share/cassandra
 debian/cassandra.conf etc/security/limits.d



[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-04-10 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f63ea272
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f63ea272
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f63ea272

Branch: refs/heads/cassandra-3.0
Commit: f63ea2727654a048922f931915cb10b30e4cead2
Parents: 5e13020 590e151
Author: Michael Shuler 
Authored: Mon Apr 10 09:11:32 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:11:32 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f63ea272/debian/cassandra.install
--
diff --cc debian/cassandra.install
index 706f316,e8da5e9..50db32d
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@@ -5,7 -5,7 +5,8 @@@ conf/commitlog_archiving.properties etc
  conf/cassandra-topology.properties etc/cassandra
  conf/logback.xml etc/cassandra
  conf/logback-tools.xml etc/cassandra
 +conf/jvm.options etc/cassandra
+ conf/hotspot_compiler etc/cassandra
  conf/triggers/* etc/cassandra/triggers
  debian/cassandra.in.sh usr/share/cassandra
  debian/cassandra.conf etc/security/limits.d



[04/15] cassandra git commit: Add conf/hotspot_compiler to debian packaging

2017-04-10 Thread mshuler
Add conf/hotspot_compiler to debian packaging

Patch by Jan Urbański; reviewed by Michael Shuler for CASSANDRA-13092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dfc7844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dfc7844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dfc7844

Branch: refs/heads/cassandra-3.0
Commit: 3dfc78449a402c984d3aa43b1b4fc43d07f92b7e
Parents: 64d8a1d
Author: Jan Urbański 
Authored: Mon Apr 10 09:06:13 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:06:13 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dfc7844/debian/cassandra.install
--
diff --git a/debian/cassandra.install b/debian/cassandra.install
index a4654d1..9420949 100644
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@ -6,6 +6,7 @@ conf/commitlog_archiving.properties etc/cassandra
 conf/cassandra-topology.properties etc/cassandra
 conf/logback.xml etc/cassandra
 conf/logback-tools.xml etc/cassandra
+conf/hotspot_compiler etc/cassandra
 conf/triggers/* etc/cassandra/triggers
 debian/cassandra.in.sh usr/share/cassandra
 debian/cassandra.conf etc/security/limits.d



[02/15] cassandra git commit: Add conf/hotspot_compiler to debian packaging

2017-04-10 Thread mshuler
Add conf/hotspot_compiler to debian packaging

Patch by Jan Urbański; reviewed by Michael Shuler for CASSANDRA-13092


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3dfc7844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3dfc7844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3dfc7844

Branch: refs/heads/cassandra-2.2
Commit: 3dfc78449a402c984d3aa43b1b4fc43d07f92b7e
Parents: 64d8a1d
Author: Jan Urbański 
Authored: Mon Apr 10 09:06:13 2017 -0500
Committer: Michael Shuler 
Committed: Mon Apr 10 09:06:13 2017 -0500

--
 debian/cassandra.install | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3dfc7844/debian/cassandra.install
--
diff --git a/debian/cassandra.install b/debian/cassandra.install
index a4654d1..9420949 100644
--- a/debian/cassandra.install
+++ b/debian/cassandra.install
@@ -6,6 +6,7 @@ conf/commitlog_archiving.properties etc/cassandra
 conf/cassandra-topology.properties etc/cassandra
 conf/logback.xml etc/cassandra
 conf/logback-tools.xml etc/cassandra
+conf/hotspot_compiler etc/cassandra
 conf/triggers/* etc/cassandra/triggers
 debian/cassandra.in.sh usr/share/cassandra
 debian/cassandra.conf etc/security/limits.d



[jira] [Commented] (CASSANDRA-13345) Increasing the per thread stack size to atleast 512k

2017-04-10 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962893#comment-15962893
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13345:
---

[~jasobrown] - Does that mean all the changes for ppc64le will be local to my 
repository on GitHub and not on apache/cassandra as a separate branch, such as 
ppc64le/trunk?

> Increasing the per thread stack size to atleast 512k 
> -
>
> Key: CASSANDRA-13345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13345
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Set up details
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):160
> On-line CPU(s) list:   0-159
> Thread(s) per core:8
> Core(s) per socket:5
> Socket(s): 4
> NUMA node(s):  4
> Model: 2.1 (pvr 004b 0201)
> Model name:POWER8E (raw), altivec supported
> CPU max MHz:   3690.
> CPU min MHz:   2061.
> L1d cache: 64K
> L1i cache: 32K
> L2 cache:  512K
> L3 cache:  8192K
> NUMA node0 CPU(s): 0-39
> NUMA node1 CPU(s): 40-79
> NUMA node16 CPU(s):80-119
> NUMA node17 CPU(s):120-159
> $ cat /etc/os-release
> NAME="Ubuntu"
> VERSION="16.04.1 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04.1 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> VERSION_CODENAME=xenial
> UBUNTU_CODENAME=xenial
> $ arch
> ppc64le
> $ java -version
> openjdk version "1.8.0_121"
> OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>
> Hi All,
> I followed the steps below:
> ```
> $ git clone https://github.com/apache/cassandra.git
> $ cd cassandra/
> $ ant
> $ bin/cassandra -f
> The stack size specified is too small, Specify at least 328k
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> ```
> After getting this, I had to increase the thread stack size to 512k in 
> 'conf/jvm.options':
> ```
> $ git diff conf/jvm.options
> diff --git a/conf/jvm.options b/conf/jvm.options
> index 49b2196..00c03ce 100644
> --- a/conf/jvm.options
> +++ b/conf/jvm.options
> @@ -99,7 +99,7 @@
>  -XX:+HeapDumpOnOutOfMemoryError
>  # Per-thread stack size.
> --Xss256k
> +-Xss512k
>  # Larger interned string table, for gossip's benefit (CASSANDRA-6410)
>  -XX:StringTableSize=103
> ```
> Thereafter I was able to start the Cassandra server successfully.
> Could you please consider increasing the stack size to '512k' in 
> 'conf/jvm.options'.
> Similar to "https://issues.apache.org/jira/browse/CASSANDRA-13300". Let me 
> know if we can increase the stack size in the Apache Cassandra trunk.
> Thanks for the support provided so far, and let me know.
> Regards,
> Amit
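For background on what the proposed {{-Xss}} change controls: it sets the default per-thread stack size, which bounds how deep each thread's call stack can grow. A standalone sketch (illustrative only, not Cassandra code; the class and thread names are made up) shows the same effect via the {{Thread}} constructor's per-thread stack-size hint:

```java
public class StackSizeDemo {
    // Recurse until the stack is exhausted, reporting the depth reached.
    static int maxDepth(int d) {
        try {
            return maxDepth(d + 1);
        } catch (StackOverflowError e) {
            return d;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] depth = new int[2];
        // The fourth Thread constructor argument is a per-thread stack size
        // hint, analogous to the JVM-wide default set with -Xss. Per the
        // javadoc, the VM is free to ignore the hint on some platforms.
        Thread small = new Thread(null, () -> depth[0] = maxDepth(0), "xss256k", 256 * 1024);
        Thread big   = new Thread(null, () -> depth[1] = maxDepth(0), "xss512k", 512 * 1024);
        small.start(); small.join();
        big.start();   big.join();
        if (depth[0] <= 0 || depth[1] <= 0)
            throw new AssertionError("expected positive recursion depths");
        System.out.println("depth at 256k: " + depth[0] + ", at 512k: " + depth[1]);
    }
}
```

On ppc64le the JVM's own minimum stack is larger, which is consistent with the "Specify at least 328k" startup error quoted above: 256k boots on x86_64 but is rejected there.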



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13302) last row of previous page == first row of next page while querying data using SASI index

2017-04-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-13302:
--
Status: Ready to Commit  (was: Patch Available)

> last row of previous page == first row of next page while querying data using 
> SASI index
> 
>
> Key: CASSANDRA-13302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13302
> Project: Cassandra
>  Issue Type: Bug
> Environment: Tested with C* 3.9 and 3.10.
>Reporter: Andy Tolbert
>Assignee: Alex Petrov
>
> Apologies if this is a duplicate (couldn't track down an existing bug).
> Similarly to [CASSANDRA-11208], it appears it is possible to retrieve 
> duplicate rows when paging using a SASI index as documented in 
> [JAVA-1413|https://datastax-oss.atlassian.net/browse/JAVA-1413], the 
> following test demonstrates that data is repeated while querying using a SASI 
> index:
> {code:java}
> public class TestPagingBug
> {
>   public static void main(String[] args)
>   {
>   Cluster.Builder builder = Cluster.builder();
>   Cluster c = builder.addContactPoints("192.168.98.190").build(); 
> 
>   Session s = c.connect();
>   
>   s.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication 
> = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 }");
>   s.execute("CREATE TABLE IF NOT EXISTS test.test_table_sec(sec 
> BIGINT PRIMARY KEY, id INT)");
> //create secondary index on ID column, used for select 
> statement
> String index = "CREATE CUSTOM INDEX test_table_sec_idx ON 
> test.test_table_sec (id) USING 'org.apache.cassandra.index.sasi.SASIIndex' "
> + "WITH OPTIONS = { 'mode': 'PREFIX' }";
> s.execute(index);
>   
>   PreparedStatement insert = s.prepare("INSERT INTO 
> test.test_table_sec (id, sec) VALUES (1, ?)");
>   for (int i = 0; i < 1000; i++)
>   s.execute(insert.bind((long) i));
>   
>   PreparedStatement select = s.prepare("SELECT sec FROM 
> test.test_table_sec WHERE id = 1");
>   
>   long lastSec = -1;  
>   for (Row row : s.execute(select.bind().setFetchSize(300)))
>   {
>   long sec = row.getLong("sec");
>   if (sec == lastSec)
>   System.out.println(String.format("Duplicated id 
> %d", sec));
>   
>   lastSec = sec;
>   }
>   System.exit(0);
>   }
> }
> {code}
> The program outputs the following:
> {noformat}
> Duplicated id 23
> Duplicated id 192
> Duplicated id 684
> {noformat}
> Note that the simple primary key is required to reproduce this.





[jira] [Commented] (CASSANDRA-13302) last row of previous page == first row of next page while querying data using SASI index

2017-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962882#comment-15962882
 ] 

Andrés de la Peña commented on CASSANDRA-13302:
---

The patch looks good to me, +1. There are some unused imports in 
{{SASICQLTest}} that can be removed while committing the patch.

> last row of previous page == first row of next page while querying data using 
> SASI index
> 
>
> Key: CASSANDRA-13302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13302
> Project: Cassandra
>  Issue Type: Bug
> Environment: Tested with C* 3.9 and 3.10.
>Reporter: Andy Tolbert
>Assignee: Alex Petrov
>
> Apologies if this is a duplicate (couldn't track down an existing bug).
> Similarly to [CASSANDRA-11208], it appears it is possible to retrieve 
> duplicate rows when paging using a SASI index as documented in 
> [JAVA-1413|https://datastax-oss.atlassian.net/browse/JAVA-1413], the 
> following test demonstrates that data is repeated while querying using a SASI 
> index:
> {code:java}
> public class TestPagingBug
> {
>   public static void main(String[] args)
>   {
>   Cluster.Builder builder = Cluster.builder();
>   Cluster c = builder.addContactPoints("192.168.98.190").build(); 
> 
>   Session s = c.connect();
>   
>   s.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication 
> = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 }");
>   s.execute("CREATE TABLE IF NOT EXISTS test.test_table_sec(sec 
> BIGINT PRIMARY KEY, id INT)");
> //create secondary index on ID column, used for select 
> statement
> String index = "CREATE CUSTOM INDEX test_table_sec_idx ON 
> test.test_table_sec (id) USING 'org.apache.cassandra.index.sasi.SASIIndex' "
> + "WITH OPTIONS = { 'mode': 'PREFIX' }";
> s.execute(index);
>   
>   PreparedStatement insert = s.prepare("INSERT INTO 
> test.test_table_sec (id, sec) VALUES (1, ?)");
>   for (int i = 0; i < 1000; i++)
>   s.execute(insert.bind((long) i));
>   
>   PreparedStatement select = s.prepare("SELECT sec FROM 
> test.test_table_sec WHERE id = 1");
>   
>   long lastSec = -1;  
>   for (Row row : s.execute(select.bind().setFetchSize(300)))
>   {
>   long sec = row.getLong("sec");
>   if (sec == lastSec)
>   System.out.println(String.format("Duplicated id 
> %d", sec));
>   
>   lastSec = sec;
>   }
>   System.exit(0);
>   }
> }
> {code}
> The program outputs the following:
> {noformat}
> Duplicated id 23
> Duplicated id 192
> Duplicated id 684
> {noformat}
> Note that the simple primary key is required to reproduce this.





[jira] [Assigned] (CASSANDRA-11668) InterruptedException in HintsDispatcher

2017-04-10 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-11668:
-

Assignee: (was: Aleksey Yeschenko)

> InterruptedException in HintsDispatcher
> ---
>
> Key: CASSANDRA-11668
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11668
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Russ Hatch
>  Labels: dtest
>
> This exception was seen when trying to repro a test problem. The original 
> test problem appears to be a non-issue, but the exception itself seems 
> worth investigating.
> This happened on upgrade from 3.2.1 to 3.3 HEAD (a soon to be retired 
> test-case).
> The test does a rolling upgrade where nodes are one by one stopped, upgraded, 
> and started on the new version.
> The exception occurred some time after starting node1 on the upgraded 
> version, and upgrading/starting node2 on the new version. Node2 logged the 
> exception.
> {noformat}
> node2: ERROR [HintsDispatcher:2] 2016-05-09 23:37:45,816 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[HintsDispatcher:2,1,main]
> java.lang.AssertionError: java.lang.InterruptedException
>   at 
> org.apache.cassandra.hints.HintsDispatcher$Callback.await(HintsDispatcher.java:205)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:146)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:121) 
> ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:93) 
> ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:247)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:219)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:198)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> Caused by: java.lang.InterruptedException: null
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.checkInterrupted(WaitQueue.java:313)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUntil(WaitQueue.java:301)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.utils.concurrent.SimpleCondition.await(SimpleCondition.java:63)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher$Callback.await(HintsDispatcher.java:201)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   ... 11 common frames omitted
> Unexpected error in node2 log, error: 
> ERROR [HintsDispatcher:2] 2016-05-09 23:37:45,816 CassandraDaemon.java:195 - 
> Exception in thread Thread[HintsDispatcher:2,1,main]
> java.lang.AssertionError: java.lang.InterruptedException
>   at 
> org.apache.cassandra.hints.HintsDispatcher$Callback.await(HintsDispatcher.java:205)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:146)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:121) 
> ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:93) 
> ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:247)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:219)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:198)
>  ~[apache-cassandra-3.2.1.jar:3.2.1]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> 
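The failure mode above, an {{AssertionError}} caused by {{InterruptedException}}, comes from an await that converts interruption into an assertion failure, which the daemon's uncaught-exception handler then logs at ERROR level. A minimal standalone sketch of that pattern (hypothetical names, not the actual {{HintsDispatcher}} code):

```java
public class InterruptWrapDemo {
    // Stand-in for SimpleCondition.await(): wraps interruption in an
    // AssertionError, the same pattern visible in the stack trace above.
    static void awaitForever() {
        try {
            Thread.sleep(60_000);
        } catch (InterruptedException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Throwable[] seen = new Throwable[1];
        Thread dispatcher = new Thread(InterruptWrapDemo::awaitForever, "HintsDispatcher:2");
        // Stand-in for the daemon's handler that logs "Exception in thread ..."
        dispatcher.setUncaughtExceptionHandler((t, e) -> seen[0] = e);
        dispatcher.start();
        dispatcher.interrupt(); // e.g. a shutdown/drain interrupting the dispatch task
        dispatcher.join();
        if (!(seen[0] instanceof AssertionError)
                || !(seen[0].getCause() instanceof InterruptedException))
            throw new IllegalStateException("unexpected: " + seen[0]);
        System.out.println("ok: AssertionError caused by InterruptedException");
    }
}
```

This is why a routine interrupt during a rolling upgrade can surface as a scary-looking ERROR log line rather than a quiet shutdown.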

[jira] [Commented] (CASSANDRA-13345) Increasing the per thread stack size to atleast 512k

2017-04-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962866#comment-15962866
 ] 

Jason Brown commented on CASSANDRA-13345:
-

[~amitkumar_ghatwal] I think [~snazy] meant you should create a branch under 
your own account on github. 

> Increasing the per thread stack size to atleast 512k 
> -
>
> Key: CASSANDRA-13345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13345
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Set up details
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):160
> On-line CPU(s) list:   0-159
> Thread(s) per core:8
> Core(s) per socket:5
> Socket(s): 4
> NUMA node(s):  4
> Model: 2.1 (pvr 004b 0201)
> Model name:POWER8E (raw), altivec supported
> CPU max MHz:   3690.
> CPU min MHz:   2061.
> L1d cache: 64K
> L1i cache: 32K
> L2 cache:  512K
> L3 cache:  8192K
> NUMA node0 CPU(s): 0-39
> NUMA node1 CPU(s): 40-79
> NUMA node16 CPU(s):80-119
> NUMA node17 CPU(s):120-159
> $ cat /etc/os-release
> NAME="Ubuntu"
> VERSION="16.04.1 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04.1 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> VERSION_CODENAME=xenial
> UBUNTU_CODENAME=xenial
> $ arch
> ppc64le
> $ java -version
> openjdk version "1.8.0_121"
> OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>
> Hi All,
> I followed the steps below:
> ```
> $ git clone https://github.com/apache/cassandra.git
> $ cd cassandra/
> $ ant
> $ bin/cassandra -f
> The stack size specified is too small, Specify at least 328k
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> ```
> After getting this, I had to increase the thread stack size to 512k in 
> 'conf/jvm.options':
> ```
> $ git diff conf/jvm.options
> diff --git a/conf/jvm.options b/conf/jvm.options
> index 49b2196..00c03ce 100644
> --- a/conf/jvm.options
> +++ b/conf/jvm.options
> @@ -99,7 +99,7 @@
>  -XX:+HeapDumpOnOutOfMemoryError
>  # Per-thread stack size.
> --Xss256k
> +-Xss512k
>  # Larger interned string table, for gossip's benefit (CASSANDRA-6410)
>  -XX:StringTableSize=103
> ```
> Thereafter I was able to start the Cassandra server successfully.
> Could you please consider increasing the stack size to '512k' in 
> 'conf/jvm.options'.
> Similar to "https://issues.apache.org/jira/browse/CASSANDRA-13300". Let me 
> know if we can increase the stack size in the Apache Cassandra trunk.
> Thanks for the support provided so far, and let me know.
> Regards,
> Amit





[jira] [Assigned] (CASSANDRA-13423) Secondary indexes can return stale data for deleted rows in 2.x

2017-04-10 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-13423:
---

Assignee: Andrés de la Peña

> Secondary indexes can return stale data for deleted rows in 2.x
> ---
>
> Key: CASSANDRA-13423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Andrés de la Peña
>
> In {{2.x}}, when the secondary index detects that a row has been deleted, it 
> removes it from the index. This approach can result in stale data being 
> returned if one of the nodes has not yet received the deletion.
> The problem comes from this 
> [line|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/index/composites/CompositesSearcher.java#L284].
> To avoid that problem, we should remove the rows from the indexes only once 
> they have been garbage collected.





[jira] [Assigned] (CASSANDRA-13412) Update of column with TTL results in secondary index not returning row

2017-04-10 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-13412:
---

Assignee: Andrés de la Peña

> Update of column with TTL results in secondary index not returning row
> --
>
> Key: CASSANDRA-13412
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13412
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Enrique Bautista Barahona
>Assignee: Andrés de la Peña
>
> Cassandra versions: 2.2.3, 3.0.11
> 1 datacenter, keyspace has RF 3. Default consistency level.
> Steps:
> 1. I create these table and index.
> {code}
> CREATE TABLE my_table (
> a text,
> b text,
> c text,
> d set,
> e float,
> f text,
> g int,
> h double,
> j set,
> k float,
> m set,
> PRIMARY KEY (a, b, c)
> ) WITH read_repair_chance = 0.0
>AND dclocal_read_repair_chance = 0.1
>AND gc_grace_seconds = 864000
>AND bloom_filter_fp_chance = 0.01
>AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }
>AND comment = ''
>AND compaction = { 'class' : 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' }
>AND compression = { 'sstable_compression' : 
> 'org.apache.cassandra.io.compress.LZ4Compressor' }
>AND default_time_to_live = 0
>AND speculative_retry = '99.0PERCENTILE'
>AND min_index_interval = 128
>AND max_index_interval = 2048;
> CREATE INDEX my_index ON my_table (c);
> {code}
> 2. I have 9951 INSERT statements in a file and I run the following command to 
> execute them. The INSERT statements have no TTL and no consistency level is 
> specified.
> {code}
> cqlsh   -u  -f 
> {code}
> 3. I update a column filtering by the whole primary key, and setting a TTL. 
> For example:
> {code}
> UPDATE my_table USING TTL 30 SET h = 10 WHERE a = 'test_a' AND b = 'test_b' 
> AND c = 'test_c';
> {code}
> 4. After the time specified in the TTL I run the following queries:
> {code}
> SELECT * FROM my_table WHERE a = 'test_a' AND b = 'test_b' AND c = 'test_c';
> SELECT * FROM my_table WHERE c = 'test_c';
> {code}
> The first one returns the correct row with an empty h column (as it has 
> expired). However, the second query (which uses the secondary index on column 
> c) returns nothing.
> I've done the query through my app which uses the Java driver v3.0.4 and 
> reads with CL local_one, from the cql shell and from DBeaver 3.8.5. All 
> display the same behaviour. The queries are performed minutes after the 
> writes and the servers don't have a high load, so I think it's unlikely to be 
> a consistency issue.
> I've tried to reproduce the issue in ccm and cqlsh by creating a new keyspace 
> and table, and inserting just 1 row, and the bug doesn't manifest. This leads 
> me to think that it's an issue only present with not trivially small amounts 
> of data, or maybe present only after Cassandra compacts or performs whatever 
> maintenance it needs to do.





[jira] [Assigned] (CASSANDRA-11928) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2017-04-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña reassigned CASSANDRA-11928:
-

Assignee: (was: Andrés de la Peña)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11928
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>  Labels: dtest, flaky
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/727/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-3.0_dtest #727
> Is it a problem that the tracing message with the query is missing?





[jira] [Commented] (CASSANDRA-13427) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnRegularColumn

2017-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962718#comment-15962718
 ] 

Andrés de la Peña commented on CASSANDRA-13427:
---

I think you have forgotten to increase the variable 
[{{CassandraIndexTest.indexCounter}}|https://github.com/ifesdjeen/cassandra/blob/13427-3.0/test/unit/org/apache/cassandra/index/internal/CassandraIndexTest.java#L529]
 used to generate unique names for indexes. The tests don't fail because 
the generated index name also contains the table name, which contains an 
{{AtomicInteger}} sequence number with a similar purpose. Alternatively, 
[CASSANDRA-13385|https://issues.apache.org/jira/browse/CASSANDRA-13385] could 
be useful for getting the generated index names. Apart from this, the patch 
looks good to me.

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnRegularColumn
> --
>
> Key: CASSANDRA-13427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13427
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> Because of the name clash, there's a following failure happening (extremely 
> infrequently, it's worth noting, seen it only once, no further traces / 
> instances found):
> {code}
> Error setting schema for test (query was: CREATE INDEX v_index ON 
> cql_test_keyspace.table_22(v))
> {code}
> Stacktrace:
> {code}
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX v_index ON cql_test_keyspace.table_22(v))
> {code}





[jira] [Commented] (CASSANDRA-13363) java.lang.ArrayIndexOutOfBoundsException: null

2017-04-10 Thread Artem Rokhin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962703#comment-15962703
 ] 

Artem Rokhin commented on CASSANDRA-13363:
--

[~ifesdjeen] Thank you, turned it off. [~slebresne] I'll let you know when/if 
the issue is reproduced. 

> java.lang.ArrayIndexOutOfBoundsException: null
> --
>
> Key: CASSANDRA-13363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13363
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 6, Cassandra 3.10
>Reporter: Artem Rokhin
>
> Constantly see this error in the log without any additional information or a 
> stack trace.
> {code}
> Exception in thread Thread[MessagingService-Incoming-/10.0.1.26,5,main]
> {code}
> {code}
> java.lang.ArrayIndexOutOfBoundsException: null
> {code}
> Logger: org.apache.cassandra.service.CassandraDaemon
> Thread: MessagingService-Incoming-/10.0.1.12
> Method: uncaughtException
> File: CassandraDaemon.java
> Line: 229





[jira] [Commented] (CASSANDRA-13385) Delegate utests index name creation to CQLTester.createIndex

2017-04-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962688#comment-15962688
 ] 

Andrés de la Peña commented on CASSANDRA-13385:
---

Thanks for the valuable input; indeed, it is a better idea to always return the 
index name. Here is the new patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:13385-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13385-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13385-trunk-dtest/]|

{{createIndex}} parses the formatted query to retrieve the specified index name 
and, if none has been specified, it uses 
[{{Indexes.getAvailableIndexName}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/schema/Indexes.java#L200]
 to get the index name that the query execution is going to generate.
The {{CREATE INDEX}} query is parsed with a regular expression that is quite 
long due to the number of cases:
{code}
CREATE INDEX ON table_0;
CREATE INDEX ON keyspace.table_0;
CREATE INDEX ON "keyspace".table_0;
CREATE INDEX ON keyspace."table_0";
CREATE INDEX idx_0 ON table_0;
CREATE INDEX table_0(c);
CREATE INDEX idx_0 ON table_0 (c);
CREATE INDEX idx_0 ON table_0( "c");
CREATE INDEX idx_0 ON table_0( keys (c));
CREATE INDEX idx_0 ON table_0( values (c));
CREATE INDEX IF NOT EXISTS idx_0 ON table_0(values(c));
CREATE CUSTOM INDEX idx_0 ON table_0 USING 'com.my' WITH OPTIONS = {'c':'(c)'};
{code}
Please let me know what you think.
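As a rough illustration of the kind of regular expression involved (a sketch only; the actual regex in the patch handles more quoting and whitespace cases, and the names here are made up), the explicit index name in statements like the cases above can be extracted like this:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IndexNameParser {
    // Sketch of a CREATE INDEX name extractor. Group 1 captures the optional
    // explicit index name; the optional groups before it absorb the
    // CUSTOM and IF NOT EXISTS keywords so they are not mistaken for names.
    private static final Pattern CREATE_INDEX = Pattern.compile(
            "\\s*CREATE(?:\\s+CUSTOM)?\\s+INDEX"
            + "(?:\\s+IF\\s+NOT\\s+EXISTS)?"
            + "(?:\\s+\"?(\\w+)\"?)?"
            + "\\s+ON\\s+.*",
            Pattern.CASE_INSENSITIVE);

    // Returns the explicit index name, or null when the server must generate
    // one (the case the comment above covers with Indexes.getAvailableIndexName).
    static String explicitName(String query) {
        Matcher m = CREATE_INDEX.matcher(query);
        if (!m.matches())
            throw new IllegalArgumentException("not a CREATE INDEX query: " + query);
        return m.group(1);
    }

    public static void main(String[] args) {
        check(explicitName("CREATE INDEX ON ks.table_0;") == null);
        check("idx_0".equals(explicitName("CREATE INDEX idx_0 ON table_0 (c);")));
        check("idx_0".equals(explicitName(
                "CREATE INDEX IF NOT EXISTS idx_0 ON table_0(values(c));")));
        check("idx_0".equals(explicitName(
                "CREATE CUSTOM INDEX idx_0 ON table_0 USING 'com.my' WITH OPTIONS = {'c':'(c)'};")));
        System.out.println("ok");
    }

    static void check(boolean cond) {
        if (!cond) throw new AssertionError();
    }
}
```

Note how the optional name group backtracks away when the token after {{INDEX}} is {{ON}}, which is what makes the name-less forms parse as "generate a name".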

> Delegate utests index name creation to CQLTester.createIndex
> 
>
> Key: CASSANDRA-13385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13385
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>  Labels: cql, unit-test
>
> Currently, many unit tests rely on {{CQLTester.createIndex}} to create 
> indexes. The index name should be specified by the test itself, for example:
> {code}
> createIndex("CREATE CUSTOM INDEX myindex ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}
> Two different tests using the same index name can produce racy {{Index 
> myindex already exists}} errors due to the asynchronicity of 
> {{CQLTester.afterTest}} cleanup methods. 
> It would be nice to modify {{CQLTester.createIndex}} to make it generate its 
> own index names, as it is done by {{CQLTester.createTable}}:
> {code}
> createIndex("CREATE CUSTOM INDEX %s ON %s(c) USING 
> 'org.apache.cassandra.index.internal.CustomCassandraIndex'");
> {code}





[jira] [Updated] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Alexandre Dutra (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Dutra updated CASSANDRA-13304:

Attachment: boxplot-read-throughput.png
boxplot-write-throughput.png

Boxplot charts comparing reads and writes between driver 3.2.0 and driver 3.2.0 
with checksum enabled.

> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff, boxplot-read-throughput.png, 
> boxplot-write-throughput.png
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will ensure we at least catch corrupted data and 
> likely protect ourselves pretty well anyways.
> I didn't go to the trouble of implementing a Snappy checksummed compressor 
> implementation, as Snappy has been deprecated for a while -- it is really 
> slow and crappy compared to LZ4 -- and we should do everything in our power 
> to make sure no one in the community is still using it. I left it in (for 
> obvious backwards-compatibility reasons) for old clients that don't know 
> about the new protocol.
> The current protocol has a 256MB (max) frame body -- where the serialized 
> contents are simply written in to the frame body.
> If the client sends a compression option in the startup, we will install a 
> FrameCompressor inline. Unfortunately, we went with a decision to treat the 
> frame body separately from the header bits etc in a given message. So, 
> instead we put a compressor 

[jira] [Commented] (CASSANDRA-13304) Add checksumming to the native protocol

2017-04-10 Thread Alexandre Dutra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962679#comment-15962679
 ] 

Alexandre Dutra commented on CASSANDRA-13304:
-

Added some benchmarks 
[here|https://github.com/datastax/java-driver/commit/a3bae833bb08b231d1f6e894015d5f3cd899a2b1#diff-ca42b8b2fb773033e101e1ed8cc043f0].
 Of course, the usual caveats about benchmarks apply :). On my local 
workstation the results show that the overhead of checksumming (throughput 
divided by roughly 1.5) is negligible when compared to network latencies.

However, I wanted to investigate a bit further and also ran a cassandra-stress 
test comparing driver 3.2.0 against the checksumming driver, using [~beobal]'s 
C* branch. The test was performed on a 3-node cluster on EC2 with c3.8xlarge 
instances. I tested two modes (write and read), with 500 partitions each and 
the number of concurrent requests varying from 10 to 500. 

I am a bit concerned by the results, as they show a clear performance penalty 
for high numbers of concurrent requests (>100). I will attach the charts to 
this ticket.

In light of this, and not being a hardware expert, I would be slightly in 
favor of leaving this feature optional, or at least of giving users an opt-out 
escape hatch.
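For reference on what is being measured: in the proposed framing, each compressed chunk carries two plain CRC32s, one over its two length fields and one over its compressed bytes. A minimal sketch using {{java.util.zip.CRC32}} (illustrative only; the field widths and byte order here are assumptions, not the patch's exact wire format):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Illustrative per-chunk checksums in the spirit of the proposed framing:
// one CRC32 over the (compressed length, uncompressed length) pair and one
// over the compressed payload itself. Big-endian 4-byte ints are an
// assumption here, not the patch's exact encoding.
public class ChunkChecksums
{
    public static long lengthsChecksum(int compressedLen, int uncompressedLen)
    {
        ByteBuffer lengths = ByteBuffer.allocate(8)
                                       .putInt(compressedLen)
                                       .putInt(uncompressedLen);
        CRC32 crc = new CRC32();
        crc.update(lengths.array());
        return crc.getValue();
    }

    public static long payloadChecksum(byte[] compressedBytes)
    {
        CRC32 crc = new CRC32();
        crc.update(compressedBytes);
        return crc.getValue();
    }

    public static void main(String[] args)
    {
        byte[] chunk = "example payload".getBytes();
        System.out.println(lengthsChecksum(chunk.length, 64));
        System.out.println(payloadChecksum(chunk));
    }
}
```

On the receiving side, recomputing these two CRC32s and comparing them against the transmitted values is what catches the bit flips described in the ticket.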





> Add checksumming to the native protocol
> ---
>
> Key: CASSANDRA-13304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13304
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>  Labels: client-impacting
> Attachments: 13304_v1.diff
>
>
> The native binary transport implementation doesn't include checksums. This 
> makes it highly susceptible to silently inserting corrupted data either due 
> to hardware issues causing bit flips on the sender/client side, C*/receiver 
> side, or network in between.
> Attaching an implementation that makes checksum'ing mandatory (assuming both 
> client and server know about a protocol version that supports checksums) -- 
> and also adds checksumming to clients that request compression.
> The serialized format looks something like this:
> {noformat}
>  *  1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
>  *  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Number of Compressed Chunks  | Compressed Length (e1)/
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * /  Compressed Length cont. (e1) |Uncompressed Length (e1)   /
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Uncompressed Length cont. (e1)| CRC32 Checksum of Lengths (e1)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Checksum of Lengths cont. (e1)|Compressed Bytes (e1)+//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e1) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (e2)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (e2) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * | Compressed Bytes (e2)   +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (e2) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |Compressed Length (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |   Uncompressed Length (en)|
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |CRC32 Checksum of Lengths (en) |
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  Compressed Bytes (en)  +//
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>  * |  CRC32 Checksum (en) ||
>  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
> {noformat}
> The first pass here adds checksums only to the actual contents of the frame 
> body itself (and doesn't actually checksum lengths and headers). While it 
> would be great to fully add checksuming across the entire protocol, the 
> proposed implementation will 

[jira] [Assigned] (CASSANDRA-13105) Multi-index query incorrectly returns 0 rows

2017-04-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-13105:
--

Assignee: Alex Petrov  (was: Benjamin Lerer)

> Multi-index query incorrectly returns 0 rows
> 
>
> Key: CASSANDRA-13105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13105
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.9.0 on linux & osx
>Reporter: Voytek Jarnot
>Assignee: Alex Petrov
>
> Setup:
> {code}
> create table test1(id1 text PRIMARY KEY, val1 text, val2 text);
> create custom index test1_idx_val1 on test1(val1) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> create custom index test1_idx_val2 on test1(val2) using 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> insert into test1(id1, val1, val2) values ('1', '1val1', '1val2');
> insert into test1(id1, val1, val2) values ('2', '~~', '2val2');
> {code}
> Queries:
> {code}
> (1) select * from test1 where val1 = '~~';
> (2) select * from test1 where val1 < '~~' allow filtering;
> (3) select * from test1 where val2 = '1val2';
> (4) select * from test1 where val1 < '~~' and val2 = '1val2' allow filtering;
> {code}
> 1, 2, and 3 all work correctly.  4 does not work.
> 2, 3, and 4 should return the same row (id1='1'); 2 and 3 do, 4 returns 0 
> rows.





[jira] [Resolved] (CASSANDRA-12873) Cassandra can't restart after set NULL to a frozen list

2017-04-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer resolved CASSANDRA-12873.

Resolution: Cannot Reproduce

> Cassandra can't restart after set NULL to a frozen list
> ---
>
> Key: CASSANDRA-12873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Mikhail Krupitskiy
>Priority: Critical
>
> Cassandra 3.5.
> 1) Create a table with frozen list as one of columns.
> 2) Add a row where the column is NULL.
> 3) Stop Cassandra.
> 4) Run Cassandra.
> Cassandra unable to start with the following exception:
> {noformat}
> ERROR o.a.c.utils.JVMStabilityInspector - Exiting due to error while 
> processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Unexpected error deserializing mutation; saved to 
> /var/folders/gl/bvj71v5d39339dlr8yf08drcgq/T/mutation5963614818028050337dat.
>   This may be caused by replaying a mutation against a table with the same 
> name but incompatible schema.  Exception follows: 
> org.apache.cassandra.serializers.MarshalException: Not enough bytes to read a 
> list
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:611)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:568)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:521)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:407)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:236)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:192) 
> [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:172) 
> [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) 
> [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.5.jar:3.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680) 
> [apache-cassandra-3.5.jar:3.5]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_71]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_71]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_71]
>   at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_71]
> {noformat}
> Below is a script for steps #1, #2:
> {code}
> CREATE keyspace if not exists kmv WITH REPLICATION = { 'class' : 
> 'SimpleStrategy', 'replication_factor':'1'} ;
> USE kmv;
> CREATE TABLE if not exists kmv (id int, l frozen, PRIMARY 
> KEY(id));
> INSERT into kmv (id, l) values (1, null) ;
> {code}





[jira] [Commented] (CASSANDRA-13345) Increasing the per thread stack size to atleast 512k

2017-04-10 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962629#comment-15962629
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13345:
---

[~snazy] - Could you please help create a development branch under 
https://github.com/apache/cassandra/branches so that I can push my changes 
related to ppc64le? Currently I don't have access rights to create a branch on 
the above Cassandra repo.

Hoping to hear from you soon.



> Increasing the per thread stack size to atleast 512k 
> -
>
> Key: CASSANDRA-13345
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13345
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: Set up details
> $ lscpu
> Architecture:  ppc64le
> Byte Order:Little Endian
> CPU(s):160
> On-line CPU(s) list:   0-159
> Thread(s) per core:8
> Core(s) per socket:5
> Socket(s): 4
> NUMA node(s):  4
> Model: 2.1 (pvr 004b 0201)
> Model name:POWER8E (raw), altivec supported
> CPU max MHz:   3690.
> CPU min MHz:   2061.
> L1d cache: 64K
> L1i cache: 32K
> L2 cache:  512K
> L3 cache:  8192K
> NUMA node0 CPU(s): 0-39
> NUMA node1 CPU(s): 40-79
> NUMA node16 CPU(s):80-119
> NUMA node17 CPU(s):120-159
> $ cat /etc/os-release
> NAME="Ubuntu"
> VERSION="16.04.1 LTS (Xenial Xerus)"
> ID=ubuntu
> ID_LIKE=debian
> PRETTY_NAME="Ubuntu 16.04.1 LTS"
> VERSION_ID="16.04"
> HOME_URL="http://www.ubuntu.com/"
> SUPPORT_URL="http://help.ubuntu.com/"
> BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> VERSION_CODENAME=xenial
> UBUNTU_CODENAME=xenial
> $ arch
> ppc64le
> $ java -version
> openjdk version "1.8.0_121"
> OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
> OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
>Reporter: Amitkumar Ghatwal
>Priority: Minor
>
> Hi All,
> I followed the steps below:
> ```
> $ git clone https://github.com/apache/cassandra.git
> $ cd cassandra/
> $ ant
> $ bin/cassandra -f
> The stack size specified is too small, Specify at least 328k
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> ```
> After getting this error, I had to increase the per-thread stack size to 
> 512k in 'conf/jvm.options':
> ```
> $ git diff conf/jvm.options
> diff --git a/conf/jvm.options b/conf/jvm.options
> index 49b2196..00c03ce 100644
> --- a/conf/jvm.options
> +++ b/conf/jvm.options
> @@ -99,7 +99,7 @@
>  -XX:+HeapDumpOnOutOfMemoryError
>  # Per-thread stack size.
> --Xss256k
> +-Xss512k
>  # Larger interned string table, for gossip's benefit (CASSANDRA-6410)
>  -XX:StringTableSize=1000003
> ```
> Thereafter I was able to start the Cassandra server successfully.
> Could you please consider increasing the stack size to 512k in 
> 'conf/jvm.options'?
> Similar to https://issues.apache.org/jira/browse/CASSANDRA-13300. Let me 
> know if we can increase the stack size in the Apache Cassandra trunk.
> Thanks for the support provided so far.
> Regards,
> Amit





[jira] [Commented] (CASSANDRA-13123) Draining a node might fail to delete all inactive commitlogs

2017-04-10 Thread Jan Urbański (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962568#comment-15962568
 ] 

Jan Urbański commented on CASSANDRA-13123:
--

We've been running it for weeks with no problems, so +1 from me.

> Draining a node might fail to delete all inactive commitlogs
> 
>
> Key: CASSANDRA-13123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13123
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jan Urbański
>Assignee: Jan Urbański
> Fix For: 3.8
>
> Attachments: 13123-2.2.8.txt, 13123-3.0.10.txt, 13123-3.9.txt, 
> 13123-trunk.txt
>
>
> After issuing a drain command, it's possible that not all of the inactive 
> commitlogs are removed.
> The drain command shuts down the CommitLog instance, which in turn shuts down 
> the CommitLogSegmentManager. This has the effect of discarding any pending 
> management tasks it might have, like the removal of inactive commitlogs.
> This in turn leads to an excessive amount of commitlogs being left behind 
> after a drain and a lengthy recovery after a restart. With a fleet of dozens 
> of nodes, each of them leaving several GB of commitlogs after a drain and 
> taking up to two minutes to recover them on restart, the additional time 
> required to restart the entire fleet becomes noticeable.
> This problem is not present in 3.x or trunk because of the CLSM rewrite done 
> in CASSANDRA-8844.





[jira] [Assigned] (CASSANDRA-13169) testall failure in org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode-compression

2017-04-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13169:
---

Assignee: Alex Petrov

> testall failure in 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode-compression
> --
>
> Key: CASSANDRA-13169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13169
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Attachments: 
> TEST-org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1376/testReport/org.apache.cassandra.index.sasi.disk/OnDiskIndexTest/testSparseMode_compression
> {code}
> Error Message
> expected:<-1> but was:<0>
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<-1> but was:<0>
>   at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode(OnDiskIndexTest.java:350)
> {code}





[jira] [Assigned] (CASSANDRA-12141) dtest failure in consistency_test.TestConsistency.short_read_reversed_test

2017-04-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-12141:
---

Assignee: (was: Alex Petrov)

> dtest failure in consistency_test.TestConsistency.short_read_reversed_test
> --
>
> Key: CASSANDRA-12141
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12141
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Fix For: 3.11.x
>
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/280/testReport/consistency_test/TestConsistency/short_read_reversed_test
> Failed on CassCI build trunk_offheap_dtest #280
> {code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [epollEventLoopGroup-2-5] 2016-06-27 19:14:54,412 Slf4JLogger.java:176 
> - LEAK: ByteBuf.release() was not called before it's garbage-collected. 
> Enable advanced leak reporting to find out where the leak occurred. To enable 
> advanced leak reporting, specify the JVM option 
> '-Dio.netty.leakDetection.level=advanced' or call 
> ResourceLeakDetector.setLevel() See 
> http://netty.io/wiki/reference-counted-objects.html for more information.
> {code}





[jira] [Assigned] (CASSANDRA-13169) testall failure in org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode-compression

2017-04-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13169:
---

Assignee: (was: Alex Petrov)

> testall failure in 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode-compression
> --
>
> Key: CASSANDRA-13169
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13169
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>  Labels: test-failure, testall
> Attachments: 
> TEST-org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1376/testReport/org.apache.cassandra.index.sasi.disk/OnDiskIndexTest/testSparseMode_compression
> {code}
> Error Message
> expected:<-1> but was:<0>
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<-1> but was:<0>
>   at 
> org.apache.cassandra.index.sasi.disk.OnDiskIndexTest.testSparseMode(OnDiskIndexTest.java:350)
> {code}





[jira] [Comment Edited] (CASSANDRA-13407) test failure at RemoveTest.testBadHostId

2017-04-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15960470#comment-15960470
 ] 

Alex Petrov edited comment on CASSANDRA-13407 at 4/10/17 7:55 AM:
--

Looks like I was able to gather a bit more information on the issue, 
confirming what you're saying: it is possible to reproduce locally by tweaking 
timeouts (in particular, making the gossip interval shorter to emulate the 
slow VM). 

{code}
INFO  [GossipTasks:1] 2017-04-03 23:05:53,433 Gossiper.java:810 - FatClient 
/127.0.0.4 has been silent for 1000ms, removing from gossip
DEBUG [GossipTasks:1] 2017-04-03 23:05:53,436 Gossiper.java:432 - removing 
endpoint /127.0.0.4
DEBUG [GossipTasks:1] 2017-04-03 23:05:53,436 Gossiper.java:407 - evicting 
/127.0.0.4 from gossip
{code}

After that we can get an NPE either in {{Gossiper#getHostId}} or 
{{StorageService#isStatus}}. 

The patches for 2.2 and 3.0 are slightly different, as if we do not initialise 
the schema, we'll get the following error: 

{code}
[junit] junit.framework.AssertionFailedError: []
[junit] at 
org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
[junit] at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1185)
[junit] at 
org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:573)
[junit] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:421)
[junit] at org.apache.cassandra.db.Mutation.apply(Mutation.java:210)
[junit] at org.apache.cassandra.db.Mutation.apply(Mutation.java:215)
[junit] at org.apache.cassandra.db.Mutation.apply(Mutation.java:224)
[junit] at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:566)
[junit] at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:556)
[junit] at 
org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:295)
[junit] at 
org.apache.cassandra.db.SystemKeyspace.updatePeerInfo(SystemKeyspace.java:712)
[junit] at 
org.apache.cassandra.service.StorageService.updatePeerInfo(StorageService.java:1801)
[junit] at 
org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2014)
[junit] at 
org.apache.cassandra.service.StorageService.onChange(StorageService.java:1669)
[junit] at org.apache.cassandra.Util.createInitialRing(Util.java:213)
[junit] at 
org.apache.cassandra.service.RemoveTest.setup(RemoveTest.java:77)
{code}

|[2.2|https://github.com/apache/cassandra/compare/2.2...ifesdjeen:13407-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13407-2.2-testall/]|
|[3.0|https://github.com/apache/cassandra/compare/3.0...ifesdjeen:13407-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13407-3.0-testall/]|
|[3.11|https://github.com/apache/cassandra/compare/3.11...ifesdjeen:13407-3.11]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13407-3.11-testall/]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13407-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-13407-trunk-testall/]|


[jira] [Resolved] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression

2017-04-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov resolved CASSANDRA-12811.
-
Resolution: Fixed

> testall failure in 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression
> 
>
> Key: CASSANDRA-12811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12811
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/34/testReport/org.apache.cassandra.cql3.validation.operations/DeleteTest/testDeleteWithOneClusteringColumns_compression/
> {code}
> Error Message
> Expected empty result but got 1 rows
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected empty result but got 1 rows
>   at org.apache.cassandra.cql3.CQLTester.assertEmpty(CQLTester.java:1089)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:463)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:427)
> {code}





[jira] [Comment Edited] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression

2017-04-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962545#comment-15962545
 ] 

Alex Petrov edited comment on CASSANDRA-12811 at 4/10/17 7:47 AM:
--

Sorry, this was most likely an artefact of my fork. Thank you for noticing!

Cherry-picked to 3.11 as 
[5e130209d38cd7e483d025d798895afe21f2a6bd|https://github.com/ifesdjeen/cassandra/commit/5e130209d38cd7e483d025d798895afe21f2a6bd],
 merged with {{-s ours}} to 
[3.11|https://github.com/ifesdjeen/cassandra/commit/fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06]
 and 
[trunk|https://github.com/ifesdjeen/cassandra/commit/ee6bf10ecc7b16e12067ff41bf810c26c8730a03].


was (Author: ifesdjeen):
Sorry, this was most likely an artefact of my fork. Thank you for noticing!

Re-pushed to 3.11 as 
[5e130209d38cd7e483d025d798895afe21f2a6bd|https://github.com/ifesdjeen/cassandra/commit/5e130209d38cd7e483d025d798895afe21f2a6bd],
 merged with {{-s ours}} to 
[3.11|https://github.com/ifesdjeen/cassandra/commit/fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06]
 and 
[trunk|https://github.com/ifesdjeen/cassandra/commit/ee6bf10ecc7b16e12067ff41bf810c26c8730a03].

> testall failure in 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression
> 
>
> Key: CASSANDRA-12811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12811
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/34/testReport/org.apache.cassandra.cql3.validation.operations/DeleteTest/testDeleteWithOneClusteringColumns_compression/
> {code}
> Error Message
> Expected empty result but got 1 rows
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected empty result but got 1 rows
>   at org.apache.cassandra.cql3.CQLTester.assertEmpty(CQLTester.java:1089)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:463)
>   at 
> org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns(DeleteTest.java:427)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression

2017-04-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962545#comment-15962545
 ] 

Alex Petrov commented on CASSANDRA-12811:
-

Sorry, this was most likely an artefact of my fork. Thank you for noticing!

Re-pushed to 3.11 as 
[5e130209d38cd7e483d025d798895afe21f2a6bd|https://github.com/ifesdjeen/cassandra/commit/5e130209d38cd7e483d025d798895afe21f2a6bd],
 merged with {{-s ours}} to 
[3.11|https://github.com/ifesdjeen/cassandra/commit/fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06]
 and 
[trunk|https://github.com/ifesdjeen/cassandra/commit/ee6bf10ecc7b16e12067ff41bf810c26c8730a03].



[jira] [Comment Edited] (CASSANDRA-12811) testall failure in org.apache.cassandra.cql3.validation.operations.DeleteTest.testDeleteWithOneClusteringColumns-compression

2017-04-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15962545#comment-15962545
 ] 

Alex Petrov edited comment on CASSANDRA-12811 at 4/10/17 7:47 AM:
--

Sorry, this was most likely an artefact of my fork. Thank you for noticing!

Re-pushed to 3.11 as 
[5e130209d38cd7e483d025d798895afe21f2a6bd|https://github.com/ifesdjeen/cassandra/commit/5e130209d38cd7e483d025d798895afe21f2a6bd],
 merged with {{-s ours}} to 
[3.11|https://github.com/ifesdjeen/cassandra/commit/fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06]
 and 
[trunk|https://github.com/ifesdjeen/cassandra/commit/ee6bf10ecc7b16e12067ff41bf810c26c8730a03].


was (Author: ifesdjeen):
Sorry, this was most likely an artefact of my fork. Thank you for noticing!

Re-pushed to 3.11 as 
[5e130209d38cd7e483d025d798895afe21f2a6bd|https://github.com/ifesdjeen/cassandra/commit/5e130209d38cd7e483d025d798895afe21f2a6bd],
 merged with {{-s ours}} to 
[3.11|https://github.com/ifesdjeen/cassandra/commit/fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06]
 and 
[trunk|https://github.com/ifesdjeen/cassandra/commit/ee6bf10ecc7b16e12067ff41bf810c26c8730a03].



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-04-10 Thread ifesdjeen
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc58dbb2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc58dbb2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc58dbb2

Branch: refs/heads/trunk
Commit: fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06
Parents: c38e618 5e13020
Author: Alex Petrov 
Authored: Mon Apr 10 09:43:19 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:43:19 2017 +0200

--

--




[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-04-10 Thread ifesdjeen
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc58dbb2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc58dbb2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc58dbb2

Branch: refs/heads/cassandra-3.11
Commit: fc58dbb2f89b1e53a7e8ddbfe3332cb254d7cd06
Parents: c38e618 5e13020
Author: Alex Petrov 
Authored: Mon Apr 10 09:43:19 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:43:19 2017 +0200

--

--




[2/6] cassandra git commit: Make reading of range tombstones more reliable

2017-04-10 Thread ifesdjeen
Make reading of range tombstones more reliable

Patch by Alex Petrov; reviewed by Benjamin Lerer for CASSANDRA-12811

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e130209
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e130209
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e130209

Branch: refs/heads/cassandra-3.11
Commit: 5e130209d38cd7e483d025d798895afe21f2a6bd
Parents: 58e8008
Author: Alex Petrov 
Authored: Fri Apr 7 12:09:32 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:38:30 2017 +0200

--
 CHANGES.txt |  1 +
 .../db/SinglePartitionReadCommand.java  | 11 +--
 .../db/filter/ClusteringIndexNamesFilter.java   |  6 +-
 .../db/partitions/AbstractBTreePartition.java   |  5 --
 .../cassandra/utils/IndexedSearchIterator.java  |  5 ++
 .../apache/cassandra/utils/SearchIterator.java  |  2 -
 .../cql3/validation/operations/DeleteTest.java  | 82 +++-
 .../partition/PartitionImplementationTest.java  |  2 +-
 8 files changed, 92 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c5e517f..eeb71b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.13
+ * Make reading of range tombstones more reliable (CASSANDRA-12811)
  * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
  * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
  * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 5f8df1b..99abd10 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -736,13 +736,13 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 // We need to get the partition deletion and include it if 
it's live. In any case though, we're done with that sstable.
 sstable.incrementReadCount();
-try (UnfilteredRowIterator iter = 
sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), 
isForThrift()))
+try (UnfilteredRowIterator iter = filter.filter(sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), isForThrift())))
 {
+sstablesIterated++;
 if (!iter.partitionLevelDeletion().isLive())
-{
-sstablesIterated++;
 result = 
add(UnfilteredRowIterators.noRowsIterator(iter.metadata(), iter.partitionKey(), 
Rows.EMPTY_STATIC_ROW, iter.partitionLevelDeletion(), filter.isReversed()), 
result, filter, sstable.isRepaired());
-}
+else
+result = add(iter, result, filter, 
sstable.isRepaired());
 }
 continue;
 }
@@ -835,9 +835,6 @@ public class SinglePartitionReadCommand extends ReadCommand
 NavigableSet<Clustering> toRemove = null;
 for (Clustering clustering : clusterings)
 {
-if (!searchIter.hasNext())
-break;
-
 Row row = searchIter.next(clustering);
 if (row == null || !canRemoveRow(row, columns.regulars, 
sstableTimestamp))
 continue;
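The hunk above moves the filter onto the sstable iterator itself and increments {{sstablesIterated}} as soon as the sstable is opened, rather than only when the partition deletion is not live. A minimal standalone sketch of that shape, using hypothetical stand-in types (not Cassandra's real API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class FilterFirst {
    static int sstablesIterated = 0;

    // Hypothetical stand-in for applying a clustering-names filter to a raw
    // row iterator: keep only the requested rows.
    static Iterator<String> filter(Iterator<String> raw, List<String> wanted) {
        ArrayList<String> kept = new ArrayList<>();
        raw.forEachRemaining(row -> { if (wanted.contains(row)) kept.add(row); });
        return kept.iterator();
    }

    static String run() {
        Iterator<String> raw = List.of("a", "b", "c").iterator(); // "sstable" rows
        Iterator<String> iter = filter(raw, List.of("b"));        // filter up front
        sstablesIterated++;   // counted unconditionally, as in the patch above
        StringBuilder sb = new StringBuilder();
        iter.forEachRemaining(sb::append);
        return sstablesIterated + ":" + sb;
    }

    public static void main(String[] args) {
        System.out.println(run());  // 1:b
    }
}
```

The design point mirrored here: counting the sstable only inside one branch (as the old code did) under-reports how many sstables a read actually touched, so the counter is bumped once the iterator is open, regardless of which branch handles it.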

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
--
diff --git 
a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java 
b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
index a81a7a6..7769f2e 100644
--- a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
@@ -176,7 +176,9 @@ public class ClusteringIndexNamesFilter extends 
AbstractClusteringIndexFilter
 
 public UnfilteredRowIterator getUnfilteredRowIterator(final ColumnFilter 
columnFilter, final Partition partition)
 {
+final Iterator<Clustering> clusteringIter = clusteringsInQueryOrder.iterator();
 final SearchIterator<Clustering, Row> searcher = 

[1/6] cassandra git commit: Make reading of range tombstones more reliable

2017-04-10 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 58e8008a5 -> 5e130209d
  refs/heads/cassandra-3.11 c38e618d6 -> fc58dbb2f
  refs/heads/trunk 333ebd67a -> ee6bf10ec


Make reading of range tombstones more reliable

Patch by Alex Petrov; reviewed by Benjamin Lerer for CASSANDRA-12811

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e130209
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e130209
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e130209

Branch: refs/heads/cassandra-3.0
Commit: 5e130209d38cd7e483d025d798895afe21f2a6bd
Parents: 58e8008
Author: Alex Petrov 
Authored: Fri Apr 7 12:09:32 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:38:30 2017 +0200

--
 CHANGES.txt |  1 +
 .../db/SinglePartitionReadCommand.java  | 11 +--
 .../db/filter/ClusteringIndexNamesFilter.java   |  6 +-
 .../db/partitions/AbstractBTreePartition.java   |  5 --
 .../cassandra/utils/IndexedSearchIterator.java  |  5 ++
 .../apache/cassandra/utils/SearchIterator.java  |  2 -
 .../cql3/validation/operations/DeleteTest.java  | 82 +++-
 .../partition/PartitionImplementationTest.java  |  2 +-
 8 files changed, 92 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c5e517f..eeb71b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.13
+ * Make reading of range tombstones more reliable (CASSANDRA-12811)
  * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
  * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
  * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 5f8df1b..99abd10 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -736,13 +736,13 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 // We need to get the partition deletion and include it if 
it's live. In any case though, we're done with that sstable.
 sstable.incrementReadCount();
-try (UnfilteredRowIterator iter = 
sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), 
isForThrift()))
+try (UnfilteredRowIterator iter = filter.filter(sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), isForThrift())))
 {
+sstablesIterated++;
 if (!iter.partitionLevelDeletion().isLive())
-{
-sstablesIterated++;
 result = 
add(UnfilteredRowIterators.noRowsIterator(iter.metadata(), iter.partitionKey(), 
Rows.EMPTY_STATIC_ROW, iter.partitionLevelDeletion(), filter.isReversed()), 
result, filter, sstable.isRepaired());
-}
+else
+result = add(iter, result, filter, 
sstable.isRepaired());
 }
 continue;
 }
@@ -835,9 +835,6 @@ public class SinglePartitionReadCommand extends ReadCommand
 NavigableSet<Clustering> toRemove = null;
 for (Clustering clustering : clusterings)
 {
-if (!searchIter.hasNext())
-break;
-
 Row row = searchIter.next(clustering);
 if (row == null || !canRemoveRow(row, columns.regulars, 
sstableTimestamp))
 continue;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
--
diff --git 
a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java 
b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
index a81a7a6..7769f2e 100644
--- a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
@@ -176,7 +176,9 @@ public class ClusteringIndexNamesFilter extends 
AbstractClusteringIndexFilter
 
 public UnfilteredRowIterator getUnfilteredRowIterator(final ColumnFilter 
columnFilter, final 

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-04-10 Thread ifesdjeen
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee6bf10e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee6bf10e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee6bf10e

Branch: refs/heads/trunk
Commit: ee6bf10ecc7b16e12067ff41bf810c26c8730a03
Parents: 333ebd6 fc58dbb
Author: Alex Petrov 
Authored: Mon Apr 10 09:43:47 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:43:47 2017 +0200

--

--




[3/6] cassandra git commit: Make reading of range tombstones more reliable

2017-04-10 Thread ifesdjeen
Make reading of range tombstones more reliable

Patch by Alex Petrov; reviewed by Benjamin Lerer for CASSANDRA-12811

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e130209
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e130209
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e130209

Branch: refs/heads/trunk
Commit: 5e130209d38cd7e483d025d798895afe21f2a6bd
Parents: 58e8008
Author: Alex Petrov 
Authored: Fri Apr 7 12:09:32 2017 +0200
Committer: Alex Petrov 
Committed: Mon Apr 10 09:38:30 2017 +0200

--
 CHANGES.txt |  1 +
 .../db/SinglePartitionReadCommand.java  | 11 +--
 .../db/filter/ClusteringIndexNamesFilter.java   |  6 +-
 .../db/partitions/AbstractBTreePartition.java   |  5 --
 .../cassandra/utils/IndexedSearchIterator.java  |  5 ++
 .../apache/cassandra/utils/SearchIterator.java  |  2 -
 .../cql3/validation/operations/DeleteTest.java  | 82 +++-
 .../partition/PartitionImplementationTest.java  |  2 +-
 8 files changed, 92 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c5e517f..eeb71b9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.13
+ * Make reading of range tombstones more reliable (CASSANDRA-12811)
  * Fix startup problems due to schema tables not completely flushed 
(CASSANDRA-12213)
  * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
  * Fix 2i page size calculation when there are no regular columns 
(CASSANDRA-13400)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 5f8df1b..99abd10 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -736,13 +736,13 @@ public class SinglePartitionReadCommand extends 
ReadCommand
 
 // We need to get the partition deletion and include it if 
it's live. In any case though, we're done with that sstable.
 sstable.incrementReadCount();
-try (UnfilteredRowIterator iter = 
sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), 
isForThrift()))
+try (UnfilteredRowIterator iter = filter.filter(sstable.iterator(partitionKey(), columnFilter(), filter.isReversed(), isForThrift())))
 {
+sstablesIterated++;
 if (!iter.partitionLevelDeletion().isLive())
-{
-sstablesIterated++;
 result = 
add(UnfilteredRowIterators.noRowsIterator(iter.metadata(), iter.partitionKey(), 
Rows.EMPTY_STATIC_ROW, iter.partitionLevelDeletion(), filter.isReversed()), 
result, filter, sstable.isRepaired());
-}
+else
+result = add(iter, result, filter, 
sstable.isRepaired());
 }
 continue;
 }
@@ -835,9 +835,6 @@ public class SinglePartitionReadCommand extends ReadCommand
 NavigableSet<Clustering> toRemove = null;
 for (Clustering clustering : clusterings)
 {
-if (!searchIter.hasNext())
-break;
-
 Row row = searchIter.next(clustering);
 if (row == null || !canRemoveRow(row, columns.regulars, 
sstableTimestamp))
 continue;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5e130209/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
--
diff --git 
a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java 
b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
index a81a7a6..7769f2e 100644
--- a/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ClusteringIndexNamesFilter.java
@@ -176,7 +176,9 @@ public class ClusteringIndexNamesFilter extends 
AbstractClusteringIndexFilter
 
 public UnfilteredRowIterator getUnfilteredRowIterator(final ColumnFilter 
columnFilter, final Partition partition)
 {
+final Iterator<Clustering> clusteringIter = clusteringsInQueryOrder.iterator();
 final SearchIterator<Clustering, Row> searcher =