[jira] [Updated] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-11525:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed.

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the OOM fix)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list<text>,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When executing the query {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} with server-side 
> paging, I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> 

[1/2] cassandra git commit: StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread xedin
Repository: cassandra
Updated Branches:
  refs/heads/trunk 677230df6 -> 94b234313


StaticTokenTreeBuilder should respect posibility of duplicate tokens

patch by jrwest and xedin; reviewed by xedin for CASSANDRA-11525


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/020dd2d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/020dd2d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/020dd2d1

Branch: refs/heads/trunk
Commit: 020dd2d1034abc5c729edf1975953614b33c5a8b
Parents: 11da411
Author: Jordan West 
Authored: Thu Apr 7 19:07:50 2016 -0700
Committer: Pavel Yaskevich 
Committed: Fri Apr 8 21:22:00 2016 -0700

--
 CHANGES.txt |  1 +
 .../sasi/disk/AbstractTokenTreeBuilder.java |  1 +
 .../index/sasi/disk/StaticTokenTreeBuilder.java | 92 +---
 .../cassandra/index/sasi/disk/TokenTree.java| 18 ++--
 .../index/sasi/disk/TokenTreeTest.java  | 72 ++-
 5 files changed, 122 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/020dd2d1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 58d8ae8..392d9e7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.5
+ * StaticTokenTreeBuilder should respect posibility of duplicate tokens 
(CASSANDRA-11525)
  * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
  * Avoid index segment stitching in RAM which lead to OOM on big SSTable files 
(CASSANDRA-11383)
  * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/020dd2d1/src/java/org/apache/cassandra/index/sasi/disk/AbstractTokenTreeBuilder.java
--
diff --git 
a/src/java/org/apache/cassandra/index/sasi/disk/AbstractTokenTreeBuilder.java 
b/src/java/org/apache/cassandra/index/sasi/disk/AbstractTokenTreeBuilder.java
index 4e93b2b..9a1f7f1 100644
--- 
a/src/java/org/apache/cassandra/index/sasi/disk/AbstractTokenTreeBuilder.java
+++ 
b/src/java/org/apache/cassandra/index/sasi/disk/AbstractTokenTreeBuilder.java
@@ -397,6 +397,7 @@ public abstract class AbstractTokenTreeBuilder implements 
TokenTreeBuilder
 
 public short offsetExtra()
 {
+// exta offset is supposed to be an unsigned 16-bit integer
 return (short) offset;
 }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/020dd2d1/src/java/org/apache/cassandra/index/sasi/disk/StaticTokenTreeBuilder.java
--
diff --git 
a/src/java/org/apache/cassandra/index/sasi/disk/StaticTokenTreeBuilder.java 
b/src/java/org/apache/cassandra/index/sasi/disk/StaticTokenTreeBuilder.java
index 147427e..7a41b38 100644
--- a/src/java/org/apache/cassandra/index/sasi/disk/StaticTokenTreeBuilder.java
+++ b/src/java/org/apache/cassandra/index/sasi/disk/StaticTokenTreeBuilder.java
@@ -79,7 +79,7 @@ public class StaticTokenTreeBuilder extends 
AbstractTokenTreeBuilder
 
 public boolean isEmpty()
 {
-return combinedTerm.getTokenIterator().getCount() == 0;
+return tokenCount == 0;
 }
 
 public Iterator> iterator()
@@ -100,7 +100,7 @@ public class StaticTokenTreeBuilder extends 
AbstractTokenTreeBuilder
 
 public long getTokenCount()
 {
-return combinedTerm.getTokenIterator().getCount();
+return tokenCount;
 }
 
 @Override
@@ -130,64 +130,50 @@ public class StaticTokenTreeBuilder extends 
AbstractTokenTreeBuilder
 {
 RangeIterator tokens = combinedTerm.getTokenIterator();
 
-tokenCount = tokens.getCount();
+tokenCount = 0;
 treeMinToken = tokens.getMinimum();
 treeMaxToken = tokens.getMaximum();
 numBlocks = 1;
 
-if (tokenCount <= TOKENS_PER_BLOCK)
+root = new InteriorNode();
+rightmostParent = (InteriorNode) root;
+Leaf lastLeaf = null;
+Long lastToken, firstToken = null;
+int leafSize = 0;
+while (tokens.hasNext())
 {
-leftmostLeaf = new StaticLeaf(tokens, tokens.getMinimum(), 
tokens.getMaximum(), tokens.getCount(), true);
-rightmostLeaf = leftmostLeaf;
-root = leftmostLeaf;
+Long token = tokens.next().get();
+if (firstToken == null)
+firstToken = token;
+
+tokenCount++;
+leafSize++;
+
+// skip until the last token in the leaf
+if (tokenCount % TOKENS_PER_BLOCK != 0 && token 
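The rewritten builder above no longer trusts the merged iterator's pre-computed count, which can over-report when the same token appears in several merged sources; instead it increments tokenCount while draining the iterator. A minimal illustrative sketch of that idea (hypothetical names, not the Cassandra code):

```java
import java.util.*;

// Illustrative sketch (hypothetical, not the Cassandra implementation):
// why a pre-computed merged count can disagree with the real token count.
public class TokenCountSketch {
    // Summing per-source counts over-reports when sources share tokens,
    // which is effectively what trusting getTokenIterator().getCount() did.
    static long claimedCount(List<List<Long>> sources) {
        long total = 0;
        for (List<Long> s : sources)
            total += s.size();
        return total;
    }

    // The patched approach, conceptually: count while draining the merged,
    // de-duplicated token stream (tokenCount++ per emitted token).
    static long actualCount(List<List<Long>> sources) {
        SortedSet<Long> merged = new TreeSet<>();
        for (List<Long> s : sources)
            merged.addAll(s);
        long tokenCount = 0;
        for (Long ignored : merged)
            tokenCount++;
        return tokenCount;
    }

    public static void main(String[] args) {
        List<List<Long>> sources = Arrays.asList(
                Arrays.asList(1L, 2L, 3L),
                Arrays.asList(3L, 4L)); // token 3 is duplicated across sources
        System.out.println(claimedCount(sources)); // 5
        System.out.println(actualCount(sources));  // 4
    }
}
```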

[2/2] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-04-08 Thread xedin
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94b23431
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94b23431
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94b23431

Branch: refs/heads/trunk
Commit: 94b234313073242c47d1aba3de6f79a4fda0a93d
Parents: 677230d 020dd2d
Author: Pavel Yaskevich 
Authored: Fri Apr 8 21:27:07 2016 -0700
Committer: Pavel Yaskevich 
Committed: Fri Apr 8 21:27:07 2016 -0700

--
 CHANGES.txt |  1 +
 .../sasi/disk/AbstractTokenTreeBuilder.java |  1 +
 .../index/sasi/disk/StaticTokenTreeBuilder.java | 92 +---
 .../cassandra/index/sasi/disk/TokenTree.java| 18 ++--
 .../index/sasi/disk/TokenTreeTest.java  | 72 ++-
 5 files changed, 122 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/94b23431/CHANGES.txt
--
diff --cc CHANGES.txt
index 5b71af1,392d9e7..59ab55b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,54 -1,5 +1,55 @@@
 +3.6
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   corruption (CASSANDRA-10112)
 + * Resolve some eclipse-warnings (CASSANDRA-11086)
 + * (cqlsh) Show static columns in a different color (CASSANDRA-11059)
 + * Allow to remove TTLs on table with default_time_to_live (CASSANDRA-11207)
 +Merged from 3.0:
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
 +
  3.5
+  * StaticTokenTreeBuilder should respect posibility of duplicate tokens 
(CASSANDRA-11525)
   * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
   * Avoid index segment stitching in RAM which lead to OOM on big SSTable 
files (CASSANDRA-11383)
   * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)



cassandra git commit: StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread xedin
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.5 11da411fb -> 020dd2d10


StaticTokenTreeBuilder should respect posibility of duplicate tokens

patch by jrwest and xedin; reviewed by xedin for CASSANDRA-11525


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/020dd2d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/020dd2d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/020dd2d1

Branch: refs/heads/cassandra-3.5
Commit: 020dd2d1034abc5c729edf1975953614b33c5a8b
Parents: 11da411
Author: Jordan West 
Authored: Thu Apr 7 19:07:50 2016 -0700
Committer: Pavel Yaskevich 
Committed: Fri Apr 8 21:22:00 2016 -0700

[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233284#comment-15233284
 ] 

Pavel Yaskevich commented on CASSANDRA-11525:
-

[~doanduyhai] I've force pushed updated code/tests to the CASSANDRA-11525 
branch (testall/dtest are currently running). If you want to verify everything, 
you will (unfortunately) have to rebuild indexes again, but this time you only 
need to do it on the ma-2164 sstable; everything else is unaffected. I'm going 
to wait until testall/dtest complete and then merge everything to unblock the 
3.5 release.


[jira] [Updated] (CASSANDRA-11340) Heavy read activity on system_auth tables can cause apparent livelock

2016-04-08 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11340:
---
Attachment: prepare_mass_connect.py
mass_connect.py

adding scripts.

> Heavy read activity on system_auth tables can cause apparent livelock
> -
>
> Key: CASSANDRA-11340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11340
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Aleksey Yeschenko
> Attachments: mass_connect.py, prepare_mass_connect.py
>
>
> Reproduced in at least 2.1.9. 
> It appears possible for queries against system_auth tables to trigger 
> speculative retry, which causes auth to block on traffic going off node. In 
> some cases, it appears possible for threads to become deadlocked, causing 
> load on the nodes to increase sharply. This happens even in clusters with RF 
> of system_auth == N, as all requests being served locally puts the bar for 
> 99% SR pretty low. 
> Incomplete stack trace below, but we haven't yet figured out what exactly is 
> blocking:
> {code}
> Thread 82291: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 
> (Compiled frame)
>  - 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUntil(long)
>  @bci=28, line=307 (Compiled frame)
>  - org.apache.cassandra.utils.concurrent.SimpleCondition.await(long, 
> java.util.concurrent.TimeUnit) @bci=76, line=63 (Compiled frame)
>  - org.apache.cassandra.service.ReadCallback.await(long, 
> java.util.concurrent.TimeUnit) @bci=25, line=92 (Compiled frame)
>  - 
> org.apache.cassandra.service.AbstractReadExecutor$SpeculatingReadExecutor.maybeTryAdditionalReplicas()
>  @bci=39, line=281 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.fetchRows(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel) @bci=175, line=1338 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.readRegular(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel) @bci=9, line=1274 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.read(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel, 
> org.apache.cassandra.service.ClientState) @bci=57, line=1199 (Compiled frame)
>  - 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.pager.Pageable,
>  org.apache.cassandra.cql3.QueryOptions, int, long, 
> org.apache.cassandra.service.QueryState) @bci=35, line=272 (Compiled frame)
>  - 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.QueryState,
>  org.apache.cassandra.cql3.QueryOptions) @bci=105, line=224 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.selectUser(java.lang.String) @bci=27, 
> line=265 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.isExistingUser(java.lang.String) @bci=1, 
> line=86 (Compiled frame)
>  - 
> org.apache.cassandra.service.ClientState.login(org.apache.cassandra.auth.AuthenticatedUser)
>  @bci=11, line=206 (Compiled frame)
>  - 
> org.apache.cassandra.transport.messages.AuthResponse.execute(org.apache.cassandra.service.QueryState)
>  @bci=58, line=82 (Compiled frame)
>  - 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext,
>  org.apache.cassandra.transport.Message$Request) @bci=75, line=439 (Compiled 
> frame)
>  - 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext,
>  java.lang.Object) @bci=6, line=335 (Compiled frame)
>  - 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(io.netty.channel.ChannelHandlerContext,
>  java.lang.Object) @bci=17, line=105 (Compiled frame)
>  - 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(java.lang.Object)
>  @bci=9, line=333 (Compiled frame)
>  - 
> io.netty.channel.AbstractChannelHandlerContext.access$700(io.netty.channel.AbstractChannelHandlerContext,
>  java.lang.Object) @bci=2, line=32 (Compiled frame)
>  - io.netty.channel.AbstractChannelHandlerContext$8.run() @bci=8, line=324 
> (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 
> (Compiled frame)
>  - 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run()
>  @bci=5, line=164 (Compiled frame)
>  - org.apache.cassandra.concurrent.SEPWorker.run() @bci=87, line=105 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> In a cluster with many connected clients (potentially thousands), a 
> reconnection flood (for example, restarting all at once) is likely to trigger 
> this bug. However, it 

[jira] [Commented] (CASSANDRA-11340) Heavy read activity on system_auth tables can cause apparent livelock

2016-04-08 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233214#comment-15233214
 ] 

Russ Hatch commented on CASSANDRA-11340:


I've been attempting to reproduce this issue, but haven't had any success so 
far. I have tried 20- and 60-node clusters (EC2).

I used PasswordAuthenticator, CassandraAuthorizer, and various values for 
permissions_validity_in_ms. I've managed to get nodes up to several hundred 
connections, but CPU utilization always recovers afterwards. This was with 
short-duration connections, so perhaps connection duration plays a role as 
well.

I'll attach the Python script I was using to generate connections and create a 
small amount of writes.


[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Jordan West (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233209#comment-15233209
 ] 

Jordan West commented on CASSANDRA-11525:
-

[~doanduyhai] we have tracked down the root cause of the bug; it has affected 
all versions of SASI since its original inclusion in Cassandra. The issue is 
that when positions in the -Index.db file are > Integer.MAX_VALUE, the 
positions are split into a 32-bit and a 16-bit value. The 16-bit value was 
being read as a signed short, and for certain positions this resulted in 
reconstructing an incorrect 64-bit offset from the 32-bit and 16-bit parts. 
Thankfully, this is a quick, one-line fix (reading the short as unsigned), and 
it is entirely independent of the changes in CASSANDRA-11383 or this ticket. We 
will include the fix with the merge of the changes in this ticket, and we are 
working on final verification using your SSTables before we merge. 
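To illustrate the class of bug described above: the following is a sketch of the failure mode only, not the actual SASI code, and the 32/16 split shown here is assumed for the example.

{noformat}
// Sketch of the signed-short bug: a 64-bit file position stored as a
// 32-bit part plus a 16-bit part. Demonstrates the sign-extension
// failure mode and its one-line fix; the exact SASI layout differs.
public class OffsetBugSketch {
    static long reconstructSigned(int hi32, short lo16) {
        // BUG: the short is sign-extended, so a set high bit in the
        // 16-bit part smears 1s across the upper 48 bits of the result
        return (((long) hi32) << 16) | lo16;
    }

    static long reconstructUnsigned(int hi32, short lo16) {
        // FIX: mask so the 16-bit part is treated as unsigned
        return (((long) hi32) << 16) | (lo16 & 0xFFFFL);
    }

    public static void main(String[] args) {
        // > Integer.MAX_VALUE, and the low 16 bits have the high bit set
        long position = 0x2_ABCD_8001L;
        int hi = (int) (position >>> 16);
        short lo = (short) position;
        System.out.println(reconstructSigned(hi, lo) == position);   // false
        System.out.println(reconstructUnsigned(hi, lo) == position); // true
    }
}
{noformat}

Positions whose 16-bit part happens to have the high bit clear round-trip correctly either way, which is why only "certain positions" were affected.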

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging.
> I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   

[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233148#comment-15233148
 ] 

Pavel Yaskevich commented on CASSANDRA-11525:
-

[~doanduyhai] Thanks for the information! We can reproduce it now and are 
trying with 3.4 to make sure it's the bug we introduced in CASSANDRA-11383; 
will keep you posted.


[jira] [Commented] (CASSANDRA-11473) Clustering column value is zeroed out in some query results

2016-04-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233067#comment-15233067
 ] 

Tyler Hobbs commented on CASSANDRA-11473:
-

I've been looking at some of the partitions in an sstable that [~longtimer] was 
able to provide.  The problem seems to be that there are extra, unexplained 
bytes at the end of rows.  I hacked {{nodetool scrub}} to enable dumping the 
raw bytes for a partition.  Here's an example of one of the partitions:

{noformat}
00 17 00 06 55 48 4E 54 52 43 00 00 04 00 00 00
00 00 00 04 00 00 00 00 00 7F FF FF FF 80 00 00
00 00 00 00 00 04 00 00 00 01 51 9C 84 68 B0 20
25 E8 57 F0 C8 01 00 F8 54 1E 52 C9 F8 00 00 00
00 00 F8 54 1E 52 C9 F8 00 00 00 00 04 00 00 00
01
{noformat}

By hand, I deserialized this into the following:

{noformat}
--- start partition header ---

00 17   - partition key length (short)
00 06 55 48 4E 54 52 43 00
  00 04 00 00 00 00 00
  00 04 00 00 00 00 00  - partition key (composite ('UHNTRC', 0, 0))
7F FF FF FF - local deletion time (max int)
80 00 00 00 00 00 00 00 - marked for delete at (min long)

-- end partition header ---

-- start row
04  - flags (only HAS_TIMESTAMP flag present, note that 
HAS_ALL_COLUMNS is not present)
00  - clustering block header, unsigned vint (zero b/c no 
null/empty values)
00 00 01 51 9C 84 68 B0 - clustering (timestamp, 2015-12-13 12:05)
20  - row body size (unsigned vint)
25  - previous row size (unsigned vint)

E8 57 F0 C8 - timestamp (unsigned vint, E8 indicates three more 
bytes)
01  - columns subset (unsigned vint, small encoding, 
indicates first column is missing)

00  - cell flags
F8 54 1E 52 C9 F8   - timestamp (unsigned vint, F8 indicates five more 
bytes)
00 00 00 00 - cell value (int32, value zero)

00  - cell flags
F8 54 1E 52 C9 F8   - timestamp (unsigned vint, F8 indicates five more 
bytes)
00 00 00 00 - cell value (int32, value zero)

-- end row

04 00 00 00 - MYSTERY BYTES
01  - end of partition marker (?)
{noformat}

A couple of notes:
* The four "MYSTERY BYTES" are what I cannot explain.  After looking over the 
serialization code many times, I can't find a good explanation for these.
* There is actually a third column in the schema ("assignment", a timestamp 
column).  This is why the "column subset" byte is 01 instead of 00.
* The two present columns are both ints, not an int and a float as in the 
schema in the description
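For reference, the unsigned-vint layout decoded by hand above (the count of leading 1-bits in the first byte gives the number of extra bytes, so 0xE8 = 11101000 means three more and 0xF8 = 11111000 means five more) can be sketched as follows; this is a simplified illustration, not the actual {{VIntCoding}} implementation:

{noformat}
// Simplified sketch of Cassandra-style unsigned-vint decoding: the
// number of leading 1-bits in the first byte is the count of extra
// bytes that follow; the remaining bits of the first byte are the
// high-order bits of the value.
public class VIntSketch {
    static long readUnsignedVInt(byte[] buf, int offset) {
        int first = buf[offset] & 0xFF;
        // e.g. 0xE8 -> 3 extra bytes, 0xF8 -> 5, anything < 0x80 -> 0
        int extraBytes = Integer.numberOfLeadingZeros(~first << 24);
        long value = first & (0xFF >>> extraBytes); // drop the length bits
        for (int i = 1; i <= extraBytes; i++)
            value = (value << 8) | (buf[offset + i] & 0xFF);
        return value;
    }

    public static void main(String[] args) {
        // "E8 57 F0 C8 - timestamp (E8 indicates three more bytes)"
        byte[] ts = { (byte) 0xE8, 0x57, (byte) 0xF0, (byte) 0xC8 };
        System.out.println(Long.toHexString(readUnsignedVInt(ts, 0))); // 857f0c8
    }
}
{noformat}

This is why a single byte like 0x20 or 0x25 decodes to itself (row body size, previous row size), while the timestamp fields carry a multi-byte payload.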

I tried to reproduce this by creating a second table with the same schema.  
This is as close as I could get:

{noformat}
00 17 00 06 55 48 4E 54 52 43 00 00 04 00 00 00
00 00 00 04 00 00 00 00 00 7F FF FF FF 80 00 00
00 00 00 00 00 04 00 00 00 01 53 F7 D1 63 E1 18
25 E3 3F AD CB 01 00 E4 70 7B FE 00 00 00 00 00
E4 70 7B FE 00 00 00 00 01

-- start partition header

00 17   - partition key length
00 06 55 48 4E 54 52 43 00 00 04 00 00 00 00
00 00 04 00 00 00 00 00 - partition key
7F FF FF FF - local deletion time
80 00 00 00 00 00 00 00 - marked for delete at

-- end partition header

-- start row

04  - flags (only HAS_TIMESTAMP)
00  - clustering block header
00 00 01 53 F7 D1 63 E1 - clustering (timestamp)
18  - row body size
25  - previous unfiltered size

E3 3F AD CB - timestamp (unsigned vint, E3 indicates three more 
bytes)
01  - columns subset

00  - cell flags
E4 70 7B FE - timestamp
00 00 00 00 - cell value (int32, value zero)

00  - cell flags
E4 70 7B FE - timestamp
00 00 00 00 - cell value (int32, value zero)

-- end row

01  - end of partition marker
{noformat}

This is almost identical, except that it doesn't have the "mystery bytes".

It's also interesting to note that the next few partitions in Jason's sstable 
all have the mystery bytes (I'm guessing all of them do):

{noformat}
Reading row at 0
row 00065241544341520404000400 is 53 bytes
00 17 00 06 52 41 54 43 41 52 00 00 04 00 00 00 00
00 00 04 00 00 00 04 00 7F FF FF FF 80 00 00 00
00 00 00 00 04 00 00 00 01 4D 9A D8 2C 80 1D 25
00 01 00 F8 54 1E 3C B1 B8 00 00 00 00 00 F8 54
1E 3C B1 B8 00 00 00 00 04 00 00 00 01

Reading row at 78
row 000655484e545243040400 is 56 bytes
00 17 00 06 55 48 4E 54 52 43 00 00 04 00 00 00 00
00 00 04 00 00 00 00 00 7F FF FF FF 80 00 00 00
00 00 00 00 04 00 00 00 01 51 9C 84 68 B0 20 25
E8 57 F0 C8 01 00 F8 54 1E 52 C9 F8 00 00 00 00
00 F8 54 1E 52 C9 F8 00 00 00 00 04 00 00 00 01

Reading row at 159
row 

[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233064#comment-15233064
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

[~xedin]

Output of my java tester:

{noformat}
1518 : youtube
1520 : vevo
1522 : vevo
1524 : youtube
1526 : spotify
Exception in thread "main" 
com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure 
during read query at consistency LOCAL_ONE (1 responses were required but only 
0 replica responded, 1 failed)
at 
com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:85)
at 
com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:27)
at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at 
com.datastax.driver.core.ArrayBackedResultSet$MultiPage.prepareNextRow(ArrayBackedResultSet.java:308)
at 
com.datastax.driver.core.ArrayBackedResultSet$MultiPage.isExhausted(ArrayBackedResultSet.java:265)
at 
com.datastax.driver.core.ArrayBackedResultSet$1.hasNext(ArrayBackedResultSet.java:136)
at fr.sacem.sharon.SASIBench.execute(SASIBench.java:61)
at fr.sacem.sharon.SASIBench.main(SASIBench.java:86)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra 
failure during read query at consistency LOCAL_ONE (1 responses were required 
but only 0 replica responded, 1 failed)
{noformat}


Root cause found in {{system.log}}:

{noformat}
WARN  [SharedPool-Worker-1] 2016-04-09 00:15:05,214 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.ArrayIndexOutOfBoundsException: 117
at 
org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:149)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.utils.MergeIterator$TrivialOneToOne.computeNext(MergeIterator.java:484)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
 ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]

{noformat}


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232903#comment-15232903
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


[~carlyeks] / [~blambov]: I realized on a call today that I'd overlooked the 
whole "respect CDC being enabled per-DC" requirement and had instead originally 
implemented the CommitLog routing to SegmentManager as a simple on/off per 
keyspace. I've pushed a fairly trivial commit that computes and caches a 
hasLocalCDC flag per local keyspace at create/alter time, which is now checked 
on the write path.

Figured I'd point that out in case either or both of you had noticed it during 
your review.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred for a subsequent release to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, 

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232898#comment-15232898
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


Nothing user-consumable as yet since we're still finalizing what all those 
things look like.

In its current state, the data resides in CASSANDRA_HOME/data/cdc and 
CASSANDRA_HOME/data/cdc_overflow, both configurable in the .yaml. The format is 
the 3.6 C* CommitLog format; the performance impact of having it enabled should 
be negligible (an extra boolean check and an ArrayList lookup on the write 
path), and there is no extra memory requirement.

That being said, the actual user consumption of CDC data will in fact take up 
CPU cycles and memory on the machine but independently of the C* JVM, so impact 
should be limited depending on how the consumer daemon / client is written.

Assuming the patch goes through in its current form, a consumer would want to 
implement the 
[ICommitLogReadHandler|https://github.com/josh-mckenzie/cassandra/blob/8844_review/src/java/org/apache/cassandra/db/commitlog/ICommitLogReadHandler.java]
 interface and use the newly refactored out 
[CommitLogReader|https://github.com/josh-mckenzie/cassandra/blob/8844_review/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java]
 to parse the files from disk. These files will be kept up to date as CommitLog 
formats change, so porting to future revisions of the subsystem and potential 
file format changes should be relatively easy to do.


[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232878#comment-15232878
 ] 

Pavel Yaskevich commented on CASSANDRA-11525:
-

[~doanduyhai] Thanks, we have it running now and are trying to reproduce with 
the SI files you included. Also, the {{CREATE INDEX}} command you added to the 
description creates an index with a different name: it doesn't have "_int_", 
which the SI files do have.

>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> 

[jira] [Updated] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-04-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7423:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Thanks, committed to trunk as {{677230df694752c7ecf6d5459eee60ad7cf45ecf}}.

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance; it is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to performance, 
> consistency, or both.
> Edit: to clarify, I believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics
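
With non-frozen UDTs, individual fields become addressable in updates and deletions. A hypothetical sketch of the new syntax (the field-access form matches what the committed tests exercise; the keyspace, type, and table names here are invented for illustration):

{noformat}
CREATE TYPE ks.address (street text, city text);

-- a non-frozen UDT column: no frozen<> wrapper required
CREATE TABLE ks.users (id int PRIMARY KEY, addr ks.address);

-- update a single field without rewriting the whole value
UPDATE ks.users SET addr.city = 'Austin' WHERE id = 0;

-- delete a single field
DELETE addr.street FROM ks.users WHERE id = 0;
{noformat}

Each field of a non-frozen UDT carries its own timestamp, so concurrent writers touching different fields no longer clobber each other under last-write-wins.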



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Support for non-frozen UDTS

2016-04-08 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/677230df/src/java/org/apache/cassandra/cql3/UserTypes.java
--
diff --git a/src/java/org/apache/cassandra/cql3/UserTypes.java 
b/src/java/org/apache/cassandra/cql3/UserTypes.java
index 0beff06..db46fdd 100644
--- a/src/java/org/apache/cassandra/cql3/UserTypes.java
+++ b/src/java/org/apache/cassandra/cql3/UserTypes.java
@@ -20,12 +20,18 @@ package org.apache.cassandra.cql3;
 import java.nio.ByteBuffer;
 import java.util.*;
 
+import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
+import org.apache.cassandra.db.DecoratedKey;
+import org.apache.cassandra.db.marshal.TupleType;
 import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.db.marshal.UserType;
+import org.apache.cassandra.db.rows.CellPath;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
+import static org.apache.cassandra.cql3.Constants.UNSET_VALUE;
+
 /**
  * Static helper methods and classes for user types.
  */
@@ -78,8 +84,10 @@ public abstract class UserTypes
 {
 // We had some field that are not part of the type
 for (ColumnIdentifier id : entries.keySet())
+{
 if (!ut.fieldNames().contains(id.bytes))
 throw new 
InvalidRequestException(String.format("Unknown field '%s' in value of user 
defined type %s", id, ut.getNameAsString()));
+}
 }
 
 DelayedValue value = new DelayedValue(((UserType)receiver.type), 
values);
@@ -88,7 +96,7 @@ public abstract class UserTypes
 
 private void validateAssignableTo(String keyspace, ColumnSpecification 
receiver) throws InvalidRequestException
 {
-if (!(receiver.type instanceof UserType))
+if (!receiver.type.isUDT())
 throw new InvalidRequestException(String.format("Invalid user 
type literal for %s of type %s", receiver, receiver.type.asCQL3Type()));
 
 UserType ut = (UserType)receiver.type;
@@ -101,7 +109,10 @@ public abstract class UserTypes
 
 ColumnSpecification fieldSpec = fieldSpecOf(receiver, i);
 if (!value.testAssignment(keyspace, fieldSpec).isAssignable())
-throw new InvalidRequestException(String.format("Invalid 
user type literal for %s: field %s is not of type %s", receiver, field, 
fieldSpec.type.asCQL3Type()));
+{
+throw new InvalidRequestException(String.format("Invalid 
user type literal for %s: field %s is not of type %s",
+receiver, field, fieldSpec.type.asCQL3Type()));
+}
 }
 }
 
@@ -135,7 +146,52 @@ public abstract class UserTypes
 }
 }
 
-// Same purpose than Lists.DelayedValue, except we do handle bind marker 
in that case
+public static class Value extends Term.MultiItemTerminal
+{
+private final UserType type;
+public final ByteBuffer[] elements;
+
+public Value(UserType type, ByteBuffer[] elements)
+{
+this.type = type;
+this.elements = elements;
+}
+
+public static Value fromSerialized(ByteBuffer bytes, UserType type)
+{
+ByteBuffer[] values = type.split(bytes);
+if (values.length > type.size())
+{
+throw new InvalidRequestException(String.format(
+"UDT value contained too many fields (expected %s, got 
%s)", type.size(), values.length));
+}
+
+return new Value(type, type.split(bytes));
+}
+
+public ByteBuffer get(int protocolVersion)
+{
+return TupleType.buildValue(elements);
+}
+
+public boolean equals(UserType userType, Value v)
+{
+if (elements.length != v.elements.length)
+return false;
+
+for (int i = 0; i < elements.length; i++)
+if (userType.fieldType(i).compare(elements[i], v.elements[i]) 
!= 0)
+return false;
+
+return true;
+}
+
+public List getElements()
+{
+return Arrays.asList(elements);
+}
+}
+
 public static class DelayedValue extends Term.NonTerminal
 {
 private final UserType type;
@@ -168,20 +224,27 @@ public abstract class UserTypes
 
 private ByteBuffer[] bindInternal(QueryOptions options) throws 
InvalidRequestException
 {
+if (values.size() > type.size())
+{
+throw new InvalidRequestException(String.format(
+"UDT value contained too many fields (expected %s, got 
%s)", type.size(), values.size()));
+}
+
 ByteBuffer[] buffers = new 

[1/3] cassandra git commit: Support for non-frozen UDTS

2016-04-08 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 66fb8f51e -> 677230df6


http://git-wip-us.apache.org/repos/asf/cassandra/blob/677230df/test/unit/org/apache/cassandra/cql3/validation/entities/UserTypesTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/UserTypesTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/UserTypesTest.java
index d9df206..5501561 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/entities/UserTypesTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/UserTypesTest.java
@@ -73,32 +73,86 @@ public class UserTypesTest extends CQLTester
 execute("INSERT INTO %s(k, v) VALUES (?, {x:?})", 1, -104.99251);
 execute("UPDATE %s SET b = ? WHERE k = ?", true, 1);
 
-assertRows(execute("SELECT v.x FROM %s WHERE k = ? AND v = {x:?}", 1, 
-104.99251),
-row(-104.99251)
-);
-
-flush();
-
-assertRows(execute("SELECT v.x FROM %s WHERE k = ? AND v = {x:?}", 1, 
-104.99251),
-   row(-104.99251)
+beforeAndAfterFlush(() ->
+assertRows(execute("SELECT v.x FROM %s WHERE k = ? AND v = {x:?}", 
1, -104.99251),
+row(-104.99251)
+)
 );
 }
 
 @Test
-public void testCreateInvalidTablesWithUDT() throws Throwable
+public void testInvalidUDTStatements() throws Throwable
 {
-String myType = createType("CREATE TYPE %s (f int)");
-
-// Using a UDT without frozen shouldn't work
-assertInvalidMessage("Non-frozen User-Defined types are not supported, 
please use frozen<>",
- "CREATE TABLE " + KEYSPACE + ".wrong (k int 
PRIMARY KEY, v " + KEYSPACE + '.' + myType + ")");
-
+String typename = createType("CREATE TYPE %s (a int)");
+String myType = KEYSPACE + '.' + typename;
+
+// non-frozen UDTs in a table PK
+assertInvalidMessage("Invalid non-frozen user-defined type for PRIMARY 
KEY component k",
+"CREATE TABLE " + KEYSPACE + ".wrong (k " + myType + " PRIMARY 
KEY , v int)");
+assertInvalidMessage("Invalid non-frozen user-defined type for PRIMARY 
KEY component k2",
+"CREATE TABLE " + KEYSPACE + ".wrong (k1 int, k2 " + myType + 
", v int, PRIMARY KEY (k1, k2))");
+
+// non-frozen UDTs in a collection
+assertInvalidMessage("Non-frozen UDTs are not allowed inside 
collections: list<" + myType + ">",
+"CREATE TABLE " + KEYSPACE + ".wrong (k int PRIMARY KEY, v 
list<" + myType + ">)");
+assertInvalidMessage("Non-frozen UDTs are not allowed inside 
collections: set<" + myType + ">",
+"CREATE TABLE " + KEYSPACE + ".wrong (k int PRIMARY KEY, v 
set<" + myType + ">)");
+assertInvalidMessage("Non-frozen UDTs are not allowed inside 
collections: map<" + myType + ", int>",
+"CREATE TABLE " + KEYSPACE + ".wrong (k int PRIMARY KEY, v 
map<" + myType + ", int>)");
+assertInvalidMessage("Non-frozen UDTs are not allowed inside 
collections: map",
+"CREATE TABLE " + KEYSPACE + ".wrong (k int PRIMARY KEY, v 
map)");
+
+// non-frozen UDT in a collection (as part of a UDT definition)
+assertInvalidMessage("Non-frozen UDTs are not allowed inside 
collections: list<" + myType + ">",
+"CREATE TYPE " + KEYSPACE + ".wrong (a int, b list<" + myType 
+ ">)");
+
+// non-frozen UDT in a UDT
+assertInvalidMessage("A user type cannot contain non-frozen UDTs",
+"CREATE TYPE " + KEYSPACE + ".wrong (a int, b " + myType + 
")");
+
+// referencing a UDT in another keyspace
 assertInvalidMessage("Statement on keyspace " + KEYSPACE + " cannot 
refer to a user type in keyspace otherkeyspace;" +
  " user types can only be used in the keyspace 
they are defined in",
  "CREATE TABLE " + KEYSPACE + ".wrong (k int 
PRIMARY KEY, v frozen)");
 
+// referencing an unknown UDT
 assertInvalidMessage("Unknown type " + KEYSPACE + ".unknowntype",
  "CREATE TABLE " + KEYSPACE + ".wrong (k int 
PRIMARY KEY, v frozen<" + KEYSPACE + '.' + "unknownType>)");
+
+// bad deletions on frozen UDTs
+createTable("CREATE TABLE %s (a int PRIMARY KEY, b frozen<" + myType + 
">, c int)");
+assertInvalidMessage("Frozen UDT column b does not support field 
deletion", "DELETE b.a FROM %s WHERE a = 0");
+assertInvalidMessage("Invalid field deletion operation for non-UDT 
column c", "DELETE c.a FROM %s WHERE a = 0");
+
+// bad updates on frozen UDTs
+assertInvalidMessage("Invalid operation (b.a = 0) for frozen UDT 
column b", "UPDATE %s SET b.a = 0 WHERE a = 0");
+assertInvalidMessage("Invalid 

[3/3] cassandra git commit: Support for non-frozen UDTS

2016-04-08 Thread tylerhobbs
Support for non-frozen UDTS

Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-7423


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/677230df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/677230df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/677230df

Branch: refs/heads/trunk
Commit: 677230df694752c7ecf6d5459eee60ad7cf45ecf
Parents: 66fb8f5
Author: Tyler Hobbs 
Authored: Fri Apr 8 11:56:39 2016 -0500
Committer: Tyler Hobbs 
Committed: Fri Apr 8 15:26:12 2016 -0500

--
 CHANGES.txt |   2 +
 bin/cqlsh.py|   3 +-
 doc/cql3/CQL.textile|  27 +-
 pylib/cqlshlib/cql3handling.py  |  86 +++-
 pylib/cqlshlib/test/test_cqlsh_completion.py|   6 +-
 src/antlr/Parser.g  |  20 +-
 .../cassandra/config/ColumnDefinition.java  |  16 +-
 .../apache/cassandra/cql3/AbstractMarker.java   |  26 +-
 .../org/apache/cassandra/cql3/CQL3Type.java |  86 ++--
 .../apache/cassandra/cql3/ColumnCondition.java  | 505 ++-
 .../apache/cassandra/cql3/ColumnIdentifier.java |   4 +-
 .../org/apache/cassandra/cql3/Constants.java|   2 +-
 .../org/apache/cassandra/cql3/Operation.java| 145 --
 src/java/org/apache/cassandra/cql3/Tuples.java  |  15 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   2 +-
 .../org/apache/cassandra/cql3/UserTypes.java| 186 ++-
 .../cql3/functions/AbstractFunction.java|   2 +-
 .../cassandra/cql3/functions/FunctionCall.java  |  24 +-
 .../cassandra/cql3/functions/UDAggregate.java   |   2 +-
 .../restrictions/StatementRestrictions.java |  10 +-
 .../cassandra/cql3/selection/Selectable.java|  22 +-
 .../cassandra/cql3/selection/Selection.java |   8 +-
 .../cassandra/cql3/selection/Selector.java  |   2 +-
 .../cql3/statements/AlterTypeStatement.java |  10 +-
 .../cql3/statements/CreateTableStatement.java   |  33 +-
 .../cql3/statements/CreateTypeStatement.java|   6 +-
 .../cql3/statements/DeleteStatement.java|   2 +-
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../cql3/statements/SelectStatement.java|  12 +-
 .../cql3/statements/UpdateStatement.java|   6 +-
 .../cassandra/db/marshal/AbstractType.java  |  21 +
 .../cassandra/db/marshal/CollectionType.java|  30 +-
 .../apache/cassandra/db/marshal/TupleType.java  |  45 +-
 .../apache/cassandra/db/marshal/TypeParser.java |   6 +-
 .../apache/cassandra/db/marshal/UserType.java   | 166 +-
 .../org/apache/cassandra/db/rows/CellPath.java  |  13 +-
 .../cassandra/db/rows/ComplexColumnData.java|   1 -
 .../org/apache/cassandra/schema/Functions.java  |   2 +-
 .../cassandra/schema/LegacySchemaMigrator.java  |   2 +-
 src/java/org/apache/cassandra/schema/Types.java |  36 +-
 .../cassandra/service/MigrationManager.java |   1 +
 .../apache/cassandra/transport/DataType.java|   2 +-
 .../cassandra/cql3/CQL3TypeLiteralTest.java |   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|  80 ++-
 .../cassandra/cql3/ColumnConditionTest.java |  36 +-
 .../selection/SelectionColumnMappingTest.java   |   2 +-
 .../cql3/validation/entities/UserTypesTest.java | 463 +++--
 .../operations/InsertUpdateIfConditionTest.java | 371 ++
 .../schema/LegacySchemaMigratorTest.java|  23 +-
 .../cassandra/transport/SerDeserTest.java   |   3 +-
 50 files changed, 2054 insertions(+), 523 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/677230df/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74ba07e..5b71af1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.6
+ * Support for non-frozen user-defined types, updating
+   individual fields of user-defined types (CASSANDRA-7423)
  * Make LZ4 compression level configurable (CASSANDRA-11051)
  * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
  * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/677230df/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index c3fcc48..2593486 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -898,8 +898,7 @@ class Shell(cmd.Cmd):
 except KeyError:
 raise UserTypeNotFound("User type %r not found" % typename)
 
-return [(field_name, field_type.cql_parameterized_type())
-for field_name, field_type in zip(user_type.field_names, 
user_type.field_types)]
+return zip(user_type.field_names, 

[jira] [Created] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test

2016-04-08 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-11539:
--

 Summary: dtest failure in topology_test.TestTopology.movement_test
 Key: CASSANDRA-11539
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11539
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Michael Shuler
Assignee: DS Test Eng
 Fix For: 3.x


example failure:
{noformat}
Error Message

values not within 16.00% of the max: (335.88, 404.31) ()
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-XGOyDd
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'num_tokens': None,
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/topology_test.py", line 93, in 
movement_test
assert_almost_equal(sizes[1], sizes[2])
  File "/home/automaton/cassandra-dtest/assertions.py", line 75, in 
assert_almost_equal
assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within 
%.2f%% of the max: %s (%s)" % (error * 100, args, error_message)
"values not within 16.00% of the max: (335.88, 404.31) ()\n 
>> begin captured logging << \ndtest: DEBUG: cluster ccm 
directory: /mnt/tmp/dtest-XGOyDd\ndtest: DEBUG: Custom init_config not found. 
Setting defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'num_tokens': None,\n'phi_convict_threshold': 5,\n
'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\n- >> end captured logging << -"
{noformat}

http://cassci.datastax.com/job/cassandra-3.5_novnode_dtest/22/testReport/topology_test/TestTopology/movement_test


I dug through this test's history on the trunk, 3.5, 3.0, and 2.2 branches. It 
appears this test is stable and passing on 3.0 & 2.2 (which could be just 
luck). On trunk & 3.5, however, this test has flapped a small number of times.

The test's threshold is 16%, and I found test failures in the 3.5 branch at 
16.2%, 16.9%, and 18.3%. On trunk I found 17.4% and 23.5% diff failures.
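
The dtest assertion above implements a relative-tolerance check: the values pass when the minimum is within the given fraction of the maximum. A minimal, self-contained sketch (the function name and 16% default come from the output above; this approximates the dtest helper rather than reproducing its exact source):

```python
def assert_almost_equal(*args, error=0.16, error_message=""):
    """Pass when the smallest value is within `error` (relative) of the
    largest, or when all values are exactly equal."""
    vmax = max(args)
    vmin = min(args)
    assert vmin > vmax * (1.0 - error) or vmin == vmax, \
        "values not within %.2f%% of the max: %s (%s)" % (error * 100, args, error_message)

# The reported failure: 335.88 is about 16.9% below 404.31, just past the
# 16% threshold, since 404.31 * (1.0 - 0.16) = 339.62 and 335.88 < 339.62.
```

This explains why the observed flaps cluster just above 16%: the check is a hard cutoff, so a run landing at 16.2% fails while 15.9% passes.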



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect possibility of duplicate tokens

2016-04-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232848#comment-15232848
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

[~xedin] 

Statistics files uploaded

In the meantime, I'm re-testing.

> StaticTokenTreeBuilder should respect possibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>

[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232837#comment-15232837
 ] 

Paulo Motta commented on CASSANDRA-11363:
-

[~CRolo] [~arodrime] if you could record and attach JFR files of servers with 
high numbers of blocked NTR threads, that would be of great help in 
investigating this issue. (I will also have a look at the already attached 
files, but they could be affected by CASSANDRA-11529.) Thanks in advance!

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232834#comment-15232834
 ] 

Henry Manasseh commented on CASSANDRA-8272:
---

Can this be avoided by increasing the consistency level to ALL? I am just 
wondering if there is a workaround in order to eliminate the risk.

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 2.1.x
>
>
> When replica return 2ndary index results, it's possible for a single replica 
> to return a stale result and that result will be sent back to the user, 
> potentially failing the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the update but C responds to the read before 
> having applied the update, the now-stale result will be returned (since C 
> will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert into the result a corresponding range 
> tombstone.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11484) Consistency issues with subsequent writes, deletes and reads

2016-04-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11484:

Assignee: Joel Knighton

> Consistency issues with subsequent writes, deletes and reads
> 
>
> Key: CASSANDRA-11484
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11484
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra version: DataStax Enterprise 4.7.1
> Driver version: cassandra-driver-core-2.1.7.1
>Reporter: Prashanth
>Assignee: Joel Knighton
> Attachments: CassandraDbCheckAppTest.java
>
>
> There have been intermittent failures when the following subsequent queries 
> are performed on a 4 node cluster:
> 1. Insert a few records with consistency level QUORUM
> 2. Delete one of the records with consistency level ALL
> 3. Retrieve all the records with consistency level QUORUM or ALL and test 
> that the deleted record does not exist
> The tests are failing because the record does not appear to be deleted and a 
> pattern for the failures couldn't be established.
> A snippet of the code is attached to this issue so that the setup/tear down 
> mechanism can be seen as well. 
> (Both truncating and dropping the table approaches where used as a tear down 
> mechanism)
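
The failing sequence can be sketched as a cqlsh session (keyspace, table, and values are hypothetical; the actual test used the Java driver, but the consistency levels match the report):

{noformat}
CONSISTENCY QUORUM;
INSERT INTO ks.records (id, val) VALUES (1, 'a');
INSERT INTO ks.records (id, val) VALUES (2, 'b');

CONSISTENCY ALL;
DELETE FROM ks.records WHERE id = 1;

CONSISTENCY QUORUM;
-- id = 1 should be absent here; the report says it intermittently is not
SELECT * FROM ks.records;
{noformat}

Since the delete runs at ALL and the read at QUORUM, the read quorum must overlap replicas that acknowledged the tombstone, which is what makes the reported reappearance surprising.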



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11479) BatchlogManager unit tests failing on truncate race condition

2016-04-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11479:

Assignee: Yuki Morishita

> BatchlogManager unit tests failing on truncate race condition
> -
>
> Key: CASSANDRA-11479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11479
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Yuki Morishita
> Attachments: 
> TEST-org.apache.cassandra.batchlog.BatchlogManagerTest.log
>
>
> Example on CI 
> [here|http://cassci.datastax.com/job/trunk_testall/818/testReport/junit/org.apache.cassandra.batchlog/BatchlogManagerTest/testLegacyReplay_compression/].
>  This seems to have only started happening relatively recently (within the 
> last month or two).
> As far as I can tell, this is only showing up on BatchlogManagerTests purely 
> because it is an aggressive user of truncate. The assertion is hit in the 
> setUp method, so it can happen before any of the test methods. The assertion 
> occurs because a compaction is happening when truncate wants to discard 
> SSTables; trace level logs suggest that this compaction is submitted after 
> the pause on the CompactionStrategyManager.
> This should be reproducible by running BatchlogManagerTest in a loop - it 
> takes up to half an hour in my experience. A trace-level log from such a run 
> is attached - grep for my added log message "SSTABLES COMPACTING WHEN 
> DISCARDING" to find when the assert is hit.





[jira] [Updated] (CASSANDRA-11479) BatchlogManager unit tests failing on truncate race condition

2016-04-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11479:

Reviewer: Joel Knighton

> BatchlogManager unit tests failing on truncate race condition
> -
>
> Key: CASSANDRA-11479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11479
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Yuki Morishita
> Attachments: 
> TEST-org.apache.cassandra.batchlog.BatchlogManagerTest.log
>
>
> Example on CI 
> [here|http://cassci.datastax.com/job/trunk_testall/818/testReport/junit/org.apache.cassandra.batchlog/BatchlogManagerTest/testLegacyReplay_compression/].
>  This seems to have only started happening relatively recently (within the 
> last month or two).
> As far as I can tell, this is only showing up on BatchlogManagerTests purely 
> because it is an aggressive user of truncate. The assertion is hit in the 
> setUp method, so it can happen before any of the test methods. The assertion 
> occurs because a compaction is happening when truncate wants to discard 
> SSTables; trace level logs suggest that this compaction is submitted after 
> the pause on the CompactionStrategyManager.
> This should be reproducible by running BatchlogManagerTest in a loop - it 
> takes up to half an hour in my experience. A trace-level log from such a run 
> is attached - grep for my added log message "SSTABLES COMPACTING WHEN 
> DISCARDING" to find when the assert is hit.





[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232809#comment-15232809
 ] 

Pavel Yaskevich commented on CASSANDRA-11525:
-

[~doanduyhai] ok, we are working on reproducing this locally. In the meantime, 
would you be able to test against 3.4 to help us confirm or deny whether the 
issue was caused by the changes in CASSANDRA-11383? Also, can you provide the 
output of running your script so we can determine specifically where it errored 
out? Finally, can you please add the Stats component, which is currently 
missing? Without it we can't do much.

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging.
> I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> 

[jira] [Updated] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11363:

Assignee: Paulo Motta

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system)
> Currently there is between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.





[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232797#comment-15232797
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

[~xedin] [~jrwest]

The patch does not solve the issue.

The exception occurs after fetching 15 260 000 rows using the index.

Below is my Java test:

{noformat}
String query = "SELECT dsp_code, unic_work, paying_net_qty FROM 
test.resource_bench WHERE period_end_month_int = 201401";
final PreparedStatement ps = session.prepare(query);
System.out.println("* Execution query : " + 
ps.getQueryString() + " \n");
System.out.println("Start date = " + new Date() + "\n");

long count = 0;
int fetchSize = 2;
int sleepTimeInMs = 20;
final BoundStatement bs = ps.bind();
bs.setFetchSize(fetchSize);
bs.setReadTimeoutMillis(9);
final ResultSet resultSet = session.execute(bs);
final Iterator<Row> iterator = resultSet.iterator();
while (iterator.hasNext()) {
count ++;
final Row row = iterator.next();
if (count % fetchSize == 0) {
System.out.println(count + " : " + row.getString("dsp_code") );
// Give the system some time to breath
Thread.sleep(sleepTimeInMs);
}
}

System.out.println("End date = " + new Date());
System.out.println("Fetched rows = " + count);
{noformat}

It took a while to hit the 15 million rows, but I eventually fell into the same 
exception.

You can try to reproduce it now that you have all the SSTables.

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging.
> I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> 

[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232771#comment-15232771
 ] 

Brian Hess commented on CASSANDRA-11521:
-

Ah - that's a good point (about internal things for other CLs).

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-8488) Filter by UDF

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232752#comment-15232752
 ] 

Henry Manasseh commented on CASSANDRA-8488:
---

Got it. Yes, that looks like a bigger problem. Thank you for pointing it out.

> Filter by UDF
> -
>
> Key: CASSANDRA-8488
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8488
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Local Write-Read Paths
>Reporter: Jonathan Ellis
>  Labels: client-impacting, cql, udf
> Fix For: 3.x
>
>
> Allow user-defined functions in WHERE clause with ALLOW FILTERING.





[jira] [Commented] (CASSANDRA-8488) Filter by UDF

2016-04-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232730#comment-15232730
 ] 

Sylvain Lebresne commented on CASSANDRA-8488:
-

bq. What do you mean by a UserExpression is for users?

Not much I guess, this is more of an implementation detail. As I said, there are 
bigger issues there before discussing minor implementation details anyway.

> Filter by UDF
> -
>
> Key: CASSANDRA-8488
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8488
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Local Write-Read Paths
>Reporter: Jonathan Ellis
>  Labels: client-impacting, cql, udf
> Fix For: 3.x
>
>
> Allow user-defined functions in WHERE clause with ALLOW FILTERING.





[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232728#comment-15232728
 ] 

Sylvain Lebresne commented on CASSANDRA-11521:
--

bq. WRT CL, with this approach, I don't quite see why you would have to stick 
to CL_ONE here

It's more a matter of CL.ONE being the case where we know we can get great 
benefits, because in that case we'll "keep the query open", which saves a ton of 
work that is otherwise done for every page. For other CLs, because we ask other 
nodes, we'd kind of have to add some intra-node streaming of results to get 
substantial gains. And that's a lot more involved, hence the "that's an 
optimization for another day".

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-08 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232723#comment-15232723
 ] 

Jeremiah Jordan commented on CASSANDRA-11532:
-

CI looks good.

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional<SSLOptions> getSSLOptions(Configuration conf)
> {
> Optional<String> truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional<String> keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional<String> truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional<String> keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional<String> cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you to both trust the server (truststore) and present a client 
> certificate (keystore). This should be made more flexible so that at least 
> client authentication is optional. 





[jira] [Updated] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-08 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-11532:

Status: Ready to Commit  (was: Patch Available)

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional<SSLOptions> getSSLOptions(Configuration conf)
> {
> Optional<String> truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional<String> keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional<String> truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional<String> keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional<String> cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you to both trust the server (truststore) and present a client 
> certificate (keystore). This should be made more flexible so that at least 
> client authentication is optional. 





[jira] [Commented] (CASSANDRA-11295) Make custom filtering more extensible via custom classes

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232724#comment-15232724
 ] 

Henry Manasseh commented on CASSANDRA-11295:


Is CASSANDRA-8273 an issue for UserExpression filters?

> Make custom filtering more extensible via custom classes 
> -
>
> Key: CASSANDRA-11295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11295
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.6
>
> Attachments: DummyFilteringRestrictions.java
>
>
> At the moment, the implementation of {{RowFilter.CustomExpression}} is 
> tightly bound to the syntax designed to support non-CQL search syntax for 
> custom 2i implementations. It might be interesting to decouple the two things 
> by making the custom expression implementation and serialization a bit more 
> pluggable. This would allow users to add their own custom expression 
> implementations to experiment with custom filtering strategies without having 
> to patch the C* source. As a minimally invasive first step, custom 
> expressions could be added programmatically via {{QueryHandler}}. Further 
> down the line, if this proves useful and we can figure out some reasonable 
> syntax we could think about adding the capability in CQL in a separate 
> ticket. 





[jira] [Commented] (CASSANDRA-8488) Filter by UDF

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232719#comment-15232719
 ] 

Henry Manasseh commented on CASSANDRA-8488:
---

What do you mean by "a UserExpression is for users"? Wouldn't filtering based on 
a UDF be the same thing as filtering by a "user defined filtering expression"?

> Filter by UDF
> -
>
> Key: CASSANDRA-8488
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8488
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Local Write-Read Paths
>Reporter: Jonathan Ellis
>  Labels: client-impacting, cql, udf
> Fix For: 3.x
>
>
> Allow user-defined functions in WHERE clause with ALLOW FILTERING.





[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232715#comment-15232715
 ] 

Brian Hess commented on CASSANDRA-11521:
-

[~slebresne] - I like this new approach better. I think it simplifies things a 
bit, and I'm worried about the server easily overpowering the client, which I 
think could be really easy to do (then we'd have to think about things like 
back-pressure, etc.). There could be a way to tell the server that the client 
is going to ask for all (or a lot) of the pages, so keep this stuff ready to 
flow, etc. Additionally, we could have a setting that tells the server "if you 
haven't heard me ask for the next page (or send some heartbeat) within X amount 
of time, feel free to clean things up and throw an error if I ask for the next 
page later", so that we don't have resources tied up even if the client dies.

WRT CL, with this approach, I don't quite see why you would have to stick to 
CL_ONE here. That said, starting with CL_ONE and "growing" to other CLs is 
probably okay. Just not entirely sure what it gains given this new approach.
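The "clean things up if the client goes quiet" idea above could be sketched as a 
server-side registry of open paging sessions plus a reaper. This is purely 
hypothetical: none of the class or method names below exist in Cassandra, and 
the real implementation would have to interact with query state and OpOrders.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: track when each streaming/paging session was last
// heard from, and drop sessions that stay idle past a timeout.
class PagingSessionRegistry {
    private final Map<UUID, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    PagingSessionRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Register a new session and return its id.
    UUID open() {
        UUID id = UUID.randomUUID();
        lastSeen.put(id, System.currentTimeMillis());
        return id;
    }

    // Called on each page request or heartbeat; false means the session
    // was already reaped, so the client should get an error.
    boolean touch(UUID id) {
        return lastSeen.computeIfPresent(id, (k, t) -> System.currentTimeMillis()) != null;
    }

    // Run periodically: drop sessions idle longer than the timeout.
    void reap() {
        long now = System.currentTimeMillis();
        lastSeen.entrySet().removeIf(e -> now - e.getValue() > timeoutMillis);
    }
}
```

A client that dies without closing its session simply stops calling touch(), and 
the next reap() frees the resources.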

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232701#comment-15232701
 ] 

Sylvain Lebresne commented on CASSANDRA-11521:
--

The first thing that I think should be answered here is how we "expose" this 
"externally". My initial thought was more or less what I think your proof of 
concept is doing, that is, having a different "paging mode" where the server 
sends pages "as fast as possible" rather than waiting for the client to ask for 
them.

But I'm starting to wonder if that's the best approach, because one of the 
questions in that case is "how do we make sure we don't overwhelm the client?". 
And taking a step back, I strongly suspect that by far the majority of the gain 
of "streaming" in the numbers on CASSANDRA-9259 is due to not having to 
re-start a new query server side for each page. Other than that, the difference 
between clients requesting pages as fast as they can versus the server sending 
them as fast as it can (without waiting on the client to ask) is really just 
the latency of 2 client-server messages per page, which should be fairly small 
(and probably not even noticeable if the server can send data faster than the 
client can process it).

So an alternative could be to not change how current paging works in general, 
but simply allow users to provide a "hint" when they know that they intend to 
consume the whole result set no matter what (and do so rapidly). That hint 
would be used by the driver and server to optimize based on that assumption, 
which would mean for the driver to try to request all pages from the same 
replica, and for the server to, at CL.ONE at least, maintain the ongoing query 
iterator in memory.

My reasoning is that this would trade a hopefully negligible amount of latency 
between pages for:
# a simple solution to the problem of rate limiting for the client's sake 
(since the client will still control how fast things come).
# almost no change to the native protocol. We only need to pass the new "hint" 
flag, which would really only mean "please optimize if you can". In particular, 
we could actually introduce this _without_ a bump of the native protocol, since 
we have flags available for query/execute messages. Given that so far we have 
no plan on doing protocol v5 before 4.0, this would let us deliver this 
earlier, which is nice.
# very little change for the drivers: all they probably have to do is make 
sure they reuse the same replica for all pages if the "hint" is set by users, 
which should be pretty trivial to implement.
# it makes the question of which CLs are supported moot: the "hint" flag will 
be just that, a hint, so users will be able to use it whenever. It just happens 
that we'll only optimize CL.ONE initially.

Overall, assuming the loss in latency (compared to having the server send 
pages as fast as it can) is indeed very small (which we should certainly 
validate), this would appear to be a pretty good tradeoff to me.

But anyway, that's my initial brain dump on that first question of "how do we 
expose this?". There are other questions that need to be discussed too (and 
the sooner the better). For instance, how do we concretely handle the 
long-running queries that this will allow? Holding an OpOrder for too long 
feels problematic, to name just one problem.


> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-10818) Evaluate exposure of DataType instances from JavaUDF class

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232677#comment-15232677
 ] 

Henry Manasseh commented on CASSANDRA-10818:


Will this fix go into 3.6?

> Evaluate exposure of DataType instances from JavaUDF class
> --
>
> Key: CASSANDRA-10818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10818
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> Currently UDF implementations cannot create new UDT instances.
> There's no way to create a new UT instance without having the 
> {{com.datastax.driver.core.DataType}} to be able to call 
> {{com.datastax.driver.core.UserType.newValue()}}.
> From a quick look into the related code in {{JavaUDF}}, {{DataType}} and 
> {{UserType}} classes it looks fine to expose information about return and 
> argument types via {{JavaUDF}}.
> Have to find some solution for script UDFs - but feels doable, too.





[jira] [Commented] (CASSANDRA-11295) Make custom filtering more extensible via custom classes

2016-04-08 Thread Henry Manasseh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232669#comment-15232669
 ] 

Henry Manasseh commented on CASSANDRA-11295:


Thank you.

> Make custom filtering more extensible via custom classes 
> -
>
> Key: CASSANDRA-11295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11295
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.6
>
> Attachments: DummyFilteringRestrictions.java
>
>
> At the moment, the implementation of {{RowFilter.CustomExpression}} is 
> tightly bound to the syntax designed to support non-CQL search syntax for 
> custom 2i implementations. It might be interesting to decouple the two things 
> by making the custom expression implementation and serialization a bit more 
> pluggable. This would allow users to add their own custom expression 
> implementations to experiment with custom filtering strategies without having 
> to patch the C* source. As a minimally invasive first step, custom 
> expressions could be added programmatically via {{QueryHandler}}. Further 
> down the line, if this proves useful and we can figure out some reasonable 
> syntax we could think about adding the capability in CQL in a separate 
> ticket. 





[jira] [Commented] (CASSANDRA-9625) GraphiteReporter not reporting

2016-04-08 Thread Brandon Wulf (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232654#comment-15232654
 ] 

Brandon Wulf commented on CASSANDRA-9625:
-

I too upgraded to 2.1.13 hoping to fix this issue. 

I upgraded from 2.1.8 primarily because the graphite metrics reporter kept 
dropping out, and for a few weeks I saw no more issues. Then, after we added 
some new nodes with SSDs, the problem seems to be affecting the fast nodes. 
Maybe the old nodes had enough io wait time injected to mask a race condition 
causing this issue?

> GraphiteReporter not reporting
> --
>
> Key: CASSANDRA-9625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9625
> Project: Cassandra
>  Issue Type: Bug
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
>Assignee: T Jake Luciani
> Attachments: metrics.yaml, thread-dump.log
>
>
> When upgrading from 2.1.3 to 2.1.6, the Graphite metrics reporter stops 
> working.  The usual startup is logged, and one batch of samples is sent, but 
> the reporting interval comes and goes, and no other samples are ever sent.  
> The logs are free from errors.
> Frustratingly, metrics reporting works in our smaller (staging) environment 
> on 2.1.6; we are able to reproduce this on all 6 of our production nodes, but 
> not on a 3-node (otherwise identical) staging cluster (maybe it takes a 
> certain level of concurrency?).
> Attached is a thread dump, and our metrics.yaml.





[jira] [Updated] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-04-08 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-11537:

Description: 
As an ops person upgrading and servicing Cassandra servers, I require a 
clearer message when I issue a nodetool command that the server is not ready 
for it, so that I am not confused.

Technical description:
If you deploy a new binary, restart, and issue nodetool 
scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
exception would be easier to understand. Also, if a user has turned assertions 
off, it is unclear what might happen. 
{noformat}

EC1: Throw exception to make it clear server is still in start up process. 
:~# nodetool upgradesstables
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
at 
org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
at 
org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
{noformat}
EC1: 
Patch against 2.1 (branch)

https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1


  was:
As an ops person upgrading and servicing Cassandra servers, I require a more 
clear message when I issue a nodetool command that the server is not ready for 
so that I am not confused.

Technical description:
If you deploy a new binary, restart, and issue nodetool scrub/compact/updatess 
etc you get unfriendly assertion. An exception would be easier to understand. 
Also if a user has turned assertions off it is unclear what might happen. 
{noformat}

EC1: Throw exception to make it clear server is still in start up process. 
:~# nodetool upgradesstables
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
at 
org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
at 
org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
{noformat}
EC1: 
Patch against 2.1 (branch)

https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for it, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1





[jira] [Created] (CASSANDRA-11538) Give secondary index on partition column the same treatment as static column

2016-04-08 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-11538:
---

 Summary: Give secondary index on partition column the same 
treatment as static column
 Key: CASSANDRA-11538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11538
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL
 Environment: Cassandra 3.4
Reporter: DOAN DuyHai
Priority: Minor


For an index on a static column, in the index table we store:

- partition key = base table static column value
- clustering = base table complete partition key (as a single value)


The way we handle an index on a partition column is different; we store:

- partition key = base table partition column value
- clustering 1 = base table complete partition key (as a single value)
- clustering 2 ... N+1 = the N clustering values of the base table

It is more consistent to give the partition column index the same treatment as 
the one for static columns.
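The two entry shapes described above can be contrasted with a small illustration. These are plain Java records standing in for real index-table rows; the field names and string values are illustrative only, not Cassandra types.

```java
import java.util.List;

public class IndexLayoutSketch {
    // static-column index entry: (indexed value) -> (base partition key)
    record StaticEntry(String indexedValue, String basePartitionKey) {}

    // partition-column index entry: (indexed value) ->
    //   (base partition key, plus the base table's N clustering values)
    record PartitionEntry(String indexedValue, String basePartitionKey,
                          List<String> baseClusterings) {}

    public static void main(String[] args) {
        StaticEntry s = new StaticEntry("201401", "pk1");
        PartitionEntry p = new PartitionEntry("201401", "pk1", List.of("c1", "c2"));
        System.out.println(s);
        System.out.println(p);
    }
}
```

The ticket proposes dropping the extra clustering components from the second shape so both index kinds look like the first.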





[jira] [Resolved] (CASSANDRA-11536) Cassandra Imporve

2016-04-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-11536.
---
Resolution: Invalid

> Cassandra Imporve
> -
>
> Key: CASSANDRA-11536
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11536
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tan Pin Siang
>






[jira] [Commented] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-04-08 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232413#comment-15232413
 ] 

Edward Capriolo commented on CASSANDRA-11537:
-

With the patch above, the user now sees this:

{noformat}
[edward@jackintosh apache-cassandra-2.1.13-SNAPSHOT]$ bin/nodetool compact
error: Can not execute command because startup is not complete.
-- StackTrace --
org.apache.cassandra.service.NotReadyException: Can not execute command because 
startup is not complete.
at 
org.apache.cassandra.service.StorageService.throwIfNotInitialized(StorageService.java:4447)
at 
org.apache.cassandra.service.StorageService.forceKeyspaceCompaction(StorageService.java:2449)
{noformat}
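The guard behavior shown above can be sketched with a minimal self-contained class. The names mirror the patch's {{NotReadyException}} and {{throwIfNotInitialized}}, but this is an illustration, not the committed code.

```java
public class StartupGuard {
    // Mirrors org.apache.cassandra.service.NotReadyException from the patch
    static class NotReadyException extends RuntimeException {
        NotReadyException(String msg) { super(msg); }
    }

    private volatile boolean initialized = false;

    void markInitialized() { initialized = true; }

    // Called at the top of JMX-exposed operations like forceKeyspaceCompaction
    void throwIfNotInitialized() {
        if (!initialized)
            throw new NotReadyException(
                "Can not execute command because startup is not complete.");
    }

    public static void main(String[] args) {
        StartupGuard g = new StartupGuard();
        try {
            g.throwIfNotInitialized();
        } catch (NotReadyException e) {
            System.out.println("before startup: " + e.getMessage());
        }
        g.markInitialized();
        g.throwIfNotInitialized(); // no exception once startup completes
        System.out.println("after startup: ok");
    }
}
```

The point is that the caller gets a named exception with a human-readable message instead of a bare {{AssertionError}} with a null message, and the check still fires when JVM assertions are disabled.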

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1





[jira] [Commented] (CASSANDRA-11430) Add legacy notifications backward-support on deprecated repair methods

2016-04-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232405#comment-15232405
 ] 

Paulo Motta commented on CASSANDRA-11430:
-

Thanks, updated the 2.2 patch to use {{com.google.common.base.Optional}} 
instead and resubmitted the 2.2 tests.

> Add legacy notifications backward-support on deprecated repair methods
> --
>
> Key: CASSANDRA-11430
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11430
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Paulo Motta
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> forceRepairRangeAsync is deprecated in 2.2/3.x series. It's still available 
> for older clients though. Unfortunately it sometimes hangs when you call it. 
> It looks like it completes fine but the notification to the client that the 
> operation is done is never sent. This is easiest to see by using nodetool 
> from 2.1 against a 3.x cluster.
> {noformat}
> [Nicks-MacBook-Pro:16:06:21 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 OpsCenter
> [2016-03-24 16:06:50,165] Nothing to repair for keyspace 'OpsCenter'
> [Nicks-MacBook-Pro:16:06:50 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 system_distributed
> ...
> ...
> {noformat}
> (I added the ellipses)





[jira] [Updated] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-04-08 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-11537:

Description: 
As an ops person upgrading and servicing Cassandra servers, I require a more 
clear message when I issue a nodetool command that the server is not ready for 
so that I am not confused.

Technical description:
If you deploy a new binary, restart, and issue nodetool scrub/compact/updatess 
etc you get unfriendly assertion. An exception would be easier to understand. 
Also if a user has turned assertions off it is unclear what might happen. 
{noformat}

EC1: Throw exception to make it clear server is still in start up process. 
:~# nodetool upgradesstables
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
at 
org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
at 
org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
{noformat}
EC1: 
Patch against 2.1 (branch)

https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1


  was:
As an ops person upgrading and servicing Cassandra servers, i require a more 
clear message when I issue a nodetool command that the server is not ready for 
so that I am not confused.

Technical description:
If you deploy a new binary restart and issue nodetool scrub/compact/updatess 
etc you can an unfriendly assertion. An exception would be easier to 
understand. Also if a user has turned assertions off it is unclear what might 
happen. 
{noformat}

EC1: Throw exception to make it clear server is still in start up process. 
:~# nodetool upgradesstables
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
at 
org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
at 
org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
{noformat}
EC1: 
Patch against 2.1 (branch)

https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>
> As an ops person upgrading and servicing Cassandra servers, I require a 
> clearer message when I issue a nodetool command that the server is not ready 
> for, so that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
> exception would be easier to understand. Also, if a user has turned 
> assertions off, it is unclear what might happen. 
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1





[jira] [Comment Edited] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-08 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232404#comment-15232404
 ] 

Jeremiah Jordan edited comment on CASSANDRA-11532 at 4/8/16 4:12 PM:
-

+1 Started CI

||2.2||3.0||trunk||
|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-22]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-30]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-trunk]|
|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-30-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-trunk-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|

The 2.2 branch merges forward cleanly.


was (Author: jjordan):
+1 Started CI

||2.2||3.0||trunk||
|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-22]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-30]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-trunk]|
|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-30-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-trunk-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional getSSLOptions(Configuration conf)
> {
> Optional truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you both to connect only to trusted nodes and to use client 
> authentication. This should be made more flexible so that at least client 
> authentication is optional. 
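One way to relax the requirement is to treat the keystore as optional and enable client authentication only when it is supplied, while the truststore alone is enough for plain SSL. A minimal sketch of that decision logic follows; it uses {{java.util.Optional}} and a string result for brevity (the 2.2 code quoted above actually uses Guava's {{Optional}} and builds {{JdkSSLOptions}}), so treat the names as stand-ins.

```java
import java.util.Optional;

public class SslOptionsSketch {
    /**
     * Decide which SSL mode to use from the configured stores:
     * no truststore -> no SSL at all;
     * truststore only -> server-certificate validation;
     * truststore + keystore -> also client authentication.
     */
    static String describe(Optional<String> truststorePath,
                           Optional<String> keystorePath) {
        if (!truststorePath.isPresent())
            return "no-ssl";
        return keystorePath.isPresent() ? "ssl+client-auth" : "ssl";
    }

    public static void main(String[] args) {
        System.out.println(describe(Optional.of("ts.jks"), Optional.empty()));      // ssl
        System.out.println(describe(Optional.of("ts.jks"), Optional.of("ks.jks"))); // ssl+client-auth
        System.out.println(describe(Optional.empty(), Optional.empty()));           // no-ssl
    }
}
```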





[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-08 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232404#comment-15232404
 ] 

Jeremiah Jordan commented on CASSANDRA-11532:
-

+1 Started CI

||2.2||3.0||trunk||
|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-22]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-30]|[branch|https://github.com/JeremiahDJordan/cassandra/tree/CASSANDRA-11532-trunk]|
|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-30-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-trunk-testall/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/zanson/job/JeremiahDJordan-CASSANDRA-11532-22-dtest/]|

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional getSSLOptions(Configuration conf)
> {
> Optional truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you both to connect only to trusted nodes and to use client 
> authentication. This should be made more flexible so that at least client 
> authentication is optional. 





[jira] [Created] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-04-08 Thread Edward Capriolo (JIRA)
Edward Capriolo created CASSANDRA-11537:
---

 Summary: Give clear error when certain nodetool commands are 
issued before server is ready
 Key: CASSANDRA-11537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
 Project: Cassandra
  Issue Type: Improvement
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor


As an ops person upgrading and servicing Cassandra servers, I require a 
clearer message when I issue a nodetool command that the server is not ready 
for, so that I am not confused.

Technical description:
If you deploy a new binary, restart, and issue nodetool 
scrub/compact/upgradesstables etc., you get an unfriendly assertion. An 
exception would be easier to understand. Also, if a user has turned assertions 
off, it is unclear what might happen. 
{noformat}

EC1: Throw exception to make it clear server is still in start up process. 
:~# nodetool upgradesstables
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
at 
org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
at 
org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
at 
org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
{noformat}
EC1: 
Patch against 2.1 (branch)

https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1






[jira] [Updated] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-04-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11354:
---
Reviewer: Tyler Hobbs  (was: Sylvain Lebresne)

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The main 2 issues are:
> * the fact that it is used for partition keys and clustering columns 
> restrictions whereas those types of column required different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[jira] [Commented] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232392#comment-15232392
 ] 

Paulo Motta commented on CASSANDRA-11529:
-

LGTM, given that the alternative was already proposed and discussed on 
CASSANDRA-9282. Thanks!

Since we now have quite a few warning thresholds, I think we should group them 
together in cassandra.yaml to make them easier to find, so I updated the trunk 
patch to move the warning thresholds into their own section (the previous 
versions remain unchanged):

||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11529]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11529-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11529-dtest/lastCompletedBuild/testReport/]|

(please mark as ready to commit if tests look good and you agree with above 
change)

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>  keySet.add(im.key());
>  for (ColumnFamily cf : im.getColumnFamilies())
>  ksCfPairs.add(String.format("%s.%s", 
> cf.metadata().ksName, cf.metadata().cfName));
> +
> +if (localMutationsOnly)
> +localMutationsOnly &= isMutationLocal(localTokensByKs, 
> im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, 
> TimeUnit.MINUTES, unloggedBatchWarning,
>   keySet.size(), keySet.size() == 1 ? "" : "s",
>   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplication.getAddressRanges}} to calculate local 
> ranges. 
> Recalculating this for every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it and only recompute it when the ring 
> changes.
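The caching idea can be sketched as follows. Here {{ringVersion}} and {{computeRanges}} are hypothetical stand-ins for a ring-change signal and the expensive range computation described above; the real code would hook into token-metadata change notifications instead.

```java
import java.util.Collections;
import java.util.Set;
import java.util.function.Supplier;

public class LocalRangeCache {
    private final Supplier<Integer> ringVersion;        // bumps when the ring changes
    private final Supplier<Set<String>> computeRanges;  // the expensive computation
    private int cachedVersion = -1;
    private Set<String> cachedRanges = Collections.emptySet();
    private int computations = 0;                       // for demonstration only

    LocalRangeCache(Supplier<Integer> ringVersion, Supplier<Set<String>> computeRanges) {
        this.ringVersion = ringVersion;
        this.computeRanges = computeRanges;
    }

    Set<String> localRanges() {
        int v = ringVersion.get();
        if (v != cachedVersion) {          // ring changed (or first call): refresh
            cachedRanges = computeRanges.get();
            cachedVersion = v;
            computations++;
        }
        return cachedRanges;               // otherwise reuse the cached ranges
    }

    int computations() { return computations; }

    public static void main(String[] args) {
        final int[] version = {0};
        LocalRangeCache cache = new LocalRangeCache(
                () -> version[0],
                () -> Set.of("(-inf, t1]", "(t1, t2]"));
        cache.localRanges();
        cache.localRanges();               // cache hit: no recomputation
        version[0]++;                      // simulate a ring change
        cache.localRanges();               // recomputed once more
        System.out.println("computations: " + cache.computations());
    }
}
```

With this shape, every unlogged-batch warning check pays only a version comparison instead of a full {{getAddressRanges}} pass.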





[jira] [Updated] (CASSANDRA-11524) Cqlsh default cqlver needs bumped

2016-04-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11524:

   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Ready to Commit)

It looks like Sylvain already ninja committed a fix in 
{{66fb8f51eddc6738db443b935f9f0666dc5d3767}}, so I'm resolving this as Fixed.

> Cqlsh default cqlver needs bumped
> -
>
> Key: CASSANDRA-11524
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11524
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Philip Thompson
> Fix For: 3.6
>
>
> CI: 
> http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-trunk-dtest/18/
> Patch: https://github.com/ptnapoleon/cassandra/tree/fix-cqlsh
> Here is the current trunk CI, showing what is broken:
> http://cassci.datastax.com/job/trunk_offheap_dtest/120/testReport/
> And here is the commit that I believe broke this:
> https://github.com/apache/cassandra/commit/e1e692a75451c239b8ef489db7d2c233d3d63e4e





[jira] [Commented] (CASSANDRA-11470) dtest failure in materialized_views_test.TestMaterializedViews.base_replica_repair_test

2016-04-08 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232365#comment-15232365
 ] 

Philip Thompson commented on CASSANDRA-11470:
-

That makes sense. Running here: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/63/

> dtest failure in 
> materialized_views_test.TestMaterializedViews.base_replica_repair_test
> ---
>
> Key: CASSANDRA-11470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node2.log, node2_debug.log, node3.log, 
> node3_debug.log
>
>
> base_replica_repair_test has failed on trunk with the following exception in 
> the log of node2:
> {code}
> ERROR [main] 2016-03-31 08:48:46,949 CassandraDaemon.java:708 - Exception 
> encountered during startup
> java.lang.RuntimeException: Failed to list files in 
> /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:53)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:725)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Directories$SSTableLister.list(Directories.java:690) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:567)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:555)
>  ~[main/:na]
> at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:383) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.(Keyspace.java:320) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:130) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:107) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:139)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:864)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:811)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:799)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:505)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:286)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:294)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.query(SchemaKeyspace.java:1246) 
> ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:875)
>  ~[main/:na]
> at 
> org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
>  ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
> ~[main/:na]
> at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:691) 
> [main/:na]
> Caused by: java.lang.RuntimeException: Failed to list directory files in 
> /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985,
>  inconsistent disk state for transaction 
> [ma_txn_flush_58db56b0-f71d-11e5-bf68-03a01adb9f11.log in 
> /mnt/tmp/dtest-du964e/test/node2/data0/system_schema/views-9786ac1cdd583201a7cdad556410c985]
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.classifyFiles(LogAwareFileLister.java:149)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.classifyFiles(LogAwareFileLister.java:103)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister$$Lambda$48/35984028.accept(Unknown
>  Source) ~[na:na]
> at 
> 

[jira] [Commented] (CASSANDRA-11524) Cqlsh default cqlver needs bumped

2016-04-08 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232354#comment-15232354
 ] 

Paulo Motta commented on CASSANDRA-11524:
-

LGTM

> Cqlsh default cqlver needs bumped
> -
>
> Key: CASSANDRA-11524
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11524
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Philip Thompson
> Fix For: 3.x
>
>
> CI: 
> http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-trunk-dtest/18/
> Patch: https://github.com/ptnapoleon/cassandra/tree/fix-cqlsh
> Here is the current trunk CI, showing what is broken:
> http://cassci.datastax.com/job/trunk_offheap_dtest/120/testReport/
> And here is the commit that I believe broke this:
> https://github.com/apache/cassandra/commit/e1e692a75451c239b8ef489db7d2c233d3d63e4e





[jira] [Updated] (CASSANDRA-11524) Cqlsh default cqlver needs bumped

2016-04-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11524:

Status: Ready to Commit  (was: Patch Available)

> Cqlsh default cqlver needs bumped
> -
>
> Key: CASSANDRA-11524
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11524
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Philip Thompson
> Fix For: 3.x
>
>
> CI: 
> http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-trunk-dtest/18/
> Patch: https://github.com/ptnapoleon/cassandra/tree/fix-cqlsh
> Here is the current trunk CI, showing what is broken:
> http://cassci.datastax.com/job/trunk_offheap_dtest/120/testReport/
> And here is the commit that I believe broke this:
> https://github.com/apache/cassandra/commit/e1e692a75451c239b8ef489db7d2c233d3d63e4e





[jira] [Created] (CASSANDRA-11536) Cassandra Imporve

2016-04-08 Thread Tan Pin Siang (JIRA)
Tan Pin Siang created CASSANDRA-11536:
-

 Summary: Cassandra Imporve
 Key: CASSANDRA-11536
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11536
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tan Pin Siang








[jira] [Updated] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11525:
--
Status: Patch Available  (was: Open)

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging.
> I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> 

[jira] [Updated] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11528:
--
Fix Version/s: (was: 3.3)
   3.x

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one table 
> one row at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while I thought the size of the blobs was the cause.
> I also tried using only a unique id as the partition key, because I was afraid 
> a single partition had grown too big, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but it isn't from the time of 
> the crash.
> It only happens for one table. The table has only 15000 entries, but blobs 
> and byte[] are stored there, sized between 100 kB and 4 MB. The total size of 
> that table is about 6.5 GB on disk.
> I worked around it by doing many small selects instead, each fetching only 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, so that I can 
> at least get a stack trace or something similar that might help you?
> It is the prun_srv process that dies. Restarting the NT service makes Cassandra 
> run again.
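The workaround described above - fetching in small fixed-size pages instead of one huge result set - can be sketched generically (a pure-Python model with no driver calls; the 15000 rows and 100-row pages just mirror the numbers in the report):

```python
# Pure-Python model of the workaround: fetch in fixed-size pages instead of
# one huge result set (no driver calls; illustrative row indices only).
def paged_fetch(total_rows, page_size=100):
    """Yield successive pages of row indices, each at most page_size long."""
    fetched = 0
    while fetched < total_rows:
        n = min(page_size, total_rows - fetched)
        yield list(range(fetched, fetched + n))   # stand-in for a row page
        fetched += n

pages = list(paged_fetch(15000, 100))
```

Each page stays small enough to keep per-request memory bounded, which is why the approach avoids the crash.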





[jira] [Updated] (CASSANDRA-11508) GPFS property file should more clearly explain the relationship with PFS

2016-04-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11508:

Issue Type: Improvement  (was: Bug)

> GPFS property file should more clearly explain the relationship with PFS
> 
>
> Key: CASSANDRA-11508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11508
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Brandon Williams
>Priority: Trivial
> Fix For: 2.1.14
>
>
> We should clearly explain that GPFS will load, and possibly use, the 
> cassandra-topology.properties file if it is present.  If this is a new 
> cluster, operators should delete the topology file so that GPFS never even 
> instantiates PFS. The topology file should only exist when migrating 
> from PFS to GPFS, and should be removed after the migration is complete.
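A minimal sketch of that cleanup step on a brand-new cluster (the path and the CASSANDRA_CONF variable are assumptions, not part of the ticket; adjust for your installation):

```shell
# Sketch of the recommended cleanup on a brand-new cluster (the path and the
# CASSANDRA_CONF variable are assumptions; adjust for your installation).
CONF_DIR="${CASSANDRA_CONF:-conf}"
TOPOLOGY_FILE="$CONF_DIR/cassandra-topology.properties"
if [ -f "$TOPOLOGY_FILE" ]; then
    # Removing the file guarantees GPFS never instantiates PFS.
    rm "$TOPOLOGY_FILE"
    echo "removed $TOPOLOGY_FILE"
fi
```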





[jira] [Created] (CASSANDRA-11535) Add dtests for PER PARTITION LIMIT queries with paging

2016-04-08 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-11535:
---

 Summary: Add dtests for PER PARTITION LIMIT queries with paging
 Key: CASSANDRA-11535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11535
 Project: Cassandra
  Issue Type: Test
  Components: Testing
Reporter: Alex Petrov
Assignee: Alex Petrov
Priority: Minor


[#7017|https://issues.apache.org/jira/browse/CASSANDRA-7017] introduces {{PER 
PARTITION LIMIT}} queries. In order to ensure they work with paging, and with 
partitions containing only static columns, we need to add {{dtests}} for them.
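Conceptually, a {{PER PARTITION LIMIT}} caps how many rows each partition contributes to the result, independent of page boundaries. A pure-Python sketch of that behaviour (not Cassandra's implementation; the row tuples are illustrative):

```python
# Illustrative pure-Python sketch (not Cassandra's implementation): cap the
# number of rows each partition contributes, then page through the result.
from itertools import islice

def per_partition_limit(rows, limit):
    """Yield at most `limit` rows per partition key, preserving order."""
    counts = {}
    for pk, ck, val in rows:
        counts[pk] = counts.get(pk, 0) + 1
        if counts[pk] <= limit:
            yield (pk, ck, val)

# rows as (partition_key, clustering_key, value) tuples - illustrative data
rows = [("a", 1, "x"), ("a", 2, "y"), ("a", 3, "z"),
        ("b", 1, "u"), ("b", 2, "v")]
limited = per_partition_limit(rows, 2)
page1 = list(islice(limited, 3))        # a paging fetch of 3 rows
page2 = list(limited)                   # the remaining rows
```

The dtests need to verify exactly this property: the per-partition cap holds even when a partition's rows straddle a page boundary.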





[jira] [Commented] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232315#comment-15232315
 ] 

Alex Petrov commented on CASSANDRA-9842:


To sum up, there's no distinction between a non-existent row and a static 
column containing a {{null}} value, so both an update to a non-existent row and 
an update to a row with {{null}} in the static column will succeed.
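That summary can be modelled with a small sketch (an assumed model of the intended semantics, not Cassandra's code): the condition {{IF scol = null}} holds both when the partition is absent and when the static column is explicitly null:

```python
# Assumed model of the intended LWT semantics (not Cassandra's code): the
# condition "IF scol = null" holds both when the partition does not exist
# and when the static column is explicitly null.
def lwt_update(partitions, pkey, new_scol, expected_scol):
    """Apply the static-column update only if the IF condition holds."""
    row = partitions.get(pkey)                     # None: partition absent
    current = None if row is None else row.get("scol")
    if current != expected_scol:
        return False                               # [applied] = False
    partitions.setdefault(pkey, {})["scol"] = new_scol
    return True                                    # [applied] = True

store = {}
applied_missing = lwt_update(store, 1, 1, None)    # partition doesn't exist
store[2] = {"scol": None}
applied_null = lwt_update(store, 2, 1, None)       # scol explicitly null
applied_after = lwt_update(store, 2, 5, None)      # scol is now 1, fails
```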

The inconsistent behaviour exists only in {{2.1}} and {{2.2}}, although I've 
added the same tests to {{3.0}} and {{trunk}} as well.

|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|

Waiting for CI results.

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}





[jira] [Updated] (CASSANDRA-11481) Example metrics config has DroppedMetrics

2016-04-08 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11481:
--
Status: Patch Available  (was: Open)

> Example metrics config has DroppedMetrics
> -
>
> Key: CASSANDRA-11481
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11481
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
> Attachments: fix-example-metrics.patch
>
>
> Noticed this when setting up metrics reporting on a new cluster. I assume it 
> is meant to be DroppedMessage.





[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2016-04-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7396:

Status: Patch Available  (was: In Progress)

[Branch|https://github.com/apache/cassandra/compare/trunk...snazy:7396-coll-slice]
[utests|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7396-coll-slice-testall/lastSuccessfulBuild/]
[dtests|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7396-coll-slice-dtest/lastSuccessfulBuild/]

The patch allows single-element and slice selections on sets, maps and lists - 
both frozen and non-frozen.

Syntax for single element selections is {{collection_column\[element_term\]}}.
Syntax for slice selections is {{collection_column\[from_term..to_term\]}}. 
Either {{from_term}} or {{to_term}} can be omitted to make either one unbounded.
All kinds of terms are allowed - including bind markers and function calls. 
Single element and slice selections can also be passed to functions/aggregates.
The resulting type of any element or slice selection is the same as the 
collection type. For example, {{set_column\[element\]}} will return a set (not 
a boolean).

Non-frozen collections (complex data), except lists, are treated specially: 
{{ColumnFilter}} is used to filter only those values that fall within the 
bounds of the slice(s).

{{CollectionSerializer.serializeForNativeProtocol}} (moved from 
{{CollectionType}}) now takes additional parameters to prevent superfluous 
serialization - this works for non-frozen lists, too.
{{CollectionSerializer.reserializeForNativeProtocol}} performs similar work for 
frozen types.

{{SelectStatement}} tries to use one {{ColumnFilter}} instance for all 
executions of a (prepared) statement. In case the terms for the element/slice 
sub-selection are dynamic ones (bound parameters, function calls), a new 
{{ColumnFilter}} needs to be created for each execution in order to benefit 
from its select/slice capabilities.

The new class {{SelectionRestriction}} is used as a descriptor of column 
element/slice/full selections. Each individual selection has its own 
{{SelectionRestriction}} instance.

{{Term.Raw}} got a new function {{isConstant}}, which is needed to determine 
whether a {{SelectionRestriction}} refers to a constant/literal value.

Some {{Selector}} methods now take {{QueryOptions}} instead of {{int 
protocolVersion}}, which is needed to resolve bound variables and functions for 
element/slice bounds.

{{Selection}} now distinguishes between {{ByteBuffer}}s and 
{{ComplexColumnData}}. The {{ComplexColumnData}} is used to access the "raw" 
cells for collection select/slice before they get serialized. That 
{{ComplexColumnData}} is then passed to {{CollectionSerializer}}.

One possibly annoying thing in the syntax is the use of two dots to separate 
slice bounds. Given a {{map}} column, a slice selection like {{SELECT 
col\[1..5\]}} will result in a parser failure, because the parser interprets 
{{1.}} as the start of a floating-point literal and then complains about the 
second dot. The workaround is to put spaces around the dots ({{SELECT col\[1 
.. 5\]}}). I thought about using {{:}} as a separator, but that is the start of 
a bind parameter. {{-}} is already "occupied" by negative numeric values. {{;}} 
is the end of a statement. Using a keyword like {{TO}} makes unbounded slice 
queries look a bit awkward ({{SELECT col\[ TO 5\]}} or {{SELECT col\[5 TO \]}}).
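The element/slice semantics described above - a slice of a map returns a map, with either bound optional - can be illustrated with a plain Python dict (a sketch only, not the actual {{ColumnFilter}} logic):

```python
# Sketch only (a plain dict, not the actual ColumnFilter logic): a slice of
# a map returns a map, and either bound may be omitted to leave it unbounded.
def slice_map(m, lo=None, hi=None):
    """Return the sub-map whose keys fall within [lo, hi]."""
    return {k: v for k, v in sorted(m.items())
            if (lo is None or k >= lo) and (hi is None or k <= hi)}

col = {1: "a", 3: "b", 5: "c", 7: "d"}
assert slice_map(col, 1, 5) == {1: "a", 3: "b", 5: "c"}   # col[1..5]
assert slice_map(col, hi=5) == {1: "a", 3: "b", 5: "c"}   # col[..5]
assert slice_map(col, lo=5) == {5: "c", 7: "d"}           # col[5..]
```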

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key]" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-04-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232249#comment-15232249
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


The patch looks good to me.
Could you add some DTests similar to 
https://github.com/riptano/cassandra-dtest/blob/master/paging_test.py#L941 to 
test filtering on clustering columns with paging?


> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> Since CASSANDRA-6377, queries that filter on non-primary-key columns without 
> an index are fully supported.
> It makes sense to also support filtering on clustering columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}
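Conceptually, {{ALLOW FILTERING}} on a clustering column means scanning the rows and post-filtering them, as in this pure-Python sketch (illustrative data, not Cassandra's implementation):

```python
# Pure-Python sketch of what ALLOW FILTERING on a clustering column means
# conceptually: scan the rows and post-filter (illustrative data only).
rows = [  # (empID, b_yr, b_mon, b_day, firstname, lastname)
    (1, "1990", "oct", "02", "Ada", "Lovelace"),
    (1, "1991", "jan", "15", "Ada", "Lovelace"),
    (2, "1985", "oct", "30", "Alan", "Turing"),
]
# WHERE b_mon = 'oct' ALLOW FILTERING
hits = [r for r in rows if r[2] == "oct"]
```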





[jira] [Updated] (CASSANDRA-11477) MerkleTree mismatch when a cell is shadowed by a range tombstone in different SSTables

2016-04-08 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11477:

Reproduced In: 2.2.5, 2.1.13  (was: 2.1.13, 2.2.5)
   Status: Patch Available  (was: Open)

> MerkleTree mismatch when a cell is shadowed by a range tombstone in different 
> SSTables
> --
>
> Key: CASSANDRA-11477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11477
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fabien Rousseau
>Assignee: Fabien Rousseau
>  Labels: repair
> Attachments: 11477-2.1.patch
>
>
> Below is a script that reproduces the problem:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE IF NOT EXISTS table1 (
> c1 text,
> c2 text,
> c3 text,
> PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3) VALUES ( 'a', 'b', 'c');
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair test_rt table1
> # now grep the log and observe that some inconsistencies were detected 
> # between the nodes (while none should have been detected)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> The wrong hash will be computed on node1, which will include the previously 
> deleted cell, thus resulting in a MerkleTree mismatch.
> This is due to the fact that, in LazilyCompactedRow, the range tombstone (RT) 
> is not added to the RT tracker (currently it is added only if it is GCable, 
> but it should always be added).
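A toy model of why the missing tombstone tracking causes the mismatch (an assumed simplification, not Cassandra's MerkleTree code): if a shadowed cell leaks into one node's digest, the digests diverge even though both nodes hold the same logical data:

```python
# Toy model (assumed simplification, not Cassandra's MerkleTree code): a
# range tombstone that is not tracked lets a shadowed cell leak into the
# validation digest, so two nodes with identical logical data disagree.
import hashlib

def row_digest(cells, tombstones, track_tombstones=True):
    """Hash the live cells; cells shadowed by a tombstone must be excluded."""
    active = tombstones if track_tombstones else set()
    h = hashlib.md5()
    for key, value in cells:
        if key in active:               # cell shadowed by a tombstone
            continue
        h.update(f"{key}={value}".encode())
    return h.hexdigest()

cells = [("a:b", "c")]                  # the INSERT from the repro script
deleted = {"a:b"}                       # the later DELETE
good = row_digest(cells, deleted)       # correct: shadowed cell excluded
bad = row_digest(cells, deleted, track_tombstones=False)  # the bug
```

A digest mismatch is exactly what repair reports as "out of sync" in the grep step above.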





[jira] [Created] (CASSANDRA-11534) cqlsh fails to format collections when using aliases

2016-04-08 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-11534:


 Summary: cqlsh fails to format collections when using aliases
 Key: CASSANDRA-11534
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11534
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Priority: Minor


Given a simple table: selecting the columns without an alias works fine. 
However, if the map is selected using an alias, cqlsh fails to format the value.

{code}
create keyspace foo WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
CREATE TABLE foo.foo (id int primary key, m map<int, text>);
insert into foo.foo (id, m) VALUES ( 1, {1: 'one', 2: 'two', 3:'three'});
insert into foo.foo (id, m) VALUES ( 2, {1: '1one', 2: '2two', 3:'3three'});

cqlsh> select id, m from foo.foo;

 id | m
+-
  1 |{1: 'one', 2: 'two', 3: 'three'}
  2 | {1: '1one', 2: '2two', 3: '3three'}

(2 rows)
cqlsh> select id, m as "weofjkewopf" from foo.foo;

 id | weofjkewopf
+---
  1 |OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, u'three')])
  2 | OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, u'3three')])

(2 rows)
Failed to format value OrderedMapSerializedKey([(1, u'one'), (2, u'two'), (3, 
u'three')]) : 'NoneType' object has no attribute 'sub_types'
Failed to format value OrderedMapSerializedKey([(1, u'1one'), (2, u'2two'), (3, 
u'3three')]) : 'NoneType' object has no attribute 'sub_types'
{code}
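A hypothetical minimal model of the suspected failure mode (an assumption about the mechanism, not cqlsh's actual code): per-column type metadata keyed by the original column name misses the alias, leaving the formatter with {{None}}:

```python
# Hypothetical sketch of the suspected failure mode (assumption; this is not
# cqlsh's actual code): type metadata is looked up by the real column name,
# so an aliased select label returns None and the formatter later
# dereferences None.sub_types.
column_types = {"m": ("map", ("int", "text"))}     # schema-known columns

def lookup_type(select_label):
    """Return the column's type metadata, or None for an unknown label."""
    return column_types.get(select_label)          # alias -> None

direct = lookup_type("m")                          # works
aliased = lookup_type("weofjkewopf")               # the alias from the report
```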






cassandra git commit: Change cql version in cqlsh

2016-04-08 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk edcfe32be -> 66fb8f51e


Change cql version in cqlsh


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66fb8f51
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66fb8f51
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66fb8f51

Branch: refs/heads/trunk
Commit: 66fb8f51eddc6738db443b935f9f0666dc5d3767
Parents: edcfe32
Author: Sylvain Lebresne 
Authored: Fri Apr 8 15:25:50 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Apr 8 15:25:50 2016 +0200

--
 bin/cqlsh.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/66fb8f51/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 8a142d2..c3fcc48 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -177,7 +177,7 @@ from cqlshlib.util import get_file_encoding_bomsize, 
trim_if_present
 DEFAULT_HOST = '127.0.0.1'
 DEFAULT_PORT = 9042
 DEFAULT_SSL = False
-DEFAULT_CQLVER = '3.4.0'
+DEFAULT_CQLVER = '3.4.2'
 DEFAULT_PROTOCOL_VERSION = 4
 DEFAULT_CONNECT_TIMEOUT_SECONDS = 5
 DEFAULT_REQUEST_TIMEOUT_SECONDS = 10



[jira] [Commented] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-04-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232149#comment-15232149
 ] 

Benjamin Lerer commented on CASSANDRA-11354:


|[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:11354-trunk]|[utest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-11354-trunk-testall/5/]|[dtest|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-11354-trunk-dtest/4/]|

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The main 2 issues are:
> * the fact that it is used for both partition key and clustering column 
> restrictions, whereas those types of columns require different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[jira] [Updated] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-04-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-7423:
--
Fix Version/s: (was: 3.x)
   3.6

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance, this is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to either performance or 
> consistency or both.
> edit: to clarify, i believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics





[jira] [Updated] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-04-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-7423:
--
Status: Ready to Commit  (was: Patch Available)

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>
> Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
> have to rewrite the entire type in order to make any modifications), they 
> can't be safely used without LWT for any operation that wants to modify a 
> subset of the UDT's fields by any client process that is not authoritative 
> for the entire blob. 
> When trying to use UDTs to model complex records (particularly with nesting), 
> this is not an exceptional circumstance, this is the totally expected normal 
> situation. 
> The use of UDTs for anything non-trivial is harmful to either performance or 
> consistency or both.
> edit: to clarify, i believe that most potential uses of UDTs should be 
> considered anti-patterns until/unless we have field-level r/w access to 
> individual elements of the UDT, with individual timestamps and standard LWW 
> semantics
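The improvement this ticket delivers can be sketched in CQL (a hedged sketch: the type, table, and column names are illustrative, not from the ticket; the per-field syntax is the one added for non-frozen UDTs):
{code}
CREATE TYPE address (street text, city text);
CREATE TABLE users (id uuid PRIMARY KEY, addr address);

-- Previously the whole UDT value had to be rewritten, so two clients
-- updating different fields concurrently clobber each other unless
-- they serialize through LWT:
UPDATE users SET addr = {street: '1 Main St', city: 'Paris'} WHERE id = ?;

-- With this change (non-frozen UDTs), a single field can be written,
-- carrying its own timestamp and the usual last-write-wins semantics:
UPDATE users SET addr.city = 'Paris' WHERE id = ?;
{code}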



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-04-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232141#comment-15232141
 ] 

Benjamin Lerer commented on CASSANDRA-7423:
---

Your change seems to have fixed the problem :-)
Thanks.

+1

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.6
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7423) Allow updating individual subfields of UDT

2016-04-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15228433#comment-15228433
 ] 

Benjamin Lerer edited comment on CASSANDRA-7423 at 4/8/16 12:50 PM:


I see a weird behavior when playing with auto-completion.
{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create type t (a int, b int);
cqlsh:test> create table test (pk int primary key, u t, v int);
cqlsh:test> update test set  
u v
cqlsh:test> update test set u 
= .
{code}
but if I press TAB after {{update test SET u.}} I get:
{code}
cqlsh:test> update test SET u.
{code}
whereas I would have expected {{}} on the following line (or even 
better: {{a b}} ;-))


was (Author: blerer):
I see a weird behavior when playing with auto-completion.
{code}
cqlsh> create KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};
cqlsh> use test;
cqlsh:test> create type t (a int, b int);
cqlsh:test> create table test (pk int primary key, u t, v int);
cqlsh:test> update test set  
u v
cqlsh:test> update test2 set u 
= .
{code}
but if I press TAB after {{update test SET u.}} I get:
{code}
cqlsh:test> update test SET u.
{code}
whereas I would have expected {{}} on the following line (or even 
better: {{a b}} ;-))

> Allow updating individual subfields of UDT
> --
>
> Key: CASSANDRA-7423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Tyler Hobbs
>  Labels: client-impacting, cql, docs-impacting
> Fix For: 3.x
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-6716) nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist)

2016-04-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reopened CASSANDRA-6716:
-

I've seen this issue more than once now on 2.1 and 2.2 clusters. It seems that 
restarting nodes sometimes helps. I hope to get my hands on one of these 
clusters soon.

There was no "bad" activity on these clusters (like someone removing sstables 
manually).
All snapshot related operations failed (i.e. nodetool snapshot/scrub/repair).

Looking at the code, it seems (just a wild guess!) that the {{View}} for the 
snapshot contains sstables that have already been removed. So far I don't know 
whether these were (recently) compacted or whether removal of sstables from the 
view occasionally fails.
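The failure mode described above can be reproduced in miniature (a hypothetical Python sketch, not Cassandra code; names are illustrative — it only shows how a stale view entry turns into the "Tried to hard link to file that does not exist" symptom):

```python
import os
import tempfile

def snapshot(view, snapshot_dir):
    """Hard-link every sstable listed in the view into the snapshot dir.

    Loosely mirrors what a snapshot does: if the view still lists a file
    that was already removed from disk, the hard link fails.
    """
    for path in view:
        os.link(path, os.path.join(snapshot_dir, os.path.basename(path)))

data_dir = tempfile.mkdtemp()
snap_dir = tempfile.mkdtemp()

sstable = os.path.join(data_dir, "ks-cf-jb-1-Data.db")
open(sstable, "w").close()

view = [sstable]     # the view holds a reference to the sstable...
os.remove(sstable)   # ...but the file is removed (e.g. compacted away)

try:
    snapshot(view, snap_dir)
    outcome = "ok"
except FileNotFoundError:
    outcome = "stale view entry"

print(outcome)  # -> stale view entry
```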

> nodetool scrub constantly fails with RuntimeException (Tried to hard link to 
> file that does not exist)
> --
>
> Key: CASSANDRA-6716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6716
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 
> 1.7
>Reporter: Nikolai Grigoriev
> Attachments: system.log.gz
>
>
> Recently I have started getting a number of exceptions like "File not found" 
> on all Cassandra nodes. Currently I am getting an exception like this every 
> couple of seconds on each node, for different keyspaces and CFs.
> I have tried to restart the nodes and to scrub them. No luck so far. It seems 
> that scrub cannot complete on any of these nodes; at some point it fails 
> because of the file it can't find.
> On one of the nodes, the "nodetool scrub" command currently fails instantly 
> and consistently with this exception:
> {code}
> # /opt/cassandra/bin/nodetool scrub 
> Exception in thread "main" java.lang.RuntimeException: Tried to hard link to 
> file that does not exist 
> /mnt/disk5/cassandra/data/mykeyspace_jmeter/test_contacts/mykeyspace_jmeter-test_contacts-jb-28049-Data.db
>   at 
> org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1215)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1826)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrub(ColumnFamilyStore.java:1122)
>   at 
> org.apache.cassandra.service.StorageService.scrub(StorageService.java:2159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at 

[jira] [Updated] (CASSANDRA-6716) snapshots constantly fail with "Tried to hard link to file that does not exist"

2016-04-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-6716:

Summary: snapshots constantly fail with "Tried to hard link to file that 
does not exist"  (was: nodetool scrub constantly fails with RuntimeException 
(Tried to hard link to file that does not exist))

> snapshots constantly fail with "Tried to hard link to file that does not 
> exist"
> ---
>
> Key: CASSANDRA-6716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6716
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 
> 1.7
>Reporter: Nikolai Grigoriev
> Attachments: system.log.gz
>
>

[jira] [Updated] (CASSANDRA-6716) nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist)

2016-04-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-6716:

Reproduced In: 2.2.3, 2.0.5  (was: 2.0.5)

> nodetool scrub constantly fails with RuntimeException (Tried to hard link to 
> file that does not exist)
> --
>
> Key: CASSANDRA-6716
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6716
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 
> 1.7
>Reporter: Nikolai Grigoriev
> Attachments: system.log.gz
>
>

[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232115#comment-15232115
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

Ok, all the files are uploaded to Google Drive

One SSTable (*ma-2164*) is too big, so I had to split it into 1 GB chunks. To 
merge them, run {{cat ma-2164-big-Data.db_split.tgz_* | tar xz}}.

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging.
> I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  

[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect posibility of duplicate tokens

2016-04-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232113#comment-15232113
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

Rebuilding the index now with the fix from the [CASSANDRA-11525] branch.

Also, I'm uploading all the files to Google Drive; it will take a while to 
complete, but I guess they'll be ready by the time you wake up in SF.
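For context, the bug in the ticket title is about a token-tree builder that assumed token values were unique: when the same token appears more than once (the ticket's "possibility of duplicate tokens"), the associated key offsets must be merged rather than overwritten. A minimal Python sketch of that invariant (illustrative only, not SASI code; names are made up):

```python
from collections import defaultdict

def build_token_tree(entries):
    """Fold (token, key_offset) pairs into a token -> offsets mapping.

    Duplicate tokens are legal (e.g. distinct keys hashing to the same
    token), so their offsets are unioned instead of overwritten.
    """
    tree = defaultdict(set)
    for token, offset in entries:
        tree[token].add(offset)
    return dict(tree)

# token 12 appears twice and must keep both key offsets
entries = [(10, 100), (12, 200), (12, 300), (15, 400)]
tree = build_token_tree(entries)
print(tree[12])  # -> {200, 300}
```

A builder that instead did `tree[token] = offset` would silently drop one of the keys, which is the kind of inconsistency that surfaces later as a lookup pointing at the wrong (or a nonexistent) row.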

> StaticTokenTreeBuilder should respect posibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>

[jira] [Updated] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-04-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11353:
-
Fix Version/s: (was: 3.6)
   3.5

> ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread 
> 
>
> Key: CASSANDRA-11353
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.5
>
>
> Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2458,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:49) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown Source) ~[na:na]
> at org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown Source) ~[na:na]
> at org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264) ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 

[3/4] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-04-08 Thread slebresne
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bba54317
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bba54317
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bba54317

Branch: refs/heads/trunk
Commit: bba54317a2e3ff4072f590c4e728736530ef222a
Parents: 2e3b12f 11da411
Author: Sylvain Lebresne 
Authored: Fri Apr 8 14:09:16 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Apr 8 14:09:16 2016 +0200

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bba54317/CHANGES.txt
--
diff --cc CHANGES.txt
index a6f7a27,58d8ae8..f24c93b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,53 -1,5 +1,54 @@@
 +3.6
 + * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   corruption (CASSANDRA-10112)
 + * Resolve some eclipse-warnings (CASSANDRA-11086)
 + * (cqlsh) Show static columns in a different color (CASSANDRA-11059)
 + * Allow to remove TTLs on table with default_time_to_live (CASSANDRA-11207)
 +Merged from 3.0:
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
 +
  3.5
+  * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
   * Avoid index segment stitching in RAM which lead to OOM on big SSTable 
files (CASSANDRA-11383)
   * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
  Merged from 3.0:



[2/4] cassandra git commit: Fix potential assertion error during compaction on trunk

2016-04-08 Thread slebresne
Fix potential assertion error during compaction on trunk

patch by slebresne; reviewed by krummas for CASSANDRA-11353


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11da411f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11da411f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11da411f

Branch: refs/heads/trunk
Commit: 11da411fb62b0d6bc32d9119dd8548d94f88021f
Parents: e616867
Author: Sylvain Lebresne 
Authored: Mon Dec 28 14:08:10 2015 +0100
Committer: Sylvain Lebresne 
Committed: Fri Apr 8 14:09:01 2016 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/ReadCommand.java|  2 +-
 .../db/compaction/CompactionIterator.java   |  6 +--
 .../cassandra/db/partitions/PurgeFunction.java  | 12 ++---
 .../apache/cassandra/db/rows/AbstractCell.java  |  2 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  | 47 ++--
 6 files changed, 46 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 20c866a..58d8ae8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.5
+ * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
  * Avoid index segment stitching in RAM which lead to OOM on big SSTable files 
(CASSANDRA-11383)
  * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
 Merged from 3.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 3d044f2..387e062 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -554,7 +554,7 @@ public abstract class ReadCommand extends MonitorableImpl 
implements ReadQuery
 {
 public WithoutPurgeableTombstones()
 {
-super(isForThrift, cfs.gcBefore(nowInSec()), 
oldestUnrepairedTombstone(), 
cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
+super(isForThrift, nowInSec(), cfs.gcBefore(nowInSec()), 
oldestUnrepairedTombstone(), 
cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
 }
 
 protected long getMaxPurgeableTimestamp()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
index 8a3b24b..d39da2a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
@@ -103,7 +103,7 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
  ? 
EmptyIterators.unfilteredPartition(controller.cfs.metadata, false)
  : 
UnfilteredPartitionIterators.merge(scanners, nowInSec, listener());
 boolean isForThrift = merged.isForThrift(); // to stop capture of 
iterator in Purger, which is confusing for debug
-this.compacted = Transformation.apply(merged, new Purger(isForThrift, 
controller));
+this.compacted = Transformation.apply(merged, new Purger(isForThrift, 
controller, nowInSec));
 }
 
 public boolean isForThrift()
@@ -264,9 +264,9 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
 
 private long compactedUnfiltered;
 
-private Purger(boolean isForThrift, CompactionController controller)
+private Purger(boolean isForThrift, CompactionController controller, 
int nowInSec)
 {
-super(isForThrift, controller.gcBefore, 
controller.compactingRepaired() ? Integer.MIN_VALUE : Integer.MAX_VALUE, 
controller.cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
+super(isForThrift, nowInSec, controller.gcBefore, 
controller.compactingRepaired() ? Integer.MIN_VALUE : Integer.MAX_VALUE, 
controller.cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
 this.controller = controller;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/src/java/org/apache/cassandra/db/partitions/PurgeFunction.java

[4/4] cassandra git commit: Fix changelog after backporting #11353

2016-04-08 Thread slebresne
Fix changelog after backporting #11353


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edcfe32b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edcfe32b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edcfe32b

Branch: refs/heads/trunk
Commit: edcfe32be1cb7023c52465a3c06eb36ee0e7ef63
Parents: bba5431
Author: Sylvain Lebresne 
Authored: Fri Apr 8 14:10:49 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Apr 8 14:10:49 2016 +0200

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edcfe32b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f24c93b..74ba07e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,4 @@
 3.6
- * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
  * Make LZ4 compression level configurable (CASSANDRA-11051)
  * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
  * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)



[1/4] cassandra git commit: Fix potential assertion error during compaction on trunk

2016-04-08 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.5 e6168672b -> 11da411fb
  refs/heads/trunk 2e3b12fbe -> edcfe32be


Fix potential assertion error during compaction on trunk

patch by slebresne; reviewed by krummas for CASSANDRA-11353


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11da411f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11da411f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11da411f

Branch: refs/heads/cassandra-3.5
Commit: 11da411fb62b0d6bc32d9119dd8548d94f88021f
Parents: e616867
Author: Sylvain Lebresne 
Authored: Mon Dec 28 14:08:10 2015 +0100
Committer: Sylvain Lebresne 
Committed: Fri Apr 8 14:09:01 2016 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/ReadCommand.java|  2 +-
 .../db/compaction/CompactionIterator.java   |  6 +--
 .../cassandra/db/partitions/PurgeFunction.java  | 12 ++---
 .../apache/cassandra/db/rows/AbstractCell.java  |  2 +-
 .../cassandra/db/compaction/TTLExpiryTest.java  | 47 ++--
 6 files changed, 46 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 20c866a..58d8ae8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.5
+ * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
  * Avoid index segment stitching in RAM which lead to OOM on big SSTable files 
(CASSANDRA-11383)
  * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
 Merged from 3.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index 3d044f2..387e062 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -554,7 +554,7 @@ public abstract class ReadCommand extends MonitorableImpl 
implements ReadQuery
 {
 public WithoutPurgeableTombstones()
 {
-super(isForThrift, cfs.gcBefore(nowInSec()), 
oldestUnrepairedTombstone(), 
cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
+super(isForThrift, nowInSec(), cfs.gcBefore(nowInSec()), 
oldestUnrepairedTombstone(), 
cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
 }
 
 protected long getMaxPurgeableTimestamp()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/11da411f/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
index 8a3b24b..d39da2a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionIterator.java
@@ -103,7 +103,7 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
  ? 
EmptyIterators.unfilteredPartition(controller.cfs.metadata, false)
  : 
UnfilteredPartitionIterators.merge(scanners, nowInSec, listener());
 boolean isForThrift = merged.isForThrift(); // to stop capture of 
iterator in Purger, which is confusing for debug
-this.compacted = Transformation.apply(merged, new Purger(isForThrift, 
controller));
+this.compacted = Transformation.apply(merged, new Purger(isForThrift, 
controller, nowInSec));
 }
 
 public boolean isForThrift()
@@ -264,9 +264,9 @@ public class CompactionIterator extends 
CompactionInfo.Holder implements Unfilte
 
 private long compactedUnfiltered;
 
-private Purger(boolean isForThrift, CompactionController controller)
+private Purger(boolean isForThrift, CompactionController controller, 
int nowInSec)
 {
-super(isForThrift, controller.gcBefore, 
controller.compactingRepaired() ? Integer.MIN_VALUE : Integer.MAX_VALUE, 
controller.cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
+super(isForThrift, nowInSec, controller.gcBefore, 
controller.compactingRepaired() ? Integer.MIN_VALUE : Integer.MAX_VALUE, 
controller.cfs.getCompactionStrategyManager().onlyPurgeRepairedTombstones());
 this.controller = controller;
 }
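The diff above captures the compaction's `nowInSec` once and threads it through the `Purger` constructor, instead of letting purge decisions derive their own "now". A minimal, self-contained sketch (hypothetical names, not Cassandra's real API) of why sampling the clock once per pass matters:

```java
// Hedged sketch: if every cell samples the current time independently,
// the clock can advance mid-compaction and the same cell can be judged
// live at one point and expired at another, tripping consistency
// assertions. Capturing nowInSec once, as the patch does, makes every
// purge decision in the pass agree.
public class NowInSecSketch {
    // A cell is live if its expiry time is still in the future.
    static boolean isLive(long expiresAtSec, long nowInSec) {
        return expiresAtSec > nowInSec;
    }

    public static void main(String[] args) {
        long expiresAt = 1_000L;

        // Wrong: sampling time twice gives two different answers for the
        // same cell within one compaction pass (clock advanced mid-pass).
        long t1 = 999L, t2 = 1_001L;
        boolean first = isLive(expiresAt, t1);
        boolean second = isLive(expiresAt, t2);

        // Right (the patch's approach): capture nowInSec once and pass it
        // down to every purge decision, so they always agree.
        long nowInSec = t1;
        boolean a = isLive(expiresAt, nowInSec);
        boolean b = isLive(expiresAt, nowInSec);

        System.out.println(first + " " + second + " " + a + " " + b);
    }
}
```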
 


[3/3] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-04-08 Thread samt
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e3b12fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e3b12fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e3b12fb

Branch: refs/heads/trunk
Commit: 2e3b12fbe4bd218e99060a9509c24df443e3b626
Parents: 2750b18 e616867
Author: Sam Tunnicliffe 
Authored: Fri Apr 8 12:21:48 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Apr 8 12:21:48 2016 +0100

--
 src/java/org/apache/cassandra/service/StartupChecks.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2e3b12fb/src/java/org/apache/cassandra/service/StartupChecks.java
--



[1/3] cassandra git commit: recommit CASSANDRA-10902

2016-04-08 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.5 2dab42bce -> e6168672b
  refs/heads/trunk 2750b18f9 -> 2e3b12fbe


recommit CASSANDRA-10902


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6168672
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6168672
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6168672

Branch: refs/heads/cassandra-3.5
Commit: e6168672bc421f0d0f90dd45bf3a991be578b3dc
Parents: 2dab42b
Author: Sam Tunnicliffe 
Authored: Fri Apr 8 12:05:49 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Apr 8 12:21:11 2016 +0100

--
 src/java/org/apache/cassandra/service/StartupChecks.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6168672/src/java/org/apache/cassandra/service/StartupChecks.java
--
diff --git a/src/java/org/apache/cassandra/service/StartupChecks.java 
b/src/java/org/apache/cassandra/service/StartupChecks.java
index e903721..ad6a104 100644
--- a/src/java/org/apache/cassandra/service/StartupChecks.java
+++ b/src/java/org/apache/cassandra/service/StartupChecks.java
@@ -36,6 +36,7 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.StartupException;
 import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.utils.*;
 
 /**
@@ -230,6 +231,11 @@ public class StartupChecks
 public void execute() throws StartupException
 {
  final Set<String> invalid = new HashSet<>();
+final Set<String> nonSSTablePaths = new HashSet<>();
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getCommitLogLocation()));
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getSavedCachesLocation()));
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getHintsDirectory()));
+
  FileVisitor<Path> sstableVisitor = new SimpleFileVisitor<Path>()
 {
 public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IOException
@@ -253,7 +259,8 @@ public class StartupChecks
 {
 String name = dir.getFileName().toString();
 return (name.equals(Directories.SNAPSHOT_SUBDIR)
-|| name.equals(Directories.BACKUPS_SUBDIR))
+|| name.equals(Directories.BACKUPS_SUBDIR)
+|| nonSSTablePaths.contains(dir.toFile().getCanonicalPath()))
? FileVisitResult.SKIP_SUBTREE
: FileVisitResult.CONTINUE;
 }
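The recommitted CASSANDRA-10902 change above skips directories that are known to hold non-SSTable data (commit log, saved caches, hints) during startup validation, by comparing canonical paths and returning `SKIP_SUBTREE`. A self-contained sketch of that pruning pattern with `java.nio.file` (hypothetical helper names; the real check lives in `StartupChecks`):

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.*;

// Hedged sketch: walk a data directory tree but prune any directory whose
// canonical path is registered as non-SSTable territory, so its contents
// are never mistaken for corrupt SSTables.
public class SkipNonSSTableDirs {
    public static List<Path> listDataFiles(Path root, Set<String> nonSSTablePaths) throws IOException {
        List<Path> seen = new ArrayList<>();
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                // Canonical-path comparison, as in the patch, so symlinks
                // and relative configs still match.
                return nonSSTablePaths.contains(dir.toFile().getCanonicalPath())
                     ? FileVisitResult.SKIP_SUBTREE
                     : FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                seen.add(file);
                return FileVisitResult.CONTINUE;
            }
        });
        return seen;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("data");
        Path commitlog = Files.createDirectory(root.resolve("commitlog"));
        Files.createFile(commitlog.resolve("CommitLog-1.log"));
        Path table = Files.createDirectory(root.resolve("ks"));
        Files.createFile(table.resolve("ma-1-big-Data.db"));

        Set<String> skip = new HashSet<>();
        skip.add(commitlog.toFile().getCanonicalPath());
        // Only the file under ks/ survives; commitlog/ is pruned whole.
        System.out.println(listDataFiles(root, skip).size());
    }
}
```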



[2/3] cassandra git commit: recommit CASSANDRA-10902

2016-04-08 Thread samt
recommit CASSANDRA-10902


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6168672
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6168672
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6168672

Branch: refs/heads/trunk
Commit: e6168672bc421f0d0f90dd45bf3a991be578b3dc
Parents: 2dab42b
Author: Sam Tunnicliffe 
Authored: Fri Apr 8 12:05:49 2016 +0100
Committer: Sam Tunnicliffe 
Committed: Fri Apr 8 12:21:11 2016 +0100

--
 src/java/org/apache/cassandra/service/StartupChecks.java | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6168672/src/java/org/apache/cassandra/service/StartupChecks.java
--
diff --git a/src/java/org/apache/cassandra/service/StartupChecks.java 
b/src/java/org/apache/cassandra/service/StartupChecks.java
index e903721..ad6a104 100644
--- a/src/java/org/apache/cassandra/service/StartupChecks.java
+++ b/src/java/org/apache/cassandra/service/StartupChecks.java
@@ -36,6 +36,7 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.StartupException;
 import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.utils.*;
 
 /**
@@ -230,6 +231,11 @@ public class StartupChecks
 public void execute() throws StartupException
 {
  final Set<String> invalid = new HashSet<>();
+final Set<String> nonSSTablePaths = new HashSet<>();
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getCommitLogLocation()));
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getSavedCachesLocation()));
+nonSSTablePaths.add(FileUtils.getCanonicalPath(DatabaseDescriptor.getHintsDirectory()));
+
  FileVisitor<Path> sstableVisitor = new SimpleFileVisitor<Path>()
 {
 public FileVisitResult visitFile(Path file, 
BasicFileAttributes attrs) throws IOException
@@ -253,7 +259,8 @@ public class StartupChecks
 {
 String name = dir.getFileName().toString();
 return (name.equals(Directories.SNAPSHOT_SUBDIR)
-|| name.equals(Directories.BACKUPS_SUBDIR))
+|| name.equals(Directories.BACKUPS_SUBDIR)
+|| nonSSTablePaths.contains(dir.toFile().getCanonicalPath()))
? FileVisitResult.SKIP_SUBTREE
: FileVisitResult.CONTINUE;
 }



[2/3] cassandra git commit: recommit CASSANDRA-10817

2016-04-08 Thread marcuse
recommit CASSANDRA-10817


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2dab42bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2dab42bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2dab42bc

Branch: refs/heads/trunk
Commit: 2dab42bce7ce1fe6c6dfee1c6fd8ad5a09734e37
Parents: 89bd935
Author: Marcus Eriksson 
Authored: Fri Apr 8 13:03:25 2016 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 8 13:03:25 2016 +0200

--
 src/java/org/apache/cassandra/cql3/Cql.g | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2dab42bc/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 5cb479c..d191007 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -1088,7 +1088,7 @@ alterUserStatement returns [AlterRoleStatement stmt]
 RoleOptions opts = new RoleOptions();
 RoleName name = new RoleName();
 }
-: K_ALTER K_USER u=username { name.setName($u.text, false); }
+: K_ALTER K_USER u=username { name.setName($u.text, true); }
   ( K_WITH userPassword[opts] )?
   ( K_SUPERUSER { opts.setOption(IRoleManager.Option.SUPERUSER, true); }
 | K_NOSUPERUSER { opts.setOption(IRoleManager.Option.SUPERUSER, 
false); } ) ?
@@ -1103,7 +1103,7 @@ dropUserStatement returns [DropRoleStatement stmt]
 boolean ifExists = false;
 RoleName name = new RoleName();
 }
-: K_DROP K_USER (K_IF K_EXISTS { ifExists = true; })? u=username { 
name.setName($u.text, false); $stmt = new DropRoleStatement(name, ifExists); }
+: K_DROP K_USER (K_IF K_EXISTS { ifExists = true; })? u=username { 
name.setName($u.text, true); $stmt = new DropRoleStatement(name, ifExists); }
 ;
 
 /**



[3/3] cassandra git commit: Merge branch 'cassandra-3.5' into trunk

2016-04-08 Thread marcuse
Merge branch 'cassandra-3.5' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2750b18f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2750b18f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2750b18f

Branch: refs/heads/trunk
Commit: 2750b18f9acce89fe637cefe36c00c71927e7a8f
Parents: d3f4ae8 2dab42b
Author: Marcus Eriksson 
Authored: Fri Apr 8 13:03:40 2016 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 8 13:03:40 2016 +0200

--
 src/antlr/Parser.g | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2750b18f/src/antlr/Parser.g
--
diff --cc src/antlr/Parser.g
index 0b21775,000..36d4e20
mode 100644,00..100644
--- a/src/antlr/Parser.g
+++ b/src/antlr/Parser.g
@@@ -1,1578 -1,0 +1,1578 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *   http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing,
 + * software distributed under the License is distributed on an
 + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 + * KIND, either express or implied.  See the License for the
 + * specific language governing permissions and limitations
 + * under the License.
 + */
 +
 +parser grammar Parser;
 +
 +options {
 +language = Java;
 +}
 +
 +@members {
 +private final List<ErrorListener> listeners = new ArrayList<ErrorListener>();
 +protected final List<ColumnIdentifier> bindVariables = new ArrayList<ColumnIdentifier>();
 +
 +public static final Set<String> reservedTypeNames = new HashSet<String>()
 +{{
 +add("byte");
 +add("complex");
 +add("enum");
 +add("date");
 +add("interval");
 +add("macaddr");
 +add("bitstring");
 +}};
 +
 +public AbstractMarker.Raw newBindVariables(ColumnIdentifier name)
 +{
 +AbstractMarker.Raw marker = new 
AbstractMarker.Raw(bindVariables.size());
 +bindVariables.add(name);
 +return marker;
 +}
 +
 +public AbstractMarker.INRaw newINBindVariables(ColumnIdentifier name)
 +{
 +AbstractMarker.INRaw marker = new 
AbstractMarker.INRaw(bindVariables.size());
 +bindVariables.add(name);
 +return marker;
 +}
 +
 +public Tuples.Raw newTupleBindVariables(ColumnIdentifier name)
 +{
 +Tuples.Raw marker = new Tuples.Raw(bindVariables.size());
 +bindVariables.add(name);
 +return marker;
 +}
 +
 +public Tuples.INRaw newTupleINBindVariables(ColumnIdentifier name)
 +{
 +Tuples.INRaw marker = new Tuples.INRaw(bindVariables.size());
 +bindVariables.add(name);
 +return marker;
 +}
 +
 +public Json.Marker newJsonBindVariables(ColumnIdentifier name)
 +{
 +Json.Marker marker = new Json.Marker(bindVariables.size());
 +bindVariables.add(name);
 +return marker;
 +}
 +
 +public void addErrorListener(ErrorListener listener)
 +{
 +this.listeners.add(listener);
 +}
 +
 +public void removeErrorListener(ErrorListener listener)
 +{
 +this.listeners.remove(listener);
 +}
 +
 +public void displayRecognitionError(String[] tokenNames, 
RecognitionException e)
 +{
 +for (int i = 0, m = listeners.size(); i < m; i++)
 +listeners.get(i).syntaxError(this, tokenNames, e);
 +}
 +
 +protected void addRecognitionError(String msg)
 +{
 +for (int i = 0, m = listeners.size(); i < m; i++)
 +listeners.get(i).syntaxError(this, msg);
 +}
 +
 +public Map<String, String> convertPropertyMap(Maps.Literal map)
 +{
 +if (map == null || map.entries == null || map.entries.isEmpty())
 +return Collections.emptyMap();
 +
 +Map<String, String> res = new HashMap<String, String>(map.entries.size());
 +
 +for (Pair<Term.Raw, Term.Raw> entry : map.entries)
 +{
 +// Because the parser tries to be smart and recover on error (to
 +// allow displaying more than one error I suppose), we have null
 +// entries in there. Just skip those, a proper error will be 
thrown in the end.
 +if (entry.left == null || entry.right == null)
 +break;
 +
 +if (!(entry.left 

[1/3] cassandra git commit: recommit CASSANDRA-10817

2016-04-08 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.5 89bd93502 -> 2dab42bce
  refs/heads/trunk d3f4ae842 -> 2750b18f9


recommit CASSANDRA-10817


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2dab42bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2dab42bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2dab42bc

Branch: refs/heads/cassandra-3.5
Commit: 2dab42bce7ce1fe6c6dfee1c6fd8ad5a09734e37
Parents: 89bd935
Author: Marcus Eriksson 
Authored: Fri Apr 8 13:03:25 2016 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 8 13:03:25 2016 +0200

--
 src/java/org/apache/cassandra/cql3/Cql.g | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2dab42bc/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 5cb479c..d191007 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -1088,7 +1088,7 @@ alterUserStatement returns [AlterRoleStatement stmt]
 RoleOptions opts = new RoleOptions();
 RoleName name = new RoleName();
 }
-: K_ALTER K_USER u=username { name.setName($u.text, false); }
+: K_ALTER K_USER u=username { name.setName($u.text, true); }
   ( K_WITH userPassword[opts] )?
   ( K_SUPERUSER { opts.setOption(IRoleManager.Option.SUPERUSER, true); }
 | K_NOSUPERUSER { opts.setOption(IRoleManager.Option.SUPERUSER, 
false); } ) ?
@@ -1103,7 +1103,7 @@ dropUserStatement returns [DropRoleStatement stmt]
 boolean ifExists = false;
 RoleName name = new RoleName();
 }
-: K_DROP K_USER (K_IF K_EXISTS { ifExists = true; })? u=username { 
name.setName($u.text, false); $stmt = new DropRoleStatement(name, ifExists); }
+: K_DROP K_USER (K_IF K_EXISTS { ifExists = true; })? u=username { 
name.setName($u.text, true); $stmt = new DropRoleStatement(name, ifExists); }
 ;
 
 /**



[jira] [Commented] (CASSANDRA-11427) Range slice queries CL > ONE trigger read-repair of purgeable tombstones

2016-04-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232007#comment-15232007
 ] 

Stefan Podkowinski commented on CASSANDRA-11427:


I've now created {{11427-2.2_v2.patch}} by purging tombstones in 
{{ColumnFamilyStore.filter}} as suggested. I agree that it's easier and more 
efficient; I'm just wondering why it hasn't already been done there. Tests look 
good, WDYT [~slebresne]?

||2.2||
|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-11427-2.2]|
|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-11427-2.2-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-11427-2.2-testall/]|
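The idea behind the v2 patch (purging purgeable tombstones before responses are compared) can be sketched in a few lines. This is a hedged, simplified model, not Cassandra's actual classes: responses are maps from key to tombstone deletion time (null = live), and `gcBefore` is the purge horizon derived from `gc_grace_seconds`.

```java
import java.util.*;

// Hedged sketch: node A still holds a tombstone past gc_grace; node B has
// already compacted it away. Comparing raw responses makes A look like a
// superset and triggers a read repair carrying the dead tombstone back to
// B. Purging each response first makes the nodes agree.
public class PurgeableTombstoneSketch {
    // Drop tombstones older than the purge horizon; keep live cells
    // (null deletion time) and tombstones still within gc_grace.
    static Map<String, Long> purge(Map<String, Long> response, long gcBefore) {
        Map<String, Long> purged = new HashMap<>();
        for (Map.Entry<String, Long> e : response.entrySet())
            if (e.getValue() == null || e.getValue() >= gcBefore)
                purged.put(e.getKey(), e.getValue());
        return purged;
    }

    public static void main(String[] args) {
        long gcBefore = 100;
        Map<String, Long> nodeA = new HashMap<>();
        nodeA.put("a", 50L);                       // purgeable tombstone
        Map<String, Long> nodeB = new HashMap<>(); // already compacted away

        // Without purging, the responses differ, so a repair would ship
        // the dead tombstone from A to B.
        boolean repairWithoutPurge = !nodeA.equals(nodeB);

        // With purging before resolution, both responses are empty and no
        // repair is scheduled.
        boolean repairWithPurge = !purge(nodeA, gcBefore).equals(purge(nodeB, gcBefore));

        System.out.println(repairWithoutPurge + " " + repairWithPurge);
    }
}
```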


> Range slice queries CL > ONE trigger read-repair of purgeable tombstones
> 
>
> Key: CASSANDRA-11427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11427
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11427-2.1.patch
>
>
> Range queries will trigger read repairs for purgeable tombstones on hosts 
> that have already compacted those tombstones. Clusters with periodic jobs for 
> scanning data ranges will likely see tombstones resurrected through read 
> repairs, only to have them compacted again later at the destination host.
> Executing range queries (e.g. for reading token ranges) will compare the 
> actual data instead of using digests when executed with CL > ONE. Responses 
> will be consolidated by {{RangeSliceResponseResolver.Reducer}}, where the 
> result of {{RowDataResolver.resolveSuperset}} is used as the reference 
> version for the results. {{RowDataResolver.scheduleRepairs}} will then send 
> the superset to all nodes that returned a different result before. 
> Unfortunately this also covers cases where the superset is made up solely of 
> purgeable tombstone(s) that have already been compacted on the other 
> nodes. In this case a read-repair will be triggered to transfer the 
> purgeable tombstones to all nodes that returned an empty result.
> The issue can be reproduced with the provided dtest or manually using the 
> following steps:
> {noformat}
> create keyspace test1 with replication = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 2 };
> use test1;
> create table test1 ( a text, b text, primary key(a, b) ) WITH compaction = 
> {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'} AND 
> dclocal_read_repair_chance = 0 AND gc_grace_seconds = 0;
> delete from test1 where a = 'a';
> {noformat}
> {noformat}
> ccm flush;
> ccm node2 compact;
> {noformat}
> {noformat}
> use test1;
> consistency all;
> tracing on;
> select * from test1;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11427) Range slice queries CL > ONE trigger read-repair of purgeable tombstones

2016-04-08 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-11427:
---
Attachment: 11427-2.2_v2.patch

> Range slice queries CL > ONE trigger read-repair of purgeable tombstones
> 
>
> Key: CASSANDRA-11427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11427
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 11427-2.1.patch, 11427-2.2_v2.patch
>
>
> Range queries will trigger read repairs for purgeable tombstones on hosts 
> that have already compacted those tombstones. Clusters with periodic jobs that 
> scan data ranges will likely see tombstones resurrected through RRs, only to 
> have them compacted again later on the destination host.
> Executing range queries (e.g. for reading token ranges) will compare the 
> actual data instead of using digests when executed with CL > ONE. Responses 
> will be consolidated by {{RangeSliceResponseResolver.Reducer}}, where the 
> result of {{RowDataResolver.resolveSuperset}} is used as the reference 
> version for the results. {{RowDataResolver.scheduleRepairs}} will then send 
> the superset to all nodes that returned a different result before. 
> Unfortunately this also covers cases where the superset is made up solely of 
> purgeable tombstone(s) that have already been compacted on the other 
> nodes. In this case a read-repair will be triggered to transfer the 
> purgeable tombstones to all nodes that returned an empty result.
> The issue can be reproduced with the provided dtest or manually using the 
> following steps:
> {noformat}
> create keyspace test1 with replication = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 2 };
> use test1;
> create table test1 ( a text, b text, primary key(a, b) ) WITH compaction = 
> {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'} AND 
> dclocal_read_repair_chance = 0 AND gc_grace_seconds = 0;
> delete from test1 where a = 'a';
> {noformat}
> {noformat}
> ccm flush;
> ccm node2 compact;
> {noformat}
> {noformat}
> use test1;
> consistency all;
> tracing on;
> select * from test1;
> {noformat}





[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load

2016-04-08 Thread Carlos Rolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231924#comment-15231924
 ] 

Carlos Rolo commented on CASSANDRA-11363:
-

On my systems: 3.0.3 -> Logged Batches, 2.1.13 -> Unlogged Batches.

But on 2.1.13, batches were taken out at some point to see if things would 
improve, but not much improvement was seen. So it might not be related to 
batches. The test was also small (with batches disabled), so I would not count 
it as a solid test.

> Blocked NTR When Connecting Causing Excessive Load
> --
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Priority: Critical
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system).
> Currently there are between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.





[jira] [Updated] (CASSANDRA-11295) Make custom filtering more extensible via custom classes

2016-04-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11295:

Attachment: DummyFilteringRestrictions.java

[~henryman], you can attach a {{UserExpression}} to a query using an 
implementation of {{o.a.c.cql3.restrictions.Restrictions}} which adds it to the 
row filter when the query is executed. This is obviously a bit clunky and the 
interface can surely be improved, but bear in mind that this is a first pass at 
a pretty low-level API, so things can (and probably will) change. I've attached 
a sample class to illustrate. The way to use it in your {{QueryHandler}} 
would be something like:

{code}
// Register the custom expression class so it can be deserialized again
static
{
    RowFilter.UserExpression.register(DummyFilteringRestrictions.DummyFilteringExpression.class,
                                      new DummyFilteringRestrictions.DummyFilteringExpression.Deserializer());
}

public ResultMessage process(String query,
                             QueryState state,
                             QueryOptions options,
                             Map<String, ByteBuffer> customPayload)
throws RequestExecutionException, RequestValidationException
{
    ParsedStatement.Prepared parsed = QueryProcessor.getStatement(query, state.getClientState());
    if (parsed.statement instanceof SelectStatement)
    {
        ((SelectStatement) parsed.statement).getRestrictions()
                                            .getIndexRestrictions()
                                            .add(new DummyFilteringRestrictions());
    }
    return QueryProcessor.instance.processStatement(parsed.statement, state, options);
}
{code}

Of course, you'd also need to handle prepared statements, doing basically the 
same thing in {{processPrepared}}. 
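The registration step above follows a plain class-to-deserializer registry pattern. A hypothetical standalone model of that pattern (not the actual {{RowFilter.UserExpression}} code; class and method names here are invented for illustration) might look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical standalone model of the registration pattern: each custom
// expression class is mapped to a deserializer so the expression can be
// rebuilt from its serialized form when the filter is applied.
public class ExpressionRegistry {
    private final Map<Class<?>, Function<String, Object>> deserializers = new HashMap<>();

    // Associate an expression class with the function that deserializes it
    public void register(Class<?> expressionClass, Function<String, Object> deserializer) {
        deserializers.put(expressionClass, deserializer);
    }

    // Look up the registered deserializer and apply it to the payload
    public Object deserialize(Class<?> expressionClass, String payload) {
        return deserializers.get(expressionClass).apply(payload);
    }
}
```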


> Make custom filtering more extensible via custom classes 
> -
>
> Key: CASSANDRA-11295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11295
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.6
>
> Attachments: DummyFilteringRestrictions.java
>
>
> At the moment, the implementation of {{RowFilter.CustomExpression}} is 
> tightly bound to the syntax designed to support non-CQL search syntax for 
> custom 2i implementations. It might be interesting to decouple the two things 
> by making the custom expression implementation and serialization a bit more 
> pluggable. This would allow users to add their own custom expression 
> implementations to experiment with custom filtering strategies without having 
> to patch the C* source. As a minimally invasive first step, custom 
> expressions could be added programmatically via {{QueryHandler}}. Further 
> down the line, if this proves useful and we can figure out some reasonable 
> syntax we could think about adding the capability in CQL in a separate 
> ticket. 





[jira] [Resolved] (CASSANDRA-11527) Improve ColumnFilter

2016-04-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp resolved CASSANDRA-11527.
--
   Resolution: Invalid
Fix Version/s: (was: 4.x)

Oh, I completely overlooked that 
{{org.apache.cassandra.db.filter.ColumnFilter.Builder#build}} creates a 
multimap.

> Improve ColumnFilter
> 
>
> Key: CASSANDRA-11527
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11527
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Priority: Minor
>
> While working on CASSANDRA-7396, it turned out that it could be beneficial to 
> modify {{ColumnFilter}} class:
> * Allow multiple single element + slice filters for a single column
> * At the moment we fetch all cell paths for a single column and just skip the 
> values. For a subselection it feels more convenient to just return the 
> selected cells and skip the filtered cell-paths.
> * Remove Thrift related code.
> This requires a change in the serialized format of {{ColumnFilter}} and is 
> thus proposed for 4.x.





[jira] [Comment Edited] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-04-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231914#comment-15231914
 ] 

Marcus Eriksson edited comment on CASSANDRA-11353 at 4/8/16 9:20 AM:
-

Looking at the diff on CHANGES.txt it seems we lost:
* CASSANDRA-10140 - but it looks like we fixed this in CASSANDRA-10494
* CASSANDRA-9258 - but there have been several commits related to this after so 
I assume it is fixed
* CASSANDRA-10902 - and this one seems to still be lost, could you have a look 
[~carlyeks] / [~beobal] ?
* CASSANDRA-10817 - I will fix this


was (Author: krummas):
Looking at the diff on CHANGES.txt it seems we lost:
* CASSANDRA-10140 - but it looks like we fixed this in CASSANDRA-10494
* CASSANDRA-9258 - but there have been several commits related to this after so 
I assume it is fixed
* CASSANDRA-10902 - and this one seems to still be lost, could you have a look 
[~carlyeks] / [~beobal] ?
* CASSANDRA-10817 - is lost, I will fix this

> ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread 
> 
>
> Key: CASSANDRA-11353
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.6
>
>
> Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2458,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown
>  Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> 

[jira] [Commented] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-04-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231914#comment-15231914
 ] 

Marcus Eriksson commented on CASSANDRA-11353:
-

Looking at the diff on CHANGES.txt it seems we lost:
* CASSANDRA-10140 - but it looks like we fixed this in CASSANDRA-10494
* CASSANDRA-9258 - but there have been several commits related to this after so 
I assume it is fixed
* CASSANDRA-10902 - and this one seems to still be lost, could you have a look 
[~carlyeks] / [~beobal] ?
* CASSANDRA-10817 - is lost, I will fix this

> ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread 
> 
>
> Key: CASSANDRA-11353
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.6
>
>
> Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2458,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown
>  Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  

[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231896#comment-15231896
 ] 

Stefania commented on CASSANDRA-11521:
--

bq. I'd like us to tackle this and CASSANDRA-11520 as independently as 
possible, since as much as they both are meant to help the same scenario, they 
are independent optimisations.

In the proof of concept they are not independent; you can take a look at [this 
code|https://github.com/stef1927/cassandra/blob/9259/src/java/org/apache/cassandra/service/BulkReadService.java]
 when you get a chance. It should give you a very good idea of what I had in 
mind: we basically stream results without ever stopping the iteration.

I can see how they could be made independent, and I'll be sure to share the 
design upfront if that's the path we choose, but wouldn't this mean that 
internally we still process each page independently, which means creating 
sstable iterators for every single page, for example?

Do you think the approach for CL.ONE that I chose in the proof of concept would 
be too problematic for resource management?
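The trade-off in question can be sketched in isolation. This is a hypothetical standalone model (invented names, not Cassandra code) of why per-page processing multiplies iterator setup cost while a streaming read pays it once:

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical standalone sketch: paged reads open a fresh iterator for every
// page (modeling per-page sstable iterator setup), while a streaming read
// walks a single iterator to the end without stopping between pages.
public class PagingVsStreaming {
    // Paged: one new iterator per page of results
    static int pagedIteratorCount(List<Integer> rows, int pageSize) {
        int iterators = 0;
        for (int start = 0; start < rows.size(); start += pageSize) {
            iterators++; // each page opens a fresh iterator
            Iterator<Integer> it =
                rows.subList(start, Math.min(start + pageSize, rows.size())).iterator();
            while (it.hasNext()) it.next(); // consume the page
        }
        return iterators;
    }

    // Streaming: a single iterator serves the whole result set
    static int streamingIteratorCount(List<Integer> rows) {
        Iterator<Integer> it = rows.iterator();
        while (it.hasNext()) it.next();
        return 1;
    }
}
```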

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231869#comment-15231869
 ] 

Sylvain Lebresne commented on CASSANDRA-11521:
--

I'd like us to tackle this and CASSANDRA-11520 as independently as possible, 
since as much as they both are meant to help the same scenario, they are 
independent optimisations. For this, it's not immediately clear to me that it'd 
help in any way to limit to CL.ONE and so I wouldn't do it unless there is good 
reason to. But that could boil down to us not having the same idea of how this 
should work in the end, or me missing some of the challenges, so we can discuss 
more precisely once you've provided more details on how you plan on tackling 
this. But please, do provide some reasonably precise design we can discuss 
upfront.

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.





[jira] [Commented] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-04-08 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231853#comment-15231853
 ] 

Sylvain Lebresne commented on CASSANDRA-11353:
--

bq. we should probably check that nothing else was lost

Yeah, but how? Since I don't understand what happened, nor do I know how to 
even demonstrate such a problem with git, I have no clue where to start.

But anyway, committed this one at least.

> ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread 
> 
>
> Key: CASSANDRA-11353
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.6
>
>
> Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2458,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown
>  Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  

[jira] [Updated] (CASSANDRA-11353) ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread

2016-04-08 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11353:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ERROR [CompactionExecutor] CassandraDaemon.java Exception in thread 
> 
>
> Key: CASSANDRA-11353
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11353
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Local Write-Read Paths
>Reporter: Alexey Ivanchin
>Assignee: Sylvain Lebresne
>  Labels: error
> Fix For: 3.6
>
>
> Hey. Please help me with a problem. Recently I updated to 3.3.0 and this 
> problem appeared in the logs.
> ERROR [CompactionExecutor:2458] 2016-03-10 12:41:15,127 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[CompactionExecutor:2458,1,main]
> java.lang.AssertionError: null
> at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:49) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:83) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BufferCell.purge(BufferCell.java:175) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.lambda$purge$107(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData$$Lambda$68/1224572667.apply(Unknown
>  Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.transformAndFilter(ComplexColumnData.java:170)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:165)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.ComplexColumnData.purge(ComplexColumnData.java:43)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.lambda$purge$102(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow$$Lambda$67/1968133513.apply(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.utils.btree.BTree$FiltrationTracker.apply(BTree.java:650)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:693) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.btree.BTree.transformAndFilter(BTree.java:668) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.rows.BTreeRow.transformAndFilter(BTreeRow.java:338) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.rows.BTreeRow.purge(BTreeRow.java:333) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToRow(PurgeFunction.java:88)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-3.3.0.jar:3.3.0]
> at 
