[jira] [Updated] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2016-04-19 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-11381:

Since Version: 2.0.16
Fix Version/s: 2.2.7
   3.0.6
   3.6
   2.1.14

> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
> Fix For: 2.1.14, 3.6, 3.0.6, 2.2.7
>
> Attachments: 11381-2.0.txt, 11381-2.1.txt, 11381-2.2.txt, 
> 11381-3.0.txt, 11381-trunk.txt, dtest-11381-trunk.txt
>
>
> A node started with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, e.g. PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during startup.
> Without {{Auth.setup()}} having been called in {{StorageService}}, clients 
> connecting to the node fail with the node throwing:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception is thrown from this 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]:
> {code}
> ResultMessage.Rows rows = authenticateStatement.execute(QueryState.forInternalCalls(),
>                                                         new QueryOptions(consistencyForUser(username),
>                                                                          Lists.newArrayList(ByteBufferUtil.bytes(username))));
> {code}
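
For illustration only, a minimal self-contained sketch (hypothetical class and field names, not Cassandra code) of the failure pattern described above: {{authenticate()}} depends on a statement that only {{setup()}} prepares, so a startup path that skips {{Auth.setup()}} leaves it null and the first login hits the NPE.

{code}
// Hypothetical stand-in classes; only the setup()/authenticate() ordering mirrors the bug.
public class AuthSetupSketch
{
    static class PasswordAuthenticatorLike
    {
        private Object authenticateStatement; // normally prepared in setup(); null until then

        void setup()
        {
            authenticateStatement = new Object(); // stands in for preparing the system_auth query
        }

        void authenticate(String username)
        {
            // Dereferencing the statement before setup() has run throws NullPointerException,
            // analogous to PasswordAuthenticator.authenticate() at line 119.
            System.out.println(authenticateStatement.hashCode() + " " + username);
        }
    }

    public static void main(String[] args)
    {
        PasswordAuthenticatorLike auth = new PasswordAuthenticatorLike();
        // The Auth.setup() equivalent is intentionally skipped, as happens with join_ring=false.
        auth.authenticate("cassandra"); // throws NullPointerException
    }
}
{code}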



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests

2016-04-19 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15249315#comment-15249315
 ] 

mck commented on CASSANDRA-11381:
-

[~JoshuaMcKenzie], [~jkni], what's up?

> Node running with join_ring=false and authentication can not serve requests
> ---
>
> Key: CASSANDRA-11381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mck
>Assignee: mck
> Attachments: 11381-2.0.txt, 11381-2.1.txt, 11381-2.2.txt, 
> 11381-3.0.txt, 11381-trunk.txt, dtest-11381-trunk.txt
>
>
> A node started with {{-Dcassandra.join_ring=false}} in a cluster that has 
> authentication configured, e.g. PasswordAuthenticator, won't be able to serve 
> requests. This is because {{Auth.setup()}} never gets called during startup.
> Without {{Auth.setup()}} having been called in {{StorageService}}, clients 
> connecting to the node fail with the node throwing:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119)
> at 
> org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The exception is thrown from this 
> [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119]:
> {code}
> ResultMessage.Rows rows = authenticateStatement.execute(QueryState.forInternalCalls(),
>                                                         new QueryOptions(consistencyForUser(username),
>                                                                          Lists.newArrayList(ByteBufferUtil.bytes(username))));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11412) Many sstablescanners opened during repair

2016-04-19 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11412:

   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.0.6
   3.6
   Status: Resolved  (was: Ready to Commit)

committed, thanks

> Many sstablescanners opened during repair
> -
>
> Key: CASSANDRA-11412
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11412
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.6, 3.0.6
>
>
> Since CASSANDRA-5220 we open [one sstablescanner per range per 
> sstable|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java#L374].
> If compaction gets behind and you are running vnodes with 256 tokens and 
> RF=3, this can become a problem (i.e., {{768 * number of sstables}} scanners).
> We could probably refactor this similarly to the way we handle scanners in 
> LCS: only open a scanner once we actually need it.
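
A minimal, self-contained sketch (hypothetical types, not Cassandra's actual API) of the refactoring suggested above: instead of one scanner per (range, sstable) pair, pass all ranges to a single scanner per sstable.

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class ScannerCountSketch
{
    interface Scanner {}                 // stand-in for ISSTableScanner

    static class SSTable
    {
        // stand-in for a reader that can scan several token ranges in one pass
        Scanner getScanner(Collection<String> ranges) { return new Scanner() {}; }
    }

    // Old behaviour (conceptually): one scanner per (range, sstable) pair,
    // i.e. 768 * number of sstables with 256 vnodes and RF=3.
    static List<Scanner> perRange(Collection<SSTable> sstables, Collection<String> ranges)
    {
        List<Scanner> scanners = new ArrayList<>();
        for (String range : ranges)
            for (SSTable sstable : sstables)
                scanners.add(sstable.getScanner(Collections.singleton(range)));
        return scanners;
    }

    // New behaviour: one scanner per sstable, covering all requested ranges.
    static List<Scanner> perSSTable(Collection<SSTable> sstables, Collection<String> ranges)
    {
        List<Scanner> scanners = new ArrayList<>();
        for (SSTable sstable : sstables)
            scanners.add(sstable.getScanner(ranges));
        return scanners;
    }

    public static void main(String[] args)
    {
        List<SSTable> sstables = List.of(new SSTable(), new SSTable());
        List<String> ranges = List.of("r1", "r2", "r3");
        System.out.println(perRange(sstables, ranges).size());   // 6
        System.out.println(perSSTable(sstables, ranges).size()); // 2
    }
}
{code}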



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Only open one sstable scanner per sstable

2016-04-19 Thread marcuse
Only open one sstable scanner per sstable

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-11412


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b48a0bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b48a0bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b48a0bf

Branch: refs/heads/trunk
Commit: 9b48a0bf430b995332e1a4dde20ba7482175ef99
Parents: 0e96d3e
Author: Marcus Eriksson 
Authored: Thu Mar 31 16:32:11 2016 +0200
Committer: Marcus Eriksson 
Committed: Wed Apr 20 06:28:55 2016 +0200

--
 CHANGES.txt |  1 +
 .../compaction/AbstractCompactionStrategy.java  | 11 --
 .../compaction/CompactionStrategyManager.java   | 21 +++---
 .../compaction/LeveledCompactionStrategy.java   | 41 
 .../io/sstable/format/big/BigTableReader.java   |  5 ++-
 5 files changed, 43 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6586299..cc50a23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Only open one sstable scanner per sstable (CASSANDRA-11412)
  * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
  * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
  * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index ae8839e..c205d5c 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -263,6 +263,11 @@ public abstract class AbstractCompactionStrategy
         });
     }
 
+
+    public ScannerList getScanners(Collection<SSTableReader> sstables, Range<Token> range)
+    {
+        return range == null ? getScanners(sstables, (Collection<Range<Token>>)null) : getScanners(sstables, Collections.singleton(range));
+    }
     /**
      * Returns a list of KeyScanners given sstables and a range on which to scan.
      * The default implementation simply grab one SSTableScanner per-sstable, but overriding this method
@@ -270,14 +275,14 @@ public abstract class AbstractCompactionStrategy
      * LeveledCompactionStrategy for instance).
      */
     @SuppressWarnings("resource")
-    public ScannerList getScanners(Collection<SSTableReader> sstables, Range<Token> range)
+    public ScannerList getScanners(Collection<SSTableReader> sstables, Collection<Range<Token>> ranges)
     {
         RateLimiter limiter = CompactionManager.instance.getRateLimiter();
         ArrayList<ISSTableScanner> scanners = new ArrayList<ISSTableScanner>();
         try
         {
             for (SSTableReader sstable : sstables)
-                scanners.add(sstable.getScanner(range, limiter));
+                scanners.add(sstable.getScanner(ranges, limiter));
         }
         catch (Throwable t)
         {
@@ -349,7 +354,7 @@ public abstract class AbstractCompactionStrategy
 
     public ScannerList getScanners(Collection<SSTableReader> toCompact)
     {
-        return getScanners(toCompact, null);
+        return getScanners(toCompact, (Collection<Range<Token>>)null);
     }
 
     /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index bd72c64..82fd872 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -353,7 +353,7 @@ public class CompactionStrategyManager implements INotificationConsumer
      *
      * Delegates the call to the compaction strategies to allow LCS to create a scanner
      * @param sstables
-     * @param range
+     * @param ranges
      * @return
      */
     @SuppressWarnings("resource")
@@ -370,25 +370,16 @@ public class CompactionStrategyManager implements INotificationConsumer
         }
 
         Set<ISSTableScanner> scanners = new HashSet<>(sstables.size());
-
-        for (Range<Token> range : ranges)
-        {
-            AbstractCompactionStrategy.ScannerList 

[1/3] cassandra git commit: Only open one sstable scanner per sstable

2016-04-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 0e96d3e52 -> 9b48a0bf4
  refs/heads/trunk c83729f41 -> 0541597e7


Only open one sstable scanner per sstable

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-11412


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b48a0bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b48a0bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b48a0bf

Branch: refs/heads/cassandra-3.0
Commit: 9b48a0bf430b995332e1a4dde20ba7482175ef99
Parents: 0e96d3e
Author: Marcus Eriksson 
Authored: Thu Mar 31 16:32:11 2016 +0200
Committer: Marcus Eriksson 
Committed: Wed Apr 20 06:28:55 2016 +0200

--
 CHANGES.txt |  1 +
 .../compaction/AbstractCompactionStrategy.java  | 11 --
 .../compaction/CompactionStrategyManager.java   | 21 +++---
 .../compaction/LeveledCompactionStrategy.java   | 41 
 .../io/sstable/format/big/BigTableReader.java   |  5 ++-
 5 files changed, 43 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6586299..cc50a23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * Only open one sstable scanner per sstable (CASSANDRA-11412)
  * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
  * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
  * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index ae8839e..c205d5c 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -263,6 +263,11 @@ public abstract class AbstractCompactionStrategy
         });
     }
 
+
+    public ScannerList getScanners(Collection<SSTableReader> sstables, Range<Token> range)
+    {
+        return range == null ? getScanners(sstables, (Collection<Range<Token>>)null) : getScanners(sstables, Collections.singleton(range));
+    }
     /**
      * Returns a list of KeyScanners given sstables and a range on which to scan.
      * The default implementation simply grab one SSTableScanner per-sstable, but overriding this method
@@ -270,14 +275,14 @@ public abstract class AbstractCompactionStrategy
      * LeveledCompactionStrategy for instance).
      */
     @SuppressWarnings("resource")
-    public ScannerList getScanners(Collection<SSTableReader> sstables, Range<Token> range)
+    public ScannerList getScanners(Collection<SSTableReader> sstables, Collection<Range<Token>> ranges)
     {
         RateLimiter limiter = CompactionManager.instance.getRateLimiter();
         ArrayList<ISSTableScanner> scanners = new ArrayList<ISSTableScanner>();
         try
         {
             for (SSTableReader sstable : sstables)
-                scanners.add(sstable.getScanner(range, limiter));
+                scanners.add(sstable.getScanner(ranges, limiter));
         }
         catch (Throwable t)
         {
@@ -349,7 +354,7 @@ public abstract class AbstractCompactionStrategy
 
     public ScannerList getScanners(Collection<SSTableReader> toCompact)
     {
-        return getScanners(toCompact, null);
+        return getScanners(toCompact, (Collection<Range<Token>>)null);
     }
 
     /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9b48a0bf/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index bd72c64..82fd872 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -353,7 +353,7 @@ public class CompactionStrategyManager implements INotificationConsumer
      *
      * Delegates the call to the compaction strategies to allow LCS to create a scanner
      * @param sstables
-     * @param range
+     * @param ranges
      * @return
      */
     @SuppressWarnings("resource")
@@ -370,25 +370,16 @@ public class CompactionStrategyManager implements INotificationConsumer
         }
 
         Set<ISSTableScanner> scanners = new 

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-19 Thread marcuse
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0541597e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0541597e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0541597e

Branch: refs/heads/trunk
Commit: 0541597e792cb43ef019db7257e66da8671beea4
Parents: c83729f 9b48a0b
Author: Marcus Eriksson 
Authored: Wed Apr 20 06:33:51 2016 +0200
Committer: Marcus Eriksson 
Committed: Wed Apr 20 06:58:40 2016 +0200

--
 CHANGES.txt |  1 +
 .../compaction/AbstractCompactionStrategy.java  | 11 --
 .../compaction/CompactionStrategyManager.java   | 30 +-
 .../compaction/LeveledCompactionStrategy.java   | 41 
 .../io/sstable/format/big/BigTableReader.java   |  5 ++-
 5 files changed, 47 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0541597e/CHANGES.txt
--
diff --cc CHANGES.txt
index 1e6bcd6,cc50a23..08efbfb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,58 -1,5 +1,59 @@@
 -3.0.6
 +3.6
 + * Optimize the overlapping lookup by calculating all the
 +   bounds in advance (CASSANDRA-11571)
 + * Support json/yaml output in nodetool tablestats (CASSANDRA-5977)
 + * (stress) Add datacenter option to -node options (CASSANDRA-11591)
 + * Fix handling of empty slices (CASSANDRA-11513)
 + * Make number of cores used by cqlsh COPY visible to testing code 
(CASSANDRA-11437)
 + * Allow filtering on clustering columns for queries without secondary 
indexes (CASSANDRA-11310)
 + * Refactor Restriction hierarchy (CASSANDRA-11354)
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml 
(CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool 
(CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization 
(CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building 
(CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption 
(CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   corruption (CASSANDRA-10112)
 + * Resolve some eclipse-warnings (CASSANDRA-11086)
 + * (cqlsh) Show static columns in a different color (CASSANDRA-11059)
 + * Allow to remove TTLs on table 

[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone

2016-04-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15249254#comment-15249254
 ] 

Stefania commented on CASSANDRA-11137:
--

I agree with Aleksey, adding a timezone is sufficient for now.

> JSON datetime formatting needs timezone
> ---
>
> Key: CASSANDRA-11137
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Stefania
>Assignee: Alex Petrov
>
> The JSON date time string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from 
> event_by_user_timestamp ;
>  created_at
> ---
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --
>  2016-01-04 15:05:47+
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated, the JSON timestamp is not returned in 
> UTC.
> At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd 
> HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a 
> minimum add the timezone?
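
For illustration, a small JDK-only sketch (not Cassandra's actual formatting code) of the ambiguity: the pattern above formats in the JVM's local timezone with no offset, whereas adding a zone designator and pinning UTC makes the JSON value unambiguous.

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class JsonTimestampSketch
{
    public static void main(String[] args)
    {
        Date ts = new Date(1451919947123L); // 2016-01-04 15:05:47.123 UTC

        // Current style: local time, no offset -> the reader cannot tell which zone it is in.
        SimpleDateFormat noZone = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        System.out.println(noZone.format(ts));

        // With a zone designator and UTC pinned, the value is unambiguous,
        // e.g. 2016-01-04 15:05:47.123+0000
        SimpleDateFormat withZone = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSZ");
        withZone.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(withZone.format(ts));
    }
}
{code}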



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11614) Expired tombstones are purged before locally applied (Cassandra 1.2 EOL)

2016-04-19 Thread Tatsuya Kawano (JIRA)
Tatsuya Kawano created CASSANDRA-11614:
--

 Summary: Expired tombstones are purged before locally applied 
(Cassandra 1.2 EOL)
 Key: CASSANDRA-11614
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11614
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Tatsuya Kawano


- Found in Cassandra 1.2.19.
- Cannot reproduce in Cassandra 2.1.13.

We have several customers using Cassandra 1.2.19 via the Thrift API. We understand 
1.2.x is already EOL and the Thrift API is deprecated. We found this problem in 
1.2.19 and decided to fix it ourselves and maintain our own patched version 
of Cassandra 1.2.x until all customer deployments are migrated to recent 
Cassandra versions.

I wanted to share this info with other Cassandra 1.2.x users. Also, any feedback 
about the patch (shown at the bottom of this message) is welcome.


h3. Problem:

Cassandra 1.2.19 may purge expired tombstones before locally applying them. 
This problem happens when both of the following conditions are met:

- Columns are deleted via the Thrift API {{remove()}} or {{batch_mutate()}} with a 
deletion
- And, a minor compaction is performed with LazilyCompactedRow (large row 
compaction)

We use the size-tiered compaction strategy for some column families, and the leveled 
compaction strategy for others. This problem happens with both strategies.

h3. Steps to Reproduce:

(Single node Cassandra 1.2.19)

 1. In cassandra.yaml, set in_memory_compaction_limit_in_mb to 1.
 2. Create a keyspace "myks" and column family "mycf" with, for example, 
SizeTieredCompactionStrategy, 
 and set gc_grace to 0 (so that tombstones will expire immediately).

{code}
cassandra-cli -h 127.0.0.1 < No column should be returned.
10. Run a user-defined compaction from JMX with SSTable A and C as the input.

e.g. with jmxterm
(replace <generation> with the actual generation number of each SSTable)

{code}
java -jar /path/to/jmxterm-1.0-alpha-4-uber.jar
$> open 127.0.0.1:7199
$> bean org.apache.cassandra.db:type=CompactionManager
$> run forceUserDefinedCompaction myks 
myks-mycf-ic-<generation>-Data.db,myks-mycf-ic-<generation>-Data.db
{code}

11. Ensure the following message is written to the system.log:
"Compacting a large row ..."
12. Once compaction is finished, get the column again.
-> (*expected*) The column should not be returned.
   (*actual*)   The column is returned.

h3. Cause:

I found {{row.getColumnFamily().deletionInfo().maxTimestamp()}} (where the 
{{row}} is an instance of OnDiskAtomIterator) is always set to 
{{Long.MIN_VALUE}} for non-system column families. This value is used by 
{{CompactionController#shouldPurge()}} for compacting a row with 
LazilyCompactedRow to determine if expired tombstones in a row can be purged. 
The MIN_VALUE causes {{#shouldPurge()}} to almost always return true, so all 
expired tombstones in the row will be purged even when they have not been 
locally applied.

I do not know if this is intended behavior, but DeletionInfo is not updated by 
{{DeleteStatement#mutationForKey()}} for a single-column deletion (unless it is 
a range tombstone).


h3. Workaround:

Increase gc_grace to something large enough that tombstones will not be 
purged.


h3. Solution:

Change LazilyCompactedRow to use {{Long.MAX_VALUE}} as maxDelTimestamp when 
calling {{#shouldPurge()}}.

{{#shouldPurge()}} considers that tombstones in a row can be purged when one of 
the following conditions is met (sketched below):

1) {{maxDelTimestamp}} is smaller than the {{minTimestamp}} of overlapping 
SSTables.
2) Or, BloomFilters of overlapping SSTables indicate they do not contain the 
row that is being compacted.
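
For illustration, a minimal sketch of that decision (hypothetical signature, not the actual 1.2 {{CompactionController#shouldPurge()}} code):

{code}
public class ShouldPurgeSketch
{
    // Tombstones in the row may be dropped if either of the two conditions above holds.
    static boolean shouldPurge(long maxDelTimestamp,
                               long minTimestampOfOverlappingSSTables,
                               boolean overlapsMayContainRow)
    {
        // Condition 1: every overlapping sstable only holds data newer than the deletion.
        if (maxDelTimestamp < minTimestampOfOverlappingSSTables)
            return true;
        // Condition 2: bloom filters say no overlapping sstable contains this row at all.
        return !overlapsMayContainRow;
    }

    public static void main(String[] args)
    {
        // With maxDelTimestamp stuck at Long.MIN_VALUE, condition 1 is almost always true (the bug);
        // passing Long.MAX_VALUE disables condition 1, which is what the patch below does.
        System.out.println(shouldPurge(Long.MIN_VALUE, 1000L, true)); // true
        System.out.println(shouldPurge(Long.MAX_VALUE, 1000L, true)); // false
    }
}
{code}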

Currently LazilyCompactedRow uses 
{{row.getColumnFamily().deletionInfo().maxTimestamp()}} as {{maxDelTimestamp}}, 
which causes the problem in our environment because it is always 
{{Long.MIN_VALUE}}. Instead, I will change LazilyCompactedRow to use 
{{Long.MAX_VALUE}} to disable the condition 1).

{code}
diff --git a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 433794a..0995a99 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -84,7 +84,10 @@ public class LazilyCompactedRow extends AbstractCompactedRow implements Iterable<OnDiskAtom>
             else
                 emptyColumnFamily.delete(cf);
         }
-        this.shouldPurge = controller.shouldPurge(key, maxDelTimestamp);
+
+        // Do not use maxDelTimestamp here, but Long.MAX_VALUE, because
+        // maxDelTimestamp may not be updated for some delete operations.
+        this.shouldPurge = controller.shouldPurge(key, Long.MAX_VALUE);
 
         try
         {
{code}

Note that I would not change the behavior of DeleteStatement#mutationForKey() 
to update DeletionInfo for single column 

[jira] [Commented] (CASSANDRA-11542) Create a benchmark to compare HDFS and Cassandra bulk read times

2016-04-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15249213#comment-15249213
 ] 

Stefania commented on CASSANDRA-11542:
--

I've run [the 
benchmark|https://github.com/stef1927/spark-load-perf/tree/master] described 
above on a 5-node GCE {{n1-standard-8}} cluster (30 GB RAM and 8 virtual cores 
per node, HDDs).

The following schemas were tested:

* {{CREATE TABLE ks.schema1 (id TEXT, timestamp BIGINT, val1 INT, val2 INT, 
val3 INT, val4 INT, val5 INT, val6 INT, val7 INT, val8 INT, val9 INT, val10 
INT, PRIMARY KEY (id, timestamp))}}

* {{CREATE TABLE ks.schema2 (id TEXT, timestamp BIGINT, val1 INT, val2 INT, 
val3 INT, val4 INT, val5 INT, val6 INT, val7 INT, val8 INT, val9 INT, val10 
INT, PRIMARY KEY ((id, timestamp)))}}

* {{CREATE TABLE ks.schema3 (id TEXT, timestamp BIGINT, data TEXT, PRIMARY KEY 
(id, timestamp))}}

* {{CREATE TABLE ks.schama4 (id TEXT, timestamp BIGINT, data TEXT, PRIMARY KEY 
((id, timestamp)))}}

The first two schemas are identical except that the second schema uses a 
composite partition key whilst the first one uses a clustering key. The same is 
true for the third and fourth schemas. The difference between the first two 
schemas and the last two is that the 10 integer values are encoded into a 
string in the last two schemas. This was done to measure the impact of reading 
multiple values from Cassandra, whilst the impact of clustering rows can be 
determined by comparing schemas one and two, or three and four.

15 million rows of random data were generated and stored in the following 
sources:

* Cassandra
* A CSV file stored in HDFS
* A Parquet file stored in HDFS

After generating the data, the Cassandra tables were flushed and compacted. The 
OS page cache was also flushed after generating the data, and after every test 
run, via {{sync && echo 3 | sudo tee /proc/sys/vm/drop_caches}}. The HDFS files 
were divided into 1000 partitions due to how the data was generated.

The benchmark either retrieves a Spark RDD (Resilient Distributed Dataset) or 
a DF (DataFrame). The difference between the two is that the RDD contains the 
entire table or file data, whilst the data frame only contains the two columns 
that are used to produce the final result. The following tests were performed 
in random order:

* *Cassandra RDD:* the entire Cassandra table is loaded into an RDD via 
{{sc.cassandraTable}};
* *CSV RDD:* the CSV data is loaded into an RDD via {{sc.textFile}};
* *Parquet RDD:* the Parquet data is loaded into an RDD via 
{{sqlContext.read.parquet}}
* *Cassandra DF:* a SELECT predicate is pushed to the server via 
{{CassandraSQLContext}} to retrieve two columns that are saved into a data 
frame;
* *CSV DF:* the CSV data is loaded into a DF via the spark SQL context using 
{{com.databricks.spark.csv}} as the format, and two columns are saved in a data 
frame; 
* *Parquet DF:* a SELECT predicate is used via {{SQLContext}} to retrieve two 
columns that are saved into a data frame.

The RDD or DF is iterated and the result is calculated by taking the global 
maximum of the per-row maximum of two columns. The time taken to create 
the RDD or DF and to iterate it is then measured.
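
For illustration, a tiny plain-Java sketch (not the actual Spark job) of that aggregation:

{code}
import java.util.List;

public class MaxOfMaxSketch
{
    public static void main(String[] args)
    {
        // Each row contributes max(colA, colB); the result is the global maximum of those values.
        List<int[]> rows = List.of(new int[]{3, 7}, new int[]{9, 2}, new int[]{5, 5});
        int result = rows.stream()
                         .mapToInt(r -> Math.max(r[0], r[1]))
                         .max()
                         .getAsInt();
        System.out.println(result); // 9
    }
}
{code}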

h3. RDD Results

*Schema1*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|3.494837|5.478472|43.423967|8|12|
|Run 2|2.845326|5.167405|47.170665|9|17|
|Run 3|2.613721|4.904634|48.451015|10|19|
|Average|2.98|5.18|46.35|9|16|
|Std. Dev.|0.46|0.29|2.61| | |


*Schema2*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|3.486563|5.635907|46.00437|8|13|
|Run 2|2.68518|5.13979|46.108184|9|17|
|Run 3|2.673291|5.035654|46.076284|9|17|
|Average|2.95|5.27|46.06|9|16|
|Std. Dev.|0.47|0.32|0.05| | |

*Schema3*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|6.122885|6.79348|29.643609|4|5|
|Run 2|5.826286|6.563861|32.900336|5|6|
|Run 3|5.751427|6.41375|33.176358|5|6|
|Average|5.90|6.59|31.91|5|5|
|Std. Dev.|0.20|0.19|1.96| | |

*Schema4*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|6.137645|7.511649|29.518883|4|5|
|Run 2|5.984526|6.569239|30.723268|5|5|
|Run 3|5.763102|6.590789|30.789137|5|5|
|Average|5.96|6.89|30.34|4|5|
|Std. Dev.|0.19|0.54|0.72| | |


h3. DF Results

*Schema1*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|2.843182|15.651141|37.997299|2|13|
|Run 2|2.357436|11.582413|30.836383|3|13|
|Run 3|2.386732|11.75583|30.061433|3|13|
|Average|2.53|13.00|32.97|3|13|
|Std. Dev.|0.27|2.30|4.38| | |


*Schema2*

||15M records||Parquet||CSV||Cassandra||Cassandra / CSV||Cassandra / Parquet||
|Run 1|3.016107|12.484605|95.12199|8|32|
|Run 2|2.455694|12.13422|37.583736|3|15|
|Run 3|2.329835|12.007215|34.966389|3|15|
|Average|2.60|12.21|55.89|5|21|
|Std. Dev.|0.37|0.25|34.00| | |


*Schema3*

||15M 

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15249212#comment-15249212
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


I am impressed (and bothered) at how much I missed the forest for the trees on 
that one - I refactored out the {{CommitLogReplayer}} behavior quite a while 
before adding the segment/offset skipping logic in the CommitLogReader for CDC, 
and it never clicked that I was just duplicating the existing CommitLogReplayer 
globalPosition skip. I now better understand where the confusion in our discussion 
(and your reading of the code) stemmed from.

Pushed a commit that does the following:
* Moved {{CommitLogReplayer}} skip logic into {{CommitLogReader}}
* Unified on minPosition in {{CommitLogReader}} rather than old startPosition
* Removed superfluous interface methods
* Tidied up and commented various read* methods in CommitLogReader
* Commented CommitLogSegment.nextId to clarify that we rely on it for correct 
ordering between multiple CLSM
* Revised static initializer in CommitLogSegment to take CDC log location into 
account on idBase determination
* Added comment in CommitLog reinforcing the need for the above

The fact that none of us caught the idBase determination in CommitLogSegment's 
init makes me wary, and I agree with you that this needs further testing. Where 
are we with that [~mambocab]?

Regarding the DirectorySizeCalculator, while I much prefer the elegance of your 
one-liner

# I like to avoid changing code that's battle-tested and working during an 
unrelated refactor
# it's a micro-optimization in a part of the code that's not on the critical path and 
where the delta will be on the order of microseconds for the average case 
(though it's a large simplification and reduction in code as well, so I'd do it for 
that alone), and
# the benchmarking results of testing that on both win10 and linux had some 
surprises in store:

{noformat}
Windows, skylake, SSD:
   DirectorySizeCalculator
  [java] Result: 31.061 ±(99.9%) 0.287 ms/op [Average]
  [java]   Statistics: (min, avg, max) = (30.861, 31.061, 33.028), stdev = 
0.430
  [java]   Confidence interval (99.9%): [30.774, 31.349]
   One liner:
  [java] Result: 116.941 ±(99.9%) 1.238 ms/op [Average]
  [java]   Statistics: (min, avg, max) = (115.163, 116.941, 124.950), stdev 
= 1.854
  [java]   Confidence interval (99.9%): [115.703, 118.179]
Linux, haswell, SSD:
   DirectorySizeCalculator
  [java] Result: 76.765 ±(99.9%) 0.876 ms/op [Average]
  [java]   Statistics: (min, avg, max) = (75.586, 76.765, 81.744), stdev = 
1.311
  [java]   Confidence interval (99.9%): [75.889, 77.641]
   One liner:
  [java] Result: 57.608 ±(99.9%) 0.889 ms/op [Average]
  [java]   Statistics: (min, avg, max) = (56.365, 57.608, 61.697), stdev = 
1.330
  [java]   Confidence interval (99.9%): [56.719, 58.497]
{noformat}
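
For reference, a sketch of what such a one-liner could look like (an assumption on my part, using {{java.nio.file.Files.walk}}; the code actually benchmarked may differ):

{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DirectorySizeSketch
{
    // Sum the sizes of all regular files under dir in a single stream pipeline.
    static long sizeOf(Path dir) throws IOException
    {
        try (Stream<Path> files = Files.walk(dir))
        {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try { return Files.size(p); }
                            catch (IOException e) { throw new UncheckedIOException(e); }
                        })
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException
    {
        System.out.println(sizeOf(Paths.get(".")));
    }
}
{code}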

I think that makes a strong case for us having a platform independent 
implementation of this and doing this in a follow-up ticket.

I also haven't done anything about CommitLogSegmentPosition's name yet. I don't 
have really strong feelings on it but am leaning towards {{CommitLogPosition}}.

Re-ran CI since we've made quite a few minor tweaks/refactors throughout, and 
there's a small amount (14 failures) of house-cleaning left to do on the tests. 
I'll start digging into that tomorrow.

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible 

[jira] [Updated] (CASSANDRA-11542) Create a benchmark to compare HDFS and Cassandra bulk read times

2016-04-19 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11542:
-
Attachment: spark-load-perf-results-001.zip

> Create a benchmark to compare HDFS and Cassandra bulk read times
> 
>
> Key: CASSANDRA-11542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11542
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: spark-load-perf-results-001.zip
>
>
> I propose creating a benchmark for comparing Cassandra and HDFS bulk reading 
> performance. Simple Spark queries will be performed on data stored in HDFS or 
> Cassandra, and the entire duration will be measured. An example query would 
> be the max or min of a column or a count\(*\).
> This benchmark should allow determining the impact of:
> * partition size
> * number of clustering columns
> * number of value columns (cells)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-04-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15249024#comment-15249024
 ] 

Russ Hatch commented on CASSANDRA-11393:


Not sure what the state of things is here, but just adding this in case it 
helps: this still appears to be happening on 3.0, as this issue was seen on 
recent 2.1-to-3.0 upgrade tests.

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  

[jira] [Updated] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1

2016-04-19 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-11600:


patches:
[3.0|https://github.com/bdeggleston/cassandra/tree/11600-3.0]
[trunk|https://github.com/bdeggleston/cassandra/tree/11600-trunk]

> Don't require HEAP_NEW_SIZE to be set when using G1
> ---
>
> Key: CASSANDRA-11600
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11600
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 3.6, 3.0.x
>
>
> Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when 
> using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE 
> together, and won't start until you do. Since we ignore that setting if 
> you're using G1, we shouldn't require that the user set it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11566) read time out when do count(*)

2016-04-19 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248694#comment-15248694
 ] 

Jack Krupansky edited comment on CASSANDRA-11566 at 4/19/16 9:59 PM:
-

I suspect that this timeout is simply because cqlsh is set to only allow 10 
seconds for a request by default. Try setting the request timeout to some 
largish number, like 2000 (seconds) using the {{--request-timeout}} command 
line option for cqlsh:

{code}
cqlsh --request-timeout=2000 ...
{code}

To be clear, even if setting a longer timeout works, it is not advisable to 
perform such a slow and resource-intensive operation on a production cluster 
unless absolutely necessary.


was (Author: jkrupan):
I suspect that this timeout is simply because cqlsh is set to only allow 10 
seconds for a request by default. Try setting the request timeout to some 
largish number, like 2000 (seconds) using the {{--request-timeout}} command 
line option for cqlsh:

{code}
cqlsh --request-timeout=2000 ...
{code}


> read time out when do count(*)
> --
>
> Key: CASSANDRA-11566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11566
> Project: Cassandra
>  Issue Type: Bug
> Environment: staging
>Reporter: nizar
> Fix For: 3.3
>
>
> Hello, I am using Cassandra (DataStax) 3.3 and I keep getting read timeouts even if I 
> set the limit to 1. It would make sense if the limit were a high number, 
> but timing out with only limit 1 seems odd.
> [cqlsh 5.0.1 | Cassandra 3.3 | CQL spec 3.4.0 | Native protocol v4]
> cqlsh:test> select count(*) from test.my_view where s_id=? and flag=false 
> limit 1;
> OperationTimedOut: errors={}, last_host=
> my key look like this :
> CREATE MATERIALIZED VIEW test.my_view AS
>   SELECT *
>   FROM table_name
>   WHERE id IS NOT NULL AND processed IS NOT NULL AND time IS  NOT NULL AND id 
> IS NOT NULL
>   PRIMARY KEY ( ( s_id, flag ), time, id )
>   WITH CLUSTERING ORDER BY ( time ASC );
>  I have 5 nodes with replica 3
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc': '3'}  AND durable_writes = true;
> Below was the result for nodetoolcfstats
> Keyspace: test
> Read Count: 128770
> Read Latency: 1.42208769123243 ms.
> Write Count: 0
> Write Latency: NaN ms.
> Pending Flushes: 0
> Table: tableName
> SSTable count: 3
> Space used (live): 280777032
> Space used (total): 280777032
> Space used by snapshots (total): 0
> Off heap memory used (total): 2850227
> SSTable Compression Ratio: 0.24706731995327527
> Number of keys (estimate): 1277211
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 3
> Local read latency: 0.396 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 1589848
> Bloom filter off heap memory used: 1589824
> Index summary off heap memory used: 1195691
> Compression metadata off heap memory used: 64712
> Compacted partition minimum bytes: 311
> Compacted partition maximum bytes: 535
> Compacted partition mean bytes: 458
> Average live cells per slice (last five minutes): 102.92671205446536
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Table: my_view
> SSTable count: 4
> Space used (live): 126114270
> Space used (total): 126114270
> Space used by snapshots (total): 0
> Off heap memory used (total): 91588
> SSTable Compression Ratio: 0.1652453778228639
> Number of keys (estimate): 8
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 128767
> Local read latency: 1.590 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 96
> Bloom filter off heap memory used: 64
> Index summary off heap memory used: 140
> Compression metadata off heap memory used: 91384
> Compacted partition minimum bytes: 3974
> Compacted partition maximum bytes: 386857368
> Compacted partition mean bytes: 26034715
> Average live cells per slice (last five minutes): 102.99462595230145
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Thank you.
> Nizar



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11566) read time out when do count(*)

2016-04-19 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248742#comment-15248742
 ] 

Jack Krupansky commented on CASSANDRA-11566:


This issue may also be considered a duplicate of CASSANDRA-9051.

For reference, setting the {{--request-timeout}} parameter on the command line 
and the {{request_timeout}} option in the {{\[connection]}} section of the 
{{cqlshrc}} file are documented here:
http://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlsh.html

> read time out when do count(*)
> --
>
> Key: CASSANDRA-11566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11566
> Project: Cassandra
>  Issue Type: Bug
> Environment: staging
>Reporter: nizar
> Fix For: 3.3
>
>
> Hello, I am using Cassandra (DataStax) 3.3 and I keep getting read timeouts even if I 
> set the limit to 1. It would make sense if the limit were a high number, 
> but timing out with only limit 1 seems odd.
> [cqlsh 5.0.1 | Cassandra 3.3 | CQL spec 3.4.0 | Native protocol v4]
> cqlsh:test> select count(*) from test.my_view where s_id=? and flag=false 
> limit 1;
> OperationTimedOut: errors={}, last_host=
> my key look like this :
> CREATE MATERIALIZED VIEW test.my_view AS
>   SELECT *
>   FROM table_name
>   WHERE id IS NOT NULL AND processed IS NOT NULL AND time IS  NOT NULL AND id 
> IS NOT NULL
>   PRIMARY KEY ( ( s_id, flag ), time, id )
>   WITH CLUSTERING ORDER BY ( time ASC );
>  I have 5 nodes with replica 3
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc': '3'}  AND durable_writes = true;
> Below was the result for nodetoolcfstats
> Keyspace: test
> Read Count: 128770
> Read Latency: 1.42208769123243 ms.
> Write Count: 0
> Write Latency: NaN ms.
> Pending Flushes: 0
> Table: tableName
> SSTable count: 3
> Space used (live): 280777032
> Space used (total): 280777032
> Space used by snapshots (total): 0
> Off heap memory used (total): 2850227
> SSTable Compression Ratio: 0.24706731995327527
> Number of keys (estimate): 1277211
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 3
> Local read latency: 0.396 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 1589848
> Bloom filter off heap memory used: 1589824
> Index summary off heap memory used: 1195691
> Compression metadata off heap memory used: 64712
> Compacted partition minimum bytes: 311
> Compacted partition maximum bytes: 535
> Compacted partition mean bytes: 458
> Average live cells per slice (last five minutes): 102.92671205446536
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Table: my_view
> SSTable count: 4
> Space used (live): 126114270
> Space used (total): 126114270
> Space used by snapshots (total): 0
> Off heap memory used (total): 91588
> SSTable Compression Ratio: 0.1652453778228639
> Number of keys (estimate): 8
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 128767
> Local read latency: 1.590 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 96
> Bloom filter off heap memory used: 64
> Index summary off heap memory used: 140
> Compression metadata off heap memory used: 91384
> Compacted partition minimum bytes: 3974
> Compacted partition maximum bytes: 386857368
> Compacted partition mean bytes: 26034715
> Average live cells per slice (last five minutes): 102.99462595230145
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Thank you.
> Nizar



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11613) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test

2016-04-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-11613:
---
Reproduced In: 3.6

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test
> --
>
> Key: CASSANDRA-11613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11613
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/8/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk/more_user_types_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11613) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test

2016-04-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248736#comment-15248736
 ] 

Russ Hatch commented on CASSANDRA-11613:


Based on the stack trace, I'm thinking this is not likely to be a test issue:
{noformat}
ERROR [MessagingService-Incoming-/127.0.0.2] 2016-04-19 00:59:15,353 
CassandraDaemon.java:195 - Exception in thread 
Thread[MessagingService-Incoming-/127.0.0.2,5,main]
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BufferCell.<init>(BufferCell.java:42) 
~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout$CellGrouper.addCell(LegacyLayout.java:1190)
 ~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout$CellGrouper.addAtom(LegacyLayout.java:1144)
 ~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout.getNextRow(LegacyLayout.java:646) 
~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout.access$300(LegacyLayout.java:50) 
~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout$2.computeNext(LegacyLayout.java:669) 
~[main/:na]
at 
org.apache.cassandra.db.LegacyLayout$2.computeNext(LegacyLayout.java:663) 
~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.rows.RowAndDeletionMergeIterator.updateNextRow(RowAndDeletionMergeIterator.java:117)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.RowAndDeletionMergeIterator.computeNext(RowAndDeletionMergeIterator.java:77)
 ~[main/:na]
at 
org.apache.cassandra.db.rows.RowAndDeletionMergeIterator.computeNext(RowAndDeletionMergeIterator.java:35)
 ~[main/:na]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[main/:na]
at 
org.apache.cassandra.db.partitions.AbstractBTreePartition.build(AbstractBTreePartition.java:283)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.PartitionUpdate.fromIterator(PartitionUpdate.java:207)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserializePre30(PartitionUpdate.java:704)
 ~[main/:na]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:647)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:331)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:350)
 ~[main/:na]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:287)
 ~[main/:na]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
 ~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[main/:na]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[main/:na]
{noformat}

Also wondering if this could be related to CASSANDRA-11609: both 
involve upgrading UDTs and both failures look to be pretty new.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test
> --
>
> Key: CASSANDRA-11613
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11613
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/8/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk/more_user_types_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11613) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test

2016-04-19 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11613:
--

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk.more_user_types_test
 Key: CASSANDRA-11613
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11613
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch


example failure:

http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/8/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_2_2_HEAD_UpTo_Trunk/more_user_types_test

Failed on CassCI build upgrade_tests-all-custom_branch_runs #8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11566) read time out when do count(*)

2016-04-19 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248694#comment-15248694
 ] 

Jack Krupansky commented on CASSANDRA-11566:


I suspect that this timeout is simply because cqlsh is set to only allow 10 
seconds for a request by default. Try setting the request timeout to some 
largish number, like 2000 (seconds) using the {{--request-timeout}} command 
line option for cqlsh:

{code}
cqlsh --request-timeout=2000 ...
{code}
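
If the count query needs to be run as a one-off, the timeout and the statement 
can be combined on one command line; the host, keyspace, and s_id value below 
are hypothetical placeholders, just to sketch the full invocation:

{code}
cqlsh 127.0.0.1 --request-timeout=2000 -k test \
  -e "SELECT count(*) FROM my_view WHERE s_id = 42 AND flag = false LIMIT 1;"
{code}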


> read time out when do count(*)
> --
>
> Key: CASSANDRA-11566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11566
> Project: Cassandra
>  Issue Type: Bug
> Environment: staging
>Reporter: nizar
> Fix For: 3.3
>
>
> Hello, I am using Cassandra (DataStax) 3.3. I keep getting a read timeout even 
> if I set the limit to 1. It would make sense if the limit were a high number; 
> however, timing out with only limit 1 seems odd.
> [cqlsh 5.0.1 | Cassandra 3.3 | CQL spec 3.4.0 | Native protocol v4]
> cqlsh:test> select count(*) from test.my_view where s_id=? and flag=false 
> limit 1;
> OperationTimedOut: errors={}, last_host=
> my key look like this :
> CREATE MATERIALIZED VIEW test.my_view AS
>   SELECT *
>   FROM table_name
>   WHERE id IS NOT NULL AND processed IS NOT NULL AND time IS  NOT NULL AND id 
> IS NOT NULL
>   PRIMARY KEY ( ( s_id, flag ), time, id )
>   WITH CLUSTERING ORDER BY ( time ASC );
>  I have 5 nodes with replication factor 3
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc': '3'}  AND durable_writes = true;
> Below is the output of nodetool cfstats
> Keyspace: test
> Read Count: 128770
> Read Latency: 1.42208769123243 ms.
> Write Count: 0
> Write Latency: NaN ms.
> Pending Flushes: 0
> Table: tableName
> SSTable count: 3
> Space used (live): 280777032
> Space used (total): 280777032
> Space used by snapshots (total): 0
> Off heap memory used (total): 2850227
> SSTable Compression Ratio: 0.24706731995327527
> Number of keys (estimate): 1277211
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 3
> Local read latency: 0.396 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 1589848
> Bloom filter off heap memory used: 1589824
> Index summary off heap memory used: 1195691
> Compression metadata off heap memory used: 64712
> Compacted partition minimum bytes: 311
> Compacted partition maximum bytes: 535
> Compacted partition mean bytes: 458
> Average live cells per slice (last five minutes): 102.92671205446536
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Table: my_view
> SSTable count: 4
> Space used (live): 126114270
> Space used (total): 126114270
> Space used by snapshots (total): 0
> Off heap memory used (total): 91588
> SSTable Compression Ratio: 0.1652453778228639
> Number of keys (estimate): 8
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 0
> Local read count: 128767
> Local read latency: 1.590 ms
> Local write count: 0
> Local write latency: NaN ms
> Pending flushes: 0
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 96
> Bloom filter off heap memory used: 64
> Index summary off heap memory used: 140
> Compression metadata off heap memory used: 91384
> Compacted partition minimum bytes: 3974
> Compacted partition maximum bytes: 386857368
> Compacted partition mean bytes: 26034715
> Average live cells per slice (last five minutes): 102.99462595230145
> Maximum live cells per slice (last five minutes): 103
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Thank you.
> Nizar



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[cassandra] Git Push Summary

2016-04-19 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.14-tentative [created] 209ebd380


[cassandra] Git Push Summary

2016-04-19 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.14-tentative [deleted] 5c5c5b44c


[jira] [Created] (CASSANDRA-11612) dtest failure in materialized_views_test.TestMaterializedViews.interrupt_build_process_test

2016-04-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11612:


 Summary: dtest failure in 
materialized_views_test.TestMaterializedViews.interrupt_build_process_test
 Key: CASSANDRA-11612
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11612
 Project: Cassandra
  Issue Type: Test
Reporter: Jim Witschey
Assignee: DS Test Eng


This has flapped a couple times so far. Example failure:

http://cassci.datastax.com/job/trunk_dtest_win32/385/testReport/materialized_views_test/TestMaterializedViews/interrupt_build_process_test

Failed on CassCI build trunk_dtest_win32 #385

{code}
Error Message

9847 != 1
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: d:\temp\dtest-kozpni
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
dtest: DEBUG: Inserting initial data
dtest: DEBUG: Create a MV
dtest: DEBUG: Stop the cluster. Interrupt the MV build process.
dtest: DEBUG: Restart the cluster
dtest: DEBUG: MV shouldn't be built yet.
dtest: DEBUG: Wait and ensure the MV build resumed. Waiting up to 2 minutes.
dtest: DEBUG: Verify all data
- >> end captured logging << -
Stacktrace

  File "C:\tools\python2\lib\unittest\case.py", line 329, in run
testMethod()
  File 
"D:\jenkins\workspace\trunk_dtest_win32\cassandra-dtest\materialized_views_test.py",
 line 700, in interrupt_build_process_test
self.assertEqual(result[0].count, 1)
  File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "C:\tools\python2\lib\unittest\case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
"9847 != 1\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
d:\\temp\\dtest-kozpni\ndtest: DEBUG: Custom init_config not found. Setting 
defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\ndtest: DEBUG: Inserting initial data\ndtest: DEBUG: Create a MV\ndtest: 
DEBUG: Stop the cluster. Interrupt the MV build process.\ndtest: DEBUG: Restart 
the cluster\ndtest: DEBUG: MV shouldn't be built yet.\ndtest: DEBUG: Wait and 
ensure the MV build resumed. Waiting up to 2 minutes.\ndtest: DEBUG: Verify all 
data\n- >> end captured logging << -"
Standard Error

Started: node1 with pid: 6056
Started: node3 with pid: 7728
Started: node2 with pid: 6428
Started: node1 with pid: 6088
Started: node3 with pid: 6824
Started: node2 with pid: 4024

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7645) cqlsh: show partial trace if incomplete after max_trace_wait

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7645:
---
Labels: cqlsh  (was: )

> cqlsh: show partial trace if incomplete after max_trace_wait
> 
>
> Key: CASSANDRA-7645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7645
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Carl Yeksigian
>Priority: Trivial
>  Labels: cqlsh
> Fix For: 2.2.4, 3.0.0
>
>
> If a trace hasn't completed within {{max_trace_wait}}, cqlsh will say the 
> trace is unavailable and not show anything.  It (and the underlying python 
> driver) determines when the trace is complete by checking if the {{duration}} 
> column in {{system_traces.sessions}} is non-null.  If {{duration}} is null 
> but we still have some trace events when the timeout is hit, cqlsh should 
> print whatever trace events we have along with a warning about it being 
> incomplete.
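
A rough CQL sketch of the two lookups described above (the session id is a 
made-up placeholder):

{code}
SELECT duration
  FROM system_traces.sessions
 WHERE session_id = 11111111-2222-3333-4444-555555555555;

-- if duration is still null when max_trace_wait expires, any partial trace
-- events are nonetheless already queryable:
SELECT activity, source, source_elapsed
  FROM system_traces.events
 WHERE session_id = 11111111-2222-3333-4444-555555555555;
{code}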



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11598:

Fix Version/s: 2.1.x

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> [MessagingService-Incoming-/127.0.0.3] | 2016-04-15 16:35:05.552000 | 
> 127.0.0.1 |   4454\n  
> 

[jira] [Updated] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11598:

Issue Type: Bug  (was: Test)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> [MessagingService-Incoming-/127.0.0.3] | 2016-04-15 16:35:05.552000 | 
> 127.0.0.1 |   4454\n  

[jira] [Updated] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11598:

Reproduced In: 2.1.13
  Component/s: Tools

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> [MessagingService-Incoming-/127.0.0.3] | 2016-04-15 16:35:05.552000 | 
> 127.0.0.1 |   4454\n   

[jira] [Updated] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11598:

Assignee: (was: Philip Thompson)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 2.1.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> [MessagingService-Incoming-/127.0.0.3] | 2016-04-15 16:35:05.552000 | 
> 127.0.0.1 |   4454\n  
> 

[jira] [Commented] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248576#comment-15248576
 ] 

Philip Thompson commented on CASSANDRA-11598:
-

So, to persist what I've said on IRC here:

The entire request is completing, appropriately at CL.ALL. We see in the 
coordinator's trace that it has received the response from each node. But 
sometimes we get no trace messages from 127.0.0.2 (and only *ever* 127.0.0.2, 
never 127.0.0.3). This only ever happens on 2.1. I have run it hundreds of 
times on 2.1, 2.2, and 3.5. Is this a case where we are relying on tracing to 
do something it cannot? The specificity of the issue is suspicious. 
I'm moving this to the bug queue, so that a dev can authoritatively say "This 
should work", or "This should not work".

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n  

[jira] [Assigned] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11608:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in 
> replace_address_test.TestReplaceAddress.replace_first_boot_test
> 
>
> Key: CASSANDRA-11608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This looks like a timeout kind of flap. It's flapped once. Example failure:
> http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test
> Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative
> {code}
> Error Message
> 15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
> INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
> See system.log for remainder
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'memtable_allocation_type': 'offheap_objects',
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: Starting cluster with 3 nodes.
> dtest: DEBUG: 32
> dtest: DEBUG: Inserting Data...
> dtest: DEBUG: Stopping node 3.
> dtest: DEBUG: Testing node stoppage (query should fail).
> dtest: DEBUG: Retrying read after timeout. Attempt #0
> dtest: DEBUG: Retrying read after timeout. Attempt #1
> dtest: DEBUG: Retrying request after UE. Attempt #2
> dtest: DEBUG: Retrying request after UE. Attempt #3
> dtest: DEBUG: Retrying request after UE. Attempt #4
> dtest: DEBUG: Starting node 4 to replace node 3
> dtest: DEBUG: Verifying querying works again.
> dtest: DEBUG: Verifying tokens migrated sucessfully
> dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
> Token -3855903180169109916 changing ownership from /127.0.0.3 to 
> /127.0.0.4\n', <_sre.SRE_Match object at 0x7fd21c0e2370>)
> dtest: DEBUG: Try to restart node 3 (should fail)
> dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
> StorageService.java:1962 - Host ID collision for 
> 75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
> /127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, 
> in replace_first_boot_test
> node4.start(wait_for_binary_proto=True)
>   File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
> node.watch_log_for_alive(self, from_mark=mark)
>   File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
> self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
> filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> "15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
> 2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
> remainder\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n  
>   'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
> 'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 
> 32\ndtest: DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: 
> DEBUG: Testing node stoppage (query should fail).\ndtest: DEBUG: Retrying 
> read after timeout. Attempt #0\ndtest: DEBUG: Retrying read after timeout. 
> Attempt #1\ndtest: DEBUG: Retrying request after UE. Attempt #2\ndtest: 
> DEBUG: Retrying request after UE. Attempt #3\ndtest: DEBUG: Retrying request 
> after UE. Attempt #4\ndtest: DEBUG: Starting node 4 to replace node 3\ndtest: 
> DEBUG: Verifying querying works again.\ndtest: DEBUG: Verifying tokens 
> migrated sucessfully\ndtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 
> TokenMetadata.java:196 - Token 

[jira] [Created] (CASSANDRA-11611) dtest failure in topology_test.TestTopology.crash_during_decommission_test

2016-04-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11611:


 Summary: dtest failure in 
topology_test.TestTopology.crash_during_decommission_test
 Key: CASSANDRA-11611
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11611
 Project: Cassandra
  Issue Type: Test
Reporter: Jim Witschey
Assignee: DS Test Eng


Looks like some kind of streaming error. Example failure:

http://cassci.datastax.com/job/trunk_dtest_win32/382/testReport/topology_test/TestTopology/crash_during_decommission_test

Failed on CassCI build trunk_dtest_win32 #382

{code}
Error Message

Unexpected error in log, see stdout
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: d:\temp\dtest-ce_wos
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
dtest: DEBUG: Status as reported by node 127.0.0.2
dtest: DEBUG: Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UN  127.0.0.1  98.73 KiB  32   78.4% 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284  rack1
UN  127.0.0.2  162.38 KiB  32   65.9% 
71aacf1d-8e2f-44cf-b354-f10c71313ec6  rack1
UN  127.0.0.3  98.71 KiB  32   55.7% 
3a4529a3-dc7f-445c-aec3-94417c920fdf  rack1


dtest: DEBUG: Restarting node2
dtest: DEBUG: Status as reported by node 127.0.0.2
dtest: DEBUG: Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UL  127.0.0.1  98.73 KiB  32   78.4% 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284  rack1
UN  127.0.0.2  222.26 KiB  32   65.9% 
71aacf1d-8e2f-44cf-b354-f10c71313ec6  rack1
UN  127.0.0.3  98.71 KiB  32   55.7% 
3a4529a3-dc7f-445c-aec3-94417c920fdf  rack1


dtest: DEBUG: Restarting node2
dtest: DEBUG: Status as reported by node 127.0.0.2
dtest: DEBUG: Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UL  127.0.0.1  174.2 KiB  32   78.4% 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284  rack1
UN  127.0.0.2  336.69 KiB  32   65.9% 
71aacf1d-8e2f-44cf-b354-f10c71313ec6  rack1
UN  127.0.0.3  116.7 KiB  32   55.7% 
3a4529a3-dc7f-445c-aec3-94417c920fdf  rack1


dtest: DEBUG: Restarting node2
dtest: DEBUG: Status as reported by node 127.0.0.2
dtest: DEBUG: Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UL  127.0.0.1  174.2 KiB  32   78.4% 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284  rack1
UN  127.0.0.2  360.82 KiB  32   65.9% 
71aacf1d-8e2f-44cf-b354-f10c71313ec6  rack1
UN  127.0.0.3  116.7 KiB  32   55.7% 
3a4529a3-dc7f-445c-aec3-94417c920fdf  rack1


dtest: DEBUG: Restarting node2
dtest: DEBUG: Status as reported by node 127.0.0.2
dtest: DEBUG: Datacenter: datacenter1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens   Owns (effective)  Host ID
   Rack
UL  127.0.0.1  174.2 KiB  32   78.4% 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284  rack1
UN  127.0.0.2  240.54 KiB  32   65.9% 
71aacf1d-8e2f-44cf-b354-f10c71313ec6  rack1
UN  127.0.0.3  116.7 KiB  32   55.7% 
3a4529a3-dc7f-445c-aec3-94417c920fdf  rack1


dtest: DEBUG: Restarting node2
dtest: DEBUG: Decommission failed with exception: Nodetool command 
'D:\jenkins\workspace\trunk_dtest_win32\cassandra\bin\nodetool.bat -h localhost 
-p 7100 decommission' failed; exit status: 2; stderr: error: Stream failed
-- StackTrace --
org.apache.cassandra.streaming.StreamException: Stream failed
at 
org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
at 
com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
at 
com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)

[jira] [Comment Edited] (CASSANDRA-9633) Add ability to encrypt sstables

2016-04-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247797#comment-15247797
 ] 

Jason Brown edited comment on CASSANDRA-9633 at 4/19/16 8:09 PM:
-

OK, so here we go with the updates [~bdeggleston] requested.

||9633||
|[branch|https://github.com/jasobrown/cassandra/tree/9633]|
|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-testall/]|

To encrypt the (primary) index and summary sstable files, in addition to the 
data file, different code paths were required.

- As the summary is a one-shot read and different from all the other code paths, 
I made custom classes for handling it, {{EncryptedSummaryWritableByteChannel}} 
and {{EncryptedSummaryInputStream}}. As the summary is intimately linked with 
the owning sstable & data file, the summary will simply inherit the 
key_alias/algo from the sstable, but then has its IV written to the front of 
the summary file.

- The encrypted primary index needs to have its own 'chunks' file, a la the 
{{Component.COMPRESSION_INFO}}, so I created 
{{Component.INDEX_COMPRESSION_INFO}} so that it gets the proper treatment. 
Thus, we can use {{CompressedSequentialWriter}} for writing out the index 
file's offsets, just like what we do for the compressed data file.

- As with the first version of this patch, encrypting the data file (and now 
the primary index) is handled by {{EncryptingCompressor}}. 

WRT CQL changes, we simply do {{... WITH ENCRYPTION='true'}} to enable the 
sstable encryption. All the encryption parameters are already in the yaml, so 
no need to pass those in separately. Further, to disable the sstable 
encryption, simply execute {{ALTER TABLE ... WITH ENCRYPTION='false'}}. As a 
side effect of piggy-backing on the compression infrastructure, though, when 
executing {{DESCRIBE TABLE}} in cqlsh the encryption params show up as 
'compression' data, not as encryption. I believe all the code for handling the 
cqlsh describe queries is largely in the python driver, afaict. 
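
A quick sketch of that CQL surface (keyspace/table names hypothetical; the 
option comes from this patch and is not a released API):

{code}
-- enable sstable encryption on a table; the cipher/key parameters come from
-- the yaml, so only the switch is passed here
ALTER TABLE ks.tbl WITH ENCRYPTION='true';

-- and to turn it off again
ALTER TABLE ks.tbl WITH ENCRYPTION='false';
{code}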


Some miscellaneous changes:
- {{ICompressor}} got some additional functions for instance-specific values as 
we need to carry a unique IV for each cipher.
- {{CipherFactory}} needed a small cleanup wrt caching instances (we were 
creating a crap tonne of instances on the fly)
- Apparently I messed up a small part of the merge for #11040, so I'm adding it 
in here ({{HintsService}}). Without this, hints don't get encrypted when 
enabled.
- added some {{@SuppressWarnings}} annotations



was (Author: jasobrown):
OK, so here we go with the updates [~bdeggleston] requested.

||9633||
|[branch|https://github.com/jasobrown/cassandra/tree/9633]|
|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-testall/]|

To encrypt the (primary) index and summary sstable files, in addition to the 
data file, different code paths were required.

-As the summary is the once-shot read and different from all other code paths, 
I made custom classes for handling it, {{EncryptedSummaryWritableByteChannel}} 
and {{EncryptedSummaryInputStream}}. As the summary is intimately linked with 
the owning sstable & data file, the summary will simply inherit the 
key_alias/algo from the sstable, but then has it's IV written to the front of 
the summary file.

- The encrypted primary index needs to have it's own 'chunks' file, a la the 
{{Component.COMPRESSION_INFO}}, so I created 
{{Component.INDEX_COMPRESSION_INFO}} so that it gets the proper treatment. 
Thus, we can use {{CompressedSequentialWriter}} for writing out the index 
file's offsets, just like what we do for the compressed data file.

- As with the first version of this patch, encrypting the data file (and now 
the primary index) is handled by {{EncryptingCompressor}}. 

WRT to CQL changes, we simply do {{... WITH ENCRYPTION='true'}} to enable the 
sstable encryption. All the encryption parameters are already in the yaml, so 
no need to pass those in separately. Further, to disable the sstable 
encryption, simple execute {{ALTER TABLE ... WITH ENCRYPTION='false'}}. As a 
side effect of piggy-backing on the compression infrastructure, though, when 
executing {{DESCRIBE TABLE}} in cqlsh the encryption params show up as 
'compression' data, not as encryption. I believe all the code for handling the 
cqlsh describe queries is largely in the python driver, afaict. 


Some miscellaneous changes:
- {{ICompressor}} got some additional functions for instance-specific values as 
we need to carry a unique IV for each cipher.
- {{CipherFactory}} needed a small cleanup wrt caching instances (we were 
creating a crap tonne of instances on the fly)
- Apparently I messed up a small part of the merge for 

[jira] [Commented] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248507#comment-15248507
 ] 

Philip Thompson commented on CASSANDRA-11598:
-

Doesn't happen on 2.2. Definitely a 2.1-only problem.

http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/73/

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> 

[jira] [Updated] (CASSANDRA-11607) dtest failure in user_types_test.TestUserTypes.test_nested_user_types

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11607:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in user_types_test.TestUserTypes.test_nested_user_types
> -
>
> Key: CASSANDRA-11607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11607
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest, windows
>
> This is a single flap:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/user_types_test/TestUserTypes/test_nested_user_types
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> {code}
> Error Message
> Lists differ: [None] != [[u'test', u'test2']]
> First differing element 0:
> None
> [u'test', u'test2']
> - [None]
> + [[u'test', u'test2']]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-vgkgwi
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\user_types_test.py",
>  line 289, in test_nested_user_types
> self.assertEqual(listify(primary_item), [[u'test', u'test2']])
>   File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "C:\tools\python2\lib\unittest\case.py", line 724, in 
> assertSequenceEqual
> self.fail(msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> "Lists differ: [None] != [[u'test', u'test2']]\n\nFirst differing element 
> 0:\nNone\n[u'test', u'test2']\n\n- [None]\n+ [[u'test', 
> u'test2']]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-vgkgwi\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Error
> Started: node1 with pid: 4328
> Started: node3 with pid: 7568
> Started: node2 with pid: 7504
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11610) Revisit consistent range movement restrictions

2016-04-19 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11610:
-
Description: 
I have a suspicion that we chose the wrong default by always having this on, 
with a flag to disable.  We ran for a very long time without it, and at most a 
handful of people cared, with very few complaints.  Now that we have this on, 
the majority who didn't care before but want to double even fairly small 
clusters now have to add a flag to disable it, and then are curious why they 
need to.

Regardless of that, I think we need to at least make the restriction more lax; 
we currently block the simultaneous addition of nodes across the entire 
cluster, even if they use NTS and have many DCs with different RF settings.  We 
should at least allow simultaneous addition in multiple DCs.

  was:
I have a suspicion that we chose the wrong default by always having this on, 
with a flag to disable.  We ran for a very long time without it out, and at 
most a handful of people cared, with very few complaints.  Now that we have 
this on, the majority who didn't care but want to double even fairly small 
clusters have to add a flag to disable it, and then are curious why they need 
to.

Regardless of that, I think we need to at least make the restriction more lax; 
we currently block the simultaneous addition of nodes across the entire 
cluster, even they use NTS and have many DCs with different RF settings.  We 
should at least allow simultaneous addition in multiple DCs.


> Revisit consistent range movement restrictions
> --
>
> Key: CASSANDRA-11610
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11610
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>
> I have a suspicion that we chose the wrong default by always having this on, 
> with a flag to disable.  We ran for a very long time without it, and at most 
> a handful of people cared, with very few complaints.  Now that we have this 
> on, the majority who didn't care before but want to double even fairly small 
> clusters now have to add a flag to disable it, and then are curious why they 
> need to.
> Regardless of that, I think we need to at least make the restriction more 
> lax; we currently block the simultaneous addition of nodes across the entire 
> cluster, even if they use NTS and have many DCs with different RF settings.  We 
> should at least allow simultaneous addition in multiple DCs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248413#comment-15248413
 ] 

Paulo Motta commented on CASSANDRA-11548:
-

Thanks [~krummas]. Good job [~ruoranwang]!

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
>Assignee: Ruoran Wang
> Fix For: 2.1.14
>
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the effect of the above changes might be that 
> markCompactedSSTablesReplaced fails on this assertion in 
> DataTracker.java:
> {noformat}
>assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
> String.format("Expecting new size of %d, got %d while 
> replacing %s by %s in %s",
>   newSSTablesSize, newSSTables.size() + 
> newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This might then cause a 
> row-out-of-order error during incremental repair if there are unrepaired L1 
> sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11610) Revisit consistent range movement restrictions

2016-04-19 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-11610:


 Summary: Revisit consistent range movement restrictions
 Key: CASSANDRA-11610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11610
 Project: Cassandra
  Issue Type: Improvement
Reporter: Brandon Williams


I have a suspicion that we chose the wrong default by always having this on, 
with a flag to disable.  We ran for a very long time without it, and at 
most a handful of people cared, with very few complaints.  Now that we have 
this on, the majority who didn't care but want to double even fairly small 
clusters have to add a flag to disable it, and then are curious why they need 
to.

Regardless of that, I think we need to at least make the restriction more lax; 
we currently block the simultaneous addition of nodes across the entire 
cluster, even if they use NTS and have many DCs with different RF settings.  We 
should at least allow simultaneous addition in multiple DCs.
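
For reference, the flag being discussed is presumably the consistent range 
movement property; a minimal sketch of how it is disabled today, typically via 
cassandra-env.sh or the startup command line:

{code}
JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"
{code}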



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11607) dtest failure in user_types_test.TestUserTypes.test_nested_user_types

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11607:

Status: Patch Available  (was: Open)

https://github.com/riptano/cassandra-dtest/pull/936

> dtest failure in user_types_test.TestUserTypes.test_nested_user_types
> -
>
> Key: CASSANDRA-11607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11607
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest, windows
>
> This is a single flap:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/user_types_test/TestUserTypes/test_nested_user_types
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> {code}
> Error Message
> Lists differ: [None] != [[u'test', u'test2']]
> First differing element 0:
> None
> [u'test', u'test2']
> - [None]
> + [[u'test', u'test2']]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-vgkgwi
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\user_types_test.py",
>  line 289, in test_nested_user_types
> self.assertEqual(listify(primary_item), [[u'test', u'test2']])
>   File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "C:\tools\python2\lib\unittest\case.py", line 724, in 
> assertSequenceEqual
> self.fail(msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> "Lists differ: [None] != [[u'test', u'test2']]\n\nFirst differing element 
> 0:\nNone\n[u'test', u'test2']\n\n- [None]\n+ [[u'test', 
> u'test2']]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-vgkgwi\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Error
> Started: node1 with pid: 4328
> Started: node3 with pid: 7568
> Started: node2 with pid: 7504
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11548:

Assignee: Ruoran Wang

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
>Assignee: Ruoran Wang
> Fix For: 2.1.14
>
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> moved {{markCompactedSSTablesReplaced}} out of the loop {{for (SSTableReader 
> sstable : repairedSSTables)}}.
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> added {{unmarkCompacting}} into that same loop.
> I think the combined effect of these changes is that 
> {{markCompactedSSTablesReplaced}} can now fail on this assertion in 
> DataTracker.java:
> {noformat}
> assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
>     String.format("Expecting new size of %d, got %d while replacing %s by %s in %s",
>                   newSSTablesSize, newSSTables.size() + newShadowed.size(),
>                   oldSSTables, replacements, this);
> {noformat}
> Because CASSANDRA-10831 moved the call out of that loop, this AssertionError is 
> no longer caught, leaving the old sstables not removed. (That in turn can cause 
> a row-out-of-order error during incremental repair if there are un-repaired L1 
> sstables.)
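To make the described failure mode concrete, here is a minimal, self-contained Java 
sketch. It is purely illustrative: the class, method and sstable names are invented 
and this is not the actual DataTracker/CompactionManager code. It only shows why an 
AssertionError thrown while replacing a whole batch of sstables, with nothing left to 
catch it, leaves the old sstables in the live set:

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: stand-in names, not Cassandra's DataTracker code.
public class ReplacementSketch
{
    // Stand-in for the tracked set of live sstables.
    static List<String> live = new ArrayList<>(List.of("old-1", "old-2"));

    // Replaces a whole batch at once, guarded by a size assertion like the one quoted above.
    static void markReplaced(List<String> olds, List<String> news)
    {
        List<String> updated = new ArrayList<>(live);
        updated.removeAll(olds);
        updated.addAll(news);
        int expected = live.size() - olds.size() + news.size();
        assert updated.size() == expected
            : String.format("Expecting new size of %d, got %d", expected, updated.size());
        live = updated; // only reached when the assertion holds
    }

    public static void main(String[] args)
    {
        // A duplicated entry in 'olds' throws the size bookkeeping off,
        // so the assertion fires (run with: java -ea ReplacementSketch).
        try
        {
            markReplaced(List.of("old-1", "old-1"), List.of("new-1", "new-2"));
        }
        catch (AssertionError e)
        {
            System.out.println("replacement aborted: " + e.getMessage());
        }
        // Nothing removed the old sstables: 'live' is still [old-1, old-2].
        System.out.println("live sstables: " + live);
    }
}
{code}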



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248291#comment-15248291
 ] 

Philip Thompson edited comment on CASSANDRA-11598 at 4/19/16 6:46 PM:
--

So there is nothing wrong in the logs for this build, and this test has never 
failed on any other job ever. I marked it as flaky in the code, and I'll run it 
a few hundred times to see if I can get it to fail again usefully.

The bad news is that this seems isolated to 2.1, or at least isn't happening on trunk:
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/72/
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/71/


was (Author: philipthompson):
So there is nothing wrong in the logs for this build, and this test has never 
failed on any other job ever. I marked it as flaky in the code, and I'll run it 
a few hundred times to see if I can get it to fail again usefully.

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  

[jira] [Commented] (CASSANDRA-11595) Cassandra cannot start because of empty commitlog

2016-04-19 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248361#comment-15248361
 ] 

DOAN DuyHai commented on CASSANDRA-11595:
-

Ok so it's like a vicious circle.

 But then all those things are just *consequences* of another problem, which is 
*too many open files*. Sure, we can fix this side effect, but we need to find 
the root cause of why there are too many open files.

> Cassandra cannot start because of empty commitlog
> -
>
> Key: CASSANDRA-11595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11595
> Project: Cassandra
>  Issue Type: Bug
>Reporter: n0rad
>Assignee: Benjamin Lerer
>
> After the crash of CASSANDRA-11594.
> Cassandra try to restart and fail because of commit log replay.
> Same on 4 of the crashed nodes out of 6.
> ```
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Could not read commit log descriptor in file 
> /data/commitlog/CommitLog-6-1460632496764.log
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:644)
>  [apache-cassandra-3.0.5.jar:3.0.5]
> ```
> This file is empty and is not the commitlog with the latest date.
> ```
> ...
> -rw-r--r-- 1 root root 32M Apr 16 21:46 CommitLog-6-1460632496761.log
> -rw-r--r-- 1 root root 32M Apr 16 21:47 CommitLog-6-1460632496762.log
> -rw-r--r-- 1 root root 32M Apr 16 21:47 CommitLog-6-1460632496763.log
> -rw-r--r-- 1 root root   0 Apr 16 21:47 CommitLog-6-1460632496764.log
> -rw-r--r-- 1 root root 32M Apr 16 21:50 CommitLog-6-1460843401097.log
> -rw-r--r-- 1 root root 32M Apr 16 21:51 CommitLog-6-1460843513346.log
> -rw-r--r-- 1 root root 32M Apr 16 21:53 CommitLog-6-1460843619271.log
> -rw-r--r-- 1 root root 32M Apr 16 21:55 CommitLog-6-1460843730533.log
> -rw-r--r-- 1 root root 32M Apr 16 21:57 CommitLog-6-1460843834129.log
> -rw-r--r-- 1 root root 32M Apr 16 21:58 CommitLog-6-1460843935094.log
> -rw-r--r-- 1 root root 32M Apr 16 22:00 CommitLog-6-1460844038543.log
> -rw-r--r-- 1 root root 32M Apr 16 22:02 CommitLog-6-1460844141003.log
> ...
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11607) dtest failure in user_types_test.TestUserTypes.test_nested_user_types

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11607:

Labels: dtest windows  (was: dtest)

> dtest failure in user_types_test.TestUserTypes.test_nested_user_types
> -
>
> Key: CASSANDRA-11607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11607
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest, windows
>
> This is a single flap:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/user_types_test/TestUserTypes/test_nested_user_types
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> {code}
> Error Message
> Lists differ: [None] != [[u'test', u'test2']]
> First differing element 0:
> None
> [u'test', u'test2']
> - [None]
> + [[u'test', u'test2']]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-vgkgwi
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\user_types_test.py",
>  line 289, in test_nested_user_types
> self.assertEqual(listify(primary_item), [[u'test', u'test2']])
>   File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "C:\tools\python2\lib\unittest\case.py", line 724, in 
> assertSequenceEqual
> self.fail(msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> "Lists differ: [None] != [[u'test', u'test2']]\n\nFirst differing element 
> 0:\nNone\n[u'test', u'test2']\n\n- [None]\n+ [[u'test', 
> u'test2']]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-vgkgwi\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Error
> Started: node1 with pid: 4328
> Started: node3 with pid: 7568
> Started: node2 with pid: 7504
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11607) dtest failure in user_types_test.TestUserTypes.test_nested_user_types

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11607:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in user_types_test.TestUserTypes.test_nested_user_types
> -
>
> Key: CASSANDRA-11607
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11607
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> This is a single flap:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/user_types_test/TestUserTypes/test_nested_user_types
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> {code}
> Error Message
> Lists differ: [None] != [[u'test', u'test2']]
> First differing element 0:
> None
> [u'test', u'test2']
> - [None]
> + [[u'test', u'test2']]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-vgkgwi
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 329, in run
> testMethod()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\user_types_test.py",
>  line 289, in test_nested_user_types
> self.assertEqual(listify(primary_item), [[u'test', u'test2']])
>   File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "C:\tools\python2\lib\unittest\case.py", line 724, in 
> assertSequenceEqual
> self.fail(msg)
>   File "C:\tools\python2\lib\unittest\case.py", line 410, in fail
> raise self.failureException(msg)
> "Lists differ: [None] != [[u'test', u'test2']]\n\nFirst differing element 
> 0:\nNone\n[u'test', u'test2']\n\n- [None]\n+ [[u'test', 
> u'test2']]\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-vgkgwi\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> Standard Error
> Started: node1 with pid: 4328
> Started: node3 with pid: 7568
> Started: node2 with pid: 7504
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248291#comment-15248291
 ] 

Philip Thompson commented on CASSANDRA-11598:
-

So there is nothing wrong in the logs for this build, and this test has never 
failed on any other job ever. I marked it as flaky in the code, and I'll run it 
a few hundred times to see if I can get it to fail again usefully.

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> 

[jira] [Commented] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248274#comment-15248274
 ] 

Philip Thompson commented on CASSANDRA-11598:
-

I think no. It seems the existing search is checking if the IP is in the 
{{source}} column, given that is where we find {{' $IP '}}. Also, given the 
semantics of the test (running at CL.ALL), I would expect to see results from 
127.0.0.2 in there. We are missing expected trace output here.
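
Purely as an illustration of the space-padding point above (a stand-alone Java 
sketch, not the Python dtest itself, with trace rows abbreviated from the output 
quoted in this ticket): {{' 127.0.0.2 '}} only matches the bare IP standing alone in 
the {{source}} column, not the {{/127.0.0.2}} that appears inside activity messages.

{code}
// Stand-alone sketch: the real check is a substring assertion in the Python dtest.
public class TraceCheckSketch
{
    public static void main(String[] args)
    {
        // Two abbreviated trace rows: activity | timestamp | source | source_elapsed
        String trace =
            "reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 16:35:05.542000 | 127.0.0.1 | 1245\n"
          + "Executing single-partition query on users [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 | 369\n";

        // Padded with spaces, the IP only matches when it stands alone as a column value.
        System.out.println(trace.contains(" 127.0.0.2 ")); // false: only "/127.0.0.2" is present
        System.out.println(trace.contains(" 127.0.0.3 ")); // true: node3 appears as a source
    }
}
{code}

So the failed lookup really does mean 127.0.0.2 never shows up as a source for this 
CL.ALL read; the trace output is missing, not mis-parsed.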

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 

[jira] [Created] (CASSANDRA-11609) cassandra won't start with schema complaint that does not appear to be valid

2016-04-19 Thread Russ Hatch (JIRA)
Russ Hatch created CASSANDRA-11609:
--

 Summary: cassandra won't start with schema complaint that does not 
appear to be valid
 Key: CASSANDRA-11609
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11609
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch


This was found in the upgrades user_types_test.

Can be repro'd with ccm.

Create a 1 node ccm cluster on 2.2.x

Create this schema:
{noformat}
create keyspace test2 with replication = {'class':'SimpleStrategy', 
'replication_factor':1};
use test2;
CREATE TYPE address (
 street text,
 city text,
 zip_code int,
 phones set<text>
 );
CREATE TYPE fullname (
 firstname text,
 lastname text
 );
CREATE TABLE users (
 id uuid PRIMARY KEY,
 name frozen<fullname>,
 addresses map<text, frozen<address>>
 );
{noformat}

Upgrade the single node to trunk, attempt to start the node up. Start will fail 
with this exception:
{noformat}
ERROR [main] 2016-04-19 11:33:19,218 CassandraDaemon.java:704 - Exception 
encountered during startup
org.apache.cassandra.exceptions.InvalidRequestException: Non-frozen UDTs are 
not allowed inside collections: map
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.throwNestedNonFrozenError(CQL3Type.java:686)
 ~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepare(CQL3Type.java:652) 
~[main/:na]
at 
org.apache.cassandra.cql3.CQL3Type$Raw$RawCollection.prepareInternal(CQL3Type.java:644)
 ~[main/:na]
at 
org.apache.cassandra.schema.CQLTypeParser.parse(CQLTypeParser.java:53) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.createColumnFromRow(SchemaKeyspace.java:1022)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.lambda$fetchColumns$12(SchemaKeyspace.java:1006)
 ~[main/:na]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_77]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchColumns(SchemaKeyspace.java:1006)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:960) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:939) 
~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:902)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:879)
 ~[main/:na]
at 
org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:867)
 ~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:134) 
~[main/:na]
at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:124) 
~[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:558) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:687) 
[main/:na]
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11608) dtest failure in replace_address_test.TestReplaceAddress.replace_first_boot_test

2016-04-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11608:


 Summary: dtest failure in 
replace_address_test.TestReplaceAddress.replace_first_boot_test
 Key: CASSANDRA-11608
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11608
 Project: Cassandra
  Issue Type: Test
Reporter: Jim Witschey
Assignee: DS Test Eng


This looks like a timeout kind of flap. It's flapped once. Example failure:

http://cassci.datastax.com/job/cassandra-2.2_offheap_dtest/344/testReport/replace_address_test/TestReplaceAddress/replace_first_boot_test

Failed on CassCI build cassandra-2.2_offheap_dtest #344 - 2.2.6-tentative

{code}
Error Message

15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:
INFO  [main] 2016-04-15 16:21:32,345 Config.java:4.
See system.log for remainder
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-4i5qkE
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'memtable_allocation_type': 'offheap_objects',
'num_tokens': '32',
'phi_convict_threshold': 5,
'start_rpc': 'true'}
dtest: DEBUG: Starting cluster with 3 nodes.
dtest: DEBUG: 32
dtest: DEBUG: Inserting Data...
dtest: DEBUG: Stopping node 3.
dtest: DEBUG: Testing node stoppage (query should fail).
dtest: DEBUG: Retrying read after timeout. Attempt #0
dtest: DEBUG: Retrying read after timeout. Attempt #1
dtest: DEBUG: Retrying request after UE. Attempt #2
dtest: DEBUG: Retrying request after UE. Attempt #3
dtest: DEBUG: Retrying request after UE. Attempt #4
dtest: DEBUG: Starting node 4 to replace node 3
dtest: DEBUG: Verifying querying works again.
dtest: DEBUG: Verifying tokens migrated sucessfully
dtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 TokenMetadata.java:196 - 
Token -3855903180169109916 changing ownership from /127.0.0.3 to /127.0.0.4\n', 
<_sre.SRE_Match object at 0x7fd21c0e2370>)
dtest: DEBUG: Try to restart node 3 (should fail)
dtest: DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
StorageService.java:1962 - Host ID collision for 
75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
/127.0.0.4 is the new owner\n', <_sre.SRE_Match object at 0x7fd1f83555e0>)]
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/replace_address_test.py", line 212, in 
replace_first_boot_test
node4.start(wait_for_binary_proto=True)
  File "/home/automaton/ccm/ccmlib/node.py", line 610, in start
node.watch_log_for_alive(self, from_mark=mark)
  File "/home/automaton/ccm/ccmlib/node.py", line 457, in watch_log_for_alive
self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, 
filename=filename)
  File "/home/automaton/ccm/ccmlib/node.py", line 425, in watch_log_for
raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " [" 
+ self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
reads[:50] + ".\nSee {} for remainder".format(filename))
"15 Apr 2016 16:23:41 [node3] Missing: ['127.0.0.4.* now UP']:\nINFO  [main] 
2016-04-15 16:21:32,345 Config.java:4.\nSee system.log for 
remainder\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/mnt/tmp/dtest-4i5qkE\ndtest: DEBUG: Custom init_config not found. Setting 
defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'memtable_allocation_type': 'offheap_objects',\n
'num_tokens': '32',\n'phi_convict_threshold': 5,\n'start_rpc': 
'true'}\ndtest: DEBUG: Starting cluster with 3 nodes.\ndtest: DEBUG: 32\ndtest: 
DEBUG: Inserting Data...\ndtest: DEBUG: Stopping node 3.\ndtest: DEBUG: Testing 
node stoppage (query should fail).\ndtest: DEBUG: Retrying read after timeout. 
Attempt #0\ndtest: DEBUG: Retrying read after timeout. Attempt #1\ndtest: 
DEBUG: Retrying request after UE. Attempt #2\ndtest: DEBUG: Retrying request 
after UE. Attempt #3\ndtest: DEBUG: Retrying request after UE. Attempt 
#4\ndtest: DEBUG: Starting node 4 to replace node 3\ndtest: DEBUG: Verifying 
querying works again.\ndtest: DEBUG: Verifying tokens migrated 
sucessfully\ndtest: DEBUG: ('WARN  [main] 2016-04-15 16:21:21,068 
TokenMetadata.java:196 - Token -3855903180169109916 changing ownership from 
/127.0.0.3 to /127.0.0.4\\n', <_sre.SRE_Match object at 
0x7fd21c0e2370>)\ndtest: DEBUG: Try to restart node 3 (should fail)\ndtest: 
DEBUG: [('WARN  [GossipStage:1] 2016-04-15 16:21:22,942 
StorageService.java:1962 - Host ID collision for 
75916cc0-86ec-4136-b336-862a49953616 between /127.0.0.3 and /127.0.0.4; 
/127.0.0.4 is the new owner\\n', <_sre.SRE_Match object at 

[jira] [Updated] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-04-19 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11310:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks, committed the follow-up to trunk in 
{{c83729f41d358ce3ca2ac0323704ef516dff9298}}.

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.6
>
>
> Since CASSANDRA-6377, queries filtering on non-primary-key columns without a 
> secondary index are fully supported.
> It makes sense to also support filtering on clustering columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Make sure that indexing/filtering restrictions are picked up correctly even if the columns are given in order

2016-04-19 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/trunk 30bfa07a4 -> c83729f41


Make sure that indexing/filtering restrictions are picked up correctly even if 
the columns are given in order

Patch by Alex Petrov; reviewed by Sam Tunnicliffe and Benjamin Lerer for 
CASSANDRA-11310


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c83729f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c83729f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c83729f4

Branch: refs/heads/trunk
Commit: c83729f41d358ce3ca2ac0323704ef516dff9298
Parents: 30bfa07
Author: Alex Petrov 
Authored: Tue Apr 19 10:43:12 2016 +0200
Committer: Sam Tunnicliffe 
Committed: Tue Apr 19 18:05:29 2016 +0100

--
 .../restrictions/StatementRestrictions.java | 39 +++---
 .../entities/FrozenCollectionsTest.java |  2 +-
 .../cql3/validation/operations/SelectTest.java  | 56 
 3 files changed, 77 insertions(+), 20 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83729f4/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java 
b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index b146cfd..c4d7622 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -42,7 +42,6 @@ import org.apache.cassandra.utils.btree.BTreeSet;
 
 import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkFalse;
 import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkNotNull;
-import static 
org.apache.cassandra.cql3.statements.RequestValidations.checkTrue;
 import static 
org.apache.cassandra.cql3.statements.RequestValidations.invalidRequest;
 
 /**
@@ -471,35 +470,37 @@ public final class StatementRestrictions
 checkFalse(clusteringColumnsRestrictions.hasIN() && 
selectsComplexColumn,
"Cannot restrict clustering columns by IN relations 
when a collection is selected by the query");
 checkFalse(clusteringColumnsRestrictions.hasContains() && 
!hasQueriableIndex && !allowFiltering,
-   "Cannot restrict clustering columns by a CONTAINS 
relation without a secondary index");
 
+   "Clustering columns can only be restricted with 
CONTAINS with a secondary index or filtering");
 
-if (hasClusteringColumnsRestriction())
+if (hasClusteringColumnsRestriction() && 
clusteringColumnsRestrictions.needFiltering())
 {
-List<ColumnDefinition> clusteringColumns = 
cfm.clusteringColumns();
-List<ColumnDefinition> restrictedColumns = new 
LinkedList<>(clusteringColumnsRestrictions.getColumnDefs());
-
-for (int i = 0, m = restrictedColumns.size(); i < m; i++)
+if (hasQueriableIndex || forView)
+{
+usesSecondaryIndexing = true;
+}
+else
 {
-ColumnDefinition clusteringColumn = 
clusteringColumns.get(i);
-ColumnDefinition restrictedColumn = 
restrictedColumns.get(i);
+List<ColumnDefinition> clusteringColumns = 
cfm.clusteringColumns();
+List<ColumnDefinition> restrictedColumns = new 
LinkedList<>(clusteringColumnsRestrictions.getColumnDefs());
 
-if (!clusteringColumn.equals(restrictedColumn) && 
!allowFiltering)
+for (int i = 0, m = restrictedColumns.size(); i < m; i++)
 {
-checkTrue(hasQueriableIndex || forView,
-  "PRIMARY KEY column \"%s\" cannot be 
restricted as preceding column \"%s\" is not restricted",
-  restrictedColumn.name,
-  clusteringColumn.name);
-
-usesSecondaryIndexing = true; // handle gaps and 
non-keyrange cases.
-break;
+ColumnDefinition clusteringColumn = 
clusteringColumns.get(i);
+ColumnDefinition restrictedColumn = 
restrictedColumns.get(i);
+
+if (!clusteringColumn.equals(restrictedColumn) && 
!allowFiltering)
+{
+throw invalidRequest("PRIMARY KEY column \"%s\" 
cannot be restricted as preceding column \"%s\" is not restricted",
+ restrictedColumn.name,
+ 
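Purely as a reading aid for the hunk above, here is a stand-alone Java paraphrase of 
the reworked control flow. The flags needFiltering, hasQueriableIndex, forView and 
allowFiltering are taken from the diff; the simplified types, the exception and the 
example in main are invented for the sketch:

{code}
import java.util.List;

// Paraphrase of the new flow in StatementRestrictions, not the actual code.
class ClusteringRestrictionFlowSketch
{
    static boolean usesSecondaryIndexing = false;

    static void checkClusteringRestrictions(List<String> clusteringColumns,
                                            List<String> restrictedColumns,
                                            boolean needFiltering,
                                            boolean hasQueriableIndex,
                                            boolean forView,
                                            boolean allowFiltering)
    {
        // Only restrictions that actually need filtering are inspected further.
        if (!needFiltering)
            return;

        // With a queriable index (or for a view), fall back to secondary indexing.
        if (hasQueriableIndex || forView)
        {
            usesSecondaryIndexing = true;
            return;
        }

        // Otherwise each restricted column must line up with the clustering order,
        // unless ALLOW FILTERING was given.
        for (int i = 0; i < restrictedColumns.size(); i++)
        {
            if (!clusteringColumns.get(i).equals(restrictedColumns.get(i)) && !allowFiltering)
                throw new IllegalArgumentException(String.format(
                    "PRIMARY KEY column \"%s\" cannot be restricted as preceding column \"%s\" is not restricted",
                    restrictedColumns.get(i), clusteringColumns.get(i)));
        }
    }

    public static void main(String[] args)
    {
        try
        {
            // Using the emp_table2 example from the CASSANDRA-11310 description:
            // b_mon is restricted while the preceding clustering column b_yr is not,
            // and ALLOW FILTERING is absent, so the request is rejected.
            checkClusteringRestrictions(List.of("b_yr", "b_mon", "b_day"),
                                        List.of("b_mon"),
                                        true, false, false, false);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
{code}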

[jira] [Created] (CASSANDRA-11607) dtest failure in user_types_test.TestUserTypes.test_nested_user_types

2016-04-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11607:


 Summary: dtest failure in 
user_types_test.TestUserTypes.test_nested_user_types
 Key: CASSANDRA-11607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11607
 Project: Cassandra
  Issue Type: Test
Reporter: Jim Witschey
Assignee: DS Test Eng


This is a single flap:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/user_types_test/TestUserTypes/test_nested_user_types

Failed on CassCI build cassandra-2.2_dtest_win32 #217

{code}
Error Message

Lists differ: [None] != [[u'test', u'test2']]

First differing element 0:
None
[u'test', u'test2']

- [None]
+ [[u'test', u'test2']]
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: d:\temp\dtest-vgkgwi
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
- >> end captured logging << -
Stacktrace

  File "C:\tools\python2\lib\unittest\case.py", line 329, in run
testMethod()
  File 
"D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\user_types_test.py",
 line 289, in test_nested_user_types
self.assertEqual(listify(primary_item), [[u'test', u'test2']])
  File "C:\tools\python2\lib\unittest\case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "C:\tools\python2\lib\unittest\case.py", line 742, in assertListEqual
self.assertSequenceEqual(list1, list2, msg, seq_type=list)
  File "C:\tools\python2\lib\unittest\case.py", line 724, in assertSequenceEqual
self.fail(msg)
  File "C:\tools\python2\lib\unittest\case.py", line 410, in fail
raise self.failureException(msg)
"Lists differ: [None] != [[u'test', u'test2']]\n\nFirst differing element 
0:\nNone\n[u'test', u'test2']\n\n- [None]\n+ [[u'test', 
u'test2']]\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
d:\\temp\\dtest-vgkgwi\ndtest: DEBUG: Custom init_config not found. Setting 
defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\n- >> end captured logging << -"
Standard Error

Started: node1 with pid: 4328
Started: node3 with pid: 7568
Started: node2 with pid: 7504
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-19 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248148#comment-15248148
 ] 

Joel Knighton commented on CASSANDRA-10756:
---

Thanks for picking this up - these little things can linger.

CI has a few small failures - looking at them, they're all on dtests that 
changed for recent trunk commits not yet included in your branch. After a 
rebase, they all pass as expected.

+1 LGTM.

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.<init>(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) 
> ~[na:1.8.0_60]
>   at java.util.stream.AbstractTask.compute(AbstractTask.java:316) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) 
> ~[na:1.8.0_60]
>   at 

[jira] [Updated] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-19 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10756:
--
Fix Version/s: 3.x

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
> Fix For: 3.x
>
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.<init>(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) 
> ~[na:1.8.0_60]
>   at java.util.stream.AbstractTask.compute(AbstractTask.java:316) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> ~[na:1.8.0_60]
>   at 
> 

[jira] [Updated] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-19 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10756:
--
Reviewer: Joel Knighton

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.<init>(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) 
> ~[na:1.8.0_60]
>   at java.util.stream.AbstractTask.compute(AbstractTask.java:316) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> ~[na:1.8.0_60]
>   at 
> 

[jira] [Updated] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-19 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-10756:
--
Status: Ready to Commit  (was: Patch Available)

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.<init>(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) 
> ~[na:1.8.0_60]
>   at java.util.stream.AbstractTask.compute(AbstractTask.java:316) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> ~[na:1.8.0_60]
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> ~[na:1.8.0_60]
>   at 
> 

[jira] [Assigned] (CASSANDRA-11598) dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11598:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_simple_test
> 
>
> Key: CASSANDRA-11598
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11598
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/330/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test
> Failed on CassCI build cassandra-2.1_offheap_dtest #330 - 2.1.14-tentative
> {{' 127.0.0.2 '}} isn't in the trace, but it looks like {{'\127.0.0.2 '}} is 
> -- should we change the regex here?
> {code}
> Error Message
> ' 127.0.0.2 ' not found in 'Consistency level set to ALL.\nNow Tracing is 
> enabled\n\n firstname | lastname\n---+--\n Frodo |  
> Baggins\n\n(1 rows)\n\nTracing session: 
> 0268da20-0328-11e6-b014-53144f0dba91\n\n activity 
>   
>  | timestamp  | source| 
> source_elapsed\n-++---+\n
>   
> Execute CQL3 query | 2016-04-15 16:35:05.538000 | 
> 127.0.0.1 |  0\n
> READ message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] 
> | 2016-04-15 16:35:05.54 | 127.0.0.3 | 47\n Parsing SELECT 
> firstname, lastname FROM ks.users WHERE userid = 
> 550e8400-e29b-41d4-a716-44665544; [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.54 | 127.0.0.1 | 88\n
>Preparing statement 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.54 | 127.0.0.1 |
> 355\n
> reading digest from /127.0.0.2 [SharedPool-Worker-2] | 2016-04-15 
> 16:35:05.542000 | 127.0.0.1 |   1245\n
>  Executing single-partition query on users 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.542000 | 127.0.0.1 |   
> 1249\n
>   Acquiring sstable references [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.545000 | 127.0.0.1 |   1265\n
>  Executing single-partition query on users 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.546000 | 127.0.0.3 |
> 369\n 
>   Merging memtable tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.546000 | 127.0.0.1 |   1302\n
> reading digest from /127.0.0.3 
> [SharedPool-Worker-2] | 2016-04-15 16:35:05.547000 | 127.0.0.1 |   
> 1338\n Skipped 0/0 non-slice-intersecting 
> sstables, included 0 due to tombstones [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.548000 | 127.0.0.1 |   1403\n
>   Acquiring sstable references 
> [SharedPool-Worker-1] | 2016-04-15 16:35:05.549000 | 127.0.0.3 |
> 392\nMerging data 
> from memtables and 0 sstables [SharedPool-Worker-3] | 2016-04-15 
> 16:35:05.549000 | 127.0.0.1 |   1428\n
>  Read 1 live and 0 tombstone cells 
> [SharedPool-Worker-3] | 2016-04-15 16:35:05.55 | 127.0.0.1 |   
> 1509\n   Sending READ message 
> to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2016-04-15 
> 16:35:05.55 | 127.0.0.1 |   1780\n
>Sending READ message to /127.0.0.2 
> [MessagingService-Outgoing-/127.0.0.2] | 2016-04-15 16:35:05.551000 | 
> 127.0.0.1 |   3748\n
> REQUEST_RESPONSE message received from /127.0.0.3 
> [MessagingService-Incoming-/127.0.0.3] | 2016-04-15 16:35:05.552000 | 
> 127.0.0.1 |   4454\n  
> 
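
A purely illustrative sketch of the relaxed match the description asks about:
accept the source address whether or not the trace prefixes it with '/' or
'\'. The real dtest assertion is Python; the pattern itself is what matters,
shown here as a standalone Java class with hypothetical names.
{code}
import java.util.regex.Pattern;

public class TraceSourceCheck
{
    // Optional leading '/' or '\' before the address, then whitespace, so both
    // " 127.0.0.2 " and "/127.0.0.2 " (or "\127.0.0.2 ") are accepted.
    private static final Pattern NODE2_SOURCE = Pattern.compile("[\\\\/]?127\\.0\\.0\\.2\\s");

    public static boolean traceMentionsNode2(String traceOutput)
    {
        return NODE2_SOURCE.matcher(traceOutput).find();
    }

    public static void main(String[] args)
    {
        System.out.println(traceMentionsNode2("reading digest from /127.0.0.2 [SharedPool-Worker-2]")); // true
        System.out.println(traceMentionsNode2("| 2016-04-15 16:35:05.54 | 127.0.0.2 | 47"));            // true
    }
}
{code}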

[jira] [Created] (CASSANDRA-11606) Upgrade from 2.1.9 to 3.0.5 Fails with AssertionError

2016-04-19 Thread Anthony Verslues (JIRA)
Anthony Verslues created CASSANDRA-11606:


 Summary: Upgrade from 2.1.9 to 3.0.5 Fails with AssertionError
 Key: CASSANDRA-11606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11606
 Project: Cassandra
  Issue Type: Bug
 Environment: Fedora 20, Oracle Java 8, Apache Cassandra 2.1.9 -> 3.0.5
Reporter: Anthony Verslues
 Fix For: 3.0.x


I get this error while upgrading sstables. I got the same error when upgrading 
to 3.0.2 and 3.0.4.


error: null
-- StackTrace --
java.lang.AssertionError
at 
org.apache.cassandra.db.LegacyLayout$CellGrouper.addCell(LegacyLayout.java:1167)
at 
org.apache.cassandra.db.LegacyLayout$CellGrouper.addAtom(LegacyLayout.java:1142)
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.readRow(UnfilteredDeserializer.java:444)
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:289)
at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:57)
at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:65)
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
at 
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:442)
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:416)
at 
org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:313)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248094#comment-15248094
 ] 

Philip Thompson edited comment on CASSANDRA-11414 at 4/19/16 4:25 PM:
--

So it appears there can sometimes be issues on the receiving node when the 
stream dies, causing that node to fail to start listening for clients.

A successful run: 
https://gist.github.com/ptnapoleon/b999e42e3e0a39de233d875ac00ee79b
A failed run: 
https://gist.github.com/ptnapoleon/00e1e5739ad867b9dfb10fbf16da9022

Here is a collection of several hundred runs of this test, to give an idea of 
how often it flakes, and how.
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/68/testReport/


was (Author: philipthompson):
So it appears there can sometimes be issues on the receiving node when the 
stream dies, causing that node to fail to start listening for clients.

A successful run: 
https://gist.github.com/ptnapoleon/b999e42e3e0a39de233d875ac00ee79b
A failed run: 
https://gist.github.com/ptnapoleon/00e1e5739ad867b9dfb10fbf16da9022

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 3.x
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11414:

Fix Version/s: 3.x

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 3.x
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248094#comment-15248094
 ] 

Philip Thompson commented on CASSANDRA-11414:
-

So it appears there can sometimes be issues on the receiving node when the 
stream dies, causing that node to fail to start listening for clients.

A successful run: 
https://gist.github.com/ptnapoleon/b999e42e3e0a39de233d875ac00ee79b
A failed run: 
https://gist.github.com/ptnapoleon/00e1e5739ad867b9dfb10fbf16da9022

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11414:

Issue Type: Bug  (was: Test)

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11414:

Assignee: Paulo Motta  (was: DS Test Eng)

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-04-19 Thread giridhar muralibabu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248086#comment-15248086
 ] 

giridhar muralibabu commented on CASSANDRA-11602:
-

Adding static columns to MVs will help developers make maximum use of 
materialized views.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-11043) Secondary indexes doesn't properly validate custom expressions

2016-04-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña reopened CASSANDRA-11043:
---
Reproduced In: 3.5, 3.0.5  (was: 3.2.1)

The validation method works great with CQL queries producing a 
{{PartitionRangeReadCommand}}, but it is not called with CQL queries producing 
one or more {{SinglePartitionReadCommand}}.

For example, the following query properly validates the expression:
{code}
SELECT * FROM test WHERE expr(test_idx, 'error');
{code}
However, the following queries skip validation:
{code}
SELECT * FROM test WHERE expr(test_idx, 'error') AND id=1;
SELECT * FROM test WHERE expr(test_idx, 'error') AND id IN (1,2);
{code}

> Secondary indexes doesn't properly validate custom expressions
> --
>
> Key: CASSANDRA-11043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11043
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Sam Tunnicliffe
>  Labels: 2i, index, validation
> Fix For: 3.0.4, 3.4
>
> Attachments: test-index.zip
>
>
> It seems that 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] is 
> broken in Cassandra 3.x. As stated in the secondary indexes' API 
> documentation, custom index implementations should perform any validation of 
> query expressions at {{Index#searcherFor(ReadCommand)}}, throwing an 
> {{InvalidRequestException}} if the expressions are not valid. I assume these 
> validation errors should produce an {{InvalidRequest}} error on cqlsh, or 
> raise an {{InvalidQueryException}} on Java driver. However, when 
> {{Index#searcherFor(ReadCommand)}} throws its {{InvalidRequestException}}, I 
> get this cqlsh output:
> {noformat}
> Traceback (most recent call last):
>   File "bin/cqlsh.py", line 1246, in perform_simple_statement
> result = future.result()
>   File 
> "/Users/adelapena/stratio/platform/src/cassandra-3.2.1/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> I attach a dummy index implementation to reproduce the error:
> {noformat}
> CREATE KEYSPACE test with replication = {'class' : 'SimpleStrategy', 
> 'replication_factor' : '1' }; 
> CREATE TABLE test.test (id int PRIMARY KEY, value varchar); 
> CREATE CUSTOM INDEX test_index ON test.test() USING 'com.stratio.TestIndex'; 
> SELECT * FROM test.test WHERE expr(test_index,'ok');
> SELECT * FROM test.test WHERE expr(test_index,'error');
> {noformat}
> This is specially problematic when using Cassandra Java Driver, because one 
> of these server exceptions can produce subsequent queries fail (even if they 
> are valid) with a no host available exception.
> Maybe the validation method added with 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] should 
> be restored, unless there is a way to properly manage the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9861) When forcibly exiting due to OOM, we should produce a heap dump

2016-04-19 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-9861:
--
Reviewer: Aleksey Yeschenko  (was: Joshua McKenzie)

> When forcibly exiting due to OOM, we should produce a heap dump
> ---
>
> Key: CASSANDRA-9861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9861
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Lifecycle
>Reporter: Benedict
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x
>
> Attachments: 9861-2.2-V2.txt, 9861-2.2.txt
>
>
> CASSANDRA-7507 introduced earlier termination on encountering an OOM, due to 
> lack of certainty about system state. However a side effect is that we never 
> produce heap dumps on OOM. We should ideally try to produce one forcibly 
> before exiting.
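
For concreteness, a self-contained sketch of how a heap dump can be forced
from the exit path using the standard HotSpotDiagnosticMXBean. The class name,
dump path handling and call site are assumptions for illustration, not the
attached 9861-2.2 patches.
{code}
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public final class HeapDumper
{
    private HeapDumper() {}

    // Writes an .hprof snapshot to the given path; similar in spirit to what
    // the -XX:+HeapDumpOnOutOfMemoryError JVM flag does automatically.
    public static void dumpHeap(String path)
    {
        try
        {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, true); // true = dump only live objects
        }
        catch (Exception e)
        {
            // Best effort only: the JVM is already in a bad state when this runs.
            e.printStackTrace();
        }
    }
}
{code}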



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2016-04-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248041#comment-15248041
 ] 

Jason Brown commented on CASSANDRA-9633:


[~iamaleksey] Please :)

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 3.x
>
>
> Add option to allow encrypting of sstables.
> I have a version of this functionality built on cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).
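
To make the compression-path idea concrete, a rough, self-contained sketch of
the per-chunk transformation such an encrypting plug-in would perform, using
only the JDK's javax.crypto API. It deliberately does not implement the
ICompressor interface (whose signatures differ across versions), and the key
handling is a placeholder, not part of this ticket's proposal.
{code}
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class ChunkCipher
{
    private static final SecureRandom RANDOM = new SecureRandom();

    // Encrypts one chunk; a fresh 16-byte IV is prepended to the ciphertext so
    // decrypt() can recover it. Key management is out of scope here.
    public static byte[] encrypt(byte[] chunk, SecretKeySpec key) throws Exception
    {
        byte[] iv = new byte[16];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] encrypted = cipher.doFinal(chunk);
        byte[] framed = new byte[16 + encrypted.length];
        System.arraycopy(iv, 0, framed, 0, 16);
        System.arraycopy(encrypted, 0, framed, 16, encrypted.length);
        return framed;
    }

    // Reads the IV back from the front of the frame and decrypts the rest.
    public static byte[] decrypt(byte[] framed, SecretKeySpec key) throws Exception
    {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(framed, 0, 16));
        return cipher.doFinal(framed, 16, framed.length - 16);
    }
}
{code}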



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11022) Use SHA hashing to store password in the credentials cache

2016-04-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248042#comment-15248042
 ] 

Sam Tunnicliffe commented on CASSANDRA-11022:
-

The ticket that made it configurable was CASSANDRA-8085

> Use SHA hashing to store password in the credentials cache
> --
>
> Key: CASSANDRA-11022
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11022
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Mike Adamson
> Fix For: 3.x
>
>
> In CASSANDRA-7715 a credentials cache has been added to the 
> {{PasswordAuthenticator}} to improve performance when multiple 
> authentications occur for the same user. 
> Unfortunately, the bcrypt hash is being cached which is one of the major 
> performance overheads in password authentication. 
> I propose that the cache is changed to use a SHA- hash to store the user 
> password. As long as the cache is cleared for the user on an unsuccessful 
> authentication this won't significantly increase the ability of an attacker 
> to use a brute force attack because every other attempt will use bcrypt.
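
A rough sketch of the proposed cache shape, assuming SHA-256 as the digest and
nothing about the actual PasswordAuthenticator internals: cache a digest of the
password keyed by user, compare digests on a hit, and fall back to (and
repopulate from) the expensive bcrypt check otherwise. All names here are
illustrative.
{code}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiPredicate;

public final class DigestCredentialsCache
{
    private final Map<String, byte[]> digests = new ConcurrentHashMap<>();
    private final BiPredicate<String, String> bcryptCheck; // delegate to the real bcrypt verification

    public DigestCredentialsCache(BiPredicate<String, String> bcryptCheck)
    {
        this.bcryptCheck = bcryptCheck;
    }

    public boolean authenticate(String username, String password) throws Exception
    {
        byte[] offered = sha256(password);
        byte[] cached = digests.get(username);

        // Fast path: constant-time digest comparison, no bcrypt work at all.
        if (cached != null && MessageDigest.isEqual(cached, offered))
            return true;

        // Slow path: full bcrypt verification. Cache only on success and clear
        // any stale entry on failure, so brute forcing still pays the bcrypt cost.
        if (bcryptCheck.test(username, password))
        {
            digests.put(username, offered);
            return true;
        }
        digests.remove(username);
        return false;
    }

    private static byte[] sha256(String password) throws Exception
    {
        return MessageDigest.getInstance("SHA-256").digest(password.getBytes(StandardCharsets.UTF_8));
    }
}
{code}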



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch asf/cassandra-2.2 into cassandra-3.0

2016-04-19 Thread blerer
Merge branch asf/cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e96d3e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e96d3e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e96d3e5

Branch: refs/heads/trunk
Commit: 0e96d3e5248750239f9b762b1f9fb2d28d763976
Parents: f85a20f 60997c2
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:49:13 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:51:41 2016 +0200

--
 .../apache/cassandra/cql3/functions/AggregateFcts.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e96d3e5/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --cc src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 79a08cd,cca6156..b2cae50
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@@ -703,101 -657,25 +703,101 @@@ public abstract class AggregateFcts
   * The SUM function for counter column values.
   */
  public static final AggregateFunction sumFunctionForCounter =
 -new NativeAggregateFunction("sum", CounterColumnType.instance, 
CounterColumnType.instance)
 +new NativeAggregateFunction("sum", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new LongSumAggregate();
 +}
 +};
 +
 +/**
 + * AVG function for counter column values.
 + */
 +public static final AggregateFunction avgFunctionForCounter =
 +new NativeAggregateFunction("avg", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new LongAvgAggregate();
 +}
 +};
 +
 +/**
 + * The MIN function for counter column values.
 + */
 +public static final AggregateFunction minFunctionForCounter =
 +new NativeAggregateFunction("min", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new Aggregate()
  {
 -public Aggregate newAggregate()
 +private Long min;
 +
 +public void reset()
  {
 -return new LongSumAggregate();
 +min = null;
 +}
 +
 +public ByteBuffer compute(int protocolVersion)
 +{
 +return min != null ? LongType.instance.decompose(min) : 
null;
 +}
 +
 +public void addInput(int protocolVersion, List<ByteBuffer> values)
 +{
 +ByteBuffer value = values.get(0);
 +
 +if (value == null)
 +return;
 +
 +long lval = LongType.instance.compose(value);
 +
 +if (min == null || lval < min)
 +min = lval;
  }
  };
 +}
 +};
  
  /**
-- * AVG function for counter column values.
++ * MAX function for counter column values.
   */
 -public static final AggregateFunction avgFunctionForCounter =
 -new NativeAggregateFunction("avg", CounterColumnType.instance, 
CounterColumnType.instance)
 +public static final AggregateFunction maxFunctionForCounter =
 +new NativeAggregateFunction("max", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new Aggregate()
  {
 -public Aggregate newAggregate()
 +private Long max;
 +
 +public void reset()
  {
 -return new LongAvgAggregate();
 +max = null;
 +}
 +
 +public ByteBuffer compute(int protocolVersion)
 +{
 +return max != null ? LongType.instance.decompose(max) : 
null;
 +}
 +
 +public void addInput(int protocolVersion, List<ByteBuffer> values)
 +{
 +ByteBuffer value = values.get(0);
 +
 +if (value == null)
 +return;
 +
 +long lval = LongType.instance.compose(value);
 +
 +if (max == null || lval > max)
 +max = lval;
  }
  };
 +}
 +};
  
  /**
   * Creates a MAX function for the 

[3/3] cassandra git commit: Merge branch cassandra-3.0 into trunk

2016-04-19 Thread blerer
Merge branch cassandra-3.0 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30bfa07a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30bfa07a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30bfa07a

Branch: refs/heads/trunk
Commit: 30bfa07a48d9a0a2fb206e1b56e9dd9396bc1984
Parents: 391cae6 0e96d3e
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:53:25 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:54:12 2016 +0200

--
 .../apache/cassandra/cql3/functions/AggregateFcts.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/30bfa07a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--



[1/3] cassandra git commit: Fix javadoc in AggregateFcts

2016-04-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 391cae615 -> 30bfa07a4


Fix javadoc in AggregateFcts


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60997c2d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60997c2d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60997c2d

Branch: refs/heads/trunk
Commit: 60997c2dedc23a80dcc15fe6b1e2e7769ad2d383
Parents: 97a43ff
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:46:21 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:46:21 2016 +0200

--
 .../org/apache/cassandra/cql3/functions/AggregateFcts.java   | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60997c2d/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 77be525..cca6156 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -240,7 +240,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for byte values (tinyint).
  */
 public static final AggregateFunction sumFunctionForByte =
 new NativeAggregateFunction("sum", ByteType.instance, 
ByteType.instance)
@@ -276,7 +276,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for byte values (tinyint).
  */
 public static final AggregateFunction avgFunctionForByte =
 new NativeAggregateFunction("avg", ByteType.instance, 
ByteType.instance)
@@ -318,7 +318,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for short values (smallint).
  */
 public static final AggregateFunction sumFunctionForShort =
 new NativeAggregateFunction("sum", ShortType.instance, 
ShortType.instance)
@@ -354,7 +354,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for for short values (smallint).
  */
 public static final AggregateFunction avgFunctionForShort =
 new NativeAggregateFunction("avg", ShortType.instance, 
ShortType.instance)



[2/2] cassandra git commit: Merge branch asf/cassandra-2.2 into cassandra-3.0

2016-04-19 Thread blerer
Merge branch asf/cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e96d3e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e96d3e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e96d3e5

Branch: refs/heads/cassandra-3.0
Commit: 0e96d3e5248750239f9b762b1f9fb2d28d763976
Parents: f85a20f 60997c2
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:49:13 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:51:41 2016 +0200

--
 .../apache/cassandra/cql3/functions/AggregateFcts.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e96d3e5/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --cc src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 79a08cd,cca6156..b2cae50
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@@ -703,101 -657,25 +703,101 @@@ public abstract class AggregateFcts
   * The SUM function for counter column values.
   */
  public static final AggregateFunction sumFunctionForCounter =
 -new NativeAggregateFunction("sum", CounterColumnType.instance, 
CounterColumnType.instance)
 +new NativeAggregateFunction("sum", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new LongSumAggregate();
 +}
 +};
 +
 +/**
 + * AVG function for counter column values.
 + */
 +public static final AggregateFunction avgFunctionForCounter =
 +new NativeAggregateFunction("avg", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new LongAvgAggregate();
 +}
 +};
 +
 +/**
 + * The MIN function for counter column values.
 + */
 +public static final AggregateFunction minFunctionForCounter =
 +new NativeAggregateFunction("min", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new Aggregate()
  {
 -public Aggregate newAggregate()
 +private Long min;
 +
 +public void reset()
  {
 -return new LongSumAggregate();
 +min = null;
 +}
 +
 +public ByteBuffer compute(int protocolVersion)
 +{
 +return min != null ? LongType.instance.decompose(min) : 
null;
 +}
 +
 +public void addInput(int protocolVersion, List<ByteBuffer> values)
 +{
 +ByteBuffer value = values.get(0);
 +
 +if (value == null)
 +return;
 +
 +long lval = LongType.instance.compose(value);
 +
 +if (min == null || lval < min)
 +min = lval;
  }
  };
 +}
 +};
  
  /**
-- * AVG function for counter column values.
++ * MAX function for counter column values.
   */
 -public static final AggregateFunction avgFunctionForCounter =
 -new NativeAggregateFunction("avg", CounterColumnType.instance, 
CounterColumnType.instance)
 +public static final AggregateFunction maxFunctionForCounter =
 +new NativeAggregateFunction("max", CounterColumnType.instance, 
CounterColumnType.instance)
 +{
 +public Aggregate newAggregate()
 +{
 +return new Aggregate()
  {
 -public Aggregate newAggregate()
 +private Long max;
 +
 +public void reset()
  {
 -return new LongAvgAggregate();
 +max = null;
 +}
 +
 +public ByteBuffer compute(int protocolVersion)
 +{
 +return max != null ? LongType.instance.decompose(max) : 
null;
 +}
 +
 +public void addInput(int protocolVersion, List<ByteBuffer> values)
 +{
 +ByteBuffer value = values.get(0);
 +
 +if (value == null)
 +return;
 +
 +long lval = LongType.instance.compose(value);
 +
 +if (max == null || lval > max)
 +max = lval;
  }
  };
 +}
 +};
  
  /**
   * Creates a MAX function 

[1/2] cassandra git commit: Fix javadoc in AggregateFcts

2016-04-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 f85a20f79 -> 0e96d3e52


Fix javadoc in AggregateFcts


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60997c2d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60997c2d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60997c2d

Branch: refs/heads/cassandra-3.0
Commit: 60997c2dedc23a80dcc15fe6b1e2e7769ad2d383
Parents: 97a43ff
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:46:21 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:46:21 2016 +0200

--
 .../org/apache/cassandra/cql3/functions/AggregateFcts.java   | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60997c2d/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 77be525..cca6156 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -240,7 +240,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for byte values (tinyint).
  */
 public static final AggregateFunction sumFunctionForByte =
 new NativeAggregateFunction("sum", ByteType.instance, 
ByteType.instance)
@@ -276,7 +276,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for byte values (tinyint).
  */
 public static final AggregateFunction avgFunctionForByte =
 new NativeAggregateFunction("avg", ByteType.instance, 
ByteType.instance)
@@ -318,7 +318,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for short values (smallint).
  */
 public static final AggregateFunction sumFunctionForShort =
 new NativeAggregateFunction("sum", ShortType.instance, 
ShortType.instance)
@@ -354,7 +354,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for for short values (smallint).
  */
 public static final AggregateFunction avgFunctionForShort =
 new NativeAggregateFunction("avg", ShortType.instance, 
ShortType.instance)



cassandra git commit: Fix javadoc in AggregateFcts

2016-04-19 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 97a43 -> 60997c2de


Fix javadoc in AggregateFcts


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60997c2d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60997c2d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60997c2d

Branch: refs/heads/cassandra-2.2
Commit: 60997c2dedc23a80dcc15fe6b1e2e7769ad2d383
Parents: 97a43ff
Author: Benjamin Lerer 
Authored: Tue Apr 19 17:46:21 2016 +0200
Committer: Benjamin Lerer 
Committed: Tue Apr 19 17:46:21 2016 +0200

--
 .../org/apache/cassandra/cql3/functions/AggregateFcts.java   | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/60997c2d/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index 77be525..cca6156 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -240,7 +240,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for byte values (tinyint).
  */
 public static final AggregateFunction sumFunctionForByte =
 new NativeAggregateFunction("sum", ByteType.instance, 
ByteType.instance)
@@ -276,7 +276,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for byte values (tinyint).
  */
 public static final AggregateFunction avgFunctionForByte =
 new NativeAggregateFunction("avg", ByteType.instance, 
ByteType.instance)
@@ -318,7 +318,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * The SUM function for int32 values.
+ * The SUM function for short values (smallint).
  */
 public static final AggregateFunction sumFunctionForShort =
 new NativeAggregateFunction("sum", ShortType.instance, 
ShortType.instance)
@@ -354,7 +354,7 @@ public abstract class AggregateFcts
 };
 
 /**
- * AVG function for int32 values.
+ * AVG function for for short values (smallint).
  */
 public static final AggregateFunction avgFunctionForShort =
 new NativeAggregateFunction("avg", ShortType.instance, 
ShortType.instance)



[jira] [Comment Edited] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-04-19 Thread Ravishankar Rajendran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248002#comment-15248002
 ] 

Ravishankar Rajendran edited comment on CASSANDRA-11602 at 4/19/16 3:43 PM:


[~slebresne] Logically, all tables in C* are materialized views, i.e. tables 
cannot have foreign keys and are modeled around queries, which is what a view 
is in the RDBMS world. All that C* MVs do is automate that denormalization. 
Essentially, MVs in C* should behave like tables in every respect; they are 
nothing but tables that automate denormalization.

MVs must solve for static columns. MVs with no support for static columns will 
be very disappointing for developers. Static columns are a boon for developers 
and performance engineers. Static columns + MV is like a holy grail.

If I can manually denormalize data between two tables with static columns, I 
should be able to do the same with a table + MV. We must add support for static 
columns in MV.


was (Author: ravishankar_rajend...@yahoo.com):
[~slebresne] Logically, all tables in C* are materialized views, i.e. tables 
cannot have foreign keys and are modeled around queries, which is what a view 
is in the RDBMS world. All that C* MVs do is automate that denormalization. 
Essentially, MVs in C* should behave like tables in every respect; they are 
nothing but tables that automate denormalization.

MVs must solve for static columns. MVs with no support for static columns will 
be very disappointing for developers. Static columns are a boon for developers 
and performance engineers. Static columns + MV is like a holy grail.

We must add support for static columns in MV.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11602) Materialized View Doest Not Have Static Columns

2016-04-19 Thread Ravishankar Rajendran (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15248002#comment-15248002
 ] 

Ravishankar Rajendran commented on CASSANDRA-11602:
---

[~slebresne] Logically, all tables in C* are materialized views, i.e. tables 
cannot have foreign keys and are modeled around queries, which is what a view 
is in the RDBMS world. All that C* MVs do is automate that denormalization. 
Essentially, MVs in C* should behave like tables in every respect; they are 
nothing but tables that automate denormalization.

MVs must solve for static columns. MVs with no support for static columns will 
be very disappointing for developers. Static columns are a boon for developers 
and performance engineers. Static columns + MV is like a holy grail.

We must add support for static columns in MV.

> Materialized View Doest Not Have Static Columns
> ---
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have "location" column. Static columns are not getting 
> created in MV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247998#comment-15247998
 ] 

Philip Thompson commented on CASSANDRA-11597:
-

Given that I've seen this error on this test before, that it flaps at a steady 
rate: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/69/testReport/
and that it happens during the 2.0 steps in Thrift code, I don't think this is 
a huge problem. I'm just not sure how we want to shore up this test to reduce 
the number of timeouts.

> dtest failure in 
> upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
> ---
>
> Key: CASSANDRA-11597
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11597
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
>
> Looks like a new flap. Example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test
> Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative
> {code}
> Error Message
> TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Upgrading to binary:2.0.17
> dtest: DEBUG: Shutting down node: node1
> dtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17
> dtest: DEBUG: Starting node1 on new version (binary:2.0.17)
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line 
> 215, in upgrade_with_counters_test
> client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 985, in add
> self.recv_add()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1013, in recv_add
> raise result.te
> "TimedOutException(acknowledged_by=0, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down 
> node: node1\ndtest: DEBUG: Set new cassandra dir for node1: 
> /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new 
> version (binary:2.0.17)\n- >> end captured logging << 
> -"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11605) dtest failure in repair_tests.repair_test.TestRepair.dc_parallel_repair_test

2016-04-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247986#comment-15247986
 ] 

Philip Thompson commented on CASSANDRA-11605:
-

At first glance, I suspect this is a bug. I'll look into it, though.

> dtest failure in repair_tests.repair_test.TestRepair.dc_parallel_repair_test
> 
>
> Key: CASSANDRA-11605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11605
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/repair_tests.repair_test/TestRepair/dc_parallel_repair_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> [~philipthompson] may be the person to look -- did you do the most recent 
> stuff with the repair tests?
> Here's the output:
> {code}
> Error Message
> Unexpected error in log, see stdout
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-mr7s9s
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Starting cluster..
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Checking data...
> dtest: DEBUG: starting repair...
> dtest: DEBUG: removing ccm cluster test at: d:\temp\dtest-mr7s9s
> dtest: DEBUG: clearing ssl stores from [d:\temp\dtest-mr7s9s] directory
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 358, in run
> self.tearDown()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\dtest.py", 
> line 667, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Starting cluster..\ndtest: DEBUG: Inserting 
> data...\ndtest: DEBUG: Checking data...\ndtest: DEBUG: starting 
> repair...\ndtest: DEBUG: removing ccm cluster test at: 
> d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: clearing ssl stores from 
> [d:\\temp\\dtest-mr7s9s] directory\n- >> end captured 
> logging << -"
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,365 
> SSTableDeletingTask.java:83 - Unable to delete 
> d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-2-big-Data.db
>  (it will be removed on server restart; we'll also retry after GC)
> ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,371 
> SSTableDeletingTask.java:83 - Unable to delete 
> d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-1-big-Data.db
>  (it will be removed on server restart; we'll also retry after GC)
> Standard Error
> Started: node1 with pid: 1344
> Started: node3 with pid: 4788
> Started: node2 with pid: 5552
> Started: node4 with pid: 7532
> Started: node2 with pid: 7828
> Started: node1 with pid: 5424
> Started: node3 with pid: 3448
> Started: node4 with pid: 3316
> Started: node3 with pid: 6464
> Started: node2 with pid: 2824
> Started: node4 with pid: 4692
> Started: node1 with pid: 7868
> Started: node2 with pid: 3824
> Started: node4 with pid: 6340
> Started: node1 with pid: 6412
> Started: node3 with pid: 6172
> Started: node2 with pid: 7280
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11605) dtest failure in repair_tests.repair_test.TestRepair.dc_parallel_repair_test

2016-04-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247987#comment-15247987
 ] 

Jim Witschey commented on CASSANDRA-11605:
--

It's actually failed a number of times:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/repair_tests.repair_test/TestRepair/dc_parallel_repair_test/history/

So, flaky test.

> dtest failure in repair_tests.repair_test.TestRepair.dc_parallel_repair_test
> 
>
> Key: CASSANDRA-11605
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11605
> Project: Cassandra
>  Issue Type: Test
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/repair_tests.repair_test/TestRepair/dc_parallel_repair_test
> Failed on CassCI build cassandra-2.2_dtest_win32 #217
> [~philipthompson] may be the person to look -- did you do the most recent 
> stuff with the repair tests?
> Here's the output:
> {code}
> Error Message
> Unexpected error in log, see stdout
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: d:\temp\dtest-mr7s9s
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Starting cluster..
> dtest: DEBUG: Inserting data...
> dtest: DEBUG: Checking data...
> dtest: DEBUG: starting repair...
> dtest: DEBUG: removing ccm cluster test at: d:\temp\dtest-mr7s9s
> dtest: DEBUG: clearing ssl stores from [d:\temp\dtest-mr7s9s] directory
> - >> end captured logging << -
> Stacktrace
>   File "C:\tools\python2\lib\unittest\case.py", line 358, in run
> self.tearDown()
>   File 
> "D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\dtest.py", 
> line 667, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> "Unexpected error in log, see stdout\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Starting cluster..\ndtest: DEBUG: Inserting 
> data...\ndtest: DEBUG: Checking data...\ndtest: DEBUG: starting 
> repair...\ndtest: DEBUG: removing ccm cluster test at: 
> d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: clearing ssl stores from 
> [d:\\temp\\dtest-mr7s9s] directory\n- >> end captured 
> logging << -"
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,365 
> SSTableDeletingTask.java:83 - Unable to delete 
> d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-2-big-Data.db
>  (it will be removed on server restart; we'll also retry after GC)
> ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,371 
> SSTableDeletingTask.java:83 - Unable to delete 
> d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-1-big-Data.db
>  (it will be removed on server restart; we'll also retry after GC)
> Standard Error
> Started: node1 with pid: 1344
> Started: node3 with pid: 4788
> Started: node2 with pid: 5552
> Started: node4 with pid: 7532
> Started: node2 with pid: 7828
> Started: node1 with pid: 5424
> Started: node3 with pid: 3448
> Started: node4 with pid: 3316
> Started: node3 with pid: 6464
> Started: node2 with pid: 2824
> Started: node4 with pid: 4692
> Started: node1 with pid: 7868
> Started: node2 with pid: 3824
> Started: node4 with pid: 6340
> Started: node1 with pid: 6412
> Started: node3 with pid: 6172
> Started: node2 with pid: 7280
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11605) dtest failure in repair_tests.repair_test.TestRepair.dc_parallel_repair_test

2016-04-19 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-11605:


 Summary: dtest failure in 
repair_tests.repair_test.TestRepair.dc_parallel_repair_test
 Key: CASSANDRA-11605
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11605
 Project: Cassandra
  Issue Type: Test
Reporter: Jim Witschey
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/217/testReport/repair_tests.repair_test/TestRepair/dc_parallel_repair_test

Failed on CassCI build cassandra-2.2_dtest_win32 #217

[~philipthompson] may be the person to look at this -- did you do the most recent 
work on the repair tests?

Here's the output:

{code}
Error Message

Unexpected error in log, see stdout
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: d:\temp\dtest-mr7s9s
dtest: DEBUG: Custom init_config not found. Setting defaults.
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
dtest: DEBUG: Starting cluster..
dtest: DEBUG: Inserting data...
dtest: DEBUG: Checking data...
dtest: DEBUG: starting repair...
dtest: DEBUG: removing ccm cluster test at: d:\temp\dtest-mr7s9s
dtest: DEBUG: clearing ssl stores from [d:\temp\dtest-mr7s9s] directory
- >> end captured logging << -
Stacktrace

  File "C:\tools\python2\lib\unittest\case.py", line 358, in run
self.tearDown()
  File 
"D:\jenkins\workspace\cassandra-2.2_dtest_win32\cassandra-dtest\dtest.py", line 
667, in tearDown
raise AssertionError('Unexpected error in log, see stdout')
"Unexpected error in log, see stdout\n >> begin captured 
logging << \ndtest: DEBUG: cluster ccm directory: 
d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: Custom init_config not found. Setting 
defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\ndtest: DEBUG: Starting cluster..\ndtest: DEBUG: Inserting 
data...\ndtest: DEBUG: Checking data...\ndtest: DEBUG: starting 
repair...\ndtest: DEBUG: removing ccm cluster test at: 
d:\\temp\\dtest-mr7s9s\ndtest: DEBUG: clearing ssl stores from 
[d:\\temp\\dtest-mr7s9s] directory\n- >> end captured 
logging << -"
Standard Output

Unexpected error in node2 log, error: 
ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,365 SSTableDeletingTask.java:83 
- Unable to delete 
d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-2-big-Data.db
 (it will be removed on server restart; we'll also retry after GC)
ERROR [NonPeriodicTasks:1] 2016-03-31 23:44:07,371 SSTableDeletingTask.java:83 
- Unable to delete 
d:\temp\dtest-mr7s9s\test\node2\data2\ks\cf-a61921b0f79911e581805f908c518710\la-1-big-Data.db
 (it will be removed on server restart; we'll also retry after GC)
Standard Error

Started: node1 with pid: 1344
Started: node3 with pid: 4788
Started: node2 with pid: 5552
Started: node4 with pid: 7532
Started: node2 with pid: 7828
Started: node1 with pid: 5424
Started: node3 with pid: 3448
Started: node4 with pid: 3316
Started: node3 with pid: 6464
Started: node2 with pid: 2824
Started: node4 with pid: 4692
Started: node1 with pid: 7868
Started: node2 with pid: 3824
Started: node4 with pid: 6340
Started: node1 with pid: 6412
Started: node3 with pid: 6172
Started: node2 with pid: 7280
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11022) Use SHA hashing to store password in the credentials cache

2016-04-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247980#comment-15247980
 ] 

Aleksey Yeschenko commented on CASSANDRA-11022:
---

I would prefer to just make the number of rounds configurable (also, I'm pretty 
sure it's already configurable with a -D flag - I just cannot remember the ticket 
number). If the price of doing the bcrypt rounds is higher for you than that of 
fetching the hash from the table, and it is too high, the user should just 
lower the default # of rounds - instead of this workaround.
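
To make the trade-off concrete, here is a minimal stand-alone sketch (not Cassandra code) using the same jBCrypt API the authenticator is built on; the {{bcrypt.log_rounds}} system property name is invented for the demo and is not the actual -D flag:

{code}
import org.mindrot.jbcrypt.BCrypt;

// Stand-alone illustration of how the bcrypt work factor drives the cost of
// each password check; the property name below is hypothetical.
public class BcryptRoundsDemo
{
    public static void main(String[] args)
    {
        int logRounds = Integer.getInteger("bcrypt.log_rounds", 10); // hypothetical flag

        String hash = BCrypt.hashpw("secret-password", BCrypt.gensalt(logRounds));

        long start = System.nanoTime();
        boolean ok = BCrypt.checkpw("secret-password", hash); // cost roughly doubles per +1 logRounds
        long micros = (System.nanoTime() - start) / 1_000;

        System.out.printf("logRounds=%d verified=%b check took %d us%n", logRounds, ok, micros);
    }
}
{code}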

> Use SHA hashing to store password in the credentials cache
> --
>
> Key: CASSANDRA-11022
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11022
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Mike Adamson
> Fix For: 3.x
>
>
> In CASSANDRA-7715 a credentials cache has been added to the 
> {{PasswordAuthenticator}} to improve performance when multiple 
> authentications occur for the same user. 
> Unfortunately, the bcrypt hash is being cached which is one of the major 
> performance overheads in password authentication. 
> I propose that the cache is changed to use a SHA- hash to store the user 
> password. As long as the cache is cleared for the user on an unsuccessful 
> authentication this won't significantly increase the ability of an attacker 
> to use a brute force attack because every other attempt will use bcrypt.
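
For reference, the proposal as described above boils down to something like the following minimal sketch (class and method names are invented; this is not the actual patch):

{code}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: cache a SHA-256 digest of the password per user, so the
// expensive bcrypt check is paid on the first authentication and again after
// any cache invalidation, but not on every request.
public class ShaCredentialsCache
{
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();

    /** @return true if the cached digest matches, i.e. bcrypt can be skipped. */
    public boolean matchesCached(String username, String password) throws Exception
    {
        byte[] cached = cache.get(username);
        // MessageDigest.isEqual performs a constant-time comparison.
        return cached != null && MessageDigest.isEqual(cached, sha256(password));
    }

    /** Call after a successful bcrypt authentication. */
    public void store(String username, String password) throws Exception
    {
        cache.put(username, sha256(password));
    }

    /** Per the proposal: clear the entry on any unsuccessful authentication. */
    public void invalidate(String username)
    {
        cache.remove(username);
    }

    private static byte[] sha256(String password) throws Exception
    {
        return MessageDigest.getInstance("SHA-256").digest(password.getBytes(StandardCharsets.UTF_8));
    }
}
{code}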



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11604) select on table fails after changing user defined type in map

2016-04-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11604:

Reproduced In: 3.5
Fix Version/s: 3.x

> select on table fails after changing user defined type in map
> -
>
> Key: CASSANDRA-11604
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11604
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Andreas Jaekle
> Fix For: 3.x
>
>
> In Cassandra 3.5 I get the following exception when I run this CQL:
> {code}
> --DROP KEYSPACE bugtest ;
> CREATE KEYSPACE bugtest
>  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
> use bugtest;
> CREATE TYPE tt (
>   a boolean
> );
> create table t1 (
>   k text,
> v map<text, frozen<tt>>,
>   PRIMARY KEY(k)
> );
> insert into t1 (k,v) values ('k2',{'mk':{a:false}});
> ALTER TYPE tt ADD b boolean;
> UPDATE t1 SET v['mk'] = { b:true } WHERE k = 'k2';
> select * from t1;  
> {code}
> The last select fails.
> {code}
> WARN  [SharedPool-Worker-5] 2016-04-19 14:18:49,885 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.rows.ComplexColumnData$Builder.addCell(ComplexColumnData.java:254)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:623)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:419)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:279)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:112) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> 

[jira] [Updated] (CASSANDRA-11190) Fail fast repairs

2016-04-19 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11190:

Reviewer: Yuki Morishita

> Fail fast repairs
> -
>
> Key: CASSANDRA-11190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>
> Currently, if one node fails any phase of the repair (validation, streaming), 
> the repair session is aborted, but the other nodes are not notified and keep 
> doing either validation or syncing with other nodes.
> With CASSANDRA-10070 automatically scheduling repairs and potentially 
> scheduling retries, it would be nice to make sure all nodes abort failed 
> repairs in order to be able to start other repairs safely on the same nodes.
> From CASSANDRA-10070:
> bq. As far as I understood, if there are nodes A, B, C running repair, A is 
> the coordinator. If validation or streaming fails on node B, the coordinator 
> (A) is notified and fails the repair session, but node C will remain doing 
> validation and/or streaming, which could cause problems (or increased load) if 
> we start another repair session on the same range.
> bq. We will probably need to extend the repair protocol to perform this 
> cleanup/abort step on failure. We already have a legacy cleanup message that 
> doesn't seem to be used in the current protocol that we could maybe reuse to 
> clean up repair state after a failure. This repair abortion will probably 
> intersect with CASSANDRA-3486. In any case, this is a separate (but 
> related) issue and we should address it in an independent ticket, and make 
> this ticket dependent on that.
> On CASSANDRA-5426 [~slebresne] suggested doing this to avoid unexpected 
> conditions/hangs:
> bq. I wonder if maybe we should have more of a fail-fast policy when there are 
> errors. For instance, if one node fails its validation phase, it might 
> be worth failing right away and letting the user re-trigger a repair once he has 
> fixed whatever was the source of the error, rather than still 
> differencing/syncing the other nodes.
> bq. Going a bit further, I think we should add 2 messages to interrupt the 
> validation and sync phase. If only because that could be useful to users if 
> they need to stop a repair for some reason, but also, if we get an error 
> during validation from one node, we could use that to interrupt the other 
> nodes and thus fail fast while minimizing the amount of work done uselessly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11603) PER PARTITION LIMIT does not work properly for SinglePartition

2016-04-19 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11603:
---
Reviewer: Alex Petrov
  Status: Patch Available  (was: Open)

> PER PARTITION LIMIT does not work properly for SinglePartition
> --
>
> Key: CASSANDRA-11603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11603
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> When the {{PER PARTITION LIMIT}} is greater than the page size the limit is 
> not respected for single or multi partitions queries.
> The problem can be reproduced using the java driver with the following code:
> {code}
> session = cluster.connect();
> session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION 
> = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
> session.execute("USE test");
> session.execute("DROP TABLE IF EXISTS test");
> session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c 
> int, PRIMARY KEY(a, b))");
> PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, 
> c) VALUES (?, ?, ?);");
> for (int i = 0; i < 5; i++)
> for (int j = 0; j < 10; j++)
> session.execute(prepare.bind(i, j, i + j));
> ResultSet rs = session.execute(session.newSimpleStatement("SELECT * 
> FROM test WHERE a = 1 PER PARTITION LIMIT 3")
>   .setFetchSize(2));
> for (Row row : rs)
> {
> System.out.println(row);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11603) PER PARTITION LIMIT does not work properly for SinglePartition

2016-04-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247876#comment-15247876
 ] 

Benjamin Lerer commented on CASSANDRA-11603:


|[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:11603-trunk] 
| [utests|http://cassci.datastax.com/job/blerer-11603-trunk-testall/2/] | 
[dtests|http://cassci.datastax.com/job/blerer-11603-trunk-dtest/2/]|
The patch fixes two problems:
# {{SinglePartitionPager}} was not passing the number of remaining rows to 
fetch from the partition to the {{DataLimits}}
# {{MultiPartitionPager}} was not properly setting the number of remaining rows 
in the current partition in the {{PagingState}}

The DTests did not catch the problem because the page size was not set properly 
(the DTest was always using the default page size). The DTest PR is 
[here|https://github.com/riptano/cassandra-dtest/pull/934]
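
For anyone re-testing this, a quick follow-on to the snippet in the description (same schema, data and {{session}}, fetch size deliberately smaller than the limit) should now show the limit being respected; the expected row counts below are simply what {{PER PARTITION LIMIT 3}} implies for 5 partitions of 10 rows each:

{code}
// Continues the setup from the description: partitions a = 0..4, 10 rows each.
ResultSet single = session.execute(session.newSimpleStatement(
        "SELECT * FROM test WHERE a = 1 PER PARTITION LIMIT 3").setFetchSize(2));
assert single.all().size() == 3;   // previously extra rows leaked through paging

ResultSet multi = session.execute(session.newSimpleStatement(
        "SELECT * FROM test PER PARTITION LIMIT 3").setFetchSize(2));
assert multi.all().size() == 15;   // 5 partitions * 3 rows per partition
{code}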

> PER PARTITION LIMIT does not work properly for SinglePartition
> --
>
> Key: CASSANDRA-11603
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11603
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> When the {{PER PARTITION LIMIT}} is greater than the page size the limit is 
> not respected for single or multi partitions queries.
> The problem can be reproduced using the java driver with the following code:
> {code}
> session = cluster.connect();
> session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION 
> = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
> session.execute("USE test");
> session.execute("DROP TABLE IF EXISTS test");
> session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c 
> int, PRIMARY KEY(a, b))");
> PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, 
> c) VALUES (?, ?, ?);");
> for (int i = 0; i < 5; i++)
> for (int j = 0; j < 10; j++)
> session.execute(prepare.bind(i, j, i + j));
> ResultSet rs = session.execute(session.newSimpleStatement("SELECT * 
> FROM test WHERE a = 1 PER PARTITION LIMIT 3")
>   .setFetchSize(2));
> for (Row row : rs)
> {
> System.out.println(row);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11432) Counter values become under-counted when running repair.

2016-04-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247850#comment-15247850
 ] 

Aleksey Yeschenko commented on CASSANDRA-11432:
---

Can you answer the questions I asked still? Thanks (:

> Counter values become under-counted when running repair.
> 
>
> Key: CASSANDRA-11432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11432
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Dikang Gu
>Assignee: Aleksey Yeschenko
>
> We are experimenting with Counters in Cassandra 2.2.5. Our setup is that we 
> have 6 nodes across three different regions, and in each region the replication 
> factor is 2. Basically, each node holds a full copy of the data.
> We are writing to the cluster with CL = 2, and reading with CL = 1. 
> While we are doing 30k/s counter increments/decrements per node, we are also 
> double-writing to our MySQL tier, so that we can measure the accuracy of C* 
> counters compared to MySQL.
> The experiment results were great at the beginning; the counter values in C* and 
> MySQL were very close, with a difference of less than 0.1%. 
> But when we started to run repair on one node, the counter values in C* 
> became much lower than the values in MySQL; the difference grew to more than 
> 1%.
> My question: is it a known problem that counter values become under-counted 
> while repair is running? Should we avoid running repair on 
> counter tables?
> Thanks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11604) select on table fails after changing user defined type in map

2016-04-19 Thread Andreas Jaekle (JIRA)
Andreas Jaekle created CASSANDRA-11604:
--

 Summary: select on table fails after changing user defined type in 
map
 Key: CASSANDRA-11604
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11604
 Project: Cassandra
  Issue Type: Bug
Reporter: Andreas Jaekle


In Cassandra 3.5 I get the following exception when I run this CQL:
{code}
--DROP KEYSPACE bugtest ;
CREATE KEYSPACE bugtest
 WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
use bugtest;
CREATE TYPE tt (
a boolean
);
create table t1 (
k text,
v map<text, frozen<tt>>,
PRIMARY KEY(k)
);
insert into t1 (k,v) values ('k2',{'mk':{a:false}});
ALTER TYPE tt ADD b boolean;
UPDATE t1 SET v['mk'] = { b:true } WHERE k = 'k2';
select * from t1;  
{code}
The last select fails.
{code}
WARN  [SharedPool-Worker-5] 2016-04-19 14:18:49,885 
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-5,5,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.db.rows.ComplexColumnData$Builder.addCell(ComplexColumnData.java:254)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:623)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) 
~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:419)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:279)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:112) 
~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.5.jar:3.5]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.5.jar:3.5]
at 

[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2016-04-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247836#comment-15247836
 ] 

Aleksey Yeschenko commented on CASSANDRA-9633:
--

Thanks Jason (: Given changes to CQL and schema are involved, I'd like to 
review at least those before anything gets committed, if you don't mind.

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 3.x
>
>
> Add option to allow encrypting of sstables.
> I have a version of this functionality built on cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-04-19 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247813#comment-15247813
 ] 

Sam Tunnicliffe commented on CASSANDRA-11310:
-

LGTM, modulo 2 minor nits which I can fix on commit.
* the message {{Cannot restrict clustering columns by a CONTAINS relation 
without a secondary index}} is not strictly true now, as the same is also 
permitted with {{ALLOW FILTERING}}
* the new test method in {{SelectTest}} has a typo in its name and another in 
the issue # it refers to in the comment.




> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.6
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2016-04-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247797#comment-15247797
 ] 

Jason Brown commented on CASSANDRA-9633:


OK, so here we go with the updates [~bdeggleston] requested.

||9633||
|[branch|https://github.com/jasobrown/cassandra/tree/9633]|
|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-9633-testall/]|

To encrypt the (primary) index and summary sstable files, in addition to the 
data file, different code paths were required.

- As the summary is a one-shot read and different from all other code paths, 
I made custom classes for handling it, {{EncryptedSummaryWritableByteChannel}} 
and {{EncryptedSummaryInputStream}}. As the summary is intimately linked with 
the owning sstable & data file, the summary will simply inherit the 
key_alias/algo from the sstable, but then has its IV written to the front of 
the summary file.

- The encrypted primary index needs to have its own 'chunks' file, a la the 
{{Component.COMPRESSION_INFO}}, so I created 
{{Component.INDEX_COMPRESSION_INFO}} so that it gets the proper treatment. 
Thus, we can use {{CompressedSequentialWriter}} for writing out the index 
file's offsets, just like what we do for the compressed data file.

- As with the first version of this patch, encrypting the data file (and now 
the primary index) is handled by {{EncryptingCompressor}}. 

WRT CQL changes, we simply do {{... WITH ENCRYPTION='true'}} to enable 
sstable encryption. All the encryption parameters are already in the yaml, so 
no need to pass those in separately. Further, to disable sstable 
encryption, simply execute {{ALTER TABLE ... WITH ENCRYPTION='false'}}. As a 
side effect of piggy-backing on the compression infrastructure, though, when 
executing {{DESCRIBE TABLE}} in cqlsh the encryption params show up as 
'compression' data, not as encryption. I believe all the code for handling the 
cqlsh describe queries is largely in the python driver, afaict. 


Some miscellaneous changes:
- {{ICompressor}} got some additional functions for instance-specific values as 
we need to carry a unique IV for each cipher.
- {{CipherFactory}} needed a small cleanup wrt caching instances (we were 
creating a crap tonne of instances on the fly)
- Apparently I messed up a small part of the merge for #11040, so I am adding 
it in here ({{HintsService}}). Without this, hints don't get encrypted when 
enabled.
- added some {{@SuppressWarnings}} annotations
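
For the user-facing side, toggling this from a client would look roughly like the sketch below (it assumes an existing driver {{session}} and a placeholder table {{ks.tbl}}; the syntax is taken from the comment above and could still change during review):

{code}
// Sketch only: option name/syntax per the comment above, patch still in review.
session.execute("ALTER TABLE ks.tbl WITH ENCRYPTION='true'");   // newly written sstables get encrypted
session.execute("ALTER TABLE ks.tbl WITH ENCRYPTION='false'");  // back to plain/compressed sstables

// Because the patch piggy-backs on the compression infrastructure, these
// settings currently show up under 'compression' in a cqlsh DESCRIBE TABLE.
{code}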


> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: encryption, security, sstable
> Fix For: 3.x
>
>
> Add option to allow encrypting of sstables.
> I have a version of this functionality built on cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[07/14] cassandra git commit: Add unit test for CASSANDRA-11548

2016-04-19 Thread marcuse
Add unit test for CASSANDRA-11548

Patch by Paulo Motta; reviewed by Marcus Eriksson for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/209ebd38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/209ebd38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/209ebd38

Branch: refs/heads/cassandra-2.2
Commit: 209ebd380b641c4f065e9687186f546f8a50b242
Parents: d200d13
Author: Paulo Motta 
Authored: Mon Apr 18 18:44:07 2016 -0300
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:42:36 2016 +0200

--
 CHANGES.txt |   4 +-
 .../org/apache/cassandra/db/DataTracker.java|  12 +++
 .../SSTableCompactingNotification.java  |  41 
 .../LongLeveledCompactionStrategyTest.java  | 101 +++
 4 files changed, 155 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 90a4f23..76d3673 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,5 @@
-2.1.15
- * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
-
 2.1.14
+ * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index ef25236..c731a35 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -222,7 +222,10 @@ public class DataTracker
 
 View newView = currentView.markCompacting(sstables);
 if (view.compareAndSet(currentView, newView))
+{
+notifyCompacting(sstables, true);
 return true;
+}
 }
 }
 
@@ -247,6 +250,8 @@ public class DataTracker
 // interrupted after the CFS is invalidated, those sstables need 
to be unreferenced as well, so we do that here.
 unreferenceSSTables();
 }
+
+notifyCompacting(unmark, false);
 }
 
 public void markObsolete(Collection sstables, OperationType 
compactionType)
@@ -511,6 +516,13 @@ public class DataTracker
 subscriber.handleNotification(notification, this);
 }
 
+public void notifyCompacting(Iterable reader, boolean 
compacting)
+{
+INotification notification = new SSTableCompactingNotification(reader, 
compacting);
+for (INotificationConsumer subscriber : subscribers)
+subscriber.handleNotification(notification, this);
+}
+
 public void notifyAdded(SSTableReader added)
 {
 INotification notification = new SSTableAddedNotification(added);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
--
diff --git 
a/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
 
b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
new file mode 100644
index 000..6eddf3f
--- /dev/null
+++ 
b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.notifications;
+
+import org.apache.cassandra.io.sstable.SSTableReader;

[08/14] cassandra git commit: Add unit test for CASSANDRA-11548

2016-04-19 Thread marcuse
Add unit test for CASSANDRA-11548

Patch by Paulo Motta; reviewed by Marcus Eriksson for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/209ebd38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/209ebd38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/209ebd38

Branch: refs/heads/cassandra-3.0
Commit: 209ebd380b641c4f065e9687186f546f8a50b242
Parents: d200d13
Author: Paulo Motta 
Authored: Mon Apr 18 18:44:07 2016 -0300
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:42:36 2016 +0200

--
 CHANGES.txt |   4 +-
 .../org/apache/cassandra/db/DataTracker.java|  12 +++
 .../SSTableCompactingNotification.java  |  41 
 .../LongLeveledCompactionStrategyTest.java  | 101 +++
 4 files changed, 155 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 90a4f23..76d3673 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,5 @@
-2.1.15
- * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
-
 2.1.14
+ * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index ef25236..c731a35 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -222,7 +222,10 @@ public class DataTracker
 
 View newView = currentView.markCompacting(sstables);
 if (view.compareAndSet(currentView, newView))
+{
+notifyCompacting(sstables, true);
 return true;
+}
 }
 }
 
@@ -247,6 +250,8 @@ public class DataTracker
 // interrupted after the CFS is invalidated, those sstables need 
to be unreferenced as well, so we do that here.
 unreferenceSSTables();
 }
+
+notifyCompacting(unmark, false);
 }
 
 public void markObsolete(Collection sstables, OperationType 
compactionType)
@@ -511,6 +516,13 @@ public class DataTracker
 subscriber.handleNotification(notification, this);
 }
 
+public void notifyCompacting(Iterable reader, boolean 
compacting)
+{
+INotification notification = new SSTableCompactingNotification(reader, 
compacting);
+for (INotificationConsumer subscriber : subscribers)
+subscriber.handleNotification(notification, this);
+}
+
 public void notifyAdded(SSTableReader added)
 {
 INotification notification = new SSTableAddedNotification(added);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
--
diff --git 
a/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
 
b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
new file mode 100644
index 000..6eddf3f
--- /dev/null
+++ 
b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.notifications;
+
+import org.apache.cassandra.io.sstable.SSTableReader;

[01/14] cassandra git commit: Replace sstables on DataTracker before marking them as non-compacting during anti-compaction

2016-04-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 5c5c5b44c -> 209ebd380
  refs/heads/cassandra-2.2 77ab77328 -> 97a43
  refs/heads/cassandra-3.0 7f4b5e305 -> f85a20f79
  refs/heads/trunk a17120adf -> 391cae615


Replace sstables on DataTracker before marking them as non-compacting during 
anti-compaction

Patch by Ruoran Wang; reviewed by Paulo Motta for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d200d137
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d200d137
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d200d137

Branch: refs/heads/cassandra-2.1
Commit: d200d137823d5b250406bccb35473a8fc2f14faf
Parents: 5c5c5b4
Author: Ruoran Wang 
Authored: Mon Apr 18 19:49:30 2016 -0300
Committer: Paulo Motta 
Committed: Tue Apr 19 08:00:09 2016 -0300

--
 CHANGES.txt  |  3 +++
 .../cassandra/db/compaction/CompactionManager.java   | 15 ---
 2 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6385509..90a4f23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
+
 2.1.14
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index e382cab..96d873f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1125,7 +1125,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 int unrepairedKeyCount = 0;
 logger.info("Performing anticompaction on {} sstables", 
repairedSSTables.size());
 // iterate over sstables to check if the repaired / unrepaired ranges 
intersect them.
-Set successfullyAntiCompactedSSTables = new HashSet<>();
 for (SSTableReader sstable : repairedSSTables)
 {
 // check that compaction hasn't stolen any sstables used in 
previous repair sessions
@@ -1137,8 +1136,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 }
 
 logger.info("Anticompacting {}", sstable);
-Set sstableAsSet = new HashSet<>();
-sstableAsSet.add(sstable);
+Set sstableAsSet = Sets.newHashSet(sstable);
 
 File destination = 
cfs.directories.getWriteableLocationAsFile(cfs.getExpectedCompactedFileSize(sstableAsSet,
 OperationType.ANTICOMPACTION));
 SSTableRewriter repairedSSTableWriter = new SSTableRewriter(cfs, 
sstableAsSet, sstable.maxDataAge, false, false);
@@ -1177,9 +1175,13 @@ public class CompactionManager implements 
CompactionManagerMBean
 {
 metrics.finishCompaction(ci);
 }
-
anticompactedSSTables.addAll(repairedSSTableWriter.finish(repairedAt));
-
anticompactedSSTables.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
-successfullyAntiCompactedSSTables.add(sstable);
+
+List anticompacted = new ArrayList<>();
+anticompacted.addAll(repairedSSTableWriter.finish(repairedAt));
+
anticompacted.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
+anticompactedSSTables.addAll(anticompacted);
+
+
cfs.getDataTracker().markCompactedSSTablesReplaced(sstableAsSet, anticompacted, 
OperationType.ANTICOMPACTION);
 cfs.getDataTracker().unmarkCompacting(sstableAsSet);
 }
 catch (Throwable e)
@@ -1190,7 +1192,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 unRepairedSSTableWriter.abort();
 }
 }
-
cfs.getDataTracker().markCompactedSSTablesReplaced(successfullyAntiCompactedSSTables,
 anticompactedSSTables, OperationType.ANTICOMPACTION);
 String format = "Repaired {} keys of {} for {}/{}";
 logger.debug(format, repairedKeyCount, (repairedKeyCount + 
unrepairedKeyCount), 

[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-19 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11548:

   Resolution: Fixed
Fix Version/s: 2.1.14
   Status: Resolved  (was: Ready to Commit)

Committed to 2.1.14 (changed CHANGES.txt on commit) and merged up with -s ours

thanks!

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
> Fix For: 2.1.14
>
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the effect of the above changes might be that 
> markCompactedSSTablesReplaced fails on the following assertion in 
> DataTracker.java:
> {noformat}
>assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
> String.format("Expecting new size of %d, got %d while 
> replacing %s by %s in %s",
>   newSSTablesSize, newSSTables.size() + 
> newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This might then cause a row out 
> of order error when doing incremental repair if there are L1 un-repaired sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[11/14] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-19 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97a43fff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97a43fff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97a43fff

Branch: refs/heads/trunk
Commit: 97a432610f5f57678c69ca674b7d795c36b4
Parents: 77ab773 209ebd3
Author: Marcus Eriksson 
Authored: Tue Apr 19 15:48:52 2016 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:48:52 2016 +0200

--

--




[14/14] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-19 Thread marcuse
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/391cae61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/391cae61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/391cae61

Branch: refs/heads/trunk
Commit: 391cae615b4665de5184bd2a8b8b7e12fd194589
Parents: a17120a f85a20f
Author: Marcus Eriksson 
Authored: Tue Apr 19 15:49:08 2016 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:49:08 2016 +0200

--

--




[03/14] cassandra git commit: Replace sstables on DataTracker before marking them as non-compacting during anti-compaction

2016-04-19 Thread marcuse
Replace sstables on DataTracker before marking them as non-compacting during 
anti-compaction

Patch by Ruoran Wang; reviewed by Paulo Motta for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d200d137
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d200d137
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d200d137

Branch: refs/heads/cassandra-2.2
Commit: d200d137823d5b250406bccb35473a8fc2f14faf
Parents: 5c5c5b4
Author: Ruoran Wang 
Authored: Mon Apr 18 19:49:30 2016 -0300
Committer: Paulo Motta 
Committed: Tue Apr 19 08:00:09 2016 -0300

--
 CHANGES.txt  |  3 +++
 .../cassandra/db/compaction/CompactionManager.java   | 15 ---
 2 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6385509..90a4f23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
+
 2.1.14
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index e382cab..96d873f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1125,7 +1125,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 int unrepairedKeyCount = 0;
 logger.info("Performing anticompaction on {} sstables", 
repairedSSTables.size());
 // iterate over sstables to check if the repaired / unrepaired ranges 
intersect them.
-Set successfullyAntiCompactedSSTables = new HashSet<>();
 for (SSTableReader sstable : repairedSSTables)
 {
 // check that compaction hasn't stolen any sstables used in 
previous repair sessions
@@ -1137,8 +1136,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 }
 
 logger.info("Anticompacting {}", sstable);
-Set sstableAsSet = new HashSet<>();
-sstableAsSet.add(sstable);
+Set sstableAsSet = Sets.newHashSet(sstable);
 
 File destination = 
cfs.directories.getWriteableLocationAsFile(cfs.getExpectedCompactedFileSize(sstableAsSet,
 OperationType.ANTICOMPACTION));
 SSTableRewriter repairedSSTableWriter = new SSTableRewriter(cfs, 
sstableAsSet, sstable.maxDataAge, false, false);
@@ -1177,9 +1175,13 @@ public class CompactionManager implements 
CompactionManagerMBean
 {
 metrics.finishCompaction(ci);
 }
-
anticompactedSSTables.addAll(repairedSSTableWriter.finish(repairedAt));
-
anticompactedSSTables.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
-successfullyAntiCompactedSSTables.add(sstable);
+
+List anticompacted = new ArrayList<>();
+anticompacted.addAll(repairedSSTableWriter.finish(repairedAt));
+
anticompacted.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
+anticompactedSSTables.addAll(anticompacted);
+
+
cfs.getDataTracker().markCompactedSSTablesReplaced(sstableAsSet, anticompacted, 
OperationType.ANTICOMPACTION);
 cfs.getDataTracker().unmarkCompacting(sstableAsSet);
 }
 catch (Throwable e)
@@ -1190,7 +1192,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 unRepairedSSTableWriter.abort();
 }
 }
-
cfs.getDataTracker().markCompactedSSTablesReplaced(successfullyAntiCompactedSSTables,
 anticompactedSSTables, OperationType.ANTICOMPACTION);
 String format = "Repaired {} keys of {} for {}/{}";
 logger.debug(format, repairedKeyCount, (repairedKeyCount + 
unrepairedKeyCount), cfs.keyspace, cfs.getColumnFamilyName());
 String format2 = "Anticompaction completed successfully, anticompacted 
from {} to {} sstable(s).";



[05/14] cassandra git commit: Add unit test for CASSANDRA-11548

2016-04-19 Thread marcuse
Add unit test for CASSANDRA-11548

Patch by Paulo Motta; reviewed by Marcus Eriksson for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/209ebd38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/209ebd38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/209ebd38

Branch: refs/heads/trunk
Commit: 209ebd380b641c4f065e9687186f546f8a50b242
Parents: d200d13
Author: Paulo Motta 
Authored: Mon Apr 18 18:44:07 2016 -0300
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:42:36 2016 +0200

--
 CHANGES.txt |   4 +-
 .../org/apache/cassandra/db/DataTracker.java|  12 +++
 .../SSTableCompactingNotification.java  |  41 
 .../LongLeveledCompactionStrategyTest.java  | 101 +++
 4 files changed, 155 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 90a4f23..76d3673 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,5 @@
-2.1.15
- * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
-
 2.1.14
+ * Replace sstables on DataTracker before marking them as non-compacting 
during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index ef25236..c731a35 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -222,7 +222,10 @@ public class DataTracker
 
 View newView = currentView.markCompacting(sstables);
 if (view.compareAndSet(currentView, newView))
+{
+notifyCompacting(sstables, true);
 return true;
+}
 }
 }
 
@@ -247,6 +250,8 @@ public class DataTracker
 // interrupted after the CFS is invalidated, those sstables need 
to be unreferenced as well, so we do that here.
 unreferenceSSTables();
 }
+
+notifyCompacting(unmark, false);
 }
 
 public void markObsolete(Collection sstables, OperationType 
compactionType)
@@ -511,6 +516,13 @@ public class DataTracker
 subscriber.handleNotification(notification, this);
 }
 
+public void notifyCompacting(Iterable reader, boolean 
compacting)
+{
+INotification notification = new SSTableCompactingNotification(reader, 
compacting);
+for (INotificationConsumer subscriber : subscribers)
+subscriber.handleNotification(notification, this);
+}
+
 public void notifyAdded(SSTableReader added)
 {
 INotification notification = new SSTableAddedNotification(added);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
--
diff --git a/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
new file mode 100644
index 000..6eddf3f
--- /dev/null
+++ b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.notifications;
+
+import org.apache.cassandra.io.sstable.SSTableReader;
+
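
The body of the new notification class is cut off at this point in the archive. Judging only from the notifyCompacting() call added to DataTracker above, the class presumably looks roughly like the sketch below; the field names and comments are assumptions, not the committed source:

{code}
package org.apache.cassandra.notifications;

import org.apache.cassandra.io.sstable.SSTableReader;

/**
 * Sketch only: reconstructed from the notifyCompacting() call sites, not the committed file.
 * Fired by DataTracker when sstables are marked (compacting == true) or
 * unmarked (compacting == false) as compacting.
 */
public class SSTableCompactingNotification implements INotification
{
    public final Iterable<SSTableReader> sstables;
    public final boolean compacting;

    public SSTableCompactingNotification(Iterable<SSTableReader> sstables, boolean compacting)
    {
        this.sstables = sstables;
        this.compacting = compacting;
    }
}
{code}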

[04/14] cassandra git commit: Replace sstables on DataTracker before marking them as non-compacting during anti-compaction

2016-04-19 Thread marcuse
Replace sstables on DataTracker before marking them as non-compacting during 
anti-compaction

Patch by Ruoran Wang; reviewed by Paulo Motta for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d200d137
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d200d137
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d200d137

Branch: refs/heads/cassandra-3.0
Commit: d200d137823d5b250406bccb35473a8fc2f14faf
Parents: 5c5c5b4
Author: Ruoran Wang 
Authored: Mon Apr 18 19:49:30 2016 -0300
Committer: Paulo Motta 
Committed: Tue Apr 19 08:00:09 2016 -0300

--
 CHANGES.txt  |  3 +++
 .../cassandra/db/compaction/CompactionManager.java   | 15 ---
 2 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6385509..90a4f23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * Replace sstables on DataTracker before marking them as non-compacting during anti-compaction (CASSANDRA-11548)
+
 2.1.14
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index e382cab..96d873f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1125,7 +1125,6 @@ public class CompactionManager implements CompactionManagerMBean
         int unrepairedKeyCount = 0;
         logger.info("Performing anticompaction on {} sstables", repairedSSTables.size());
         // iterate over sstables to check if the repaired / unrepaired ranges intersect them.
-        Set<SSTableReader> successfullyAntiCompactedSSTables = new HashSet<>();
         for (SSTableReader sstable : repairedSSTables)
         {
             // check that compaction hasn't stolen any sstables used in previous repair sessions
@@ -1137,8 +1136,7 @@ public class CompactionManager implements CompactionManagerMBean
             }
 
             logger.info("Anticompacting {}", sstable);
-            Set<SSTableReader> sstableAsSet = new HashSet<>();
-            sstableAsSet.add(sstable);
+            Set<SSTableReader> sstableAsSet = Sets.newHashSet(sstable);
 
             File destination = cfs.directories.getWriteableLocationAsFile(cfs.getExpectedCompactedFileSize(sstableAsSet, OperationType.ANTICOMPACTION));
             SSTableRewriter repairedSSTableWriter = new SSTableRewriter(cfs, sstableAsSet, sstable.maxDataAge, false, false);
@@ -1177,9 +1175,13 @@ public class CompactionManager implements CompactionManagerMBean
                 {
                     metrics.finishCompaction(ci);
                 }
-                anticompactedSSTables.addAll(repairedSSTableWriter.finish(repairedAt));
-                anticompactedSSTables.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
-                successfullyAntiCompactedSSTables.add(sstable);
+
+                List<SSTableReader> anticompacted = new ArrayList<>();
+                anticompacted.addAll(repairedSSTableWriter.finish(repairedAt));
+                anticompacted.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
+                anticompactedSSTables.addAll(anticompacted);
+
+                cfs.getDataTracker().markCompactedSSTablesReplaced(sstableAsSet, anticompacted, OperationType.ANTICOMPACTION);
                 cfs.getDataTracker().unmarkCompacting(sstableAsSet);
             }
             catch (Throwable e)
@@ -1190,7 +1192,6 @@ public class CompactionManager implements CompactionManagerMBean
                     unRepairedSSTableWriter.abort();
                 }
             }
-        cfs.getDataTracker().markCompactedSSTablesReplaced(successfullyAntiCompactedSSTables, anticompactedSSTables, OperationType.ANTICOMPACTION);
         String format = "Repaired {} keys of {} for {}/{}";
         logger.debug(format, repairedKeyCount, (repairedKeyCount + unrepairedKeyCount), cfs.keyspace, cfs.getColumnFamilyName());
         String format2 = "Anticompaction completed successfully, anticompacted from {} to {} sstable(s).";
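
The substance of the change is the ordering inside the per-sstable loop: the replacement sstables produced by anticompaction must be registered with the DataTracker before the original sstable loses its compacting mark, otherwise there is a window in which another compaction can select the original while its replacements are not yet visible in the tracker. A minimal sketch of that ordering, with invented class and method names and the surrounding state passed in as parameters (only the DataTracker and rewriter calls mirror the diff above):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.db.compaction.OperationType;
import org.apache.cassandra.io.sstable.SSTableReader;
import org.apache.cassandra.io.sstable.SSTableRewriter;
import org.apache.cassandra.service.ActiveRepairService;

// Hypothetical helper illustrating the replace-then-unmark ordering; not part of the patch.
final class AnticompactionOrderingSketch
{
    static void finishOneSSTable(ColumnFamilyStore cfs,
                                 Set<SSTableReader> sstableAsSet,
                                 SSTableRewriter repairedSSTableWriter,
                                 SSTableRewriter unRepairedSSTableWriter,
                                 long repairedAt)
    {
        // 1. Finish both rewriters and collect the replacement sstables.
        List<SSTableReader> anticompacted = new ArrayList<>();
        anticompacted.addAll(repairedSSTableWriter.finish(repairedAt));
        anticompacted.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));

        // 2. Swap the original sstable for its replacements in the DataTracker
        //    while the original is still marked compacting ...
        cfs.getDataTracker().markCompactedSSTablesReplaced(sstableAsSet, anticompacted, OperationType.ANTICOMPACTION);

        // 3. ... and only then release the compacting mark, so a concurrent
        //    compaction never sees the old sstable as free before its
        //    replacements are registered.
        cfs.getDataTracker().unmarkCompacting(sstableAsSet);
    }
}
{code}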



[06/14] cassandra git commit: Add unit test for CASSANDRA-11548

2016-04-19 Thread marcuse
Add unit test for CASSANDRA-11548

Patch by Paulo Motta; reviewed by Marcus Eriksson for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/209ebd38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/209ebd38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/209ebd38

Branch: refs/heads/cassandra-2.1
Commit: 209ebd380b641c4f065e9687186f546f8a50b242
Parents: d200d13
Author: Paulo Motta 
Authored: Mon Apr 18 18:44:07 2016 -0300
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:42:36 2016 +0200

--
 CHANGES.txt |   4 +-
 .../org/apache/cassandra/db/DataTracker.java|  12 +++
 .../SSTableCompactingNotification.java  |  41 
 .../LongLeveledCompactionStrategyTest.java  | 101 +++
 4 files changed, 155 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 90a4f23..76d3673 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,5 @@
-2.1.15
- * Replace sstables on DataTracker before marking them as non-compacting during anti-compaction (CASSANDRA-11548)
-
 2.1.14
+ * Replace sstables on DataTracker before marking them as non-compacting during anti-compaction (CASSANDRA-11548)
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java b/src/java/org/apache/cassandra/db/DataTracker.java
index ef25236..c731a35 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -222,7 +222,10 @@ public class DataTracker
 
             View newView = currentView.markCompacting(sstables);
             if (view.compareAndSet(currentView, newView))
+            {
+                notifyCompacting(sstables, true);
                 return true;
+            }
         }
     }
 
@@ -247,6 +250,8 @@ public class DataTracker
             // interrupted after the CFS is invalidated, those sstables need to be unreferenced as well, so we do that here.
             unreferenceSSTables();
         }
+
+        notifyCompacting(unmark, false);
     }
 
     public void markObsolete(Collection<SSTableReader> sstables, OperationType compactionType)
@@ -511,6 +516,13 @@ public class DataTracker
             subscriber.handleNotification(notification, this);
     }
 
+    public void notifyCompacting(Iterable<SSTableReader> reader, boolean compacting)
+    {
+        INotification notification = new SSTableCompactingNotification(reader, compacting);
+        for (INotificationConsumer subscriber : subscribers)
+            subscriber.handleNotification(notification, this);
+    }
+
     public void notifyAdded(SSTableReader added)
     {
         INotification notification = new SSTableAddedNotification(added);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/209ebd38/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
--
diff --git a/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
new file mode 100644
index 000..6eddf3f
--- /dev/null
+++ b/src/java/org/apache/cassandra/notifications/SSTableCompactingNotification.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.notifications;
+
+import org.apache.cassandra.io.sstable.SSTableReader;
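
This copy of the commit is truncated at the same point. For illustration of how the new notification can be consumed, below is a hypothetical subscriber built on the existing INotificationConsumer interface; the class, its field, and the assumption that the notification exposes sstables and a compacting flag are mine, not part of the patch or of the LongLeveledCompactionStrategyTest changes listed in the diffstat.

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.cassandra.io.sstable.SSTableReader;
import org.apache.cassandra.notifications.INotification;
import org.apache.cassandra.notifications.INotificationConsumer;
import org.apache.cassandra.notifications.SSTableCompactingNotification;

// Hypothetical consumer that mirrors the tracker's compacting set; illustration only.
public class CompactingSetTracker implements INotificationConsumer
{
    private final Set<SSTableReader> compacting =
            Collections.newSetFromMap(new ConcurrentHashMap<SSTableReader, Boolean>());

    public void handleNotification(INotification notification, Object sender)
    {
        if (!(notification instanceof SSTableCompactingNotification))
            return;

        SSTableCompactingNotification n = (SSTableCompactingNotification) notification;
        for (SSTableReader sstable : n.sstables)
        {
            if (n.compacting)
                compacting.add(sstable);    // marked compacting
            else
                compacting.remove(sstable); // unmarked again
        }
    }

    public Set<SSTableReader> currentlyCompacting()
    {
        return compacting;
    }
}
{code}

Such a consumer would be registered the same way as the other SSTable notification subscribers, via the tracker's subscribe() method.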

[09/14] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-19 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97a43fff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97a43fff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97a43fff

Branch: refs/heads/cassandra-3.0
Commit: 97a432610f5f57678c69ca674b7d795c36b4
Parents: 77ab773 209ebd3
Author: Marcus Eriksson 
Authored: Tue Apr 19 15:48:52 2016 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:48:52 2016 +0200

--

--




[02/14] cassandra git commit: Replace sstables on DataTracker before marking them as non-compacting during anti-compaction

2016-04-19 Thread marcuse
Replace sstables on DataTracker before marking them as non-compacting during 
anti-compaction

Patch by Ruoran Wang; reviewed by Paulo Motta for CASSANDRA-11548


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d200d137
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d200d137
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d200d137

Branch: refs/heads/trunk
Commit: d200d137823d5b250406bccb35473a8fc2f14faf
Parents: 5c5c5b4
Author: Ruoran Wang 
Authored: Mon Apr 18 19:49:30 2016 -0300
Committer: Paulo Motta 
Committed: Tue Apr 19 08:00:09 2016 -0300

--
 CHANGES.txt  |  3 +++
 .../cassandra/db/compaction/CompactionManager.java   | 15 ---
 2 files changed, 11 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6385509..90a4f23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,6 @@
+2.1.15
+ * Replace sstables on DataTracker before marking them as non-compacting during anti-compaction (CASSANDRA-11548)
+
 2.1.14
  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d200d137/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index e382cab..96d873f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1125,7 +1125,6 @@ public class CompactionManager implements CompactionManagerMBean
         int unrepairedKeyCount = 0;
         logger.info("Performing anticompaction on {} sstables", repairedSSTables.size());
         // iterate over sstables to check if the repaired / unrepaired ranges intersect them.
-        Set<SSTableReader> successfullyAntiCompactedSSTables = new HashSet<>();
         for (SSTableReader sstable : repairedSSTables)
         {
             // check that compaction hasn't stolen any sstables used in previous repair sessions
@@ -1137,8 +1136,7 @@ public class CompactionManager implements CompactionManagerMBean
             }
 
             logger.info("Anticompacting {}", sstable);
-            Set<SSTableReader> sstableAsSet = new HashSet<>();
-            sstableAsSet.add(sstable);
+            Set<SSTableReader> sstableAsSet = Sets.newHashSet(sstable);
 
             File destination = cfs.directories.getWriteableLocationAsFile(cfs.getExpectedCompactedFileSize(sstableAsSet, OperationType.ANTICOMPACTION));
             SSTableRewriter repairedSSTableWriter = new SSTableRewriter(cfs, sstableAsSet, sstable.maxDataAge, false, false);
@@ -1177,9 +1175,13 @@ public class CompactionManager implements CompactionManagerMBean
                 {
                     metrics.finishCompaction(ci);
                 }
-                anticompactedSSTables.addAll(repairedSSTableWriter.finish(repairedAt));
-                anticompactedSSTables.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
-                successfullyAntiCompactedSSTables.add(sstable);
+
+                List<SSTableReader> anticompacted = new ArrayList<>();
+                anticompacted.addAll(repairedSSTableWriter.finish(repairedAt));
+                anticompacted.addAll(unRepairedSSTableWriter.finish(ActiveRepairService.UNREPAIRED_SSTABLE));
+                anticompactedSSTables.addAll(anticompacted);
+
+                cfs.getDataTracker().markCompactedSSTablesReplaced(sstableAsSet, anticompacted, OperationType.ANTICOMPACTION);
                 cfs.getDataTracker().unmarkCompacting(sstableAsSet);
             }
             catch (Throwable e)
@@ -1190,7 +1192,6 @@ public class CompactionManager implements CompactionManagerMBean
                     unRepairedSSTableWriter.abort();
                 }
             }
-        cfs.getDataTracker().markCompactedSSTablesReplaced(successfullyAntiCompactedSSTables, anticompactedSSTables, OperationType.ANTICOMPACTION);
         String format = "Repaired {} keys of {} for {}/{}";
         logger.debug(format, repairedKeyCount, (repairedKeyCount + unrepairedKeyCount), cfs.keyspace, cfs.getColumnFamilyName());
         String format2 = "Anticompaction completed successfully, anticompacted from {} to {} sstable(s).";



[12/14] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-19 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f85a20f7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f85a20f7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f85a20f7

Branch: refs/heads/cassandra-3.0
Commit: f85a20f7987814c7c6ca77acdb97587172c2a8ce
Parents: 7f4b5e3 97a43ff
Author: Marcus Eriksson 
Authored: Tue Apr 19 15:48:59 2016 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 19 15:48:59 2016 +0200

--

--



