[jira] [Created] (CASSANDRA-12155) ProposeCallback.java is too spammy for debug.log

2016-07-08 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-12155:


 Summary: ProposeCallback.java is too spammy for debug.log
 Key: CASSANDRA-12155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12155
 Project: Cassandra
  Issue Type: Bug
  Components: Observability
Reporter: Wei Deng
Priority: Minor


As stated in [this wiki 
page|https://wiki.apache.org/cassandra/LoggingGuidelines] derived from the work 
on CASSANDRA-10241, the DEBUG level logging in debug.log is intended for "+low 
frequency state changes or message passing. Non-critical path logs on operation 
details, performance measurements or general troubleshooting information.+"

However, it appears that in a production deployment of C* 3.x, the LWT message 
passing from ProposeCallback.java gets printed every 1-2 seconds, which drowns 
out the other important DEBUG level messages in debug.log, like the following:

{noformat}
DEBUG [SharedPool-Worker-2] 2016-07-09 05:23:57,800  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,803  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,804  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:03,807  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:03,807  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:06,811  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:06,811  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:09,815  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:09,815  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:12,819  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:12,819  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:15,823  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:15,823  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:18,827  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:18,827  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:21,831  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:21,831  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:27,839  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:27,839  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:33,847  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:33,847  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,851  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,852  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:39,855  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:39,855  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:42,859  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:42,859  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:45,864  ProposeCallback.java:62 - 
Propose response true from /10.240.0.3
DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:45,864  ProposeCallback.java:62 - 
Propose response true from /10.240.0.2
{noformat}
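Until the statement itself is demoted or rate-limited, a per-logger override can keep this one class out of debug.log without losing other DEBUG output. A hypothetical logback.xml fragment (the package path for ProposeCallback is assumed here, not taken from the report):

```xml
<!-- Hypothetical fragment: raise only this class's level so its DEBUG
     output stays out of debug.log while all other DEBUG logging survives. -->
<logger name="org.apache.cassandra.service.paxos.ProposeCallback" level="INFO"/>
```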

[jira] [Updated] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-08 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11715:
---
Status: In Progress  (was: Patch Available)

> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.
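A minimal sketch of the kind of change requested, making a hard-coded threshold overridable via a system property. The property name and class below are illustrative assumptions, not the committed patch:

```java
// Sketch (hypothetical, not the actual GCInspector patch): read the minimum
// GC log duration from a system property, falling back to the old 200ms default.
public class GCInspectorSketch
{
    // Overridable with e.g. -Dcassandra.gcinspector_min_log_duration_in_ms=500
    static final long MIN_LOG_DURATION =
        Long.getLong("cassandra.gcinspector_min_log_duration_in_ms", 200L);

    static boolean shouldLog(long gcPauseMillis)
    {
        // Only pauses at or above the configured threshold reach the INFO log.
        return gcPauseMillis >= MIN_LOG_DURATION;
    }

    public static void main(String[] args)
    {
        System.out.println(shouldLog(500)); // above the 200ms default
        System.out.println(shouldLog(100)); // below it, stays quiet
    }
}
```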



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-08 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11715:
---
Comment: was deleted

(was: ||Branch||testall||dtest||
| [Trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-testall/lastBuild/ | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-dtest/lastBuild/ | 
| [3.0|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715-3.0] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-3.0-testall/lastBuild/ 
| http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-3.0-dtest/lastBuild/ 
| 
| [2.2|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715-2.2] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-2.2-testall/lastBuild/ 
| http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-2.2-dtest/lastBuild/ 
| 
)

> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.





[jira] [Updated] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-08 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11715:
---
Status: Patch Available  (was: In Progress)

> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.





[jira] [Commented] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368937#comment-15368937
 ] 

Jeff Jirsa commented on CASSANDRA-11715:


||Branch||testall||dtest||
| [Trunk|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-testall/lastBuild/ | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-dtest/lastBuild/ | 
| [3.0|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715-3.0] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-3.0-testall/lastBuild/ 
| http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-3.0-dtest/lastBuild/ 
| 
| [2.2|https://github.com/jeffjirsa/cassandra/tree/cassandra-11715-2.2] | 
http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-2.2-testall/lastBuild/ 
| http://cassci.datastax.com/job/jeffjirsa-cassandra-11715-2.2-dtest/lastBuild/ 
| 


> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.





[jira] [Assigned] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-08 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-11715:
--

Assignee: Jeff Jirsa

> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.





[jira] [Commented] (CASSANDRA-11841) Add keep-alive to stream protocol

2016-07-08 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368810#comment-15368810
 ] 

Yuki Morishita commented on CASSANDRA-11841:


Thanks for the update.
I found several errors in the tests, so I pushed the fixes to my branch below.

* Fix NPE when checking the version. (btw, the version that supports keep-alive 
is bumped to 3.10.)
* Defer sending keep-alive until after the stream session is initialized. The 
original patch sometimes causes a keep-alive to be received before the peer 
initializes its session, and breaks {{StreamingTransferTest}}.

||branch||testall||dtest||
|[11841|https://github.com/yukim/cassandra/tree/11841]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-11841-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-11841-dtest/lastCompletedBuild/testReport/]|

(Tests are still running.)
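A rough illustration of the "defer until initialized" idea described above (hypothetical names and intervals, not Yuki's actual patch):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: keep-alives are only scheduled once the stream session has finished
// initializing, so a peer can never receive one before its session exists.
public class KeepAliveSketch
{
    // Daemon thread so the scheduler never blocks JVM shutdown.
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "keep-alive");
            t.setDaemon(true);
            return t;
        });

    volatile boolean initialized = false;

    void onSessionInitialized()
    {
        initialized = true;
        // Start keep-alives only now (the "defer" part of the fix).
        scheduler.scheduleAtFixedRate(this::sendKeepAlive, 30, 30, TimeUnit.SECONDS);
    }

    private void sendKeepAlive()
    {
        System.out.println("keep-alive sent");
    }
}
```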

> Add keep-alive to stream protocol
> -
>
> Key: CASSANDRA-11841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11841
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>






[jira] [Commented] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-07-08 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368804#comment-15368804
 ] 

Yuki Morishita commented on CASSANDRA-11414:


Thanks Dave, you are right.
Fixed in ninja commit {{e983590d303c9c19577b3bd5b5c95adc9f5abb8a}}.

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.9
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076





[jira] [Comment Edited] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-07-08 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364839#comment-15364839
 ] 

Yuki Morishita edited comment on CASSANDRA-11414 at 7/9/16 12:25 AM:
-

Committed as {{00e7ecf1394f8704e2f13369f7950e129459ce2c}}.


was (Author: yukim):
Committed as {00e7ecf1394f8704e2f13369f7950e129459ce2c}.

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.9
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076





[01/10] cassandra git commit: ninja fix condition to ensure close in ConnectionHandler

2016-07-08 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 f28409bb9 -> e983590d3
  refs/heads/cassandra-3.0 a227cc61c -> 2fa44cd88
  refs/heads/cassandra-3.9 c1dcc9ce4 -> 1417a516c
  refs/heads/trunk cb9865c94 -> 09720f81d


ninja fix condition to ensure close in ConnectionHandler


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e983590d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e983590d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e983590d

Branch: refs/heads/cassandra-2.2
Commit: e983590d303c9c19577b3bd5b5c95adc9f5abb8a
Parents: f28409b
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:06 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:06 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e983590d/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--
diff --git a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java 
b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
index 364435e..d3d8ed2 100644
--- a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
+++ b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
@@ -233,7 +233,7 @@ public class ConnectionHandler
 
 protected void signalCloseDone()
 {
-if (closeFuture == null)
+if (!isClosed())
 close();
 
 closeFuture.get().set(null);



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-08 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1417a516
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1417a516
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1417a516

Branch: refs/heads/cassandra-3.9
Commit: 1417a516cf7cc89dc456eb8c9c7e2759811a6991
Parents: c1dcc9c 2fa44cd
Author: Yuki Morishita 
Authored: Fri Jul 8 19:12:34 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:12:34 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1417a516/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--



[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-07-08 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1417a516
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1417a516
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1417a516

Branch: refs/heads/trunk
Commit: 1417a516cf7cc89dc456eb8c9c7e2759811a6991
Parents: c1dcc9c 2fa44cd
Author: Yuki Morishita 
Authored: Fri Jul 8 19:12:34 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:12:34 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1417a516/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--



[02/10] cassandra git commit: ninja fix condition to ensure close in ConnectionHandler

2016-07-08 Thread yukim
ninja fix condition to ensure close in ConnectionHandler


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e983590d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e983590d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e983590d

Branch: refs/heads/cassandra-3.0
Commit: e983590d303c9c19577b3bd5b5c95adc9f5abb8a
Parents: f28409b
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:06 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:06 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e983590d/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--
diff --git a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java 
b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
index 364435e..d3d8ed2 100644
--- a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
+++ b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
@@ -233,7 +233,7 @@ public class ConnectionHandler
 
 protected void signalCloseDone()
 {
-if (closeFuture == null)
+if (!isClosed())
 close();
 
 closeFuture.get().set(null);



[04/10] cassandra git commit: ninja fix condition to ensure close in ConnectionHandler

2016-07-08 Thread yukim
ninja fix condition to ensure close in ConnectionHandler


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e983590d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e983590d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e983590d

Branch: refs/heads/trunk
Commit: e983590d303c9c19577b3bd5b5c95adc9f5abb8a
Parents: f28409b
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:06 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:06 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e983590d/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--
diff --git a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java 
b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
index 364435e..d3d8ed2 100644
--- a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
+++ b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
@@ -233,7 +233,7 @@ public class ConnectionHandler
 
 protected void signalCloseDone()
 {
-if (closeFuture == null)
+if (!isClosed())
 close();
 
 closeFuture.get().set(null);



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fa44cd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fa44cd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fa44cd8

Branch: refs/heads/cassandra-3.0
Commit: 2fa44cd88119c24489836e0c0c91fd3eed86ce3c
Parents: a227cc6 e983590
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:46 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:46 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[10/10] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-07-08 Thread yukim
Merge branch 'cassandra-3.9' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09720f81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09720f81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09720f81

Branch: refs/heads/trunk
Commit: 09720f81d9185a3e4c355f9cfb7db439561b1e3c
Parents: cb9865c 1417a51
Author: Yuki Morishita 
Authored: Fri Jul 8 19:24:08 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:24:08 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fa44cd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fa44cd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fa44cd8

Branch: refs/heads/cassandra-3.9
Commit: 2fa44cd88119c24489836e0c0c91fd3eed86ce3c
Parents: a227cc6 e983590
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:46 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:46 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-07-08 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fa44cd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fa44cd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fa44cd8

Branch: refs/heads/trunk
Commit: 2fa44cd88119c24489836e0c0c91fd3eed86ce3c
Parents: a227cc6 e983590
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:46 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:46 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[03/10] cassandra git commit: ninja fix condition to ensure close in ConnectionHandler

2016-07-08 Thread yukim
ninja fix condition to ensure close in ConnectionHandler


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e983590d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e983590d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e983590d

Branch: refs/heads/cassandra-3.9
Commit: e983590d303c9c19577b3bd5b5c95adc9f5abb8a
Parents: f28409b
Author: Yuki Morishita 
Authored: Fri Jul 8 19:05:06 2016 -0500
Committer: Yuki Morishita 
Committed: Fri Jul 8 19:05:06 2016 -0500

--
 src/java/org/apache/cassandra/streaming/ConnectionHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e983590d/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
--
diff --git a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java 
b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
index 364435e..d3d8ed2 100644
--- a/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
+++ b/src/java/org/apache/cassandra/streaming/ConnectionHandler.java
@@ -233,7 +233,7 @@ public class ConnectionHandler
 
 protected void signalCloseDone()
 {
-if (closeFuture == null)
+if (!isClosed())
 close();
 
 closeFuture.get().set(null);



cassandra git commit: only calculate getWriteableLocations once

2016-07-08 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk ae4d705db -> cb9865c94


only calculate getWriteableLocations once


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb9865c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb9865c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb9865c9

Branch: refs/heads/trunk
Commit: cb9865c940fbba1abdcc64d151ee79c22f6d3371
Parents: ae4d705
Author: Dave Brosius 
Authored: Fri Jul 8 19:30:04 2016 -0400
Committer: Dave Brosius 
Committed: Fri Jul 8 19:30:04 2016 -0400

--
 .../cassandra/db/compaction/CompactionStrategyManager.java  | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb9865c9/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
index b6d31d5..bf367a3 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java
@@ -235,11 +235,10 @@ public class CompactionStrategyManager implements 
INotificationConsumer
 if (!cfs.getPartitioner().splitter().isPresent())
 return 0;
 
-List boundaries = 
StorageService.getDiskBoundaries(cfs, locations.getWriteableLocations());
+Directories.DataDirectory[] directories = 
locations.getWriteableLocations();
+List boundaries = 
StorageService.getDiskBoundaries(cfs, directories);
 if (boundaries == null)
 {
-Directories.DataDirectory[] directories = 
locations.getWriteableLocations();
-
 // try to figure out location based on sstable directory:
 for (int i = 0; i < directories.length; i++)
 {



[jira] [Commented] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-07-08 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368664#comment-15368664
 ] 

Dave Brosius commented on CASSANDRA-11414:
--

This doesn't look right to me, someone take a look?

{code}
protected void signalCloseDone()
{
    if (closeFuture == null)
        close();

    closeFuture.get().set(null);
{code}

seems like it should be

{code}
    if (closeFuture.get() == null)
        close();
{code}

(closeFuture will never be null)
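The shape of the bug can be sketched in isolation (illustrative names, assuming closeFuture is an AtomicReference created at construction): the reference itself is never null, only its contents start out null, so the original check can never fire and the ninja fix routes through an isClosed()-style helper instead.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the close-once idiom under discussion (hypothetical, simplified
// from ConnectionHandler): closeFuture is assigned at construction, so
// `closeFuture == null` is always false; the *contents* must be checked.
public class CloseOnceSketch
{
    final AtomicReference<Object> closeFuture = new AtomicReference<>();

    boolean isClosed()
    {
        // Closed iff close() has populated the future.
        return closeFuture.get() != null;
    }

    void close()
    {
        closeFuture.compareAndSet(null, new Object());
    }

    void signalCloseDone()
    {
        // Buggy form: `if (closeFuture == null) close();` never runs close().
        if (!isClosed())
            close();
    }
}
```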

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.9
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8457) nio MessagingService

2016-07-08 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-8457:
---
   Labels: netty performance  (was: performance)
 Assignee: Jason Brown
Fix Version/s: (was: 3.x)
   4.x
   Status: Patch Available  (was: Open)

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: performance, netty
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2016-07-08 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368604#comment-15368604
 ] 

Jason Brown commented on CASSANDRA-8457:


Here's the first pass at switching internode messaging to netty.
||8457||
|[branch|https://github.com/jasobrown/cassandra/tree/8457]|
|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-8457-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-8457-testall/]|

I've tried to preserve as much of the functionality/behaviors of the existing 
implementation as I could, and some aspects were a bit tricky in the 
non-blocking IO, netty world. I've also documented the code as extensively as 
I could, and I still want to add more high-level docs on 1) the internode 
protocol itself, and 2) the use of netty in internode messaging. Hopefully the 
current state of documentation helps with understanding and reviewing the 
changes. Here are some high-level notes as points of 
departure/interest/discussion:

- I've left the existing {{OutboundTcpConnection}} code largely intact for the 
short term (read on for more detail), but the new and existing behaviors 
coexist in the code together (though not at run time).
- There is a yaml property to enable/disable using netty for internode 
messaging. If disabled, we'll fall back to the existing 
{{OutboundTcpConnection}} code. Part of this stems from the fact that streaming 
uses the same socket infrastructure and handshake as internode messaging, and 
streaming would be broken without the {{OutboundTcpConnection}} 
implementation. I am knee-deep in switching streaming over to a non-blocking, 
netty-based solution, but that is a separate ticket/body of work.
- In order to support non-blocking IO, I've altered the internode messaging 
protocol such that each message is framed, and the frame contains a message 
size. The protocol change is what forces these changes to happen at a major 
rev update, hence 4.0.
- Backward compatibility - We will need to handle the case of a cluster 
upgrade where some nodes are on the previous version of the protocol (not 
upgraded) and some are upgraded. The upgraded nodes will still need to behave 
and operate correctly with the older nodes; that functionality is encapsulated 
and documented in {{LegacyClientHandler}} (for the receive side) and 
{{MessageOutHandler}} (for the send side).
- Message coalescing - The existing behaviors in {{CoalescingStrategies}} are 
predicated on parking the thread to allow outbound messages to arrive (and be 
coalesced). Parking a thread in a non-blocking/netty context is a bad thing, so 
I've inverted the behavior of message coalescing a bit. Instead of blocking a 
thread, I've extended the {{CoalescingStrategies.CoalescingStrategy}} 
implementations to return a 'time to wait' to let messages arrive for sending. 
I then schedule a task in the netty scheduler to execute that many nanoseconds 
in the future, queue up incoming messages, and send them out when the 
scheduled task executes (this is {{CoalescingMessageOutHandler}}). I've also 
added callback functions to the {{CoalescingStrategies.CoalescingStrategy}} 
implementations for the non-blocking paradigm to record updates to the 
strategy (for recalculation of the time window, etc.).
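
The inverted coalescing pattern described above (return a wait time and schedule a flush task instead of parking a thread) can be sketched with a plain JDK scheduler standing in for netty's event loop; the class and method names here are illustrative, not the patch's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class CoalescingSender
{
    private final Queue<String> pending = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean flushScheduled = new AtomicBoolean();
    private final ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
    private final List<List<String>> sentBatches = new ArrayList<>();

    // The strategy's 'time to wait' for more messages to arrive, in nanos.
    long coalesceWindowNanos()
    {
        return TimeUnit.MILLISECONDS.toNanos(50);
    }

    public void enqueue(String msg)
    {
        pending.add(msg);
        // Instead of parking the sending thread, schedule a single flush
        // task on the loop to run once the coalescing window elapses.
        if (flushScheduled.compareAndSet(false, true))
            loop.schedule(this::flush, coalesceWindowNanos(), TimeUnit.NANOSECONDS);
    }

    private void flush()
    {
        flushScheduled.set(false);
        // Drain everything that arrived during the window into one batch.
        List<String> batch = new ArrayList<>();
        for (String m; (m = pending.poll()) != null; )
            batch.add(m);
        synchronized (sentBatches)
        {
            sentBatches.add(batch);
        }
    }

    public int batchCount()
    {
        synchronized (sentBatches) { return sentBatches.size(); }
    }

    public int totalSent()
    {
        synchronized (sentBatches)
        {
            return sentBatches.stream().mapToInt(List::size).sum();
        }
    }

    public void shutdown()
    {
        loop.shutdown();
    }
}
```

Messages enqueued in quick succession land in a single batch rather than each triggering its own write.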
- Message flushing - Currently in {{OutboundTcpConnection}}, we only call flush 
on the output stream if the backlog is empty (there are no more messages to 
send to the peer). Unfortunately there's no equivalent API in netty to know 
whether any messages are waiting in the channel to be sent. The solution I've 
gone with is to have a shared counter outside of the channel 
({{InternodeMessagingConnection#outboundCount}}) and inside the channel 
({{CoalescingMessageOutHandler#outboundCounter}}); when 
{{CoalescingMessageOutHandler}} sees that the counter is zero, it knows it can 
explicitly call flush. I'm not entirely thrilled with this approach, and there 
are some potential race/correctness problems (and complexity!) when 
reconnections occur, so I'm open to suggestions on how to achieve this 
functionality.
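
A minimal sketch of the shared-counter flush idea (names here are hypothetical; the actual fields live in {{InternodeMessagingConnection}} and {{CoalescingMessageOutHandler}}):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FlushTracker
{
    // Shared counter: incremented when a message is queued outside the
    // channel, decremented inside the channel as each message is written.
    private final AtomicInteger outboundCount = new AtomicInteger();
    private int flushes;

    public void messageQueued()
    {
        outboundCount.incrementAndGet();
    }

    public void messageWritten()
    {
        // When the counter drops to zero, nothing else is pending, so the
        // handler knows it is safe to explicitly flush the channel.
        if (outboundCount.decrementAndGet() == 0)
            flushes++;
    }

    public int flushCount()
    {
        return flushes;
    }
}
```

As the comment notes, a reconnect can race the counter (messages queued against one connection, written on another), which is why the approach is flagged as open for discussion.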
- I've included support for netty's OpenSSL library. The operator will need to 
deploy an extra netty jar (http://netty.io/wiki/forked-tomcat-native.html) to 
get the OpenSSL behavior (I'm not sure if we can or want to include it in our 
distro). {{SSLFactory}} needed to be refactored a bit to support the OpenSSL 
functionality.

I'll be doing some more extensive testing next week (including a more thorough 
exploration of the backward compatibility).

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Priority: Minor
>   

[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-07-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368583#comment-15368583
 ] 

Tyler Hobbs commented on CASSANDRA-10993:
-

Well, we're also managing timeouts, dispatching events, and several other 
things in the event loop.  Converting all of that to Tasks to submit to the 
netty event loop might be possible, but could also be quite a bit slower.

I've done some further profiling with flame graphs, and it looks like the 
{{MpscQueue}} performance may not be as crucial as I thought.  So, don't worry 
about this too much until I've done some further research to figure out why I'm 
getting different profiling results.

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> forbidding us to alter, or add any non-{{default}} interface methods to those 
> extension points, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.
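
For comparison's sake, the chained-futures style the description weighs against explicit FSMs can be illustrated with a tiny JDK {{CompletableFuture}} chain (purely illustrative, not code from the ticket):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ChainedRead
{
    // Each stage composes on the previous one's completion; timeouts,
    // retries, and repair steps would each become further chained stages,
    // which is where the description argues the style gets hard to follow.
    static CompletableFuture<String> read(String key)
    {
        return CompletableFuture.supplyAsync(() -> "raw:" + key)  // fetch stage
                                .thenApply(String::toUpperCase)   // post-process stage
                                .orTimeout(1, TimeUnit.SECONDS);  // whole-chain timeout
    }
}
```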



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368576#comment-15368576
 ] 

Tyler Hobbs commented on CASSANDRA-12153:
-

bq. If we are really looking for speed, I think that we should have some field 
variables for hasIN, hasEq

When profiling with the patch applied, {{hasIN()}} drops to ~0.02% of the time, 
so I think that's fast enough.

bq. Any reason not to do the same with...

Seems like we might as well do that too.  Those didn't show up for the simple 
query I was profiling, but they might be in the hot path for some other types 
of queries.
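
The optimization under discussion (replacing the LinkedHashSet-plus-streams implementation with a plain loop) can be sketched as follows; the {{Restriction}} type here is a simplified stand-in for Cassandra's actual class:

```java
import java.util.List;

public class Restrictions
{
    enum Kind { EQ, IN, SLICE }

    static class Restriction
    {
        final Kind kind;
        Restriction(Kind kind) { this.kind = kind; }
        boolean isIN() { return kind == Kind.IN; }
    }

    // Stream-based style the ticket describes as slow: each call builds a
    // stream pipeline (and, in the original, a LinkedHashSet) just to
    // answer a yes/no question.
    static boolean hasINStream(List<Restriction> rs)
    {
        return rs.stream().anyMatch(Restriction::isIN);
    }

    // Plain-loop replacement: no intermediate allocations, early exit.
    static boolean hasIN(List<Restriction> rs)
    {
        for (Restriction r : rs)
            if (r.isIN())
                return true;
        return false;
    }
}
```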

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11403) Serializer/Version mismatch during upgrades to C* 3.0

2016-07-08 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368524#comment-15368524
 ] 

Jeremiah Jordan commented on CASSANDRA-11403:
-

We are seeing this sporadically on current 3.0 head still.

> Serializer/Version mismatch during upgrades to C* 3.0
> -
>
> Key: CASSANDRA-11403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Anthony Cozzie
>
> The problem line seems to be:
> {code}
> MessageOut message = 
> readCommand.createMessage(MessagingService.instance().getVersion(endpoint));
> {code}
> SinglePartitionReadCommand then picks the serializer based on the version:
> {code}
> return new MessageOut<>(MessagingService.Verb.READ, this, version < 
> MessagingService.VERSION_30 ? legacyReadCommandSerializer : serializer);
> {code}
> However, OutboundTcpConnectionPool will test the payload size vs the version 
> from its smallMessages connection:
> {code}
> return msg.payloadSize(smallMessages.getTargetVersion()) > 
> LARGE_MESSAGE_THRESHOLD
> {code}
> Which is set when the connection/pool is created:
> {code}
> targetVersion = MessagingService.instance().getVersion(pool.endPoint());
> {code}
> During an upgrade, this state can change between these two calls, leading to 
> 3.0 serializer being used on 2.x packets and the following stacktrace:
> ERROR [OptionalTasks:1] 2016-03-07 19:53:06,445  CassandraDaemon.java:195 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:632)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.ReadCommand$Serializer.serializedSize(ReadCommand.java:536)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$NeverSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:214)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:918)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:251)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:212)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:77)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:252) 
> ~[cassandra-all-3.0.3.903.jar:3.0.3.903]
>   at 
> 

[jira] [Reopened] (CASSANDRA-11403) Serializer/Version mismatch during upgrades to C* 3.0

2016-07-08 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-11403:
-

> Serializer/Version mismatch during upgrades to C* 3.0
> -
>
> Key: CASSANDRA-11403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Anthony Cozzie
>

[jira] [Commented] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368506#comment-15368506
 ] 

Branimir Lambov commented on CASSANDRA-12018:
-

{{size}} in {{DirectorySizeCalculator}} should also be volatile.

Other than that, +1.

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SimpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity
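
The platform-independent simplification quoted above ({{Arrays.stream(path.listFiles()).mapToLong(File::length).sum()}}) can be sketched with a null guard, since {{listFiles()}} returns null for a non-directory or unreadable path (class name here is illustrative):

```java
import java.io.File;
import java.util.Arrays;

public class DirSize
{
    // Single-level directory size: sums the lengths of the files directly
    // inside the directory (subdirectories contribute their entry length
    // only, not their recursive contents).
    public static long sizeOf(File path)
    {
        File[] files = path.listFiles();
        if (files == null)          // not a directory, or I/O error
            return 0L;
        return Arrays.stream(files).mapToLong(File::length).sum();
    }
}
```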



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10202) simplify CommitLogSegmentManager

2016-07-08 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368493#comment-15368493
 ] 

Branimir Lambov commented on CASSANDRA-10202:
-

The above were real problems: code that wasn't correctly updated for the CDC 
changes. Fixed now:
 - {{discardSegment(CommitLogSegment)}} _was_ called, but it wasn't using 
{{discard}},
 - management thread changed to use {{createSegment}} instead of the segment 
factory to allow CDC size/type updates,
 - {{discardSegment(File)}} was really unused (renamed in newer code).

Rebased and re-uploaded at the same location:
|[code|https://github.com/blambov/cassandra/tree/10202-commitlog]|[utest|http://cassci.datastax.com/job/blambov-10202-commitlog-testall/]|[dtest|http://cassci.datastax.com/job/blambov-10202-commitlog-dtest/]|

Tests look good, but CDC unit tests should have caught the problems above. I 
will add some new tests next week (i.e. ticket not ready to commit/review again 
yet).

> simplify CommitLogSegmentManager
> 
>
> Key: CASSANDRA-10202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10202
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Branimir Lambov
>Priority: Minor
>
> Now that we only keep one active segment around we can simplify this from the 
> old recycling design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10202) simplify CommitLogSegmentManager

2016-07-08 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-10202:

Status: Open  (was: Patch Available)

> simplify CommitLogSegmentManager
> 
>
> Key: CASSANDRA-10202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10202
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Branimir Lambov
>Priority: Minor
>
> Now that we only keep one active segment around we can simplify this from the 
> old recycling design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11687) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test

2016-07-08 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368434#comment-15368434
 ] 

Jim Witschey edited comment on CASSANDRA-11687 at 7/8/16 8:36 PM:
--

Saw 8 failures, all of which looked like this:

{code}
Error Message

concurrent rebuild should not be allowed, but one rebuild command should have 
succeeded.
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-w6QEHl
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'start_rpc': 'true'}
cassandra.cluster: INFO: New Cassandra host  discovered
cassandra.cluster: WARNING: Host 127.0.0.1 has been marked down
cassandra.pool: INFO: Successful reconnection to 127.0.0.1, marking node up if 
it isn't already
cassandra.cluster: INFO: Host 127.0.0.1 may be up; will prepare queries and 
open connection pool
cassandra.cluster: INFO: New Cassandra host  discovered
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/rebuild_test.py", line 106, in 
simple_rebuild_test
msg='concurrent rebuild should not be allowed, but one rebuild command 
should have succeeded.')
  File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
"concurrent rebuild should not be allowed, but one rebuild command should have 
succeeded.\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/mnt/tmp/dtest-w6QEHl\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'start_rpc': 'true'}\ncassandra.cluster: INFO: New Cassandra host 
 discovered\ncassandra.cluster: WARNING: Host 127.0.0.1 
has been marked down\ncassandra.pool: INFO: Successful reconnection to 
127.0.0.1, marking node up if it isn't already\ncassandra.cluster: INFO: Host 
127.0.0.1 may be up; will prepare queries and open connection 
pool\ncassandra.cluster: INFO: New Cassandra host  
discovered\n- >> end captured logging << 
-"
{code}

EDIT: Ah, interesting -- this is _with_ vnodes, so it looks like it can fail 
with or without.


was (Author: mambocab):
Saw 8 failures, all of which looked like this:

{code}
Error Message

concurrent rebuild should not be allowed, but one rebuild command should have 
succeeded.
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-w6QEHl
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'start_rpc': 'true'}
cassandra.cluster: INFO: New Cassandra host  discovered
cassandra.cluster: WARNING: Host 127.0.0.1 has been marked down
cassandra.pool: INFO: Successful reconnection to 127.0.0.1, marking node up if 
it isn't already
cassandra.cluster: INFO: Host 127.0.0.1 may be up; will prepare queries and 
open connection pool
cassandra.cluster: INFO: New Cassandra host  discovered
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/rebuild_test.py", line 106, in 
simple_rebuild_test
msg='concurrent rebuild should not be allowed, but one rebuild command 
should have succeeded.')
  File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
"concurrent rebuild should not be allowed, but one rebuild command should have 
succeeded.\n >> begin captured logging << 
\ndtest: DEBUG: cluster ccm directory: 
/mnt/tmp/dtest-w6QEHl\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'start_rpc': 'true'}\ncassandra.cluster: INFO: New Cassandra host 
 discovered\ncassandra.cluster: WARNING: Host 127.0.0.1 
has been marked down\ncassandra.pool: INFO: Successful reconnection to 
127.0.0.1, marking node up if it isn't already\ncassandra.cluster: INFO: Host 
127.0.0.1 may be up; will prepare queries and open connection 
pool\ncassandra.cluster: INFO: New Cassandra host  
discovered\n- >> end captured logging << 
-"
{code}

> dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
> 

[jira] [Commented] (CASSANDRA-11687) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test

2016-07-08 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368434#comment-15368434
 ] 

Jim Witschey commented on CASSANDRA-11687:
--


> dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
> -
>
> Key: CASSANDRA-11687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11687
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> single failure on most recent run (3.0 no-vnode)
> {noformat}
> concurrent rebuild should not be allowed, but one rebuild command should have 
> succeeded.
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/217/testReport/rebuild_test/TestRebuild/simple_rebuild_test
> Failed on CassCI build cassandra-3.0_novnode_dtest #217



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables

2016-07-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368345#comment-15368345
 ] 

Benjamin Lerer commented on CASSANDRA-12127:


While working on this patch I ran into two other issues:
# In all versions, for non-compact tables or compact tables with more than one 
clustering column, if the table was created with a descending clustering order, 
the ordering did not work properly when an empty ByteBuffer was used as a 
clustering column value. The problem was due to 
{{ReversedType::compareCustom()}}.
# In 3.9, queries with a descending {{ORDER BY}} and {{IN}} restrictions on the 
clustering column did not return the proper results. The regression was 
introduced in CASSANDRA-9986 and affects {{SSTableReversedIterator}}.

||Branch||utests||dtests||
|[2.1|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.1]|[2.1|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.1-testall/]|[2.1|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.1-dtest/]|
|[2.2|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.2]|[2.2|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.2-testall/]|[2.2|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.2-dtest/]|
|[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:12127-3.0]|[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.0-testall/]|[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.0-dtest/]|
|[3.9|https://github.com/apache/cassandra/compare/trunk...blerer:12127-3.9]|[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.9-testall/]|[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.9-dtest/]|

The 2.1 and 2.2 patches:
* add support for queries on compact tables with EQ restrictions on an empty 
value, or slice restrictions with an empty value as the end bound, by returning 
no results
* add support for queries on compact tables with IN restrictions containing an 
empty value, by ignoring that value
* fix the handling of slice restrictions with an empty start bound on compact 
tables with one clustering column
* fix the {{ReversedType::compareCustom()}} problem

The 3.0 patch:
* fix the {{ReversedType::compareCustom()}} problem

The 3.9 patch:
* fix the {{ReversedType::compareCustom()}} problem
* fix the {{SSTableReversedIterator}} problem

In the 2.2, 3.0 and 3.9 patches, the {{SelectOrderByTest}} tests have been 
modified to check the ordering behavior when the data are stored in the 
memtables and in the SSTables.
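The {{ReversedType::compareCustom()}} problem above boils down to how a reversed comparator treats empty values. Below is a minimal illustrative sketch in plain Java (not the actual Cassandra code): a correct reversal simply negates the base comparison for every input, empty values included, so the total order stays consistent.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only (not Cassandra's ReversedType): a reversed
// comparator must delegate to the base comparator with swapped arguments
// for ALL inputs, including empty values. Special-casing empty values
// without reversing them breaks the ordering.
public class ReversedOrderSketch
{
    static final Comparator<byte[]> BASE = (a, b) ->
    {
        // empty sorts before everything in the base (ascending) order
        if (a.length == 0 || b.length == 0)
            return Integer.compare(a.length, b.length);
        return Arrays.compare(a, b);
    };

    // correct reversal: swap the arguments, no special cases
    static final Comparator<byte[]> REVERSED = (a, b) -> BASE.compare(b, a);

    public static void main(String[] args)
    {
        byte[] empty = new byte[0];
        List<byte[]> rows = Arrays.asList(new byte[]{ 2 }, empty, new byte[]{ 1 });
        rows.sort(REVERSED);
        // descending order: 2, 1, then the empty value last
        System.out.println(rows.get(0)[0]); // 2
    }
}
```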

> Queries with empty ByteBuffer values in clustering column restrictions fail 
> for non-composite compact tables
> 
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 12127.txt
>
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>   c blob,
>   value int,
>   PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> Will result in the following Exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>   at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>   [...]
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> Will return 2 rows instead of 0.
> The query: {{SELECT 

[jira] [Updated] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables

2016-07-08 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12127:
---
Fix Version/s: 3.x
   3.0.x
   Status: Patch Available  (was: In Progress)

> Queries with empty ByteBuffer values in clustering column restrictions fail 
> for non-composite compact tables
> 
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 12127.txt
>
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>   c blob,
>   value int,
>   PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> Will result in the following Exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>   at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>   [...]
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> Will return 2 rows instead of 0.
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}}
> {code}
> java.lang.AssertionError
>   at 
> org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253)
>   [...]
> {code}
> I checked 2.0: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} 
> works properly, but {{SELECT * FROM myTable WHERE pk = 1 AND c < 
> textAsBlob('');}} returns the same wrong results as in 2.1.
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is 
> rejected with a clear error message: {{Invalid empty value for clustering 
> column of COMPACT TABLE}}.
> As it is not possible to insert an empty ByteBuffer value into the 
> clustering column of a non-composite compact table, those queries do not 
> have much meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < 
> textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = 
> textAsBlob('');}} will return nothing, 
> and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will 
> return the entire partition (pk = 1).
> In my opinion those queries should probably all be rejected, as the fact 
> that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} was 
> accepted in {{2.0}} seems to have been due to a bug.
> I am of course open to discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368229#comment-15368229
 ] 

Benjamin Lerer edited comment on CASSANDRA-12153 at 7/8/16 7:28 PM:


I am the one to blame for the {{stream()}} method. My main concern, when I 
created it, was just to simplify the code.
If we are really looking for speed, I think that we should have some field 
variables for {{hasIN}}, {{hasEq}}, etc.
That would move the computation to preparation time rather than execution time, 
and it would be performed only once (if my memory is correct, {{hasIN()}} is 
called multiple times).

bq. Then remove RestrictionSet stream() to discourage this from being 
reintroduced?

There are two problems associated with the {{stream()}} method: the creation of 
the {{LinkedHashSet}}, which is used to remove the {{MultiColumnRestriction}} 
duplicates, and the lambda expressions.
The {{LinkedHashSet}} is unfortunately also created in {{iterator()}}, so 
removing {{stream()}} will not solve that problem.
I think we could keep track of whether multicolumn restrictions are used and 
avoid creating the {{LinkedHashSet}} when they are not.
I have no idea of the cost associated with the use of lambdas.


was (Author: blerer):
I am the one to blame for the {{stream()}} method. My main concern, when I 
created it, was just to simplify the code.
If we are really looking for speed, I think that we should have some field 
variables for {{hasIN}}, {{hasEq}} ...
It will move the computation at preparation time rather than at execution time 
and will perform it only once (if my memory is correct {{hasIN()}} is called 
multiple times).

bq. Then remove RestrictionSet stream() to discourage this from being 
reintroduced?

There is 2 problems associated to the {{stream()}} method. The creation of the 
{{LinkedHashSet}} which is used to remove the duplicates 
{{MultiColumnRestrictions}} and the Lambda expressions.
The {{LinkedHashSet}} is unfortunately also created in {{iterator()}} so 
removing {{stream()}} will not solve that problem. 
I think, we could keep track of the fact that multicolumn restrictions are used 
or not and avoid creating the {{LinkedHashSet}} if they are not used.
I have no idea of the cost associated to the use of the lambda.

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.
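The for-loop replacement described above can be sketched as follows (the names and structure here are illustrative, not the actual {{RestrictionSet}} internals): a plain loop over the restrictions already held by the set avoids allocating a stream pipeline per call.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the suggested optimization (illustrative names, not the real
// RestrictionSet): replace a per-call stream pipeline with a plain loop.
public class HasInSketch
{
    interface Restriction { boolean isIN(); }

    final List<Restriction> restrictions = new ArrayList<>();

    // stream-based version: allocates a stream pipeline on every call
    boolean hasINWithStream()
    {
        return restrictions.stream().anyMatch(Restriction::isIN);
    }

    // loop-based version: no allocation on the hot path
    boolean hasINWithLoop()
    {
        for (Restriction r : restrictions)
            if (r.isIN())
                return true;
        return false;
    }
}
```

Both versions return the same answer; the loop just sidesteps the stream and set allocations on the read hot path.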



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368229#comment-15368229
 ] 

Benjamin Lerer commented on CASSANDRA-12153:


I am the one to blame for the {{stream()}} method. My main concern, when I 
created it, was just to simplify the code.
If we are really looking for speed, I think that we should have some field 
variables for {{hasIN}}, {{hasEq}}, etc.
That would move the computation to preparation time rather than execution time, 
and it would be performed only once (if my memory is correct, {{hasIN()}} is 
called multiple times).

bq. Then remove RestrictionSet stream() to discourage this from being 
reintroduced?

There are two problems associated with the {{stream()}} method: the creation of 
the {{LinkedHashSet}}, which is used to remove the {{MultiColumnRestriction}} 
duplicates, and the lambda expressions.
The {{LinkedHashSet}} is unfortunately also created in {{iterator()}}, so 
removing {{stream()}} will not solve that problem.
I think we could keep track of whether multicolumn restrictions are used and 
avoid creating the {{LinkedHashSet}} when they are not.
I have no idea of the cost associated with the use of lambdas.

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-08 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368141#comment-15368141
 ] 

DOAN DuyHai commented on CASSANDRA-12149:
-

I'll debug this issue this weekend using the given sample dataset

> NullPointerException on SELECT with SASI index
> --
>
> Key: CASSANDRA-12149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Andrey Konstantinov
> Attachments: CASSANDRA-12149.txt
>
>
> If I execute the sequence of queries (see the attached file), Cassandra 
> aborts the connection, reporting an NPE on the server side. The SELECT query 
> works without a token range filter, but fails when a token range filter is 
> specified. My intent was to issue multiple SELECT queries targeting the same 
> single partition, filtered by a column indexed by SASI, partitioning results 
> by different token ranges.
> Output from cqlsh on SELECT is the following:
> cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
> mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
> feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
> 9223372036854775807;
> ServerError:  message="java.lang.NullPointerException">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-07-08 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368142#comment-15368142
 ] 

Norman Maurer commented on CASSANDRA-10993:
---

[~thobbs] can you explain why you cannot just add the tasks to Netty's 
EventLoop directly? Is it because you do not want to wake it up, but just want 
to run these once the EventLoop runs?

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> forbidding us to alter, or add any non-{{default}} interface methods to those 
> extension points, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.
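As a toy illustration of the chained-futures style under discussion (purely illustrative, not Cassandra code), each step of a request can be expressed as a {{CompletableFuture}} stage composed without blocking any thread until the final join:

```java
import java.util.concurrent.CompletableFuture;

// Toy sketch of the chained-futures style (not Cassandra code): a read,
// a post-processing stage, and an error-handling stage are composed as
// CompletableFuture stages; nothing blocks until join() at the very end.
public class ChainedFutureSketch
{
    static CompletableFuture<String> readLocal(String key)
    {
        // stands in for an async local read
        return CompletableFuture.supplyAsync(() -> "value-for-" + key);
    }

    static CompletableFuture<String> handleRequest(String key)
    {
        return readLocal(key)
                .thenApply(String::toUpperCase)     // post-processing stage
                .exceptionally(t -> "error: " + t); // error-handling stage
    }

    public static void main(String[] args)
    {
        System.out.println(handleRequest("k1").join());
    }
}
```

The ticket's point is that once timeouts, read repair, and speculative retries are layered in, such chains become hard to follow, which is why an explicit FSM is being considered instead.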



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-08 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368137#comment-15368137
 ] 

Pavel Yaskevich commented on CASSANDRA-12149:
-

[~doanduyhai] are you still planning to look at this or do you want me to?

> NullPointerException on SELECT with SASI index
> --
>
> Key: CASSANDRA-12149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Andrey Konstantinov
> Attachments: CASSANDRA-12149.txt
>
>
> If I execute the sequence of queries (see the attached file), Cassandra 
> aborts the connection, reporting an NPE on the server side. The SELECT query 
> works without a token range filter, but fails when a token range filter is 
> specified. My intent was to issue multiple SELECT queries targeting the same 
> single partition, filtered by a column indexed by SASI, partitioning results 
> by different token ranges.
> Output from cqlsh on SELECT is the following:
> cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
> mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
> feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
> 9223372036854775807;
> ServerError:  message="java.lang.NullPointerException">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12154) "SELECT * FROM foo LIMIT ;" does not error out

2016-07-08 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-12154:


 Summary: "SELECT * FROM foo LIMIT ;" does not error out
 Key: CASSANDRA-12154
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12154
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Robert Stupp
Priority: Minor


We found out that {{SELECT * FROM foo LIMIT ;}} is silently accepted and 
executed, but it should not be.

I have not dug deeper into why that is possible (it's not a big issue IMO), but 
it is strange. It seems the parser does not treat {{LIMIT}} as {{K_LIMIT}} 
here, because otherwise it would require an int argument.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Reuse DataOutputBuffer from ColumnIndex

2016-07-08 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk ab2f74404 -> ae4d705db


Reuse DataOutputBuffer from ColumnIndex

patch by Robert Stupp; reviewed by T Jake Luciani for CASSANDRA-11970


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae4d705d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae4d705d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae4d705d

Branch: refs/heads/trunk
Commit: ae4d705db38b713400292fc46ae0858fb0545fe3
Parents: ab2f744
Author: Robert Stupp 
Authored: Fri Jul 8 20:04:23 2016 +0200
Committer: Robert Stupp 
Committed: Fri Jul 8 20:04:23 2016 +0200

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/db/ColumnIndex.java   | 16 +++-
 .../apache/cassandra/io/util/DataOutputBuffer.java  |  5 +
 3 files changed, 21 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae4d705d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b65aad1..690b1d6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Reuse DataOutputBuffer from ColumnIndex (CASSANDRA-11970)
  * Remove DatabaseDescriptor dependency from SegmentedFile (CASSANDRA-11580)
  * Add supplied username to authentication error messages (CASSANDRA-12076)
  * Remove pre-startup check for open JMX port (CASSANDRA-12074)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae4d705d/src/java/org/apache/cassandra/db/ColumnIndex.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnIndex.java 
b/src/java/org/apache/cassandra/db/ColumnIndex.java
index 2e7a2ee..9cea084 100644
--- a/src/java/org/apache/cassandra/db/ColumnIndex.java
+++ b/src/java/org/apache/cassandra/db/ColumnIndex.java
@@ -48,6 +48,8 @@ public class ColumnIndex
 // used, until the row-index-entry reaches config 
column_index_cache_size_in_kb
 private final List indexSamples = new ArrayList<>();
 
+private DataOutputBuffer reusableBuffer;
+
 public int columnIndexCount;
 private int[] indexOffsets;
 
@@ -95,6 +97,8 @@ public class ColumnIndex
 this.firstClustering = null;
 this.lastClustering = null;
 this.openMarker = null;
+if (this.buffer != null)
+this.reusableBuffer = this.buffer;
 this.buffer = null;
 }
 
@@ -195,7 +199,7 @@ public class ColumnIndex
 indexSamplesSerializedSize += 
idxSerializer.serializedSize(cIndexInfo);
 if (indexSamplesSerializedSize + columnIndexCount * 
TypeSizes.sizeof(0) > DatabaseDescriptor.getColumnIndexCacheSize())
 {
-buffer = new 
DataOutputBuffer(DatabaseDescriptor.getColumnIndexCacheSize() * 2);
+buffer = useBuffer();
 for (IndexInfo indexSample : indexSamples)
 {
 idxSerializer.serialize(indexSample, buffer);
@@ -215,6 +219,16 @@ public class ColumnIndex
 firstClustering = null;
 }
 
+private DataOutputBuffer useBuffer()
+{
+if (reusableBuffer != null) {
+buffer = reusableBuffer;
+buffer.clear();
+}
+// don't use the standard RECYCLER as that only recycles up to 1MB and 
requires proper cleanup
+return new 
DataOutputBuffer(DatabaseDescriptor.getColumnIndexCacheSize() * 2);
+}
+
 private void add(Unfiltered unfiltered) throws IOException
 {
 long pos = currentPosition();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae4d705d/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 
b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
index f08b48f..cc42c66 100644
--- a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
+++ b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
@@ -175,6 +175,11 @@ public class DataOutputBuffer extends 
BufferedDataOutputStreamPlus
 return new GrowingChannel();
 }
 
+public void clear()
+{
+buffer.clear();
+}
+
 @VisibleForTesting
 final class GrowingChannel implements WritableByteChannel
 {



[jira] [Updated] (CASSANDRA-11970) Reuse DataOutputBuffer from ColumnIndex

2016-07-08 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11970:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.10
   Status: Resolved  (was: Ready to Commit)

Thanks!
Committed as 
[ae4d705db38b713400292fc46ae0858fb0545fe3|https://github.com/apache/cassandra/commit/ae4d705db38b713400292fc46ae0858fb0545fe3]
 to [trunk|https://github.com/apache/cassandra/tree/trunk]


> Reuse DataOutputBuffer from ColumnIndex
> ---
>
> Key: CASSANDRA-11970
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11970
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.10
>
>
> With a simple change, the {{DataOutputBuffer}} used in {{ColumnIndex}} can be 
> reused. This saves a couple of (larger) object allocations.
> (Will provide a patch soon)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368069#comment-15368069
 ] 

Jeff Jirsa commented on CASSANDRA-12153:


Any reason not to do the same with:

https://github.com/thobbs/cassandra/blob/194b56b3c604ed4d6a5a56dcca8c613be13cb8ee/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java#L155

https://github.com/thobbs/cassandra/blob/194b56b3c604ed4d6a5a56dcca8c613be13cb8ee/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java#L166

Then remove {{RestrictionSet stream()}} to discourage this from being 
reintroduced?


> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8831) Create a system table to expose prepared statements

2016-07-08 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368070#comment-15368070
 ] 

Robert Stupp commented on CASSANDRA-8831:
-

Huh! That's strange - let me check.

> Create a system table to expose prepared statements
> ---
>
> Key: CASSANDRA-8831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8831
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: client-impacting, docs-impacting
> Fix For: 3.x
>
>
> Because drivers abstract from users the handling of up/down nodes, they have 
> to deal with the fact that when a node is restarted (or joins), it won't know 
> any prepared statements. Drivers could somewhat ignore that problem and wait 
> for a query to return an error (that the statement is unknown to the node) to 
> re-prepare the query on that node, but it's relatively inefficient because 
> every time a node comes back up, you'll get bad latency spikes due to some 
> queries first failing, then being re-prepared, and only then being executed. 
> So instead, drivers (at least the Java driver, but I believe others do as 
> well) pro-actively re-prepare statements when a node comes up. That solves the 
> latency problem, but currently every driver instance blindly re-prepares all 
> statements, meaning that in a large cluster with many clients there is a lot 
> of duplication of work (it would be enough for a single client to prepare the 
> statements) and a bigger than necessary load on the node that started.
> An idea to solve this is to have a (cheap) way for clients to check if some 
> statements are prepared on the node. There are different options to provide 
> that, but what I'd suggest is to add a system table to expose the (cached) 
> prepared statements because:
> # it's reasonably straightforward to implement: we just add a line to the 
> table when a statement is prepared and remove it when it's evicted (we 
> already have eviction listeners). We'd also truncate the table on startup, 
> but that's easy enough. We can even switch it to a "virtual table" if/when 
> CASSANDRA-7622 lands, but it's trivial to do with a normal table in the 
> meantime.
> # it doesn't require a change to the protocol or something like that. It 
> could even be done in 2.1 if we wish to.
> # exposing prepared statements feels like a genuinely useful information to 
> have (outside of the problem exposed here that is), if only for 
> debugging/educational purposes.
> The exposed table could look something like:
> {noformat}
> CREATE TABLE system.prepared_statements (
>     keyspace_name text,
>     table_name text,
>     prepared_id blob,
>     query_string text,
>     PRIMARY KEY (keyspace_name, table_name, prepared_id)
> )
> {noformat}
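If such a table existed, the driver-side dedup is a straightforward set difference. A minimal sketch, assuming rows of (prepared_id, query_string) have been read from the proposed table (pure-Python simulation; no actual Cassandra connection):

```python
# Sketch: a driver re-prepares only the statements the node is missing,
# given rows from the proposed system.prepared_statements table.
# Hypothetical data; the real rows would come from a SELECT on the node.

def statements_to_reprepare(client_cache, node_rows):
    """client_cache: dict mapping prepared_id -> query_string known locally.
    node_rows: iterable of (prepared_id, query_string) read from the node."""
    on_node = {prepared_id for prepared_id, _ in node_rows}
    # Only statements the node does not already have need re-preparing.
    return {pid: q for pid, q in client_cache.items() if pid not in on_node}

client_cache = {b'\x01': 'SELECT * FROM ks.t WHERE k = ?',
                b'\x02': 'INSERT INTO ks.t (k, v) VALUES (?, ?)'}
node_rows = [(b'\x01', 'SELECT * FROM ks.t WHERE k = ?')]
print(statements_to_reprepare(client_cache, node_rows))
```

With the node already holding statement 0x01, only 0x02 would be re-prepared, which is the load reduction the ticket is after.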



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12153:

Reviewer: Benjamin Lerer
  Status: Patch Available  (was: Open)

Patch and pending CI runs:

||branch||testall||dtest||
|[CASSANDRA-12153|https://github.com/thobbs/cassandra/tree/CASSANDRA-12153]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12153-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12153-dtest]|

I also replaced {{hasOnlyEqualityRestrictions()}} with a simple for-loop while 
I was at it.
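The actual fix is in Cassandra's Java code, but the pattern generalizes: allocating an intermediate collection just to answer a yes/no question is wasted work, while a plain loop short-circuits without allocating. An illustrative Python analogue (names are made up for the example):

```python
# Illustrative only - not Cassandra's code. Both functions answer
# "is there an IN restriction?", but the first builds a throwaway set
# (analogous to the LinkedHashSet + streams version) while the second
# is the simple short-circuiting loop the patch switches to.

def has_in_via_set(restrictions):
    return 'IN' in set(restrictions)      # allocates an intermediate set

def has_in_via_loop(restrictions):
    for r in restrictions:                # no intermediate collection
        if r == 'IN':
            return True
    return False

assert has_in_via_set(['EQ', 'IN']) == has_in_via_loop(['EQ', 'IN'])
assert has_in_via_loop(['EQ', 'EQ']) is False
```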

> RestrictionSet.hasIN() is slow
> --
>
> Key: CASSANDRA-12153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.x
>
>
> While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
> {{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It 
> looks like it's mostly slow because it creates a new LinkedHashSet (which is 
> expensive to init) and uses streams.  This can be replaced with a simple for 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12153) RestrictionSet.hasIN() is slow

2016-07-08 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-12153:
---

 Summary: RestrictionSet.hasIN() is slow
 Key: CASSANDRA-12153
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12153
 Project: Cassandra
  Issue Type: Improvement
  Components: Coordination
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 3.x


While profiling local in-memory reads for CASSANDRA-10993, I noticed that 
{{RestrictionSet.hasIN()}} was responsible for about 1% of the time.  It looks 
like it's mostly slow because it creates a new LinkedHashSet (which is 
expensive to init) and uses streams.  This can be replaced with a simple for 
loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12129) deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing dependency

2016-07-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368023#comment-15368023
 ] 

Michael Shuler commented on CASSANDRA-12129:


Just a little further help - the dependencies for python-support are minimal.  
{{apt-get install}} those deps, if needed.  Here's where you can grab the 
latest Ubuntu release of python-support, if you need to install an older 
version of Cassandra on Ubuntu 16.04:

http://packages.ubuntu.com/wily/python-support

> deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing 
> dependency
> --
>
> Key: CASSANDRA-12129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 16.04 (Xenial)
>Reporter: Stuart Bishop
>Assignee: Michael Shuler
> Fix For: 2.1.15, 2.2.7, 3.0.6, 3.6
>
>
> The deb packages depend on python-support, which no longer exists in Ubuntu 
> 16.04 (xenial).
> $ sudo apt install cassandra
> Reading package lists... Done
> Building dependency tree   
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra : Depends: python-support (>= 0.90.0) but it is not installable
>  Recommends: ntp but it is not going to be installed or
>  time-daemon
> E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12129) deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing dependency

2016-07-08 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-12129.

   Resolution: Not A Problem
Fix Version/s: 2.1.15
   2.2.7
   3.0.6
   3.6

> deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing 
> dependency
> --
>
> Key: CASSANDRA-12129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 16.04 (Xenial)
>Reporter: Stuart Bishop
>Assignee: Michael Shuler
> Fix For: 3.6, 3.0.6, 2.2.7, 2.1.15
>
>
> The deb packages depend on python-support, which no longer exists in Ubuntu 
> 16.04 (xenial).
> $ sudo apt install cassandra
> Reading package lists... Done
> Building dependency tree   
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra : Depends: python-support (>= 0.90.0) but it is not installable
>  Recommends: ntp but it is not going to be installed or
>  time-daemon
> E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12129) deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing dependency

2016-07-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367992#comment-15367992
 ] 

Michael Shuler commented on CASSANDRA-12129:


The CASSANDRA-10853 migration from python-support to dh_python2 was fixed for:

Fix Version/s:
2.1.15, 2.2.7, 3.0.6, 3.6

You didn't mention which version you were attempting to install, but the ones 
above should work fine on Ubuntu 16.04. Older Cassandra debs than the above 
will indeed have this problem, but it's probably possible to grab 
python-support (and its deps) from a previous Ubuntu release, if you must.

Please reopen this ticket and let us know if the above versions or greater 
don't install cleanly!

> deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing 
> dependency
> --
>
> Key: CASSANDRA-12129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 16.04 (Xenial)
>Reporter: Stuart Bishop
>Assignee: Michael Shuler
>
> The deb packages depend on python-support, which no longer exists in Ubuntu 
> 16.04 (xenial).
> $ sudo apt install cassandra
> Reading package lists... Done
> Building dependency tree   
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra : Depends: python-support (>= 0.90.0) but it is not installable
>  Recommends: ntp but it is not going to be installed or
>  time-daemon
> E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-08 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367984#comment-15367984
 ] 

Jim Witschey commented on CASSANDRA-11895:
--

I don't see why using {{setdefaultencoding}} would change things, but I don't 
know a lot about Python's encoding handling. So, worth a try?
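For background on why {{setdefaultencoding}} comes up at all: in Python 2, mixing {{str}} and {{unicode}} triggered an implicit decode using {{sys.getdefaultencoding()}} (usually 'ascii'), which is the failure mode that hack papers over. Python 3 removed the implicit coercion entirely. A small sketch of the two decode paths (runs on Python 3; the implicit-coercion behavior itself is Python 2 only):

```python
# Sketch: the 'ascii' default-encoding failure mode vs. an explicit decode.
# In Python 2 the first path happened implicitly when str met unicode;
# here we trigger it explicitly for illustration.

data = 'dáta'.encode('utf-8')   # non-ASCII bytes, e.g. from a server error

try:
    data.decode('ascii')        # what the implicit coercion amounted to
except UnicodeDecodeError as e:
    print('ascii decode failed:', e.reason)

print(data.decode('utf-8'))     # the explicit, correct decode
```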

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367954#comment-15367954
 ] 

Aleksey Yeschenko commented on CASSANDRA-12150:
---

I'm not sure how that check, or the concept of cql version, is really useful at 
all, to be honest. Definitely for cqlsh.

> cqlsh does not automatically downgrade CQL version
> --
>
> Key: CASSANDRA-12150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Yusuke Takata
>Priority: Minor
>  Labels: cqlsh
> Attachments: patch.txt
>
>
> Cassandra drivers such as the Python driver can automatically connect with a 
> supported version, 
> but I found that cqlsh does not automatically downgrade the CQL version, as 
> shown below.
> {code}
> $ cqlsh
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> ProtocolError("cql_version '3.4.2' is not supported by remote (w/ native 
> protocol). Supported versions: [u'3.4.0']",)})
> {code}
> I think that the function is useful for cqlsh too. 
> Could someone review the attached patch?
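The negotiation step the patch presumably adds can be sketched as a pure function: given the versions the server reports as supported, pick the highest one that does not exceed what was requested (a hedged sketch of the idea only, not the attached patch):

```python
# Sketch: choose a CQL version to retry with, given the server's
# "Supported versions" list from the ProtocolError above.

def pick_cql_version(requested, supported):
    as_tuple = lambda v: tuple(int(x) for x in v.split('.'))
    candidates = [v for v in supported if as_tuple(v) <= as_tuple(requested)]
    if not candidates:
        raise ValueError('no supported CQL version <= %s' % requested)
    # Highest supported version not above the requested one.
    return max(candidates, key=as_tuple)

print(pick_cql_version('3.4.2', ['3.4.0']))   # -> 3.4.0
```

In the error from the report, cqlsh asked for 3.4.2 and the node supported only 3.4.0, so the downgrade target would be 3.4.0.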



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11887) Duplicate rows after a 2.2.5 to 3.0.4 migration

2016-07-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367907#comment-15367907
 ] 

Tyler Hobbs commented on CASSANDRA-11887:
-

I haven't looked closely at this issue, but this could be related to 
CASSANDRA-12144.

> Duplicate rows after a 2.2.5 to 3.0.4 migration
> ---
>
> Key: CASSANDRA-11887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11887
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Julien Anguenot
>Priority: Blocker
> Fix For: 3.0.x
>
>
> After migrating from 2.2.5 to 3.0.4, some tables seem to carry duplicate 
> primary keys.
> Below is an example. Note: repair / scrub of such a table do not seem to fix 
> nor indicate any issues.
> *Table definition*:
> {code}
> CREATE TABLE core.edge_ipsec_vpn_service (
> edge_uuid text PRIMARY KEY,
> enabled boolean,
> endpoints set<frozen<edge_ipsec_vpn_endpoint>>,
> tunnels set<frozen<edge_ipsec_vpn_tunnel>>
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> *UDTs:*
> {code}
> CREATE TYPE core.edge_ipsec_vpn_endpoint (
> network text,
> public_ip text
> );
> CREATE TYPE core.edge_ipsec_vpn_tunnel (
> name text,
> description text,
> peer_ip_address text,
> peer_id text,
> local_ip_address text,
> local_id text,
> local_subnets frozen<set<frozen<edge_ipsec_vpn_subnet>>>,
> peer_subnets frozen<set<frozen<edge_ipsec_vpn_subnet>>>,
> shared_secret text,
> shared_secret_encrypted boolean,
> encryption_protocol text,
> mtu int,
> enabled boolean,
> operational boolean,
> error_details text,
> vpn_peer frozen<edge_ipsec_vpn_peer>
> );
> CREATE TYPE core.edge_ipsec_vpn_subnet (
> name text,
> gateway text,
> netmask text
> );
> CREATE TYPE core.edge_ipsec_vpn_peer (
> type text,
> id text,
> name text,
> vcd_url text,
> vcd_org text,
> vcd_username text
> );
> {code}
> sstabledump extract (IP addresses hidden, as well as secrets)
> {code}
> [...]
>  {
> "partition" : {
>   "key" : [ "84d567cc-0165-4e64-ab97-3a9d06370ba9" ],
>   "position" : 131146
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 131236,
> "liveness_info" : { "tstamp" : "2016-05-06T17:07:15.416003Z" },
> "cells" : [
>   { "name" : "enabled", "value" : "true" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "" }
> ]
>   },
>   {
> "type" : "row",
> "position" : 131597,
> "cells" : [
>   { "name" : "endpoints", "path" : [ "XXX" ], "value" : "", "tstamp" 
> : "2016-03-29T08:05:38.297015Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T08:05:38.297015Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-14T18:05:07.262001Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T08:05:38.297015Z" }
> ]
>   },
>   {
> "type" : "row",
> "position" : 133644,
> "cells" : [
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T07:05:27.213013Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4.7:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T07:05:27.213013Z" }
> ]
>   }
> ]
>   },
> [...]
> [...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Michael Frisch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367881#comment-15367881
 ] 

Michael Frisch edited comment on CASSANDRA-11978 at 7/8/16 4:03 PM:


We haven't tried to bootstrap a new node, this occurred 100% of the time when 
trying to do a repair in our production environment.

Worth noting: This cluster has been around since C* version 0.8.6. All sstables 
are the current version.


was (Author: blafrisch):
We haven't tried to bootstrap a new node, this occurred 100% of the time when 
trying to do a repair in our production environment.

Worth noting: This cluster has been around since C* version 0.8.6.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.
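The confusion is easy to reproduce outside Cassandra: if the keyspace name is inferred from path components, resolving the symlink changes the answer. A minimal sketch (illustrative only; this is not Descriptor.fromFilename itself, and the parent-directory rule is an assumption for the demo):

```python
# Reproduce the symlink path confusion: infer "keyspace" from the
# parent directory of the CF path, before and after symlink resolution.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'AnotherDisk', 'CFName'))
os.makedirs(os.path.join(root, 'Keyspace'))
os.symlink(os.path.join(root, 'AnotherDisk', 'CFName'),
           os.path.join(root, 'Keyspace', 'CFName'))

logical = os.path.join(root, 'Keyspace', 'CFName')
resolved = os.path.realpath(logical)          # follows the symlink

def infer_keyspace(path):
    return path.split(os.sep)[-2]             # parent dir = "keyspace"

print(infer_keyspace(logical))    # Keyspace (correct)
print(infer_keyspace(resolved))   # AnotherDisk (the reported bug)
```

Using the logical path (or, as the reporter did, asking the ColumnFamilyStore for its keyspace and CF names directly) sidesteps the resolution entirely.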



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-07-08 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367886#comment-15367886
 ] 

Tyler Hobbs commented on CASSANDRA-10993:
-

[~norman] as part of the move to an event-driven architecture, we need to rely 
heavily on an event loop.  Currently, for prototyping purposes, we've created 
our own EventLoop class and tied one instance of it to each netty EventLoop. 
Our EventLoop's {{cycle}} method is added as a Task to the netty EventLoop, and 
it re-adds itself every time it runs.  So, each time the netty EventLoop 
cycles, our loop gets cycled as well.

The problem is that the profiler shows a large portion of time is spent in 
{{MpscQueue}} methods.  The immediate goal is to reduce this cost by switching 
to another data structure (in particular, one that doesn't need to be 
threadsafe).  Eventually, we may also want to have more control over how much 
time is spent on handling IO events vs processing tasks (compared to what 
{{ioRatio}} offers), but we haven't thought too much about this yet.
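The self-rescheduling cycle described above can be sketched with any event loop; a rough asyncio analogue of the Netty arrangement (illustrative only, with a fixed cycle count so it terminates):

```python
# Sketch: a cycle() task that re-adds itself to the same event loop each
# iteration, so it runs once per loop cycle alongside IO handling -
# analogous to adding an EventLoop.cycle Task to a netty EventLoop.
import asyncio

cycles = []

def cycle(loop, remaining):
    cycles.append(len(cycles))      # stand-in for draining a task queue
    if remaining > 1:
        loop.call_soon(cycle, loop, remaining - 1)  # re-add ourselves
    else:
        loop.stop()

loop = asyncio.new_event_loop()
loop.call_soon(cycle, loop, 3)
loop.run_forever()
loop.close()
print(cycles)   # -> [0, 1, 2]
```

The cost concern in the comment is about the queue that hands tasks to the loop, which is exactly the part this sketch leaves as a plain list.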

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> forbidding us to alter, or add any non-{{default}} interface methods to those 
> extension points, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Michael Frisch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367881#comment-15367881
 ] 

Michael Frisch edited comment on CASSANDRA-11978 at 7/8/16 4:02 PM:


We haven't tried to bootstrap a new node, this occurred 100% of the time when 
trying to do a repair in our production environment.

Worth noting: This cluster has been around since C* version 0.8.6.


was (Author: blafrisch):
We haven't tried to bootstrap a new node, this occurred 100% of the time when 
trying to do a repair in our production environment.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Michael Frisch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367881#comment-15367881
 ] 

Michael Frisch commented on CASSANDRA-11978:


We haven't tried to bootstrap a new node, this occurred 100% of the time when 
trying to do a repair in our production environment.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367875#comment-15367875
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

So is this happening when you bootstrap a new node in a cluster, and also when 
you run "nodetool repair"?

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12152) Unknown exception caught while attempting to update MaterializedView: AssertionError: Flags = 128

2016-07-08 Thread Nilson Pontello (JIRA)
Nilson Pontello created CASSANDRA-12152:
---

 Summary: Unknown exception caught while attempting to update 
MaterializedView: AssertionError: Flags = 128
 Key: CASSANDRA-12152
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12152
 Project: Cassandra
  Issue Type: Bug
Reporter: Nilson Pontello


I have a single DC with 3 Cassandra nodes. After a restart today, none of them 
were capable of processing the commitlog while starting up. The exception 
doesn't contain enough information about what is going on; please check below:

{code}
ERROR [SharedPool-Worker-21] 2016-07-08 12:42:12,866 Keyspace.java:521 - 
Unknown exception caught while attempting to update MaterializedView! 
data_monitor.user_timeline

java.lang.AssertionError: Flags = 128
 at 
org.apache.cassandra.db.ClusteringPrefix$Deserializer.prepare(ClusteringPrefix.java:421)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.prepareNext(UnfilteredDeserializer.java:172)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.UnfilteredDeserializer$CurrentDeserializer.hasNext(UnfilteredDeserializer.java:153)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.handlePreSliceData(SSTableIterator.java:96)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.columniterator.SSTableIterator$ForwardReader.hasNextInternal(SSTableIterator.java:141)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.columniterator.AbstractSSTableIterator$Reader.hasNext(AbstractSSTableIterator.java:354)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.columniterator.AbstractSSTableIterator.hasNext(AbstractSSTableIterator.java:229)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.columniterator.SSTableIterator.hasNext(SSTableIterator.java:32)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:100)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:93)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:25)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:419)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:279)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:70)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.SinglePartitionReadCommand.withSSTablesIterated(SinglePartitionReadCommand.java:637)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:586)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:463)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:325)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:366) 
~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.db.ReadCommand.executeInternal(ReadCommand.java:397) 
~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPageInternal(AbstractQueryPager.java:77)
 ~[apache-cassandra-3.5.jar:3.5]
 at 
org.apache.cassandra.service.pager.SinglePartitionPager.fetchPageInternal(SinglePartitionPager.java:34)
 ~[apache-cassandra-3.5.jar:3.5]
 at 

[jira] [Commented] (CASSANDRA-12144) Undeletable rows after upgrading from 2.2.4 to 3.0.7

2016-07-08 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367832#comment-15367832
 ] 

Alex Petrov commented on CASSANDRA-12144:
-

The {{2.x}} storage format doesn't guarantee that there'll be a single range 
tombstone, or that tombstones will be in a certain order relative to the 
cells. Under some circumstances (which I unfortunately could not reproduce), we 
ended up in a situation where we had multiple tombstones, followed by the row:

{code}
[
{"key": "1",
 "cells": [["12345:_","12345:!",,"t",], (*1)
   ["12345:_","12345:!",,"t",], (*2)
   ["12345:","",],
   ["12345:c1","xx",],
   ["12345:c2","yy",]]}
]
{code}

This resulted in two rows: one tombstone row made from {{(*1)}}, and a second, 
live row made from the second tombstone and the cells following it (since the 
deletion time was such that the cells should be live).

This produced two rows in the new storage format after {{sstableupgrade}}. 
During iteration, the first (tombstone) row was read out; but since the second 
row was also read out, and since the rest of the merge iterators (a 
superseding delete might have been in a memtable, or any other sstable) were 
exhausted, it was treated as a completely normal live row. It was undeletable 
since all deletes would only affect the tombstone, whose clustering matched.
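The merge rule at the heart of this is simple: a cell survives a deletion only if its timestamp is strictly newer than the deletion's. A toy model (not Cassandra's code; timestamps are made up) shows why cells following an older tombstone stay live:

```python
# Toy model of tombstone shadowing: cells with timestamps newer than the
# deletion timestamp remain live; older or equal cells are shadowed.

def live_cells(cells, deletion_ts):
    """cells: dict of name -> (value, write_timestamp)."""
    return {name: (val, ts) for name, (val, ts) in cells.items()
            if ts > deletion_ts}

cells = {'c1': ('xx', 200), 'c2': ('yy', 200)}
print(live_cells(cells, 100))   # tombstone older than cells: both survive
print(live_cells(cells, 300))   # newer tombstone: everything shadowed
```

In the bug, later deletes landed on the tombstone row's clustering, never on the spurious live row, so its cells were never shadowed.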

I've made a patch that captures this edge case.

|[3.0|https://github.com/ifesdjeen/cassandra/tree/12144-3.0] 
|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12144-3.0-testall/]
 
|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12144-3.0-dtest/]
 |
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12144-trunk] 
|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12144-trunk-testall/]
 
|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12144-trunk-dtest/]
 |

I'll run CI and submit the patch if it's successful (I'm particularly 
interested in the upgrade dtests). 

After talking to [~slebresne], we might also have to provide a fix for the 
scrub tool so that it can detect and repair such cases.

Many thanks to [~stanislav] for providing the information required to track 
this issue down.

> Undeletable rows after upgrading from 2.2.4 to 3.0.7
> 
>
> Key: CASSANDRA-12144
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12144
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stanislav Vishnevskiy
>Assignee: Alex Petrov
>
> We upgraded our cluster today and now have some rows that refuse to delete.
> Here are some example traces.
> https://gist.github.com/vishnevskiy/36aa18c468344ea22d14f9fb9b99171d
> Even weirder.
> Updating the row and querying it back results in 2 rows even though the id is 
> the clustering key.
> {noformat}
> user_id| id | since| type
> ---++--+--
> 116138050710536192 | 153047019424972800 | null |0
> 116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> And then deleting it again only removes the new one.
> {noformat}
> cqlsh:discord_relationships> DELETE FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
> cqlsh:discord_relationships> SELECT * FROM relationships WHERE user_id = 
> 116138050710536192 AND id = 153047019424972800;
>  user_id| id | since| type
> ++--+--
>  116138050710536192 | 153047019424972800 | 2016-05-30 14:53:08+ |2
> {noformat}
> We tried repairing, compacting, and scrubbing. No luck.
> Not sure what to do. Is anyone aware of this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367801#comment-15367801
 ] 

Joshua McKenzie commented on CASSANDRA-12018:
---------------------------------------------

bq. I would remove the secondary path here – this could lead to different 
results between the platforms as one will descend into subdirectories and the 
other won't. My problem was with doing both listFiles and walkFileTree, which 
just doing the walk fixes in the better way (less object churn). You can also 
remove the todo.
Todo removed, and I'm fine normalizing on a single path. While it's slower on 
Linux than the one-liner, we're splitting hairs at 500 usec worst-case vs. 400 
in a local microbenchmark on a laptop.
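For reference, the two directory-size strategies being normalized can be sketched as follows (a minimal sketch, not the actual DirectorySizeCalculator): the listFiles one-liner only counts the top level, while walkFileTree descends into subdirectories, which is why mixing the two gave platform-dependent results.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Arrays;

public class DirectorySizeSketch
{
    // The one-liner: top-level files only, no recursion.
    static long flatSize(File dir)
    {
        return Arrays.stream(dir.listFiles()).filter(File::isFile).mapToLong(File::length).sum();
    }

    // walkFileTree: descends into subdirectories, so the two can disagree
    // whenever the directory contains subdirectories with files in them.
    static long recursiveSize(Path dir) throws IOException
    {
        long[] total = new long[1]; // plain long is fine: the walk is single-threaded
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>()
        {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
            {
                total[0] += attrs.size();
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }
}
```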

bq. The size and unflushedCDCSize members do not need to be atomic, but they 
should be volatile for writes to be made visible to threads other than the one 
writing. tempSize isn't informative enough. Maybe sizeInProgress, 
incompleteSize or something similar?
Renamed, and the lack of volatile was an oversight on my part. Thanks for 
catching that.
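The visibility point can be sketched as follows (hypothetical field names): a single writer thread never needs AtomicLong's compare-and-swap machinery, but without volatile its writes may never become visible to reader threads.

```java
// Sketch of the single-writer / many-reader pattern discussed above
// (hypothetical names, not the actual commit log segment class).
public class SegmentSizeSketch
{
    // Only the writer thread updates these, but readers on other threads must
    // observe the updates, so the fields are volatile rather than AtomicLong:
    // a lone writer never needs compare-and-swap.
    private volatile long size;
    private volatile long sizeInProgress;

    void onAllocate(long bytes)
    {
        sizeInProgress += bytes;   // safe only because there is a single writer
    }

    void onFlushComplete()
    {
        size += sizeInProgress;
        sizeInProgress = 0;
    }

    long totalSize() { return size + sizeInProgress; }  // any thread may read
}
```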

bq. We still want the AND of global and section-specific flags, just not to 
carry it over from one section to the other. I.e. 
statusTracker.tolerateErrorsInSection = tolerateTruncation & 
syncSegment.toleratesErrorsInSection (or make sure the segment reader does it 
when setting tolerateErrorsInSection). 
Changed and commented.
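The difference between carrying the flag over with {{&=}} and recomputing the AND per section can be sketched as (hypothetical names):

```java
// Sketch of the two flag-combination strategies discussed above
// (hypothetical names, not the actual CommitLogReader code).
public class ToleranceFlagSketch
{
    // Buggy form: the accumulator survives the loop, so one intolerant
    // section poisons every section after it.
    static boolean[] accumulated(boolean global, boolean[] perSection)
    {
        boolean[] out = new boolean[perSection.length];
        boolean tolerate = global;
        for (int i = 0; i < perSection.length; i++)
        {
            tolerate &= perSection[i];
            out[i] = tolerate;
        }
        return out;
    }

    // Fixed form: each section gets the AND of the global flag and its own
    // flag, independent of previous iterations.
    static boolean[] perSection(boolean global, boolean[] perSection)
    {
        boolean[] out = new boolean[perSection.length];
        for (int i = 0; i < perSection.length; i++)
            out[i] = global & perSection[i];
        return out;
    }
}
```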

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SimpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12018:

Status: Patch Available  (was: Open)

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SimpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12095) [windows] dtest failure in repair_tests.repair_test.TestRepair.no_anticompaction_after_hostspecific_repair_test

2016-07-08 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367790#comment-15367790
 ] 

Joshua McKenzie commented on CASSANDRA-12095:
---------------------------------------------

Assigning to [~blambov], as I suspect this will be resolved by 11960, which 
you're reviewing. If not, and it turns out to be Windows-specific, feel free to 
kick it back to me.

> [windows] dtest failure in 
> repair_tests.repair_test.TestRepair.no_anticompaction_after_hostspecific_repair_test
> ---
>
> Key: CASSANDRA-12095
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12095
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Branimir Lambov
>  Labels: dtest, windows
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_win32/447/testReport/repair_tests.repair_test/TestRepair/no_anticompaction_after_hostspecific_repair_test
> Failed on CassCI build trunk_dtest_win32 #447
> {code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [HintsDispatcher:2] 2016-06-24 16:30:04,790 CassandraDaemon.java:219 - 
> Exception in thread Thread[HintsDispatcher:2,1,main]
> java.lang.UnsupportedOperationException: Hints are not seekable.
>   at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:79) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:257)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12095) [windows] dtest failure in repair_tests.repair_test.TestRepair.no_anticompaction_after_hostspecific_repair_test

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12095:

Assignee: Branimir Lambov

> [windows] dtest failure in 
> repair_tests.repair_test.TestRepair.no_anticompaction_after_hostspecific_repair_test
> ---
>
> Key: CASSANDRA-12095
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12095
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Branimir Lambov
>  Labels: dtest, windows
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest_win32/447/testReport/repair_tests.repair_test/TestRepair/no_anticompaction_after_hostspecific_repair_test
> Failed on CassCI build trunk_dtest_win32 #447
> {code}
> Standard Output
> Unexpected error in node2 log, error: 
> ERROR [HintsDispatcher:2] 2016-06-24 16:30:04,790 CassandraDaemon.java:219 - 
> Exception in thread Thread[HintsDispatcher:2,1,main]
> java.lang.UnsupportedOperationException: Hints are not seekable.
>   at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:79) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:257)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12150:

Reviewer: Stefania

> cqlsh does not automatically downgrade CQL version
> --
>
> Key: CASSANDRA-12150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Yusuke Takata
>Priority: Minor
>  Labels: cqlsh
> Attachments: patch.txt
>
>
> Cassandra drivers such as the Python driver can automatically negotiate a 
> supported version, but I found that cqlsh does not automatically downgrade 
> the CQL version, as shown below.
> {code}
> $ cqlsh
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> ProtocolError("cql_version '3.4.2' is not supported by remote (w/ native 
> protocol). Supported versions: [u'3.4.0']",)})
> {code}
> I think this capability would be useful for cqlsh too. 
> Could someone review the attached patch?
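The driver behaviour the report asks cqlsh to mirror amounts to picking the newest mutually supported version. A generic sketch (hypothetical names — the real negotiation happens inside the Python driver, not in code like this):

```java
import java.util.List;

// Generic version-negotiation sketch with hypothetical names.
public class VersionNegotiationSketch
{
    // Try the client's preferred versions from newest to oldest and settle
    // on the first one the server reports as supported.
    static String negotiate(List<String> clientPreference, List<String> serverSupported)
    {
        for (String v : clientPreference)
            if (serverSupported.contains(v))
                return v;
        throw new IllegalStateException("no common CQL version");
    }
}
```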



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-08 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367787#comment-15367787
 ] 

Philip Thompson commented on CASSANDRA-11895:
---------------------------------------------

I don't see why not. I support doing so.

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12133) Failed to load Java8 implementation ohc-core-j8

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12133:

Reviewer: T Jake Luciani

> Failed to load Java8 implementation ohc-core-j8
> ---
>
> Key: CASSANDRA-12133
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12133
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Ubuntu 14.04, Java 1.8.0_91
>Reporter: Mike
>Assignee: Robert Stupp
>Priority: Trivial
> Fix For: 3.x
>
>
> After enabling row cache in cassandra.yaml by setting row_cache_size_in_mb, I 
> receive this warning in system.log during startup:
> {noformat}
> WARN  [main] 2016-07-05 13:36:14,671 Uns.java:169 - Failed to load Java8 
> implementation ohc-core-j8 : java.lang.NoSuchMethodException: 
> org.caffinitas.ohc.linked.UnsExt8.<init>(java.lang.Class)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12040) If a level compaction fails due to no space it should schedule the next one

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12040:

Reviewer: Marcus Eriksson

>   If a level compaction fails due to no space it should schedule the next one
> -
>
> Key: CASSANDRA-12040
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12040
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
> Attachments: CASSANDRA-12040_3.0.diff, CASSANDRA-12040_trunk.txt
>
>
> If a level compaction fails the space check, it aborts, but the next time 
> compactions are scheduled it will attempt the same one. It should skip it and 
> move on to the next, so it can find smaller compactions to do.
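The proposed behaviour can be sketched as (hypothetical names; not the actual compaction code):

```java
import java.util.List;
import java.util.Optional;

// Sketch of the proposed skip-on-no-space behaviour with hypothetical names.
public class CompactionPickSketch
{
    static class Candidate
    {
        final String name;
        final long requiredBytes;
        Candidate(String name, long requiredBytes) { this.name = name; this.requiredBytes = requiredBytes; }
    }

    // Instead of aborting on the first candidate that fails the space check
    // and retrying the same one next round, skip it and fall through to a
    // smaller candidate that fits.
    static Optional<Candidate> pick(List<Candidate> candidates, long freeBytes)
    {
        for (Candidate c : candidates)
            if (c.requiredBytes <= freeBytes)
                return Optional.of(c);
        return Optional.empty();
    }
}
```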



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11695) Move JMX connection config to cassandra.yaml

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11695:

Reviewer: Sam Tunnicliffe

> Move JMX connection config to cassandra.yaml
> 
>
> Key: CASSANDRA-11695
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11695
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Sam Tunnicliffe
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> Since CASSANDRA-10091, we always construct the JMX connector server 
> programmatically, so we could move its configuration from cassandra-env to 
> yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-4650:
---
Reviewer: T Jake Luciani

> RangeStreamer should be smarter when picking endpoints for streaming in case 
> of N >=3 in each DC.  
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.1.5
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: streaming
> Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The getRangeFetchMap method in RangeStreamer should pick unique nodes to 
> stream data from when the number of replicas in each DC is three or more. 
> When N>=3 in a DC, there are two options for streaming a range. Consider an 
> example of 4 nodes in one datacenter and a replication factor of 3. 
> If a node goes down, it needs to recover 3 ranges of data. With the current 
> code, as few as two nodes could get selected, since it orders nodes by 
> proximity. 
> Ideally we want to select 3 nodes for streaming the data. We can do this by 
> selecting unique nodes for each range. 
> Advantages:
> This will increase the performance of bootstrapping a node and will also put 
> less pressure on the nodes serving the data. 
> Note: This does not apply if N < 3 in each DC, as then it streams data from 
> only 2 nodes. 
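The selection idea can be sketched as (hypothetical types; the real logic lives in RangeStreamer.getRangeFetchMap): for each range, prefer a replica not yet chosen, so with RF=3 the three ranges stream from three different nodes.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of unique-source selection with hypothetical types; the real logic
// lives in RangeStreamer.getRangeFetchMap.
public class UniqueSourceSketch
{
    // For each range, prefer a replica not already chosen for another range;
    // fall back to the closest replica if all have been used.
    static Map<String, String> pickSources(Map<String, List<String>> replicasByRange)
    {
        Map<String, String> sources = new LinkedHashMap<>();
        Set<String> used = new HashSet<>();
        for (Map.Entry<String, List<String>> e : replicasByRange.entrySet())
        {
            String choice = e.getValue().stream()
                             .filter(node -> !used.contains(node))
                             .findFirst()
                             .orElse(e.getValue().get(0)); // replicas assumed proximity-ordered
            used.add(choice);
            sources.put(e.getKey(), choice);
        }
        return sources;
    }
}
```

With 3 ranges each replicated on the same 3 nodes, this spreads the streaming load over all 3 sources instead of concentrating it on the closest one or two.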



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12145) Cassandra Stress histogram log is empty if there's only a single operation

2016-07-08 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12145:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1 committed {{c1dcc9ce46f53dd89b80a08d363d5eacac1b9e23}} 

> Cassandra Stress histogram log is empty if there's only a single operation
> --
>
> Key: CASSANDRA-12145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12145
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>Priority: Minor
> Fix For: 3.9
>
>
> Bug fix is available here:
> https://github.com/nitsanw/cassandra/tree/hdr-logging-bugfix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Fix hdr logging for single operation workloads

2016-07-08 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.9 8475f891c -> c1dcc9ce4
  refs/heads/trunk de86ccf3a -> ab2f74404


Fix hdr logging for single operation workloads

Patch by Nitsan Wakart; reviewed by tjake for CASSANDRA-12145


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1dcc9ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1dcc9ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1dcc9ce

Branch: refs/heads/cassandra-3.9
Commit: c1dcc9ce46f53dd89b80a08d363d5eacac1b9e23
Parents: 8475f89
Author: nitsanw 
Authored: Wed Jul 6 14:45:18 2016 +0200
Committer: T Jake Luciani 
Committed: Fri Jul 8 10:38:40 2016 -0400

--
 CHANGES.txt|  1 +
 .../src/org/apache/cassandra/stress/StressMetrics.java | 13 -
 2 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1dcc9ce/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 34e7587..b094b00 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.9
+ * Fix hdr logging for single operation workloads (CASSANDRA-12145)
  * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073)
  * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 Merged from 3.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1dcc9ce/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java 
b/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
index 668518c..86e9a7a 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
@@ -191,15 +191,18 @@ public class StressMetrics
                 rowRateUncertainty.update(current.adjustedRowRate());
                 if (current.operationCount() != 0)
                 {
-                    if (result.intervals.intervals().size() > 1)
+                    // if there's a single operation we only print the total
+                    final boolean logPerOpSummaryLine = result.intervals.intervals().size() > 1;
+
+                    for (Map.Entry<String, TimingInterval> type : result.intervals.intervals().entrySet())
                     {
-                        for (Map.Entry<String, TimingInterval> type : result.intervals.intervals().entrySet())
+                        final String opName = type.getKey();
+                        final TimingInterval opInterval = type.getValue();
+                        if (logPerOpSummaryLine)
                         {
-                            final String opName = type.getKey();
-                            final TimingInterval opInterval = type.getValue();
                             printRow("", opName, opInterval, timing.getHistory().get(type.getKey()), result.extra, rowRateUncertainty, output);
-                            logHistograms(opName, opInterval);
                         }
+                        logHistograms(opName, opInterval);
                     }
 
                     printRow("", "total", current, history, result.extra, rowRateUncertainty, output);



[2/3] cassandra git commit: Fix hdr logging for single operation workloads

2016-07-08 Thread jake
Fix hdr logging for single operation workloads

Patch by Nitsan Wakart; reviewed by tjake for CASSANDRA-12145


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1dcc9ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1dcc9ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1dcc9ce

Branch: refs/heads/trunk
Commit: c1dcc9ce46f53dd89b80a08d363d5eacac1b9e23
Parents: 8475f89
Author: nitsanw 
Authored: Wed Jul 6 14:45:18 2016 +0200
Committer: T Jake Luciani 
Committed: Fri Jul 8 10:38:40 2016 -0400

--
 CHANGES.txt|  1 +
 .../src/org/apache/cassandra/stress/StressMetrics.java | 13 -
 2 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1dcc9ce/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 34e7587..b094b00 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.9
+ * Fix hdr logging for single operation workloads (CASSANDRA-12145)
  * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073)
  * Increase size of flushExecutor thread pool (CASSANDRA-12071)
 Merged from 3.0:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1dcc9ce/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
--
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java 
b/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
index 668518c..86e9a7a 100644
--- a/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
+++ b/tools/stress/src/org/apache/cassandra/stress/StressMetrics.java
@@ -191,15 +191,18 @@ public class StressMetrics
                 rowRateUncertainty.update(current.adjustedRowRate());
                 if (current.operationCount() != 0)
                 {
-                    if (result.intervals.intervals().size() > 1)
+                    // if there's a single operation we only print the total
+                    final boolean logPerOpSummaryLine = result.intervals.intervals().size() > 1;
+
+                    for (Map.Entry<String, TimingInterval> type : result.intervals.intervals().entrySet())
                     {
-                        for (Map.Entry<String, TimingInterval> type : result.intervals.intervals().entrySet())
+                        final String opName = type.getKey();
+                        final TimingInterval opInterval = type.getValue();
+                        if (logPerOpSummaryLine)
                         {
-                            final String opName = type.getKey();
-                            final TimingInterval opInterval = type.getValue();
                             printRow("", opName, opInterval, timing.getHistory().get(type.getKey()), result.extra, rowRateUncertainty, output);
-                            logHistograms(opName, opInterval);
                         }
+                        logHistograms(opName, opInterval);
                     }
 
                     printRow("", "total", current, history, result.extra, rowRateUncertainty, output);



[3/3] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-07-08 Thread jake
Merge branch 'cassandra-3.9' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab2f7440
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab2f7440
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab2f7440

Branch: refs/heads/trunk
Commit: ab2f744042764c66e8e6afed2418d5fb0caee8d7
Parents: de86ccf c1dcc9c
Author: T Jake Luciani 
Authored: Fri Jul 8 10:45:46 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Jul 8 10:45:46 2016 -0400

--
 CHANGES.txt|  1 +
 .../src/org/apache/cassandra/stress/StressMetrics.java | 13 -
 2 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab2f7440/CHANGES.txt
--
diff --cc CHANGES.txt
index c789b5e,b094b00..b65aad1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.10
 + * Remove DatabaseDescriptor dependency from SegmentedFile (CASSANDRA-11580)
 + * Add supplied username to authentication error messages (CASSANDRA-12076)
 + * Remove pre-startup check for open JMX port (CASSANDRA-12074)
 +
  3.9
+  * Fix hdr logging for single operation workloads (CASSANDRA-12145)
   * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073)
   * Increase size of flushExecutor thread pool (CASSANDRA-12071)
  Merged from 3.0:



[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367753#comment-15367753
 ] 

Michael Shuler commented on CASSANDRA-11895:


http://stackoverflow.com/a/31137935/3033735

Suggests:
{noformat}
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
{noformat}

I do not find any setdefaultencoding() in dtest - should we do this?

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11374) LEAK DETECTED during repair

2016-07-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367754#comment-15367754
 ] 

Marcus Eriksson commented on CASSANDRA-11374:
---------------------------------------------

ping [~anubhavk] - do you still see this? (and what version are you on now?)

> LEAK DETECTED during repair
> ---
>
> Key: CASSANDRA-11374
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11374
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jean-Francois Gosselin
>Assignee: Marcus Eriksson
> Attachments: Leak_Logs_1.zip, Leak_Logs_2.zip
>
>
> When running a range repair we are seeing the following LEAK DETECTED errors:
> {noformat}
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,261 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@5ee90b43) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@367168611:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4ea9d4a7) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1875396681:Memory@[7f34b905fd10..7f34b9060b7a)
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@27a6b614) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@838594402:Memory@[7f34bae11ce0..7f34bae11d84)
>  was not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,263 Ref.java:179 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@64e7b566) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@674656075:Memory@[7f342deab4e0..7f342deb7ce0)
>  was not released before the reference was garbage collected
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11895) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error

2016-07-08 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367745#comment-15367745
 ] 

Michael Shuler commented on CASSANDRA-11895:


This test still failed when swapping around how/when the 
{{PYTHONIOENCODING=utf-8}} env var is set. Not sure what else we might try to 
correct this.

https://cassci.datastax.com/job/cassandra-2.1_dtest/493/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error/

> dtest failure in 
> cqlsh_tests.cqlsh_tests.TestCqlsh.test_unicode_invalid_request_error
> -
>
> Key: CASSANDRA-11895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/470/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_unicode_invalid_request_error
> Failed on CassCI build cassandra-2.1_dtest #470
> This is after the fix for CASSANDRA-11799.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11209) SSTable ancestor leaked reference

2016-07-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-11209.
---------------------------------------------
Resolution: Cannot Reproduce

There have been a bunch of fixes related to incremental repairs in newer 
versions.

If anyone hits this on a recent version, please reopen with details.

> SSTable ancestor leaked reference
> -
>
> Key: CASSANDRA-11209
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11209
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jose Fernandez
>Assignee: Marcus Eriksson
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> We're running a fork of 2.1.13 that adds the TimeWindowCompactionStrategy 
> from [~jjirsa]. We've been running 4 clusters without any issues for many 
> months until a few weeks ago we started scheduling incremental repairs every 
> 24 hours (previously we didn't run any repairs at all).
> Since then we started noticing big discrepancies in the LiveDiskSpaceUsed, 
> TotalDiskSpaceUsed, and actual size of files on disk. The numbers are brought 
> back in sync by restarting the node. We also noticed that when this bug 
> happens there are several ancestors that don't get cleaned up. A restart will 
> queue up a lot of compactions that slowly eat away the ancestors.
> I looked at the code and noticed that we only decrease the LiveTotalDiskUsed 
> metric in the SSTableDeletingTask. Since we have no errors being logged, I'm 
> assuming that for some reason this task is not getting queued up. If I 
> understand correctly this only happens when the reference count for the 
> SStable reaches 0. So this is leading us to believe that something during 
> repairs and/or compactions is causing a reference leak to the ancestor table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11904) Exception in thread Thread[CompactionExecutor:13358,1,main] java.lang.AssertionError: Memory was freed

2016-07-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-11904.
-
Resolution: Cannot Reproduce

OK, closing as Cannot Reproduce for now. If anyone hits this again, please 
reopen and attach logs.

> Exception in thread Thread[CompactionExecutor:13358,1,main] 
> java.lang.AssertionError: Memory was freed
> --
>
> Key: CASSANDRA-11904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11904
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Valentin Martinjuk
>Assignee: Marcus Eriksson
> Attachments: system.log.2016-06-10_0514
>
>
> We have a Cassandra 2.2.5 cluster with two datacenters (3 nodes each).
> We observe the ERRORs below on all nodes. The ERROR is repeated every minute. 
> No complaints from customers so far. Is there any way to fix it without a 
> restart?
> {code}
> ERROR [CompactionExecutor:13996] 2016-05-26 21:20:46,700 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:13996,1,main]
> java.lang.AssertionError: Memory was freed
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at org.apache.cassandra.io.util.Memory.getInt(Memory.java:292) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:148)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.fillTemporaryKey(IndexSummary.java:162)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:121)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1398)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.estimatedKeysForRanges(SSTableReader.java:1354)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:403)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.findDroppableSSTable(LeveledCompactionStrategy.java:412)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:101)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:88)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:250)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_74]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_74]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_74]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_74]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
> ERROR [CompactionExecutor:13996] 2016-05-26 21:21:46,702 
> CassandraDaemon.java:185 - Exception in thread 
> Thread[CompactionExecutor:13996,1,main]
> java.lang.AssertionError: Memory was freed
> at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at org.apache.cassandra.io.util.Memory.getInt(Memory.java:292) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.getPositionInSummary(IndexSummary.java:148)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.fillTemporaryKey(IndexSummary.java:162)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.IndexSummary.binarySearch(IndexSummary.java:121)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getSampleIndexesForRanges(SSTableReader.java:1398)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.estimatedKeysForRanges(SSTableReader.java:1354)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:403)
>  

[jira] [Commented] (CASSANDRA-11914) Provide option for cassandra-stress to dump all settings

2016-07-08 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367734#comment-15367734
 ] 

T Jake Luciani commented on CASSANDRA-11914:


Hi [~slater_ben], sorry for the delay. Good idea.

Would you consider also adding info on the StressProfile, since the column 
distributions etc. all have defaults that are not easily seen/understood? 
Also, the insert distribution is printed at the start of the stress run 
(when you use a yaml); that info could be moved here too.

> Provide option for cassandra-stress to dump all settings
> 
>
> Key: CASSANDRA-11914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11914
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Ben Slater
>Priority: Minor
> Attachments: 11914-trunk.patch
>
>
> cassandra-stress has quite a lot of default settings and settings that are 
> derived as side effects of explicit options. For people learning the tool and 
> saving a clear record of what was run, I think it would be useful if there 
> was an option to have the tool print all its settings at the start of a run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12129) deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing dependency

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12129:

Assignee: Michael Shuler

> deb Packages cannot be installed under Ubuntu 16.04 (xenial) due to missing 
> dependency
> --
>
> Key: CASSANDRA-12129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 16.04 (Xenial)
>Reporter: Stuart Bishop
>Assignee: Michael Shuler
>
> The deb packages depend on python-support, which no longer exists in Ubuntu 
> 16.04 (xenial).
> $ sudo apt install cassandra
> Reading package lists... Done
> Building dependency tree   
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> The following packages have unmet dependencies:
>  cassandra : Depends: python-support (>= 0.90.0) but it is not installable
>  Recommends: ntp but it is not going to be installed or
>  time-daemon
> E: Unable to correct problems, you have held broken packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-08 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367724#comment-15367724
 ] 

T Jake Luciani commented on CASSANDRA-11950:


Yeah, if the buffer isn't copied we can't recycle it. Also, it's calling 
getData(), which returns the entire backing byte array; this should have been 
(getData(), 0, getLength()).

But either way, +1
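The getData()/getLength() distinction can be sketched as follows. This is an illustrative stand-in, not Cassandra's actual DataOutputBuffer: a growable buffer's backing array is usually larger than the bytes written, so passing the whole array leaks unwritten garbage.

```java
import java.util.Arrays;

// Minimal sketch (not the real DataOutputBuffer): getData() exposes the
// whole backing array, so consumers must bound it with getLength().
public class BufferSketch {
    static class GrowableBuffer {
        private byte[] data = new byte[16]; // capacity, grows by doubling
        private int length = 0;             // bytes actually written

        void write(byte b) {
            if (length == data.length)
                data = Arrays.copyOf(data, data.length * 2);
            data[length++] = b;
        }

        byte[] getData() { return data; }   // whole backing array, including unwritten tail
        int getLength()  { return length; } // valid prefix only
    }

    public static void main(String[] args) {
        GrowableBuffer buf = new GrowableBuffer();
        for (int i = 0; i < 5; i++) buf.write((byte) i);
        // Correct: bound the array to the valid prefix
        byte[] valid = Arrays.copyOfRange(buf.getData(), 0, buf.getLength());
        System.out.println("written=" + valid.length + " backing=" + buf.getData().length);
        // prints: written=5 backing=16
    }
}
```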

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 719, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,462 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-4] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-5] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-7] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-6] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-08 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11950:
---
Status: Ready to Commit  (was: Patch Available)

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 719, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,462 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-4] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-5] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-7] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-6] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11950) dtest failure in consistency_test.TestAvailability.test_network_topology_strategy_each_quorum

2016-07-08 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367725#comment-15367725
 ] 

T Jake Luciani commented on CASSANDRA-11950:


And please include it in the 3.8 branch.

> dtest failure in 
> consistency_test.TestAvailability.test_network_topology_strategy_each_quorum
> -
>
> Key: CASSANDRA-11950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11950
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
> node3.log, node3_debug.log, node4.log, node4_debug.log, node5.log, 
> node5_debug.log, node6.log, node6_debug.log, node7.log, node7_debug.log, 
> node8.log, node8_debug.log, node9.log, node9_debug.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/10/testReport/consistency_test/TestAvailability/test_network_topology_strategy_each_quorum
> Failed on CassCI build trunk_large_dtest #10
> Logs are attached.
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 719, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> Standard Output
> Unexpected error in node3 log, error: 
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,460 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,462 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-2] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-3] 2016-06-03 14:25:27,464 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-1] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-4] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-5] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-7] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> ERROR [SharedPool-Worker-6] 2016-06-03 14:25:27,465 Keyspace.java:504 - 
> Attempting to mutate non-existant table 03b14ad0-2997-11e6-b8c7-01c3aea11be7 
> (mytestks.users)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index

2016-07-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367719#comment-15367719
 ] 

Aleksey Yeschenko commented on CASSANDRA-12149:
---

[~xedin] poke ^

> NullPointerException on SELECT with SASI index
> --
>
> Key: CASSANDRA-12149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Andrey Konstantinov
> Attachments: CASSANDRA-12149.txt
>
>
> If I execute the sequence of queries (see the attached file), Cassandra 
> aborts the connection, reporting an NPE on the server side. The SELECT query 
> works without a token range filter, but fails when one is specified. My 
> intent was to issue multiple SELECT queries targeting the same single 
> partition, filtered by a column indexed by SASI, partitioning the results by 
> different token ranges.
> Output from cqlsh on SELECT is the following:
> cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM 
> mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND 
> feature1 > 11 AND feature1 < 31  AND token(namespace, entity) <= 
> 9223372036854775807;
> ServerError:  message="java.lang.NullPointerException">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Michael Frisch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367702#comment-15367702
 ] 

Michael Frisch edited comment on CASSANDRA-11978 at 7/8/16 1:58 PM:


1) You are correct
2) I don't believe the changes made should affect the streaming scenario, but 
here are the non-defaults we're using:
partitioner: org.apache.cassandra.dht.RandomPartitioner
data_file_directories:
- /data/cassandra/data

commitlog_directory: /data/cassandra/commitlog
key_cache_size_in_mb: 150
row_cache_size_in_mb: 30
saved_caches_directory: /data/cassandra/saved_caches
concurrent_reads: 128
concurrent_writes: 256
concurrent_counter_writes: 128
trickle_fsync: true
listen_address: 10.10.26.61
broadcast_address: 10.10.26.61
start_rpc: true
rpc_address: 10.10.26.61
rpc_server_type: hsha
rpc_min_threads: 32
rpc_max_threads: 1024
compaction_throughput_mb_per_sec: 0
stream_throughput_outbound_megabits_per_sec: 2000
streaming_socket_timeout_in_ms: 17280
endpoint_snitch: GossipingPropertyFileSnitch
3) No, flush/drain both work fine. I've only seen this error with streaming.


was (Author: blafrisch):
1) You are correct
2) I don't believe that changes made should affect the streaming scenario, but 
here are the non-defaults we're using:
partitioner: org.apache.cassandra.dht.RandomPartitioner
data_file_directories:
- /data/cassandra/data
commitlog_directory: /data/cassandra/commitlog
key_cache_size_in_mb: 150
row_cache_size_in_mb: 30
saved_caches_directory: /data/cassandra/saved_caches
concurrent_reads: 128
concurrent_writes: 256
concurrent_counter_writes: 128
trickle_fsync: true
listen_address: 10.10.26.61
broadcast_address: 10.10.26.61
start_rpc: true
rpc_address: 10.10.26.61
rpc_server_type: hsha
rpc_min_threads: 32
rpc_max_threads: 1024
compaction_throughput_mb_per_sec: 0
stream_throughput_outbound_megabits_per_sec: 2000
streaming_socket_timeout_in_ms: 17280
endpoint_snitch: GossipingPropertyFileSnitch
3) No, flush/drain both work fine. I've only seen this error with streaming.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.
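The failure mode described above can be reproduced in a few lines. This sketch is illustrative (the directory names are hypothetical, and it is not the actual Descriptor.fromFilename code): once the CF directory is a symlink, resolving the real path drops the keyspace directory from the path, so any code that derives the keyspace name from the resolved on-disk path gets it wrong.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the symlink problem: parsing the keyspace name from the
// resolved (real) path yields the symlink target's parent directory,
// not the keyspace directory.
public class SymlinkSketch {
    public static void main(String[] args) throws IOException {
        Path data = Files.createTempDirectory("data");
        Path target = Files.createDirectories(data.resolve("AnotherDisk").resolve("CFName"));
        Path link = Files.createDirectories(data.resolve("Keyspace")).resolve("CFName");
        Files.createSymbolicLink(link, target);

        // Keyspace parsed from the symlink path: "Keyspace" (what we want)
        System.out.println(link.getParent().getFileName());
        // Keyspace parsed from the resolved path: "AnotherDisk" (the bug)
        System.out.println(link.toRealPath().getParent().getFileName());
    }
}
```

This also shows why the workaround of using cfs.keyspace.getName() and cfs.name is safe: those come from metadata, not from the path.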



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Michael Frisch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367702#comment-15367702
 ] 

Michael Frisch commented on CASSANDRA-11978:


1) You are correct
2) I don't believe the changes made should affect the streaming scenario, but 
here are the non-defaults we're using:
partitioner: org.apache.cassandra.dht.RandomPartitioner
data_file_directories:
- /data/cassandra/data
commitlog_directory: /data/cassandra/commitlog
key_cache_size_in_mb: 150
row_cache_size_in_mb: 30
saved_caches_directory: /data/cassandra/saved_caches
concurrent_reads: 128
concurrent_writes: 256
concurrent_counter_writes: 128
trickle_fsync: true
listen_address: 10.10.26.61
broadcast_address: 10.10.26.61
start_rpc: true
rpc_address: 10.10.26.61
rpc_server_type: hsha
rpc_min_threads: 32
rpc_max_threads: 1024
compaction_throughput_mb_per_sec: 0
stream_throughput_outbound_megabits_per_sec: 2000
streaming_socket_timeout_in_ms: 17280
endpoint_snitch: GossipingPropertyFileSnitch
3) No, flush/drain both work fine. I've only seen this error with streaming.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12035) Structure for tpstats output (JSON, YAML)

2016-07-08 Thread Hiroyuki Nishi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367694#comment-15367694
 ] 

Hiroyuki Nishi commented on CASSANDRA-12035:


Hi [~ifesdjeen],

Thank you for your review and feedback!

I think so. I tried refactoring DefaultPrinter in TpStatsPrinter as follows:

https://github.com/yhnishi/cassandra/commit/cbdead281acf135fdcc66464ad3a7bfc1fd881d2

Is this what you meant by your suggestion?

> Structure for tpstats output (JSON, YAML)
> -
>
> Key: CASSANDRA-12035
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12035
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroyuki Nishi
>Assignee: Hiroyuki Nishi
>Priority: Minor
> Attachments: CASSANDRA-12035-trunk.patch, tablestats_result.json, 
> tablestats_result.txt, tablestats_result.yaml, tpstats_result.json, 
> tpstats_result.txt, tpstats_result.yaml
>
>
> In CASSANDRA-5977, some extra output formats such as JSON and YAML were added 
> for nodetool tablestats. 
> Similarly, I would like to add the output formats in nodetool tpstats.
> Also, I tried to refactor the tablestats's code about the output formats to 
> integrate the existing code with my code.
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12018:

Reviewer: Branimir Lambov

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SImpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12018:

Status: Open  (was: Patch Available)

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SImpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12039) Add an index callback to be notified post bootstrap and before joining the ring

2016-07-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12039:

Status: Awaiting Feedback  (was: Open)

> Add an index callback to be notified post bootstrap and before joining the 
> ring
> ---
>
> Key: CASSANDRA-12039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12039
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sergio Bossa
>Assignee: Sergio Bossa
>
> Custom index implementations might need to be notified when the node finishes 
> bootstrapping in order to execute some blocking tasks before the node itself 
> goes into NORMAL state.
> This is a proposal to add such functionality, which should roughly require 
> the following:
> 1) Add a {{getPostBootstrapTask}} callback to the {{Index}} interface.
> 2) Add an {{executePostBootstrapBlockingTasks}} method to 
> {{SecondaryIndexManager}} calling into the previously mentioned callback.
> 3) Hook that into {{StorageService#joinTokenRing}}.
> Thoughts?





[jira] [Updated] (CASSANDRA-10635) Add metrics for authentication failures

2016-07-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10635:

Status: Open  (was: Patch Available)

> Add metrics for authentication failures
> ---
>
> Key: CASSANDRA-10635
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10635
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Soumava Ghosh
>Assignee: Soumava Ghosh
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 10635-2.1.txt, 10635-2.2.txt, 10635-3.0.txt, 
> 10635-dtest.patch, 10635-trunk.patch
>
>
> There should be no auth failures on a cluster in general. 
> Having metrics around the authentication code would help detect clients 
> that are connecting to the wrong cluster or have auth incorrectly configured.





[jira] [Updated] (CASSANDRA-10635) Add metrics for authentication failures

2016-07-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-10635:

Status: Awaiting Feedback  (was: Open)

> Add metrics for authentication failures
> ---
>
> Key: CASSANDRA-10635
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10635
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Soumava Ghosh
>Assignee: Soumava Ghosh
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 10635-2.1.txt, 10635-2.2.txt, 10635-3.0.txt, 
> 10635-dtest.patch, 10635-trunk.patch
>
>
> There should be no auth failures on a cluster in general. 
> Having metrics around the authentication code would help detect clients 
> that are connecting to the wrong cluster or have auth incorrectly configured.





[jira] [Commented] (CASSANDRA-10635) Add metrics for authentication failures

2016-07-08 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367587#comment-15367587
 ] 

Sam Tunnicliffe commented on CASSANDRA-10635:
-

I wonder whether meters rather than counters would be more useful here, since 
they can provide not just an absolute count but also rates of auth 
failure/success. I would imagine that the ability to detect spikes would give 
operators more actionable signals. 
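For illustration, the counter/meter distinction can be sketched in plain JDK code (this is not the Dropwizard Metrics API Cassandra actually uses, and all names here are hypothetical): a counter exposes only an absolute count, while a meter also derives a rate, which is what makes spikes visible.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a counter only answers "how many failures ever?",
// while a meter can also answer "how fast are failures happening now?".
class SimpleMeter {
    private final AtomicLong count = new AtomicLong();
    private final long startNanos = System.nanoTime();

    void mark() { count.incrementAndGet(); }

    long getCount() { return count.get(); }

    // Mean events/second since creation; real meters (e.g. Dropwizard's)
    // use exponentially decaying averages to surface recent spikes.
    double meanRate() {
        double elapsedSec = (System.nanoTime() - startNanos) / 1e9;
        return elapsedSec > 0 ? count.get() / elapsedSec : 0.0;
    }
}

public class AuthMetricsDemo {
    public static void main(String[] args) {
        SimpleMeter authFailures = new SimpleMeter();
        for (int i = 0; i < 5; i++) authFailures.mark();
        assert authFailures.getCount() == 5;
        System.out.println("failures=" + authFailures.getCount()
                + " meanRate=" + authFailures.meanRate() + "/s");
    }
}
```

A spike in the failure rate, rather than a slowly growing absolute count, is the signal an operator would alert on.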

I'm not wild about tying the metric/mbean names to the message classes. It 
would be cleaner IMO to group them with the existing client metrics (at least 
in the mbeans). Doing it that way would mean losing the ability to 
disambiguate between the counts generated from {{CredentialsMessage}} 
(protocol v1) and {{AuthResponse}} (later versions), but that's a feature not 
a bug for me; if the protocol versions used by connecting clients are 
relevant, we should have dedicated metrics for them. 

[~soumava] I've pushed a branch which applies the above changes to your 
original patch [here|https://github.com/beobal/cassandra/tree/10635-trunk], 
wdyt?

[~cnlwsu] it would be good to get your opinion here too, if you have a chance 
to take a look.

> Add metrics for authentication failures
> ---
>
> Key: CASSANDRA-10635
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10635
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Soumava Ghosh
>Assignee: Soumava Ghosh
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 10635-2.1.txt, 10635-2.2.txt, 10635-3.0.txt, 
> 10635-dtest.patch, 10635-trunk.patch
>
>
> There should be no auth failures on a cluster in general. 
> Having metrics around the authentication code would help detect clients 
> that are connecting to the wrong cluster or have auth incorrectly configured.





[jira] [Created] (CASSANDRA-12151) Audit logging for database activity

2016-07-08 Thread stefan setyadi (JIRA)
stefan setyadi created CASSANDRA-12151:
--

 Summary: Audit logging for database activity
 Key: CASSANDRA-12151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
 Project: Cassandra
  Issue Type: New Feature
Reporter: stefan setyadi
Priority: Minor


We would like a way to enable Cassandra to log the database activity being 
performed on our server.

It should show the username, remote address, timestamp, action type, keyspace, 
column family, and query statement.

I was thinking of creating a new keyspace and inserting an entry for every 
activity that occurs. It would then be possible to query for activity 
targeting a specific keyspace and column family.







[jira] [Commented] (CASSANDRA-12018) CDC follow-ups

2016-07-08 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367540#comment-15367540
 ] 

Branimir Lambov commented on CASSANDRA-12018:
-

bq. Done.

I would remove the secondary path here -- this could lead to different results 
between the platforms, as one will descend into subdirectories and the other 
won't. My problem was with doing both {{listFiles}} and {{walkFileTree}}; 
doing just the walk fixes that in the better way (less object churn).
You can also remove the 
[todo|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:12018#diff-878dc31866184d5ef750ccd9befc8382R163].
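For reference, a single-pass size calculation with just {{walkFileTree}} (no preceding {{listFiles}} step) might look roughly like this; a plain-JDK sketch, not the actual patch code:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

// Sketch: sum a directory's size in one walkFileTree pass. The visitor
// receives the attributes for each file, so no extra stat call is needed.
public class DirSize {
    static long sizeOf(Path root) throws IOException {
        final long[] total = {0};
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                total[0] += attrs.size(); // attributes already loaded by the walk
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dirsize-demo");
        Files.write(dir.resolve("a.bin"), new byte[10]);
        Files.write(dir.resolve("b.bin"), new byte[20]);
        assert sizeOf(dir) == 30L;
        System.out.println(sizeOf(dir));
    }
}
```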

bq. Cleaned up.

The {{size}} and {{unflushedCDCSize}} members do not need to be atomic, but 
they should be volatile for writes to be made visible to threads other than the 
one writing.
{{tempSize}} isn't informative enough. Maybe {{sizeInProgress}}, 
{{incompleteSize}} or something similar?
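A minimal sketch of the volatile suggestion above (field and method names are illustrative, not the real class): plain {{volatile}} fields make single-writer updates visible to reader threads without the overhead of atomics.

```java
// Sketch only: volatile gives cross-thread visibility of writes when there
// is a single writer thread; atomics are only needed for multiple writers.
public class SegmentSizeSketch {
    private volatile long size;              // written by one thread, read by many
    private volatile long unflushedCDCSize;

    void updateSize(long delta) { size += delta; } // safe only with one writer
    long size() { return size; }

    void setUnflushed(long v) { unflushedCDCSize = v; }
    long unflushed() { return unflushedCDCSize; }

    public static void main(String[] args) {
        SegmentSizeSketch s = new SegmentSizeSketch();
        s.updateSize(100);
        s.setUnflushed(40);
        assert s.size() == 100L && s.unflushed() == 40L;
        System.out.println(s.size() + " " + s.unflushed());
    }
}
```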

bq. Removed &

We still want the AND of global and section-specific flags, just not to carry 
it over from one section to the other. I.e. 
{{statusTracker.tolerateErrorsInSection = tolerateTruncation & 
syncSegment.toleratesErrorsInSection}} (or make sure the segment reader does it 
when setting {{tolerateErrorsInSection}}). 
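The difference between the {{&=}} carry-over and per-section recomputation can be sketched as follows (flag values and method names are hypothetical):

```java
// Sketch of the review comment: the per-section flag should be recomputed
// from the global flag for every section, not AND-accumulated across them.
public class FlagDemo {
    // buggy variant: tolerateErrorsInSection &= ... carries state forward
    static boolean carriedOver(boolean global, boolean[] sectionFlags) {
        boolean tolerate = global;
        for (boolean f : sectionFlags) tolerate &= f;
        return tolerate; // the last section's value is polluted by earlier ones
    }

    // fixed variant: each section is computed independently
    static boolean perSection(boolean global, boolean sectionFlag) {
        return global & sectionFlag;
    }

    public static void main(String[] args) {
        boolean[] flags = {false, true}; // hypothetical per-SyncSegment flags
        // with carry-over, section 1 wrongly ends up intolerant of errors
        assert !carriedOver(true, flags);
        // recomputed per section, section 1 correctly tolerates errors
        assert perSection(true, flags[1]);
        System.out.println("per-section recomputation avoids the carry-over");
    }
}
```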

> CDC follow-ups
> --
>
> Key: CASSANDRA-12018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12018
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>
> h6. Platform independent implementation of DirectorySizeCalculator
> On Linux, simplify to 
> {{Arrays.stream(path.listFiles()).mapToLong(File::length).sum();}}
> h6. Refactor DirectorySizeCalculator
> bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
> the listFiles step? Either list the files and just loop through them, or do 
> the walkFileTree operation – you are now doing the same work twice. Use a 
> plain long instead of the atomic as the class is still thread-unsafe.
> h6. TolerateErrorsInSection should not depend on previous SyncSegment status 
> in CommitLogReader
> bq. tolerateErrorsInSection &=: I don't think it was intended for the value 
> to depend on previous iterations.
> h6. Refactor interface of SimpleCachedBufferPool
> bq. SimpleCachedBufferPool should provide getThreadLocalReusableBuffer(int 
> size) which should automatically reallocate if the available size is less, 
> and not expose a setter at all.
> h6. Change CDC exception to WriteFailureException instead of 
> WriteTimeoutException
> h6. Remove unused CommitLogTest.testRecovery(byte[] logData)
> h6. NoSpamLogger a message when at CDC capacity





[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367510#comment-15367510
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

Could you please provide some more information so I can proceed? I need to 
clarify the following points:

1) In your case the soft link name is "/path/to/data/dir/AnotherDisk/CFName" 
and the actual (i.e. target) path is "/path/to/data/dir/Keyspace/CFName", am I 
right?

2) Have you made any changes to cassandra.yaml or other config files for this 
scenario?

3) If I execute the "nodetool flush" command after inserting some data into a 
table, will I get this error immediately during the command execution?



> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", then it concludes that the 
> keyspace name is "AnotherDisk" which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.





[jira] [Updated] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-08 Thread Yusuke Takata (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yusuke Takata updated CASSANDRA-12150:
--
Status: Patch Available  (was: Open)

> cqlsh does not automatically downgrade CQL version
> --
>
> Key: CASSANDRA-12150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Yusuke Takata
>Priority: Minor
>  Labels: cqlsh
> Attachments: patch.txt
>
>
> Cassandra drivers such as the Python driver can automatically negotiate a 
> supported CQL version, but I found that cqlsh does not automatically 
> downgrade the CQL version, as shown below.
> {code}
> $ cqlsh
> Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
> ProtocolError("cql_version '3.4.2' is not supported by remote (w/ native 
> protocol). Supported versions: [u'3.4.0']",)})
> {code}
> I think this capability would be useful for cqlsh too. 
> Could someone review the attached patch?





[jira] [Created] (CASSANDRA-12150) cqlsh does not automatically downgrade CQL version

2016-07-08 Thread Yusuke Takata (JIRA)
Yusuke Takata created CASSANDRA-12150:
-

 Summary: cqlsh does not automatically downgrade CQL version
 Key: CASSANDRA-12150
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12150
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL
Reporter: Yusuke Takata
Priority: Minor
 Attachments: patch.txt

Cassandra drivers such as the Python driver can automatically negotiate a 
supported CQL version, but I found that cqlsh does not automatically downgrade 
the CQL version, as shown below.
{code}
$ cqlsh
Connection error: ('Unable to connect to any servers', {'127.0.0.1': 
ProtocolError("cql_version '3.4.2' is not supported by remote (w/ native 
protocol). Supported versions: [u'3.4.0']",)})
{code}
I think this capability would be useful for cqlsh too. 
Could someone review the attached patch?





[jira] [Updated] (CASSANDRA-12039) Add an index callback to be notified post bootstrap and before joining the ring

2016-07-08 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12039:

Status: Open  (was: Patch Available)

In principle I think this is fine; there are just a couple of things:
* Pre-join tasks are not executed if a node is started in write survey mode and 
then fully joins the ring later. 
* If bootstrap fails and is subsequently resumed, pre-join tasks are not 
executed on its completion.
* {{Index::getPreJoinTask}} should have a default no-op implementation (and the 
same can then be removed from {{CassandraIndex}} & {{CustomCassandraIndex}})
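The third point could be sketched as a Java default method; the signature here is assumed from this discussion, and the real {{Index}} interface is of course much larger:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: with a default no-op implementation on the interface,
// concrete indexes override getPreJoinTask only when they need pre-join work.
interface Index {
    default Callable<?> getPreJoinTask(boolean hadBootstrap) {
        return null; // null signals "no pre-join work for this index"
    }
}

// An index with no pre-join work needs no extra code at all.
class SimpleIndex implements Index {}

public class PreJoinDemo {
    public static void main(String[] args) {
        Index idx = new SimpleIndex();
        // The default returns null, which callers can treat as a no-op.
        assert idx.getPreJoinTask(true) == null;
        System.out.println("pre-join task: " + idx.getPreJoinTask(true));
    }
}
```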

This area is not particularly amenable to testing, especially unit testing. 
The utest in the patch is welcome, but I'd be happier if we also had some 
coverage of other scenarios, such as verifying the value of the 
{{hadBootstrap}} argument depending on whether bootstrap occurred or not, and 
the handling of the scenarios I mentioned above. That really means dtests, 
which rules out using a custom/stub index that can be easily observed. I think 
it would be sufficient to add some debug logging to 
{{StorageService::executePreJoinTasks}} and check for that in the node logs. 


> Add an index callback to be notified post bootstrap and before joining the 
> ring
> ---
>
> Key: CASSANDRA-12039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12039
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sergio Bossa
>Assignee: Sergio Bossa
>
> Custom index implementations might need to be notified when the node finishes 
> bootstrapping in order to execute some blocking tasks before the node itself 
> goes into NORMAL state.
> This is a proposal to add such functionality, which should roughly require 
> the following:
> 1) Add a {{getPostBootstrapTask}} callback to the {{Index}} interface.
> 2) Add an {{executePostBootstrapBlockingTasks}} method to 
> {{SecondaryIndexManager}} calling into the previously mentioned callback.
> 3) Hook that into {{StorageService#joinTokenRing}}.
> Thoughts?





[jira] [Commented] (CASSANDRA-11887) Duplicate rows after a 2.2.5 to 3.0.4 migration

2016-07-08 Thread Etienne Rugeri (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367389#comment-15367389
 ] 

Etienne Rugeri commented on CASSANDRA-11887:


Coming from 2.2.4 to 3.0.7, we had the same issue. We don't use any UDTs, but 
all tables with maps (map for example) have the bug. Other tables seem OK.
upgradesstables / repair / scrub didn't help.
Duplicated rows don't necessarily have the same data for all non-partition-key 
and non-clustering-column columns; some have missing data.
We're going to back up the tables, deduplicate the backup, truncate, and 
insert again. Hopefully that fixes everything.

> Duplicate rows after a 2.2.5 to 3.0.4 migration
> ---
>
> Key: CASSANDRA-11887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11887
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Julien Anguenot
>Priority: Blocker
> Fix For: 3.0.x
>
>
> After migrating from 2.2.5 to 3.0.4, some tables seem to carry duplicate 
> primary keys.
> Below an example. Note, repair / scrub of such table do not seem to fix nor 
> indicate any issues.
> *Table definition*:
> {code}
> CREATE TABLE core.edge_ipsec_vpn_service (
> edge_uuid text PRIMARY KEY,
> enabled boolean,
> endpoints set,
> tunnels set
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> *UDTs:*
> {code}
> CREATE TYPE core.edge_ipsec_vpn_endpoint (
> network text,
> public_ip text
> );
> CREATE TYPE core.edge_ipsec_vpn_tunnel (
> name text,
> description text,
> peer_ip_address text,
> peer_id text,
> local_ip_address text,
> local_id text,
> local_subnets frozen>,
> peer_subnets frozen>,
> shared_secret text,
> shared_secret_encrypted boolean,
> encryption_protocol text,
> mtu int,
> enabled boolean,
> operational boolean,
> error_details text,
> vpn_peer frozen
> );
> CREATE TYPE core.edge_ipsec_vpn_subnet (
> name text,
> gateway text,
> netmask text
> );
> CREATE TYPE core.edge_ipsec_vpn_peer (
> type text,
> id text,
> name text,
> vcd_url text,
> vcd_org text,
> vcd_username text
> );
> {code}
> sstabledump extract (IP addresses hidden, as well as secrets)
> {code}
> [...]
>  {
> "partition" : {
>   "key" : [ "84d567cc-0165-4e64-ab97-3a9d06370ba9" ],
>   "position" : 131146
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 131236,
> "liveness_info" : { "tstamp" : "2016-05-06T17:07:15.416003Z" },
> "cells" : [
>   { "name" : "enabled", "value" : "true" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "" }
> ]
>   },
>   {
> "type" : "row",
> "position" : 131597,
> "cells" : [
>   { "name" : "endpoints", "path" : [ “XXX” ], "value" : "", "tstamp" 
> : "2016-03-29T08:05:38.297015Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T08:05:38.297015Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-14T18:05:07.262001Z" },
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T08:05:38.297015Z" }
> ]
>   },
>   {
> "type" : "row",
> "position" : 133644,
> "cells" : [
>   { "name" : "tunnels", "path" : [ 
> "XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third
>  party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : 
> "2016-03-29T07:05:27.213013Z" },
>   { "name" : "tunnels", "path" : [ 
> 
