[jira] [Created] (CASSANDRA-13220) test failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x.ticket_5230_test

2017-02-14 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-13220:
--

 Summary: test failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x.ticket_5230_test
 Key: CASSANDRA-13220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13220
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Shuler
 Attachments: node1.log, node2.log, node3.log

example failure:

http://cassci.datastax.com/job/cassandra-2.1_dtest_upgrade/21/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_0_x_To_indev_2_1_x/ticket_5230_test

{noformat}
Error Message

Unexpected error in log, see stdout
 >> begin captured logging << 
dtest: DEBUG: Upgrade test beginning, setting CASSANDRA_VERSION to 2.0.17, and 
jdk to 7. (Prior values will be restored after test).
dtest: DEBUG: Switching jdk to version 7 (JAVA_HOME is changing from 
/usr/lib/jvm/jdk1.8.0_51 to /usr/lib/jvm/jdk1.7.0_80)
dtest: DEBUG: cluster ccm directory: /tmp/dtest-NvCzEj
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
cassandra.policies: INFO: Using datacenter 'datacenter1' for 
DCAwareRoundRobinPolicy (via host '127.0.0.1'); if incorrect, please specify a 
local_dc to the constructor, or limit contact points to local cluster nodes
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.cluster: INFO: New Cassandra host  
discovered
dtest: DEBUG: upgrading node1 to 
github:apache/a6237bf65a95d654b7e702e81fd0d353460d0c89
dtest: DEBUG: Switching jdk to version 8 (JAVA_HOME is changing from 
/usr/lib/jvm/jdk1.7.0_80 to /usr/lib/jvm/jdk1.8.0_51)
ccm: INFO: Fetching Cassandra updates...
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.cluster: INFO: New Cassandra host  
discovered
cassandra.cluster: INFO: New Cassandra host  
discovered
dtest: DEBUG: Querying upgraded node
dtest: DEBUG: Querying old node
dtest: DEBUG: removing ccm cluster test at: /tmp/dtest-NvCzEj
dtest: DEBUG: clearing ssl stores from [/tmp/dtest-NvCzEj] directory
- >> end captured logging << -
{noformat}

{noformat}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 358, in run
self.tearDown()
  File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
219, in tearDown
super(UpgradeTester, self).tearDown()
  File "/home/automaton/cassandra-dtest/dtest.py", line 593, in tearDown
raise AssertionError('Unexpected error in log, see stdout')
{noformat}
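For context, the assertion above comes from the dtest harness itself: after the test body finishes, tearDown scans each node's log for unexpected errors and fails the test if any are present. A minimal sketch of that kind of check (illustrative only; the real logic in dtest.py is more involved and supports ignore patterns):

{code}
import re

def check_logs_for_errors(nodes):
    # Scan each ccm node's log for ERROR lines; print them so they land in
    # the captured stdout, then fail the test, as in the stacktrace above.
    found = False
    for node in nodes:
        with open(node.logfilename()) as log:
            for line in log:
                if re.search(r'\bERROR\b', line):
                    print('{}: {}'.format(node.name, line.rstrip()))
                    found = True
    if found:
        raise AssertionError('Unexpected error in log, see stdout')
{code}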



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-10404) Node to Node encryption transitional mode

2017-02-14 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-10404:

Description: 
Create a transitional mode for encryption that allows both encrypted and 
unencrypted node-to-node traffic during a changeover from unencrypted to 
encrypted operation. This alleviates downtime during the switch.

 This is similar to CASSANDRA-10559, which is intended for client-to-node 
encryption.

  was:
Create a transitional mode for encryption that allows encrypted and unencrypted 
traffic node-to-node during a change over to encryption from unencrypted. This 
alleviates downtime during the switch.

 This is similar to https://issues.apache.org/jira/browse/CASSANDRA-8803 which 
is intended for client-to-node


> Node to Node encryption transitional mode
> -
>
> Key: CASSANDRA-10404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Tom Lewis
>Assignee: Jason Brown
>
> Create a transitional mode for encryption that allows both encrypted and 
> unencrypted node-to-node traffic during a changeover from unencrypted to 
> encrypted operation. This alleviates downtime during the switch.
>  This is similar to CASSANDRA-10559, which is intended for client-to-node 
> encryption.
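
As a rough sketch of what the transitional rollout could look like from a test harness: the dict below follows the cassandra.yaml server_encryption_options shape, but the 'optional' flag is a hypothetical placeholder, since the ticket had not settled on a concrete knob at the time of this update.

{code}
# Hedged sketch: 'optional' is a hypothetical transitional flag meaning
# "encrypt when the peer supports it, otherwise accept plaintext", so nodes
# can be restarted one at a time without internode traffic being refused.
transitional_encryption = {
    'server_encryption_options': {
        'internode_encryption': 'all',
        'optional': True,              # hypothetical, not a released option here
        'keystore': 'conf/.keystore',
        'keystore_password': 'cassandra',
    }
}

# Applied to a ccm-managed test cluster, e.g.:
#   cluster.set_configuration_options(values=transitional_encryption)
{code}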



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12344) Forward writes to replacement node with same address during replace

2017-02-14 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12344:

Fix Version/s: 4.x

> Forward writes to replacement node with same address during replace
> ---
>
> Key: CASSANDRA-12344
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12344
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Distributed Metadata
>Reporter: Paulo Motta
> Fix For: 4.x
>
>
> CASSANDRA-8523 added support for forwarding writes to a replacement 
> node via a new gossip state {{BOOTSTRAPPING_REPLACE}}.
> Currently this is limited to replacement nodes with a different address from 
> the original node, because if a replacement node with the same address as a 
> normal endpoint joins gossip with a non-dead state, it will become alive in 
> the Failure Detector and reads will be forwarded to it before the node is 
> ready to serve reads.
> This ticket is to add support for forwarding writes to replacement nodes with 
> the same IP address as the original node.
> The initial idea is to allow marking a node as unavailable for reads in 
> {{TokenMetadata}}, which will allow a replacement node with the same IP to 
> join gossip without having reads forwarded to it. This will be enabled by 
> CASSANDRA-11559.
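
To make the intended behaviour concrete, a toy model (purely illustrative, not project code) of a coordinator that includes a BOOTSTRAPPING_REPLACE-style endpoint in write targets while excluding it from read targets:

{code}
class Endpoint(object):
    def __init__(self, address, state='NORMAL'):
        self.address = address
        self.state = state  # e.g. 'NORMAL' or 'BOOTSTRAPPING_REPLACE'

def write_targets(endpoints):
    # Writes also go to a replacement that is still streaming, so it does
    # not miss mutations while it bootstraps.
    return [e for e in endpoints
            if e.state in ('NORMAL', 'BOOTSTRAPPING_REPLACE')]

def read_targets(endpoints):
    # Reads skip the replacement until it is marked available again,
    # mirroring the "unavailable for reads" idea described above.
    return [e for e in endpoints if e.state == 'NORMAL']

nodes = [Endpoint('10.0.0.1'), Endpoint('10.0.0.2'),
         Endpoint('10.0.0.3', state='BOOTSTRAPPING_REPLACE')]
assert len(write_targets(nodes)) == 3
assert len(read_targets(nodes)) == 2
{code}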



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[2/9] cassandra git commit: Quick fix: Add missing developers to build.xml file.

2017-02-14 Thread aleksey
Quick fix: Add missing developers to build.xml file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/753d90cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/753d90cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/753d90cd

Branch: refs/heads/cassandra-3.11
Commit: 753d90cd77959b7640b2189060438b4c5403cf4e
Parents: 9a80f80
Author: Nate McCall 
Authored: Wed Feb 15 11:50:23 2017 +1300
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:20:23 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d90cd/build.xml
--
diff --git a/build.xml b/build.xml
index 94e4723..31c239b 100644
--- a/build.xml
+++ b/build.xml
@@ -423,33 +423,49 @@
  [hunk body not preserved: the mail archive stripped the XML element content; 
  per the diffstat above, 18 lines were added and 2 removed]



[8/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-14 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/515e4a22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/515e4a22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/515e4a22

Branch: refs/heads/trunk
Commit: 515e4a2276d68d1d4f35da2a7daa92dd01936f8f
Parents: 23a1dee 76ad028
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:39 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:39 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/515e4a22/build.xml
--
diff --cc build.xml
index 49dd95a,4e3011f..0eef700
--- a/build.xml
+++ b/build.xml
@@@ -446,13 -411,9 +446,14 @@@
  [hunk body not preserved: the mail archive stripped the XML element content]



[9/9] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-02-14 Thread aleksey
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0827fb2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0827fb2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0827fb2

Branch: refs/heads/trunk
Commit: a0827fb2e7e3ac643573eab987ebb0b92f8b73e2
Parents: 48bfc8e 515e4a2
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:55 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:55 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0827fb2/build.xml
--
diff --cc build.xml
index d5962fe,0eef700..8fde61b
--- a/build.xml
+++ b/build.xml
@@@ -436,9 -450,10 +436,10 @@@
  [hunk body not preserved: the mail archive stripped the XML element content]



[3/9] cassandra git commit: Quick fix: Add missing developers to build.xml file.

2017-02-14 Thread aleksey
Quick fix: Add missing developers to build.xml file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/753d90cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/753d90cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/753d90cd

Branch: refs/heads/trunk
Commit: 753d90cd77959b7640b2189060438b4c5403cf4e
Parents: 9a80f80
Author: Nate McCall 
Authored: Wed Feb 15 11:50:23 2017 +1300
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:20:23 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d90cd/build.xml
--
diff --git a/build.xml b/build.xml
index 94e4723..31c239b 100644
--- a/build.xml
+++ b/build.xml
@@ -423,33 +423,49 @@
  [hunk body not preserved: the mail archive stripped the XML element content; 
  per the diffstat above, 18 lines were added and 2 removed]



[5/9] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76ad028f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76ad028f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76ad028f

Branch: refs/heads/cassandra-3.11
Commit: 76ad028f67cbca59895cc489f24102f1f7f9d911
Parents: 82943d6 753d90c
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:19 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:19 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76ad028f/build.xml
--
diff --cc build.xml
index a182f45,31c239b..4e3011f
--- a/build.xml
+++ b/build.xml
@@@ -411,8 -421,9 +411,9 @@@
@@@ -430,13 -450,17 +440,18 @@@
  [hunk bodies not preserved: the mail archive stripped the XML element content]



[4/9] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76ad028f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76ad028f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76ad028f

Branch: refs/heads/cassandra-3.0
Commit: 76ad028f67cbca59895cc489f24102f1f7f9d911
Parents: 82943d6 753d90c
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:19 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:19 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76ad028f/build.xml
--
diff --cc build.xml
index a182f45,31c239b..4e3011f
--- a/build.xml
+++ b/build.xml
@@@ -411,8 -421,9 +411,9 @@@
@@@ -430,13 -450,17 +440,18 @@@
  [hunk bodies not preserved: the mail archive stripped the XML element content]



[7/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-14 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/515e4a22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/515e4a22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/515e4a22

Branch: refs/heads/cassandra-3.11
Commit: 515e4a2276d68d1d4f35da2a7daa92dd01936f8f
Parents: 23a1dee 76ad028
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:39 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:39 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/515e4a22/build.xml
--
diff --cc build.xml
index 49dd95a,4e3011f..0eef700
--- a/build.xml
+++ b/build.xml
@@@ -446,13 -411,9 +446,14 @@@
  [hunk body not preserved: the mail archive stripped the XML element content]



[6/9] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76ad028f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76ad028f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76ad028f

Branch: refs/heads/trunk
Commit: 76ad028f67cbca59895cc489f24102f1f7f9d911
Parents: 82943d6 753d90c
Author: Aleksey Yeschenko 
Authored: Tue Feb 14 23:26:19 2017 +
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:26:19 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/76ad028f/build.xml
--
diff --cc build.xml
index a182f45,31c239b..4e3011f
--- a/build.xml
+++ b/build.xml
@@@ -411,8 -421,9 +411,9 @@@
@@@ -430,13 -450,17 +440,18 @@@
  [hunk bodies not preserved: the mail archive stripped the XML element content]



[1/9] cassandra git commit: Quick fix: Add missing developers to build.xml file.

2017-02-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 82943d6aa -> 76ad028f6
  refs/heads/cassandra-3.11 23a1dee45 -> 515e4a227
  refs/heads/trunk 48bfc8e8d -> a0827fb2e


Quick fix: Add missing developers to build.xml file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/753d90cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/753d90cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/753d90cd

Branch: refs/heads/cassandra-3.0
Commit: 753d90cd77959b7640b2189060438b4c5403cf4e
Parents: 9a80f80
Author: Nate McCall 
Authored: Wed Feb 15 11:50:23 2017 +1300
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:20:23 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d90cd/build.xml
--
diff --git a/build.xml b/build.xml
index 94e4723..31c239b 100644
--- a/build.xml
+++ b/build.xml
@@ -423,33 +423,49 @@
  [hunk body not preserved: the mail archive stripped the XML element content; 
  per the diffstat above, 18 lines were added and 2 removed]



cassandra git commit: Quick fix: Add missing developers to build.xml file. [Forced Update!]

2017-02-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 06672aa97 -> 753d90cd7 (forced update)


Quick fix: Add missing developers to build.xml file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/753d90cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/753d90cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/753d90cd

Branch: refs/heads/cassandra-2.2
Commit: 753d90cd77959b7640b2189060438b4c5403cf4e
Parents: 9a80f80
Author: Nate McCall 
Authored: Wed Feb 15 11:50:23 2017 +1300
Committer: Aleksey Yeschenko 
Committed: Tue Feb 14 23:20:23 2017 +

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d90cd/build.xml
--
diff --git a/build.xml b/build.xml
index 94e4723..31c239b 100644
--- a/build.xml
+++ b/build.xml
@@ -423,33 +423,49 @@
  [hunk body not preserved: the mail archive stripped the XML element content; 
  per the diffstat above, 18 lines were added and 2 removed]



cassandra git commit: Quick fix: Add missing developers to build.xml file.

2017-02-14 Thread zznate
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 9a80f803c -> 06672aa97


Quick fix: Add missing developers to build.xml file.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06672aa9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06672aa9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06672aa9

Branch: refs/heads/cassandra-2.2
Commit: 06672aa97b6e6606aa192827987006d8b300c949
Parents: 9a80f80
Author: Nate McCall 
Authored: Wed Feb 15 11:50:23 2017 +1300
Committer: Nate McCall 
Committed: Wed Feb 15 12:10:13 2017 +1300

--
 build.xml | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/06672aa9/build.xml
--
diff --git a/build.xml b/build.xml
index 94e4723..863dcf0 100644
--- a/build.xml
+++ b/build.xml
@@ -423,33 +423,49 @@
  [hunk body not preserved: the mail archive stripped the XML element content; 
  per the diffstat above, 18 lines were added and 2 removed]



[jira] [Resolved] (CASSANDRA-13199) dtest failure in repair_tests.repair_test.TestRepair.no_anticompaction_after_dclocal_repair_test

2017-02-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13199.
-
Resolution: Fixed

This was fixed in dtest, and the test passed in the most recent run

http://cassci.datastax.com/job/trunk_offheap_dtest/428/

> dtest failure in 
> repair_tests.repair_test.TestRepair.no_anticompaction_after_dclocal_repair_test
> 
>
> Key: CASSANDRA-13199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13199
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log, node4_debug.log, node4_gc.log, node4.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/427/testReport/repair_tests.repair_test/TestRepair/no_anticompaction_after_dclocal_repair_test
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['repair', '-local', 
> 'keyspace1', 'standard1']] exited with non-zero status; exit status: 2; 
> stderr: error: Incremental repairs cannot be run against a subset of tokens 
> or ranges
> -- StackTrace --
> java.lang.IllegalArgumentException: Incremental repairs cannot be run against 
> a subset of tokens or ranges
>   at 
> org.apache.cassandra.repair.messages.RepairOption.parse(RepairOption.java:242)
>   at 
> org.apache.cassandra.service.StorageService.repairAsync(StorageService.java:3258)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$241(TCPTransport.java:683)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$335/1485984579.run(Unknown
>  Source)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>   at 
> 
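
The underlying complaint: {{-local}} restricts repair to the local datacenter's ranges, and incremental repair (the default since 2.2) rejects any subset of tokens or ranges. A hedged sketch of the distinction, in the same subprocess style the dtest uses; adding {{-full}} is the usual way to combine a dc-local repair with a range restriction:

{code}
import subprocess

# Fails on trunk, as in the error above: incremental repair (the default)
# rejects a subset of ranges:
#   nodetool -h localhost -p 7100 repair -local keyspace1 standard1

# Works: an explicitly full repair may be restricted to the local DC's ranges.
subprocess.check_call(['nodetool', '-h', 'localhost', '-p', '7100',
                       'repair', '-full', '-local', 'keyspace1', 'standard1'])
{code}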

[jira] [Resolved] (CASSANDRA-13201) dtest failure in repair_tests.repair_test.TestRepair.test_failure_during_anticompaction

2017-02-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13201.
-
Resolution: Fixed

This was fixed in dtest, and is no longer failing in the most recent run

http://cassci.datastax.com/job/trunk_offheap_dtest/428/

> dtest failure in 
> repair_tests.repair_test.TestRepair.test_failure_during_anticompaction
> ---
>
> Key: CASSANDRA-13201
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13201
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/427/testReport/repair_tests.repair_test/TestRepair/test_failure_during_anticompaction
> {code}
> Error Message
> 08 Feb 2017 04:42:14 [node3] Missing: ['Got anticompaction request']:
> INFO  [main] 2017-02-08 04:31:15,447 YamlConfigura.
> See debug.log for remainder
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/repair_tests/repair_test.py", line 
> 1056, in test_failure_during_anticompaction
> self._test_failure_during_repair(phase='anticompaction',)
>   File "/home/automaton/cassandra-dtest/repair_tests/repair_test.py", line 
> 1131, in _test_failure_during_repair
> node_to_kill.watch_log_for(msg_to_wait, filename='debug.log')
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 471, in 
> watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> {code}
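
The failure above is a ccmlib log-watch timeout rather than a server-side error: the test kills a node mid-anticompaction and waits for a log line that never arrives. For reference, a minimal sketch of the ccmlib call the test relies on (argument names as they appear in the stacktrace):

{code}
# 'node3' is a ccmlib.node.Node obtained from the test's ccm cluster, e.g.:
#   node1, node2, node3 = cluster.nodelist()
# Block until the pattern shows up in node3's debug log, or raise ccmlib's
# TimeoutError after 120 seconds, as in the stacktrace above.
node3.watch_log_for('Got anticompaction request', filename='debug.log',
                    timeout=120)
{code}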



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (CASSANDRA-13200) dtest failure in repair_tests.repair_test.TestRepair.test_dead_sync_participant

2017-02-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13200.
-
Resolution: Fixed

This was fixed in dtest, and is no longer failing in the most recent run

http://cassci.datastax.com/job/trunk_offheap_dtest/428/

> dtest failure in 
> repair_tests.repair_test.TestRepair.test_dead_sync_participant
> ---
>
> Key: CASSANDRA-13200
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13200
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/427/testReport/repair_tests.repair_test/TestRepair/test_dead_sync_participant
> {code}
> Error Message
> 08 Feb 2017 04:31:07 [node1] Missing: ['Endpoint .* died']:
> INFO  [main] 2017-02-08 04:28:51,776 YamlConfigura.
> See system.log for remainder
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 48, in 
> wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/repair_tests/repair_test.py", line 
> 1049, in test_dead_sync_participant
> self._test_failure_during_repair(phase='sync', initiator=False,)
>   File "/home/automaton/cassandra-dtest/repair_tests/repair_test.py", line 
> 1139, in _test_failure_during_repair
> node1.watch_log_for('Endpoint .* died', timeout=60)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 471, in 
> watch_log_for
> raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " 
> [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + 
> reads[:50] + ".\nSee {} for remainder".format(filename))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (CASSANDRA-13210) test failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2017-02-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13210.
-
Resolution: Fixed

> test failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> ---
>
> Key: CASSANDRA-13210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13210
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log, node4_debug.log, node4_gc.log, node4.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/53/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {noformat}
> Error Message
> 'Repaired at: 0' unexpectedly found in 'SSTable: 
> /tmp/dtest-N7zjo6/test/node1/data0/keyspace1/standard1-a79a0c50efa211e6bf211330662f36ef/md-6-big\nPartitioner:
>  org.apache.cassandra.dht.Murmur3Partitioner\nBloom Filter FP chance: 
> 0.01\nMinimum timestamp: 148673926323\nMaximum timestamp: 
> 148673926323\nSSTable min local deletion time: 2147483647\nSSTable max 
> local deletion time: 2147483647\nCompressor: -\nTTL min: 0\nTTL max: 0\nFirst 
> token: 296988783704308703 (key=30503337373039503231)\nLast token: 
> 296988783704308703 (key=30503337373039503231)\nEstimated droppable 
> tombstones: 0.0\nSSTable Level: 0\nRepaired at: 0\nPending repair: 
> b099d1f0-efa2-11e6-89ec-d14624f1e47e\nReplay positions covered: 
> {CommitLogPosition(segmentId=1486739234777, 
> position=46796)=CommitLogPosition(segmentId=1486739234777, 
> position=50819)}\ntotalColumnsSet: 5\ntotalRows: 1\nEstimated tombstone drop 
> times: [Count / Row Size / Cell Count histogram; the values were garbled by 
> line wrapping in the archive and the message is truncated, so they are 
> elided here]

[jira] [Resolved] (CASSANDRA-13202) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test

2017-02-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13202.
-
Resolution: Fixed

This was fixed in dtest, and is no longer failing in the most recent run

http://cassci.datastax.com/job/trunk_offheap_dtest/428/

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test
> 
>
> Key: CASSANDRA-13202
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13202
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/427/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test
> {code}
> Error Message
> 'Repaired at: 0' unexpectedly found in 'SSTable: 
> /tmp/dtest-9PYhKy/test/node1/data0/keyspace1/standard1-17eaf440edbb11e68d99c3f653778b71/md-1-big\nPartitioner:
>  org.apache.cassandra.dht.Murmur3Partitioner\nBloom Filter FP chance: 
> 0.01\nMinimum timestamp: 1486529856104000\nMaximum timestamp: 
> 1486529859637013\nSSTable min local deletion time: 2147483647\nSSTable max 
> local deletion time: 2147483647\nCompressor: -\nTTL min: 0\nTTL max: 0\nFirst 
> token: -9222701292667950301 (key=5032394c323239385030)\nLast token: 
> -3134717340917976237 (key=304b3338324b324b3430)\nEstimated droppable 
> tombstones: 0.0\nSSTable Level: 0\nRepaired at: 0\nPending repair: 
> 26e751a0-edbb-11e6-accb-61d17d26194a\nReplay positions covered: 
> {CommitLogPosition(segmentId=1486529830270, 
> position=41626)=CommitLogPosition(segmentId=1486529830270, 
> position=2604016)}\ntotalColumnsSet: 16365\ntotalRows: 3273\nEstimated 
> tombstone drop times: [Count / Row Size / Cell Count histogram; the values 
> were garbled by line wrapping in the archive and the message is truncated, 
> so they are elided here]

[jira] [Commented] (CASSANDRA-13210) test failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2017-02-14 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866866#comment-15866866
 ] 

Blake Eggleston commented on CASSANDRA-13210:
-

dtest PR was merged, and this is no longer failing

http://cassci.datastax.com/job/trunk_large_dtest/55/testReport/

> test failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> ---
>
> Key: CASSANDRA-13210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13210
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log, node4_debug.log, node4_gc.log, node4.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_large_dtest/53/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> {noformat}
> Error Message
> 'Repaired at: 0' unexpectedly found in 'SSTable: 
> /tmp/dtest-N7zjo6/test/node1/data0/keyspace1/standard1-a79a0c50efa211e6bf211330662f36ef/md-6-big\nPartitioner:
>  org.apache.cassandra.dht.Murmur3Partitioner\nBloom Filter FP chance: 
> 0.01\nMinimum timestamp: 148673926323\nMaximum timestamp: 
> 148673926323\nSSTable min local deletion time: 2147483647\nSSTable max 
> local deletion time: 2147483647\nCompressor: -\nTTL min: 0\nTTL max: 0\nFirst 
> token: 296988783704308703 (key=30503337373039503231)\nLast token: 
> 296988783704308703 (key=30503337373039503231)\nEstimated droppable 
> tombstones: 0.0\nSSTable Level: 0\nRepaired at: 0\nPending repair: 
> b099d1f0-efa2-11e6-89ec-d14624f1e47e\nReplay positions covered: 
> {CommitLogPosition(segmentId=1486739234777, 
> position=46796)=CommitLogPosition(segmentId=1486739234777, 
> position=50819)}\ntotalColumnsSet: 5\ntotalRows: 1\nEstimated tombstone drop 
> times: [Count / Row Size / Cell Count histogram; the values were garbled by 
> line wrapping in the archive and the message is truncated, so they are 
> elided here]

[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-14 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866811#comment-15866811
 ] 

Blake Eggleston commented on CASSANDRA-13153:
-

bq. CASSANDRA-13153 is not just about redundant re-streaming. It's about 
streaming only partial data for partitions or cells 

Right, agreed. My point was that not using incremental repair should fix 
[~Amanda.Debrot]'s problem. The part about redundant streaming just meant that 
as a workaround, it might not actually be as bad as it sounds.

bq. With CASSANDRA-9143 it's not that bad, since you start on unrepaired, 
recent data and the next incremental run will indeed fix the data that has been 
left in unrepaired before, given it's run within gc_grace. But with 
CASSANDRA-13153 you might leak arbitrary old data into unrepaired, which should 
never happen.

I'm not sure what you mean here. The goal of CASSANDRA-9143 was to prevent 
repaired data from ever leaking back into unrepaired, for both correctness and 
performance reasons. Do you mean that leaking data is still possible after 
CASSANDRA-9143, or that the point of this ticket is different?

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it's past gc_grace.  Since Nodes 1 and 3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable 

[Cassandra Wiki] Update of "Committers" by AlekseyYeschenko

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by AlekseyYeschenko:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=71&rev2=72

  ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
  ||Nate McCall ||June 2016 ||Last Pickle|| Project chair ||
  ||Jake Farrell ||June 2016 || || PMC member ||
- ||Michael Shuler ||June 2016 ||Datastax || PMC member , Release manager ||
+ ||Michael Shuler ||June 2016 ||Datastax || PMC member, Release manager ||
  ||Michael Semb Wever ||June 2016 || Last Pickle || ||
  ||Dikang Gu ||November 2016 ||Instagram || ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=70&rev2=71

  ||Committer ||Since ||Employer ||Comments ||
  ||Avinash Lakshman ||Jan 2009 ||Facebook ||Co-author of Facebook Cassandra ||
  ||Prashant Malik ||Jan 2009 ||Facebook ||Co-author of Facebook Cassandra ||
- ||Jonathan Ellis ||Mar 2009 ||Datastax ||Project chair ||
+ ||Jonathan Ellis ||Mar 2009 ||Datastax ||PMC member||
- ||Eric Evans ||Jun 2009 ||The OpenNMS Group ||PMC member, Debian packager ||
+ ||Eric Evans ||Jun 2009 ||The OpenNMS Group ||PMC member, Debian packager , 
Release manager ||
  ||Jun Rao ||Jun 2009 ||!LinkedIn ||PMC member ||
  ||Chris Goffinet ||Sept 2009 ||Twitter ||PMC member ||
  ||Johan Oskarsson ||Nov 2009 ||Twitter ||Also a 
[[http://hadoop.apache.org/|Hadoop]] committer ||
  ||Gary Dusbabek ||Dec 2009 ||Silicon Valley Data Science ||PMC member ||
  ||Jaakko Laine ||Dec 2009 ||? || ||
  ||Brandon Williams ||Jun 2010 ||Datastax ||PMC member ||
- ||Jake Luciani ||Jan 2011 ||Datastax ||PMC member, 
[[http://thrift.apache.org/|Thrift]] PMC member ||
+ ||Jake Luciani ||Jan 2011 ||Datastax ||PMC member, Release manager, 
[[http://thrift.apache.org/|Thrift]] PMC member ||
  ||Sylvain Lebresne ||Mar 2011 ||Datastax ||PMC member, Release manager ||
  ||Pavel Yaskevich ||Aug 2011 ||Apple ||PMC member ||
  ||Vijay Parthasarathy ||Jan 2012 ||Apple || ||
@@ -32, +32 @@

  ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
  ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||
  ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
- ||Nate McCall ||June 2016 ||Last Pickle|| PMC member ||
+ ||Nate McCall ||June 2016 ||Last Pickle|| Project chair ||
+ ||Jake Farrell ||June 2016 || || PMC member ||
- ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
+ ||Michael Shuler ||June 2016 ||Datastax || PMC member , Release manager ||
  ||Michael Semb Wever ||June 2016 || Last Pickle || ||
  ||Dikang Gu ||November 2016 ||Instagram || ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||


[jira] [Commented] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866774#comment-15866774
 ] 

Ariel Weisberg commented on CASSANDRA-13219:


Committed as 
[9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d|https://github.com/apache/cassandra/commit/9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d]

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).
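
A quick way to reproduce the report is to scan the shipped cassandra.yaml for non-ASCII bytes; a small sketch (illustrative, not part of the attached patch):

{code}
# Print every line of cassandra.yaml that contains non-ASCII bytes, e.g. the
# 0xe2-lead UTF-8 sequences (curly quotes) introduced by CASSANDRA-13090.
with open('conf/cassandra.yaml', 'rb') as f:
    for lineno, line in enumerate(f, 1):
        if any(b > 127 for b in bytearray(line)):
            print('line {}: {!r}'.format(lineno, line))
{code}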



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13219:
---
Status: Ready to Commit  (was: Patch Available)

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13219:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=69&rev2=70

  ||Dikang Gu ||November 2016 ||Instagram || ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
  ||Paulo Motta || November 2016 ||Datastax || ||
- ||Sankalp Kohli || November 2016 ||Apple || ||
+ ||Sankalp Kohli || November 2016 ||Apple || PMC member ||
  ||Stefan Podkowinski ||February 2017 ||Independent || ||
  ||Ariel Weisberg ||February 2017 ||Apple || ||
  ||Blake Eggleston ||February 2017 ||Apple || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=68&rev2=69

  ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
  ||Nate McCall ||June 2016 ||Last Pickle|| PMC member ||
  ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
+ ||Michael Semb Wever ||June 2016 || Last Pickle || ||
+ ||Dikang Gu ||November 2016 ||Instagram || ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
  ||Paulo Motta || November 2016 ||Datastax || ||
  ||Sankalp Kohli || November 2016 ||Apple || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=67&rev2=68

  ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
  ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||
  ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
+ ||Nate McCall ||June 2016 ||Last Pickle|| PMC member ||
  ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
  ||Paulo Motta || November 2016 ||Datastax || ||
+ ||Sankalp Kohli || November 2016 ||Apple || ||
  ||Stefan Podkowinski ||February 2017 ||Independent || ||
  ||Ariel Weisberg ||February 2017 ||Apple || ||
  ||Blake Eggleston ||February 2017 ||Apple || ||


[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-14 Thread aweisberg
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23a1dee4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23a1dee4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23a1dee4

Branch: refs/heads/trunk
Commit: 23a1dee45f43ea241deb6b677d8c42cb3e9d45a0
Parents: 702ec08 82943d6
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:13:39 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:13:39 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23a1dee4/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 063a0b7,790dfd7..90e28b2
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -1173,34 -959,12 +1173,34 @@@ gc_warn_threshold_in_ms: 100
  # as corrupted.
  # max_value_size_in_mb: 256
  
 +# Back-pressure settings #
 +# If enabled, the coordinator will apply the back-pressure strategy specified 
below to each mutation
 +# sent to replicas, with the aim of reducing pressure on overloaded replicas.
 +back_pressure_enabled: false
 +# The back-pressure strategy applied.
 +# The default implementation, RateBasedBackPressure, takes three arguments:
 +# high ratio, factor, and flow type, and uses the ratio between incoming 
mutation responses and outgoing mutation requests.
 +# If below high ratio, outgoing mutations are rate limited according to the 
incoming rate decreased by the given factor;
 +# if above high ratio, the rate limiting is increased by the given factor;
 +# such factor is usually best configured between 1 and 10, use larger values 
for a faster recovery
 +# at the expense of potentially more dropped mutations;
 +# the rate limiting is applied according to the flow type: if FAST, it's rate 
limited at the speed of the fastest replica,
 +# if SLOW at the speed of the slowest one.
 +# New strategies can be added. Implementors need to implement 
org.apache.cassandra.net.BackpressureStrategy and
 +# provide a public constructor accepting a Map.
 +back_pressure_strategy:
 +- class_name: org.apache.cassandra.net.RateBasedBackPressure
 +  parameters:
 +- high_ratio: 0.90
 +  factor: 5
 +  flow: FAST
 +
  # Coalescing Strategies #
  # Coalescing multiples messages turns out to significantly boost message 
processing throughput (think doubling or more).
- # On bare metal, the floor for packet processing throughput is high enough 
that many applications won’t notice, but in
+ # On bare metal, the floor for packet processing throughput is high enough 
that many applications won't notice, but in
  # virtualized environments, the point at which an application can be bound by 
network packet processing can be
- # surprisingly low compared to the throughput of task processing that is 
possible inside a VM. It’s not that bare metal
- # doesn’t benefit from coalescing messages, it’s that the number of 
packets a bare metal network interface can process
+ # surprisingly low compared to the throughput of task processing that is 
possible inside a VM. It's not that bare metal
+ # doesn't benefit from coalescing messages, it's that the number of packets a 
bare metal network interface can process
  # is sufficient for many applications such that no load starvation is 
experienced even without coalescing.
  # There are other benefits to coalescing network messages that are harder to 
isolate with a simple metric like messages
  # per second. By coalescing multiple tasks together, a network thread can 
process multiple messages for the cost of one
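
The RateBasedBackPressure comments in the hunk above describe a simple control rule. A toy sketch of that rule (illustrative only; the real implementation is org.apache.cassandra.net.RateBasedBackPressure and is considerably more involved):

{code}
def adjust_rate_limit(current_limit, incoming_rate, outgoing_rate,
                      high_ratio=0.90, factor=5.0):
    # Toy version of the documented rule: compare incoming mutation responses
    # to outgoing mutation requests for a replica.
    ratio = incoming_rate / float(outgoing_rate)
    if ratio < high_ratio:
        # Replica falling behind: limit outgoing mutations to the incoming
        # rate, decreased by the given factor.
        return incoming_rate / factor
    # Replica keeping up: relax the rate limit by the same factor.
    return current_limit * factor

print(adjust_rate_limit(1000.0, 600.0, 1000.0))  # 120.0 -> throttled hard
print(adjust_rate_limit(100.0, 990.0, 1000.0))   # 500.0 -> recovering
{code}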



[03/10] cassandra git commit: Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

2017-02-14 Thread aweisberg
Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

patch by Ariel Weisberg; reviewed by Jason Brown for CASSANDRA-13219


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a80f803
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a80f803
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a80f803

Branch: refs/heads/cassandra-3.11
Commit: 9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d
Parents: 5725e2c
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:09:20 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:09:20 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a80f803/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index ddb7927..41c1fb1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -892,10 +892,10 @@ windows_timer_interval: 1
 
 # Coalescing Strategies #
 # Coalescing multiples messages turns out to significantly boost message 
processing throughput (think doubling or more).
-# On bare metal, the floor for packet processing throughput is high enough 
that many applications won’t notice, but in
+# On bare metal, the floor for packet processing throughput is high enough 
that many applications won't notice, but in
 # virtualized environments, the point at which an application can be bound by 
network packet processing can be
-# surprisingly low compared to the throughput of task processing that is 
possible inside a VM. It’s not that bare metal
-# doesn’t benefit from coalescing messages, it’s that the number of 
packets a bare metal network interface can process
+# surprisingly low compared to the throughput of task processing that is 
possible inside a VM. It's not that bare metal
+# doesn't benefit from coalescing messages, it's that the number of packets a 
bare metal network interface can process
 # is sufficient for many applications such that no load starvation is 
experienced even without coalescing.
 # There are other benefits to coalescing network messages that are harder to 
isolate with a simple metric like messages
 # per second. By coalescing multiple tasks together, a network thread can 
process multiple messages for the cost of one



[04/10] cassandra git commit: Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

2017-02-14 Thread aweisberg
Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

patch by Ariel Weisberg; reviewed by Jason Brown for CASSANDRA-13219


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a80f803
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a80f803
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a80f803

Branch: refs/heads/trunk
Commit: 9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d
Parents: 5725e2c
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:09:20 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:09:20 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a80f803/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index ddb7927..41c1fb1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -892,10 +892,10 @@ windows_timer_interval: 1
 
 # Coalescing Strategies #
 # Coalescing multiples messages turns out to significantly boost message processing throughput (think doubling or more).
-# On bare metal, the floor for packet processing throughput is high enough that many applications won’t notice, but in
+# On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
 # virtualized environments, the point at which an application can be bound by network packet processing can be
-# surprisingly low compared to the throughput of task processing that is possible inside a VM. It’s not that bare metal
-# doesn’t benefit from coalescing messages, it’s that the number of packets a bare metal network interface can process
+# surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+# doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
 # is sufficient for many applications such that no load starvation is experienced even without coalescing.
 # There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
 # per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one



[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-02-14 Thread aweisberg
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/48bfc8e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/48bfc8e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/48bfc8e8

Branch: refs/heads/trunk
Commit: 48bfc8e8dcd926ea547211fd959cebba8c0e027b
Parents: 3f3db2d 23a1dee
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:14:15 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:14:15 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/48bfc8e8/conf/cassandra.yaml
--



[02/10] cassandra git commit: Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

2017-02-14 Thread aweisberg
Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

patch by Ariel Weisberg; reviewed by Jason Brown for CASSANDRA-13219


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a80f803
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a80f803
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a80f803

Branch: refs/heads/cassandra-3.0
Commit: 9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d
Parents: 5725e2c
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:09:20 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:09:20 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a80f803/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index ddb7927..41c1fb1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -892,10 +892,10 @@ windows_timer_interval: 1
 
 # Coalescing Strategies #
 # Coalescing multiples messages turns out to significantly boost message processing throughput (think doubling or more).
-# On bare metal, the floor for packet processing throughput is high enough that many applications won’t notice, but in
+# On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
 # virtualized environments, the point at which an application can be bound by network packet processing can be
-# surprisingly low compared to the throughput of task processing that is possible inside a VM. It’s not that bare metal
-# doesn’t benefit from coalescing messages, it’s that the number of packets a bare metal network interface can process
+# surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+# doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
 # is sufficient for many applications such that no load starvation is experienced even without coalescing.
 # There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
 # per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one



[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aweisberg
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82943d6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82943d6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82943d6a

Branch: refs/heads/cassandra-3.11
Commit: 82943d6aa7564222f43e6c6f9e6d599ebd9dbbe2
Parents: 3d01e90 9a80f80
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:12:39 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:12:39 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82943d6a/conf/cassandra.yaml
--



[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aweisberg
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82943d6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82943d6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82943d6a

Branch: refs/heads/trunk
Commit: 82943d6aa7564222f43e6c6f9e6d599ebd9dbbe2
Parents: 3d01e90 9a80f80
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:12:39 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:12:39 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82943d6a/conf/cassandra.yaml
--



[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-14 Thread aweisberg
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82943d6a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82943d6a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82943d6a

Branch: refs/heads/cassandra-3.0
Commit: 82943d6aa7564222f43e6c6f9e6d599ebd9dbbe2
Parents: 3d01e90 9a80f80
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:12:39 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:12:39 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82943d6a/conf/cassandra.yaml
--



[01/10] cassandra git commit: Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

2017-02-14 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 5725e2c42 -> 9a80f803c
  refs/heads/cassandra-3.0 3d01e9061 -> 82943d6aa
  refs/heads/cassandra-3.11 702ec088f -> 23a1dee45
  refs/heads/trunk 3f3db2d40 -> 48bfc8e8d


Remove non-ascii characters from cassandra.yaml introduced by CASSANDRA-13090

patch by Ariel Weisberg; reviewed by Jason Brown for CASSANDRA-13219


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a80f803
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a80f803
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a80f803

Branch: refs/heads/cassandra-2.2
Commit: 9a80f803c2ec9a4a74cb8a99293dc81ef3dc183d
Parents: 5725e2c
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:09:20 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:09:20 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a80f803/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index ddb7927..41c1fb1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -892,10 +892,10 @@ windows_timer_interval: 1
 
 # Coalescing Strategies #
 # Coalescing multiples messages turns out to significantly boost message processing throughput (think doubling or more).
-# On bare metal, the floor for packet processing throughput is high enough that many applications won’t notice, but in
+# On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
 # virtualized environments, the point at which an application can be bound by network packet processing can be
-# surprisingly low compared to the throughput of task processing that is possible inside a VM. It’s not that bare metal
-# doesn’t benefit from coalescing messages, it’s that the number of packets a bare metal network interface can process
+# surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+# doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
 # is sufficient for many applications such that no load starvation is experienced even without coalescing.
 # There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
 # per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one



[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-14 Thread aweisberg
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/23a1dee4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/23a1dee4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/23a1dee4

Branch: refs/heads/cassandra-3.11
Commit: 23a1dee45f43ea241deb6b677d8c42cb3e9d45a0
Parents: 702ec08 82943d6
Author: Ariel Weisberg 
Authored: Tue Feb 14 15:13:39 2017 -0500
Committer: Ariel Weisberg 
Committed: Tue Feb 14 15:13:39 2017 -0500

--
 conf/cassandra.yaml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/23a1dee4/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 063a0b7,790dfd7..90e28b2
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -1173,34 -959,12 +1173,34 @@@ gc_warn_threshold_in_ms: 100
  # as corrupted.
  # max_value_size_in_mb: 256
  
 +# Back-pressure settings #
 +# If enabled, the coordinator will apply the back-pressure strategy specified below to each mutation
 +# sent to replicas, with the aim of reducing pressure on overloaded replicas.
 +back_pressure_enabled: false
 +# The back-pressure strategy applied.
 +# The default implementation, RateBasedBackPressure, takes three arguments:
 +# high ratio, factor, and flow type, and uses the ratio between incoming mutation responses and outgoing mutation requests.
 +# If below high ratio, outgoing mutations are rate limited according to the incoming rate decreased by the given factor;
 +# if above high ratio, the rate limiting is increased by the given factor;
 +# such factor is usually best configured between 1 and 10, use larger values for a faster recovery
 +# at the expense of potentially more dropped mutations;
 +# the rate limiting is applied according to the flow type: if FAST, it's rate limited at the speed of the fastest replica,
 +# if SLOW at the speed of the slowest one.
 +# New strategies can be added. Implementors need to implement org.apache.cassandra.net.BackpressureStrategy and
 +# provide a public constructor accepting a Map<String, Object>.
 +back_pressure_strategy:
 +    - class_name: org.apache.cassandra.net.RateBasedBackPressure
 +      parameters:
 +        - high_ratio: 0.90
 +          factor: 5
 +          flow: FAST
 +
  # Coalescing Strategies #
  # Coalescing multiples messages turns out to significantly boost message processing throughput (think doubling or more).
- # On bare metal, the floor for packet processing throughput is high enough that many applications won’t notice, but in
+ # On bare metal, the floor for packet processing throughput is high enough that many applications won't notice, but in
  # virtualized environments, the point at which an application can be bound by network packet processing can be
- # surprisingly low compared to the throughput of task processing that is possible inside a VM. It’s not that bare metal
- # doesn’t benefit from coalescing messages, it’s that the number of packets a bare metal network interface can process
+ # surprisingly low compared to the throughput of task processing that is possible inside a VM. It's not that bare metal
+ # doesn't benefit from coalescing messages, it's that the number of packets a bare metal network interface can process
  # is sufficient for many applications such that no load starvation is experienced even without coalescing.
  # There are other benefits to coalescing network messages that are harder to isolate with a simple metric like messages
  # per second. By coalescing multiple tasks together, a network thread can process multiple messages for the cost of one

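[Editorial note: the yaml comment block above describes the rate-based algorithm only in prose. As a rough illustration, here is a minimal Python sketch of that feedback loop, assuming the default high_ratio=0.90 and factor=5; the exact scaling arithmetic is an assumption for illustration, not the committed org.apache.cassandra.net.RateBasedBackPressure code.]

{code}
# Hedged sketch of the back-pressure adjustment described above.
# The scaling arithmetic is an assumption, not Cassandra's Java code.
HIGH_RATIO = 0.90
FACTOR = 5

def next_rate_limit(current_limit, incoming_responses, outgoing_requests):
    """Return a new outgoing-mutation rate limit (mutations/sec)."""
    ratio = incoming_responses / max(outgoing_requests, 1)
    if ratio < HIGH_RATIO:
        # Replica is falling behind: limit outgoing mutations to the
        # observed incoming rate, decreased by the factor.
        return incoming_responses / FACTOR
    # Replica keeps up: relax the limit by the factor to recover.
    return current_limit * FACTOR

# 450 responses against 1000 requests -> ratio 0.45 -> throttle to 90/s
print(next_rate_limit(1000.0, 450.0, 1000.0))
{code}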


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff=65=66

  ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
  ||Paulo Motta || November 2016 ||Datastax || ||
- ||Stefan Pokowinski ||February 2017 ||Independant || ||
+ ||Stefan Pokowinski ||February 2017 ||Independent || ||
  ||Ariel Weisberg ||February 2017 ||Apple || ||
  ||Blake Eggleston ||February 2017 ||Apple || ||
  ||Alex Petrov ||February 2017 ||Datastax || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff=64=65

  ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
  ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
+ ||Paulo Motta || November 2016 ||Datastax || ||
+ ||Stefan Pokowinski ||February 2017 ||Independant || ||
  ||Ariel Weisberg ||February 2017 ||Apple || ||
  ||Blake Eggleston ||February 2017 ||Apple || ||
  ||Alex Petrov ||February 2017 ||Datastax || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff=63=64

  ||Benjamin Lerer ||Jul 2015 ||Datastax || ||
  ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
  ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||
- ||Jeff Jirsa ||June 2016 ||!CrowdStrike || PMC member ||
+ ||Jeff Jirsa ||June 2016 ||Apple|| PMC member ||
  ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
  ||Branimir Lambov ||November 2016 ||Datastax || ||
  ||Ariel Weisberg ||February 2017 ||Apple || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff=62=63

  ||Yuki Morishita ||May 2012 ||Datastax || ||
  ||Aleksey Yeschenko ||Nov 2012 ||Datastax ||PMC member ||
  ||Jason Brown ||Feb 2013 ||Apple || PMC member ||
- ||Marcus Eriksson ||Apr 2013 ||Datastax || ||
+ ||Marcus Eriksson ||Apr 2013 ||Apple || ||
  ||Mikhail Stepura ||Jan 2014 ||Apple || ||
  ||Tyler Hobbs ||Mar 2014 ||Datastax ||PMC member ||
  ||Benedict Elliott Smith ||May 2014 ||Vast || ||
  ||Josh Mckenzie ||Jul 2014 ||Datastax || ||
  ||Robert Stupp ||Jan 2015 ||Datastax || ||
- ||Sam Tunnicliffe ||May 2015 ||Datastax || ||
+ ||Sam Tunnicliffe ||May 2015 ||Apple || ||
  ||Benjamin Lerer ||Jul 2015 ||Datastax || ||
  ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
  ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||


[Cassandra Wiki] Update of "Committers" by JasonBrown

2017-02-14 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Committers" page has been changed by JasonBrown:
https://wiki.apache.org/cassandra/Committers?action=diff=61=62

  ||Dave Brosius ||May 2012 ||Independent ||PMC member, also a 
[[http://commons.apache.org|Commons]] committer ||
  ||Yuki Morishita ||May 2012 ||Datastax || ||
  ||Aleksey Yeschenko ||Nov 2012 ||Datastax ||PMC member ||
- ||Jason Brown ||Feb 2013 ||Apple || ||
+ ||Jason Brown ||Feb 2013 ||Apple || PMC member ||
  ||Marcus Eriksson ||Apr 2013 ||Datastax || ||
  ||Mikhail Stepura ||Jan 2014 ||Apple || ||
  ||Tyler Hobbs ||Mar 2014 ||Datastax ||PMC member ||
@@ -31, +31 @@

  ||Benjamin Lerer ||Jul 2015 ||Datastax || ||
  ||Carl Yeksigian ||Jan 2016 ||Datastax ||Also a 
[[http://thrift.apache.org|Thrift]] committer ||
  ||Stefania Alborghetti ||Apr 2016 ||Datastax || ||
- ||Jeff Jirsa ||June 2016 ||!CrowdStrike || ||
+ ||Jeff Jirsa ||June 2016 ||!CrowdStrike || PMC member ||
- ||Michael Shuler ||June 2016 ||Datastax || ||
+ ||Michael Shuler ||June 2016 ||Datastax || PMC member ||
+ ||Branimir Lambov ||November 2016 ||Datastax || ||
+ ||Ariel Weisberg ||February 2017 ||Apple || ||
+ ||Blake Eggleston ||February 2017 ||Apple || ||
+ ||Alex Petrov ||February 2017 ||Datastax || ||
+ ||Joel Knighton ||February 2017 || Datastax || ||
  
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}


[jira] [Commented] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866708#comment-15866708
 ] 

Ariel Weisberg commented on CASSANDRA-13219:


This merged forward cleanly, but just to make sure it compiles I had cassci run 
the unit tests and dtests for 2.2, 3.0, and 3.11. 

https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-2.2-testall/1/
https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-3.0-testall/1/
https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-3.11-testall/1/

The dtests failed when I started them initially:
https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-3.0-dtest/1/
{noformat}
[artifact:dependencies] Transferring 4K from central
[artifact:dependencies] Downloading: 
commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.pom 
from repository apache at 
https://repository.apache.org/content/repositories/releases
Err: 




Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Aborted by  : False
Logical operation result is FALSE
{noformat}
It's not clear why, but I restarted one of them; that run is in progress and seems happy.

I'll wait for that one to pass, and if that is good enough for review, I'll commit.
https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-3.0-dtest/2/

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13219:
---
Status: Patch Available  (was: In Progress)

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-13219:
--

Assignee: Philip Thompson  (was: Ariel Weisberg)

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reassigned CASSANDRA-13219:
--

Assignee: Ariel Weisberg  (was: Philip Thompson)

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1587#comment-1587
 ] 

Jason Brown commented on CASSANDRA-13219:
-

+1. I checked it via hexdump: in the original I can see the UTF-8-encoded values, and with the new patch I see the ASCII values.

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866577#comment-15866577
 ] 

Jeremiah Jordan edited comment on CASSANDRA-13219 at 2/14/17 8:39 PM:
--

+1 fix LGTM

{code}
$ pcregrep --color='auto' -n '[^\x00-\x7F]' conf/cassandra.yaml
964:# On bare metal, the floor for packet processing throughput is high enough that many applications won’t notice, but in
966:# surprisingly low compared to the throughput of task processing that is possible inside a VM. It’s not that bare metal
967:# doesn’t benefit from coalescing messages, it’s that the number of packets a bare metal network interface can process
$ git apply ~/Downloads/utf8-to-ascii_yaml.patch
$ pcregrep --color='auto' -n '[^\x00-\x7F]' conf/cassandra.yaml
$
{code}
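[Editorial note: for anyone without pcregrep installed, a hypothetical Python equivalent of the same check, assumed to be run from the source tree root:]

{code}
# Hypothetical equivalent of the pcregrep check above: print every
# line of cassandra.yaml that contains a byte outside 0x00-0x7F.
with open('conf/cassandra.yaml', 'rb') as f:
    for n, line in enumerate(f, 1):
        if any(b > 0x7F for b in line):
            print(n, line.decode('utf-8', errors='replace').rstrip())
{code}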


was (Author: jjordan):
+1 fix LGTM

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13114) Upgrade netty to 4.0.44 to fix memory leak with client encryption

2017-02-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866605#comment-15866605
 ] 

Robert Stupp commented on CASSANDRA-13114:
--

Well, the new netty version _should_ not hurt. But if you run into this 
SSL/netty issue, it hurts a lot.

> Upgrade netty to 4.0.44 to fix memory leak with client encryption
> -
>
> Key: CASSANDRA-13114
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13114
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van der Woerdt
>Assignee: Stefan Podkowinski
>Priority: Blocker
> Fix For: 2.1.17, 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: 13114_netty-4.0.44_2.x-3.0.patch, 
> 13114_netty-4.0.44_3.11.patch
>
>
> https://issues.apache.org/jira/browse/CASSANDRA-12032 updated netty for 
> Cassandra 3.8, but this wasn't backported. Netty 4.0.23, which ships with 
> Cassandra 3.0.x, has some serious bugs around memory handling for SSL 
> connections.
> It would be nice if both were updated to 4.0.42, a version released this year.
> 4.0.23 makes it impossible for me to run SSL, because nodes run out of memory 
> every ~30 minutes. This was fixed in 4.0.27.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-9639) size_estimates is inaccurate in multi-dc clusters

2017-02-14 Thread Scott Bale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866591#comment-15866591
 ] 

Scott Bale commented on CASSANDRA-9639:
---

If at all possible, could we add this bit of improved logging to {{ActiveRepairService.java}} as part of this ticket? We have been frequently running into this "Requested range intersects..." error as we try to repair the cluster, which is what led us to this ticket.

{code}
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index bde5313005..18d43ed56f 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -220,7 +220,8 @@ public class ActiveRepairService implements IEndpointStateChangeSubscriber, IFai
             }
             else if (range.intersects(toRepair))
             {
-                throw new IllegalArgumentException("Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair");
+                throw new IllegalArgumentException(String.format("Requested range %s intersects a local range %s but is not fully contained in one; this would lead to imprecise repair. keyspace: %s", toRepair.toString(), range.toString(), keyspaceName));
             }
         }
         if (rangeSuperSet == null || !replicaSets.containsKey(rangeSuperSet))
{code}

> size_estimates is inaccurate in multi-dc clusters
> 
>
> Key: CASSANDRA-9639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9639
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.0.x
>
>
> CASSANDRA-7688 introduced size_estimates to replace the thrift 
> describe_splits_ex command.
> Users have reported seeing estimates that are widely off in multi-dc clusters.
> system.size_estimates show the wrong range_start / range_end



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866577#comment-15866577
 ] 

Jeremiah Jordan commented on CASSANDRA-13219:
-

+1 fix LGTM

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13219:
---
Attachment: utf8-to-ascii_yaml.patch

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: utf8-to-ascii_yaml.patch
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13219:
---
Reviewer: Jason Brown

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after

2017-02-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-13219:

Component/s: Configuration

> Cassandra.yaml now unicode instead of ascii after 
> --
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after

2017-02-14 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-13219:
---

 Summary: Cassandra.yaml now unicode instead of ascii after 
 Key: CASSANDRA-13219
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0


After CASSANDRA-13090, which was commit 
5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
characters, specifically 
[0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
 Previously, it was only ascii.

This is an admittedly minor change, but it is breaking. It affects (at least) a 
subset of python yaml parsing tools (which is a large number of tools that use 
C*).
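
[Editorial note: to make the failure mode concrete, here is a hedged sketch (PyYAML assumed; not code from any of the affected tools) of how an ascii-only consumer breaks on the new bytes while a UTF-8-aware one does not:]

{code}
# Sketch only; assumes PyYAML and a pre-patch conf/cassandra.yaml.
import yaml

raw = open('conf/cassandra.yaml', 'rb').read()
try:
    raw.decode('ascii')  # what an ascii-only tool effectively does
except UnicodeDecodeError as e:
    # Pre-patch, this trips on the 0xe2 lead byte of the UTF-8 punctuation.
    print('non-ascii byte 0x%02x at offset %d' % (raw[e.start], e.start))

config = yaml.safe_load(raw.decode('utf-8'))  # UTF-8 decoding still parses
{code}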



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13219) Cassandra.yaml now unicode instead of ascii after 13090

2017-02-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-13219:

Summary: Cassandra.yaml now unicode instead of ascii after 13090  (was: 
Cassandra.yaml now unicode instead of ascii after )

> Cassandra.yaml now unicode instead of ascii after 13090
> ---
>
> Key: CASSANDRA-13219
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13219
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Philip Thompson
>Assignee: Ariel Weisberg
>Priority: Minor
> Fix For: 2.2.9, 3.0.11, 3.11.0, 4.0
>
>
> After CASSANDRA-13090, which was commit 
> 5725e2c422d21d8efe5ae3bc4389842939553650, cassandra.yaml now has unicode 
> characters, specifically 
> [0xe2|http://utf8-chartable.de/unicode-utf8-table.pl?start=8320=128=2=0x].
>  Previously, it was only ascii.
> This is an admittedly minor change, but it is breaking. It affects (at least) 
> a subset of python yaml parsing tools (which is a large number of tools that 
> use C*).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866518#comment-15866518
 ] 

Stefan Podkowinski commented on CASSANDRA-13153:


CASSANDRA-13153 is not just about redundant re-streaming. It's about streaming only _partial_ data for partitions or cells, depending on whether an individual sstable has been affected or not. If it has, you may end up leaking data that is covered by a tombstone back to unrepaired, while the tombstone in the unaffected sstable stays in repaired, and having the data streamed from there to all other nodes (which may have already compacted the data and tombstone away). Or am I missing something here?

With CASSANDRA-9143 it's not _that_ bad, since you start on unrepaired, recent data, and the next incremental run will indeed fix the data that has been left in unrepaired before, given that it runs within gc_grace. But with CASSANDRA-13153 you might leak arbitrarily old data into unrepaired, which should never happen.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace.  Since Nodes #1 and #3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable is marked as unrepaired.  And the data will get repaired 
> against the other two nodes.
>   Node 1 has data.
>   Node 2 

[jira] [Comment Edited] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-14 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866448#comment-15866448
 ] 

Blake Eggleston edited comment on CASSANDRA-13153 at 2/14/17 7:34 PM:
--

I think this can also happen just by running incremental repair only, because 
of the way it leaks data into the unrepaired sstable bucket. This has been 
fixed in CASSANDRA-9143… but that was only committed to trunk, since it’s not a 
trivial change. Unfortunately, the only way to avoid this in pre-4.0 clusters 
is to just not run incremental repair.

This may not be as bad as it sounds though, since what pre CASSANDRA-9143 
incremental repair gained in validation time, it likely lost in redundant 
re-streaming of otherwise repaired data. If you compacted a large sstable that 
was also involved in a repair, the entire contents of that sstable would end up 
getting streamed to every other replica on the next incremental repair.


was (Author: bdeggleston):
I think this can also happen just by running incremental repair only, because 
of the way it leaks data into the unrepaired sstable bucket. This has been 
fixed in CASSANDRA-9143… but that was only committed to trunk, since it’s not a 
trivial change. Unfortunately, the only way to avoid this in pre-4.0 clusters 
is to just not run incremental repair.

This may not be as bad as it sounds though, since what pre CASSANDRA-9143 
incremental repair gained in validation time, it likely lost in redundant 
re-streaming of otherwise repaired data. If you had a large sstable compacted 
during a repair, the entire thing would have to be streamed to every other 
replica on the next incremental repair.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get 

[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-14 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866448#comment-15866448
 ] 

Blake Eggleston commented on CASSANDRA-13153:
-

I think this can also happen just by running incremental repair only, because 
of the way it leaks data into the unrepaired sstable bucket. This has been 
fixed in CASSANDRA-9143… but that was only committed to trunk, since it’s not a 
trivial change. Unfortunately, the only way to avoid this in pre-4.0 clusters 
is to just not run incremental repair.

This may not be as bad as it sounds though, since what pre CASSANDRA-9143 
incremental repair gained in validation time, it likely lost in redundant 
re-streaming of otherwise repaired data. If you had a large sstable compacted 
during a repair, the entire thing would have to be streamed to every other 
replica on the next incremental repair.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace.  Since Nodes #1 and #3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable is marked as unrepaired.  And the data will get repaired 
> against the other two nodes.
>   Node 1 has data.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 has data.
> If a read request hits Node 1 and 3, it will return data.  If 

[jira] [Commented] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-02-14 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866372#comment-15866372
 ] 

Alex Petrov commented on CASSANDRA-10786:
-

Thanks for noticing, [~omichallat]. I'll take a closer look at it!

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 3.11.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789,col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.
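
[Editorial note: a hedged client-side sketch of the flow above; all names here are illustrative, not an actual driver API. The 0x0008 flag value is taken from the spec correction noted elsewhere in this thread.]

{code}
# Illustrative sketch only; names are hypothetical, not a driver API.
METADATA_CHANGED = 0x0008  # ROWS metadata flag (per the spec correction)

class Prepared:
    def __init__(self, statement_id, result_metadata_id, column_specs):
        self.statement_id = statement_id
        self.result_metadata_id = result_metadata_id
        self.column_specs = column_specs

def handle_rows(stmt, flags, new_metadata_id, new_column_specs, rows):
    """Decode a ROWS response, refreshing stale result metadata first."""
    if flags & METADATA_CHANGED:
        # Adopt the new metadata before decoding any rows, so the next
        # EXECUTE carries the fresh result_metadata_id.
        stmt.result_metadata_id = new_metadata_id
        stmt.column_specs = new_column_specs
    return [dict(zip(stmt.column_specs, row)) for row in rows]
{code}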



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-02-14 Thread Olivier Michallat (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866365#comment-15866365
 ] 

Olivier Michallat edited comment on CASSANDRA-10786 at 2/14/17 6:42 PM:


There are a couple of minor issues in {{native_protocol_v5.spec}}. In the ROWS 
response metadata:
* if both paging state and new metadata id are present, the paging state comes 
first, not second
* the metadata_changed flag is 0x0008, not 0x0005
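
To make the corrected layout concrete, a minimal decoding sketch (the helper names {{readBytes}}/{{readShortBytes}} and fields are illustrative; 0x0002 is the existing Has_more_pages flag carried over from v4):

{code}
static final int HAS_MORE_PAGES   = 0x0002;
static final int METADATA_CHANGED = 0x0008; // corrected value

private ByteBuffer pagingState;
private ByteBuffer newResultMetadataId;

void decodeRowsMetadata(int flags, ByteBuf in)
{
    if ((flags & HAS_MORE_PAGES) != 0)
        pagingState = readBytes(in);              // paging state comes first...
    if ((flags & METADATA_CHANGED) != 0)
        newResultMetadataId = readShortBytes(in); // ...then the new metadata id
}
{code}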


was (Author: omichallat):
There are a couple of minor issues in {{native_protocol_v5.spec}}:
* the paging state is before the new metadata id, not after
* the metadata_changed flag is 0x0008, not 0x0005

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 3.11.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789, col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-10786) Include hash of result set metadata in prepared statement id

2017-02-14 Thread Olivier Michallat (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866365#comment-15866365
 ] 

Olivier Michallat commented on CASSANDRA-10786:
---

There are a couple of minor issues in {{native_protocol_v5.spec}}:
* the paging state is before the new metadata id, not after
* the metadata_changed flag is 0x0008, not 0x0005

> Include hash of result set metadata in prepared statement id
> 
>
> Key: CASSANDRA-10786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10786
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, protocolv5
> Fix For: 3.11.x
>
>
> *_Initial description:_*
> This is a follow-up to CASSANDRA-7910, which was about invalidating a 
> prepared statement when the table is altered, to force clients to update 
> their local copy of the metadata.
> There's still an issue if multiple clients are connected to the same host. 
> The first client to execute the query after the cache was invalidated will 
> receive an UNPREPARED response, re-prepare, and update its local metadata. 
> But other clients might miss it entirely (the MD5 hasn't changed), and they 
> will keep using their old metadata. For example:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, 
> clientA and clientB both have a cache of the metadata (columns b and c) 
> locally
> # column a gets added to the table, C* invalidates its cache entry
> # clientA sends an EXECUTE request for md5 abc123, gets UNPREPARED response, 
> re-prepares on the fly and updates its local metadata to (a, b, c)
> # prepared statement is now in C*’s cache again, with the same md5 abc123
> # clientB sends an EXECUTE request for id abc123. Because the cache has been 
> populated again, the query succeeds. But clientB still has not updated its 
> metadata, it’s still (b,c)
> One solution that was suggested is to include a hash of the result set 
> metadata in the md5. This way the md5 would change at step 3, and any client 
> using the old md5 would get an UNPREPARED, regardless of whether another 
> client already reprepared.
> -
> *_Resolution (2017/02/13):_*
> The following changes were made to native protocol v5:
> - the PREPARED response includes {{result_metadata_id}}, a hash of the result 
> set metadata.
> - every EXECUTE message must provide {{result_metadata_id}} in addition to 
> the prepared statement id. If it doesn't match the current one on the server, 
> it means the client is operating on a stale schema.
> - to notify the client, the server returns a ROWS response with a new 
> {{Metadata_changed}} flag, the new {{result_metadata_id}} and the updated 
> result metadata (this overrides the {{No_metadata}} flag, even if the client 
> had requested it)
> - the client updates its copy of the result metadata before it decodes the 
> results.
> So the scenario above would now look like:
> # {{SELECT * ...}} statement is prepared in Cassandra with md5 abc123, and 
> result set (b, c) that hashes to cde456
> # column a gets added to the table, C* does not invalidate its cache entry, 
> but only updates the result set to (a, b, c) which hashes to fff789
> # client sends an EXECUTE request for (statementId=abc123, resultId=cde456) 
> and skip_metadata flag
> # cde456!=fff789, so C* responds with ROWS(..., no_metadata=false, 
> metadata_changed=true, new_metadata_id=fff789, col specs for (a,b,c))
> # client updates its column specifications, and will send the next execute 
> queries with (statementId=abc123, resultId=fff789)
> This works the same with multiple clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-02-14 Thread Sandeep Tamhankar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866074#comment-15866074
 ] 

Sandeep Tamhankar commented on CASSANDRA-13218:
---

Where is it specified that these are supposed to be ints? I was under the 
impression that, since we're transmitting zigzag-encoded vint values and that 
format supports a 64-bit range, these attributes are intended to have a signed 
64-bit range. Is there a particular reason to restrict it?
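
For context, what's on the wire versus what the {{Duration}} type accepts, sketched below. The zigzag decode is the standard formula; the validation and its message are illustrative, not the actual patch.

{code}
// The vint wire format can carry a full signed 64-bit range, so any 32-bit
// limit on months/days is a Duration-type decision, not a transport one.
static long zigzagDecode(long encoded)
{
    return (encoded >>> 1) ^ -(encoded & 1);
}

// Illustrative validation producing a clearer overflow message:
static int validateMonthsOrDays(long value, String field)
{
    if (value < Integer.MIN_VALUE || value > Integer.MAX_VALUE)
        throw new IllegalArgumentException(
            String.format("The number of %s must fit in a 32-bit integer but was: %d", field, value));
    return (int) value;
}
{code}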

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13217) minor bugs related to CASSANDRA-9143

2017-02-14 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866058#comment-15866058
 ] 

Jeff Jirsa commented on CASSANDRA-13217:


Blake committed as {{3f3db2d40d6b5edbf079b917953a30bcc1209d25}} 

> minor bugs related to CASSANDRA-9143
> 
>
> Key: CASSANDRA-13217
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13217
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> We found a few minor bugs in an internal review:
> * -incorrect log argument order 
> [here|https://github.com/apache/cassandra/blob/edcbef3e343778b4d5affe019f64c89da2a13aa2/src/java/org/apache/cassandra/streaming/compress/CompressedStreamReader.java#L75]-
> * {{SSTableReader#intersects}} should use Bounds, not Range 
> ([here|https://github.com/apache/cassandra/blob/edcbef3e343778b4d5affe019f64c89da2a13aa2/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L1761])
> * {{CompactionStrategyManager#validateForCompaction}} doesn't prevent 
> sstables from different repair session from being compacted together 
> [here|https://github.com/apache/cassandra/blob/edcbef3e343778b4d5affe019f64c89da2a13aa2/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java#L1761]
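
On the second bullet, a sketch of why {{Bounds}} matters there (illustrative, not the committed patch): a {{Range}} excludes its left token, so an sstable whose first key equals the range start could be missed, while {{Bounds}} is inclusive on both ends.

{code}
// An sstable covers the inclusive token span [first, last], so intersection
// tests must use Bounds rather than Range.
public boolean intersects(Collection<Range<Token>> ranges)
{
    Bounds<Token> sstableBounds = new Bounds<>(first.getToken(), last.getToken());
    for (Range<Token> r : ranges)
        if (r.intersects(sstableBounds))
            return true;
    return false;
}
{code}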



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12653) In-flight shadow round requests

2017-02-14 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866010#comment-15866010
 ] 

Jeff Jirsa commented on CASSANDRA-12653:


Fair points all around. I've deployed this as-is into a decent-sized test 
cluster just to see how it behaves - first impressions are good compared to 
3.0.10. Would love to see this land in 3.0.11 - if [~jkni] doesn't get to it in 
time for cutting 3.0.11, I'll try to get a more formal review pass on it.


> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12653-2.2.patch, 12653-3.0.patch, 12653-trunk.patch
>
>
> Bootstrapping or replacing a node in the cluster requires gathering and 
> checking some host IDs or tokens by doing a gossip "shadow round" once before 
> joining the cluster. This is done by sending a gossip SYN to all seeds until 
> we receive a response with the cluster state, from where we can move on in 
> the bootstrap process. Receiving a response marks the shadow round as done 
> and calls {{Gossiper.resetEndpointStateMap}} to clean up the received state 
> again.
> The issue here is that at this point there might be other in-flight requests, 
> and it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. the gossiper may or may not be enabled).
> One side effect is that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks might or might not execute based on whether 
> {{Gossiper.resetEndpointStateMap}} had been called by execution time, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happens:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1] 2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although it isn't pretty, I currently don't see any serious harm from this, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13132:

Status: Patch Available  (was: Open)

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13132:

Reviewer: Alex Petrov
  Status: Open  (was: Patch Available)

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-02-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13218:
---
Component/s: CQL

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-02-14 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-13218:
--

 Summary: Duration validation error is unclear in case of overflow.
 Key: CASSANDRA-13218
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer


If a user tries to insert a {{duration}} with a number of months or days that 
cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the error 
message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-02-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13218:
---
Fix Version/s: 3.11.x

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865792#comment-15865792
 ] 

Stefan Podkowinski commented on CASSANDRA-13153:


Getting back to this ticket and giving it some thought again, I'm pretty sure 
that it's not enough to disable anti-compaction for full {{-pr}} repairs. This 
will only prevent the described issue for the repair initiator node, but not 
for the other involved replicas. I'm afraid there's no way around disabling 
anti-compaction for full repairs completely to prevent this issue from 
happening.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace.  Since Nodes 1 and 3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable is marked as unrepaired.  And the data will get repaired 
> against the other two nodes.
>   Node 1 has data.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 has data.
> If a read request hits Node 1 and 3, it will return data.  If it hits 1 and 
> 2, or 2 and 3, however, it would return no data.
> Tested this with single range tokens for simplicity.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2017-02-14 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865754#comment-15865754
 ] 

Jason Brown edited comment on CASSANDRA-8457 at 2/14/17 1:14 PM:
-

rebased, and made changes wrt CASSANDRA-13090


was (Author: jasobrown):
rebased, and pulled in CASSANDRA-13090

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two per peer: one incoming and one outbound) is a 
> big contributor to context switching, especially for larger clusters.  Let's 
> look at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2017-02-14 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865754#comment-15865754
 ] 

Jason Brown commented on CASSANDRA-8457:


rebased, and pulled in CASSANDRA-13090

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two per peer: one incoming and one outbound) is a 
> big contributor to context switching, especially for larger clusters.  Let's 
> look at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865738#comment-15865738
 ] 

Benjamin Lerer commented on CASSANDRA-13132:


[~ifesdjeen] could you review?

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13132:
---
Component/s: CQL

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865736#comment-15865736
 ] 

Benjamin Lerer commented on CASSANDRA-13132:


||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:13132-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13132-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13132-trunk-dtest/]|

The patch adds the following functions:
* {{currentTimestamp}}
* {{currentDate}}
* {{currentTime}}
* {{currentTimeUUID}} (same as {{now}})
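
A hedged usage example through a driver session (connection boilerplate assumed; the function names are the ones listed above):

{code}
// The new functions can replace the old toTimestamp(now())/toDate(now()) idiom.
ResultSet rs = session.execute(
    "SELECT currentTimestamp(), currentDate(), currentTime() FROM system.local");
System.out.println(rs.one());
{code}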

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13132) Add currentTimestamp and currentDate functions

2017-02-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13132:
---
Fix Version/s: 4.x
   Status: Patch Available  (was: Open)

> Add currentTimestamp and currentDate functions
> --
>
> Key: CASSANDRA-13132
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13132
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 4.x
>
>
> Today, the only way to get the current {{timestamp}} or {{date}} is to 
> convert the output of {{now()}} using the {{toTimestamp}} and {{toDate}} 
> functions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13126) native transport protocol corruption when using SSL

2017-02-14 Thread Tom van der Woerdt (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865602#comment-15865602
 ] 

Tom van der Woerdt commented on CASSANDRA-13126:


All streams /could/ be affected. Basically what happens is that a chunk of the 
data stream (TCP) is ignored. If there's no data waiting in the buffers, the 
chunk that's ignored happens to align with the request frame, and we simply 
cause an error on the wrong stream_id, but it won't immediately affect the next 
request. If there's data waiting, we continue raising DecoderExceptions until 
there's enough memory. But once we have enough memory again, we may not have 
the read buffer aligned to the start of a frame, which will impact every future 
decoding attempt. In my case that showed as errors related to Snappy, but it 
could also cause data corruption if you're very unlucky.

In other words, as soon as you get one of these, all bets are off, because we 
don't know when the next request frame starts.
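
A minimal sketch of the "close the connection after a DecoderException" suggestion, assuming a Netty 4 inbound handler at the tail of the pipeline (not the actual Cassandra fix):

{code}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.DecoderException;

public class CloseOnDecodeFailure extends ChannelInboundHandlerAdapter
{
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
    {
        if (cause instanceof DecoderException)
            ctx.close(); // frame boundaries are unknown from here on
        else
            ctx.fireExceptionCaught(cause);
    }
}
{code}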

> native transport protocol corruption when using SSL
> ---
>
> Key: CASSANDRA-13126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13126
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tom van der Woerdt
>Priority: Critical
>
> This is a series of conditions that can result in client connections becoming 
> unusable.
> 1) Cassandra GC must be well-tuned, to have short GC pauses every minute or so
> 2) *client* SSL must be enabled and transmitting a significant amount of data
> 3) Cassandra must run with the default library versions
> 4) disableexplicitgc must be set (this is the default in the current 
> cassandra-env.sh)
> This ticket relates to CASSANDRA-13114 which is a possible workaround (but 
> not a fix) for the SSL requirement to trigger this bug.
> * Netty allocates nio.ByteBuffers for every outgoing SSL message.
> * ByteBuffers consist of two parts, the JVM object and the off-heap 
> allocation. The JVM object is small and is reclaimed in regular GC cycles; 
> the off-heap allocation gets freed only when the small JVM object is freed. 
> To avoid exploding native memory use, the JVM defaults to limiting direct 
> allocations to the max heap size. Allocating beyond that limit triggers a 
> System.gc(), a retry, and potentially an exception.
> * System.gc is a no-op under disableexplicitgc
> * This means ByteBuffers are likely to throw an exception when too many 
> objects are being allocated
> * The netty version shipped in Cassandra is broken when using SSL (see 
> CASSANDRA-13114) and causes significantly too many bytebuffers to be 
> allocated.
> This gets more complicated though.
> When /some/ clients use SSL, and others don't, the clients not using SSL can 
> still be affected by this bug, as bytebuffer starvation caused by ssl will 
> leak to other users.
> ByteBuffers are used very early on in the native protocol as well. Before 
> even being able to decode the network protocol, this error can be thrown :
> {noformat}
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct 
> buffer memory
> {noformat}
> Note that this comes back with stream_id 0, so clients end up waiting for the 
> client timeout before the query is considered failed and retried.
> A few frames later on the same connection, this appears:
> {noformat}
> Provided frame does not appear to be Snappy compressed
> {noformat}
> And after that everything errors out with:
> {noformat}
> Invalid or unsupported protocol version (54); the lowest supported version is 
> 3 and the greatest is 4
> {noformat}
> So this bug ultimately affects the binary protocol and the connection becomes 
> useless if not downright dangerous.
> I think there are several things that need to be done here.
> * CASSANDRA-13114 should be fixed (easy, and probably needs to land in 3.0.11 
> anyway)
> * Connections should be closed after a DecoderException
> * DisableExplicitGC should be removed from the default JVM arguments
> Any of these three would limit the impact to clients.
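
The direct-memory mechanics described above can be reproduced outside Cassandra with a few lines (a standalone demo, not Cassandra code):

{code}
// Run with: java -XX:MaxDirectMemorySize=64m -XX:+DisableExplicitGC DirectBufferDemo
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo
{
    public static void main(String[] args)
    {
        List<ByteBuffer> pinned = new ArrayList<>();
        while (true)
        {
            // Each direct buffer reserves off-heap memory. Once the budget is
            // exhausted the JVM tries System.gc() (a no-op under
            // -XX:+DisableExplicitGC) and then throws
            // java.lang.OutOfMemoryError: Direct buffer memory.
            pinned.add(ByteBuffer.allocateDirect(1 << 20));
        }
    }
}
{code}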



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13126) native transport protocol corruption when using SSL

2017-02-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865565#comment-15865565
 ] 

Stefan Podkowinski commented on CASSANDRA-13126:


bq. Connections should be closed after a DecoderException

We _could_ do this easily. But are you sure this exception is non-recoverable? 
Will all streams be affected? If we do, we'd have to close the whole 
connection, as we can't signal the error to individual streams without the 
stream_id in the frame that can't be decoded. Wouldn't frequently reconnecting 
clients possibly cause more memory pressure in this case and further escalate 
the issue?
 

> native transport protocol corruption when using SSL
> ---
>
> Key: CASSANDRA-13126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13126
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tom van der Woerdt
>Priority: Critical
>
> This is a series of conditions that can result in client connections becoming 
> unusable.
> 1) Cassandra GC must be well-tuned, to have short GC pauses every minute or so
> 2) *client* SSL must be enabled and transmitting a significant amount of data
> 3) Cassandra must run with the default library versions
> 4) disableexplicitgc must be set (this is the default in the current 
> cassandra-env.sh)
> This ticket relates to CASSANDRA-13114 which is a possible workaround (but 
> not a fix) for the SSL requirement to trigger this bug.
> * Netty allocates nio.ByteBuffers for every outgoing SSL message.
> * ByteBuffers consist of two parts, the JVM object and the off-heap 
> allocation. The JVM object is small and is reclaimed in regular GC cycles; 
> the off-heap allocation gets freed only when the small JVM object is freed. 
> To avoid exploding native memory use, the JVM defaults to limiting direct 
> allocations to the max heap size. Allocating beyond that limit triggers a 
> System.gc(), a retry, and potentially an exception.
> * System.gc is a no-op under disableexplicitgc
> * This means ByteBuffers are likely to throw an exception when too many 
> objects are being allocated
> * The netty version shipped in Cassandra is broken when using SSL (see 
> CASSANDRA-13114) and causes significantly too many bytebuffers to be 
> allocated.
> This gets more complicated though.
> When /some/ clients use SSL, and others don't, the clients not using SSL can 
> still be affected by this bug, as bytebuffer starvation caused by ssl will 
> leak to other users.
> ByteBuffers are used very early on in the native protocol as well. Before 
> even being able to decode the network protocol, this error can be thrown :
> {noformat}
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct 
> buffer memory
> {noformat}
> Note that this comes back with stream_id 0, so clients end up waiting for the 
> client timeout before the query is considered failed and retried.
> A few frames later on the same connection, this appears:
> {noformat}
> Provided frame does not appear to be Snappy compressed
> {noformat}
> And after that everything errors out with:
> {noformat}
> Invalid or unsupported protocol version (54); the lowest supported version is 
> 3 and the greatest is 4
> {noformat}
> So this bug ultimately affects the binary protocol and the connection becomes 
> useless if not downright dangerous.
> I think there are several things that need to be done here.
> * CASSANDRA-13114 should be fixed (easy, and probably needs to land in 3.0.11 
> anyway)
> * Connections should be closed after a DecoderException
> * DisableExplicitGC should be removed from the default JVM arguments
> Any of these three would limit the impact to clients.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13001) pluggable slow query logging / handling

2017-02-14 Thread Murukesh Mohanan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865514#comment-15865514
 ] 

Murukesh Mohanan commented on CASSANDRA-13001:
--

I have a hacky WIP patch for this, but, looking at the Datastax 
{{node_slow_log}}, I think it falls woefully short of what Datastax already 
does. The logging is currently done in the {{MonitorableImpl}} class, which 
doesn't, as far as I can tell, have access to that much information. Worse, 
even the query isn't really available, as [the comment on the function that 
provides access to 
it|https://github.com/apache/cassandra/blob/cassandra-3.10/src/java/org/apache/cassandra/db/ReadCommand.java#L616]
 notes.
Right now my changes are confined to a single file, but doing what 
{{node_slow_log}} provides would be more invasive. Thoughts? Should I try for 
it?
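
For discussion, a rough sketch of what a pluggable hook could look like given the limited information reachable from the monitoring path (entirely hypothetical, not the WIP patch):

{code}
// Implementations could write to a cluster table, statsd, graphite, etc.
// Richer fields (client address, parameters) would need the more invasive
// changes discussed above.
public interface SlowQueryReporter
{
    void report(String operationDescription, long startTimeMillis, long durationMillis);
}
{code}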

> pluggable slow query logging / handling
> ---
>
> Key: CASSANDRA-13001
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13001
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jon Haddad
>
> Currently CASSANDRA-12403 logs slow queries as DEBUG to a file.  It would be 
> better to have this as an interface through which we can log to alternative 
> locations, such as to a table on the cluster or to a remote location (statsd, 
> graphite, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12653) In-flight shadow round requests

2017-02-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865504#comment-15865504
 ] 

Stefan Podkowinski commented on CASSANDRA-12653:


bq. Stefan Podkowinski - is there some deeper purpose of moving the 
FD.instance.isAlive() check higher in MigrationTask#runMayThrow() method beyond 
"check to see if it's dead before we bother checking to see if it's worth 
sending a migration task"? Is there a reason we don't let 
MM#shouldPullSchemaFrom return false if FD says the instance is dead?

We could move FD.isAlive into MM.shouldPullSchemaFrom, yes. I'm not totally 
against it, but the log message in MigrationTask in case of a false return 
value would have to be changed, and the isAlive status should really only be 
relevant at task execution, as there's a 60-second delay after submitting it. 
So in theory you could submit a task for a node that has been dead but will be 
alive again at time of execution.

bq. Given that the shadow round is meant to just get ring state without 
changing anything, should we add an explicit check to 
MigrationManager#scheduleSchemaPull() to ensure that 
Gossiper.instance.isInShadowRound() is false before scheduling?

The MigrationManager should never issue a schema pull during the shadow round. 
If we add such a check, I'd prefer to throw an exception rather than fail 
silently and let the process run in an undefined state. On the other hand, in 
terms of separation of concerns it's not really the business of the MM to 
monitor the gossiper life-cycle.
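
To make the two options concrete, a hedged sketch (illustrative only, not the attached patches; the signature is simplified):

{code}
public static void scheduleSchemaPull(InetAddress endpoint, EndpointState state)
{
    // Fail loudly rather than silently if a pull is ever requested while the
    // gossiper is still in its shadow round.
    if (Gossiper.instance.isInShadowRound())
        throw new IllegalStateException("schema pull scheduled during shadow round for " + endpoint);
    // Liveness here is only a hint: the task runs after a 60-second delay and
    // must re-check FailureDetector.instance.isAlive(endpoint) at execution.
    if (!FailureDetector.instance.isAlive(endpoint))
        return;
    // ... existing version comparison and delayed task submission ...
}
{code}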

> In-flight shadow round requests
> ---
>
> Key: CASSANDRA-12653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12653
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: 12653-2.2.patch, 12653-3.0.patch, 12653-trunk.patch
>
>
> Bootstrapping or replacing a node in the cluster requires gathering and 
> checking some host IDs or tokens by doing a gossip "shadow round" once before 
> joining the cluster. This is done by sending a gossip SYN to all seeds until 
> we receive a response with the cluster state, from where we can move on in 
> the bootstrap process. Receiving a response marks the shadow round as done 
> and calls {{Gossiper.resetEndpointStateMap}} to clean up the received state 
> again.
> The issue here is that at this point there might be other in-flight requests, 
> and it's very likely that shadow round responses from other seeds will be 
> received afterwards, while the current state of the bootstrap process doesn't 
> expect this to happen (e.g. the gossiper may or may not be enabled).
> One side effect is that MigrationTasks are spawned for each shadow round 
> reply except the first. Tasks might or might not execute based on whether 
> {{Gossiper.resetEndpointStateMap}} had been called by execution time, which 
> affects the outcome of {{FailureDetector.instance.isAlive(endpoint)}} at the 
> start of the task. You'll see error log messages such as the following when 
> this happens:
> {noformat}
> INFO  [SharedPool-Worker-1] 2016-09-08 08:36:39,255 Gossiper.java:993 - 
> InetAddress /xx.xx.xx.xx is now UP
> ERROR [MigrationStage:1] 2016-09-08 08:36:39,255 FailureDetector.java:223 
> - unknown endpoint /xx.xx.xx.xx
> {noformat}
> Although it isn't pretty, I currently don't see any serious harm from this, 
> but it would be good to get a second opinion (feel free to close as "won't 
> fix").
> /cc [~Stefania] [~thobbs]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13215) Cassandra nodes startup time 20x more after upgarding to 3.x

2017-02-14 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865479#comment-15865479
 ] 

Romain Hardouin commented on CASSANDRA-13215:
-

It's related to CASSANDRA-6696, i.e. it has been present since 3.2.

Regarding {{AbstractReplicationStrategy.getAddressRanges}}, it seems to be a 
known limitation. Maybe we can now consider that it's used on a critical path:
{code}
/*
 * NOTE: this is pretty inefficient. also the inverse (getRangeAddresses) below.
 * this is fine as long as we don't use this on any critical path.
 * (fixing this would probably require merging tokenmetadata into replicationstrategy,
 * so we could cache/invalidate cleanly.)
 */
public Multimap<InetAddress, Range<Token>> getAddressRanges(TokenMetadata metadata)
{code}
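
One way to act on that NOTE, sketched under the assumption that a ring-version counter is available to key the cache on (hypothetical helper, not the attached simple-cache.patch):

{code}
// Memoize the expensive computation per strategy instance and invalidate it
// whenever the ring changes. Assumes TokenMetadata exposes a version counter
// bumped on every ring change.
private volatile long cachedRingVersion = -1;
private volatile Multimap<InetAddress, Range<Token>> cachedAddressRanges;

public Multimap<InetAddress, Range<Token>> getAddressRangesCached(TokenMetadata metadata)
{
    long version = metadata.getRingVersion();
    Multimap<InetAddress, Range<Token>> ranges = cachedAddressRanges;
    if (ranges == null || version != cachedRingVersion)
    {
        ranges = getAddressRanges(metadata); // the slow path quoted above
        cachedAddressRanges = ranges;
        cachedRingVersion = version;
    }
    return ranges;
}
{code}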

> Cassandra nodes startup time 20x more after upgarding to 3.x
> 
>
> Key: CASSANDRA-13215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13215
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cluster setup: two datacenters (dc-main, dc-backup).
> dc-main - 9 servers, no vnodes
> dc-backup - 6 servers, vnodes
>Reporter: Viktor Kuzmin
> Attachments: simple-cache.patch
>
>
> CompactionStrategyManager.getCompactionStrategyIndex is called on each sstable 
> at startup, and this function calls StorageService.getDiskBoundaries, which in 
> turn calls AbstractReplicationStrategy.getAddressRanges.
> It appears that the last function can be really slow. In our environment we 
> have 1545 tokens, and with NetworkTopologyStrategy it can make 1545*1545 
> computations in the worst case (maybe I'm wrong, but it really takes lots of 
> CPU).
> Also, this function can affect runtime later, since it is called not only 
> during startup.
> I've tried to implement a simple cache for getDiskBoundaries results and now 
> startup time is about one minute instead of 25 minutes, but I'm not sure if 
> it's a good solution.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12762) Cassandra 3.0.9 Fails both compact and repair without even debug logs

2017-02-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865395#comment-15865395
 ] 

Stefan Podkowinski commented on CASSANDRA-12762:


Logging for CorruptBlockException has already been improved in CASSANDRA-12889. 
You'd have to upgrade to 3.0.10 for that.

> Cassandra 3.0.9 Fails both compact and repair without even debug logs
> -
>
> Key: CASSANDRA-12762
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12762
> Project: Cassandra
>  Issue Type: Bug
> Environment: Debian Jessie current
>Reporter: Jason Kania
>Priority: Critical
>
> After upgrading from 3.0.7 to 3.0.9, the following exception occurs when 
> trying to run compaction (prior to the upgrade, compaction worked fine):
> {code}
> error: 
> (/home/circuitwatch/cassandra/data/circuitwatch/edgeTransitionByCircuitId-f5d33310024b11e5bb310d2316086bf7/mb-12063-big-Data.db):
>  corruption detected, chunk at 345885546 of length 62024.
> -- StackTrace --
> org.apache.cassandra.io.compress.CorruptBlockException: 
> (/home/circuitwatch/cassandra/data/circuitwatch/edgeTransitionByCircuitId-f5d33310024b11e5bb310d2316086bf7/mb-12063-big-Data.db):
>  corruption detected, chunk at 345885546 of length 62024.
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:202)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.reBuffer(RandomAccessReader.java:111)
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.read(RebufferingInputStream.java:88)
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:66)
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:404)
> at 
> org.apache.cassandra.db.marshal.AbstractType.readValue(AbstractType.java:406)
> at 
> org.apache.cassandra.db.rows.BufferCell$Serializer.deserialize(BufferCell.java:302)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.readSimpleColumn(UnfilteredSerializer.java:476)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserializeRowBody(UnfilteredSerializer.java:454)
> at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:377)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:87)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$CurrentFormatIterator.computeNext(SSTableSimpleIterator.java:65)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.doCompute(SSTableIdentityIterator.java:123)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:100)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.computeNext(SSTableIdentityIterator.java:30)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:95)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:32)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:369)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:189)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:158)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:509)
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:369)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:129)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:111)
> at 
> org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:52)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
> at 
> 

[jira] [Commented] (CASSANDRA-13002) per table slow query times

2017-02-14 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865363#comment-15865363
 ] 

Jon Haddad commented on CASSANDRA-13002:


Sorry for the delay.  I'll give this a test this week and post feedback here.

> per table slow query times
> --
>
> Key: CASSANDRA-13002
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13002
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jon Haddad
>Assignee: Murukesh Mohanan
> Fix For: 4.x
>
> Attachments: 
> 0001-Add-per-table-slow_query_log_timeout_in_ms-property.patch
>
>
> CASSANDRA-12403 made it possible to log slow queries, but the time specified 
> is a global one.  This isn't useful if we know different tables have 
> different access patterns, as we'll end up with a lot of noise.  We should be 
> able to override the slow query time at a per-table level.
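
A hedged usage example, with the property name taken from the attached patch's title (syntax assumed, not confirmed):

{code}
// Per-table override at 500 ms; ks.events is a placeholder table and a
// connected driver session is assumed.
session.execute("ALTER TABLE ks.events WITH slow_query_log_timeout_in_ms = 500");
{code}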



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)