[jira] [Updated] (CASSANDRA-13228) SASI index on partition key part doesn't match

2017-02-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hannu Kröger updated CASSANDRA-13228:
-
Description: 
I created a SASI index on the first part of a multi-part partition key. Running a 
query using that index doesn't seem to work.

Here is a log of queries that demonstrates the issue:

{code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
text, bytes int, PRIMARY KEY ((name, event_date), data_type));
cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('1234', '2010-01-01', 'sensor', 128);
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('abcd', '2010-01-02', 'sensor', 500);
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows)
cqlsh:test> CONSISTENCY ALL;
Consistency level set to ALL.
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows){code}

Note! When creating a SASI index on a single-part partition key, index creation 
fails. Apparently this should not work at all, so is the real issue missing 
validation at index creation?

  was:
I created a SASI index on first part of multi-part partition key. Running query 
using that index doesn't seem to work.

I have here a log of queries that should indicate the issue:

{code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
text, bytes int, PRIMARY KEY ((name, event_date), data_type));
cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('1234', '2010-01-01', 'sensor', 128);
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('abcd', '2010-01-02', 'sensor', 500);
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows)
cqlsh:test> CONSISTENCY ALL;
Consistency level set to ALL.
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows){code}

Note1 Creating a SASI index on single part partition key, SASI index creation 
fails. Apparently this should not work at all, so is it about missing 
validation on index creation?


> SASI index on partition key part doesn't match
> --
>
> Key: CASSANDRA-13228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13228
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>
> I created a SASI index on the first part of a multi-part partition key. Running 
> a query using that index doesn't seem to work.
> Here is a log of queries that demonstrates the issue:
> {code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
> text, bytes int, PRIMARY KEY ((name, event_date), data_type));
> cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('1234', '2010-01-01', 'sensor', 128);
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('abcd', '2010-01-02', 'sensor', 500);
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> ------+------------+-----------+-------
> (0 rows)
> cqlsh:test> CONSISTENCY ALL;
> Consistency level set to ALL.
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> ------+------------+-----------+-------
> (0 rows){code}
> Note! When creating a SASI index on a single-part partition key, index creation 
> fails. Apparently this should not work at all, so is the real issue missing 
> validation at index creation?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13228) SASI index on partition key part doesn't match

2017-02-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868940#comment-15868940
 ] 

Hannu Kröger commented on CASSANDRA-13228:
--

There is a related ticket where a more thorough fix for the problem is 
proposed. However, this could be resolved with an additional validation.
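
For reference, a minimal CQL sketch of the symptom (hypothetical statements reusing 
the test1 table from the description, not output from the reporter's cluster): the 
row is reachable once the full partition key is restricted, which suggests the data 
is intact and only the index-backed lookup misses.

{code}
-- Lookup on only the first partition key component goes through the SASI index
-- and, per this report, returns 0 rows:
SELECT * FROM test1 WHERE name = '1234';

-- Restricting the complete partition key addresses a single partition directly,
-- bypasses the index, and should return the inserted row:
SELECT * FROM test1 WHERE name = '1234' AND event_date = '2010-01-01';
{code}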

> SASI index on partition key part doesn't match
> --
>
> Key: CASSANDRA-13228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13228
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>
> I created a SASI index on the first part of a multi-part partition key. Running 
> a query using that index doesn't seem to work.
> Here is a log of queries that demonstrates the issue:
> {code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
> text, bytes int, PRIMARY KEY ((name, event_date), data_type));
> cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex';
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('1234', '2010-01-01', 'sensor', 128);
> cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
> values('abcd', '2010-01-02', 'sensor', 500);
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> ------+------------+-----------+-------
> (0 rows)
> cqlsh:test> CONSISTENCY ALL;
> Consistency level set to ALL.
> cqlsh:test> select * from test1 where NAME = '1234';
>  name | event_date | data_type | bytes
> ------+------------+-----------+-------
> (0 rows){code}
> Note! When creating a SASI index on a single-part partition key, index creation 
> fails. Apparently this should not work at all, so is the real issue missing 
> validation at index creation?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13228) SASI index on partition key part doesn't match

2017-02-15 Thread JIRA
Hannu Kröger created CASSANDRA-13228:


 Summary: SASI index on partition key part doesn't match
 Key: CASSANDRA-13228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13228
 Project: Cassandra
  Issue Type: Bug
Reporter: Hannu Kröger


I created a SASI index on the first part of a multi-part partition key. Running a 
query using that index doesn't seem to work.

Here is a log of queries that demonstrates the issue:

{code}cqlsh:test> CREATE TABLE test1(name text, event_date date, data_type 
text, bytes int, PRIMARY KEY ((name, event_date), data_type));
cqlsh:test> CREATE CUSTOM INDEX test_index ON test1(name) USING 
'org.apache.cassandra.index.sasi.SASIIndex';
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('1234', '2010-01-01', 'sensor', 128);
cqlsh:test> INSERT INTO test1(name, event_date, data_type, bytes) 
values('abcd', '2010-01-02', 'sensor', 500);
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows)
cqlsh:test> CONSISTENCY ALL;
Consistency level set to ALL.
cqlsh:test> select * from test1 where NAME = '1234';

 name | event_date | data_type | bytes
------+------------+-----------+-------

(0 rows){code}

Note! When creating a SASI index on a single-part partition key, index creation 
fails. Apparently this should not work at all, so is the real issue missing 
validation at index creation?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[cassandra] Git Push Summary

2017-02-15 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/2.1.17-tentative [created] 943db2488


[cassandra] Git Push Summary

2017-02-15 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/2.2.9-tentative [created] 70a08f1c3


[cassandra] Git Push Summary

2017-02-15 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.0.11-tentative [created] 338226e04


[17/23] cassandra git commit: Update debian/changelog for 3.0.11 release

2017-02-15 Thread mshuler
Update debian/changelog for 3.0.11 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/338226e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/338226e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/338226e0

Branch: refs/heads/cassandra-3.0
Commit: 338226e042a22242645ab54a372c7c1459e78a01
Parents: 5168879
Author: Michael Shuler 
Authored: Wed Feb 15 18:15:54 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:15:54 2017 -0600

--
 debian/changelog | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/338226e0/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 6ea7f6f..64cd73b 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,8 @@
-cassandra (3.0.11) UNRELEASED; urgency=medium
+cassandra (3.0.11) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Wed, 16 Nov 2016 14:29:32 -0600
+ -- Michael Shuler   Wed, 15 Feb 2017 18:15:14 -0600
 
 cassandra (3.0.10) unstable; urgency=medium
 



[07/23] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06bfa06b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06bfa06b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06bfa06b

Branch: refs/heads/cassandra-3.11
Commit: 06bfa06bbd154499aa9f09efebd53f0d41210358
Parents: 0b9f6de 943db24
Author: Michael Shuler 
Authored: Wed Feb 15 18:11:17 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:11:17 2017 -0600

--

--




[21/23] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc405c0b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc405c0b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc405c0b

Branch: refs/heads/trunk
Commit: cc405c0b2f0d4fa9268429f183b861c42bc3026e
Parents: ad70202 338226e
Author: Michael Shuler 
Authored: Wed Feb 15 18:16:37 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:16:37 2017 -0600

--

--




[10/23] cassandra git commit: Add 2.2.9 release to debian/changelog

2017-02-15 Thread mshuler
Add 2.2.9 release to debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70a08f1c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70a08f1c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70a08f1c

Branch: refs/heads/trunk
Commit: 70a08f1c35091a36f7d9cc4816259210c2185267
Parents: 06bfa06
Author: Michael Shuler 
Authored: Wed Feb 15 18:13:56 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:13:56 2017 -0600

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70a08f1c/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9be1320..1be9edf 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.9) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Wed, 15 Feb 2017 18:12:32 -0600
+
 cassandra (2.2.8) unstable; urgency=medium
 
   * New release 



[19/23] cassandra git commit: Update debian/changelog for 3.0.11 release

2017-02-15 Thread mshuler
Update debian/changelog for 3.0.11 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/338226e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/338226e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/338226e0

Branch: refs/heads/trunk
Commit: 338226e042a22242645ab54a372c7c1459e78a01
Parents: 5168879
Author: Michael Shuler 
Authored: Wed Feb 15 18:15:54 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:15:54 2017 -0600

--
 debian/changelog | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/338226e0/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 6ea7f6f..64cd73b 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,8 @@
-cassandra (3.0.11) UNRELEASED; urgency=medium
+cassandra (3.0.11) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Wed, 16 Nov 2016 14:29:32 -0600
+ -- Michael Shuler   Wed, 15 Feb 2017 18:15:14 -0600
 
 cassandra (3.0.10) unstable; urgency=medium
 



[23/23] cassandra git commit: Add 4.0 to debian/changelog

2017-02-15 Thread mshuler
Add 4.0 to debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b6fdaba6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b6fdaba6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b6fdaba6

Branch: refs/heads/trunk
Commit: b6fdaba6eabd14f71484cfa2f4414f22a64fef38
Parents: 21eb018
Author: Michael Shuler 
Authored: Wed Feb 15 18:21:42 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:21:42 2017 -0600

--
 debian/changelog | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b6fdaba6/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 19bf308..273c6bc 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,4 +1,10 @@
-cassandra (3.10) UNRELEASED; urgency=medium
+cassandra (4.0) UNRELEASED; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Wed, 15 Feb 2017 18:20:09 -0600
+
+cassandra (3.10) unstable; urgency=medium
 
   * New release
 



[15/23] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51688795
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51688795
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51688795

Branch: refs/heads/cassandra-3.11
Commit: 51688795bb848896279d5a47ed525fa19d412036
Parents: 7cb2ef0 70a08f1
Author: Michael Shuler 
Authored: Wed Feb 15 18:14:15 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:14:15 2017 -0600

--

--




[13/23] cassandra git commit: Add 2.2.9 release to debian/changelog

2017-02-15 Thread mshuler
Add 2.2.9 release to debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70a08f1c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70a08f1c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70a08f1c

Branch: refs/heads/cassandra-2.2
Commit: 70a08f1c35091a36f7d9cc4816259210c2185267
Parents: 06bfa06
Author: Michael Shuler 
Authored: Wed Feb 15 18:13:56 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:13:56 2017 -0600

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70a08f1c/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9be1320..1be9edf 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.9) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Wed, 15 Feb 2017 18:12:32 -0600
+
 cassandra (2.2.8) unstable; urgency=medium
 
   * New release 



[11/23] cassandra git commit: Add 2.2.9 release to debian/changelog

2017-02-15 Thread mshuler
Add 2.2.9 release to debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70a08f1c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70a08f1c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70a08f1c

Branch: refs/heads/cassandra-3.0
Commit: 70a08f1c35091a36f7d9cc4816259210c2185267
Parents: 06bfa06
Author: Michael Shuler 
Authored: Wed Feb 15 18:13:56 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:13:56 2017 -0600

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70a08f1c/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9be1320..1be9edf 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.9) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Wed, 15 Feb 2017 18:12:32 -0600
+
 cassandra (2.2.8) unstable; urgency=medium
 
   * New release 



[01/23] cassandra git commit: Add 2.1.17 NEWS entry and bump debian/changelog timestamp

2017-02-15 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 cb090791c -> 943db2488
  refs/heads/cassandra-2.2 0b9f6de7a -> 70a08f1c3
  refs/heads/cassandra-3.0 7cb2ef090 -> 338226e04
  refs/heads/cassandra-3.11 ad7020278 -> cc405c0b2
  refs/heads/trunk ad316820b -> b6fdaba6e


Add 2.1.17 NEWS entry and bump debian/changelog timestamp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/943db248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/943db248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/943db248

Branch: refs/heads/cassandra-2.1
Commit: 943db2488c8b62e1fbe03b132102f0e579c9ae17
Parents: cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 18:10:51 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:10:51 2017 -0600

--
 NEWS.txt | 8 
 debian/changelog | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 2db34ed..65ccdc9 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.17
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.16
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 4bd30ff..ba9b91d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -2,7 +2,7 @@ cassandra (2.1.17) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 10 Oct 2016 17:07:44 -0500
+ -- Michael Shuler   Wed, 15 Feb 2017 18:09:07 -0600
 
 cassandra (2.1.16) unstable; urgency=medium
 



[03/23] cassandra git commit: Add 2.1.17 NEWS entry and bump debian/changelog timestamp

2017-02-15 Thread mshuler
Add 2.1.17 NEWS entry and bump debian/changelog timestamp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/943db248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/943db248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/943db248

Branch: refs/heads/cassandra-2.2
Commit: 943db2488c8b62e1fbe03b132102f0e579c9ae17
Parents: cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 18:10:51 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:10:51 2017 -0600

--
 NEWS.txt | 8 
 debian/changelog | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 2db34ed..65ccdc9 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.17
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.16
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 4bd30ff..ba9b91d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -2,7 +2,7 @@ cassandra (2.1.17) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 10 Oct 2016 17:07:44 -0500
+ -- Michael Shuler   Wed, 15 Feb 2017 18:09:07 -0600
 
 cassandra (2.1.16) unstable; urgency=medium
 



[18/23] cassandra git commit: Update debian/changelog for 3.0.11 release

2017-02-15 Thread mshuler
Update debian/changelog for 3.0.11 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/338226e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/338226e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/338226e0

Branch: refs/heads/cassandra-3.11
Commit: 338226e042a22242645ab54a372c7c1459e78a01
Parents: 5168879
Author: Michael Shuler 
Authored: Wed Feb 15 18:15:54 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:15:54 2017 -0600

--
 debian/changelog | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/338226e0/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 6ea7f6f..64cd73b 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,8 @@
-cassandra (3.0.11) UNRELEASED; urgency=medium
+cassandra (3.0.11) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Wed, 16 Nov 2016 14:29:32 -0600
+ -- Michael Shuler   Wed, 15 Feb 2017 18:15:14 -0600
 
 cassandra (3.0.10) unstable; urgency=medium
 



[06/23] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06bfa06b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06bfa06b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06bfa06b

Branch: refs/heads/cassandra-2.2
Commit: 06bfa06bbd154499aa9f09efebd53f0d41210358
Parents: 0b9f6de 943db24
Author: Michael Shuler 
Authored: Wed Feb 15 18:11:17 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:11:17 2017 -0600

--

--




[09/23] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06bfa06b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06bfa06b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06bfa06b

Branch: refs/heads/cassandra-3.0
Commit: 06bfa06bbd154499aa9f09efebd53f0d41210358
Parents: 0b9f6de 943db24
Author: Michael Shuler 
Authored: Wed Feb 15 18:11:17 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:11:17 2017 -0600

--

--




[16/23] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51688795
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51688795
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51688795

Branch: refs/heads/trunk
Commit: 51688795bb848896279d5a47ed525fa19d412036
Parents: 7cb2ef0 70a08f1
Author: Michael Shuler 
Authored: Wed Feb 15 18:14:15 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:14:15 2017 -0600

--

--




[22/23] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21eb018d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21eb018d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21eb018d

Branch: refs/heads/trunk
Commit: 21eb018deeadb2e6054e3e875c54943835ffb7f4
Parents: ad31682 cc405c0
Author: Michael Shuler 
Authored: Wed Feb 15 18:21:15 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:21:15 2017 -0600

--

--




[12/23] cassandra git commit: Add 2.2.9 release to debian/changelog

2017-02-15 Thread mshuler
Add 2.2.9 release to debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/70a08f1c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/70a08f1c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/70a08f1c

Branch: refs/heads/cassandra-3.11
Commit: 70a08f1c35091a36f7d9cc4816259210c2185267
Parents: 06bfa06
Author: Michael Shuler 
Authored: Wed Feb 15 18:13:56 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:13:56 2017 -0600

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/70a08f1c/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 9be1320..1be9edf 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.9) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Wed, 15 Feb 2017 18:12:32 -0600
+
 cassandra (2.2.8) unstable; urgency=medium
 
   * New release 



[08/23] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/06bfa06b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/06bfa06b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/06bfa06b

Branch: refs/heads/trunk
Commit: 06bfa06bbd154499aa9f09efebd53f0d41210358
Parents: 0b9f6de 943db24
Author: Michael Shuler 
Authored: Wed Feb 15 18:11:17 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:11:17 2017 -0600

--

--




[14/23] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51688795
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51688795
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51688795

Branch: refs/heads/cassandra-3.0
Commit: 51688795bb848896279d5a47ed525fa19d412036
Parents: 7cb2ef0 70a08f1
Author: Michael Shuler 
Authored: Wed Feb 15 18:14:15 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:14:15 2017 -0600

--

--




[20/23] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc405c0b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc405c0b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc405c0b

Branch: refs/heads/cassandra-3.11
Commit: cc405c0b2f0d4fa9268429f183b861c42bc3026e
Parents: ad70202 338226e
Author: Michael Shuler 
Authored: Wed Feb 15 18:16:37 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:16:37 2017 -0600

--

--




[05/23] cassandra git commit: Add 2.1.17 NEWS entry and bump debian/changelog timestamp

2017-02-15 Thread mshuler
Add 2.1.17 NEWS entry and bump debian/changelog timestamp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/943db248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/943db248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/943db248

Branch: refs/heads/cassandra-3.0
Commit: 943db2488c8b62e1fbe03b132102f0e579c9ae17
Parents: cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 18:10:51 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:10:51 2017 -0600

--
 NEWS.txt | 8 
 debian/changelog | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 2db34ed..65ccdc9 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.17
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.16
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 4bd30ff..ba9b91d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -2,7 +2,7 @@ cassandra (2.1.17) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 10 Oct 2016 17:07:44 -0500
+ -- Michael Shuler   Wed, 15 Feb 2017 18:09:07 -0600
 
 cassandra (2.1.16) unstable; urgency=medium
 



[04/23] cassandra git commit: Add 2.1.17 NEWS entry and bump debian/changelog timestamp

2017-02-15 Thread mshuler
Add 2.1.17 NEWS entry and bump debian/changelog timestamp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/943db248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/943db248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/943db248

Branch: refs/heads/trunk
Commit: 943db2488c8b62e1fbe03b132102f0e579c9ae17
Parents: cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 18:10:51 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:10:51 2017 -0600

--
 NEWS.txt | 8 
 debian/changelog | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 2db34ed..65ccdc9 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.17
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.16
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 4bd30ff..ba9b91d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -2,7 +2,7 @@ cassandra (2.1.17) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 10 Oct 2016 17:07:44 -0500
+ -- Michael Shuler   Wed, 15 Feb 2017 18:09:07 -0600
 
 cassandra (2.1.16) unstable; urgency=medium
 



[02/23] cassandra git commit: Add 2.1.17 NEWS entry and bump debian/changelog timestamp

2017-02-15 Thread mshuler
Add 2.1.17 NEWS entry and bump debian/changelog timestamp


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/943db248
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/943db248
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/943db248

Branch: refs/heads/cassandra-3.11
Commit: 943db2488c8b62e1fbe03b132102f0e579c9ae17
Parents: cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 18:10:51 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:10:51 2017 -0600

--
 NEWS.txt | 8 
 debian/changelog | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 2db34ed..65ccdc9 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.17
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.16
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/943db248/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 4bd30ff..ba9b91d 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -2,7 +2,7 @@ cassandra (2.1.17) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 10 Oct 2016 17:07:44 -0500
+ -- Michael Shuler   Wed, 15 Feb 2017 18:09:07 -0600
 
 cassandra (2.1.16) unstable; urgency=medium
 



[jira] [Updated] (CASSANDRA-13211) Use portable stderr for java error in startup

2017-02-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13211:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Commit {{cb090791c3d16011665f0f56afd66bbce2a0e40f}} pushed to cassandra-2.1 and 
up. Thanks!

> Use portable stderr for java error in startup
> -
>
> Key: CASSANDRA-13211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13211
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Bowsher
>Assignee: Michael Shuler
> Fix For: 2.1.17, 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: 13211_use-portable-stderr.patch
>
>
> The cassandra startup shell script contains this line:
> echo Unable to find java executable. Check JAVA_HOME and PATH environment 
> variables. > /dev/stderr
> The problem here is the construct "> /dev/stderr". If the user invoking 
> Cassandra has changed user (for example, by SSHing in as a personal user, and 
> then sudo-ing to an application user responsible for executing the Cassandra 
> daemon), then the attempt to open /dev/stderr will fail, because it will 
> point to a PTY node under /dev/pts/ owned by the original user.
> Ultimately this leads to the real problem being masked by the confusing error 
> message "bash: /dev/stderr: Permission denied".
> The correct technique is to replace "> /dev/stderr" with ">&2" which will 
> write to the already open stderr file descriptor, instead of resolving the 
> chain of symlinks starting at /dev/stderr, and attempting to reopen the 
> target by name.
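
A minimal shell sketch of why the two redirections differ (hypothetical commands 
and outputs, assuming the ssh-then-sudo scenario described above):

{code}
# /dev/stderr is a chain of symlinks ending at the original user's terminal device:
readlink -f /dev/stderr          # e.g. /dev/pts/0, still owned by the login user

# Re-opening that device by name as the switched-to user fails:
echo "no java" > /dev/stderr     # bash: /dev/stderr: Permission denied

# Duplicating the already-open stderr file descriptor avoids the re-open entirely:
echo "no java" >&2
{code}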



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13211) Use portable stderr for java error in startup

2017-02-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13211:
---
Fix Version/s: (was: 2.1.x)
   4.0
   3.11.0
   3.0.11
   2.2.9
   2.1.17

> Use portable stderr for java error in startup
> -
>
> Key: CASSANDRA-13211
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13211
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Max Bowsher
>Assignee: Michael Shuler
> Fix For: 2.1.17, 2.2.9, 3.0.11, 3.11.0, 4.0
>
> Attachments: 13211_use-portable-stderr.patch
>
>
> The cassandra startup shell script contains this line:
> echo Unable to find java executable. Check JAVA_HOME and PATH environment 
> variables. > /dev/stderr
> The problem here is the construct "> /dev/stderr". If the user invoking 
> Cassandra has changed user (for example, by SSHing in as a personal user, and 
> then sudo-ing to an application user responsible for executing the Cassandra 
> daemon), then the attempt to open /dev/stderr will fail, because it will 
> point to a PTY node under /dev/pts/ owned by the original user.
> Ultimately this leads to the real problem being masked by the confusing error 
> message "bash: /dev/stderr: Permission denied".
> The correct technique is to replace "> /dev/stderr" with ">&2" which will 
> write to the already open stderr file descriptor, instead of resolving the 
> chain of symlinks starting at /dev/stderr, and attempting to reopen the 
> target by name.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[01/15] cassandra git commit: Use portable stderr for java error in startup

2017-02-15 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a6237bf65 -> cb090791c
  refs/heads/cassandra-2.2 753d90cd7 -> 0b9f6de7a
  refs/heads/cassandra-3.0 f02f154e4 -> 7cb2ef090
  refs/heads/cassandra-3.11 ef9df6e05 -> ad7020278
  refs/heads/trunk d81dc27c7 -> ad316820b


Use portable stderr for java error in startup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb090791
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb090791
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb090791

Branch: refs/heads/cassandra-2.1
Commit: cb090791c3d16011665f0f56afd66bbce2a0e40f
Parents: a6237bf
Author: Michael Shuler 
Authored: Mon Feb 13 12:46:02 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:49:39 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ce8d49..a3de742 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.17
+ * Use portable stderr for java error in startup (CASSANDRA-13211)
  * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
  * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/bin/cassandra
--
diff --git a/bin/cassandra b/bin/cassandra
index 4dca73e..957cc7d 100755
--- a/bin/cassandra
+++ b/bin/cassandra
@@ -99,7 +99,7 @@ else
 fi
 
 if [ -z $JAVA ] ; then
-echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. > /dev/stderr
+echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. >&2
 exit 1;
 fi
 



[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad702027
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad702027
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad702027

Branch: refs/heads/trunk
Commit: ad7020278353cabe330ab813f07f6679f382d6cd
Parents: ef9df6e 7cb2ef0
Author: Michael Shuler 
Authored: Wed Feb 15 18:00:06 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:00:06 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad702027/CHANGES.txt
--
diff --cc CHANGES.txt
index ee5a5cb,ab345e6..1a9ce2d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -19,130 -12,6 +19,131 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
++ * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due to no table metadata 
(CASSANDRA-12815)
 + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster 
(CASSANDRA-12812)
 + * Upgrade commons-codec to 1.9 (CASSANDRA-12790)
 + * Make the fanout size for LeveledCompactionStrategy to be configurable 
(CASSANDRA-11550)
 + * Add duration data type (CASSANDRA-11873)
 + * Fix timeout in ReplicationAwareTokenAllocatorTest (CASSANDRA-12784)
 + * Improve sum aggregate functions (CASSANDRA-12417)
 + * Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes 
in CASSANDRA-10876 (CASSANDRA-12761)
 + * cqlsh fails to format collections when using aliases (CASSANDRA-11534)
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition still breaks 

[04/15] cassandra git commit: Use portable stderr for java error in startup

2017-02-15 Thread mshuler
Use portable stderr for java error in startup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb090791
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb090791
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb090791

Branch: refs/heads/cassandra-3.0
Commit: cb090791c3d16011665f0f56afd66bbce2a0e40f
Parents: a6237bf
Author: Michael Shuler 
Authored: Mon Feb 13 12:46:02 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:49:39 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ce8d49..a3de742 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.17
+ * Use portable stderr for java error in startup (CASSANDRA-13211)
  * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
  * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/bin/cassandra
--
diff --git a/bin/cassandra b/bin/cassandra
index 4dca73e..957cc7d 100755
--- a/bin/cassandra
+++ b/bin/cassandra
@@ -99,7 +99,7 @@ else
 fi
 
 if [ -z $JAVA ] ; then
-echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. > /dev/stderr
+echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. >&2
 exit 1;
 fi
 



[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b9f6de7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b9f6de7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b9f6de7

Branch: refs/heads/cassandra-3.11
Commit: 0b9f6de7a8a68ee7637cde8c17177b21f801f652
Parents: 753d90c cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 17:57:03 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:57:03 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/CHANGES.txt
--
diff --cc CHANGES.txt
index 4052b0f,a3de742..d53457f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
 -2.1.17
 +2.2.9
 + * Coalescing strategy sleeps too much and shouldn't be enabled by default 
(CASSANDRA-13090)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 + * Fix speculative retry bugs (CASSANDRA-13009)
 + * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981) 
 + * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
 + * Fix DynamicEndpointSnitch noop in multi-datacenter situations 
(CASSANDRA-13074)
 + * cqlsh copy-from: encode column names to avoid primary key parsing errors 
(CASSANDRA-12909)
 + * Temporarily fix bug that creates commit log when running offline tools 
(CASSANDRA-8616)
 + * Reduce granuality of OpOrder.Group during index build (CASSANDRA-12796)
 + * Test bind parameters and unset parameters in InsertUpdateIfConditionTest 
(CASSANDRA-12980)
 + * Do not specify local address on outgoing connection when 
listen_on_broadcast_address is set (CASSANDRA-12673)
 + * Use saved tokens when setting local tokens on StorageService.joinRing 
(CASSANDRA-12935)
 + * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 + * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 + * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 + * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
 + * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
 + * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)
 + * Fix Util.spinAssertEquals (CASSANDRA-12283)
 + * Fix potential NPE for compactionstats (CASSANDRA-12462)
 + * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
 + * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
 + * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
 + * Limit colUpdateTimeDelta histogram updates to reasonable deltas 
(CASSANDRA-7)
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Better handle invalid system roles table (CASSANDRA-12700)
 + * Split consistent range movement flag correction (CASSANDRA-12786)
 + * CompactionTasks now correctly drops sstables out of compaction when not 
enough disk space is available (CASSANDRA-12979)
 +Merged from 2.1:
+  * Use portable stderr for java error in startup (CASSANDRA-13211)
   * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
   * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
   * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/bin/cassandra
--



[11/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7cb2ef09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7cb2ef09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7cb2ef09

Branch: refs/heads/cassandra-3.11
Commit: 7cb2ef09013df4987c74238bf6af4e7445c05ee2
Parents: f02f154 0b9f6de
Author: Michael Shuler 
Authored: Wed Feb 15 17:59:06 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:59:06 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7cb2ef09/CHANGES.txt
--
diff --cc CHANGES.txt
index 732e14b,d53457f..ab345e6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -57,50 -15,6 +57,51 @@@ Merged from 2.2
   * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
   * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
   * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 +Merged from 2.1:
++ * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 + * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)
 + * cqlsh copy-from: sort user type fields in csv (CASSANDRA-12959)
 +
 +
 +
 +3.0.10
 + * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 + * Fix partition count log during compaction (CASSANDRA-12184)
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
   * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
   * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
   * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)



[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b9f6de7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b9f6de7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b9f6de7

Branch: refs/heads/trunk
Commit: 0b9f6de7a8a68ee7637cde8c17177b21f801f652
Parents: 753d90c cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 17:57:03 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:57:03 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/CHANGES.txt
--
diff --cc CHANGES.txt
index 4052b0f,a3de742..d53457f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
 -2.1.17
 +2.2.9
 + * Coalescing strategy sleeps too much and shouldn't be enabled by default 
(CASSANDRA-13090)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 + * Fix speculative retry bugs (CASSANDRA-13009)
 + * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981) 
 + * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
 + * Fix DynamicEndpointSnitch noop in multi-datacenter situations 
(CASSANDRA-13074)
 + * cqlsh copy-from: encode column names to avoid primary key parsing errors 
(CASSANDRA-12909)
 + * Temporarily fix bug that creates commit log when running offline tools 
(CASSANDRA-8616)
 + * Reduce granuality of OpOrder.Group during index build (CASSANDRA-12796)
 + * Test bind parameters and unset parameters in InsertUpdateIfConditionTest 
(CASSANDRA-12980)
 + * Do not specify local address on outgoing connection when 
listen_on_broadcast_address is set (CASSANDRA-12673)
 + * Use saved tokens when setting local tokens on StorageService.joinRing 
(CASSANDRA-12935)
 + * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 + * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 + * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 + * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
 + * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
 + * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)
 + * Fix Util.spinAssertEquals (CASSANDRA-12283)
 + * Fix potential NPE for compactionstats (CASSANDRA-12462)
 + * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
 + * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
 + * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
 + * Limit colUpdateTimeDelta histogram updates to reasonable deltas 
(CASSANDRA-7)
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Better handle invalid system roles table (CASSANDRA-12700)
 + * Split consistent range movement flag correction (CASSANDRA-12786)
 + * CompactionTasks now correctly drops sstables out of compaction when not 
enough disk space is available (CASSANDRA-12979)
 +Merged from 2.1:
+  * Use portable stderr for java error in startup (CASSANDRA-13211)
   * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
   * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
   * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/bin/cassandra
--



[15/15] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad316820
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad316820
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad316820

Branch: refs/heads/trunk
Commit: ad316820be0ac9caa2ddc228e257fb4fceb70675
Parents: d81dc27 ad70202
Author: Michael Shuler 
Authored: Wed Feb 15 18:00:24 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:00:24 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad316820/CHANGES.txt
--
diff --cc CHANGES.txt
index 0a76400,1a9ce2d..adc1503
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -55,11 -23,10 +55,12 @@@ Merged from 2.2
   * Fix negative mean latency metric (CASSANDRA-12876)
   * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
  Merged from 2.1:
+  * Use portable stderr for java error in startup (CASSANDRA-13211)
   * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)
   * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  
 +
  3.10
   * Fix secondary index queries regression (CASSANDRA-13013)
   * Add duration type to the protocol V5 (CASSANDRA-12850)



[07/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b9f6de7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b9f6de7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b9f6de7

Branch: refs/heads/cassandra-2.2
Commit: 0b9f6de7a8a68ee7637cde8c17177b21f801f652
Parents: 753d90c cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 17:57:03 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:57:03 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/CHANGES.txt
--
diff --cc CHANGES.txt
index 4052b0f,a3de742..d53457f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
 -2.1.17
 +2.2.9
 + * Coalescing strategy sleeps too much and shouldn't be enabled by default 
(CASSANDRA-13090)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 + * Fix speculative retry bugs (CASSANDRA-13009)
 + * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981) 
 + * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
 + * Fix DynamicEndpointSnitch noop in multi-datacenter situations 
(CASSANDRA-13074)
 + * cqlsh copy-from: encode column names to avoid primary key parsing errors 
(CASSANDRA-12909)
 + * Temporarily fix bug that creates commit log when running offline tools 
(CASSANDRA-8616)
 + * Reduce granuality of OpOrder.Group during index build (CASSANDRA-12796)
 + * Test bind parameters and unset parameters in InsertUpdateIfConditionTest 
(CASSANDRA-12980)
 + * Do not specify local address on outgoing connection when 
listen_on_broadcast_address is set (CASSANDRA-12673)
 + * Use saved tokens when setting local tokens on StorageService.joinRing 
(CASSANDRA-12935)
 + * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 + * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 + * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 + * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
 + * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
 + * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)
 + * Fix Util.spinAssertEquals (CASSANDRA-12283)
 + * Fix potential NPE for compactionstats (CASSANDRA-12462)
 + * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
 + * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
 + * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
 + * Limit colUpdateTimeDelta histogram updates to reasonable deltas 
(CASSANDRA-7)
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Better handle invalid system roles table (CASSANDRA-12700)
 + * Split consistent range movement flag correction (CASSANDRA-12786)
 + * CompactionTasks now correctly drops sstables out of compaction when not 
enough disk space is available (CASSANDRA-12979)
 +Merged from 2.1:
+  * Use portable stderr for java error in startup (CASSANDRA-13211)
   * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
   * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
   * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/bin/cassandra
--



[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7cb2ef09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7cb2ef09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7cb2ef09

Branch: refs/heads/trunk
Commit: 7cb2ef09013df4987c74238bf6af4e7445c05ee2
Parents: f02f154 0b9f6de
Author: Michael Shuler 
Authored: Wed Feb 15 17:59:06 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:59:06 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7cb2ef09/CHANGES.txt
--
diff --cc CHANGES.txt
index 732e14b,d53457f..ab345e6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -57,50 -15,6 +57,51 @@@ Merged from 2.2
   * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
   * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
   * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 +Merged from 2.1:
++ * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 + * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)
 + * cqlsh copy-from: sort user type fields in csv (CASSANDRA-12959)
 +
 +
 +
 +3.0.10
 + * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 + * Fix partition count log during compaction (CASSANDRA-12184)
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
   * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
   * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
   * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)



[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7cb2ef09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7cb2ef09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7cb2ef09

Branch: refs/heads/cassandra-3.0
Commit: 7cb2ef09013df4987c74238bf6af4e7445c05ee2
Parents: f02f154 0b9f6de
Author: Michael Shuler 
Authored: Wed Feb 15 17:59:06 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:59:06 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7cb2ef09/CHANGES.txt
--
diff --cc CHANGES.txt
index 732e14b,d53457f..ab345e6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -57,50 -15,6 +57,51 @@@ Merged from 2.2
   * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
   * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
   * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 +Merged from 2.1:
++ * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 + * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)
 + * cqlsh copy-from: sort user type fields in csv (CASSANDRA-12959)
 +
 +
 +
 +3.0.10
 + * Disallow offheap_buffers memtable allocation (CASSANDRA-11039)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Pass root cause to CorruptBlockException when uncompression failed 
(CASSANDRA-12889)
 + * Fix partition count log during compaction (CASSANDRA-12184)
 + * Batch with multiple conditional updates for the same partition causes 
AssertionError (CASSANDRA-12867)
 + * Make AbstractReplicationStrategy extendable from outside its package 
(CASSANDRA-12788)
 + * Fix CommitLogTest.testDeleteIfNotDirty (CASSANDRA-12854)
 + * Don't tell users to turn off consistent rangemovements during rebuild. 
(CASSANDRA-12296)
 + * Avoid deadlock due to materialized view lock contention (CASSANDRA-12689)
 + * Fix for KeyCacheCqlTest flakiness (CASSANDRA-12801)
 + * Include SSTable filename in compacting large row message (CASSANDRA-12384)
 + * Fix potential socket leak (CASSANDRA-12329, CASSANDRA-12330)
 + * Fix ViewTest.testCompaction (CASSANDRA-12789)
 + * Improve avg aggregate functions (CASSANDRA-12417)
 + * Preserve quoted reserved keyword column names in MV creation 
(CASSANDRA-11803)
 + * nodetool stopdaemon errors out (CASSANDRA-12646)
 + * Split materialized view mutations on build to prevent OOM (CASSANDRA-12268)
 + * mx4j does not work in 3.0.8 (CASSANDRA-12274)
 + * Abort cqlsh copy-from in case of no answer after prolonged period of time 
(CASSANDRA-12740)
 + * Avoid sstable corrupt exception due to dropped static column 
(CASSANDRA-12582)
 + * Make stress use client mode to avoid checking commit log size on startup 
(CASSANDRA-12478)
 + * Fix exceptions with new vnode allocation (CASSANDRA-12715)
 + * Unify drain and shutdown processes (CASSANDRA-12509)
 + * Fix NPE in ComponentOfSlice.isEQ() (CASSANDRA-12706)
 + * Fix failure in LogTransactionTest (CASSANDRA-12632)
 + * Fix potentially incomplete non-frozen UDT values when querying with the
 +   full primary key specified (CASSANDRA-12605)
 + * Skip writing MV mutations to commitlog on mutation.applyUnsafe() 
(CASSANDRA-11670)
 + * Establish consistent distinction between non-existing partition and NULL 
value for LWTs on static columns (CASSANDRA-12060)
 + * Extend ColumnIdentifier.internedInstances key to include the type that 
generated the byte buffer (CASSANDRA-12516)
 + * Backport CASSANDRA-10756 (race condition in NativeTransportService 
shutdown) (CASSANDRA-12472)
 + * If CF has no clustering columns, any row cache is full partition cache 
(CASSANDRA-12499)
 + * Correct log message for statistics of offheap memtable flush 
(CASSANDRA-12776)
 + * Explicitly set locale for string validation 
(CASSANDRA-12541,CASSANDRA-12542,CASSANDRA-12543,CASSANDRA-12545)
 +Merged from 2.2:
   * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
   * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
   * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)



[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-15 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad702027
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad702027
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad702027

Branch: refs/heads/cassandra-3.11
Commit: ad7020278353cabe330ab813f07f6679f382d6cd
Parents: ef9df6e 7cb2ef0
Author: Michael Shuler 
Authored: Wed Feb 15 18:00:06 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 18:00:06 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad702027/CHANGES.txt
--
diff --cc CHANGES.txt
index ee5a5cb,ab345e6..1a9ce2d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -19,130 -12,6 +19,131 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
++ * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due to no table metadata 
(CASSANDRA-12815)
 + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster 
(CASSANDRA-12812)
 + * Upgrade commons-codec to 1.9 (CASSANDRA-12790)
 + * Make the fanout size for LeveledCompactionStrategy to be configurable 
(CASSANDRA-11550)
 + * Add duration data type (CASSANDRA-11873)
 + * Fix timeout in ReplicationAwareTokenAllocatorTest (CASSANDRA-12784)
 + * Improve sum aggregate functions (CASSANDRA-12417)
 + * Make cassandra.yaml docs for batch_size_*_threshold_in_kb reflect changes 
in CASSANDRA-10876 (CASSANDRA-12761)
 + * cqlsh fails to format collections when using aliases (CASSANDRA-11534)
 + * Check for hash conflicts in prepared statements (CASSANDRA-12733)
 + * Exit query parsing upon first error (CASSANDRA-12598)
 + * Fix cassandra-stress to use single seed in UUID generation 
(CASSANDRA-12729)
 + * CQLSSTableWriter does not allow Update statement (CASSANDRA-12450)
 + * Config class uses boxed types but DD exposes primitive types 
(CASSANDRA-12199)
 + * Add pre- and post-shutdown hooks to Storage Service (CASSANDRA-12461)
 + * Add hint delivery metrics (CASSANDRA-12693)
 + * Remove IndexInfo cache from FileIndexInfoRetriever (CASSANDRA-12731)
 + * ColumnIndex does not reuse buffer (CASSANDRA-12502)
 + * cdc column addition 

[03/15] cassandra git commit: Use portable stderr for java error in startup

2017-02-15 Thread mshuler
Use portable stderr for java error in startup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb090791
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb090791
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb090791

Branch: refs/heads/trunk
Commit: cb090791c3d16011665f0f56afd66bbce2a0e40f
Parents: a6237bf
Author: Michael Shuler 
Authored: Mon Feb 13 12:46:02 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:49:39 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ce8d49..a3de742 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.17
+ * Use portable stderr for java error in startup (CASSANDRA-13211)
  * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
  * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/bin/cassandra
--
diff --git a/bin/cassandra b/bin/cassandra
index 4dca73e..957cc7d 100755
--- a/bin/cassandra
+++ b/bin/cassandra
@@ -99,7 +99,7 @@ else
 fi
 
 if [ -z $JAVA ] ; then
-echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. > /dev/stderr
+echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. >&2
 exit 1;
 fi
 



[05/15] cassandra git commit: Use portable stderr for java error in startup

2017-02-15 Thread mshuler
Use portable stderr for java error in startup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb090791
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb090791
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb090791

Branch: refs/heads/cassandra-3.11
Commit: cb090791c3d16011665f0f56afd66bbce2a0e40f
Parents: a6237bf
Author: Michael Shuler 
Authored: Mon Feb 13 12:46:02 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:49:39 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ce8d49..a3de742 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.17
+ * Use portable stderr for java error in startup (CASSANDRA-13211)
  * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
  * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/bin/cassandra
--
diff --git a/bin/cassandra b/bin/cassandra
index 4dca73e..957cc7d 100755
--- a/bin/cassandra
+++ b/bin/cassandra
@@ -99,7 +99,7 @@ else
 fi
 
 if [ -z $JAVA ] ; then
-echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. > /dev/stderr
+echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. >&2
 exit 1;
 fi
 



[02/15] cassandra git commit: Use portable stderr for java error in startup

2017-02-15 Thread mshuler
Use portable stderr for java error in startup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb090791
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb090791
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb090791

Branch: refs/heads/cassandra-2.2
Commit: cb090791c3d16011665f0f56afd66bbce2a0e40f
Parents: a6237bf
Author: Michael Shuler 
Authored: Mon Feb 13 12:46:02 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:49:39 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9ce8d49..a3de742 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.17
+ * Use portable stderr for java error in startup (CASSANDRA-13211)
  * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
  * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
  * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb090791/bin/cassandra
--
diff --git a/bin/cassandra b/bin/cassandra
index 4dca73e..957cc7d 100755
--- a/bin/cassandra
+++ b/bin/cassandra
@@ -99,7 +99,7 @@ else
 fi
 
 if [ -z $JAVA ] ; then
-echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. > /dev/stderr
+echo Unable to find java executable. Check JAVA_HOME and PATH environment 
variables. >&2
 exit 1;
 fi
 



[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-02-15 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0b9f6de7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0b9f6de7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0b9f6de7

Branch: refs/heads/cassandra-3.0
Commit: 0b9f6de7a8a68ee7637cde8c17177b21f801f652
Parents: 753d90c cb09079
Author: Michael Shuler 
Authored: Wed Feb 15 17:57:03 2017 -0600
Committer: Michael Shuler 
Committed: Wed Feb 15 17:57:03 2017 -0600

--
 CHANGES.txt   | 1 +
 bin/cassandra | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/CHANGES.txt
--
diff --cc CHANGES.txt
index 4052b0f,a3de742..d53457f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,36 -1,5 +1,37 @@@
 -2.1.17
 +2.2.9
 + * Coalescing strategy sleeps too much and shouldn't be enabled by default 
(CASSANDRA-13090)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 + * Fix speculative retry bugs (CASSANDRA-13009)
 + * Fix handling of nulls and unsets in IN conditions (CASSANDRA-12981) 
 + * Remove support for non-JavaScript UDFs (CASSANDRA-12883)
 + * Fix DynamicEndpointSnitch noop in multi-datacenter situations 
(CASSANDRA-13074)
 + * cqlsh copy-from: encode column names to avoid primary key parsing errors 
(CASSANDRA-12909)
 + * Temporarily fix bug that creates commit log when running offline tools 
(CASSANDRA-8616)
 + * Reduce granuality of OpOrder.Group during index build (CASSANDRA-12796)
 + * Test bind parameters and unset parameters in InsertUpdateIfConditionTest 
(CASSANDRA-12980)
 + * Do not specify local address on outgoing connection when 
listen_on_broadcast_address is set (CASSANDRA-12673)
 + * Use saved tokens when setting local tokens on StorageService.joinRing 
(CASSANDRA-12935)
 + * cqlsh: fix DESC TYPES errors (CASSANDRA-12914)
 + * Fix leak on skipped SSTables in sstableupgrade (CASSANDRA-12899)
 + * Avoid blocking gossip during pending range calculation (CASSANDRA-12281)
 + * Fix purgeability of tombstones with max timestamp (CASSANDRA-12792)
 + * Fail repair if participant dies during sync or anticompaction 
(CASSANDRA-12901)
 + * cqlsh COPY: unprotected pk values before converting them if not using 
prepared statements (CASSANDRA-12863)
 + * Fix Util.spinAssertEquals (CASSANDRA-12283)
 + * Fix potential NPE for compactionstats (CASSANDRA-12462)
 + * Prepare legacy authenticate statement if credentials table initialised 
after node startup (CASSANDRA-12813)
 + * Change cassandra.wait_for_tracing_events_timeout_secs default to 0 
(CASSANDRA-12754)
 + * Clean up permissions when a UDA is dropped (CASSANDRA-12720)
 + * Limit colUpdateTimeDelta histogram updates to reasonable deltas 
(CASSANDRA-7)
 + * Fix leak errors and execution rejected exceptions when draining 
(CASSANDRA-12457)
 + * Fix merkle tree depth calculation (CASSANDRA-12580)
 + * Make Collections deserialization more robust (CASSANDRA-12618)
 + * Better handle invalid system roles table (CASSANDRA-12700)
 + * Split consistent range movement flag correction (CASSANDRA-12786)
 + * CompactionTasks now correctly drops sstables out of compaction when not 
enough disk space is available (CASSANDRA-12979)
 +Merged from 2.1:
+  * Use portable stderr for java error in startup (CASSANDRA-13211)
   * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
   * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
   * Upgrade netty version to fix memory leak with client encryption 
(CASSANDRA-13114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0b9f6de7/bin/cassandra
--



[jira] [Assigned] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol

2017-02-15 Thread Ben Bromhead (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Bromhead reassigned CASSANDRA-11471:


Assignee: Ben Bromhead

> Add SASL mechanism negotiation to the native protocol
> -
>
> Key: CASSANDRA-11471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11471
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Sam Tunnicliffe
>Assignee: Ben Bromhead
>  Labels: client-impacting
> Attachments: CASSANDRA-11471
>
>
> Introducing an additional message exchange into the authentication sequence 
> would allow us to support multiple authentication schemes and [negotiation of 
> SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. 
> The current {{AUTHENTICATE}} message sent from Client to Server includes the 
> java classname of the configured {{IAuthenticator}}. This could be superseded 
> by a new message which lists the SASL mechanisms supported by the server. The 
> client would then respond with a new message which indicates its choice of 
> mechanism.  This would allow the server to support multiple mechanisms, for 
> example enabling both {{PLAIN}} for username/password authentication and 
> {{EXTERNAL}} for a mechanism for extracting credentials from SSL 
> certificates\* (see the example in 
> [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the 
> server could tailor the list of supported mechanisms on a per-connection 
> basis, e.g. only offering certificate based auth to encrypted clients. 
> The client's response should include the selected mechanism and any initial 
> response data. This is mechanism-specific; the {{PLAIN}} mechanism consists 
> of a single round in which the client sends encoded credentials as the 
> initial response data and the server response indicates either success or 
> failure with no further challenges required.
> From a protocol perspective, after the mechanism negotiation the exchange 
> would continue as in protocol v4, with one or more rounds of 
> {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an 
> {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or 
> an {{ERROR}} on auth failure. 
> XMPP performs mechanism negotiation in this way, 
> [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good 
> overview.
> \* Note: this would require some a priori agreement between client and server 
> over the implementation of the {{EXTERNAL}} mechanism.
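
Purely as an illustration of the mechanism-selection step described above (and not the attached patch), here is a minimal client-side sketch using the JDK's standard {{javax.security.sasl}} API. The advertised mechanism list, protocol name and server name are made-up assumptions for the example; the actual message framing would be defined by the native protocol change proposed in this ticket.

{code}
import java.util.Collections;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

public class SaslNegotiationSketch
{
    public static void main(String[] args) throws SaslException
    {
        // Mechanisms the server would advertise in the proposed negotiation message
        // (assumed list, for illustration only).
        String[] advertised = { "PLAIN", "EXTERNAL" };

        // Supplies credentials in case the PLAIN mechanism ends up being selected.
        CallbackHandler credentials = callbacks -> {
            for (Callback cb : callbacks)
            {
                if (cb instanceof NameCallback)
                    ((NameCallback) cb).setName("cassandra");
                else if (cb instanceof PasswordCallback)
                    ((PasswordCallback) cb).setPassword("cassandra".toCharArray());
            }
        };

        // The JDK picks the first advertised mechanism it can satisfy.
        SaslClient client = Sasl.createSaslClient(advertised,
                                                  null,             // authorization id
                                                  "cassandra",      // protocol name (illustrative)
                                                  "node1.example",  // server name (illustrative)
                                                  Collections.emptyMap(),
                                                  credentials);

        // The selected mechanism plus this optional initial response is what the
        // client's first message of the proposed exchange would carry.
        byte[] initialResponse = client.hasInitialResponse()
                               ? client.evaluateChallenge(new byte[0])
                               : null;

        System.out.println("Selected mechanism: " + client.getMechanismName());
        System.out.println("Initial response length: " + (initialResponse == null ? 0 : initialResponse.length));
    }
}
{code}

In the flow proposed here, the bytes from {{evaluateChallenge}} would ride in that first client message, after which the existing {{AUTH_CHALLENGE}}/{{AUTH_RESPONSE}} rounds continue as in protocol v4.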



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12837) Add multi-threaded support to nodetool rebuild_index

2017-02-15 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-12837:
---
Fix Version/s: 4.x  (was: 2.2.9)

> Add multi-threaded support to nodetool rebuild_index
> 
>
> Key: CASSANDRA-12837
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12837
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: vincent royer
>Priority: Minor
>  Labels: patch
> Fix For: 4.x
>
> Attachments: CASSANDRA-12837-2.2.9.txt
>
>
> Add multi-thread nodetool rebuild_index to improve performances.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868797#comment-15868797
 ] 

Jeremiah Jordan edited comment on CASSANDRA-13225 at 2/15/17 11:32 PM:
---

If your app works correctly when this happens and the 3rd node comes up again:

bq. BEST_QUORUM would succeed as 2 replicas are up and would return success 
when 2 replicas get the write

Then you should just use QUORUM.  You are getting no advantage from 
"BEST_QUORUM".  It is not like the coordinator doesn't write the data to all 3 
nodes when they are all up.

If what you are worried about is overloading nodes and want to get all 3 acks 
back to slow down the writes, then you might want to take a look at using a 
back_pressure_strategy; that is exactly the case it was implemented for: 
reducing the number of dropped mutations when all nodes are up and there is a 
constant stream of writes coming in.


was (Author: jjordan):
If you app works correctly when this happens and the 3rd node comes up again:

bq. BEST_QUORUM would succeed as 2 replicas are up and would return success 
when 2 replicas get the write

Then you should just use QUORUM.  You are getting no advantage from 
"BEST_QUORUM".  Its not like we don't write the data to all 3 nodes when they 
are all up.

If what you are worried about is overloading your nodes and want to get all 3 
acks back to slow down your writes, then you might want to take a look at using 
a back_pressure_strategy.
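
To make the "just use QUORUM" recommendation above concrete, a minimal DataStax Java driver 3.x sketch follows (the contact point, keyspace and table names are hypothetical). The consistency level is a per-statement setting that only controls how many replica acknowledgements the coordinator waits for before answering; the mutation is still dispatched to every live replica.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class QuorumWriteExample
{
    public static void main(String[] args)
    {
        // Hypothetical contact point and schema, for illustration only.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks"))
        {
            SimpleStatement insert = new SimpleStatement(
                    "INSERT INTO t (id, val) VALUES (1, 'x')");
            // QUORUM only sets how many replica acks the coordinator waits for;
            // the write is still sent to all live replicas.
            insert.setConsistencyLevel(ConsistencyLevel.QUORUM);
            session.execute(insert);
        }
    }
}
{code}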

> Best Consistency Level
> --
>
> Key: CASSANDRA-13225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Connor Warrington
>Priority: Minor
>
> When writing data into a cluster there are a few consistency levels to choose 
> from. When choosing the consistency level to write with you are making a 
> tradeoff between consistency and failover availability. If you choose 
> consistency level ALL then all replicas have to be up and when a write 
> succeeds all replicas received the write. If you choose consistency level 
> QUORUM then a quorum number of replicas have to be up and when a write 
> succeeds at quorum number of replicas received the write. The tradeoff comes 
> in when there are more than quorum nodes available for the write. We would 
> like a write to succeed only when all replicas that are up have received the 
> write. Hence the suggestion of best as a consistency level. This would be 
> available for the existing consistency levels. The main idea behind this 
> feature request is that we are okay with a replica going down (fault 
> tolerance) but when the cluster is in a good state we don't mind waiting for 
> all nodes to get the write. This would enable the writer to operate at speed 
> of the slowest node instead of potentially getting into a state where that 
> slow node gets even further behind. This would also enable back pressure to 
> be better propagated through the system as the slowest node likely has back 
> pressure which is trying to tell the client about but if we don't wait for 
> that node the writer loses that information.
> Example scenarios:
> If we have replication factor of 3: 
> ALL consistency means 3 replicas have to be up and 3 replicas have to 
> successfully get the write. 
> QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
> successfully get the write. 
> BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have 
> to successfully get the write.
> If 3 replicas are up with replication factor of 3: 
> ALL would succeed as all 3 replicas are up and would return success when all 
> 3 replicas get the write 
> QUORUM would succeed as all 3 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as all 3 replicas are up and would return success 
> when all 3 replicas get the write
> If 2 replicas are up with replication factor of 3: 
> ALL would fail as only 2 replicas are up 
> QUORUM would succeed as 2 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as 2 replicas are up and would return success when 
> 2 replicas get the write



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall resolved CASSANDRA-13225.
-
Resolution: Won't Fix

I'm marking this as won't fix. [~Connor Warrington] I do appreciate you raising 
the issue. Feel free to discuss/request more information on #cassandra IRC or 
on the mailing list.

> Best Consistency Level
> --
>
> Key: CASSANDRA-13225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Connor Warrington
>Priority: Minor
>
> When writing data into a cluster there are a few consistency levels to choose 
> from. When choosing the consistency level to write with you are making a 
> tradeoff between consistency and failover availability. If you choose 
> consistency level ALL then all replicas have to be up and when a write 
> succeeds all replicas received the write. If you choose consistency level 
> QUORUM then a quorum number of replicas have to be up and when a write 
> succeeds at quorum number of replicas received the write. The tradeoff comes 
> in when there are more than quorum nodes available for the write. We would 
> like a write to succeed only when all replicas that are up have received the 
> write. Hence the suggestion of best as a consistency level. This would be 
> available for the existing consistency levels. The main idea behind this 
> feature request is that we are okay with a replica going down (fault 
> tolerance) but when the cluster is in a good state we don't mind waiting for 
> all nodes to get the write. This would enable the writer to operate at speed 
> of the slowest node instead of potentially getting into a state where that 
> slow node gets even further behind. This would also enable back pressure to 
> be better propagated through the system as the slowest node likely has back 
> pressure which is trying to tell the client about but if we don't wait for 
> that node the writer loses that information.
> Example scenarios:
> If we have replication factor of 3: 
> ALL consistency means 3 replicas have to be up and 3 replicas have to 
> successfully get the write. 
> QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
> successfully get the write. 
> BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have 
> to successfully get the write.
> If 3 replicas are up with replication factor of 3: 
> ALL would succeed as all 3 replicas are up and would return success when all 
> 3 replicas get the write 
> QUORUM would succeed as all 3 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as all 3 replicas are up and would return success 
> when all 3 replicas get the write
> If 2 replicas are up with replication factor of 3: 
> ALL would fail as only 2 replicas are up 
> QUORUM would succeed as 2 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as 2 replicas are up and would return success when 
> 2 replicas get the write



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868807#comment-15868807
 ] 

Jeff Jirsa commented on CASSANDRA-13225:


I'll add a third "sounds like a bad idea" vote. Downgrading consistency on 
retry, back-pressure via app/driver, or 
just-use-quorums-and-let-hints-do-their-job all sound better than 
{{BEST_QUORUM}}.


> Best Consistency Level
> --
>
> Key: CASSANDRA-13225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Connor Warrington
>Priority: Minor
>
> When writing data into a cluster there are a few consistency levels to choose 
> from. When choosing the consistency level to write with you are making a 
> tradeoff between consistency and failover availability. If you choose 
> consistency level ALL then all replicas have to be up and when a write 
> succeeds all replicas received the write. If you choose consistency level 
> QUORUM then a quorum number of replicas have to be up and when a write 
> succeeds at quorum number of replicas received the write. The tradeoff comes 
> in when there are more than quorum nodes available for the write. We would 
> like a write to succeed only when all replicas that are up have received the 
> write. Hence the suggestion of best as a consistency level. This would be 
> available for the existing consistency levels. The main idea behind this 
> feature request is that we are okay with a replica going down (fault 
> tolerance) but when the cluster is in a good state we don't mind waiting for 
> all nodes to get the write. This would enable the writer to operate at speed 
> of the slowest node instead of potentially getting into a state where that 
> slow node gets even further behind. This would also enable back pressure to 
> be better propagated through the system as the slowest node likely has back 
> pressure which is trying to tell the client about but if we don't wait for 
> that node the writer loses that information.
> Example scenarios:
> If we have replication factor of 3: 
> ALL consistency means 3 replicas have to be up and 3 replicas have to 
> successfully get the write. 
> QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
> successfully get the write. 
> BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have 
> to successfully get the write.
> If 3 replicas are up with replication factor of 3: 
> ALL would succeed as all 3 replicas are up and would return success when all 
> 3 replicas get the write 
> QUORUM would succeed as all 3 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as all 3 replicas are up and would return success 
> when all 3 replicas get the write
> If 2 replicas are up with replication factor of 3: 
> ALL would fail as only 2 replicas are up 
> QUORUM would succeed as 2 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as 2 replicas are up and would return success when 
> 2 replicas get the write



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868797#comment-15868797
 ] 

Jeremiah Jordan commented on CASSANDRA-13225:
-

If you app works correctly when this happens and the 3rd node comes up again:

bq. BEST_QUORUM would succeed as 2 replicas are up and would return success 
when 2 replicas get the write

Then you should just use QUORUM.  You are getting no advantage from 
"BEST_QUORUM".  Its not like we don't write the data to all 3 nodes when they 
are all up.

If what you are worried about is overloading your nodes and want to get all 3 
acks back to slow down your writes, then you might want to take a look at using 
a back_pressure_strategy.

> Best Consistency Level
> --
>
> Key: CASSANDRA-13225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Connor Warrington
>Priority: Minor
>
> When writing data into a cluster there are a few consistency levels to choose 
> from. When choosing the consistency level to write with you are making a 
> tradeoff between consistency and failover availability. If you choose 
> consistency level ALL then all replicas have to be up and when a write 
> succeeds all replicas received the write. If you choose consistency level 
> QUORUM then a quorum number of replicas have to be up and when a write 
> succeeds at quorum number of replicas received the write. The tradeoff comes 
> in when there are more than quorum nodes available for the write. We would 
> like a write to succeed only when all replicas that are up have received the 
> write. Hence the suggestion of best as a consistency level. This would be 
> available for the existing consistency levels. The main idea behind this 
> feature request is that we are okay with a replica going down (fault 
> tolerance) but when the cluster is in a good state we don't mind waiting for 
> all nodes to get the write. This would enable the writer to operate at speed 
> of the slowest node instead of potentially getting into a state where that 
> slow node gets even further behind. This would also enable back pressure to 
> be better propagated through the system as the slowest node likely has back 
> pressure which is trying to tell the client about but if we don't wait for 
> that node the writer loses that information.
> Example scenarios:
> If we have replication factor of 3: 
> ALL consistency means 3 replicas have to be up and 3 replicas have to 
> successfully get the write. 
> QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
> successfully get the write. 
> BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have 
> to successfully get the write.
> If 3 replicas are up with replication factor of 3: 
> ALL would succeed as all 3 replicas are up and would return success when all 
> 3 replicas get the write 
> QUORUM would succeed as all 3 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as all 3 replicas are up and would return success 
> when all 3 replicas get the write
> If 2 replicas are up with replication factor of 3: 
> ALL would fail as only 2 replicas are up 
> QUORUM would succeed as 2 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as 2 replicas are up and would return success when 
> 2 replicas get the write



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868751#comment-15868751
 ] 

Nate McCall commented on CASSANDRA-13225:
-

I'm not sure I quite understand the use case for BEST_QUORUM over QUORUM in 
your examples. Are you saying that you want a CL that defines fallback behavior 
depending on availability?

If that is the case, this type of logic is best handled at the application 
level. For example, the JavaDriver project can be configured to use a 
"downgrading consistency" retry policy 
(http://docs.datastax.com/en/developer/java-driver/3.1/manual/retries/). 
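
For reference, wiring that retry policy in is a one-liner on the driver's {{Cluster}} builder. A minimal sketch against Java driver 3.x (the contact point is a placeholder), wrapping the policy in {{LoggingRetryPolicy}} so each downgrade shows up in the client logs:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;
import com.datastax.driver.core.policies.LoggingRetryPolicy;

public class DowngradingRetryExample
{
    public static void main(String[] args)
    {
        // Placeholder contact point; the retry policy applies to every statement
        // executed through sessions created from this Cluster.
        try (Cluster cluster = Cluster.builder()
                                      .addContactPoint("127.0.0.1")
                                      .withRetryPolicy(new LoggingRetryPolicy(
                                              DowngradingConsistencyRetryPolicy.INSTANCE))
                                      .build())
        {
            System.out.println("Retry policy: "
                    + cluster.getConfiguration().getPolicies().getRetryPolicy().getClass().getSimpleName());
        }
    }
}
{code}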



> Best Consistency Level
> --
>
> Key: CASSANDRA-13225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Connor Warrington
>Priority: Minor
>
> When writing data into a cluster there are a few consistency levels to choose 
> from. When choosing the consistency level to write with you are making a 
> tradeoff between consistency and failover availability. If you choose 
> consistency level ALL then all replicas have to be up and when a write 
> succeeds all replicas received the write. If you choose consistency level 
> QUORUM then a quorum number of replicas have to be up and when a write 
> succeeds at quorum number of replicas received the write. The tradeoff comes 
> in when there are more than quorum nodes available for the write. We would 
> like a write to succeed only when all replicas that are up have received the 
> write. Hence the suggestion of best as a consistency level. This would be 
> available for the existing consistency levels. The main idea behind this 
> feature request is that we are okay with a replica going down (fault 
> tolerance) but when the cluster is in a good state we don't mind waiting for 
> all nodes to get the write. This would enable the writer to operate at speed 
> of the slowest node instead of potentially getting into a state where that 
> slow node gets even further behind. This would also enable back pressure to 
> be better propagated through the system as the slowest node likely has back 
> pressure which is trying to tell the client about but if we don't wait for 
> that node the writer loses that information.
> Example scenarios:
> If we have replication factor of 3: 
> ALL consistency means 3 replicas have to be up and 3 replicas have to 
> successfully get the write. 
> QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
> successfully get the write. 
> BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have 
> to successfully get the write.
> If 3 replicas are up with replication factor of 3: 
> ALL would succeed as all 3 replicas are up and would return success when all 
> 3 replicas get the write 
> QUORUM would succeed as all 3 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as all 3 replicas are up and would return success 
> when all 3 replicas get the write
> If 2 replicas are up with replication factor of 3: 
> ALL would fail as only 2 replicas are up 
> QUORUM would succeed as 2 replicas are up and would return success when 2 
> replicas get the write 
> BEST_QUORUM would succeed as 2 replicas are up and would return success when 
> 2 replicas get the write



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-11471) Add SASL mechanism negotiation to the native protocol

2017-02-15 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15868750#comment-15868750
 ] 

Ben Bromhead commented on CASSANDRA-11471:
--

OK, I finally got to work on this a little more and it's now ready for some 
feedback while I finish up tests. I've also put together a brief overview of the 
patch and negotiation flow in a Google doc 
[here|https://docs.google.com/document/d/1u-d9ZMgZ4Fn1VW19-iReo7Kks8aCkDUKqrF4a-3R1Ew/edit?usp=sharing]

||4.0||
|[Branch|https://github.com/apache/cassandra/compare/trunk...benbromhead:11471]|
|[Java 
Driver|https://github.com/datastax/java-driver/compare/3.x...benbromhead:11471]|

> Add SASL mechanism negotiation to the native protocol
> -
>
> Key: CASSANDRA-11471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11471
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Sam Tunnicliffe
>  Labels: client-impacting
> Attachments: CASSANDRA-11471
>
>
> Introducing an additional message exchange into the authentication sequence 
> would allow us to support multiple authentication schemes and [negotiation of 
> SASL mechanisms|https://tools.ietf.org/html/rfc4422#section-3.2]. 
> The current {{AUTHENTICATE}} message sent from Client to Server includes the 
> java classname of the configured {{IAuthenticator}}. This could be superseded 
> by a new message which lists the SASL mechanisms supported by the server. The 
> client would then respond with a new message which indicates its choice of 
> mechanism.  This would allow the server to support multiple mechanisms, for 
> example enabling both {{PLAIN}} for username/password authentication and 
> {{EXTERNAL}} for a mechanism for extracting credentials from SSL 
> certificates\* (see the example in 
> [RFC-4422|https://tools.ietf.org/html/rfc4422#appendix-A]). Furthermore, the 
> server could tailor the list of supported mechanisms on a per-connection 
> basis, e.g. only offering certificate based auth to encrypted clients. 
> The client's response should include the selected mechanism and any initial 
> response data. This is mechanism-specific; the {{PLAIN}} mechanism consists 
> of a single round in which the client sends encoded credentials as the 
> initial response data and the server response indicates either success or 
> failure with no further challenges required.
> From a protocol perspective, after the mechanism negotiation the exchange 
> would continue as in protocol v4, with one or more rounds of 
> {{AUTH_CHALLENGE}} and {{AUTH_RESPONSE}} messages, terminated by an 
> {{AUTH_SUCCESS}} sent from Server to Client upon successful authentication or 
> an {{ERROR}} on auth failure. 
> XMPP performs mechanism negotiation in this way, 
> [RFC-3920|http://tools.ietf.org/html/rfc3920#section-6] includes a good 
> overview.
> \* Note: this would require some a priori agreement between client and server 
> over the implementation of the {{EXTERNAL}} mechanism.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13226) StreamPlan for incremental repairs flushing memtables unnecessarily

2017-02-15 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13226:

Status: Patch Available  (was: Open)

|[branch|https://github.com/bdeggleston/cassandra/tree/13226]|[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-13226-dtest/]|[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-13226-testall/]|

The only functional change is in 
{{src/java/org/apache/cassandra/repair/StreamingRepairTask.java}}, everything 
else is just rearranging some of the base repair test classes to make unit 
testing the change possible / easier.

[~krummas], [~jjirsa]: could one of you review?

> StreamPlan for incremental repairs flushing memtables unnecessarily
> ---
>
> Key: CASSANDRA-13226
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13226
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> Since incremental repairs are run against a fixed dataset, there's no need to 
> flush memtables when streaming for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13227) remove CompactionManager#submitAntiCompaction

2017-02-15 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13227:

Status: Patch Available  (was: Open)

[branch|https://github.com/bdeggleston/cassandra/tree/13227]

[~slebresne], [~jjirsa], [~krummas]: could one of you review?

> remove CompactionManager#submitAntiCompaction
> -
>
> Key: CASSANDRA-13227
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13227
> Project: Cassandra
>  Issue Type: Task
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Trivial
> Fix For: 4.0
>
>
> Method is no longer used



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13227) remove CompactionManager#submitAntiCompaction

2017-02-15 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13227:
---

 Summary: remove CompactionManager#submitAntiCompaction
 Key: CASSANDRA-13227
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13227
 Project: Cassandra
  Issue Type: Task
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Trivial
 Fix For: 4.0


Method is no longer used



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13226) StreamPlan for incremental repairs flushing memtables unnecessarily

2017-02-15 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-13226:
---

 Summary: StreamPlan for incremental repairs flushing memtables 
unnecessarily
 Key: CASSANDRA-13226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13226
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Minor
 Fix For: 4.0


Since incremental repairs are run against a fixed dataset, there's no need to 
flush memtables when streaming for them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


svn commit: r1783142 - /cassandra/site/publish/index.html

2017-02-15 Thread mshuler
Author: mshuler
Date: Wed Feb 15 20:07:17 2017
New Revision: 1783142

URL: http://svn.apache.org/viewvc?rev=1783142&view=rev
Log:
Update latest release on home page

Modified:
cassandra/site/publish/index.html

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1783142&r1=1783141&r2=1783142&view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Wed Feb 15 20:07:17 2017
@@ -95,7 +95,7 @@
 
 
-  <a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.9;">Cassandra 3.9 Changelog</a>
+  <a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.10;">Cassandra 3.10 Changelog</a>
   
 
   




[jira] [Created] (CASSANDRA-13225) Best Consistency Level

2017-02-15 Thread Connor Warrington (JIRA)
Connor Warrington created CASSANDRA-13225:
-

 Summary: Best Consistency Level
 Key: CASSANDRA-13225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13225
 Project: Cassandra
  Issue Type: New Feature
Reporter: Connor Warrington
Priority: Minor


When writing data into a cluster there are a few consistency levels to choose 
from. When choosing the consistency level to write with, you are making a 
tradeoff between consistency and failover availability. If you choose 
consistency level ALL, then all replicas have to be up and a write only succeeds 
once all replicas have received it. If you choose consistency level QUORUM, then 
a quorum of replicas have to be up and a write succeeds once a quorum of 
replicas have received it. The tradeoff comes in when more than a quorum of 
nodes are available for the write. We would like a write to succeed only when 
all replicas that are up have received the write, hence the suggestion of "best" 
as a consistency level. This would be available for the existing consistency 
levels. The main idea behind this feature request is that we are okay with a 
replica going down (fault tolerance), but when the cluster is in a good state we 
don't mind waiting for all nodes to get the write. This would let the writer 
operate at the speed of the slowest node instead of potentially getting into a 
state where that slow node falls even further behind. It would also let back 
pressure propagate better through the system: the slowest node likely has back 
pressure it is trying to tell the client about, but if we don't wait for that 
node the writer loses that information.

Example scenarios:
If we have replication factor of 3: 
ALL consistency means 3 replicas have to be up and 3 replicas have to 
successfully get the write. 
QUORUM consistency means 2 replicas have to be up and 2 replicas have to 
successfully get the write. 
BEST_QUORUM consistency means 2 replicas have to be up and all up replicas have to 
successfully get the write.

If 3 replicas are up with replication factor of 3: 
ALL would succeed as all 3 replicas are up and would return success when all 3 
replicas get the write 
QUORUM would succeed as all 3 replicas are up and would return success when 2 
replicas get the write 
BEST_QUORUM would succeed as all 3 replicas are up and would return success 
when all 3 replicas get the write

If 2 replicas are up with replication factor of 3: 
ALL would fail as only 2 replicas are up 
QUORUM would succeed as 2 replicas are up and would return success when 2 
replicas get the write 
BEST_QUORUM would succeed as 2 replicas are up and would return success when 2 
replicas get the write
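
For illustration only, here is a minimal sketch of the acknowledgement rule being 
proposed. The names (BEST_QUORUM, requiredAcks, liveReplicas) are hypothetical 
and not existing Cassandra code; the write is attempted only if a quorum is 
alive, but must then be acked by every live replica:

{code}
// Hypothetical sketch of the proposed semantics: the write can only be
// attempted if at least a quorum of replicas is alive, but it must then be
// acknowledged by *every* live replica before it succeeds.
public final class BestQuorumExample
{
    static int quorum(int replicationFactor)
    {
        return replicationFactor / 2 + 1;
    }

    // Number of acks required for the write to succeed, or -1 if the write
    // cannot be attempted at all because too few replicas are up.
    static int requiredAcks(int replicationFactor, int liveReplicas)
    {
        if (liveReplicas < quorum(replicationFactor))
            return -1;
        return liveReplicas; // wait for every live replica
    }

    public static void main(String[] args)
    {
        System.out.println(requiredAcks(3, 3)); // 3 up -> behaves like ALL (3 acks)
        System.out.println(requiredAcks(3, 2)); // 2 up -> behaves like QUORUM (2 acks)
        System.out.println(requiredAcks(3, 1)); // 1 up -> fails, no quorum available (-1)
    }
}
{code}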



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868315#comment-15868315
 ] 

Jason Brown commented on CASSANDRA-8457:


[~jjordan] That's a fair assessment. I'll table the large messages for the 
moment, and focus my energies a) on getting CASSANDRA-12229 posted and b) 
pinging [~slebresne] for more comments when he can :)

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13215) Cassandra nodes startup time 20x more after upgarding to 3.x

2017-02-15 Thread Viktor Kuzmin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868206#comment-15868206
 ] 

Viktor Kuzmin commented on CASSANDRA-13215:
---

It is really used on the critical path. I think this affects not only startup 
time, but repairs as well... And maybe some other parts...
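
For illustration only, the kind of memoization described in the ticket (see the 
attached simple-cache.patch) could look roughly like the sketch below; the class 
and method names here are hypothetical, not the actual Cassandra internals or the 
attached patch itself:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: remember the result of the expensive boundary/range
// computation per key (e.g. per keyspace) and invalidate it when ring
// ownership changes, instead of recomputing it for every sstable at startup.
public final class DiskBoundaryCache<K, V>
{
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> expensiveComputation;

    public DiskBoundaryCache(Function<K, V> expensiveComputation)
    {
        this.expensiveComputation = expensiveComputation;
    }

    public V get(K key)
    {
        // computeIfAbsent runs the expensive call at most once per key
        return cache.computeIfAbsent(key, expensiveComputation);
    }

    public void invalidateAll()
    {
        // to be called whenever tokens / ring topology change
        cache.clear();
    }
}
{code}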

> Cassandra nodes startup time 20x more after upgarding to 3.x
> 
>
> Key: CASSANDRA-13215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13215
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cluster setup: two datacenters (dc-main, dc-backup).
> dc-main - 9 servers, no vnodes
> dc-backup - 6 servers, vnodes
>Reporter: Viktor Kuzmin
> Attachments: simple-cache.patch
>
>
> CompactionStrategyManager.getCompactionStrategyIndex is called on each sstable 
> at startup. And this function calls StorageService.getDiskBoundaries. And 
> getDiskBoundaries calls AbstractReplicationStrategy.getAddressRanges.
> It appears that the last function can be really slow. In our environment we 
> have 1545 tokens, and with NetworkTopologyStrategy it can make 1545*1545 
> computations in the worst case (maybe I'm wrong, but it really takes a lot of 
> CPU).
> Also this function can affect runtime later, because it is called not only 
> during startup.
> I've tried to implement a simple cache for getDiskBoundaries results and now 
> startup time is about one minute instead of 25m, but I'm not sure if it's a 
> good solution.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868091#comment-15868091
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 2/15/17 4:53 PM:


{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really fully eliminating the issue, only making it 
a factor better. At the other end you still need to materialize a buffer 
containing the message + the object graph you are going to materialize. This is 
different from how things worked previously where we had a dedicated thread 
that would read fixed size buffers and then materialize just the object graph 
from that. 

To really solve this we need to be able to avoid buffering the entire message 
at both sending and receiving side. The buffering is worse because we are 
allocating contiguous memory and not just doubling the space impact. We could 
make it incrementally better by using chains of fixed size buffers so there is 
less external fragmentation and allocator overhead. That's still committing 
additional memory compared to pre-8457, but at least it's being committed in a 
more reasonable way.

I think the most elegant solution is to use a lightweight thread 
implementation. What we will probably be boxed into doing is making the 
serialization of result data and other large message portions able to yield. 
This will bound the memory committed to large messages to the largest atomic 
portion we have to serialize (Cell?).

Something like an output stream being able to say "shouldYield". If you 
continue to write it will continue to buffer and not fail, but use memory. Then 
serializers can implement a return value for serialize which indicates whether 
there is more to serialize. You would check shouldYield after each Cell or some 
unit of work when serializing. Most of these large things being serialized are 
iterators which could be stashed away. The trick will be that most 
serialization is stateless, and objects are serialized concurrently so you 
can't store the serialization state in the object being serialized safely.

We also need to solve incremental non-blocking deserialization on the receive 
side and that I don't know. That's even trickier because you don't control how 
the message is fragmented so you can't insert the yield points trivially.


was (Author: aweisberg):
{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With 

[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868140#comment-15868140
 ] 

Jeremiah Jordan commented on CASSANDRA-8457:


Is adding more complexity to this already complex patch really the answer here 
(handling large messages)?  It seems to me like we should finish up the patch 
as is and make "handle large messages" an optimization ticket after we get this 
in.

Getting streaming to work with this seems like a better place to spend our time.

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-15 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868143#comment-15868143
 ] 

Blake Eggleston commented on CASSANDRA-13153:
-

bq. shouldn't we get rid of anti-compactions for full repairs in 2.2+ as well?

I think we're ok leaving them as is. Using pre-4.0 incremental repairs is the 
root cause of this. If operators stop using incremental repairs, there's no harm 
in doing an anticompaction after a full repair. The only scenario where it would 
cause problems is using incremental repair for the first time after upgrading to 
4.0, when the repaired datasets are very likely inconsistent. This could be 
addressed by just running a final full repair on the upgraded cluster. As part 
of CASSANDRA-9143, full repairs no longer perform anticompaction, and streamed 
sstables include the repairedAt time, which would bring the repaired and 
unrepaired datasets in sync.

So having said all that, it seems like we should recommend that users who 
delete data:
1. Stop using incremental repair (pre-4.0)
2. Run a full repair after upgrading to 4.0 before using incremental repair 
again

We should also recommend that, even if users don't delete data, they take 
a look at the amount of streaming their incremental repair is doing, and decide 
if it might be less expensive to just do full repairs instead.

Thoughts?

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace.  Since Nodes #1 and #3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) 

[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868091#comment-15868091
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 2/15/17 4:42 PM:


{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really fully eliminating the issue, only making it 
a factor better. At the other end you still need to materialize a buffer 
containing the message + the object graph you are going to materialize. This is 
different from how things worked previously where we had a dedicated thread 
that would read fixed size buffers and then materialize just the object graph 
from that. 

To really solve this we need to be able to avoid buffering the entire message 
at both sending and receiving side. The buffering is worse because we are 
allocating contiguous memory and not just doubling the space impact. We could 
make it incrementally better by using chains of fixed size buffers so there is 
less external fragmentation and allocator overhead. That's still committing 
additional memory compared to pre-8457, but at least it's being committed in a 
more reasonable way.

I think the most elegant solution is to use a lightweight thread 
implementation. What we will probably be boxed into doing is making the 
serialization of result data and other large message portions able to yield. 
This will bound the memory committed to large messages to the largest atomic 
portion we have to serialize (Cell?).

Something like an output stream being able to say "shouldYield". If you 
continue to write it will continue to buffer and not fail, but use memory. Then 
serializers can implement a return value for serialize which indicates whether 
there is more to serialize. You would check shouldYield after each Cell or some 
unit of work when serializing. Most of these large things being serialized are 
iterators. The trick will be that most serialization is stateless, and objects 
are serialized concurrently so you can't store the serialization state in the 
object being serialized safely.

We also need to solve incremental non-blocking deserialization on the receive 
side and that I don't know. That's even trickier because you don't control how 
the message is fragmented so you can't insert the yield points trivially.


was (Author: aweisberg):
{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really 

[jira] [Commented] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-02-15 Thread Sandeep Tamhankar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868130#comment-15868130
 ] 

Sandeep Tamhankar commented on CASSANDRA-13218:
---

Ok, sounds reasonable. Thanks.

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-9639) size_estimates is inacurate in multi-dc clusters

2017-02-15 Thread Scott Bale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868113#comment-15868113
 ] 

Scott Bale commented on CASSANDRA-9639:
---

Thanks [~pauloricardomg]!

> size_estimates is inacurate in multi-dc clusters
> 
>
> Key: CASSANDRA-9639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9639
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.0.11
>
>
> CASSANDRA-7688 introduced size_estimates to replace the thrift 
> describe_splits_ex command.
> Users have reported seeing estimates that are widely off in multi-dc clusters.
> system.size_estimates show the wrong range_start / range_end



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13224) testall failure in org.apache.cassandra.db.compaction.CompactionStrategyManagerPendingRepairTest.cleanupCompactionFinalized

2017-02-15 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-13224:
-

 Summary: testall failure in 
org.apache.cassandra.db.compaction.CompactionStrategyManagerPendingRepairTest.cleanupCompactionFinalized
 Key: CASSANDRA-13224
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13224
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Sean McCarthy
 Attachments: 
TEST-org.apache.cassandra.db.compaction.CompactionStrategyManagerPendingRepairTest.log

example failure:

http://cassci.datastax.com/job/trunk_testall/1407/testReport/org.apache.cassandra.db.compaction/CompactionStrategyManagerPendingRepairTest/cleanupCompactionFinalized

{code}
Stacktrace

junit.framework.AssertionFailedError: 
at 
org.apache.cassandra.db.compaction.CompactionStrategyManagerPendingRepairTest.cleanupCompactionFinalized(CompactionStrategyManagerPendingRepairTest.java:235)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868091#comment-15868091
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 2/15/17 4:05 PM:


{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really fully eliminating the issue, only making it 
a factor better. At the other end you still need to materialize a buffer 
containing the message + the object graph you are going to materialize. This is 
different from how things worked previously where we had a dedicated thread 
that would read fixed size buffers and then materialize just the object graph 
from that. 

To really solve this we need to be able to avoid buffering the entire message 
at both sending and receiving side. The buffering is worse because we are 
allocating contiguous memory and not just doubling the space impact. We could 
make it incrementally better by using chains of fixed size buffers so there is 
less external fragmentation and allocator overhead. That's still committing 
additional memory compared to pre-8457, but at least it's being committed in a 
more reasonable way.

I think the most elegant solution is to use a lightweight thread 
implementation. What we will probably be boxed into doing is making the 
serialization of result data and other large message portions able to yield. 
This will bound the memory committed to large messages to the largest atomic 
portion we have to serialize (Cell?).

Something like an output stream being able to say "shouldYield". If you 
continue to write it will continue to buffer and not fail, but use memory. Then 
serializers can implement a return value for serialize which indicates whether 
there is more to serialize. You would check shouldYield after each Cell or some 
unit of work when serializing. Most of these large things being serialized are 
iterators. The trick will be that most serialization is stateless, and objects 
are serialized concurrently so you can't store the serialization state in the 
object being serialized safely.

We also need to solve incremental non-blocking deserialization on the receive 
side and that I don't know. That's even trickier because you don't control how 
the message is fragmented so you can't insert the yield points trivially.


was (Author: aweisberg):
{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really 

[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2017-02-15 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868091#comment-15868091
 ] 

Ariel Weisberg commented on CASSANDRA-8457:
---

{quote}
Thus, I propose to bring back the SwappingByteBufDataOutputStreamPlus that I 
had in an earlier commit. To recap, the basic idea is provide a DataOutputPlus 
that has a backing ByteBuffer that is written to, and when it is filled, it is 
written to the netty context and flushed, then allocate a new buffer for more 
writes - kinda similar to a BufferedOutputStream, but replacing the backing 
buffer when full. Bringing this idea back is also what underpins one of the 
major performance things I wanted to address: buffering up smaller messages 
into one buffer to avoid going back to the netty allocator for every tiny 
buffer we might need - think Mutation acks.
{quote}

What thread is going to be writing to the output stream to serialize the 
messages? If it's a Netty thread you can't block inside a serialization method 
waiting for the bytes to drain to the socket that is not keeping up. You also 
can't wind out of the serialization method and continue it later.

If it's an application thread then it's no longer asynchronous and a slow 
connection can block the application and prevent it from doing say a quorum 
write to just the fast nodes. You would also need to lock during serialization 
or queue concurrently sent messages behind the one currently being written.

With large messages we aren't really fully eliminating the issue, only making it 
a factor better. At the other end you still need to materialize a buffer 
containing the message + the object graph you are going to materialize. This is 
different from how things worked previously where we had a dedicated thread 
that would read fixed size buffers and then materialize just the object graph 
from that. 

To really solve this we need to be able to avoid buffering the entire message 
at both sending and receiving side. The buffering is worse because we are 
allocating contiguous memory and not just doubling the space impact. We could 
make it incrementally better by using chains of fixed size buffers so there is 
less external fragmentation and allocator overhead. That's still committing 
additional memory compared to pre-8457, but at least it's being committed in a 
more reasonable way.

I think the most elegant solution is to use a lightweight thread 
implementation. What we will probably be boxed into doing is making the 
serialization of result data and other large message portions able to yield. 
This will bound the memory committed to large messages to the largest atomic 
portion we have to serialize (Cell?).

Something like an output stream being able to say "shouldYield". If you 
continue to write it will continue to buffer and not fail, but use memory. Then 
serializers can implement a return value for serialize which indicates whether 
there is more to serialize. You would check shouldYield after each Cell or some 
unit of work when serializing. Most of these large things being serialized are 
iterators. The trick will be that most serialization is stateless, and objects 
are serialized concurrently so you can't store the serialization state in the 
object being serialized safely.
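
For illustration only, the "shouldYield" idea could look roughly like the sketch 
below. All names are hypothetical, not an existing Cassandra or Netty API; the 
point is just that writes never fail, they buffer, and the serializer is expected 
to check shouldYield() at safe boundaries:

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of an output stream that can signal back-pressure.
// Writes always succeed (they just buffer), but a serializer is expected to
// check shouldYield() at convenient boundaries (e.g. after each Cell), stash
// its iterator, and resume once the channel has drained.
public class YieldingOutputStream extends OutputStream
{
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final int softLimitBytes;

    public YieldingOutputStream(int softLimitBytes)
    {
        this.softLimitBytes = softLimitBytes;
    }

    @Override
    public void write(int b) throws IOException
    {
        buffer.write(b); // never blocks, never fails; just consumes memory
    }

    // true once the buffered bytes exceed the soft limit; the caller should
    // stop serializing and yield instead of committing more memory
    public boolean shouldYield()
    {
        return buffer.size() >= softLimitBytes;
    }
}
{code}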

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.x
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2017-02-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12886:

Reviewer: Yuki Morishita

> Streaming failed due to SSL Socket connection reset
> ---
>
> Key: CASSANDRA-12886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12886
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bing Wu
>Assignee: Paulo Motta
> Attachments: debug.log.2016-11-10_2319.gz
>
>
> While running "nodetool repair", I see many instances of 
> "javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
> system.logs on some nodes in the cluster. Timestamps correspond to streaming 
> source/initiator's error messages of "sync failed between ..."
> Setup: 
> - Cassandra 3.7.01 
> - CentOS 6.7 in AWS (multi-region)
> - JDK version: {noformat}
> java version "1.8.0_102"
> Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
> {noformat}
> - cassandra.yaml:
> {noformat}
> server_encryption_options:
> internode_encryption: all
> keystore: [path]
> keystore_password: [password]
> truststore: [path]
> truststore_password: [password]
> # More advanced defaults below:
> # protocol: TLS
> # algorithm: SunX509
> # store_type: JKS
> # cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
> require_client_auth: false
> {noformat}
> Error messages in system.log on the target host:
> {noformat}
> ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
> StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
> Streaming error occurred on session with peer 54.247.111.232
> javax.net.ssl.SSLException: Connection has been shutdown: 
> javax.net.ssl.SSLException: java.net.SocketException: Connection reset
> at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
> ~[na:1.8.0_102]
> at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
> ~[na:1.8.0_102]
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
> ~[na:1.8.0_102]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[na:1.8.0_102]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[na:1.8.0_102]
> at 
> org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
> Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
> reset
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13081) Confusing StreamReader.StreamDeserializer.cleanup leftover? code

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13081:

Resolution: Fixed
  Reviewer: Sylvain Lebresne
Status: Resolved  (was: Patch Available)

Thanks, merged as d81dc27c7bde7c44a3c00526d803d9d5c7fe2604 (with fixed nit).

> Confusing StreamReader.StreamDeserializer.cleanup leftover? code
> 
>
> Key: CASSANDRA-13081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13081
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Assignee: Paulo Motta
>Priority: Trivial
> Fix For: 4.x
>
>
> The cleanup method in StreamReader.StreamDeserializer does stuff in the cases 
> when the field 'in' is a RewindableDataInputStreamPlus typed object.
> Given that it is a 
>  this.in = new DataInputPlus.DataInputStreamPlus(in);
> that can never be. I'm assuming this was left over from some previous 
> refactor or such. Assuming we can just delete this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13081) Remove pre-3.0 streaming compatibility code for 4.0

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13081:

Fix Version/s: (was: 4.x)
   4.0

> Remove pre-3.0 streaming compatibility code for 4.0
> ---
>
> Key: CASSANDRA-13081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13081
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Assignee: Paulo Motta
>Priority: Trivial
> Fix For: 4.0
>
>
> The cleanup method in StreamReader.StreamDeserializer does stuff in the cases 
> when the field 'in' is a RewindableDataInputStreamPlus typed object.
> Given that it is a 
>  this.in = new DataInputPlus.DataInputStreamPlus(in);
> that can never be. I'm assuming this was left over from some previous 
> refactor or such. Assuming we can just delete this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13081) Remove pre-3.0 streaming compatibility code for 4.0

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13081:

Summary: Remove pre-3.0 streaming compatibility code for 4.0  (was: 
Confusing StreamReader.StreamDeserializer.cleanup leftover? code)

> Remove pre-3.0 streaming compatibility code for 4.0
> ---
>
> Key: CASSANDRA-13081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13081
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Assignee: Paulo Motta
>Priority: Trivial
> Fix For: 4.0
>
>
> The cleanup method in StreamReader.StreamDeserializer does stuff in the cases 
> when the field 'in' is a RewindableDataInputStreamPlus typed object.
> Given that it is a 
>  this.in = new DataInputPlus.DataInputStreamPlus(in);
> that can never be. I'm assuming this was left over from some previous 
> refactor or such. Assuming we can just delete this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


cassandra git commit: Remove pre-3.0 streaming compatibility code for 4.0

2017-02-15 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/trunk 31eac784f -> d81dc27c7


Remove pre-3.0 streaming compatibility code for 4.0

Patch by Paulo Motta; Reviewed by Sylvain Lebresne for CASSANDRA-13081


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d81dc27c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d81dc27c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d81dc27c

Branch: refs/heads/trunk
Commit: d81dc27c7bde7c44a3c00526d803d9d5c7fe2604
Parents: 31eac78
Author: Paulo Motta 
Authored: Wed Jan 4 12:04:09 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 13:45:53 2017 -0200

--
 CHANGES.txt |  1 +
 .../cassandra/streaming/StreamReader.java   | 37 ++--
 .../compress/CompressedStreamReader.java| 10 ++
 3 files changed, 5 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d81dc27c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6efcaa3..0a76400 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
  * Add support for + and - operations on dates (CASSANDRA-11936)
  * Fix consistency of incrementally repaired data (CASSANDRA-9143)
  * Increase commitlog version (CASSANDRA-13161)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d81dc27c/src/java/org/apache/cassandra/streaming/StreamReader.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamReader.java 
b/src/java/org/apache/cassandra/streaming/StreamReader.java
index fdc2ae2..7d00e48 100644
--- a/src/java/org/apache/cassandra/streaming/StreamReader.java
+++ b/src/java/org/apache/cassandra/streaming/StreamReader.java
@@ -40,7 +40,6 @@ import org.apache.cassandra.io.sstable.SSTableSimpleIterator;
 import org.apache.cassandra.io.sstable.format.RangeAwareSSTableWriter;
 import org.apache.cassandra.io.sstable.format.SSTableFormat;
 import org.apache.cassandra.io.sstable.format.Version;
-import org.apache.cassandra.io.util.RewindableDataInputStreamPlus;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.streaming.messages.FileMessageHeader;
 import org.apache.cassandra.utils.ByteBufferUtil;
@@ -118,20 +117,14 @@ public class StreamReader
 }
 catch (Throwable e)
 {
-if (deserializer != null)
-logger.warn("[Stream {}] Error while reading partition {} from 
stream on ks='{}' and table='{}'.",
-session.planId(), deserializer.partitionKey(), 
cfs.keyspace.getName(), cfs.getTableName(), e);
+logger.warn("[Stream {}] Error while reading partition {} from 
stream on ks='{}' and table='{}'.",
+session.planId(), deserializer.partitionKey(), 
cfs.keyspace.getName(), cfs.getTableName(), e);
 if (writer != null)
 {
 writer.abort(e);
 }
 throw Throwables.propagate(e);
 }
-finally
-{
-if (deserializer != null)
-deserializer.cleanup();
-}
 }
 
 protected SerializationHeader getHeader(TableMetadata metadata)
@@ -166,13 +159,6 @@ public class StreamReader
 
 public static class StreamDeserializer extends 
UnmodifiableIterator implements UnfilteredRowIterator
 {
-public static final int INITIAL_MEM_BUFFER_SIZE = 
Integer.getInteger("cassandra.streamdes.initial_mem_buffer_size", 32768);
-public static final int MAX_MEM_BUFFER_SIZE = 
Integer.getInteger("cassandra.streamdes.max_mem_buffer_size", 1048576);
-public static final int MAX_SPILL_FILE_SIZE = 
Integer.getInteger("cassandra.streamdes.max_spill_file_size", 
Integer.MAX_VALUE);
-
-public static final String BUFFER_FILE_PREFIX = "buf";
-public static final String BUFFER_FILE_SUFFIX = "dat";
-
 private final TableMetadata metadata;
 private final DataInputPlus in;
 private final SerializationHeader header;
@@ -279,24 +265,5 @@ public class StreamReader
 public void close()
 {
 }
-
-/* We have a separate cleanup method because sometimes close is called 
before exhausting the
-   StreamDeserializer (for instance, when enclosed in an 
try-with-resources wrapper, such as in
-   BigTableWriter.append()).
- */
-public void cleanup()
-{
-if (in instanceof RewindableDataInputStreamPlus)
-{
-try
-{
-

[jira] [Commented] (CASSANDRA-13081) Confusing StreamReader.StreamDeserializer.cleanup leftover? code

2017-02-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868031#comment-15868031
 ] 

Sylvain Lebresne commented on CASSANDRA-13081:
--

Sure, and +1, but there is a {{if (deserializer != null)}} left in both 
{{StreamReader}} and {{CompressedStreamReader}} (that guards the printing of a 
warning) that can go away too.

> Confusing StreamReader.StreamDeserializer.cleanup leftover? code
> 
>
> Key: CASSANDRA-13081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13081
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Assignee: Paulo Motta
>Priority: Trivial
> Fix For: 4.x
>
>
> The cleanup method in StreamReader.StreamDeserializer does stuff in the cases 
> when the field 'in' is a RewindableDataInputStreamPlus typed object.
> Given that it is a 
>  this.in = new DataInputPlus.DataInputStreamPlus(in);
> that can never be. I'm assuming this was left over from some previous 
> refactor or such. Assuming we can just delete this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2017-02-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868022#comment-15868022
 ] 

Sylvain Lebresne commented on CASSANDRA-9143:
-

Probably not a big deal but noticed that after this patch, 
{{CompactionManager.submitAntiCompaction()}} is now unused. Assuming that was 
intended, can one of you guys maybe clean it up ([~bdeggleston] and [~krummas])?

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas due to many reasons 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it. 
> 1) Send anticompaction request to all replicas. This can be done at session 
> level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get positive ack from all replicas, coordinator will send another 
> message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" 
> markRepaired message if required. 
> Also the sstables which are streaming can be marked as repaired like it is 
> done now. 
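
For illustration only, a coordinator-side sketch of the two-phase flow described 
above; the interface and method names are hypothetical, not the actual repair 
code:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical coordinator-side sketch of the two-phase flow described above:
// 1) ask every replica to anticompact (split sstables) without marking repaired,
// 2) only after all replicas ack, send a markRepaired message.
interface Replica
{
    CompletableFuture<Void> requestAnticompaction(long sessionId); // split only
    CompletableFuture<Void> markRepaired(long sessionId);          // flip repairedAt
}

final class TwoPhaseAnticompaction
{
    static CompletableFuture<Void> run(List<Replica> replicas, long sessionId)
    {
        List<CompletableFuture<Void>> splits = new ArrayList<>();
        for (Replica r : replicas)
            splits.add(r.requestAnticompaction(sessionId));

        // phase 2 only starts once every replica has acked phase 1,
        // shrinking the window in which replicas can disagree about repairedAt
        return CompletableFuture.allOf(splits.toArray(new CompletableFuture[0]))
                .thenCompose(done -> {
                    List<CompletableFuture<Void>> marks = new ArrayList<>();
                    for (Replica r : replicas)
                        marks.add(r.markRepaired(sessionId));
                    return CompletableFuture.allOf(marks.toArray(new CompletableFuture[0]));
                });
    }
}
{code}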



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13081) Confusing StreamReader.StreamDeserializer.cleanup leftover? code

2017-02-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867977#comment-15867977
 ] 

Paulo Motta commented on CASSANDRA-13081:
-

Mind reviewing [~slebresne]? This is a leftover from CASSANDRA-12716.

> Confusing StreamReader.StreamDeserializer.cleanup leftover? code
> 
>
> Key: CASSANDRA-13081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13081
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Dave Brosius
>Assignee: Paulo Motta
>Priority: Trivial
> Fix For: 4.x
>
>
> The cleanup method in StreamReader.StreamDeserializer does stuff in the cases 
> when the field 'in' is a RewindableDataInputStreamPlus typed object.
> Given that it is a 
>  this.in = new DataInputPlus.DataInputStreamPlus(in);
> that can never be. I'm assuming this was left over from some previous 
> refactor or such. Assuming we can just delete this?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13126) native transport protocol corruption when using SSL

2017-02-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867950#comment-15867950
 ] 

Jason Brown commented on CASSANDRA-13126:
-

I agree with [~tvdw]'s assessment here: once part of the data stream is lost, 
you're pretty much screwed. It's possible there's some route to recovery, but 
that probably includes some degree of luck and fortuitous timing. I think the 
simplest solution is to close the channel/socket, as I suspect error recovery 
code might be tricky and there may be security holes in that (I am not a 
security expert so I may be wrong).

bq. Wouldn't frequently reconnecting clients possibly cause more memory 
pressure in this case and further escalate the issue?

Quite possibly, although {{ConnectionLimitHandler}} might be able to help; even 
that will have some costs before it executes in a channel.

> native transport protocol corruption when using SSL
> ---
>
> Key: CASSANDRA-13126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13126
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tom van der Woerdt
>Priority: Critical
>
> This is a series of conditions that can result in client connections becoming 
> unusable.
> 1) Cassandra GC must be well-tuned, to have short GC pauses every minute or so
> 2) *client* SSL must be enabled and transmitting a significant amount of data
> 3) Cassandra must run with the default library versions
> 4) disableexplicitgc must be set (this is the default in the current 
> cassandra-env.sh)
> This ticket relates to CASSANDRA-13114 which is a possible workaround (but 
> not a fix) for the SSL requirement to trigger this bug.
> * Netty allocates nio.ByteBuffers for every outgoing SSL message.
> * ByteBuffers consist of two parts, the jvm object and the off-heap object. 
> The jvm object is small and goes with regular GC cycles, the off-heap object 
> gets freed only when the small jvm object is freed. To avoid exploding the 
> native memory use, the jvm defaults to limiting its allocation to the max 
> heap size. Allocating beyond that limit triggers a System.gc(), a retry, and 
> potentially an exception.
> * System.gc is a no-op under disableexplicitgc
> * This means ByteBuffers are likely to throw an exception when too many 
> objects are being allocated
> * The netty version shipped in Cassandra is broken when using SSL (see 
> CASSANDRA-13114) and causes significantly too many bytebuffers to be 
> allocated.
> This gets more complicated though.
> When /some/ clients use SSL, and others don't, the clients not using SSL can 
> still be affected by this bug, as bytebuffer starvation caused by ssl will 
> leak to other users.
> ByteBuffers are used very early on in the native protocol as well. Before 
> even being able to decode the network protocol, this error can be thrown :
> {noformat}
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct 
> buffer memory
> {noformat}
> Note that this comes back with stream_id 0, so clients end up waiting for the 
> client timeout before the query is considered failed and retried.
> A few frames later on the same connection, this appears:
> {noformat}
> Provided frame does not appear to be Snappy compressed
> {noformat}
> And after that everything errors out with:
> {noformat}
> Invalid or unsupported protocol version (54); the lowest supported version is 
> 3 and the greatest is 4
> {noformat}
> So this bug ultimately affects the binary protocol and the connection becomes 
> useless if not downright dangerous.
> I think there are several things that need to be done here.
> * CASSANDRA-13114 should be fixed (easy, and probably needs to land in 3.0.11 
> anyway)
> * Connections should be closed after a DecoderException
> * DisableExplicitGC should be removed from the default JVM arguments
> Any of these three would limit the impact to clients.
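
For illustration only, the direct-buffer behaviour described above can be 
reproduced with a tiny standalone program (not Cassandra code); run it with e.g. 
{{-XX:MaxDirectMemorySize=64m}} and compare the behaviour with and without 
{{-XX:+DisableExplicitGC}}:

{code}
import java.nio.ByteBuffer;

// Standalone illustration of the failure mode described above. Every buffer is
// dropped immediately, so its native memory can only be reclaimed once the tiny
// heap-side ByteBuffer object is collected. When the total hits
// -XX:MaxDirectMemorySize the JVM falls back to System.gc() to force that
// collection; with -XX:+DisableExplicitGC that call is a no-op, so the loop is
// far more likely to die with "OutOfMemoryError: Direct buffer memory".
// Try: java -XX:MaxDirectMemorySize=64m [-XX:+DisableExplicitGC] DirectBufferPressure
public class DirectBufferPressure
{
    public static void main(String[] args)
    {
        long allocated = 0;
        try
        {
            for (int i = 0; i < 10_000; i++)
            {
                ByteBuffer.allocateDirect(1 << 20); // 1 MiB, discarded right away
                allocated++;
            }
            System.out.println("Survived " + allocated + " MiB of direct allocations");
        }
        catch (OutOfMemoryError e)
        {
            System.out.println("Failed after " + allocated + " MiB: " + e);
        }
    }
}
{code}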



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12886:

Status: Patch Available  (was: Open)

This is hard to reproduce, but I hit a similar issue when working on 
CASSANDRA-11841. What is probably happening is a race where the handler sender 
thread is started and pushes a message to the socket before the init message is 
sent, which breaks the connection on the receiver side. To avoid this, we must 
send the init message first, before starting the handler thread. This should 
already be fixed on 3.10 by CASSANDRA-11841, but this fix is for 2.2+.
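
For illustration only, the ordering being fixed amounts to something like the 
sketch below; the class and method names are illustrative, not the actual 2.2 
ConnectionHandler code, so see the branch below for the real patch:

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Illustrative only, not the actual 2.2 ConnectionHandler code. The point is
// purely the ordering: the init/handshake message must be written to the socket
// before the sender thread is allowed to push any other message, otherwise the
// receiver sees a non-init frame first and tears the connection down.
public class StreamConnectionSketch
{
    private final Socket socket;

    public StreamConnectionSketch(Socket socket)
    {
        this.socket = socket;
    }

    public void initiate(byte[] initMessage) throws IOException
    {
        // 1. complete the handshake synchronously on the calling thread
        OutputStream out = socket.getOutputStream();
        out.write(initMessage);
        out.flush();

        // 2. only now start the sender thread, so it cannot race ahead of init
        Thread sender = new Thread(this::drainOutgoingQueue, "stream-sender");
        sender.start();
    }

    private void drainOutgoingQueue()
    {
        // placeholder for the loop that writes queued stream messages
    }
}
{code}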

||2.2||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-12886]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-12886-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-12886-dtest/lastCompletedBuild/testReport/]|

> Streaming failed due to SSL Socket connection reset
> ---
>
> Key: CASSANDRA-12886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12886
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bing Wu
>Assignee: Paulo Motta
> Attachments: debug.log.2016-11-10_2319.gz
>
>
> While running "nodetool repair", I see many instances of 
> "javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
> system.logs on some nodes in the cluster. Timestamps correspond to streaming 
> source/initiator's error messages of "sync failed between ..."
> Setup: 
> - Cassandra 3.7.01 
> - CentOS 6.7 in AWS (multi-region)
> - JDK version: {noformat}
> java version "1.8.0_102"
> Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
> {noformat}
> - cassandra.yaml:
> {noformat}
> server_encryption_options:
> internode_encryption: all
> keystore: [path]
> keystore_password: [password]
> truststore: [path]
> truststore_password: [password]
> # More advanced defaults below:
> # protocol: TLS
> # algorithm: SunX509
> # store_type: JKS
> # cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
> require_client_auth: false
> {noformat}
> Error messages in system.log on the target host:
> {noformat}
> ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
> StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
> Streaming error occurred on session with peer 54.247.111.232
> javax.net.ssl.SSLException: Connection has been shutdown: 
> javax.net.ssl.SSLException: java.net.SocketException: Connection reset
> at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
> ~[na:1.8.0_102]
> at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
> ~[na:1.8.0_102]
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
> ~[na:1.8.0_102]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[na:1.8.0_102]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[na:1.8.0_102]
> at 
> org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
> Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
> reset
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-11303) New inbound throughput parameters for streaming

2017-02-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867920#comment-15867920
 ] 

Jason Brown commented on CASSANDRA-11303:
-

[~pauloricardomg] heh, I was partly waiting on this ticket to see how it would 
affect my CASSANDRA-12229 work :) That being said, let me take a more thorough 
review of the work here and see how it will affect/be affected by CASSANDRA-12229.

> New inbound throughput parameters for streaming
> ---
>
> Key: CASSANDRA-11303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Satoshi Konno
>Assignee: Satoshi Konno
>Priority: Minor
> Attachments: 11303_inbound_limit_debug_20160419.log, 
> 11303_inbound_nolimit_debug_20160419.log, 
> 11303_inbound_patch_for_trunk_20160419.diff, 
> 11303_inbound_patch_for_trunk_20160525.diff, 
> 11303_inbound_patch_for_trunk_20160704.diff, 
> 200vs40inboundstreamthroughput.png, cassandra_inbound_stream.diff
>
>
> Hi,
> To specify the stream throughput of a node more precisely, I would like to add 
> the following new inbound parameters to cassandra.yaml, mirroring the existing 
> outbound parameters:
> - stream_throughput_inbound_megabits_per_sec
> - inter_dc_stream_throughput_inbound_megabits_per_sec
> We use only the existing outbound parameters now, but it is difficult to 
> control the total throughput of a node. In our production network, critical 
> alerts fire when a node exceeds the specified total throughput, which is the 
> sum of the input and output throughputs.
> In our operation of Cassandra, the alerts occur during bootstrap or repair 
> when a new node is added. In the worst case, we have to stop the node that 
> exceeds the limit.
> I have attached a patch under consideration. I would like to add a new limiter 
> class, StreamInboundRateLimiter, and use it in the StreamDeserializer class. I 
> use Row::dataSize() to get the input throughput in 
> StreamDeserializer::newPartition(), but I am not sure whether dataSize() 
> returns the correct data size.
> Can someone please tell me how to do it?
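
For context, a minimal sketch of what such an inbound limiter could look like, 
assuming Guava's RateLimiter (already shipped with Cassandra). The class name 
matches the one proposed above, but the hard-coded rate and the yaml wiring are 
placeholders, not the attached patch:

{code}
import com.google.common.util.concurrent.RateLimiter;

// Illustrative sketch, not the attached patch: a process-wide inbound limiter that
// the stream deserializer would call with the number of bytes it has just consumed.
public final class StreamInboundRateLimiter
{
    // placeholder: 200 megabits/s expressed as bytes/s (1 megabit = 125,000 bytes);
    // in the proposal this would come from a new cassandra.yaml option such as
    // stream_throughput_inbound_megabits_per_sec
    private static final RateLimiter LIMITER = RateLimiter.create(200.0 * 125_000);

    private StreamInboundRateLimiter() {}

    public static void acquire(int bytes)
    {
        if (bytes > 0)
            LIMITER.acquire(bytes); // blocks until the inbound byte budget allows it
    }
}
{code}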



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13223) Unable to compute when histogram overflowed

2017-02-15 Thread Vladimir Bukhtoyarov (JIRA)
Vladimir Bukhtoyarov created CASSANDRA-13223:


 Summary: Unable to compute when histogram overflowed
 Key: CASSANDRA-13223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13223
 Project: Cassandra
  Issue Type: Bug
Reporter: Vladimir Bukhtoyarov
Priority: Minor


DecayingEstimatedHistogramReservoir throws an exception when a value above the 
maximum recordable value is added to the reservoir. This is very undesirable 
behavior, because functionality like logging or monitoring should never fail with 
an exception. The current behavior of DecayingEstimatedHistogramReservoir violates 
the contract of 
[Reservoir|https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/Reservoir.java],
 whose javadoc says nothing about implementations being allowed to throw from the 
getSnapshot method. As a result, all Dropwizard/Metrics reporters are broken, 
because nobody expects a metric to throw on read; for example, our monitoring 
pipeline fails with:
{noformat}
com.fasterxml.jackson.databind.JsonMappingException: Unable to compute when 
histogram overflowed (through reference chain: 
java.util.UnmodifiableSortedMap["org.apache.cassandra.metrics.Table
.ColUpdateTimeDeltaHistogram.all"])
at 
com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:339)
at 
com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:299)
at 
com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:342)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(MapSerializer.java:620)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:519)
at 
com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:31)
at 
com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
at 
com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2436)
at 
com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:355)
at 
com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1442)
at 
com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:188)
at 
com.codahale.metrics.json.MetricsModule$MetricRegistrySerializer.serialize(MetricsModule.java:171)
at 
com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
at 
com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1428)
at 
com.fasterxml.jackson.databind.ObjectWriter._configAndWriteValue(ObjectWriter.java:1129)
at 
com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:967)
at 
com.codahale.metrics.servlets.MetricsServlet.doGet(MetricsServlet.java:176)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
at com.ringcentral.slf4j.CleanMDCFilter.doFilter(CleanMDCFilter.java:18)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at ...
{noformat}
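
As a purely hypothetical workaround sketch (not something proposed in this ticket), 
a wrapper reservoir could swallow the overflow and hand reporters an empty snapshot 
instead of propagating the error. It assumes the overflow surfaces as an 
IllegalStateException at snapshot time, as described above, and uses Dropwizard's 
UniformSnapshot (metrics 3.1+) as the fallback:

{code}
import com.codahale.metrics.Reservoir;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.UniformSnapshot;

// Hypothetical defensive wrapper: getSnapshot() never throws, so JSON/Graphite/etc.
// reporters keep working even if the underlying histogram has overflowed.
public class OverflowTolerantReservoir implements Reservoir
{
    private final Reservoir delegate;

    public OverflowTolerantReservoir(Reservoir delegate)
    {
        this.delegate = delegate;
    }

    public int size()
    {
        return delegate.size();
    }

    public void update(long value)
    {
        delegate.update(value);
    }

    public Snapshot getSnapshot()
    {
        try
        {
            return delegate.getSnapshot();
        }
        catch (IllegalStateException e)
        {
            // e.g. "Unable to compute when histogram overflowed": report an empty
            // snapshot rather than breaking the whole metrics pipeline
            return new UniformSnapshot(new long[0]);
        }
    }
}
{code}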

[jira] [Updated] (CASSANDRA-13222) Paging with reverse queries and static columns may return incorrectly sized pages

2017-02-15 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-13222:

Reproduced In: 2.2.8, 2.1.16  (was: 2.1.16, 2.2.8)
   Status: Patch Available  (was: In Progress)

As expected, this doesn't repro on 3.0+, so I haven't pushed branches for 3.11 
or trunk; like the 3.0 branch, they would only contain the new test.

||branch||testall||dtest||
|[13222-2.1|https://github.com/beobal/cassandra/tree/13222-2.1]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-2.1-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-2.1-dtest]|
|[13222-2.2|https://github.com/beobal/cassandra/tree/13222-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-2.2-dtest]|
|[13222-3.0|https://github.com/beobal/cassandra/tree/13222-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-13222-3.0-dtest]|


> Paging with reverse queries and static columns may return incorrectly sized 
> pages
> -
>
> Key: CASSANDRA-13222
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13222
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>
> There are 2 specialisations of {{ColumnCounter}} that deal with static 
> columns differently depending on the order of iteration through the column 
> family and which impl is used generally depends on whether or not the 
> {{ColumnFilter}} in use is reversed. However, the base method 
> {{ColumnCounter::countAll}} always uses forward iteration, which can result 
> in overcounting when the query is reversed and there are statics involved. In 
> turn, this leads to incorrectly sized pages being returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13222) Paging with reverse queries and static columns may return incorrectly sized pages

2017-02-15 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-13222:

Description: There are 2 specialisations of {{ColumnCounter}} that deal 
with static columns differently depending on the order of iteration through the 
column family and which impl is used generally depends on whether or not the 
{{ColumnFilter}} in use is reversed. However, the base method 
{{ColumnCounter::countAll}} always uses forward iteration, which can result in 
overcounting when the query is reversed and there are statics involved. In 
turn, this leads to incorrectly sized pages being returned to the client.  
(was: There are 2 specialisations of `ColumnCounter` that deal with static 
columns differently depending on the order of iteration through the column 
family and which impl is used generally depends on whether or not the 
`ColumnFilter` in use is reversed. However, the base method 
`ColumnCounter::countAll` always uses forward iteration, which can result in 
overcounting when the query is reversed and there are statics involved. In 
turn, this leads to incorrectly sized pages being returned to the client.)

> Paging with reverse queries and static columns may return incorrectly sized 
> pages
> -
>
> Key: CASSANDRA-13222
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13222
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>
> There are 2 specialisations of {{ColumnCounter}} that deal with static 
> columns differently depending on the order of iteration through the column 
> family and which impl is used generally depends on whether or not the 
> {{ColumnFilter}} in use is reversed. However, the base method 
> {{ColumnCounter::countAll}} always uses forward iteration, which can result 
> in overcounting when the query is reversed and there are statics involved. In 
> turn, this leads to incorrectly sized pages being returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13079) Warn user to run full repair when increasing replication factor

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-13079:
---

  Assignee: Marcus Eriksson  (was: Paulo Motta)
  Priority: Minor  (was: Critical)
  Reviewer: Paulo Motta
   Summary: Warn user to run full repair when increasing replication factor 
 (was: Repair doesn't work after several replication factor changes)
Issue Type: Improvement  (was: Bug)

Good point [~krummas], I agree that dropping repaired sstables into the unrepaired 
compaction buckets would be much more catastrophic, so warning the user is 
definitely the best approach here.

+1 on the fix; tested locally and it works as expected. Could you also update the 
[documentation|http://cassandra.apache.org/doc/latest/faq/index.html?highlight=replication#can-i-change-the-replication-factor-a-a-keyspace-on-a-live-cluster]
 to indicate that the user must run a {{-full}} repair when increasing the RF?

Feel free to commit after this nit. Thanks!

> Warn user to run full repair when increasing replication factor
> ---
>
> Key: CASSANDRA-13079
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13079
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Debian 
>Reporter: Vladimir Yudovin
>Assignee: Marcus Eriksson
>Priority: Minor
>
> Scenario:
> Start a two-node cluster.
> Create keyspace with rep.factor *one*:
> CREATE KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE rep.data (str text PRIMARY KEY );
> INSERT INTO rep.data (str) VALUES ( 'qwerty');
> Run *nodetool flush* on all nodes. On one of them table files are created.
> Change replication factor to *two*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. On all nodes table files are 
> created.
> Change replication factor to *one*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> Then run *nodetool cleanup*; data files remain only on the initial node.
> Change replication factor to *two* again:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. No data files appear on the 
> second node (though they are expected, as after the first repair/flush).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13079) Warn user to run full repair when increasing replication factor

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13079:

Status: Patch Available  (was: Open)

> Warn user to run full repair when increasing replication factor
> ---
>
> Key: CASSANDRA-13079
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13079
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Debian 
>Reporter: Vladimir Yudovin
>Assignee: Marcus Eriksson
>Priority: Minor
>
> Scenario:
> Start a two-node cluster.
> Create keyspace with rep.factor *one*:
> CREATE KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE rep.data (str text PRIMARY KEY );
> INSERT INTO rep.data (str) VALUES ( 'qwerty');
> Run *nodetool flush* on all nodes. On one of them table files are created.
> Change replication factor to *two*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. On all nodes table files are 
> created.
> Change replication factor to *one*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> Then run *nodetool cleanup*; data files remain only on the initial node.
> Change replication factor to *two* again:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. No data files appear on the 
> second node (though they are expected, as after the first repair/flush).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13079) Warn user to run full repair when increasing replication factor

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13079:

Status: Ready to Commit  (was: Patch Available)

> Warn user to run full repair when increasing replication factor
> ---
>
> Key: CASSANDRA-13079
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13079
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Debian 
>Reporter: Vladimir Yudovin
>Assignee: Marcus Eriksson
>Priority: Minor
>
> Scenario:
> Start a two-node cluster.
> Create keyspace with rep.factor *one*:
> CREATE KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> CREATE TABLE rep.data (str text PRIMARY KEY );
> INSERT INTO rep.data (str) VALUES ( 'qwerty');
> Run *nodetool flush* on all nodes. On one of them table files are created.
> Change replication factor to *two*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. On all nodes table files are 
> created.
> Change replication factor to *one*:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 1};
> Then run *nodetool cleanup*; data files remain only on the initial node.
> Change replication factor to *two* again:
> ALTER KEYSPACE rep WITH replication = {'class': 'SimpleStrategy', 
> 'replication_factor': 2};
> Run repair, then *nodetool flush* on all nodes. No data files appear on the 
> second node (though they are expected, as after the first repair/flush).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867821#comment-15867821
 ] 

Marcus Eriksson edited comment on CASSANDRA-13153 at 2/15/17 1:23 PM:
--

Ok, so the problem is actually that if we run a -full repair and some of the 
ranges fail, we might anticompact an sstable containing the data to unrepaired, 
while the repaired tombstone stays repaired because its sstable was already 
compacted away. The fix would be to anticompact to the previous value of repairedAt.

Or, as suggested, we don't anticompact on full repairs at all.
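
A rough sketch of the first option, purely for illustration (the helper below is 
hypothetical, not the actual anticompaction code): when a range fails, carry the 
sstable's previous repairedAt forward instead of resetting it to unrepaired.

{code}
// Hypothetical helper, not Cassandra's real API: pick the repairedAt value to stamp
// on an sstable after a -full repair session.
final class RepairedAtPolicy
{
    static final long UNREPAIRED = 0L;

    static long afterAnticompaction(boolean rangeSucceeded,
                                    long sessionRepairedAt,
                                    long previousRepairedAt)
    {
        if (rangeSucceeded)
            return sessionRepairedAt;   // repaired successfully: stamp the session time

        // range failed: keep the previous repaired state instead of dropping the data
        // to UNREPAIRED and separating it from tombstones that remain marked repaired
        return previousRepairedAt;
    }
}
{code}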


was (Author: krummas):
Ok, so the problem is actually if we run a -full repair and some of the ranges 
fail, we might anticompact an sstable to unrepaired. The fix would be that we 
anticompact to the previous value of repairedAt.

Or that we, as suggested, don't anticompact on full repairs at all.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace. Since Nodes #1 and #3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable is marked as unrepaired.  And the data will get repaired 
> against the other two nodes.
>   Node 1 has data.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 has data.
> If a read request hits Node 1 and 3, it will return data.  If it hits 1 and 
> 2, or 2 and 3, however, it would return no data.
> Tested this with single range tokens for simplicity.

[jira] [Commented] (CASSANDRA-13153) Reappeared Data when Mixing Incremental and Full Repairs

2017-02-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867821#comment-15867821
 ] 

Marcus Eriksson commented on CASSANDRA-13153:
-

Ok, so the problem is actually if we run a -full repair and some of the ranges 
fail, we might anticompact an sstable to unrepaired. The fix would be that we 
anticompact to the previous value of repairedAt.

Or that we, as suggested, don't anticompact on full repairs at all.

> Reappeared Data when Mixing Incremental and Full Repairs
> 
>
> Key: CASSANDRA-13153
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13153
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, Tools
> Environment: Apache Cassandra 2.2
>Reporter: Amanda Debrot
>  Labels: Cassandra
> Attachments: log-Reappeared-Data.txt, 
> Step-by-Step-Simulate-Reappeared-Data.txt
>
>
> This happens for both LeveledCompactionStrategy and 
> SizeTieredCompactionStrategy.  I've only tested it on Cassandra version 2.2 
> but it most likely also affects all Cassandra versions after 2.2, if they 
> have anticompaction with full repair.
> When mixing incremental and full repairs, there are a few scenarios where the 
> Data SSTable is marked as unrepaired and the Tombstone SSTable is marked as 
> repaired.  Then if it is past gc_grace, and the tombstone and data has been 
> compacted out on other replicas, the next incremental repair will push the 
> Data to other replicas without the tombstone.
> Simplified scenario:
> 3 node cluster with RF=3
> Initial config:
>   Node 1 has data and tombstone in separate SSTables.
>   Node 2 has data and no tombstone.
>   Node 3 has data and tombstone in separate SSTables.
> Incremental repair (nodetool repair -pr) is run every day so now we have 
> tombstone on each node.
> Some minor compactions have happened since so data and tombstone get merged 
> to 1 SSTable on Nodes 1 and 3.
>   Node 1 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 had a minor compaction that merged data with tombstone. 1 
> SSTable with tombstone.
> Incremental repairs keep running every day.
> Full repairs run weekly (nodetool repair -full -pr). 
> Now there are 2 scenarios where the Data SSTable will get marked as 
> "Unrepaired" while Tombstone SSTable will get marked as "Repaired".
> Scenario 1:
> Since the Data and Tombstone SSTable have been marked as "Repaired" 
> and anticompacted, they have had minor compactions with other SSTables 
> containing keys from other ranges.  During full repair, if the last node to 
> run it doesn't own this particular key in its partitioner range, the Data 
> and Tombstone SSTable will get anticompacted and marked as "Unrepaired".  Now 
> in the next incremental repair, if the Data SSTable is involved in a minor 
> compaction during the repair but the Tombstone SSTable is not, the resulting 
> compacted SSTable will be marked "Unrepaired" and Tombstone SSTable is marked 
> "Repaired".
> Scenario 2:
> Only the Data SSTable had minor compaction with other SSTables 
> containing keys from other ranges after being marked as "Repaired".  The 
> Tombstone SSTable was never involved in a minor compaction so therefore all 
> keys in that SSTable belong to 1 particular partitioner range. During full 
> repair, if the last node to run it doesn't own this particular key in its 
> partitioner range, the Data SSTable will get anticompacted and marked as 
> "Unrepaired".   The Tombstone SSTable stays marked as Repaired.
> Then it’s past gc_grace. Since Nodes #1 and #3 only have 1 SSTable for that 
> key, the tombstone will get compacted out.
>   Node 1 has nothing.
>   Node 2 has data (in unrepaired SSTable) and tombstone (in repaired 
> SSTable) in separate SSTables.
>   Node 3 has nothing.
> Now when the next incremental repair runs, it will only use the Data SSTable 
> to build the merkle tree since the tombstone SSTable is flagged as repaired 
> and data SSTable is marked as unrepaired.  And the data will get repaired 
> against the other two nodes.
>   Node 1 has data.
>   Node 2 has data and tombstone in separate SSTables.
>   Node 3 has data.
> If a read request hits Node 1 and 3, it will return data.  If it hits 1 and 
> 2, or 2 and 3, however, it would return no data.
> Tested this with single range tokens for simplicity.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-11303) New inbound throughput parameters for streaming

2017-02-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867798#comment-15867798
 ] 

Paulo Motta commented on CASSANDRA-11303:
-

This review has fallen badly through the cracks (I'm really sorry about that), 
and the 3.X line is now closed for improvements, so I'm afraid we will only be 
able to support this in 4.0, where we will have an updated NIO streaming 
protocol. Based on this, I'd like to check with [~jasobrown] whether he has 
already thought this through and/or whether this is something that can be 
easily supported in a post-CASSANDRA-12229 world.

> New inbound throughput parameters for streaming
> ---
>
> Key: CASSANDRA-11303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Satoshi Konno
>Assignee: Satoshi Konno
>Priority: Minor
> Attachments: 11303_inbound_limit_debug_20160419.log, 
> 11303_inbound_nolimit_debug_20160419.log, 
> 11303_inbound_patch_for_trunk_20160419.diff, 
> 11303_inbound_patch_for_trunk_20160525.diff, 
> 11303_inbound_patch_for_trunk_20160704.diff, 
> 200vs40inboundstreamthroughput.png, cassandra_inbound_stream.diff
>
>
> Hi,
> To specify the stream throughput of a node more precisely, I would like to add 
> the following new inbound parameters to cassandra.yaml, mirroring the existing 
> outbound parameters:
> - stream_throughput_inbound_megabits_per_sec
> - inter_dc_stream_throughput_inbound_megabits_per_sec
> We use only the existing outbound parameters now, but it is difficult to 
> control the total throughput of a node. In our production network, critical 
> alerts fire when a node exceeds the specified total throughput, which is the 
> sum of the input and output throughputs.
> In our operation of Cassandra, the alerts occur during bootstrap or repair 
> when a new node is added. In the worst case, we have to stop the node that 
> exceeds the limit.
> I have attached a patch under consideration. I would like to add a new limiter 
> class, StreamInboundRateLimiter, and use it in the StreamDeserializer class. I 
> use Row::dataSize() to get the input throughput in 
> StreamDeserializer::newPartition(), but I am not sure whether dataSize() 
> returns the correct data size.
> Can someone please tell me how to do it?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13222) Paging with reverse queries and static columns may return incorrectly sized pages

2017-02-15 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-13222:
---

 Summary: Paging with reverse queries and static columns may return 
incorrectly sized pages
 Key: CASSANDRA-13222
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13222
 Project: Cassandra
  Issue Type: Bug
  Components: CQL, Local Write-Read Paths
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe


There are 2 specialisations of `ColumnCounter` that deal with static columns 
differently depending on the order of iteration through the column family and 
which impl is used generally depends on whether or not the `ColumnFilter` in 
use is reversed. However, the base method `ColumnCounter::countAll` always 
uses forward iteration, which can result in overcounting when the query is 
reversed and there are statics involved. In turn, this leads to incorrectly 
sized pages being returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-9639) size_estimates is inaccurate in multi-dc clusters

2017-02-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-9639:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   3.0.11
   Status: Resolved  (was: Patch Available)

Added a NEWS.txt entry about the change to the {{system.size_estimates}} table and 
committed to 3.0 and up as {{af7b20bd0ea0d1e80553c519510c9ad9f29af64a}}. 
Committed a ninja fix to the subrange repair error message to 3.0 and merged up as 
{{f02f154e47be13dd481fe7afe18183c615f39c71}}. Merged the [cassandra-dtest 
PR|https://github.com/riptano/cassandra-dtest/pull/1439].

Thanks all!

> size_estimates is inaccurate in multi-dc clusters
> 
>
> Key: CASSANDRA-9639
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9639
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.0.11
>
>
> CASSANDRA-7688 introduced size_estimates to replace the thrift 
> describe_splits_ex command.
> Users have reported seeing estimates that are widely off in multi-dc clusters.
> system.size_estimates show the wrong range_start / range_end



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[6/9] cassandra git commit: ninja: improve subrange repair error message

2017-02-15 Thread paulo
ninja: improve subrange repair error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f02f154e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f02f154e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f02f154e

Branch: refs/heads/trunk
Commit: f02f154e47be13dd481fe7afe18183c615f39c71
Parents: af7b20b
Author: Paulo Motta 
Authored: Wed Feb 15 10:15:43 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 10:29:09 2017 -0200

--
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02f154e/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 97c5c0a..11d4617 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -220,7 +220,10 @@ public class ActiveRepairService implements 
IEndpointStateChangeSubscriber, IFai
 }
 else if (range.intersects(toRepair))
 {
-throw new IllegalArgumentException("Requested range intersects 
a local range but is not fully contained in one; this would lead to imprecise 
repair");
+throw new IllegalArgumentException(String.format("Requested 
range %s intersects a local range (%s) " +
+ "but is not 
fully contained in one; this would lead to " +
+ "imprecise 
repair. keyspace: %s", toRepair.toString(),
+ 
range.toString(), keyspaceName));
 }
 }
 if (rangeSuperSet == null || !replicaSets.containsKey(rangeSuperSet))



[5/9] cassandra git commit: ninja: improve subrange repair error message

2017-02-15 Thread paulo
ninja: improve subrange repair error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f02f154e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f02f154e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f02f154e

Branch: refs/heads/cassandra-3.11
Commit: f02f154e47be13dd481fe7afe18183c615f39c71
Parents: af7b20b
Author: Paulo Motta 
Authored: Wed Feb 15 10:15:43 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 10:29:09 2017 -0200

--
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02f154e/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 97c5c0a..11d4617 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -220,7 +220,10 @@ public class ActiveRepairService implements 
IEndpointStateChangeSubscriber, IFai
 }
 else if (range.intersects(toRepair))
 {
-throw new IllegalArgumentException("Requested range intersects 
a local range but is not fully contained in one; this would lead to imprecise 
repair");
+throw new IllegalArgumentException(String.format("Requested 
range %s intersects a local range (%s) " +
+ "but is not 
fully contained in one; this would lead to " +
+ "imprecise 
repair. keyspace: %s", toRepair.toString(),
+ 
range.toString(), keyspaceName));
 }
 }
 if (rangeSuperSet == null || !replicaSets.containsKey(rangeSuperSet))



[2/9] cassandra git commit: Use keyspace replication settings on system.size_estimates table

2017-02-15 Thread paulo
Use keyspace replication settings on system.size_estimates table

Patch by Chris Lohfink; Reviewed by Paulo Motta for CASSANDRA-9639


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af7b20bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af7b20bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af7b20bd

Branch: refs/heads/cassandra-3.11
Commit: af7b20bd0ea0d1e80553c519510c9ad9f29af64a
Parents: 76ad028
Author: Chris Lohfink 
Authored: Thu Jan 26 09:43:47 2017 -0600
Committer: Paulo Motta 
Committed: Wed Feb 15 10:27:25 2017 -0200

--
 CHANGES.txt |  1 +
 NEWS.txt|  2 +
 .../cassandra/db/SizeEstimatesRecorder.java | 54 ++--
 3 files changed, 30 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b19550a..732e14b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.11
+ * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
  * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
  * Hint related logging should include the IP address of the destination in 
addition to 
host ID (CASSANDRA-13205)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 4248a6e..a5ee496 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -30,6 +30,8 @@ Upgrading
- Compaction now correctly drops sstables out of CompactionTask when there
  isn't enough disk space to perform the full compaction.  This should 
reduce
  pending compaction tasks on systems with little remaining disk space.
+   - Primary ranges in the system.size_estimates table are now based on the 
keyspace
+ replication settings and adjacent ranges are no longer merged 
(CASSANDRA-9639).
 
 3.0.10
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index 0b31b87..ebe3f9a 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -69,12 +69,10 @@ public class SizeEstimatesRecorder extends 
MigrationListener implements Runnable
 
 logger.trace("Recording size estimates");
 
-// find primary token ranges for the local node.
-Collection<Token> localTokens = 
StorageService.instance.getLocalTokens();
-Collection<Range<Token>> localRanges = 
metadata.getPrimaryRangesFor(localTokens);
-
 for (Keyspace keyspace : Keyspace.nonLocalStrategy())
 {
+Collection<Range<Token>> localRanges = 
StorageService.instance.getPrimaryRangesForEndpoint(keyspace.getName(),
+FBUtilities.getBroadcastAddress());
 for (ColumnFamilyStore table : keyspace.getColumnFamilyStores())
 {
 long start = System.nanoTime();
@@ -91,37 +89,39 @@ public class SizeEstimatesRecorder extends 
MigrationListener implements Runnable
 @SuppressWarnings("resource")
 private void recordSizeEstimates(ColumnFamilyStore table, 
Collection<Range<Token>> localRanges)
 {
-List<Range<Token>> unwrappedRanges = Range.normalize(localRanges);
 // for each local primary range, estimate (crudely) mean partition 
size and partitions count.
 Map<Range<Token>, Pair<Long, Long>> estimates = new 
HashMap<>(localRanges.size());
-for (Range<Token> range : unwrappedRanges)
+for (Range<Token> localRange : localRanges)
 {
-// filter sstables that have partitions in this range.
-Refs<SSTableReader> refs = null;
-long partitionsCount, meanPartitionSize;
-
-try
+for (Range<Token> unwrappedRange : localRange.unwrap())
 {
-while (refs == null)
+// filter sstables that have partitions in this range.
+Refs<SSTableReader> refs = null;
+long partitionsCount, meanPartitionSize;
+
+try
+{
+while (refs == null)
+{
+Iterable<SSTableReader> sstables = 
table.getTracker().getView().select(SSTableSet.CANONICAL);
+SSTableIntervalTree tree = 
SSTableIntervalTree.build(sstables);
+Range<PartitionPosition> r = 
Range.makeRowRange(unwrappedRange);
+

[9/9] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-02-15 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31eac784
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31eac784
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31eac784

Branch: refs/heads/trunk
Commit: 31eac784f1bae9ec3509d7a7726d9b8725764fa0
Parents: a0827fb ef9df6e
Author: Paulo Motta 
Authored: Wed Feb 15 10:53:21 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 10:53:21 2017 -0200

--
 CHANGES.txt |  1 +
 NEWS.txt| 11 ++--
 .../cassandra/db/SizeEstimatesRecorder.java | 54 ++--
 .../cassandra/service/ActiveRepairService.java  |  5 +-
 4 files changed, 40 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31eac784/CHANGES.txt
--
diff --cc CHANGES.txt
index 6016674,ee5a5cb..6efcaa3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -37,8 -7,8 +37,9 @@@
   * More fixes to the TokenAllocator (CASSANDRA-12990)
   * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
  Merged from 3.0:
+  * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
   * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
 + * Obfuscate password in stress-graphs (CASSANDRA-12233)
   * Hint related logging should include the IP address of the destination in 
addition to
 host ID (CASSANDRA-13205)
   * Reloading logback.xml does not work (CASSANDRA-13173)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/31eac784/NEWS.txt
--
diff --cc NEWS.txt
index 9c183f6,fc27526..d936dbe
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,40 -13,15 +13,45 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 +4.0
 +===
 +
 +New features
 +
 +   - Support for arithmetic operations between `timestamp`/`date` and 
`duration` has been added.
 + See CASSANDRA-11936
 +   - Support for arithmetic operations on number has been added. See 
CASSANDRA-11935
 +
- 3.11
- 
- 
 +Upgrading
 +-
 +- Cassandra 4.0 removed support for the deprecated Thrift interface. 
Amongst
 +  Tother things, this imply the removal of all yaml option related to 
thrift
 +  ('start_rpc', rpc_port, ...).
 +- Cassandra 4.0 removed support for any pre-3.0 format. This means you
 +  cannot upgrade from a 2.x version to 4.0 directly, you have to upgrade 
to
 +  a 3.0.x/3.x version first (and run upgradesstable). In particular, this
 +  mean Cassandra 4.0 cannot load or read pre-3.0 sstables in any way: you
 +  will need to upgrade those sstable in 3.0.x/3.x first.
 +- Cassandra will no longer allow invalid keyspace replication options, 
such
 +  as invalid datacenter names for NetworkTopologyStrategy. Operators MUST
 +  add new nodes to a datacenter before they can set set ALTER or CREATE
 +  keyspace replication policies using that datacenter. Existing keyspaces
 +  will continue to operate, but CREATE and ALTER will validate that all
 +  datacenters specified exist in the cluster.
 +- Cassandra 4.0 fixes a problem with incremental repair which caused 
repaired
 +  data to be inconsistent between nodes. The fix changes the behavior of 
both
 +  full and incremental repairs. For full repairs, data is no longer marked
 +  repaired. For incremental repairs, anticompaction is run at the 
beginning
 +  of the repair, instead of at the end.
  
+ 3.11.0
+ ==
+ 
+ Upgrading
+ -
+- Primary ranges in the system.size_estimates table are now based on the 
keyspace
+  replication settings and adjacent ranges are no longer merged 
(CASSANDRA-9639).
+ 
  3.10
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/31eac784/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/31eac784/src/java/org/apache/cassandra/service/ActiveRepairService.java
--



[4/9] cassandra git commit: ninja: improve subrange repair error message

2017-02-15 Thread paulo
ninja: improve subrange repair error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f02f154e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f02f154e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f02f154e

Branch: refs/heads/cassandra-3.0
Commit: f02f154e47be13dd481fe7afe18183c615f39c71
Parents: af7b20b
Author: Paulo Motta 
Authored: Wed Feb 15 10:15:43 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 10:29:09 2017 -0200

--
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f02f154e/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 97c5c0a..11d4617 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -220,7 +220,10 @@ public class ActiveRepairService implements 
IEndpointStateChangeSubscriber, IFai
 }
 else if (range.intersects(toRepair))
 {
-throw new IllegalArgumentException("Requested range intersects 
a local range but is not fully contained in one; this would lead to imprecise 
repair");
+throw new IllegalArgumentException(String.format("Requested 
range %s intersects a local range (%s) " +
+ "but is not 
fully contained in one; this would lead to " +
+ "imprecise 
repair. keyspace: %s", toRepair.toString(),
+ 
range.toString(), keyspaceName));
 }
 }
 if (rangeSuperSet == null || !replicaSets.containsKey(rangeSuperSet))



[8/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-02-15 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef9df6e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef9df6e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef9df6e0

Branch: refs/heads/trunk
Commit: ef9df6e053f6d094f8f031a80612c1a63482d351
Parents: 515e4a2 f02f154
Author: Paulo Motta 
Authored: Wed Feb 15 10:48:22 2017 -0200
Committer: Paulo Motta 
Committed: Wed Feb 15 10:48:22 2017 -0200

--
 CHANGES.txt |  1 +
 NEWS.txt|  4 +-
 .../cassandra/db/SizeEstimatesRecorder.java | 54 ++--
 .../cassandra/service/ActiveRepairService.java  |  5 +-
 4 files changed, 34 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef9df6e0/CHANGES.txt
--
diff --cc CHANGES.txt
index 8164a52,732e14b..ee5a5cb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,7 +1,15 @@@
 -3.0.11
 +3.11.0
 + * Obfuscate password in stress-graphs (CASSANDRA-12233)
 + * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)
 + * nodetool stopdaemon errors out (CASSANDRA-13030)
 + * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)
 + * Fix primary index calculation for SASI (CASSANDRA-12910)
 + * More fixes to the TokenAllocator (CASSANDRA-12990)
 + * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
 +Merged from 3.0:
+  * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
   * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
 - * Hint related logging should include the IP address of the destination in 
addition to 
 + * Hint related logging should include the IP address of the destination in 
addition to
 host ID (CASSANDRA-13205)
   * Reloading logback.xml does not work (CASSANDRA-13173)
   * Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0 
(CASSANDRA-13109)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef9df6e0/NEWS.txt
--
diff --cc NEWS.txt
index 35ef9a5,a5ee496..fc27526
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -19,139 -18,32 +19,139 @@@ using the provided 'sstableupgrade' too
  
  Upgrading
  -
-- Nothing specific to this release, but please read all notes back to your
-  current version if you are upgrading.
 -   - Support for alter types of already defined tables and of UDTs fields has 
been disabled.
 - If it is necessary to return a different type, please use casting 
instead. See
 - CASSANDRA-12443 for more details.
 -   - Specifying the default_time_to_live option when creating or altering a
 - materialized view was erroneously accepted (and ignored). It is now
 - properly rejected.
 -   - Only Java and JavaScript are now supported UDF languages.
 - The sandbox in 3.0 already prevented the use of script languages except 
Java
 - and JavaScript.
 -   - Compaction now correctly drops sstables out of CompactionTask when there
 - isn't enough disk space to perform the full compaction.  This should 
reduce
 - pending compaction tasks on systems with little remaining disk space.
+- Primary ranges in the system.size_estimates table are now based on the 
keyspace
+  replication settings and adjacent ranges are no longer merged 
(CASSANDRA-9639).
  
 -3.0.10
 -=
 +3.10
 +
  
 -Upgrading
 --
 -   - memtable_allocation_type: offheap_buffers is no longer allowed to be 
specified in the 3.0 series.
 - This was an oversight that can cause segfaults. Offheap was 
re-introduced in 3.4 see CASSANDRA-11039
 - and CASSANDRA-9472 for details.
 +New features
 +
 +   - New `DurationType` (cql duration). See CASSANDRA-11873
 +   - Runtime modification of concurrent_compactors is now available via 
nodetool
 +   - Support for the assignment operators +=/-= has been added for update 
queries.
 +   - An Index implementation may now provide a task which runs prior to 
joining
 + the ring. See CASSANDRA-12039
 +   - Filtering on partition key columns is now also supported for queries 
without
 + secondary indexes.
 +   - A slow query log has been added: slow queries will be logged at DEBUG 
level.
 + For more details refer to CASSANDRA-12403 and 
slow_query_log_timeout_in_ms
 + in cassandra.yaml.
 +   - Support for GROUP BY queries has been added.
 +   - A new compaction-stress tool has been added to test the throughput of 
compaction
 + for any cassandra-stress user schema.  see compaction-stress help for 
how to use.
 +   - 

[1/9] cassandra git commit: Use keyspace replication settings on system.size_estimates table

2017-02-15 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 76ad028f6 -> f02f154e4
  refs/heads/cassandra-3.11 515e4a227 -> ef9df6e05
  refs/heads/trunk a0827fb2e -> 31eac784f


Use keyspace replication settings on system.size_estimates table

Patch by Chris Lohfink; Reviewed by Paulo Motta for CASSANDRA-9639


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af7b20bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af7b20bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af7b20bd

Branch: refs/heads/cassandra-3.0
Commit: af7b20bd0ea0d1e80553c519510c9ad9f29af64a
Parents: 76ad028
Author: Chris Lohfink 
Authored: Thu Jan 26 09:43:47 2017 -0600
Committer: Paulo Motta 
Committed: Wed Feb 15 10:27:25 2017 -0200

--
 CHANGES.txt |  1 +
 NEWS.txt|  2 +
 .../cassandra/db/SizeEstimatesRecorder.java | 54 ++--
 3 files changed, 30 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b19550a..732e14b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.11
+ * Use keyspace replication settings on system.size_estimates table 
(CASSANDRA-9639)
  * Add vm.max_map_count StartupCheck (CASSANDRA-13008)
  * Hint related logging should include the IP address of the destination in 
addition to 
host ID (CASSANDRA-13205)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 4248a6e..a5ee496 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -30,6 +30,8 @@ Upgrading
- Compaction now correctly drops sstables out of CompactionTask when there
  isn't enough disk space to perform the full compaction.  This should 
reduce
  pending compaction tasks on systems with little remaining disk space.
+   - Primary ranges in the system.size_estimates table are now based on the 
keyspace
+ replication settings and adjacent ranges are no longer merged 
(CASSANDRA-9639).
 
 3.0.10
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af7b20bd/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
--
diff --git a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java 
b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
index 0b31b87..ebe3f9a 100644
--- a/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
+++ b/src/java/org/apache/cassandra/db/SizeEstimatesRecorder.java
@@ -69,12 +69,10 @@ public class SizeEstimatesRecorder extends 
MigrationListener implements Runnable
 
 logger.trace("Recording size estimates");
 
-// find primary token ranges for the local node.
-Collection<Token> localTokens = 
StorageService.instance.getLocalTokens();
-Collection<Range<Token>> localRanges = 
metadata.getPrimaryRangesFor(localTokens);
-
 for (Keyspace keyspace : Keyspace.nonLocalStrategy())
 {
+Collection<Range<Token>> localRanges = 
StorageService.instance.getPrimaryRangesForEndpoint(keyspace.getName(),
+FBUtilities.getBroadcastAddress());
 for (ColumnFamilyStore table : keyspace.getColumnFamilyStores())
 {
 long start = System.nanoTime();
@@ -91,37 +89,39 @@ public class SizeEstimatesRecorder extends 
MigrationListener implements Runnable
 @SuppressWarnings("resource")
 private void recordSizeEstimates(ColumnFamilyStore table, 
Collection<Range<Token>> localRanges)
 {
-List<Range<Token>> unwrappedRanges = Range.normalize(localRanges);
 // for each local primary range, estimate (crudely) mean partition 
size and partitions count.
 Map<Range<Token>, Pair<Long, Long>> estimates = new 
HashMap<>(localRanges.size());
-for (Range<Token> range : unwrappedRanges)
+for (Range<Token> localRange : localRanges)
 {
-// filter sstables that have partitions in this range.
-Refs<SSTableReader> refs = null;
-long partitionsCount, meanPartitionSize;
-
-try
+for (Range<Token> unwrappedRange : localRange.unwrap())
 {
-while (refs == null)
+// filter sstables that have partitions in this range.
+Refs<SSTableReader> refs = null;
+long partitionsCount, meanPartitionSize;
+
+try
+{
+while (refs == null)
+{
+Iterable<SSTableReader> sstables = 
