[jira] [Updated] (CASSANDRA-7396) Allow selecting Map key, List index

2016-04-09 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7396:

Status: Open  (was: Patch Available)

Cancelling patch for the moment after CASSANDRA-7423 has been committed.

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key]" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)
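
(For illustration only: a minimal CQL sketch of the proposed syntax; the table 
and column names below are hypothetical, not from this ticket.)

{code}
CREATE TABLE users (
    id     int PRIMARY KEY,
    emails map<text, text>,
    tags   list<text>
);

-- select a single map entry by key
SELECT emails['work'] FROM users WHERE id = 1;

-- select a single list element by index
SELECT tags[0] FROM users WHERE id = 1;
{code}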



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233504#comment-15233504
 ] 

Alex Petrov edited comment on CASSANDRA-9842 at 4/9/16 11:31 AM:
-

Now with CI results: 

|_|2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|


was (Author: ifesdjeen):
Now with CI results: 

|_||2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233504#comment-15233504
 ] 

Alex Petrov commented on CASSANDRA-9842:


Now with CI results: 

|_||2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232315#comment-15232315
 ] 

Alex Petrov edited comment on CASSANDRA-9842 at 4/9/16 11:32 AM:
-

To sum up, there's no distinction between a non-existent row and a static 
column containing a {{null}} value, so both an update to a non-existent row and 
an update to a row with a null static column will succeed. 

The inconsistent behaviour is only in {{2.1}} and {{2.2}}, although I've added 
the same tests to {{3.0}} and {{trunk}}. 
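
(For illustration only: a minimal CQL sketch of the behaviour described above, 
using the {{txtable}} schema from the issue description; the key values are 
arbitrary.)

{code}
BEGIN BATCH
    INSERT INTO txtable (pcol, ccol, ncol) VALUES (2, 1, 'A');
    UPDATE txtable SET scol = 1 WHERE pcol = 2 IF scol = null;
APPLY BATCH;
-- With the fix, [applied] is True whether partition 2 does not exist yet or
-- exists with scol unset: both cases satisfy the "scol = null" condition.
{code}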

|_|2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

Waiting for CI results.


was (Author: ifesdjeen):
To sum up, there's no distinction between a non-existent row and a static 
column containing a {{null}} value, so both an update to a non-existent row and 
an update to a row with a null static column will succeed. 

The inconsistent behaviour is only in {{2.1}} and {{2.2}}, although I've added 
the same tests to {{3.0}} and {{trunk}}. 

|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|

Waiting for CI results.

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-9842:
---
Comment: was deleted

(was: Now with CI results: 

|_|2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|)

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232315#comment-15232315
 ] 

Alex Petrov edited comment on CASSANDRA-9842 at 4/9/16 11:33 AM:
-

To sum up, there's no distinction between a non-existent row and a static 
column containing a {{null}} value, so both an update to a non-existent row and 
an update to a row with a null static column will succeed. 

The inconsistent behaviour is only in {{2.1}} and {{2.2}}, although I've added 
the same tests to {{3.0}} and {{trunk}}. 

|| ||2.1||2.2||3.0||trunk|
||code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
||utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
||dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

Waiting for CI results.


was (Author: ifesdjeen):
To sum up, there's no distinction between a non-existent row and a static 
column containing a {{null}} value, so both an update to a non-existent row and 
an update to a row with a null static column will succeed. 

The inconsistent behaviour is only in {{2.1}} and {{2.2}}, although I've added 
the same tests to {{3.0}} and {{trunk}}. 

|_|2.1|2.2|3.0|trunk|
|code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
|utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
|dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

Waiting for CI results.

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-09 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-9842:
---
Status: Patch Available  (was: Open)

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Inserting a row (into a non-existent partition) and updating a static column 
> in the same LWT fails. Creating the partition before performing the LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11339) WHERE clause in SELECT DISTINCT can be ignored

2016-04-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226056#comment-15226056
 ] 

Alex Petrov edited comment on CASSANDRA-11339 at 4/9/16 11:45 AM:
--

I've added validation that disallows any filtering queries. Non-filtering 
queries (ones restricted to just the partition key, which is a bit senseless 
even though technically valid) still work, as do index queries. Tests for all 
cases, including static columns and indexes, are added.
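
(For illustration only: a minimal CQL sketch of the validation described above, 
using a table {{t}} with {{PRIMARY KEY (id, v)}} as in the issue description; 
the exact error message may differ.)

{code}
-- rejected by the new validation: filtering on a non-partition-key column
SELECT DISTINCT id FROM t WHERE v > 0 ALLOW FILTERING;

-- still allowed: restricted to the partition key only
SELECT DISTINCT id FROM t WHERE id = 1;
{code}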

|| |2.1|trunk|
||code|[2.1|https://github.com/ifesdjeen/cassandra/tree/11339-2.1]|[trunk|https://github.com/ifesdjeen/cassandra/tree/11339-trunk]|
||utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-2.1-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-trunk-testall/]|
||dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-2.1-dtest]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-trunk-dtest]|



was (Author: ifesdjeen):
I've added validation that disallows any filtering queries. Non-filtering 
queries (ones restricted to just the partition key, which is a bit senseless 
even though technically valid) still work, as do index queries. Tests for all 
cases, including static columns and indexes, are added.

|branch|testall|dtest|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/11339-trunk]|[trunk-testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-trunk-testall/]|[trunk-dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-trunk-dtest/]|
|[2.1|https://github.com/ifesdjeen/cassandra/tree/11339-2.1]|[2.1-testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-2.1-testall/]|[2.1-dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11339-2.1-dtest/]|

> WHERE clause in SELECT DISTINCT can be ignored
> --
>
> Key: CASSANDRA-11339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11339
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Philip Thompson
>Assignee: Alex Petrov
> Fix For: 2.1.x
>
> Attachments: 
> 0001-Add-validation-for-distinct-queries-disallowing-quer.patch
>
>
> I've tested this out on 2.1-head. I'm not sure if it's the same behavior on 
> newer versions.
> For a given table t, with {{PRIMARY KEY (id, v)}} the following two queries 
> return the same result:
> {{SELECT DISTINCT id FROM t WHERE v > X ALLOW FILTERING}}
> {{SELECT DISTINCT id FROM t}}
> The WHERE clause in the former is silently ignored, and all ids are returned, 
> regardless of the value of v in any row. 
> It seems like this has been a known issue for a while:
> http://stackoverflow.com/questions/26548788/select-distinct-cql-ignores-where-clause
> However, if we don't support filtering on anything but the partition key, we 
> should reject the query rather than silently dropping the WHERE clause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11473) Clustering column value is zeroed out in some query results

2016-04-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233510#comment-15233510
 ] 

Sylvain Lebresne commented on CASSANDRA-11473:
--

I haven't looked very carefully yet (and probably won't be able to next week as 
I'm on vacation), but one thing that would be nice to know, to make sure, is the 
history of that cluster. Has it, by any chance, been upgraded from a beta/RC of 
3.0? The fact that the extra bytes are always there *and* are accounted for in 
the row size strongly suggests it's not some corruption. At the same time, it's 
hard to believe that the code genuinely doesn't write the same thing that it 
reads, as I'd assume something like that would have been detected easily and 
we'd have lots of reports (of course, it could be something only happening in 
very special cases, but the serialization code doesn't have tons of special 
cases). But we definitely did change the file format between betas, and while I 
don't remember exactly, we might have done that between RCs too. So that's the 
only idea I have thus far.

> Clustering column value is zeroed out in some query results
> ---
>
> Key: CASSANDRA-11473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11473
> Project: Cassandra
>  Issue Type: Bug
> Environment: debian jessie patch current with Cassandra 3.0.4
>Reporter: Jason Kania
>Assignee: Tyler Hobbs
>
> As per a discussion on the mailing list, 
> http://www.mail-archive.com/user@cassandra.apache.org/msg46902.html, we are 
> encountering inconsistent query results when the following query is run:
> {noformat}
> select "subscriberId","sensorUnitId","sensorId","time" from 
> "sensorReadingIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND 
> "sensorId"=0 ORDER BY "time" LIMIT 10;
> {noformat}
> Invalid Query Results
> {noformat}
> subscriberId | sensorUnitId | sensorId | time
> JASKAN       | 0            | 0        | 2015-05-24 2:09
> JASKAN       | 0            | 0        | 1969-12-31 19:00
> JASKAN       | 0            | 0        | 2016-01-21 2:10
> JASKAN       | 0            | 0        | 2016-01-21 2:10
> JASKAN       | 0            | 0        | 2016-01-21 2:10
> JASKAN       | 0            | 0        | 2016-01-21 2:11
> JASKAN       | 0            | 0        | 2016-01-21 2:22
> JASKAN       | 0            | 0        | 2016-01-21 2:22
> JASKAN       | 0            | 0        | 2016-01-21 2:22
> JASKAN       | 0            | 0        | 2016-01-21 2:22
> {noformat}
> Valid Query Results
> {noformat}
> subscriberId | sensorUnitId | sensorId | time
> JASKAN       | 0            | 0        | 2015-05-24 2:09
> JASKAN       | 0            | 0        | 2015-05-24 2:09
> JASKAN       | 0            | 0        | 2015-05-24 2:10
> JASKAN       | 0            | 0        | 2015-05-24 2:10
> JASKAN       | 0            | 0        | 2015-05-24 2:10
> JASKAN       | 0            | 0        | 2015-05-24 2:10
> JASKAN       | 0            | 0        | 2015-05-24 2:11
> JASKAN       | 0            | 0        | 2015-05-24 2:13
> JASKAN       | 0            | 0        | 2015-05-24 2:13
> JASKAN       | 0            | 0        | 2015-05-24 2:14
> {noformat}
> Running the following yields no rows indicating that the 1969... timestamp is 
> invalid.
> {noformat}
> select "subscriberId","sensorUnitId","sensorId","time" FROM 
> "edgeTransitionIndex" where "subscriberId"='JASKAN' AND "sensorUnitId"=0 AND 
> "sensorId"=0 and time='1969-12-31 19:00:00-0500';
> {noformat}
> The schema is as follows:
> {noformat}
> CREATE TABLE sensorReading."sensorReadingIndex" (
> "subscriberId" text,
> "sensorUnitId" int,
> "sensorId" int,
> time timestamp,
> "classId" int,
> correlation float,
> PRIMARY KEY (("subscriberId", "sensorUnitId", "sensorId"), time)
> ) WITH CLUSTERING ORDER BY (time ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE INDEX classSecondaryIndex ON sensorReading."sensorReadingIndex" 
> ("classId");
> {noformat}
> We were asked to provide our sstables as well, but these are very large and 
> would require some data obfuscation. We are able to run code or scripts 
> against the data on our servers if that is an option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233542#comment-15233542
 ] 

Mattias W commented on CASSANDRA-11528:
---

This error also occurs with the same database contents on cassandra 3.4 on 
Ubuntu 14.04. 

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one table 
> at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried to only have a unique id as partition key; I was afraid a single 
> partition got too big or so, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but that isn't from the time 
> of the crash.
> It only happens for one table; it only has 15000 entries, but there are blobs 
> and byte[] stored there, sized between 100 kB and 4 MB. Total size for that 
> table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each only fetching 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order to 
> at least get a stacktrace or similar, that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.
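
(For illustration only: a minimal CQL sketch of the chunked-select workaround 
mentioned above; the table and column names are hypothetical, and the bind 
marker stands in for the last partition key returned by the previous chunk.)

{code}
-- first chunk
SELECT id, payload FROM blobtable LIMIT 100;

-- subsequent chunks: continue from the last partition key seen
SELECT id, payload FROM blobtable WHERE token(id) > token(?) LIMIT 100;
{code}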



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233542#comment-15233542
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:24 PM:
---

This last error also occurs with the same database contents on cassandra 3.4 on 
Ubuntu 14.04. 

i.e.

{{SELECT COUNT(*) FROM usr WHERE disabled = null LIMIT 100 ALLOW FILTERING;}}


was (Author: mattiasw2):
This error also occurs with the same database contents on cassandra 3.4 on 
Ubuntu 14.04. 

> Server Crash when select returns more than a few hundred rows
> -
>
> Key: CASSANDRA-11528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11528
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: windows 7, 8 GB machine
>Reporter: Mattias W
> Fix For: 3.x
>
> Attachments: datastax_ddc_server-stdout.2016-04-07.log
>
>
> While implementing a dump procedure, which did a "select * from" on one table 
> at a time, I instantly killed the server. A simple 
> {noformat}select count(*) from {noformat} 
> also kills it. For a while, I thought the size of the blobs was the cause.
> I also tried to only have a unique id as partition key; I was afraid a single 
> partition got too big or so, but that didn't change anything.
> It happens every time, both from Java/Clojure and from DevCenter.
> I looked at the logs at C:\Program Files\DataStax-DDC\logs, but the crash is 
> so quick that nothing is recorded there.
> There is a Java out-of-memory error in the logs, but that isn't from the time 
> of the crash.
> It only happens for one table; it only has 15000 entries, but there are blobs 
> and byte[] stored there, sized between 100 kB and 4 MB. Total size for that 
> table is about 6.5 GB on disk.
> I made a workaround by doing many small selects instead, each only fetching 
> 100 rows.
> Is there a setting I can set to make the system log more eagerly, in order to 
> at least get a stacktrace or similar, that might help you?
> It is prun_srv that dies. Restarting the NT service makes Cassandra run 
> again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11525) StaticTokenTreeBuilder should respect possibility of duplicate tokens

2016-04-09 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233546#comment-15233546
 ] 

DOAN DuyHai commented on CASSANDRA-11525:
-

[~xedin]   [~jrwest]

OK, the fix is confirmed. I have successfully fetched ~36 million CQL rows 
using the index without hitting the exception.

Good job and thanks for fixing this tricky bug

> StaticTokenTreeBuilder should respect possibility of duplicate tokens
> 
>
> Key: CASSANDRA-11525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11525
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.5-SNAPSHOT
>Reporter: DOAN DuyHai
>Assignee: Jordan West
> Fix For: 3.5
>
>
> Bug reproduced in *Cassandra 3.5-SNAPSHOT* (after the fix of OOM)
> {noformat}
> create table if not exists test.resource_bench ( 
>  dsr_id uuid,
>  rel_seq bigint,
>  seq bigint,
>  dsp_code varchar,
>  model_code varchar,
>  media_code varchar,
>  transfer_code varchar,
>  commercial_offer_code varchar,
>  territory_code varchar,
>  period_end_month_int int,
>  authorized_societies_txt text,
>  rel_type text,
>  status text,
>  dsp_release_code text,
>  title text,
>  contributors_name list,
>  unic_work text,
>  paying_net_qty bigint,
> PRIMARY KEY ((dsr_id, rel_seq), seq)
> ) WITH CLUSTERING ORDER BY (seq ASC); 
> CREATE CUSTOM INDEX resource_period_end_month_int_idx ON test.resource_bench 
> (period_end_month_int) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH 
> OPTIONS = {'mode': 'PREFIX'};
> {noformat}
> So the index is a {{DENSE}} numerical index.
> When doing the request {{SELECT dsp_code, unic_work, paying_net_qty FROM 
> test.resource_bench WHERE period_end_month_int = 201401}} using server-side 
> paging, I bumped into this stack trace:
> {noformat}
> WARN  [SharedPool-Worker-1] 2016-04-06 00:00:30,825 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.ArrayIndexOutOfBoundsException: -55
>   at 
> org.apache.cassandra.db.ClusteringPrefix$Serializer.deserialize(ClusteringPrefix.java:268)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:128) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.Serializers$2.deserialize(Serializers.java:120) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.IndexHelper$IndexInfo$Serializer.deserialize(IndexHelper.java:148)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:218)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.keyAt(SSTableReader.java:1823)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:168)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.SSTableIndex$DecoratedKeyFetcher.apply(SSTableIndex.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:518)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.disk.TokenTree$KeyIterator.computeNext(TokenTree.java:504)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.tryToComputeNext(AbstractIterator.java:116)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.utils.AbstractIterator.hasNext(AbstractIterator.java:110)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:106)
>  ~[apache-cassandra-3.5-SNAPSHOT.jar:3.5-SNAPSHOT]
>   at 
> org.apache.cassandra.index.sasi.plan.QueryPlan$ResultIterator.computeNext(QueryPlan.java:71)
>  ~[apa

[jira] [Commented] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233551#comment-15233551
 ] 

Mattias W commented on CASSANDRA-11528:
---

It is an out-of-memory error. Cassandra 3.4 on Ubuntu 14.04 behaves the same, 
and there, the last message in the log is

{noformat}
INFO  [SharedPool-Worker-3] 2016-04-09 15:32:34,915 ApproximateTime.java:44 - 
Scheduling approximate time-check task with a precision of 10 milliseconds
INFO  [Service Thread] 2016-04-09 15:34:50,366 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 443ms.  CMS Old Gen: 547965232 -> 268786192; Par Eden 
Space: 126017056 -> 0; Par Survivor Space: 3420928 -> 0
INFO  [Service Thread] 2016-04-09 15:34:50,379 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
ERROR [SharedPool-Worker-2] 2016-04-09 15:34:50,409 
JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:208) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:110)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:98)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
 ~[apache-cassandra-3.4.jar:3.4]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.4.jar:3.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.4.jar:3.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
INFO  [Service Thread] 2016-04-09 15:34:50,412 StatusLogger.java:56 - 
MutationStage 0 0157 0  
   0

INFO  [Service Thread] 2016-04-09 15:34:50,414 StatusLogger.java:56 - 
ViewMutationStage   

[jira] [Comment Edited] (CASSANDRA-11528) Server Crash when select returns more than a few hundred rows

2016-04-09 Thread Mattias W (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233551#comment-15233551
 ] 

Mattias W edited comment on CASSANDRA-11528 at 4/9/16 1:41 PM:
---

It is an out-of-memory error. Cassandra 3.4 on Ubuntu 14.04 behaves the same, 
and there, the last message in the log is below.

So now I know that select statements can use a lot of heap. The Ubuntu machine 
only has 1.5 GB of RAM. (The Windows machine above had 8 GB.)

{noformat}
INFO  [SharedPool-Worker-3] 2016-04-09 15:32:34,915 ApproximateTime.java:44 - 
Scheduling approximate time-check task with a precision of 10 milliseconds
INFO  [Service Thread] 2016-04-09 15:34:50,366 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 443ms.  CMS Old Gen: 547965232 -> 268786192; Par Eden 
Space: 126017056 -> 0; Par Survivor Space: 3420928 -> 0
INFO  [Service Thread] 2016-04-09 15:34:50,379 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
ERROR [SharedPool-Worker-2] 2016-04-09 15:34:50,409 
JVMStabilityInspector.java:139 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) ~[na:1.8.0_77]
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) ~[na:1.8.0_77]
at 
org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:126)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:86) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:374) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:208) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:185)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:110)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:98)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:294)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:134)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:127)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:292) 
~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1799)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2467)
 ~[apache-cassandra-3.4.jar:3.4]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_77]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
 ~[apache-cassandra-3.4.jar:3.4]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 [apache-cassandra-3.4.jar:3.4]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.4.jar:3.4]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
INFO  [Service Thread] 2016-04-09 15:34:50,412 StatusLogger.java

[cassandra] Git Push Summary

2016-04-09 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/3.5-tentative [deleted] 89bd93502


[cassandra] Git Push Summary

2016-04-09 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/3.5-tentative [created] 020dd2d10


[jira] [Issue Comment Deleted] (CASSANDRA-8272) 2ndary indexes can return stale data

2016-04-09 Thread Henry Manasseh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Manasseh updated CASSANDRA-8272:
--
Comment: was deleted

(was: Can this be avoided by increasing the consistency level to ALL? I am just 
wondering if there is a workaround in order to eliminate the risk.)

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
> Fix For: 2.1.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially violating the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the write but C responds to the read before 
> having applied it, the now-stale result will be returned (since C will return 
> it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert into the result a corresponding range 
> tombstone.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11540) The JVM should exit if jmx fails to bind

2016-04-09 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-11540:


 Summary: The JVM should exit if jmx fails to bind
 Key: CASSANDRA-11540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11540
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
 Fix For: 2.1.x


If you are already running a cassandra instance, but for some reason try to 
start another one, this happens:

{noformat}
INFO  20:57:09 JNA mlockall successful
WARN  20:57:09 JMX is not enabled to receive remote connections. Please see 
cassandra-env.sh for more info.
ERROR 20:57:10 Error starting local jmx server:
java.rmi.server.ExportException: Port already in use: 7199; nested exception is:
java.net.BindException: Address already in use
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:340) 
~[na:1.7.0_76]
at 
sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:248) 
~[na:1.7.0_76]
at sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:411) 
~[na:1.7.0_76]
at sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147) 
~[na:1.7.0_76]
at 
sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:207) 
~[na:1.7.0_76]
at sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:122) 
~[na:1.7.0_76]
at sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:98) 
~[na:1.7.0_76]
at 
java.rmi.registry.LocateRegistry.createRegistry(LocateRegistry.java:239) 
~[na:1.7.0_76]
at 
org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:100)
 [main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:222) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:564) 
[main/:na]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:653) 
[main/:na]
Caused by: java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method) ~[na:1.7.0_76]
at 
java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376) 
~[na:1.7.0_76]
at java.net.ServerSocket.bind(ServerSocket.java:376) ~[na:1.7.0_76]
at java.net.ServerSocket.<init>(ServerSocket.java:237) ~[na:1.7.0_76]
at 
javax.net.DefaultServerSocketFactory.createServerSocket(ServerSocketFactory.java:231)
 ~[na:1.7.0_76]
at 
org.apache.cassandra.utils.RMIServerSocketFactoryImpl.createServerSocket(RMIServerSocketFactoryImpl.java:13)
 ~[main/:na]
at 
sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:666) 
~[na:1.7.0_76]
at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:329) 
~[na:1.7.0_76]
... 11 common frames omitted
{noformat}

However the startup continues, and ends up replaying commitlogs, which is 
probably not a good thing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9549) Memory leak in Ref.GlobalState due to pathological ConcurrentLinkedQueue.remove behaviour

2016-04-09 Thread stone (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233823#comment-15233823
 ] 

stone commented on CASSANDRA-9549:
--

Could you post a summary after resolving the issue? Why did this happen, and 
how was it resolved? It's hard to find the answer when people hit the same 
issue.

> Memory leak in Ref.GlobalState due to pathological 
> ConcurrentLinkedQueue.remove behaviour
> -
>
> Key: CASSANDRA-9549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
> 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
> commit log are on SSD EBS with sufficient IOPS), 3 nodes/availablity zone, 1 
> replica/zone
> JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
> JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
> -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
> -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
> -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
> -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
> -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
> -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
> -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
> -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
> -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
> -Dcom.sun.management.jmxremote.rmi.port=7199 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
> -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
> Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ivar Thorson
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.7
>
> Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
> cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png
>
>
> We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
> over the period of a couple of days, eventually consumes all of the available 
> JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
> collection but can't free up any heap space. This pattern happens for every 
> node in our cluster and is requiring rolling cassandra restarts just to keep 
> the cluster running. We have upgraded the cluster per Datastax docs from the 
> 2.0 branch a couple of months ago and have been using the data from this 
> cluster for more than a year without problem.
> As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
> along with it. Heap dumps reveal an increasing number of 
> java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
> over a 2 day period, and watched the number of Node objects go from 4M, to 
> 19M, to 36M, and eventually about 65M objects before the node stops 
> responding. The screen capture of our heap dump is from the 19M measurement.
> Load on the cluster is minimal. We can see this effect even with only a 
> handful of writes per second. (See attachments for Opscenter snapshots during 
> very light loads and heavier loads). Even with only 5 reads a sec we see this 
> behavior.
> Log files show repeated errors in Ref.java:181 and Ref.java:279 and "LEAK 
> detected" messages:
> {code}
> ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
> when closing class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
> rejected from 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
> {code}
> {code}
> ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
>  was not released before the reference was garbage collected
> {code}
> This might be related to [CASSANDRA-8723]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9549) Memory leak in Ref.GlobalState due to pathological ConcurrentLinkedQueue.remove behaviour

2016-04-09 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233832#comment-15233832
 ] 

Benedict commented on CASSANDRA-9549:
-

What is obtuse?

bq. how to resolve?

Move to a version >= fixVersion, i.e. 2.1.7

bq. why this happen

The [last comment  with more than one 
sentence|https://issues.apache.org/jira/browse/CASSANDRA-9549?focusedCommentId=14586587&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14586587],
 only six comments back, spells out what happened and why.

I realise JIRA noise can be quite an issue in many cases, but in this instance 
it seems to me that just a modicum of effort was necessary to find the answers 
you sought.



> Memory leak in Ref.GlobalState due to pathological 
> ConcurrentLinkedQueue.remove behaviour
> -
>
> Key: CASSANDRA-9549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
> 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
> commit log are on SSD EBS with sufficient IOPS), 3 nodes/availability zone, 1 
> replica/zone
> JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
> JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
> -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
> -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
> -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
> -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
> -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
> -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
> -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
> -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
> -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
> -Dcom.sun.management.jmxremote.rmi.port=7199 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
> -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
> Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ivar Thorson
>Assignee: Benedict
>Priority: Critical
> Fix For: 2.1.7
>
> Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
> cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png
>
>
> We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
> over the period of a couple of days, eventually consumes all of the available 
> JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
> collection but can't free up any heap space. This pattern happens for every 
> node in our cluster and is requiring rolling cassandra restarts just to keep 
> the cluster running. We have upgraded the cluster per Datastax docs from the 
> 2.0 branch a couple of months ago and have been using the data from this 
> cluster for more than a year without problem.
> As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
> along with it. Heap dumps reveal an increasing number of 
> java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
> over a 2 day period, and watched the number of Node objects go from 4M, to 
> 19M, to 36M, and eventually about 65M objects before the node stops 
> responding. The screen capture of our heap dump is from the 19M measurement.
> Load on the cluster is minimal. We can see this effect even with only a 
> handful of writes per second. (See attachments for Opscenter snapshots during 
> very light loads and heavier loads). Even with only 5 reads a sec we see this 
> behavior.
> Log files show repeated errors in Ref.java:181 and Ref.java:279 and "LEAK 
> detected" messages:
> {code}
> ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
> when closing class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
> rejected from 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
> {code}
> {code}
> ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.