[Cassandra Wiki] Update of "ThirdPartySupport" by Thomas Brown

2017-08-09 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ThirdPartySupport" page has been changed by Thomas Brown:
https://wiki.apache.org/cassandra/ThirdPartySupport?action=diff&rev1=56&rev2=57

  [[http://www.dbteamlimited.co.uk|DB Team Limited]] We provide expert-level 
consultancy services for performance tuning of large Apache Cassandra, Oracle, 
and SQL Server databases. Contact us at admin@dbteamlimited.co.uk
  
  
- {{http://www.instaclustr.com/wp-content/uploads/2016/09/Instaclustr-Logo-Motif-Steel-Blue-287px.png}}
+ {{https://www.instaclustr.com/wp-content/uploads/2017/08/Apache-Cassandra-Instaclustr-logo-Home.png}}
  [[https://www.instaclustr.com/?cid=casspp|Instaclustr]] provides managed 
Apache Cassandra hosting on Amazon Web Services, Google Cloud Platform, 
Microsoft Azure, and SoftLayer. Instaclustr also provides expert-level 
consultancy and 24/7/365 enterprise support. Instaclustr dramatically reduces 
administration overheads and support costs by providing automated deployment, 
backups, cluster balancing and performance tuning. 
  
  




[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121048#comment-16121048
 ] 

Jeff Jirsa commented on CASSANDRA-13717:


[~skonto] - do you know which versions need to be fixed? 3.0? 3.11? trunk?

I've kicked off some test builds [here (unit 
tests)|https://circleci.com/gh/jeffjirsa/cassandra/tree/cassandra-13717] and 
[here 
(dtest)|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/173]
 - we'll want to do that for each branch that needs this fix (and of course, 
we'll want to add tests to this fix as well).


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
>Priority: Critical
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Commented] (CASSANDRA-13723) fix exception logging that should be consumed by placeholder to 'getMessage()' for new slf4j version

2017-08-09 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121003#comment-16121003
 ] 

ZhaoYang commented on CASSANDRA-13723:
--

[~spo...@gmail.com] [~jasobrown] could you please review it? thanks

> fix exception logging that should be consumed by placeholder to 
> 'getMessage()' for new slf4j version
> 
>
> Key: CASSANDRA-13723
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13723
> Project: Cassandra
>  Issue Type: Bug
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Trivial
> Fix For: 4.0
>
> Attachments: CASSANDRA-13723.patch
>
>
> The wrong tracing log will fail 
> {{materialized_views_test.py:TestMaterializedViews.view_tombstone_test}} and 
> impact clients.
> Current log: {{Digest mismatch: {} on 127.0.0.1}}
> Expected log: {{Digest mismatch: 
> org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
> DecoratedKey... on 127.0.0.1}}
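For context, this is the standard slf4j gotcha the patch title refers to: when the
last argument to a logging call is a Throwable, newer slf4j versions attach it as
the exception instead of using it to fill the {} placeholder. A minimal
illustrative sketch (hypothetical class, not the actual Cassandra code):

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DigestLoggingExample
{
    private static final Logger logger = LoggerFactory.getLogger(DigestLoggingExample.class);

    static void onMismatch(Exception e)
    {
        // Broken with newer slf4j: the trailing Throwable is consumed as the
        // attached exception, so the placeholder is left as a literal "{}".
        logger.trace("Digest mismatch: {} on 127.0.0.1", e);

        // Fixed: pass the message text explicitly so the placeholder is filled.
        logger.trace("Digest mismatch: {} on 127.0.0.1", e.getMessage());
    }
}
{noformat}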






[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120965#comment-16120965
 ] 

Jeff Jirsa commented on CASSANDRA-13717:


I typically prefer keeping them in unit tests (junit tests in the same repo, 
check out the test/ directory). There should be a section for cql3 tests, and 
almost certainly a TupleTest within it that you can add a function or two to.
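For readers following along, a regression test of the shape Jeff describes might
look roughly like the sketch below. It assumes the in-tree CQLTester helpers
(createTable/execute/assertRowCount) and is illustrative only, not the committed
test:

{noformat}
// Sketch of a cql3 unit test for CASSANDRA-13717, to live alongside TupleTest.
@Test
public void testTupleClusteringColumnWithDescOrder() throws Throwable
{
    createTable("CREATE TABLE %s (id int, tdemo frozen<tuple<timestamp, text>>, " +
                "PRIMARY KEY (id, tdemo)) WITH CLUSTERING ORDER BY (tdemo DESC)");

    // Before the fix this INSERT failed with "Invalid tuple type literal".
    execute("INSERT INTO %s (id, tdemo) VALUES (1, ('2017-02-03 03:05+0000', 'Europe'))");

    assertRowCount(execute("SELECT * FROM %s"), 1);
}
{noformat}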


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
>Priority: Critical
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120960#comment-16120960
 ] 

Stavros Kontopoulos commented on CASSANDRA-13717:
-

Thnx [~jjirsa]! Here is my branch: 
https://github.com/skonto/cassandra/tree/cassandra-13717. As for the test case, 
I read the contribution wiki etc., but I'm still a bit confused about where I 
should add it: in dtests, or just as part of the patch?


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
>Priority: Critical
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Updated] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13717:
---
Priority: Critical  (was: Major)

> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
>Priority: Critical
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120932#comment-16120932
 ] 

Jeff Jirsa commented on CASSANDRA-13717:


Welcome [~skonto] ! Next step would be to assign yourself (I've done that for 
you), and then hit 'submit patch' to mark the issue as patch available (I've 
done that for you again).

We typically ask either the contributor (you) or the reviewer (someone who will 
volunteer, hopefully soon) to push the patch to a github branch and kick off CI 
(we have it set up to use circleci for unit tests, and a committer can kick off 
dtests). We typically ask that your patch includes a test case that fails 
before your patch and succeeds after it's applied, so while this is not a 
review, I will say that any reviewer should ask you to do that. 



> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Updated] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13717:
---
Attachment: fix_13717

> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
> Attachments: example_queries.cql, fix_13717
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Updated] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13717:
---
Status: Patch Available  (was: Open)

> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Assigned] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-13717:
--

Assignee: Stavros Kontopoulos

> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
>Assignee: Stavros Kontopoulos
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Comment Edited] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120927#comment-16120927
 ] 

Stavros Kontopoulos edited comment on CASSANDRA-13717 at 8/10/17 1:47 AM:
--

I have created a patch and verified Jeff's suggestion:
[patch|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]

{noformat}
cqlsh> create keyspace test with replication = {'class':'SimpleStrategy','replication_factor': 1};
cqlsh> create table test.test_table ( id int, tdemo tuple<timestamp, text>, primary key (id, tdemo) ) with clustering order by (tdemo desc);
cqlsh> insert into test.test_table (id, tdemo) values (1, ('2017-02-03 03:05+0000','Europe'));
cqlsh> select * from test.test_table;

 id | tdemo
----+------------------------------------------------
  1 | ('2017-02-03 03:05:00.000000+0000', 'Europe')

(1 rows)
{noformat}

What are the next steps for the review (I am new here)?



was (Author: skonto):
I have created a patch and verified Jeff's suggestion:
[patch|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]

{noformat}
cqlsh> create keyspace test with replication = {'class':'SimpleStrategy','replication_factor': 1};
cqlsh> create table test.test_table ( id int, tdemo tuple<timestamp, text>, primary key (id, tdemo) ) with clustering order by (tdemo desc);
cqlsh> insert into test.test_table (id, tdemo) values (1, ('2017-02-03 03:05+0000','Europe'));
cqlsh> select * from test.test_table;

 id | tdemo
----+------------------------------------------------
  1 | ('2017-02-03 03:05:00.000000+0000', 'Europe')

(1 rows)
{noformat}



> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Comment Edited] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120927#comment-16120927
 ] 

Stavros Kontopoulos edited comment on CASSANDRA-13717 at 8/10/17 1:45 AM:
--

I have created a patch and verified Jeff's suggestion:
[patch|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]

{noformat}
cqlsh> create keyspace test with replication = {'class':'SimpleStrategy','replication_factor': 1};
cqlsh> create table test.test_table ( id int, tdemo tuple<timestamp, text>, primary key (id, tdemo) ) with clustering order by (tdemo desc);
cqlsh> insert into test.test_table (id, tdemo) values (1, ('2017-02-03 03:05+0000','Europe'));
cqlsh> select * from test.test_table;

 id | tdemo
----+------------------------------------------------
  1 | ('2017-02-03 03:05:00.000000+0000', 'Europe')

(1 rows)
{noformat}




was (Author: skonto):
I have created a patch and verified Jeff's suggestion:
[patch|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Commented] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120927#comment-16120927
 ] 

Stavros Kontopoulos commented on CASSANDRA-13717:
-

I have created a patch and verified Jeff's suggestion:
[patch title|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Comment Edited] (CASSANDRA-13717) INSERT statement fails when Tuple type is used as clustering column with default DESC order

2017-08-09 Thread Stavros Kontopoulos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120927#comment-16120927
 ] 

Stavros Kontopoulos edited comment on CASSANDRA-13717 at 8/10/17 1:44 AM:
--

I have created a patch and verified Jeff's suggestion:
[patch|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]



was (Author: skonto):
I have created a patch and verified Jeff's suggestion:
[patch title|https://drive.google.com/open?id=0B0SeiqgJaLZvclhmY0N4dEJtUGs]


> INSERT statement fails when Tuple type is used as clustering column with 
> default DESC order
> ---
>
> Key: CASSANDRA-13717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13717
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.11
>Reporter: Anastasios Kichidis
> Attachments: example_queries.cql
>
>
> When a column family is created and a Tuple is used as a clustering column with 
> default clustering order DESC, the INSERT statement fails. 
> For example, the following table will make the INSERT statement fail with the 
> error message "Invalid tuple type literal for tdemo of type 
> frozen<tuple<timestamp, text>>", although the INSERT statement is correct 
> (it works as expected when the default order is ASC):
> {noformat}
> create table test_table (
>   id int,
>   tdemo tuple<timestamp, text>,
>   primary key (id, tdemo)
> ) with clustering order by (tdemo desc);
> {noformat}






[jira] [Commented] (CASSANDRA-9988) Introduce leaf-only iterator

2017-08-09 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120862#comment-16120862
 ] 

Jay Zhuang commented on CASSANDRA-9988:
---

[~Anthony Grasso] would you please do a final review of the patch and commit it?

> Introduce leaf-only iterator
> 
>
> Key: CASSANDRA-9988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9988
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: patch
> Fix For: 4.0
>
> Attachments: 9988-3tests.png, 9988-data.png, 9988-result2.png, 
> 9988-result3.png, 9988-result.png, 9988-test-result3.png, 
> 9988-test-result-expsearch.xlsx, 9988-test-result-raw.png, 
> 9988-test-result.xlsx, 9988-trunk-new.txt, 9988-trunk-new-update.txt, 
> trunk-9988.txt
>
>
> In many cases we have small btrees, small enough to fit in a single leaf 
> page. In this case it _may_ be more efficient to specialise our iterator.
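To make the idea concrete: when the whole btree fits in one leaf page, iteration
reduces to walking a single sorted array, with none of the descent/stack
bookkeeping a general tree iterator carries. A minimal sketch of that shape
(illustrative only; the real BTree leaf layout differs):

{noformat}
import java.util.Iterator;
import java.util.NoSuchElementException;

// Specialised iterator for a tree that is a single leaf: just an index walk
// over one sorted Object[] page.
final class LeafOnlyIterator<V> implements Iterator<V>
{
    private final Object[] leaf;
    private int index;

    LeafOnlyIterator(Object[] leaf) { this.leaf = leaf; }

    public boolean hasNext() { return index < leaf.length; }

    @SuppressWarnings("unchecked")
    public V next()
    {
        if (!hasNext())
            throw new NoSuchElementException();
        return (V) leaf[index++];
    }
}
{noformat}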






[jira] [Resolved] (CASSANDRA-13166) Build failures encountered on ppc64le

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-13166.

Resolution: Duplicate

The trunk branch builds on ppc64le successfully, and the architecture-specific 
sigar lib was added in CASSANDRA-13615. A test run is in progress on ppc64le at:
https://builds.apache.org/view/A-D/view/Cassandra/job/cassandra-devbranch-ppc64le-testall/22/

> Build failures encountered on ppc64le
> -
>
> Key: CASSANDRA-13166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13166
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: [root@pts00433-vm5 cassandra]# ant -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Apache Ant(TM) version 1.9.2 compiled on January 22 2014
> [root@pts00433-vm5 cassandra]# java -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-b15)
> OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
> [root@pts00433-vm5 cassandra]# lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                4
> On-line CPU(s) list:   0-3
> Thread(s) per core:    1
> Core(s) per socket:    4
> Socket(s):             1
> NUMA node(s):          1
> Model:                 2.1 (pvr 004b 0201)
> Model name:            POWER8E (raw), altivec supported
> L1d cache:             64K
> L1i cache:             32K
> NUMA node0 CPU(s):     0-3
> [root@pts00433-vm5 cassandra]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.3 (Maipo)
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>
> Getting below errors while trying to run the test suite.
> [root@pts00433-vm5 cassandra]# ant test
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Buildfile: /root/amit/cassandra-wo_capi/cassandra/build.xml
> init:
> maven-ant-tasks-localrepo:
> maven-ant-tasks-download:
> maven-ant-tasks-init:
> maven-declare-dependencies:
> maven-ant-tasks-retrieve-build:
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies.xml
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies-sources.xml
> [unzip] Expanding: 
> /root/amit/cassandra-wo_capi/cassandra/build/lib/jars/org.jacoco.agent-0.7.5.201505241946.jar
>  into /root/amit/cassandra-wo_capi/cassandra/build/lib/jars
> check-gen-cql3-grammar:
> gen-cql3-grammar:
> generate-cql-html:
> generate-jflex-java:
> build-project:
>  [echo] apache-cassandra: /root/amit/cassandra-wo_capi/cassandra/build.xml
> [javac] Compiling 308 source files to 
> /root/amit/cassandra-wo_capi/cassandra/build/classes/main
> [javac] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Writing compiler command file at META-INF/hotspot_compiler
> [javac] Note: Done processing compiler hints annotations
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:210:
>  error: reference to Schema is ambiguous
> [javac] KeyspaceMetadata ksm = 
> Schema.instance.getKeyspaceMetadata(keyspace());
> [javac]^
> [javac]   both class org.apache.cassandra.schema.Schema in 
> org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
> org.apache.cassandra.config match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:221:
>  error: reference to SchemaConstants is ambiguous
> [javac] if (columnFamily().length() > 
> SchemaConstants.NAME_LENGTH)
> [javac]   ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:222:
>  error: reference to SchemaConstants is ambiguous
> [javac] throw new 
> InvalidRequestException(String.format("Table names shouldn't be more than %s 
> characters long (got \"%s\")", SchemaConstants.NAME_LENGTH, columnFamily()));
> [javac]   
> ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema 
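The errors quoted above are plain Java import ambiguity: at that point the build
saw classes named Schema and SchemaConstants in both org.apache.cassandra.schema
and org.apache.cassandra.config. A self-contained illustration with hypothetical
packages, showing the failure mode and the usual fully-qualified-name fix:

{noformat}
// file src/org/example/schema/Schema.java
package org.example.schema;
public class Schema { public static final Schema instance = new Schema(); }

// file src/org/example/config/Schema.java
package org.example.config;
public class Schema { public static final Schema instance = new Schema(); }

// file src/org/example/Main.java
package org.example;
import org.example.schema.*;
import org.example.config.*;

public class Main
{
    public static void main(String[] args)
    {
        // javac: error: reference to Schema is ambiguous
        // Schema s = Schema.instance;

        // Fix: fully qualify (or use a single-type import) to pick one class.
        org.example.schema.Schema s = org.example.schema.Schema.instance;
        System.out.println(s);
    }
}
{noformat}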

[jira] [Updated] (CASSANDRA-13166) Build failures encountered on ppc64le

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13166:
---
Fix Version/s: (was: 3.9)

> Build failures encountered on ppc64le
> -
>
> Key: CASSANDRA-13166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13166
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: [root@pts00433-vm5 cassandra]# ant -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Apache Ant(TM) version 1.9.2 compiled on January 22 2014
> [root@pts00433-vm5 cassandra]# java -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-b15)
> OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
> [root@pts00433-vm5 cassandra]# lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                4
> On-line CPU(s) list:   0-3
> Thread(s) per core:    1
> Core(s) per socket:    4
> Socket(s):             1
> NUMA node(s):          1
> Model:                 2.1 (pvr 004b 0201)
> Model name:            POWER8E (raw), altivec supported
> L1d cache:             64K
> L1i cache:             32K
> NUMA node0 CPU(s):     0-3
> [root@pts00433-vm5 cassandra]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.3 (Maipo)
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>
> Getting below errors while trying to run the test suite.
> [root@pts00433-vm5 cassandra]# ant test
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Buildfile: /root/amit/cassandra-wo_capi/cassandra/build.xml
> init:
> maven-ant-tasks-localrepo:
> maven-ant-tasks-download:
> maven-ant-tasks-init:
> maven-declare-dependencies:
> maven-ant-tasks-retrieve-build:
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies.xml
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies-sources.xml
> [unzip] Expanding: 
> /root/amit/cassandra-wo_capi/cassandra/build/lib/jars/org.jacoco.agent-0.7.5.201505241946.jar
>  into /root/amit/cassandra-wo_capi/cassandra/build/lib/jars
> check-gen-cql3-grammar:
> gen-cql3-grammar:
> generate-cql-html:
> generate-jflex-java:
> build-project:
>  [echo] apache-cassandra: /root/amit/cassandra-wo_capi/cassandra/build.xml
> [javac] Compiling 308 source files to 
> /root/amit/cassandra-wo_capi/cassandra/build/classes/main
> [javac] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Writing compiler command file at META-INF/hotspot_compiler
> [javac] Note: Done processing compiler hints annotations
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:210:
>  error: reference to Schema is ambiguous
> [javac] KeyspaceMetadata ksm = 
> Schema.instance.getKeyspaceMetadata(keyspace());
> [javac]^
> [javac]   both class org.apache.cassandra.schema.Schema in 
> org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
> org.apache.cassandra.config match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:221:
>  error: reference to SchemaConstants is ambiguous
> [javac] if (columnFamily().length() > 
> SchemaConstants.NAME_LENGTH)
> [javac]   ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:222:
>  error: reference to SchemaConstants is ambiguous
> [javac] throw new 
> InvalidRequestException(String.format("Table names shouldn't be more than %s 
> characters long (got \"%s\")", SchemaConstants.NAME_LENGTH, columnFamily()));
> [javac]   
> ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/db/ColumnFamilyStore.java:1809:
>  error: reference to 

[jira] [Updated] (CASSANDRA-13166) Build failures encountered on ppc64le

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13166:
---
Summary: Build failures encountered on ppc64le  (was: Test case failures 
encountered on ppc64le)

> Build failures encountered on ppc64le
> -
>
> Key: CASSANDRA-13166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13166
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: [root@pts00433-vm5 cassandra]# ant -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Apache Ant(TM) version 1.9.2 compiled on January 22 2014
> [root@pts00433-vm5 cassandra]# java -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-b15)
> OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
> [root@pts00433-vm5 cassandra]# lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                4
> On-line CPU(s) list:   0-3
> Thread(s) per core:    1
> Core(s) per socket:    4
> Socket(s):             1
> NUMA node(s):          1
> Model:                 2.1 (pvr 004b 0201)
> Model name:            POWER8E (raw), altivec supported
> L1d cache:             64K
> L1i cache:             32K
> NUMA node0 CPU(s):     0-3
> [root@pts00433-vm5 cassandra]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.3 (Maipo)
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>
> Getting below errors while trying to run the test suite.
> [root@pts00433-vm5 cassandra]# ant test
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Buildfile: /root/amit/cassandra-wo_capi/cassandra/build.xml
> init:
> maven-ant-tasks-localrepo:
> maven-ant-tasks-download:
> maven-ant-tasks-init:
> maven-declare-dependencies:
> maven-ant-tasks-retrieve-build:
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies.xml
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies-sources.xml
> [unzip] Expanding: 
> /root/amit/cassandra-wo_capi/cassandra/build/lib/jars/org.jacoco.agent-0.7.5.201505241946.jar
>  into /root/amit/cassandra-wo_capi/cassandra/build/lib/jars
> check-gen-cql3-grammar:
> gen-cql3-grammar:
> generate-cql-html:
> generate-jflex-java:
> build-project:
>  [echo] apache-cassandra: /root/amit/cassandra-wo_capi/cassandra/build.xml
> [javac] Compiling 308 source files to 
> /root/amit/cassandra-wo_capi/cassandra/build/classes/main
> [javac] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Writing compiler command file at META-INF/hotspot_compiler
> [javac] Note: Done processing compiler hints annotations
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:210:
>  error: reference to Schema is ambiguous
> [javac] KeyspaceMetadata ksm = 
> Schema.instance.getKeyspaceMetadata(keyspace());
> [javac]^
> [javac]   both class org.apache.cassandra.schema.Schema in 
> org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
> org.apache.cassandra.config match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:221:
>  error: reference to SchemaConstants is ambiguous
> [javac] if (columnFamily().length() > 
> SchemaConstants.NAME_LENGTH)
> [javac]   ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:222:
>  error: reference to SchemaConstants is ambiguous
> [javac] throw new 
> InvalidRequestException(String.format("Table names shouldn't be more than %s 
> characters long (got \"%s\")", SchemaConstants.NAME_LENGTH, columnFamily()));
> [javac]   
> ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> 

[jira] [Assigned] (CASSANDRA-13166) Test case failures encountered on ppc64le

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-13166:
--

Assignee: Michael Shuler

> Test case failures encountered on ppc64le
> -
>
> Key: CASSANDRA-13166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13166
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
> Environment: [root@pts00433-vm5 cassandra]# ant -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Apache Ant(TM) version 1.9.2 compiled on January 22 2014
> [root@pts00433-vm5 cassandra]# java -version
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-b15)
> OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
> [root@pts00433-vm5 cassandra]# lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                4
> On-line CPU(s) list:   0-3
> Thread(s) per core:    1
> Core(s) per socket:    4
> Socket(s):             1
> NUMA node(s):          1
> Model:                 2.1 (pvr 004b 0201)
> Model name:            POWER8E (raw), altivec supported
> L1d cache:             64K
> L1i cache:             32K
> NUMA node0 CPU(s):     0-3
> [root@pts00433-vm5 cassandra]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.3 (Maipo)
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
> Fix For: 3.9
>
>
> Getting below errors while trying to run the test suite.
> [root@pts00433-vm5 cassandra]# ant test
> Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> Buildfile: /root/amit/cassandra-wo_capi/cassandra/build.xml
> init:
> maven-ant-tasks-localrepo:
> maven-ant-tasks-download:
> maven-ant-tasks-init:
> maven-declare-dependencies:
> maven-ant-tasks-retrieve-build:
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies.xml
> init-dependencies:
>  [echo] Loading dependency paths from file: 
> /root/amit/cassandra-wo_capi/cassandra/build/build-dependencies-sources.xml
> [unzip] Expanding: 
> /root/amit/cassandra-wo_capi/cassandra/build/lib/jars/org.jacoco.agent-0.7.5.201505241946.jar
>  into /root/amit/cassandra-wo_capi/cassandra/build/lib/jars
> check-gen-cql3-grammar:
> gen-cql3-grammar:
> generate-cql-html:
> generate-jflex-java:
> build-project:
>  [echo] apache-cassandra: /root/amit/cassandra-wo_capi/cassandra/build.xml
> [javac] Compiling 308 source files to 
> /root/amit/cassandra-wo_capi/cassandra/build/classes/main
> [javac] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Processing compiler hints annotations
> [javac] Note: Writing compiler command file at META-INF/hotspot_compiler
> [javac] Note: Done processing compiler hints annotations
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:210:
>  error: reference to Schema is ambiguous
> [javac] KeyspaceMetadata ksm = 
> Schema.instance.getKeyspaceMetadata(keyspace());
> [javac]^
> [javac]   both class org.apache.cassandra.schema.Schema in 
> org.apache.cassandra.schema and class org.apache.cassandra.config.Schema in 
> org.apache.cassandra.config match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:221:
>  error: reference to SchemaConstants is ambiguous
> [javac] if (columnFamily().length() > 
> SchemaConstants.NAME_LENGTH)
> [javac]   ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> /root/amit/cassandra-wo_capi/cassandra/src/java/org/apache/cassandra/cql3/statements/CreateTableStatement.java:222:
>  error: reference to SchemaConstants is ambiguous
> [javac] throw new 
> InvalidRequestException(String.format("Table names shouldn't be more than %s 
> characters long (got \"%s\")", SchemaConstants.NAME_LENGTH, columnFamily()));
> [javac]   
> ^
> [javac]   both class org.apache.cassandra.schema.SchemaConstants in 
> org.apache.cassandra.schema and class 
> org.apache.cassandra.config.SchemaConstants in org.apache.cassandra.config 
> match
> [javac] 
> 

[jira] [Updated] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13615:
---
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

Committed to trunk ({{bcdbee5}}). CircleCI failed on memory usage, but local CI 
passed test-all 100%.

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 4.0
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the upstream community has been inactive for a 
> long time (https://github.com/hyperic/sigar), we request that the ppc64le 
> library be included directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues or dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit






cassandra git commit: Enable ppc64le runtime as unsupported architecture

2017-08-09 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/trunk c00206297 -> bcdbee5cd


Enable ppc64le runtime as unsupported architecture

patch by Amitkumar Ghatwal and Michael Shuler; reviewed by Jeff Jirsa for 
CASSANDRA-13615


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bcdbee5c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bcdbee5c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bcdbee5c

Branch: refs/heads/trunk
Commit: bcdbee5cdce6e22e6c97c5cab23fb2cf3265aa0d
Parents: c002062
Author: Michael Shuler 
Authored: Wed Aug 9 15:42:01 2017 -0500
Committer: Michael Shuler 
Committed: Wed Aug 9 15:50:19 2017 -0500

--
 CHANGES.txt   |   1 +
 lib/sigar-bin/libsigar-ppc64le-linux.so   | Bin 0 -> 310664 bytes
 .../org/apache/cassandra/utils/Architecture.java  |   7 ---
 3 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcdbee5c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 53caaba..849848f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Enable ppc64le runtime as unsupported architecture (CASSANDRA-13615)
  * Improve sstablemetadata output (CASSANDRA-11483)
  * Support for migrating legacy users to roles has been dropped 
(CASSANDRA-13371)
  * Introduce error metrics for repair (CASSANDRA-13387)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcdbee5c/lib/sigar-bin/libsigar-ppc64le-linux.so
--
diff --git a/lib/sigar-bin/libsigar-ppc64le-linux.so 
b/lib/sigar-bin/libsigar-ppc64le-linux.so
new file mode 100644
index 000..62303bf
Binary files /dev/null and b/lib/sigar-bin/libsigar-ppc64le-linux.so differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bcdbee5c/src/java/org/apache/cassandra/utils/Architecture.java
--
diff --git a/src/java/org/apache/cassandra/utils/Architecture.java 
b/src/java/org/apache/cassandra/utils/Architecture.java
index 2b87de0..3e9f579 100644
--- a/src/java/org/apache/cassandra/utils/Architecture.java
+++ b/src/java/org/apache/cassandra/utils/Architecture.java
@@ -26,15 +26,16 @@ import com.google.common.collect.Sets;
 
 public final class Architecture
 {
-// Note that s390x & aarch64 architecture are not officially supported and 
adding it here is only done out of convenience
-// for those that want to run C* on this architecture at their own risk 
(see #11214 & #13326)
+// Note that s390x, aarch64, & ppc64le architectures are not officially 
supported and adding them here is only done out
+// of convenience for those that want to run C* on these architectures at 
their own risk (see #11214, #13326, & #13615)
 private static final Set UNALIGNED_ARCH = 
Collections.unmodifiableSet(Sets.newHashSet(
 "i386",
 "x86",
 "amd64",
 "x86_64",
 "s390x",
-"aarch64"
+"aarch64",
+"ppc64le"
 ));
 
 public static final boolean IS_UNALIGNED = 
UNALIGNED_ARCH.contains(System.getProperty("os.arch"));





[jira] [Commented] (CASSANDRA-13740) Orphan hint file gets created while node is being removed from cluster

2017-08-09 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120714#comment-16120714
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-13740:
---

Thanks [~iamaleksey] for the code review. I will change it as per your 
suggestion and will provide an updated patch.

> Orphan hint file gets created while node is being removed from cluster
> --
>
> Key: CASSANDRA-13740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 13740-3.0.15.txt, gossip_hang_test.py
>
>
> I have found this new issue during my test: whenever a node is being removed, 
> a hint file for that node gets written and stays inside the hint directory 
> forever. I debugged the code and found that it is due to a race condition 
> between [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
>  and [HintsWriteExecutor.java::closeWriter | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L106]
> . 
>  
> *Time t1* The node is down; as a result, hints are being written by 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
> *Time t2* The node is removed from the cluster; as a result, 
> [HintsService.java::exciseStore | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L327]
>  is called, which removes hint files for the node being removed
> *Time t3* The mutation stage keeps pumping hints through 
> [HintsService.java::write | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L145]
> , which again calls [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215]
> , and a new orphan file gets created
> I was writing a new dtest for {CASSANDRA-13562, CASSANDRA-13308} and that 
> helped me reproduce this new bug. I will submit a patch for this new dtest 
> later.
> I also tried the following to check how this orphan hint file responds:
> 1. I tried {{nodetool truncatehints <host>}}, but it fails, as the node is no 
> longer part of the ring
> 2. I then tried {{nodetool truncatehints}}; that still doesn't remove the hint 
> file, because it is not yet included in the [dispatchDequeue | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsStore.java#L53]
> Reproducible steps:
> Please find the dtest python file {{gossip_hang_test.py}} attached, which 
> reproduces this bug.
> Solution:
> This is due to the race condition mentioned above. Since 
> {{HintsWriteExecutor.java}} creates a thread pool with only 1 worker, the 
> solution becomes fairly simple: whenever we [HintsService.java::excise | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L303]
>  a host, just store it in memory, and check for already-evicted hosts inside 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215].
>  If an already-evicted host is found, then ignore its hints.
> Jaydeep
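A rough sketch of the bookkeeping the solution above describes (hypothetical
names, not the actual patch): record excised hosts in a set that the
single-threaded write executor checks before flushing hints.

{noformat}
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: remember which hosts were excised so that a flush racing
// with host removal can drop their hints instead of recreating a hint file.
final class EvictedHostsTracker
{
    private final Set<UUID> evicted = ConcurrentHashMap.newKeySet();

    // Excise path: called when the host is removed from the ring.
    void markExcised(UUID hostId)
    {
        evicted.add(hostId);
    }

    // Flush path (single writer thread): true means skip writing these hints.
    boolean shouldSkipFlush(UUID hostId)
    {
        return evicted.contains(hostId);
    }
}
{noformat}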






[jira] [Updated] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13615:
---
Status: Ready to Commit  (was: Patch Available)

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the upstream community has been inactive for a 
> long time (https://github.com/hyperic/sigar), we request that the ppc64le 
> library be included directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues or dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit






[jira] [Commented] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120685#comment-16120685
 ] 

Jeff Jirsa commented on CASSANDRA-13615:


+1 for {{libsigar-ppc64le-linux.so}} with hash 
{{f00a2dc54f7f163ce1dbfa3268e4452a}} , thanks for following up on that infra 
ticket [~mshuler]




> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the upstream community has been inactive for a 
> long time (https://github.com/hyperic/sigar), we request that the ppc64le 
> library be included directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues or dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit






[jira] [Comment Edited] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120637#comment-16120637
 ] 

Michael Shuler edited comment on CASSANDRA-13615 at 8/9/17 8:57 PM:


[~jjirsa], I put your name in the commit message, if you want to +1 :)

https://github.com/apache/cassandra/compare/trunk...mshuler:sigar-bin-ppc64le?expand=1

CircleCI will run, but this .so will be ignored.
https://circleci.com/gh/mshuler/cassandra/100


was (Author: mshuler):
[~jjirsa], I put your name in the commit message, if you want to +1 :)

https://github.com/apache/cassandra/compare/trunk...mshuler:sigar-bin-ppc64le?expand=1

CircleCI will run, but this .so will be ignored.

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the upstream community has been inactive for a 
> long time (https://github.com/hyperic/sigar), we request that the ppc64le 
> library be included directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues or dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit






[jira] [Updated] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13615:
---
Status: Patch Available  (was: Open)

[~jjirsa], I put your name in the commit message, if you want to +1 :)

https://github.com/apache/cassandra/compare/trunk...mshuler:sigar-bin-ppc64le?expand=1

CircleCI will run, but this .so will be ignored.

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the community has been inactive for a long time 
> (https://github.com/hyperic/sigar), we request that the community include the 
> ppc64le library directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-13615:
--

Assignee: Michael Shuler

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>Assignee: Michael Shuler
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the community has been inactive for a long time 
> (https://github.com/hyperic/sigar), we request that the community include the 
> ppc64le library directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11483) Enhance sstablemetadata

2017-08-09 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120624#comment-16120624
 ] 

Chris Lohfink commented on CASSANDRA-11483:
---

you're safe to commit it, thanks!

> Enhance sstablemetadata
> ---
>
> Key: CASSANDRA-11483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11483
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
> Attachments: CASSANDRA-11483.txt, CASSANDRA-11483v2.txt, 
> CASSANDRA-11483v3.txt, CASSANDRA-11483v4.txt, CASSANDRA-11483v5.txt, Screen 
> Shot 2016-04-03 at 11.40.32 PM.png
>
>
> sstablemetadata provides quite a bit of useful information but there's a few 
> hiccups I would like to see addressed:
> * Does not use client mode
> * Units are not provided (or anything for that matter). There is data in 
> micros, millis, seconds as durations and timestamps from epoch. But there is 
> no way to tell which is which without a non-trivial code dive
> * In general, pretty frustrating to parse



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13615) Include 'ppc64le' library for sigar-1.6.4.jar

2017-08-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120607#comment-16120607
 ] 

Michael Shuler commented on CASSANDRA-13615:


Build fixed by INFRA and I grabbed our built ppc64le-linux.so lib in a branch.

> Include 'ppc64le' library for sigar-1.6.4.jar
> -
>
> Key: CASSANDRA-13615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
> Environment: # arch
> ppc64le
>Reporter: Amitkumar Ghatwal
>  Labels: easyfix
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: libsigar-ppc64le-linux.so
>
>
> Hi All,
> sigar-1.6.4.jar does not include a ppc64le library, so we had to install 
> libsigar-ppc64le-linux.so. As the community has been inactive for a long time 
> (https://github.com/hyperic/sigar), we request that the community include the 
> ppc64le library directly here.
> Attaching the ppc64le library (*.so) file to be included under 
> "/lib/sigar-bin". Let me know of any issues/dependencies.
> FYI - [~ReiOdaira],[~jjirsa], [~mshuler]
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11483) Enhance sstablemetadata

2017-08-09 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120382#comment-16120382
 ] 

Joel Knighton commented on CASSANDRA-11483:
---

Is there a branch of dtest changes for this ticket anywhere? While working on 
an unrelated ticket, I noticed this commit broke 
{{repair_tests.incremental_repair_test.TestIncRepair.consistent_repair_test}}. 
I've pushed a trivial fix 
[here|https://github.com/jkni/cassandra-dtest/commit/f55f78b093fc668dc5cc9d1fc72f66dc5a9bf3a6],
 but I don't want to commit it and create conflicts if there's already an 
existing dtest branch. I didn't notice any other tests that needed fixing.

> Enhance sstablemetadata
> ---
>
> Key: CASSANDRA-11483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11483
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 4.0
>
> Attachments: CASSANDRA-11483.txt, CASSANDRA-11483v2.txt, 
> CASSANDRA-11483v3.txt, CASSANDRA-11483v4.txt, CASSANDRA-11483v5.txt, Screen 
> Shot 2016-04-03 at 11.40.32 PM.png
>
>
> sstablemetadata provides quite a bit of useful information but there's a few 
> hiccups I would like to see addressed:
> * Does not use client mode
> * Units are not provided (or anything for that matter). There is data in 
> micros, millis, seconds as durations and timestamps from epoch. But there is 
> no way to tell which is which without a non-trivial code dive
> * In general, pretty frustrating to parse



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12884) Batch logic can lead to unbalanced use of system.batches

2017-08-09 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12884:
--
Reviewer: Aleksey Yeschenko
  Status: Patch Available  (was: Open)

> Batch logic can lead to unbalanced use of system.batches
> 
>
> Key: CASSANDRA-12884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Daniel Cranford
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-12884.patch
>
>
> It looks as though there are some odd edge cases in how we distribute the 
> copies in system.batches.
> The main issue is in the filter method for 
> org.apache.cassandra.batchlog.BatchlogManager
> {code:java}
> if (validated.size() - validated.get(localRack).size() >= 2)
> {
>     // we have enough endpoints in other racks
>     validated.removeAll(localRack);
> }
> if (validated.keySet().size() == 1)
> {
>     // we have only 1 `other` rack
>     Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
>     return Lists.newArrayList(Iterables.limit(otherRack, 2));
> }
> {code}
> So with one or two racks we just return the first 2 entries in the list.  
> There's no shuffle or randomisation here.
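
One possible direction for a fix - a sketch only, not necessarily what the attached patch does - is to shuffle the candidate endpoints before truncating to two, so repeated batches don't always land on the same pair:

{code:java}
// Sketch only: shuffle before limiting so the chosen pair varies per batch.
// otherRack is the single remaining rack's endpoints, as in the snippet above.
List<InetAddress> candidates =
    Lists.newArrayList(Iterables.getOnlyElement(validated.asMap().values()));
Collections.shuffle(candidates, ThreadLocalRandom.current());
return candidates.subList(0, Math.min(2, candidates.size()));
{code}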



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-12884) Batch logic can lead to unbalanced use of system.batches

2017-08-09 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-12884:
-

Assignee: Daniel Cranford  (was: Joshua McKenzie)

> Batch logic can lead to unbalanced use of system.batches
> 
>
> Key: CASSANDRA-12884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Daniel Cranford
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-12884.patch
>
>
> It looks as though there are some odd edge cases in how we distribute the 
> copies in system.batches.
> The main issue is in the filter method for 
> org.apache.cassandra.batchlog.BatchlogManager
> {code:java}
> if (validated.size() - validated.get(localRack).size() >= 2)
> {
>     // we have enough endpoints in other racks
>     validated.removeAll(localRack);
> }
> if (validated.keySet().size() == 1)
> {
>     // we have only 1 `other` rack
>     Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
>     return Lists.newArrayList(Iterables.limit(otherRack, 2));
> }
> {code}
> So with one or two racks we just return the first 2 entries in the list.  
> There's no shuffle or randomisation here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12884) Batch logic can lead to unbalanced use of system.batches

2017-08-09 Thread Daniel Cranford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119980#comment-16119980
 ] 

Daniel Cranford commented on CASSANDRA-12884:
-

Same bug. Regression.

> Batch logic can lead to unbalanced use of system.batches
> 
>
> Key: CASSANDRA-12884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Joshua McKenzie
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-12884.patch
>
>
> It looks as though there are some odd edge cases in how we distribute the 
> copies in system.batches.
> The main issue is in the filter method for 
> org.apache.cassandra.batchlog.BatchlogManager
> {code:java}
> if (validated.size() - validated.get(localRack).size() >= 2)
> {
>     // we have enough endpoints in other racks
>     validated.removeAll(localRack);
> }
> if (validated.keySet().size() == 1)
> {
>     // we have only 1 `other` rack
>     Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
>     return Lists.newArrayList(Iterables.limit(otherRack, 2));
> }
> {code}
> So with one or two racks we just return the first 2 entries in the list.  
> There's no shuffle or randomisation here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12884) Batch logic can lead to unbalanced use of system.batches

2017-08-09 Thread Daniel Cranford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119980#comment-16119980
 ] 

Daniel Cranford edited comment on CASSANDRA-12884 at 8/9/17 2:26 PM:
-

Same bug as CASSANDRA-8735. Regression.


was (Author: daniel.cranford):
Same bug. Regression.

> Batch logic can lead to unbalanced use of system.batches
> 
>
> Key: CASSANDRA-12884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Joshua McKenzie
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-12884.patch
>
>
> It looks as though there are some odd edge cases in how we distribute the 
> copies in system.batches.
> The main issue is in the filter method for 
> org.apache.cassandra.batchlog.BatchlogManager
> {code:java}
> if (validated.size() - validated.get(localRack).size() >= 2)
> {
>     // we have enough endpoints in other racks
>     validated.removeAll(localRack);
> }
> if (validated.keySet().size() == 1)
> {
>     // we have only 1 `other` rack
>     Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
>     return Lists.newArrayList(Iterables.limit(otherRack, 2));
> }
> {code}
> So with one or two racks we just return the first 2 entries in the list.  
> There's no shuffle or randomisation here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8735) Batch log replication is not randomized when there are only 2 racks

2017-08-09 Thread Daniel Cranford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119979#comment-16119979
 ] 

Daniel Cranford commented on CASSANDRA-8735:


[~iamaleksey] Great, I didn't see any activity yet on CASSANDRA-12884, so I 
attached a patch there.

> Batch log replication is not randomized when there are only 2 racks
> ---
>
> Key: CASSANDRA-8735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8735
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Mihai Suteu
>Priority: Minor
> Fix For: 2.1.9, 2.2.1, 3.0 alpha 1
>
> Attachments: 8735-v2.patch, CASSANDRA-8735.patch
>
>
> Batch log replication is not randomized and the same 2 nodes can be picked up 
> when there are only 2 racks in the cluster.
> https://github.com/apache/cassandra/blob/cassandra-2.0.11/src/java/org/apache/cassandra/service/BatchlogEndpointSelector.java#L72-73



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12884) Batch logic can lead to unbalanced use of system.batches

2017-08-09 Thread Daniel Cranford (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Cranford updated CASSANDRA-12884:

Attachment: 0001-CASSANDRA-12884.patch

Fix + improved unit tests.

> Batch logic can lead to unbalanced use of system.batches
> 
>
> Key: CASSANDRA-12884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Hattrell
>Assignee: Joshua McKenzie
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-12884.patch
>
>
> It looks as though there are some odd edge cases in how we distribute the 
> copies in system.batches.
> The main issue is in the filter method for 
> org.apache.cassandra.batchlog.BatchlogManager
> {code:java}
> if (validated.size() - validated.get(localRack).size() >= 2)
> {
>     // we have enough endpoints in other racks
>     validated.removeAll(localRack);
> }
> if (validated.keySet().size() == 1)
> {
>     // we have only 1 `other` rack
>     Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
>     return Lists.newArrayList(Iterables.limit(otherRack, 2));
> }
> {code}
> So with one or two racks we just return the first 2 entries in the list.  
> There's no shuffle or randomisation here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12015) Rebuilding from another DC should use different sources

2017-08-09 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12015:

Component/s: Streaming and Messaging

> Rebuilding from another DC should use different sources
> ---
>
> Key: CASSANDRA-12015
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12015
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Fabien Rousseau
>
> Currently, when adding a new DC (ex: DC2) and rebuilding it from an existing 
> DC (ex: DC1), only the closest replica is used as a "source of data".
> It works but is not optimal, because in case of an RF=3 and 3 nodes cluster, 
> only one node in DC1 is streaming the data to DC2. 
> To build the new DC in a reasonable time, it would be better, in that case, 
> to stream from multiple sources, thus distributing more evenly the load.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-08-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119897#comment-16119897
 ] 

Aleksey Yeschenko commented on CASSANDRA-11748:
---

bq. At a minimum, MigrationTask run should start its execution by comparing 
schema versions between the two nodes again, and exit early if they are 
already the same. If we do that, the remainder of the tasks in the queue would 
often essentially be no-ops. Am I missing anything here?

What I'm missing, once again, is that {{MigrationTask}} itself is asynchronous, 
and will terminate as soon as it's done sending the request, with all the 
action happening later in the callback.

With {{Schema.instance.mergeAndAnnounceVersion()}} itself being 
{{synchronized}}, what are we *really* gaining from having multiple outgoing 
schema pulls, other than the risk of the pathological behaviour described in this 
issue and CASSANDRA-13569?

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed multiple times that a multi-node C* (v2.0.17) cluster ran 
> into OOM during bootstrap in a rolling upgrade from 1.2.19 to 2.0.17. 
> Here is the simple outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema version 
> agreement - via nodetool describecluster
> 2. Restart a Cassandra node
> 3. After restart, there is a chance that the restarted node has a different 
> schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log that occurred in one of our 2-node cluster 
> test bed:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After rebooting node 2, 
> Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
> Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b
> Node 2 keeps submitting the migration task 100+ times to the other 
> node.
> INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
> /192.168.88.33 has restarted, now UP
> INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
> Updating topology for /192.168.88.33
> ...
> DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 
> 102) Submitting migration task for /192.168.88.33
> ... ( over 100+ times)
> --
> On the other hand, Node 1 keeps updating its gossip information, followed by 
> receiving and submitting migration tasks afterwards: 
> INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 
> 978) InetAddress /192.168.88.34 is now UP
> ...
> DEBUG [MigrationStage:1] 2016-04-19 11:18:18,496 
> MigrationRequestVerbHandler.java (line 41) Received migration request from 
> /192.168.88.34.
> …… ( over 100+ times)
> DEBUG [OptionalTasks:1] 2016-04-19 11:19:18,337 MigrationManager.java (line 
> 127) submitting migration task for /192.168.88.34
> .  (over 50+ times)
> On a side note, we have 200+ column families defined in the Cassandra 
> database, which may be related to this amount of RPC traffic.
> P.S.2 The over-requested schema migration tasks will eventually have 
> InternalResponseStage performing schema merge operations. Since each merge 
> requires a compaction and is much slower to consume, the 
> back-pressure of incoming schema migration content objects consumes all of 
> the heap space and ultimately ends in OOM!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-08-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119850#comment-16119850
 ] 

Aleksey Yeschenko commented on CASSANDRA-11748:
---

[~mbyrd] Sorry for the delay. Getting to this and CASSANDRA-13569 just now.

The patch is alright, but I'm wondering if, instead of applying band-aid 
patches, even nice and clean ones, we should change the logic of our 
pulls at a slightly higher level.

The current mechanism is really blunt. We perform the version check just 
once, in {{maybeScheduleSchemaPull()}}, and then basically unconditionally 
schedule the task.

Which leads to this issue, and to the issue described in CASSANDRA-13569, too.

As a result, we can end up with a lot of {{MigrationTask}} instances 
scheduled, most of which become completely unnecessary as soon as the first 
one of them completes and resolves the schema difference.

At a minimum, {{MigrationTask}} run should start its execution by comparing 
schema versions between the two nodes again, and exit early if they are 
already the same. If we do that, the remainder of the tasks in the queue would 
often essentially be no-ops. Am I missing anything here?
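
A minimal sketch of that early-exit check, assuming a helper that reads the peer's gossiped schema version (the helper name below is illustrative, not an existing method):

{code:java}
// Sketch only: bail out at the top of MigrationTask#runMayThrow().
// peerSchemaVersion(endpoint) is an assumed helper that reads the SCHEMA
// application state from gossip; it does not exist under this name.
UUID theirVersion = peerSchemaVersion(endpoint);
if (theirVersion != null && theirVersion.equals(Schema.instance.getVersion()))
    return; // schemas already agree - the queued pull would be a no-op
{code}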

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed multiple times that a multi-node C* (v2.0.17) cluster ran 
> into OOM during bootstrap in a rolling upgrade from 1.2.19 to 2.0.17. 
> Here is the simple outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema version 
> agreement - via nodetool describecluster
> 2. Restart a Cassandra node
> 3. After restart, there is a chance that the restarted node has a different 
> schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log that occurred in one of our 2-node cluster 
> test bed:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After rebooting node 2, 
> Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
> Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b
> Node 2 keeps submitting the migration task 100+ times to the other 
> node.
> INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
> /192.168.88.33 has restarted, now UP
> INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
> Updating topology for /192.168.88.33
> ...
> DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 
> 102) Submitting migration task for /192.168.88.33
> ... ( over 100+ times)
> --
> On the other hand, Node 1 keeps updating its gossip information, followed by 
> receiving and submitting migration tasks afterwards: 
> INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 
> 978) InetAddress /192.168.88.34 is now UP
> ...
> DEBUG [MigrationStage:1] 2016-04-19 11:18:18,496 
> MigrationRequestVerbHandler.java (line 41) Received migration request from 
> /192.168.88.34.
> …… ( over 100+ times)
> DEBUG [OptionalTasks:1] 2016-04-19 11:19:18,337 MigrationManager.java (line 
> 127) submitting migration task for /192.168.88.34
> .  (over 50+ times)
> On a side note, we have 200+ column families defined in the Cassandra 
> database, which may be related to this amount of RPC traffic.
> P.S.2 The over-requested schema migration tasks will eventually have 
> InternalResponseStage performing schema merge operations. Since each merge 
> requires a compaction and is much slower to consume, the 
> back-pressure of incoming schema migration content objects consumes all of 
> the heap space and ultimately ends in OOM!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For 

[jira] [Comment Edited] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-08-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119823#comment-16119823
 ] 

Paulo Motta edited comment on CASSANDRA-11500 at 8/9/17 12:49 PM:
--

Talking offline with Zhao, it seems like there is still an outstanding case 
derived from CASSANDRA-13547 not addressed by the strict liveness suggestion:

{code:none}
// liveness or deletion using max-timestamp of view-primary-key column in base
base:  (k), a, b, c
view:  (k, a), b, c=1

q1: insert (1,1,1,1) with timestamp 0

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@0
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@0

q2: update c=1 with timestamp 10 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@10
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10

q3: update c=2 with timestamp 11 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=2@11
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)

  with strict-liveness flag, view row is dead

q4: update c=1 with timestamp 12 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@12
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@12
 
  view row should be live..but it's dead
{code}
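
In CQL terms, the q1-q4 sequence above corresponds roughly to the following (a sketch; the schema mirrors the base/view layout at the top of the timeline):

{noformat}
CREATE TABLE t (k int PRIMARY KEY, a int, b int, c int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t
    WHERE k IS NOT NULL AND a IS NOT NULL AND c = 1
    PRIMARY KEY (k, a);
INSERT INTO t (k, a, b, c) VALUES (1, 1, 1, 1) USING TIMESTAMP 0; -- q1
UPDATE t USING TIMESTAMP 10 SET c = 1 WHERE k = 1;                -- q2
UPDATE t USING TIMESTAMP 11 SET c = 2 WHERE k = 1;                -- q3: view row goes dead
UPDATE t USING TIMESTAMP 12 SET c = 1 WHERE k = 1;                -- q4: should be live again
{noformat}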

It seems like this scenario, where the row liveness depends on a non-view 
primary key column, was overlooked by CASSANDRA-10368 and seems to be analogous 
to the problem Tyler discovered on CASSANDRA-10226 (but with conditions rather 
than non-base view primary keys):

bq. It seems like when we include multiple non-PK columns in the view PK, we 
fundamentally have to accept that the view row's existence depends on multiple 
timestamps. I propose that we solve this by using a set of timestamps for the 
row's LivenessInfo.

The solution proposed on that ticket of keeping multiple deletion and liveness 
infos per primary key is similar to the virtual cells solution you 
independently came up with (great job!). While I agree that a solution along 
those lines is the way to go moving forward, that's a pretty significant change 
in the storage engine which may introduce unforeseen problems, and it would 
probably be nice to have [~slebresne]'s blessing given he seems to [feel 
strongly|https://issues.apache.org/jira/browse/CASSANDRA-10226?focusedCommentId=14740391=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14740391]
 about it and will likely want to chime in.

I personally think that before introducing disruptive changes to the storage 
engine and MV machinery to enable relatively new features (in this case, 
filtering on non-PK columns which didn't seem to have all of its repercussions 
considered on CASSANDRA-10368), we should take a conservative approach and 
spend our energy on stabilizing current MV features.

In practical terms, I'd suggest going with the simpler strict liveness approach 
I suggested above to fix the current problems (or any alternative which does not 
require disruptive changes to the storage engine) and disallowing filtering on 
non-PK columns while virtual cells are not implemented - MVs with it already 
enabled would not be affected, but users would be susceptible to the problem 
above (we could maybe print a warning to inform them of this).

After we have the current MV features stabilized we can then think of 
implementing the virtual cell idea to properly enable other features like 
filtering on non-view PK columns and supporting multiple non-PK cols in the MV 
clustering key when the partition key is shared (CASSANDRA-10226).

Please let me know what you think.


was (Author: pauloricardomg):
Talking offline with Zhao, it seems like there is still an outstanding case 
derived from CASSANDRA-13547 not addressed by the strict liveness suggestion:

{code:none}
// liveness or deletion using max-timestamp of view-primary-key column in base
base:  (k), a, b, c
view:  (k, a), b, c=1

q1: insert (1,1,1,1) with timestamp 0

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@0
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@0

q2: update c=1 with timestamp 10 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@10
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10

q3: update c=2 with timestamp 11 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=2@11
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)

  with strict-liveness flag, view row is dead

q4: update c=1 with timestamp 12 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@12
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@12
 
  view row should be live..but it's dead
{code}

It seems like this scenario where the row 

[jira] [Commented] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-08-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119823#comment-16119823
 ] 

Paulo Motta commented on CASSANDRA-11500:
-

Talking offline with Zhao, it seems like there is still an outstanding case 
derived from CASSANDRA-13547 not addressed by the strict liveness suggestion:

{code:none}
// liveness or deletion using max-timestamp of view-primary-key column in base
base:  (k), a, b, c
view:  (k, a), b, c=1

q1: insert (1,1,1,1) with timestamp 0

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@0
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@0

q2: update c=1 with timestamp 10 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@10
view: liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10

q3: update c=2 with timestamp 11 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=2@11
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)

  with strict-liveness flag, view row is dead

q4: update c=1 with timestamp 12 where k = 1  

base: liveness=ts@0,  k=1, a=1@0, b=1@0, c=1@12
view:
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@10
  tombstone=ts@0,  (k=1, a=1)
  liveness=ts@0,  (k=1, a=1), b=1@0, c=1@12
 
  view row should be live..but it's dead
{code}

It seems like this scenario, where the row liveness depends on a non-primary 
key column, was overlooked by CASSANDRA-10368 and seems to be analogous to the 
problem Tyler discovered on CASSANDRA-10226 (but with conditions rather than 
non-base view primary keys):

bq. It seems like when we include multiple non-PK columns in the view PK, we 
fundamentally have to accept that the view row's existence depends on multiple 
timestamps. I propose that we solve this by using a set of timestamps for the 
row's LivenessInfo.

The solution proposed on that ticket of keeping multiple deletion and liveness 
infos per primary key is similar to the virtual cells solution you 
independently came up with (great job!). While I agree that a solution along 
those lines is the way to go moving forward, that's a pretty significant change 
in the storage engine which may introduce unforeseen problems, and it would 
probably be nice to have [~slebresne]'s blessing given he seems to [feel 
strongly|https://issues.apache.org/jira/browse/CASSANDRA-10226?focusedCommentId=14740391=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14740391]
 about it and will likely want to chime in.

I personally think that before introducing disruptive changes to the storage 
engine and MV machinery to enable relatively new features (in this case, 
filtering on non-PK columns which didn't seem to have all of its repercussions 
considered on CASSANDRA-10368), we should take a conservative approach and 
spend our energy on stabilizing current MV features.

In practical terms, I'd suggest going with the simpler strict liveness approach 
I suggested above to fix the current problems (or any alternative which does not 
require disruptive changes to the storage engine) and disallowing filtering on 
non-PK columns while virtual cells are not implemented - MVs with it already 
enabled would not be affected, but users would be susceptible to the problem 
above (we could maybe print a warning to inform them of this).

After we have the current MV features stabilized we can then think of 
implementing the virtual cell idea to properly enable other features like 
filtering on non-PK columns and supporting multiple non-PK cols in the MV 
clustering key when the partition key is shared (CASSANDRA-10226).

Please let me know what you think.

> Obsolete MV entry may not be properly deleted
> -
>
> Key: CASSANDRA-11500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11500
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: ZhaoYang
>
> When a Materialized View uses a non-PK base table column in its PK, if an 
> update changes that column value, we add the new view entry and remove the 
> old one. When doing that removal, the current code uses the same timestamp 
> as for the liveness info of the new entry, which is the max timestamp for 
> any columns participating in the view PK. This is not correct for the 
> deletion, as the old view entry could have other columns with a higher timestamp 
> which won't be deleted, as can easily be shown by the failure of the following 
> test:
> {noformat}
> CREATE TABLE t (k int PRIMARY KEY, a int, b int);
> CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS 
> NOT NULL PRIMARY KEY (k, a);
> INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
> UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
> UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;
> SELECT 

[jira] [Commented] (CASSANDRA-13649) Uncaught exceptions in Netty pipeline

2017-08-09 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119802#comment-16119802
 ] 

Stefan Podkowinski commented on CASSANDRA-13649:


It should always be set when not run in a testing context, as instantiated in 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/NativeTransportService.java#L65
and implemented in
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/RequestThreadPoolExecutor.java


> Uncaught exceptions in Netty pipeline
> -
>
> Key: CASSANDRA-13649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13649
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Testing
>Reporter: Stefan Podkowinski
> Attachments: test_stdout.txt
>
>
> I've noticed some netty related errors in trunk in [some of the dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/106/#showFailuresLink].
>  Just want to make sure that we don't have to change anything related to the 
> exception handling in our pipeline and that this isn't a netty issue. 
> Actually if this causes flakiness but is otherwise harmless, we should do 
> something about it, even if it's just on the dtest side.
> {noformat}
> WARN  [epollEventLoopGroup-2-9] 2017-06-28 17:23:49,699 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> And again in another test:
> {noformat}
> WARN  [epollEventLoopGroup-2-8] 2017-06-29 02:27:31,300 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> Edit:
> The {{io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() 
> failed}} error also causes tests to fail for 3.0 and 3.11. 
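
For reference, the warning itself only means that no handler ahead of Netty's tail overrode {{exceptionCaught()}} for this event. If it turns out to be harmless noise, a last-in-pipeline handler along these lines could swallow it (a sketch, not a committed change):

{code:java}
import java.io.IOException;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch only: downgrade connection-reset noise instead of letting it
// reach Netty's tail handler; anything else keeps propagating.
final class SuppressingExceptionHandler extends ChannelInboundHandlerAdapter
{
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
    {
        if (cause instanceof IOException)
            ctx.channel().close(); // e.g. "Connection reset by peer"
        else
            ctx.fireExceptionCaught(cause);
    }
}
{code}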



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13740) Orphan hint file gets created while node is being removed from cluster

2017-08-09 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13740:
--
Fix Version/s: 3.11.x

> Orphan hint file gets created while node is being removed from cluster
> --
>
> Key: CASSANDRA-13740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 13740-3.0.15.txt, gossip_hang_test.py
>
>
> I have found this new issue during my test: whenever a node is being removed, 
> the hint file for that node gets written and stays inside the hint directory 
> forever. I debugged the code and found that it is due to the race condition 
> between [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
>  and [HintsWriteExecutor.java::closeWriter | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L106]
> . 
>  
> *Time t1* Node is down, as a result Hints are being written by 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
> *Time t2* Node is removed from cluster as a result it calls 
> [HintsService.java-exciseStore | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L327]
>  which removes hint files for the node being removed
> *Time t3* Mutation stage keeps pumping Hints through [HintService.java::write 
> | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L145]
>  which again calls [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215]
>  and new orphan file gets created
> I was writing a new dtest for {CASSANDRA-13562, CASSANDRA-13308} and that 
> helped me reproduce this new bug. I will submit a patch for this new dtest 
> later.
> I also tried the following to check how this orphan hint file responds:
> 1. I tried {{nodetool truncatehints <host>}} but it fails as the node is no 
> longer part of the ring
> 2. I then tried {{nodetool truncatehints}}, but that still doesn’t remove the 
> hint file because it is not yet included in the [dispatchDequeue | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsStore.java#L53]
> Reproducible steps:
> Please find dTest python file {{gossip_hang_test.py}} attached which 
> reproduces this bug.
> Solution:
> This is due to the race condition mentioned above. Since 
> {{HintsWriteExecutor.java}} creates a thread pool with only 1 worker, the 
> solution becomes a little simpler. Whenever we [HintService.java::excise | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L303]
>  a host, just store it in-memory, and check for an already evicted host inside 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215].
>  If an already evicted host is found, then ignore its hints.
> Jaydeep



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13740) Orphan hint file gets created while node is being removed from cluster

2017-08-09 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13740:
--
Status: Open  (was: Patch Available)

> Orphan hint file gets created while node is being removed from cluster
> --
>
> Key: CASSANDRA-13740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13740-3.0.15.txt, gossip_hang_test.py
>
>
> I have found this new issue during my test: whenever a node is being removed, 
> the hint file for that node gets written and stays inside the hint directory 
> forever. I debugged the code and found that it is due to the race condition 
> between [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
>  and [HintsWriteExecutor.java::closeWriter | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L106]
> . 
>  
> *Time t1* Node is down, as a result Hints are being written by 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
> *Time t2* Node is removed from cluster as a result it calls 
> [HintsService.java-exciseStore | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L327]
>  which removes hint files for the node being removed
> *Time t3* Mutation stage keeps pumping Hints through [HintService.java::write 
> | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L145]
>  which again calls [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215]
>  and new orphan file gets created
> I was writing a new dtest for {CASSANDRA-13562, CASSANDRA-13308} and that 
> helped me reproduce this new bug. I will submit a patch for this new dtest 
> later.
> I also tried the following to check how this orphan hint file responds:
> 1. I tried {{nodetool truncatehints <host>}} but it fails as the node is no 
> longer part of the ring
> 2. I then tried {{nodetool truncatehints}}, but that still doesn’t remove the 
> hint file because it is not yet included in the [dispatchDequeue | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsStore.java#L53]
> Reproducible steps:
> Please find dTest python file {{gossip_hang_test.py}} attached which 
> reproduces this bug.
> Solution:
> This is due to the race condition mentioned above. Since 
> {{HintsWriteExecutor.java}} creates a thread pool with only 1 worker, the 
> solution becomes a little simpler. Whenever we [HintService.java::excise | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L303]
>  a host, just store it in-memory, and check for an already evicted host inside 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215].
>  If an already evicted host is found, then ignore its hints.
> Jaydeep



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13740) Orphan hint file gets created while node is being removed from cluster

2017-08-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119799#comment-16119799
 ] 

Aleksey Yeschenko commented on CASSANDRA-13740:
---

The patch likely works, but I think we can do better.

Some of the issues I have with it:
1. It introduces a dependency on {{HintsService}} and {{StorageService}} to 
{{HintsWriteExecutor}}
2. It introduces a dependency on {{HintsService}} to {{HintsStore}}

When designing the current iteration of hints I was very careful to structure the 
system in a top-down way without any avoidable interleaving. Each class 
is as dumb as possible on its own, and as you go up, you just compose dumb 
classes that by themselves know nothing of the layers above them.

As for the problem itself, we do acknowledge that “The worst that can happen if 
we don't get everything right is a hints file (or two) remaining undeleted.” - 
see the comments on {{excise()}} - so it’s more of a known limitation than a bug. But of 
course we can improve on it. What is a problem, however, is the inability to 
programmatically remove those orphan files via JMX. {{nodetool truncatehints}} 
should get results no matter what, and that should be fixed.

If we want to deal with the orphans for sure - and I don’t see why we shouldn't 
improve this as well - I suggest you do so in a different way. Perhaps, as the 
last step of {{excise()}}, schedule a task on {{ScheduledExecutors.optionalTasks}} to 
clean up any orphans after some delay.
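
Roughly along these lines (a sketch - the delay and the cleanup helper are illustrative, not existing code):

{code:java}
// Sketch only: as the last step of HintsService#excise(), schedule a
// delayed sweep that removes any hint files that raced in after eviction.
// deleteOrphanedHintFiles(hostId) is an assumed helper, not an existing method.
ScheduledExecutors.optionalTasks.schedule(
    () -> deleteOrphanedHintFiles(hostId),
    10, TimeUnit.MINUTES);
{code}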

> Orphan hint file gets created while node is being removed from cluster
> --
>
> Key: CASSANDRA-13740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13740-3.0.15.txt, gossip_hang_test.py
>
>
> I have found this new issue during my test: whenever a node is being removed, 
> the hint file for that node gets written and stays inside the hint directory 
> forever. I debugged the code and found that it is due to the race condition 
> between [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
>  and [HintsWriteExecutor.java::closeWriter | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L106]
> . 
>  
> *Time t1* Node is down, as a result Hints are being written by 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L195]
> *Time t2* Node is removed from cluster as a result it calls 
> [HintsService.java-exciseStore | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L327]
>  which removes hint files for the node being removed
> *Time t3* Mutation stage keeps pumping Hints through [HintService.java::write 
> | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L145]
>  which again calls [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215]
>  and new orphan file gets created
> I was writing a new dtest for {CASSANDRA-13562, CASSANDRA-13308} and that 
> helped me reproduce this new bug. I will submit a patch for this new dtest 
> later.
> I also tried the following to check how this orphan hint file responds:
> 1. I tried {{nodetool truncatehints <host>}} but it fails as the node is no 
> longer part of the ring
> 2. I then tried {{nodetool truncatehints}}, but that still doesn’t remove the 
> hint file because it is not yet included in the [dispatchDequeue | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsStore.java#L53]
> Reproducible steps:
> Please find dTest python file {{gossip_hang_test.py}} attached which 
> reproduces this bug.
> Solution:
> This is due to the race condition mentioned above. Since 
> {{HintsWriteExecutor.java}} creates a thread pool with only 1 worker, the 
> solution becomes a little simpler. Whenever we [HintService.java::excise | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsService.java#L303]
>  a host, just store it in-memory, and check for an already evicted host inside 
> [HintsWriteExecutor.java::flush | 
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/hints/HintsWriteExecutor.java#L215].
>  If an already evicted host is found, then ignore its hints.
> Jaydeep



--
This message was sent 

[jira] [Commented] (CASSANDRA-13649) Uncaught exceptions in Netty pipeline

2017-08-09 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119785#comment-16119785
 ] 

Norman Maurer commented on CASSANDRA-13649:
---

Is it possible that you have an eventExecutor set here?:

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/Server.java#L329

And if so, can you show me the implementation of it?

> Uncaught exceptions in Netty pipeline
> -
>
> Key: CASSANDRA-13649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13649
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Testing
>Reporter: Stefan Podkowinski
> Attachments: test_stdout.txt
>
>
> I've noticed some netty related errors in trunk in [some of the dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/106/#showFailuresLink].
>  Just want to make sure that we don't have to change anything related to the 
> exception handling in our pipeline and that this isn't a netty issue. 
> Actually if this causes flakiness but is otherwise harmless, we should do 
> something about it, even if it's just on the dtest side.
> {noformat}
> WARN  [epollEventLoopGroup-2-9] 2017-06-28 17:23:49,699 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> And again in another test:
> {noformat}
> WARN  [epollEventLoopGroup-2-8] 2017-06-29 02:27:31,300 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> Edit:
> The {{io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() 
> failed}} error also causes tests to fail for 3.0 and 3.11. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13649) Uncaught exceptions in Netty pipeline

2017-08-09 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119763#comment-16119763
 ] 

Stefan Podkowinski commented on CASSANDRA-13649:


The Initializer implementing ChannelInitializer is added by calling 
ServerBootstrap.childHandler() here:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/Server.java#L153


> Uncaught exceptions in Netty pipeline
> -
>
> Key: CASSANDRA-13649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13649
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Testing
>Reporter: Stefan Podkowinski
> Attachments: test_stdout.txt
>
>
> I've noticed some netty related errors in trunk in [some of the dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/106/#showFailuresLink].
>  Just want to make sure that we don't have to change anything related to the 
> exception handling in our pipeline and that this isn't a netty issue. 
> Actually if this causes flakiness but is otherwise harmless, we should do 
> something about it, even if it's just on the dtest side.
> {noformat}
> WARN  [epollEventLoopGroup-2-9] 2017-06-28 17:23:49,699 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> And again in another test:
> {noformat}
> WARN  [epollEventLoopGroup-2-8] 2017-06-29 02:27:31,300 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> Edit:
> The {{io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() 
> failed}} error also causes tests to fail for 3.0 and 3.11. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13649) Uncaught exceptions in Netty pipeline

2017-08-09 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119753#comment-16119753
 ] 

Norman Maurer commented on CASSANDRA-13649:
---

[~spo...@gmail.com] yes, how handler(...) and childHandler(...) are now handled 
is more consistent. Can you give me a link to the code where you set up the 
handlers that produce these exceptions?

> Uncaught exceptions in Netty pipeline
> -
>
> Key: CASSANDRA-13649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13649
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging, Testing
>Reporter: Stefan Podkowinski
> Attachments: test_stdout.txt
>
>
> I've noticed some netty related errors in trunk in [some of the dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/106/#showFailuresLink].
>  Just want to make sure that we don't have to change anything related to the 
> exception handling in our pipeline and that this isn't a netty issue. 
> Actually if this causes flakiness but is otherwise harmless, we should do 
> something about it, even if it's just on the dtest side.
> {noformat}
> WARN  [epollEventLoopGroup-2-9] 2017-06-28 17:23:49,699 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> And again in another test:
> {noformat}
> WARN  [epollEventLoopGroup-2-8] 2017-06-29 02:27:31,300 Slf4JLogger.java:151 
> - An exceptionCaught() event was fired, and it reached at the tail of the 
> pipeline. It usually means the last handler in the pipeline did not handle 
> the exception.
> io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: 
> Connection reset by peer
>   at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown 
> Source) ~[netty-all-4.0.44.Final.jar:4.0.44.Final]
> {noformat}
> Edit:
> The {{io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() 
> failed}} error also causes tests to fail for 3.0 and 3.11. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8735) Batch log replication is not randomized when there are only 2 racks

2017-08-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119735#comment-16119735
 ] 

Aleksey Yeschenko commented on CASSANDRA-8735:
--

[~daniel.cranford] [~yukim] I think CASSANDRA-12884 (unresolved) covers the 
issue - whatever the origin of it was.

> Batch log replication is not randomized when there are only 2 racks
> ---
>
> Key: CASSANDRA-8735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8735
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Mihai Suteu
>Priority: Minor
> Fix For: 2.1.9, 2.2.1, 3.0 alpha 1
>
> Attachments: 8735-v2.patch, CASSANDRA-8735.patch
>
>
> Batch log replication is not randomized, so the same 2 nodes can be picked 
> every time when there are only 2 racks in the cluster.
> https://github.com/apache/cassandra/blob/cassandra-2.0.11/src/java/org/apache/cassandra/service/BatchlogEndpointSelector.java#L72-73
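
One way to avoid the deterministic pick is to shuffle the candidates first; a 
minimal sketch with invented names, not the attached patch:
{noformat}
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public final class BatchlogEndpointSketch
{
    // Pick up to two distinct batch log replicas; shuffling first means the
    // same pair is no longer chosen deterministically with only two racks.
    public static List<InetAddress> pickTwo(Collection<InetAddress> candidates)
    {
        List<InetAddress> shuffled = new ArrayList<>(candidates);
        Collections.shuffle(shuffled);
        return shuffled.subList(0, Math.min(2, shuffled.size()));
    }
}
{noformat}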



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13750) Counter digests include local data

2017-08-09 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119713#comment-16119713
 ] 

Aleksey Yeschenko commented on CASSANDRA-13750:
---

If you have legacy pre-2.1 data shards lying around in sstables, this bug will 
hurt. One minor problem is that for people who don't, there will be a short 
period of digest mismatches during the minor upgrade from 3.0.prev to 3.0.next, 
but I don't see a way to avoid it.

+1

> Counter digests include local data
> --
>
> Key: CASSANDRA-13750
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13750
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> In 3.x+, the raw counter value bytes are used when hashing counters for reads 
> and repair, including local shard data, which is removed when streamed. This 
> leads to constant digest mismatches and repair overstreaming.
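
A fix along the lines discussed would normalize the counter context before 
hashing; a minimal sketch, assuming {{CounterContext#clearAllLocal}} behaves 
as it does on the streaming path (the surrounding class name is invented):
{noformat}
import java.nio.ByteBuffer;

import org.apache.cassandra.db.context.CounterContext;

public final class CounterDigestSketch
{
    // Clear local shards before hashing so that replicas holding different
    // local shard data still produce matching digests for the same counter.
    public static ByteBuffer valueForDigest(ByteBuffer counterValue)
    {
        return CounterContext.instance().clearAllLocal(counterValue);
    }
}
{noformat}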



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12758) Expose tasks queue length via JMX

2017-08-09 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Summary: Expose tasks queue length via JMX  (was: Expose NTR tasks queue 
length via JMX)

> Expose tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow us to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes
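
A minimal sketch of such an MBean; all names here are illustrative, not the 
patch's actual API:
{noformat}
// NativeTransportQueueMBean.java (hypothetical)
public interface NativeTransportQueueMBean
{
    int getMaxQueuedNativeTransportRequests();
}

// NativeTransportQueue.java (hypothetical)
import java.lang.management.ManagementFactory;

import javax.management.ObjectName;

public class NativeTransportQueue implements NativeTransportQueueMBean
{
    private final int maxQueued;

    public NativeTransportQueue(int maxQueued)
    {
        this.maxQueued = maxQueued;
    }

    @Override
    public int getMaxQueuedNativeTransportRequests()
    {
        return maxQueued; // pollable from JMX for monitoring/correlation
    }

    public void register() throws Exception
    {
        ManagementFactory.getPlatformMBeanServer()
            .registerMBean(this, new ObjectName("org.apache.cassandra.transport:type=NativeTransportQueue"));
    }
}
{noformat}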



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12758) Expose NTR tasks queue length via JMX

2017-08-09 Thread Romain Hardouin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Romain Hardouin updated CASSANDRA-12758:

Summary: Expose NTR tasks queue length via JMX  (was: Expose tasks queue 
length via JMX)

> Expose NTR tasks queue length via JMX
> -
>
> Key: CASSANDRA-12758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12758
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Romain Hardouin
>Assignee: Michael Shuler
>Priority: Minor
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 12758-3.0.patch, 12758-trunk.patch
>
>
> CASSANDRA-11363 introduced {{cassandra.max_queued_native_transport_requests}} 
> to set the NTR queue length.
> Currently Cassandra lacks a JMX MBean exposing this value, which would 
> allow us to:
>  
> 1. Be sure this value has been set
> 2. Plot this value in a monitoring application to make correlations with 
> other graphs when we make changes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13751) Race / ref leak in PendingRepairManager

2017-08-09 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119544#comment-16119544
 ] 

Marcus Eriksson commented on CASSANDRA-13751:
-

+1 if tests succeed (seems the utests failed as well)

> Race / ref leak in PendingRepairManager
> ---
>
> Key: CASSANDRA-13751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13751
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> PendingRepairManager#getScanners has an assertion that confirms an sstable 
> is, in fact, marked as pending repair. Since validation compactions don't use 
> the same concurrency controls as proper compactions, they can race with 
> promotion/demotion compactions and end up getting assertion errors when the 
> pending repair id is changed while the scanners are being acquired. Also, 
> error handling in PendingRepairManager and CompactionStrategyManager leaks 
> refs when this happens.
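
The ref-leak half of this is the classic release-on-error pattern; a minimal 
sketch with invented types, not the actual PendingRepairManager code:
{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public final class ScannerAcquireSketch
{
    interface Scanner extends AutoCloseable
    {
        @Override
        void close(); // releases the underlying sstable reference
    }

    // If acquiring any scanner fails partway (e.g. an assertion trips because
    // the pending repair id changed underneath us), release what we already
    // hold instead of leaking the references.
    static List<Scanner> acquireAll(List<Supplier<Scanner>> sources)
    {
        List<Scanner> acquired = new ArrayList<>();
        try
        {
            for (Supplier<Scanner> source : sources)
                acquired.add(source.get());
            return acquired;
        }
        catch (Throwable t)
        {
            for (Scanner scanner : acquired)
                scanner.close();
            throw t;
        }
    }
}
{noformat}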



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11500) Obsolete MV entry may not be properly deleted

2017-08-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119461#comment-16119461
 ] 

Paulo Motta commented on CASSANDRA-11500:
-

The virtual cell proposal is pretty clever and looks like it would solve most 
outstanding issues, but I'm a bit concerned about adding new structures to the 
storage engine to deal with materialized-view-specific issues.

While I agree we should do so if it's our only choice, we should first explore 
alternatives that reuse existing structures, to avoid introducing 
feature-specific machinery into the storage engine.

Looking back at the original scenario which motivated this ticket 
(CASSANDRA-11500):
{noformat}
CREATE TABLE t (k int PRIMARY KEY, a int, b int);
CREATE MATERIALIZED VIEW mv AS SELECT * FROM t WHERE k IS NOT NULL AND a IS NOT 
NULL PRIMARY KEY (k, a);

INSERT INTO t(k, a, b) VALUES (1, 1, 1) USING TIMESTAMP 0;
UPDATE t USING TIMESTAMP 4 SET b = 2 WHERE k = 1;
UPDATE t USING TIMESTAMP 2 SET a = 2 WHERE k = 1;

SELECT * FROM mv WHERE k = 1; // This currently returns 2 entries: the old 
(invalid) one and the new one
{noformat}

It seems to me that the problem here is applying the standard-table rule that 
if a single column is live, then the whole row is live. That need not be the 
case for MVs, where we can guarantee a view entry will always contain 
row-level liveness info.

We could solve this by introducing a "strict" flag on the row liveness info, 
with the following semantics:
- A strict row is live iff its row-level liveness info is live, regardless of 
the liveness of its columns

Materialized view rows would have this flag set and perform deletions with 
their max primary key timestamp (instead of the max timestamp across all 
columns); this would solve the issue above by ensuring the row 
{{(1, 1)@(liveness@0, deleted@2)=(b=2@4)}} is not live.
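
In code the proposed rule is tiny; a minimal sketch with invented names, just 
to make the semantics concrete:
{noformat}
public final class StrictLivenessSketch
{
    // A strict row (as MV rows would be) is live iff its row-level liveness
    // is live; live columns alone cannot resurrect it. Standard tables keep
    // the existing rule: live if the row liveness or any column is live.
    static boolean rowIsLive(boolean strict, boolean rowLivenessLive, boolean anyColumnLive)
    {
        return strict ? rowLivenessLive : (rowLivenessLive || anyColumnLive);
    }
}
{noformat}
Under this rule the example row above has its row liveness@0 shadowed by the 
deletion@2, so {{b=2@4}} alone no longer keeps the stale entry alive.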

In addition to solving the original problem, we would avoid the second problem 
in the ticket description, where updates to the view primary key with a 
smaller timestamp are shadowed by a shadowable tombstone carrying the max 
timestamp of a non-PK column. In this approach the shadowable tombstone 
mechanism stays orthogonal to strict liveness and keeps working as it does 
today.

This mechanism alone would not solve every problem, but it addresses the ones 
described in this ticket with minimal change to the storage engine. Let's now 
go through the other issues to see how we could solve them:

*View row expires too fast with unselected base column (CASSANDRA-13127)*

From the discussion on CASSANDRA-13127 it seems you found and fixed some 
issues with liveness comparison, in addition to no view update being generated 
when there is an update to an unselected column, which - together with the 
strict row concept above - seems to solve this issue. Even though this will 
require a read when updating columns not in the view, the MV user is already 
expecting to pay an extra price for MVs anyway, so it shouldn't be a problem - 
if you want performance you can build views manually, or use CASSANDRA-9779 
when it's hopefully ready. :-)

*Base column used in view key is TTLed (CASSANDRA-13657)*

This seems to be addressed by the fix above.

*Partial update unselected columns in base (CASSANDRA-13127)*

This seems to be more an anomaly of the partial-row-update semantics, which 
has bad consequences for MVs, than a problem with MVs themselves. 6.0, where 
Thrift is gone, is a good occasion to revisit these semantics rather than 
trying to make MVs fit into them.

Right now, inserting a non-PK column (in the example, {{UPDATE base USING 
TIMESTAMP 10 SET b=1 WHERE k=1 AND c=1}}) will create a row (k=1, c=1, b=1)@10 
from the end-user's perspective ({{SELECT * FROM base WHERE k=1 AND c=1}}) 
while internally creating only a column, so deleting that same non-PK column 
({{DELETE b FROM base USING TIMESTAMP 11 WHERE k=1 AND c=1;}}) destroys the 
entire row.

While these semantics may make sense in a column-oriented world, they're a tad 
bizarre in a row-oriented world, given we can delete a row by simply unsetting 
a non-PK column. I think the correct and expected semantics would be: {{a 
column update to a non-existing row will create it}}.

This is IMO what makes the most sense in a row-oriented store, and it's 
possible to implement without falling into the unexpected/inconsistent 
behaviors of CASSANDRA-6782/CASSANDRA-6668 by adding some kind of 
{{CREATE_IF_NOT_EXISTS}} flag that basically keeps the oldest liveness entry 
when more than one flagged entry is found during merging.
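
A minimal sketch of that merge rule, with invented names, to make the proposal 
concrete:
{noformat}
public final class LivenessMergeSketch
{
    static final class Liveness
    {
        final long timestamp;
        final boolean createIfNotExists;

        Liveness(long timestamp, boolean createIfNotExists)
        {
            this.timestamp = timestamp;
            this.createIfNotExists = createIfNotExists;
        }
    }

    // When both entries carry CREATE_IF_NOT_EXISTS, keep the *oldest* one so
    // a later flagged update cannot move the row's creation time forward;
    // otherwise fall back to the usual last-write-wins merge.
    static Liveness merge(Liveness a, Liveness b)
    {
        if (a.createIfNotExists && b.createIfNotExists)
            return a.timestamp <= b.timestamp ? a : b;
        return a.timestamp >= b.timestamp ? a : b;
    }
}
{noformat}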

This change would give the most correct semantics while still preventing the 
MV anomaly caused by the current behavior. If you agree, we can create another 
ticket to propose it.

*merging should be commutative (CASSANDRA-13409)*
The second case, represented by