[jira] [Comment Edited] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-22 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060397#comment-16060397
 ] 

Amitkumar Ghatwal edited comment on CASSANDRA-13581 at 6/23/17 4:30 AM:


Any comments on the contents of the plugins page, 
https://github.com/hhorii/capi-rowcache, and PR#118? Can this plugin be added 
to Cassandra's webpage? - [~jjirsa] [~spo...@gmail.com]


was (Author: amitkumar_ghatwal):
Any comments on the contents of the plugins page, 
"https://github.com/hhorii/capi-rowcache", and PR#118? Can this plugin be added 
to Cassandra's webpage? - [~jjirsa] [~spo...@gmail.com]

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here: 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html, I have 
> created the necessary *.rst file to add a "plugins" link here: 
> https://cassandra.apache.org/doc/latest/.
> I have followed the steps here: 
> https://cassandra.apache.org/doc/latest/development/documentation.html and 
> raised a PR, https://github.com/apache/cassandra/pull/118, for introducing 
> plugins support on Cassandra's webpage.
> Let me know your review comments; if I have not made the changes to 
> Cassandra's website correctly, I can rectify them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-22 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060397#comment-16060397
 ] 

Amitkumar Ghatwal edited comment on CASSANDRA-13581 at 6/23/17 4:28 AM:


Any comments on the contents of the plugins page, 
"https://github.com/hhorii/capi-rowcache", and PR#118? Can this plugin be added 
to Cassandra's webpage? - [~jjirsa] [~spo...@gmail.com]


was (Author: amitkumar_ghatwal):
Any comments on the contents of the plugins page, 
"https://github.com/hhorii/capi-rowcache", and PR#118? Can this plugin be added 
to Cassandra's webpage? - [~jeff jirsa] [~spo...@gmail.com]

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here: 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html, I have 
> created the necessary *.rst file to add a "plugins" link here: 
> https://cassandra.apache.org/doc/latest/.
> I have followed the steps here: 
> https://cassandra.apache.org/doc/latest/development/documentation.html and 
> raised a PR, https://github.com/apache/cassandra/pull/118, for introducing 
> plugins support on Cassandra's webpage.
> Let me know your review comments; if I have not made the changes to 
> Cassandra's website correctly, I can rectify them.






[jira] [Commented] (CASSANDRA-13581) Adding plugins support to Cassandra's webpage

2017-06-22 Thread Amitkumar Ghatwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060397#comment-16060397
 ] 

Amitkumar Ghatwal commented on CASSANDRA-13581:
---

Any comments on the contents of the plugins page, 
"https://github.com/hhorii/capi-rowcache", and PR#118? Can this plugin be added 
to Cassandra's webpage? - [~jeff jirsa] [~spo...@gmail.com]

> Adding plugins support to Cassandra's webpage
> -
>
> Key: CASSANDRA-13581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13581
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Amitkumar Ghatwal
>  Labels: documentation
> Fix For: 4.x
>
>
> Hi [~spo...@gmail.com],
> As was suggested here: 
> http://www.mail-archive.com/dev@cassandra.apache.org/msg11183.html, I have 
> created the necessary *.rst file to add a "plugins" link here: 
> https://cassandra.apache.org/doc/latest/.
> I have followed the steps here: 
> https://cassandra.apache.org/doc/latest/development/documentation.html and 
> raised a PR, https://github.com/apache/cassandra/pull/118, for introducing 
> plugins support on Cassandra's webpage.
> Let me know your review comments; if I have not made the changes to 
> Cassandra's website correctly, I can rectify them.






[jira] [Commented] (CASSANDRA-13593) Cassandra Service Crashes

2017-06-22 Thread anuragh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060378#comment-16060378
 ] 

anuragh commented on CASSANDRA-13593:
-

The current version I'm using is 2.1.7, so this issue should already have been 
fixed, but I'm still seeing it.


> Cassandra Service Crashes
> -
>
> Key: CASSANDRA-13593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13593
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Cassandra Migration from one rack to another
>Reporter: anuragh
> Fix For: 2.1.x
>
> Attachments: cassandra.log, system.log
>
>
> I'm migrating from one rack to another.
> Source: 3 nodes, each containing 175GB of data.
> Target: 3 nodes.
> I added one of the IP addresses as a seed node, so the ring contained the 
> existing 3 source nodes plus the newly added node.
> The moment the node was added to the ring, the Cassandra service crashed.






[jira] [Comment Edited] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-22 Thread Krishna Dattu Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040527#comment-16040527
 ] 

Krishna Dattu Koneru edited comment on CASSANDRA-13547 at 6/23/17 3:26 AM:
---

{code}
cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';

cqlsh> SELECT * FROM test.table1;

 id | name | enabled | foo
+--+-+-
  1 |  One |True | Bar

(1 rows)
{code}
{code:title=Problem 1 - Missing Updates|borderStyle=solid}

cqlsh> SELECT * FROM test.table1_mv1;

 name | id | foo
--++-

(0 rows)
{code}

The logic in ViewUpdateGenerator.java does not update the view row if the 
updated column is not denormalized in the view.
In the above case, {{enabled}} is not denormalized, so the update was not 
propagated to the view. The view metadata only contains the pk columns plus the 
columns in the SELECT statement of CREATE VIEW.
Now that filtering on non-pk columns is supported, we have to make sure that 
all non-primary key columns that have filters are denormalized. With this we can 
also make sure that {{ALTER TABLE}} does not drop a column that is used in the view.
(Delete does not do this check because it does not have to; this is why the 
row delete worked when {{enabled}} was set to false.)
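
As a workaround until this is fixed, the filtered column can be denormalized 
explicitly by including it in the view's SELECT list, which is exactly why 
{{table1_mv2}} receives the update while {{table1_mv1}} does not. A minimal 
sketch against the schema from this ticket (the view name is hypothetical):

{code:title=Workaround sketch - denormalize the filtered column|language=sql}
-- Including the filtered column 'enabled' in the SELECT list makes it part of
-- the view metadata, so updates to it propagate to the view (Problem 1).
-- Note this does not avoid the tombstone issue described below as Problem 2.
CREATE MATERIALIZED VIEW test.table1_mv1_fixed AS
    SELECT id, name, foo, enabled
    FROM test.table1
    WHERE id IS NOT NULL
      AND name IS NOT NULL
      AND enabled = TRUE
    PRIMARY KEY ((name), id);
{code}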

{code:title=Problem 2 - incorrect non-pk tombstones|borderStyle=solid}
cqlsh> SELECT * FROM test.table1_mv2;

 name | id | enabled | foo
--++-+--
  One |  1 |True | null

(1 rows)
{code}

This happens because of the way liveness/deletion info is computed in the 
view.
The {{computeTimestampForEntryDeletion()}} method takes the biggest timestamp of 
all the columns (including non-pk) in the row and uses it in the deletion info 
when deleting.

But when inserting/updating, {{computeLivenessInfoForEntry()}} uses the biggest 
timestamp of the primary key columns for the liveness info.
This causes non-pk columns to be treated as deleted, because the view tombstones 
have a higher timestamp than the live cells from the base row.
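
To make the asymmetry concrete, here is a small self-contained model of the two 
computations as described above; the method names mirror ViewUpdateGenerator, 
but the row representation is a simplified stand-in, not Cassandra source:

{code:title=Model of the timestamp asymmetry (illustrative)|language=java}
import java.util.Map;

public class TimestampAsymmetry
{
    // Deletion path, as described: max timestamp over ALL columns.
    static long computeTimestampForEntryDeletion(Map<String, Long> pkTimestamps,
                                                 Map<String, Long> regularTimestamps)
    {
        long max = Long.MIN_VALUE;
        for (long ts : pkTimestamps.values())
            max = Math.max(max, ts);
        for (long ts : regularTimestamps.values())
            max = Math.max(max, ts);
        return max;
    }

    // Liveness path, as described: max timestamp over PRIMARY KEY columns only.
    static long computeLivenessInfoForEntry(Map<String, Long> pkTimestamps)
    {
        long max = Long.MIN_VALUE;
        for (long ts : pkTimestamps.values())
            max = Math.max(max, ts);
        return max;
    }

    public static void main(String[] args)
    {
        // A later UPDATE bumps a regular column's timestamp past the pk timestamps.
        Map<String, Long> pk = Map.of("id", 10L, "name", 10L);
        Map<String, Long> regular = Map.of("enabled", 20L, "foo", 10L);

        long deletionTs = computeTimestampForEntryDeletion(pk, regular); // 20
        long livenessTs = computeLivenessInfoForEntry(pk);               // 10

        // The view tombstone (ts=20) shadows the re-inserted cells (ts=10),
        // which is why 'foo' reads back as null in table1_mv2.
        System.out.println("deletion=" + deletionTs + ", liveness=" + livenessTs);
    }
}
{code}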


I have uploaded a [patch for the 3.11 branch | 
https://github.com/apache/cassandra/compare/cassandra-3.11...krishna-koneru:cassandra-3.11-13547] 
which fixes the above two issues.

I will make patches for the other branches if this patch looks okay.
Comments appreciated!


was (Author: krishna.koneru):

{code}
cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';

cqlsh> SELECT * FROM test.table1;

 id | name | enabled | foo
+--+-+-
  1 |  One |True | Bar

(1 rows)
{code}
{code:title=Problem 1 - Missing Updates|borderStyle=solid}

cqlsh> SELECT * FROM test.table1_mv1;

 name | id | foo
--++-

(0 rows)
{code}

The logic in ViewUpdateGenerator.java does not update the view row if the 
updated column is not denormalized in the view.
In the above case, {{enabled}} is not denormalized, so the update was not 
propagated to the view. The view metadata only contains the pk columns plus the 
columns in the SELECT statement of CREATE VIEW.
Now that filtering on non-pk columns is supported, we have to make sure that 
all non-primary key columns that have filters are denormalized.
(Delete does not do this check because it does not have to; this is why the 
row delete worked when {{enabled}} was set to false.)

{code:title=Problem 2 - incorrect non-pk tombstones|borderStyle=solid}
cqlsh> SELECT * FROM test.table1_mv2;

 name | id | enabled | foo
--++-+--
  One |  1 |True | null

(1 rows)
{code}

This happens because of the way liveness/deletion info is computed in the 
view.
The {{computeTimestampForEntryDeletion()}} method takes the biggest timestamp of 
all the columns (including non-pk) in the row and uses it in the deletion info 
when deleting.

But when inserting/updating, {{computeLivenessInfoForEntry()}} uses the biggest 
timestamp of the primary key columns for the liveness info.
This causes non-pk columns to be treated as deleted, because the view tombstones 
have a higher timestamp than the live cells from the base row.


I have uploaded a [patch for the 3.11 branch | 
https://github.com/apache/cassandra/compare/cassandra-3.11...krishna-koneru:cassandra-3.11-13547] 
which fixes the above two issues.

I will make patches for the other branches if this patch looks okay.
Comments appreciated!

> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.

[jira] [Updated] (CASSANDRA-13629) Wait for batchlog replay during bootstrap

2017-06-22 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-13629:

Reviewer: Paulo Motta

> Wait for batchlog replay during bootstrap
> -
>
> Key: CASSANDRA-13629
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13629
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>
> As part of the problem described in 
> [CASSANDRA-13162|https://issues.apache.org/jira/browse/CASSANDRA-13162], the 
> bootstrap logic won't wait for the backlogged batchlog to be fully replayed 
> before changing the new bootstrapping node to "UN" state. We should wait for 
> batchlog replay before making the node available.
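
A minimal sketch of the intended ordering, using a plain java.util.concurrent 
stand-in rather than the actual bootstrap and batchlog classes (the hooks below 
are assumptions for illustration, not the real API):

{code:title=Sketch - wait for batchlog replay before announcing UN (illustrative)|language=java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class BootstrapOrderingSketch
{
    // Stand-in for flipping the bootstrapping node's state to "UN" (Up/Normal).
    static void markNodeAvailable()
    {
        System.out.println("node is now UN");
    }

    // Stand-in for replaying the backlogged batchlog.
    static CompletableFuture<Void> startBatchlogReplay()
    {
        return CompletableFuture.runAsync(() -> System.out.println("batchlog replayed"));
    }

    public static void main(String[] args) throws InterruptedException, ExecutionException
    {
        // The proposed ordering: block until replay completes, only then go UN.
        startBatchlogReplay().get();
        markNodeAvailable();
    }
}
{code}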






[jira] [Assigned] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-22 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reassigned CASSANDRA-13547:


Assignee: Krishna Dattu Koneru  (was: Kurt Greaves)

> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Krishna Dattu Koneru
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
>   One |  1 |True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
> (0 rows)
> {code}
> However, a further update to the base table setting enabled to TRUE should 
> include the record in both materialized views, yet only one view 
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column 
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key are 
> not updated. You can see that the foo column has a null value in table1_mv2. 
> (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+--
>   One |  1 |True | null
> (1 rows)
> {code}






[jira] [Assigned] (CASSANDRA-13547) Filtered materialized views missing data

2017-06-22 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reassigned CASSANDRA-13547:


Assignee: Kurt Greaves  (was: Krishna Dattu Koneru)

> Filtered materialized views missing data
> 
>
> Key: CASSANDRA-13547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13547
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
> Environment: Official Cassandra 3.10 Docker image (ID 154b919bf8ce).
>Reporter: Craig Nicholson
>Assignee: Kurt Greaves
>Priority: Blocker
>  Labels: materializedviews
> Fix For: 3.11.x
>
>
> When creating a materialized view against a base table the materialized view 
> does not always reflect the correct data.
> Using the following test schema:
> {code:title=Schema|language=sql}
> DROP KEYSPACE IF EXISTS test;
> CREATE KEYSPACE test
>   WITH REPLICATION = { 
>'class' : 'SimpleStrategy', 
>'replication_factor' : 1 
>   };
> CREATE TABLE test.table1 (
> id int,
> name text,
> enabled boolean,
> foo text,
> PRIMARY KEY (id, name));
> CREATE MATERIALIZED VIEW test.table1_mv1 AS SELECT id, name, foo
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> CREATE MATERIALIZED VIEW test.table1_mv2 AS SELECT id, name, foo, enabled
> FROM test.table1
> WHERE id IS NOT NULL 
> AND name IS NOT NULL 
> AND enabled = TRUE
> PRIMARY KEY ((name), id);
> {code}
> When I insert a row into the base table the materialized views are updated 
> appropriately. (+)
> {code:title=Insert row|language=sql}
> cqlsh> INSERT INTO test.table1 (id, name, enabled, foo) VALUES (1, 'One', 
> TRUE, 'Bar');
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
>   One |  1 | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
>   One |  1 |True | Bar
> (1 rows)
> {code}
> Updating the record in the base table and setting enabled to FALSE will 
> filter the record from both materialized views. (+)
> {code:title=Disable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = FALSE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |   False | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+-
> (0 rows)
> {code}
> However, a further update to the base table setting enabled to TRUE should 
> include the record in both materialized views, yet only one view 
> (table1_mv2) gets updated. (-)
> It appears that only the view (table1_mv2) that returns the filtered column 
> (enabled) is updated. (-)
> Additionally, columns that are not part of the partition or clustering key are 
> not updated. You can see that the foo column has a null value in table1_mv2. 
> (-)
> {code:title=Enable the row|language=sql}
> cqlsh> UPDATE test.table1 SET enabled = TRUE WHERE id = 1 AND name = 'One';
> cqlsh> SELECT * FROM test.table1;
>  id | name | enabled | foo
> +--+-+-
>   1 |  One |True | Bar
> (1 rows)
> cqlsh> SELECT * FROM test.table1_mv1;
>  name | id | foo
> --++-
> (0 rows)
> cqlsh> SELECT * FROM test.table1_mv2;
>  name | id | enabled | foo
> --++-+--
>   One |  1 |True | null
> (1 rows)
> {code}






[jira] [Assigned] (CASSANDRA-13464) Failed to create Materialized view with a specific token range

2017-06-22 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reassigned CASSANDRA-13464:


Assignee: Krishna Dattu Koneru

> Failed to create Materialized view with a specific token range
> --
>
> Key: CASSANDRA-13464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Natsumi Kojima
>Assignee: Krishna Dattu Koneru
>Priority: Minor
>
> Failed to create a materialized view with a specific token range.
> Example:
> {code:java}
> $ ccm create "MaterializedView" -v 3.0.13
> $ ccm populate  -n 3
> $ ccm start
> $ ccm status
> Cluster: 'MaterializedView'
> ---
> node1: UP
> node3: UP
> node2: UP
> $ ccm node1 cqlsh
> Connected to MaterializedView at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.13 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', 
> 'replication_factor':3};
> cqlsh> CREATE TABLE test.test ( id text PRIMARY KEY , value1 text , value2 
> text, value3 text);
> $ ccm node1 ring test
> Datacenter: datacenter1
> ==========
> Address    Rack   Status State   Load      Owns     Token
>                                                     3074457345618258602
> 127.0.0.1  rack1  Up     Normal  64.86 KB  100.00%  -9223372036854775808
> 127.0.0.2  rack1  Up     Normal  86.49 KB  100.00%  -3074457345618258603
> 127.0.0.3  rack1  Up     Normal  89.04 KB  100.00%  3074457345618258602
> $ ccm node1 cqlsh
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('aaa', 
> 'aaa', 'aaa' ,'aaa');
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('bbb', 
> 'bbb', 'bbb' ,'bbb');
> cqlsh> SELECT token(id),id,value1 FROM test.test;
>  system.token(id) | id  | value1
> --+-+
>  -4737872923231490581 | aaa |aaa
>  -3071845237020185195 | bbb |bbb
> (2 rows)
> cqlsh> CREATE MATERIALIZED VIEW test.test_view AS SELECT value1, id FROM 
> test.test WHERE id IS NOT NULL AND value1 IS NOT NULL AND TOKEN(id) > 
> -9223372036854775808 AND TOKEN(id) < -3074457345618258603 PRIMARY KEY(value1, 
> id) WITH CLUSTERING ORDER BY (id ASC);
> ServerError: java.lang.ClassCastException: 
> org.apache.cassandra.cql3.TokenRelation cannot be cast to 
> org.apache.cassandra.cql3.SingleColumnRelation
> {code}
> Stacktrace :
> {code:java}
> INFO  [MigrationStage:1] 2017-04-19 18:32:48,131 ColumnFamilyStore.java:389 - 
> Initializing test.test
> WARN  [SharedPool-Worker-1] 2017-04-19 18:44:07,263 FBUtilities.java:337 - 
> Trigger directory doesn't exist, please create it and try again.
> ERROR [SharedPool-Worker-1] 2017-04-19 18:46:10,072 QueryMessage.java:128 - 
> Unexpected error during query
> java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot 
> be cast to org.apache.cassandra.cql3.SingleColumnRelation
>   at 
> org.apache.cassandra.db.view.View.relationsToWhereClause(View.java:275) 
> ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.cql3.statements.CreateViewStatement.announceMigration(CreateViewStatement.java:219)
>  ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93)
>  ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) 
> ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222) 
> ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513)
>  [apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407)
>  [apache-cassandra-3.0.13.jar:3.0.13]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.44.Final.jar:4.0.44.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  

[jira] [Comment Edited] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-06-22 Thread Michael Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060262#comment-16060262
 ] 

Michael Fong edited comment on CASSANDRA-11748 at 6/23/17 1:06 AM:
---

Hi, [~mbyrd], 


Thanks for looking into this issue. 

If my memory serves me correctly, we observed that the number of schema 
migration request and response messages exchanged between two nodes is linearly 
related to:
1. the number of gossip messages a node sent to the other node that had not yet 
been responded to, since the other node was in the process of restarting;
2. the number of seconds for which the two nodes had been blocked for internal 
communication.

It is also true that we had *a lot* of tables - over a few hundred tables, 
secondary indices included - and that makes each round of schema migration more 
expensive. Our workaround was to add a throttle control on the number of schema 
migration tasks requested in the v2.0 source code, and that seemed to work just 
fine. This makes sense, as each schema migration task requested a full copy of 
the schema, as far as I remember. Hence, requesting migration 100+ times is 
likely inefficient per se.
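
A self-contained sketch of the kind of throttle described, i.e. per-endpoint 
rate limiting of schema migration task submission; this is an illustrative 
stand-in, not the actual v2.0 patch:

{code:title=Sketch - throttling schema migration task submission (illustrative)|language=java}
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MigrationThrottle
{
    // Minimum interval between migration requests to the same endpoint.
    private static final long MIN_INTERVAL_MS = 60_000;

    private final Map<InetAddress, Long> lastRequest = new ConcurrentHashMap<>();

    /**
     * Returns true if a migration task should be submitted to this endpoint,
     * i.e. no request was sent to it within the last MIN_INTERVAL_MS. This
     * collapses bursts of 100+ submissions into at most one per interval.
     */
    public boolean shouldSubmit(InetAddress endpoint)
    {
        long now = System.currentTimeMillis();
        Long prev = lastRequest.putIfAbsent(endpoint, now);
        if (prev == null)
            return true;
        // Atomically claim the slot so concurrent callers do not both submit.
        return now - prev >= MIN_INTERVAL_MS && lastRequest.replace(endpoint, prev, now);
    }
}
{code}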

Last but not least, the root cause of ending up with a different schema version 
is still unknown; that is, a node's schema version is A, but it reports B as 
its schema version after the C* instance restarts. This happens seemingly at 
random, and we are uncertain how to reproduce it. Our best guess is that it is 
related to:
1. Some variable used in calculating the schema hash being different - maybe a 
timestamp after restarting the C* instance.
2. Something at the file-system level, where a schema migration task did not 
successfully flush to disk before the process was killed.

Thanks.

Michael Fong


was (Author: mcfongtw):
Hi, [~mbyrd], 


Thanks for looking into this issue. 

If my memory serves me correctly, we observed that the number of schema 
migration request and response messages exchanged between two nodes is linearly 
related to:
1. the number of gossip messages a node sent to the other node that had not yet 
been responded to, since the other node was in the process of restarting;
2. the number of seconds for which the two nodes had been blocked for internal 
communication.

It is also true that we had *a lot* of tables - over 500 tables - and that makes 
each round of schema migration more expensive. Our workaround was to add a 
throttle control on the number of schema migration tasks requested in the v2.0 
source code, and that seemed to work just fine. This makes sense, as each schema 
migration task requested a full copy of the schema, as far as I remember. Hence, 
requesting migration 100+ times is likely inefficient per se.

Last but not least, the root cause of ending up with a different schema version 
is still unknown; that is, a node's schema version is A, but it reports B as 
its schema version after the C* instance restarts. This happens seemingly at 
random, and we are uncertain how to reproduce it. Our best guess is that it is 
related to:
1. Some variable used in calculating the schema hash being different - maybe a 
timestamp after restarting the C* instance.
2. Something at the file-system level, where a schema migration task did not 
successfully flush to disk before the process was killed.

Thanks.

Michael Fong

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed multiple times that a multi-node C* (v2.0.17) cluster ran 
> into OOM at bootstrap during a rolling upgrade process from 1.2.19 to 2.0.17. 
> Here is the outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema version 
> agreement - via nodetool describecluster.
> 2. Restart a Cassandra node.
> 3. After the restart, there is a chance that the restarted node has a 
> different schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log that occurred in one of our 2-node cluster 
> test bed:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After 

[jira] [Commented] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-06-22 Thread Michael Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060262#comment-16060262
 ] 

Michael Fong commented on CASSANDRA-11748:
--

Hi, [~mbyrd], 


Thanks for looking into this issue. 

If my memory serves me correctly, we observed that the number of schema 
migration request and response messages exchanged between two nodes is linearly 
related to:
1. the number of gossip messages a node sent to the other node that had not yet 
been responded to, since the other node was in the process of restarting;
2. the number of seconds for which the two nodes had been blocked for internal 
communication.

It is also true that we had *a lot* of tables - over 500 tables - and that makes 
each round of schema migration more expensive. Our workaround was to add a 
throttle control on the number of schema migration tasks requested in the v2.0 
source code, and that seemed to work just fine. This makes sense, as each schema 
migration task requested a full copy of the schema, as far as I remember. Hence, 
requesting migration 100+ times is likely inefficient per se.

Last but not least, the root cause of ending up with a different schema version 
is still unknown; that is, a node's schema version is A, but it reports B as 
its schema version after the C* instance restarts. This happens seemingly at 
random, and we are uncertain how to reproduce it. Our best guess is that it is 
related to:
1. Some variable used in calculating the schema hash being different - maybe a 
timestamp after restarting the C* instance.
2. Something at the file-system level, where a schema migration task did not 
successfully flush to disk before the process was killed.

Thanks.

Michael Fong

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed multiple times that a multi-node C* (v2.0.17) cluster ran 
> into OOM at bootstrap during a rolling upgrade process from 1.2.19 to 2.0.17. 
> Here is the outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema version 
> agreement - via nodetool describecluster.
> 2. Restart a Cassandra node.
> 3. After the restart, there is a chance that the restarted node has a 
> different schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log that occurred in one of our 2-node cluster 
> test bed:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After rebooting node 2, 
> Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
> Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b
> Node 2 keeps submitting the migration task 100+ times to the other node.
> INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
> /192.168.88.33 has restarted, now UP
> INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
> Updating topology for /192.168.88.33
> ...
> DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 
> 102) Submitting migration task for /192.168.88.33
> ... ( over 100+ times)
> --
> On the other hand, Node 1 keeps updating its gossip information, followed by 
> receiving and submitting migrationTask afterwards: 
> INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 
> 978) InetAddress /192.168.88.34 is now UP
> ...
> DEBUG [MigrationStage:1] 2016-04-19 11:18:18,496 
> MigrationRequestVerbHandler.java (line 41) Received migration request from 
> /192.168.88.34.
> …… ( over 100+ times)
> DEBUG [OptionalTasks:1] 2016-04-19 11:19:18,337 MigrationManager.java (line 
> 127) submitting migration task for /192.168.88.34
> .  (over 50+ times)
> As a side note, we have over 200 column families defined in the Cassandra 
> database, which may be related to this amount of RPC traffic.
> P.S.2 The over-requested schema migration tasks will eventually have 
> InternalResponseStage performing the schema merge operation. Since this 

[jira] [Updated] (CASSANDRA-13632) Digest mismatch if (non-static) column is null

2017-06-22 Thread Andrew Whang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Whang updated CASSANDRA-13632:
-
Fix Version/s: 3.0.14
   Status: Patch Available  (was: Open)

https://github.com/whangsf/cassandra/commit/0efb32db3eaf76229c53451f2fc3ab85693a76bd

> Digest mismatch if (non-static) column is null
> --
>
> Key: CASSANDRA-13632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Andrew Whang
> Fix For: 3.0.14
>
>
> This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
> column selector (non-wildcard) result in digest mismatch when the column 
> value is null. It seems the data serialization path checks if 
> rowIterator.isEmpty() and if so ignores column names (by setting IS_EMPTY 
> flag). However, the digest serialization path does not perform this check and 
> includes column names. The digest comparison results in a mismatch. The 
> mismatch does not end up issuing a read repair mutation since the underlying 
> data is the same.
> Our use case involves frequent deletion of partition columns, so the mismatch 
> on the read path ends up doubling our p99 read latency. We discovered this 
> issue while testing a 2.2.5 to 3.0.13 upgrade.
> One thing to note is that we're using thrift, which ends up handling the 
> ColumnFilter differently than the CQL path. 
> As with CASSANDRA-12090, fixing the digest seems sensible.






[jira] [Updated] (CASSANDRA-13632) Digest mismatch if (non-static) column is null

2017-06-22 Thread Andrew Whang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Whang updated CASSANDRA-13632:
-
Description: 
This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
column selector (non-wildcard) result in digest mismatch when the column value 
is null. It seems the data serialization path checks if rowIterator.isEmpty() 
and if so ignores column names (by setting IS_EMPTY flag). However, the digest 
serialization path does not perform this check and includes column names. The 
digest comparison results in a mismatch. The mismatch does not end up issuing a 
read repair mutation since the underlying data is the same.

Our use case involves frequent deletion of partition columns, so the mismatch 
on the read path ends up doubling our p99 read latency. We discovered this 
issue while testing a 2.2.5 to 3.0.13 upgrade.

One thing to note is that we're using thrift, which ends up handling the 
ColumnFilter differently than the CQL path. 

As with CASSANDRA-12090, fixing the digest seems sensible.

  was:
This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
column selector (non-wildcard) result in digest mismatch when the column value 
is null. It seems the data serialization path checks if rowIterator.isEmpty() 
and if so ignores column names (by setting IS_EMPTY flag). However, the digest 
serialization path does not perform this check and includes column names. The 
digest comparison results in a mismatch. The mismatch does not end up issuing a 
read repair mutation since the underlying data is the same.

Our use case involves frequent deletion of partition columns, so the mismatch 
on the read path ends up doubling our p99 read latency. We discovered this 
issue while testing a 2.2.5 to 3.0.13 upgrade.

As with CASSANDRA-12090, fixing the digest seems sensible.


> Digest mismatch if (non-static) column is null
> --
>
> Key: CASSANDRA-13632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13632
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Andrew Whang
>
> This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
> column selector (non-wildcard) result in digest mismatch when the column 
> value is null. It seems the data serialization path checks if 
> rowIterator.isEmpty() and if so ignores column names (by setting IS_EMPTY 
> flag). However, the digest serialization path does not perform this check and 
> includes column names. The digest comparison results in a mismatch. The 
> mismatch does not end up issuing a read repair mutation since the underlying 
> data is the same.
> Our use case involves frequent deletion of partition columns, so the mismatch 
> on the read path ends up doubling our p99 read latency. We discovered this 
> issue while testing a 2.2.5 to 3.0.13 upgrade.
> One thing to note is that we're using thrift, which ends up handling the 
> ColumnFilter differently than the CQL path. 
> As with CASSANDRA-12090, fixing the digest seems sensible.






[jira] [Created] (CASSANDRA-13632) Digest mismatch if (non-static) column is null

2017-06-22 Thread Andrew Whang (JIRA)
Andrew Whang created CASSANDRA-13632:


 Summary: Digest mismatch if (non-static) column is null
 Key: CASSANDRA-13632
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13632
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
Reporter: Andrew Whang


This issue is similar to CASSANDRA-12090. Quorum read queries that include a 
column selector (non-wildcard) result in digest mismatch when the column value 
is null. It seems the data serialization path checks if rowIterator.isEmpty() 
and if so ignores column names (by setting IS_EMPTY flag). However, the digest 
serialization path does not perform this check and includes column names. The 
digest comparison results in a mismatch. The mismatch does not end up issuing a 
read repair mutation since the underlying data is the same.

Our use case involves frequent deletion of partition columns, so the mismatch 
on the read path ends up doubling our p99 read latency. We discovered this 
issue while testing a 2.2.5 to 3.0.13 upgrade.

As with CASSANDRA-12090, fixing the digest seems sensible.
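
To illustrate the mismatch described above, here is a self-contained model of 
the two digest paths; the flag handling and names below are simplified 
stand-ins for the serialization code, not the actual Cassandra implementation:

{code:title=Model of the digest mismatch (illustrative)|language=java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class DigestMismatchModel
{
    // Data path, as described: when the row iterator is empty, column names
    // are ignored (the IS_EMPTY short-circuit).
    static byte[] dataDigest(boolean rowIteratorEmpty, List<String> columns)
            throws NoSuchAlgorithmException
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        if (!rowIteratorEmpty)
            for (String c : columns)
                md.update(c.getBytes(StandardCharsets.UTF_8));
        return md.digest();
    }

    // Digest path, as described: column names are always included.
    static byte[] digestPathDigest(List<String> columns) throws NoSuchAlgorithmException
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (String c : columns)
            md.update(c.getBytes(StandardCharsets.UTF_8));
        return md.digest();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException
    {
        // A null column value means an empty row iterator on the data path.
        List<String> selected = List.of("foo");
        boolean match = MessageDigest.isEqual(dataDigest(true, selected),
                                              digestPathDigest(selected));
        System.out.println("digests match: " + match); // false -> spurious mismatch
    }
}
{code}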






[jira] [Updated] (CASSANDRA-13358) AlterViewStatement.checkAccess can throw exceptions

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13358:
---
Fix Version/s: (was: 2.1.18)
   2.1.x

> AlterViewStatement.checkAccess can throw exceptions
> ---
>
> Key: CASSANDRA-13358
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13358
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Hao Zhong
>Assignee: Hao Zhong
> Fix For: 2.1.x
>
> Attachments: cassandra.patch
>
>
> The AlterViewStatement.checkAccess method has code lines as follows:
> {code:title=AlterViewStatement.java|borderStyle=solid}
>   if (baseTable != null)
> state.hasColumnFamilyAccess(keyspace(), baseTable.name, 
> Permission.ALTER);
> {code}
> These lines can throw InvalidRequestException. Indeed, 
> DropTableStatement.checkAccess has a similar problem, and was fixed in 
> CASSANDRA-6687. The fixed code is as follows:
> {code:title=DropTableStatement.java|borderStyle=solid}
>  try
> {
> state.hasColumnFamilyAccess(keyspace(), columnFamily(), 
> Permission.DROP);
> }
> catch (InvalidRequestException e)
> {
> if (!ifExists)
> throw e;
> }
> {code}
> Please fix the problem as CASSANDRA-6687 did.
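
A self-contained sketch of applying the CASSANDRA-6687 pattern here; since 
ALTER MATERIALIZED VIEW has no IF EXISTS form, the exception cannot simply be 
swallowed, so the sketch translates it instead (the simplified types are 
stand-ins, not Cassandra source):

{code:title=Sketch - guarding checkAccess (illustrative)|language=java}
public class CheckAccessGuard
{
    static class InvalidRequestException extends RuntimeException
    {
        InvalidRequestException(String msg) { super(msg); }
    }

    interface ClientState
    {
        // May throw InvalidRequestException, e.g. for an unknown keyspace/table.
        void hasColumnFamilyAccess(String keyspace, String table, String permission);
    }

    static void checkAccess(ClientState state, String keyspace, String baseTable)
    {
        if (baseTable == null)
            return; // keep the existing null check: defer to later validation

        try
        {
            state.hasColumnFamilyAccess(keyspace, baseTable, "ALTER");
        }
        catch (InvalidRequestException e)
        {
            // Translate rather than leak the raw exception out of checkAccess,
            // mirroring how CASSANDRA-6687 handled it in DropTableStatement.
            throw new InvalidRequestException(
                String.format("Cannot ALTER view on base table %s.%s: %s",
                              keyspace, baseTable, e.getMessage()));
        }
    }
}
{code}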






[jira] [Updated] (CASSANDRA-13593) Cassandra Service Crashes

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13593:
---
Fix Version/s: (was: 2.1.18)
   2.1.x

> Cassandra Service Crashes
> -
>
> Key: CASSANDRA-13593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13593
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Cassandra Migration from one rack to another
>Reporter: anuragh
> Fix For: 2.1.x
>
> Attachments: cassandra.log, system.log
>
>
> I'm migrating from one rack to another.
> Source: 3 nodes, each containing 175GB of data.
> Target: 3 nodes.
> I added one of the IP addresses as a seed node, so the ring contained the 
> existing 3 source nodes plus the newly added node.
> The moment the node was added to the ring, the Cassandra service crashed.






[jira] [Updated] (CASSANDRA-13538) Cassandra tasks permanently block after the following assertion occurs during compaction: "java.lang.AssertionError: Interval min > max "

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13538:
---
Fix Version/s: (was: 2.1.18)
   2.1.x

> Cassandra tasks permanently block after the following assertion occurs during 
> compaction: "java.lang.AssertionError: Interval min > max "
> -
>
> Key: CASSANDRA-13538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13538
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: This happens on a 7 node system with 2 data centers. 
> We're using Cassandra version 2.1.15. I upgraded to 2.1.17 and it still 
> occurs.
>Reporter: Andy Klages
> Fix For: 2.1.x
>
> Attachments: cassandra.yaml, jstack.out, schema.cql3, system.log, 
> tpstats.out
>
>
> We noticed this problem because the commitlogs proliferate to the point that 
> we eventually run out of disk space. nodetool tpstats shows several of the 
> tasks backed up:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 0  134335315 0
>  0
> ReadStage 0 0  643986790 0
>  0
> RequestResponseStage  0 0 114298 0
>  0
> ReadRepairStage   0 0 36 0
>  0
> CounterMutationStage  0 0  0 0
>  0
> MiscStage 0 0  0 0
>  0
> AntiEntropySessions   1 1  79357 0
>  0
> HintedHandoff 0 0 90 0
>  0
> GossipStage   0 06595098 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 01638369 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   1752922542 0
>  0
> ValidationExecutor0 01465374 0
>  0
> MigrationStage176600 0
>  0
> AntiEntropyStage  1   9238291098 0
>  0
> PendingRangeCalculator0 0 20 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   0 0  53017 0
>  0
> MemtablePostFlush 1  45841545141 0
>  0
> MemtableReclaimMemory 0 0  70639 0
>  0
> Native-Transport-Requests 0 0 352559 0
>  0
> {code}
> This all starts after the following exception is raised in Cassandra:
> {code}
> ERROR [MemtableFlushWriter:2437] 2017-05-15 01:53:23,380 
> CassandraDaemon.java:231 - Exception in thread 
> Thread[MemtableFlushWriter:2437,5,main]
> java.lang.AssertionError: Interval min > max
>   at 
> org.apache.cassandra.utils.IntervalTree$IntervalNode.(IntervalTree.java:249)
>  ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at org.apache.cassandra.utils.IntervalTree.(IntervalTree.java:72) 
> ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.DataTracker$SSTableIntervalTree.(DataTracker.java:603)
>  ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.DataTracker$SSTableIntervalTree.(DataTracker.java:597)
>  ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.DataTracker.buildIntervalTree(DataTracker.java:578) 
> ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.DataTracker$View.replaceFlushed(DataTracker.java:740) 
> ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:172) 
> ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:234)
>  ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1521)
>  ~[apache-cassandra-2.1.15.jar:2.1.15]
>   at 
> 

[jira] [Updated] (CASSANDRA-13479) Create schema file for keyspace as a part of snapshot

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13479:
---
Fix Version/s: (was: 3.0.14)
   3.0.x

> Create schema file for keyspace as a part of snapshot
> -
>
> Key: CASSANDRA-13479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13479
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Varun Gupta
>Priority: Minor
> Fix For: 3.0.x
>
>
> As of now, the snapshot process creates a schema.cql file per column family. 
> Restoring to a new cluster is not feasible if the keyspace schema is not 
> present. So, similar to the schema.cql file created for each column family, a 
> separate schema.cql file should be created for the keyspace too.
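
For illustration, the keyspace-level file would only need to carry the CREATE 
KEYSPACE statement; the keyspace name and replication settings below are 
placeholders:

{code:title=Example keyspace-level schema.cql contents (illustrative)|language=sql}
-- Alongside the per-table schema.cql files, a keyspace-level file holding the
-- CREATE KEYSPACE statement makes the snapshot restorable on a fresh cluster.
CREATE KEYSPACE IF NOT EXISTS my_keyspace
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    AND durable_writes = true;
{code}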






[jira] [Updated] (CASSANDRA-13574) mx4j default listening configuration comment is not correct

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13574:
---
Fix Version/s: (was: 3.0.14)
   3.0.x

> mx4j default listening configuration comment is not correct
> ---
>
> Key: CASSANDRA-13574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13574
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>  Labels: jmx_interface
> Fix For: 3.0.x
>
>
> {noformat}
> By default mx4j listens on 0.0.0.0:8081.
> {noformat}
> https://github.com/apache/cassandra/blob/cassandra-2.2/conf/cassandra-env.sh#L302
> It's actually set to the Cassandra broadcast_address, so it will never be 
> {{0.0.0.0}}:
> https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/utils/Mx4jTool.java#L79






[jira] [Updated] (CASSANDRA-13587) Deadlock during CommitLog replay when Cassandra restarts

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13587:
---
Fix Version/s: (was: 3.0.14)
   3.0.x

> Deadlock during CommitLog replay when Cassandra restarts
> 
>
> Key: CASSANDRA-13587
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13587
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
> Fix For: 3.0.x
>
> Attachments: 13587-3.0.txt, Reproduce_CASSANDRA-13587.txt
>
>
> Possible deadlock found when Cassandra is replaying commit log and at the 
> same time Mutation gets triggered by 
> SSTableReader(SystemKeyspace.persistSSTableReadMeter). As a result Cassandra 
> restart hangs forever
> Please find details of stack trace here:
> *Frame#1* This thread is trying to apply {{persistSSTableReadMeter}} mutation 
> and as a result it has called {{writeOrder.start()}} in {{Keyspace.java:533}}
> but there are no Commitlog Segments available because {{createReserveSegments 
> (CommitLogSegmentManager.java)}} is not yet {{true}} 
> Hence this thread is blocked on {{createReserveSegments}} to become {{true}}, 
> please note this thread has already started {{writeOrder}}
> {quote}
> "pool-11-thread-1" #251 prio=5 os_prio=0 tid=0x7fe128478400 nid=0x1b274 
> waiting on condition [0x7fe1389a]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManager.advanceAllocatingFrom(CommitLogSegmentManager.java:277)
> at 
> org.apache.cassandra.db.commitlog.CommitLogSegmentManager.allocate(CommitLogSegmentManager.java:196)
> at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:260)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:540)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:421)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:210)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:215)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:224)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:566)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:556)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:295)
> at 
> org.apache.cassandra.db.SystemKeyspace.persistSSTableReadMeter(SystemKeyspace.java:1181)
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$GlobalTidy$1.run(SSTableReader.java:2202)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
> *Frame#2* This thread is trying to recover commit logs and as a result it 
> tries to flush Memtable by calling following code:
> {{futures.add(Keyspace.open(SystemKeyspace.NAME).getColumnFamilyStore(SystemKeyspace.BATCHES).forceFlush());}}
> As a result Frame#3 (below) gets created
> {quote}
> "main" #1 prio=5 os_prio=0 tid=0x7fe1c64ec400 nid=0x1af29 waiting on 
> condition [0x7fe1c94a1000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> parking to wait for  <0x0006370da0c0> (a 
> com.google.common.util.concurrent.ListenableFutureTask)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:383)
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.blockForWrites(CommitLogReplayer.java:207)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:182)
>

[jira] [Commented] (CASSANDRA-13621) I am seeing an out-of-memory issue in Cassandra

2017-06-22 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060186#comment-16060186
 ] 

Michael Shuler commented on CASSANDRA-13621:


You listed 3.0.14 as the fix version (which is set to a specific version on 
patch commits), but that version has not even been released as of this writing. 
What version of Cassandra was this from? Can you provide any additional details 
about how you arrived at this state, for instance commands that were run, or 
changes to or errors on other servers, etc.?

Without much to go on to reproduce your error, this will likely get closed as 
unreproducible. Thanks!

> I am seeing an out-of-memory issue in Cassandra
> ---
>
> Key: CASSANDRA-13621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13621
> Project: Cassandra
>  Issue Type: Bug
> Environment: Centos 6.7,
>Reporter: Nanda Kishore Tokala
> Fix For: 3.0.x
>
>
> {noformat}
> ERROR [SharedPool-Worker-11] 2017-06-14 17:39:21,929  Message.java:617 - 
> Unexpected exception during request; channel = [id: 0x4c3897de, ]
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_121]
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
> ~[na:1.8.0_121]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
> ~[na:1.8.0_121]
> at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:229) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:213) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.buffer.PoolArena.allocate(PoolArena.java:133) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105)
>  ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1470) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at 
> io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1480) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:528) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:507) 
> ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
> {noformat}
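> A possible stopgap, assuming the defaults are in use, is to raise the direct 
> memory available to Netty's pooled buffers in conf/jvm.options (this is a 
> standard HotSpot flag; the value is illustrative only):
> {noformat}
> -XX:MaxDirectMemorySize=2G
> {noformat}
> Off-heap buffer exhaustion on the SSL path can also indicate a buffer leak, in 
> which case a higher cap only delays the error rather than removing it.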



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13621) I am seeing an out of memory issue in Cassandra

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13621:
---
Fix Version/s: (was: 3.0.14)
   3.0.x
  Description: 
{noformat}
ERROR [SharedPool-Worker-11] 2017-06-14 17:39:21,929  Message.java:617 - 
Unexpected exception during request; channel = [id: 0x4c3897de, ]
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_121]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[na:1.8.0_121]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
~[na:1.8.0_121]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:229) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:213) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:133) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1470) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1480) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:528) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:507) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
{noformat}

  was:
ERROR [SharedPool-Worker-11] 2017-06-14 17:39:21,929  Message.java:617 - 
Unexpected exception during request; channel = [id: 0x4c3897de, ]
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_121]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
~[na:1.8.0_121]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) 
~[na:1.8.0_121]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:229) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:213) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:133) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105)
 ~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1470) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at 
io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1480) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:528) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:507) 
~[netty-all-4.0.34.Final.jar:4.0.34.Final]


> I am seeing an out of memory issue in Cassandra
> ---
>
> Key: CASSANDRA-13621
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13621
> Project: Cassandra
>  Issue Type: Bug
> Environment: Centos 6.7,
>Reporter: Nanda Kishore Tokala
> Fix For: 3.0.x
>
>
> {noformat}
> ERROR [SharedPool-Worker-11] 2017-06-14 17:39:21,929  Message.java:617 - 
> Unexpected exception during request; channel = [id: 0x4c3897de, ]
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_121]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.8.0_121]
> at 

[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060108#comment-16060108
 ] 

Paulo Motta commented on CASSANDRA-10130:
-

bq.  The squashed and rebased patch is in this branch.

The squashed patch LGTM, you should just need to format the commit message to 
include the reviewers ([format 
here|https://wiki.apache.org/cassandra/HowToContribute#Committing]).

bq. The notice for CHANGES.txt is "Fix management of secondary indexes 
(re)builds (CASSANDRA-10130)", which is quite generic and distant from the 
title of this ticket. Suggestions are welcome.

Perhaps "Improve secondary index (re)build failure and concurrency handling"? 

bq. I ran the final patch on our internal CI. There are no failures for the 
unit tests, and the failing dtests are not related to the change.

Great job, ship it! (pending [~sbtourist] final +1) :-)

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.
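> Until such tracking exists, a manual workaround after an unclean restart is 
> to rebuild the affected indexes by hand; a sketch, with placeholder names:
> {noformat}
> nodetool rebuild_index <keyspace> <table> <index_name>
> {noformat}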



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16060038#comment-16060038
 ] 

Andrés de la Peña commented on CASSANDRA-10130:
---

bq. I'm not sure if you already prepared for commit (CHANGES, squash+rebase) 
but it would probably be wise to do that before submitting another CI round to 
make sure we catch any potential problems on CI after rebase.

Wise idea, [~pauloricardomg]. I hadn't prepared it for commit, but now I have. 
The squashed and rebased patch is in [this 
branch|https://github.com/adelapena/cassandra/commits/10130-trunk-squash].

The notice for CHANGES.txt is "Fix management of secondary indexes (re)builds 
(CASSANDRA-10130)", which is quite generic and distant from the title of this 
ticket. Suggestions are welcome.

I ran the final patch on our internal CI. There are no failures for the unit 
tests, and the failing dtests are not related to the change.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-06-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13441:

Component/s: (was: Schema)
 Distributed Metadata

> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.
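> For reference, the schema disagreement itself is easy to observe during such 
> an upgrade; a diagnostic sketch (not part of the fix):
> {code}
> SELECT schema_version FROM system.local;
> {code}
> {{nodetool describecluster}} shows the same information cluster-wide, grouping 
> nodes by the schema version they are on.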



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13384) Legacy caching options can prevent 3.0 upgrade

2017-06-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13384:

Component/s: (was: Schema)
 Distributed Metadata

> Legacy caching options can prevent 3.0 upgrade
> --
>
> Key: CASSANDRA-13384
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13384
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
>Priority: Minor
> Fix For: 3.0.13, 3.11.0
>
>
> In 2.1, we wrote caching options as a JSONified map, but we tolerated raw 
> strings ["ALL", "KEYS_ONLY", "ROWS_ONLY", and 
> "NONE"|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/cache/CachingOptions.java#L42].
> If a 2.1 node with any of these strings is upgraded to 3.0, the legacy schema 
> migration will fail.
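> For illustration, the two shapes in question (table name hypothetical); the 
> 3.0 migration only understands the map form:
> {code}
> -- legacy raw string, tolerated by 2.1:
> ALTER TABLE ks.t WITH caching = 'KEYS_ONLY';
> -- the JSONified map form that 3.0 expects:
> ALTER TABLE ks.t WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'};
> {code}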



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13479) Create schema file for keyspace as a part of snapshot

2017-06-22 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-13479:

Component/s: (was: Schema)
 Distributed Metadata

> Create schema file for keyspace as a part of snapshot
> -
>
> Key: CASSANDRA-13479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13479
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Varun Gupta
>Priority: Minor
> Fix For: 3.0.14
>
>
> As of now, the snapshot process creates a schema.cql file per column family. 
> Restoring to a new cluster is not feasible if the keyspace schema is not 
> present. So, similar to the schema.cql file created for each column family, a 
> separate schema.cql file should be created for the keyspace too.
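> For illustration, the kind of keyspace-level statement such a file would need 
> to capture, so that a restore can recreate the keyspace before its tables 
> (names and options hypothetical):
> {code}
> CREATE KEYSPACE ks WITH replication = 
>     {'class': 'NetworkTopologyStrategy', 'dc1': '3'} 
>   AND durable_writes = true;
> {code}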



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13334) Cassandra fails to start with LOCAL_JMX set to false

2017-06-22 Thread Andres March (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059900#comment-16059900
 ] 

Andres March commented on CASSANDRA-13334:
--

I totally understand. This ticket is just asking for an error message
instead of a silent failure.




> Cassandra fails to start with LOCAL_JMX set to false
> 
>
> Key: CASSANDRA-13334
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13334
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: cassandra 3.0.9
> ubuntu 16.04
> ec2
>Reporter: Andres March
> Fix For: 3.11.x
>
>
> There seem to be some recent changes, via CASSANDRA-11540 and 
> CASSANDRA-11725, that alter how the remote JMX setup works.
> Just setting LOCAL_JMX=false causes Cassandra to fail to start on Ubuntu in 
> EC2. Worse yet, the init.d script says it has started OK, and there are no 
> logs whatsoever about an error.
> This is reproducible 100% of the time. Changing various settings like the 
> port, the JMX remote hostname, etc. didn't change anything. I would attach 
> some logs, but there aren't any.
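> For context, a rough sketch of what flipping LOCAL_JMX does (paraphrased, not 
> the exact cassandra-env.sh contents): it switches from a local-only JMX port 
> to the standard remote JMX properties, which after the tickets above default 
> to requiring authentication, e.g.
> {noformat}
> -Dcom.sun.management.jmxremote.port=7199
> -Dcom.sun.management.jmxremote.authenticate=true
> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password
> {noformat}
> If the password file is missing or has the wrong permissions, the JVM exits 
> before logging is initialized, which would match the silent failure described 
> here.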



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12837) Add multi-threaded support to nodetool rebuild_index

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-12837:
---
Fix Version/s: (was: 3.11.0)
   3.11.x

> Add multi-threaded support to nodetool rebuild_index
> 
>
> Key: CASSANDRA-12837
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12837
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: vincent royer
>Priority: Minor
>  Labels: patch, performance
> Fix For: 3.11.x, 4.x
>
> Attachments: 0001-CASSANDRA-12837-multi-threaded-rebuild_index.patch, 
> CASSANDRA-12837-2.2.9.txt
>
>
> Add multi-threaded support to nodetool rebuild_index to improve performance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10968) When taking snapshot, manifest.json contains incorrect or no files when column family has secondary indexes

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-10968:
---
Fix Version/s: (was: 3.11.0)
   (was: 2.1.12)
   (was: 2.2.4)
   3.11.x
   3.0.x
   2.2.x

> When taking snapshot, manifest.json contains incorrect or no files when 
> column family has secondary indexes
> ---
>
> Key: CASSANDRA-10968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Fred A
>Assignee: Aleksandr Sorokoumov
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>
> Noticed indeterminate behaviour when taking snapshots on column families that 
> have secondary indexes set up. The manifest.json created when doing a 
> snapshot sometimes contains no file names at all and sometimes only some file 
> names. 
> I don't know if this post is related but that was the only thing I could find:
> http://www.mail-archive.com/user%40cassandra.apache.org/msg42019.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13334) Cassandra fails to start with LOCAL_JMX set to false

2017-06-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13334:
---
Fix Version/s: (was: 3.11.0)
   3.11.x

> Cassandra fails to start with LOCAL_JMX set to false
> 
>
> Key: CASSANDRA-13334
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13334
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
> Environment: cassandra 3.0.9
> ubuntu 16.04
> ec2
>Reporter: Andres March
> Fix For: 3.11.x
>
>
> There seem to be some recent changes, via CASSANDRA-11540 and 
> CASSANDRA-11725, that alter how the remote JMX setup works.
> Just setting LOCAL_JMX=false causes Cassandra to fail to start on Ubuntu in 
> EC2. Worse yet, the init.d script says it has started OK, and there are no 
> logs whatsoever about an error.
> This is reproducible 100% of the time. Changing various settings like the 
> port, the JMX remote hostname, etc. didn't change anything. I would attach 
> some logs, but there aren't any.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059763#comment-16059763
 ] 

Paulo Motta commented on CASSANDRA-10130:
-

LGTM after CI is happy. Great job!

bq. The last execution doesn't contain the last commit fixing 
SecondaryIndexManagerTest#rebuildWithFailure. The new job execution should take 
a little bit more than two hours.

I'm not sure if you already prepared for commit (CHANGES, squash+rebase) but it 
would probably be wise to do that before submitting another CI round to make 
sure we catch any potential problems on CI after rebase.

Thanks!

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059753#comment-16059753
 ] 

Andrés de la Peña commented on CASSANDRA-10130:
---

Sure:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

The last execution doesn't contain the last commit fixing 
{{SecondaryIndexManagerTest#rebuildWithFailure}}. The new job execution should 
take a little bit more than two hours.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread Sergio Bossa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Bossa updated CASSANDRA-10130:
-
Component/s: (was: Materialized Views)

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059725#comment-16059725
 ] 

Sergio Bossa commented on CASSANDRA-10130:
--

Excellent, can we have another utests/dtests run before the final +1?

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since the MV/2i update happens after SSTables are received, a node failure 
> during the MV/2i update can leave the received SSTables live on restart while 
> the MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13300) Upgrade the jna version to 4.3.0

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13300:
-
Component/s: (was: Materialized Views)

> Upgrade the jna version to 4.3.0
> 
>
> Key: CASSANDRA-13300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13300
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Amitkumar Ghatwal
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> Could you please upgrade the JNA version present in the GitHub Cassandra
> location (https://github.com/apache/cassandra/blob/trunk/lib/jna-4.0.0.jar)
> to the latest version, 4.3.0:
> http://repo1.maven.org/maven2/net/java/dev/jna/jna/4.3.0/jna-4.3.0-javadoc.jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12373:
-
Component/s: (was: Materialized Views)

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9259) Bulk Reading from Cassandra

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9259:

Component/s: (was: Materialized Views)

> Bulk Reading from Cassandra
> ---
>
> Key: CASSANDRA-9259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9259
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, CQL, Local Write-Read Paths, Streaming and 
> Messaging, Testing
>Reporter:  Brian Hess
>Assignee: Stefania
>Priority: Critical
> Fix For: 4.x
>
> Attachments: 256_vnodes.jpg, before_after.jpg, 
> bulk-read-benchmark.1.html, bulk-read-jfr-profiles.1.tar.gz, 
> bulk-read-jfr-profiles.2.tar.gz, no_vnodes.jpg, spark_benchmark_raw_data.zip
>
>
> This ticket is following on from the 2015 NGCC.  This ticket is designed to 
> be a place for discussing and designing an approach to bulk reading.
> The goal is to have a bulk reading path for Cassandra.  That is, a path 
> optimized to grab a large portion of the data for a table (potentially all of 
> it).  This is a core element in the Spark integration with Cassandra, and the 
> speed at which Cassandra can deliver bulk data to Spark is limiting the 
> performance of Spark-plus-Cassandra operations.  This is especially of 
> importance as Cassandra will (likely) leverage Spark for internal operations 
> (for example CASSANDRA-8234).
> The core CQL to consider is the following:
> SELECT a, b, c FROM myKs.myTable WHERE Token(partitionKey) > X AND 
> Token(partitionKey) <= Y
> Here, we choose X and Y to be contained within one token range (perhaps 
> considering the primary range of a node without vnodes, for example).  This 
> query pushes 50K-100K rows/sec, which is not very fast if we are doing bulk 
> operations via Spark (or other processing frameworks - ETL, etc).  There are 
> a few causes (e.g., inefficient paging).
> There are a few approaches that could be considered.  First, we consider a 
> new "Streaming Compaction" approach.  The key observation here is that a bulk 
> read from Cassandra is a lot like a major compaction, though instead of 
> outputting a new SSTable we would output CQL rows to a stream/socket/etc.  
> This would be similar to a CompactionTask, but would strip out some 
> unnecessary things in there (e.g., some of the indexing, etc). Predicates and 
> projections could also be encapsulated in this new "StreamingCompactionTask", 
> for example.
> Another approach would be an alternate storage format.  For example, we might 
> employ Parquet (just as an example) to store the same data as in the primary 
> Cassandra storage (aka SSTables).  This is akin to Global Indexes (an 
> alternate storage of the same data optimized for a particular query).  Then, 
> Cassandra can choose to leverage this alternate storage for particular CQL 
> queries (e.g., range scans).
> These are just 2 suggestions to get the conversation going.
> One thing to note is that it will be useful to have this storage segregated 
> by token range so that when you extract via these mechanisms you do not get 
> replication-factor copies of the data.  That will certainly be an 
> issue for some Spark operations (e.g., counting).  Thus, we will want 
> per-token-range storage (even for single disks), so this will likely leverage 
> CASSANDRA-6696 (though, we'll want to also consider the single disk case).
> It is also worth discussing what the success criteria is here.  It is 
> unlikely to be as fast as EDW or HDFS performance (though, that is still a 
> good goal), but being within some percentage of that performance should be 
> set as success.  For example, 2x as long as doing bulk operations on HDFS 
> with similar node count/size/etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-2474) CQL support for compound columns and wide rows

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-2474:

Component/s: (was: Materialized Views)

> CQL support for compound columns and wide rows
> --
>
> Key: CASSANDRA-2474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2474
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Eric Evans
>Priority: Critical
>  Labels: cql
> Attachments: 0001-Add-support-for-wide-and-composite-CFs.patch, 
> 0002-thrift-generated-code.patch, 2474-transposed-1.PNG, 
> 2474-transposed-raw.PNG, 2474-transposed-select-no-sparse.PNG, 
> 2474-transposed-select.PNG, ASF.LICENSE.NOT.GRANTED--screenshot-1.jpg, 
> ASF.LICENSE.NOT.GRANTED--screenshot-2.jpg, cql_tests.py, raw_composite.txt
>
>
> For the most part, this boils down to supporting the specification of 
> compound column names (the CQL syntax is colon-delimited terms), and then 
> teaching the decoders (drivers) to create structures from the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10661) Integrate SASI to Cassandra

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10661:
-
Component/s: (was: Materialized Views)

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it currently 
> targets the 2.0 release. I want to make this an umbrella issue for all of the 
> things related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline 
> Cassandra 3.x release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13403) nodetool repair breaks SASI index

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13403:
-
Component/s: (was: Materialized Views)

> nodetool repair breaks SASI index
> -
>
> Key: CASSANDRA-13403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13403
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: 3.10
>Reporter: Igor Novgorodov
>Assignee: Alex Petrov
>
> I've got a table:
> {code}
> CREATE TABLE cservice.bulks_recipients (
> recipient text,
> bulk_id uuid,
> datetime_final timestamp,
> datetime_sent timestamp,
> request_id uuid,
> status int,
> PRIMARY KEY (recipient, bulk_id)
> ) WITH CLUSTERING ORDER BY (bulk_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients 
> (bulk_id) USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}
> There are 11 rows in it:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> Let's query by index (all rows have the same *bulk_id*):
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;   
> >   
> ...
> (11 rows)
> {code}
> Ok, everything is fine.
> Now I'm doing *nodetool repair --partitioner-range --job-threads 4 --full* on 
> each node in the cluster sequentially.
> After it finished:
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;
> ...
> (2 rows)
> {code}
> Only two rows.
> While the rows are actually there:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> If I issue an incremental repair on a random node, I can get, say, 7 rows 
> from the index query.
> Dropping the index and recreating it fixes the issue. Is this a bug, or am I 
> doing the repair the wrong way?
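> For completeness, the recreate step described above, taken from the schema in 
> this report:
> {code}
> DROP INDEX cservice.bulk_recipients_bulk_id;
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients 
> (bulk_id) USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}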



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10707) Add support for Group By to Select statement

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10707:
-
Component/s: (was: Materialized Views)

> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.10
>
>
> Now that Cassandra supports aggregate functions, it makes sense to support 
> {{GROUP BY}} in {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-8844) Change Data Capture (CDC)

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8844:

Component/s: (was: Materialized Views)

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.8
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]) propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging (see the CQL sketch after this list).
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non JVM languages
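> As a sketch of what the per-table switch could look like in CQL (CDC 
> ultimately shipped in 3.8 as a plain table option; the table definition is 
> hypothetical):
> {code}
> CREATE TABLE ks.events (id uuid PRIMARY KEY, payload text) WITH cdc = true;
> {code}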
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred to a subsequent release, to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> processing
> -- Be able to continuously "tail" the most recent logfile and get 
> low-latency(ms?) access to the data as it is written.
> h2. Alternate approach
> In order to make consuming a change log easy and efficient to do with low 
> latency, the following could supplement the approach outlined above
> - Instead of writing to a logfile, by default, Cassandra could expose a 
> socket for a daemon to connect to, and from 

[jira] [Updated] (CASSANDRA-11383) Avoid index segment stitching in RAM which lead to OOM on big SSTable files

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11383:
-
Component/s: (was: Materialized Views)

> Avoid index segment stitching in RAM which lead to OOM on big SSTable files 
> 
>
> Key: CASSANDRA-11383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.4
>Reporter: DOAN DuyHai
>Assignee: Jordan West
>  Labels: sasi
> Fix For: 3.5
>
> Attachments: CASSANDRA-11383.patch, new_system_log_CMS_8GB_OOM.log, 
> SASI_Index_build_LCS_1G_Max_SSTable_Size_logs.tar.gz, 
> system.log_sasi_build_oom
>
>
> 13 bare metal machines
> - 6 cores CPU (12 HT)
> - 64Gb RAM
> - 4 SSD in RAID0
>  JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
>  - ≈ 100Gb/per node
>  - 1.3 Tb cluster-wide
>  - ≈ 20Gb for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices
>  - 8 indices with text field, NonTokenizingAnalyser,  PREFIX mode, 
> case-insensitive
>  - 1 index with numeric field, SPARSE mode
>  After a while, the nodes just went OOM.
>  I attach log files. You can see a lot of GC happening while index segments 
> are flushed to disk. At some point the nodes OOM ...
> /cc [~xedin]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12281) Gossip blocks on startup when there are pending range movements

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12281:
-
Component/s: (was: Materialized Views)

> Gossip blocks on startup when there are pending range movements
> ---
>
> Key: CASSANDRA-12281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12281
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Evans
>Assignee: Stefan Podkowinski
> Fix For: 2.2.9, 3.0.11, 3.10
>
> Attachments: 12281-2.2.patch, 12281-3.0.patch, 12281-3.X.patch, 
> 12281-trunk.patch, restbase1015-a_jstack.txt
>
>
> In our cluster, normal node startup times (after a drain on shutdown) are 
> less than 1 minute.  However, when another node in the cluster is 
> bootstrapping, the same node startup takes nearly 30 minutes to complete, the 
> apparent result of gossip blocking on pending range calculations.
> {noformat}
> $ nodetool-a tpstats
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 0   1840 0
>  0
> ReadStage 0 0   2350 0
>  0
> RequestResponseStage  0 0 53 0
>  0
> ReadRepairStage   0 0  1 0
>  0
> CounterMutationStage  0 0  0 0
>  0
> HintedHandoff 0 0 44 0
>  0
> MiscStage 0 0  0 0
>  0
> CompactionExecutor3 3395 0
>  0
> MemtableReclaimMemory 0 0 30 0
>  0
> PendingRangeCalculator1 2 29 0
>  0
> GossipStage   1  5602164 0
>  0
> MigrationStage0 0  0 0
>  0
> MemtablePostFlush 0 0111 0
>  0
> ValidationExecutor0 0  0 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   0 0 30 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {noformat}
> A full thread dump is attached, but the relevant bit seems to be here:
> {noformat}
> [ ... ]
> "GossipStage:1" #1801 daemon prio=5 os_prio=0 tid=0x7fe4cd54b000 
> nid=0xea9 waiting on condition [0x7fddcf883000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0004c1e922c0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:174)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:160)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2023)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1682)
>   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1182)
>   at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1165)
>   at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1128)

[jira] [Updated] (CASSANDRA-13162) Batchlog replay is throttled during bootstrap, creating conditions for incorrect query results on materialized views

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13162:
-
Component/s: Materialized Views

> Batchlog replay is throttled during bootstrap, creating conditions for 
> incorrect query results on materialized views
> 
>
> Key: CASSANDRA-13162
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13162
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Wei Deng
>Priority: Critical
>  Labels: bootstrap, materializedviews
>
> I've tested this in a C* 3.0 cluster with a couple of Materialized Views 
> defined (one base table and two MVs on that base table). The data volume is 
> not very high per node (about 80GB of data per node total, and that 
> particular base table has about 25GB of data uncompressed with one MV taking 
> 18GB compressed and the other MV taking 3GB), and the cluster is using decent 
> hardware (EC2 C4.8XL with 18 cores + 60GB RAM + 18K IOPS RAID0 from two 3TB 
> gp2 EBS volumes). 
> This is originally a 9-node cluster. It appears that after adding 3 more 
> nodes to the DC, the system.batches table accumulated a lot of data on the 3 
> new nodes (each having around 20GB under system.batches directory), and in 
> the subsequent week the batchlog on the 3 new nodes got slowly replayed back 
> to the rest of the nodes in the cluster. The bottleneck seems to be the 
> throttling defined in this cassandra.yaml setting: 
> batchlog_replay_throttle_in_kb, which by default is set to 1MB/s.
> Given that it is taking almost a week (and still hasn't finished) for the 
> batchlog (from MV) to be replayed after the boostrap finishes, it seems only 
> reasonable to unthrottle (or at least give it a much higher throttle rate) 
> during the initial bootstrap, and hence I'd consider this a bug for our 
> current MV implementation.
> Also, as far as I understand, the bootstrap logic won't wait for the 
> backlogged batchlog to be fully replayed before changing the new 
> bootstrapping node to the "UN" state, and if the batchlog for the MVs gets 
> stuck in this state for a long time, we will basically get wrong answers from 
> the MVs for that whole duration (until the batchlog is fully replayed to the 
> cluster), which makes this bug even more critical.
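> The setting in question, as it appears in cassandra.yaml (the default shown 
> is the 1MB/s mentioned above; raising it during bootstrap is the mitigation 
> proposed here):
> {noformat}
> batchlog_replay_throttle_in_kb: 1024
> {noformat}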



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11475) MV code refactor

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11475:
-
Component/s: Materialized Views

> MV code refactor
> 
>
> Key: CASSANDRA-11475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11475
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.7, 3.7
>
>
> While working on CASSANDRA-5546 I ran into a problem with TTLs on MVs, which 
> on closer inspection is a bug in the MV code. But one thing leading to 
> another, I reviewed a good portion of the MV code and found the following 
> correctness problems:
> * If a base row is TTLed, then even if an update removes that TTL, the view 
> entry remains TTLed and expires, leading to an inconsistency.
> * Due to calling the wrong ctor for {{LivenessInfo}}, when a TTL was set on 
> the base table, the view entry was living twice as long as the TTL. Again 
> leading to a temporary inconsistency.
> * When reading existing data to compute view updates, all deletion 
> informations are completely ignored (the code uses a {{PartitionIterator}} 
> instead of an {{UnfilteredPartitionIterator}}). This is a serious issue since 
> it means some deletions could be totally ignored as far as views are 
> concerned especially when messages are delivered to a replica out of order. 
> I'll note that while the 2 previous points are relatively easy to fix, I 
> didn't find an easy and clean way to fix this one in the current code.
> Further, I think the MV code in general has inefficiencies/code complexities 
> that should be avoidable:
> * {{TemporalRow.Set}} is buffering both everything read and a pretty much 
> complete copy of the updates. That's a potentially high memory requirement. 
> We shouldn't have to copy the updates and we shouldn't buffer all reads but 
> rather work incrementally.
> * {{TemporalRow}}/{{TemporalRow.Set}}/{{TemporalCell}} classes are somewhat 
> re-inventing the wheel. They are really just storing both an update we're 
> doing and the corresponding existing data, but we already have 
> {{Row}}/{{Partition}}/{{Cell}} for that. In practice, those {{Temporal*}} 
> class generates a lot of allocations that we could avoid.
> * The code from CASSANDRA-10060 to avoid multiple reads of the base table 
> with multiple views doesn't work when the update has partition/range 
> tombstones because the code uses {{TemporalRow.Set.setTombstonedExisting()}} 
> to trigger reuse, but the {{TemporalRow.Set.withNewViewPrimaryKey()}} method 
> is used between views and it does not preserve the {{hasTombstonedExisting}} 
> flag.  But that oversight, which is trivial to fix, is kind of a good thing, 
> since if you fix it, you're left with a correctness problem.
>   The read done when there is a partition deletion depends on the view itself 
> (if there is clustering filters in particular) and so reusing that read for 
> other views is wrong. Which makes that whole reuse code really dodgy imo: the 
> read for existing data is in {{View.java}}, suggesting that it depends on the 
> view (which again, it does at least for partition deletion), but it shouldn't 
> if we're going to reuse the result across multiple views.
> * Even ignoring the previous point, we still potentially read the base table 
> twice if the update mix both row updates and partition/range deletions, 
> potentially re-reading the same values.
> * It's probably more minor, but the reading code is using {{QueryPager}}, 
> which is probably an artifact of the initial version of the code being 
> pre-8099; it's not necessary anymore (the reads are local, and locally 
> we're completely iterator based) and it adds complexity, especially when we 
> do page. I'll note that despite using paging, the current code still buffers 
> everything in {{TemporalRow.Set}} anyway.
> Overall, I suspect trying to fix the problems above (particularly the fact 
> that existing deletion infos are ignored) is only going to add complexity 
> with the current code and we'd still have to fix the inefficiencies. So I 
> propose a refactor of that code which does 2 main things:
> # it removes all of {{TemporalRow}} and related classes. Instead, it directly 
> uses the existing {{Row}} (with all its deletion infos) and the update being 
> applied to it and compute the updates for the view from that. I submit that 
> this is more clear/simple, but this also avoid copying every cell of both the 
> existing and update data as a {{TemporalCell}}. We can also reuse codes like 
> {{Rows.merge}} and {{Rows.diff}} to make the handling of deletions relatively 
> painless.
> # instead of dealing with each view one at a time, re-iterating over all 
> updates each time, it iterates over each 

[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13405:
-
Component/s: Materialized Views

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13, 3.11.0
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compactions are running between these 
> restarts that can cause the view builder to skip data, since the builder 
> tracks the max sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts; it only 
> needs to be tracked during a builder's life (to avoid adding in newly 
> compacted data).  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10361) DropTableStatement does not throw an error if the table is a view

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10361:
-
Component/s: Materialized Views

> DropTableStatement does not throw an error if the table is a view
> ---
>
> Key: CASSANDRA-10361
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10361
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Materialized Views
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.0.0 rc1
>
> Attachments: 10361-3.0.txt, 10361-3.0-V2.txt
>
>
> While testing cqlsh, I discovered that {{DROP TABLE <view>;}} was not 
> rejected and that it was corrupting the MV.
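> A minimal sketch of the missing validation (hypothetical names, not the 
> attached patch): before dropping, check whether the target is a materialized 
> view and reject the statement instead of corrupting the view.
> {code:java}
> import java.util.Set;
> 
> public class DropTableValidationSketch
> {
>     // stand-in for schema metadata
>     static final Set<String> VIEWS = Set.of("alltimehigh");
> 
>     static void validateDropTable(String name)
>     {
>         if (VIEWS.contains(name))
>             throw new IllegalArgumentException(
>                 "Cannot use DROP TABLE on a materialized view; use DROP MATERIALIZED VIEW");
>     }
> 
>     public static void main(String[] args)
>     {
>         try
>         {
>             validateDropTable("alltimehigh");
>         }
>         catch (IllegalArgumentException e)
>         {
>             System.out.println("rejected: " + e.getMessage()); // instead of corruption
>         }
>     }
> }
> {code}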



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10061) Only use batchlog when paired view replica is remote

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10061:
-
Component/s: Materialized Views

> Only use batchlog when paired view replica is remote
> 
>
> Key: CASSANDRA-10061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10061
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0 beta 2
>
>
> As described in the MV design doc the base and view replicas are paired one 
> to one.
> If the replica selected for the view is the local node itself, there is no 
> need to create a local batchlog and we can simply apply the view mutations 
> locally.
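> A sketch of the decision (hypothetical, not the committed change): compare 
> the paired view replica against the local address and only go through the 
> batchlog when the pair is remote.
> {code:java}
> import java.net.InetAddress;
> import java.net.UnknownHostException;
> 
> public class PairedReplicaSketch
> {
>     public static void main(String[] args) throws UnknownHostException
>     {
>         InetAddress local = InetAddress.getByName("127.0.0.1");
>         InetAddress pairedViewReplica = InetAddress.getByName("127.0.0.1");
> 
>         if (pairedViewReplica.equals(local))
>             System.out.println("apply view mutation locally, no batchlog needed");
>         else
>             System.out.println("write local batchlog, then send to " + pairedViewReplica);
>     }
> }
> {code}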



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10609) MV performance regression

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10609:
-
Component/s: Materialized Views

> MV performance regression
> -
>
> Key: CASSANDRA-10609
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10609
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
> Environment: EC2
>Reporter: Alan Boudreault
>Assignee: Tyler Hobbs
>Priority: Critical
>
> I've noticed a significant MV performance regression introduced in 3.0.0-rc1. 
> The regression was introduced by CASSANDRA-9664.
> * I'm using mvbench to test with RF=3
> * I confirm it's not a driver issue.
> {code}
> EC2 RF=3 (i2.2xlarge, also tried on i2.4xlarge)
> mvn exec:java -Dexec.args="--num-users 10 --num-songs 100 
> --num-artists 1 -n 50 --endpoint node1"
> 3.0.0-beta2 (alpha2 java driver)
> ---
> total
>  count = 461601
>  mean rate = 1923.21 calls/second
>  1-minute rate = 1937.82 calls/second
>  5-minute rate = 1424.09 calls/second
> 15-minute rate = 1058.28 calls/second
>min = 1.90 milliseconds
>max = 3707.76 milliseconds
>   mean = 516.42 milliseconds
> stddev = 457.41 milliseconds
> median = 390.07 milliseconds
>   75% <= 775.95 milliseconds
>   95% <= 1417.67 milliseconds
>   98% <= 1728.05 milliseconds
>   99% <= 1954.55 milliseconds
> 99.9% <= 2566.91 milliseconds
> 3.0.0-rc1 (alpha3 java driver)
> -
> total
>  count = 310373
>  mean rate = 272.25 calls/second
>  1-minute rate = 0.00 calls/second
>  5-minute rate = 45.47 calls/second
> 15-minute rate = 295.94 calls/second
>min = 1.05 milliseconds
>max = 10468.98 milliseconds
>   mean = 492.99 milliseconds
> stddev = 510.42 milliseconds
> median = 281.02 milliseconds
>   75% <= 696.25 milliseconds
>   95% <= 1434.45 milliseconds
>   98% <= 1820.33 milliseconds
>   99% <= 2080.37 milliseconds
> 99.9% <= 4362.08 milliseconds
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12952) AlterTableStatement propagates base table and affected MV changes inconsistently

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12952:
-
Component/s: Materialized Views

> AlterTableStatement propagates base table and affected MV changes 
> inconsistently
> 
>
> Key: CASSANDRA-12952
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12952
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Materialized Views
>Reporter: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> In {{AlterTableStatement}}, when renaming columns or changing their types, we 
> also keep track of all affected MVs - ones that also need column renames or 
> type changes. Then in the end we announce the migration for the table change, 
> and afterwards, separately, one for each affected MV.
> This creates a window in which view definitions and base table definition are 
> not in sync with each other. If a node fails in between receiving those 
> pushes, it's likely to have startup issues.
> The fix is trivial: the table change and the affected MV changes should be 
> pushed as a single schema mutation.
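> A sketch of the fix under those assumptions (hypothetical types; the real 
> code announces schema migrations, not strings):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> 
> public class AtomicSchemaChangeSketch
> {
>     record SchemaChange(String target, String change) {}
> 
>     static void announce(List<SchemaChange> changes)
>     {
>         // one push: no window in which base table and views disagree
>         System.out.println("pushing " + changes.size() + " changes as one schema mutation");
>     }
> 
>     public static void main(String[] args)
>     {
>         List<SchemaChange> batch = new ArrayList<>();
>         batch.add(new SchemaChange("base table", "rename column manager -> mgr"));
>         batch.add(new SchemaChange("affected MV", "rename column manager -> mgr"));
>         announce(batch); // instead of one announcement per change
>     }
> }
> {code}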



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10060) Reuse TemporalRow when updating multiple MaterializedViews

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10060:
-
Component/s: Materialized Views

> Reuse TemporalRow when updating multiple MaterializedViews
> --
>
> Key: CASSANDRA-10060
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10060
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0 beta 2
>
>
> If a table has 5 associated MVs, the current logic reads the existing row for 
> the incoming mutation 5 times.
> If we reuse the data from the first MV update we can cut out any further 
> reads.
> We know the existing data isn't changing because we are holding a lock on the 
> partition.
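> A sketch of the reuse (hypothetical, plain Java): read the existing row once 
> under the partition lock and hand the same snapshot to every view's update 
> logic.
> {code:java}
> import java.util.List;
> import java.util.Map;
> 
> public class ReuseExistingRowSketch
> {
>     static Map<String, String> readExistingRow()
>     {
>         System.out.println("reading existing row (once)");
>         return Map.of("score", "2000");
>     }
> 
>     public static void main(String[] args)
>     {
>         List<String> views = List.of("mv1", "mv2", "mv3", "mv4", "mv5");
> 
>         // One read instead of views.size() reads; safe because the caller
>         // holds the partition lock, so the existing data cannot change.
>         Map<String, String> existingRow = readExistingRow();
> 
>         for (String view : views)
>             System.out.println("updating " + view + " from " + existingRow);
>     }
> }
> {code}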



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9826) Expose tuning parameters from CREATE TABLE to MV

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9826:

Component/s: Materialized Views

> Expose tuning parameters from CREATE TABLE to MV
> 
>
> Key: CASSANDRA-9826
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9826
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Materialized Views
>Reporter: Jonathan Ellis
>Assignee: Carl Yeksigian
>  Labels: materializedviews
>
> We should expose tuning parameters like compaction strategy to MV creation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9917) MVs should validate gc grace seconds on the tables involved

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9917:

Component/s: Materialized Views

> MVs should validate gc grace seconds on the tables involved
> ---
>
> Key: CASSANDRA-9917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9917
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Aleksey Yeschenko
>Assignee: Paulo Motta
>  Labels: docs-impacting, materializedviews
> Fix For: 3.0 beta 2
>
>
> For correctness reasons (potential resurrection of dropped values), batchlog 
> entries are TTL'd with the lowest gc grace seconds of all the tables involved 
> in a batch.
> This means that if gc gs is set to 0 in one of the tables, the batchlog entry 
> will be dead on arrival, and never replayed.
> We should probably warn against such LOGGED writes taking place in general, 
> but for MVs we must validate that gc gs on the base table (and on the MV 
> table, if we should allow altering gc gs there at all) is never set too low, 
> or else MV updates relying on batchlog replay can be silently lost.
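> A sketch of the proposed validation (hypothetical names; the real TTL 
> computation lives in the batchlog code):
> {code:java}
> import java.util.List;
> 
> public class GcGraceValidationSketch
> {
>     public static void main(String[] args)
>     {
>         // batchlog entries are TTL'd with the lowest gc_grace_seconds involved
>         List<Integer> gcGraceSeconds = List.of(864000, 0); // base table, MV
>         int batchlogTtl = gcGraceSeconds.stream().min(Integer::compare).orElseThrow();
> 
>         if (batchlogTtl == 0)
>             System.out.println("reject: a gc_grace_seconds of 0 makes the "
>                              + "batchlog entry dead on arrival");
>         else
>             System.out.println("batchlog TTL = " + batchlogTtl + "s");
>     }
> }
> {code}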



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13339) java.nio.BufferOverflowException: null

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13339:
-
Component/s: Materialized Views

> java.nio.BufferOverflowException: null
> --
>
> Key: CASSANDRA-13339
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13339
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Materialized Views
>Reporter: Chris Richards
>
> I'm seeing the following exception running Cassandra 3.9 (with Netty updated 
> to 4.1.8.Final) on a 2-node cluster.  It would have been processing 
> around 50 queries/second at the time (a mixture of 
> inserts/updates/selects/deletes): there's a collection of tables (some with 
> counters, some without) and a single materialized view.
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> and then again shortly afterwards
> ERROR [MutationStage-3] 2017-03-15 23:27:36,198 StorageProxy.java:1353 - 
> Failed to apply mutation locally : {}
> java.nio.BufferOverflowException: null
>   at 
> org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380)
>  ~[apache-cassandra-3.9.jar:3.9]
>   at 
> 

[jira] [Updated] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12734:
-
Component/s: Materialized Views

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Tools
>Reporter: Hau Phan
>Assignee: Alex Petrov
> Fix For: 3.0.9
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.  
> Can the materialized view be created and added to the corresponding table's  
> schema file?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11069:
-
Component/s: Materialized Views

> Materialised views require all collections to be selected.
> --
>
> Key: CASSANDRA-11069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11069
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Vassil Lunchev
>Assignee: Carl Yeksigian
>  Labels: materializedviews
> Fix For: 3.0.4, 3.4
>
>
> Running Cassandra 3.0.2
> Using the official example from: 
> http://www.datastax.com/dev/blog/new-in-cassandra-3-0-materialized-views
> The only difference is that I have added a map column to the base table.
> {code}
> CREATE TABLE scores
> (
>   user TEXT,
>   game TEXT,
>   year INT,
>   month INT,
>   day INT,
>   score INT,
>   a_map map<int, text>,
>   PRIMARY KEY (user, game, year, month, day)
> );
> CREATE MATERIALIZED VIEW alltimehigh AS
>SELECT user FROM scores
>WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL AND 
> year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
>PRIMARY KEY (game, score, user, year, month, day)
>WITH CLUSTERING ORDER BY (score desc);
> INSERT INTO scores (user, game, year, month, day, score) VALUES ('pcmanus', 
> 'Coup', 2015, 06, 02, 2000);
> SELECT * FROM scores;
> SELECT * FROM alltimehigh;
> {code}
> All of the above works perfectly fine until you insert a row where the 
> 'a_map' column is not null.
> {code}
> INSERT INTO scores (user, game, year, month, day, score, a_map) VALUES 
> ('pcmanus_2', 'Coup', 2015, 06, 02, 2000, {1: 'text'});
> {code}
> This results in:
> {code}
> Traceback (most recent call last):
>   File "/Users/vassil/apache-cassandra-3.0.2/bin/cqlsh.py", line 1258, in 
> perform_simple_statement
> result = future.result()
>   File 
> "/Users/vassil/apache-cassandra-3.0.2/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> WriteFailure: code=1500 [Replica(s) failed to execute write] 
> message="Operation failed - received 0 responses and 1 failures" 
> info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 
> 'consistency': 'ONE'}
> {code}
> Selecting the base table and the materialised view is also interesting:
> {code}
> SELECT * FROM scores;
> SELECT * FROM alltimehigh;
> {code}
> The result is:
> {code}
> cqlsh:tests> SELECT * FROM scores;
>  user| game | year | month | day | a_map | score
> -+--+--+---+-+---+---
>  pcmanus | Coup | 2015 | 6 |   2 |  null |  2000
> (1 rows)
> cqlsh:tests> SELECT * FROM alltimehigh;
>  game | score | user  | year | month | day
> --+---+---+--+---+-
>  Coup |  2000 |   pcmanus | 2015 | 6 |   2
>  Coup |  2000 | pcmanus_2 | 2015 | 6 |   2
> (2 rows)
> {code}
> In the logs you can see:
> {code:java}
> ERROR [SharedPool-Worker-2] 2016-01-26 03:25:27,456 Keyspace.java:484 - 
> Unknown exception caught while attempting to update MaterializedView! 
> tests.scores
> java.lang.IllegalStateException: [ColumnDefinition{name=a_map, 
> type=org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type),
>  kind=REGULAR, position=-1}] is not a subset of []
>   at 
> org.apache.cassandra.db.Columns$Serializer.encodeBitmap(Columns.java:531) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.Columns$Serializer.serializedSubsetSize(Columns.java:483)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:275)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:247)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:234)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:227)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serializedSize(UnfilteredRowIteratorSerializer.java:169)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:683)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:354)
>  

[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12700:
-
Component/s: Materialized Views

> During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes 
> Connection get lost, because of Server NullPointerException
> --
>
> Key: CASSANDRA-12700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12700
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Materialized Views
> Environment: Cassandra cluster with two nodes running C* version 
> 3.7.0 and Python Driver 3.7 using Python 2.7.11. 
> OS: Red Hat Enterprise Linux 6.x x64, 
> RAM :8GB
> DISK :210GB
> Cores: 2
> Java 1.8.0_73 JRE
>Reporter: Rajesh Radhakrishnan
>Assignee: Jeff Jirsa
> Fix For: 2.2.9, 3.0.10, 3.10
>
>
> In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) 
> with Python driver 3.7. Trying to insert 2 million rows or more into the 
> database, we sometimes get a "NullPointerException".
> We are using Python 2.7.11 and Java 1.8.0_73 on the Cassandra nodes, and on 
> the client it's Python 2.7.12.
> {code:title=cassandra server log}
> ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xc208da86, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:227) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_73]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e2eae00, 
> L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> 

[jira] [Updated] (CASSANDRA-12097) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12097:
-
Component/s: Materialized Views

> dtest failure in 
> materialized_views_test.TestMaterializedViews.view_tombstone_test
> --
>
> Key: CASSANDRA-12097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12097
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Testing
>Reporter: Sean McCarthy
>Assignee: Carl Yeksigian
>  Labels: dtest
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/271/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test
> Failed on CassCI build trunk_offheap_dtest #271
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 754, in view_tombstone_test
> [1, 1, 'b', 3.0]
>   File "/home/automaton/cassandra-dtest/assertions.py", line 51, in assert_one
> assert list_res == [expected], "Expected %s from %s, but got %s" % 
> ([expected], query, list_res)
> "Expected [[1, 1, 'b', 3.0]] from SELECT * FROM t_by_v WHERE v = 1, but got 
> [[1, 1, u'b', None]]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11602) Materialized View Does Not Have Static Columns

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11602:
-
Component/s: Materialized Views

> Materialized View Does Not Have Static Columns
> --
>
> Key: CASSANDRA-11602
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11602
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Ravishankar Rajendran
>Assignee: Carl Yeksigian
> Fix For: 3.0.6, 3.6
>
>
> {quote}
> CREATE TABLE "team" (teamname text, manager text, location text static, 
> PRIMARY KEY ((teamname), manager));
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull1', 
> 'Ricciardo11', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo12', 'Australian');
> INSERT INTO team (teamname, manager, location) VALUES ('Red Bull2', 
> 'Ricciardo13', 'Australian');
> select * From team;
> CREATE MATERIALIZED VIEW IF NOT EXISTS "teamMV" AS SELECT "teamname", 
> "manager", "location" FROM "team" WHERE "teamname" IS NOT NULL AND "manager" 
> is NOT NULL AND "location" is NOT NULL PRIMARY KEY("manager", "teamname");  
> select * from "teamMV";
> {quote}
> The teamMV does not have the "location" column. Static columns are not 
> getting created in the MV.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10614) AssertionError while flushing memtables

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10614:
-
Component/s: Materialized Views

> AssertionError while flushing memtables
> ---
>
> Key: CASSANDRA-10614
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10614
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: debug.log, system.log
>
>
> While running mvbench against a single local node, I noticed this stacktrace 
> showing up multiple times in the logs:
> {noformat}
> ERROR 16:40:01 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:116) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ColumnIndex.writeAndBuildIndex(ColumnIndex.java:49) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:149)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.append(SimpleSSTableMultiWriter.java:45)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableTxnWriter.append(SSTableTxnWriter.java:52)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:389)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:352) 
> ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)
>  ~[guava-18.0.jar:na]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1037)
>  ~[main/:na]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_45]
> {noformat}
> To reproduce, run mvbench with the regular schema and the following arguments:
> {noformat}
> mvn exec:java -Dexec.args="--num-users 10 --num-songs 100 
> --num-artists 1 -n 50 --endpoint 127.0.0.1"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10661) Integrate SASI to Cassandra

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10661:
-
Component/s: Materialized Views

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x, since it currently 
> targets the 2.0 release. I want to make this an umbrella issue for all of the 
> things related to the integration of SASI into the mainline Cassandra 3.x 
> release, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10348) cqlsh will not display results from Materialized Views

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10348:
-
Component/s: Materialized Views

> cqlsh will not display results from Materialized Views
> --
>
> Key: CASSANDRA-10348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10348
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Tools
>Reporter: Adam Holmberg
>Assignee: Stefania
> Fix For: 3.0.0 rc1
>
>
> When displaying results of a query, cqlsh attempts to get table metadata from 
> the driver. If it is not there, the query fails with "Column family %r not 
> found".
> Following CASSANDRA-9921 views are no longer represented as normal tables.
> cqlsh will need to be updated to also return materialized view metadata when 
> selecting from a view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9961) cqlsh should have DESCRIBE MATERIALIZED VIEW

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9961:

Component/s: Materialized Views

> cqlsh should have DESCRIBE MATERIALIZED VIEW
> 
>
> Key: CASSANDRA-9961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Materialized Views, Tools
>Reporter: Carl Yeksigian
>Assignee: Stefania
>  Labels: client-impacting, doc-impacting, materializedviews
> Fix For: 3.0.0 rc1
>
>
> cqlsh doesn't currently produce describe output that can be used to recreate 
> a MV. We need to add a new {{DESCRIBE MATERIALIZED VIEW}} command, and also 
> add views to the {{DESCRIBE KEYSPACE}} output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11891) WriteTimeout during commit log replay due to MV lock

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11891:
-
Component/s: Materialized Views

> WriteTimeout during commit log replay due to MV lock
> 
>
> Key: CASSANDRA-11891
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11891
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Local Write-Read Paths, Materialized Views
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Critical
> Fix For: 3.0.7, 3.7
>
>
> During commit log replay, if there are materialized views, it's possible for 
> contention on the MV lock to cause a {{WriteTimeoutException}}.  This makes 
> commit log replay fail, which of course prevents the node from starting up.  
> This generally means that the operator has to move the commitlog segments to 
> avoid replay.
> Here's a stacktrace of this happening on 3.0.5:
> {noformat}
> ERROR [main] 2016-05-25 15:10:31,120 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:50) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:372) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:624)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:511)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:406)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:153)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:189) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:169) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.5.jar:3.0.5]
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.get(AbstractLocalAwareExecutorService.java:200)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:365) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   ... 9 common frames omitted
>   Suppressed: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   ... 11 common frames omitted
>   Caused by: org.apache.cassandra.exceptions.WriteTimeoutException: 
> Operation timed out - received only 0 responses.
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:431)
>   at 
> org.apache.cassandra.db.Keyspace.lambda$apply$62(Keyspace.java:443)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should ignore the {{write_rpc_timeout}} setting while acquiring MV locks 
> if we're on the commitlog replay path.
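> A sketch of that idea (hypothetical, using a plain {{ReentrantLock}} rather 
> than Cassandra's view locks): bound the wait by the timeout for normal 
> writes, but wait indefinitely on the replay path.
> {code:java}
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.locks.Lock;
> import java.util.concurrent.locks.ReentrantLock;
> 
> public class ReplayLockSketch
> {
>     static final Lock MV_LOCK = new ReentrantLock();
> 
>     static void acquire(boolean isCommitlogReplay, long writeRpcTimeoutMs)
>         throws InterruptedException
>     {
>         if (isCommitlogReplay)
>             MV_LOCK.lock(); // replay must not fail on lock contention
>         else if (!MV_LOCK.tryLock(writeRpcTimeoutMs, TimeUnit.MILLISECONDS))
>             throw new RuntimeException("WriteTimeout: could not acquire MV lock");
>     }
> 
>     public static void main(String[] args) throws InterruptedException
>     {
>         acquire(true, 2000);  // commitlog replay: blocks until available
>         MV_LOCK.unlock();
>         acquire(false, 2000); // normal write: bounded wait
>         MV_LOCK.unlock();
>     }
> }
> {code}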



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12373:
-
Component/s: Materialized Views

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Materialized Views
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-7190) Add schema to snapshot manifest

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-7190:

Component/s: Materialized Views

> Add schema to snapshot manifest
> ---
>
> Key: CASSANDRA-7190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Materialized Views, Tools
>Reporter: Jonathan Ellis
>Assignee: Alex Petrov
>Priority: Minor
>  Labels: client-impacting, doc-impacting, lhf
> Fix For: 3.0.9, 3.10
>
>
> followup from CASSANDRA-6326



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11465:
-
Component/s: Materialized Views

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Observability
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.8
>
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> It is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10424) Altering base table column with materialized view causes unexpected server error.

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10424:
-
Component/s: Materialized Views

> Altering base table column with materialized view causes unexpected server 
> error.
> -
>
> Key: CASSANDRA-10424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10424
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
> Environment: cassandra-3.0.0-rc1, with python driver 3.0-alpha
>Reporter: Greg Bestland
>Assignee: Carl Yeksigian
> Fix For: 3.0.0 rc2
>
>
> When attempting to alter the column type of a base table which has a 
> corresponding materialized view, we get an exception from the server.
> Steps to reproduce.
> 1. Create a base table
> {code}
> CREATE TABLE test.scores(
> user TEXT,
> game TEXT,
> year INT,
> month INT,
> day INT,
> score TEXT,
> PRIMARY KEY (user, game, year, month, day)
> )
> {code}
> 2. Create a corresponding materialized view
> {code}
> CREATE MATERIALIZED VIEW test.monthlyhigh AS
> SELECT game, year, month, score, user, day FROM test.scores
> WHERE game IS NOT NULL AND year IS NOT NULL AND month IS NOT 
> NULL AND score IS NOT NULL AND user IS NOT NULL AND day IS NOT NULL
> PRIMARY KEY ((game, year, month), score, user, day)
> WITH CLUSTERING ORDER BY (score DESC, user ASC, day ASC)
> {code}
> 3. Attempt to Alter the base table 
> {code}
> ALTER TABLE test.scores ALTER score TYPE blob
> {code}
> In the python driver we see the following exception returned from the server
> {code}
> Ignoring schedule_unique for already-scheduled task: (<bound method 
> ControlConnection.refresh_schema of <cassandra.cluster.ControlConnection 
> object at 0x100f72c50>>, (), (('keyspace', 'test'), ('target_type', 
> 'KEYSPACE'), ('change_type', 'UPDATED')))
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "./cassandra/cluster.py", line 1623, in execute
> result = future.result()
>   File "./cassandra/cluster.py", line 3205, in result
> raise self._final_exception
> cassandra.protocol.ServerError: <Error from server: code=0000 [Server error] 
> message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.ConfigurationException: Column family 
> comparators do not match or are not compatible (found ClusteringComparator; 
> expected ClusteringComparator).">
> {code}
> On the server I see the following stack trace
> {code}
> INFO  [MigrationStage:1] 2015-09-30 11:45:47,457 ColumnFamilyStore.java:825 - 
> Enqueuing flush of keyspaces: 512 (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:11] 2015-09-30 11:45:47,457 Memtable.java:362 - 
> Writing Memtable-keyspaces@1714565887(0.146KiB serialized bytes, 1 ops, 0%/0% 
> of on/off-heap limit)
> INFO  [MemtableFlushWriter:11] 2015-09-30 11:45:47,463 Memtable.java:395 - 
> Completed flushing 
> /Users/gregbestland/.ccm/tests/node1/data/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/ma-54-big-Data.db
>  (0.109KiB) for commitlog position ReplayPosition(segmentId=1443623481894, 
> position=9812)
> INFO  [MigrationStage:1] 2015-09-30 11:45:47,472 ColumnFamilyStore.java:825 - 
> Enqueuing flush of columns: 877 (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:12] 2015-09-30 11:45:47,472 Memtable.java:362 - 
> Writing Memtable-columns@771367282(0.182KiB serialized bytes, 1 ops, 0%/0% of 
> on/off-heap limit)
> INFO  [MemtableFlushWriter:12] 2015-09-30 11:45:47,478 Memtable.java:395 - 
> Completed flushing 
> /Users/gregbestland/.ccm/tests/node1/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/ma-51-big-Data.db
>  (0.107KiB) for commitlog position ReplayPosition(segmentId=1443623481894, 
> position=9812)
> INFO  [MigrationStage:1] 2015-09-30 11:45:47,490 ColumnFamilyStore.java:825 - 
> Enqueuing flush of views: 2641 (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:11] 2015-09-30 11:45:47,490 Memtable.java:362 - 
> Writing Memtable-views@1740452585(0.834KiB serialized bytes, 1 ops, 0%/0% of 
> on/off-heap limit)
> INFO  [MemtableFlushWriter:11] 2015-09-30 11:45:47,496 Memtable.java:395 - 
> Completed flushing 
> /Users/gregbestland/.ccm/tests/node1/data/system_schema/views-9786ac1cdd583201a7cdad556410c985/ma-22-big-Data.db
>  (0.542KiB) for commitlog position ReplayPosition(segmentId=1443623481894, 
> position=9812)
> ERROR [MigrationStage:1] 2015-09-30 11:45:47,507 CassandraDaemon.java:195 - 
> Exception in thread Thread[MigrationStage:1,5,main]
> org.apache.cassandra.exceptions.ConfigurationException: Column family 
> 

[jira] [Updated] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10130:
-
Component/s: Materialized Views

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.
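> One possible shape for such a mechanism (purely a sketch; the marker location 
> and names are assumptions): write a marker before the post-streaming 2i/MV 
> update and remove it on success, so a leftover marker at startup signals an 
> incomplete update.
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> 
> public class RebuildMarkerSketch
> {
>     static final Path MARKER = Path.of("2i_rebuild_pending"); // assumed location
> 
>     public static void main(String[] args) throws IOException
>     {
>         Files.deleteIfExists(MARKER);
>         Files.createFile(MARKER); // written before the 2i/MV update starts
>         // ... suppose the node dies here, before the update completes
>         // and deletes the marker ...
> 
>         // startup check: a leftover marker means the update never finished
>         if (Files.exists(MARKER))
>             System.out.println("incomplete 2i/MV update detected: rebuild (or warn)");
>         Files.deleteIfExists(MARKER);
>     }
> }
> {code}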



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12888:
-
Component/s: Materialized Views

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Benjamin Roth
>  Labels: repair
> Fix For: 3.0.x, 3.11.x
>
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on a per-mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which are moved to the repaired set. The next repair run 
> will stream the same data back again, causing rows to bounce on and on 
> between nodes on each repair.
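> A sketch of the receive-side branching that loses the state (hypothetical 
> names; the real logic is in {{StreamReceiveTask.OnCompletionRunnable}}):
> {code:java}
> public class StreamReceiveSketch
> {
>     record StreamedSSTable(long repairedAt) {}
> 
>     static void onStreamComplete(StreamedSSTable sstable, boolean hasViews, boolean cdcEnabled)
>     {
>         if (hasViews || cdcEnabled)
>             // replay as normal mutations: views and CDC are maintained, but
>             // the rewritten sstables carry repairedAt = 0 (unrepaired)
>             System.out.println("mutation path, repairedAt lost (was "
>                                + sstable.repairedAt() + ")");
>         else
>             System.out.println("add sstable directly, repairedAt = "
>                                + sstable.repairedAt());
>     }
> 
>     public static void main(String[] args)
>     {
>         onStreamComplete(new StreamedSSTable(1488823000L), true, false);
>         onStreamComplete(new StreamedSSTable(1488823000L), false, false);
>     }
> }
> {code}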
> See linked dtest on steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12281) Gossip blocks on startup when there are pending range movements

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12281:
-
Component/s: Materialized Views

> Gossip blocks on startup when there are pending range movements
> ---
>
> Key: CASSANDRA-12281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12281
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Materialized Views
>Reporter: Eric Evans
>Assignee: Stefan Podkowinski
> Fix For: 2.2.9, 3.0.11, 3.10
>
> Attachments: 12281-2.2.patch, 12281-3.0.patch, 12281-3.X.patch, 
> 12281-trunk.patch, restbase1015-a_jstack.txt
>
>
> In our cluster, normal node startup times (after a drain on shutdown) are 
> less than 1 minute.  However, when another node in the cluster is 
> bootstrapping, the same node startup takes nearly 30 minutes to complete, the 
> apparent result of gossip blocking on pending range calculations.
> {noformat}
> $ nodetool-a tpstats
> Pool Name                    Active   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 0   1840 0
>  0
> ReadStage 0 0   2350 0
>  0
> RequestResponseStage  0 0 53 0
>  0
> ReadRepairStage   0 0  1 0
>  0
> CounterMutationStage  0 0  0 0
>  0
> HintedHandoff 0 0 44 0
>  0
> MiscStage 0 0  0 0
>  0
> CompactionExecutor3 3395 0
>  0
> MemtableReclaimMemory 0 0 30 0
>  0
> PendingRangeCalculator1 2 29 0
>  0
> GossipStage   1  5602164 0
>  0
> MigrationStage0 0  0 0
>  0
> MemtablePostFlush 0 0111 0
>  0
> ValidationExecutor0 0  0 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   0 0 30 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {noformat}
> A full thread dump is attached, but the relevant bit seems to be here:
> {noformat}
> [ ... ]
> "GossipStage:1" #1801 daemon prio=5 os_prio=0 tid=0x7fe4cd54b000 
> nid=0xea9 waiting on condition [0x7fddcf883000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0004c1e922c0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:174)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:160)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2023)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1682)
>   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1182)
>   at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1165)
>   at 
> 

[jira] [Updated] (CASSANDRA-1704) CQL reads (aka SELECT)

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-1704:

Component/s: Materialized Views

> CQL reads (aka SELECT)
> --
>
> Key: CASSANDRA-1704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1704
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL, Materialized Views
>Affects Versions: 0.8 beta 1
>Reporter: Eric Evans
>Assignee: Eric Evans
>Priority: Minor
> Fix For: 0.8 beta 1
>
> Attachments: 
> ASF.LICENSE.NOT.GRANTED--v3-0001-CASSANDRA-1704.-doc-update-for-proposed-SELECT.txt,
>  ASF.LICENSE.NOT.GRANTED--v3-0002-refactor-CQL-SELECT-to-be-more-SQLish.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0003-make-avro-exception-factory-methods-public.txt,
>  
> ASF.LICENSE.NOT.GRANTED--v3-0004-wrap-AvroRemoteExceptions-in-CQLExcpetions.txt,
>  ASF.LICENSE.NOT.GRANTED--v3-0005-backfill-missing-system-tests.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0006-add-support-for-index-scans.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0007-support-empty-unset-where-clause.txt, 
> ASF.LICENSE.NOT.GRANTED--v3-0008-SELECT-COUNT-.-FROM-support.txt
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> Data access specification and implementation for CQL.  
> This corresponds to the following RPC methods:
> * get()
> * get_slice()
> * get_count()
> * multiget_slice()
> * multiget_count()
> * get_range_slices()
> * get_indexed_slices()
> The initial check-in to trunk/ uses a syntax that looks like:
> {code:SQL}
> SELECT (FROM)? <CF> [USING CONSISTENCY.<LVL>] WHERE <EXPRESSION> [ROWLIMIT X] 
> [COLLIMIT Y] [ASC|DESC]
> {code}
> Where:
> * <CF> is the column family name.
> * <EXPRESSION> consists of relations chained by the AND keyword.
> * <LVL> corresponds to one of the enum values in the RPC interface(s).
> What is still undone:
> * Support for indexes
> * Counts
> * Complete test coverage
> And of course, all of this is still very much open to further discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9778) CQL support for time series aggregation

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9778:

Component/s: Materialized Views

> CQL support for time series aggregation
> ---
>
> Key: CASSANDRA-9778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9778
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Materialized Views
>Reporter: Jonathan Ellis
>
> Along with MV (CASSANDRA-6477), time series aggregation or "rollups" is a 
> common design pattern in Cassandra applications.  I'd like to add CQL support 
> for this along these lines:
> {code}
> CREATE MATERIALIZED VIEW stocks_by_hour AS
> SELECT exchange, day, day_time(1h) AS hour, symbol, avg(price), sum(volume)
> FROM stocks
> GROUP BY exchange, day, symbol, hour
> PRIMARY KEY  ((exchange, day), hour, symbol);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10485) Missing host ID on hinted handoff write

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10485:
-
Component/s: Materialized Views

> Missing host ID on hinted handoff write
> ---
>
> Key: CASSANDRA-10485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10485
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 2.1.12, 2.2.4, 3.0.1, 3.1
>
>
> When I restart one of them, I receive the error "Missing host ID":
> {noformat}
> WARN  [SharedPool-Worker-1] 2015-10-08 13:15:33,882 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: Missing host ID for 63.251.156.141
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:978)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:950)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2235)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}
> If I run nodetool status, the problematic node has this ID:
> {noformat}
> UN  10.10.10.12  1.3 TB 1   ?   
> 4d5c8fd2-a909-4f09-a23c-4cd6040f338a  rack3
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10016) Materialized view metrics pushes out tpstats formatting

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10016:
-
Component/s: Materialized Views

> Materialized view metrics pushes out tpstats formatting 
> 
>
> Key: CASSANDRA-10016
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10016
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Tools
>Reporter: Sam Tunnicliffe
>Assignee: Paulo Motta
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> {noformat}
> Pool Name                       Active   Pending  Completed   Blocked  All time blocked
> ReadStage                            0         0          3         0                 0
> MutationStage                        0         0          1         0                 0
> CounterMutationStage                 0         0          0         0                 0
> BatchlogMutationStage                0         0          0         0                 0
> MaterializedViewMutationStage        0         0          0         0                 0
> GossipStage                          0         0          0         0                 0
> RequestResponseStage                 0         0          0         0                 0
> AntiEntropyStage                     0         0          0         0                 0
> MigrationStage                       0         0          3         0                 0
> MiscStage                            0         0          0         0                 0
> InternalResponseStage                0         0          0         0                 0
> ReadRepairStage                      0         0          0         0                 0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> BATCHLOG_MUTATION0
> MUTATION 0
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> MATERIALIZED_VIEW_MUTATION 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10929) cql_tests.py:AbortedQueriesTester fails

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10929:
-
Component/s: Materialized Views

> cql_tests.py:AbortedQueriesTester fails
> ---
>
> Key: CASSANDRA-10929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10929
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL, Materialized Views
>Reporter: Philip Thompson
> Attachments: node1_debug.log, node1.log, node2_debug.log, node2.log
>
>
> All four tests in the {{cql_tests.AbortedQueriesTester}} dtest suite are 
> failing on HEAD of cassandra-3.0, here is an example link from cassci:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/455/testReport/cql_tests/AbortedQueriesTester/remote_query_test/
> The tests set {{'read_request_timeout_in_ms': 1000}} and 
> {{"-Dcassandra.test.read_iteration_delay_ms=1500"}}, then issue read queries 
> and expect them to time out. However, the queries are succeeding. I can 
> reproduce this locally. 
> Looking at remote_query_test, from the logs, it appears that the query is 
> being sent from the driver to node1, which forwards it to node2 
> appropriately. I've tried also setting {{range_request_timeout_in_ms}} lower, 
> but that has had no effect. Trace logs from remote_query_test are attached.
> The same issue is happening on local_query_test, remote_query_test, 
> materialized_view_test, and index_query_test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13214) Data versioning for rows

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13214:
-
Component/s: Materialized Views

> Data versioning for rows
> 
>
> Key: CASSANDRA-13214
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13214
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Bhuvan Rawal
> Fix For: 4.x
>
>
> Row-level data versioning can be implemented in Cassandra in two possible 
> ways:
> 1. Specify the last clustering column as a timestamp, with all previous data 
> merged into the latest version. The user can specify versioning while 
> creating the table. (A sketch follows below.)
> 2. Create a materialized view with the last column as a timestamp, without 
> touching the base table. This is a bit different from a traditional MV, since 
> it would contain the data as versions, and during the MV's read-before-write 
> the complete partition data can be sent.
> This would let people keep versioned historical records without the current 
> client-level challenges (consistency, etc.), doing it instead at the server 
> level with machinery Cassandra already has in place: materialized views and 
> partition locks for MV updates.
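> A minimal sketch of option 1, with a hypothetical table where the version is 
> the last clustering column:
> {code}
> CREATE TABLE records (
>     id uuid,
>     version timestamp,
>     data text,
>     PRIMARY KEY (id, version)
> ) WITH CLUSTERING ORDER BY (version DESC);
> -- latest version of a row
> SELECT * FROM records WHERE id = 756716f7-2e54-4715-9f00-91dcbea6cf50 LIMIT 1;
> {code}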



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13403) nodetool repair breaks SASI index

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13403:
-
Component/s: Materialized Views

> nodetool repair breaks SASI index
> -
>
> Key: CASSANDRA-13403
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13403
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, sasi
> Environment: 3.10
>Reporter: Igor Novgorodov
>Assignee: Alex Petrov
>
> I've got table:
> {code}
> CREATE TABLE cservice.bulks_recipients (
> recipient text,
> bulk_id uuid,
> datetime_final timestamp,
> datetime_sent timestamp,
> request_id uuid,
> status int,
> PRIMARY KEY (recipient, bulk_id)
> ) WITH CLUSTERING ORDER BY (bulk_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients 
> (bulk_id) USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}
> There are 11 rows in it:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> Let's query by index (all rows have the same *bulk_id*):
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;   
> >   
> ...
> (11 rows)
> {code}
> OK, everything is fine.
> Now I run *nodetool repair --partitioner-range --job-threads 4 --full* on 
> each node in the cluster sequentially.
> After it finished:
> {code}
> > select * from bulks_recipients where bulk_id = 
> > baa94815-e276-4ca4-adda-5b9734e6c4a5;
> ...
> (2 rows)
> {code}
> Only two rows.
> While the rows are actually there:
> {code}
> > select * from bulks_recipients;
> ...
> (11 rows)
> {code}
> If I issue an incremental repair on a random node, I can get something like 
> 7 rows from the index query.
> Dropping the index and recreating it fixes the issue (see below). Is this a 
> bug, or am I doing the repair the wrong way?
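> For reference, the drop-and-recreate workaround spelled out, using the 
> schema names from this report:
> {code}
> DROP INDEX cservice.bulk_recipients_bulk_id;
> CREATE CUSTOM INDEX bulk_recipients_bulk_id ON cservice.bulks_recipients (bulk_id) 
> USING 'org.apache.cassandra.index.sasi.SASIIndex';
> {code}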



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11727) Streaming error while Bootstraping Materialized View

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11727:
-
Component/s: Materialized Views

> Streaming error while Bootstraping Materialized View
> 
>
> Key: CASSANDRA-11727
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11727
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Streaming and Messaging
> Environment: Ubuntu 14.04
> Oracle JDK 1.8.0_11
> 16GB RAM
> Cassandra Version 3.0.5
>Reporter: Alexander Heiß
> Fix For: 3.0.5
>
>
> We have a cluster with 4 servers in 2 datacenters (2 in DC A and 2 in DC B), 
> all root servers. We have a replication factor of 2, so at the moment we have 
> 100% load on all 4 servers, with around 250 GB of data. Everything works 
> fine. Now we want to add 2 more servers to the cluster, one in each 
> datacenter, but we always get the same kind of error while bootstrapping:
> {quote}ERROR 13:21:34 Unknown exception caught while attempting to update 
> MaterializedView! messages_dump.messages
> java.lang.IllegalArgumentException: Mutation of 24032623 bytes is too large 
> for the maxiumum size of 16777216
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) 
> ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149)
>  ~[apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) 
> [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169)
>  [apache-cassandra-3.0.5.jar:3.0.5]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_11]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_11]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_11]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_11]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11]
> {quote}
> and
> {quote}
> WARN  13:21:34 Some data streaming failed. Use nodetool to check bootstrap 
> state and resume. For more, see `nodetool help bootstrap`. IN_PROGRESS
> {quote}
> And if we resume the bootstrap, it starts all over again and then fails with 
> the same error message.
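> One mitigation sketch, assuming the cap should simply be raised: the 
> 16777216 bytes in the error is half of commitlog_segment_size_in_mb (32 MB 
> by default), so raising the segment size in cassandra.yaml raises the 
> maximum mutation size accordingly. Whether a view mutation should get this 
> large in the first place is a separate question.
> {code}
> # cassandra.yaml: doubles the maximum mutation size to 32 MB
> commitlog_segment_size_in_mb: 64
> {code}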



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12735) org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12735:
-
Component/s: Materialized Views

> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out
> -
>
> Key: CASSANDRA-12735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12735
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration, Core, Materialized Views
> Environment: Python 2.7.11, Datastax Cassandra 3.7.0  
>Reporter: Rajesh Radhakrishnan
> Fix For: 3.7
>
>
> We have a two-node cluster running Cassandra 3.7.0, with clients on Python 
> 2.7.11 injecting a lot of data from maybe 100 or so jobs. 
> --
> Cache setting can be seen from system.log:
> INFO  [main] 2016-09-30 15:12:50,002 AuthCache.java:172 - (Re)initializing 
> CredentialsCache (validity period/update interval/max entries) 
> (2000/2000/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:09,561 AuthCache.java:172 - 
> (Re)initializing PermissionsCache (validity period/update interval/max 
> entries) (1/1/1000)
> INFO  [SharedPool-Worker-1] 2016-09-30 15:15:24,319 AuthCache.java:172 - 
> (Re)initializing RolesCache (validity period/update interval/max entries) 
> (5000/5000/1000)
> ===
> But I am getting the following exception :
> ERROR [SharedPool-Worker-90] 2016-09-30 15:17:20,883 ErrorMessage.java:338 - 
> Unexpected exception during request
> com.google.common.util.concurrent.UncheckedExecutionException: 
> com.google.common.util.concurrent.UncheckedExecutionException: 
> java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
> received only 0 responses.
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3937) 
> ~[guava-18.0.jar:na]
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) 
> ~[guava-18.0.jar:na]
>   at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824)
>  ~[guava-18.0.jar:na]
>   at org.apache.cassandra.auth.AuthCache.get(AuthCache.java:108) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.PermissionsCache.getPermissions(PermissionsCache.java:45)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.auth.AuthenticatedUser.getPermissions(AuthenticatedUser.java:104)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.authorize(ClientState.java:375) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.checkPermissionOnResourceChain(ClientState.java:308)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:285)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:272) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:256)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:211)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.checkAccess(BatchStatement.java:137)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:502)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:495)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.7.0.jar:3.7.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.36.Final.jar:4.0.36.Final]
>
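> Assuming the timeouts come from reads of the system_auth tables under heavy 
> load (the trace goes through PermissionsCache), one mitigation sketch is to 
> lengthen the auth cache validity periods in cassandra.yaml so those reads 
> happen less often:
> {code}
> permissions_validity_in_ms: 60000
> permissions_update_interval_in_ms: 10000
> roles_validity_in_ms: 60000
> roles_update_interval_in_ms: 10000
> {code}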

[jira] [Updated] (CASSANDRA-13564) Mismatch Documentation on MATERIALIZE VIEW

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13564:
-
Component/s: Materialized Views

> Mismatch Documentation on MATERIALIZE VIEW
> --
>
> Key: CASSANDRA-13564
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13564
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website, Materialized Views
> Environment: 3.10
>Reporter: Nick Hryhoriev
>  Labels: doc-impacting
>
> When creating a MATERIALIZED VIEW without a clustering key, I got the 
> exception "No columns are defined for Materialized View other than primary 
> key".
> Is this expected behaviour? I can't find anything about it in the 
> documentation, but the check exists in the code.
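> A hypothetical reproduction, as I understand the check: every column the 
> view selects is part of its primary key, so the view has no other columns.
> {code}
> CREATE TABLE t (k int, c int, v int, PRIMARY KEY (k, c));
> CREATE MATERIALIZED VIEW t_by_c AS
>     SELECT k, c FROM t
>     WHERE k IS NOT NULL AND c IS NOT NULL
>     PRIMARY KEY (c, k);
> {code}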



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9809) Full support for WHERE clause in MV

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9809:

Component/s: Materialized Views

> Full support for WHERE clause in MV
> ---
>
> Key: CASSANDRA-9809
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9809
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Materialized Views
>Reporter: Jonathan Ellis
>
> Materialized views should support limiting what rows are denormalized with a 
> WHERE clause.  (UDF calls should be valid here.)
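> As an illustration of the kind of definition this would allow (hypothetical 
> schema):
> {code}
> CREATE MATERIALIZED VIEW active_users AS
>     SELECT * FROM users
>     WHERE status = 'active' AND id IS NOT NULL
>     PRIMARY KEY (status, id);
> {code}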



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12475) dtest failure in consistency_test.TestConsistency.short_read_test

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12475:
-
Component/s: Materialized Views

> dtest failure in consistency_test.TestConsistency.short_read_test
> -
>
> Key: CASSANDRA-12475
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12475
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Testing
>Reporter: Joel Knighton
>Assignee: Joel Knighton
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest/42/testReport/junit/consistency_test/TestConsistency/short_read_test/
> Error:
> {code}
> Error from server: code=2200 [Invalid query] message="No keyspace has been 
> specified. USE a keyspace, or explicitly specify keyspace.tablename"
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10977) MV view_tombstone_test is failing on trunk

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10977:
-
Component/s: Materialized Views

> MV view_tombstone_test is failing on trunk
> --
>
> Key: CASSANDRA-10977
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10977
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Tools
>Reporter: Alan Boudreault
>  Labels: dtest
> Fix For: 3.0.3, 3.3
>
>
> http://cassci.datastax.com/job/trunk_dtest/893/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test/
> {code}
> ==
> FAIL: view_tombstone_test (materialized_views_test.TestMaterializedViews)
> --
> Traceback (most recent call last):
>   File 
> "/home/aboudreault/git/cstar/cassandra-dtest/materialized_views_test.py", 
> line 735, in view_tombstone_test
> assert_none(session, "SELECT * FROM t_by_v WHERE v = 1")
>   File "/home/aboudreault/git/cstar/cassandra-dtest/assertions.py", line 44, 
> in assert_none
> assert list_res == [], "Expected nothing from %s, but got %s" % (query, 
> list_res)
> AssertionError: Expected nothing from SELECT * FROM t_by_v WHERE v = 1, but 
> got [[1, 1, u'b', None]]
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-MFSCKQ
> - >> end captured logging << -
> --
> Ran 1 test in 27.986s
> FAILED (failures=1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10197) LWW bug in Materialized Views

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10197:
-
Component/s: Materialized Views

> LWW bug in Materialized Views
> -
>
> Key: CASSANDRA-10197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10197
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0 beta 2
>
>
> MaterializedViewLongTest is flaky.
> It turns out this happens when the same timestamp is used for two writes. MV 
> doesn't resolve this properly, so you get unexpected results.
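> A minimal sketch of the colliding case (hypothetical table t): both base and 
> view must agree, deterministically, on which of two equal-timestamp writes 
> wins.
> {code}
> INSERT INTO t (k, v) VALUES (1, 'a') USING TIMESTAMP 1000;
> INSERT INTO t (k, v) VALUES (1, 'b') USING TIMESTAMP 1000;
> {code}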



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13248) testall failure in org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13248:
-
Component/s: Materialized Views

> testall failure in 
> org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest
> --
>
> Key: CASSANDRA-13248
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13248
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: test-failure, testall
> Attachments: 
> TEST-org.apache.cassandra.db.compaction.PendingRepairManagerTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1416/testReport/org.apache.cassandra.db.compaction/PendingRepairManagerTest/userDefinedTaskTest
> {code}
> Error Message
> expected:<1> but was:<0>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
>   at 
> org.apache.cassandra.db.compaction.PendingRepairManagerTest.userDefinedTaskTest(PendingRepairManagerTest.java:194)
> {code}{code}
> Standard Output
> ERROR [main] 2017-02-21 17:00:01,792 ?:? - SLF4J: stderr
> INFO  [main] 2017-02-21 17:00:02,001 ?:? - Configuration location: 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2017-02-21 17:00:02,002 ?:? - Loading settings from 
> file:/home/automaton/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2017-02-21 17:00:02,530 ?:? - Node 
> configuration:[allocate_tokens_for_keyspace=null; authenticator=null; 
> authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
> back_pressure_enabled=false; back_pressure_strategy=null; 
> batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
> batchlog_replay_throttle_in_kb=1024; broadcast_address=null; 
> broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; 
> cas_contention_timeout_in_ms=1000; cdc_enabled=false; 
> cdc_free_space_check_interval_ms=250; 
> cdc_raw_directory=build/test/cassandra/cdc_raw:165; cdc_total_space_in_mb=0; 
> client_encryption_options=; cluster_name=Test Cluster; 
> column_index_cache_size_in_kb=2; column_index_size_in_kb=4; 
> commit_failure_policy=stop; commitlog_compression=null; 
> commitlog_directory=build/test/cassandra/commitlog:165; 
> commitlog_max_compression_buffers_in_pool=3; 
> commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=5; 
> commitlog_sync=batch; commitlog_sync_batch_window_in_ms=1.0; 
> commitlog_sync_period_in_ms=0; commitlog_total_space_in_mb=null; 
> compaction_large_partition_warning_threshold_mb=100; 
> compaction_throughput_mb_per_sec=0; concurrent_compactors=4; 
> concurrent_counter_writes=32; concurrent_materialized_view_writes=32; 
> concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; 
> counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; 
> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
> credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; 
> credentials_validity_in_ms=2000; cross_node_timeout=false; 
> data_file_directories=[Ljava.lang.String;@1757cd72; disk_access_mode=mmap; 
> disk_failure_policy=ignore; disk_optimization_estimate_percentile=0.95; 
> disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; 
> dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; 
> enable_scripted_user_defined_functions=true; 
> enable_user_defined_functions=true; 
> enable_user_defined_functions_threads=true; encryption_options=null; 
> endpoint_snitch=org.apache.cassandra.locator.SimpleSnitch; 
> file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; 
> gc_warn_threshold_in_ms=0; hinted_handoff_disabled_datacenters=[]; 
> hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
> hints_compression=null; hints_directory=build/test/cassandra/hints:165; 
> hints_flush_period_in_ms=1; incremental_backups=true; 
> index_interval=null; index_summary_capacity_in_mb=null; 
> index_summary_resize_interval_in_minutes=60; initial_token=null; 
> inter_dc_stream_throughput_outbound_megabits_per_sec=200; 
> inter_dc_tcp_nodelay=true; internode_authenticator=null; 
> internode_compression=none; internode_recv_buff_size_in_bytes=0; 
> internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; 
> key_cache_save_period=14400; key_cache_size_in_mb=null; 
> listen_address=127.0.0.1; listen_interface=null; 
> listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; 
> max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
> max_hints_file_size_in_mb=128; 

[jira] [Updated] (CASSANDRA-12753) Create MV can corrupt C*, blocking all further table actions and startup

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12753:
-
Component/s: Materialized Views

> Create MV can corrupt C*, blocking all further table actions and startup
> 
>
> Key: CASSANDRA-12753
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12753
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Materialized Views
> Environment: RHEL6.5
> Cas 3.0.9
>Reporter: Hazel Bobrins
>Priority: Critical
> Attachments: Cass_start.txt, MV_Create.txt, table_drop.txt
>
>
> Creating an MV with a reserved word as a field name, e.g. 'set', results in 
> an error. After this failed MV create, all further actions in the keyspace 
> fail, and node startup fails until the keyspace is dropped.
> How to reproduce
> cassandra@cqlsh:test1> CREATE KEYSPACE test1 WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': 1 } AND durable_writes = 'true';
> cassandra@cqlsh:test1> use test1 ;
> cassandra@cqlsh:test1> CREATE TABLE main_table ( field1 text, field2 text, 
> "set" text, PRIMARY KEY ( field1, field2 ) );
> cassandra@cqlsh:test1> CREATE MATERIALIZED VIEW mv1 AS SELECT field2, field1, 
> "set" FROM main_table WHERE field1 IS NOT NULL AND field2 IS NOT NULL PRIMARY 
> KEY ( field2, field1 ) ;
> ServerError: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:23 no viable 
> alternative at input 'set' (SELECT field1, field2, [set]...)
> ## Attached stack traces - 'MV_Create' thrown at this point
> cassandra@cqlsh:test1> drop TABLE main_table ;
> ServerError: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.exceptions.SyntaxException: line 1:23 no viable 
> alternative at input 'set' (SELECT field1, field2, [set]...)
> ## Attached stack traces - 'Table_drop' thrown at this point
> Finally restart Cassandra. Attached stack 'Cass_start' thrown at this point 
> and C* does not start.
> Dropping the keyspace does work, however, this must obviously be done before 
> stopping the node.
> We have also tested this on a 4 node cluster, post the MV create all nodes 
> report the same issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9859) IndexedReader updateBlock exception

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9859:

Component/s: Materialized Views

> IndexedReader updateBlock exception
> ---
>
> Key: CASSANDRA-9859
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9859
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 3.0 alpha 1
>
>
> While testing the Materialized View, an exception is thrown on create.
> {noformat}
> [junit] ERROR 18:25:01 Unexpected exception during request
> [junit] java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: 
> Index: 2, Size: 2
> [junit]   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:356) 
> ~[main/:na]
> [junit]   at 
> org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:423)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.service.MigrationManager.announceColumnFamilyUpdate(MigrationManager.java:340)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.cql3.statements.CreateMaterializedViewStatement.announceMigration(CreateMaterializedViewStatement.java:208)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:204)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:235) 
> ~[main/:na]
> [junit]   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:220) 
> ~[main/:na]
> [junit]   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
> [junit]   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
> [junit]   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> [junit]   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> [junit]   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> [junit]   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> [junit]   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> [junit]   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [main/:na]
> [junit]   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na]
> [junit]   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> [junit] Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
> [junit]   at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> ~[na:1.8.0_45]
> [junit]   at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
> ~[na:1.8.0_45]
> [junit]   at 
> org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:352) 
> ~[main/:na]
> [junit]   ... 18 common frames omitted
> [junit] Caused by: java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
> [junit]   at java.util.ArrayList.rangeCheck(ArrayList.java:653) 
> ~[na:1.8.0_45]
> [junit]   at java.util.ArrayList.get(ArrayList.java:429) ~[na:1.8.0_45]
> [junit]   at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator$IndexedReader.updateBlock(AbstractSSTableIterator.java:407)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader$1.computeNext(SSTableIterator.java:253)
>  ~[main/:na]
> [junit]   at 
> org.apache.cassandra.db.columniterator.SSTableIterator$ForwardIndexedReader$1.computeNext(SSTableIterator.java:222)
>  ~[main/:na]
> [junit]   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-18.0.jar:na]
> [junit]   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-18.0.jar:na]
> [junit]   at 
> org.apache.cassandra.db.Slices$ArrayBackedSlices$1.prepareNext(Slices.java:490)
>  

[jira] [Updated] (CASSANDRA-10147) Base table PRIMARY KEY can be assumed to be NOT NULL in MV creation

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10147:
-
Component/s: Materialized Views

> Base table PRIMARY KEY can be assumed to be NOT NULL in MV creation
> ---
>
> Key: CASSANDRA-10147
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10147
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Materialized Views
>Reporter: Jonathan Ellis
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.0 beta 2
>
>
> {code}
> CREATE TABLE users (
> id uuid PRIMARY KEY,
> username text,
> email text,
> age int
> );
> CREATE MATERIALIZED VIEW users_by_username AS SELECT * FROM users WHERE 
> username IS NOT NULL PRIMARY KEY (username, id);
> InvalidRequest: code=2200 [Invalid query] message="Primary key column 'id' is 
> required to be filtered by 'IS NOT NULL'"
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10295) Support skipping MV read-before-write on a per-operation basis

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10295:
-
Component/s: Materialized Views

> Support skipping MV read-before-write on a per-operation basis
> --
>
> Key: CASSANDRA-10295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10295
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Materialized Views
>Reporter: Tyler Hobbs
>  Labels: client-impacting, doc-impacting
> Fix For: 4.x
>
>
> This is similar in spirit to CASSANDRA-9779, but on a per-operation basis.  
> There are many workloads that include a mixture of new insertions and 
> overwrites.  In some cases, logic outside of Cassandra guarantees that an 
> inserted row does not already exist.  For example, the primary key may 
> include a UUID or another form of unique id (from, say, Snowflake).  
> When denormalizing manually, users can take advantage of this knowledge to 
> avoid doing a read-before-write, but with materialized views they don't have 
> this option.  When the newly inserted row also happens to be a new partition, 
> MVs are still pretty efficient, because the bloom filters allow us to quickly 
> short circuit the read.  However, when new rows are inserted to existing 
> partitions, the reads can become costly.
> I'd like to consider exposing a way for the user to indicate that an inserted 
> row is new on a per-operation basis.  Internally, this could potentially use 
> the mechanism from CASSANDRA-9779, depending on how that's implemented.  As 
> far as the API goes, I'm not sure.  Perhaps an "assertion" clause in inserts 
> would work well:
> {noformat}
> INSERT INTO users ... ASSERTING DOES NOT EXIST;
> {noformat}
> The choice of API should also take into consideration potential future 
> enhancements along these lines.  For example, we might want to support 
> asserting that a given column has a known current value (as another means of 
> avoiding read-before-writes).
> If we implement this, we should make sure that hints, logged batches, and 
> commitlog replay handle this safely.  If the original timestamp is used for 
> replay, I believe it should be idempotent (during the gc_grace window), but I 
> could be missing something.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-9673) Improve batchlog write path

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-9673:

Component/s: Materialized Views

> Improve batchlog write path
> ---
>
> Key: CASSANDRA-9673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9673
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Materialized Views
>Reporter: Aleksey Yeschenko
>Assignee: Stefania
>  Labels: performance
> Fix For: 3.0 beta 2
>
> Attachments: 9673_001.tar.gz, 9673_004.tar.gz, 
> gc_times_first_node_patched_004.png, gc_times_first_node_trunk_004.png
>
>
> Currently we allocate an on-heap {{ByteBuffer}} to serialize the batched 
> mutations into, before sending it to a distant node, generating unnecessary 
> garbage (potentially a lot of it).
> With materialized views using the batchlog, it would be nice to optimise the 
> write path:
> - introduce a new verb ({{Batch}})
> - introduce a new message ({{BatchMessage}}) that would encapsulate the 
> mutations, expiration, and creation time (similar to {{HintMessage}} in 
> CASSANDRA-6230)
> - have MS serialize it directly instead of relying on an intermediate buffer
> To avoid merely shifting the temp buffer to the receiving side(s) we should 
> change the structure of the batchlog table to use a list or a map of 
> individual mutations.
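> A sketch of that last point, assuming a batchlog table keyed by batch id 
> that stores the serialized mutations individually:
> {code}
> CREATE TABLE system.batches (
>     id timeuuid PRIMARY KEY,
>     mutations list<blob>,
>     version int
> );
> {code}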



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-2474) CQL support for compound columns and wide rows

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-2474:

Component/s: Materialized Views

> CQL support for compound columns and wide rows
> --
>
> Key: CASSANDRA-2474
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2474
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL, Materialized Views
>Reporter: Eric Evans
>Priority: Critical
>  Labels: cql
> Attachments: 0001-Add-support-for-wide-and-composite-CFs.patch, 
> 0002-thrift-generated-code.patch, 2474-transposed-1.PNG, 
> 2474-transposed-raw.PNG, 2474-transposed-select-no-sparse.PNG, 
> 2474-transposed-select.PNG, ASF.LICENSE.NOT.GRANTED--screenshot-1.jpg, 
> ASF.LICENSE.NOT.GRANTED--screenshot-2.jpg, cql_tests.py, raw_composite.txt
>
>
> For the most part, this boils down to supporting the specification of 
> compound column names (the CQL syntax is colon-delimited terms), and then 
> teaching the decoders (drivers) to create structures from the results.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13300) Upgrade the jna version to 4.3.0

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13300:
-
Component/s: Materialized Views

> Upgrade the jna version to 4.3.0
> 
>
> Key: CASSANDRA-13300
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13300
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration, Materialized Views
>Reporter: Amitkumar Ghatwal
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> Could you please upgrade the JNA version present in the Cassandra GitHub 
> repository (https://github.com/apache/cassandra/blob/trunk/lib/jna-4.0.0.jar) 
> to the latest version, 4.3.0 
> (http://repo1.maven.org/maven2/net/java/dev/jna/jna/4.3.0/jna-4.3.0-javadoc.jar)?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10261) Materialized Views Timestamp issues

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10261:
-
Component/s: Materialized Views

> Materialized Views Timestamp issues
> ---
>
> Key: CASSANDRA-10261
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10261
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0.0 rc1
>
>
> As [~thobbs] 
> [mentioned|https://issues.apache.org/jira/browse/CASSANDRA-9664?focusedCommentId=14724150=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14724150]
>  in CASSANDRA-9664 there are issues dealing with updates to individual cells 
> which can mask data from the base table in the view when trying to filter 
> data correctly in the view.  
> Unfortunately, this same issue exists for all MV tables with regular columns.
> In earlier versions of MV we did have a fix for this, which I can now see 
> does not cover all situations.
> I've pushed some unit tests to show the issue (similar to Tyler's) and a fix.  
> The idea is that we keep the base table's timestamps per cell as is, so we 
> can *always* tell (per replica) which version of the record is the latest. 
> Since the base table *always* writes the entire record to the view (part of 
> our earlier partial fix), we can ensure the view record contains *at least* 
> the view's primary key timestamp.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11698) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11698:
-
Component/s: Materialized Views

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-11698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11698
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views, Testing
>Reporter: Russ Hatch
>Assignee: Carl Yeksigian
>  Labels: dtest
> Attachments: node1_debug.log, node1.log, node2_debug.log, node2.log, 
> node3_debug.log, node3.log
>
>
> Recent failure; the test has flapped before, a while back.
> {noformat}
> Expecting 2 users, got 1
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_dtest/688/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build cassandra-3.0_dtest #688



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10014) Deletions using clustering keys not reflected in MV

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10014:
-
Component/s: Materialized Views

> Deletions using clustering keys not reflected in MV
> ---
>
> Key: CASSANDRA-10014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Materialized Views
>Reporter: Stefan Podkowinski
>Assignee: Carl Yeksigian
> Fix For: 3.0 beta 1
>
>
> I wrote a test to reproduce an 
> [issue|http://stackoverflow.com/questions/31810841/cassandra-materialized-view-shows-stale-data/31860487]
>  reported on SO, and it turns out this is easily reproducible. There seems 
> to be a bug preventing deletes from being propagated to MVs when a 
> clustering key is used. See 
> [here|https://github.com/spodkowinski/cassandra/commit/1c064523c8d8dbee30d46a03a0f58d3be97800dc]
>  for test case (testClusteringKeyTombstone should fail).
> It seems {{MaterializedView.updateAffectsView()}} will not consider the 
> delete relevant for the view as {{partition.deletionInfo().isLive()}} will be 
> true during the test. In other test cases isLive will return false, which 
> seems to be the actual problem here. I'm not even sure the root cause is MV 
> specific, but wasn't able to dig much deeper as I'm not familiar with the 
> slightly confusing semantics around DeletionInfo.  
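> A minimal reproduction sketch (hypothetical schema) of the gap described 
> above:
> {code}
> CREATE TABLE t (k int, c int, v int, PRIMARY KEY (k, c));
> CREATE MATERIALIZED VIEW t_by_v AS
>     SELECT * FROM t
>     WHERE k IS NOT NULL AND c IS NOT NULL AND v IS NOT NULL
>     PRIMARY KEY (v, k, c);
> INSERT INTO t (k, c, v) VALUES (1, 2, 3);
> DELETE FROM t WHERE k = 1 AND c = 2;  -- delete addressed by clustering key
> SELECT * FROM t_by_v WHERE v = 3;     -- bug: the view row is still returned
> {code}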



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12044) Materialized view definition regression in clustering key

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-12044:
-
Component/s: Materialized Views

> Materialized view definition regression in clustering key
> -
>
> Key: CASSANDRA-12044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12044
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Materialized Views
>Reporter: Michael Mior
>Assignee: Carl Yeksigian
> Fix For: 3.0.9, 3.8
>
>
> This bug was reported on the 
> [users|https://mail-archives.apache.org/mod_mbox/cassandra-user/201606.mbox/%3CCAG0vsSJRtRjLJqKsd3M8X-8nXpPwRj7Q80mNkuy8sy%2B%2B%3D%2BocHA%40mail.gmail.com%3E]
>  mailing list. The following definitions work in 3.0.3 but fail in 3.0.7.
> {code}
> CREATE TABLE ks.pa (
> id bigint,
> sub_id text,
> name text,
> class text,
> r_id bigint,
> k_id bigint,
> created timestamp,
> priority int,
> updated timestamp,
> value text,
> PRIMARY KEY (id, sub_id, name)
> );
> CREATE MATERIALIZED VIEW ks.mv_pa AS
> SELECT k_id, name, value, sub_id, id, class, r_id
> FROM ks.pa
> WHERE k_id IS NOT NULL AND name IS NOT NULL AND value IS NOT NULL AND 
> sub_id IS NOT NULL AND id IS NOT NULL
> PRIMARY KEY ((k_id, name), value, sub_id, id);
> {code}
> After running bisect, I've narrowed it down to commit 
> [86ba227|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=86ba227477b9f8595eb610ecaf950cfbc29dd36b]
>  from [CASSANDRA-11475|https://issues.apache.org/jira/browse/CASSANDRA-11475].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10230) Remove coordinator batchlog from materialized views

2017-06-22 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-10230:
-
Component/s: Materialized Views

> Remove coordinator batchlog from materialized views
> ---
>
> Key: CASSANDRA-10230
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10230
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination, Materialized Views
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0.0 rc1
>
>
> We are considering removing or making optional the coordinator batchlog.  
> The batchlog primarily serves as a way to quickly reach consistency between 
> base and view, since we don't have any kind of read repair between base and 
> view. But we do have repair, so as long as you don't lose nodes while 
> writing at CL.ONE you will be eventually consistent.
> I've committed to the 3.0 branch a way to disable the coordinator with 
> {{-Dcassandra.mv_disable_coordinator_batchlog=true}}
> The majority of the performance hit to throughput is currently the batchlog 
> as shown by this chart.
> http://cstar.datastax.com/graph?stats=f794245a-4d9d-11e5-9def-42010af0688f=op_rate=1_user=1_aggregates=true=0=498.52=0=50142.4
> I'd like to have tests run with and without this flag to validate how quickly 
> we achieve quorum consistency without repair writing with CL.ONE.   Once we 
> can see there is little/no impact we can permanently remove the coordinator 
> batchlog.
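> For testing, the flag from above can be added to the JVM options, e.g. 
> (assuming cassandra-env.sh is where your deployment sets JVM options):
> {code}
> JVM_OPTS="$JVM_OPTS -Dcassandra.mv_disable_coordinator_batchlog=true"
> {code}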



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


