[jira] [Updated] (CASSANDRA-14421) Reenable upgrade tests

2018-06-24 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14421:
-
Reviewer: Dinesh Joshi

> Reenable upgrade tests
> --
>
> Key: CASSANDRA-14421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14421
> Project: Cassandra
>  Issue Type: Task
>  Components: Testing
>Reporter: Sam Tunnicliffe
>Assignee: Jason Brown
>Priority: Major
> Fix For: 4.0
>
>
> Since dtests were switched to pytest & python3 in CASSANDRA-13134, the 
> upgrade tests have been non-functional and are deselected by default (though 
> even if you ran with the {{--execute-upgrade-tests}} flag, they wouldn't work). 
> They're further broken by CASSANDRA-14420, as {{upgrade_manifest}} relies on 
> {{CASSANDRA_VERSION_FROM_BUILD}}. We need to get them, or something 
> equivalent, up and running.






[jira] [Commented] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521858#comment-16521858
 ] 

Dinesh Joshi commented on CASSANDRA-14540:
--

Thanks, [~jasobrown]. LGTM, +1

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.
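For illustration, the bug reduces to which version field the initiator writes in the third message. A minimal sketch (the class and field names here are hypothetical; only {{MessagingService.current_version}} is from the codebase):

{code:java}
// Sketch of the initiator's side of the third handshake message.
// The bug: writing the node's own MessagingService.current_version.
// The fix: writing the version negotiated with the peer earlier.
class HandshakeThirdMessageSketch
{
    private final int negotiatedVersion; // agreed with the peer in message two

    HandshakeThirdMessageSketch(int negotiatedVersion)
    {
        this.negotiatedVersion = negotiatedVersion;
    }

    int versionToSend()
    {
        // buggy: return MessagingService.current_version;
        return negotiatedVersion; // corrected behaviour
    }
}
{code}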






[jira] [Updated] (CASSANDRA-14525) streaming failure during bootstrap makes new node into inconsistent state

2018-06-24 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14525:
-
Reviewer: Kurt Greaves  (was: Dinesh Joshi)

> streaming failure during bootstrap makes new node into inconsistent state
> -
>
> Key: CASSANDRA-14525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
> Fix For: 4.0, 2.2.x, 3.0.x
>
>
> If bootstrap fails for a newly joining node (most commonly due to a 
> streaming failure), then the node remains in the {{joining}} state, which is 
> fine, but Cassandra also enables the native transport, which makes the 
> overall state inconsistent. This further causes a NullPointerException if 
> auth is enabled on the new node. Reproduction steps follow:
> For example, if bootstrap fails due to streaming errors like
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1256)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:894)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:660)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:573)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:330) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:695) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) 
> ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
>  ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540) 
> ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:307)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {quote}
> then the variable [StorageService.java::dataAvailable 
> |https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L892]
>  will be {{false}}. Since {{dataAvailable}} is {{false}}, it will not 
> call [StorageService.java::finishJoiningRing 
> |https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L933]
>  and as a result 
> [StorageService.java::doAuthSetup|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L999]
>  will not be invoked.
> API [StorageService.java::joinTokenRing 
> 
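The flow described above, as a minimal sketch (the method names follow the linked {{StorageService}} code, but the structure is simplified for illustration and is not the actual implementation):

{code:java}
// Why a failed bootstrap leaves the node inconsistent: finishJoiningRing()
// (and with it doAuthSetup()) is skipped, yet the native transport starts.
class BootstrapFlowSketch
{
    void joinTokenRing(boolean dataAvailable)
    {
        if (dataAvailable)
            finishJoiningRing();    // only reached on successful bootstrap
        // On a stream failure dataAvailable is false, so doAuthSetup() never
        // runs - but clients can still connect once the transport is up.
        startNativeTransport();
    }

    void finishJoiningRing()    { doAuthSetup(); }
    void doAuthSetup()          { /* set up auth keyspace and default roles */ }
    void startNativeTransport() { /* enable client connections */ }
}
{code}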

[jira] [Commented] (CASSANDRA-14525) streaming failure during bootstrap makes new node into inconsistent state

2018-06-24 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521843#comment-16521843
 ] 

Kurt Greaves commented on CASSANDRA-14525:
--

Sorry about that, we had a pretty busy week last week and Vince probably won't 
have time. I'll review.

> streaming failure during bootstrap makes new node into inconsistent state
> -
>
> Key: CASSANDRA-14525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
> Fix For: 4.0, 2.2.x, 3.0.x
>
>
> If bootstrap fails for a newly joining node (most commonly due to a 
> streaming failure), then the node remains in the {{joining}} state, which is 
> fine, but Cassandra also enables the native transport, which makes the 
> overall state inconsistent. This further causes a NullPointerException if 
> auth is enabled on the new node. Reproduction steps follow:
> For example, if bootstrap fails due to streaming errors like
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1256)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:894)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:660)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:573)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:330) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:695) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) 
> ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
>  ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540) 
> ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:307)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {quote}
> then the variable [StorageService.java::dataAvailable 
> |https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L892]
>  will be {{false}}. Since {{dataAvailable}} is {{false}}, it will not 
> call [StorageService.java::finishJoiningRing 
> |https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/service/StorageService.java#L933]
>  and as a result 
> 

[jira] [Commented] (CASSANDRA-14056) Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"

2018-06-24 Thread Alex Lourie (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521834#comment-16521834
 ] 

Alex Lourie commented on CASSANDRA-14056:
-

PR is closed too. Thanks!

> Many dtests fail with ConfigurationException: offheap_objects are not 
> available in 3.0 when OFFHEAP_MEMTABLES="true"
> 
>
> Key: CASSANDRA-14056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14056
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Alex Lourie
>Priority: Major
>
> Tons of dtests are running when they shouldn't, as it looks like this path is 
> no longer supported. We need to add the missing logic to fully 
> support running dtests with off-heap memtables enabled (via the 
> OFFHEAP_MEMTABLES="true" environment variable).
> {code}[node2 ERROR] java.lang.ExceptionInInitializerError
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:394)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:361)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
>   at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
>   at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
>   at 
> org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
>   at 
> org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
>   at 
> org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
> Caused by: org.apache.cassandra.exceptions.ConfigurationException: 
> offheap_objects are not available in 3.0. They will be re-introduced in a 
> future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for 
> details
>   at 
> org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
>   at org.apache.cassandra.db.Memtable.<init>(Memtable.java:65)
>   ... 14 more
> {code}
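The exception originates in the memtable allocation-type check reached via {{DatabaseDescriptor.getMemtableAllocatorPool}}; roughly (a sketch reconstructed from the stack trace above, not the verbatim 3.0 source):

{code:java}
// Sketch of the 3.0-era guard that rejects off-heap object memtables.
enum MemtableAllocationType { heap_buffers, offheap_buffers, offheap_objects }

class MemtableConfigSketch
{
    static void validate(MemtableAllocationType type)
    {
        if (type == MemtableAllocationType.offheap_objects)
            // ConfigurationException in Cassandra; RuntimeException keeps
            // this sketch self-contained.
            throw new RuntimeException(
                "offheap_objects are not available in 3.0. They will be " +
                "re-introduced in a future release, see " +
                "https://issues.apache.org/jira/browse/CASSANDRA-9472 for details");
    }
}
{code}

So any dtest that configures {{memtable_allocation_type: offheap_objects}} against a 3.0 node fails at startup instead of being skipped.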






[jira] [Assigned] (CASSANDRA-13857) Allow MV with only partition key

2018-06-24 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reassigned CASSANDRA-13857:


Assignee: Alexander Ivakov

> Allow MV with only partition key
> 
>
> Key: CASSANDRA-13857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13857
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Kurt Greaves
>Assignee: Alexander Ivakov
>Priority: Major
>
> We currently disallow creation of a view that has the exact same primary key 
> as the base where no clustering keys are present; however, a potential use 
> case would be a view where part of the PK is filtered so as to have a subset 
> of data in the view which is faster for range queries. We actually currently 
> allow this, but only if you have a clustering key defined. If you only have a 
> partition key it's not possible.
> From the mailing list, the below example works:
> {code:java}
> CREATE TABLE users (
>   site_id int,
>   user_id text,
>   n int,
>   data set<text>,
>   PRIMARY KEY ((site_id, user_id), n));
> user data is updated and read by PK and sometimes I have to fetch all user 
> for some specific site_id. It appeared that full scan by 
> token(site_id,user_id) filtered by WHERE site_id =  works much 
> slower than unfiltered full scan on
> CREATE MATERIALIZED VIEW users_1 AS
> SELECT site_id, user_id, n, data
> FROM users
> WHERE site_id = 1 AND user_id IS NOT NULL AND n IS NOT NULL
> PRIMARY KEY ((site_id, user_id), n);
> {code}
> However the following does not:
> {code:java}
> CREATE TABLE users (
> site_id int,
> user_id text,
> data set<text>,
> PRIMARY KEY ((site_id, user_id)));
> CREATE MATERIALIZED VIEW users_1 AS
> SELECT site_id, user_id, data
> FROM users
> WHERE site_id = 1 AND user_id IS NOT NULL 
> PRIMARY KEY ((site_id, user_id));
> InvalidRequest: Error from server: code=2200 [Invalid query] message="No 
> columns are defined for Materialized View other than primary key"
> {code}
> This is because if the clustering key is empty we assume they've only defined 
> the primary key in the partition key and we haven't accounted for this use 
> case. 
> On that note, we also don't allow the following narrowing of the partition 
> key:
> {code}
> CREATE TABLE kurt.base (
> id int,
> uid text,
> data text,
> PRIMARY KEY (id, uid)
> ) 
> CREATE MATERIALIZED VIEW kurt.mv2 AS SELECT * from kurt.base where id IS NOT 
> NULL and uid='1' PRIMARY KEY ((id, uid));
> InvalidRequest: Error from server: code=2200 [Invalid query] message="No 
> columns are defined for Materialized View other than primary key"
> {code}
> But we do allow the following, which works because there is still a 
> clustering key, despite not changing the PK.
> {code}
> CREATE MATERIALIZED VIEW kurt.mv2 AS SELECT * from kurt.base where id IS NOT 
> NULL and uid='1' PRIMARY KEY (id, uid);
> {code}
> And we also allow the following, which is a narrowing of the partition key as 
> above, but with an extra clustering key.
> {code}
> create table kurt.base3 (id int, uid int, clus1 int, clus2 int, data text, 
> PRIMARY KEY ((id, uid), clus1, clus2));
> CREATE MATERIALIZED VIEW kurt.mv4 AS SELECT * from kurt.base3 where id IS NOT 
> NULL and uid IS NOT NULL and clus1 IS NOT NULL AND clus2 IS NOT NULL  PRIMARY 
> KEY ((id, uid, clus1), clus2);
> {code}
> I _think_ supporting these cases is trivial and mostly already handled in the 
> underlying MV write path, so we might be able to get away with just a simple 
> change of [this 
> condition|https://github.com/apache/cassandra/blob/83822d12d87dcb3aaad2b1e670e57ebef4ab1c36/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java#L291].
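A hedged sketch of what relaxing that condition could look like (the real check lives in {{CreateViewStatement}}; the method and parameter names below are illustrative only):

{code:java}
import java.util.Set;

// Illustrative validation: instead of rejecting every view with no non-PK
// columns, accept a PK-only view as long as its PK covers the base PK.
class ViewValidationSketch
{
    static void validate(Set<String> basePk, Set<String> viewPk, Set<String> viewNonPk)
    {
        if (!viewPk.containsAll(basePk))
            throw new IllegalArgumentException(
                "View primary key must include all base primary key columns");
        // Old behaviour: viewNonPk.isEmpty() raised "No columns are defined
        // for Materialized View other than primary key". Relaxed: allowed.
    }
}
{code}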






[jira] [Updated] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-06-24 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-14542:

Status: Patch Available  (was: Open)

> Deselect no_offheap_memtables dtests
> 
>
> Key: CASSANDRA-14542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>
> After the large rework of dtests in CASSANDRA-14134, one task left undone was 
> to enable running dtests with offheap memtables. That was resolved in 
> CASSANDRA-14056. However, there are a few tests explicitly marked as 
> "no_offheap_memtables", and we should respect that marking when running the 
> dtests with offheap memtables enabled.






[jira] [Created] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-06-24 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-14542:
---

 Summary: Deselect no_offheap_memtables dtests
 Key: CASSANDRA-14542
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
 Project: Cassandra
  Issue Type: Improvement
  Components: Testing
Reporter: Jason Brown
Assignee: Jason Brown


After the large rework of dtests in CASSANDRA-14134, one task left undone was 
to enable running dtests with offheap memtables. That was resolved in 
CASSANDRA-14056. However, there are a few tests explicitly marked as 
"no_offheap_memtables", and we should respect that marking when running the 
dtests with offheap memtables enabled.






[jira] [Commented] (CASSANDRA-14542) Deselect no_offheap_memtables dtests

2018-06-24 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521717#comment-16521717
 ] 

Jason Brown commented on CASSANDRA-14542:
-

Here is a trivial patch to deselect the test:

||patch||
|[branch|https://github.com/jasobrown/cassandra-dtest/tree/deselect-no-offheap-tests]|

Only ~3 tests are actually marked as {{no-offheap-memtables}}, so you can check 
this locally by running:

{noformat}
pytest --use-off-heap-memtables --cassandra-dir=/opt/dev/cassandra  
secondary_indexes_test.py::TestSecondaryIndexesOnCollections::test_map_indexes
{noformat}

The output should contain

{noformat}
...
collected 1 item / 1 deselected
...
{noformat}

and should not have executed the test.


> Deselect no_offheap_memtables dtests
> 
>
> Key: CASSANDRA-14542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14542
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>
> After the large rework of dtests in CASSANDRA-14134, one task left undone was 
> to enable running dtests with offheap memtables. That was resolved in 
> CASSANDRA-14056. However, there are a few tests explicitly marked as 
> "no_offheap_memtables", and we should respect that marking when running the 
> dtests with offheap memtables enabled.






[jira] [Updated] (CASSANDRA-13857) Allow MV with only partition key

2018-06-24 Thread Alexander Ivakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Ivakov updated CASSANDRA-13857:
-
Status: Patch Available  (was: Open)

Attaching a patch for the above. It allows:

1) creating an MV with no clustering columns if there are none in the base table

2) creating an MV where all base table PK columns are in the partition key 
of the MV

[3.11|https://github.com/apache/cassandra/compare/trunk...aivakov:CASSANDRA-13857]

> Allow MV with only partition key
> 
>
> Key: CASSANDRA-13857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13857
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Kurt Greaves
>Priority: Major
>
> We currently disallow creation of a view that has the exact same primary key 
> as the base where no clustering keys are present; however, a potential use 
> case would be a view where part of the PK is filtered so as to have a subset 
> of data in the view which is faster for range queries. We actually currently 
> allow this, but only if you have a clustering key defined. If you only have a 
> partition key it's not possible.
> From the mailing list, the below example works:
> {code:java}
> CREATE TABLE users (
>   site_id int,
>   user_id text,
>   n int,
>   data set<text>,
>   PRIMARY KEY ((site_id, user_id), n));
> user data is updated and read by PK and sometimes I have to fetch all user 
> for some specific site_id. It appeared that full scan by 
> token(site_id,user_id) filtered by WHERE site_id =  works much 
> slower than unfiltered full scan on
> CREATE MATERIALIZED VIEW users_1 AS
> SELECT site_id, user_id, n, data
> FROM users
> WHERE site_id = 1 AND user_id IS NOT NULL AND n IS NOT NULL
> PRIMARY KEY ((site_id, user_id), n);
> {code}
> However the following does not:
> {code:java}
> CREATE TABLE users (
> site_id int,
> user_id text,
> data set<text>,
> PRIMARY KEY ((site_id, user_id)));
> CREATE MATERIALIZED VIEW users_1 AS
> SELECT site_id, user_id, data
> FROM users
> WHERE site_id = 1 AND user_id IS NOT NULL 
> PRIMARY KEY ((site_id, user_id));
> InvalidRequest: Error from server: code=2200 [Invalid query] message="No 
> columns are defined for Materialized View other than primary key"
> {code}
> This is because if the clustering key is empty we assume they've only defined 
> the primary key in the partition key and we haven't accounted for this use 
> case. 
> On that note, we also don't allow the following narrowing of the partition 
> key:
> {code}
> CREATE TABLE kurt.base (
> id int,
> uid text,
> data text,
> PRIMARY KEY (id, uid)
> ) 
> CREATE MATERIALIZED VIEW kurt.mv2 AS SELECT * from kurt.base where id IS NOT 
> NULL and uid='1' PRIMARY KEY ((id, uid));
> InvalidRequest: Error from server: code=2200 [Invalid query] message="No 
> columns are defined for Materialized View other than primary key"
> {code}
> But we do allow the following, which works because there is still a 
> clustering key, despite not changing the PK.
> {code}
> CREATE MATERIALIZED VIEW kurt.mv2 AS SELECT * from kurt.base where id IS NOT 
> NULL and uid='1' PRIMARY KEY (id, uid);
> {code}
> And we also allow the following, which is a narrowing of the partition key as 
> above, but with an extra clustering key.
> {code}
> create table kurt.base3 (id int, uid int, clus1 int, clus2 int, data text, 
> PRIMARY KEY ((id, uid), clus1, clus2));
> CREATE MATERIALIZED VIEW kurt.mv4 AS SELECT * from kurt.base3 where id IS NOT 
> NULL and uid IS NOT NULL and clus1 IS NOT NULL AND clus2 IS NOT NULL  PRIMARY 
> KEY ((id, uid, clus1), clus2);
> {code}
> I _think_ supporting these cases is trivial and mostly already handled in the 
> underlying MV write path, so we might be able to get away with just a simple 
> change of [this 
> condition|https://github.com/apache/cassandra/blob/83822d12d87dcb3aaad2b1e670e57ebef4ab1c36/src/java/org/apache/cassandra/cql3/statements/CreateViewStatement.java#L291].






[jira] [Commented] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521709#comment-16521709
 ] 

Jason Brown commented on CASSANDRA-14540:
-

Tests added to {{OutboundHandshakeHandlerTest}}, and a new commit is on the 
same branch.

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.






[jira] [Commented] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521707#comment-16521707
 ] 

Jason Brown commented on CASSANDRA-14540:
-

upgrade_tests are currently disabled/non-functional 
(https://issues.apache.org/jira/browse/CASSANDRA-14421) (sadpanda). I'm working 
on that soon-ish. 

I can add a test, and thanks for the push!

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.






[jira] [Updated] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14540:
-
Status: Patch Available  (was: Open)

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.






[jira] [Commented] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521596#comment-16521596
 ] 

Dinesh Joshi commented on CASSANDRA-14540:
--

Hi [~jasobrown], this is a good catch. Just curious, did the upgrade dtests 
catch this?

Regarding the change, would it be possible to add a test in 
{{OutboundHandshakeHandlerTest}} to check that we use the negotiated version 
number?
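Such a test might look roughly like this (a sketch only: the helper below is a hypothetical stand-in, while the real test would drive the actual handler, e.g. via Netty's {{EmbeddedChannel}}):

{code:java}
import org.junit.Assert;
import org.junit.Test;

public class NegotiatedVersionSketchTest
{
    // Hypothetical stand-in for the handler's choice of version field.
    static int versionForThirdMessage(int negotiatedVersion)
    {
        return negotiatedVersion; // must not be MessagingService.current_version
    }

    @Test
    public void thirdMessageCarriesNegotiatedVersion()
    {
        int negotiated = 10; // e.g. the messaging version of a 3.x peer
        Assert.assertEquals(negotiated, versionForThirdMessage(negotiated));
    }
}
{code}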

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.






[jira] [Updated] (CASSANDRA-14540) Internode messaging handshake sends wrong messaging version number

2018-06-24 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14540:
-
Reviewer: Dinesh Joshi

> Internode messaging handshake sends wrong messaging version number
> --
>
> Key: CASSANDRA-14540
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14540
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Blocker
> Fix For: 4.x
>
>
> With the refactor of internode messaging to netty in 4.0, we abstracted the 
> protocol handshake messages into a class and handlers. There is a bug where 
> the initiator of the connection sends, in the third message of the handshake, 
> its own default protocol version number 
> ({{MessagingService.current_version}}) rather than the negotiated version. 
> This was not causing any obvious problems when CASSANDRA-8457 was initially 
> committed, but the bug is exposed after CASSANDRA-7544. The problem is that 
> during rolling upgrades from 3.0/3.X to 4.0, nodes cannot connect correctly.






[jira] [Updated] (CASSANDRA-14423) SSTables stop being compacted

2018-06-24 Thread Michael Shuler (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-14423:
---
Priority: Blocker  (was: Major)

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Blocker
> Fix For: 2.2.13, 3.0.17, 3.11.3
>
>
> We're seeing a problem in 3.11.0 where SSTables are being lost from the view and 
> not being included in compactions/as candidates for compaction. It seems to 
> get progressively worse until there are only 1-2 SSTables in the view, which 
> happen to be the most recent SSTables, and thus compactions completely stop 
> for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into 
> the view, but this is only a temporary fix. User-defined/major compactions 
> still work - it's not clear whether they include the result back in the view, 
> but either way this is not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also, for STCS we've confirmed that the SSTable count will differ from the 
> number of SSTables reported in the compaction buckets. In the below example 
> there are only 3 SSTables in a single bucket - no more are listed for this 
> table. Compaction thresholds haven't been modified for this table and it's a 
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> SSTable Compression Ratio: 0.7607394576769735
> Number of keys (estimate): 116074
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 39
> Local read count: 30485
> Local read latency: NaN ms
> Local write count: 57044
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 79.76
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 690912
> Bloom filter off heap memory used: 690760
> Index summary off heap memory used: 54736
> Compression metadata off heap memory used: 1880
> Compacted partition minimum bytes: 73
> Compacted partition maximum bytes: 124
> Compacted partition mean bytes: 96
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 0 
> {code}
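For context on what a "bucket" is here: STCS groups candidate SSTables into buckets of similar size. A simplified sketch of the grouping (the real {{SizeTieredCompactionStrategy.getBuckets}} also handles a minimum SSTable size, hotness, and min/max thresholds):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified size-tiered bucketing: a size joins a bucket whose average
// is within [0.5x, 1.5x] of it, otherwise it starts a new bucket.
class StcsBucketsSketch
{
    static List<List<Long>> buckets(List<Long> sstableSizes)
    {
        List<List<Long>> buckets = new ArrayList<>();
        for (long size : sstableSizes)
        {
            List<Long> match = null;
            for (List<Long> bucket : buckets)
            {
                double avg = bucket.stream().mapToLong(Long::longValue).average().orElse(0);
                if (size >= avg * 0.5 && size <= avg * 1.5)
                {
                    match = bucket;
                    break;
                }
            }
            if (match == null)
                buckets.add(match = new ArrayList<>());
            match.add(size);
        }
        return buckets;
    }
}
{code}

Every SSTable offered to the strategy lands in some bucket, so a report like the one above, where only 3 of 19 SSTables appear in any bucket, suggests the rest were never offered to the strategy at all, i.e. they were lost from the view.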
> {code:java}
> Apr 27 03:10:39 cassandra[9263]: TRACE o.a.c.d.c.SizeTieredCompactionStrategy 
> Compaction buckets are 
> 

[jira] [Updated] (CASSANDRA-14541) Order of warning and custom payloads is unspecified in the protocol specification

2018-06-24 Thread Avi Kivity (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Avi Kivity updated CASSANDRA-14541:
---
Attachment: v1-0001-Document-order-of-tracing-warning-and-custom-payl.patch
Status: Patch Available  (was: Open)

> Order of warning and custom payloads is unspecified in the protocol 
> specification
> -
>
> Key: CASSANDRA-14541
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14541
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Avi Kivity
>Priority: Trivial
> Attachments: 
> v1-0001-Document-order-of-tracing-warning-and-custom-payl.patch
>
>
> Section 2.2 of the protocol specification documents the types of tracing, 
> warning, and custom payloads, but does not document their order in the body.
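For reference, the order as implemented (and as the attached patch documents it) is tracing id, then warnings, then custom payload; a minimal client-side sketch under that assumption, with the codec helpers stubbed out:

{code:java}
import java.nio.ByteBuffer;

// Sketch of reading the optional sections of a v4 response frame body.
// Flag values are from the protocol spec; the readers are stand-ins for
// the spec's [uuid], [string list] and [bytes map] codecs.
class FrameBodySketch
{
    static final int TRACING        = 0x02;
    static final int CUSTOM_PAYLOAD = 0x04;
    static final int WARNING        = 0x08;

    static void readOptionalSections(int flags, ByteBuffer body)
    {
        if ((flags & TRACING) != 0)
            readUuid(body);       // tracing id first
        if ((flags & WARNING) != 0)
            readStringList(body); // then any warnings
        if ((flags & CUSTOM_PAYLOAD) != 0)
            readBytesMap(body);   // then the custom payload
        // ...the regular result body follows
    }

    static void readUuid(ByteBuffer b)       { b.getLong(); b.getLong(); }
    static void readStringList(ByteBuffer b) { /* [string list] codec */ }
    static void readBytesMap(ByteBuffer b)   { /* [bytes map] codec */ }
}
{code}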






[jira] [Created] (CASSANDRA-14541) Order of warning and custom payloads is unspecified in the protocol specification

2018-06-24 Thread Avi Kivity (JIRA)
Avi Kivity created CASSANDRA-14541:
--

 Summary: Order of warning and custom payloads is unspecified in 
the protocol specification
 Key: CASSANDRA-14541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14541
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation and Website
Reporter: Avi Kivity


Section 2.2 of the protocol specification documents the types of tracing, 
warning, and custom payloads, but does not document their order in the body.


