[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-12-19 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Fix Version/s: 4.2
   (was: 4.x)
   Resolution: Fixed
   Status: Resolved  (was: Ready to Commit)

Committed as 
[https://github.com/apache/cassandra/commit/227409d9201fa1aeb9f80b22f499577aedfe25bc]

 

Final CI 
[j8|https://app.circleci.com/pipelines/github/tjake/cassandra/15/workflows/14cfc77f-5f19-49cb-8ff3-de2ea81f25a6]
 & 
[j11|https://app.circleci.com/pipelines/github/tjake/cassandra/15/workflows/53d342fa-ca65-4dec-bda5-5227e2a70b56]

 

Final benchmark 
[result|https://tjake.github.io/other/cassandra-17998-report.html?metric=Ops_Sec=nb_bench%3Amain.result-success_filter=.%2B=1_aggregates=true_ops=0=0=1353.01=0=88328.35]
 shows a >8% throughput gain

 

> Mutation Serialization Caching
> --
>
> Key: CASSANDRA-17998
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17998
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
> Fix For: 4.2
>
>
> A performance change that adds mutation serialization caching to avoid 
> re-serializing the mutation twice: once for the commitlog and once for remote nodes.
>  * Cached serialization for the storage proxy and local commitlog
>  * Cached deserialization for the messaging service and local commitlog
> This yields a non-trivial perf gain (~7-10%) and a drop in median latency:
> [https://tjake.github.io/other/cached-mutations-report.html]
>  
> The cached buffer is stored by MessagingService version to avoid being used 
> by differing nodes during upgrades.
> It also avoids caching mutations larger than a threshold, to avoid GC issues.
>  
> GH PR: https://github.com/apache/cassandra/pull/1954
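
The caching scheme described above can be sketched as follows. This is a hypothetical simplification, not Cassandra's actual API: the class name, the version-keyed map, and the 2 MB threshold are illustrative stand-ins.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache the serialized form of a mutation keyed by MessagingService
// version, skipping mutations above a size threshold to avoid GC pressure.
class CachedMutationSerialization {
    static final int CACHE_THRESHOLD_BYTES = 2 * 1024 * 1024; // assumed value

    private final Map<Integer, byte[]> cachedByVersion = new ConcurrentHashMap<>();
    private final String payload; // stand-in for the real mutation contents

    CachedMutationSerialization(String payload) { this.payload = payload; }

    byte[] serialize(int messagingVersion) {
        byte[] cached = cachedByVersion.get(messagingVersion);
        if (cached != null)
            return cached; // storage proxy and commitlog share this buffer
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(baos)) {
            out.writeInt(messagingVersion); // version-specific framing
            out.writeUTF(payload);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory stream: cannot happen
        }
        byte[] bytes = baos.toByteArray();
        if (bytes.length <= CACHE_THRESHOLD_BYTES)
            cachedByVersion.put(messagingVersion, bytes);
        return bytes;
    }
}
```

Keying the cache by messaging version is what keeps a mixed-version cluster safe during upgrades: a buffer produced for one version is never handed to a node speaking another.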



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-12-19 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Status: Ready to Commit  (was: Changes Suggested)







[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-12-19 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Source Control Link: 
https://github.com/apache/cassandra/commit/227409d9201fa1aeb9f80b22f499577aedfe25bc







[jira] [Updated] (CASSANDRA-8928) Add downgradesstables

2022-12-16 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8928:
--
Reviewers: T Jake Luciani

> Add downgradesstables
> -
>
> Key: CASSANDRA-8928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8928
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/Tools
>Reporter: Jeremy Hanna
>Assignee: Claude Warren
>Priority: Low
>  Labels: remove-reopen
> Fix For: 4.x
>
>
> As mentioned in other places such as CASSANDRA-8047 and in the wild, 
> sometimes you need to go back.  A downgrade-sstables utility would be useful 
> for a lot of reasons, and I don't expect that supporting the previous major 
> version format would require much code, since we already support reading the 
> previous version.






[jira] [Updated] (CASSANDRA-8928) Add downgradesstables

2022-12-16 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8928:
--
Fix Version/s: 4.x







[jira] [Comment Edited] (CASSANDRA-8928) Add downgradesstables

2022-12-16 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17648684#comment-17648684
 ] 

T Jake Luciani edited comment on CASSANDRA-8928 at 12/16/22 3:33 PM:
-

[~claude] has written an initial 
[PR|https://github.com/apache/cassandra/pull/2045] that supports downgrading 
via an offline tool (like sstableupgrader), following a similar approach 
described in [this 
comment|https://issues.apache.org/jira/browse/CASSANDRA-11877?focusedCommentId=15310504=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310504].

It requires extending the contract we already have in place for major 
upgrades: we support reading the last major sstable version, and we support 
reading/writing the previous MessagingService version.

With this PR we add the ability to write SSTables in the prior major format 
(though with a new format key which includes the major version it belongs to).

This gives someone who wants to revert a major version a simpler path: stop 
each node (with a flush), run the offline downgrader, then restart with the 
previous version.

This will also require some new docs and an sstable downgrade dtest. Would it 
be ok to re-assign this ticket to Claude?
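
The offline downgrade flow sketched above (stop with flush, rewrite, restart) boils down to streaming rows from a current-format reader into a prior-format writer. The interfaces below are illustrative stand-ins, not Cassandra's real SSTable reader/writer APIs:

```java
import java.util.List;

// Stand-ins for the real reader/writer; in the actual tool, only the
// on-disk serialization format would differ between the two sides.
interface SSTableSource { List<String> rows(); }
interface SSTableSink { void append(String row); }

class OfflineDowngrader {
    // Rewrite every row unchanged; the sink writes the prior major's format.
    static void downgrade(SSTableSource current, SSTableSink priorFormat) {
        for (String row : current.rows())
            priorFormat.append(row);
    }
}
```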


was (Author: tjake):
[~claude] has written an initial 
[PR|https://github.com/apache/cassandra/pull/2045] that supports downgrading 
via an offline tool (like sstableupgrader)  Following a similar approach 
described in this comment.

It requires extending the same contract we have in place for major upgrades.  
We support reading from the last major sstable version and we support 
reading/writing to the previous MessagingService version. 

With this PR we add the ability to Write SSTables in the prior major format 
(though with a new format key which includes the major version it belongs to).

 

This will allow someone who wants to revert major versions a simpler path. 

Just stop each node (w/ flush), run offline downgrader,   then restart with 
previous version.

 

This will require some new docs and also a sstable downgrade dtest as well.  
Would it be ok to re-assign this ticket to Claude?







[jira] [Updated] (CASSANDRA-8928) Add downgradesstables

2022-12-16 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8928:
--
Labels: remove-reopen  (was: gsoc2016 mentor remove-reopen)







[jira] [Commented] (CASSANDRA-8928) Add downgradesstables

2022-12-16 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17648684#comment-17648684
 ] 

T Jake Luciani commented on CASSANDRA-8928:
---

[~claude] has written an initial 
[PR|https://github.com/apache/cassandra/pull/2045] that supports downgrading 
via an offline tool (like sstableupgrader), following a similar approach 
described in this comment.

It requires extending the contract we already have in place for major 
upgrades: we support reading the last major sstable version, and we support 
reading/writing the previous MessagingService version.

With this PR we add the ability to write SSTables in the prior major format 
(though with a new format key which includes the major version it belongs to).

This gives someone who wants to revert a major version a simpler path: stop 
each node (with a flush), run the offline downgrader, then restart with the 
previous version.

This will also require some new docs and an sstable downgrade dtest. Would it 
be ok to re-assign this ticket to Claude?







[jira] [Commented] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-31 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17626584#comment-17626584
 ] 

T Jake Luciani commented on CASSANDRA-17998:


That works, it would be the equivalent of this call 
[https://github.com/apache/cassandra/pull/1954/files#diff-4f6615c703489b87cafd23c0dc5a4edc4af27c33aba172870b30e38a7d8faa1dR1478]

 

 







[jira] [Commented] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-28 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17625815#comment-17625815
 ] 

T Jake Luciani commented on CASSANDRA-17998:


You can disable it by setting the threshold to 0.
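
The disable semantics can be captured in a single predicate (a sketch only; the actual option name and configuration plumbing are not shown in this thread):

```java
// A threshold of 0 means no mutation is small enough to cache,
// which disables the feature entirely.
class CacheThreshold {
    static boolean shouldCache(long serializedSizeBytes, long thresholdBytes) {
        return thresholdBytes > 0 && serializedSizeBytes <= thresholdBytes;
    }
}
```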







[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-28 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Test and Documentation Plan: CI
 Status: Patch Available  (was: In Progress)







[jira] [Commented] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-28 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17625807#comment-17625807
 ] 

T Jake Luciani commented on CASSANDRA-17998:


> Specifically, entries in the log will be addressable, so we'll only have to 
> serialize once - when writing to the commit log

 

This would require waiting for the local write to hit the commitlog before 
sending the request to other replicas. Besides that, this patch avoids having 
to re-serialize the mutation for the commitlog when it's read off an internode 
message, since it keeps the serialized form that came off the wire.

When the Accord work lands (assuming there's a way around my first point), you 
could add a new kind of CachedSerialization instance for the copy in the 
commitlog.
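
The wire-reuse point above can be illustrated with a minimal sketch (hypothetical names, not Cassandra's actual classes): the inbound path keeps the raw bytes it received alongside the deserialized mutation, so the commitlog append reuses them instead of re-serializing.

```java
// Keep the serialized form received off the wire next to the deserialized
// mutation; the local commitlog append can then reuse it directly.
class InboundMutation {
    final String mutation;   // stand-in for the deserialized Mutation object
    final byte[] wireBytes;  // bytes exactly as received from the peer

    InboundMutation(String mutation, byte[] wireBytes) {
        this.mutation = mutation;
        this.wireBytes = wireBytes;
    }

    // Commitlog write path: no re-serialization needed.
    byte[] commitlogBytes() {
        return wireBytes;
    }
}
```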

 

 







[jira] [Commented] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-28 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17625745#comment-17625745
 ] 

T Jake Luciani commented on CASSANDRA-17998:


[j8 
tests|https://app.circleci.com/pipelines/github/tjake/cassandra/7/workflows/96748234-266c-4e6e-983b-1708381136c6]







[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-26 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Fix Version/s: 4.2
   (was: 4.x)







[jira] [Updated] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-26 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17998:
---
Description: 
A performance change that adds mutation serialization caching to avoid 
re-serializing the mutation twice: once for the commitlog and once for remote nodes.
 * Cached serialization for the storage proxy and local commitlog
 * Cached deserialization for the messaging service and local commitlog

This yields a non-trivial perf gain (~7-10%) and a drop in median latency:

[https://tjake.github.io/other/cached-mutations-report.html]

The cached buffer is stored by MessagingService version to avoid being used by 
differing nodes during upgrades.

It also avoids caching mutations larger than a threshold, to avoid GC issues.

GH PR: https://github.com/apache/cassandra/pull/1954

 

  was:
A performance change that adds mutation serialization caching to avoid 
re-serializing the mutation for commitlog and nodes twice. 
 * Cached serialization for storage proxy and local commitlog
 * Cached deserialization for messaging service and local commitlog

This yields a non trivial perf gain (~7-10%) and latency drop (median)

[https://tjake.github.io/other/cached-mutations-report.html]

 

The cached buffer is stored by MessagingService version to avoid being used by 
differing nodes during upgrades.

 

Also, It avoids caching mutations larger than a threshold to avoid GC issues. 

 

 








[jira] [Assigned] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-26 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-17998:
--

Assignee: T Jake Luciani







[jira] [Created] (CASSANDRA-17998) Mutation Serialization Caching

2022-10-26 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRA-17998:
--

 Summary: Mutation Serialization Caching
 Key: CASSANDRA-17998
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17998
 Project: Cassandra
  Issue Type: Improvement
  Components: Messaging/Internode
Reporter: T Jake Luciani


A performance change that adds mutation serialization caching to avoid 
re-serializing the mutation for commitlog and nodes twice. 
 * Cached serialization for storage proxy and local commitlog
 * Cached deserialization for messaging service and local commitlog

This yields a non-trivial perf gain (~7-10%) and a drop in median latency:

[https://tjake.github.io/other/cached-mutations-report.html]

 

The cached buffer is stored by MessagingService version to avoid being used by 
differing nodes during upgrades.

 

It also avoids caching mutations larger than a threshold, to avoid GC issues.

 

 






[jira] [Updated] (CASSANDRA-17803) AlterTypeStatement validate is missing checks

2022-08-08 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17803:
---
Resolution: Won't Fix
Status: Resolved  (was: Triage Needed)

Ok, I can work around it.

> AlterTypeStatement validate is missing checks
> -
>
> Key: CASSANDRA-17803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17803
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDT
>Reporter: T Jake Luciani
>Priority: Normal
>  Labels: lhf
>
> All the checks for a valid AlterTypeStatement are happening in apply(), not 
> validate().
>  
> The same problem exists in CreateTypeStatement:
> [https://github.com/apache/cassandra/blob/45f4f8c1e89e4b221b569ff3bd3e78675eff7747/src/java/org/apache/cassandra/cql3/statements/schema/CreateTypeStatement.java#L66]
>  
> Perhaps all schema statements should have their validations checked.






[jira] [Updated] (CASSANDRA-17803) AlterTypeStatement validate is missing checks

2022-08-05 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17803:
---
Description: 
All the checks for a valid AlterTypeStatement are happening in apply(), not 
validate().

The same problem exists in CreateTypeStatement:
[https://github.com/apache/cassandra/blob/45f4f8c1e89e4b221b569ff3bd3e78675eff7747/src/java/org/apache/cassandra/cql3/statements/schema/CreateTypeStatement.java#L66]

Perhaps all schema statements should have their validations checked.

  was:All the checks for a valid UDT are happening in apply() not validate()








[jira] [Updated] (CASSANDRA-17803) AlterTypeStatement validate is missing checks

2022-08-05 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17803:
---
Summary: AlterTypeStatement validate is missing checks  (was: UserType 
validate is missing checks)

> AlterTypeStatement validate is missing checks
> -
>
> Key: CASSANDRA-17803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17803
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDT
>Reporter: T Jake Luciani
>Priority: Normal
>  Labels: lhf
>
> All the checks for a valid UDT are happening in apply() not validate()






[jira] [Updated] (CASSANDRA-17803) UserType validate is missing checks

2022-08-05 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17803:
---
Impacts:   (was: None)

> UserType validate is missing checks
> ---
>
> Key: CASSANDRA-17803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17803
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDT
>Reporter: T Jake Luciani
>Priority: Normal
>
> All the checks for a valid UDT are happening in apply() not validate()






[jira] [Updated] (CASSANDRA-17803) UserType validate is missing checks

2022-08-05 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17803:
---
Labels: lhf  (was: )

> UserType validate is missing checks
> ---
>
> Key: CASSANDRA-17803
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17803
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDT
>Reporter: T Jake Luciani
>Priority: Normal
>  Labels: lhf
>
> All the checks for a valid UDT are happening in apply() not validate()






[jira] [Created] (CASSANDRA-17803) UserType validate is missing checks

2022-08-05 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRA-17803:
--

 Summary: UserType validate is missing checks
 Key: CASSANDRA-17803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17803
 Project: Cassandra
  Issue Type: Bug
  Components: Feature/UDT
Reporter: T Jake Luciani


All the checks for a valid UDT happen in apply(), not in validate()






[jira] [Updated] (CASSANDRA-17764) Extra writePreparedStatement call

2022-07-21 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17764:
---
Impacts:   (was: None)

> Extra writePreparedStatement call
> -
>
> Key: CASSANDRA-17764
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17764
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Caching
>Reporter: T Jake Luciani
>Priority: Normal
>
> There seems to be a double insert happening in 
> QueryProcessor.storePreparedStatement()
>  
> [https://github.com/apache/cassandra/blob/ab9ab903fa590409251e97fe075e02a64c8aa4f3/src/java/org/apache/cassandra/cql3/QueryProcessor.java#L788-L791]
>  
> I think it's intended to only write to prepared statement when its new to the 
> cache vs every prepare call.  Regardless the extra one should be dropped.






[jira] [Updated] (CASSANDRA-17764) Extra writePreparedStatement call

2022-07-21 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-17764:
---
Description: 
There seems to be a double insert happening in 
QueryProcessor.storePreparedStatement()

 

[https://github.com/apache/cassandra/blob/ab9ab903fa590409251e97fe075e02a64c8aa4f3/src/java/org/apache/cassandra/cql3/QueryProcessor.java#L788-L791]

 

I think it's intended to write to the prepared statement table only when the 
statement is new to the cache, not on every prepare call.  Regardless, the 
extra write should be dropped.

  was:
There seems to be a double insert happening in 
QueryProcessor.storePreparedStatement()

 

[https://github.com/apache/cassandra/blob/ab9ab903fa590409251e97fe075e02a64c8aa4f3/src/java/org/apache/cassandra/cql3/QueryProcessor.java#L788-L791]

 

I think it's intended to only write to prepared statement when its new to the 
cache vs every prepare call.  Regardless the extra one should be dropped.


> Extra writePreparedStatement call
> -
>
> Key: CASSANDRA-17764
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17764
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Caching
>Reporter: T Jake Luciani
>Priority: Normal
>
> There seems to be a double insert happening in 
> QueryProcessor.storePreparedStatement()
>  
> [https://github.com/apache/cassandra/blob/ab9ab903fa590409251e97fe075e02a64c8aa4f3/src/java/org/apache/cassandra/cql3/QueryProcessor.java#L788-L791]
>  
> I think it's intended to only write to prepared statement  table when it's 
> new to the cache vs every prepare call.  Regardless the extra one should be 
> dropped.
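The fix being suggested can be sketched as below. This is a hypothetical illustration, not the actual QueryProcessor API; the class and method names are invented for the example.

```java
// Hypothetical sketch: persist the prepared statement only when it is
// genuinely new to the cache, not on every prepare call.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PreparedCache
{
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    int tableWrites = 0; // counts durable writes, for illustration

    void store(String id, String statement)
    {
        // putIfAbsent returns null only for a brand-new entry, so the
        // durable write happens at most once per statement.
        if (cache.putIfAbsent(id, statement) == null)
            writePreparedStatement(id, statement);
    }

    private void writePreparedStatement(String id, String statement)
    {
        tableWrites++; // stand-in for the system-table insert
    }
}
```

Preparing the same statement twice then results in a single table write rather than two.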






[jira] [Created] (CASSANDRA-17764) Extra writePreparedStatement call

2022-07-21 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRA-17764:
--

 Summary: Extra writePreparedStatement call
 Key: CASSANDRA-17764
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17764
 Project: Cassandra
  Issue Type: Bug
  Components: Local/Caching
Reporter: T Jake Luciani


There seems to be a double insert happening in 
QueryProcessor.storePreparedStatement()

 

[https://github.com/apache/cassandra/blob/ab9ab903fa590409251e97fe075e02a64c8aa4f3/src/java/org/apache/cassandra/cql3/QueryProcessor.java#L788-L791]

 

I think it's intended to only write to the prepared statement when it's new to 
the cache vs every prepare call.  Regardless, the extra one should be dropped.






[jira] [Comment Edited] (CASSANDRASC-22) RESTEasy integration for Cassandra Sidecar

2020-04-29 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRASC-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095639#comment-17095639
 ] 

T Jake Luciani edited comment on CASSANDRASC-22 at 4/29/20, 4:47 PM:
-

-Hi,-

-Is there a reason we need vert.x and jax-rs and resteasy?- 

-I think Jax-RS  is more standard. since we have it and there's not much going 
on yet in the sidecar would you be ok switching (in a new ticket)?-

 

My bad.  I realize vert.x is using RESTEasy.  Never mind me :D

 

Jake


was (Author: tjake):
Hi,

Is there a reason we need vert.x and jax-rs and resteasy? 

I think Jax-RS  is more standard. since we have it and there's not much going 
on yet in the sidecar would you be ok switching (in a new ticket)?

 

Jake

> RESTEasy integration for Cassandra Sidecar
> --
>
> Key: CASSANDRASC-22
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-22
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>  Components: Rest API
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: image-2020-04-27-22-59-40-060.png, 
> image-2020-04-29-01-14-11-756.png
>
>
> Add support for JAX-RS based routing via RESTEasy to Cassandra Sidecar. This 
> also dynamically generates swagger documentation and adds the swagger UI.
> [Branch|https://github.com/dineshjoshi/cassandra-sidecar/tree/resteasy-swagger]
> [Tests|https://circleci.com/workflow-run/a7888146-a22d-45af-983a-8833b77eef59]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRASC-22) RESTEasy integration for Cassandra Sidecar

2020-04-29 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRASC-22?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095639#comment-17095639
 ] 

T Jake Luciani commented on CASSANDRASC-22:
---

Hi,

Is there a reason we need vert.x and jax-rs and resteasy? 

I think JAX-RS is more standard.  Since we have it, and there's not much going 
on yet in the sidecar, would you be OK switching (in a new ticket)?

 

Jake

> RESTEasy integration for Cassandra Sidecar
> --
>
> Key: CASSANDRASC-22
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-22
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>  Components: Rest API
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: image-2020-04-27-22-59-40-060.png, 
> image-2020-04-29-01-14-11-756.png
>
>
> Add support for JAX-RS based routing via RESTEasy to Cassandra Sidecar. This 
> also dynamically generates swagger documentation and adds the swagger UI.
> [Branch|https://github.com/dineshjoshi/cassandra-sidecar/tree/resteasy-swagger]
> [Tests|https://circleci.com/workflow-run/a7888146-a22d-45af-983a-8833b77eef59]






[jira] [Commented] (CASSANDRASC-17) Ensure sidecar can control multiple Cassandra instances

2020-04-27 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRASC-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17093512#comment-17093512
 ] 

T Jake Luciani commented on CASSANDRASC-17:
---

Back to this ticket: I do think the scope here is more of a v2 thing.  We 
should really focus on getting everything correct for a single instance per 
node before moving on to many nodes.  With a versioned API (I agree we should 
always have an /api/vX/ root), adding this over time is less of a concern.  
Same goes for batching, IMO.

> Ensure sidecar can control multiple Cassandra instances
> ---
>
> Key: CASSANDRASC-17
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-17
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jon Haddad
>Priority: Normal
>
> Since we can run multiple hosts per node, we should allow a single sidecar 
> process to control multiple Cassandra nodes.
> I am not sure if we should encode the id of the node in the URL or as a 
> parameter that would have to be present in every request if using > 1 node.  
> I lean towards the latter - meaning it’s a slight inconvenience for a very 
> small group, rather than messing with the URL scheme for everyone else.  I 
> don’t hold this opinion very strongly though.  I’d like to discuss before 
> doing any work here.  
> Thoughts?






[jira] [Commented] (CASSANDRASC-17) Ensure sidecar can control multiple Cassandra instances

2020-04-22 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRASC-17?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17089806#comment-17089806
 ] 

T Jake Luciani commented on CASSANDRASC-17:
---

I like the idea of having it be part of the URI.

 

Like /create returns an ID

Then you can do:

   /node/{ID}/start

 

 

> Ensure sidecar can control multiple Cassandra instances
> ---
>
> Key: CASSANDRASC-17
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-17
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jon Haddad
>Priority: Normal
>
> Since we can run multiple hosts per node, we should allow a single sidecar 
> process to control multiple Cassandra nodes.
> I am not sure if we should encode the id of the node in the URL or as a 
> parameter that would have to be present in every request if using > 1 node.  
> I lean towards the latter - meaning it’s a slight inconvenience for a very 
> small group, rather than messing with the URL scheme for everyone else.  I 
> don’t hold this opinion very strongly though.  I’d like to discuss before 
> doing any work here.  
> Thoughts?
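The two addressing schemes discussed above can be sketched side by side. This is an illustrative example only; the route and method names are hypothetical, not the sidecar's actual API.

```java
// Hypothetical sketch of the two schemes: node id embedded in the path
// (/node/{id}/start) versus a parameter that defaults to the local instance.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class NodeRouter
{
    private static final Pattern PATH_STYLE = Pattern.compile("/node/([^/]+)/(start|stop)");

    // Path-style: the id is part of the URI itself.
    static String dispatchPath(String path)
    {
        Matcher m = PATH_STYLE.matcher(path);
        return m.matches() ? m.group(2) + ":" + m.group(1) : "404";
    }

    // Parameter-style: the URI stays fixed; the id rides along as a
    // parameter, defaulting to the single local instance when absent.
    static String dispatchParam(String op, String nodeId)
    {
        return op + ":" + (nodeId == null ? "local" : nodeId);
    }
}
```

The parameter style leaves single-instance deployments untouched, which is the trade-off the comment above is weighing against a uniform URL scheme.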






[jira] [Updated] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRASC-16:
--
Description: 
The sidecar project should support many C* versions, from 3.0 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecar's use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent API.

  was:
The sidecar project should be support many C* versions from 2.1 to 4.0 

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecars use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  


> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should be support many C* versions from 3.0 to 4.0 
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  






[jira] [Created] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRASC-16:
-

 Summary: Incorporate sidecar java agent, allowing project to work 
with existing Cassandra releases
 Key: CASSANDRASC-16
 URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
 Project: Sidecar for Apache Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani


The sidecar project should support many C* versions, from 2.1 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecar's use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent API.
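A minimal java-agent skeleton illustrates the hook mechanism described above. The real sidecar agent builds its instrumentation on ByteBuddy; this sketch only shows the entry point the JVM invokes for `-javaagent` jars, with hypothetical names.

```java
// Hypothetical sketch of a java agent entry point; the `installed` flag
// exists only so the example is observable.
import java.lang.instrument.Instrumentation;

class SidecarAgent
{
    static volatile boolean installed = false;

    // Called by the JVM before main() when started with
    // -javaagent:sidecar-agent.jar (requires a Premain-Class manifest entry).
    public static void premain(String agentArgs, Instrumentation inst)
    {
        // ByteBuddy transformations would be registered via `inst` here,
        // e.g. to expose operational hooks over a local unix socket.
        installed = true;
    }
}
```

Because the agent ships separately from Cassandra, it can add hooks to existing releases without waiting on the Cassandra release cadence, which is the point made above.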






[jira] [Assigned] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRASC-16:
-

Assignee: T Jake Luciani

> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should be supported by many C* versions from 2.1 to 4.0 
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  






[jira] [Updated] (CASSANDRASC-16) Incorporate sidecar java agent, allowing project to work with existing Cassandra releases

2020-04-20 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-16?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRASC-16:
--
Description: 
The sidecar project should support many C* versions, from 2.1 to 4.0.

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecar's use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  

  was:
The sidecar project should be supported by many C* versions from 2.1 to 4.0 

In order to provide a consistent set of APIs and de-couple the development 
cadence of the sidecar project from the Cassandra project, it would be most 
advantageous to use a java agent to enable new operational hooks for the 
sidecars use.

In the Management API we use an agent to support the following:
 * Add a local CQL syntax for executing operations - example: CALL 
compact('keyspace', 'table')
 * Adds a local unix socket for the sidecar to communicate with c* via the java 
driver
 * Enables functionality required for sidecar
 ** default system_auth to NTS
 ** avoid setting up default cassandra superuser 

 

The agent is built on the ByteBuddy Agent api.  


> Incorporate sidecar java agent, allowing project to work with existing 
> Cassandra releases
> -
>
> Key: CASSANDRASC-16
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-16
> Project: Sidecar for Apache Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> The sidecar project should be support many C* versions from 2.1 to 4.0 
> In order to provide a consistent set of APIs and de-couple the development 
> cadence of the sidecar project from the Cassandra project, it would be most 
> advantageous to use a java agent to enable new operational hooks for the 
> sidecars use.
> In the Management API we use an agent to support the following:
>  * Add a local CQL syntax for executing operations - example: CALL 
> compact('keyspace', 'table')
>  * Adds a local unix socket for the sidecar to communicate with c* via the 
> java driver
>  * Enables functionality required for sidecar
>  ** default system_auth to NTS
>  ** avoid setting up default cassandra superuser 
>  
> The agent is built on the ByteBuddy Agent api.  






[jira] [Updated] (CASSANDRA-15703) When CDC is disabled bootstrapping breaks

2020-04-07 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-15703:
---
Fix Version/s: 3.11.x

> When CDC is disabled bootstrapping breaks
> -
>
> Key: CASSANDRA-15703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15703
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Bootstrap and Decommission
>Reporter: T Jake Luciani
>Priority: Normal
> Fix For: 3.11.x
>
>
> Related to CASSANDRA-12697
> There is an edge case left over.  If a cluster had enabled CDC on a table 
> then subsequently set cdc=false, subsequent bootstraps break. 
>  
> This is because the cdc column is false on the existing nodes but null on the 
> bootstrapping node, causing the schema sha to never match.
>  
> There are a couple possible fixes:
>   1.  Since 12697 was only about upgrades we can serialize the cdc column IFF 
> the cluster nodes are all on the same version.
>   2.  We can force cdc=false on all tables where it's null.
>  
> I think #1 is probably simpler. #2 would probably cause more of the same 
> problem if nodes are not all updated with the fix.
>  
>   
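Fix option #1 above can be sketched as a version gate on schema serialization. This is a hypothetical illustration, not the actual Cassandra schema code; the class and method names are invented.

```java
// Hypothetical sketch: include the cdc column in the serialized schema only
// when every node runs the same version, so the schema digest computed on
// existing and bootstrapping nodes stays comparable.
import java.util.Set;

class CdcSchemaGate
{
    // nodeVersions: the release versions reported by all cluster members.
    static boolean serializeCdcColumn(Set<String> nodeVersions)
    {
        // Mixed-version cluster (e.g. mid-upgrade): omit cdc entirely rather
        // than let false-vs-null mismatches break the digest.
        return nodeVersions.size() == 1;
    }
}
```

This keeps upgrade behavior (the CASSANDRA-12697 case) intact while letting homogeneous clusters agree on a digest that includes the cdc column.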






[jira] [Created] (CASSANDRA-15703) When CDC is disabled bootstrapping breaks

2020-04-07 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRA-15703:
--

 Summary: When CDC is disabled bootstrapping breaks
 Key: CASSANDRA-15703
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15703
 Project: Cassandra
  Issue Type: Bug
  Components: Consistency/Bootstrap and Decommission
Reporter: T Jake Luciani


Related to CASSANDRA-12697

There is an edge case left over.  If a cluster had enabled CDC on a table and 
then set cdc=false, later bootstraps break.

 

This is because the cdc column is false on the existing nodes but null on the 
bootstrapping node, causing the schema sha to never match.

 

There are a couple possible fixes:

  1.  Since 12697 was only about upgrades we can serialize the cdc column IFF 
the cluster nodes are all on the same version.

  2.  We can force cdc=false on all tables where it's null.

 

I think #1 is probably simpler. #2 would probably cause more of the same 
problem if nodes are not all updated with the fix.

 

  






[jira] [Commented] (CASSANDRA-15657) Improve zero-copy-streaming containment check by using file sections

2020-04-07 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077296#comment-17077296
 ] 

T Jake Luciani commented on CASSANDRA-15657:


Ok thanks [~marcuse].  [~djoshi] any other concerns or can we commit this?

> Improve zero-copy-streaming containment check by using file sections
> 
>
> Key: CASSANDRA-15657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15657
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Streaming and Messaging
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Normal
> Fix For: 4.0
>
>
> Currently zero copy streaming is only enabled for leveled-compaction strategy 
> and it checks if all keys in the sstables are included in the transferred 
> ranges.
> This is very inefficient. The containment check can be improved by checking 
> if transferred sections (the transferred file positions) cover entire sstable.
> I also enabled ZCS for all compaction strategies since the new containment 
> check is very fast.
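The section-based containment check described above amounts to a coverage test over file positions. The sketch below is illustrative only; the types and names are hypothetical, not the actual streaming code.

```java
// Hypothetical sketch: instead of scanning every key, verify that the
// transferred file sections cover the whole sstable data file.
import java.util.List;

class FileSection
{
    final long start, end; // [start, end) byte positions in the data file

    FileSection(long start, long end) { this.start = start; this.end = end; }
}

class Containment
{
    // True when the sections (sorted by start) cover [0, fileLength)
    // with no gaps.
    static boolean coversEntireFile(List<FileSection> sections, long fileLength)
    {
        long covered = 0;
        for (FileSection s : sections)
        {
            if (s.start > covered)
                return false; // gap: part of the sstable was not transferred
            covered = Math.max(covered, s.end);
        }
        return covered >= fileLength;
    }
}
```

A linear pass over a handful of sections is far cheaper than iterating every key in the sstable, which is why the check can be enabled for all compaction strategies.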






[jira] [Updated] (CASSANDRA-15657) Improve zero-copy-streaming containment check by using file sections

2020-04-07 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-15657:
---
Reviewers: Dinesh Joshi, T Jake Luciani  (was: Dinesh Joshi, T Jake Luciani)
   Status: Review In Progress  (was: Patch Available)

> Improve zero-copy-streaming containment check by using file sections
> 
>
> Key: CASSANDRA-15657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15657
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Streaming and Messaging
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Normal
> Fix For: 4.0
>
>
> Currently zero copy streaming is only enabled for leveled-compaction strategy 
> and it checks if all keys in the sstables are included in the transferred 
> ranges.
> This is very inefficient. The containment check can be improved by checking 
> if transferred sections (the transferred file positions) cover entire sstable.
> I also enabled ZCS for all compaction strategies since the new containment 
> check is very fast.






[jira] [Commented] (CASSANDRA-15657) Improve zero-copy-streaming containment check by using file sections

2020-04-06 Thread T Jake Luciani (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17076366#comment-17076366
 ] 

T Jake Luciani commented on CASSANDRA-15657:


[~aleksey] Can you comment here? We are not sure we understand the potential 
problem.

> Improve zero-copy-streaming containment check by using file sections
> 
>
> Key: CASSANDRA-15657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15657
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Streaming and Messaging
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Normal
> Fix For: 4.0
>
>
> Currently zero copy streaming is only enabled for leveled-compaction strategy 
> and it checks if all keys in the sstables are included in the transferred 
> ranges.
> This is very inefficient. The containment check can be improved by checking 
> if transferred sections (the transferred file positions) cover entire sstable.
> I also enabled ZCS for all compaction strategies since the new containment 
> check is very fast.






[jira] [Updated] (CASSANDRA-15657) Improve zero-copy-streaming containment check by using file sections

2020-04-02 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-15657:
---
Status: Ready to Commit  (was: Review In Progress)

> Improve zero-copy-streaming containment check by using file sections
> 
>
> Key: CASSANDRA-15657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15657
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Streaming and Messaging
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Normal
> Fix For: 4.0
>
>
> Currently zero copy streaming is only enabled for leveled-compaction strategy 
> and it checks if all keys in the sstables are included in the transferred 
> ranges.
> This is very inefficient. The containment check can be improved by checking 
> if transferred sections (the transferred file positions) cover entire sstable.
> I also enabled ZCS for all compaction strategies since the new containment 
> check is very fast.






[jira] [Assigned] (CASSANDRASC-13) Work related to adding required features from DS Management API to C* Sidecar

2020-03-30 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRASC-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRASC-13:
-

Assignee: T Jake Luciani

> Work related to adding required features from DS Management API to C* Sidecar
> -
>
> Key: CASSANDRASC-13
> URL: https://issues.apache.org/jira/browse/CASSANDRASC-13
> Project: Sidecar for Apache Cassandra
>  Issue Type: Epic
>  Components: Configuration, Rest API
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> This ticket covers the work required to migrate work done in the [Management 
> API by 
> DataStax|http://github.com/datastax/management-api-for-apache-cassandra] into 
> the official C* Sidecar.
> For the most part this should be relatively simple as both have a similar 
> approach.  Some of the differences are highlighted below.
>  
>  * Local access only.
>     - Each sidecar process is responsible for the local c* instance
>  *  CQL communication only.
>     - Communication between Management API and C* is via CQL only
>     - A simple CQL -> method mechanism is added to C* resulting in the 
> ability to execute statements like: "CALL NodeOps.compact('keyspace');"
>  *  Secure by default.
>     - The Management API talks to C* via a local unix socket.
>     - The Management API has its own local unix socket so local tools can 
> communicate securely.
>     - Optional Mutual TLS support for secure services
>     - Disables default cassandra/cassandra superuser
>  *  No configuration file
>     - Keeps deployments simple
>  *  Hooks into Existing C* releases by utilizing a new java agent.
>  
> The java agent would obviously not be needed here.  But I do like the option 
> of supporting older releases with this approach. Thoughts?
>  
> Happy to discuss in this ticket before starting any work.  Please check out 
> the linked project above.






[jira] [Created] (CASSANDRASC-13) Work related to adding required features from DS Management API to C* Sidecar

2020-03-30 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRASC-13:
-

 Summary: Work related to adding required features from DS 
Management API to C* Sidecar
 Key: CASSANDRASC-13
 URL: https://issues.apache.org/jira/browse/CASSANDRASC-13
 Project: Sidecar for Apache Cassandra
  Issue Type: Epic
  Components: Configuration, Rest API
Reporter: T Jake Luciani


This ticket covers the work required to migrate work done in the [Management 
API by DataStax|http://github.com/datastax/management-api-for-apache-cassandra] 
into the official C* Sidecar.

For the most part this should be relatively simple as both have a similar 
approach.  Some of the differences are highlighted below.

 
 * Local access only.
    - Each sidecar process is responsible for the local c* instance
 *  CQL communication only.
    - Communication between Management API and C* is via CQL only
    - A simple CQL -> method mechanism is added to C* resulting in the ability 
to execute statements like: "CALL NodeOps.compact('keyspace');"
 *  Secure by default.
    - The Management API talks to C* via a local unix socket.
    - The Management API has its own local unix socket so local tools can 
communicate securely.
    - Optional Mutual TLS support for secure services
    - Disables default cassandra/cassandra superuser
 *  No configuration file
    - Keeps deployments simple
 *  Hooks into Existing C* releases by utilizing a new java agent.

 

The java agent would obviously not be needed here.  But I do like the option of 
supporting older releases with this approach. Thoughts?

 

Happy to discuss in this ticket before starting any work.  Please check out the 
linked project above.






[jira] [Deleted] (CASSANDRA-15675) Work related to adding required features from DS Management API to C* Sidecar

2020-03-30 Thread T Jake Luciani (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani deleted CASSANDRA-15675:
---


> Work related to adding required features from DS Management API to C* Sidecar
> -
>
> Key: CASSANDRA-15675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15675
> Project: Cassandra
>  Issue Type: Epic
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Normal
>
> This ticket covers the work required to migrate work done in the [Management 
> API by 
> DataStax|http://github.com/datastax/management-api-for-apache-cassandra] into 
> the official C* Sidecar.
> For the most part this should be relatively simple as both have a similar 
> approach.  Some of the differences are highlighted below.
>  
>  * Local access only. 
>     - Each sidecar process is responsible for the local c* instance
>  *  CQL communication only.
>     - Communication between Management API and C* is via CQL only
>     - A simple CQL -> method mechanism is added to C* resulting in the 
> ability to execute statements like: "CALL NodeOps.compact('keyspace');"
>  *  Secure by default. 
>     - The Management API talks to C* via a local unix socket.
>     - The Management API has its own local unix socket so local tools can 
> communicate securely.
>     - Optional Mutual TLS support for secure services
>     - Disables default cassandra/cassandra superuser
>  *  No configuration file
>     - Keeps deployments simple
>  *  Hooks into Existing C* releases by utilizing a new java agent.
>  
> The java agent would obviously not be needed here.  But I do like the option 
> of supporting older releases with this approach. Thoughts?
>  
> Happy to discuss in this ticket before starting any work.  Please check out 
> the linked project above.






[jira] [Created] (CASSANDRA-15675) Work related to adding required features from DS Management API to C* Sidecar

2020-03-30 Thread T Jake Luciani (Jira)
T Jake Luciani created CASSANDRA-15675:
--

 Summary: Work related to adding required features from DS 
Management API to C* Sidecar
 Key: CASSANDRA-15675
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15675
 Project: Cassandra
  Issue Type: Epic
  Components: Sidecar
Reporter: T Jake Luciani
Assignee: T Jake Luciani


This ticket covers the work required to migrate work done in the [Management 
API by DataStax|http://github.com/datastax/management-api-for-apache-cassandra] 
into the official C* Sidecar.

For the most part this should be relatively simple as both have a similar 
approach.  Some of the differences are highlighted below.

 
 * Local access only. 
    - Each sidecar process is responsible for the local c* instance
 *  CQL communication only.
    - Communication between Management API and C* is via CQL only
    - A simple CQL -> method mechanism is added to C* resulting in the ability 
to execute statements like: "CALL NodeOps.compact('keyspace');"
 *  Secure by default. 
    - The Management API talks to C* via a local unix socket.
    - The Management API has its own local unix socket so local tools can 
communicate securely.
    - Optional Mutual TLS support for secure services
    - Disables default cassandra/cassandra superuser
 *  No configuration file
    - Keeps deployments simple
 *  Hooks into Existing C* releases by utilizing a new java agent.

 

The java agent would obviously not be needed here.  But I do like the option of 
supporting older releases with this approach. Thoughts?

 

Happy to discuss in this ticket before starting any work.  Please check out the 
linked project above.






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-02-22 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373549#comment-16373549
 ] 

T Jake Luciani commented on CASSANDRA-13929:


I'd prefer to reduce the array size vs. removing it, but you have been at this 
for so long I'm starting to think it's not worth keeping around.  I'd like to 
re-set up the test I had to check the improvement of CASSANDRA-9766 and see how 
much of an impact it has.

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.x
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.






[jira] [Commented] (CASSANDRA-13997) Upgrade Guava to 23.3 and Airline to 0.8

2017-11-09 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246126#comment-16246126
 ] 

T Jake Luciani commented on CASSANDRA-13997:


[~krummas]  There's no need to add the j2objc jar to the lib dir.  You can 
simply add it to the build dependency jars; the compile-command-annotation 
entries in build.xml are an example of this.

> Upgrade Guava to 23.3 and Airline to 0.8
> 
>
> Key: CASSANDRA-13997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13997
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.0
>
> Attachments: airline-0.8.jar.asc, guava-23.3-jre.jar.asc
>
>
> For 4.0 we should upgrade guava to the latest version
> patch here: https://github.com/krummas/cassandra/commits/marcuse/guava23
> A bunch of quite commonly used methods have been deprecated since guava 18 
> which we use now ({{Throwables.propagate}} for example), this patch mostly 
> updates uses where compilation fails. {{Futures.transform(ListenableFuture 
> ..., AsyncFunction ...}} was deprecated in Guava 19 and removed in 20 for 
> example, we should probably open new tickets to remove calls to all 
> deprecated guava methods.
> Also had to add a dependency on {{com.google.j2objc.j2objc-annotations}}, to 
> avoid some build-time warnings (maybe due to 
> https://github.com/google/guava/commit/fffd2b1f67d158c7b4052123c5032b0ba54a910d
>  ?)






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2017-10-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190006#comment-16190006
 ] 

T Jake Luciani commented on CASSANDRA-13929:


This was part of CASSANDRA-9766 and addresses one of the main allocation 
culprits during streaming.

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.






[jira] [Commented] (CASSANDRA-13811) Unable to find table . at maybeLoadschemainfo (StressProfile.java)

2017-10-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189729#comment-16189729
 ] 

T Jake Luciani commented on CASSANDRA-13811:


You need to specify the keyspace in the table definition.

> Unable to find table . at maybeLoadschemainfo 
> (StressProfile.java)
> 
>
> Key: CASSANDRA-13811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13811
> Project: Cassandra
>  Issue Type: Bug
>  Components: Stress
> Environment: 3 node cluster
> Node 1 --> 172.27.21.16(Seed Node)
> Node 2 --> 172.27.21.18
> Node 3 --> 172.27.21.19
> *cassandra.yaml paramters for all the nodes:-*
> 1) seeds: "172.27.21.16"
> 2) write_request_timeout_in_ms: 5000
> 3) listen_address: 172.27.21.1(6,8,9
> 4) rpc_address: 172.27.21.1(6,8,9)
>Reporter: Akshay Jindal
>Priority: Minor
> Fix For: 3.10
>
> Attachments: code.yaml, stress-script.sh
>
>
> * Please find attached my .yaml and .sh file.
> * Now the problem is: if I run stress-script.sh the first time, just after 
> firing up Cassandra, it works fine on the cluster, but when I run 
> stress-script.sh again, it gives the following error:
> *Unable to find prutorStress3node.code*
> at 
> org.apache.cassandra.stress.StressProfile.maybeLoadSchemaInfo(StressProfile.java:306)
>   at 
> org.apache.cassandra.stress.StressProfile.maybeCreateSchema(StressProfile.java:273)
>   at 
> org.apache.cassandra.stress.StressProfile.newGenerator(StressProfile.java:676)
>   at 
> org.apache.cassandra.stress.StressProfile.printSettings(StressProfile.java:129)
>   at 
> org.apache.cassandra.stress.settings.StressSettings.printSettings(StressSettings.java:383)
>   at org.apache.cassandra.stress.Stress.run(Stress.java:95)
>   at org.apache.cassandra.stress.Stress.main(Stress.java:62)
> In the file 
> [https://insight.io/github.com/apache/cassandra/blob/trunk/tools/stress/src/org/apache/cassandra/stress/StressProfile.java?line=289]
>  ,I saw that table metadata is being populated to NULL. I tried to make sense 
> of the stack trace, but was not able to make anything of it. Please give me 
> some directions as to what might have gone wrong?






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2017-10-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189717#comment-16189717
 ] 

T Jake Luciani commented on CASSANDRA-13929:


The default for recycler is 32k instances per thread. So perhaps change this to 
8192 per thread and see if that makes a difference.
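
The capacity cap suggested here can be illustrated with a minimal bounded per-thread pool. This is a hand-rolled sketch of the idea only, not Cassandra or Netty code (Netty's `Recycler` exposes a similar per-thread limit via its `maxCapacityPerThread` constructor argument); the class and field names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Illustrative sketch only: a bounded per-thread object pool capturing the
// idea of capping the recycler (e.g. 8192 instead of the 32k default).
// Once the cap is hit, recycled objects are dropped and left for GC rather
// than pinning memory indefinitely.
final class BoundedPool<T>
{
    private final int maxCapacityPerThread; // hypothetical cap, e.g. 8192
    private final ThreadLocal<ArrayDeque<T>> stack =
            ThreadLocal.withInitial(ArrayDeque::new);

    BoundedPool(int maxCapacityPerThread)
    {
        this.maxCapacityPerThread = maxCapacityPerThread;
    }

    // Reuse a pooled instance if one is available, otherwise allocate.
    T get(Supplier<T> factory)
    {
        T pooled = stack.get().poll();
        return pooled != null ? pooled : factory.get();
    }

    // Returns false when the per-thread cap is reached and the object is dropped.
    boolean recycle(T obj)
    {
        ArrayDeque<T> s = stack.get();
        if (s.size() >= maxCapacityPerThread)
            return false;
        s.push(obj);
        return true;
    }
}
```

With a cap of 8192, a burst of BTree builders would still be pooled, but the long tail beyond the cap becomes collectible instead of accumulating per thread.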

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2017-10-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189587#comment-16189587
 ] 

T Jake Luciani commented on CASSANDRA-13929:


The recycler is meant to cache objects for reuse. By nulling the handle you 
are effectively invalidating the cache every time.

I don't think this is a leak, but perhaps we should limit this cache to hold 
fewer items (see the recycler constructor).

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_snapshot_heaputilization.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects, but I doubt it.






[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2017-09-26 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180739#comment-16180739
 ] 

T Jake Luciani commented on CASSANDRA-8099:
---

[~tsteinmaurer] A number of GC problems were addressed in CASSANDRA-12269

> Refactor and modernize the storage engine
> -
>
> Key: CASSANDRA-8099
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0 alpha 1
>
> Attachments: 8099-nit
>
>
> The current storage engine (which for this ticket I'll loosely define as "the 
> code implementing the read/write path") is suffering from old age. One of the 
> main problem is that the only structure it deals with is the cell, which 
> completely ignores the more high level CQL structure that groups cell into 
> (CQL) rows.
> This leads to many inefficiencies, like the fact that during a reads we have 
> to group cells multiple times (to count on replica, then to count on the 
> coordinator, then to produce the CQL resultset) because we forget about the 
> grouping right away each time (so lots of useless cell names comparisons in 
> particular). But outside inefficiencies, having to manually recreate the CQL 
> structure every time we need it for something is hindering new features and 
> makes the code more complex that it should be.
> Said storage engine also has tons of technical debt. To pick an example, the 
> fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
> hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
> to go into to simply "remove the last query result".
> So I want to bite the bullet and modernize this storage engine. I propose to 
> do 2 main things:
> # Make the storage engine more aware of the CQL structure. In practice, 
> instead of having partitions be a simple iterable map of cells, it should be 
> an iterable list of row (each being itself composed of per-column cells, 
> though obviously not exactly the same kind of cell we have today).
> # Make the engine more iterative. What I mean here is that in the read path, 
> we end up reading all cells in memory (we put them in a ColumnFamily object), 
> but there is really no reason to. If instead we were working with iterators 
> all the way through, we could get to a point where we're basically 
> transferring data from disk to the network, and we should be able to reduce 
> GC substantially.
> Please note that such refactor should provide some performance improvements 
> right off the bat, but it's not its primary goal either. Its primary goal is 
> to simplify the storage engine and add abstractions that are better suited to 
> further optimizations.






[jira] [Assigned] (CASSANDRA-13873) Ref bug in Scrub

2017-09-20 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13873:
--

Assignee: Joel Knighton

> Ref bug in Scrub
> 
>
> Key: CASSANDRA-13873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: Joel Knighton
>Priority: Critical
>
> I'm hitting a Ref bug when many scrubs run against a node.  This doesn't 
> happen on 3.0.X.  I'm not sure if/if not this happens with compactions too 
> but I suspect it does.
> I'm not seeing any Ref leaks or double frees.
> To Reproduce:
> {quote}
> ./tools/bin/cassandra-stress write n=10m -rate threads=100
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> {quote}
> Eventually in the logs you get:
> WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
> NoSpamLogger.java:97 - Spinning trying to capture readers 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],
> *released: 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],*
>  
> This released table has a selfRef of 0 but is in the Tracker






[jira] [Commented] (CASSANDRA-13873) Ref bug in Scrub

2017-09-14 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166914#comment-16166914
 ] 

T Jake Luciani commented on CASSANDRA-13873:


I should probably mention this happens when we cancel compactions.  In 3.0 they 
would just wait until previous runs finished; now we cancel them.

> Ref bug in Scrub
> 
>
> Key: CASSANDRA-13873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>
> I'm hitting a Ref bug when many scrubs run against a node.  This doesn't 
> happen on 3.0.X.  I'm not sure if/if not this happens with compactions too 
> but I suspect it does.
> I'm not seeing any Ref leaks or double frees.
> To Reproduce:
> {quote}
> ./tools/bin/cassandra-stress write n=10m -rate threads=100
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> {quote}
> Eventually in the logs you get:
> WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
> NoSpamLogger.java:97 - Spinning trying to capture readers 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],
> *released: 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],*
>  
> This released table has a selfRef of 0 but is in the Tracker






[jira] [Updated] (CASSANDRA-13873) Ref bug in Scrub

2017-09-14 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13873:
---
Description: 
I'm hitting a Ref bug when many scrubs run against a node.  This doesn't happen 
on 3.0.X.  I'm not sure if/if not this happens with compactions too but I 
suspect it does.

I'm not seeing any Ref leaks or double frees.

To Reproduce:

{quote}
./tools/bin/cassandra-stress write n=10m -rate threads=100
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
{quote}

Eventually in the logs you get:
WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
NoSpamLogger.java:97 - Spinning trying to capture readers 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],
*released: 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],*
 

This released table has a selfRef of 0 but is in the Tracker
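As a toy illustration of the invariant behind the "Spinning trying to capture readers" log line (illustrative classes, not Cassandra's actual Ref/Tracker API): a reader whose selfRef has dropped to 0 but is still visible in the tracker makes every capture attempt fail and retry, which is the spin reported above.

```java
// Toy model: a released reader (selfRef == 0) left in the tracker's live set
// causes the capture loop to fail forever. Names are illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RefDemo {
    static final class Reader {
        final String path;
        final AtomicInteger selfRef = new AtomicInteger(1);
        Reader(String path) { this.path = path; }
        // Succeeds only while the reader is still live (selfRef > 0)
        boolean tryRef() {
            int c;
            do {
                c = selfRef.get();
                if (c <= 0) return false;
            } while (!selfRef.compareAndSet(c, c + 1));
            return true;
        }
        void release() { selfRef.decrementAndGet(); }
    }

    // Returns true only if every visible reader could be captured
    static boolean captureAll(List<Reader> tracker) {
        for (Reader r : tracker)
            if (!r.tryRef())
                return false; // a released reader is still visible: caller retries
        return true;
    }

    public static void main(String[] args) {
        List<Reader> tracker = new ArrayList<>();
        Reader live = new Reader("mc-20-big-Data.db");
        Reader released = new Reader("mc-5-big-Data.db");
        tracker.add(live);
        tracker.add(released);
        released.release(); // selfRef drops to 0, but it was never removed
        System.out.println("capture succeeded: " + captureAll(tracker)); // false
    }
}
```

In this model the fix is to remove a reader from the tracker's live set at the same time its selfRef reaches zero, so the capture loop never sees it.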


  was:
I'm hitting a Ref bug when many scrubs run against a node.  This doesn't happen 
on 3.0.X.  I'm not sure whether this happens with compactions too, but I 
suspect it does.

I'm not seeing any Ref leaks or double frees.

To Reproduce:

{quote}
./tools/bin/cassandra-stress write n=10m -rate threads=100
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
{quote}

Eventually in the logs you get:
WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
NoSpamLogger.java:97 - Spinning trying to capture readers 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],*
 released: 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],
 *



> Ref bug in Scrub
> 
>
> Key: CASSANDRA-13873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>
> I'm hitting a Ref bug when many scrubs run against a node.  This doesn't 
> happen on 3.0.X.  I'm not sure whether this happens with compactions too, 
> but I suspect it does.
> I'm not seeing any Ref leaks or double frees.
> To Reproduce:
> {quote}
> ./tools/bin/cassandra-stress write n=10m -rate threads=100
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> {quote}
> Eventually in the logs you get:
> WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
> NoSpamLogger.java:97 - Spinning trying to capture readers 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
> 

[jira] [Created] (CASSANDRA-13873) Ref bug in Scrub

2017-09-14 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-13873:
--

 Summary: Ref bug in Scrub
 Key: CASSANDRA-13873
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13873
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani


I'm hitting a Ref bug when many scrubs run against a node.  This doesn't happen 
on 3.0.X.  I'm not sure whether this happens with compactions too, but I 
suspect it does.

I'm not seeing any Ref leaks or double frees.

To Reproduce:

{quote}
./tools/bin/cassandra-stress write n=10m -rate threads=100
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
#Ctrl-C
./bin/nodetool scrub
{quote}

Eventually in the logs you get:
WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
NoSpamLogger.java:97 - Spinning trying to capture readers 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],*
 released: 
[BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],
 *







[jira] [Assigned] (CASSANDRA-11091) Insufficient disk space in memtable flush should trigger disk fail policy

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-11091:
--

Assignee: (was: Dimitar Dimitrov)

> Insufficient disk space in memtable flush should trigger disk fail policy
> -
>
> Key: CASSANDRA-11091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11091
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Richard Low
>
> If there's insufficient disk space to flush, 
> DiskAwareRunnable.getWriteDirectory throws and the flush fails. The 
> commitlogs then grow indefinitely because the latch is never counted down.
> This should be an FSError so the disk fail policy is triggered. 






[jira] [Assigned] (CASSANDRA-13692) CompactionAwareWriter_getWriteDirectory throws incompatible exceptions

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13692:
--

Assignee: Dimitar Dimitrov

> CompactionAwareWriter_getWriteDirectory throws incompatible exceptions
> --
>
> Key: CASSANDRA-13692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Hao Zhong
>Assignee: Dimitar Dimitrov
>  Labels: lhf
>
> The CompactionAwareWriter_getWriteDirectory throws RuntimeException:
> {code}
> public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> 
> sstables, long estimatedWriteSize)
> {
> File directory = null;
> for (SSTableReader sstable : sstables)
> {
> if (directory == null)
> directory = sstable.descriptor.directory;
> if (!directory.equals(sstable.descriptor.directory))
> {
> logger.trace("All sstables not from the same disk - putting 
> results in {}", directory);
> break;
> }
> }
> Directories.DataDirectory d = 
> getDirectories().getDataDirectoryForFile(directory);
> if (d != null)
> {
> long availableSpace = d.getAvailableSpace();
> if (availableSpace < estimatedWriteSize)
> throw new RuntimeException(String.format("Not enough space to 
> write %s to %s (%s available)",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize),
>  d.location,
>  
> FBUtilities.prettyPrintMemory(availableSpace)));
> logger.trace("putting compaction results in {}", directory);
> return d;
> }
> d = getDirectories().getWriteableLocation(estimatedWriteSize);
> if (d == null)
> throw new RuntimeException(String.format("Not enough disk space 
> to store %s",
>  
> FBUtilities.prettyPrintMemory(estimatedWriteSize)));
> return d;
> }
> {code}
> However, the thrown exception does not trigger the failure policy. 
> CASSANDRA-11448 fixed a similar problem. The buggy code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new RuntimeException("Insufficient disk space to write " + 
> writeSize + " bytes");
> return directory;
> }
> {code}
> The fixed code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
> Directories.DataDirectory directory = 
> getDirectories().getWriteableLocation(writeSize);
> if (directory == null)
> throw new FSWriteError(new IOException("Insufficient disk space 
> to write " + writeSize + " bytes"), "");
> return directory;
> }
> {code}
> The fixed code throws FSWriteError and triggers the failure policy.
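As a sketch of why the exception type matters here (simplified stand-ins, not Cassandra's actual FSWriteError or failure-policy code): a handler that triggers the disk failure policy on FSError-family throwables catches an FSWriteError but lets a plain RuntimeException fly past it.

```java
// Hypothetical stand-ins for FSWriteError and the disk-failure-policy hook;
// simplified for illustration, not the project's actual classes.
import java.io.IOException;

public class FailPolicyDemo {
    static class FSWriteError extends Error {
        FSWriteError(Throwable cause, String path) { super(path, cause); }
    }

    static boolean policyTriggered = false;

    static void runWithPolicy(Runnable task) {
        try {
            task.run();
        } catch (FSWriteError e) {
            policyTriggered = true;   // the disk failure policy fires here
        } catch (RuntimeException e) {
            // a plain RuntimeException bypasses the disk-specific handling
        }
    }

    public static void main(String[] args) {
        runWithPolicy(() -> { throw new RuntimeException("Not enough space"); });
        System.out.println("after RuntimeException: " + policyTriggered); // false

        runWithPolicy(() -> {
            throw new FSWriteError(new IOException("Insufficient disk space"), "");
        });
        System.out.println("after FSWriteError: " + policyTriggered);     // true
    }
}
```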






[jira] [Assigned] (CASSANDRA-13176) DROP INDEX seemingly doesn't stop existing Index build

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13176:
--

Assignee: Dimitar Dimitrov

> DROP INDEX seemingly doesn't stop existing Index build
> --
>
> Key: CASSANDRA-13176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13176
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: CentOS Linux, JRE 1.8
>Reporter: Soumya Sanyal
>Assignee: Dimitar Dimitrov
>
> There appears to be an edge case with secondary indexes (non SASI). I 
> originally issued a CREATE INDEX on a column, and upon listening to advice 
> from folks in the #cassandra room, decided against it, and issued a DROP 
> INDEX. 
> I didn't check the cluster overnight, but this morning, I found out that our 
> cluster CPU usage was pegged around 80%. Looking at compaction stats, I saw 
> that the index build was still ongoing. We had to restart the entire cluster 
> for the changes to take effect.
> Version: 3.9






[jira] [Assigned] (CASSANDRA-11091) Insufficient disk space in memtable flush should trigger disk fail policy

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-11091:
--

Assignee: Dimitar Dimitrov

> Insufficient disk space in memtable flush should trigger disk fail policy
> -
>
> Key: CASSANDRA-11091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11091
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Richard Low
>Assignee: Dimitar Dimitrov
>
> If there's insufficient disk space to flush, 
> DiskAwareRunnable.getWriteDirectory throws and the flush fails. The 
> commitlogs then grow indefinitely because the latch is never counted down.
> This should be an FSError so the disk fail policy is triggered. 






[jira] [Commented] (CASSANDRA-13737) Node start can fail if the base table of a materialized view is not found

2017-08-07 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117072#comment-16117072
 ] 

T Jake Luciani commented on CASSANDRA-13737:


+1 assuming clean CI

> Node start can fail if the base table of a materialized view is not found
> -
>
> Key: CASSANDRA-13737
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13737
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Node start can fail if the base table of a materialized view is not found, 
> which is something that can happen under certain circumstances. There is a 
> dtest reproducing the problem:
> {code}
> cluster = self.cluster
> cluster.populate(3)
> cluster.start()
> node1, node2, node3 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1, 
> consistency_level=ConsistencyLevel.QUORUM)
> create_ks(session, 'ks', 3)
> session.execute('CREATE TABLE users (username varchar PRIMARY KEY, state 
> varchar)')
> node3.stop(wait_other_notice=True)
> # create a materialized view only in nodes 1 and 2
> session.execute(('CREATE MATERIALIZED VIEW users_by_state AS '
>  'SELECT * FROM users WHERE state IS NOT NULL AND username IS 
> NOT NULL '
>  'PRIMARY KEY (state, username)'))
> node1.stop(wait_other_notice=True)
> node2.stop(wait_other_notice=True)
> # drop the base table only in node 3
> node3.start(wait_for_binary_proto=True)
> session = self.patient_cql_connection(node3, 
> consistency_level=ConsistencyLevel.QUORUM)
> session.execute('DROP TABLE ks.users')
> cluster.stop()
> cluster.start()  # Fails
> {code}
> This is the error during node start:
> {code}
> java.lang.IllegalArgumentException: Unknown CF 
> 958ebc30-76e4-11e7-869a-9d8367a71c76
>   at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:215) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.view.ViewManager.addView(ViewManager.java:143) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.view.ViewManager.reload(ViewManager.java:113) 
> ~[main/:na]
>   at org.apache.cassandra.schema.Schema.alterKeyspace(Schema.java:618) 
> ~[main/:na]
>   at org.apache.cassandra.schema.Schema.lambda$merge$18(Schema.java:591) 
> ~[main/:na]
>   at 
> java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Collections.java:1575)
>  ~[na:1.8.0_131]
>   at java.util.HashMap$EntrySet.forEach(HashMap.java:1043) ~[na:1.8.0_131]
>   at 
> java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.forEach(Collections.java:1580)
>  ~[na:1.8.0_131]
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:591) ~[main/:na]
>   at 
> org.apache.cassandra.schema.Schema.mergeAndAnnounceVersion(Schema.java:564) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.MigrationTask$1.response(MigrationTask.java:89) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72) 
> ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_131]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [main/:na]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
> {code}






[jira] [Commented] (CASSANDRA-13127) Materialized Views: View row expires too soon

2017-07-07 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16078206#comment-16078206
 ] 

T Jake Luciani commented on CASSANDRA-13127:


[~jasonstack] What impact will the change to the update statement have? My 
guess is we would potentially read the existing data a lot more often.  

> Materialized Views: View row expires too soon
> -
>
> Key: CASSANDRA-13127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13127
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: Duarte Nunes
>Assignee: ZhaoYang
>
> Consider the following commands, ran against trunk:
> {code}
> echo "DROP MATERIALIZED VIEW ks.mv; DROP TABLE ks.base;" | bin/cqlsh
> echo "CREATE TABLE ks.base (p int, c int, v int, PRIMARY KEY (p, c));" | 
> bin/cqlsh
> echo "CREATE MATERIALIZED VIEW ks.mv AS SELECT p, c FROM base WHERE p IS NOT 
> NULL AND c IS NOT NULL PRIMARY KEY (c, p);" | bin/cqlsh
> echo "INSERT INTO ks.base (p, c) VALUES (0, 0) USING TTL 10;" | bin/cqlsh
> # wait for row liveness to get closer to expiration
> sleep 6;
> echo "UPDATE ks.base USING TTL 8 SET v = 0 WHERE p = 0 and c = 0;" | bin/cqlsh
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  7
> (1 rows)
>  c | p
> ---+---
>  0 | 0
> (1 rows)
> # wait for row liveness to expire
> sleep 4;
> echo "SELECT p, c, ttl(v) FROM ks.base; SELECT * FROM ks.mv;" | bin/cqlsh
>  p | c | ttl(v)
> ---+---+
>  0 | 0 |  3
> (1 rows)
>  c | p
> ---+---
> (0 rows)
> {code}
> Notice how the view row is removed even though the base row is still live. I 
> would say this is because in ViewUpdateGenerator#computeLivenessInfoForEntry 
> the TTLs are compared instead of the expiration times, but I'm not sure I'm 
> getting that far ahead in the code when updating a column that's not in the 
> view.
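The suspected TTL-vs-expiration mix-up can be illustrated with a toy model (illustrative names, not the actual ViewUpdateGenerator API): comparing raw TTLs keeps the entry expiring at t=10, while comparing absolute expiration times correctly keeps the one expiring at t=14.

```java
// Toy model of the liveness comparison described above: picking a winner by
// raw TTL goes wrong once the entries were written at different times.
// Names here are illustrative, not Cassandra's ViewUpdateGenerator API.
public class LivenessDemo {
    static final class Liveness {
        final int writtenAt, ttl;
        Liveness(int writtenAt, int ttl) { this.writtenAt = writtenAt; this.ttl = ttl; }
        int expiresAt() { return writtenAt + ttl; }
    }

    // Buggy: keeps whichever entry has the larger TTL
    static Liveness pickByTtl(Liveness a, Liveness b) {
        return a.ttl >= b.ttl ? a : b;
    }

    // Correct: keeps whichever entry lives longer in absolute time
    static Liveness pickByExpiration(Liveness a, Liveness b) {
        return a.expiresAt() >= b.expiresAt() ? a : b;
    }

    public static void main(String[] args) {
        Liveness insert = new Liveness(0, 10); // INSERT ... USING TTL 10 at t=0
        Liveness update = new Liveness(6, 8);  // UPDATE ... USING TTL 8 at t=6
        System.out.println("by TTL:        expires at " + pickByTtl(insert, update).expiresAt());        // 10
        System.out.println("by expiration: expires at " + pickByExpiration(insert, update).expiresAt()); // 14
    }
}
```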






[jira] [Updated] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-06-06 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12744:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed thx!

> Randomness of stress distributions is not good
> --
>
> Key: CASSANDRA-12744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12744
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: T Jake Luciani
>Assignee: Ben Slater
>Priority: Minor
>  Labels: stress
> Fix For: 4.0
>
> Attachments: CASSANDRA_12744_SeedManager_changes-trunk.patch
>
>
> The randomness of our distributions is pretty bad.  We are using 
> JDKRandomGenerator(), but in testing uniform(1..3) we see that for 100 
> iterations it only outputs 3.  If you bump it to 10k it hits all 3 
> values. 
> I made a change to just use the default commons-math random generator and now 
> see all 3 values for n=10
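The kind of sanity check described above can be sketched like this (java.util.Random stands in for the generator under test; the stress tool actually uses commons-math generators): draw n samples from uniform(1..3) and count the distinct values produced.

```java
// Quick distribution sanity check: sample uniform(1..3) n times and report
// how many distinct values appear. java.util.Random is a stand-in here; a
// healthy generator should hit all 3 values well before 100 draws, while the
// buggy behaviour in the ticket produced only a single value.
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class UniformCheck {
    static Set<Integer> distinctValues(Random rng, int n) {
        Set<Integer> seen = new HashSet<>();
        for (int i = 0; i < n; i++)
            seen.add(1 + rng.nextInt(3)); // uniform over {1, 2, 3}
        return seen;
    }

    public static void main(String[] args) {
        Set<Integer> seen = distinctValues(new Random(42), 100);
        System.out.println("distinct values in 100 draws: " + seen.size());
    }
}
```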



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-05-30 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029477#comment-16029477
 ] 

T Jake Luciani commented on CASSANDRA-12744:


Good find! I restarted the tests with your patch.

> Randomness of stress distributions is not good
> --
>
> Key: CASSANDRA-12744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12744
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: T Jake Luciani
>Assignee: Ben Slater
>Priority: Minor
>  Labels: stress
> Fix For: 4.0
>
> Attachments: CASSANDRA_12744_SeedManager_changes-trunk.patch
>
>
> The randomness of our distributions is pretty bad.  We are using the 
> JDKRandomGenerator() but in testing of uniform(1..3) we see for 100 
> iterations it's only outputting 3.  If you bump it to 10k it hits all 3 
> values. 
> I made a change to just use the default commons math random generator and now 
> see all 3 values for n=10






[jira] [Commented] (CASSANDRA-13555) Thread leak during repair

2017-05-30 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029431#comment-16029431
 ] 

T Jake Luciani commented on CASSANDRA-13555:


Just realised you already found that :)  

We only block on validations because we want to throttle them (since 
they require compactors).  If we don't, it can overwhelm the replicas with 
pending work for each subrange.

> Thread leak during repair
> -
>
> Key: CASSANDRA-13555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13555
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
>
> The symptom is similar to what happened in [CASSANDRA-13204 | 
> https://issues.apache.org/jira/browse/CASSANDRA-13204], in that a thread 
> waits forever doing nothing. This one happened during "nodetool repair -pr 
> -seq -j 1" in production, but I can easily simulate the problem with just 
> "nodetool repair" in a dev environment (CCM). I'm trying to explain what 
> happened with the 3.0.13 code base.
> 1. One node is down while doing repair. This is the error I saw in production:
> {code}
> ERROR [GossipTasks:1] 2017-05-19 15:00:10,545 RepairSession.java:334 - 
> [repair #bc9a3cd1-3ca3-11e7-a44a-e30923ac9336] session completed with the 
> following error
> java.io.IOException: Endpoint /10.185.43.15 died
> at 
> org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:333) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:766) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}
> 2. At this moment the repair coordinator hasn't received the response 
> (MerkleTrees) for the node that was marked down. This means, RepairJob#run 
> will never return because it waits for validations to finish:
> {code}
> // Wait for validation to complete
> Futures.getUnchecked(validations);
> {code}
> Note that all RepairJobs (as Runnables) run on a shared executor created 
> in RepairRunnable#runMayThrow, while all snapshotting, validation and syncing 
> happen on a per-RepairSession "taskExecutor". RepairJob#run will only 
> return when it receives MerkleTrees (or null) from all endpoints for a given 
> column family and token range.
> As evidence of the thread leak, below is from the thread dump. I can also get 
> the same stack trace when simulating the same issue in dev environment.
> {code}
> "Repair#129:56" #406373 daemon prio=5 os_prio=0 tid=0x7fc495028400 
> nid=0x1a77d waiting on condition [0x7fc02153]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0002d7c00198> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:285)
> at 
> 

[jira] [Commented] (CASSANDRA-13555) Thread leak during repair

2017-05-30 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029421#comment-16029421
 ] 

T Jake Luciani commented on CASSANDRA-13555:


The issue here is we aren't finishing off the validations when the repair 
session terminates:

https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/RepairSession.java#L302

So we just need to walk that array and set the results to null.  Same with the 
syncTasks.
[~szhou] would you be able to make a patch (and ideally a dtest)?
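A minimal sketch of the idea, using CompletableFuture as a stand-in for the session's outstanding validation futures (not the actual RepairSession code): completing each pending future with null on termination lets any thread blocked in get() return instead of parking forever.

```java
// Sketch of the proposed fix: on session termination, walk the outstanding
// validation tasks and complete each with null so blocked waiters resume.
// CompletableFuture stands in for the session's future type.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class TerminateDemo {
    static final List<CompletableFuture<Object>> validations = new ArrayList<>();

    static void terminate() {
        // Walk the outstanding tasks and set their results to null
        for (CompletableFuture<Object> f : validations)
            f.complete(null);
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Object> pending = new CompletableFuture<>();
        validations.add(pending);

        Thread waiter = new Thread(() -> {
            try {
                pending.get(); // blocks like Futures.getUnchecked(validations)
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        waiter.start();

        terminate();       // without this, waiter would park forever
        waiter.join(5000); // returns promptly once the future is completed
        System.out.println("waiter alive: " + waiter.isAlive()); // false
    }
}
```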




> Thread leak during repair
> -
>
> Key: CASSANDRA-13555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13555
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
>
> The symptom is similar to what happened in [CASSANDRA-13204 | 
> https://issues.apache.org/jira/browse/CASSANDRA-13204], in that a thread 
> waits forever doing nothing. This one happened during "nodetool repair -pr 
> -seq -j 1" in production, but I can easily simulate the problem with just 
> "nodetool repair" in a dev environment (CCM). I'm trying to explain what 
> happened with the 3.0.13 code base.
> 1. One node is down while doing repair. This is the error I saw in production:
> {code}
> ERROR [GossipTasks:1] 2017-05-19 15:00:10,545 RepairSession.java:334 - 
> [repair #bc9a3cd1-3ca3-11e7-a44a-e30923ac9336] session completed with the 
> following error
> java.io.IOException: Endpoint /10.185.43.15 died
> at 
> org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:333) 
> ~[apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:766) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) 
> [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_121]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.11.jar:3.0.11]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
> {code}
> 2. At this moment the repair coordinator hasn't received the response 
> (MerkleTrees) for the node that was marked down. This means, RepairJob#run 
> will never return because it waits for validations to finish:
> {code}
> // Wait for validation to complete
> Futures.getUnchecked(validations);
> {code}
> Note that all RepairJobs (as Runnables) run on a shared executor created 
> in RepairRunnable#runMayThrow, while all snapshotting, validation and syncing 
> happen on a per-RepairSession "taskExecutor". RepairJob#run will only 
> return when it receives MerkleTrees (or null) from all endpoints for a given 
> column family and token range.
> As evidence of the thread leak, below is from the thread dump. I can also get 
> the same stack trace when simulating the same issue in dev environment.
> {code}
> "Repair#129:56" #406373 daemon prio=5 os_prio=0 tid=0x7fc495028400 
> nid=0x1a77d waiting on condition [0x7fc02153]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0002d7c00198> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>  

[jira] [Commented] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-05-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021150#comment-16021150
 ] 

T Jake Luciani commented on CASSANDRA-12744:


Thanks Ben, I'd appreciate it

Rebased on trunk:

[branch|https://github.com/tjake/cassandra/tree/stress-random-trunk]
[utest|http://cassci.datastax.com/job/tjake-stress-random-trunk-testall/]
[dtests| http://cassci.datastax.com/job/tjake-stress-random-trunk-dtest/]

> Randomness of stress distributions is not good
> --
>
> Key: CASSANDRA-12744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12744
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: stress
> Fix For: 4.0
>
>
> The randomness of our distributions is pretty bad.  We are using 
> JDKRandomGenerator(), but in testing uniform(1..3) we see that for 100 
> iterations it only outputs 3.  If you bump it to 10k it hits all 3 
> values. 
> I made a change to just use the default commons-math random generator and now 
> see all 3 values for n=10






[jira] [Assigned] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-05-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-12744:
--

Assignee: Ben Slater  (was: T Jake Luciani)







[jira] [Updated] (CASSANDRA-12744) Randomness of stress distributions is not good

2017-05-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-12744:
---
Fix Version/s: (was: 3.0.x)
   4.0







[jira] [Commented] (CASSANDRA-10399) Create default Stress tables without compact storage

2017-05-10 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004710#comment-16004710
 ] 

T Jake Luciani commented on CASSANDRA-10399:


It just moves the goalposts for older benchmarks, so we need to be careful 
to document this.

> Create default Stress tables without compact storage 
> -
>
> Key: CASSANDRA-10399
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10399
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sebastian Estevez
>Assignee: mck
>Priority: Minor
>  Labels: stress
> Fix For: 4.x
>
>
> ~$ cassandra-stress write
> {code}
> cqlsh> desc TABLE keyspace1.standard1
> CREATE TABLE keyspace1.standard1 (
> key blob PRIMARY KEY,
> "C0" blob,
> "C1" blob,
> "C2" blob,
> "C3" blob,
> "C4" blob
> ) WITH COMPACT STORAGE
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = 'NONE';
> {code}






[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-05-08 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8780:
--
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   4.0
   Status: Resolved  (was: Patch Available)

Committed {{d345ef5d57e303cd8c642640bd65d7212bbb2436}} Thanks Ben!

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 4.0
>
> Attachments: 8780-trunk-v3.patch
>
>







[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-05-08 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000812#comment-16000812
 ] 

T Jake Luciani commented on CASSANDRA-8780:
---

Thanks, kicked off tests again :)








[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8780:
--
Status: Awaiting Feedback  (was: Open)








[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15989258#comment-15989258
 ] 

T Jake Luciani commented on CASSANDRA-8780:
---

The dtests seem to have issues running stress, so it looks like this is not 
backward compatible. We should support both the old and new ways.  Can you take 
a look [~slater_ben]?








[jira] [Updated] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8780:
--
Status: Open  (was: Patch Available)








[jira] [Comment Edited] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-28 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15985424#comment-15985424
 ] 

T Jake Luciani edited comment on CASSANDRA-8780 at 4/28/17 2:24 PM:


Kicked off CI on these 
[testall|http://cassci.datastax.com/job/tjake-8780-trunk-testall/] 
[dtest|http://cassci.datastax.com/job/tjake-8780-trunk-dtest/]  


was (Author: tjake):
Kicked off CI on these [testall|https://circleci.com/gh/tjake/cassandra/2] 
[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/33/]
  








[jira] [Updated] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.

2017-04-28 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-4650:
--
Reviewer: Marcus Eriksson  (was: T Jake Luciani)

> RangeStreamer should be smarter when picking endpoints for streaming in case 
> of N >=3 in each DC.  
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.1.5
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: streaming
> Fix For: 4.x
>
> Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> getRangeFetchMap method in RangeStreamer should pick unique nodes to stream 
> data from when number of replicas in each DC is three or more. 
> When N>=3 in a DC, there are two options for streaming a range. Consider an 
> example of 4 nodes in one datacenter and replication factor of 3. 
> If a node goes down, it needs to recover 3 ranges of data. With current code, 
> two nodes could get selected as it orders the node by proximity. 
> We ideally will want to select 3 nodes for streaming the data. We can do this 
> by selecting unique nodes for each range.  
> Advantages:
> This will increase the performance of bootstrapping a node and will also put 
> less pressure on nodes serving the data. 
> Note: This does not affect if N < 3 in each DC as then it streams data from 
> only 2 nodes. 
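The selection idea above can be sketched as a simple greedy pass (illustrative Python only; the function and node names are hypothetical, not the actual RangeStreamer API): for each range, prefer a replica that has not yet been chosen as a source, falling back to the closest replica when every candidate is already in use.

```python
def pick_sources(candidates_by_range):
    """candidates_by_range: {range_id: [replicas ordered by proximity]}.
    Greedily spread streaming load across distinct source nodes."""
    used = set()
    sources = {}
    for rng, replicas in candidates_by_range.items():
        fresh = [r for r in replicas if r not in used]  # unused replicas first
        chosen = fresh[0] if fresh else replicas[0]     # else fall back to closest
        sources[rng] = chosen
        used.add(chosen)
    return sources

# A bootstrapping node recovering 3 ranges, RF=3, 4-node DC:
ranges = {"r1": ["n2", "n3", "n4"],
          "r2": ["n2", "n4", "n3"],
          "r3": ["n3", "n2", "n4"]}
print(pick_sources(ranges))  # three distinct sources instead of hammering n2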



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-26 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15985424#comment-15985424
 ] 

T Jake Luciani commented on CASSANDRA-8780:
---

Kicked off CI on these [testall|https://circleci.com/gh/tjake/cassandra/2] 
[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/33/]
  

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 3.11.x
>
> Attachments: 8780-trunkv2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13188) compaction-stress AssertionError from getMemtableFor()

2017-04-25 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13188:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed {{2369faab7959d57c8f6bc1f324de47c5aeaf19b9}} thanks!

> compaction-stress AssertionError from getMemtableFor()
> --
>
> Key: CASSANDRA-13188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 3.11.0, 4.0
>
>
> Exception:
> {noformat}
> ./compaction-stress compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> WARN  18:45:04,854 JNA link failure, one or more native method will be 
> unavailable.
> java.lang.AssertionError: []
> at 
> org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1315)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:462)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:570)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:564)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:356)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:265)
> at 
> org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:495)
> at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:209)
> at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:349)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13188) compaction-stress AssertionError from getMemtableFor()

2017-04-25 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13188:
---
Fix Version/s: 4.0
   3.11.0

> compaction-stress AssertionError from getMemtableFor()
> --
>
> Key: CASSANDRA-13188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 3.11.0, 4.0
>
>
> Exception:
> {noformat}
> ./compaction-stress compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> WARN  18:45:04,854 JNA link failure, one or more native method will be 
> unavailable.
> java.lang.AssertionError: []
> at 
> org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1315)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:462)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:570)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:564)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:356)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:265)
> at 
> org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:495)
> at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:209)
> at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:349)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13188) compaction-stress AssertionError from getMemtableFor()

2017-04-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982881#comment-15982881
 ] 

T Jake Luciani edited comment on CASSANDRA-13188 at 4/25/17 1:35 PM:
-

Changes look good, kicked off CI

||branch|testall|dtests||
|3.11|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-testall]|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-dtest]|
|trunk|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-testall]|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-dtest]|


was (Author: tjake):
Changes look good, kicked off CI

||branch|testall|dtests||
|3.11|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-testal]l|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-dtest]|
|trunk|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-testall]|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-dtest]|

> compaction-stress AssertionError from getMemtableFor()
> --
>
> Key: CASSANDRA-13188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 3.11.0, 4.0
>
>
> Exception:
> {noformat}
> ./compaction-stress compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> WARN  18:45:04,854 JNA link failure, one or more native method will be 
> unavailable.
> java.lang.AssertionError: []
> at 
> org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1315)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:462)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:570)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:564)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:356)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:265)
> at 
> org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:495)
> at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:209)
> at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:349)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13188) compaction-stress AssertionError from getMemtableFor()

2017-04-25 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982881#comment-15982881
 ] 

T Jake Luciani commented on CASSANDRA-13188:


Changes look good, kicked off CI

||branch|testall|dtests||
|3.11|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-testal]l|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-dtest]|
|trunk|[testall|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-testall]|[dtest|http://cassci.datastax.com/job/tjake-CASSANDRA-13188-trunk-dtest]|

> compaction-stress AssertionError from getMemtableFor()
> --
>
> Key: CASSANDRA-13188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Exception:
> {noformat}
> ./compaction-stress compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> WARN  18:45:04,854 JNA link failure, one or more native method will be 
> unavailable.
> java.lang.AssertionError: []
> at 
> org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1315)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:462)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:570)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:564)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:356)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:265)
> at 
> org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:495)
> at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:209)
> at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:349)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13188) compaction-stress AssertionError from getMemtableFor()

2017-04-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982250#comment-15982250
 ] 

T Jake Luciani commented on CASSANDRA-13188:


[~jay.zhuang] thank you for this patch and the test.  I'll take a tomorrow

> compaction-stress AssertionError from getMemtableFor()
> --
>
> Key: CASSANDRA-13188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13188
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Exception:
> {noformat}
> ./compaction-stress compact -d /tmp/compaction -p 
> https://gist.githubusercontent.com/tjake/8995058fed11d9921e31/raw/a9334d1090017bf546d003e271747351a40692ea/blogpost.yaml
>  -t 4
> WARN  18:45:04,854 JNA link failure, one or more native method will be 
> unavailable.
> java.lang.AssertionError: []
> at 
> org.apache.cassandra.db.lifecycle.Tracker.getMemtableFor(Tracker.java:312)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1315)
> at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:618)
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:462)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:570)
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:564)
> at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:356)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.saveSystemKeyspacesSchema(SchemaKeyspace.java:265)
> at 
> org.apache.cassandra.db.SystemKeyspace.finishStartup(SystemKeyspace.java:495)
> at 
> org.apache.cassandra.stress.CompactionStress$Compaction.run(CompactionStress.java:209)
> at 
> org.apache.cassandra.stress.CompactionStress.main(CompactionStress.java:349)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-8780) cassandra-stress should support multiple table operations

2017-04-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982247#comment-15982247
 ] 

T Jake Luciani commented on CASSANDRA-8780:
---

[~slater_ben] sorry for the delay.  I'll look at this tomorrow.

> cassandra-stress should support multiple table operations
> -
>
> Key: CASSANDRA-8780
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8780
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benedict
>Assignee: Ben Slater
>  Labels: stress
> Fix For: 3.11.x
>
> Attachments: 8780-trunkv2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13307) The specification of protocol version in cqlsh means the python driver doesn't automatically downgrade protocol version.

2017-04-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973974#comment-15973974
 ] 

T Jake Luciani commented on CASSANDRA-13307:


yes please

> The specification of protocol version in cqlsh means the python driver 
> doesn't automatically downgrade protocol version.
> 
>
> Key: CASSANDRA-13307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13307
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matt Byrd
>Assignee: Matt Byrd
>Priority: Minor
>  Labels: doc-impacting
> Fix For: 3.11.x
>
>
> Hi,
> Looks like we've regressed on the issue described in:
> https://issues.apache.org/jira/browse/CASSANDRA-9467
> In that we're no longer able to connect from newer cqlsh versions
> (e.g trunk) to older versions of Cassandra with a lower version of the 
> protocol (e.g 2.1 with protocol version 3)
> The problem seems to be that we're relying on the ability for the client to 
> automatically downgrade protocol version implemented in Cassandra here:
> https://issues.apache.org/jira/browse/CASSANDRA-12838
> and utilised in the python client here:
> https://datastax-oss.atlassian.net/browse/PYTHON-240
> The problem however comes when we implemented:
> https://datastax-oss.atlassian.net/browse/PYTHON-537
> "Don't downgrade protocol version if explicitly set" 
> (included when we bumped from 3.5.0 to 3.7.0 of the python driver as part of 
> fixing: https://issues.apache.org/jira/browse/CASSANDRA-11534)
> Since we do explicitly specify the protocol version in the bin/cqlsh.py.
> I've got a patch which just adds an option to explicitly specify the protocol 
> version (for those who want to do that) and then otherwise defaults to not 
> setting the protocol version, i.e using the protocol version from the client 
> which we ship, which should by default be the same protocol as the server.
> Then it should downgrade gracefully as was intended. 
> Let me know if that seems reasonable.
> Thanks,
> Matt



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

2017-04-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972641#comment-15972641
 ] 

T Jake Luciani commented on CASSANDRA-13442:


Rather than forcing auto deletion of the data on repair, would you be ok with 
requiring a explicit cleanup to see the disk savings? That seems like the 
safest approach.

> Support a means of strongly consistent highly available replication with 
> storage requirements approximating RF=2
> 
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

2017-04-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972633#comment-15972633
 ] 

T Jake Luciani commented on CASSANDRA-13442:


I like the idea of making it part of the replication strategy.
You could have an unrepaired RF and a repaired RF.

In your example it would be: unrepaired: { dc1=3, dc2=3}, repaired: { dc1=2, 
dc2=2 }.


> Support a means of strongly consistent highly available replication with 
> storage requirements approximating RF=2
> 
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.





[jira] [Commented] (CASSANDRA-12835) Tracing payload not passed from QueryMessage to tracing session

2017-04-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972616#comment-15972616
 ] 

T Jake Luciani commented on CASSANDRA-12835:


Changes make sense.

One nit: I think you've included an unused import in the TracingTest: 
org.apache.commons.lang3.StringUtils

+1 assuming the tests look good.

> Tracing payload not passed from QueryMessage to tracing session
> ---
>
> Key: CASSANDRA-12835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12835
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Assignee: mck
>Priority: Critical
>  Labels: tracing
> Fix For: 3.11.x, 4.x
>
>
> Caused by CASSANDRA-10392.
> Related to CASSANDRA-11706.
> When querying using CQL statements (not prepared) the message type is 
> QueryMessage and the code in 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/messages/QueryMessage.java#L101
>  is as follows:
> {code:java}
> if (state.traceNextQuery())
> {
> state.createTracingSession();
> ImmutableMap.Builder builder = 
> ImmutableMap.builder();
> {code}
> {{state.createTracingSession();}} should probably be 
> {{state.createTracingSession(getCustomPayload());}}. At least that fixes the 
> problem for me.
> This also raises the question whether some other parts of the code should 
> pass the custom payload as well (I'm not the right person to analyze this):
> {code}
> $ ag createTracingSession
> src/java/org/apache/cassandra/service/QueryState.java
> 80:public void createTracingSession()
> 82:createTracingSession(Collections.EMPTY_MAP);
> 85:public void createTracingSession(Map customPayload)
> src/java/org/apache/cassandra/thrift/CassandraServer.java
> 2528:state().getQueryState().createTracingSession();
> src/java/org/apache/cassandra/transport/messages/BatchMessage.java
> 163:state.createTracingSession();
> src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
> 114:state.createTracingSession(getCustomPayload());
> src/java/org/apache/cassandra/transport/messages/QueryMessage.java
> 101:state.createTracingSession();
> src/java/org/apache/cassandra/transport/messages/PrepareMessage.java
> 74:state.createTracingSession();
> {code}
> This is not marked as `minor` as the CASSANDRA-11706 was because this cannot 
> be fixed by the tracing plugin.
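For context, the one-line change suggested above can be sketched with simplified stand-ins for the real classes; the fields and payload types below are illustrative, not the actual {{QueryState}} implementation:

```java
import java.util.Collections;
import java.util.Map;

// Simplified stand-in for org.apache.cassandra.service.QueryState,
// just enough to show why the no-arg overload loses the payload.
class QueryState {
    Map<String, byte[]> sessionPayload;

    void createTracingSession() {
        // Old call site in QueryMessage: silently drops any custom payload.
        createTracingSession(Collections.emptyMap());
    }

    void createTracingSession(Map<String, byte[]> customPayload) {
        this.sessionPayload = customPayload;
    }
}

public class TracingFixSketch {
    public static void main(String[] args) {
        QueryState state = new QueryState();
        Map<String, byte[]> customPayload = Map.of("trace-id", new byte[] { 1 });

        // Proposed fix: forward the frame's custom payload into the session,
        // mirroring what ExecuteMessage already does.
        state.createTracingSession(customPayload);
        System.out.println(state.sessionPayload.containsKey("trace-id")); // true
    }
}
```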





[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

2017-04-17 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972014#comment-15972014
 ] 

T Jake Luciani commented on CASSANDRA-13442:


bq. This ticket is not about changing consistency levels and doesn't require 
applications to change their usage of consistency levels to benefit.

7168 added a new CL as a way to opt in to this new feature. Once it's fully 
vetted, it would be trivial to apply it automatically when appropriate.

bq.  7168 also does not have reducing storage requirements as a goal.

This idea seems risky to me; at the least it should be opt-in. From an 
operator's perspective you would need to consider how to handle bootstrapping 
or replacing a node, as well as backups, restores, etc.

The topology of the cluster would also gain a new dimension that the drivers 
would need to consider, since for CL.ONE queries you would need to use only 
one of the replicas with all the data on it.

> Support a means of strongly consistent highly available replication with 
> storage requirements approximating RF=2
> 
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.





[jira] [Comment Edited] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

2017-04-17 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972014#comment-15972014
 ] 

T Jake Luciani edited comment on CASSANDRA-13442 at 4/18/17 2:25 AM:
-

bq. This ticket is not about changing consistency levels and doesn't require 
applications to change their usage of consistency levels to benefit.

7168 added a new CL as a way to opt in to this new feature. Once it's fully 
vetted, it would be trivial to apply it automatically when appropriate.

bq.  7168 also does not have reducing storage requirements as a goal.

This idea seems risky to me; at the least it should be opt-in. From an 
operator's perspective you would need to consider how to handle bootstrapping 
or replacing a node, as well as backups, restores, etc.

The topology of the cluster would also gain a new dimension that the drivers 
would need to consider, since for CL.ONE queries you would need to use only 
one of the replicas with all the data on it.


was (Author: tjake):
.bq This ticket is not about changing consistency levels and doesn't require 
applications to change their usage of consistency levels to benefit.

7168 added a new CL as a way to opt-in to this new feature. Once its fully 
vetted it would be trivial to make it automatically use it when appropriate.

bq.  7168 also does not have reducing storage requirements as a goal.

This idea seem risky to me. At the least it should be opt-in.  From a operators 
perspective you would need to consider how to handle bootstrapping or replacing 
a node. Also, how to handle backup and restores, etc.

The topology of the cluster would also have a new dimension that the drivers 
would need to consider.  Since for CL.ONE queries you would need to only use 
one of the replicas with all the data on it.

> Support a means of strongly consistent highly available replication with 
> storage requirements approximating RF=2
> 
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.





[jira] [Commented] (CASSANDRA-13442) Support a means of strongly consistent highly available replication with storage requirements approximating RF=2

2017-04-17 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15971975#comment-15971975
 ] 

T Jake Luciani commented on CASSANDRA-13442:


How is this not a duplicate of CASSANDRA-7168?

> Support a means of strongly consistent highly available replication with 
> storage requirements approximating RF=2
> 
>
> Key: CASSANDRA-13442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13442
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction, Coordination, Distributed Metadata, Local 
> Write-Read Paths
>Reporter: Ariel Weisberg
>
> Replication factors like RF=2 can't provide strong consistency and 
> availability because if a single node is lost it's impossible to reach a 
> quorum of replicas. Stepping up to RF=3 will allow you to lose a node and 
> still achieve quorum for reads and writes, but requires committing additional 
> storage.
> The requirement of a quorum for writes/reads doesn't seem to be something 
> that can be relaxed without additional constraints on queries, but it seems 
> like it should be possible to relax the requirement that 3 full copies of the 
> entire data set are kept. What is actually required is a covering data set 
> for the range and we should be able to achieve a covering data set and high 
> availability without having three full copies. 
> After a repair we know that some subset of the data set is fully replicated. 
> At that point we don't have to read from a quorum of nodes for the repaired 
> data. It is sufficient to read from a single node for the repaired data and a 
> quorum of nodes for the unrepaired data.
> One way to exploit this would be to have N replicas, say the last N replicas 
> (where N varies with RF) in the preference list, delete all repaired data 
> after a repair completes. Subsequent quorum reads will be able to retrieve 
> the repaired data from any of the two full replicas and the unrepaired data 
> from a quorum read of any replica including the "transient" replicas.





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-04 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
   Resolution: Fixed
Fix Version/s: 3.11.0
   Status: Resolved  (was: Patch Available)

Committed {{2f1ab4a4248ac24c890e195cd5714ca54510c19a}}

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13, 3.11.0
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compactions are running between these 
> restarts that can cause the view builder to skip data, since the builder 
> tracks the max sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  
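The fix described here, tracking the max sstable generation only for the lifetime of a single builder instance and never across restarts, can be sketched as follows (names are illustrative, not the real {{ViewBuilder}} API):

```java
import java.util.List;

// Illustrative sketch: filter sstables by generation only within one builder's
// lifetime, so a restarted builder re-reads everything and cannot skip data.
public class ViewBuilderSketch {
    // Snapshot taken when THIS builder instance starts; intentionally not persisted.
    private final int maxGenerationAtStart;

    ViewBuilderSketch(List<Integer> liveGenerations) {
        this.maxGenerationAtStart =
            liveGenerations.stream().mapToInt(Integer::intValue).max().orElse(0);
    }

    // Skip only sstables created after this builder started, i.e. data that was
    // newly compacted while the build was running.
    boolean shouldProcess(int sstableGeneration) {
        return sstableGeneration <= maxGenerationAtStart;
    }

    public static void main(String[] args) {
        ViewBuilderSketch builder = new ViewBuilderSketch(List.of(5, 7, 9));
        System.out.println(builder.shouldProcess(7));   // existing sstable: process
        System.out.println(builder.shouldProcess(12));  // compacted after start: skip
    }
}
```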





[jira] [Commented] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-04 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955161#comment-15955161
 ] 

T Jake Luciani commented on CASSANDRA-13405:


Nits addressed; running CI again.

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compactions are running between these 
> restarts that can cause the view builder to skip data, since the builder 
> tracks the max sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
Description: 
The view builder for one MV is restarted when other MVs are added on the same 
keyspace.  There is an issue if compactions are running between these restarts 
that can cause the view builder to skip data, since the builder tracks the max 
sstable generation to filter by when it starts back up.

I don't see a need for this generation tracking across restarts, it only needs 
to be tracked during a builders life (to avoid adding in newly compacted data). 
 



  was:
The view builder for one MV is restarted when other MVs are added on the same 
keyspace.  There is an issue if compactions are running between these restarts 
that can cause the view build is lost since the builder tracks the max sstable 
generation to filter by when it starts back up.

I don't see a need for this generation tracking across restarts, it only needs 
to be tracked during a builders life (to avoid adding in newly compacted data). 
 




> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compactions are running between these 
> restarts that can cause the view builder to skip data, since the builder 
> tracks the max sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
Description: 
The view builder for one MV is restarted when other MVs are added on the same 
keyspace.  There is an issue if compactions are running between these restarts 
that can cause the view build is lost since the builder tracks the max sstable 
generation to filter by when it starts back up.

I don't see a need for this generation tracking across restarts, it only needs 
to be tracked during a builders life (to avoid adding in newly compacted data). 
 



  was:
The view builder for one MV is restarted when other MVs are added on the same 
keyspace.  There is an issue if compaction are running between these restarts 
that can cause the view build is lost since the builder tracks the max sstable 
generation to filter by when it starts back up.

I don't see a need for this generation tracking across restarts, it only needs 
to be tracked during a builders life (to avoid adding in newly compacted data). 
 




> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compactions are running between these 
> restarts that can cause the view build is lost since the builder tracks the 
> max sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
Component/s: Local Write-Read Paths

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compaction are running between these restarts 
> that can cause the view build is lost since the builder tracks the max 
> sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
Labels: materializedviews  (was: )

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compaction are running between these restarts 
> that can cause the view build is lost since the builder tracks the max 
> sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Comment Edited] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953949#comment-15953949
 ] 

T Jake Luciani edited comment on CASSANDRA-13405 at 4/3/17 6:25 PM:


Fix with a test to repro; basically, ignore the persisted generation value 
across restarts.

I've also added some better debug logging to help operators and us see what's 
happening.

[3.0|https://github.com/tjake/cassandra/tree/13405-3.0]
[testall|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-testall/]
[dtest|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-dtest/]


[3.11|https://github.com/tjake/cassandra/tree/13405-3.11]
[testall|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-testall/]
[dtest|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-dtest/]


was (Author: tjake):
Fix with test to repro.
Basically ignore the generation value...

I've also added some better debug logging to help operators and us see what's 
happening.

[3.0|https://github.com/tjake/cassandra/tree/13405-3.0]
[testall|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-testall/]
[dtest|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-dtest/]


[3.11|https://github.com/tjake/cassandra/tree/13405-3.11]
[testall|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-testall/]
[dtest|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-dtest/]

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>  Labels: materializedviews
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compaction are running between these restarts 
> that can cause the view build is lost since the builder tracks the max 
> sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Updated] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-13405:
---
Status: Patch Available  (was: Open)

Fix with a test to repro; basically, ignore the persisted generation value 
across restarts.

I've also added some better debug logging to help operators and us see what's 
happening.

[3.0|https://github.com/tjake/cassandra/tree/13405-3.0]
[testall|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-testall/]
[dtest|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.0-dtest/]


[3.11|https://github.com/tjake/cassandra/tree/13405-3.11]
[testall|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-testall/]
[dtest|cassci.datastax.com/view/Dev/view/tjake/job/tjake-13405-3.11-dtest/]

> ViewBuilder can miss data due to sstable generation filter
> --
>
> Key: CASSANDRA-13405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0.13
>
>
> The view builder for one MV is restarted when other MVs are added on the same 
> keyspace.  There is an issue if compaction are running between these restarts 
> that can cause the view build is lost since the builder tracks the max 
> sstable generation to filter by when it starts back up.
> I don't see a need for this generation tracking across restarts, it only 
> needs to be tracked during a builders life (to avoid adding in newly 
> compacted data).  





[jira] [Created] (CASSANDRA-13405) ViewBuilder can miss data due to sstable generation filter

2017-04-03 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-13405:
--

 Summary: ViewBuilder can miss data due to sstable generation filter
 Key: CASSANDRA-13405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13405
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: T Jake Luciani
 Fix For: 3.0.13


The view builder for one MV is restarted when other MVs are added on the same 
keyspace.  There is an issue if compaction are running between these restarts 
that can cause the view build is lost since the builder tracks the max sstable 
generation to filter by when it starts back up.

I don't see a need for this generation tracking across restarts, it only needs 
to be tracked during a builders life (to avoid adding in newly compacted data). 
 







[jira] [Commented] (CASSANDRA-13373) Provide additional speculative retry statistics

2017-03-31 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951356#comment-15951356
 ] 

T Jake Luciani commented on CASSANDRA-13373:


Would you mind updating the docs to add these new metrics?

> Provide additional speculative retry statistics
> ---
>
> Key: CASSANDRA-13373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13373
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.x
>
>
> Right now there is a single metric for speculative retry on reads that is the 
> number of speculative retries attempted. You can't tell how many of those 
> actually succeeded in salvaging the read.
> The metric is also per table and there is no keyspace level rollup as there 
> is for several other metrics.
> Add a metric that counts reads that attempt to speculate but fail to complete 
> before the timeout (ignoring read errors).
> Add a rollup metric for the current count of speculation attempts as well as 
> the count of failed speculations.
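The requested counters (speculation attempts, plus speculations that still failed to beat the timeout, rolled up per table or keyspace) might look roughly like this. This uses plain {{AtomicLong}} fields rather than the Dropwizard metrics Cassandra actually registers, and the field names are guesses, not the names the patch ultimately used:

```java
import java.util.concurrent.atomic.AtomicLong;

// Rough sketch of the requested speculative-retry counters; names are invented.
public class SpeculativeRetryMetricsSketch {
    final AtomicLong speculativeRetries = new AtomicLong();        // attempts
    final AtomicLong speculativeFailedRetries = new AtomicLong();  // attempts that still timed out

    void onSpeculate() {
        speculativeRetries.incrementAndGet();
    }

    void onReadComplete(boolean speculated, boolean timedOut) {
        // A "failed" speculation: we speculated, yet the read missed the timeout anyway.
        if (speculated && timedOut)
            speculativeFailedRetries.incrementAndGet();
    }

    public static void main(String[] args) {
        SpeculativeRetryMetricsSketch table = new SpeculativeRetryMetricsSketch();
        table.onSpeculate();
        table.onReadComplete(true, false);  // speculation salvaged the read
        table.onSpeculate();
        table.onReadComplete(true, true);   // speculation did not help
        System.out.println(table.speculativeRetries.get());        // 2
        System.out.println(table.speculativeFailedRetries.get());  // 1
    }
}
```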




