[jira] [Commented] (CASSANDRA-14781) Log message when mutation passed to CommitLog#add(Mutation) is too large is not descriptive enough

2020-04-19 Thread Venkata Harikrishna Nukala (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087386#comment-17087386
 ] 

Venkata Harikrishna Nukala commented on CASSANDRA-14781:


[~jrwest] raised CASSANDRA-15741 to validate, and if necessary fix, the client 
timeout when a mutation exceeds the maximum size.

> Log message when mutation passed to CommitLog#add(Mutation) is too large is 
> not descriptive enough
> --
>
> Key: CASSANDRA-14781
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14781
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Hints, Local/Commit Log, Messaging/Client
>Reporter: Jordan West
>Assignee: Tom Petracca
>Priority: Normal
>  Labels: protocolv5
> Fix For: 4.0-beta
>
> Attachments: CASSANDRA-14781.patch, CASSANDRA-14781_3.0.patch, 
> CASSANDRA-14781_3.11.patch
>
>
> When hitting 
> [https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/commitlog/CommitLog.java#L256-L257],
>  the log message produced does not help the operator track down what data is 
> being written. At a minimum the keyspace and cfIds involved would be useful 
> (and are available) – more detail might not be reasonable to include. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15741) Mutation size exceeds max limit - clients get timeout?

2020-04-19 Thread Venkata Harikrishna Nukala (Jira)
Venkata Harikrishna Nukala created CASSANDRA-15741:
--

 Summary: Mutation size exceeds max limit - clients get timeout?
 Key: CASSANDRA-15741
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15741
 Project: Cassandra
  Issue Type: Task
Reporter: Venkata Harikrishna Nukala
Assignee: Venkata Harikrishna Nukala


Raising this ticket based on the discussion in CASSANDRA-14781 to validate that 
the coordinator returns a timeout when mutation size exceeds the maximum limit 
(a jvm-dtest needs to be added to confirm). If it throws a timeout, or any other 
exception that does not properly reflect the failure, the response should be 
changed to throw a meaningful exception immediately.
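The behavior under validation can be pictured as a fail-fast check: rather than letting the write time out, the coordinator should surface a meaningful error immediately. A toy Python simulation (the outcome labels and function names are illustrative assumptions, not the real jvm-dtest API):

```python
# Toy model of coordinator behavior for oversized mutations.
# "invalid_request" vs "write_timeout" are illustrative labels only.
MAX_MUTATION_SIZE = 16 * 1024 * 1024

def coordinate_write(mutation_size, fail_fast=True):
    """Return the response a client would see for a given mutation size."""
    if mutation_size > MAX_MUTATION_SIZE:
        if fail_fast:
            # Desired: an immediate, descriptive InvalidRequest-style error.
            return "invalid_request: mutation exceeds max size"
        # Undesired: the replica drops the write and the client times out.
        return "write_timeout"
    return "ok"
```

The jvm-dtest would confirm which of the two branches today's code actually takes, and the fix (if needed) moves it to the fail-fast branch.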






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Reviewers: Jordan West, Michael Semb Wever  (was: Jordan West, Mick Semb 
Wever)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Commented] (CASSANDRA-15623) When running CQLSH with STDIN input, exit with error status code if script fails

2020-04-19 Thread Dinesh Joshi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087248#comment-17087248
 ] 

Dinesh Joshi commented on CASSANDRA-15623:
--

Hi [~mck], apologies for breaking the build; the fixes are over in 
CASSANDRA-15739. I ran the CircleCI jobs but likely mixed up the results from 
different tickets. I'll be more careful in the future.

> When running CQLSH with STDIN input, exit with error status code if script 
> fails
> 
>
> Key: CASSANDRA-15623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15623
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Jacob Becker
>Assignee: Jacob Becker
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 3.0.21, 3.11.7, 4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Assuming CASSANDRA-6344 is in place for years and considering that scripts 
> submitted with the `-e` option behave in a similar fashion, it is very 
> surprising that scripts submitted to STDIN (i.e. piped in) always exit with a 
> zero code, regardless of errors. I believe this should be fixed.
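The contract being requested can be sketched as: any statement error while reading from STDIN should flip the process exit code to non-zero, just as `-e` scripts already do. A small Python model of that loop (not cqlsh's actual implementation; `execute` is a hypothetical stand-in for a statement executor):

```python
# Toy model of the exit-status contract for piped-in CQL scripts.
def run_script(statements, execute):
    """Run statements in order; return 0 only if every one succeeds."""
    exit_code = 0
    for stmt in statements:
        try:
            execute(stmt)
        except Exception as e:
            print(f"error: {e}")
            exit_code = 1  # remember the failure instead of swallowing it
    return exit_code
```

The key design point is that the failure is remembered across the rest of the script, so a pipeline such as `cat script.cql | cqlsh` can reliably gate on the exit status.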






[jira] [Updated] (CASSANDRA-15740) Entire SSTable transfers don't work over SSL

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15740:
-
Description: 
Entire SSTable transfers do not function when an SSL handler is present in the 
Netty pipeline. This is a trivial fix that allows the transfer to proceed over 
SSL, thereby extending its benefits to encrypted clusters.

||ssl||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/zcs-with-ssl]|
|[dtest|https://github.com/dineshjoshi/cassandra-dtest-1/tree/zcs-with-ssl]|
|[utests  
dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/zcs-with-ssl]|

  was:Entire SSTable transfers do not function when an SSL handler is present in 
the Netty pipeline. This is a trivial fix that allows the transfer to proceed 
over SSL, thereby extending its benefits to encrypted clusters.


> Entire SSTable transfers don't work over SSL
> 
>
> Key: CASSANDRA-15740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> Entire SSTable transfers do not function when an SSL handler is present in the 
> Netty pipeline. This is a trivial fix that allows the transfer to proceed over 
> SSL, thereby extending its benefits to encrypted clusters.
> ||ssl||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/zcs-with-ssl]|
> |[dtest|https://github.com/dineshjoshi/cassandra-dtest-1/tree/zcs-with-ssl]|
> |[utests  
> dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/zcs-with-ssl]|






[jira] [Updated] (CASSANDRA-15740) Entire SSTable transfers don't work over SSL

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15740:
-
Test and Documentation Plan: CircleCI tests
 Status: Patch Available  (was: Open)

> Entire SSTable transfers don't work over SSL
> 
>
> Key: CASSANDRA-15740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> Entire SSTable transfers do not function when an SSL handler is present in the 
> Netty pipeline. This is a trivial fix that allows the transfer to proceed over 
> SSL, thereby extending its benefits to encrypted clusters.






[jira] [Updated] (CASSANDRA-15740) Entire SSTable transfers don't work over SSL

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15740:
-
 Bug Category: Parent values: Degradation(12984)Level 1 values: Performance 
Bug/Regression(12997)
   Complexity: Low Hanging Fruit
Discovered By: User Report
Reviewers: Joey Lynch
 Severity: Normal
 Assignee: Dinesh Joshi
   Status: Open  (was: Triage Needed)

> Entire SSTable transfers don't work over SSL
> 
>
> Key: CASSANDRA-15740
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15740
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> Entire SSTable transfers do not function when an SSL handler is present in the 
> Netty pipeline. This is a trivial fix that allows the transfer to proceed over 
> SSL, thereby extending its benefits to encrypted clusters.






[jira] [Created] (CASSANDRA-15740) Entire SSTable transfers don't work over SSL

2020-04-19 Thread Dinesh Joshi (Jira)
Dinesh Joshi created CASSANDRA-15740:


 Summary: Entire SSTable transfers don't work over SSL
 Key: CASSANDRA-15740
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15740
 Project: Cassandra
  Issue Type: Bug
  Components: Legacy/Streaming and Messaging
Reporter: Dinesh Joshi


Entire SSTable transfers do not function when an SSL handler is present in the 
Netty pipeline. This is a trivial fix that allows the transfer to proceed over 
SSL, thereby extending its benefits to encrypted clusters.
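The fix can be pictured as relaxing a guard: the entire-SSTable (zero-copy) streaming path was previously only chosen when no SSL handler sat in the Netty pipeline, and the change lets it run through the SSL handler as well. A schematic Python model (hypothetical names; the real logic lives in Cassandra's Java streaming code around Netty's pipeline):

```python
# Schematic of choosing the streaming path based on the channel pipeline.
def use_entire_sstable(pipeline, allow_over_ssl):
    """Decide whether the entire-SSTable path can be used for this channel."""
    has_ssl = "ssl" in pipeline  # stand-in for checking for an SSL handler
    if has_ssl and not allow_over_ssl:
        return False  # pre-fix behavior: fall back to legacy streaming
    return True       # post-fix: zero-copy-style path also runs over SSL
```

The value of the change is that SSL-enabled clusters no longer silently lose the faster transfer path.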






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Description: 
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|


 !15623-pipeline.png! 



  was:
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|







> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|
>  !15623-pipeline.png! 






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Attachment: 15623-pipeline.png

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
> Attachments: 15623-pipeline.png
>
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Description: 
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|






  was:
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|




> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Description: 
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|



  was:
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[branch|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/15623-fix-tests]|




> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[cassandra|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[cassandra-dtests|https://github.com/dineshjoshi/cassandra-dtest-1/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/workflow-run/65ae49dd-e52f-4af0-b310-7e09f2c204ea]|






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Test and Documentation Plan: circleci
 Status: Patch Available  (was: Open)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.






[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
Description: 
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.

||tests||
|[branch|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
|[utests  
dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/15623-fix-tests]|



  was:
dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.



> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.
> ||tests||
> |[branch|https://github.com/dineshjoshi/cassandra/tree/15623-fix-tests]|
> |[utests  
> dtests|https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/15623-fix-tests]|






[jira] [Created] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)
Dinesh Joshi created CASSANDRA-15739:


 Summary: dtests fix due to cqlsh behavior change
 Key: CASSANDRA-15739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
 Project: Cassandra
  Issue Type: Bug
  Components: Tool/cqlsh
Reporter: Dinesh Joshi
Assignee: Dinesh Joshi


dtests are failing due to a behavior change in cqlsh that was introduced as 
part of 15623. This patch fixes the issue.







[jira] [Updated] (CASSANDRA-15739) dtests fix due to cqlsh behavior change

2020-04-19 Thread Dinesh Joshi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-15739:
-
 Bug Category: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990)
   Complexity: Normal
Discovered By: User Report
Reviewers: Jordan West, Mick Semb Wever
 Severity: Low
   Status: Open  (was: Triage Needed)

> dtests fix due to cqlsh behavior change
> ---
>
> Key: CASSANDRA-15739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15739
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/cqlsh
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>
> dtests are failing due to a behavior change in cqlsh that was introduced as 
> part of 15623. This patch fixes the issue.






[jira] [Comment Edited] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087187#comment-17087187
 ] 

Joey Lynch edited comment on CASSANDRA-15379 at 4/19/20, 9:44 PM:
--

Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)
 * Relevant JVM configuration: 12 GiB heap size

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2K wps and 1.2k rps at LOCAL_ONE consistency with a  random 
load pattern.
 * Data sizing: 10 million partitions with 2 rows each of 10 columns, total 
size per partition of about 10 KiB of random data. ~100 GiB per node data size 
(replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size 

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
 2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
 3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraphs exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with latest trunk in preparation 
for commit
 * Run a benchmark of `ZstdCompressor` with and without the patch; we expect to 
see reduced CPU usage during flushes. I will likely have to reduce the 
read/write throughput because compactions take a large amount of on-CPU time 
with this configuration.
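The idea being benchmarked can be sketched: replacing null checks at compressor call sites with a no-op compressor keeps every call site uniform (one interface, identity behavior for uncompressed tables) at the risk of making those sites megamorphic. An identity-compressor sketch in Python (the real NoopCompressor is Java and operates on ByteBuffers; names here are illustrative):

```python
# Identity "compressor": lets uncompressed tables share the compressed code path.
class NoopCompressor:
    def compress(self, data: bytes) -> bytes:
        return data  # no transformation, no size change

    def uncompress(self, data: bytes) -> bytes:
        return data

def write_chunk(chunk, compressor):
    """A single call site serves compressed and uncompressed tables alike."""
    return compressor.compress(chunk)
```

The benchmark above is checking that routing uncompressed writes through such an object, instead of branching on `compressor == null`, costs nothing measurable.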


was (Author: jolynch):
Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2K wps and 1.2k rps at LOCAL_ONE consistency with a  random 
load pattern.
 * Data sizing: 10 million partitions with 2 rows each of 10 columns, total 
size per partition of about 10 KiB of random data. ~100 GiB per node data size 
(replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size 

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
 2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
 3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraphs exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data 

[jira] [Comment Edited] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087187#comment-17087187
 ] 

Joey Lynch edited comment on CASSANDRA-15379 at 4/19/20, 9:09 PM:
--

Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2K wps and 1.2k rps at LOCAL_ONE consistency with a  random 
load pattern.
 * Data sizing: 10 million partitions with 2 rows each of 10 columns, total 
size per partition of about 10 KiB of random data. ~100 GiB per node data size 
(replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size 

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
 2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
 3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraphs exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with latest trunk in preparation 
for commit
 * Run a benchmark of `ZstdCompressor` with and without the patch; we expect to 
see reduced CPU usage during flushes. I will likely have to reduce the 
read/write throughput because compactions take a large amount of on-CPU time 
with this configuration.


was (Author: jolynch):
Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2K wps and 1.2k rps at LOCAL_ONE consistency with a  random 
load pattern.
 * Data sizing: 2 rows of 10 columns, total size per partition of about 10 KiB 
of random data. ~100 GiB per node data size (replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size 

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
 2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
 3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraphs exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with 

[jira] [Comment Edited] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087187#comment-17087187
 ] 

Joey Lynch edited comment on CASSANDRA-15379 at 4/19/20, 9:00 PM:
--

Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2k wps and 1.2k rps at LOCAL_ONE consistency with a random load pattern.
 * Data sizing: 2 rows of 10 columns, total size per partition of about 10 KiB 
of random data. ~100 GiB per node data size (replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size 
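
As a side note on the LCS settings above (sstable size 256 MiB, fanout 20): level n roughly holds fanout**n sstables, so the ~100 GiB per-node dataset lands around L2. A minimal sketch of that arithmetic (approximate; real LCS has L0 and overlap subtleties this ignores):

```python
# Approximate LCS level capacities for sstable size 256 MiB and fanout 20:
# level n holds on the order of fanout**n sstables of `size_mib` each.
size_mib = 256
fanout = 20

capacities_gib = {f"L{n}": size_mib * fanout**n / 1024 for n in (1, 2, 3)}
for level, gib in capacities_gib.items():
    print(f"{level}: ~{gib:,.0f} GiB")
# L2 comes out at ~100 GiB, which matches the per-node data size in this benchmark.
```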

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
 2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
 3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraph exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with latest trunk in preparation 
for commit
 * Run a benchmark of `ZstdCompressor` with and without the patch; we expect to 
see reduced CPU usage from flushes. I will likely have to reduce the read/write 
throughput because compactions take a crazy amount of our CPU time with this 
configuration.


was (Author: jolynch):
Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2k wps and 1.2k rps at LOCAL_ONE consistency with a random load pattern.
 * Data sizing: 2 rows of 10 columns, total size per partition of about 10 KiB 
of random data. ~100 GiB per node data size (replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraph exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with latest trunk in preparation for commit

[jira] [Commented] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087187#comment-17087187
 ] 

Joey Lynch commented on CASSANDRA-15379:


Alright, finally fixed our internal trunk build so we can do performance 
validations again. I ran the following performance benchmark and the results 
are essentially identical for the default configuration (so testing _just_ the 
addition of the NoopCompressor on the megamorphic call sites).

*Experimental Setup:*

A baseline and candidate cluster of EC2 machines running the following:
 * C* cluster: 3x3 (us-east-1 and eu-west-1) i3.2xlarge
 * Load cluster: 3 m5.2xlarge nodes running ndbench in us-east-1, generating a 
consistent load against the cluster
 * Baseline C* version: Latest trunk (b05fe7ab)
 * Candidate C* version: The proposed patch applied to the same version of trunk
 * Relevant system configuration: Ubuntu xenial running Linux 4.15, with kyber 
io scheduler (vs noop), 32 KiB readahead (vs 128), and tc-fq network qdisc (vs 
pfifo_fast)

In all cases load is applied and then we wait for metrics to settle, especially 
things like pending compactions, read/write latencies, p99 latencies, etc ...

*Defaults Benchmark:*
 * Load pattern: 1.2k wps and 1.2k rps at LOCAL_ONE consistency with a random load pattern.
 * Data sizing: 2 rows of 10 columns, total size per partition of about 10 KiB 
of random data. ~100 GiB per node data size (replicated 6 ways)
 * Compaction settings: LCS with size=256MiB, fanout=20
 * Compression: LZ4 with 16 KiB block size

*Defaults Benchmark Results:*

We do not have data to support the hypothesis that the megamorphic call sites 
have become more expensive due to the addition of the NoopCompressor.

1. No significant change at the coordinator level (least relevant metric): 
[^15379_coordinator_defaults.png]
2. No significant change at the replica level (most relevant metric): 
[^15379_replica_defaults.png]
3. No significant change at the system resource level (second most relevant 
metrics): [^15379_system_defaults.png]

Our external flamegraph exports appear to be broken, but I looked at them and 
they also show no noticeable difference (I'll work with our performance team to 
fix exports so I can share the data here).

*Next steps for me:*
 * Squash, rebase, and re-run unit and dtests with latest trunk in preparation 
for commit
 * Run a benchmark of `ZstdCompressor` with and without the patch; we expect to 
see reduced CPU usage from flushes. I will likely have to reduce the read/write 
throughput because compactions take a crazy amount of our CPU time with this 
configuration.
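
The CPU tradeoff driving this ticket (flush with a cheap compressor, compact with a high-ratio one) can be illustrated with stdlib compressors as stand-ins. This is only an analogy: zlib level 1 vs level 9 is used as a proxy for the LZ4-vs-Zstd roles, not what Cassandra actually uses:

```python
import os
import time
import zlib

# Stand-in for the flush-compression tradeoff discussed above: a fast
# compressor spends far less CPU per flush than a high-ratio one, at the
# cost of a worse ratio. zlib levels serve purely as a stdlib proxy here.
data = os.urandom(1024) * 4096  # ~4 MiB with internal repetition, like a memtable

for level, role in [(1, "fast / flush-style"), (9, "high-ratio / compaction-style")]:
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"zlib level {level} ({role}): {ratio:.2%} of input in {elapsed * 1000:.1f} ms")
```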

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_coordinator_defaults.png, 
> 15379_replica_defaults.png, 15379_system_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately, 
> we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables as the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).

[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_system_defaults.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_coordinator_defaults.png, 
> 15379_replica_defaults.png, 15379_system_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately, 
> we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables as the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.
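
To make "defaulting to the same thing" concrete, here is a hypothetical sketch of the fallback the proposal implies. The dictionary keys mirror the yaml snippet quoted above; the helper function is mine and not a real Cassandra API:

```python
import copy

# Table parameters mirroring the proposed yaml above. `flush_compression` is
# intentionally omitted to demonstrate the fallback to `compression`.
default_table_parameters = {
    "compression": {
        "class_name": "LZ4Compressor",
        "parameters": {"chunk_length_in_kb": 16},
    },
}

def effective_flush_compression(table_params: dict) -> dict:
    """Hypothetical helper: flush compression defaults to the table's
    general compression params when not configured separately."""
    chosen = table_params.get("flush_compression", table_params["compression"])
    return copy.deepcopy(chosen)

flush = effective_flush_compression(default_table_parameters)
print(flush["class_name"], flush["parameters"]["chunk_length_in_kb"])  # -> LZ4Compressor 16
```

Setting an explicit `flush_compression` entry (e.g. the 4 KiB chunk variant from the yaml) would override the fallback without touching the compaction-time settings.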



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_replica_defaults.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_coordinator_defaults.png, 
> 15379_replica_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately, 
> we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables as the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15379) Make it possible to flush with a different compression strategy than we compact with

2020-04-19 Thread Joey Lynch (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Lynch updated CASSANDRA-15379:
---
Attachment: 15379_coordinator_defaults.png

> Make it possible to flush with a different compression strategy than we 
> compact with
> 
>
> Key: CASSANDRA-15379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15379
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/Config, Local/Memtable
>Reporter: Joey Lynch
>Assignee: Joey Lynch
>Priority: Normal
> Fix For: 4.0-alpha
>
> Attachments: 15379_coordinator_defaults.png
>
>
> [~josnyder] and I have been testing out CASSANDRA-14482 (Zstd compression) on 
> some of our most dense clusters and have been observing close to 50% 
> reduction in footprint with Zstd on some of our workloads! Unfortunately, 
> we have been running into an issue where the flush might take so long 
> (Zstd is slower to compress than LZ4) that we can actually block the next 
> flush and cause instability.
> Internally we are working around this with a very simple patch which flushes 
> SSTables as the default compression strategy (LZ4) regardless of the table 
> params. This is a simple solution, but I think the ideal solution might 
> be for the flush compression strategy to be configurable separately from the 
> table compression strategy (while defaulting to the same thing). Instead of 
> adding yet another compression option to the yaml (like hints and commitlog) 
> I was thinking of just adding it to the table parameters and then adding a 
> {{default_table_parameters}} yaml option like:
> {noformat}
> # Default table properties to apply on freshly created tables. The currently 
> supported defaults are:
> # * compression   : How are SSTables compressed in general (flush, 
> compaction, etc ...)
> # * flush_compression : How are SSTables compressed as they flush
> # supported
> default_table_parameters:
>   compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 16
>   flush_compression:
> class_name: 'LZ4Compressor'
> parameters:
>   chunk_length_in_kb: 4
> {noformat}
> This would have the nice effect as well of giving our configuration a path 
> forward to providing user specified defaults for table creation (so e.g. if a 
> particular user wanted to use a different default chunk_length_in_kb they can 
> do that).
> So the proposed (~mandatory) scope is:
> * Flush with a faster compression strategy
> I'd like to implement the following at the same time:
> * Per table flush compression configuration
> * Ability to default the table flush and compaction compression in the yaml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: test `IndexOptions +VersionSort`

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new fcb4d43  test `IndexOptions +VersionSort`
fcb4d43 is described below

commit fcb4d431a4b3546c98c1d6f72cc29cb757ece374
Author: mck 
AuthorDate: Sun Apr 19 19:53:31 2020 +0200

test `IndexOptions +VersionSort`
---
 content/doc/.htaccess | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/doc/.htaccess b/content/doc/.htaccess
index e1d2560..c744671 100644
--- a/content/doc/.htaccess
+++ b/content/doc/.htaccess
@@ -1 +1,2 @@
 Options +Indexes
+IndexOptions +VersionSort


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: Fix top level .htaccess ( without it we get 500 under doc/ )

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 0dd57e1  Fix top level .htaccess ( without it we get 500 under doc/ )
0dd57e1 is described below

commit 0dd57e186bd0162790b959273268cfb2270e480a
Author: mck 
AuthorDate: Sun Apr 19 19:45:39 2020 +0200

Fix top level .htaccess ( without it we get 500 under doc/ )
---
 src/.htaccess | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/.htaccess b/src/.htaccess
index e69de29..00c9ec3 100644
--- a/src/.htaccess
+++ b/src/.htaccess
@@ -0,0 +1 @@
+RewriteEngine On


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: test remove rewriterule (rather than .htaccess)

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 009898c  test remove rewriterule (rather than .htaccess)
009898c is described below

commit 009898c2e7f9e7f0d4fe73e16075f2c279152353
Author: mck 
AuthorDate: Sun Apr 19 19:41:06 2020 +0200

test remove rewriterule (rather than .htaccess)
---
 content/.htaccess | 2 --
 1 file changed, 2 deletions(-)

diff --git a/content/.htaccess b/content/.htaccess
index a254c35..00c9ec3 100644
--- a/content/.htaccess
+++ b/content/.htaccess
@@ -1,3 +1 @@
 RewriteEngine On
-
-RewriteRule /doc/ /doc/latest/ [NC,L]


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: Revert "test, remove content/doc/.htaccess"

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 719a42e  Revert "test, remove content/doc/.htaccess"
719a42e is described below

commit 719a42e40397dc07a3c7e754959909328d8916e1
Author: mck 
AuthorDate: Sun Apr 19 19:39:16 2020 +0200

Revert "test, remove content/doc/.htaccess"

This reverts commit 6687c0490014d76fdf22f525b7b8bed8198f439a.
---
 content/doc/.htaccess | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/doc/.htaccess b/content/doc/.htaccess
new file mode 100644
index 000..e1d2560
--- /dev/null
+++ b/content/doc/.htaccess
@@ -0,0 +1 @@
+Options +Indexes


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: test restore content/.htacecss

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 34048ef  test restore content/.htacecss
34048ef is described below

commit 34048efc1cca21199d3a3289f90cc2796663bd79
Author: mck 
AuthorDate: Sun Apr 19 19:37:55 2020 +0200

test restore content/.htacecss
---
 content/.htaccess | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/content/.htaccess b/content/.htaccess
new file mode 100644
index 000..a254c35
--- /dev/null
+++ b/content/.htaccess
@@ -0,0 +1,3 @@
+RewriteEngine On
+
+RewriteRule /doc/ /doc/latest/ [NC,L]


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: test, remove content/doc/.htaccess

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 6687c04  test, remove content/doc/.htaccess
6687c04 is described below

commit 6687c0490014d76fdf22f525b7b8bed8198f439a
Author: mck 
AuthorDate: Sun Apr 19 19:35:03 2020 +0200

test, remove content/doc/.htaccess
---
 content/doc/.htaccess | 1 -
 1 file changed, 1 deletion(-)

diff --git a/content/doc/.htaccess b/content/doc/.htaccess
deleted file mode 100644
index e1d2560..000
--- a/content/doc/.htaccess
+++ /dev/null
@@ -1 +0,0 @@
-Options +Indexes


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch master updated: test directory listing of doc/

2020-04-19 Thread mck
This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


The following commit(s) were added to refs/heads/master by this push:
 new c2132c5  test directory listing of doc/
c2132c5 is described below

commit c2132c50c7209c2d48c46dc2fd5e531de21c8b5d
Author: mck 
AuthorDate: Sun Apr 19 19:30:37 2020 +0200

test directory listing of doc/
---
 content/.htaccess  |  3 ---
 content/doc/.htaccess  |  1 +
 content/doc/index.html | 13 -
 3 files changed, 1 insertion(+), 16 deletions(-)

diff --git a/content/.htaccess b/content/.htaccess
deleted file mode 100644
index a254c35..000
--- a/content/.htaccess
+++ /dev/null
@@ -1,3 +0,0 @@
-RewriteEngine On
-
-RewriteRule /doc/ /doc/latest/ [NC,L]
diff --git a/content/doc/.htaccess b/content/doc/.htaccess
new file mode 100644
index 000..e1d2560
--- /dev/null
+++ b/content/doc/.htaccess
@@ -0,0 +1 @@
+Options +Indexes
diff --git a/content/doc/index.html b/content/doc/index.html
deleted file mode 100644
index 93a7231..000
--- a/content/doc/index.html
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
-
-  
-
-
-Page Redirection
-  
-  
-If you are not redirected automatically, click here
-  
-


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-19 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic reassigned CASSANDRA-15694:
-

Assignee: Stefan Miklosovic

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>
> There is a bug in the current code (trunk on 6th April 2020): if we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the individual components of an SSTable 
> are never counted, as only the "db" file is taken into account. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files 
> already sent.
>  
> The straightforward fix here is to distinguish when we are streaming entire 
> sstables and, in that case, include all manifest files in the computation. 
>  
> This issue depends on CASSANDRA-15657, because the resolution of whether we 
> stream entirely or not is obtained from a method which is performance sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]
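
The mismatch quoted above (19 files planned, 133 already sent; 133 = 19 × 7) is consistent with only the Data.db file being counted up front while all SSTable components are actually shipped. A hypothetical sketch of that arithmetic (the component list is illustrative, not the exact set Cassandra streams):

```python
# Illustrative SSTable component set (7 files per sstable); the real set
# depends on the sstable format and version.
COMPONENTS = ["Data.db", "Index.db", "Summary.db", "Filter.db",
              "CompressionInfo.db", "Statistics.db", "Digest.crc32"]

def files_to_send(num_sstables: int, entire_sstable: bool) -> int:
    # Entire-sstable (zero-copy) streaming ships every component per sstable;
    # counting only Data.db under-reports the plan by a factor of len(COMPONENTS).
    return num_sstables * (len(COMPONENTS) if entire_sstable else 1)

planned = files_to_send(19, entire_sstable=False)  # what netstats reported: 19
actual = files_to_send(19, entire_sstable=True)    # what was really sent: 133
print(planned, actual)
```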



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-19 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087019#comment-17087019
 ] 

Stefan Miklosovic commented on CASSANDRA-15694:
---

[~jasonstack] yes I noticed that, thanks. [~djoshi] could you please review 
this and merge? I think this is your area of expertise as you have written that 
whole SSTable streaming ... 

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Priority: Normal
>
> There is a bug in the current code (trunk on 6th April 2020): if we are 
> streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile respectively, the individual components of an SSTable 
> are never counted, as only the "db" file is taken into account. That 
> introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files 
> already sent.
>  
> The straightforward fix here is to distinguish when we are streaming entire 
> sstables and, in that case, include all manifest files in the computation. 
>  
> This issue depends on CASSANDRA-15657, because the resolution of whether we 
> stream entirely or not is obtained from a method which is performance sensitive 
> and computed every time. Once CASSANDRA-15657 (hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> branch with fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14586) Performant range containment check for SSTables

2020-04-19 Thread ZhaoYang (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-14586:
-
Resolution: Duplicate
Status: Resolved  (was: Open)

Marked it as duplicate of CASSANDRA-15657. Thanks

> Performant range containment check for SSTables
> ---
>
> Key: CASSANDRA-14586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14586
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Normal
>  Labels: Performance
>
> Related to CASSANDRA-14556, we would like to make the range containment check 
> performant. Right now we iterate over all partition keys in the SSTables and 
> determine the eligibility for Zero Copy streaming. This ticket is to explore 
> ways to make it performant by storing information in the SSTable's Metadata.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-19 Thread ZhaoYang (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087017#comment-17087017
 ] 

ZhaoYang commented on CASSANDRA-15694:
--

FYI, CASSANDRA-15657 has been merged.

> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Priority: Normal
>
> There is a bug in the current code (trunk as of 6th April 2020): when we 
> stream entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile, the individual components of an SSTable are not 
> counted; only the "db" file is. That introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files 
> already sent.
>  
> The straightforward fix here is to distinguish when we are streaming entire 
> sstables and, in that case, include all files listed in the manifest in the 
> computation.
>  
> This issue depends on CASSANDRA-15657 because whether we stream entire 
> sstables is determined by a method that is performance-sensitive and 
> computed every time. Once CASSANDRA-15657 (and hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> A branch with the fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]
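A toy illustration of the proposed counting fix (the component list and method names are hypothetical, not Cassandra's actual API): when an entire sstable is streamed, every component file should be counted toward the "files to send" total, so that the total can never fall below the number of files already sent.

```java
import java.util.Arrays;
import java.util.List;

public class StreamFileCount {
    // Illustrative component set; a real sstable's manifest (TOC) lists
    // the exact components present.
    static final List<String> COMPONENTS = Arrays.asList(
            "Data.db", "Index.db", "Filter.db", "Summary.db",
            "CompressionInfo.db", "Statistics.db", "Digest.crc32", "TOC.txt");

    static int filesToSend(int sstables, boolean entireSSTable) {
        // Entire-sstable (zero copy) streaming transfers all components,
        // so the progress total must reflect that; counting only Data.db
        // produces the "Sending 19 files ... Already sent 133 files" output
        // quoted above.
        return entireSSTable ? sstables * COMPONENTS.size() : sstables;
    }

    public static void main(String[] args) {
        System.out.println(filesToSend(19, false)); // legacy streaming: 19
        System.out.println(filesToSend(19, true));  // entire sstable: 152
    }
}
```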






[jira] [Updated] (CASSANDRA-15694) Statistics upon streaming of entire SSTables in Netstats is wrong

2020-04-19 Thread ZhaoYang (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-15694:
-
Description: 
There is a bug in the current code (trunk as of 6th April 2020): when we 
stream entire SSTables via CassandraEntireSSTableStreamWriter and 
CassandraOutgoingFile, the individual components of an SSTable are not 
counted; only the "db" file is. That introduces this bug:

 
{code:java}
Mode: NORMAL
Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
/127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 files, 
27664559 bytes total

/tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...

{code}
Basically, the number of files to be sent is lower than the number of files 
already sent.

 

The straightforward fix here is to distinguish when we are streaming entire 
sstables and, in that case, include all files listed in the manifest in the 
computation.

 

This issue depends on CASSANDRA-15657 because whether we stream entire 
sstables is determined by a method that is performance-sensitive and 
computed every time. Once CASSANDRA-15657 (and hence CASSANDRA-14586) is 
done, this ticket can be worked on.

 

A branch with the fix is here: 
[https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]

  was:
There is a bug in the current code (trunk on 6th April 2020) as if we are 
streaming entire SSTables via CassandraEntireSSTableStreamWriter and 
CassandraOutgoingFile respectively, there is not any update on particular 
components of a SSTable as it counts only "db" file to be there. That 
introduces this bug:

 
{code:java}
Mode: NORMAL
Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
/127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 files, 
27664559 bytes total

/tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...

{code}
Basically, number of files to be sent is lower than files sent.

 

The straightforward fix here is to distinguish when we are streaming entire 
sstables and in that case include all manifest files into computation. 

 

This issue relates to https://issues.apache.org/jira/browse/CASSANDRA-15657 
because the resolution whether we stream entirely or not is got from a method 
which is performance sensitive and computed every time. Once CASSANDRA-15657  
(hence CASSANDRA-14586) is done, this ticket can be worked on.

 

branch with fix is here: 
[https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]


> Statistics upon streaming of entire SSTables in Netstats is wrong
> -
>
> Key: CASSANDRA-15694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15694
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/nodetool
>Reporter: Stefan Miklosovic
>Priority: Normal
>
> There is a bug in the current code (trunk as of 6th April 2020): when we 
> stream entire SSTables via CassandraEntireSSTableStreamWriter and 
> CassandraOutgoingFile, the individual components of an SSTable are not 
> counted; only the "db" file is. That introduces this bug:
>  
> {code:java}
> Mode: NORMAL
> Rebuild 2c0b43f0-735d-11ea-9346-fb0ffe238736
> /127.0.0.2 Sending 19 files, 27664559 bytes total. Already sent 133 
> files, 27664559 bytes total
> 
> /tmp/dtests15682026295742741219/node2/data/distributed_test_keyspace/cf-196b3...
> 
> {code}
> Basically, the number of files to be sent is lower than the number of files 
> already sent.
>  
> The straightforward fix here is to distinguish when we are streaming entire 
> sstables and, in that case, include all files listed in the manifest in the 
> computation.
>  
> This issue depends on CASSANDRA-15657 because whether we stream entire 
> sstables is determined by a method that is performance-sensitive and 
> computed every time. Once CASSANDRA-15657 (and hence CASSANDRA-14586) is 
> done, this ticket can be worked on.
>  
> A branch with the fix is here: 
> [https://github.com/smiklosovic/cassandra/tree/CASSANDRA-15694]






[jira] [Comment Edited] (CASSANDRA-15623) When running CQLSH with STDIN input, exit with error status code if script fails

2020-04-19 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087004#comment-17087004
 ] 

Michael Semb Wever edited comment on CASSANDRA-15623 at 4/19/20, 1:34 PM:
--

[~djoshi], [~jrwest], looks like this broke ~25 cqlsh tests. 

- 3.0: 
https://ci-cassandra.apache.org/view/branches/job/Cassandra-3.0/16/testReport/
- 3.11: 
https://ci-cassandra.apache.org/view/Cassandra%203.11/job/Cassandra-3.11/20/testReport/
- 4.0: 
https://ci-cassandra.apache.org/view/Cassandra%204.0/job/Cassandra-trunk/84/testReport/


Where were the CI runs before this was committed?


was (Author: michaelsembwever):
[~djoshi], [~jrwest], looks like this broke ~25 cqlsh tests. That's nearly half 
of the cqlsh tests.

- 3.0: 
https://ci-cassandra.apache.org/view/branches/job/Cassandra-3.0/16/testReport/
- 3.11: 
https://ci-cassandra.apache.org/view/Cassandra%203.11/job/Cassandra-3.11/20/testReport/
- 4.0: 
https://ci-cassandra.apache.org/view/Cassandra%204.0/job/Cassandra-trunk/84/testReport/


Where were the CI runs before this was committed?

> When running CQLSH with STDIN input, exit with error status code if script 
> fails
> 
>
> Key: CASSANDRA-15623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15623
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Jacob Becker
>Assignee: Jacob Becker
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 3.0.21, 3.11.7, 4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Given that CASSANDRA-6344 has been in place for years, and that scripts 
> submitted with the `-e` option already exit with an error status on failure, 
> it is very surprising that scripts submitted via STDIN (i.e. piped in) always 
> exit with a zero status code, regardless of errors. I believe this should be 
> fixed.
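The desired contract can be sketched as: track whether any statement in the piped-in script failed, and derive the process exit status from that. cqlsh itself is Python; this is only a language-agnostic illustration with hypothetical names, not the actual patch.

```java
import java.util.Arrays;
import java.util.List;

public class BatchExitStatus {
    // Given the per-statement success flags of a batch run, any single
    // failure makes the whole run exit non-zero, matching how -e behaves.
    static int exitStatusFor(List<Boolean> statementSucceeded) {
        return statementSucceeded.stream().allMatch(ok -> ok) ? 0 : 1;
    }

    public static void main(String[] args) {
        System.out.println(exitStatusFor(Arrays.asList(true, true)));  // 0
        System.out.println(exitStatusFor(Arrays.asList(true, false))); // 1
        // A real tool would end with System.exit(status) so shell pipelines
        // and CI scripts can detect the failure.
    }
}
```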






[jira] [Commented] (CASSANDRA-15623) When running CQLSH with STDIN input, exit with error status code if script fails

2020-04-19 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087004#comment-17087004
 ] 

Michael Semb Wever commented on CASSANDRA-15623:


[~djoshi], [~jrwest], looks like this broke ~25 cqlsh tests. That's nearly half 
of the cqlsh tests.

- 3.0: 
https://ci-cassandra.apache.org/view/branches/job/Cassandra-3.0/16/testReport/
- 3.11: 
https://ci-cassandra.apache.org/view/Cassandra%203.11/job/Cassandra-3.11/20/testReport/
- 4.0: 
https://ci-cassandra.apache.org/view/Cassandra%204.0/job/Cassandra-trunk/84/testReport/


Where were the CI runs before this was committed?

> When running CQLSH with STDIN input, exit with error status code if script 
> fails
> 
>
> Key: CASSANDRA-15623
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15623
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Jacob Becker
>Assignee: Jacob Becker
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 3.0.21, 3.11.7, 4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Given that CASSANDRA-6344 has been in place for years, and that scripts 
> submitted with the `-e` option already exit with an error status on failure, 
> it is very surprising that scripts submitted via STDIN (i.e. piped in) always 
> exit with a zero status code, regardless of errors. I believe this should be 
> fixed.






[jira] [Updated] (CASSANDRA-15472) Read failure due to exception from metrics-core dependency

2020-04-19 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-15472:
---
Labels:   (was: lhf)

> Read failure due to exception from metrics-core dependency
> --
>
> Key: CASSANDRA-15472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15472
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> Stacktrace
> {code:java}
> Uncaught exception on thread Thread[SharedPool-Worker-27,5,main]: {}
> java.util.NoSuchElementException: null
>   at 
> java.util.concurrent.ConcurrentSkipListMap.firstKey(ConcurrentSkipListMap.java:2053)
>  ~[na:1.8.0_222]
>   at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:102)
>  ~[metrics-core-2.2.0.jar:na]
>   at 
> com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
>  ~[metrics-core-2.2.0.jar:na]
>   at com.yammer.metrics.core.Histogram.update(Histogram.java:110) 
> ~[metrics-core-2.2.0.jar:na]
>   at com.yammer.metrics.core.Timer.update(Timer.java:198) 
> ~[metrics-core-2.2.0.jar:na]
>   at com.yammer.metrics.core.Timer.update(Timer.java:76) 
> ~[metrics-core-2.2.0.jar:na]
>   at 
> org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:108) 
> ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:114) 
> ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1897)
>  ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
> ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
>  ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
> ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_222]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [nf-cassandra-2.1.19.10.jar:2.1.19.10]
>   at java.lang.Thread.run(Thread.java:748) [na:1.8.0_222]
> {code}
> This [issue|https://github.com/dropwizard/metrics/issues/1278] has been 
> [fixed|https://github.com/dropwizard/metrics/pull/1436] in 
> [v4.0.6|https://github.com/dropwizard/metrics/releases/tag/v4.0.6].
> This was observed on a 2.1.19 cluster, but it would impact pretty much any 
> version of C*, since we depend on older versions of metrics-core that do not 
> have the fix.
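The failing call can be reproduced in isolation: `ConcurrentSkipListMap.firstKey()` throws `NoSuchElementException` on an empty map rather than returning null, which is what `ExponentiallyDecayingSample.update` surfaces when (per the linked dropwizard issue) the sample's backing map is emptied concurrently. A minimal demonstration of the throwing call, not the dropwizard fix itself:

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentSkipListMap;

public class EmptyFirstKey {
    public static void main(String[] args) {
        // metrics-core 2.x backs the sample with a ConcurrentSkipListMap
        // keyed by priority; firstKey() on an empty map throws:
        ConcurrentSkipListMap<Double, Long> values = new ConcurrentSkipListMap<>();
        try {
            values.firstKey();
            System.out.println("no exception");
        } catch (NoSuchElementException e) {
            // The same exception seen at the top of the stack trace above.
            System.out.println("NoSuchElementException");
        }
    }
}
```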


