[jira] [Assigned] (CASSANDRA-16290) Consistency can be violated when bootstrap or decommission is resumed after node restart

2021-09-03 Thread Paulo Motta (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reassigned CASSANDRA-16290:
---

Assignee: (was: Paulo Motta)

> Consistency can be violated when bootstrap or decommission is resumed after 
> node restart
> 
>
> Key: CASSANDRA-16290
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16290
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Bootstrap and Decommission
>Reporter: Paulo Motta
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Since CASSANDRA-12008, ranges successfully transferred during decommission 
> are saved in the {{system.transferred_ranges}} table. This allows skipping 
> ranges already transferred when a failed decommission is retried with 
> {{nodetool decommission}}.
> If, instead of resuming the decommission, an operator restarts the node, waits 
> N minutes and then performs a new decommission, the previously transferred 
> ranges will be skipped during streaming, and any writes received by the 
> decommissioned node during these N minutes will not be replicated to the new 
> range owner, which violates consistency.
> This issue is analogous to the one mentioned [in this 
> comment|https://issues.apache.org/jira/browse/CASSANDRA-8838?focusedCommentId=16900234&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16900234]
>  for resumable bootstrap (CASSANDRA-8838).
> In order to prevent consistency violations we should clear the 
> {{system.transferred_ranges}} state during node restart, and perhaps add a 
> system property to disable that behavior. While we're at it, we should change 
> the default of {{-Dcassandra.reset_bootstrap_progress}} to {{true}} so that 
> the {{system.available_ranges}} state is cleared by default when a 
> bootstrapping node is restarted.
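The failure mode described above can be sketched in a few lines; the function, flag, and range names below are purely illustrative, not Cassandra internals:

```python
# Illustrative sketch of the failure mode; names here are hypothetical and
# not Cassandra internals. It models why keeping transferred-range state
# across a restart can drop writes during a resumed decommission.

def ranges_to_stream(all_ranges, transferred_ranges, reset_on_restart):
    """Return the ranges a resumed decommission would stream."""
    if reset_on_restart:
        # Proposed fix: clear system.transferred_ranges state on startup,
        # so every range is streamed again.
        transferred_ranges = set()
    return [r for r in all_ranges if r not in transferred_ranges]

all_ranges = ["(0,100]", "(100,200]", "(200,300]"]
already_sent = {"(0,100]"}  # transferred before the node was restarted

# Writes received in (0,100] after the restart are lost if that range is
# skipped on the retried decommission:
print(ranges_to_stream(all_ranges, already_sent, reset_on_restart=False))
print(ranges_to_stream(all_ranges, already_sent, reset_on_restart=True))
```

With `reset_on_restart=False` the first range is skipped and its post-restart writes never reach the new owner; clearing the state streams everything again at the cost of redundant transfer.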



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-16841) Unexpectedly ignored dtests

2021-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409692#comment-17409692
 ] 

Andres de la Peña edited comment on CASSANDRA-16841 at 9/3/21, 7:56 PM:


Thanks for the patch. Finding those skipped tests is a very good catch. Overall 
the patch looks good to me; I have left some minor suggestions on the PR.
{quote}Treat --only-resource-intensive-tests in the same way as 
--force-resource-intensive-tests, so it will be enough to just specify it even 
with no sufficient resources.
{quote}
I'm not sure about this; IMO it makes sense the way it currently works. The 
flag {{\-\-only-resource-intensive-tests}} selects the tests to run, and 
{{\-\-force-resource-intensive-tests}} disables the safety mechanism that 
prevents running the tests if there aren't enough resources available.

On one hand, I understand that the current meaning of 
{{\-\-only-resource-intensive-tests}} without 
{{\-\-force-resource-intensive-tests}} is "run the resource-intensive tests, 
but only if you have the resources to do so". I guess that a (convoluted) 
example use case for this could be a CircleCI job for running 
resource-intensive tests. This job would succeed in LOWRES and MIDRES without 
running any tests, due to the lack of resources, and would actually run the 
tests in HIGHRES.

On the other hand, I understand that in most cases when you use 
{{\-\-only-resource-intensive-tests}} you are probably going to want to run 
the tests with or without the resources. I don't think that either approach is 
wrong or much better than the other, so I'm more inclined to preserve the 
current behaviour. We can always add some more information in the descriptions 
of the flags if we think this is going to be confusing for users. What do you 
think? Am I missing something?


was (Author: adelapena):
Thanks for the patch. Finding those skipped tests is a very good catch. Overall 
the patch looks good to me; I have left some minor suggestions on the PR.
{quote}Treat --only-resource-intensive-tests in the same way as 
--force-resource-intensive-tests, so it will be enough to just specify it even 
with no sufficient resources.
{quote}
I'm not sure about this; IMO it makes sense the way it currently works. The 
flag {{--only-resource-intensive-tests}} selects the tests to run, and 
{{--force-resource-intensive-tests}} disables the safety mechanism that 
prevents running the tests if there aren't enough resources available.

On one hand, I understand that the current meaning of 
{{--only-resource-intensive-tests}} without 
{{--force-resource-intensive-tests}} is "run the resource-intensive tests, but 
only if you have the resources to do so". I guess that a (convoluted) example 
use case for this could be a CircleCI job for running resource-intensive 
tests. This job would succeed in LOWRES and MIDRES without running any tests, 
due to the lack of resources, and would actually run the tests in HIGHRES.

On the other hand, I understand that in most cases when you use 
{{--only-resource-intensive-tests}} you are probably going to want to run the 
tests with or without the resources. I don't think that either approach is 
wrong or much better than the other, so I'm more inclined to preserve the 
current behaviour. We can always add some more information in the descriptions 
of the flags if we think this is going to be confusing for users. What do you 
think? Am I missing something?

> Unexpectedly ignored dtests
> ---
>
> Key: CASSANDRA-16841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Assignee: Ruslan Fomkin
>Priority: Normal
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> An issue I was hit by:
> When one class in a dtest file is marked as resource intensive, all tests in 
> all of the file's classes are treated as resource intensive. For example, 
> [repair_tests/repair_test.py|https://github.com/apache/cassandra-dtest/blob/trunk/repair_tests/repair_test.py]
>  contains three classes and the last class is marked as resource intensive:
> {code:java}
> @pytest.mark.resource_intensive
> class TestRepairDataSystemTable(Tester):
> {code}
> So if I try to run an unmarked class: 
> {code:java}
> pytest --cassandra-dir=../cassandra repair_tests/repair_test.py::TestRepair 
> --collect-only --skip-resource-intensive-tests
> {code}
> then all tests are ignored
> {code:java}
> collected 36 items / 36 deselected 
> {code}
> This is because a test is treated as marked if any class in the same file 
> has the mark. This bug was introduced in the fix for CASSANDRA-16399. Before, 
> only upgrade tests had such behaviour, i.e., if a class was marked as an 
> upgrade test, then all tests in the file were upgrade tests.

[jira] [Commented] (CASSANDRA-16841) Unexpectedly ignored dtests

2021-09-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409692#comment-17409692
 ] 

Andres de la Peña commented on CASSANDRA-16841:
---

Thanks for the patch. Finding those skipped tests is a very good catch. Overall 
the patch looks good to me; I have left some minor suggestions on the PR.
{quote}Treat --only-resource-intensive-tests in the same way as 
--force-resource-intensive-tests, so it will be enough to just specify it even 
with no sufficient resources.
{quote}
I'm not sure about this; IMO it makes sense the way it currently works. The 
flag {{--only-resource-intensive-tests}} selects the tests to run, and 
{{--force-resource-intensive-tests}} disables the safety mechanism that 
prevents running the tests if there aren't enough resources available.

On one hand, I understand that the current meaning of 
{{--only-resource-intensive-tests}} without 
{{--force-resource-intensive-tests}} is "run the resource-intensive tests, but 
only if you have the resources to do so". I guess that a (convoluted) example 
use case for this could be a CircleCI job for running resource-intensive 
tests. This job would succeed in LOWRES and MIDRES without running any tests, 
due to the lack of resources, and would actually run the tests in HIGHRES.

On the other hand, I understand that in most cases when you use 
{{--only-resource-intensive-tests}} you are probably going to want to run the 
tests with or without the resources. I don't think that either approach is 
wrong or much better than the other, so I'm more inclined to preserve the 
current behaviour. We can always add some more information in the descriptions 
of the flags if we think this is going to be confusing for users. What do you 
think? Am I missing something?

> Unexpectedly ignored dtests
> ---
>
> Key: CASSANDRA-16841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Assignee: Ruslan Fomkin
>Priority: Normal
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> An issue I was hit by:
> When one class in a dtest file is marked as resource intensive, all tests in 
> all of the file's classes are treated as resource intensive. For example, 
> [repair_tests/repair_test.py|https://github.com/apache/cassandra-dtest/blob/trunk/repair_tests/repair_test.py]
>  contains three classes and the last class is marked as resource intensive:
> {code:java}
> @pytest.mark.resource_intensive
> class TestRepairDataSystemTable(Tester):
> {code}
> So if I try to run an unmarked class: 
> {code:java}
> pytest --cassandra-dir=../cassandra repair_tests/repair_test.py::TestRepair 
> --collect-only --skip-resource-intensive-tests
> {code}
> then all tests are ignored
> {code:java}
> collected 36 items / 36 deselected 
> {code}
> This is because a test is treated as marked if any class in the same file 
> has the mark. This bug was introduced in the fix for CASSANDRA-16399. Before, 
> only upgrade tests had such behaviour, i.e., if a class was marked as an 
> upgrade test, then all tests in the file were upgrade tests.
>  
> This bug, for example, means that if the same file contains one class marked 
> with vnodes and another class marked with no_vnodes, then no tests will be 
> executed in the file.
> I also noticed another issue: if a test run is executed with the argument 
> {{--only-resource-intensive-tests}} and there are insufficient resources for 
> resource-intensive tests, then no tests are executed. Thus it is necessary to 
> also provide {{--force-resource-intensive-tests}}.
> Suggestions for solutions:
>  # Require every class to be marked and remove the special case of upgrade 
> tests. This will simplify the implementation and might be more obvious for 
> newcomers.
>  # Treat {{--only-resource-intensive-tests}} in the same way as 
> {{--force-resource-intensive-tests}}, so it will be enough to just specify it 
> even when resources are insufficient.
>  
>  
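The per-class marking behaviour discussed in the ticket can be sketched as follows; the helper and class names are hypothetical, not the actual dtest conftest code:

```python
# Illustrative sketch (not the real dtest conftest) contrasting the buggy
# file-level mark check with a fixed per-class check when deselecting
# resource-intensive tests.

class FakeTest:
    def __init__(self, name, class_marks, file_marks):
        self.name = name
        self.class_marks = class_marks  # marks on the test's own class
        self.file_marks = file_marks    # union of marks over all classes in the file

def deselect_buggy(tests, skip_resource_intensive):
    # Bug: a test counts as marked if ANY class in its file carries the mark.
    return [t for t in tests
            if not (skip_resource_intensive and "resource_intensive" in t.file_marks)]

def deselect_fixed(tests, skip_resource_intensive):
    # Fix: only the marks on the test's own class matter.
    return [t for t in tests
            if not (skip_resource_intensive and "resource_intensive" in t.class_marks)]

file_marks = {"resource_intensive"}  # one class in the file is marked
tests = [FakeTest("TestRepair::test_simple", set(), file_marks),
         FakeTest("TestRepairDataSystemTable::test_size",
                  {"resource_intensive"}, file_marks)]

print(len(deselect_buggy(tests, True)))  # 0 -> everything deselected (the bug)
print(len(deselect_fixed(tests, True)))  # 1 -> only the marked class is skipped
```

This mirrors the `36 deselected` symptom in the report: with the file-level check, one marked class is enough to deselect every test in the file.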






[jira] [Commented] (CASSANDRA-16896) Add soft/hard limits to local reads to protect against reading too much data in a single query

2021-09-03 Thread Caleb Rackliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409665#comment-17409665
 ] 

Caleb Rackliffe commented on CASSANDRA-16896:
-

My first pass at review is complete, and I've dropped a bunch of minor nits as 
well as a handful of conversation-starting comments on things we might consider 
changing. Overall, things are looking pretty good, and it's especially nice 
that we had a fresh canvas to do nested YAML configuration and could fix some 
issues in the existing heap accounting utilities.

> Add soft/hard limits to local reads to protect against reading too much data 
> in a single query
> --
>
> Key: CASSANDRA-16896
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16896
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/Local Write-Read Paths
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Add soft/hard limits to local reads to protect against reading too much data 
> in a single query.
> This is an extension of the existing work to add warnings/aborts to large 
> partitions (CASSANDRA-16850), with the core difference being that this applies 
> locally rather than at the coordinator.
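A minimal sketch of the soft/hard guardrail pattern the ticket describes, with illustrative names and thresholds rather than Cassandra's actual configuration:

```python
# Hedged sketch of a soft/hard read-size guardrail: warn past the soft
# threshold, abort past the hard one. Names and limits are illustrative,
# not Cassandra's implementation.

class ReadAbortedError(Exception):
    pass

def check_read_size(bytes_read, soft_limit, hard_limit, warnings):
    if hard_limit and bytes_read > hard_limit:
        raise ReadAbortedError(
            f"local read of {bytes_read} bytes exceeds hard limit {hard_limit}")
    if soft_limit and bytes_read > soft_limit:
        warnings.append(
            f"local read of {bytes_read} bytes exceeds soft limit {soft_limit}")

ws = []
check_read_size(10_000, soft_limit=8_000, hard_limit=100_000, warnings=ws)
print(len(ws))  # 1 -> warned but the read proceeds
try:
    check_read_size(200_000, soft_limit=8_000, hard_limit=100_000, warnings=ws)
except ReadAbortedError:
    print("aborted")  # hard limit crossed -> the read is rejected
```

The soft limit gives operators a signal before queries start failing, while the hard limit caps how much data a single local read can pull.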






[jira] [Updated] (CASSANDRA-16822) Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages

2021-09-03 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-16822:
-
  Fix Version/s: (was: 3.11.x)
 (was: 3.0.x)
 (was: 2.2.x)
 3.11.12
 3.0.26
 2.2.20
  Since Version: 3.0.25
Source Control Link: 
https://github.com/apache/cassandra/commit/2e547dfbc40e6b500db506353bced161c66f3113
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

I didn't find any better solutions.  Committed.

> Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages 
> -
>
> Key: CASSANDRA-16822
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16822
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Ville Savolainen
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 2.2.20, 3.0.26, 3.11.12
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> cqlsh does not work because cqlshlib is in the wrong location; I think it 
> should be under python2.7 for cassandra-3.11.
> cassandra.spec seems to define the python interpreter as /usr/bin/python, so I 
> think the build environment has changed since 3.11.10 such that 
> /usr/bin/python points to python3 instead of python2.
> cassandra-3.11.10 did have cqlshlib in the python2.7 site-packages:
> {noformat}
> $ rpm -qpl cassandra-3.11.11-1.noarch.rpm |grep cql
> warning: cassandra-3.11.11-1.noarch.rpm: Header V4 RSA/SHA512 Signature, key 
> ID 0b84c041: NOKEY
> /etc/cassandra/default.conf/cqlshrc.sample
> /usr/bin/cqlsh
> /usr/bin/cqlsh.py
> /usr/bin/debug-cql
> /usr/lib/python3.6/site-packages/cqlshlib
> /usr/lib/python3.6/site-packages/cqlshlib/__init__.py
> /usr/lib/python3.6/site-packages/cqlshlib/copyutil.py
> /usr/lib/python3.6/site-packages/cqlshlib/cql3handling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlshhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/displaying.py
> /usr/lib/python3.6/site-packages/cqlshlib/formatting.py
> /usr/lib/python3.6/site-packages/cqlshlib/helptopics.py
> /usr/lib/python3.6/site-packages/cqlshlib/pylexotron.py
> /usr/lib/python3.6/site-packages/cqlshlib/saferscanner.py
> /usr/lib/python3.6/site-packages/cqlshlib/sslhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/tracing.py
> /usr/lib/python3.6/site-packages/cqlshlib/util.py
> /usr/lib/python3.6/site-packages/cqlshlib/wcwidth.py
> {noformat}
>  
> Pull request to cassandra-3.11: https://github.com/apache/cassandra/pull/1124
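The root cause boils down to the install location tracking the build interpreter. A small sketch, assuming a CPython `sysconfig` query reflects where a `setup.py install` run by that interpreter would place cqlshlib:

```python
# Illustration of why the packaging interpreter matters: the site-packages
# directory an install targets is derived from the interpreter running the
# build, so /usr/bin/python resolving to python3 lands cqlshlib under
# python3.x instead of python2.7. (Assumption: the RPM build installs via
# the interpreter's default purelib path.)
import sysconfig

purelib = sysconfig.get_paths()["purelib"]
print(purelib)  # e.g. /usr/lib/python3.6/site-packages under a python3.6 build

# cqlsh in Cassandra 3.11 runs under python2, so it looks for cqlshlib in
# the python2.7 site-packages and never sees the directory printed above.
```

Pinning the spec's interpreter to python2 (or shipping the library in both locations) keeps the install path and cqlsh's import path in agreement.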






[jira] [Updated] (CASSANDRA-16822) Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages

2021-09-03 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-16822:
-
Status: Ready to Commit  (was: Review In Progress)

> Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages 
> -
>
> Key: CASSANDRA-16822
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16822
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Ville Savolainen
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> cqlsh does not work because cqlshlib is in the wrong location; I think it 
> should be under python2.7 for cassandra-3.11.
> cassandra.spec seems to define the python interpreter as /usr/bin/python, so I 
> think the build environment has changed since 3.11.10 such that 
> /usr/bin/python points to python3 instead of python2.
> cassandra-3.11.10 did have cqlshlib in the python2.7 site-packages:
> {noformat}
> $ rpm -qpl cassandra-3.11.11-1.noarch.rpm |grep cql
> warning: cassandra-3.11.11-1.noarch.rpm: Header V4 RSA/SHA512 Signature, key 
> ID 0b84c041: NOKEY
> /etc/cassandra/default.conf/cqlshrc.sample
> /usr/bin/cqlsh
> /usr/bin/cqlsh.py
> /usr/bin/debug-cql
> /usr/lib/python3.6/site-packages/cqlshlib
> /usr/lib/python3.6/site-packages/cqlshlib/__init__.py
> /usr/lib/python3.6/site-packages/cqlshlib/copyutil.py
> /usr/lib/python3.6/site-packages/cqlshlib/cql3handling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlshhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/displaying.py
> /usr/lib/python3.6/site-packages/cqlshlib/formatting.py
> /usr/lib/python3.6/site-packages/cqlshlib/helptopics.py
> /usr/lib/python3.6/site-packages/cqlshlib/pylexotron.py
> /usr/lib/python3.6/site-packages/cqlshlib/saferscanner.py
> /usr/lib/python3.6/site-packages/cqlshlib/sslhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/tracing.py
> /usr/lib/python3.6/site-packages/cqlshlib/util.py
> /usr/lib/python3.6/site-packages/cqlshlib/wcwidth.py
> {noformat}
>  
> Pull request to cassandra-3.11: https://github.com/apache/cassandra/pull/1124






[jira] [Updated] (CASSANDRA-16822) Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages

2021-09-03 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-16822:
-
Reviewers: Brandon Williams, Brandon Williams  (was: Brandon Williams)
   Brandon Williams, Brandon Williams
   Status: Review In Progress  (was: Patch Available)

> Wrong cqlsh python library location in cassandra-3.11.11-1 rhel packages 
> -
>
> Key: CASSANDRA-16822
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16822
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Ville Savolainen
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> cqlsh does not work because cqlshlib is in the wrong location; I think it 
> should be under python2.7 for cassandra-3.11.
> cassandra.spec seems to define the python interpreter as /usr/bin/python, so I 
> think the build environment has changed since 3.11.10 such that 
> /usr/bin/python points to python3 instead of python2.
> cassandra-3.11.10 did have cqlshlib in the python2.7 site-packages:
> {noformat}
> $ rpm -qpl cassandra-3.11.11-1.noarch.rpm |grep cql
> warning: cassandra-3.11.11-1.noarch.rpm: Header V4 RSA/SHA512 Signature, key 
> ID 0b84c041: NOKEY
> /etc/cassandra/default.conf/cqlshrc.sample
> /usr/bin/cqlsh
> /usr/bin/cqlsh.py
> /usr/bin/debug-cql
> /usr/lib/python3.6/site-packages/cqlshlib
> /usr/lib/python3.6/site-packages/cqlshlib/__init__.py
> /usr/lib/python3.6/site-packages/cqlshlib/copyutil.py
> /usr/lib/python3.6/site-packages/cqlshlib/cql3handling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/cqlshhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/displaying.py
> /usr/lib/python3.6/site-packages/cqlshlib/formatting.py
> /usr/lib/python3.6/site-packages/cqlshlib/helptopics.py
> /usr/lib/python3.6/site-packages/cqlshlib/pylexotron.py
> /usr/lib/python3.6/site-packages/cqlshlib/saferscanner.py
> /usr/lib/python3.6/site-packages/cqlshlib/sslhandling.py
> /usr/lib/python3.6/site-packages/cqlshlib/tracing.py
> /usr/lib/python3.6/site-packages/cqlshlib/util.py
> /usr/lib/python3.6/site-packages/cqlshlib/wcwidth.py
> {noformat}
>  
> Pull request to cassandra-3.11: https://github.com/apache/cassandra/pull/1124






[cassandra] branch cassandra-4.0 updated (49e8302 -> 752160c)

2021-09-03 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a change to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 49e8302  Merge branch 'cassandra-3.11' into cassandra-4.0
 new 2e547df  Add python2 location to RPMs
 new 615372f  Merge branch 'cassandra-2.2' into cassandra-3.0
 new ecf186f  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 752160c  Merge branch 'cassandra-3.11' into cassandra-4.0

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:




[cassandra] 01/01: Merge branch 'cassandra-3.11' into cassandra-4.0

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a commit to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 752160c5148ba114c54b049d969f02be6f19b46c
Merge: 49e8302 ecf186f
Author: Brandon Williams 
AuthorDate: Fri Sep 3 13:50:07 2021 -0500

Merge branch 'cassandra-3.11' into cassandra-4.0





[cassandra] branch cassandra-3.11 updated (d6e1c41 -> ecf186f)

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from d6e1c41  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 2e547df  Add python2 location to RPMs
 new 615372f  Merge branch 'cassandra-2.2' into cassandra-3.0
 new ecf186f  Merge branch 'cassandra-3.0' into cassandra-3.11

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt   | 2 ++
 redhat/cassandra.spec | 5 +
 2 files changed, 7 insertions(+)




[cassandra] 01/01: Merge branch 'cassandra-4.0' into trunk

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 7294210f6bd5a262a824a261164cd155099a39a7
Merge: 163a4d7 752160c
Author: Brandon Williams 
AuthorDate: Fri Sep 3 13:50:17 2021 -0500

Merge branch 'cassandra-4.0' into trunk





[cassandra] branch trunk updated (163a4d7 -> 7294210)

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 163a4d7  Merge branch 'cassandra-4.0' into trunk
 new 2e547df  Add python2 location to RPMs
 new 615372f  Merge branch 'cassandra-2.2' into cassandra-3.0
 new ecf186f  Merge branch 'cassandra-3.0' into cassandra-3.11
 new 752160c  Merge branch 'cassandra-3.11' into cassandra-4.0
 new 7294210  Merge branch 'cassandra-4.0' into trunk

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:




[cassandra] 01/01: Merge branch 'cassandra-2.2' into cassandra-3.0

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a commit to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 615372f9087a622e20dfa25159d366e4fc8451f6
Merge: 67eb22e 2e547df
Author: Brandon Williams 
AuthorDate: Fri Sep 3 13:46:52 2021 -0500

Merge branch 'cassandra-2.2' into cassandra-3.0

 CHANGES.txt   | 2 ++
 redhat/cassandra.spec | 5 +
 2 files changed, 7 insertions(+)

diff --cc CHANGES.txt
index 666dd92,a380853..e96e1fa
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,46 -1,7 +1,48 @@@
 -2.2.20
 +3.0.26:
 + * Fix materialized view schema backup as table (CASSANDRA-12734)
 + * Avoid signaling DigestResolver until the minimum number of responses are 
guaranteed to be visible (CASSANDRA-16883)
 + * Fix secondary indexes on primary key columns skipping some writes 
(CASSANDRA-16868)
 + * Fix incorrect error message in LegacyLayout (CASSANDRA-15136)
 + * Use JMX to validate nodetool --jobs parameter (CASSANDRA-16104)
 + * Handle properly UnsatisfiedLinkError in NativeLibrary#getProcessID() 
(CASSANDRA-16578)
 + * Remove mutation data from error log message (CASSANDRA-16817)
++Merged from 2.2:
+  * Add python2 location to RPMs (CASSANDRA-16822)
 +
 +
 +3.0.25:
 + * Binary releases no longer bundle the apidocs (javadoc) (CASSANDRA-16557)
 + * Migrate dependency handling from maven-ant-tasks to resolver-ant-tasks, 
removing lib/ directory from version control (CASSANDRA-16557)
 + * Don't allow seeds to replace without using unsafe (CASSANDRA-14463)
 + * Calculate time remaining correctly for all compaction types in 
compactionstats (CASSANDRA-14701)
 + * Receipt of gossip shutdown notification updates TokenMetadata 
(CASSANDRA-16796)
 + * Count bloom filter misses correctly (CASSANDRA-12922)
 + * Reject token() in MV WHERE clause (CASSANDRA-13464)
 + * Ensure java executable is on the path (CASSANDRA-14325)
 + * Make speculative retry parameter case-insensitive for backward 
compatibility with 2.1 (CASSANDRA-16467)
 + * Push digest mismatch exceptions to trace (CASSANDRA-14900)
 + * Support long names in nodetool output (CASSANDRA-14162)
 + * Handle correctly the exceptions thrown by custom QueryHandler constructors 
(CASSANDRA-16703)
 + * Adding columns via ALTER TABLE can generate corrupt sstables 
(CASSANDRA-16735)
 + * Add flag to disable ALTER...DROP COMPACT STORAGE statements 
(CASSANDRA-16733)
 + * Clean transaction log leftovers at the beginning of sstablelevelreset and 
sstableofflinerelevel (CASSANDRA-12519)
   * CQL shell should prefer newer TLS version by default (CASSANDRA-16695)
 - * Fix Debian init start/stop (CASSANDRA-15770)
 + * Ensure that existing empty rows are properly returned (CASSANDRA-16671)
 + * Invalidate prepared statements on DROP COMPACT (CASSANDRA-16712)
 + * Failure to execute queries should emit a KPI other than read 
timeout/unavailable so it can be alerted/tracked (CASSANDRA-16581)
 + * Don't wait on schema versions from replacement target when replacing 
(CASSANDRA-16692)
 + * StandaloneVerifier does not fail when unable to verify SSTables, it only 
fails if Corruption is thrown (CASSANDRA-16683)
 + * Fix bloom filter false ratio calculation by including true negatives 
(CASSANDRA-15834)
 + * Prevent loss of commit log data when moving sstables between nodes 
(CASSANDRA-16619)
 + * Fix materialized view builders inserting truncated data (CASSANDRA-16567)
 + * Don't wait for schema migrations from removed nodes (CASSANDRA-16577)
 + * Scheduled (delayed) schema pull tasks should not run after MIGRATION stage 
shutdown during decommission (CASSANDRA-16495)
 + * Ignore trailing zeros in hint files (CASSANDRA-16523)
 + * Refuse DROP COMPACT STORAGE if some 2.x sstables are in use 
(CASSANDRA-15897)
 + * Fix ColumnFilter::toString not returning a valid CQL fragment 
(CASSANDRA-16483)
 + * Fix ColumnFilter behaviour to prevent digest mitmatches during upgrades 
(CASSANDRA-16415)
 + * Avoid pushing schema mutations when setting up distributed system 
keyspaces locally (CASSANDRA-16387)
 +Merged from 2.2:
   * Remove ant targets list-jvm-dtests and ant list-jvm-upgrade-dtests 
(CASSANDRA-16519)
   * Fix centos packaging for arm64, >=4.0 rpm's now require python3 
(CASSANDRA-16477)
   * Make TokenMetadata's ring version increments atomic (CASSANDRA-16286)




[cassandra] branch cassandra-3.0 updated (67eb22e -> 615372f)

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a change to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 67eb22e  Fix materialized view schema backup as table patch by Zhao 
Yang, Ekaterina Dimitrova; reviewed by Benjamin Lerer, Ekaterina Dimitrova for 
CASSANDRA-12734
 new 2e547df  Add python2 location to RPMs
 new 615372f  Merge branch 'cassandra-2.2' into cassandra-3.0

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt   | 2 ++
 redhat/cassandra.spec | 5 +
 2 files changed, 7 insertions(+)




[cassandra] 01/01: Merge branch 'cassandra-3.0' into cassandra-3.11

2021-09-03 Thread brandonwilliams

brandonwilliams pushed a commit to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit ecf186f8592f73c5268caf408ca7742a3355bf5c
Merge: d6e1c41 615372f
Author: Brandon Williams 
AuthorDate: Fri Sep 3 13:48:32 2021 -0500

Merge branch 'cassandra-3.0' into cassandra-3.11

 CHANGES.txt   | 2 ++
 redhat/cassandra.spec | 5 +
 2 files changed, 7 insertions(+)

diff --cc CHANGES.txt
index 774cb5e,e96e1fa..dc7563f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -13,22 -6,11 +13,24 @@@ Merged from 3.0
   * Use JMX to validate nodetool --jobs parameter (CASSANDRA-16104)
   * Handle properly UnsatisfiedLinkError in NativeLibrary#getProcessID() 
(CASSANDRA-16578)
   * Remove mutation data from error log message (CASSANDRA-16817)
+ Merged from 2.2:
+  * Add python2 location to RPMs (CASSANDRA-16822)
  
  
 -3.0.25:
 +3.11.11
 + * Make cqlsh use the same set of reserved keywords than the server uses 
(CASSANDRA-15663)
 + * Optimize bytes skipping when reading SSTable files (CASSANDRA-14415)
 + * Enable tombstone compactions when unchecked_tombstone_compaction is set in 
TWCS (CASSANDRA-14496)
 + * Read only the required SSTables for single partition queries 
(CASSANDRA-16737)
 + * Fix LeveledCompactionStrategy compacts last level throw an 
ArrayIndexOutOfBoundsException (CASSANDRA-15669)
 + * Maps $CASSANDRA_LOG_DIR to cassandra.logdir java property when executing 
nodetool (CASSANDRA-16199)
 + * Nodetool garbagecollect should retain SSTableLevel for LCS 
(CASSANDRA-16634)
 + * Ignore stale acks received in the shadow round (CASSANDRA-16588)
 + * Add autocomplete and error messages for provide_overlapping_tombstones 
(CASSANDRA-16350)
 + * Add StorageServiceMBean.getKeyspaceReplicationInfo(keyspaceName) 
(CASSANDRA-16447)
 + * Make sure sstables with moved starts are removed correctly in 
LeveledGenerations (CASSANDRA-16552)
 + * Upgrade jackson-databind to 2.9.10.8 (CASSANDRA-16462)
 +Merged from 3.0:
   * Binary releases no longer bundle the apidocs (javadoc) (CASSANDRA-16557)
   * Migrate dependency handling from maven-ant-tasks to resolver-ant-tasks, 
removing lib/ directory from version control (CASSANDRA-16557)
   * Don't allow seeds to replace without using unsafe (CASSANDRA-14463)




[cassandra] branch cassandra-2.2 updated: Add python2 location to RPMs

2021-09-03 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a commit to branch cassandra-2.2
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-2.2 by this push:
 new 2e547df  Add python2 location to RPMs
2e547df is described below

commit 2e547dfbc40e6b500db506353bced161c66f3113
Author: Mick Semb Wever 
AuthorDate: Mon Aug 2 11:36:36 2021 +0200

Add python2 location to RPMs

Patch by Mck Semb Wever; reviewed by brandonwilliams for
CASSANDRA-16822
---
 CHANGES.txt   | 1 +
 redhat/cassandra.spec | 5 +
 2 files changed, 6 insertions(+)

diff --git a/CHANGES.txt b/CHANGES.txt
index d3a26b6..a380853 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.20
+ * Add python2 location to RPMs (CASSANDRA-16822)
  * CQL shell should prefer newer TLS version by default (CASSANDRA-16695)
  * Fix Debian init start/stop (CASSANDRA-15770)
  * Remove ant targets list-jvm-dtests and ant list-jvm-upgrade-dtests 
(CASSANDRA-16519)
diff --git a/redhat/cassandra.spec b/redhat/cassandra.spec
index 91115e8..1a1a42f 100644
--- a/redhat/cassandra.spec
+++ b/redhat/cassandra.spec
@@ -85,6 +85,9 @@ mkdir -p %{buildroot}/var/lib/%{username}/saved_caches
 mkdir -p %{buildroot}/var/run/%{username}
 mkdir -p %{buildroot}/var/log/%{username}
 ( cd pylib && %{__python} setup.py install --no-compile --root %{buildroot}; )
+# cqlsh before Cassandra version 4.0 still requires python2
+mkdir -p %{buildroot}/usr/lib/python2.7/site-packages
+cp -r %{buildroot}%{python_sitelib}/cqlshlib 
%{buildroot}/usr/lib/python2.7/site-packages/
 
 # patches for data and log paths
 patch -p1 < debian/patches/001cassandra_yaml_dirs.dpatch
@@ -158,6 +161,8 @@ exit 0
 %attr(755,%{username},%{username}) /var/run/%{username}*
 %{python_sitelib}/cqlshlib/
 %{python_sitelib}/cassandra_pylib*.egg-info
+# cqlsh before Cassandra version 4.0 still requires python2
+/usr/lib/python2.7/site-packages/cqlshlib
 
 %post
 alternatives --install /%{_sysconfdir}/%{username}/conf %{username} 
/%{_sysconfdir}/%{username}/default.conf/ 0




[jira] [Updated] (CASSANDRA-16841) Unexpectedly ignored dtests

2021-09-03 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-16841:
--
Reviewers: Andres de la Peña  (was: Andres de la Peña)
   Status: Review In Progress  (was: Patch Available)

> Unexpectedly ignored dtests
> ---
>
> Key: CASSANDRA-16841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Assignee: Ruslan Fomkin
>Priority: Normal
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> An issue that I hit:
> When one class in a dtest file is marked as resource intensive, then all 
> tests in all classes are treated as resource intensive. For example, 
> [repair_tests/repair_test.py|https://github.com/apache/cassandra-dtest/blob/trunk/repair_tests/repair_test.py]
>  contains three classes and the last class is marked as resource intensive:
> {code:java}
> @pytest.mark.resource_intensive
> class TestRepairDataSystemTable(Tester):
> {code}
> So if I try to run an unmarked class: 
> {code:java}
> pytest --cassandra-dir=../cassandra repair_tests/repair_test.py::TestRepair 
> --collect-only --skip-resource-intensive-tests
> {code}
> then all tests are ignored
> {code:java}
> collected 36 items / 36 deselected 
> {code}
> This is because a test is treated as marked if any class in the same file 
> has the mark. This bug was introduced in the fix for CASSANDRA-16399. Previously, 
> only upgrade tests had this behaviour, i.e., if a class is marked as an upgrade 
> test, then all tests in the file are upgrade tests.
>  
> This bug, for example, means that if the same file contains one class marked 
> with vnodes and another class with no_vnodes, then no tests will be executed 
> in the file.
> I also noticed another issue: if a test run is executed with the argument 
> {{--only-resource-intensive-tests}} and there are not sufficient resources for 
> resource-intensive tests, then no tests are executed. Thus it was necessary 
> to provide {{--force-resource-intensive-tests}} in addition.
> Suggested solutions:
>  # Require each class to be marked and remove the special case of upgrade 
> tests. This will simplify the implementation and might be more obvious for 
> newcomers.
>  # Treat {{--only-resource-intensive-tests}} the same way as 
> {{--force-resource-intensive-tests}}, so it will be enough to just specify it 
> even with insufficient resources.
>  
>  
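The per-class marker resolution proposed in suggestion 1 can be sketched outside of pytest. In this minimal stand-in, the class names mirror repair_test.py, and the plain `resource_intensive` attribute is a hypothetical substitute for `@pytest.mark.resource_intensive` (real collection code would call `item.get_closest_marker("resource_intensive")` instead of inspecting sibling classes in the module):

```python
# Sketch of suggestion 1: resolve the resource_intensive marker per class,
# not per module. The `resource_intensive` attribute is a stand-in for
# @pytest.mark.resource_intensive.

def is_resource_intensive(test_cls) -> bool:
    # Only the class itself (or its bases, via attribute lookup) can mark a
    # test as resource intensive -- other classes in the same file have no effect.
    return getattr(test_cls, "resource_intensive", False)

class TestRepair:                     # unmarked class, should stay selected
    pass

class TestRepairDataSystemTable:      # the marked class from repair_test.py
    resource_intensive = True

def select(classes, skip_resource_intensive=True):
    # Mimics --skip-resource-intensive-tests: deselect only marked classes.
    return [c for c in classes
            if not (skip_resource_intensive and is_resource_intensive(c))]

selected = select([TestRepair, TestRepairDataSystemTable])
assert selected == [TestRepair]       # TestRepair is no longer deselected
```

With this resolution, running the unmarked `TestRepair` class under `--skip-resource-intensive-tests` would no longer deselect all 36 items.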



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-16851) Update from Jackson 2.9 to 2.12

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409638#comment-17409638
 ] 

Brandon Williams commented on CASSANDRA-16851:
--

I think that was probably just an oversight during the 4.0 push.  I don't think we 
need to skip it.

> Update from Jackson 2.9 to 2.12
> ---
>
> Key: CASSANDRA-16851
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16851
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies
>Reporter: Tatu Saloranta
>Assignee: Tatu Saloranta
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.x
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Given that Jackson 2.9 support has ended, it would be good to move at least 
> to the next minor version (2.10, patch 2.10.5) or later – latest stable being 
> 2.12.4.
>  I can test to see if anything breaks, but looking at existing Jackson usage 
> there shouldn't be many issues.
> Assuming upgrade is acceptable there's the question of which branches to 
> apply it to; I will first test it against 4.0.






[jira] [Commented] (CASSANDRA-16851) Update from Jackson 2.9 to 2.12

2021-09-03 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409637#comment-17409637
 ] 

Ekaterina Dimitrova commented on CASSANDRA-16851:
-

I was about to submit patch runs for the other branches too. I just 
found CASSANDRA-15867, where it seems we skipped 3.0, which is on 1.9.5. 
Should we skip it again? I didn't see any comment on why this was done, so maybe 
there was some offline discussion I am missing. [~brandon.williams]? I see you 
were looking into it. 

> Update from Jackson 2.9 to 2.12
> ---
>
> Key: CASSANDRA-16851
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16851
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies
>Reporter: Tatu Saloranta
>Assignee: Tatu Saloranta
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.x
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Given that Jackson 2.9 support has ended, it would be good to move at least 
> to the next minor version (2.10, patch 2.10.5) or later – latest stable being 
> 2.12.4.
>  I can test to see if anything breaks, but looking at existing Jackson usage 
> there shouldn't be many issues.
> Assuming upgrade is acceptable there's the question of which branches to 
> apply it to; I will first test it against 4.0.






[jira] [Comment Edited] (CASSANDRA-16718) Changing listen_address with prefer_local may lead to issues

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409614#comment-17409614
 ] 

Brandon Williams edited comment on CASSANDRA-16718 at 9/3/21, 5:08 PM:
---

This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~samt]?

In the meantime, operators may add {{-Dcassandra.load_ring_state=false}} if 
that's an acceptable workaround.


was (Author: brandon.williams):
This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~beobal]?

In the meantime, operators may add {{-Dcassandra.load_ring_state=false}} if 
that's an acceptable workaround.

> Changing listen_address with prefer_local may lead to issues
> 
>
> Key: CASSANDRA-16718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16718
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Jan Karlsson
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 3.11.x, 4.0.x
>
>
> Many container-based solutions function by assigning new listen_addresses when 
> nodes are stopped. Changing the listen_address is usually as simple as 
> turning off the node and changing the yaml file. 
> However, if prefer_local is enabled, I observed that nodes were unable to 
> join the cluster and fail with 'Unable to gossip with any seeds'. 
> Trace shows that the changing node will try to communicate with the existing 
> node but the response is never received. I assume it is because the existing 
> node attempts to communicate with the local address during the shadow round.
>  






[jira] [Comment Edited] (CASSANDRA-16718) Changing listen_address with prefer_local may lead to issues

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409614#comment-17409614
 ] 

Brandon Williams edited comment on CASSANDRA-16718 at 9/3/21, 5:08 PM:
---

This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~samt]?

In the meantime, operators may add {{-Dcassandra.load_ring_state=false}} if 
that's an acceptable workaround.


was (Author: brandon.williams):
This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~samt]]?

In the meantime, operators may add {{-Dcassandra.load_ring_state=false}} if 
that's an acceptable workaround.

> Changing listen_address with prefer_local may lead to issues
> 
>
> Key: CASSANDRA-16718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16718
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Jan Karlsson
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 3.11.x, 4.0.x
>
>
> Many container-based solutions function by assigning new listen_addresses when 
> nodes are stopped. Changing the listen_address is usually as simple as 
> turning off the node and changing the yaml file. 
> However, if prefer_local is enabled, I observed that nodes were unable to 
> join the cluster and fail with 'Unable to gossip with any seeds'. 
> Trace shows that the changing node will try to communicate with the existing 
> node but the response is never received. I assume it is because the existing 
> node attempts to communicate with the local address during the shadow round.
>  






[jira] [Comment Edited] (CASSANDRA-16718) Changing listen_address with prefer_local may lead to issues

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409614#comment-17409614
 ] 

Brandon Williams edited comment on CASSANDRA-16718 at 9/3/21, 5:06 PM:
---

This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~beobal]?

In the meantime, operators may add {{-Dcassandra.load_ring_state=false}} if 
that's an acceptable workaround.


was (Author: brandon.williams):
This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~beobal]?

In the meantime, operators may add `-Dcassandra.load_ring_state=false` if 
that's an acceptable workaround.

> Changing listen_address with prefer_local may lead to issues
> 
>
> Key: CASSANDRA-16718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16718
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Jan Karlsson
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 3.11.x, 4.0.x
>
>
> Many container-based solutions function by assigning new listen_addresses when 
> nodes are stopped. Changing the listen_address is usually as simple as 
> turning off the node and changing the yaml file. 
> However, if prefer_local is enabled, I observed that nodes were unable to 
> join the cluster and fail with 'Unable to gossip with any seeds'. 
> Trace shows that the changing node will try to communicate with the existing 
> node but the response is never received. I assume it is because the existing 
> node attempts to communicate with the local address during the shadow round.
>  






[jira] [Commented] (CASSANDRA-16718) Changing listen_address with prefer_local may lead to issues

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409614#comment-17409614
 ] 

Brandon Williams commented on CASSANDRA-16718:
--

This issue boils down to CASSANDRA-10134 loading the ring state, which includes 
preferred_ip.  OTC then queries this directly if it exists and uses it before 
any changes can be learned.  Given this, I'm not sure it even makes sense to 
store the preferred_ip, since if we try to use it eagerly we'll never be able 
to learn of the change to it, as this issue exemplifies.  I think the best plan 
is just to remove this optimization and do the reconnection dance every time, 
which still shouldn't be super-often. WDYT, [~beobal]?

In the meantime, operators may add `-Dcassandra.load_ring_state=false` if 
that's an acceptable workaround.
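Applying that workaround can be scripted; a minimal sketch, assuming the 3.11-style `conf/jvm.options` layout (4.0 splits the options into `jvm-server.options` and related files, so the path is an assumption here):

```python
# Sketch: append the ring-state workaround flag to a node's JVM options file.
# Assumption: conf/jvm.options is the options file (3.11 layout); adjust the
# path for other versions or packaging. The node must be restarted afterwards.
from pathlib import Path

def add_jvm_flag(conf_dir, flag="-Dcassandra.load_ring_state=false"):
    opts = Path(conf_dir) / "jvm.options"
    lines = opts.read_text().splitlines() if opts.exists() else []
    if flag not in lines:            # idempotent: never append the flag twice
        lines.append(flag)
        opts.write_text("\n".join(lines) + "\n")

Path("conf").mkdir(exist_ok=True)    # stand-in for the real conf directory
add_jvm_flag("conf")
```

Note this disables loading of all saved ring state on startup, not just the stale preferred_ip, so gossip must repopulate the ring after restart.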

> Changing listen_address with prefer_local may lead to issues
> 
>
> Key: CASSANDRA-16718
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16718
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Jan Karlsson
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 3.11.x, 4.0.x
>
>
> Many container-based solutions function by assigning new listen_addresses when 
> nodes are stopped. Changing the listen_address is usually as simple as 
> turning off the node and changing the yaml file. 
> However, if prefer_local is enabled, I observed that nodes were unable to 
> join the cluster and fail with 'Unable to gossip with any seeds'. 
> Trace shows that the changing node will try to communicate with the existing 
> node but the response is never received. I assume it is because the existing 
> node attempts to communicate with the local address during the shadow round.
>  






[jira] [Commented] (CASSANDRA-16855) Replace minor use of `json-simple` with Jackson

2021-09-03 Thread Tatu Saloranta (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409595#comment-17409595
 ] 

Tatu Saloranta commented on CASSANDRA-16855:


One possibility, too, would be for me to split the changes into 2 (or, if need be, 
more) PRs, starting with the safest ones (adding missing tests). This would make 
sense if someone with a better idea of the hotspots/hot paths could suggest a split. 
More important change(s) should then be merged with a jmh test (or tests), or 
validated with a load/stress test for the overall change (likely nothing measurable 
at a high level).

> Replace minor use of `json-simple` with Jackson
> ---
>
> Key: CASSANDRA-16855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16855
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies, Local/Other, Tool/nodetool
>Reporter: Tatu Saloranta
>Assignee: Tatu Saloranta
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.x
>
>
> Jackson library is used for most JSON reading/writing, but there are a couple 
> of places where the older "json-simple" library is used, mostly for diagnostics 
> output. Replacing those minor usages would allow removal of a dependency, one 
> for which the last release was made in 2012.
> Places where json-simple is used are:
>  * src/java/org/apache/cassandra/db/ColumnFamilyStore.java
>  * src/java/org/apache/cassandra/db/commitlog/CommitLogDescriptor.java
>  * src/java/org/apache/cassandra/hints/HintsDescriptor.java
>  * src/java/org/apache/cassandra/tools/nodetool/stats/StatsPrinter.java
> (and some matching usage in couple of test classes)
> I can take a stab at replacing these uses; it also looks like test coverage 
> may be spotty for some (StatsPrinter json/yaml part has no tests for example).
> It is probably best to target this for "trunk" (4.1?).
>  






[jira] [Updated] (CASSANDRA-16889) WEBSITE - August 2021 updates #2

2021-09-03 Thread Melissa Logan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melissa Logan updated CASSANDRA-16889:
--
Reviewers: Erick Ramirez, Melissa Logan, Michael Semb Wever  (was: Erick 
Ramirez, Michael Semb Wever)

> WEBSITE - August 2021 updates #2
> 
>
> Key: CASSANDRA-16889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16889
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: Diogenese Topper
>Assignee: Paul Au
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0.x
>
> Attachments: Apache Cassandra Antora Site Updates.zip, 
> case-studies.adoc
>
>
> Updates to be made to the website that include:
> * Add blog post: Cassandra-on-Kubernetes-A-Beginners-Guide to pages/blog
> * Added Cassandra-on-Kubernetes-A-Beginners-Guide card to blog index
> * Changed Urban Airship to Airship on the case studies page
> * Changed the contributor meetings learn more link on the community page
> * Replace images/companies backblaze.png with the new logo attached with the 
> same name
> * Removed Comcast logo from main page index
> * Removed Comcast from case studies page
> * Removed Yelp from Apache Cassandra 4.0 is Here blog






[jira] [Commented] (CASSANDRA-16889) WEBSITE - August 2021 updates #2

2021-09-03 Thread Melissa Logan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409589#comment-17409589
 ] 

Melissa Logan commented on CASSANDRA-16889:
---

[~Anthony Grasso] Do we need to resubmit this ticket now that the branches have 
all been merged?

> WEBSITE - August 2021 updates #2
> 
>
> Key: CASSANDRA-16889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16889
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: Diogenese Topper
>Assignee: Paul Au
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0.x
>
> Attachments: Apache Cassandra Antora Site Updates.zip, 
> case-studies.adoc
>
>
> Updates to be made to the website that include:
> * Add blog post: Cassandra-on-Kubernetes-A-Beginners-Guide to pages/blog
> * Added Cassandra-on-Kubernetes-A-Beginners-Guide card to blog index
> * Changed Urban Airship to Airship on the case studies page
> * Changed the contributor meetings learn more link on the community page
> * Replace images/companies backblaze.png with the new logo attached with the 
> same name
> * Removed Comcast logo from main page index
> * Removed Comcast from case studies page
> * Removed Yelp from Apache Cassandra 4.0 is Here blog






[jira] [Updated] (CASSANDRA-16892) Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests

2021-09-03 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-16892:
---
Status: Ready to Commit  (was: Review In Progress)

> Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests
> ---
>
> Key: CASSANDRA-16892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: NA
>
>
> I noticed that tests for dtest in upgrade_tests/thrift_upgrade_test.py are 
> flaky.
> The reason for this flakiness is that we are stopping and starting a node 
> too fast, without waiting for its full initialisation; the next attempt to 
> connect to the rpc port then fails and the whole test fails.
> The fix is rather easy: we just need to wait until it is fully started.






[jira] [Comment Edited] (CASSANDRA-16892) Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409579#comment-17409579
 ] 

Benjamin Lerer edited comment on CASSANDRA-16892 at 9/3/21, 3:50 PM:
-

I ran the patched {{upgrade_tests/thrift_upgrade_test.py}} using the 
repeated_upgrade_dtest runner 100 times and did not get any failure. Results 
are 
[here|https://app.circleci.com/pipelines/github/blerer/cassandra/195/workflows/a9c26e1f-01e6-4af5-84c3-a0314c08ed59].

The fix looks good. +1

Thanks for the patch [~stefan.miklosovic]


was (Author: blerer):
I ran the patched {{upgrade_tests/thrift_upgrade_test.py}} using the 
repeated_upgrade_dtest runner 100 times and did not get any failure. Results 
are 
[here|https://app.circleci.com/pipelines/github/blerer/cassandra/195/workflows/a9c26e1f-01e6-4af5-84c3-a0314c08ed59].

The fix looks good. +1

> Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests
> ---
>
> Key: CASSANDRA-16892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: NA
>
>
> I noticed that tests for dtest in upgrade_tests/thrift_upgrade_test.py are 
> flaky.
> The reason for this flakiness is that we are stopping and starting a node 
> too fast, without waiting for its full initialisation; the next attempt to 
> connect to the rpc port then fails and the whole test fails.
> The fix is rather easy: we just need to wait until it is fully started.






[jira] [Commented] (CASSANDRA-16892) Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409579#comment-17409579
 ] 

Benjamin Lerer commented on CASSANDRA-16892:


I ran the patched {{upgrade_tests/thrift_upgrade_test.py}} using the 
repeated_upgrade_dtest runner 100 times and did not get any failure. Results 
are 
[here|https://app.circleci.com/pipelines/github/blerer/cassandra/195/workflows/a9c26e1f-01e6-4af5-84c3-a0314c08ed59].

The fix looks good. +1

> Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests
> ---
>
> Key: CASSANDRA-16892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: NA
>
>
> I noticed that tests for dtest in upgrade_tests/thrift_upgrade_test.py are 
> flaky.
> The reason for this flakiness is that we are stopping and starting a node 
> too fast, without waiting for its full initialisation; the next attempt to 
> connect to the rpc port then fails and the whole test fails.
> The fix is rather easy: we just need to wait until it is fully started.






[jira] [Commented] (CASSANDRA-16916) Add support for IF EXISTS and IF NOT EXISTS in ALTER statements

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409575#comment-17409575
 ] 

Benjamin Lerer commented on CASSANDRA-16916:


[~djanand] I understood that you were interested in working on low-hanging 
fruit tickets. This one might be interesting for you. If you have any 
questions, do not hesitate to ping me. 

> Add support for IF EXISTS and IF NOT EXISTS in ALTER statements
> ---
>
> Key: CASSANDRA-16916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16916
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Syntax
>Reporter: Benjamin Lerer
>Priority: Normal
>
> It would make sense to add support for {{IF EXISTS}} and {{IF NOT EXISTS}} in 
> the different {{ALTER}} statements. 
> For example:
> * {{ALTER TABLE IF EXISTS myTable ...}}
> * {{ALTER TABLE myTable ADD IF NOT EXISTS ...}}
> * {{ALTER TABLE myTable DROP IF EXISTS ...}}
> * {{ALTER TYPE IF EXISTS myType ...}}
> * {{ALTER TYPE myType ADD IF NOT EXISTS ...}}
> +Additional info for newcomers:+
> In order to implement this change you will need to change the {{Parser.g}} 
> ANTLR file located in the src/antlr directory and the java classes 
> corresponding to the different alter statements located in the 
> {{org.apache.cassandra.cql3.statements.schema}} package. You can look at the 
> CreateTableStatement class to see how it was done there.
> The unit tests for the CQL logic are located under 
> {{org.apache.cassandra.cql3.validation}}






[jira] [Updated] (CASSANDRA-16916) Add support for IF EXISTS and IF NOT EXISTS in ALTER statements

2021-09-03 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-16916:
---
Change Category: Semantic
 Complexity: Low Hanging Fruit
 Status: Open  (was: Triage Needed)

> Add support for IF EXISTS and IF NOT EXISTS in ALTER statements
> ---
>
> Key: CASSANDRA-16916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16916
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Syntax
>Reporter: Benjamin Lerer
>Priority: Normal
>
> It would make sense to add support for {{IF EXISTS}} and {{IF NOT EXISTS}} in 
> the different {{ALTER}} statements. 
> For example:
> * {{ALTER TABLE IF EXISTS myTable ...}}
> * {{ALTER TABLE myTable ADD IF NOT EXISTS ...}}
> * {{ALTER TABLE myTable DROP IF EXISTS ...}}
> * {{ALTER TYPE IF EXISTS myType ...}}
> * {{ALTER TYPE myType ADD IF NOT EXISTS ...}}
> +Additional info for newcomers:+
> In order to implement this change you will need to modify the {{Parser.g}} 
> ANTLR file located in the {{src/antlr}} directory and the Java classes 
> corresponding to the different ALTER statements located in the 
> {{org.apache.cassandra.cql3.statements.schema}} package. You can look at the 
> {{CreateTableStatement}} class to see how it was done there.
> The unit tests for the CQL logic are located under 
> {{org.apache.cassandra.cql3.validation}}






[jira] [Created] (CASSANDRA-16916) Add support for IF EXISTS and IF NOT EXISTS in ALTER statements

2021-09-03 Thread Benjamin Lerer (Jira)
Benjamin Lerer created CASSANDRA-16916:
--

 Summary: Add support for IF EXISTS and IF NOT EXISTS in ALTER 
statements
 Key: CASSANDRA-16916
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16916
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL/Syntax
Reporter: Benjamin Lerer


It would make sense to add support for {{IF EXISTS}} and {{IF NOT EXISTS}} in 
the different {{ALTER}} statements. 

For example:
* {{ALTER TABLE IF EXISTS myTable ...}}
* {{ALTER TABLE myTable ADD IF NOT EXISTS ...}}
* {{ALTER TABLE myTable DROP IF EXISTS ...}}
* {{ALTER TYPE IF EXISTS myType ...}}
* {{ALTER TYPE myType ADD IF NOT EXISTS ...}}

+Additional info for newcomers:+

In order to implement this change you will need to modify the {{Parser.g}} 
ANTLR file located in the {{src/antlr}} directory and the Java classes 
corresponding to the different ALTER statements located in the 
{{org.apache.cassandra.cql3.statements.schema}} package. You can look at the 
{{CreateTableStatement}} class to see how it was done there.
The unit tests for the CQL logic are located under 
{{org.apache.cassandra.cql3.validation}}
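The intended semantics can be illustrated with a small sketch (illustrative only, not Cassandra code; all names here are hypothetical): with {{IF EXISTS}}, altering a missing table becomes a no-op instead of an error, and {{ADD IF NOT EXISTS}} tolerates an already-present column.

```python
class SchemaError(Exception):
    pass

class Schema:
    """Toy schema registry illustrating IF EXISTS / IF NOT EXISTS semantics."""
    def __init__(self):
        self.tables = {}

    def create_table(self, name, columns):
        self.tables[name] = dict(columns)

    def alter_table_add(self, name, column, ctype,
                        if_exists=False, if_not_exists=False):
        table = self.tables.get(name)
        if table is None:
            if if_exists:          # ALTER TABLE IF EXISTS myTable ...
                return False       # silently do nothing
            raise SchemaError("unknown table " + name)
        if column in table:
            if if_not_exists:      # ALTER TABLE myTable ADD IF NOT EXISTS ...
                return False       # silently do nothing
            raise SchemaError("column " + column + " already exists")
        table[column] = ctype
        return True

schema = Schema()
schema.create_table("myTable", {"id": "int"})
schema.alter_table_add("myTable", "v", "text")                      # normal ADD
schema.alter_table_add("missing", "v", "text", if_exists=True)      # no-op
schema.alter_table_add("myTable", "v", "text", if_not_exists=True)  # no-op
```

Without the flags, the last two calls would raise an error, which is exactly the behaviour the new syntax is meant to relax.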






[jira] [Updated] (CASSANDRA-16851) Update from Jackson 2.9 to 2.12

2021-09-03 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-16851:

Fix Version/s: 4.x
   4.0.x
   3.11.x
   3.0.x

> Update from Jackson 2.9 to 2.12
> ---
>
> Key: CASSANDRA-16851
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16851
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Dependencies
>Reporter: Tatu Saloranta
>Assignee: Tatu Saloranta
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x, 4.x
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Given that Jackson 2.9 support has ended, it would be good to move at least 
> to the next minor version (2.10, patch 2.10.5) or later – latest stable being 
> 2.12.4.
>  I can test to see if anything breaks, but looking at existing Jackson usage 
> there shouldn't be many issues.
> Assuming the upgrade is acceptable, there's the question of which branches to 
> apply it to; I will first test it against 4.0.






[jira] [Commented] (CASSANDRA-16915) Delay is not applied in the dtest to test bootstrap delay

2021-09-03 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409565#comment-17409565
 ] 

Brandon Williams commented on CASSANDRA-16915:
--

Adding {{-Dorg.jboss.byteman.verbose}} to the JVM may reveal something.

> Delay is not applied in the dtest to test bootstrap delay
> -
>
> Key: CASSANDRA-16915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16915
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Priority: Normal
>
> Test 
> [test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
>  is supposed to delay the bootstrap of {{node2}} using byteman:
> {code:java}
> node2 = new_node(cluster, byteman_port='4200')
> node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
> {code}
> where [byteman 
> code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
>  is:
> {code:java}
> RULE Sleep 5s when finishing bootstrap 
> CLASS org.apache.cassandra.service.StorageService 
> METHOD bootstrapFinished 
> AT ENTRY
> IF NOT flagged("done") 
> DO
> flag("done");
> Thread.sleep(5000) 
> ENDRULE
> {code}
> However, I found that this byteman rule is not applied.
> For example, I changed the rule body into:
> {code:java}
> ...
> IF TRUE
> DO
> asfa;adfa;
> flag("done");
> throw new RuntimeException("Test");
> Thread.sleep(5)
> ENDRULE{code}
> So my conclusion is that the delay is not applied. I haven't investigated 
> whether the issue is in how {{update_startup_byteman_script}} is called, or 
> in its implementation inside CCM.
> This issue might exist in other similar tests.
>  
>  






[jira] [Updated] (CASSANDRA-16915) Delay is not applied in the dtest to test bootstrap delay

2021-09-03 Thread Ruslan Fomkin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Fomkin updated CASSANDRA-16915:
--
Description: 
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
...
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 

  was:
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
...
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 


> Delay is not applied in the dtest to test bootstrap delay
> -
>
> Key: CASSANDRA-16915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16915
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Priority: Normal
>
> Test 
> [test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
>  is supposed to delay the bootstrap of {{node2}} using byteman:
> {code:java}
> node2 = new_node(cluster, byteman_port='4200')
> node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
> {code}
> where [byteman 
> code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
>  is:
> {code:java}
> RULE Sleep 5s when finishing bootstrap 
> CLASS org.apache.cassandra.service.StorageService 
> METHOD bootstrapFinished 
> AT ENTRY
> IF NOT flagged("done") 
> DO
> flag("done");
> Thread.sleep(5000) 
> ENDRULE
> {code}
> However, I found that this byteman rule is not applied.
> For example, I changed the rule body into:
> {code:java}
> ...
> IF TRUE
> DO
> asfa;adfa;
> flag("done");
> throw new RuntimeException("Test");
> Thread.sleep(5)
> ENDRULE{code}
> So my conclusion is that the delay is not applied. I haven't investigated 
> whether the issue is in how {{update_startup_byteman_script}} is called, or 
> in its implementation inside CCM.
> This issue might exist in other similar tests.
>  
>  






[jira] [Updated] (CASSANDRA-16915) Delay is not applied in the dtest to test bootstrap delay

2021-09-03 Thread Ruslan Fomkin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Fomkin updated CASSANDRA-16915:
--
Description: 
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
...
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 

  was:
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 


> Delay is not applied in the dtest to test bootstrap delay
> -
>
> Key: CASSANDRA-16915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16915
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Priority: Normal
>
> Test 
> [test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
>  is supposed to delay the bootstrap of {{node2}} using byteman:
>  
> {code:java}
> node2 = new_node(cluster, byteman_port='4200')
> node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
> {code}
> where [byteman 
> code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
>  is:
> {code:java}
> RULE Sleep 5s when finishing bootstrap 
> CLASS org.apache.cassandra.service.StorageService 
> METHOD bootstrapFinished 
> AT ENTRY
> IF NOT flagged("done") 
> DO
> flag("done");
> Thread.sleep(5000) 
> ENDRULE
> {code}
> However, I found that this byteman rule is not applied.
> For example, I changed the rule body into:
> {code:java}
> ...
> IF TRUE
> DO
> asfa;adfa;
> flag("done");
> throw new RuntimeException("Test");
> Thread.sleep(5)
> ENDRULE{code}
> So my conclusion is that the delay is not applied. I haven't investigated 
> whether the issue is in how {{update_startup_byteman_script}} is called, or 
> in its implementation inside CCM.
> This issue might exist in other similar tests.
>  
>  






[jira] [Updated] (CASSANDRA-16915) Delay is not applied in the dtest to test bootstrap delay

2021-09-03 Thread Ruslan Fomkin (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Fomkin updated CASSANDRA-16915:
--
Description: 
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 

  was:
Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:

 
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 


> Delay is not applied in the dtest to test bootstrap delay
> -
>
> Key: CASSANDRA-16915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16915
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Ruslan Fomkin
>Priority: Normal
>
> Test 
> [test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
>  is supposed to delay the bootstrap of {{node2}} using byteman:
>  
> {code:java}
> node2 = new_node(cluster, byteman_port='4200')
> node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
> {code}
> where [byteman 
> code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
>  is:
> {code:java}
> RULE Sleep 5s when finishing bootstrap 
> CLASS org.apache.cassandra.service.StorageService 
> METHOD bootstrapFinished 
> AT ENTRY
> IF NOT flagged("done") 
> DO
> flag("done");
> Thread.sleep(5000) 
> ENDRULE
> {code}
> However, I found that this byteman rule is not applied.
> For example, I changed the rule body into:
> {code:java}
> IF TRUE
> DO
> asfa;adfa;
> flag("done");
> throw new RuntimeException("Test");
> Thread.sleep(5)
> ENDRULE{code}
> So my conclusion is that the delay is not applied. I haven't investigated 
> whether the issue is in how {{update_startup_byteman_script}} is called, or 
> in its implementation inside CCM.
> This issue might exist in other similar tests.
>  
>  






[jira] [Created] (CASSANDRA-16915) Delay is not applied in the dtest to test bootstrap delay

2021-09-03 Thread Ruslan Fomkin (Jira)
Ruslan Fomkin created CASSANDRA-16915:
-

 Summary: Delay is not applied in the dtest to test bootstrap delay
 Key: CASSANDRA-16915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16915
 Project: Cassandra
  Issue Type: Bug
  Components: Test/dtest/python
Reporter: Ruslan Fomkin


Test 
[test_bootstrap_waits_for_streaming_to_finish|https://github.com/apache/cassandra-dtest/blob/trunk/bootstrap_test.py#L267]
 is supposed to delay the bootstrap of {{node2}} using byteman:

 
{code:java}
node2 = new_node(cluster, byteman_port='4200')
node2.update_startup_byteman_script('./byteman/bootstrap_5s_sleep.btm')
{code}
where [byteman 
code|https://github.com/apache/cassandra-dtest/blob/trunk/byteman/bootstrap_5s_sleep.btm]
 is:

 
{code:java}
RULE Sleep 5s when finishing bootstrap 
CLASS org.apache.cassandra.service.StorageService 
METHOD bootstrapFinished 
AT ENTRY
IF NOT flagged("done") 
DO
flag("done");
Thread.sleep(5000) 
ENDRULE
{code}
However, I found that this byteman rule is not applied.

For example, I changed the rule body into:
{code:java}
IF TRUE
DO
asfa;adfa;
flag("done");
throw new RuntimeException("Test");
Thread.sleep(5)
ENDRULE{code}
So my conclusion is that the delay is not applied. I haven't investigated 
whether the issue is in how {{update_startup_byteman_script}} is called, or in 
its implementation inside CCM.

This issue might exist in other similar tests.

 

 






[jira] [Commented] (CASSANDRA-16175) Avoid removing batch when it's not created during view replication

2021-09-03 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409556#comment-17409556
 ] 

Ekaterina Dimitrova commented on CASSANDRA-16175:
-

CI results look OK to me; I think the only failures are known, unrelated ones.

> Avoid removing batch when it's not created during view replication
> --
>
> Key: CASSANDRA-16175
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16175
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views
>Reporter: Zhao Yang
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.x
>
>
> When the base replica is also a view replica we don't write a local batchlog, 
> but the batchlog is still unnecessarily removed when the view write succeeds, 
> which creates (and persists) a tombstone in the system.batches table.






[jira] [Updated] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2021-09-03 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-12734:

Since Version:   (was: 3.0.1)

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views, Legacy/Tools
>Reporter: Hau Phan
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.0.26, 3.11.12, 4.1, 4.0.2
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.  
> Can the materialized view be created and added to the corresponding table's  
> schema file?






[jira] [Updated] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2021-09-03 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-12734:

  Fix Version/s: (was: 4.0.x)
 (was: 3.11.x)
 (was: 3.0.x)
 4.0.2
 4.1
 3.11.12
 3.0.26
  Since Version: 3.0.1
Source Control Link: 
https://github.com/apache/cassandra/commit/67eb22ec9d588c9f984d13c0ffd703a14181f775
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views, Legacy/Tools
>Reporter: Hau Phan
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.0.26, 3.11.12, 4.1, 4.0.2
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.  
> Can the materialized view be created and added to the corresponding table's  
> schema file?






[jira] [Commented] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2021-09-03 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409553#comment-17409553
 ] 

Ekaterina Dimitrova commented on CASSANDRA-12734:
-

To https://github.com/apache/cassandra.git

   e4b37c3271..67eb22ec9d  cassandra-3.0 -> cassandra-3.0

   957c6264ef..d6e1c41c48  cassandra-3.11 -> cassandra-3.11

   6a4a93a808..49e83027e2  cassandra-4.0 -> cassandra-4.0

   f9aa19e3b1..163a4d7137  trunk -> trunk

 

CI also looked ok, committed, thanks!

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views, Legacy/Tools
>Reporter: Hau Phan
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.  
> Can the materialized view be created and added to the corresponding table's  
> schema file?






[cassandra] 01/01: Merge branch 'cassandra-4.0' into trunk

2021-09-03 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 163a4d7137fb67504f2b12cacdc2eb12e8d9a923
Merge: f9aa19e 49e8302
Author: Ekaterina Dimitrova 
AuthorDate: Fri Sep 3 10:55:10 2021 -0400

Merge branch 'cassandra-4.0' into trunk

 .../org/apache/cassandra/cql3/ViewSchemaTest.java  | 88 +-
 1 file changed, 87 insertions(+), 1 deletion(-)




[cassandra] branch trunk updated (f9aa19e -> 163a4d7)

2021-09-03 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from f9aa19e   Add nodetool commands to invalidate auth caches
 add 67eb22e  Fix materialized view schema backup as table patch by Zhao 
Yang, Ekaterina Dimitrova; reviewed by Benjamin Lerer, Ekaterina Dimitrova for 
CASSANDRA-12734
 add d6e1c41  Merge branch 'cassandra-3.0' into cassandra-3.11
 add 49e8302  Merge branch 'cassandra-3.11' into cassandra-4.0
 new 163a4d7  Merge branch 'cassandra-4.0' into trunk

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/cassandra/cql3/ViewSchemaTest.java  | 88 +-
 1 file changed, 87 insertions(+), 1 deletion(-)




[cassandra] branch cassandra-3.0 updated (e4b37c3 -> 67eb22e)

2021-09-03 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a change to branch cassandra-3.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from e4b37c3  Add resource flags to CircleCI config generation script
 add 67eb22e  Fix materialized view schema backup as table patch by Zhao 
Yang, Ekaterina Dimitrova; reviewed by Benjamin Lerer, Ekaterina Dimitrova for 
CASSANDRA-12734

No new revisions were added by this update.

Summary of changes:
 CHANGES.txt|   1 +
 src/java/org/apache/cassandra/config/Schema.java   |   9 +-
 .../cassandra/db/ColumnFamilyStoreCQLHelper.java   | 135 +
 .../org/apache/cassandra/cql3/ViewSchemaTest.java  |  81 -
 4 files changed, 195 insertions(+), 31 deletions(-)




[cassandra] branch cassandra-4.0 updated (6a4a93a -> 49e8302)

2021-09-03 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a change to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 6a4a93a  Merge branch 'cassandra-3.11' into cassandra-4.0
 add 67eb22e  Fix materialized view schema backup as table patch by Zhao 
Yang, Ekaterina Dimitrova; reviewed by Benjamin Lerer, Ekaterina Dimitrova for 
CASSANDRA-12734
 add d6e1c41  Merge branch 'cassandra-3.0' into cassandra-3.11
 add 49e8302  Merge branch 'cassandra-3.11' into cassandra-4.0

No new revisions were added by this update.

Summary of changes:
 .../org/apache/cassandra/cql3/ViewSchemaTest.java  | 88 +-
 1 file changed, 87 insertions(+), 1 deletion(-)




[cassandra] branch cassandra-3.11 updated (957c626 -> d6e1c41)

2021-09-03 Thread edimitrova
This is an automated email from the ASF dual-hosted git repository.

edimitrova pushed a change to branch cassandra-3.11
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from 957c626  Nodetool setcachecapacity behaves oddly when cache disabled
 add 67eb22e  Fix materialized view schema backup as table patch by Zhao 
Yang, Ekaterina Dimitrova; reviewed by Benjamin Lerer, Ekaterina Dimitrova for 
CASSANDRA-12734
 add d6e1c41  Merge branch 'cassandra-3.0' into cassandra-3.11

No new revisions were added by this update.

Summary of changes:
 CHANGES.txt|   1 +
 src/java/org/apache/cassandra/config/Schema.java   |   9 +-
 .../cassandra/db/ColumnFamilyStoreCQLHelper.java   | 135 +
 .../org/apache/cassandra/cql3/ViewSchemaTest.java  |  81 -
 4 files changed, 195 insertions(+), 31 deletions(-)




[jira] [Updated] (CASSANDRA-16904) Check if size of object being added to RowCache and KeyCache is bigger than cache capacity

2021-09-03 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-16904:
--
Resolution: Fixed
Status: Resolved  (was: Triage Needed)

We introduced the concept of {{ShallowIndexedEntry}} objects in 3.6 / 
CASSANDRA-11206, which prevents us from materializing large indexes on heap and 
instead creates a shallow, fixed-size entry
{quote}BASE_SIZE = ObjectSizes.measure(new ShallowIndexedEntry(0, 0, 
DeletionTime.LIVE, 0, 10, 0, null));
{quote}
on heap when we're past the size threshold and have multiple columns. This issue 
doesn't apply to 4.0+.

> Check if size of object being added to RowCache and KeyCache is bigger than 
> cache capacity
> --
>
> Key: CASSANDRA-16904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16904
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Caching
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
>
> We don't check if the size of an object being added to the RowCache/KeyCache 
> itself exceeds the max configured size of the cache.
> For instance, if a RowCache object is ~5GB due to IndexInfo objects, but the 
> cache is configured to have a max capacity of 100MB, we will still add the 
> 5GB object into the cache and then need to wait for the eviction thread in 
> the cache to come around, realize we're over capacity, and remove the object 
> from the cache.
> We could check the size of the object with jamm and ensure it's smaller than 
> the max size of the cache. If it exceeds the size of the cache don't cache it 
> at all.
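A minimal sketch of the guard described above — the class and method names are hypothetical, and in practice the measured size would come from jamm's MemoryMeter rather than being passed in by the caller:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedObjectCache<K, V> {
    private final long capacityBytes;
    private long usedBytes = 0;
    private final Map<K, V> map = new LinkedHashMap<>();

    public BoundedObjectCache(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    /** Returns true if cached; false when the entry alone exceeds total capacity. */
    public boolean put(K key, V value, long measuredSizeBytes) {
        // Proposed check: an object bigger than the whole cache is never admitted,
        // so the eviction thread doesn't have to clean it up later.
        if (measuredSizeBytes > capacityBytes)
            return false;
        map.put(key, value);
        usedBytes += measuredSizeBytes;
        return true; // eviction of older entries elided for brevity
    }

    public static void main(String[] args) {
        BoundedObjectCache<String, byte[]> cache = new BoundedObjectCache<>(100L * 1024 * 1024);
        System.out.println(cache.put("small", new byte[0], 4096));                    // true
        System.out.println(cache.put("huge", new byte[0], 5L * 1024 * 1024 * 1024));  // false
    }
}
```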



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (CASSANDRA-16911) Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest

2021-09-03 Thread Paulo Motta (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-16911:

Reviewers: Paulo Motta

> Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest
> --
>
> Key: CASSANDRA-16911
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16911
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>
> While creating an ephemeral snapshot, a marker file is created on disk in the 
> respective snapshot directory. This is no longer necessary, as we have 
> introduced SnapshotManifest in CASSANDRA-16789, so we can move this flag 
> there. By recording in SnapshotManifest whether a snapshot is ephemeral, we 
> simplify and "clean up" the snapshotting process and related codebase.






[jira] [Updated] (CASSANDRA-16892) Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests

2021-09-03 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-16892:
---
Reviewers: Benjamin Lerer  (was: Benjamin Lerer)
   Status: Review In Progress  (was: Patch Available)

> Fix flakiness in upgrade_tests/thrift_upgrade_test.py in dtests
> ---
>
> Key: CASSANDRA-16892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: NA
>
>
> I noticed that the dtests in upgrade_tests/thrift_upgrade_test.py are 
> flaky.
> The flakiness happens because we stop and restart a node too quickly, without 
> waiting for its full initialisation; the next attempt to connect to the RPC 
> port then fails and the whole test fails.
> The fix is rather easy: we just need to wait until the node is fully started.






[jira] [Commented] (CASSANDRA-16911) Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409462#comment-17409462
 ] 

Stefan Miklosovic commented on CASSANDRA-16911:
---

The solution in which ephemeral marker files are still deleted as part of the 
boot sequence is here:

[https://github.com/instaclustr/cassandra/tree/CASSANDRA-16911-upgrade-path]

We simply stop creating these marker files; they are still removed when a node 
contains them, as part of the upgrade.
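The upgrade-path behaviour described above — no longer writing marker files, but still deleting leftovers on startup — could be sketched roughly like this. The marker file name and the directory walk are illustrative assumptions, not the actual Cassandra code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class EphemeralMarkerCleanup {
    // Assumed name of the legacy marker file; for illustration only.
    static final String MARKER_NAME = "ephemeral.snapshot";

    /** Walks a directory tree and deletes any leftover ephemeral snapshot markers. */
    public static int removeLeftoverMarkers(Path dataDir) {
        int removed = 0;
        try (Stream<Path> paths = Files.walk(dataDir)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                if (Files.isRegularFile(p) && p.getFileName().toString().equals(MARKER_NAME)) {
                    Files.delete(p);
                    removed++;
                }
            }
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return removed;
    }

    /** Creates a throwaway snapshot directory containing one marker file. */
    public static Path makeSampleDir() {
        try {
            Path dir = Files.createTempDirectory("snapshots");
            Files.createFile(dir.resolve(MARKER_NAME));
            return dir;
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path dir = makeSampleDir();
        System.out.println(removeLeftoverMarkers(dir)); // prints 1
    }
}
```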

> Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest
> --
>
> Key: CASSANDRA-16911
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16911
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>
> While creating an ephemeral snapshot, a marker file is created on disk in the 
> respective snapshot directory. This is no longer necessary, as we have 
> introduced SnapshotManifest in CASSANDRA-16789, so we can move this flag 
> there. By recording in SnapshotManifest whether a snapshot is ephemeral, we 
> simplify and "clean up" the snapshotting process and related codebase.






[jira] [Commented] (CASSANDRA-16790) Add auto_snapshot_ttl configuration

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409450#comment-17409450
 ] 

Stefan Miklosovic commented on CASSANDRA-16790:
---

I am waiting for [~fibersel] to prepare that branch for review, as we have 
already merged a lot of things from it.

> Add auto_snapshot_ttl configuration
> ---
>
> Key: CASSANDRA-16790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16790
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local/Config
>Reporter: Paulo Motta
>Assignee: Abuli Palagashvili
>Priority: Normal
>
> This property should take a human-readable parameter (e.g. 6h, 3days). When 
> specified and {{auto_snapshot: true}} is set, auto snapshots should use the 
> specified TTL.
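A rough illustration of parsing such a human-readable value into a duration — the accepted unit spellings here are an assumption, not a final config format:

```java
import java.time.Duration;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AutoSnapshotTtl {
    // Hypothetical accepted forms: "6h", "12hours", "3d", "3days".
    private static final Pattern TTL = Pattern.compile("(\\d+)\\s*(h|hours?|d|days?)");

    public static Duration parse(String value) {
        Matcher m = TTL.matcher(value.trim().toLowerCase());
        if (!m.matches())
            throw new IllegalArgumentException("invalid TTL: " + value);
        long n = Long.parseLong(m.group(1));
        // Unit group starts with 'h' for hours, otherwise it is days.
        return m.group(2).startsWith("h") ? Duration.ofHours(n) : Duration.ofDays(n);
    }

    public static void main(String[] args) {
        System.out.println(parse("6h"));    // PT6H
        System.out.println(parse("3days")); // PT72H
    }
}
```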






[jira] [Comment Edited] (CASSANDRA-16860) Add --older-than option to nodetool clearsnapshot

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409437#comment-17409437
 ] 

Stefan Miklosovic edited comment on CASSANDRA-16860 at 9/3/21, 10:50 AM:
-

I think that when we introduce this to the clearsnapshot command, we actually need 
two flags. You might use

{code}
 --older-than=1d
{code}

This means "remove all snapshots older than 1 day".

The second one would be

{code}
 --older-than-timestamp=unixtimestamp
{code}

This would, obviously, clear everything older than that timestamp.

There is a distinction between these two: if I want to remove all snapshots I 
took in the last hour, I do not want to compute a timestamp for that. On the other 
hand, if I know exactly from when I want to remove them, I do not want to compute 
"how far ago it was".

Internally, --older-than would translate to --older-than-timestamp by taking 
the current system time on the client and subtracting the period, so we only ever 
send a timestamp to the server.
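The client-side translation described above could be sketched as follows; the unit suffixes and method names are assumptions for illustration:

```java
public class OlderThanOption {
    /** Parses a relative period like "90s", "30m", "12h" or "1d" into milliseconds. */
    public static long periodToMillis(String period) {
        long n = Long.parseLong(period.substring(0, period.length() - 1));
        switch (period.charAt(period.length() - 1)) {
            case 's': return n * 1000L;
            case 'm': return n * 60_000L;
            case 'h': return n * 3_600_000L;
            case 'd': return n * 86_400_000L;
            default:  throw new IllegalArgumentException("unknown unit: " + period);
        }
    }

    /**
     * --older-than=PERIOD becomes an absolute unix timestamp (ms) cutoff,
     * so only --older-than-timestamp ever reaches the server.
     */
    public static long toCutoffTimestamp(String period, long nowMillis) {
        return nowMillis - periodToMillis(period);
    }

    public static void main(String[] args) {
        // One day before "now": snapshots older than this cutoff get removed.
        System.out.println(toCutoffTimestamp("1d", System.currentTimeMillis()));
    }
}
```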



> Add --older-than option to nodetool clearsnapshot
> -
>
> Key: CASSANDRA-16860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16860
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tool/nodetool
>Reporter: Jack Casey
>Assignee: Jack Casey
>Priority: Normal
> Fix For: 4.x
>
>
> h1. Summary
> Opening this issue in reference to [this WIP 
> PR|https://github.com/apache/cassandra/pull/1148]:
> This functionality allows users of Cassandra to remove snapshots ad-hoc, 
> based on a TTL. This is to address the problem of snapshots accumulating. For 
> example, an organization I work for aims to keep snapshots for 30 days, 
> however we don't have any way to easily clean them after those 30 days are up.
> This is similar to the goals set in: 
> https://issues.apache.org/jira/browse/CASSANDRA-16451 however would be 
> available for Cassandra 3.x.
> h1. Functionality
> This adds a new command to NodeTool, called {{expiresnapshot}} with the 
> following options:
> NAME
>  nodetool expiresnapshots - Removes snapshots that are older than a TTL
>  in days
> SYNOPSIS
>  nodetool [(-h  | --host )] [(-p  | --port )]
>  [(-pw  | --password )]
>  [(-pwf  | --password-file )]
>  [(-u  | --username )] expiresnapshots [--dry-run]
>  (-t  | --ttl )
> OPTIONS
>  --dry-run
>  Run without actually clearing snapshots
> -h , --host 
>  Node hostname or ip address
> -p , --port 
>  Remote jmx agent port number
> -pw , --password 
>  Remote jmx agent password
> -pwf , --password-file 
>  Path to the JMX password file
> -t , --ttl 
>  TTL (in days) to expire snapshots
> -u , --username 
>  Remote jmx agent username
> The snapshot date is taken by converting the default snapshot name timestamps 
> (epoch time in milliseconds). For this reason, snapshot names that don't 
> contain a timestamp in this format will not be cleared.
> h1. Example Use
> This Cassandra environment has a number of snapshots, a few are recent, and a 
> few outdated:
> root@cassandra001:/cassandra# nodetool listsnapshots
>  Snapshot Details:
>  Snapshot name Keyspace name Column family name True size Size on disk
>  1529173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173909461 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1599173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173916816 users_keyspace users 362.03 KiB 362.89 KiB
> Total TrueDiskSpaceUsed: 1.77 MiB
> To validate the removal runs as expected, we can use the `--dry-run` option:
> root@cassandra001:/cassandra# nodetool expiresnapshots --ttl 30 --dry-run
>  Starting simulated cleanup of snapshots older than 30 days
>  Clearing (dry run): 1529173922063
>  Clearing (dry run): 1599173922063
>  Cleared (dry run): 2 snapshots
> Now that we are confident the correct snapshots will be removed, we can omit 
> the {{--dry-run}} flag:
> root@cassandra001:/cassandra# nodetool expiresnapshots --ttl 30
>  

[jira] [Commented] (CASSANDRA-16860) Add --older-than option to nodetool clearsnapshot

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409437#comment-17409437
 ] 

Stefan Miklosovic commented on CASSANDRA-16860:
---

I think that when we introduce this to the clearsnapshot command, we actually need 
two flags. You might use

{code}
 --older-than=1d
{code}

This means "remove all snapshots older than 1 day".

The second one would be

{code}
 --older-than-timestamp=unixtimestamp
{code}

This would, obviously, clear everything older than that timestamp.

There is a distinction between these two: if I want to remove all snapshots I 
took in the last hour, I do not want to compute a timestamp for that. On the other 
hand, if I know exactly from when I want to remove them, I do not want to compute 
"how far ago it was".

Internally, --older-than would translate to --older-than-timestamp by taking 
the current system time on the client and subtracting the period, so we only ever 
send a timestamp to the server.

> Add --older-than option to nodetool clearsnapshot
> -
>
> Key: CASSANDRA-16860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16860
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tool/nodetool
>Reporter: Jack Casey
>Assignee: Jack Casey
>Priority: Normal
> Fix For: 4.x
>
>
> h1. Summary
> Opening this issue in reference to [this WIP 
> PR|https://github.com/apache/cassandra/pull/1148]:
> This functionality allows users of Cassandra to remove snapshots ad-hoc, 
> based on a TTL. This is to address the problem of snapshots accumulating. For 
> example, an organization I work for aims to keep snapshots for 30 days, 
> however we don't have any way to easily clean them after those 30 days are up.
> This is similar to the goals set in: 
> https://issues.apache.org/jira/browse/CASSANDRA-16451 however would be 
> available for Cassandra 3.x.
> h1. Functionality
> This adds a new command to NodeTool, called {{expiresnapshot}} with the 
> following options:
> NAME
>  nodetool expiresnapshots - Removes snapshots that are older than a TTL
>  in days
> SYNOPSIS
>  nodetool [(-h  | --host )] [(-p  | --port )]
>  [(-pw  | --password )]
>  [(-pwf  | --password-file )]
>  [(-u  | --username )] expiresnapshots [--dry-run]
>  (-t  | --ttl )
> OPTIONS
>  --dry-run
>  Run without actually clearing snapshots
> -h , --host 
>  Node hostname or ip address
> -p , --port 
>  Remote jmx agent port number
> -pw , --password 
>  Remote jmx agent password
> -pwf , --password-file 
>  Path to the JMX password file
> -t , --ttl 
>  TTL (in days) to expire snapshots
> -u , --username 
>  Remote jmx agent username
> The snapshot date is taken by converting the default snapshot name timestamps 
> (epoch time in milliseconds). For this reason, snapshot names that don't 
> contain a timestamp in this format will not be cleared.
> h1. Example Use
> This Cassandra environment has a number of snapshots, a few are recent, and a 
> few outdated:
> root@cassandra001:/cassandra# nodetool listsnapshots
>  Snapshot Details:
>  Snapshot name Keyspace name Column family name True size Size on disk
>  1529173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173909461 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1599173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173916816 users_keyspace users 362.03 KiB 362.89 KiB
> Total TrueDiskSpaceUsed: 1.77 MiB
> To validate the removal runs as expected, we can use the `--dry-run` option:
> root@cassandra001:/cassandra# nodetool expiresnapshots --ttl 30 --dry-run
>  Starting simulated cleanup of snapshots older than 30 days
>  Clearing (dry run): 1529173922063
>  Clearing (dry run): 1599173922063
>  Cleared (dry run): 2 snapshots
> Now that we are confident the correct snapshots will be removed, we can omit 
> the {{--dry-run}} flag:
> root@cassandra001:/cassandra# nodetool expiresnapshots --ttl 30
>  Starting cleanup of snapshots older than 30 days
>  Clearing: 1529173922063
>  Clearing: 1599173922063
>  Cleared: 2 snapshots
> To confirm our changes are successful, we list the snapshots that still 
> remain:
> root@cassandra001:/cassandra# nodetool listsnapshots
>  Snapshot Details:
>  Snapshot name Keyspace name Column family name True size Size on disk
>  1629173909461 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173922063 users_keyspace users 362.03 KiB 362.89 KiB
>  1629173916816 users_keyspace users 362.03 KiB 362.89 KiB
> Total TrueDiskSpaceUsed: 1.06 MiB
> h1. Next Steps
> To be completed:
>  - Tests
>  - Documentation updates
> I am new to this repository, and am fuzzy on a few details even after 
> reading the contribution guide. Any advice on the following would be greatly 
> appreciated!
>  - What branch would this type of 

[jira] [Comment Edited] (CASSANDRA-16911) Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409411#comment-17409411
 ] 

Stefan Miklosovic edited comment on CASSANDRA-16911 at 9/3/21, 10:05 AM:
-

One not-so-obvious consequence of this patch is that, until now, there was a 
"scrubbing" process as part of the startup checks which scanned the directories and 
removed these ephemeral snapshots when that marker file was found.

Since we are using SnapshotManager from now on (or at least we try to drift 
towards that), it is too soon to involve SnapshotManager at this very early 
stage of a node's boot sequence. The machinery it needs to scan directories for 
snapshots (iterating over all Keyspaces and so on) is just not there yet.

In this new implementation, we indeed get rid of ephemerals too, but it is done 
a little bit later (still part of the boot sequence), just after everything 
SnapshotManager needs is properly initialised. Removal of ephemerals is done as a 
one-time job, and SnapshotManager is started in such a way that the periodic 
check for expired snapshots is not yet in effect. We resume it later in the 
boot sequence, which will be obvious to a reader of the patch.

This patch also introduces an interesting problem on the upgrade path; 
imagine this scenario.
 # I am running 4.0, where an ephemeral snapshot is taken and stored on 
disk with an ephemeral marker file
 # this node on 4.0 is turned off, so the ephemeral snapshot still lives on disk, 
because it is meant to be removed on the next startup
 # I upgrade this node to 4.1 (or whatever version this change will be in), but 
now we expect to have a SnapshotManifest
 # since the manifest file predates the introduction of the ephemeral flag, and 
the marker file is no longer taken into account, on creating the 
SnapshotManifest the snapshot will be evaluated as _not_ ephemeral
 # since it is no longer ephemeral, we have just "promoted" a snapshot from 
ephemeral to normal, and it will never be removed

I think this problem has to be addressed by writing ephemeral marker 
files, as is done now, in parallel with introducing this flag into 
SnapshotManifest. Once a user on 4.0 upgrades to 4.1, 
ephemeral marker files will still be taken into account and removed on 
startup; it is just that any new ephemeral snapshots will not have this marker 
file, and the flag in the manifest will be persisted instead.

In the next version, say 4.2, we will get rid of the marker file logic completely, 
because 4.1 will create all ephemeral snapshots with a flag in the manifest 
only.
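The transitional rule proposed above — honour the legacy marker until the manifest flag fully takes over — could be sketched like this. The field and file names are assumptions, not the actual SnapshotManifest API:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class EphemeralCheck {
    /**
     * Transitional rule: if the (new) manifest carries an ephemeral flag, trust it;
     * for pre-upgrade snapshots with no flag, fall back to the legacy marker file.
     */
    public static boolean isEphemeral(Boolean manifestFlag, Path snapshotDir) {
        if (manifestFlag != null)
            return manifestFlag; // 4.1+ snapshots: flag recorded in the manifest
        // pre-4.1 snapshots: no flag in the manifest, check the marker file
        return Files.exists(snapshotDir.resolve("ephemeral.snapshot"));
    }

    public static void main(String[] args) {
        System.out.println(isEphemeral(Boolean.TRUE, Path.of("."))); // prints true
    }
}
```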



[jira] [Updated] (CASSANDRA-16911) Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest

2021-09-03 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-16911:
--
Status: In Progress  (was: Patch Available)

> Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest
> --
>
> Key: CASSANDRA-16911
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16911
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>
> While creating an ephemeral snapshot, a marker file is created on disk in the 
> respective snapshot directory. This is no longer necessary, as we have 
> introduced SnapshotManifest in CASSANDRA-16789, so we can move this flag 
> there. By recording in SnapshotManifest whether a snapshot is ephemeral, we 
> simplify and "clean up" the snapshotting process and related codebase.







[jira] [Commented] (CASSANDRA-16911) Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409411#comment-17409411
 ] 

Stefan Miklosovic commented on CASSANDRA-16911:
---

One not so obvious consequence of this patch is that, until now, there was a 
"scrubbing" process as part of startup checks which scanned the directories and 
removed these ephemeral snapshots when that marker file was found.

Since we are using SnapshotManager from now on (or at least we try to drift 
towards that), it is too soon to involve SnapshotManager at such an early 
stage of the node boot sequence. The infrastructure it needs to scan 
directories for snapshots (iterating over all Keyspaces and so on) is just not 
there yet.

In this new implementation, we indeed get rid of ephemerals too, but it is 
done a little bit later (still as part of the boot sequence), just after 
everything is properly initialised for SnapshotManager. Removal of ephemerals 
is done as a one-time job, and SnapshotManager is started in such a way that 
periodic checking for expired snapshots is not yet in effect. We resume that 
later in the boot sequence, which will be obvious to a reader of the patch.

This patch also introduces an interesting problem on the upgrade path; 
imagine this scenario.
 # I am running 4.0, where an ephemeral snapshot is taken and stored on disk 
with an ephemeral marker file
 # this node on 4.0 is turned off, so the ephemeral snapshot still lives on 
disk because it is meant to be removed on the next startup
 # I upgrade this node to 4.1 (or whatever version this change will be in), 
but now we are expecting to have a SnapshotManifest
 # Since there is a manifest file from the time when the ephemeral flag was 
not yet introduced and the marker file is not there anymore, by creating 
SnapshotManifest it will be evaluated as if that snapshot is _not_ ephemeral
 # Since it is not ephemeral anymore, we have just "promoted" a snapshot from 
ephemeral to a normal one which will never be removed

I think that this problem has to be addressed by writing ephemeral marker 
files, as is done now, in parallel with introducing this flag into 
SnapshotManifest. That way, once a user on 4.0 upgrades to 4.1, ephemeral 
marker files will still be taken into account and removed on startup; it is 
just that any new ephemeral snapshot will not have this marker file, and the 
flag in the manifest will be persisted instead.

In the next version, like 4.2, we will get rid of the marker file logic 
completely, because 4.1 will create all ephemeral snapshots with a flag in the 
manifest only.

> Remove ephemeral snapshot marker file and introduce a flag to SnapshotManifest
> --
>
> Key: CASSANDRA-16911
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16911
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Other
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
>
> While creating an ephemeral snapshot, a marker file is created on disk in the 
> respective snapshot directory. This is not necessary anymore, as we have 
> introduced SnapshotManifest in CASSANDRA-16789, so we can move this flag 
> there. By putting the information whether a snapshot is ephemeral or not into 
> SnapshotManifest, we simplify and "clean up" the snapshotting process and 
> related codebase.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16790) Add auto_snapshot_ttl configuration

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409396#comment-17409396
 ] 

Stefan Miklosovic commented on CASSANDRA-16790:
---

Yes [~blerer], I will do that gladly.

> Add auto_snapshot_ttl configuration
> ---
>
> Key: CASSANDRA-16790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16790
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local/Config
>Reporter: Paulo Motta
>Assignee: Abuli Palagashvili
>Priority: Normal
>
> This property should take a human-readable parameter (e.g. 6h, 3days). When 
> specified and {{auto_snapshot: true}}, auto snapshots created should use the 
> specified TTL.
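Parsing a human-readable duration like the ones mentioned ("6h", "3days") could look roughly like this. A hedged sketch only: the unit table and accepted spellings below are assumptions, not the set of units the actual patch will accept.

```python
import re
from datetime import timedelta

# Hypothetical unit table; the real configuration may accept other spellings.
_UNITS = {
    "s": "seconds", "seconds": "seconds",
    "m": "minutes", "minutes": "minutes",
    "h": "hours",   "hours": "hours",
    "d": "days",    "day": "days", "days": "days",
}

def parse_ttl(value: str) -> timedelta:
    """Parse a human-readable TTL such as '6h' or '3days'."""
    m = re.fullmatch(r"\s*(\d+)\s*([A-Za-z]+)\s*", value)
    if m is None:
        raise ValueError(f"unparseable TTL: {value!r}")
    amount, unit = int(m.group(1)), m.group(2).lower()
    if unit not in _UNITS:
        raise ValueError(f"unknown TTL unit: {unit!r}")
    return timedelta(**{_UNITS[unit]: amount})
```

Rejecting unknown units at parse time surfaces a misconfigured TTL at startup rather than silently keeping snapshots forever.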



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16790) Add auto_snapshot_ttl configuration

2021-09-03 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-16790:
--
Reviewers: Stefan Miklosovic
   Status: Review In Progress  (was: Patch Available)

> Add auto_snapshot_ttl configuration
> ---
>
> Key: CASSANDRA-16790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16790
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local/Config
>Reporter: Paulo Motta
>Assignee: Abuli Palagashvili
>Priority: Normal
>
> This property should take a human-readable parameter (e.g. 6h, 3days). When 
> specified and {{auto_snapshot: true}}, auto snapshots created should use the 
> specified TTL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16790) Add auto_snapshot_ttl configuration

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409391#comment-17409391
 ] 

Benjamin Lerer commented on CASSANDRA-16790:


[~stefan.miklosovic] we need a second reviewer for this ticket. Would you have 
some time for it?


> Add auto_snapshot_ttl configuration
> ---
>
> Key: CASSANDRA-16790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16790
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local/Config
>Reporter: Paulo Motta
>Assignee: Abuli Palagashvili
>Priority: Normal
>
> This property should take a human-readable parameter (e.g. 6h, 3days). When 
> specified and {{auto_snapshot: true}}, auto snapshots created should use the 
> specified TTL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16404) Provide a nodetool way of invalidating auth caches

2021-09-03 Thread Aleksei Zotov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409375#comment-17409375
 ] 

Aleksei Zotov commented on CASSANDRA-16404:
---

I raised CASSANDRA-16914 to implement Virtual Tables for Auth Caches.

> Provide a nodetool way of invalidating auth caches
> --
>
> Key: CASSANDRA-16404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16404
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization
>Reporter: Sumanth Pasupuleti
>Assignee: Aleksei Zotov
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We currently have nodetool commands to invalidate certain caches like 
> KeyCache, RowCache and CounterCache. 
> Being able to invalidate auth caches as well can come in handy in situations 
> where critical backend auth changes may need to be in effect right away for 
> all the connections, especially in configurations where cache validity is 
> chosen to be for a longer duration. An example can be that an authenticated 
> user "User1" is no longer authorized to access a table resource "table1" and 
> it is vital that this change is reflected right away, without having to wait 
> for cache expiry/refresh to trigger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16914) Implement Virtual Tables for Auth Caches

2021-09-03 Thread Aleksei Zotov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksei Zotov updated CASSANDRA-16914:
--
Summary: Implement Virtual Tables for Auth Caches  (was: Virtual Tables for 
Auth Caches)

> Implement Virtual Tables for Auth Caches
> 
>
> Key: CASSANDRA-16914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16914
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization, Feature/Virtual Tables
>Reporter: Aleksei Zotov
>Assignee: Aleksei Zotov
>Priority: Low
> Fix For: 4.1
>
>
> {{NodeTool}} commands for Auth Caches invalidation were implemented as part 
> of the CASSANDRA-16404 ticket. While discussing that ticket, it was agreed 
> that there is a need to develop the same kind of functionality through 
> Virtual Tables. Unfortunately, VTs did not have {{TRUNCATE}} and {{DELETE}} 
> support, and CASSANDRA-16806 was created for that reason. Once it is 
> completed, further work can be started.
> The goal of this ticket is to create VTs for the following caches:
>  * {{CredentialsCache}}
>  * {{JmxPermissionsCache}}
>  * {{NetworkPermissionsCache}}
>  * {{PermissionsCache}}
>  * {{RolesCache}}
> The VTs should support reading from and modification of the Auth Caches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16914) Virtual Tables for Auth Caches

2021-09-03 Thread Aleksei Zotov (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksei Zotov updated CASSANDRA-16914:
--
Change Category: Operability
 Complexity: Low Hanging Fruit
  Fix Version/s: 4.1
   Priority: Low  (was: Normal)
 Status: Open  (was: Triage Needed)

> Virtual Tables for Auth Caches
> --
>
> Key: CASSANDRA-16914
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16914
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization, Feature/Virtual Tables
>Reporter: Aleksei Zotov
>Assignee: Aleksei Zotov
>Priority: Low
> Fix For: 4.1
>
>
> {{NodeTool}} commands for Auth Caches invalidation were implemented as part 
> of the CASSANDRA-16404 ticket. While discussing that ticket, it was agreed 
> that there is a need to develop the same kind of functionality through 
> Virtual Tables. Unfortunately, VTs did not have {{TRUNCATE}} and {{DELETE}} 
> support, and CASSANDRA-16806 was created for that reason. Once it is 
> completed, further work can be started.
> The goal of this ticket is to create VTs for the following caches:
>  * {{CredentialsCache}}
>  * {{JmxPermissionsCache}}
>  * {{NetworkPermissionsCache}}
>  * {{PermissionsCache}}
>  * {{RolesCache}}
> The VTs should support reading from and modification of the Auth Caches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-16914) Virtual Tables for Auth Caches

2021-09-03 Thread Aleksei Zotov (Jira)
Aleksei Zotov created CASSANDRA-16914:
-

 Summary: Virtual Tables for Auth Caches
 Key: CASSANDRA-16914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16914
 Project: Cassandra
  Issue Type: Improvement
  Components: Feature/Authorization, Feature/Virtual Tables
Reporter: Aleksei Zotov
Assignee: Aleksei Zotov


{{NodeTool}} commands for Auth Caches invalidation were implemented as part of 
the CASSANDRA-16404 ticket. While discussing that ticket, it was agreed that 
there is a need to develop the same kind of functionality through Virtual 
Tables. Unfortunately, VTs did not have {{TRUNCATE}} and {{DELETE}} support, 
and CASSANDRA-16806 was created for that reason. Once it is completed, further 
work can be started.

The goal of this ticket is to create VTs for the following caches:
 * {{CredentialsCache}}
 * {{JmxPermissionsCache}}
 * {{NetworkPermissionsCache}}
 * {{PermissionsCache}}
 * {{RolesCache}}

The VTs should support reading from and modification of the Auth Caches.
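What "reading from and modification of" a cache through a virtual table means can be modelled with a small in-memory stand-in. This is a toy Python illustration only: the real tables wrap Cassandra's AuthCache instances and are driven by CQL, and none of the names below exist in the codebase.

```python
class AuthCacheVirtualTable:
    """Toy model of a virtual table over one auth cache: SELECT reads the
    live entries, DELETE invalidates a single key, and TRUNCATE
    invalidates the whole cache."""

    def __init__(self, name, cache):
        self.name = name      # e.g. "roles_cache"
        self._cache = cache   # e.g. role name -> cached permissions

    def select(self):
        # SELECT * FROM the virtual table
        return sorted(self._cache.items())

    def delete(self, key):
        # DELETE ... WHERE key = ?  -> invalidate one entry
        self._cache.pop(key, None)

    def truncate(self):
        # TRUNCATE -> full invalidation, the VT analogue of the
        # nodetool invalidate* commands from CASSANDRA-16404
        self._cache.clear()
```

Issuing TRUNCATE against such a table would then have the same effect as the corresponding nodetool invalidation command, which is why TRUNCATE/DELETE support (CASSANDRA-16806) is a prerequisite.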



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15269) Cassandra fails to process OperationExecutionException which causes ClassCastException

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409352#comment-17409352
 ] 

Benjamin Lerer commented on CASSANDRA-15269:


Sorry, I realize that my commit description was confusing. The 
{{ClassCastException}} is happening on the server, NOT on the driver side, but 
it occurs while the server serializes the message to the driver. The new unit 
test checks exactly the code path that was leading to the 
{{ClassCastException}}.

> Cassandra fails to process OperationExecutionException which causes 
> ClassCastException
> --
>
> Key: CASSANDRA-15269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15269
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Liudmila Kornilova
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0.x
>
>
> While working on CASSANDRA-15232 I noticed that OperationExecutionException 
> is not processed correctly.
> How to reproduce the issue:
>  1. {{create table d (numerator decimal primary key, denominator decimal);}}
>  2. {{insert into d (numerator, denominator) values 
> (123456789112345678921234567893123456, 2);}}
>  3. {{select numerator % denominator from d;}}
> What happens:
>  1. remainder operation throws ArithmeticException (BigDecimal:1854)
>  2. The exception is wrapped in OperationExecutionException
>  3. ClassCastException appears (OperationExecutionException cannot be cast to 
> FunctionExecutionException at ErrorMessage.java:280)
> What should happen:
> OperationExecutionException with message "the operation 'decimal % decimal' 
> failed: Division impossible" should be delivered to the user.
> Note that after fixing CASSANDRA-15232 {{select numerator % denominator from 
> d;}} will produce correct result of remainder operation.
>  Currently I am not aware of other cases where OperationExecutionException 
> may be treated as FunctionExecutionException.
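The failure mode (an unconditional cast to FunctionExecutionException at encode time) and the fix direction (dispatch on the actual exception type) can be illustrated with a toy error encoder. A Python sketch under stated assumptions: the class names mirror the Cassandra exceptions, but the encoder, its return shape, and the error codes are invented for illustration.

```python
class FunctionExecutionException(Exception):
    pass

class OperationExecutionException(Exception):
    pass

def encode_error(exc):
    """Toy error-message encoder. Dispatching on the real exception type
    (instead of blindly casting everything to FunctionExecutionException,
    which is what triggered the ClassCastException) delivers the right
    message for each kind of failure."""
    if isinstance(exc, FunctionExecutionException):
        return ("FUNCTION_FAILURE", str(exc))
    if isinstance(exc, OperationExecutionException):
        return ("OPERATION_FAILURE", str(exc))
    return ("SERVER_ERROR", str(exc))
```

With this shape, the division failure above reaches the user as an operation failure rather than crashing the server-side serialization.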



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409354#comment-17409354
 ] 

Benjamin Lerer commented on CASSANDRA-12734:


The patches look good to me. Thanks :-) 

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views, Legacy/Tools
>Reporter: Hau Phan
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.
> Can the materialized view be created and added to the corresponding table's 
> schema file?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12734) Materialized View schema file for snapshots created as tables

2021-09-03 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-12734:
---
Status: Ready to Commit  (was: Review In Progress)

> Materialized View schema file for snapshots created as tables
> -
>
> Key: CASSANDRA-12734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12734
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Materialized Views, Legacy/Tools
>Reporter: Hau Phan
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> The materialized view schema file that gets created and stored with the 
> sstables is created as a table instead of a materialized view.
> Can the materialized view be created and added to the corresponding table's 
> schema file?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16404) Provide a nodetool way of invalidating auth caches

2021-09-03 Thread Aleksei Zotov (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409342#comment-17409342
 ] 

Aleksei Zotov commented on CASSANDRA-16404:
---

Great, thanks a lot for your support [~samt]!

> Provide a nodetool way of invalidating auth caches
> --
>
> Key: CASSANDRA-16404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16404
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization
>Reporter: Sumanth Pasupuleti
>Assignee: Aleksei Zotov
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We currently have nodetool commands to invalidate certain caches like 
> KeyCache, RowCache and CounterCache. 
> Being able to invalidate auth caches as well can come in handy in situations 
> where critical backend auth changes may need to be in effect right away for 
> all the connections, especially in configurations where cache validity is 
> chosen to be for a longer duration. An example can be that an authenticated 
> user "User1" is no longer authorized to access a table resource "table1" and 
> it is vital that this change is reflected right away, without having to wait 
> for cache expiry/refresh to trigger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16404) Provide a nodetool way of invalidating auth caches

2021-09-03 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-16404:

  Fix Version/s: (was: 4.x)
 4.1
Source Control Link: 
https://github.com/apache/cassandra/commit/f9aa19e3b116c0078019e9382d1a6c4bb050f113
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed to C* trunk in 
{{[f9aa19e3|https://github.com/apache/cassandra/commit/f9aa19e3b116c0078019e9382d1a6c4bb050f113]}}
 and to dtests in 
{{[0ef8be46|https://github.com/apache/cassandra-dtest/commit/0ef8be46f8f729c80662a03fd515b6fe108531c8]}}
 and 
{{[1f5aefdc|https://github.com/apache/cassandra-dtest/commit/1f5aefdc23b5cd27dea056d119ff5d9c9801030a]}}.

Thanks for the patches and for your patience [~azotcsit]


> Provide a nodetool way of invalidating auth caches
> --
>
> Key: CASSANDRA-16404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16404
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization
>Reporter: Sumanth Pasupuleti
>Assignee: Aleksei Zotov
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We currently have nodetool commands to invalidate certain caches like 
> KeyCache, RowCache and CounterCache. 
> Being able to invalidate auth caches as well can come in handy in situations 
> where critical backend auth changes may need to be in effect right away for 
> all the connections, especially in configurations where cache validity is 
> chosen to be for a longer duration. An example can be that an authenticated 
> user "User1" is no longer authorized to access a table resource "table1" and 
> it is vital that this change is reflected right away, without having to wait 
> for cache expiry/refresh to trigger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16404) Provide a nodetool way of invalidating auth caches

2021-09-03 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-16404:

Reviewers: Benjamin Lerer, Sam Tunnicliffe, Sumanth Pasupuleti  (was: 
Benjamin Lerer, Sam Tunnicliffe)
   Status: Review In Progress  (was: Patch Available)

> Provide a nodetool way of invalidating auth caches
> --
>
> Key: CASSANDRA-16404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16404
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization
>Reporter: Sumanth Pasupuleti
>Assignee: Aleksei Zotov
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We currently have nodetool commands to invalidate certain caches like 
> KeyCache, RowCache and CounterCache. 
> Being able to invalidate auth caches as well can come in handy in situations 
> where critical backend auth changes may need to be in effect right away for 
> all the connections, especially in configurations where cache validity is 
> chosen to be for a longer duration. An example can be that an authenticated 
> user "User1" is no longer authorized to access a table resource "table1" and 
> it is vital that this change is reflected right away, without having to wait 
> for cache expiry/refresh to trigger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16404) Provide a nodetool way of invalidating auth caches

2021-09-03 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-16404:

Status: Ready to Commit  (was: Review In Progress)

> Provide a nodetool way of invalidating auth caches
> --
>
> Key: CASSANDRA-16404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16404
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Authorization
>Reporter: Sumanth Pasupuleti
>Assignee: Aleksei Zotov
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We currently have nodetool commands to invalidate certain caches like 
> KeyCache, RowCache and CounterCache. 
> Being able to invalidate auth caches as well can come in handy in situations 
> where critical backend auth changes may need to be in effect right away for 
> all the connections, especially in configurations where cache validity is 
> chosen to be for a longer duration. An example can be that an authenticated 
> user "User1" is no longer authorized to access a table resource "table1" and 
> it is vital that this change is reflected right away, without having to wait 
> for cache expiry/refresh to trigger.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] 01/02: Extend network auth test to check deprecated mbean name

2021-09-03 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git

commit 0ef8be46f8f729c80662a03fd515b6fe108531c8
Author: Sam Tunnicliffe 
AuthorDate: Tue Aug 17 14:26:45 2021 +0100

Extend network auth test to check deprecated mbean name

Patch by Sam Tunnicliffe; reviewed by Aleksei Zotov for
CASSANDRA-16404
---
 auth_test.py | 33 +
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/auth_test.py b/auth_test.py
index df57fb0..ca2056c 100644
--- a/auth_test.py
+++ b/auth_test.py
@@ -3079,8 +3079,8 @@ class TestNetworkAuth(Tester):
 with JolokiaAgent(node) as jmx:
 jmx.execute_method(mbean, 'invalidate')
 
-def clear_network_auth_cache(self, node):
-mbean = make_mbean('auth', type='NetworkAuthCache')
+def clear_network_auth_cache(self, node, 
cache_name='NetworkPermissionsCache'):
+mbean = make_mbean('auth', type=cache_name)
 with JolokiaAgent(node) as jmx:
 jmx.execute_method(mbean, 'invalidate')
 
@@ -3101,16 +3101,25 @@ class TestNetworkAuth(Tester):
 if a user's access to a dc is revoked while they're connected,
 all of their requests should fail once the cache is cleared
 """
-username = self.username()
-self.create_user("CREATE ROLE %s WITH password = 'password' AND LOGIN 
= true", username)
-self.assertConnectsTo(username, self.dc1_node)
-self.assertConnectsTo(username, self.dc2_node)
-
-# connect to the dc2 node, then remove permission for it
-session = self.exclusive_cql_connection(self.dc2_node, user=username, 
password='password')
-self.superuser.execute("ALTER ROLE %s WITH ACCESS TO DATACENTERS 
{'dc1'}" % username)
-self.clear_network_auth_cache(self.dc2_node)
-self.assertUnauthorized(lambda: session.execute("SELECT * FROM 
ks.tbl"))
+def test_revoked_access(cache_name):
+logger.debug('Testing with cache name: %s' % cache_name)
+username = self.username()
+self.create_user("CREATE ROLE %s WITH password = 'password' AND 
LOGIN = true", username)
+self.assertConnectsTo(username, self.dc1_node)
+self.assertConnectsTo(username, self.dc2_node)
+
+# connect to the dc2 node, then remove permission for it
+session = self.exclusive_cql_connection(self.dc2_node, 
user=username, password='password')
+self.superuser.execute("ALTER ROLE %s WITH ACCESS TO DATACENTERS 
{'dc1'}" % username)
+self.clear_network_auth_cache(self.dc2_node, cache_name)
+self.assertUnauthorized(lambda: session.execute("SELECT * FROM 
ks.tbl"))
+
+if self.dtest_config.cassandra_version_from_build > '4.0':
+test_revoked_access("NetworkPermissionsCache")
+
+# deprecated cache name, scheduled for removal in 5.0
+if self.dtest_config.cassandra_version_from_build < '5.0':
+test_revoked_access("NetworkAuthCache")
 
 def test_create_dc_validation(self):
 """

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] 02/02: Add JMX auth test

2021-09-03 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git

commit 1f5aefdc23b5cd27dea056d119ff5d9c9801030a
Author: Aleksei Zotov 
AuthorDate: Sun Aug 22 19:30:34 2021 +0400

Add JMX auth test

Patch by Aleksei Zotov; reviewed by Sam Tunnicliffe for
CASSANDRA-16404
---
 auth_test.py |  8 
 jmx_auth_test.py | 47 +--
 2 files changed, 49 insertions(+), 6 deletions(-)

diff --git a/auth_test.py b/auth_test.py
index ca2056c..bd46688 100644
--- a/auth_test.py
+++ b/auth_test.py
@@ -8,9 +8,9 @@ import re
 import pytest
 import logging
 
-from cassandra import AuthenticationFailed, InvalidRequest, Unauthorized, 
Unavailable
+from cassandra import AuthenticationFailed, InvalidRequest, Unauthorized
 from cassandra.cluster import NoHostAvailable
-from cassandra.protocol import ServerError, SyntaxException
+from cassandra.protocol import SyntaxException
 
 from dtest_setup_overrides import DTestSetupOverrides
 from dtest import Tester
@@ -24,6 +24,7 @@ from tools.misc import ImmutableMapping
 since = pytest.mark.since
 logger = logging.getLogger(__name__)
 
+
 class TestAuth(Tester):
 
 @pytest.fixture(autouse=True)
@@ -3047,8 +3048,7 @@ class TestNetworkAuth(Tester):
 fixture_dtest_setup.superuser.execute("CREATE TABLE ks.tbl (k int 
primary key, v int)")
 
 def username(self):
-return ''.join(random.choice(string.ascii_lowercase) for _ in 
range(8));
-
+return ''.join(random.choice(string.ascii_lowercase) for _ in range(8))
 
 def create_user(self, query_fmt, username):
 """
diff --git a/jmx_auth_test.py b/jmx_auth_test.py
index 199e525..e5b3d03 100644
--- a/jmx_auth_test.py
+++ b/jmx_auth_test.py
@@ -1,3 +1,5 @@
+import random
+import string
 import pytest
 import logging
 from distutils.version import LooseVersion
@@ -12,12 +14,14 @@ logger = logging.getLogger(__name__)
 
 @since('3.6')
 class TestJMXAuth(Tester):
+"""
+Uses nodetool as a means of exercising the JMX interface as JolokiaAgent
+exposes its own connector which bypasses the in-built security features
+"""
 
 def test_basic_auth(self):
 """
 Some basic smoke testing of JMX authentication and authorization.
-Uses nodetool as a means of exercising the JMX interface as 
JolokiaAgent
-exposes its own connector which bypasses the in-built security features
 @jira_ticket CASSANDRA-10091
 """
 self.prepare()
@@ -55,6 +59,42 @@ class TestJMXAuth(Tester):
 # superuser status applies to JMX authz too
 node.nodetool('-u cassandra -pw cassandra gossipinfo')
 
+@since('4.1')
+def test_revoked_jmx_access(self):
+"""
+if a user's access to a JMX MBean is revoked while they're connected,
+all of their requests should fail once the cache is cleared.
+@jira_ticket CASSANDRA-16404
+"""
+self.prepare(permissions_validity=6)
+[node] = self.cluster.nodelist()
+
+def test_revoked_access(cache_name):
+logger.debug('Testing with cache name: %s' % cache_name)
+username = self.username()
+session = self.patient_cql_connection(node, user='cassandra', 
password='cassandra')
+session.execute("CREATE ROLE %s WITH LOGIN=true AND 
PASSWORD='abc123'" % username)
+session.execute("GRANT SELECT ON MBEAN 
'org.apache.cassandra.net:type=FailureDetector' TO %s" % username)
+session.execute("GRANT DESCRIBE ON ALL MBEANS TO %s" % username)
+
+# works fine
+node.nodetool('-u %s -pw abc123 gossipinfo' % username)
+
+session.execute("REVOKE SELECT ON MBEAN 'org.apache.cassandra.net:type=FailureDetector' FROM %s" % username)
+# works fine because the JMX permission is cached
+node.nodetool('-u %s -pw abc123 gossipinfo' % username)
+
+node.nodetool('-u cassandra -pw cassandra invalidatejmxpermissionscache')
+# the user has no permissions to the JMX resource anymore
+with pytest.raises(ToolError, match='Access Denied'):
+node.nodetool('-u %s -pw abc123 gossipinfo' % username)
+
+test_revoked_access("JmxPermissionsCache")
+
+# deprecated cache name, scheduled for removal in 5.0
+if self.dtest_config.cassandra_version_from_build < '5.0':
+test_revoked_access("JMXPermissionsCache")
+
 def prepare(self, nodes=1, permissions_validity=0):
 config = {'authenticator': 'org.apache.cassandra.auth.PasswordAuthenticator',
   'authorizer': 'org.apache.cassandra.auth.CassandraAuthorizer',
@@ -69,3 +109,6 @@ class TestJMXAuth(Tester):
 def authentication_fail_message(self, node, username):
 return "Provided username {user} and/or password are 

[cassandra-dtest] branch trunk updated (03cc411 -> 1f5aefd)

2021-09-03 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git.


from 03cc411  Add test for CASSANDRA-16104
 new 0ef8be4  Extend network auth test to check deprecated mbean name
 new 1f5aefd  Add JMX auth test

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 auth_test.py | 41 +
 jmx_auth_test.py | 47 +--
 2 files changed, 70 insertions(+), 18 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated: Add nodetool commands to invalidate auth caches

2021-09-03 Thread samt
This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f9aa19e   Add nodetool commands to invalidate auth caches
f9aa19e is described below

commit f9aa19e3b116c0078019e9382d1a6c4bb050f113
Author: Aleksei Zotov 
AuthorDate: Thu Aug 12 14:52:31 2021 +0100

 Add nodetool commands to invalidate auth caches

 Patch by Aleksei Zotov; reviewed by Benjamin Lerer, Sumanth Pasupuleti
 and Sam Tunnicliffe for CASSANDRA-16404
---
 CHANGES.txt|   1 +
 src/java/org/apache/cassandra/auth/AuthCache.java  |   2 +-
 .../apache/cassandra/auth/AuthenticatedUser.java   |   4 +-
 .../apache/cassandra/auth/INetworkAuthorizer.java  |   2 +-
 ...AuthCache.java => NetworkPermissionsCache.java} |  21 +-
 ...ache.java => NetworkPermissionsCacheMBean.java} |  21 +-
 .../cassandra/auth/PasswordAuthenticator.java  |  20 +-
 .../apache/cassandra/auth/PermissionsCache.java|   8 +-
 ...rkAuthCache.java => PermissionsCacheMBean.java} |  19 +-
 src/java/org/apache/cassandra/auth/RolesCache.java |   9 +-
 ...{NetworkAuthCache.java => RolesCacheMBean.java} |  19 +-
 .../cassandra/auth/jmx/AuthorizationProxy.java |  35 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java |  75 -
 src/java/org/apache/cassandra/tools/NodeTool.java  |  31 +-
 .../tools/nodetool/InvalidateCredentialsCache.java |  49 
 .../nodetool/InvalidateJmxPermissionsCache.java|  48 
 .../InvalidateNetworkPermissionsCache.java |  49 
 .../tools/nodetool/InvalidatePermissionsCache.java | 174 +++
 .../tools/nodetool/InvalidateRolesCache.java   |  50 
 .../{RoleTestUtils.java => AuthTestUtils.java} |  69 -
 .../auth/CassandraNetworkAuthorizerTest.java   |  50 +---
 .../cassandra/auth/CassandraRoleManagerTest.java   |   6 +-
 test/unit/org/apache/cassandra/auth/RolesTest.java |  18 +-
 .../nodetool/InvalidateCredentialsCacheTest.java   | 171 +++
 .../InvalidateJmxPermissionsCacheTest.java | 192 +
 .../InvalidateNetworkPermissionsCacheTest.java | 160 +++
 .../nodetool/InvalidatePermissionsCacheTest.java   | 317 +
 .../tools/nodetool/InvalidateRolesCacheTest.java   | 159 +++
 28 files changed, 1630 insertions(+), 149 deletions(-)
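
Judging from the new command classes in the diffstat above, the operator-facing usage would presumably look like the following. This is a sketch: the command spellings are inferred from the class names, and only invalidatejmxpermissionscache is confirmed verbatim by the dtest earlier in this digest; verify the exact names with `nodetool help` on a trunk build.

```shell
# Invalidate each of the auth caches on a node (command names inferred
# from the InvalidateXxxCache classes added by CASSANDRA-16404).
nodetool -u cassandra -pw cassandra invalidatecredentialscache
nodetool -u cassandra -pw cassandra invalidatejmxpermissionscache
nodetool -u cassandra -pw cassandra invalidatenetworkpermissionscache
nodetool -u cassandra -pw cassandra invalidaterolescache
nodetool -u cassandra -pw cassandra invalidatepermissionscache
```
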

diff --git a/CHANGES.txt b/CHANGES.txt
index 8555e51..a2a4043 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.1
+ * Provide a nodetool command to invalidate auth caches (CASSANDRA-16404)
  * Catch read repair timeout exceptions and add metric (CASSANDRA-16880)
  * Exclude Jackson 1.x transitive dependency of hadoop* provided dependencies (CASSANDRA-16854)
  * Add client warnings and abort to tombstone and coordinator reads which go past a low/high watermark (CASSANDRA-16850)
diff --git a/src/java/org/apache/cassandra/auth/AuthCache.java b/src/java/org/apache/cassandra/auth/AuthCache.java
index 6393da7..32e9f0f 100644
--- a/src/java/org/apache/cassandra/auth/AuthCache.java
+++ b/src/java/org/apache/cassandra/auth/AuthCache.java
@@ -38,7 +38,7 @@ public class AuthCache implements AuthCacheMBean
 {
 private static final Logger logger = LoggerFactory.getLogger(AuthCache.class);
 
-private static final String MBEAN_NAME_BASE = "org.apache.cassandra.auth:type=";
+public static final String MBEAN_NAME_BASE = "org.apache.cassandra.auth:type=";
 
 /**
  * Underlying cache. LoadingCache will call underlying load function on {@link #get} if key is not present
diff --git a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
index 9f22bea..c2d93ca 100644
--- a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
+++ b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
@@ -40,7 +40,7 @@ public class AuthenticatedUser
 
 // User-level permissions cache.
 private static final PermissionsCache permissionsCache = new PermissionsCache(DatabaseDescriptor.getAuthorizer());
-private static final NetworkAuthCache networkAuthCache = new NetworkAuthCache(DatabaseDescriptor.getNetworkAuthorizer());
+private static final NetworkPermissionsCache networkPermissionsCache = new NetworkPermissionsCache(DatabaseDescriptor.getNetworkAuthorizer());
 
 private final String name;
 // primary Role of the logged in user
@@ -136,7 +136,7 @@ public class AuthenticatedUser
  */
 public boolean hasLocalAccess()
 {
-return networkAuthCache.get(this.getPrimaryRole()).canAccess(Datacenters.thisDatacenter());
+return networkPermissionsCache.get(this.getPrimaryRole()).canAccess(Datacenters.thisDatacenter());
 }
 
 @Override
diff --git a/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java 

[jira] [Commented] (CASSANDRA-15269) Cassandra fails to process OperationExecutionException which causes ClassCastException

2021-09-03 Thread Benjamin Lerer (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409325#comment-17409325
 ] 

Benjamin Lerer commented on CASSANDRA-15269:


The old PR relied on a hack to test the fix. The new PR's unit test reproduces 
the scenario that leads to the {{ClassCastException}} without relying on any 
hack.

> Cassandra fails to process OperationExecutionException which causes 
> ClassCastException
> --
>
> Key: CASSANDRA-15269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15269
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Liudmila Kornilova
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0.x
>
>
> While working on CASSANDRA-15232 I noticed that OperationExecutionException 
> is not processed correctly.
> How to reproduce the issue:
>  1. {{create table d (numerator decimal primary key, denominator decimal);}}
>  2. {{insert into d (numerator, denominator) values 
> (123456789112345678921234567893123456, 2);}}
>  3. {{select numerator % denominator from d;}}
> What happens:
>  1. remainder operation throws ArithmeticException (BigDecimal:1854)
>  2. The exception is wrapped in OperationExecutionException
>  3. ClassCastException appears (OperationExecutionException cannot be cast to 
> FunctionExecutionException at ErrorMessage.java:280)
> What should happen:
> OperationExecutionException with message "the operation 'decimal % decimal' 
> failed: Division impossible" should be delivered to user 
> Note that after fixing CASSANDRA-15232 {{select numerator % denominator from 
> d;}} will produce correct result of remainder operation.
>  Currently I am not aware of other cases when OperationExecutionException may 
> be treated as FunctionExecutionException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (CASSANDRA-16666) Make SSLContext creation pluggable/extensible

2021-09-03 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409303#comment-17409303
 ] 

Stefan Miklosovic commented on CASSANDRA-1:
---

[~mck] could you please answer Maulin, as you are more aware of what is going 
on? What is the appropriate window in which we can merge this?

> Make SSLContext creation pluggable/extensible
> -
>
> Key: CASSANDRA-1
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Internode
>Reporter: Maulin Vasavada
>Assignee: Maulin Vasavada
>Priority: Normal
> Fix For: 4.x
>
>
> Currently Cassandra creates the SSLContext via SSLFactory.java. SSLFactory is 
> a final class with static methods and not overridable. The SSLFactory loads 
> the keys and certs from the file based artifacts for the same. While this 
> works for many, in the industry where security is stricter and contextual, 
> this approach falls short. Many big organizations need flexibility to load 
> the SSL artifacts from a custom resource (like custom Key Management 
> Solution, HashiCorp Vault, Amazon KMS etc). While JSSE SecurityProvider 
> architecture allows us flexibility to build our custom mechanisms to validate 
> and process security artifacts, many times all we need is to build upon 
> Java's existing extensibility that Trust/Key Manager interfaces provide to 
> load keystores from various resources in the absence of any customized 
> requirements on the Keys/Certificate formats.
> My proposal here is to make the SSLContext creation pluggable/extensible and 
> have the current SSLFactory.java implement an extensible interface. 
> I contributed a similar change that is live now in Apache Kafka (2.6.0) - 
> https://issues.apache.org/jira/browse/KAFKA-8890 
> I can spare some time writing the pluggable interface and run by the required 
> reviewers.
>  
> Created [CEP-9: Make SSLContext creation 
> pluggable|https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-9%3A+Make+SSLContext+creation+pluggable]
>  
>  
> cc: [~dcapwell] [~djoshi]






[jira] [Commented] (CASSANDRA-15269) Cassandra fails to process OperationExecutionException which causes ClassCastException

2021-09-03 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409299#comment-17409299
 ] 

Berenguer Blasi commented on CASSANDRA-15269:
-

[~blerer] lgtm, but I am missing something here. What's the benefit of the new 
PR vs. the old one? They both seem equally involved. You mention the driver, and 
I am not familiar with that side of things, so what am I missing? Thanks
