josh-mckenzie commented on a change in pull request #1213:
URL: https://github.com/apache/cassandra/pull/1213#discussion_r727189043
##########
File path: src/java/org/apache/cassandra/service/StorageProxy.java
##########
@@ -2728,4 +2805,104 @@ public void disableCheckForDuplicateRowsDuringCompaction()
     {
         DatabaseDescriptor.setCheckForDuplicateRowsDuringCompaction(false);
     }
+
+    public void initialLoadPartitionDenylist()
+    {
+        partitionDenylist.initialLoad();
+    }
+
+    @Override
+    public void loadPartitionDenylist()
+    {
+        partitionDenylist.load();
+    }
+
+    @Override
+    public int getPartitionDenylistLoadAttempts()
+    {
+        return partitionDenylist.getLoadAttempts();
+    }
+
+    @Override
+    public int getPartitionDenylistLoadSuccesses()
+    {
+        return partitionDenylist.getLoadSuccesses();
+    }
+
+    @Override
+    public void setEnablePartitionDenylist(boolean enabled)
+    {
+        DatabaseDescriptor.setEnablePartitionDenylist(enabled);
+    }
+
+    @Override
+    public void setEnableDenylistWrites(boolean enabled)
+    {
+        DatabaseDescriptor.setEnableDenylistWrites(enabled);
+    }
+
+    @Override
+    public void setEnableDenylistReads(boolean enabled)
+    {
+        DatabaseDescriptor.setEnableDenylistReads(enabled);
+    }
+
+    @Override
+    public void setEnableDenylistRangeReads(boolean enabled)
+    {
+        DatabaseDescriptor.setEnableDenylistRangeReads(enabled);
+    }
+
+    @Override
+    public void setMaxDenylistKeysPerTable(int value)
+    {
+        DatabaseDescriptor.setMaxDenylistKeysPerTable(value);
+        loadPartitionDenylist();
##########
Review comment:
I plan to write the docs at the end, once we've stabilized the API.
And yeah - the flow is going to be: 1) insert the key, 2) reload the denylist
at CL=ALL to make sure all nodes end up consistent. So there's no need to
expose the reload timer, since the current timer is only used for cache
invalidation on a per-table basis.
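To make that concrete, the operator flow from a JMX client would look roughly
like the sketch below. Heads up: the `denylistKey` operation and its signature
are placeholders on my part - only the load/reload side shows up in this hunk,
so the actual insert path may well differ.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class DenylistFlowSketch
    {
        public static void main(String[] args) throws Exception
        {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url))
            {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                ObjectName proxy = new ObjectName("org.apache.cassandra.db:type=StorageProxy");

                // Step 1: insert the key to deny (placeholder op name/signature).
                conn.invoke(proxy, "denylistKey",
                            new Object[]{ "ks", "tbl", "some-partition-key" },
                            new String[]{ "java.lang.String", "java.lang.String", "java.lang.String" });

                // Step 2: force the reload; per the flow above, the server-side
                // load reads the denylist table at CL=ALL for consistency.
                conn.invoke(proxy, "loadPartitionDenylist", new Object[0], new String[0]);
            }
        }
    }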
Honestly, with this workflow we could probably do away with the automatic
per-table `refreshAfterWrite` on the Caffeine cache entirely.
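For reference, the behavior I'm talking about removing is Caffeine's automatic
refresh, roughly as below - the names, key type, and interval here are
illustrative, not what's actually in the patch:

    import java.time.Duration;
    import java.util.Set;

    import com.github.benmanes.caffeine.cache.Caffeine;
    import com.github.benmanes.caffeine.cache.LoadingCache;

    final class DenylistCacheSketch
    {
        private static final Duration REFRESH = Duration.ofMinutes(10);

        // Keyed per table ("ks.tbl"); an entry older than REFRESH is reloaded
        // asynchronously on its next read. This is the per-table auto-refresh
        // that becomes redundant once reloads are always explicit.
        static LoadingCache<String, Set<String>> build()
        {
            return Caffeine.newBuilder()
                           .refreshAfterWrite(REFRESH)
                           .build(DenylistCacheSketch::readDenylistForTable);
        }

        // Stand-in for the read of the denylist system table.
        private static Set<String> readDenylistForTable(String table)
        {
            return Set.of();
        }
    }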
That aside, regarding whether or not we reload when we change the max
allowable keys: I think we should (but we should of course document it).
There are two use-cases there: either a) you have too many keys and want to
truncate down to a smaller limit (I can't imagine a world where an operator
would want to do this, but...), or b) you have too many keys for your current
limit and want to *expand* the allowable limit.
Even writing that out, the asymmetry of "you have to reload when you mutate
keys" vs. "you don't have to reload when you change key limits" smells to me.
I'll go ahead and remove these reloads now and make sure we cover this flow
thoroughly in the docs.
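Concretely, that means the setter drops back to pure delegation - a sketch of
the intended change (not yet committed), with the reload left to the explicit
operator step:

     @Override
     public void setMaxDenylistKeysPerTable(int value)
     {
         DatabaseDescriptor.setMaxDenylistKeysPerTable(value);
-        loadPartitionDenylist();
     }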
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]