[ https://issues.apache.org/jira/browse/CASSANDRA-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166770#comment-17166770 ]

Caleb Rackliffe edited comment on CASSANDRA-15907 at 7/29/20, 5:46 AM:
-----------------------------------------------------------------------

[~adelapena] Here are the final PRs and test runs (with not ALL commits 
squashed just yet, to make it easier to review the small bits you might not 
have reviewed yet):

3.0: [patch|https://github.com/apache/cassandra/pull/659/commits], 
[CircleCI|https://app.circleci.com/pipelines/github/maedhroz/cassandra/89/workflows/b6192a9d-774a-4453-95c1-34d46acedcf4]

3.11: [patch|https://github.com/apache/cassandra/pull/665/commits], 
[CircleCI|https://app.circleci.com/pipelines/github/maedhroz/cassandra/88/workflows/4af8d5a3-280d-449a-a650-d3ff2db5b0f4]

({{SASIIndexTest#testIndexMemtableSwitching}} is a known failure)

trunk: [patch|https://github.com/apache/cassandra/pull/666/commits], [CircleCI 
J8|https://app.circleci.com/pipelines/github/maedhroz/cassandra/92/workflows/c59be4f8-329e-4d76-9c59-d49c38e58dd2],
 [CircleCI 
J11|https://app.circleci.com/pipelines/github/maedhroz/cassandra/92/workflows/56130350-62b0-48c3-b675-bdc45e3cebf2]

The [one 
failure|https://app.circleci.com/pipelines/github/maedhroz/cassandra/92/workflows/c59be4f8-329e-4d76-9c59-d49c38e58dd2/jobs/448]
 in {{TestBootstrap}} is an existing problem, but it doesn't appear to have a 
flaky-test Jira.

Let me know if you'd like me to do a final squash at some point. Thanks for all 
your help on this one!



> Operational Improvements & Hardening for Replica Filtering Protection
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-15907
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-15907
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Consistency/Coordination, Feature/2i Index
>            Reporter: Caleb Rackliffe
>            Assignee: Caleb Rackliffe
>            Priority: Normal
>              Labels: 2i, memory
>             Fix For: 3.0.x, 3.11.x, 4.0-beta
>
>          Time Spent: 8h
>  Remaining Estimate: 0h
>
> CASSANDRA-8272 uses additional space on the heap to ensure correctness for 2i 
> and filtering queries at consistency levels above ONE/LOCAL_ONE. There are a 
> few things we should follow up on, however, to make life a bit easier for 
> operators and generally de-risk usage:
> (Note: Line numbers are based on {{trunk}} as of 
> {{3cfe3c9f0dcf8ca8b25ad111800a21725bf152cb}}.)
> *Minor Optimizations*
> * {{ReplicaFilteringProtection:114}} - Given we size them up-front, we may be 
> able to use simple arrays instead of lists for {{rowsToFetch}} and 
> {{originalPartitions}}. Alternatively (or also), we may be able to null out 
> references in these two collections more aggressively (e.g., using 
> {{ArrayList#set()}} instead of {{get()}} in {{queryProtectedPartitions()}}, 
> assuming we pass {{toFetch}} as an argument to {{querySourceOnKey()}}). A 
> rough sketch follows this list.
> * {{ReplicaFilteringProtection:323}} - We may be able to use 
> {{EncodingStats.merge()}} and remove the custom {{stats()}} method.
> * {{DataResolver:111 & 228}} - Cache an instance of 
> {{UnaryOperator#identity()}} instead of creating one on the fly.
> * {{ReplicaFilteringProtection:217}} - We may be able to scatter/gather 
> rather than serially querying every row that needs to be completed. This 
> isn't a clear win, perhaps, given it targets the latency of single queries 
> and adds some complexity. (Certainly a decent candidate to kick out of this 
> issue entirely.)
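> A minimal, hypothetical Java sketch of the first and third points above (field 
> and method names follow the issue text, but the real 
> {{ReplicaFilteringProtection}} and {{DataResolver}} signatures may differ): 
> {{ArrayList#set()}} hands the cached row to the per-key query and drops it from 
> the list in one step, and a single cached identity operator avoids allocating 
> one per call.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.function.UnaryOperator;
>
> class ReplicaFilteringProtectionSketch<T>
> {
>     // DataResolver:111 & 228 suggestion: cache one identity operator rather
>     // than calling UnaryOperator.identity() at every call site.
>     private static final UnaryOperator<Object> IDENTITY = UnaryOperator.identity();
>
>     private final List<T> rowsToFetch = new ArrayList<>();
>
>     void queryProtectedPartitions()
>     {
>         for (int i = 0; i < rowsToFetch.size(); i++)
>         {
>             // set() replaces the slot with null and returns the previous element,
>             // so the list releases its reference as soon as the row is handed off.
>             T toFetch = rowsToFetch.set(i, null);
>             querySourceOnKey(toFetch);
>         }
>     }
>
>     void querySourceOnKey(T toFetch)
>     {
>         // issue the per-key read for the row that needs completing
>     }
> }
> {code}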
> *Documentation and Intelligibility*
> * There are a few places (CHANGES.txt, tracing output in 
> {{ReplicaFilteringProtection}}, etc.) where we mention "replica-side 
> filtering protection" (which makes it seem like the coordinator doesn't 
> filter) rather than "replica filtering protection" (which sounds more like 
> what we actually do, which is protect ourselves against incorrect replica 
> filtering results). It's a minor fix, but would avoid confusion.
> * The method call chain in {{DataResolver}} might be a bit simpler if we put 
> the {{repairedDataTracker}} in {{ResolveContext}}.
> *Testing*
> * I want to bite the bullet and get some basic tests for RFP (including any 
> guardrails we might add here) onto the in-JVM dtest framework.
> *Guardrails*
> * As it stands, we don't have a way to enforce an upper bound on the memory 
> usage of {{ReplicaFilteringProtection}}, which caches row responses from the 
> first round of requests. (Remember, these are later merged with the second 
> round of results to complete the data for filtering.) Operators will likely 
> need a way to protect themselves, i.e., simply failing queries once they hit 
> a particular threshold rather than GCing nodes into oblivion. (Having control 
> over limits and page sizes doesn't quite get us there, because stale results 
> _expand_ the number of incomplete results we must cache.) The fun question is 
> how we do this, with the primary axes being scope (per-query, global, etc.) 
> and granularity (per-partition, per-row, per-cell, actual heap usage, etc.). 
> My starting disposition on the right trade-off between performance/complexity 
> and accuracy is something along the lines of a cached-rows-per-query limit; a 
> rough sketch follows below. Prior art suggests this probably makes sense 
> alongside things like {{tombstone_failure_threshold}} in {{cassandra.yaml}}.
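> A rough guardrail sketch for the cached-rows-per-query idea (the class, method, 
> and threshold names are illustrative assumptions, not actual Cassandra 
> configuration or APIs): count the rows cached for a single query and fail the 
> read once a {{cassandra.yaml}}-style threshold is crossed, in the same spirit 
> as {{tombstone_failure_threshold}}.
> {code:java}
> // Illustrative only: names and exception type are assumptions, not the real API.
> class CachedRowsGuardrail
> {
>     private final int failThreshold; // would come from a cassandra.yaml setting
>     private int cachedRows;
>
>     CachedRowsGuardrail(int failThreshold)
>     {
>         this.failThreshold = failThreshold;
>     }
>
>     // Called each time replica filtering protection caches a row from the first
>     // round of responses.
>     void onRowCached()
>     {
>         if (++cachedRows > failThreshold)
>             throw new IllegalStateException("Replica filtering protection cached more than "
>                                             + failThreshold + " rows for a single query; "
>                                             + "failing the read rather than risking the heap");
>     }
> }
> {code}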


