[
https://issues.apache.org/jira/browse/CASSANDRA-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17817823#comment-17817823
]
Caleb Rackliffe edited comment on CASSANDRA-19018 at 2/16/24 12:42 AM:
-----------------------------------------------------------------------
After resolving several bugs, ranging from issues in SAI's local internals to changes
required for non-strict filtering in replica filtering protection, I've finally
been able to consolidate everything and get stable fuzz testing/Harry runs.
(These typically burn in for between 30 minutes and 2 hours on my MBP.) The
work in progress had been carried out in [the WIP Harry
branch|https://github.com/maedhroz/cassandra/pull/15] (based on the trunk patch
for this issue) to make iterating faster, but it has now been moved back as a
[single
commit|https://github.com/apache/cassandra/pull/3083/commits/5919f3d8c5290b1b61baa43cf0783db5de0d95e1]
[here|https://github.com/apache/cassandra/pull/3083] and is ready for
review...again :)
CI is in progress, and I'll have results soon...
The changes made over the past couple of weeks have highlighted even more clearly
what this whole issue is about. We've had to make changes to ensure distributed
filtering/index queries are correct, but there is conceptually no way to do that
without trading away performance roughly in proportion to how out-of-sync the data
we're reading is. When repair keeps inconsistencies to a minimum, things should go
reasonably well, but when large divergences and large quantities of unrepaired data
are allowed to persist, there will be pain.
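To recap the underlying hazard with a minimal, self-contained sketch (plain Java, with a made-up map-based row representation rather than the actual read path): each replica's local view of a split row fails the intersection, while the row the coordinator would reconcile from both views passes it.
{noformat}
import java.util.HashMap;
import java.util.Map;

public class SplitRowSketch
{
    // A "row" here is just a column-name -> value map; a missing key means the
    // replica never saw a write for that column.
    static boolean matches(Map<String, Integer> row)
    {
        return Integer.valueOf(1).equals(row.get("a")) && Integer.valueOf(2).equals(row.get("b"));
    }

    public static void main(String[] args)
    {
        Map<String, Integer> replica1 = Map.of("k", 0, "a", 1); // INSERT ... (k, a) VALUES (0, 1) landed only here
        Map<String, Integer> replica2 = Map.of("k", 0, "b", 2); // INSERT ... (k, b) VALUES (0, 2) landed only here

        // Filtering purely against each replica's local view, as a local index/filtering
        // query does, produces no match on either node...
        System.out.println(matches(replica1)); // false
        System.out.println(matches(replica2)); // false

        // ...but the row the coordinator reconciles from both replicas is a full match.
        Map<String, Integer> reconciled = new HashMap<>(replica1);
        reconciled.putAll(replica2);
        System.out.println(matches(reconciled)); // true
    }
}
{noformat}
That asymmetry is exactly why the coordinator needs RFP (or something like it) to recover full matches, and why the cost grows with how divergent the replicas are.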
It's also probably worth noting that static columns have become even more dangerous,
as a match on a static column can now entail pulling an entire partition through RFP
in coordinator space or post-filtering the entire partition locally.
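To illustrate, here's a minimal sketch in the style of the repro quoted below, with a hypothetical schema (the static column "s", regular column "v", and clustering column "c" are made up for this example): because a static value is shared by every row in its partition, a match on "s" can only narrow the search to whole partitions, and deciding which rows also satisfy the regular-column predicate can mean dragging each of those partitions through RFP or local post-filtering.
{noformat}
try (Cluster cluster = init(Cluster.build(2).withConfig(config -> config.with(GOSSIP).with(NETWORK)).start()))
{
    // Hypothetical schema: "s" is static (one value per partition), "v" is a regular column; both are SAI-indexed.
    cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int, c int, s int static, v int, PRIMARY KEY (k, c))"));
    cluster.schemaChange(withKeyspace("CREATE INDEX ON %s.t(s) USING 'sai'"));
    cluster.schemaChange(withKeyspace("CREATE INDEX ON %s.t(v) USING 'sai'"));

    // The static predicate matches every row of every partition where s = 1, so answering
    // "which of those rows also have v = 2" correctly at CL > ONE can require fetching and
    // filtering entire partitions, either through RFP at the coordinator or locally on replicas.
    String select = withKeyspace("SELECT * FROM %s.t WHERE s = 1 AND v = 2");
    cluster.coordinator(1).execute(select, ConsistencyLevel.ALL);
}
{noformat}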
> An SAI-specific mechanism to ensure consistency isn't violated for multi-column (i.e. AND) queries at CL > ONE
> --------------------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-19018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19018
> Project: Cassandra
> Issue Type: Bug
> Components: Consistency/Coordination, Feature/SAI
> Reporter: Caleb Rackliffe
> Assignee: Caleb Rackliffe
> Priority: Normal
> Fix For: 5.0-rc, 5.x
>
> Attachments: ci_summary-1.html, ci_summary.html,
> result_details.tar-1.gz, result_details.tar.gz
>
> Time Spent: 9h 20m
> Remaining Estimate: 0h
>
> CASSANDRA-19007 is going to be where we add a guardrail around
> filtering/index queries that use intersection/AND over partially updated
> non-key columns. (ex. Restricting one clustering column and one normal column
> does not cause a consistency problem, as primary keys cannot be partially
> updated.) This issue exists to attempt to fix this specifically for SAI in
> 5.0.x, as Accord will (last I checked) not be available until the 5.1 release.
> The SAI-specific version of the originally reported issue is this:
> {noformat}
> try (Cluster cluster = init(Cluster.build(2).withConfig(config -> config.with(GOSSIP).with(NETWORK)).start()))
> {
>     cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY KEY, a int, b int)"));
>     cluster.schemaChange(withKeyspace("CREATE INDEX ON %s.t(a) USING 'sai'"));
>     cluster.schemaChange(withKeyspace("CREATE INDEX ON %s.t(b) USING 'sai'"));
>     // insert a split row
>     cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) VALUES (0, 1)"));
>     cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) VALUES (0, 2)"));
>     // Uncomment this line and test succeeds w/ partial writes completed...
>     //cluster.get(1).nodetoolResult("repair", KEYSPACE).asserts().success();
>     String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 2");
>     Object[][] initialRows = cluster.coordinator(1).execute(select, ConsistencyLevel.ALL);
>     assertRows(initialRows, row(0, 1, 2)); // not found!!
> }
> {noformat}
> To make a long story short, the local SAI indexes are hiding local partial
> matches from the coordinator that would combine there to form full matches.
> Simple non-index filtering queries also suffer from this problem, but they
> hide the partial matches in a different way. I'll outline a possible solution
> for this in the comments that takes advantage of replica filtering protection
> and the repaired/unrepaired datasets...and attempts to minimize the amount of
> extra row data sent to the coordinator.