[
https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504981#comment-14504981
]
Sylvain Lebresne commented on CASSANDRA-7168:
---------------------------------------------
bq. But in the meantime I do think we should drop digest reads.
I'm not necessarily against that, though I agree with Aleksey that it's worth
doing serious benchmarking before affirming that it "really doesn't buy us
much" since it's been there pretty much forever.
bq. Do we have any spare resources to do the testing prior to 3.0 release?
I know we're always impatient to remove stuff, but since this particular ticket
won't make 3.0, I would suggest leaving digests for now and saving whatever
benchmarking resources we have for 3.0. Just an opinion though.
> Add repair aware consistency levels
> -----------------------------------
>
> Key: CASSANDRA-7168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: T Jake Luciani
> Labels: performance
> Fix For: 3.1
>
>
> With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to
> avoid a lot of extra disk I/O when running queries with higher consistency
> levels.
> Since repaired data is by definition consistent and we know which sstables
> are repaired, we can optimize the read path by having a REPAIRED_QUORUM which
> breaks reads into two phases:
>
> 1) Read the result from the repaired sstables from a single replica.
> 2) Read only the un-repaired data from a quorum of replicas.
> For the node performing 1) we can pipeline the call so it's a single hop.
> In the long run (assuming data is repaired regularly) we will end up with
> much closer to CL.ONE performance while maintaining consistency.
> Some things to figure out:
> - If repairs fail on some nodes we can have a situation where we don't have
> a consistent repaired state across the replicas.
>
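A minimal sketch of the two-phase REPAIRED_QUORUM read described above. The
Replica/Cell structures and the read() helper are illustrative stand-ins, not
Cassandra internals; the point is phase 1 (one replica answers from repaired
sstables, which are consistent by definition) plus phase 2 (a quorum answers
from un-repaired sstables only), reconciled by latest timestamp:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    value: str
    timestamp: int

class Replica:
    """Toy replica holding cells split by repaired status (hypothetical)."""
    def __init__(self):
        self.repaired = {}    # key -> Cell, data from repaired sstables
        self.unrepaired = {}  # key -> Cell, data from un-repaired sstables

    def read(self, key, repaired_only: bool) -> Optional[Cell]:
        store = self.repaired if repaired_only else self.unrepaired
        return store.get(key)

def repaired_quorum_read(key, replicas, quorum) -> Optional[Cell]:
    # Phase 1: ask a single replica for the repaired portion of the data;
    # repaired data is consistent across replicas, so one answer suffices.
    repaired = replicas[0].read(key, repaired_only=True)
    # Phase 2: ask a quorum for only the un-repaired portion, which after
    # regular repairs should be a small fraction of the total data.
    unrepaired = [r.read(key, repaired_only=False) for r in replicas[:quorum]]
    # Reconcile as in a normal read: the latest write (highest timestamp) wins.
    candidates = [c for c in [repaired, *unrepaired] if c is not None]
    return max(candidates, key=lambda c: c.timestamp, default=None)
```

For example, if all three replicas hold a repaired cell written at t=1 and one
quorum member holds a newer un-repaired cell at t=5, the newer cell wins even
though only one node served the repaired data.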
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)