[ https://issues.apache.org/jira/browse/CASSANDRA-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andres de la Peña updated CASSANDRA-15752:
------------------------------------------
Fix Version/s: (was: 4.0-beta)
               (was: 3.11.x)
               (was: 3.0.x)
               4.0-alpha5
               3.11.7
               3.0.21
Since Version: 2.1 beta1
Source Control Link: https://github.com/apache/cassandra/commit/abdf5085d4381351054bc2c0976bc826f4ac82e2
                     (was: trunk: https://github.com/apache/cassandra/pull/606)
Resolution: Fixed
Status: Resolved (was: Ready to Commit)
> Range read concurrency factor didn't consider range merger
> ----------------------------------------------------------
>
> Key: CASSANDRA-15752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15752
> Project: Cassandra
> Issue Type: Bug
> Components: Legacy/Coordination
> Reporter: ZhaoYang
> Assignee: ZhaoYang
> Priority: Normal
> Fix For: 3.0.21, 3.11.7, 4.0-alpha5
>
>
> During a range read, the coordinator computes a concurrency factor, which is
> the number of vnode ranges to contact in parallel for the next batch.
> But in {{RangeCommandIterator}}, vnode ranges are merged by {{RangeMerger}}
> when they share enough replicas to satisfy the consistency level. E.g. vnode
> range [a,b) has replicas n1, n2, n3 and vnode range [b,c) has replicas n2,
> n3, n4, so for QUORUM they can be merged into range [a,c) with replicas n2,
> n3. Currently the iterator counts the number of merged ranges towards the
> concurrency factor, so the coordinator may fetch more vnode ranges than
> needed.
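>
> A minimal sketch of the fix (the names {{MergedRange}}, {{vnodeRangeCount}}
> and {{nextBatch}} below are illustrative assumptions, not the actual
> {{RangeCommandIterator}} code): when filling a batch, charge each merged
> range for every vnode range it covers.
> {code:java}
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.List;
>
> // Hypothetical sketch, not the committed code.
> class RangeBatchSketch
> {
>     // Illustrative stand-in for a range produced by RangeMerger.
>     static class MergedRange
>     {
>         final int vnodeRangeCount; // how many vnode ranges were merged into this one
>         MergedRange(int vnodeRangeCount) { this.vnodeRangeCount = vnodeRangeCount; }
>     }
>
>     // Fill the next batch until 'concurrencyFactor' vnode ranges are covered.
>     static List<MergedRange> nextBatch(Iterator<MergedRange> mergedRanges, int concurrencyFactor)
>     {
>         List<MergedRange> batch = new ArrayList<>();
>         int rangesQueried = 0;
>         while (rangesQueried < concurrencyFactor && mergedRanges.hasNext())
>         {
>             MergedRange merged = mergedRanges.next();
>             batch.add(merged);
>             // The bug was counting 1 per merged range here. A merged range
>             // can span several vnode ranges, so all of them must be counted,
>             // otherwise the coordinator fetches more ranges than intended.
>             rangesQueried += merged.vnodeRangeCount;
>         }
>         return batch;
>     }
> }
> {code}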
> ----
> Another issue is that, when executing a range read on a table with a very
> small amount of data, the concurrency factor can be bumped up to the total
> number of vnode ranges, e.g. 10k, depending on the number of vnodes and the
> cluster size. As a result, the coordinator will send a large number of
> concurrent range requests, potentially slowing down the cluster. We should
> cap the maximum concurrency factor.
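>
> A minimal sketch of such a cap (the bound of 32, the method shape and all
> names below are illustrative assumptions, not the values chosen by the
> patch):
> {code:java}
> // Hypothetical sketch of capping the adaptive concurrency factor.
> class ConcurrencyFactorSketch
> {
>     static final int MAX_CONCURRENCY_FACTOR = 32; // illustrative bound
>
>     static int nextConcurrencyFactor(int totalRangeCount, int rangesQueried,
>                                      int liveReturned, int remainingRows)
>     {
>         if (liveReturned == 0)
>             // No rows matched so far: without a cap this falls back to
>             // querying all remaining vnode ranges at once (e.g. 10k).
>             return Math.min(totalRangeCount - rangesQueried, MAX_CONCURRENCY_FACTOR);
>
>         // Estimate how many more ranges are needed from the observed row density.
>         double rowsPerRange = (double) liveReturned / rangesQueried;
>         int needed = Math.max(1, (int) Math.ceil(remainingRows / rowsPerRange));
>         return Math.min(needed, MAX_CONCURRENCY_FACTOR);
>     }
> }
> {code}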