[ https://issues.apache.org/jira/browse/SOLR-16932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated SOLR-16932:
----------------------------------
Labels: pull-request-available (was: )
> Http2SolrClient should have configurable `maxOutstandingRequests`, to support
> parallel requests in high-shard-count contexts
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: SOLR-16932
> URL: https://issues.apache.org/jira/browse/SOLR-16932
> Project: Solr
> Issue Type: Improvement
> Affects Versions: main (10.0), 9.3
> Reporter: Michael Gibney
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h
> Remaining Estimate: 0h
>
> Http2SolrClient is asynchronous, but it only allows a
> [hardcoded|https://github.com/apache/solr/blob/88990d640a89091a8f7b0b2493377ac24118afe8/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java#L964]
> maximum (1000) of outstanding requests. Under sufficient load,
> intra-cluster communication is therefore not fully concurrent/asynchronous,
> and the top-level coordinator node can become a bottleneck. This is
> especially problematic for high-shard-count collections (>1k shards), where
> a single top-level request easily generates enough load to hit this
> throttle: shard requests beyond the first 1000 cannot even be dispatched
> until earlier ones complete, so the query waits for two sequential waves of
> shard responses, nearly doubling top-level (client-side) request latency.
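>
> For reference, a minimal sketch of the throttling pattern in question (an
> illustration of the mechanism only, not the actual Http2SolrClient code): a
> fixed-size semaphore caps in-flight requests, so once 1000 are outstanding
> the coordinator blocks instead of continuing to fan out.
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.Semaphore;
> import java.util.function.Supplier;
>
> public class OutstandingRequestThrottle {
>   private static final int MAX_OUTSTANDING_REQUESTS = 1000; // the hardcoded cap
>
>   private final Semaphore available = new Semaphore(MAX_OUTSTANDING_REQUESTS, false);
>
>   /** Blocks until a permit is free, then starts the async request and releases it on completion. */
>   public <T> CompletableFuture<T> submit(Supplier<CompletableFuture<T>> request)
>       throws InterruptedException {
>     available.acquire(); // the 1001st caller waits here, serializing what should be parallel fan-out
>     return request.get().whenComplete((result, err) -> available.release());
>   }
> }
> {code}
>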
> It should be possible to configure this {{maxOutstandingRequests}}
> threshold via the HttpShardHandlerFactory config. If I understand the
> implications of this limit correctly, it should be reasonable to scale it
> roughly with the number of nodes in the cluster (1k outstanding requests to
> 2 nodes is a very different situation than 1k outstanding requests to 128
> nodes).
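>
> Purely to illustrate the scaling idea above, the cap could be derived from a
> per-node budget (the helper and its parameter names below are hypothetical,
> not a proposed API):
> {code:java}
> // Hypothetical sketch: derive the global outstanding-request cap from a
> // per-node budget, so larger clusters get proportionally more headroom.
> class OutstandingRequestLimits {
>   static int computeMaxOutstandingRequests(int perNodeBudget, int liveNodeCount, int floor) {
>     // e.g. perNodeBudget=500: 2 nodes -> 1000 (today's hardcoded value), 128 nodes -> 64000
>     return Math.max(floor, perNodeBudget * liveNodeCount);
>   }
> }
> {code}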