jihoonson commented on a change in pull request #6629: Add support parallel
combine in brokers
URL: https://github.com/apache/incubator-druid/pull/6629#discussion_r240882331
##########
File path:
server/src/main/java/org/apache/druid/client/CachingClusteredClient.java
##########
@@ -285,12 +301,72 @@ public CachingClusteredClient(
       List<Sequence<T>> sequencesByInterval = new ArrayList<>(alreadyCachedResults.size() + segmentsByServer.size());
       addSequencesFromCache(sequencesByInterval, alreadyCachedResults);
       addSequencesFromServer(sequencesByInterval, segmentsByServer);
-      return Sequences
-          .simple(sequencesByInterval)
-          .flatMerge(seq -> seq, query.getResultOrdering());
+      return merge(sequencesByInterval);
     });
   }
+
+  private Sequence<T> merge(List<Sequence<T>> sequencesByInterval)
+  {
+    final int numParallelCombineThreads = QueryContexts.getNumBrokerParallelCombineThreads(query);
+
+    if (numParallelCombineThreads > 0) {
+      final ReserveResult reserveResult = processingThreadResourcePool.reserve(query, numParallelCombineThreads);
+      if (!reserveResult.isOk()) {
+        throw new ISE(
Review comment:
The idea behind this is that Druid currently can't make the optimal resource-planning decision by itself. (Optimal automatic resource allocation would mean, at a minimum, that Druid could allocate the proper resources based on a query's priority and how heavy its workload is.) So, users should be careful when allocating resources. If there aren't enough resources for a particular query, it means the user's resource planning failed, and the query should fail.
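The fail-fast policy described above can be sketched as follows. This is an illustrative example only, not Druid's actual implementation: the class and method names (`ProcessingThreadPool`, `reserve`, `release`) are hypothetical stand-ins for the pool and `ReserveResult` shown in the diff, and a `Semaphore` stands in for the real resource accounting. The point is that a reservation either succeeds in full or the query fails immediately, rather than blocking or degrading silently.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of the fail-fast reservation policy from the review comment.
// Names here are illustrative; they are not Druid's actual API.
final class ProcessingThreadPool
{
  private final Semaphore permits;

  ProcessingThreadPool(int capacity)
  {
    this.permits = new Semaphore(capacity);
  }

  /** Try to reserve all requested threads at once; never block or partially succeed. */
  boolean reserve(int numThreads)
  {
    return permits.tryAcquire(numThreads);
  }

  void release(int numThreads)
  {
    permits.release(numThreads);
  }
}

public class FailFastReserveDemo
{
  public static void main(String[] args)
  {
    final ProcessingThreadPool pool = new ProcessingThreadPool(4);

    // First query reserves 3 of the 4 available threads: succeeds.
    if (!pool.reserve(3)) {
      throw new IllegalStateException("first reservation unexpectedly failed");
    }

    // A second query asks for 3 more, but only 1 permit remains. Under the
    // fail-fast policy, the reservation is refused outright; the caller would
    // throw (as with ISE in the diff) because insufficient resources mean the
    // user's resource planning failed for this query.
    final boolean ok = pool.reserve(3);
    if (ok) {
      throw new IllegalStateException("second reservation should have failed");
    }
    System.out.println("second reservation ok=" + ok);
  }
}
```

Reserving all threads atomically (rather than one at a time) avoids a deadlock where two concurrent queries each hold part of the pool and neither can finish acquiring.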
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]