Sxnan commented on code in PR #19653:
URL: https://github.com/apache/flink/pull/19653#discussion_r893005274
##########
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/TaskDeploymentDescriptorFactory.java:
##########
@@ -244,7 +280,41 @@ public static TaskDeploymentDescriptorFactory fromExecutionVertex(
                internalExecutionGraphAccessor.getPartitionLocationConstraint(),
                executionVertex.getAllConsumedPartitionGroups(),
                internalExecutionGraphAccessor::getResultPartitionOrThrow,
-               internalExecutionGraphAccessor.getBlobWriter());
+               internalExecutionGraphAccessor.getBlobWriter(),
+               clusterPartitionShuffleDescriptors);
+    }
+
+    private static Map<IntermediateDataSetID, ShuffleDescriptor[]>
+            getClusterPartitionShuffleDescriptors(ExecutionVertex executionVertex) {
+        final InternalExecutionGraphAccessor internalExecutionGraphAccessor =
+                executionVertex.getExecutionGraphAccessor();
+        final List<IntermediateDataSetID> consumedClusterDataSetIds =
+                executionVertex.getJobVertex().getJobVertex().getIntermediateDataSetIdsToConsume();
+        Map<IntermediateDataSetID, ShuffleDescriptor[]> clusterPartitionShuffleDescriptors =
+                new HashMap<>();
+
+        for (IntermediateDataSetID consumedClusterDataSetId : consumedClusterDataSetIds) {
+            List<? extends ShuffleDescriptor> shuffleDescriptors =
+                    internalExecutionGraphAccessor.getClusterPartitionShuffleDescriptors(
Review Comment:
Considering that one JobVertex doesn't consume too many IntermediateDataSets, and that
we cache the shuffle descriptors in the JobMasterPartitionTracker after the first query
to the Resource Manager, I think this optimization is not really necessary. We can add
it later if we find that it is needed.
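
For context, a minimal sketch of the cache-on-first-query behaviour this reasoning relies on.
The class and member names here (DescriptorLookupCache, getOrFetch, remoteLookup) are
hypothetical and only illustrate the pattern; they are not Flink's actual
JobMasterPartitionTracker API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Illustrative sketch: descriptors are fetched from the remote source only on the
 * first lookup per data set ID and served from the local cache afterwards.
 */
public final class DescriptorLookupCache<K, V> {

    private final Map<K, List<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, List<V>> remoteLookup;

    public DescriptorLookupCache(Function<K, List<V>> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    /** Queries the remote source on the first call per key; later calls hit the cache. */
    public List<V> getOrFetch(K dataSetId) {
        return cache.computeIfAbsent(dataSetId, remoteLookup);
    }
}
```

With a cache like this in front of the Resource Manager call, fetching descriptors once per
consumed IntermediateDataSetID is cheap after the first query, which is why the extra
per-vertex optimization can be deferred.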