zhuzhurk commented on code in PR #21199:
URL: https://github.com/apache/flink/pull/21199#discussion_r1042889689


##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/InputConsumableDecider.java:
##########
@@ -33,14 +33,14 @@ public interface InputConsumableDecider {
      * Determining weather the input of an execution vertex is consumable.
      *
      * @param executionVertexID identifier of this {@link ExecutionVertex}.
-     * @param verticesToDeploy all vertices to deploy during the current scheduling process. This
-     *     set will be used to know whether an execution vertex has been decided to scheduled.
+     * @param verticesToSchedule vertices that are not yet scheduled by already decided to be

Review Comment:
   by -> but



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/InputConsumableDecider.java:
##########
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * {@link InputConsumableDecider} is responsible for determining weather the input of an

Review Comment:
   weather -> whether
   
   There are a few other occurrences.



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/InputConsumableDecider.java:
##########
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * {@link InputConsumableDecider} is responsible for determining weather the input of an
+ * executionVertex is consumable.
+ */
+public interface InputConsumableDecider {
+    /**
+     * Determining weather the input of an execution vertex is consumable.
+     *
+     * @param executionVertexID identifier of this {@link ExecutionVertex}.
+     * @param verticesToDeploy all vertices to deploy during the current scheduling process. This
+     *     set will be used to know whether an execution vertex has been decided to scheduled.
+     * @param consumableStatusCache a cache for {@link ConsumedPartitionGroup} consumable status.
+     *     This will be used to reduce double computing.
+     */
+    boolean isInputConsumable(
+            ExecutionVertexID executionVertexID,

Review Comment:
   executionVertexID -> executionVertexId



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java:
##########
@@ -123,12 +132,15 @@ private void maybeScheduleVertices(final Set<ExecutionVertexID> vertices) {
                                     SchedulingExecutionVertex vertex =
                                             schedulingTopology.getVertex(vertexId);
                                     checkState(vertex.getState() == ExecutionState.CREATED);
-                                    return areVertexInputsAllConsumable(
-                                            vertex, consumableStatusCache);
+                                    return inputConsumableDecider.isInputConsumable(
+                                            vertexId,

Review Comment:
   It's better to pass in a `SchedulingExecutionVertex` so that we do not need to find the vertex by the id multiple times.
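   A minimal sketch of this suggestion, using hypothetical stand-in types (`Vertex`, `Topology`) rather than the real Flink classes: the caller has already resolved the vertex for the `checkState` call, so handing the object to the decider avoids a second topology lookup.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Hypothetical stand-ins for the Flink types, for illustration only.
   class Vertex {
       final String id;
       int lookups = 0; // counts how often this vertex was fetched from the topology
       Vertex(String id) { this.id = id; }
   }

   class Topology {
       private final Map<String, Vertex> vertices = new HashMap<>();
       void add(Vertex v) { vertices.put(v.id, v); }
       Vertex getVertex(String id) {
           Vertex v = vertices.get(id);
           v.lookups++;
           return v;
       }
   }

   public class PassVertexNotId {
       // Before: the decider receives only the id and must look the vertex up again.
       static boolean isInputConsumableById(Topology topology, String vertexId) {
           Vertex vertex = topology.getVertex(vertexId); // second lookup
           return vertex != null;
       }

       // After: the caller resolves the vertex once and passes the object through.
       static boolean isInputConsumable(Vertex vertex) {
           return vertex != null;
       }

       public static void main(String[] args) {
           Topology topology = new Topology();
           topology.add(new Vertex("v1"));

           // The caller already needs the vertex for its state check, so reuse it.
           Vertex vertex = topology.getVertex("v1");
           System.out.println(isInputConsumable(vertex) + " lookups=" + vertex.lookups);
       }
   }
   ```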



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java:
##########
@@ -125,24 +122,64 @@ private void maybeScheduleVertices(final Set<ExecutionVertexID> vertices) {
             newVertices.clear();
         }
 
-        final Set<ExecutionVertexID> verticesToDeploy =
-                allCandidates.stream()
-                        .filter(
-                                vertexId -> {
-                                    SchedulingExecutionVertex vertex =
-                                            schedulingTopology.getVertex(vertexId);
-                                    checkState(vertex.getState() == ExecutionState.CREATED);
-                                    return inputConsumableDecider.isInputConsumable(
-                                            vertexId,
-                                            Collections.emptySet(),
-                                            consumableStatusCache);
-                                })
-                        .collect(Collectors.toSet());
+        final Set<ExecutionVertexID> verticesToDeploy = new HashSet<>();
+
+        Set<ExecutionVertexID> nextVertices = allCandidates;
+        while (!nextVertices.isEmpty()) {
+            nextVertices = addToDeployAndGetVertices(nextVertices, verticesToDeploy);
+        }
 
         scheduleVerticesOneByOne(verticesToDeploy);
         scheduledVertices.addAll(verticesToDeploy);
     }
 
+    private Set<ExecutionVertexID> addToDeployAndGetVertices(
+            Set<ExecutionVertexID> currentVertices, Set<ExecutionVertexID> verticesToDeploy) {
+        Set<ExecutionVertexID> nextVertices = new HashSet<>();
+        // cache consumedPartitionGroup's consumable status to avoid compute repeatedly.
+        final Map<ConsumedPartitionGroup, Boolean> consumableStatusCache = new HashMap<>();

Review Comment:
   I think it's more efficient to reuse the cache across different `addToDeployAndGetVertices` calls. The same applies to `visitedConsumerVertexGroup`.
   
   e.g. A->B, B->C, A->C, all edges are hybrid, and B & C consume the same result R. It seems that currently C will be visited twice, and the consumable state of R will be computed twice.
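   A self-contained sketch of the point, under the assumption that the cache is keyed by the result being checked (names here are illustrative, not the Flink API): hoisting one memoization map out of the per-round calls means a status is computed once per scheduling pass, not once per round.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class ReuseCacheAcrossRounds {
       static int computations = 0;

       // Expensive status check, memoized via the caller-supplied cache.
       static boolean isConsumable(String result, Map<String, Boolean> cache) {
           return cache.computeIfAbsent(result, r -> {
               computations++; // count how often the real computation runs
               return true;
           });
       }

       public static void main(String[] args) {
           // One cache for the whole scheduling pass, shared by every round,
           // instead of a fresh cache inside each addToDeployAndGetVertices call.
           Map<String, Boolean> consumableStatusCache = new HashMap<>();

           // Two rounds both ask about the same result "R" (e.g. B and C both consume R).
           for (int round = 0; round < 2; round++) {
               isConsumable("R", consumableStatusCache);
           }
           System.out.println("computed " + computations + " time(s)"); // computed 1 time(s)
       }
   }
   ```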



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java:
##########
@@ -125,24 +124,64 @@ private void maybeScheduleVertices(final Set<ExecutionVertexID> vertices) {
             newVertices.clear();
         }
 
-        final Set<ExecutionVertexID> verticesToDeploy =
-                allCandidates.stream()
-                        .filter(
-                                vertexId -> {
-                                    SchedulingExecutionVertex vertex =
-                                            schedulingTopology.getVertex(vertexId);
-                                    checkState(vertex.getState() == ExecutionState.CREATED);
-                                    return inputConsumableDecider.isInputConsumable(
-                                            vertexId,
-                                            Collections.emptySet(),
-                                            consumableStatusCache);
-                                })
-                        .collect(Collectors.toSet());
+        final Set<ExecutionVertexID> verticesToDeploy = new HashSet<>();
+
+        Set<ExecutionVertexID> nextVertices = allCandidates;
+        while (!nextVertices.isEmpty()) {
+            nextVertices = addToDeployAndGetVertices(nextVertices, verticesToDeploy);
+        }
 
         scheduleVerticesOneByOne(verticesToDeploy);
         scheduledVertices.addAll(verticesToDeploy);
     }
 
+    private Set<ExecutionVertexID> addToDeployAndGetVertices(
+            Set<ExecutionVertexID> currentVertices, Set<ExecutionVertexID> verticesToDeploy) {
+        Set<ExecutionVertexID> nextVertices = new HashSet<>();
+        // cache consumedPartitionGroup's consumable status to avoid compute repeatedly.
+        final Map<ConsumedPartitionGroup, Boolean> consumableStatusCache = new HashMap<>();
+        final Set<ConsumerVertexGroup> visitedConsumerVertexGroup = new HashSet<>();

Review Comment:
   I prefer to use `IdentityHashMap` and `Collections.newSetFromMap(new IdentityHashMap<>())`, because we never expect the same `ConsumedPartitionGroup` to be created multiple times, and directly comparing the references is more efficient.
   
   I also noticed that the current implementation of their `equals()`/`hashCode()` is a bit tricky, so it's better to just avoid relying on that.
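   A short, runnable illustration of the difference (the `Group` class is a stand-in with a value-based `equals()`, mimicking a type whose `equals()`/`hashCode()` we would rather not depend on): identity-based collections treat two equal-by-value instances as distinct, and lookups reduce to a reference comparison.

   ```java
   import java.util.Collections;
   import java.util.IdentityHashMap;
   import java.util.Map;
   import java.util.Set;

   public class IdentitySemantics {
       // Stand-in type with value-based equals(), like a tricky equals()/hashCode().
       static final class Group {
           final String name;
           Group(String name) { this.name = name; }
           @Override public boolean equals(Object o) {
               return o instanceof Group && ((Group) o).name.equals(name);
           }
           @Override public int hashCode() { return name.hashCode(); }
       }

       public static void main(String[] args) {
           Group g1 = new Group("R");
           Group g2 = new Group("R"); // equal by value, but a distinct instance

           // Identity-based set: membership means "this exact instance was seen".
           Set<Group> visited = Collections.newSetFromMap(new IdentityHashMap<>());
           visited.add(g1);
           System.out.println(visited.contains(g1)); // true
           System.out.println(visited.contains(g2)); // false: different instance

           // Identity-based cache keyed by reference, bypassing equals()/hashCode().
           Map<Group, Boolean> cache = new IdentityHashMap<>();
           cache.put(g1, true);
           System.out.println(cache.containsKey(g2)); // false
       }
   }
   ```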



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/DefaultInputConsumableDecider.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * Default implementation of {@link InputConsumableDecider}. This decider will judge whether the
+ * executionVertex's inputs are consumable as follows:
+ *
+ * <p>For blocking consumed partition group: Whether all result partitions in the group are
+ * finished.
+ *
+ * <p>For hybrid consumed partition group: whether all result partitions in the group are scheduled.
+ */
+public class DefaultInputConsumableDecider implements InputConsumableDecider {
+    private final Function<ExecutionVertexID, SchedulingExecutionVertex> executionVertexRetriever;
+
+    private final Function<IntermediateResultPartitionID, SchedulingResultPartition>
+            resultPartitionRetriever;
+
+    private final Function<ExecutionVertexID, Boolean> scheduledVertexRetriever;
+
+    public DefaultInputConsumableDecider(

Review Comment:
   can be package private



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/DefaultInputConsumableDecider.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * Default implementation of {@link InputConsumableDecider}. This decider will judge whether the
+ * executionVertex's inputs are consumable as follows:
+ *
+ * <p>For blocking consumed partition group: Whether all result partitions in the group are
+ * finished.
+ *
+ * <p>For hybrid consumed partition group: whether all result partitions in the group are scheduled.
+ */
+public class DefaultInputConsumableDecider implements InputConsumableDecider {
+    private final Function<ExecutionVertexID, SchedulingExecutionVertex> executionVertexRetriever;
+
+    private final Function<IntermediateResultPartitionID, SchedulingResultPartition>
+            resultPartitionRetriever;
+
+    private final Function<ExecutionVertexID, Boolean> scheduledVertexRetriever;
+
+    public DefaultInputConsumableDecider(
+            Function<ExecutionVertexID, Boolean> scheduledVertexRetriever,
+            Function<ExecutionVertexID, SchedulingExecutionVertex> executionVertexRetriever,
+            Function<IntermediateResultPartitionID, SchedulingResultPartition>
+                    resultPartitionRetriever) {
+        this.scheduledVertexRetriever = scheduledVertexRetriever;
+        this.executionVertexRetriever = executionVertexRetriever;
+        this.resultPartitionRetriever = resultPartitionRetriever;
+    }
+
+    @Override
+    public boolean isInputConsumable(
+            ExecutionVertexID executionVertexID,

Review Comment:
   executionVertexID -> executionVertexId



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/DefaultInputConsumableDecider.java:
##########
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * Default implementation of {@link InputConsumableDecider}. This decider will judge whether the
+ * executionVertex's inputs are consumable as follows:
+ *
+ * <p>For blocking consumed partition group: Whether all result partitions in the group are
+ * finished.
+ *
+ * <p>For hybrid consumed partition group: whether all result partitions in the group are scheduled.
+ */
+public class DefaultInputConsumableDecider implements InputConsumableDecider {

Review Comment:
   can be package private



##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/VertexwiseSchedulingStrategy.java:
##########
@@ -125,24 +122,64 @@ private void maybeScheduleVertices(final Set<ExecutionVertexID> vertices) {
             newVertices.clear();
         }
 
-        final Set<ExecutionVertexID> verticesToDeploy =
-                allCandidates.stream()
-                        .filter(
-                                vertexId -> {
-                                    SchedulingExecutionVertex vertex =
-                                            schedulingTopology.getVertex(vertexId);
-                                    checkState(vertex.getState() == ExecutionState.CREATED);
-                                    return inputConsumableDecider.isInputConsumable(
-                                            vertexId,
-                                            Collections.emptySet(),
-                                            consumableStatusCache);
-                                })
-                        .collect(Collectors.toSet());
+        final Set<ExecutionVertexID> verticesToDeploy = new HashSet<>();
+
+        Set<ExecutionVertexID> nextVertices = allCandidates;
+        while (!nextVertices.isEmpty()) {
+            nextVertices = addToDeployAndGetVertices(nextVertices, verticesToDeploy);
+        }
 
         scheduleVerticesOneByOne(verticesToDeploy);
         scheduledVertices.addAll(verticesToDeploy);
     }
 
+    private Set<ExecutionVertexID> addToDeployAndGetVertices(
+            Set<ExecutionVertexID> currentVertices, Set<ExecutionVertexID> verticesToDeploy) {
+        Set<ExecutionVertexID> nextVertices = new HashSet<>();
+        // cache consumedPartitionGroup's consumable status to avoid compute repeatedly.
+        final Map<ConsumedPartitionGroup, Boolean> consumableStatusCache = new HashMap<>();
+        final Set<ConsumerVertexGroup> visitedConsumerVertexGroup = new HashSet<>();
+
+        for (ExecutionVertexID currentVertex : currentVertices) {
+            if (isVertexSchedulable(currentVertex, consumableStatusCache, verticesToDeploy)) {
+                verticesToDeploy.add(currentVertex);
+                Set<ConsumerVertexGroup> canBePipelinedConsumerVertexGroups =
+                        IterableUtils.toStream(
+                                        schedulingTopology
+                                                .getVertex(currentVertex)
+                                                .getProducedResults())
+                                .map(SchedulingResultPartition::getConsumerVertexGroups)
+                                .flatMap(Collection::stream)
+                                .filter(
+                                        (consumerVertexGroup) ->
+                                                consumerVertexGroup
+                                                        .getResultPartitionType()
+                                                        .canBePipelinedConsumed())
+                                .collect(Collectors.toSet());
+                for (ConsumerVertexGroup consumerVertexGroup : canBePipelinedConsumerVertexGroups) {
+                    if (!visitedConsumerVertexGroup.contains(consumerVertexGroup)) {
+                        visitedConsumerVertexGroup.add(consumerVertexGroup);
+                        nextVertices.addAll(
+                                canBePipelinedConsumerVertexGroups.stream()

Review Comment:
   I guess here we should only add vertices of the `consumerVertexGroup`?
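   A self-contained sketch of the suggested fix, with plain lists standing in for the Flink `ConsumerVertexGroup` type: inside the loop, only the current `consumerVertexGroup`'s vertices are added, instead of streaming over all of `canBePipelinedConsumerVertexGroups` again on every iteration.

   ```java
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.HashSet;
   import java.util.IdentityHashMap;
   import java.util.List;
   import java.util.Set;

   public class AddOnlyCurrentGroup {
       static Set<String> collectNextVertices(List<List<String>> canBePipelinedConsumerVertexGroups) {
           // Track groups by identity, as suggested elsewhere in this review.
           Set<List<String>> visited = Collections.newSetFromMap(new IdentityHashMap<>());
           Set<String> nextVertices = new HashSet<>();
           for (List<String> consumerVertexGroup : canBePipelinedConsumerVertexGroups) {
               if (visited.add(consumerVertexGroup)) {
                   // Add only this group's vertices, not every group's vertices.
                   nextVertices.addAll(consumerVertexGroup);
               }
           }
           return nextVertices;
       }

       public static void main(String[] args) {
           // Hypothetical consumer vertex groups.
           List<String> groupB = Arrays.asList("b1", "b2");
           List<String> groupC = Arrays.asList("c1");
           Set<String> next = collectNextVertices(Arrays.asList(groupB, groupC));
           System.out.println(next.size()); // 3
       }
   }
   ```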


