xintongsong commented on code in PR #21560:
URL: https://github.com/apache/flink/pull/21560#discussion_r1058108777
##########
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/InternalExecutionGraphAccessor.java:
##########
@@ -122,4 +122,6 @@ List<ShuffleDescriptor> getClusterPartitionShuffleDescriptors(
IntermediateDataSetID intermediateResultPartition);
MarkPartitionFinishedStrategy getMarkPartitionFinishedStrategy();
+
+ boolean isHybridEnableConsumePartialFinishedProducer();
Review Comment:
I'm not entirely sure about introducing a hybrid-specific interface in the
execution graph. I haven't thought of an alternative way, though.
##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/PartialFinishedInputConsumableDecider.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler.strategy;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+
+/**
+ * {@link PartialFinishedInputConsumableDecider} is a special {@link InputConsumableDecider}. The
+ * input is considered to be consumable:
+ *
+ * <ul>
+ * <li>for hybrid input: when partial producer partitions are finished.
+ * <li>for blocking input: when all producer partitions are finished.
+ * </ul>
+ */
+public class PartialFinishedInputConsumableDecider implements InputConsumableDecider {
+ public static final int NUM_FINISHED_PARTITIONS_AS_CONSUMABLE = 1;
+
+ @Override
+ public boolean isInputConsumable(
+ SchedulingExecutionVertex executionVertex,
+ Set<ExecutionVertexID> verticesToDeploy,
+ Map<ConsumedPartitionGroup, Boolean> consumableStatusCache) {
+ for (ConsumedPartitionGroup consumedPartitionGroup :
+ executionVertex.getConsumedPartitionGroups()) {
+
+ if (!consumableStatusCache.computeIfAbsent(
+ consumedPartitionGroup, this::isConsumedPartitionGroupConsumable)) {
+ return false;
+ }
Review Comment:
When there are multiple hybrid partition groups, shall we require all groups
to have at least one finished partition, or just one finished partition from
any of the groups?
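For illustration, the two semantics in question can be sketched with simplified stand-in types (the real code operates on `ConsumedPartitionGroup`s; the nested boolean lists here are hypothetical stand-ins for per-partition finished flags):

```java
import java.util.List;

public class ConsumableSemantics {
    // "ALL groups" semantics: every consumed partition group must contain
    // at least one finished partition before the input is consumable.
    static boolean allGroupsHaveAFinishedPartition(List<List<Boolean>> groups) {
        for (List<Boolean> group : groups) {
            if (!group.contains(Boolean.TRUE)) {
                return false; // this group has no finished producer yet
            }
        }
        return true;
    }

    // "ANY group" semantics: one finished partition in any group suffices.
    static boolean anyGroupHasAFinishedPartition(List<List<Boolean>> groups) {
        for (List<Boolean> group : groups) {
            if (group.contains(Boolean.TRUE)) {
                return true;
            }
        }
        return false;
    }
}
```

Note that the loop in the diff above short-circuits to `false` on the first non-consumable group, i.e. as written it implements the "all groups" variant.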
##########
flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java:
##########
@@ -670,6 +670,24 @@ public enum SchedulerType {
code(SPECULATIVE_ENABLED.key()))
.build());
+ @Documentation.Section({
+ Documentation.Sections.EXPERT_SCHEDULING,
+ Documentation.Sections.ALL_JOB_MANAGER
+ })
+ public static final ConfigOption<Boolean> CONSUME_PARTIAL_FINISHED_PRODUCER_ENABLED =
Review Comment:
The two configs (`ONLY_CONSUME_FINISHED_PARTITION` and
`CONSUME_PARTIAL_FINISHED_PRODUCER_ENABLED`) are quite alike, which may confuse
users.
- It's hard to understand the differences between them.
- There could be conflicts, e.g., allowing consumption of unfinished
partitions while not allowing consumption from a partially finished producer.

I wonder if we can combine them into one config that takes an enum of
supported values.
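A hedged sketch of what such a combined option could look like; the enum name, its values, and the mapping method are hypothetical, not from the PR:

```java
// Hypothetical enum combining the two boolean options into one; all names
// here are illustrative only.
public class PartitionConsumeConstraintSketch {
    enum PartitionConsumeConstraint {
        UNFINISHED_PRODUCERS,       // may consume partitions whose producers are still running
        PARTIAL_PRODUCERS_FINISHED, // may consume once some producers have finished
        ALL_PRODUCERS_FINISHED      // may consume only fully finished results
    }

    // Mapping from the two existing booleans onto the single enum. The
    // (onlyConsumeFinished = false, consumePartialFinished = false) combination
    // is where the two separate options can conflict today.
    static PartitionConsumeConstraint fromLegacyFlags(
            boolean onlyConsumeFinishedPartition, boolean consumePartialFinishedProducer) {
        if (!onlyConsumeFinishedPartition) {
            return PartitionConsumeConstraint.UNFINISHED_PRODUCERS;
        }
        return consumePartialFinishedProducer
                ? PartitionConsumeConstraint.PARTIAL_PRODUCERS_FINISHED
                : PartitionConsumeConstraint.ALL_PRODUCERS_FINISHED;
    }
}
```

With a single enum, the conflicting combination simply cannot be expressed, which is the main appeal of merging the options.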
##########
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/CachedShuffleDescriptors.java:
##########
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.deployment;
+
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor.MaybeOffloaded;
+import org.apache.flink.runtime.executiongraph.IntermediateResultPartition;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.shuffle.ShuffleDescriptor;
+import org.apache.flink.util.function.FunctionWithException;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** {@link ShuffleDescriptor}s cache for a {@link ConsumedPartitionGroup} */
+public class CachedShuffleDescriptors {
+ /**
+ * Stores all serialized shuffle descriptors. For unknown shuffle descriptor, it will be
+ * replaced by real shuffle descriptor after upstream task finished.
+ */
+ private final List<MaybeOffloaded<ShuffleDescriptor>> serializedShuffleDescriptors;
+
+ /**
+ * Stores all to be serialized shuffle descriptors, They will be serialized and replace
+ * corresponding value(unknown shuffle descriptor) in serializedShuffleDescriptors during the
+ * next time TDD is generated.
+ */
+ private final Map<ShuffleDescriptor, Integer> toBeSerialized;
Review Comment:
And why not eagerly serialize the descriptor and replace the previous
unknown descriptor in `markPartitionFinished`?
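A minimal sketch of the eager variant suggested here, with plain strings standing in for `ShuffleDescriptor`s and their serialized form (the real code deals with `MaybeOffloaded` values and serialization that can throw `IOException`, which is one possible reason to defer instead):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EagerDescriptorCache {
    private static final String UNKNOWN = "UNKNOWN";

    // index -> serialized descriptor; starts out all-unknown
    private final List<String> serializedShuffleDescriptors;

    EagerDescriptorCache(int numPartitions) {
        serializedShuffleDescriptors =
                new ArrayList<>(Collections.nCopies(numPartitions, UNKNOWN));
    }

    // Eager variant: serialize and replace the unknown entry immediately,
    // so no separate toBeSerialized bookkeeping structure is needed.
    void markPartitionFinished(int index, String descriptor) {
        serializedShuffleDescriptors.set(index, serialize(descriptor));
    }

    private static String serialize(String descriptor) {
        return "SERIALIZED(" + descriptor + ")"; // stand-in for real serialization
    }

    List<String> getSerializedShuffleDescriptors() {
        return serializedShuffleDescriptors;
    }
}
```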
##########
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/CachedShuffleDescriptors.java:
##########
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.deployment;
+
+import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor.MaybeOffloaded;
+import org.apache.flink.runtime.executiongraph.IntermediateResultPartition;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.shuffle.ShuffleDescriptor;
+import org.apache.flink.util.function.FunctionWithException;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/** {@link ShuffleDescriptor}s cache for a {@link ConsumedPartitionGroup} */
+public class CachedShuffleDescriptors {
+ /**
+ * Stores all serialized shuffle descriptors. For unknown shuffle descriptor, it will be
+ * replaced by real shuffle descriptor after upstream task finished.
+ */
+ private final List<MaybeOffloaded<ShuffleDescriptor>> serializedShuffleDescriptors;
+
+ /**
+ * Stores all to be serialized shuffle descriptors, They will be serialized and replace
+ * corresponding value(unknown shuffle descriptor) in serializedShuffleDescriptors during the
+ * next time TDD is generated.
+ */
+ private final Map<ShuffleDescriptor, Integer> toBeSerialized;
Review Comment:
Why do we need a map? Would it be better to use a queue of tuples?
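A sketch of the queue-of-pairs alternative, again with strings as stand-ins for `ShuffleDescriptor`: pending (descriptor, index) entries are appended in arrival order and drained the next time a TDD is generated:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;

public class QueuedDescriptorCache {
    // stand-in for a (ShuffleDescriptor, index-in-group) pair
    static final class Pending {
        final String descriptor;
        final int index;
        Pending(String descriptor, int index) {
            this.descriptor = descriptor;
            this.index = index;
        }
    }

    private final List<String> serializedShuffleDescriptors;
    private final Queue<Pending> toBeSerialized = new ArrayDeque<>();

    QueuedDescriptorCache(int numPartitions) {
        serializedShuffleDescriptors =
                new ArrayList<>(Collections.nCopies(numPartitions, "UNKNOWN"));
    }

    void markPartitionFinished(int index, String descriptor) {
        toBeSerialized.add(new Pending(descriptor, index));
    }

    // Called when the next TDD is generated: drain the queue in FIFO order
    // and replace the corresponding unknown entries.
    void serializePending() {
        Pending pending;
        while ((pending = toBeSerialized.poll()) != null) {
            serializedShuffleDescriptors.set(
                    pending.index, "SERIALIZED(" + pending.descriptor + ")");
        }
    }

    List<String> getSerializedShuffleDescriptors() {
        return serializedShuffleDescriptors;
    }
}
```

A queue keeps insertion order and avoids the question of what the map key means when the same descriptor object appears twice; whether that matters here depends on details of the PR not shown in this hunk.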
##########
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/InputGateDeploymentDescriptor.java:
##########
@@ -75,7 +77,7 @@ public class InputGateDeploymentDescriptor implements Serializable {
private transient ShuffleDescriptor[] inputChannels;
/** Serialized value of shuffle descriptors. */
- private MaybeOffloaded<ShuffleDescriptor[]> serializedInputChannels;
+ private final List<MaybeOffloaded<ShuffleDescriptor>> serializedInputChannels;
Review Comment:
A serialized value will be offloaded only if it's larger than the min
offloading size (default 1 MB). This practically means shuffle descriptors will
no longer be offloaded, because a single shuffle descriptor can hardly be
larger than 1 MB. Moreover, if we configure the min offloading size to a
smaller value, then each shuffle descriptor will become a separate blob object,
which significantly increases the load on the blob server.
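The size arithmetic behind this concern, as a small sketch (the 1 MiB threshold matches the default mentioned above; the descriptor sizes are made up for illustration):

```java
public class OffloadingSketch {
    // default min offloading size mentioned in the comment (1 MiB)
    static final long MIN_OFFLOADING_SIZE = 1024 * 1024;

    // Per-descriptor serialization: each value is checked against the
    // threshold individually, so a small descriptor is never offloaded.
    static boolean offloadedIndividually(long descriptorBytes) {
        return descriptorBytes > MIN_OFFLOADING_SIZE;
    }

    // Previous behaviour: the whole ShuffleDescriptor[] is serialized as one
    // value, so many small descriptors together can still cross the threshold.
    static boolean offloadedAsWhole(long descriptorBytes, int numDescriptors) {
        return descriptorBytes * numDescriptors > MIN_OFFLOADING_SIZE;
    }
}
```

For example, 1000 descriptors of 4 KiB each total roughly 4 MB and would have been offloaded as a single array, but none of them crosses the threshold on its own.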
##########
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java:
##########
@@ -762,7 +763,9 @@ private static PartitionInfo createPartitionInfo(
ShuffleDescriptor shuffleDescriptor =
getConsumedPartitionShuffleDescriptor(
consumedPartition,
- TaskDeploymentDescriptorFactory.PartitionLocationConstraint.MUST_BE_KNOWN);
+ TaskDeploymentDescriptorFactory.PartitionLocationConstraint.MUST_BE_KNOWN,
+ // because partition is already finished, false is fair enough.
Review Comment:
Why is the partition finished?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]