SteNicholas commented on a change in pull request #16307:
URL: https://github.com/apache/flink/pull/16307#discussion_r661108805
##########
File path: flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
##########
@@ -106,6 +107,21 @@ class StreamExecutionEnvironment(javaEnv: JavaEnv) {
javaEnv.setMaxParallelism(maxParallelism)
}
+ /**
+  * Register a slot sharing group with its resource spec. The resource configured here is prior
+  * than it configured in {@link SingleOutputStreamOperator#slotSharingGroup(SlotSharingGroup)}.
+  *
+  * <p>Note that a slot sharing group hints the scheduler that the grouped operators CAN be
+  * deployed into a shared slot. There's no guarantee that the scheduler always deploy the
+  * grouped operators together. In cases grouped operators are deployed into separate slots, the
+  * slot resources will be derived from the specified group requirements.
Review comment:
Add a doc comment for the parameter `slotSharingGroup`.
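For context on the API whose doc is quoted above, here is a minimal usage sketch (in Java for brevity, even though this hunk touches the Scala wrapper). It assumes the `registerSlotSharingGroup(SlotSharingGroup)` method added by this PR together with a `SlotSharingGroup` builder exposing `setCpuCores`/`setTaskHeapMemoryMB`; the group name, resource values, and pipeline are illustrative only and not taken from the PR:

```java
import org.apache.flink.api.common.operators.SlotSharingGroup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingGroupUsageSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a slot sharing group with an explicit resource spec (values are illustrative).
        SlotSharingGroup groupA =
                SlotSharingGroup.newBuilder("group-a")
                        .setCpuCores(1.0)
                        .setTaskHeapMemoryMB(256)
                        .build();

        // Register the group's resources on the environment. Per the quoted Javadoc, this spec
        // takes precedence over one attached to an individual operator via
        // slotSharingGroup(SlotSharingGroup).
        env.registerSlotSharingGroup(groupA);

        // Operators that refer to the group by name share the registered resource spec.
        env.fromElements(1, 2, 3)
                .filter(x -> x > 0)
                .slotSharingGroup("group-a")
                .print();

        env.execute("slot-sharing-group-sketch");
    }
}
```

The key point of the quoted Javadoc is the precedence rule: a spec registered on the environment wins over one attached to a single operator.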
##########
File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
##########
@@ -336,6 +345,24 @@ public StreamExecutionEnvironment setMaxParallelism(int maxParallelism) {
return this;
}
+ /**
+  * Register a slot sharing group with its resource spec. The resource configured here is prior
+  * than it configured in {@link SingleOutputStreamOperator#slotSharingGroup(SlotSharingGroup)}.
+  *
+  * <p>Note that a slot sharing group hints the scheduler that the grouped operators CAN be
+  * deployed into a shared slot. There's no guarantee that the scheduler always deploy the
+  * grouped operators together. In cases grouped operators are deployed into separate slots, the
+  * slot resources will be derived from the specified group requirements.
Review comment:
Add a Javadoc comment for the parameter `slotSharingGroup`.
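A possible wording for the requested parameter tag, as a hedged sketch only: the enclosing class, the method name `registerSlotSharingGroup`, its signature, and the return description are assumptions inferred from the surrounding context, not part of the quoted hunk.

```java
import org.apache.flink.api.common.operators.SlotSharingGroup;

/** Sketch of the suggested documentation; the signature below is assumed, not the PR's code. */
public abstract class RegisterSlotSharingGroupDocSketch {

    /**
     * Register a slot sharing group with its resource spec.
     *
     * @param slotSharingGroup the slot sharing group with its resource spec to be registered
     * @return this environment, to allow chaining of further configuration calls
     */
    public abstract RegisterSlotSharingGroupDocSketch registerSlotSharingGroup(
            SlotSharingGroup slotSharingGroup);
}
```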