azagrebin commented on a change in pull request #8789: [FLINK-12890] Add partition lifecycle related Shuffle API
URL: https://github.com/apache/flink/pull/8789#discussion_r295798487
 
 

 ##########
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/shuffle/ShuffleDescriptor.java
 ##########
 @@ -50,4 +54,17 @@
        default boolean isUnknown() {
                return false;
        }
+
+       /**
+        * Returns the location of the producing task executor if the partition occupies local resources there.
+        *
+        * <p>Indicates that this partition occupies local resources in the producing task executor. Such a partition
+        * requires the producing task executor to be running and connected so that the produced data can be consumed.
+        * This is mostly relevant for batch jobs and blocking result partitions, which should outlive the producer and
+        * be released externally: {@link ResultPartitionDeploymentDescriptor#isReleasedOnConsumption()} is {@code false}.
+        * {@link ShuffleEnvironment#releasePartitions(Collection)} can be used to release such partitions locally.
+        *
+        * @return the resource id of the producing task executor if the partition occupies local resources there
+        */
+       Optional<ResourceID> hasLocalResources();
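
As a usage illustration, here is a minimal sketch of how a JM-side component might consume the returned `Optional`. The tracker class and its map are hypothetical; only `ShuffleDescriptor#hasLocalResources` comes from this change:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.flink.runtime.clusterframework.types.ResourceID;
import org.apache.flink.runtime.shuffle.ShuffleDescriptor;

// Hypothetical JM-side tracker: partitions that occupy local resources are
// remembered per producing task executor so they can be released there later.
class LocalResourceTrackerSketch {
	private final Map<ResourceID, Set<ShuffleDescriptor>> partitionsByProducer = new HashMap<>();

	void track(ShuffleDescriptor descriptor) {
		// Empty Optional means the partition holds no local resources on the
		// producer and needs no producer-side bookkeeping.
		descriptor.hasLocalResources().ifPresent(producerId ->
			partitionsByProducer
				.computeIfAbsent(producerId, id -> new HashSet<>())
				.add(descriptor));
	}
}
```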
 
 Review comment:
   I would define it so that, in general, every shuffle service can potentially have some external resources; if it does not, the implementation is simply empty. I would always call the external release in the JM when the partition is no longer needed, unless `releaseOnConsumption` is set in `ResultPartitionDeploymentDescriptor` (see the sketch below).
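
   A minimal sketch of that rule, assuming a hypothetical JM-side helper class; only the Flink types referenced in this discussion (`ShuffleMaster#releasePartitionExternally`, `ResultPartitionDeploymentDescriptor`) are taken from the PR's API:

```java
import org.apache.flink.runtime.deployment.ResultPartitionDeploymentDescriptor;
import org.apache.flink.runtime.shuffle.ShuffleMaster;

// Hypothetical JM-side helper illustrating the suggested release rule.
class ExternalPartitionReleaseSketch {
	private final ShuffleMaster<?> shuffleMaster;

	ExternalPartitionReleaseSketch(ShuffleMaster<?> shuffleMaster) {
		this.shuffleMaster = shuffleMaster;
	}

	void onPartitionNotNeeded(ResultPartitionDeploymentDescriptor descriptor) {
		// Partitions released on consumption are cleaned up by the shuffle
		// service itself; all others are always released externally, which
		// is a no-op for shuffle services without external resources.
		if (!descriptor.isReleasedOnConsumption()) {
			shuffleMaster.releasePartitionExternally(descriptor.getShuffleDescriptor());
		}
	}
}
```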
