xintongsong commented on a change in pull request #11320: [FLINK-16437][runtime] Make SlotManager allocate resource from ResourceManager at the worker granularity.
URL: https://github.com/apache/flink/pull/11320#discussion_r404473798
 
 

 ##########
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 ##########
 @@ -1054,10 +1054,10 @@ protected abstract void internalDeregisterApplication(
         * Allocates a resource using the resource profile.
         *
         * @param resourceProfile The resource description
-        * @return Collection of {@link ResourceProfile} describing the launched slots
+        * @return whether the resource can be allocated
         */
        @VisibleForTesting
-       public abstract Collection<ResourceProfile> startNewWorker(ResourceProfile resourceProfile);
+       public abstract boolean startNewWorker(ResourceProfile resourceProfile);
 
 Review comment:
   True, it is possible that the RM won't be able to start a container for a given `WorkerResourceSpec`.
   
   I think if the requested resource, or to be specific the `TaskExecutorProcessSpec` derived from the `WorkerResourceSpec`, is smaller than the Yarn min-allocation, we can simply log a warning. The same goes for when the requested resource is not an integer multiple of the min-allocation. The critical case is when the requested resource is larger than the max-allocation; we should check for that and return false.
   
   I think these checks should be performed by the RM implementations. I'll add 
the checks for Yarn in #11353. For K8s, I think there are no such 
min/max-allocation limits, but I'll double-check on that.
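   To illustrate, here is a rough, standalone sketch of the kind of check I have in mind for the Yarn side. The class and method names are made up for illustration and are not the actual Flink or Yarn APIs:

```java
/**
 * Illustrative sketch only: checks a requested worker size against the Yarn
 * min/max-allocation, warning in the benign cases and rejecting only when the
 * request exceeds the max-allocation.
 */
public class WorkerResourceCheckSketch {

    /**
     * @return true if a container of the requested size could be allocated,
     *         false if it exceeds the Yarn max-allocation.
     */
    static boolean checkRequestedResource(
            long requestedMemoryMB,
            long yarnMinAllocationMB,
            long yarnMaxAllocationMB) {

        if (requestedMemoryMB > yarnMaxAllocationMB) {
            // Critical case: Yarn can never grant such a container, so the
            // caller (e.g. startNewWorker) should report failure by returning false.
            System.err.println("Requested " + requestedMemoryMB
                    + " MB exceeds the Yarn max-allocation of " + yarnMaxAllocationMB + " MB.");
            return false;
        }

        if (requestedMemoryMB < yarnMinAllocationMB) {
            // Yarn rounds the container up to the min-allocation; warn only.
            System.err.println("Requested " + requestedMemoryMB
                    + " MB is smaller than the Yarn min-allocation of "
                    + yarnMinAllocationMB + " MB; the container will be rounded up.");
        } else if (requestedMemoryMB % yarnMinAllocationMB != 0) {
            // Not an integer multiple of the min-allocation; Yarn rounds up, warn only.
            System.err.println("Requested " + requestedMemoryMB
                    + " MB is not an integer multiple of the Yarn min-allocation of "
                    + yarnMinAllocationMB + " MB; the container will be rounded up.");
        }

        return true;
    }

    public static void main(String[] args) {
        // Example with min-allocation 1024 MB and max-allocation 8192 MB.
        System.out.println(checkRequestedResource(1536, 1024, 8192));  // warns, prints true
        System.out.println(checkRequestedResource(16384, 1024, 8192)); // prints false
    }
}
```

   In the real implementation this would of course read the min/max-allocation from the Yarn configuration and use the resource manager's logger rather than raw longs and stderr.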

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
