wilfred-s commented on a change in pull request #173:
URL: https://github.com/apache/incubator-yunikorn-core/pull/173#discussion_r447549155



##########
File path: pkg/scheduler/scheduling_application.go
##########
@@ -418,12 +431,34 @@ func (sa *SchedulingApplication) tryAllocate(headRoom *resources.Resource, ctx *
        for _, request := range sa.sortedRequests {
                // resource must fit in headroom otherwise skip the request
                if !resources.FitIn(headRoom, request.AllocatedResource) {
+                       // if the queue (or any of its parents) has a max capacity defined,
+                       // get the max headroom, this represents the configured queue quota.
+                       // if the queue quota is enough, but the headroom is not, this
+                       // usually means the cluster needs to scale up to meet its capacity.
+                       maxHeadRoom := sa.queue.getMaxHeadRoom()
+                       if resources.FitIn(maxHeadRoom, request.AllocatedResource) {
+                               sa.updateContainerSchedulingStateIfNeeded(request,
+                                       si.UpdateContainerSchedulingStateRequest_FAILED,
+                                       "failed to schedule the request because partition resource is not enough")
+                       }
+                       // skip the request
                        continue
                }
                if nodeIterator := ctx.getNodeIterator(); nodeIterator != nil {
                        alloc := sa.tryNodes(request, nodeIterator)
-                       // have a candidate return it
-                       if alloc != nil {
+                       if alloc == nil {
+                               // we have enough headroom, but we could not find a node for
+                               // this request; this can happen when none of the nodes
+                               // qualifies for this request by satisfying both conditions:
+                               //   1) the node has enough resources;
+                               //   2) the node satisfies all placement constraints of the request (e.g. predicates)
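
For readers following the diff, the intent of `getMaxHeadRoom` is to walk the queue hierarchy and combine the configured max capacities into a single quota limit. A minimal sketch of that idea, assuming a hypothetical queue shape (`sketchQueue` and its fields `parent`, `maxResource`, `allocatedResource` are illustrative names, not the actual yunikorn-core types):

```go
package sketch

import (
	"github.com/apache/incubator-yunikorn-core/pkg/common/resources"
)

// sketchQueue is a stand-in for the scheduling queue; the real queue
// in yunikorn-core has a different shape.
type sketchQueue struct {
	parent            *sketchQueue
	maxResource       *resources.Resource // nil when no quota is configured
	allocatedResource *resources.Resource
}

// getMaxHeadRoom returns the room left under the configured quota of this
// queue and all of its parents, ignoring the current cluster size. A nil
// result means no quota is set anywhere on the path to the root.
func (q *sketchQueue) getMaxHeadRoom() *resources.Resource {
	var parentRoom *resources.Resource
	if q.parent != nil {
		parentRoom = q.parent.getMaxHeadRoom()
	}
	if q.maxResource == nil {
		return parentRoom // no quota on this queue, the parents decide
	}
	// room left under this queue's own quota
	room := resources.Sub(q.maxResource, q.allocatedResource)
	if parentRoom != nil {
		// the effective quota headroom is the tighter of the two limits
		room = resources.ComponentWiseMin(room, parentRoom)
	}
	return room
}
```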

Review comment:
       Scale up with 0 nodes will not be triggered here at all. It will already have been triggered above in the `FitIn()` check: the `headRoom` check fails while the `maxHeadRoom` check passes, so we scale up immediately. With no nodes we cannot reserve, or even try to allocate, in that case.
   
   In small clusters we also do not want to go overboard and keep scaling up. With just one node, or a couple of nodes, in the cluster it is better not to trigger large numbers of scale ups: a scaled-up node will more often than not fit a number of requests, and that should account for the backlog.
   If we still run out we keep scaling up. Keep in mind that we also scale up if the container does not fit in the headroom, so for large containers we might already have triggered a scale up due to headroom restrictions.
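
   Condensed, the per-request decision flow described in this thread looks roughly like the sketch below. This is illustrative only; the `schedule` helper and its outcome strings are stand-ins, not the actual `tryAllocate` code (it assumes the same `resources` package as above and non-nil inputs):

```go
// schedule summarizes the per-request outcomes discussed in this thread.
// headRoom is the current partition headroom, maxHeadRoom the queue quota
// headroom, and ask the requested resource.
func schedule(headRoom, maxHeadRoom, ask *resources.Resource) string {
	if !resources.FitIn(headRoom, ask) {
		if resources.FitIn(maxHeadRoom, ask) {
			// quota allows the request but the cluster cannot fit it now:
			// scale up is triggered here, even with 0 nodes in the cluster.
			return "notify scale up"
		}
		// over the queue quota: scaling up would not help, skip it
		return "skip request"
	}
	// headroom is fine; if no node qualifies (enough resources plus all
	// placement constraints) we reserve instead of scaling up again.
	return "try nodes, reserve on failure"
}
```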



