Karthik Kambatla commented on YARN-3655:

Comments on the patch:
# okToUnreserve
## It was a little hard to wrap my head around. Can we negate it and call it 
{{isValidReservation}}?
## Can we get rid of the if-else and have a simple {{return hasContainerForNode 
&& fitsInMaxShare && !isOverAMShareLimit}}?
# Add an {{if (isValidReservation)}} check in {{FSAppAttempt#reserve}} so all 
the reservation logic stays in one place? 
# In {{FSAppAttempt#assignContainer(node, request, nodeType, reserved)}}, 
## We can get rid of the fitsInMaxShare check immediately preceding the call to 
{{reserve}}.
## Given the {{if (fitsIn(capability, available))}} block ends in a return, we 
don't need to put the continuation in an else.
# While adding this check in {{FSAppAttempt#assignContainer(node)}} might work 
in practice, it somehow feels out of place. Also, couldn't 
assignReservedContainer lead to a reservation as well?
# Instead of calling {{okToUnreserve}}/{{!isValidReservation}} in 
{{FairScheduler#attemptScheduling}}, we should likely add it as the first check 
in {{FSAppAttempt#assignReservedContainer}}.
# Looks like assign-multiple is broken with reserved-containers. The while-loop 
for assign-multiple should look at both reserved and un-reserved containers 
assigned. Can we file a follow-up JIRA to fix this?  

> FairScheduler: potential livelock due to maxAMShare limitation and container 
> reservation 
> -----------------------------------------------------------------------------------------
>                 Key: YARN-3655
>                 URL: https://issues.apache.org/jira/browse/YARN-3655
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.7.0
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>         Attachments: YARN-3655.000.patch, YARN-3655.001.patch, 
> YARN-3655.002.patch
> If a node is reserved by an application, no other application can be 
> assigned a new container on this node until the reserving application 
> either assigns a new container on the node or releases the reserved 
> container.
> The problem is that if an application calls assignReservedContainer and 
> fails to get a new container due to the maxAMShare limitation, it blocks 
> all other applications from using the nodes it has reserved. If the other 
> running applications can't release their AM containers because they are 
> blocked by these reserved containers, a livelock can occur.
> The following is the code at FSAppAttempt#assignContainer which can cause 
> this potential livelock.
> {code}
>     // Check the AM resource usage for the leaf queue
>     if (!isAmRunning() && !getUnmanagedAM()) {
>       List<ResourceRequest> ask = appSchedulingInfo.getAllResourceRequests();
>       if (ask.isEmpty() || !getQueue().canRunAppAM(
>           ask.get(0).getCapability())) {
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("Skipping allocation because maxAMShare limit would " +
>               "be exceeded");
>         }
>         return Resources.none();
>       }
>     }
> {code}
> To fix this issue, we can unreserve the node if the node is reserved by 
> the application and we can't allocate the AM container on it due to the 
> maxAMShare limitation.
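The fix the description proposes (unreserve when the maxAMShare check blocks an AM container on a node this application has reserved) could be sketched as follows; this is a stubbed illustration, not the patch itself, and all names except those quoted from the issue are stand-ins:

```java
// Stubbed sketch of the proposed fix: when the maxAMShare limit blocks an
// AM allocation, release this application's reservation on the node so
// other applications can use it. All names here are illustrative.
public class UnreserveOnAmLimitSketch {
  static boolean amRunning = false;
  static boolean unmanagedAM = false;
  static boolean overMaxAMShare = true;      // the blocking condition
  static boolean nodeReservedByThisApp = true;
  static boolean reservationReleased = false;

  static void unreserve() {
    reservationReleased = true;
  }

  // Mirrors the quoted FSAppAttempt#assignContainer check, extended with
  // the proposed unreserve step before bailing out.
  static String assignContainer() {
    if (!amRunning && !unmanagedAM && overMaxAMShare) {
      if (nodeReservedByThisApp) {
        unreserve();                 // the fix: free the node for others
      }
      return "none";                 // Resources.none() in the real code
    }
    return "allocated";
  }

  public static void main(String[] args) {
    String r = assignContainer();
    // prints "none reservationReleased=true"
    System.out.println(r + " reservationReleased=" + reservationReleased);
  }
}
```

With the reservation released, other applications can again schedule on the node, which breaks the livelock cycle described above.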

This message was sent by Atlassian JIRA
