[
https://issues.apache.org/jira/browse/YARN-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553636#comment-14553636
]
zhihai xu commented on YARN-3655:
---------------------------------
Thanks [~asuresh] for the review. I think the flip-flop won't happen.
bq. At some time T2, the next allocation event (after all nodes have sent
heartbeat.. or after a continuousScheduling attempt) happens, a reservation of
2GB is made on each node for appX.
The above reservation won't succeed because of the maxAMShare limitation.
And even if it did succeed, the reservation for appX wouldn't be removed, so
either way there is no flip-flop.
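To make that concrete: in FSAppAttempt#assignContainer (quoted in the
description below), the maxAMShare check returns early, before anything is
allocated or reserved, so an AM request blocked by maxAMShare can't create the
reservation at T2 in the first place. A minimal sketch of the control flow
(amBlockedByMaxAMShare is a hypothetical name standing in for the quoted
check):
{code}
// Sketch of the check order in FSAppAttempt#assignContainer:
if (amBlockedByMaxAMShare()) { // hypothetical wrapper for the maxAMShare check
  return Resources.none();     // early exit: nothing is allocated or reserved
}
// Only requests that pass the check can reach the reserve()/allocate() path.
{code}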
Thanks [~kasha] for your review. These are great suggestions.
I made the changes based on your suggestions. I also fixed the fitsInMaxShare
issue in this JIRA instead of creating a follow-up JIRA.
I also did some optimization to remove duplicate logic:
I found that hasContainerForNode already covers the getTotalRequiredResources
check, so if we call hasContainerForNode we don't need to check
getTotalRequiredResources as well. I therefore removed the
getTotalRequiredResources check from assignReservedContainer and
assignContainer.
Also, because okToUnreserve already checks hasContainerForNode, we don't need
to check it again for the reserved container in assignContainer. A simplified
sketch of the overlap follows.
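This is a sketch based on the 2.7 code, not the code verbatim; the
locality-relaxation checks are omitted:
{code}
// getTotalRequiredResources(prio) just returns
// getResourceRequest(prio, ResourceRequest.ANY).getNumContainers(),
// so the numContainers > 0 test below already covers it.
public boolean hasContainerForNode(Priority prio, FSSchedulerNode node) {
  ResourceRequest anyRequest = getResourceRequest(prio, ResourceRequest.ANY);
  return anyRequest != null
      && anyRequest.getNumContainers() > 0 // subsumes getTotalRequiredResources > 0
      && Resources.fitsIn(anyRequest.getCapability(),
          node.getRMNode().getTotalCapability()); // request must fit on the node
}
{code}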
I uploaded a new patch, YARN-3655.002.patch, with the above changes.
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation
> -----------------------------------------------------------------------------------------
>
> Key: YARN-3655
> URL: https://issues.apache.org/jira/browse/YARN-3655
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.7.0
> Reporter: zhihai xu
> Assignee: zhihai xu
> Attachments: YARN-3655.000.patch, YARN-3655.001.patch,
> YARN-3655.002.patch
>
>
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation.
> If a node is reserved by an application, no other application has any chance
> to allocate a new container on this node unless the application that reserved
> it either assigns a new container on the node or releases the reserved
> container.
> The problem is that if an application calls assignReservedContainer and
> fails to get a new container due to the maxAMShare limitation, it blocks all
> other applications from using the nodes it has reserved. If none of the other
> running applications can release their AM containers because they are blocked
> by these reserved containers, a livelock can occur.
> The following code in FSAppAttempt#assignContainer can cause this potential
> livelock:
> {code}
> // Check the AM resource usage for the leaf queue
> if (!isAmRunning() && !getUnmanagedAM()) {
>   List<ResourceRequest> ask = appSchedulingInfo.getAllResourceRequests();
>   if (ask.isEmpty() || !getQueue().canRunAppAM(
>       ask.get(0).getCapability())) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Skipping allocation because maxAMShare limit would " +
>           "be exceeded");
>     }
>     return Resources.none();
>   }
> }
> {code}
> To fix this issue, we can unreserve the node if we can't allocate the AM
> container on it due to the maxAMShare limitation and the node is reserved
> by the application.
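> A minimal sketch of that approach (not the committed patch; it assumes the
> existing helpers node.getReservedContainer() on FSSchedulerNode and
> unreserve(Priority, FSSchedulerNode) on FSAppAttempt):
> {code}
> // When the maxAMShare check fails, also release our own reservation on this
> // node so other applications can use it:
> RMContainer reserved = node.getReservedContainer();
> if (reserved != null && reserved.getApplicationAttemptId().equals(
>     getApplicationAttemptId())) {
>   unreserve(reserved.getReservedPriority(), node);
> }
> return Resources.none();
> {code}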
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)