[
https://issues.apache.org/jira/browse/YARN-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549547#comment-14549547
]
Arun Suresh commented on YARN-3655:
-----------------------------------
Thanks for the patch, [~zxu].
I was just wondering, though: with your approach, assume the following
situation (please correct me if I am wrong):
* We have 3 nodes, each with, say, 4GB capacity.
* Currently, applications are using 3GB on each node (assume they are all
fairly long-running tasks).
* At time T1, a new app (appX) is added and requires 2GB.
* At some time T2, the next allocation event happens (after all nodes have
sent heartbeats, or after a continuousScheduling attempt), and a reservation
of 2GB is made on each node for appX.
* At some time T3, during the next allocation event, per your patch, the
reservation for appX will be removed from ALL nodes.
* Thus reservations for appX will flip-flop on all nodes. It is possible
that, during a period when there is no reservation for appX, other apps with
< 1GB requirements come in and get scheduled on the cluster, thereby starving
appX (see the sketch below).
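To make the starvation pattern concrete, here is a toy, self-contained
simulation of a single node under the flip-flop behavior described above.
This is not YARN code; the capacities, the alternating reserve/unreserve
per event, and all names are illustrative assumptions.

{code}
public class FlipFlopSim {
  public static void main(String[] args) {
    final int capacityGb = 4;
    final int longRunningGb = 3;   // long-running tasks pinned to the node
    boolean appXReserved = false;  // appX needs 2GB, which never fits
    int smallAppsScheduled = 0;

    for (int event = 1; event <= 10; event++) {
      // Assumed patch behavior per the scenario above: the reservation
      // is made on one allocation event and removed on the next.
      appXReserved = !appXReserved;
      int freeGb = capacityGb - longRunningGb;
      if (!appXReserved && freeGb >= 1) {
        // While the node is unreserved, a waiting 1GB app grabs the free
        // space (and, we assume, finishes before the next event).
        smallAppsScheduled++;
      }
      System.out.printf("event %d: appXReserved=%b smallAppsScheduled=%d%n",
          event, appXReserved, smallAppsScheduled);
    }
    // appX never gets its 2GB, while 1GB apps keep landing: starvation.
  }
}
{code}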
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation
> -----------------------------------------------------------------------------------------
>
> Key: YARN-3655
> URL: https://issues.apache.org/jira/browse/YARN-3655
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.7.0
> Reporter: zhihai xu
> Assignee: zhihai xu
> Attachments: YARN-3655.000.patch, YARN-3655.001.patch
>
>
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation.
> If a node is reserved by an application, none of the other applications have
> any chance to assign a new container on this node unless the application
> which reserved the node assigns a new container on it or releases the
> reserved container.
> The problem is that if an application calls assignReservedContainer and
> fails to get a new container due to the maxAMShare limitation, it blocks all
> other applications from using the nodes it reserves. If all the other
> running applications can't release their AM containers because they are
> blocked by these reserved containers, a livelock situation can happen.
> The following code in FSAppAttempt#assignContainer can cause this potential
> livelock:
> {code}
>     // Check the AM resource usage for the leaf queue
>     if (!isAmRunning() && !getUnmanagedAM()) {
>       List<ResourceRequest> ask = appSchedulingInfo.getAllResourceRequests();
>       if (ask.isEmpty() || !getQueue().canRunAppAM(
>           ask.get(0).getCapability())) {
>         if (LOG.isDebugEnabled()) {
>           LOG.debug("Skipping allocation because maxAMShare limit would " +
>               "be exceeded");
>         }
>         return Resources.none();
>       }
>     }
> {code}
> To fix this issue, we can unreserve the node if we can't allocate the AM
> container on it due to the maxAMShare limitation and the node is reserved
> by the application.
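A minimal sketch of that unreserve approach, placed where the existing
maxAMShare check returns. It assumes the 2.7-era FSAppAttempt and
FSSchedulerNode APIs (node.getReservedContainer(),
RMContainer#getReservedPriority(), and FSAppAttempt's private
unreserve(Priority, FSSchedulerNode)); it illustrates the idea in the
description and is not the committed patch.

{code}
// Inside FSAppAttempt#assignContainer, when the maxAMShare check fails:
if (ask.isEmpty() || !getQueue().canRunAppAM(
    ask.get(0).getCapability())) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Skipping allocation because maxAMShare limit would " +
        "be exceeded");
  }
  // Sketch of the proposed fix: if this attempt holds the reservation
  // on this node, release it so other applications can use the node.
  RMContainer reservedContainer = node.getReservedContainer();
  if (reservedContainer != null && getApplicationAttemptId().equals(
      reservedContainer.getApplicationAttemptId())) {
    unreserve(reservedContainer.getReservedPriority(), node);
  }
  return Resources.none();
}
{code}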