[
https://issues.apache.org/jira/browse/YARN-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577321#comment-14577321
]
Hudson commented on YARN-3655:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #2168 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2168/])
YARN-3655. FairScheduler: potential livelock due to maxAMShare limitation and
container reservation. (Zhihai Xu via kasha) (kasha: rev
bd69ea408f8fdd8293836ce1089fe9b01616f2f7)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation
> -----------------------------------------------------------------------------------------
>
> Key: YARN-3655
> URL: https://issues.apache.org/jira/browse/YARN-3655
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.7.0
> Reporter: zhihai xu
> Assignee: zhihai xu
> Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-3655.000.patch, YARN-3655.001.patch,
> YARN-3655.002.patch, YARN-3655.003.patch, YARN-3655.004.patch
>
>
> FairScheduler: potential livelock due to maxAMShare limitation and container
> reservation.
> If a node is reserved by an application, no other application gets a chance
> to assign a new container on this node until the application that holds the
> reservation either assigns a new container on the node or releases the
> reserved container on the node.
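> For illustration, a simplified paraphrase of this per-node gate in
> FairScheduler#attemptScheduling (details elided; the exact trunk code
> differs):
> {code}
> // A reserved node is offered only to the application holding the reservation.
> FSAppAttempt reservedApp = node.getReservedAppSchedulable();
> if (reservedApp != null) {
>   // Only the reserving application may try to satisfy its reservation here.
>   reservedApp.assignReservedContainer(node);
> } else {
>   // Otherwise the node is offered to the queue hierarchy as usual.
>   queueMgr.getRootQueue().assignContainer(node);
> }
> {code}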
> The problem is that if an application calls assignReservedContainer and fails
> to get a new container due to the maxAMShare limitation, it blocks all the
> other applications from using the nodes it has reserved. If all the other
> running applications can't release their AM containers because they are
> blocked by these reserved containers, a livelock situation can happen.
> The following code in FSAppAttempt#assignContainer can cause this potential
> livelock:
> {code}
> // Check the AM resource usage for the leaf queue
> if (!isAmRunning() && !getUnmanagedAM()) {
>   List<ResourceRequest> ask = appSchedulingInfo.getAllResourceRequests();
>   if (ask.isEmpty() || !getQueue().canRunAppAM(
>       ask.get(0).getCapability())) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Skipping allocation because maxAMShare limit would " +
>           "be exceeded");
>     }
>     return Resources.none();
>   }
> }
> {code}
> To fix this issue, we can unreserve the node if we can't allocate the AM
> container on it due to the maxAMShare limitation and the node is reserved by
> the application.
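> For illustration only, a minimal sketch of that approach (not the committed
> patch; the helper names hasReservationOn and releaseReservation are
> hypothetical stand-ins for the corresponding FSAppAttempt/FSSchedulerNode
> calls):
> {code}
> // Check the AM resource usage for the leaf queue
> if (!isAmRunning() && !getUnmanagedAM()) {
>   List<ResourceRequest> ask = appSchedulingInfo.getAllResourceRequests();
>   if (ask.isEmpty() || !getQueue().canRunAppAM(
>       ask.get(0).getCapability())) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Skipping allocation because maxAMShare limit would " +
>           "be exceeded");
>     }
>     // Sketch of the fix: if this attempt holds the reservation on this node,
>     // drop it so other applications can be offered the node again.
>     // hasReservationOn/releaseReservation are hypothetical placeholders.
>     if (hasReservationOn(node)) {
>       releaseReservation(node);
>     }
>     return Resources.none();
>   }
> }
> {code}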
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)