[ https://issues.apache.org/jira/browse/YARN-3415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386042#comment-14386042 ]
Hadoop QA commented on YARN-3415:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12708063/YARN-3415.000.patch
  against trunk revision 3d9132d.

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test file.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}. There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

                  org.apache.hadoop.yarn.server.resourcemanager.TestRMHA
                  org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
                  org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication
                  org.apache.hadoop.yarn.server.resourcemanager.TestMoveApplication
                  org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/7143//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/7143//console

This message is automatically generated.

> Non-AM containers can be counted towards amResourceUsage of a fairscheduler queue
> ---------------------------------------------------------------------------------
>
>                 Key: YARN-3415
>                 URL: https://issues.apache.org/jira/browse/YARN-3415
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.6.0
>            Reporter: Rohit Agarwal
>            Assignee: zhihai xu
>            Priority: Critical
>        Attachments: YARN-3415.000.patch
>
>
> We encountered this problem while running a Spark cluster. The amResourceUsage for a queue became artificially high, and the cluster then deadlocked because the maxAMShare constraint kicked in and no new AM was admitted to the cluster.
> I have described the problem in detail here: https://github.com/apache/spark/pull/5233#issuecomment-87160289
> In summary: the condition for adding a container's memory towards amResourceUsage is fragile, because it depends on the number of live containers belonging to the app. We saw the Spark AM go down without explicitly releasing its requested containers, after which the memory of one of those containers was counted towards amResourceUsage.
> cc - [~sandyr]
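To make the reported failure mode concrete, below is a minimal, self-contained Java sketch of the fragile live-container-count condition and a flag-based alternative in the spirit of the fix. All class, field, and method names here are hypothetical illustrations, not the actual FairScheduler code; real bookkeeping would also have to decrement the charge when the AM container finishes, which the sketch omits.

{code:java}
// Hypothetical sketch: why inferring "this is the AM container" from the
// live-container count misattributes non-AM containers to amResourceUsage.
import java.util.HashSet;
import java.util.Set;

public class AmShareSketch {

  /** A scheduler-side view of one application attempt (illustrative only). */
  static class AppAttempt {
    final Set<String> liveContainers = new HashSet<>();
    boolean amRunning = false;   // used only by the robust variant
    long amResourceUsageMb = 0;  // what the queue charges against maxAMShare

    // Fragile variant: assume the sole live container must be the AM.
    void allocateFragile(String containerId, long memoryMb) {
      liveContainers.add(containerId);
      if (liveContainers.size() == 1) {
        amResourceUsageMb += memoryMb; // may charge a NON-AM container
      }
    }

    // Robust variant: charge AM resource at most once, via an explicit
    // flag, regardless of how many containers happen to be live.
    void allocateRobust(String containerId, long memoryMb) {
      liveContainers.add(containerId);
      if (!amRunning) {
        amRunning = true;
        amResourceUsageMb += memoryMb;
      }
    }

    void release(String containerId) {
      liveContainers.remove(containerId);
    }
  }

  public static void main(String[] args) {
    // Reproduce the reported sequence with the fragile condition:
    AppAttempt fragile = new AppAttempt();
    fragile.allocateFragile("am-container", 1024);   // AM starts: charged, correct
    fragile.release("am-container");                 // AM dies without cancelling
                                                     // its outstanding requests
    fragile.allocateFragile("worker-container", 4096); // now the only live
                                                        // container, so ALSO charged
    System.out.println("fragile amResourceUsageMb = " + fragile.amResourceUsageMb);
    // Prints 5120: the 4096 MB worker is wrongly counted towards the queue's
    // AM share, which is how maxAMShare can starve the queue of new AMs.

    // Same sequence with the explicit flag:
    AppAttempt robust = new AppAttempt();
    robust.allocateRobust("am-container", 1024);
    robust.release("am-container");
    robust.allocateRobust("worker-container", 4096);
    System.out.println("robust amResourceUsageMb = " + robust.amResourceUsageMb);
    // Prints 1024: only the first (AM) allocation is charged.
  }
}
{code}

The point of the contrast is that the robust variant keys the charge on application state (has an AM allocation already been charged?) rather than on a transient count that a lost AM and a late container allocation can conspire to reset.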