[
https://issues.apache.org/jira/browse/YARN-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725693#comment-13725693
]
Hadoop QA commented on YARN-957:
--------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12595262/YARN-957-20130731.1.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 4 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-YARN-Build/1628//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1628//console
This message is automatically generated.
> Capacity Scheduler tries to reserve more memory than what the node manager
> reports.
> -----------------------------------------------------------------------------------
>
> Key: YARN-957
> URL: https://issues.apache.org/jira/browse/YARN-957
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Omkar Vinit Joshi
> Assignee: Omkar Vinit Joshi
> Attachments: YARN-957-20130730.1.patch, YARN-957-20130730.2.patch,
> YARN-957-20130730.3.patch, YARN-957-20130731.1.patch
>
>
> I have 2 node managers:
> * one with 1024 MB of memory (nm1)
> * a second with 2048 MB of memory (nm2)
> I am submitting a simple MapReduce application with one mapper and one reducer,
> each requesting 1024 MB. The steps to reproduce this are:
> * Stop nm2 (the 2048 MB node). This is done to make sure that this node's
> heartbeat doesn't reach the RM first.
> * Now submit the application. As soon as the RM receives the first node's (nm1)
> heartbeat, it tries to reserve memory for the AM container (2048 MB) on that
> node, even though nm1 has only 1024 MB of memory (the shape of this request is
> sketched after these steps).
> * Now start nm2 (the 2048 MB node).
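> For reference, a minimal sketch of the AM container ask in this scenario,
> assuming the Hadoop 2.x records API; the class name is hypothetical and the
> values match the scenario above, not the attached patches. ResourceRequest.ANY
> ("*") means the ask carries no locality constraint, which matters for the
> second issue below.
> {code:java}
> import org.apache.hadoop.yarn.api.records.Priority;
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.api.records.ResourceRequest;
>
> public class AmAskExample {
>   // Builds the AM container ask described above: 2048 MB, one container,
>   // with ResourceRequest.ANY, i.e. no locality preference.
>   public static ResourceRequest amAsk() {
>     return ResourceRequest.newInstance(
>         Priority.newInstance(0),        // AM priority
>         ResourceRequest.ANY,            // off-switch / no locality
>         Resource.newInstance(2048, 1),  // 2048 MB, 1 vcore
>         1);                             // one container
>   }
> }
> {code}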
> The application then hangs forever. This exposes two potential issues (a
> sketch of both checks follows this list):
> * The scheduler should not try to reserve memory on a node manager that is
> never going to be able to give the requested memory. Here nm1's maximum
> capability is 1024 MB, yet 2048 MB is reserved on it anyway.
> * Say 2048 MB is reserved on nm1, but nm2 comes back with 2048 MB of available
> memory. If the original request was made without any locality constraint, the
> scheduler should unreserve the memory on nm1 and allocate the requested
> 2048 MB container on nm2.
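> A minimal, self-contained sketch of both checks; NodeInfo and the method names
> are illustrative, not YARN classes or anything from the attached patches, and
> reserved memory stands in for overall availability.
> {code:java}
> // Illustrative types only; not YARN classes.
> class NodeInfo {
>   final String host;
>   final int totalMemMb;   // capability the NM registered with
>   int reservedMemMb;      // memory currently reserved on this node
>
>   NodeInfo(String host, int totalMemMb) {
>     this.host = host;
>     this.totalMemMb = totalMemMb;
>   }
>
>   int unreservedMemMb() { return totalMemMb - reservedMemMb; }
> }
>
> class ReservationChecks {
>   // Issue 1: a request larger than the node's total capability can never
>   // be satisfied there, so the scheduler should not reserve on that node.
>   static boolean canEverSatisfy(NodeInfo node, int requestedMemMb) {
>     return requestedMemMb <= node.totalMemMb;
>   }
>
>   // Issue 2: for a request with no locality constraint, if another node can
>   // allocate immediately, unreserve on the old node and use the new one.
>   static NodeInfo reassignIfPossible(NodeInfo reservedOn, NodeInfo candidate,
>       int requestedMemMb) {
>     if (candidate.unreservedMemMb() >= requestedMemMb) {
>       reservedOn.reservedMemMb -= requestedMemMb; // drop reservation on nm1
>       return candidate;                           // allocate on nm2
>     }
>     return reservedOn; // keep the existing reservation and keep waiting
>   }
> }
> {code}
> In the repro above, canEverSatisfy(nm1, 2048) is false, so no reservation would
> be placed on nm1 in the first place and the application would not hang.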
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira