[ https://issues.apache.org/jira/browse/YARN-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13718753#comment-13718753 ]
Omkar Vinit Joshi commented on YARN-957:
----------------------------------------
No, this is completely different: here the RM is trying to reserve more memory
on a node manager than the node actually has.
> Capacity Scheduler tries to reserve more memory than what the node manager
> reports.
> -----------------------------------------------------------------------------------
>
> Key: YARN-957
> URL: https://issues.apache.org/jira/browse/YARN-957
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Omkar Vinit Joshi
>
> I have two node managers:
> * one with 1024 MB of memory (nm1)
> * a second with 2048 MB of memory (nm2)
> I am submitting a simple MapReduce application with one mapper and one
> reducer, each requesting 1024 MB. The steps to reproduce this are:
> * stop nm2, the node with 2048 MB of memory. (I do this to make sure that
> this node's heartbeat doesn't reach the RM first.)
> * now submit the application. As soon as the RM receives the first node's
> (nm1) heartbeat, it tries to reserve memory for the AM container (2048 MB).
> However, nm1 has only 1024 MB of memory.
> * now start nm2 with 2048 MB of memory.
> The application then hangs forever. There are two potential issues here:
> * The scheduler should not reserve memory on a node manager that can never
> satisfy the request. nm1's maximum capability is 1024 MB, yet 2048 MB is
> reserved on it anyway.
> * Say 2048 MB is reserved on nm1, but nm2 comes back with 2048 MB of
> available memory. If the original request was made without any locality
> constraint, the scheduler should unreserve the memory on nm1 and allocate
> the requested 2048 MB container on nm2. (A sketch of both checks follows
> below.)
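>
> Below is a minimal standalone sketch of both checks (plain Java; all names
> are illustrative and not the actual CapacityScheduler or YARN APIs), using
> simple megabyte counts in place of real Resource objects:
>
>     // ReservationSanityCheck.java -- a standalone sketch, not the actual
>     // scheduler code. It models the two checks proposed above.
>     public class ReservationSanityCheck {
>
>       // Check 1: never reserve on a node whose *total* capability is
>       // smaller than the request -- the reservation can never be fulfilled.
>       static boolean canEverSatisfy(int requestedMb, int nodeTotalMb) {
>         return requestedMb <= nodeTotalMb;
>       }
>
>       // Check 2: for a request with no locality constraint, prefer moving
>       // the reservation to a node that can satisfy it right now.
>       static boolean shouldUnreserveAndMove(boolean hasLocality,
>                                             int requestedMb,
>                                             int otherNodeAvailableMb) {
>         return !hasLocality && requestedMb <= otherNodeAvailableMb;
>       }
>
>       public static void main(String[] args) {
>         int amRequestMb = 2048;     // AM container request from this report
>         int nm1TotalMb = 1024;      // nm1 total capability
>         int nm2AvailableMb = 2048;  // nm2 after it rejoins
>
>         if (!canEverSatisfy(amRequestMb, nm1TotalMb)) {
>           System.out.println("skip reservation on nm1: can never fit");
>         }
>         if (shouldUnreserveAndMove(false, amRequestMb, nm2AvailableMb)) {
>           System.out.println("unreserve on nm1 and allocate on nm2");
>         }
>       }
>     }
>
> With the values from this report, both checks fire: the reservation on nm1
> is never made, and if one already exists it is moved to nm2.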