[
https://issues.apache.org/jira/browse/YARN-389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13718653#comment-13718653
]
Omkar Vinit Joshi commented on YARN-389:
----------------------------------------
Today this limit is static. By default (on a well-maintained cluster) it will
be less than or equal to the maximum resource capability of a single node
manager. However, we don't update it on a node update or when a node is
considered dead. We should probably update it (or set some other flag) and
start logging this information when the largest node capability drops below
this value... thoughts?
{code}
// Read once from configuration at scheduler init; never refreshed as
// nodes join, leave or are marked dead.
this.maximumAllocation =
    Resources.createResource(conf.getInt(
      YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB,
      YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB));
{code}
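Something like the sketch below, hooked into the node-added/node-removed
handling, is what I have in mind (all names here are illustrative, not
existing scheduler members):
{code}
// Sketch only: updateMaximumAllocation, the nodes map, conf and LOG are
// assumed members, not existing scheduler fields. Recompute the effective
// ceiling whenever cluster membership changes, and log when it drops below
// the configured maximum allocation.
private void updateMaximumAllocation() {
  int largestNodeMb = 0;
  for (RMNode node : nodes.values()) {
    largestNodeMb =
        Math.max(largestNodeMb, node.getTotalCapability().getMemory());
  }
  int configuredMb = conf.getInt(
      YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB,
      YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB);
  if (largestNodeMb > 0 && largestNodeMb < configuredMb) {
    LOG.warn("Largest node capability (" + largestNodeMb
        + " MB) has dropped below the configured maximum allocation ("
        + configuredMb + " MB)");
    this.maximumAllocation = Resources.createResource(largestNodeMb);
  } else {
    this.maximumAllocation = Resources.createResource(configuredMb);
  }
}
{code}
That would at least make the mismatch visible in the logs, even if we keep
the scheduling behavior unchanged.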
> Infinitely assigning containers when the required resource exceeds the
> cluster's absolute capacity
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-389
> URL: https://issues.apache.org/jira/browse/YARN-389
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Zhijie Shen
> Assignee: Omkar Vinit Joshi
>
> I've run the wordcount example on branch-2 and trunk. I've set
> yarn.nodemanager.resource.memory-mb to 1G and
> yarn.app.mapreduce.am.resource.mb to 1.5G, so the resourcemanager will try
> to assign a 2G container for the AM (the 1.5G request is rounded up to the
> next multiple of the 1G minimum allocation). However, no nodemanager has
> enough memory for that container, and the assignment operation is repeated
> infinitely because it can never succeed. Logs follow.
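For reference, a self-contained illustration of the arithmetic above,
assuming the default 1024 MB minimum allocation (the roundUp helper is made
up for this example, not a YARN API):
{code}
// Standalone illustration: YARN normalizes each request up to a multiple of
// yarn.scheduler.minimum-allocation-mb (1024 MB by default), so 1.5G -> 2G.
public class NormalizeExample {
  // Assumed helper for this example only.
  static int roundUp(int requestedMb, int minAllocationMb) {
    return ((requestedMb + minAllocationMb - 1) / minAllocationMb)
        * minAllocationMb;
  }

  public static void main(String[] args) {
    int amRequestMb = 1536;  // yarn.app.mapreduce.am.resource.mb = 1.5G
    int minAllocMb = 1024;   // yarn.scheduler.minimum-allocation-mb default
    int nmMemoryMb = 1024;   // yarn.nodemanager.resource.memory-mb = 1G

    int containerMb = roundUp(amRequestMb, minAllocMb);  // 2048
    System.out.println("Normalized AM container: " + containerMb + " MB");
    // 2048 MB exceeds every node's 1024 MB capacity, so no assignment can
    // ever succeed and the scheduler retries forever.
    System.out.println("Fits on a node? " + (containerMb <= nmMemoryMb));
  }
}
{code}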