[
https://issues.apache.org/jira/browse/YARN-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529840#comment-14529840
]
Jian He commented on YARN-1680:
-------------------------------
My thinking is that even if we do the headroom calculation on the client side,
the scheduler still requires some corresponding per-app logic for the headroom
calculation, and that scheduler piece may end up duplicating a subset of the
client-side logic plus the corresponding protocol changes. In that sense, I
think it's simpler to do this inside the scheduler. Doing the calculation in
one place also gives a more accurate snapshot than doing it in multiple
places. A rough sketch of the per-app adjustment is below.
Also, changing MapReduce to use AMRMClient is non-trivial work.
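
As a rough illustration of the scheduler-side adjustment (a minimal,
self-contained sketch; class and member names like NodeReport, appBlacklist,
and headroomFor are illustrative, not the actual YARN-1680 patch): the
per-app headroom is the cluster-wide free memory minus the free memory
sitting on nodes that the application has blacklisted, floored at zero.

{code:java}
import java.util.List;
import java.util.Set;

public class HeadroomSketch {

    /** Minimal stand-in for a node's free capacity (memory in MB). */
    static final class NodeReport {
        final String host;
        final long availableMB;
        NodeReport(String host, long availableMB) {
            this.host = host;
            this.availableMB = availableMB;
        }
    }

    /**
     * Headroom reported to the AM: cluster-wide free memory minus the
     * free memory on nodes this application has blacklisted, floored
     * at zero so the AM never sees negative headroom.
     */
    static long headroomFor(long clusterAvailableMB,
                            List<NodeReport> nodes,
                            Set<String> appBlacklist) {
        long blacklistedFreeMB = 0;
        for (NodeReport node : nodes) {
            if (appBlacklist.contains(node.host)) {
                blacklistedFreeMB += node.availableMB;
            }
        }
        return Math.max(0, clusterAvailableMB - blacklistedFreeMB);
    }

    public static void main(String[] args) {
        // Scenario from the report: 4 NMs x 8GB = 32GB total, 29GB in
        // use, NM-4 blacklisted. Raw headroom is 3GB, but all of it
        // sits on the blacklisted NM-4, so the corrected headroom is 0
        // and the AM knows it must preempt a reducer to run the map.
        List<NodeReport> nodes = List.of(
            new NodeReport("nm-1", 0),
            new NodeReport("nm-2", 0),
            new NodeReport("nm-3", 0),
            new NodeReport("nm-4", 3 * 1024));
        long headroomMB = headroomFor(3 * 1024, nodes, Set.of("nm-4"));
        System.out.println("Corrected headroom (MB): " + headroomMB); // 0
    }
}
{code}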
> availableResources sent to applicationMaster in heartbeat should exclude
> blacklistedNodes free memory.
> ------------------------------------------------------------------------------------------------------
>
> Key: YARN-1680
> URL: https://issues.apache.org/jira/browse/YARN-1680
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: capacityscheduler
> Affects Versions: 2.2.0, 2.3.0
> Environment: SuSE 11 SP2 + Hadoop-2.3
> Reporter: Rohith
> Assignee: Craig Welch
> Attachments: YARN-1680-WIP.patch, YARN-1680-v2.patch,
> YARN-1680-v2.patch, YARN-1680.patch
>
>
> There are 4 NodeManagers with 8GB each, so total cluster capacity is 32GB.
> Cluster slow start is set to 1.
> A job is running whose reducer tasks occupy 29GB of the cluster. One
> NodeManager (NM-4) became unstable (3 map tasks got killed), so the
> MRAppMaster blacklisted the unstable NodeManager (NM-4). All reducer tasks
> are now running in the cluster.
> The MRAppMaster does not preempt the reducers because the reducer-preemption
> calculation uses a headroom that still counts the blacklisted node's memory.
> This makes the job hang forever: the ResourceManager does not assign any new
> containers on blacklisted nodes, but the availableResources it returns still
> reflects total cluster free memory.
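
To make the deadlock concrete, here is a minimal, hypothetical sketch of the
AM-side decision (method and variable names are illustrative; the real logic
lives in MapReduce's RMContainerAllocator): the AM preempts a reducer only
when the reported headroom cannot fit the pending map, so an inflated
headroom keeps the check false forever.

{code:java}
public class PreemptionSketch {

    /**
     * Simplified reducer-preemption check: preempt only when the
     * reported headroom cannot fit the pending map request.
     */
    static boolean shouldPreemptReducer(long headroomMB, long pendingMapMB) {
        return headroomMB < pendingMapMB;
    }

    public static void main(String[] args) {
        long pendingMapMB = 1024;       // one failed map waiting to rerun
        long reportedHeadroomMB = 3072; // includes 3GB free on blacklisted NM-4

        // Headroom appears sufficient, so the AM never preempts a reducer;
        // but the RM will not actually place the map on the blacklisted
        // node, so the map waits forever -> the job hangs.
        System.out.println("preempt? "
            + shouldPreemptReducer(reportedHeadroomMB, pendingMapMB)); // false
    }
}
{code}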