[
https://issues.apache.org/jira/browse/YARN-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16580892#comment-16580892
]
Yeliang Cang commented on YARN-8668:
------------------------------------
Submitted a patch to resolve this!
> Inconsistency between capacity and fair scheduler when computing node
> available resources
> ----------------------------------------------------------------------------------------------------
>
> Key: YARN-8668
> URL: https://issues.apache.org/jira/browse/YARN-8668
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Yeliang Cang
> Assignee: Yeliang Cang
> Priority: Major
> Attachments: YARN-8668.001.patch
>
>
> We have observed that, with CapacityScheduler and DefaultResourceCalculator,
> a node with a large amount of memory running a heavy workload can end up
> with negative available vcores!
> I noticed that CapacityScheduler.java uses the code below to decide whether
> a node has resources available for allocating containers:
> {code}
> if (calculator.computeAvailableContainers(Resources
>     .add(node.getUnallocatedResource(), node.getTotalKillableResources()),
>     minimumAllocation) <= 0) {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("This node or this node partition doesn't have available or"
>         + "killable resource");
>   }
> {code}
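> To illustrate why the memory-only check can drive available vcores negative,
> here is a minimal, self-contained sketch (these are simplified stand-ins, not
> the actual Hadoop classes: the two static methods below mirror
> DefaultResourceCalculator#computeAvailableContainers, which divides by memory
> only, and Resources.fitsIn, which checks every dimension):
> {code}
> // Simplified stand-in for org.apache.hadoop.yarn.api.records.Resource.
> class SimpleResource {
>   final long memoryMB;
>   final int vcores;
>   SimpleResource(long memoryMB, int vcores) {
>     this.memoryMB = memoryMB;
>     this.vcores = vcores;
>   }
> }
>
> public class SchedulerCheckSketch {
>   // Mirrors DefaultResourceCalculator#computeAvailableContainers:
>   // only memory is considered, vcores are ignored.
>   static long availableContainersMemoryOnly(SimpleResource available,
>       SimpleResource required) {
>     return available.memoryMB / required.memoryMB;
>   }
>
>   // Mirrors Resources.fitsIn(smaller, bigger): every dimension must fit.
>   static boolean fitsIn(SimpleResource smaller, SimpleResource bigger) {
>     return smaller.memoryMB <= bigger.memoryMB
>         && smaller.vcores <= bigger.vcores;
>   }
>
>   public static void main(String[] args) {
>     // A node with plenty of unallocated memory but no unallocated vcores.
>     SimpleResource unallocated = new SimpleResource(64 * 1024, 0);
>     SimpleResource minimumAllocation = new SimpleResource(1024, 1);
>
>     // The memory-only check still reports room for 64 containers, so the
>     // scheduler keeps assigning containers and available vcores go negative.
>     System.out.println(
>         availableContainersMemoryOnly(unallocated, minimumAllocation)); // 64
>     // The per-dimension check correctly rejects the node.
>     System.out.println(fitsIn(minimumAllocation, unallocated)); // false
>   }
> }
> {code}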
> while in FairScheduler's FSAppAttempt.java, similar code is used:
> {code}
> // Can we allocate a container on this node?
> if (Resources.fitsIn(capability, available)) {
>   ...
> }
> {code}
> Why this inconsistency? I think we should use
> Resources.fitsIn(smaller, bigger) in CapacityScheduler as well, so that all
> resource dimensions are checked!
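> Roughly, the change could look like the sketch below. This is an illustrative
> fragment only, not the attached YARN-8668.001.patch; it reuses the node,
> minimumAllocation and LOG names from the CapacityScheduler excerpt above and
> assumes the surrounding method context:
> {code}
> // Check every resource dimension, as FairScheduler does, instead of the
> // memory-only computeAvailableContainers() result.
> Resource availableAndKillable = Resources.add(
>     node.getUnallocatedResource(), node.getTotalKillableResources());
> if (!Resources.fitsIn(minimumAllocation, availableAndKillable)) {
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("This node or this node partition doesn't have available or"
>         + " killable resource");
>   }
>   // ... then skip this node, as in the original check
> }
> {code}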
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]