[ https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242301#comment-15242301 ]
Hadoop QA commented on YARN-3126:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} YARN-3126 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12733746/resourcelimit-test.patch |
| JIRA Issue | YARN-3126 |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/11091/console |
| Powered by | Apache Yetus 0.2.0 http://yetus.apache.org |
This message was automatically generated.
> FairScheduler: queue's usedResource is always more than the maxResource limit
> -----------------------------------------------------------------------------
>
> Key: YARN-3126
> URL: https://issues.apache.org/jira/browse/YARN-3126
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 2.3.0
> Environment: Hadoop 2.3.0, fair scheduler, Spark 1.1.0
> Reporter: Xia Hu
> Labels: BB2015-05-TBR, assignContainer, fairscheduler, resources
> Fix For: trunk-win
>
> Attachments: resourcelimit-02.patch, resourcelimit-test.patch,
> resourcelimit.patch
>
>
> When submitting a Spark application (in both spark-on-yarn-cluster and
> spark-on-yarn-client mode), the queue's usedResources assigned by the
> fair scheduler can exceed the queue's maxResources limit.
> From reading the fair scheduler code, I believe this happens because the
> requested resources are not checked against the limit when a container
> is assigned.
> Here are the details:
> 1. Choose a queue. In this step, assignContainerPreCheck verifies that
> the queue's usedResource is not already bigger than its max.
> 2. Then choose an application in that queue.
> 3. Then choose a container. Here is the problem: there is no check
> whether this container would push the queue's resources over its max
> limit. If a queue's usedResource is 13G and the maxResource limit is
> 16G, a container asking for 4G may still be assigned successfully (see
> the sketch after the quoted description).
> This problem is easy to hit with Spark applications, because different
> applications can ask for different container sizes.
> By the way, I have already applied the patch from YARN-2083.
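A minimal sketch of the check the reporter describes as missing in step 3, assuming standard YARN resource types; the method name fitsInQueueMaxShare and the standalone class are illustrative, not the attached patch:

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class QueueMaxShareCheck {

  // Hypothetical pre-assignment check: allow the container only if the
  // queue's current usage plus the incoming request still fits within the
  // queue's configured maxResources.
  static boolean fitsInQueueMaxShare(Resource used, Resource maxShare,
      Resource request) {
    // Resources.add returns the usage as if the container were assigned;
    // Resources.fitsIn is true only if every dimension (memory, vcores)
    // fits within maxShare.
    return Resources.fitsIn(Resources.add(used, request), maxShare);
  }

  public static void main(String[] args) {
    // The reporter's example: used = 13G, max = 16G, request = 4G.
    Resource used = Resource.newInstance(13 * 1024, 13);
    Resource max = Resource.newInstance(16 * 1024, 16);
    Resource request = Resource.newInstance(4 * 1024, 1);
    // Prints false: 13G + 4G = 17G exceeds the 16G max, so the assignment
    // should be rejected rather than succeeding as described above.
    System.out.println(fitsInQueueMaxShare(used, max, request));
  }
}
{code}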
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)