[
https://issues.apache.org/jira/browse/YARN-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153547#comment-15153547
]
Brahma Reddy Battula commented on YARN-4678:
--------------------------------------------
[~sunilg] thanks for the patch. The approach looks good to me.
*One comment:*
The following test case needs to be corrected, since a new field ({{reservedCapacity}}) was introduced:
In TestRMWebServicesCapacitySched#verifyClusterScheduler(JSONObject json),
{{assertEquals("incorrect number of elements", 8, info.length());}} should
now expect 9.
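A minimal sketch of the count check being discussed. The field names other than {{reservedCapacity}} are illustrative assumptions standing in for the real {{schedulerInfo}} JSON keys; the point is only that adding one field moves the expected element count from 8 to 9.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchedulerInfoCountSketch {
    public static void main(String[] args) {
        // Stand-in for the schedulerInfo JSONObject returned by the RM REST API.
        // Keys below (other than reservedCapacity) are assumptions for illustration.
        Map<String, Object> info = new LinkedHashMap<>();
        info.put("capacity", 100.0f);
        info.put("usedCapacity", 0.0f);
        info.put("maxCapacity", 100.0f);
        info.put("queueName", "root");
        info.put("queues", new Object());
        info.put("health", new Object());
        info.put("type", "capacityScheduler");
        info.put("usedResources", "<memory:0, vCores:0>");
        info.put("reservedCapacity", 0.0f); // new field added by this patch
        // Before the patch the test asserted 8 elements; with reservedCapacity
        // present the count becomes 9, so the assertion must be updated.
        if (info.size() != 9) {
            throw new AssertionError("incorrect number of elements");
        }
        System.out.println(info.size());
    }
}
```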
> Cluster used capacity is > 100 when container reserved
> -------------------------------------------------------
>
> Key: YARN-4678
> URL: https://issues.apache.org/jira/browse/YARN-4678
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Attachments: 0001-YARN-4678.patch
>
>
> *Scenario:*
> * Start a cluster with three NMs, each having 8 GB (cluster memory: 24 GB).
> * Configure queues with elasticity and userlimitfactor=10.
> * Disable preemption.
> * Run two jobs with different priorities in different queues at the same time:
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=LOW
> -Dmapreduce.job.queuename=QueueA -Dmapreduce.map.memory.mb=4096
> -Dyarn.app.mapreduce.am.resource.mb=1536
> -Dmapreduce.job.reduce.slowstart.completedmaps=1.0 10 1000000000000
> ** yarn jar hadoop-mapreduce-examples-2.7.2.jar pi -Dyarn.app.priority=HIGH
> -Dmapreduce.job.queuename=QueueB -Dmapreduce.map.memory.mb=4096
> -Dyarn.app.mapreduce.am.resource.mb=1536 3 1000000000000
> * Observe the used cluster capacity in the RM web UI.
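The arithmetic behind the reported symptom can be illustrated as follows. This is not YARN code; it assumes (per the issue title) that reserved container memory is counted into "used" memory, so a fully allocated cluster with one 4 GB container reserved reports a used capacity above 100%. The 24 GB cluster size comes from the scenario; the single 4 GB reservation is an assumption.

```java
public class ReservedCapacityDemo {
    public static void main(String[] args) {
        int clusterMB = 24 * 1024; // three 8 GB NMs, as in the scenario
        int usedMB = 24 * 1024;    // every NM fully allocated (assumption)
        int reservedMB = 4096;     // one 4 GB map container reserved (assumption)
        // If reserved memory is folded into used memory, the percentage
        // can exceed 100 even though no node is over-committed.
        float usedCapacity = 100f * (usedMB + reservedMB) / clusterMB;
        System.out.println(usedCapacity); // a value above 100
        if (usedCapacity <= 100f) {
            throw new AssertionError("expected used capacity > 100");
        }
    }
}
```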
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)