[ https://issues.apache.org/jira/browse/MAPREDUCE-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12768265#action_12768265 ]

rahul k singh commented on MAPREDUCE-1105:
------------------------------------------

Ran 'ant test' on the Yahoo! distribution; all test cases passed except two:

    [junit] Test org.apache.hadoop.hdfs.server.namenode.TestStartup FAILED
    [junit] Test org.apache.hadoop.fs.loadGenerator.TestLoadGenerator FAILED

Both failures are HDFS-related. The current patch has no bearing on them; it 
only modifies the capacity scheduler.

> CapacityScheduler: It should be possible to set queue hard-limit beyond it's 
> actual capacity
> --------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1105
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1105
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.21.0
>            Reporter: Arun C Murthy
>            Assignee: rahul k singh
>            Priority: Blocker
>             Fix For: 0.21.0
>
>         Attachments: MAPRED-1105-21-1.patch, MAPRED-1105-21-2.patch, 
> MAPRED-1105-21-3.patch, MAPRED-1105-21-3.patch, 
> MAPREDUCE-1105-version20-2.patch, MAPREDUCE-1105-version20.patch.txt, 
> MAPREDUCE-1105-yahoo-version20-3.patch, MAPREDUCE-1105-yahoo-version20-4.patch
>
>
> Currently the CS caps a queue's capacity to its actual capacity if a 
> hard-limit is specified to be greater than its actual capacity. We should 
> allow the queue to go up to the hard-limit if specified.
> Also, I propose we change the hard-limit unit to be a percentage rather than 
> #slots.
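
As a sketch of the proposed change, a queue's hard-limit could be configured 
as a percentage alongside its guaranteed capacity. The property names below 
are illustrative assumptions, not a confirmed API:

    <property>
      <name>mapred.capacity-scheduler.queue.research.capacity</name>
      <value>20</value>
      <!-- guaranteed capacity: 20% of cluster slots -->
    </property>
    <property>
      <name>mapred.capacity-scheduler.queue.research.maximum-capacity</name>
      <value>40</value>
      <!-- hypothetical hard-limit expressed as a percentage (40%),
           allowed to exceed the queue's actual capacity of 20% -->
    </property>

Under this scheme the queue could consume idle slots up to 40% of the cluster 
rather than being capped at its 20% guaranteed capacity.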

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
