[ https://issues.apache.org/jira/browse/HADOOP-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Yamijala updated HADOOP-5884:
-------------------------------------

    Attachment: HADOOP-5884.patch

A slightly modified patch. It mainly makes the comments and debug statements 
in the test cases match the code. The changes are the following:

- Added a comment on the getOrderedQueues method.
- In testUserLimitsForHighMemoryJobs, set the maximum memory for reduce slots 
to 2G instead of 1G, as we are submitting jobs with 2G reduces (see the sketch 
after this list).
- In the same test, the JobConf was being overwritten; I changed that.
- Also, the debug statements did not match the submitted high RAM job (they 
reported 0MB reduces); changed that.
- Also corrected the debug statements in testQueueOrdering.
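
As a rough illustration (not taken from the patch itself), the test change 
implies configuration along these lines. The property names are assumed from 
the 0.20-era memory-aware scheduling configuration and should be checked 
against the actual test code:

    // Hypothetical sketch; uses org.apache.hadoop.mapred.JobConf.
    // Property names are assumed, not copied from the patch.
    JobConf clusterConf = new JobConf();
    // Cluster-wide cap on memory per reduce slot: 2G rather than 1G,
    // since the test submits jobs with 2G reduces.
    clusterConf.setLong("mapred.cluster.max.reduce.memory.mb", 2 * 1024);
    // Use a fresh JobConf per submitted job instead of overwriting a
    // shared one; the high-RAM job requests 2G per reduce task.
    JobConf jobConf = new JobConf(clusterConf);
    jobConf.setLong("mapred.job.reduce.memory.mb", 2 * 1024);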

> Capacity scheduler should account high memory jobs as using more capacity of 
> the queue
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5884
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5884
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>         Attachments: HADOOP-5884-20090529.1.txt, HADOOP-5884-20090602.1.txt, 
> HADOOP-5884-20090603.txt, HADOOP-5884.patch
>
>
> Currently, when the capacity scheduler schedules a high memory job, each 
> scheduled task counts only once against the queue's capacity, even though 
> it may be preventing other jobs from using spare slots on that node 
> because of its higher memory requirements. To be fair, the capacity 
> scheduler should account high memory jobs proportionally (with respect to 
> the default memory) as using a larger share of the queue's capacity.
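
To make the proposed accounting concrete, here is a minimal sketch of 
proportional slot counting, assuming a default per-slot memory and a 
per-task memory requirement. The method name and signature are illustrative, 
not taken from the scheduler code:

    // Illustrative only: how many default-memory slots a task
    // effectively occupies, rounding up.
    static int slotsOccupied(long taskMemMB, long slotMemMB) {
        if (taskMemMB <= slotMemMB) {
            return 1; // a normal task occupies one slot
        }
        // e.g. a 2048MB task on 1024MB slots occupies 2 slots
        return (int) ((taskMemMB + slotMemMB - 1) / slotMemMB);
    }

With a 1G default slot, a job with 2G reduces would then count each reduce 
as two slots against the queue's capacity, which is the proportional 
accounting the issue asks for.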

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
