[ https://issues.apache.org/jira/browse/HADOOP-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715384#action_12715384 ]
Hemanth Yamijala commented on HADOOP-5884:
------------------------------------------
Some comments on test cases:
- testClusterBlockingForLackOfMemory needs updates to validate the number of
slots
- I think we may need two additional tests:
-- We should have a test for the change that sorts queues by occupied slots
rather than by running tasks. We could do this by submitting 2 jobs to 2
queues, one normal and the other high RAM. We can then check that, across
assignTasks calls, for every task of the high RAM job that is scheduled, two
tasks of the normal job are (assuming a task of the high RAM job takes 2
slots); see the sketch after this list.
-- We should have a check on user limits: a high RAM job should hit its user
limit twice as fast as a normal job, again assuming 2 slots for a task of the
high RAM job.
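
For illustration only, here is a minimal, self-contained simulation of the
first check. It does not use the scheduler's actual test harness; the Queue
class, its fields, and the 12-call loop are all hypothetical. It assumes a
high RAM task occupies 2 slots and that queues are re-sorted by occupied
slots before each assignment:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch, not the actual capacity-scheduler test harness.
    public class SlotSortingSketch {
      static class Queue {
        final String name;
        final int slotsPerTask;   // 1 for the normal job, 2 for the high RAM job
        int occupiedSlots = 0;
        int runningTasks = 0;
        Queue(String name, int slotsPerTask) {
          this.name = name;
          this.slotsPerTask = slotsPerTask;
        }
      }

      public static void main(String[] args) {
        Queue normal = new Queue("normal", 1);
        Queue highRam = new Queue("highRAM", 2);
        List<Queue> queues = new ArrayList<>(List.of(normal, highRam));

        // Each call picks the queue with the fewest occupied slots,
        // mimicking sorting by slots rather than by running tasks.
        for (int call = 0; call < 12; call++) {
          queues.sort(Comparator.comparingInt(q -> q.occupiedSlots));
          Queue picked = queues.get(0);
          picked.occupiedSlots += picked.slotsPerTask;
          picked.runningTasks++;
        }

        // Slots stay balanced (8 vs 8), so the normal job has run twice
        // as many tasks as the high RAM job: 8 vs 4.
        System.out.println(normal.name + " tasks: " + normal.runningTasks);
        System.out.println(highRam.name + " tasks: " + highRam.runningTasks);
      }
    }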
> Capacity scheduler should account high memory jobs as using more capacity of
> the queue
> --------------------------------------------------------------------------------------
>
> Key: HADOOP-5884
> URL: https://issues.apache.org/jira/browse/HADOOP-5884
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/capacity-sched
> Reporter: Hemanth Yamijala
> Assignee: Vinod K V
> Attachments: HADOOP-5884-20090529.1.txt
>
>
> Currently, when a high memory job is scheduled by the capacity scheduler,
> each scheduled task counts only once against the queue's capacity, even
> though it may prevent other jobs from using spare slots on that node because
> of its higher memory requirements. To be fair, the capacity scheduler should
> account high memory jobs as using proportionally more of the queue's
> capacity, relative to the default memory per slot.
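
For illustration, a minimal sketch of the proportional accounting the
description asks for, assuming a hypothetical slotsPerTask helper that divides
a task's memory requirement by the default per-slot memory and rounds up
(names are illustrative, not from the attached patch):

    // Hypothetical helper, not from HADOOP-5884-20090529.1.txt.
    public final class HighRamAccounting {
      private HighRamAccounting() {}

      // Number of slots a task should count for, proportional to its memory
      // requirement relative to the default memory per slot, rounded up so
      // capacity is never under-counted.
      // e.g. slotsPerTask(1024, 512) == 2; slotsPerTask(512, 512) == 1
      public static int slotsPerTask(long taskMemoryMB, long defaultSlotMemoryMB) {
        if (taskMemoryMB <= 0 || defaultSlotMemoryMB <= 0) {
          throw new IllegalArgumentException("memory values must be positive");
        }
        return (int) ((taskMemoryMB + defaultSlotMemoryMB - 1) / defaultSlotMemoryMB);
      }
    }

A queue's used capacity would then be summed in these slot units rather than
in raw task counts.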