[ https://issues.apache.org/jira/browse/HADOOP-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12660722#action_12660722 ]
Hemanth Yamijala commented on HADOOP-4979:
------------------------------------------
Patch looks good. Results of test-patch:
[exec] +1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 3 new or modified tests.
[exec]
[exec] +1 javadoc. The javadoc tool did not generate any warning messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 Eclipse classpath. The patch retains Eclipse classpath integrity.
> Capacity Scheduler does not always return no task to a TT if a job's memory
> requirements are not met
> ---------------------------------------------------------------------------------------------------
>
> Key: HADOOP-4979
> URL: https://issues.apache.org/jira/browse/HADOOP-4979
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/capacity-sched
> Reporter: Vivek Ratan
> Fix For: 0.20.0
>
> Attachments: 4979.1.patch, 4979.2.patch
>
>
> As per HADOOP-4035, the Capacity Scheduler should return no task to a TT if a
> job's high mem requirements are not met. This doesn't always happen. In the
> Scheduler's assignTasks() method, if a job's map task does not have enough
> memory to run, the Scheduler looks at reduce tasks, and vice versa. This can
> result
> in a case where a reduce task from another job is returned to the TT (if the
> high-mem job does not have a reduce task to run, for example), thus starving
> the high-mem job.
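
For illustration, here is a minimal Java sketch of the fall-through described in the report above. Every type and method name in it is a hypothetical stand-in, not the actual CapacityTaskScheduler code:

    // Illustrative sketch only; all types and method names below are
    // hypothetical stand-ins, not the real capacity-scheduler API.
    class SchedulerSketch {
        static class Task {}
        static class TaskTracker { int freeMemoryMB; }
        static class Job { int mapMemoryMB; }

        // Buggy shape of assignTasks(): when the map side fails the memory
        // check, control falls through to the reduce side instead of
        // returning null (no task), so a reduce from another job can be
        // handed to the TT and the high-mem job is starved.
        Task assignTasks(TaskTracker tt, Job highMemJob, Job otherJob) {
            if (tt.freeMemoryMB < highMemJob.mapMemoryMB) {
                // BUG: per HADOOP-4035 this should be "return null;" so the
                // tracker's memory stays free for the high-mem job's map.
                return obtainReduceTask(tt, otherJob);
            }
            return obtainMapTask(tt, highMemJob);
        }

        Task obtainMapTask(TaskTracker tt, Job j)    { return new Task(); }
        Task obtainReduceTask(TaskTracker tt, Job j) { return new Task(); }
    }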