[ https://issues.apache.org/jira/browse/HADOOP-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12661267#action_12661267 ]

Hudson commented on HADOOP-4979:
--------------------------------

Integrated in Hadoop-trunk #708 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/708/])
    HADOOP-4979. Fix capacity scheduler to block cluster for failed high RAM requirements 
across task types. Contributed by Vivek Ratan.


> Capacity Scheduler does not always return no task to a TT if a job's memory 
> requirements are not met
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4979
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4979
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>            Reporter: Vivek Ratan
>            Assignee: Vivek Ratan
>             Fix For: 0.20.0
>
>         Attachments: 4979.1.patch, 4979.2.patch, 4979.3.patch
>
>
> As per HADOOP-4035, the Capacity Scheduler should return no task to a TT if a 
> job's high-mem requirements are not met. This doesn't always happen. In the 
> Scheduler's assignTasks() method, if a job's map task does not have enough 
> memory to run, the Scheduler looks at reduce tasks, and vice versa. This can 
> result in a case where a reduce task from another job is returned to the TT (if 
> the high-mem job does not have a reduce task to run, for example), thus 
> starving the high-mem job. 
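
For illustration, here is a minimal, self-contained Java sketch of the two
behaviors (hypothetical names and types, not the actual CapacityTaskScheduler
code): the buggy path falls through to the other task type when the
head-of-queue job's memory requirement isn't met, so a reduce task from another
job can be handed out, while the fixed path returns no task at all, keeping the
TT's memory free for the high-mem job.

    import java.util.List;

    class SchedulerSketch {
        enum TaskType { MAP, REDUCE }

        // Hypothetical stand-in for a job in the scheduler's queue.
        static class Job {
            final String name;
            final long memNeededMB;
            final boolean hasMapTask;
            final boolean hasReduceTask;

            Job(String name, long memNeededMB, boolean hasMapTask, boolean hasReduceTask) {
                this.name = name;
                this.memNeededMB = memNeededMB;
                this.hasMapTask = hasMapTask;
                this.hasReduceTask = hasReduceTask;
            }

            boolean hasTask(TaskType type) {
                return type == TaskType.MAP ? hasMapTask : hasReduceTask;
            }
        }

        // Buggy behavior: if the head job's task of one type doesn't fit in
        // the TT's free memory, fall through and try the other task type,
        // where a lower-priority job's task may be picked instead.
        static String assignBuggy(List<Job> queue, long freeMemMB) {
            for (TaskType type : TaskType.values()) {
                for (Job job : queue) {
                    if (!job.hasTask(type)) continue;
                    if (job.memNeededMB <= freeMemMB) return job.name + "/" + type;
                    break; // head job doesn't fit; move on to the other type
                }
            }
            return null;
        }

        // Fixed behavior (what HADOOP-4979 requires): if the head job's
        // memory requirement can't be met, return no task so the TT stays
        // free until enough memory is available for the high-mem job.
        static String assignFixed(List<Job> queue, long freeMemMB) {
            for (TaskType type : TaskType.values()) {
                for (Job job : queue) {
                    if (!job.hasTask(type)) continue;
                    if (job.memNeededMB <= freeMemMB) return job.name + "/" + type;
                    return null; // block: hand the TT nothing this heartbeat
                }
            }
            return null;
        }

        public static void main(String[] args) {
            // High-mem job has only map tasks left; a normal job has a reduce.
            List<Job> queue = List.of(
                    new Job("highMemJob", 4096, true, false),
                    new Job("normalJob", 512, false, true));
            long freeMemMB = 1024; // not enough for the high-mem map

            System.out.println(assignBuggy(queue, freeMemMB)); // normalJob/REDUCE (starvation)
            System.out.println(assignFixed(queue, freeMemMB)); // null (TT held back)
        }
    }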

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
