[ https://issues.apache.org/jira/browse/HADOOP-4979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12660711#action_12660711 ]

Hemanth Yamijala commented on HADOOP-4979:
------------------------------------------

Vivek, the fix looks good.

Some minor comments on the test case:
- Can you please rename the test to something more descriptive? One suggestion 
is testBlockingAcrossTaskTypes() or something similar.
- Also, the number of reduce tasks in the first job should be 0. Otherwise, 
even the buggy code would end up returning null from assignTasks, because the 
first job's own reduces (or maps) would be looked at first and the tracker 
would block anyway, so the test would not expose the bug (see the sketch after 
these comments).
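
To make the second point concrete, here is a rough sketch of how such a test 
could be set up. The helper names (submitJobAndInit, tracker) and the way the 
memory requirement is configured are placeholders in the spirit of the existing 
scheduler tests, not the actual test API:

    // Sketch only: helper names and signatures are assumptions, not the real test API.
    public void testBlockingAcrossTaskTypes() throws IOException {
      // First job: high memory requirement, maps only. With 0 reduces, the
      // scheduler cannot fall back to this job's own reduces (which would
      // return null even with the buggy code and hide the bug).
      JobConf highMemConf = new JobConf();
      highMemConf.setNumMapTasks(1);
      highMemConf.setNumReduceTasks(0);   // the key point of the comment above
      // ... configure the high memory requirement on highMemConf ...
      submitJobAndInit(highMemConf, "default", "user1");

      // Second job: normal memory requirements, with a reduce task pending.
      JobConf normalConf = new JobConf();
      normalConf.setNumMapTasks(1);
      normalConf.setNumReduceTasks(1);
      submitJobAndInit(normalConf, "default", "user1");

      // A TT that lacks memory for the high-mem job's map should get no task
      // at all; in particular it must not be handed the second job's reduce.
      assertNull(scheduler.assignTasks(tracker("tt1")));
    }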

> Capacity Scheduler does not always return no task to a TT if a job's memory 
> requirements are not met
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4979
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4979
>             Project: Hadoop Core
>          Issue Type: Bug
>            Reporter: Vivek Ratan
>         Attachments: 4979.1.patch
>
>
> As per HADOOP-4035, the Capacity Scheduler should return no task to a TT if a 
> job's high-mem requirements are not met. This doesn't always happen. In the 
> Scheduler's assignTasks() method, if a job's map task does not have enough 
> memory to run, the Scheduler looks at reduce tasks, and vice-versa. This can 
> result in a case where a reduce task from another job is returned to the TT 
> (if the high-mem job does not have a reduce task to run, for example), thus 
> starving the high-mem job. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
