[ https://issues.apache.org/jira/browse/MAPREDUCE-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13446078#comment-13446078 ]

Arun C Murthy commented on MAPREDUCE-4613:
------------------------------------------

Vasco - the fix will ship in 2.1.0, which is due very soon. Thanks.
                
> Scheduling of reduce tasks results in starvation
> ------------------------------------------------
>
>                 Key: MAPREDUCE-4613
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4613
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 0.23.1, 2.0.1-alpha
>         Environment: 16 (duo core) machine cluster ==> 32 containers
> namenode and resourcemanager running on separate 17th machine
>            Reporter: Vasco
>         Attachments: scheduling.png
>
>
> If a job has more reduce tasks than there are containers available, the reduce 
> tasks can occupy all of the containers and starve the still-pending map tasks. 
> The attached graph illustrates the behaviour. The scheduler used is FIFO.
> I understand that the correct behaviour, when all containers are taken by 
> reducers while map tasks are still pending, is for the running reducers to be 
> preempted. However, preemption does not occur.
> A work-around is to set the number of reducers below the number of available 
> containers.
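
A minimal sketch of the work-around described above, assuming the Hadoop 2.x Job API. The container count (32) is taken from the environment line in the report; the headroom of 8 containers left for map tasks is an arbitrary illustrative choice, not a recommendation from the issue.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CappedReducerJob {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "capped-reducer-job");

        // Cluster capacity from the report: 16 dual-core nodes => 32 containers.
        int clusterContainers = 32;

        // Work-around: keep the number of reduce tasks strictly below the
        // container count so pending map tasks can still obtain containers.
        int numReduces = clusterContainers - 8; // illustrative headroom for maps
        job.setNumReduceTasks(numReduces);

        // Equivalent configuration-property form:
        // conf.setInt("mapreduce.job.reduces", numReduces);

        // ... set mapper, reducer, input and output paths as usual ...
      }
    }

The same cap can be applied without recompiling by passing -Dmapreduce.job.reduces=<n> on the job command line.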
