[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13065318#comment-13065318
 ] 

Robert Joseph Evans commented on MAPREDUCE-2324:
------------------------------------------------

OK, I have thought about it and talked with some people about 
statistics/scheduling and the like, and I have come to the following 
conclusion.

We should add a new configuration parameter called 
mapreduce.reduce.input.limit.attempt.factor

This value would default to 1.0 and would determine how many times a reduce 
task can be rejected, because its estimated input size will not fit on any 
node, before the job is killed.  So if (#failedAttempts > (#ofActiveNodes * 
attempt.factor)) then kill the job. 
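A minimal sketch of that check, assuming hypothetical names (this is not the actual JobInProgress code; the class, method, and field names below are illustrative only):

```java
// Sketch of the proposed kill heuristic. A reduce attempt counts as
// "rejected" when no TaskTracker has enough local disk for the
// estimated reduce input size.
public class ReduceScheduleLimit {

    // Value of the proposed mapreduce.reduce.input.limit.attempt.factor,
    // defaulting to 1.0 as described above.
    private final double attemptFactor;

    public ReduceScheduleLimit(double attemptFactor) {
        this.attemptFactor = attemptFactor;
    }

    /**
     * Returns true if the job should be killed: the reduce task has
     * been rejected more times than (#ofActiveNodes * attempt.factor),
     * i.e. it has effectively been refused by the whole cluster.
     */
    public boolean shouldKillJob(int failedAttempts, int activeNodes) {
        return failedAttempts > activeNodes * attemptFactor;
    }
}
```

With the default factor of 1.0, a reduce rejected on more attempts than there are active nodes triggers the kill; raising the factor gives the scheduler extra passes over the cluster before giving up.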

> Job should fail if a reduce task can't be scheduled anywhere
> ------------------------------------------------------------
>
>                 Key: MAPREDUCE-2324
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2324
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 0.20.2, 0.20.205.0
>            Reporter: Todd Lipcon
>            Assignee: Robert Joseph Evans
>
> If there's a reduce task that needs more disk space than is available on any 
> mapred.local.dir in the cluster, that task will stay pending forever. For 
> example, we produced this in a QA cluster by accidentally running terasort 
> with one reducer - since no mapred.local.dir had 1T free, the job remained in 
> pending state for several days. The reason for the "stuck" task wasn't clear 
> from a user perspective until we looked at the JT logs.
> Probably better to just fail the job if a reduce task goes through all TTs 
> and finds that there isn't enough space.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
