[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13159834#comment-13159834
 ] 

Subroto Sanyal commented on MAPREDUCE-3473:
-------------------------------------------

*mapreduce.map.failures.maxpercent* and *mapreduce.reduce.failures.maxpercent* 
hold the percentage of failed map and reduce tasks, respectively, that a Job 
will tolerate before the Job itself is marked as failed.

If a map task fails but the failure stays within this tolerance limit, the 
output of that mapper is lost (it will not be considered for further 
computation). The same applies to a reducer.
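
For illustration, here is a minimal sketch of how a user would opt in to such a 
tolerance explicitly, using the classic *org.apache.hadoop.mapred* JobConf API 
(the 5% value and the driver/class names below are placeholders, not a 
recommendation):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class FailureTolerantDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(FailureTolerantDriver.class);
    conf.setJobName("failure-tolerant-example");

    // Explicit opt-in: tolerate up to 5% failed map tasks and 5% failed
    // reduce tasks before the whole job is declared failed (default is 0).
    // This corresponds to mapreduce.map.failures.maxpercent /
    // mapreduce.reduce.failures.maxpercent in the job configuration.
    conf.setMaxMapTaskFailuresPercent(5);
    conf.setMaxReduceTaskFailuresPercent(5);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // Mapper/Reducer classes would be set here as in any normal job.

    JobClient.runJob(conf);
  }
}
{code}

Because the user sets the percentage themselves, they are also implicitly 
accepting that the output of tasks which fail within that margin is simply 
dropped.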

I suggest we let the user decide this failure percentage and accept the 
possibility of such data loss; otherwise, a non-zero value set on the user's 
behalf will come as a surprise.

Further, I feel there is no single correct non-zero default for these 
configurations; suitable values depend on the user's scenarios/use-cases.
                
> Task failures shouldn't result in Job failures 
> -----------------------------------------------
>
>                 Key: MAPREDUCE-3473
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3473
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: tasktracker
>    Affects Versions: 0.20.205.0, 0.23.0
>            Reporter: Eli Collins
>
> Currently some task failures may result in job failures. E.g. a local TT disk 
> failure seen in TaskLauncher#run, TaskRunner#run, and MapTask#run is visible 
> to the JobClient and can hang it, causing the job to fail. Job execution 
> should always be able to survive a task failure if there are sufficient 
> resources. 
