[
https://issues.apache.org/jira/browse/MAPREDUCE-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13159114#comment-13159114
]
Eli Collins commented on MAPREDUCE-3473:
----------------------------------------
Ah, I didn't realize these defaulted to zero, thanks for pointing this out.
Does anyone know the rationale behind having jobs not tolerate a single task
failure by default? From reading HADOOP-1144 it seems this was chosen simply
because it was the initial behavior, before the code could handle task failures.
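
For context, a hedged guess at the knobs in question: the per-job failed-task
percentages (mapred.max.map.failures.percent / mapred.max.reduce.failures.percent
in 0.20, mapreduce.map.failures.maxpercent / mapreduce.reduce.failures.maxpercent
in 0.23) default to 0, so a single task that exhausts all of its attempts fails
the whole job. A minimal sketch, assuming the old-style org.apache.hadoop.mapred
API, of a job that opts into tolerating a small fraction of failed tasks (class
name and paths are illustrative only):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class FailureTolerantJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(FailureTolerantJob.class);
    conf.setJobName("failure-tolerant-example");

    // Default is 0: if any single task exhausts its attempts, the job fails.
    // Allow up to 5% of map tasks and 5% of reduce tasks to fail outright
    // without failing the job.
    conf.setMaxMapTaskFailuresPercent(5);
    conf.setMaxReduceTaskFailuresPercent(5);

    // Each task still gets the usual number of attempts (4 by default)
    // before it counts as failed against the percentage above.
    conf.setMaxMapAttempts(4);
    conf.setMaxReduceAttempts(4);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}
{code}

The same properties can presumably also be set at submission time (e.g. with
-Dmapred.max.map.failures.percent=5) if the driver parses generic options via
ToolRunner, rather than in the driver code itself.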
> Task failures shouldn't result in Job failures
> -----------------------------------------------
>
> Key: MAPREDUCE-3473
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3473
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: tasktracker
> Affects Versions: 0.20.205.0, 0.23.0
> Reporter: Eli Collins
>
> Currently some task failures may result in job failures. E.g. a local TT disk
> failure seen in TaskLauncher#run, TaskRunner#run, or MapTask#run is visible to,
> and can hang, the JobClient, causing the job to fail. Job execution should
> always be able to survive a task failure if there are sufficient resources.