[
https://issues.apache.org/jira/browse/HADOOP-1144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493264
]
Hadoop QA commented on HADOOP-1144:
-----------------------------------
+1
http://issues.apache.org/jira/secure/attachment/12356676/HADOOP-1144_20070503_1.patch
applied and successfully tested against trunk revision r534624.
Test results:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/108/testReport/
Console output:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/108/console
> Hadoop should allow a configurable percentage of failed map tasks before
> declaring a job failed.
> ------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1144
> URL: https://issues.apache.org/jira/browse/HADOOP-1144
> Project: Hadoop
> Issue Type: Improvement
> Components: mapred
> Affects Versions: 0.12.0
> Reporter: Christian Kunz
> Assigned To: Arun C Murthy
> Fix For: 0.13.0
>
> Attachments: HADOOP-1144_20070503_1.patch
>
>
> In our environment, some map tasks can fail repeatedly because of corrupt
> input data, which is sometimes non-critical as long as the amount is limited.
> In this case it is annoying that the whole Hadoop job fails and cannot be
> restarted until the corrupt data are identified and eliminated from the
> input. It would be extremely helpful if the job configuration allowed
> specifying how many map tasks are permitted to fail.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.