[ https://issues.apache.org/jira/browse/HADOOP-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493086 ]

Hadoop QA commented on HADOOP-1304:
-----------------------------------

+1

http://issues.apache.org/jira/secure/attachment/12356628/1304.patch applied and 
successfully tested against trunk revision r534234.

Test results:   
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/101/testReport/
Console output: 
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/101/console

> MAX_TASK_FAILURES should be configurable
> ----------------------------------------
>
>                 Key: HADOOP-1304
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1304
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.12.3
>            Reporter: Christian Kunz
>         Assigned To: Devaraj Das
>         Attachments: 1304.patch, 1304.patch, 1304.patch, 1304.patch
>
>
> After a couple of weeks of failed attempts, I was able to finish a large job 
> only after changing MAX_TASK_FAILURES to a higher value. In light of 
> HADOOP-1144 (allowing a certain number of task failures without failing the 
> job), it would be even better if this value could be configured separately for 
> mappers and reducers, because the success of a job often requires the success 
> of all reducers but not of all mappers.
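
A per-task-type configuration along the lines the reporter asks for might look like the sketch below. The property names `mapred.map.max.attempts` and `mapred.reduce.max.attempts`, the values, and the `hadoop-site.xml` placement are assumptions for illustration, not quoted from the attached patch:

```xml
<!-- hadoop-site.xml (sketch): hypothetical per-task-type retry limits.
     Property names and defaults are assumptions, not taken from 1304.patch. -->
<property>
  <name>mapred.map.max.attempts</name>
  <value>8</value>
  <description>Maximum attempts per map task before the job fails.
  Set higher than the reduce limit when the job can tolerate
  retried or partially failed maps (cf. HADOOP-1144).</description>
</property>
<property>
  <name>mapred.reduce.max.attempts</name>
  <value>4</value>
  <description>Maximum attempts per reduce task. Kept stricter,
  since job success typically requires every reducer to
  succeed.</description>
</property>
```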

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
