[ https://issues.apache.org/jira/browse/HADOOP-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623271#action_12623271 ]

Amareshwari Sriramadasu commented on HADOOP-3462:
-------------------------------------------------

Shall we have a maximum number of allowed FAILED_FRAMEWORK attempts per job? Say, 
if 10% of the cluster's tasktrackers get blacklisted because of internal failures, 
then kill the job.
Thoughts?
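
For concreteness, a minimal sketch of what such a check could look like as per-job 
bookkeeping. All names here (FrameworkFailurePolicy, MAX_BLACKLIST_FRACTION, and the 
counters) are hypothetical, not existing mapred fields:

{code:java}
// Hypothetical sketch only: tracks, per job, how many tasktrackers were
// blacklisted because of internal (FAILED_FRAMEWORK) failures, and flags
// the job for killing once that exceeds a fraction of the cluster.
public class FrameworkFailurePolicy {
  // Assumed threshold: kill the job once 10% of the cluster's
  // tasktrackers are blacklisted for it due to internal failures.
  private static final double MAX_BLACKLIST_FRACTION = 0.10;

  private final int clusterTrackerCount;        // total tasktrackers in the cluster
  private int frameworkBlacklistedTrackers = 0; // trackers blacklisted for this job
                                                // because of FAILED_FRAMEWORK attempts

  public FrameworkFailurePolicy(int clusterTrackerCount) {
    this.clusterTrackerCount = clusterTrackerCount;
  }

  // Called when a tracker is blacklisted for this job because of an
  // internal failure rather than an ordinary task failure.
  public void onFrameworkBlacklist() {
    frameworkBlacklistedTrackers++;
  }

  // True once enough of the cluster is unusable for this job that
  // retrying elsewhere is pointless and the job should be killed.
  public boolean shouldKillJob() {
    return frameworkBlacklistedTrackers >
        MAX_BLACKLIST_FRACTION * clusterTrackerCount;
  }
}
{code}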

> reduce task failures during shuffling should not count against number of 
> retry attempts
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3462
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3462
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.3
>            Reporter: Christian Kunz
>            Assignee: Amareshwari Sriramadasu
>             Fix For: 0.19.0
>
>         Attachments: patch-3462.txt, patch-3462.txt, patch-3462.txt, 
> patch-3462.txt, patch-3462.txt, patch-3462.txt
>
>

