[ https://issues.apache.org/jira/browse/HADOOP-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641556#action_12641556 ]

Koji Noguchi commented on HADOOP-4305:
--------------------------------------

I'm concerned about what happens when an application has a bug and is 
submitted to the cluster many times.
Could such a job blacklist healthy nodes and eventually take the TaskTrackers 
down?

bq. If the number of times the task tracker got blacklisted is greater than or 
equal to mapred.max.tasktracker.blacklists, then the job tracker declares the 
task tracker as dead.

Can we count blacklistings toward this limit *only* from successful jobs?
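The check being discussed could be sketched roughly as follows. This is a hedged illustration of the counting logic, not the actual JobTracker code; the class name `FaultTracker` and method names are hypothetical, and the `jobSucceeded` flag implements the "count only from successful jobs" suggestion above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: track per-TaskTracker blacklist counts and
// declare a tracker dead once the configured limit is reached.
public class FaultTracker {
    private final int maxBlacklists; // mapred.max.tasktracker.blacklists
    private final Map<String, Integer> blacklistCounts = new HashMap<>();

    public FaultTracker(int maxBlacklists) {
        this.maxBlacklists = maxBlacklists;
    }

    // Record one blacklisting event for a tracker. Per the suggestion
    // above, only count it when the job that triggered the blacklisting
    // ultimately succeeded, so a single buggy job submitted repeatedly
    // cannot take healthy TaskTrackers down.
    public void recordBlacklist(String tracker, boolean jobSucceeded) {
        if (!jobSucceeded) {
            return; // ignore blacklists caused by failed (possibly buggy) jobs
        }
        blacklistCounts.merge(tracker, 1, Integer::sum);
    }

    // The tracker is declared dead once its count reaches the limit.
    public boolean isDeclaredDead(String tracker) {
        return blacklistCounts.getOrDefault(tracker, 0) >= maxBlacklists;
    }
}
```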


> repeatedly blacklisted tasktrackers should get declared dead
> ------------------------------------------------------------
>
>                 Key: HADOOP-4305
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4305
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Christian Kunz
>            Assignee: Amareshwari Sriramadasu
>             Fix For: 0.20.0
>
>
> When running a batch of jobs it often happens that the same tasktrackers are 
> blacklisted again and again. This can slow job execution considerably, in 
> particular, when tasks fail because of timeout.
> It would make sense to no longer assign any tasks to such tasktrackers and to 
> declare them dead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.