[ https://issues.apache.org/jira/browse/HADOOP-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644417#action_12644417 ]
dhruba borthakur commented on HADOOP-4305:
------------------------------------------

I like Amareshwari's proposal because it is simple. Owen's extension seems to add an "aging" factor to the counter. And Runping's proposal can be encompassed in Amareshwari's proposal too: reflect the state of the TT (how many jobs it was running simultaneously) by incrementing the blacklist counter with an appropriate weight.

> repeatedly blacklisted tasktrackers should get declared dead
> ------------------------------------------------------------
>
>                 Key: HADOOP-4305
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4305
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Christian Kunz
>            Assignee: Amareshwari Sriramadasu
>             Fix For: 0.20.0
>
>
> When running a batch of jobs it often happens that the same tasktrackers are
> blacklisted again and again. This can slow job execution considerably, in
> particular when tasks fail because of timeout.
> It would make sense to no longer assign any tasks to such tasktrackers and to
> declare them dead.

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
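The combined proposal (a per-tracker blacklist counter, incremented with a weight reflecting how many jobs the tracker was running, with the tracker declared dead once the counter crosses a threshold) can be sketched as below. This is only an illustration of the idea under discussion, not the actual Hadoop JobTracker code; the class name, the weight formula, and the DEAD_THRESHOLD value are all hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the weighted blacklist counter discussed in the
// comments on HADOOP-4305; names and threshold are illustrative only.
public class BlacklistTracker {
    // Weighted blacklistings a tasktracker may accumulate before it is
    // declared dead. Purely illustrative; a real value would be tunable.
    static final int DEAD_THRESHOLD = 4;

    private final Map<String, Integer> counters = new HashMap<>();

    // Record one blacklisting event. Per Runping's idea as folded into
    // Amareshwari's proposal, the increment is weighted by how many jobs
    // the tracker was running simultaneously when it was blacklisted.
    public void recordBlacklisting(String trackerName, int concurrentJobs) {
        int weight = Math.max(1, concurrentJobs);
        counters.merge(trackerName, weight, Integer::sum);
    }

    // A tracker whose weighted counter reaches the threshold is treated
    // as dead and should receive no further task assignments.
    public boolean isDead(String trackerName) {
        return counters.getOrDefault(trackerName, 0) >= DEAD_THRESHOLD;
    }
}
```

Owen's aging extension would additionally decay these counters over time, so a tracker blacklisted long ago is not penalized forever.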