[ https://issues.apache.org/jira/browse/HADOOP-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12593768#action_12593768 ]

Amar Kamat commented on HADOOP-3333:
------------------------------------

bq. reduce tasks failing on marginal TaskTrackers
What do you mean by this?
bq. repeatedly to the same TaskTrackers (probably because it is the only 
available slot)
No. The JobTracker will assign a task to the same TaskTracker only under the 
conditions Arun mentioned, or only after the TIP has been tried on all the 
machines (see the sketch below).
Could you provide more details: how to reproduce this problem, how many nodes 
were in the cluster, and what was the overall behaviour of the job in terms 
of failures?
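
To make the last-resort behaviour concrete, here is a minimal sketch of the 
eligibility check. The class and method names are illustrative, not the 
actual 0.16 code (the real logic lives in TaskInProgress/JobInProgress):

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative sketch: a failed TIP is handed back to a tracker it
    // already failed on only when no fresh tracker is left.
    class TipEligibilitySketch {
        private final Set<String> machinesWhereFailed = new HashSet<String>();
        private final int totalTrackers;

        TipEligibilitySketch(int totalTrackers) {
            this.totalTrackers = totalTrackers;
        }

        void noteFailure(String tracker) {
            machinesWhereFailed.add(tracker);
        }

        // Eligible if the TIP has not failed on this tracker yet, or,
        // as a last resort, if it has already failed everywhere.
        boolean isEligible(String tracker) {
            return !machinesWhereFailed.contains(tracker)
                || machinesWhereFailed.size() >= totalTrackers;
        }
    }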

> job failing because of reassigning same tasktracker to failing tasks
> --------------------------------------------------------------------
>
>                 Key: HADOOP-3333
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3333
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.3
>            Reporter: Christian Kunz
>            Priority: Blocker
>
> We have a long-running job in a 2nd attempt. The previous job failed, and 
> the current job risks failing as well, because reduce tasks failing on 
> marginal TaskTrackers are assigned repeatedly to the same TaskTrackers 
> (probably because those are the only available slots), eventually running 
> out of attempts.
> Reduce tasks should be assigned to the same TaskTracker at most twice, or 
> TaskTrackers need better smarts to detect failing hardware.
> BTW, mapred.reduce.max.attempts=12, which is high, but does not help in 
> this case (a configuration example follows below).
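
For reference, the retry ceiling mentioned in the report is a per-job 
setting. A minimal sketch of raising it through the old mapred API (the 
value 12 matches the report; the property is set directly here to stay 
version-agnostic):

    import org.apache.hadoop.mapred.JobConf;

    public class RetryConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf(RetryConfig.class);
            // Raise the per-task retry ceiling for reduces; 12 matches
            // the value reported above.
            conf.setInt("mapred.reduce.max.attempts", 12);
        }
    }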

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.