[ https://issues.apache.org/jira/browse/HADOOP-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12605541#action_12605541 ]

Jothi Padmanabhan commented on HADOOP-3333:
-------------------------------------------

Findbugs is complaining about this piece of code:

{
Integer numTaskTrackersInHost;
...
numTaskTrackersInHost++;
uniqueHosts.put(host, numTaskTrackersInHost);
}

It appears that while the compiler handles applying primitive integer 
operations to the Integer object (by auto-unboxing and re-boxing), findbugs 
is confused by the resulting pattern.
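
For reference, a minimal self-contained sketch of what the compiler actually 
generates for the ++ on an Integer. The variable names mirror the snippet 
above, but the Map type and host value here are assumptions for illustration, 
not the Hadoop source:

import java.util.HashMap;
import java.util.Map;

public class AutoboxDemo {
    public static void main(String[] args) {
        // Hypothetical stand-ins for the fields in the snippet above;
        // the Map type and host value are assumptions, not Hadoop source.
        Map<String, Integer> uniqueHosts = new HashMap<String, Integer>();
        String host = "tracker-host-1";
        uniqueHosts.put(host, 1);

        Integer numTaskTrackersInHost = uniqueHosts.get(host);
        // The compiler rewrites the ++ as roughly:
        //   numTaskTrackersInHost =
        //       Integer.valueOf(numTaskTrackersInHost.intValue() + 1);
        // i.e. unbox, increment the primitive, re-box. This autoboxing
        // round-trip is the kind of pattern findbugs flags.
        numTaskTrackersInHost++;
        uniqueHosts.put(host, numTaskTrackersInHost);

        System.out.println(uniqueHosts.get(host)); // prints 2
    }
}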

> job failing because of reassigning same tasktracker to failing tasks
> --------------------------------------------------------------------
>
>                 Key: HADOOP-3333
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3333
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.3
>            Reporter: Christian Kunz
>            Assignee: Jothi Padmanabhan
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: hadoop-3333-v1.patch, hadoop-3333.patch, 
> HADOOP-3333_0_20080503.patch, HADOOP-3333_1_20080505.patch, 
> HADOOP-3333_2_20080506.patch
>
>
> We have a long-running job in a 2nd attempt. The previous job failed, and 
> the current job risks failing as well, because reduce tasks failing on 
> marginal TaskTrackers are assigned repeatedly to the same TaskTrackers 
> (probably because they hold the only available slots), eventually running 
> out of attempts. Reduce tasks should be assigned to the same TaskTracker at 
> most twice, or TaskTrackers need better smarts to detect failing hardware. 
> BTW, mapred.reduce.max.attempts=12, which is high, but does not help in 
> this case.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
