[ https://issues.apache.org/jira/browse/HADOOP-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-3370:
----------------------------------

        Fix Version/s: 0.18.0
    Affects Version/s: 0.17.0
               Status: Open  (was: Patch Available)

Zheng, apologies for being late to get to this - a couple of comments:

1. Please do not comment out code that is no longer required; just delete it.
2. HADOOP-3297 changed the way we fetch TaskCompletionEvents; it is no longer 
once every 5s. Just FYI.
3. If you don't mind, please do not use this.<func> when a plain call to 
<func> suffices.
4. As you mentioned, the other option is to send a KillJobAction, at the end 
of the job, to all trackers on which its tasks ran (see the sketch after this 
list). This is a really useful feature, and it would make me very happy if 
you took that route! *smile* - however, I won't hold it against this patch; 
we could do it as a separate issue.
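
Something along these lines - note that getTrackersWithTasks and 
queueForTracker below are made-up names for illustration, not the actual 
JobTracker API:

    // Illustrative sketch only -- the helpers are hypothetical.
    // Idea: at job completion, send a KillJobAction to every tracker
    // that ever ran a task attempt for the job, so stale per-job state
    // (e.g. TaskTracker.runningJobs entries) gets purged everywhere.
    private void killJobOnAllTrackers(JobInProgress job) {
      String jobId = job.getProfile().getJobId();
      // hypothetical bookkeeping: every tracker that ran a task attempt
      // for this job
      for (String trackerName : job.getTrackersWithTasks()) {
        // hypothetical helper: queue the action for the tracker's next
        // heartbeat response
        queueForTracker(trackerName, new KillJobAction(jobId));
      }
    }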

> failed tasks may stay forever in TaskTracker.runningJobs
> --------------------------------------------------------
>
>                 Key: HADOOP-3370
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3370
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>            Priority: Critical
>             Fix For: 0.18.0
>
>         Attachments: 3370-1.patch
>
>
> The net effect of this is that, with a long-running TaskTracker, it takes a 
> very long time for ReduceTasks on that TaskTracker to fetch map outputs - 
> the TaskTracker fetches map output locations for all reduce tasks in 
> TaskTracker.runningJobs, including the stale ReduceTasks. There is a 
> 5-second delay between two requests, so when there are tens of stale 
> ReduceTasks, a running ReduceTask waits a long time for its map output 
> locations. Of course this also inflates memory usage, but at this rate that 
> is not too big a problem.
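>
> To make the cost concrete, the fetch loop behaves roughly like this (a 
> simplified model, not the actual code; fetchMapOutputLocations is a 
> stand-in name):
>
>     // Simplified model: every pass walks ALL entries in runningJobs,
>     // stale ones included, pausing 5 seconds per request. With N stale
>     // ReduceTasks, a live reducer can wait on the order of N * 5s for
>     // fresh map output locations.
>     while (running) {
>       for (RunningJob rjob : runningJobs.values()) {
>         fetchMapOutputLocations(rjob); // one request to the JobTracker
>         Thread.sleep(5000);            // the 5-second delay
>       }
>     }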
> I've verified the bug by adding an HTML table for TaskTracker.runningJobs 
> to the TaskTracker HTTP interface, on a 2-node cluster, with a 
> single-mapper, single-reducer job in which the mapper succeeds and the 
> reducer fails. I can still see the ReduceTask in TaskTracker.runningJobs, 
> while it is absent from the first two tables (TaskTracker.tasks and 
> TaskTracker.runningTasks).
> Details:
> TaskRunner.run() calls TaskTracker.reportTaskFinished() when the task 
> fails,
> which calls TaskTracker.TaskInProgress.taskFinished(),
> which calls TaskTracker.TaskInProgress.cleanup(),
> which calls TaskTracker.tasks.remove(taskId).
> In short, it removes the failed task from TaskTracker.tasks, but not from 
> TaskTracker.runningJobs (sketched below).
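>
> In code form, the cleanup path is roughly (a simplified paraphrase, not 
> the literal source):
>
>     // TaskTracker.TaskInProgress.cleanup(), roughly:
>     synchronized void cleanup() throws IOException {
>       // ... other cleanup ...
>       tasks.remove(taskId); // the failed attempt is dropped here,
>                             // but runningJobs is never touched, so the
>                             // stale ReduceTask stays reachable from it
>     }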
> Then the failure is reported to the JobTracker:
> JobTracker.heartbeat calls processHeartbeat,
> which calls updateTaskStatuses,
> which calls tip.getJob().updateTaskStatus,
> which calls JobInProgress.failedTask,
> which calls JobTracker.markCompletedTaskAttempt,
> which puts the task into trackerToMarkedTasksMap.
> JobTracker.heartbeat then calls removeMarkedTasks,
> which calls removeTaskEntry,
> which removes the task from trackerToTaskMap.
> JobTracker.heartbeat also calls JobTracker.getTasksToKill,
> which reads <tracker, task> pairs from trackerToTaskMap
> and asks the tracker to KILL the task, or the whole job of the task.
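>
> Put differently, the order of operations inside JobTracker.heartbeat is 
> roughly (simplified):
>
>     processHeartbeat(status);   // failedTask -> markCompletedTaskAttempt:
>                                 // the attempt goes into
>                                 // trackerToMarkedTasksMap
>     removeMarkedTasks(tracker); // removeTaskEntry drops the attempt
>                                 // from trackerToTaskMap
>     getTasksToKill(tracker);    // reads trackerToTaskMap -- the entry is
>                                 // already gone, so no KILL action is
>                                 // ever sent back to the TaskTracker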
> In the case where there is only one task for a specific job on a specific 
> tracker and that task failed (NOTE: and that task is not the last failed 
> try of the job - otherwise JobTracker.getTasksToKill will pick it up before 
> removeMarkedTasks comes in and removes it from trackerToTaskMap), the 
> TaskTracker will never receive the KILL-task or KILL-job message from the 
> JobTracker.
> As a result, the task will remain in TaskTracker.runningJobs forever.
> Solution:
> Remove the task from TaskTracker.runningJobs at the same time as we remove 
> it from TaskTracker.tasks, as sketched below.
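>
> In code, the fix is roughly (simplified; field and method names are 
> approximate - see 3370-1.patch for the actual change):
>
>     synchronized void cleanup() throws IOException {
>       tasks.remove(taskId);
>       RunningJob rjob = runningJobs.get(task.getJobId());
>       if (rjob != null) {
>         rjob.tasks.remove(this); // drop the stale TaskInProgress
>         if (rjob.tasks.isEmpty()) {
>           // last task of the job on this tracker: drop the job too
>           runningJobs.remove(task.getJobId());
>         }
>       }
>     }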

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
