[
https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681607#action_12681607
]
Leitao Guo commented on HADOOP-5474:
------------------------------------
I don't agree that the application should have to be tolerant of this
situation. However, the cost of re-executing all reduce tasks is very high. Do
you have any suggestions for solving this issue?
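For concreteness, here is a minimal sketch (against the old
org.apache.hadoop.mapred API used in 0.19) of the kind of non-deterministic
mapper the description below refers to; the class name and the choice of
random values are hypothetical:
{code:java}
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper whose output depends on an unseeded Random, so a
// re-executed attempt over the same input split emits different values.
public class RandomValueMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final Random random = new Random(); // unseeded: differs per attempt

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    // The emitted value is not a function of the input record alone, so the
    // original attempt and a re-executed attempt produce different data.
    output.collect(value, new IntWritable(random.nextInt()));
  }
}
{code}
Reducers that fetched the original attempt's output and reducers that fetch
the re-executed attempt's output will then have seen inconsistent data, which
is exactly the scenario described below. (Seeding the Random deterministically
from the input split would sidestep the problem for this particular mapper,
but the framework cannot assume map outputs are deterministic in general.)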
> All reduce tasks should be re-executed when tasktracker with a completed map
> task failed
> ----------------------------------------------------------------------------------------
>
> Key: HADOOP-5474
> URL: https://issues.apache.org/jira/browse/HADOOP-5474
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.19.0
> Environment: CentOS 5,
> hadoop-0.19.0
> Reporter: Leitao Guo
> Priority: Critical
> Original Estimate: 96h
> Remaining Estimate: 96h
>
> When a tasktracker with a completed map task fails, the map task is
> re-executed, and all reduce tasks that have not yet read the data from that
> tasktracker are re-executed as well. However, reduce tasks that have already
> read the data from that tasktracker are not re-executed.
> In this situation, if multiple executions of a map task over the same
> dataset can produce different outputs, for example when the task emits a
> random number, the output of the original map task and that of the
> re-executed map task will probably differ. The re-executed reduce tasks will
> then read the new output of the re-executed map task, while the reduce tasks
> that had already read the data from the failed tasktracker consumed the old
> output. This can compromise the correctness of the result.
> A recommended solution is that all reduce tasks should be re-executed if a
> tasktracker with a completed map task fails.
> Any comments? Thanks!