[ https://issues.apache.org/jira/browse/HADOOP-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Amareshwari Sriramadasu updated HADOOP-4112:
--------------------------------------------
Attachment: patch-4112.txt
Thanks for the review.
bq. logFailed() and logKilled() differ in one string. I assume you have done this
just to keep the code consistent. Someday we should collapse both the APIs into
something like logComplete(). There is a lot of duplicate/redundant code.
Yes, that was just to keep the code consistent. I have raised HADOOP-4122 to track collapsing them.
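A minimal sketch of the kind of collapse suggested there, assuming a single terminal-status parameter; the method names and signatures below are illustrative only and do not reproduce the real JobHistory API:

{code:java}
// Illustrative sketch only; the real logFailed()/logKilled() signatures differ.
public class LogCompleteSketch {

    // Today there are two near-identical methods that differ only in the
    // status string written to the history ...
    static void logFailed(String jobId, long finishTime) {
        logComplete(jobId, finishTime, "FAILED");
    }

    static void logKilled(String jobId, long finishTime) {
        logComplete(jobId, finishTime, "KILLED");
    }

    // ... which could be collapsed into a single method that takes the
    // terminal status as a parameter.
    static void logComplete(String jobId, long finishTime, String status) {
        System.out.println("Job " + jobId + " " + status + " at " + finishTime);
    }

    public static void main(String[] args) {
        logKilled("job_200809111111_0001", System.currentTimeMillis());
    }
}
{code}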
bq. Why is job-failed log removed? Redundancy?
Yes, it is a redundant log. Moreover, JobInfo.logFailed also closes the history
log file, so the cleanupAttempt logs would be missed: killJob only launches a
cleanup task, and the job is marked Failed only after the cleanup completes.
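A minimal sketch of that ordering problem, assuming a history log that silently drops records once it has been closed; the class and method names are hypothetical and do not correspond to the real JobHistory API:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of why logging the failure (and closing the file)
// before the cleanup attempt finishes loses the cleanup records.
public class HistoryCloseSketch {

    static class HistoryLog {
        private final List<String> lines = new ArrayList<String>();
        private boolean closed = false;

        void log(String line) {
            if (closed) {
                return;                       // anything logged after close is lost
            }
            lines.add(line);
        }

        void logFailed(String jobId) {
            log("Job " + jobId + " FAILED");
            closed = true;                    // logFailed also closes the history file
        }

        List<String> getLines() { return lines; }
    }

    public static void main(String[] args) {
        HistoryLog history = new HistoryLog();

        // If killJob logged the failure immediately (the redundant log removed
        // by the patch), the history file would already be closed here ...
        history.logFailed("job_200809111111_0001");

        // ... and the cleanup attempt that killJob launches would run later,
        // but none of its records would reach the already-closed file.
        history.log("CleanupAttempt STARTED");
        history.log("CleanupAttempt FINISHED");
        history.logFailed("job_200809111111_0001"); // the real failure, after cleanup

        // Prints only [Job job_200809111111_0001 FAILED]
        System.out.println(history.getLines());
    }
}
{code}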
The patch also incorporates the other review comments.
> Got ArrayOutOfBound exception while analyzing the job history
> -------------------------------------------------------------
>
> Key: HADOOP-4112
> URL: https://issues.apache.org/jira/browse/HADOOP-4112
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.19.0
> Reporter: Amar Kamat
> Assignee: Amareshwari Sriramadasu
> Fix For: 0.19.0
>
> Attachments: patch-4112.txt, patch-4112.txt
>
>
> HADOOP-3150 introduced two new types of tasks, the cleanup tasks. These are
> logged to history either as map tasks or reduce tasks. The number of
> maps/reducers is also logged to history. Since the number of maps will be
> less than the total number of map tasks logged to history (actual number of
> maps + cleanup tasks), I think that is the reason for this exception. The
> important question is to investigate the effect of HADOOP-3150 on the job
> history and the code related to it.
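To illustrate the counting mismatch described above, here is a hypothetical sketch (not the actual history-analysis code): an array sized from the number of maps recorded in the history is indexed once for every map-task entry found, and the extra cleanup entry pushes the index out of bounds.

{code:java}
// Hypothetical illustration of the counting mismatch; the variable names are
// made up and this is not the real job-history analysis code.
public class HistoryCountSketch {
    public static void main(String[] args) {
        int recordedMaps = 2;   // number of maps recorded in the job history

        // Map-task entries actually present in the history: the real map tasks
        // plus a cleanup task that is logged as a map task.
        String[] mapTaskEntries = {"task_m_000000", "task_m_000001", "cleanup_logged_as_map"};

        long[] mapFinishTimes = new long[recordedMaps];   // sized from the recorded count
        for (int i = 0; i < mapTaskEntries.length; i++) {
            // i == 2 throws ArrayIndexOutOfBoundsException: the cleanup task
            // inflates the number of "map" entries beyond the recorded count.
            mapFinishTimes[i] = System.currentTimeMillis();
        }
    }
}
{code}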