[ https://issues.apache.org/jira/browse/HADOOP-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12677640#action_12677640 ]

Amareshwari Sriramadasu commented on HADOOP-4996:
-------------------------------------------------

With the current code, if a job is FAILED or KILLED, JobControl shows its 
status as FAILED. If a job that it depends on is FAILED or KILLED, JobControl 
shows its status as DEPENDENT_FAILED. All of these jobs end up in the 
failedJobs list, which can be queried through the getFailedJobs() API.
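To illustrate, here is a minimal sketch of what an application sees today with 
the 0.19-era jobcontrol API. The class name and JobConf setup are made up for 
the example; the JobConfs are assumed to be fully configured elsewhere:

{code}
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class JobControlStatusExample {
  public static void main(String[] args) throws Exception {
    JobConf conf1 = new JobConf();   // assumed configured elsewhere
    JobConf conf2 = new JobConf();

    Job job1 = new Job(conf1);
    Job job2 = new Job(conf2);
    job2.addDependingJob(job1);      // job2 runs only if job1 succeeds

    JobControl jc = new JobControl("example-group");
    jc.addJob(job1);
    jc.addJob(job2);

    new Thread(jc).start();          // JobControl implements Runnable
    while (!jc.allFinished()) {
      Thread.sleep(1000);
    }
    jc.stop();

    // Killed and failed jobs land in the same list today:
    for (Job j : jc.getFailedJobs()) {
      if (j.getState() == Job.FAILED) {
        System.out.println(j.getJobName() + ": FAILED (or killed)");
      } else if (j.getState() == Job.DEPENDENT_FAILED) {
        System.out.println(j.getJobName() + ": DEPENDENT_FAILED");
      }
    }
  }
}
{code}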

Should JobControl also have new states KILLED and DEPENDENT_KILLED, and a new 
getKilledJobs() API for those jobs? If so, is this a blocker?
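If so, the change could look something like the sketch below. This is only a 
hypothetical outline of the proposal; none of these names (KILLED, 
DEPENDENT_KILLED, killedJobs, getKilledJobs) exist in the current code:

{code}
// In Job: two hypothetical new states, alongside FAILED/DEPENDENT_FAILED.
public static final int KILLED = 6;            // the job itself was killed
public static final int DEPENDENT_KILLED = 7;  // a job it depends on was killed

// In JobControl: a hypothetical list maintained like failedJobs.
private ArrayList<Job> killedJobs;

public ArrayList<Job> getKilledJobs() {
  return this.killedJobs;
}
{code}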

> JobControl does not report killed jobs
> --------------------------------------
>
>                 Key: HADOOP-4996
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4996
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Olga Natkovich
>            Assignee: Amareshwari Sriramadasu
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: patch-4996-testcase.txt
>
>
> After speaking with Arun and Owen, my understanding of the situation is that 
> separate killed job tracking was added in hadoop 18: 
> http://issues.apache.org/jira/browse/HADOOP-3924.
> However, it does not look like this change was integrated into JobControl 
> class. While I have not verified this yet, it looks like applications that 
> use JobControl would have no way of knowing if one of the jobs was killed.
> This would be a blocker for Pig to move to Hadoop 19.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
