[ https://issues.apache.org/jira/browse/HADOOP-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12698061#action_12698061 ]

Ruyue Ma commented on HADOOP-5600:
----------------------------------

This is related to mapred.userlog.retain.hours. 

Currently, every task JVM tries to clean up user logs in the hadoop/logs/userlogs
directory. The purge criterion is
return file.lastModified() < purgeTimeStamp, where 'file' is the attempt dir.
But the dir's lastModified time doesn't change while logs are written, so the change is:
+      File indexFile = new File(file, "log.index");
+      if (indexFile.exists()) {
+        return indexFile.lastModified() < purgeTimeStamp;
+      } else {
+        return file.lastModified() < purgeTimeStamp;
+      }
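The patched criterion above can be sketched as a standalone helper. This is a minimal illustration, not the actual TaskTracker code; the class and method names (UserLogCleanup, shouldPurge) are hypothetical, but the logic mirrors the diff: prefer the mtime of log.index inside the attempt dir, since the attempt dir's own mtime is not updated while the task appends to its logs.

```java
import java.io.File;

public class UserLogCleanup {
    // Hypothetical sketch of the purge check from the patch above.
    // If the attempt dir contains a log.index, use its lastModified
    // time (updated as logs are written); otherwise fall back to the
    // attempt dir's own lastModified time, as the old code did.
    static boolean shouldPurge(File attemptDir, long purgeTimeStamp) {
        File indexFile = new File(attemptDir, "log.index");
        if (indexFile.exists()) {
            return indexFile.lastModified() < purgeTimeStamp;
        } else {
            return attemptDir.lastModified() < purgeTimeStamp;
        }
    }
}
```

An attempt dir whose log.index was touched after purgeTimeStamp is kept; one last modified before it is purged.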

> mapred.jobtracker.retirejob.interval killing long running reduce task
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-5600
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5600
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.2
>         Environment: 0.19.2-dev, r753365 
>            Reporter: Billy Pearson
>         Attachments: hadoop-5600.patch
>
>
> You can verify this by changing mapred.jobtracker.retirejob.interval to less
> than your normal map time and watching the reduce task fail.
> More info on closed ticket HADOOP-5591.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.