[ https://issues.apache.org/jira/browse/HADOOP-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Bieniosek updated HADOOP-1636:
--------------------------------------

    Attachment: configure-max-completed-jobs.patch

This patch introduces a new configuration property, 
mapred.jobtracker.completeuserjobs.maximum, which defaults to 100 (the current 
hard-coded value).
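
For reference, with the patch applied the limit could presumably be raised in hadoop-site.xml along these lines (the value 500 is only an illustration, not a recommendation):

```xml
<!-- hadoop-site.xml: override the default of 100 (illustrative value) -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>500</value>
</property>
```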

Once this many jobs have completed (failed or succeeded), Hadoop deletes the 
finished jobs from memory, leaving them accessible only through the 
information-poor jobhistory page.  This limit is supposedly per user, but I 
submit all jobs as the same user.
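
The essence of the change is replacing the hard-coded constant with a config lookup. Here is a minimal, self-contained sketch of that pattern (not the actual JobTracker code; `ConfigLookup` and the `Properties`-backed store are stand-ins for Hadoop's Configuration class):

```java
import java.util.Properties;

public class ConfigLookup {
    // Mimics Configuration.getInt(name, defaultValue): return the parsed
    // property value if set, otherwise fall back to the default.
    static int getInt(Properties conf, String name, int defaultValue) {
        String v = conf.getProperty(name);
        return (v == null) ? defaultValue : Integer.parseInt(v.trim());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Unset: falls back to the old hard-coded value of 100.
        System.out.println(
            getInt(conf, "mapred.jobtracker.completeuserjobs.maximum", 100));
        // Set in the site configuration: the override wins.
        conf.setProperty("mapred.jobtracker.completeuserjobs.maximum", "500");
        System.out.println(
            getInt(conf, "mapred.jobtracker.completeuserjobs.maximum", 100));
    }
}
```

So instead of `static final int MAX_COMPLETE_USER_JOBS_IN_MEMORY = 100;`, the JobTracker would read the limit once at startup with a default of 100, preserving current behavior for anyone who does not set the property.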

I have tested this patch, and it seems to work.


> constant should be user-configurable: MAX_COMPLETE_USER_JOBS_IN_MEMORY
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-1636
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1636
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.13.0
>            Reporter: Michael Bieniosek
>         Attachments: configure-max-completed-jobs.patch
>
>
> In JobTracker.java:   static final int MAX_COMPLETE_USER_JOBS_IN_MEMORY = 100;
> This should be configurable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
