[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4018:
-------------------------------------

    Attachment: maxSplits6.patch

Thanks, Amar, for your comments. I am attaching a new patch that looks at the 
allocated tasks for each job and matches that against the specified limit. 
Amar: can you please review this latest patch? Thanks.
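For illustration only, here is a minimal Java sketch of the kind of check described above: reject a job at initialization time if its task count exceeds a configured per-job cap. The config key and the helper class below are hypothetical stand-ins, not the contents of maxSplits6.patch.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only, not the attached patch. The config key and
// this helper class are hypothetical.
public class JobTaskLimitCheck {

  // Hypothetical config key for the per-job task cap; -1 means unlimited.
  static final String MAX_TASKS_PER_JOB_KEY = "mapred.max.tasks.per.job";

  // Fail a job before its tasks are instantiated if it would exceed the
  // configured limit, so the task objects never enter the JobTracker heap.
  static void checkTaskLimit(int numMapTasks, int numReduceTasks,
                             Configuration conf) throws IOException {
    int maxTasks = conf.getInt(MAX_TASKS_PER_JOB_KEY, -1);
    int totalTasks = numMapTasks + numReduceTasks;
    if (maxTasks != -1 && totalTasks > maxTasks) {
      throw new IOException("Job has " + totalTasks
          + " tasks, which exceeds the configured limit of " + maxTasks);
    }
  }
}

Failing the job up front is the cheap option: once the tasks are allocated, the memory is already committed.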

@Vinod: the proposed mapred.max.tasks.per.jobtracker is used to limit the 
memory usage of the JobTracker. We need to count how many tasks (failed, 
completed, running, etc.) are resident in memory. I believe that 
mapred.jobtracker.completeuserjobs.maximum does not satisfy that requirement. 
I do not know much about 
org.apache.hadoop.mapred.LimitTasksPerJobTaskScheduler; maybe Amar can 
comment on that.
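As a sketch of what tracker-wide accounting could look like, assuming the proposed key mapred.max.tasks.per.jobtracker: keep a running count of every task object resident in the JobTracker heap, whatever its state, and refuse to admit a job that would push the count over the cap. The class and its bookkeeping fields below are hypothetical, not from any attached patch.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only; the class and fields are hypothetical
// stand-ins for whatever bookkeeping the JobTracker actually keeps.
public class JobTrackerTaskAccounting {

  // Proposed key from the discussion above; -1 means unlimited.
  static final String MAX_TASKS_PER_JOBTRACKER_KEY =
      "mapred.max.tasks.per.jobtracker";

  // Total task objects resident in the JobTracker heap, counting all
  // states: running, completed, failed, etc.
  private int residentTasks = 0;

  private final int maxResidentTasks;

  JobTrackerTaskAccounting(Configuration conf) {
    maxResidentTasks = conf.getInt(MAX_TASKS_PER_JOBTRACKER_KEY, -1);
  }

  // Called before a job's tasks are instantiated in memory.
  synchronized void admitJob(int tasksInJob) throws IOException {
    if (maxResidentTasks != -1
        && residentTasks + tasksInJob > maxResidentTasks) {
      throw new IOException("Admitting this job would put "
          + (residentTasks + tasksInJob)
          + " tasks in memory, above the limit of " + maxResidentTasks);
    }
    residentTasks += tasksInJob;
  }

  // Called when a retired job's tasks are released from memory.
  synchronized void retireJob(int tasksInJob) {
    residentTasks -= tasksInJob;
  }
}

This is also why completeuserjobs.maximum alone does not fit: it bounds retained completed jobs per user, not the total number of task objects held in memory.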

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, 
> maxSplits4.patch, maxSplits5.patch, maxSplits6.patch
>
>
> We have seen instances when a user submitted a job with many thousands of 
> mappers. The JobTracker was running with a 3GB heap, but that was still not 
> enough to prevent memory thrashing from garbage collection; effectively the 
> JobTracker was not able to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. 
> This can be a configurable parameter. Are there other things that eat huge 
> globs of memory in the JobTracker?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
