[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12634026#action_12634026 ]
Amar Kamat commented on HADOOP-4018:
------------------------------------

+1 for global limits.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch
>
> We have seen instances where a user submitted a job with many thousands of mappers. The JobTracker was running with a 3 GB heap, but that was still not enough to prevent memory thrashing from garbage collection; effectively the JobTracker was unable to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. This could be a configurable parameter. Are there other things that eat huge globs of memory in the JobTracker?
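A minimal sketch of the per-job task cap proposed above, assuming a check performed when the JobTracker initializes a submitted job. The configuration key name ("mapred.jobtracker.maxtasks.per.job"), the default of -1 meaning "unlimited", and the method shape are illustrative assumptions, not necessarily what the attached maxSplits patches implement:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class TaskLimitCheck {

  /**
   * Rejects a job whose total task count exceeds the configured cap, so a
   * single huge job cannot exhaust the JobTracker heap.
   * NOTE: the config key and its default are hypothetical, for illustration.
   */
  static void checkTaskLimits(Configuration conf, int numMapTasks,
                              int numReduceTasks) throws IOException {
    // -1 (assumed default) disables the check, preserving existing behaviour.
    int maxTasks = conf.getInt("mapred.jobtracker.maxtasks.per.job", -1);
    int requested = numMapTasks + numReduceTasks;
    if (maxTasks > 0 && requested > maxTasks) {
      // Fail the job up front instead of materializing thousands of
      // in-memory task objects and thrashing the garbage collector.
      throw new IOException("The number of tasks for this job (" + requested +
          ") exceeds the configured limit (" + maxTasks + ")");
    }
  }
}
{code}

Failing the job at initialization time, before any per-task bookkeeping objects are created, is what makes the memory protection effective; a global limit across jobs, as favoured in the +1 above, would need a similar check against the total task count of all running jobs.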