[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur updated HADOOP-4018:
-------------------------------------

    Attachment: maxSplits10.patch

Incorporated most of Amar's comments. I left the name of the config parameter as it was earlier.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits10.patch, maxSplits2.patch, maxSplits3.patch, maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, maxSplits8.patch, maxSplits9.patch
>
>
> We have seen instances where a user submitted a job with many thousands of mappers. The JobTracker was running with a 3GB heap, but that was still not enough to prevent memory thrashing from garbage collection; effectively, the JobTracker was unable to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. This could be a configurable parameter. Are there other things that eat huge globs of memory in the JobTracker?

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
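For illustration, here is a minimal sketch of the kind of guard the proposal describes: reject a job up front when its total task count exceeds a configured cap. The config key "mapred.jobtracker.maxtasks.per.job", the class and method names, and the -1 "unlimited" default are assumptions made for this sketch, not necessarily what the attached patch uses.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    public class TaskLimitCheck {
      // Hypothetical config key; the name kept by the patch may differ.
      static final String MAX_TASKS_KEY = "mapred.jobtracker.maxtasks.per.job";

      // Rejects a job whose total task count exceeds the configured cap.
      // A value of -1 (the assumed default) disables the check.
      static void checkTaskLimit(Configuration conf, int numMapTasks,
                                 int numReduceTasks) throws IOException {
        int maxTasks = conf.getInt(MAX_TASKS_KEY, -1);
        int requested = numMapTasks + numReduceTasks;
        if (maxTasks != -1 && requested > maxTasks) {
          // Fail fast at initialization instead of letting the JobTracker
          // materialize thousands of task objects and thrash in GC.
          throw new IOException("Job requests " + requested + " tasks, which" +
              " exceeds the configured limit of " + maxTasks);
        }
      }
    }

Checking at job-initialization time gives the user a clear error immediately, rather than letting an oversized job drive the JobTracker into garbage-collection thrashing after thousands of task objects have already been allocated.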