[ https://issues.apache.org/jira/browse/HADOOP-3670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610370#action_12610370 ]
Arun C Murthy commented on HADOOP-3670:
---------------------------------------

bq. You should also see a log msg in stderr which notes where the dump is placed...

The JT's stderr, of course, is redirected to the *-jobtracker-*.out file. The message is usually along the lines of: "Dumping heap to <file>" or some such.

> JobTracker running out of heap space
> ------------------------------------
>
>                 Key: HADOOP-3670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3670
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.17.0
>            Reporter: Christian Kunz
>
> The JobTracker on our 0.17.0 installation runs out of heap space rather
> quickly, with less than 100 jobs (at one time even after just 16 jobs).
> Running in 64-bit mode with larger heap space does not help -- it will use up
> all available RAM.
>
> 2008-06-28 05:17:06,661 INFO org.apache.hadoop.ipc.Server: IPC Server handler
> 62 on 9020, call heartbeat([EMAIL PROTECTED], false, true, 17384) from xxx.xxx.xxx.xxx
> :51802: error: java.io.IOException: java.lang.OutOfMemoryError: GC overhead
> limit exceeded
> java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
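The "Dumping heap to <file>" message Arun refers to is what the HotSpot JVM prints when it is started with the heap-dump-on-OOM flag. A minimal sketch of how that could be wired into the JobTracker's JVM options via hadoop-env.sh (the dump path here is illustrative, not taken from this issue):

```shell
# Sketch only: ask HotSpot to write a heap dump when an OutOfMemoryError
# occurs, and choose where the .hprof file lands. The JVM then prints
# "Dumping heap to <path> ..." on stderr, which for the JobTracker ends
# up in the *-jobtracker-*.out file.
# /tmp/jt-oom.hprof is a hypothetical path chosen for this example.
export HADOOP_JOBTRACKER_OPTS="-XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/tmp/jt-oom.hprof"
```

The resulting .hprof file can then be loaded into a heap analyzer to see which objects are retaining the memory.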