[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13109878#comment-13109878
 ] 

Robert Joseph Evans commented on MAPREDUCE-3057:
------------------------------------------------

Let's do a little math here.  10 GB of heap / 1200 jobs is about 8.3 MB per job (a bit 
less in practice, because we are running a web server after all).  That seems like a lot of 
memory to store a single job.  How many tasks and task attempts are we looking at per job?  I 
just want to be sure that there is not a memory leak somewhere in here.
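That figure can be sanity-checked with a quick back-of-envelope sketch (this just restates the arithmetic above; the numbers are the ones from the report, `-Xmx10000m` and 1200 jobs):

```python
# Back-of-envelope: heap budget per retained job in the history server.
heap_mb = 10_000          # started with -Xmx10000m
jobs = 1200               # jobs in the GridMix trace
per_job_mb = heap_mb / jobs
print(f"{per_job_mb:.1f} MB per job")  # roughly 8.3 MB, before web-server overhead
```

If the average completed job's in-memory representation is well under that budget, an OOM at 1200 jobs would indeed point at a leak rather than plain data volume.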

> Job History Server goes OutOfMemory with 1200 Jobs and Heap Size set to 10 
> GB
> --------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-3057
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3057
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: jobhistoryserver, mrv2
>    Affects Versions: 0.23.0
>            Reporter: Karam Singh
>             Fix For: 0.23.0
>
>
> The History Server was started with -Xmx10000m.
> Ran GridMix V3 with a 1200-job trace in STRESS mode on 350 nodes, with 4 NMs 
> per node.
> All jobs finished, as reported by the RM web UI and by 
> HADOOP_MAPRED_HOME/bin/mapred job -list all.
> But the GridMix job client was stuck while trying to connect to the History 
> Server.
> Then tried HADOOP_MAPRED_HOME/bin/mapred job -status jobid; the JobClient 
> also got stuck while looking for a token to connect to the History Server.
> Then looked at the History Server logs and found the History Server is throwing 
> "java.lang.OutOfMemoryError: GC overhead limit exceeded".
> With 10 GB of heap space and 1200 jobs, the History Server should not go out 
> of memory, no matter what type the jobs are.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
