[
https://issues.apache.org/jira/browse/MAPREDUCE-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13109508#comment-13109508
]
Vinod Kumar Vavilapalli commented on MAPREDUCE-3057:
----------------------------------------------------
The default size of the loaded-jobs cache is 2000, which turned out to be too
large in this case. The short-term fix is to reduce it, sure.
For the long term, we have a couple of options:
- If it isn't too difficult, the JHS should adjust its cache size depending on
the available heap.
- OTOH, I think we can live with not loading the task information up front and
instead retrieving task-specific information on demand. I'd expect the demand
for job-level information to be far higher.
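The first option could be sketched roughly as below. This is not the actual JHS code; the class name, the 25% heap budget, and the per-job footprint estimate are all assumptions for illustration. It sizes an LRU cache (via LinkedHashMap's access-order eviction) from the JVM's maximum heap instead of a fixed 2000 entries:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: size the loaded-jobs cache from the available heap
// instead of the fixed default of 2000 entries.
public class HeapAwareJobCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public HeapAwareJobCache(int maxEntries) {
        super(16, 0.75f, true); // access-order: iteration order is LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used job once the cap is exceeded.
        return size() > maxEntries;
    }

    // Assumed heuristic (not from the issue): budget a fraction of the max
    // heap, divide by an estimated per-job footprint, clamp to a sane range.
    public static int sizeFromHeap(long maxHeapBytes, long bytesPerJob) {
        long budget = maxHeapBytes / 4;      // spend at most 25% of the heap
        long entries = budget / bytesPerJob;
        return (int) Math.max(100, Math.min(entries, 2000));
    }

    public static void main(String[] args) {
        // e.g. a 10 GB heap and a guessed ~8 MB per fully loaded job
        int size = sizeFromHeap(10L * 1024 * 1024 * 1024, 8L * 1024 * 1024);
        System.out.println("cache size = " + size); // prints "cache size = 320"
    }
}
```

In a real server the heap figure would come from Runtime.getRuntime().maxMemory(); the hard part is estimating bytes-per-job, which varies with job size — which is exactly why the second option (lazy-loading task details) may be the more robust fix.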
> Job History Server goes out of memory with 1200 Jobs and Heap Size set to 10
> GB
> --------------------------------------------------------------------------------
>
> Key: MAPREDUCE-3057
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3057
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: jobhistoryserver, mrv2
> Affects Versions: 0.23.0
> Reporter: Karam Singh
> Fix For: 0.23.0
>
>
> History Server was started with -Xmx10000m.
> Ran GridMix V3 with a 1200-job trace in STRESS mode on 350 nodes, with 4 NMs
> per node.
> All jobs finished, as reported by the RM Web UI and by
> HADOOP_MAPRED_HOME/bin/mapred job -list all.
> But the GridMix job client was stuck while trying to connect to the
> History Server.
> Then tried HADOOP_MAPRED_HOME/bin/mapred job -status jobid;
> the JobClient also got stuck while looking for a token to connect to the
> History Server.
> The History Server logs showed it was throwing
> "java.lang.OutOfMemoryError: GC overhead limit exceeded".
> With 10 GB of heap space and 1200 jobs, the History Server should not go out
> of memory, no matter what type the jobs are.