[
https://issues.apache.org/jira/browse/YARN-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363574#comment-15363574
]
Hudson commented on YARN-5296:
------------------------------
SUCCESS: Integrated in Hadoop-trunk-Commit #10053 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/10053/])
YARN-5296. NMs going OutOfMemory because ContainerMetrics leak in ContainerMonitorImpl (jianhe: rev
d792a90206e940c31d1048e53dc24ded605788bf)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
> NMs going OutOfMemory because ContainerMetrics leak in ContainerMonitorImpl
> ---------------------------------------------------------------------------
>
> Key: YARN-5296
> URL: https://issues.apache.org/jira/browse/YARN-5296
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.9.0
> Reporter: Karam Singh
> Assignee: Junping Du
> Fix For: 2.9.0
>
> Attachments: YARN-5296-v2.1.patch, YARN-5296-v2.patch,
> YARN-5296.patch, after v2 fix.png, before v2 fix.png
>
>
> Ran the tests in the following manner:
> 1. Ran GridMix (768 apps per run) sequentially around 17 times, executing about
> 12.9K apps in total.
> 2. After 4-5 hrs, checked the NM heap using Memory Analyzer. It reported that
> around 96% of the heap was being used by ContainerMetrics.
> 3. Ran 7 more GridMix runs, bringing the total to around 18.2K apps. Checked the
> NM heap with Memory Analyzer again; 96% of the heap was still being used by
> ContainerMetrics.
> 4. Started one more GridMix run. While it was running, at around 18.7K+ apps in
> total, NMs started going down with OOM. Analysing the NM heap with Memory
> Analyzer showed the OOM was caused by ContainerMetrics.
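
The failure mode described above is the classic pattern of a per-container metrics source that registers itself with a long-lived metrics system but is never unregistered when the container finishes, so the NM heap grows with every completed container. The following minimal Java sketch illustrates that pattern under simplified assumptions; SimpleMetricsRegistry, PerContainerMetrics, and ContainerMetricsLeakDemo are hypothetical stand-ins, not Hadoop's actual MetricsSystem or ContainerMetrics classes, and the finished()-based cleanup only mirrors the general direction of the YARN-5296 fix.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a long-lived metrics system: sources registered
// here stay strongly reachable (and therefore on the heap) until explicitly
// unregistered.
class SimpleMetricsRegistry {
  private final Map<String, Object> sources = new ConcurrentHashMap<>();

  void registerSource(String name, Object source) {
    sources.put(name, source);
  }

  void unregisterSource(String name) {
    sources.remove(name);
  }

  int size() {
    return sources.size();
  }
}

// Hypothetical per-container metrics object, one instance per launched container.
class PerContainerMetrics {
  private final String containerId;
  private final SimpleMetricsRegistry registry;

  PerContainerMetrics(String containerId, SimpleMetricsRegistry registry) {
    this.containerId = containerId;
    this.registry = registry;
    // Registered when the container starts ...
    registry.registerSource("ContainerResource_" + containerId, this);
  }

  // ... and, in the leaky variant, nothing equivalent to this is ever called,
  // so every finished container leaves one instance pinned in the registry.
  void finished() {
    registry.unregisterSource("ContainerResource_" + containerId);
  }
}

public class ContainerMetricsLeakDemo {

  // Simulates launching 'containers' containers; returns how many metrics
  // sources are still registered (i.e. still retained on the heap) afterwards.
  static int simulate(int containers, boolean unregisterOnFinish) {
    SimpleMetricsRegistry registry = new SimpleMetricsRegistry();
    for (int i = 0; i < containers; i++) {
      PerContainerMetrics metrics =
          new PerContainerMetrics("container_" + i, registry);
      if (unregisterOnFinish) {
        metrics.finished();  // fix-style cleanup keeps the registry bounded
      }
    }
    return registry.size();
  }

  public static void main(String[] args) {
    // Roughly the app count from the GridMix runs described in the report.
    System.out.println("without cleanup: " + simulate(18_700, false)
        + " sources retained");
    System.out.println("with cleanup:    " + simulate(18_700, true)
        + " sources retained");
  }
}

Run as written this prints 18700 retained sources for the leaky path and 0 for the cleaned-up path. In the real NodeManager each retained per-container metrics object carries its own usage and quantile state, which is consistent with the Memory Analyzer observation above that ContainerMetrics dominated the NM heap after tens of thousands of apps.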