[ https://issues.apache.org/jira/browse/YARN-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483569#comment-15483569 ]

Junping Du commented on YARN-5296:
----------------------------------

[~leftnoteasy], there is actually no need for this on branch-2.7. As I discussed 
with Jason on HADOOP-13362, it was just a misunderstanding caused by the container 
being removed in different places in branch-2.7 and branch-2. Please disregard that 
comment.
Also, I noticed you reopened YARN-5190 for branch-2.7, which seems to duplicate 
HADOOP-13362. Can you double-check and close it? Thx!
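
For context, here is a minimal sketch of the leak pattern behind this issue 
(hypothetical names and a toy registry, not the actual 
ContainersMonitorImpl/ContainerMetrics code): per-container metrics objects are 
cached in a map keyed by container id, and if nothing removes the entry when the 
container finishes, the NM keeps one metrics object alive per container it has 
ever run until it OOMs. The "different container remove places" above is 
essentially about where the equivalent of that remove step runs on each branch.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ContainerMetricsLeakSketch {

  // Toy stand-in for a per-container metrics object.
  static final class Metrics {
    final String containerId;
    Metrics(String containerId) { this.containerId = containerId; }
  }

  // One entry per container; this map lives for the NM's lifetime, so
  // entries that are never removed are never garbage collected.
  private static final Map<String, Metrics> REGISTRY = new ConcurrentHashMap<>();

  // Called when the monitor starts tracking a container.
  static Metrics forContainer(String containerId) {
    return REGISTRY.computeIfAbsent(containerId, Metrics::new);
  }

  // The step the leaky path misses: dropping the entry once the container
  // finishes, so its metrics object becomes unreachable and collectable.
  static void onContainerFinished(String containerId) {
    REGISTRY.remove(containerId);
  }
}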

> NMs going OutOfMemory because ContainerMetrics leak in ContainersMonitorImpl
> ----------------------------------------------------------------------------
>
>                 Key: YARN-5296
>                 URL: https://issues.apache.org/jira/browse/YARN-5296
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.9.0
>            Reporter: Karam Singh
>            Assignee: Junping Du
>             Fix For: 2.9.0, 3.0.0-alpha1
>
>         Attachments: YARN-5296-v2.1.patch, YARN-5296-v2.patch, 
> YARN-5296.patch, after v2 fix.png, before v2 fix.png
>
>
> Ran the tests in the following manner:
> 1. Ran GridMix (768 apps per run) sequentially around 17 times, executing about 
> 12.9K apps in total.
> 2. After 4-5 hrs, checked the NM heap using Memory Analyzer. It reported that 
> around 96% of the heap was being used by ContainerMetrics.
> 3. Ran 7 more GridMix runs so that around 18.2K apps had run in total. Checked 
> the NM heap using Memory Analyzer again; 96% of the heap was still being used 
> by ContainerMetrics.
> 4. Started one more GridMix run. While it was going on, at around 18.7K+ apps, 
> NMs started going down with OOM. Analysis of the NM heap with Memory Analyzer 
> showed the OOM was caused by ContainerMetrics.


