acedia28 opened a new pull request #3327:
URL: https://github.com/apache/hadoop/pull/3327


   <!--
     Thanks for sending a pull request!
       1. If this is your first time, please read our contributor guidelines: 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
       2. Make sure your PR title starts with JIRA issue id, e.g., 
'HADOOP-17799. Your PR title ...'.
   -->
   
   ### Description of PR
   
   YARN-10467 fixed a ContainerIdPBImpl object leak in 
RMNodeImpl.completedContainers.
   
   After applying the YARN-10467 patch and operating a cluster with a large 
number of nodes, we found that a similar heap leak still exists.
   
   In a heap dump taken after failover (so from a standby RM, not the active 
one), about 4.5 GB was used by ContainerIdPBImpl objects held via 
RMNodeImpl.completedContainers.
   
   There are two cases.
    
   
   1. Apps with 'KeepContainersAcrossApplicationAttempts' set are not cleared 
when they fail
   
   Even when 'KeepContainersAcrossApplicationAttempts' is set, we should 
clear RMAppAttemptImpl.justFinishedContainers.
   
   If an app attempt fails and is retried by a next attempt, clearing 
RMAppAttemptImpl.justFinishedContainers may be unnecessary, because the 
related information is handed over to the next attempt and eventually 
cleared there.
   
   However, when the whole app fails, there is no next attempt, and the heap 
leak occurs.
   
   (We found this case when a YARN Service application failed after multiple 
attempts because of OOMs in the AM.)
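
   The handover-or-clear logic above can be sketched as follows. This is a 
minimal simulation, not the real YARN classes: the `Attempt` class and the 
`onAttemptFailed` helper are hypothetical simplifications of what 
RMAppAttemptImpl does with justFinishedContainers.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical stand-in for RMAppAttemptImpl; only models the list
    // of finished-container statuses the attempt still holds.
    static class Attempt {
        final List<String> justFinishedContainers = new ArrayList<>();
    }

    // If the attempt is retried, its finished-container statuses are
    // handed over to the next attempt (which clears them eventually).
    // If the whole app failed there is no next attempt, so the list
    // must be cleared here -- otherwise every status stays reachable,
    // which is the leak this PR describes.
    static void onAttemptFailed(Attempt failed, Attempt next) {
        if (next != null) {
            next.justFinishedContainers.addAll(failed.justFinishedContainers);
        }
        failed.justFinishedContainers.clear();
    }

    public static void main(String[] args) {
        Attempt last = new Attempt();
        last.justFinishedContainers.add("container_1629000000000_0001_01_000002");
        onAttemptFailed(last, null); // whole app failed: no next attempt
        System.out.println(last.justFinishedContainers.size()); // prints 0
    }
}
```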
    
   
   2. App is killed explicitly by the user
   
   When an app is killed by the user via the 'yarn application -kill' CLI or 
the WebUI, RMAppAttemptImpl.amContainerFinished is not called because the 
app and app attempt states have already changed.
   
   To handle this, we call sendFinishedContainersToNMs for 
RMAppAttemptImpl.finishedContainersSentToAM and 
RMAppAttemptImpl.justFinishedContainers when the attempt transitions to 
'KILLED'.
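
   A hedged sketch of the kill-path idea, not the actual RMAppAttemptImpl 
code: when the attempt is moved to KILLED, amContainerFinished never runs, 
so both finished-container collections have to be flushed to the NMs 
explicitly. The field and method names below mirror the description above, 
but the classes are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Main {
    // nodeId -> finished container ids already reported to the AM
    static Map<String, List<String>> finishedContainersSentToAM = new HashMap<>();
    // finished container ids the AM has not pulled yet
    static List<String> justFinishedContainers = new ArrayList<>();
    // container ids acknowledged back to the NMs (stand-in for the
    // events that let RMNode drop entries from completedContainers)
    static List<String> acksDeliveredToNMs = new ArrayList<>();

    // Mimics the added sendFinishedContainersToNMs step: acknowledge
    // every finished container to its NM, then clear both lists so
    // the statuses become garbage-collectable.
    static void sendFinishedContainersToNMs() {
        for (List<String> containers : finishedContainersSentToAM.values()) {
            acksDeliveredToNMs.addAll(containers);
        }
        acksDeliveredToNMs.addAll(justFinishedContainers);
        finishedContainersSentToAM.clear();
        justFinishedContainers.clear();
    }

    public static void main(String[] args) {
        finishedContainersSentToAM.put("nm-1",
            new ArrayList<>(List.of("c1", "c2")));
        justFinishedContainers.add("c3");
        sendFinishedContainersToNMs(); // attempt transitioned to KILLED
        System.out.println(acksDeliveredToNMs.size() + " "
            + justFinishedContainers.size()); // prints 3 0
    }
}
```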
    
   
   We found and patched this on our 3.1.2 cluster, but trunk appears to have 
the same problem.
   
   The attached patch is based on trunk.
   
   Thanks!
   
   
   ### How was this patch tested?
   
   We reproduced both cases on our test clusters and took a heap dump for 
each case.
   Comparing heap dumps from before and after patching, we verified that the 
heap leak is gone.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue 
id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


