Ashish Ranjan created YARN-11808:
------------------------------------

             Summary: RM memory leak due to Opportunistic container request cancellation at App level
                 Key: YARN-11808
                 URL: https://issues.apache.org/jira/browse/YARN-11808
             Project: Hadoop YARN
          Issue Type: Bug
          Components: RM, yarn
            Reporter: Ashish Ranjan


2025-02-20T09:07:40,735 INFO  [2991] OpportunisticContainerContext: # of outstandingOpReqs in ANY (at priority = 68, allocationReqId = 50657, with capability = <memory:3072, vCores:2, network: 640Mi> ) : , with location = * ) : , numContainers = 0
2025-02-20T09:07:40,735 INFO  [2991] OpportunisticContainerContext: # of outstandingOpReqs in ANY (at priority = 68, allocationReqId = 50658, with capability = <memory:3072, vCores:2, network: 640Mi> ) : , with location = * ) : , numContainers = 0
2025-02-20T09:07:40,735 INFO  [2991] OpportunisticContainerContext: # of outstandingOpReqs in ANY (at priority = 68, allocationReqId = 50659, with capability = <memory:3072, vCores:2, network: 640Mi> ) : , with location = * ) : , numContainers = 0
 
 
numContainers = 0 denotes that we don't want any allocation of this type. But the map entry is persisted for the whole application attempt, wasting heap in the RM.
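As a rough illustration of the growth pattern (plain Java stand-ins, not the actual YARN classes): every cancelled ask arrives under a fresh allocationReqId, so a zero-count entry lands under a new key and is never reclaimed for the lifetime of the attempt.

{code:java}
import java.util.TreeMap;

// Simplified model of outstandingOpReqs: key ~ (priority, allocationReqId),
// value ~ requested container count. The real code keys on SchedulerRequestKey.
public class OutstandingOpReqsLeakDemo {
  public static void main(String[] args) {
    TreeMap<Long, Integer> outstandingOpReqs = new TreeMap<>();
    for (long allocationReqId = 50657; allocationReqId < 50657 + 100_000; allocationReqId++) {
      // AM cancels the ask: numContainers == 0, yet the entry is kept.
      outstandingOpReqs.put(allocationReqId, 0);
    }
    // The map only ever grows until the application attempt finishes.
    System.out.println("zero-count entries retained: " + outstandingOpReqs.size());
  }
}
{code}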
 
In the function addToOutstandingReqs, in case the container request from the AM is zero, we should make sure to clean up the corresponding map entry.
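A minimal sketch of that cleanup, assuming the upstream shape of OpportunisticContainerContext#addToOutstandingReqs (outstandingOpReqs keyed by SchedulerRequestKey, inner map keyed by capability); names are paraphrased from the class and this is an illustration, not an exact patch:

{code:java}
// Sketch only: assumes outstandingOpReqs is a
// TreeMap<SchedulerRequestKey, Map<Resource, EnrichedResourceRequest>>
// as used by the opportunistic container allocation code.
public void addToOutstandingReqs(List<ResourceRequest> resourceAsks) {
  for (ResourceRequest request : resourceAsks) {
    SchedulerRequestKey schedulerKey = SchedulerRequestKey.create(request);
    if (ResourceRequest.ANY.equals(request.getResourceName())
        && request.getNumContainers() == 0) {
      // The AM is cancelling this ask: drop the bookkeeping entry instead
      // of persisting a zero-count request for the rest of the attempt.
      Map<Resource, EnrichedResourceRequest> reqMap =
          outstandingOpReqs.get(schedulerKey);
      if (reqMap != null) {
        reqMap.remove(request.getCapability());
        if (reqMap.isEmpty()) {
          outstandingOpReqs.remove(schedulerKey);
        }
      }
      continue; // nothing left to record or log for this ask
    }
    // ... existing logic that records/updates the outstanding request ...
  }
}
{code}

Removing the emptied entries would also silence the repeated "numContainers = 0" log lines shown above.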

Multiple issues result from this:
 * Too much unnecessary logging.
 * Memory leak on the RM side.

 


