[ https://issues.apache.org/jira/browse/YARN-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16753884#comment-16753884 ]

Bilwa S T commented on YARN-9233:
---------------------------------

Thanks [~bibinchundatt] for the suggestion. I think it can be achieved in the
following way:

         RMContainerImpl#FinishedTransition() fires a CONTAINER_FINISHED event,
which triggers RMAppAttemptImpl#ContainerFinishedTransition, where every
finished container is added to the justFinishedContainers map that is later
sent to the AM. So we can skip adding containers that were never acquired.

       Treat a container as ACQUIRED if
SchedulerApplicationAttempt#newlyAllocatedContainers does not contain it, since
a container is removed from newlyAllocatedContainers once it has been acquired.
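
Below is a minimal, self-contained sketch of that idea, deliberately kept
outside the real RM classes: the class, field and method names
(FinishedContainer, acquiredByAM, onContainerFinished, ...) are stand-ins for
illustration only, not the actual RMAppAttemptImpl / SchedulerApplicationAttempt
code or the patch itself.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JustFinishedContainersSketch {

  /** Stand-in for the information kept about a finished container. */
  static class FinishedContainer {
    final String containerId;
    final String nodeId;
    // True once the AM has pulled the container via allocate(); in the proposal
    // this is derived from the container no longer being in newlyAllocatedContainers.
    final boolean acquiredByAM;

    FinishedContainer(String containerId, String nodeId, boolean acquiredByAM) {
      this.containerId = containerId;
      this.nodeId = nodeId;
      this.acquiredByAM = acquiredByAM;
    }

    @Override
    public String toString() {
      return containerId;
    }
  }

  // Mirrors the justFinishedContainers map: node -> finished containers that
  // will be reported to the AM on its next heartbeat.
  private final Map<String, List<FinishedContainer>> justFinishedContainers =
      new ConcurrentHashMap<>();

  /**
   * Proposed behaviour for ContainerFinishedTransition: only containers the AM
   * actually acquired are queued for the heartbeat; a container killed while
   * still ALLOCATED is dropped instead of being reported.
   */
  void onContainerFinished(FinishedContainer finished) {
    if (!finished.acquiredByAM) {
      // The AM never saw this container; reporting it as finished would make
      // the AM think one of its containers failed and re-request resources.
      return;
    }
    justFinishedContainers
        .computeIfAbsent(finished.nodeId, n -> new ArrayList<>())
        .add(finished);
  }

  // Tiny usage example: the killed-but-never-acquired container is not reported.
  public static void main(String[] args) {
    JustFinishedContainersSketch sketch = new JustFinishedContainersSketch();
    sketch.onContainerFinished(new FinishedContainer("container_1", "node1:8041", false));
    sketch.onContainerFinished(new FinishedContainer("container_2", "node1:8041", true));
    System.out.println(sketch.justFinishedContainers); // only container_2 is queued
  }
}
{code}

With a check like this, a container killed while still ALLOCATED never shows up
in the AM heartbeat response, while containers killed after the AM acquired
them are still reported exactly as they are today.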

> RM may report an allocated container that was killed (but not acquired by the AM) 
> to the AM, which can confuse the Spark AM
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: YARN-9233
>                 URL: https://issues.apache.org/jira/browse/YARN-9233
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Bilwa S T
>            Assignee: Bilwa S T
>            Priority: Major
>
> When the RM kills a container that is still in the ALLOCATED state (for 
> whatever reason), the container goes through the same state transitions to 
> the FINISHED state as containers in other states. Currently the RM does not 
> check whether the container was acquired by the AM, so every container that 
> reaches the FINISHED state is added to the justFinishedContainers list. As a 
> result, a container that was never obtained by the AM and was killed by the 
> RM is also returned to the AM through the heartbeat. The AM then re-requests 
> more resources than it needs, which can eventually push the number of 
> containers past the maximum limit.


