Rohit Agarwal commented on YARN-3415:

> if (!isAmRunning() && getLiveContainers().size() == 1 && !getUnmanagedAM()) {

A few points:
# If the above approach is valid - why do we need the {{getLiveContainers()}} 
check at all?
# I don't see any place where we are setting {{amRunning}} back to {{false}} 
once it is set to {{true}}. Should we do that for completeness?
# Why is there no {{getUnmanagedAM()}} check in {{removeApp}}, where we 
subtract from {{amResourceUsage}}? I think the conditions for adding to and 
subtracting from {{amResourceUsage}} should be kept as similar as possible.
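To illustrate point 3, here is a minimal sketch - not the actual FairScheduler code - of what symmetric add/remove accounting could look like. The class name, the {{amCounted}} flag, and the method names are all hypothetical; the point is only that the subtraction in {{removeApp}} mirrors exactly the condition that governed the addition.

```java
// Hypothetical sketch of symmetric AM-share accounting (not actual YARN code).
// An amCounted flag records whether this app's AM was added to amResourceUsage,
// so removal subtracts under exactly the condition that governed the addition.
public class AmShareAccounting {
    private long amResourceUsage = 0;    // hypothetical queue-level counter (MB)
    private boolean amCounted = false;   // was this app's AM added to the counter?
    private final boolean unmanagedAM;

    public AmShareAccounting(boolean unmanagedAM) {
        this.unmanagedAM = unmanagedAM;
    }

    // Called when the app's AM container starts.
    public void amContainerStarted(long amMemoryMb) {
        if (!amCounted && !unmanagedAM) {
            amResourceUsage += amMemoryMb;
            amCounted = true;
        }
    }

    // Called on app removal: the mirror image of the condition above.
    public void appRemoved(long amMemoryMb) {
        if (amCounted && !unmanagedAM) {
            amResourceUsage -= amMemoryMb;
            amCounted = false;
        }
    }

    public long getAmResourceUsage() {
        return amResourceUsage;
    }
}
```

With this shape, an unmanaged AM is never added and therefore never subtracted, and a managed AM is subtracted exactly once - regardless of how many live containers the app happens to have at removal time.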

> Non-AM containers can be counted towards amResourceUsage of a fairscheduler 
> queue
> ---------------------------------------------------------------------------------
>                 Key: YARN-3415
>                 URL: https://issues.apache.org/jira/browse/YARN-3415
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.6.0
>            Reporter: Rohit Agarwal
>            Assignee: zhihai xu
> We encountered this problem while running a Spark cluster. The 
> amResourceUsage for a queue became artificially high and then the cluster got 
> deadlocked because the maxAMShare constraint kicked in and no new AM got 
> admitted to the cluster.
> I have described the problem in detail here: 
> https://github.com/apache/spark/pull/5233#issuecomment-87160289
> In summary - the condition for adding the container's memory towards 
> amResourceUsage is fragile. It depends on the number of live containers 
> belonging to the app. We saw that the Spark AM went down without explicitly 
> releasing its requested containers, and then one of those containers' memory 
> was counted towards amResourceUsage.
> cc - [~sandyr]
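The fragility described in the quoted report can be reproduced with a simplified simulation - not the actual YARN code, and all names here are hypothetical. It models only the heuristic "whichever container is the only live one must be the AM":

```java
import java.util.ArrayList;
import java.util.List;

// Simplified simulation (not actual FairScheduler code) of the fragile
// heuristic: any container that is the only live one gets counted as the AM.
public class FragileAmHeuristic {
    long amResourceUsage = 0;                        // hypothetical counter (MB)
    final List<Long> liveContainers = new ArrayList<>();

    void allocate(long memoryMb) {
        liveContainers.add(memoryMb);
        if (liveContainers.size() == 1) {            // fragile: "only live container == AM"
            amResourceUsage += memoryMb;
        }
    }

    void containerFinished(long memoryMb) {
        liveContainers.remove(Long.valueOf(memoryMb));
    }
}
```

Under this heuristic, if a 512 MB AM goes down and a 4096 MB executor is then allocated as the only live container, the executor's memory is also added, leaving amResourceUsage at 4608 MB even though only one AM ever ran - the artificial inflation the report describes.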

This message was sent by Atlassian JIRA
