[ https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271794#comment-15271794 ]

Sidharta Seethana commented on YARN-4599:
-----------------------------------------

{quote}
We are likely better off setting a hard limit for all yarn containers so they 
don't interfere with anything else on the machine. We could disable OOM control on 
the cgroup corresponding to all yarn containers (not including NM) and if all 
containers are paused, the NM can decide what tasks to kill. This is 
particularly useful if we are oversubscribing the node.
{quote}

[~kasha] By tasks, do you mean entire containers? How would the NM know which 
containers are paused? We would probably need to hook into the OOM 
notification mechanism for this, and I am not sure how well that works in 
practice. That being said, it would be good to know that a container was 
killed because of an OOM event - otherwise it would be pretty hard to debug 
applications that run into this. (I think we might already have hit this in 
some internal testing with the memory cgroup.)
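
To make this concrete, here is a minimal sketch of what the NM could read and write under the cgroups v1 memory controller. The mount point, hierarchy, and the polling approach below are assumptions for illustration, not a proposal for this patch:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Sketch only, not YARN code: inspect and set cgroups v1 OOM state for a
 * container cgroup. The mount point and hierarchy below are assumptions.
 */
public class OomControlSketch {

  // Hypothetical cgroup hierarchy; the real path comes from NM configuration.
  private static final String CGROUP_MEM_ROOT = "/sys/fs/cgroup/memory/hadoop-yarn";

  /** Disable the kernel OOM killer for one container's cgroup. */
  static void disableOomKiller(String containerId) throws IOException {
    Path oomControl = Paths.get(CGROUP_MEM_ROOT, containerId, "memory.oom_control");
    // Writing "1" sets oom_kill_disable: tasks are paused at the limit instead of killed.
    Files.write(oomControl, "1".getBytes(StandardCharsets.UTF_8));
  }

  /** Returns true if the container's cgroup is currently stalled under OOM. */
  static boolean isUnderOom(String containerId) throws IOException {
    Path oomControl = Paths.get(CGROUP_MEM_ROOT, containerId, "memory.oom_control");
    for (String line : Files.readAllLines(oomControl, StandardCharsets.UTF_8)) {
      if (line.trim().equals("under_oom 1")) {
        return true;
      }
    }
    return false;
  }
}
{code}

Polling under_oom only shows what the kernel exposes in memory.oom_control; the actual notification mechanism in cgroups v1 is an eventfd registered through cgroup.event_control, which would need native code from the NM, and the NM would then still have to pick a victim container to kill.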



> Set OOM control for memory cgroups
> ----------------------------------
>
>                 Key: YARN-4599
>                 URL: https://issues.apache.org/jira/browse/YARN-4599
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.9.0
>            Reporter: Karthik Kambatla
>            Assignee: Karthik Kambatla
>         Attachments: yarn-4599-not-so-useful.patch
>
>
> YARN-1856 adds memory cgroup enforcement support. We should also explicitly 
> set OOM control so that containers are not killed as soon as they go over 
> their usage. Today, one could set the swappiness to control this, but some 
> clusters run with swap turned off.
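
For reference, the knobs the description refers to are plain files under the cgroups v1 memory controller. A minimal sketch, assuming a per-container cgroup already created by the NM, with purely illustrative paths and values:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

/** Illustrative only, not YARN code: the cgroups v1 files involved. */
public class MemoryCgroupKnobs {
  public static void main(String[] args) throws IOException {
    // Hypothetical per-container cgroup path; the NM derives the real one from its config.
    String cgroup = "/sys/fs/cgroup/memory/hadoop-yarn/container_example";

    // Hard limit enforced via YARN-1856; 2 GiB chosen purely as an example.
    write(cgroup + "/memory.limit_in_bytes", String.valueOf(2L * 1024 * 1024 * 1024));

    // The swappiness workaround: with swap present, a non-zero value lets the
    // cgroup swap at its limit instead of invoking the OOM killer; it has no
    // effect on clusters where swap is turned off.
    write(cgroup + "/memory.swappiness", "60");

    // One way to "set OOM control" explicitly: disable the kernel OOM killer so
    // tasks are paused at the limit rather than killed immediately.
    write(cgroup + "/memory.oom_control", "1");
  }

  private static void write(String file, String value) throws IOException {
    Files.write(Paths.get(file), value.getBytes(StandardCharsets.UTF_8));
  }
}
{code}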


