[ https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254441#comment-15254441 ]

Vinod Kumar Vavilapalli commented on YARN-4599:
-----------------------------------------------

bq. We are likely better off setting a hard limit for all YARN containers so they
don't interfere with anything else on the machine. We could disable OOM control on
the cgroup corresponding to all YARN containers (not including the NM), and if all
containers are paused, the NM can decide which tasks to kill. This is
particularly useful if we are oversubscribing the node.

This seems like our only choice, given that none of the options for recovering
(when the per-container limit is hit and the OOM killer is disabled) are
usable in practice for YARN containers.

/cc [~sidharta-s], [~shanekumpf]
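
For concreteness, here is a minimal Java sketch (not the attached patch) of what the NM-side mechanism could look like under cgroup v1: write "1" to memory.oom_control to disable the kernel OOM killer, then poll under_oom to detect when the cgroup's tasks are paused. The /sys/fs/cgroup/memory/hadoop-yarn path, the polling interval, and the victim-selection hook are all assumptions for illustration.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Sketch only: disable the kernel OOM killer on the cgroup holding all
 * YARN containers, then poll under_oom so the NM can choose a victim.
 * Assumes a cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory
 * with a hypothetical "hadoop-yarn" cgroup.
 */
public class OomControlSketch {

  private static final Path YARN_CGROUP =
      Paths.get("/sys/fs/cgroup/memory/hadoop-yarn");

  /** Writing "1" to memory.oom_control disables the OOM killer; tasks
   *  that hit the memory limit are paused instead of killed. */
  static void disableOomKiller() throws IOException {
    Files.write(YARN_CGROUP.resolve("memory.oom_control"), "1".getBytes());
  }

  /** memory.oom_control reads back "oom_kill_disable N" and "under_oom N";
   *  under_oom 1 means tasks in the cgroup are currently paused. */
  static boolean isUnderOom() throws IOException {
    for (String line :
        Files.readAllLines(YARN_CGROUP.resolve("memory.oom_control"))) {
      if (line.startsWith("under_oom")) {
        return line.trim().endsWith("1");
      }
    }
    return false;
  }

  public static void main(String[] args) throws Exception {
    disableOomKiller();
    while (true) {
      if (isUnderOom()) {
        // The NM's policy decision would go here, e.g. kill the most
        // recently launched or lowest-priority container.
        System.out.println("cgroup under OOM; NM should pick a victim");
      }
      Thread.sleep(1000);
    }
  }
}
{code}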

> Set OOM control for memory cgroups
> ----------------------------------
>
>                 Key: YARN-4599
>                 URL: https://issues.apache.org/jira/browse/YARN-4599
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.9.0
>            Reporter: Karthik Kambatla
>            Assignee: Karthik Kambatla
>         Attachments: yarn-4599-not-so-useful.patch
>
>
> YARN-1856 adds support for enforcing memory limits through cgroups. We should
> also explicitly set OOM control so that containers are not killed as soon as
> they exceed their limit. Today, one could tune swappiness to control this, but
> clusters with swap turned off exist.
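
For reference, a short sketch of the two cgroup v1 knobs the description contrasts: the hard limit that YARN-1856 enforces, and the swappiness workaround that does not help when swap is off. The per-container path and the values are illustrative, not from the patch.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

/** Illustrative values for a hypothetical per-container cgroup. */
public class MemoryCgroupKnobs {
  public static void main(String[] args) throws IOException {
    String cg = "/sys/fs/cgroup/memory/hadoop-yarn/container_01";
    // Hard limit (what YARN-1856 enforces): with the OOM killer enabled,
    // the kernel kills the container as soon as it exceeds this.
    Files.write(Paths.get(cg, "memory.limit_in_bytes"),
        String.valueOf(2L * 1024 * 1024 * 1024).getBytes()); // 2 GiB
    // Swappiness workaround: lets the container swap instead of being
    // killed, but is useless on clusters with swap turned off.
    Files.write(Paths.get(cg, "memory.swappiness"), "60".getBytes());
  }
}
{code}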


