[ https://issues.apache.org/jira/browse/YARN-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304225#comment-14304225 ]

Wangda Tan commented on YARN-3119:
----------------------------------

[~vinodkv],
That makes sense to me.
We can even relax the restriction (or make it configurable) of {{kill the
over-limit container if it exceeds scheduler.maximum-allocation-mb of the
cluster}} if we enforce {{only let containers grow when there is capacity
that is not allocated to any container}}.

Maybe we need to introduce a "resource tracker" in the NM to track used
resources vs. allocated resources.
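
Something like the following is what I have in mind for the tracker (class and
method names are just illustrative, not from the NM code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative NM-side resource tracker: records the memory allocated to each
 * running container and the memory it is actually using, so the monitor can
 * tell how much of the node's capacity is still unallocated headroom.
 */
public class NodeResourceTracker {
  private final long nodeMemCapacityBytes;
  private final Map<String, Long> allocatedBytes = new ConcurrentHashMap<>();
  private final Map<String, Long> usedBytes = new ConcurrentHashMap<>();

  public NodeResourceTracker(long nodeMemCapacityBytes) {
    this.nodeMemCapacityBytes = nodeMemCapacityBytes;
  }

  public void containerStarted(String containerId, long allocation) {
    allocatedBytes.put(containerId, allocation);
    usedBytes.put(containerId, 0L);
  }

  public void reportUsage(String containerId, long usage) {
    usedBytes.put(containerId, usage);
  }

  public void containerFinished(String containerId) {
    allocatedBytes.remove(containerId);
    usedBytes.remove(containerId);
  }

  /** Total memory currently used by all running containers. */
  public long totalUsed() {
    return usedBytes.values().stream().mapToLong(Long::longValue).sum();
  }

  /** Memory on the node that is not allocated to any container. */
  public long unallocatedCapacity() {
    long allocated =
        allocatedBytes.values().stream().mapToLong(Long::longValue).sum();
    return Math.max(0L, nodeMemCapacityBytes - allocated);
  }
}
{code}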

[~adhoot], I think what [~vinodkv] suggested is not to make the
total_mem_usage_check configurable, because that could make a container much
more likely to hit an OOM even when its usage is under its allocated resource,
which would be bad behavior.
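
To be concrete, the total-usage style check we are discussing looks roughly
like this (names and the ratio are illustrative, not the actual patch):

{code:java}
/**
 * Rough sketch of a total-usage check: an over-limit container is only flagged
 * for killing when aggregate usage on the node is close to the memory assigned
 * to containers. Names and the default ratio are illustrative only.
 */
public class AggregateMemoryCheck {
  private final long totalAssignedToContainersBytes;
  private final float enforcementRatio; // configurable, e.g. 0.9f

  public AggregateMemoryCheck(long totalAssignedToContainersBytes,
                              float enforcementRatio) {
    this.totalAssignedToContainersBytes = totalAssignedToContainersBytes;
    this.enforcementRatio = enforcementRatio;
  }

  public boolean shouldKill(long containerUsageBytes, long containerLimitBytes,
                            long aggregateUsageBytes) {
    boolean overItsOwnLimit = containerUsageBytes > containerLimitBytes;
    boolean nodeNearLimit = aggregateUsageBytes
        >= (long) (enforcementRatio * totalAssignedToContainersBytes);
    // The per-container limit is only enforced once the node is near its limit.
    return overItsOwnLimit && nodeNearLimit;
  }
}
{code}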

> Memory limit check need not be enforced unless aggregate usage of all 
> containers is near limit
> ----------------------------------------------------------------------------------------------
>
>                 Key: YARN-3119
>                 URL: https://issues.apache.org/jira/browse/YARN-3119
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>            Reporter: Anubhav Dhoot
>            Assignee: Anubhav Dhoot
>         Attachments: YARN-3119.prelim.patch
>
>
> Today we kill any container preemptively even if the total usage of 
> containers on that node is well within the limit for YARN. Instead, if we 
> enforce the memory limit only when the total usage of all containers is close 
> to some configurable ratio of the overall memory assigned to containers, we 
> can allow flexibility in container memory usage without adverse effects. This 
> is similar in principle to how cgroups uses soft_limit_in_bytes.
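
For reference, the cgroups analogy from the description: the hard limit is
always enforced, while the soft limit only matters when the node as a whole is
under memory pressure. A minimal sketch, assuming the cgroup v1 memory
hierarchy is mounted at /sys/fs/cgroup/memory and using a made-up group path:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Illustration of the cgroup v1 analogy: memory.soft_limit_in_bytes is only
 * enforced when the machine is under memory pressure, while
 * memory.limit_in_bytes is a hard ceiling. The group path and the 4x headroom
 * are made up for the example.
 */
public class SoftLimitExample {
  public static void main(String[] args) throws IOException {
    Path group = Paths.get("/sys/fs/cgroup/memory/yarn/container_example");
    long allocation = 2L * 1024 * 1024 * 1024; // the container's allocation

    // Soft limit at the allocation: the container may exceed it while the
    // node as a whole still has free memory.
    Files.write(group.resolve("memory.soft_limit_in_bytes"),
        Long.toString(allocation).getBytes(StandardCharsets.UTF_8));

    // Hard limit well above the allocation: crossing this always triggers
    // reclaim/OOM for the container.
    Files.write(group.resolve("memory.limit_in_bytes"),
        Long.toString(4 * allocation).getBytes(StandardCharsets.UTF_8));
  }
}
{code}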


