Karthik Kambatla commented on YARN-1856:

bq. Ideally oom_control, swappiness would be set by the AM/YARN client and 
should be container specific settings.
If we don't disable the OOM killer via oom_control, wouldn't the current 
implementation kill containers as soon as their usage spikes over the 
configured hard limit, which appears to be the container size? That seems too 
aggressive, especially since something as common as a delayed GC could 
trigger it. No?

I see your point about an application deciding whether its containers should be 
paused or killed. I think the default should be to pause them, i.e., leave the 
OOM killer disabled.

bq. In general, we need an API to set container executor specific settings - 
we've seen a need for this when adding Docker support and now for CGroups 
settings as well.
I'd like to understand this better. Maybe we should take this to another 
JIRA. I am open to discussing it offline before filing that JIRA and posting 
our thoughts there.

> cgroups based memory monitoring for containers
> ----------------------------------------------
>                 Key: YARN-1856
>                 URL: https://issues.apache.org/jira/browse/YARN-1856
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager
>    Affects Versions: 2.3.0
>            Reporter: Karthik Kambatla
>            Assignee: Varun Vasudev
>             Fix For: 2.9.0
>         Attachments: YARN-1856.001.patch, YARN-1856.002.patch, 
> YARN-1856.003.patch, YARN-1856.004.patch

This message was sent by Atlassian JIRA