[ 
https://issues.apache.org/jira/browse/YARN-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321685#comment-16321685
 ] 

Miklos Szegedi commented on YARN-7730:
--------------------------------------

[~yeshavora], thank you for reporting this. Please review YARN-7064; the docs 
in question are being updated there, along with additional configuration, so 
you might need to merge with that jira. I like the additional information that 
you provided in the description. Please note that swappiness is not precisely 
the amount of memory that can be swapped out, but rather the aggressiveness of 
the kernel's swapping algorithm.

> Add memory management configs to yarn-default
> ---------------------------------------------
>
>                 Key: YARN-7730
>                 URL: https://issues.apache.org/jira/browse/YARN-7730
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Yesha Vora
>            Priority: Minor
>
> Add the configurations below, with descriptions, to yarn-default.xml:
> {code}
> "yarn.nodemanager.resource.memory.enabled"
> // The default value is false; set it to true to enable cgroups-based 
> memory monitoring.
> "yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage"
> // The default value is 90.0f, which means that under memory congestion the 
> container can still keep/reserve 90% of its claimed value. It cannot be set 
> above 100 or to a negative value.
> "yarn.nodemanager.resource.memory.cgroups.swappiness"
> // Controls how much container memory may be swapped out. The default value 
> is 0, which means container memory cannot be swapped out. If not set, the 
> Linux cgroup default of 60 applies, which means up to 60% of memory can 
> potentially be swapped out when system memory runs low.{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
