[ 
https://issues.apache.org/jira/browse/YARN-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ruiliang updated YARN-11789:
----------------------------
    Description: 
1: CGroupsResourceCalculator doc description error

[NodeManagerCGroupsMemory|https://hadoop.apache.org/docs/r3.3.0/hadoop-yarn/hadoop-yarn-site/NodeManagerCGroupsMemory.html#:~:text=In%20order%20to%20enable%20cgroups%20based%20resource%20calculation%20set%20yarn.nodemanager.resource%2Dcalculator.class%20to%20org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator]

It should instead say: set yarn.nodemanager.container-monitor.process-tree.class to {{org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator}}.

 

The default is yarn.nodemanager.resource-calculator.class=org.apache.hadoop.yarn.util.ResourceCalculatorPlugin, and that property expects a ResourceCalculatorPlugin subclass, which org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator is not.
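The corrected setting would then look roughly like this in yarn-site.xml (a sketch based on the property named above, not on verified docs):

{code:xml}
<!-- Sketch: configure CGroupsResourceCalculator via the process-tree
     monitor property, not yarn.nodemanager.resource-calculator.class. -->
<property>
  <name>yarn.nodemanager.container-monitor.process-tree.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator</value>
</property>
{code}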

 
2: I have a question. I enabled Elastic Memory Control through cgroups based on the [NodeManagerCGroupsMemory documentation|https://hadoop.apache.org/docs/r3.3.0/hadoop-yarn/hadoop-yarn-site/NodeManagerCGroupsMemory.html#:~:text=Configuring%20elastic%20memory%20resource%20control]

When I have a container JVM process with -Xmx10240m (10 GB):
{code:java}
 exec /bin/bash -c "$JAVA_HOME/bin/java -server -Xmx10240m 
'-XX:OnOutOfMemoryError=echo OnOutOfMemory' xx --user-class-path 
file:$PWD/__app__.jar 1>/xx_1_000004/stdout 2>/xx_01_000004/stderr"
 {code}
Container code:
{code:java}
  // ...
  val gb1 = new obj5G()
  print(gb1.dataa)
  val gb2 = new obj5G()
  print(gb2.dataa.mkString("Array(", ", ", ")"))
  // ...
  // Intended as 5 GB; note that 1024 * 1024 * 1024 * 5 overflows Int and
  // actually evaluates to 1073741824 (1 GiB).
  class obj5G { var dataa = new Array[Byte](1024 * 1024 * 1024 * 5) }
  // ...
{code}
But the container always gets {{java.lang.OutOfMemoryError: Java heap space}}.
Did I misunderstand? Under what conditions does the container use elastic memory beyond the 10 GB limit?
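One check that may narrow this down (my own diagnostic sketch; the class name is made up): the -Xmx ceiling is enforced by the JVM itself, so a cgroup-level limit outside the JVM cannot raise the Java heap limit, and allocations past -Xmx throw this OOM regardless of the container's memory limit.

```java
// Diagnostic sketch (hypothetical class, not from the YARN docs): print the
// JVM's own heap ceiling inside the container. It tracks -Xmx, which the JVM
// enforces independently of any cgroup memory limit.
public class HeapCeiling {
    public static void main(String[] args) {
        // maxMemory() reflects the -Xmx-derived ceiling, not the cgroup limit.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("JVM heap ceiling (MiB): " + (maxBytes >> 20));
    }
}
```

Running this under the container's launch command would show whether the heap ceiling really is ~10240 MiB as configured.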
 
This question has puzzled me for a long time; please help me answer it, or point me to any reference case. Thank you very much.

 

This appears to be a spark-submit encapsulation problem:

[https://issues.apache.org/jira/browse/SPARK-51435]



> CGroupsResourceCalculator doc  Description error
> ------------------------------------------------
>
>                 Key: YARN-11789
>                 URL: https://issues.apache.org/jira/browse/YARN-11789
>             Project: Hadoop YARN
>          Issue Type: Improvement
>    Affects Versions: 3.3.0
>            Reporter: ruiliang
>            Priority: Minor



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
