[
https://issues.apache.org/jira/browse/SPARK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590396#comment-16590396
]
Sujith edited comment on SPARK-25073 at 8/23/18 3:43 PM:
---------------------------------------------------------
Yes, in the executor memory validation check we display the proper message,
considering both yarn.nodemanager.resource.memory-mb and
yarn.scheduler.maximum-allocation-mb, in the org.apache.spark.deploy.yarn.Client
class as below, whereas the AM container memory validation mentions only
yarn.scheduler.maximum-allocation-mb:
if (executorMem > maxMem) {
  throw new IllegalArgumentException(s"Required executor memory ($executorMemory" +
    s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of " +
    "this cluster! Please check the values of " +
    "'yarn.scheduler.maximum-allocation-mb' and/or " +
    "'yarn.nodemanager.resource.memory-mb' and increase the memory appropriately.")
}
I think we can mention both the yarn.nodemanager.resource.memory-mb and
yarn.scheduler.maximum-allocation-mb parameters in the AM memory validation as
well.
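A minimal sketch of the suggested change, applying the executor-memory wording to the AM check as well. The names amMemory, amMemoryOverhead, and maxMem stand in for the corresponding fields used in Client.verifyClusterResources; this is an illustration of the proposed message, not the actual patch:

```scala
// Hypothetical sketch: make the AM memory check point to both YARN settings,
// mirroring the executor-memory message quoted above.
def validateAmMemory(amMemory: Int, amMemoryOverhead: Int, maxMem: Int): Unit = {
  val amMem = amMemory + amMemoryOverhead
  if (amMem > maxMem) {
    throw new IllegalArgumentException(s"Required AM memory ($amMemory" +
      s"+$amMemoryOverhead MB) is above the max threshold ($maxMem MB) of " +
      "this cluster! Please check the values of " +
      "'yarn.scheduler.maximum-allocation-mb' and/or " +
      "'yarn.nodemanager.resource.memory-mb' and increase the memory appropriately.")
  }
}
```

With this wording, scenario 2 below (AM memory above yarn.nodemanager.resource.memory-mb but below yarn.scheduler.maximum-allocation-mb) would no longer suggest adjusting only the scheduler maximum.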
> Spark-submit on Yarn Task : When the yarn.nodemanager.resource.memory-mb
> and/or yarn.scheduler.maximum-allocation-mb is insufficient, Spark always
> reports an error request to adjust yarn.scheduler.maximum-allocation-mb
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-25073
> URL: https://issues.apache.org/jira/browse/SPARK-25073
> Project: Spark
> Issue Type: Bug
> Components: Spark Submit
> Affects Versions: 2.3.0, 2.3.1
> Reporter: vivek kumar
> Priority: Minor
>
> When the yarn.nodemanager.resource.memory-mb and/or
> yarn.scheduler.maximum-allocation-mb is insufficient, Spark *always* reports
> an error asking to adjust 'yarn.scheduler.maximum-allocation-mb'. The error
> message should instead point to both 'yarn.scheduler.maximum-allocation-mb'
> and/or 'yarn.nodemanager.resource.memory-mb'.
>
> *Scenario 1*. yarn.scheduler.maximum-allocation-mb =4g and
> yarn.nodemanager.resource.memory-mb =8g
> a. Launch shell on Yarn with am.memory less than nodemanager.resource memory
> but greater than yarn.scheduler.maximum-allocation-mb
> eg; *spark-shell --master yarn --conf spark.yarn.am.memory=5g*
> Error: java.lang.IllegalArgumentException: Required AM memory (5120+512 MB)
> is above the max threshold (4096 MB) of this cluster! Please increase the
> value of 'yarn.scheduler.maximum-allocation-mb'.
> at
> org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>
> *Scenario 2*. yarn.scheduler.maximum-allocation-mb =15g and
> yarn.nodemanager.resource.memory-mb =8g
> a. Launch shell on Yarn with am.memory greater than nodemanager.resource
> memory but less than yarn.scheduler.maximum-allocation-mb
> eg; *spark-shell --master yarn --conf spark.yarn.am.memory=10g*
> Error:
> java.lang.IllegalArgumentException: Required AM memory (10240+1024 MB) is
> above the max threshold (*8096 MB*) of this cluster! *Please increase the
> value of 'yarn.scheduler.maximum-allocation-mb'.*
> at
> org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>
> b. Launch shell on Yarn with am.memory greater than nodemanager.resource
> memory and yarn.scheduler.maximum-allocation-mb
> eg; *spark-shell --master yarn --conf spark.yarn.am.memory=17g*
> Error:
> java.lang.IllegalArgumentException: Required AM memory (17408+1740 MB) is
> above the max threshold (*8096 MB*) of this cluster! *Please increase the
> value of 'yarn.scheduler.maximum-allocation-mb'.*
> at
> org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
>
> *Expected*: The error message for scenario 2 should point to both
> 'yarn.scheduler.maximum-allocation-mb' and/or
> 'yarn.nodemanager.resource.memory-mb'.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)