Github user sujith71955 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22199#discussion_r212354862
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -338,13 +338,14 @@ private[spark] class Client(
       throw new IllegalArgumentException(s"Required executor memory ($executorMemory" +
         s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
         "Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or " +
-        "'yarn.nodemanager.resource.memory-mb'.")
+        "'yarn.nodemanager.resource.memory-mb' and increase the memory appropriately.")
--- End diff ---
As mentioned in the JIRA, the problem occurs when the memory defined by the
yarn.nodemanager.resource.memory-mb parameter is the smaller of the two, e.g.:
yarn.scheduler.maximum-allocation-mb = 15g and
yarn.nodemanager.resource.memory-mb = 8g
Launching spark-shell --master yarn --conf spark.yarn.am.memory=10g then fails
with the error below:
java.lang.IllegalArgumentException: Required AM memory (10240+1024 MB) is
above the max threshold (8096 MB) of this cluster! Please increase the value of
'yarn.scheduler.maximum-allocation-mb'.
This message is very confusing to the user, since Spark suggests increasing
'yarn.scheduler.maximum-allocation-mb', which in this scenario is already set
well above 10g, whereas the actual limit comes from
yarn.nodemanager.resource.memory-mb.
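
For clarity, below is a minimal, standalone sketch of how such a check could
name both parameters in both branches. It is only an illustration, not the
actual Client.scala code: MemoryCheckSketch, its method signature, and the
hard-coded values are hypothetical and merely reproduce the scenario from this
comment.

// A self-contained sketch (not Spark source); shapes mirror the diff above.
object MemoryCheckSketch {

  // maxMem stands in for the cap Spark reads from the RM when the application
  // is created; it reflects the effective cluster limit, which can be bound by
  // yarn.nodemanager.resource.memory-mb even when
  // yarn.scheduler.maximum-allocation-mb is set much higher.
  def verifyClusterResources(
      maxMem: Int,
      executorMemory: Int,
      executorMemoryOverhead: Int,
      amMemory: Int,
      amMemoryOverhead: Int): Unit = {
    val executorMem = executorMemory + executorMemoryOverhead
    if (executorMem > maxMem) {
      throw new IllegalArgumentException(s"Required executor memory ($executorMemory" +
        s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
        "Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or " +
        "'yarn.nodemanager.resource.memory-mb' and increase the memory appropriately.")
    }
    val amMem = amMemory + amMemoryOverhead
    if (amMem > maxMem) {
      // Naming both parameters here avoids the confusion described above: the
      // 8096 MB cap in the example comes from yarn.nodemanager.resource.memory-mb,
      // not from yarn.scheduler.maximum-allocation-mb.
      throw new IllegalArgumentException(s"Required AM memory ($amMemory" +
        s"+$amMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
        "Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or " +
        "'yarn.nodemanager.resource.memory-mb'.")
    }
  }

  def main(args: Array[String]): Unit = {
    // Reproduces the confusing case: AM memory 10240 MB + 1024 MB overhead
    // against an effective cap of 8096 MB reported by the cluster.
    verifyClusterResources(
      maxMem = 8096,
      executorMemory = 1024,
      executorMemoryOverhead = 384,
      amMemory = 10240,
      amMemoryOverhead = 1024)
  }
}

Mentioning both knobs in the AM branch as well would make the suggested fix
match whichever limit actually produced maxMem.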
---