[
https://issues.apache.org/jira/browse/SPARK-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324056#comment-14324056
]
Shekhar Bansal edited comment on SPARK-5861 at 2/17/15 11:14 AM:
-----------------------------------------------------------------
Thanks for the quick reply.
I know all this.
I meant yarn-client mode only.
In org.apache.spark.deploy.yarn.ClientArguments
{code}
amMemory = driver-memory
amMemoryOverhead = sparkConf.getInt("spark.yarn.driver.memoryOverhead",
  math.max((MEMORY_OVERHEAD_FACTOR * amMemory).toInt, MEMORY_OVERHEAD_MIN))
{code}
there is no check for spark.master, so yarn-client mode gets the same AM sizing as yarn-cluster mode.
In the above case, I think we are wasting about 5g of memory.
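For reference, plugging the 1.2.x defaults (MEMORY_OVERHEAD_FACTOR = 0.07, MEMORY_OVERHEAD_MIN = 384) into that expression for a 6g driver gives the request size behind the container from the description; the snippet below is just that arithmetic written out, not the actual Spark source:
{code}
// Back-of-the-envelope sketch of the AM sizing above for --driver-memory 6g.
// Constants are the Spark 1.2.x defaults; illustration only.
val amMemory = 6 * 1024                                        // 6144 MB, copied from driver memory
val amMemoryOverhead = math.max((0.07 * amMemory).toInt, 384)  // max(430, 384) = 430 MB
val requested = amMemory + amMemoryOverhead                    // 6574 MB requested for the AM container
{code}
Since the same expression runs regardless of deploy mode, a yarn-client AM ends up sized for a driver it never hosts.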
> [yarn-client mode] Application master should not use memory =
> spark.driver.memory
> ---------------------------------------------------------------------------------
>
> Key: SPARK-5861
> URL: https://issues.apache.org/jira/browse/SPARK-5861
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.2.1
> Reporter: Shekhar Bansal
>
> I am using
> {code}spark.driver.memory=6g{code}
> which creates an application master container of 7g
> (yarn.scheduler.minimum-allocation-mb=1024).
> The application master doesn't need 7g in yarn-client mode.
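For concreteness, the jump from 6g to 7g comes from YARN's resource normalization: container requests are rounded up to a multiple of yarn.scheduler.minimum-allocation-mb. A minimal sketch of that rounding, assuming the roughly 6574 MB request (6g plus the default overhead) worked out in the comment above:
{code}
// Hypothetical illustration of YARN's request rounding, not Spark or YARN source.
val requested = 6574                                            // 6g driver memory + ~430 MB overhead
val minAllocationMb = 1024                                      // yarn.scheduler.minimum-allocation-mb
val granted = math.ceil(requested.toDouble / minAllocationMb).toInt * minAllocationMb
// granted = 7168 MB, i.e. the 7g application master container reported above
{code}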