[ 
https://issues.apache.org/jira/browse/SPARK-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324054#comment-14324054
 ] 

Shekhar Bansal commented on SPARK-5861:
---------------------------------------

Thanks for the quick reply.
I am aware of all that; I am referring specifically to yarn-client mode.

In org.apache.spark.deploy.yarn.ClientArguments:
{code}
amMemory = driver-memory
amMemoryOverhead = sparkConf.getInt("spark.yarn.driver.memoryOverhead",
    math.max((MEMORY_OVERHEAD_FACTOR * amMemory).toInt, MEMORY_OVERHEAD_MIN))
{code}
There is no check on spark.master, so even in yarn-client mode the application master container is sized from spark.driver.memory plus overhead.

In the case above, I think we are wasting about 5g of memory.
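The arithmetic behind the 7g container can be sketched as follows. This is a minimal illustration, assuming the Spark 1.2.x constants MEMORY_OVERHEAD_FACTOR = 0.07 and MEMORY_OVERHEAD_MIN = 384 MB, and that YARN rounds each request up to a multiple of yarn.scheduler.minimum-allocation-mb; the function name is hypothetical, not Spark's API:

```python
import math

# Assumed constants from YarnSparkHadoopUtil in Spark 1.2.x
MEMORY_OVERHEAD_FACTOR = 0.07
MEMORY_OVERHEAD_MIN = 384  # MB

def am_container_memory(driver_memory_mb, yarn_min_allocation_mb=1024):
    """Hypothetical sketch of the AM container size in yarn-client mode."""
    # Overhead is a fraction of amMemory, floored at MEMORY_OVERHEAD_MIN
    overhead = max(int(MEMORY_OVERHEAD_FACTOR * driver_memory_mb), MEMORY_OVERHEAD_MIN)
    requested = driver_memory_mb + overhead
    # YARN rounds the request up to a multiple of the minimum allocation
    increments = math.ceil(requested / yarn_min_allocation_mb)
    return increments * yarn_min_allocation_mb

print(am_container_memory(6 * 1024))  # 6144 + 430 MB, rounded up -> 7168 MB (7g)
```

With spark.driver.memory=6g this yields a 7g AM container, matching the report, even though in yarn-client mode the driver itself runs on the client machine.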

> [yarn-client mode] Application master should not use memory = 
> spark.driver.memory
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-5861
>                 URL: https://issues.apache.org/jira/browse/SPARK-5861
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.1
>            Reporter: Shekhar Bansal
>             Fix For: 1.3.0, 1.2.2
>
>
> I am using
>  {code}spark.driver.memory=6g{code}
> which creates application master of 7g 
> (yarn.scheduler.minimum-allocation-mb=1024)
> which is a waste of resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
