[ https://issues.apache.org/jira/browse/SPARK-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324046#comment-14324046 ]

Sean Owen commented on SPARK-5861:
----------------------------------

Do you mean yarn-cluster mode? The driver is not run in an Application Master 
in yarn-client mode.

You're effectively setting a JVM heap size, but on YARN you have to request 
somewhat more than that for the container: a JVM process with a 6GB heap will 
eventually exceed 6GB of physical memory and be killed, because the JVM stores 
more than just objects on the heap. Spark builds in padding to account for 
this. You can control it (see spark.yarn.driver.memoryOverhead); it defaults 
to about 7% of the heap.
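
To make the arithmetic concrete, here is a sketch of how a 6g heap turns into 
a 7g container. It assumes the ~7% factor above plus a 384MB floor on the 
padding, which I believe is the default; the exact constants may differ by 
version:

{code}
// Container-size arithmetic, as a runnable Scala sketch.
// Assumptions: 7% overhead factor with a 384MB floor (believed defaults).
val driverMemoryMb = 6 * 1024                        // spark.driver.memory=6g
val overheadMb = math.max(384, (0.07 * driverMemoryMb).toInt)  // ~430MB padding
val requestedMb = driverMemoryMb + overheadMb        // 6574MB asked of YARN
val minAllocMb = 1024                    // yarn.scheduler.minimum-allocation-mb
// YARN rounds every request up to a multiple of the minimum allocation:
val grantedMb = math.ceil(requestedMb.toDouble / minAllocMb).toInt * minAllocMb
// grantedMb == 7168, i.e. the 7g container reported in this issue
{code}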

Change the padding, reduce your scheduler's minimum allocation, or reduce your 
driver memory. 
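
For example, the first and third options might look like this (illustrative 
values only; spark.yarn.driver.memoryOverhead is the padding setting I mean, 
and the class and jar names are placeholders):

{code}
spark-submit --master yarn-client \
  --driver-memory 5g \
  --conf spark.yarn.driver.memoryOverhead=384 \
  --class com.example.YourApp yourApp.jar
{code}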

I think this should be closed as not an issue.

> [yarn-client mode] Application master should not use memory = 
> spark.driver.memory
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-5861
>                 URL: https://issues.apache.org/jira/browse/SPARK-5861
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.1
>            Reporter: Shekhar Bansal
>             Fix For: 1.3.0, 1.2.2
>
>
> I am using
>  {code}spark.driver.memory=6g{code}
> which creates an application master of 7g 
> (yarn.scheduler.minimum-allocation-mb=1024),
> which is a waste of resources.


