[ https://issues.apache.org/jira/browse/SPARK-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15091702#comment-15091702 ]

Sean Owen commented on SPARK-12743:
-----------------------------------

I can't reproduce this; I started a simple standalone master and worker on my 
local machine, ran spark-submit as you suggest, and got exactly the executor 
memory I requested: I asked for 1g, whereas my local conf/spark-defaults.conf 
specified 2g. What else might be different, and are you sure?
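The reproduction described above might look roughly like the following sketch. The master URL, script names, example class, and jar path are assumptions for illustration, not taken from the report:

```shell
# Hypothetical reproduction sketch; hostnames, ports, and the application
# jar are placeholders.
# Assume conf/spark-defaults.conf on this machine contains:
#   spark.executor.memory  2g

# Start a local standalone master and worker
./sbin/start-master.sh
./sbin/start-slave.sh spark://localhost:7077

# Submit, overriding executor memory on the command line
./bin/spark-submit \
  --master spark://localhost:7077 \
  --conf spark.executor.memory=1g \
  --class com.example.MyApp \
  my-app.jar
# Observed here: executors launch with the requested 1g, not the 2g default.
```

Note that this submits in the default (client) deploy mode; the report below is specifically about cluster mode, which may be the relevant difference.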

> spark.executor.memory is ignored by spark-submit in Standalone Cluster mode
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-12743
>                 URL: https://issues.apache.org/jira/browse/SPARK-12743
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.6.0
>            Reporter: Alan Braithwaite
>
> When using spark-submit in standalone cluster mode, `--conf 
> spark.executor.memory=Xg` is ignored.  Instead, the value in 
> spark-defaults.conf on the standalone master is used.
> We're also using the legacy submission gateway, in case that's relevant 
> (we're in the process of setting up the REST gateway).
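A cluster-mode submission of the kind the report describes might look like the sketch below. The master host, port, class, jar path, and memory value are placeholders, not details from the report:

```shell
# Hypothetical cluster-mode submission over the legacy gateway
# (host, class, and jar path are placeholders).
# Reported behavior: the --conf value below is ignored, and the
# spark.executor.memory value from spark-defaults.conf on the
# standalone master is used instead.
./bin/spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --conf spark.executor.memory=4g \
  --class com.example.MyApp \
  my-app.jar
```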



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
