Hi All,

This is a bit late, but I found it helpful.  Piggy-backing on Wang Hao's
comment: Spark will ignore the "spark.executor.memory" setting if you add
it to SparkConf via:

conf.set("spark.executor.memory", "1g")


What you should actually do depends on how you run Spark.  I found some
"official" documentation for this in a bug report here:

https://issues.apache.org/jira/browse/SPARK-1264
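
For example, if you launch your job with spark-submit, the executor memory
can be passed on the command line rather than set in SparkConf. This is only
a rough sketch: the class name, jar path, and master URL are placeholders,
and the memory values are purely illustrative.

# placeholder class, master URL, and jar; adjust memory sizes for your cluster
spark-submit \
  --class com.example.MyApp \
  --master spark://your-master:7077 \
  --executor-memory 2g \
  --driver-memory 1g \
  path/to/your-app.jar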



Alex

On Fri, Jun 13, 2014 at 10:40 AM, Hao Wang <wh.s...@gmail.com> wrote:

> Hi, Laurent
>
> You can set spark.executor.memory and the heap size with the following methods:
>
> 1. in your conf/spark-env.sh:
>     export SPARK_WORKER_MEMORY=38g
>     export SPARK_JAVA_OPTS="-XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC -Xmx2g -XX:MaxPermSize=256m"
>
> 2. you can also set the executor memory and Java opts via spark-submit
> parameters.
>
> Check the Spark configuration and tuning docs; you can find full
> answers there.
>
>
> Regards,
> Wang Hao(王灏)
>
> CloudTeam | School of Software Engineering
> Shanghai Jiao Tong University
> Address: 800 Dongchuan Road, Minhang District, Shanghai, 200240
> Email: wh.s...@gmail.com
>
>
> On Thu, Jun 12, 2014 at 6:29 PM, Laurent T <laurent.thou...@ldmobile.net>
> wrote:
>
>> Hi,
>>
>> Can you give us a little more insight on how you used that file to solve
>> your problem?
>> We're having the same OOM as you were and haven't been able to solve it
>> yet.
>>
>> Thanks
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p7469.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>
>
