Hello,
I was looking for guidelines on what value to set executor memory to
(via spark.executor.memory, for example).
This seems to be important for avoiding OOM errors during tasks, especially in
no-swap environments (like AWS EMR clusters).
This setting is really about the executor JVM heap. Hence, in ord
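For example, one way to set it, assuming Spark 1.0+ (where spark-shell goes
through spark-submit and reads conf/spark-defaults.conf), with 2g purely as an
illustrative value:

# conf/spark-defaults.conf: applies to every app submitted from this machine
spark.executor.memory   2g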
Thank you, Hassan!
On 6 June 2014 03:23, hassan wrote:
> just use -Dspark.executor.memory=
Thank you, Andrew!
On 5 June 2014 23:14, Andrew Ash wrote:
> Oh, my apologies, that was for 1.0.
>
> For Spark 0.9 I did it like this:
>
> MASTER=spark://mymaster:7077 SPARK_MEM=8g ./bin/spark-shell -c
> $CORES_ACROSS_CLUSTER
>
> The downside of this, though, is that SPARK_MEM also sets the driver's
just use -Dspark.executor.memory=
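
Spelled out, since the bare flag does nothing on its own: the property has to
reach the driver JVM. On Spark 1.0 that would look something like this, with
2g as a placeholder value (for 0.9.x, see the SPARK_JAVA_OPTS note further
down):

./bin/spark-shell --driver-java-options "-Dspark.executor.memory=2g"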
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Setting-executor-memory-when-using-spark-shell-tp7082p7103.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Oh, my apologies, that was for 1.0.
For Spark 0.9 I did it like this:
MASTER=spark://mymaster:7077 SPARK_MEM=8g ./bin/spark-shell -c
$CORES_ACROSS_CLUSTER
The downside of this, though, is that SPARK_MEM also sets the driver's JVM to
be 8g, rather than just the executors. I think this is the reason f
Thank you, Andrew,
I am using Spark 0.9.1 and tried your approach like this:
bin/spark-shell --driver-java-options
"-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
I get:
bad option: '--driver-java-options'
There must be something different in my setup. Any ideas?
Thank you again,
Oleg
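
A note for 0.9.x, where spark-shell rejects --driver-java-options (as above):
one workaround that should have the same effect is SPARK_JAVA_OPTS, which
bin/spark-class folds into the driver JVM's options. The 2g size and master
URL here are placeholders:

export SPARK_JAVA_OPTS="-Dspark.executor.memory=2g"
MASTER=spark://mymaster:7077 ./bin/spark-shell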
Hi Oleg,
I set the size of my executors on a standalone cluster when using the shell
like this:
./bin/spark-shell --master $MASTER --total-executor-cores
$CORES_ACROSS_CLUSTER --driver-java-options
"-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
It doesn't seem particularly clean, but it works.
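
With the placeholders filled in, an invocation might look like this (all
values illustrative):

./bin/spark-shell --master spark://mymaster:7077 \
  --total-executor-cores 16 \
  --driver-java-options "-Dspark.executor.memory=4g"

This works, as far as I understand it, because the executor size is taken from
the driver's configuration at the moment executors are requested from the
cluster.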
Hi All,
Please help me set the executor JVM memory size. I am using the Spark shell,
and it appears that the executors are started with a predefined JVM heap of
512m as soon as the Spark shell starts. How can I change this setting? I tried
setting SPARK_EXECUTOR_MEMORY before launching the Spark shell:
export SPARK_EXECUTOR_MEMORY=...
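
One way to confirm what heap the executors actually received, assuming a
standalone worker you can shell into (CoarseGrainedExecutorBackend is the
executor process in this Spark version):

# run on a worker host; look for -Xmx in the executor JVM's arguments
jps -lvm | grep CoarseGrainedExecutorBackend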