Hi,
I have a machine with a lot of memory. Since, as I understand it, all the 
executors in a single worker run in the same JVM, I do not want to use just 
one worker for all of the memory. Instead I want to define multiple workers, 
each with less than 30GB of memory.

Looking at the documentation, I see this would be done by adding export 
SPARK_WORKER_INSTANCES=3 to spark-env.sh.
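For context, spark-env.sh would end up looking roughly like this (the memory 
and core values below are only illustrative, not tuned for my machine):

    export SPARK_WORKER_INSTANCES=3
    # cap what each worker can hand out to executors at under 30GB
    export SPARK_WORKER_MEMORY=28g
    export SPARK_WORKER_CORES=8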
The problem is that once I do this and start Spark, I get the following warning:
16/09/01 08:47:35 WARN SparkConf:
SPARK_WORKER_INSTANCES was detected (set to '3').
This is deprecated in Spark 1.0+.

Please instead use:
- ./spark-submit with --num-executors to specify the number of executors
- Or set SPARK_EXECUTOR_INSTANCES
- spark.executor.instances to configure the number of instances in the spark config.
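As I read them, those suggestions apply at submit time, along these lines (the 
master URL, class and jar names below are just placeholders):

    ./bin/spark-submit \
      --master spark://<master-host>:7077 \
      --num-executors 3 \
      --conf spark.executor.instances=3 \
      --class com.example.MyApp my-app.jar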


The problem is that these settings only take effect when submitting an 
application (as above); they have no effect when launching the workers 
themselves with the standalone start scripts.
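In case it matters, by "start scripts" I mean the standard standalone scripts, 
e.g.:

    ./sbin/start-slave.sh spark://<master-host>:7077
    # or, for the whole cluster:
    ./sbin/start-all.sh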

Is there a non-deprecated solution for this?
Thanks,
Assaf



