In Spark release 0.7.1, I added support for running multiple worker processes
on a single slave machine. I built it to allow performance testing of multiple
workers on a single machine in standalone mode.

Set the following in conf/spark-env.sh and bounce your cluster:

export SPARK_WORKER_INSTANCES=3

This will create 3 worker processes on every slave machine.
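If you want each of those workers to get an explicit share of the machine's
cores and memory rather than the defaults, you can combine it with the
standalone-mode settings SPARK_WORKER_CORES and SPARK_WORKER_MEMORY, which
apply per worker instance. The numbers below are just an illustration for a
24-core, 48 GB machine, not values from my setup:

export SPARK_WORKER_INSTANCES=3
export SPARK_WORKER_CORES=8      # cores each worker may use (3 x 8 = 24 total)
export SPARK_WORKER_MEMORY=15g   # memory each worker may hand to executors (3 x 15g = 45g, leaving headroom for the OS)

With more than one worker instance per machine, make sure the per-worker
cores and memory are sized so the totals still fit on the box.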

- Kalpit


