Hey,

I'm running the setup from
https://github.com/gettyimages/docker-spark/blob/master/docker-compose.yml
on a single m4.4xlarge node, trying to make full use of the machine. With:

    SPARK_WORKER_INSTANCES: 2
    SPARK_WORKER_CORES: 2

I still get only one worker: a single JVM process that peaks at about 200% CPU.

Do I have to start two org.apache.spark.deploy.worker.Worker instances explicitly myself? Or should I rather stick with:

    SPARK_WORKER_INSTANCES: 1
    SPARK_WORKER_CORES: 8
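In case it clarifies what I mean by "explicitly": below is a minimal sketch of how I imagine running two worker containers, loosely modeled on the linked compose file. The image tag, service names, and ports here are my assumptions, not copied from the repo.

    version: "2"
    services:
      master:
        image: gettyimages/spark        # assumed tag; the repo pins a specific version
        command: bin/spark-class org.apache.spark.deploy.master.Master -h master
        hostname: master
        ports:
          - "7077:7077"                 # master RPC port
          - "8080:8080"                 # master web UI
      worker:
        image: gettyimages/spark        # assumed tag
        command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
        environment:
          SPARK_WORKER_CORES: 2         # cores handed to each worker instance
        depends_on:
          - master
        # no host ports published for the worker, so the service can be
        # scaled to multiple containers without port conflicts

and then scale the worker service to two containers:

    docker-compose up -d
    docker-compose scale worker=2

If that is the intended way to get two workers on one box, fine, but I would have expected SPARK_WORKER_INSTANCES to take care of it.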
https://github.com/gettyimages/docker-spark/blob/master/docker-compose.yml but setting : SPARK_WORKER_INSTANCES: 2 SPARK_WORKER_CORES: 2 still creates only one worker. One JVM process that utilizes up to 200% CPU Do I have to also start 2 org.apache.spark.deploy.worker.Worker instances explicitly ? Or should I rather stick with : SPARK_WORKER_INSTANCES: 1 SPARK_WORKER_CORES: 8 ? -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Changing-number-of-workers-for-benchmarking-purposes-tp2606p26488.html Sent from the Apache Spark User List mailing list archive at Nabble.com. --------------------------------------------------------------------- To unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional commands, e-mail: user-h...@spark.apache.org