I'm trying to understand the basics of Spark internals. The Spark
documentation on submitting applications in local mode says this about
the spark-submit --master setting:

local[K] Run Spark locally with K worker threads (ideally, set this to the
number of cores on your machine).

local[*] Run Spark locally with as many worker threads as logical cores on
your machine.
Since all the data is stored on a single local machine, it seems it
would not benefit from distributed operations on RDDs.

How does it benefit, and what is going on internally when Spark
utilizes several logical cores?
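To make the question concrete, here is a minimal sketch in plain Python (not Spark itself, and CPython threads won't actually parallelize CPU-bound work because of the GIL) of the structure local[K] uses: the dataset is split into partitions, and K worker threads each process one partition as an independent task before the driver combines the results. The partition count and the squaring workload are illustrative assumptions, not anything from the Spark docs:

```python
from concurrent.futures import ThreadPoolExecutor

K = 4                        # analogous to --master local[4]
data = list(range(100_000))  # stand-in for an RDD's data

# Split the dataset into K "partitions", like an RDD with K partitions.
partitions = [data[i::K] for i in range(K)]

def process_partition(part):
    # Per-partition work: a map (square each element) plus a local reduce (sum).
    return sum(x * x for x in part)

# K worker threads each take a partition, mirroring Spark's local-mode
# scheduler handing one task per partition to its thread pool.
with ThreadPoolExecutor(max_workers=K) as pool:
    partial_sums = list(pool.map(process_partition, partitions))

# The "driver" combines the per-partition results into the final answer.
total = sum(partial_sums)
```

In Spark's local mode the same shape applies inside a single JVM (where threads do run in parallel on separate cores): independent tasks over independent partitions can proceed concurrently, which is how a single machine still benefits from the RDD model even without a cluster.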



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Apache-Spark-standalone-mode-number-of-cores-tp21342.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
