I just realized that --conf needs to be one key-value pair per line (one
--conf flag per pair). And somehow what I needed was
        --conf "spark.cores.max=2" \

However, when it was 
        --conf "spark.deploy.defaultCores=2" \

then one job would take up all 16 cores on the box.
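For reference, this is roughly the shape of the submit call that behaves
now (the master URL, class, and jar names are placeholders for ours):

        spark-submit \
          --master spark://master:7077 \
          --class com.example.KafkaConsumerApp \
          --conf "spark.cores.max=2" \
          consumer-app.jar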

What's the actual model here?

We've got 10 apps we want to submit, each consuming directly out of Kafka
topics. Now with max=2 per app I'm lacking a few cores. What should the
actual strategy be here?
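Back-of-the-envelope, assuming all 10 apps land on that one 16-core box:

        10 apps x spark.cores.max=2 = 20 cores requested
        cores on the box            = 16
        shortfall                   =  4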

How do the parameters below affect this strategy and each other?

"Set this (max) lower on a shared cluster to prevent users from grabbing the
whole cluster by default."  But why tie a consumer to 1 or 2 cores only?
isn't the idea to split RDD's into partitions and send them to multiple
workers?

spark.cores.max
Default: not set
When running on a standalone deploy cluster or a Mesos cluster in
"coarse-grained" sharing mode, the maximum amount of CPU cores to request
for the application from across the cluster (not from each machine). If
not set, the default will be spark.deploy.defaultCores on Spark's
standalone cluster manager, or infinite (all available cores) on Mesos.
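If I wanted one app capped at 4 cores cluster-wide, I gather either of
these would do it (spark-submit also has a --total-executor-cores
shorthand for standalone/Mesos; the jar name is a placeholder):

        spark-submit --conf "spark.cores.max=4" consumer-app.jar
        # or, equivalently on standalone/Mesos:
        spark-submit --total-executor-cores 4 consumer-app.jar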

spark.executor.cores
Default: 1 in YARN mode, all the available cores on the worker in
standalone mode.
The number of cores to use on each executor. For YARN and standalone mode
only. In standalone mode, setting this parameter allows an application to
run multiple executors on the same worker, provided that there are enough
cores on that worker. Otherwise, only one executor per application will
run on each worker.
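If I read that right, combining the two should give an app two 2-core
executors (4 cores total), possibly both on the same 16-core worker. A
sketch (jar name again a placeholder):

        spark-submit \
          --conf "spark.executor.cores=2" \
          --conf "spark.cores.max=4" \
          consumer-app.jar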

spark.deploy.defaultCores
Default: infinite
Default number of cores to give to applications in Spark's standalone
mode if they don't set spark.cores.max. If not set, applications always
get all available cores unless they configure spark.cores.max themselves.
Set this lower on a shared cluster to prevent users from grabbing the
whole cluster by default.
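Worth noting: this one is a property of the standalone master, not of an
individual application, which would explain why passing it via --conf on
spark-submit had no effect for me. Per the standalone docs it goes into
the master's environment, something like:

        # In conf/spark-env.sh on the master, then restart the master:
        export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=2"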



