Hi,

I'm running Spark Standalone on a single node with 16 cores. The master and 4
workers are running.

I'm trying to submit two applications via spark-submit and am getting the
following error when submitting the second one: "Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources".

The Web UI shows the first job taking up all the cores. 

I've tried setting spark.deploy.defaultCores, spark.cores.max, or both to a
value of 2:
spark-submit \
        --conf "spark.deploy.defaultCores=2 spark.cores.max=2" \
        ...
or
spark-submit \
        --conf "spark.deploy.defaultCores=2" \
        ...
Neither setting seems to get propagated. Or perhaps this isn't the right way
to pass them in?
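
In case my syntax is the issue: my understanding from the docs (and I may
well be wrong) is that each --conf flag takes a single key=value pair, and
that spark.deploy.defaultCores is a master-side setting that belongs in
SPARK_MASTER_OPTS in spark-env.sh rather than on spark-submit. So I would
have expected something like:

spark-submit \
        --conf "spark.cores.max=2" \
        ...

plus, on the master (restarted afterwards):

export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=2"

so that applications that don't set spark.cores.max only get 2 cores by
default. Is that the intended usage?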

Does spark.executor.cores play into this? I have it set to 2 in
spark-defaults.conf.
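
My (possibly mistaken) reading is that spark.executor.cores controls the
cores per executor, while spark.cores.max caps the total cores a single
application can take across the cluster, i.e. in spark-defaults.conf:

spark.executor.cores   2
spark.cores.max        2

Please correct me if I have that relationship wrong.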

Thanks.