I cut https://issues.apache.org/jira/browse/SPARK-10790 for this issue.
On Wed, Sep 23, 2015 at 8:38 PM, Jonathan Kelly wrote:
> AHA! I figured it out, but it required some tedious remote debugging of
> the Spark ApplicationMaster. (But now I understand the Spark codebase a
> little better than before, so I guess I'm not too put out. =P)
AHA! I figured it out, but it required some tedious remote debugging of the
Spark ApplicationMaster. (But now I understand the Spark codebase a little
better than before, so I guess I'm not too put out. =P)
Here's what's happening...
I am setting spark.dynamicAllocation.minExecutors=1 but am not
Thanks for the quick response!
spark-shell is indeed using yarn-client. I forgot to mention that I also
have "spark.master yarn-client" in my spark-defaults.conf file too.
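For anyone following along, a minimal spark-defaults.conf with the settings discussed so far might look something like the following (the minExecutors and shuffle-service values here are illustrative, not copied from my cluster; note that dynamic allocation on YARN requires the external shuffle service to be enabled):

    spark.master                          yarn-client
    spark.dynamicAllocation.enabled       true
    spark.dynamicAllocation.minExecutors  1
    spark.shuffle.service.enabled         true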
The working spark-shell and my non-working example application both display
spark.scheduler.mode=FIFO on the Spark UI. Is
Another update that doesn't make much sense:
The SparkPi example does work in yarn-cluster mode with dynamicAllocation.
That is, the following command works (and it works in yarn-client mode as
well):
spark-submit --deploy-mode cluster --class
org.apache.spark.examples.SparkPi spark-examples.jar 100
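For completeness, the same submission with the dynamic-allocation settings spelled out explicitly on the command line (rather than relying on spark-defaults.conf) would be roughly the following; the particular property values are illustrative:

    spark-submit --deploy-mode cluster \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --conf spark.dynamicAllocation.minExecutors=1 \
      --class org.apache.spark.examples.SparkPi \
      spark-examples.jar 100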
I'm running into a problem with YARN dynamicAllocation on Spark 1.5.0 after
using it successfully on an identically configured cluster with Spark 1.4.1.
I'm getting the dreaded warning "YarnClusterScheduler: Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources".