Hi all,
I ran SparkPi on YARN with dynamic allocation enabled, with
spark.dynamicAllocation.maxExecutors set equal to
spark.dynamicAllocation.minExecutors. I submitted the application using:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
yarn-cluster --driver-memory 4g --executor-memory 8g lib/spark-examples*.jar 200
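
For reference, the relevant settings were effectively the following (shown
here in spark-defaults.conf form; the value 50 matches the limit in the
DEBUG line further down):

    # dynamic allocation, with min pinned to max
    spark.dynamicAllocation.enabled        true
    spark.dynamicAllocation.minExecutors   50
    spark.dynamicAllocation.maxExecutors   50
    # required for dynamic allocation on YARN
    spark.shuffle.service.enabled          true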

The application was submitted successfully, but the AppMaster kept logging:

15/11/23 20:13:08 WARN cluster.YarnClusterScheduler: Initial job has
not accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources

and when I enabled DEBUG logging, I found this in the console:

15/11/23 20:24:00 DEBUG ExecutorAllocationManager: Not adding executors
because our current target total is already 50 (limit 50)

I have worked around it by modifying the code in
ExecutorAllocationManager.addExecutors. Is this a bug, or is it by design
that we can't set maxExecutors equal to minExecutors?
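
For context, here is a sketch of the kind of change I mean (paraphrased,
not my exact patch; based on the early-return path in Spark 1.5's
addExecutors that produces the DEBUG line above). The problem seems to be
that when the target already equals the cap, the manager returns without
ever sending the initial target to the cluster manager, so no executors
are requested at all:

    // In ExecutorAllocationManager.addExecutors (sketch, not the exact patch).
    // Original behaviour: if the current target is already at maxExecutors,
    // bail out without requesting anything -- which means the initial request
    // is never sent when minExecutors == maxExecutors.
    if (numExecutorsTarget >= maxNumExecutors) {
      logDebug(s"Not adding executors because our current target total " +
        s"is already $numExecutorsTarget (limit $maxNumExecutors)")
      numExecutorsToAdd = 1
      // Sketch of a fix: still (re)send the current target so YARN actually
      // allocates the executors (signature as in Spark 1.5's
      // ExecutorAllocationClient).
      client.requestTotalExecutors(
        numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
      return 0
    }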

Thanks,
Weber
