Hi Tingwen,

Would you mind sharing your changes in
ExecutorAllocationManager#addExecutors()?

From my understanding and testing, dynamic allocation should work when you
set the min and max number of executors to the same value.
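
For example, something like this should pin the application at a fixed 50
executors (the spark.dynamicAllocation.* and spark.shuffle.service.enabled
settings are standard Spark configs; the numbers are only illustrative, and
the external shuffle service is required for dynamic allocation on YARN):

  ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=50 \
    --conf spark.dynamicAllocation.maxExecutors=50 \
    --conf spark.shuffle.service.enabled=true \
    lib/spark-examples*.jar 200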

Please check your Spark and YARN logs to make sure the executors started
correctly; that warning means there are currently not enough resources to
submit the tasks.
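
For reference, here is a minimal standalone sketch of the upper-bound guard
in ExecutorAllocationManager.addExecutors that emits the "Not adding
executors ... (limit ...)" debug line quoted below. This is my own
simplification, not the actual Spark source; the names numExecutorsTarget
and maxNumExecutors just echo the ones in the log message.

  object AddExecutorsSketch {
    var numExecutorsTarget = 50  // bootstrapped to minExecutors at startup
    val maxNumExecutors    = 50  // spark.dynamicAllocation.maxExecutors

    // Returns the number of additional executors actually requested.
    def addExecutors(maxNumExecutorsNeeded: Int): Int = {
      if (numExecutorsTarget >= maxNumExecutors) {
        // The branch behind the quoted debug message.
        println(s"Not adding executors because our current target total " +
          s"is already $numExecutorsTarget (limit $maxNumExecutors)")
        return 0
      }
      val delta =
        math.min(maxNumExecutorsNeeded, maxNumExecutors) - numExecutorsTarget
      numExecutorsTarget += delta
      delta
    }

    def main(args: Array[String]): Unit = {
      // With min == max the target starts at the limit, so every call
      // short-circuits and no additional executors are ever requested.
      println(addExecutors(maxNumExecutorsNeeded = 200))  // prints 0
    }
  }

If the real code follows this shape, the target is already at the limit
when the manager starts, so the guard fires on every scheduling round.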

Thanks
Saisai


On Mon, Nov 23, 2015 at 8:41 PM, 谢廷稳 <xieting...@gmail.com> wrote:

> Hi all,
> I ran SparkPi on YARN with dynamic allocation enabled and set
> spark.dynamicAllocation.maxExecutors equal to
> spark.dynamicAllocation.minExecutors, then I submitted an application
> using:
> ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
> yarn-cluster --driver-memory 4g --executor-memory 8g
> lib/spark-examples*.jar 200
>
> Then the application was submitted successfully, but the AppMaster kept
> logging “15/11/23 20:13:08 WARN cluster.YarnClusterScheduler:
> Initial job has not accepted any resources; check your cluster UI to ensure
> that workers are registered and have sufficient resources”,
> and when I turned on DEBUG logging I found “15/11/23 20:24:00 DEBUG
> ExecutorAllocationManager: Not adding executors because our current target
> total is already 50 (limit 50)” in the console.
>
> I have fixed it by modifying the code in
> ExecutorAllocationManager.addExecutors. Is this a bug, or is it by design
> that we can’t set maxExecutors equal to minExecutors?
>
> Thanks,
> Weber
>
