[ https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550378#comment-14550378 ]

Saisai Shao commented on SPARK-7699:
------------------------------------

Yes, {{maxNeeded}} is the correct number if we have lots of pending tasks *at 
start*, but if not, wouldn't it be better to choose {{initialExecutors}} 
rather than {{minNumExecutors}} *at start*? The current code chooses 
{{minNumExecutors}}, so the configuration {{initialExecutors}} never gets a 
chance to take effect, not even at the beginning, which is exactly when this 
configuration is supposed to apply.
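
Just to illustrate the point, here is a minimal Scala sketch (not the actual {{ExecutorAllocationManager}} code; the method name and the {{maxNumExecutors}} clamp are assumptions) of how the start-up target could honor {{initialExecutors}} when {{maxNeeded}} is small:

{code:scala}
// Hypothetical illustration only -- not the actual ExecutorAllocationManager code.
// It shows the proposed behavior: at start, the target should not drop below
// initialExecutors just because there are few pending tasks (small maxNeeded).
object InitialTargetSketch {
  def initialTarget(maxNeeded: Int,
                    initialExecutors: Int,
                    minNumExecutors: Int,
                    maxNumExecutors: Int): Int = {
    // Current behavior (per the comment): the lower bound is minNumExecutors,
    // so initialExecutors is never honored. Proposed: use initialExecutors as
    // the lower bound at start, still capped by maxNumExecutors.
    val lowerBound = math.max(initialExecutors, minNumExecutors)
    math.min(math.max(maxNeeded, lowerBound), maxNumExecutors)
  }

  def main(args: Array[String]): Unit = {
    // With min=2, initial=3, max=4 and no pending tasks at start (maxNeeded = 0),
    // the expected initial target is 3, not 2.
    println(initialTarget(maxNeeded = 0, initialExecutors = 3,
                          minNumExecutors = 2, maxNumExecutors = 4)) // prints 3
  }
}
{code}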

> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> ----------------------------------------------------------------
>
>                 Key: SPARK-7699
>                 URL: https://issues.apache.org/jira/browse/SPARK-7699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run the spark-shell with above configurations, the initial executor 
> number is 2.


