[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550401#comment-14550401
 ] 

Sean Owen commented on SPARK-7699:
----------------------------------

Backing up a bit here: after re-reading the code in more detail, I am pretty 
certain initialExecutors has an effect. It's in {{addExecutors}}, which is the 
"increase executors" code path. Say minimum = 1, maximum = 10, initial = 3. At 
the first scheduled check, 6 executors are needed. The code path increases the 
initial value by 1, to 4, and requests 4 executors. The fact that the initial 
value was 3 matters here.
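To make that concrete, here is a rough Python sketch of the ramp-up step as I read it. This is a hypothetical model of {{addExecutors}}, not the actual Spark code; in particular, the doubling of the increment is my assumption about the ramp-up policy:

```python
def add_executors(target, needed, delta, max_executors):
    # Hypothetical model of the ramp-up step, not Spark's actual
    # ExecutorAllocationManager code.
    # Raise the target by the current increment, but never above
    # the configured maximum or the number actually needed.
    new_target = min(target + delta, max_executors, needed)
    # Assumed: the increment doubles while we are still ramping up,
    # and resets to 1 once the target gets capped.
    next_delta = delta * 2 if new_target == target + delta else 1
    return new_target, next_delta

# minimum = 1, maximum = 10, initial = 3, 6 executors needed:
# the first check moves the target from 3 up to 4, so the initial
# value of 3 matters.
target, delta = add_executors(3, 6, 1, 10)
```

Under this model the target climbs 3, 4, 6, ... toward the needed count rather than jumping straight to it.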

However, yes, the code does seem to intentionally ramp down immediately if load 
is less than the target. It doesn't choose the minimum; it chooses a target 
number of executors equal to the required amount (which must be at least the 
minimum). I think that is by design; there's much less reason to ramp *down* 
slowly.
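The ramp-down side is simpler. A one-line sketch of the behavior described above (again a hypothetical model, not the actual code):

```python
def ramp_down_target(needed, min_executors):
    # Drop straight to the required number of executors,
    # but never below the configured minimum.
    return max(needed, min_executors)

# With minimum = 2 and only 1 executor needed, the target stays at 2.
```

This is also why, in the reporter's scenario below, the immediate drop to the required amount can mask the initial value.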

But it's not true that initialExecutors has no effect, which seems to be the 
thrust of this JIRA. It has an effect in all cases; in one code path, however, 
its effect is immediately mooted, by design it seems.

> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> ----------------------------------------------------------------
>
>                 Key: SPARK-7699
>                 URL: https://issues.apache.org/jira/browse/SPARK-7699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run spark-shell with the above configurations; the initial executor 
> number is 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
