[
https://issues.apache.org/jira/browse/SPARK-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372529#comment-15372529
]
Apache Spark commented on SPARK-16435:
--------------------------------------
User 'jerryshao' has created a pull request for this issue:
https://github.com/apache/spark/pull/14149
> Behavior changes if initialExecutor is less than minExecutor for dynamic
> allocation
> -----------------------------------------------------------------------------------
>
> Key: SPARK-16435
> URL: https://issues.apache.org/jira/browse/SPARK-16435
> Project: Spark
> Issue Type: Bug
> Components: Scheduler, Spark Core
> Affects Versions: 2.0.0
> Reporter: Saisai Shao
> Priority: Minor
>
> After SPARK-13723, the behavior changed for the case where
> {{spark.dynamicAllocation.initialExecutors}} is less than
> {{spark.dynamicAllocation.minExecutors}}.
> initialExecutors < minExecutors is an invalid setting.
> h4. Before SPARK-13723
> If initialExecutors < minExecutors, Spark will throw an exception:
> {code}
> java.lang.IllegalArgumentException: requirement failed: initial executor
> number xxx must between min executor number xxx and max executor number xxx
> {code}
> This clearly lets the user know that the current configuration is invalid.
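> For illustration, a minimal standalone sketch of that kind of check could look like the following (the names and message wording are illustrative, not the exact Spark code):
> {code}
> // Hypothetical validation in the spirit of the pre-SPARK-13723 check;
> // parameter names and message wording are illustrative only.
> def validateInitialExecutors(initial: Int, min: Int, max: Int): Unit = {
>   require(initial >= min && initial <= max,
>     s"initial executor number $initial must be between min executor number $min " +
>       s"and max executor number $max")
> }
>
> // validateInitialExecutors(initial = 1, min = 3, max = 10)
> // => java.lang.IllegalArgumentException: requirement failed: ...
> {code}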
> h4. After SPARK-13723
> Because {{spark.executor.instances}} is now also taken into account, the initial
> number is the maximum of minExecutors, initialExecutors, and numExecutors.
> This silently ignores the situation where initialExecutors < minExecutors.
> So, at the very least, we should add a warning log to let the user know that this
> is an invalid configuration.
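> As a rough sketch (plain Scala, illustrative names only, not the actual Spark implementation), the post-SPARK-13723 calculation together with the proposed warning could look like:
> {code}
> // Hypothetical sketch of the max-based initial target plus the proposed warning;
> // names are illustrative and logging is stubbed with println.
> def initialTargetExecutors(minExecutors: Int,
>                            initialExecutors: Int,
>                            numExecutors: Int): Int = {
>   if (initialExecutors < minExecutors) {
>     // Proposed: warn instead of silently ignoring the invalid setting.
>     println(s"WARN: spark.dynamicAllocation.initialExecutors ($initialExecutors) is " +
>       s"less than spark.dynamicAllocation.minExecutors ($minExecutors); " +
>       "the larger value will be used.")
>   }
>   // The initial target is the maximum of the three related settings.
>   Seq(minExecutors, initialExecutors, numExecutors).max
> }
> {code}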
> What do you think [~tgraves], [~rdblue]?