Sandy Ryza updated SPARK-4585:
------------------------------
Issue Type: Improvement (was: Bug)
> Spark dynamic executor scaling uses the upper limit as the default.
> -------------------------------------------------------------------
>
> Key: SPARK-4585
> URL: https://issues.apache.org/jira/browse/SPARK-4585
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core, YARN
> Affects Versions: 1.1.0
> Reporter: Chengxiang Li
>
> With SPARK-3174, one can configure a minimum and maximum number of executors
> for a Spark application on YARN. However, the application always starts with
> the maximum. It seems more reasonable, at least for Hive on Spark, to start
> from the minimum and scale up toward the maximum as needed. A sketch of the
> configuration in question follows.
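>
> For reference, a minimal sketch of the SPARK-3174-style setup being described,
> using the dynamic allocation settings introduced there; the app name and the
> executor counts (2 and 20) are illustrative, not prescribed by this issue:
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     // Dynamic allocation on YARN: the min/max bounds come from SPARK-3174.
>     val conf = new SparkConf()
>       .setAppName("DynamicAllocationSketch")              // hypothetical name
>       .set("spark.dynamicAllocation.enabled", "true")
>       .set("spark.dynamicAllocation.minExecutors", "2")   // lower bound
>       .set("spark.dynamicAllocation.maxExecutors", "20")  // upper bound; also the
>                                                           // initial count today,
>                                                           // which this issue questions
>       .set("spark.shuffle.service.enabled", "true")       // external shuffle service,
>                                                           // required for dynamic
>                                                           // allocation on YARN
>     val sc = new SparkContext(conf)
>
> Starting at minExecutors instead would let a lightly loaded application (such
> as an idle Hive on Spark session) hold only the resources it needs and grow
> toward maxExecutors as tasks queue up.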