Yue Ma created SPARK-22008:
------------------------------
Summary: Spark Streaming Dynamic Allocation auto fix
maxNumExecutors
Key: SPARK-22008
URL: https://issues.apache.org/jira/browse/SPARK-22008
Project: Spark
Issue Type: Improvement
Components: DStreams
Affects Versions: 2.2.0
Reporter: Yue Ma
Priority: Minor
In Spark Streaming dynamic resource allocation (DRA), the metric used to decide
whether to add or remove executors is the ratio of batch processing time to batch
duration (R), and the parameter "spark.streaming.dynamicAllocation.maxExecutors"
sets the maximum number of executors. Currently this does not work well with
Spark Streaming, for several reasons:
(1) For example, if the maximum number of executors we actually need is 10 but
"spark.streaming.dynamicAllocation.maxExecutors" is set to 15, the 5 extra
executors are wasted (see the configuration sketch after this list).
(2) If the number of topic partitions changes, the number of partitions of the
KafkaRDD, and therefore the number of tasks in a stage, changes too. The maximum
number of executors we need changes as well, so maxExecutors should follow the
number of tasks.
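A minimal configuration sketch of scenario (1), assuming the standard streaming
DRA keys ("spark.streaming.dynamicAllocation.enabled" / "...maxExecutors"); the
concrete values (10 needed vs. 15 configured) are illustrative only:

{code:scala}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// The DRA upper bound is a hand-picked static value; if the job never needs
// more than 10 executors, allowing it to scale toward 15 leaves 5 of them idle.
val conf = new SparkConf()
  .setAppName("streaming-dra-example")
  .set("spark.streaming.dynamicAllocation.enabled", "true")
  .set("spark.streaming.dynamicAllocation.maxExecutors", "15")

val ssc = new StreamingContext(conf, Seconds(10))
{code}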
The goal of this JIRA is to adjust maxNumExecutors automatically: using a
SparkListener, when a stage is submitted, first figure out how many executors we
need, then update maxNumExecutors. A rough sketch of that idea follows.
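This is only a minimal sketch, not the actual patch: it assumes executor cores
are read from "spark.executor.cores", that one task runs per core, and that the
streaming ExecutorAllocationManager exposes some hook for updating its upper
bound (modeled here as a hypothetical updateMaxExecutors callback).

{code:scala}
import org.apache.spark.SparkConf
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageSubmitted}

class MaxExecutorsListener(conf: SparkConf,
                           updateMaxExecutors: Int => Unit) extends SparkListener {

  private val coresPerExecutor = conf.getInt("spark.executor.cores", 1)

  override def onStageSubmitted(stageSubmitted: SparkListenerStageSubmitted): Unit = {
    // The task count of the submitted stage follows the number of KafkaRDD
    // partitions, so it tracks changes in the topic's partition count.
    val numTasks = stageSubmitted.stageInfo.numTasks
    // Executors needed to run the whole stage in one wave, one task per core.
    val neededExecutors =
      math.max(1, math.ceil(numTasks.toDouble / coresPerExecutor).toInt)
    updateMaxExecutors(neededExecutors)
  }
}
{code}

Such a listener could be registered through SparkContext#addSparkListener; where
exactly the new bound is pushed into the streaming ExecutorAllocationManager is
the open design question of this ticket.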
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]