[ https://issues.apache.org/jira/browse/SPARK-24815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17741449#comment-17741449 ]
Sandish Kumar HN commented on SPARK-24815:
------------------------------------------
[~pavan0831] I'm excited to see what you come up with. You can create a pull
request with the design details, or you can attach a Google design doc to the
pull request. Either way, I'm looking forward to reviewing your work.
> Structured Streaming should support dynamic allocation
> ------------------------------------------------------
>
> Key: SPARK-24815
> URL: https://issues.apache.org/jira/browse/SPARK-24815
> Project: Spark
> Issue Type: Improvement
> Components: Scheduler, Spark Core, Structured Streaming
> Affects Versions: 2.3.1
> Reporter: Karthik Palaniappan
> Priority: Minor
>
> For batch jobs, dynamic allocation is very useful for adding and removing
> containers to match the actual workload. On multi-tenant clusters, it ensures
> that a Spark job takes no more resources than necessary. In cloud
> environments, it enables autoscaling.
> However, if you set spark.dynamicAllocation.enabled=true and run a structured
> streaming job, the batch dynamic allocation algorithm kicks in: it requests
> more executors when the task backlog exceeds a certain size, and removes
> executors when they have been idle for a certain period of time.
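>
> To make the current behavior concrete, here is a minimal sketch of the batch
> dynamic allocation settings that a streaming query ends up inheriting today;
> the timeout values shown are only illustrative, not recommendations.
>
>     import org.apache.spark.sql.SparkSession
>
>     // Batch-oriented dynamic allocation settings; these are what a
>     // structured streaming job currently runs under.
>     val spark = SparkSession.builder()
>       .appName("streaming-with-batch-dynamic-allocation")
>       .config("spark.dynamicAllocation.enabled", "true")
>       // Request executors once tasks have been backlogged this long.
>       .config("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
>       // Release executors after they sit idle this long.
>       .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
>       .config("spark.dynamicAllocation.minExecutors", "1")
>       .config("spark.dynamicAllocation.maxExecutors", "20")
>       .getOrCreate()
>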
> Quick thoughts:
> 1) Dynamic allocation should be pluggable, rather than hardcoded to a
> particular implementation in SparkContext.scala (this should be a separate
> JIRA); a rough sketch of what such an interface might look like follows
> after this list.
> 2) We should make a structured streaming algorithm that's separate from the
> batch algorithm. Eventually, continuous processing might need its own
> algorithm.
> 3) Spark should print a warning if you run a structured streaming job while
> Spark Core's dynamic allocation is enabled.
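>
> As a rough illustration of point 1, the pluggable piece could be a small
> interface that the scheduler consults instead of the hardcoded batch logic.
> Everything below (the trait name, its methods, and the streaming-aware
> implementation) is hypothetical and only sketches the shape of the idea; it
> is not existing Spark API.
>
>     import org.apache.spark.sql.streaming.StreamingQueryProgress
>
>     // Hypothetical extension point: SparkContext would delegate scaling
>     // decisions to whichever strategy is configured.
>     trait ExecutorAllocationStrategy {
>       /** How many executors the application currently needs. */
>       def targetExecutors(): Int
>     }
>
>     // Hypothetical streaming-aware strategy: scale on whether the query
>     // keeps up with its source rather than on task backlog.
>     class StreamingAllocationStrategy(
>         currentExecutors: () => Int,
>         latestProgress: () => Option[StreamingQueryProgress],
>         minExecutors: Int,
>         maxExecutors: Int) extends ExecutorAllocationStrategy {
>
>       override def targetExecutors(): Int = {
>         val current = currentExecutors()
>         val target = latestProgress() match {
>           // Falling behind: data arrives faster than it is processed.
>           case Some(p) if p.inputRowsPerSecond > p.processedRowsPerSecond * 1.1 =>
>             current + 1
>           // Comfortably ahead: release an executor.
>           case Some(p) if p.processedRowsPerSecond > p.inputRowsPerSecond * 2.0 =>
>             current - 1
>           case _ => current
>         }
>         math.min(maxExecutors, math.max(minExecutors, target))
>       }
>     }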