The built-in resource managers do support dynamic allocation for
auto-scaling:
https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation
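For reference, enabling dynamic allocation is a matter of a few settings; a minimal sketch in spark-defaults.conf might look like this (the min/max executor counts are illustrative, and on standalone/YARN you also need the external shuffle service so executors can be removed without losing shuffle data):

```
spark.dynamicAllocation.enabled          true
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.maxExecutors     20
spark.shuffle.service.enabled            true
```

The same keys can be passed per-job via --conf on spark-submit.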

Provisioning the resources for the resource manager is a different question,
one that Spark itself doesn't address; it won't spin up EC2/Azure instances
for you, but it gives you a resource manager to run on them. The 'serverless'
part you refer to here is really answering the question of provisioning VMs
for a particular hosted Spark service. That isn't part of Spark per se.


On Sat, Mar 28, 2020 at 10:38 AM Mania Abdi <abdi...@husky.neu.edu> wrote:

> Hi everyone,
>
> I came across Databricks serverless analytics on Spark
> <https://databricks.com/blog/2017/06/07/databricks-serverless-next-generation-resource-management-for-apache-spark.html>
> and serverless pools. I was wondering whether Apache Spark also supports
> these pools, and whether the Apache Spark resource manager supports
> preemptive execution and auto-scaling?
>
> Regards
> Mania
>
