[
https://issues.apache.org/jira/browse/SPARK-27495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Thomas Graves updated SPARK-27495:
----------------------------------
Issue Type: Epic (was: Story)
> SPIP: Support Stage level resource configuration and scheduling
> ---------------------------------------------------------------
>
> Key: SPARK-27495
> URL: https://issues.apache.org/jira/browse/SPARK-27495
> Project: Spark
> Issue Type: Epic
> Components: Spark Core
> Affects Versions: 3.0.0
> Reporter: Thomas Graves
> Priority: Major
>
> Currently Spark supports CPU-level scheduling, and we are adding
> accelerator-aware scheduling with
> https://issues.apache.org/jira/browse/SPARK-24615, but both of those
> schedule via application-level configurations. That means there is one
> configuration set for the entire lifetime of the application, and the
> user can't change it between Spark jobs/stages within that application.
> Users often have different requirements for different stages of their
> application, so they want to be able to configure, at the stage level,
> what resources a given stage requires.
> For example, I might start a Spark application that first does some ETL
> work needing lots of cores to run many tasks in parallel; once that is
> done, I want to run an ML job, and at that point I want GPUs, fewer
> CPUs, and more memory.
> With this Jira we want to add the ability for users to specify the resources
> for different stages.
> Note that https://issues.apache.org/jira/browse/SPARK-24615 included
> some discussion of this, but that part was removed from its scope.
> We should come up with a proposal on how to do this.
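The ETL-then-ML scenario above can be sketched as a toy model of per-stage resource profiles layered over an application-level default. All names here are hypothetical illustrations of the concept, not the Spark API this SPIP will eventually define:

```python
# Toy sketch of stage-level resource configuration (hypothetical names,
# not the actual Spark API): each stage may override the application-wide
# default resource profile, instead of one fixed config for the whole app.

APP_DEFAULT = {"cores": 4, "gpus": 0, "memory_gb": 8}

STAGE_PROFILES = {
    # ETL stage: many cores to run many tasks in parallel, no GPUs
    "etl": {"cores": 16, "gpus": 0, "memory_gb": 8},
    # ML stage: fewer CPUs, a GPU, and more memory
    "ml": {"cores": 2, "gpus": 1, "memory_gb": 32},
}

def resources_for(stage: str) -> dict:
    """Return the resource profile for a stage, falling back to the
    application-level default when no stage-level override exists."""
    return {**APP_DEFAULT, **STAGE_PROFILES.get(stage, {})}

print(resources_for("etl"))      # stage override: many cores, no GPUs
print(resources_for("ml"))       # stage override: GPU and more memory
print(resources_for("shuffle"))  # no override: application-level default
```

The design question the proposal needs to answer is exactly the lookup step above: where per-stage requests are declared, and how the scheduler falls back to application-level settings when a stage says nothing.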
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)