> > - Advisory user input (e.g. a way to say that after X is done I know I
> > need Y, where Y might be a bunch of GPU machines)

Are you thinking of something more advanced than Stage Level
Scheduling? Or perhaps something configured differently, or
prestarting things you know you will need?
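
For context, stage-level scheduling (available on RDDs since Spark
3.1) already lets a job request different executor resources for a
later stage. A rough sketch, where the resource amounts and the names
preprocessed / runGpuKernel are just placeholders:

    import org.apache.spark.resource.{ExecutorResourceRequests,
      ResourceProfileBuilder, TaskResourceRequests}

    // Describe the executors and tasks we want for the GPU stage.
    val execReqs = new ExecutorResourceRequests()
      .cores(4)
      .memory("8g")
      .resource("gpu", 1)
    val taskReqs = new TaskResourceRequests()
      .cpus(1)
      .resource("gpu", 1.0)
    val gpuProfile = new ResourceProfileBuilder()
      .require(execReqs)
      .require(taskReqs)
      .build()

    // Dynamic allocation requests matching executors when the
    // stages computing this RDD are submitted.
    val result = preprocessed.withResources(gpuProfile).map(runGpuKernel)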

Tom

On Mon, Aug 7, 2023 at 3:27 PM Holden Karau <hol...@pigscanfly.ca> wrote:
>
> So I'm wondering if there is interest in revisiting some of how Spark is 
> doing its dynamic allocation for Spark 4+?
>
> Some things that I've been thinking about:
>
> - Advisory user input (e.g. a way to say that after X is done I know I 
> need Y, where Y might be a bunch of GPU machines)
> - Configurable tolerance (e.g. if we are at most Z% over the target, 
> no-op; see the config sketch after this list)
> - Past runs of the same job (e.g. stage X of job Y had a peak of K)
> - Faster executor launches (I'm a little fuzzy on what we can do here, 
> but one example is that we set up and tear down an RPC connection to the 
> driver with a blocking call, which at first glance seems to involve some 
> locking inside the driver)
>
> Is this an area other folks are thinking about? Should I make an epic we can 
> track ideas in? Or are folks generally happy with today's dynamic allocation 
> (or just busy with other things)?
>
> --
> Twitter: https://twitter.com/holdenkarau
> Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
> YouTube Live Streams: https://www.youtube.com/user/holdenkarau
