Generally, I like the design.

What is not clear in my mind is how "complete" it is in supporting
real-life executors like Spark and Kubernetes.

For example, for the Kubernetes executor: what exactly are the assumptions
about the kube config? How is the Docker registry set up if it is private?
What if I need to use tolerations because I need a GPU node?
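To make the GPU question concrete, here is a minimal sketch of what the
executor would have to inject into the worker pod spec. All names here are
assumptions for illustration: the taint key "nvidia.com/gpu", the registry
host, and the pull-secret name depend entirely on how the cluster is set up.

```python
# Hypothetical toleration for a GPU node pool; the taint key/effect are
# assumptions -- they must match whatever taint the GPU nodes carry.
gpu_toleration = {
    "key": "nvidia.com/gpu",
    "operator": "Exists",
    "effect": "NoSchedule",
}

# Sketch of the per-task pod spec the executor would generate.
# Registry host and secret name are placeholders, not real values.
pod_spec = {
    "containers": [
        {"name": "task", "image": "registry.example.com/team/task:latest"},
    ],
    # The executor needs some mechanism to merge tolerations like this
    # into the pods it creates for individual tasks.
    "tolerations": [gpu_toleration],
    # A private registry additionally requires an image pull secret,
    # which someone has to create in the namespace beforehand.
    "imagePullSecrets": [{"name": "my-registry-credentials"}],
}
```

The open question is exactly this: who creates the pull secret and the
taints, and where does the user declare the tolerations per task?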
For Spark: how do we create an ephemeral cluster in one task and then have
the executor run subsequent tasks on that cluster?
How do we add JARs and Python dependencies to the cluster?
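For the dependency question, this is roughly the spark-submit invocation
the executor would have to assemble. The master URL, bucket, and file names
are placeholders I made up; --jars and --py-files are the standard
spark-submit flags for shipping JARs and Python dependencies.

```python
# Sketch of a spark-submit command targeting an ephemeral cluster.
# "spark://ephemeral-cluster:7077" stands in for whatever endpoint the
# cluster-creating task would hand back; the s3:// paths are placeholders.
cmd = [
    "spark-submit",
    "--master", "spark://ephemeral-cluster:7077",
    # Extra JARs to put on the driver and executor classpaths.
    "--jars", "s3://my-bucket/deps/extra.jar",
    # Python dependencies (zip/egg/py) distributed to the executors.
    "--py-files", "s3://my-bucket/deps/helpers.zip",
    "job.py",
]
```

Even in this sketch it is unclear who owns each piece: does the user supply
the full command, or does the executor fill in the master URL and
dependencies from task configuration?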

Each of these executors is a "world" of configurations and options, and
since the integration with the underlying infrastructure is delicate, it
would be good to state explicitly who configures what, and where (including
limitations), for it to work in practice.


On Sun, Apr 18, 2021 at 11:08 AM Zion Rubin
<[email protected]> wrote:

>
> https://docs.google.com/document/d/1BuhuM7hFrf9p32BXJDF2mcsiplDEdqP45iKRebMi9uk/edit?usp=sharing
>
