Exciting indeed! Fantastic progress. J.
On Thu, Jan 22, 2026 at 6:35 AM Amogh Desai <[email protected]> wrote:
> Woah, we are near completion on this one.
>
> Good job, Niko and team, on getting this one so far. I have not had the
> bandwidth to watch it closely, but this one seems like an exciting one.
>
> Thanks & Regards,
> Amogh Desai
>
> On Thu, Jan 22, 2026 at 10:21 AM Srabasti Banerjee <[email protected]> wrote:
> > Thanks for sharing this opportunity with the community, Niko!
> >
> > I would like to take a shot at the AwsLambdaExecutor if it is still available.
> >
> > Warm Regards,
> > Srabasti Banerjee
> >
> > On Wed, Jan 21, 2026 at 4:55 PM Oliveira, Niko <[email protected]> wrote:
> > > Thanks Vinod!
> > >
> > > I've tagged you for that executor on the tracking issue. Let me know if
> > > you need a hand at all :) You can reach me on the Airflow community
> > > Slack server.
> > >
> > > Cheers,
> > > Niko
> > >
> > > ________________________________
> > > From: vinod bottu <[email protected]>
> > > Sent: Wednesday, January 21, 2026 4:42:20 PM
> > > To: [email protected]
> > > Subject: RE: [EXT] [Call to Action] Updating Executors to Support Multi Team
> > >
> > > Hi Niko,
> > > I am familiar with k8s and would like to work on the Kubernetes executor.
> > > Thanks,
> > > Vinod
> > >
> > > > On Jan 21, 2026, at 6:20 PM, Oliveira, Niko <[email protected]> wrote:
> > > >
> > > > Hey folks,
> > > >
> > > > This is a call to action for the multi-team project.
> > > > We have had many interested parties reach out, but previously we
> > > > lacked well-defined and well-scoped areas for contributors to tackle.
> > > > We now have some items that could use help, specifically in the
> > > > executors space.
> > > >
> > > > Context:
> > > > Each executor needs changes before it can be used in a multi-team
> > > > setup. The two major requirements are:
> > > > 1) Ability to read team-based configuration - e.g., two
> > > > CeleryExecutors may need different brokers, backends, autoscaling
> > > > settings, etc. for different teams.
> > > > 2) The executors must be safe to run concurrently within the same
> > > > scheduler process - e.g., no shared memory/queues, global DB tables,
> > > > or shared filesystem. This implies different things for different
> > > > executors; some may need little to no change, while others may need
> > > > many.
> > > >
> > > > The LocalExecutor [1], CeleryExecutor [2], and AwsEcsExecutor [3]
> > > > have already been updated (or are in progress) to support
> > > > multi-team. All other executors are outstanding.
> > > >
> > > > We welcome folks to get involved with updating executors to support
> > > > multi-team. I have created a tracking ticket here [4], which has
> > > > sub-tasks per executor. Please add any that I have missed or leave a
> > > > comment on the issue. Plenty of help and guidance will be provided!
> > > >
> > > > Also, in particular, if anyone is familiar with
> > > > k8s/KubernetesExecutor, I would love for that one to be available
> > > > for 3.2, but I don't have the time/bandwidth to complete it
> > > > personally.
> > > >
> > > > Thanks for your time, and I look forward to working with you!
> > > > Cheers,
> > > > Niko
> > > >
> > > > [1] https://github.com/apache/airflow/pull/59021
> > > > [2] https://github.com/apache/airflow/pull/60675
> > > > [3] https://github.com/apache/airflow/pull/55003
> > > > [4] https://github.com/apache/airflow/issues/60912
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: [email protected]
> > > For additional commands, e-mail: [email protected]
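For readers picking up an executor, requirement (1) from Niko's email — reading team-based configuration — might look roughly like the sketch below: resolve a setting from a team-scoped section first, falling back to the global one. This is illustrative only; `team_conf` and the `<team>.<section>` naming scheme are invented for this example and are not the actual Airflow configuration API.

```python
# Hypothetical sketch of team-scoped config resolution (not real Airflow code).
from configparser import ConfigParser


def team_conf(cfg: ConfigParser, team: str, section: str, key: str) -> str:
    """Prefer a team-scoped section (e.g. [team_a.celery]) over the global one."""
    scoped = f"{team}.{section}"
    if cfg.has_option(scoped, key):  # returns False if the section doesn't exist
        return cfg.get(scoped, key)
    return cfg.get(section, key)  # global fallback


cfg = ConfigParser()
cfg.read_string("""
[celery]
broker_url = redis://default:6379/0

[team_a.celery]
broker_url = redis://team-a:6379/0
""")

print(team_conf(cfg, "team_a", "celery", "broker_url"))  # team override
print(team_conf(cfg, "team_b", "celery", "broker_url"))  # global fallback
```

The same lookup pattern would let two CeleryExecutor instances in one scheduler each resolve their own broker, which is the crux of requirement (2)'s "no shared state" constraint as well.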
