GitHub user ywmvis added a comment to the discussion: How to handle long 
running tasks with the Kubernetes Operator?

Thanks again @potiuk 

We had a read through the documentation, and Kueue seems to be made exactly 
for our resource-management needs.

Currently we use the CeleryKubernetesExecutor; tasks are created as Pods via 
the DockerOperator and the "kubernetes" queue.

According to the Kueue documentation, it should be possible to run pods 
through Kueue by assigning a label to the pod:
https://kueue.sigs.k8s.io/docs/tasks/run/plain_pods/
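For reference, a minimal sketch of what such a labelled pod could look like. The `kueue.x-k8s.io/queue-name` label is taken from the linked Kueue docs; the pod name, queue name, and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-task
  labels:
    # Kueue gates the pod and admits it once the referenced
    # LocalQueue has quota available
    kueue.x-k8s.io/queue-name: user-queue
spec:
  containers:
  - name: main
    image: registry.example.com/task-image:latest
  restartPolicy: Never
```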

We are trying to find out whether this approach will work with Airflow and the 
CeleryKubernetesExecutor. If not, I think we will have to try the 
"KubernetesStartKueueJobOperator" and work around not being able to use the 
DockerOperator as the task entry point.
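If we do end up going the Job route, my understanding from the Kueue docs is that Jobs are created suspended and Kueue unsuspends them once quota is admitted, roughly like the following sketch (queue name and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
  labels:
    kueue.x-k8s.io/queue-name: user-queue
spec:
  suspend: true   # Kueue flips this to false when the workload is admitted
  template:
    spec:
      containers:
      - name: main
        image: registry.example.com/task-image:latest
      restartPolicy: Never
```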

Just for a better understanding, what would the flow in Airflow look like if 
Kueue is set up and working properly (mainly on the Airflow side)? There does 
not seem to be much documentation or many examples available for the 
KubernetesStartKueueJobOperator.

Let's assume Airflow could schedule 10 tasks and Kueue only has resources 
available for 5 of them. What would happen to the remaining 5 tasks on the 
Airflow side? Would they stay in the Airflow queued state until resources 
become available on the Kubernetes side, or would they be placed in the 
Airflow queue but transition immediately to a non-queued state?

GitHub link: 
https://github.com/apache/airflow/discussions/45503#discussioncomment-11802390
