GitHub user potiuk added a comment to the discussion: How to handle long 
running tasks with the Kubernetes Operator?

Three things that come to my mind:

* Use the hybrid Celery Kubernetes Executor and make all the small and fast 
tasks run through Celery - that will limit the overhead incurred by many PODs 
being created just to run a small and fast thing (will decrease the pressure 
on K8S)
* You can consider using YuniKorn or Kueue for better management of queued PODs 
and resources with priorities and such - but this requires more understanding 
of your particular tasks and the resource needs they have
* Limit parallelism of certain tasks in Airflow - Airflow has a number of ways 
to limit parallelism - for example by using Pools, Queues, and various DAG, task 
and configuration parameters - see for example: 
https://airflow.apache.org/docs/apache-airflow/stable/faq.html#how-to-improve-dag-performance
  - this will prevent the Airflow scheduler from even scheduling tasks for 
execution if other related tasks are already scheduled and would exceed the 
parallelism settings
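As a rough sketch of the last point, a Pool combined with per-task limits can cap how many long-running tasks execute at once. The pool name, slot counts, and task names below are hypothetical; the pool itself would be created beforehand in the Airflow UI or with `airflow pools set`:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical example: cap concurrency of long-running tasks so they do not
# flood the scheduler / K8S with PODs. The pool "long_running" is assumed to
# exist, e.g. created with: airflow pools set long_running 2 "heavy tasks"
with DAG(
    dag_id="long_running_example",
    start_date=datetime(2025, 1, 1),
    schedule=None,
    max_active_tasks=4,  # DAG-wide cap on concurrently running task instances
) as dag:
    for i in range(10):
        BashOperator(
            task_id=f"heavy_{i}",
            bash_command="sleep 600",  # stand-in for a long-running job
            pool="long_running",        # only as many run as the pool has slots
            max_active_tis_per_dag=2,   # per-task cap across active DAG runs
        )
```

With this in place the scheduler queues the remaining `heavy_*` task instances instead of launching a POD for each of them immediately.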

GitHub link: 
https://github.com/apache/airflow/discussions/45503#discussioncomment-11784974
