Hi,

I am wondering what the most correct way is to stop execution of a triggered 
DAG (DagRun), including all of its subDAGs and tasks, when using the 
CeleryExecutor. It's not that important to mark every task specifically; I 
just want to make sure the Celery/RabbitMQ queue is empty for the next DagRun.
My concrete case is that if one task of a certain subset fails, I need to stop 
execution of the whole DAG (DagRun). The workers are running on EC2 instances, 
which means I will shut down those instances (running the Airflow 
CeleryExecutor workers) as well.
So far my plan is to use on_failure_callback to stop the instances and do this 
"cleanup".

Also, regarding the trigger_rule functionality: what about extending it to 
support combining multiple rules (e.g. all_success|one_failed)?

Best,
Paul
