GitHub user jojaeng2 added a comment to the discussion: Is there a function
that allows the celery worker provided by airflow to take DAG from the Rabbitmq
Quorum queue?
Hi, @trikker
This appears to be an issue with the default queue. In the screenshot you
shared, you can see that the default queue is down. Airflow DAGs work in a way
where tasks are delivered to the default queue created by Celery, and Celery
workers then pick those tasks up from that queue.
It seems that the default queue in your RabbitMQ cluster is hosted on the first
node, so when that node goes down, tasks can no longer be delivered to the
default queue, which causes the scheduling issues you are seeing. To resolve
this, you can use RabbitMQ's mirroring settings. You'll need to set a policy
for the default queue; the following command enables mirroring:
`rabbitmqctl set_policy ha-all "^default" '{"ha-mode":"all"}'`
This should give you the high availability you're looking for.
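To make that concrete, here is a small sketch of applying and verifying the policy. It assumes the default `/` vhost and that your Celery queue is named `default` (the stock Airflow value for `operators.default_queue`); adjust the regex if you renamed it.

```shell
# Mirror every queue whose name starts with "default" across all cluster nodes
# (assumes the default "/" vhost; add -p <vhost> otherwise).
rabbitmqctl set_policy ha-all "^default" '{"ha-mode":"all"}'

# Confirm the policy was created:
rabbitmqctl list_policies

# Check which queues picked up the policy and where their mirrors live:
rabbitmqctl list_queues name policy pid slave_pids
```

Once the mirrors are in sync, the queue should fail over to another node if its current leader goes down, so task delivery keeps working.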
GitHub link:
https://github.com/apache/airflow/discussions/35649#discussioncomment-11300234