hafid-d opened a new issue #15978: URL: https://github.com/apache/airflow/issues/15978
**Apache Airflow version**: 2.0.2

**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): -

**Cloud provider or hardware configuration**: -

**OS**: Ubuntu 18.04.3

**Install tools**: celery = 4.4.7, redis = 3.5.3

**What happened**:

When I trigger my DAG manually, some of the tasks are stuck in the "queued" state in the logs:

```
[2021-05-21 16:55:57,808: WARNING/ForkPoolWorker-9] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,080: WARNING/ForkPoolWorker-17] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,203: WARNING/ForkPoolWorker-13] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,221: WARNING/ForkPoolWorker-5] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,247: WARNING/ForkPoolWorker-4] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,296: WARNING/ForkPoolWorker-10] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,362: WARNING/ForkPoolWorker-1] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,367: WARNING/ForkPoolWorker-8] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,433: WARNING/ForkPoolWorker-3] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,445: WARNING/ForkPoolWorker-11] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,458: WARNING/ForkPoolWorker-6] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,459: WARNING/ForkPoolWorker-2] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
[2021-05-21 16:55:58,510: WARNING/ForkPoolWorker-12] Running <TaskInstance: ******* 2021-05-21T08:54:59.100511+00:00 [queued]> on host *******
```

Even when I mark them as "failed" and rerun them, they still get stuck. When I check the Airflow UI, the DAG is in the "running" state, and when I check the subdags, the first one is in the "scheduled" state. I made sure to set all the other running tasks to "failed" before running this DAG.

**What you expected to happen**:

I expect all my tasks to run and my DAG to be marked as "success", or as "failed" if there is an issue.

**How to reproduce it**:

It occurs when I run the following command: `airflow celery worker`. It doesn't occur every time; sometimes the DAGs do not hang indefinitely and everything works well. I restarted the Airflow webserver, worker and scheduler a few times, but it didn't change anything.
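For anyone triaging the same symptom, a minimal command sketch (not from the original report) for clearing stuck "queued" task instances and restarting the CeleryExecutor components; the DAG id `my_dag` is a placeholder:

```shell
# Hypothetical triage steps for tasks stuck in "queued" under CeleryExecutor.
# "my_dag" is a placeholder; substitute your own DAG id.

# Clear the stuck task instances so the scheduler can re-queue them:
airflow tasks clear my_dag --yes

# Stop the Celery worker gracefully, then bring the components back up:
airflow celery stop
airflow celery worker &
airflow scheduler &

# Re-trigger the DAG:
airflow dags trigger my_dag
```

This only resets state; if the hang reproduces, the underlying cause is likely in the scheduler/worker handoff rather than in the individual task instances.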
