FurcyPin edited a comment on issue #13542: URL: https://github.com/apache/airflow/issues/13542#issuecomment-948858685
Hello, I think I ran into the same issue. Here is all the relevant info I could find, hoping it will help solve this:

* Stack: Cloud Composer 2
* Image version: composer-2.0.0-preview.3-airflow-2.1.2
* Executor: I'm not sure which one Cloud Composer uses, but the airflow.cfg in the bucket says "CeleryExecutor"
* I am 100% sure that my DAG code does not contain any errors (other similar tasks work fine).
* I do not use any pools.
* I do not have a trigger date in the future.

I tried clearing the queued tasks (CLI sketch at the end of this comment); they immediately appeared as queued again, and the scheduler logged this message (and nothing else useful):

```
could not queue task TaskInstanceKey(dag_id='my_dag', task_id='my_task', execution_date=datetime.datetime(2021, 10, 20, 0, 0, tzinfo=Timezone('UTC')), try_number=2)
```

I tried restarting the scheduler by destroying its pod (also sketched below); it did not change anything. I tried destroying the Redis pod, but I did not have the necessary permissions.

I was running a large number of DAGs at the same time (more than 20), so the problem might be linked to `max_dagruns_per_loop_to_schedule`. I increased it to a number larger than my number of DAGs (config sketch below), but the 3 tasks are still stuck in the queued state, even when I clear them.

I'm running out of ideas... Luckily this is only a POC, so if someone has suggestions for what I could try next, I would be happy to try them.

UPDATE: after a little more than 24 hours, the three tasks went into the failed state by themselves. I cleared them and they ran with no problem...
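---

For reference, here is roughly how I cleared the stuck task. This is a minimal sketch using the standard Airflow 2.x CLI; the DAG ID, task ID, and dates are just the placeholder values from the log line above:

```shell
# Clear the stuck task instance so the scheduler re-queues it.
# -t takes a regex over task IDs; -s/-e bound the execution dates; --yes skips the prompt.
airflow tasks clear my_dag \
    -t '^my_task$' \
    -s 2021-10-20 \
    -e 2021-10-20 \
    --yes
```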
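As far as I know, you can't edit airflow.cfg directly on Cloud Composer, so I bumped `max_dagruns_per_loop_to_schedule` through an environment config override. A sketch using gcloud; the environment name, location, and the value 50 are assumptions, not my actual setup:

```shell
# Override [scheduler] max_dagruns_per_loop_to_schedule for the environment.
# gcloud expects "section-key=value"; my-env, europe-west1, and 50 are placeholders.
gcloud composer environments update my-env \
    --location europe-west1 \
    --update-airflow-configs=scheduler-max_dagruns_per_loop_to_schedule=50
```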
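And restarting the scheduler meant deleting its pod in the environment's GKE cluster so its controller recreates it. Another sketch; the pod name and namespace are placeholders to look up first, since they vary per environment:

```shell
# Locate the scheduler pod, then delete it; its controller recreates it.
# <scheduler-pod> and <namespace> are placeholders, find them with the first command.
kubectl get pods --all-namespaces | grep scheduler
kubectl delete pod <scheduler-pod> -n <namespace>
```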
