jensenity commented on issue #20047: URL: https://github.com/apache/airflow/issues/20047#issuecomment-1322904466

> I am hitting this error too; however, I have configured task retries, which ensures that another try is attempted, and that usually succeeds. I am not using ExternalTaskSensor though.
>
> In my case I have ~80 DAGs running every minute, and roughly up to 30% of tasks tend to hit this issue but end up completing successfully on the 2nd attempt. I have sufficient workers (25) and worker concurrency (16). I have sufficient schedulers (4) as well, so it's not clear what is causing such an intermittent failure to queue the task; these failures result in delayed completion of DAGs.
>
> I have over 20K DAGs in the cluster and am using KubernetesCeleryExecutor. Airflow 2.2.3
>
> ```
> [2022-01-19 04:41:30,514] {base_executor.py:85} ERROR - could not queue task TaskInstanceKey(dag_id='931_dag_3761', task_id='task_2', run_id='scheduled__2022-01-19T03:41:00+00:00', try_number=1)
> ```

Same over here....
> Even I am hitting this error however I have configured task retries which ensures that another try is attempted, which usually succeeds. I am not using ExternalTaskSensor though. > > In my case I have ~80 DAGs running every minute, and roughly upto 30% of tasks tend to hit this issue but end up successfully completing on 2nd attempt. I have sufficient workers (25) and worker concurrency (16). I have sufficient schedulers (4) as well, so it's not clear what is causing such an intermittent failure to queue the task, such failures result in delayed completion of DAGs. > > I have over 20K DAGs in the cluster and using KubernetesCeleryExecutor. Airflow 2.2.3 > > > [�[34m2022-01-19 04:41:30,514�[0m] {�[34mbase_executor.py:�[0m85} ERROR�[0m - could not queue task TaskInstanceKey(dag_id='931_dag_3761', task_id='task_2', run_id='scheduled__2022-01-19T03:41:00+00:00', try_number=1)�[0m same over here.... -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@airflow.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org