nitinpandey-154 edited a comment on issue #13542:
URL: https://github.com/apache/airflow/issues/13542#issuecomment-846808019


   Environment:
   - Airflow 2.1.0
   - Docker image: apache/airflow:2.1.0-python3.8
   - Executor: Celery
   
   I am facing the same issue again. It happens randomly, and neither clearing the task instances (roughly as sketched below) nor restarting the container fixes it.
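   
   For reference, the cleanup means clearing the task instances for the affected DAG and then restarting the scheduler container, roughly along these lines (the date range and container name below are illustrative, not exact values from this report):
   
       # clear the stuck task instances for the DAG seen in the logs below
       airflow tasks clear airflow_health_checkup -s 2021-05-24 -e 2021-05-24 -y
       # then restart the scheduler container
       docker restart airflow-scheduler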
   
   airflow-scheduler    | [2021-05-24 06:46:52,008] {scheduler_job.py:1105} INFO - Sending TaskInstanceKey(dag_id='airflow_health_checkup', task_id='send_heartbeat', execution_date=datetime.datetime(2021, 5, 24, 4, 21, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 100 and queue airflow_maintenance
   airflow-scheduler    | [2021-05-24 06:46:52,008] {base_executor.py:85} ERROR - could not queue task TaskInstanceKey(dag_id='airflow_health_checkup', task_id='send_heartbeat', execution_date=datetime.datetime(2021, 5, 24, 4, 21, tzinfo=Timezone('UTC')), try_number=1)
   airflow-scheduler    | [2021-05-24 06:46:52,008] {scheduler_job.py:1105} INFO - Sending TaskInstanceKey(dag_id='airflow_health_checkup', task_id='send_heartbeat', execution_date=datetime.datetime(2021, 5, 24, 4, 24, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 100 and queue airflow_maintenance
   airflow-scheduler    | [2021-05-24 06:46:52,009] {base_executor.py:85} ERROR - could not queue task TaskInstanceKey(dag_id='airflow_health_checkup', task_id='send_heartbeat', execution_date=datetime.datetime(2021, 5, 24, 4, 24, tzinfo=Timezone('UTC')), try_number=1)
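   
   For context on the ERROR lines: base_executor.py refuses to enqueue a TaskInstanceKey that the executor already considers queued or running, so a stale entry keeps triggering this error on every scheduler loop. A minimal self-contained sketch of that guard (paraphrased illustration, not the actual Airflow source):
   
       # Simplified stand-in for BaseExecutor.queue_command's duplicate-key guard:
       # a key already present in queued_tasks or running is rejected, which is
       # what produces the "could not queue task" line above.
       queued_tasks = {}   # TaskInstanceKey -> queued work item
       running = set()     # keys the executor still believes are running
   
       def queue_command(key, command):
           if key not in queued_tasks and key not in running:
               queued_tasks[key] = command
           else:
               print(f"ERROR - could not queue task {key}")
   
       # Example: a stale key left in `running` is rejected when the scheduler re-sends it.
       stale = ("airflow_health_checkup", "send_heartbeat", "2021-05-24T04:21+00:00", 1)
       running.add(stale)
       queue_command(stale, "airflow tasks run ...")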
   
   The only workaround that actually works is manually deleting the DAG (sketched below), which isn't a feasible option. This should be treated as very high priority, as it breaks scheduling.
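   
   (By "manually deleting the DAG" I mean removing its metadata from the Airflow DB and letting the scheduler re-parse the DAG file, e.g.:
   
       # delete all DB records for the DAG; destructive, loses run history
       # (a sketch of the workaround, not a recommended fix)
       airflow dags delete airflow_health_checkup -y
   )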

