SalmonTimo edited a comment on issue #13542: URL: https://github.com/apache/airflow/issues/13542#issuecomment-812742063
I ran into this issue due to the scheduler over-utilizing CPU because our `min_file_process_interval` was set to 0 (the default prior to 2.0), which in Airflow 2.0 causes 100% CPU utilization by constantly searching for new DAG files. Setting this parameter to 60 fixed the issue.

The stack I observed this on:
- host: AWS ECS cluster
- executor: CeleryExecutor
- queue: AWS SQS queue

The behavior I observed was that the scheduler would mark tasks as "queued" but never actually send them to the queue. I think the scheduler does the actual queueing via the executor, so I suspect the executor was starved of resources and unable to enqueue the new tasks.

My manual workaround, until correcting the `min_file_process_interval` parameter, was to stop the scheduler, clear the queued tasks, and then start a new scheduler. The new scheduler would properly send tasks to the queue for a while, before degenerating back to marking tasks as queued without sending them to the queue.

I suspect the OP may have this same issue, because they mentioned having 100% CPU utilization on their scheduler.
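For anyone hitting the same symptom, here is a minimal sketch of the fix described above, assuming the setting is managed through `airflow.cfg` (setting the equivalent `AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL` environment variable should work as well):

```ini
[scheduler]
# Re-parse each DAG file at most once every 60 seconds; the pre-2.0 default of 0
# keeps the DAG file processor looping continuously and pegs the scheduler CPU.
min_file_process_interval = 60
```

And a rough sketch of the manual workaround, assuming the Airflow CLI is available on the scheduler host and `example_dag` stands in for the affected DAG id:

```bash
# 1. Stop the running scheduler (on ECS, stop the scheduler service/task).
# 2. Clear the stuck task instances for the affected DAG (dag id is a placeholder):
airflow tasks clear example_dag --yes
# 3. Start a fresh scheduler:
airflow scheduler
```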
