SalmonTimo commented on issue #13542:
URL: https://github.com/apache/airflow/issues/13542#issuecomment-812742063


   I ran into this issue because the scheduler was over-utilizing CPU: our 
`min_file_process_interval` was set to 0 (the default prior to 2.0), which in 
Airflow 2.0 makes the scheduler constantly re-parse DAG files and burn CPU. 
Setting this parameter to 60 fixed the issue.
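   For reference, this is roughly what the fix looks like in `airflow.cfg` (the section and option names are the standard ones; 60 seconds is just the value that worked for us, not a recommendation):

   ```ini
   [scheduler]
   # Minimum number of seconds between re-parses of a given DAG file.
   # 0 (the pre-2.0 default) makes the scheduler re-parse files constantly.
   min_file_process_interval = 60
   ```

   Equivalently, it can be set through the environment variable `AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL=60`, which is convenient on ECS where the config file may be baked into the image.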
   The stack I observed this on:
   host: AWS ECS Cluster
   executor: CeleryExecutor
   queue: AWS SQS Queue
   
   The behavior I observed was that the scheduler would mark tasks as 
"queued" but never actually send them to the queue (I believe the scheduler 
does the actual queueing via the executor). My manual workaround, until I 
corrected the `min_file_process_interval` parameter, was to stop the 
scheduler, clear the queued tasks, and start a new scheduler. The new 
scheduler would properly send tasks to the queue for a while, before 
degenerating into marking tasks as queued without ever sending them.
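   The manual workaround above can be sketched with the Airflow 2.x CLI. The `pkill` line is a placeholder for however your deployment supervises the scheduler (on ECS it would be a service/task restart), and `my_dag` is a hypothetical DAG id:

   ```shell
   # Stop the running scheduler (deployment-specific; plain pkill shown as a placeholder).
   pkill -f "airflow scheduler"

   # Clear the stuck task instances for the affected DAG so they can be scheduled again.
   airflow tasks clear my_dag --yes

   # Start a fresh scheduler, which will send tasks to the queue properly again for a while.
   airflow scheduler --daemon
   ```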

