benrifkind commented on issue #41055: URL: https://github.com/apache/airflow/issues/41055#issuecomment-2433165972
> I am facing a similar issue. We upgraded Airflow from 2.5.3 to 2.10, and task pods get stuck in the `scheduled` state after a while, once the open slot count for `parallelism` (32 in our case) is exhausted. When 32+ tasks run, any new tasks stay stuck in `scheduled`; even after the earlier tasks have cleared, the open slot count still reads zero.

I believe I am seeing this issue as well. I upgraded from Airflow 2.6.3 to 2.10.2. I am running Airflow on Kubernetes with the CeleryExecutor, two schedulers, and `core.parallelism` set to 64. At some point overnight, one of the scheduler pods maxed out at 64 running tasks and stayed stuck there.

The logs for that scheduler show many lines like:

```
[2024-10-23T18:47:56.591+0000] {base_executor.py:292} INFO - Executor parallelism limit reached. 0 open slots.
[2024-10-23T18:47:57.725+0000] {base_executor.py:292} INFO - Executor parallelism limit reached. 0 open slots.
....
```
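The symptom (open slots pinned at zero even after tasks finish) is consistent with the executor's set of running tasks never being drained. As a simplified sketch, not Airflow's actual implementation (the `Executor` class and its method names here are invented for illustration), the slot accounting works roughly like:

```python
# Hypothetical, simplified model of executor slot accounting that produces
# a "parallelism limit reached" log line like the one above.
# This is NOT Airflow's real BaseExecutor; names are illustrative.

class Executor:
    def __init__(self, parallelism: int) -> None:
        self.parallelism = parallelism
        self.running: set[str] = set()  # task keys the executor believes are active

    @property
    def open_slots(self) -> int:
        # Slots free for new tasks: configured parallelism minus
        # the number of tasks currently tracked as running.
        return self.parallelism - len(self.running)

    def heartbeat(self, queued: list[str]) -> list[str]:
        # Launch at most `open_slots` of the queued tasks.
        if self.open_slots <= 0:
            print("Executor parallelism limit reached. 0 open slots.")
            return []
        to_launch = queued[: self.open_slots]
        self.running.update(to_launch)
        return to_launch


# If completed tasks are never removed from `running` (e.g. their
# completion events are lost), open_slots stays at 0 and every new
# task waits in `scheduled` forever.
ex = Executor(parallelism=2)
ex.heartbeat(["a", "b"])  # fills both slots
ex.heartbeat(["c"])       # prints the parallelism-limit message, launches nothing
```

Under this model, the stuck state would mean the scheduler keeps counting already-finished tasks as running, which matches seeing "0 open slots" repeat indefinitely.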
