tfagan25 commented on issue #58509: URL: https://github.com/apache/airflow/issues/58509#issuecomment-3573038731
> > [@zachliu](https://github.com/zachliu) [@rcampos87](https://github.com/rcampos87) [@tfagan25](https://github.com/tfagan25) I tested it in a Docker environment, and I didn’t observe any noticeable memory leaks in the dag-processor. You deployed it on Kubernetes, right? Could you share your Helm chart?
>
> maybe we should zero in on the ways we're running Airflow differently. understanding those differences will likely reveal what’s causing the memory leak 🤔
>
> i tested it in both k8s (production but not using helm chart) and local (docker), both have memory leak in the `dag-processor` & `worker` but not the `scheduler`. here are all my files for local runs:
>
> [Dockerfile.txt](https://github.com/user-attachments/files/23731977/Dockerfile.txt) [docker-compose-CeleryExecutor.yml](https://github.com/user-attachments/files/23731979/docker-compose-CeleryExecutor.yml) [entrypoint.sh](https://github.com/user-attachments/files/23731944/entrypoint.sh) [airflow.cfg.txt](https://github.com/user-attachments/files/23731953/airflow.cfg.txt)

Do you mean we should zero in specifically on the difference in the scheduler? It seems we are all seeing the memory leak across multiple components. Is it possible the scheduler's growth was just small enough that you didn't notice it? What time frame were you running it for?
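To make "small enough that you didn't notice it" comparable across our setups, something like the following sketch could fit a linear trend to periodic RSS samples (e.g. from `docker stats` or k8s metrics) and report a growth rate. The function name and sampling interval are hypothetical, not part of Airflow:

```python
def leak_rate_mb_per_hour(samples_mb, interval_s=60):
    """Least-squares slope of periodic memory samples, in MB/hour.

    samples_mb: RSS readings in MB, taken every interval_s seconds.
    A slow leak shows up as a consistently positive slope even when
    the absolute growth over a short run looks negligible.
    """
    n = len(samples_mb)
    if n < 2:
        return 0.0
    xs = [i * interval_s for i in range(n)]  # elapsed seconds per sample
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return (num / den) * 3600  # MB/second -> MB/hour
```

For example, a component gaining 1 MB per one-minute sample would report roughly 60 MB/hour, which is easy to miss over a 30-minute test but obvious over a multi-day run.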
