GitHub user potiuk added a comment to the discussion: Scaling Airflow 3 on EKS — API server OOMs, PgBouncer saturation, and health check flakiness at 8K concurrent tasks
GitHub user potiuk wrote:

* Some API server memory issues are being solved in 3.1.8 and some in 3.2.0 - you might want to take a look at whether those solve your issues (search for "memory" in recent PRs).
* There are no DB connections from workers - in fact PgBouncer is no longer recommended/needed, because Airflow 3 creates far fewer connections.
* The entrypoint does check for a DB connection, as you noticed; setting CONNECTION_CHECK_MAX_COUNT to 0 is a good workaround until #60271 is fixed permanently. You can also configure SQLAlchemy connection pools to improve performance - since DB clients now run in a "fixed" number of processes (api-servers, scheduler, triggerer), SQLAlchemy pools make more sense than an external PgBouncer pool, which adds memory and communication overhead (see the first sketch below).
* Since we changed to FastAPI, the recommendation from FastAPI itself is one worker per container - i.e. scale not by increasing the number of workers but by increasing the number of containers, i.e. api-server replicas, which should give more control over memory/CPU usage, health checks, etc. (see the second sketch below). Given enough memory (this needs experimenting), that should also address the startup-time issues. We cannot really give advice on concrete numbers -> https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html#notes-about-minimum-requirements explains why.

I think we would also like to learn some best practices from users like you, so I guess we need more feedback and learnings (and to replace some workarounds with permanent fixes) to arrive at good guidelines/best practices. So it would be great to hear whether the above comments help you optimize things.

GitHub link: https://github.com/apache/airflow/discussions/62117#discussioncomment-15848254
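
To make the pooling point concrete, here is a minimal sketch of how it could look in the official Helm chart's `values.yaml`, assuming the chart's `pgbouncer`, `config`, and `env` keys and the image entrypoint's `CONNECTION_CHECK_MAX_COUNT` variable; the pool numbers are placeholders, not recommendations, and should be sized against your Postgres `max_connections` and the number of api-server/scheduler/triggerer processes:

```yaml
# values.yaml (sketch) -- rely on per-process SQLAlchemy pools instead of PgBouncer
pgbouncer:
  enabled: false            # not needed with Airflow 3's reduced connection count

config:
  database:
    # Placeholder pool sizing, applied per process (api-server, scheduler, triggerer).
    sql_alchemy_pool_size: 10
    sql_alchemy_max_overflow: 5
    sql_alchemy_pool_recycle: 1800

env:
  # Workaround mentioned above: skip the entrypoint DB-connection check
  # until #60271 is fixed permanently.
  - name: CONNECTION_CHECK_MAX_COUNT
    value: "0"
```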
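
And a sketch of scaling out by api-server replicas rather than by workers per container, assuming a chart release that exposes an `apiServer` section for Airflow 3 (key names may differ between chart versions); replica count and resource figures are placeholders to experiment with:

```yaml
# values.yaml (sketch) -- one worker per container, scale horizontally with replicas
apiServer:
  replicas: 6               # placeholder: add replicas instead of workers
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      memory: 4Gi           # placeholder: "given enough memory (needs experimenting)"
```

With one worker per pod, liveness/readiness probes and memory limits apply to a single worker at a time, which is the finer-grained control over memory/CPU and health checks referred to above.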
