kaxil opened a new pull request, #60804:
URL: https://github.com/apache/airflow/pull/60804
(depends on https://github.com/apache/airflow/pull/60803)

Fixes memory growth in long-running API servers by adding bounded LRU+TTL caching to `DBDagBag`. Previously, the cache was an unbounded dict that never expired, so memory grew indefinitely as DAG versions accumulated.

## Problem

The API server's `DBDagBag` uses an internal dict to cache `SerializedDAG` objects (5-50 MB each). This cache:

- **Never expires** - entries stay forever
- **Never evicts** - grows with each new DAG version
- **Shared singleton** - one instance for the entire API server lifetime

With 100+ DAGs updating daily, memory grows by roughly 500 MB/day, eventually causing OOM.

## Solution

Add optional LRU+TTL caching controlled by new `[api]` configuration options:

| Config | Default | Description |
|--------|---------|-------------|
| `dag_cache_size` | 64 | Max cached DAG versions (0 = disabled) |
| `dag_cache_ttl` | 3600 | TTL in seconds (0 = LRU only) |

### Key Design Decisions

1. **API server only** - the scheduler continues to use a plain dict (no caching overhead)
2. **Cache thrashing prevention** - `iter_all_latest_version_dags()` bypasses the cache
3. **Thread safety** - an `RLock` protects `cachetools` operations in the multi-threaded API server
4. **Observability** - metrics for cache hits, misses, and clears

An illustrative sketch of this caching pattern is included at the end of this description.

## Configuration

```ini
[api]
# Size of LRU cache (0 to disable)
dag_cache_size = 64

# TTL in seconds (0 for LRU-only, no time expiry)
dag_cache_ttl = 3600
```

## Metrics

| Metric | Type | Description |
|--------|------|-------------|
| `api_server.dag_bag.cache_hit` | Counter | Cache hits |
| `api_server.dag_bag.cache_miss` | Counter | Cache misses |
| `api_server.dag_bag.cache_clear` | Counter | Cache clears |
| `api_server.dag_bag.cache_size` | Gauge | Current cache size |

## Backward Compatibility

- Default behavior unchanged for the scheduler
- API server gets caching by default (can be disabled with `dag_cache_size = 0`)
- No breaking changes to public APIs

---

##### Was generative AI tooling used to co-author this PR?

<!-- If generative AI tooling has been used in the process of authoring this PR, please change below checkbox to `[X]` followed by the name of the tool, uncomment the "Generated-by". -->

- [ ] Yes (please specify the tool below)

<!-- Generated-by: [Tool Name] following [the guidelines](https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#gen-ai-assisted-contributions) -->

---

* Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#pull-request-guidelines)** for more information. Note: commit author/co-author name and email in commits become permanently public when merged.
* For fundamental code changes, an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals)) is needed.
* When adding a dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
* For significant user-facing changes, create a newsfragment `{pr_number}.significant.rst` or `{issue_number}.significant.rst` in [airflow-core/newsfragments](https://github.com/apache/airflow/tree/main/airflow-core/newsfragments).
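For reviewers who want a feel for the LRU+TTL pattern referenced in the design decisions above, here is a minimal, self-contained sketch using `cachetools` guarded by an `RLock`. The class name `BoundedDagCache` and its methods are illustrative assumptions only, not the code in this PR; see the diff for the actual `DBDagBag` changes.

```python
from __future__ import annotations

import threading
from typing import Any

from cachetools import Cache, LRUCache, TTLCache


class BoundedDagCache:
    """Illustrative bounded DAG-version cache (a sketch, not this PR's exact code).

    maxsize <= 0 disables caching entirely; ttl <= 0 falls back to plain LRU.
    """

    def __init__(self, maxsize: int = 64, ttl: int = 3600) -> None:
        # The API server handles requests from multiple threads, so every
        # cachetools operation is guarded by a re-entrant lock.
        self._lock = threading.RLock()
        self._cache: Cache | None
        if maxsize <= 0:
            self._cache = None  # caching disabled
        elif ttl > 0:
            self._cache = TTLCache(maxsize=maxsize, ttl=ttl)  # LRU eviction + time expiry
        else:
            self._cache = LRUCache(maxsize=maxsize)  # LRU eviction only

    def get(self, version_key: str) -> Any | None:
        """Return the cached serialized DAG, or None on a miss or expired entry."""
        if self._cache is None:
            return None
        with self._lock:
            return self._cache.get(version_key)

    def put(self, version_key: str, dag: Any) -> None:
        """Store a serialized DAG; may evict the least recently used entry."""
        if self._cache is None:
            return
        with self._lock:
            self._cache[version_key] = dag

    def clear(self) -> None:
        """Drop all entries, e.g. when the cache must be invalidated."""
        if self._cache is not None:
            with self._lock:
                self._cache.clear()
```

In this sketch, `version_key` stands for whatever uniquely identifies a DAG version, and `maxsize`/`ttl` map to the `[api] dag_cache_size` and `[api] dag_cache_ttl` options above; hit, miss, and clear counters would hook into `get`, `put`, and `clear`.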
