This is an automated email from the ASF dual-hosted git repository.
potiuk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow.git
The following commit(s) were added to refs/heads/main by this push:
new 0a03b4e87db Docs: Add JWT authentication docs and strengthen security
model (#64760)
0a03b4e87db is described below
commit 0a03b4e87dbcb4a653b5cf7e59a6b92d14361176
Author: Jarek Potiuk <[email protected]>
AuthorDate: Tue Apr 7 17:38:19 2026 +0200
Docs: Add JWT authentication docs and strengthen security model (#64760)
* Docs: Add JWT authentication docs and strengthen security model
Add comprehensive JWT token authentication documentation covering both
the REST API and Execution API flows, including token structure, timings,
refresh mechanisms, and the DFP/Triggerer in-process bypass.
Update the security model to:
- Document current isolation limitations (DFP/Triggerer DB access,
shared Execution API resources, multi-team not guaranteeing task-level
isolation)
- Add deployment hardening guidance (per-component config, asymmetric
JWT keys, env vars with PR_SET_DUMPABLE protection)
- Add "What is NOT a security vulnerability" section covering all
categories from the security team's response policies
- Fix contradicting statements across docs that overstated isolation
guarantees or recommended sharing all config across components
Update AGENTS.md with security model awareness so AI agents performing
security research distinguish intentional design choices from actual
vulnerabilities.
* Fix spelling errors and use 'potentially' for DFP/Triggerer access
- Add dumpable, sandboxing, unsanitized, XSS to spelling wordlist
- Use 'potentially' consistently when describing Dag File Processor
and Triggerer database access and JWT authentication bypass, since
these are capabilities that Dag author code could exploit rather
than guaranteed behaviors of normal operation
* Add prek hook to validate security doc constants against config.yml
New hook `check-security-doc-constants` validates that:
- [section] option references in security RST files match config.yml
- AIRFLOW__X__Y env var references correspond to real config options
- Default values in doc tables match config.yml defaults
- Sensitive config variables are listed (warning, not error, since
the list is documented as non-exhaustive)
Loads both airflow-core config.yml and provider.yaml files to cover
all config sections (including celery, sentry, workers, etc.).
Runs automatically when config.yml or security RST docs are modified.
* Expand sensitive vars to full list with component mapping and auto-update
Update security_model.rst sensitive config variables section:
- List ALL sensitive vars from config.yml and provider.yaml files
- Core vars organized in a table with "Needed by" column mapping each
var to the components that require it (API Server, Scheduler, Workers,
Dag File Processor, Triggerer)
- Provider vars in a separate table noting they should only be set where
the provider functionality is needed
- Tables are auto-generated between AUTOGENERATED markers
Update prek hook to auto-update the sensitive var tables:
- Reads config.yml and all provider.yaml files
- Generates RST list-table content for core and provider sensitive vars
- Replaces content between markers on each run
- Warns when new sensitive vars need component mapping added to the hook
- Validates [section] option and AIRFLOW__X__Y references against config
- Skips autogenerated sections when checking env var references
* Clarify software guards vs intentional access in DFP/Triggerer
Address issues raised in security discussion about the gap between
Airflow's isolation promises and reality:
- Clearly distinguish software guards (prevent accidental DB access)
from the inability to prevent intentional malicious access by code
running as the same Unix user as the parent process
- Document the specific mechanisms: /proc/PID/environ, config files,
_CMD commands, secrets manager credential reuse
- Clarify that worker isolation is genuine (no DB credentials at all)
while DFP/Triggerer isolation is software-level only
- Add Unix user impersonation as a deployment hardening measure
- Document strategic (API-based DFP/Triggerer) and tactical (user
impersonation) planned improvements
- Add warning about sensitive config leakage through task logs
- Add guidance to restrict task log access
* Docs: Improve security docs wording, extract workload isolation,
recommend DagBundle
- Reword DFP/Triggerer descriptions to clarify software guards vs
intentional bypass
- Extract workload isolation section from jwt_token_authentication into
workload.rst
- Recommend Dag Bundle mechanism (GitDagBundle) for DAG synchronization
- Fix typo in public-airflow-interface.rst and broken backtick in
jwt_token_authentication.rst
- Update cross-references between security docs
---
.github/instructions/code-review.instructions.md | 2 +-
AGENTS.md | 29 +-
airflow-core/.pre-commit-config.yaml | 10 +
.../production-deployment.rst | 9 +-
airflow-core/docs/best-practices.rst | 6 +-
airflow-core/docs/configurations-ref.rst | 25 +-
airflow-core/docs/core-concepts/multi-team.rst | 2 +-
airflow-core/docs/howto/set-config.rst | 23 +-
.../docs/installation/upgrading_to_airflow3.rst | 2 +-
airflow-core/docs/public-airflow-interface.rst | 7 +-
.../docs/security/jwt_token_authentication.rst | 398 +++++++++++++++++
airflow-core/docs/security/security_model.rst | 493 ++++++++++++++++++++-
airflow-core/docs/security/workload.rst | 83 ++++
.../src/airflow/config_templates/config.yml | 10 +-
docs/spelling_wordlist.txt | 4 +
scripts/ci/prek/check_security_doc_constants.py | 427 ++++++++++++++++++
16 files changed, 1477 insertions(+), 53 deletions(-)
diff --git a/.github/instructions/code-review.instructions.md
b/.github/instructions/code-review.instructions.md
index 0d4ce8a8791..cd480bdcaf7 100644
--- a/.github/instructions/code-review.instructions.md
+++ b/.github/instructions/code-review.instructions.md
@@ -11,7 +11,7 @@ Use these rules when reviewing pull requests to the Apache
Airflow repository.
- **Scheduler must never run user code.** It only processes serialized Dags.
Flag any scheduler-path code that deserializes or executes Dag/task code.
- **Flag any task execution code that accesses the metadata DB directly**
instead of through the Execution API (`/execution` endpoints).
-- **Flag any code in Dag Processor or Triggerer that breaks process
isolation** — these components run user code in isolated processes.
+- **Flag any code in Dag Processor or Triggerer that breaks process
isolation** — these components run user code in separate processes from the
Scheduler and API Server, but note that they potentially have direct metadata
database access and potentially bypass JWT authentication via in-process
Execution API transport. This is an intentional design choice documented in the
security model, not a security vulnerability.
- **Flag any provider importing core internals** like `SUPERVISOR_COMMS` or
task-runner plumbing. Providers interact through the public SDK and execution
API only.
## Database and Query Correctness
diff --git a/AGENTS.md b/AGENTS.md
index b9ef07b381d..3abe9eda14c 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -67,15 +67,38 @@ UV workspace monorepo. Key paths:
## Architecture Boundaries
1. Users author Dags with the Task SDK (`airflow.sdk`).
-2. Dag Processor parses Dag files in isolated processes and stores serialized
Dags in the metadata DB.
+2. Dag File Processor parses Dag files in separate processes and stores
serialized Dags in the metadata DB. Software guards prevent individual parsing
processes from accessing the database directly and enforce use of the Execution
API, but these guards do not protect against intentional bypassing by malicious
or misconfigured code.
3. Scheduler reads serialized Dags — **never runs user code** — and creates
Dag runs / task instances.
-4. Workers execute tasks via Task SDK and communicate with the API server
through the Execution API — **never access the metadata DB directly**.
+4. Workers execute tasks via Task SDK and communicate with the API server
through the Execution API — **never access the metadata DB directly**. Each
task receives a short-lived JWT token scoped to its task instance ID.
5. API Server serves the React UI and handles all client-database interactions.
-6. Triggerer evaluates deferred tasks/sensors in isolated processes.
+6. Triggerer evaluates deferred tasks/sensors in separate processes. Like the
Dag File Processor, software guards steer it through the Execution API rather
than direct database access, but these guards do not protect against
intentional bypassing by malicious or misconfigured code.
7. Shared libraries that are symbolically linked to different Python
distributions are in `shared` folder.
8. Airflow uses `uv workspace` feature to keep all the distributions sharing
dependencies and venv
9. Each of the distributions should declare other needed distributions: `uv
--project <FOLDER> sync` command acts on the selected project in the monorepo
with only dependencies that it has
+## Security Model
+
+When reviewing code, writing security documentation, or performing security
research, keep in
+mind the following aspects of Airflow's security model. The authoritative
reference is
+[`airflow-core/docs/security/security_model.rst`](airflow-core/docs/security/security_model.rst)
+and
[`airflow-core/docs/security/jwt_token_authentication.rst`](airflow-core/docs/security/jwt_token_authentication.rst).
+
+Particularly, the intentional design choices, that are not security
vulnerabilities and should not be
+reported as such are described in "What is NOT considered a security
vulnerability" chapter of the security model.
+
+**When flagging security concerns, distinguish between:**
+
+1. **Actual vulnerabilities** — code that violates the documented security
model (e.g., a worker
+ gaining database access it shouldn't have, a Scheduler executing user code,
an unauthenticated
+ user accessing protected endpoints).
+2. **Known limitations** — documented gaps where the current implementation
doesn't provide full
+ isolation (e.g., DFP/Triggerer database access, shared Execution API
resources, multi-team
+ not enforcing task-level isolation). These are tracked for improvement in
future versions and
+ should not be reported as new findings.
+3. **Deployment hardening opportunities** — measures a Deployment Manager can
take to improve
+ isolation beyond what Airflow enforces natively (e.g., per-component
configuration, asymmetric
+ JWT keys, network policies). These belong in deployment guidance, not as
code-level issues.
+
# Shared libraries
- shared libraries provide implementation of some common utilities like
logging, configuration where the code should be reused in different
distributions (potentially in different versions)
diff --git a/airflow-core/.pre-commit-config.yaml
b/airflow-core/.pre-commit-config.yaml
index 0567187dbf6..0ad9de856d9 100644
--- a/airflow-core/.pre-commit-config.yaml
+++ b/airflow-core/.pre-commit-config.yaml
@@ -263,6 +263,16 @@ repos:
require_serial: true
pass_filenames: false
files: ^src/airflow/config_templates/config\.yml$
+ - id: check-security-doc-constants
+ name: Check security docs match config.yml constants
+ entry: ../scripts/ci/prek/check_security_doc_constants.py
+ language: python
+ pass_filenames: false
+ files: >
+ (?x)
+ ^src/airflow/config_templates/config\.yml$|
+ ^docs/security/jwt_token_authentication\.rst$|
+ ^docs/security/security_model\.rst$
- id: check-airflow-version-checks-in-core
language: pygrep
name: No AIRFLOW_V_* imports in airflow-core
diff --git
a/airflow-core/docs/administration-and-deployment/production-deployment.rst
b/airflow-core/docs/administration-and-deployment/production-deployment.rst
index e69d4364887..e88b94d94ba 100644
--- a/airflow-core/docs/administration-and-deployment/production-deployment.rst
+++ b/airflow-core/docs/administration-and-deployment/production-deployment.rst
@@ -62,9 +62,12 @@ the :doc:`Celery executor
<apache-airflow-providers-celery:celery_executor>`.
Once you have configured the executor, it is necessary to make sure that every
node in the cluster contains
-the same configuration and Dags. Airflow sends simple instructions such as
"execute task X of Dag Y", but
-does not send any Dag files or configuration. You can use a simple cronjob or
any other mechanism to sync
-Dags and configs across your nodes, e.g., checkout Dags from git repo every 5
minutes on all nodes.
+the Dags and configuration appropriate for its role. Airflow sends simple
instructions such as
+"execute task X of Dag Y", but does not send any Dag files or configuration.
For synchronization of Dags
+we recommend the Dag Bundle mechanism (including ``GitDagBundle``), which
allows you to make use of
+DAG versioning. For security-sensitive deployments, restrict sensitive
configuration (JWT signing keys,
+database credentials, Fernet keys) to only the components that need them
rather than sharing all
+configuration across all nodes — see :doc:`/security/security_model` for
guidance.
Logging
diff --git a/airflow-core/docs/best-practices.rst
b/airflow-core/docs/best-practices.rst
index 9e94a1bb9db..b0b75b0086a 100644
--- a/airflow-core/docs/best-practices.rst
+++ b/airflow-core/docs/best-practices.rst
@@ -1098,8 +1098,10 @@ The benefits of using those operators are:
environment is optimized for the case where you have multiple similar, but
different environments.
* The dependencies can be pre-vetted by the admins and your security team, no
unexpected, new code will
be added dynamically. This is good for both, security and stability.
-* Complete isolation between tasks. They cannot influence one another in other
ways than using standard
- Airflow XCom mechanisms.
+* Strong process-level isolation between tasks. Tasks run in separate
containers/pods and cannot
+ influence one another at the process or filesystem level. They can still
interact through standard
+ Airflow mechanisms (XComs, connections, variables) via the Execution API. See
+ :doc:`/security/security_model` for the full isolation model.
The drawbacks:
diff --git a/airflow-core/docs/configurations-ref.rst
b/airflow-core/docs/configurations-ref.rst
index 83c5d8a8ed5..1afe00f1e2c 100644
--- a/airflow-core/docs/configurations-ref.rst
+++ b/airflow-core/docs/configurations-ref.rst
@@ -22,15 +22,22 @@ Configuration Reference
This page contains the list of all the available Airflow configurations that
you
can set in ``airflow.cfg`` file or using environment variables.
-Use the same configuration across all the Airflow components. While each
component
-does not require all, some configurations need to be same otherwise they would
not
-work as expected. A good example for that is
:ref:`secret_key<config:api__secret_key>` which
-should be same on the Webserver and Worker to allow Webserver to fetch logs
from Worker.
-
-The webserver key is also used to authorize requests to Celery workers when
logs are retrieved. The token
-generated using the secret key has a short expiry time though - make sure that
time on ALL the machines
-that you run Airflow components on is synchronized (for example using ntpd)
otherwise you might get
-"forbidden" errors when the logs are accessed.
+Different Airflow components may require different configuration parameters,
and for
+improved security, you should restrict sensitive configuration to only the
components that
+need it. Some configuration values must be shared across specific components
to work
+correctly — for example, the JWT signing key (``[api_auth] jwt_secret`` or
+``[api_auth] jwt_private_key_path``) must be consistent across all components
that generate
+or validate JWT tokens (Scheduler, API Server). However, other sensitive
parameters such as
+database connection strings or Fernet keys should only be provided to
components that need them.
+
+For security-sensitive deployments, pass configuration values via environment
variables
+scoped to individual components rather than sharing a single configuration
file across all
+components. See :doc:`/security/security_model` for details on which
configuration
+parameters should be restricted to which components.
+
+Make sure that time on ALL the machines that you run Airflow components on is
synchronized
+(for example using ntpd) otherwise you might get "forbidden" errors when the
logs are
+accessed or API calls are made.
.. note::
For more information see :doc:`/howto/set-config`.
diff --git a/airflow-core/docs/core-concepts/multi-team.rst
b/airflow-core/docs/core-concepts/multi-team.rst
index 6beccc249b1..609a79cdf18 100644
--- a/airflow-core/docs/core-concepts/multi-team.rst
+++ b/airflow-core/docs/core-concepts/multi-team.rst
@@ -38,7 +38,7 @@ Multi-Team mode is designed for medium to large organizations
that typically hav
**Use Multi-Team mode when:**
- You have many teams that need to share Airflow infrastructure
-- You need resource isolation (Variables, Connections, Secrets, etc) between
teams
+- You need resource isolation (Variables, Connections, Secrets, etc) between
teams at the UI and API level (see :doc:`/security/security_model` for
task-level isolation limitations)
- You want separate execution environments per team
- You want separate views per team in the Airflow UI
- You want to minimize operational overhead or cost by sharing a single
Airflow deployment
diff --git a/airflow-core/docs/howto/set-config.rst
b/airflow-core/docs/howto/set-config.rst
index 30d29c924c6..c35df0f4c89 100644
--- a/airflow-core/docs/howto/set-config.rst
+++ b/airflow-core/docs/howto/set-config.rst
@@ -157,15 +157,20 @@ the example below.
See :doc:`/administration-and-deployment/modules_management` for details
on how Python and Airflow manage modules.
.. note::
- Use the same configuration across all the Airflow components. While each
component
- does not require all, some configurations need to be same otherwise they
would not
- work as expected. A good example for that is
:ref:`secret_key<config:api__secret_key>` which
- should be same on the Webserver and Worker to allow Webserver to fetch
logs from Worker.
-
- The webserver key is also used to authorize requests to Celery workers
when logs are retrieved. The token
- generated using the secret key has a short expiry time though - make sure
that time on ALL the machines
- that you run Airflow components on is synchronized (for example using
ntpd) otherwise you might get
- "forbidden" errors when the logs are accessed.
+ Different Airflow components may require different configuration
parameters. For improved
+ security, restrict sensitive configuration to only the components that
need it rather than
+ sharing all configuration across all components. Some values must be
consistent across specific
+ components — for example, the JWT signing key must match between
components that generate and
+ validate tokens. However, sensitive parameters such as database connection
strings, Fernet keys,
+ and secrets backend credentials should only be provided to components that
actually need them.
+
+ For security-sensitive deployments, pass configuration values via
environment variables scoped
+ to individual components. See :doc:`/security/security_model` for detailed
guidance on
+ restricting configuration parameters.
+
+ Make sure that time on ALL the machines that you run Airflow components on
is synchronized
+ (for example using ntpd) otherwise you might get "forbidden" errors when
the logs are
+ accessed or API calls are made.
.. _set-config:configuring-local-settings:
diff --git a/airflow-core/docs/installation/upgrading_to_airflow3.rst
b/airflow-core/docs/installation/upgrading_to_airflow3.rst
index 2d9c878390d..ad0b5507b62 100644
--- a/airflow-core/docs/installation/upgrading_to_airflow3.rst
+++ b/airflow-core/docs/installation/upgrading_to_airflow3.rst
@@ -54,7 +54,7 @@ In Airflow 3, direct metadata database access from task code
is now restricted.
- **No Direct Database Access**: Task code can no longer directly import and
use Airflow database sessions or models.
- **API-Based Resource Access**: All runtime interactions (state transitions,
heartbeats, XComs, and resource fetching) are handled through a dedicated Task
Execution API.
-- **Enhanced Security**: This ensures isolation and security by preventing
malicious task code from accessing or modifying the Airflow metadata database.
+- **Enhanced Security**: This improves isolation and security by preventing
worker task code from directly accessing or modifying the Airflow metadata
database. Note that Dag author code potentially still executes with direct
database access in the Dag File Processor and Triggerer — see
:doc:`/security/security_model` for details.
- **Stable Interface**: The Task SDK provides a stable, forward-compatible
interface for accessing Airflow resources without direct database dependencies.
Step 1: Take care of prerequisites
diff --git a/airflow-core/docs/public-airflow-interface.rst
b/airflow-core/docs/public-airflow-interface.rst
index c768c36a7b1..4f4c09d66d1 100644
--- a/airflow-core/docs/public-airflow-interface.rst
+++ b/airflow-core/docs/public-airflow-interface.rst
@@ -548,9 +548,10 @@ but in Airflow they are not parts of the Public Interface
and might change any t
internal implementation detail and you should not assume they will be
maintained
in a backwards-compatible way.
-**Direct metadata database access from task code is no longer allowed**.
-Task code cannot directly access the metadata database to query Dag state,
task history,
-or Dag runs. Instead, use one of the following alternatives:
+**Direct metadata database access from code authored by Dag Authors is no
longer allowed**.
+The code authored by Dag Authors cannot directly access the metadata database
to query Dag state, task history,
+or Dag runs — workers communicate exclusively through the Execution API.
Instead, use one
+of the following alternatives:
* **Task Context**: Use :func:`~airflow.sdk.get_current_context` to access
task instance
information and methods like
:meth:`~airflow.sdk.types.RuntimeTaskInstanceProtocol.get_dr_count`,
diff --git a/airflow-core/docs/security/jwt_token_authentication.rst
b/airflow-core/docs/security/jwt_token_authentication.rst
new file mode 100644
index 00000000000..7aa85bba9a3
--- /dev/null
+++ b/airflow-core/docs/security/jwt_token_authentication.rst
@@ -0,0 +1,398 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ .. http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations
+ under the License.
+
+JWT Token Authentication
+========================
+
+This document describes how JWT (JSON Web Token) authentication works in
Apache Airflow
+for both the public REST API (Core API) and the internal Execution API used by
workers.
+
+.. contents::
+ :local:
+ :depth: 2
+
+Overview
+--------
+
+Airflow uses JWT tokens as the primary authentication mechanism for its APIs.
There are two
+distinct JWT authentication flows:
+
+1. **REST API (Core API)** — used by UI users, CLI tools, and external clients
to interact
+ with the Airflow public API.
+2. **Execution API** — used internally by workers, the Dag File Processor, and
the Triggerer
+ to communicate task state and retrieve runtime data (connections,
variables, XComs).
+
+Both flows share the same underlying JWT infrastructure (``JWTGenerator`` and
``JWTValidator``
+classes in ``airflow.api_fastapi.auth.tokens``) but differ in audience, token
lifetime, subject
+claims, and scope semantics.
+
+
+Signing and Cryptography
+------------------------
+
+Airflow supports two mutually exclusive signing modes:
+
+**Symmetric (shared secret)**
+ Uses a pre-shared secret key (``[api_auth] jwt_secret``) with the **HS512**
algorithm.
+ All components that generate or validate tokens must share the same secret.
If no secret
+ is configured, Airflow auto-generates a random 16-byte key at startup — but
this key is
+ ephemeral and different across processes, which will cause authentication
failures in
+ multi-component deployments. Deployment Managers must explicitly configure
this value.
+
+**Asymmetric (public/private key pair)**
+ Uses a PEM-encoded private key (``[api_auth] jwt_private_key_path``) for
signing and
+ the corresponding public key for validation. Supported algorithms:
**RS256** (``RSA``) and
+ **EdDSA** (``Ed25519``). The algorithm is auto-detected from the key type
when
+ ``[api_auth] jwt_algorithm`` is set to ``GUESS`` (the default).
+
+ Validation can use either:
+
+ - A JWKS (JSON Web Key Set) endpoint configured via ``[api_auth]
trusted_jwks_url``
+ (local file or remote HTTP/HTTPS URL, polled periodically for updates).
+ - The public key derived from the configured private key (automatic
fallback when
+ ``trusted_jwks_url`` is not set).
+
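As an illustration of the symmetric mode, the following stdlib-only sketch signs and verifies an HS512 token the way a JWT library does (base64url-encoded header and payload, HMAC-SHA512 signature). It is a simplified stand-in for Airflow's ``JWTGenerator``/``JWTValidator``, not their actual code:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs512(claims: dict, secret: bytes) -> str:
    header = {"alg": "HS512", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha512).digest()
    return f"{signing_input}.{b64url(sig)}"


def verify_hs512(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha512).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


now = int(time.time())
token = sign_hs512({"sub": "user-1", "iat": now, "exp": now + 86400}, b"shared-secret")
claims = verify_hs512(token, b"shared-secret")
```

Both sides must hold the same secret; verification with any other key fails, which is why the shared secret has to be configured consistently on every component that generates or validates tokens.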
+REST API Authentication Flow
+-----------------------------
+
+Token acquisition
+^^^^^^^^^^^^^^^^^
+
+1. A client sends a ``POST`` request to ``/auth/token`` with credentials
(e.g., username
+ and password in JSON body).
+2. The auth manager validates the credentials and creates a user object.
+3. The auth manager serializes the user into JWT claims and calls
``JWTGenerator.generate()``.
+4. The generated token is returned in the response as ``access_token``.
+
+For UI-based authentication, the token is stored in a secure, HTTP-only cookie
(``_token``)
+with ``SameSite=Lax``.
+
+The CLI uses a separate endpoint (``/auth/token/cli``) with a different
(shorter) expiration
+time.
+
+Token structure (REST API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. list-table::
+ :header-rows: 1
+ :widths: 15 85
+
+ * - Claim
+ - Description
+ * - ``jti``
+ - Unique token identifier (UUID4 hex). Used for token revocation.
+ * - ``iss``
+ - Issuer (from ``[api_auth] jwt_issuer``).
+ * - ``aud``
+ - Audience (from ``[api_auth] jwt_audience``).
+ * - ``sub``
+ - User identifier (serialized by the auth manager).
+ * - ``iat``
+ - Issued-at timestamp (Unix epoch seconds).
+ * - ``nbf``
+ - Not-before timestamp (same as ``iat``).
+ * - ``exp``
+ - Expiration timestamp (``iat + jwt_expiration_time``).
+
+Token validation (REST API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+On each API request, the token is extracted in this order of precedence:
+
+1. ``Authorization: Bearer <token>`` header.
+2. OAuth2 query parameter.
+3. ``_token`` cookie.
+
+The ``JWTValidator`` verifies the signature, expiry (``exp``), not-before
(``nbf``),
+issued-at (``iat``), audience, and issuer claims. A configurable leeway
+(``[api_auth] jwt_leeway``, default 10 seconds) accounts for clock skew.
+
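The claim checks above can be sketched as plain Python. This is an illustrative model of the validation rules (expiry and not-before with clock-skew leeway, exact audience and issuer matching), not the actual ``JWTValidator`` code; the audience and issuer values are placeholders:

```python
import time


def validate_claims(claims: dict, *, audience: str, issuer: str, leeway: int = 10) -> None:
    # Standard JWT claim checks with a configurable leeway for clock skew.
    now = time.time()
    if claims["exp"] < now - leeway:
        raise ValueError("token expired")
    if claims.get("nbf", 0) > now + leeway:
        raise ValueError("token not yet valid")
    if claims["aud"] != audience:
        raise ValueError("audience mismatch")
    if claims.get("iss") != issuer:
        raise ValueError("issuer mismatch")


now = time.time()
validate_claims(
    {"exp": now + 60, "nbf": now, "aud": "apache-airflow", "iss": "airflow"},
    audience="apache-airflow",
    issuer="airflow",
)
```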
+Token revocation (REST API only)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Token revocation applies only to REST API and UI tokens — it is **not** used
for Execution API
+tokens issued to workers.
+
+Revoked tokens are tracked in the ``revoked_token`` database table by their
``jti`` claim.
+On logout or explicit revocation, the token's ``jti`` and ``exp`` are inserted
into this
+table. Expired entries are automatically cleaned up at a cadence of ``2×
jwt_expiration_time``.
+
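The revocation bookkeeping can be modeled with a simple in-memory map (a sketch of the concept only; Airflow stores these rows in the ``revoked_token`` database table):

```python
import time

# Sketch: jti -> exp timestamp, standing in for the revoked_token table.
revoked: dict = {}


def revoke(jti: str, exp: float) -> None:
    revoked[jti] = exp


def is_revoked(jti: str) -> bool:
    return jti in revoked


def purge_expired(now: float) -> None:
    # Tokens past their exp fail the expiry check anyway, so their jtis
    # can safely be dropped from the table during periodic cleanup.
    for jti, exp in list(revoked.items()):
        if exp < now:
            del revoked[jti]


now = time.time()
revoke("abc123", now + 3600)   # still-valid token revoked on logout
revoke("old456", now - 10)     # already-expired entry
purge_expired(now)
```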
+Token refresh (REST API)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``JWTRefreshMiddleware`` runs on UI requests. When the middleware detects that the
+token in the ``_token`` cookie is approaching expiry, it calls
+``auth_manager.refresh_user()`` to generate a new token and sets it in an updated cookie.
+
+Default timings (REST API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. list-table::
+ :header-rows: 1
+ :widths: 50 50
+
+ * - Setting
+ - Default
+ * - ``[api_auth] jwt_expiration_time``
+ - 86400 seconds (24 hours)
+ * - ``[api_auth] jwt_cli_expiration_time``
+ - 3600 seconds (1 hour)
+ * - ``[api_auth] jwt_leeway``
+ - 10 seconds
+
+
+Execution API Authentication Flow
+----------------------------------
+
+The Execution API is an internal API used by Airflow itself (not third-party callers)
+to report and set task state transitions, send heartbeats, retrieve connections,
+variables, and XComs at task runtime, and to drive trigger execution and Dag parsing.
+
+Token generation (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. The **Scheduler** generates a JWT for each task instance before
+ dispatching it (via the executor) to a worker. The executor's
+ ``jwt_generator`` property creates a ``JWTGenerator`` configured with the
``[execution_api]`` settings.
+2. The token's ``sub`` (subject) claim is set to the **task instance UUID**.
+3. The token is embedded in the workload JSON payload
(``BaseWorkloadSchema.token`` field)
+ that is sent to the worker process.
+
+Token structure (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. list-table::
+ :header-rows: 1
+ :widths: 15 85
+
+ * - Claim
+ - Description
+ * - ``jti``
+ - Unique token identifier (UUID4 hex).
+ * - ``iss``
+ - Issuer (from ``[api_auth] jwt_issuer``).
+ * - ``aud``
+ - Audience (from ``[execution_api] jwt_audience``, default:
``urn:airflow.apache.org:task``).
+ * - ``sub``
+ - Task instance UUID — the identity of the workload.
+ * - ``scope``
+ - Token scope: ``"execution"`` or ``"workload"``.
+ * - ``iat``
+ - Issued-at timestamp.
+ * - ``nbf``
+ - Not-before timestamp.
+ * - ``exp``
+ - Expiration timestamp (``iat + [execution_api] jwt_expiration_time``).
+
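The claim table above can be sketched as a plain dictionary (an illustrative sketch of the claims listed, not Airflow's token-minting code; the helper name is hypothetical):

```python
import time
import uuid


def execution_token_claims(
    ti_id: str,
    expiration: int = 600,  # default [execution_api] jwt_expiration_time
    audience: str = "urn:airflow.apache.org:task",
) -> dict:
    now = int(time.time())
    return {
        "jti": uuid.uuid4().hex,
        "sub": ti_id,  # task instance UUID is the identity of the workload
        "aud": audience,
        "scope": "execution",
        "iat": now,
        "nbf": now,
        "exp": now + expiration,
    }


claims = execution_token_claims(str(uuid.uuid4()))
```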
+Token scopes (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Execution API defines two token scopes:
+
+**workload**
+ A restricted scope accepted only on endpoints that explicitly opt in via
+ ``Security(require_auth, scopes=["token:workload"])``. Used for endpoints
that
+ manage task state transitions.
+
+**execution**
+ Accepted by all Execution API endpoints. This is the standard scope for worker
+ communication and allows access to all Execution API endpoints.
+
+Tokens without a ``scope`` claim default to ``"execution"`` for backwards
compatibility.
+
+Token delivery to workers
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The token flows through the execution stack as follows:
+
+1. The **Scheduler** generates the token and embeds it in the workload JSON payload that it
+ passes to the **Executor**.
+2. The workload JSON is passed to the worker process (via the
executor-specific mechanism:
+ Celery message, Kubernetes Pod spec, local subprocess arguments, etc.).
+3. The worker's ``execute_workload()`` function reads the workload JSON and
extracts the token.
+4. The ``supervise()`` function receives the token and creates an
``httpx.Client`` instance
+ with ``BearerAuth(token)`` for all Execution API HTTP requests.
+5. The token is included in the ``Authorization: Bearer <token>`` header of
every request.
+
+Token validation (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``JWTBearer`` security dependency validates the token once per request:
+
+1. Extracts the token from the ``Authorization: Bearer`` header.
+2. Performs cryptographic signature validation via ``JWTValidator``.
+3. Verifies standard claims (``exp``, ``iat``, ``aud`` — ``nbf`` and ``iss``
if configured).
+4. Defaults the ``scope`` claim to ``"execution"`` if absent.
+5. Creates a ``TIToken`` object with the task instance ID and claims.
+6. Caches the validated token on the ASGI request scope for the duration of
the request.
+
+Route-level enforcement is handled by ``require_auth``:
+
+- Checks the token's ``scope`` against the route's ``allowed_token_types``
(precomputed
+ by ``ExecutionAPIRoute`` from ``token:*`` Security scopes at route
registration time).
+- Enforces ``ti:self`` scope — verifies that the token's ``sub`` claim matches
the
+ ``{task_instance_id}`` path parameter, preventing a worker from accessing
another task's
+ endpoints.
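The two route-level checks above can be modelled like this (a simplified sketch; the real ``require_auth`` is a FastAPI dependency and the names here are illustrative):

```python
def enforce_route_auth(claims, allowed_token_types, path_ti_id=None, ti_self=False):
    # check the token's scope against the route's allowed token types
    scope = claims.get("scope", "execution")
    if scope not in allowed_token_types:
        raise PermissionError(f"scope {scope!r} not accepted on this route")
    # ti:self - the token's sub must match the {task_instance_id} path parameter
    if ti_self and claims.get("sub") != path_ti_id:
        raise PermissionError("token does not belong to this task instance")
```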
+
+Token refresh (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``JWTReissueMiddleware`` automatically refreshes valid tokens that are
approaching expiry:
+
+1. After each response, the middleware checks the token's remaining validity.
+2. If less than **20%** of the total validity remains (minimum 30 seconds),
the server
+ generates a new token preserving all original claims (including ``scope``
and ``sub``).
+3. The refreshed token is returned in the ``Refreshed-API-Token`` response
header.
+4. The client's ``_update_auth()`` hook detects this header and transparently
updates
+ the ``BearerAuth`` instance for subsequent requests.
+
+This mechanism ensures long-running tasks do not lose API access due to token
expiry,
+without requiring the worker to re-authenticate.
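The refresh decision reduces to a simple threshold check (thresholds as documented above; the function name is illustrative):

```python
def should_refresh(now, iat, exp):
    """Reissue when less than 20% of total validity remains (minimum 30 s)."""
    total = exp - iat
    remaining = exp - now
    threshold = max(0.2 * total, 30.0)
    return 0 < remaining < threshold
```

With the default 600-second lifetime this yields a 120-second threshold; for tokens shorter than 150 seconds the 30-second floor applies.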
+
+No token revocation (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Execution API tokens are not subject to revocation. They are short-lived
(default 10 minutes)
+and automatically refreshed by the ``JWTReissueMiddleware``, so revocation is
not part of the
+Execution API security model. Once an Execution API token is issued to a
worker, it remains
+valid until it expires.
+
+
+
+Default timings (Execution API)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. list-table::
+ :header-rows: 1
+ :widths: 50 50
+
+ * - Setting
+ - Default
+ * - ``[execution_api] jwt_expiration_time``
+ - 600 seconds (10 minutes)
+ * - ``[execution_api] jwt_audience``
+ - ``urn:airflow.apache.org:task``
+ * - Token refresh threshold
+ - 20% of validity remaining (minimum 30 seconds, i.e., at ~120 seconds
before expiry
+ with the default 600-second token lifetime)
+
+
+Dag File Processor and Triggerer
+---------------------------------
+
+The **Dag File Processor** and **Triggerer** are internal Airflow components
that also
+interact with the Execution API, but they do so via an **in-process** transport
+(``InProcessExecutionAPI``) rather than over the network. This in-process API:
+
+- Runs the Execution API application directly within the same process, using
an ASGI/WSGI
+ bridge.
+- **Potentially bypasses JWT authentication** — the JWT bearer dependency is
overridden to
+ always return a synthetic ``TIToken`` with the ``"execution"`` scope,
effectively bypassing
+ token validation.
+- Also potentially bypasses per-resource access controls (connection,
variable, and XCom access
+ checks are overridden to always allow).
+
+Airflow implements software guards that prevent accidental direct database
access from Dag
+author code in these components. However, because the child processes that
parse Dag files and
+execute trigger code run as the **same Unix user** as their parent processes,
these guards do
+not protect against intentional access. A deliberately malicious Dag author
can potentially
+retrieve the parent process's database credentials (via
``/proc/<PID>/environ``, configuration
+files, or secrets manager access) and gain full read/write access to the
metadata database and
+all Execution API operations — without needing a valid JWT token.
+
+This is in contrast to workers/task execution, where isolation is implemented at the
+deployment level: database credentials are not made available to worker processes at
+all (they are simply not set in their deployment configuration), and workers
+communicate exclusively through the Execution API.
+
+In the default deployment, a **single Dag File Processor instance** parses Dag
files for all
+teams and a **single Triggerer instance** handles all triggers across all
teams. This means
+that Dag author code from different teams executes within the same process,
with potentially
+shared access to the in-process Execution API and the metadata database.
+
+For multi-team deployments that require isolation, Deployment Managers must
run **separate
+Dag File Processor and Triggerer instances per team** as a deployment-level
measure — Airflow
+does not provide built-in support for per-team DFP or Triggerer instances.
Even with separate
+instances, each child process still runs as the same Unix user as its parent. To prevent
credential
+retrieval, Deployment Managers must implement Unix user-level isolation
(running child
+processes as a different, low-privilege user) or network-level restrictions.
+
+See :doc:`/security/security_model` for the full security implications,
deployment hardening
+guidance, and the planned strategic and tactical improvements.
+
+
+Workload Isolation and Current Limitations
+------------------------------------------
+
+For a detailed discussion of workload isolation protections, current
limitations, and planned
+improvements, see :ref:`workload-isolation`.
+
+
+Configuration Reference
+------------------------
+
+All JWT-related configuration parameters:
+
+.. list-table::
+ :header-rows: 1
+ :widths: 40 15 45
+
+ * - Parameter
+ - Default
+ - Description
+ * - ``[api_auth] jwt_secret``
+ - Auto-generated if missing
+ - Symmetric secret key for signing tokens. Must be the same across all
components. Mutually exclusive with ``jwt_private_key_path``.
+ * - ``[api_auth] jwt_private_key_path``
+ - None
+ - Path to PEM-encoded private key (``RSA`` or ``Ed25519``). Mutually
exclusive with ``jwt_secret``.
+ * - ``[api_auth] jwt_algorithm``
+ - ``GUESS``
+ - Signing algorithm. Auto-detected from key type: ``HS512`` for
symmetric, ``RS256`` for ``RSA``, ``EdDSA`` for ``Ed25519``.
+ * - ``[api_auth] jwt_kid``
+ - Auto (``RFC 7638`` thumbprint)
+ - Key ID placed in token header. Ignored for symmetric keys.
+ * - ``[api_auth] jwt_issuer``
+ - None
+ - Issuer claim (``iss``). Recommended to be unique per deployment.
+ * - ``[api_auth] jwt_audience``
+ - None
+ - Audience claim (``aud``) for REST API tokens.
+ * - ``[api_auth] jwt_expiration_time``
+ - 86400 (24h)
+ - REST API token lifetime in seconds.
+ * - ``[api_auth] jwt_cli_expiration_time``
+ - 3600 (1h)
+ - CLI token lifetime in seconds.
+ * - ``[api_auth] jwt_leeway``
+ - 10
+ - Clock skew tolerance in seconds for token validation.
+ * - ``[api_auth] trusted_jwks_url``
+ - None
+ - JWKS endpoint URL or local file path for token validation. Mutually
exclusive with ``jwt_secret``.
+ * - ``[execution_api] jwt_expiration_time``
+ - 600 (10 min)
+ - Execution API token lifetime in seconds.
+ * - ``[execution_api] jwt_audience``
+ - ``urn:airflow.apache.org:task``
+ - Audience claim for Execution API tokens.
+
+.. important::
+
+ Time synchronization across all Airflow components is critical. Use NTP
(e.g., ``ntpd`` or
+ ``chrony``) to keep clocks in sync. Clock skew beyond the configured
``jwt_leeway`` will cause
+ authentication failures.
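The effect of ``jwt_leeway`` on expiry checks can be illustrated with a minimal sketch (not Airflow's validator; shown only to make the clock-skew tolerance concrete):

```python
def exp_valid(exp, now, leeway=10.0):
    # A token is still accepted up to `leeway` seconds past nominal expiry,
    # absorbing small clock differences between components.
    return now <= exp + leeway
```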
diff --git a/airflow-core/docs/security/security_model.rst
b/airflow-core/docs/security/security_model.rst
index 15b59b25090..96f6f66783b 100644
--- a/airflow-core/docs/security/security_model.rst
+++ b/airflow-core/docs/security/security_model.rst
@@ -62,11 +62,24 @@ Dag authors
...........
They can create, modify, and delete Dag files. The
-code in Dag files is executed on workers and in the Dag Processor.
-Therefore, Dag authors can create and change code executed on workers
-and the Dag Processor and potentially access the credentials that the Dag
-code uses to access external systems. Dag authors have full access
-to the metadata database.
+code in Dag files is executed on workers, in the Dag File Processor,
+and in the Triggerer.
+Therefore, Dag authors can create and change code executed on workers,
+the Dag File Processor, and the Triggerer, and potentially access the
credentials that the Dag
+code uses to access external systems.
+
+In Airflow 3, the level of database isolation depends on the component:
+
+* **Workers**: Task code on workers communicates with the API server
exclusively through the
+ Execution API. Workers do not receive database credentials and genuinely
cannot access the
+ metadata database directly.
+* **Dag File Processor and Triggerer**: Airflow implements software guards
that prevent
+ accidental direct database access from Dag author code. However, because Dag
parsing and
+ trigger execution processes run as the same Unix user as their parent
processes (which do
+ have database credentials), a deliberately malicious Dag author can
potentially retrieve
+ credentials from the parent process and gain direct database access. See
+ :ref:`jwt-authentication-and-workload-isolation` for details on the specific
mechanisms and
+ deployment hardening measures.
Authenticated UI users
.......................
@@ -115,6 +128,8 @@ The primary difference between an operator and admin is the
ability to manage an
to other users, and access audit logs - only admins are able to do this.
Otherwise assume they have
the same access as an admin.
+.. _connection-configuration-users:
+
Connection configuration users
..............................
@@ -170,6 +185,8 @@ Viewers also do not have permission to access audit logs.
For more information on the capabilities of authenticated UI users, see
:doc:`apache-airflow-providers-fab:auth-manager/access-control`.
+.. _capabilities-of-dag-authors:
+
Capabilities of Dag authors
---------------------------
@@ -193,15 +210,21 @@ not open new security vulnerabilities.
Limiting Dag Author access to subset of Dags
--------------------------------------------
-Airflow does not have multi-tenancy or multi-team features to provide
isolation between different groups of users when
-it comes to task execution. While, in Airflow 3.0 and later, Dag Authors
cannot directly access database and cannot run
-arbitrary queries on the database, they still have access to all Dags in the
Airflow installation and they can
+Airflow does not yet provide full task-level isolation between different
groups of users when
+it comes to task execution. While, in Airflow 3.0 and later, worker task code
cannot directly access the
+metadata database (it communicates through the Execution API), Dag author code
that runs in the Dag File
+Processor and Triggerer potentially still has direct database access.
Regardless of execution context, Dag authors
+have access to all Dags in the Airflow installation and they can
modify any of those Dags - no matter which Dag the task code is executed for.
This means that Dag authors can
modify state of any task instance of any Dag, and there are no finer-grained
access controls to limit that access.
-There is a work in progress on multi-team feature in Airflow that will allow
to have some isolation between different
-groups of users and potentially limit access of Dag authors to only a subset
of Dags, but currently there is no
-such feature in Airflow and you can assume that all Dag authors have access to
all Dags and can modify their state.
+There is an **experimental** multi-team feature in Airflow (``[core]
multi_team``) that provides UI-level and
+REST API-level RBAC isolation between teams. However, this feature **does not
yet guarantee task-level isolation**.
+At the task execution level, workloads from different teams still share the
same Execution API, signing keys,
+connections, and variables. A task from one team can access the same shared
resources as a task from another team.
+The multi-team feature is a work in progress — task-level isolation and
Execution API enforcement of team
+boundaries will be improved in future versions of Airflow. Until then, you
should assume that all Dag authors
+have access to all Dags and shared resources, and can modify their state
regardless of team assignment.
Security contexts for Dag author submitted code
@@ -239,8 +262,15 @@ Triggerer
In case of Triggerer, Dag authors can execute arbitrary code in Triggerer.
Currently there are no
enforcement mechanisms that would allow to isolate tasks that are using
deferrable functionality from
-each other and arbitrary code from various tasks can be executed in the same
process/machine. Deployment
-Manager must trust that Dag authors will not abuse this capability.
+each other and arbitrary code from various tasks can be executed in the same
process/machine. The default
+deployment runs a single Triggerer instance that handles triggers from all
teams — there is no built-in
+support for per-team Triggerer instances. Additionally, the Triggerer uses an
in-process Execution API
+transport that potentially bypasses JWT authentication and potentially has
direct access to the metadata
+database. For multi-team deployments, Deployment Managers must run separate
Triggerer instances per team
+as a deployment-level measure, but even then each instance potentially retains
direct database access
+and a Dag author
+whose trigger code runs there can potentially access the database directly —
including data belonging
+to other teams. Deployment Manager must trust that Dag authors will not abuse
this capability.
Dag files not needed for Scheduler and API Server
.................................................
@@ -282,6 +312,292 @@ Access to all Dags
All Dag authors have access to all Dags in the Airflow deployment. This means
that they can view, modify,
and update any Dag without restrictions at any time.
+.. _jwt-authentication-and-workload-isolation:
+
+JWT authentication and workload isolation
+-----------------------------------------
+
+Airflow uses JWT (JSON Web Token) authentication for both its public REST API
and its internal
+Execution API. For a detailed description of the JWT authentication flows,
token structure, and
+configuration, see :doc:`/security/jwt_token_authentication`. For the current
state of workload
+isolation protections and their limitations, see :ref:`workload-isolation`.
+
+Current isolation limitations
+.............................
+
+While Airflow 3 significantly improved the security model by preventing worker
task code from
+directly accessing the metadata database (workers now communicate exclusively
through the
+Execution API), **perfect isolation between Dag authors is not yet achieved**.
Dag author code
+potentially still executes with direct database access in the Dag File
Processor and Triggerer.
+
+**Software guards vs. intentional access**
+ Airflow implements software-level guards that prevent **accidental and
unintentional** direct database
+ access from Dag author code. The Dag File Processor removes the database
session and connection
+ information before forking child processes that parse Dag files, and worker
tasks use the Execution
+ API exclusively.
+
+ However, these software guards **do not protect against intentional,
malicious access**. The child
+ processes that parse Dag files and execute trigger code run as the **same
Unix user** as their parent
+ processes (the Dag File Processor manager and the Triggerer respectively).
Because of how POSIX
+ process isolation works, a child process running as the same user can
retrieve the parent's
+ credentials through several mechanisms:
+
+ * **Environment variables**: By default, on Linux, any process can read
``/proc/<PID>/environ`` of another
+ process running as the same user — so database credentials passed via
environment variables
+ (e.g., ``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN``) can be read from the
parent process. This can be
+    prevented by clearing the dumpable property of the process, which Airflow
+    implements in the task supervisor.
+ * **Configuration files**: If configuration is stored in files, those files
must be readable by the
+ parent process and are therefore also readable by the child process
running as the same user.
+ * **Command-based secrets** (``_CMD`` suffix options): The child process
can execute the same
+ commands to retrieve secrets.
+ * **Secrets manager access**: If the parent uses a secrets backend, the
child can access the same
+ secrets manager using credentials available in the process environment or
filesystem.
+
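The first mechanism above can be demonstrated with a few lines of Python on Linux (illustrative only; it reads the startup environment of a same-UID, dumpable process from procfs):

```python
def read_environ(pid):
    """Read the startup environment of a same-UID, dumpable process on Linux."""
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    # entries are NUL-separated KEY=VALUE pairs
    entries = (item.partition(b"=") for item in raw.split(b"\0") if item)
    return {key.decode(): value.decode() for key, _, value in entries}
```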
+ This means that a deliberately malicious Dag author can retrieve database
credentials and gain
+ **full read/write access to the metadata database** — including the ability
to modify any Dag,
+ task instance, connection, or variable. The software guards address
accidental access (e.g., a Dag
+ author importing ``airflow.settings.Session`` out of habit from Airflow 2)
but do not prevent a
+ determined actor from circumventing them.
+
+  On workers, the isolation can be stronger: when the Deployment Manager configures worker
+  processes to receive no database credentials at all (neither via environment variables nor
+  configuration files), workers communicate exclusively through the Execution API using
+  short-lived JWT tokens, and a task running on a worker genuinely cannot access the
+  metadata database directly.
+
+**Dag File Processor and Triggerer run user code with only soft protection against bypassing JWT authentication**
+  The Dag File Processor and Triggerer processes that run user code
+  use an in-process transport to access the Execution API, which bypasses JWT
+  authentication.
+ Since these components execute user-submitted code (Dag files and trigger
code respectively),
+ a Dag author whose code runs in these components
+ has unrestricted access to all Execution API operations if they bypass the
soft protections
+ — including the ability to read any connection, variable, or XCom — without
needing a valid JWT token.
+
+ Furthermore, the Dag File Processor has direct access to the metadata
database (it needs this to
+ store serialized Dags). As described above, Dag author code executing in
the Dag File Processor
+ context could potentially retrieve the database credentials from the parent
process and access
+ the database directly, including the JWT signing key configuration if it is
available in the
+ process environment. If a Dag author obtains the JWT signing key, they
could forge arbitrary tokens.
+
+**Dag File Processor and Triggerer are shared across teams**
+ In the default deployment, a **single Dag File Processor instance** parses
all Dag files and a
+ **single Triggerer instance** handles all triggers — regardless of team
assignment. There is no
+ built-in support for running per-team Dag File Processor or Triggerer
instances. This means that
+ Dag author code from different teams executes within the same process,
potentially sharing the
+ in-process Execution API and direct database access.
+
+ For multi-team deployments that require separation, Deployment Managers
must run **separate
+ Dag File Processor and Triggerer instances per team** as a deployment-level
measure (for example,
+ by configuring each instance to only process bundles belonging to a
specific team). However, even
+ with separate instances, each Dag File Processor and Triggerer potentially
retains direct access
+ to the metadata database — a Dag author whose code runs in these components
can potentially
+ retrieve credentials from the parent process and access the database
directly, including reading
+ or modifying data belonging to other teams, unless the Deployment Manager
implements Unix
+ user-level isolation (see
:ref:`deployment-hardening-for-improved-isolation`).
+
+**No cross-workload isolation in the Execution API**
+ All worker workloads authenticate to the same Execution API with tokens
signed by the same key and
+ sharing the same audience. While the ``ti:self`` scope enforcement prevents
a worker from accessing
+ another task's specific endpoints (heartbeat, state transitions), shared
resources such as connections,
+ variables, and XComs are accessible to all tasks. There is no isolation
between tasks belonging to
+ different teams or Dag authors at the Execution API level.
+
+**Token signing key might be a shared secret**
+ In symmetric key mode (``[api_auth] jwt_secret``), the same secret key is
used to both generate and
+ validate tokens. Any component that has access to this secret can forge
tokens with arbitrary claims,
+  including tokens for other task instances or with elevated scopes. This does not,
+  however, impact the security of the system if the secret is only made available to the
+  API Server and Scheduler via deployment configuration.
+
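To illustrate the risk: anyone holding the symmetric secret can mint a token with arbitrary claims using only standard-library code (hand-rolled HS256 for brevity here; Airflow's symmetric default is HS512):

```python
import base64, hashlib, hmac, json

def forge_token(secret, claims):
    """Mint a signed JWT from nothing but the shared secret."""
    b64 = lambda b: base64.urlsafe_b64encode(b).rstrip(b"=").decode()
    head = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64(json.dumps(claims).encode())
    sig = b64(hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest())
    return f"{head}.{body}.{sig}"
```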
+**Sensitive configuration values can be leaked through logs**
+ Dag authors can write code that prints environment variables or
configuration values to task logs
+ (e.g., ``print(os.environ)``). Airflow masks known sensitive values in
logs, but masking depends on
+ recognizing the value patterns. Dag authors who intentionally or
accidentally log raw environment
+ variables may expose database credentials, JWT signing keys, Fernet keys,
or other secrets in task
+ logs. Deployment Managers should restrict access to task logs and ensure
that sensitive configuration
+ is only provided to components where it is needed (see the sensitive
variables tables below).
+
+.. _deployment-hardening-for-improved-isolation:
+
+Deployment hardening for improved isolation
+...........................................
+
+Deployment Managers who require stronger isolation between Dag authors and
teams can take the following
+measures. Note that these are deployment-specific actions that go beyond
Airflow's built-in security
+model — Airflow does not enforce these natively.
+
+**Mandatory code review of Dag files**
+ Implement a review process for all Dag submissions to Dag bundles. This can
include:
+
+ * Requiring pull request reviews before Dag files are deployed.
+ * Static analysis of Dag code to detect suspicious patterns (e.g., direct
database access attempts,
+ reading environment variables, importing configuration modules).
+ * Automated linting rules that flag potentially dangerous code.
+
+**Restrict sensitive configuration to components that need them**
+ Do not share all configuration parameters across all components. In
particular:
+
+ * The JWT signing key (``[api_auth] jwt_secret`` or ``[api_auth]
jwt_private_key_path``) should only
+ be available to components that need to generate tokens
(Scheduler/Executor, API Server) and
+ components that need to validate tokens (API Server). Workers should not
have access to the signing
+ key — they only need the tokens provided to them.
+ * Connection credentials for external systems (via Secrets Managers) should
only be available to the API Server
+ (which serves them to workers via the Execution API), not to the
Scheduler, Dag File Processor,
+ or Triggerer processes directly. This however limits some of the features
of Airflow - such as Deadline
+ Alerts or triggers that need to authenticate with the external systems.
+ * Database connection strings should only be available to components that
need direct database access
+ (API Server, Scheduler, Dag File Processor, Triggerer), not to workers.
+
+**Pass configuration via environment variables**
+ For higher security, pass sensitive configuration values via environment
variables rather than
+ configuration files. Environment variables are inherently safer than
configuration files in
+ Airflow's worker processes because of a built-in protection: on Linux, the
supervisor process
+ calls ``prctl(PR_SET_DUMPABLE, 0)`` before forking the task process, and
this flag is inherited
+ by the forked child. This marks both processes as non-dumpable, which
prevents same-UID sibling
+ processes from reading ``/proc/<pid>/environ``, ``/proc/<pid>/mem``, or
attaching via
+ ``ptrace``. In contrast, configuration files on disk are readable by any
process running as
+ the same Unix user. Environment variables can also be scoped to individual
processes or
+ containers, making it easier to restrict which components have access to
which secrets.
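The protection described above can be reproduced in a few lines via ``ctypes`` on Linux (constants from ``<sys/prctl.h>``; an illustrative sketch, not Airflow's supervisor code):

```python
import ctypes

PR_SET_DUMPABLE = 4   # from <sys/prctl.h>
PR_GET_DUMPABLE = 3

libc = ctypes.CDLL(None, use_errno=True)

def make_non_dumpable():
    """After this call, same-UID processes can no longer read this process's
    /proc/<pid>/environ or /proc/<pid>/mem, or attach via ptrace."""
    if libc.prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_DUMPABLE) failed")
```

The flag is inherited by children created with ``fork()``, which is why setting it in the supervisor protects the forked task process as well.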
+
+ The following tables list all security-sensitive configuration variables
(marked ``sensitive: true``
+ in Airflow's configuration). Deployment Managers should review each
variable and ensure it is only
+ provided to the components that need it. The "Needed by" column indicates
which components
+ typically require the variable — but actual needs depend on the specific
deployment topology and
+ features in use.
+
+ .. START AUTOGENERATED CORE SENSITIVE VARS
+
+ **Core Airflow sensitive configuration variables:**
+
+ .. list-table::
+ :header-rows: 1
+ :widths: 40 30 30
+
+ * - Environment variable
+ - Description
+ - Needed by
+ * - ``AIRFLOW__API_AUTH__JWT_SECRET``
+ - JWT signing key (symmetric mode)
+ - API Server, Scheduler
+ * - ``AIRFLOW__API__SECRET_KEY``
+ - API secret key for log token signing
+ - API Server, Scheduler, Workers, Triggerer
+ * - ``AIRFLOW__CORE__ASSET_MANAGER_KWARGS``
+ - Asset manager credentials
+ - Dag File Processor
+ * - ``AIRFLOW__CORE__FERNET_KEY``
+ - Fernet encryption key for connections/variables at rest
+ - API Server, Scheduler, Workers, Dag File Processor, Triggerer
+ * - ``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN``
+ - Metadata database connection string
+ - API Server, Scheduler, Dag File Processor, Triggerer
+ * - ``AIRFLOW__DATABASE__SQL_ALCHEMY_CONN_ASYNC``
+ - Async metadata database connection string
+ - API Server, Scheduler, Dag File Processor, Triggerer
+ * - ``AIRFLOW__DATABASE__SQL_ALCHEMY_ENGINE_ARGS``
+ - SQLAlchemy engine parameters (may contain credentials)
+ - API Server, Scheduler, Dag File Processor, Triggerer
+ * - ``AIRFLOW__LOGGING__REMOTE_TASK_HANDLER_KWARGS``
+ - Remote logging handler credentials
+ - Scheduler, Workers, Triggerer
+ * - ``AIRFLOW__SECRETS__BACKEND_KWARGS``
+ - Secrets backend credentials (non-worker mode)
+ - Scheduler, Dag File Processor, Triggerer
+ * - ``AIRFLOW__SENTRY__SENTRY_DSN``
+ - Sentry error reporting endpoint
+ - Scheduler, Triggerer
+ * - ``AIRFLOW__WORKERS__SECRETS_BACKEND_KWARGS``
+ - Worker-specific secrets backend credentials
+ - Workers
+
+ .. END AUTOGENERATED CORE SENSITIVE VARS
+
+ Note that ``AIRFLOW__API_AUTH__JWT_PRIVATE_KEY_PATH`` (path to the JWT
private key for asymmetric
+ signing) is not marked as ``sensitive`` in config.yml because it is a file
path, not a secret
+ value itself. However, access to the file it points to should be restricted
to the Scheduler
+ (which generates tokens) and the API Server (which validates them).
+
+ .. START AUTOGENERATED PROVIDER SENSITIVE VARS
+
+ **Provider-specific sensitive configuration variables:**
+
+ The following variables are defined by Airflow providers and should only be
set on components where
+ the corresponding provider functionality is needed. The decision of which
components require these
+ variables depends on the Deployment Manager's choices about which providers
and features are
+ enabled in each component.
+
+ .. list-table::
+ :header-rows: 1
+ :widths: 40 30 30
+
+ * - Environment variable
+ - Provider
+ - Description
+ * - ``AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__SENTINEL_KWARGS``
+ - celery
+ - Sentinel kwargs
+ * - ``AIRFLOW__CELERY_RESULT_BACKEND_TRANSPORT_OPTIONS__SENTINEL_KWARGS``
+ - celery
+ - Sentinel kwargs
+ * - ``AIRFLOW__CELERY__BROKER_URL``
+ - celery
+ - Broker url
+ * - ``AIRFLOW__CELERY__FLOWER_BASIC_AUTH``
+ - celery
+ - Flower basic auth
+ * - ``AIRFLOW__CELERY__RESULT_BACKEND``
+ - celery
+ - Result backend
+ * - ``AIRFLOW__KEYCLOAK_AUTH_MANAGER__CLIENT_SECRET``
+ - keycloak
+ - Client secret
+ * - ``AIRFLOW__OPENSEARCH__PASSWORD``
+ - opensearch
+ - Password
+ * - ``AIRFLOW__OPENSEARCH__USERNAME``
+ - opensearch
+ - Username
+
+ .. END AUTOGENERATED PROVIDER SENSITIVE VARS
+
+ Deployment Managers should review the full configuration reference and
identify any additional
+ parameters that contain credentials or secrets relevant to their specific
deployment.
+
+**Use asymmetric keys for JWT signing**
+ Using asymmetric keys (``[api_auth] jwt_private_key_path`` with a JWKS
endpoint) provides better
+ security than symmetric keys because:
+
+ * The private key (used for signing) can be restricted to the
Scheduler/Executor.
+ * The API Server only needs the public key (via JWKS) for validation.
+ * Workers cannot forge tokens even if they could access the JWKS endpoint,
since they would
+ not have the private key.
+
+**Network-level isolation**
+ Use network policies, VPCs, or similar mechanisms to restrict which
components can communicate
+ with each other. For example, workers should only be able to reach the
Execution API endpoint,
+ not the metadata database or internal services directly. The Dag File
Processor and Triggerer
+ child processes should ideally not have network access to the metadata
database either, if
+ Unix user-level isolation is implemented.
+
+**Other measures and future improvements**
+ Deployment Managers may need to implement additional measures depending on
their security
+ requirements. These may include monitoring and auditing of Execution API
access patterns,
+ runtime sandboxing of Dag code, or dedicated infrastructure per team.
+
+ Future versions of Airflow plan to address these limitations through two
approaches:
+
+ * **Strategic (longer-term)**: Move the Dag File Processor and Triggerer to
communicate with
+ the metadata database exclusively through the API server (similar to how
workers use the
+ Execution API today). This would eliminate the need for these components
to have database
+ credentials at all, providing security by design rather than relying on
deployment-level
+ measures.
+ * **Tactical (shorter-term)**: Native support for Unix user impersonation
in the Dag File
+    Processor and Triggerer child processes, so that Dag author code runs as a
+    different, low-privilege user that cannot access the parent's credentials or the
+    database.
+
+ The Airflow community is actively working on these improvements.
+
+
Custom RBAC limitations
-----------------------
@@ -309,6 +625,8 @@ you trust them not to abuse the capabilities they have. You
should also make sur
properly configured the Airflow installation to prevent Dag authors from
executing arbitrary code
in the Scheduler and API Server processes.
+.. _deploying-and-protecting-airflow-installation:
+
Deploying and protecting Airflow installation
.............................................
@@ -354,13 +672,150 @@ Examples of fine-grained access control include (but are
not limited to):
* Access restrictions to views or Dags: Controlling user access to certain
views or specific Dags,
ensuring that users can only view or interact with authorized components.
-Future: multi-tenancy isolation
-...............................
+Future: multi-team isolation
+............................
These examples showcase ways in which Deployment Managers can refine and limit
user privileges within Airflow,
providing tighter control and ensuring that users have access only to the
necessary components and
functionalities based on their roles and responsibilities. However,
fine-grained access control does not
-provide full isolation and separation of access to allow isolation of
different user groups in a
-multi-tenant fashion yet. In future versions of Airflow, some fine-grained
access control features could
-become part of the Airflow security model, as the Airflow community is working
on a multi-tenant model
-currently.
+yet provide full isolation and separation of access between different groups
of users.
+
+The experimental multi-team feature (``[core] multi_team``) is a step towards
cross-team isolation, but it
+currently only enforces team-based isolation at the UI and REST API level.
**Task-level isolation is not yet
+guaranteed** — workloads from different teams share the same Execution API,
JWT signing keys, and access to
+connections, variables, and XComs. In deployments where additional hardening
measures (described in
+:ref:`deployment-hardening-for-improved-isolation`) are not implemented, a
task belonging to one team can
+potentially access shared resources available to tasks from other teams.
Deployment Managers who enable the
+multi-team feature should not rely on it alone for security-critical isolation between teams at the
+task execution layer. Configuring the feature so that it actually guarantees separation between teams
+requires a deep understanding of configuration and deployment security.
+
+Future versions of Airflow will improve task-level isolation, including
team-scoped Execution API enforcement,
+finer-grained JWT token scopes, and better sandboxing of user-submitted code.
The Airflow community is
+actively working on these improvements.
+
+
+What is NOT considered a security vulnerability
+-----------------------------------------------
+
+The following scenarios are **not** considered security vulnerabilities in
Airflow. They are either
+intentional design choices, consequences of the trust model described above,
or issues that fall
+outside Airflow's threat model. Security researchers (and AI agents performing
security analysis)
+should review this section before reporting issues to the Airflow security
team.
+
+For full details on reporting policies, see
+`Airflow's Security Policy
<https://github.com/apache/airflow/security/policy>`_.
+
+Dag authors executing arbitrary code
+.....................................
+
+Dag authors can execute arbitrary code on workers, the Dag File Processor, and
the Triggerer. This
+includes accessing credentials, environment variables, and (in the case of the
Dag File Processor
+and Triggerer) potentially the metadata database directly. This is the
intended behavior as described in
+:ref:`capabilities-of-dag-authors` — Dag authors are trusted users. Reports
that a Dag author can
+"achieve RCE" or "access the database" by writing Dag code are restating a
documented capability,
+not discovering a vulnerability.
+
+Dag author code passing unsanitized input to operators and hooks
+................................................................
+
+When a Dag author writes code that passes unsanitized UI user input (such as
Dag run parameters,
+variables, or connection configuration values) to operators, hooks, or
third-party libraries, the
+responsibility lies with the Dag author. Airflow's hooks and operators are
low-level interfaces —
+Dag authors are Python programmers who must sanitize inputs before passing
them to these interfaces.
+
+SQL injection or command injection is only considered a vulnerability if it
can be triggered by a
+**non-Dag-author** user role (e.g., an authenticated UI user) **without** the
Dag author deliberately
+writing code that passes that input unsafely. If the only way to exploit the
injection requires writing
+or modifying a Dag file, it is not a vulnerability — the Dag author already
has the ability to execute
+arbitrary code. See also :doc:`/security/sql`.
+
+An exception exists when official Airflow documentation explicitly recommends
a pattern that leads to
+injection — in that case, the documentation guidance itself is the issue and
may warrant an advisory.
+
+Dag File Processor and Triggerer potentially having database access
+...................................................................
+
+The Dag File Processor potentially has direct database access to store
serialized Dags. The Triggerer
+potentially has direct database access to manage trigger state. Both
components execute user-submitted
+code (Dag files and trigger code respectively) and potentially bypass JWT
authentication via an
+in-process Execution API transport. These are intentional architectural
choices, not vulnerabilities.
+They are documented in :ref:`jwt-authentication-and-workload-isolation`.
+
+Workers accessing shared Execution API resources
+.................................................
+
+Worker tasks can access connections, variables, and XComs via the Execution
API using their JWT token.
+While the ``ti:self`` scope prevents cross-task state manipulation, shared
resources are accessible to
+all tasks. This is the current design — not a vulnerability. Reports that "a
task can read another
+team's connection" are describing a known limitation of the current isolation
model, documented in
+:ref:`jwt-authentication-and-workload-isolation`.
+
+Execution API tokens not being revocable
+........................................
+
+Execution API tokens issued to workers are short-lived (default 10 minutes)
with automatic refresh
+and are intentionally not subject to revocation. This is a design choice
documented in
+:doc:`/security/jwt_token_authentication`, not a missing security control.
+
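As an illustration of why expiry substitutes for revocation, here is a minimal stdlib-only HS256 token sketch (the 600-second TTL mirrors the 10-minute default described above, but the claim names are invented for the example and this is not Airflow's actual token code):

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(secret: str, ttl_seconds: int = 600) -> str:
    """Sign a short-lived HS256 token; 600 seconds mirrors the 10-minute default."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    # Illustrative claims only; real Airflow tokens carry their own claim set
    claims = _b64url(
        json.dumps({"aud": "execution-api", "iat": now, "exp": now + ttl_seconds}).encode()
    )
    signing_input = f"{header}.{claims}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"


def is_expired(token: str) -> bool:
    """Validity is purely time-based: check ``exp``; no revocation list is consulted."""
    seg = token.split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
    return claims["exp"] <= int(time.time())
```

A compromised token therefore stays usable only until its ``exp`` passes; the mitigations are the short TTL and rotating the signing key, not per-token revocation.
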
+Connection configuration capabilities
+......................................
+
+Users with the **Connection configuration** role can configure connections
with arbitrary credentials
+and connection parameters. When the ``test connection`` feature is enabled,
these users can potentially
+trigger RCE, arbitrary file reads, or Denial of Service through connection
parameters. This is by
+design — connection configuration users are highly privileged and must be
trusted not to abuse these
+capabilities. The ``test connection`` feature is disabled by default since
Airflow 2.7.0, and enabling
+it is an explicit Deployment Manager decision that acknowledges these risks.
See
+:ref:`connection-configuration-users` for details.
+
+Denial of Service by authenticated users
+........................................
+
+Airflow is not designed to be exposed to untrusted users on the public
internet. All users who can
+access the Airflow UI and API are authenticated and known. Denial of Service
scenarios triggered by
+authenticated users (such as creating very large Dag runs, submitting
expensive queries, or flooding
+the API) are not considered security vulnerabilities. They are operational
concerns that Deployment
+Managers should address through rate limiting, resource quotas, and monitoring
— standard measures
+for any internal application. See
:ref:`deploying-and-protecting-airflow-installation`.
+
+Self-XSS by authenticated users
+................................
+
+Cross-site scripting (XSS) scenarios where the only victim is the user who
injected the payload
+(self-XSS) are not considered security vulnerabilities. Airflow's users are
authenticated and
+known, and self-XSS does not allow an attacker to compromise other users. If
you discover an XSS
+scenario where a lower-privileged user can inject a payload that executes in a
higher-privileged
+user's session without that user's action, that is a valid vulnerability and
should be reported.
+
+Simple Auth Manager
+...................
+
+The Simple Auth Manager is intended for development and testing only. This is
clearly documented and
+a prominent warning banner is displayed on the login page. Security issues
specific to the Simple
+Auth Manager (such as weak password handling, lack of rate limiting, or
missing CSRF protections) are
+not considered production security vulnerabilities. Production deployments
must use a production-grade
+auth manager.
+
+Third-party dependency vulnerabilities in Docker images
+.......................................................
+
+Airflow's reference Docker images are built with the latest available
dependencies at release time.
+Vulnerabilities found by scanning these images against CVE databases are
expected to appear over time
+as new CVEs are published. These should **not** be reported to the Airflow
security team. Instead,
+users should build their own images with updated dependencies as described in
the
+`Docker image documentation
<https://airflow.apache.org/docs/docker-stack/index.html>`_.
+
+If you discover that a third-party dependency vulnerability is **actually
exploitable** in Airflow
+(with a proof-of-concept demonstrating the exploitation in Airflow's context),
that is a valid
+report and should be submitted following the security policy.
+
+Automated scanning results without human verification
+.....................................................
+
+Automated security scanner reports that list findings without human
verification against Airflow's
+security model are not considered valid vulnerability reports. Airflow's trust
model differs
+significantly from typical web applications — many scanner findings (such as
"admin user can execute
+code" or "database credentials accessible in configuration") are expected
behavior. Reports must
+include a proof-of-concept that demonstrates how the finding violates the
security model described
+in this document, including identifying the specific user role involved and
the attack scenario.
diff --git a/airflow-core/docs/security/workload.rst
b/airflow-core/docs/security/workload.rst
index 31714aa21fb..0496cddc7f5 100644
--- a/airflow-core/docs/security/workload.rst
+++ b/airflow-core/docs/security/workload.rst
@@ -50,3 +50,86 @@ not set.
[core]
default_impersonation = airflow
+
+.. _workload-isolation:
+
+Workload Isolation and Current Limitations
+------------------------------------------
+
+This section describes the current state of workload isolation in Apache
Airflow,
+including the protections that are in place, the known limitations, and
planned improvements.
+
+For the full security model and deployment hardening guidance, see
:doc:`/security/security_model`.
+For details on the JWT authentication flows used by workers and internal
components, see
+:doc:`/security/jwt_token_authentication`.
+
+Worker process memory protection (Linux)
+''''''''''''''''''''''''''''''''''''''''
+
+On Linux, the supervisor process calls ``prctl(PR_SET_DUMPABLE, 0)`` at the
start of
+``supervise()`` before forking the task process. This flag is inherited by the
forked
+child. Marking processes as non-dumpable prevents same-UID sibling processes
from reading
+``/proc/<pid>/mem``, ``/proc/<pid>/environ``, or ``/proc/<pid>/maps``, and
blocks
+``ptrace(PTRACE_ATTACH)``. This is critical because each supervisor holds a
distinct JWT
+token in memory — without this protection, a malicious task process running as
the same
+Unix user could steal tokens from sibling supervisor processes.
+
+This protection is one of the reasons that passing sensitive configuration via
environment
+variables is safer than via configuration files: environment variables are
only readable
+by the process itself (and root), whereas configuration files on disk are
readable by any
+process with filesystem access running as the same user.
+
+.. note::
+
+ This protection is Linux-specific. On non-Linux platforms, the
+ ``_make_process_nondumpable()`` call is a no-op. Deployment Managers
running Airflow
+ on non-Linux platforms should implement alternative isolation measures.
+
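The flag can be set from Python with a ``ctypes`` call along these lines (a simplified sketch; Airflow's actual ``_make_process_nondumpable()`` helper is internal and its implementation may differ):

```python
import ctypes
import sys

# Option constants from <sys/prctl.h>
PR_GET_DUMPABLE = 3
PR_SET_DUMPABLE = 4


def make_process_nondumpable() -> bool:
    """Mark the current process non-dumpable on Linux; returns False elsewhere.

    A non-dumpable process cannot be ptrace-attached, and its /proc/<pid>/mem
    and /proc/<pid>/environ are unreadable even by same-UID sibling processes.
    Children created with fork() inherit the flag.
    """
    if sys.platform != "linux":
        return False
    libc = ctypes.CDLL(None, use_errno=True)  # resolve prctl from the loaded libc
    return libc.prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) == 0


def is_dumpable() -> bool:
    if sys.platform != "linux":
        return True
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0) == 1
```

Calling this before forking a task process means the supervisor's in-memory JWT token cannot be read back out of ``/proc`` by a sibling process running as the same Unix user.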
+No cross-workload isolation
+'''''''''''''''''''''''''''
+
+All worker workloads authenticate to the same Execution API with tokens that
share the
+same signing key, audience, and issuer. While the ``ti:self`` scope
enforcement prevents
+a worker from accessing *another task instance's* specific endpoints (e.g.,
heartbeat,
+state transitions), the token grants access to shared resources such as
connections,
+variables, and XComs that are not scoped to individual tasks.
+
+No team-level isolation in Execution API (experimental multi-team feature)
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+The experimental multi-team feature (``[core] multi_team``) provides UI-level
and REST
+API-level RBAC isolation between teams, but **does not yet guarantee
task-level isolation**.
+At the Execution API level, there is no enforcement of team-based access
boundaries.
+A task from one team can access the same connections, variables, and XComs as
a task from
+another team. All workloads share the same JWT signing keys and audience
regardless of team
+assignment.
+
+In deployments where additional hardening measures are not implemented at the
deployment
+level, a task from one team can potentially access resources belonging to
another team
+(see :doc:`/security/security_model`). Configuring the feature so that it actually guarantees
+separation between teams requires a deep understanding of configuration and deployment security.
+Task-level team isolation will be improved in future versions of Airflow.
+
+Dag File Processor and Triggerer potentially bypass JWT and access the database
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+As described in :doc:`/security/jwt_token_authentication`, the default
deployment runs a
+single Dag File Processor and a single Triggerer for all teams. Both
potentially bypass
+JWT authentication via in-process transport. For multi-team isolation,
Deployment Managers
+must run separate instances per team, but even then, each instance potentially
retains
+direct database access. A Dag author whose code runs in these components can
potentially
+access the database directly — including data belonging to other teams or the
JWT signing
+key configuration — unless the Deployment Manager restricts the database
credentials and
+configuration available to each instance.
+
+Planned improvements
+''''''''''''''''''''
+
+Future versions of Airflow will address these limitations with:
+
+- Finer-grained token scopes tied to specific resources (connections,
variables) and teams.
+- Enforcement of team-based isolation in the Execution API.
+- Built-in support for per-team Dag File Processor and Triggerer instances.
+- Improved sandboxing of user-submitted code in the Dag File Processor and
Triggerer.
+- Full task-level isolation for the multi-team feature.
diff --git a/airflow-core/src/airflow/config_templates/config.yml
b/airflow-core/src/airflow/config_templates/config.yml
index 2f1c63a21c1..4b44ce6c181 100644
--- a/airflow-core/src/airflow/config_templates/config.yml
+++ b/airflow-core/src/airflow/config_templates/config.yml
@@ -1987,8 +1987,14 @@ api_auth:
description: |
Secret key used to encode and decode JWTs to authenticate to public and private APIs.
- It should be as random as possible. However, when running more than 1 instances of API services,
- make sure all of them use the same ``jwt_secret`` otherwise calls will fail on authentication.
+ It should be as random as possible. This key must be consistent across all components that
+ generate or validate JWT tokens (Scheduler, API Server). For improved security, consider
+ using asymmetric keys (``jwt_private_key_path``) instead, which allow you to restrict the
+ signing key to only the components that need to generate tokens.
+
+ For security-sensitive deployments, pass this value via environment variable
+ (``AIRFLOW__API_AUTH__JWT_SECRET``) rather than storing it in a configuration file, and
+ restrict it to only the components that need it.
Mutually exclusive with ``jwt_private_key_path``.
version_added: 3.0.0
diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt
index 4465ada3dfe..56130d0a850 100644
--- a/docs/spelling_wordlist.txt
+++ b/docs/spelling_wordlist.txt
@@ -512,6 +512,7 @@ dttm
dtypes
du
duckdb
+dumpable
dunder
dup
durable
@@ -1385,6 +1386,7 @@ salesforce
samesite
saml
sandboxed
+sandboxing
sanitization
sas
Sasl
@@ -1728,6 +1730,7 @@ unpause
unpaused
unpausing
unpredicted
+unsanitized
untestable
untransformed
untrusted
@@ -1832,6 +1835,7 @@ Xiaodong
xlarge
xml
xpath
+XSS
xyz
yaml
Yandex
diff --git a/scripts/ci/prek/check_security_doc_constants.py
b/scripts/ci/prek/check_security_doc_constants.py
new file mode 100755
index 00000000000..ef4f31fde9a
--- /dev/null
+++ b/scripts/ci/prek/check_security_doc_constants.py
@@ -0,0 +1,427 @@
+#!/usr/bin/env python
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# /// script
+# requires-python = ">=3.10,<3.11"
+# dependencies = [
+# "pyyaml>=6.0.3",
+# "rich>=13.6.0",
+# ]
+# ///
+"""
+Validate and auto-update security documentation against config.yml.
+
+Checks performed:
+ 1. Every ``[section] option`` reference in the security RST files corresponds to an
+ actual option in config.yml or provider.yaml.
+ 2. Default values quoted in the docs match the defaults in config.yml.
+ 3. Auto-updates the sensitive-variable tables in security_model.rst between
+ AUTOGENERATED markers to stay in sync with config.yml and provider.yaml.
+"""
+
+from __future__ import annotations
+
+import re
+import sys
+from pathlib import Path
+
+import yaml
+from rich.console import Console
+
+sys.path.insert(0, str(Path(__file__).parent.resolve()))
+
+from common_prek_utils import AIRFLOW_ROOT_PATH
+
+console = Console(color_system="standard", width=200)
+
+CONFIG_YML = AIRFLOW_ROOT_PATH / "airflow-core" / "src" / "airflow" / "config_templates" / "config.yml"
+PROVIDERS_ROOT = AIRFLOW_ROOT_PATH / "providers"
+SECURITY_MODEL_RST = AIRFLOW_ROOT_PATH / "airflow-core" / "docs" / "security" / "security_model.rst"
+
+SECURITY_DOCS = [
+ AIRFLOW_ROOT_PATH / "airflow-core" / "docs" / "security" / "jwt_token_authentication.rst",
+ SECURITY_MODEL_RST,
+]
+
+# Pattern to match ``[section] option_name`` references in RST
+SECTION_OPTION_RE = re.compile(r"``\[(\w+)\]\s+(\w+)``")
+
+# Pattern to match AIRFLOW__SECTION__OPTION env var references
+ENV_VAR_RE = re.compile(r"``(AIRFLOW__\w+)``")
+
+# Map section+option to the AIRFLOW__ env var form
+SECTION_OPT_TO_ENV = re.compile(r"AIRFLOW__([A-Z_]+)__([A-Z_]+)")
+
+# Markers for autogenerated sections
+CORE_START = " .. START AUTOGENERATED CORE SENSITIVE VARS"
+CORE_END = " .. END AUTOGENERATED CORE SENSITIVE VARS"
+PROVIDER_START = " .. START AUTOGENERATED PROVIDER SENSITIVE VARS"
+PROVIDER_END = " .. END AUTOGENERATED PROVIDER SENSITIVE VARS"
+
+# Which components need which core config sections/options.
+# Maps (section, option) -> list of component names.
+# This is the source of truth for the "Needed by" column.
+CORE_COMPONENT_MAP: dict[tuple[str, str], str] = {
+ ("api", "secret_key"): "API Server, Scheduler, Workers, Triggerer",
+ ("api_auth", "jwt_secret"): "API Server, Scheduler",
+ ("core", "asset_manager_kwargs"): "Dag File Processor",
+ ("core", "fernet_key"): "API Server, Scheduler, Workers, Dag File Processor, Triggerer",
+ ("database", "sql_alchemy_conn"): "API Server, Scheduler, Dag File Processor, Triggerer",
+ ("database", "sql_alchemy_conn_async"): "API Server, Scheduler, Dag File Processor, Triggerer",
+ ("database", "sql_alchemy_engine_args"): "API Server, Scheduler, Dag File Processor, Triggerer",
+ ("logging", "remote_task_handler_kwargs"): "Scheduler, Workers, Triggerer",
+ ("secrets", "backend_kwargs"): "Scheduler, Dag File Processor, Triggerer",
+ ("sentry", "sentry_dsn"): "Scheduler, Triggerer",
+ ("workers", "secrets_backend_kwargs"): "Workers",
+}
+
+# Human-readable descriptions for core sensitive vars
+CORE_DESCRIPTIONS: dict[tuple[str, str], str] = {
+ ("api", "secret_key"): "API secret key for log token signing",
+ ("api_auth", "jwt_secret"): "JWT signing key (symmetric mode)",
+ ("core", "asset_manager_kwargs"): "Asset manager credentials",
+ ("core", "fernet_key"): "Fernet encryption key for connections/variables at rest",
+ ("database", "sql_alchemy_conn"): "Metadata database connection string",
+ ("database", "sql_alchemy_conn_async"): "Async metadata database connection string",
+ ("database", "sql_alchemy_engine_args"): "SQLAlchemy engine parameters (may contain credentials)",
+ ("logging", "remote_task_handler_kwargs"): "Remote logging handler credentials",
+ ("secrets", "backend_kwargs"): "Secrets backend credentials (non-worker mode)",
+ ("sentry", "sentry_dsn"): "Sentry error reporting endpoint",
+ ("workers", "secrets_backend_kwargs"): "Worker-specific secrets backend credentials",
+}
+
+
+def option_to_env_var(section: str, option: str) -> str:
+ """Convert a config section+option to its AIRFLOW__ env var form."""
+ return f"AIRFLOW__{section.upper()}__{option.upper()}"
+
+
+def load_core_config() -> dict:
+ """Load the core config.yml."""
+ with open(CONFIG_YML) as f:
+ return yaml.safe_load(f)
+
+
+def load_provider_configs() -> dict[str, dict]:
+ """Load provider.yaml files. Returns {provider_name: config_sections}."""
+ result = {}
+ for provider_yaml in sorted(PROVIDERS_ROOT.glob("*/provider.yaml")):
+ with open(provider_yaml) as f:
+ data = yaml.safe_load(f)
+ if data and "config" in data:
+ provider_name = provider_yaml.parent.name
+ result[provider_name] = data["config"]
+ return result
+
+
+def get_all_options(core_config: dict, provider_configs: dict[str, dict]) -> dict[tuple[str, str], dict]:
+ """Return a dict of (section, option) -> option_config for all config options."""
+ result = {}
+ for section_name, section_data in core_config.items():
+ if not isinstance(section_data, dict) or "options" not in section_data:
+ continue
+ for option_name, option_config in section_data["options"].items():
+ if isinstance(option_config, dict):
+ result[(section_name, option_name)] = option_config
+
+ for _provider_name, sections in provider_configs.items():
+ for section_name, section_data in sections.items():
+ if not isinstance(section_data, dict) or "options" not in section_data:
+ continue
+ for option_name, option_config in section_data["options"].items():
+ if isinstance(option_config, dict):
+ result[(section_name, option_name)] = option_config
+
+ return result
+
+
+def get_core_sensitive_vars(core_config: dict) -> list[tuple[str, str]]:
+ """Return sorted list of (section, option) for core sensitive config options."""
+ result = []
+ for section_name, section_data in core_config.items():
+ if not isinstance(section_data, dict) or "options" not in section_data:
+ continue
+ for option_name, option_config in section_data["options"].items():
+ if isinstance(option_config, dict) and option_config.get("sensitive"):
+ result.append((section_name, option_name))
+ return sorted(result, key=lambda x: option_to_env_var(*x))
+
+
+def get_provider_sensitive_vars(
+ provider_configs: dict[str, dict],
+) -> list[tuple[str, str, str]]:
+ """Return sorted list of (provider, section, option) for provider sensitive config options."""
+ result = []
+ for provider_name, sections in provider_configs.items():
+ for section_name, section_data in sections.items():
+ if not isinstance(section_data, dict) or "options" not in section_data:
+ continue
+ for option_name, option_config in section_data["options"].items():
+ if isinstance(option_config, dict) and option_config.get("sensitive"):
+ result.append((provider_name, section_name, option_name))
+ return sorted(result, key=lambda x: option_to_env_var(x[1], x[2]))
+
+
+def generate_core_table(core_sensitive: list[tuple[str, str]]) -> list[str]:
+ """Generate RST list-table lines for core sensitive vars."""
+ lines = [
+ "",
+ " **Core Airflow sensitive configuration variables:**",
+ "",
+ " .. list-table::",
+ " :header-rows: 1",
+ " :widths: 40 30 30",
+ "",
+ " * - Environment variable",
+ " - Description",
+ " - Needed by",
+ ]
+ for section, option in core_sensitive:
+ env_var = option_to_env_var(section, option)
desc = CORE_DESCRIPTIONS.get((section, option), f"[{section}] {option}")
+ needed_by = CORE_COMPONENT_MAP.get((section, option), "Review per deployment")
+ lines.append(f" * - ``{env_var}``")
+ lines.append(f" - {desc}")
+ lines.append(f" - {needed_by}")
+
+ # Check for unmapped vars and warn
+ for section, option in core_sensitive:
+ if (section, option) not in CORE_COMPONENT_MAP:
+ console.print(
+ f" [yellow]⚠[/] New core sensitive var [{section}] {option} — "
+ f"add it to CORE_COMPONENT_MAP in check_security_doc_constants.py"
+ )
+ if (section, option) not in CORE_DESCRIPTIONS:
+ console.print(
+ f" [yellow]⚠[/] New core sensitive var [{section}] {option} — "
+ f"add a description to CORE_DESCRIPTIONS in check_security_doc_constants.py"
+ )
+
+ return lines
+
+
+def generate_provider_table(provider_sensitive: list[tuple[str, str, str]]) -> list[str]:
+ """Generate RST list-table lines for provider sensitive vars."""
+ lines = [
+ "",
+ " **Provider-specific sensitive configuration variables:**",
+ "",
+ " The following variables are defined by Airflow providers and should only be set on components where",
+ " the corresponding provider functionality is needed. The decision of which components require these",
+ " variables depends on the Deployment Manager's choices about which providers and features are",
+ " enabled in each component.",
+ "",
+ " .. list-table::",
+ " :header-rows: 1",
+ " :widths: 40 30 30",
+ "",
+ " * - Environment variable",
+ " - Provider",
+ " - Description",
+ ]
+ for provider, section, option in provider_sensitive:
+ env_var = option_to_env_var(section, option)
+ # Generate a reasonable description from the option name
+ desc = option.replace("_", " ").capitalize()
+ lines.append(f" * - ``{env_var}``")
+ lines.append(f" - {provider}")
+ lines.append(f" - {desc}")
+
+ return lines
+
+
+def update_autogenerated_section(
+ content: str, start_marker: str, end_marker: str, new_lines: list[str]
+) -> str:
+ """Replace content between markers with new content."""
+ lines = content.splitlines()
+ start_idx = None
+ end_idx = None
+
+ for i, line in enumerate(lines):
+ if start_marker in line:
+ start_idx = i
+ elif end_marker in line:
+ end_idx = i
+ break
+
+ if start_idx is None or end_idx is None:
+ console.print(f" [red]✗[/] Could not find markers {start_marker!r} / {end_marker!r}")
+ return content
+
+ result = lines[: start_idx + 1] + new_lines + [""] + lines[end_idx:]
+ return "\n".join(result) + "\n"
+
+
+def update_sensitive_var_tables(
+ core_sensitive: list[tuple[str, str]],
+ provider_sensitive: list[tuple[str, str, str]],
+) -> bool:
+ """Update the autogenerated tables in security_model.rst. Returns True if changed."""
+ content = SECURITY_MODEL_RST.read_text()
+ original = content
+
+ core_lines = generate_core_table(core_sensitive)
+ content = update_autogenerated_section(content, CORE_START, CORE_END, core_lines)
+
+ provider_lines = generate_provider_table(provider_sensitive)
+ content = update_autogenerated_section(content, PROVIDER_START, PROVIDER_END, provider_lines)
+
+ if content != original:
+ SECURITY_MODEL_RST.write_text(content)
+ return True
+ return False
+
+
+def check_option_references(doc_path: Path, all_options: dict[tuple[str, str], dict]) -> list[str]:
+ """Check that all [section] option references in the doc exist in config.yml."""
+ errors = []
+ content = doc_path.read_text()
+
+ for line_num, line in enumerate(content.splitlines(), 1):
+ for match in SECTION_OPTION_RE.finditer(line):
+ section = match.group(1)
+ option = match.group(2)
+ if (section, option) not in all_options:
+ section_exists = any(s == section for s, _ in all_options)
+ if section_exists:
+ errors.append(
+ f"{doc_path.name}:{line_num}: Option ``[{section}] {option}`` not found in config.yml"
+ )
+ else:
+ errors.append(
+ f"{doc_path.name}:{line_num}: Section ``[{section}]`` not found in config.yml"
+ )
+ return errors
+
+
+def check_env_var_references(doc_path: Path, all_options: dict[tuple[str, str], dict]) -> list[str]:
+ """Check that AIRFLOW__X__Y env var references correspond to real config options."""
+ errors = []
+ content = doc_path.read_text()
+
+ for line_num, line in enumerate(content.splitlines(), 1):
# Skip lines inside autogenerated sections — those are managed by the update logic
+ if "AUTOGENERATED" in line:
+ continue
+ for match in ENV_VAR_RE.finditer(line):
+ env_var = match.group(1)
+ m = SECTION_OPT_TO_ENV.match(env_var)
+ if not m:
+ continue
+ section = m.group(1).lower()
+ option = m.group(2).lower()
+ if (section, option) not in all_options:
+ section_exists = any(s == section for s, _ in all_options)
+ if section_exists:
+ errors.append(
+ f"{doc_path.name}:{line_num}: Env var ``{env_var}`` references "
+ f"option [{section}] {option} which is not in config.yml"
+ )
+ else:
+ errors.append(
+ f"{doc_path.name}:{line_num}: Env var ``{env_var}`` references "
+ f"section [{section}] which is not in config.yml"
+ )
+ return errors
+
+
+def check_defaults_in_tables(doc_path: Path, all_options: dict[tuple[str, str], dict]) -> list[str]:
+ """Check default values in RST table rows match config.yml."""
+ errors = []
+ content = doc_path.read_text()
+ lines = content.splitlines()
+
+ i = 0
+ while i < len(lines):
+ line = lines[i]
+ match = SECTION_OPTION_RE.search(line)
+ if match and "* -" in line:
+ section = match.group(1)
+ option = match.group(2)
+ j = i + 1
+ while j < len(lines) and not lines[j].strip():
+ j += 1
+ if j < len(lines) and lines[j].strip().startswith("-"):
+ value_line = lines[j].strip().lstrip("- ").strip()
+ key = (section, option)
+ if key in all_options:
+ config_default = str(all_options[key].get("default", "~"))
+ doc_value = value_line.split()[0] if value_line else ""
+ doc_value = doc_value.strip("`")
+ config_default_clean = config_default.strip('"').strip("'")
+ if (
+ doc_value
+ and config_default_clean
and config_default_clean not in ("~", "None", "none", "")
+ and doc_value != config_default_clean
+ and not doc_value.startswith("Auto")
+ and not doc_value.startswith("None")
+ and doc_value != "``GUESS``"
+ ):
+ errors.append(
f"{doc_path.name}:{j + 1}: Default for [{section}] {option} is "
+ f"'{doc_value}' in docs but '{config_default_clean}' in config.yml"
+ )
+ i += 1
+
+ return errors
+
+
+def main() -> int:
+ core_config = load_core_config()
+ provider_configs = load_provider_configs()
+ all_options = get_all_options(core_config, provider_configs)
+
+ # Step 1: Auto-update the sensitive var tables
+ core_sensitive = get_core_sensitive_vars(core_config)
+ provider_sensitive = get_provider_sensitive_vars(provider_configs)
+
+ if update_sensitive_var_tables(core_sensitive, provider_sensitive):
+ console.print(
" [yellow]⚠[/] security_model.rst sensitive variable tables were out of date and have been updated."
+ )
+ console.print(" [yellow] Please review and commit the changes.[/]")
+
+ # Step 2: Validate references (re-read after potential update)
+ all_errors: list[str] = []
+
+ for doc_path in SECURITY_DOCS:
+ if not doc_path.exists():
console.print(f" [yellow]⚠[/] {doc_path.name} not found, skipping")
+ continue
+ all_errors.extend(check_option_references(doc_path, all_options))
+ all_errors.extend(check_env_var_references(doc_path, all_options))
+ all_errors.extend(check_defaults_in_tables(doc_path, all_options))
+
+ if all_errors:
+ console.print()
+ for error in all_errors:
+ console.print(f" [red]✗[/] {error}")
+ console.print()
console.print(f"[red]Security doc constants check failed with {len(all_errors)} error(s).[/]")
+ console.print(
+ "[yellow]Fix the documentation to match config.yml, or update config.yml if the docs are correct.[/]"
+ )
+ return 1
+
+ console.print("[green]Security doc constants check passed.[/]")
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
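
For quick reference, the hook's ``SECTION_OPTION_RE`` pattern and env-var mapping behave as follows (a standalone sketch that duplicates the two definitions from the script so it runs outside the repo):

```python
import re

# Same pattern as SECTION_OPTION_RE in the hook: matches ``[section] option`` in RST
SECTION_OPTION_RE = re.compile(r"``\[(\w+)\]\s+(\w+)``")


def option_to_env_var(section: str, option: str) -> str:
    # Mirrors the hook's mapping of a config option to its AIRFLOW__ env var form
    return f"AIRFLOW__{section.upper()}__{option.upper()}"


line = "Set ``[api_auth] jwt_secret`` consistently across components."
matches = SECTION_OPTION_RE.findall(line)
env_vars = [option_to_env_var(s, o) for s, o in matches]
```

Here ``matches`` yields ``("api_auth", "jwt_secret")``, which maps to ``AIRFLOW__API_AUTH__JWT_SECRET``; that is the form the hook validates against config.yml.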