uplsh580 opened a new pull request, #61039:
URL: https://github.com/apache/airflow/pull/61039
## Description
This PR adds a per-bundle DAG Processor deployment feature, allowing users to
create a separate Kubernetes Deployment for each DAG bundle defined in
`dagBundleConfigList`. This enables independent resource isolation, scaling,
and configuration per bundle.
Closes #61037
## Motivation
Currently, when multiple DAG bundles are configured, all bundles are
processed by a single DAG processor deployment. This limits the ability to:
- Scale bundles independently based on their workload
- Apply bundle-specific resource configurations
- Isolate bundle processing failures
- Use bundle-specific node selectors, affinities, or tolerations
## Changes
### Configuration (`values.yaml`)
Added new `dagProcessor.deployPerBundle` section:
```yaml
dagProcessor:
  deployPerBundle:
    enabled: false  # Enable per-bundle deployments
    args: ["bash", "-c", "exec airflow dag-processor --bundle-name {{ bundleName }}"]
    bundleOverrides: {}  # Per-bundle configuration overrides
```
### Features
1. **Per-bundle Deployments**: When `deployPerBundle.enabled` is `true`,
creates a separate Deployment for each bundle in `dagBundleConfigList`
2. **Bundle-specific Args**: Supports templated args with a `{{ bundleName }}`
placeholder that is replaced with the actual bundle name
3. **Bundle Overrides**: Allows per-bundle configuration overrides via
`bundleOverrides` map:
- Resources (CPU, memory)
- Replicas
- Node selectors, affinities, tolerations
- Environment variables
- Pod disruption budgets
- And other deployment settings
4. **Per-bundle PodDisruptionBudget**: Creates separate PDBs for each bundle
when enabled
5. **Backward Compatibility**: When `deployPerBundle.enabled` is `false`, the
existing single-deployment behavior is preserved
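The `{{ bundleName }}` substitution described in item 2 can be sketched in Python. The chart does this with Helm templating; the function name here is purely illustrative:

```python
def render_args(args: list[str], bundle_name: str) -> list[str]:
    """Replace the {{ bundleName }} placeholder in each arg with the bundle's name."""
    return [arg.replace("{{ bundleName }}", bundle_name) for arg in args]

# The default args from values.yaml, rendered for a bundle named "bundle1"
args = ["bash", "-c", "exec airflow dag-processor --bundle-name {{ bundleName }}"]
print(render_args(args, "bundle1"))
# → ['bash', '-c', 'exec airflow dag-processor --bundle-name bundle1']
```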
### Implementation Details
- Refactored deployment logic into a reusable helper template
`dag-processor.deployment`
- Refactored PDB logic into a reusable helper template
`dag-processor.poddisruptionbudget`
- Preserves `dagBundleConfigList` during merge operations to prevent
overwriting
- Supports per-bundle enable/disable via
`bundleOverrides[bundleName].enabled`
- Properly handles YAML document separators for multiple resources
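The merge behavior described above can be sketched as a deep merge that skips the shared `dagBundleConfigList` key. This is an illustrative model of the chart's logic, not the Helm template itself:

```python
import copy

def merge_bundle_values(base: dict, override: dict) -> dict:
    """Deep-merge per-bundle overrides onto the base dagProcessor values,
    keeping the shared dagBundleConfigList from the base untouched."""
    merged = copy.deepcopy(base)
    for key, value in override.items():
        if key == "dagBundleConfigList":
            continue  # the shared bundle list must never be overwritten per bundle
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_bundle_values(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {
    "replicas": 1,
    "resources": {"requests": {"cpu": "500m"}},
    "dagBundleConfigList": [{"name": "bundle1"}],
}
override = {"replicas": 3, "resources": {"requests": {"memory": "2Gi"}}}
print(merge_bundle_values(base, override))
```

Nested dicts (such as `resources.requests`) are merged key by key, so an override can add `memory` without dropping the base `cpu` request.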
### Files Changed
- `chart/templates/dag-processor/dag-processor-deployment.yaml`: Added
per-bundle deployment logic
- `chart/templates/dag-processor/dag-processor-poddisruptionbudget.yaml`:
Added per-bundle PDB logic
- `chart/values.yaml`: Added `deployPerBundle` configuration section
- `chart/values.schema.json`: Added schema validation for `deployPerBundle`
- `helm-tests/tests/helm_tests/airflow_core/test_dag_processor_per_bundle.py`: Added comprehensive test cases
## Usage Example
```yaml
dagProcessor:
  enabled: true
  dagBundleConfigList:
    - name: bundle1
      classpath: "airflow.providers.git.bundles.git.GitDagBundle"
      kwargs:
        git_conn_id: "GITHUB__SAMPLE"
        subdir: "dags"
    - name: bundle2
      classpath: "airflow.providers.git.bundles.git.GitDagBundle"
      kwargs:
        git_conn_id: "GITHUB__SAMPLE2"
        subdir: "dags"
  deployPerBundle:
    enabled: true
    args: ["dag-processor", "--bundle-name", "{{ bundleName }}"]
    bundleOverrides:
      bundle1:
        replicas: 3
        resources:
          requests:
            memory: "2Gi"
            cpu: "1000m"
          limits:
            memory: "4Gi"
            cpu: "2000m"
        nodeSelector:
          workload-type: production
      bundle2:
        replicas: 1
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
```
This will create:
- `{release-name}-dag-processor-bundle1` deployment with 3 replicas and
production resources
- `{release-name}-dag-processor-bundle2` deployment with 1 replica and
standard resources
- Separate PodDisruptionBudgets for each bundle (if enabled)
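The resulting resource names and the per-bundle enable/disable behavior can be sketched as follows. The naming convention matches the example above; the helper function itself is hypothetical:

```python
def bundle_deployment_names(release: str, bundles: list[dict], overrides: dict) -> list[str]:
    """Return the Deployment name for each bundle that is not explicitly
    disabled via bundleOverrides[bundleName].enabled."""
    names = []
    for bundle in bundles:
        name = bundle["name"]
        # A bundle is deployed unless its override sets enabled: false
        if overrides.get(name, {}).get("enabled", True):
            names.append(f"{release}-dag-processor-{name}")
    return names

bundles = [{"name": "bundle1"}, {"name": "bundle2"}]
print(bundle_deployment_names("my-release", bundles, {"bundle2": {"enabled": False}}))
# → ['my-release-dag-processor-bundle1']
```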
---
##### Was generative AI tooling used to co-author this PR?
- [x] Yes (please specify the tool below)
- cursor