MatthewStrickland opened a new issue, #37042: URL: https://github.com/apache/airflow/issues/37042
### Description

Support grouping DAGs, where a DAG can be part of many groups, with a configuration similar to `max_active_runs` applied across the whole group. (I'm sure this must have been discussed somewhere before, but my searches turned up nothing on the topic; apologies if it has.)

### Use case/motivation

In a nutshell: the same motivations that justify the existing DAG-level `max_active_runs`, extended to a group of DAGs.

**Use case:** My DAGs each mutate a single table in my schema, though occasionally I need to duplicate a DAG with a static start/end date to fill in some missing data (e.g. backfilling jobs for earlier dates, or backfilling jobs for existing dates without overriding existing data). Each DAG by itself runs with `max_active_runs=1`. Each DAG creates a snapshot of the table (and rolls back if necessary when things go wrong). If a rollback of this snapshot were to trigger, it is pretty much guaranteed to have a negative impact on the data persisted by another DAG working on the same table.

**What would you like to happen?** Ideally, the ability to create a `DagGroup` object with `max_active_runs`. (I'm not sure whether it's better for the `DagGroup` to add DAGs, or for each DAG to specify which `DagGroup`(s) it belongs to.)

Alternatives:
- I can manually pause DAGs that might interfere with each other, but I would need to actively monitor them.
- Cross-DAG dependencies, though all DAGs would need knowledge of each other, which would be a nightmare to maintain and get right.

### Related issues

_No response_

### Are you willing to submit a PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
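To make the requested semantics concrete, here is a minimal, hedged sketch of what a group-level `max_active_runs` could mean. `DagGroup`, `try_start_run`, and `finish_run` are all hypothetical names invented for illustration; this is not an Airflow API, just plain Python modelling the rule that a run may start only if every group its DAG belongs to has a free slot.

```python
import threading


class DagGroup:
    """Hypothetical group sharing a single max_active_runs budget.

    NOTE: not an Airflow class; a sketch of the feature requested in
    the issue. A DAG may belong to several groups at once.
    """

    def __init__(self, name: str, max_active_runs: int):
        self.name = name
        self.max_active_runs = max_active_runs
        self.active = 0  # runs currently holding a slot in this group
        self.lock = threading.Lock()


def try_start_run(groups: list[DagGroup]) -> bool:
    """Atomically claim one slot in every group, or none at all."""
    # Acquire locks in a stable order so two concurrent starts that
    # share groups cannot deadlock each other.
    ordered = sorted(groups, key=lambda g: g.name)
    for g in ordered:
        g.lock.acquire()
    try:
        if any(g.active >= g.max_active_runs for g in ordered):
            return False  # some group is at capacity; claim nothing
        for g in ordered:
            g.active += 1
        return True
    finally:
        for g in ordered:
            g.lock.release()


def finish_run(groups: list[DagGroup]) -> None:
    """Release the slots claimed by a successful try_start_run."""
    for g in groups:
        with g.lock:
            g.active -= 1
```

With `max_active_runs=1` on a per-table group, a backfill copy of a DAG and its regular counterpart would both join the same group, so only one of them could hold an active run at a time, which is exactly the snapshot/rollback safety the use case above asks for.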
