kalluripradeep commented on issue #60443:
URL: https://github.com/apache/airflow/issues/60443#issuecomment-3831847477
Hi @DataCerealz and @potiuk! 👋
I'd like to work on this issue. I've analyzed the code and have a clear
implementation plan.
## Problem Analysis
Currently, `TriggerDagRunOperator` has two limitations:
1. **Forced `logical_date`**: when `logical_date` is not provided, it defaults to `timezone.utcnow()` (lines 203-205), so every triggered run carries a concrete `logical_date`
2. **No `run_after` support**: the operator doesn't expose the `run_after` parameter, which is what enables parallel DAG runs in Airflow 3
This prevents users from triggering multiple parallel runs of the same DAG with different configurations, because the unique constraint on `(dag_id, logical_date)` rejects the duplicates.
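For illustration, here's a minimal sketch of the pattern that fails today (the task IDs, target DAG, and conf values are hypothetical):
```python
# DAG-body fragment; DAG boilerplate omitted.
from airflow.providers.standard.operators.trigger_dagrun import TriggerDagRunOperator

for i, conf in enumerate([{"key": "value1"}, {"key": "value2"}]):
    TriggerDagRunOperator(
        task_id=f"trigger_{i}",   # hypothetical task IDs
        trigger_dag_id="my_dag",  # hypothetical target DAG
        conf=conf,
        # No way today to say "no logical_date, schedule via run_after":
        # each trigger forces a concrete logical_date, and identical
        # values violate the unique (dag_id, logical_date) constraint.
    )
```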
## Proposed Solution
Add `run_after` parameter support to `TriggerDagRunOperator`:
### 1. **New Parameter**
```python
run_after: datetime.datetime | None | ArgNotSet = NOTSET
```
### 2. **Logic Changes**
- When `run_after` is provided and `logical_date` is `NOTSET`, set `logical_date=None` (see the sketch after this list)
- Pass `run_after` to `DagRunTriggerException` for Airflow 3
- Pass `run_after` to `trigger_dag()` for Airflow 2 (if it supports the parameter)
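Roughly, the resolution inside `execute()` could look like this — a sketch only, not a verbatim patch, with names mirroring the existing operator:
```python
# Sketch: how run_after and logical_date could interact.
if not isinstance(self.run_after, ArgNotSet):
    run_after = self.run_after
    # Leave logical_date as None unless the user set it explicitly,
    # so parallel runs don't hit the (dag_id, logical_date) constraint.
    logical_date = None if isinstance(self.logical_date, ArgNotSet) else self.logical_date
else:
    run_after = None
    # Current behaviour preserved: default to "now" when neither
    # run_after nor logical_date was provided.
    logical_date = (
        timezone.utcnow()
        if isinstance(self.logical_date, ArgNotSet)
        else self.logical_date
    )
```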
### 3. **Template Field**
Add `run_after` to `template_fields` so it can be set dynamically via Jinja templating.
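Concretely, something like the following; the existing entries shown here are illustrative rather than an exact copy of the current tuple:
```python
from collections.abc import Sequence

# Inside TriggerDagRunOperator; existing entries abbreviated.
template_fields: Sequence[str] = (
    "trigger_dag_id",
    "trigger_run_id",
    "logical_date",
    "run_after",  # new
    "conf",
)
```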
### 4. **Updated Signature**
```python
TriggerDagRunOperator(
    task_id="trigger_my_dag",  # hypothetical task ID, required by BaseOperator
    trigger_dag_id="my_dag",
    run_after=datetime(2024, 1, 1, 12, 0, 0),  # schedule for a specific time
    logical_date=None,  # allow None to enable parallel runs
    conf={"key": "value1"},
)
```
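For contrast with the failing pattern above, parallel triggering would then be straightforward (task IDs again hypothetical):
```python
from datetime import datetime

# Two parallel runs of the same target DAG, distinguished only by conf:
# logical_date=None sidesteps the unique-constraint clash, and run_after
# tells the scheduler when each run becomes eligible to start.
for i, conf in enumerate([{"key": "value1"}, {"key": "value2"}]):
    TriggerDagRunOperator(
        task_id=f"trigger_parallel_{i}",
        trigger_dag_id="my_dag",
        run_after=datetime(2024, 1, 1, 12, 0, 0),
        logical_date=None,
        conf=conf,
    )
```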
## Implementation Files
- `providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py`
- `providers/standard/tests/unit/standard/operators/test_trigger_dagrun.py`
- Documentation updates
I'll submit a PR shortly with the implementation and comprehensive tests.
Let me know if this approach looks good! 🚀
Thanks!