Lee-W commented on code in PR #63848:
URL: https://github.com/apache/airflow/pull/63848#discussion_r2957598394
##########
airflow-core/src/airflow/assets/manager.py:
##########
@@ -358,7 +358,7 @@ def _queue_dagruns(
)
non_partitioned_dags = dags_to_queue.difference(partition_dags) #
don't double process
- if not non_partitioned_dags:
+ if not non_partitioned_dags or partition_key is not None:
Review Comment:
Currently, users can provide a partition key to a non-partition-aware Dag.
The main question is how to handle this when triggering downstream
partition-aware assets.
One approach is to treat the provided key as a temporary partition context,
allowing it to propagate to downstream assets without requiring changes to the
upstream Dag. This preserves flexibility, though it's a bit conceptually odd.
Alternatively, the upstream Dag could be made partition-aware (e.g., set
`schedule=PartitionedAssetTimetable(assets=[])`) so keys propagate naturally,
but this adds complexity (we'd need to block users from providing a partition
key to non-partition-aware Dags).
---
I kinda like the first one a bit more. The logic would then be
1. Whether a DagRun can trigger a partition-aware Dag -> depends on
whether the DagRun has a valid partition key
2. Whether a Dag can be triggered by asset events with partition keys ->
depends on whether this consumer Dag is partition-aware
I think we kinda missed this case during implementation and assumed a DagRun
with a partition key is always partition-aware. We might need to check the
trigger logic again