eladkal commented on code in PR #30270:
URL: https://github.com/apache/airflow/pull/30270#discussion_r1147709766
##########
airflow/ti_deps/deps/trigger_rule_dep.py:
##########
@@ -263,8 +280,28 @@ def _iter_upstream_conditions() -> Iterator[ColumnOperators]:
         elif trigger_rule == TR.ALL_SKIPPED:
             if success or failed:
                 new_state = TaskInstanceState.SKIPPED
-
+        elif trigger_rule == TR.ALL_DONE_SETUP_SUCCESS:
+            # when there is an upstream setup and they have all skipped, then skip
+            if upstream_done and upstream_setup and skipped_setup >= upstream_setup:
+                new_state = TaskInstanceState.SKIPPED
+                logging.warning(
+                    "ti=%s skipped_setup >= upstream_setup (%s >= %s), marking skipped",
+                    ti.task_id,
+                    skipped_setup,
+                    upstream_setup,
+                )
+            elif upstream_done and upstream_setup > success_setup:
+                # when there is an upstream setup, they must all succeed
+                # otherwise, behave same as all done.
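For readers skimming the hunk, the new `ALL_DONE_SETUP_SUCCESS` branch can be paraphrased as a small decision function. This is a hedged sketch, not the actual Airflow code: the function name and the returned sentinel strings are invented for illustration, and the "behave same as all done" fallback is represented by a placeholder since that handling is elided from the hunk.

```python
from typing import Optional

# Hypothetical paraphrase of the ALL_DONE_SETUP_SUCCESS branch above;
# the real code assigns TaskInstanceState values in place.
def all_done_setup_success_state(
    upstream_done: bool,
    upstream_setup: int,  # count of upstream setup tasks
    skipped_setup: int,   # how many of them were skipped
    success_setup: int,   # how many of them succeeded
) -> Optional[str]:
    if not (upstream_done and upstream_setup):
        return None  # rule not decided yet, or no setup tasks upstream
    if skipped_setup >= upstream_setup:
        # every upstream setup was skipped -> skip this task too
        return "skipped"
    if upstream_setup > success_setup:
        # some setup neither skipped nor succeeded:
        # fall back to ALL_DONE-style handling (elided in the hunk)
        return "all_done_behavior"
    return None
```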
Review Comment:
> When you mark a task as teardown (which is something you must do -- airflow cannot guess about this) that's when it ensures the trigger rule. For teardown tasks, trigger rule is not meant to be configurable
For sure.
So what you're saying is that we can make this happen by changing the code of the current trigger rules, without adding new ones. That's good.
I just want to point out that relying on trigger rules is problematic. We have cases like `ShortCircuitOperator`, which with its default execution does not honor trigger rules.
https://github.com/apache/airflow/issues/7858#issuecomment-615863490
Thus, based on your description, if a pipeline uses this operator, the teardown might not run.
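To make the concern concrete, here is a toy model of the behavior being described. This is plain Python, not the Airflow API: it just shows how a short-circuit that skips every downstream task without consulting trigger rules would also skip a teardown.

```python
# Toy model (not Airflow code) of a short-circuit that, like the default
# ShortCircuitOperator behavior described above, marks every downstream
# task skipped without evaluating its trigger rule.
def short_circuit(condition: bool, downstream: list) -> dict:
    if condition:
        return {task: "scheduled" for task in downstream}
    # Trigger rules (even an all_done-style rule on a teardown) are
    # never consulted here; everything downstream is skipped outright.
    return {task: "skipped" for task in downstream}

states = short_circuit(False, ["work", "teardown"])
```

In this model `states["teardown"]` ends up `"skipped"`, which is exactly the scenario where the teardown would never run.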
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]