jscheffl commented on issue #29332:
URL: https://github.com/apache/airflow/issues/29332#issuecomment-2040480328
Okay, I ran a regression test and can confirm this bug.
Note that the additional `setup >> teardown` dependency is not needed; it is
modelled automatically.
I used a modified example `example_setup_teardown.py` with three normal tasks
added in the middle.
```python
import pendulum

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_setup_teardown",
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
) as dag:
    root_setup = BashOperator(
        task_id="root_setup", bash_command="echo 'Hello from root_setup'"
    ).as_setup()
    root_normal = BashOperator(
        task_id="normal", bash_command="echo 'I am just a normal task'"
    )
    root_normal2 = BashOperator(
        task_id="normal2", bash_command="echo 'I am just a normal task'"
    )
    root_normal3 = BashOperator(
        task_id="normal3", bash_command="echo 'I am just a normal task'"
    )
    root_teardown = BashOperator(
        task_id="root_teardown", bash_command="echo 'Goodbye from root_teardown'"
    ).as_teardown(setups=root_setup)
    root_setup >> root_normal >> root_normal2 >> root_normal3 >> root_teardown
```
The error appears exactly when you clear a normal task between setup and
teardown that is NOT the last task before the teardown, with no downstream
tasks selected. The setup is then executed, followed by the cleared task
itself. The bug is that the teardown's only direct predecessor is already
finished, so to the scheduler the DAG looks "ready for teardown". The
scheduler does not check whether all tasks between setup and teardown have
completed.
In the example above, the error occurs if either task "normal" or "normal2"
is cleared without downstream tasks. It works as expected if "normal3" is
cleared.
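To illustrate the missing check, here is a toy model of the teardown-readiness logic (a sketch only, not actual Airflow scheduler code; the state snapshot, `upstream` map, and both functions are hypothetical). It captures the scenario after clearing "normal" without downstream: "normal2" and "normal3" keep their stale success state, so a check that only looks at the teardown's direct upstream fires too early, while a check over every task in the setup/teardown scope would wait.

```python
# Hypothetical snapshot of task states right after clearing "normal"
# without downstream tasks: "normal" was reset, but its downstream
# tasks kept their old success state.
states = {
    "root_setup": "success",
    "normal": "none",       # cleared, not yet rerun
    "normal2": "success",   # stale, not cleared
    "normal3": "success",   # stale, not cleared
}

# Direct upstream of the teardown, and the full set of tasks that sit
# between the setup and the teardown in the example DAG.
upstream = {"root_teardown": ["normal3"]}
between_setup_and_teardown = ["normal", "normal2", "normal3"]


def buggy_teardown_ready(task: str) -> bool:
    """Mimics the reported behaviour: only the direct upstream is checked."""
    return all(states[t] == "success" for t in upstream[task])


def fixed_teardown_ready(task: str) -> bool:
    """Also requires every task in the setup/teardown scope to be done."""
    return buggy_teardown_ready(task) and all(
        states[t] == "success" for t in between_setup_and_teardown
    )


print(buggy_teardown_ready("root_teardown"))  # True  -> teardown fires too early
print(fixed_teardown_ready("root_teardown"))  # False -> waits for "normal"
```

This also matches the observation above: clearing "normal3" resets the teardown's direct upstream itself, so even the direct-upstream check correctly waits.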
I ran my regression test on latest main after branching off 2.9.0rc2, so the
problem has most probably existed since the beginning.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]