GitHub user pykenny edited a discussion: Why is "upstream_failed" viewed as an intermediate state?

Currently I'm reworking the [task lifecycle graph](https://github.com/apache/airflow/blob/a3294cc6272b132b9ecc2873a570fe5d1d480e03/docs/apache-airflow/img/task_lifecycle_diagram.png) to bring it up to date with the latest version of Airflow:
 - https://github.com/apache/airflow/issues/40185

From the definition of the [state enums](https://github.com/apache/airflow/blob/a3294cc6272b132b9ecc2873a570fe5d1d480e03/airflow/utils/state.py), task instance states can be grouped into three categories:

**Terminal States**
 - success
 - failed
 - skipped
 - removed (sort of, because a task instance can be reset to the "none" state when it becomes available again during a DAG run)

**Intermediate States** 
 - scheduled
 - queued
 - restarting
 - up_for_retry
 - up_for_reschedule
 - **upstream_failed**
 - deferred

**Other States**
 - running (occupied by executor/worker)
 - none (not processed by scheduler yet)

The other states seem reasonable in their respective categories, but why is "upstream_failed" placed in the "Intermediate States" category? A task enters this state when any of its required upstream tasks fail, and it won't be updated by the scheduler unless those failed upstream states are cleared.
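For anyone double-checking how a given Airflow version groups these states at runtime, here is a minimal sketch. It assumes the `State.finished` / `State.unfinished` helper sets are still exposed by `airflow.utils.state` in the installed version:

```python
# Minimal sketch: inspect Airflow's own state groupings at runtime.
# Assumes airflow is installed and that airflow.utils.state still exposes
# the State.finished / State.unfinished frozensets.
from airflow.utils.state import State, TaskInstanceState

# Which states does the scheduler itself treat as finished vs. unfinished?
print("finished:  ", sorted(str(s) for s in State.finished))
print("unfinished:", sorted(str(s) for s in State.unfinished))

# The state this discussion is about:
print("upstream_failed in State.finished:",
      TaskInstanceState.UPSTREAM_FAILED in State.finished)
```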

GitHub link: https://github.com/apache/airflow/discussions/46038
