matthewblock opened a new issue, #27449: URL: https://github.com/apache/airflow/issues/27449
### Apache Airflow version

2.4.2

### What happened

We have dynamic tasks that use `partial`/`expand`. These tasks are inside task groups and have upstream tasks. Occasionally these dynamic tasks are marked with state `upstream_failed` even though none of their upstream tasks are `upstream_failed` or `failed`. This occurs before their upstream tasks complete. All of the dynamic tasks have the default trigger rule, `all_success`.

Note that this occurs sporadically: some dynamic tasks _do_ complete successfully, and it appears to be random per DAG run.

<img width="1220" alt="dag_deid" src="https://user-images.githubusercontent.com/10802053/199304550-30a2d731-9e05-44bf-a472-27b1e4addd0d.png">

### What you think should happen instead

The dynamic tasks should execute successfully. The tasks immediately upstream of these `upstream_failed` tasks have this near the end of their logs:

```
[2022-11-01, 09:15:31 PDT] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
```

### How to reproduce

Difficult to reproduce; working on a test DAG.

* Run a DAG that has dynamic tasks with trigger rule `all_success`
* Note that occasionally dynamic tasks get marked as `upstream_failed`

### Operating System

Debian Buster

### Versions of Apache Airflow Providers

_No response_

### Deployment

Docker-Compose

### Deployment details

_No response_

### Anything else

Some dynamic tasks complete successfully and some get marked as `upstream_failed`. It appears to be random per DAG run.

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
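The reproduction steps above could be sketched as a minimal DAG matching the reported setup (an upstream task feeding `expand`-ed dynamic tasks inside a task group, all with the default `all_success` trigger rule). This is a hypothetical illustration, not the reporter's actual DAG; the DAG id and task names are invented, and whether it actually triggers the bug is unconfirmed since the issue is sporadic.

```python
# Hypothetical minimal DAG illustrating the reported setup (Airflow 2.4.x):
# dynamic tasks via expand, inside a task group, downstream of a regular task.
# All tasks use the default trigger rule (all_success).
import pendulum
from airflow.decorators import dag, task, task_group


@dag(
    dag_id="repro_upstream_failed",  # invented id
    schedule=None,
    start_date=pendulum.datetime(2022, 11, 1, tz="UTC"),
    catchup=False,
)
def repro_upstream_failed():
    @task
    def make_items():
        # Upstream task producing the values the dynamic tasks map over.
        return [1, 2, 3]

    @task
    def process(item):
        # One mapped task instance runs per item from make_items().
        return item * 2

    @task_group
    def work():
        # Dynamic tasks live inside a task group, as in the report.
        process.expand(item=make_items())

    work()


repro_upstream_failed()
```

Per the report, the expectation is that every mapped `process` instance runs once `make_items` succeeds; the bug is that some mapped instances are instead marked `upstream_failed` on some DAG runs.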
