[ https://issues.apache.org/jira/browse/AIRFLOW-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434993#comment-16434993 ]
Parikshit edited comment on AIRFLOW-1515 at 4/12/18 6:24 AM:
-------------------------------------------------------------
Hi, has this been resolved? Or is there some alternative approach for doing cleanup on failure/success? I am facing the same issue with Airflow 1.9.

What I have observed is that if any task in the DAG is still running, the scheduler keeps scheduling and marks the downstream tasks as upstream_failed. If no task is running, it simply stops scheduling; those tasks never reach the executors and are left in the 'None' state.

Regards
PB

> Airflow 1.8.1 tasks not being marked as upstream_failed when one of the
> parents fails
> -------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-1515
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-1515
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: core, DAG, DagRun
>    Affects Versions: 1.8.1
>            Reporter: Jose Sanchez
>            Priority: Major
>         Attachments: airflow_bug.png, image-2018-04-03-21-24-58-893.png, rofl_test.py
>
>
> The trigger rule "all_done" does not work when a task's parents are marked as
> State.NONE instead of State.UPSTREAM_FAILED. I am submitting a very small DAG
> as an example and a picture of the run, where the last task should have been
> executed before the DAG was marked as failed.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
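The failure mode described above can be illustrated with a plain-Python sketch of how an "all_done" trigger rule evaluates upstream states. This is a simplified model with hypothetical helper names, not Airflow's actual scheduler code: "all_done" should fire once every parent is in a terminal state, so parents stuck in the None state block the downstream task indefinitely.

```python
# Simplified model of the "all_done" trigger rule (hypothetical helper,
# not Airflow's real API). A task with this rule should run once every
# upstream task has reached a terminal state.
TERMINAL_STATES = {"success", "failed", "upstream_failed", "skipped"}

def all_done_ready(upstream_states):
    """Return True when every upstream task is in a terminal state."""
    return all(s in TERMINAL_STATES for s in upstream_states)

# Parents correctly marked after a failure: the cleanup task fires.
print(all_done_ready(["failed", "upstream_failed"]))  # True

# The reported bug: a parent left in the None state never becomes
# terminal, so the "all_done" task is never scheduled.
print(all_done_ready(["failed", None]))  # False
```

Under this model, the fix is for the scheduler to move blocked parents to upstream_failed (a terminal state) rather than leaving them as None, at which point the "all_done" task becomes eligible to run.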