A couple of things are at play.

The best practice is to ensure that your BranchPythonOperator does not
fail. Catch the most general exception and return a value that picks a
"failure processing branch". If neither D nor E is your failure processing
branch, then add one more branch - BranchPythonOperator supports
generalized N-way branching after all.
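
A minimal sketch of that pattern, assuming hypothetical task_ids "d",
"e", and "failure_processing", a dag object defined elsewhere, and a
do_branch_logic() stand-in for your real decision code:

    from airflow.operators.python_operator import BranchPythonOperator

    def choose_branch(**context):
        try:
            # Real decision logic goes here; do_branch_logic() is a
            # hypothetical stand-in.
            return "d" if do_branch_logic() else "e"
        except Exception:
            # Catch everything so the branch task itself never fails,
            # and route to the failure processing branch instead.
            return "failure_processing"

    c = BranchPythonOperator(
        task_id="c",
        python_callable=choose_branch,
        provide_context=True,
        dag=dag,
    )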


Your "failure processing branch" can report errors and be a leaf
node/terminal node of your DAG, which means that F and G do not need to be
executed in case of failure.
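
One way to sketch that leaf node (the task_id and messages are
placeholders; if you also want the DagRun itself to end up failed, the
callable can raise after reporting, which is what this sketch does):

    import logging

    from airflow.operators.python_operator import PythonOperator

    def report_failure(**context):
        # Report the error however suits you; plain logging shown here.
        logging.error("Branching in task C failed; see the C task logs.")
        # Raising afterwards marks this task, and hence the DagRun, as
        # failed instead of quietly succeeding with F and G skipped.
        raise RuntimeError("failure processing branch reached")

    failure_processing = PythonOperator(
        task_id="failure_processing",
        python_callable=report_failure,
        provide_context=True,
        dag=dag,
    )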

If you would like F and G to always be executed (since I don't know your
use-case, I cannot comment on whether that makes sense for you), then make
all 3 of your branch nodes, D, E, and the "failure processing branch",
upstream parents of F, as sketched below.
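
Sketched with set_downstream/set_upstream, assuming c, d, e,
failure_processing, and f are the task objects for C, D, E, the failure
processing branch, and F:

    # C fans out to all three branches; F waits on all three.
    c.set_downstream([d, e, failure_processing])
    f.set_upstream([d, e, failure_processing])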

Also, on F, you should specify a trigger_rule of "one_success", so that F
runs as soon as any one of its upstream parents succeeds.
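
For instance, if F were a DummyOperator (the "one_success" string is the
actual trigger rule name; everything else here is placeholder):

    from airflow.operators.dummy_operator import DummyOperator

    f = DummyOperator(
        task_id="f",
        # Fire as soon as any one upstream parent succeeds, even though
        # the other branches were skipped.
        trigger_rule="one_success",
        dag=dag,
    )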

It is intended that a DagRun be deemed successful in all cases except for
failure. So skipped nodes F and G would result in a successful DagRun.


-s

On Mon, Nov 14, 2016 at 1:23 PM, <[email protected]> wrote:

> Hello.
>
> My DAG is as follow :
>
>                 D
>               /   \
> A -- B -- C         F -- G
>               \   /
>                 E
>
> I have a case where the C BranchPythonOperator fails so that it is not
> able to tell Airflow which child task must be executed. The result is: D &
> E are in UP_FOR_RETRY state and F & G are in SKIPPED state.
>
> The issue is, because of the last SKIPPED state, Airflow considers the
> whole DAG to have succeeded whereas it has not.
>
> The FAILED state is not propagated to downstream tasks. Is it intended
> behaviour?
>
> Regards.
>
