[
https://issues.apache.org/jira/browse/AIRFLOW-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned AIRFLOW-1488:
-------------------------------------
Assignee: Holden Karau's magical unicorn (was: Yati)
> Add a sensor operator to wait on DagRuns
> ----------------------------------------
>
> Key: AIRFLOW-1488
> URL: https://issues.apache.org/jira/browse/AIRFLOW-1488
> Project: Apache Airflow
> Issue Type: New Feature
> Components: contrib, operators
> Reporter: Yati
> Assignee: Holden Karau's magical unicorn
> Priority: Major
>
> The
> [ExternalTaskSensor|https://airflow.incubator.apache.org/code.html#airflow.operators.ExternalTaskSensor]
> operator already allows encoding dependencies on individual tasks in external DAGs.
> However, when teams each own multiple small-to-medium-sized DAGs, it is
> desirable to be able to wait on an external DagRun as a whole. This lets the
> owners of an upstream DAG refactor their code freely, splitting or squashing
> task responsibilities, without worrying about breaking dependent DAGs.
> I'll now enumerate the easiest ways of achieving this that come to mind:
> * Make all DAGs always have a join DummyOperator in the end, with a task id
> that follows some convention, e.g., "{{ dag_id }}.__end__".
> * Make ExternalTaskSensor poke for a DagRun instead of TaskInstances when the
> external_task_id argument is None.
> * Implement a separate DagRunSensor operator.
> After consideration, we decided to implement a separate operator, which we
> have been using within our team for our workflows; I think it would make a
> good addition to contrib.
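The poke logic of the proposed DagRunSensor could look roughly like the sketch below. This is a standalone model, not the contrib code: the `DagRunSensor` and `poke` names follow Airflow's sensor convention, but the `DagRun` dataclass and the `dag_runs` argument stand in for what would be a metadata-database query in a real operator.

```python
# Hypothetical sketch of a DagRunSensor's poke logic: succeed once the
# external DagRun as a whole reaches an allowed state, regardless of
# which tasks the upstream DAG happens to contain.

from dataclasses import dataclass

@dataclass
class DagRun:
    """Stand-in for Airflow's DagRun metadata-DB model."""
    dag_id: str
    execution_date: str
    state: str  # e.g. "running", "success", "failed"

class DagRunSensor:
    """Waits until the external DagRun reaches one of the allowed states."""

    def __init__(self, external_dag_id, execution_date,
                 allowed_states=("success",)):
        self.external_dag_id = external_dag_id
        self.execution_date = execution_date
        self.allowed_states = set(allowed_states)

    def poke(self, dag_runs):
        # In a real operator this loop would be a query against the
        # Airflow metadata database instead of an in-memory list.
        for run in dag_runs:
            if (run.dag_id == self.external_dag_id
                    and run.execution_date == self.execution_date
                    and run.state in self.allowed_states):
                return True
        return False
```

Because the sensor keys on the DagRun's state rather than on task ids, the upstream team can rename, split, or squash tasks without the downstream dependency ever noticing.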
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)