MatrixManAtYrService opened a new issue #19525:
URL: https://github.com/apache/airflow/issues/19525


   ### Apache Airflow version
   
   2.2.2rc1 (release candidate)
   
   ### Operating System
   
   Debian (Docker)
   
   ### Versions of Apache Airflow Providers
   
   n/a
   
   ### Deployment
   
   Astronomer
   
   ### Deployment details
   
   Two dags:
   
   ```
   # one.py
   from airflow.decorators import dag, task
   from airflow.utils.dates import days_ago

   @task
   def a():
       print("a")

   @dag(schedule_interval=None, start_date=days_ago(2))
   def my_dag():
       a()

   dag = my_dag()
   ```
   
   ```
   # two.py
   from airflow.decorators import dag, task
   from airflow.utils.dates import days_ago

   @task
   def b():
       print("b")

   @dag(schedule_interval=None, start_date=days_ago(2))
   def my_dag():
       b()

   dag = my_dag()
   ```
   
   Note that they share the same dag_id: `my_dag`
   
   
   ### What happened
   
   Only one DAG appeared. In the tree view, refreshing the page made the task alternate, seemingly at random, between:
   - task_id: `a`
   - task_id: `b`
   
   ### What you expected to happen
   
   At the very least, I expected a warning to appear. It seems we used to have one: https://github.com/apache/airflow/pull/15302
   
   I think that in the presence of a dag_id collision, we should either:
   - refuse to run that DAG at all until the collision is resolved, or
   - resolve it deterministically
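As a stopgap, a collision like this can be caught before deployment with a static scan of the DAG files. The sketch below is not Airflow's actual loader logic; it is a standalone check using Python's `ast` module, and it assumes the TaskFlow style from this report (the decorated function's name becomes the dag_id unless an explicit `dag_id=` keyword is passed):

```python
import ast
from collections import defaultdict


def find_dag_ids(source, filename):
    """Yield (dag_id, filename) for every @dag-decorated function in source."""
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if not isinstance(node, ast.FunctionDef):
            continue
        for deco in node.decorator_list:
            # Match both bare `@dag` and `@dag(...)`.
            target = deco.func if isinstance(deco, ast.Call) else deco
            if isinstance(target, ast.Name) and target.id == "dag":
                dag_id = node.name  # TaskFlow default: function name
                if isinstance(deco, ast.Call):
                    for kw in deco.keywords:
                        if kw.arg == "dag_id" and isinstance(kw.value, ast.Constant):
                            dag_id = kw.value.value  # explicit dag_id= override
                yield dag_id, filename


def find_collisions(sources):
    """Given {filename: source}, return dag_ids defined in more than one file."""
    seen = defaultdict(list)
    for fname, src in sources.items():
        for dag_id, f in find_dag_ids(src, fname):
            seen[dag_id].append(f)
    return {d: files for d, files in seen.items() if len(files) > 1}


collisions = find_collisions({
    "one.py": "@dag(schedule_interval=None)\ndef my_dag():\n    pass\n",
    "two.py": "@dag()\ndef my_dag():\n    pass\n",
})
print(collisions)  # {'my_dag': ['one.py', 'two.py']}
```

Running this over the two files above reports `my_dag` as defined in both, which is exactly the ambiguity the UI currently hides.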
   
   ### How to reproduce
   
   Include the two DAGs above and observe the "DAGs" view. Notice that there is only one DAG listed, and it's not clear which one it is.
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   

