yenchenLiu opened a new issue, #28868:
URL: https://github.com/apache/airflow/issues/28868

   ### Apache Airflow version
   
   2.5.0
   
   ### What happened
   
   I use TriggerDagRunOperator with dynamic task mapping to trigger a large number of DAG runs (>= 100), and it sometimes fails with:
   ``` log
   Failed to execute job 7159001 for task dynamic_dags 
((psycopg2.errors.UniqueViolation) duplicate key value violates unique 
constraint "dag_run_dag_id_run_id_key"
   DETAIL:  Key (dag_id, run_id)=(dynamic_dags, 
manual__2023-01-11T22:28:03.286419+00:00) already exists.
   ```
   
   ``` python
   TriggerDagRunOperator.partial(
       task_id='dynamic_dags',
       trigger_dag_id='trigger_dag',
       wait_for_completion=True,
   ).expand(conf=[{"id": 1}, {"id": 2}, {"id": 3}, ......])
   ```
   
   ### What you think should happen instead
   
   I expect the `run_id` to always be unique when I use dynamic task mapping to trigger DAG runs.
   
   ### How to reproduce
   
   1. Create a dag called `trigger_dag`.
   2. Create a dag called `test_dag`, which uses TriggerDagRunOperator with dynamic task mapping to trigger a large number of `trigger_dag` runs.
   3. Sometimes, a task fails because two runs were assigned the same `run_id`.
   
   * I use the LocalExecutor with `parallelism = 16`, `max_active_tasks_per_dag = 12`, and `max_active_runs_per_dag = 12`.
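
   As a possible workaround while the bug stands, supplying an explicit, collision-free `trigger_run_id` per mapped task instead of relying on the timestamp-derived default seems to avoid the conflict. A hedged sketch (this only builds the ids; the operator call, shown in a comment, assumes `trigger_run_id` can be expanded like `conf`):

   ``` python
   import uuid

   confs = [{"id": i} for i in range(1, 101)]
   # One guaranteed-unique run_id per mapped task instance.
   run_ids = [f"manual__{uuid.uuid4()}" for _ in confs]

   # TriggerDagRunOperator.partial(
   #     task_id='dynamic_dags',
   #     trigger_dag_id='trigger_dag',
   #     wait_for_completion=True,
   # ).expand(trigger_run_id=run_ids, conf=confs)

   assert len(set(run_ids)) == len(run_ids)  # no duplicates possible
   ```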
   
   ### Operating System
   
   Ubuntu 20.04.4 LTS
   
   ### Versions of Apache Airflow Providers
   
   _No response_
   
   ### Deployment
   
   Virtualenv installation
   
   ### Deployment details
   
   _No response_
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   
