pierrejeambrun commented on code in PR #43875:
URL: https://github.com/apache/airflow/pull/43875#discussion_r1853773428


##########
airflow/api_fastapi/core_api/datamodels/dag_run.py:
##########
@@ -72,3 +74,35 @@ class DAGRunCollectionResponse(BaseModel):
 
     dag_runs: list[DAGRunResponse]
     total_entries: int
+
+
+class TriggerDAGRunPostBody(BaseModel):
+    """Trigger DAG Run Serializer for POST body."""
+
+    dag_run_id: str | None = None
+    logical_date: AwareDatetime | None = None
+    data_interval_start: AwareDatetime | None = None
+    data_interval_end: AwareDatetime | None = None
+
+    conf: dict | None = Field(default_factory=dict)
+    note: str | None
+
+    @model_validator(mode="after")
+    def check_data_intervals(cls, values):
+        if (values.data_interval_start is None) != (values.data_interval_end is None):
+            raise ValueError(
+                "Either both data_interval_start and data_interval_end must be provided or both must be None"
+            )
+            )
+        return values
+
+    @model_validator(mode="after")
+    def validate_dag_run_id(self):
+        if not self.dag_run_id:
+            self.dag_run_id = DagRun.generate_run_id(DagRunType.MANUAL, self.logical_date)
+        return self

Review Comment:
   @uranusjr If we remove `logical_date`, we end up with `None` for the logical date when creating the dag run with `create_dagrun`, and unfortunately it needs `type` + `logical_date` to infer the `run_id`. This is why we need to manually fill the `run_id` here when it is missing, but I am not a big fan of it.
   
   I assume this will be updated later when the `logical_date` change takes place, i.e. `create_dagrun` will be able to generate an appropriate `run_id` without `logical_date` being provided?
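   To illustrate the fallback being discussed, here is a minimal, dependency-free sketch of the validator's logic. `default_run_id` is a hypothetical helper (not part of Airflow's API) that mimics what `DagRun.generate_run_id(DagRunType.MANUAL, logical_date)` is assumed to produce for manual runs: the run type joined to the logical date's ISO timestamp.
   
   ```python
   from datetime import datetime, timezone
   
   def default_run_id(dag_run_id: str | None, logical_date: datetime) -> str:
       """Hypothetical sketch of the POST-body fallback: keep a client-supplied
       run_id, otherwise derive one from the run type and logical date
       (assumed "manual__<iso timestamp>" format, matching manual runs)."""
       if dag_run_id:
           return dag_run_id
       # This is exactly the coupling the comment points out: with
       # logical_date=None this derivation has nothing to build the id from.
       return f"manual__{logical_date.isoformat()}"
   
   print(default_run_id(None, datetime(2024, 1, 1, tzinfo=timezone.utc)))
   ```
   
   The sketch also shows why the validator is needed today: without a `logical_date`, there is no obvious input from which to derive the id, so the API layer has to fill it in before handing off to `create_dagrun`.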


