ferruzzi commented on code in PR #61372:
URL: https://github.com/apache/airflow/pull/61372#discussion_r2771469173
##########
airflow-core/src/airflow/models/taskinstance.py:
##########
@@ -181,6 +183,48 @@ def _stop_remaining_tasks(*, task_instance: TaskInstance, task_teardown_map=None
log.info("Not skipping teardown task '%s'", ti.task_id)
+def _recalculate_dagrun_queued_at_deadlines(
+    dagrun: DagRun, new_queued_at: datetime, session: Session
+) -> None:
+    """
+    Recalculate deadline times for deadlines that reference dagrun.queued_at.
+
+    :param dagrun: The DagRun whose deadlines should be recalculated
+    :param new_queued_at: The new queued_at timestamp to use for calculation
+    :param session: Database session
+
+    :meta private:
+    """
+    deadlines = session.scalars(
+        select(Deadline).where(
+            Deadline.dagrun_id == dagrun.id,
+            Deadline.missed == false(),
+        )
+    ).all()
+
+    if not deadlines:
+        return
+
+    for deadline in deadlines:
+        if deadline.deadline_alert_id:
+            if deadline_alert := session.get(DeadlineAlertModel, deadline.deadline_alert_id):
+                reference_type = deadline_alert.reference.get(ReferenceModels.REFERENCE_TYPE_FIELD)
+                if reference_type == ReferenceModels.DagRunQueuedAtDeadline.__name__:
+                    # We can't use evaluate_with() since the new queued_at is not written to the DB yet.
+                    deadline_interval = timedelta(seconds=deadline_alert.interval)
+                    new_deadline_time = new_queued_at + deadline_interval
+
+                    log.info(
+                        "Recalculating deadline %s for DagRun %s.%s: old=%s, new=%s",
+                        deadline.id,
+                        dagrun.dag_id,
+                        dagrun.run_id,
+                        deadline.deadline_time,
+                        new_deadline_time,
+                    )
+                    deadline.deadline_time = new_deadline_time
Review Comment:
No, I don't think so. My understanding is that committing or flushing here would break the scheduler session's atomic nature and could leave the database with a partial update if something went wrong. When I tested manually, this works as expected: `deadline.deadline_time = new_deadline_time` stores that value in the ORM, and the `session.flush()` on/around L349 writes everything out at once.

This pattern also matches how the DagRun updates are handled earlier in the same function (like `dr.clear_number += 1` and `dr.queued_at = ...`).

I'm definitely not an SQL expert, but I'm reasonably confident that this is good as it is.
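For anyone following along, here is a minimal, self-contained sketch of the unit-of-work pattern being described. It uses plain SQLAlchemy with toy `Run`/`DeadlineRow` models and a made-up 30-minute interval, not Airflow's real models: attribute assignments on session-attached objects are only staged in memory, and one flush/commit at the end persists all of them in the same transaction.

```python
# Sketch only: toy models and interval are assumptions, not Airflow code.
from datetime import datetime, timedelta, timezone

from sqlalchemy import DateTime, Integer, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class Run(Base):
    __tablename__ = "run"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    queued_at: Mapped[datetime] = mapped_column(DateTime(timezone=True))


class DeadlineRow(Base):
    __tablename__ = "deadline_row"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    run_id: Mapped[int] = mapped_column(Integer)
    deadline_time: Mapped[datetime] = mapped_column(DateTime(timezone=True))


def recalc_deadline(deadline: DeadlineRow, new_queued_at: datetime) -> None:
    # No flush() or commit() here: the assignment is only staged on the
    # session and gets written out together with the other pending changes.
    deadline.deadline_time = new_queued_at + timedelta(minutes=30)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    run = Run(id=1, queued_at=datetime.now(timezone.utc))
    dl = DeadlineRow(id=1, run_id=1, deadline_time=run.queued_at)
    session.add_all([run, dl])
    session.flush()

    # Multiple in-memory updates, analogous to dr.queued_at = ... and
    # deadline.deadline_time = ... in the scheduler code path.
    run.queued_at = datetime.now(timezone.utc)
    recalc_deadline(dl, run.queued_at)

    # A single commit writes both updates atomically; if anything above
    # raised, nothing would have been persisted.
    session.commit()
```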
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]