o-nikolas commented on code in PR #55088:
URL: https://github.com/apache/airflow/pull/55088#discussion_r2320339872
##########
airflow-core/src/airflow/models/deadline.py:
##########
@@ -355,6 +355,31 @@ def _evaluate_with(self, *, session: Session, **kwargs: Any) -> datetime:
         return _fetch_from_db(DagRun.queued_at, session=session, **kwargs)

+    class AverageRuntimeDeadline(BaseDeadlineReference):
+        """A deadline that calculates the average runtime from past DAG runs."""
+
+        required_kwargs = {"dag_id"}
+
+        @provide_session
+        def _evaluate_with(self, *, session: Session, **kwargs: Any) -> datetime:
+            from airflow.models import DagRun
+
+            dag_id = kwargs["dag_id"]
+
+            # Query for completed DAG runs with both start and end dates
+            query = select(func.avg(func.extract("epoch", DagRun.end_date - DagRun.start_date))).filter(
+                DagRun.dag_id == dag_id, DagRun.start_date.isnot(None), DagRun.end_date.isnot(None)
+            )
+
+            avg_seconds = session.execute(query).scalar()
Review Comment:
I like the iterative approach! But I think the default we go with for now should be some kind of rolling average (probably, again in the name of the simplicity you were shooting for, just use the last N dag runs, and don't factor in dag versions etc. yet). Once a DAG has accumulated a lot of runs, this deadline type will become almost unusable, since the weight of all that history will make the average difficult to move.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]