jhtimmins commented on a change in pull request #10594:
URL: https://github.com/apache/airflow/pull/10594#discussion_r492438024
##########
File path: airflow/api_connexion/endpoints/dag_run_endpoint.py
##########
@@ -122,6 +122,7 @@ def _fetch_dag_runs(
limit,
offset,
):
+ total_entries = query.count()
Review comment:
Good catch. I think the original behavior included a bug. @mik-laj can
you confirm that the count should be taken after filtering by date?
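For reference, the question here is where `query.count()` runs relative to the date filters: counting before and after filtering can report different totals. A minimal SQLAlchemy sketch of the two orderings (the `Run` model, engine, and dates below are illustrative stand-ins, not the PR's code):

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Run(Base):
    """Hypothetical stand-in for DagRun, just for illustration."""
    __tablename__ = "run"
    id = Column(Integer, primary_key=True)
    dag_id = Column(String)
    execution_date = Column(DateTime)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    query = session.query(Run).filter(Run.dag_id == "example_dag")

    # Existing behavior: count before any date filter is applied.
    total_before_filtering = query.count()

    # Behavior being questioned: count after the date filter is applied.
    filtered = query.filter(Run.execution_date >= datetime(2020, 1, 1))
    total_after_filtering = filtered.count()

    # The two totals diverge whenever the date filter excludes rows.
```

Which of the two totals the endpoint should report is the behavioral question being raised here.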
##########
File path: airflow/api_connexion/endpoints/dag_run_endpoint.py
##########
@@ -158,26 +158,29 @@ def _apply_date_filters_to_query(
return query
-@security.requires_authentication
+@security.requires_access([("can_read", "Dag"), ("can_read", "DagRun")])
@provide_session
def get_dag_runs_batch(session):
"""
Get list of DAG Runs
"""
body = request.get_json()
- try:
+ try: # TODO: Handle filtering.
data = dagruns_batch_form_schema.load(body)
except ValidationError as err:
raise BadRequest(detail=str(err.messages))
+ appbuilder = current_app.appbuilder
+ readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)
query = session.query(DagRun)
-
- if data["dag_ids"]:
- query = query.filter(DagRun.dag_id.in_(data["dag_ids"]))
+ if data.get("dag_ids"):
+ dag_ids = set(data["dag_ids"]) & set(readable_dag_ids)
+ query = query.filter(DagRun.dag_id.in_(dag_ids))
+ else:
+ query = query.filter(DagRun.dag_id.in_(readable_dag_ids))
dag_runs, total_entries = _fetch_dag_runs(
query,
- session,
Review comment:
It has to do with how the count is computed, as noted in your
previous comment. It used to need `session`, but it doesn't anymore.
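To spell out the `session` point: a SQLAlchemy `Query` already carries the session it was created from, so once the helper derives `total_entries` via `query.count()`, it no longer needs a separate `session` argument. A rough before/after sketch with simplified, hypothetical signatures (not the PR's exact helper):

```python
from sqlalchemy import func
from sqlalchemy.orm import Query, Session


def fetch_dag_runs_old(query: Query, session: Session, limit: int, offset: int):
    # Old shape (hypothetical): the total was computed with a separate
    # count query, which is why `session` had to be passed in.
    total_entries = session.query(func.count()).select_from(query.subquery()).scalar()
    dag_runs = query.offset(offset).limit(limit).all()
    return dag_runs, total_entries


def fetch_dag_runs_new(query: Query, limit: int, offset: int):
    # New shape: the Query knows its own session, so query.count() is enough
    # and the extra `session` parameter can be dropped.
    total_entries = query.count()
    dag_runs = query.offset(offset).limit(limit).all()
    return dag_runs, total_entries
```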
##########
File path: airflow/api_connexion/endpoints/dag_run_endpoint.py
##########
@@ -122,6 +122,7 @@ def _fetch_dag_runs(
limit,
offset,
):
+ total_entries = query.count()
Review comment:
Actually, I think we should maintain the existing behavior here and do
the count prior to the date filtering, just to keep the scope of this PR limited.
If we decide that the count should take place after the date filtering, let's
make that behavioral change in a separate PR.
##########
File path: airflow/api_connexion/endpoints/xcom_endpoint.py
##########
@@ -49,22 +52,28 @@ def get_xcom_entries(
"""
query = session.query(XCom)
- if dag_id != '~':
+ if dag_id == '~':
+ appbuilder = current_app.appbuilder
+ readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)
+ query = query.filter(XCom.dag_id.in_(readable_dag_ids))
+ query.join(DR, and_(XCom.dag_id.in_(readable_dag_ids),
XCom.execution_date == DR.execution_date))
+ else:
query = query.filter(XCom.dag_id == dag_id)
Review comment:
Correct
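For context on the hunk above: SQLAlchemy `Query` methods such as `.filter()` and `.join()` are generative, returning a new `Query` rather than mutating the receiver, so a bare `query.join(...)` whose result is discarded has no effect. A self-contained sketch with made-up models (not Airflow's):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    query = session.query(Parent)

    # No effect: the joined Query is created and immediately discarded.
    query.join(Child, Child.parent_id == Parent.id)

    # Effective: the returned Query must be assigned back.
    query = query.join(Child, Child.parent_id == Parent.id)
```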
##########
File path: airflow/api_connexion/endpoints/xcom_endpoint.py
##########
@@ -49,22 +52,28 @@ def get_xcom_entries(
"""
query = session.query(XCom)
- if dag_id != '~':
+ if dag_id == '~':
+ appbuilder = current_app.appbuilder
+ readable_dag_ids = appbuilder.sm.get_readable_dag_ids(g.user)
+ query = query.filter(XCom.dag_id.in_(readable_dag_ids))
+ query.join(DR, and_(XCom.dag_id.in_(readable_dag_ids),
XCom.execution_date == DR.execution_date))
Review comment:
I've made the update and created an issue to add a test for this:
https://github.com/apache/airflow/issues/11073
##########
File path: airflow/api_connexion/endpoints/dag_run_endpoint.py
##########
@@ -122,6 +122,7 @@ def _fetch_dag_runs(
limit,
offset,
):
+ total_entries = query.count()
Review comment:
I've created an issue to change how `total_entries` is counted: #11074
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org