GitHub user amit-mittal added a comment to the discussion: Disabling 
auto-refresh from UI has gone missing

That's fair! https://github.com/apache/airflow/issues/53154 is the GitHub issue 
I am talking about.

> From Airflow v2 to v3, resource usage has increased, but I don't see any
> documentation with new recommendations. Since the behavior is changing, I
> would have expected the changes to be added under a flag, with the default
> toggled later (before removal), rather than changing the default behavior
> outright. I believe increasing
> [auto_refresh_interval](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#auto-refresh-interval)
> is an alternative for now, and we won't be removing that setting anytime soon.
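As an illustration of that workaround, the interval can be raised without editing `airflow.cfg` via Airflow's standard `AIRFLOW__<SECTION>__<KEY>` environment-variable override. The section name below is an assumption and may differ between Airflow versions; check the configuration reference for your version:

```shell
# Assumed section/key: [webserver] auto_refresh_interval (seconds).
# Verify against the configuration reference for your Airflow version.
export AIRFLOW__WEBSERVER__AUTO_REFRESH_INTERVAL=30
```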

> But if refreshing a page takes 3 minutes then you should definitely explain 
> that in the issue including all sql queries and index information you found 
> and explain that this is what happens, because that obviously means that 
> something is wrong.

I am willing to share the SQL queries, but is there a way to enable those
end-to-end traces or log statements in Airflow that I can then share? At least
one query that is taking a long time is below. These queries are so long that
we had to increase the statement-text limit in `performance_schema` to 5000
characters just to see the full query.
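On the logging question: I don't know of an Airflow-specific switch, but since Airflow talks to the metadata database through SQLAlchemy, a generic way to surface every emitted statement (with timing) is to raise the `sqlalchemy.engine` logger to INFO. A minimal sketch, not Airflow-specific guidance:

```python
import logging

# SQLAlchemy emits each SQL statement on the "sqlalchemy.engine" logger
# at INFO level; enabling it logs every query the engine executes.
logging.basicConfig(level=logging.INFO)
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)
```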

```sql
SELECT task_instance.rendered_map_index, task_instance.task_display_name, 
task_instance.id, task_instance.task_id, task_instance.dag_id, 
task_instance.run_id, task_instance.map_index, task_instance.start_date, 
task_instance.end_date, task_instance.duration, task_instance.state, 
task_instance.try_number, task_instance.max_tries, task_instance.hostname, 
task_instance.unixname, task_instance.pool, task_instance.pool_slots, 
task_instance.queue, task_instance.priority_weight, task_instance.operator, 
task_instance.custom_operator_name, task_instance.queued_dttm, 
task_instance.scheduled_dttm, task_instance.queued_by_job_id, 
task_instance.last_heartbeat_at, task_instance.pid, task_instance.executor, 
task_instance.executor_config, task_instance.updated_at, 
task_instance.context_carrier, task_instance.span_status, 
task_instance.external_executor_id, task_instance.trigger_id, 
task_instance.trigger_timeout, task_instance.next_method, 
task_instance.next_kwargs, task_instance.dag_version_id, 
dag_version_1.id AS id_1, dag_version_1.version_number, dag_version_1.dag_id AS 
dag_id_1, dag_version_1.bundle_name, dag_version_1.bundle_version, 
dag_version_1.created_at, dag_version_1.last_updated, dag_run_1.state AS 
state_1, dag_run_1.id AS id_2, dag_run_1.dag_id AS dag_id_2, 
dag_run_1.queued_at, dag_run_1.logical_date, dag_run_1.start_date AS 
start_date_1, dag_run_1.end_date AS end_date_1, dag_run_1.run_id AS run_id_1, 
dag_run_1.creating_job_id, dag_run_1.run_type, dag_run_1.triggered_by, 
dag_run_1.conf, dag_run_1.data_interval_start, dag_run_1.data_interval_end, 
dag_run_1.run_after, dag_run_1.last_scheduling_decision, 
dag_run_1.log_template_id, dag_run_1.updated_at AS updated_at_1, 
dag_run_1.clear_number, dag_run_1.backfill_id, dag_run_1.bundle_version AS 
bundle_version_1, dag_run_1.scheduled_by_job_id, dag_run_1.context_carrier AS 
context_carrier_1, dag_run_1.span_status AS span_status_1, 
dag_run_1.created_dag_version_id
FROM task_instance INNER JOIN dag_run ON dag_run.dag_id = task_instance.dag_id 
AND dag_run.run_id = task_instance.run_id LEFT OUTER JOIN dag_version ON 
dag_version.id = task_instance.dag_version_id LEFT OUTER JOIN dag_version AS 
dag_version_1 ON dag_version_1.id = task_instance.dag_version_id INNER JOIN 
dag_run AS dag_run_1 ON dag_run_1.dag_id = task_instance.dag_id AND 
dag_run_1.run_id = task_instance.run_id
WHERE task_instance.dag_id IN (....) ORDER BY CASE WHEN 
(task_instance.start_date IS NOT NULL) THEN 0 ELSE 1 END, 
task_instance.start_date DESC, task_instance.id DESC
```
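For reference, the `performance_schema` limit mentioned above is controlled by MySQL server startup variables (they cannot be changed with `SET GLOBAL`; a restart is required). A `my.cnf` sketch with the values we used:

```ini
# my.cnf -- raise the captured statement text length (restart required).
[mysqld]
performance_schema_max_sql_text_length = 5000
max_digest_length = 5000
```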

GitHub link: 
https://github.com/apache/airflow/discussions/53175#discussioncomment-13733204
