brki opened a new issue #18674: URL: https://github.com/apache/airflow/issues/18674
### Apache Airflow version

2.1.3

### Operating System

Redhat 6.10

### Versions of Apache Airflow Providers

_No response_

### Deployment

Virtualenv installation

### Deployment details

_No response_

### What happened

When deploying an updated version of a DAG, I noticed higher CPU usage, and a new process was starting every few seconds:

```
airflow 31449 76.0 0.1 935636 79236 ? Rl 17:04 0:00 airflow scheduler - DagFileProcessor /path/to/the/dag_file.py
```

Note: there are many DAGs in this directory, but only this file was being continually scanned by the DagFileProcessor. The other files were scanned every 5 minutes or so, which agrees with the Airflow configuration.

Eventually I guessed that it might be related to a difference between UTC and the local system time (Central Europe, e.g. Berlin). Running

```
touch -d "1 day ago" /path/to/the/dag_file.py
```

stopped the continuous scanning of the file. But after running

```
touch /path/to/the/dag_file.py
```

the continuous scanning started again.

### What you expected to happen

When a new or updated file appears, I'd expect Airflow to notice ("hey, there's a new or updated file, I'll scan it") and scan it once, not again and again.

### How to reproduce

Run Airflow on a system with a non-UTC time zone, specifically 'Europe/Berlin'. Deploy a new or updated DAG. Notice that a DagFileProcessor process runs very often (a new process every second or so).

### Anything else

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
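The timezone interaction suspected above can be sketched as follows. This is an illustrative reproduction, not Airflow's actual code: it assumes the scheduler tracks the last-parse time in UTC while a buggy comparison reads the file's mtime as a naive *local* datetime, so in a UTC+1/+2 zone like Europe/Berlin a freshly touched file always appears newer than its last parse.

```python
import datetime
import os
import tempfile
import time

# Reproduce the reporter's timezone (Unix-only: tzset is not on Windows).
os.environ["TZ"] = "Europe/Berlin"
time.tzset()

with tempfile.NamedTemporaryFile(suffix=".py") as f:
    os.utime(f.name)  # simulate `touch` on the DAG file

    # Naive local wall-clock mtime -- what a buggy comparison would use.
    mtime_local = datetime.datetime.fromtimestamp(os.path.getmtime(f.name))
    # The scheduler's notion of "now" as naive UTC.
    last_parsed = datetime.datetime.utcnow()

    # Berlin local time is 1-2 hours ahead of UTC, so the mtime looks
    # "in the future" relative to the last parse: the file is re-queued
    # on every scan.
    needs_reparse = mtime_local > last_parsed
    print(needs_reparse)  # True on a Europe/Berlin clock
```

This also matches the workaround in the report: `touch -d "1 day ago"` moves the mtime back further than the 1–2 hour UTC offset, so the comparison goes quiet, while a plain `touch` restarts the loop.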
