CodingJonas opened a new issue #11122:
URL: https://github.com/apache/airflow/issues/11122


   **Apache Airflow version**: 1.10.11
   
   **Environment**: Service as part of a Docker Swarm
   
   - **Cloud provider or hardware configuration**:
   - **OS** (e.g. from /etc/os-release): Debian 10
   - **Kernel** (e.g. `uname -a`): Linux 5ed6acafbcee 5.3.0-1034-aws #36-Ubuntu 
SMP Tue Aug 18 08:58:43 UTC 2020 x86_64 GNU/Linux
   - **Install tools**: pip
   
   **What happened**:
   
   Setting `min_file_process_interval` to a high value delays the execution of 
DAGs by up to the time specified for `min_file_process_interval`.
   
   **What you expected to happen**:
   We use Airflow as a deployed Docker container. We modify our DAGs locally 
and deploy a new version of the image every now and then. Thus the DAG 
definitions the deployed Airflow service uses will never change while the 
service runs, and to save processing resources, we expected that setting 
`min_file_process_interval` to a high value would make Airflow check for 
updated DAG definitions only very rarely.
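   For reference, this is roughly how we configure it (the value shown is illustrative, not a recommendation):
   
   ```ini
   # airflow.cfg -- illustrative example
   [scheduler]
   # How often (in seconds) the scheduler re-parses each DAG file.
   # Raised to roughly once a day, since our DAGs only change on redeploy.
   min_file_process_interval = 86400
   ```
   
   In a containerized deployment the same option can also be set through the `AIRFLOW__SCHEDULER__MIN_FILE_PROCESS_INTERVAL` environment variable instead of editing `airflow.cfg`.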
   
   From the 
[documentation](https://airflow.apache.org/docs/stable/configurations-ref.html#min-file-process-interval):
   
   >after how much time (seconds) a new DAGs should be picked up from the 
filesystem
   
   This sounds to me like the delay between checks for updated DAG 
definitions, so I don't understand why this setting delays DAG execution.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
