This is an automated email from the ASF dual-hosted git repository.

potiuk pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow.git


The following commit(s) were added to refs/heads/main by this push:
     new afcb63e806 Fix triggerer HA doc (#32454)
afcb63e806 is described below

commit afcb63e806755f07cb1d435ef050bfaf96f99c4a
Author: Hussein Awala <[email protected]>
AuthorDate: Sun Jul 9 15:49:27 2023 +0200

    Fix triggerer HA doc (#32454)
    
    Signed-off-by: Hussein Awala <[email protected]>
---
 docs/apache-airflow/authoring-and-scheduling/deferring.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/apache-airflow/authoring-and-scheduling/deferring.rst b/docs/apache-airflow/authoring-and-scheduling/deferring.rst
index 5d4e6e2634..6b5f124e8f 100644
--- a/docs/apache-airflow/authoring-and-scheduling/deferring.rst
+++ b/docs/apache-airflow/authoring-and-scheduling/deferring.rst
@@ -203,7 +203,7 @@ Triggers are designed from the ground-up to be highly-available; if you want to
 
 Depending on how much work the triggers are doing, you can fit from hundreds to tens of thousands of triggers on a single ``triggerer`` host. By default, every ``triggerer`` will have a capacity of 1000 triggers it will try to run at once; you can change this with the ``--capacity`` argument. If you have more triggers trying to run than you have capacity across all of your ``triggerer`` processes, some triggers will be delayed from running until others have completed.
 
-Airflow tries to only run triggers in one place at once, and maintains a heartbeat to all ``triggerers`` that are currently running. If a ``triggerer`` dies, or becomes partitioned from the network where Airflow's database is running, Airflow will automatically re-schedule triggers that were on that host to run elsewhere (after waiting 30 seconds for the machine to re-appear).
+Airflow tries to only run triggers in one place at once, and maintains a heartbeat to all ``triggerers`` that are currently running. If a ``triggerer`` dies, or becomes partitioned from the network where Airflow's database is running, Airflow will automatically re-schedule triggers that were on that host to run elsewhere (after waiting (2.1 * ``triggerer.job_heartbeat_sec``) seconds for the machine to re-appear).
 
 This means it's possible, but unlikely, for triggers to run in multiple places at once; this is designed into the Trigger contract, however, and entirely expected. Airflow will de-duplicate events fired when a trigger is running in multiple places simultaneously, so this process should be transparent to your Operators.
 
