This is an automated email from the ASF dual-hosted git repository.

jedcunningham pushed a commit to branch v2-2-test
in repository https://gitbox.apache.org/repos/asf/airflow.git

commit f0774beb35ca24f3c9f9db35de6f5ef1053ca62f
Author: Daniel Standish <[email protected]>
AuthorDate: Fri Nov 12 16:24:41 2021 -0800

    Minor touch up for async docs (#19539)
    
    (cherry picked from commit 1c2dfde8ef1e6028f10d36f000c59c087b129745)
---
 airflow/triggers/base.py                   |  4 ++--
 docs/apache-airflow/concepts/deferring.rst | 13 ++++++++-----
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/airflow/triggers/base.py b/airflow/triggers/base.py
index 942c2f4..e168d35 100644
--- a/airflow/triggers/base.py
+++ b/airflow/triggers/base.py
@@ -58,7 +58,7 @@ class BaseTrigger(abc.ABC, LoggingMixin):
 
         If it yields, it is likely that it will be resumed very quickly,
         but it may not be (e.g. if the workload is being moved to another
-        trigger process, or a multi-event trigger was being used for a
+        triggerer process, or a multi-event trigger was being used for a
         single-event task defer).
 
         In either case, Trigger classes should assume they will be persisted,
@@ -69,7 +69,7 @@ class BaseTrigger(abc.ABC, LoggingMixin):
     def cleanup(self) -> None:
         """
         Called when the trigger is no longer needed and it's being removed
-        from the active trigger process.
+        from the active triggerer process.
         """
 
     def __repr__(self) -> str:
diff --git a/docs/apache-airflow/concepts/deferring.rst b/docs/apache-airflow/concepts/deferring.rst
index af6870c..c7edced 100644
--- a/docs/apache-airflow/concepts/deferring.rst
+++ b/docs/apache-airflow/concepts/deferring.rst
@@ -55,7 +55,7 @@ Writing Deferrable Operators
 
 Writing a deferrable operator takes a bit more work. There are some main points to consider:
 
-* Your Operator must defer itself based on a Trigger. If there is a Trigger in core Airflow you can use, great; otherwise, you will have to write one.
+* Your Operator must defer itself with a Trigger. If there is a Trigger in core Airflow you can use, great; otherwise, you will have to write one.
 * Your Operator will be stopped and removed from its worker while deferred, and no state will persist automatically. You can persist state by asking Airflow to resume you at a certain method or pass certain kwargs, but that's it.
 * You can defer multiple times, and you can defer before/after your Operator does significant work, or only defer if certain conditions are met (e.g. a system does not have an immediate answer). Deferral is entirely under your control.
 * Any Operator can defer; no special marking on its class is needed, and it's not limited to Sensors.
@@ -67,13 +67,13 @@ Triggering Deferral
 If you want to trigger deferral, at any place in your Operator you can call ``self.defer(trigger, method_name, kwargs, timeout)``, which will raise a special exception that Airflow will catch. The arguments are:
 
 * ``trigger``: An instance of a Trigger that you wish to defer on. It will be serialized into the database.
-* ``method_name``: The method name on your Operator you want Airflow to call when it resumes, other than ``execute``.
+* ``method_name``: The method name on your Operator you want Airflow to call when it resumes.
 * ``kwargs``: Additional keyword arguments to pass to the method when it is called. Optional, defaults to ``{}``.
 * ``timeout``: A timedelta that specifies a timeout after which this deferral will fail, and fail the task instance. Optional, defaults to ``None``, meaning no timeout.
 
 When you opt to defer, your Operator will *stop executing at that point and be removed from its current worker*. No state - such as local variables, or attributes set on ``self`` - will persist, and when your Operator is resumed it will be a *brand new instance* of it. The only way you can pass state from the old instance of the Operator to the new one is via ``method_name`` and ``kwargs``.
 
-When your Operator is resumed, you will find an ``event`` item added to the kwargs passed to it, which contains the payload from the trigger event that resumed your Operator. Depending on the trigger, this may be useful to your operator (e.g. it's a status code or URL to fetch results), or it may not be important (it's just a datetime). Your ``method_name`` method, however, *must* accept ``event`` as a keyword argument.
+When your Operator is resumed, an ``event`` item will be added to the kwargs passed to the ``method_name`` method. The ``event`` object contains the payload from the trigger event that resumed your Operator. Depending on the trigger, this may be useful to your operator (e.g. it's a status code or URL to fetch results), or it may not be important (it's just a datetime). Your ``method_name`` method, however, *must* accept ``event`` as a keyword argument.
 
 If your Operator returns from either its first ``execute()`` method when it's new, or a subsequent method specified by ``method_name``, it will be considered complete and will finish executing.
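The defer/resume cycle described in the hunk above can be sketched with the standard library alone. This is a hedged illustration, not Airflow's real machinery: the ``TaskDeferred`` exception, ``MyOperator``, and ``run_task`` runner below are simplified stand-ins (Airflow's own ``TaskDeferred`` lives in ``airflow.exceptions`` and the scheduler, not a local function, handles persistence and resumption):

```python
# Simplified stand-in for the defer/resume control flow. NOT Airflow's
# implementation; all names here are illustrative.

class TaskDeferred(Exception):
    """Raised by an operator to ask the runner to suspend it."""
    def __init__(self, trigger, method_name, kwargs=None, timeout=None):
        self.trigger = trigger
        self.method_name = method_name
        self.kwargs = kwargs or {}
        self.timeout = timeout

class MyOperator:
    def execute(self, context):
        # No local state survives deferral; anything the resumed instance
        # needs must travel through kwargs.
        raise TaskDeferred(
            trigger="<serialized trigger goes here>",
            method_name="execute_complete",
            kwargs={"job_id": 123},
        )

    def execute_complete(self, context, event=None, job_id=None):
        # `event` carries the payload from the trigger that resumed us,
        # and it *must* be accepted as a keyword argument.
        return f"job {job_id} finished with {event}"

def run_task(operator_cls, context):
    op = operator_cls()
    try:
        return op.execute(context)
    except TaskDeferred as deferral:
        # The real scheduler persists the trigger and later builds a
        # brand new operator instance; a fresh instance simulates that.
        resumed = operator_cls()
        method = getattr(resumed, deferral.method_name)
        return method(context, event="success", **deferral.kwargs)

print(run_task(MyOperator, context={}))
```

Note that ``run_task`` deliberately constructs a second instance before resuming, mirroring the "brand new instance" behaviour the docs describe.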
 
@@ -113,6 +113,11 @@ There's also some design constraints to be aware of:
 * When events are emitted, and if your trigger is designed to emit more than one event, they *must* contain a payload that can be used to deduplicate events if the trigger is being run in multiple places. If you only fire one event, and don't want to pass information in the payload back to the Operator that deferred, you can just set the payload to ``None``.
 * A trigger may be suddenly removed from one process and started on a new one (if partitions are being changed, or a deployment is happening). You may provide an optional ``cleanup`` method that gets called when this happens.
 
+.. note::
+
+    Right now, Triggers are only used up to their first event, as they are only used for resuming deferred tasks (which happens on the first event fired). However, we plan to allow DAGs to be launched from triggers in future, which is where multi-event triggers will be more useful.
+
+
 Here's the structure of a basic Trigger::
 
 
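The literal block that follows "Here's the structure of a basic Trigger::" is elided from this hunk's context. As a reading aid only, here is a stdlib-only sketch of the shape the surrounding text describes (a real trigger subclasses ``airflow.triggers.base.BaseTrigger`` and yields ``TriggerEvent`` objects; ``DateTimeTriggerSketch`` and ``first_event`` below are hypothetical names):

```python
import asyncio
from datetime import datetime, timedelta, timezone

class DateTimeTriggerSketch:
    """Stdlib-only sketch of a trigger's shape; NOT Airflow's BaseTrigger."""

    def __init__(self, moment):
        self.moment = moment

    def serialize(self):
        # Triggers must be recreatable from (classpath, kwargs) so they can
        # be persisted to the database and moved between triggerer processes.
        return ("example.DateTimeTriggerSketch", {"moment": self.moment})

    async def run(self):
        # Must be fully asynchronous: await, never block the event loop.
        while datetime.now(timezone.utc) < self.moment:
            await asyncio.sleep(0.01)
        yield {"moment": self.moment.isoformat()}  # stand-in for TriggerEvent

    def cleanup(self):
        # Optional hook: called when the trigger is removed from the active
        # triggerer process (e.g. repartitioning or a deployment).
        pass

async def first_event(trigger):
    # Deferred tasks currently resume on the first event fired.
    async for event in trigger.run():
        return event

trigger = DateTimeTriggerSketch(datetime.now(timezone.utc) + timedelta(seconds=0.05))
event = asyncio.run(first_event(trigger))
print("moment" in event)
```

The ``serialize``/``run``/``cleanup`` trio matches the contract the surrounding bullets lay out: serializable state, an async generator of events, and an optional teardown hook.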
@@ -140,8 +145,6 @@ Triggers can be as complex or as simple as you like provided you keep inside thi
 
 If you are new to writing asynchronous Python, you should be very careful writing your ``run()`` method; Python's async model means that any code that does not correctly ``await`` when it does a blocking operation will block the *entire process*. Airflow will attempt to detect this and warn you in the triggerer logs when it happens, but we strongly suggest you set the variable ``PYTHONASYNCIODEBUG=1`` when you are writing your Trigger to enable extra checks from Python to make sure you'r [...]
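The blocking pitfall warned about above can be reproduced directly with asyncio's debug mode, which is what ``PYTHONASYNCIODEBUG=1`` switches on; the snippet below enables the same checks programmatically via ``asyncio.run(..., debug=True)``. The helper ``slow_steps`` and both coroutines are illustrative only:

```python
import asyncio
import logging
import time

# time.sleep() blocks the whole event loop (and so every other trigger in
# the triggerer); asyncio.sleep() yields control back. asyncio's debug mode
# logs a warning for any step slower than loop.slow_callback_duration
# (0.1 seconds by default).

async def blocking_run():
    time.sleep(0.2)           # WRONG in a trigger: blocks the entire process

async def polite_run():
    await asyncio.sleep(0.2)  # RIGHT: suspends without blocking the loop

def slow_steps(coro):
    """Run coro in debug mode; return asyncio's slow-callback warnings."""
    records = []
    handler = logging.Handler()
    handler.emit = lambda record: records.append(record.getMessage())
    logging.getLogger("asyncio").addHandler(handler)
    try:
        asyncio.run(coro, debug=True)
    finally:
        logging.getLogger("asyncio").removeHandler(handler)
    return [msg for msg in records if "took" in msg]

print(len(slow_steps(blocking_run())))  # the blocking version is flagged
print(len(slow_steps(polite_run())))    # the awaiting version is not
```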
 
-Right now, Triggers are only used up to their first event, as they are only used for resuming deferred tasks (which happens on the first event fired). However, we plan to allow DAGs to be launched from triggers in future, which is where multi-event triggers will be more useful.
-
 
 High Availability
 -----------------
