This is an automated email from the ASF dual-hosted git repository.
potiuk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/airflow.git
The following commit(s) were added to refs/heads/master by this push:
new 31afc0d Move celery-exclusive feature to CeleryExecutor page (#10242)
31afc0d is described below
commit 31afc0d8a86f3be26227f7a22b20e8278e41b0c2
Author: Kamil Breguła <[email protected]>
AuthorDate: Sat Aug 8 18:49:14 2020 +0200
Move celery-exclusive feature to CeleryExecutor page (#10242)
---
docs/concepts.rst | 21 ---------------------
docs/executor/celery.rst | 21 +++++++++++++++++++++
2 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/docs/concepts.rst b/docs/concepts.rst
index b75cb41..a73d7c3 100644
--- a/docs/concepts.rst
+++ b/docs/concepts.rst
@@ -686,27 +686,6 @@ need to supply an explicit connection ID. For example, the default
See :doc:`howto/connection/index` for details on creating and managing
connections.
-Queues
-======
-
-When using the CeleryExecutor, the Celery queues that tasks are sent to
-can be specified. ``queue`` is an attribute of BaseOperator, so any
-task can be assigned to any queue. The default queue for the environment
-is defined in the ``airflow.cfg``'s ``celery -> default_queue``. This defines
-the queue that tasks get assigned to when not specified, as well as which
-queue Airflow workers listen to when started.
-
-Workers can listen to one or multiple queues of tasks. When a worker is
-started (using the command ``airflow celery worker``), a set of comma-delimited
-queue names can be specified (e.g. ``airflow celery worker -q spark``). This worker
-will then only pick up tasks wired to the specified queue(s).
-
-This can be useful if you need specialized workers, either from a
-resource perspective (for say very lightweight tasks where one worker
-could take thousands of tasks without a problem), or from an environment
-perspective (you want a worker running from within the Spark cluster
-itself because it needs a very specific environment and security rights).
-
.. _concepts:xcom:
XComs
diff --git a/docs/executor/celery.rst b/docs/executor/celery.rst
index cc2a2af..66c1e77 100644
--- a/docs/executor/celery.rst
+++ b/docs/executor/celery.rst
@@ -157,3 +157,24 @@ The components communicate with each other in many places
* [9] **Scheduler** --> **Database** - Store a DAG run and related tasks
* [10] **Scheduler** --> **Celery's result backend** - Gets information about the status of completed tasks
* [11] **Scheduler** --> **Celery's broker** - Put the commands to be executed
+
+Queues
+======
+
+When using the CeleryExecutor, the Celery queues that tasks are sent to
+can be specified. ``queue`` is an attribute of BaseOperator, so any
+task can be assigned to any queue. The default queue for the environment
+is defined in the ``airflow.cfg``'s ``celery -> default_queue``. This defines
+the queue that tasks get assigned to when not specified, as well as which
+queue Airflow workers listen to when started.
+
+Workers can listen to one or multiple queues of tasks. When a worker is
+started (using the command ``airflow celery worker``), a set of comma-delimited
+queue names can be specified (e.g. ``airflow celery worker -q spark``). This worker
+will then only pick up tasks wired to the specified queue(s).
+
+This can be useful if you need specialized workers, either from a
+resource perspective (for say very lightweight tasks where one worker
+could take thousands of tasks without a problem), or from an environment
+perspective (you want a worker running from within the Spark cluster
+itself because it needs a very specific environment and security rights).
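The queue-routing behavior described in the moved section can be sketched as a small, self-contained Python model. This is not Airflow's actual implementation, just an illustration of the rule: every task carries a ``queue`` attribute (falling back to the configured default queue), and a worker started with a set of queue names (the equivalent of ``airflow celery worker -q spark``) only picks up tasks wired to those queues. The ``Task`` class and ``tasks_for_worker`` helper are hypothetical names invented for this sketch.

```python
# Hypothetical sketch of Celery queue routing as described above;
# not Airflow's real implementation.
DEFAULT_QUEUE = "default"  # in Airflow this comes from celery -> default_queue


class Task:
    """Stand-in for a BaseOperator carrying a `queue` attribute."""

    def __init__(self, task_id, queue=None):
        self.task_id = task_id
        # Tasks without an explicit queue fall back to the default queue.
        self.queue = queue or DEFAULT_QUEUE


def tasks_for_worker(tasks, worker_queues):
    """Return the tasks a worker listening on `worker_queues` would pick up."""
    return [t for t in tasks if t.queue in worker_queues]


tasks = [
    Task("etl"),                      # goes to the default queue
    Task("train", queue="spark"),     # wired to a dedicated queue
    Task("report"),
]

# Equivalent of starting a worker with `airflow celery worker -q spark`:
spark_worker = tasks_for_worker(tasks, {"spark"})
assert [t.task_id for t in spark_worker] == ["train"]
```

A worker listening only on ``spark`` ignores the default-queue tasks, which is exactly how a specialized worker (say, one running inside the Spark cluster) is kept from picking up unrelated lightweight work.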