TobKed commented on a change in pull request #13461:
URL: https://github.com/apache/airflow/pull/13461#discussion_r555715547



##########
File path: docs/apache-airflow-providers-google/operators/cloud/dataflow.rst
##########
@@ -0,0 +1,292 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Google Cloud Dataflow Operators
+===============================
+
+`Dataflow <https://cloud.google.com/dataflow/>`__ is a managed service for
+executing a wide variety of data processing patterns. These pipelines are created
+using the Apache Beam programming model, which allows for both batch and streaming processing.
+
+.. contents::
+  :depth: 1
+  :local:
+
+Prerequisite Tasks
+^^^^^^^^^^^^^^^^^^
+
+.. include:: /operators/_partials/prerequisite_tasks.rst
+
+Ways to run a data pipeline
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are several ways to run a Dataflow pipeline, depending on your environment and source files:
+
+- **Non-templated pipeline**: You can run the pipeline as a local process on the worker
+  if you have a ``*.jar`` file for Java or a ``*.py`` file for Python. This also means that the necessary system
+  dependencies must be installed on the worker. For Java, the worker must have the JRE installed;
+  for Python, a Python interpreter. The runtime versions must be compatible with the pipeline versions.
+  This is the fastest way to start a pipeline, but its reliance on system dependencies
+  frequently causes problems (a sketch of both styles follows this excerpt). See:
+  :ref:`howto/operator:DataflowCreateJavaJobOperator`,
+  :ref:`howto/operator:DataflowCreatePythonJobOperator`
+- **Templated pipeline**: The programmer can make the pipeline independent of the environment by preparing
+  a template that will then be run on a machine managed by Google. This way, changes to the environment
+  won't affect your pipeline. There are two types of templates:
+
+  - **Classic templates**. Developers run the pipeline and create a template. The Apache Beam SDK stages
+    files in Cloud Storage, creates a template file (similar to a job request),
+    and saves the template file in Cloud Storage. See: :ref:`howto/operator:DataflowTemplatedJobStartOperator`
+  - **Flex Templates**. Developers package the pipeline into a Docker image and then use the ``gcloud``
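
As promised above, a minimal, hypothetical sketch of both styles (not part of the PR diff; the GCS paths, job names, and region are placeholders):

```python
# Hypothetical DAG contrasting the two styles; all GCS paths, job names,
# and the region below are placeholders.
from airflow import DAG
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowCreatePythonJobOperator,
    DataflowTemplatedJobStartOperator,
)
from airflow.utils.dates import days_ago

with DAG("example_dataflow", start_date=days_ago(1), schedule_interval=None) as dag:
    # Non-templated: the *.py pipeline runs as a local process on the
    # worker, so the worker needs a compatible interpreter and Apache Beam.
    start_python = DataflowCreatePythonJobOperator(
        task_id="start_python_pipeline",
        py_file="gs://my-bucket/pipeline.py",
        job_name="example-python-job",
        py_interpreter="python3",
        py_requirements=["apache-beam[gcp]"],
        location="europe-west3",
    )

    # Templated (classic): Google runs a template already staged in
    # Cloud Storage, so no pipeline dependencies are needed on the worker.
    start_template = DataflowTemplatedJobStartOperator(
        task_id="start_template_job",
        template="gs://dataflow-templates/latest/Word_Count",
        parameters={
            "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
            "output": "gs://my-bucket/wordcount/output",
        },
        location="europe-west3",
    )
```

The first task fails if the worker lacks a compatible Python and Beam setup; the second is insulated from the worker environment.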

Review comment:
       @mik-laj is right. Only `DataflowStartSqlJobOperator` requires `gcloud`.

   I added a warning about the required `gcloud` SDK to the `DataflowStartSqlJobOperator` section and to the operator itself.
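
For reference, a hedged sketch of the operator in question (the query, project, dataset, and table names are placeholders). The operator shells out to `gcloud dataflow sql query`, which is why the worker needs the `gcloud` SDK installed:

```python
# Hypothetical usage of DataflowStartSqlJobOperator; project, dataset,
# and table names are placeholders.
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowStartSqlJobOperator,
)

start_sql = DataflowStartSqlJobOperator(
    task_id="start_sql_query",
    job_name="example-sql-job",
    query="""
        SELECT sales_region, COUNT(*) AS num_sales
        FROM bigquery.table.`my-project`.my_dataset.my_table
        GROUP BY sales_region;
    """,
    options={
        # Forwarded to `gcloud dataflow sql query` as --<key>=<value> flags,
        # which is where the hard dependency on the gcloud SDK comes from.
        "bigquery-project": "my-project",
        "bigquery-dataset": "my_dataset",
        "bigquery-table": "sales_by_region",
    },
    location="europe-west1",
)
```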



