This is an automated email from the ASF dual-hosted git repository.

kaxilnaik pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/airflow.git


The following commit(s) were added to refs/heads/master by this push:
     new cc9c4c6  Add reference link for KubernetesPodOperator in kubernetes.rst (#11782)
cc9c4c6 is described below

commit cc9c4c682e4b5fb23949f22745639ed62c8e712d
Author: Kaxil Naik <[email protected]>
AuthorDate: Fri Oct 23 16:36:11 2020 +0100

    Add reference link for KubernetesPodOperator in kubernetes.rst (#11782)
    
    This makes it easy to go to the class definition and find the arguments/params that can be passed to the Operator
---
 docs/howto/operator/kubernetes.rst | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/docs/howto/operator/kubernetes.rst b/docs/howto/operator/kubernetes.rst
index 85043a7..a556919 100644
--- a/docs/howto/operator/kubernetes.rst
+++ b/docs/howto/operator/kubernetes.rst
@@ -40,13 +40,15 @@ you to create and run Pods on a Kubernetes cluster.
 
 How does this operator work?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The ``KubernetesPodOperator`` uses the Kubernetes API to launch a pod in a Kubernetes cluster. By supplying an
+The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` uses the
+Kubernetes API to launch a pod in a Kubernetes cluster. By supplying an
 image URL and a command with optional arguments, the operator uses the Kube Python Client to generate a Kubernetes API
 request that dynamically launches those individual pods.
 Users can specify a kubeconfig file using the ``config_file`` parameter, otherwise the operator will default
 to ``~/.kube/config``.
 
-The ``KubernetesPodOperator`` enables task-level resource configuration and is optimal for custom Python
+The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` enables task-level
+resource configuration and is optimal for custom Python
 dependencies that are not available through the public PyPI repository. It also allows users to supply a template
 YAML file using the ``pod_template_file`` parameter.
 Ultimately, it allows Airflow to act as a job orchestrator - no matter the language those jobs are written in.
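The ``config_file`` fallback described in this hunk can be illustrated with a small Python sketch. Note this is illustrative only: ``resolve_kubeconfig`` is a hypothetical helper, not the operator's actual implementation.

```python
import os

def resolve_kubeconfig(config_file=None):
    """Illustrative only: mirrors the documented fallback to ~/.kube/config."""
    # If the caller supplies a kubeconfig path, use it as-is;
    # otherwise fall back to the documented default location.
    return config_file or os.path.expanduser("~/.kube/config")

print(resolve_kubeconfig("/opt/kube/config"))  # explicit path wins
print(resolve_kubeconfig())                    # falls back to ~/.kube/config
```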
@@ -73,7 +75,8 @@ and type safety. While we have removed almost all Kubernetes convenience classes
 
 Difference between ``KubernetesPodOperator`` and Kubernetes object spec
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The ``KubernetesPodOperator`` can be considered a substitute for a Kubernetes object spec definition that is able
+The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` can be considered
+a substitute for a Kubernetes object spec definition that is able
 to be run in the Airflow scheduler in the DAG context. If using the operator, there is no need to create the
 equivalent YAML/JSON object spec for the Pod you would like to run.
 The YAML file can still be provided with the ``pod_template_file`` or even the Pod Spec constructed in Python via
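As a sketch of the equivalence this hunk describes, a minimal ``pod_template_file`` might contain an object spec like the following (the pod name, container name, and image are illustrative placeholders, not values from this commit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # illustrative name
spec:
  containers:
    - name: base
      image: python:3.8-slim   # illustrative image
      command: ["python", "-c", "print('hello')"]
  restartPolicy: Never
```

The operator builds an equivalent ``V1Pod`` object for you, so this YAML is only needed when you opt into ``pod_template_file`` or ``full_pod_spec``.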
@@ -81,7 +84,8 @@ the ``full_pod_spec`` parameter which requires a Kubernetes ``V1Pod``.
 
 How to use private images (container registry)?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-By default, the ``KubernetesPodOperator`` will look for images hosted publicly on Dockerhub.
+By default, the :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` will
+look for images hosted publicly on Dockerhub.
 To pull images from a private registry (such as ECR, GCR, Quay, or others), you must create a
 Kubernetes Secret that represents the credentials for accessing images from the private registry that is ultimately
 specified in the ``image_pull_secrets`` parameter.
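The Kubernetes Secret this hunk refers to is typically created with ``kubectl``; a hedged command-line sketch (the registry URL, credentials, and the secret name ``my-registry-secret`` are all placeholders):

```shell
# Create a docker-registry secret holding the private registry credentials
# (every value below is a placeholder).
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>
```

The resulting secret name is then referenced through the operator's ``image_pull_secrets`` parameter.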
@@ -104,7 +108,8 @@ Then use it in your pod like so:
 
 How does XCom work?
 ^^^^^^^^^^^^^^^^^^^
-The ``KubernetesPodOperator`` handles XCom values differently than other operators. In order to pass an XCom value
+The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` handles
+XCom values differently than other operators. In order to pass an XCom value
 from your Pod you must specify the ``do_xcom_push`` as ``True``. This will create a sidecar container that runs
 alongside the Pod. The Pod must write the XCom value to the ``/airflow/xcom/return.json`` path.
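What the Pod's main container has to do for XCom can be sketched in Python. The snippet writes to a temporary directory so it runs anywhere; a real Pod would write to ``/airflow/xcom/return.json`` for the sidecar to pick up.

```python
import json
import os
import tempfile

# A real Pod writes to /airflow/xcom/return.json; we use a temp
# directory here so the sketch is runnable outside a cluster.
xcom_dir = tempfile.mkdtemp()
xcom_path = os.path.join(xcom_dir, "return.json")

# The value pushed as an XCom must be JSON-serializable.
with open(xcom_path, "w") as f:
    json.dump({"rows_processed": 42}, f)

# The sidecar container would then read this file back.
with open(xcom_path) as f:
    print(json.load(f))  # {'rows_processed': 42}
```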
 
