This is an automated email from the ASF dual-hosted git repository.
eladkal pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/airflow.git
The following commit(s) were added to refs/heads/main by this push:
new c0e2786dbf Replace the deprecated cncf-kubernetes modules in the doc and tests (#36727)
c0e2786dbf is described below
commit c0e2786dbfb534574268906dbfe32fd1a7edc736
Author: Hussein Awala <[email protected]>
AuthorDate: Thu Jan 11 07:38:24 2024 +0100
Replace the deprecated cncf-kubernetes modules in the doc and tests (#36727)
---
.../connections/kubernetes.rst | 2 +-
docs/apache-airflow-providers-cncf-kubernetes/operators.rst | 12 ++++++------
.../google/cloud/operators/test_kubernetes_engine.py | 4 ++--
.../google/cloud/triggers/test_kubernetes_engine.py | 2 +-
4 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/docs/apache-airflow-providers-cncf-kubernetes/connections/kubernetes.rst b/docs/apache-airflow-providers-cncf-kubernetes/connections/kubernetes.rst
index 1de7629231..cb1caa4b63 100644
--- a/docs/apache-airflow-providers-cncf-kubernetes/connections/kubernetes.rst
+++ b/docs/apache-airflow-providers-cncf-kubernetes/connections/kubernetes.rst
@@ -20,7 +20,7 @@
Kubernetes cluster Connection
=============================
-The Kubernetes cluster Connection type enables connection to a Kubernetes cluster by :class:`~airflow.providers.cncf.kubernetes.operators.spark_kubernetes.SparkKubernetesOperator` tasks and :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` tasks.
+The Kubernetes cluster Connection type enables connection to a Kubernetes cluster by :class:`~airflow.providers.cncf.kubernetes.operators.spark_kubernetes.SparkKubernetesOperator` tasks and :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` tasks.
Authenticating to Kubernetes cluster
diff --git a/docs/apache-airflow-providers-cncf-kubernetes/operators.rst b/docs/apache-airflow-providers-cncf-kubernetes/operators.rst
index 4c1dd9ac86..44685c31ab 100644
--- a/docs/apache-airflow-providers-cncf-kubernetes/operators.rst
+++ b/docs/apache-airflow-providers-cncf-kubernetes/operators.rst
@@ -22,7 +22,7 @@
KubernetesPodOperator
=====================
-The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` allows
+The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` allows
you to create and run Pods on a Kubernetes cluster.
.. note::
@@ -37,14 +37,14 @@ you to create and run Pods on a Kubernetes cluster.
How does this operator work?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` uses the
+The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` uses the
Kubernetes API to launch a pod in a Kubernetes cluster. By supplying an
image URL and a command with optional arguments, the operator uses the Kube
Python Client to generate a Kubernetes API
request that dynamically launches those individual pods.
Users can specify a kubeconfig file using the ``config_file`` parameter,
otherwise the operator will default
to ``~/.kube/config``.
-The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` enables task-level
+The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` enables task-level
resource configuration and is optimal for custom Python
dependencies that are not available through the public PyPI repository. It
also allows users to supply a template
YAML file using the ``pod_template_file`` parameter.
@@ -107,7 +107,7 @@ and type safety. While we have removed almost all Kubernetes convenience classes
Difference between ``KubernetesPodOperator`` and Kubernetes object spec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` can be considered
+The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` can be considered
a substitute for a Kubernetes object spec definition that is able
to be run in the Airflow scheduler in the DAG context. If using the operator,
there is no need to create the
equivalent YAML/JSON object spec for the Pod you would like to run.
@@ -116,7 +116,7 @@ the ``full_pod_spec`` parameter which requires a Kubernetes ``V1Pod``.
How to use private images (container registry)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-By default, the :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` will
+By default, the :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` will
look for images hosted publicly on Dockerhub.
To pull images from a private registry (such as ECR, GCR, Quay, or others),
you must create a
Kubernetes Secret that represents the credentials for accessing images from
the private registry that is ultimately
@@ -147,7 +147,7 @@ Also for this action you can use operator in the deferrable mode:
How does XCom work?
^^^^^^^^^^^^^^^^^^^
-The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` handles
+The :class:`~airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator` handles
XCom values differently than other operators. In order to pass a XCom value
from your Pod you must specify the ``do_xcom_push`` as ``True``. This will
create a sidecar container that runs
alongside the Pod. The Pod must write the XCom value into this location at the
``/airflow/xcom/return.json`` path.
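The XCom hunk above states the contract: with ``do_xcom_push`` set to ``True``, the pod must write its return value as JSON to ``/airflow/xcom/return.json`` for the sidecar container to collect. A minimal sketch of what the container's main process does, substituting a temporary directory for the real pod path so the sketch runs outside Kubernetes:

```python
import json
import os
import tempfile

# In a real pod the sidecar reads the fixed path /airflow/xcom/return.json;
# a temp directory stands in for it here so the sketch is self-contained.
xcom_dir = os.path.join(tempfile.mkdtemp(), "airflow", "xcom")
os.makedirs(xcom_dir)
xcom_path = os.path.join(xcom_dir, "return.json")

# The task's result, serialized as JSON, becomes the XCom value.
with open(xcom_path, "w") as f:
    json.dump({"rows_processed": 42}, f)

# What the sidecar (and ultimately Airflow) would read back.
with open(xcom_path) as f:
    xcom_value = json.load(f)
print(xcom_value)
```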
diff --git a/tests/providers/google/cloud/operators/test_kubernetes_engine.py b/tests/providers/google/cloud/operators/test_kubernetes_engine.py
index 5f91fd9e4e..7805804485 100644
--- a/tests/providers/google/cloud/operators/test_kubernetes_engine.py
+++ b/tests/providers/google/cloud/operators/test_kubernetes_engine.py
@@ -60,10 +60,10 @@ IMAGE = "bash"
GCLOUD_COMMAND = "gcloud container clusters get-credentials {} --zone {} --project {}"
KUBE_ENV_VAR = "KUBECONFIG"
FILE_NAME = "/tmp/mock_name"
-KUB_OP_PATH = "airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator.{}"
+KUB_OP_PATH = "airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator.{}"
GKE_HOOK_MODULE_PATH = "airflow.providers.google.cloud.operators.kubernetes_engine"
GKE_HOOK_PATH = f"{GKE_HOOK_MODULE_PATH}.GKEHook"
-KUB_OPERATOR_EXEC = "airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator.execute"
+KUB_OPERATOR_EXEC = "airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator.execute"
TEMP_FILE = "tempfile.NamedTemporaryFile"
GKE_OP_PATH = "airflow.providers.google.cloud.operators.kubernetes_engine.GKEStartPodOperator"
CLUSTER_URL = "https://test-host"
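The test constants above are dotted-path templates for ``unittest.mock``: a method name is formatted into the module path to build a patch target. A small illustration using the updated path from this diff (building the patcher does not import Airflow, since ``mock.patch`` resolves its target lazily at ``start()``):

```python
from unittest import mock

# Template taken from the diff above; "{}" is filled with a method name.
KUB_OP_PATH = "airflow.providers.cncf.kubernetes.operators.pod.KubernetesPodOperator.{}"

execute_target = KUB_OP_PATH.format("execute")
print(execute_target)

# mock.patch resolves the target lazily, so constructing the patcher is
# safe even on a machine without the cncf-kubernetes provider installed.
patcher = mock.patch(execute_target)
```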
diff --git a/tests/providers/google/cloud/triggers/test_kubernetes_engine.py b/tests/providers/google/cloud/triggers/test_kubernetes_engine.py
index b252ea4e30..65bc45c415 100644
--- a/tests/providers/google/cloud/triggers/test_kubernetes_engine.py
+++ b/tests/providers/google/cloud/triggers/test_kubernetes_engine.py
@@ -32,7 +32,7 @@ from airflow.providers.google.cloud.triggers.kubernetes_engine import GKEOperati
from airflow.triggers.base import TriggerEvent
TRIGGER_GKE_PATH = "airflow.providers.google.cloud.triggers.kubernetes_engine.GKEStartPodTrigger"
-TRIGGER_KUB_PATH = "airflow.providers.cncf.kubernetes.triggers.kubernetes_pod.KubernetesPodTrigger"
+TRIGGER_KUB_PATH = "airflow.providers.cncf.kubernetes.triggers.pod.KubernetesPodTrigger"
HOOK_PATH = "airflow.providers.google.cloud.hooks.kubernetes_engine.GKEPodAsyncHook"
POD_NAME = "test-pod-name"
NAMESPACE = "default"
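Taken together, the renames in this commit follow one pattern: the ``kubernetes_pod`` modules under ``operators`` and ``triggers`` moved to ``pod``. A hypothetical helper (not part of Airflow; the function name is illustrative) that rewrites a deprecated dotted path to its new location:

```python
# Deprecated cncf-kubernetes module paths and their replacements,
# as applied throughout this commit.
_RENAMES = {
    "airflow.providers.cncf.kubernetes.operators.kubernetes_pod":
        "airflow.providers.cncf.kubernetes.operators.pod",
    "airflow.providers.cncf.kubernetes.triggers.kubernetes_pod":
        "airflow.providers.cncf.kubernetes.triggers.pod",
}


def modernize_path(dotted: str) -> str:
    """Rewrite a deprecated module path, preserving any trailing attribute."""
    for old, new in _RENAMES.items():
        if dotted == old or dotted.startswith(old + "."):
            return new + dotted[len(old):]
    return dotted  # not a deprecated path; leave unchanged


print(modernize_path(
    "airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator"
))
```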