dimberman commented on a change in pull request #10393: URL: https://github.com/apache/airflow/pull/10393#discussion_r480264064
########## File path: UPDATING.md ########## @@ -153,6 +152,175 @@ The Old and New provider configuration keys that have changed are as follows For more information, visit https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-oauth
+### Changes to the KubernetesExecutor
+
+#### The KubernetesExecutor Will No Longer Read from the airflow.cfg for Base Pod Configurations
+
+In Airflow 2.0, the KubernetesExecutor will require a base pod template written in YAML. This file can exist
+anywhere on the host machine and will be linked using the `pod_template_file` configuration in the airflow.cfg.
+
+The airflow.cfg will still accept values for the `worker_container_repository`, the `worker_container_tag`, and
+the default namespace.
+
+#### The executor_config Will Now Expect a `kubernetes.client.models.V1Pod` Class When Launching Tasks
+
+In Airflow 1.10, users could modify task pods at runtime by passing a dictionary to the `executor_config` variable.
+Users will now have full access to the Kubernetes API via the `kubernetes.client.models.V1Pod` class.
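[Editor's sketch, not part of the PR diff: a minimal example of the base pod template described above. Every value here is illustrative — the metadata name, image, and file path are assumptions, not Airflow defaults.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: base-worker        # placeholder; the executor overrides pod names per task
spec:
  containers:
    - name: base
      image: apache/airflow:latest   # normally worker_container_repository:worker_container_tag
```

Such a file would then be referenced from airflow.cfg by setting `pod_template_file = /path/to/pod_template.yaml` (path illustrative) under the `[kubernetes]` section.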
+
+While in the deprecated version a user would mount a volume using the following dictionary:
+
+```python
+second_task = PythonOperator(
+    task_id="four_task",
+    python_callable=test_volume_mount,
+    executor_config={
+        "KubernetesExecutor": {
+            "volumes": [
+                {
+                    "name": "example-kubernetes-test-volume",
+                    "hostPath": {"path": "/tmp/"},
+                },
+            ],
+            "volume_mounts": [
+                {
+                    "mountPath": "/foo/",
+                    "name": "example-kubernetes-test-volume",
+                },
+            ],
+        }
+    },
+)
+```
+
+In the new model a user can accomplish the same thing using the following code:
+
+```python
+from kubernetes.client import models as k8s
+
+second_task = PythonOperator(
+    task_id="four_task",
+    python_callable=test_volume_mount,
+    executor_config={
+        "KubernetesExecutor": k8s.V1Pod(
+            spec=k8s.V1PodSpec(
+                containers=[
+                    k8s.V1Container(
+                        name="base",
+                        volume_mounts=[
+                            k8s.V1VolumeMount(
+                                mount_path="/foo/",
+                                name="example-kubernetes-test-volume",
+                            )
+                        ],
+                    )
+                ],
+                volumes=[
+                    k8s.V1Volume(
+                        name="example-kubernetes-test-volume",
+                        host_path=k8s.V1HostPathVolumeSource(path="/tmp/"),
+                    )
+                ],
+            )
+        )
+    },
+)
+```
+
+In Airflow 2.0, the traditional `executor_config` will continue to work with a deprecation warning,
+but it will be removed in a future version.
+
+### Changes to the KubernetesPodOperator
+
+Much like the KubernetesExecutor, the KubernetesPodOperator will no longer take Airflow custom classes and will
+instead expect either a pod template YAML file or `kubernetes.client.models` objects.
+
+The one notable exception is that we will continue to support the `airflow.kubernetes.secret.Secret` class.
+
+Whereas previously a user would import each individual class to build the pod, like so:
+
+```python
+from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
+from airflow.kubernetes.pod import Port
+from airflow.kubernetes.volume import Volume
+from airflow.kubernetes.secret import Secret
+from airflow.kubernetes.volume_mount import VolumeMount
+
+
+volume_config = {
+    'persistentVolumeClaim': {
+        'claimName': 'test-volume'
+    }
+}
+volume = Volume(name='test-volume', configs=volume_config)
+volume_mount = VolumeMount('test-volume',
+                           mount_path='/root/mount_file',
+                           sub_path=None,
+                           read_only=True)
+
+port = Port('http', 80)
+secret_file = Secret('volume', '/etc/sql_conn', 'airflow-secrets', 'sql_alchemy_conn')
+secret_env = Secret('env', 'SQL_CONN', 'airflow-secrets', 'sql_alchemy_conn')
+
+# affinity, tolerations, configmaps, and init_container are assumed to be defined earlier
+k = KubernetesPodOperator(
+    namespace='default',
+    image="ubuntu:16.04",
+    cmds=["bash", "-cx"],
+    arguments=["echo", "10"],
+    labels={"foo": "bar"},
+    secrets=[secret_file, secret_env],
+    ports=[port],
+    volumes=[volume],
+    volume_mounts=[volume_mount],
+    name="airflow-test-pod",
+    task_id="task",
+    affinity=affinity,
+    is_delete_operator_pod=True,
+    hostnetwork=False,
+    tolerations=tolerations,
+    configmaps=configmaps,
+    init_containers=[init_container],
+    priority_class_name="medium",
+)
+```
+
+Now the user can use `kubernetes.client.models` as a single point of entry for creating all k8s objects.
+
+```python
+from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
+from kubernetes.client import models as k8s
+from airflow.kubernetes.secret import Secret
+
+
+# Note: the deprecated `configmaps` argument is gone; config maps are now
+# attached to the pod via `env_from` or the pod template.
+configmaps = ['test-configmap-1', 'test-configmap-2']
+
+volume = k8s.V1Volume(
+    name='test-volume',
+    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'),
+)
+
+port = k8s.V1ContainerPort(name='http', container_port=80)
+secret_file = Secret('volume', '/etc/sql_conn', 'airflow-secrets', 'sql_alchemy_conn')
+secret_env = Secret('env', 'SQL_CONN', 'airflow-secrets', 'sql_alchemy_conn')
+secret_all_keys = Secret('env', None, 'airflow-secrets-2')
+volume_mount = k8s.V1VolumeMount(
+    name='test-volume', mount_path='/root/mount_file', sub_path=None, read_only=True
+)
+
+k = KubernetesPodOperator(
+    namespace='default',
+    image="ubuntu:16.04",
+    cmds=["bash", "-cx"],
+    arguments=["echo", "10"],
+    labels={"foo": "bar"},
+    secrets=[secret_file, secret_env],
+    ports=[port],
+    volumes=[volume],
+    volume_mounts=[volume_mount],
+    name="airflow-test-pod",
+    task_id="task",
+    is_delete_operator_pod=True,
+    hostnetwork=False,
+)
+```
+
+We decided to keep the Secret class because users seem to appreciate how it simplifies the complexity of mounting
+Kubernetes secrets into workers.

Review comment: Added!
