naturalett commented on a change in pull request #11: URL: https://github.com/apache/incubator-liminal/pull/11#discussion_r594713637
########## File path: docs/source/How_to_install_liminal_in_airflow_on_kubernetes.md ##########
@@ -0,0 +1,122 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Install Liminal in Airflow
+* [Prerequisites](#prerequisites)
+  * [Supported Distributions](#supported-distributions)
+* [Get Started](#get-started)
+  * [Scripted Installation](#scripted-installation)
+  * [Manual Installation](#manual-installation)
+* [References and other resources](#references-and-other-resources)
+
+## Prerequisites
+Before you begin, ensure you have met the following requirements:
+
+* You have a Kubernetes cluster running, with [Helm][homebrew-helm] (and Tiller, if using Helm v2.x) installed
+* You have the [kubectl command line tool (kubectl CLI)][homebrew-kubectl] installed
+* You have the current [context][cluster-access-kubeconfig] set in your kubeconfig file
+* You have Airflow running on Kubernetes with AWS EFS
+* You have already created [custom DAGs][custom-dag] following the liminal [Getting Started documentation][liminalGetStarted-doc]
+* The [example repository][liminal-getting-started-project] is your workspace
+
+### Supported Distributions
+
+|Distribution | Versions |
+|-|-|
+|[Airflow][airflowImage] | apache/airflow:1.10.12-python3.6 |
+|[Airflow Helm Chart][airflowChart] | 7.14.3 |
+
+#### You may need to add the repo
+
+```sh
+helm repo add airflow-stable https://airflow-helm.github.io/charts
+helm repo update
+```
+
+## Get Started
+
+### Scripted Installation
+
+Use [`install_liminal_in_airflow_on_kubernetes.sh`][liminal-installation-script] to install liminal in Airflow:
+* Move to the [example repository project][liminal-getting-started-project]
+* Copy the [script][liminal-installation-script] into the [example repository project][liminal-getting-started-project] and run it
+
+### Manual Installation
+
+Follow these steps to install liminal in Airflow on Kubernetes:
+* [Find the Airflow pod names](#find-airflow-pod-names)
+* [Install liminal in the Airflow pods](#install-liminal-in-airflow-pods)
+* [Setting up liminal](#setting-up-liminal)
+
+#### Find Airflow pod names:
+
+```sh
+echo -n "Please enter a namespace: "
+read -r namespace
+components=("web" "scheduler" "worker")
+podNames=""
+for component in "${components[@]}"
+do
+  podNames+="$(kubectl get pod -n "${namespace}" -l "app=airflow,component=${component}" --no-headers -o custom-columns=":metadata.name")\n"
+done
+
+echo -e "The following Airflow pods were found:\n${podNames}"
+```
+#### Install liminal in Airflow pods:

Review comment:

No. I would launch Airflow with a requirements.txt in the mounted DAGs folder that defines the packages to install; this is how I can ensure the packages are reinstalled whenever Airflow is restarted.
However, here we have a running Airflow that we want to install liminal on. Based on the **Prerequisites**, we assume the client already has their own running Airflow with EFS.
WDYT?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use
the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: [email protected]
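The diff above cuts off before the body of the "Install liminal in Airflow pods" step. A hedged sketch of what that step presumably does follows; the namespace, pod names, and the `apache-liminal` package name are assumptions, and the loop only prints the commands so they can be reviewed before running anything against a real cluster:

```shell
# Hedged sketch: install liminal inside each running Airflow pod via kubectl exec.
# The namespace and pod names below are placeholders -- substitute the values
# collected by the "Find Airflow pod names" script. "apache-liminal" is assumed
# to be the PyPI package name, and pip is assumed to be on PATH in the containers.
namespace="airflow"
pods=("airflow-web-0" "airflow-scheduler-0" "airflow-worker-0")

install_cmds=()
for pod in "${pods[@]}"; do
  # Build each command as a string first so it can be inspected; to actually
  # install, execute the string instead of just printing it.
  install_cmds+=("kubectl exec -n ${namespace} ${pod} -- pip install --user apache-liminal")
done

printf '%s\n' "${install_cmds[@]}"
```

Installing into the pods directly (rather than into the image) is ephemeral: the packages disappear when a pod is recreated, which is exactly the concern the review comment addresses.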
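The reviewer's requirements.txt alternative can be sketched as follows. `DAGS_DIR` is a placeholder for the mounted EFS DAGs path, and `dags.installRequirements` is the airflow-stable chart option believed to reinstall the listed packages on pod start; verify it against chart version 7.14.3 before relying on it:

```shell
# Hedged sketch of the requirements.txt approach from the review comment.
# DAGS_DIR is a placeholder for the mounted EFS DAGs folder.
DAGS_DIR="${DAGS_DIR:-/tmp/dags}"
mkdir -p "${DAGS_DIR}"

# Declare the packages Airflow should reinstall on every restart.
printf 'apache-liminal\n' > "${DAGS_DIR}/requirements.txt"

# The chart flag assumed to trigger the install on container start; echoed
# rather than executed so nothing touches a real cluster here.
echo "helm upgrade airflow airflow-stable/airflow --reuse-values --set dags.installRequirements=true"
```

Because the requirements file lives on the shared EFS volume, every Airflow component (web, scheduler, worker) picks it up, which avoids exec-ing into each pod individually.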
