This is an automated email from the ASF dual-hosted git repository.

dgrove pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/openwhisk-deploy-kube.git


The following commit(s) were added to refs/heads/master by this push:
     new f25acb2  Instructions for a handmade kubernetes cluster in readme (#563)
f25acb2 is described below

commit f25acb27c6e13538d8a4c4a854f5ac5c8d77de1b
Author: Giuseppe De Palma <depalma....@gmail.com>
AuthorDate: Thu Jan 2 20:14:32 2020 +0100

    Instructions for a handmade kubernetes cluster in readme (#563)
    
    * fixed table of contents link to development and testing section
    * added instructions for a simple handmade cluster creation
---
 README.md              |   5 ++-
 docs/k8s-diy-ubuntu.md | 103 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dca11bc..c8e78ee 100644
--- a/README.md
+++ b/README.md
@@ -48,7 +48,7 @@ document the necessary steps.
 * [Prerequisites: Kubernetes and Helm](#prerequisites-kubernetes-and-helm)
 * [Deploying OpenWhisk](#deploying-openwhisk)
 * [Administering OpenWhisk](#administering-openwhisk)
-* [Development and Testing](#development-and-testing)
+* [Development and Testing OpenWhisk on Kubernetes](#development-and-testing-openwhisk-on-kubernetes)
 * [Cleanup](#cleanup)
 * [Issues](#issues)
 
@@ -129,7 +129,8 @@ significantly larger clusters by scaling up the replica count of the
 various components and labeling multiple nodes as invoker nodes.
 There are some additional notes [here](docs/k8s-diy.md).
 
-We would welcome contributions of more detailed DIY instructions.
+[Here](docs/k8s-diy-ubuntu.md) is an example of building a Kubernetes cluster using kubeadm on Ubuntu 18.04.
+
 
 ## Helm
 
diff --git a/docs/k8s-diy-ubuntu.md b/docs/k8s-diy-ubuntu.md
new file mode 100644
index 0000000..c8a0b39
--- /dev/null
+++ b/docs/k8s-diy-ubuntu.md
@@ -0,0 +1,103 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+# Kubernetes cluster example with Ubuntu
+
+You can easily build a cluster using kubeadm and kubectl on Ubuntu 18.04.
+
+### Perform these steps on **all the machines** that will be part of your cluster.
+
+First, install Docker:
+```
+sudo apt-get install -y docker.io
+```
+
+Then install the kubeadm toolbox:
+```
+sudo apt-get update && sudo apt-get install -y apt-transport-https curl
+curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
+cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
+deb https://apt.kubernetes.io/ kubernetes-xenial main
+EOF
+sudo apt-get update
+sudo apt-get install -y kubelet kubeadm kubectl
+sudo apt-mark hold kubelet kubeadm kubectl
+```
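+
+As an optional sanity check (not part of the original instructions), you can confirm the tools installed correctly by printing their versions:
+```
+kubeadm version -o short
+kubectl version --client
+kubelet --version
+```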
+
+Swap must be disabled for kubelet to run:
+```
+sudo swapoff -a
+```
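+
+One caveat beyond the original instructions: `swapoff -a` only disables swap until the next reboot. A common way to keep it off persistently is to comment out the swap entries in `/etc/fstab` (this sed pattern assumes your swap lines contain the word "swap"; review the file first):
+```
+sudo sed -i.bak '/\bswap\b/ s/^[^#]/#&/' /etc/fstab
+```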
+
+### Only on the machine designated as the **master node**:
+
+Select the IP address on which to advertise the Kubernetes API. With `ifconfig` you can check the IPs of the network interfaces on your master node (with a public IP you can expose the cluster to the internet).
+
+Then run the following command, substituting `<IP-address>` with your chosen IP:
+```
+sudo kubeadm init --apiserver-advertise-address=<IP-address>
+```
+When it finishes, you will see output similar to this:
+```
+Your Kubernetes control-plane has initialized successfully!
+
+To start using your cluster, you need to run the following as a regular user:
+
+  mkdir -p $HOME/.kube
+  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+  sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
+  https://kubernetes.io/docs/concepts/cluster-administration/addons/
+
+Then you can join any number of worker nodes by running the following on each as root:
+
+kubeadm join <IP-address>:6443 --token 29am26.3fw2znktwbbff0we \
+    --discovery-token-ca-cert-hash sha256:eb32f7f58ae6907f26ed5c075ecd4ef6756d832b6c358fd4b2f408e52d18a369
+```
+kubeadm has now set up a cluster with just the master node. Run the three commands shown above to copy the `admin.conf` file, which connects kubectl to the new cluster.
+
+Then you can check your nodes with:
+```
+kubectl get nodes
+
+NAME          STATUS     ROLES    AGE     VERSION
+master-node   NotReady   master   7m25s   v1.17.0
+```
+The node will stay in the **NotReady** status until you apply a pod network add-on. To use Weave Net, run:
+```
+kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
+```
+After a minute the node will be **Ready**. See the [Weave Net addon](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#install) documentation to learn more.
+
+Now you're ready to let other machines join the cluster. On each of them, run the join command that kubeadm printed earlier:
+```
+kubeadm join <IP-address>:6443 --token 29am26.3fw2znktwbbff0we \
+    --discovery-token-ca-cert-hash sha256:eb32f7f58ae6907f26ed5c075ecd4ef6756d832b6c358fd4b2f408e52d18a369
+```
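+A note beyond the original text: the bootstrap token printed by `kubeadm init` expires (after 24 hours by default). If you need to join a worker later, you can generate a fresh join command on the master node:
+```
+sudo kubeadm token create --print-join-command
+```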
+After a node joins, give it time to reach the **Ready** status; then you can check that everything is
+running with `kubectl get all -A`.
+
+Now you have a running cluster with a master node and one or more worker nodes.
+
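+Since the README suggests labeling worker nodes as invoker nodes, note that the OpenWhisk Helm chart selects invokers by the `openwhisk-role=invoker` label (the node name below is a placeholder; see the repository's setup docs for details):
+```
+kubectl label node <worker-node-name> openwhisk-role=invoker
+```
+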
+Before deploying OpenWhisk, you have to set up [Dynamic Volume
+Provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), as the [technical
+requirements](k8s-technical-requirements.md) specify. For example, you can dynamically provision NFS persistent volumes by setting up an NFS server, a client provisioner, and a storage class. Now you're ready to deploy OpenWhisk with [Helm](../README.md#helm).
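+
+As one possible sketch of that NFS setup (assuming the community `nfs-client-provisioner` Helm chart and an existing NFS export; the server address and path below are placeholders, not from this document):
+```
+helm repo add stable https://kubernetes-charts.storage.googleapis.com
+helm install nfs-provisioner stable/nfs-client-provisioner \
+  --set nfs.server=<nfs-server-ip> \
+  --set nfs.path=/exported/path \
+  --set storageClass.defaultClass=true
+```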
