This is an automated email from the ASF dual-hosted git repository.

daisyguo pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-openwhisk-deploy-kube.git


The following commit(s) were added to refs/heads/master by this push:
     new 4ce7a7f  Switch from minikube to kubeadm-dind-cluster for TravisCI testing (#314)
4ce7a7f is described below

commit 4ce7a7fc46b8b4831a173f53fed60d4f83e18a86
Author: David Grove <dgrove-...@users.noreply.github.com>
AuthorDate: Wed Oct 17 21:09:02 2018 -0400

    Switch from minikube to kubeadm-dind-cluster for TravisCI testing (#314)
    
    A number of changes and improvements to TravisCI testing of Helm deployment:
      + Get our Kubernetes cluster via kubeadm-dind-cluster instead of minikube.
      + Now test Kubernetes versions 1.10 and 1.11 (drop 1.9; add 1.11).
      + Deploy providers in parallel to reduce testing latency.
      + Enable apigateway sniff test (it reliably passes with kubeadm-dind-cluster).
    
    Update the main README and the ingress and troubleshooting documentation to
    reflect the shift away from Minikube as the recommended environment.
---
 .travis.yml                                        |  10 +-
 README.md                                          |  40 ++++++--
 docs/ingress.md                                    |  24 +++++
 docs/troubleshooting.md                            |   5 +-
 .../ow-alarm/templates/pkgAlarmProvider.yaml       |   6 ++
 .../ow-cloudant/templates/pkgCloudantProvider.yaml |   6 ++
 tools/travis/build-helm.sh                         | 107 ++++++++++-----------
 tools/travis/collect-logs.sh                       |   7 +-
 tools/travis/start-kubeadm-dind.sh                 |  60 ++++++++++++
 tools/travis/{setup.sh => start-minikube.sh}       |   4 +
 10 files changed, 195 insertions(+), 74 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index 0f3bdd1..dfcfeae 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -8,9 +8,9 @@ env:
   - secure: 
d7CuMXbhT83W2x78qiLwgogX1+3aPicd1PlTwwNNDN6QSkImbxareyKThnsqlHIiNj3o5l5DBuiYjy7wrF/xD1g8BQMmTwm99DRx5q3CI3Im3VCi/ZK8SaNjuOy24d7cf5k2tB/87Gk7zmKsMDYm+fpCl+GpgUmIEeIwthiAxuXSDWZ8eQPIptmxj56DeFRNouvXG+dEUtBfWiwN27UPxNKExCixFnegmdtffLbz6hhst7BHr5Ry9acbycre98PCwWZcu9lxFs+SJ1kvnzX2iue4otmDkF1WkJjxaOFPJVs/D3YItg+neLCSxjwBskPed+Fct8bOjcM/uVROJPNIq5icBmaPX2isH0lvtxOeVw/dmioWYXYPN9ygBOe4eO/vtPllN0bcAUo5xl9jXev8ciAozYrYpHVh9Fplfd81rcYTeYzALmRJBdoiWoc3KQGzwGc9sB1ffmy+KWgG9T0zbnS4fALSR4PS
 [...]
   - secure: 
CJtnU94HTDqd4A6uvhFl8IpnmU+wTdlzb8bPBFUl/lI/VKXiRrYpgJdKUro5xEoxFKuqMprLhbyf66niyWLTIeogjUAEu/h/o2dBVeGgSGKoqC0hQgqvnxKFeGlzFJ0XuEs3vbStJGRnQszGsfnnDrscJtR0x9X+1w4aBKI7iPyyuFtVkDD1UsmBbSi+M8FTeq7G7A0reMDaey7uog3CFCpIMl4geshcohQEcKEGbnXQZoLPFpb7cBOE83VXBJ7Y7Dgf/U4keiLovvnuJThGKZm/SVV2KlELmBmtmbx3rMT6Vb5k9ChSdRWapromNnnzmJBIQ5Scc2mwV3A93/SMha1F3IlYpDKs5djfTw8jZfVnuiou7HhTaRjHkmmcwP12/k30gLe2kw0Vezg1TCY4zgtOpcmCxc8RHEy0ceA74rKvRi8LbexTCwX+iAMQFn/pSrh/OqAq/50JbLyczcoO1zXWS38txUQN
 [...]
   matrix:
-    - TRAVIS_KUBE_VERSION=v1.9.0 TRAVIS_MINIKUBE_VERSION=v0.25.2 OW_CONTAINER_FACTORY=docker
-    - TRAVIS_KUBE_VERSION=v1.10.5 TRAVIS_MINIKUBE_VERSION=v0.28.2 OW_CONTAINER_FACTORY=docker
-    - TRAVIS_KUBE_VERSION=v1.10.5 TRAVIS_MINIKUBE_VERSION=v0.28.2 OW_CONTAINER_FACTORY=kubernetes
+    - TRAVIS_KUBE_VERSION=1.10 OW_CONTAINER_FACTORY=docker
+    - TRAVIS_KUBE_VERSION=1.10 OW_CONTAINER_FACTORY=kubernetes
+    - TRAVIS_KUBE_VERSION=1.11 OW_CONTAINER_FACTORY=docker
 
 services:
   - docker
@@ -24,10 +24,10 @@ notifications:
 
 before_install:
   - ./tools/travis/setupscan.sh
-  - ./tools/travis/setup.sh
+  - ./tools/travis/scancode.sh
+  - ./tools/travis/start-kubeadm-dind.sh
 
 script:
-  - ./tools/travis/scancode.sh
   - ./tools/travis/build-helm.sh
   - ./tools/travis/collect-logs.sh
  - ./tools/travis/box-upload.py "logs" "deploy-kube-$TRAVIS_BUILD_ID-$TRAVIS_BRANCH-$TRAVIS_JOB_NUMBER.tar.gz"
diff --git a/README.md b/README.md
index 54321b2..099559e 100644
--- a/README.md
+++ b/README.md
@@ -80,14 +80,37 @@ controller:
   imagePullPolicy: "IfNotPresent"
 ```
 
+NOTE: Docker for Windows 18.06 and later also has similar built-in
+support for Kubernetes. We would be interested in any experience using
+it to run Apache OpenWhisk on the Windows platform.
+
+### Using kubeadm-dind-cluster
+On Linux, you can get an experience similar to using Kubernetes in
+Docker for Mac via the
+[kubeadm-dind-cluster](https://github.com/kubernetes-sigs/kubeadm-dind-cluster)
+project.  In a nutshell, you can get started by doing:
+```shell
+wget https://cdn.rawgit.com/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.10.sh
+chmod +x dind-cluster-v1.10.sh
+
+# start the cluster
+./dind-cluster-v1.10.sh up
+
+# add kubectl directory to PATH
+export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
+```
+
+Our TravisCI testing uses kubeadm-dind-cluster.sh on an Ubuntu 16.04
+host.  The `fixed` `dind-cluster` scripts for Kubernetes versions 1.10
+and 1.11 are known to work for deploying OpenWhisk.
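+
+As a quick sanity check before deploying OpenWhisk, you can confirm the
+cluster is up (a minimal sketch, assuming the default layout of one master
+and two worker nodes that kubeadm-dind-cluster creates):
+```shell
+# kube-master, kube-node-1 and kube-node-2 should all report Ready
+kubectl get nodes
+```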
+
 ### Using Minikube
 
-If you are not on a Mac, then for local development and testing, we recommend using Minikube with
-the docker network in promiscuous mode.  Not all combinations of
-Minikube and Kubernetes versions will work for running OpenWhisk.
-Although other combinations may work, we recommend at least initially
-using a combination from the table below that is verified by our
-Travis CI testing.
+If you are on Linux and do not want to use kubeadm-dind-cluster, an
+alternative for local development and testing is to use Minikube
+with the docker network in promiscuous mode.  However, not all
+combinations of Minikube and Kubernetes versions will work for running
+OpenWhisk. Some known good combinations are:
 
 | Kubernetes Version | Minikube Version |
 --- | --- |
@@ -167,8 +190,9 @@ file appropriate for a Minikube cluster where `minikube ip` returns
 `192.168.99.100` and port 31001 is available to be used.  If you are
 using Docker for Mac, you can use the same configuration but use the
 command `kubectl describe nodes | grep InternalIP` to determine the
-value for `api_host_name`.
-
+value for `api_host_name`.  If you are using kubeadm-dind-cluster, use
+the command `kubectl describe node kube-node-2 | grep InternalIP` to
+determine the value for `api_host_name`.
 
 ```yaml
 whisk:
diff --git a/docs/ingress.md b/docs/ingress.md
index c358651..d46b897 100644
--- a/docs/ingress.md
+++ b/docs/ingress.md
@@ -88,6 +88,30 @@ nginx:
   httpsNodePort: 31001
 ```
 
+## Setting up NodePort using kubeadm-dind-cluster
+
+Obtain the IP address of one of the two Kubernetes worker nodes using
+the command below.  To eliminate a network hop to the nginx pod, pick the
+worker node that you did not label with `openwhisk-role=invoker`.
+So, if you labeled `kube-node-2` as your invoker node, pick `kube-node-1`
+as your `api_host_name`.
+```shell
+kubectl describe node kube-node-1 | grep InternalIP
+```
+This should produce output like: `InternalIP:  10.192.0.3`
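+
+To capture just the address, you can reuse the same pipeline our
+TravisCI script uses:
+```shell
+kubectl describe node kube-node-1 | grep InternalIP: | awk '{print $2}'
+```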
+
+Next, pick an unassigned port (e.g., 31001) and define `mycluster.yaml` as:
+```yaml
+whisk:
+  ingress:
+    type: NodePort
+    api_host_name: 10.192.0.3
+    api_host_port: 31001
+
+nginx:
+  httpsNodePort: 31001
+```
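+
+You would then pass this file when deploying the OpenWhisk chart. For
+example (the chart path and release name below are illustrative; see the
+main README for the exact install command):
+```shell
+helm install ./helm/openwhisk --namespace=openwhisk --name owdev -f mycluster.yaml
+```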
+
 ## Setting up NodePort on an IBM Cloud Lite cluster
 
 The only available ingress method for an IBM Cloud Lite cluster is to
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index ff5be89..34306f8 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -44,8 +44,9 @@ If services are having trouble connecting to Kafka, it may be that the
 Kafka service didn't actually come up successfully. One reason Kafka
 can fail to come up is that it cannot connect to itself.  On minikube,
 fix this by saying `minikube ssh -- sudo ip link set docker0 promisc
-on`. On full scale Kubernetes clusters, make sure that your kubelet's
-`hairpin-mode` is not `none`).
+on`. If using kubeadm-dind-cluster, set `USE_HAIRPIN=true` in your environment
+before running `dind-cluster.sh up`. On full-scale Kubernetes clusters,
+make sure that your kubelet's `hairpin-mode` is not `none`.
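+
+To verify that the minikube fix took effect, you can check the docker0
+interface for the PROMISC flag (a minimal sketch):
+```shell
+minikube ssh -- ip link show docker0 | grep -o PROMISC
+```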
 
 ### wsk `cannot validate certificates` error
 
diff --git a/helm/openwhisk-providers/charts/ow-alarm/templates/pkgAlarmProvider.yaml b/helm/openwhisk-providers/charts/ow-alarm/templates/pkgAlarmProvider.yaml
index 2982ad2..131b480 100644
--- a/helm/openwhisk-providers/charts/ow-alarm/templates/pkgAlarmProvider.yaml
+++ b/helm/openwhisk-providers/charts/ow-alarm/templates/pkgAlarmProvider.yaml
@@ -21,8 +21,12 @@ spec:
       restartPolicy: {{ .Values.alarmprovider.restartPolicy }}
       volumes:
         - name: alarm-logs
+{{- if ne .Values.alarmprovider.persistence.storageClass "none" }}
           persistentVolumeClaim:
             claimName: {{ .Values.alarmprovider.persistence.pvcName | quote }}
+{{- else }}
+          emptyDir: {}
+{{- end }}
       containers:
       - name: {{ .Values.alarmprovider.name | quote }}
         imagePullPolicy: {{ .Values.alarmprovider.imagePullPolicy | quote }}
@@ -80,6 +84,7 @@ spec:
             mountPath: /logs
 
 ---
+{{- if ne .Values.alarmprovider.persistence.storageClass "none" }}
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
@@ -92,3 +97,4 @@ spec:
   resources:
     requests:
       storage: {{ .Values.alarmprovider.persistence.size }}
+{{- end }}
diff --git a/helm/openwhisk-providers/charts/ow-cloudant/templates/pkgCloudantProvider.yaml b/helm/openwhisk-providers/charts/ow-cloudant/templates/pkgCloudantProvider.yaml
index bd54def..5e0e103 100644
--- a/helm/openwhisk-providers/charts/ow-cloudant/templates/pkgCloudantProvider.yaml
+++ b/helm/openwhisk-providers/charts/ow-cloudant/templates/pkgCloudantProvider.yaml
@@ -21,8 +21,12 @@ spec:
       restartPolicy: {{ .Values.cloudantprovider.restartPolicy }}
       volumes:
         - name: cloudant-logs
+{{- if ne .Values.cloudantprovider.persistence.storageClass "none" }}
           persistentVolumeClaim:
            claimName: {{ .Values.cloudantprovider.persistence.pvcName | quote }}
+{{- else }}
+          emptyDir: {}
+{{- end }}
       containers:
       - name: {{ .Values.cloudantprovider.name | quote }}
         imagePullPolicy: {{ .Values.cloudantprovider.imagePullPolicy | quote }}
@@ -81,6 +85,7 @@ spec:
             mountPath: /logs
 
 ---
+{{- if ne .Values.cloudantprovider.persistence.storageClass "none" }}
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
@@ -93,3 +98,4 @@ spec:
   resources:
     requests:
       storage: {{ .Values.cloudantprovider.persistence.size }}
+{{- end }}
diff --git a/tools/travis/build-helm.sh b/tools/travis/build-helm.sh
index 1d7c7d4..b7f88b8 100755
--- a/tools/travis/build-helm.sh
+++ b/tools/travis/build-helm.sh
@@ -23,13 +23,13 @@ deploymentHealthCheck () {
       break
     fi
 
-    kubectl get pods --all-namespaces -o wide --show-all
+    kubectl get pods -n openwhisk -o wide
 
     let TIMEOUT=TIMEOUT+1
     sleep 10
   done
 
-  if [ ! $PASSED ]; then
+  if [ "$PASSED" == "false" ]; then
     echo "Failed to finish deploying $1"
 
    kubectl -n openwhisk logs $(kubectl -n openwhisk get pods -l name="$1" -o wide | grep "$1" | awk '{print $1}')
@@ -56,14 +56,16 @@ statefulsetHealthCheck () {
       break
     fi
 
-    kubectl get pods --all-namespaces -o wide --show-all
+    kubectl get pods -n openwhisk -o wide
 
     let TIMEOUT=TIMEOUT+1
     sleep 10
   done
 
-  if [ ! $PASSED ]; then
+  if [ "$PASSED" == "false" ]; then
     echo "Failed to finish deploying $1"
+    # Dump all namespaces in case the problem is with a pod in the kube-system namespace
+    kubectl get pods --all-namespaces -o wide
 
    kubectl -n openwhisk logs $(kubectl -n openwhisk get pods -o wide | grep "$1"-0 | awk '{print $1}')
     exit 1
@@ -89,14 +91,16 @@ jobHealthCheck () {
       break
     fi
 
-    kubectl get jobs --all-namespaces -o wide --show-all
+    kubectl get pods -n openwhisk -o wide
 
     let TIMEOUT=TIMEOUT+1
     sleep 10
   done
 
-  if [ ! $PASSED ]; then
+  if [ "$PASSED" == "false" ]; then
     echo "Failed to finish running $1"
+    # Dump all namespaces in case the problem is with a pod in the kube-system namespace
+    kubectl get jobs --all-namespaces -o wide --show-all
 
     kubectl -n openwhisk logs jobs/$1
     exit 1
@@ -117,13 +121,15 @@ verifyHealthyInvoker () {
       break
     fi
 
-    kubectl get pods --all-namespaces -o wide --show-all
+    kubectl get pods -n openwhisk -o wide
 
     let TIMEOUT=TIMEOUT+1
     sleep 10
   done
 
-  if [ ! $PASSED ]; then
+  if [ "$PASSED" == "false" ]; then
+    # Dump all namespaces in case the problem is with a pod in the kube-system namespace
+    kubectl get pods --all-namespaces -o wide --show-all
     echo "No healthy invokers available"
 
     exit 1
@@ -147,20 +153,23 @@ OW_CONTAINER_FACTORY=${OW_CONTAINER_FACTORY:="docker"}
 # Default timeout limit to 60 steps
 TIMEOUT_STEP_LIMIT=${TIMEOUT_STEP_LIMIT:=60}
 
-# Label invoker nodes (needed for DockerContainerFactory-based invoker deployment)
-echo "Labeling invoker node"
-kubectl label nodes --all openwhisk-role=invoker
-kubectl describe nodes
+# Label nodes for affinity. For DockerContainerFactory, at least one invoker node is required.
+echo "Labeling nodes with openwhisk-role assignments"
+kubectl label nodes kube-node-1 openwhisk-role=core
+kubectl label nodes kube-node-1 openwhisk-role=edge
+kubectl label nodes kube-node-2 openwhisk-role=invoker
 
 # Create namespace
 echo "Create openwhisk namespace"
 kubectl create namespace openwhisk
 
-# configure Ingress
+# configure a NodePort Ingress assuming kubeadm-dind-cluster conventions
+# use kube-node-1 as the ingress, since that is where nginx will be running
 WSK_PORT=31001
-WSK_HOST=$(kubectl describe nodes | grep Hostname: | awk '{print $2}')
-if [ "$WSK_HOST" = "minikube" ]; then
-    WSK_HOST=$(minikube ip)
+WSK_HOST=$(kubectl describe node kube-node-1 | grep InternalIP: | awk '{print $2}')
+if [ -z "$WSK_HOST" ]; then
+  echo "FAILED! Could not determine value for WSK_HOST"
+  exit 1
 fi
 
 # Deploy OpenWhisk using Helm
@@ -241,76 +250,64 @@ if [ -z "$RESULT" ]; then
 fi
 
 # now define it as an api and invoke it that way
+wsk -i api create /demo /hello get hello
+API_URL=$(wsk -i api list | grep hello | awk '{print $4}')
+RESULT=$(wget --no-check-certificate -qO- "$API_URL" | grep 'Hello world')
+if [ -z "$RESULT" ]; then
+  echo "FAILED! Could not invoke hello via apigateway"
+  exit 1
+fi
 
-# TEMP: test is not working yet in travis environment.
-#       disable for now to allow rest of PR to be merged...
-# wsk -v -i api create /demo /hello get hello
-#
-# API_URL=$(wsk -i api list | grep hello | awk '{print $4}')
-# echo "API URL is $API_URL"
-# wget --no-check-certificate -O sayHello.txt "$API_URL"
-# echo "AJA!"
-# cat sayHello.txt
-# echo "AJA!"
-#
-# RESULT=$(wget --no-check-certificate -qO- "$API_URL" | grep 'Hello world')
-# if [ -z "$RESULT" ]; then
-#   echo "FAILED! Could not invoke hello via apigateway"
-#   exit 1
-# fi
-
-echo "PASSED! Deployed openwhisk and invoked Hello action"
+echo "PASSED! Created Hello action and invoked via cli, web and apigateway"
 
-####
-# now test the installation of kafka provider
-####
+###
+# Now install all the provider helm charts.
+# To reduce testing latency we first install all the charts,
+# then we check for correct deployment of each one.
+###
 helm install helm/openwhisk-providers/charts/ow-kafka --namespace=openwhisk --name=kafkap4travis  || exit 1
+helm install helm/openwhisk-providers/charts/ow-alarm --namespace=openwhisk --name alarmp4travis --set alarmprovider.persistence.storageClass=none || exit 1
+helm install helm/openwhisk-providers/charts/ow-cloudant --namespace=openwhisk --name cloudantp4travis --set cloudantprovider.persistence.storageClass=none || exit 1
 
-jobHealthCheck "install-package-kafka"
 
+####
+# Verify kafka provider and messaging package
+####
+jobHealthCheck "install-package-kafka"
 deploymentHealthCheck "kafkaprovider"
 
-# Verify messaging package is installed
 RESULT=$(wsk package list /whisk.system -i | grep messaging)
 if [ -z "$RESULT" ]; then
   echo "FAILED! Could not list messaging package via CLI"
   exit 1
+else
+  echo "PASSED! Deployed Kafka provider and package"
 fi
 
-echo "PASSED! Deployed Kafka provider and package"
-
 ####
-# now test the installation of Alarm provider
+# Verify alarm provider and alarms package
 ####
-helm install helm/openwhisk-providers/charts/ow-alarm --namespace=openwhisk --name alarmp4travis --set alarmprovider.persistence.storageClass=standard  || exit 1
-
 jobHealthCheck "install-package-alarm"
-
 deploymentHealthCheck "alarmprovider"
 
-# Verify alarms package is installed
 RESULT=$(wsk package list /whisk.system -i | grep alarms)
 if [ -z "$RESULT" ]; then
   echo "FAILED! Could not list alarms package via CLI"
   exit 1
+else
+  echo "PASSED! Deployed Alarms provider and package"
 fi
 
-echo "PASSED! Deployed Alarms provider and package"
-
 ####
-# now test the installation of Cloudant provider
+# Verify Cloudant provider and cloudant package
 ####
-helm install helm/openwhisk-providers/charts/ow-cloudant --namespace=openwhisk --name cloudantp4travis --set cloudantprovider.persistence.storageClass=standard  || exit 1
-
 jobHealthCheck "install-package-cloudant"
-
 deploymentHealthCheck "cloudantprovider"
 
-# Verify cloudant package is installed
 RESULT=$(wsk package list /whisk.system -i | grep cloudant)
 if [ -z "$RESULT" ]; then
   echo "FAILED! Could not list cloudant package via CLI"
   exit 1
+else
+  echo "PASSED! Deployed Cloudant provider and package"
 fi
-
-echo "PASSED! Deployed Cloudant provider and package"
diff --git a/tools/travis/collect-logs.sh b/tools/travis/collect-logs.sh
index 1ebed5f..2ccbbdb 100755
--- a/tools/travis/collect-logs.sh
+++ b/tools/travis/collect-logs.sh
@@ -17,8 +17,7 @@ mkdir logs
 kubectl -n openwhisk logs -lname=couchdb >& logs/couchdb.log
 kubectl -n openwhisk logs -lname=zookeeper >& logs/zookeeper.log
 kubectl -n openwhisk logs -lname=kafka >& logs/kafka.log
-kubectl -n openwhisk logs controller-0 >& logs/controller-0.log
-kubectl -n openwhisk logs controller-1 >& logs/controller-1.log
+kubectl -n openwhisk logs -lname=controller >& logs/controller.log
 kubectl -n openwhisk logs -lname=invoker -c docker-pull-runtimes >& logs/invoker-docker-pull.log
 kubectl -n openwhisk logs -lname=invoker -c invoker >& logs/invoker-invoker.log
 kubectl -n openwhisk logs -lname=nginx >& logs/nginx.log
@@ -28,5 +27,5 @@ kubectl -n openwhisk logs jobs/install-catalog >& logs/catalog.log
 kubectl -n openwhisk logs jobs/init-couchdb >& logs/init-couchdb.log
 kubectl get pods --all-namespaces -o wide --show-all >& logs/all-pods.txt
 
-# System level logs from minikube
-minikube logs >& logs/minikube.log
+# System level logs from kubernetes cluster
+$HOME/dind-cluster.sh dump >& logs/dind-cluster-dump.txt
diff --git a/tools/travis/start-kubeadm-dind.sh b/tools/travis/start-kubeadm-dind.sh
new file mode 100755
index 0000000..0f18dfd
--- /dev/null
+++ b/tools/travis/start-kubeadm-dind.sh
@@ -0,0 +1,60 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more contributor
+# license agreements; and to You under the Apache License, Version 2.0.
+
+set -x
+
+# Install kubeadm-dind-cluster and boot it
+wget https://cdn.rawgit.com/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v$TRAVIS_KUBE_VERSION.sh -O $HOME/dind-cluster.sh && chmod +x $HOME/dind-cluster.sh && USE_HAIRPIN=true $HOME/dind-cluster.sh up
+
+# Install kubectl in /usr/local/bin so subsequent scripts can find it
+sudo cp $HOME/.kubeadm-dind-cluster/kubectl-v$TRAVIS_KUBE_VERSION* /usr/local/bin/kubectl
+
+
+echo "Kubernetes cluster is deployed and reachable"
+kubectl describe nodes
+
+# Download and install misc packages and utilities
+pushd /tmp
+  # Need socat for helm to forward connections to tiller on ubuntu 16.04
+  sudo apt update
+  sudo apt install -y socat
+
+  # download and install the wsk cli
+  wget -q https://github.com/apache/incubator-openwhisk-cli/releases/download/latest/OpenWhisk_CLI-latest-linux-amd64.tgz
+  sudo cp wsk /usr/local/bin/wsk
+
+  # Download and install helm
+  curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh && chmod +x get_helm.sh && ./get_helm.sh
+popd
+
+# Pods running in kube-system namespace should have cluster-admin role
+kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
+
+# Install tiller into the cluster
+/usr/local/bin/helm init --service-account default
+
+# Wait for tiller to be ready
+TIMEOUT=0
+TIMEOUT_COUNT=60
+until [ $TIMEOUT -eq $TIMEOUT_COUNT ]; do
+  TILLER_STATUS=$(kubectl -n kube-system get pods -o wide | grep tiller-deploy | awk '{print $3}')
+  TILLER_READY_COUNT=$(kubectl -n kube-system get pods -o wide | grep tiller-deploy | awk '{print $2}')
+  if [[ "$TILLER_STATUS" == "Running" ]] && [[ "$TILLER_READY_COUNT" == "1/1" ]]; then
+    break
+  fi
+  echo "Waiting for tiller to be ready"
+  kubectl -n kube-system get pods -o wide
+  let TIMEOUT=TIMEOUT+1
+  sleep 5
+done
+
+if [ $TIMEOUT -eq $TIMEOUT_COUNT ]; then
+  echo "Failed to install tiller"
+
+  # Dump lowlevel logs to help diagnose failure to start tiller
+  $HOME/dind-cluster.sh dump
+  kubectl -n kube-system describe pods
+  exit 1
+fi
diff --git a/tools/travis/setup.sh b/tools/travis/start-minikube.sh
similarity index 96%
rename from tools/travis/setup.sh
rename to tools/travis/start-minikube.sh
index 5a2e2b7..7571adb 100755
--- a/tools/travis/setup.sh
+++ b/tools/travis/start-minikube.sh
@@ -2,6 +2,10 @@
 # Licensed to the Apache Software Foundation (ASF) under one or more contributor
 # license agreements; and to You under the Apache License, Version 2.0.
 
+# NOTE: This script is not currently being used.
+#       It will be removed after a couple of weeks of experience with
+#       kubeadm-dind-cluster on TravisCI.
+
 # This script assumes Docker is already installed
 set -x
 
