Adding the list
Guys, you only need to install tiller (either automatically through the rancher 
install or as part of the OOM-1598 RKE install) on the master - not on the other 
14 slave nodes - the slaves run only the rancher agent docker container and an 
NFS mount.
kubectl can be installed anywhere since it is just a wrapper around the K8s REST 
API - I usually install it directly on the master - so the master VM acts as a 
sort of localized bastion node - the two scripts below do this.
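A quick sanity check from the master (assuming kubectl is already configured there, as the scripts do) - tiller should land on the master and the slaves should just show up as nodes:

kubectl -n kube-system get pods -o wide | grep tiller
kubectl get nodes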

If you refer to the automated installation scripts, those are the verified 
installation steps and will answer your questions - I don't recommend installing 
k8s manually.
Just run the following script on your master, then add your 14 hosts - you can 
remove the host that gets created on the master after that.

sudo ./oom_rancher_setup.sh -b master -s <your domain/ip> -e onap
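After the script finishes, a hedged way to verify the master came up (the container and image names assume the default Rancher 1.6 setup the script installs):

sudo docker ps | grep rancher/server
kubectl get nodes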

From your steps - you did not need to rerun the kubectl/helm steps on the 
slaves (they only need NFS, docker and the rancher agent docker container) - 
your master was fine.
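If rerunning helm init on a slave left a second tiller replicaset stuck in ImagePullBackOff, a sketch of the recovery from the master (verify the deployment name in your cluster first) is to roll the tiller deployment back:

kubectl -n kube-system get deployment tiller-deploy -o wide
kubectl -n kube-system rollout undo deployment tiller-deploy
kubectl -n kube-system get pods -o wide | grep tiller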

The single node install is good for any environment - for rancher there is an 
extra step to raise the default 110 pods-per-node limit (see the kubelet flag 
note after the links) - for RKE it is part of the script
https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Scriptedundercloud(Helm/Kubernetes/Docker)andONAPinstall-SingleVM
https://git.onap.org/logging-analytics/tree/deploy/rancher/oom_rancher_setup.sh
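For the rancher path the limit is raised in the kubernetes environment template during the 3 minute window the script calls out - roughly as below (the UI location is from memory, the value is the one used in the script):

# Rancher 1.6 UI -> Environments -> edit the kubernetes template -> Additional Kubelet Flags
--max-pods=900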

The clustered example is for either OpenStack or AWS
https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Scriptedundercloud(Helm/Kubernetes/Docker)andONAPinstall-clustered

But the steps are the same for the slaves - see the script
https://git.onap.org/logging-analytics/tree/deploy/aws/oom_cluster_host_install.sh
https://git.onap.org/logging-analytics/tree/deploy/heat/logging_openstack_13_16g.yaml

Install docker, run the NFS client mount, and run the rancher agent docker 
container on each slave - that is it.

if [[ "$IS_NODE" != false ]]; then
    sudo curl https://releases.rancher.com/install-docker/$DOCKER_VER.sh | sh
    sudo usermod -aG docker $USERNAME
fi
sudo apt-get install nfs-common -y
sudo mkdir /$DOCKERDATA_NFS
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport $AWS_EFS.efs.$AWS_REGION.amazonaws.com:/ /$DOCKERDATA_NFS
if [[ "$IS_NODE" != false ]]; then
    echo "Running agent docker..."
    if [[ "$COMPUTEADDRESS" != false ]]; then
        echo "sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v$AGENT_VER http://$MASTER:$PORT/v1/scripts/$TOKEN";
        sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v$AGENT_VER http://$MASTER:$PORT/v1/scripts/$TOKEN
    else
        echo "sudo docker run -e CATTLE_AGENT_IP=\"$ADDRESS\" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v$AGENT_VER http://$MASTER:$PORT/v1/scripts/$TOKEN";
        sudo docker run -e CATTLE_AGENT_IP="$ADDRESS" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v$AGENT_VER http://$MASTER:$PORT/v1/scripts/$TOKEN
    fi
fi
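Once the agent container is up on a slave, a quick check that the host registered (run from the master, plus a docker check on the slave itself):

kubectl get nodes -o wide
sudo docker ps | grep rancher/agent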


After everything is up - run the following script on the master to optionally 
sequence in the onap pods
https://git.onap.org/logging-analytics/tree/deploy/cd.sh
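To pull the script onto the master, something like the following works (the /plain/ raw URL is my assumption from the cgit tree link above - check the flags in the script header before running it):

wget https://git.onap.org/logging-analytics/plain/deploy/cd.sh
chmod 755 cd.sh
head -40 cd.sh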

Alternatively you can use the 0.1.16 RKE install - I will be adding HA and 
moving up to 2.0 - but for now you can provision a 128g+ VM and run that one as 
well (example invocation below) - tiller/helm is automated there - it is not 
merged yet and is still in review
https://gerrit.onap.org/r/#/c/79067/5/kubernetes/contrib/tools/rke/rke_setup.sh
subject to change in
https://jira.onap.org/browse/OOM-1670
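If you try that route, the invocation in the review mirrors the rancher script - treat this as a sketch only, since the flags are still subject to change under OOM-1670:

sudo ./rke_setup.sh -b master -s <your domain/ip> -e onap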


For your kubectl/helm steps - these are from the oom_rancher_setup.sh script - 
run them on the master:
sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo mkdir ~/.kube
wget http://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz
sudo tar -zxvf helm-v${HELM_VERSION}-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# create kubernetes environment on rancher using cli
RANCHER_CLI_VER=0.6.7
KUBE_ENV_NAME=$ENVIRON
wget https://releases.rancher.com/cli/v${RANCHER_CLI_VER}/rancher-linux-amd64-v${RANCHER_CLI_VER}.tar.gz
sudo tar -zxvf rancher-linux-amd64-v${RANCHER_CLI_VER}.tar.gz
sudo cp rancher-v${RANCHER_CLI_VER}/rancher .
sudo chmod +x ./rancher

echo "install jq for json parsing"
sudo wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 -O jq
sudo chmod 777 jq
# not +x or jq will not be runnable from your non-root user
sudo mv jq /usr/local/bin
echo "wait for rancher server container to finish - 3 min"
echo "if you are planning on running a co-located host to bring up more than 
110 pods on a single vm - you have 3 min to add --max-pods=900 in additional 
kublet flags - in the k8s template"
sleep 60
echo "2 more min"
sleep 60
echo "1 min left"
sleep 60
echo "get public and private tokens back to the rancher server so we can 
register the client later"
API_RESPONSE=`curl -s 'http://127.0.0.1:8880/v2-beta/apikey' -d '{"type":"apikey","accountId":"1a1","name":"autoinstall","description":"autoinstall","created":null,"kind":null,"removeTime":null,"removed":null,"uuid":null}'`
# Extract and store token
echo "API_RESPONSE: $API_RESPONSE"
KEY_PUBLIC=`echo $API_RESPONSE | jq -r .publicValue`
KEY_SECRET=`echo $API_RESPONSE | jq -r .secretValue`
echo "publicValue: $KEY_PUBLIC secretValue: $KEY_SECRET"

export RANCHER_URL=http://${SERVER}:$PORT
export RANCHER_ACCESS_KEY=$KEY_PUBLIC
export RANCHER_SECRET_KEY=$KEY_SECRET
./rancher env ls
echo "wait 60 sec for rancher environments to settle before we create the onap 
kubernetes one"
sleep 60

echo "Creating kubernetes environment named ${KUBE_ENV_NAME}"
./rancher env create -t kubernetes $KUBE_ENV_NAME > kube_env_id.json
PROJECT_ID=$(<kube_env_id.json)
echo "env id: $PROJECT_ID"
export RANCHER_HOST_URL=http://${SERVER}:$PORT/v1/projects/$PROJECT_ID
echo "you should see an additional kubernetes environment usually with id 1a7"
./rancher env ls
# optionally disable cattle env

# add host registration url
# https://github.com/rancher/rancher/issues/2599
# wait for REGISTERING to ACTIVE
echo "sleep 90 to wait for REG to ACTIVE"
./rancher env ls
sleep 30
echo "check on environments again before registering the URL response"
./rancher env ls
sleep 30
./rancher env ls
echo "60 more sec"
sleep 60

REG_URL_RESPONSE=`curl -X POST -u $KEY_PUBLIC:$KEY_SECRET -H 'Accept: application/json' -H 'ContentType: application/json' -d '{"name":"$SERVER"}' "http://$SERVER:8880/v1/projects/$PROJECT_ID/registrationtokens"`
echo "REG_URL_RESPONSE: $REG_URL_RESPONSE"
echo "wait for server to finish url configuration - 5 min"
sleep 240
echo "60 more sec"
sleep 60
# see registrationUrl in
REGISTRATION_TOKENS=`curl http://127.0.0.1:$PORT/v2-beta/registrationtokens`
echo "REGISTRATION_TOKENS: $REGISTRATION_TOKENS"
REGISTRATION_URL=`echo $REGISTRATION_TOKENS | jq -r .data[0].registrationUrl`
REGISTRATION_DOCKER=`echo $REGISTRATION_TOKENS | jq -r .data[0].image`
REGISTRATION_TOKEN=`echo $REGISTRATION_TOKENS | jq -r .data[0].token`
echo "Registering host for image: $REGISTRATION_DOCKER url: $REGISTRATION_URL 
registrationToken: $REGISTRATION_TOKEN"
HOST_REG_COMMAND=`echo $REGISTRATION_TOKENS | jq -r .data[0].command`
echo "Running agent docker..."
if [[ "$COMPUTEADDRESS" != false ]]; then
     echo "sudo docker run --rm --privileged -v 
/var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher 
$REGISTRATION_DOCKER $RANCHER_URL/v1/scripts/$REGISTRATION_TOKEN"
     sudo docker run --rm --privileged -v 
/var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher 
$REGISTRATION_DOCKER $RANCHER_URL/v1/scripts/$REGISTRATION_TOKEN
else
     echo "sudo docker run -e CATTLE_AGENT_IP=\"$ADDRESS\" --rm --privileged -v 
/var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher 
rancher/agent:v$AGENT_VERSION http://$SERVER:$PORT/v1/scripts/$TOKEN";
     sudo docker run -e CATTLE_AGENT_IP="$ADDRESS" --rm --privileged -v 
/var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher 
rancher/agent:v$AGENT_VERSION 
http://$SERVER:$PORT/v1/scripts/$REGISTRATION_TOKEN
fi
echo "waiting 8 min for host registration to finish"
sleep 420
echo "1 more min"
sleep 60

# base64 encode the kubectl token from the auth pair
# generate this after the host is registered
KUBECTL_TOKEN=$(echo -n 'Basic '$(echo -n "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" | base64 -w 0) | base64 -w 0)
echo "KUBECTL_TOKEN base64 encoded: ${KUBECTL_TOKEN}"
# add kubectl config - NOTE: the following spacing has to be "exact" or kubectl will not connect - with a localhost:8080 error
cat > ~/.kube/config <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    insecure-skip-tls-verify: true
    server: "https://$SERVER:$PORT/r/projects/$PROJECT_ID/kubernetes:6443"
  name: "${ENVIRON}"
contexts:
- context:
    cluster: "${ENVIRON}"
    user: "${ENVIRON}"
  name: "${ENVIRON}"
current-context: "${ENVIRON}"
users:
- name: "${ENVIRON}"
  user:
    token: "$KUBECTL_TOKEN"

EOF


echo "Verify all pods up on the kubernetes system - will return localhost:8080 
until a host is added"
echo "kubectl get pods --all-namespaces"
kubectl get pods --all-namespaces
echo "upgrade server side of helm in kubernetes"
if [ "$USERNAME" == "root" ]; then
   helm version
else
   sudo helm version
fi
echo "sleep 90"
sleep 90
if [ "$USERNAME" == "root" ]; then
   helm init --upgrade
else
   sudo helm init --upgrade
fi
echo "sleep 90"
sleep 90
echo "verify both versions are the same below"
if [ "$USERNAME" == "root" ]; then
   helm version
else
   sudo helm version
fi
echo "start helm server"
if [ "$USERNAME" == "root" ]; then
   helm serve &
else
   sudo helm serve &
fi
echo "sleep 30"
sleep 30
echo "add local helm repo"
if [ "$USERNAME" == "root" ]; then
   helm repo add local http://127.0.0.1:8879
   helm repo list
else
   sudo helm repo add local http://127.0.0.1:8879
   sudo helm repo list
fi
echo "To enable grafana dashboard - do this after running cd.sh which brings up 
onap - or you may get a 302xx port conflict"
echo "kubectl expose -n kube-system deployment monitoring-grafana 
--type=LoadBalancer --name monitoring-grafana-client"
echo "to get the nodeport for a specific VM running grafana"
echo "kubectl get services --all-namespaces | grep graf"
kubectl get pods --all-namespaces
echo "finished!"

From: Viji Jeyaraman <[email protected]>
Date: Thursday, March 21, 2019 at 09:08
To: Michael O'Brien <[email protected]>
Cc: Pranjal Sharma <[email protected]>, "[email protected]" 
<[email protected]>, "[email protected]" <[email protected]>
Subject: Onap installation steps query

To Michael,

We are currently installing the ONAP Casablanca version following the documentation at 
https://docs.onap.org/en/casablanca/submodules/oom.git/docs/oom_setup_kubernetes_rancher.html#onap-on-kubernetes-with-rancher

Our deployment VMs consist of

  1.  1 Rancher master node
  2.  14 kubernetes nodes, which we internally call 1 masterNFS + 13 slaveNFS VMs

We have created the VMs, and through the Rancher UI we have added the 14 kubernetes 
nodes as hosts and created the kubernetes cluster.

In the documentation we got stuck at the “Configure kubectl and helm” step.

We need a clarification from you: should we run the following configuration steps 
on all 15 VMs including the rancher master, only on the 14 k8s hosts, or only on 
the masterNFS VM?

  1.  vi .kube/config -- add config generated from rancher UI
  2.  kubectl config get-contexts
  3.  kubectl get pods --all-namespaces -o=wide
  4.  helm list
  5.  helm init --upgrade

Also, when we first ran the above configuration steps on the masterNFS VM, things 
went fine. The “tiller-deploy” pod came up in Running state and the 
“rancher/pause-amd64” container was also seen running in the “kube-system” 
namespace on the masterNFS VM.
root@masterNFS:~# kubectl get pods --all-namespaces -o=wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE        NOMINATED NODE
kube-system   heapster-5c6fddd5b-7k988                1/1       Running   0          18h       10.42.230.106   masternfs   <none>
kube-system   kube-dns-8587b597fc-49mwv               3/3       Running   0          18h       10.42.240.2     masternfs   <none>
kube-system   kubernetes-dashboard-79599f58bc-zhz8k   1/1       Running   0          18h       10.42.115.179   masternfs   <none>
kube-system   monitoring-grafana-74c4f86f9c-tx6d8     1/1       Running   0          18h       10.42.53.121    masternfs   <none>
kube-system   monitoring-influxdb-f78c85b98-8jvd4     1/1       Running   0          18h       10.42.188.87    masternfs   <none>
kube-system   tiller-deploy-b5f895978-95ctj           1/1       Running   0          18h       10.42.6.233     masternfs   <none>

Then, when the same steps were executed on a slaveNFS VM, “tiller-deploy-76747d6678-455n2” 
did not come up, and the “rancher/pause-amd64” container was now seen on slavenfs, 
also not in a running state.

root@slaveNFS:~# kubectl get pods --all-namespaces -o=wide
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE       IP              NODE        NOMINATED NODE
kube-system   heapster-5c6fddd5b-7k988                1/1       Running            0          18h       10.42.230.106   masternfs   <none>
kube-system   kube-dns-8587b597fc-49mwv               3/3       Running            0          18h       10.42.240.2     masternfs   <none>
kube-system   kubernetes-dashboard-79599f58bc-zhz8k   1/1       Running            0          18h       10.42.115.179   masternfs   <none>
kube-system   monitoring-grafana-74c4f86f9c-tx6d8     1/1       Running            0          18h       10.42.53.121    masternfs   <none>
kube-system   monitoring-influxdb-f78c85b98-8jvd4     1/1       Running            0          18h       10.42.188.87    masternfs   <none>
kube-system   tiller-deploy-76747d6678-4zljw          0/1       ImagePullBackOff   0          1m        <none>          slavenfs    <none>
root@slaveNFS:~#
root@slaveNFS:~# helm list
Error: could not find a ready tiller pod
root@slaveNFS:~#

root@k8s-onap2-2:~# kubectl get pods --all-namespaces -o=wide
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE       IP              NODE          NOMINATED NODE
kube-system   heapster-5c6fddd5b-7k988                1/1       Running             0          20h       10.42.230.106   masternfs     <none>
kube-system   kube-dns-8587b597fc-49mwv               3/3       Running             15         20h       10.42.186.119   masternfs     <none>
kube-system   kubernetes-dashboard-79599f58bc-zhz8k   1/1       Running             0          20h       10.42.115.179   masternfs     <none>
kube-system   monitoring-grafana-74c4f86f9c-tx6d8     1/1       Running             0          20h       10.42.53.121    masternfs     <none>
kube-system   monitoring-influxdb-f78c85b98-8jvd4     1/1       Running             0          20h       10.42.188.87    masternfs     <none>
kube-system   tiller-deploy-76747d6678-455n2          0/1       ContainerCreating   0          8m        <none>          k8s-onap-12   <none>

So we need a clarification: on which nodes should the tiller helm server be 
running? Should the tiller container run only on the masterNFS VM, or on all 14 
k8s hosts?

Thanks,
Viji J
