Hi Mohan,

Also, I noticed that you used Kubernetes 1.9.2. I suspect this could be due to
a Docker, kubectl, or Helm version compatibility issue.
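As a rough illustration of how to check such compatibility, plain shell can compare dotted version strings (a minimal sketch using GNU `sort -V`; the version numbers below are only examples, not required values):

```shell
# version_ge VER MIN succeeds when VER >= MIN (dotted version strings).
# Relies on sort -V (version sort) from GNU coreutils.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check an installed kubectl version against the one we used (1.8.6).
if version_ge "1.9.2" "1.8.6"; then
  echo "1.9.2 is at least 1.8.6"
fi
```

In practice you would feed in the output of `kubectl version`, `docker --version`, or `helm version` instead of a literal string.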

Here is what we did to get ONAP amsterdam deployment up and running.

# Install Docker 1.12
curl https://releases.rancher.com/install-docker/1.12.sh | sudo sh

# Install kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.6/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
mkdir -p $HOME/.kube

# Set up Helm
wget http://storage.googleapis.com/kubernetes-helm/helm-v2.3.0-linux-amd64.tar.gz
tar -zxvf helm-v2.3.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Clone the Amsterdam release repository
git clone -b amsterdam http://gerrit.onap.org/r/oom
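After running the steps above, a quick sanity check before starting the deployment is to confirm each binary actually landed on the PATH (a minimal sketch; the tool names match what the steps above install):

```shell
# Verify that the toolchain from the steps above is on the PATH.
# Prints the resolved path for each tool, or a warning if it is missing.
for tool in docker kubectl helm git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND - install it before deploying" >&2
  fi
done
```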

You can give it a try.

Regards
Vivek


On Mon, May 21, 2018 at 10:28 PM, Vivekanandan Muthukrishnan <
vmuthukrish...@aarnanetworks.com> wrote:

> Hi Mohan,
>
> I guess this might be something to do with your Rancher setup.
>
> Try the following.
>
> 1) Check that all your kube-system containers are up and running properly
>
> 2) Make sure that OOM host IP is mapped in the /etc/hosts file
>
>      # OOM_HOST_IP    {HOST_NAME}
>
> 3) Make sure that your kube-dns container has come up and is running properly
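For step 1, one way to spot unhealthy pods is to filter the STATUS column of `kubectl get pods` output. The sketch below runs against canned sample output so it works anywhere; against a live cluster you would pipe in the real command (the pod names here are illustrative):

```shell
# Sketch: list pods whose STATUS is not Running/Completed.
# Live usage (assumes the standard kubectl column layout):
#   kubectl get pods -n kube-system --no-headers | \
#     awk '$3 !~ /^(Running|Completed)$/ {print $1, $3}'
sample='kube-dns-3941599021-kpvzp       3/3   Running            2     15h
consul-agent-3312409084-p8gxg   0/1   CrashLoopBackOff   155   15h'

printf '%s\n' "$sample" | awk '$3 !~ /^(Running|Completed)$/ {print $1, $3}'
```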
>
>
> All the best
> Vivek
>
>
>
>
> On Mon, May 21, 2018 at 3:05 PM, Mohan Kumar <nmohankumar1...@gmail.com>
> wrote:
>
>> Hi All,
>>
>>
>>
>> I am trying to deploy Amsterdam branch ONAP on Kubernetes on Rancher.
>> But onap-consul pod deployment is failing. Others pods are in running state
>>
>>
>> I have used https://github.com/obrienlabs/onap-root/blob/master/cd.sh
>> script for deployment.
>>
>>
>>
>> *Error :*
>>
>>
>> root@linux-r4ap:~/onap-root# kubectl get pods --all-namespaces -o=wide |
>> grep onap-consul
>>
>> onap-consul           consul-agent-3312409084-p8gxg                 0/1
>>      CrashLoopBackOff   155        15h       10.42.50.130
>> linux-r4ap.opnfv.iol.unh.edu
>>
>> onap-consul           consul-server-1173049560-8p9kh                0/1
>>      CrashLoopBackOff   147        15h       10.42.135.15
>> linux-r4ap.opnfv.iol.unh.edu
>>
>> onap-consul           consul-server-1173049560-cj6hc                0/1
>>      CrashLoopBackOff   147        15h       10.42.220.246
>> linux-r4ap.opnfv.iol.unh.edu
>>
>> onap-consul           consul-server-1173049560-fn0r0                0/1
>>      CrashLoopBackOff   149        15h       10.42.7.90
>> linux-r4ap.opnfv.iol.unh.edu
>>
>> root@linux-r4ap:~/onap-root#
>>
>>
>>
>> and the log shows (on kubectl 1.7.7):
>>
>> root@linux-r4ap:~# kubectl logs consul-agent-3312409084-p8gxg -n
>> onap-consul
>> ==> Error parsing /consul/config/aai-data-router-health.json: 1 error(s)
>> occurred:
>>
>> * invalid config key service.checks[0].script
>> root@linux-r4ap:~# kubectl logs consul-server-1173049560-8p9kh -n
>> onap-consul
>> bootstrap_expect > 0: expecting 3 servers
>> ==> Starting Consul agent...
>> ==> Joining cluster...
>> ==> 1 error(s) occurred:
>>
>> * Failed to join 10.43.217.205: dial tcp 10.43.217.205:8301: i/o timeout
>> root@linux-r4ap:~# kubectl logs consul-server-1173049560-cj6hc -n
>> onap-consul
>> bootstrap_expect > 0: expecting 3 servers
>> ==> Starting Consul agent...
>> ==> Joining cluster...
>> ==> 1 error(s) occurred:
>>
>> * Failed to join 10.43.217.205: dial tcp 10.43.217.205:8301: i/o timeout
>> root@linux-r4ap:~# kubectl log consul-server-1173049560-fn0r0 -n
>> onap-consul
>> W0521 02:41:38.633616   23168 cmd.go:392] log is DEPRECATED and will be
>> removed in a future version. Use logs instead.
>> bootstrap_expect > 0: expecting 3 servers
>> ==> Starting Consul agent...
>> ==> Joining cluster...
>> ==> 1 error(s) occurred:
>>
>> * Failed to join 10.43.217.205: dial tcp 10.43.217.205:8301: i/o timeout
>>
>> root@linux-r4ap:~# kubectl describe pod consul-agent-3312409084-jk861 -n onap-consul
>>
>> Events:
>>   Type     Reason                 Age                 From
>>                    Message
>>   ----     ------                 ----                ----
>>                    -------
>>   Normal   Scheduled              31m                 default-scheduler
>>                     Successfully assigned consul-agent-3312409084-jk861 to
>> linux-r4ap.opnfv.iol.unh.edu
>>   Normal   SuccessfulMountVolume  29m                 kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  MountVolume.SetUp succeeded for volume
>> "consul-agent-config"
>>   Normal   SuccessfulMountVolume  29m                 kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  MountVolume.SetUp succeeded for volume
>> "default-token-kgn7c"
>>   Normal   Pulling                4m (x9 over 28m)    kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  pulling image "docker.io/consul:latest"
>>   Normal   Pulled                 4m (x9 over 28m)    kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Successfully pulled image "
>> docker.io/consul:latest"
>>   Normal   Created                4m (x9 over 27m)    kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Created container
>>   Normal   Started                4m (x9 over 27m)    kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Started container
>>   Warning  DNSSearchForming       13s (x40 over 29m)  kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Search Line limits were exceeded, some dns
>> names have been omitted, the applied search line is:
>> onap-consul.svc.cluster.local svc.cluster.local cluster.local
>> kubelet.kubernetes.rancher.internal kubernetes.rancher.internal
>> rancher.internal
>>   Warning  BackOff                13s (x21 over 20m)  kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Back-off restarting failed container
>>   Warning  FailedSync             13s (x21 over 20m)  kubelet,
>> linux-r4ap.opnfv.iol.unh.edu  Error syncing pod
>>
>> I even tried to upgrade Kubernetes from 1.7.7 to 1.9.2 and recreated the
>> pod service, but still couldn't get the consul pods running.
>>
>> Can you suggest how to troubleshoot this problem, or let me know if I am
>> doing anything incorrectly?
>>
>>
>>
>> Thanks,
>>
>> Mohankumar.N
>>
>>
>>
>> _______________________________________________
>> onap-discuss mailing list
>> onap-discuss@lists.onap.org
>> https://lists.onap.org/mailman/listinfo/onap-discuss
>>
>>
>