Re: [onap-discuss] POD start up issue | SDC component

2017-12-07 Thread Atul Angrish
Thanks, Michael, for your time and for providing me this information.

I will double-check my configuration and try to configure/deploy the components
again.

Regards
Atul



Re: [onap-discuss] POD start up issue | SDC component

2017-12-06 Thread Michael O'Brien
Atul,
  Hi, sorry to hear that.  I have not run master for about a week – but the
release-1.1.0 branch of OOM is highly optimized and stable – we are still mostly
cherry-picking fixes out of it into master.

   Your issue is likely that you are not running the message-router dependency
– see below. Partial deployments are tricky – you need the entire dependency
branch of whatever you are bringing up; try deploying everything if you have 55G
of RAM.

  If you switch to release-1.1.0 for now (the Amsterdam release), then all ~84
pods will come up except for 1 aaf container, which you don't need right now.
  Check the health status on the CD server for Amsterdam as a reference – also,
I just verified OOM-328, which required a clean VM install, and everything was OK.

http://jenkins.onap.info/job/oom-cd/611/console
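Switching is just a fresh checkout of the release branch before sourcing
setenv.sh and rerunning the scripts – roughly as below (repo path per the
standard ONAP gerrit; the oneclick directory is an assumption based on the wiki
page, adjust if your checkout differs):

git clone -b release-1.1.0 http://gerrit.onap.org/r/oom
cd oom/kubernetes/oneclick
source setenv.sh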

  In your case, if you see failed container startups, there are 9+ possible causes:

1)  The ONAP code has issues (not in release-1.1.0) – I can't verify this
until I pull master again, or until we run 2 CD jobs

2)  The config is out of date – maybe in master until all cherry-picks are in

3)  Your config pod was not run, or was not in "Completed" state when the pods
started to come up – the fix is to delete/create the failed pods – not your case

4)  Your config pod is in error – usually because of unset variables in
onap-parameters.yaml – not your case

5)  Your docker images are not pulling – verify you sourced setenv.sh

6)  You are still pulling images – try one of the prepull scripts first

7)  Use the included cd.sh script

8)  Your HD is full – you need 75G+

9)  Your RAM is full – you need 55G to run the whole system – or ~40G to 
run the vFWCL

10)   You don't have dependent pods started – check the dependency section in
the yaml (for example, vnc-portal needs sdc and vid to be up).
This one looks like yours – a partial system has dependencies – in your case
sdc-be.yaml needs message-router-dmaap.yaml.
Do a ./createAll.bash -n onap -a message-router
then a
./deleteAll.bash -n onap -a sdc – and then create sdc again,
although the dependency tree may continue (see the sketch after this list).

11)   I can’t remember the other ones right now – feeding my rabbits and then 
going to bed
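As a rough sketch of the recovery sequence for your case (assuming the oneclick
scripts from the wiki page and the onap namespace prefix – adjust names to your
setup):

# bring up the missing dependency first
./createAll.bash -n onap -a message-router
# once the message-router pods are Running, recreate sdc
./deleteAll.bash -n onap -a sdc
./createAll.bash -n onap -a sdc
# watch until sdc-be leaves PodInitializing
kubectl get pods -n onap-sdc -w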

Thank you
/michael


[onap-discuss] POD start up issue | SDC component

2017-12-06 Thread Atul Angrish
Hi guys,

Greetings

I am currently working on an ONAP deployment using Kubernetes; I am new to these
things.
I am referring to the wiki page below for ONAP deployment on K8s.

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes

We are facing an issue while deploying pods with Kubernetes, mainly SDC.

root@k8s-2:/# kubectl get pods --all-namespaces -a
NAMESPACE     NAME                                   READY     STATUS            RESTARTS   AGE
kube-system   heapster-4285517626-km9jg              1/1       Running           8          2h
kube-system   kube-dns-638003847-z8gnh               3/3       Running           23         2h
kube-system   kubernetes-dashboard-716739405-xn4hx   1/1       Running           7          2h
kube-system   monitoring-grafana-2360823841-fsznx    1/1       Running           7          2h
kube-system   monitoring-influxdb-2323019309-qks0t   1/1       Running           7          2h
kube-system   tiller-deploy-737598192-wlhmk          1/1       Running           7          2h
onap          config                                 0/1       Completed         0          1h
onap-aai      aai-resources-898583818-6ptc4          2/2       Running           0          1h
onap-aai      aai-service-749944520-0jhxf            1/1       Running           0          1h
onap-mso      mariadb-829081257-vx3n1                1/1       Running           0          1h
onap-mso      mso-821928192-qp6tn                    2/2       Running           0          1h
onap-sdc      sdc-be-754421819-phch8                 0/2       PodInitializing   0          1h
onap-sdc      sdc-cs-2937804434-qn1q6                1/1       Running           0          1h
onap-sdc      sdc-es-2514443912-c7fmd                1/1       Running           0          1h
onap-sdc      sdc-fe-902103934-rlbhv                 0/2       Init:0/1          8          1h
onap-sdc      sdc-kb-281446026-tvg8r                 1/1       Running           0          1h

As you can see, all other pods come up except the onap-sdc ones.
It seems to be a problem in the sdc-be container.

When we look at the logs of this container, we can see that there are issues in
some Chef scripts.

Please find below the steps we used to check the logs:


1)  Run the kubectl command to check the pod status:

     kubectl get pods --all-namespaces -a

onap-mso      mso-821928192-qp6tn                    2/2       Running           0          1h
onap-sdc      sdc-be-754421819-phch8                 0/2       PodInitializing   0          1h
onap-sdc      sdc-cs-2937804434-qn1q6                1/1       Running           0          1h
onap-sdc      sdc-es-2514443912-c7fmd                1/1       Running           0          1h


2)  Use the docker ps -a command to list the containers:

root@k8s-2:/# docker ps -a | grep sdc-be
347b4da64d9c   nexus3.onap.org:10001/openecomp/sdc-backend@sha256:d4007e41988fd0bd451b8400144b27c60b4ba0a2e54fca1a02356d8b5ec3ac0d
               "/root/startup.sh"       53 minutes ago      Up 53 minutes
               k8s_sdc-be_sdc-be-754421819-phch8_onap-sdc_d7e74e36-da76-11e7-a79e-02ffdf18df1f_0
2b4cf42b163a   oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
               "/root/ready.py --con"   57 minutes ago      Exited (0) 53 minutes ago
               k8s_sdc-dmaap-readiness_sdc-be-754421819-phch8_onap-sdc_d7e74e36-da76-11e7-a79e-02ffdf18df1f_3
a066ef35890b   oomk8s/readiness-check@sha256:ab8a4a13e39535d67f110a618312bb2971b9a291c99392ef91415743b6a25ecb
               "/root/ready.py --con"   About an hour ago   Exited (0) About an hour ago
               k8s_sdc-be-readiness_sdc-be-754421819-phch8_onap-sdc_d7e74e36-da76-11e7-a79e-02ffdf18df1f_0
1fdc79e399fd   gcr.io/google_containers/pause-amd64:3.0
               "/pause"                 About an hour ago   Up About an hour
               k8s_POD_sdc-be-754421819-phch8_onap-sdc_d7e74e36-da76-11e7-a79e-02ffdf18df
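The two Exited (0) readiness-check entries above correspond to the pod's
init/readiness containers (sdc-be-readiness and sdc-dmaap-readiness). The same
status can also be read directly from Kubernetes, for example (pod name taken
from the listing above):

     kubectl describe pod sdc-be-754421819-phch8 -n onap-sdc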


3)  Use this command to see the docker logs:

     docker logs 347b4da64d9c | grep error



4)  Observe the error logs and exceptions.

Currently we are getting the exceptions mentioned below:

Recipe Compile Error in
/root/chef-solo/cache/cookbooks/sdc-catalog-be/recipes/BE_2_setup_configuration
[2017-12-06T11:53:48+00:00] ERROR: bash[upgrade-normatives]
(sdc-normatives::upgrade_Normatives line 7) had an error:
Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but
received '1'
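We can also pull the same logs through Kubernetes rather than docker – for
example (pod and container names taken from the listing above):

     kubectl logs sdc-be-754421819-phch8 -n onap-sdc -c sdc-be | grep -i error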