On a full out-of-the-box deployment I am seeing better behavior.
Could you post more details about your deployment - full or partial, and any
`--set <component>.enabled=true/false` or yaml overrides?
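For reference, partial deployments are typically driven by per-component enable flags in the overrides; a hypothetical overrides file might look like the following (the component keys shown are illustrative only - the actual keys depend on the OOM chart version you deployed):

```yaml
# hypothetical-overrides.yaml - component keys are illustrative;
# passed to the helm deploy step via: -f hypothetical-overrides.yaml
sdnc:
  enabled: true
dcaegen2:
  enabled: false
```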

 20181015:1930 update Michael 
O'Brien<https://jira.onap.org/secure/ViewProfile.jspa?name=michaelobrien>
Yang - only 1 container is still pending on my Azure cluster:

onap          onap2-sdnc-ueb-listener-6d685c4557-rnmcb                  0/1       Init:0/1            2          56m

and 0 pending on a standard AWS cluster:

onap          onap-sdnc-0                                             2/2       Running            0          1h
onap          onap-sdnc-ansible-server-dc55dbb6-z6mkc                 1/1       Running            0          1h
onap          onap-sdnc-db-0                                          2/2       Running            0          1h
onap          onap-sdnc-dgbuilder-78688789fd-pjj8r                    1/1       Running            0          1h
onap          onap-sdnc-dmaap-listener-b79df77c9-ws6h8                1/1       Running            0          1h
onap          onap-sdnc-portal-7b779f87f5-pw7px                       1/1       Running            0          1h
onap          onap-sdnc-ueb-listener-5bcdbb8677-xxjgg                 1/1       Running            0          1h
onap          onap-so-sdnc-adapter-76799b95d7-md7nk                   1/1       Running            0          1h
onap          onap-vfc-zte-sdnc-driver-856fc764f5-z8pf7               1/1       Running            0          1h



Could you paste details of your deployment? I seem to be OK on a standard
13-node cluster on AWS; on my single Azure instance I only have 1 SDNC pod in
a pending state.

On a single-node deployment under Azure (20181015:1800) I see other issues,
but not with SDNC (just CLAMP-234 and
SDC-1836<https://jira.onap.org/browse/SDC-1836> and DCAE - all likely Nexus
related, to start).

Azure - 1 node, 256G/64vCPUs

Still bringing up the cluster, but only 1 SDNC container is still in init:

ubuntu@a-onap-devopscd:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'

onap          onap2-aai-cassandra-2                                     0/1       ContainerCreating   0          3m
onap          onap2-aai-champ-5f4db6575-99964                           1/2       Running             0          53m
onap          onap2-aai-sparky-be-56647b5b6-dfbsz                       0/2       PodInitializing     0          53m
onap          onap2-appc-0                                              1/2       Running             0          53m
onap          onap2-appc-ansible-server-7b8df887b8-9x7p4                0/1       Init:0/1            4          53m
onap          onap2-appc-db-2                                           0/1       Init:0/1            0          3m
onap          onap2-clamp-dash-kibana-755f799f6b-tzjn7                  0/1       InvalidImageName    0          53m
onap          onap2-dcae-bootstrap-758dbc947b-zmmgk                     0/1       Init:0/1            3          53m
onap          onap2-dcae-cloudify-manager-5b489d9f5d-sg27g              0/1       CrashLoopBackOff    13         53m
onap          onap2-dcae-redis-3                                        0/1       ContainerCreating   0          4m
onap          onap2-dmaap-dr-node-759487f49d-m8wck                      0/1       Init:0/1            4          53m
onap          onap2-dmaap-dr-prov-866df8bfd7-vbqvg                      0/1       CrashLoopBackOff    9          53m
onap          onap2-sdc-be-c5bc774bc-m9dq6                              0/2       Init:0/2            4          53m
onap          onap2-sdc-be-config-backend-jtlmm                         0/1       Init:0/1            0          5m
onap          onap2-sdc-fe-6bb9bc957d-6f8wn                             0/2       Init:1/2            4          53m
onap          onap2-sdc-onboarding-be-7b78d9dd4-lzjrw                   0/2       Init:0/1            4          53m
onap          onap2-sdc-wfd-be-55f88dddff-n7kzw                         0/1       Init:0/1            4          53m
onap          onap2-sdc-wfd-be-workflow-init-qpxqm                      0/1       ErrImagePull        0          9m
onap          onap2-sdc-wfd-fe-5bc7f95cc6-fwgnt                         0/2       Init:0/1            3          53m
onap          onap2-sdnc-ueb-listener-6d685c4557-rnmcb                  0/1       Init:0/1            2          53m
onap          onap2-so-monitoring-7bc4d46cb8-rw6kc                      0/1       CrashLoopBackOff    11         53m

ubuntu@a-onap-devopscd:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2' | wc -l

20
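As an aside, the `grep -E '0/|1/2'` filter catches `0/x` and `1/2` pods but would miss, say, a `2/3` pod; comparing the READY fraction directly is more general. A sketch (the awk expression is my own, exercised here on canned output rather than a live cluster):

```shell
# Print pods whose ready-container count is below the desired count.
# The sample mimics the columns of `kubectl get pods --all-namespaces`.
sample='onap onap-sdnc-0 2/2 Running 0 1h
onap onap2-sdnc-ueb-listener-6d685c4557-rnmcb 0/1 Init:0/1 2 56m
onap onap-aai-champ-5dbbfb865-w68sn 1/2 Running 0 1h'

# Split the READY column (e.g. "1/2") and compare numerically.
echo "$sample" | awk '{ split($3, r, "/"); if (r[1]+0 < r[2]+0) print $2 }'
# prints the ueb-listener and champ pods, but not the fully-ready sdnc-0
```

Against a real cluster the `echo "$sample"` would simply be replaced by the `kubectl get pods --all-namespaces` call.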

ubuntu@a-onap-devopscd:~$ kubectl get pods --all-namespaces | grep sdnc

onap          onap2-sdnc-0                                              2/2       Running             0          54m
onap          onap2-sdnc-ansible-server-5cd7c6d968-df4sl                1/1       Running             0          53m
onap          onap2-sdnc-db-0                                           2/2       Running             0          54m
onap          onap2-sdnc-dgbuilder-544dbb6bbd-7jpcv                     1/1       Running             0          53m
onap          onap2-sdnc-dmaap-listener-64fcb8b9bc-46crg                1/1       Running             0          53m
onap          onap2-sdnc-portal-77bf86f69-cvfqj                         1/1       Running             0          53m
onap          onap2-sdnc-ueb-listener-6d685c4557-rnmcb                  0/1       Init:0/1            2          53m
onap          onap2-so-sdnc-adapter-68c6689599-dsb7g                    1/1       Running             0          53m
onap          onap2-vfc-zte-sdnc-driver-748f97d84f-kn95f                1/1       Running             0          53m

On a 1+13 node 16G/4vCPU deployment on AWS (20181015:1800), with an EFS/NFS
share across the nodes, I see these:

ubuntu@ip-172-31-8-245:~$ kubectl get pods --all-namespaces | grep -E '0/|1/2'

onap          dep-dcae-datafile-collector-f69cc76d6-fs887             1/2       ImagePullBackOff   0          1h
onap          dep-holmes-engine-mgmt-8758d5d46-6pvtg                  0/1       Running            1          1h
onap          dep-holmes-rule-mgmt-c794f846f-brt78                    0/1       Running            1          1h
onap          onap-aai-champ-5dbbfb865-w68sn                          1/2       Running            0          1h
onap          onap-aai-elasticsearch-6d5889c967-2pjvv                 0/1       CrashLoopBackOff   23         1h
onap          onap-aai-sparky-be-5f67978f74-7hzn4                     0/2       Init:0/1           8          1h
onap          onap-clamp-dash-kibana-5978dcf7cb-sjrxv                 0/1       InvalidImageName   0          1h
onap          onap-dmaap-dr-node-756fd64f48-pd8c9                     0/1       Init:0/1           8          1h
onap          onap-dmaap-dr-prov-9c5c4c8d5-9tz8d                      0/1       CrashLoopBackOff   20         1h
onap          onap-sdc-wfd-be-7bfdb8bc69-clj6b                        0/1       Init:0/1           8          1h
onap          onap-sdc-wfd-be-workflow-init-dxrvk                     0/1       ImagePullBackOff   0          1h
onap          onap-sdc-wfd-fe-6ff9ff5666-7nxc7                        0/2       Init:0/1           8          1h
onap          onap-so-monitoring-6fdd664964-4b76t                     0/1       CrashLoopBackOff   23         1h



SDNC is good:

ubuntu@ip-172-31-8-245:~$ kubectl get pods --all-namespaces | grep sdnc

onap          onap-sdnc-0                                             2/2       Running            0          1h
onap          onap-sdnc-ansible-server-dc55dbb6-z6mkc                 1/1       Running            0          1h
onap          onap-sdnc-db-0                                          2/2       Running            0          1h
onap          onap-sdnc-dgbuilder-78688789fd-pjj8r                    1/1       Running            0          1h
onap          onap-sdnc-dmaap-listener-b79df77c9-ws6h8                1/1       Running            0          1h
onap          onap-sdnc-portal-7b779f87f5-pw7px                       1/1       Running            0          1h
onap          onap-sdnc-ueb-listener-5bcdbb8677-xxjgg                 1/1       Running            0          1h
onap          onap-so-sdnc-adapter-76799b95d7-md7nk                   1/1       Running            0          1h
onap          onap-vfc-zte-sdnc-driver-856fc764f5-z8pf7               1/1       Running            0          1h



13 pods still pending (count > 0) at the 284th 15-sec polling interval:

onap          dep-dcae-datafile-collector-f69cc76d6-fs887             1/2       ImagePullBackOff   0          1h        10.42.200.173   ip-172-31-11-30.us-east-2.compute.internal
onap          dep-holmes-engine-mgmt-8758d5d46-6pvtg                  0/1       Running            1          56m       10.42.43.157    ip-172-31-11-30.us-east-2.compute.internal
onap          dep-holmes-rule-mgmt-c794f846f-brt78                    0/1       Running            1          1h        10.42.123.221   ip-172-31-0-197.us-east-2.compute.internal
onap          onap-aai-champ-5dbbfb865-w68sn                          1/2       Running            0          1h        10.42.126.159   ip-172-31-9-128.us-east-2.compute.internal
onap          onap-aai-elasticsearch-6d5889c967-2pjvv                 0/1       CrashLoopBackOff   20         1h        10.42.99.60     ip-172-31-12-122.us-east-2.compute.internal
onap          onap-aai-sparky-be-5f67978f74-7hzn4                     0/2       Init:0/1           7          1h        10.42.139.101   ip-172-31-0-159.us-east-2.compute.internal
onap          onap-clamp-dash-kibana-5978dcf7cb-sjrxv                 0/1       InvalidImageName   0          1h        10.42.125.144   ip-172-31-12-122.us-east-2.compute.internal
onap          onap-dmaap-dr-node-756fd64f48-pd8c9                     0/1       Init:0/1           7          1h        10.42.14.81     ip-172-31-2-244.us-east-2.compute.internal
onap          onap-dmaap-dr-prov-9c5c4c8d5-9tz8d                      0/1       CrashLoopBackOff   18         1h        10.42.57.73     ip-172-31-2-105.us-east-2.compute.internal
onap          onap-sdc-wfd-be-7bfdb8bc69-clj6b                        0/1       Init:0/1           7          1h        10.42.141.41    ip-172-31-12-122.us-east-2.compute.internal
onap          onap-sdc-wfd-be-workflow-init-dxrvk                     0/1       ImagePullBackOff   0          1h        10.42.237.176   ip-172-31-9-203.us-east-2.compute.internal
onap          onap-sdc-wfd-fe-6ff9ff5666-7nxc7                        0/2       Init:0/1           7          1h        10.42.31.94     ip-172-31-14-254.us-east-2.compute.internal
onap          onap-so-monitoring-6fdd664964-4b76t                     0/1       CrashLoopBackOff   21         1h        10.42.140.8     ip-172-31-14-254.us-east-2.compute.internal
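The "284th 15 sec interval" above implies a polling loop over the same not-ready filter. A minimal sketch of such a loop, with the kubectl pipeline stubbed out (the function names and stub are mine) so the loop logic itself runs anywhere:

```shell
# Poll until no pods remain pending, counting 15 s intervals.
# get_pending stands in for the real check:
#   kubectl get pods --all-namespaces | grep -E '0/|1/2' | wc -l
# Here it is stubbed to count down from its argument so the demo terminates.
get_pending() { echo "$1"; }

wait_for_pods() {
  start=$1
  i=0
  while [ "$(get_pending $((start - i)))" -gt 0 ]; do
    i=$((i + 1))
    # sleep 15   # disabled in this stub; the real loop would wait here
  done
  echo "pods ready after $i intervals"
}

wait_for_pods 3
# prints: pods ready after 3 intervals
```

In the real loop `get_pending` would take no argument and just run the kubectl pipeline each pass.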



From: onap-discuss@lists.onap.org <onap-discuss@lists.onap.org> On Behalf Of 
Yang Xu
Sent: Monday, October 15, 2018 5:19 PM
To: onap-discuss@lists.onap.org
Subject: [onap-discuss] [sdnc][integration]SDNC fails health check after ONAP 
install

SDNC team,

Please help look at JIRA https://jira.onap.org/browse/SDNC-481, it is blocking 
integration test.

Thanks,
-Yang

“Amdocs’ email platform is based on a third-party, worldwide, cloud-based 
system. Any emails sent to Amdocs will be processed and stored using such 
system and are accessible by third party providers of such system on a limited 
basis. Your sending of emails to Amdocs evidences your consent to the use of 
such system and such processing, storing and access”.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13047): https://lists.onap.org/g/onap-discuss/message/13047
Mute This Topic: https://lists.onap.org/mt/27363335/21656
Group Owner: onap-discuss+ow...@lists.onap.org
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-
