Hi Vijay,

Thanks for the response. Please find attached the describe output and container 
logs for the DCAE Cloudify Manager pod.


Regards,
Naveen

________________________________
From: [email protected] <[email protected]> on behalf of 
Vijay VK via lists.onap.org <[email protected]>
Sent: 13 April 2020 19:13
To: [email protected] <[email protected]>; Naveen S. Sankad 
<[email protected]>; FREEMAN, BRIAN D <[email protected]>
Cc: JOMY JOSE <[email protected]>; Sudarshan K.S <[email protected]>; 
Devangam Manjunatha <[email protected]>; Velugubantla Praveen 
<[email protected]>
Subject: Re: [onap-discuss] [ONAP] [Elalto] [DCAE] [HOLMES] [Modeling] Issue 
while Deploying ONAP with Latest OOM-Elalto Branch


Hi,

Regarding the DCAE (and Holmes) health check failures: since dcae-bootstrap has 
not been triggered (it is stuck on init), none of the service components 
(including Holmes) appears to have been deployed.  The problem seems to be that 
cloudify-manager is not passing its health check in your environment (it is a 
dependency for the bootstrap, dashboard, Policy Handler, and Deployment Handler 
components).  You could check the cloudify-manager pod logs for any errors 
related to the Kubernetes readiness-probe failure.
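[Editor's note: the checks suggested above can be sketched roughly as below. The pod and namespace names are taken from later in this thread and may differ in your deployment; the fallback branch is only there so the sketch degrades gracefully without cluster access.]

```shell
# Locate the cloudify-manager pod, then pull its recent events and logs.
# Names here (namespace "onap", container "dcae-cloudify-manager") come from
# this thread; adjust for your environment.
NS=onap
POD=$(kubectl -n "$NS" get pods -o name 2>/dev/null | grep cloudify-manager || true)
if [ -n "$POD" ]; then
  # Recent events, including any readiness-probe failures
  kubectl -n "$NS" describe "$POD" | sed -n '/^Events:/,$p'
  # Last part of the main container's log
  kubectl -n "$NS" logs "$POD" -c dcae-cloudify-manager --tail=100
else
  echo "cloudify-manager pod not found (or no cluster access)"
fi
```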



Regards,

Vijay.



From: [email protected] <[email protected]> On Behalf Of 
Naveen S. Sankad
Sent: Saturday, April 11, 2020 11:43 AM
To: [email protected]; FREEMAN, BRIAN D <[email protected]>
Cc: JOMY JOSE <[email protected]>; Sudarshan K.S <[email protected]>; 
Devangam Manjunatha <[email protected]>; Velugubantla Praveen 
<[email protected]>
Subject: [onap-discuss] [ONAP] [Elalto] [DCAE] [HOLMES] [Modeling] Issue while 
Deploying ONAP with Latest OOM-Elalto Branch



Hi Team,



We have redeployed our ONAP setup (OOM: Elalto branch) with all the latest 
certificate-related patches. All health checks are passing except for the DCAE, 
HOLMES, and Modeling components. Please find attached the healthcheck_output 
and Pod_details logs.



When ./demo-k8s.sh is executed, it fails with a 403 error code; the trace is 
attached in the ACCESS-DENIED.txt file.



Any inputs or suggestions are appreciated.





Thanks and regards,

Naveen

L&T Technology Services Ltd

www.LTTS.com

L&T Technology Services Limited (LTTS) is committed to safeguard your data 
privacy. For more information to view our commitment towards data privacy under 
GDPR, please visit the privacy policy on our website 
www.Ltts.com. This email may contain confidential or privileged information for 
the intended recipient(s). If you are not the intended recipient, please do not 
use or disseminate the information; notify the sender and delete it from your 
system.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#20626): https://lists.onap.org/g/onap-discuss/message/20626
Mute This Topic: https://lists.onap.org/mt/72946976/21656
Group Owner: [email protected]
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub  
[[email protected]]
-=-=-=-=-=-=-=-=-=-=-=-

root@demoelalto-nfs:~/oom/kubernetes/robot# kubectl get pods | grep cloudify
demo-dcaegen2-dcae-cloudify-manager-6c86b97bbf-hmvps           0/1     Running      0          2d2h


root@demoelalto-nfs:~/oom/kubernetes/robot# kubectl describe pod demo-dcaegen2-dcae-cloudify-manager-6c86b97bbf-hmvps bash
Name:           demo-dcaegen2-dcae-cloudify-manager-6c86b97bbf-hmvps
Namespace:      onap
Priority:       0
Node:           demoelalto-k8s-03/30.0.0.7
Start Time:     Sat, 11 Apr 2020 11:05:35 +0000
Labels:         app=dcae-cloudify-manager
                pod-template-hash=6c86b97bbf
                release=demo-dcaegen2
Annotations:    cni.projectcalico.org/podIP: 10.42.6.8/32
Status:         Running
IP:             10.42.6.8
Controlled By:  ReplicaSet/demo-dcaegen2-dcae-cloudify-manager-6c86b97bbf
Init Containers:
  dcae-cloudify-manager-multisite-init:
    Container ID:  docker://42ac92be272d5adb21e84f1302c454e3a83d48046f47b35fe1e80326f16c9cae
    Image:         Elalto-Repo:5000/onap/org.onap.dcaegen2.deployments.multisite-init-container:1.0.0
    Image ID:      docker-pullable://Elalto-Repo:5000/onap/org.onap.dcaegen2.deployments.multisite-init-container@sha256:15592e8895a91c9f949ed4ce1dde5c8a8e993990dedf2e2bf1f794aa1bbf84ae
    Port:          <none>
    Host Port:     <none>
    Args:
      --namespace
      onap
      --configmap
      multisite-kubeconfig-configmap
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 11 Apr 2020 11:11:44 +0000
      Finished:     Sat, 11 Apr 2020 11:11:44 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wm6tg (ro)
  init-tls:
    Container ID:   docker://eef93e1285f6b5b5ef8ee9e9f7639318a8771cd94ec3a24e7c787324060cd985
    Image:          nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.tls-init-container@sha256:9736a0c9bd5ecfc547a4af1c688d61b4cb50dde08913b19483769a8f6510a439
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 11 Apr 2020 11:26:30 +0000
      Finished:     Sat, 11 Apr 2020 11:26:30 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /opt/tls/shared from tls-info (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wm6tg (ro)
Containers:
  dcae-cloudify-manager:
    Container ID:   docker://84a73cf3ba8d5fe3c6acc2d1cac72bd758637e01b947e7d5219c9348376f7171
    Image:          Elalto-Repo:5000/onap/org.onap.dcaegen2.deployments.cm-container:2.0.2
    Image ID:       docker-pullable://Elalto-Repo:5000/onap/org.onap.dcaegen2.deployments.cm-container@sha256:29ed3790c180577a18334bbbf9265cbd6a478129b410fda83b1d421b3c957de2
    Port:           443/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 11 Apr 2020 11:29:55 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      exec [/scripts/readiness-check.sh] delay=120s timeout=1s period=10s #success=1 #failure=3
    Environment:
      REQUESTS_CA_BUNDLE:  /opt/onap/certs/cacert.pem
    Mounts:
      /cfy-persist from cm-persistent (rw)
      /etc/localtime from localtime (ro)
      /opt/onap/certs from tls-info (rw)
      /opt/onap/config.txt from demo-dcaegen2-dcae-cloudify-manager-config (ro,path="config.txt")
      /opt/onap/kube from demo-dcaegen2-dcae-cloudify-manager-kubeconfig (ro)
      /secret from dcae-token (ro)
      /sys/fs/cgroup from demo-dcaegen2-dcae-cloudify-manager-cgroup (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wm6tg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  demo-dcaegen2-dcae-cloudify-manager-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      demo-dcaegen2-dcae-cloudify-manager-configmap
    Optional:  false
  demo-dcaegen2-dcae-cloudify-manager-kubeconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multisite-kubeconfig-configmap
    Optional:  false
  dcae-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dcae-token
    Optional:    false
  demo-dcaegen2-dcae-cloudify-manager-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  cm-persistent:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-dcaegen2-dcae-cloudify-manager-data
    ReadOnly:   false
  tls-info:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-token-wm6tg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wm6tg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                        Message
  ----     ------     ----                    ----                        -------
  Warning  Unhealthy  15s (x18120 over 2d2h)  kubelet, demoelalto-k8s-03  Readiness probe failed: + [[ -f /opt/manager/extra-resolver-rules-loaded ]]
+ exit 1
Error from server (NotFound): pods "bash" not found
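[Editor's note: the Events section above shows the readiness probe failing because a marker file is missing, so the script exits 1 and the pod never becomes Ready. A minimal local sketch of that marker-file pattern is below; the real script is /scripts/readiness-check.sh inside the container and checks /opt/manager/extra-resolver-rules-loaded, while the /tmp path here is only a stand-in.]

```shell
# Local sketch of the marker-file readiness pattern seen in the probe output.
# /tmp is a stand-in for the real in-container path.
marker=/tmp/extra-resolver-rules-loaded.$$
check() {
  # The real probe does roughly: [[ -f "$marker" ]] || exit 1
  if [ -f "$marker" ]; then echo ready; else echo not-ready; fi
}
rm -f "$marker"
first=$(check)     # marker absent: probe would fail
touch "$marker"
second=$(check)    # marker present: probe would pass
echo "$first $second"   # prints "not-ready ready"
rm -f "$marker"
```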
root@demoelalto-nfs:~/oom/kubernetes/robot# 
root@demoelalto-nfs:~/oom/kubernetes/robot# kubectl logs demo-dcaegen2-dcae-cloudify-manager-6c86b97bbf-hmvps -c dcae-cloudify-manager
+ '[' -d /cfy-persist ']'
++ ls -A /cfy-persist
+ '[' -z '' ']'
+ for d in '$PDIRS'
++ dirname /var/lib/pgsql/9.5/data
+ p=/var/lib/pgsql/9.5
+ mkdir -p /cfy-persist/var/lib/pgsql/9.5
+ cp -rp /var/lib/pgsql/9.5/data /cfy-persist/var/lib/pgsql/9.5
+ for d in '$PDIRS'
++ dirname /opt/manager/resources
+ p=/opt/manager
+ mkdir -p /cfy-persist/opt/manager
+ cp -rp /opt/manager/resources /cfy-persist/opt/manager
+ for d in '$PDIRS'
++ dirname /opt/mgmtworker/env/plugins
+ p=/opt/mgmtworker/env
+ mkdir -p /cfy-persist/opt/mgmtworker/env
+ cp -rp /opt/mgmtworker/env/plugins /cfy-persist/opt/mgmtworker/env
+ for d in '$PDIRS'
++ dirname /opt/mgmtworker/work/deployments
+ p=/opt/mgmtworker/work
+ mkdir -p /cfy-persist/opt/mgmtworker/work
+ cp -rp /opt/mgmtworker/work/deployments /cfy-persist/opt/mgmtworker/work
+ for d in '$PDIRS'
+ '[' -d /var/lib/pgsql/9.5/data ']'
+ mv /var/lib/pgsql/9.5/data /var/lib/pgsql/9.5/data-initial
++ dirname /var/lib/pgsql/9.5/data
+ ln -sf /cfy-persist//var/lib/pgsql/9.5/data /var/lib/pgsql/9.5
+ for d in '$PDIRS'
+ '[' -d /opt/manager/resources ']'
+ mv /opt/manager/resources /opt/manager/resources-initial
++ dirname /opt/manager/resources
+ ln -sf /cfy-persist//opt/manager/resources /opt/manager
+ for d in '$PDIRS'
+ '[' -d /opt/mgmtworker/env/plugins ']'
+ mv /opt/mgmtworker/env/plugins /opt/mgmtworker/env/plugins-initial
++ dirname /opt/mgmtworker/env/plugins
+ ln -sf /cfy-persist//opt/mgmtworker/env/plugins /opt/mgmtworker/env
+ for d in '$PDIRS'
+ '[' -d /opt/mgmtworker/work/deployments ']'
+ mv /opt/mgmtworker/work/deployments /opt/mgmtworker/work/deployments-initial
++ dirname /opt/mgmtworker/work/deployments
+ ln -sf /cfy-persist//opt/mgmtworker/work/deployments /opt/mgmtworker/work
+ exec /sbin/init --log-target=journal
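[Editor's note: the startup log above shows the container seeding an empty /cfy-persist volume with copies of its stateful directories, then replacing each original directory with a symlink into the volume. A minimal local sketch of that seed-and-symlink pattern is below; the temp directories stand in for /cfy-persist and the container filesystem.]

```shell
# Sketch of the seed-and-symlink persistence pattern from the startup log.
persist=$(mktemp -d)    # stands in for the /cfy-persist volume
approot=$(mktemp -d)    # stands in for the container filesystem root
d="/var/lib/pgsql/9.5/data"
mkdir -p "$approot$d"
echo seed > "$approot$d/seed.txt"

if [ -z "$(ls -A "$persist")" ]; then
  # First run: copy the baked-in data onto the (empty) persistent volume
  mkdir -p "$persist$(dirname "$d")"
  cp -rp "$approot$d" "$persist$(dirname "$d")"
fi
# Replace the original directory with a symlink into the volume
mv "$approot$d" "$approot$d-initial"
ln -sf "$persist$d" "$(dirname "$approot$d")"

cat "$approot$d/seed.txt"   # reads through the symlink; prints "seed"
```

On later restarts the volume is non-empty, so the copy step is skipped and the symlinks point at the already-populated data, which is why the directories survive pod restarts.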
root@demoelalto-nfs:~/oom/kubernetes/robot# 
