Hi,

We are getting InvalidImageName and CrashLoopBackOff errors while installing OOF
(version: Guilin 7.0.0).

Kindly advise whether OOF depends on any other components. Do we need to deploy
AAF and Policy first?

dev-oof-55d8d97977-kn26l                        0/1   Init:0/2                278   46h   100.64.17.103   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-cmso-optimizer-6667d4bfcf-2rngd         0/1   CrashLoopBackOff        525   46h   100.64.16.151   ip-100-64-16-31.ec2.internal    <none>   <none>
dev-oof-cmso-service-5669dfcc9f-5btd9           2/3   CrashLoopBackOff        524   46h   100.64.17.146   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-cmso-ticketmgt-67fd4c4bbd-6crbf         0/1   CrashLoopBackOff        533   46h   100.64.17.161   ip-100-64-17-207.ec2.internal   <none>   <none>
dev-oof-cmso-topology-769998cbb6-bfpf8          0/1   CrashLoopBackOff        536   46h   100.64.17.213   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-has-api-58d9b44c95-8mpjz                0/2   Init:0/3                278   46h   100.64.16.180   ip-100-64-16-28.ec2.internal    <none>   <none>
dev-oof-has-controller-65c7984774-skg8j         0/1   Init:InvalidImageName   1     46h   100.64.17.214   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-has-data-7f5f84ccfd-9bzdc               0/1   Init:2/4                278   46h   100.64.17.114   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-has-onboard-qcwlh                       0/1   Completed               0     46h   100.64.17.115   ip-100-64-17-16.ec2.internal    <none>   <none>
dev-oof-has-reservation-86cc8c6b64-n9zpz        0/1   Init:2/4                278   46h   100.64.17.194   ip-100-64-17-84.ec2.internal    <none>   <none>
dev-oof-has-solver-d546f6bb7-xrtw4              0/1   Init:2/4                278   46h   100.64.17.219   ip-100-64-17-84.ec2.internal    <none>   <none>
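For the Init:InvalidImageName on dev-oof-has-controller, a sketch of commands (assuming cluster access; the pod name is copied from the listing above, and the Helm release name "dev-oof" is an assumption from the "dev" prefix) to see which image string the pod was actually rendered with — an empty or malformed reference there is what produces this status:

```shell
# Print every container/init-container image reference in the failing pod;
# InvalidImageName means one of these strings is not a valid image reference.
kubectl get pod -n onap dev-oof-has-controller-65c7984774-skg8j \
  -o jsonpath='{range .spec.initContainers[*]}{.name}{"\t"}{.image}{"\n"}{end}{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'

# Cross-check against the values the oof chart was deployed with
# (release name "dev-oof" is a guess based on the "dev" release prefix).
helm -n onap get values dev-oof
```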

vmadmin@ip-100-64-16-122:~/onap_honolulu/oom/kubernetes$ kubectl describe pod -n onap dev-oof-55d8d97977-kn26l
Name:         dev-oof-55d8d97977-kn26l
Namespace:    onap
Priority:     0
Node:         ip-100-64-17-16.ec2.internal/100.64.17.16
Start Time:   Tue, 03 Aug 2021 11:56:08 +0000
Labels:       app=oof
              pod-template-hash=55d8d97977
              release=dev
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Pending
IP:           100.64.17.103
IPs:
  IP:           100.64.17.103
Controlled By:  ReplicaSet/dev-oof-55d8d97977
Init Containers:
  oof-readiness:
    Container ID:  docker://9dcd09ce571d6b6cede92ba2f3f9f7f8927384c5a23d99ad5cbc6d2e57e2982e
    Image:         nexus3.onap.org:10001/onap/oom/readiness:3.0.1
    Image ID:      docker-pullable://nexus3.onap.org:10001/onap/oom/readiness@sha256:317c8a361ae73750f4d4a1b682c42b73de39083f73228dede31fd68b16c089db
    Port:          <none>
    Host Port:     <none>
    Command:
      /app/ready.py
    Args:
      --container-name
      policy-xacml-pdp
    State:          Running
      Started:      Thu, 05 Aug 2021 10:54:02 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 05 Aug 2021 10:43:55 +0000
      Finished:     Thu, 05 Aug 2021 10:54:01 +0000
    Ready:          False
    Restart Count:  279
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7n2f (ro)
  oof-osdf-sms-readiness:
    Container ID:
    Image:         /
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/osdf/secret); echo $resp; sleep 2; done
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7n2f (ro)
Containers:
  oof:
    Container ID:
    Image:         nexus3.onap.org:10001/onap/optf-osdf:3.0.2
    Image ID:
    Port:          8699/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
    Args:
      -c
      grep -v '^$'  /opt/osdf/osaaf/local/org.onap.oof.crt > /tmp/oof.crt
      cat /tmp/oof.crt /opt/app/ssl_cert/intermediate_root_ca.pem /opt/app/ssl_cert/aaf_root_ca.cer >> /opt/osdf/org.onap.oof.crt
      ./osdfapp.sh -x osdfapp.py

    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:        500m
      memory:     1Gi
    Liveness:     tcp-socket :8699 delay=120s timeout=1s period=10s #success=1 #failure=3
    Readiness:    tcp-socket :8699 delay=120s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/app/ssl_cert/aaf_root_ca.cer from dev-oof-onap-certs (rw,path="aaf_root_ca.cer")
      /opt/app/ssl_cert/intermediate_root_ca.pem from dev-oof-onap-certs (rw,path="intermediate_root_ca.pem")
      /opt/osdf/config/common_config.yaml from dev-oof-config (rw,path="common_config.yaml")
      /opt/osdf/config/log.yml from dev-oof-config (rw,path="log.yml")
      /opt/osdf/config/osdf_config.yaml from dev-oof-config (rw,path="osdf_config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s7n2f (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  dev-oof-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-configmap
    Optional:  false
  dev-oof-onap-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dev-oof-onap-certs
    Optional:    false
  default-token-s7n2f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s7n2f
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age                   From     Message
  ----    ------   ----                  ----     -------
  Normal  Pulled   52m                   kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 273.92023ms
  Normal  Pulled   42m                   kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 254.812459ms
  Normal  Pulled   32m                   kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 2.218533555s
  Normal  Pulled   21m                   kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 254.976908ms
  Normal  Pulled   11m                   kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 254.09694ms
  Normal  Pulling  104s (x280 over 46h)  kubelet  Pulling image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1"
  Normal  Pulled   103s                  kubelet  Successfully pulled image "nexus3.onap.org:10001/onap/oom/readiness:3.0.1" in 256.5229ms
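The describe output itself answers the dependency question: the oof-readiness init container is waiting on policy-xacml-pdp, and oof-osdf-sms-readiness loops until AAF SMS returns 200, so OOF will not start until Policy and AAF are healthy. A sketch of how to verify those prerequisites manually (assuming cluster access; the grep patterns are guesses at the pod name prefixes, and the curl line simply replays the probe shown above):

```shell
# Check the components the OOF init containers wait for.
kubectl get pods -n onap | grep -E 'policy|aaf-sms'

# Replay the oof-osdf-sms-readiness probe from any pod inside the cluster:
# the init container loops until this prints 200.
curl -sk -o /dev/null -w '%{http_code}\n' \
  https://aaf-sms.onap:10443/v1/sms/domain/osdf/secret
```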

Regards,
Atif


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#23457): https://lists.onap.org/g/onap-discuss/message/23457
-=-=-=-=-=-=-=-=-=-=-=-

