Hi Team,

I am doing a PoC on ONAP. I am deploying the following components:
cassandra, mariadb-galera, aaf, aai, dmaap, robot, sdc, sdnc.

I am deploying ONAP using this guide:
https://docs.onap.org/projects/onap-oom/en/jakarta/oom_user_guide.html#deploy
The command we are using is the following:

helm deploy development local/onap --namespace onap -f override.yaml --set global.masterPassword=password

I am attaching the file for your reference.

Kindly note that we have disabled TLS and CMPv2 as follows:

# Enabling CMPv2

cmpv2Enabled: false

# TLS
# Set to false if you want to disable TLS for NodePorts. Be aware that this
# will loosen your security.
# If set, this element forces TLS on or off, even if serviceMesh.tls is set.
tlsEnabled: false

Enabled components are as follows:

[root@ONAP kubernetes]# cat override.yaml  | grep -i -B1 "enabled: true"
metrics:
enabled: true
--
# Keep it enabled in production
aafEnabled: true
--
aai:
enabled: true
--
aaf:
enabled: true
--
cassandra:
enabled: true
--
dmaap:
enabled: true
--
mariadb-galera:
enabled: true
--
robot:
enabled: true
--
sdc:
enabled: true
--
sdnc:
enabled: true
--
cert-wrapper:
enabled: true
repository-wrapper:
enabled: true
roles-wrapper:
enabled: true
[root@ONAP kubernetes]#
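For readability, the grep output above corresponds to an override.yaml shaped roughly like the following. This is my hand-reconstructed sketch (grep flattens the indentation, so the exact nesting may differ slightly from the attached file):

```yaml
# Sketch of the relevant override.yaml sections (reconstructed; see attachment)
global:
  aafEnabled: true      # Keep it enabled in production
  cmpv2Enabled: false   # CMPv2 disabled
  tlsEnabled: false     # TLS for NodePorts disabled

aaf:
  enabled: true
aai:
  enabled: true
cassandra:
  enabled: true
dmaap:
  enabled: true
mariadb-galera:
  enabled: true
robot:
  enabled: true
sdc:
  enabled: true
sdnc:
  enabled: true
```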

The message-router pod is stuck in the initialising state, due to which some
of the DMaaP and SDC pods are failing.

[root@ONAP ~]# k get pods -n onap | grep "development-message"
development-message-router-0                            0/2     Init:0/3     0  
             22m
[root@ONAP ~]# k get pods -n onap | grep "development-sdc"
development-sdc-be-c6b67dd9d-jnhg6                      0/1     Init:2/5     0  
             21m
development-sdc-be-config-backend--1-h6ltj              0/1     Init:0/1     0  
             21m
development-sdc-cs-config-cassandra--1-dxxdk            0/1     Completed    0  
             21m
development-sdc-fe-57bfb57c76-7mj56                     0/1     Init:2/4     0  
             21m
development-sdc-helm-validator-96799c9c-cj6mb           1/1     Running      0  
             21m
development-sdc-onboarding-be-7765c6598d-v9bhc          1/1     Running      0  
             21m
development-sdc-onboarding-be-cassandra-init--1-t6pwd   0/1     Completed    0  
             21m
development-sdc-wfd-be-f87699b54-r8x9d                  1/1     Running      0  
             21m
development-sdc-wfd-be-workflow-init--1-2ltc8           0/1     Completed    0  
             12m
development-sdc-wfd-be-workflow-init--1-bbmts           0/1     Error        0  
             21m
development-sdc-wfd-fe-6c45dfff55-zgmf8                 1/1     Running      0  
             21m
[root@ONAP ~]# k get pods -n onap | grep "development-dmaap"
development-dmaap-bc-79b4b5bf6c-xm7hk                   0/1     Init:5/6     0  
             22m
development-dmaap-bc-dmaap-provisioning--1-8zt9p        0/1     Init:Error   0  
             12m
development-dmaap-bc-dmaap-provisioning--1-gq8hh        0/1     Init:Error   0  
             22m
development-dmaap-bc-dmaap-provisioning--1-gr8ck        0/1     Init:0/1     0  
             104s
development-dmaap-dr-mariadb-init-config-job--1-s85n8   0/1     Completed    0  
             22m
development-dmaap-dr-node-0                             1/1     Running      0  
             22m
development-dmaap-dr-prov-858f67d998-d7lrp              1/1     Running      0  
             22m
[root@ONAP ~]#

Please find the output of "kubectl describe pods development-message-router-0
-n onap" below:

[root@ONAP ~]# k describe pods development-message-router-0 -n onap
Name:           development-message-router-0
Namespace:      onap
Priority:       0
Node:           onap/40.40.40.55
Start Time:     Thu, 25 Aug 2022 08:10:16 -0400
Labels:         app.kubernetes.io/instance=development
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=message-router
controller-revision-hash=development-message-router-5c9d585855
helm.sh/chart=message-router-10.0.0
statefulset.kubernetes.io/pod-name=development-message-router-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/development-message-router
Init Containers:
dmaap-mr-cert-initializer-readiness:
Container ID:
Image:         nexus3.onap.org:10001/onap/oom/readiness:3.0.1
Image ID:
Port:          <none>
Host Port:     <none>
Command:
/app/ready.py
Args:
--container-name
aaf-locate
--container-name
aaf-cm
--container-name
aaf-service
State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Limits:
cpu:     100m
memory:  100Mi
Requests:
cpu:     3m
memory:  20Mi
Environment:
NAMESPACE:  onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmm9s (ro)
message-router-aaf-config:
Container ID:
Image:         nexus3.onap.org:10001/onap/aaf/aaf_agent:2.1.20
Image ID:
Port:          <none>
Host Port:     <none>
Command:
sh
-c
/opt/app/aaf_config/bin/agent.sh
. /opt/app/aaf_config/bin/retrieval_check.sh
/opt/app/aaf_config/bin/aaf-add-config.sh

State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Environment:
APP_FQI:                   [email protected]
aaf_locate_url:            https://aaf-locate.onap:8095
aaf_locator_container_ns:  onap
aaf_locator_container:     oom
aaf_locator_fqdn:          dmaap-mr
aaf_locator_app_ns:        org.osaaf.aaf
DEPLOY_FQI:                <set to the key 'login' in secret 
'development-dmaap-mr-cert-initializer-deployer-creds'>     Optional: false
DEPLOY_PASSWORD:           <set to the key 'password' in secret 
'development-dmaap-mr-cert-initializer-deployer-creds'>  Optional: false
cadi_longitude:            -122.26147
cadi_latitude:             37.78187
aaf_locator_public_fqdn:   mr.dmaap.onap.org
Mounts:
/opt/app/aaf_config/bin/aaf-add-config.sh from aaf-add-config 
(rw,path="aaf-add-config.sh")
/opt/app/aaf_config/bin/retrieval_check.sh from aaf-add-config 
(rw,path="retrieval_check.sh")
/opt/app/aaf_config/cert/truststoreONAP.p12.b64 from aaf-agent-certs 
(rw,path="truststoreONAP.p12.b64")
/opt/app/aaf_config/cert/truststoreONAPall.jks.b64 from aaf-agent-certs 
(rw,path="truststoreONAPall.jks.b64")
/opt/app/osaaf from development-message-router-aaf-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmm9s (ro)
message-router-update-config:
Container ID:
Image:         docker.io/dibi/envsubst:1
Image ID:
Port:          <none>
Host Port:     <none>
Command:
sh
Args:
-c
export $(cat /appl/dmaapMR1/bundleconfig/etc/sysprops/local/mycreds.prop | 
xargs -0);
cd /config-input  && for PFILE in `ls -1 .`; do envsubst <${PFILE} 
>/config/${PFILE}; done

State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/appl/dmaapMR1/bundleconfig/etc/sysprops from 
development-message-router-aaf-config (rw)
/config from jetty (rw)
/config-input from etc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmm9s (ro)
Containers:
srimzi-zk-entrance:
Container ID:
Image:         docker.io/scholzj/zoo-entrance:latest
Image ID:
Port:          2181/TCP
Host Port:     0/TCP
Command:
/opt/stunnel/stunnel_run.sh
State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Liveness:       exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s 
timeout=5s period=10s #success=1 #failure=3
Readiness:      exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s 
timeout=5s period=10s #success=1 #failure=3
Environment:
LOG_LEVEL:                  debug
STRIMZI_ZOOKEEPER_CONNECT:  development-strimzi-zookeeper-client:2181
Mounts:
/etc/cluster-ca-certs/ from cluster-ca-certs (rw)
/etc/cluster-operator-certs/ from cluster-operator-certs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmm9s (ro)
message-router:
Container ID:
Image:         nexus3.onap.org:10001/onap/dmaap/dmaap-mr:1.3.2
Image ID:
Port:          3904/TCP
Host Port:     0/TCP
Command:
sh
Args:
-c
cp /jetty-config/ajsc-jetty.xml /appl/dmaapMR1/etc/
cp /jetty-config/cadi.properties 
/appl/dmaapMR1/bundleconfig/etc/sysprops/local/cadi.properties
/bin/sh /appl/startup.sh

State:          Waiting
Reason:       PodInitializing
Ready:          False
Restart Count:  0
Liveness:       tcp-socket :api delay=10s timeout=1s period=10s #success=1 
#failure=3
Readiness:      tcp-socket :api delay=10s timeout=1s period=10s #success=1 
#failure=3
Startup:        tcp-socket :api delay=10s timeout=1s period=10s #success=1 
#failure=70
Environment:
JAASLOGIN:   <set to the key 'sasl.jaas.config' in secret 
'strimzi-kafka-admin'>  Optional: false
SASLMECH:    scram-sha-512
enableCadi:  true
Mounts:
/appl/dmaapMR1/bundleconfig/etc/appprops/MsgRtrApi.properties from appprops 
(rw,path="MsgRtrApi.properties")
/appl/dmaapMR1/bundleconfig/etc/logback.xml from logback (rw,path="logback.xml")
/appl/dmaapMR1/bundleconfig/etc/sysprops from 
development-message-router-aaf-config (rw)
/appl/dmaapMR1/bundleconfig/etc/sysprops/sys-props.properties from sys-props 
(rw,path="sys-props.properties")
/appl/dmaapMR1/etc/runner-web.xml from etc (rw,path="runner-web.xml")
/etc/localtime from localtime (ro)
/jetty-config from jetty (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmm9s (ro)
Conditions:
Type              Status
Initialized       False
Ready             False
ContainersReady   False
PodScheduled      True
Volumes:
development-message-router-aaf-config:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     Memory
SizeLimit:  <unset>
aaf-agent-certs:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-cert-wrapper-certs
Optional:  false
aaf-add-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-dmaap-mr-cert-initializer-add-config
Optional:  false
localtime:
Type:          HostPath (bare host directory volume)
Path:          /etc/localtime
HostPathType:
appprops:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-message-router-msgrtrapi-prop-configmap
Optional:  false
etc:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-message-router-etc
Optional:  false
logback:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-message-router-logback-xml-configmap
Optional:  false
sys-props:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      development-message-router-sys-props
Optional:  false
jetty:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:  <unset>
cluster-operator-certs:
Type:        Secret (a volume populated by a Secret)
SecretName:  development-strimzi-cluster-operator-certs
Optional:    false
cluster-ca-certs:
Type:        Secret (a volume populated by a Secret)
SecretName:  development-strimzi-cluster-ca-cert
Optional:    false
kube-api-access-vmm9s:
Type:                    Projected (a volume that contains injected data from 
multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists 
for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason       Age                   From               Message
----     ------       ----                  ----               -------
Normal   Scheduled    24m                   default-scheduler  Successfully 
assigned onap/development-message-router-0 to onap
Warning  FailedMount  24m                   kubelet            
MountVolume.SetUp failed for volume "appprops" : failed to sync configmap 
cache: timed out waiting for the condition
Warning  FailedMount  24m                   kubelet            
MountVolume.SetUp failed for volume "cluster-operator-certs" : failed to sync 
secret cache: timed out waiting for the condition
Warning  FailedMount  24m                   kubelet            
MountVolume.SetUp failed for volume "logback" : failed to sync configmap cache: 
timed out waiting for the condition
Warning  FailedMount  24m                   kubelet            
MountVolume.SetUp failed for volume "aaf-add-config" : failed to sync configmap 
cache: timed out waiting for the condition
Warning  FailedMount  24m                   kubelet            
MountVolume.SetUp failed for volume "sys-props" : failed to sync configmap 
cache: timed out waiting for the condition
Warning  FailedMount  24m (x2 over 24m)     kubelet            
MountVolume.SetUp failed for volume "cluster-ca-certs" : failed to sync secret 
cache: timed out waiting for the condition
Warning  FailedMount  24m (x2 over 24m)     kubelet            
MountVolume.SetUp failed for volume "etc" : failed to sync configmap cache: 
timed out waiting for the condition
Warning  FailedMount  18m (x9 over 24m)     kubelet            
MountVolume.SetUp failed for volume "cluster-ca-certs" : secret 
"development-strimzi-cluster-ca-cert" not found
Warning  FailedMount  14m (x12 over 24m)    kubelet            
MountVolume.SetUp failed for volume "cluster-operator-certs" : secret 
"development-strimzi-cluster-operator-certs" not found
Warning  FailedMount  4m19s (x12 over 22m)  kubelet            (combined from 
similar events): Unable to attach or mount volumes: unmounted 
volumes=[cluster-operator-certs cluster-ca-certs], unattached volumes=[etc 
appprops aaf-add-config jetty cluster-operator-certs kube-api-access-vmm9s 
development-message-router-aaf-config cluster-ca-certs sys-props 
aaf-agent-certs localtime logback]: timed out waiting for the condition
[root@ONAP ~]#
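The FailedMount events above point at two missing Strimzi secrets (development-strimzi-cluster-ca-cert and development-strimzi-cluster-operator-certs). A quick way to check whether the Strimzi operator ever created them (a sketch, using the secret names taken from the events):

```shell
# Check whether the secrets the pod is waiting for exist in the onap namespace
kubectl get secret -n onap development-strimzi-cluster-ca-cert \
    development-strimzi-cluster-operator-certs

# Check whether the Strimzi Kafka custom resource exists and its pods are up
kubectl get kafka -n onap
kubectl get pods -n onap | grep strimzi
```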

We found the following link:

https://lists.onap.org/g/onap-discuss/topic/onap_istanbul_sdnc/89018472?p=,,,20,0,0,0::recentpostdate/sticky,,,20,0,0,89018472,previd=0,nextid=1644331819719447215&previd=0&nextid=1644331819719447215

This link states that we need to install Portal as well, as it is needed by
cert-manager, which generates certificates on startup.
Is there any way to do this step without enabling Portal, or can we perhaps do
it manually?

Further, we have the cert-manager and Strimzi pods running, but they are
running in a different namespace, as suggested in this link:
https://docs.onap.org/projects/onap-oom/en/jakarta/oom_user_guide.html#deploy
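Since the message-router pod mounts the Strimzi secrets from the onap namespace, it may be worth confirming which namespaces the Strimzi and cert-manager resources actually ended up in (a quick check; Strimzi creates its certificate secrets in the namespace of the Kafka custom resource):

```shell
# List Strimzi and cert-manager pods across all namespaces
kubectl get pods -A | grep -Ei 'strimzi|cert-manager'

# See in which namespace the Strimzi-generated secrets were created;
# the message-router pod expects them in the onap namespace
kubectl get secret -A | grep strimzi
```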

Could anyone please help me out?

Thanks and regards,
Kushagra Gupta




Attachment: override.yaml
Description: Binary data
