Hello,
I have deployed DMaaP on a 3-node (124 GB x 3) Kubernetes cluster, but the
two pods below are not running. I have deployed dev-dmaap with Helm several
times and increased the pods' initial delay and timeout seconds, but these
two pods never recover: the dmaap-dr-prov container cannot be created.
Is there a solution for this?
Thanks,
dev-dmaap-dmaap-dr-node-658b8ddffb-b4mdp   0/1   Init:0/1           2   28m
dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q   0/1   CrashLoopBackOff   2   1m
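The dr-node pod is stuck in its init container, which (as the describe output below shows for dr-prov) waits for the database. For anyone hitting the same symptom, a hedged first check of that dependency (standard kubectl; pod names and the `app=dmaap-dr-db` label are taken from the output in this mail and may differ in other deployments):

```shell
# Confirm the database the readiness init containers wait for is actually up
kubectl -n onap get pods -l app=dmaap-dr-db

# Recent log lines from the database pod, if it exists
kubectl -n onap logs -l app=dmaap-dr-db --tail=50
```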
ubuntu@kub1:~$ kubectl describe pod dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q -n onap
Name: dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q
Namespace: onap
Node: kub1/
Start Time: Wed, 12 Dec 2018 11:56:22 +0000
Labels: app=dmaap-dr-prov
pod-template-hash=1049162970
release=dev-dmaap
Annotations: <none>
Status: Running
IP: 10.42.1.30
Controlled By: ReplicaSet/dev-dmaap-dmaap-dr-prov-548f5b6fc4
Init Containers:
dmaap-dr-prov-readiness:
Container ID:
docker://f817dadb56579a473a30fa097a55dc5be0a9b3abe01b4c657735f388e48d4538
Image: oomk8s/readiness-check:2.0.0
Image ID:
docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
Port: <none>
Host Port: <none>
Command:
/root/ready.py
Args:
--container-name
dmaap-dr-db
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 12 Dec 2018 11:56:59 +0000
Finished: Wed, 12 Dec 2018 11:57:06 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5g2vz
(ro)
Containers:
dmaap-dr-prov:
Container ID:
docker://3db0d47f44816b1029c027191fd9b019e1357d4df744d93adb5e16435dac8022
Image: nexus3.onap.org:10001/onap/dmaap/datarouter-prov:1.0.3
Image ID:
docker-pullable://nexus3.onap.org:10001/onap/dmaap/datarouter-prov@sha256:f5703fdee317f45c402956233efd941fd346f3122bb56a93a2635079e0c6b098
Ports: 8080/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 12 Dec 2018 12:01:12 +0000
Finished: Wed, 12 Dec 2018 12:01:15 +0000
Ready: False
Restart Count: 5
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 500m
memory: 1Gi
Liveness: tcp-socket :8080 delay=200s timeout=1s period=10s #success=1
#failure=3
Readiness: tcp-socket :8080 delay=200s timeout=1s period=10s #success=1
#failure=3
Environment: <none>
Mounts:
/etc/localtime from localtime (rw)
/opt/app/datartr/etc/provserver.properties from prov-props (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5g2vz
(ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
prov-props:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-dmaap-dmaap-dr-prov-prov-props-configmap
Optional: false
dr-prov-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-5g2vz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5g2vz
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m default-scheduler Successfully
assigned onap/dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q to kub1
Normal Pulling 6m kubelet, kub1 pulling image
"oomk8s/readiness-check:2.0.0"
Normal Pulled 5m kubelet, kub1 Successfully pulled
image "oomk8s/readiness-check:2.0.0"
Normal Created 5m kubelet, kub1 Created container
Normal Started 5m kubelet, kub1 Started container
Normal Pulling 4m (x4 over 5m) kubelet, kub1 pulling image
"nexus3.onap.org:10001/onap/dmaap/datarouter-prov:1.0.3"
Normal Pulled 4m (x4 over 5m) kubelet, kub1 Successfully pulled
image "nexus3.onap.org:10001/onap/dmaap/datarouter-prov:1.0.3"
Normal Created 4m (x4 over 5m) kubelet, kub1 Created container
Normal Started 4m (x4 over 5m) kubelet, kub1 Started container
Warning BackOff 52s (x21 over 5m) kubelet, kub1 Back-off restarting
failed container
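The probes in the output above use `delay=200s timeout=1s`. As a sketch of the kind of Helm values override mentioned earlier for lengthening them, assuming the OOM chart exposes standard `liveness`/`readiness` keys (the exact key names are an assumption and may differ between chart versions):

```yaml
# Hypothetical values override for the dmaap-dr-prov chart;
# key names depend on the OOM chart version in use.
liveness:
  initialDelaySeconds: 300
  timeoutSeconds: 5
readiness:
  initialDelaySeconds: 300
  timeoutSeconds: 5
```

Note, though, that the Last State above shows the container itself exiting with code 1 about three seconds after starting, so its own logs are the more telling place to look.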
ubuntu@kub1:~$ kubectl exec -it dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q -c dmaap-dr-prov -n onap -- /bin/bash
error: unable to upgrade connection: container not found ("dmaap-dr-prov")
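`kubectl exec` cannot attach to a container that is not currently running, but the logs of the last crashed instance can still be read (standard kubectl flags; pod name from the output above):

```shell
# Logs from the previous (crashed) instance of the container
kubectl -n onap logs dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q \
  -c dmaap-dr-prov --previous

# All events recorded for this pod
kubectl -n onap get events \
  --field-selector involvedObject.name=dev-dmaap-dmaap-dr-prov-548f5b6fc4-8mp9q
```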
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#14438): https://lists.onap.org/g/onap-discuss/message/14438
-=-=-=-=-=-=-=-=-=-=-=-