Hi, Seshu/Team,
How are you?
I ran into the following SO pod recovery failures. Any idea how to
bring them back without redeploying the entire SO component? Thanks a lot!
ubuntu@onap-rancher-vm:~$ kubectl get po -n onap | grep -i dev-so
dev-so-64f474fb4b-ns94t                       2/2   Running     0     22d
dev-so-admin-cockpit-669f97bb56-n47qs         1/1   Running     0     22d
dev-so-bpmn-infra-78d98f7cbd-c297v            2/2   Running     0     19d
dev-so-catalog-db-adapter-c895cdcd9-ldx5t     0/1   Init:2/3    477   3d11h
dev-so-cnf-adapter-c48d4d844-llllj            1/1   Running     0     3d11h
dev-so-etsi-nfvo-ns-lcm-57ccd6888d-wz7q4      1/1   Running     0     3d11h
dev-so-etsi-sol003-adapter-5459d685c9-jkqd2   1/1   Running     0     3d11h
dev-so-etsi-sol005-adapter-64ff7c46d4-ghvn7   0/1   Init:2/3    477   3d11h
dev-so-nssmf-adapter-577cbcbf96-g4z57         0/1   Init:2/3    478   3d11h
dev-so-oof-adapter-74794c4f64-vt4jw           2/2   Running     0     3d11h
dev-so-openstack-adapter-5769c9fc65-mtmt6     2/2   Running     0     22d
dev-so-request-db-adapter-589f8f4c4f-xbhrn    0/1   Init:2/3    477   3d11h
dev-so-sdc-controller-d99c85467-tmqw6         0/2   Init:2/3    477   3d11h
dev-so-sdnc-adapter-674b56df8b-rgbtc          2/2   Running     0     22d
What happened:
Three days ago, one of my local Rancher worker nodes was reset for some
reason, which caused all pods running on that node to restart. The SO pods
shown above stuck in Init:2/3 never came back; they still fail even after I
manually restarted them. Several other pods, such as dev-ejbca, dev-holmes,
and some DCAE pods, are in a similar state. I do not want to redeploy my
entire local ONAP instance just yet, since I do not want to lose its data
and configuration for now.
Any ideas on how to save the SO?
Thanks a lot.
Xin Miao
Solution Engineering
Fujitsu Network Communication
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#23521): https://lists.onap.org/g/onap-discuss/message/23521
Mute This Topic: https://lists.onap.org/mt/85283270/21656
Group Owner: [email protected]
Unsubscribe: https://lists.onap.org/g/onap-discuss/unsub
[[email protected]]
-=-=-=-=-=-=-=-=-=-=-=-