There is a parallel discussion on this at https://lists.onap.org/g/onap-discuss/topic/casablanca_kafka_s_pod_in/29665079?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,29665079
Quoting my answer there: keep in mind that a helm undeploy only works as well as the artifact tree that Kubernetes manages. If some components are out of band (which some are), then you need to do some manual cleanup, as Michal mentions. A full make build and purge will be required if you are experiencing leftover config/artifacts, as currently some PVs are out of bounds and will require manual cleaning outside of the namespace delete; this includes wiping /dockerdata-nfs, as some config jobs will not rerun. It is also important to allow some pods to fully complete before attempting to use the system, as healthcheck does not necessarily verify DB functionality, just HTTP 200 readiness. The DNS service routing in k8s should not need to be modified out-of-band.

Curious as to your deployment model. If running on multiple VMs, make sure the ::1/0 and 0.0.0.0/0 open CIDR security groups are set, or step back and run everything co-located to verify your k8s cluster on a single VM. Also verify you are running the Rancher-bootstrapped version of Kubernetes that most of us run as the RI. Verify the OS: Ubuntu 16 is usually OK, but RHEL 7.6 will require extra network and firewall config.

Verify everything is purged via https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-RemoveaDeployment:

    sudo helm undeploy $ENVIRON --purge
    kubectl delete namespace onap
    sudo helm delete --purge onap
    kubectl delete pv --all
    kubectl delete pvc --all
    kubectl delete secrets --all
    kubectl delete clusterrolebinding --all
    sudo rm -rf /dockerdata-nfs/onap-<pod>

Then rebuild (see https://git.onap.org/logging-analytics/tree/deploy/cd.sh#n189):

    cd oom/kubernetes/
    sudo make clean
    sudo make all
    sudo make $ENVIRON

Deploy (use integration's cloud override yaml as well), one --set at a time if you like, empty first:

    sudo helm deploy onap local/onap --namespace $ENVIRON -f $DISABLE_CHARTS_YAML --verbose

Then dmaap and the rest in sequence:

    sudo helm deploy onap local/onap --namespace $ENVIRON -f $DISABLE_CHARTS_YAML -f $DEV0_YAML $APPENDABLE_ENABLED_FLAGS --verbose

Do a helm list to check for a failed portal deployment; also check the helm deploy logs under the ~/.helm/plugins/deploy directory.

/michael

From: [email protected] <[email protected]> On Behalf Of Michal Ptacek
Sent: Friday, February 1, 2019 5:25 AM
To: [email protected]; [email protected]
Subject: Re: [onap-discuss] re-deployment procedure

Thanks David for raising this topic; we also hit similar issues in offline deployments. For us, redeploying using "undeploy / deploy" ends up with the new environment in FAILED state, most likely because of some dependencies. So I guess it's about finding the individual k8s artifacts not cleaned after "undeploy" before attempting a new "deploy". Undeploy/deploy works for us if we do it with the full environment.

Also, the procedure of upgrading the current setup with --set sdc.enabled=false (and later with --set sdc.enabled=true) seems to work well for us, as described in the ONAP wiki: https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins

regards,
Michal

PS: some notes from the undeploy I am doing:

    helm undeploy dev --purge
    kubectl delete namespace onap
    kubectl delete persistentvolumes --all
    kubectl delete secrets --all
    kubectl delete clusterrolebindings --all
    # clean /dockerdata-nfs before starting new deploy

From: [email protected] [mailto:[email protected]] On Behalf Of David Darbinyan
Sent: Friday, February 1, 2019 8:57 AM
To: [email protected]
Subject: [onap-discuss] re-deployment procedure

Hi gurus! Please explain where I am wrong. According to the current documentation, I deployed the Casablanca release with the following command (from /root/oom/kubernetes):

    # helm deploy development local/onap --namespace onap

Some of the components, e.g. "portal", gave STATUS Error, Init:Error or CrashLoopBackOff. For example:

    # kubectl get pods --namespace onap
    ...
    development-aaf-aaf-sshsm-testca-fqxd6             0/1   Error              0     13h
    development-aaf-aaf-sshsm-testca-g9d4w             0/1   Error              0     10h
    development-aaf-aaf-sshsm-testca-gx4gg             0/1   Init:Error         0     19h
    development-aaf-aaf-sshsm-testca-ww6rs             0/1   Init:Error         0     18h
    ...
    development-dmaap-dmaap-dr-prov-6d65874bdb-hkj5f   0/1   CrashLoopBackOff   109   18h
    ...

(portal is already removed so I can't paste it here, sorry)

So we decided to reinstall them. Steps:

    # helm undeploy development-portal --purge
    # helm deploy development-portal local/portal --namespace onap -f portal/values.yaml
    fetching local/portal
    release "development-portal" deployed

BUT after it I cannot see "portal" in any pod list, or in any deployment list:

    # kubectl get pods --namespace onap | grep portal
    development-sdnc-sdnc-portal-5f74b7fdc7-z4xc9   1/1   Running   0   19h
    # kubectl get services | grep portal
    EMPTY

Did I re-deploy "portal" correctly? How can I remove or install (deploy) an individual component (portal, for example)?

Thanks
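A small sketch related to the point above about letting pods fully complete before trusting a deployment. This is my own illustration, not an ONAP or OOM tool: a filter that lists pods whose STATUS column is neither Running nor Completed (e.g. Error, Init:Error, CrashLoopBackOff, as in David's listing), suitable for piping `kubectl get pods` output into.

```shell
#!/bin/sh
# Sketch (hypothetical helper, not part of OOM): list unhealthy pods from
# `kubectl get pods` output. The function only does text filtering, so it
# can be tested offline without a cluster.
unhealthy_pods() {
  # Skip the header row; the STATUS column is field 3 in the default
  # `kubectl get pods` table. Anything other than Running or Completed
  # is treated as unhealthy and the pod name (field 1) is printed.
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# Example usage against a live cluster (not run here):
#   kubectl get pods --namespace onap | unhealthy_pods
```

An empty result from the filter is a necessary but not sufficient check: as noted above, healthcheck-style 200 readiness does not prove DB functionality, so application-level verification is still needed.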
