Hello, as recommended by Tina, I have added Gary to the conversation.
Thank you for the fix, Harry. It looks like the nbi_docker value was also related to the problem with the python packages. The VMs do all start up now, but I have problems connecting to them over ssh so that I can look through the logs. Even the last command of the deployment script does not complete correctly, as it returns a permission problem with the private key. So far I am not sure whether it is a problem with the ssh config or whether the VMs are not loading the proper ssh public key. Have you encountered this problem? Is there any specific ssh configuration necessary for the pod or for ONAP?

Thank you very much,
Richard

________________________________
From: huangxiangyu <huangxiang...@huawei.com>
Sent: 05 November 2018 08:11:07
To: Elias Richard; Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

Hi Richard,

I have removed the pip.conf on host1. pip will still check the local repo first, but it can now install packages from the internet. There seems to be no stuck stack at this point, and I believe the OpenStack components are working correctly. As for nbi_docker, I haven't met this problem before; maybe you need to look for an answer in the ONAP community.

Regards,
Harry

From: Elias Richard [mailto:richard.el...@tieto.com]
Sent: 30 October 2018 21:36
To: Klozik Martin <martin.klo...@tieto.com>; huangxiangyu <huangxiang...@huawei.com>; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

Hi Harry,

I connected to the host and ran the commands, but there were problems with the python packages. A proxy server was set for the pip installation, and it could not download the packages. Was there a reason a proxy was necessary for pip? If the packages are already installed, the check could probably be skipped. There are also some parts of OpenStack that seem to be stuck after the reboot (for example, a stack with status DELETE_FAILED).
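[Editor's note: for the private-key permission error mentioned at the top of the thread, a minimal sketch of the usual fix. ssh refuses keys that are readable by group or others; the example key path and VM address are assumptions, not taken from the thread.]

```shell
# Ensure a private key has the strict permissions ssh requires (600 or 400).
# The key path passed in is whatever the deployment script uses.
fix_key_perms() {
    key="$1"
    mode=$(stat -c '%a' "$key")      # octal permission bits (GNU stat, Linux)
    case "$mode" in
        600|400) ;;                  # already strict enough for ssh
        *) chmod 600 "$key" ;;       # tighten; ssh rejects group/world-readable keys
    esac
}

# Example usage (hypothetical key path and VM address):
# fix_key_perms ~/.ssh/onap_key
# ssh -i ~/.ssh/onap_key -v ubuntu@<vm-ip>   # -v shows which key is offered and why auth fails
```

If the key permissions are already correct, the next suspect is whether the VMs received the right public key via their OpenStack keypair, which `ssh -v` output usually makes clear.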
The teardown script tries to remove this stack, but as it is stuck, there could be problems with the deployment. The creation script itself returns this error: "ERROR: The Parameter (nbi_docker) was not provided." I was not able to find this parameter in any of the templates in ~/onap/integration/test/ete/labs/, so I am not sure what is missing. I started analyzing the problem, but I did not want to change anything before asking you for help. Could you please take a look at it and share any insights you have?

Thank you very much,
Richard

________________________________
From: Klozik Martin
Sent: 29 October 2018 14:36:37
To: huangxiangyu; opnfv-tech-discuss@lists.opnfv.org
Cc: Elias Richard
Subject: Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

Adding Richard to the loop...
--Martin

________________________________
From: opnfv-tech-discuss@lists.opnfv.org <opnfv-tech-discuss@lists.opnfv.org> on behalf of Klozik Martin <martin.klo...@tieto.com>
Sent: Wednesday, 24 October 2018 8:00
To: huangxiangyu; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

Hi Harry,

Thank you for the hints; we will go ahead with the ONAP re-deployment as suggested.

Best Regards,
Martin

________________________________
From: huangxiangyu <huangxiang...@huawei.com>
Sent: Wednesday, 24 October 2018 5:01:37
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [Auto] huawei-pod12 ONAP status

Hi Martin,

On host1 you can run the following commands to recreate the ONAP stack:

cd /root/onap/integration/test/ete
source ./labs/huawei-shanghai/onap-openrc
./scripts/deploy-onap.sh huawei-shanghai

These steps will get all the ONAP VMs spinning up, but the scripts they call are pulled from the internet, and that is where some of the problems lie. Be aware that there are some docker tag mismatches; you will need to manually check the install.log inside the VMs to fix them and then call xx_vm_init.sh.

Regards,
Harry

From: opnfv-tech-discuss@lists.opnfv.org [mailto:opnfv-tech-discuss@lists.opnfv.org] on behalf of Klozik Martin
Sent: 23 October 2018 14:16
To: huangxiangyu <huangxiang...@huawei.com>; opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

Hi Harry,

After the power down of huawei-pod12 we are facing an issue with the ONAP installation. The ONAP VMs are reported as running by OpenStack, but all of them failed to boot the kernel and are hanging in the initramfs. Paul did some investigation and found out that it is possible to boot a VM manually, which means the kernel itself boots properly. He did some quick checks, but it is not clear to us why the VMs can't be booted properly by OpenStack. Have you seen a similar issue in the past, e.g. on any other Huawei server after the recent power down? We suspect that some OS configuration performed during the installation was not persisted in the configuration files and was thus lost after the pod was powered down. We will be grateful for any hints, so we can bring the ONAP installation up and running again. I'm also wondering if you have any notes about the OS and ONAP installation.
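[Editor's note: Harry's advice earlier in the thread about docker tag mismatches in the VMs' install.log can be turned into a quick check. This is a sketch; the error strings are common docker pull failure messages, not an exhaustive list, and the log path in the example is an assumption.]

```shell
# Scan an ONAP VM's install log for failed docker image pulls,
# the usual symptom of a docker tag mismatch.
scan_pull_errors() {
    log="$1"
    # -E: extended regex, -n: print line numbers so the tag can be fixed in place
    grep -En 'manifest unknown|manifest for .* not found|pull access denied' "$log"
}

# Example (run inside a VM; the path is a guess at where xx_vm_init.sh logs):
# scan_pull_errors /opt/install.log
```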
We can access the bash history, but it is not clear whether all the performed activities are really necessary. It would be great if you could document the steps required to get the OS deployed. I suppose that for documentation of the ONAP installation we have to ask Gary, am I right?

Thank you,
Martin
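[Editor's note: for the missing nbi_docker parameter reported earlier in the thread, a sketch for locating where a Heat parameter is (or isn't) declared under the labs/ directory, so one can see whether it should come from an environment file or a template default. The function name is illustrative, not from the thread.]

```shell
# List every file under a directory tree that mentions the given parameter.
find_param() {
    dir="$1"; param="$2"
    # -r: recurse, -l: file names only, -n would print lines instead
    grep -rln -- "$param" "$dir" 2>/dev/null
}

# Example with the path from the thread:
# find_param ~/onap/integration/test/ete/labs/ nbi_docker
```

An empty result would match Richard's observation that the parameter does not appear in any template, suggesting it is expected from a generated environment file instead.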
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22285): https://lists.opnfv.org/g/opnfv-tech-discuss/message/22285
Mute This Topic: https://lists.opnfv.org/mt/27566007/21656
Group Owner: opnfv-tech-discuss+ow...@lists.opnfv.org
Unsubscribe: https://lists.opnfv.org/g/opnfv-tech-discuss/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-