Hello William,
Thank you very much. Once I properly set up NFS on both nodes, all AAF pods are up and running.
The only pod not working properly from k8s level is:
onap dev-aai-champ-58fb6954cf-z84nw 0/1 Running 0 21m
but it has started and there are no errors in its log, so I need to troubleshoot this further ...
I think this NFS part is crucial, especially since in Beijing it is not possible to deploy all pods on a single compute node.
As I understand it, some pods may be scheduled on one compute node and others on another, and without centralized handling of the /dockerdata-nfs/ directory there can be gaps in the configuration they require/expect.
I think this has to be added to the guide ...
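For reference, a minimal sketch of the kind of NFS setup the master/slave scripts perform, assuming an Ubuntu host; package names, export options, and the master IP placeholder are assumptions, not taken from the actual ONAP scripts:

```shell
# On the master node: export /dockerdata-nfs (options are an assumed example).
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /dockerdata-nfs
echo "/dockerdata-nfs *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each slave node: mount the share from the master (replace <master-ip>).
sudo apt-get install -y nfs-common
sudo mkdir -p /dockerdata-nfs
sudo mount -t nfs <master-ip>:/dockerdata-nfs /dockerdata-nfs
```

With this in place, every pod sees the same /dockerdata-nfs/ contents regardless of which compute node it lands on.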
Thanks,
Michal
--------- Original Message ---------
Sender : William Kurkian <[email protected]>
Date : 2018-05-15 20:55 (GMT+1)
Title : Re: [onap-discuss] Environment for Deploying ONAP
To : Michal Ptacek<[email protected]>
CC : null<[email protected]>, null<[email protected]>, null<[email protected]>, null<[email protected]>
Hi Michal,
In our deployment, we have two hosts. We had not set up the network file share slave on the second VM; the first one had it set up by a script we used.
The aaf-locate pod was failing, and a number of other aaf pods were failing due to this dependency, and possibly other issues.
Once we set up the network share on the slave it started working. In our case we redeployed, though I don't think you should have to. I used the NFS slave script from this page: https://wiki.onap.org/display/DW/ONAP+on+Windriver+ONAP+Developer+Cloud#ONAPonWindriverONAPDeveloperCloud-SettingupanNFSshareforMultinodeKubernetesClusters
It is called slave_nfs_node.sh. I also ran master_nfs_node.sh, but I think that was already set up.
_______________________________________________
onap-discuss mailing list
[email protected]
https://lists.onap.org/mailman/listinfo/onap-discuss
