Hey Dan,

Thanks a lot for all the information. The link to your lab seems to be broken, however, as it leads to a blank page.
See below, my responses are unbolded.

*LMCDASM – Hey there… understood – I think you will have a challenge of determining if a POD has the HW requirements prior to having anything installed on it – in fact, the current process is a manual one, not an automatic one where we have a survey of a heterogeneous set of equipment, a resource request comes in, and an election (out of a pool of possible servers) is done based on those requirements. The fact of the matter is there is NO VM, as you are talking about, in the current architecture – so you don’t have an ingress / starting point in the current design.*

- The validation tool does not require the POD to have anything installed on it initially. It uses IPMI to PXE-boot an initrd image. I'm not sure you understood me clearly, as I wasn't stating there was already a VM. It was just an example, as some labs have been deploying their jump-servers as VMs. The Validator will come pre-packaged as a VM; it doesn't require one to be set up already.

*LMCDASM – this isn’t really your (or others') decision; we work upstream and thus we accept the naming as provided by the OS (and then the installer version's interpretation). The GENESIS project has the mandate to provide a common set of guidelines so that the installers can push them upstream and then we are common as they come back down to us. The whole point is sort of moot, since the variants between MOBO releases and OS driver updates will ensure this is always chased, and in the end, once you get to a bonded setup, this is obfuscated anyway. The real point for NICs should be their feature set – common features (DPDK, PF, SRIOV) as requirements to support the common feature set (all the scenarios); the naming, since it will be common at the “name level” once OpenStack is deployed, is – to me – a really small issue. – just an opinion*

- I'm aware I don't have the authority to make this change, of course, but I bring it up for discussion because I think having a more deterministic way of finding the NICs, rather than relying on assigned names, will be beneficial both for my project and for those maintaining the OpenStack projects. The fact that the names vary between MOBO and OS updates is the problem for my tool, as my initrd has to bring up the interfaces from scratch. I agree that it's probably a small issue after deployment, but the problem in my eyes is using this configuration file setup before or during deployment.

*LMCDASM – probably a review of the way that CI/CD has been working is in order. The DHA/DEA are YAML-based files that provide the HW and deployment configuration of “any” lab. It was started in FUEL but has been adopted by others. In the end, it's an example of a way – since it was decided that we want to use YAML (since it's portable across everyone) and, in the end, we all need the same information (IPMI, layout, nodes, etc.).*

- Ok, I found some example DEA/DHA files in the FUEL installer repository, thanks for the pointer.

On Tue, Aug 23, 2016 at 11:31 AM, Daniel Smith <[email protected]> wrote:

> Hello Todd.
>
> Please see below
>
> *From:* Todd Gaunt [mailto:[email protected]]
> *Sent:* Tuesday, August 23, 2016 11:07 AM
> *To:* Daniel Smith <[email protected]>
> *Cc:* [email protected]; Jack Morgan <[email protected]>; Lincoln Lavoie <[email protected]>
> *Subject:* Re: [opnfv-tech-discuss] Common configuration files
>
> Hello Dan,
>
> Thank you for the information on the HW topologies; also, if you can, it would be great if you could link me to some resources on your own lab setup, so I can see what an up-to-date functioning lab looks like.
> All the Ericsson lab spec and information can be found here: https://wiki.opnfv.org/display/pharos/Active+Lab+Specs
>
> As for what the validation tool is doing, its job is to see if the pod's nodes meet the minimum Pharos specification hardware requirements for baremetal deployments, from the guidelines established here: http://artifacts.opnfv.org/pharos/docs/pharos-spec.html, with a few adjustments (not requiring a specific Xeon processor, for example). The reason it needs the network information is so that it may bring up the network interfaces and establish that the nodes can connect to each other on their networks. It shouldn't matter how many nodes there are, or even if the jump-host is a virtual machine, so long as the VM has access to the admin network.
>
> *LMCDASM – Hey there… understood – I think you will have a challenge of determining if a POD has the HW requirements prior to having anything installed on it – in fact, the current process is a manual one, not an automatic one where we have a survey of a heterogeneous set of equipment, a resource request comes in, and an election (out of a pool of possible servers) is done based on those requirements. The fact of the matter is there is NO VM, as you are talking about, in the current architecture – so you don’t have an ingress / starting point in the current design.*
>
> I'm mostly against the NIC names because they differ between operating systems, and part of my tool is an initrd; this means that if the NICs' assigned names in Ubuntu or CentOS or Debian don't match the ones my initrd brings up, then configuring them correctly won't be possible without something like the actual MAC address. This section could only be omitted if my tool did not need to validate network connections, or if lab owners were okay with the test not working when they use a different OS than my initrd.
> *LMCDASM – this isn’t really your (or others') decision; we work upstream and thus we accept the naming as provided by the OS (and then the installer version's interpretation). The GENESIS project has the mandate to provide a common set of guidelines so that the installers can push them upstream and then we are common as they come back down to us. The whole point is sort of moot, since the variants between MOBO releases and OS driver updates will ensure this is always chased, and in the end, once you get to a bonded setup, this is obfuscated anyway. The real point for NICs should be their feature set – common features (DPDK, PF, SRIOV) as requirements to support the common feature set (all the scenarios); the naming, since it will be common at the “name level” once OpenStack is deployed, is – to me – a really small issue. – just an opinion*
>
> By deploy.sh, you mean the ones provided by the installers, correct? I have been looking through these recently, but the configuration files they read from are all formatted differently. Which files do you mean by DEA/DHA exactly? I found some Python scripts in FUEL with those names, but they just seem to be part of its deploy scripts (not configuration files, if that's what they're supposed to be). The only common files I know of are the GENESIS project's, so if there are others it'd be great to know of them.
>
> *LMCDASM – probably a review of the way that CI/CD has been working is in order. The DHA/DEA are YAML-based files that provide the HW and deployment configuration of “any” lab. It was started in FUEL but has been adopted by others. In the end, it's an example of a way – since it was decided that we want to use YAML (since it's portable across everyone) and, in the end, we all need the same information (IPMI, layout, nodes, etc.).*
>
> Thanks,
>
> Todd
>
> On Tue, Aug 23, 2016 at 10:25 AM, Daniel Smith <[email protected]> wrote:
>
> Hello Todd.
> Can you confirm that you have looked at the current deploy.sh and the DHA/DEA that are in use in the CI/CD, which provide common configuration points?
>
> Cheers,
> D
>
> *From:* [email protected] [mailto:[email protected]] *On Behalf Of* Todd Gaunt
> *Sent:* Tuesday, August 23, 2016 9:41 AM
> *To:* [email protected]; Jack Morgan <[email protected]>; Lincoln Lavoie <[email protected]>
> *Subject:* [opnfv-tech-discuss] Common configuration files
>
> Hello,
>
> I want to open up a discussion on the pod configuration files, regarding the inventory and network configuration files as described here: https://wiki.opnfv.org/display/genesis/Configuration-files-discussion. As I'm working on the Pharos validation tool, getting a common configuration file, or at least a baseline common format that can be made alongside installer configuration files, would help pod owners to support every installer, and would allow tools like the Pharos validation tool I'm working on to work in every deployment scenario without custom configuration for every possible configuration and installer. This would be preferable to me making yet another configuration file for my tool.
>
> I'd like to know from the developers of the installers what it might take to use these common configuration formats, along with what is needed to finalize the network configuration file here: https://wiki.opnfv.org/display/genesis/Common+Network+Settings. This configuration file is almost perfect, aside from how it specifies NICs with the names the OS gives them (eth0, eth1) rather than something more deterministic, such as the MAC addresses.
>
> Thanks,
>
> Todd Gaunt
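P.S. To make the MAC-address proposal above concrete, here is a minimal sketch of how a validator could resolve interfaces by MAC instead of by OS-assigned name. The function names are illustrative only (they are not from any existing OPNFV tool); the only assumption is the standard Linux sysfs layout, where each interface exposes its MAC at `/sys/class/net/<iface>/address`.

```python
# Minimal illustrative sketch -- not part of any existing OPNFV tool.
# Resolves NICs by MAC address instead of by OS-assigned names like
# eth0/ens3, which vary between distributions and driver versions.
import os

def iface_mac_table(sysfs_root="/sys/class/net"):
    """Return {interface_name: mac_address} read from Linux sysfs."""
    table = {}
    for name in os.listdir(sysfs_root):
        addr_file = os.path.join(sysfs_root, name, "address")
        try:
            with open(addr_file) as f:
                table[name] = f.read().strip().lower()
        except OSError:
            # Some virtual devices expose no address file; skip them.
            continue
    return table

def find_iface_by_mac(mac, table):
    """Return the interface name bound to `mac`, or None if absent."""
    mac = mac.strip().lower()
    for name, addr in table.items():
        if addr == mac:
            return name
    return None
```

The idea is that the common config file lists MACs, and the tool resolves them to whatever names the running OS chose, so the same lookup works unchanged in the initrd and in Ubuntu, CentOS, or Debian.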
_______________________________________________
opnfv-tech-discuss mailing list
[email protected]
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
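P.P.S. For anyone who has not seen the DEA/DHA files Dan mentioned, here is a rough sketch of the shape such a YAML hardware file can take. Every field name below is illustrative only, not the actual FUEL DHA/DEA schema, just to show how IPMI details and MAC-identified NICs could live in one common format:

```yaml
# Illustrative sketch only -- not the real FUEL DHA/DEA schema.
nodes:
  - id: 1
    role: controller
    ipmi:
      ip: 10.0.0.11          # BMC address used to PXE-boot the node
      user: admin            # placeholder credentials
      pass: changeme
    interfaces:
      - mac: "aa:bb:cc:dd:ee:01"   # deterministic NIC identity
        networks: [admin, private]
      - mac: "aa:bb:cc:dd:ee:02"
        networks: [public, storage]
```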
