Thanks for starting this discussion, Trevor. There are several purposes for test projects in OPNFV (with some test projects serving multiple purposes):
* Gate OPNFV releases (meaning they are used to validate OPNFV releases of installers and features) – example: Functest
* Test an external/commercial NFVi distribution – e.g. Dovetail for OVP (which itself relies on Functest, for example) or NFVbench for external distributions
* Test an OPNFV release but without gating it – e.g. Bottlenecks
* Test and measure an NFVi component – e.g. VSPERF for vswitches

The TSC has decided recently that it would be good for OPNFV test projects to be reused in the industry independently of OPNFV releases, to test external NFVi systems and be used by a non-OPNFV audience. This has already started to happen for a few test components. Functest or NFVbench, for example, are already being used by vendors or SPs (deployers) to validate their OpenStack solutions – completely independently of the OPNFV release and of OVP.

The common point about all test projects today is that they all follow the OPNFV release model and cadence – regardless of whether they gate the OPNFV release or not. There are certainly some benefits in doing so (such as standardizing on the documentation format, release cadence, release management…) but I think this becomes very cumbersome for projects that start to have more users outside of OPNFV releases:

* They follow milestones for no real reason (since the project is not even gating any OPNFV release)
* They cannot create releases outside of the OPNFV release cadence
* They create unnecessary branches and labels

Furthermore, this model also:

* Adds a lot of burden for the OPNFV release team, because they have to track the milestones for a lot of projects, even those that do not gate OPNFV releases!
* Discourages new projects from being hosted in OPNFV, due to the high overhead imposed on every project

I think that is not sustainable.
What I would suggest is to get back to basics and use a model that is proven and widely used by most open source projects:

* Let every project version itself independently, based on the project’s own roadmap/feature/bug-fixing cadence – we can suggest using semver 2.0
* Gating for the project should be controlled by the project owners, using the project’s own CD pipeline
* If a project contributes to a release (an OPNFV release or any other non-OPNFV project release), it is up to the release manager and project PTL to agree on the proper project version to pick
* Project documentation can keep the same integration with OPNFV docs or go its own path (for example, publish to a separate per-project space on the OPNFV doc web site)

Examples of application of this model:

VSPERF (tests components of NFVi – does not gate any OPNFV release):

* Could have its own semver version to track changes in the VSPERF code (e.g. bug fixing in the TRex traffic generator driver)
* Can have its own branches (if desired)
* Has and controls its own gating process (e.g. run and pass a set of tests on pod12)
* Benchmark results can be versioned after the VSPERF version (e.g. a benchmark for OVS-DPDK version X tested with VSPERF version 2.0.3)
* Can be used by external users with a well-known version that is independent of any OPNFV release version
* Could publish its doc to a separate space under the OPNFV doc web site (indexed by the VSPERF version)

STORPERF/BOTTLENECKS… (test the full stack – mostly control plane):

* Could have its own semver version to track changes
* Can have its own branches (if desired)
* Can be used by external users with a well-known version that is independent of any OPNFV release version
* Controls its own gating, for example by running STORPERF on a set of well-known OpenStack distros (e.g. APEX Fraser noha-nosdn) using the new CD pipeline
* Can publish its own public stable versions at its own pace
* Can publish its own artifacts at its own pace (e.g. containers under dockerhub/opnfv/storperf/)

FUNCTEST (tests the full stack and gates multiple projects):

* Same as above
* Each Functest version controls which versions of integrated dependent tools it includes (e.g. if Functest integrates StorPerf, it is up to the Functest/StorPerf PTLs to decide which version of that tool to include in each Functest release)
* Provides each end user a stable version to use for their own gating: version X for the OPNFV release, version Y for OVP, version Z for SP1, etc.
* Because each end user has its own release schedule, it would be futile/impossible to sync the Functest version for every possible end user at any moment

Etc…

The interest from the test community in the above model is to see whether OPNFV releng can help with the CD pipeline for the test projects. This starts with a dedicated testbed that runs a small set of well-defined and stable OpenStack distros against which the gating of these projects (of variable quality) can be done. As you might notice, this is very different from – actually the opposite of – the current OPNFV release CI/CD pipeline, where the tools used to gate must be stable and the SUT (installer + features) can be of variable quality. This contrast also shows why you cannot mix the two. Most test projects would be content with a virtual deployment, because all they need is a working control plane. But data plane projects like VSPERF/Yardstick and NFVbench will need a bare metal deployment. Combined testing (such as running StorPerf + NFVbench concurrently) will also require a bare metal deployment.
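To make the per-project semver idea above concrete, here is a minimal sketch (in Python) of how an external consumer – say an SP pinning VSPERF for its own gate – could pin a project to a known version, independently of any OPNFV release. The version numbers and the caret-style compatibility rule are illustrative assumptions, not any project's actual policy:

```python
# Sketch: semver 2.0-style version pinning for test project releases.
# All version numbers below are made up for illustration.

def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints, ignoring any
    pre-release suffix (e.g. '2.0.3-rc1' -> (2, 0, 3))."""
    core = version.split("-", 1)[0]
    major, minor, patch = (int(part) for part in core.split("."))
    return (major, minor, patch)

def is_compatible(available, pinned):
    """Caret-style rule: same major version as the pin, and at least
    the pinned minor/patch level (no breaking changes pulled in)."""
    return (parse_semver(available)[0] == parse_semver(pinned)[0]
            and parse_semver(available) >= parse_semver(pinned))

# A hypothetical consumer gate pins VSPERF at 2.0.3:
pinned = "2.0.3"
for candidate in ["2.0.3", "2.1.0", "3.0.0", "1.9.9"]:
    print(candidate, is_compatible(candidate, pinned))
```

Under this rule 2.1.0 is still acceptable for a 2.0.3 pin, while 3.0.0 (new major, potentially breaking) and 1.9.9 (older than the pin) are rejected – which is exactly the property that lets benchmark results be labeled "tested with VSPERF version 2.0.3" and stay reproducible.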
The typical CD gate job would do this:

* Secure exclusive use of the test pod
* Prepare the setup (the right distro installed and working – this is supposed to be the stable, working stuff that you do not want to debug)
* Install/deploy on the testbed the version of test code that is to be tested (this can be of variable quality)
* Run the test
* Report results

This can be used for gating commits (a bit expensive) or for gating new releases of each test project. I was hoping that the Intel Pharos lab 19 could be used as the bare metal testbed for this CD pipeline. Ideally we would need one testbed for bare metal and one for virtual deployments for CD gating.

Sorry for the long post and I hope I did not cause more confusion…

Thanks
Alec

From: <[email protected]> on behalf of "Yang (Gabriel) Yu" <[email protected]>
Date: Wednesday, August 8, 2018 at 4:14 AM
To: Trevor Bramwell <[email protected]>, "[email protected]" <[email protected]>, "'Beierl, Mark'" <[email protected]>, "Limingjiang (Rex)" <[email protected]>
Subject: Re: [opnfv-tech-discuss] Continuously Releasing Test Projects

Hi Trevor,

It's great that we can continue discussing CD releases for testing projects. In my mind, there are mainly two kinds of deliverables for a specific testing project: 1) the test framework; 2) the test cases. For the continuous delivery of a testing project, I tend to include both deliverables. Validation of a test framework consists of validating its ability to run unit tests (and maybe style checks), to adapt to different community installers/scenarios, to be compatible with at least 2 OPNFV releases, etc. Validation of test cases requires validating their integrity, successful runs on the latest releases, clear and concrete testing scopes/purposes, etc. So, I guess we could have 2 delivery gates for a testing project: 1) test framework ready; 2) test cases ready. Just some preliminary thoughts, please comment.
As a side note, Yardstick and Bottlenecks are working on transforming themselves into services on a cloud native CD pipeline in the Clover project. Maybe we could discuss more in that direction.

Best,
Gabriel

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Trevor Bramwell
Sent: Wednesday, August 08, 2018 7:13 AM
To: [email protected]
Subject: [opnfv-tech-discuss] Continuously Releasing Test Projects

Hi all,

Today Mark Beierl and I started a discussion during the release call[1] that I wanted to continue here, regarding what the process could look like for releasing testing projects or testing tools in a continuous manner (NFVbench, StorPerf, Yardstick, Functest, Bottlenecks, Dovetail, etc.). This could also be seen as a continuation of the discussion we started at the Plugfest in France regarding gating in the CD process[2], but aimed specifically at the testing tools.

The question I'm hoping we can collectively answer is: what does a continuous release process look like for OPNFV test tools? As I don't work on a testing project myself, I'm obviously not the best one to answer this question, so instead of imposing my views on a process, I'd rather hear the thoughts of our community. I think laying out some of the issues projects have with the current release process, and listing what a successful release process might look like to you, will definitely help move this discussion in the right direction.

Regards,
Trevor Bramwell

[1] http://meetbot.opnfv.org/meetings/opnfv-release/2018/opnfv-release.2018-08-07-14.01.log.html
[2] https://etherpad.opnfv.org/p/minum_test_sets_gating_cd
