Thanks Gabriel.

I would not say that every test tool exists to test OPNFV releases and 
scenarios.  For example, functionally testing that Cinder works is part of 
Functest, and that is targeted toward testing a release and a scenario. 
However, measuring the actual performance of Cinder under varying workloads 
cannot be used to gate a release or even validate it.  No one can say whether 
X number of IOPS is “good enough”, as it varies based on the hardware.  Other 
performance projects are in the same category: there is no objective threshold 
for “fast enough” on which to validate a release.

So, while I agree that performance test projects are part of the community, I 
don’t see them as part of a scheduled release gating process.  This is why the 
discussion about having the option to follow an independent release cycle came 
up.  A project should be free to tag what it considers to be a stable and 
usable release, and have that published for anyone to consume.


Mark Beierl
SW System Sr Principal Developer
Dell EMC | Cloud & Communication Service Provider Solution<>

From: "Yuyang (Gabriel)" <>
Date: Friday, August 10, 2018 at 00:00
To: "" <>, Trevor Bramwell 
<>, "" 
<>, "Beierl, Mark" <>, 
"Limingjiang (Rex)" <>
Subject: RE: [opnfv-tech-discuss] Continuously Releasing Test Projects

Hi Alec,

Thanks a lot for the very detailed explanations! It really clears up many 
things.

I just have some notes here.
In my mind, every testing tool that joins OPNFV does so to test OPNFV 
releases/scenarios, with the overhead of following and being integrated into 
the OPNFV release pipeline.
In addition, from a continuous delivery perspective, it would be better to 
have rolling releases of tip-of-master instead of allowing branches. We can 
always tag instead of branching.
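As a minimal sketch of what "tagging instead of branching" could look like 
(the repository path and version numbers below are illustrative assumptions, 
not any real project's tags):

```shell
# Minimal sketch of "tagging instead of branching": releases are annotated
# tags on master, with no long-lived stable branch to maintain.
# Repository path and version numbers are illustrative assumptions.
set -e
rm -rf /tmp/demo-project
git init -q /tmp/demo-project
cd /tmp/demo-project
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m "initial commit"

# An annotated SemVer tag marks the release point; master keeps moving
git tag -a 1.0.0 -m "Release 1.0.0: rolling release from tip-of-master"

# A later release is just another tag on master
git -c user.name=demo -c user.email=demo@example.org commit -q --allow-empty -m "bug fix"
git tag -a 1.0.1 -m "Release 1.0.1"

# Consumers check out a well-known tag instead of tracking a branch
git checkout -q 1.0.0
```

Consumers then pin to a tag for reproducibility, while development continues 
uninterrupted on master.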

In general, I strongly echo your suggestion to allow more flexible release 
management of testing projects so they can deliver continuously.


[] On Behalf Of Alec via 
Sent: Thursday, August 09, 2018 3:26 AM
To: Yuyang (Gabriel) <>; Trevor Bramwell 
<>;; 'Beierl, 
Mark' <>; Limingjiang (Rex) <>
Subject: Re: [opnfv-tech-discuss] Continuously Releasing Test Projects

Thanks for starting this discussion Trevor.

There are several purposes of test projects in OPNFV (with some test projects 
serving multiple purposes):

  *   Gate OPNFV releases (meaning they are used to validate OPNFV releases 
of installers and features), example: Functest
  *   Test an external/commercial NFVi distribution – e.g. Dovetail for OVP 
(which itself relies on Functest for example) or NFVbench for external 
  *   Test an OPNFV release but no gating – e.g. Bottleneck
  *   Test and measure an NFVi component – e.g. VSPERF for vswitches

The TSC has decided recently that it would be good for OPNFV test projects to 
be reused in the industry independently of OPNFV releases, to test external 
NFVi systems and be used by a non-OPNFV audience.
This has already started to happen for a few test components. Functest and 
NFVbench, for example, are already being used by vendors or SPs (deployers) to 
validate their OpenStack solution – completely independently of the OPNFV 
release and of OVP.

The common point about all test projects today is that they all follow the 
OPNFV release model and cadence – regardless of whether they gate the OPNFV 
release or not.
There are certainly some benefits in doing so (such as standardizing on the 
documentation format, release cadence, release management…) but I think this 
becomes very cumbersome for projects that start to have more users outside of 
OPNFV releases:

  *   Follow milestones for no real reason (since the project is not even 
gating any OPNFV release)
  *   Not able to create releases outside of the OPNFV release cadence
  *   Create unnecessary branches and labels

Furthermore, this model also:

  *   Adds a lot of burden for the OPNFV release team because they have to 
track the milestones for a lot of projects, even for those that do not gate 
OPNFV releases!
  *   Discourages new projects to be hosted in OPNFV due to the high overhead 
imposed on every project

I think that is not sustainable.
What I would suggest is to get back to basics and use a model that is proven 
and widely used by most open source projects:

  *   Let every project version independently based on the project’s own 
roadmap/feature/bug-fixing cadence – we can suggest using SemVer 2.0
  *   Gating for the project should be controlled by the project owners using 
its own CD pipeline
  *   If a project contributes to a release (OPNFV release or any other 
non-OPNFV project release), it is up to the release manager and project PTL to 
agree on the proper project version to pick
  *   Project documentation can keep the same integration with OPNFV doc or go 
its own path (for example publish to a separate per-project space in opnfv doc 
web site)

Examples of application of this model:

VSPERF (test components of NFVi – does not gate any OPNFV release):

  *   Could have its own SemVer version to track changes in the VSPERF code 
(e.g. a bug fix in the TRex traffic generator driver)
  *   Can have its own branches (if desired)
  *   Has and controls its own gating process (e.g. run and pass a set of 
tests on pod12)
  *   Benchmark results can be versioned after the VSPERF version (e.g. 
benchmark for OVS-DPDK version X tested with VSPERF version 2.0.3)
  *   Can be used by external users with a well known version that is 
independent of an OPNFV release version
  *   Could publish its doc to a separate space under the OPNFV doc web site 
(indexed by VSPERF’s own version)

STORPERF/BOTTLENECK…(test full stack - mostly control plane):

  *   Could have its own semver version to track changes
  *   Can have its own branches (if desired)
  *   Can be used by external users with a well known version that is 
independent of an OPNFV release version
  *   Controls its own gating by, for example, running STORPERF on a set of 
well-known OpenStack distros (e.g. APEX Fraser noha-nosdn) using the new CD 
pipeline
  *   Can publish its own public stable versions at its own pace
  *   Can publish its own artifacts at its own pace (e.g. containers under 

FUNCTEST (tests full stack and gates multiple projects):

  *   Same as above
  *   Each Functest version controls what version of integrated dependent tools 
it wants to include (e.g. if Functest integrates storperf, it is up to the 
Functest/Storperf PTLs to decide which version of that tool to include in each 
Functest release)
  *   Provides each end-user release with a stable version to use for its own 
gating: version X for the OPNFV release, version Y for OVP, version Z for SP1, 
etc…
  *   Because each end user has its own release schedule, it would be 
futile/impossible to sync the Functest version for every possible end user at 
any moment.
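As a hedged illustration of the per-release version pinning described above 
(the tool names, version numbers, and registry namespace are invented for the 
example), each Functest release could carry an explicit pin list for its 
integrated tools:

```shell
# Hypothetical sketch: a Functest release pins the exact versions of the test
# tools it integrates, so every consumer gets a reproducible bundle.
# Tool names, versions, and the "opnfv/" registry prefix are illustrative.
cat > /tmp/functest-deps.txt <<'EOF'
storperf==2.1.0
nfvbench==1.4.2
EOF

# Resolve each pin to a container image reference
while IFS='=' read -r tool _ version; do
    echo "would pull opnfv/${tool}:${version}"
done < /tmp/functest-deps.txt
```

Changing which tool version ships with a given Functest release then becomes a 
one-line, reviewable change agreed between the two PTLs.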


The interest from the test community in the above model is to see whether OPNFV 
releng can help with the CD pipeline for the test projects.
This starts with a dedicated testbed that runs a small set of well-defined and 
stable OpenStack distros against which the gating of these projects (of 
variable quality) can be done. As you might notice, this is very different from 
– actually the opposite of – the current OPNFV release CI/CD pipeline, where 
the tools used to gate must be stable and the SUT (installer + features) can be 
of variable quality. This contrast also shows why you cannot mix the two.

Most test projects would be content with a virtual deployment because all they 
need is a working control plane. But data plane projects like VSPERF, Yardstick, 
and NFVbench will need a bare metal deployment. Combined testing (such as 
running StorPerf + NFVbench concurrently) will also require bare metal.
The typical CD gate job would do this:

  *   Secure exclusive use of the test pod
  *   Prepare setup (right distro installed and working – this is supposed to 
be the stable working stuff that you do not want to debug)
  *   Install/deploy on the testbed the version of test code that is to be 
tested (this can be variable quality)
  *   Run the test
  *   Report results
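The five gate steps above could be sketched roughly as follows; the pod name, 
lock path, and the placeholder echo commands are all illustrative assumptions, 
not a real releng job:

```shell
#!/bin/sh
# Rough sketch of the CD gate job; pod name, lock path, and the placeholder
# echo commands stand in for real releng steps (illustrative assumptions).
set -e

POD="pod12"
TEST_VERSION="${TEST_VERSION:-2.0.3}"   # version of the test code under gate

# 1. Secure exclusive use of the test pod (simple lock-directory reservation)
LOCK="/tmp/${POD}.lock"
if ! mkdir "${LOCK}" 2>/dev/null; then
    echo "pod ${POD} is busy" >&2
    exit 1
fi

# 2. Prepare setup: the stable, known-good distro we do not want to debug
echo "verifying ${POD} runs the expected stable distro"

# 3. Install/deploy the version of test code to be tested (variable quality)
echo "deploying test tool version ${TEST_VERSION} on ${POD}"

# 4. Run the test
echo "running test suite"

# 5. Report results, then release the pod for the next gate job
echo "publishing results for ${TEST_VERSION}"
rmdir "${LOCK}"
```

The lock step is what makes the pod usable by several projects: only one gate 
job can hold the reservation at a time, and it is released whether the run is 
gating a commit or a release.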

This can be used for gating commits (a bit expensive) or for gating new 
releases for each test project.

I was hoping that the Intel Pharos lab 19 could be used as a bare metal testbed 
for this CD pipeline. Ideally we would need one testbed for bare metal and one 
for virtual deployments for CD gating.

Sorry for the long post and I hope I did not cause more confusion…



on behalf of "Yang (Gabriel) Yu" 
Date: Wednesday, August 8, 2018 at 4:14 AM
To: Trevor Bramwell 
 "'Beierl, Mark'" <<>>, 
"Limingjiang (Rex)" <<>>
Subject: Re: [opnfv-tech-discuss] Continuously Releasing Test Projects

Hi Trevor,

It's great that we can continue discussing CD releases for testing projects.

In my mind, there are mainly two kinds of deliverables for a specific testing 
project: 1) test framework; 2) test cases.

As to the continuous delivery of a testing project, I tend to include both of 
these deliverables.
Validation of a test framework consists of validating its ability to pass unit 
tests (and maybe style checks), to adapt to different community 
installers/scenarios, to be compatible with at least two OPNFV releases, etc.

Validation of test cases requires validating their integrity, successful runs 
on the latest releases, clear and concrete testing scopes/purposes, etc.

So, I guess we could have 2 delivery gates for the testing project: 1) test 
framework ready; 2) test cases ready.

Just some preliminary thoughts, please comment.

As a side note, Yardstick and Bottlenecks are working on transforming 
themselves into services on the cloud-native CD pipeline in the Clover project.

Maybe we could discuss more towards that direction.



-----Original Message-----
[] On Behalf Of Trevor Bramwell
Sent: Wednesday, August 08, 2018 7:13 AM
Subject: [opnfv-tech-discuss] Continuously Releasing Test Projects

Hi all,

Today Mark Beierl and I started a discussion during the release call[1] that I 
wanted to continue here regarding what the process could look like for 
releasing testing projects or testing tools in a continuous manner (NFVBench, 
StorPerf, Yardstick, Functest, Bottlenecks, Dovetail, etc.).


This could also be seen as a continuation of the discussion we started at the 
Plugfest in France regarding gating in the CD process[2], but aimed 
specifically at the testing tools.

The question I'm hoping we can collectively come to answer is:

  What does a continuous release process look like for OPNFV test tools?

As I don't work on a testing project myself, obviously I'm not the best one to 
answer this question, so instead of imposing my views on a process, I'd rather 
hear the thoughts from our community.

I think laying out some of the issues projects have with the current release 
process, and listing what a successful release process might look like to you, 
will definitely help move this discussion in the right direction.


Trevor Bramwell


