On 10/12/2016 04:02 PM, Jim Rollenhagen wrote:
On Wed, Oct 12, 2016 at 8:01 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:
Hi folks!

I'd like to propose a plan on how to simultaneously extend the coverage of
our jobs and reduce their number.

Currently, we're running one instance per job. This was reasonable when the
CoreOS-based IPA image was the default, but now with tinyipa we can run up
to 7 instances (and we actually do in the grenade job). I suggest we use 6
fake BM nodes so that a single CI job covers many scenarios.
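
For illustration, the devstack side would look roughly like this (variable
names as I remember them from our devstack plugin, so double-check before
copying anything):

    # rough sketch of the relevant devstack settings
    IRONIC_RAMDISK_TYPE=tinyipa   # small ramdisk, so more VMs fit on one worker
    IRONIC_VM_COUNT=7             # 7 fake baremetal nodes
    IRONIC_VM_SPECS_RAM=384       # tinyipa is fine with this little RAM per node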

The jobs will be grouped by driver (pxe_ipmitool and agent_ipmitool) to be
more in sync with how 3rd party CI does it. A special configuration option
will be used to enable multi-instance testing, to avoid breaking 3rd party
CI systems that are not ready for it.
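
Just to illustrate the idea, the option could end up looking like this (the
option name below is made up; 3rd party CIs would simply keep the default
of 1):

    # in the job definition / devstack plugin, something along these lines:
    iniset $TEMPEST_CONFIG baremetal available_nodes 6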

To ensure coverage, we'll leave only the required number of nodes in the
"available" state, and deploy all instances in parallel.
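
One possible way to do that, assuming the devstack "node-N" naming
convention (just a sketch):

    # keep 6 nodes usable and park the rest in maintenance,
    # so Nova cannot schedule to them
    REQUIRED=6
    TOTAL=7
    for i in $(seq $REQUIRED $((TOTAL - 1))); do
        ironic node-set-maintenance node-$i true
    done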

In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios (a rough mapping to flavor/node
capabilities is sketched below the footnotes):
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
** 5. with msdos partition table embedded and BIOS boot
** 6. with GPT partition table embedded and UEFI boot  <*>

 <*> - in the future, when we figure out UEFI testing
 <**> - we're moving away from defaulting to netboot, hence only one
scenario
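
And here is roughly how the scenarios would be requested (commands from
memory and the flavor name is made up, but boot_mode, boot_option and
disk_label are the existing capability names):

    # e.g. scenario 3 (GPT + UEFI + local boot); the other scenarios only
    # differ in the capability values
    ironic node-update node-2 add \
        properties/capabilities='boot_mode:uefi,boot_option:local'
    nova flavor-key baremetal-uefi set \
        capabilities:boot_mode=uefi \
        capabilities:boot_option=local \
        capabilities:disk_label=gpt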

I suggest creating the jobs for Newton and Ocata, and starting with Xenial
right away.

+1, huge fan of this.

One more thing to think about - our API tests create and delete nodes. If
you run tempest in parallel, Nova may schedule to these nodes. Might be
worth breaking API tests out into a separate job, and making these jobs
scenario tests only.

I'd prefer to move API tests to functional testing, but yeah, as a first step we can split these jobs.
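
For the split itself, something like this in the job definitions should do
(devstack-gate variable from memory, and the test paths assume the current
layout of our tempest plugin - adjust as needed):

    # scenario-only job (the multi-instance one proposed above)
    export DEVSTACK_GATE_TEMPEST_REGEX='ironic_tempest_plugin.tests.scenario'
    # separate API-only job; a single fake node is enough there
    export DEVSTACK_GATE_TEMPEST_REGEX='ironic_tempest_plugin.tests.api'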


// jim
