On Fri, 2016-10-07 at 09:03 -0400, Paul Belanger wrote:
> Greetings,
> I wanted to propose a work item, which I am happy to spearhead, about
> setting up a 3rd party CI system for the tripleo project. The work I am
> proposing wouldn't actually affect anything about tripleo-ci today, but
> would provide a working example of how 3rd party CI will work and a
> potential migration path.
> This is just one example of how it would work; obviously everything is
> open for discussion, but I think you'll find the plan to be workable.
> Additionally, this topic would only apply to OVB jobs; existing jobs
> already running on cloud providers from openstack-infra would not be
> affected.

The plan you describe here sounds reasonable. Testing out a 3rd party
system in parallel to our existing CI causes no harm and certainly
allows us to evaluate things and learn from the new setup.

One thing I would like to see discussed a bit more (either here or in a
new thread if deemed unrelated) is how we each benefit from making the
OVB jobs 3rd party.

There are at least three groups who likely care about this; here is how
it benefits (or doesn't benefit) each of them:

-the openstack-infra team:

  * standardization: doesn't have to deal with special-case OVB clouds

-the tripleo OVB cloud/CI maintainers:

  * Can manage the 3rd party cloud however they like, using custom
images and so on, with less regard for openstack-infra compatibility.

-the tripleo core team:

  * The OVB jobs themselves stay mostly the same, but their maintenance
potentially diverges further from upstream. So is there any benefit to
3rd party for the core team? Unclear to me at this point. The jobs
aren't running any faster than they are today, and maintaining them
might even get harder for some, because we'd have different base images
across our upstream infra multinode jobs and what we run via the OVB
3rd party testing.


The tripleo-ci end-to-end test jobs have always fallen into the
high-maintenance category. We've only recently switched to OVB, and one
of the nice things about doing that is that we are using something much
closer to stock OpenStack than our previous CI cloud. Sure, there are
some OVB configuration differences to enable testing of baremetal in
the cloud, but we are using more OpenStack to drive things. So by
simply using more OpenStack within our CI we should be aligning more
closely with infra; a move in the right direction anyway.

Going through all this effort, I really would like to see all the teams
gain from it. For me, the point of having upstream tripleo-ci tests is
that we catch breakages; breakages that no other upstream projects are
catching. And the solution to stopping those breakages isn't, IMO, to
move some of the most valuable CI tests into 3rd party. That may paper
over some of the maintenance rubs in the short/mid term, but I view it
as a bit of a retreat from where we could be with upstream testing.

So rather than just taking what we have in the OVB jobs today and
recreating the same long-running (1.5+ hour) CI job (which catches lots
of things), could we re-imagine the pipeline a bit in the process so we
improve this? My concern is that we'll go to all the trouble to move
this and we'll actually reduce the speed with which the tripleo core
team can land code instead of increasing it. What I'm asking is: in
doing this move, can we raise the bar for TripleO core too?


> What I am proposing is we move tripleo-test-cloud-rh2 (currently
> disabled) from openstack-infra (nodepool) to rdoproject (nodepool).
> This gives us a cloud we can use for OVB; we know it works because OVB
> jobs have run on it before.
> There are a few issues we'd first need to work on. Specifically, since
> rdoproject.org is currently using SoftwareFactory[1], we'd need to
> have them add support for nodepool-builder. This is needed so we can
> use the same DIB elements that openstack-infra does to create centos-7
> images (which tripleo uses today). We have 2 options: wait for the SF
> team to add support for this (I don't know how long that will take,
> but they know of the request) or manually set up an external
> nodepool-builder instance for rdoproject.org which connects to
> nodepool.rdoproject.org via gearman (I suggest we do this).
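
For reference, an external nodepool-builder pointed at the SF gearman
server would mostly be configuration work. A rough sketch of the
relevant nodepool.yaml fragment follows; the gearman host and the DIB
element list here are my assumptions (loosely modeled on what
openstack-infra builds today), not confirmed details of the rdoproject
setup:

```yaml
# Illustrative fragment for a standalone nodepool-builder.
# The gearman host and element names are placeholders, not the
# actual rdoproject.org configuration.
gearman-servers:
  - host: nodepool.rdoproject.org

diskimages:
  - name: centos-7
    elements:
      - centos-minimal
      - nodepool-base
      - vm
    release: '7'
    env-vars:
      DIB_CHECKSUM: '1'
```

If that matches the shape of the upstream config, porting should mostly
be a matter of copying the diskimage stanzas across.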
> Once that issue is solved, things are a little easier. It would just
> be a matter of porting the upstream CI configuration to rdoproject.org
> and validating the images, JJB jobs and tests. The cloud credentials
> would be removed from openstack-infra and added to rdoproject.org.
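
To make "porting the upstream CI configuration" a bit more concrete, the
JJB side would look something like the sketch below. The job name,
label, and script path are placeholders on my part; the real definitions
live in openstack-infra's project-config and would be what actually gets
ported:

```yaml
# Illustrative JJB definition for an OVB job on rdoproject.org.
# Job name, node label, and script are hypothetical examples.
- job:
    name: tripleo-ci-centos-7-ovb-ha
    node: centos-7
    builders:
      - shell: |
          #!/bin/bash -xe
          ./toci_gate_test.sh
    publishers:
      - console-log
```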
> I'd basically need help from rdoproject (eg: dmsimard) with some of
> the admin tasks, along with a VM for nodepool-builder. We already have
> the 3rd party CI bits set up in rdoproject.org; we are actually
> running DLRN builds on python-tripleoclient / python-openstackclient
> upstream patches.
> I think the biggest step is getting nodepool-builder working with
> Software Factory, but once that is done, it should be straightforward
> work.
> Now, whether SoftwareFactory is the long term home for this system is
> open for debate. Obviously, rdoproject has the majority of this
> infrastructure in place, so it makes for a good place to run
> tripleo-ci OVB jobs. Otherwise, if there are issues, then tripleo
> would have to stand up and maintain its own jenkins/nodepool/zuul
> infrastructure.
> I'm happy to answer questions,
> Paul
> [1] http://softwarefactory-project.io/
> ______________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
