On Sep 26, 2014, at 4:50 PM, Sean Dague <s...@dague.net> wrote:

> On 09/26/2014 04:22 PM, Doug Hellmann wrote:
>> On Sep 26, 2014, at 2:27 PM, Sean Dague <s...@dague.net> wrote:
>>> As we've been talking about the test disaggregation the hamster wheels
>>> in the back of my brain have been churning on the library testing
>>> problem I think we currently have. Namely, openstack components mostly
>>> don't rely on released library versions, they rely on git master of
>>> them. Right now oslo and many clients have open masters (their
>>> stable/juno versions are out), but we've still not cut RCs on servers.
>>> So we're actually going to have a time rewind event as we start cutting
>>> stables.
>>> We did this as a reaction to the fact that library releases were often
>>> cratering the world. However, I think the current pattern leads us into
>>> a much more dangerous world where basically the requirements.txt is invalid.
>>> So here is the particular unwind that I think would be useful here:
>>> 1) Change setup_library in devstack to be able to either setup the
>>> library from git or install via pip. This would apply to all libraries
>>> we are installing from oslo, the python clients, stackforge, etc.
>>> Provide a mechanism to specify LIBRARIES_FROM_GIT (or something) so that
>>> you can selectively decide to use libraries from git for development
>>> purposes.
>>> 2) Default devstack to use pip released versions.
>> These first 2 suggestions seem good. It makes our test configuration more 
>> accurate, and frees the Oslo team to have our dev cycle a little out of 
>> phase without worrying about breaking things. It means app developers will 
>> have to wait for fixes and features longer than they would otherwise, but 
>> Thierry has improved the automation for release tracking so we could release 
>> more often.
>>> 3) Change the job definition on the libraries to test against devstack
>>> in check, not in gate. The library teams can decide if they want their
>>> forward testing to be voting or not, but this is basically sniff testing
>>> that when they release a new library they won't ruin the world for
>>> everyone else.
>> The other suggestions only work if we also have some job that does still use 
>> the source version of the libraries. A voting check job is good. Why would 
>> we not also do the test in the gate?
> Right, so the oslo-config devstack job sets
> LIBRARIES_FROM_GIT=oslo.config. It tests with everything else from pip.
> So, honestly, we did this experiment with tempest. We run voting check
> jobs on a lot more than we gate on. The only reason to rerun these same
> tests in the gate is if you believe compounding changes in your code can
> produce a failure that was not found by testing the two patches
> independently. Which, for something like Nova, has some real likelihood.
> But I think
> for libraries the likelihood is quite small. Small enough that you
> relieve the pain of something completely outside of your control killing
> a merge.
> If it were really an issue, then all future patches would fail check, so
> you'd be stuck there. But that just means the fix needs to be your next
> patch. I actually think in the library case it would be a good high
> throughput model.

OK, that makes sense.

>>> 4) If a ruin the world event happens, figure out how to prevent that
>>> kind of event in local project testing, unit or functional. Basically an
>>> unknown contract was broken. We should bring that contract back into the
>>> project itself, or yell at the consuming project about why they were
>>> using code in a crazy pants way.
>>> Additionally, I'd like us to consider: No more alpha libraries. The
>>> moment we've bumped global requirements in projects we've actually
>>> released these libraries to production, as people are CDing the servers.
>>> We should just be honest about that and just give things a real version.
>>> Version numbers are cheap.
>> Anyone following trunk is already expecting some pain, but the folks that 
>> follow the stable branches do it because we call them stable. We have been 
>> using alpha versions as a way to avoid throwing development versions of 
>> libraries into the stable branch test and production environments before 
>> they have had some runtime against master. It’s not so much an indication of 
>> the quality of the content of the package as a hack taking advantage of pip 
>> behaviors that don’t install pre-release packages unless specifically told 
>> to do so, and a signal to distros that they don’t necessarily have to package
>> those releases (I don’t know if they’re receiving that signal or not). If we 
>> stop using alpha version numbers, we need some other way to avoid having 
>> development releases end up in the stable test environments. Or to say 
>> explicitly that it’s OK if they do, I guess. 
> So it seems to me that if you are running stable, you are probably not
> going out of your way to run pip install -U on all the dependencies
> every day. Honestly, you are probably not upgrading your dependencies
> unless we issue a security alert on one of them. So I'm not sure we
> should make the 'I'm running stable, but I blindly update all my
> dependencies nightly' a common use case.

Well, no, that’s not it.

The stable branch tests in OUR environment use pip. We package alpha versions 
as wheels so old versions of pip don’t see them, and new pips don’t install 
them because the stable requirements use version specifiers that don’t include 
the alphas. Moving away from alphas makes it more likely OUR stable test 
environment will break.
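To make the pip behavior we're leaning on concrete, here's a rough sketch of 
the default pre-release filtering. This is an illustration, not pip's actual 
code; the version names are made up:

```python
import re

PRE_ORDER = {'a': 0, 'b': 1, 'rc': 2}

def parse(version):
    """Split '1.4.0a2' into ((1, 4, 0), (0, 2)) so tuples sort correctly.
    A final release sorts after any a/b/rc pre-release of the same version."""
    m = re.match(r'^(\d+(?:\.\d+)*)(?:(a|b|rc)(\d+))?$', version)
    release = tuple(int(p) for p in m.group(1).split('.'))
    if m.group(2):
        pre = (PRE_ORDER[m.group(2)], int(m.group(3)))
    else:
        pre = (3, 0)  # final release outranks any pre-release marker
    return release, pre

def is_prerelease(version):
    return parse(version)[1][0] < 3

def pick_latest(versions, allow_prerelease=False):
    """Mimic pip's default of skipping pre-releases unless explicitly asked."""
    candidates = [v for v in versions
                  if allow_prerelease or not is_prerelease(v)]
    return max(candidates, key=parse) if candidates else None
```

So with 1.3.0 and 1.4.0a2 both on PyPI, a plain install resolves to 1.3.0, 
and the alpha is only picked up when someone opts in. That opt-out-by-default 
is the hack the stable branches depend on.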

Distros, I hope/expect, do not package the alphas because, well, they’re marked 
as alpha. That means anyone installing distro updates won’t get new versions of 
the libs and can safely accept whatever updates the distros do deliver. Moving 
away from alphas makes it more likely the distros will package more frequent 
updates to the libraries, potentially breaking stable deployments. Now, there 
are a lot of other ways to test for that and deal with regressions, so maybe 
the solution is to put more testing into place. But we seem to be on a tear to 
reduce the number of actual test jobs we’re running, so we need to decide what 
balance to strike.

> Alternatively, we could actually semver cap on stable (yes we've gone
> around this mulberry bush before, but I think it might be time to
> again), so new feature releases aren't impacting stable. That would
> actually simplify a bunch of other oddities we end up having to handle
> with stable requirements files.

Yes, that’s another alternative. I think we didn’t do that because of upgrades, 
though? Or maybe it was because we don’t assume that the next release of 
oslo.foo after 1.1 is 2.0, so how do we know where to cap it? I honestly can’t 
remember. If we can figure out a versioning scheme that allows us to cap safely 
and support upgrades and whatever else might have been the problem before, then 
we can put that on the table as an option, too.
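For illustration, capping a hypothetical oslo.foo on stable at the next minor 
version would mean a requirements line like oslo.foo>=1.1.0,<1.2.0, and the 
check behind it is just a tuple comparison (a sketch, not how pip implements 
it; pre-release tags are ignored to keep it simple):

```python
def satisfies(version, lower, upper):
    """Does `version` fall in the half-open range [lower, upper)?
    This is the check behind a stable cap like 'oslo.foo>=1.1.0,<1.2.0'."""
    def as_tuple(v):
        return tuple(int(p) for p in v.split('.'))
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)
```

The hard part, as noted, is choosing the upper bound when we can't assume the 
release after 1.1.x will be 1.2.0 rather than 2.0.0.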

>> I would also like to hear from the distro packaging folks about what it 
>> would mean for us to be releasing regular version libraries frequently 
>> (expect at least one release every week for the entire cycle). Would you 
>> package all of those, or would you wait until closer to the end of the cycle 
>> and package the “final” versions?
> Yep, I'd definitely like to hear that as well.

To be clear, we would not release every library weekly, but we would have at 
least one library ready for a release every week. Likely more some weeks and 
fewer others, but one per week seems like a fair guess at an average.


>       -Sean
> -- 
> Sean Dague
> http://dague.net
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev