On 02/24/2015 09:37 AM, Chris Dent wrote:
I'm not so sure about that. IMO, much of this goes back to the question
of whether OpenStack services are APIs or implementations. This was
debated with much heat at the Diablo summit (Hi Jay). I frequently have
conversations where there is an issue about release X vs. Y when it is
really about API versions. Even if we say that we are about
implementations as well as APIs, we can start to organize our processes
and code as if we were just APIs. If each service had a well-defined,
versioned, discoverable, well-tested API, then projects could follow
their own release schedules, relying on distros or integrators to put
the pieces together and verify the quality of the whole stack for
users. Such entities could still collaborate on that task, and could
still identify longer release cycles using "stable branches". The
upstream project could still test the latest released versions
together. Some of these steps are already being taken to resolve gate
issues and horizontal resource issues. Doing this would vastly increase
agility, but with some costs, which I list below.
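As a sketch of the "versioned, discoverable" API idea: OpenStack
services already publish version-discovery documents from their root
endpoint, and a client can pick a version from them instead of caring
which release is deployed. The document shape and the helper below are
illustrative, not any particular service's schema:

```python
# Sketch: consuming a versioned, discoverable service API.
# The document below mimics the version-discovery documents that
# OpenStack services return from "GET /"; the exact fields are
# illustrative, not any one service's schema.

# Example discovery document a service might publish.
DISCOVERY_DOC = {
    "versions": [
        {"id": "v2.0", "status": "SUPPORTED"},
        {"id": "v2.1", "status": "CURRENT"},
        {"id": "v1.1", "status": "DEPRECATED"},
    ]
}

def pick_api_version(doc, acceptable=("CURRENT", "SUPPORTED")):
    """Return the highest-numbered version whose status is acceptable."""
    candidates = [v for v in doc["versions"] if v["status"] in acceptable]
    if not candidates:
        raise RuntimeError("no usable API version advertised")

    def key(v):
        # Sort on the numeric components of the "vX.Y" id.
        return tuple(int(part) for part in v["id"].lstrip("v").split("."))

    return max(candidates, key=key)["id"]

print(pick_api_version(DISCOVERY_DOC))  # → v2.1
```

A client written against this never needs to know the server's release
name, only which API versions it advertises.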
On Tue, 24 Feb 2015, Sean Dague wrote:
That also provides a very concrete answer to "will people show up".
Because if they do, and we get this horizontal refactoring happening,
then we get to the point of being able to change release cadences
faster. If they don't, we remain with the existing system. Vs changing
the system and hoping someone is going to run in and backfill the [...]
Isn't this the way of the world? People only put halon in the
machine room after the fire.
I agree that "people showing up" is a real concern, but I also think
that we shy away too much from the productive energy of stuff
breaking. It's the breakage that shows where stuff isn't good [...]
To this I'd also add that bug fixing is way easier when you have
aligned releases for projects that are expected to be deployed
together. It's easier to know what the impact of a change/bug is
throughout the infrastructure.
Can't this be interpreted as an excuse for making software which
does not have a low surface area and a good API?
(Note: I'm taking a relatively unrealistic position for the sake of
argument.)
1. The upstream project would likely have to give up on the worthy goal
of providing an actual deployable stack that could be used as an
alternative to AWS, etc. That saddens me, but for various reasons,
including that we do no scale/performance testing on the upstream code,
we are not achieving that goal anyway. The big tent proposals are also a
move away from that goal.
2. We would have to give up on incompatible API changes. But with the
replacement of the Nova v3 API effort by microversions, we are already
doing that. Massive adoption combined with release agility is simply
incompatible with allowing breaking API changes.
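The microversion mechanism in point 2 can be sketched roughly: the
server advertises a [min, max] range, clients opt in per request (Nova
uses the X-OpenStack-Nova-API-Version header), and out-of-range
requests are rejected rather than silently reinterpreted. The function
and the range values below are a hypothetical sketch of those
negotiation rules, not Nova's actual code:

```python
# Sketch of microversion negotiation.  Only the rules follow Nova's
# documented scheme; the names and the version range are made up.

SERVER_MIN = (2, 1)
SERVER_MAX = (2, 12)

def parse(ver):
    major, minor = ver.split(".")
    return (int(major), int(minor))

def negotiate(requested):
    """Return the microversion to serve, or None for '406 Not Acceptable'."""
    if requested is None:
        return SERVER_MIN   # no header: oldest, most compatible behavior
    if requested == "latest":
        return SERVER_MAX
    want = parse(requested)
    if SERVER_MIN <= want <= SERVER_MAX:
        return want         # serve exactly what the client asked for
    return None             # out of range: reject, never guess

print(negotiate("2.7"))   # → (2, 7)
print(negotiate(None))    # → (2, 1)
print(negotiate("3.0"))   # → None
```

Because old clients keep getting the old behavior forever, the API can
evolve continuously without ever needing an incompatible "v3" break.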
Most of this is just echoing what Jay said. I think this is the way any
SOA would be designed. If we did this, and projects released
frequently, would there be a reason for anyone to be chasing master?
OpenStack Development Mailing List (not for usage questions)