Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Sean Dague

On 01/25/2017 06:16 AM, Monty Taylor wrote:


I have a quibble with the current microversions construct. It's mostly
semantic in nature, and I _think_ it's not valid/useful - but I'm going
to describe it here just so that I've said it and we can all acknowledge
it and move on.

My concern is with the prefix "micro". What gets presented to the user
now is a "major" api version that is essentially useless, and a
monotonically increasing single version number that does not indicate
whether a given version introduced a breaking change or not.

I LIKE the mechanism. It works well - I do not think using it is
burdensome or bad for the user so far. But it's not "micro". It's
_essentially_ "every 'microversion' bump must be treated as a major
version bump, we just moved it to a construct that doesn't involve
deploying 40 different rest endpoints."

There are ways in which we could use the mechanism while still using
structured content to convey some amount of meaning to a user so that
client consumers don't have to write matrixes of "if this cloud has max
microversion of 27, then do this, otherwise do this other thing" for all
of the microversions.

That said - it's WAY better than the other thing - at least so far in
the way I'm seeing nova use it.

So I imagine it's just me quibbling over the word 'micro' and wanting
something more like libtool's version:revision:age construct which
calculates for a given library and consumer whether or not a library can
be expected to be usable in a dynamic linking context. (this is a
different construct from semver, but turns out is handy when you have a
single client that may need to consume multiple different api providers)


I can definitely understand an issue with the naming. The naming grew 
organically out of the Nova v3 struggles. It was a name that 
distinguished it from major versions, and far enough away from semver 
words to help disabuse people of the notion that this was semver. Which 
continues to be a struggle.


I'd suggest a new bike shed on names, except we seem to have at 
least built context around what we mean by microversions now in our 
community, and I'd hate to backslide on 2 years of education there.


It's probably time to build more of a primer back into the api-ref site, 
maybe an area on common concepts.




Also, when suppressing or not suppressing which user base is more
important? The users that exist now or the users to come? This may
sound like a snarky or idle question, but it's a real one: Is it
true that we do, as a general rule, base our development on existing
users and not people who have chosen not to use "the product" for
some reason?


We have a GIANT install base - but the set of tools that can work
consistently across that install base is small. If we continue to chase
phantom maybe-users at the expense of the users we have currently, I'm
pretty sure we'll end up where linux on the desktop has. I believe we
stopped being able to legitimately make backwards incompatible changes
around Havana.


Right, I think that has been the constant question. And I agree that 
taking care of our existing users, at the cost of not being able to 
clean everything up, is the right call.





Finding this:

http://docs.openstack.org/developer/nova/api_microversion_history.html

Is hard. I saw it for the first time 3 days ago. Know why? It's in the
nova developer docs, not in the API docs. It's a great doc.


Yeh, we need to get that reflected in api-ref. That's a short-term todo 
I can probably bang out before the release. It was always intended to 
surface more broadly, but until the new api-ref it really wasn't possible.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Monty Taylor
On 01/25/2017 09:16 AM, Monty Taylor wrote:
> On 01/24/2017 12:39 PM, Chris Dent wrote:
>> On Mon, 23 Jan 2017, Sean Dague wrote:
>>
>>> We all inherited a bunch of odd and poorly defined behaviors in the
>>> system we're using. They were made because at the time they seemed like
>>> reasonable tradeoffs, and a couple of years later we learned more, or
>>> needed to address a different use case that people didn't consider
>>> before.
>>
>> Thanks, as usual, for providing some well considered input Sean. I
>> think it captures well what we could describe as the "nova
>> aspirational model for managing change" which essentially means:
>>
>> * don't change stuff unless you have to
>> * when you do change stuff, anything, use microversions to signal
>>
>> This is a common position and I suspect if we were to use the
>> voices that have spoken up so far to form the new document[1] then
>> it would codify that, including specifying microversions as the
>> technology for managing boundaries.
> 
> I have a quibble with the current microversions construct. It's mostly
> semantic in nature, and I _think_ it's not valid/useful - but I'm going
> to describe it here just so that I've said it and we can all acknowledge
> it and move on.
> 
> My concern is with the prefix "micro". What gets presented to the user
> now is a "major" api version that is essentially useless, and a
> monotonically increasing single version number that does not indicate
> whether a given version introduced a breaking change or not.
> 
> I LIKE the mechanism. It works well - I do not think using it is
> burdensome or bad for the user so far. But it's not "micro". It's
> _essentially_ "every 'microversion' bump must be treated as a major
> version bump, we just moved it to a construct that doesn't involve
> deploying 40 different rest endpoints."
> 
> There are ways in which we could use the mechanism while still using
> structured content to convey some amount of meaning to a user so that
> client consumers don't have to write matrixes of "if this cloud has max
> microversion of 27, then do this, otherwise do this other thing" for all
> of the microversions.
> 
> That said - it's WAY better than the other thing - at least so far in
> the way I'm seeing nova use it.
> 
> So I imagine it's just me quibbling over the word 'micro' and wanting
> something more like libtool's version:revision:age construct which
> calculates for a given library and consumer whether or not a library can
> be expected to be usable in a dynamic linking context. (this is a
> different construct from semver, but turns out is handy when you have a
> single client that may need to consume multiple different api providers)

You know what - forget this part. I just went to try to make a concrete
example of what I'm talking about and got bupkis. The single version
numbers are honestly fine.

>> That could very well be fine, but we have evidence that:
>>
>> * some projects don't yet use microversions in their APIs
>> * some projects have no intention of using microversions or at least
>>   have internal conflict about doing so
>> * some projects would like to change things (irrespective of
>>   microversions)
>>
>> What do we do about that? That's what I think we could be working
>> out here, and why I'm persisting in dragging this out. There's no
>> point making rules that a significant portion of the populace have
>> no interest in following.
>>
>> So the options seem to be:
>>
>> * codify the two rules above as the backbone for the
>>   api-compatibility assertion tag and allow several projects to not
>>   assert that, despite an overall OpenStack goal
> 
> I like the two rules above. They serve end users in the way Sean is
> talking about better than any of the alternatives I've heard.
> 
>> * keep hashing things out for a bit longer until either we have
>>   different rules so we have more projects liking the rules or we
>>   justify the rules until we have more projects accepting them
>>
>> More in response to Sean below, not to contradict what he's saying
>> but in the ever-optimistic hope of continuing and expanding the
>> conversation to get real rather than enforced consensus.
>>
>>> If you don't guarantee that existing applications will work in the
>>> future (for some reasonable window of time), it's a massive turn off to
>>> anyone deciding to use this interface at all. You suppress your user
>>> base.
>>
>> I think "reasonable window of time" is a key phrase here that
>> perhaps we can build into the guidelines somewhat. The problems of
>> course are that some clouds will move forward in time at different
>> rates and as Sean has frequently pointed out, time's arrow is not
>> unidirectional in the universe of many OpenStack clouds.
>>
>> To what extent is the HEAD of OpenStack responsible to OpenStack two
>> or three years back?
> 
> I personally believe the answer to this is "forever". I know that's not
> popular - but if we don't, someone _else_ has to deal with making sure
> code that wants to consume new apis and also has to talk to older
> OpenStack installations can do that.

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Monty Taylor
On 01/24/2017 12:39 PM, Chris Dent wrote:
> On Mon, 23 Jan 2017, Sean Dague wrote:
> 
>> We all inherited a bunch of odd and poorly defined behaviors in the
>> system we're using. They were made because at the time they seemed like
>> reasonable tradeoffs, and a couple of years later we learned more, or
>> needed to address a different use case that people didn't consider
>> before.
> 
> Thanks, as usual, for providing some well considered input Sean. I
> think it captures well what we could describe as the "nova
> aspirational model for managing change" which essentially means:
> 
> * don't change stuff unless you have to
> * when you do change stuff, anything, use microversions to signal
> 
> This is a common position and I suspect if we were to use the
> voices that have spoken up so far to form the new document[1] then
> it would codify that, including specifying microversions as the
> technology for managing boundaries.

I have a quibble with the current microversions construct. It's mostly
semantic in nature, and I _think_ it's not valid/useful - but I'm going
to describe it here just so that I've said it and we can all acknowledge
it and move on.

My concern is with the prefix "micro". What gets presented to the user
now is a "major" api version that is essentially useless, and a
monotonically increasing single version number that does not indicate
whether a given version introduced a breaking change or not.

I LIKE the mechanism. It works well - I do not think using it is
burdensome or bad for the user so far. But it's not "micro". It's
_essentially_ "every 'microversion' bump must be treated as a major
version bump, we just moved it to a construct that doesn't involve
deploying 40 different rest endpoints."

There are ways in which we could use the mechanism while still using
structured content to convey some amount of meaning to a user so that
client consumers don't have to write matrixes of "if this cloud has max
microversion of 27, then do this, otherwise do this other thing" for all
of the microversions.
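A minimal sketch of the per-cloud branching described above. The version
boundaries and feature names here are invented for illustration; they are
not real Nova microversion boundaries:

```python
# Hypothetical client-side "matrix": pick a request shape based on the
# maximum microversion a given cloud reports. Boundary values (27, 12)
# and strategy names are made up for illustration only.

def pick_server_listing_strategy(max_microversion):
    """Return which request shape to use for a cloud's max microversion."""
    if max_microversion >= 27:           # hypothetical boundary
        return "use-new-filter-params"
    elif max_microversion >= 12:         # hypothetical boundary
        return "use-legacy-filter-params"
    return "no-filters-available"

# One branch per cloud generation is exactly the matrix a client author
# ends up maintaining without structured meaning in the version number.
for max_mv in (8, 15, 30):
    print(max_mv, pick_server_listing_strategy(max_mv))
```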

That said - it's WAY better than the other thing - at least so far in
the way I'm seeing nova use it.

So I imagine it's just me quibbling over the word 'micro' and wanting
something more like libtool's version:revision:age construct which
calculates for a given library and consumer whether or not a library can
be expected to be usable in a dynamic linking context. (this is a
different construct from semver, but turns out is handy when you have a
single client that may need to consume multiple different api providers)
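For readers unfamiliar with libtool's scheme: a library advertising
current:revision:age implements every interface number from current - age
through current, so a consumer built against interface n is compatible when
current - age <= n <= current. A small sketch of that rule (the revision
field is omitted because it does not affect compatibility):

```python
# Libtool-style compatibility check: a library with version
# current:revision:age supports interfaces (current - age) .. current.

def libtool_compatible(current, age, consumer_interface):
    """True if a consumer built against consumer_interface can use
    a library advertising this current:age pair."""
    return (current - age) <= consumer_interface <= current

# A 7:0:3 library implements interfaces 4 through 7:
print(libtool_compatible(7, 3, 5))   # compatible
print(libtool_compatible(7, 3, 2))   # too old: interface dropped
print(libtool_compatible(7, 3, 8))   # too new: not yet provided
```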

> That could very well be fine, but we have evidence that:
> 
> * some projects don't yet use microversions in their APIs
> * some projects have no intention of using microversions or at least
>   have internal conflict about doing so
> * some projects would like to change things (irrespective of
>   microversions)
> 
> What do we do about that? That's what I think we could be working
> out here, and why I'm persisting in dragging this out. There's no
> point making rules that a significant portion of the populace have
> no interest in following.
> 
> So the options seem to be:
> 
> * codify the two rules above as the backbone for the
>   api-compatibility assertion tag and allow several projects to not
>   assert that, despite an overall OpenStack goal

I like the two rules above. They serve end users in the way Sean is
talking about better than any of the alternatives I've heard.

> * keep hashing things out for a bit longer until either we have
>   different rules so we have more projects liking the rules or we
>   justify the rules until we have more projects accepting them
> 
> More in response to Sean below, not to contradict what he's saying
> but in the ever-optimistic hope of continuing and expanding the
> conversation to get real rather than enforced consensus.
> 
>> If you don't guarantee that existing applications will work in the
>> future (for some reasonable window of time), it's a massive turn off to
>> anyone deciding to use this interface at all. You suppress your user
>> base.
> 
> I think "reasonable window of time" is a key phrase here that
> perhaps we can build into the guidelines somewhat. The problems of
> course are that some clouds will move forward in time at different
> rates and as Sean has frequently pointed out, time's arrow is not
> unidirectional in the universe of many OpenStack clouds.
> 
> To what extent is the HEAD of OpenStack responsible to OpenStack two
> or three years back?

I personally believe the answer to this is "forever". I know that's not
popular - but if we don't, someone _else_ has to deal with making sure
code that wants to consume new apis and also has to talk to older
OpenStack installations can do that.

But it turns out OpenStack works way better than our detractors in the
"success is defined by the size of your VC intake" tech press like to
admit - and we have clouds _today_ that are happily running in
production with Juno 

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Chris Dent

On Wed, 25 Jan 2017, Thierry Carrez wrote:


We were discussing this in the context of an "assert" tag, not a goal.


Yes, but it is often the case that changes are being evaluated as if
it were a goal. A couple of glance-related changes experienced
reactions of "this doesn't meet compatibility guidelines":

https://review.openstack.org/#/c/420038/
https://review.openstack.org/#/c/414261/

This is perhaps a proper reaction as a sanity check, but if a
project does not subscribe to the mooted assert tag then whether it
is a blocker or not should be up to the project?


I think that's a good commitment to document, and knowing which projects
actually commit to that is very useful to our users (the appdev
variety). I don't think that means every project needs to commit to that
right now, or that microversions are the only way to make sure you won't
ever break API compatibility. I just think it's a good information bit
to communicate.


It is definitely a good commitment to document, but we need to make
sure that we express that it is an optional commitment, if in fact it is.
I get the impression that a lot of people think it is not.

And if the commitment is being made, then we need to make sure we
document what demarcates change boundaries (when they inevitably
happen) and how to manage them.

I think we would be doing a huge disservice to our efforts at making
the APIs consistent (amongst the different services) if we have
multiple ways to manage them.

BTW: I think we should start using the term "stability" not
"compatibility".

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-25 Thread Thierry Carrez
Chris Dent wrote:
> [...] 
> That could very well be fine, but we have evidence that:
> 
> * some projects don't yet use microversions in their APIs
> * some projects have no intention of using microversions or at least
>   have internal conflict about doing so
> * some projects would like to change things (irrespective of
>   microversions)
> 
> What do we do about that? That's what I think we could be working
> out here, and why I'm persisting in dragging this out. There's no
> point making rules that a significant portion of the populace have
> no interest in following.
> 
> So the options seem to be:
> 
> * codify the two rules above as the backbone for the
>   api-compatibility assertion tag and allow several projects to not
>   assert that, despite an overall OpenStack goal
> 
> * keep hashing things out for a bit longer until either we have
>   different rules so we have more projects liking the rules or we
>   justify the rules until we have more projects accepting them

We were discussing this in the context of an "assert" tag, not a goal.
Assert tags are primarily meant as a way to communicate information to
deployers or users. The one proposed here simply communicates that the
project will not ever break "API compatibility" (as you loosely defined
it, "any extant client code that works should continue working"), and
that it is therefore safe to write long-term code against that API. It
is comparable to the "follows-deprecation-policy" tag. And it is always
ok to allow projects to not assert tags they are not ready to assert.

I think that's a good commitment to document, and knowing which projects
actually commit to that is very useful to our users (the appdev
variety). I don't think that means every project needs to commit to that
right now, or that microversions are the only way to make sure you won't
ever break API compatibility. I just think it's a good information bit
to communicate.

Yes, it might ultimately result in more projects adopting that
commitment, because it will make their project look better in the
project navigator. Personally I see that potential outcome as a good
thing -- they should just do it when they are ready to do it.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-24 Thread Chris Dent

On Mon, 23 Jan 2017, Sean Dague wrote:


We all inherited a bunch of odd and poorly defined behaviors in the
system we're using. They were made because at the time they seemed like
reasonable tradeoffs, and a couple of years later we learned more, or
needed to address a different use case that people didn't consider before.


Thanks, as usual, for providing some well considered input Sean. I
think it captures well what we could describe as the "nova
aspirational model for managing change" which essentially means:

* don't change stuff unless you have to
* when you do change stuff, anything, use microversions to signal

This is a common position and I suspect if we were to use the
voices that have spoken up so far to form the new document[1] then
it would codify that, including specifying microversions as the
technology for managing boundaries.

That could very well be fine, but we have evidence that:

* some projects don't yet use microversions in their APIs
* some projects have no intention of using microversions or at least
  have internal conflict about doing so
* some projects would like to change things (irrespective of
  microversions)

What do we do about that? That's what I think we could be working
out here, and why I'm persisting in dragging this out. There's no
point making rules that a significant portion of the populace have
no interest in following.

So the options seem to be:

* codify the two rules above as the backbone for the
  api-compatibility assertion tag and allow several projects to not
  assert that, despite an overall OpenStack goal

* keep hashing things out for a bit longer until either we have
  different rules so we have more projects liking the rules or we
  justify the rules until we have more projects accepting them

More in response to Sean below, not to contradict what he's saying
but in the ever-optimistic hope of continuing and expanding the
conversation to get real rather than enforced consensus.


If you don't guarantee that existing applications will work in the
future (for some reasonable window of time), it's a massive turn off to
anyone deciding to use this interface at all. You suppress your user base.


I think "reasonable window of time" is a key phrase here that
perhaps we can build into the guidelines somewhat. The problems of
course are that some clouds will move forward in time at different
rates and as Sean has frequently pointed out, time's arrow is not
unidirectional in the universe of many OpenStack clouds.

To what extent is the HEAD of OpenStack responsible to OpenStack two
or three years back?

Also, when suppressing or not suppressing which user base is more
important? The users that exist now or the users to come? This may
sound like a snarky or idle question, but it's a real one: Is it
true that we do, as a general rule, base our development on existing
users and not people who have chosen not to use "the product" for
some reason?


This is a real issue. A real issue raised by users and other project
teams. I do understand that in other contexts / projects that people
have been involved in, this may not have been considered an issue. But I
would assert it is one here.


I don't think anyone disagrees with it being a real issue. Perhaps
it would be more correct to say "I agree with your assertion". I
also, however, assert that we can learn from other approaches. Not
so that we can use different approaches, but so that we can clarify
and evolve the approaches we do use so that people more fully
understand the reasons, edge cases, etc. For some the problem (and
solutions) are very well understood and accepted, for others not so
much. The compare and contrast technique is a time-honored and
tested way of expanding the mind.


So before reopening the exploration of approaches (or the need to do
anything at all), we should probably narrow the focus to whether
guarantees to the user that their existing code will continue to work are
something that we need / want. I don't see any new data coming into our
community that this is less important than it was 4 years ago.


But we do have some data (recent glance visibility situation) that
sometimes changing stuff that violates the letter of the law (but
not really the spirit?) causes indecision and confusion when
evaluating changes. Are we going to declare this okay because glance
doesn't (and can't (yet) if we assert microversions) assert api
stability support?

We also have endless data that changing APIs is part and parcel of
what we do and that change of any kind is part and parcel of living
in the real world. I think even if we are maintaining that backwards
stability is critical we need to think about the cognitive cost of
multiple microversions to users. They are under no obligation to use
the new features or the bug fixes, but they do represent fairly
constant change that you only get access to if you choose to be
aware of microversions or use particular versions of supplied
clients.

A rough 

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Sean Dague
On 01/23/2017 08:11 AM, Chris Dent wrote:
> On Wed, 18 Jan 2017, Chris Dent wrote:
> 
>> The review starts with the original text. The hope is that
>> commentary here in this thread and on the review will eventually
>> lead to the best document.
> 
> https://review.openstack.org/#/c/421846
> 
> There's been a bit of commentary on the review which I'll try to
> summarize below. I hope people will join in. There have been plenty
> of people talking about this but unless you provide your input
> either here or on the review it will be lost.
> 
> Most of the people who have commented on the review are generally in
> favor of what's there with a few nits on details:
> 
> * Header changes should be noted as breaking compatibility/stability
> * Changing an error code should be signalled as a breaking change
> * The concept of extensions should be removed in favor of "version
>   boundaries"
> * The examples section needs to be modernized (notably getting rid
>   of XML)
> 
> There's some concern that "security fixes" (as a justification for a
> breaking change) is too broad and could be used too easily.
> 
> These all seem to be good practical comments that can be integrated
> into a future version but they are, as a whole, based upon a model
> of stability based around versioning and "signalling" largely in the
> form of microversions. This is not necessarily bad, but it doesn't
> address the need to come to mutual terms about what stability,
> compatibility and interoperability really mean for both users and
> developers. I hope we can figure that out.
> 
> If my read of what people have said in the past is correct at least
> one definition of HTTP API stability/compatibility is:
> 
>Any extant client code that works should continue working.
> 
> If that's correct then a stability guideline needs to serve two
> purposes:
> 
> * Enumerate the rare circumstances in which that rule may be broken
>   (catastrophic security/data integrity problems?).
> * Describe how to manage inevitable change (e.g., microversion,
>   macroversions, versioned media types) and what "version
>   boundaries" are.
> 
> And if that's correct then what we are really talking about is
> reaching consensus on how (or if) to manage versions. And that's
> where the real contention lies. Do we want to commit to
> microversions across the board? If we assert that versioning is
> something we need across the board then certainly we don't want to
> be using different techniques from service to service do we?
> 
> If you don't think those things above are correct or miss some
> nuance, I hope you will speak up.
> 
> Here's some internally-conflicting, hippy-dippy, personal opinion
> from me, just for the sake of grist for the mill because nobody else
> is yet coughing up:
> 
> I'm not sure I fully accept the original assertion. If extant client
> code is poor, perhaps because it allows the client to make an
> unhealthy demand upon a service, maybe it shouldn't be allowed? If
> way A to do something exists, but way B comes along that is better
> are we doing a disservice to people's self-improvement by letting A
> continue? Breaking stuff can sometimes increase community
> engagement, whether that community is OpenStack at large or the
> community of users in any given deployment.

This counter-assertion seems a lot like blaming the consumer for trying
to use the software and getting something working, then pulling that
working thing out from under them with no warning.

We all inherited a bunch of odd and poorly defined behaviors in the
system we're using. They were made because at the time they seemed like
reasonable tradeoffs, and a couple of years later we learned more, or
needed to address a different use case that people didn't consider before.

If you don't guarantee that existing applications will work in the
future (for some reasonable window of time), it's a massive turn off to
anyone deciding to use this interface at all. You suppress your user base.

If, when operators upgrade their OpenStack environments, their consumers
start complaining to them about things breaking, operators are going to
be much more reticent about upgrading anything, ever.

If upgrades get made harder for any reason, then getting security fixes
or features out to operators/users is not possible; they stop taking
them. And when they are far enough back from master, it's going to be
easier to move to something else entirely than to upgrade OpenStack,
which effectively will be something else entirely for their entire user
base anyway.

This is the spiral we are trying to avoid. It's the spiral we were in.
The one where people would show up to design summit sessions for years
saying "for the love of god can you people stop breaking everything
every release". The one where the only effective way to talk to 2
"OpenStack Clouds" and get them to do the same thing for even
medium-complexity applications was to write your own intermediary layer.

This is a real issue. A real issue raised by users and other project
teams. I do understand that in other contexts / projects that people
have been involved in, this may not have been considered an issue. But I
would assert it is one here.

Re: [openstack-dev] [all] [tc] [api] refreshing and revalidating api compatibility guidelines

2017-01-23 Thread Chris Dent

On Wed, 18 Jan 2017, Chris Dent wrote:


The review starts with the original text. The hope is that
commentary here in this thread and on the review will eventually
lead to the best document.


https://review.openstack.org/#/c/421846

There's been a bit of commentary on the review which I'll try to
summarize below. I hope people will join in. There have been plenty
of people talking about this but unless you provide your input
either here or on the review it will be lost.

Most of the people who have commented on the review are generally in
favor of what's there with a few nits on details:

* Header changes should be noted as breaking compatibility/stability
* Changing an error code should be signalled as a breaking change
* The concept of extensions should be removed in favor of "version
  boundaries"
* The examples section needs to be modernized (notably getting rid
  of XML)

There's some concern that "security fixes" (as a justification for a
breaking change) is too broad and could be used too easily.

These all seem to be good practical comments that can be integrated
into a future version but they are, as a whole, based upon a model
of stability based around versioning and "signalling" largely in the
form of microversions. This is not necessarily bad, but it doesn't
address the need to come to mutual terms about what stability,
compatibility and interoperability really mean for both users and
developers. I hope we can figure that out.

If my read of what people have said in the past is correct at least
one definition of HTTP API stability/compatibility is:

   Any extant client code that works should continue working.

If that's correct then a stability guideline needs to serve two
purposes:

* Enumerate the rare circumstances in which that rule may be broken
  (catastrophic security/data integrity problems?).
* Describe how to manage inevitable change (e.g., microversion,
  macroversions, versioned media types) and what "version
  boundaries" are.

And if that's correct then what we are really talking about is
reaching consensus on how (or if) to manage versions. And that's
where the real contention lies. Do we want to commit to
microversions across the board? If we assert that versioning is
something we need across the board then certainly we don't want to
be using different techniques from service to service do we?

If you don't think those things above are correct or miss some
nuance, I hope you will speak up.

Here's some internally-conflicting, hippy-dippy, personal opinion
from me, just for the sake of grist for the mill because nobody else
is yet coughing up:

I'm not sure I fully accept the original assertion. If extant client
code is poor, perhaps because it allows the client to make an
unhealthy demand upon a service, maybe it shouldn't be allowed? If
way A to do something exists, but way B comes along that is better
are we doing a disservice to people's self-improvement by letting A
continue? Breaking stuff can sometimes increase community
engagement, whether that community is OpenStack at large or the
community of users in any given deployment.

Many projects that do not currently have microversions (or other
system) need to manage change in some fashion. It seems backwards to
me that they must subscribe to eternal backwards compatibility when
they don't yet have a mechanism for managing forward motion. I
suppose the benefit of the tag being proposed is that it allows a
project to say "actually, for now, we're not worrying about that;
we'll let you know when we do". In which case they would then have
license to do what they like (and presumably adapt tempest as they
like).

Microversions are an interesting system. They allow for eternal
backwards compatibility by defaulting to being in the past unless
you actively choose a particular point in time or choose to be
always in the present with "latest". When I first started thinking
about this stability concept in the context of OpenStack I felt that
microversions were anti-stability because not only do they help
developers manage change, they give them license to change whenever
they are willing to create a new microversion. That seems contrary
to what I originally perceived as a desire to minimize change.
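The "defaulting to the past" behavior can be illustrated from the client
side. This sketch assumes the OpenStack-API-Version header convention; the
exact header name has varied by service and era, so treat it as
illustrative rather than authoritative:

```python
# Sketch of client-side microversion pinning: omit the version header
# and you get the oldest ("in the past") behavior; send a pinned version
# or "latest" to opt in to change.

def build_headers(service, microversion=None):
    """Build request headers for an OpenStack-style microversioned API."""
    headers = {"Accept": "application/json"}
    if microversion is not None:
        # e.g. "OpenStack-API-Version: compute 2.42"
        headers["OpenStack-API-Version"] = "%s %s" % (service, microversion)
    return headers

print(build_headers("compute"))            # default: server's minimum
print(build_headers("compute", "2.42"))    # pinned to a point in time
print(build_headers("compute", "latest"))  # always in the present
```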

Further, microversions are a feature that is (as far as I know?)
implemented in a way unique to OpenStack. In other universes some
strategies for versioning are:

* don't ever change
* change aligned with semver of the "product"
* use macroversions in the URL or service definitions
* use versioned media-types (e.g.,
  'application/vnd.os.compute.servers+json; version=1.2') and
  content-negotiation (and keep urls always the same)
* hypermedia
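The versioned media-type option from the list above can be sketched with a
toy parser for the example Accept value. This is a simplified illustration,
not an RFC-compliant media-type parser:

```python
# With versioned media types the URL never changes; the version rides
# in the Accept header and the server content-negotiates on it.

def parse_versioned_accept(accept):
    """Split 'application/vnd.os.compute.servers+json; version=1.2'
    into (media_type, version); version defaults to None."""
    parts = [p.strip() for p in accept.split(";")]
    media_type = parts[0]
    version = None
    for param in parts[1:]:
        if param.startswith("version="):
            version = param.split("=", 1)[1]
    return media_type, version

print(parse_versioned_accept(
    "application/vnd.os.compute.servers+json; version=1.2"))
print(parse_versioned_accept("application/json"))
```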

I would guess we have enough commitment to microversions in
production that using something else would be nutbar, but it is
probably worth comparing with some of those systems so that we can
at least clearly state the benefits when making everyone settle in
the same place.

--
Chris Dent