What we need is an annotation checker to ensure every public method is
tagged either @Experimental or @Deprecated. That way there will be no
confusion about what we expect to be stable. If we really want to
offer stable APIs there exist many tools (such as JAPICC[1]) to ensure
we don't make breaking changes. Without some actual tests we are just
hoping we don't break anything, so everything is actually
@Experimental.
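
Such a checker could start life as a plain reflection-based test before
graduating to an annotation processor. A minimal sketch, where SampleApi
and the local @Experimental are hypothetical stand-ins (Beam's real
annotation lives in the SDK):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class AnnotationAudit {

  // Stand-in for Beam's @Experimental annotation; retained at runtime so
  // reflection can see it.
  @Retention(RetentionPolicy.RUNTIME)
  @interface Experimental {}

  // Hypothetical API class to audit.
  public static class SampleApi {
    @Experimental
    public void newThing() {}

    @Deprecated
    public void oldThing() {}

    public void untagged() {} // should be flagged by the checker
  }

  /** Returns the names of public declared methods lacking both annotations. */
  static List<String> untaggedPublicMethods(Class<?> clazz) {
    List<String> missing = new ArrayList<>();
    for (Method m : clazz.getDeclaredMethods()) {
      if (!Modifier.isPublic(m.getModifiers())) {
        continue;
      }
      if (m.isAnnotationPresent(Experimental.class)
          || m.isAnnotationPresent(Deprecated.class)) {
        continue;
      }
      missing.add(m.getName());
    }
    return missing;
  }

  public static void main(String[] args) {
    // prints: Untagged public methods: [untagged]
    System.out.println(
        "Untagged public methods: " + untaggedPublicMethods(SampleApi.class));
  }
}
```

Running this over every public class in the SDK (or wiring the same rule
into an annotation processor) would make the "everything is tagged"
invariant mechanically checkable rather than a convention.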

Andrew

[1] https://lvc.github.io/japi-compliance-checker/

On Wed, Nov 27, 2019 at 10:12 AM Kenneth Knowles <k...@apache.org> wrote:
>
> Hi all
>
> I wanted to start a dedicated thread to the discussion of how to manage our 
> @Experimental annotations, API evolution in general, etc.
>
> After some email back-and-forth this will get too big, at which point I will 
> try to summarize it into a document. But I think a thread to start with makes 
> sense.
>
> Problem statement:
>
> 1. Users need stable APIs so their software can just keep working
> 2. Breaking changes are necessary to achieve correctness / high quality
>
> Neither of these is actually universally true. Many changes don't really hurt 
> users, and some APIs are so obvious they don't require adjustment, or 
> continuing to use an inferior API is OK since at least correctness is 
> possible.
>
> But we have had too many breaking changes in Beam, some quite late, for the 
> purposes of fixing major data loss bugs, design errors, changes in underlying 
> services, and usability. [1] So I take for granted that we do need to make 
> these changes.
>
> So the problem becomes:
>
> 1. Users need to know *which* APIs are frozen, clearly and with enough buy-in 
> that changes don't surprise them
> 2. Useful APIs that are not technically frozen but never change will still 
> get usage and should "graduate"
>
> Current status:
>
>  - APIs (classes, methods, etc) can be marked "experimental" with annotations 
> in languages
>  - "experimental" features are shipped in the same jar with non-experimental 
> bits; sometimes it is just a couple methods or classes
>  - "experimental" APIs are supposed to allow breaking changes
>  - there is no particular process for removing "experimental" status
>  - we do go through "deprecation" process even for experimental things
>
> Downsides to this:
>
>  - large parts of Beam have become very mature but are still "experimental", 
> so it isn't really safe to make breaking changes
>  - users are not alerted very well when they are using unstable pieces
>  - we don't have an easy way to determine the impact of any breaking changes
>  - we also don't have a clear policy or guidance around underlying 
> services/client libs making breaking changes (such as services rejecting 
> older clients)
>  - having something both "experimental" and "deprecated" is maybe confusing, 
> but also just deleting experimental stuff is not safe in the current state of 
> things
>
> Some proposals I can recall people making:
>
>  - making experimental features opt-in only (for example by a separate dep or 
> a flag)
>  - putting a version next to any experimental annotation and force review at 
> that time (lots of objections to this, but noting it for completeness)
>  - reviews for graduating on a case-by-case basis, with dev@ thread and maybe 
> vote
>  - try to improve our ability to know usage of experimental features (really, 
> all features!)
>
> I will start with my own thoughts from here:
>
> *Opt-in*: This is a powerful idea that I think changes everything.
>    - for an experimental new IO, a separate artifact; this way we can also 
> see downloads
>    - for experimental code fragments, add checkState that the relevant 
> experiment is turned on via flags
>
> *Graduation*: Once things are opt-in, the drive to graduate them will be 
> stronger than it is today. I think vote is appropriate, with rationale 
> including usage and test coverage and stability, since it is a commitment by 
> the community to maintain the code, which constitutes most of the TCO of code.
>
> *Tracking*:
>  - We should know what experiments we have and how old they are.
>  - It means that just tagging methods and classes "@Experimental" doesn't 
> really work. I think that is probably a good thing. It is confusing to have 
> hundreds of tiny experiments. We can target larger-scale experiments.
>  - If we regularly poll on twitter or user@ about features then it might 
> become a source of OK signal, for things where we cannot look at download 
> stats.
>
> I think with these three approaches, the @Experimental annotation is actually 
> obsolete. We could still use it to drive some kind of annotation processor to 
> ensure "if there is @Experimental then there is a checkState" but I don't 
> have experience doing such things.
>
> Kenn
>
> [1] 
> https://lists.apache.org/thread.html/1bfe7aa55f8d77c4ddfde39595c9473b233edfcc3255ed38b3f85612@%3Cdev.beam.apache.org%3E
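
The checkState gating Kenn describes for experimental code fragments could
look roughly like the following. This is a sketch, not actual Beam API: the
ExperimentGate class, the experiment name, and the Set-of-strings flag model
are all hypothetical (in Beam the enabled experiments would come from
pipeline options):

```java
import java.util.Set;

public class ExperimentGate {

  // Hypothetical: the set of experiments the user opted into, e.g. parsed
  // from an --experiments flag.
  private final Set<String> enabledExperiments;

  ExperimentGate(Set<String> enabledExperiments) {
    this.enabledExperiments = enabledExperiments;
  }

  /** Fails fast, in the spirit of Guava's checkState, unless the named
   *  experiment was explicitly enabled. */
  void checkExperiment(String name) {
    if (!enabledExperiments.contains(name)) {
      throw new IllegalStateException(
          "Experiment '" + name + "' is not enabled; opt in via flags to use it.");
    }
  }

  // An experimental code path guarded by the opt-in check.
  public String experimentalTransform(String input) {
    checkExperiment("new_shiny_transform");
    return input.toUpperCase();
  }

  public static void main(String[] args) {
    ExperimentGate optedIn = new ExperimentGate(Set.of("new_shiny_transform"));
    // prints: OK
    System.out.println(optedIn.experimentalTransform("ok"));

    ExperimentGate notOptedIn = new ExperimentGate(Set.of());
    try {
      notOptedIn.experimentalTransform("ok");
    } catch (IllegalStateException e) {
      // prints: Rejected: Experiment 'new_shiny_transform' is not enabled; opt in via flags to use it.
      System.out.println("Rejected: " + e.getMessage());
    }
  }
}
```

The nice property of this shape is that accidental use of experimental code
fails loudly at pipeline construction time, rather than silently relying on
an API that may break later.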
