The issue isn't avoiding 1.19.
The issue is that if things aren't deprecated in 1.18 then for every
breaking change we have to start discussing exemptions from the API
deprecation process, which stipulates that all Public APIs must be
deprecated for at least 2 minor releases before they can be removed
(which is now unsurprisingly backfiring on us).
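For reference, deprecating a Public API in the code base looks roughly like the sketch below (the class and method names are made up purely for illustration; only the @Public/@Deprecated annotations and the 2-minor-release rule reflect the actual process):

    import org.apache.flink.annotation.Public;

    // Hypothetical user-facing class, marked as stable Public API.
    @Public
    public class SomeUserFacingApi {

        /**
         * @deprecated Deprecated as of 1.18. According to the deprecation
         *     process it has to stay deprecated for at least 2 minor
         *     releases before it can be removed.
         */
        @Deprecated
        public void someMethod() {
            // behavior stays unchanged until the method is actually removed
        }
    }
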
So if something isn't deprecated in 1.18 then either:
- we delay 2.0 by at least 1 release cycle
- we effectively ignore the newly agreed-upon deprecation process for 2.0
- we circumvent the newly agreed-upon deprecation process by creating 2
minor releases in the same time frame that we'd usually create 1 release in.
None of these options are great.
On 13/07/2023 14:03, Matthias Pohl wrote:
Jing brought up a question in the FLIP-335 discussion thread [1] which I
want to move into a dedicated discussion thread as it's a bit more general:
How do we handle the deprecation process of Public APIs for Flink 2.0?
I just have a related question: Do we need to create a FLIP every time we
want to deprecate classes?
The community defined the requirements of a FLIP [2] in the following way:
- Any major new feature, subsystem, or piece of functionality
- Any change that impacts the public interfaces of the project
Public interfaces in this sense are defined as:
- DataStream and DataSet API, including classes related to that, such as
StreamExecutionEnvironment
- Classes marked with the @Public annotation
- On-disk binary formats, such as checkpoints/savepoints
- User-facing scripts/command-line tools, e.g. bin/flink, Yarn scripts,
Mesos scripts
- Configuration settings
- Exposed monitoring information
I think that this makes sense. There should be a proper discussion about the
best approach to changing public APIs. Additionally, the FLIP is a way to
transparently document the change and the discussion process that led to it.
In contrast, the impression I have is that we're trying to push all the
deprecation work into 1.18 (whose feature freeze is at the end of next week)
in order to avoid an additional 1.19 minor release. (Correct me if I'm wrong
here, but that's the impression I'm getting from the ML threads I've seen so
far.)
I have some concerns about the Flink 2.0 development in this regard. Zhu Zhu
[4] and Chesnay [5] shared similar concerns in the thread about the 2.0
must-have work items [3].
Considering that quite a few (or at least some; I haven't checked in detail,
to be honest) of the changes for 2.0 should require a FLIP, and that there
are still some outstanding items [6] (Jira issues which are annotated for
Flink 2.0 but haven't been properly checked yet): Shouldn't we avoid pushing
everything into 1.18 and instead follow the required process properly,
possibly planning for another 1.19 minor release?
I'm curious how other contributors feel here and sorry in case I have
misinterpreted the ongoing discussions.
Best,
Matthias
[1] https://lists.apache.org/thread/48ysrg1rrtl8s1twg9wmx35l201hnc2w
[2] https://cwiki.apache.org/confluence/display/Flink/Flink+Improvement+Proposals#FlinkImprovementProposals-Whatisconsidereda%22majorchange%22thatneedsaFLIP?
[3] https://lists.apache.org/thread/r0y9syc6k5nmcxvnd0hj33htdpdj9k6m
[4] https://lists.apache.org/thread/45xm348jr8n6s89jldntv5z3t13hdbn8
[5] https://lists.apache.org/thread/98wgqrx0sycpskvgpydvkywsoxt0fkc6
[6] https://lists.apache.org/thread/77hj39ls3kxvx2qd6o09hq1ndtn6hg4y