Hi Ethan,

Regarding building a mechanism to expire old clients: IMO, the way we support the Flink engine is different from other engines like Spark and MR, because Flink support is tied to Flink minor versions. So for Flink we only need to define how many minor versions to retain support for. WDYT?
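As a rough sketch of what such an expiry check might look like (all names here are hypothetical, not actual Celeborn APIs; the supported-version list is an assumed policy of retaining the last few Flink minor versions):

```java
import java.util.List;

// Hypothetical sketch, not actual Celeborn code: the server keeps a list of
// actively supported engine minor versions and rejects registration from
// clients built against anything older.
public class ClientVersionGate {

    // Assumed policy: retain the last four Flink minor versions.
    private static final List<String> SUPPORTED_FLINK_MINOR_VERSIONS =
        List.of("1.17", "1.18", "1.19", "1.20");

    // Extract "major.minor" from a full client version string like "1.15.2".
    static String minorVersion(String fullVersion) {
        String[] parts = fullVersion.split("\\.");
        return parts[0] + "." + parts[1];
    }

    // Decide whether a connecting client is still within the support window.
    static boolean isSupported(String clientFlinkVersion) {
        return SUPPORTED_FLINK_MINOR_VERSIONS.contains(minorVersion(clientFlinkVersion));
    }

    public static void main(String[] args) {
        System.out.println(isSupported("1.20.0")); // within the window
        System.out.println(isSupported("1.14.6")); // expired, would be rejected
    }
}
```

Because the check keys on the minor version only, retiring a Flink release is just dropping one entry from the list, which matches the per-minor-version support model described above.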
Regards,
Nicholas Jiang

On 2024/12/19 03:54:04 weijie guo wrote:
> Thanks for the feedback!
>
> > To streamline our process, I propose we build a mechanism to expire
> > old clients after a certain period, ensuring that only actively
> > supported Flink/Spark/MR versions are in use.
>
> That sounds pretty good, +1 for the proposal.
>
> I don't know much about other engines, but I can offer some background
> about the Flink community:
>
> We only actively maintain the master branch (targeting 2.0 for now) and
> the last two major releases (1.20.x and 1.19.x).
>
> For the third most recent major release (1.18.x), there is usually a
> final release at an appropriate time, after which support is no longer
> guaranteed.
>
> Best regards,
>
> Weijie
>
> Ethan Feng <ethanf...@apache.org> wrote on Thu, Dec 19, 2024, at 11:43:
> >
> > Hi Weijie,
> >
> > I appreciate your thoughts on removing the outdated Flink versions. I
> > completely agree that it's essential to focus on maintaining the
> > still-relevant versions.
> >
> > To streamline our process, I propose we build a mechanism to expire
> > old clients after a certain period, ensuring that only actively
> > supported Flink/Spark/MR versions are in use. This would simplify our
> > support efforts and encourage users to update to newer versions.
> >
> > Regarding your question about how many Flink versions we should
> > retain support for, keeping support for the last 4 or 5 major
> > versions could be a good approach, depending on user feedback.
> >
> > Let me know your thoughts!
> >
> > Thanks,
> > Ethan Feng
> >
> > Nicholas Jiang <nicholasji...@apache.org> wrote on Thu, Dec 19, 2024, at 11:36:
> > >
> > > Hi Weijie,
> > >
> > > Thanks for driving the removal of the out-of-date Flink versions.
> > > +1 for removing them, because the support for the different Flink
> > > versions is similar.
> > >
> > > IMO, we could first remove support for Flink 1.14 and 1.15, which
> > > are very old and lack functional features.
> > > Meanwhile, support for the other Flink versions could be gradually
> > > removed in Celeborn major versions.
> > >
> > > BTW, how many Flink versions should we retain support for?
> > >
> > > Regards,
> > > Nicholas Jiang
> > >
> > > On 2024/12/18 06:18:17 weijie guo wrote:
> > > > Hi all,
> > > >
> > > > I would like to suggest that we consider ending support for some
> > > > of the older Flink versions. The main reasons are as follows:
> > > >
> > > > 1. Currently, the minimum supported version is Flink 1.14. But
> > > > Flink releases like 1.14 and 1.15 have been out of date for a
> > > > long time, and some important Flink batch features (speculative
> > > > execution, AQE, etc.) are missing.
> > > >
> > > > 2. The Flink community currently does not support versions lower
> > > > than 1.18, and considering that Flink 1.20 will be the first LTS
> > > > version, existing 1.x users will gradually upgrade to 1.20 as
> > > > well.
> > > >
> > > > 3. There is a big difference between the shuffle API of the old
> > > > versions and the current Flink code base, and it is tedious to
> > > > maintain compatibility. Moreover, new features such as JM
> > > > Failover and hybrid shuffle cannot be supported on out-of-date
> > > > Flink releases. If users do need to adapt older Flink versions,
> > > > it's not difficult for them to do it themselves, and this would
> > > > reduce our maintenance burden.
> > > >
> > > > For now, communities such as Apache Iceberg have taken a more
> > > > radical strategy that is in line with Flink's officially
> > > > supported versions. We don't have to go that far yet, but it
> > > > feels like at least we can remove versions 1.14 and 1.15 for now.
> > > >
> > > > Best regards,
> > > >
> > > > Weijie