Hi,

Based on this discussion, I would like to suggest moving on with
the originally planned HBase connector 4.0 release, which will
support Flink 1.18 and 1.19.

I volunteer to be the release manager.

Thanks,
Ferenc



On Wednesday, October 23rd, 2024 at 13:19, Ferenc Csaky 
<ferenc.cs...@pm.me.INVALID> wrote:

> 
> 
> Hi Marton, Yanquan,
> 
> Thank you for your responses! Regarding the points brought up to
> discuss:
> 
> 1. Supporting 1.20 definitely makes sense, but since there is quite
> a big gap to work down here now, I am not sure it should be done in
> one step. To my understanding, the externalized connector dev model
> [1] does not explicitly forbid that, but AFAIK there has been no
> external connector release that supported 3 different Flink minor
> versions.
> 
> In this case, I think it would technically be possible, but IMO
> supporting 3 Flink versions adds more maintenance complexity. So
> what I would suggest is to release 4.0 with Flink 1.18 and 1.19
> support, and after that there can be a 4.1 that supports 1.19 and
> 1.20. 4.0 will only get patch support, probably minimizing Flink
> version specific problems.
> 
> 2. Flink 1.17 had no JDK17 support, so those Hadoop related
> problems should not play a role if something needs to be released
> that supports 1.17. But if connector 4.0 is released, the 3.x line
> will not get any new releases (not even patches), because 1.17 is
> already out of support.
> 
> Best,
> Ferenc
> 
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> 
> On Wednesday, 23 October 2024 at 05:19, Yanquan Lv decq12y...@gmail.com wrote:
> 
> > Hi Feri,
> > Thank you for bringing up this discussion.
> > I agree with releasing a version that bumps to a newer Flink
> > version with partial JDK support. I have two points to discuss.
> > 1. I have heard many inquiries about supporting higher versions of Flink in 
> > Slack, Chinese communities, etc., and a large part of them hope to use it 
> > on Flink 1.20. Should we consider explicitly supporting Flink 1.20 in version 
> > 4.0? Otherwise users will have to wait a relatively long release cycle.
> > 2. Currently supporting Flink 1.17 is difficult, but are there any plans to 
> > support it in the future? Do we need to wait for the Hadoop related 
> > repositories to release specific versions?
> > 
> > > On 22 October 2024 at 19:44, Ferenc Csaky ferenc.cs...@pm.me.INVALID wrote:
> > > 
> > > Hello devs,
> > > 
> > > I would like to start a discussion regarding a new HBase connector 
> > > release. Currently, the
> > > externalized HBase connector has only one release: 3.0.0, which supports 
> > > Flink 1.16 and 1.17.
> > > 
> > > Given that, it is obvious that the connector has been outdated for 
> > > quite a while. There
> > > is a long-lasting ticket [1] to release a newer HBase version, which also 
> > > contains a major version
> > > bump as HBase 1.x support is removed, but covering JDK17 with the current 
> > > Hadoop related
> > > dependency mix is impossible, because there are parts that do not play 
> > > well with it when you
> > > try to compile with JDK17+, and there are no runtime tests either.
> > > 
> > > Solving that properly will require bumping the HBase, Hadoop, and 
> > > Zookeeper versions as well,
> > > but that will require more digging and some refactoring, at least on the 
> > > test side.
> > > 
> > > To cut some corners and move forward, I think at this point it would make 
> > > sense to release
> > > version 4.0 that supports Flink 1.18 and 1.19, but only on top of JDK8 and 
> > > JDK11, just to close the
> > > current gap a bit. I am thinking about including the limitations in the 
> > > Java compat docs [2] to
> > > highlight them to users.
> > > 
> > > WDYT?
> > > 
> > > Best,
> > > Ferenc
> > > 
> > > [1] https://issues.apache.org/jira/browse/FLINK-35136
> > > [2] 
> > > https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/java_compatibility/
