Hi Team,

Just joining the conversation for the first time, so pardon me if I repeat
questions that have already been answered.

It might already have been discussed, but I think the version of the
"connected" system could be important as well.

There might be some API changes between Iceberg 0.14.2 and 1.0.0 which
would require us to rewrite part of the code for the Flink-Iceberg
connector.
It would be important for users to know:
- Which Flink version(s) does this connector work with?
- Which Iceberg version(s) does this connector work with?
- Which code version do we have for this connector?
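
To make the three questions concrete: one option would be to encode all of
the versions in the artifact's version tag. A purely hypothetical sketch in
Python — neither the tag format nor the "iceberg" component below is part of
any agreed scheme, they only illustrate the idea:

```python
# Purely hypothetical sketch: a version tag carrying the connector code
# version, the supported Flink version, and (optionally) the version of
# the connected system.
import re

def parse_version_tag(tag: str) -> dict:
    m = re.fullmatch(
        r"(?P<connector>\d+\.\d+\.\d+)"   # which code version the connector has
        r"-(?P<flink>\d+\.\d+)"           # which Flink version it works with
        r"(?:-iceberg-(?P<iceberg>\d+\.\d+\.\d+))?",  # which Iceberg version
        tag,
    )
    if m is None:
        raise ValueError(f"unrecognized version tag: {tag}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_version_tag("1.0.0-1.15"))
print(parse_version_tag("1.0.1-1.15-iceberg-1.0.0"))
```

The obvious downside, as discussed below, is the proliferation of tags per
connector release.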

Does this make sense? What is the community's experience with the connected
systems? Are they stable enough that we can omit their version number from
the naming of the connectors? Would this be worth the proliferation of
versions?

Thanks,
Peter

Chesnay Schepler <ches...@apache.org> wrote (on Thu, 29 Sept 2022, 14:11):

> 2) No; the branch names would not have a Flink version in them; v1.0.0,
> v1.0.1 etc.
>
> On 29/09/2022 14:03, Martijn Visser wrote:
> > If I summarize it correctly, that means that:
> >
> > 1. The versioning scheme would be <major.minor.patch connector
> > version>-<major.minor supported Flink version>, where there will never be
> > a patch release for a minor version if a newer minor version already
> > exists. E.g., 1.0.0-1.15; 1.0.1-1.15; 1.1.0-1.15; 1.2.0-1.15;
> >
> > 2. The branch naming scheme would be vmajor.minor-flink-major.flink-minor
> > E.g., v1.0.0-1.15; v1.0.1-1.15; v1.1.0-1.15; v1.2.0-1.15;
> >
> > I would +1 that.
> >
> > Best regards,
> >
> > Martijn
> >
> > On Tue, Sep 20, 2022 at 2:21 PM Chesnay Schepler <ches...@apache.org>
> > wrote:
> >
> >>   > After 1.16, only patches are accepted for 1.2.0-1.15.
> >>
> >> I feel like this is a misunderstanding that both you and Danny ran into.
> >>
> >> What I meant in the original proposal is that the last 2 _major_
> >> /connector /versions are supported, with the latest receiving additional
> >> features.
> >> (Provided that the previous major version still works against a
> >> currently supported Flink version!)
> >> There will never be patch releases for a minor version if a newer minor
> >> version exists.
> >>
> >> IOW, the minor/patch releases within a major version do not form a tree
> >> (like in Flink), but a line.
> >>
> >> 1.0.0 -> 1.0.1 -> 1.1.0 -> 1.2.0 -> ...
> >> NOT
> >> 1.0.0 -> 1.0.1 -> 1.0.2
> >>      |-----> 1.1.0 -> 1.1.1
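
Chesnay's rule above — releases within a major version form a line, not a
tree — can be stated as a small invariant: a new release must either bump
the patch of the current latest version or bump the minor. A Python sketch
(illustration only, not project tooling):

```python
def is_valid_successor(latest: tuple, new: tuple) -> bool:
    """Within one major version, releases form a line, not a tree: the only
    valid successors of the latest release are its next patch or next minor."""
    major, minor, patch = latest
    return new in (
        (major, minor, patch + 1),  # patch release on the latest minor
        (major, minor + 1, 0),      # new minor release
    )

# 1.0.0 -> 1.0.1 -> 1.1.0 is a valid line:
assert is_valid_successor((1, 0, 0), (1, 0, 1))
assert is_valid_successor((1, 0, 1), (1, 1, 0))
# but once 1.1.0 exists, a 1.0.2 patch would branch the line:
assert not is_valid_successor((1, 1, 0), (1, 0, 2))
```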
> >>
> >> If we actually follow semantic versioning then it's just not necessary
> >> to publish a patch for a previous version.
> >>
> >> So if 2.x exists, then (the latest) 2.x gets features and patches, and
> >> the latest 1.x gets patches.
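
The support policy in the previous sentence (the latest major gets features
and patches, the previous major gets patches only) can be sketched
mechanically as well — a hypothetical Python illustration:

```python
def support_levels(releases):
    """Map each (major, minor, patch) release to its support level under the
    proposal: the latest release of the newest major gets features and
    patches; the latest release of the previous major gets patches only."""
    majors = sorted({r[0] for r in releases}, reverse=True)
    latest_of = {m: max(r for r in releases if r[0] == m) for m in majors}
    levels = {}
    for r in releases:
        if r == latest_of[majors[0]]:
            levels[r] = "features + patches"
        elif len(majors) > 1 and r == latest_of[majors[1]]:
            levels[r] = "patches"
        else:
            levels[r] = "unsupported"
    return levels

print(support_levels([(1, 1, 0), (1, 2, 0), (2, 0, 0)]))
```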
> >>
> >> I hope that clears things up.
> >>
> >> On 20/09/2022 14:00, Jing Ge wrote:
> >>> Hi,
> >>>
> >>> Thanks for starting this discussion. It is an interesting one and yeah,
> >>> it is a tough topic. It seems like a centralized release version schema
> >>> control for decentralized connector development ;-)
> >>>
> >>> In general, I like this idea, not because it is a good one but because
> >>> there might be no better one (that's life!). The solution gives users an
> >>> easy life at the price of extra effort on the developer's part. But it
> >>> is a chicken-and-egg situation, i.e. developer friendly vs. user
> >>> friendly. If it is hard for developers to move forward, it will also be
> >>> difficult for users to get a new release, even if the version schema is
> >>> user friendly.
> >>>
> >>> I'd like to raise some questions/concerns to make sure we are on the
> >>> same page.
> >>>
> >>> @Chesnay
> >>>
> >>> c1) Imagine we have 2.0.0 for 1.15:
> >>>
> >>> - 2.0.0-1.14 (patches)
> >>> - 2.0.0-1.15 (feature and patches)
> >>> ===> new major release targeting 1.16 and we need to change code for
> >>> new API
> >>> - 2.0.0-1.14 (no support)
> >>> - 2.0.0-1.15 (patches)
> >>>      - 2.0.1-1.15 (new patches)
> >>> - 2.1.0-1.16 (feature and patches)
> >>>
> >>> There is no more 2.1.0-1.15 because only the latest version is
> >>> receiving new features.
> >>>
> >>> b1) Even if, in some special cases, we need to break the rule, we
> >>> should avoid confusing users:
> >>>
> >>> ===> new major release targeting 1.16 and we need to change code for
> >>> new API
> >>> - 2.0.0-1.14 (no support)
> >>> - 2.0.0-1.15 (patches)
> >>> - 2.1.0-1.16 (feature and patches)
> >>> ===> now we want to break the rule to add features to the penultimate
> >>> version
> >>> - 2.0.0-1.14 (no support)
> >>> - 2.0.0-1.15 (patches)
> >>>       - 2.2.0-1.15 (patches, new features)  // 2.1.0-1.15 vs.
> >>> 2.2.0-1.15, have to choose 2.2.0-1.15 to avoid conflict
> >>> - 2.1.0-1.16 (feature and patches)
> >>>
> >>> we have two options: 2.1.0-1.15 vs. 2.2.0-1.15, both will confuse
> >>> users:
> >>> - Using 2.1.0-1.15 will conflict with the existing 2.1.0-1.16. The
> >>> connector version of "2.1.0-1.16" is actually 2.1.0, which means it has
> >>> the same code as 2.1.0-1.15, but in this case it contains upgraded code.
> >>> - Using 2.2.0-1.15 will skip 2.1.0-1.15. Actually, it needs to skip all
> >>> occupied minor-1.16 versions, heads-up release manager!
> >>>
> >>> c2) Allow me to use your example:
> >>>
> >>> ===> new major release targeting 1.16
> >>> - 1.2.0-1.14 (no support; unsupported Flink version)
> >>> - 1.2.0-1.15 (patches; supported until either 3.0 or 1.17, whichever
> >>> happens first)
> >>> - 2.0.0-1.15 (feature and patches)
> >>> - 2.0.0-1.16 (feature and patches)
> >>>
> >>> I didn't understand the part about "2.0.0-1.15 (features and patches)".
> >>> After 1.16, only patches are accepted for 1.2.0-1.15.
> >>> It should be clearly defined how to bump up the connector's version
> >>> number for the new Flink version. If the connector major number would
> >>> always bump up, it would make less sense to use the Flink version as a
> >>> postfix. With the same example, it should be:
> >>>
> >>> ===> new major release targeting 1.16
> >>> - 1.2.0-1.14 (no support; unsupported Flink version)
> >>> - 1.2.0-1.15 (patches; supported until either 3.0 or 1.17, whichever
> >>> happens first)
> >>>        - 1.2.1-1.15 (new patches)
> >>> - 1.3.0-1.16 (feature and patches)
> >>>       - 1.4.0-1.16 (feature and patches, new features)
> >>>       - 2.0.0-1.16 (feature and patches, major upgrade of connector
> >>> itself)
> >>> or
> >>>
> >>> - 1.2.0-1.14 (patches)
> >>> - 1.2.0-1.15 (feature and patches)
> >>>       - 2.0.0-1.15 (feature and patches, major upgrade of connector
> >>> itself)
> >>> ===> new major release targeting 1.16
> >>> - 1.2.0-1.14 (no support; unsupported Flink version)
> >>> - 2.0.0-1.15 (patches)
> >>>       - 2.0.1-1.15 (new patches)
> >>> - 2.1.0-1.16 (feature and patches)
> >>>      - 2.2.0-1.16 (feature and patches, new features)
> >>>
> >>> i.e. commonly, there should be no connector major version change when
> >>> using the Flink version postfix as the version schema. Special cases
> >>> (which rarely happen) are obviously allowed.
> >>>
> >>> Best regards,
> >>> Jing
> >>>
> >>> On Tue, Sep 20, 2022 at 10:57 AM Martijn Visser <martijnvis...@apache.org>
> >>> wrote:
> >>>
> >>>> Hi all,
> >>>>
> >>>> This is a tough topic, I also had to write things down a couple of
> >>>> times. To summarize and add my thoughts:
> >>>> To summarize and add my thoughts:
> >>>>
> >>>> a) I think everyone is agreeing that "Only the last 2 versions of a
> >>>> connector are supported per supported Flink version, with only the
> >>>> latest version receiving new features". In the current situation, that
> >>>> means that Flink 1.14 and Flink 1.15 would be supported for connectors.
> >>>> This results in a maximum of 4 supported connector versions.
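
The "maximum of 4" in (a) is just the cross product of the two supported
Flink versions and the last two connector versions. A tiny sketch (the
version numbers are made up for illustration):

```python
flink_versions = ["1.14", "1.15"]        # currently supported Flink releases
connector_versions = ["3.0.0", "3.1.0"]  # last two connector versions (hypothetical)

# Every supported connector version is built against every supported Flink version.
supported = [f"{c}_{f}" for c in connector_versions for f in flink_versions]
print(supported)       # the full support matrix
print(len(supported))  # at most 2 x 2 = 4 artifacts
```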
> >>>>
> >>>> b1) In an ideal world, I would have liked Flink's APIs that are used
> >>>> by connectors to be versioned (that's why there's now a Sink V1 and a
> >>>> Sink V2). However, we're not there yet.
> >>>>
> >>>> b2) With regards to the remark of using @Internal APIs, one thing that
> >>>> we agreed to in previous discussions is that connectors shouldn't need
> >>>> to rely on @Internal APIs, so that the connector surface also
> >>>> stabilizes.
> >>>>
> >>>> b3) In the end, I think what matters the most is the user's perception
> >>>> on versioning. So the first thing to establish would be the versioning
> >>>> for connectors itself. So you would indeed have a <major.minor.patch>
> >>>> scheme. Next is the compatibility of that scheme with a version of
> >>>> Flink. I do like Chesnay's approach for using the Scala suffixes idea.
> >>>> So you would have <major.minor.patch connector>_<major.minor Flink
> >>>> version>. In the currently externalized Elasticsearch connector, we
> >>>> would end up with 3.0.0_1.14 and 3.0.0_1.15 as first released versions.
> >>>> If a new Flink version would be released that doesn't require code
> >>>> changes to the connector, the released version would be 3.0.0_1.16.
> >>>> That means that there have been no connector code changes (no patches,
> >>>> no new features) when comparing this across different Flink versions.
> >>>>
> >>>> b4) Now using the example that Chesnay provided (yet slightly modified
> >>>> to match it with the Elasticsearch example I've used above), there
> >>>> exists an Elasticsearch connector 3.0.0_1.15. Now in Flink 1.16,
> >>>> there's a new API that we want to use, which is a test util. It would
> >>>> result in version 3.1.0_1.16 for the new Flink version. Like Chesnay
> >>>> said, for the sake of argument, at the same time we also had some
> >>>> pending changes for the 1.15 connector (let's say exclusive to 1.15;
> >>>> some workaround for a bug or smth), so we would also end up with
> >>>> 3.1.0-1.15. I agree with Danny that we should avoid this situation: the
> >>>> perception of the user would be that there's no divergence between the
> >>>> 3.1.0 versions, except the compatible Flink version. I really am
> >>>> wondering how often we will run into that situation. From what I've
> >>>> seen so far with connectors, bug fixes always end up in both the
> >>>> release branch and the master branch. The only exceptions are test
> >>>> stabilities or documentation fixes, but if we only resolve these, they
> >>>> wouldn't need to be released. If such a special occasion would occur, I
> >>>> would be inclined to go for a hotfix approach, where you would end up
> >>>> with 3.0.0.1_1.15.
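
One practical wrinkle with the hotfix idea: version identifiers would then
vary in length (3.0.0 vs. 3.0.0.1), so any tooling comparing them would
need to pad the shorter one. A sketch, assuming purely numeric components:

```python
def version_key(v: str):
    """Turn '3.0.0' or a hotfix '3.0.0.1' into a comparable tuple, padding
    with zeros so that 3.0.0 < 3.0.0.1 < 3.0.1."""
    parts = tuple(int(p) for p in v.split("."))
    return parts + (0,) * (4 - len(parts))

releases = ["3.0.1", "3.0.0.1", "3.0.0"]
print(sorted(releases, key=version_key))  # ['3.0.0', '3.0.0.1', '3.0.1']
```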
> >>>>
> >>>> c) Branch wise, I think we should end up with <major.minor.patch
> >>>> connector>_<major.minor Flink version>. So again with the Elasticsearch
> >>>> example, at this moment there would be 3.0.0_1.14 and 3.0.0_1.15
> >>>> branches.
> >>>>
> >>>> Best regards,
> >>>>
> >>>> Martijn
> >>>>
>
>
