Hi all,

I wanted to post here the result of a private discussion I had with Chesnay on this subject. The question was about ArchUnit with connectors:

"How to deal with different archunit violations between 2 versions of Flink ?  If a violation is normal and should be added to the violation store but the related rule has changed in a recent Flink version, how to have different set of violations between 2 flink versions for one single violation store?"

We concluded that even though a connector should support (and therefore be tested against) the last two versions of Flink, the ArchUnit tests should run only against the main supported Flink version (usually the most recent one).
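To make this concrete, here is a sketch of what such a setup could look like, in the style of the GitHub workflow files linked below in this thread. Everything in it (the version numbers, the archunit matrix flag, the *ArchitectureTest class-name pattern) is a hypothetical illustration, not the actual Cassandra connector config:

name: CI (sketch)
on: [push, pull_request]

jobs:
  compile_and_test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - flink: 1.17.1    # hypothetical main supported version: ArchUnit runs
            archunit: true
          - flink: 1.16.2    # hypothetical older supported version: ArchUnit skipped
            archunit: false
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: '8'
      - name: Test against the main supported Flink version (ArchUnit included)
        if: matrix.archunit
        run: mvn verify -Dflink.version=${{ matrix.flink }}
      - name: Test against the older Flink version (ArchUnit excluded)
        if: ${{ !matrix.archunit }}
        # '!' is Maven Surefire's exclusion syntax for -Dtest; the pattern is hypothetical
        run: mvn verify -Dflink.version=${{ matrix.flink }} '-Dtest=!*ArchitectureTest'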

As a consequence, I'll configure this in the Cassandra connector and update the connector migration wiki doc so it can serve as an example for such cases.

Best

Etienne


On 29/06/2023 at 15:57, Etienne Chauchot wrote:

Hi Martijn,

Thanks for your feedback. It makes total sense to me.

I'll enable it for Cassandra.

Best

Etienne

On 29/06/2023 at 10:54, Martijn Visser wrote:

Hi Etienne,

I think it's up to the actual maintainers of the connector to
make a decision on that: if their unreleased version of the connector
should be compatible with a new Flink version, then they should test
against it. For example, that's already done for Elasticsearch [1] and
JDBC [2].

Choosing which versions to support is a decision by the maintainers in
the community, and it always requires an action by a maintainer to
update the CI config to set the correct versions whenever a new Flink
version is released.
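
For illustration, that version pinning boils down to a small matrix in the workflow file, so a release bump is a one-line change (the version numbers below are hypothetical placeholders; see [1] and [2] for the real files):

    strategy:
      matrix:
        flink: [1.16.2, 1.17.1]  # bumped by a maintainer when a new Flink version is released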

Best regards,

Martijn

[1] https://github.com/apache/flink-connector-elasticsearch/blob/main/.github/workflows/push_pr.yml
[2] https://github.com/apache/flink-connector-jdbc/blob/main/.github/workflows/push_pr.yml

On Wed, Jun 28, 2023 at 6:09 PM Etienne Chauchot <echauc...@apache.org> wrote:
Hi all,

Connectors are external to Flink. As such, they need to be tested
against stable (released) versions of Flink.

But I was wondering if it would make sense to also test connectors in PRs
against the latest Flink master snapshot, to discover failures
before merging the PRs, ** while the author is still available **,
rather than discovering them in the nightly tests (which test against
the snapshot) after the merge. That would allow the author to anticipate
potential failures and write more future-proof code (even if master is
subject to change before the connector release).

Of course, if a breaking change is introduced in master, such tests
will fail. But they should be considered a preview of how the code
will behave against the current snapshot of the next Flink version.
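
Concretely, in a GitHub Actions workflow that would just mean adding the current snapshot version to the PR matrix (the versions below are hypothetical; SNAPSHOT artifacts would also require the Apache snapshots repository to be enabled in the build):

    strategy:
      matrix:
        flink: [1.17.1, 1.18-SNAPSHOT]  # latest release + current master snapshot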

WDYT?


Best

Etienne
