This is great! Thanks for working on this.

Best

Etienne

On 15/06/2023 at 13:19, Martijn Visser wrote:
Hi all,

I would like to inform you of two changes that have been made to the shared
CI workflow that's used for Flink's externalized connectors.

1. Up until now, weekly builds were running to validate that connector code
(still) works with Flink. However, these builds were only running for code
on the "main" branch of each connector, and not for its release branches
(like v3.0 for Elasticsearch, v1.0 for Opensearch, etc.). This was
tracked under https://issues.apache.org/jira/browse/FLINK-31923.

That issue has now been fixed: the GitHub Actions workflow now accepts a
map with arrays, where each entry combines a Flink version to test against
with the connector branch it should test. See
https://github.com/apache/flink-connector-jdbc/blob/main/.github/workflows/weekly.yml#L28-L47
for an example from the Flink JDBC connector.
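
To illustrate the shape of that map-with-arrays setup, here is a minimal
sketch (the version numbers, branch names, and input names below are
illustrative, not copied from the linked file; the linked weekly.yml is
the authoritative example):

name: Weekly
on:
  schedule:
    # Run once a week, at midnight on Sunday.
    - cron: "0 0 * * 0"
jobs:
  compile_and_test:
    strategy:
      matrix:
        # Each entry pairs a Flink version to test against with the
        # connector branch that should be checked out and built.
        flink_branches: [
          { flink: 1.17-SNAPSHOT, branch: main },
          { flink: 1.18-SNAPSHOT, branch: main },
          { flink: 1.16.2, branch: v3.0 }
        ]
    uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
    with:
      flink_version: ${{ matrix.flink_branches.flink }}
      connector_branch: ${{ matrix.flink_branches.branch }}

This way a single weekly run fans out over every supported combination of
Flink version and connector branch, instead of only testing main.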

This change has already been applied to the externalized connectors GCP
PubSub, RabbitMQ, JDBC, Pulsar, MongoDB, Opensearch, Cassandra, and
Elasticsearch. AWS is pending the merge of its PR. Kafka, Hive, and HBase
haven't finished externalization yet, so this isn't applicable to them.

2. While debugging a problem with the JDBC connector, one of the things we
needed was the ability to see a JVM thread dump. With
https://issues.apache.org/jira/browse/FLINK-32331 now completed, every
failed CI run will include a JVM thread dump. You can see the
implementation in
https://github.com/apache/flink-connector-shared-utils/blob/ci_utils/.github/workflows/ci.yml#L161-L195
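
As a rough sketch of the idea (illustrative only; the linked ci.yml is the
authoritative implementation, and this assumes a JDK with jps/jstack on the
runner's PATH), such a step can look like:

- name: Print JVM thread dumps on failure
  if: failure()
  run: |
    # jps lists the pids of all running JVMs; jstack prints a
    # thread dump for each of them into the build log.
    for jvm_pid in $(jps -q); do
      echo "===== Thread dump for JVM ${jvm_pid} ====="
      jstack "${jvm_pid}" || true
    done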

Best regards,

Martijn
