Hi Weijie,
Thanks for bringing this up. Fabian, Arvid and I also discussed this
about a year ago, when we were thinking about the externalization of the
connectors. I believe that in the end you will need a 'Smoke Test' [1]
per connector: one for a DataStream API program and one for a Table/SQL
program. With that in mind, SmokeKafkaITCase was created for the
combination of Kafka and the DataStream API. It would be good to
standardize on such a practice.
I'm not sure if SqlClient{connector_name}ITCase is the best convention,
because it implies that you're testing the SQL Client itself. That's
why we liked the idea of Smoke tests.
Best regards,
Martijn
[1] https://en.wikipedia.org/wiki/Smoke_testing_(software)
On Mon, Jan 9, 2023 at 9:41 AM Hamdy, Ahmed <[email protected]>
wrote:
> Hi Weijie
> I personally agree with the proposal; it helps to keep high standards
> across our repos.
> Speaking from the AWS connectors side, we have maintained e2e tests for
> the Kinesis and Firehose SQL connectors, with DDB to follow.
> Happy to coordinate regarding any new conventions or suggestions.
>
> On 09/01/2023, 05:48, "weijie guo" <[email protected]> wrote:
>
>
> Hi devs,
>
>
>     I'd like to start a discussion about adding e2e tests for the
>     sql-connector uber jars.
>
>
>     I know that some connectors like Kafka and HBase have corresponding
>     sql connector uber jar tests, located in SqlClientITCase and
>     SQLClientHBaseITCase respectively. However, some sql connectors,
>     such as pulsar and rabbitmq, do not seem to have tests for their
>     uber jars. It is worth mentioning that before the migration to an
>     external repository, apache/flink tested the ElasticSearch uber
>     jar. After the migration to the external repository, we lost this
>     test. In addition, the previous test only covered es7, not es6,
>     which is why a bug like FLINK-30359 existed for a long time without
>     anybody noticing it.
>
>
>     Because sql-connector-* generally only uses the maven-shade-plugin
>     to produce the uber jar, the end-to-end test only needs to start a
>     corresponding docker container, add the uber jar as a dependency
>     through SQLJobSubmissionBuilder#addJars, and run a very simple job.
>
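>     To make this concrete, a rough sketch of such a test is shown
>     below. It loosely follows the existing SqlClientITCase for Kafka;
>     the class name SqlClientElasticsearch7ITCase (using the naming
>     convention proposed further down), the docker image, the system
>     property for locating the uber jar, and the exact packages and
>     signatures of helpers such as FlinkContainers and SQLJobSubmission
>     are assumptions and would need to be adapted to the actual test
>     utilities of each repository:
>
>     import java.nio.file.Path;
>     import java.nio.file.Paths;
>     import java.util.Arrays;
>     import java.util.List;
>
>     import org.apache.flink.connector.testframe.container.FlinkContainers;
>     import org.apache.flink.tests.util.flink.SQLJobSubmission;
>
>     import org.junit.jupiter.api.Test;
>     import org.testcontainers.elasticsearch.ElasticsearchContainer;
>
>     /** Hypothetical uber jar e2e test; container startup/teardown and network wiring omitted. */
>     class SqlClientElasticsearch7ITCase {
>
>         // The sql-connector uber jar produced by the maven-shade-plugin; how it is
>         // passed to the test (system property, lookup in target/, ...) is up to the repo.
>         private static final Path SQL_CONNECTOR_UBER_JAR =
>                 Paths.get(System.getProperty("sql.connector.uber.jar"));
>
>         // The external system under test, started as a docker container.
>         private static final ElasticsearchContainer ELASTICSEARCH =
>                 new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.10.2");
>
>         // A dockerized Flink cluster; exact builder options depend on the test utilities.
>         private final FlinkContainers flink = FlinkContainers.builder().build();
>
>         @Test
>         void testUberJarEndToEnd() throws Exception {
>             // A very simple job: generate a few rows and write them through the connector.
>             // How the Flink containers resolve the Elasticsearch address is omitted here.
>             List<String> sqlLines =
>                     Arrays.asList(
>                             "CREATE TABLE gen (id INT) WITH ("
>                                     + "'connector' = 'datagen', 'number-of-rows' = '10');",
>                             "CREATE TABLE es_sink (id INT) WITH ("
>                                     + "'connector' = 'elasticsearch-7',"
>                                     + "'hosts' = 'http://elasticsearch:9200',"
>                                     + "'index' = 'uber-jar-smoke-test');",
>                             "INSERT INTO es_sink SELECT id FROM gen;");
>
>             // Only the uber jar is shipped with the job, so excluded dependencies or
>             // incorrectly shaded classes surface as a failure right here.
>             flink.submitSQLJob(
>                     new SQLJobSubmission.SQLJobSubmissionBuilder(sqlLines)
>                             .addJars(SQL_CONNECTOR_UBER_JAR)
>                             .build());
>
>             // Finally, query the index (omitted here) and assert that the rows arrived.
>         }
>     }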
>
>     Since the uber jar generated by sql-connector-* may not work
>     properly for various reasons, for example because required
>     dependencies were excluded or because a JDBC driver was shaded
>     incorrectly (the check sketched after this paragraph illustrates
>     the latter), I propose to check and add sql-connector e2e tests for
>     all sql connectors. Given that Flink plans to externalize the
>     connectors, there seems to be no good way to uniformly detect and
>     run all such tests. Ideally there would be a mechanism that ensures
>     these tests exist in every external connector repository that has a
>     sql-connector module, but since there does not seem to be a good
>     way to enforce this, it may have to remain the responsibility of
>     the connector maintainers and code reviewers.
>
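>     To illustrate the shading problem: if a bundled JDBC driver's
>     classes are relocated but the corresponding META-INF/services entry
>     is not rewritten (for example because the shade configuration lacks
>     the ServicesResourceTransformer), the service file still points at
>     the original class name and the driver can no longer be loaded from
>     the uber jar. A purely illustrative check (EXPECTED_SHADED_PREFIX
>     is a placeholder) could look like this:
>
>     import java.io.BufferedReader;
>     import java.io.InputStreamReader;
>     import java.nio.charset.StandardCharsets;
>     import java.nio.file.Path;
>     import java.util.jar.JarEntry;
>     import java.util.jar.JarFile;
>
>     class UberJarDriverCheck {
>         // Placeholder: the package prefix that relocated classes are expected to use.
>         static final String EXPECTED_SHADED_PREFIX = "org.apache.flink.connector.shaded.";
>
>         static void checkJdbcDriverService(Path uberJar) throws Exception {
>             try (JarFile jar = new JarFile(uberJar.toFile())) {
>                 JarEntry entry = jar.getJarEntry("META-INF/services/java.sql.Driver");
>                 if (entry == null) {
>                     return; // this connector does not bundle a JDBC driver
>                 }
>                 try (BufferedReader reader = new BufferedReader(
>                         new InputStreamReader(jar.getInputStream(entry), StandardCharsets.UTF_8))) {
>                     String driverClass = reader.readLine();
>                     // If relocation happened without rewriting the service file, this
>                     // still names the original driver class, which is no longer in the jar.
>                     if (driverClass != null && !driverClass.startsWith(EXPECTED_SHADED_PREFIX)) {
>                         throw new AssertionError(
>                                 "java.sql.Driver service entry was not relocated: " + driverClass);
>                     }
>                 }
>             }
>         }
>     }
>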
>     I understand that the lack of clear standards, or oversights during
>     code review, may be the reason why the uber jar tests are missing
>     for some connectors. Therefore, I suggest that we do the following:
>
>     1. Check all sql-connector-* modules in the internal and external
>     repositories and add the missing tests.
>
>     2. In order to establish a convention and facilitate code review, I
>     suggest that the sql-connector uber jar tests be named
>     SqlClient{connector_name}ITCase, because we currently have many
>     confusing names for tests that do the same thing, such as
>     SQLClientHBaseITCase, SqlClientITCase and
>     KinesisDataStreamsTableApiIT.
>
>     3. I don't know whether the community has documentation that guides
>     connector contributors. If so, this uber jar test requirement
>     should be added to the relevant section. If not, perhaps we should
>     create such a document?
>
>     4. In any case, we should ensure during code review that no tests
>     (UT, IT, E2E), including the sql-connector e2e tests, are lost.
>
> Any thoughts on this? Looking forward to your reply!
>
> Best regards,
>
> Weijie
>
>