[
https://issues.apache.org/jira/browse/FLINK-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16665322#comment-16665322
]
ASF GitHub Bot commented on FLINK-10107:
----------------------------------------
twalthr opened a new pull request #6534: [FLINK-10107] [sql-client] Relocate
Flink Kafka connectors for SQL JARs
URL: https://github.com/apache/flink/pull/6534
## What is the purpose of the change
This PR enforces even more shading for SQL JARs than before. In the past, we
only shaded the Kafka dependencies themselves. However, Flink's Kafka
connectors depend on each other and do not use version-specific package names
(unlike Elasticsearch, where we use `elasticsearch2`, `elasticsearch3`, etc.),
so relocating the connector classes as well is the only way to avoid
dependency conflicts between different Kafka SQL JARs. The end-to-end tests
have been extended to detect classloading issues in builds.
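For illustration only (not part of the PR diff), the sketch below shows the
version-specific relocation naming pattern that keeps the Kafka SQL JARs from
clashing on the same class names. The `kafka09` prefix matches the shaded
package names visible in the stack traces of this issue; the `kafka010` prefix
and the helper method itself are assumptions made for the example.

```java
// Minimal sketch, not code from this PR: version-specific relocation prefixes
// give each Kafka SQL JAR its own copy of the classes under a distinct package.
public class RelocationPatternExample {

    /** Hypothetical helper: maps an original class name to its relocated name. */
    static String relocate(String originalClassName, String connectorVersion) {
        // e.g. "org.apache.flink." + "kafka09" + ".shaded." + original class name
        return "org.apache.flink." + connectorVersion + ".shaded." + originalClassName;
    }

    public static void main(String[] args) {
        String kafkaClass = "org.apache.kafka.common.requests.OffsetCommitResponse";
        // Loading both SQL JARs in one classloader no longer resolves to
        // conflicting class names, because each JAR uses its own prefix.
        System.out.println(relocate(kafkaClass, "kafka09"));   // prefix seen in the trace below
        System.out.println(relocate(kafkaClass, "kafka010"));  // assumed analogous prefix
    }
}
```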
## Brief change log
- Add more relocation to Flink's Kafka SQL JARs
## Verifying this change
The SQL Client end-to-end tests have been adapted to detect classloading
issues earlier.
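To illustrate the kind of check involved (this is a sketch, not the actual
test code from this PR), the snippet below probes a SQL JAR in an isolated
classloader so that a missing or unrelocated class fails fast instead of
surfacing only when a job runs. The JAR file name is hypothetical; the class
name is taken from the `NoClassDefFoundError` quoted in this issue.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

public class SqlJarClassLoadingProbe {

    public static void main(String[] args) throws Exception {
        // Path to the SQL JAR under test; the default file name here is hypothetical.
        URL sqlJar = Paths.get(args.length > 0 ? args[0]
                : "flink-connector-kafka-0.9-sql-jar.jar").toUri().toURL();

        // Isolated classloader (no application parent), so every class must come
        // from the JAR itself rather than leaking in from another Kafka SQL JAR.
        try (URLClassLoader loader = new URLClassLoader(new URL[] {sqlJar}, null)) {
            // Relocated class from the error in this issue; if the relocation is
            // incomplete, the probe fails here instead of during a running job.
            String relocated = "org.apache.flink.kafka09.shaded."
                    + "org.apache.kafka.common.requests.OffsetCommitResponse";
            Class.forName(relocated, false, loader);
            System.out.println("OK: " + relocated + " found in " + sqlJar);
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            System.err.println("Classloading issue detected: " + e);
            System.exit(1);
        }
    }
}
```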
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): yes, but only for
SQL JARs
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: no
- The serializers: no
- The runtime per-record code paths (performance sensitive): no
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
- The S3 file system connector: no
## Documentation
- Does this pull request introduce a new feature? no
- If yes, how is the feature documented? not applicable
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> SQL Client end-to-end test fails for releases
> ---------------------------------------------
>
> Key: FLINK-10107
> URL: https://issues.apache.org/jira/browse/FLINK-10107
> Project: Flink
> Issue Type: Bug
> Components: Table API & SQL
> Reporter: Timo Walther
> Assignee: Timo Walther
> Priority: Major
> Labels: pull-request-available
>
> It seems that the SQL JARs for Kafka 0.10 and Kafka 0.9 have conflicts that
> only occur for releases and not for SNAPSHOT builds. This might be due to
> their file names: depending on the file name, either 0.9 is loaded before
> 0.10 or vice versa.
> One of the following errors occurred:
> {code}
> 2018-08-08 18:28:51,636 ERROR org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils - Failed to close coordinator
> java.lang.NoClassDefFoundError: org/apache/flink/kafka09/shaded/org/apache/kafka/common/requests/OffsetCommitResponse
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:473)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:357)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.maybeAutoCommitOffsetsSync(ConsumerCoordinator.java:439)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.close(ConsumerCoordinator.java:319)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.ClientUtils.closeQuietly(ClientUtils.java:63)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1277)
>     at org.apache.flink.kafka09.shaded.org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1258)
>     at org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread.run(KafkaConsumerThread.java:286)
> Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka09.shaded.org.apache.kafka.common.requests.OffsetCommitResponse
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 8 more
> {code}
> {code}
> java.lang.NoSuchFieldError: producer
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.invoke(FlinkKafkaProducer010.java:369)
>     at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>     at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
>     at org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
> {code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)