[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409063#comment-15409063 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/2231#discussion_r73652372
--- Diff: flink-streaming-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerTestBase.java ---
@@ -186,7 +190,7 @@ public void runFailOnNoBrokerTest() throws Exception {
stream.print();
see.execute("No broker test");
} catch(RuntimeException re) {
- if(kafkaServer.getVersion().equals("0.9")) {
+ if(kafkaServer.getVersion().equals(getExpectedKafkaVersion())) {
--- End diff ---
I think the original intent of this assertion was that the 0.8 connector
throws a different exception message than the 0.9 connector, so adding
`getExpectedKafkaVersion()` is a bit confusing with respect to the test's
intent.
Perhaps we should remove the new `getExpectedKafkaVersion()` and simply
change this to `kafkaServer.getVersion().equals("0.9") ||
kafkaServer.getVersion().equals("0.10")`?
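For concreteness, here is a standalone sketch of that check (the class and
method names are hypothetical; only the OR-ed condition mirrors the
suggestion, standing in for the branch inside the catch block):

```java
public class VersionCheckSketch {

    // Stand-in for kafkaServer.getVersion() in KafkaConsumerTestBase.
    static boolean expectsPost08Message(String kafkaVersion) {
        // The 0.9 and 0.10 connectors surface the same exception message;
        // only 0.8 differs, so one OR-ed check covers both newer versions.
        return kafkaVersion.equals("0.9") || kafkaVersion.equals("0.10");
    }

    public static void main(String[] args) {
        System.out.println(expectsPost08Message("0.8"));  // false: 0.8-style message
        System.out.println(expectsPost08Message("0.9"));  // true
        System.out.println(expectsPost08Message("0.10")); // true
    }
}
```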
> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector
> Affects Versions: 1.0.3
> Reporter: Elias Levy
> Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps and compressed messages now include
> relative offsets. As it is now, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.
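As a reference point, a minimal producer sketch (assuming the plain
kafka-clients 0.10.0.0 API rather than the Flink sink; the broker address
and topic name are placeholders) showing the explicit record timestamp that
the new message format carries:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimestampedProduceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Since 0.10.0.0, ProducerRecord can carry an explicit timestamp
            // (passing null instead lets the producer stamp the current time):
            producer.send(new ProducerRecord<>("my-topic", null,
                    System.currentTimeMillis(), "key", "value"));
        }
    }
}
```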
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)