[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377036#comment-15377036
 ] 

ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user radekg commented on the issue:

    https://github.com/apache/flink/pull/2231
  
    Sure, the problems are the following:
    
    - https://github.com/apache/flink/pull/2231/commits/06936d7c5acc0897348019161c9ced4596a0a4dd#diff-aba21cf86694f3f2cd85e2e5e9b04972R305
      in 0.9, `consumer.assign` (https://github.com/apache/flink/pull/2231/commits/06936d7c5acc0897348019161c9ced4596a0a4dd#diff-aba21cf86694f3f2cd85e2e5e9b04972R180) takes a `List`; in 0.10 it takes a `Collection`
    - for unit tests: https://github.com/apache/flink/pull/2231/commits/06936d7c5acc0897348019161c9ced4596a0a4dd#diff-ab65f3156ed8820677f3420152b78908R130
      if we use the 0.9 Kafka version with the 0.10 client, the concrete client tests fail because they catch the wrong exception type in https://github.com/TheWeatherCompany/flink/blob/06936d7c5acc0897348019161c9ced4596a0a4dd/flink-streaming-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerTestBase.java#L185
    
    Silly stuff. Everything else works just fine. Feel free to reuse this stuff.
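
    For illustration, the `assign` signature change can be sketched with stand-in classes. These are hypothetical mock-ups, not the real `org.apache.kafka.clients.consumer.KafkaConsumer`; they only show why code written against the 0.9 `List` parameter still compiles against 0.10's `Collection`, but not the other way around:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

// Hypothetical stand-ins for the two client APIs, kept minimal to show
// the signature change; assign() returns the partition count for the demo.
class Consumer09 {
    int assign(List<String> partitions) {        // 0.9: parameter is List
        return partitions.size();
    }
}

class Consumer10 {
    int assign(Collection<String> partitions) {  // 0.10: parameter is Collection
        return partitions.size();
    }
}

public class AssignSignatureDemo {
    public static void main(String[] args) {
        List<String> asList = Arrays.asList("topic-0", "topic-1");

        new Consumer09().assign(asList);                 // a List works on 0.9
        new Consumer10().assign(asList);                 // ...and on 0.10
        new Consumer10().assign(new HashSet<>(asList));  // any Collection works on 0.10
        // new Consumer09().assign(new HashSet<>(asList)); // would not compile: Set is not a List
        System.out.println("ok");
    }
}
```

    This is why a shared code path compiled against one client version can break against the other even though the method name is identical.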
    
    FYI: I'd be confused if I were to use a class indicating 0.9 when working
    with 0.10; that's why I assembled a separate module. 0.9 is done and no
    future work is required there, so it makes sense to have a 0.10 module. Just my opinion.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.  
> Published messages now include timestamps, and compressed messages now include 
> relative offsets.  As it stands, brokers must decompress producer-compressed 
> messages, assign offsets to them, and recompress them, which is wasteful and 
> makes it less likely that compression will be used at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
