[ 
https://issues.apache.org/jira/browse/FLINK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225543#comment-15225543
 ] 

ASF GitHub Bot commented on FLINK-3541:
---------------------------------------

Github user skyahead commented on the pull request:

    https://github.com/apache/flink/pull/1846#issuecomment-205604300
  
    @StephanEwen I changed the Kafka version from 0.9.0.1 to 0.9.0.0 and verified 
that the NullPointerException does get caught, and that the code retries 
connecting 10 times. 
    
    Using 0.9.0.1, however, the NullPointerException no longer occurs; 
instead, a TimeoutException is thrown and caught as expected.
    
    All test cases in Kafka08ITCase (19 cases) and Kafka09ITCase (15 cases) 
pass in my local environment.


> Clean up workaround in FlinkKafkaConsumer09 
> --------------------------------------------
>
>                 Key: FLINK-3541
>                 URL: https://issues.apache.org/jira/browse/FLINK-3541
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>    Affects Versions: 1.0.0
>            Reporter: Till Rohrmann
>            Priority: Minor
>
> In the current {{FlinkKafkaConsumer09}} implementation, we repeatedly start a 
> new {{KafkaConsumer}} if the method {{KafkaConsumer.partitionsFor}} throws an 
> NPE. This is due to a bug in Kafka version 0.9.0.0. See 
> https://issues.apache.org/jira/browse/KAFKA-2880. The code can be found in 
> the constructor of {{FlinkKafkaConsumer09.java:208}}.
> However, the problem is marked as fixed for version 0.9.0.1, which we also 
> use for the flink-connector-kafka. Therefore, we should be able to get rid of 
> the workaround.
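
For context, the workaround described above amounts to a retry loop around a call that may throw a NullPointerException. The sketch below shows that pattern in isolation; the names (RetryOnNpe, retryOnNpe, maxAttempts) are illustrative, not Flink's actual code, which wraps KafkaConsumer.partitionsFor in the FlinkKafkaConsumer09 constructor.

```java
import java.util.function.Supplier;

// Minimal sketch of the retry-on-NPE pattern used as a workaround for
// KAFKA-2880: re-attempt the call (in Flink's case, re-create the
// KafkaConsumer and call partitionsFor again) when it throws an NPE.
// All names here are hypothetical, not Flink's real API.
public class RetryOnNpe {
    static <T> T retryOnNpe(Supplier<T> call, int maxAttempts) {
        NullPointerException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.get();   // e.g. consumer.partitionsFor(topic)
            } catch (NullPointerException npe) {
                last = npe;          // broker metadata not ready yet; retry
            }
        }
        throw last;                  // give up after maxAttempts failures
    }

    public static void main(String[] args) {
        // Simulate a call that fails twice with an NPE, then succeeds.
        int[] remainingFailures = {2};
        String result = retryOnNpe(() -> {
            if (remainingFailures[0]-- > 0) {
                throw new NullPointerException("metadata not ready");
            }
            return "partitions";
        }, 10);
        System.out.println(result); // prints "partitions"
    }
}
```

With 0.9.0.1 fixing KAFKA-2880, this loop becomes unnecessary, which is exactly the cleanup this issue asks for.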



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
