tillrohrmann commented on a change in pull request #7020: [FLINK-10774] [Kafka]
connection leak when partition discovery is disabled an…
URL: https://github.com/apache/flink/pull/7020#discussion_r251021769
##########
File path:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
##########
@@ -732,9 +745,7 @@ public void run() {
throw new RuntimeException(discoveryLoopError);
}
} else {
- // won't be using the discoverer
- partitionDiscoverer.close();
-
+ // partitionDiscoverer is already closed in open method
Review comment:
Well, I think you've raised a very good point here @stevenzwu. What happens
if partition discovery is enabled and the creation of the `kafkaFetcher` fails?
Since `close` won't close the partition discoverer, there will be another leak.
The same happens if `kafkaFetcher.runFetchLoop` fails while partition discovery
is enabled.
I think we should rethink the lifecycle management of the
`partitionDiscoverer` a bit. What if we say that the `partitionDiscoverer`
will only be closed in the `close` method, which is called by the `Task`'s
main thread? To avoid concurrent-access exceptions, we would first have to
wait for the discovery loop's termination; after that it should be safe to
close the discoverer.
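To illustrate the proposed lifecycle, here is a minimal, self-contained sketch of the idea. The `PartitionDiscoverer` stub and the class name are hypothetical simplifications, not the actual Flink classes: the point is only that `close()` stops the discovery loop, joins its thread, and closes the discoverer afterwards, so no other thread can race against the close.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DiscovererLifecycleSketch {

    // Hypothetical stand-in for Flink's AbstractPartitionDiscoverer.
    static class PartitionDiscoverer {
        private final AtomicBoolean closed = new AtomicBoolean(false);
        void close() { closed.set(true); }
        boolean isClosed() { return closed.get(); }
    }

    private final PartitionDiscoverer partitionDiscoverer = new PartitionDiscoverer();
    private volatile boolean running = true;
    private Thread discoveryLoopThread;

    // Simplified stand-in for the consumer's run(): starts the discovery loop
    // but never closes the discoverer itself.
    void run() {
        discoveryLoopThread = new Thread(() -> {
            while (running) {
                // ... discover partitions, then sleep for the discovery interval ...
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "discovery-loop");
        discoveryLoopThread.start();
    }

    // Called by the Task's main thread. First wait for the discovery loop to
    // terminate, then close the discoverer; at that point nothing else uses it.
    void close() throws InterruptedException {
        running = false;
        if (discoveryLoopThread != null) {
            discoveryLoopThread.interrupt();
            discoveryLoopThread.join();      // wait on the loop's termination
        }
        partitionDiscoverer.close();         // safe: loop has terminated
    }

    public static void main(String[] args) throws InterruptedException {
        DiscovererLifecycleSketch consumer = new DiscovererLifecycleSketch();
        consumer.run();
        Thread.sleep(50);                    // let the loop spin a few times
        consumer.close();
        System.out.println("discoverer closed: " + consumer.partitionDiscoverer.isClosed());
    }
}
```

With this shape, the discoverer is closed exactly once, regardless of whether the fetcher creation or the fetch loop failed earlier, because `close()` is the single owner of its lifecycle.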
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services