tillrohrmann commented on a change in pull request #7020: [FLINK-10774] [Kafka]
connection leak when partition discovery is disabled an…
URL: https://github.com/apache/flink/pull/7020#discussion_r246074960
##########
File path:
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
##########
@@ -470,7 +470,20 @@ public void open(Configuration configuration) throws Exception {
 		subscribedPartitionsToStartOffsets = new HashMap<>();
-		List<KafkaTopicPartition> allPartitions = partitionDiscoverer.discoverPartitions();
+		List<KafkaTopicPartition> allPartitions;
+		try {
+			allPartitions = partitionDiscoverer.discoverPartitions();
+		} finally {
+			if (discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED) {
+				// when partition discovery is disabled,
+				// we should close partitionDiscoverer after the initial discovery.
+				// otherwise we may have a connection leak,
+				// if the open method throws an exception after partitionDiscoverer is constructed.
+				// In this case, the run method won't be executed
+				// and partitionDiscoverer.close() won't be called.
+				partitionDiscoverer.close();
Review comment:
We could do this by introducing a `boolean success = false` flag that is set to `true`
as the last statement in the `open` method. Then, in the following `finally`
block, we can check `if ((!success || discoveryIntervalMillis ==
PARTITION_DISCOVERY_DISABLED) && partitionDiscoverer != null) {
partitionDiscoverer.close(); }`.
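
The suggested success-flag pattern can be sketched as follows. This is a minimal illustration, not the actual `FlinkKafkaConsumerBase` code: the class, the `PartitionDiscoverer` interface, and the partition type are simplified stand-ins (the real discoverer methods also throw checked exceptions, which are omitted here for brevity).

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the Kafka partition discoverer used in open().
interface PartitionDiscoverer {
    List<String> discoverPartitions();
    void close();
}

// Hypothetical sketch of the reviewer's suggestion for open():
// close the discoverer both on failure and when periodic discovery is disabled.
class ConsumerOpenSketch {
    static final long PARTITION_DISCOVERY_DISABLED = Long.MIN_VALUE;

    private final long discoveryIntervalMillis;
    private final PartitionDiscoverer partitionDiscoverer;

    ConsumerOpenSketch(long discoveryIntervalMillis, PartitionDiscoverer discoverer) {
        this.discoveryIntervalMillis = discoveryIntervalMillis;
        this.partitionDiscoverer = discoverer;
    }

    public List<String> open() {
        boolean success = false;
        try {
            List<String> allPartitions = partitionDiscoverer.discoverPartitions();
            // ... normally the start offsets would be assigned here ...
            success = true; // last statement: open() completed without throwing
            return allPartitions;
        } finally {
            // Close the discoverer if open() failed for any reason, or if
            // periodic discovery is disabled and it is no longer needed.
            if ((!success || discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED)
                    && partitionDiscoverer != null) {
                partitionDiscoverer.close();
            }
        }
    }
}
```

With this shape, a discoverer constructed before `open()` is always closed when `open()` throws, and is also closed after the one-shot discovery when discovery is disabled, so neither path leaks the connection.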
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services