tzulitai commented on a change in pull request #7726: [FLINK-10342] Filter restored partitions with discovered partitions b…
URL: https://github.com/apache/flink/pull/7726#discussion_r260099148
 
 

 ##########
 File path: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java
 ##########
 @@ -494,6 +514,12 @@ public void open(Configuration configuration) throws Exception {
                                }
                        }
 
+                       if (filterRestoredPartitionsWithDiscovered) {
+                               subscribedPartitionsToStartOffsets.entrySet().removeIf(
+                                       entry -> (!allPartitions.contains(entry.getKey()))
 Review comment:
   @fengli 
   Well, if the topics themselves are invalid, then `allPartitions` wouldn't contain the correct partitions anyway.
   
   I still think that using `topicsDescriptor` is the safer way to do this.
   For example, let's say that due to some Kafka system hiccup, Kafka temporarily did not return some partition for a given topic. If we used `allPartitions` to filter the restored offset states, that partition's state would be dropped.
   I'm not sure this will ever happen, but again, it's something outside the connector code's control, and using such information to drop state is something I feel uneasy about.
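   To make the distinction concrete, here is a minimal, self-contained sketch of descriptor-based filtering. The `TopicPartition` record and `isMatchingTopic` helper below are simplified stand-ins I made up for illustration, not Flink's actual `KafkaTopicPartition` / `KafkaTopicsDescriptor` classes:

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.regex.Pattern;

   public class RestoredStateFilterSketch {

       // Simplified stand-in for Flink's KafkaTopicPartition (hypothetical).
       record TopicPartition(String topic, int partition) {}

       // Simplified stand-in for a topics descriptor that subscribes via a
       // topic pattern (hypothetical helper, not the real Flink API).
       static boolean isMatchingTopic(Pattern subscriptionPattern, String topic) {
           return subscriptionPattern.matcher(topic).matches();
       }

       public static void main(String[] args) {
           // Restored checkpoint state: partition -> start offset.
           Map<TopicPartition, Long> restored = new HashMap<>();
           restored.put(new TopicPartition("orders", 0), 42L);
           restored.put(new TopicPartition("legacy-topic", 0), 7L);

           Pattern subscribed = Pattern.compile("orders|payments");

           // Drop restored state only for partitions whose topic no longer
           // matches the descriptor. A transient gap in Kafka's returned
           // partition metadata cannot cause matching state to be dropped,
           // unlike filtering against a freshly discovered partition list.
           restored.entrySet().removeIf(
               e -> !isMatchingTopic(subscribed, e.getKey().topic()));

           System.out.println(restored.size());
           System.out.println(restored.containsKey(new TopicPartition("orders", 0)));
       }
   }
   ```

   The point of the sketch: the filter's input is the user-supplied subscription (stable across restarts), not a snapshot of what the broker happened to return at discovery time.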

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
