boyuanzz commented on a change in pull request #13710:
URL: https://github.com/apache/beam/pull/13710#discussion_r560394160
##########
File path:
sdks/java/io/kafka/src/main/java/org/apache/beam/sdk/io/kafka/ReadFromKafkaDoFn.java
##########
@@ -288,6 +316,19 @@ public ProcessContinuation processElement(
           Optional.ofNullable(watermarkEstimator.currentWatermark()));
     }
     try (Consumer<byte[], byte[]> consumer =
         consumerFactoryFn.apply(updatedConsumerConfig)) {
+      // Check whether current TopicPartition is still available to read.
+      Set<TopicPartition> existingTopicPartitions = new HashSet<>();
+      for (List<PartitionInfo> topicPartitionList : consumer.listTopics().values()) {
+        topicPartitionList.forEach(
+            partitionInfo -> {
+              existingTopicPartitions.add(
+                  new TopicPartition(partitionInfo.topic(), partitionInfo.partition()));
+            });
+      }
+      if (!existingTopicPartitions.contains(kafkaSourceDescriptor.getTopicPartition())) {
+        return ProcessContinuation.stop();
Review comment:
I'm assuming you are asking about the resumed `KafkaSourceDescriptor`. It
depends on the state of the partition when processing restarts: if the
`KafkaSourceDescriptor`'s TopicPartition is no longer available, the
`ReadFromKafkaDoFn` will return `stop()`. Otherwise the DoFn will keep
outputting records from this `KafkaSourceDescriptor` until the next checkpoint
happens.
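
For context, a minimal standalone sketch of the existence check discussed here; the class and method names are illustrative only and are not part of the PR:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

/** Illustrative helper (hypothetical, not part of ReadFromKafkaDoFn). */
final class TopicPartitionChecks {

  /**
   * Returns true when {@code partition} is still listed by the broker, so the
   * restriction can keep reading; false corresponds to the case where the DoFn
   * would return ProcessContinuation.stop().
   */
  static boolean stillExists(Consumer<byte[], byte[]> consumer, TopicPartition partition) {
    Set<TopicPartition> existing = new HashSet<>();
    // listTopics() returns Map<String, List<PartitionInfo>>; flatten it into
    // a set of TopicPartition for a simple membership check.
    for (List<PartitionInfo> infos : consumer.listTopics().values()) {
      infos.forEach(info -> existing.add(new TopicPartition(info.topic(), info.partition())));
    }
    return existing.contains(partition);
  }

  private TopicPartitionChecks() {}
}
```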