becketqin commented on a change in pull request #17991:
URL: https://github.com/apache/flink/pull/17991#discussion_r762673730



##########
File path: flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java
##########
@@ -147,33 +133,17 @@ public KafkaPartitionSplitReader(
                             recordsBySplits);
                     break;
                 }
-                // Add the record to the partition collector.
-                try {
-                    deserializationSchema.deserialize(consumerRecord, collector);
-                    collector
-                            .getRecords()
-                            .forEach(
-                                    r ->
-                                            recordsForSplit.add(
-                                                    new Tuple3<>(
-                                                            r,
-                                                            consumerRecord.offset(),
-                                                            consumerRecord.timestamp())));
-                    // Finish the split because there might not be any message after this point.
-                    // Keep polling will just block forever.
-                    if (consumerRecord.offset() == stoppingOffset - 1) {
-                        finishSplitAtRecord(
-                                tp,
-                                stoppingOffset,
-                                consumerRecord.offset(),
-                                finishedPartitions,
-                                recordsBySplits);
-                    }
-                } catch (Exception e) {
-                    throw new IOException("Failed to deserialize consumer record due to", e);
-                } finally {
-                    collector.reset();
+                recordsForSplit.add(consumerRecord);

Review comment:
       Given that the deserialization is no longer performed in the split fetcher, it seems that we don't have to iterate over all the records here anymore. Instead, `KafkaPartitionSplitRecords` can be changed to a wrapper around `ConsumerRecords`.
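       To illustrate the suggestion, here is a minimal, self-contained sketch of the wrapper pattern being proposed: instead of copying every record into per-split lists, the records container delegates iteration directly to the underlying batch. The `SimpleRecord`/`SimpleRecords` types below are hypothetical stand-ins for Kafka's `ConsumerRecord`/`ConsumerRecords`, and the `nextSplit()`/`nextRecordFromSplit()` methods mirror the shape of Flink's `RecordsWithSplitIds` interface; this is not the actual Flink or Kafka API.

```java
import java.util.*;

// Hypothetical stand-in for Kafka's ConsumerRecord (topic-partition + offset + value).
final class SimpleRecord {
    final long offset;
    final byte[] value;
    SimpleRecord(long offset, byte[] value) {
        this.offset = offset;
        this.value = value;
    }
}

// Hypothetical stand-in for Kafka's ConsumerRecords: records grouped by partition.
final class SimpleRecords {
    private final Map<String, List<SimpleRecord>> byPartition;
    SimpleRecords(Map<String, List<SimpleRecord>> byPartition) {
        this.byPartition = byPartition;
    }
    Set<String> partitions() { return byPartition.keySet(); }
    List<SimpleRecord> records(String tp) { return byPartition.get(tp); }
}

// The wrapper the comment suggests: iterate the underlying batch lazily,
// split by split, without building intermediate per-split collections.
final class SplitRecordsWrapper {
    private final SimpleRecords records;
    private final Iterator<String> splitIterator;
    private Iterator<SimpleRecord> recordIterator = Collections.emptyIterator();

    SplitRecordsWrapper(SimpleRecords records) {
        this.records = records;
        this.splitIterator = records.partitions().iterator();
    }

    // Advance to the next split; returns its id, or null when exhausted.
    String nextSplit() {
        if (splitIterator.hasNext()) {
            String tp = splitIterator.next();
            recordIterator = records.records(tp).iterator();
            return tp;
        }
        return null;
    }

    // Next record within the current split, or null when the split is drained.
    SimpleRecord nextRecordFromSplit() {
        return recordIterator.hasNext() ? recordIterator.next() : null;
    }
}

public class Main {
    public static void main(String[] args) {
        Map<String, List<SimpleRecord>> data = new LinkedHashMap<>();
        data.put("topic-0", Arrays.asList(
                new SimpleRecord(0L, new byte[0]),
                new SimpleRecord(1L, new byte[0])));
        data.put("topic-1", Collections.singletonList(new SimpleRecord(5L, new byte[0])));

        SplitRecordsWrapper wrapper = new SplitRecordsWrapper(new SimpleRecords(data));
        int count = 0;
        String split;
        while ((split = wrapper.nextSplit()) != null) {
            SimpleRecord r;
            while ((r = wrapper.nextRecordFromSplit()) != null) {
                System.out.println(split + "@" + r.offset);
                count++;
            }
        }
        System.out.println("total=" + count);
    }
}
```

       The design point is that the wrapper holds a single reference to the fetched batch and walks it on demand, so the per-record copy (and the `Tuple3` allocation removed in this diff) disappears entirely.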




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]