cshuo commented on code in PR #13759:
URL: https://github.com/apache/hudi/pull/13759#discussion_r2309013632


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/StreamWriteFunction.java:
##########
@@ -436,20 +439,22 @@ protected List<WriteStatus> writeRecords(
        rowItr, rowData -> recordConverter.convert(rowData, rowDataBucket.getBucketInfo()));
 
     List<WriteStatus> statuses = writeFunction.write(
-        deduplicateRecordsIfNeeded(recordItr), rowDataBucket.getBucketInfo(), instant);
+        deduplicateRecordsIfNeeded(recordItr, rowDataBucket.getBucketInfo().getBucketType()), rowDataBucket.getBucketInfo(), instant);
     writeMetrics.endFileFlush();
     writeMetrics.increaseNumOfFilesWritten();
     return statuses;
   }
 
-  protected Iterator<HoodieRecord> deduplicateRecordsIfNeeded(Iterator<HoodieRecord> records) {
-    if (config.get(FlinkOptions.PRE_COMBINE)) {
+  protected Iterator<HoodieRecord> deduplicateRecordsIfNeeded(Iterator<HoodieRecord> records, BucketType bucketType) {
+    // do not need deduplication if the merge handle supports deduplicating
+    if (!config.get(FlinkOptions.PRE_COMBINE) || OptionsResolver.isMergeHandleSupportDeduplication(config) && bucketType == BucketType.UPDATE) {

Review Comment:
   Should we at least fix it for COW writes? I'm not sure whether there is a regression with merging based on `BufferedRecordMerger`, since there is a conversion HoodieRecord -> BufferedRecord -> HoodieRecord during deduplication.
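   
   For illustration only, a minimal, self-contained sketch of how the skip could be narrowed so that COW writes keep the eager pre-combine deduplication and only MOR update buckets (where the merge handle deduplicates on its own) skip it. The class name and the boolean parameters are hypothetical stand-ins for the `FlinkOptions.PRE_COMBINE`, `OptionsResolver.isMergeHandleSupportDeduplication(config)`, `bucketType == BucketType.UPDATE`, and table-type checks; plain booleans are used so the snippet compiles on its own:
   
   ```java
   // Hypothetical sketch, not the actual Hudi code: decide whether eager
   // pre-combine deduplication can be skipped for a flush.
   final class DedupSkipDecision {
     static boolean canSkipDeduplication(
         boolean preCombineEnabled,       // FlinkOptions.PRE_COMBINE
         boolean mergeHandleDeduplicates, // OptionsResolver.isMergeHandleSupportDeduplication(config)
         boolean isUpdateBucket,          // bucketType == BucketType.UPDATE
         boolean isMorTable) {            // assumption: restrict the skip to MOR tables
       if (!preCombineEnabled) {
         return true; // pre-combine disabled, nothing to deduplicate eagerly
       }
       // Only skip when the merge handle itself deduplicates, i.e. MOR update buckets;
       // COW writes keep the eager path and avoid relying on the
       // HoodieRecord -> BufferedRecord -> HoodieRecord round trip.
       return mergeHandleDeduplicates && isUpdateBucket && isMorTable;
     }
   
     public static void main(String[] args) {
       System.out.println(canSkipDeduplication(true, true, true, false)); // false: COW still deduplicates
       System.out.println(canSkipDeduplication(true, true, true, true));  // true: MOR update bucket skips
     }
   }
   ```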


