danny0405 commented on code in PR #13591:
URL: https://github.com/apache/hudi/pull/13591#discussion_r2229862272


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/FileGroupReaderBasedAppendHandle.java:
##########
@@ -85,8 +85,10 @@ public void doAppend() {
     // Initializes the record iterator, log compaction requires writing the deletes into the delete block of the resulting log file.
     try (HoodieFileGroupReader<T> fileGroupReader = HoodieFileGroupReader.<T>newBuilder().withReaderContext(readerContext).withHoodieTableMetaClient(hoodieTable.getMetaClient())
         .withLatestCommitTime(instantTime).withPartitionPath(partitionPath).withLogFiles(logFiles).withBaseFileOption(Option.empty()).withDataSchema(writeSchemaWithMetaFields)
-        .withRequestedSchema(writeSchemaWithMetaFields).withEnableOptimizedLogBlockScan(true).withInternalSchema(internalSchemaOption).withProps(props).withEmitDelete(true)
-        .withShouldUseRecordPosition(usePosition).withSortOutput(hoodieTable.requireSortedRecords()).build()) {
+        .withRequestedSchema(writeSchemaWithMetaFields).withInternalSchema(internalSchemaOption).withProps(props).withEmitDelete(true)
+        .withShouldUseRecordPosition(usePosition).withSortOutput(hoodieTable.requireSortedRecords())
+        // instead of using config.enableOptimizedLogBlocksScan(), we set this to true since log compaction blocks are only supported in scanV2
+        .withEnableOptimizedLogBlockScan(true).build()) {

Review Comment:
   The optimized log block scan was mainly introduced for log compaction, which is where this handle is used.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
