yihua commented on code in PR #12390:
URL: https://github.com/apache/hudi/pull/12390#discussion_r1866850753


##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/HoodieBaseFileGroupRecordBuffer.java:
##########
@@ -495,14 +502,24 @@ protected boolean hasNextBaseRecord(T baseRecord, Pair<Option<T>, Map<String, Ob
     Map<String, Object> metadata = readerContext.generateMetadataForRecord(
         baseRecord, readerSchema);
 
-    Option<T> resultRecord = logRecordInfo != null
-        ? merge(Option.of(baseRecord), metadata, logRecordInfo.getLeft(), logRecordInfo.getRight())
-        : merge(Option.empty(), Collections.emptyMap(), Option.of(baseRecord), metadata);
-    if (resultRecord.isPresent()) {
-      nextRecord = readerContext.seal(resultRecord.get());
-      return true;
+    if (logRecordInfo != null) {
+      Option<T> resultRecord = merge(Option.of(baseRecord), metadata, logRecordInfo.getLeft(), logRecordInfo.getRight());
+      if (resultRecord.isPresent()) {
+        // Updates

Review Comment:
   @danny0405 this logic is in sync with the older compaction / merge handle
logic: log files are merged first through the log record scanner, then the
records from the base file and the merged records from the log scanner are
routed to the merge handle for writing and stats collection.  The new file
group reader follows the exact same semantics to avoid any behavior change.
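
   The merge precedence described above can be sketched as follows.  This is
an illustrative simplification, not the actual Hudi API: the class, field,
and method names below are hypothetical stand-ins for the log record scanner
and the file group reader's per-record merge step.

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Optional;

   // Sketch of the base-file / log-record merge precedence: log records are
   // pre-merged per record key (the "log record scanner" step), then each
   // base file record is either merged with its log counterpart (update) or
   // passed through unchanged (no log entry for that key).
   public class MergeSketch {
     // Result of the log-scanner pass: record key -> merged log record.
     static final Map<String, String> scannedLogRecords = new HashMap<>();

     static Optional<String> nextRecord(String key, String baseRecord) {
       String logRecord = scannedLogRecords.get(key);
       if (logRecord != null) {
         // Update path: the merged log record takes precedence over the
         // base file record for the same key.
         return Optional.of(logRecord);
       }
       // No log entry for this key: emit the base record as-is.
       return Optional.of(baseRecord);
     }

     public static void main(String[] args) {
       scannedLogRecords.put("k1", "v1-updated");
       System.out.println(nextRecord("k1", "v1-base").get());
       System.out.println(nextRecord("k2", "v2-base").get());
     }
   }
   ```

   The point of the ordering is that stats (and the written output) see each
key exactly once, which is why the new reader mirrors the older merge handle
flow rather than reordering it.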



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
