xiarixiaoyao commented on a change in pull request #3203:
URL: https://github.com/apache/hudi/pull/3203#discussion_r725740308
##########
File path:
hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/realtime/HoodieParquetRealtimeInputFormat.java
##########
@@ -66,6 +91,139 @@
return HoodieRealtimeInputFormatUtils.getRealtimeSplits(job, fileSplits);
}
+  /**
+   * Keep the logic of the MOR incremental view the same as the Spark datasource.
+   * TODO: unify the incremental view code between Hive/Spark SQL and the Spark datasource.
+   */
+  @Override
+  protected List<FileStatus> listStatusForIncrementalMode(
+      JobConf job, HoodieTableMetaClient tableMetaClient, List<Path> inputPaths) throws IOException {
+    List<FileStatus> result = new ArrayList<>();
+    String tableName = tableMetaClient.getTableConfig().getTableName();
+    Job jobContext = Job.getInstance(job);
+
+    Option<HoodieTimeline> timeline = HoodieInputFormatUtils.getFilteredCommitsTimeline(jobContext, tableMetaClient);
+    if (!timeline.isPresent()) {
+      return result;
+    }
+    String lastIncrementalTs = HoodieHiveUtils.readStartCommitTime(jobContext, tableName);
+    // Total number of commits to return in this batch. Set this to -1 to get all the commits.
+    Integer maxCommits = HoodieHiveUtils.readMaxCommits(jobContext, tableName);
+    HoodieTimeline commitsTimelineToReturn = timeline.get().findInstantsAfter(lastIncrementalTs, maxCommits);
Review comment:
_Can you help me understand: if a replace commit was made, where exactly will the filtering happen? Will findInstantsAfter take care of that, or is that code elsewhere?_
For this question: findInstantsAfter will not take care of replace commits, but fsView.getAllFileGroups(partitionPath).collect(Collectors.toList()) at line 131 will deal with replace commits.
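To illustrate the timeline-filtering semantics being discussed, here is a small stand-alone sketch (not Hudi code; the class, method, and string-timestamp representation are hypothetical simplifications): findInstantsAfter keeps only instants strictly after the consumer's last read timestamp, capped at maxCommits, with -1 meaning "no cap" — note it does nothing special for replace commits, which is why file-group resolution has to handle them separately.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of HoodieTimeline.findInstantsAfter(ts, maxCommits)
// semantics, using plain strings as instant timestamps.
public class IncrementalFilterSketch {

  static List<String> findInstantsAfter(List<String> instants, String lastTs, int maxCommits) {
    return instants.stream()
        .filter(ts -> ts.compareTo(lastTs) > 0)                // strictly after the checkpoint
        .limit(maxCommits < 0 ? Long.MAX_VALUE : maxCommits)   // -1 means return all commits
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> timeline = Arrays.asList("001", "002", "003", "004");
    // Consumer last saw commit 002 and wants at most 1 new commit.
    System.out.println(findInstantsAfter(timeline, "002", 1));   // [003]
    // With maxCommits = -1, all commits after 002 are returned.
    System.out.println(findInstantsAfter(timeline, "002", -1));  // [003, 004]
  }
}
```

Because this filter is purely timestamp-based, a replace commit passes through it like any other instant; only the later file-system-view step (getAllFileGroups) knows which file groups a replace commit superseded.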
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]