cshuo commented on code in PR #14186:
URL: https://github.com/apache/hudi/pull/14186#discussion_r2480476032
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableSource.java:
##########
@@ -679,12 +678,21 @@ public void reset() {
* Get the reader paths with partition path expanded.
*/
@VisibleForTesting
- public List<StoragePathInfo> getReadFiles() {
+ public List<FileSlice> getBaseFileOnlyFileSlices() {
List<String> relPartitionPaths = getReadPartitions();
if (relPartitionPaths.isEmpty()) {
return Collections.emptyList();
}
- return fileIndex.getFilesInPartitions();
+ List<StoragePathInfo> pathInfoList = fileIndex.getFilesInPartitions();
+ try (HoodieTableFileSystemView fsView = new HoodieTableFileSystemView(metaClient,
+     metaClient.getCommitsAndCompactionTimeline().filterCompletedInstants(), pathInfoList)) {
Review Comment:
Here we only fetch base files, which are used for read_optimized mode and COW
snapshot reads. Do we need to include pending compaction instants, or should we
use `metaClient.getCommitsTimeline().filterCompletedInstants()` instead?
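
To illustrate the question: once the timeline is filtered to completed instants, a pending compaction instant drops out either way, which is why the narrower `getCommitsTimeline()` may suffice here. The sketch below is a hypothetical, self-contained simplification of the two timeline choices; the names `Instant`, `commitsTimeline`, `commitsAndCompactionTimeline`, and `filterCompletedInstants` are illustrative analogues, not the real `HoodieTimeline` API.

```java
import java.util.List;
import java.util.stream.Collectors;

public class TimelineSketch {
  // Hypothetical, simplified model of a Hudi instant: an action type
  // plus a completed/pending flag.
  record Instant(String action, boolean completed) {}

  // Analogue of metaClient.getCommitsTimeline(): commit + deltacommit actions.
  static List<Instant> commitsTimeline(List<Instant> all) {
    return all.stream()
        .filter(i -> i.action().equals("commit") || i.action().equals("deltacommit"))
        .collect(Collectors.toList());
  }

  // Analogue of getCommitsAndCompactionTimeline(): additionally keeps
  // compaction instants, so pending compactions stay visible to the view.
  static List<Instant> commitsAndCompactionTimeline(List<Instant> all) {
    return all.stream()
        .filter(i -> i.action().equals("commit")
            || i.action().equals("deltacommit")
            || i.action().equals("compaction"))
        .collect(Collectors.toList());
  }

  static List<Instant> filterCompletedInstants(List<Instant> timeline) {
    return timeline.stream().filter(Instant::completed).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Instant> all = List.of(
        new Instant("commit", true),
        new Instant("deltacommit", true),
        new Instant("compaction", false)); // pending compaction

    // After filtering to completed instants, the pending compaction is
    // excluded from both timelines, so for base-file-only reads the two
    // choices converge in this simplified model.
    System.out.println(filterCompletedInstants(commitsTimeline(all)).size());
    System.out.println(filterCompletedInstants(commitsAndCompactionTimeline(all)).size());
  }
}
```

Both prints yield 2 in this toy example: the pending compaction instant is removed by the completed-instants filter regardless of which timeline it started in.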
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]