yihua commented on code in PR #11761:
URL: https://github.com/apache/hudi/pull/11761#discussion_r1720350785


##########
hudi-hadoop-common/src/main/java/org/apache/hudi/hadoop/fs/HadoopFSUtils.java:
##########
@@ -171,6 +171,17 @@ public static StoragePathInfo convertToStoragePathInfo(FileStatus fileStatus) {
         fileStatus.getModificationTime());
   }
 
+  public static StoragePathInfo convertToStoragePathInfo(FileStatus fileStatus, String[] locations) {

Review Comment:
   `LocatedFileStatus` (which extends `FileStatus`) stores `BlockLocation[] locations`. As a follow-up, see if we want to leverage that.
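
A rough sketch of what that follow-up could look like. The classes below are minimal stand-ins for Hadoop's `org.apache.hadoop.fs.FileStatus`, `LocatedFileStatus`, and `BlockLocation` (shaped roughly like the real ones, which additionally declare `throws IOException` on `getHosts()`), and the `extractHosts` helper is hypothetical, not existing Hudi code:

```java
import java.util.Arrays;

// Stand-in for org.apache.hadoop.fs.FileStatus: no locality info.
class FileStatus {}

// Stand-in for org.apache.hadoop.fs.BlockLocation: carries host names.
class BlockLocation {
  private final String[] hosts;
  BlockLocation(String... hosts) { this.hosts = hosts; }
  String[] getHosts() { return hosts; }
}

// Stand-in for org.apache.hadoop.fs.LocatedFileStatus: a FileStatus that
// already carries its block locations, so no extra RPC is needed.
class LocatedFileStatus extends FileStatus {
  private final BlockLocation[] locations;
  LocatedFileStatus(BlockLocation... locations) { this.locations = locations; }
  BlockLocation[] getBlockLocations() { return locations; }
}

public class LocationSketch {
  // Collect all hosts from the block locations the status already carries;
  // a plain FileStatus has none, so fall back to an empty array.
  static String[] extractHosts(FileStatus status) {
    if (status instanceof LocatedFileStatus) {
      return Arrays.stream(((LocatedFileStatus) status).getBlockLocations())
          .flatMap(loc -> Arrays.stream(loc.getHosts()))
          .toArray(String[]::new);
    }
    return new String[0];
  }

  public static void main(String[] args) {
    FileStatus located = new LocatedFileStatus(new BlockLocation("h1", "h2"));
    System.out.println(Arrays.toString(extractHosts(located))); // [h1, h2]
    System.out.println(extractHosts(new FileStatus()).length);  // 0
  }
}
```

With something like this, a caller holding a `LocatedFileStatus` would not need to pass a separate `String[] locations` argument at all.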



##########
hudi-common/src/main/java/org/apache/hudi/common/engine/HoodieReaderContext.java:
##########
@@ -165,10 +167,50 @@ public void setShouldMergeUseRecordPosition(boolean shouldMergeUseRecordPosition
    * @param storage        {@link HoodieStorage} for reading records.
   * @return {@link ClosableIterator<T>} that can return all records through iteration.
    */
-  public abstract ClosableIterator<T> getFileRecordIterator(
+  protected abstract ClosableIterator<T> getFileRecordIterator(
      StoragePath filePath, long start, long length, Schema dataSchema, Schema requiredSchema,
       HoodieStorage storage) throws IOException;
 
+  /**
+   * Gets the record iterator based on the type of engine-specific record representation from the
+   * file.
+   *
+   * @param file           {@link StorageFile} instance of a file.
+   * @param start          Starting byte to start reading.
+   * @param length         Bytes to read.
+   * @param dataSchema     Schema of records in the file in {@link Schema}.
+   * @param requiredSchema Schema containing required fields to read in {@link Schema} for projection.
+   * @param storage        {@link HoodieStorage} for reading records.
+   * @return {@link ClosableIterator<T>} that can return all records through iteration.
+   */
+  public final ClosableIterator<T> getFileRecordIterator(
+      StorageFile file, long start, long length, Schema dataSchema, Schema requiredSchema,
+      HoodieStorage storage) throws IOException {
+    if (file.getPathInfo() != null) {
+      return getFileRecordIterator(file.getPathInfo(), start, length, dataSchema, requiredSchema, storage);
+    } else {
+      return getFileRecordIterator(file.getStoragePath(), start, length, dataSchema, requiredSchema, storage);
+    }
+  }
+
+  /**
+   * Gets the record iterator based on the type of engine-specific record representation from the
+   * file.
+   *
+   * @param storagePathInfo {@link StoragePathInfo} instance of a file.
+   * @param start           Starting byte to start reading.
+   * @param length          Bytes to read.
+   * @param dataSchema      Schema of records in the file in {@link Schema}.
+   * @param requiredSchema  Schema containing required fields to read in {@link Schema} for projection.
+   * @param storage         {@link HoodieStorage} for reading records.
+   * @return {@link ClosableIterator<T>} that can return all records through iteration.
+   */
+  protected ClosableIterator<T> getFileRecordIterator(
+      StoragePathInfo storagePathInfo, long start, long length, Schema dataSchema, Schema requiredSchema,
+      HoodieStorage storage) throws IOException {
+    return getFileRecordIterator(storagePathInfo.getPath(), start, length, dataSchema, requiredSchema, storage);

Review Comment:
   Since this is overridden by `HiveHoodieReaderContext` and has a default implementation, I think it's OK to keep it for now.
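
For readers following the thread by email, the shape under discussion is roughly the template-method pattern sketched below: a final public entry point dispatches to protected overloads, the `StoragePathInfo` overload has a delegating default, and a Hive-like context overrides only that overload. The type names, `String` return type, and two-argument entry point are simplified stand-ins, not the actual Hudi signatures:

```java
// Stand-in for a storage path: just wraps a URI string.
class StoragePath {
  final String uri;
  StoragePath(String uri) { this.uri = uri; }
}

// Stand-in for StoragePathInfo: path plus extra metadata (e.g. locality).
class StoragePathInfo {
  private final StoragePath path;
  StoragePathInfo(StoragePath path) { this.path = path; }
  StoragePath getPath() { return path; }
}

abstract class ReaderContext {
  // Engine-specific: every context must implement the path-based variant.
  protected abstract String getFileRecordIterator(StoragePath path);

  // Default: drop the extra metadata and delegate to the path variant.
  protected String getFileRecordIterator(StoragePathInfo info) {
    return getFileRecordIterator(info.getPath());
  }

  // Single public entry point: prefers the richer overload when info exists.
  public final String getFileRecordIterator(StoragePathInfo info, StoragePath path) {
    return info != null ? getFileRecordIterator(info) : getFileRecordIterator(path);
  }
}

class DefaultContext extends ReaderContext {
  @Override
  protected String getFileRecordIterator(StoragePath path) {
    return "path:" + path.uri;
  }
}

// Overrides only the StoragePathInfo variant, the way HiveHoodieReaderContext
// overrides its counterpart, e.g. to exploit already-fetched metadata.
class HiveLikeContext extends DefaultContext {
  @Override
  protected String getFileRecordIterator(StoragePathInfo info) {
    return "info:" + info.getPath().uri;
  }
}

public class OverloadDemo {
  public static void main(String[] args) {
    ReaderContext hive = new HiveLikeContext();
    System.out.println(hive.getFileRecordIterator(new StoragePathInfo(new StoragePath("f1")), null)); // info:f1
    System.out.println(hive.getFileRecordIterator(null, new StoragePath("f2"))); // path:f2
  }
}
```

Because callers go through the final entry point, subclasses cannot accidentally bypass the `StoragePathInfo` fast path by overriding the public method.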



##########
hudi-common/src/main/java/org/apache/hudi/common/table/read/HoodieFileGroupReader.java:
##########
@@ -145,10 +146,19 @@ private ClosableIterator<T> makeBaseFileIterator() throws IOException {
       return makeBootstrapBaseFileIterator(baseFile);
     }
 
-    return readerContext.getFileRecordIterator(
-        baseFile.getStoragePath(), start, length,
-        readerContext.getSchemaHandler().getDataSchema(),
-        readerContext.getSchemaHandler().getRequiredSchema(), storage);
+    StoragePathInfo baseFileStoragePathInfo = baseFile.getPathInfo();
+    if (baseFileStoragePathInfo != null) {
+      return readerContext.getFileRecordIterator(
+          baseFileStoragePathInfo, start, length,
+          readerContext.getSchemaHandler().getDataSchema(),
+          readerContext.getSchemaHandler().getRequiredSchema(), storage);
+    } else {
+      return readerContext.getFileRecordIterator(
+          baseFile.getStoragePath(), start, length,
+          readerContext.getSchemaHandler().getDataSchema(),
+          readerContext.getSchemaHandler().getRequiredSchema(), storage);
+    }
+

Review Comment:
   nit: remove the extra trailing blank line:
   ```suggestion
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
