linliu-code commented on code in PR #14085:
URL: https://github.com/apache/hudi/pull/14085#discussion_r2436982294
##########
hudi-common/src/main/java/org/apache/hudi/common/table/log/block/HoodieHFileDataBlock.java:
##########
@@ -173,8 +173,30 @@ protected <T> ClosableIterator<HoodieRecord<T>> lookupRecords(List<String> sorte
     // Get writer's schema from the header
     final ClosableIterator<HoodieRecord<IndexedRecord>> recordIterator =
         fullKey ? reader.getRecordsByKeysIterator(sortedKeys, readerSchema)
                 : reader.getRecordsByKeyPrefixIterator(sortedKeys, readerSchema);
-
     return new CloseableMappingIterator<>(recordIterator, data -> (HoodieRecord<T>) data);
   }
 }
+
+  @Override
+  protected <T> ClosableIterator<T> lookupEngineRecords(List<String> sortedKeys, boolean fullKey) throws IOException {
+    HoodieLogBlockContentLocation blockContentLoc = getBlockContentLocation().get();
+
+    // NOTE: It's important to extend Hadoop configuration here to make sure configuration
+    // is appropriately carried over
+    StorageConfiguration<?> inlineConf = getBlockContentLocation().get().getStorage().getConf().getInline();
+    StoragePath inlinePath = InLineFSUtils.getInlineFilePath(
Review Comment:
To be precise, `blockContentLocation` is populated here, so the NPE is gone.
The real issue is that initializing the reader with `StoragePathInfo` throws an
EOF error, which leads into HFile reader internals where I stopped the
investigation. We can revisit it later.
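
For context on what the new code path is doing: an inline path embeds an outer file's location plus a byte range so a reader can treat a slice of a log file as a standalone file. Below is a toy sketch of that idea; the URI layout, class name, and helper here are hypothetical illustrations, not Hudi's actual `InLineFSUtils` format.

```java
// Toy sketch of an "inline" path: encode the outer file's scheme and
// path together with a byte range (offset + length) in a single URI,
// so a reader can open just that slice of the outer file.
// NOTE: the "inlinefs://" layout below is a made-up illustration;
// Hudi's real InLineFSUtils.getInlineFilePath format may differ.
public class InlinePathSketch {

  // Build a pseudo inline path from the outer file's scheme, path,
  // and the byte range of the embedded block.
  static String inlinePath(String outerScheme, String outerPath,
                           long offset, long length) {
    return "inlinefs://" + outerPath + "/" + outerScheme
        + "?start_offset=" + offset + "&length=" + length;
  }

  public static void main(String[] args) {
    // e.g. a 4 KiB block starting at byte 128 of /tmp/log.1
    System.out.println(inlinePath("file", "/tmp/log.1", 128, 4096));
  }
}
```

The point of carrying the storage configuration over to the inline filesystem (as the diff's NOTE says) is that the reader opening this derived path still needs the same Hadoop settings as the outer file's filesystem.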
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]