the-other-tim-brown commented on code in PR #13591:
URL: https://github.com/apache/hudi/pull/13591#discussion_r2232258232
##########
hudi-hadoop-common/src/test/java/org/apache/hudi/common/testutils/reader/HoodieFileGroupReaderTestHarness.java:
##########
@@ -137,6 +138,7 @@ protected ClosableIterator<IndexedRecord> getFileGroupIterator(int numFiles, boo
     properties.setProperty(HoodieMemoryConfig.SPILLABLE_MAP_BASE_PATH.key(), basePath + "/" + HoodieTableMetaClient.TEMPFOLDER_NAME);
     properties.setProperty(HoodieCommonConfig.SPILLABLE_DISK_MAP_TYPE.key(), ExternalSpillableMap.DiskMapType.ROCKS_DB.name());
     properties.setProperty(HoodieCommonConfig.DISK_MAP_BITCASK_COMPRESSION_ENABLED.key(), "false");
+    properties.setProperty(HoodieReaderConfig.ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.key(), HoodieReaderConfig.ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.defaultValue());
Review Comment:
Similarly here, is this required?
##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -420,7 +421,7 @@ public static Map<String, HoodieData<HoodieRecord>> convertMetadataToRecords(Hoo
       String instantTime, HoodieTableMetaClient dataMetaClient, HoodieTableMetadata tableMetadata,
       HoodieMetadataConfig metadataConfig, Set<String> enabledPartitionTypes, String bloomFilterType,
       int bloomIndexParallelism, int writesFileIdEncoding, EngineType engineType,
-      Option<HoodieRecordType> recordTypeOpt) {
+      Option<HoodieRecordType> recordTypeOpt, boolean enableOptimizeLogBlocksScan) {
Review Comment:
Can you update the javadoc to clarify whether this relates to the data table or the metadata table? It is the data table, correct?
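Something along these lines would help (a sketch only — the wording assumes the answer is indeed the data table, and the tag text is not from this PR):

```java
/**
 * ...
 * @param enableOptimizeLogBlocksScan whether to use the optimized log-blocks
 *        scan when reading log files from the data table (does not affect
 *        how the metadata table itself is read)
 */
```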
##########
hudi-cli/src/main/java/org/apache/hudi/cli/commands/HoodieLogFileCommand.java:
##########
@@ -311,6 +313,10 @@ private TypedProperties buildFileGroupReaderProperties() {
     props.setProperty(
         DISK_MAP_BITCASK_COMPRESSION_ENABLED.key(),
         Boolean.toString(DISK_MAP_BITCASK_COMPRESSION_ENABLED.defaultValue()));
+    props.setProperty(
+        ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.key(),
+        ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.defaultValue());
Review Comment:
Is there any reason to set this to the default now that you have default handling?
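To illustrate the point (the key and default below are placeholders standing in for `ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN`, not the real Hudi values):

```java
import java.util.Properties;

public class DefaultFallbackSketch {
  // Placeholder key/default -- assumptions for illustration, not Hudi's actual config.
  static final String KEY = "example.optimized.log.blocks.scan.enable";
  static final String DEFAULT_VALUE = "false";

  public static void main(String[] args) {
    Properties props = new Properties();
    // The key is deliberately left unset: with default handling at the read
    // site, the resolved value is already the default, so an explicit
    // props.setProperty(KEY, DEFAULT_VALUE) beforehand adds nothing.
    String resolved = props.getProperty(KEY, DEFAULT_VALUE);
    System.out.println(resolved); // prints the default, "false"
  }
}
```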
##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -959,15 +963,16 @@ public static <T> Pair<Set<String>, Set<String>> getRevivedAndDeletedKeysFromMer
       Option<Schema> finalWriterSchemaOpt,
       List<String> currentLogFilePaths,
       String partitionPath,
-      HoodieReaderContext<T> readerContext) {
+      HoodieReaderContext<T> readerContext,
+      boolean enableOptimizedLogBlocksScan) {
Review Comment:
Similarly, please update the javadoc here.
##########
hudi-common/src/test/java/org/apache/hudi/common/table/read/TestHoodieFileGroupReaderBase.java:
##########
@@ -781,6 +781,7 @@ private TypedProperties buildProperties(HoodieTableMetaClient metaClient, Record
     if (metaClient.getTableConfig().contains(PARTITION_FIELDS)) {
       props.setProperty(PARTITION_FIELDS.key(), metaClient.getTableConfig().getString(PARTITION_FIELDS));
     }
+    props.setProperty(HoodieReaderConfig.ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.key(), HoodieReaderConfig.ENABLE_OPTIMIZED_LOG_BLOCKS_SCAN.defaultValue());
Review Comment:
This should not be required anymore right?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]