codope commented on a change in pull request #3970:
URL: https://github.com/apache/hudi/pull/3970#discussion_r748388141
##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java
##########
@@ -205,12 +207,26 @@ public MultipleSparkJobExecutionStrategy(HoodieTable table, HoodieEngineContext
           .withSpillableMapBasePath(config.getSpillableMapBasePath())
           .build();
-      HoodieTableConfig tableConfig = table.getMetaClient().getTableConfig();
-      recordIterators.add(getFileSliceReader(baseFileReader, scanner, readerSchema,
-          tableConfig.getPayloadClass(),
-          tableConfig.getPreCombineField(),
-          tableConfig.populateMetaFields() ? Option.empty() : Option.of(Pair.of(tableConfig.getRecordKeyFieldProp(),
-              tableConfig.getPartitionFieldProp()))));
+      if (!StringUtils.isNullOrEmpty(clusteringOp.getDataFilePath())) {
+        HoodieFileReader<? extends IndexedRecord> baseFileReader = HoodieFileReaderFactory.getFileReader(table.getHadoopConf(), new Path(clusteringOp.getDataFilePath()));
Review comment:
@zhangyue19921010 Thanks for your review.
Yes, it is possible that the log files had not yet been compacted when the clustering plan was generated. That is why we use both the baseFileReader and the MergedLogRecordScanner.
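To make that control flow concrete, here is a minimal sketch of the branch this change introduces. It is assembled only from the identifiers visible in the hunk above; the surrounding method context is simplified, and the `else` branch is an assumption based on this discussion (its body is truncated in the hunk), so this is not the PR's exact code:

```java
// Assumed imports (package paths as in the Hudi codebase around this PR):
// org.apache.hadoop.fs.Path, org.apache.avro.generic.IndexedRecord,
// org.apache.hudi.common.table.HoodieTableConfig,
// org.apache.hudi.common.util.{Option, StringUtils},
// org.apache.hudi.common.util.collection.Pair,
// org.apache.hudi.io.storage.{HoodieFileReader, HoodieFileReaderFactory}

if (!StringUtils.isNullOrEmpty(clusteringOp.getDataFilePath())) {
  // The file slice has a base file: open it, then merge in records from the
  // (possibly still uncompacted) log files via the merged log record scanner.
  HoodieFileReader<? extends IndexedRecord> baseFileReader =
      HoodieFileReaderFactory.getFileReader(table.getHadoopConf(),
          new Path(clusteringOp.getDataFilePath()));
  HoodieTableConfig tableConfig = table.getMetaClient().getTableConfig();
  recordIterators.add(getFileSliceReader(baseFileReader, scanner, readerSchema,
      tableConfig.getPayloadClass(),
      tableConfig.getPreCombineField(),
      tableConfig.populateMetaFields()
          ? Option.empty()
          : Option.of(Pair.of(tableConfig.getRecordKeyFieldProp(),
              tableConfig.getPartitionFieldProp()))));
} else {
  // Log-only file slice: the clustering plan was generated before the log
  // files were compacted, so there is no base file yet and records must come
  // from the merged log record scanner alone. (Hypothetical placeholder; the
  // actual body of this branch is not shown in the hunk above.)
}
```

Reading through the scanner instead of the base file alone is what guarantees that updates still sitting in uncompacted log files are reflected in the clustered output.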