nsivabalan commented on code in PR #12390:
URL: https://github.com/apache/hudi/pull/12390#discussion_r1865133354
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/compact/HoodieCompactor.java:
##########
@@ -161,66 +162,70 @@ public List<WriteStatus> compact(HoodieCompactionHandler
compactionHandler,
Option<InstantRange> instantRange,
TaskContextSupplier taskContextSupplier,
CompactionExecutionHelper executionHelper)
throws IOException {
- HoodieStorage storage = metaClient.getStorage();
- Schema readerSchema;
- Option<InternalSchema> internalSchemaOption = Option.empty();
- if (!StringUtils.isNullOrEmpty(config.getInternalSchema())) {
- readerSchema = new Schema.Parser().parse(config.getSchema());
- internalSchemaOption = SerDeHelper.fromJson(config.getInternalSchema());
- // its safe to modify config here, since we are running in task side.
- ((HoodieTable) compactionHandler).getConfig().setDefault(config);
+    if (config.getBooleanOrDefault(HoodieReaderConfig.FILE_GROUP_READER_ENABLED)
+        && compactionHandler.supportsFileGroupReader()) {
Review Comment:
Since we are writing this at the last minute, I would like to keep the new
compaction path only for the partial-update flow. Concretely, that could mean
the following: if the new FG reader is enabled, do one pass over the log block
headers to check whether there are any partial-update log blocks. If yes, we
go with the new way of compacting; if not, we fall back to the old way of
compacting.
We can do more testing and flip it on by default in 1.0.1 maybe, but it feels
risky to do it for 1.0.
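The dispatch the comment describes (one header-only scan, then choose the compaction path) could be sketched roughly as below. This is a minimal illustration, not actual Hudi code: the `BlockType` enum and `useFileGroupReader` helper are made-up stand-ins for Hudi's real log-block header types and compactor wiring.

```java
import java.util.List;

public class CompactionPathChooser {

  // Illustrative stand-in for Hudi's log block types; the real codebase
  // carries this information in log block headers.
  enum BlockType { DATA, DELETE, PARTIAL_UPDATE }

  // Hypothetical helper: use the new FG-reader-based compaction only when
  // the feature flag is on AND at least one partial-update block exists.
  // Otherwise fall back to the legacy compaction path.
  static boolean useFileGroupReader(boolean fgReaderEnabled,
                                    List<BlockType> logBlockHeaders) {
    if (!fgReaderEnabled) {
      return false;
    }
    // Single pass over headers only -- no record deserialization needed.
    return logBlockHeaders.stream()
        .anyMatch(t -> t == BlockType.PARTIAL_UPDATE);
  }

  public static void main(String[] args) {
    // Partial-update block present: take the new path.
    System.out.println(useFileGroupReader(true,
        List.of(BlockType.DATA, BlockType.PARTIAL_UPDATE)));
    // No partial-update blocks: fall back to the old compactor.
    System.out.println(useFileGroupReader(true,
        List.of(BlockType.DATA, BlockType.DELETE)));
    // Flag off: always the old path, regardless of block contents.
    System.out.println(useFileGroupReader(false,
        List.of(BlockType.PARTIAL_UPDATE)));
  }
}
```

This keeps the risky new reader behind both the existing config flag and a cheap runtime check, so ordinary (non-partial-update) tables keep the battle-tested code path in 1.0.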
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]