bvaradar commented on a change in pull request #4910:
URL: https://github.com/apache/hudi/pull/4910#discussion_r828026842
##########
File path:
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/SparkMergeHelper.java
##########
@@ -77,14 +89,39 @@ public void runMerge(HoodieTable<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>
       readSchema = mergeHandle.getWriterSchemaWithMetaFields();
     }
+    Option<InternalSchema> querySchemaOpt = SerDeHelper.fromJson(table.getConfig().getInternalSchema());
+    Boolean needToReWriteRecord = false;
+    // TODO: support bootstrap
Review comment:
Can you elaborate on this TODO?
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/compact/RunCompactionActionExecutor.java
##########
@@ -70,6 +73,14 @@ public RunCompactionActionExecutor(HoodieEngineContext context,
     HoodieCompactionPlan compactionPlan = CompactionUtils.getCompactionPlan(table.getMetaClient(), instantTime);
+    // try to load the internalSchema to support schema evolution
+    Pair<Option<String>, Option<String>> schemaPair = TableInternalSchemaUtils
+        .getInternalSchemaAndAvroSchemaForClusteringAndCompaction(table.getMetaClient(), instantTime);
+    if (schemaPair.getLeft().isPresent() && schemaPair.getRight().isPresent()) {
+      config.setInternalSchemaString(schemaPair.getLeft().get());
Review comment:
Instead of setting this directly on the existing write config, can we clone the writeConfig and pass the clone along instead? That would avoid worrying about side effects on future runs from the same client.