danny0405 commented on code in PR #11069:
URL: https://github.com/apache/hudi/pull/11069#discussion_r1575931052
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -361,7 +369,15 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroupBaseFiles(JavaSparkContext
     List<Iterator<HoodieRecord<T>>> iteratorsForPartition = new ArrayList<>();
     clusteringOpsPartition.forEachRemaining(clusteringOp -> {
       try {
-        Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
+        TableSchemaResolver schemaUtil = new TableSchemaResolver(getHoodieTable().getMetaClient());
+        Schema readerSchema;
+        try {
+          readerSchema = schemaUtil.getTableAvroSchema(true);
+        } catch (Exception e) {
+          LOG.warn(e.getMessage());
+          readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
+        }
Review Comment:
I think the fix in this path looks fine. Can we just abstract the schema
fetching logic into a separate method, maybe called `getLatestTableSchema`?
I mean, move the try-catch and fallback clause into it.
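
Roughly something like the sketch below (the method name follows the suggestion above; `writeConfig`, `LOG`, and `getHoodieTable()` are the existing members already used in the diff, so this assumes the helper lives in `MultipleSparkJobExecutionStrategy`):

```java
// Sketch only: resolve the latest table schema from the meta client,
// falling back to the write config schema if resolution fails.
private Schema getLatestTableSchema() {
  try {
    TableSchemaResolver schemaUtil = new TableSchemaResolver(getHoodieTable().getMetaClient());
    // true => include Hudi metadata fields in the resolved schema
    return schemaUtil.getTableAvroSchema(true);
  } catch (Exception e) {
    LOG.warn(e.getMessage());
    return HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
  }
}
```

The call site then collapses to `Schema readerSchema = getLatestTableSchema();`, and the same helper can be reused by the other read paths touched in this PR.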