danny0405 commented on code in PR #11069:
URL: https://github.com/apache/hudi/pull/11069#discussion_r1575656404


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/clustering/run/strategy/MultipleSparkJobExecutionStrategy.java:
##########
@@ -361,7 +369,15 @@ private HoodieData<HoodieRecord<T>> readRecordsForGroupBaseFiles(JavaSparkContex
          List<Iterator<HoodieRecord<T>>> iteratorsForPartition = new ArrayList<>();
          clusteringOpsPartition.forEachRemaining(clusteringOp -> {
            try {
-              Schema readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
+              TableSchemaResolver schemaUtil = new TableSchemaResolver(getHoodieTable().getMetaClient());
+              Schema readerSchema;
+              try {
+                readerSchema = schemaUtil.getTableAvroSchema(true);
+              } catch (Exception e) {
+                LOG.warn(e.getMessage());
+                readerSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
+              }

Review Comment:
   We actually have a similar fix for compaction, in file `HoodieCompactor.java`:
   
   ```java
       // Here we firstly use the table schema as the reader schema to read
       // the log file. That is because in the case of MergeInto, the
       // config.getSchema may not be the same as the table schema.
       try {
         if (StringUtils.isNullOrEmpty(config.getInternalSchema())) {
           Schema readerSchema = schemaResolver.getTableAvroSchema(false);
           config.setSchema(readerSchema.toString());
         }
       } catch (Exception e) {
         // If there is no commit in the table, just ignore the exception.
       }
   ```
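
   A shared helper could unify this fallback pattern for clustering and compaction. The sketch below is only illustrative (the helper name `resolveReaderSchema` and its placement are hypothetical, not part of this PR); it reuses the same `TableSchemaResolver` and `HoodieAvroUtils` calls shown above:

   ```java
   // Hypothetical helper sketch: prefer the table schema resolved from the
   // meta client, falling back to the write config schema (e.g. when the
   // table has no commits yet and schema resolution fails).
   private Schema resolveReaderSchema(HoodieWriteConfig writeConfig, HoodieTableMetaClient metaClient) {
     try {
       // true => include Hudi metadata fields in the resolved schema
       return new TableSchemaResolver(metaClient).getTableAvroSchema(true);
     } catch (Exception e) {
       LOG.warn("Falling back to write config schema: " + e.getMessage());
       return HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(writeConfig.getSchema()));
     }
   }
   ```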


