the-other-tim-brown commented on code in PR #17772:
URL: https://github.com/apache/hudi/pull/17772#discussion_r2659014692


##########
hudi-client/hudi-flink-client/src/main/java/org/apache/hudi/client/model/HoodieFlinkRecord.java:
##########
@@ -205,13 +206,13 @@ private Object getColumnValue(Schema recordSchema, String column, Properties pro
   }
 
   @Override
-  public HoodieRecord joinWith(HoodieRecord other, Schema targetSchema) {
+  public HoodieRecord joinWith(HoodieRecord other, HoodieSchema targetSchema) {
     throw new UnsupportedOperationException("Not supported for " + this.getClass().getSimpleName());
   }
 
   @Override
-  public HoodieRecord prependMetaFields(Schema recordSchema, Schema targetSchema, MetadataValues metadataValues, Properties props) {
-    int metaFieldSize = targetSchema.getFields().size() - recordSchema.getFields().size();
+  public HoodieRecord prependMetaFields(HoodieSchema recordSchema, HoodieSchema targetSchema, MetadataValues metadataValues, Properties props) {
+    int metaFieldSize = targetSchema.getAvroSchema().getFields().size() - recordSchema.getAvroSchema().getFields().size();

Review Comment:
   There is no need to convert the schema to its Avro representation just to read the field counts; let's remove the conversion.
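
   A rough sketch of the suggested simplification. It assumes `HoodieSchema` exposes a `getFields()` accessor mirroring the Avro one; that accessor name is an assumption, not confirmed by this diff:

   ```java
   // Hedged sketch: HoodieSchema#getFields() is assumed to exist and mirror
   // org.apache.avro.Schema#getFields(), avoiding the Avro round-trip.
   int metaFieldSize = targetSchema.getFields().size() - recordSchema.getFields().size();
   ```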



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/commit/HoodieMergeHelper.java:
##########
@@ -200,15 +200,15 @@ private Option<Function<HoodieRecord, HoodieRecord>> composeSchemaEvolutionTrans
           .collect(Collectors.toList());
       InternalSchema mergedSchema = new InternalSchemaMerger(writeInternalSchema, querySchema,
           true, false, false).mergeSchema();
-      Schema newWriterSchema = InternalSchemaConverter.convert(mergedSchema, writerSchema.getFullName()).getAvroSchema();
+      HoodieSchema newWriterSchema = InternalSchemaConverter.convert(mergedSchema, writerSchema.getFullName());
       Schema writeSchemaFromFile = InternalSchemaConverter.convert(writeInternalSchema, newWriterSchema.getFullName()).getAvroSchema();
-      boolean needToReWriteRecord = sameCols.size() != colNamesFromWriteSchema.size()
-          || SchemaCompatibility.checkReaderWriterCompatibility(newWriterSchema, writeSchemaFromFile).getType() == org.apache.avro.SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE;
+      boolean needToReWriteRecord = sameCols.size() != colNamesFromWriteSchema.size() || SchemaCompatibility.checkReaderWriterCompatibility(newWriterSchema.toAvroSchema(),
Review Comment:
   You can use `HoodieSchemaCompatibility#areSchemasCompatible` instead. Once you switch to it, let's also update `writeSchemaFromFile` to be a `HoodieSchema` by removing the `getAvroSchema()` call.
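
   A hedged sketch of what that could look like. `HoodieSchemaCompatibility#areSchemasCompatible` is named in the review, but its exact signature, argument order, and boolean semantics are assumptions here:

   ```java
   // Sketch only: argument order (writer, reader) and a boolean return value
   // matching SchemaCompatibilityType.COMPATIBLE are assumed.
   HoodieSchema newWriterSchema = InternalSchemaConverter.convert(mergedSchema, writerSchema.getFullName());
   HoodieSchema writeSchemaFromFile = InternalSchemaConverter.convert(writeInternalSchema, newWriterSchema.getFullName());
   boolean needToReWriteRecord = sameCols.size() != colNamesFromWriteSchema.size()
       || HoodieSchemaCompatibility.areSchemasCompatible(newWriterSchema, writeSchemaFromFile);
   ```

   Keeping both schemas as `HoodieSchema` drops the remaining `getAvroSchema()`/`toAvroSchema()` conversions in this method.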



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to