danny0405 commented on code in PR #10727:
URL: https://github.com/apache/hudi/pull/10727#discussion_r1524556643
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/commit/HoodieMergeHelper.java:
##########
@@ -202,7 +202,9 @@ private Option<Function<HoodieRecord, HoodieRecord>> composeSchemaEvolutionTrans
     Schema newWriterSchema = AvroInternalSchemaConverter.convert(mergedSchema, writerSchema.getFullName());
     Schema writeSchemaFromFile = AvroInternalSchemaConverter.convert(writeInternalSchema, newWriterSchema.getFullName());
     boolean needToReWriteRecord = sameCols.size() != colNamesFromWriteSchema.size()
-        || SchemaCompatibility.checkReaderWriterCompatibility(newWriterSchema, writeSchemaFromFile).getType() == org.apache.avro.SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE;
+        && SchemaCompatibility.checkReaderWriterCompatibility(newWriterSchema, writeSchemaFromFile).getType()
+        == org.apache.avro.SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE;
+
Review Comment:
Good point for optimization. We introduced some changes like the dynamic read
schema based on the write schema in release 1.x for the `HoodieFileGroupReader`,
but I'm not sure whether it is applied automatically for all the read paths, cc
@yihua for confirming this.
And anyway, I think we should have such an optimization in the 0.x branch and
master for the legacy `HoodieMergedLogRecordReader`, which will still be
beneficial to engines like Flink and Hive.
@xiarixiaoyao do you have interest in contributing this?
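For context, the check the patch above relies on can be exercised standalone. The sketch below uses Avro's public `SchemaCompatibility` API; the `CompatibilityDemo` class and the two toy schemas are made-up examples for illustration, not Hudi code:

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaCompatibilityType;

public class CompatibilityDemo {

  // Returns COMPATIBLE when data written with `writer` can be read with `reader`,
  // which is the condition the patch uses to decide whether records need rewriting.
  public static SchemaCompatibilityType check(Schema reader, Schema writer) {
    return SchemaCompatibility.checkReaderWriterCompatibility(reader, writer).getType();
  }

  public static void main(String[] args) {
    // Writer produced records with a single "id" field.
    Schema writer = SchemaBuilder.record("Rec").fields()
        .requiredLong("id")
        .endRecord();
    // Reader adds an optional "name" field (union with null, default null),
    // which Avro resolves without rewriting existing records.
    Schema reader = SchemaBuilder.record("Rec").fields()
        .requiredLong("id")
        .optionalString("name")
        .endRecord();
    System.out.println(check(reader, writer)); // expected: COMPATIBLE
  }
}
```

A reader that instead added a required field with no default would come back `INCOMPATIBLE`, which is the case where per-record rewriting cannot be skipped.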
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]