yihua commented on code in PR #14314:
URL: https://github.com/apache/hudi/pull/14314#discussion_r2553335795


##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/common/TestHoodieInternalRowUtils.scala:
##########
@@ -306,14 +307,14 @@ class TestHoodieInternalRowUtils extends FunSuite with Matchers with BeforeAndAf
       .updateColumnType("col51", Types.DecimalType.get(18, 9))
       .updateColumnType("col6", Types.StringType.get)
     val newSchema = SchemaChangeUtils.applyTableChanges2Schema(internalSchema, updateChange)
-    val newAvroSchema = AvroInternalSchemaConverter.convert(newSchema, avroSchema.getName)
-    val newRecord = HoodieAvroUtils.rewriteRecordWithNewSchema(avroRecord, newAvroSchema, new HashMap[String, String])
-    assert(GenericData.get.validate(newAvroSchema, newRecord))
+    val newHoodieSchema = InternalSchemaConverter.convert(newSchema, avroSchema.getName)

Review Comment:
   ```suggestion
       val newSchema = InternalSchemaConverter.convert(newSchema, avroSchema.getName)
   ```



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/commit/HoodieMergeHelper.java:
##########
@@ -180,7 +181,7 @@ private Option<Function<HoodieRecord, HoodieRecord>> composeSchemaEvolutionTrans
       if (fileSchema.isEmptySchema() && writeConfig.getBoolean(HoodieCommonConfig.RECONCILE_SCHEMA)) {
         TableSchemaResolver tableSchemaResolver = new TableSchemaResolver(metaClient);
         try {
-          fileSchema = AvroInternalSchemaConverter.convert(tableSchemaResolver.getTableAvroSchema(true));
+          fileSchema = InternalSchemaConverter.convert(HoodieSchema.fromAvroSchema(tableSchemaResolver.getTableAvroSchema(true)));

Review Comment:
   I assume the `TableSchemaResolver` will be refactored later to provide `HoodieSchema` directly.
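
   If the resolver is refactored that way, a minimal sketch of how this call site could look, assuming a hypothetical `TableSchemaResolver#getTableHoodieSchema` (not an existing method) that returns `HoodieSchema` directly:
   ```java
   // Hypothetical follow-up sketch: the resolver returns HoodieSchema, so the
   // HoodieSchema.fromAvroSchema wrapping disappears from this call site.
   if (fileSchema.isEmptySchema() && writeConfig.getBoolean(HoodieCommonConfig.RECONCILE_SCHEMA)) {
     TableSchemaResolver tableSchemaResolver = new TableSchemaResolver(metaClient);
     try {
       // getTableHoodieSchema(true) is assumed here; today the resolver exposes getTableAvroSchema(true)
       fileSchema = InternalSchemaConverter.convert(tableSchemaResolver.getTableHoodieSchema(true));
     } catch (Exception e) {
       // placeholder: keep whatever error handling the surrounding method already uses
       throw new HoodieException("Failed to resolve the table schema", e);
     }
   }
   ```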



##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -373,7 +374,7 @@ class HoodieSparkSqlWriterInternal {
         // we will force-apply schema evolution to the writer's schema
         if (shouldReconcileSchema && hoodieConfig.getBooleanOrDefault(DataSourceReadOptions.SCHEMA_EVOLUTION_ENABLED)) {
           val allowOperationMetaDataField = parameters.getOrElse(HoodieWriteConfig.ALLOW_OPERATION_METADATA_FIELD.key(), "false").toBoolean
-          Some(AvroInternalSchemaConverter.convert(HoodieAvroUtils.addMetadataFields(latestTableSchemaOpt.getOrElse(sourceSchema), allowOperationMetaDataField)))
+          Some(InternalSchemaConverter.convert(HoodieSchema.fromAvroSchema(HoodieAvroUtils.addMetadataFields(latestTableSchemaOpt.getOrElse(sourceSchema), allowOperationMetaDataField))))

Review Comment:
   Should some of the schema processing logic, including adding the meta fields, be pushed into the `HoodieSchema` class? This can be done in a separate PR.
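
   For illustration only, here is a sketch of what such a helper on `HoodieSchema` might look like; `withMetadataFields` and `toAvroSchema()` are hypothetical names, and the body simply delegates to the existing `HoodieAvroUtils.addMetadataFields`:
   ```java
   // Hypothetical HoodieSchema method (not part of this PR): keeps callers on the
   // HoodieSchema type while reusing the existing Avro-based meta-field logic.
   public HoodieSchema withMetadataFields(boolean withOperationField) {
     // toAvroSchema() is assumed to exist; substitute the actual accessor
     Schema avroSchemaWithMetaFields =
         HoodieAvroUtils.addMetadataFields(toAvroSchema(), withOperationField);
     return HoodieSchema.fromAvroSchema(avroSchemaWithMetaFields);
   }
   ```
   The writer path above could then build the schema with something like `HoodieSchema.fromAvroSchema(...).withMetadataFields(allowOperationMetaDataField)` before calling `InternalSchemaConverter.convert`.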


