xiarixiaoyao commented on code in PR #8026:
URL: https://github.com/apache/hudi/pull/8026#discussion_r1116443597
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -155,12 +158,13 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
     }
   }
+  val avroNameAndSpace = AvroConversionUtils.getAvroRecordNameAndNamespace(tableName)
   val avroSchema = internalSchemaOpt.map { is =>
-    AvroInternalSchemaConverter.convert(is, "schema")
+    AvroInternalSchemaConverter.convert(is, avroNameAndSpace._2 + "." + avroNameAndSpace._1)
Review Comment:
@danny0405
I checked the Flink code, and there is no problem on the Flink side, since schema evolution calls HoodieAvroUtils.rewriteRecordWithNewSchema to unify the namespace.
By the way, the problem this PR fixes has nothing to do with this modification; I changed this line only to ensure that the namespaces of the read schema and the write schema are consistent on the Spark side.
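To illustrate the intent of the changed line, here is a minimal standalone Java sketch (not Hudi code). The `"<table>_record"` / `"hoodie.<table>"` naming convention is an assumption standing in for what `AvroConversionUtils.getAvroRecordNameAndNamespace` returns; the point is only that reader and writer derive the fully qualified Avro record name from the same table name, so the namespaces match.

```java
// Hypothetical sketch: shows how a (recordName, namespace) pair derived from
// the table name combines into one fully qualified Avro record name, so that
// read and write schemas carry an identical namespace.
public class AvroNameSketch {

    // Stand-in for AvroConversionUtils.getAvroRecordNameAndNamespace;
    // the "<table>_record" / "hoodie.<table>" convention is an assumption.
    static String[] recordNameAndNamespace(String tableName) {
        return new String[] { tableName + "_record", "hoodie." + tableName };
    }

    // Mirrors avroNameAndSpace._2 + "." + avroNameAndSpace._1 from the diff:
    // namespace, then a dot, then the record name.
    static String fullyQualifiedName(String tableName) {
        String[] pair = recordNameAndNamespace(tableName);
        return pair[1] + "." + pair[0];
    }

    public static void main(String[] args) {
        // Any component deriving the name this way gets the same result,
        // which is what keeps reader and writer schemas consistent.
        System.out.println(fullyQualifiedName("trips")); // hoodie.trips.trips_record
    }
}
```

Passing a fixed literal such as `"schema"` instead (the old code) would give the reading side a record name whose namespace can differ from the one the writer embedded, which is exactly the inconsistency the changed line avoids.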
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]