TengHuo commented on issue #7691:
URL: https://github.com/apache/hudi/issues/7691#issuecomment-1386565809
> Not the same. The current issue is the schema compatibility problem
between Flink and Spark.
Yeah, not the same, but I think they are similar. In #7284, we found that the
Spark side builds the Avro schema namespace from a pattern, e.g. the namespace
of the writer schema is `"namespace": "hoodie.test_mor_tab"` ("hoodie" is a
prefix, "test_mor_tab" is our test Hudi table name), but the reader schema uses
a constant name, `"name": "Record"`, which causes an inconsistency. Details here:
https://github.com/apache/hudi/issues/7284#issuecomment-1324899843
As Danny mentioned above, the Flink side uses a constant namespace named
'record', so it looks like it will also cause a schema mismatch when the Avro
schema is generated by Spark.
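To make the mismatch concrete, here is a minimal sketch (not Hudi code; the field list and the record name `test_mor_tab_record` are made up for illustration) showing how an Avro record's full name is derived from `namespace` + `name`, and why a Spark-style namespaced schema and a constant-name schema end up with different full names:

```python
import json

# Hypothetical writer schema mimicking what Spark generates: the namespace
# is derived from the table name ("hoodie" prefix + table name).
writer_schema = json.loads("""
{"type": "record", "name": "test_mor_tab_record",
 "namespace": "hoodie.test_mor_tab",
 "fields": [{"name": "id", "type": "int"}]}
""")

# Hypothetical reader schema mimicking a constant record name with no namespace.
reader_schema = json.loads("""
{"type": "record", "name": "record",
 "fields": [{"name": "id", "type": "int"}]}
""")

def full_name(schema):
    """Avro full name = namespace + '.' + name (namespace may be absent)."""
    ns = schema.get("namespace")
    return f"{ns}.{schema['name']}" if ns else schema["name"]

print(full_name(writer_schema))  # hoodie.test_mor_tab.test_mor_tab_record
print(full_name(reader_schema))  # record
```

Since Avro schema resolution matches record schemas by full name (or aliases), two schemas whose full names differ like this are not resolution-compatible even though their fields are identical.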
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]