rahil-c commented on code in PR #14374:
URL: https://github.com/apache/hudi/pull/14374#discussion_r2591627809
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSchemaUtils.scala:
##########
@@ -250,13 +248,15 @@ object HoodieSchemaUtils {
*
* TODO support casing reconciliation
*/
-  private def canonicalizeSchema(sourceSchema: Schema, latestTableSchema: Schema, opts : Map[String, String],
-                                 shouldReorderColumns: Boolean): Schema = {
-    reconcileSchemaRequirements(sourceSchema, latestTableSchema, shouldReorderColumns)
+  private def canonicalizeSchema(sourceSchema: HoodieSchema, latestTableSchema: HoodieSchema, opts : Map[String, String],
+                                 shouldReorderColumns: Boolean): HoodieSchema = {
+    HoodieSchema.fromAvroSchema(
+      reconcileSchemaRequirements(sourceSchema.toAvroSchema(), latestTableSchema.toAvroSchema(), shouldReorderColumns)
Review Comment:
Yea, when we pass `null` through in `StreamSync` like this:
```
incomingSchema == null ? null : org.apache.hudi.common.schema.HoodieSchemaUtils.removeMetadataFields(incomingSchema),
```
it then hits the NPE when `.toAvroSchema()` is called on the null object. I think the safer approach for now, instead of passing `null`, is to pass a `HoodieSchema.create(HoodieSchemaType.NULL)` object through.
When I tried this, it allowed `testEmptyBatchWithNullSchemaValue` to pass.
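A minimal sketch of the idea, the null-object pattern: hand downstream code a NULL-typed schema instead of a bare `null`, so later `.toAvroSchema()` calls cannot NPE. The `HoodieSchema` / `HoodieSchemaType` classes below are simplified hypothetical stand-ins for the real Hudi types, not the actual API:

```java
// Hypothetical, simplified stand-ins for the real Hudi schema types,
// used only to illustrate the null-object pattern suggested above.
enum HoodieSchemaType { NULL, RECORD }

class HoodieSchema {
    final HoodieSchemaType type;

    private HoodieSchema(HoodieSchemaType type) { this.type = type; }

    static HoodieSchema create(HoodieSchemaType type) { return new HoodieSchema(type); }

    // In the real code this converts to an Avro schema; here it just shows
    // that the call is safe even on a NULL-typed instance.
    String toAvroSchema() {
        return type == HoodieSchemaType.NULL ? "\"null\"" : "{...record schema...}";
    }
}

public class NullSchemaSketch {
    // Before: incomingSchema == null ? null : removeMetadataFields(incomingSchema)
    // After:  never pass a raw null downstream.
    static HoodieSchema canonicalize(HoodieSchema incomingSchema) {
        return incomingSchema == null
            ? HoodieSchema.create(HoodieSchemaType.NULL) // null-object instead of null
            : incomingSchema;                            // real path would strip metadata fields
    }

    public static void main(String[] args) {
        HoodieSchema s = canonicalize(null);
        // Safe for an empty batch with no incoming schema: no NPE.
        System.out.println(s.toAvroSchema());
    }
}
```

The trade-off is that callers must check for a NULL-typed schema explicitly rather than relying on a `null` reference, but the `.toAvroSchema()` call sites no longer need guarding.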
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]