nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-812710497
Closing this as we have a tracking JIRA.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-810288191
There are two code paths in HoodieSparkSqlWriter:
(1) AvroConversionUtils.convertStructTypeToAvroSchema(df.schema, structName, nameSpace)
(2) HoodieSparkUtils.createRdd(df, sch
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-808828818
Yes, your approach should work. The only change is that we might have to fix it
where we generate the Avro schema from the df schema in HoodieSparkSqlWriter. Eg:
https://github.com/nsivabalan/h
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-806348073
Yes, you are right. I was able to reproduce the issue (local Spark) and have
filed a [bug](https://issues.apache.org/jira/browse/HUDI-1716).
I have yet to try out the Hive issue, but it
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-805805044
1. Do you use the RowBasedSchemaProvider and hence can't explicitly provide a
schema? If you were to use your own schema registry, you might as well provide
an updated schema to Hudi w
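For reference, explicitly providing a schema to DeltaStreamer via the file-based schema provider typically looks like this (a sketch; the file paths are placeholders, and this assumes the FilebasedSchemaProvider is configured as the schema provider class):

```properties
# Point DeltaStreamer at explicit Avro schema files instead of inferring from rows
hoodie.deltastreamer.schemaprovider.source.schema.file=/path/to/source.avsc
hoodie.deltastreamer.schemaprovider.target.schema.file=/path/to/target.avsc
```

With this in place, evolving the schema is a matter of updating the .avsc files rather than relying on row-derived schemas.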
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-804178486
You can add null as the default value for your new field if that works for
you.
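Concretely, the new field would be declared as a nullable union with a null default (a sketch; the record and field names here are made up for illustration):

```json
{
  "type": "record",
  "name": "ExampleRecord",
  "fields": [
    {"name": "existing_field", "type": "string"},
    {"name": "new_field", "type": ["null", "string"], "default": null}
  ]
}
```

Note that in Avro a union's default must match the first branch of the union, which is why `null` is listed first.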
nsivabalan commented on issue #2675:
URL: https://github.com/apache/hudi/issues/2675#issuecomment-804177670
Yeah, Hudi just relies on Avro's schema compatibility in general. From the
[specification](http://avro.apache.org/docs/current/spec.html#Schema+Resolution),
it looks like adding a new f
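To illustrate the resolution rule being referenced (a reader that adds a field with a default can still decode records written under the old schema), here is a minimal stdlib-only sketch; it mimics Avro's behavior for this one case and is not the Avro library itself:

```python
# Sketch of Avro's "new field with default" schema-resolution rule:
# when a record written with the old schema is read with the new schema,
# any field missing from the writer's data takes the reader's default.

def resolve_record(reader_fields, written_record):
    """reader_fields: list of (name, default) pairs from the reader schema.
    written_record: dict of field name -> value as written by the old schema."""
    resolved = {}
    for name, default in reader_fields:
        if name in written_record:
            resolved[name] = written_record[name]
        else:
            # Field added in the new schema: fall back to its declared default.
            resolved[name] = default
    return resolved

# Old schema had only "id"; the new (reader) schema adds "note" with default None.
reader_fields = [("id", None), ("note", None)]
old_record = {"id": 42}
print(resolve_record(reader_fields, old_record))  # {'id': 42, 'note': None}
```

This is why adding a field without a default breaks reads of older data: there is nothing to fall back to in the else branch.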