Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18849
Even when Hive complains, we should still let users update Spark-native
file source tables. In Spark SQL, we do our best to make the native data source
tables Hive compatible, but we should not block users just because the Hive
metastore rejects the metadata. This is how we already behave in [CREATE
TABLE](https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala#L364-L374):
we first try to persist a Hive-compatible table and fall back to Spark-specific
metadata if the metastore rejects it.
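For context, a minimal sketch of that fallback pattern; the `CatalogTable`
stand-in, `createTable`, and `saveTableIntoHive` here are illustrative names,
not the exact Spark internals:

```scala
import scala.util.control.NonFatal

object CreateTableFallbackSketch {
  // Illustrative stand-in for Spark's internal table metadata.
  case class CatalogTable(name: String, provider: String)

  // Sketch of the CREATE TABLE behavior linked above: try to persist the
  // Hive-compatible form first; if the metastore complains, log a warning
  // and fall back to Spark-specific metadata instead of failing the command.
  def createTable(
      hiveCompatible: Option[CatalogTable],
      sparkSpecific: CatalogTable,
      saveTableIntoHive: CatalogTable => Unit): Unit = {
    hiveCompatible match {
      case Some(table) =>
        try {
          saveTableIntoHive(table)
        } catch {
          case NonFatal(e) =>
            println("Hive metastore rejected the table; storing it in " +
              s"Spark-specific format instead: ${e.getMessage}")
            saveTableIntoHive(sparkSpecific)
        }
      case None =>
        // No Hive-compatible representation exists for this data source.
        saveTableIntoHive(sparkSpecific)
    }
  }
}
```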
If users really need to read our Spark-native data source tables from
Hive, we should introduce a SQLConf or table-specific option for it (see the
sketch below) and update the corresponding part of `CREATE TABLE` too.
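To make the shape of that proposal concrete, such an option could be read like
this; the conf key `spark.sql.hive.requireHiveCompatibleSchema` is hypothetical
and does not exist in Spark:

```scala
import org.apache.spark.sql.SparkSession

object HiveCompatOptionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-compat-option-sketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical conf key. If enabled, CREATE TABLE / ALTER TABLE would
    // fail fast when the Hive metastore rejects the schema, instead of
    // silently falling back to Spark-specific metadata.
    val requireHiveCompat = spark.conf
      .get("spark.sql.hive.requireHiveCompatibleSchema", "false")
      .toBoolean

    println(s"Require Hive-compatible schema: $requireHiveCompat")
    spark.stop()
  }
}
```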
In addition, we should avoid introducing a flag just to fix one specific
scenario. Thus, I still think comparing the table schemas (sketched below) is
the preferred fix. Could you show an example that would break it? cc @cloud-fan
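For illustration, here is a minimal sketch of what I mean by comparing the
table schemas. The exact equality rules (case sensitivity, nullability) would
need to be agreed on in this PR, so treat `schemasMatch` as a hypothetical
helper rather than the actual proposed check:

```scala
import org.apache.spark.sql.types._

object SchemaCompareSketch {
  // Hypothetical helper: treat an update as compatible only when the old and
  // new schemas agree on field names (case-insensitively) and data types,
  // ignoring nullability.
  def schemasMatch(existing: StructType, updated: StructType): Boolean = {
    existing.fields.length == updated.fields.length &&
      existing.fields.zip(updated.fields).forall { case (a, b) =>
        a.name.equalsIgnoreCase(b.name) && a.dataType == b.dataType
      }
  }

  def main(args: Array[String]): Unit = {
    val oldSchema = StructType(Seq(
      StructField("id", LongType),
      StructField("name", StringType)))
    val newSchema = StructType(Seq(
      StructField("ID", LongType),
      StructField("name", StringType)))
    // Prints true: only the column-name case differs.
    println(schemasMatch(oldSchema, newSchema))
  }
}
```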