Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18849
> If ALTER TABLE makes the hive compatibility broken, the value of this
flag becomes misleading.
That's the whole point of the flag and what the current changes do! It
takes different paths when handling alter table depending on whether the table
is compatible. So if the table was compatible, it will remain compatible (or
otherwise Hive should complain about the updated table, as it does in certain
cases).
So I really do not understand what it is you're not understanding about the
patch.
> When Hive metastore complained about it, we should also set it to false.
Absolutely not. If you have a Hive compatible table and you try to update
its schema with something that Hive complains about, YOU SHOULD GET AN ERROR.
And that's what the current patch does. You should not try to mess up the table
even further. The old code was just plain broken in this regard.
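To make the intended behavior concrete, here is a minimal sketch of the branching described above. All names here are invented for illustration (`alter_table_schema`, `hive_accepts`, the dict-based table) and are not the actual Spark or Hive APIs; the point is only that a compatible table either stays compatible or the update fails with an error, rather than silently downgrading the flag:

```python
class HiveRejectedSchema(Exception):
    """Raised when Hive would not accept the updated schema."""
    pass

def hive_accepts(schema):
    # Stand-in for Hive's validation: pretend it rejects column
    # types it does not understand (e.g. "interval").
    return all("interval" not in col_type for _, col_type in schema)

def alter_table_schema(table, new_schema):
    """table: dict with 'hive_compatible' (bool) and 'schema'
    (list of (name, type) pairs). Returns an updated table dict."""
    if table["hive_compatible"]:
        # Compatible path: surface Hive's error to the user instead of
        # flipping the flag and messing up the table even further.
        if not hive_accepts(new_schema):
            raise HiveRejectedSchema("Hive rejected the updated schema")
        return {**table, "schema": new_schema}
    # Incompatible path: Spark-only metadata update; Hive never validates it.
    return {**table, "schema": new_schema}
```

So a compatible table with a Hive-acceptable schema change stays compatible, an incompatible table accepts anything, and a compatible table with a schema Hive rejects produces an error up front.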