Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14712
Spark SQL already has its own metastore: `InMemoryCatalog`. And we do have
an abstraction for the metastore: `ExternalCatalog`. We have 2 targets here:
1. add table statistics in Spark SQL
2. Spark SQL and Hive should recognize table statistics from each other.
I think target 1 is more important, and we do need an implementation that
does not depend on Hive features.
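A minimal sketch of the abstraction being described (in Python with hypothetical names; Spark's actual interfaces are the Scala trait `ExternalCatalog` and its implementation `InMemoryCatalog`): a catalog interface that can persist table statistics, plus an in-memory implementation with no Hive dependency, which is what target 1 needs.

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for Spark's ExternalCatalog trait: any backend
# (in-memory, Hive, ...) must be able to store and return table statistics.
class Catalog(ABC):
    @abstractmethod
    def set_table_stats(self, table: str, stats: dict) -> None: ...

    @abstractmethod
    def get_table_stats(self, table: str) -> dict: ...

# Hypothetical stand-in for InMemoryCatalog: statistics live in a plain
# dict, so storing stats in Spark SQL needs no Hive feature at all.
class InMemoryCatalog(Catalog):
    def __init__(self):
        self._stats = {}

    def set_table_stats(self, table, stats):
        self._stats[table] = dict(stats)

    def get_table_stats(self, table):
        return dict(self._stats.get(table, {}))

catalog = InMemoryCatalog()
catalog.set_table_stats("db.t", {"numRows": 1000, "sizeInBytes": 65536})
print(catalog.get_table_stats("db.t")["numRows"])  # -> 1000
```

Target 2 (interoperability) would then be a property of a second, Hive-backed implementation of the same interface, not of the statistics feature itself.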
> Actually, we desperately need spark sql to have its own metastore, because we need to persist statistics like histograms which AFAIK hive metastore doesn't support.
We store table statistics in table properties, so why would the Hive metastore not
support it? Or do you mean Hive can't recognize them? I think that's OK; we should
not limit our table statistics to what Hive supports.
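To illustrate the point about table properties: a metastore that only stores opaque string key/value pairs can still carry arbitrary statistics, including ones Hive itself does not understand, as long as they are serialized to strings. A hedged sketch (the property key prefix and stat names here are illustrative assumptions, not necessarily Spark's actual keys):

```python
import json

# Serialize statistics into string-valued table properties, the way a
# metastore that only understands key/value pairs can still carry them.
# The "spark.sql.statistics." prefix is an illustrative assumption.
def stats_to_properties(stats: dict) -> dict:
    props = {}
    for name, value in stats.items():
        # Everything becomes a string; structured values (e.g. a histogram)
        # are JSON-encoded so they survive a string-only store.
        props["spark.sql.statistics." + name] = json.dumps(value)
    return props

def properties_to_stats(props: dict) -> dict:
    prefix = "spark.sql.statistics."
    return {
        key[len(prefix):]: json.loads(value)
        for key, value in props.items()
        if key.startswith(prefix)
    }

# A histogram round-trips through the property map even though the
# store itself knows nothing about histograms.
stats = {"numRows": 1000, "histogram": [[0, 10, 500], [10, 20, 500]]}
assert properties_to_stats(stats_to_properties(stats)) == stats
```

Hive would simply ignore keys it does not recognize, which is exactly why the feature need not be limited to what Hive supports.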