GitHub user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/21588
> Does this upgrade Hive for execution or also for metastore? Spark
> supports virtually all Hive metastore versions out there, and a lot of
> deployments do run different versions of Spark against the same old Hive
> metastore, and it'd be bad to break connectivity to old Hive metastores.
> The execution part is a different story and we can upgrade them easily.
The upgrade basically targets upgrading Hive for execution (let me know if
I am mistaken). For metastore compatibility, I believe we can still provide
metastore JARs and support other Hive versions by explicitly configuring those
JARs, which are loaded through an isolated classloader. I believe we have basic
tests against different Hive versions.
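For reference, a minimal sketch of what I mean (the version number and JAR
path below are only illustrative, not something this PR pins down):

    import org.apache.spark.sql.SparkSession

    // Talk to an older external Hive metastore by pointing Spark at explicit
    // metastore JARs; they are loaded through an isolated classloader rather
    // than the builtin Hive classes.
    val spark = SparkSession.builder()
      .appName("metastore-compat-sketch")
      .config("spark.sql.hive.metastore.version", "1.2.1")                    // illustrative version
      .config("spark.sql.hive.metastore.jars", "/path/to/hive-1.2.1/lib/*")   // illustrative path
      .enableHiveSupport()
      .getOrCreate()

    // Simple sanity check against the configured metastore.
    spark.sql("SHOW DATABASES").show()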
I would cautiously like to raise an option - drop the builtin metastore
support by default at 3.0 if the upgrade makes keeping the builtin metastore
support hard enough.