GitHub user squito commented on the issue:
https://github.com/apache/spark/pull/19343
Whoops, sorry, I wrote [CORE] out of habit!
> Spark SQL might not be deployed on HDFS. Conceptually, this HDFS-specific code should not be part of our HiveExternalCatalog. HiveExternalCatalog is just for using the Hive metastore; it does not assume we use HDFS.
Yes, I totally understand that. Even for users who are on HDFS, this is clearly user error; they should be using Hive's metatool to update the database location, along the lines of the sketch below. Originally I thought this would be unnecessary complication in Spark, but after enough complaints I figured maybe Spark could just handle it automatically.
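For anyone landing here, the fix on the Hive side looks roughly like this (the namenode URIs below are placeholders, not taken from this PR):

```sh
# List the filesystem roots currently recorded in the metastore
hive --service metatool -listFSRoot

# Rewrite stored database/table locations to point at the new namenode
# (new URI first, then the old one it replaces)
hive --service metatool -updateLocation hdfs://new-nn:8020 hdfs://old-nn:8020
```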
Is there another place this could go instead?
Anyway, if you really feel this doesn't belong in Spark, that is fine.