Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14883#discussion_r77306344
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala ---
@@ -184,4 +184,17 @@ abstract class ExternalCatalog {
   def listFunctions(db: String, pattern: String): Seq[String]
 
+  // --------------------------------------------------------------------------
+  // Resources
+  // --------------------------------------------------------------------------
+
+  /**
+   * Add a JAR resource to the underlying external catalog for DDL (e.g., CREATE TABLE) and DML
+   * (e.g., LOAD TABLE) operations.
+   *
+   * For example, when users create a Hive serde table, they can specify a custom
+   * Serializer-Deserializer (SerDe) class. When the Hive metastore is unable to access the custom
+   * SerDe JAR (e.g., it is not on the Hive classpath), the JAR file must be added at runtime
+   * using this API.
+   */
+  def addJar(path: String): Unit
--- End diff --
Everything is doable. Basically, we need to provide NATIVE support for
reading/writing custom and built-in Hive serde/file-format tables, instead of
going through `HiveClient`. Ideally, we should implement our own write path for
Hive serde tables instead of calling `HiveClient`'s `loadTable`/`loadPartition`
APIs, and more. After all these changes, we could completely get rid of the
Hive metastore and `HiveClient` APIs.
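To make the delegation shape concrete, here is a minimal, self-contained
sketch; `HiveClientLike` and `HiveBackedCatalog` are hypothetical stand-ins for
Spark's internal `HiveClient` wiring, illustrative only and not the actual
patch:

```scala
object AddJarSketch {
  // Stand-in for Spark's internal HiveClient interface (hypothetical name).
  trait HiveClientLike {
    def addJar(path: String): Unit
  }

  // The catalog forwards the JAR to the Hive client so metastore-side
  // operations (e.g., CREATE TABLE with a custom SerDe) can resolve its
  // classes.
  class HiveBackedCatalog(client: HiveClientLike) {
    def addJar(path: String): Unit = client.addJar(path)
  }

  def main(args: Array[String]): Unit = {
    // Stub client so the sketch runs standalone.
    val catalog = new HiveBackedCatalog(new HiveClientLike {
      override def addJar(path: String): Unit =
        println(s"stub: would register $path")
    })
    catalog.addJar("/path/to/custom-serde.jar")
  }
}
```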
However, we have to consider priority, code stability, and maintainability.
All of the above efforts are aimed at migration from a legacy Hive deployment
to Spark. Spark users who do not have a legacy Hive system should use our
built-in file-format sources and data source APIs instead of writing or using
custom Hive serdes/file formats, as sketched below.
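For contrast, that built-in path in standard public API form (e.g., in
`spark-shell`, where the `spark` session is already provided; the path is a
placeholder):

```scala
// Built-in Parquet source: no custom SerDe JAR and no Hive-specific write path.
spark.range(100).toDF("id").write.mode("overwrite").parquet("/tmp/ids")
val ids = spark.read.parquet("/tmp/ids")
ids.show(5)
```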