cloud-fan commented on a change in pull request #29881:
URL: https://github.com/apache/spark/pull/29881#discussion_r504750732
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
##########
@@ -396,23 +440,50 @@ private[spark] object HiveUtils extends Logging {
config = configurations,
barrierPrefixes = hiveMetastoreBarrierPrefixes,
sharedPrefixes = hiveMetastoreSharedPrefixes)
+ } else if (hiveMetastoreJars == "path") {
+ // Convert to files and expand any directories.
+ val jars =
+ HiveUtils.hiveMetastoreJarsPath(sqlConf)
+ .flatMap {
+ case path if Utils.isWindows =>
+ addLocalHiveJars(new File(path))
+ case path =>
+ val uri = new Path(path).toUri
+ uri.getScheme match {
Review comment:
I don't understand why we need to check the scheme and do things
differently. `spark.read.parquet(...)` supports any scheme and internally
just uses the Hadoop DFS API. Can't we do the same here?
Can you point to other places in Spark that do similar things to support
this PR?
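
For illustration only, a minimal sketch of the uniform approach being suggested: resolve every configured path through Hadoop's `FileSystem` API so that `file://`, `hdfs://`, `s3a://`, etc. are all handled the same way, instead of branching on the scheme. The helper name `resolveJarUris` and the directory-expansion details are assumptions for this sketch, not code from the PR.

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, Path}

// Hypothetical helper: resolve configured jar paths through the Hadoop
// FileSystem API, so every scheme goes through the same code path.
def resolveJarUris(paths: Seq[String], hadoopConf: Configuration): Seq[URI] = {
  paths.flatMap { p =>
    val path = new Path(p)
    // Path.getFileSystem picks the right FileSystem implementation from the
    // URI scheme, falling back to the default filesystem when none is given.
    val fs = path.getFileSystem(hadoopConf)
    val matches: Array[FileStatus] = fs.globStatus(path)
    if (matches == null) {
      Seq.empty
    } else {
      matches.toSeq.flatMap { status =>
        // Expand directories one level, mirroring the "expand any
        // directories" intent of the PR; keep plain files as-is.
        if (status.isDirectory) fs.listStatus(status.getPath).toSeq.map(_.getPath.toUri)
        else Seq(status.getPath.toUri)
      }
    }
  }
}
```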
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]