xkrogen commented on a change in pull request #29874:
URL: https://github.com/apache/spark/pull/29874#discussion_r495992294



##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
##########
@@ -61,7 +61,8 @@ private[hive] object IsolatedClientLoader extends Logging {
     val files = if (resolvedVersions.contains((resolvedVersion, hadoopVersion))) {
       resolvedVersions((resolvedVersion, hadoopVersion))
     } else {
-      val remoteRepos = sparkConf.get(SQLConf.ADDITIONAL_REMOTE_REPOSITORIES)
+      val remoteRepos = sys.env.getOrElse(
+        "DEFAULT_ARTIFACT_REPOSITORY", sparkConf.get(SQLConf.ADDITIONAL_REMOTE_REPOSITORIES))

Review comment:
       This still isn't quite right -- `remoteRepos` still represents the repositories to be tried _after_ the default Maven Central repository. Looking more closely, the changes you've already made in `SparkSubmit` should cover this scenario: `IsolatedClientLoader#downloadVersion()` calls `SparkSubmitUtils.resolveMavenCoordinates()` using `SparkSubmitUtils.buildIvySettings()`, which in turn calls `SparkSubmitUtils.createRepoResolvers()` to obtain the default repository, and that is where your changes already take effect.
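
       As an aside, the `sys.env.getOrElse` fallback pattern in the diff can be illustrated in isolation. The sketch below is hypothetical (the object name `RepoResolution`, the method name, and its parameters are mine, not Spark's); it only demonstrates preferring an environment variable over a configured default, which is what the proposed change attempts:

```scala
// Minimal sketch (hypothetical names): prefer an environment variable,
// falling back to a configured value when the variable is unset.
object RepoResolution {
  // `configuredRepos` stands in for
  // sparkConf.get(SQLConf.ADDITIONAL_REMOTE_REPOSITORIES) in the real code.
  def resolveRemoteRepos(env: Map[String, String], configuredRepos: String): String =
    env.getOrElse("DEFAULT_ARTIFACT_REPOSITORY", configuredRepos)
}
```

       In the actual PR the env map is `sys.env`, so a call would look like `RepoResolution.resolveRemoteRepos(sys.env, configuredRepos)`. The review point stands regardless: this value only covers the *additional* repositories, not the default resolver built in `createRepoResolvers()`.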




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


