vanzin commented on a change in pull request #23546: [SPARK-23153][K8s] Support
client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r260916296
##########
File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
##########
@@ -374,6 +408,53 @@ private[spark] class SparkSubmit extends Logging {
localPyFiles = Option(args.pyFiles).map {
downloadFileList(_, targetDir, sparkConf, hadoopConf, secMgr)
}.orNull
+
+    if (isKubernetesClient &&
+        sparkConf.getBoolean("spark.kubernetes.submitInDriver", false)) {
+      // Replace with the downloaded local jar path to avoid propagating
+      // hadoop compatible uris.
+      // Executors will get the jars from the Spark file server.
+      if (args.jars != null && localJars != null) {
+        args.jars = Utils.stringToSeq(args.jars).map {
+          jar => val jarUri = new URI(jar)
Review comment:
`jar =>` goes on the previous line. Also, the whole block looks really
weird the way you've indented things.
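For context, the formatting the reviewer is asking for looks roughly like the sketch below. This is an illustration only: the body of the `map` after `new URI(jar)` is elided in the diff, so the identity result and the trailing `mkString` here are hypothetical placeholders, not the PR's actual logic.

```scala
// Sketch of the suggested style: the lambda parameter `jar =>` sits on the
// same line as the opening brace of the closure, and the body is indented
// one level below it.
if (args.jars != null && localJars != null) {
  args.jars = Utils.stringToSeq(args.jars).map { jar =>
    val jarUri = new URI(jar)
    // ... rest of the body elided in the diff; placeholder result below ...
    jarUri.toString
  }.mkString(",")  // hypothetical: args.jars is a comma-separated String
}
```

Putting `jar =>` on the brace line is the conventional Scala style for multi-line closures, and it lets the body indent uniformly instead of hanging off the parameter.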
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.