vanzin commented on a change in pull request #23546: [SPARK-23153][K8s] Support client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r260918653
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
 ##########
 @@ -374,6 +408,53 @@ private[spark] class SparkSubmit extends Logging {
       localPyFiles = Option(args.pyFiles).map {
         downloadFileList(_, targetDir, sparkConf, hadoopConf, secMgr)
       }.orNull
+
+      if (isKubernetesClient &&
+        sparkConf.getBoolean("spark.kubernetes.submitInDriver", false)) {
+        // Replace with the downloaded local jar path to avoid propagating hadoop compatible uris.
+        // Executors will get the jars from the Spark file server.
+        if (args.jars != null && localJars != null) {
+          args.jars = Utils.stringToSeq(args.jars).map {
+            jar => val jarUri = new URI(jar)
+              val path1 = {
 
 Review comment:
   Please use better names. This is not `path1`. This is a file name. Having the variable name reflect that makes it easier to understand what this code is doing.
   
   You could even use `new Path(uri).getName()` to be even clearer.
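   To illustrate the suggestion, here is a minimal sketch of extracting the file name from a jar URI. The helper name `jarFileName` is hypothetical and not part of the PR; it uses only the JDK so the snippet stays self-contained, whereas in Spark itself the Hadoop `new Path(uri).getName()` call would do the same job:

```scala
import java.net.URI

// Hypothetical helper (not from the PR): given a jar URI such as
// "hdfs://namenode:8020/jars/app-1.0.jar", return just the file name.
// In Spark code, `new org.apache.hadoop.fs.Path(uri).getName()` is the
// clearer, equivalent option the reviewer suggests.
def jarFileName(jar: String): String = {
  val jarUri = new URI(jar)
  // Take everything after the last '/' in the URI path, i.e. the file name.
  new java.io.File(jarUri.getPath).getName
}

println(jarFileName("hdfs://namenode:8020/jars/app-1.0.jar"))  // app-1.0.jar
```

   Naming the value `fileName` (or similar) makes the intent obvious at the call site, which is the point of the review comment.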

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
