vanzin commented on a change in pull request #23546: [SPARK-23153][K8s] Support client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r260915605
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
 ##########
 @@ -353,10 +371,26 @@ private[spark] class SparkSubmit extends Logging {
     }
 
     // Resolve glob path for different resources.
-    args.jars = Option(args.jars).map(resolveGlobPaths(_, hadoopConf)).orNull
-    args.files = Option(args.files).map(resolveGlobPaths(_, hadoopConf)).orNull
 -    args.pyFiles = Option(args.pyFiles).map(resolveGlobPaths(_, hadoopConf)).orNull
 -    args.archives = Option(args.archives).map(resolveGlobPaths(_, hadoopConf)).orNull
+    if (isKubernetesCluster) {
+      // Skip dependencies we will handle at the K8s backend.
 
 Review comment:
   Can you clarify what this means? The code seems to be following what you mention in the PR description ("uses a custom scheme (`client://`) to denote local deps"). But why do that? Why not detect local dependencies like everybody else does (anything that resolves to the "file" scheme after `resolveGlobPaths` is called)?
   
   It seems really sketchy to be adding another way to reference local dependencies that is exclusive to k8s.
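   
   For context, a minimal sketch of the detection being suggested, assuming Spark's existing `Utils.resolveURI` helper; the `isLocalDependency` name and the partitioning snippet are hypothetical illustrations, not code from the PR:
   
   ```scala
   import org.apache.spark.util.Utils
   
   // Hypothetical helper: treat anything that resolves to the "file" scheme
   // after glob expansion as a local (client-side) dependency, the same way
   // the other cluster managers do.
   private def isLocalDependency(path: String): Boolean = {
     // Utils.resolveURI assigns the "file" scheme to schemeless paths.
     Utils.resolveURI(path).getScheme == "file"
   }
   
   // Usage sketch: after resolveGlobPaths has run, split the jar list into
   // local deps (for the K8s backend to upload) and remote ones that
   // executors can fetch directly, with no custom "client://" scheme needed.
   // val (localJars, remoteJars) = resolvedJars.partition(isLocalDependency)
   ```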
