skonto commented on a change in pull request #23546: [SPARK-23153][K8s] Support
client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r261157666
##########
File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
##########
@@ -353,10 +371,26 @@ private[spark] class SparkSubmit extends Logging {
}
// Resolve glob path for different resources.
- args.jars = Option(args.jars).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.files = Option(args.files).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.pyFiles = Option(args.pyFiles).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.archives = Option(args.archives).map(resolveGlobPaths(_, hadoopConf)).orNull
+ if (isKubernetesCluster) {
+ // Skip dependencies we will handle at the K8s backend.
Review comment:
Because in K8s, as in Mesos, we have URIs for files that target either the filesystem inside the driver container or the machine where the submission was done in cluster mode. So at submit time I need to know which is which. Right now we have `local://`, but also `file://`, which AFAIK resolves inside the container. So `client` makes it explicit that the URI targets the submission machine's filesystem.
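To illustrate the distinction, here is a minimal sketch (not the code from this PR; the helper name and the handling of the `client` scheme are assumptions based on the discussion above) of how a URI scheme maps to the filesystem it should resolve against at submit time:

```scala
import java.net.URI

// Hypothetical helper for illustration only: classify a dependency URI by the
// filesystem it is expected to resolve against when spark-submit runs.
object DepLocation extends Enumeration {
  val SubmissionMachine, DriverContainer, Remote = Value

  def of(path: String): Value = {
    Option(new URI(path).getScheme).getOrElse("file") match {
      // "local" already means a path inside the driver/executor container.
      case "local" => DriverContainer
      // "file" is ambiguous today; AFAIK it resolves inside the container.
      case "file" => DriverContainer
      // A "client" scheme would make the submission-machine fs explicit.
      case "client" => SubmissionMachine
      // Anything else (hdfs, s3a, http, ...) is a remote HCFS/URL.
      case _ => Remote
    }
  }
}
```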