skonto commented on a change in pull request #23546: [SPARK-23153][K8s] Support
client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r261327082
##########
File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
##########
@@ -353,10 +371,26 @@ private[spark] class SparkSubmit extends Logging {
}
// Resolve glob path for different resources.
- args.jars = Option(args.jars).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.files = Option(args.files).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.pyFiles = Option(args.pyFiles).map(resolveGlobPaths(_, hadoopConf)).orNull
- args.archives = Option(args.archives).map(resolveGlobPaths(_, hadoopConf)).orNull
+ if (isKubernetesCluster) {
+ // Skip dependencies we will handle at the K8s backend.
Review comment:
Ok, then I can remove the `client://` thing; I didn't want it anyway, it looks
ugly.
@vanzin I see that in SPARK-24736 you changed the assumptions I had thought
were correct: before that change, `spark.files` was passed as-is to the
container side (from what I observed, anyway), so a `file://` URI could point
to a file location inside the container.
I also think all dependencies should be handled by spark-submit; there is
enough logic in there already.
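
For readers following along, here is a minimal sketch of the shape the branch
in the diff above could take. The `+` side of the hunk is truncated, so this is
an illustration under assumptions, not the PR's actual code: `isKubernetesCluster`,
`resolveGlobPaths`, `hadoopConf`, and the `args` fields are taken from the diff
context, and the assumption is that the K8s backend resolves the skipped
resources itself later.

```scala
// Sketch only: the '+' block in the diff above is truncated, so this
// illustrates the idea rather than reproducing the merged code.
if (isKubernetesCluster) {
  // In K8s cluster mode, skip glob resolution here and leave
  // jars/files/pyFiles/archives untouched; the assumption is that the
  // K8s backend resolves and stages these dependencies on its side.
} else {
  args.jars = Option(args.jars).map(resolveGlobPaths(_, hadoopConf)).orNull
  args.files = Option(args.files).map(resolveGlobPaths(_, hadoopConf)).orNull
  args.pyFiles = Option(args.pyFiles).map(resolveGlobPaths(_, hadoopConf)).orNull
  args.archives = Option(args.archives).map(resolveGlobPaths(_, hadoopConf)).orNull
}
```

The `file://` point in the comment is why this split matters: whether a
`file://` URI names a path on the submission machine or a path inside the
container depends on which side of this branch performs the resolution.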