skonto commented on a change in pull request #23546: [SPARK-23153][K8s] Support client dependencies with a Hadoop Compatible File System
URL: https://github.com/apache/spark/pull/23546#discussion_r259157717
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
 ##########
 @@ -318,9 +319,25 @@ private[spark] class SparkSubmit extends Logging {
         args.ivySettingsPath)
 
       if (!StringUtils.isBlank(resolvedMavenCoordinates)) {
-        args.jars = mergeFileLists(args.jars, resolvedMavenCoordinates)
-        if (args.isPython || isInternal(args.primaryResource)) {
-          args.pyFiles = mergeFileLists(args.pyFiles, resolvedMavenCoordinates)
 +        // In K8s client mode, when in the driver, add resolved jars early as we might need
 +        // them at submit time. For example we might use the dependencies for downloading
 +        // files from a Hadoop Compatible fs, e.g. S3. In this case the user might pass:
 +        // --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.6
 +        if (isKubernetesClient &&
 +          args.sparkProperties.getOrElse("spark.kubernetes.submitInDriver", "false").toBoolean) {
 
 Review comment:
   It means it's the first submit step in cluster mode (the second happens in the container). At this step the submit process needs to upload the files, and in practice it runs on the user's machine. This is also where I need to make sure `--packages` works, because the user may need to pass the dependencies required for accessing the remote system. There are a number of places where this has to work, so I make the same packages available both on the user's machine and in the driver: we also want to pass the deps to the driver so it can download the files. Of course the user may have deps that are unrelated to the upload, as usual. The difference between the driver and the user's machine is which logic is triggered and when. On the user's machine I need to add the jars to the classpath early; in the driver I let the pre-existing logic (from before this PR) do the job, since the URIs are Hadoop compatible.
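
   Since the key point here is ordering (the resolved jars must be on the classpath before the remote filesystem is touched), here is a minimal, self-contained Scala sketch of the "add the jars early" idea. `EarlyClasspathSketch`, `MutableClassLoader`, and the `/tmp/ivy` jar paths are illustrative assumptions, not the PR's actual helpers:

   ```scala
   import java.io.File
   import java.net.{URL, URLClassLoader}

   object EarlyClasspathSketch {

     // A URLClassLoader whose protected addURL is exposed, so jars can be
     // appended to the classpath at runtime.
     class MutableClassLoader(urls: Array[URL], parent: ClassLoader)
         extends URLClassLoader(urls, parent) {
       def addJar(url: URL): Unit = addURL(url)
     }

     def main(args: Array[String]): Unit = {
       // What the Maven/Ivy resolution step hands back: a comma-separated
       // list of local jar paths (these paths are made up for the sketch).
       val resolvedMavenCoordinates =
         "/tmp/ivy/jars/hadoop-aws-2.7.6.jar,/tmp/ivy/jars/aws-java-sdk-1.7.4.jar"

       val loader = new MutableClassLoader(Array.empty[URL], getClass.getClassLoader)

       // Add each resolved jar to the classpath *before* any remote-filesystem
       // access, mirroring the branch guarded by isKubernetesClient &&
       // spark.kubernetes.submitInDriver in the diff above.
       resolvedMavenCoordinates.split(",").map(_.trim).filter(_.nonEmpty).foreach { path =>
         loader.addJar(new File(path).toURI.toURL)
       }

       // Later submit-time code (e.g. resolving a FileSystem for an s3a:// URI)
       // should use this loader so it can see the hadoop-aws / aws-sdk classes.
       Thread.currentThread().setContextClassLoader(loader)
     }
   }
   ```

   The design point is purely ordering: the classes from hadoop-aws must be loadable before the filesystem for an `s3a://` URI is first resolved, otherwise the submit-side upload step cannot find an implementation for the scheme.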
