GitHub user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/5580#discussion_r29164733
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -328,6 +328,42 @@ object SparkSubmit {
}
}
+    // In yarn mode for a python app, add pyspark archives to files
+    // that can be distributed with the job
+    if (args.isPython && clusterManager == YARN) {
+      var pyArchives: String = null
+      if (sys.env.contains("PYSPARK_ARCHIVES_PATH")) {
+        pyArchives = sys.env.get("PYSPARK_ARCHIVES_PATH").get
+      } else {
+        if (!sys.env.contains("SPARK_HOME")) {
+          printErrorAndExit("SPARK_HOME does not exist for python application in yarn mode.")
+        }
+        val pythonPath = new ArrayBuffer[String]
+        for (sparkHome <- sys.env.get("SPARK_HOME")) {
+          val pyLibPath = Seq(sparkHome, "python", "lib").mkString(File.separator)
+          val pyArchivesFile = new File(pyLibPath, "pyspark.zip")
+          if (!pyArchivesFile.exists()) {
--- End diff --
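[For context: the diff quote is cut off right where the missing-archive branch begins. A minimal sketch of what such a branch could do, assuming it builds pyspark.zip on the fly from the pyspark sources; zipDirectory and its signature are illustrative assumptions, not code from this PR.]

// Illustrative helper (not from the PR): recursively zip srcDir into destZip,
// storing entry names relative to srcDir. A write into python/lib like this
// is exactly what fails without write permission, per the comment below.
import java.io.{File, FileInputStream, FileOutputStream}
import java.util.zip.{ZipEntry, ZipOutputStream}

def zipDirectory(srcDir: File, destZip: File): Unit = {
  val out = new ZipOutputStream(new FileOutputStream(destZip))
  try {
    def walk(dir: File): Unit = {
      Option(dir.listFiles()).getOrElse(Array.empty[File]).foreach { f =>
        if (f.isDirectory) {
          walk(f)
        } else {
          // Relative entry name, e.g. "pyspark/context.py"
          out.putNextEntry(new ZipEntry(srcDir.toURI.relativize(f.toURI).getPath))
          val in = new FileInputStream(f)
          try {
            val buf = new Array[Byte](8192)
            var n = in.read(buf)
            while (n != -1) { out.write(buf, 0, n); n = in.read(buf) }
          } finally {
            in.close()
          }
          out.closeEntry()
        }
      }
    }
    walk(srcDir)
  } finally {
    out.close()
  }
}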
I'm not sure I follow. If you just upgrade spark.jar, there are no changes
to the python scripts, so you don't need to put up a new pyspark.zip. If there
are changes, then you either need to copy over the new python scripts or put a
new pyspark.zip on there; putting a new pyspark.zip on there seems easier.
Although I guess you need the python scripts there anyway for client mode, so
you probably need both.
In many cases I wouldn't expect a user to have write permission on the
python/lib directory; I would expect writing there to be a privileged
operation. In that case creating the zip would fail.
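
[One way to surface that failure mode with a clear message, rather than an
IOException from the zip write, would be a guard like the following; this is a
sketch against the variables in the diff above, not a change in the PR.]

if (!pyArchivesFile.exists() && !new File(pyLibPath).canWrite()) {
  // Without write permission on python/lib we cannot create pyspark.zip,
  // so fail fast and point the user at the env-var escape hatch.
  printErrorAndExit(s"pyspark.zip is missing and $pyLibPath is not writable; " +
    "set PYSPARK_ARCHIVES_PATH to a prebuilt archive instead.")
}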