advancedxy commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1197555100
##########
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##########
@@ -414,6 +414,9 @@ private[spark] class SparkSubmit extends Logging {
// directory too.
// SPARK-33782: This downloads all the files, jars, archiveFiles
// and pyfiles to the current working directory
+ // SPARK-43540: add current working directory into classpath
+ val workingDirectory = "."
+ childClasspath += workingDirectory
Review Comment:
This doesn't add the current working dir to the executor's classpath, right?
I just checked YARN's behavior: YARN adds `CWD` to both the driver's and
the executor's classpath, and it puts `CWD` before the localized SPARK_CONF
and HADOOP_CONF. See
https://github.com/apache/spark/blob/014685c41e4741f83570d8a2a6a253e48967919a/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L1442
To get similar behavior, I believe it would be easier to leverage
`entrypoint.sh` here when running on K8s.
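A minimal sketch of the idea, assuming a hypothetical snippet inside the K8s `entrypoint.sh` (the variable value below is illustrative, not the actual image layout): prepend the container's working directory to the classpath so it takes precedence over the localized conf directories, mirroring YARN's ordering.

```shell
# Hypothetical entrypoint.sh fragment (illustrative paths, not the real image):
# start from whatever classpath the image already assembled
SPARK_CLASSPATH="/opt/spark/conf:/opt/hadoop/conf"

# prepend the current working directory, as YARN does for driver and executor,
# so files localized into CWD are found before SPARK_CONF / HADOOP_CONF entries
SPARK_CLASSPATH=".:$SPARK_CLASSPATH"

echo "$SPARK_CLASSPATH"
```

Doing this in `entrypoint.sh` would cover both driver and executor pods, which a change confined to `SparkSubmit` on the driver side would not.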
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]