Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/9313#discussion_r43726053
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1611,8 +1611,14 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
   * Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
   * The `path` passed can be either a local file, a file in HDFS (or other Hadoop-supported
   * filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
+  * If addToCurrentThread is true, attempt to add the new class to the current threads class
+  * loader.
   */
  def addJar(path: String) {
+   addJar(path, false)
+ }
+
+ def addJar(path: String, addToCurrentThread: Boolean) {
--- End diff --
Would it be correct to say that in nearly all cases, setting the second
argument to true will result in the jar being added to all of the application's
threads? Because Spark sets the context classloader to a MutableClassLoader
before loading the application's main class, and then all other app threads
will inherit this as the default unless they explicitly change the context
class loader?
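
The inheritance behavior described above can be checked with plain JDK classloaders. Below is a minimal sketch: `java.net.URLClassLoader` stands in for Spark's actual `MutableURLClassLoader`, and the object/method names are hypothetical. A thread created after the context classloader is set inherits that loader from its parent, which is why jars added to it would become visible to later app threads:

```scala
import java.net.{URL, URLClassLoader}

object ContextClassLoaderDemo {
  // Returns true if a thread created after setting the context classloader
  // inherits it from the creating (parent) thread.
  def childInheritsContextLoader(): Boolean = {
    // Stand-in for Spark's MutableURLClassLoader: jars could be appended here.
    val mutableLoader = new URLClassLoader(Array.empty[URL],
      Thread.currentThread().getContextClassLoader)

    // Analogous to Spark setting the context classloader before running
    // the application's main class.
    Thread.currentThread().setContextClassLoader(mutableLoader)

    // A thread started afterwards inherits the parent's context classloader
    // unless it explicitly overrides it.
    var inherited: ClassLoader = null
    val t = new Thread(new Runnable {
      def run(): Unit = { inherited = Thread.currentThread().getContextClassLoader }
    })
    t.start()
    t.join()  // join() gives us a happens-before edge, so the read is safe
    inherited eq mutableLoader
  }

  def main(args: Array[String]): Unit =
    println(childInheritsContextLoader())  // prints "true"
}
```

The caveat in the comment still applies: a thread (or thread pool) that calls `setContextClassLoader` itself, or that was created before Spark installed the mutable loader, will not see the added jar.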