Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15666#discussion_r143018955
--- Diff: python/pyspark/context.py ---
@@ -863,6 +863,21 @@ def addPyFile(self, path):
             import importlib
             importlib.invalidate_caches()

+    def addJar(self, path, addToCurrentClassLoader=False):
+        """
+        Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
+        The `path` passed can be either a local file, a file in HDFS (or other Hadoop-supported
+        filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
+        If addToCurrentClassLoader is true, add the jar to the current threads' classloader.
--- End diff ---
little nit: `addToCurrentClassLoader` -> `` `addToCurrentClassLoader` `` and
`threads' classloader` -> `thread's classloader`.
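
For context, a minimal usage sketch of the method this diff proposes. Note that
`addJar(path, addToCurrentClassLoader=False)` is the PR's proposed PySpark API,
not part of a released SparkContext, and the JAR path below is a hypothetical
placeholder:

from pyspark import SparkContext

sc = SparkContext(appName="addJarExample")

# Ship a JAR to the executors for all tasks submitted after this call.
# "/path/to/deps.jar" is a hypothetical placeholder; per the docstring above,
# HDFS, HTTP, HTTPS, FTP and local:/ URIs would be accepted as well.
sc.addJar("/path/to/deps.jar")

# With the proposed flag, also load the JAR into the current thread's
# classloader on the driver so its classes become visible immediately.
sc.addJar("/path/to/deps.jar", addToCurrentClassLoader=True)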
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]