Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19643#discussion_r149641601
--- Diff: python/pyspark/context.py ---
@@ -860,6 +860,23 @@ def addPyFile(self, path):
             import importlib
             importlib.invalidate_caches()
 
+    def addJar(self, path, addToCurrentClassLoader=False):
+        """
+        Adds a JAR dependency for Spark tasks to be executed in the future.
+        The `path` passed can be either a local file, a file in HDFS (or other Hadoop-supported
+        filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
+
+        If `addToCurrentClassLoader` is true, add the jar to the current thread's class loader
+        in the backing JVM. In general, adding to the current thread's class loader will impact all
+        other application threads unless they have explicitly changed their class loader.
--- End diff ---
@holdenk and @felixcheung, I just added the comments back here. Since this is a developer API, I thought it would be fine to describe a few JVM-related details, but please let me know if you feel we should take them out.