Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19643#discussion_r151839304
--- Diff: python/pyspark/context.py ---
@@ -860,6 +860,23 @@ def addPyFile(self, path):
import importlib
importlib.invalidate_caches()
+ def addJar(self, path, addToCurrentClassLoader=False):
+ """
+ Adds a JAR dependency for Spark tasks to be executed in the future.
+ The `path` passed can be either a local file, a file in HDFS (or other Hadoop-supported
+ filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
+ If `addToCurrentClassLoader` is true, add the jar to the current thread's class loader
+ in the backing JVM. In general, adding to the current thread's class loader will impact
+ all other application threads unless they have explicitly changed their class loader.
--- End diff ---
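
For context, a minimal usage sketch of the API this diff proposes; the jar paths are placeholders, and `addJar` on `SparkContext` only exists if this PR lands:

```python
from pyspark import SparkContext

sc = SparkContext(appName="addJarExample")

# Ship the jar to executors; tasks scheduled after this call can use it.
sc.addJar("/path/to/my-udfs.jar")

# Also add the jar to the current thread's class loader in the driver JVM,
# making its classes visible to code running on that thread (e.g. via py4j).
sc.addJar("hdfs:///libs/my-connector.jar", addToCurrentClassLoader=True)
```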
So we currently use `.. note:: DeveloperApi` to indicate it's a developer
API (see ml/pipeline and friends for an example).
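
For reference, a sketch of how that marker might look on this method, modeled on the `.. note::` directives used elsewhere in pyspark docstrings; the placement of the note here is illustrative, not final wording:

```python
def addJar(self, path, addToCurrentClassLoader=False):
    """
    .. note:: DeveloperApi

    Adds a JAR dependency for Spark tasks to be executed in the future.
    The `path` passed can be either a local file, a file in HDFS (or other
    Hadoop-supported filesystems), an HTTP, HTTPS or FTP URI, or
    local:/path for a file on every worker node.
    """
```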