Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4665#discussion_r25037050
--- Diff: docs/configuration.md ---
@@ -237,6 +258,8 @@ Apart from these, the following properties are also available, and may be useful
(Experimental) Whether to give user-added jars precedence over Spark's own jars
when loading classes in the driver. This feature can be used to mitigate
conflicts between Spark's dependencies and user dependencies. It is currently
an experimental feature.
+
+  <br /><em>Note:</em> setting this with <code>conf.set(...)</code> only works in
+  <code>cluster</code> mode (e.g. YARN deployment). For <code>client</code> mode,
+  driver memory should be configured in <code>conf/spark-defaults.conf</code> or
+  via the run-time settings (see Dynamically Loading Spark Properties).
--- End diff ---
Actually, this one doesn't apply here. Instead, you should just replace it
with:
```
This is used in cluster mode only.
```
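
For context, a minimal sketch of why the cluster-mode restriction matters. The
property name (`spark.driver.userClassPathFirst`) and the app skeleton are my
assumptions, not quoted from this diff: in client mode the driver JVM is this
very process, so by the time `conf.set(...)` runs it is too late to change how
the driver loads classes.
```scala
// Sketch only; assumes the property under discussion is
// spark.driver.userClassPathFirst. In cluster mode the conf is shipped to
// the cluster manager before the driver JVM launches, so the setting takes
// effect. In client mode the driver (this process) has already started,
// so driver-side settings like this one are ignored here.
import org.apache.spark.{SparkConf, SparkContext}

object UserClassPathFirstExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("UserClassPathFirstExample")
      .set("spark.driver.userClassPathFirst", "true") // cluster mode only

    val sc = new SparkContext(conf)
    try {
      // ... application logic ...
    } finally {
      sc.stop()
    }
  }
}
```
In client mode the same setting would instead go in
`conf/spark-defaults.conf` or on the command line, e.g.
`spark-submit --conf spark.driver.userClassPathFirst=true ...`, so that it is
in place before the driver starts.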