Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3409#discussion_r21707097
--- Diff: docs/running-on-yarn.md ---
@@ -139,6 +139,15 @@ Most of the configs are the same for Spark on YARN as for other deployment modes
     The maximum number of threads to use in the application master for launching executor containers.
   </td>
 </tr>
+<tr>
+  <td><code>spark.yarn.am.extraJavaOptions</code></td>
+  <td>(none)</td>
+  <td>
+  A string of extra JVM options to pass to the Application Master in yarn-client mode. For instance, specific
+  system properties. Note that it is complementary to spark.driver.extraJavaOptions, which are only passed to
+  the Application Master in yarn-cluster mode.
--- End diff ---
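
For readers following the thread, here is a minimal sketch of the distinction the new doc text describes, using the programmatic `org.apache.spark.launcher.SparkLauncher` API (shipped in later Spark releases than the one under review); the jar path, main class, and option values are purely illustrative.

```scala
import org.apache.spark.launcher.SparkLauncher

// Sketch only: which of the two settings reaches the YARN Application Master
// depends on the deploy mode chosen at submit time.
val process = new SparkLauncher()
  .setAppResource("/path/to/app.jar")   // hypothetical application jar
  .setMainClass("com.example.MyApp")    // hypothetical main class
  .setMaster("yarn")
  .setDeployMode("client")
  // Client mode: the AM is a separate, lightweight process, so it picks up
  // its JVM options from spark.yarn.am.extraJavaOptions.
  .setConf("spark.yarn.am.extraJavaOptions", "-Dsome.property=am-value")
  // Driver JVM options; in cluster mode the driver runs inside the AM, so
  // these are the options that reach the AM there instead.
  .setConf("spark.driver.extraJavaOptions", "-Dsome.property=driver-value")
  .launch()                             // starts spark-submit as a child process
```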
Let's avoid using `yarn-client` and `yarn-cluster` here. In general I think it's a bad idea to cram the deploy mode into the master string (yes, we do it for `--master`, but we should really deprecate that).
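
A hedged sketch of the phrasing this comment favors, assuming a Spark version that recognizes `spark.submit.deployMode`: the master string then names only the cluster manager, and the deploy mode travels as its own setting.

```scala
import org.apache.spark.SparkConf

// Sketch: keep the cluster manager and the deploy mode as separate settings
// rather than a combined "yarn-client" / "yarn-cluster" master string.
val conf = new SparkConf()
  .setAppName("deploy-mode-example")        // illustrative app name
  .setMaster("yarn")                        // cluster manager only
  .set("spark.submit.deployMode", "client") // "client" or "cluster"
```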