Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8385#discussion_r37719375
--- Diff: docs/running-on-yarn.md ---
@@ -21,32 +21,51 @@ There are two deploy modes that can be used to launch Spark applications on YARN
Unlike in Spark standalone and Mesos mode, in which the master's address
is specified in the `--master` parameter, in YARN mode the ResourceManager's
address is picked up from the Hadoop configuration. Thus, the `--master`
parameter is `yarn-client` or `yarn-cluster`.
To launch a Spark application in `yarn-cluster` mode:
- `$ ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]`
-
+ `$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode yarn-client/yarn-cluster [options] <app jar> [app options]`
--- End diff ---
I think you have to change the explanation in the previous paragraph to
explain the two YARN flag alternatives? That's really the key place.
This example should then only use cluster mode, since that's what it says
it does; it is not runnable as written.
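For illustration, a cluster-mode invocation that would actually run under
the proposed flags looks roughly like the line below (the class name, jar,
and options are the same placeholders used in the docs; `--deploy-mode
client` would be the client-mode counterpart):

`$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode cluster [options] <app jar> [app options]`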