GitHub user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8385#discussion_r40947175
  
    --- Diff: docs/running-on-yarn.md ---
    @@ -16,37 +16,51 @@ containers used by the application use the same configuration. If the configurat
     Java system properties or environment variables not managed by YARN, they should also be set in the
     Spark application's configuration (driver, executors, and the AM when running in client mode).
     
    -There are two deploy modes that can be used to launch Spark applications on YARN. In `yarn-cluster` mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In `yarn-client` mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
    +There are two deploy modes that can be used to launch Spark applications on YARN. In `cluster` mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In `client` mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
     
    -Unlike in Spark standalone and Mesos mode, in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn-client` or `yarn-cluster`.
    -To launch a Spark application in `yarn-cluster` mode:
    +Unlike in Spark standalone and Mesos mode, in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn` and `--deploy-mode` can be `client` or `cluster` to select the YARN deployment mode.
    +To launch a Spark application in YARN in `cluster` mode:
     
    -   `$ ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]`
    -
    +   `$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode cluster [options] <app jar> [app options]`
    +
     For example:
     
         $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    -        --master yarn-cluster \
    +        --master yarn \
    +        --deploy-mode cluster
             --num-executors 3 \
             --driver-memory 4g \
             --executor-memory 2g \
             --executor-cores 1 \
             --queue thequeue \
             lib/spark-examples*.jar \
    -        10
     
    -The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running.  Refer to the "Debugging your Application" section below for how to see driver and executor logs.
    +The above example starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running.  Refer to the "Debugging your Application" section below for how to see driver and executor logs.
    +
    +To launch a Spark application in `client` mode, do the same, but replace `cluster` with `client` in the `--deploy-mode` argument.
    +To run spark-shell:
     
    -To launch a Spark application in `yarn-client` mode, do the same, but replace `yarn-cluster` with `yarn-client`.  To run spark-shell:
    +    $ ./bin/spark-shell --master yarn --deploy-mode client     
     
    -    $ ./bin/spark-shell --master yarn-client
    +For example:
     
    +    $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    +        --master yarn-cluster \
    --- End diff --
    
    This example is still using `yarn-cluster` instead of `--master yarn --deploy-mode cluster`.
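    Presumably the fix just mirrors the earlier cluster-mode example in this diff. A rough sketch of what the corrected snippet would look like (the rest of the command is cut off in the hunk above, so the trailing arguments here are assumed):

        $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
            --master yarn \
            --deploy-mode cluster \
            lib/spark-examples*.jar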

