Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/896#discussion_r13109195
--- Diff: docs/submitting-applications.md ---
@@ -0,0 +1,153 @@
+---
+layout: global
+title: Submitting Applications
+---
+
+The `spark-submit` script in Spark's `bin` directory is used to launch applications on a cluster.
+It can use all of Spark's supported [cluster managers](cluster-overview.html#cluster-manager-types)
+through a uniform interface so you don't have to configure your application specially for each one.
+
+# Bundling Your Application's Dependencies
+If your code depends on other projects, you will need to package them alongside
+your application in order to distribute the code to a Spark cluster. To do this,
+create an assembly jar (or "uber" jar) containing your code and its dependencies. Both
+[sbt](https://github.com/sbt/sbt-assembly) and
+[Maven](http://maven.apache.org/plugins/maven-shade-plugin/)
+have assembly plugins. When creating assembly jars, list Spark and Hadoop
+as `provided` dependencies; these need not be bundled since they are provided by
+the cluster manager at runtime. Once you have an assembled jar you can call the `bin/spark-submit`
+script as shown here while passing your jar.
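+
+As a rough sketch (the main class, master URL, and jar path below are hypothetical and depend
+on your own build), building the assembly with the sbt-assembly plugin and then submitting it
+might look like:
+
+{% highlight bash %}
+# Build the assembly jar; Spark and Hadoop are "provided", so they are not bundled
+sbt assembly
+
+# Submit the resulting jar (the exact path depends on your project name and Scala version)
+./bin/spark-submit \
+  --class com.example.MyApp \
+  --master spark://23.195.26.187:7077 \
+  target/scala-2.10/my-app-assembly-1.0.jar
+{% endhighlight %}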
+
+For Python, you can use the `--py-files` argument of `spark-submit` to add `.py`, `.zip` or `.egg`
+files to be distributed with your application. If you depend on multiple Python files we recommend
+packaging them into a `.zip` or `.egg`.
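+
+For example (the module and archive names here are hypothetical), a simple way to bundle
+several helper modules into one archive that `--py-files` can ship is:
+
+{% highlight bash %}
+# Package local helper modules into a single zip for distribution with the application
+zip -r deps.zip mylib/ utils.py
+{% endhighlight %}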
+
+# Launching Applications with spark-submit
+
+Once a user application is bundled, it can be launched using the `bin/spark-submit` script.
+This script takes care of setting up the classpath with Spark and its dependencies, and can
+work with all of the cluster managers and deploy modes that Spark supports:
+
+{% highlight bash %}
+./bin/spark-submit \
+ --class <main-class> \
+ --master <master-url> \
+ --deploy-mode <deploy-mode> \
+ ... # other options
+ <application-jar> \
+ [application-arguments]
+{% endhighlight %}
+
+Some of the commonly used options are:
+
+* `--class`: The entry point for your application (e.g. `org.apache.spark.examples.SparkPi`)
+* `--master`: The [master URL](#master-urls) for the cluster (e.g. `spark://23.195.26.187:7077`)
+* `--deploy-mode`: Whether to deploy your driver program within the cluster or run it locally as an external client (either `cluster` or `client`)
+* `application-jar`: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an `hdfs://` path or a `file://` path that is present on all nodes.
+* `application-arguments`: Arguments passed to the main method of your main class, if any
+
+For Python applications, simply pass a `.py` file in the place of `<application-jar>` instead of a JAR,
+and add Python `.zip`, `.egg` or `.py` files to the search path with `--py-files`.
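+
+As a minimal sketch (the script, archive, and argument names are hypothetical), submitting a
+Python application with its zipped dependencies might look like:
+
+{% highlight bash %}
+./bin/spark-submit \
+  --master spark://23.195.26.187:7077 \
+  --py-files deps.zip \
+  my_script.py \
+  arg1 arg2
+{% endhighlight %}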
+
+To enumerate all options available to `spark-submit`, run it with `--help`. Here are a few
+examples of common options:
+
+{% highlight bash %}
+# Run application locally on 8 cores
+./bin/spark-submit \
+ --class org.apache.spark.examples.SparkPi
--- End diff --
This needs to have a backslash after it (it doesn't paste correctly into a
shell), and the same in the cases below.