Repository: spark
Updated Branches:
  refs/heads/master 66af8e250 -> 18ab6bd70


SPARK-1307 [DOCS] Don't use term 'standalone' to refer to a Spark Application

HT to Diana, just proposing an implementation of her suggestion, which I rather 
agreed with. Is there a second/third for the motion?

Refer to "self-contained" rather than "standalone" apps to avoid confusion with 
standalone deployment mode. And fix placement of reference to this in MLlib 
docs.

Author: Sean Owen <[email protected]>

Closes #2787 from srowen/SPARK-1307 and squashes the following commits:

b5b82e2 [Sean Owen] Refer to "self-contained" rather than "standalone" apps to 
avoid confusion with standalone deployment mode. And fix placement of reference 
to this in MLlib docs.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/18ab6bd7
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/18ab6bd7
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/18ab6bd7

Branch: refs/heads/master
Commit: 18ab6bd709bb9fcae290ffc43294d13f06670d55
Parents: 66af8e2
Author: Sean Owen <[email protected]>
Authored: Tue Oct 14 21:37:51 2014 -0700
Committer: Xiangrui Meng <[email protected]>
Committed: Tue Oct 14 21:37:51 2014 -0700

----------------------------------------------------------------------
 docs/mllib-clustering.md               | 14 +++++++-------
 docs/mllib-collaborative-filtering.md  | 14 +++++++-------
 docs/mllib-dimensionality-reduction.md | 17 +++++++++--------
 docs/mllib-linear-methods.md           | 20 ++++++++++----------
 docs/quick-start.md                    |  8 ++++----
 5 files changed, 37 insertions(+), 36 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/18ab6bd7/docs/mllib-clustering.md
----------------------------------------------------------------------
diff --git a/docs/mllib-clustering.md b/docs/mllib-clustering.md
index d10bd63..7978e93 100644
--- a/docs/mllib-clustering.md
+++ b/docs/mllib-clustering.md
@@ -69,7 +69,7 @@ println("Within Set Sum of Squared Errors = " + WSSSE)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given below:
 
 {% highlight java %}
@@ -113,12 +113,6 @@ public class KMeansExample {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -153,3 +147,9 @@ print("Within Set Sum of Squared Error = " + str(WSSSE))
 </div>
 
 </div>
+
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
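
The paragraph moved in this hunk tells readers to add *spark-mllib* to their build file as a
dependency. For an sbt-based project, a minimal sketch of that line might look like the following
(the version number is an assumption, not something specified by this commit):

    // build.sbt -- hypothetical sketch; use the Spark version your cluster actually runs
    libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.1.0"

A Maven build would declare the equivalent spark-mllib artifact under the org.apache.spark group instead.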

http://git-wip-us.apache.org/repos/asf/spark/blob/18ab6bd7/docs/mllib-collaborative-filtering.md
----------------------------------------------------------------------
diff --git a/docs/mllib-collaborative-filtering.md b/docs/mllib-collaborative-filtering.md
index d5c539d..2094963 100644
--- a/docs/mllib-collaborative-filtering.md
+++ b/docs/mllib-collaborative-filtering.md
@@ -110,7 +110,7 @@ val model = ALS.trainImplicit(ratings, rank, numIterations, alpha)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given below:
 
 {% highlight java %}
@@ -184,12 +184,6 @@ public class CollaborativeFiltering {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -229,6 +223,12 @@ model = ALS.trainImplicit(ratings, rank, numIterations, alpha = 0.01)
 
 </div>
 
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
+
 ## Tutorial
 
 The [training exercises](https://databricks-training.s3.amazonaws.com/index.html) from the Spark Summit 2014 include a hands-on tutorial for
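
The context above repeats the note that MLlib's methods take Scala RDDs, so Java callers unwrap a
`JavaRDD` by calling `.rdd()` on it. A minimal sketch of that conversion, written in Scala only to
keep it short (the helper name and element type are illustrative, not part of this commit):

    // Hypothetical sketch: expose the underlying Scala RDD that MLlib expects.
    // From Java the equivalent call is simply ratings.rdd().
    import org.apache.spark.api.java.JavaRDD
    import org.apache.spark.mllib.recommendation.Rating
    import org.apache.spark.rdd.RDD

    def unwrap(ratings: JavaRDD[Rating]): RDD[Rating] = ratings.rdd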

http://git-wip-us.apache.org/repos/asf/spark/blob/18ab6bd7/docs/mllib-dimensionality-reduction.md
----------------------------------------------------------------------
diff --git a/docs/mllib-dimensionality-reduction.md b/docs/mllib-dimensionality-reduction.md
index 21cb35b..870fed6 100644
--- a/docs/mllib-dimensionality-reduction.md
+++ b/docs/mllib-dimensionality-reduction.md
@@ -121,9 +121,9 @@ public class SVD {
 The same code applies to `IndexedRowMatrix` if `U` is defined as an
 `IndexedRowMatrix`.
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
 quick-start guide. Be sure to also include *spark-mllib* to your build file as
 a dependency.
 
@@ -200,10 +200,11 @@ public class PCA {
 }
 {% endhighlight %}
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 </div>
+
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.

http://git-wip-us.apache.org/repos/asf/spark/blob/18ab6bd7/docs/mllib-linear-methods.md
----------------------------------------------------------------------
diff --git a/docs/mllib-linear-methods.md b/docs/mllib-linear-methods.md
index d31bec3..bc914a1 100644
--- a/docs/mllib-linear-methods.md
+++ b/docs/mllib-linear-methods.md
@@ -247,7 +247,7 @@ val modelL1 = svmAlg.run(training)
 All of MLlib's methods use Java-friendly types, so you can import and call them there the same
 way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
 Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
 that is equivalent to the provided example in Scala is given below:
 
 {% highlight java %}
@@ -323,9 +323,9 @@ svmAlg.optimizer()
 final SVMModel modelL1 = svmAlg.run(training.rdd());
 {% endhighlight %}
 
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
 quick-start guide. Be sure to also include *spark-mllib* to your build file as
 a dependency.
 </div>
@@ -482,12 +482,6 @@ public class LinearRegression {
   }
 }
 {% endhighlight %}
-
-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
 </div>
 
 <div data-lang="python" markdown="1">
@@ -519,6 +513,12 @@ print("Mean Squared Error = " + str(MSE))
 </div>
 </div>
 
+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
+
 ## Streaming linear regression
 
 When data arrive in a streaming fashion, it is useful to fit regression models online,

http://git-wip-us.apache.org/repos/asf/spark/blob/18ab6bd7/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 23313d8..6236de0 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -8,7 +8,7 @@ title: Quick Start
 
 This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's
 interactive shell (in Python or Scala),
-then show how to write standalone applications in Java, Scala, and Python.
+then show how to write applications in Java, Scala, and Python.
 See the [programming guide](programming-guide.html) for a more complete reference.
 
 To follow along with this guide, first download a packaged release of Spark from the
@@ -215,8 +215,8 @@ a cluster, as described in the [programming guide](programming-guide.html#initia
 </div>
 </div>
 
-# Standalone Applications
-Now say we wanted to write a standalone application using the Spark API. We will walk through a
+# Self-Contained Applications
+Now say we wanted to write a self-contained application using the Spark API. We will walk through a
 simple application in both Scala (with SBT), Java (with Maven), and Python.
 
 <div class="codetabs">
@@ -387,7 +387,7 @@ Lines with a: 46, Lines with b: 23
 </div>
 <div data-lang="python" markdown="1">
 
-Now we will show how to write a standalone application using the Python API (PySpark).
+Now we will show how to write an application using the Python API (PySpark).
 
 As an example, we'll create a simple Spark application, `SimpleApp.py`:
 
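The quick-start diff above renames the "Standalone Applications" section to "Self-Contained
Applications". For reference, here is a minimal Scala sketch of the kind of self-contained
application that section builds; the input path is a placeholder, and the exact program in the
guide may differ:

    /* SimpleApp.scala -- sketch of a self-contained Spark application */
    import org.apache.spark.{SparkConf, SparkContext}

    object SimpleApp {
      def main(args: Array[String]): Unit = {
        val logFile = "YOUR_SPARK_HOME/README.md" // placeholder: any text file works
        val conf = new SparkConf().setAppName("Simple Application")
        val sc = new SparkContext(conf)
        val logData = sc.textFile(logFile, 2).cache()
        val numAs = logData.filter(line => line.contains("a")).count()
        val numBs = logData.filter(line => line.contains("b")).count()
        println(s"Lines with a: $numAs, Lines with b: $numBs")
        sc.stop()
      }
    }

Packaged with sbt and launched through bin/spark-submit, it prints counts like the
"Lines with a: 46, Lines with b: 23" shown in the hunk header above.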

