Repository: spark
Updated Branches:
  refs/heads/branch-1.0 ba871235f -> 976784b1e


Add language tabs and Python version to interactive part of quick-start

This adds some material that was missed in 
https://issues.apache.org/jira/browse/SPARK-1567. I've also updated the doc to 
show how to submit the Python application with spark-submit.
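
For reference, the submission command the updated doc shows for the Python
application (copied from the diff below; YOUR_SPARK_HOME stands in for wherever
Spark is installed) is:

    # Use spark-submit to run your application
    $ YOUR_SPARK_HOME/bin/spark-submit \
      --master local[4] \
      SimpleApp.py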

Author: Matei Zaharia <[email protected]>

Closes #782 from mateiz/spark-1567-extra and squashes the following commits:

6f8f2aa [Matei Zaharia] tweaks
9ed9874 [Matei Zaharia] tweaks
ae67c3e [Matei Zaharia] tweak
b303ba3 [Matei Zaharia] tweak
1433a4d [Matei Zaharia] Add language tabs and Python version to interactive 
part of quick-start guide
(cherry picked from commit f10de042b8e86adf51b70bae2d8589a5cbf02935)

Signed-off-by: Patrick Wendell <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/976784b1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/976784b1
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/976784b1

Branch: refs/heads/branch-1.0
Commit: 976784b1eb76c66a293413c687908752cad4266a
Parents: ba87123
Author: Matei Zaharia <[email protected]>
Authored: Wed May 14 21:45:20 2014 -0700
Committer: Patrick Wendell <[email protected]>
Committed: Wed May 14 21:45:37 2014 -0700

----------------------------------------------------------------------
 docs/python-programming-guide.md |   2 +-
 docs/quick-start.md              | 151 +++++++++++++++++++++++++++++-----
 2 files changed, 133 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/976784b1/docs/python-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/python-programming-guide.md b/docs/python-programming-guide.md
index 2ce2c34..17675ac 100644
--- a/docs/python-programming-guide.md
+++ b/docs/python-programming-guide.md
@@ -121,7 +121,7 @@ IPython also works on a cluster or on multiple cores if you 
set the `MASTER` env
 # Standalone Programs
 
 PySpark can also be used from standalone Python scripts by creating a 
SparkContext in your script and running the script using `bin/spark-submit`.
-The Quick Start guide includes a [complete 
example](quick-start.html#a-standalone-app-in-python) of a standalone Python 
application.
+The Quick Start guide includes a [complete 
example](quick-start.html#standalone-applications) of a standalone Python 
application.
 
 Code dependencies can be deployed by passing .zip or .egg files in the 
`--py-files` option of `spark-submit`:
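
(The example that follows this colon in the doc sits outside this hunk.) As a
rough sketch of how the two points above fit together, with hypothetical file
names throughout, a standalone script creates its own SparkContext and is then
launched with `bin/spark-submit`, shipping any .zip or .egg dependencies via
`--py-files`:

    # my_script.py (hypothetical): creates its own SparkContext and uses the RDD API
    from pyspark import SparkContext

    sc = SparkContext(appName="MyScript")
    print(sc.textFile("README.md").count())
    sc.stop()

    # Launch it, shipping deps.egg (hypothetical) to the workers:
    # ./bin/spark-submit --py-files deps.egg my_script.py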
 

http://git-wip-us.apache.org/repos/asf/spark/blob/976784b1/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index a4d0148..33a0df1 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -6,7 +6,9 @@ title: Quick Start
 * This will become a table of contents (this text will be scraped).
 {:toc}
 
-This tutorial provides a quick introduction to using Spark. We will first 
introduce the API through Spark's interactive Scala shell (don't worry if you 
don't know Scala -- you will not need much for this), then show how to write 
standalone applications in Scala, Java, and Python.
+This tutorial provides a quick introduction to using Spark. We will first 
introduce the API through Spark's
+interactive shell (in Python or Scala),
+then show how to write standalone applications in Java, Scala, and Python.
 See the [programming guide](scala-programming-guide.html) for a more complete 
reference.
 
 To follow along with this guide, first download a packaged release of Spark 
from the
@@ -17,8 +19,12 @@ you can download a package for any version of Hadoop.
 
 ## Basics
 
-Spark's interactive shell provides a simple way to learn the API, as well as a 
powerful tool to analyze datasets interactively.
-Start the shell by running the following in the Spark directory.
+Spark's shell provides a simple way to learn the API, as well as a powerful 
tool to analyze data interactively.
+It is available in either Scala (which runs on the Java VM and is thus a good 
way to use existing Java libraries)
+or Python. Start it by running the following in the Spark directory:
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
 
     ./bin/spark-shell
 
@@ -33,7 +39,7 @@ RDDs have _[actions](scala-programming-guide.html#actions)_, 
which return values
 
 {% highlight scala %}
 scala> textFile.count() // Number of items in this RDD
-res0: Long = 74
+res0: Long = 126
 
 scala> textFile.first() // First item in this RDD
 res1: String = # Apache Spark
@@ -53,12 +59,53 @@ scala> textFile.filter(line => 
line.contains("Spark")).count() // How many lines
 res3: Long = 15
 {% endhighlight %}
 
+</div>
+<div data-lang="python" markdown="1">
+
+    ./bin/pyspark
+
+Spark's primary abstraction is a distributed collection of items called a 
Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop 
InputFormats (such as HDFS files) or by transforming other RDDs. Let's make a 
new RDD from the text of the README file in the Spark source directory:
+
+{% highlight python %}
+>>> textFile = sc.textFile("README.md")
+{% endhighlight %}
+
+RDDs have _[actions](scala-programming-guide.html#actions)_, which return 
values, and _[transformations](scala-programming-guide.html#transformations)_, 
which return pointers to new RDDs. Let's start with a few actions:
+
+{% highlight python %}
+>>> textFile.count() # Number of items in this RDD
+126
+
+>>> textFile.first() # First item in this RDD
+u'# Apache Spark'
+{% endhighlight %}
+
+Now let's use a transformation. We will use the 
[`filter`](scala-programming-guide.html#transformations) transformation to 
return a new RDD with a subset of the items in the file.
+
+{% highlight python %}
+>>> linesWithSpark = textFile.filter(lambda line: "Spark" in line)
+{% endhighlight %}
+
+We can chain together transformations and actions:
+
+{% highlight python %}
+>>> textFile.filter(lambda line: "Spark" in line).count() # How many lines 
contain "Spark"?
+15
+{% endhighlight %}
+
+</div>
+</div>
+
+
 ## More on RDD Operations
 RDD actions and transformations can be used for more complex computations. 
Let's say we want to find the line with the most words:
 
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
 {% highlight scala %}
 scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) 
a else b)
-res4: Long = 16
+res4: Long = 15
 {% endhighlight %}
 
 This first maps a line to an integer value, creating a new RDD. `reduce` is 
called on that RDD to find the largest line count. The arguments to `map` and 
`reduce` are Scala function literals (closures), and can use any language 
feature or Scala/Java library. For example, we can easily call functions 
declared elsewhere. We'll use the `Math.max()` function to make this code 
easier to understand:
@@ -68,26 +115,69 @@ scala> import java.lang.Math
 import java.lang.Math
 
 scala> textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, 
b))
-res5: Int = 16
+res5: Int = 15
 {% endhighlight %}
 
 One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can 
implement MapReduce flows easily:
 
 {% highlight scala %}
 scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => 
(word, 1)).reduceByKey((a, b) => a + b)
-wordCounts: spark.RDD[(java.lang.String, Int)] = 
spark.ShuffledAggregatedRDD@71f027b8
+wordCounts: spark.RDD[(String, Int)] = spark.ShuffledAggregatedRDD@71f027b8
 {% endhighlight %}
 
 Here, we combined the 
[`flatMap`](scala-programming-guide.html#transformations), 
[`map`](scala-programming-guide.html#transformations) and 
[`reduceByKey`](scala-programming-guide.html#transformations) transformations 
to compute the per-word counts in the file as an RDD of (String, Int) pairs. To 
collect the word counts in our shell, we can use the 
[`collect`](scala-programming-guide.html#actions) action:
 
 {% highlight scala %}
 scala> wordCounts.collect()
-res6: Array[(java.lang.String, Int)] = Array((need,2), ("",43), (Extra,3), 
(using,1), (passed,1), (etc.,1), (its,1), (`/usr/local/lib/libmesos.so`,1), 
(`SCALA_HOME`,1), (option,1), (these,1), (#,1), (`PATH`,,2), (200,1), (To,3),...
+res6: Array[(String, Int)] = Array((means,1), (under,2), (this,3), 
(Because,1), (Python,2), (agree,1), (cluster.,1), ...)
 {% endhighlight %}
 
+</div>
+<div data-lang="python" markdown="1">
+
+{% highlight python %}
+>>> textFile.map(lambda line: len(line.split())).reduce(lambda a, b: a if (a > 
b) else b)
+15
+{% endhighlight %}
+
+This first maps a line to an integer value, creating a new RDD. `reduce` is 
called on that RDD to find the largest line count. The arguments to `map` and 
`reduce` are Python [anonymous functions 
(lambdas)](https://docs.python.org/2/reference/expressions.html#lambda),
+but we can also pass any top-level Python function we want.
+For example, we'll define a `max` function to make this code easier to 
understand:
+
+{% highlight python %}
+>>> def max(a, b):
+...     if a > b:
+...         return a
+...     else:
+...         return b
+...
+
+>>> textFile.map(lambda line: len(line.split())).reduce(max)
+15
+{% endhighlight %}
+
+One common data flow pattern is MapReduce, as popularized by Hadoop. Spark can 
implement MapReduce flows easily:
+
+{% highlight python %}
+>>> wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: 
(word, 1)).reduceByKey(lambda a, b: a+b)
+{% endhighlight %}
+
+Here, we combined the 
[`flatMap`](scala-programming-guide.html#transformations), 
[`map`](scala-programming-guide.html#transformations) and 
[`reduceByKey`](scala-programming-guide.html#transformations) transformations 
to compute the per-word counts in the file as an RDD of (string, int) pairs. To 
collect the word counts in our shell, we can use the 
[`collect`](scala-programming-guide.html#actions) action:
+
+{% highlight python %}
+>>> wordCounts.collect()
+[(u'and', 9), (u'A', 1), (u'webpage', 1), (u'README', 1), (u'Note', 1), 
(u'"local"', 1), (u'variable', 1), ...]
+{% endhighlight %}
+
+</div>
+</div>
+
 ## Caching
 Spark also supports pulling data sets into a cluster-wide in-memory cache. 
This is very useful when data is accessed repeatedly, such as when querying a 
small "hot" dataset or when running an iterative algorithm like PageRank. As a 
simple example, let's mark our `linesWithSpark` dataset to be cached:
 
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
 {% highlight scala %}
 scala> linesWithSpark.cache()
 res7: spark.RDD[String] = spark.FilteredRDD@17e51082
@@ -99,12 +189,33 @@ scala> linesWithSpark.count()
 res9: Long = 15
 {% endhighlight %}
 
-It may seem silly to use Spark to explore and cache a 30-line text file. The 
interesting part is
+It may seem silly to use Spark to explore and cache a 100-line text file. The 
interesting part is
 that these same functions can be used on very large data sets, even when they 
are striped across
 tens or hundreds of nodes. You can also do this interactively by connecting 
`bin/spark-shell` to
 a cluster, as described in the [programming 
guide](scala-programming-guide.html#initializing-spark).
 
-# A Standalone Application
+</div>
+<div data-lang="python" markdown="1">
+
+{% highlight python %}
+>>> linesWithSpark.cache()
+
+>>> linesWithSpark.count()
+15
+
+>>> linesWithSpark.count()
+15
+{% endhighlight %}
+
+It may seem silly to use Spark to explore and cache a 100-line text file. The 
interesting part is
+that these same functions can be used on very large data sets, even when they 
are striped across
+tens or hundreds of nodes. You can also do this interactively by connecting 
`bin/pyspark` to
+a cluster, as described in the [programming 
guide](scala-programming-guide.html#initializing-spark).
+
+</div>
+</div>
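
A minimal sketch of the point above about connecting the shell to a cluster,
assuming the MASTER environment variable approach described in the programming
guide (spark://IP:PORT is a placeholder for your cluster's master URL):

    # Run the Python shell against an existing cluster rather than locally
    MASTER=spark://IP:PORT ./bin/pyspark

    # The Scala shell connects the same way
    MASTER=spark://IP:PORT ./bin/spark-shell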
+
+# Standalone Applications
 Now say we wanted to write a standalone application using the Spark API. We 
will walk through a simple application in Scala (with SBT), Java (with Maven), 
and Python.
 
@@ -115,7 +226,7 @@ We'll create a very simple Spark application in Scala. So 
simple, in fact, that
 named `SimpleApp.scala`:
 
 {% highlight scala %}
-/*** SimpleApp.scala ***/
+/* SimpleApp.scala */
 import org.apache.spark.SparkContext
 import org.apache.spark.SparkContext._
 import org.apache.spark.SparkConf
@@ -194,7 +305,7 @@ This example will use Maven to compile an application jar, 
but any similar build
 We'll create a very simple Spark application, `SimpleApp.java`:
 
 {% highlight java %}
-/*** SimpleApp.java ***/
+/* SimpleApp.java */
 import org.apache.spark.api.java.*;
 import org.apache.spark.SparkConf;
 import org.apache.spark.api.java.function.Function;
@@ -309,16 +420,18 @@ Note that you'll need to replace YOUR_SPARK_HOME with the 
location where Spark i
 As with the Scala and Java examples, we use a SparkContext to create RDDs.
 We can pass Python functions to Spark, which are automatically serialized 
along with any variables
 that they reference.
-For applications that use custom classes or third-party libraries, we can add 
those code
-dependencies to SparkContext to ensure that they will be available on remote 
machines; this is
-described in more detail in the [Python programming 
guide](python-programming-guide.html).
+For applications that use custom classes or third-party libraries, we can also 
add code
+dependencies to `spark-submit` through its `--py-files` argument by packaging 
them into a
+.zip file (see `spark-submit --help` for details).
 `SimpleApp` is simple enough that we do not need to specify any code 
dependencies.
 
-We can run this application using the `bin/pyspark` script:
+We can run this application using the `bin/spark-submit` script:
 
 {% highlight python %}
-$ cd $SPARK_HOME
-$ ./bin/pyspark SimpleApp.py
+# Use spark-submit to run your application
+$ YOUR_SPARK_HOME/bin/spark-submit \
+  --master local[4] \
+  SimpleApp.py
 ...
 Lines with a: 46, Lines with b: 23
 {% endhighlight python %}
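
The SimpleApp.py submitted above is defined earlier in quick-start.md, outside
this hunk; judging from the output shown, it counts the lines containing "a" and
"b" in the Spark README. A rough sketch of such a script (an approximation, not
the exact listing):

    # SimpleApp.py (approximate sketch)
    from pyspark import SparkContext

    logFile = "YOUR_SPARK_HOME/README.md"  # README.md under your Spark install
    sc = SparkContext(appName="Simple App")
    logData = sc.textFile(logFile).cache()

    numAs = logData.filter(lambda s: "a" in s).count()
    numBs = logData.filter(lambda s: "b" in s).count()

    print("Lines with a: %i, Lines with b: %i" % (numAs, numBs))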
@@ -326,7 +439,7 @@ Lines with a: 46, Lines with b: 23
 </div>
 </div>
 
-# Where to go from here
+# Where to Go from Here
 Congratulations on running your first Spark application!
 
 * For an in-depth overview of the API, see the "Programming Guides" menu section.
