Repository: spark
Updated Branches:
  refs/heads/branch-1.4 2cce6bfea -> 8567d29ef
[SPARK-7704] Updating Programming Guides per SPARK-4397

The change in SPARK-4397 lets the compiler find the implicit objects in SparkContext automatically, so we no longer need to import o.a.s.SparkContext._ explicitly and can remove some statements about "implicit conversions" from the latest Programming Guides (1.3.0 and higher).

Author: Dice <poleon...@gmail.com>

Closes #6234 from daisukebe/patch-1 and squashes the following commits:

b77ecd9 [Dice] fix a typo
45dfcd3 [Dice] rewording per Sean's advice
a094bcf [Dice] Adding a note for users on any previous releases
a29be5f [Dice] Updating Programming Guides per SPARK-4397

(cherry picked from commit 32fa611b19c6b95d4563be631c5a8ff0cdf3438f)
Signed-off-by: Sean Owen <so...@cloudera.com>

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8567d29e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8567d29e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8567d29e

Branch: refs/heads/branch-1.4
Commit: 8567d29ef03f49f8d3d18b8c858cca3dd7dfeb04
Parents: 2cce6bf
Author: Dice <poleon...@gmail.com>
Authored: Tue May 19 18:12:05 2015 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue May 19 18:14:47 2015 +0100

----------------------------------------------------------------------
 docs/programming-guide.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/8567d29e/docs/programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 0c27376..07a4d29 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -41,14 +41,15 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
 artifactId = hadoop-client
 version = <your-hdfs-version>
-Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
+Finally, you need to import some Spark classes into your program. Add the following lines:

 {% highlight scala %}
 import org.apache.spark.SparkContext
-import org.apache.spark.SparkContext._
 import org.apache.spark.SparkConf
 {% endhighlight %}

+(Before Spark 1.3.0, you need to explicitly `import org.apache.spark.SparkContext._` to enable essential implicit conversions.)
+
 </div>

 <div data-lang="java" markdown="1">

@@ -821,11 +822,9 @@ by a key. In Scala, these operations are automatically available on RDDs containing
 [Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) objects
-(the built-in tuples in the language, created by simply writing `(a, b)`), as long as you
-import `org.apache.spark.SparkContext._` in your program to enable Spark's implicit
-conversions. The key-value pair operations are available in the
+(the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the
 [PairRDDFunctions](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) class,
-which automatically wraps around an RDD of tuples if you import the conversions.
+which automatically wraps around an RDD of tuples.

 For example, the following code uses the `reduceByKey` operation on key-value pairs to count
 how many times each line of text occurs in a file:

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
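The mechanism the patch describes (an implicit conversion that makes key-value methods like `reduceByKey` appear on collections of `Tuple2`, the way `PairRDDFunctions` wraps an RDD of tuples) can be sketched in plain Scala. This is an illustrative sketch of the pattern only, not Spark's implementation: the names `PairFunctions` and `wordCount` are hypothetical, and the real `PairRDDFunctions` implicits live in the `RDD` companion object since SPARK-4397, which is why the explicit `import org.apache.spark.SparkContext._` became unnecessary in 1.3.0.

```scala
// Sketch of the implicit-conversion pattern behind PairRDDFunctions.
// PairFunctions is a hypothetical stand-in, operating on a plain Seq
// instead of an RDD, so the example runs without a Spark cluster.
object ImplicitPairDemo {

  // Implicit class wrapping any Seq of (K, V) pairs, adding a
  // reduceByKey-style operation -- analogous to how PairRDDFunctions
  // wraps RDD[(K, V)].
  implicit class PairFunctions[K, V](self: Seq[(K, V)]) {
    def reduceByKey(f: (V, V) => V): Map[K, V] =
      self.groupBy(_._1).map { case (k, pairs) =>
        k -> pairs.map(_._2).reduce(f)
      }
  }

  // Counts occurrences of each line, mirroring the guide's
  // reduceByKey word-count example.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines.map(line => (line, 1)).reduceByKey(_ + _)

  def main(args: Array[String]): Unit = {
    // Because the implicit class is in scope, plain tuples "just work":
    println(wordCount(Seq("apple", "banana", "apple")))
  }
}
```

Because the implicit is defined in (or reachable from) scope, the compiler inserts the wrapper automatically at the call site; moving such implicits into a companion object, as SPARK-4397 did, makes them visible without any explicit import.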