Updated Branches:
  refs/heads/branch-0.9 5f63f32b7 -> f3cba2d81

Merge pull request #535 from sslavic/patch-2. Closes #535.

Fixed typo in scaladoc

Author: Stevo Slavić <ssla...@gmail.com>

== Merge branch commits ==

commit 0a77f789e281930f4168543cc0d3b3ffbf5b3764
Author: Stevo Slavić <ssla...@gmail.com>
Date:   Tue Feb 4 15:30:27 2014 +0100

    Fixed typo in scaladoc

(cherry picked from commit 0c05cd374dac309b5444980f10f8dcb820c752c2)
Signed-off-by: Reynold Xin <r...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/f3cba2d8
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/f3cba2d8
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/f3cba2d8

Branch: refs/heads/branch-0.9
Commit: f3cba2d8190416712dc2b8d09b8fccff951aa464
Parents: 5f63f32
Author: Stevo Slavić <ssla...@gmail.com>
Authored: Tue Feb 4 09:45:46 2014 -0800
Committer: Reynold Xin <r...@apache.org>
Committed: Tue Feb 4 09:46:00 2014 -0800

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/Partitioner.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/f3cba2d8/core/src/main/scala/org/apache/spark/Partitioner.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/Partitioner.scala b/core/src/main/scala/org/apache/spark/Partitioner.scala
index 3081f92..cfba43d 100644
--- a/core/src/main/scala/org/apache/spark/Partitioner.scala
+++ b/core/src/main/scala/org/apache/spark/Partitioner.scala
@@ -41,7 +41,7 @@ object Partitioner {
    * spark.default.parallelism is set, then we'll use the value from SparkContext
    * defaultParallelism, otherwise we'll use the max number of upstream partitions.
    *
-   * Unless spark.default.parallelism is set, He number of partitions will be the
+   * Unless spark.default.parallelism is set, the number of partitions will be the
    * same as the number of partitions in the largest upstream RDD, as this should
    * be least likely to cause out-of-memory errors.
    *

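For reference, the rule this scaladoc describes can be restated as a small stand-alone sketch. The object DefaultPartitionCount and its choose helper below are illustrative stand-ins, not Spark's actual API; the real logic lives in org.apache.spark.Partitioner.defaultPartitioner, which also reuses any partitioner already set on an upstream RDD before falling back to this rule.

// A stand-alone sketch of the partition-count rule documented above.
// `choose` and its parameters are hypothetical names for illustration,
// not Spark's API.
object DefaultPartitionCount {

  // Number of partitions the default partitioner would use, given the
  // Spark configuration and the partition counts of the upstream RDDs.
  def choose(conf: Map[String, String], upstreamPartitionCounts: Seq[Int]): Int =
    conf.get("spark.default.parallelism") match {
      // spark.default.parallelism is set: use that value (what
      // SparkContext defaultParallelism would then report).
      case Some(n) => n.toInt
      // Otherwise match the largest upstream RDD, as this should be
      // least likely to cause out-of-memory errors.
      case None => upstreamPartitionCounts.max
    }

  def main(args: Array[String]): Unit = {
    println(choose(Map.empty, Seq(4, 16, 8)))                         // prints 16
    println(choose(Map("spark.default.parallelism" -> "32"), Seq(4))) // prints 32
  }
}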