Repository: spark
Updated Branches:
  refs/heads/branch-2.0 9ff158b81 -> 3ca0dc007


[SPARK-17567][DOCS] Use valid url to Spark RDD paper

https://issues.apache.org/jira/browse/SPARK-17567

## What changes were proposed in this pull request?

The documentation (http://spark.apache.org/docs/latest/api/scala/#org.apache.spark.rdd.RDD) contains a broken link to the Spark paper (http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf).

I found what appears to be the same paper elsewhere (https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf). Ideally the paper would be uploaded to, and linked from, Apache-controlled storage so the link cannot break again.

## How was this patch tested?

Tested manually on a local laptop.

Author: Xin Ren <iamsh...@126.com>

Closes #15121 from keypointt/SPARK-17567.

(cherry picked from commit f15d41be3ce7569736ccbf2ffe1bec265865f55d)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3ca0dc00
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3ca0dc00
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3ca0dc00

Branch: refs/heads/branch-2.0
Commit: 3ca0dc00786df1d529d55e297aaf23e1e1e07999
Parents: 9ff158b
Author: Xin Ren <iamsh...@126.com>
Authored: Sat Sep 17 12:30:25 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Sat Sep 17 12:30:36 2016 +0100

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/rdd/RDD.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3ca0dc00/core/src/main/scala/org/apache/spark/rdd/RDD.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/rdd/RDD.scala b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
index 2ee13dc..34d32aa 100644
--- a/core/src/main/scala/org/apache/spark/rdd/RDD.scala
+++ b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
@@ -70,7 +70,7 @@ import org.apache.spark.util.random.{BernoulliCellSampler, BernoulliSampler, Poi
  * All of the scheduling and execution in Spark is done based on these methods, allowing each RDD
  * to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for
  * reading data from a new storage system) by overriding these functions. Please refer to the
- * [[http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf Spark paper]] for more details
+ * [[http://people.csail.mit.edu/matei/papers/2012/nsdi_spark.pdf Spark paper]] for more details
  * on RDD internals.
  */
 abstract class RDD[T: ClassTag](
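
For context, the Scaladoc being edited notes that scheduling and execution are driven by a few abstract methods, and that users can implement custom RDDs by overriding them. A minimal sketch of such a custom RDD follows; the `RangeRDD` and `RangePartition` names are illustrative only and not part of this patch:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Illustrative partition type: each partition covers the range [start, end).
private class RangePartition(override val index: Int, val start: Int, val end: Int)
  extends Partition

// A minimal custom RDD that produces the integers [0, n) split across
// numSlices partitions, implemented by overriding getPartitions and compute
// as the Scaladoc describes. Nil means this RDD has no parent dependencies.
class RangeRDD(sc: SparkContext, n: Int, numSlices: Int)
  extends RDD[Int](sc, Nil) {

  // Describe how the data is divided into partitions.
  override protected def getPartitions: Array[Partition] =
    Array.tabulate[Partition](numSlices) { i =>
      new RangePartition(i, i * n / numSlices, (i + 1) * n / numSlices)
    }

  // Produce the elements of one partition; called by the scheduler per task.
  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[RangePartition]
    (p.start until p.end).iterator
  }
}
```

Constructing it as `new RangeRDD(sc, 100, 4)` and calling an action such as `sum()` would exercise both overrides.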

