Repository: spark
Updated Branches:
  refs/heads/master 69cb04969 -> f15d41be3


[SPARK-17567][DOCS] Use valid url to Spark RDD paper

https://issues.apache.org/jira/browse/SPARK-17567

## What changes were proposed in this pull request?

The documentation (http://spark.apache.org/docs/latest/api/scala/#org.apache.spark.rdd.RDD) contains a broken link to the Spark paper (http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf).

I found the paper elsewhere (https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf) and believe it is the same one. Ideally it would be uploaded to, and linked from, Apache-controlled storage so the link cannot break again.

## How was this patch tested?

Tested manually on a local laptop.

Author: Xin Ren <iamsh...@126.com>

Closes #15121 from keypointt/SPARK-17567.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f15d41be
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f15d41be
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f15d41be

Branch: refs/heads/master
Commit: f15d41be3ce7569736ccbf2ffe1bec265865f55d
Parents: 69cb049
Author: Xin Ren <iamsh...@126.com>
Authored: Sat Sep 17 12:30:25 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Sat Sep 17 12:30:25 2016 +0100

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/rdd/RDD.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/f15d41be/core/src/main/scala/org/apache/spark/rdd/RDD.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/rdd/RDD.scala b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
index 10b5f82..6dc334c 100644
--- a/core/src/main/scala/org/apache/spark/rdd/RDD.scala
+++ b/core/src/main/scala/org/apache/spark/rdd/RDD.scala
@@ -70,7 +70,7 @@ import org.apache.spark.util.random.{BernoulliCellSampler, BernoulliSampler, Poi
  * All of the scheduling and execution in Spark is done based on these methods, allowing each RDD
  * to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for
  * reading data from a new storage system) by overriding these functions. Please refer to the
- * [[http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf Spark paper]] for more details
+ * [[http://people.csail.mit.edu/matei/papers/2012/nsdi_spark.pdf Spark paper]] for more details
  * on RDD internals.
  */
 abstract class RDD[T: ClassTag](
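
For context, the doc comment touched by this diff notes that users can implement custom RDDs by overriding these functions (getPartitions and compute). A minimal sketch of what that looks like, assuming a live SparkContext; RangeRDD and RangePartition are hypothetical illustration names, not part of this commit:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// One partition covering the half-open range [start, end).
class RangePartition(val index: Int, val start: Long, val end: Long)
  extends Partition

// A custom RDD producing the numbers [0, n) across numSlices partitions,
// built by overriding getPartitions and compute as the doc comment describes.
class RangeRDD(sc: SparkContext, n: Long, numSlices: Int)
  extends RDD[Long](sc, Nil) {

  // Split [0, n) into numSlices contiguous partitions (driver side).
  override protected def getPartitions: Array[Partition] =
    Array.tabulate[Partition](numSlices) { i =>
      new RangePartition(i, i * n / numSlices, (i + 1) * n / numSlices)
    }

  // Produce the elements of one partition (runs on an executor).
  override def compute(split: Partition, context: TaskContext): Iterator[Long] = {
    val p = split.asInstanceOf[RangePartition]
    (p.start until p.end).iterator
  }
}

// Usage (hypothetical), given a SparkContext sc:
//   new RangeRDD(sc, 100L, 4).sum()  // 4950.0
```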

