Repository: spark

Updated Branches:
  refs/heads/master 7d0a3ef4c -> 957558235
[DOCS] Fix unreachable links in the document

## What changes were proposed in this pull request?

Recently, I found two unreachable links in the document and fixed them.
Because these are small documentation-only changes, I haven't filed a JIRA issue, but please let me know if you think one is needed.

## How was this patch tested?

Tested manually.

Author: Kousuke Saruta <[email protected]>

Closes #19195 from sarutak/fix-unreachable-link.

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/95755823
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/95755823
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/95755823

Branch: refs/heads/master
Commit: 957558235b7537c706c6ab4779655aa57838ebac
Parents: 7d0a3ef
Author: Kousuke Saruta <[email protected]>
Authored: Tue Sep 12 15:07:04 2017 +0100
Committer: Sean Owen <[email protected]>
Committed: Tue Sep 12 15:07:04 2017 +0100

----------------------------------------------------------------------
 docs/building-spark.md        | 2 +-
 docs/rdd-programming-guide.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/95755823/docs/building-spark.md
----------------------------------------------------------------------
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 69d8302..67a2ce7 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -111,7 +111,7 @@ should run continuous compilation (i.e. wait for changes). However, this has not
 extensively. A couple of gotchas to note:
 
 * it only scans the paths `src/main` and `src/test` (see
-[docs](http://scala-tools.org/mvnsites/maven-scala-plugin/usage_cc.html)), so it will only work
+[docs](http://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
 from within certain submodules that have that structure.
 
 * you'll typically need to run `mvn install` from the project root for compilation within


http://git-wip-us.apache.org/repos/asf/spark/blob/95755823/docs/rdd-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index 2602598..29af159 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -604,7 +604,7 @@ before the `reduce`, which would cause `lineLengths` to be saved in memory after
 
 Spark's API relies heavily on passing functions in the driver program to run on the cluster.
 There are two recommended ways to do this:
 
-* [Anonymous function syntax](http://docs.scala-lang.org/tutorials/tour/anonymous-function-syntax.html),
+* [Anonymous function syntax](http://docs.scala-lang.org/tour/basics.html#functions),
   which can be used for short pieces of code.
 * Static methods in a global singleton object. For example, you can define `object MyFunctions` and then
   pass `MyFunctions.func1`, as follows:

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
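For context, the second patched passage describes the two recommended ways of passing functions to Spark's API. A minimal sketch of both styles, using plain Scala collections in place of an RDD so it runs without a SparkContext (`lines` and `MyFunctions.func1` are illustrative names, not from the patched doc's code sample):

```scala
// A global singleton object holding a static method,
// the second recommended way of passing a function.
object MyFunctions {
  def func1(s: String): Int = s.length
}

val lines = Seq("hello", "spark")

// 1. Anonymous function syntax, suited to short pieces of code.
val lengthsAnon = lines.map(s => s.length)

// 2. Passing the method of a global singleton object.
val lengthsNamed = lines.map(MyFunctions.func1)
```

With a real Spark RDD, `lines.map(...)` would look identical; the difference is only that the function is serialized and shipped to executors, which is why the doc recommends these two self-contained forms.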
