Repository: spark
Updated Branches:
  refs/heads/branch-2.0 bde94cd71 -> 38378f59f


[SPARK-12370][DOCUMENTATION] Documentation should link to examples …

## What changes were proposed in this pull request?

When documentation is built, it should reference examples from the same build.
There are times when the docs have links that point to files at the GitHub head,
which may not be valid for the current release. Changed the URLs to point to the
right tag in git using ```SPARK_VERSION_SHORT```.

…from its own release version] [Streaming programming guide]
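
As a sketch of the pattern this change applies across the docs (the link below
is one of the links touched in this diff; "v2.0.0" is shown only as an
illustrative rendering of the tag, not taken from the commit):

    Before - always points at the GitHub master branch:
    [CustomReceiver.scala](https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/CustomReceiver.scala)

    After - resolves to the release tag of the docs being built (e.g. v2.0.0):
    [CustomReceiver.scala]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/CustomReceiver.scala)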

Author: Jagadeesan <[email protected]>

Closes #14596 from jagadeesanas2/SPARK-12370.

(cherry picked from commit e46cb78b3b9fd04a50b5ae50f360db612d656a48)
Signed-off-by: Sean Owen <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/38378f59
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/38378f59
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/38378f59

Branch: refs/heads/branch-2.0
Commit: 38378f59f2c91a6f07366aa2013522c334066c69
Parents: bde94cd
Author: Jagadeesan <[email protected]>
Authored: Sat Aug 13 11:25:03 2016 +0100
Committer: Sean Owen <[email protected]>
Committed: Sat Aug 13 11:25:14 2016 +0100

----------------------------------------------------------------------
 docs/ml-advanced.md                            |  4 ++--
 docs/streaming-custom-receivers.md             |  4 ++--
 docs/streaming-flume-integration.md            |  2 +-
 docs/streaming-kafka-0-8-integration.md        | 12 +++++------
 docs/streaming-programming-guide.md            | 22 ++++++++++-----------
 docs/structured-streaming-programming-guide.md | 12 +++++------
 6 files changed, 28 insertions(+), 28 deletions(-)
----------------------------------------------------------------------
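
For context, both Liquid variables used in the new links come from the Jekyll
site config in docs/_config.yml. A minimal sketch of the relevant entries
(values shown are illustrative for branch-2.0, not copied from this commit):

    # docs/_config.yml (excerpt; version values are illustrative)
    SPARK_VERSION: 2.0.1-SNAPSHOT
    SPARK_VERSION_SHORT: 2.0.1
    SPARK_GITHUB_URL: https://github.com/apache/spark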


http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/ml-advanced.md
----------------------------------------------------------------------
diff --git a/docs/ml-advanced.md b/docs/ml-advanced.md
index f5804fd..12a03d3 100644
--- a/docs/ml-advanced.md
+++ b/docs/ml-advanced.md
@@ -49,7 +49,7 @@ MLlib L-BFGS solver calls the corresponding implementation in [breeze](https://g
 
 ## Normal equation solver for weighted least squares
 
-MLlib implements normal equation solver for [weighted least squares](https://en.wikipedia.org/wiki/Least_squares#Weighted_least_squares) by [WeightedLeastSquares](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala).
+MLlib implements normal equation solver for [weighted least squares](https://en.wikipedia.org/wiki/Least_squares#Weighted_least_squares) by [WeightedLeastSquares]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala).
 
 Given $n$ weighted observations $(w_i, a_i, b_i)$:
 
@@ -73,7 +73,7 @@ In order to make the normal equation approach efficient, WeightedLeastSquares re
 
 ## Iteratively reweighted least squares (IRLS)
 
-MLlib implements [iteratively reweighted least squares (IRLS)](https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) by [IterativelyReweightedLeastSquares](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/optim/IterativelyReweightedLeastSquares.scala).
+MLlib implements [iteratively reweighted least squares (IRLS)](https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) by [IterativelyReweightedLeastSquares]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/mllib/src/main/scala/org/apache/spark/ml/optim/IterativelyReweightedLeastSquares.scala).
 It can be used to find the maximum likelihood estimates of a generalized linear model (GLM), find M-estimator in robust regression and other optimization problems.
 Refer to [Iteratively Reweighted Least Squares for Maximum Likelihood Estimation, and some Robust and Resistant Alternatives](http://www.jstor.org/stable/2345503) for more information.
 

http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/streaming-custom-receivers.md
----------------------------------------------------------------------
diff --git a/docs/streaming-custom-receivers.md b/docs/streaming-custom-receivers.md
index 479140f..f52bf34 100644
--- a/docs/streaming-custom-receivers.md
+++ b/docs/streaming-custom-receivers.md
@@ -181,7 +181,7 @@ val words = lines.flatMap(_.split(" "))
 ...
 {% endhighlight %}
 
-The full source code is in the example [CustomReceiver.scala](https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/CustomReceiver.scala).
+The full source code is in the example [CustomReceiver.scala]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/CustomReceiver.scala).
 
 </div>
 <div data-lang="java" markdown="1">
@@ -193,7 +193,7 @@ JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>()
 ...
 {% endhighlight %}
 
-The full source code is in the example [JavaCustomReceiver.java](https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaCustomReceiver.java).
+The full source code is in the example [JavaCustomReceiver.java]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaCustomReceiver.java).
 
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/streaming-flume-integration.md
----------------------------------------------------------------------
diff --git a/docs/streaming-flume-integration.md b/docs/streaming-flume-integration.md
index 8eeeee7..767e1f9 100644
--- a/docs/streaming-flume-integration.md
+++ b/docs/streaming-flume-integration.md
@@ -63,7 +63,7 @@ configuring Flume agents.
 
        By default, the Python API will decode Flume event body as UTF8 encoded strings. You can specify your custom decoding function to decode the body byte arrays in Flume events to any arbitrary data type.
        See the [API docs](api/python/pyspark.streaming.html#pyspark.streaming.flume.FlumeUtils)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/flume_wordcount.py).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/flume_wordcount.py).
        </div>
        </div>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/streaming-kafka-0-8-integration.md
----------------------------------------------------------------------
diff --git a/docs/streaming-kafka-0-8-integration.md b/docs/streaming-kafka-0-8-integration.md
index da4a845..f8f7b95 100644
--- a/docs/streaming-kafka-0-8-integration.md
+++ b/docs/streaming-kafka-0-8-integration.md
@@ -29,7 +29,7 @@ Next, we discuss how to use this approach in your streaming application.
             [ZK quorum], [consumer group id], [per-topic number of Kafka partitions to consume])
 
     You can also specify the key and value classes and their corresponding decoder classes using variations of `createStream`. See the [API docs](api/scala/index.html#org.apache.spark.streaming.kafka.KafkaUtils$)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala).
        </div>
        <div data-lang="java" markdown="1">
                import org.apache.spark.streaming.kafka.*;
@@ -39,7 +39,7 @@ Next, we discuss how to use this approach in your streaming application.
            [ZK quorum], [consumer group id], [per-topic number of Kafka partitions to consume]);
 
    You can also specify the key and value classes and their corresponding decoder classes using variations of `createStream`. See the [API docs](api/java/index.html?org/apache/spark/streaming/kafka/KafkaUtils.html)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java).
 
        </div>
        <div data-lang="python" markdown="1">
@@ -49,7 +49,7 @@ Next, we discuss how to use this approach in your streaming application.
                        [ZK quorum], [consumer group id], [per-topic number of Kafka partitions to consume])
 
       By default, the Python API will decode Kafka data as UTF8 encoded strings. You can specify your custom decoding function to decode the byte arrays in Kafka records to any arbitrary data type. See the [API docs](api/python/pyspark.streaming.html#pyspark.streaming.kafka.KafkaUtils)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/kafka_wordcount.py).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/kafka_wordcount.py).
        </div>
        </div>
 
@@ -106,7 +106,7 @@ Next, we discuss how to use this approach in your streaming application.
 
       You can also pass a `messageHandler` to `createDirectStream` to access `MessageAndMetadata` that contains metadata about the current message and transform it to any desired type.
       See the [API docs](api/scala/index.html#org.apache.spark.streaming.kafka.KafkaUtils$)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/DirectKafkaWordCount.scala).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/DirectKafkaWordCount.scala).
        </div>
        <div data-lang="java" markdown="1">
                import org.apache.spark.streaming.kafka.*;
@@ -118,7 +118,7 @@ Next, we discuss how to use this approach in your streaming application.
 
       You can also pass a `messageHandler` to `createDirectStream` to access `MessageAndMetadata` that contains metadata about the current message and transform it to any desired type.
       See the [API docs](api/java/index.html?org/apache/spark/streaming/kafka/KafkaUtils.html)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaDirectKafkaWordCount.java).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaDirectKafkaWordCount.java).
 
        </div>
        <div data-lang="python" markdown="1">
@@ -127,7 +127,7 @@ Next, we discuss how to use this approach in your streaming application.
 
       You can also pass a `messageHandler` to `createDirectStream` to access `KafkaMessageAndMetadata` that contains metadata about the current message and transform it to any desired type.
       By default, the Python API will decode Kafka data as UTF8 encoded strings. You can specify your custom decoding function to decode the byte arrays in Kafka records to any arbitrary data type. See the [API docs](api/python/pyspark.streaming.html#pyspark.streaming.kafka.KafkaUtils)
-       and the [example]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/direct_kafka_wordcount.py).
+       and the [example]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/direct_kafka_wordcount.py).
        </div>
        </div>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 3d40b2c..14e1744 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -126,7 +126,7 @@ ssc.awaitTermination()  // Wait for the computation to terminate
 {% endhighlight %}
 
 The complete code can be found in the Spark Streaming example
-[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/NetworkWordCount.scala).
+[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/NetworkWordCount.scala).
 <br>
 
 </div>
@@ -216,7 +216,7 @@ jssc.awaitTermination();   // Wait for the computation to terminate
 {% endhighlight %}
 
 The complete code can be found in the Spark Streaming example
-[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaNetworkWordCount.java).
+[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaNetworkWordCount.java).
 <br>
 
 </div>
@@ -277,7 +277,7 @@ ssc.awaitTermination()  # Wait for the computation to terminate
 {% endhighlight %}
 
 The complete code can be found in the Spark Streaming example
-[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/network_wordcount.py).
+[NetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/network_wordcount.py).
 <br>
 
 </div>
@@ -854,7 +854,7 @@ JavaPairDStream<String, Integer> runningCounts = pairs.updateStateByKey(updateFu
 The update function will be called for each word, with `newValues` having a sequence of 1's (from
 the `(word, 1)` pairs) and the `runningCount` having the previous count. For the complete
 Java code, take a look at the example
-[JavaStatefulNetworkWordCount.java]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming
+[JavaStatefulNetworkWordCount.java]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming
 /JavaStatefulNetworkWordCount.java).
 
 </div>
@@ -877,7 +877,7 @@ runningCounts = pairs.updateStateByKey(updateFunction)
 The update function will be called for each word, with `newValues` having a sequence of 1's (from
 the `(word, 1)` pairs) and the `runningCount` having the previous count. For the complete
 Python code, take a look at the example
-[stateful_network_wordcount.py]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/stateful_network_wordcount.py).
+[stateful_network_wordcount.py]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/stateful_network_wordcount.py).
 
 </div>
 </div>
@@ -1428,7 +1428,7 @@ wordCounts.foreachRDD { (rdd: RDD[(String, Int)], time: Time) =>
 
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/RecoverableNetworkWordCount.scala).
 </div>
 <div data-lang="java" markdown="1">
 {% highlight java %}
@@ -1491,7 +1491,7 @@ wordCounts.foreachRDD(new Function2<JavaPairRDD<String, Integer>, Time, Void>()
 
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaRecoverableNetworkWordCount.java).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaRecoverableNetworkWordCount.java).
 </div>
 <div data-lang="python" markdown="1">
 {% highlight python %}
@@ -1526,7 +1526,7 @@ wordCounts.foreachRDD(echo)
 
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/recoverable_network_wordcount.py).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/recoverable_network_wordcount.py).
 
 </div>
 </div>
@@ -1564,7 +1564,7 @@ words.foreachRDD { rdd =>
 
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/streaming/SqlNetworkWordCount.scala).
 </div>
 <div data-lang="java" markdown="1">
 {% highlight java %}
@@ -1619,7 +1619,7 @@ words.foreachRDD(
 );
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java).
 </div>
 <div data-lang="python" markdown="1">
 {% highlight python %}
@@ -1661,7 +1661,7 @@ def process(time, rdd):
 words.foreachRDD(process)
 {% endhighlight %}
 
-See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/streaming/sql_network_wordcount.py).
+See the full [source code]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/streaming/sql_network_wordcount.py).
 
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/spark/blob/38378f59/docs/structured-streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index 8c14c3d..811e8c4 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -14,9 +14,9 @@ Structured Streaming is a scalable and fault-tolerant stream processing engine b
 
 # Quick Example
 Let’s say you want to maintain a running word count of text data received from a data server listening on a TCP socket. Let’s see how you can express this using Structured Streaming. You can see the full code in
-[Scala]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCount.scala)/
-[Java]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCount.java)/
-[Python]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/sql/streaming/structured_network_wordcount.py). And if you
+[Scala]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCount.scala)/
+[Java]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCount.java)/
+[Python]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/sql/streaming/structured_network_wordcount.py). And if you
 [download Spark](http://spark.apache.org/downloads.html), you can directly run the example. In any case, let’s walk through the example step-by-step and understand how it works. First, we have to import the necessary classes and create a local SparkSession, the starting point of all functionalities related to Spark.
 
 <div class="codetabs">
@@ -618,9 +618,9 @@ The result tables would look something like the following.
 ![Window Operations](img/structured-streaming-window.png)
 
 Since this windowing is similar to grouping, in code, you can use `groupBy()` and `window()` operations to express windowed aggregations. You can see the full code for the below examples in
-[Scala]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCountWindowed.scala)/
-[Java]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCountWindowed.java)/
-[Python]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/python/sql/streaming/structured_network_wordcount_windowed.py).
+[Scala]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCountWindowed.scala)/
+[Java]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCountWindowed.java)/
+[Python]({{site.SPARK_GITHUB_URL}}/blob/v{{site.SPARK_VERSION_SHORT}}/examples/src/main/python/sql/streaming/structured_network_wordcount_windowed.py).
 
 <div class="codetabs">
 <div data-lang="scala"  markdown="1">


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
