Repository: zeppelin
Updated Branches:
  refs/heads/master dc3d383cc -> a648746b5


[ZEPPELIN-1798][DOCS] Update docs with benv instead of env in Flink e…

### What is this PR for?

Several Flink examples reference `env` instead of `benv`, which was changed in
[ZEPPELIN-1461](https://github.com/apache/zeppelin/pull/1409).

Also updates
http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/quickstart/install_with_flink_and_spark_cluster.html
to reference the updated Flink version.

### What type of PR is it?
Documentation

### Todos
* [x] - Update https://zeppelin.apache.org/docs/0.6.2/interpreter/flink.html
* [x] - Update http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/quickstart/install_with_flink_and_spark_cluster.html

### What is the Jira issue?
[ZEPPELIN-1798](https://issues.apache.org/jira/browse/ZEPPELIN-1798)

### How should this be tested?
No tests

### Screenshots (if appropriate)
n/a

### Questions:
* Do the license files need to be updated?
No
* Are there breaking changes for older versions?
No
* Does this need documentation?
This IS documentation that should have been included in an earlier PR.

Author: rawkintrevo <[email protected]>

Closes #1772 from rawkintrevo/ZEPPELIN-1798 and squashes the following commits:

2ac8090 [rawkintrevo] [ZEPPELIN-1798][DOCS] Update docs with benv instead of env in Flink examples and bump versions cited


Project: http://git-wip-us.apache.org/repos/asf/zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/zeppelin/commit/a648746b
Tree: http://git-wip-us.apache.org/repos/asf/zeppelin/tree/a648746b
Diff: http://git-wip-us.apache.org/repos/asf/zeppelin/diff/a648746b

Branch: refs/heads/master
Commit: a648746b501213d65d37ac06d4b3657cf3078ebd
Parents: dc3d383
Author: rawkintrevo <[email protected]>
Authored: Thu Dec 15 11:15:02 2016 -0600
Committer: Lee moon soo <[email protected]>
Committed: Sat Dec 17 07:46:53 2016 -0800

----------------------------------------------------------------------
 docs/interpreter/flink.md                       |  2 +-
 .../install_with_flink_and_spark_cluster.md     | 34 +++++++++++---------
 2 files changed, 19 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/zeppelin/blob/a648746b/docs/interpreter/flink.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/flink.md b/docs/interpreter/flink.md
index 3d06546..2c17087 100644
--- a/docs/interpreter/flink.md
+++ b/docs/interpreter/flink.md
@@ -63,7 +63,7 @@ wget http://www.gutenberg.org/ebooks/10.txt.utf-8
 {% highlight scala %}
 %flink
 case class WordCount(word: String, frequency: Int)
-val bible:DataSet[String] = env.readTextFile("10.txt.utf-8")
+val bible:DataSet[String] = benv.readTextFile("10.txt.utf-8")
 val partialCounts: DataSet[WordCount] = bible.flatMap{
     line =>
         """\b\w+\b""".r.findAllIn(line).map(word => WordCount(word, 1))

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/a648746b/docs/quickstart/install_with_flink_and_spark_cluster.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/install_with_flink_and_spark_cluster.md 
b/docs/quickstart/install_with_flink_and_spark_cluster.md
index 89767c3..0fec15c 100644
--- a/docs/quickstart/install_with_flink_and_spark_cluster.md
+++ b/docs/quickstart/install_with_flink_and_spark_cluster.md
@@ -118,14 +118,16 @@ cd zeppelin
 Package Zeppelin.
 
 ```
-mvn clean package -DskipTests -Pspark-1.6 -Dflink.version=1.1.2
+mvn clean package -DskipTests -Pspark-1.6 -Dflink.version=1.1.3 -Pscala-2.10
 ```
 
 `-DskipTests` skips build tests- you're not developing (yet), so you don't 
need to do tests, the clone version *should* build.
 
 `-Pspark-1.6` tells maven to build a Zeppelin with Spark 1.6.  This is 
important because Zeppelin has its own Spark interpreter and the versions must 
be the same.
 
-`-Dflink.version=1.1.2` tells maven specifically to build Zeppelin with Flink 
version 1.1.2.
+`-Dflink.version=1.1.3` tells maven specifically to build Zeppelin with Flink 
version 1.1.3.
+
+`-Pscala-2.10` tells maven to build with Scala v2.10.
 
 
 **Note:** You may wish to include additional build flags such as `-Ppyspark` 
or `-Psparkr`.  See [the build section of github for more 
details](https://github.com/apache/zeppelin#build).
@@ -162,7 +164,7 @@ Create a new notebook named "Flink Test" and copy and paste 
the following code.
 
 %flink  // let Zeppelin know what interpreter to use.
 
-val text = env.fromElements("In the time of chimpanzees, I was a monkey",   // 
some lines of text to analyze
+val text = benv.fromElements("In the time of chimpanzees, I was a monkey",   
// some lines of text to analyze
 "Butane in my veins and I'm out to cut the junkie",
 "With the plastic eyeballs, spray paint the vegetables",
 "Dog food stalls with the beefcake pantyhose",
@@ -252,16 +254,16 @@ Building from source is recommended  where possible, for 
simplicity in this tuto
 To download the Flink Binary use `wget`
 
 ```bash
-wget "http://mirror.cogentco.com/pub/apache/flink/flink-1.0.3/flink-1.0.3-bin-hadoop24-scala_2.10.tgz"
-tar -xzvf flink-1.0.3-bin-hadoop24-scala_2.10.tgz
+wget "http://mirror.cogentco.com/pub/apache/flink/flink-1.1.3/flink-1.1.3-bin-hadoop24-scala_2.10.tgz"
+tar -xzvf flink-1.1.3-bin-hadoop24-scala_2.10.tgz
 ```
 
-This will download Flink 1.0.3, compatible with Hadoop 2.4.  You do not have 
to install Hadoop for this binary to work, but if you are using Hadoop, please 
change `24` to your appropriate version.
+This will download Flink 1.1.3, compatible with Hadoop 2.4.  You do not have 
to install Hadoop for this binary to work, but if you are using Hadoop, please 
change `24` to your appropriate version.
 
 Start the Flink Cluster.
 
 ```bash
-flink-1.0.3/bin/start-cluster.sh
+flink-1.1.3/bin/start-cluster.sh
 ```
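As an aside (not part of the commit): the Flink version string appears in both the download path and the tarball name, which is why this bump touches three commands at once. A sketch, with illustrative variable names not taken from the tutorial, of keeping them in sync:

```bash
# Illustrative only: compose the Flink download URL from one version variable
# so a version bump (e.g. to 1.1.3) updates the path, tarball, and extract
# step together instead of being edited in three places.
FLINK_VERSION=1.1.3
TARBALL="flink-${FLINK_VERSION}-bin-hadoop24-scala_2.10.tgz"
URL="http://mirror.cogentco.com/pub/apache/flink/flink-${FLINK_VERSION}/${TARBALL}"
echo "$URL"
# wget "$URL" && tar -xzvf "$TARBALL"   # network download, commented out here
```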
 
 ###### Building From source
@@ -270,13 +272,13 @@ If you wish to build Flink from source, the following 
will be instructive.  Note
 
 See the [Flink Installation 
guide](https://github.com/apache/flink/blob/master/README.md) for more detailed 
instructions.
 
-Return to the directory where you have been downloading, this tutorial assumes 
that is `$HOME`. Clone Flink,  check out release-1.0, and build.
+Return to the directory where you have been downloading, this tutorial assumes 
that is `$HOME`. Clone Flink,  check out release-1.1.3-rc2, and build.
 
 ```
 cd $HOME
 git clone https://github.com/apache/flink.git
 cd flink
-git checkout release-1.0
+git checkout release-1.1.3-rc2
 mvn clean install -DskipTests
 ```
 
@@ -297,8 +299,8 @@ If no task managers are present, restart the Flink cluster 
with the following co
 
 (if binaries)
 ```
-flink-1.0.3/bin/stop-cluster.sh
-flink-1.0.3/bin/start-cluster.sh
+flink-1.1.3/bin/stop-cluster.sh
+flink-1.1.3/bin/start-cluster.sh
 ```
 
 
@@ -320,12 +322,12 @@ Using binaries is also
 To download the Spark Binary use `wget`
 
 ```bash
-wget "http://mirrors.koehn.com/apache/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.4.tgz"
-tar -xzvf spark-1.6.1-bin-hadoop2.4.tgz
-mv spark-1.6.1-bin-hadoop4.4 spark
+wget "http://d3kbcqa49mib13.cloudfront.net/spark-1.6.3-bin-hadoop2.6.tgz"
+tar -xzvf spark-1.6.3-bin-hadoop2.6.tgz
+mv spark-1.6.3-bin-hadoop2.6 spark
 ```
 
-This will download Spark 1.6.1, compatible with Hadoop 2.4.  You do not have 
to install Hadoop for this binary to work, but if you are using Hadoop, please 
change `2.4` to your appropriate version.
+This will download Spark 1.6.3, compatible with Hadoop 2.6.  You do not have 
to install Hadoop for this binary to work, but if you are using Hadoop, please 
change `2.6` to your appropriate version.
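As an aside (not part of the commit): the hunk above also removes a `mv spark-1.6.1-bin-hadoop4.4` target that never matched the extracted directory. A sketch, with illustrative variable names, of why deriving the directory from the tarball name avoids that class of typo:

```bash
# Illustrative only: the tarball extracts to a directory named after itself
# minus the .tgz suffix, so computing the mv source from the tarball name
# avoids mismatches like the old "hadoop4.4" target this commit removes.
SPARK_VERSION=1.6.3
HADOOP_VERSION=2.6
TARBALL="spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"
EXTRACTED_DIR="${TARBALL%.tgz}"
echo "$EXTRACTED_DIR"
# tar -xzvf "$TARBALL" && mv "$EXTRACTED_DIR" spark   # needs the download first
```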
 
 ###### Building From source
 
@@ -335,7 +337,7 @@ See the [Spark 
Installation](https://github.com/apache/spark/blob/master/README.
 
 Return to the directory where you have been downloading, this tutorial assumes 
that is $HOME. Clone Spark, check out branch-1.6, and build.
 **Note:** Recall, we're only checking out 1.6 because it is the most recent 
Spark for which a Zeppelin profile exists at
-  the time of writing. You are free to check out other version, just make sure 
you build Zeppelin against the correct version of Spark.
+  the time of writing. You are free to check out other version, just make sure 
you build Zeppelin against the correct version of Spark. However if you use 
Spark 2.0, the word count example will need to be changed as Spark 2.0 is not 
compatible with the following examples.
 
 
 ```
