git commit: [Docs] SQL doc formatting and typo fixes

2014-08-29 Thread marmbrus
Repository: spark
Updated Branches:
  refs/heads/branch-1.1 98d0716a1 -> bfa2dc99a


[Docs] SQL doc formatting and typo fixes

As [reported on the dev list](http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC2-tp8107p8131.html):
* Code fencing with triple-backticks doesn’t seem to work like it does on GitHub. Newlines are lost. Instead, use 4-space indent to format small code blocks (illustrated below).
* Nested bullets need 2 leading spaces, not 1.
* Spellcheck!
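
For reference, here is a minimal sketch of Markdown that sidesteps both problems when built with Jekyll. The snippet is our own illustration, not part of the commit:

    To start an interactive shell, run:

        ./bin/spark-shell

    * An outer bullet
      * A nested bullet, indented by exactly 2 leading spaces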

Author: Nicholas Chammas 
Author: nchammas 

Closes #2201 from nchammas/sql-doc-fixes and squashes the following commits:

873f889 [Nicholas Chammas] [Docs] fix skip-api flag
5195e0c [Nicholas Chammas] [Docs] SQL doc formatting and typo fixes
3b26c8d [nchammas] [Spark QA] Link to console output on test time out

(cherry picked from commit 53aa8316e88980c6f46d3b9fc90d935a4738a370)
Signed-off-by: Michael Armbrust 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bfa2dc99
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bfa2dc99
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bfa2dc99

Branch: refs/heads/branch-1.1
Commit: bfa2dc99a22c23dc4b10d1f9e5dd9681f6f48537
Parents: 98d0716
Author: Nicholas Chammas 
Authored: Fri Aug 29 15:23:32 2014 -0700
Committer: Michael Armbrust 
Committed: Fri Aug 29 15:23:41 2014 -0700

--
 docs/README.md                |   2 +-
 docs/sql-programming-guide.md | 109 +
 2 files changed, 52 insertions(+), 59 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/bfa2dc99/docs/README.md
--
diff --git a/docs/README.md b/docs/README.md
index fd7ba4e..0a0126c 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -30,7 +30,7 @@ called `_site` containing index.html as well as the rest of the compiled files.
 You can modify the default Jekyll build as follows:
 
 # Skip generating API docs (which takes a while)
-$ SKIP_SCALADOC=1 jekyll build
+$ SKIP_API=1 jekyll build
 # Serve content locally on port 4000
 $ jekyll serve --watch
 # Build the site with extra features used on the live page

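With the corrected flag, a docs build that skips API generation looks like this; a sketch assuming it is run from the `docs/` directory with Jekyll installed, and that the variable is read from the environment in both modes:

    # Skip the (slow) API doc generation while iterating on prose
    $ SKIP_API=1 jekyll build
    # The same variable applies when serving locally
    $ SKIP_API=1 jekyll serve --watch
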
http://git-wip-us.apache.org/repos/asf/spark/blob/bfa2dc99/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index c41f280..8f7fb54 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -474,10 +474,10 @@ anotherPeople = sqlContext.jsonRDD(anotherPeopleRDD)
 
 Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
 However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
-In order to use Hive you must first run '`sbt/sbt -Phive assembly/assembly`' (or use `-Phive` for maven).
+In order to use Hive you must first run "`sbt/sbt -Phive assembly/assembly`" (or use `-Phive` for maven).
 This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
-(SerDes) in order to acccess data stored in Hive.
+(SerDes) in order to access data stored in Hive.
 
 Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
 
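To make the build step above concrete, here is a sketch of both invocations against a Spark 1.1 tree; the Maven line is our addition based on the standard `-Phive` profile, not part of the diff:

    # sbt: build an assembly jar that bundles Hive support
    $ sbt/sbt -Phive assembly/assembly
    # Maven: enable the same profile when packaging
    $ mvn -Phive -DskipTests clean package
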
@@ -576,9 +576,8 @@ evaluated by the SQL execution engine.  A full list of the functions supported c
 
 ## Running the Thrift JDBC server
 
-The Thrift JDBC server implemented here corresponds to the [`HiveServer2`]
-(https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2) in Hive 0.12. You can test
-the JDBC server with the beeline script comes with either Spark or Hive 0.12.
+The Thrift JDBC server implemented here corresponds to the [`HiveServer2`](https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2)
+in Hive 0.12. You can test the JDBC server with the beeline script comes with either Spark or Hive 0.12.
 
 To start the JDBC server, run the following in the Spark directory:
 
@@ -597,7 +596,7 @@ Connect to the JDBC server in beeline with:
 
 Beeline will ask you for a username and password. In non-secure mode, simply enter the username on
 your machine and a blank password. For secure mode, please follow the instructions given in the
-[beeline documentation](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients)
+[beeline documentation](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients).
 
 Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
 
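Taken together with the hunks above, the workflow the guide describes is to start the server and then attach beeline. A sketch using the defaults the guide assumes (localhost, port 10000):

    # From the Spark directory: start the Thrift JDBC server
    $ ./sbin/start-thriftserver.sh
    # Then connect with the bundled beeline client
    $ ./bin/beeline
    beeline> !connect jdbc:hive2://localhost:10000
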
@@ -616,11 +615,10 @@ In Shark, default reducer number is 1 and is controlled by the pr

git commit: [Docs] SQL doc formatting and typo fixes

2014-08-29 Thread marmbrus
Repository: spark
Updated Branches:
  refs/heads/master e248328b3 -> 53aa8316e


[Docs] SQL doc formatting and typo fixes

As [reported on the dev list](http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC2-tp8107p8131.html):
* Code fencing with triple-backticks doesn’t seem to work like it does on GitHub. Newlines are lost. Instead, use 4-space indent to format small code blocks.
* Nested bullets need 2 leading spaces, not 1.
* Spellcheck!

Author: Nicholas Chammas 
Author: nchammas 

Closes #2201 from nchammas/sql-doc-fixes and squashes the following commits:

873f889 [Nicholas Chammas] [Docs] fix skip-api flag
5195e0c [Nicholas Chammas] [Docs] SQL doc formatting and typo fixes
3b26c8d [nchammas] [Spark QA] Link to console output on test time out


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/53aa8316
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/53aa8316
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/53aa8316

Branch: refs/heads/master
Commit: 53aa8316e88980c6f46d3b9fc90d935a4738a370
Parents: e248328
Author: Nicholas Chammas 
Authored: Fri Aug 29 15:23:32 2014 -0700
Committer: Michael Armbrust 
Committed: Fri Aug 29 15:23:32 2014 -0700

--
 docs/README.md                |   2 +-
 docs/sql-programming-guide.md | 109 +
 2 files changed, 52 insertions(+), 59 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/53aa8316/docs/README.md
--
diff --git a/docs/README.md b/docs/README.md
index fd7ba4e..0a0126c 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -30,7 +30,7 @@ called `_site` containing index.html as well as the rest of the compiled files.
 You can modify the default Jekyll build as follows:
 
 # Skip generating API docs (which takes a while)
-$ SKIP_SCALADOC=1 jekyll build
+$ SKIP_API=1 jekyll build
 # Serve content locally on port 4000
 $ jekyll serve --watch
 # Build the site with extra features used on the live page

http://git-wip-us.apache.org/repos/asf/spark/blob/53aa8316/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index c41f280..8f7fb54 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -474,10 +474,10 @@ anotherPeople = sqlContext.jsonRDD(anotherPeopleRDD)
 
 Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
 However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
-In order to use Hive you must first run '`sbt/sbt -Phive assembly/assembly`' (or use `-Phive` for maven).
+In order to use Hive you must first run "`sbt/sbt -Phive assembly/assembly`" (or use `-Phive` for maven).
 This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
-(SerDes) in order to acccess data stored in Hive.
+(SerDes) in order to access data stored in Hive.
 
 Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
 
@@ -576,9 +576,8 @@ evaluated by the SQL execution engine.  A full list of the functions supported c
 
 ## Running the Thrift JDBC server
 
-The Thrift JDBC server implemented here corresponds to the [`HiveServer2`]
-(https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2) in Hive 0.12. You can test
-the JDBC server with the beeline script comes with either Spark or Hive 0.12.
+The Thrift JDBC server implemented here corresponds to the [`HiveServer2`](https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2)
+in Hive 0.12. You can test the JDBC server with the beeline script comes with either Spark or Hive 0.12.
 
 To start the JDBC server, run the following in the Spark directory:
 
@@ -597,7 +596,7 @@ Connect to the JDBC server in beeline with:
 
 Beeline will ask you for a username and password. In non-secure mode, simply enter the username on
 your machine and a blank password. For secure mode, please follow the instructions given in the
-[beeline documentation](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients)
+[beeline documentation](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients).
 
 Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`.
 
@@ -616,11 +615,10 @@ In Shark, default reducer number is 1 and is controlled by the property `mapred.
 SQL deprecates this property by a new property `spark.sql.shuffle.partitions`, whose default