Repository: zeppelin
Updated Branches:
  refs/heads/master 01beb54e9 -> 045b2d24d


[ZEPPELIN-1274]Write "Spark SQL" in docs rather than "SparkSQL"

### What is this PR for?
Some of the doc files say "SparkSQL", but the correct spelling is "Spark SQL"
(with a white space between "Spark" and "SQL").
Let's replace them with the correct one.

### What type of PR is it?
Improvement

### Todos
* [x] - Replace all occurrences of "SparkSQL" in the affected files with "Spark SQL".
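For illustration, a bulk rewrite like the one in this PR can be sketched with GNU sed. The sample file path below is hypothetical (not part of this commit), and the actual change was still reviewed file by file:

```shell
# Sketch only: demonstrate the "SparkSQL"/"SparkSql" -> "Spark SQL" rewrite
# on a throwaway sample file (path is hypothetical, not from this commit).
printf 'Max number of SparkSQL result to display.\n' > /tmp/sparksql-sample.md

# GNU sed: handle both spelling variants in one in-place pass.
sed -i 's/SparkSQL/Spark SQL/g; s/SparkSql/Spark SQL/g' /tmp/sparksql-sample.md

# Verify no old spellings remain (grep exits non-zero when nothing matches).
! grep -E 'SparkSQL|SparkSql' /tmp/sparksql-sample.md
cat /tmp/sparksql-sample.md   # prints: Max number of Spark SQL result to display.
```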

### What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-1274

### How should this be tested?
Have some people review the changed docs.

Author: Kousuke Saruta <saru...@oss.nttdata.co.jp>

Closes #1271 from sarutak/ZEPPELIN-1274 and squashes the following commits:

edc9212 [Kousuke Saruta] Further replaced "SparkSQL" and "SparkSql" into "Spark SQL"
14aa2b7 [Kousuke Saruta] Replaced 'SparkSQL' in docs into 'Spark SQL'


Project: http://git-wip-us.apache.org/repos/asf/zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/zeppelin/commit/045b2d24
Tree: http://git-wip-us.apache.org/repos/asf/zeppelin/tree/045b2d24
Diff: http://git-wip-us.apache.org/repos/asf/zeppelin/diff/045b2d24

Branch: refs/heads/master
Commit: 045b2d24d85c3c2114e43c9d31698ad259692607
Parents: 01beb54
Author: Kousuke Saruta <saru...@oss.nttdata.co.jp>
Authored: Wed Aug 3 15:29:51 2016 +0900
Committer: Lee moon soo <m...@apache.org>
Committed: Sun Aug 7 09:05:47 2016 -0700

----------------------------------------------------------------------
 conf/zeppelin-env.cmd.template                           | 2 +-
 conf/zeppelin-env.sh.template                            | 2 +-
 docs/index.md                                            | 2 +-
 docs/interpreter/livy.md                                 | 2 +-
 docs/interpreter/spark.md                                | 2 +-
 docs/manual/dynamicform.md                               | 2 +-
 docs/manual/interpreters.md                              | 4 ++--
 docs/rest-api/rest-interpreter.md                        | 4 ++--
 docs/screenshots.md                                      | 2 +-
 livy/src/main/resources/interpreter-setting.json         | 4 ++--
 spark/src/main/resources/interpreter-setting.json        | 4 ++--
 spark/src/main/sparkr-resources/interpreter-setting.json | 4 ++--
 12 files changed, 17 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/conf/zeppelin-env.cmd.template
----------------------------------------------------------------------
diff --git a/conf/zeppelin-env.cmd.template b/conf/zeppelin-env.cmd.template
index d85e59f..de89674 100644
--- a/conf/zeppelin-env.cmd.template
+++ b/conf/zeppelin-env.cmd.template
@@ -62,7 +62,7 @@ REM
 REM set ZEPPELIN_SPARK_USEHIVECONTEXT  REM Use HiveContext instead of SQLContext if set true. true by default.
 REM set ZEPPELIN_SPARK_CONCURRENTSQL   REM Execute multiple SQL concurrently if set true. false by default.
 REM set ZEPPELIN_SPARK_IMPORTIMPLICIT  REM Import implicits, UDF collection, and sql if set true. true by default.
-REM set ZEPPELIN_SPARK_MAXRESULT       REM Max number of SparkSQL result to display. 1000 by default.
+REM set ZEPPELIN_SPARK_MAXRESULT       REM Max number of Spark SQL result to display. 1000 by default.
 
 REM ZeppelinHub connection configuration
 REM

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/conf/zeppelin-env.sh.template
----------------------------------------------------------------------
diff --git a/conf/zeppelin-env.sh.template b/conf/zeppelin-env.sh.template
index 52e36f7..14fdd54 100644
--- a/conf/zeppelin-env.sh.template
+++ b/conf/zeppelin-env.sh.template
@@ -62,7 +62,7 @@
 # export ZEPPELIN_SPARK_USEHIVECONTEXT  # Use HiveContext instead of SQLContext if set true. true by default.
 # export ZEPPELIN_SPARK_CONCURRENTSQL   # Execute multiple SQL concurrently if set true. false by default.
 # export ZEPPELIN_SPARK_IMPORTIMPLICIT  # Import implicits, UDF collection, and sql if set true. true by default.
-# export ZEPPELIN_SPARK_MAXRESULT       # Max number of SparkSQL result to display. 1000 by default.
+# export ZEPPELIN_SPARK_MAXRESULT       # Max number of Spark SQL result to display. 1000 by default.
 # export ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE       # Size in characters of the maximum text message to be received by websocket. Defaults to 1024000
 
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index fec5af4..beee695 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -62,7 +62,7 @@ For the further information about Apache Spark in Apache Zeppelin, please see [S
 <br />
 ## Data visualization
 
-Some basic charts are already included in Apache Zeppelin. Visualizations are not limited to SparkSQL query, any output from any language backend can be recognized and visualized.
+Some basic charts are already included in Apache Zeppelin. Visualizations are not limited to Spark SQL query, any output from any language backend can be recognized and visualized.
 
 <div class="row">
   <div class="col-md-6">

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/interpreter/livy.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/livy.md b/docs/interpreter/livy.md
index 2f47364..9af6499 100644
--- a/docs/interpreter/livy.md
+++ b/docs/interpreter/livy.md
@@ -50,7 +50,7 @@ Example: `spark.master` to `livy.spark.master`
   <tr>
     <td>zeppelin.livy.spark.maxResult</td>
     <td>1000</td>
-    <td>Max number of SparkSQL result to display.</td>
+    <td>Max number of Spark SQL result to display.</td>
   </tr>
     <tr>
     <td>livy.spark.driver.cores</td>

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index b094ccd..047f63f 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -105,7 +105,7 @@ You can also set other Spark properties which are not listed in the table. For a
   <tr>
     <td>zeppelin.spark.maxResult</td>
     <td>1000</td>
-    <td>Max number of SparkSQL result to display.</td>
+    <td>Max number of Spark SQL result to display.</td>
   </tr>
   <tr>
     <td>zeppelin.spark.printREPLOutput</td>

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/manual/dynamicform.md
----------------------------------------------------------------------
diff --git a/docs/manual/dynamicform.md b/docs/manual/dynamicform.md
index b554fec..6102baf 100644
--- a/docs/manual/dynamicform.md
+++ b/docs/manual/dynamicform.md
@@ -28,7 +28,7 @@ Custom language backend can select which type of form creation it wants to use.
 
 ## Using form Templates
 
-This mode creates form using simple template language. It's simple and easy to use. For example Markdown, Shell, SparkSql language backend uses it.
+This mode creates form using simple template language. It's simple and easy to use. For example Markdown, Shell, Spark SQL language backend uses it.
 
 ### Text input form
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/manual/interpreters.md
----------------------------------------------------------------------
diff --git a/docs/manual/interpreters.md b/docs/manual/interpreters.md
index a21d34e..6e4839d 100644
--- a/docs/manual/interpreters.md
+++ b/docs/manual/interpreters.md
@@ -27,7 +27,7 @@ limitations under the License.
 
 In this section, we will explain about the role of interpreters, interpreters group and interpreter settings in Zeppelin.
 The concept of Zeppelin interpreter allows any language/data-processing-backend to be plugged into Zeppelin.
-Currently, Zeppelin supports many interpreters such as Scala ( with Apache Spark ), Python ( with Apache Spark ), SparkSQL, JDBC, Markdown, Shell and so on.
+Currently, Zeppelin supports many interpreters such as Scala ( with Apache Spark ), Python ( with Apache Spark ), Spark SQL, JDBC, Markdown, Shell and so on.
 
 ## What is Zeppelin interpreter?
 Zeppelin Interpreter is a plug-in which enables Zeppelin users to use a specific language/data-processing-backend. For example, to use Scala code in Zeppelin, you need `%spark` interpreter.
@@ -51,7 +51,7 @@ Each notebook can be bound to multiple Interpreter Settings using setting icon o
 
 ## What is interpreter group?
 Every Interpreter is belonged to an **Interpreter Group**. Interpreter Group is a unit of start/stop interpreter.
-By default, every interpreter is belonged to a single group, but the group might contain more interpreters. For example, Spark interpreter group is including Spark support, pySpark, SparkSQL and the dependency loader.
+By default, every interpreter is belonged to a single group, but the group might contain more interpreters. For example, Spark interpreter group is including Spark support, pySpark, Spark SQL and the dependency loader.
 
 Technically, Zeppelin interpreters from the same group are running in the same JVM. For more information about this, please checkout [here](../development/writingzeppelininterpreter.html).
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/rest-api/rest-interpreter.md
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-interpreter.md b/docs/rest-api/rest-interpreter.md
index 161c38b..9ee8f69 100644
--- a/docs/rest-api/rest-interpreter.md
+++ b/docs/rest-api/rest-interpreter.md
@@ -92,7 +92,7 @@ The role of registered interpreters, settings and interpreters group are describ
       "properties": {
         "zeppelin.spark.maxResult": {
           "defaultValue": "1000",
-          "description": "Max number of SparkSQL result to display."
+          "description": "Max number of Spark SQL result to display."
         }
       },
       "path": "/zeppelin/interpreter/spark"
@@ -460,4 +460,4 @@ The role of registered interpreters, settings and interpreters group are describ
       <td> 500 </td>
     </tr>
   </table>
-  
\ No newline at end of file
+  

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/docs/screenshots.md
----------------------------------------------------------------------
diff --git a/docs/screenshots.md b/docs/screenshots.md
index 7a389b7..2cad21b 100644
--- a/docs/screenshots.md
+++ b/docs/screenshots.md
@@ -21,7 +21,7 @@ limitations under the License.
 <div class="row">
      <div class="col-md-3">
           <a href="assets/themes/zeppelin/img/screenshots/sparksql.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/sparksql.png" 
/></a>
-          <center>SparkSQL with inline visualization</center>
+          <center>Spark SQL with inline visualization</center>
      </div>
      <div class="col-md-3">
           <a href="assets/themes/zeppelin/img/screenshots/spark.png"><img 
class="thumbnail" src="assets/themes/zeppelin/img/screenshots/spark.png" /></a>

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/livy/src/main/resources/interpreter-setting.json
----------------------------------------------------------------------
diff --git a/livy/src/main/resources/interpreter-setting.json b/livy/src/main/resources/interpreter-setting.json
index 2c1a0be..28fb280 100644
--- a/livy/src/main/resources/interpreter-setting.json
+++ b/livy/src/main/resources/interpreter-setting.json
@@ -93,7 +93,7 @@
         "envName": "ZEPPELIN_LIVY_MAXRESULT",
         "propertyName": "zeppelin.livy.spark.sql.maxResult",
         "defaultValue": "1000",
-        "description": "Max number of SparkSQL result to display."
+        "description": "Max number of Spark SQL result to display."
       },
       "zeppelin.livy.concurrentSQL": {
         "propertyName": "zeppelin.livy.concurrentSQL",
@@ -116,4 +116,4 @@
     "properties": {
     }
   }
-]
\ No newline at end of file
+]

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/spark/src/main/resources/interpreter-setting.json
----------------------------------------------------------------------
diff --git a/spark/src/main/resources/interpreter-setting.json b/spark/src/main/resources/interpreter-setting.json
index 2343a0f..d87a6c7 100644
--- a/spark/src/main/resources/interpreter-setting.json
+++ b/spark/src/main/resources/interpreter-setting.json
@@ -46,7 +46,7 @@
         "envName": "ZEPPELIN_SPARK_MAXRESULT",
         "propertyName": "zeppelin.spark.maxResult",
         "defaultValue": "1000",
-        "description": "Max number of SparkSQL result to display."
+        "description": "Max number of Spark SQL result to display."
       },
       "master": {
         "envName": "MASTER",
@@ -77,7 +77,7 @@
         "envName": "ZEPPELIN_SPARK_MAXRESULT",
         "propertyName": "zeppelin.spark.maxResult",
         "defaultValue": "1000",
-        "description": "Max number of SparkSQL result to display."
+        "description": "Max number of Spark SQL result to display."
       },
       "zeppelin.spark.importImplicit": {
         "envName": "ZEPPELIN_SPARK_IMPORTIMPLICIT",

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/045b2d24/spark/src/main/sparkr-resources/interpreter-setting.json
----------------------------------------------------------------------
diff --git a/spark/src/main/sparkr-resources/interpreter-setting.json b/spark/src/main/sparkr-resources/interpreter-setting.json
index 4902baf..f884fe4 100644
--- a/spark/src/main/sparkr-resources/interpreter-setting.json
+++ b/spark/src/main/sparkr-resources/interpreter-setting.json
@@ -46,7 +46,7 @@
         "envName": "ZEPPELIN_SPARK_MAXRESULT",
         "propertyName": "zeppelin.spark.maxResult",
         "defaultValue": "1000",
-        "description": "Max number of SparkSQL result to display."
+        "description": "Max number of Spark SQL result to display."
       },
       "master": {
         "envName": "MASTER",
@@ -77,7 +77,7 @@
         "envName": "ZEPPELIN_SPARK_MAXRESULT",
         "propertyName": "zeppelin.spark.maxResult",
         "defaultValue": "1000",
-        "description": "Max number of SparkSQL result to display."
+        "description": "Max number of Spark SQL result to display."
       },
       "zeppelin.spark.importImplicit": {
         "envName": "ZEPPELIN_SPARK_IMPORTIMPLICIT",
