fix links

Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/5f6a51af
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/5f6a51af
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/5f6a51af

Branch: refs/heads/gh-pages
Commit: 5f6a51af253b4fe5fec2cd80705fe74b985d31b5
Parents: 5a7f700
Author: Kristine Hahn <[email protected]>
Authored: Mon May 18 01:26:33 2015 -0700
Committer: Kristine Hahn <[email protected]>
Committed: Mon May 18 01:26:33 2015 -0700

----------------------------------------------------------------------
 .../090-mongodb-plugin-for-apache-drill.md      |  26 +--
 ...ata-sources-and-file-formats-introduction.md |   2 +-
 .../030-deploying-and-using-a-hive-udf.md       |   2 +-
 .../040-parquet-format.md                       |   4 +-
 .../050-json-data-model.md                      |  14 +-
 .../020-develop-a-simple-function.md            |   4 +-
 .../030-developing-an-aggregate-function.md     |   4 +-
 _docs/img/18.png                                | Bin 22253 -> 18137 bytes
 .../030-starting-drill-on-linux-and-mac-os-x.md |   2 +-
 ...microstrategy-analytics-with-apache-drill.md |   4 +-
 _docs/query-data/010-query-data-introduction.md |  14 +-
 _docs/query-data/030-querying-hbase.md          |  41 ++--
 _docs/query-data/050-querying-hive.md           |   2 +-
 .../060-querying-the-information-schema.md      |   2 +-
 _docs/query-data/070-query-sys-tbl.md           |  89 ++++----
 .../010-querying-json-files.md                  |  33 +--
 .../020-querying-parquet-files.md               | 100 +++++----
 .../030-querying-plain-text-files.md            |  72 +++----
 .../040-querying-directories.md                 |  34 +--
 .../005-querying-complex-data-introduction.md   |   4 +-
 _docs/sql-reference/090-sql-extensions.md       |   8 +-
 .../data-types/010-supported-data-types.md      |  20 +-
 .../nested-data-functions/010-flatten.md        |   8 +-
 .../nested-data-functions/020-kvgen.md          |   5 +-
 .../sql-functions/010-math-and-trig.md          |  14 +-
 .../sql-functions/020-data-type-conversion.md   | 211 ++++++++++---------
 .../030-date-time-functions-and-arithmetic.md   | 186 ++++++++--------
 .../sql-functions/040-string-manipulation.md    | 143 ++++++-------
 .../050-aggregate-and-aggregate-statistical.md  |   8 +-
 _docs/tutorials/010-tutorials-introduction.md   |   6 +-
 _docs/tutorials/020-drill-in-10-minutes.md      |   2 +-
 .../030-analyzing-the-yelp-academic-dataset.md  |  13 +-
 .../050-analyzing-highly-dynamic-datasets.md    |   8 +-
 .../020-getting-to-know-the-drill-sandbox.md    |   2 -
 34 files changed, 528 insertions(+), 559 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md 
b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
index ff1c736..72bdbeb 100644
--- a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
+++ b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
@@ -85,19 +85,19 @@ Reference]({{ site.baseurl }}/docs/sql-reference).
 **Example 1: View mongo.zipdb Dataset**
 
     0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
-+------------------------------------------------------------------------------------------------+
-|                                           *                                                    |
-+------------------------------------------------------------------------------------------------+
-| { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"}      |
-| { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"}      |
-| { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"}        |
-| { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
-| { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"}    |
-| { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"}    |
-| { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"}      |
-| { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"}   |
-| { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"}    |
-| { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"}    |
+    +------------------------------------------------------------------------------------------------+
+    |                                           *                                                    |
+    +------------------------------------------------------------------------------------------------+
+    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"}      |
+    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"}      |
+    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"}        |
+    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
+    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"}    |
+    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"}    |
+    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"}      |
+    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"}   |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"}    |
+    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"}    |
 
 **Example 2: Aggregation**
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/data-sources-and-file-formats/010-data-sources-and-file-formats-introduction.md
----------------------------------------------------------------------
diff --git 
a/_docs/data-sources-and-file-formats/010-data-sources-and-file-formats-introduction.md
 
b/_docs/data-sources-and-file-formats/010-data-sources-and-file-formats-introduction.md
index d758a50..d468c40 100644
--- 
a/_docs/data-sources-and-file-formats/010-data-sources-and-file-formats-introduction.md
+++ 
b/_docs/data-sources-and-file-formats/010-data-sources-and-file-formats-introduction.md
@@ -22,4 +22,4 @@ Drill supports the following input formats for data:
 
 You set the input format for data coming from data sources to Drill in the 
workspace portion of the [storage plugin]({{ site.baseurl 
}}/docs/storage-plugin-registration) definition. The default input format in 
Drill is Parquet. 
 
-You change the [sys.options table]({{ site.baseurl 
}}/docs/planning-and-execution-options) to set the output format of Drill data. 
The default storage format for Drill CREATE TABLE AS (CTAS) statements is 
Parquet.
\ No newline at end of file
+You change one of the `store` properties in the [sys.options table]({{ site.baseurl }}/docs/configuration-options-introduction/) to set the output format of Drill data. The default storage format for Drill CREATE TABLE AS (CTAS) statements is Parquet.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
----------------------------------------------------------------------
diff --git 
a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md 
b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
index 6a26376..2cc0db0 100644
--- a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
+++ b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
@@ -21,7 +21,7 @@ After you export the custom UDF as a JAR, perform the UDF 
setup tasks so Drill c
  
 To set up the UDF:
 
-1. Register Hive. [Register a Hive storage plugin]({{ site.baseurl 
}}/docs/registering-hive/) that connects Drill to a Hive data source.
+1. Register Hive. [Register a Hive storage plugin]({{ site.baseurl 
}}/docs/hive-storage-plugin/) that connects Drill to a Hive data source.
 2. Add the JAR for the UDF to the Drill CLASSPATH. In earlier versions of 
Drill, place the JAR file in the `/jars/3rdparty` directory of the Drill 
installation on all nodes running a Drillbit.
 3. On each Drill node in the cluster, restart the Drillbit.
    `<drill installation directory>/bin/drillbit.sh restart`

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/data-sources-and-file-formats/040-parquet-format.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/040-parquet-format.md 
b/_docs/data-sources-and-file-formats/040-parquet-format.md
index ca8b164..5cfc83f 100644
--- a/_docs/data-sources-and-file-formats/040-parquet-format.md
+++ b/_docs/data-sources-and-file-formats/040-parquet-format.md
@@ -48,14 +48,14 @@ To maximize performance, set the target size of a Parquet 
row group to the numbe
 The default block size is 536870912 bytes.
 
 ### Type Mapping
-The high correlation between Parquet and SQL data types makes reading Parquet 
files effortless in Drill. Writing to Parquet files takes more work than 
reading. Because SQL does not support all Parquet data types, to prevent Drill 
from inferring a type other than one you want, use the [cast function] ({{ 
site.baseurl }}/docs/sql-functions) Drill offers more liberal casting 
capabilities than SQL for Parquet conversions if the Parquet data is of a 
logical type. 
+The high correlation between Parquet and SQL data types makes reading Parquet files effortless in Drill. Writing to Parquet files takes more work than reading. Because SQL does not support all Parquet data types, to prevent Drill from inferring a type other than one you want, use the [cast function]({{ site.baseurl }}/docs/data-type-conversion/#cast). Drill offers more liberal casting capabilities than SQL for Parquet conversions if the Parquet data is of a logical type. 
 
 The following general process converts a file from JSON to Parquet:
 
 * Create or use an existing storage plugin that specifies the storage location 
of the Parquet file, mutability of the data, and supported file formats.
 * Take a look at the JSON data. 
 * Create a table that selects the JSON file.
-* In the CTAS command, cast JSON string data to corresponding [SQL types]({{ 
site.baseurl }}/docs/json-data-model/data-type-mapping).
+* In the CTAS command, cast JSON string data to corresponding [SQL types]({{ 
site.baseurl }}/docs/json-data-model/#data-type-mapping).
 
 ### Example: Read JSON, Write Parquet
 This example demonstrates a storage plugin definition, a sample row of data 
from a JSON file, and a Drill query that writes the JSON input to Parquet 
output. 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md 
b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 90b69a1..548b709 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -53,15 +53,15 @@ Set the `store.json.read_numbers_as_double` property to 
true.
 
 When you set this option, Drill reads all numbers from the JSON files as 
DOUBLE. After reading the data, use a SELECT statement in Drill to cast data as 
follows:
 
-* Cast JSON values to [SQL types]({{ site.baseurl }}/docs/data-types), such as 
BIGINT, FLOAT, and INTEGER.
-* Cast JSON strings to [Drill Date/Time Data Type Formats]({{ site.baseurl 
}}/docs/supported-date-time-data-type-formats).
+* Cast JSON values to [SQL types]({{ site.baseurl 
}}/docs/json-data-model/#data-type-mapping), such as BIGINT, FLOAT, and INTEGER.
+* Cast JSON strings to [Drill Date/Time Data Type Formats]({{ site.baseurl 
}}/docs/date-time-and-timestamp).
 
-Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) 
internally for reading complex and nested data structures from JSON. You can 
cast data in a map or array of data to return a value from the structure, as 
shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl 
}}/docs/lesson-2-run-queries-with-ansi-sql). [“Query Complex Data”]({{ 
site.baseurl }}/docs/querying-complex-data-introduction) shows how to access 
nested arrays.
+Drill uses [map and array data types]({{ site.baseurl }}/docs/handling-different-data-types/#handling-json-and-parquet-data) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”]({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql/#create-a-view-on-a-mapr-db-table). [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-data-introduction) shows how to access nested arrays.
 
 ## Reading JSON
-To read JSON data using Drill, use a [file system storage plugin]({{ 
site.baseurl }}/docs/connect-to-a-data-source) that defines the JSON format. 
You can use the `dfs` storage plugin, which includes the definition. 
+To read JSON data using Drill, use a [file system storage plugin]({{ 
site.baseurl }}/docs/file-system-storage-plugin/) that defines the JSON format. 
You can use the `dfs` storage plugin, which includes the definition. 
 
-JSON data is often complex. Data can be deeply nested and semi-structured. but 
[you can use workarounds ]({{ site.baseurl 
}}/docs/json-data-model#limitations-and-workaroumds) covered later.
+JSON data is often complex. Data can be deeply nested and semi-structured, but you can use [workarounds]({{ site.baseurl }}/docs/json-data-model/#limitations-and-workarounds) covered later.
 
 Drill reads tuples defined in single objects, having no comma between objects. 
A JSON object is an unordered set of name/value pairs. Curly braces delimit 
objects in the JSON file:
 
@@ -310,7 +310,7 @@ To access the second geometry coordinate of the first city 
lot in the San Franci
     +-------------------+
     1 row selected (0.19 seconds)
 
-More examples of drilling down into an array are shown in ["Selecting Nested 
Data for a Column"]({{ site.baseurl 
}}/docs/query-3-selecting-nested-data-for-a-column). 
+More examples of drilling down into an array are shown in ["Selecting Nested 
Data for a Column"]({{ site.baseurl 
}}/docs/selecting-nested-data-for-a-column). 
 
 ### Example: Flatten an Array of Maps using a Subquery
 By flattening the following JSON file, which contains an array of maps, you 
can evaluate the records of the flattened data. 
@@ -449,7 +449,7 @@ Workaround: Separate lengthy objects into objects delimited 
by curly braces usin
  
 * [FLATTEN]({{ site.baseurl }}/docs/json-data-model#flatten-json-data) 
separates a set of nested JSON objects into individual rows in a DRILL table.
 
-* [KVGEN]({{ site.baseurl }}/docs/json-data-model#generate-key-value-pairs) 
separates objects having more elements than optimal for querying.
+* [KVGEN]({{ site.baseurl }}/docs/kvgen/) separates objects having more 
elements than optimal for querying.
 
   
 ### Nested Column Names 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/develop-custom-functions/020-develop-a-simple-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/020-develop-a-simple-function.md 
b/_docs/develop-custom-functions/020-develop-a-simple-function.md
index cdc1876..094f0c1 100644
--- a/_docs/develop-custom-functions/020-develop-a-simple-function.md
+++ b/_docs/develop-custom-functions/020-develop-a-simple-function.md
@@ -4,8 +4,8 @@ parent: "Develop Custom Functions"
 ---
 Create a class within a Java package that implements Drill’s simple interface
 into the program, and include the required information for the function type.
-Your function must include data types that Drill supports, such as int or
-BigInt. For a list of supported data types, refer to the [SQL Reference]({{ 
site.baseurl }}/docs/sql-reference).
+Your function must include data types that Drill supports, such as INTEGER or
+BIGINT. For a list of supported data types, refer to the [SQL Reference]({{ 
site.baseurl }}/docs/supported-data-types/).
 
 Complete the following steps to develop a simple function using Drill’s 
simple
 function interface:

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
----------------------------------------------------------------------
diff --git 
a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md 
b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
index 520c044..3368c24 100644
--- a/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
+++ b/_docs/develop-custom-functions/030-developing-an-aggregate-function.md
@@ -4,8 +4,8 @@ parent: "Develop Custom Functions"
 ---
 Create a class within a Java package that implements Drill’s aggregate
 interface into the program. Include the required information for the function.
-Your function must include data types that Drill supports, such as int or
-BigInt. For a list of supported data types, refer to the [SQL Reference]({{ 
site.baseurl }}/docs/sql-reference).
+Your function must include data types that Drill supports, such as INTEGER or
+BIGINT. For a list of supported data types, refer to the [SQL Reference]({{ 
site.baseurl }}/docs/supported-data-types/).
 
 Complete the following steps to create an aggregate function:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/img/18.png
----------------------------------------------------------------------
diff --git a/_docs/img/18.png b/_docs/img/18.png
index 691b816..ac5b802 100644
Binary files a/_docs/img/18.png and b/_docs/img/18.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git 
a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
 
b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
index cdbdf20..697f425 100644
--- 
a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
+++ 
b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
@@ -14,7 +14,7 @@ Start the Drill shell using the `drill-embedded` command. The 
command uses a jdb
 
    The `0: jdbc:drill:zk=local>`  prompt appears.  
 
-   At this point, you can [run 
queries]({{site.baseurl}}/docs/drill-in-10-minutes#query-sample-data).
+   At this point, you can [run queries]({{site.baseurl}}/docs/query-data).
 
 You can also use the **sqlline** command to start Drill using a custom 
connection string, as described in ["Using an Ad-Hoc Connection to 
Drill"](docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill).
 For example, you can specify the storage plugin when you start the shell. 
Doing so eliminates the need to specify the storage plugin in the query: For 
example, this command specifies the `dfs` storage plugin.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
----------------------------------------------------------------------
diff --git 
a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
 
b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
index cdade1c..680e139 100755
--- 
a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
+++ 
b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
@@ -98,9 +98,7 @@ You can now use MicroStrategy Analytics Enterprise to access 
Drill as a database
 This step includes an example scenario that shows you how to use 
MicroStrategy, with Drill as the database instance, to analyze Twitter data 
stored as complex JSON documents. 
 
 ####Scenario
-The Drill distributed file system plugin is configured to read Twitter data in 
a directory structure. A view is created in Drill to capture the most relevant 
maps and nested maps and arrays for the Twitter JSON documents. Refer to the 
following page for more information about how to configure and use Drill to 
work with complex data:
-
-https://cwiki.apache.org/confluence/display/DRILL/Query+Data
+The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to [Query Data](/docs/query-data-introduction/) for more information about how to configure and use Drill to work with complex data.
 
 ####Part 1: Create a Project
 Complete the following steps to create a project:

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/010-query-data-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/010-query-data-introduction.md 
b/_docs/query-data/010-query-data-introduction.md
index 708a2b7..980c975 100644
--- a/_docs/query-data/010-query-data-introduction.md
+++ b/_docs/query-data/010-query-data-introduction.md
@@ -3,11 +3,12 @@ title: "Query Data Introduction"
 parent: "Query Data"
 ---
 You can query local and distributed file systems, Hive, and HBase data sources
-registered with Drill. If you connect directly to a particular schema when
-you invoke SQLLine, you can issue SQL queries against that schema. If you d0
-not indicate a schema when you invoke SQLLine, you can issue the `USE
-<schema>` statement to run your queries against a particular schema. After you
-issue the `USE` statement, you can use absolute notation, such as 
`schema.table.column`.
+registered with Drill. You issue the `USE <storage plugin>` statement to run your queries against a particular storage plugin. You use dot notation and back ticks to specify the storage plugin name and sometimes the workspace name. For example, to use the dfs storage plugin and default workspace, issue this command: ``USE dfs.`default``
+
+Alternatively, you can omit the USE statement, and specify the storage plugin and workspace name using dot notation and back ticks. For example:
+
+``dfs.`default`.`/Users/drill-user/apache-drill-1.0.0/log/sqlline_queries.json```;
 
 You may need to use casting functions in some queries. For example, you may
 have to cast a string `"100"` to an integer in order to apply a math function
@@ -23,9 +24,6 @@ text may help you isolate the problem.
 The set command increases the default text display (number of characters). By
 default, most of the plan output is hidden.
 
-You may see errors if you try to use non-standard or unsupported SQL syntax in
-a query.
-
 Remember the following tips when querying data with Drill:
 
  * Include a semicolon at the end of SQL statements, except when you issue a command with an exclamation point `(!)`.

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/030-querying-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/030-querying-hbase.md 
b/_docs/query-data/030-querying-hbase.md
index bb8cc59..7febf42 100644
--- a/_docs/query-data/030-querying-hbase.md
+++ b/_docs/query-data/030-querying-hbase.md
@@ -2,7 +2,7 @@
 title: "Querying HBase"
 parent: "Query Data"
 ---
-This exercise creates two tables in HBase, students and clicks, that you can 
query with Drill. As an HBase user, you most likely are running Drill in 
distributed mode, in which Drill might start as a service. If you are not an 
HBase user and just kicking the tires, you might use the Drill Sandbox on a 
single-node cluster (embedded mode). In this case, you need to [start Drill]({{ 
site.baseurl }}/docs/install-drill/) before performing step 5 of this exercise. 
On the Drill Sandbox, HBase tables you create will be located in: 
/mapr/demo.mapr.com/tables
+This exercise creates two tables in HBase, students and clicks, that you can 
query with Drill. As an HBase user, you most likely are running Drill in 
distributed mode, in which Drill might start as a service. If you are not an 
HBase user and just kicking the tires, you might use the Drill Sandbox on a 
single-node cluster (embedded mode). In this case, you need to [start Drill]({{ 
site.baseurl }}/docs/install-drill/) before performing step 5 of this exercise. 
On the Drill Sandbox, HBase tables you create will be located in: 
`/mapr/demo.mapr.com/tables`
 
 You use the CONVERT_TO and CONVERT_FROM functions to convert binary text to 
readable output. You use the CAST function to convert the binary INT to 
readable output in step 4 of [Query HBase 
Tables]({{site.baseurl}}/docs/querying-hbase/#query-hbase-tables). When 
converting an INT or BIGINT number, having a byte count in the 
destination/source that does not match the byte count of the number in the 
VARBINARY source/destination, use CAST.
 
@@ -99,15 +99,16 @@ The `maprdb` format plugin provides access to the `/tables` 
directory. Use Drill
        SELECT * FROM students;
    The query returns binary results:
   
-        +------------+------------+------------+
-        |  row_key   |  account   |  address   |
-        +------------+------------+------------+
-        | [B@e6d9eb7 | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY=","zipcode":"MTIzNDU="} |
-        | [B@2823a2b4 | {"name":"Qm9i"} | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w","zipcode":"MTIzNDU="} |
-        | [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA==","zipcode":"MTIzNDU="} |
-        | [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ==","zipcode":"MTIzNDU="} |
-        +------------+------------+------------+
+        +-------------+-----------------------+---------------------------------------------------------------------------+
+        |  row_key    |  account              |                                address                                    |
+        +-------------+-----------------------+---------------------------------------------------------------------------+
+        | [B@e6d9eb7  | {"name":"QWxpY2U="}   | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY=","zipcode":"MTIzNDU="}     |
+        | [B@2823a2b4 | {"name":"Qm9i"}       | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w","zipcode":"MTIzNDU="}     |
+        | [B@3b8eec02 | {"name":"RnJhbms="}   | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA==","zipcode":"MTIzNDU="}     |
+        | [B@242895da | {"name":"TWFyeQ=="}   | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ==","zipcode":"MTIzNDU="} |
+        +-------------+-----------------------+---------------------------------------------------------------------------+
         4 rows selected (1.335 seconds)
+
    The Drill output reflects the actual data type of the HBase data, which is 
binary.
 
 2. Issue the following query, that includes the CONVERT_FROM function, to 
convert the `students` table to readable data:
@@ -124,14 +125,14 @@ The `maprdb` format plugin provides access to the 
`/tables` directory. Use Drill
 
     The query returns readable data:
 
-        +------------+------------+------------+------------+------------+
-        | studentid  |    name    |   state    |   street   |  zipcode   |
-        +------------+------------+------------+------------+------------+
-        | student1   | Alice      | CA         | 123 Ballmer Av | 12345      |
-        | student2   | Bob        | CA         | 1 Infinite Loop | 12345      |
-        | student3   | Frank      | CA         | 435 Walker Ct | 12345      |
+        +------------+------------+------------+------------------+------------+
+        | studentid  |    name    |   state    |       street     |  zipcode   |
+        +------------+------------+------------+------------------+------------+
+        | student1   | Alice      | CA         | 123 Ballmer Av   | 12345      |
+        | student2   | Bob        | CA         | 1 Infinite Loop  | 12345      |
+        | student3   | Frank      | CA         | 435 Walker Ct    | 12345      |
         | student4   | Mary       | CA         | 56 Southern Pkwy | 12345      |
-        +------------+------------+------------+------------+------------+
+        +------------+------------+------------+------------------+------------+
         4 rows selected (0.504 seconds)
 
 3. Query the clicks table to see which students visited google.com:
@@ -142,13 +143,13 @@ The `maprdb` format plugin provides access to the 
`/tables` directory. Use Drill
                CONVERT_FROM(clicks.clickinfo.url, 'UTF8') AS url 
         FROM clicks WHERE clicks.clickinfo.url LIKE '%google%'; 
 
-        +------------+------------+------------+------------+
-        |  clickid   | studentid  |    time    |    url     |
-        +------------+------------+------------+------------+
+        +------------+------------+--------------------------+-----------------------+
+        |  clickid   | studentid  |           time           |          url          |
+        +------------+------------+--------------------------+-----------------------+
         | click1     | student1   | 2014-01-01 12:01:01.0001 | http://www.google.com |
         | click3     | student2   | 2014-01-01 01:02:01.0001 | http://www.google.com |
         | click6     | student3   | 2013-02-01 12:01:01.0001 | http://www.google.com |
-        +------------+------------+------------+------------+
+        +------------+------------+--------------------------+-----------------------+
         3 rows selected (0.294 seconds)
 
 4. Query the clicks table to get the studentid of the student having 100 
items. Use CONVERT_FROM to convert the textual studentid and itemtype data, but 
use CAST to convert the integer quantity.

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/050-querying-hive.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/050-querying-hive.md 
b/_docs/query-data/050-querying-hive.md
index 080492f..515200a 100644
--- a/_docs/query-data/050-querying-hive.md
+++ b/_docs/query-data/050-querying-hive.md
@@ -18,7 +18,7 @@ To create a Hive table and query it with Drill, complete the 
following steps:
 
         hive> load data local inpath '/<directory path>/customers.csv' overwrite into table customers;
   4. Issue `quit` or `exit` to leave the Hive shell.
-  5. Start Drill. Refer to [/docs/install-drill) for instructions.
+  5. Start the Drill shell. 
   6. Issue the following query to Drill to get the first and last names of the 
first ten customers in the Hive table:  
 
         0: jdbc:drill:schema=hiveremote> SELECT firstname,lastname FROM hiveremote.`customers` limit 10;

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/060-querying-the-information-schema.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/060-querying-the-information-schema.md 
b/_docs/query-data/060-querying-the-information-schema.md
index fddb194..7d18120 100644
--- a/_docs/query-data/060-querying-the-information-schema.md
+++ b/_docs/query-data/060-querying-the-information-schema.md
@@ -107,4 +107,4 @@ of those columns:
     | OrderTotal  | Decimal    |
     +-------------+------------+
 
-In this release, Drill disables the DECIMAL data type, including casting to 
DECIMAL and reading DECIMAL types from Parquet and Hive. [Enable the DECIMAL 
data 
type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type)) if 
performance is not an issue.
\ No newline at end of file
+In this release, Drill disables the DECIMAL data type, including casting to 
DECIMAL and reading DECIMAL types from Parquet and Hive. [Enable the DECIMAL 
data 
type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) if 
performance is not an issue.
\ No newline at end of file
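
Enabling DECIMAL is a one-line option change. As a sketch, the option name shown here (`planner.enable_decimal_data_type`) is an assumption based on this Drill release and may differ in later versions; check the linked page for your version:

```sql
-- Assumed option name for this release; verify against the linked page.
ALTER SYSTEM SET `planner.enable_decimal_data_type` = true;
```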

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/070-query-sys-tbl.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/070-query-sys-tbl.md 
b/_docs/query-data/070-query-sys-tbl.md
index 17041fd..5cab6dc 100644
--- a/_docs/query-data/070-query-sys-tbl.md
+++ b/_docs/query-data/070-query-sys-tbl.md
@@ -13,21 +13,21 @@ system tables that you can query.
 Issue the `SHOW DATABASES` command to view Drill databases.
 
     0: jdbc:drill:zk=10.10.100.113:5181> show databases;
-    +-------------+
-    | SCHEMA_NAME |
-    +-------------+
-    | M7          |
-    | hive.default|
-    | dfs.default |
-    | dfs.root    |
-    | dfs.views   |
-    | dfs.tmp     |
-    | dfs.tpcds   |
-    | sys         |
-    | cp.default  |
-    | hbase       |
+    +--------------------+
+    |    SCHEMA_NAME     |
+    +--------------------+
+    | M7                 |
+    | hive.default       |
+    | dfs.default        |
+    | dfs.root           |
+    | dfs.views          |
+    | dfs.tmp            |
+    | dfs.tpcds          |
+    | sys                |
+    | cp.default         |
+    | hbase              |
     | INFORMATION_SCHEMA |
-    +-------------+
+    +--------------------+
     11 rows selected (0.162 seconds)
 
 Drill returns `sys` in the database results.
@@ -67,13 +67,13 @@ Query the drillbits, version, and options tables in the sys 
database.
 ### Query the drillbits table.
 
     0: jdbc:drill:zk=10.10.100.113:5181> select * from drillbits;
-    +------------------+------------+--------------+------------+---------+
-    |   host            | user_port | control_port | data_port  |  current|
-    +-------------------+------------+--------------+------------+--------+
-    | qa-node115.qa.lab | 31010     | 31011        | 31012      | true    |
-    | qa-node114.qa.lab | 31010     | 31011        | 31012      | false   |
-    | qa-node116.qa.lab | 31010     | 31011        | 31012      | false   |
-    +------------+------------+--------------+------------+---------------+
+    +-------------------+------------+--------------+------------+---------+
+    |       host        | user_port  | control_port | data_port  | current |
+    +-------------------+------------+--------------+------------+---------+
+    | qa-node115.qa.lab | 31010      | 31011        | 31012      | true    |
+    | qa-node114.qa.lab | 31010      | 31011        | 31012      | false   |
+    | qa-node116.qa.lab | 31010      | 31011        | 31012      | false   |
+    +-------------------+------------+--------------+------------+---------+
     3 rows selected (0.146 seconds)
 
   * host   
@@ -94,12 +94,12 @@ query. This Drillbit is the Foreman for the current session.
 ### Query the version table.
 
     0: jdbc:drill:zk=10.10.100.113:5181> select * from version;
-    +------------+----------------+-------------+-------------+------------+
-    | commit_id  | commit_message | commit_time | build_email | build_time |
-    +------------+----------------+-------------+-------------+------------+
-    | 108d29fce3d8465d619d45db5f6f433ca3d97619 | DRILL-1635: Additional fix for validation exceptions. | 14.11.2014 @ 02:32:47 UTC | Unknown    | 14.11.2014 @ 03:56:07 UTC |
-    +------------+----------------+-------------+-------------+------------+
-    1 row selected (0.144 seconds)
+    +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
+    |                 commit_id                 |                           commit_message                           |        commit_time         | build_email  |         build_time         |
+    +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
+    | d8b19759657698581cc0d01d7038797952888123  | DRILL-3100: TestImpersonationDisabledWithMiniDFS fails on Windows  | 15.05.2015 @ 05:18:03 UTC  | Unknown      | 15.05.2015 @ 06:52:32 UTC  |
+    +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
+    1 row selected (0.099 seconds)
   * commit_id  
 The github id of the release you are running. For example, <https://github.com
 /apache/drill/commit/e3ab2c1760ad34bda80141e2c3108f7eda7c9104>
@@ -120,21 +120,22 @@ Drill provides system, session, and boot options that you 
can query.
 The following example shows a query on the system options:
 
    0: jdbc:drill:zk=10.10.100.113:5181> select * from options where type='SYSTEM' limit 10;
-    +------------+------------+------------+------------+------------+------------+------------+
-    |    name   |   kind    |   type    |  num_val   | string_val |  bool_val  | float_val  |
-    +------------+------------+------------+------------+------------+------------+------------+
-    | exec.max_hash_table_size | LONG       | SYSTEM    | 1073741824 | null     | null      | null      |
-    | planner.memory.max_query_memory_per_node | LONG       | SYSTEM    | 2048       | null     | null      | null      |
-    | planner.join.row_count_estimate_factor | DOUBLE   | SYSTEM    | null      | null      | null      | 1.0       |
-    | planner.affinity_factor | DOUBLE  | SYSTEM    | null      | null      | null       | 1.2      |
-    | exec.errors.verbose | BOOLEAN | SYSTEM    | null      | null      | false      | null     |
-    | planner.disable_exchanges | BOOLEAN   | SYSTEM    | null      | null      | false      | null     |
-    | exec.java_compiler_debug | BOOLEAN    | SYSTEM    | null      | null      | true      | null      |
-    | exec.min_hash_table_size | LONG       | SYSTEM    | 65536     | null      | null      | null       |
-    | exec.java_compiler_janino_maxsize | LONG       | SYSTEM   | 262144    | null      | null      | null      |
-    | planner.enable_mergejoin | BOOLEAN    | SYSTEM    | null      | null      | true      | null       |
-    +------------+------------+------------+------------+------------+------------+------------+
-    10 rows selected (0.334 seconds)  
+    +-------------------------------------------------+----------+---------+----------+-------------+-------------+-----------+------------+
+    |                      name                       |   kind   |  type   |  status  |   num_val   | string_val  | bool_val  | float_val  |
+    +-------------------------------------------------+----------+---------+----------+-------------+-------------+-----------+------------+
+    | drill.exec.functions.cast_empty_string_to_null  | BOOLEAN  | SYSTEM  | DEFAULT  | null        | null        | false     | null       |
+    | drill.exec.storage.file.partition.column.label  | STRING   | SYSTEM  | DEFAULT  | null        | dir         | null      | null       |
+    | exec.errors.verbose                             | BOOLEAN  | SYSTEM  | DEFAULT  | null        | null        | false     | null       |
+    | exec.java_compiler                              | STRING   | SYSTEM  | DEFAULT  | null        | DEFAULT     | null      | null       |
+    | exec.java_compiler_debug                        | BOOLEAN  | SYSTEM  | DEFAULT  | null        | null        | true      | null       |
+    | exec.java_compiler_janino_maxsize               | LONG     | SYSTEM  | DEFAULT  | 262144      | null        | null      | null       |
+    | exec.max_hash_table_size                        | LONG     | SYSTEM  | DEFAULT  | 1073741824  | null        | null      | null       |
+    | exec.min_hash_table_size                        | LONG     | SYSTEM  | DEFAULT  | 65536       | null        | null      | null       |
+    | exec.queue.enable                               | BOOLEAN  | SYSTEM  | DEFAULT  | null        | null        | false     | null       |
+    | exec.queue.large                                | LONG     | SYSTEM  | DEFAULT  | 10          | null        | null      | null       |
+    +-------------------------------------------------+----------+---------+----------+-------------+-------------+-----------+------------+
+    10 rows selected (0.216 seconds)
+
   * name  
 The name of the option.
   * kind  
@@ -151,9 +152,7 @@ The default value, which is true or false; otherwise, null.
 The default value, which is of the double, float, or long double data type;
 otherwise, null.
 
-For information about how to configure Drill system and session options, see[
-Planning and Execution Options]({{ site.baseurl 
}}/docs/planning-and-execution-options).
+For information about how to configure Drill system and session options, see 
[Planning and Execution Options]({{ site.baseurl 
}}/docs/planning-and-execution-options).
 
-For information about how to configure Drill start-up options, see[ Start-Up
-Options]({{ site.baseurl }}/docs/start-up-options).
+For information about how to configure Drill start-up options, see [Start-Up 
Options]({{ site.baseurl }}/docs/start-up-options).
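
A session-level change pairs naturally with the options query above. For example, this sketch turns on verbose error messages for the current connection and then reads the option back from `sys.options`:

```sql
ALTER SESSION SET `exec.errors.verbose` = true;

-- Confirm the change; the matching row should now show the session-scoped value.
SELECT name, kind, type, bool_val
FROM sys.options
WHERE name = 'exec.errors.verbose';
```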
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/query-a-file-system/010-querying-json-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/010-querying-json-files.md 
b/_docs/query-data/query-a-file-system/010-querying-json-files.md
index 219f2b1..51f499d 100644
--- a/_docs/query-data/query-a-file-system/010-querying-json-files.md
+++ b/_docs/query-data/query-a-file-system/010-querying-json-files.md
@@ -9,33 +9,16 @@ data. Use SQL syntax to query the sample `JSON` file.
 To view the data in the `employee.json` file, submit the following SQL query
 to Drill:
 
-         0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json`;
+         0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 5;
 
 The query returns the following results:
 
-**Example of partial output**
+    +--------------+----------------------------+---------------------+---------------+--------------+----------------------------+-----------+----------------+-------------+------------------------+----------+----------------+----------------------+-----------------+---------+-----------------------+
+    | employee_id  |         full_name          |     first_name      |   last_name   | position_id  |       position_title       | store_id  | department_id  | birth_date  |       hire_date        |  salary  | supervisor_id  |   education_level    | marital_status  | gender  |    management_role    |
+    +--------------+----------------------------+---------------------+---------------+--------------+----------------------------+-----------+----------------+-------------+------------------------+----------+----------------+----------------------+-----------------+---------+-----------------------+
+    | 1            | Sheri Nowmer               | Sheri               | Nowmer        | 1            | President                  | 0         | 1              | 1961-08-26  | 1994-12-01 00:00:00.0  | 80000.0  | 0              | Graduate Degree      | S               | F       | Senior Management     |
+    | 2            | Derrick Whelply            | Derrick             | Whelply       | 2            | VP Country Manager         | 0         | 1              | 1915-07-03  | 1994-12-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree      | M               | M       | Senior Management     |
+    | 4            | Michael Spence             | Michael             | Spence        | 2            | VP Country Manager         | 0         | 1              | 1969-06-20  | 1998-01-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree      | S               | M       | Senior Management     |
+    | 5            | Maya Gutierrez             | Maya                | Gutierrez     | 2            | VP Country Manager         | 0         | 1              | 1951-05-10  | 1998-01-01 00:00:00.0  | 35000.0  | 1              | Bachelors Degree     | M               | F       | Senior Management     |
+    +--------------+----------------------------+---------------------+---------------+--------------+----------------------------+-----------+----------------+-------------+------------------------+----------+----------------+----------------------+-----------------+---------+-----------------------+
 
-    +-------------+------------+------------+------------+-------------+-----------+
-    | employee_id | full_name  | first_name | last_name  | position_id | position_ |
-    +-------------+------------+------------+------------+-------------+-----------+
-    | 1101        | Steve Eurich | Steve      | Eurich     | 16          | Store T |
-    | 1102        | Mary Pierson | Mary       | Pierson    | 16          | Store T |
-    | 1103        | Leo Jones  | Leo        | Jones      | 16          | Store Tem |
-    | 1104        | Nancy Beatty | Nancy      | Beatty     | 16          | Store T |
-    | 1105        | Clara McNight | Clara      | McNight    | 16          | Store  |
-    | 1106        | Marcella Isaacs | Marcella   | Isaacs     | 17          | Stor |
-    | 1107        | Charlotte Yonce | Charlotte  | Yonce      | 17          | Stor |
-    | 1108        | Benjamin Foster | Benjamin   | Foster     | 17          | Stor |
-    | 1109        | John Reed  | John       | Reed       | 17          | Store Per |
-    | 1110        | Lynn Kwiatkowski | Lynn       | Kwiatkowski | 17          | St |
-    | 1111        | Donald Vann | Donald     | Vann       | 17          | Store Pe |
-    | 1112        | William Smith | William    | Smith      | 17          | Store  |
-    | 1113        | Amy Hensley | Amy        | Hensley    | 17          | Store Pe |
-    | 1114        | Judy Owens | Judy       | Owens      | 17          | Store Per |
-    | 1115        | Frederick Castillo | Frederick  | Castillo   | 17          | S |
-    | 1116        | Phil Munoz | Phil       | Munoz      | 17          | Store Per |
-    | 1117        | Lori Lightfoot | Lori       | Lightfoot  | 17          | Store |
-    ...
-    +-------------+------------+------------+------------+-------------+-----------+
-    1,155 rows selected (0.762 seconds)
     0: jdbc:drill:zk=local>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/query-a-file-system/020-querying-parquet-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/020-querying-parquet-files.md 
b/_docs/query-data/query-a-file-system/020-querying-parquet-files.md
index b93a914..3731f65 100644
--- a/_docs/query-data/query-a-file-system/020-querying-parquet-files.md
+++ b/_docs/query-data/query-a-file-system/020-querying-parquet-files.md
@@ -8,42 +8,37 @@ that you can query. Use SQL syntax to query the 
`region.parquet` and
 
 {% include startnote.html %}Your Drill installation location may differ from 
the examples used here.{% include endnote.html %} 
 
-The examples assume that Drill was installed in embedded mode on your machine following the [Drill in 10 Minutes ]({{ site.baseurl }}/docs/drill-in-10-minutes) tutorial. If you installed Drill in distributed mode, or your `sample-data` directory differs from the location used in the examples, make sure to change the `sample-data` directory to the correct location before you run the queries.
+The examples assume that Drill was [installed in embedded mode]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode). If you installed Drill in distributed mode, or if your `sample-data` directory differs from the location used in the examples, change the `sample-data` directory to the correct location before you run the queries.
 
 ## Region File
 
-If you followed the Apache Drill in 10 Minutes instructions to install Drill
-in embedded mode, the path to the parquet file varies between operating
-systems.
-
 To view the data in the `region.parquet` file, issue the query appropriate for
 your operating system:
 
   * Linux  
     
-        SELECT * FROM dfs.`/opt/drill/apache-drill-0.4.0-incubating/sample-data/region.parquet`;
+        SELECT * FROM dfs.`/opt/drill/apache-drill-1.0.0/sample-data/region.parquet`;
 
   * Mac OS X  
         
-        SELECT * FROM dfs.`/Users/max/drill/apache-drill-0.4.0-incubating/sample-data/region.parquet`;
+        SELECT * FROM dfs.`/Users/max/drill/apache-drill-1.0.0/sample-data/region.parquet`;
 
   * Windows  
     
-        SELECT * FROM dfs.`C:\drill\apache-drill-0.4.0-incubating\sample-data\region.parquet`;
+        SELECT * FROM dfs.`C:\drill\apache-drill-1.0.0\sample-data\region.parquet`;
 
 The query returns the following results:
 
-    +------------+------------+
-    |   EXPR$0   |   EXPR$1   |
-    +------------+------------+
-    | AFRICA     | lar deposits. blithely final packages cajole. regular waters ar |
-    | AMERICA    | hs use ironic, even requests. s |
-    | ASIA       | ges. thinly even pinto beans ca |
-    | EUROPE     | ly final courts cajole furiously final excuse |
-    | MIDDLE EAST | uickly special accounts cajole carefully blithely close reques |
-    +------------+------------+
-    5 rows selected (0.165 seconds)
-    0: jdbc:drill:zk=local>
+    +--------------+--------------+-----------------------+
+    | R_REGIONKEY  |    R_NAME    |       R_COMMENT       |
+    +--------------+--------------+-----------------------+
+    | 0            | AFRICA       | lar deposits. blithe  |
+    | 1            | AMERICA      | hs use ironic, even   |
+    | 2            | ASIA         | ges. thinly even pin  |
+    | 3            | EUROPE       | ly final courts cajo  |
+    | 4            | MIDDLE EAST  | uickly special accou  |
+    +--------------+--------------+-----------------------+
+    5 rows selected (0.272 seconds)
 
 ## Nation File
 
@@ -56,46 +51,45 @@ your operating system:
 
   * Linux  
   
-        SELECT * FROM dfs.`/opt/drill/apache-drill-0.4.0-incubating/sample-data/nation.parquet`;
+        SELECT * FROM dfs.`/opt/drill/apache-drill-1.0.0/sample-data/nation.parquet`;
 
   * Mac OS X  
 
-        SELECT * FROM dfs.`/Users/max/drill/apache-drill-0.4.0-incubating/sample-data/nation.parquet`;
+        SELECT * FROM dfs.`/Users/max/drill/apache-drill-1.0.0/sample-data/nation.parquet`;
 
   * Windows  
 
-        SELECT * FROM dfs.`C:\drill\apache-drill-0.4.0-incubating\sample-data\nation.parquet`;
+        SELECT * FROM dfs.`C:\drill\apache-drill-1.0.0\sample-data\nation.parquet`;
 
 The query returns the following results:
 
-    +------------+------------+------------+------------+
-    |   EXPR$0   |   EXPR$1   |   EXPR$2   |   EXPR$3   |
-    +------------+------------+------------+------------+
-    | 0          | 0          | ALGERIA    |  haggle. carefully final deposits det |
-    | 1          | 1          | ARGENTINA  | al foxes promise slyly according to t |
-    | 2          | 1          | BRAZIL     | y alongside of the pending deposits.  |
-    | 3          | 1          | CANADA     | eas hang ironic, silent packages. sly |
-    | 4          | 4          | EGYPT      | y above the carefully unusual theodol |
-    | 5          | 0          | ETHIOPIA   | ven packages wake quickly. regu |
-    | 6          | 3          | FRANCE     | refully final requests. regular, iron |
-    | 7          | 3          | GERMANY    | l platelets. regular accounts x-ray:  |
-    | 8          | 2          | INDIA      | ss excuses cajole slyly across the pa |
-    | 9          | 2          | INDONESIA  |  slyly express asymptotes. regular de |
-    | 10         | 4          | IRAN       | efully alongside of the slyly final d |
-    | 11         | 4          | IRAQ       | nic deposits boost atop the quickly f |
-    | 12         | 2          | JAPAN      | ously. final, express gifts cajole a |
-    | 13         | 4          | JORDAN     | ic deposits are blithely about the ca |
-    | 14         | 0          | KENYA      |  pending excuses haggle furiously dep |
-    | 15         | 0          | MOROCCO    | rns. blithely bold courts among the c |
-    | 16         | 0          | MOZAMBIQUE | s. ironic, unusual asymptotes wake bl |
-    | 17         | 1          | PERU       | platelets. blithely pending dependenc |
-    | 18         | 2          | CHINA      | c dependencies. furiously express not |
-    | 19         | 3          | ROMANIA    | ular asymptotes are about the furious |
-    | 20         | 4          | SAUDI ARABIA | ts. silent requests haggle. closely |
-    | 21         | 2          | VIETNAM    | hely enticingly express accounts. eve |
-    | 22         | 3          | RUSSIA     |  requests against the platelets use n |
-    | 23         | 3          | UNITED KINGDOM | eans boost carefully special requ |
-    | 24         | 1          | UNITED STATES | y final packages. slow foxes cajol |
-    +------------+------------+------------+------------+
-    25 rows selected (2.401 seconds)
\ No newline at end of file
+    +--------------+-----------------+--------------+-----------------------+
+    | N_NATIONKEY  |     N_NAME      | N_REGIONKEY  |       N_COMMENT       |
+    +--------------+-----------------+--------------+-----------------------+
+    | 0            | ALGERIA         | 0            |  haggle. carefully f  |
+    | 1            | ARGENTINA       | 1            | al foxes promise sly  |
+    | 2            | BRAZIL          | 1            | y alongside of the p  |
+    | 3            | CANADA          | 1            | eas hang ironic, sil  |
+    | 4            | EGYPT           | 4            | y above the carefull  |
+    | 5            | ETHIOPIA        | 0            | ven packages wake qu  |
+    | 6            | FRANCE          | 3            | refully final reques  |
+    | 7            | GERMANY         | 3            | l platelets. regular  |
+    | 8            | INDIA           | 2            | ss excuses cajole sl  |
+    | 9            | INDONESIA       | 2            |  slyly express asymp  |
+    | 10           | IRAN            | 4            | efully alongside of   |
+    | 11           | IRAQ            | 4            | nic deposits boost a  |
+    | 12           | JAPAN           | 2            | ously. final, expres  |
+    | 13           | JORDAN          | 4            | ic deposits are blit  |
+    | 14           | KENYA           | 0            |  pending excuses hag  |
+    | 15           | MOROCCO         | 0            | rns. blithely bold c  |
+    | 16           | MOZAMBIQUE      | 0            | s. ironic, unusual a  |
+    | 17           | PERU            | 1            | platelets. blithely   |
+    | 18           | CHINA           | 2            | c dependencies. furi  |
+    | 19           | ROMANIA         | 3            | ular asymptotes are   |
+    | 20           | SAUDI ARABIA    | 4            | ts. silent requests   |
+    | 21           | VIETNAM         | 2            | hely enticingly expr  |
+    | 22           | RUSSIA          | 3            |  requests against th  |
+    | 23           | UNITED KINGDOM  | 3            | eans boost carefully  |
+    | 24           | UNITED STATES   | 1            | y final packages. sl  |
+    +--------------+-----------------+--------------+-----------------------+
+    25 rows selected (0.102 seconds)
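
Because both sample files share a region key, you can also join them. This sketch uses the Linux path from the examples above; adjust the path for your operating system and installation location:

```sql
-- Join the two sample Parquet files on the region key.
-- Paths assume the Linux embedded-mode install shown above.
SELECT n.N_NAME, r.R_NAME
FROM dfs.`/opt/drill/apache-drill-1.0.0/sample-data/nation.parquet` n
JOIN dfs.`/opt/drill/apache-drill-1.0.0/sample-data/region.parquet` r
  ON n.N_REGIONKEY = r.R_REGIONKEY
LIMIT 5;
```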

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
----------------------------------------------------------------------
diff --git 
a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md 
b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
index 1fd9d84..8924835 100644
--- a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
+++ b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
@@ -44,19 +44,19 @@ records:
 Drill recognizes each row as an array of values and returns one column for
 each row.
 
-        0: jdbc:drill:zk=local> select * from dfs.`/Users/brumsby/drill/plays.csv`;
+    0: jdbc:drill:zk=local> select * from dfs.`/Users/brumsby/drill/plays.csv`;
  
-    +------------+
-    |  columns   |
-    +------------+
-    | ["1599","As You Like It"] |
-    | ["1601","Twelfth Night"] |
-    | ["1594","Comedy of Errors"] |
-    | ["1595","Romeo and Juliet"] |
+    +-----------------------------------+
+    |              columns              |
+    +-----------------------------------+
+    | ["1599","As You Like It"]         |
+    | ["1601","Twelfth Night"]          |
+    | ["1594","Comedy of Errors"]       |
+    | ["1595","Romeo and Juliet"]       |
     | ["1596","The Merchant of Venice"] |
-    | ["1610","The Tempest"] |
-    | ["1599","Hamlet"] |
-    +------------+
+    | ["1610","The Tempest"]            |
+    | ["1599","Hamlet"]                 |
+    +-----------------------------------+
     7 rows selected (0.089 seconds)
 
 ## Columns[n] Syntax
@@ -67,17 +67,17 @@ based index, so the first column is column `0`.)
 
    0: jdbc:drill:zk=local> select columns[0], columns[1] from dfs.`/Users/brumsby/drill/plays.csv`;
  
-    +------------+------------+
-    |   EXPR$0   |   EXPR$1   |
-    +------------+------------+
-    | 1599       | As You Like It |
-    | 1601       | Twelfth Night |
-    | 1594       | Comedy of Errors |
-    | 1595       | Romeo and Juliet |
+    +------------+------------------------+
+    |   EXPR$0   |         EXPR$1         |
+    +------------+------------------------+
+    | 1599       | As You Like It         |
+    | 1601       | Twelfth Night          |
+    | 1594       | Comedy of Errors       |
+    | 1595       | Romeo and Juliet       |
     | 1596       | The Merchant of Venice |
-    | 1610       | The Tempest |
-    | 1599       | Hamlet     |
-    +------------+------------+
+    | 1610       | The Tempest            |
+    | 1599       | Hamlet                 |
+    +------------+------------------------+
     7 rows selected (0.137 seconds)
 
 You can use aliases to return meaningful column names. Note that `YEAR` is a
@@ -86,17 +86,17 @@ reserved word, so the `Year` alias must be enclosed by back 
ticks.
     0: jdbc:drill:zk=local> select columns[0] as `Year`, columns[1] as Play 
     from dfs.`/Users/brumsby/drill/plays.csv`;
  
-    +------------+------------+
-    |    Year    |    Play    |
-    +------------+------------+
-    | 1599       | As You Like It |
-    | 1601       | Twelfth Night |
-    | 1594       | Comedy of Errors |
-    | 1595       | Romeo and Juliet |
+    +------------+------------------------+
+    |    Year    |          Play          |
+    +------------+------------------------+
+    | 1599       | As You Like It         |
+    | 1601       | Twelfth Night          |
+    | 1594       | Comedy of Errors       |
+    | 1595       | Romeo and Juliet       |
     | 1596       | The Merchant of Venice |
-    | 1610       | The Tempest |
-    | 1599       | Hamlet     |
-    +------------+------------+
+    | 1610       | The Tempest            |
+    | 1599       | Hamlet                 |
+    +------------+------------------------+
     7 rows selected (0.113 seconds)
 
 You cannot refer to the aliases in subsequent clauses of the query. Use the
@@ -106,12 +106,12 @@ example:
     0: jdbc:drill:zk=local> select columns[0] as `Year`, columns[1] as Play 
     from dfs.`/Users/brumsby/drill/plays.csv` where columns[0]>1599;
  
-    +------------+------------+
-    |    Year    |    Play    |
-    +------------+------------+
+    +------------+---------------+
+    |    Year    |      Play     |
+    +------------+---------------+
     | 1601       | Twelfth Night |
-    | 1610       | The Tempest |
-    +------------+------------+
+    | 1610       | The Tempest   |
+    +------------+---------------+
     2 rows selected (0.201 seconds)
 
 Note that the restriction with the use of aliases applies to queries against

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/query-a-file-system/040-querying-directories.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/040-querying-directories.md 
b/_docs/query-data/query-a-file-system/040-querying-directories.md
index ef2def9..1a55b75 100644
--- a/_docs/query-data/query-a-file-system/040-querying-directories.md
+++ b/_docs/query-data/query-a-file-system/040-querying-directories.md
@@ -16,20 +16,20 @@ the "union" of the two files, ordered by the first column:
     0: jdbc:drill:zk=local> select columns[0] as `Year`, columns[1] as Play 
     from dfs.`/Users/brumsby/drill/testdata` order by 1;
  
-    +------------+------------+
-    |    Year    |    Play    |
-    +------------+------------+
-    | 1594       | Comedy of Errors |
-    | 1595       | Romeo and Juliet |
+    +------------+------------------------+
+    |    Year    |          Play          |
+    +------------+------------------------+
+    | 1594       | Comedy of Errors       |
+    | 1595       | Romeo and Juliet       |
     | 1596       | The Merchant of Venice |
-    | 1599       | As You Like It |
-    | 1599       | Hamlet     |
-    | 1601       | Twelfth Night |
-    | 1606       | Macbeth    |
-    | 1606       | King Lear  |
-    | 1609       | The Winter's Tale |
-    | 1610       | The Tempest |
-    +------------+------------+
+    | 1599       | As You Like It         |
+    | 1599       | Hamlet                 |
+    | 1601       | Twelfth Night          |
+    | 1606       | Macbeth                |
+    | 1606       | King Lear              |
+    | 1609       | The Winter's Tale      |
+    | 1610       | The Tempest            |
+    +------------+------------------------+
     10 rows selected (0.296 seconds)
 
 You can drill down further and automatically query subdirectories as well. For
@@ -65,11 +65,11 @@ files inside the subdirectory named `2013`. The variable 
`dir0` refers to the
 first level down from logs, `dir1` to the next level, and so on.
 
     0: jdbc:drill:> use bob.logdata;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
+    +------------+-----------------------------------------+
+    |     ok     |                 summary                 |
+    +------------+-----------------------------------------+
     | true       | Default schema changed to 'bob.logdata' |
-    +------------+------------+
+    +------------+-----------------------------------------+
     1 row selected (0.305 seconds)
  
     0: jdbc:drill:> select * from logs where dir0='2013' limit 10;
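
The partition variables also work in grouping and ordering clauses. Building on the query above, this sketch assumes the same `logs` layout, with years at the first directory level and months below:

```sql
-- dir0 = first directory level under logs (year),
-- dir1 = second level (month); layout assumed from the example above.
SELECT dir0 AS `year`, dir1 AS `month`, COUNT(*) AS `total`
FROM logs
WHERE dir0 = '2013'
GROUP BY dir0, dir1
ORDER BY dir1;
```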

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md b/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
index a6a8c84..099e047 100644
--- a/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
+++ b/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
@@ -5,12 +5,12 @@ parent: "Querying Complex Data"
 Apache Drill queries do not require prior knowledge of the actual data you are
 trying to access, regardless of its source system or its schema and data
 types. The sweet spot for Apache Drill is a SQL query workload against
-"complex data": data made up of various types of records and fields, rather
+*complex data*: data made up of various types of records and fields, rather
 than data in a recognizable relational form (discrete rows and columns). Drill
 is capable of discovering the form of the data when you submit the query.
 Nested data formats such as JSON (JavaScript Object Notation) files and
 Parquet files are not only _accessible_: Drill provides special operators and
-functions that you can use to _drill down _into these files and ask
+functions that you can use to _drill down_ into these files and ask
 interesting analytic questions.
 
 These operators and functions include:

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/sql-reference/090-sql-extensions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/090-sql-extensions.md b/_docs/sql-reference/090-sql-extensions.md
index a30961c..ed97611 100644
--- a/_docs/sql-reference/090-sql-extensions.md
+++ b/_docs/sql-reference/090-sql-extensions.md
@@ -4,7 +4,7 @@ parent: "SQL Reference"
 ---
 Drill extends SQL to work with Hadoop-scale data and to explore smaller-scale 
data in ways not possible with SQL. Using intuitive SQL extensions you work 
with self-describing data and complex data types. Extensions to SQL include 
capabilities for exploring self-describing data, such as files and HBase, 
directly in the native format.
 
-Drill provides language support for pointing to [storage plugin]() interfaces 
that Drill uses to interact with data sources. Use the name of a storage plugin 
to specify a file system *database* as a prefix in queries when you refer to 
objects across databases. Query files, including compressed .gz files, and 
[directories]({{ site.baseurl }}/docs/querying-directories), as you would query 
an SQL table. You can query [multiple files in a directory]({{ site.baseurl 
}}/docs/querying-directories).
+Drill provides language support for pointing to [storage 
plugin]({{site.baseurl}}/docs/connect-a-data-source-introduction) interfaces 
that Drill uses to interact with data sources. Use the name of a storage plugin 
to specify a file system *database* as a prefix in queries when you refer to 
objects across databases. Query files, including compressed .gz files, and 
[directories]({{ site.baseurl }}/docs/querying-directories), as you would query 
an SQL table. You can query multiple files in a directory.
 
 Drill extends the SELECT statement for reading complex, multi-structured data. 
The extended CREATE TABLE AS SELECT provides the capability to write data of 
complex/multi-structured data types. Drill extends the [lexical 
rules](http://drill.apache.org/docs/lexical-structure) for working with files 
and directories, such as using back ticks for including file names, directory 
names, and reserved words in queries. Drill syntax supports using the file 
system as a persistent store for query profiles and diagnostic information.
 
@@ -13,14 +13,14 @@ Drill extends the SELECT statement for reading complex, multi-structured data. T
 Drill supports Hive and HBase as plug-and-play data sources. Drill can read 
tables created in Hive that use [data types compatible]({{ site.baseurl 
}}/docs/hive-to-drill-data-type-mapping) with Drill.  You can query Hive tables 
without modifications. You can query self-describing data without requiring 
metadata definitions in the Hive metastore. Primitives, such as JOIN, support 
columnar operation. 
 
 ## Extensions for JSON-related Data Sources
-For reading JSON numbers as DOUBLE or reading all JSON data as VARCHAR, use a 
[store.json 
option](http://drill.apache.org/docs/handling-different-data-types/#reading-numbers-of-different-types-from-json).
 Drill extends SQL to provide access to repeating values in arrays and arrays 
within arrays (array indexes). You can use these extensions to reach into 
deeply nested data. Drill extensions use standard JavaScript notation for 
referencing data elements in a hierarchy, as shown in ["Analyzing JSON."]({{ 
site.baseurl }}/docs/json-data-model#analyzing-json)
+For reading JSON numbers as DOUBLE or reading all JSON data as VARCHAR, use a 
[store.json 
option]({{site.baseurl}}/docs/handling-different-data-types/#reading-numbers-of-different-types-from-json).
 Drill extends SQL to provide access to repeating values in arrays and arrays 
within arrays (array indexes). You can use these extensions to reach into 
deeply nested data. Drill extensions use standard JavaScript notation for 
referencing data elements in a hierarchy, as shown in ["Analyzing JSON."]({{ 
site.baseurl }}/docs/json-data-model#analyzing-json)
 
 ## Extensions for Parquet Data Sources
 SQL does not support all Parquet data types, so Drill infers data types in 
many instances. Users [cast]({{ site.baseurl }}/docs/sql-functions) data types 
to ensure getting a particular data type. Drill offers more liberal casting 
capabilities than SQL for Parquet conversions if the Parquet data is of a 
logical type. You can use the default dfs storage plugin installed with Drill 
for reading and writing Parquet files as shown in the section, [“Parquet 
Format.”]({{ site.baseurl }}/docs/parquet-format)
 
 
 ## Extensions for Text Data Sources
-Drill handles plain text files and directories like standard SQL tables and 
can infer knowledge about the schema of the data. Drill extends SQL to handle 
structured file types, such as comma separated values (CSV) files. An extension 
of the SELECT statement provides COLUMNS[n] syntax for accessing CSV rows in a 
readable format, as shown in ["COLUMNS[n] Syntax."]({{ site.baseurl 
}}/docs/querying-plain-text-files)
+Drill handles plain text files and directories like standard SQL tables and 
can infer knowledge about the schema of the data. Drill extends SQL to handle 
structured file types, such as comma separated values (CSV) files. An extension 
of the SELECT statement provides COLUMNS[n] syntax for accessing CSV rows in a 
readable format, as shown in ["COLUMNS[n] Syntax."]({{ site.baseurl 
}}/docs/querying-plain-text-files/#columns[n]-syntax)
 
 ## SQL Function Extensions
 Drill provides the following functions for analyzing nested data.
@@ -34,7 +34,7 @@ Drill provides the following functions for analyzing nested data.
 
 ## Other Extensions
 
-The [`sys` database system tables]() provide port, version, and option 
information.  For example, Drill connects to a random node. You query the sys 
table to know where you are connected:
+The [`sys` tables](/docs/querying-system-tables/) provide port, version, and 
option information.  For example, Drill connects to a random node. You query 
the sys table to know where you are connected:
 
     SELECT host FROM sys.drillbits WHERE `current` = true;
     +------------+
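The `sys` tables can be probed further; a minimal sketch (assuming the `sys.version` and `sys.options` tables present in Drill releases of this era):

```sql
-- Show the Drill version of the node you are connected to:
SELECT * FROM sys.version;

-- Check the state of an option discussed elsewhere in these docs:
SELECT * FROM sys.options WHERE name = 'planner.enable_decimal_data_type';
```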

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 5d7fa86..7ffa85e 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -32,12 +32,12 @@ To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to
 
      ALTER SYSTEM SET `planner.enable_decimal_data_type` = true;
 
-     +------------+------------+
-     |     ok     |  summary   |
-     +------------+------------+
-     | true       | planner.enable_decimal_data_type updated. |
-     +------------+------------+
-     1 row selected (1.191 seconds)
+    +-------+--------------------------------------------+
+    |  ok   |                  summary                   |
+    +-------+--------------------------------------------+
+    | true  | planner.enable_decimal_data_type updated.  |
+    +-------+--------------------------------------------+
+    1 row selected (0.08 seconds)
 
 ## Casting and Converting Data Types
 
@@ -94,13 +94,13 @@ In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as pr
   Casts data from one data type to another.
 * [CONVERT_TO and CONVERT_FROM]({{ site.baseurl 
}}/docs/data-type-conversion#convert_to-and-convert_from)  
   Converts data, including binary data, from one data type to another.
-* [TO_CHAR]()  
+* [TO_CHAR]({{ site.baseurl }}/docs/data-type-conversion/#to_char)  
   Converts a TIMESTAMP, INTERVALDAY/INTERVALYEAR, INTEGER, DOUBLE, or DECIMAL 
to a string.
-* [TO_DATE]()  
+* [TO_DATE]({{ site.baseurl }}/docs/data-type-conversion/#to_date)  
   Converts a string to DATE.
-* [TO_NUMBER]()  
+* [TO_NUMBER]({{ site.baseurl }}/docs/data-type-conversion/#to_number)  
   Converts a string to a DECIMAL.
-* [TO_TIMESTAMP]()  
+* [TO_TIMESTAMP]({{ site.baseurl }}/docs/data-type-conversion/#to_timestamp)  
   Converts a string to TIMESTAMP.
 
 If the SELECT statement includes a WHERE clause that compares a column of an 
unknown data type, cast both the value of the column and the comparison value 
in the WHERE clause.
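A sketch of that double-cast pattern against a hypothetical CSV file (the path and column index are assumptions, not from the original docs):

```sql
-- Every CSV field arrives as VARCHAR, so cast both sides of the comparison.
SELECT columns[0] AS yr
FROM dfs.`/tmp/plays.csv`
WHERE CAST(columns[0] AS INT) = CAST('1599' AS INT);
```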

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/sql-reference/nested-data-functions/010-flatten.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/nested-data-functions/010-flatten.md b/_docs/sql-reference/nested-data-functions/010-flatten.md
index a128640..a0e2573 100644
--- a/_docs/sql-reference/nested-data-functions/010-flatten.md
+++ b/_docs/sql-reference/nested-data-functions/010-flatten.md
@@ -52,9 +52,9 @@ row contains an array of four categories:
     0: jdbc:drill:zk=local> select distinct name, hours, categories 
     from dfs.yelp.`yelp_academic_dataset_business.json` 
     where name ='zpizza';
-    +------------+------------+------------+
-    |    name    |   hours    | categories |
-    +------------+------------+------------+
+    +------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------+
+    |    name    |   hours                                                                                                                                                                                                                                                                                                           | categories                                    |
+    +------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------+
     | zpizza     | {"Tuesday":{"close":"22:00","open":"10:00"},"Friday":{"close":"23:00","open":"10:00"},"Monday":{"close":"22:00","open":"10:00"},"Wednesday":{"close":"22:00","open":"10:00"},"Thursday":{"close":"22:00","open":"10:00"},"Sunday":{"close":"22:00","open":"10:00"},"Saturday":{"close":"23:00","open":"10:00"}} | ["Gluten-Free","Pizza","Vegan","Restaurants"] |
 
 The FLATTEN function can operate on this single row and return multiple rows,
@@ -98,5 +98,5 @@ the categories array, then run a COUNT function on the flattened result:
     +---------------|------------+
 
 A common use case for FLATTEN is its use in conjunction with the
-[KVGEN]({{ site.baseurl }}/docs/flatten-function) function as shown in the 
section, ["JSON Data Model"]({{ site.baseurl }}/docs/json-data-model/).
+KVGEN function as shown in the section, ["JSON Data Model"]({{ site.baseurl 
}}/docs/json-data-model/).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/sql-reference/nested-data-functions/020-kvgen.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/nested-data-functions/020-kvgen.md b/_docs/sql-reference/nested-data-functions/020-kvgen.md
index 1e01b16..42511e8 100644
--- a/_docs/sql-reference/nested-data-functions/020-kvgen.md
+++ b/_docs/sql-reference/nested-data-functions/020-kvgen.md
@@ -73,7 +73,7 @@ a map with a wide set of columns into an array of key-value pairs.
 
 In turn, you can write analytic queries that return a subset of the generated
 keys or constrain the keys in some way. For example, you can use the
-[FLATTEN]({{ site.baseurl }}/docs/flatten-function) function to break the
+[FLATTEN]({{ site.baseurl }}/docs/flatten) function to break the
 array down into multiple distinct rows and further query those rows.
 
 For example, assume that a JSON file named `simplemaps.json` contains this 
data:  
@@ -92,8 +92,7 @@ KVGEN would operate on this data as follows:
        +------------+
        2 rows selected (0.201 seconds)
 
-Applying the [FLATTEN]({{ site.baseurl }}/docs/flatten-function) function to
-this data would return:
+Applying the FLATTEN function to this data would return:
 
     {"key": "a", "value": "valA"}
     {"key": "b", "value": "valB"}

http://git-wip-us.apache.org/repos/asf/drill/blob/5f6a51af/_docs/sql-reference/sql-functions/010-math-and-trig.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/010-math-and-trig.md b/_docs/sql-reference/sql-functions/010-math-and-trig.md
index e653b34..fc06932 100644
--- a/_docs/sql-reference/sql-functions/010-math-and-trig.md
+++ b/_docs/sql-reference/sql-functions/010-math-and-trig.md
@@ -12,7 +12,7 @@ Drill supports the math functions shown in the following table of math functions
 
 \* Not supported in this release.
 
-Exceptions are the LSHIFT and RSHIFT functions, which take all types except 
FLOAT and DOUBLE types. DEGREES, EXP, RADIANS, and the multiple LOG functions 
take the input types in this list plus the DECIMAL type:
+Exceptions are the LSHIFT and RSHIFT functions, which take all types except 
FLOAT and DOUBLE types. DEGREES, EXP, RADIANS, and the multiple LOG functions 
take the input types in this list plus the DECIMAL type. In this release, Drill 
disables the DECIMAL data type. To enable the DECIMAL type, set the 
`planner.enable_decimal_data_type` option to `true`.
 
 ## Table of Math Functions
 
@@ -184,12 +184,12 @@ Get the natural log of 7.5.
 
     SELECT LOG(7.5) FROM sys.version;
 
-    +------------+
-    |   EXPR$0   |
-    +------------+
-    | 2.0149030205422647 |
-    +------------+
-    1 row selected (0.063 seconds)
+    +---------------------+
+    |       EXPR$0        |
+    +---------------------+
+    | 2.0149030205422647  |
+    +---------------------+
+    1 row selected (0.139 seconds)
 
 ## Trig Functions
 
