DRILL-2681

Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/f1c6b8de
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/f1c6b8de
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/f1c6b8de

Branch: refs/heads/gh-pages
Commit: f1c6b8dea202d9fde12733a13f593aeda8713ff6
Parents: 7037326
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon Apr 6 15:20:55 2015 -0700
Committer: Bridget Bevens <bbev...@maprtech.com>
Committed: Mon Apr 6 15:39:54 2015 -0700

----------------------------------------------------------------------
 _docs/connect/009-mapr-db-plugin.md             |  42 +-
 _docs/develop/contribute/001-guidelines.md      |  21 +-
 _docs/develop/contribute/002-ideas.md           |  29 +-
 _docs/develop/develop-drill/001-compile.md      |   2 +-
 _docs/manage/conf/001-mem-alloc.md              | 414 ++++++++-
 _docs/query/003-query-hbase.md                  | 162 ++--
 _docs/sql-ref/001-data-types.md                 |   4 +-
 _docs/sql-ref/002-lexical-structure.md          |   3 +
 _docs/sql-ref/003-operators.md                  |  11 +-
 _docs/sql-ref/004-functions.md                  | 165 +---
 _docs/sql-ref/005-nest-functions.md             |  10 +-
 _docs/sql-ref/data-types/001-date.md            |   2 +-
 _docs/sql-ref/data-types/002-diff-data-types.md |   2 -
 _docs/sql-ref/functions/002-conversion.md       | 889 +++++++++++++++++++
 _docs/sql-ref/functions/002-data-type-fmt.md    | 651 --------------
 _docs/sql-ref/functions/003-date-time-fcns.md   |  53 +-
 _docs/sql-ref/functions/004-string.md           | 382 ++++++++
 _docs/sql-ref/functions/005-aggregate.md        |  33 +
 _docs/sql-ref/functions/006-nulls.md            |  56 ++
 19 files changed, 1977 insertions(+), 954 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/connect/009-mapr-db-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect/009-mapr-db-plugin.md b/_docs/connect/009-mapr-db-plugin.md
index a0582f3..62aa068 100644
--- a/_docs/connect/009-mapr-db-plugin.md
+++ b/_docs/connect/009-mapr-db-plugin.md
@@ -2,29 +2,33 @@
 title: "MapR-DB Format"
 parent: "Connect to a Data Source"
 ---
-Drill includes a `maprdb` format for reading MapR-DB data. The `dfs` storage plugin defines the format when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format plugin improves the
-estimated number of rows that Drill uses to plan a query. It also enables you
-to query tables like you would query files in a file system because MapR-DB
-and MapR-FS share the same namespace.
+Drill includes a `maprdb` format plugin for handling MapR-DB and HBase data. The Drill Sandbox also includes the following `maprdb` storage plugin on a MapR node:
 
-You can query tables stored across multiple directories. You do not need to
-create a table mapping to a directory before you query a table in the
-directory. You can select from any table in any directory the same way you
-would select from files in MapR-FS, using the same syntax.
+    {
+      "type": "hbase",
+      "config": {
+        "hbase.table.namespace.mappings": "*:/tables"
+      },
+      "size.calculator.enabled": false,
+      "enabled": true
+    }
 
-Instead of including the name of a file, you include the table name in the
-query.
+Using the Sandbox and this `maprdb` storage plugin, you can query HBase tables located in the `/tables` directory, as shown in the ["Query HBase"](/docs/querying-hbase) examples.
 
-**Example**
+The `dfs` storage plugin includes the maprdb format when you install Drill from the `mapr-drill` package on a MapR node. Click **Update** next to the `dfs` instance
+in the Web UI of the Drill Sandbox to view the configuration for the `dfs` instance:
+
+![drill query flow]({{ site.baseurl }}/docs/img/18.png)
 
-    SELECT * FROM mfs.`/users/max/mytable`;
 
-Drill stores the `maprdb` format plugin in the `dfs` storage plugin instance,
-which you can view in the Drill Web UI. You can access the Web UI at
-[http://localhost:8047/storage](http://localhost:8047/storage). Click **Update** next to the `dfs` instance
-in the Web UI to view the configuration for the `dfs` instance.
+The examples of the [CONVERT_TO/FROM functions](/docs/conversion#convert-to-and-convert-from) show how to adapt the `dfs` storage plugin to use the `maprdb` format plugin to query HBase tables on the Sandbox.
 
-The following image shows a portion of the configuration with the `maprdb`
-format plugin for the `dfs` instance:
+You modify the `dfs` storage plugin to create a table mapping to a directory in the MapR-FS file system. You then select the table by name.
+
+**Example**
+
+    SELECT * FROM myplugin.`mytable`;
+
+The `maprdb` format plugin improves the
+estimated number of rows that Drill uses to plan a query. Using the `dfs` storage plugin, you can query HBase and MapR-DB tables as you would query files in a file system. MapR-DB, MapR-FS, and Hadoop files share the same namespace.
 
-![drill query flow]({{ site.baseurl }}/docs/img/18.png)
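The `hbase.table.namespace.mappings` setting shown in the plugin configuration above is what lets a bare table name resolve to a path under `/tables`. As a rough illustration of that mapping rule only (the helper below is hypothetical, not Drill code):

```python
def resolve_table_path(table_name, mapping="*:/tables"):
    """Illustrates how a namespace mapping such as '*:/tables' turns a bare
    HBase table name into a MapR-FS path: the part before the colon is a
    table-name pattern, the part after it is the target directory."""
    pattern, directory = mapping.split(":", 1)
    if pattern == "*" or pattern == table_name:
        return "%s/%s" % (directory, table_name)
    # No mapping applies; the bare name is used as-is.
    return table_name
```

With the Sandbox mapping, `resolve_table_path("students")` yields `/tables/students`, matching the `/tables` location the examples query.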

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/develop/contribute/001-guidelines.md
----------------------------------------------------------------------
diff --git a/_docs/develop/contribute/001-guidelines.md b/_docs/develop/contribute/001-guidelines.md
index 7782f90..e88e97b 100644
--- a/_docs/develop/contribute/001-guidelines.md
+++ b/_docs/develop/contribute/001-guidelines.md
@@ -29,15 +29,14 @@ These guidelines include the following topics:
 
 First, you need the Drill source code.
 
-Get the source code on your local drive using [Git](git clone https://git-wip-us.apache.org/repos/asf/incubator-drill.git). Most development is done on
+Get the source code on your local drive using Git. Most development is done on
 "master":
 
     git clone https://git-wip-us.apache.org/repos/asf/drill.git
 
 ### Making Changes
 
-Before you start, send a message to the [Drill developer mailing list](http
-://mail-archives.apache.org/mod_mbox/incubator-drill-dev/), or file a bug
+Before you start, send a message to the [Drill developer mailing list](http://mail-archives.apache.org/mod_mbox/drill-dev/), or file a bug
 report in [JIRA](https://issues.apache.org/jira/browse/DRILL). Describe your
 proposed changes and check that they fit in with what others are doing and
 have planned for the project. Be patient, it may take folks a while to
@@ -51,7 +50,7 @@ Please take care about the following points
 
  * All public classes and methods should have informative [Javadoc comments](http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html).
     * Do not use @author tags.
-  * Code should be formatted according to [Sun's conventions](http://www.oracle.com/technetwork/java/codeconv-138413.html), with one exception:
+  * Code should be formatted according to [Sun's conventions](http://www.oracle.com/technetwork/java/codeconvtoc-136057.html), with one exception:
     * Indent two (2) spaces per level, not four (4).
     * Line length limit is 120 chars, instead of 80 chars.
   * Contributions should not introduce new Checkstyle violations.
@@ -68,8 +67,7 @@ following settings into your browser:
IntelliJ IDEA formatter: [settings jar](https://cwiki.apache.org/confluence/download/attachments/30757399/idea-settings.jar?version=1&modificationDate=1363022308000&api=v2)
 
-Eclipse: [formatter xml from HBase](https://issues.apache.org/jira/secure/atta
-chment/12474245/eclipse_formatter_apache.xml)
+Eclipse: [formatter xml](https://issues.apache.org/jira/secure/attachment/12474245/eclipse_formatter_apache.xml)
 
 #### Understanding Maven
 
@@ -154,16 +152,9 @@ or SQL Server). Then try to implement one.
 
 One example DrillFunc:
 
-[https://github.com/apache/incubator-
-drill/blob/103072a619741d5e228fdb181501ec2f82e111a3/sandbox/prototype/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/ComparisonFunction
-s.java](https://github.com/apache/incubator-
-drill/blob/103072a619741d5e228fdb181501ec2f82e111a3/sandbox/prototype/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/ComparisonFunction
-s.java)
+[ComparisonFunctions.java](https://github.com/apache/drill/blob/3f93454f014196a4da198ce012b605b70081fde0/exec/java-exec/src/main/codegen/templates/ComparisonFunctions.java)
 
-Also one can visit the JIRA issues and implement one of those too. A list of
-functions which need to be implemented can be found
-[here](https://docs.google.com/spreadsheet/ccc?key=0AgAGbQ6asvQ-
-dDRrUUxVSVlMVXRtanRoWk9JcHgteUE&usp=sharing#gid=0) (WIP).
+You can also visit the JIRA issues and implement one of those. 
 
 More contribution ideas are located on the [Contribution Ideas](/docs/apache-drill-contribution-ideas) page.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/develop/contribute/002-ideas.md
----------------------------------------------------------------------
diff --git a/_docs/develop/contribute/002-ideas.md b/_docs/develop/contribute/002-ideas.md
index 052d3e7..c3f5a87 100644
--- a/_docs/develop/contribute/002-ideas.md
+++ b/_docs/develop/contribute/002-ideas.md
@@ -24,8 +24,7 @@ This is a good place to begin if you are new to Drill. Feel free to pick
 issues from the Drill JIRA list. When you pick an issue, assign it to
 yourself, inform the team, and start fixing it.
 
-For any questions, seek help from the team by sending email to [drill-
-d...@incubator.apache.org](mailto:drill-...@incubator.apache.org).
+For any questions, seek help from the team through the [mailing list](http://drill.apache.org/community/#mailinglists).
 
 [https://issues.apache.org/jira/browse/DRILL/?selectedTab=com.atlassian.jira
 .jira-projects-plugin:summary-panel](https://issues.apache.org/jira/browse/DRILL/?selectedTab=com.atlassian.jira
@@ -40,13 +39,8 @@ put together a JIRA for one of the DrillFunc's we don't yet have but should
 own use case). Then try to implement one.
 
 One example DrillFunc:  
-[https://github.com/apache/incubator-
-drill/blob/103072a619741d5e228fdb181501ec2f82e111a3/sandbox/prototype/exec
-/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/ComparisonFunction
-s.java](https://github.com/apache/incubator-
-drill/blob/103072a619741d5e228fdb181501ec2f82e111a3/sandbox/prototype/exec
-/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/ComparisonFunction
-s.java)** **
+[ComparisonFunctions.java](https://github.com/apache/drill/blob/3f93454f014196a4da198ce012b605b70081fde0/exec/java-exec/src/main/codegen/templates/ComparisonFunctions.java)
+** **
 
 **Additional ideas on functions that can be added to SQL support**
 
@@ -68,11 +62,17 @@ implementing custom storage plugins. Example formats are.
   * XML
   * Thrift
 
+## Support for new data sources
+
+Writing a new file-based storage plugin, such as a JSON or text-based storage plugin, simply involves implementing a couple of interfaces. The JSON storage plugin is a good example. 
+
 You can refer to the github commits to the mongo db and hbase storage plugin for implementation details: 
 
* [mongodb_storage_plugin](https://github.com/apache/drill/commit/2ca9c907bff639e08a561eac32e0acab3a0b3304)
* [hbase_storage_plugin](https://github.com/apache/drill/commit/3651182141b963e24ee48db0530ec3d3b8b6841a)
 
+Focus on implementing/extending this list of classes and the corresponding implementations done by Mongo and Hbase. Ignore the mongo db plugin optimizer rules for pushing predicates into the scan.
+
 Initially, concentrate on basics:
 
 * AbstractGroupScan (MongoGroupScan, HbaseGroupScan)  
@@ -82,12 +82,6 @@ Initially, concentrate on basics:
 * AbstractStoragePlugin (MongoStoragePlugin, HbaseStoragePlugin)  
 * StoragePluginConfig (MongoStoragePluginConfig, HbaseStoragePluginConfig)
 
-Focus on implementing/extending this list of classes and the corresponding implementations done by Mongo and Hbase. Ignore the mongo db plugin optimizer rules for pushing predicates into the scan.
-
-Writing a new file-based storage plugin, such as a JSON or text-based storage plugin, simply involves implementing a couple of interfaces. The JSON storage plugin is a good example. 
-
-## Support for new data sources
-
 Implement custom storage plugins for the following non-Hadoop data sources:
 
   * NoSQL databases (such as Mongo, Cassandra, Couch etc)
@@ -99,10 +93,7 @@ Implement custom storage plugins for the following non-Hadoop data sources:
 
 ## New query language parsers
 
-Drill exposes strongly typed JSON APIs for logical and physical plans (plan
-syntax at [https://docs.google.com/a/maprtech.com/document/d/1QTL8warUYS2KjldQ
-rGUse7zp8eA72VKtLOHwfXy6c7I/edit#heading=h.n9gdb1ek71hf](https://docs.google.com/a/maprtech.com/document/d/1QTL8warUYS2KjldQ
-rGUse7zp8eA72VKtLOHwfXy6c7I/edit#heading=h.n9gdb1ek71hf) ). Drill provides a
+Drill exposes strongly typed JSON APIs for logical and physical plans. Drill provides a
 SQL language parser today, but any language parser that can generate
 logical/physical plans can use Drill's power on the backend as the distributed
 low latency query execution engine along with its support for self-describing

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/develop/develop-drill/001-compile.md
----------------------------------------------------------------------
diff --git a/_docs/develop/develop-drill/001-compile.md b/_docs/develop/develop-drill/001-compile.md
index dea42e9..c3053d6 100644
--- a/_docs/develop/develop-drill/001-compile.md
+++ b/_docs/develop/develop-drill/001-compile.md
@@ -15,7 +15,7 @@ Maven and JDK installed:
 
 ## 1\. Clone the Repository
 
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-drill.git
+    git clone https://git-wip-us.apache.org/repos/asf/drill.git
 
 ## 2\. Compile the Code
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/manage/conf/001-mem-alloc.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/001-mem-alloc.md b/_docs/manage/conf/001-mem-alloc.md
index 4caf563..5d99015 100644
--- a/_docs/manage/conf/001-mem-alloc.md
+++ b/_docs/manage/conf/001-mem-alloc.md
@@ -1,7 +1,419 @@
 ---
-title: "Memory Allocation"
+title: "Overview"
 parent: "Configuration Options"
 ---
+The sys.options table in Drill contains information about boot and system options described in the following tables. You configure some of the options to tune performance. You can configure the options using the ALTER SESSION or ALTER SYSTEM command.
+
+## Boot Options
+
+<table>
+  <tr>
+    <th>Name</th>
+    <th>Default</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>drill.exec.buffer.impl</td>
+    <td>"org.apache.drill.exec.work.batch.UnlimitedRawBatchBuffer"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.buffer.size</td>
+    <td>6</td>
+    <td>Available memory in terms of record batches to hold data downstream of an operation. Increase this value to increase query speed.</td>
+  </tr>
+  <tr>
+    <td>drill.exec.compile.debug</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.http.enabled</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.operator.packages</td>
+    <td>"org.apache.drill.exec.physical.config"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.sort.external.batch.size</td>
+    <td>4000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.sort.external.spill.directories</td>
+    <td>"/tmp/drill/spill"</td>
+    <td>Determines which directory to use for spooling</td>
+  </tr>
+  <tr>
+    <td>drill.exec.sort.external.spill.group.size</td>
+    <td>100</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.storage.file.text.batch.size</td>
+    <td>4000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.storage.packages</td>
+    <td>"org.apache.drill.exec.store" "org.apache.drill.exec.store.mock"</td>
+    <td>This file tells Drill to consider this module when class path scanning. The file can also include supplementary configuration information. The file is in HOCON format; see https://github.com/typesafehub/config/blob/master/HOCON.md for more information.</td>
+  </tr>
+  <tr>
+    <td>drill.exec.sys.store.provider.class</td>
+    <td>"org.apache.drill.exec.store.sys.zk.ZkPStoreProvider"</td>
+    <td>The Pstore (Persistent Configuration Storage) provider to use. The Pstore holds configuration and profile data.</td>
+  </tr>
+  <tr>
+    <td>drill.exec.zk.connect</td>
+    <td>"localhost:2181"</td>
+    <td>The ZooKeeper quorum that Drill uses to connect to data sources. Configure on each Drillbit node.</td>
+  </tr>
+  <tr>
+    <td>drill.exec.zk.refresh</td>
+    <td>500</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>file.separator</td>
+    <td>"/"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>java.specification.version</td>
+    <td>1.7</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>java.vm.name</td>
+    <td>"Java HotSpot(TM) 64-Bit Server VM"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>java.vm.specification.version</td>
+    <td>1.7</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>log.path</td>
+    <td>"/log/sqlline.log"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>sun.boot.library.path</td>
+    <td>/Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home/jre/lib</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>sun.java.command</td>
+    <td>"sqlline.SqlLine -d org.apache.drill.jdbc.Driver --maxWidth=10000 -u jdbc:drill:zk=local"</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>sun.os.patch.level</td>
+    <td>unknown</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>user</td>
+    <td>""</td>
+    <td></td>
+  </tr>
+</table>
+
+## System Options
+
+<table>
+  <tr>
+    <th>name</th>
+    <th>Default</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>drill.exec.functions.cast_empty_string_to_null</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.storage.file.partition.column.label</td>
+    <td>dir</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>drill.exec.testing.exception-injections</td>
+    <td></td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.errors.verbose</td>
+    <td>FALSE</td>
+    <td>Toggles verbose output of executable error messages</td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler</td>
+    <td>DEFAULT</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler_debug</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler_janino_maxsize</td>
+    <td>262144</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.max_hash_table_size</td>
+    <td>1073741824</td>
+    <td>Starting size for hash tables. Increase according to available memory to improve performance.</td>
+  </tr>
+  <tr>
+    <td>exec.min_hash_table_size</td>
+    <td>65536</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.queue.enable</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.queue.large</td>
+    <td>10</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.queue.small</td>
+    <td>100</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.queue.threshold</td>
+    <td>30000000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>exec.queue.timeout_millis</td>
+    <td>300000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>org.apache.drill.exec.compile.ClassTransformer.scalar_replacement</td>
+    <td>try</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.add_producer_consumer</td>
+    <td>FALSE</td>
+    <td>Increase prefetching of data from disk. Disable for in-memory reads.</td>
+  </tr>
+  <tr>
+    <td>planner.affinity_factor</td>
+    <td>1.2</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.broadcast_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.broadcast_threshold</td>
+    <td>10000000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.disable_exchanges</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_broadcast_join</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_demux_exchange</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_hash_single_key</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashagg</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashjoin</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashjoin_swap</td>
+    <td></td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_mergejoin</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_multiphase_agg</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_mux_exchange</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_streamagg</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.identifier_max_length</td>
+    <td>1024</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.join.hash_join_swap_margin_factor</td>
+    <td>10</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.join.row_count_estimate_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.average_field_width</td>
+    <td>8</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.enable_memory_estimation</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.hash_agg_table_factor</td>
+    <td>1.1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.hash_join_table_factor</td>
+    <td>1.1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.max_query_memory_per_node</td>
+    <td>2147483648</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.non_blocking_operators_memory</td>
+    <td>64</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.partitioner_sender_max_threads</td>
+    <td>8</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.partitioner_sender_set_threads</td>
+    <td>-1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.partitioner_sender_threads_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.producer_consumer_queue_size</td>
+    <td>10</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.slice_target</td>
+    <td>100000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.width.max_per_node</td>
+    <td>3</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.width.max_per_query</td>
+    <td>1000</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.format</td>
+    <td>parquet</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.json.all_text_mode</td>
+    <td>FALSE</td>
+    <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>
+  </tr>
+  <tr>
+    <td>store.mongo.all_text_mode</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.block-size</td>
+    <td>536870912</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.compression</td>
+    <td>snappy</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.enable_dictionary_encoding</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.use_new_reader</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.vector_fill_check_threshold</td>
+    <td>10</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>store.parquet.vector_fill_threshold</td>
+    <td>85</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>window.enable</td>
+    <td>FALSE</td>
+    <td></td>
+  </tr>
+</table>
+
 You can configure the amount of direct memory allocated to a Drillbit for
 query processing. The default limit is 8G, but Drill prefers 16G or more
 depending on the workload. The total amount of direct memory that a Drillbit
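The options in the tables above are changed with ALTER SESSION or ALTER SYSTEM, as the page states. A small sketch of composing such a statement for one of the listed options (the helper name is illustrative, not part of Drill; the back-quoted option name follows Drill's documented syntax):

```python
def alter_statement(option, value, scope="SYSTEM"):
    """Builds an ALTER SYSTEM / ALTER SESSION statement for a Drill option.
    Option names are back-quoted; string values are single-quoted;
    booleans render as lowercase true/false."""
    if isinstance(value, bool):
        rendered = "true" if value else "false"
    elif isinstance(value, str):
        rendered = "'%s'" % value
    else:
        rendered = str(value)
    return "ALTER %s SET `%s` = %s;" % (scope, option, rendered)
```

For example, `alter_statement("planner.width.max_per_node", 3)` produces `` ALTER SYSTEM SET `planner.width.max_per_node` = 3; ``.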

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/query/003-query-hbase.md
----------------------------------------------------------------------
diff --git a/_docs/query/003-query-hbase.md b/_docs/query/003-query-hbase.md
index e6e70fa..bdfbb8a 100644
--- a/_docs/query/003-query-hbase.md
+++ b/_docs/query/003-query-hbase.md
@@ -2,25 +2,28 @@
 title: "Querying HBase"
 parent: "Query Data"
 ---
-This is a simple exercise that provides steps for creating a “students” table
-and a “clicks” table in HBase that you can query with Drill.
+This exercise creates two tables in HBase, students and clicks, that you can query with Drill. You can use the Drill Sandbox to step through the exercise.
 
-To create the HBase tables and query them with Drill, complete the following
+## Create the HBase tables
+
+To create the HBase tables and start Drill, complete the following
 steps:
 
-  1. Issue the following command to start the HBase shell:
+1. Pipe the following commands to the HBase shell to create students and clicks tables in HBase:
   
-        hbase shell
-  2. Issue the following commands to create a ‘students’ table and a ‘clicks’ table with column families in HBase:
-    
-        echo "create 'students','account','address'" | hbase shell
-    
-        echo "create 'clicks','clickinfo','iteminfo'" | hbase shell
-  3. Issue the following command with the provided data to create a `testdata.txt` file:
+      echo "create 'students','account','address'" | hbase shell
+  
+      echo "create 'clicks','clickinfo','iteminfo'" | hbase shell
+
+   On the Drill Sandbox, HBase tables are located in:
+
+        /mapr/demo.mapr.com/tables
 
-        cat > testdata.txt
+2. Issue the following command to create a `testdata.txt` file:
 
-     **Sample Data**
+      cat > testdata.txt
+
+3. Copy and paste the following `put` commands on the line below the **cat** command. Press Return, and then CTRL+D to close the file.
 
         put 'students','student1','account:name','Alice'
         put 'students','student1','address:street','123 Ballmer Av'
@@ -84,68 +87,75 @@ steps:
         put 'clicks','click9','iteminfo:itemtype','image'
         put 'clicks','click9','iteminfo:quantity','10'
 
-  4. Issue the following command to verify that the data is in the `testdata.txt` file:  
+4. Issue the following command to put the data into HBase:  
+  
+        cat testdata.txt | hbase shell
+5. Start Drill. Type `sqlline` on the terminal command line if you are using the Drill Sandbox; otherwise, see [Starting/Stopping Drill]({{ site.baseurl }}/docs/starting-stopping-drill).
+6. Use the `maprdb` storage plugin, which includes the [MapR-DB format](/docs/mapr-db-format), if you are using the Drill Sandbox; otherwise, enable and use the hbase storage plugin on a system having HBase services. 
+
+         USE hbase; /* If you have installed HBase services. */ 
+
+   Or:
+
+         USE maprdb; /* If you are using the Drill Sandbox */
+
+The `maprdb` storage plugin provides access to the `/tables` directory. Use Drill to query the students and clicks tables on the Drill Sandbox.  
+
+## Query HBase Tables
+1. Issue the following query to see the data in the students table:  
+
+       SELECT * FROM students;
+   The query returns binary results:
+  
+        +------------+------------+------------+
+        |  row_key   |  account   |  address   |
+        +------------+------------+------------+
+        | [B@e6d9eb7 | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY=","zipcode":"MTIzNDU="} |
+        | [B@2823a2b4 | {"name":"Qm9i"} | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w","zipcode":"MTIzNDU="} |
+        | [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA==","zipcode":"MTIzNDU="} |
+        | [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ==","zipcode":"MTIzNDU="} |
+        +------------+------------+------------+
+        4 rows selected (1.335 seconds)
+   The Drill output reflects the actual data type of the HBase data, which is binary.
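The bracketed row keys above are Java byte-array identifiers, and the quoted column values are base64 renderings of UTF-8 bytes, which is why CONVERT_FROM with 'UTF8' recovers readable text. A quick Python check, using values copied from the binary output above:

```python
import base64

# Column values for student1 exactly as shown in the binary query output.
encoded = {
    "name": "QWxpY2U=",
    "street": "MTIzIEJhbGxtZXIgQXY=",
    "zipcode": "MTIzNDU=",
}

# Decoding the base64 bytes as UTF-8 yields the readable values that
# CONVERT_FROM(..., 'UTF8') returns for the same columns.
decoded = {key: base64.b64decode(val).decode("utf-8") for key, val in encoded.items()}
```

Here `decoded` comes out as `{'name': 'Alice', 'street': '123 Ballmer Av', 'zipcode': '12345'}`, matching the readable query results.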
+
+2. Issue the following query, which includes the CONVERT_FROM function, to convert the `students` table to readable data:
+
+         SELECT CONVERT_FROM(row_key, 'UTF8') AS studentid, 
+                CONVERT_FROM(students.account.name, 'UTF8') AS name, 
+                CONVERT_FROM(students.address.state, 'UTF8') AS state, 
+                CONVERT_FROM(students.address.street, 'UTF8') AS street, 
+                CONVERT_FROM(students.address.zipcode, 'UTF8') AS zipcode 
+         FROM students;
+
+    **Note:** Use dot notation to drill down to a column in an HBase table:
     
-         cat testdata.txt | hbase shell
-  5. Issue `exit` to leave the `hbase shell`.
-  6. Start Drill. Refer to [Starting/Stopping Drill](/docs/starting-stopping-drill) for instructions.
-  7. Use Drill to issue the following SQL queries on the “students” and “clicks” tables:  
+        tablename.columnfamilyname.columnname
+
+    The query returns readable data:
+
+        +------------+------------+------------+------------+------------+
+        | studentid  |    name    |   state    |   street   |  zipcode   |
+        +------------+------------+------------+------------+------------+
+        | student1   | Alice      | CA         | 123 Ballmer Av | 12345      |
+        | student2   | Bob        | CA         | 1 Infinite Loop | 12345      |
+        | student3   | Frank      | CA         | 435 Walker Ct | 12345      |
+        | student4   | Mary       | CA         | 56 Southern Pkwy | 12345      |
+        +------------+------------+------------+------------+------------+
+        4 rows selected (0.504 seconds)
+
+3. Query the clicks table to see which students visited google.com:
   
-     1. Issue the following query to see the data in the “students” table: 
-
-            SELECT * FROM hbase.`students`;
-        The query returns binary results:
-        
-            Query finished, fetching results ...
-            +----------+----------+----------+-----------+----------+----------+----------+-----------+
-            |id    | name        | state       | street      | zipcode |`
-            +----------+----------+----------+-----------+----------+-----------+----------+-----------
-            | [B@1ee37126 | [B@661985a1 | [B@15944165 | [B@385158f4 |[B@3e08d131 |
-            | [B@64a7180e | [B@161c72c2 | [B@25b229e5 | [B@53dc8cb8 |[B@1d11c878 |
-            | [B@349aaf0b | [B@175a1628 | [B@1b64a812 | [B@6d5643ca |[B@147db06f |
-            | [B@3a7cbada | [B@52cf5c35 | [B@2baec60c | [B@5f4c543b |[B@2ec515d6 |
-
-        Since Drill does not require metadata, you must use the SQL `CAST` function in
-some queries to get readable query results.
-
-     2. Issue the following query, that includes the `CAST` function, to see the data in the “`students`” table:
-
-            SELECT CAST(students.clickinfo.studentid as VarChar(20)),
-            CAST(students.account.name as VarChar(20)), CAST (students.address.state as
-            VarChar(20)), CAST (students.address.street as VarChar(20)), CAST
-            (students.address.zipcode as VarChar(20)), FROM hbase.students;
-
-        **Note:** Use the following format when you query a column in an HBase table:
-          
-             tablename.columnfamilyname.columnname
-            
-        For more information about column families, refer to [5.6. Column
-Family](http://hbase.apache.org/book/columnfamily.html).
-
-        The query returns the data:
-
-            Query finished, fetching results ...
-            +----------+-------+-------+------------------+---------+`
-            | studentid | name  | state | street           | zipcode |`
-            +----------+-------+-------+------------------+---------+`
-            | student1 | Alice | CA    | 123 Ballmer Av   | 12345   |`
-            | student2 | Bob   | CA    | 1 Infinite Loop  | 12345   |`
-            | student3 | Frank | CA    | 435 Walker Ct    | 12345   |`
-            | student4 | Mary  | CA    | 56 Southern Pkwy | 12345   |`
-            +----------+-------+-------+------------------+---------+`
-
-     3. Issue the following query on the “clicks” table to find out which students clicked on google.com:
-        
-              SELECT CAST(clicks.clickinfo.studentid as VarChar(200)), CAST(clicks.clickinfo.url as VarChar(200)) FROM hbase.`clicks` WHERE URL LIKE '%google%';  
-
-        The query returns the data:
-        
-            Query finished, fetching results ...`
-        
-            +---------+-----------+-------------------------------+-----------------------+----------+----------+
-            | clickid | studentid | time                          | url                   | itemtype | quantity |
-            +---------+-----------+-------------------------------+-----------------------+----------+----------+
-            | click1  | student1  | 2014-01-01 12:01:01.000100000 | http://www.google.com | image    | 1        |
-            | click3  | student2  | 2014-01-01 01:02:01.000100000 | http://www.google.com | text     | 2        |
-            | click6  | student3  | 2013-02-01 12:01:01.000100000 | http://www.google.com | image    | 1        |
-            +---------+-----------+-------------------------------+-----------------------+----------+----------+
\ No newline at end of file
+        SELECT CONVERT_FROM(row_key, 'UTF8') AS clickid, 
+               CONVERT_FROM(clicks.clickinfo.studentid, 'UTF8') AS studentid, 
+               CONVERT_FROM(clicks.clickinfo.`time`, 'UTF8') AS `time`,
+               CONVERT_FROM(clicks.clickinfo.url, 'UTF8') AS url 
+        FROM clicks WHERE clicks.clickinfo.url LIKE '%google%'; 
+
+        +------------+------------+------------+------------+
+        |  clickid   | studentid  |    time    |    url     |
+        +------------+------------+------------+------------+
+        | click1     | student1   | 2014-01-01 12:01:01.0001 | http://www.google.com |
+        | click3     | student2   | 2014-01-01 01:02:01.0001 | http://www.google.com |
+        | click6     | student3   | 2013-02-01 12:01:01.0001 | http://www.google.com |
+        +------------+------------+------------+------------+
+        3 rows selected (0.294 seconds)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/001-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/001-data-types.md b/_docs/sql-ref/001-data-types.md
index cf353c7..5607c85 100644
--- a/_docs/sql-ref/001-data-types.md
+++ b/_docs/sql-ref/001-data-types.md
@@ -119,9 +119,9 @@ The following table lists data types top to bottom, in 
descending order of prece
 
 In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as 
previously mentioned. To handle textual data, you can use the following 
functions to cast and convert compatible data types:
 
-* [CAST](/docs/data-type-fmt#cast)  
+* [CAST](/docs/conversion#cast)  
   Casts data from one data type to another.
-* [CONVERT_TO and 
CONVERT_FROM](/docs/data-type-fmt#convert-to-and-convert-from)  
+* [CONVERT_TO and CONVERT_FROM](/docs/conversion#convert-to-and-convert-from)  
   Converts data, including binary data, from one data type to another.
 * [TO_CHAR]()  
   Converts a TIMESTAMP, INTERVAL, INTEGER, DOUBLE, or DECIMAL to a string.

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/002-lexical-structure.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/002-lexical-structure.md 
b/_docs/sql-ref/002-lexical-structure.md
index 731b2b3..6046490 100644
--- a/_docs/sql-ref/002-lexical-structure.md
+++ b/_docs/sql-ref/002-lexical-structure.md
@@ -21,6 +21,9 @@ A SQL statement used in Drill can include one or more of the 
following parts:
 * Predicate, such as a > b in `SELECT * FROM myfile WHERE a > b`.
 * [Storage plugin and workspace 
reference](/docs/lexical-structure#storage-plugin-and-workspace-references)
 * Whitespace
+* Comment in the following format: 
+
+        /* This is a comment. */
 
 The upper/lowercase sensitivity of the parts differs.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/003-operators.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/003-operators.md b/_docs/sql-ref/003-operators.md
index 074375a..b9a95e7 100644
--- a/_docs/sql-ref/003-operators.md
+++ b/_docs/sql-ref/003-operators.md
@@ -62,9 +62,14 @@ You can use the following subquery operators in your Drill 
queries:
 
 See [SELECT Statements](/docs/select-statements).
 
-## String Operators
+## String Concatenation Operator
 
-You can use the following string operators in your Drill queries:
+You can use the following string operators in your Drill queries to 
concatenate strings:
 
   * string || string
-  * string || non-string or non-string || string
\ No newline at end of file
+  * string || non-string or non-string || string
+
+The concatenation operator is an alternative to the [concat function](/docs/string-manipulation#concat).
+
+The concat function treats NULL as an empty string. The concatenation operator (||) returns NULL if any input is NULL.
+
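The difference in NULL handling can be sketched as a small Python analogy (the function names here are illustrative helpers, not Drill APIs):

```python
def concat(*parts):
    # Like Drill's concat() function: a NULL (None) argument is
    # treated as an empty string.
    return "".join("" if p is None else str(p) for p in parts)

def concat_op(a, b):
    # Like the || operator: any NULL input makes the whole result NULL.
    if a is None or b is None:
        return None
    return str(a) + str(b)

print(concat("foo", None, "bar"))  # foobar
print(concat_op("foo", None))      # None
```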

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/004-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/004-functions.md b/_docs/sql-ref/004-functions.md
index e444268..570a4b9 100644
--- a/_docs/sql-ref/004-functions.md
+++ b/_docs/sql-ref/004-functions.md
@@ -4,163 +4,12 @@ parent: "SQL Reference"
 ---
 You can use the following types of functions in your Drill queries:
 
-  * Math Functions
-  * Trig Functions
-  * String Functions
-  * Date/Time Functions
-  * Data Type Formatting Functions
-  * Aggregate Functions
-  * Aggregate Statistics Functions
-  * Convert and Cast Functions
-  * Nested Data Functions
+  * [Math and Trig](/docs/math-and-trig/)
+  * [Casting and Converting Data Types](/docs/casting-converting-data-types/)
+  * [Date/Time and Arithmetic](/docs/date-time-functions-and-arithmetic/)
+  * [String Manipulation](/docs/string-manipulation)
+  * [Aggregate and Aggregate Statistics]()
+  * [Nested Data](/docs/nested-data-functions/)
+  * [Other functions]()
 
-## String Functions
 
-The following table provides the string functions that you can use in your
-Drill queries:
-
-Function| Return Type  
---------|---  
-char_length(string) or character_length(string)| int  
-concat(str "any" [, str "any" [, ...] ])| text
-convert_from(string text, src_encoding name)| text 
-convert_to(string text, dest_encoding name)| byte array
-initcap(string)| text
-left(str text, n int)| text
-length(string)| int
-length(string bytes, encoding name )| int
-lower(string)| text
-lpad(string text, length int [, fill text])| text
-ltrim(string text [, characters text])| text
-position(substring in string)| int
-regexp_replace(string text, pattern text, replacement text [, flags text])|text
-replace(string text, from text, to text)| text
-right(str text, n int)| text
-rpad(string text, length int [, fill text])| text
-rtrim(string text [, characters text])| text
-strpos(string, substring)| int
-substr(string, from [, count])| text
-substring(string [from int] [for int])| text
-trim([leading | trailing | both] [characters] from string)| text
-upper(string)| text
-  
-  
-## Date/Time Functions
-
-The following table provides the date/time functions that you can use in your
-Drill queries:
-
-**Function**| **Return Type**  
----|---  
-current_date| date  
-current_time| time with time zone  
-current_timestamp| timestamp with time zone  
-date_add(date,interval expr type)| date/datetime  
-date_part(text, timestamp)| double precision  
-date_part(text, interval)| double precision  
-date_sub(date,INTERVAL expr type)| date/datetime  
-extract(field from interval)| double precision  
-extract(field from timestamp)| double precision  
-localtime| time  
-localtimestamp| timestamp  
-now()| timestamp with time zone  
-timeofday()| text  
-  
-## Data Type Formatting Functions
-
-The following table provides the data type formatting functions that you can
-use in your Drill queries:
-
-**Function**| **Return Type**  
----|---  
-to_char(timestamp, text)| text  
-to_char(int, text)| text  
-to_char(double precision, text)| text  
-to_char(numeric, text)| text  
-to_date(text, text)| date  
-to_number(text, text)| numeric  
-to_timestamp(text, text)| timestamp with time zone  
-to_timestamp(double precision)| timestamp with time zone  
-  
-## Aggregate Functions
-
-The following table provides the aggregate functions that you can use in your
-Drill queries:
-
-**Function** | **Argument Type** | **Return Type**  
-  --------   |   -------------   |   -----------
-avg(expression)| smallint, int, bigint, real, double precision, numeric, or 
interval| numeric for any integer-type argument, double precision for a 
floating-point argument, otherwise the same as the argument data type
-count(*)| _-_| bigint
-count([DISTINCT] expression)| any| bigint
-max(expression)| any array, numeric, string, or date/time type| same as 
argument type
-min(expression)| any array, numeric, string, or date/time type| same as 
argument type
-sum(expression)| smallint, int, bigint, real, double precision, numeric, or 
interval| bigint for smallint or int arguments, numeric for bigint arguments, 
double precision for floating-point arguments, otherwise the same as the 
argument data type
-  
-  
-## Aggregate Statistics Functions
-
-The following table provides the aggregate statistics functions that you can 
use in your Drill queries:
-
-**Function**| **Argument Type**| **Return Type**
-  --------  |   -------------  |   -----------
-stddev(expression)| smallint, int, bigint, real, double precision, or numeric| 
double precision for floating-point arguments, otherwise numeric
-stddev_pop(expression)| smallint, int, bigint, real, double precision, or 
numeric| double precision for floating-point arguments, otherwise numeric
-stddev_samp(expression)| smallint, int, bigint, real, double precision, or 
numeric| double precision for floating-point arguments, otherwise numeric
-variance(expression)| smallint, int, bigint, real, double precision, or 
numeric| double precision for floating-point arguments, otherwise numeric
-var_pop(expression)| smallint, int, bigint, real, double precision, or 
numeric| double precision for floating-point arguments, otherwise numeric
-var_samp(expression)| smallint, int, bigint, real, double precision, or 
numeric| double precision for floating-point arguments, otherwise numeric
-  
-  
-## Convert and Cast Functions
-
-You can use the CONVERT_TO and CONVERT_FROM functions to encode and decode
-data when you query your data sources with Drill. For example, HBase stores
-data as encoded byte arrays (VARBINARY data). When you issue a query with the
-CONVERT_FROM function on HBase, Drill decodes the data and converts it to the
-specified data type. In instances where Drill sends data back to HBase during
-a query, you can use the CONVERT_TO function to change the data type to bytes.
-
-Do not use the CAST function for converting binary data types to other types. 
Although CAST works for converting VARBINARY to VARCHAR, CAST does not work in 
other cases. CONVERT functions not only work regardless of the types you are 
converting but are also more efficient to use than CAST when your data sources 
return binary data.
-
-The following table provides the data types that you use with the CONVERT_TO
-and CONVERT_FROM functions:
-
-**Type**| **Input Type**| **Output Type**  
----|---|---  
-BOOLEAN_BYTE| bytes(1)| boolean  
-TINYINT_BE| bytes(1)| tinyint  
-TINYINT| bytes(1)| tinyint  
-SMALLINT_BE| bytes(2)| smallint  
-SMALLINT| bytes(2)| smallint  
-INT_BE| bytes(4)| int  
-INT| bytes(4)| int  
-BIGINT_BE| bytes(8)| bigint  
-BIGINT| bytes(8)| bigint  
-FLOAT| bytes(4)| float (float4)  
-DOUBLE| bytes(8)| double (float8)  
-INT_HADOOPV| bytes(1-9)| int  
-BIGINT_HADOOPV| bytes(1-9)| bigint  
-DATE_EPOCH_BE| bytes(8)| date  
-DATE_EPOCH| bytes(8)| date  
-TIME_EPOCH_BE| bytes(8)| time  
-TIME_EPOCH| bytes(8)| time  
-UTF8| bytes| varchar  
-UTF16| bytes| var16char  
-UINT8| bytes(8)| uint8  
-  
-A common use case for CONVERT_FROM is when a data source embeds complex data
-inside a column. For example, you may have an HBase or MapR-DB table with
-embedded JSON data:
-
-    select CONVERT_FROM(col1, 'JSON') 
-    FROM hbase.table1
-    ...
-
-## Nested Data Functions
-
-This section contains descriptions of SQL functions that you can use to
-analyze nested data:
-
-  * [FLATTEN Function](/docs/flatten-function)
-  * [KVGEN Function](/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/005-nest-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/005-nest-functions.md 
b/_docs/sql-ref/005-nest-functions.md
index c6e7ff2..131b397 100644
--- a/_docs/sql-ref/005-nest-functions.md
+++ b/_docs/sql-ref/005-nest-functions.md
@@ -5,6 +5,10 @@ parent: "SQL Reference"
 This section contains descriptions of SQL functions that you can use to
 analyze nested data:
 
-  * [FLATTEN Function](/docs/flatten-function)
-  * [KVGEN Function](/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file
+  * [FLATTEN Function](/docs/flatten)
+  * [KVGEN Function](/docs/kvgen)
+  * [REPEATED_COUNT Function](/docs/repeated-count)
+  * [REPEATED_CONTAINS Function](/docs/repeated-contains)
+
+## Limitations
+Map, Array, or repeated scalar types should not be used in GROUP BY or ORDER 
BY clauses or in a comparison operator. Drill does not support comparisons 
between VARCHAR:REPEATED and VARCHAR:REPEATED.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/data-types/001-date.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/data-types/001-date.md 
b/_docs/sql-ref/data-types/001-date.md
index e08873b..ec97e3b 100644
--- a/_docs/sql-ref/data-types/001-date.md
+++ b/_docs/sql-ref/data-types/001-date.md
@@ -95,6 +95,6 @@ You can run the query described earlier to check the 
formatting of the fields. T
     +------------+
     1 row selected (0.076 seconds)
 
-For information about casting interval data, see the 
["CAST"](/docs/data-type-fmt#cast) function.
+For information about casting interval data, see the 
["CAST"](/docs/conversion#cast) function.
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/data-types/002-diff-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/data-types/002-diff-data-types.md 
b/_docs/sql-ref/data-types/002-diff-data-types.md
index 229fc85..539e9a9 100644
--- a/_docs/sql-ref/data-types/002-diff-data-types.md
+++ b/_docs/sql-ref/data-types/002-diff-data-types.md
@@ -2,8 +2,6 @@
 title: "Handling Different Data Types"
 parent: "Data Types"
 ---
-[Previous](/docs/supported-date-time-data-type-formats)<code>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</code>[Back
 to Table of 
Contents](/docs)<code>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</code>[Next](/docs/lexical-structure)
-
 ## Handling HBase Data
 To query HBase data in Drill, convert every column of an HBase table to/from 
byte arrays from/to an SQL data type using CONVERT_TO or CONVERT_FROM. For 
examples of how to use these functions, see "Convert and Cast Functions".
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f1c6b8de/_docs/sql-ref/functions/002-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/002-conversion.md 
b/_docs/sql-ref/functions/002-conversion.md
new file mode 100644
index 0000000..64fc25c
--- /dev/null
+++ b/_docs/sql-ref/functions/002-conversion.md
@@ -0,0 +1,889 @@
+---
+title: "Data Type Conversion"
+parent: "SQL Functions"
+---
+Drill supports the following functions for casting and converting data types:
+
+* [CAST](/docs/conversion#cast)
+* [CONVERT TO/FROM](/docs/conversion#convert-to-and-convert-from)
+* [Other data type conversion 
functions](/docs/conversion#other-data-type-conversion-functions)
+
+## CAST
+
+The CAST function converts an expression that evaluates to a single data value, such as a column value or a literal, from one data type to another.
+
+### Syntax
+
+    cast (<expression> AS <data type>)
+
+*expression*
+
+An entity that evaluates to one or more values, such as a column name or 
literal
+
+*data type*
+
+The target data type, such as INTEGER or DATE, to which to cast the expression
+
+### Usage Notes
+
+If the SELECT statement includes a WHERE clause that compares a column of an 
unknown data type, cast both the value of the column and the comparison value 
in the WHERE clause. For example:
+
+    SELECT c_row, CAST(c_int AS DECIMAL(28,8)) FROM mydata WHERE CAST(c_int AS 
DECIMAL(28,8)) > -3.0
+
+Do not use the CAST function for converting binary data types to other types. Although CAST works for converting VARBINARY to VARCHAR, it does not work for other binary conversions. Use CONVERT_TO and CONVERT_FROM for converting to or from binary data. 
+
+Refer to the following tables for information about the data types to use for 
casting:
+
+* [Supported Data Types for Casting](/docs/supported-data-types-for-casting)
+* [Explicit Type Casting Maps](/docs/explicit-type-casting-maps)
+
+
+### Examples
+
+The following examples show how to cast a string to a number, a number to a 
string, and one numerical type to another.
+
+#### Casting a character string to a number
+You cannot cast a character string that includes a decimal point to an INT or 
BIGINT. For example, if you have "1200.50" in a JSON file, attempting to select 
and cast the string to an INT fails. As a workaround, cast to a FLOAT or 
DECIMAL type, and then to an INT. 
+
+The following example shows how to cast a character to a DECIMAL having two 
decimal places.
+
+    SELECT CAST('1' as DECIMAL(28, 2)) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 1.00       |
+    +------------+
+
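The two-step workaround above (string to DECIMAL or FLOAT, then to INT) mirrors ordinary numeric parsing; a minimal Python sketch of the same idea (an illustration only, not Drill syntax):

```python
s = "1200.50"
# Parsing the string straight to an integer fails, just as casting
# '1200.50' directly to INT fails in Drill; go through a
# floating-point type first, then truncate to an integer.
n = int(float(s))
print(n)  # 1200
```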
+#### Casting a number to a character string
+The first example shows that Drill uses a default limit of 1 character if you omit the VARCHAR limit: the result is truncated to 1 character. The second example casts the same number to a VARCHAR having a limit of 3 characters: the result is the 3-character string 456. The third example shows that you can use CHAR as an alias for VARCHAR. You can also use CHARACTER or CHARACTER VARYING.
+
+    SELECT CAST(456 as VARCHAR) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 4          |
+    +------------+
+    1 row selected (0.063 seconds)
+
+    SELECT CAST(456 as VARCHAR(3)) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 456        |
+    +------------+
+    1 row selected (0.08 seconds)
+
+    SELECT CAST(456 as CHAR(3)) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 456        |
+    +------------+
+    1 row selected (0.093 seconds)
+
+#### Casting from one numerical type to another
+
+Cast an integer to a decimal.
+
+    SELECT CAST(-2147483648 AS DECIMAL(28,8)) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | -2.147483648E9 |
+    +------------+
+    1 row selected (0.08 seconds)
+
+### Casting Intervals
+
+To cast INTERVAL data, use the following syntax:
+
+    CAST (column_name AS INTERVAL)
+    CAST (column_name AS INTERVAL DAY)
+    CAST (column_name AS INTERVAL YEAR)
+
+For example, a JSON file contains the following objects:
+
+    { "INTERVALYEAR_col":"P1Y", "INTERVALDAY_col":"P1D", 
"INTERVAL_col":"P1Y1M1DT1H1M" }
+    { "INTERVALYEAR_col":"P2Y", "INTERVALDAY_col":"P2D", 
"INTERVAL_col":"P2Y2M2DT2H2M" }
+    { "INTERVALYEAR_col":"P3Y", "INTERVALDAY_col":"P3D", 
"INTERVAL_col":"P3Y3M3DT3H3M" }
+
+The following CTAS statement casts text from a JSON file to INTERVAL data 
types in a Parquet table:
+
+    CREATE TABLE dfs.tmp.parquet_intervals AS 
+    (SELECT cast (INTERVAL_col as interval),
+           cast( INTERVALYEAR_col as interval year) INTERVALYEAR_col, 
+           cast( INTERVALDAY_col as interval day) INTERVALDAY_col 
+    FROM `/user/root/intervals.json`);
+
+<!-- Text and include output -->
+
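The interval literals above, such as `P1Y1M1DT1H1M`, follow ISO-8601 duration syntax; a small Python sketch (the `parse_duration` helper is illustrative, not part of Drill) unpacks the fields:

```python
import re

# Matches ISO-8601 durations such as "P1Y1M1DT1H1M":
# date part (years, months, days), then optional time part after "T".
PATTERN = re.compile(
    r"P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?"
    r"(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?$"
)

def parse_duration(text):
    m = PATTERN.match(text)
    if not m:
        raise ValueError("not an ISO-8601 duration: " + text)
    years, months, days, hours, minutes, seconds = (
        int(g) if g else 0 for g in m.groups()
    )
    return {"years": years, "months": months, "days": days,
            "hours": hours, "minutes": minutes, "seconds": seconds}

print(parse_duration("P1Y1M1DT1H1M"))
```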
+## CONVERT_TO and CONVERT_FROM
+
+The CONVERT_TO and CONVERT_FROM functions encode and decode
+data to and from another data type.
+
+### Syntax  
+
+    CONVERT_TO(column, type)
+
+    CONVERT_FROM(column, type)
+
+*column* is the name of a column Drill reads.
+
+*type* is one of the data types listed in the CONVERT_TO/FROM Data Types table.
+
+
+The following table provides the data types that you use with the CONVERT_TO
+and CONVERT_FROM functions:
+
+### CONVERT_TO/FROM Data Types
+
+**Type**| **Input Type**| **Output Type**  
+---|---|---  
+BOOLEAN_BYTE| bytes(1)| boolean  
+TINYINT_BE| bytes(1)| tinyint  
+TINYINT| bytes(1)| tinyint  
+SMALLINT_BE| bytes(2)| smallint  
+SMALLINT| bytes(2)| smallint  
+INT_BE| bytes(4)| int  
+INT| bytes(4)| int  
+BIGINT_BE| bytes(8)| bigint  
+BIGINT| bytes(8)| bigint  
+FLOAT| bytes(4)| float (float4)  
+DOUBLE| bytes(8)| double (float8)  
+INT_HADOOPV| bytes(1-9)| int  
+BIGINT_HADOOPV| bytes(1-9)| bigint  
+DATE_EPOCH_BE| bytes(8)| date  
+DATE_EPOCH| bytes(8)| date  
+TIME_EPOCH_BE| bytes(8)| time  
+TIME_EPOCH| bytes(8)| time  
+UTF8| bytes| varchar  
+UTF16| bytes| var16char  
+UINT8| bytes(8)| uint8  
+  
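The encodings in the table correspond to ordinary byte layouts; a Python sketch using the standard `struct` module shows what the `_BE` (big-endian) variants and UTF8 imply (this illustrates the byte formats, not Drill's implementation):

```python
import struct

# INT_BE: 4-byte big-endian integer; INT: 4-byte little-endian integer.
big = struct.pack(">i", 1)
little = struct.pack("<i", 1)
print(big.hex(), little.hex())  # 00000001 01000000

# UTF8: plain UTF-8 encoded bytes <-> varchar.
encoded = "Alice".encode("utf-8")
print(encoded, encoded.decode("utf-8"))
```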
+### Usage Notes
+
+You can use the CONVERT_TO and CONVERT_FROM functions to encode and decode data that is binary or complex. For example, HBase stores data as encoded VARBINARY data. To read HBase data in Drill, convert every column of an HBase table *from* binary to an SQL data type while selecting the data. To write binary data to HBase or Parquet, convert SQL data *to* binary and store the data in an HBase or Parquet table while creating the table as a selection (CTAS).
+
+Do not use the CAST function for converting binary data types to other types. 
Although CAST works for converting VARBINARY to VARCHAR, CAST does not work in 
some other binary conversion cases. CONVERT functions work for binary 
conversions and are also more efficient to use than CAST.
+
+Use the CONVERT_TO function to change the data type to binary when sending data back to a binary data source, such as HBase, MapR-DB, or Parquet, from a Drill query. CONVERT_TO also converts an SQL data type to complex types, including HBase byte arrays, JSON and Parquet arrays, and maps. CONVERT_FROM converts complex types, including HBase byte arrays, JSON and Parquet arrays, and maps, to an SQL data type. 
+
+### Examples
+
+This example shows how to use the CONVERT_FROM function to convert complex HBase data to a readable type. It summarizes and continues the ["Query HBase"](/docs/query-hbase) example, which stores the following data in the students table on the Drill Sandbox:  
+
+    USE maprdb;
+
+    SELECT * FROM students;
+        
+    +------------+------------+------------+
+    |  row_key   |  account   |  address   |
+    +------------+------------+------------+
+    | [B@e6d9eb7 | {"name":"QWxpY2U="} | {"state":"Q0E=","street":"MTIzIEJhbGxtZXIgQXY=","zipcode":"MTIzNDU="} |
+    | [B@2823a2b4 | {"name":"Qm9i"} | {"state":"Q0E=","street":"MSBJbmZpbml0ZSBMb29w","zipcode":"MTIzNDU="} |
+    | [B@3b8eec02 | {"name":"RnJhbms="} | {"state":"Q0E=","street":"NDM1IFdhbGtlciBDdA==","zipcode":"MTIzNDU="} |
+    | [B@242895da | {"name":"TWFyeQ=="} | {"state":"Q0E=","street":"NTYgU291dGhlcm4gUGt3eQ==","zipcode":"MTIzNDU="} |
+    +------------+------------+------------+
+    4 rows selected (1.335 seconds)
+
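The nested values shown above are base64 renderings of UTF-8 bytes; decoding a few of them in a Python sketch shows the readable strings that CONVERT_FROM recovers:

```python
import base64

# Decode the base64 tokens displayed in the students table.
for token in ["QWxpY2U=", "Q0E=", "MTIzIEJhbGxtZXIgQXY=", "MTIzNDU="]:
    print(base64.b64decode(token).decode("utf-8"))
# Alice
# CA
# 123 Ballmer Av
# 12345
```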
+You use the CONVERT_FROM function to decode the binary data to render it 
readable:
+
+    SELECT CONVERT_FROM(row_key, 'UTF8') AS studentid, 
+           CONVERT_FROM(students.account.name, 'UTF8') AS name, 
+           CONVERT_FROM(students.address.state, 'UTF8') AS state, 
+           CONVERT_FROM(students.address.street, 'UTF8') AS street, 
+           CONVERT_FROM(students.address.zipcode, 'UTF8') AS zipcode FROM students;
+
+    +------------+------------+------------+------------+------------+
+    | studentid  |    name    |   state    |   street   |  zipcode   |
+    +------------+------------+------------+------------+------------+
+    | student1   | Alice      | CA         | 123 Ballmer Av | 12345      |
+    | student2   | Bob        | CA         | 1 Infinite Loop | 12345      |
+    | student3   | Frank      | CA         | 435 Walker Ct | 12345      |
+    | student4   | Mary       | CA         | 56 Southern Pkwy | 12345      |
+    +------------+------------+------------+------------+------------+
+    4 rows selected (0.504 seconds)
+
+#### Set up a storage plugin for working with HBase files
+
+This example assumes you are working in the Drill Sandbox. The `maprdb` 
storage plugin definition is limited, so you modify the `dfs` storage plugin 
slightly and use that plugin for this example.
+
+1. Copy/paste the `dfs` storage plugin definition to a newly created plugin called `myplugin`.
+
+2. Change the root location to "/mapr/demo.mapr.com/tables". This change allows you to query tables in the `tables` directory by workspace.table name. You can write a converted version of a table to the `tmp` directory because the writable property of that workspace is true.
+
+        {
+          "type": "file",
+          "enabled": true,
+          "connection": "maprfs:///",
+          "workspaces": {
+            "root": {
+              "location": "/mapr/demo.mapr.com/tables",
+              "writable": true,
+              "defaultInputFormat": null
+            },
+         
+            . . .
+
+            "tmp": {
+              "location": "/tmp",
+              "writable": true,
+              "defaultInputFormat": null
+            }
+
+            . . .
+         
+          "formats": {
+            . . .
+            "maprdb": {
+              "type": "maprdb"
+            }
+          }
+        }
+
+#### Convert the binary HBase students table to JSON data
+
+1. Start Drill on the Drill Sandbox and set the default storage format from 
Parquet to JSON.
+
+        ALTER SESSION SET `store.format`='json';
+
+2. Use CONVERT_FROM queries to convert the VARBINARY data in the HBase 
students table to JSON, and store the JSON data in a file. 
+
+        CREATE TABLE tmp.`to_json` AS SELECT 
+            CONVERT_FROM(row_key, 'UTF8') AS `studentid`, 
+            CONVERT_FROM(students.account.name, 'UTF8') AS name, 
+            CONVERT_FROM(students.address.state, 'UTF8') AS state, 
+            CONVERT_FROM(students.address.street, 'UTF8') AS street, 
+            CONVERT_FROM(students.address.zipcode, 'UTF8') AS zipcode 
+        FROM root.`students`;
+
+        +------------+---------------------------+
+        |  Fragment  | Number of records written |
+        +------------+---------------------------+
+        | 0_0        | 4                         |
+        +------------+---------------------------+
+        1 row selected (0.41 seconds)
+3. Navigate to the output. 
+
+        cd /mapr/demo.mapr.com/tmp/to_json
+        ls
+   Output is:
+
+        0_0_0.json
+
+4. Take a look at the output in `to_json`:
+
+        {
+          "studentid" : "student1",
+          "name" : "Alice",
+          "state" : "CA",
+          "street" : "123 Ballmer Av",
+          "zipcode" : "12345"
+        } {
+          "studentid" : "student2",
+          "name" : "Bob",
+          "state" : "CA",
+          "street" : "1 Infinite Loop",
+          "zipcode" : "12345"
+        } {
+          "studentid" : "student3",
+          "name" : "Frank",
+          "state" : "CA",
+          "street" : "435 Walker Ct",
+          "zipcode" : "12345"
+        } {
+          "studentid" : "student4",
+          "name" : "Mary",
+          "state" : "CA",
+          "street" : "56 Southern Pkwy",
+          "zipcode" : "12345"
+        }
+
+5. Set up Drill to store data in Parquet format.
+
+        ALTER SESSION SET `store.format`='parquet';
+        +------------+------------+
+        |     ok     |  summary   |
+        +------------+------------+
+        | true       | store.format updated. |
+        +------------+------------+
+        1 row selected (0.056 seconds)
+
+6. Use CONVERT_TO to convert the JSON data to binary format in a Parquet file.
+
+        CREATE TABLE tmp.`json2parquet` AS SELECT 
+            CONVERT_TO(studentid, 'UTF8') AS id, 
+            CONVERT_TO(name, 'UTF8') AS name, 
+            CONVERT_TO(state, 'UTF8') AS state, 
+            CONVERT_TO(street, 'UTF8') AS street, 
+            CONVERT_TO(zipcode, 'UTF8') AS zip 
+        FROM tmp.`to_json`;
+
+        +------------+---------------------------+
+        |  Fragment  | Number of records written |
+        +------------+---------------------------+
+        | 0_0        | 4                         |
+        +------------+---------------------------+
+        1 row selected (0.414 seconds)
+7. Take a look at the binary Parquet output:
+
+        SELECT * FROM tmp.`json2parquet`;
+        +------------+------------+------------+------------+------------+
+        |     id     |    name    |   state    |   street   |    zip     |
+        +------------+------------+------------+------------+------------+
+        | [B@224388b2 | [B@7fc36fb0 | [B@77d9cd57 | [B@7c384839 | [B@530dd5e5 |
+        | [B@3155d7fc | [B@7ad6fab1 | [B@37e4b978 | [B@94c91f3 | [B@201ed4a |
+        | [B@4fb2c078 | [B@607a2f28 | [B@75ae1c93 | [B@79d63340 | [B@5dbeed3d |
+        | [B@2fcfec74 | [B@7baccc31 | [B@d91e466 | [B@6529eb7f | [B@232412bc |
+        +------------+------------+------------+------------+------------+
+        4 rows selected (0.12 seconds)
+
+8. Use CONVERT_FROM to convert the Parquet data to a readable format:
+
+        SELECT CONVERT_FROM(id, 'UTF8') AS id, 
+               CONVERT_FROM(name, 'UTF8') AS name, 
+               CONVERT_FROM(state, 'UTF8') AS state, 
+               CONVERT_FROM(street, 'UTF8') AS address, 
+               CONVERT_FROM(zip, 'UTF8') AS zip 
+        FROM tmp.`json2parquet`;
+
+        +------------+------------+------------+------------+------------+
+        |     id     |    name    |   state    |  address   |    zip     |
+        +------------+------------+------------+------------+------------+
+        | student1   | Alice      | CA         | 123 Ballmer Av | 12345      |
+        | student2   | Bob        | CA         | 1 Infinite Loop | 12345      |
+        | student3   | Frank      | CA         | 435 Walker Ct | 12345      |
+        | student4   | Mary       | CA         | 56 Southern Pkwy | 12345      |
+        +------------+------------+------------+------------+------------+
+        4 rows selected (0.182 seconds)
+
+## Other Data Type Conversion Functions
+In addition to the CAST, CONVERT_TO, and CONVERT_FROM functions, Drill 
supports data type conversion functions to perform the following conversions:
+
+* A timestamp, integer, decimal, or double to a character string
+* A character string to a date
+* A character string to a number
+
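These are the familiar format-driven conversions; as a rough analogy (the format codes below are Python's, not Drill's TO_CHAR/TO_DATE patterns), the standard library performs the same three conversions:

```python
from datetime import datetime

# Timestamp -> character string (analogous to TO_CHAR).
ts = datetime(2015, 4, 2, 15, 1, 31)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 2015-04-02 15:01:31

# Character string -> date (analogous to TO_DATE).
d = datetime.strptime("2015-04-02", "%Y-%m-%d").date()
print(d)  # 2015-04-02

# Character string -> number (analogous to TO_NUMBER).
n = float("1200.50")
print(n)  # 1200.5
```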
+## Time Zone Limitation
+Currently Drill does not support conversion of a date, time, or timestamp from 
one time zone to another. The workaround is to configure Drill to use 
[UTC](http://www.timeanddate.com/time/aboututc.html)-based time, convert your 
data to UTC timestamps, and perform date/time operations in UTC.  
+
+1. Take a look at the Drill time zone configuration by running the TIMEOFDAY 
function. This function returns the local date and time with time zone 
information.
+
+        SELECT TIMEOFDAY() FROM sys.drillbits;
+
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2015-04-02 15:01:31.114 America/Los_Angeles |
+        +------------+
+        1 row selected (1.199 seconds)
+
+2. Configure the default time zone in `<drill installation 
directory>/conf/drill-env.sh` by adding `-Duser.timezone=UTC` to 
DRILL_JAVA_OPTS. For example:
+
+        export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea -Duser.timezone=UTC"
+
+3. Restart sqlline.
+
+4. Confirm that Drill is now set to UTC:
+
+        SELECT TIMEOFDAY() FROM sys.drillbits;
+
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2015-04-02 17:05:02.424 UTC |
+        +------------+
+        1 row selected (1.191 seconds)
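+
+The effect of the JVM time zone setting can be sketched outside Drill as well. The following standalone Java sketch (illustrative only; it uses `java.time`, not Drill internals) formats one absolute instant in two zones, which is why pinning the drillbit JVM to UTC makes date/time results reproducible:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class ZoneDemo {
    public static void main(String[] args) {
        // One absolute instant; epoch seconds carry no time zone of their own.
        Instant instant = Instant.ofEpochSecond(1428012091L); // 2015-04-02 22:01:31 UTC
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss zzz");

        // The wall-clock rendering depends entirely on the zone applied at format time.
        System.out.println(fmt.format(instant.atZone(ZoneId.of("UTC"))));
        System.out.println(fmt.format(instant.atZone(ZoneId.of("America/Los_Angeles"))));
    }
}
```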
+
+The following table lists data type formatting functions that you can
+use in your Drill queries as described in this section:
+
+**Function**| **Return Type**  
+---|---  
+TO_CHAR(timestamp, format)| text  
+TO_CHAR(int, format)| text  
+TO_CHAR(double precision, format)| text  
+TO_CHAR(numeric, format)| text  
+TO_DATE(text, format)| date  
+TO_NUMBER(text, format)| numeric  
+TO_TIMESTAMP(text, format)| timestamp
+TO_TIMESTAMP(double precision)| timestamp
+
+<!-- A character string to a timestamp with time zone
+
+A decimal type to a timestamp with time zone -->
+
+### Format Specifiers for Numerical Conversions
+Use the following format specifiers for numerical conversions:
+<table >
+     <tr >
+          <th align=left>Symbol
+          <th align=left>Location
+          <th align=left>Meaning
+     <tr valign=top>
+          <td><code>0</code>
+          <td>Number
+          <td>Digit
+     <tr >
+          <td><code>#</code>
+          <td>Number
+          <td>Digit, zero shows as absent
+     <tr valign=top>
+          <td><code>.</code>
+          <td>Number
+          <td>Decimal separator or monetary decimal separator
+     <tr >
+          <td><code>-</code>
+          <td>Number
+          <td>Minus sign
+     <tr valign=top>
+          <td><code>,</code>
+          <td>Number
+          <td>Grouping separator
+     <tr >
+          <td><code>E</code>
+          <td>Number
+          <td>Separates mantissa and exponent in scientific notation.
+              <em>Need not be quoted in prefix or suffix.</em>
+     <tr valign=top>
+          <td><code>;</code>
+          <td>Subpattern boundary
+          <td>Separates positive and negative subpatterns
+     <tr >
+          <td><code>%</code>
+          <td>Prefix or suffix
+          <td>Multiply by 100 and show as percentage
+     <tr valign=top>
+          <td><code>&#92;u2030</code>
+          <td>Prefix or suffix
+          <td>Multiply by 1000 and show as per mille value
+     <tr >
+          <td><code>&#164;</code> (<code>&#92;u00A4</code>)
+          <td>Prefix or suffix
+          <td>Currency sign, replaced by currency symbol.  If
+              doubled, replaced by international currency symbol.
+              If present in a pattern, the monetary decimal separator
+              is used instead of the decimal separator.
+     <tr valign=top>
+          <td><code>'</code>
+          <td>Prefix or suffix
+          <td>Used to quote special characters in a prefix or suffix,
+              for example, <code>"'#'#"</code> formats 123 to
+              <code>"#123"</code>.  To create a single quote
+              itself, use two in a row: <code>"# o''clock"</code>.
+ </table>
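+
+Because these symbols follow the Java DecimalFormat pattern language, you can check a pattern's behavior directly with a short standalone Java sketch (illustrative values, not Drill code):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class NumberPatterns {
    public static void main(String[] args) {
        // Pin the symbols to a US locale so '.' and ',' mean what the patterns assume.
        DecimalFormatSymbols us = DecimalFormatSymbols.getInstance(Locale.US);

        // '#' shows digits only where present; ',' inserts the grouping separator.
        System.out.println(new DecimalFormat("#,###.###", us).format(1234567.891)); // 1,234,567.891
        // '0' pads to a fixed number of digits.
        System.out.println(new DecimalFormat("000.00", us).format(7.5));            // 007.50
        // '%' multiplies by 100 and appends a percent sign.
        System.out.println(new DecimalFormat("#.#%", us).format(0.256));            // 25.6%
        // 'E' separates mantissa and exponent in scientific notation.
        System.out.println(new DecimalFormat("0.##E0", us).format(12345));          // 1.23E4
    }
}
```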
+
+### Format Specifiers for Date/Time Conversions
+
+Use the following format specifiers for date/time conversions:
+
+<table>
+  <tr>
+    <th>Symbol</th>
+    <th>Meaning</th>
+    <th>Presentation</th>
+    <th>Examples</th>
+  </tr>
+  <tr>
+    <td>G</td>
+    <td>era</td>
+    <td>text</td>
+    <td>AD</td>
+  </tr>
+  <tr>
+    <td>C</td>
+    <td>century of era (&gt;=0)</td>
+    <td>number</td>
+    <td>20</td>
+  </tr>
+  <tr>
+    <td>Y</td>
+    <td>year of era (&gt;=0)</td>
+    <td>year</td>
+    <td>1996</td>
+  </tr>
+  <tr>
+    <td>x</td>
+    <td>weekyear</td>
+    <td>year</td>
+    <td>1996</td>
+  </tr>
+  <tr>
+    <td>w</td>
+    <td>week of weekyear</td>
+    <td>number</td>
+    <td>27</td>
+  </tr>
+  <tr>
+    <td>e</td>
+    <td>day of week</td>
+    <td>number</td>
+    <td>2</td>
+  </tr>
+  <tr>
+    <td>E</td>
+    <td>day of week</td>
+    <td>text</td>
+    <td>Tuesday; Tue</td>
+  </tr>
+  <tr>
+    <td>y</td>
+    <td>year</td>
+    <td>year</td>
+    <td>1996</td>
+  </tr>
+  <tr>
+    <td>D</td>
+    <td>day of year</td>
+    <td>number</td>
+    <td>189</td>
+  </tr>
+  <tr>
+    <td>M</td>
+    <td>month of year</td>
+    <td>month</td>
+    <td>July; Jul; 07</td>
+  </tr>
+  <tr>
+    <td>d</td>
+    <td>day of month</td>
+    <td>number</td>
+    <td>10</td>
+  </tr>
+  <tr>
+    <td>a</td>
+    <td>halfday of day</td>
+    <td>text</td>
+    <td>PM</td>
+  </tr>
+  <tr>
+    <td>K</td>
+    <td>hour of halfday (0~11)</td>
+    <td>number</td>
+    <td>0</td>
+  </tr>
+  <tr>
+    <td>h</td>
+    <td>clockhour of halfday (1~12)</td>
+    <td>number</td>
+    <td>12</td>
+  </tr>
+  <tr>
+    <td>H</td>
+    <td>hour of day (0~23)</td>
+    <td>number</td>
+    <td>0</td>
+  </tr>
+  <tr>
+    <td>k</td>
+    <td>clockhour of day (1~24)</td>
+    <td>number</td>
+    <td>24</td>
+  </tr>
+  <tr>
+    <td>m</td>
+    <td>minute of hour</td>
+    <td>number</td>
+    <td>30</td>
+  </tr>
+  <tr>
+    <td>s</td>
+    <td>second of minute</td>
+    <td>number</td>
+    <td>55</td>
+  </tr>
+  <tr>
+    <td>S</td>
+    <td>fraction of second</td>
+    <td>number</td>
+    <td>978</td>
+  </tr>
+  <tr>
+    <td>z</td>
+    <td>time zone</td>
+    <td>text</td>
+    <td>Pacific Standard Time; PST</td>
+  </tr>
+  <tr>
+    <td>Z</td>
+    <td>time zone offset/id</td>
+    <td>zone</td>
+    <td>-0800; -08:00; America/Los_Angeles</td>
+  </tr>
+  <tr>
+    <td>'</td>
+    <td>single quotation mark, escape for text delimiter</td>
+    <td>literal</td>
+    <td></td>
+  </tr>
+</table>
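+
+Most of these Joda-Time symbols are shared by `java.time.format.DateTimeFormatter` (a few, such as `Y` and `x`, differ between the two libraries), so a standalone Java sketch can preview a pattern before you use it in a query. This sketch is illustrative only:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DatePatterns {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2015, 2, 23, 12, 30, 55);

        // yyyy = year, MMM = abbreviated month name, dd = day of month.
        System.out.println(DateTimeFormatter.ofPattern("yyyy-MMM-dd", Locale.ENGLISH).format(ts)); // 2015-Feb-23
        // HH = hour of day (0-23), mm = minute of hour, ss = second of minute.
        System.out.println(DateTimeFormatter.ofPattern("HH:mm:ss").format(ts)); // 12:30:55
    }
}
```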
+
+For more information about specifying a format, refer to one of the following 
format specifier documents:
+
+* [Java DecimalFormat 
class](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) 
format specifiers 
+* [Java DateTimeFormat 
class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html)
 format specifiers
+
+## TO_CHAR
+
+TO_CHAR converts a date, time, timestamp, or numerical expression to a 
character string.
+
+### Syntax
+
+    TO_CHAR (expression, 'format');
+
+*expression* is a float, integer, decimal, date, time, or timestamp 
expression. 
+
+*'format'* is a format specifier enclosed in single quotation marks that sets 
a pattern for the output formatting. 
+
+### Usage Notes
+Use [Java DecimalFormat class](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) format specifiers for numerical expressions and [Java DateTimeFormat class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) format specifiers for date, time, and timestamp expressions.
+
+### Examples
+
+Convert a FLOAT to a character string.
+
+    SELECT TO_CHAR(125.789383, '#,###.###') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 125.789    |
+    +------------+
+
+Convert an integer to a character string.
+
+    SELECT TO_CHAR(125, '#,###.###') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 125        |
+    +------------+
+    1 row selected (0.083 seconds)
+
+Convert a date to a character string.
+
+    SELECT TO_CHAR((CAST('2008-2-23' AS DATE)), 'yyyy-MMM-dd') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2008-Feb-23 |
+    +------------+
+
+Convert a time to a string.
+
+    SELECT TO_CHAR(CAST('12:20:30' AS TIME), 'HH mm ss') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 12 20 30   |
+    +------------+
+    1 row selected (0.07 seconds)
+
+
+Convert a timestamp to a string.
+
+    SELECT TO_CHAR(CAST('2015-2-23 12:00:00' AS TIMESTAMP), 'yyyy MMM dd HH:mm:ss') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015 Feb 23 12:00:00 |
+    +------------+
+    1 row selected (0.075 seconds)
+
+## TO_DATE
+TO_DATE converts a character string or a UNIX epoch timestamp in milliseconds to a date.
+
+### Syntax
+
+    TO_DATE (expression [, 'format']);
+
+*expression* is a character string enclosed in single quotation marks or a 
Unix epoch timestamp in milliseconds, not enclosed in single quotation marks. 
+
+*'format'* is a format specifier enclosed in single quotation marks that sets 
a pattern for the output formatting. Use this option only when the expression 
is a character string, not a UNIX epoch timestamp. 
+
+### Usage 
+Specify a format using patterns defined in [Java DateTimeFormat class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html). The TO_DATE function takes a UNIX epoch timestamp in milliseconds; the related TO_TIMESTAMP function takes a UNIX epoch timestamp in seconds.
+
+To compare dates in the WHERE clause, use TO_DATE on the value in the date 
column and in the comparison value. For example:
+
+    SELECT <fields> FROM <plugin> WHERE TO_DATE(<column>, <format>) <
+    TO_DATE(<value>, <format>);
+
+### Examples
+The first example converts a character string to a date. The second example 
extracts the year to verify that Drill recognizes the date as a date type. 
+
+    SELECT TO_DATE('2015-FEB-23', 'yyyy-MMM-dd') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-02-23 |
+    +------------+
+    1 row selected (0.077 seconds)
+
+    SELECT EXTRACT(year FROM mydate) AS myyear FROM (SELECT TO_DATE('2015-FEB-23', 'yyyy-MMM-dd') AS mydate FROM sys.drillbits);
+
+    +------------+
+    |   myyear   |
+    +------------+
+    | 2015       |
+    +------------+
+    1 row selected (0.128 seconds)
+
+The following example converts a UNIX epoch timestamp to a date.
+
+    SELECT TO_DATE(1427849046000) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-04-01 |
+    +------------+
+    1 row selected (0.082 seconds)
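+
+The millisecond-epoch behavior corresponds to the following standalone Java sketch (illustrative only, assuming a UTC zone):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class EpochToDate {
    public static void main(String[] args) {
        // A bare numeric argument to TO_DATE is milliseconds since the UNIX epoch.
        long epochMillis = 1427849046000L;
        LocalDate date = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC).toLocalDate();
        System.out.println(date); // 2015-04-01
    }
}
```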
+
+## TO_NUMBER
+
+TO_NUMBER converts a character string to a formatted number using a format 
specification.
+
+### Syntax
+
+    TO_NUMBER ('string', 'format');
+
+*'string'* is a character string enclosed in single quotation marks. 
+
+*'format'* is one or more [Java DecimalFormat 
class](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) 
specifiers enclosed in single quotation marks that set a pattern for the output 
formatting.
+
+
+### Usage Notes
+The data type of the output of TO_NUMBER is a numeric. You can use the 
following [Java DecimalFormat 
class](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) 
specifiers to set the output formatting. 
+
+* #  
+  Digit place holder. 
+
+* 0  
+  Digit place holder. If a value has a digit in the position where the zero 
'0' appears in the format string, that digit appears in the output; otherwise, 
a '0' appears in that position in the output.
+
+* .  
+  Decimal point. Make the first '.' character in the format string the 
location of the decimal separator in the value; ignore any additional '.' 
characters.
+
+* ,  
+  Comma grouping separator. 
+
+* E  
+  Exponent. Separates mantissa and exponent in scientific notation. 
+
+### Examples
+
+    SELECT TO_NUMBER('987,966', '######') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 987.0      |
+    +------------+
+
+    SELECT TO_NUMBER('987.966', '###.###') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 987.966    |
+    +------------+
+    1 row selected (0.063 seconds)
+
+    SELECT TO_NUMBER('12345', '##0.##E0') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 12345.0    |
+    +------------+
+    1 row selected (0.069 seconds)
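+
+The 987.0 result for the first query above is consistent with Java DecimalFormat parsing, which stops at the first character the pattern cannot consume. A standalone Java sketch (illustrative only):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.ParseException;
import java.util.Locale;

public class ParseNumbers {
    public static void main(String[] args) throws ParseException {
        // Pin the symbols to a US locale so ',' is the grouping separator.
        DecimalFormatSymbols us = DecimalFormatSymbols.getInstance(Locale.US);

        // With no grouping separator in the pattern, parsing stops at the comma.
        System.out.println(new DecimalFormat("######", us).parse("987,966")); // 987
        // With grouping enabled, the whole string is consumed.
        System.out.println(new DecimalFormat("#,###", us).parse("987,966")); // 987966
    }
}
```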
+
+## TO_TIME
+TO_TIME converts a character string or milliseconds of the day to a time.
+
+### Syntax
+
+    TO_TIME (expression [, 'format']);
+
+*expression* is a character string enclosed in single quotation marks or 
milliseconds, not enclosed in single quotation marks. 
+
+*'format'* is a format specifier enclosed in single quotation marks that sets 
a pattern for the output formatting. Use this option only when the expression 
is a character string, not milliseconds. 
+
+### Usage 
+Specify a format using patterns defined in [Java DateTimeFormat 
class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html).
+
+### Examples
+
+    SELECT TO_TIME('12:20:30', 'HH:mm:ss') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 12:20:30   |
+    +------------+
+    1 row selected (0.067 seconds)
+
+Convert 82855000 milliseconds (23 hours 55 seconds) to a time.
+
+    SELECT to_time(82855000) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 23:00:55   |
+    +------------+
+    1 row selected (0.086 seconds)
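+
+The millisecond input can be sanity-checked with a standalone Java sketch (illustrative only): 82855000 milliseconds is 82855 seconds past midnight, or 23:00:55.

```java
import java.time.LocalTime;

public class MillisToTime {
    public static void main(String[] args) {
        // A bare numeric argument to TO_TIME is milliseconds past midnight.
        long millis = 82855000L;
        LocalTime time = LocalTime.ofSecondOfDay(millis / 1000);
        System.out.println(time); // 23:00:55
    }
}
```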
+
+## TO_TIMESTAMP
+
+TO_TIMESTAMP converts a character string or a UNIX epoch timestamp in seconds to a timestamp.
+
+### Syntax
+
+    TO_TIMESTAMP (expression [, 'format']);
+
+*expression* is a character string enclosed in single quotation marks or a 
UNIX epoch timestamp, not enclosed in single quotation marks. 
+
+*'format'* is a format specifier enclosed in single quotation marks that sets 
a pattern for the output formatting. Use this option only when the expression 
is a character string, not a UNIX epoch timestamp. 
+
+### Usage 
+Specify a format using patterns defined in [Java DateTimeFormat class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html). The TO_TIMESTAMP function takes a UNIX epoch timestamp in seconds; the TO_DATE function takes a UNIX epoch timestamp in milliseconds.
+
+### Examples
+
+Convert a character string to a timestamp. 
+
+    SELECT TO_TIMESTAMP('2008-2-23 12:00:00', 'yyyy-MM-dd HH:mm:ss') FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2008-02-23 12:00:00.0 |
+    +------------+
+
+Convert Unix Epoch time to a timestamp.
+
+    SELECT TO_TIMESTAMP(1427936330) FROM sys.drillbits;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-04-01 17:58:50.0 |
+    +------------+
+    1 row selected (0.094 seconds)
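+
+Note that the rendered timestamp depends on the session time zone; the 17:58:50 result above appears to reflect an America/Los_Angeles zone rather than UTC. A standalone Java sketch (illustrative only) of the same epoch value:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class EpochToTimestamp {
    public static void main(String[] args) {
        // A bare numeric argument to TO_TIMESTAMP is seconds since the UNIX epoch.
        Instant instant = Instant.ofEpochSecond(1427936330L);
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

        System.out.println(fmt.format(instant.atZone(ZoneId.of("UTC"))));                 // 2015-04-02 00:58:50
        System.out.println(fmt.format(instant.atZone(ZoneId.of("America/Los_Angeles")))); // 2015-04-01 17:58:50
    }
}
```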
+
+Convert a UTC date string to a timestamp, and use TO_CHAR to extract its time zone code.
+
+    SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z') AS Original, 
+           TO_CHAR(TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z'), 'z') AS New_TZ 
+    FROM sys.drillbits;
+
+    +------------+------------+
+    |  Original  |   New_TZ   |
+    +------------+------------+
+    | 2015-03-30 20:49:00.0 | UTC        |
+    +------------+------------+
+    1 row selected (0.129 seconds)
+
+
+<!-- DRILL-448 Support timestamp with time zone -->
+
+
+<!-- Apache Drill    
+Apache DrillDRILL-1141
+ISNUMERIC should be implemented as a SQL function
+SELECT count(columns[0]) as number FROM dfs.`bla` WHERE ISNUMERIC(columns[0])=1
+ -->
\ No newline at end of file
