Author: bridgetb
Date: Tue Mar 31 01:20:42 2015
New Revision: 1670235

URL: http://svn.apache.org/r1670235
Log:
DRILL-2586: document data type formatting functions
Added:
    drill/site/trunk/content/drill/0003-DRILL-2505.patch
    drill/site/trunk/content/drill/docs/casting-converting-data-types/
    drill/site/trunk/content/drill/docs/casting-converting-data-types/index.html
    drill/site/trunk/content/drill/docs/developer-information/
    drill/site/trunk/content/drill/docs/developer-information/index.html
    drill/site/trunk/content/drill/docs/flatten/
    drill/site/trunk/content/drill/docs/flatten/index.html
    drill/site/trunk/content/drill/docs/img/image_1.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_10.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_11.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_12.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_13.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_14.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_15.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_16.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_17.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_2.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_3.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_4.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_5.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_6.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_7.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_8.png   (with props)
    drill/site/trunk/content/drill/docs/img/image_9.png   (with props)
    drill/site/trunk/content/drill/docs/kvgen/
    drill/site/trunk/content/drill/docs/kvgen/index.html
    drill/site/trunk/content/drill/docs/repeated-contains/
    drill/site/trunk/content/drill/docs/repeated-contains/index.html
    drill/site/trunk/content/drill/docs/repeated-count/
    drill/site/trunk/content/drill/docs/repeated-count/index.html
    drill/site/trunk/content/drill/docs/using-microstrategy-analytics-with-apache-drill/
    drill/site/trunk/content/drill/docs/using-microstrategy-analytics-with-apache-drill/index.html
Modified:
    drill/site/trunk/content/drill/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
    drill/site/trunk/content/drill/docs/advanced-properties/index.html
    drill/site/trunk/content/drill/docs/analyzing-yelp-json-data-with-apache-drill/index.html
    drill/site/trunk/content/drill/docs/apache-drill-in-10-minutes/index.html
    drill/site/trunk/content/drill/docs/apache-drill-tutorial/index.html
    drill/site/trunk/content/drill/docs/architectural-highlights/index.html
    drill/site/trunk/content/drill/docs/architectural-overview/index.html
    drill/site/trunk/content/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x/index.html
    drill/site/trunk/content/drill/docs/connect-to-a-data-source/index.html
    drill/site/trunk/content/drill/docs/core-modules-within-a-drillbit/index.html
    drill/site/trunk/content/drill/docs/data-types/index.html
    drill/site/trunk/content/drill/docs/date-time-and-timestamp/index.html
    drill/site/trunk/content/drill/docs/deploying-apache-drill-in-a-clustered-environment/index.html
    drill/site/trunk/content/drill/docs/drill-default-input-format/index.html
    drill/site/trunk/content/drill/docs/driver-configuration-options/index.html
    drill/site/trunk/content/drill/docs/file-system-storage-plugin/index.html
    drill/site/trunk/content/drill/docs/flexibility/index.html
    drill/site/trunk/content/drill/docs/getting-started-tutorial/index.html
    drill/site/trunk/content/drill/docs/getting-to-know-the-drill-sandbox/index.html
    drill/site/trunk/content/drill/docs/hbase-storage-plugin/index.html
    drill/site/trunk/content/drill/docs/hive-storage-plugin/index.html
    drill/site/trunk/content/drill/docs/hive-to-drill-data-type-mapping/index.html
    drill/site/trunk/content/drill/docs/index.html
    drill/site/trunk/content/drill/docs/install-drill/index.html
    drill/site/trunk/content/drill/docs/installing-drill-in-distributed-mode/index.html
    drill/site/trunk/content/drill/docs/installing-drill-in-embedded-mode/index.html
    drill/site/trunk/content/drill/docs/installing-drill-on-linux/index.html
    drill/site/trunk/content/drill/docs/installing-drill-on-mac-os-x/index.html
    drill/site/trunk/content/drill/docs/installing-drill-on-windows/index.html
    drill/site/trunk/content/drill/docs/installing-the-apache-drill-sandbox/index.html
    drill/site/trunk/content/drill/docs/installing-the-mapr-drill-odbc-driver-on-linux/index.html
    drill/site/trunk/content/drill/docs/installing-the-mapr-drill-odbc-driver-on-mac-os-x/index.html
    drill/site/trunk/content/drill/docs/installing-the-mapr-sandbox-with-apache-drill-on-virtualbox/index.html
    drill/site/trunk/content/drill/docs/installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion/index.html
    drill/site/trunk/content/drill/docs/json-data-model/index.html
    drill/site/trunk/content/drill/docs/lession-1-learn-about-the-data-set/index.html
    drill/site/trunk/content/drill/docs/lession-2-run-queries-with-ansi-sql/index.html
    drill/site/trunk/content/drill/docs/lession-3-run-queries-on-complex-data-types/index.html
    drill/site/trunk/content/drill/docs/mapr-db-format/index.html
    drill/site/trunk/content/drill/docs/math-and-trig/index.html
    drill/site/trunk/content/drill/docs/mongodb-plugin-for-apache-drill/index.html
    drill/site/trunk/content/drill/docs/odbc-jdbc-interfaces/index.html
    drill/site/trunk/content/drill/docs/performance/index.html
    drill/site/trunk/content/drill/docs/query-data/index.html
    drill/site/trunk/content/drill/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/index.html
    drill/site/trunk/content/drill/docs/step-2-configure-odbc-connections-to-drill-data-sources/index.html
    drill/site/trunk/content/drill/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool/index.html
    drill/site/trunk/content/drill/docs/storage-plugin-configuration/index.html
    drill/site/trunk/content/drill/docs/storage-plugin-registration/index.html
    drill/site/trunk/content/drill/docs/summary/index.html
    drill/site/trunk/content/drill/docs/tableau-examples/index.html
    drill/site/trunk/content/drill/docs/testing-the-odbc-connection-on-linux-and-mac-os-x/index.html
    drill/site/trunk/content/drill/docs/useful-research/index.html
    drill/site/trunk/content/drill/docs/using-drill-explorer-to-browse-data-and-create-views/index.html
    drill/site/trunk/content/drill/docs/using-the-jdbc-driver/index.html
    drill/site/trunk/content/drill/docs/using-the-mapr-odbc-driver-on-linux-and-mac-os-x/index.html
    drill/site/trunk/content/drill/docs/using-the-mapr-odbc-driver-on-windows/index.html
    drill/site/trunk/content/drill/docs/workspaces/index.html
    drill/site/trunk/content/drill/feed.xml

Added: drill/site/trunk/content/drill/0003-DRILL-2505.patch
URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/0003-DRILL-2505.patch?rev=1670235&view=auto
==============================================================================
--- drill/site/trunk/content/drill/0003-DRILL-2505.patch (added)
+++ drill/site/trunk/content/drill/0003-DRILL-2505.patch Tue Mar 31 01:20:42 2015
@@ -0,0 +1,971 @@
+From f4b58638674f2797e2ab4b2c0411f3e256d35405 Mon Sep 17 00:00:00 2001
+From: Kristine Hahn <kh...@maprtech.com>
+Date: Wed, 25 Mar 2015 18:40:52 -0700
+Subject: [PATCH 3/3] DRILL-2505
+
+---
+ _docs/data-sources/004-json-ref.md             |  18 +-
+ _docs/query/001-get-started.md                 |   4 +-
+ _docs/query/query-complex/001-sample-donuts.md |   3 +-
+ _docs/sql-ref/001-data-types.md                |   6 +-
+ _docs/sql-ref/004-functions.md                 |  24 +-
+ _docs/sql-ref/008-sql-extensions.md            | 123 +++------
+ _docs/sql-ref/data-types/001-date.md           |  49 ++--
+ _docs/sql-ref/functions/001-math.md            | 329 +++++++++++++++++++++++++
+ _docs/sql-ref/nested/001-flatten.md            |  27 +-
+ _docs/sql-ref/nested/002-kvgen.md              |  15 +-
+ _docs/sql-ref/nested/003-repeated-cnt.md       |  23 +-
+ _docs/sql-ref/nested/004-repeated-contains.md  |  80 ++++++
+ _docs/tutorial/004-lesson2.md                  |   2 +-
+ 13 files changed, 530
insertions(+), 173 deletions(-) + create mode 100644 _docs/sql-ref/functions/001-math.md + create mode 100644 _docs/sql-ref/nested/004-repeated-contains.md + +diff --git a/_docs/data-sources/004-json-ref.md b/_docs/data-sources/004-json-ref.md +index bbb4386..b80231e 100644 +--- a/_docs/data-sources/004-json-ref.md ++++ b/_docs/data-sources/004-json-ref.md +@@ -89,15 +89,15 @@ You can write data from Drill to a JSON file. The following setup is required: + + * In the storage plugin definition, include a writable (mutable) workspace. For example: + +- { +- . . . +- "workspaces": { +- . . . +- "myjsonstore": { +- "location": "/tmp", +- "writable": true, +- } +- . . . ++ { ++ . . . ++ "workspaces": { ++ . . . ++ "myjsonstore": { ++ "location": "/tmp", ++ "writable": true, ++ } ++ . . . + + * Set the output format to JSON. For example: + +diff --git a/_docs/query/001-get-started.md b/_docs/query/001-get-started.md +index 0b24448..f60b50e 100644 +--- a/_docs/query/001-get-started.md ++++ b/_docs/query/001-get-started.md +@@ -67,10 +67,10 @@ In some cases, such as stopping while a query is in progress, this command does + these steps: + + 1. Issue a CTRL Z to stop the query, then start Drill again. If the startup message indicates success, skip the rest of the steps. If not, proceed to step 2. +- 2. Search for the Drill process ID. ++ 2. Search for the Drill process IDs. + + $ ps auwx | grep drill +- 3. Kill the process using the process number in the grep output. For example: ++ 3. Kill each process using the process numbers in the grep output. 
For example:
+
+        $ sudo kill -9 2674
+
+diff --git a/_docs/query/query-complex/001-sample-donuts.md b/_docs/query/query-complex/001-sample-donuts.md
+index 9bace24..f1c7a1f 100644
+--- a/_docs/query/query-complex/001-sample-donuts.md
++++ b/_docs/query/query-complex/001-sample-donuts.md
+@@ -2,7 +2,8 @@
+ title: "Sample Data: Donuts"
+ parent: "Querying Complex Data"
+ ---
+-The complex data queries use sample `donuts.json` and `moredonuts.json` files.
++The complex data queries use the sample `donuts.json` file. To download this file, go to the [Drill test resources](https://github.com/apache/drill/tree/master/exec/java-exec/src/test/resources) page, locate donuts.json in the list of files, and download it. For example, on the Mac, right-click the file, select Save Link As, and then click Save.
++
+ Here is the single complete "record" (`0001`) from the `donuts.json` file. In
+ terms of Drill query processing, this record is equivalent to a single record
+ in a table.
+diff --git a/_docs/sql-ref/001-data-types.md b/_docs/sql-ref/001-data-types.md
+index 7e56231..7e370e4 100644
+--- a/_docs/sql-ref/001-data-types.md
++++ b/_docs/sql-ref/001-data-types.md
+@@ -122,7 +122,7 @@ The following table lists data types top to bottom, in descending precedence. Dr
+ Drill supports a number of functions to cast and convert compatible data types:
+
+ * CAST
+-  Casts textual data from one data type to another.
++  Casts data from one data type to another.
+ * CONVERT_TO and CONVERT_FROM
+   Converts data, including binary data, from one data type to another.
+ * TO_CHAR
+@@ -136,7 +136,7 @@ Drill supports a number of functions to cast and convert compatible data types:
+
+ The following tables show data types that Drill can cast to/from other data types. Not all types are available for explicit casting in the current release.
+
+-### Explicit type Casting: Numeric and Character types
++### Explicit Type Casting: Numeric and Character types
+
+ <table>
+   <tr>
+@@ -463,7 +463,7 @@ If the SELECT statement includes a WHERE clause that compares a column of an unk
+
+     SELECT c_row, CAST(c_int AS DECIMAL(28,8)) FROM mydata WHERE CAST(c_int AS DECIMAL(28,8)) > -3.0
+
+-Although you can use CAST to handle binary data, CONVERT_TO and CONVERT_FROM are recommended for these conversions.
++Do not use CAST to handle binary data conversions. Use CONVERT_TO and CONVERT_FROM for these conversions.
+
+ ### Using CONVERT_TO and CONVERT_FROM
+
+diff --git a/_docs/sql-ref/004-functions.md b/_docs/sql-ref/004-functions.md
+index 03cd6ae..e444268 100644
+--- a/_docs/sql-ref/004-functions.md
++++ b/_docs/sql-ref/004-functions.md
+@@ -5,6 +5,7 @@ parent: "SQL Reference"
+ You can use the following types of functions in your Drill queries:
+
+ * Math Functions
++ * Trig Functions
+ * String Functions
+ * Date/Time Functions
+ * Data Type Formatting Functions
+@@ -13,23 +14,6 @@ You can use the following types of functions in your Drill queries:
+ * Convert and Cast Functions
+ * Nested Data Functions
+
+-## Math
+-
+-You can use the following scalar math functions in your Drill queries:
+-
+- * ABS
+- * CEIL
+- * CEILING
+- * DIV
+- * FLOOR
+- * MOD
+- * POWER
+- * RANDOM
+- * ROUND
+- * SIGN
+- * SQRT
+- * TRUNC
+-
+ ## String Functions
+
+ The following table provides the string functions that you can use in your
+@@ -136,11 +120,7 @@ CONVERT_FROM function on HBase, Drill decodes the data and converts it to the
+ specified data type. In instances where Drill sends data back to HBase during
+ a query, you can use the CONVERT_TO function to change the data type to bytes.
+
+-Although you can achieve the same results by using the CAST function for some
+-data types (such as VARBINARY to VARCHAR conversions), in general it is more
+-efficient to use CONVERT functions when your data sources return binary data.
+-When your data sources return more conventional data types, you can use the +-CAST function. ++Do not use the CAST function for converting binary data types to other types. Although CAST works for converting VARBINARY to VARCHAR, CAST does not work in other cases. CONVERT functions not only work regardless of the types you are converting but are also more efficient to use than CAST when your data sources return binary data. + + The following table provides the data types that you use with the CONVERT_TO + and CONVERT_FROM functions: +diff --git a/_docs/sql-ref/008-sql-extensions.md b/_docs/sql-ref/008-sql-extensions.md +index 1eb5f65..ce5be8f 100644 +--- a/_docs/sql-ref/008-sql-extensions.md ++++ b/_docs/sql-ref/008-sql-extensions.md +@@ -4,79 +4,26 @@ parent: "SQL Reference" + --- + Drill extends SQL to work with Hadoop-scale data and to explore smaller-scale data in ways not possible with SQL. Using intuitive SQL extensions you work with self-describing data and complex data types. Extensions to SQL include capabilities for exploring self-describing data, such as files and HBase, directly in the native format. + +-Drill provides language support for pointing to [storage plugin]() interfaces that Drill uses to interact with data sources. Use the name of a storage plugin to specify a file system *database* as a prefix in queries when you refer to objects across databases. Query files, including compressed .gz files and directories like an SQL table using a single query. ++Drill provides language support for pointing to [storage plugin]() interfaces that Drill uses to interact with data sources. Use the name of a storage plugin to specify a file system *database* as a prefix in queries when you refer to objects across databases. Query files, including compressed .gz files, and [directories](/docs/querying-directories), as you would query an SQL table. 
You can query [multiple files in a directory](/docs/lesson-3-create-a-storage-plugin/query-multiple-files-in-a-directory). + +-Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS SELECT, provides the capability to write data of complex/multi-structured data types. Drill extends the [lexical rules](http://drill.apache.org/docs/lexical-structure) for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for query profiles and diagnostic information. ++Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS SELECT provides the capability to write data of complex/multi-structured data types. Drill extends the [lexical rules](http://drill.apache.org/docs/lexical-structure) for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for query profiles and diagnostic information. + + ## Extensions for Hive- and HBase-related Data Sources + +-Drill supports Hive and HBase as a plug-and-play data source. You can query Hive tables with no modifications and creating model in the Hive metastore. Primitives, such as JOIN, support columnar operation. ++Drill supports Hive and HBase as a plug-and-play data source. Drill can read tables created in Hive that use [data types compatible](/docs/hive-to-drill-data-type-mapping) with Drill. You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. + + ## Extensions for JSON-related Data Sources +-For reading all JSON data as text, use the all text mode extension. 
Drill extends SQL to provide access to repeating values in arrays and arrays within arrays (array indexes). You can use these extensions to reach into deeply nested data. Drill extensions use standard JavaScript notation for referencing data elements in a hierarchy.
++For reading all JSON data as text, use the [all text mode](http://drill.apache.org/docs/handling-different-data-types/#all-text-mode-option) extension. Drill extends SQL to provide access to repeating values in arrays and arrays within arrays (array indexes). You can use these extensions to reach into deeply nested data. Drill extensions use standard JavaScript notation for referencing data elements in a hierarchy, as shown in ["Analyzing JSON."](/docs/json-data-model#analyzing-json)
+
+-## Extensions for Text Data Sources
+-Drill handles plain text files and directories like standard SQL tables and can infer knowledge about the schema of the data. You can query compressed .gz files.
+-
+-## SQL Commands Extensions
++## Extensions for Parquet Data Sources
++SQL does not support all Parquet data types, so Drill infers data types in many instances. Users [cast](/docs/sql-functions) data types to ensure getting a particular data type. Drill offers more liberal casting capabilities than SQL for Parquet conversions if the Parquet data is of a logical type. You can use the default dfs storage plugin installed with Drill for reading and writing Parquet files as shown in the section, ["Parquet Format."](/docs/parquet-format)
+
+-The following table describes key Drill extensions to SQL commands.
+ +-<table> +- <tr> +- <th>Command</th> +- <th>SQL</th> +- <th>Drill</th> +- </tr> +- <tr> +- <td>ALTER (SESSION | SYSTEM)</td> +- <td>None</td> +- <td>Changes a system or session option.</td> +- </tr> +- <tr> +- <td>CREATE TABLE AS SELECT</td> +- <td>Creates a table from selected data in an existing database table.</td> +- <td>Stores selected data from one or more data sources on the file system.</td> +- </tr> +- <tr> +- <td>CREATE VIEW</td> +- <td>Creates a virtual table. The fields in a view are fields from one or more real tables in the database.</td> +- <td>Creates a virtual structure for and stores the result set. The fields in a view are fields from files in a file system, Hive, Hbase, MapR-DB tables</td> +- </tr> +- <tr> +- <td>DESCRIBE</td> +- <td>Obtains information about the <select list> columns</td> +- <td>Obtains information about views created in a workspace and tables created in Hive, HBase, and MapR-DB.</td> +- </tr> +- <tr> +- <td>EXPLAIN</td> +- <td>None</td> +- <td>Obtains a query execution plan.</td> +- </tr> +- <tr> +- <td>INSERT</td> +- <td>Loads data into the database for querying.</td> +- <td>No INSERT function. Performs schema-on-read querying and execution; no need to load data into Drill for querying.</td> +- </tr> +- <tr> +- <td>SELECT</td> +- <td>Retrieves rows from a database table or view.</td> +- <td>Retrieves data from Hbase, Hive, MapR-DB, file system or other storage plugin data source.</td> +- </tr> +- <tr> +- <td>SHOW (DATABASES | SCHEMAS | FILES | TABLES)</td> +- <td>None</td> +- <td>Lists the storage plugin data sources available for querying or the Hive, Hbase, MapR-DB tables, or views for the data source in use. 
Supports a FROM clause for listing file data sources in directories.</td> +- </tr> +- <tr> +- <td>USE</td> +- <td>Targets a database in SQL schema for querying.</td> +- <td>Targets Hbase, Hive, MapR-DB, file system or other storage plugin data source, which can be schema-less for querying.</td> +- </tr> +-</table> ++## Extensions for Text Data Sources ++Drill handles plain text files and directories like standard SQL tables and can infer knowledge about the schema of the data. Drill extends SQL to handle structured file types, such as comma separated values (CSV) files. An extension of the SELECT statement provides COLUMNS[n] syntax for accessing CSV rows in a readable format, as shown in ["COLUMNS[n] Syntax."](/docs/querying-plain-text-files) + + ## SQL Function Extensions +-The following table describes key Drill functions for analyzing nested data. ++Drill provides the following functions for analyzing nested data. + + <table> + <tr> +@@ -85,51 +32,41 @@ The following table describes key Drill functions for analyzing nested data. + <th>Drill</th> + </tr> + <tr> +- <td>CAST</td> +- <td>Casts database data from one type to another.</td> +- <td>Casts database data from one type to another and also casts data having no metadata into a readable type. 
Allows liberal casting of schema-less data.</td> +- </tr> +- <tr> +- <td>CONVERT_TO</td> +- <td>Converts an expression from one type to another using the CONVERT command.</td> +- <td>Converts an SQL data type to complex types, including Hbase byte arrays, JSON and Parquet arrays and maps.</td> +- </tr> +- <tr> +- <td>CONVERT_FROM</td> +- <td>Same as above</td> +- <td>Converts from complex types, including Hbase byte arrays, JSON and Parquet arrays and maps to an SQL data type.</td> +- </tr> +- <tr> +- <td>FLATTEN</td> ++ <td><a href='http://drill.apache.org/docs/flatten-function'>FLATTEN</a> </td> + <td>None</td> + <td>Separates the elements in nested data from a repeated field into individual records.</td> + </tr> + <tr> +- <td>KVGEN</td> ++ <td><a href='http://drill.apache.org/docs/kvgen-function'>KVGEN</a></td> + <td>None</td> + <td>Returns a repeated map, generating key-value pairs to simplify querying of complex data having unknown column names. You can then aggregate or filter on the key or value.</td> + </tr> + <tr> +- <td>REPEATED_COUNT</td> ++ <td><a href='http://drill.apache.org/docs/repeated-count-function'>REPEATED_COUNT</a></td> ++ <td>None</td> ++ <td>Counts the values in an array.</td> ++ </tr> ++ <tr> ++ <td><a href='http://drill.apache.org/docs/repeated-contains'>REPEATED_CONTAINS</a></td> + <td>None</td> +- <td>Counts the values in a JSON array.</td> ++ <td>Searches for a keyword in an array.</td> + </tr> + </table> + + ## Other Extensions + +-[`sys` database system tables]() provide port, version, and option information. Drill Connects to a random node, know where youâre connected: ++The [`sys` database system tables]() provide port, version, and option information. For example, Drill connects to a random node. 
You query the sys table to know where you are connected: + +-select host from sys.drillbits where `current` = true; +-+------------+ +-| host | +-+------------+ +-| 10.1.1.109 | +-+------------+ ++ SELECT host FROM sys.drillbits WHERE `current` = true; ++ +------------+ ++ | host | ++ +------------+ ++ | 10.1.1.109 | ++ +------------+ + +-select commit_id from sys.version; +-+------------+ +-| commit_id | +-+------------+ +-| e3ab2c1760ad34bda80141e2c3108f7eda7c9104 | ++ SELECT commit_id FROM sys.version; ++ +------------+ ++ | commit_id | ++ +------------+ ++ | e3ab2c1760ad34bda80141e2c3108f7eda7c9104 | + +diff --git a/_docs/sql-ref/data-types/001-date.md b/_docs/sql-ref/data-types/001-date.md +index 88fe008..87f93ba 100644 +--- a/_docs/sql-ref/data-types/001-date.md ++++ b/_docs/sql-ref/data-types/001-date.md +@@ -16,29 +16,29 @@ Next, use the following literals in a SELECT statement. + * `time` + * `timestamp` + +- SELECT date '2010-2-15' FROM dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; +- +------------+ +- | EXPR$0 | +- +------------+ +- | 2010-02-15 | +- +------------+ +- 1 row selected (0.083 seconds) +- +- SELECT time '15:20:30' from dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; +- +------------+ +- | EXPR$0 | +- +------------+ +- | 15:20:30 | +- +------------+ +- 1 row selected (0.067 seconds) +- +- SELECT timestamp '2015-03-11 6:50:08' FROM dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; +- +------------+ +- | EXPR$0 | +- +------------+ +- | 2015-03-11 06:50:08.0 | +- +------------+ +- 1 row selected (0.071 seconds) ++ SELECT date '2010-2-15' FROM dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 2010-02-15 | ++ +------------+ ++ 1 row selected (0.083 seconds) ++ ++ SELECT time '15:20:30' from dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 15:20:30 | ++ +------------+ ++ 1 row selected (0.067 seconds) ++ ++ SELECT 
timestamp '2015-03-11 6:50:08' FROM dfs.`/Users/drilluser/apache-drill-0.8.0/dummy.json`; ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 2015-03-11 06:50:08.0 | ++ +------------+ ++ 1 row selected (0.071 seconds) + + ## INTERVAL + +@@ -120,8 +120,7 @@ The following CTAS statement shows how to cast text from a JSON file to INTERVAL + cast( INTERVALDAY_col as interval day) INTERVALDAY_col + FROM `/user/root/intervals.json`); + +-Output is: ++<!-- Text and include output --> + +-TBD need to test in a future build. + + +diff --git a/_docs/sql-ref/functions/001-math.md b/_docs/sql-ref/functions/001-math.md +new file mode 100644 +index 0000000..829611d +--- /dev/null ++++ b/_docs/sql-ref/functions/001-math.md +@@ -0,0 +1,329 @@ ++--- ++title: "Math and Trig" ++parent: "SQL Functions" ++--- ++Drill supports the math functions shown in the following table plus trig functions listed at the end of this section. Most math functions and all trig functions take these input types: ++ ++* INT ++* BIGINT ++* FLOAT4 ++* FLOAT8 ++* SMALLINT ++* UINT1 ++* UINT2 ++* UINT4 ++* UINT8 ++ ++Exceptions are the LSHIFT and RSHIFT functions, which take all types except the float types. 
DEGREES, EXP, RADIANS, and the multiple LOG functions take the input types in this list plus the following additional types:
++
++* DECIMAL9
++* DECIMAL18
++
++**Math Functions**
++
++<table>
++  <tr>
++    <th>Function</th>
++    <th>Return Type</th>
++    <th>Description</th>
++  </tr>
++  <tr>
++    <td>ABS(x)</td>
++    <td>Same as input</td>
++    <td>Returns the absolute value of the input argument x.</td>
++  </tr>
++  <tr>
++    <td>CBRT(x)</td>
++    <td>FLOAT8</td>
++    <td>Returns the cubic root of x.</td>
++  </tr>
++  <tr>
++    <td>CEIL(x)</td>
++    <td>Same as input</td>
++    <td>Returns the smallest integer not less than x.</td>
++  </tr>
++  <tr>
++    <td>CEILING(x)</td>
++    <td>Same as input</td>
++    <td>Same as CEIL.</td>
++  </tr>
++  <tr>
++    <td>DEGREES(x)</td>
++    <td>FLOAT8</td>
++    <td>Converts x radians to degrees.</td>
++  </tr>
++  <tr>
++    <td>EXP(x)</td>
++    <td>FLOAT8</td>
++    <td>Returns e (Euler's number) to the power of x.</td>
++  </tr>
++  <tr>
++    <td>FLOOR(x)</td>
++    <td>Same as input</td>
++    <td>Returns the largest integer not greater than x.</td>
++  </tr>
++  <tr>
++    <td>LOG(x)</td>
++    <td>FLOAT8</td>
++    <td>Returns the natural log of x.</td>
++  </tr>
++  <tr>
++    <td>LSHIFT(x, y)</td>
++    <td>Same as input</td>
++    <td>Shifts the binary x to the left y times.</td>
++  </tr>
++  <tr>
++    <td>RADIANS(x)</td>
++    <td>FLOAT8</td>
++    <td>Converts x degrees to radians.</td>
++  </tr>
++  <tr>
++    <td>ROUND(x)</td>
++    <td>Same as input</td>
++    <td>Rounds to the nearest integer.</td>
++  </tr>
++  <tr>
++    <td>ROUND(x, y)</td>
++    <td>DECIMAL</td>
++    <td>Rounds x to y decimal places.</td>
++  </tr>
++  <tr>
++    <td>RSHIFT(x, y)</td>
++    <td>Same as input</td>
++    <td>Shifts the binary x to the right y times.</td>
++  </tr>
++  <tr>
++    <td>SIGN(x)</td>
++    <td>INT</td>
++    <td>Returns the sign of x.</td>
++  </tr>
++  <tr>
++    <td>SQRT(x)</td>
++    <td>Same as input</td>
++    <td>Returns the square root of x.</td>
++  </tr>
++  <tr>
++    <td>TRUNC(x)</td>
++    <td>Same as input</td>
++    <td>Truncates x toward zero.</td>
++  </tr>
++  <tr>
++    <td>TRUNC(x, y)</td>
++    <td>DECIMAL</td>
++    <td>Truncates x to y decimal places.</td>
++  </tr>
++</table>
++
++## Math Function Examples
++
++Examples in this section use the following files:
++
++* The `input2.json` file
++* A dummy JSON file
++
++Download the `input2.json` file from the [Drill source code](https://github.com/apache/drill/tree/master/exec/java-exec/src/test/resources/jsoninput) page. On the Mac, for example, right-click input2.json and choose Save Link As, and then click Save.
++
++Create a dummy JSON file having the following contents:
++
++    {"dummy" : "data"}
++
++### ABS Example
++Get the absolute value of the integer key in `input2.json`. The following snippet of input2.json shows the relevant integer content:
++
++    { "integer" : 2010,
++      "float" : 17.4,
++      "x": {
++        "y": "kevin",
++        "z": "paul"
++      . . .
++    }
++    { "integer" : -2002,
++      "float" : -1.2
++    }
++    . . .
++
++    SELECT `integer` FROM dfs.`/Users/drill/input2.json`;
++
++    +------------+
++    |  integer   |
++    +------------+
++    | 2010       |
++    | -2002      |
++    | 2001       |
++    | 6005       |
++    +------------+
++    4 rows selected (0.113 seconds)
++
++    SELECT ABS(`integer`) FROM dfs.`/Users/drill/input2.json`;
++
++    +------------+
++    |   EXPR$0   |
++    +------------+
++    | 2010       |
++    | 2002       |
++    | 2001       |
++    | 6005       |
++    +------------+
++    4 rows selected (0.357 seconds)
++
++### CEIL Example
++Get the ceiling of float key values in input2.json. The input2.json file contains these float key values:
++
++* 17.4
++* -1.2
++* 1.2
++* 1.2
++
++    SELECT CEIL(`float`) FROM dfs.`/Users/drill/input2.json`;
++
++    +------------+
++    |   EXPR$0   |
++    +------------+
++    | 18.0       |
++    | -1.0       |
++    | 2.0        |
++    | 2.0        |
++    +------------+
++    4 rows selected (0.647 seconds)
++
++### FLOOR Example
++Get the floor of float key values in input2.json.
++ ++ SELECT FLOOR(`float`) FROM dfs.`/Users/drill/input2.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 17.0 | ++ | -2.0 | ++ | 1.0 | ++ | 1.0 | ++ +------------+ ++ 4 rows selected (0.11 seconds) ++ ++### ROUND Examples ++Open input2.json and change the first float value from 17.4 to 3.14159. Get values of the float columns in input2.json rounded as follows: ++ ++* Rounded to the nearest integer. ++* Rounded to the fourth decimal place. ++ ++ SELECT ROUND(`float`) FROM dfs.`/Users/khahn/Documents/test_files_source/input2.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 3.0 | ++ | -1.0 | ++ | 1.0 | ++ | 1.0 | ++ +------------+ ++ 4 rows selected (0.061 seconds) ++ ++ SELECT ROUND(`float`, 4) FROM dfs.`/Users/khahn/Documents/test_files_source/input2.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 3.1416 | ++ | -1.2 | ++ | 1.2 | ++ | 1.2 | ++ +------------+ ++ 4 rows selected (0.059 seconds) ++ ++## Log Examples ++ ++Get the base 2 log of 64. ++ ++ SELECT log(2, 64) FROM dfs.`/Users/drill/dummy.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 6.0 | ++ +------------+ ++ 1 row selected (0.069 seconds) ++ ++Get the common log of 100. ++ ++ SELECT log10(100) FROM dfs.`/Users/drill/dummy.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 2.0 | ++ +------------+ ++ 1 row selected (0.203 seconds) ++ ++Get the natural log of 7.5. ++ ++ SELECT log(7.5) FROM dfs.`/Users/drill/sample-data/dummy.json`; ++ ++ +------------+ ++ | EXPR$0 | ++ +------------+ ++ | 2.0149030205422647 | ++ +------------+ ++ 1 row selected (0.063 seconds) ++ ++**Trig Functions** ++ ++Drill supports the following trig functions, which return a FLOAT8 result. 
++ ++* SIN(x) ++ Sine of angle x in radians ++ ++* COS(x) ++ Cosine of angle x in radians ++ ++* TAN(x) ++ Tangent of angle x in radians ++ ++* ASIN(x) ++ Inverse sine of angle x in radians ++ ++* ACOS(x) ++ Inverse cosine of angle x in radians ++ ++* ATAN(x) ++ Inverse tangent of angle x in radians ++ ++* SINH(x) ++ Hyperbolic sine of hyperbolic angle x in radians ++ ++* COSH(x) ++ Hyperbolic cosine of hyperbolic angle x in radians ++ ++* TANH(x) ++ Hyperbolic tangent of hyperbolic angle x in radians ++ ++**Examples** ++ ++Find the sine and tangent of a 45 degree angle. First convert 45 degrees to radians for use in the SIN() function. ++ ++ SELECT RADIANS(45) AS Radians FROM dfs.`/Users/drill/dummy.json`; ++ ++ +------------+ ++ | Radians | ++ +------------+ ++ | 0.7853981633974483 | ++ +------------+ ++ 1 row selected (0.045 seconds) ++ ++ SELECT SIN(0.7853981633974483) AS `Sine of 45 degrees` FROM dfs.`/Users/drill/dummy.json`; ++ ++ +-----------------------+ ++ | Sine of 45 degrees | ++ +-----------------------+ ++ | 0.7071067811865475 | ++ +-----------------------+ ++ 1 row selected (0.059 seconds) ++ ++ SELECT TAN(0.7853981633974483) AS `Tangent of 45 degrees` FROM dfs.`/Users/drill/dummy.json`; ++ ++ +-----------------------+ ++ | Tangent of 45 degrees | ++ +-----------------------+ ++ | 0.9999999999999999 | ++ +-----------------------+ ++ +diff --git a/_docs/sql-ref/nested/001-flatten.md b/_docs/sql-ref/nested/001-flatten.md +index fdebbf4..0a6b7aa 100644 +--- a/_docs/sql-ref/nested/001-flatten.md ++++ b/_docs/sql-ref/nested/001-flatten.md +@@ -2,11 +2,22 @@ + title: "FLATTEN Function" + parent: "Nested Data Functions" + --- ++FLATTEN separates the elements in a repeated field into individual records. ++ ++## Syntax ++ ++ FLATTEN(z) ++ ++*z* is a JSON array. ++ ++## Usage Notes ++ + The FLATTEN function is useful for flexible exploration of repeated data. +-FLATTEN separates the elements in a repeated field into individual records.
To +-maintain the association between each flattened value and the other fields in +-the record, all of the other columns are copied into each new record. A very +-simple example would turn this data (one record): ++ ++To maintain the association between each flattened value and the other fields in ++the record, the FLATTEN function copies all of the other columns into each new record. ++ ++A very simple example would turn this data (one record): + + { + "x" : 5, +@@ -16,7 +27,7 @@ simple example would turn this data (one record): + + into three distinct records: + +- select flatten(z) from table; ++ SELECT x, y, FLATTEN(z) FROM table; + | x | y | z | + +-------------+----------------+-----------+ + | 5 | "a string" | 1 | +@@ -26,7 +37,9 @@ into three distinct records: + The function takes a single argument, which must be an array (the `z` column + in this example). + +- ++Using the ALL (*) wildcard as the argument to FLATTEN is not supported and returns an error. ++ ++## Examples + + For a more interesting example, consider the JSON data in the publicly + available [Yelp](https://www.yelp.com/dataset_challenge/dataset) data set. The +@@ -85,5 +98,5 @@ the categories array, then run a COUNT function on the flattened result: + +---------------|------------+ + + A common use case for FLATTEN is its use in conjunction with the +-[KVGEN](/docs/flatten-function) function. ++[KVGEN](/docs/kvgen-function) function as shown in the section, ["JSON Data Model"](/docs/json-data-model/). + +diff --git a/_docs/sql-ref/nested/002-kvgen.md b/_docs/sql-ref/nested/002-kvgen.md +index dd4edd3..97c3e76 100644 +--- a/_docs/sql-ref/nested/002-kvgen.md ++++ b/_docs/sql-ref/nested/002-kvgen.md +@@ -2,6 +2,16 @@ + title: "KVGEN Function" + parent: "Nested Data Functions" + --- ++Returns a list of the keys that exist in the map. ++ ++## Syntax ++ ++ KVGEN(column) ++ ++*column* is the name of a column. ++ ++## Usage Notes ++ + KVGEN stands for _key-value generation_.
This function is useful when complex + data files contain arbitrary maps that consist of relatively "unknown" column + names. Instead of having to specify columns in the map to access the data, you +@@ -13,8 +23,6 @@ keys or constrain the keys in some way. For example, you can use the + [FLATTEN](/docs/flatten-function) function to break the + array down into multiple distinct rows and further query those rows. + +- +- + For example, assume that a JSON file contains this data: + + {"a": "valA", "b": "valB"} +@@ -146,5 +154,4 @@ distinct rows: + +------------+ + 9 rows selected (0.151 seconds) + +-See the description of [FLATTEN](/docs/flatten-function) +-for an example of a query against the flattened data. +\ No newline at end of file ++For more examples of KVGEN and FLATTEN, see the examples in the section, ["JSON Data Model"](/docs/json-data-model). +\ No newline at end of file +diff --git a/_docs/sql-ref/nested/003-repeated-cnt.md b/_docs/sql-ref/nested/003-repeated-cnt.md +index f15eeda..531c8ad 100644 +--- a/_docs/sql-ref/nested/003-repeated-cnt.md ++++ b/_docs/sql-ref/nested/003-repeated-cnt.md +@@ -2,7 +2,23 @@ + title: "REPEATED_COUNT Function" + parent: "Nested Data Functions" + --- +-This function counts the values in an array. The following example returns the ++This function counts the values in an array. ++ ++## Syntax ++ ++ REPEATED_COUNT(array) ++ ++*array* is the name of an array. ++ ++## Usage Notes ++ ++The REPEATED_COUNT function requires a single argument, which must be an array. Note that ++this function is not a standard SQL aggregate function and does not require ++the count to be grouped by other columns in the select list (such as `name` in ++the example below). ++ ++## Example ++The following example returns the + counts for the `categories` array in the `yelp_academic_dataset_business.json` + file. The counts are restricted to rows that contain the string `pizza`. + +@@ -24,10 +40,5 @@ file.
The counts are restricted to rows that contain the string `pizza`. + + 7 rows selected (2.03 seconds) + +-The function requires a single argument, which must be an array. Note that +-this function is not a standard SQL aggregate function and does not require +-the count to be grouped by other columns in the select list (such as `name` in +-this example). +- + For another example of this function, see the following lesson in the Apache + Drill Tutorial for Hadoop: [Lesson 3: Run Queries on Complex Data Types](/docs/lession-3-run-queries-on-complex-data-types/). +\ No newline at end of file +diff --git a/_docs/sql-ref/nested/004-repeated-contains.md b/_docs/sql-ref/nested/004-repeated-contains.md +new file mode 100644 +index 0000000..cd12760 +--- /dev/null ++++ b/_docs/sql-ref/nested/004-repeated-contains.md +@@ -0,0 +1,80 @@ ++--- ++title: "REPEATED_CONTAINS Function" ++parent: "Nested Data Functions" ++--- ++REPEATED_CONTAINS searches for a keyword in an array. ++ ++## Syntax ++ ++ REPEATED_CONTAINS(array_name, keyword) ++ ++* *array_name* is a simple array, such as topping: ++ ++ { ++ . . . ++ "topping": ++ [ ++ "None", ++ "Glazed", ++ "Sugar", ++ "Powdered Sugar", ++ "Chocolate with Sprinkles", ++ "Chocolate", ++ "Maple" ++ ] ++ } ++ ++* *keyword* is a value in the array, such as 'Glazed'. ++ ++## Usage Notes ++REPEATED_CONTAINS returns true if Drill finds a match; otherwise, the function returns false. The function supports regular expression wildcards, such as *, ., and ?, but not at the beginning of the keyword. Enclose keyword string values in single quotation marks. Do not enclose numerical keyword values in single quotation marks. ++ ++## Examples ++The examples in this section use `testRepeatedWrite.json`. To download this file, go to the [Drill test resources](https://github.com/apache/drill/tree/master/exec/java-exec/src/test/resources) page, locate testRepeatedWrite.json in the list of files, and download it.
For example, on the Mac, right-click the file, select Save Link As, and then click Save. ++ ++Which donuts have glazed or glaze toppings? ++ ++ SELECT name, REPEATED_CONTAINS(topping, 'Glaze?') AS `Glazed?` FROM dfs.`/Users/drill/testRepeatedWrite.json` WHERE type='donut'; ++ ++ +------------+------------+ ++ | name | Glazed? | ++ +------------+------------+ ++ | Cake | true | ++ | Raised | true | ++ | Old Fashioned | true | ++ | Filled | true | ++ | Apple Fritter | true | ++ +------------+------------+ ++ 5 rows selected (0.072 seconds) ++ ++Which objects have powdered sugar toppings? Use the asterisk wildcard instead of typing the entire keyword pair. ++ ++ SELECT name, REPEATED_CONTAINS(topping, 'P*r') AS `Powdered Sugar?` FROM dfs.`/Users/drill/testRepeatedWrite.json` WHERE type='donut'; ++ ++ +------------+-----------------+ ++ | name | Powdered Sugar? | ++ +------------+-----------------+ ++ | Cake | true | ++ | Raised | true | ++ | Old Fashioned | false | ++ | Filled | true | ++ | Apple Fritter | false | ++ +------------+-----------------+ ++ 5 rows selected (0.089 seconds) ++ ++Which donuts have toppings beginning with the letters "Map" and ending in any two letters? ++ ++ SELECT name, REPEATED_CONTAINS(topping, 'Map..') AS `Maple?` FROM dfs.`/Users/drill/testRepeatedWrite.json` WHERE type='donut'; ++ ++ +------------+------------+ ++ | name | Maple? | ++ +------------+------------+ ++ | Cake | true | ++ | Raised | true | ++ | Old Fashioned | true | ++ | Filled | true | ++ | Apple Fritter | false | ++ +------------+------------+ ++ 5 rows selected (0.085 seconds) ++ ++ +diff --git a/_docs/tutorial/004-lesson2.md b/_docs/tutorial/004-lesson2.md +index 5b29b7b..eca22bb 100644 +--- a/_docs/tutorial/004-lesson2.md ++++ b/_docs/tutorial/004-lesson2.md +@@ -237,7 +237,7 @@ register any sales in that state. + + Note the following features of this query: + +- * The CAST function is required for every column in the table.
This function returns the MapR-DB/HBase binary data as readable integers and strings. Alternatively, you can use CONVERT_TO/CONVERT_FROM functions to decode the columns. CONVERT_TO and CONVERT_FROM are more efficient than CAST in most cases. ++ * The CAST function is required for every column in the table. This function returns the MapR-DB/HBase binary data as readable integers and strings. Alternatively, you can use CONVERT_TO/CONVERT_FROM functions to decode the string columns. CONVERT_TO/CONVERT_FROM are more efficient than CAST in most cases. Use only CONVERT_TO to convert binary types to any type other than VARCHAR. + * The row_key column functions as the primary key of the table (a customer ID in this case). + * The table alias t is required; otherwise the column family names would be parsed as table names and the query would return an error. + +-- +1.9.5 (Apple Git-50.3) + Modified: drill/site/trunk/content/drill/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html (original) +++ drill/site/trunk/content/drill/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html Tue Mar 31 01:20:42 2015 @@ -78,9 +78,8 @@ <div class="addthis_sharing_toolbox"></div> <article class="post-content"> - <script type="text/javascript" src="https://addthisevent.com/libs/1.5.8/ate.min.js"></script> - -<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent"> + <p><script type="text/javascript" src="https://addthisevent.com/libs/1.5.8/ate.min.js"></script> +<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent"> Add to 
Calendar <span class="_start">12-17-2014 11:30:00</span> <span class="_end">12-17-2014 12:30:00</span> Modified: drill/site/trunk/content/drill/docs/advanced-properties/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/advanced-properties/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/advanced-properties/index.html (original) +++ drill/site/trunk/content/drill/docs/advanced-properties/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/configuring-odbc-connections-for-linux-and-mac-os-x">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/testing-the-odbc-connection-on-linux-and-mac-os-x">Next</a></p> - -<p>When you use advanced properties, you must separate them using a semi-colon +<div class="int_text" align="left"><p>When you use advanced properties, you must separate them using a semi-colon (;).</p> <p>For example, the following Advanced Properties string excludes the schemas Modified: drill/site/trunk/content/drill/docs/analyzing-yelp-json-data-with-apache-drill/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/analyzing-yelp-json-data-with-apache-drill/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/analyzing-yelp-json-data-with-apache-drill/index.html (original) +++ drill/site/trunk/content/drill/docs/analyzing-yelp-json-data-with-apache-drill/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/summary">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/install-drill">Next</a></p> - -<p><a 
href="https://www.mapr.com/products/apache-drill">Apache Drill</a> is one of the +<div class="int_text" align="left"><p><a href="https://www.mapr.com/products/apache-drill">Apache Drill</a> is one of the fastest growing open source projects, with the community making rapid progress with monthly releases. The key difference is Drill’s agility and flexibility. Along with meeting the table stakes for SQL-on-Hadoop, which is to achieve low Modified: drill/site/trunk/content/drill/docs/apache-drill-in-10-minutes/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/apache-drill-in-10-minutes/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/apache-drill-in-10-minutes/index.html (original) +++ drill/site/trunk/content/drill/docs/apache-drill-in-10-minutes/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/install-drill">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/deploying-apache-drill-in-a-clustered-environment">Next</a></p> - -<ul> +<div class="int_text" align="left"><ul> <li>Objective</li> <li>A Few Bits About Apache Drill</li> <li>Process Overview</li> Modified: drill/site/trunk/content/drill/docs/apache-drill-tutorial/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/apache-drill-tutorial/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/apache-drill-tutorial/index.html (original) +++ drill/site/trunk/content/drill/docs/apache-drill-tutorial/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/performance">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code>
</code><a href="/docs/installing-the-apache-drill-sandbox">Next</a></p> - -<p>This tutorial uses the MapR Sandbox, which is a Hadoop environment pre- +<div class="int_text" align="left"><p>This tutorial uses the MapR Sandbox, which is a Hadoop environment pre- configured with Apache Drill.</p> <p>To complete the tutorial on the MapR Sandbox with Apache Drill, work through Modified: drill/site/trunk/content/drill/docs/architectural-highlights/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/architectural-highlights/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/architectural-highlights/index.html (original) +++ drill/site/trunk/content/drill/docs/architectural-highlights/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/core-modules-within-a-drillbit">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/flexibility">Next</a></p> - -<p>The goal for Drill is to bring the <strong>SQL Ecosystem</strong> and <strong>Performance</strong> of +<div class="int_text" align="left"><p>The goal for Drill is to bring the <strong>SQL Ecosystem</strong> and <strong>Performance</strong> of the relational systems to <strong>Hadoop scale</strong> data <strong>WITHOUT</strong> compromising on the <strong>Flexibility</strong> of Hadoop/NoSQL systems. 
There are several core architectural elements in Apache Drill that make it a highly flexible and Modified: drill/site/trunk/content/drill/docs/architectural-overview/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/architectural-overview/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/architectural-overview/index.html (original) +++ drill/site/trunk/content/drill/docs/architectural-overview/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/core-modules-within-a-drillbit">Next</a></p> - -<p>Apache Drill is a low latency distributed query engine for large-scale +<div class="int_text" align="left"><p>Apache Drill is a low latency distributed query engine for large-scale datasets, including structured and semi-structured/nested data. 
Inspired by Google’s Dremel, Drill is designed to scale to several thousands of nodes and query petabytes of data at interactive speeds that BI/Analytics environments Added: drill/site/trunk/content/drill/docs/casting-converting-data-types/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/casting-converting-data-types/index.html?rev=1670235&view=auto ============================================================================== --- drill/site/trunk/content/drill/docs/casting-converting-data-types/index.html (added) +++ drill/site/trunk/content/drill/docs/casting-converting-data-types/index.html Tue Mar 31 01:20:42 2015 @@ -0,0 +1,421 @@ +<!DOCTYPE html> +<html> + +<head> + +<meta charset="UTF-8"> + + +<title>Casting/Converting Data Types - Apache Drill</title> + +<link href="/css/syntax.css" rel="stylesheet" type="text/css"> +<link href="/css/style.css" rel="stylesheet" type="text/css"> +<link href="/css/arrows.css" rel="stylesheet" type="text/css"> +<link href="/css/button.css" rel="stylesheet" type="text/css"> + +<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"> +<link rel="icon" href="/favicon.ico" type="image/x-icon"> + +<script language="javascript" type="text/javascript" src="/js/lib/jquery-1.11.1.min.js"></script> +<script language="javascript" type="text/javascript" src="/js/lib/jquery.easing.1.3.js"></script> +<script language="javascript" type="text/javascript" src="/js/modernizr.custom.js"></script> +<script language="javascript" type="text/javascript" src="/js/script.js"></script> + +</head> + +<body onResize="resized();"> + +<div class="bui"></div> + +<div id="search"> +<input type="text" placeholder="Enter search term here"> +</div> + +<div id="menu" class="mw"> +<ul> + <li class="logo"><a href="/"></a></li> + <li> + <a href="/overview/">Documentation</a> + <ul> + <li><a href="/overview/">Overview </a></li> + <li><a href="https://cwiki.apache.org/confluence/display/DRILL/Apache+Drill+in+10+Minutes"
target="_blank">Drill in 10 Minutes</a></li> + <li><a href="/why/">Why Drill? </a></li> + <li><a href="/architecture/">Architecture</a></li> + </ul> + </li> + <li> + <a href="/community/">Community</a> + <ul> + <li><a href="/team/">Team</a></li> + <li><a href="/community/#events">Events and Meetups</a></li> + <li><a href="/community/#mailinglists">Mailing Lists</a></li> + <li><a href="/community/#getinvolved">Get Involved</a></li> + <li><a href="https://issues.apache.org/jira/browse/DRILL/" target="_blank">Issue Tracker</a></li> + <li><a href="https://github.com/apache/drill" target="_blank">GitHub</a></li> + </ul> + </li> + <li><a href="/faq/">FAQ</a></li> + <li><a href="/blog/">Blog</a></li> + <li style="width:30px; padding-left: 2px; padding-right:10px"><a href="https://twitter.com/apachedrill" target="_blank"><img src="/images/twitterbw.png" alt="" align="center" width="22" style="padding: 0px 10px 1px 0px;"></a> </li> + <li class="l"><span> </span></li> + <li class="d"><a href="/download/">Download</a></li> +</ul> +</div> + +<div class="int_title"> +<h1>Casting/Converting Data Types</h1> + +</div> + +<div class="int_text" align="left"><p>Drill supports the following functions for casting and converting data types:</p> + +<ul> +<li><a href="/docs/data-type-fmt#cast">CAST</a></li> +<li><a href="/docs/data-type-fmt#convert-to-and-convert-from">CONVERT TO/FROM</a></li> +<li><a href="/docs/data-type-fmt#other-data-type-conversion-functions">Other data type conversion functions</a></li> +</ul> + +<h1 id="cast">CAST</h1> + +<p>The CAST function converts an entity having a single data value, such as a column name, from one type to another.</p> + +<h2 id="syntax">Syntax</h2> + +<p>CAST (&lt;expression&gt; AS &lt;data type&gt;)</p> + +<p><em>expression</em></p> + +<p>An entity that evaluates to one or more values, such as a column name or literal</p> + +<p><em>data type</em></p> + +<p>The target data type, such as INTEGER or DATE, to which to cast the expression</p> + +<h2
id="usage-notes">Usage Notes</h2> + +<p>If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, cast both the value of the column and the comparison value in the WHERE clause. For example:</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT c_row, CAST(c_int AS DECIMAL(28,8)) FROM mydata WHERE CAST(c_int AS DECIMAL(28,8)) > -3.0 +</code></pre></div> +<p>Do not use the CAST function for converting binary data types to other types. Although CAST works for converting VARBINARY to VARCHAR, CAST does not work in other cases for converting binary data. Use CONVERT_TO and CONVERT_FROM for converting to or from binary data. </p> + +<p>Refer to the following tables for information about the data types to use for casting:</p> + +<ul> +<li><a href="/docs/supported-data-types-for-casting">Supported Data Types for Casting</a></li> +<li><a href="/docs/explicit-type-casting-maps">Explicit Type Casting Maps</a></li> +</ul> + +<h2 id="examples">Examples</h2> + +<p>The following examples refer to a dummy JSON file in the FROM clause. The dummy JSON file has the following contents.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">{"dummy" : "data"} +</code></pre></div> +<h3 id="casting-a-character-string-to-a-number">Casting a character string to a number</h3> + +<p>You cannot cast a character string that includes a decimal point to an INT or BIGINT. For example, if you have "1200.50" in a JSON file, attempting to select and cast the string to an INT fails. As a workaround, cast to a float or decimal type, and then to an integer type.
</p> + +<p>The following example shows how to cast a character to a DECIMAL having two decimal places.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CAST('1' as DECIMAL(28, 2)) FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 1.00 | ++------------+ +</code></pre></div> +<h3 id="casting-a-number-to-a-character-string">Casting a number to a character string</h3> + +<p>The first example shows that Drill uses a default limit of 1 character if you omit the VARCHAR limit: The result is truncated to 1 character. The second example casts the same number to a VARCHAR having a limit of 3 characters: The result is a 3-character string, 456. The third example shows that you can use CHAR as an alias for VARCHAR. You can also use CHARACTER or CHARACTER VARYING.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CAST(456 as VARCHAR) FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 4 | ++------------+ +1 row selected (0.063 seconds) + +SELECT CAST(456 as VARCHAR(3)) FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 456 | ++------------+ +1 row selected (0.08 seconds) + +SELECT CAST(456 as CHAR(3)) FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 456 | ++------------+ +1 row selected (0.093 seconds) +</code></pre></div> +<h3 id="casting-from-one-numerical-type-to-another">Casting from One Numerical Type to Another</h3> + +<p>Cast an integer to a decimal.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CAST(-2147483648 AS DECIMAL(28,8)) FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| -2.147483648E9 | ++------------+ +1 row selected (0.08 seconds) +</code></pre></div> +<h2 id="casting-intervals">Casting Intervals</h2> + +<p>To cast INTERVAL data use the following syntax:</p> +<div class="highlight"><pre><code 
class="language-text" data-lang="text">CAST (column_name AS INTERVAL) +CAST (column_name AS INTERVAL DAY) +CAST (column_name AS INTERVAL YEAR) +</code></pre></div> +<p>A JSON file contains the following objects:</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">{ "INTERVALYEAR_col":"P1Y", "INTERVALDAY_col":"P1D", "INTERVAL_col":"P1Y1M1DT1H1M" } +{ "INTERVALYEAR_col":"P2Y", "INTERVALDAY_col":"P2D", "INTERVAL_col":"P2Y2M2DT2H2M" } +{ "INTERVALYEAR_col":"P3Y", "INTERVALDAY_col":"P3D", "INTERVAL_col":"P3Y3M3DT3H3M" } +</code></pre></div> +<p>The following CTAS statement shows how to cast text from a JSON file to INTERVAL data types in a Parquet table:</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE dfs.tmp.parquet_intervals AS +(SELECT cast (INTERVAL_col as interval), + cast( INTERVALYEAR_col as interval year) INTERVALYEAR_col, + cast( INTERVALDAY_col as interval day) INTERVALDAY_col +FROM `/user/root/intervals.json`); +</code></pre></div> +<!-- Text and include output --> + +<h1 id="convert_to-and-convert_from">CONVERT_TO and CONVERT_FROM</h1> + +<p>The CONVERT_TO and CONVERT_FROM functions encode and decode +data, respectively.</p> + +<h2 id="syntax">Syntax</h2> + +<p>CONVERT_TO (expression, type)</p> + +<p>You can use CONVERT functions to convert any compatible data type to any other type. HBase stores data as encoded byte arrays (VARBINARY data). To query HBase data in Drill, convert each column of an HBase table from byte arrays to an SQL data type that Drill supports when reading, and to byte arrays when writing. The CONVERT functions are more efficient than CAST when your data sources return binary data. </p> + +<h2 id="usage-notes">Usage Notes</h2> + +<p>Use the CONVERT_TO function to change the data type to bytes when sending data back to HBase from a Drill query. CONVERT_TO converts an SQL data type to complex types, including HBase byte arrays, JSON and Parquet arrays and maps.
CONVERT_FROM converts from complex types, including HBase byte arrays, JSON and Parquet arrays and maps to an SQL data type. </p> + +<h2 id="example">Example</h2> + +<p>A common use case for CONVERT_FROM is to convert complex data embedded in +an HBase column to a readable type. The following example converts VARBINARY data in col1 from an HBase or MapR-DB table to JSON data. </p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CONVERT_FROM(col1, 'JSON') +FROM hbase.table1 +... +</code></pre></div> +<h1 id="other-data-type-conversions">Other Data Type Conversions</h1> + +<p>In addition to the CAST, CONVERT_TO, and CONVERT_FROM functions, Drill supports data type conversion functions to perform the following conversions:</p> + +<ul> +<li>A timestamp, integer, decimal, or double to a character string.</li> +<li>A character string to a date.</li> +<li>A character string to a number.</li> +<li>A character string to a timestamp with time zone.</li> +<li>A decimal type to a timestamp with time zone.</li> +</ul> + +<h1 id="to_char">TO_CHAR</h1> + +<p>TO_CHAR converts a date, time, timestamp, timestamp with time zone, or numerical expression to a character string.</p> + +<h2 id="syntax">Syntax</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">TO_CHAR (expression, 'format'); +</code></pre></div> +<p><em>expression</em> is a float, integer, decimal, date, time, or timestamp expression. </p> + +<ul> +<li><em>'format'</em> is a format specifier enclosed in single quotation marks that sets a pattern for the output formatting.
</li> +</ul> + +<h2 id="usage-notes">Usage Notes</h2> + +<p>For information about specifying a format, refer to one of the following format specifier documents:</p> + +<ul> +<li><a href="http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html">Java DecimalFormat class</a> format specifiers </li> +<li><a href="http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html">Java DateTimeFormat class</a></li> +</ul> + +<h2 id="examples">Examples</h2> + +<p>Convert a float to a character string.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT TO_CHAR(125.789383, '#,###.###') FROM dfs.`/Users/Drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 125.789 | ++------------+ +</code></pre></div> +<p>Convert an integer to a character string.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT TO_CHAR(125, '#,###.###') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 125 | ++------------+ +1 row selected (0.083 seconds) +</code></pre></div> +<p>Convert a date to a character string.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT to_char((cast('2008-2-23' as date)), 'yyyy-MMM-dd') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 2008-Feb-23 | ++------------+ +</code></pre></div> +<p>Convert a time to a string.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT to_char(cast('12:20:30' as time), 'HH mm ss') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 12 20 30 | ++------------+ +1 row selected (0.07 seconds) +</code></pre></div> +<p>Convert a timestamp to a string.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT to_char(cast('2015-2-23 12:00:00' as timestamp), 'yyyy MMM dd HH:mm:ss') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | 
++------------+ +| 2015 Feb 23 12:00:00 | ++------------+ +1 row selected (0.075 seconds) +</code></pre></div> +<h1 id="to_date">TO_DATE</h1> + +<p>Converts a character string or a UNIX epoch timestamp to a date.</p> + +<h2 id="syntax">Syntax</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">TO_DATE (expression[, 'format']); +</code></pre></div> +<p><em>expression</em> is a character string enclosed in single quotation marks or a UNIX epoch timestamp not enclosed in single quotation marks. </p> + +<ul> +<li><em>'format'</em> is a format specifier enclosed in single quotation marks that sets a pattern for the output formatting. Use this option only when the expression is a character string. </li> +</ul> + +<h2 id="usage">Usage</h2> + +<p>Specify a format using patterns defined in <a href="http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html">Java DateTimeFormat class</a>.</p> + +<h2 id="examples">Examples</h2> + +<p>The first example converts a character string to a date.
The second example extracts the year to verify that Drill recognizes the date as a date type.</p> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT TO_DATE('2015-FEB-23', 'yyyy-MMM-dd') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 2015-02-23 | ++------------+ +1 row selected (0.077 seconds) + +SELECT EXTRACT(year from mydate) `extracted year` FROM (SELECT TO_DATE('2015-FEB-23', 'yyyy-MMM-dd') AS mydate FROM dfs.`/Users/drill/dummy.json`); + ++------------+ +| extracted year | ++------------+ +| 2015 | ++------------+ +1 row selected (0.128 seconds) +</code></pre></div> +<h1 id="to_number">TO_NUMBER</h1> + +<p>TO_NUMBER converts a character string to a formatted number using a format specification.</p> + +<h2 id="syntax">Syntax</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">TO_NUMBER ('string', 'format'); +</code></pre></div> +<p><em>'string'</em> is a character string enclosed in single quotation marks. </p> + +<ul> +<li><em>'format'</em> is one or more <a href="http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html">Java DecimalFormat class</a> format specifiers enclosed in single quotation marks that set a pattern for the output formatting.</li> +</ul> + +<h2 id="usage-notes">Usage Notes</h2> + +<p>The data type of the output of TO_NUMBER is numeric. You can use the following <a href="http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html">Java DecimalFormat class</a> format specifiers to set the output formatting. </p> + +<ul> +<li><p>#<br> +Digit placeholder. If a value has a digit in the position where the '#' appears in the format string, that digit appears in the output; otherwise, nothing appears in that position.</p></li> +<li><p>0<br> +Digit placeholder. If a value has a digit in the position where the '0' appears in the format string, that digit appears in the output; otherwise, a '0' appears in that position in the output.</p></li> +<li><p>.<br> +Decimal point. The first '.' character in the format string sets the location of the decimal separator in the value; Drill ignores any additional '.' 
characters.</p></li> +<li><p>,<br> +Comma grouping separator. </p></li> +<li><p>E<br> +Exponent. Separates the mantissa and exponent in scientific notation. </p></li> +</ul> + +<h2 id="examples">Examples</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT TO_NUMBER('987,966', '######') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 987.0 | ++------------+ + +SELECT TO_NUMBER('987.966', '###.###') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 987.966 | ++------------+ +1 row selected (0.063 seconds) + +SELECT TO_NUMBER('12345', '##0.##E0') FROM dfs.`/Users/drill/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 12345.0 | ++------------+ +1 row selected (0.069 seconds) +</code></pre></div> +<h1 id="to_time">TO_TIME</h1> + +<p>Converts a character string to a time using a format specifier.</p> + +<h2 id="examples">Examples</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT to_time('12:20:30', 'HH:mm:ss') FROM dfs.`/Users/khahn/Documents/test_files_source/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 12:20:30 | ++------------+ +1 row selected (0.067 seconds) +</code></pre></div> +<h1 id="to_timestamp">TO_TIMESTAMP</h1> + +<p>Converts a character string to a timestamp using a format specifier.</p> + +<h2 id="examples">Examples</h2> +<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT to_timestamp('2008-2-23 12:00:00', 'yyyy-MM-dd HH:mm:ss') FROM dfs.`/Users/khahn/Documents/test_files_source/dummy.json`; ++------------+ +| EXPR$0 | ++------------+ +| 2008-02-23 12:00:00.0 | ++------------+ +</code></pre></div> +</div> + + +<div id="footer" class="mw"> +<div class="wrapper"> +Copyright © 2012-2014 The Apache Software Foundation, licensed under the Apache License, Version 2.0.<br> +Apache and the Apache feather logo are trademarks of The Apache Software Foundation. 
Other names appearing on the site may be trademarks of their respective owners.<br/><br/> +</div> +</div> + +<script> +(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ +(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), +m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) +})(window,document,'script','//www.google-analytics.com/analytics.js','ga'); + +ga('create', 'UA-53379651-1', 'auto'); +ga('send', 'pageview'); +</script> + +</body> +</html> Modified: drill/site/trunk/content/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x/index.html (original) +++ drill/site/trunk/content/drill/docs/configuring-odbc-connections-for-linux-and-mac-os-x/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/driver-configuration-options">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/advanced-properties">Next</a></p> - -<p>You can use a connection string to connect to your data source. For a list of +<div class="int_text" align="left"><p>You can use a connection string to connect to your data source. 
For a list of all the properties that you can use in connection strings, see <a href="/docs/driver-configuration-options">Driver Configuration Options</a>.</p> Modified: drill/site/trunk/content/drill/docs/connect-to-a-data-source/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/connect-to-a-data-source/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/connect-to-a-data-source/index.html (original) +++ drill/site/trunk/content/drill/docs/connect-to-a-data-source/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/installing-drill-in-distributed-mode">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/storage-plugin-registration">Next</a></p> - -<p>A storage plugin is an interface for connecting to a data source to read and write data. Apache Drill connects to a data source, such as a file on the file system or a Hive metastore, through a storage plugin. When you execute a query, Drill gets the plugin name you provide in FROM clause of your query or from the default you specify in the USE.<plugin name> command that precedes the query. +<div class="int_text" align="left"><p>A storage plugin is an interface for connecting to a data source to read and write data. Apache Drill connects to a data source, such as a file on the file system or a Hive metastore, through a storage plugin. When you execute a query, Drill gets the plugin name you provide in the FROM clause of your query or from the default you specify in the USE.<plugin name> command that precedes the query.</p> <p>In addition to the connection string, the storage plugin configures the workspace and file formats for reading data, as described in subsequent sections. 
</p> Modified: drill/site/trunk/content/drill/docs/core-modules-within-a-drillbit/index.html URL: http://svn.apache.org/viewvc/drill/site/trunk/content/drill/docs/core-modules-within-a-drillbit/index.html?rev=1670235&r1=1670234&r2=1670235&view=diff ============================================================================== --- drill/site/trunk/content/drill/docs/core-modules-within-a-drillbit/index.html (original) +++ drill/site/trunk/content/drill/docs/core-modules-within-a-drillbit/index.html Tue Mar 31 01:20:42 2015 @@ -67,9 +67,7 @@ </div> -<div class="int_text" align="left"><p><a href="/docs/architectural-overview">Previous</a><code> </code><a href="/docs">Back to Table of Contents</a><code> </code><a href="/docs/architectural-highlights">Next</a></p> - -<p>The following image represents components within each Drillbit:</p> +<div class="int_text" align="left"><p>The following image represents components within each Drillbit:</p> <p><img src="/docs/img/DrillbitModules.png" alt="drill query flow"></p>
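The number and date patterns documented in the TO_CHAR, TO_DATE, and TO_NUMBER sections of this commit come from Java's DecimalFormat and Joda-Time's DateTimeFormat classes. For a quick sanity check of a simple pattern without a running Drillbit, the behavior can be approximated in Python; this is an analogy only, not Drill's Java-based implementation, and the helper names are illustrative:

```python
from datetime import datetime

def to_char_number(value: float) -> str:
    # Approximate the DecimalFormat pattern '#,###.###': comma grouping
    # and at most three fractional digits. Python keeps trailing zeros,
    # so strip them (and a bare '.') to mimic the '#' placeholder.
    return format(value, ",.3f").rstrip("0").rstrip(".")

def to_char_date(value: datetime) -> str:
    # Approximate the Joda pattern 'yyyy-MMM-dd' with strftime
    # (assumes an English locale for the abbreviated month name).
    return value.strftime("%Y-%b-%d")

print(to_char_number(125.789383))           # 125.789, as in the TO_CHAR example
print(to_char_number(125))                  # 125
print(to_char_date(datetime(2008, 2, 23)))  # 2008-Feb-23
```

The stripped trailing zeros reflect the difference between '#' (omit absent digits) and '0' (pad with zeros) described in the TO_NUMBER usage notes.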