This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/drill-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 0162cb1  Website update
0162cb1 is described below

commit 0162cb1a30d026dbb3bb9a68fd879872ce534af8
Author: Volodymyr Vysotskyi <[email protected]>
AuthorDate: Sun Mar 22 09:14:57 2020 +0200

    Website update
---
 docs/analyze-table-refresh-metadata/index.html     |  15 +-
 docs/create-or-replace-schema/index.html           |  65 ++++++-
 docs/image-metadata-format-plugin/index.html       |  72 ++++----
 docs/plugin-configuration-basics/index.html        |  24 +++
 .../querying-a-file-system-introduction/index.html |   6 +-
 docs/querying-avro-files/index.html                |  20 ++-
 docs/rdbms-storage-plugin/index.html               |  71 +++++---
 docs/using-drill-metastore/index.html              | 189 +++++++++++++++++++--
 feed.xml                                           |   4 +-
 images/7671b34d6e8a4d050f75278f10f1a08.jpg         | Bin 0 -> 45877 bytes
 10 files changed, 375 insertions(+), 91 deletions(-)

diff --git a/docs/analyze-table-refresh-metadata/index.html 
b/docs/analyze-table-refresh-metadata/index.html
index f69d53b..49a580b 100644
--- a/docs/analyze-table-refresh-metadata/index.html
+++ b/docs/analyze-table-refresh-metadata/index.html
@@ -1337,7 +1337,7 @@
 
     </div>
 
-     Mar 3, 2020
+     Mar 17, 2020
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1370,10 +1370,17 @@ The name of the table or directory for which Drill will 
collect table metadata.
 <p><em>table({table function name}(parameters))</em>
 Table function parameters. This syntax is only available since Drill 1.18.
 Example of table function parameters usage:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">table(dfs.`table_name` (type =&gt; &#39;parquet&#39;, 
autoCorrectCorruptDates =&gt; true))
+<div class="highlight"><pre><code class="language-text" data-lang="text"> 
table(dfs.tmp.`text_nation` (type=&gt;&#39;text&#39;, 
fieldDelimiter=&gt;&#39;,&#39;, extractHeader=&gt;true,
+    schema=&gt;&#39;inline=(
+        `n_nationkey` INT not null,
+        `n_name` VARCHAR not null,
+        `n_regionkey` INT not null,
+        `n_comment` VARCHAR not null)&#39;
+    ))
 </code></pre></div>
-<p>For detailed information, please refer to
- <a 
href="/docs/plugin-configuration-basics/#using-the-formats-attributes-as-table-function-parameters">Using
 the Formats Attributes as Table Function Parameters</a></p>
+<p>Please refer to
+ <a 
href="/docs/plugin-configuration-basics/#specifying-the-schema-as-table-function-parameter">Specifying
 the Schema as Table Function Parameter</a>
+ for details.</p>
 
 <p><em>COLUMNS (col1, col2, ...)</em>
 Optional names of the column(s) for which Drill will compute and store 
statistics. The stored schema will include all
diff --git a/docs/create-or-replace-schema/index.html 
b/docs/create-or-replace-schema/index.html
index fb2182c..1ba71da 100644
--- a/docs/create-or-replace-schema/index.html
+++ b/docs/create-or-replace-schema/index.html
@@ -1343,14 +1343,21 @@
 
     <div class="int_text" align="left">
       
-        <p>Starting in Drill 1.16, you can define a schema for text files 
using the CREATE OR REPLACE SCHEMA command. Schema is only available for tables 
represented by a directory. To use this feature with a single file, put the 
file inside a directory, and use the directory name to query the table.</p>
+        <p>Starting in Drill 1.16, you can define a schema for text files. 
Drill places a schema file in the root directory of your text table, so the 
schema feature works only for tables defined as a directory. If you have a 
single-file table, simply create a directory to hold that file and the schema 
file.</p>
 
-<p>In Drill 1.16, this feature is in preview status and disabled by default. 
You can enable this feature by setting the 
<code>exec.storage.enable_v3_text_reader</code> and 
<code>store.table.use_schema_file</code> system/session options to true. The 
feature is currently only available for text (CSV) files.</p>
+<p>In Drill 1.17, the provided schema feature is disabled by default. Enable 
it by setting the <code>store.table.use_schema_file</code> system/session 
option to true:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER 
SESSION SET `store.table.use_schema_file` = true
+</code></pre></div>
+<p>Next, you create the schema using the <code>CREATE OR REPLACE SCHEMA</code> 
command, as described in the <a href="#syntax">Syntax</a> section.</p>
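+
+<p>For example, a minimal command for a hypothetical table 
+ <code>dfs.tmp.`text_table`</code> (the column names here are illustrative):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">CREATE OR REPLACE SCHEMA
+(id INT, amount DOUBLE)
+FOR TABLE dfs.tmp.`text_table`
+</code></pre></div>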
 
 <p>Running this command generates a hidden <code>.drill.schema</code> file in 
the table’s root directory. The <code>.drill.schema</code> file stores the 
schema definition in JSON format. Alternatively, you can create the schema file 
manually. If created manually, the file content must comply with the structure 
recognized by Drill.</p>
 
 <p>The end of this topic provides <a 
href="/docs/create-or-replace-schema/#examples">examples</a> that show how the 
feature is used. You may want to review this section before reading the 
reference material.  </p>
 
+<p>As described in <a 
href="/docs/plugin-configuration-basics/#specifying-the-schema-as-table-function-parameter">Specifying
 the Schema as Table Function Parameter</a>,
+ you can also use a table function to apply a schema to individual queries. Or, 
you can place the
+ table function within a view, and query the table through the view.</p>
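+
+<p>For example, a sketch of the view approach (the view name and inline 
+ schema here are hypothetical):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">CREATE OR REPLACE VIEW dfs.tmp.`text_table_view` AS
+SELECT * FROM table(dfs.tmp.`text_table`(
+    schema =&gt; &#39;inline=(col1 DATE)&#39;))
+</code></pre></div>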
+
 <p>Please post your experience and suggestions to the &quot;<a 
href="[email protected]">user</a>&quot; mailing list.</p>
 
 <h2 id="syntax">Syntax</h2>
@@ -1405,7 +1412,8 @@ List of properties as key-value pairs in  parenthesis.  
</p>
 <p>In Drill 1.16, you must enable the following options for Drill to use the 
schema created during query execution: </p>
 
 <p><strong>exec.storage.enable_v3_text_reader</strong><br>
-Enables the preview &quot;version 3&quot; of the text (CSV) file reader. The 
V3 text reader is the only reader in Drill 1.16 that supports file schemas.  
</p>
+Enables the preview &quot;version 3&quot; of the text (CSV) file reader. The 
V3 text reader is the only reader in Drill 1.16 that supports file schemas.<br>
+In Drill 1.17, this option is enabled by default.</p>
 
 <p><strong>store.table.use_schema_file</strong><br>
 Enables the use of the schema file mechanism.</p>
@@ -1544,14 +1552,24 @@ A property that sets how Drill handles blank column 
values. Accepts the followin
 <h3 id="general-information">General Information</h3>
 
 <ul>
-<li>Schema provisioning only works with tables defined as directories because 
Drill must have a place to store the schema file. The directory can contain one 
or more files.<br></li>
+<li>Schema provisioning works only with file system (dfs-based) storage 
plugins. It places a <code>.drill.schema</code> file in the root 
folder of tables defined as a directory. The directory can contain any number 
of files (even just one) in addition to the schema file.</li>
 <li>Text files must have headers. The default extension for delimited text 
files with headers is <code>.csvh</code>. Note that the column names that 
appear in the headers match column definitions in the schema.<br></li>
 <li>You do not have to enumerate all columns in a file when creating a schema. 
You can indicate the columns of interest only.<br></li>
 <li>Columns in the defined schema do not have to be in the same order as in 
the data file.<br></li>
 <li>Column names must match. The case can differ, for example “name” and 
“NAME” are acceptable.<br></li>
-<li>Queries on columns with data types that cannot be converted fail with a 
<code>DATA_READ_ERROR</code>.<br></li>
 </ul>
 
+<p>Drill is unique in that it infers the table schema at runtime. However, 
sometimes schema inference can fail when Drill
+ cannot infer the correct types. For example, Drill treats all fields in a 
text file as text. Drill may not be able
+ to determine the type of fields in JSON files if the fields are missing or 
set to <code>null</code> in the first few records
+ in the file. Drill issues a <code>DATA_READ_ERROR</code> when runtime schema 
inference fails.</p>
+
+<p>When Drill cannot correctly infer the schema, you can instead use your 
knowledge of the file layout to tell Drill
+ the proper schema to use. Schema provisioning lets you specify that 
schema explicitly.
+ You can provide a schema for the file as a whole using the <a 
href="#syntax"><code>CREATE OR REPLACE SCHEMA</code> command</a> or for
+ a single query using a <a 
href="/docs/plugin-configuration-basics/#table-function-parameters">table 
function</a>.
+ Please see <a 
href="/docs/plugin-configuration-basics/#specifying-the-schema-as-table-function-parameter">Specifying
 the Schema as Table Function Parameter</a> for details.</p>
+
 <h3 id="schema-mode-column-order">Schema Mode (Column Order)</h3>
 
 <p>The schema mode determines the set of columns returned for wildcard (*) 
queries and the  ordering of those columns. The mode is set through the 
<code>drill.strict</code> property. You can set this property to true (strict) 
or false (not strict). If you do not indicate the mode, the default is false 
(not strict).  </p>
@@ -1807,7 +1825,10 @@ select * from dfs.tmp.`text_blank`;
 
 <h2 id="limitations">Limitations</h2>
 
-<p>This feature is currently in the alpha phase (preview, experimental) for 
Drill 1.16 and only applies to text (CSV) files in this release. You must 
enable this feature through the <code>exec.storage.enable_v3_text_reader</code> 
and <code>store.table.use_schema_file</code> system/session options.</p>
+<p>Schema provisioning works only with selected readers. If you develop a format 
plugin, you must use the
+ Enhanced Vector Framework (rather than the &quot;classic&quot; 
techniques) to enable schema support.</p>
+
+<p>To use schema provisioning, you must first enable it with the 
<code>store.table.use_schema_file</code> option.</p>
 
 <h2 id="examples">Examples</h2>
 
@@ -1923,7 +1944,7 @@ id,amount,start_date
 </code></pre></div>
 <h3 id="describing-schema-for-a-table">Describing Schema for a Table</h3>
 
-<p>After you create schema, you can examine the schema using the DESCRIBE 
SCHEMA FOR TABLE command. Schema can print to JSON or STATEMENT format. JSON 
format is the default if no format is indicated in the query. Schema displayed 
in JSON format is the same as the JSON format in the <code>.drill.schema</code> 
file.</p>
+<p>You can verify the provided schema using the <a 
href="#related-commands"><code>DESCRIBE SCHEMA FOR TABLE</code> command</a>. 
This command can display the schema in two formats. The <code>JSON</code> format 
is the same as the contents of the <code>.drill.schema</code> file stored in 
your table directory.</p>
 <div class="highlight"><pre><code class="language-text" 
data-lang="text">describe schema for table dfs.tmp.`text_table` as JSON;
 
+----------------------------------------------------------------------------------+
 |                                      schema                                  
    |
@@ -1943,7 +1964,7 @@ id,amount,start_date
 } |
 
+----------------------------------------------------------------------------------+
 </code></pre></div>
-<p>STATEMENT format displays the schema in a form compatible with the CREATE 
OR REPLACE SCHEMA command such that it can easily be copied, modified, and 
executed.</p>
+<p>You can also use the <code>STATEMENT</code> format to recover the SQL 
statement that recreates the schema. You can easily copy or edit this 
statement to change the schema, or reuse it for other files.</p>
 <div class="highlight"><pre><code class="language-text" 
data-lang="text">describe schema for table dfs.tmp.`text_table` as statement;
 +--------------------------------------------------------------------------+
 |                                  schema                                  |
@@ -1956,9 +1977,35 @@ FOR TABLE dfs.tmp.`text_table`
  |
 +--------------------------------------------------------------------------+
 </code></pre></div>
+<h3 id="altering-schema-for-a-table">Altering Schema for a Table</h3>
+
+<p>Use the <code>ALTER SCHEMA</code> command to update your table schema. The 
command can add or replace columns,
+or update properties for the table or individual columns. Syntax:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER 
SCHEMA
+(FOR TABLE dfs.tmp.nation | PATH &#39;/tmp/schema.json&#39;)
+ADD [OR REPLACE]
+[COLUMNS (col1 int, col2 varchar)]
+[PROPERTIES (&#39;prop1&#39;=&#39;val1&#39;, &#39;prop2&#39;=&#39;val2&#39;)]
+</code></pre></div>
+<p><code>ALTER SCHEMA</code> modifies an existing schema file; it will fail if 
the schema file does not exist.
+(Use <code>CREATE SCHEMA</code> to create a new schema file.)</p>
+
+<p>To prevent accidental changes, the <code>ALTER SCHEMA ... ADD</code> 
command will fail if the requested column or property
+ already exists. Use the <code>OR REPLACE</code> clause to modify an existing 
column or property.</p>
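+
+<p>For example, assuming a schema file already exists for the 
+ <code>dfs.tmp.nation</code> table shown in the syntax above:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SCHEMA
+FOR TABLE dfs.tmp.nation
+ADD OR REPLACE
+COLUMNS (col1 INT)
+</code></pre></div>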
+
+<p>You can remove columns or properties with the <code>ALTER SCHEMA ... 
REMOVE</code> command:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">ALTER 
SCHEMA
+(FOR TABLE dfs.tmp.nation | PATH &#39;/tmp/schema.json&#39;)
+REMOVE
+[COLUMNS (col1 int, col2 varchar)]
+[PROPERTIES (&#39;prop1&#39;=&#39;val1&#39;, &#39;prop2&#39;=&#39;val2&#39;)]
+</code></pre></div>
+<p>The command fails if the schema file does not exist. The command silently 
ignores a request to remove a column or
+ property that does not exist.</p>
+
 <h3 id="dropping-schema-for-a-table">Dropping Schema for a Table</h3>
 
-<p>You can easily drop the schema for a table using the DROP SCHEMA [IF 
EXISTS] FOR TABLE `table_name` command, as shown:</p>
+<p>You can easily drop the schema for a table using the <code>DROP SCHEMA [IF 
EXISTS] FOR TABLE `table_name`</code> command, as shown:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">use 
dfs.tmp;
 +------+-------------------------------------+
 |  ok  |               summary               |
diff --git a/docs/image-metadata-format-plugin/index.html 
b/docs/image-metadata-format-plugin/index.html
index 1be2252..2c6edb8 100644
--- a/docs/image-metadata-format-plugin/index.html
+++ b/docs/image-metadata-format-plugin/index.html
@@ -1335,7 +1335,7 @@
 
     </div>
 
-     Jun 13, 2018
+     Mar 17, 2020
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1404,47 +1404,47 @@ Camera Raw: ARW (Sony), CRW/CR2 (Canon), NEF (Nikon), 
ORF (Olympus), RAF (FujiFi
 
 <h2 id="examples">Examples</h2>
 
+<p>To follow along with the examples, start by downloading the following image 
to your <code>/tmp</code> directory.</p>
+
+<p><a href="/images/7671b34d6e8a4d050f75278f10f1a08.jpg"><img 
src="/images/7671b34d6e8a4d050f75278f10f1a08.jpg" alt="image"></a></p>
+
 <p>A Drill query on a JPEG file with the property descriptive: true</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   0: 
jdbc:drill:zk=local&gt; select FileName, * from 
dfs.`4349313028_f69ffa0257_o.jpg`;  
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+------+------+----------+------------+------------------+-----+---------------+-----------+------+---------+----------+
-   | FileName | FileSize | FileDateTime | Format | PixelWidth | PixelHeight | 
BitsPerPixel | DPIWidth | DPIHeight | Orientaion | ColorMode | HasAlpha | 
Duration | VideoCodec | FrameRate | AudioCodec | AudioSampleSize | 
AudioSampleRate | JPEG | JFIF | ExifIFD0 | ExifSubIFD | Interoperability | GPS 
| ExifThumbnail | Photoshop | IPTC | Huffman | FileType |
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+------+------+----------+------------+------------------+-----+---------------+-----------+------+---------+----------+
-   | 4349313028_f69ffa0257_o.jpg | 257213 bytes | Fri Mar 09 12:09:34 +08:00 
2018 | JPEG | 1199 | 800 | 24 | 96 | 96 | Unknown (0) | RGB | false | 00:00:00 
| Unknown | 0 | Unknown | 0 | 0 | 
{&quot;CompressionType&quot;:&quot;Baseline&quot;,&quot;DataPrecision&quot;:&quot;8
 bits&quot;,&quot;ImageHeight&quot;:&quot;800 
pixels&quot;,&quot;ImageWidth&quot;:&quot;1199 
pixels&quot;,&quot;NumberOfComponents&quot;:&quot;3&quot;,&quot;Component1&quot;:&quot;Y
 component: Quantization table 0, Samp [...]
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+------+------+----------+------------+------------------+-----+---------------+-----------+------+---------+----------+
+<div class="highlight"><pre><code class="language-text" data-lang="text">    
select FileName, * from dfs.tmp.`7671b34d6e8a4d050f75278f10f1a08.jpg`;
+    
+-------------------------------------+-------------+---------------------------------+--------+------------+-------------+--------------+-------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+-------------------------------------------
 [...]
+    |              FileName               |  FileSize   |          
FileDateTime           | Format | PixelWidth | PixelHeight | BitsPerPixel | 
Orientaion  | DPIWidth | DPIHeight | ColorMode | HasAlpha | Duration | 
VideoCodec | FrameRate | AudioCodec | AudioSampleSize | AudioSampleRate |       
                                JPEG                                       |    
                             JpegComment                                  |     
                                  JFIF [...]
+    
+-------------------------------------+-------------+---------------------------------+--------+------------+-------------+--------------+-------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+-------------------------------------------
 [...]
+    | 7671b34d6e8a4d050f75278f10f1a08.jpg | 45877 bytes | Tue Mar 17 21:37:09 
+02:00 2020 | JPEG   | 604        | 453         | 24           | Unknown (0) | 
0        | 0         | RGB       | false    | 00:00:00 | Unknown    | 0         
| Unknown    | 0               | 0               | 
{&quot;CompressionType&quot;:&quot;Baseline&quot;,&quot;DataPrecision&quot;:&quot;8
 bits&quot;,&quot;ImageHeight&quot;:&quot;453 
pixels&quot;,&quot;ImageWidth&quot;:&quot;604 pixels&quot;,&quot;NumberOfCo 
[...]
+    
+-------------------------------------+-------------+---------------------------------+--------+------------+-------------+--------------+-------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+-------------------------------------------
 [...]
 </code></pre></div>
 <p>A Drill query on a JPEG file with the property descriptive: false    </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   0: 
jdbc:drill:zk=local&gt; select FileName, * from 
dfs.`4349313028_f69ffa0257_o.jpg`;  
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+------+------+----------+------------+------------------+-----+---------------+-----------+------+---------+----------+
-   | FileName | FileSize | FileDateTime | Format | PixelWidth | PixelHeight | 
BitsPerPixel | DPIWidth | DPIHeight | Orientaion | ColorMode | HasAlpha | 
Duration | VideoCodec | FrameRate | AudioCodec | AudioSampleSize | 
AudioSampleRate | JPEG | JFIF | ExifIFD0 | ExifSubIFD | Interoperability | GPS 
| ExifThumbnail | Photoshop | IPTC | Huffman | FileType |
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+------+------+----------+------------+------------------+-----+---------------+-----------+------+---------+----------+
-   | 4349313028_f69ffa0257_o.jpg | 257213 | 2018-03-09 04:09:34.0 | JPEG | 
1199 | 800 | 24 | 96.0 | 96.0 | 0 | RGB | false | 0 | Unknown | 0.0 | Unknown | 
0 | 0.0 | 
{&quot;CompressionType&quot;:0,&quot;DataPrecision&quot;:8,&quot;ImageHeight&quot;:800,&quot;ImageWidth&quot;:1199,&quot;NumberOfComponents&quot;:3,&quot;Component1&quot;:{&quot;ComponentId&quot;:1,&quot;HorizontalSamplingFactor&quot;:2,&quot;VerticalSamplingFactor&quot;:2,&quot;QuantizationTableNumber&quot;:0},&quot;Componen
 [...]
-   
+----------+----------+--------------+--------+------------+-------------+--------------+----------+-----------+------------+-----------+----------+----------+------------+-----------+------------+-----
  
+<div class="highlight"><pre><code class="language-text" data-lang="text">    
select FileName, * from dfs.tmp.`7671b34d6e8a4d050f75278f10f1a08.jpg`;
+    
+-------------------------------------+----------+-----------------------+--------+------------+-------------+--------------+------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+---------------------------------------------------------
 [...]
+    |              FileName               | FileSize |     FileDateTime      | 
Format | PixelWidth | PixelHeight | BitsPerPixel | Orientaion | DPIWidth | 
DPIHeight | ColorMode | HasAlpha | Duration | VideoCodec | FrameRate | 
AudioCodec | AudioSampleSize | AudioSampleRate |                                
       JPEG                                       |                             
    JpegComment                                  |                              
         JFIF               [...]
+    
+-------------------------------------+----------+-----------------------+--------+------------+-------------+--------------+------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+---------------------------------------------------------
 [...]
+    | 7671b34d6e8a4d050f75278f10f1a08.jpg | 45877    | 2020-03-17 19:37:09.0 | 
JPEG   | 604        | 453         | 24           | 0          | 0.0      | 0.0  
     | RGB       | false    | 0        | Unknown    | 0.0       | Unknown    | 
0               | 0.0             | 
{&quot;CompressionType&quot;:0,&quot;DataPrecision&quot;:8,&quot;ImageHeight&quot;:453,&quot;ImageWidth&quot;:604,&quot;NumberOfComponents&quot;:3,&quot;Component1&quot;:{&quot;ComponentId&quot;:1,&quot;HorizontalSampl
 [...]
+    
+-------------------------------------+----------+-----------------------+--------+------------+-------------+--------------+------------+----------+-----------+-----------+----------+----------+------------+-----------+------------+-----------------+-----------------+----------------------------------------------------------------------------------+------------------------------------------------------------------------------+---------------------------------------------------------
 [...]
 </code></pre></div>
 <p>Retrieving GPS location data from the Exif metadata for the use of GIS 
functions.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   0: 
jdbc:drill:zk=local&gt; select t.GPS.GPSLatitude as lat, t.GPS.GPSLongitude as 
lon from dfs.`4349313028_f69ffa0257_o.jpg` t;
-   +--------------------+----------------------+
-   |        lat         |         lon          |
-   +--------------------+----------------------+
-   | 47.53777313232332  | -122.03510284423795  |
-   +--------------------+----------------------+  
+<div class="highlight"><pre><code class="language-text" data-lang="text">    
select t.GPS.GPSLatitude as lat, t.GPS.GPSLongitude as lon from 
dfs.tmp.`7671b34d6e8a4d050f75278f10f1a08.jpg` t;
+    +-------------------+--------------------+
+    |        lat        |        lon         |
+    +-------------------+--------------------+
+    | 50.46355547157135 | 30.508668422733077 |
+    +-------------------+--------------------+
 </code></pre></div>
-<p>Retrieving the images that are larger than 640 x 480 pixels.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">   0: 
jdbc:drill:zk=local&gt; select FileName, PixelWidth, PixelHeight from 
dfs.`/images/*.png` where PixelWidth &gt;= 640 and PixelHeight &gt;= 480;
-   +--------------------------+-------------+--------------+
-   |         FileName         | PixelWidth  | PixelHeight  |
-   +--------------------------+-------------+--------------+
-   | 1.png                    | 2788        | 1758         |
-   | 1500x500.png             | 1500        | 500          |
-   | 2.png                    | 2788        | 1758         |
-   | 9784873116914_1.png      | 874         | 1240         |
-   | Driven-Example-Load.png  | 1208        | 970          |
-   | features-diagram.png     | 1170        | 644          |
-   | hal1.png                 | 1223        | 772          |
-   | hal2.png                 | 1184        | 768          |
-   | image-3.png              | 1200        | 771          |
-   | image-4.png              | 1200        | 771          |
-   | image002.png             | 1689        | 695          |
-   +--------------------------+-------------+--------------+  
+<p>Download all <code>png</code> images from the <a 
href="/images/logos/">Logos</a> page and place them in the <code>/tmp/logos</code>
+ directory to follow the example below.</p>
+
+<p>An example query that retrieves the images smaller than 640 x 480 
pixels:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">    
select FileName, PixelWidth, PixelHeight from dfs.tmp.`logos` where PixelWidth 
&lt; 640 and PixelHeight &lt; 480;
+    +---------------------+------------+-------------+
+    |      FileName       | PixelWidth | PixelHeight |
+    +---------------------+------------+-------------+
+    | redbusLogo.png      | 500        | 325         |
+    | option3-io-logo.png | 194        | 76          |
+    | sanchezLogo.png     | 235        | 85          |
+    | IORA_NUS.png        | 375        | 253         |
+    +---------------------+------------+-------------+
 </code></pre></div>
 <h2 id="supported-file-formats">Supported File Formats</h2>
 
diff --git a/docs/plugin-configuration-basics/index.html 
b/docs/plugin-configuration-basics/index.html
index df10163..8ad80e9 100644
--- a/docs/plugin-configuration-basics/index.html
+++ b/docs/plugin-configuration-basics/index.html
@@ -1474,6 +1474,8 @@
 
 <p>You set the formats attributes, such as skipFirstLine, in the 
<code>formats</code> area of the storage plugin configuration. When setting 
attributes for text files, such as CSV, you also need to set the 
<code>sys.options</code> property 
<code>exec.storage.enable_new_text_reader</code> to true (the default). For 
more information and examples of using formats for text files, see <a 
href="/docs/text-files-csv-tsv-psv/">&quot;Text Files: CSV, TSV, PSV&quot;</a>. 
 </p>
 
+<h1 id="table-function-parameters">Table Function Parameters</h1>
+
 <h2 id="using-the-formats-attributes-as-table-function-parameters">Using the 
Formats Attributes as Table Function Parameters</h2>
 
 <p>In Drill version 1.4 and later, you can also set the formats attributes 
defined above on a per query basis. To pass parameters to the format plugin, 
use the table function syntax:  </p>
@@ -1488,6 +1490,28 @@ fieldDelimiter =&gt; &#39;,&#39;, extractHeader =&gt; 
true))</code></p>
 
 <p>For more information about format plugin configuration see <a 
href="/docs/text-files-csv-tsv-psv/">&quot;Text Files: CSV, TSV, PSV&quot;</a>. 
 </p>
 
+<h2 id="specifying-the-schema-as-table-function-parameter">Specifying the 
Schema as Table Function Parameter</h2>
+
+<p>Table schemas normally reside in the root folder of each table. You can 
also specify a schema for an individual query
+ by passing the <code>SCHEMA</code> property to a table function. You 
can combine the schema with format plugin properties.
 The syntax is similar to that of <a 
href="/docs/create-or-replace-schema/#syntax">CREATE OR REPLACE SCHEMA</a>:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT a, b FROM TABLE (table_name(
+SCHEMA =&gt; &#39;inline=(column_name data_type [nullability] [format] 
[default] [properties {prop=&#39;val&#39;, ...}])&#39;))
+</code></pre></div>
+<p>You can specify the schema inline within the query. For example:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">select * from table(dfs.tmp.`text_table`(
+schema =&gt; &#39;inline=(col1 date properties {`drill.format` = 
`yyyy-MM-dd`}) 
+properties {`drill.strict` = `false`}&#39;))
+</code></pre></div>
+<p>Alternatively, you can specify the path to a schema file. For 
example:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">select * from table(dfs.tmp.`text_table`(schema =&gt; 
&#39;path=`/tmp/my_schema`&#39;))
+</code></pre></div>
+<p>The following example demonstrates applying a provided schema along with 
format plugin table function parameters.
+Suppose that you have a CSV file with headers and a custom extension, 
<code>csvh-test</code>. You can combine the schema with format plugin 
properties:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">select * from table(dfs.tmp.`cars.csvh-test`(type =&gt; 
&#39;text&#39;, 
+fieldDelimiter =&gt; &#39;,&#39;, extractHeader =&gt; true,
+schema =&gt; &#39;inline=(col1 date)&#39;))
+</code></pre></div>
 <h2 id="using-other-attributes">Using Other Attributes</h2>
 
 <p>The configuration of other attributes, such as 
<code>size.calculator.enabled</code> in the <code>hbase</code> plugin and 
<code>configProps</code> in the <code>hive</code> plugin, are 
implementation-dependent and beyond the scope of this document.</p>
diff --git a/docs/querying-a-file-system-introduction/index.html 
b/docs/querying-a-file-system-introduction/index.html
index 68e62fe..64c2a2d 100644
--- a/docs/querying-a-file-system-introduction/index.html
+++ b/docs/querying-a-file-system-introduction/index.html
@@ -1372,9 +1372,9 @@ more information.</p>
 <li>Structured data files:
 
 <ul>
-<li>Avro (type: avro) (This file type is experimental. See <a 
href="/docs/querying-avro-files/">Querying Avro Files</a>.)</li>
-<li>JSON (type: json)</li>
-<li>Parquet (type: parquet)</li>
+<li><a href="/docs/querying-avro-files/">Avro</a> (type: avro)</li>
+<li><a href="/docs/querying-json-files/">JSON</a> (type: json)</li>
+<li><a href="/docs/querying-parquet-files/">Parquet</a> (type: parquet)</li>
 </ul></li>
 </ul>
 
diff --git a/docs/querying-avro-files/index.html 
b/docs/querying-avro-files/index.html
index 980203a..d7d42d3 100644
--- a/docs/querying-avro-files/index.html
+++ b/docs/querying-avro-files/index.html
@@ -1343,8 +1343,26 @@
 
     <div class="int_text" align="left">
       
-        <p>The Avro format is experimental at this time. There are known 
issues when querying Avro files.  </p>
+        <p>Drill supports files in the <a 
href="https://avro.apache.org/";>Avro</a> format.
+Starting from Drill 1.18, the Avro format supports the <a 
href="/docs/create-or-replace-schema/#usage-notes">Schema provisioning</a> 
feature.</p>
 
+<h4 id="preparing-example-data">Preparing example data</h4>
+
+<p>To follow along with this example, download <a 
href="https://github.com/apache/drill/blob/master/exec/java-exec/src/test/resources/avro/map_string_to_long.avro";>sample
 data file</a>
+ to your <code>/tmp</code> directory.</p>
+
+<h4 id="selecting-data-from-avro-files">Selecting data from Avro files</h4>
+
+<p>We can query all data from the <code>map_string_to_long.avro</code> 
file:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">select * from dfs.tmp.`map_string_to_long.avro`
+</code></pre></div>
+<p>The query returns the following results:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">+-----------------+
+|      mapf       |
++-----------------+
+| {&quot;ki&quot;:1,&quot;ka&quot;:2} |
++-----------------+
+</code></pre></div>
     
       
         <div class="doc-nav">
diff --git a/docs/rdbms-storage-plugin/index.html 
b/docs/rdbms-storage-plugin/index.html
index 4333039..0484c87 100644
--- a/docs/rdbms-storage-plugin/index.html
+++ b/docs/rdbms-storage-plugin/index.html
@@ -1335,7 +1335,7 @@
 
     </div>
 
-     Dec 8, 2018
+     Mar 17, 2020
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1347,16 +1347,27 @@
 
 <h2 id="using-the-rdbms-storage-plugin">Using the RDBMS Storage Plugin</h2>
 
-<p>Drill is designed to work with any relational datastore that provides a 
JDBC driver. Drill is actively tested with Postgres, MySQL, Oracle, MSSQL and 
Apache Derby. For each system, you will follow three basic steps for setup:</p>
+<p>Drill is designed to work with any relational datastore that provides a 
JDBC driver. Drill is actively tested with
+ Postgres, MySQL, Oracle, MSSQL, Apache Derby and H2. For each system, you 
will follow three basic steps for setup:</p>
 
 <ol>
 <li><a href="/docs/installing-drill-in-embedded-mode">Install Drill</a>, if 
you do not already have it installed.</li>
-<li>Copy your database&#39;s JDBC driver into the jars/3rdparty directory. 
(You&#39;ll need to do this on every node.)<br></li>
+<li>Copy your database&#39;s JDBC driver into the <code>jars/3rdparty</code> 
directory. (You&#39;ll need to do this on every node.)<br></li>
 <li>Restart Drill. See <a 
href="/docs/starting-drill-in-distributed-mode/">Starting Drill in Distributed 
Mode</a>.</li>
-<li>Add a new storage configuration to Drill through the Web UI. Example 
configurations for <a href="#Example-Oracle-Configuration">Oracle</a>, <a 
href="#Example-SQL-Server-Configuration">SQL Server</a>, <a 
href="#Example-MySQL-Configuration">MySQL</a> and <a 
href="#Example-Postgres-Configuration">Postgres</a> are provided below.</li>
+<li>Add a new storage configuration to Drill through the Web UI. Example 
configurations for <a href="#example-oracle-configuration">Oracle</a>, <a 
href="#example-sql-server-configuration">SQL Server</a>, <a 
href="#example-mysql-configuration">MySQL</a> and <a 
href="#example-postgres-configuration">Postgres</a> are provided below.</li>
 </ol>
 
-<p><strong>Example: Working with MySQL</strong></p>
+<h2 
id="setting-data-source-parameters-in-the-storage-plugin-configuration">Setting 
data source parameters in the storage plugin configuration</h2>
+
+<p>Starting from Drill 1.18.0, the new JDBC storage plugin configuration property 
<code>sourceParameters</code> allows
+ setting the data source parameters described in <a 
href="https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby";>HikariCP</a>.
+ Parameter names that are invalid, or parameter values of an incorrect data type 
or with illegal values, will prevent
+ the storage plugin from starting up.</p>
+
+<p>See the <a 
href="#example-of-postgres-configuration-with-sourceparameters-configuration-property">Example
 of Postgres Configuration with <code>sourceParameters</code> configuration 
property</a>
+section for a usage example.</p>
+
+<h3 id="example-working-with-mysql">Example: Working with MySQL</h3>
 
 <p>Drill communicates with MySQL through the JDBC driver using the 
configuration that you specify in the Web UI or through the <a 
href="/docs/plugin-configuration-basics/#storage-plugin-rest-api">REST API</a>. 
 </p>
 
@@ -1375,7 +1386,7 @@ Each configuration registered with Drill must have a 
distinct name. Names are ca
 
 <div class="admonition note">
 <p class="first admonition-title">Note</p>
-<p class="last">The URL differs depending on your installation and 
configuration. See the [example configurations](#Example-Configurations) below 
for examples.  </p>
+<p class="last">The URL differs depending on your installation and 
configuration. See the example configurations below for examples.  </p>
 </div>  </li>
 <li><p>Click <strong>Create</strong>.  </p></li>
 <li><p>In Configuration, set the required properties using JSON formatting as 
shown in the following example. Change the properties to match your 
environment.  </p>
@@ -1404,17 +1415,17 @@ Each configuration registered with Drill must have a 
distinct name. Names are ca
 </code></pre></div>
 <h2 id="example-configurations">Example Configurations</h2>
 
-<p><strong>Example Oracle Configuration</strong></p>
+<h3 id="example-oracle-configuration">Example Oracle Configuration</h3>
 
 <p>Download and install Oracle&#39;s Thin <a 
href="http://www.oracle.com/technetwork/database/features/jdbc/default-2280470.html";>ojdbc7.12.1.0.2.jar</a>
 driver and copy it to all nodes in your cluster.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   type: &quot;jdbc&quot;,
   enabled: true,
   driver: &quot;oracle.jdbc.OracleDriver&quot;,
-  url:&quot;jdbc:oracle:thin:user/[email protected]:1521/ORCL&quot;
+  url: &quot;jdbc:oracle:thin:user/[email protected]:1521/ORCL&quot;
 }
 </code></pre></div>
-<p><strong>Example SQL Server Configuration</strong></p>
+<h3 id="example-sql-server-configuration">Example SQL Server Configuration</h3>
 
 <p>For SQL Server, Drill has been tested with Microsoft&#39;s  <a 
href="https://www.microsoft.com/en-US/download/details.aspx?id=11774";>sqljdbc41.4.2.6420.100.jar</a>
 driver. Copy this jar file to all Drillbits. </p>
 
@@ -1426,26 +1437,27 @@ Each configuration registered with Drill must have a 
distinct name. Names are ca
   type: &quot;jdbc&quot;,
   enabled: true,
   driver: &quot;com.microsoft.sqlserver.jdbc.SQLServerDriver&quot;,
-  url:&quot;jdbc:sqlserver://1.2.3.4:1433;databaseName=mydatabase&quot;,
-  username:&quot;user&quot;,
-  password:&quot;password&quot;
+  url: &quot;jdbc:sqlserver://1.2.3.4:1433;databaseName=mydatabase&quot;,
+  username: &quot;user&quot;,
+  password: &quot;password&quot;
 }
 </code></pre></div>
-<p><strong>Example MySQL Configuration</strong></p>
+<h3 id="example-mysql-configuration">Example MySQL Configuration</h3>
 
 <p>For MySQL, Drill has been tested with MySQL&#39;s <a 
href="http://dev.mysql.com/downloads/connector/j/";>mysql-connector-java-5.1.37-bin.jar</a>
 driver. Copy this to all nodes.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   type: &quot;jdbc&quot;,
   enabled: true,
   driver: &quot;com.mysql.jdbc.Driver&quot;,
-  url:&quot;jdbc:mysql://1.2.3.4&quot;,
-  username:&quot;user&quot;,
-  password:&quot;password&quot;
+  url: &quot;jdbc:mysql://1.2.3.4&quot;,
+  username: &quot;user&quot;,
+  password: &quot;password&quot;
 }  
 </code></pre></div>
-<p><strong>Example Postgres Configuration</strong></p>
+<h3 id="example-postgres-configuration">Example Postgres Configuration</h3>
 
-<p>For Postgres, Drill has been tested with Postgres&#39;s <a 
href="http://central.maven.org/maven2/org/postgresql/postgresql/";>9.1-901-1.jdbc4</a>
 driver (any recent driver should work). Copy this driver file to all nodes.</p>
+<p>Drill is tested with the Postgres driver version <a 
href="https://mvnrepository.com/artifact/org.postgresql/postgresql";>42.2.11</a> 
(any recent driver should work).
+ Download and copy this driver jar to the <code>jars/3rdparty</code> folder on 
all nodes.</p>
 
 <div class="admonition note">
   <p class="first admonition-title">Note</p>
@@ -1455,9 +1467,9 @@ Each configuration registered with Drill must have a 
distinct name. Names are ca
   type: &quot;jdbc&quot;,
   enabled: true,
   driver: &quot;org.postgresql.Driver&quot;,
-  url:&quot;jdbc:postgresql://1.2.3.4/mydatabase&quot;,
-  username:&quot;user&quot;,
-  password:&quot;password&quot;
+  url: &quot;jdbc:postgresql://1.2.3.4/mydatabase&quot;,
+  username: &quot;user&quot;,
+  password: &quot;password&quot;
 }  
 </code></pre></div>
 <p>You may need to qualify a table name with a schema name for Drill to return 
data. For example, when querying a table named ips, you must issue the query 
against public.ips, as shown in the following example:  </p>
@@ -1485,6 +1497,23 @@ Each configuration registered with Drill must have a 
distinct name. Names are ca
    | 2  | 1.2.3.5  |
    +-------+----------+
 </code></pre></div>
+<h3 
id="example-of-postgres-configuration-with-sourceparameters-configuration-property">Example
 of Postgres Configuration with <code>sourceParameters</code> configuration 
property</h3>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{
+  type: &quot;jdbc&quot;,
+  enabled: true,
+  driver: &quot;org.postgresql.Driver&quot;,
+  url: &quot;jdbc:postgresql://1.2.3.4/mydatabase?defaultRowFetchSize=2&quot;,
+  username: &quot;user&quot;,
+  password: &quot;password&quot;,
+  sourceParameters: {
+    &quot;minimumIdle&quot;: 5,
+    &quot;autoCommit&quot;: false,
+    &quot;connectionTestQuery&quot;: &quot;select version() as 
postgresql_version&quot;,
+    &quot;dataSource.cachePrepStmts&quot;: true,
+    &quot;dataSource.prepStmtCacheSize&quot;: 250
+  }
+}  
+</code></pre></div>
     
       
         <div class="doc-nav">
diff --git a/docs/using-drill-metastore/index.html 
b/docs/using-drill-metastore/index.html
index c034043..f2a4b31 100644
--- a/docs/using-drill-metastore/index.html
+++ b/docs/using-drill-metastore/index.html
@@ -1337,7 +1337,7 @@
 
     </div>
 
-     Mar 3, 2020
+     Mar 17, 2020
 
     <link href="/css/docpage.css" rel="stylesheet" type="text/css">
 
@@ -1345,11 +1345,14 @@
       
         <p>Drill 1.17 introduces the Drill Metastore which stores the table 
schema and table statistics. Statistics allow Drill to better create optimal 
query plans.</p>
 
-<p>The Metastore is a Beta feature; it is subject to change. We encourage you 
to try it and provide feedback.
-Because the Metastore is in Beta, the SQL commands and Metastore formats may 
change in the next release.
+<p>The Metastore is a beta feature and is subject to change.
+In particular, the SQL commands and Metastore format may change in future 
releases based on user feedback.
 <div class="admonition note">
   <p class="first admonition-title">Note</p>
-  <p class="last">In Drill 1.17, this feature is supported for Parquet tables 
only and is disabled by default.  </p>
+  <p class="last">
+In Drill 1.17, the Metastore supports only tables in Parquet format. The feature 
is disabled by default.
+In Drill 1.18, the Metastore supports all format plugins of the file system 
storage plugin (except MaprDB). The feature is still disabled by default.
+  </p>
 </div></p>
 
 <h2 id="drill-metastore-introduction">Drill Metastore introduction</h2>
@@ -1458,13 +1461,13 @@ refer to <a 
href="/docs/create-or-replace-schema/#usage-notes">Schema provisioni
 
 <p>The detailed metadata schema is described <a 
href="https://github.com/apache/drill/tree/master/metastore/metastore-api#metastore-tables";>here</a>.
 You can try out the metadata to get a sense of what is available by using the
- <a 
href="/docs/using-drill-metastore/#inspect-the-metastore-using-information_schema-tables">Inspect
 the Metastore using <code>INFORMATION_SCHEMA</code> tables</a> tutorial.</p>
+ <a href="#inspect-the-metastore-using-information_schema-tables">Inspect the 
Metastore using <code>INFORMATION_SCHEMA</code> tables</a> tutorial.</p>
 
 <p>Every table described by the Metastore may be a bare file or one or more 
files that reside in one or more directories.</p>
 
 <p>If a table consists of a single directory or file, then it is 
non-partitioned. The single directory can contain any number of files.
 Larger tables tend to have subdirectories. Each subdirectory is a partition 
and such a table is called &quot;partitioned&quot;.
-Please refer to <a 
href="/docs/using-drill-metastore/#exposing-drill-metastore-metadata-through-information_schema-tables">Exposing
 Drill Metastore metadata through <code>INFORMATION_SCHEMA</code> tables</a>
+Please refer to <a 
href="#exposing-drill-metastore-metadata-through-information_schema-tables">Exposing
 Drill Metastore metadata through <code>INFORMATION_SCHEMA</code> tables</a>
 for information on how to query partition and segment metadata.</p>
 
 <p>A traditional database divides tables into schemas and tables.
@@ -1472,6 +1475,54 @@ Drill can connect to any number of data sources, each of 
which may have its own
 As a result, the Metastore labels tables with a combination of (plugin 
configuration name, workspace name, table name).
Note that before renaming any of these items, you must delete the table&#39;s 
Metadata entry and recreate it after renaming.</p>
 
+<h3 id="using-schema-provisioning-feature-with-drill-metastore">Using schema 
provisioning feature with Drill Metastore</h3>
+
+<p>The Drill Metastore holds both schema and statistics information for a 
table. The <code>ANALYZE</code> command can infer the table
+ schema for well-defined tables (such as many Parquet tables). Some tables are 
too complex or variable for Drill&#39;s
+ schema inference to work well. For example, JSON tables often omit fields or 
have long runs of nulls so that Drill
+ cannot determine column types. In these cases, you can specify the correct 
schema based on your knowledge of the
+ table&#39;s structure. You specify a schema in the <code>ANALYZE</code> 
command using the 
+ <a 
href="/docs/plugin-configuration-basics/#specifying-the-schema-as-table-function-parameter">Schema
 provisioning</a> syntax.</p>
+
+<p>Please refer to <a 
href="#provisioning-schema-for-drill-metastore">Provisioning schema for Drill 
Metastore</a> for examples of usage.</p>
+
+<h3 id="schema-priority">Schema priority</h3>
+
+<p>Drill uses metadata during both query planning and execution. Drill gives 
you multiple ways to provide a schema.</p>
+
+<p>When you run the <code>ANALYZE TABLE</code> command, Drill determines the 
table schema to store in the Metastore from the following sources, in priority 
order:</p>
+
+<ul>
+<li>A schema provided in the table function.</li>
+<li>A schema file, created with <code>CREATE OR REPLACE SCHEMA</code>, in the 
table root directory.</li>
+<li>Schema inferred from file data.</li>
+</ul>
+
+<p>To plan a query, Drill requires information about your file partitions (if 
any) and about row and column cardinality.
+Drill does not use the provided schema for planning because the schema does not 
contain this metadata. Instead, at plan time Drill
+obtains metadata from one of the following, again in priority order:</p>
+
+<ul>
+<li>The Drill Metastore, if available.</li>
+<li>Inferred from file data. Drill scans the table&#39;s directory structure 
to identify partitions.
+Drill estimates row counts based on the file size. Drill uses default 
estimates for column cardinality.</li>
+</ul>
+
+<p>At query execution time, a schema tells Drill the shape of your data and 
how that data should be converted to Drill&#39;s SQL types.
+Your choices for execution-time schema, in priority order, are:</p>
+
+<ul>
+<li>With a table function:
+
+<ul>
+<li>specify an inline schema</li>
+<li>specify the path to a schema file</li>
+</ul></li>
+<li>With a schema file, created with <code>CREATE OR REPLACE SCHEMA</code>, in 
the table root directory.</li>
+<li>With the schema from the Drill Metastore, if available.</li>
+<li>With a schema inferred directly from file data.</li>
+</ul>
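+<p>As an illustration of the second option, a schema file can be created with the 
<code>CREATE OR REPLACE SCHEMA</code> command
+ (the table name <code>dfs.tmp.`text_table`</code> and its columns are 
hypothetical here; see <a href="/docs/create-or-replace-schema/">CREATE OR 
REPLACE SCHEMA</a> for the full syntax):</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">CREATE OR REPLACE SCHEMA (col1 DATE, col2 VARCHAR NOT NULL) FOR 
TABLE dfs.tmp.`text_table`;
+</code></pre></div>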
+
 <h3 id="related-session-system-options">Related Session/System Options</h3>
 
 <p>The Metastore provides a number of options to fit your environment. The 
default options are fine in most cases.
@@ -1534,7 +1585,7 @@ When you do <code>ANALYZE TABLE</code> a second time, 
Drill will attempt to upda
  actual metadata from the Metastore where possible.</p>
 
 <p>The command will return the following message if table statistics are 
up-to-date:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill (dfs.tmp)&gt; analyze table lineitem refresh 
metadata;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE `lineitem` REFRESH METADATA;
 +-------+---------------------------------------------------------+
 |  ok   |                         summary                         |
 +-------+---------------------------------------------------------+
@@ -1737,6 +1788,12 @@ Description of Metastore-specific columns:</p>
 <li>Disabled by default. You must enable this feature through the 
<code>metastore.enabled</code> system/session option.</li>
 </ul>
 
+<h3 id="limitations-of-the-1-18-release">Limitations of the 1.18 release</h3>
+
+<ul>
+<li>The Metastore supports all file system storage plugin formats except 
MaprDB.</li>
+</ul>
+
 <h3 id="cheat-sheet-of-analyze-table-commands">Cheat sheet of <code>ANALYZE 
TABLE</code> commands</h3>
 
 <ul>
@@ -1792,7 +1849,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 
 <p>Run the <a href="/docs/analyze-table-refresh-metadata">ANALYZE TABLE</a> 
command on the table whose metadata should
 be computed and stored in the Drill Metastore:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill&gt; ANALYZE TABLE dfs.tmp.lineitem REFRESH 
METADATA;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE dfs.tmp.lineitem REFRESH METADATA;
 +------+-------------------------------------------------------------+
 |  ok  |                           summary                           |
 +------+-------------------------------------------------------------+
@@ -1810,7 +1867,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 <h3 id="perform-incremental-analysis">Perform incremental analysis</h3>
 
 <p>Rerun <a href="/docs/analyze-table-refresh-metadata">ANALYZE TABLE</a> 
command on the <code>lineitem</code> table:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill&gt; ANALYZE TABLE dfs.tmp.lineitem REFRESH 
METADATA;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE dfs.tmp.lineitem REFRESH METADATA;
 +-------+---------------------------------------------------------+
 |  ok   |                         summary                         |
 +-------+---------------------------------------------------------+
@@ -1821,7 +1878,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 <h3 id="inspect-the-metastore-using-information_schema-tables">Inspect the 
Metastore using INFORMATION_SCHEMA tables</h3>
 
 <p>Run the following query to inspect <code>lineitem</code> table metadata 
from <code>TABLES</code> table stored in the Metastore:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill&gt; SELECT * FROM INFORMATION_SCHEMA.`TABLES` 
WHERE TABLE_NAME=&#39;lineitem&#39;;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT * FROM INFORMATION_SCHEMA.`TABLES` WHERE 
TABLE_NAME=&#39;lineitem&#39;;
 
+---------------+--------------+------------+------------+--------------+---------------+----------+-----------------------+
 | TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | TABLE_SOURCE |   
LOCATION    | NUM_ROWS |  LAST_MODIFIED_TIME   |
 
+---------------+--------------+------------+------------+--------------+---------------+----------+-----------------------+
@@ -1830,7 +1887,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 1 row selected (0.157 seconds)
 </code></pre></div>
 <p>To obtain columns with their types and descriptions within the 
<code>lineitem</code> table, run the following query:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill&gt; SELECT * FROM INFORMATION_SCHEMA.`COLUMNS` 
WHERE TABLE_NAME=&#39;lineitem&#39;;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT * FROM INFORMATION_SCHEMA.`COLUMNS` WHERE 
TABLE_NAME=&#39;lineitem&#39;;
 
+---------------+--------------+------------+-----------------+------------------+----------------+-------------+-------------------+--------------------------+------------------------+-------------------+-------------------------+---------------+--------------------+---------------+--------------------+-------------+---------------+-----------+--------------+---------------------------------------------+-----------+-------------------+-----------+
 | TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME |   COLUMN_NAME   | 
ORDINAL_POSITION | COLUMN_DEFAULT | IS_NULLABLE |     DATA_TYPE     | 
CHARACTER_MAXIMUM_LENGTH | CHARACTER_OCTET_LENGTH | NUMERIC_PRECISION | 
NUMERIC_PRECISION_RADIX | NUMERIC_SCALE | DATETIME_PRECISION | INTERVAL_TYPE | 
INTERVAL_PRECISION | COLUMN_SIZE | COLUMN_FORMAT | NUM_NULLS |   MIN_VAL    |   
                MAX_VAL                   |    NDV    | EST_NUM_NON_NULLS | 
IS_NESTED |
 
+---------------+--------------+------------+-----------------+------------------+----------------+-------------+-------------------+--------------------------+------------------------+-------------------+-------------------------+---------------+--------------------+---------------+--------------------+-------------+---------------+-----------+--------------+---------------------------------------------+-----------+-------------------+-----------+
@@ -1843,7 +1900,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 17 rows selected (0.187 seconds)
 </code></pre></div>
 <p>The sample <code>lineitem</code> table has two partitions. The 
<code>PARTITIONS</code> table contains an entry for each directory:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill (information_schema)&gt; SELECT * FROM 
INFORMATION_SCHEMA.`PARTITIONS` WHERE TABLE_NAME=&#39;lineitem&#39;;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT * FROM INFORMATION_SCHEMA.`PARTITIONS` WHERE 
TABLE_NAME=&#39;lineitem&#39;;
 
+---------------+--------------+------------+--------------+---------------+---------------------+------------------+-----------------+------------------+-----------------------+
 | TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | METADATA_KEY | METADATA_TYPE | 
METADATA_IDENTIFIER | PARTITION_COLUMN | PARTITION_VALUE |     LOCATION     |  
LAST_MODIFIED_TIME   |
 
+---------------+--------------+------------+--------------+---------------+---------------------+------------------+-----------------+------------------+-----------------------+
@@ -1857,7 +1914,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 <p>Once we are done exploring metadata we can drop the metadata for the 
<code>lineitem</code> table.</p>
 
 <p>Table metadata may be dropped using <code>ANALYZE TABLE DROP 
METADATA</code> command:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill&gt; ANALYZE TABLE dfs.tmp.lineitem DROP METADATA;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE dfs.tmp.lineitem DROP METADATA;
 +------+----------------------------------------+
 |  ok  |                summary                 |
 +------+----------------------------------------+
@@ -1873,7 +1930,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
  selected columns: those actually used in the <code>WHERE</code> clause.</p>
 
 <p>To compute and store metadata for several columns in the Metastore, use the 
following command:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill (information_schema)&gt; ANALYZE TABLE 
dfs.tmp.lineitem COLUMNS(l_orderkey, l_partkey) REFRESH METADATA;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE dfs.tmp.lineitem COLUMNS(l_orderkey, l_partkey) 
REFRESH METADATA;
 +------+-------------------------------------------------------------+
 |  ok  |                           summary                           |
 +------+-------------------------------------------------------------+
@@ -1883,7 +1940,7 @@ cp TPCH/lineitem /tmp/lineitem/s2
 </code></pre></div>
 <p>Now check that metadata is collected only for the specified columns 
(<code>MIN_VAL</code>, <code>MAX_VAL</code>, <code>NDV</code>, etc.), but all
  columns are present:</p>
-<div class="highlight"><pre><code class="language-text" 
data-lang="text">apache drill (information_schema)&gt; SELECT * FROM 
INFORMATION_SCHEMA.`COLUMNS` WHERE TABLE_NAME=&#39;lineitem&#39;;
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT * FROM INFORMATION_SCHEMA.`COLUMNS` WHERE 
TABLE_NAME=&#39;lineitem&#39;;
 
+---------------+--------------+------------+-----------------+------------------+----------------+-------------+-------------------+--------------------------+------------------------+-------------------+-------------------------+---------------+--------------------+---------------+--------------------+-------------+---------------+-----------+---------+---------+-----------+-------------------+-----------+
 | TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME |   COLUMN_NAME   | 
ORDINAL_POSITION | COLUMN_DEFAULT | IS_NULLABLE |     DATA_TYPE     | 
CHARACTER_MAXIMUM_LENGTH | CHARACTER_OCTET_LENGTH | NUMERIC_PRECISION | 
NUMERIC_PRECISION_RADIX | NUMERIC_SCALE | DATETIME_PRECISION | INTERVAL_TYPE | 
INTERVAL_PRECISION | COLUMN_SIZE | COLUMN_FORMAT | NUM_NULLS | MIN_VAL | 
MAX_VAL |    NDV    | EST_NUM_NON_NULLS | IS_NESTED |
 
+---------------+--------------+------------+-----------------+------------------+----------------+-------------+-------------------+--------------------------+------------------------+-------------------+-------------------------+---------------+--------------------+---------------+--------------------+-------------+---------------+-----------+---------+---------+-----------+-------------------+-----------+
@@ -1895,6 +1952,108 @@ cp TPCH/lineitem /tmp/lineitem/s2
 
+---------------+--------------+------------+-----------------+------------------+----------------+-------------+-------------------+--------------------------+------------------------+-------------------+-------------------------+---------------+--------------------+---------------+--------------------+-------------+---------------+-----------+---------+---------+-----------+-------------------+-----------+
 17 rows selected (0.183 seconds)
 </code></pre></div>
+<h3 id="provisioning-schema-for-drill-metastore">Provisioning schema for Drill 
Metastore</h3>
+
+<h4 id="directory-and-file-setup">Directory and File Setup</h4>
+
+<p>Ensure you have configured the file system storage plugin as described here:
+ <a 
href="/docs/file-system-storage-plugin/#connecting-drill-to-a-file-system">Connecting
 Drill to a File System</a>.</p>
+
+<p>Set <code>store.format</code> to <code>csvh</code>:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SET 
`store.format`=&#39;csvh&#39;;
++------+-----------------------+
+|  ok  |        summary        |
++------+-----------------------+
+| true | store.format updated. |
++------+-----------------------+
+</code></pre></div>
+<p>Create a text table based on the sample <code>/tpch/nation.parquet</code> 
table from the <code>cp</code> plugin:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">CREATE TABLE dfs.tmp.text_nation AS
+  (SELECT *
+   FROM cp.`/tpch/nation.parquet`);
++----------+---------------------------+
+| Fragment | Number of records written |
++----------+---------------------------+
+| 0_0      | 25                        |
++----------+---------------------------+
+</code></pre></div>
+<p>Query the table <code>text_nation</code>:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT typeof(n_nationkey),
+       typeof(n_name),
+       typeof(n_regionkey),
+       typeof(n_comment)
+FROM dfs.tmp.text_nation
+LIMIT 1;
++---------+---------+---------+---------+
+| EXPR$0  | EXPR$1  | EXPR$2  | EXPR$3  |
++---------+---------+---------+---------+
+| VARCHAR | VARCHAR | VARCHAR | VARCHAR |
++---------+---------+---------+---------+
+</code></pre></div>
+<p>Notice that the query plan contains a group scan with <code>usedMetastore = 
false</code>:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">00-00 
   Screen : rowType = RecordType(ANY EXPR$0, ANY EXPR$1, ANY EXPR$2, ANY 
EXPR$3): rowcount = 1.0, cumulative cost = {25.1 rows, 109.1 cpu, 2247.0 io, 
0.0 network, 0.0 memory}, id = 160
+00-01      Project(EXPR$0=[TYPEOF($0)], EXPR$1=[TYPEOF($1)], 
EXPR$2=[TYPEOF($2)], EXPR$3=[TYPEOF($3)]) : rowType = RecordType(ANY EXPR$0, 
ANY EXPR$1, ANY EXPR$2, ANY EXPR$3): rowcount = 1.0, cumulative cost = {25.0 
rows, 109.0 cpu, 2247.0 io, 0.0 network, 0.0 memory}, id = 159
+00-02        SelectionVectorRemover : rowType = RecordType(ANY n_nationkey, 
ANY n_name, ANY n_regionkey, ANY n_comment): rowcount = 1.0, cumulative cost = 
{24.0 rows, 93.0 cpu, 2247.0 io, 0.0 network, 0.0 memory}, id = 158
+00-03          Limit(fetch=[1]) : rowType = RecordType(ANY n_nationkey, ANY 
n_name, ANY n_regionkey, ANY n_comment): rowcount = 1.0, cumulative cost = 
{23.0 rows, 92.0 cpu, 2247.0 io, 0.0 network, 0.0 memory}, id = 157
+00-04            Scan(table=[[dfs, tmp, text_nation]], 
groupscan=[EasyGroupScan [... schema=null, usedMetastore=false...
+</code></pre></div>
+<h4 id="compute-table-metadata-and-store-in-the-drill-metastore">Compute table 
metadata and store in the Drill Metastore</h4>
+
+<p>Enable Drill Metastore:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SET 
`metastore.enabled` = true;
+</code></pre></div>
+<p>Specify the table schema when running the <code>ANALYZE</code> query:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">ANALYZE TABLE table(dfs.tmp.`text_nation` 
(type=&gt;&#39;text&#39;, fieldDelimiter=&gt;&#39;,&#39;, 
extractHeader=&gt;true,
+    schema=&gt;&#39;inline=(
+        `n_nationkey` INT not null,
+        `n_name` VARCHAR not null,
+        `n_regionkey` INT not null,
+        `n_comment` VARCHAR not null)&#39;
+    )) REFRESH METADATA;
++------+----------------------------------------------------------------+
+|  ok  |                            summary                             |
++------+----------------------------------------------------------------+
+| true | Collected / refreshed metadata for table [dfs.tmp.text_nation] |
++------+----------------------------------------------------------------+
+</code></pre></div>
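Beyond the SQL shell, the same `ANALYZE` statement can be submitted through Drill's REST API (`POST /query.json` on the web UI port, 8047 by default). Below is a minimal Python sketch that builds the request payload; the host URL and the `build_analyze_request` helper are illustrative assumptions, not part of Drill itself:

```python
import json

# Assumed Drill web UI address; adjust for your deployment.
DRILL_URL = "http://localhost:8047/query.json"

def build_analyze_request(table_ref: str, schema_ddl: str) -> dict:
    """Build the JSON payload Drill's REST query endpoint expects:
    {"queryType": "SQL", "query": "..."}."""
    query = (
        "ANALYZE TABLE table({t} (type=>'text', fieldDelimiter=>',', "
        "extractHeader=>true, schema=>'{s}')) REFRESH METADATA"
    ).format(t=table_ref, s=schema_ddl)
    return {"queryType": "SQL", "query": query}

payload = build_analyze_request(
    "dfs.tmp.`text_nation`",
    "inline=(`n_nationkey` INT not null, `n_name` VARCHAR not null, "
    "`n_regionkey` INT not null, `n_comment` VARCHAR not null)",
)
print(json.dumps(payload)[:40])
# Submitting requires a running Drillbit, e.g. with the requests library:
#   requests.post(DRILL_URL, json=payload).json()
```

The payload shape mirrors the SQL statement shown above; only the transport differs.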
+<h4 id="inspect-the-metastore-using-information_schema-tables">Inspect the 
Metastore using INFORMATION_SCHEMA tables</h4>
+
+<p>Run the following query to inspect the <code>text_nation</code> table schema 
stored in the Metastore:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT COLUMN_NAME, DATA_TYPE FROM 
INFORMATION_SCHEMA.`COLUMNS` WHERE TABLE_NAME=&#39;text_nation&#39;;
++-------------+-------------------+
+| COLUMN_NAME |     DATA_TYPE     |
++-------------+-------------------+
+| n_nationkey | INTEGER           |
+| n_name      | CHARACTER VARYING |
+| n_regionkey | INTEGER           |
+| n_comment   | CHARACTER VARYING |
++-------------+-------------------+
+</code></pre></div>
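Note that `INFORMATION_SCHEMA` reports SQL-standard type names, while `typeof()` (used in the next query) returns Drill's internal names. A small lookup capturing the correspondence visible in this example; the mapping covers only the two types that appear here:

```python
# SQL-standard names (as reported by INFORMATION_SCHEMA.COLUMNS)
# mapped to Drill's internal type names (as returned by typeof()).
SQL_TO_DRILL = {
    "INTEGER": "INT",
    "CHARACTER VARYING": "VARCHAR",
}

for sql_name, drill_name in SQL_TO_DRILL.items():
    print(f"{sql_name:<17} -> {drill_name}")
```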
+<p>Verify that the schema is applied to the table:</p>
+<div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT typeof(n_nationkey),
+       typeof(n_name),
+       typeof(n_regionkey),
+       typeof(n_comment)
+FROM dfs.tmp.text_nation
+LIMIT 1;
++--------+---------+--------+---------+
+| EXPR$0 | EXPR$1  | EXPR$2 | EXPR$3  |
++--------+---------+--------+---------+
+| INT    | VARCHAR | INT    | VARCHAR |
++--------+---------+--------+---------+
+</code></pre></div><div class="highlight"><pre><code class="language-text" 
data-lang="text">SELECT SUM(n_nationkey) FROM dfs.tmp.`text_nation`;
++--------+
+| EXPR$0 |
++--------+
+| 300    |
++--------+
+</code></pre></div>
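As a sanity check on the aggregate above: the standard TPC-H `nation` table has 25 rows with `n_nationkey` running from 0 through 24, which accounts for the sum of 300 (assuming the sample file holds the standard TPC-H nation data):

```python
# 25 nations keyed 0..24, so the sum is 24 * 25 / 2 = 300
expected = sum(range(25))
print(expected)  # 300
```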
+<p>The query plan contains the schema from the Metastore and a group scan with 
<code>usedMetastore = true</code>:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">00-00 
   Screen : rowType = RecordType(ANY EXPR$0): rowcount = 1.0, cumulative cost = 
{45.1 rows, 287.1 cpu, 2247.0 io, 0.0 network, 0.0 memory}, id = 3129
+00-01      Project(EXPR$0=[$0]) : rowType = RecordType(ANY EXPR$0): rowcount = 
1.0, cumulative cost = {45.0 rows, 287.0 cpu, 2247.0 io, 0.0 network, 0.0 
memory}, id = 3128
+00-02        StreamAgg(group=[{}], EXPR$0=[SUM($0)]) : rowType = 
RecordType(ANY EXPR$0): rowcount = 1.0, cumulative cost = {44.0 rows, 286.0 
cpu, 2247.0 io, 0.0 network, 0.0 memory}, id = 3127
+00-03          Scan(table=[[dfs, tmp, text_nation]], groupscan=[EasyGroupScan 
... schema=..., usedMetastore=true]]) ...
+</code></pre></div>
     
       
         <div class="doc-nav">
diff --git a/feed.xml b/feed.xml
index 847eaac..e4afe40 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 10 Mar 2020 12:34:25 +0200</pubDate>
-    <lastBuildDate>Tue, 10 Mar 2020 12:34:25 +0200</lastBuildDate>
+    <pubDate>Sun, 22 Mar 2020 09:09:26 +0200</pubDate>
+    <lastBuildDate>Sun, 22 Mar 2020 09:09:26 +0200</lastBuildDate>
     <generator>Jekyll v2.5.2</generator>
     
       <item>
diff --git a/images/7671b34d6e8a4d050f75278f10f1a08.jpg 
b/images/7671b34d6e8a4d050f75278f10f1a08.jpg
new file mode 100644
index 0000000..ea16a44
Binary files /dev/null and b/images/7671b34d6e8a4d050f75278f10f1a08.jpg differ
