This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 9cd43de2 deploy: b5dd27f42f0789bc67721f36ff01876363985d91
9cd43de2 is described below
commit 9cd43de23c2e660a87f95c7e0ec0623332714289
Author: pvary <[email protected]>
AuthorDate: Fri Dec 16 18:20:39 2022 +0000
deploy: b5dd27f42f0789bc67721f36ff01876363985d91
---
docs/1.1.0/docssearch.json | 2 +-
docs/1.1.0/hive/index.html | 16 ++++++++++++----
docs/1.1.0/index.html | 2 +-
docs/1.1.0/index.xml | 4 ++--
4 files changed, 16 insertions(+), 8 deletions(-)
diff --git a/docs/1.1.0/docssearch.json b/docs/1.1.0/docssearch.json
index aab060e4..57d43f3a 100644
--- a/docs/1.1.0/docssearch.json
+++ b/docs/1.1.0/docssearch.json
@@ -1 +1 @@
-[{"categories":null,"content":" Getting Started The latest version of Iceberg
is 1.1.0.\nSpark is currently the most feature-rich compute engine for Iceberg
operations. We recommend you get started with Spark to understand Iceberg
concepts and features with examples. You can also view documentation for using
Iceberg with other compute engines under the Engines tab.\nUsing Iceberg in
Spark 3 To use Iceberg in a Spark shell, use the --packages
option:\nspark-shell --packages org.apache.i [...]
\ No newline at end of file
+[{"categories":null,"content":" Getting Started The latest version of Iceberg
is 1.1.0.\nSpark is currently the most feature-rich compute engine for Iceberg
operations. We recommend you get started with Spark to understand Iceberg
concepts and features with examples. You can also view documentation for using
Iceberg with other compute engines under the Engines tab.\nUsing Iceberg in
Spark 3 To use Iceberg in a Spark shell, use the --packages
option:\nspark-shell --packages org.apache.i [...]
\ No newline at end of file
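The `--packages` coordinate in the search-index snippet above is cut off by the mail wrapper. As a hedged sketch only (assuming Spark 3.3 with Scala 2.12; the artifact suffix differs for other Spark/Scala builds), the full invocation for Iceberg 1.1.0 would resemble:

```
spark-shell --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.1.0
```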
diff --git a/docs/1.1.0/hive/index.html b/docs/1.1.0/hive/index.html
index 437ff00f..e224c148 100644
--- a/docs/1.1.0/hive/index.html
+++ b/docs/1.1.0/hive/index.html
@@ -14,7 +14,8 @@
<i class="fa fa-chevron-down"></i></a></li><div id=Integrations
class=collapse><ul class=sub-menu><li><a href=../aws/>AWS</a></li><li><a
href=../dell/>Dell</a></li><li><a href=../jdbc/>JDBC</a></li><li><a
href=../nessie/>Nessie</a></li></ul></div><li><a class="chevron-toggle
collapsed" data-toggle=collapse data-parent=full href=#API><span>API</span>
<i class="fa fa-chevron-right"></i>
<i class="fa fa-chevron-down"></i></a></li><div id=API class=collapse><ul
class=sub-menu><li><a href=../java-api-quickstart/>Java
Quickstart</a></li><li><a href=../api/>Java API</a></li><li><a
href=../custom-catalog/>Java Custom Catalog</a></li></ul></div><li><a
href=https://iceberg.apache.org/docs/1.1.0/../../javadoc/latest><span>Javadoc</span></a></li></div></div><div
id=content class=markdown-body><div class=margin-for-toc><h1
id=hive>Hive</h1><p>Iceberg supports reading and writing I [...]
-a <a
href=https://cwiki.apache.org/confluence/display/Hive/StorageHandlers>StorageHandler</a>.</p><h2
id=feature-support>Feature support</h2><p>Iceberg compatibility with Hive 2.x
and Hive 3.1.2/3 supports the following features:</p><ul><li>Creating a
table</li><li>Dropping a table</li><li>Reading a table</li><li>Inserting into a
table (INSERT INTO)</li></ul><div class=warning>DML operations work only with
the MapReduce execution engine.</div><p>With Hive version 4.0.0-alpha-1 and above,
+a <a
href=https://cwiki.apache.org/confluence/display/Hive/StorageHandlers>StorageHandler</a>.</p><h2
id=feature-support>Feature support</h2><p>Iceberg compatibility with Hive 2.x
and Hive 3.1.2/3 supports the following features:</p><ul><li>Creating a
table</li><li>Dropping a table</li><li>Reading a table</li><li>Inserting into a
table (INSERT INTO)</li></ul><div class=warning>DML operations work only with
the MapReduce execution engine.</div><p>With Hive version 4.0.0-alpha-2 and above,
+the Iceberg integration when using HiveCatalog supports the following
additional features:</p><ul><li>Altering a table to expire
snapshots.</li><li>Creating a table like an existing table (CTLT
table)</li><li>Supporting Parquet compression types via table properties <a
href=https://spark.apache.org/docs/2.4.3/sql-data-sources-parquet.html#configuration>Compression
types</a></li><li>Altering a table metadata location</li><li>Supporting table
rollback</li><li>Honouring sort orders on ex [...]
the Iceberg integration when using HiveCatalog supports the following
additional features:</p><ul><li>Creating an Iceberg identity-partitioned
table</li><li>Creating an Iceberg table with any partition spec, including the
various transforms supported by Iceberg</li><li>Creating a table from an
existing table (CTAS table)</li><li>Altering a table while keeping Iceberg and
Hive schemas in sync</li><li>Altering the partition schema (updating
columns)</li><li>Altering the partition schema by [...]
Hive’s classpath. These are provided by the
<code>iceberg-hive-runtime</code> jar file. For example, if using the Hive
shell, this
can be achieved by issuing a statement like so:</p><pre tabindex=0><code>add
jar /path/to/iceberg-hive-runtime.jar;
@@ -61,12 +62,13 @@ The default is Parquet:</p><div class=highlight><pre
tabindex=0 style=color:#f8f
</span></span></code></pre></div><h4 id=partitioned-tables>Partitioned
tables</h4><p>You can create Iceberg partitioned tables using a command
familiar to those who create non-Iceberg tables:</p><div class=highlight><pre
tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> x (i
int) PARTITIONED <sp [...]
</span></span></code></pre></div><div class=info>The resulting table does not
create partitions in HMS, but instead converts partition data into Iceberg
identity partitions.</div><p>Use the DESCRIBE command to get information about
the Iceberg identity partitions:</p><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>DESC [...]
</span></span></code></pre></div><p>The result
is:</p><table><thead><tr><th>col_name</th><th>data_type</th><th>comment</th></tr></thead><tbody><tr><td>i</td><td>int</td><td></td></tr><tr><td>j</td><td>int</td><td></td></tr><tr><td></td><td>NULL</td><td>NULL</td></tr><tr><td>#
Partition Transform Information</td><td>NULL</td><td>NULL</td></tr><tr><td>#
col_name</td><td>transform_type</td><td>NULL</td></tr><tr><td>j</td><td>IDENTITY</td><td>NULL</td></tr></tbody></table><p>You
can create I [...]
-(supported only in Hive 4.0.0-alpha-1):</p><div class=highlight><pre
tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> x (i
int, ts <span style=color:#66d9ef>timestamp</span>) PARTITIONED <span
style=color:#66d9ef>BY</span> SPEC (<span style=color:#66d9ef>month</span>(ts),
bucket(<span style=col [...]
+(supported only from Hive 4.0.0-alpha-1):</p><div class=highlight><pre
tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> x (i
int, ts <span style=color:#66d9ef>timestamp</span>) PARTITIONED <span
style=color:#66d9ef>BY</span> SPEC (<span style=color:#66d9ef>month</span>(ts),
bucket(<span style=c [...]
</span></span><span style=display:flex><span><span
style=color:#66d9ef>DESCRIBE</span> x;
</span></span></code></pre></div><p>The result
is:</p><table><thead><tr><th>col_name</th><th>data_type</th><th>comment</th></tr></thead><tbody><tr><td>i</td><td>int</td><td></td></tr><tr><td>ts</td><td>timestamp</td><td></td></tr><tr><td></td><td>NULL</td><td>NULL</td></tr><tr><td>#
Partition Transform Information</td><td>NULL</td><td>NULL</td></tr><tr><td>#
col_name</td><td>transform_type</td><td>NULL</td></tr><tr><td>ts</td><td>MONTH</td><td>NULL</td></tr><tr><td>i</td><td>BUCKET[2]</t
[...]
The Iceberg table and the corresponding Hive table are created at the
beginning of the query execution.
The data is inserted/committed when the query finishes, so for a transient
period the table already exists but contains no data.</p><div
class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> target
PARTITIONED <span style=color:#66d9ef>BY</span> SPEC (<span style=color:#66d9ef
[...]
</span></span><span style=display:flex><span> <span
style=color:#66d9ef>SELECT</span> <span style=color:#f92672>*</span> <span
style=color:#66d9ef>FROM</span> <span style=color:#66d9ef>source</span>;
+</span></span></code></pre></div><h3 id=create-table-like-table>CREATE TABLE
LIKE TABLE</h3><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>CREATE</span> <span style=color:#66d9ef>TABLE</span> target
<span style=color:#66d9ef>LIKE</span> <span style=color:#66d9ef>source</span>
STORED <span style=color:#66d9ef>BY</span> [...]
</span></span></code></pre></div><h3
id=create-external-table-overlaying-an-existing-iceberg-table>CREATE EXTERNAL
TABLE overlaying an existing Iceberg table</h3><p>The <code>CREATE EXTERNAL
TABLE</code> command is used to overlay a Hive table “on top of” an
existing Iceberg table. Iceberg
tables are created using either a <a
href=../../../javadoc/1.1.0/index.html?org/apache/iceberg/catalog/Catalog.html><code>Catalog</code></a>,
or an implementation of the <a
href=../../../javadoc/1.1.0/index.html?org/apache/iceberg/Tables.html><code>Tables</code></a>
interface, and Hive needs to be configured accordingly to
operate on these different types of table.</p><h4 id=hive-catalog-tables>Hive
catalog tables</h4><p>As described before, tables created by the
<code>HiveCatalog</code> with the Hive engine feature enabled are directly visible
to the
@@ -119,6 +121,8 @@ i.e. if columns are specified out-of-order an error will be
thrown signalling th
</span></span></code></pre></div><p>During the migration the data files are
not changed; only the appropriate Iceberg metadata files are created.
After the migration, handle the table as a normal Iceberg table.</p><h3
id=truncate-table>TRUNCATE TABLE</h3><p>The following command truncates the
Iceberg table:</p><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>TRUNCATE</span> <span style=color:#66d9ef>TABLE</span> t;
</span></span></code></pre></div><p>Using a partition specification is not
allowed.</p><h3 id=drop-table>DROP TABLE</h3><p>Tables can be dropped using the
<code>DROP TABLE</code> command:</p><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>DROP</span> <span style=color:#66d9ef>TABLE</span> [<span
style=color:#66d9ef>IF</ [...]
+</span></span></code></pre></div><h3 id=metadata-location>METADATA
LOCATION</h3><p>The metadata location (snapshot location) can only be changed
if the new path contains the exact same metadata JSON.
+It can be done only after migrating the table to Iceberg; the two operations
cannot be done in one step.</p><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>ALTER</span> <span style=color:#66d9ef>TABLE</span> t <span
style=color:#66d9ef>set</span> TBLPROPERTIES (<span
style=color:#e6db74>'metadata_location'</span> [...]
</span></span></code></pre></div><h2 id=dml-commands>DML Commands</h2><h3
id=select>SELECT</h3><p>Select statements work the same on Iceberg tables in
Hive. You will see the Iceberg benefits over Hive in compilation and
execution:</p><ul><li><strong>No file system listings</strong> - especially
important on blob stores, like S3</li><li><strong>No partition listing
from</strong> the Metastore</li><li><strong>Advanced partition
filtering</strong> - the partition keys are not needed in the [...]
Also, the statistics stored in the MetaStore are currently used for query
planning. This is something we plan to improve in the future.</p><h3
id=insert-into>INSERT INTO</h3><p>Hive supports the standard single-table
INSERT INTO operation:</p><div class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>INSERT</span> <span sty [...]
</span></span><span style=display:flex><span><span
style=color:#66d9ef>VALUES</span> (<span
style=color:#e6db74>'a'</span>, <span style=color:#ae81ff>1</span>);
@@ -140,10 +144,14 @@ To reference a metadata table the full name of the table
should be used, like:
For these views it is possible to use projections, joins, filters, etc.
The feature is available with the following syntax:</p><div
class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>SELECT</span> <span style=color:#f92672>*</span> <span
style=color:#66d9ef>FROM</span> table_a <span style=color:#66d9ef>FOR</span>
SYSTEM_TIME <span style=color:#66d9ef>AS</span> <span
style=color:#66d9ef>OF</span> < [...]
</span></span><span style=display:flex><span><span
style=color:#66d9ef>SELECT</span> <span style=color:#f92672>*</span> <span
style=color:#66d9ef>FROM</span> table_a <span style=color:#66d9ef>FOR</span>
SYSTEM_VERSION <span style=color:#66d9ef>AS</span> <span
style=color:#66d9ef>OF</span> <span style=color:#ae81ff>1234567</span>;
-</span></span></code></pre></div><h2 id=type-compatibility>Type
compatibility</h2><p>Hive and Iceberg support different sets of types. Iceberg
can perform type conversion automatically, but not for all
+</span></span></code></pre></div><p>You can expire snapshots of an Iceberg
table using an ALTER TABLE query from Hive. You should periodically expire
snapshots to delete data files that are no longer needed, and to reduce the size of
table metadata.</p><p>Each write to an Iceberg table from Hive creates a new
snapshot, or version, of a table. Snapshots can be used for time-travel
queries, or the table can be rolled back to any valid snapshot. Snapshots
accumulate until they are expired by th [...]
+Enter a query to expire snapshots older than the following timestamp:
<code>2021-12-09 05:39:18.689000000</code></p><div class=highlight><pre
tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>ALTER</span> <span style=color:#66d9ef>TABLE</span>
test_table <span style=color:#66d9ef>EXECUTE</span> expire_snapshots(<span
style=color:#e6db74>'2021-12-0 [...]
+</span></span></code></pre></div><h3 id=type-compatibility>Type
compatibility</h3><p>Hive and Iceberg support different sets of types. Iceberg
can perform type conversion automatically, but not for all
combinations, so you may want to understand Iceberg's type conversion prior to
designing the types of columns in
your tables. You can enable auto-conversion through Hadoop configuration (not
enabled by default):</p><table><thead><tr><th>Config
key</th><th>Default</th><th>Description</th></tr></thead><tbody><tr><td>iceberg.mr.schema.auto.conversion</td><td>false</td><td>if
Hive should perform type auto-conversion</td></tr></tbody></table><h3
id=hive-type-to-iceberg-type>Hive type to Iceberg type</h3><p>This type
conversion table describes how Hive types are converted to Iceberg types.
The conver [...]
-creating an Iceberg table and writing to an Iceberg table via
Hive.</p><table><thead><tr><th>Hive</th><th>Iceberg</th><th>Notes</th></tr></thead><tbody><tr><td>boolean</td><td>boolean</td><td></td></tr><tr><td>short</td><td>integer</td><td>auto-conversion</td></tr><tr><td>byte</td><td>integer</td><td>auto-conversion</td></tr><tr><td>integer</td><td>integer</td><td></td></tr><tr><td>long</td><td>long</td><td></td></tr><tr><td>float</td><td>float</td><td></td></tr><tr><td>double</td><td>double</
[...]
+creating an Iceberg table and writing to an Iceberg table via
Hive.</p><table><thead><tr><th>Hive</th><th>Iceberg</th><th>Notes</th></tr></thead><tbody><tr><td>boolean</td><td>boolean</td><td></td></tr><tr><td>short</td><td>integer</td><td>auto-conversion</td></tr><tr><td>byte</td><td>integer</td><td>auto-conversion</td></tr><tr><td>integer</td><td>integer</td><td></td></tr><tr><td>long</td><td>long</td><td></td></tr><tr><td>float</td><td>float</td><td></td></tr><tr><td>double</td><td>double</
[...]
+</span></span></code></pre></div><p>Roll back to a specific snapshot ID</p><div
class=highlight><pre tabindex=0
style=color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code
class=language-sql data-lang=sql><span style=display:flex><span><span
style=color:#66d9ef>ALTER</span> <span style=color:#66d9ef>TABLE</span> ice_t
<span style=color:#66d9ef>EXECUTE</span> <span
style=color:#66d9ef>ROLLBACK</span>(<span style=color:#ae81ff>1111</span>);
+</span></span></code></pre></div></div><div id=toc class=markdown-body><div
id=full><nav id=TableOfContents><ul><li><a href=#feature-support>Feature
support</a></li><li><a href=#enabling-iceberg-support-in-hive>Enabling Iceberg
support in Hive</a><ul><li><a href=#hive-400-alpha-1>Hive
4.0.0-alpha-1</a></li><li><a href=#hive-23x-hive-31x>Hive 2.3.x, Hive
3.1.x</a></li></ul></li><li><a href=#catalog-management>Catalog
Management</a><ul><li><a href=#global-hive-catalog>Global Hive catalog</ [...]
<script
src=https://iceberg.apache.org/docs/1.1.0//js/jquery.easing.min.js></script>
<script type=text/javascript
src=https://iceberg.apache.org/docs/1.1.0//js/search.js></script>
<script
src=https://iceberg.apache.org/docs/1.1.0//js/bootstrap.min.js></script>
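The `expire_snapshots` statement in the hunks above is truncated by the mail wrapper. A hedged sketch in Hive SQL, reusing the timestamp, table names, and snapshot ID that appear elsewhere in this diff (the exact argument form of `expire_snapshots` is an assumption based on the surrounding text):

```sql
-- Expire snapshots older than the given timestamp (argument form assumed).
ALTER TABLE test_table EXECUTE expire_snapshots('2021-12-09 05:39:18.689000000');

-- Roll back to a specific snapshot ID (this statement appears in full in the diff above).
ALTER TABLE ice_t EXECUTE ROLLBACK(1111);
```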
diff --git a/docs/1.1.0/index.html b/docs/1.1.0/index.html
index 3ed2db1d..ef960715 100644
--- a/docs/1.1.0/index.html
+++ b/docs/1.1.0/index.html
@@ -1,4 +1,4 @@
-<!doctype html><html><head><meta name=generator content="Hugo 0.107.0"><meta
charset=utf-8><meta http-equiv=x-ua-compatible content="IE=edge"><meta
name=viewport content="width=device-width,initial-scale=1"><meta
name=description content><meta name=author
content><title>Introduction</title><link href=./css/bootstrap.css
rel=stylesheet><link href=./css/markdown.css rel=stylesheet><link
href=./css/katex.min.css rel=stylesheet><link href=./css/iceberg-theme.css
rel=stylesheet><link href=./f [...]
+<!doctype html><html><head><meta name=generator content="Hugo 0.108.0"><meta
charset=utf-8><meta http-equiv=x-ua-compatible content="IE=edge"><meta
name=viewport content="width=device-width,initial-scale=1"><meta
name=description content><meta name=author
content><title>Introduction</title><link href=./css/bootstrap.css
rel=stylesheet><link href=./css/markdown.css rel=stylesheet><link
href=./css/katex.min.css rel=stylesheet><link href=./css/iceberg-theme.css
rel=stylesheet><link href=./f [...]
<span class=sr-only>Toggle navigation</span>
<span class=icon-bar></span>
<span class=icon-bar></span>
diff --git a/docs/1.1.0/index.xml b/docs/1.1.0/index.xml
index d57b5c9a..9fc67e6e 100644
--- a/docs/1.1.0/index.xml
+++ b/docs/1.1.0/index.xml
@@ -3,8 +3,8 @@ Spark is currently the most feature-rich compute engine for
Iceberg operations.
Using Iceberg in Spark 3 To use Iceberg in a Spark shell, use the --packages
option:
spark-shell --packages
org.</description></item><item><title>Hive</title><link>https://iceberg.apache.org/docs/1.1.0/hive/</link><pubDate>Mon,
01 Jan 0001 00:00:00
+0000</pubDate><guid>https://iceberg.apache.org/docs/1.1.0/hive/</guid><description>Hive
Iceberg supports reading and writing Iceberg tables through Hive by using a
StorageHandler.
Feature support Iceberg compatibility with Hive 2.x and Hive 3.1.2/3 supports
the following features:
-Creating a table Dropping a table Reading a table Inserting into a table
(INSERT INTO) DML operations work only with the MapReduce execution engine. With
Hive version 4.0.0-alpha-1 and above, the Iceberg integration when using
HiveCatalog supports the following additional features:
-Creating an Iceberg identity-partitioned table Creating an Iceberg table with
any partition spec, including the various transforms supported by Iceberg
Creating a table from an existing table (CTAS table) Altering a table while
keeping Iceberg and Hive schemas in sync Altering the partition schema
(updating columns) Altering the partition schema by specifying partition
transforms Truncating a table Migrating tables in Avro, Parquet, or ORC
(Non-ACID) format to Iceberg Reading the schema [...]
+Creating a table Dropping a table Reading a table Inserting into a table
(INSERT INTO) DML operations work only with the MapReduce execution engine. With
Hive version 4.0.0-alpha-2 and above, the Iceberg integration when using
HiveCatalog supports the following additional features:
+Altering a table to expire
snapshots.</description></item><item><title>AWS</title><link>https://iceberg.apache.org/docs/1.1.0/aws/</link><pubDate>Mon,
01 Jan 0001 00:00:00
+0000</pubDate><guid>https://iceberg.apache.org/docs/1.1.0/aws/</guid><description>Iceberg
AWS Integrations Iceberg provides integration with different AWS services
through the iceberg-aws module. This section describes how to use Iceberg with
AWS.
Enabling AWS Integration The iceberg-aws module is bundled with Spark and
Flink engine runtimes for all versions from 0.11.0 onwards. However, the AWS
clients are not bundled so that you can use the same client version as your
application. You will need to provide the AWS v2 SDK because that is what
Iceberg depends
on.</description></item><item><title>Configuration</title><link>https://iceberg.apache.org/docs/1.1.0/configuration/</link><pubDate>Mon,
01 Jan 0001 00:00:00 +0000</pubDate><g [...]
Read properties Property Default Description read.split.target-size 134217728
(128 MB) Target size when combining data input splits
read.split.metadata-target-size 33554432 (32 MB) Target size when combining
metadata input splits read.split.planning-lookback 10 Number of bins to
consider when combining input splits read.split.open-file-cost 4194304 (4 MB)
The estimated cost to open a file, used as a minimum weight when combining
splits.</description></item><item><title>Configuration</tit [...]
This creates an Iceberg catalog named hive_prod that loads tables from a Hive
metastore:
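The email truncates here, just before the `hive_prod` configuration snippet. A hedged sketch of the standard Spark settings such an example typically shows (the thrift URI host and port are placeholders; the property names follow Iceberg's documented Spark catalog configuration):

```
spark.sql.catalog.hive_prod = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hive_prod.type = hive
spark.sql.catalog.hive_prod.uri = thrift://metastore-host:9083
```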