This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/iceberg-docs.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 6331cad3 deploy: 95d89ffbe7311c0ca86e5f7b4e3dacc1c712e0d7
6331cad3 is described below

commit 6331cad3b520b23406d1a1d5e1a5caa8b4649bd5
Author: Fokko <[email protected]>
AuthorDate: Mon Nov 28 09:44:41 2022 +0000

    deploy: 95d89ffbe7311c0ca86e5f7b4e3dacc1c712e0d7
---
 common/index.xml           |  2 +-
 gcm-stream-spec/index.html | 11 +++++++++++
 getting-started/index.html | 20 +-------------------
 index.xml                  |  2 +-
 landingpagesearch.json     |  2 +-
 sitemap.xml                |  2 +-
 spec/index.html            | 14 ++++++++------
 view-spec/index.html       |  7 +++----
 8 files changed, 27 insertions(+), 33 deletions(-)

diff --git a/common/index.xml b/common/index.xml
index 242967a3..3a462704 100644
--- a/common/index.xml
+++ b/common/index.xml
@@ -1,6 +1,6 @@
 <?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" 
xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Commons on Apache 
Iceberg</title><link>https://iceberg.apache.org/common/</link><description>Recent
 content in Commons on Apache Iceberg</description><generator>Hugo -- 
gohugo.io</generator><language>en-us</language><atom:link 
href="https://iceberg.apache.org/common/index.xml" rel="self" 
type="application/rss+xml"/><item><title>Spark and Iceberg Quickstart</t [...]
 Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/rel [...]
-1.0.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.0.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.0.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.0.0 Spark 3.1 runtime Jar 1.0.0 Spark 3.0 runtime Jar 
1.0.0 Spark 2.4 runtime Jar 1.0.0 Flink 1.16 runtime Jar 1.0.0 Flink 1.15 
runtime Jar 1.0.0 Flink 1.14 runtime Jar 1.0.0 Hive runtime Jar To use Iceberg 
in Spark or Flink, download the runtime JAR for your engine version and add it 
to the jars folder of your installa [...]
+1.0.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.0.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.0.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.0.0 Spark 3.1 runtime Jar 1.0.0 Spark 3.0 runtime Jar 
1.0.0 Spark 2.4 runtime Jar 1.0.0 Flink 1.16 runtime Jar 1.0.0 Flink 1.15 
runtime Jar 1.0.0 Flink 1.14 runtime Jar 1.0.0 Hive runtime Jar To use Iceberg 
in Spark or Flink, download the runtime JAR for your engine version and add it 
to the jars folder of your installa [...]
 Running Benchmarks on GitHub It is possible to run one or more Benchmarks via 
the JMH Benchmarks GH action on your own fork of the Iceberg 
repo.</description></item><item><title>Blogs</title><link>https://iceberg.apache.org/blogs/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 
+0000</pubDate><guid>https://iceberg.apache.org/blogs/</guid><description>Iceberg
 Blogs Here is a list of company blogs that talk about Iceberg. The blogs are 
ordered from most recent to oldest.
 Compaction in Apache Iceberg: Fine-Tuning Your Iceberg Table&amp;rsquo;s Data 
Files Date: November 9th, 2022, Company: Dremio
 Author: Alex Merced
diff --git a/gcm-stream-spec/index.html b/gcm-stream-spec/index.html
new file mode 100644
index 00000000..78cbc3c7
--- /dev/null
+++ b/gcm-stream-spec/index.html
@@ -0,0 +1,11 @@
+<!doctype html><html><head><meta charset=utf-8><meta 
http-equiv=x-ua-compatible content="IE=edge"><meta name=viewport 
content="width=device-width,initial-scale=1"><meta name=description 
content><meta name=author content><title>AES GCM Stream Spec</title><link 
href=/css/bootstrap.css rel=stylesheet><link href=/css/markdown.css 
rel=stylesheet><link href=/css/katex.min.css rel=stylesheet><link 
href=/css/iceberg-theme.css rel=stylesheet><link 
href=/font-awesome-4.7.0/css/font-awesome.min.css [...]
+<span class=sr-only>Toggle navigation</span>
+<span class=icon-bar></span>
+<span class=icon-bar></span>
+<span class=icon-bar></span></button>
+<a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img 
class=top-navbar-logo 
src=https://iceberg.apache.org//img/iceberg-logo-icon.png> Apache 
Iceberg</a></div><div><input type=search class=form-control id=search-input 
placeholder=Search... maxlength=64 data-hotkeys=s/></div><div 
class=versions-dropdown><span>1.0.0</span> <i class="fa 
fa-chevron-down"></i><div class=versions-dropdown-content><ul><li 
class=versions-dropdown-selection><a href=/docs/latest>latest</a></li> [...]
+</code></pre><p>where</p><ul><li><code>Magic</code> is four bytes 0x41, 0x47, 
0x53, 0x31 (&ldquo;AGS1&rdquo;, short for: AES GCM Stream, version 
1)</li><li><code>BlockLength</code> is four bytes (little endian) integer 
keeping the length of the equal-size split blocks before encryption. The length 
is specified in bytes.</li><li><code>CipherBlockᵢ</code> is the i-th enciphered 
block in the file, with the structure defined below.</li></ul><h3 
id=cipher-block-structure>Cipher Block structur [...]
+<script src=https://iceberg.apache.org//js/jquery.easing.min.js></script>
+<script type=text/javascript 
src=https://iceberg.apache.org//js/search.js></script>
+<script src=https://iceberg.apache.org//js/bootstrap.min.js></script>
+<script src=https://iceberg.apache.org//js/iceberg-theme.js></script></html>
\ No newline at end of file
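The new gcm-stream-spec page above describes the stream prelude: a four-byte `Magic` of 0x41 0x47 0x53 0x31 ("AGS1") followed by a four-byte little-endian `BlockLength`. As a minimal sketch of that layout (`read_header` is an illustrative helper, not part of any Iceberg library):

```python
import struct

MAGIC = b"AGS1"  # 0x41 0x47 0x53 0x31, "AES GCM Stream, version 1"

def read_header(stream_bytes: bytes) -> int:
    """Parse the 8-byte prelude and return BlockLength (in bytes)."""
    if stream_bytes[:4] != MAGIC:
        raise ValueError("not an AES GCM Stream file")
    # BlockLength is a 4-byte little-endian integer: the size of the
    # equal-size split blocks before encryption.
    (block_length,) = struct.unpack("<i", stream_bytes[4:8])
    return block_length
```

The cipher blocks that follow the prelude have their own structure, defined later in the spec page.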
diff --git a/getting-started/index.html b/getting-started/index.html
index b3c6e0d8..52cc0335 100644
--- a/getting-started/index.html
+++ b/getting-started/index.html
@@ -1,19 +1 @@
-<!--
- - Licensed to the Apache Software Foundation (ASF) under one or more
- - contributor license agreements.  See the NOTICE file distributed with
- - this work for additional information regarding copyright ownership.
- - The ASF licenses this file to You under the Apache License, Version 2.0
- - (the "License"); you may not use this file except in compliance with
- - the License.  You may obtain a copy of the License at
- -
- -   http://www.apache.org/licenses/LICENSE-2.0
- -
- - Unless required by applicable law or agreed to in writing, software
- - distributed under the License is distributed on an "AS IS" BASIS,
- - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- - See the License for the specific language governing permissions and
- - limitations under the License.
- -->
-<head>
-  <meta http-equiv="Refresh" content="0; url='/docs/latest/getting-started'" />
-</head>
+<!doctype html><html 
lang=en-us><head><title>https://iceberg.apache.org/spark-quickstart/</title><link
 rel=canonical href=https://iceberg.apache.org/spark-quickstart/><meta 
name=robots content="noindex"><meta charset=utf-8><meta http-equiv=refresh 
content="0; url=https://iceberg.apache.org/spark-quickstart/"></head></html>
\ No newline at end of file
diff --git a/index.xml b/index.xml
index b9adfc84..9a09ba1e 100644
--- a/index.xml
+++ b/index.xml
@@ -1,6 +1,6 @@
 <?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" 
xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Apache 
Iceberg</title><link>https://iceberg.apache.org/</link><description>Recent 
content on Apache Iceberg</description><generator>Hugo -- 
gohugo.io</generator><language>en-us</language><atom:link 
href="https://iceberg.apache.org/index.xml" rel="self" 
type="application/rss+xml"/><item><title>Expressive 
SQL</title><link>https://iceberg.apache.org/services/exp [...]
 Docker-Compose Creating a table Writing Data to a Table Reading Data from a 
Table Adding A Catalog Next Steps Docker-Compose The fastest way to get started 
is to use a docker-compose file that uses the tabulario/spark-iceberg image 
which contains a local Spark cluster with a configured Iceberg 
catalog.</description></item><item><title>Releases</title><link>https://iceberg.apache.org/releases/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://iceberg.apache.org/rel [...]
-1.0.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.0.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.0.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.0.0 Spark 3.1 runtime Jar 1.0.0 Spark 3.0 runtime Jar 
1.0.0 Spark 2.4 runtime Jar 1.0.0 Flink 1.16 runtime Jar 1.0.0 Flink 1.15 
runtime Jar 1.0.0 Flink 1.14 runtime Jar 1.0.0 Hive runtime Jar To use Iceberg 
in Spark or Flink, download the runtime JAR for your engine version and add it 
to the jars folder of your installa [...]
+1.0.0 source tar.gz &amp;ndash; signature &amp;ndash; sha512 1.0.0 Spark 
3.3_2.12 runtime Jar &amp;ndash; 3.3_2.13 1.0.0 Spark 3.2_2.12 runtime Jar 
&amp;ndash; 3.2_2.13 1.0.0 Spark 3.1 runtime Jar 1.0.0 Spark 3.0 runtime Jar 
1.0.0 Spark 2.4 runtime Jar 1.0.0 Flink 1.16 runtime Jar 1.0.0 Flink 1.15 
runtime Jar 1.0.0 Flink 1.14 runtime Jar 1.0.0 Hive runtime Jar To use Iceberg 
in Spark or Flink, download the runtime JAR for your engine version and add it 
to the jars folder of your installa [...]
 Running Benchmarks on GitHub It is possible to run one or more Benchmarks via 
the JMH Benchmarks GH action on your own fork of the Iceberg 
repo.</description></item><item><title>Blogs</title><link>https://iceberg.apache.org/blogs/</link><pubDate>Mon,
 01 Jan 0001 00:00:00 
+0000</pubDate><guid>https://iceberg.apache.org/blogs/</guid><description>Iceberg
 Blogs Here is a list of company blogs that talk about Iceberg. The blogs are 
ordered from most recent to oldest.
 Compaction in Apache Iceberg: Fine-Tuning Your Iceberg Table&amp;rsquo;s Data 
Files Date: November 9th, 2022, Company: Dremio
 Author: Alex Merced
diff --git a/landingpagesearch.json b/landingpagesearch.json
index 8107d911..187fadf1 100644
--- a/landingpagesearch.json
+++ b/landingpagesearch.json
@@ -1 +1 @@
-[{"categories":null,"content":" Spark and Iceberg Quickstart This guide will 
get you up and running with an Iceberg and Spark environment, including sample 
code to highlight some powerful features. You can learn more about Iceberg’s 
Spark runtime by checking out the Spark section.\nDocker-Compose Creating a 
table Writing Data to a Table Reading Data from a Table Adding A Catalog Next 
Steps Docker-Compose The fastest way to get started is to use a docker-compose 
file that uses the tab [...]
\ No newline at end of file
+[{"categories":null,"content":" Spark and Iceberg Quickstart This guide will 
get you up and running with an Iceberg and Spark environment, including sample 
code to highlight some powerful features. You can learn more about Iceberg’s 
Spark runtime by checking out the Spark section.\nDocker-Compose Creating a 
table Writing Data to a Table Reading Data from a Table Adding A Catalog Next 
Steps Docker-Compose The fastest way to get started is to use a docker-compose 
file that uses the tab [...]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 9633afba..782100a6 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" 
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>https://iceberg.apache.org/services/expressive-sql/</loc></url><url><loc>https://iceberg.apache.org/services/schema-evolution/</loc></url><url><loc>https://iceberg.apache.org/services/hidden-partitioning/</loc></url><url><loc>https://iceberg.apache.org/services/time-travel/</loc></url><url><loc>https://iceberg.apache.org/s
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" 
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>https://iceberg.apache.org/services/expressive-sql/</loc></url><url><loc>https://iceberg.apache.org/services/schema-evolution/</loc></url><url><loc>https://iceberg.apache.org/services/hidden-partitioning/</loc></url><url><loc>https://iceberg.apache.org/services/time-travel/</loc></url><url><loc>https://iceberg.apache.org/s
 [...]
\ No newline at end of file
diff --git a/spec/index.html b/spec/index.html
index b9126431..4b38e792 100644
--- a/spec/index.html
+++ b/spec/index.html
@@ -4,9 +4,11 @@
 <span class=icon-bar></span>
 <span class=icon-bar></span></button>
 <a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img 
class=top-navbar-logo 
src=https://iceberg.apache.org//img/iceberg-logo-icon.png> Apache 
Iceberg</a></div><div><input type=search class=form-control id=search-input 
placeholder=Search... maxlength=64 data-hotkeys=s/></div><div 
class=versions-dropdown><span>1.0.0</span> <i class="fa 
fa-chevron-down"></i><div class=versions-dropdown-content><ul><li 
class=versions-dropdown-selection><a href=/docs/latest>latest</a></li> [...]
-</code></pre><p>Notes:</p><ol><li>Changing the number of buckets as a table 
grows is possible by evolving the partition spec.</li></ol><p>For hash function 
details by type, see Appendix B.</p><h4 id=truncate-transform-details>Truncate 
Transform 
Details</h4><table><thead><tr><th><strong>Type</strong></th><th><strong>Config</strong></th><th><strong>Truncate
 
specification</strong></th><th><strong>Examples</strong></th></tr></thead><tbody><tr><td><strong><code>int</code></strong></td><td><co
 [...]
-The <code>sequence_number</code> field represents the data sequence number and 
must never change after a file is added to the 
dataset.</p><p>Notes:</p><ol><li>Technically, data files can be deleted when 
the last snapshot that contains the file as “live” data is garbage collected. 
But this is harder to detect and requires finding the diff of multiple 
snapshots. It is easier to track what files are deleted in a snapshot and 
delete them when that snapshot expires. It is not recommended to a [...]
-It is also possible to add a new file that logically belongs to an older 
sequence number. In that case, the sequence number must be provided explicitly 
and not inherited.</p><p>When writing an existing file to a new manifest or 
marking an existing file as deleted, the sequence number must be non-null and 
set to the original data sequence number of the file that was either inherited 
or provided at the commit time.</p><p>Inheriting sequence numbers through the 
metadata tree allows writing  [...]
+</code></pre><p>Notes:</p><ol><li>Changing the number of buckets as a table 
grows is possible by evolving the partition spec.</li></ol><p>For hash function 
details by type, see Appendix B.</p><h4 id=truncate-transform-details>Truncate 
Transform 
Details</h4><table><thead><tr><th><strong>Type</strong></th><th><strong>Config</strong></th><th><strong>Truncate
 
specification</strong></th><th><strong>Examples</strong></th></tr></thead><tbody><tr><td><strong><code>int</code></strong></td><td><co
 [...]
+The <code>sequence_number</code> field represents the data sequence number and 
must never change after a file is added to the dataset. The data sequence 
number represents a relative age of the file content and should be used for 
planning which delete files apply to a data file.
+The <code>file_sequence_number</code> field represents the sequence number of 
the snapshot that added the file and must also remain unchanged upon assigning 
at commit. The file sequence number can&rsquo;t be used for pruning delete 
files as the data within the file may have an older data sequence number.
+The data and file sequence numbers are inherited only if the entry status is 1 
(added). If the entry status is 0 (existing) or 2 (deleted), the entry must 
include both sequence numbers explicitly.</p><p>Notes:</p><ol><li>Technically, 
data files can be deleted when the last snapshot that contains the file as 
“live” data is garbage collected. But this is harder to detect and requires 
finding the diff of multiple snapshots. It is easier to track what files are 
deleted in a snapshot and dele [...]
+It is also possible to add a new file with data that logically belongs to an 
older sequence number. In that case, the data sequence number must be provided 
explicitly and not inherited. However, the file sequence number must be always 
assigned when the snapshot is successfully committed.</p><p>When writing an 
existing file to a new manifest or marking an existing file as deleted, the 
data and file sequence numbers must be non-null and set to the original values 
that were either inherited [...]
 Tags are labels for individual snapshots. Branches are mutable named 
references that can be updated by committing a new snapshot as the 
branch&rsquo;s referenced snapshot using the <a 
href=#commit-conflict-resolution-and-retry>Commit Conflict Resolution and 
Retry</a> procedures.</p><p>The snapshot reference object records all the 
information of a reference including snapshot ID, reference type and <a 
href=#snapshot-retention-policy>Snapshot Retention 
Policy</a>.</p><table><thead><tr><th> [...]
 The snapshot expiration procedure removes snapshots from table metadata and 
applies the table&rsquo;s retention policy.
 Retention policy can be configured both globally and on snapshot reference 
through properties <code>min-snapshots-to-keep</code>, 
<code>max-snapshot-age-ms</code> and <code>max-ref-age-ms</code>.</p><p>When 
expiring snapshots, retention policies in table and snapshot references are 
evaluated in the following way:</p><ol><li>Start with an empty set of snapshots 
to retain</li><li>Remove any refs (other than main) where the referenced 
snapshot is older than <code>max-ref-age-ms</code></li>< [...]
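The sequence number rules added in this hunk (data and file sequence numbers are inherited only when the entry status is 1/added; existing and deleted entries must carry both explicitly; an explicitly provided data sequence number is never overwritten) can be sketched as follows. This is a hedged illustration of the spec text, not Iceberg's actual implementation; `resolve_sequence_numbers` and its parameters are hypothetical names.

```python
# Manifest entry status values, as defined in the table spec.
ADDED, EXISTING, DELETED = 1, 0, 2

def resolve_sequence_numbers(status, data_seq, file_seq, manifest_seq):
    """Resolve an entry's (data, file) sequence numbers per the inheritance rules."""
    if status == ADDED:
        # A null data sequence number is inherited from the manifest; an
        # explicit value (a file that logically belongs to an older sequence
        # number) is kept as provided.
        data_seq = manifest_seq if data_seq is None else data_seq
        # The file sequence number is always assigned when the snapshot commits.
        file_seq = manifest_seq if file_seq is None else file_seq
    else:
        # Existing (0) and deleted (2) entries must include both explicitly.
        if data_seq is None or file_seq is None:
            raise ValueError("existing/deleted entries need explicit sequence numbers")
    return data_seq, file_seq
```

Only the data sequence number is valid for planning which delete files apply to a data file; the file sequence number records when the file was added and may be newer than its contents.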
@@ -32,15 +34,15 @@ many statistics files associated with different table 
snapshots.</p><p>Statistic
 </span></span><span style=display:flex><span> 1: id | 2: category | 3: name
 </span></span><span style=display:flex><span>-------|-------------|---------
 </span></span><span style=display:flex><span> 4     | NULL        | Polar
-</span></span></code></pre></div><p>If a delete column in an equality delete 
file is later dropped from the table, it must still be used when applying the 
equality deletes. If a column was added to a table and later used as a delete 
column in an equality delete file, the column value is read for older data 
files using normal projection rules (defaults to <code>null</code>).</p><h4 
id=delete-file-stats>Delete File Stats</h4><p>Manifests hold the same 
statistics for delete files and data f [...]
+</span></span></code></pre></div><p>If a delete column in an equality delete 
file is later dropped from the table, it must still be used when applying the 
equality deletes. If a column was added to a table and later used as a delete 
column in an equality delete file, the column value is read for older data 
files using normal projection rules (defaults to <code>null</code>).</p><h4 
id=delete-file-stats>Delete File Stats</h4><p>Manifests hold the same 
statistics for delete files and data f [...]
 Hash results are not dependent on decimal scale, which is part of the type, 
not the data value.</li><li>UUIDs are encoded using big endian. The test UUID 
for the example above is: <code>f79c3e09-677c-4bbd-a479-3f349cb785e7</code>. 
This UUID encoded as a byte array is:
-<code>F7 9C 3E 09 67 7C 4B BD A4 79 3F 34 9C B7 85 E7</code></li><li>Float 
hash values are the result of hashing the float cast to double to ensure that 
schema evolution does not change hash values if float types are 
promoted.</li></ol><h2 id=appendix-c-json-serialization>Appendix C: JSON 
serialization</h2><h3 id=schemas>Schemas</h3><p>Schemas are serialized as a 
JSON object with the same fields as a struct in the table below, and the 
following additional fields:</p><table><thead><tr><th [...]
+<code>F7 9C 3E 09 67 7C 4B BD A4 79 3F 34 9C B7 85 
E7</code></li><li><code>doubleToLongBits</code> must give the IEEE 754 
compliant bit representation of the double value. All <code>NaN</code> bit 
patterns must be canonicalized to <code>0x7ff8000000000000L</code>. Negative 
zero (<code>-0.0</code>) must be canonicalized to positive zero 
(<code>0.0</code>). Float hash values are the result of hashing the float cast 
to double to ensure that schema evolution does not change hash values if fl 
[...]
 </span></span><span style=display:flex><span>   { <span 
style=color:#f92672>&#34;field-id&#34;</span>: <span 
style=color:#ae81ff>2</span>, <span style=color:#f92672>&#34;names&#34;</span>: 
[<span style=color:#e6db74>&#34;data&#34;</span>] },
 </span></span><span style=display:flex><span>   { <span 
style=color:#f92672>&#34;field-id&#34;</span>: <span 
style=color:#ae81ff>3</span>, <span style=color:#f92672>&#34;names&#34;</span>: 
[<span style=color:#e6db74>&#34;location&#34;</span>], <span 
style=color:#f92672>&#34;fields&#34;</span>: [
 </span></span><span style=display:flex><span>       { <span 
style=color:#f92672>&#34;field-id&#34;</span>: <span 
style=color:#ae81ff>4</span>, <span style=color:#f92672>&#34;names&#34;</span>: 
[<span style=color:#e6db74>&#34;latitude&#34;</span>, <span 
style=color:#e6db74>&#34;lat&#34;</span>] },
 </span></span><span style=display:flex><span>       { <span 
style=color:#f92672>&#34;field-id&#34;</span>: <span 
style=color:#ae81ff>5</span>, <span style=color:#f92672>&#34;names&#34;</span>: 
[<span style=color:#e6db74>&#34;longitude&#34;</span>, <span 
style=color:#e6db74>&#34;long&#34;</span>] }
 </span></span><span style=display:flex><span>     ] } ]
-</span></span></code></pre></div><h2 
id=appendix-d-single-value-serialization>Appendix D: Single-value 
serialization</h2><h3 id=binary-single-value-serialization>Binary single-value 
serialization</h3><p>This serialization scheme is for storing single values as 
individual binary values in the lower and upper bounds maps of manifest 
files.</p><table><thead><tr><th>Type</th><th>Binary 
serialization</th></tr></thead><tbody><tr><td><strong><code>boolean</code></strong></td><td><code>0x00</cod
 [...]
+</span></span></code></pre></div><h2 
id=appendix-d-single-value-serialization>Appendix D: Single-value 
serialization</h2><h3 id=binary-single-value-serialization>Binary single-value 
serialization</h3><p>This serialization scheme is for storing single values as 
individual binary values in the lower and upper bounds maps of manifest 
files.</p><table><thead><tr><th>Type</th><th>Binary 
serialization</th></tr></thead><tbody><tr><td><strong><code>boolean</code></strong></td><td><code>0x00</cod
 [...]
 <script src=https://iceberg.apache.org//js/jquery.easing.min.js></script>
 <script type=text/javascript 
src=https://iceberg.apache.org//js/search.js></script>
 <script src=https://iceberg.apache.org//js/bootstrap.min.js></script>
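The Appendix B change in this diff spells out hash canonicalization: `doubleToLongBits` must give the IEEE 754 bit pattern, all `NaN` patterns canonicalize to `0x7ff8000000000000L`, `-0.0` canonicalizes to `0.0`, and floats are cast to double before hashing. A minimal sketch of those rules, assuming Python's `struct` in place of Java's `Double.doubleToLongBits` (`double_to_long_bits` is an illustrative helper):

```python
import struct

CANONICAL_NAN = 0x7FF8000000000000

def double_to_long_bits(value: float) -> int:
    """IEEE 754 bits of a double, canonicalized as the spec requires."""
    if value != value:
        # Every NaN bit pattern maps to the single canonical NaN.
        return CANONICAL_NAN
    if value == 0.0:
        # -0.0 compares equal to 0.0, so this also canonicalizes negative zero.
        value = 0.0
    (bits,) = struct.unpack(">q", struct.pack(">d", value))
    return bits
```

Hashing the canonicalized double bits ensures that promoting a `float` column to `double` during schema evolution does not change hash values.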
diff --git a/view-spec/index.html b/view-spec/index.html
index 44634371..e956dcde 100644
--- a/view-spec/index.html
+++ b/view-spec/index.html
@@ -3,10 +3,9 @@
 <span class=icon-bar></span>
 <span class=icon-bar></span>
 <span class=icon-bar></span></button>
-<a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img 
class=top-navbar-logo 
src=https://iceberg.apache.org//img/iceberg-logo-icon.png> Apache 
Iceberg</a></div><div><input type=search class=form-control id=search-input 
placeholder=Search... maxlength=64 data-hotkeys=s/></div><div 
class=versions-dropdown><span>1.0.0</span> <i class="fa 
fa-chevron-down"></i><div class=versions-dropdown-content><ul><li 
class=versions-dropdown-selection><a href=/docs/latest>latest</a></li> [...]
-Each metadata file is self-sufficient. It contains the history of the last few 
operations performed on the view and can be used to roll back the view to a 
previous version.</p><h3 id=metadata-location>Metadata Location</h3><p>An 
atomic swap of one view metadata file for another provides the basis for making 
atomic changes. Readers use the version of the view that was current when they 
loaded the view metadata and are not affected by changes until they refresh and 
pick up a new metadata l [...]
-The rest of the fields are interpreted based on the type.
-There is only one type of representation defined in the spec.</p><h5 
id=original-view-definition-in-sql>Original View Definition in SQL</h5><p>This 
type of representation stores the original view definition in SQL and its SQL 
dialect.</p><table><thead><tr><th>Required/Optional</th><th>Field 
Name</th><th>Description</th></tr></thead><tbody><tr><td>Required</td><td>type</td><td>A
 string indicating the type of representation. It is set to &ldquo;sql&rdquo; 
for this type.</td></tr><tr><td>Re [...]
+<a class="page-scroll navbar-brand" href=https://iceberg.apache.org/><img 
class=top-navbar-logo 
src=https://iceberg.apache.org//img/iceberg-logo-icon.png> Apache 
Iceberg</a></div><div><input type=search class=form-control id=search-input 
placeholder=Search... maxlength=64 data-hotkeys=s/></div><div 
class=versions-dropdown><span>1.0.0</span> <i class="fa 
fa-chevron-down"></i><div class=versions-dropdown-content><ul><li 
class=versions-dropdown-selection><a href=/docs/latest>latest</a></li> [...]
+Each metadata file is self-sufficient. It contains the history of the last few 
operations performed on the view and can be used to roll back the view to a 
previous version.</p><h3 id=metadata-location>Metadata Location</h3><p>An 
atomic swap of one view metadata file for another provides the basis for making 
atomic changes. Readers use the version of the view that was current when they 
loaded the view metadata and are not affected by changes until they refresh and 
pick up a new metadata l [...]
+The rest of the fields are interpreted based on the type.</p><h5 
id=original-view-definition-in-sql>Original View Definition in SQL</h5><p>This 
type of representation stores the original view definition in SQL and its SQL 
dialect.</p><table><thead><tr><th>Required/Optional</th><th>Field 
Name</th><th>Description</th></tr></thead><tbody><tr><td>Required</td><td>type</td><td>A
 string indicating the type of representation. It is set to &ldquo;sql&rdquo; 
for this type.</td></tr><tr><td>Requir [...]
the field aliases are &lsquo;alias_name&rsquo;, &lsquo;alias_name2&rsquo;, 
etc., and the field docs are &lsquo;docs&rsquo;, null, etc.</p><h2 
id=appendix-a-an-example>Appendix A: An Example</h2><p>The JSON metadata file 
format is described using an example below.</p><p>Imagine the following 
sequence of operations:</p><ul><li><code>CREATE TABLE base_tab(c1 int, c2 
varchar);</code></li><li><code>INSERT INTO base_tab VALUES (1,’one’), 
(2,’two’);</code></li><li><code>CREATE VIEW comm [...]
 <code>s3://my_company/my/warehouse/anorwood.db/common_view</code></p><p>The 
path is intentionally similar to the path for iceberg tables and contains a 
‘metadata’ directory. 
(<code>METASTORE_WAREHOUSE_DIR/&lt;dbname>.db/&lt;viewname>/metadata</code>)</p><p>The
 metadata directory contains View Version Metadata files. The text after 
&lsquo;=>&rsquo; symbols describes the fields.</p><pre tabindex=0><code>{
   &#34;format-version&#34; : 1, =&gt; JSON format. Will change as format 
evolves.
