This is an automated email from the ASF dual-hosted git repository.

michaelsmith pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git


The following commit(s) were added to refs/heads/master by this push:
     new f11790549 IMPALA-13321: [DOCS] Add documentation about CONVERT TO ICEBERG statement
f11790549 is described below

commit f11790549ac64a0e83f2508ee1c560a2946dc7d9
Author: Daniel Vanko <[email protected]>
AuthorDate: Mon Feb 23 11:44:56 2026 +0100

    IMPALA-13321: [DOCS] Add documentation about CONVERT TO ICEBERG statement
    
    This patch adds documentation about ALTER TABLE ... CONVERT TO ICEBERG
    statement, implemented in IMPALA-11013 and the addition of
    'format-version' specification by IMPALA-12330.
    
    Change-Id: I4688c0e93ecda43c2e43d36cd9e7d006e4fd5528
    Reviewed-on: http://gerrit.cloudera.org:8080/24021
    Tested-by: Impala Public Jenkins <[email protected]>
    Reviewed-by: Noemi Pap-Takacs <[email protected]>
    Reviewed-by: Zoltan Borok-Nagy <[email protected]>
---
 docs/topics/impala_iceberg.xml | 43 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/docs/topics/impala_iceberg.xml b/docs/topics/impala_iceberg.xml
index 684f3acf9..ec763a5d8 100644
--- a/docs/topics/impala_iceberg.xml
+++ b/docs/topics/impala_iceberg.xml
@@ -271,6 +271,49 @@ ALTER TABLE ice_v1_to_v2 SET TBLPROPERTIES('format-version'='2');
     </conbody>
   </concept>
 
+  <concept id="iceberg_convert">
+    <title>Converting existing tables to Iceberg tables</title>
+    <conbody>
+      <p>
+        Since <keyword keyref="impala43"/>, Impala also supports converting legacy Hive
+        tables to Iceberg tables. The target Iceberg table inherits the location of the
+        original Hive table. The Hive table to be converted must meet the following
+        requirements:
+        <ul>
+          <li>the table is not a transactional table,</li>
+          <li>the InputFormat must be PARQUET, ORC, or AVRO,</li>
+          <li>the user has 'ALL' privileges on the database containing the table.</li>
+        </ul>
+        If the Hive table meets these requirements, it can be converted to an Iceberg
+        table via an
+        <codeph>ALTER TABLE [dbname.]table_name CONVERT TO ICEBERG [TBLPROPERTIES(...)];</codeph>
+        statement, for example:
+        <codeblock>
+ALTER TABLE non_iceberg_table CONVERT TO ICEBERG;
+ALTER TABLE non_iceberg_table CONVERT TO ICEBERG TBLPROPERTIES('iceberg.catalog'='hadoop.tables');
+        </codeblock>
+      </p>
+      <p>
+        This command only accepts 'iceberg.catalog' (from <keyword keyref="impala43"/>)
+        and 'format-version' (from <keyword keyref="impala50"/>) as table properties.
+      </p>
+      <p>
+        This is an in-place migration, so the original data files of the legacy Hive
+        table are reused and not moved, copied, or re-created by this operation. The new
+        Iceberg table will have the 'external.table.purge' property set to 'true' after
+        the migration.
+      </p>
+      <p>
+        From <keyword keyref="impala50"/>, you can also specify the 'format-version' of
+        the new Iceberg table during conversion, for example:
+        <codeblock>
+ALTER TABLE non_iceberg_table CONVERT TO ICEBERG TBLPROPERTIES('format-version'='1');
+        </codeblock>
+        If 'format-version' is omitted, Impala uses the default Iceberg format version.
+      </p>
+    </conbody>
+  </concept>
+
   <concept id="iceberg_drop">
     <title>Dropping Iceberg tables</title>
     <conbody>

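Taken together, the documentation added above describes an end-to-end conversion flow. The sketch below illustrates it; the database and table names (`sales_db.hive_parquet_tbl`) are hypothetical, and the table is assumed to be a non-transactional Hive table stored as Parquet:

```sql
-- In-place conversion: data files stay where they are.
ALTER TABLE sales_db.hive_parquet_tbl CONVERT TO ICEBERG;

-- Alternatively, pin the Iceberg spec version during conversion
-- ('format-version' is accepted here from Impala 5.0 per the docs above):
-- ALTER TABLE sales_db.hive_parquet_tbl CONVERT TO ICEBERG
--   TBLPROPERTIES('format-version'='1');

-- After migration the table carries 'external.table.purge'='true',
-- which can be inspected in the table's properties:
DESCRIBE FORMATTED sales_db.hive_parquet_tbl;
```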