jackye1995 commented on code in PR #6600:
URL: https://github.com/apache/iceberg/pull/6600#discussion_r1150944425
##########
docs/table-migration.md:
##########
@@ -0,0 +1,89 @@
+---
+title: "Table Migration"
+url: table-migration
+weight: 1300
+menu: main
+---
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements. See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License. You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+# Table Migration
+Apache Iceberg supports converting existing tables in other formats to Iceberg tables. This section introduces the general concept of table migration, its approaches, and existing implementations in Iceberg.
+
+## Migration Approaches
+There are two main approaches to perform table migration: CTAS (Create Table As Select) and in-place migration.
+
+### Create-Table-As-Select Migration
+CTAS migration creates a new Iceberg table and copies data from the existing table into it. This method is preferred when you want to completely cut ties with the old table, ensuring the new table is independent and fully managed by Iceberg.
+However, CTAS migration can take longer to complete and may not be suitable for production use cases where downtime is not acceptable.
+
+### In-Place Migration
+In-place migration retains the existing data files and adds Iceberg metadata on top of them. This approach is faster because no data is copied, making it more suitable for production use cases.
+
+## In-Place Migration Actions
+Apache Iceberg primarily supports the in-place migration approach, which includes three important actions:
+
+1. Snapshot Table
+2. Migrate Table
+3. Add Files
+
+### Snapshot Table
+The Snapshot Table action creates a new Iceberg table with the same schema and partitioning as the source table, leaving the source table unchanged during and after the action.
+
+**Step 1:** Create a new Iceberg table with the same metadata (schema, partition spec, etc.) as the source table.
+
+**Step 2:** Commit all data files across all partitions to the new Iceberg table. The source table remains unchanged.
+
+### Migrate Table
+The Migrate Table action also creates a new Iceberg table with the same schema and partitioning as the source table. However, during the action execution, it locks and drops the source table from the catalog.
+Consequently, Migrate Table requires all readers and writers working on the source table to be stopped before the action is performed.
+
+**Step 1:** Stop all readers and writers interacting with the source table.
+
+**Step 2:** Create a new Iceberg table with the same metadata (schema, partition spec, etc.) as the source table. Rename the source table as a backup in case of failure and rollback.
+
+**Step 3:** Commit all data files across all partitions to the new Iceberg table. Drop the source table.
+
+### Add Files
+After the initial step (either Snapshot Table or Migrate Table), it is common to find some data files that have not been migrated.
+These files often originate from concurrent writers that continue writing to the source table during or after the migration process.
+In practice, these files can be new data files in Hive tables or new snapshots (versions) of Delta Lake tables. The Add Files action is essential for incorporating these files into the Iceberg table.
+
+## In-Place Migration Completion
+Once all data files have been migrated and there are no more concurrent writers writing to the source table, the migration process is complete.
+Readers and writers can now switch to the new Iceberg table for their operations.
+
+## Migration Implementation: From Hive/Spark to Iceberg
+Apache Hive and Apache Spark are two popular data warehouse systems used for big data processing and analysis.
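For reference, a minimal sketch of how the approaches described in the doc above could be invoked from Spark. The catalog name (`my_catalog`), database (`db`), and table names are placeholders rather than anything taken from the doc, and the sketch assumes a Spark session with the Iceberg runtime and catalog configuration already supplied elsewhere (e.g. via `spark-defaults.conf`); the in-place steps use Iceberg's `snapshot`, `migrate`, and `add_files` Spark procedures.

```scala
import org.apache.spark.sql.SparkSession

// Assumes the Iceberg Spark runtime and the `my_catalog` Iceberg catalog are
// configured elsewhere (e.g. spark-defaults.conf); all names are placeholders.
val spark = SparkSession.builder()
  .appName("table-migration-sketch")
  .getOrCreate()

// CTAS migration: copy all data into a brand-new, independent Iceberg table.
spark.sql(
  """CREATE TABLE my_catalog.db.ctas_migrated
    |USING iceberg
    |AS SELECT * FROM db.hive_source""".stripMargin)

// In-place migration, Snapshot Table: new Iceberg table over the existing data
// files; the source table is left unchanged.
spark.sql("CALL my_catalog.system.snapshot('db.hive_source', 'db.iceberg_snapshot')")

// In-place migration, Migrate Table: replaces the source table itself.
// Stop all readers and writers on the source before running this.
// spark.sql("CALL my_catalog.system.migrate('db.hive_source')")

// Add Files: commit data files written to the source during or after the
// snapshot/migrate step into the Iceberg table.
spark.sql(
  """CALL my_catalog.system.add_files(
    |  table => 'db.iceberg_snapshot',
    |  source_table => 'db.hive_source')""".stripMargin)
```

In practice you would run either `snapshot` or `migrate` against a given source table, not both, and `add_files` can be re-run until no concurrent writers remain, which matches the completion criteria described in the doc.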
Review Comment:
   We should remove opinions and evaluations of other systems in OSS docs. We just need an intro like "Hive tables backed by Parquet, ORC and Avro file formats can be migrated to Iceberg."

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
