RussellSpitzer commented on code in PR #12115:
URL: https://github.com/apache/iceberg/pull/12115#discussion_r1932565068
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,91 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
 Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
 ```sql
 CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite_table_path` procedure assists in moving or copying an Iceberg table from one location to another.
+
+### `rewrite_table_path`
+
+This procedure writes a new copy of the Iceberg table's metadata files where every path has had its prefix replaced.
+The newly rewritten metadata files, along with the data files, enable moving or copying an Iceberg table to a new location.
+After both metadata and data files are copied to the desired location, the replicated Iceberg table will appear identical to the source table, including snapshot history, schema, and partition specs.
+
+!!! info
+    This procedure only creates metadata for an existing Iceberg table, modified for a new location. The procedure results can be consumed to copy the files.
+    Copying or moving the metadata and data files to the new location is not part of this procedure.
+
+
+| Argument Name      | Required? | Default                                        | Type   | Description                                                             |
+|--------------------|-----------|------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log    | string | The name or path of the chronologically first metadata.json to rewrite  |
+| `end_version`      |           | latest metadata.json                           | string | The name or path of the chronologically last metadata.json to rewrite   |
+| `staging_location` |           | new directory under table's metadata directory | string | The output location for the newly rewritten metadata files              |
+
+
+#### Modes of operation
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, where all metadata files are rewritten.
+
+- Incremental Rewrite:
+
+If `start_version` is provided, the procedure only rewrites the metadata files added between `start_version` and `end_version` (see the incremental example below). `end_version` defaults to the table's latest metadata location.
+
+#### Output
+
+| Output Name          | Type   | Description                                                                          |
+|----------------------|--------|--------------------------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file rewritten by this procedure                         |
+| `file_list_location` | string | Path to a file containing a listing of comma-separated source and destination paths  |
+
+Example file list content:
+
+```csv
+sourcepath/datafile1.parquet,targetpath/datafile1.parquet
+sourcepath/datafile2.parquet,targetpath/datafile2.parquet
+stagingpath/manifest.avro,targetpath/manifest.avro
+```
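+
+For illustration, the procedure output can also be consumed programmatically. A minimal PySpark sketch (not part of the procedure), assuming a running SparkSession `spark` and the table and prefixes from the full-rewrite example below:
+
+```python
+# Run the procedure and capture its single output row.
+result = spark.sql(
+    "CALL catalog_name.system.rewrite_table_path("
+    "table => 'db.my_table', "
+    "source_prefix => 'hdfs://nn:8020/path/to/source_table', "
+    "target_prefix => 's3a://bucket/prefix/db.db/my_table')"
+).collect()[0]
+
+print(result.latest_version)  # name of the last metadata file rewritten
+
+# Each line of the file list pairs a source path with its destination path.
+for row in spark.read.csv(result.file_list_location).collect():
+    print(row[0], "->", row[1])
+```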
+
+#### Examples
+
+Full rewrite of the metadata paths of table `my_table`, from a source location in HDFS to a target location in an S3 bucket.
+This produces a new set of metadata files using the `s3a` prefix, written to the default staging location under the table's metadata directory:
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+  table => 'db.my_table',
+  source_prefix => 'hdfs://nn:8020/path/to/source_table',
+  target_prefix => 's3a://bucket/prefix/db.db/my_table'
+);
+```
+
+Incremental rewrite of a table's metadata path from a source location to a target location, between metadata versions
+`v2.metadata.json` and `v20.metadata.json`, with the rewritten files written to an explicit staging location:
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+  table => 'db.my_table',
+  source_prefix => 's3a://bucketOne/prefix/db.db/my_table',
+  target_prefix => 's3a://bucketTwo/prefix/db.db/my_table',
+  start_version => 'v2.metadata.json',
+  end_version => 'v20.metadata.json',
+  staging_location => 's3a://bucketStaging/my_table'
+);
+```
+
+Once the rewrite completes, third-party tools (e.g., [DistCp](https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html)) can be used to copy the newly created metadata files and data files to the target location.
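+
+The file list can also drive the copy directly. A minimal PySpark sketch (an illustration, not part of the procedure) using Hadoop's `FileUtil` through the JVM gateway, assuming the `result` row from the earlier sketch and that both filesystems are reachable from the cluster:
+
+```python
+jvm = spark.sparkContext._jvm
+conf = spark.sparkContext._jsc.hadoopConfiguration()
+
+# Copy every (source, destination) pair listed by the procedure.
+for row in spark.read.csv(result.file_list_location).collect():
+    src = jvm.org.apache.hadoop.fs.Path(row[0])
+    dst = jvm.org.apache.hadoop.fs.Path(row[1])
+    jvm.org.apache.hadoop.fs.FileUtil.copy(
+        src.getFileSystem(conf), src,
+        dst.getFileSystem(conf), dst,
+        False,  # keep the source files
+        conf)
+```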
+
+Lastly, after referential integrity check on copied files, [register_table](#register_table) procedure can be used to register copied table in the target location with catalog.
Review Comment:
```suggestion
Lastly, after a referential integrity check on the copied files, the [register_table](#register_table) procedure can be used to register the copied table in the target location with a catalog.
```
This also needs more info on how to do the integrity check.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]