This is an automated email from the ASF dual-hosted git repository.

openinx pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new 50928b4  Docs:  Replace to use the raw markdown links (#1714)
50928b4 is described below

commit 50928b40c383bf9e60cedc2937a72c24b4b1f4ed
Author: HeChuan <[email protected]>
AuthorDate: Wed Nov 4 17:00:02 2020 +0800

    Docs:  Replace to use the raw markdown links (#1714)
---
 site/docs/api.md                        | 14 +++++++-------
 site/docs/configuration.md              |  2 +-
 site/docs/evolution.md                  |  2 +-
 site/docs/flink.md                      |  4 ++--
 site/docs/getting-started.md            | 24 ++++++++++++------------
 site/docs/hive.md                       |  2 +-
 site/docs/java-api-quickstart.md        | 10 +++++-----
 site/docs/maintenance.md                | 10 +++++-----
 site/docs/reliability.md                |  2 +-
 site/docs/schemas.md                    |  2 +-
 site/docs/spark-structured-streaming.md |  6 +++---
 site/docs/spark.md                      | 12 ++++++------
 site/docs/spec.md                       |  4 ++--
 site/docs/terms.md                      |  6 +++---
 14 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/site/docs/api.md b/site/docs/api.md
index 17e127d..f1cd9bf 100644
--- a/site/docs/api.md
+++ b/site/docs/api.md
@@ -25,11 +25,11 @@ Table metadata and operations are accessed through the `Table` interface. This i
 
 ### Table metadata
 
-The [`Table` interface](/javadoc/master/index.html?org/apache/iceberg/Table.html) provides access table metadata:
+The [`Table` interface](./javadoc/master/index.html?org/apache/iceberg/Table.html) provides access table metadata:
 
-* `schema` returns the current table [schema](../schemas)
+* `schema` returns the current table [schema](./schemas.md)
 * `spec` returns the current table partition spec
-* `properties` returns a map of key-value [properties](../configuration)
+* `properties` returns a map of key-value [properties](./configuration.md)
 * `currentSnapshot` returns the current table snapshot
 * `snapshots` returns all valid snapshots for the table
 * `snapshot(id)` returns a specific snapshot by ID
@@ -73,7 +73,7 @@ Use `asOfTime` or `useSnapshot` to configure the table snapshot for time travel
 
 ### Update operations
 
-`Table` also exposes operations that update the table. These operations use a builder pattern, [`PendingUpdate`](/javadoc/master/index.html?org/apache/iceberg/PendingUpdate.html), that commits when `PendingUpdate#commit` is called.
+`Table` also exposes operations that update the table. These operations use a builder pattern, [`PendingUpdate`](./javadoc/master/index.html?org/apache/iceberg/PendingUpdate.html), that commits when `PendingUpdate#commit` is called.
 
 For example, updating the table schema is done by calling `updateSchema`, adding updates to the builder, and finally calling `commit` to commit the pending changes to the table:
 
@@ -115,7 +115,7 @@ t.commitTransaction();
 
 ## Types
 
-Iceberg data types are located in the [`org.apache.iceberg.types` package](/javadoc/master/index.html?org/apache/iceberg/types/package-summary.html).
+Iceberg data types are located in the [`org.apache.iceberg.types` package](./javadoc/master/index.html?org/apache/iceberg/types/package-summary.html).
 
 ### Primitives
 
@@ -131,7 +131,7 @@ Types.DecimalType.of(9, 2) // decimal(9, 2)
 
 Structs, maps, and lists are created using factory methods in type classes.
 
-Like struct fields, map keys or values and list elements are tracked as nested fields. Nested fields track [field IDs](../evolution#correctness) and nullability.
+Like struct fields, map keys or values and list elements are tracked as nested fields. Nested fields track [field IDs](./evolution.md#correctness) and nullability.
 
 Struct fields are created using `NestedField.optional` or `NestedField.required`. Map value and list element nullability is set in the map and list factory methods.
 
@@ -157,7 +157,7 @@ ListType list = ListType.ofRequired(1, IntegerType.get());
 
 ## Expressions
 
-Iceberg's expressions are used to configure table scans. To create expressions, use the factory methods in [`Expressions`](/javadoc/master/index.html?org/apache/iceberg/expressions/Expressions.html).
+Iceberg's expressions are used to configure table scans. To create expressions, use the factory methods in [`Expressions`](./javadoc/master/index.html?org/apache/iceberg/expressions/Expressions.html).
 
 Supported predicate expressions are:
 
diff --git a/site/docs/configuration.md b/site/docs/configuration.md
index 7a777bc..f0aaa9b 100644
--- a/site/docs/configuration.md
+++ b/site/docs/configuration.md
@@ -82,7 +82,7 @@ The following properties from the Hadoop configuration are used by the Hive Meta
 
 ### Catalogs
 
-[Spark catalogs](../spark#configuring-catalogs) are configured using Spark session properties.
+[Spark catalogs](./spark.md#configuring-catalogs) are configured using Spark session properties.
 
 A catalog is created and named by adding a property `spark.sql.catalog.(catalog-name)` with an implementation class for its value.
 
diff --git a/site/docs/evolution.md b/site/docs/evolution.md
index 343b77d..3796eee 100644
--- a/site/docs/evolution.md
+++ b/site/docs/evolution.md
@@ -59,6 +59,6 @@ When you evolve a partition spec, the old data written with an earlier spec rema
 ![Partition evolution diagram](img/partition-spec-evolution.png)
 *The data for 2008 is partitioned by month. Starting from 2009 the table is updated so that the data is instead partitioned by day. Both partitioning layouts are able to coexist in the same table.*
 
-Iceberg uses [hidden partitioning](../partitioning), so you don't *need* to write queries for a specific partition layout to be fast. Instead, you can write queries that select the data you need, and Iceberg automatically prunes out files that don't contain matching data.
+Iceberg uses [hidden partitioning](./partitioning.md), so you don't *need* to write queries for a specific partition layout to be fast. Instead, you can write queries that select the data you need, and Iceberg automatically prunes out files that don't contain matching data.
 
 Partition evolution is a metadata operation and does not eagerly rewrite files.
diff --git a/site/docs/flink.md b/site/docs/flink.md
index 917bd3c..1dc34cd 100644
--- a/site/docs/flink.md
+++ b/site/docs/flink.md
@@ -149,7 +149,7 @@ Table create commands support the most commonly used [flink create clauses](http
 
 * `PARTITION BY (column1, column2, ...)` to configure partitioning, apache flink does not yet support hidden partitioning.
 * `COMMENT 'table document'` to set a table description.
-* `WITH ('key'='value', ...)` to set [table configuration](../configuration) which will be stored in apache iceberg table properties.
+* `WITH ('key'='value', ...)` to set [table configuration](./configuration.md) which will be stored in apache iceberg table properties.
 
 Currently, it does not support computed column, primary key and watermark definition etc.
 
@@ -292,7 +292,7 @@ env.execute("Test Iceberg DataStream");
 
 ## Inspecting tables.
 
-Iceberg does not support inspecting table in flink sql now, we need to use [iceberg's Java API](../api) to read iceberg's meta data to get those table information.
+Iceberg does not support inspecting table in flink sql now, we need to use [iceberg's Java API](./api.md) to read iceberg's meta data to get those table information.
 
 ## Future improvement.
 
diff --git a/site/docs/getting-started.md b/site/docs/getting-started.md
index 4a780c5..a244579 100644
--- a/site/docs/getting-started.md
+++ b/site/docs/getting-started.md
@@ -19,7 +19,7 @@
 
 ## Using Iceberg in Spark 3
 
-The latest version of Iceberg is [0.9.1](../releases).
+The latest version of Iceberg is [0.9.1](./releases.md).
 
 To use Iceberg in a Spark shell, use the `--packages` option:
 
@@ -34,7 +34,7 @@ spark-shell --packages org.apache.iceberg:iceberg-spark3-runtime:0.9.1
 
 ### Adding catalogs
 
-Iceberg comes with [catalogs](../spark#configuring-catalogs) that enable SQL commands to manage tables and load them by name. Catalogs are configured using properties under `spark.sql.catalog.(catalog_name)`.
+Iceberg comes with [catalogs](./spark.md#configuring-catalogs) that enable SQL commands to manage tables and load them by name. Catalogs are configured using properties under `spark.sql.catalog.(catalog_name)`.
 
 This command creates a path-based catalog named `local` for tables under `$PWD/warehouse` and adds support for Iceberg tables to Spark's built-in catalog:
 
@@ -49,7 +49,7 @@ spark-sql --packages org.apache.iceberg:iceberg-spark3-runtime:0.9.1 \
 
 ### Creating a table
 
-To create your first Iceberg table in Spark, use the `spark-sql` shell or `spark.sql(...)` to run a [`CREATE TABLE`](../spark#create-table) command:
+To create your first Iceberg table in Spark, use the `spark-sql` shell or `spark.sql(...)` to run a [`CREATE TABLE`](./spark.md#create-table) command:
 
 ```sql
 -- local is the path-based catalog defined above
@@ -58,21 +58,21 @@ CREATE TABLE local.db.table (id bigint, data string) USING iceberg
 
 Iceberg catalogs support the full range of SQL DDL commands, including:
 
-* [`CREATE TABLE ... PARTITIONED BY`](../spark#create-table)
-* [`CREATE TABLE ... AS SELECT`](../spark#create-table-as-select)
-* [`ALTER TABLE`](../spark#alter-table)
-* [`DROP TABLE`](../spark#drop-table)
+* [`CREATE TABLE ... PARTITIONED BY`](./spark.md#create-table)
+* [`CREATE TABLE ... AS SELECT`](./spark.md#create-table-as-select)
+* [`ALTER TABLE`](./spark.md#alter-table)
+* [`DROP TABLE`](./spark.md#drop-table)
 
 ### Writing
 
-Once your table is created, insert data using [`INSERT INTO`](../spark#insert-into):
+Once your table is created, insert data using [`INSERT INTO`](./spark.md#insert-into):
 
 ```sql
 INSERT INTO local.db.table VALUES (1, 'a'), (2, 'b'), (3, 'c');
 INSERT INTO local.db.table SELECT id, data FROM source WHERE length(data) = 1;
 ```
 
-Iceberg supports writing DataFrames using the new [v2 DataFrame write API](../spark#writing-with-dataframes):
+Iceberg supports writing DataFrames using the new [v2 DataFrame write API](./spark.md#writing-with-dataframes):
 
 ```scala
 spark.table("source").select("id", "data")
@@ -91,7 +91,7 @@ FROM local.db.table
 GROUP BY data
 ```
 
-SQL is also the recommended way to [inspect tables](../spark#inspecting-tables). To view all of the snapshots in a table, use the `snapshots` metadata table:
+SQL is also the recommended way to [inspect tables](./spark.md#inspecting-tables). To view all of the snapshots in a table, use the `snapshots` metadata table:
 ```sql
 SELECT * FROM local.db.table.snapshots
 ```
@@ -106,7 +106,7 @@ SELECT * FROM local.db.table.snapshots
 
+-------------------------+----------------+-----------+-----------+----------------------------------------------------+-----+
 ```
 
-[DataFrame reads](../spark#querying-with-dataframes) are supported and can now reference tables by name using `spark.table`:
+[DataFrame reads](./spark.md#querying-with-dataframes) are supported and can now reference tables by name using `spark.table`:
 
 ```scala
 val df = spark.table("local.db.table")
@@ -115,4 +115,4 @@ df.count()
 
 ### Next steps
 
-Next, you can learn more about [Iceberg tables in Spark](../spark), or about the [Iceberg Table API](../api).
+Next, you can learn more about [Iceberg tables in Spark](./spark.md), or about the [Iceberg Table API](./api.md).
diff --git a/site/docs/hive.md b/site/docs/hive.md
index d589c9f..d43d33a 100644
--- a/site/docs/hive.md
+++ b/site/docs/hive.md
@@ -21,7 +21,7 @@
 Iceberg supports the reading of Iceberg tables from [Hive](https://hive.apache.org) by using a [StorageHandler](https://cwiki.apache.org/confluence/display/Hive/StorageHandlers).
 Please note that only Hive 2.x versions are currently supported.
 
 ### Table creation
-This section explains the various steps needed in order to overlay a Hive table "on top of" an existing Iceberg table. Iceberg tables are created using either a [`Catalog`](/javadoc/master/index.html?org/apache/iceberg/catalog/Catalog.html) or an implementation of the [`Tables`](/javadoc/master/index.html?org/apache/iceberg/Tables.html) interface and Hive needs to be configured accordingly to read data from these different types of table.
+This section explains the various steps needed in order to overlay a Hive table "on top of" an existing Iceberg table. Iceberg tables are created using either a [`Catalog`](./javadoc/master/index.html?org/apache/iceberg/catalog/Catalog.html) or an implementation of the [`Tables`](./javadoc/master/index.html?org/apache/iceberg/Tables.html) interface and Hive needs to be configured accordingly to read data from these different types of table.
 
 #### Add the Iceberg Hive Runtime jar file to the Hive classpath
 Regardless of the table type, the `HiveIcebergStorageHandler` and supporting classes need to be made available on Hive's classpath. These are provided by the `iceberg-hive-runtime` jar file. For example, if using the Hive shell, this can be achieved by issuing a statement like so:
diff --git a/site/docs/java-api-quickstart.md b/site/docs/java-api-quickstart.md
index 2b7045c..cc1c8c7 100644
--- a/site/docs/java-api-quickstart.md
+++ b/site/docs/java-api-quickstart.md
@@ -19,7 +19,7 @@
 
 ## Create a table
 
-Tables are created using either a [`Catalog`](/javadoc/master/index.html?org/apache/iceberg/catalog/Catalog.html) or an implementation of the [`Tables`](/javadoc/master/index.html?org/apache/iceberg/Tables.html) interface.
+Tables are created using either a [`Catalog`](./javadoc/master/index.html?org/apache/iceberg/catalog/Catalog.html) or an implementation of the [`Tables`](./javadoc/master/index.html?org/apache/iceberg/Tables.html) interface.
 
 ### Using a Hive catalog
 
@@ -108,9 +108,9 @@ Spark uses both `HiveCatalog` and `HadoopTables` to load tables. Hive is used wh
 
 To read and write to tables from Spark see:
 
-* [Reading a table in Spark](../spark#reading-an-iceberg-table)
-* [Appending to a table in Spark](../spark#appending-data)
-* [Overwriting data in a table in Spark](../spark#overwriting-data)
+* [Reading a table in Spark](./spark.md#reading-an-iceberg-table)
+* [Appending to a table in Spark](./spark.md#appending-data)
+* [Overwriting data in a table in Spark](./spark.md#overwriting-data)
 
 
 ## Schemas
@@ -175,4 +175,4 @@ PartitionSpec spec = PartitionSpec.builderFor(schema)
       .build();
 ```
 
-For more information on the different partition transforms that Iceberg offers, visit [this page](../spec#partitioning).
+For more information on the different partition transforms that Iceberg offers, visit [this page](./spec.md#partitioning).
diff --git a/site/docs/maintenance.md b/site/docs/maintenance.md
index 27ef66e..22d4de7 100644
--- a/site/docs/maintenance.md
+++ b/site/docs/maintenance.md
@@ -26,7 +26,7 @@
 
 Each write to an Iceberg table creates a new _snapshot_, or version, of a table. Snapshots can be used for time-travel queries, or the table can be rolled back to any valid snapshot.
 
-Snapshots accumulate until they are expired by the [`expireSnapshots`](/javadoc/master/org/apache/iceberg/Table.html#expireSnapshots--) operation. Regularly expiring snapshots is recommended to delete data files that are no longer needed, and to keep the size of table metadata small.
+Snapshots accumulate until they are expired by the [`expireSnapshots`](./javadoc/master/org/apache/iceberg/Table.html#expireSnapshots--) operation. Regularly expiring snapshots is recommended to delete data files that are no longer needed, and to keep the size of table metadata small.
 
 This example expires snapshots that are older than 1 day:
 
@@ -38,7 +38,7 @@ table.expireSnapshots()
      .commit();
 ```
 
-See the [`ExpireSnapshots` Javadoc](/javadoc/master/org/apache/iceberg/ExpireSnapshots.html) to see more configuration options.
+See the [`ExpireSnapshots` Javadoc](./javadoc/master/org/apache/iceberg/ExpireSnapshots.html) to see more configuration options.
 
 There is also a Spark action that can run table expiration in parallel for large tables:
 
@@ -83,7 +83,7 @@ Actions.forTable(table)
     .execute();
 ```
 
-See the [RemoveOrphanFilesAction Javadoc](/javadoc/master/org/apache/iceberg/actions/RemoveOrphanFilesAction.html) to see more configuration options.
+See the [RemoveOrphanFilesAction Javadoc](./javadoc/master/org/apache/iceberg/actions/RemoveOrphanFilesAction.html) to see more configuration options.
 
 This action may take a long time to finish if you have lots of files in data and metadata directories. It is recommended to execute this periodically, but you may not need to execute this often.
 
@@ -119,7 +119,7 @@ Actions.forTable(table).rewriteDataFiles()
 
 The `files` metadata table is useful for inspecting data file sizes and determining when to compact partitons.
 
-See the [`RewriteDataFilesAction` Javadoc](/javadoc/master/org/apache/iceberg/actions/RewriteDataFilesAction.html) to see more configuration options.
+See the [`RewriteDataFilesAction` Javadoc](./javadoc/master/org/apache/iceberg/actions/RewriteDataFilesAction.html) to see more configuration options.
 
 ### Rewrite manifests
 
@@ -139,4 +139,4 @@ table.rewriteManifests()
     .commit();
 ```
 
-See the [`RewriteManifestsAction` Javadoc](/javadoc/master/org/apache/iceberg/actions/RewriteManifestsAction.html) to see more configuration options.
+See the [`RewriteManifestsAction` Javadoc](./javadoc/master/org/apache/iceberg/actions/RewriteManifestsAction.html) to see more configuration options.
diff --git a/site/docs/reliability.md b/site/docs/reliability.md
index b329458..9cd2e3b 100644
--- a/site/docs/reliability.md
+++ b/site/docs/reliability.md
@@ -21,7 +21,7 @@ Iceberg was designed to solve correctness problems that affect Hive tables runni
 
 Hive tables track data files using both a central metastore for partitions and a file system for individual files. This makes atomic changes to a table's contents impossible, and eventually consistent stores like S3 may return incorrect results due to the use of listing files to reconstruct the state of a table. It also requires job planning to make many slow listing calls: O(n) with the number of partitions.
 
-Iceberg tracks the complete list of data files in each [snapshot](../terms#snapshot) using a persistent tree structure. Every write or delete produces a new snapshot that reuses as much of the previous snapshot's metadata tree as possible to avoid high write volumes.
+Iceberg tracks the complete list of data files in each [snapshot](./terms.md#snapshot) using a persistent tree structure. Every write or delete produces a new snapshot that reuses as much of the previous snapshot's metadata tree as possible to avoid high write volumes.
 
 Valid snapshots in an Iceberg table are stored in the table metadata file, along with a reference to the current snapshot. Commits replace the path of the current table metadata file using an atomic operation. This ensures that all updates to table data and metadata are atomic, and is the basis for [serializable isolation](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable).
 
diff --git a/site/docs/schemas.md b/site/docs/schemas.md
index eaa4201..918bc67 100644
--- a/site/docs/schemas.md
+++ b/site/docs/schemas.md
@@ -38,4 +38,4 @@ Iceberg tables support the following types:
 | **`list<E>`**      | A list with elements of any data type                                   |                                                  |
 | **`map<K, V>`**    | A map with keys and values of any data type                             |                                                  |
 
-Iceberg tracks each field in a table schema using an ID that is never reused in a table. See [correctness guarantees](../evolution#correctness) for more information.
+Iceberg tracks each field in a table schema using an ID that is never reused in a table. See [correctness guarantees](./evolution.md#correctness) for more information.
diff --git a/site/docs/spark-structured-streaming.md b/site/docs/spark-structured-streaming.md
index 9274a4b..0be5191 100644
--- a/site/docs/spark-structured-streaming.md
+++ b/site/docs/spark-structured-streaming.md
@@ -72,13 +72,13 @@ documents how to configure the interval.
 
 ### Expire old snapshots
 
-Each micro-batch written to a table produces a new snapshot, which are tracked in table metadata until they are expired to remove the metadata and any data files that are no longer needed. Snapshots accumulate quickly with frequent commits, so it is highly recommended that tables written by streaming queries are [regularly maintained](../maintenance#expire-snapshots).
+Each micro-batch written to a table produces a new snapshot, which are tracked in table metadata until they are expired to remove the metadata and any data files that are no longer needed. Snapshots accumulate quickly with frequent commits, so it is highly recommended that tables written by streaming queries are [regularly maintained](./maintenance.md#expire-snapshots).
 
 ### Compacting data files
 
-The amount of data written in a micro batch is typically small, which can cause the table metadata to track lots of small files. [Compacting small files into larger files](../maintenance#compact-data-files) reduces the metadata needed by the table, and increases query efficiency.
+The amount of data written in a micro batch is typically small, which can cause the table metadata to track lots of small files. [Compacting small files into larger files](./maintenance.md#compact-data-files) reduces the metadata needed by the table, and increases query efficiency.
 
 ### Rewrite manifests
 
 To optimize write latency on streaming workload, Iceberg may write the new snapshot with a "fast" append that does not automatically compact manifests.
-This could lead lots of small manifest files. Manifests can be [rewritten to optimize queries and to compact](../maintenance#rewrite-manifests).
+This could lead lots of small manifest files. Manifests can be [rewritten to optimize queries and to compact](./maintenance.md#rewrite-manifests).
diff --git a/site/docs/spark.md b/site/docs/spark.md
index 46dcb37..0433d1f 100644
--- a/site/docs/spark.md
+++ b/site/docs/spark.md
@@ -37,7 +37,7 @@ Iceberg uses Apache Spark's DataSourceV2 API for data source and catalog impleme
 
 ## Configuring catalogs
 
-Spark 3.0 adds an API to plug in table catalogs that are used to load, create, and manage Iceberg tables. Spark catalogs are configured by setting [Spark properties](../configuration#catalogs) under `spark.sql.catalog`.
+Spark 3.0 adds an API to plug in table catalogs that are used to load, create, and manage Iceberg tables. Spark catalogs are configured by setting [Spark properties](./configuration.md#catalogs) under `spark.sql.catalog`.
 
 This creates an Iceberg catalog named `hive_prod` that loads tables from a Hive metastore:
 
@@ -93,7 +93,7 @@ This configuration can use same Hive Metastore for both Iceberg and non-Iceberg
 ## DDL commands
 
 !!! Note
-    Spark 2.4 can't create Iceberg tables with DDL, instead use the [Iceberg API](../java-api-quickstart).
+    Spark 2.4 can't create Iceberg tables with DDL, instead use the [Iceberg API](./java-api-quickstart.md).
 
 ### `CREATE TABLE`
 
@@ -113,7 +113,7 @@ Table create commands, including CTAS and RTAS, support the full range of Spark
 * `PARTITION BY (partition-expressions)` to configure partitioning
 * `LOCATION '(fully-qualified-uri)'` to set the table location
 * `COMMENT 'table documentation'` to set a table description
-* `TBLPROPERTIES ('key'='value', ...)` to set [table configuration](../configuration)
+* `TBLPROPERTIES ('key'='value', ...)` to set [table configuration](./configuration.md)
 
 Create commands may also set the default format with the `USING` clause. This is only supported for `SparkCatalog` because Spark handles the `USING` clause differently for the built-in catalog.
 
@@ -130,7 +130,7 @@ USING iceberg
 PARTITIONED BY (category)
 ```
 
-The `PARTITIONED BY` clause supports transform expressions to create [hidden partitions](../partitioning).
+The `PARTITIONED BY` clause supports transform expressions to create [hidden partitions](./partitioning.md).
 
 ```sql
 CREATE TABLE prod.db.sample (
@@ -206,7 +206,7 @@ ALTER TABLE prod.db.sample SET TBLPROPERTIES (
 )
 ```
 
-Iceberg uses table properties to control table behavior. For a list of available properties, see [Table configuration](../configuration).
+Iceberg uses table properties to control table behavior. For a list of available properties, see [Table configuration](./configuration.md).
 
 `UNSET` is used to remove properties:
 
@@ -361,7 +361,7 @@ The partitions that will be replaced by `INSERT OVERWRITE` depends on Spark's pa
 
 !!! Warning
     Spark 3.0.0 has a correctness bug that affects dynamic `INSERT OVERWRITE` with hidden partitioning, [SPARK-32168][spark-32168].
-    For tables with [hidden partitions](../partitioning), wait for Spark 3.0.1.
+    For tables with [hidden partitions](./partitioning.md), wait for Spark 3.0.1.
 
 [spark-32168]: https://issues.apache.org/jira/browse/SPARK-32168
 
diff --git a/site/docs/spec.md b/site/docs/spec.md
index bdfc91d..367e10b 100644
--- a/site/docs/spec.md
+++ b/site/docs/spec.md
@@ -509,7 +509,7 @@ Each version of table metadata is stored in a metadata folder under the table’
 
 Notes:
 
-1. The file system table scheme is implemented in [HadoopTableOperations](/javadoc/master/index.html?org/apache/iceberg/hadoop/HadoopTableOperations.html).
+1. The file system table scheme is implemented in [HadoopTableOperations](./javadoc/master/index.html?org/apache/iceberg/hadoop/HadoopTableOperations.html).
 
 #### Metastore Tables
 
@@ -525,7 +525,7 @@ Each version of table metadata is stored in a metadata folder under the table’
 
 Notes:
 
-1. The metastore table scheme is partly implemented in [BaseMetastoreTableOperations](/javadoc/master/index.html?org/apache/iceberg/BaseMetastoreTableOperations.html).
+1. The metastore table scheme is partly implemented in [BaseMetastoreTableOperations](./javadoc/master/index.html?org/apache/iceberg/BaseMetastoreTableOperations.html).
 
 
 ### Delete Formats
diff --git a/site/docs/terms.md b/site/docs/terms.md
index 50bea8f..f48f2fe 100644
--- a/site/docs/terms.md
+++ b/site/docs/terms.md
@@ -33,11 +33,11 @@ Each manifest file in the manifest list is stored with information about its con
 
 A **manifest file** is a metadata file that lists a subset of data files that make up a snapshot.
 
-Each data file in a manifest is stored with a [partition tuple](#partition-tuple), column-level stats, and summary information used to prune splits during [scan planning](../performance#scan-planning).
+Each data file in a manifest is stored with a [partition tuple](#partition-tuple), column-level stats, and summary information used to prune splits during [scan planning](./performance.md#scan-planning).
 
 ### Partition spec
 
-A **partition spec** is a description of how to [partition](../partitioning) data in a table.
+A **partition spec** is a description of how to [partition](./partitioning.md) data in a table.
 
 A spec consists of a list of source columns and transforms. A transform produces a partition value from a source value. For example, `date(ts)` produces the date associated with a timestamp column named `ts`.
 
@@ -55,5 +55,5 @@ The **snapshot log** is a metadata log of how the table's current snapshot has c
 
 The log is a list of timestamp and ID pairs: when the current snapshot changed and the snapshot ID the current snapshot was changed to.
 
-The snapshot log is stored in [table metadata as `snapshot-log`](../spec#table-metadata-fields).
+The snapshot log is stored in [table metadata as `snapshot-log`](./spec.md#table-metadata-fields).
 
