This is an automated email from the ASF dual-hosted git repository.

blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new 5cf4248  Docs: Fixes broken links to old spark doc page (#2801)
5cf4248 is described below

commit 5cf4248ecc56d531e3b220c7b3631fef22807bf0
Author: Russell Spitzer <[email protected]>
AuthorDate: Fri Jul 9 13:50:17 2021 -0500

    Docs: Fixes broken links to old spark doc page (#2801)
---
 site/docs/aws.md                        | 2 +-
 site/docs/evolution.md                  | 4 ++--
 site/docs/spark-structured-streaming.md | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/site/docs/aws.md b/site/docs/aws.md
index ca91ddd..96207a7 100644
--- a/site/docs/aws.md
+++ b/site/docs/aws.md
@@ -163,7 +163,7 @@ an Iceberg table is stored as a [Glue Table](https://docs.aws.amazon.com/glue/la
 and every Iceberg table version is stored as a [Glue TableVersion](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-catalog-tables.html#aws-glue-api-catalog-tables-TableVersion).
 
 You can start using Glue catalog by specifying the `catalog-impl` as `org.apache.iceberg.aws.glue.GlueCatalog`,
 just like what is shown in the [enabling AWS integration](#enabling-aws-integration) section above.
-More details about loading the catalog can be found in individual engine pages, such as [Spark](../spark/#loading-a-custom-catalog) and [Flink](../flink/#creating-catalogs-and-using-catalogs).
+More details about loading the catalog can be found in individual engine pages, such as [Spark](../spark-configuration/#loading-a-custom-catalog) and [Flink](../flink/#creating-catalogs-and-using-catalogs).
 
 ### Glue Catalog ID
 There is a unique Glue metastore in each AWS account and each AWS region.
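
> Editor's note: for readers following the links being fixed above, the Glue catalog registration the context lines describe is typically expressed as Spark configuration properties along these lines. This is a sketch based on the Iceberg AWS documentation; the catalog name `glue_catalog` and the warehouse path are illustrative placeholders, not part of this commit:

```
spark.sql.catalog.glue_catalog              = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.glue_catalog.catalog-impl = org.apache.iceberg.aws.glue.GlueCatalog
spark.sql.catalog.glue_catalog.warehouse    = s3://my-bucket/my/warehouse/path
```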
diff --git a/site/docs/evolution.md b/site/docs/evolution.md
index 0b4bec2..624a986 100644
--- a/site/docs/evolution.md
+++ b/site/docs/evolution.md
@@ -74,7 +74,7 @@ sampleTable.updateSpec()
     .commit();
 ```
 
-Spark supports updating partition spec through its `ALTER TABLE` SQL statement, see more details in [Spark SQL](../spark/#alter-table-add-partition-field).
+Spark supports updating partition spec through its `ALTER TABLE` SQL statement, see more details in [Spark SQL](../spark-ddl/#alter-table-add-partition-field).
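
> Editor's note: the retargeted anchor documents Spark DDL of roughly this shape (a hedged sketch per the Iceberg Spark DDL docs; the table name and column are placeholders):

```sql
ALTER TABLE prod.db.sample ADD PARTITION FIELD days(ts)
```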
 
 ## Sort order evolution
 
@@ -95,4 +95,4 @@ sampleTable.replaceSortOrder()
    .commit();
 ```
 
-Spark supports updating sort order through its `ALTER TABLE` SQL statement, see more details in [Spark SQL](../spark/#alter-table-write-ordered-by).
+Spark supports updating sort order through its `ALTER TABLE` SQL statement, see more details in [Spark SQL](../spark-ddl/#alter-table-write-ordered-by).
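
> Editor's note: the retargeted `WRITE ORDERED BY` anchor documents statements of roughly this shape (a hedged sketch per the Iceberg Spark DDL docs; the table and column names are placeholders):

```sql
ALTER TABLE prod.db.sample WRITE ORDERED BY category, id
```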
diff --git a/site/docs/spark-structured-streaming.md b/site/docs/spark-structured-streaming.md
index f4e54bd..b969dcb 100644
--- a/site/docs/spark-structured-streaming.md
+++ b/site/docs/spark-structured-streaming.md
@@ -53,14 +53,14 @@ Iceberg supports `append` and `complete` output modes:
 * `append`: appends the rows of every micro-batch to the table
 * `complete`: replaces the table contents every micro-batch
 
-The table should be created in prior to start the streaming query. Refer [SQL create table](/spark/#create-table)
+The table should be created in prior to start the streaming query. Refer [SQL create table](/spark-ddl/#create-table)
 on Spark page to see how to create the Iceberg table.
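
> Editor's note: the retargeted create-table anchor covers DDL of roughly this shape (a hedged sketch per the Iceberg Spark DDL docs; table name, schema, and partitioning are illustrative placeholders):

```sql
CREATE TABLE prod.db.events (
    id bigint,
    data string,
    ts timestamp)
USING iceberg
PARTITIONED BY (days(ts))
```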
 
 ### Writing against partitioned table
 
 Iceberg requires the data to be sorted according to the partition spec per task (Spark partition) in prior to write
 against partitioned table. For batch queries you're encouraged to do explicit sort to fulfill the requirement
-(see [here](/spark/#writing-against-partitioned-table)), but the approach would bring additional latency as
+(see [here](/spark-writes/#writing-to-partitioned-tables)), but the approach would bring additional latency as
 repartition and sort are considered as heavy operations for streaming workload. To avoid additional latency, you can
 enable fanout writer to eliminate the requirement.
 
