This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 49bfa8895c2 [Fix] Remove dead links in zh-CN 3.x (#3061)
49bfa8895c2 is described below
commit 49bfa8895c2228e5ded7baefd1495a7d961d99c6
Author: zhuwei <[email protected]>
AuthorDate: Fri Nov 7 00:18:55 2025 +0800
[Fix] Remove dead links in zh-CN 3.x (#3061)
---
.../version-3.x/ecosystem/observability/beats.md | 2 +-
.../version-3.x/ecosystem/observability/fluentbit.md | 2 +-
.../version-3.x/ecosystem/observability/logstash.md | 2 +-
.../separating-storage-compute/config-cluster.md | 2 +-
.../version-3.x/lakehouse/best-practices/doris-iceberg.md | 2 +-
.../version-3.x/lakehouse/best-practices/doris-paimon.md | 2 +-
.../version-3.x/lakehouse/catalog-overview.md | 2 +-
.../version-3.x/lakehouse/catalogs/hudi-catalog.md | 2 +-
.../version-3.x/lakehouse/file-analysis.md | 2 +-
.../version-3.x/lakehouse/lakehouse-overview.md | 2 +-
.../version-3.x/releasenotes/v1.1/release-1.1.0.md | 2 +-
.../version-3.x/releasenotes/v2.1/release-2.1.3.md | 2 +-
.../version-3.x/releasenotes/v2.1/release-2.1.4.md | 2 +-
.../version-3.x/releasenotes/v2.1/release-2.1.5.md | 2 +-
.../version-3.x/releasenotes/v2.1/release-2.1.6.md | 4 ++--
.../version-3.x/releasenotes/v2.1/release-2.1.7.md | 8 ++++----
.../version-3.x/releasenotes/v3.0/release-3.0.2.md | 2 +-
.../version-3.x/releasenotes/v3.0/release-3.0.3.md | 6 +++---
.../cluster-management/compute-management/CREATE-RESOURCE.md | 2 +-
.../cluster-management/storage-management/CREATE-STORAGE-VAULT.md | 4 ++--
.../ecosystem/doris-operator/doris-operator-overview.md | 2 +-
versioned_docs/version-3.x/ecosystem/hive-hll-udf.md | 2 +-
22 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/beats.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/beats.md
index 034ab005cb8..dd0c23c5547 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/beats.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/beats.md
@@ -11,7 +11,7 @@
The Beats Doris output plugin supports [Filebeat](https://github.com/elastic/beats/tree/master/filebeat), [Metricbeat](https://github.com/elastic/beats/tree/master/metricbeat), [Packetbeat](https://github.com/elastic/beats/tree/master/packetbeat), [Winlogbeat](https://github.com/elastic/beats/tree/master/winlogbeat), [Auditbeat](https://github.com/elastic/beats/tree/master/auditbeat), and [Heartbeat](https://github.com/elastic/beats/tree/master/heartbeat).
-The Beats Doris output plugin calls the [Doris Stream Load](../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
+The Beats Doris output plugin calls the [Doris Stream Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
Using the Beats Doris output plugin involves three main steps:
1. Download or build a Beats binary that includes the Doris output plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/fluentbit.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/fluentbit.md
index e80cd18147b..4f01b6ece32 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/fluentbit.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/fluentbit.md
@@ -7,7 +7,7 @@
[Fluent Bit](https://fluentbit.io/) is a fast log processor and forwarder. It supports custom output plugins for writing data to storage systems, and the Fluent Bit Doris Output Plugin is the plugin that outputs to Doris.
-The Fluent Bit Doris Output Plugin calls the [Doris Stream Load](../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
+The Fluent Bit Doris Output Plugin calls the [Doris Stream Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
Using the Fluent Bit Doris Output Plugin involves three main steps:
1. Download or build a Fluent Bit binary that includes the Doris Output Plugin
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/logstash.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/logstash.md
index 2cf263e7f86..20d8d0301e8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/logstash.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/observability/logstash.md
@@ -11,7 +11,7 @@
Logstash is a log ETL framework (collect, preprocess, and send to storage systems). It supports custom output plugins for writing data to storage systems, and the Logstash Doris output plugin is the plugin that outputs to Doris.
-The Logstash Doris output plugin calls the [Doris Stream Load](../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
+The Logstash Doris output plugin calls the [Doris Stream Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface to write data into Doris in real time, providing multi-threaded concurrency, retry on failure, customizable Stream Load formats and parameters, and write-throughput reporting.
Using the Logstash Doris output plugin involves three main steps:
1. Install the plugin into Logstash
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
index 07aba1d4183..704269248ee 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
@@ -273,4 +273,4 @@ Doris Operator uses a `ConfigMap` resource to mount the krb5.conf file and a `Secret`
  keytabPath: ${keytabPath}
```
${krb5ConfigMapName} is the name of the ConfigMap containing the `krb5.conf` file to use. ${keytabSecretName} is the name of the Secret containing the keytab file. ${keytabPath} is the path where the Secret is mounted into the container; this path is the directory containing the keytab file specified via `hadoop.kerberos.keytab` when creating the catalog. For creating a
- catalog, refer to the [Hive Catalog](../../../lakehouse/datalake-analytics/hive.md#catalog-配置) documentation.
+ catalog, refer to the [Hive Catalog](../../../lakehouse/catalogs/hive-catalog.mdx) documentation.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-iceberg.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-iceberg.md
index 14668f9e552..83dd8e76ac9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-iceberg.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-iceberg.md
@@ -36,7 +36,7 @@ Apache Doris provides native support for several core Iceberg features:
In the future, Apache Iceberg will serve as one of Apache Doris's native table engines, providing more complete analysis and management capabilities for lake-format data. Apache Doris will also gradually support more advanced Apache Iceberg features, including Update/Delete/Merge, sort-on-write, incremental data reads, and metadata management, to jointly build a unified, high-performance, real-time lakehouse platform.
-For more details, see [Iceberg Catalog](../catalogs/iceberg-catalog.md)
+For more details, see [Iceberg Catalog](../catalogs/iceberg-catalog)
## Usage Guide
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-paimon.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-paimon.md
index cc6a33dc007..37c10f92420 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-paimon.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/doris-paimon.md
@@ -35,7 +35,7 @@ Apache Paimon is a data lake format that innovatively combines the data lake format with the LS
This article walks through, in a Docker environment, how to quickly set up an Apache Doris + Apache Paimon test & demo environment and demonstrates how to use each feature.
-For more details, see [Paimon Catalog](../catalogs/paimon-catalog.md)
+For more details, see [Paimon Catalog](../catalogs/paimon-catalog)
## Usage Guide
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalog-overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalog-overview.md
index f57c3c319cf..c7a046826ef 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalog-overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalog-overview.md
@@ -24,7 +24,7 @@ There are two kinds of catalogs in Doris:
| Data integration | Zero-ETL approach: directly access different data sources to produce result data, or move data conveniently between data sources. |
| Data write-back | After processing data in Doris, write it back to external data sources. |
-This article uses [Iceberg Catalog](./catalogs/iceberg-catalog.mdx) as an example to introduce the basic operations on catalogs. For details on each catalog, see the corresponding catalog documentation.
+This article uses [Iceberg Catalog](./catalogs/iceberg-catalog) as an example to introduce the basic operations on catalogs. For details on each catalog, see the corresponding catalog documentation.
## Creating a Catalog
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hudi-catalog.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hudi-catalog.md
index 31958bbd446..532779e224e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hudi-catalog.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hudi-catalog.md
@@ -108,7 +108,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
## Basic Example
-Hudi Catalogs are created in the same way as Hive Catalogs. For more examples, see [Hive Catalog](./hive-catalog.md)
+Hudi Catalogs are created in the same way as Hive Catalogs. For more examples, see [Hive Catalog](./hive-catalog)
```sql
CREATE CATALOG hudi_hms PROPERTIES (
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/file-analysis.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/file-analysis.md
index d9256375d7f..6cdf4f9a7ce 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/file-analysis.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/file-analysis.md
@@ -13,7 +13,7 @@
* [HDFS](../sql-manual/sql-functions/table-valued-functions/hdfs.md): supports analyzing files on HDFS.
-* [FILE](../sql-manual/sql-functions/table-valued-functions/file.md): a unified table-valued function that supports reading S3/HDFS/Local files. (Supported since version 3.1.0.)
+* [FILE](../sql-manual/sql-functions/table-valued-functions/local.md): a unified table-valued function that supports reading S3/HDFS/Local files. (Supported since version 3.1.0.)
## Basic Usage
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/lakehouse-overview.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/lakehouse-overview.md
index 97789448782..8cc8f4c362c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/lakehouse-overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/lakehouse-overview.md
@@ -23,7 +23,7 @@ Through its extensible connector framework, Doris supports mainstream data systems and data formats
For lakehouse systems, Doris can obtain table schema and distribution information from metadata services such as Hive Metastore, AWS Glue, and Unity Catalog, perform sound query planning, and use its MPP architecture for distributed computation.
-For details, see the documentation of each catalog, such as [Iceberg Catalog](./catalogs/iceberg-catalog.md)
+For details, see the documentation of each catalog, such as [Iceberg Catalog](./catalogs/iceberg-catalog)
#### Extensible Connector Framework
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v1.1/release-1.1.0.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v1.1/release-1.1.0.md
index 88c577682c4..51ae336130b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v1.1/release-1.1.0.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v1.1/release-1.1.0.md
@@ -184,7 +184,7 @@ The String type is a new data type introduced in Apache Doris 0.15; in
GitHub forum: [https://github.com/apache/incubator-doris/discussions](https://github.com/apache/doris/discussions)
-Dev mailing list: [[email protected]]([email protected])
+Dev mailing list: [email protected]
## Acknowledgements
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.3.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.3.md
index 33454120eb6..c9fdd2eb269 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.3.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.3.md
@@ -18,7 +18,7 @@
Starting with version 2.1.3, Apache Doris supports DDL and DML operations on Hive. Users can create databases and tables in Hive directly through Apache Doris and write data into Hive tables with `INSERT INTO` statements. This allows users to perform complete data query and write operations on Hive through Apache Doris, further simplifying a unified lakehouse architecture.
-See the [documentation](../../lakehouse/datalake-building/hive-build)
+See the [documentation](../../lakehouse/catalogs/hive-catalog)
**2. Support building new asynchronous materialized views on top of existing asynchronous materialized views**
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.4.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.4.md
index 77703211cd7..f06dad44234 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.4.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.4.md
@@ -47,7 +47,7 @@
- **Support Paimon's native reader for handling Deletion Vectors:** Deletion Vectors are mainly used to mark or track which data has been deleted or marked for deletion, typically in scenarios that need to retain historical data; this optimization improves processing efficiency when large amounts of data are updated or deleted. [#35241](https://github.com/apache/doris/pull/35241)
-  For more information, see the documentation: [Data Lake Analytics - Paimon](../../lakehouse/catalogs/paimon-catalog.md)
+  For more information, see the documentation: [Data Lake Analytics - Paimon](../../lakehouse/catalogs/paimon-catalog)
- **Support using Resources in table-valued functions (TVFs)**: TVFs give Apache Doris the ability to query and analyze files on object storage or HDFS directly as tables. By referencing a Resource in a TVF, connection information does not have to be filled in repeatedly, improving usability. [#35139](https://github.com/apache/doris/pull/35139)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.5.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.5.md
index 902d519b7ff..40246eea3f2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.5.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.5.md
@@ -37,7 +37,7 @@
- The session variable `read_csv_empty_line_as_null` controls whether empty lines are ignored when reading CSV files. Empty lines are ignored by default; when set to true, an empty line is read as a row in which all columns are Null. [#37153](https://github.com/apache/doris/pull/37153)
-  - For more information, see the [documentation](../../lakehouse/datalake-analytics/hive?_highlight=compress_type).
+  - For more information, see the [documentation](../../lakehouse/catalogs/hive-catalog).
- Added a Presto-compatible output format for complex types. By setting `set serde_dialect="presto"`, the output format of complex types can be kept consistent with Presto, enabling smooth migration of Presto workloads. [#37253](https://github.com/apache/doris/pull/37253)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.6.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.6.md
index cae7f2e65f5..e4a59665b99 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.6.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.6.md
@@ -37,7 +37,7 @@
- Implemented write-back for Iceberg tables.
-  - For more information, see the documentation Data Lake Building - [Iceberg](../../lakehouse/datalake-building/iceberg-build)
+  - For more information, see the documentation Data Lake Building - [Iceberg](../../lakehouse/catalogs/iceberg-catalog)
- Enhanced SQL blocking rules to support blocking queries against external tables.
@@ -100,7 +100,7 @@
- Revamped the external table metadata caching mechanism.
-  - For more information, see the [Metadata Cache](../../lakehouse/metacache) documentation.
+  - For more information, see the [Metadata Cache](../../lakehouse/meta-cache) documentation.
- Added the session variable `keep_carriage_return`, disabled by default. When reading Hive Text format tables, both `\r\n` and `\n` are treated as line breaks by default. [#38099](https://github.com/apache/doris/pull/38099)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.7.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.7.md
index 277aee937c6..09e6a623f82 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.7.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v2.1/release-2.1.7.md
@@ -56,13 +56,13 @@
### Lakehouse
- Support writing data to Hive Text format tables. [#40537](https://github.com/apache/doris/pull/40537)
-  - For more information, see the [Building a Data Lake with Hive](../../lakehouse/datalake-building/hive-build/) documentation
+  - For more information, see the [Building a Data Lake with Hive](../../lakehouse/catalogs/hive-catalog) documentation
- Access MaxCompute data via the MaxCompute Open Storage API. [#41610](https://github.com/apache/doris/pull/41610)
-  - For more information, see the [MaxCompute](../../lakehouse/database/max-compute/) documentation
+  - For more information, see the [MaxCompute](../../lakehouse/catalogs/maxcompute-catalog) documentation
- Support Paimon DLF Catalog. [#41694](https://github.com/apache/doris/pull/41694)
-  - For more information, see the [Paimon Catalog](../../lakehouse/datalake-analytics/paimon/) documentation
+  - For more information, see the [Paimon Catalog](../../lakehouse/catalogs/paimon-catalog) documentation
- Added the `table$partitions` syntax for directly querying Hive partition information [#41230](https://github.com/apache/doris/pull/41230)
-  - For more information, see the [Analyzing Data Lakes with Hive](../../lakehouse/datalake-analytics/hive/) documentation
+  - For more information, see the [Analyzing Data Lakes with Hive](../../lakehouse/catalogs/hive-catalog) documentation
- Support reading Parquet files with brotli compression. [#42162](https://github.com/apache/doris/pull/42162)
- Support reading the DECIMAL 256 type in Parquet files. [#42241](https://github.com/apache/doris/pull/42241)
- Support reading Hive tables in OpenCsvSerde format. [#42939](https://github.com/apache/doris/pull/42939)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.2.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.2.md
index a34edd3623f..fc2f3141862 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.2.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.2.md
@@ -44,7 +44,7 @@
### Lakehouse
-- Added Lakesoul Catalog. [Apache Doris Docs](../../lakehouse/datalake-analytics/lakesoul)
+- Added Lakesoul Catalog. [Apache Doris Docs](../../lakehouse/catalogs/lakesoul-catalog)
- Added the system table `catalog_meta_cache_statistics` for viewing the usage of the various metadata caches in External Catalogs. [#40155](https://github.com/apache/doris/pull/40155)
### Query Optimizer
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.3.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.3.md
index 8c8c4513a4e..745403763ff 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.3.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/releasenotes/v3.0/release-3.0.3.md
@@ -26,11 +26,11 @@
- Added the `table$partition` syntax for querying Hive table partition information. [#40774](https://github.com/apache/doris/pull/40774)
-  - [See the documentation](../../lakehouse/datalake-analytics/hive#查询-hive-分区)
+  - [See the documentation](../../lakehouse/catalogs/hive-catalog)
- Support creating Hive tables in Text format. [#41860](https://github.com/apache/doris/pull/41860) [#42175](https://github.com/apache/doris/pull/42175)
-  - [See the documentation](../../lakehouse/datalake-building/hive-build#table)
+  - [See the documentation](../../lakehouse/catalogs/hive-catalog)
### Asynchronous Materialized Views
@@ -77,7 +77,7 @@
- Paimon Catalog supports Alibaba Cloud DLF and OSS-HDFS storage. [#41247](https://github.com/apache/doris/pull/41247) [#42585](https://github.com/apache/doris/pull/42585)
-  - [See the documentation](../../lakehouse/datalake-analytics/paimon#基于-aliyun-dlf-创建-catalog)
+  - [See the documentation](../../lakehouse/catalogs/paimon-catalog)
- Support reading Hive tables in OpenCSV format. [#42257](https://github.com/apache/doris/pull/42257) [#42942](https://github.com/apache/doris/pull/42942)
- Improved the performance of accessing the `information_schema.columns` table in External Catalogs. [#41659](https://github.com/apache/doris/pull/41659) [#41962](https://github.com/apache/doris/pull/41962)
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE.md
index aea2d339c1f..7774fecb1b9 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE.md
@@ -187,7 +187,7 @@ The HDFS-related parameters are as follows:
**6. Create an HMS resource**
-   An HMS resource is used for an [hms catalog](../../../../lakehouse/datalake-analytics/hive)
+   An HMS resource is used for an [hms catalog](../../../../lakehouse/catalogs/hive-catalog)
```sql
CREATE RESOURCE hms_resource PROPERTIES (
'type'='hms',
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
index 6d15b6596f0..513e7855258 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
@@ -47,7 +47,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] <`vault_name`> [ <`properties`> ]
1. If `s3.endpoint` is given without an `http://` or `https://` prefix, http is used by default; if a prefix is provided, it takes precedence;
-2. Doris also supports creating a Storage Vault via `AWS Assume Role` (AWS S3 only); for configuration, see [AWS Integration](../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
+2. Doris also supports creating a Storage Vault via `AWS Assume Role` (AWS S3 only); for configuration, see [AWS Integration](../../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
### HDFS vault
@@ -164,7 +164,7 @@ PROPERTIES (
**Note:**
-Doris also supports creating a Storage Vault via `AWS Assume Role` (AWS S3 only); for configuration, see [AWS Integration](../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
+Doris also supports creating a Storage Vault via `AWS Assume Role` (AWS S3 only); for configuration, see [AWS Integration](../../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
### 7. Create a MinIO storage vault.
diff --git a/versioned_docs/version-3.x/ecosystem/doris-operator/doris-operator-overview.md b/versioned_docs/version-3.x/ecosystem/doris-operator/doris-operator-overview.md
index 040c0065158..db57c622cc1 100644
--- a/versioned_docs/version-3.x/ecosystem/doris-operator/doris-operator-overview.md
+++ b/versioned_docs/version-3.x/ecosystem/doris-operator/doris-operator-overview.md
@@ -49,7 +49,7 @@ Based on the deployment definition provided by Doris Operator, users can customi
- **Runtime debugging**: One of the biggest challenges for Trouble Shooting with containerized services is how to debug at runtime. While pursuing availability and ease of use, Doris Operator also provides more convenient conditions for problem location. In the basic image of Doris, a variety of tools for problem location are pre-set. When you need to view the status in real time, you can enter the container through the exec command provided by kubectl and use the built-in tools for troubleshooting.
- When the service cannot be started for unknown reasons, Doris Operator provides a Debug running mode. When a Pod is set to Debug startup mode, the container will automatically enter the running state. At this time, you can enter the container through the `exec` command, manually start the service and locate the problem. For details, please refer to [this document](../../install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation.md#How-to-enter-the-container-when-the-pod- [...]
+ When the service cannot be started for unknown reasons, Doris Operator provides a Debug running mode. When a Pod is set to Debug startup mode, the container will automatically enter the running state. At this time, you can enter the container through the `exec` command, manually start the service and locate the problem. For details, please refer to [this document](../../install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation.md)
## Compatibility
diff --git a/versioned_docs/version-3.x/ecosystem/hive-hll-udf.md b/versioned_docs/version-3.x/ecosystem/hive-hll-udf.md
index 3f4afb06466..b90e8f4e2dc 100644
--- a/versioned_docs/version-3.x/ecosystem/hive-hll-udf.md
+++ b/versioned_docs/version-3.x/ecosystem/hive-hll-udf.md
@@ -162,7 +162,7 @@ CREATE TABLE IF NOT EXISTS `hive_hll_table`(
-- then reuse the previous steps to insert data from a normal table into it using the to_hll function
```
-2. [Create a Doris catalog](../lakehouse/datalake-analytics/hive.md)
+2. [Create a Doris catalog](../lakehouse/catalogs/hive-catalog)
```sql
CREATE CATALOG hive PROPERTIES (
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]