This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 508f99b8dbb Update OSS_HDFS docs (#3108)
508f99b8dbb is described below
commit 508f99b8dbb8f1919f64cefc2d2e98bc04f2ded9
Author: Calvin Kirs <[email protected]>
AuthorDate: Thu Nov 20 22:25:32 2025 +0800
Update OSS_HDFS docs (#3108)
## Versions
- [x] dev
- [x] 4.x
- [x] 3.x
- [x] 2.1
## Languages
- [x] Chinese
- [ ] English
## Docs Checklist
- [ ] Checked by AI
- [ ] Test Cases Built
---------
Co-authored-by: Mingyu Chen (Rayner) <[email protected]>
---
docs/lakehouse/catalogs/paimon-catalog.mdx | 34 +++++++++++++++++++
docs/lakehouse/storages/aliyun-oss.md | 35 ++++++++++---------
.../current/lakehouse/catalogs/paimon-catalog.mdx | 33 ++++++++++++++++++
.../current/lakehouse/storages/aliyun-oss.md | 39 ++++++++++++----------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 ++++++++++++++++++-
.../version-2.1/lakehouse/storages/aliyun-oss.md | 39 ++++++++++++----------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 ++++++++++++++++++-
.../version-3.x/lakehouse/storages/aliyun-oss.md | 39 ++++++++++++----------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 ++++++++++++++++++-
.../version-4.x/lakehouse/storages/aliyun-oss.md | 39 ++++++++++++----------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 +++++++++++++++++++
.../version-2.1/lakehouse/storages/aliyun-oss.md | 35 ++++++++++---------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 +++++++++++++++++++
.../version-3.x/lakehouse/storages/aliyun-oss.md | 35 ++++++++++---------
.../lakehouse/catalogs/paimon-catalog.mdx | 34 +++++++++++++++++++
.../version-4.x/lakehouse/storages/aliyun-oss.md | 35 ++++++++++---------
16 files changed, 436 insertions(+), 131 deletions(-)
diff --git a/docs/lakehouse/catalogs/paimon-catalog.mdx b/docs/lakehouse/catalogs/paimon-catalog.mdx
index 5d1053f0fe8..43d8daae269 100644
--- a/docs/lakehouse/catalogs/paimon-catalog.mdx
+++ b/docs/lakehouse/catalogs/paimon-catalog.mdx
@@ -490,6 +490,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +503,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +601,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +614,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/docs/lakehouse/storages/aliyun-oss.md b/docs/lakehouse/storages/aliyun-oss.md
index e7c677a03e8..bac3dabff35 100644
--- a/docs/lakehouse/storages/aliyun-oss.md
+++ b/docs/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@ This document describes the parameters required to access Alibaba Cloud OSS, whi
- Export properties
- Outfile properties
-**Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.**
+## OSS
-## Parameter Overview
+Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.
+
+### Parameter Overview
| Property Name | Legacy Name | Description | Default Value |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- |
@@ -48,7 +50,7 @@ For versions before 3.1:
"s3.region" = "cn-beijing"
```
-## Usage Recommendations
+### Usage Recommendations
* It is recommended to use the `oss.` prefix for configuration parameters to ensure consistency and clarity with Alibaba Cloud OSS.
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
@@ -63,14 +65,15 @@ Accessing data stored on OSS-HDFS is slightly different from directly accessing
### Parameter Overview
-| Property Name | Legacy Name | Description | Default Value | Required |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- | -------- |
-| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
-| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
-| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
-| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| Property Name | Legacy Name | Description | Default Value | Required |
+| ------------------------------ |----------------| ----------------------------------------------------------- | ------------- | -------- |
+| oss.hdfs.endpoint | oss.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
+| oss.hdfs.region | oss.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| fs.oss-hdfs.support |oss.hdfs.enabled | Supported in version 3.1. Explicitly declares the enabling of OSS-HDFS functionality. Needs to be set to true | None | No |
> For versions before 3.1, please use legacy names.
@@ -99,6 +102,7 @@ If the configuration files contain the parameters mentioned above in this docume
### Example Configuration
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
@@ -108,8 +112,9 @@ If the configuration files contain the parameters mentioned above in this docume
For versions before 3.1:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.mdx b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.mdx
index 12947f61c3d..aa449910204 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.mdx
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/paimon-catalog.mdx
@@ -489,6 +489,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -500,6 +501,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -583,6 +599,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -594,6 +612,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
index 05c5f0e8aa7..69d0f92b4e2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@
- Export 属性
- Outfile 属性
-**Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。**
+## OSS
-## 参数总览
+Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。
+
+### 参数总览
| 属性名称 | 曾用名 | 描述 | 默认值 |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ |
@@ -30,7 +32,7 @@
> 3.1 版本之前,请使用曾用名。
-## 示例配置
+### 示例配置
```properties
"oss.access_key" = "your-access-key",
@@ -48,7 +50,7 @@
"s3.region" = "cn-beijing"
```
-## 使用建议
+### 使用建议
* 推荐使用 `oss.` 前缀配置参数,保证与阿里云 OSS 的一致性和清晰度。
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
@@ -63,14 +65,15 @@ OSS-HDFS 服务(JindoFS 服务)是一个阿里云云原生数据湖存储功
### 参数总览
-| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ | --- |
-| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
-| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
-| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
-| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
+|----------------------------------|---------------| ------------------------------------------------------------ | ------ | --- |
+| oss.hdfs.endpoint | oss.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | oss.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| fs.oss-hdfs.support |oss.hdfs.enabled | 3.1 版本支持。显示声明启用 OSS-HDFS 功能。需要设置为 true | 无 | 否 |
> 3.1 版本之前,请使用曾用名。
@@ -100,17 +103,19 @@ OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS
### 示例配置
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
"oss.hdfs.region" = "cn-hangzhou"
```
-3.1 之前的版:
+3.1 之前的版本:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
index 895734d9f21..aa449910204 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
@@ -188,7 +188,6 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
's3.access_key' = '<ak>',
's3.secret_key' = '<sk>'
);
- ```
使用 IAM Assumed Role 的方式获取 S3 访问凭证 (3.1.2+)
```sql
CREATE CATALOG paimon_hms_on_s3_iamrole PROPERTIES (
@@ -490,6 +489,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +501,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +599,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +612,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
index 05c5f0e8aa7..69d0f92b4e2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@
- Export 属性
- Outfile 属性
-**Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。**
+## OSS
-## 参数总览
+Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。
+
+### 参数总览
| 属性名称 | 曾用名 | 描述 | 默认值 |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ |
@@ -30,7 +32,7 @@
> 3.1 版本之前,请使用曾用名。
-## 示例配置
+### 示例配置
```properties
"oss.access_key" = "your-access-key",
@@ -48,7 +50,7 @@
"s3.region" = "cn-beijing"
```
-## 使用建议
+### 使用建议
* 推荐使用 `oss.` 前缀配置参数,保证与阿里云 OSS 的一致性和清晰度。
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
@@ -63,14 +65,15 @@ OSS-HDFS 服务(JindoFS 服务)是一个阿里云云原生数据湖存储功
### 参数总览
-| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ | --- |
-| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
-| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
-| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
-| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
+|----------------------------------|---------------| ------------------------------------------------------------ | ------ | --- |
+| oss.hdfs.endpoint | oss.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | oss.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| fs.oss-hdfs.support |oss.hdfs.enabled | 3.1 版本支持。显示声明启用 OSS-HDFS 功能。需要设置为 true | 无 | 否 |
> 3.1 版本之前,请使用曾用名。
@@ -100,17 +103,19 @@ OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS
### 示例配置
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
"oss.hdfs.region" = "cn-hangzhou"
```
-3.1 之前的版:
+3.1 之前的版本:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
index 895734d9f21..aa449910204 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
@@ -188,7 +188,6 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
's3.access_key' = '<ak>',
's3.secret_key' = '<sk>'
);
- ```
使用 IAM Assumed Role 的方式获取 S3 访问凭证 (3.1.2+)
```sql
CREATE CATALOG paimon_hms_on_s3_iamrole PROPERTIES (
@@ -490,6 +489,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +501,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +599,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +612,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/storages/aliyun-oss.md
index 05c5f0e8aa7..69d0f92b4e2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@
- Export 属性
- Outfile 属性
-**Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。**
+## OSS
-## 参数总览
+Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。
+
+### 参数总览
| 属性名称 | 曾用名 | 描述 | 默认值 |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ |
@@ -30,7 +32,7 @@
> 3.1 版本之前,请使用曾用名。
-## 示例配置
+### 示例配置
```properties
"oss.access_key" = "your-access-key",
@@ -48,7 +50,7 @@
"s3.region" = "cn-beijing"
```
-## 使用建议
+### 使用建议
* 推荐使用 `oss.` 前缀配置参数,保证与阿里云 OSS 的一致性和清晰度。
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
@@ -63,14 +65,15 @@ OSS-HDFS 服务(JindoFS 服务)是一个阿里云云原生数据湖存储功
### 参数总览
-| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ | --- |
-| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
-| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
-| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
-| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
+|----------------------------------|---------------| ------------------------------------------------------------ | ------ | --- |
+| oss.hdfs.endpoint | oss.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | oss.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| fs.oss-hdfs.support |oss.hdfs.enabled | 3.1 版本支持。显示声明启用 OSS-HDFS 功能。需要设置为 true | 无 | 否 |
> 3.1 版本之前,请使用曾用名。
@@ -100,17 +103,19 @@ OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS
### 示例配置
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
"oss.hdfs.region" = "cn-hangzhou"
```
-3.1 之前的版:
+3.1 之前的版本:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
index 895734d9f21..aa449910204 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
@@ -188,7 +188,6 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
's3.access_key' = '<ak>',
's3.secret_key' = '<sk>'
);
- ```
使用 IAM Assumed Role 的方式获取 S3 访问凭证 (3.1.2+)
```sql
CREATE CATALOG paimon_hms_on_s3_iamrole PROPERTIES (
@@ -490,6 +489,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +501,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +599,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ 使用 OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +612,21 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'oss.secret_key'='<sk>'
);
```
+
+ 使用 OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/storages/aliyun-oss.md
index 05c5f0e8aa7..69d0f92b4e2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@
- Export 属性
- Outfile 属性
-**Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。**
+## OSS
-## 参数总览
+Doris 使用 S3 Client,通过 S3 兼容协议访问阿里云 OSS。
+
+### 参数总览
| 属性名称 | 曾用名 | 描述 | 默认值 |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ |
@@ -30,7 +32,7 @@
> 3.1 版本之前,请使用曾用名。
-## 示例配置
+### 示例配置
```properties
"oss.access_key" = "your-access-key",
@@ -48,7 +50,7 @@
"s3.region" = "cn-beijing"
```
-## 使用建议
+### 使用建议
* 推荐使用 `oss.` 前缀配置参数,保证与阿里云 OSS 的一致性和清晰度。
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
@@ -63,14 +65,15 @@ OSS-HDFS 服务(JindoFS 服务)是一个阿里云云原生数据湖存储功
### 参数总览
-| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------ | --- |
-| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
-| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
-| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
-| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| 属性名称 | 曾用名 | 描述 | 默认值 |是否必须 |
+|----------------------------------|---------------| ------------------------------------------------------------ | ------ | --- |
+| oss.hdfs.endpoint | oss.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | oss.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改 fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE 节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+| fs.oss-hdfs.support |oss.hdfs.enabled | 3.1 版本支持。显示声明启用 OSS-HDFS 功能。需要设置为 true | 无 | 否 |
> 3.1 版本之前,请使用曾用名。
@@ -100,17 +103,19 @@ OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS
### 示例配置
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
"oss.hdfs.region" = "cn-hangzhou"
```
-3.1 之前的版:
+3.1 之前的版本:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
\ No newline at end of file
diff --git a/versioned_docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx b/versioned_docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
index 5d1053f0fe8..43d8daae269 100644
--- a/versioned_docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
+++ b/versioned_docs/version-2.1/lakehouse/catalogs/paimon-catalog.mdx
@@ -490,6 +490,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +503,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +601,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +614,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md b/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
index e7c677a03e8..bac3dabff35 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@ This document describes the parameters required to access Alibaba Cloud OSS, whi
- Export properties
- Outfile properties
-**Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.**
+## OSS
-## Parameter Overview
+Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.
+
+### Parameter Overview
| Property Name | Legacy Name | Description | Default Value |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- |
@@ -48,7 +50,7 @@ For versions before 3.1:
"s3.region" = "cn-beijing"
```
-## Usage Recommendations
+### Usage Recommendations
* It is recommended to use the `oss.` prefix for configuration parameters to ensure consistency and clarity with Alibaba Cloud OSS.
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
@@ -63,14 +65,15 @@ Accessing data stored on OSS-HDFS is slightly different from directly accessing
### Parameter Overview
-| Property Name | Legacy Name | Description | Default Value | Required |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- | -------- |
-| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
-| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
-| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
-| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| Property Name | Legacy Name | Description | Default Value | Required |
+| ------------------------------ |----------------| ----------------------------------------------------------- | ------------- | -------- |
+| oss.hdfs.endpoint | oss.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
+| oss.hdfs.region | oss.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| fs.oss-hdfs.support |oss.hdfs.enabled | Supported in version 3.1. Explicitly declares the enabling of OSS-HDFS functionality. Needs to be set to true | None | No |
> For versions before 3.1, please use legacy names.
@@ -99,6 +102,7 @@ If the configuration files contain the parameters mentioned above in this docume
### Example Configuration
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
@@ -108,8 +112,9 @@ If the configuration files contain the parameters mentioned above in this docume
For versions before 3.1:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
diff --git a/versioned_docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx b/versioned_docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
index 5d1053f0fe8..43d8daae269 100644
--- a/versioned_docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
+++ b/versioned_docs/version-3.x/lakehouse/catalogs/paimon-catalog.mdx
@@ -490,6 +490,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +503,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +601,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +614,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/versioned_docs/version-3.x/lakehouse/storages/aliyun-oss.md b/versioned_docs/version-3.x/lakehouse/storages/aliyun-oss.md
index e7c677a03e8..bac3dabff35 100644
--- a/versioned_docs/version-3.x/lakehouse/storages/aliyun-oss.md
+++ b/versioned_docs/version-3.x/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@ This document describes the parameters required to access Alibaba Cloud OSS, whi
- Export properties
- Outfile properties
-**Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.**
+## OSS
-## Parameter Overview
+Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.
+
+### Parameter Overview
| Property Name | Legacy Name | Description | Default Value |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- |
@@ -48,7 +50,7 @@ For versions before 3.1:
"s3.region" = "cn-beijing"
```
-## Usage Recommendations
+### Usage Recommendations
* It is recommended to use the `oss.` prefix for configuration parameters to ensure consistency and clarity with Alibaba Cloud OSS.
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
@@ -63,14 +65,15 @@ Accessing data stored on OSS-HDFS is slightly different from directly accessing
### Parameter Overview
-| Property Name | Legacy Name | Description | Default Value | Required |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- | -------- |
-| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
-| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
-| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
-| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| Property Name | Legacy Name | Description | Default Value | Required |
+| ------------------------------ |----------------| ----------------------------------------------------------- | ------------- | -------- |
+| oss.hdfs.endpoint | oss.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
+| oss.hdfs.region | oss.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| fs.oss-hdfs.support |oss.hdfs.enabled | Supported in version 3.1. Explicitly declares the enabling of OSS-HDFS functionality. Needs to be set to true | None | No |
> For versions before 3.1, please use legacy names.
@@ -99,6 +102,7 @@ If the configuration files contain the parameters mentioned above in this docume
### Example Configuration
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
@@ -108,8 +112,9 @@ If the configuration files contain the parameters mentioned above in this docume
For versions before 3.1:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
diff --git a/versioned_docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx b/versioned_docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
index 5d1053f0fe8..43d8daae269 100644
--- a/versioned_docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
+++ b/versioned_docs/version-4.x/lakehouse/catalogs/paimon-catalog.mdx
@@ -490,6 +490,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -501,6 +503,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'fs.oss-hdfs.support' = 'true',
+ 'oss.hdfs.access_key' = '<ak>',
+ 'oss.hdfs.secret_key' = '<sk>',
+ 'oss.hdfs.endpoint' = 'cn-beijing.oss-dls.aliyuncs.com',
+ 'oss.hdfs.region' = 'cn-beijing'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
@@ -584,6 +601,8 @@ The currently dependent Paimon version is 1.0.0.
```
</TabItem>
<TabItem value='OSS' label='OSS'>
+ Use OSS
+
```sql
CREATE CATALOG paimon_base_filesystem_paimon_oss PROPERTIES (
'type' = 'paimon',
@@ -595,6 +614,21 @@ The currently dependent Paimon version is 1.0.0.
'oss.secret_key'='<sk>'
);
```
+
+ Use OSS-HDFS
+
+ ```sql
+ CREATE CATALOG paimon_base_filesystem_paimon_oss_hdfs PROPERTIES (
+ 'type' = 'paimon',
+ 'paimon.catalog.type' = 'filesystem',
+ 'warehouse' = 'oss://bucket/regression/paimon1',
+ 'oss.hdfs.enabled' = 'true',
+ 'oss.access_key' = 'your-access-key',
+ 'oss.secret_key' = 'your-secret-key',
+ 'oss.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
+ 'oss.region' = 'cn-hangzhou'
+ );
+ ```
</TabItem>
<TabItem value='COS' label='COS'>
```sql
diff --git a/versioned_docs/version-4.x/lakehouse/storages/aliyun-oss.md b/versioned_docs/version-4.x/lakehouse/storages/aliyun-oss.md
index e7c677a03e8..bac3dabff35 100644
--- a/versioned_docs/version-4.x/lakehouse/storages/aliyun-oss.md
+++ b/versioned_docs/version-4.x/lakehouse/storages/aliyun-oss.md
@@ -13,9 +13,11 @@ This document describes the parameters required to access Alibaba Cloud OSS, whi
- Export properties
- Outfile properties
-**Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.**
+## OSS
-## Parameter Overview
+Doris uses S3 Client to access Alibaba Cloud OSS through S3-compatible protocol.
+
+### Parameter Overview
| Property Name | Legacy Name | Description | Default Value |
| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- |
@@ -48,7 +50,7 @@ For versions before 3.1:
"s3.region" = "cn-beijing"
```
-## Usage Recommendations
+### Usage Recommendations
* It is recommended to use the `oss.` prefix for configuration parameters to ensure consistency and clarity with Alibaba Cloud OSS.
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
@@ -63,14 +65,15 @@ Accessing data stored on OSS-HDFS is slightly different from directly accessing
### Parameter Overview
-| Property Name | Legacy Name | Description | Default Value | Required |
-| ------------------------------ | ---------------------------- | ------------------------------------------------------------ | ------------- | -------- |
-| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
-| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
-| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
-| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
-| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
-| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| Property Name | Legacy Name | Description | Default Value | Required |
+| ------------------------------ |----------------| ----------------------------------------------------------- | ------------- | -------- |
+| oss.hdfs.endpoint | oss.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | oss.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | oss.secret_key | OSS Secret Key, used together with Access Key | None | Yes |
+| oss.hdfs.region | oss.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported in version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported in version 3.1. Specifies the path containing OSS file system configuration. Requires relative path. Default directory is `/plugins/hadoop_conf/` under the (FE/BE) deployment directory (can be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE nodes need to configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+| fs.oss-hdfs.support |oss.hdfs.enabled | Supported in version 3.1. Explicitly declares the enabling of OSS-HDFS functionality. Needs to be set to true | None | No |
> For versions before 3.1, please use legacy names.
@@ -99,6 +102,7 @@ If the configuration files contain the parameters mentioned above in this docume
### Example Configuration
```properties
+"fs.oss-hdfs.support" = "true",
"oss.hdfs.access_key" = "your-access-key",
"oss.hdfs.secret_key" = "your-secret-key",
"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
@@ -108,8 +112,9 @@ If the configuration files contain the parameters mentioned above in this docume
For versions before 3.1:
```
-"s3.access_key" = "your-access-key",
-"s3.secret_key" = "your-secret-key",
-"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
-"s3.region" = "cn-hangzhou"
+"oss.hdfs.enabled" = "true",
+"oss.access_key" = "your-access-key",
+"oss.secret_key" = "your-secret-key",
+"oss.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.region" = "cn-hangzhou"
```
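In short, the documented change is one switch plus a prefix: on 3.1+ set `fs.oss-hdfs.support` (legacy name `oss.hdfs.enabled`) to `true` and move the credentials, endpoint, and region to the `oss.hdfs.*` properties instead of the plain `oss.*`/`s3.*` ones. A minimal sketch of a Paimon filesystem catalog on OSS-HDFS using the new-style names from the diff above; the catalog name, bucket, and region are placeholders, not values taken from this commit:

```sql
-- Hypothetical example; replace the bucket/region/credentials with your own.
CREATE CATALOG paimon_oss_hdfs_demo PROPERTIES (
    'type' = 'paimon',
    'paimon.catalog.type' = 'filesystem',
    -- warehouse path on a bucket with the OSS-HDFS (JindoFS) service enabled
    'warehouse' = 'oss://my-bucket/paimon_warehouse',
    -- explicitly enable OSS-HDFS access (legacy name: oss.hdfs.enabled)
    'fs.oss-hdfs.support' = 'true',
    'oss.hdfs.access_key' = '<ak>',
    'oss.hdfs.secret_key' = '<sk>',
    -- OSS-HDFS (oss-dls) endpoint and region of the bucket
    'oss.hdfs.endpoint' = 'cn-hangzhou.oss-dls.aliyuncs.com',
    'oss.hdfs.region' = 'cn-hangzhou'
);
```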
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]