This is an automated email from the ASF dual-hosted git repository.
dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new e0687a6c9c [fix](resource) username of hdfs is hadoop.username (#1133)
e0687a6c9c is described below
commit e0687a6c9ca224b546cea4ef98d3291ecae117f2
Author: Yongqiang YANG <[email protected]>
AuthorDate: Mon Sep 23 12:25:04 2024 +0800
[fix](resource) username of hdfs is hadoop.username (#1133)
# Versions
- [x] dev
- [x] 3.0
- [x] 2.1
- [x] 2.0
# Languages
- [x] Chinese
- [x] English
---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 6 +++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
.../Data-Definition-Statements/Create/CREATE-RESOURCE.md | 5 ++---
10 files changed, 21 insertions(+), 30 deletions(-)
diff --git a/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index 517db06b28..3158dd0ae1 100644
--- a/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,8 @@ illustrate:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
+ "root_path"="your_path",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +195,7 @@ illustrate:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: if Hadoop enables HA, set the fs nameservice. See hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index d842834559..27ae548285 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: name service name, consistent with hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: list of namenode IDs, consistent with hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: RPC address of each namenode, one per namenode, consistent with hdfs-site.xml
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index cf2312175d..300cfa5327 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ PROPERTIES ("key"="value", ...);
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ PROPERTIES ("key"="value", ...);
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: name service name, consistent with hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: list of namenode IDs, consistent with hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: RPC address of each namenode, one per namenode, consistent with hdfs-site.xml
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index 6db7db4c20..49baa5f507 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: name service name, consistent with hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: list of namenode IDs, consistent with hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: RPC address of each namenode, one per namenode, consistent with hdfs-site.xml
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index d842834559..27ae548285 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: name service name, consistent with hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: list of namenode IDs, consistent with hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: RPC address of each namenode, one per namenode, consistent with hdfs-site.xml
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index d842834559..27ae548285 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ When Spark is used for ETL, working_dir and broker must be specified. Details:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: name service name, consistent with hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: list of namenode IDs, consistent with hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: RPC address of each namenode, one per namenode, consistent with hdfs-site.xml
diff --git a/versioned_docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/versioned_docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index 9a12cadd8f..0e73c18d05 100644
--- a/versioned_docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/versioned_docs/version-1.2/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ illustrate:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ illustrate:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: if Hadoop enables HA, set the fs nameservice. See hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml
diff --git a/versioned_docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/versioned_docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index bebaedcd32..16b4e4d6fe 100644
--- a/versioned_docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/versioned_docs/version-2.0/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ illustrate:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ illustrate:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: if Hadoop enables HA, set the fs nameservice. See hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml
diff --git a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index 517db06b28..2f005f2221 100644
--- a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ illustrate:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ illustrate:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: if Hadoop enables HA, set the fs nameservice. See hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml
diff --git a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
index 517db06b28..2f005f2221 100644
--- a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
+++ b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md
@@ -183,8 +183,7 @@ illustrate:
```sql
CREATE RESOURCE hdfs_resource PROPERTIES (
"type"="hdfs",
- "username"="user",
- "password"="passwd",
+ "hadoop.username"="user",
"dfs.nameservices" = "my_ha",
"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
"dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
@@ -195,7 +194,7 @@ illustrate:
HDFS related parameters are as follows:
- fs.defaultFS: namenode address and port
- - username: hdfs username
+ - hadoop.username: hdfs username
- dfs.nameservices: if Hadoop enables HA, set the fs nameservice. See hdfs-site.xml
- dfs.ha.namenodes.[nameservice ID]: unique identifiers for each NameNode in the nameservice. See hdfs-site.xml
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the fully-qualified RPC address for each NameNode to listen on. See hdfs-site.xml
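For reference, the corrected examples above can be assembled into one full statement. This is a minimal sketch, not taken verbatim from the commit: the host names, ports, user name, and the fs.defaultFS / failover-provider values are placeholder assumptions that must be replaced with values from your own hdfs-site.xml.

```sql
-- Hypothetical HA HDFS resource using the corrected hadoop.username key.
-- All values below are placeholders; copy the dfs.* settings from hdfs-site.xml.
CREATE RESOURCE hdfs_resource PROPERTIES (
    "type" = "hdfs",
    "fs.defaultFS" = "hdfs://my_ha",
    "hadoop.username" = "user",
    "dfs.nameservices" = "my_ha",
    "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
    "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
    "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
    "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
);
```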
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]