This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new f4e808afe6c9 [docs](multi-catalog) Add hdfs transfer protection error FAQ. (#537)
f4e808afe6c9 is described below

commit f4e808afe6c9b482273cfe6250264730a0ea7517
Author: Qi Chen <kaka11.c...@gmail.com>
AuthorDate: Thu Apr 11 10:05:08 2024 +0800

    [docs](multi-catalog) Add hdfs transfer protection error FAQ. (#537)
---
 docs/lakehouse/faq.md                                    | 15 +++++++++++++++
 .../current/lakehouse/faq.md                             | 16 ++++++++++++++++
 .../version-1.2/lakehouse/multi-catalog/faq.md           | 16 ++++++++++++++++
 .../version-2.1/lakehouse/faq.md                         | 16 ++++++++++++++++
 .../version-1.2/lakehouse/multi-catalog/faq.md           | 15 +++++++++++++++
 versioned_docs/version-2.1/lakehouse/faq.md              | 15 +++++++++++++++
 6 files changed, 93 insertions(+)

diff --git a/docs/lakehouse/faq.md b/docs/lakehouse/faq.md
index 3b20fa878c20..855f0003e9a4 100644
--- a/docs/lakehouse/faq.md
+++ b/docs/lakehouse/faq.md
@@ -293,6 +293,21 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
     You need to check whether `HADOOP_CONF_DIR` is configured correctly, or unset this environment variable.
 
+4. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties, as in the sketch below.
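+
+    A minimal sketch of the second option (the catalog name and metastore URI below are illustrative assumptions, not part of this commit):
+
+    ```sql
+    -- Hypothetical Hive catalog; use the dfs.data.transfer.protection value
+    -- from your cluster's hdfs-site.xml (authentication, integrity, or privacy).
+    CREATE CATALOG hive_sasl_demo PROPERTIES (
+        'type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+        'dfs.data.transfer.protection' = 'integrity'
+    );
+    ```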
+
 ## DLF Catalog
 
 1. When using DLF Catalog, BE reports `Invalid address` when fetching JindoFS data; add the domain-name-to-IP mappings that appear in the log to `/etc/hosts`.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/faq.md
index 7c608eb61ae2..7e857b3cf90f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/faq.md
@@ -289,6 +289,22 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
     You need to check whether `HADOOP_CONF_DIR` is configured correctly, or unset this environment variable.
 
+4. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties.
+
 ## DLF Catalog 
 
 1. When using DLF Catalog, BE reports `Invalid address` when fetching JindoFS data; add the domain-name-to-IP mappings that appear in the log to `/etc/hosts`.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/lakehouse/multi-catalog/faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/lakehouse/multi-catalog/faq.md
index e45f37f05659..aabe59a50fee 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/lakehouse/multi-catalog/faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/lakehouse/multi-catalog/faq.md
@@ -185,3 +185,19 @@ under the License.
         'hive.version' = '1.x.x'
     );
     ```
+
+19. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/faq.md
index 7c608eb61ae2..7e857b3cf90f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/faq.md
@@ -289,6 +289,22 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
     You need to check whether `HADOOP_CONF_DIR` is configured correctly, or unset this environment variable.
 
+4. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties.
+
 ## DLF Catalog 
 
 1. When using DLF Catalog, BE reports `Invalid address` when fetching JindoFS data; add the domain-name-to-IP mappings that appear in the log to `/etc/hosts`.
diff --git a/versioned_docs/version-1.2/lakehouse/multi-catalog/faq.md b/versioned_docs/version-1.2/lakehouse/multi-catalog/faq.md
index e73a0a5bf407..bbaff1ca091b 100644
--- a/versioned_docs/version-1.2/lakehouse/multi-catalog/faq.md
+++ b/versioned_docs/version-1.2/lakehouse/multi-catalog/faq.md
@@ -189,3 +189,18 @@ under the License.
         'hive.version' = '2.x.x'
     );
     ```
+
+19. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties.
diff --git a/versioned_docs/version-2.1/lakehouse/faq.md b/versioned_docs/version-2.1/lakehouse/faq.md
index 3b20fa878c20..855f0003e9a4 100644
--- a/versioned_docs/version-2.1/lakehouse/faq.md
+++ b/versioned_docs/version-2.1/lakehouse/faq.md
@@ -293,6 +293,21 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
     You need to check whether `HADOOP_CONF_DIR` is configured correctly, or unset this environment variable.
 
+4. `BlockMissingException: Could not obtain block: BP-XXXXXXXXX No live nodes contain current block`
+
+    Possible troubleshooting steps include:
+    - Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
+    - Use `telnet` to check connectivity with the DataNode.
+    - Check the DataNode logs.
+
+    If you encounter the following error:
+    `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
+    it means that HDFS is configured for encrypted data transfer while the client is not, which causes the error.
+
+    You can use either of the following solutions:
+    - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
+    - Find the `dfs.data.transfer.protection` setting in hdfs-site.xml and set the same value in the catalog properties.
+
 ## DLF Catalog
 
 1. When using DLF Catalog, BE reports `Invalid address` when fetching JindoFS data; add the domain-name-to-IP mappings that appear in the log to `/etc/hosts`.


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
